  • KubeSphere
    2021-01-17 17:21:19

    "When microservices multiply, managing them becomes the problem. Think of a car's wheels: a single wheel is useless by itself; the wheels have to be assembled and driven, and once there are more and more of them, managing them matters just as much. At that point you need a management system. Kubernetes has become the standard for container platforms, so QingCloud embraces Kubernetes. Kubernetes is more than container scheduling and orchestration: it defines a set of standards and specifications, and its ecosystem is extremely strong. We regard Kubernetes as the kernel of the distributed operating system of the container era, and we must build the upper layers of functionality on top of it." Zhou Xiaosi, director of application and container platform R&D at QingCloud, has a clear read on how enterprises will deploy business and deliver applications in the cloud-native era.

    Zhou Xiaosi, director of application and container platform R&D at QingCloud

    Using cloud computing, AI, 5G, IoT, and other technologies to upgrade or rebuild enterprise processes and business models has become the new path by which technology vendors empower traditional industries; this is where the concept of the industrial internet comes from. If the digital economy is the accelerant of enterprise IT adoption, then for digital transformation to reach every part of the business it has to start from the IT foundation, and that change is not only conceptual but architectural.

    As early as November 2016, QingCloud proposed a single architecture supporting both virtual machines and container hosts. In 2017, QingCloud's PaaS services adopted container technology, improving performance by up to 500%, and in July of that year Kubernetes on QingCloud went live. Kubernetes' portability and fast development and delivery need no elaboration, but its limitations around deployment complexity, storage, and debugging have given plenty of enterprises headaches.

    "Kubernetes only takes care of the bottom layer and ignores the services above it, which is certainly not enough for enterprise customers. Enterprises want DevOps, microservice governance, and many other capabilities, none of which Kubernetes provides." Zhou Xiaosi knows the learning curve of Kubernetes, and its secondary development and management costs, first-hand.

    As more and more enterprises look to cloud-native environments when deploying their business, the old delivery models are shifting toward services and applications. Making product iteration faster, delivery cycles shorter, service response more timely, and requirements more individually met has to start from the underlying IT architecture.

    Why is that? It comes down to the difference between monoliths and microservices.

    In Zhou Xiaosi's view, containers are the core infrastructure of the cloud-native era. Application modernization moves from monolithic, 3-tier, and SOA architectures to microservices. The difference: when a monolith has a problem, the whole application must be repackaged and redelivered, whereas in a distributed microservice architecture each function is packaged independently and problems are fixed in isolation.

    Digital Transformation 2.0 Arrives

    On April 19, QingCloud officially launched the KubeSphere Container Platform Advanced Edition and announced that KubeSphere had joined the Cloud Native Computing Foundation (CNCF). "We went from mainframes to data centers to cloud computing, VM-based clouds and cloud platforms. Now we have moved on to the container platform (KubeSphere) as the infrastructure," Zhou said. KubeSphere's founding goal is to tame container complexity through simplicity, solve the varied challenges enterprises hit in microservice architecture and DevOps, and let them operate full-stack cloud capabilities with a light touch.

    The KubeSphere Container Platform launch event

    KubeSphere's code development began in April 2018, with the goal of making its architecture and design more advanced and more flexible. First, KubeSphere can be deployed on any Kubernetes cluster, including every distribution. Second, KubeSphere is configurable and pluggable, so customers can tailor it to their actual needs. If KubeSphere is positioned as a distributed operating system, Kubernetes is its kernel. The KubeSphere community edition went live in June 2018, version 2.0 was released in April 2019, and support for AI, serverless computing, and API gateways will be added gradually in later releases.

    The KubeSphere roadmap

    Zhou Xiaosi revealed that support for QKS (a public container cloud service) will follow, with pricing to be readjusted. "Kubernetes won the war because its ecosystem is so strong. KubeSphere is open source too. We have contributed a lot upstream and open-sourced projects of our own, covering a wide area: storage, networking, DevOps, load balancers. We are now important members of many open source communities."

    So how exactly does KubeSphere solve these problems?

    On storage, QingCloud offers QingStor NeonSAN, a distributed storage system naturally suited to microservices, with performance, stability, and security well above open source alternatives. On networking, QingCloud SDN has been proven in both public and private clouds and solves strong network isolation in multi-tenant scenarios, again more reliably than open source options. On usability, QingCloud provides polished visual tooling and a technology platform, plus microservice governance and the AppCenter application marketplace.

    Looking closer, KubeSphere ships a range of open source plugins validated by community developers and vendors, and supports multiple storage plugins and storage modes. QingCloud's own storage connects to KubeSphere through standard Kubernetes CSI plugins: one-click deployment with no extra configuration, and products including block storage and QingStor NeonSAN deliver lower latency, more elasticity, and higher performance.
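    To make the CSI wiring concrete: attaching a CSI-provisioned storage class in Kubernetes comes down to a small StorageClass object. The sketch below is illustrative only; the provisioner name disk.csi.qingcloud.com and the type parameter are assumptions about the QingCloud CSI driver, not details given in this article.

    # Hypothetical StorageClass backed by a CSI driver (names below are assumptions)
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: qingcloud-blockstorage
    provisioner: disk.csi.qingcloud.com   # assumed QingCloud CSI driver name
    parameters:
      type: "0"                           # assumed driver-specific volume type
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer

    A PVC that sets storageClassName: qingcloud-blockstorage is then provisioned dynamically, which is what "one-click, no extra configuration" amounts to at the API level.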

    QingCloud block storage performance in K8S container tests

    QingCloud QingStor NeonSAN performance in K8S container tests

    Yu Shuang, product manager for the KubeSphere container platform, said QingStor NeonSAN delivers millions of IOPS at sub-millisecond latency. NeonSAN (an enterprise-grade distributed SAN) is also live on the QingCloud public cloud, with many users already on it. In actual testing, enterprises using QingCloud block storage and QingStor NeonSAN inside containers see the same results as with virtualized storage, with no loss at the system level. Both storage products have also entered the CSI Driver catalog, earning recognition from the upstream community.

    Yu Shuang, product manager for the KubeSphere container platform

    Networking, next, is a field QingCloud has invested in heavily for years. It has migrated its SDN capabilities seamlessly into KubeSphere, offering customers both a layer-3 SDN hard multi-tenant isolation solution and a Region architecture for cross-cluster network access. Through KubeSphere, mainstream network plugins such as Calico and Flannel can also be attached.

    For enterprises that have already moved their business onto Kubernetes, KubeSphere provides a load balancer plugin, which has appeared before in the Kubernetes PaaS service and can likewise hook directly into QingCloud's SDN service. Bear in mind that companies chasing maximum performance, internet firms above all, deploy Kubernetes directly on physical machines rather than putting a virtualization layer in between.

    On this deployment pattern, Zhou explained to this reporter: running directly on physical machines in production is clearly the trend. VMs carry virtualization overhead and network overhead; on IaaS you run over SDN, which costs network performance, and bare metal eliminates that.

    "That brings up another topic: how should enterprises run K8S, on VMs or on physical machines? It comes down to one question: what does the customer want? In production, as we just discussed, performance matters, so physical machines are fine. For development and testing, an IaaS cloud platform is a great fit. IaaS gives you resource-level elasticity across compute, storage, and software-defined networking, so you get elasticity and agility on top of it. If you need three nodes, setting them up on physical machines is a hassle, but on VMs you can spin up three nodes immediately. For dev and test, where performance is not the requirement, VMs are the better choice," Zhou said.

    Whether on physical machines or VMs, QingCloud supports both well. Take bare metal: since a bare-metal platform has no cloud-platform SDN load balancer, QingCloud provides the open source plugin Porter (open-sourced on GitHub), which lets customers bring their own physical switches.
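    To picture what Porter does for a workload: on bare metal you mark a LoadBalancer-type Service so Porter announces its address. The sketch below follows the Porter project's published examples as best I recall them; the annotation keys may differ across versions and should be treated as assumptions.

    # Sketch: exposing a Service via Porter on bare metal (annotation keys assumed)
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      annotations:
        lb.kubesphere.io/v1alpha1: porter            # hand the Service to Porter (assumed key)
        protocol.porter.kubesphere.io/v1alpha1: bgp  # announce the VIP over BGP (assumed key)
    spec:
      type: LoadBalancer
      selector:
        app: demo                                    # hypothetical workload label
      ports:
      - port: 80
        targetPort: 8080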

    With solid infrastructure in place, the next step is expanding scenario features upward. DevOps is a good example: it lets enterprises iterate quickly and ship features fast. "Our DevOps offering provides a visual pipeline console based on Jenkins. Users build their automation pipelines by dragging and clicking, without understanding the complex server configuration behind it," Yu Shuang explained. "To publish a container image and deploy it into a K8S environment, you don't need to type complicated K8S commands either; through a single screen, you fill in the K8S address of the release target and the platform does it for you." (A minimal pipeline sketch follows the feature list below.)

    Built-in templates automate the end-to-end flow: pulling code, compiling, static code scanning (finding vulnerabilities in the code), building container images, pushing, and releasing

    Users can edit pipelines themselves, with serial and parallel tasks supported

    Code-to-container-image in one step
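    For reference, the kind of pipeline the console builds can also be written by hand as an ordinary declarative Jenkinsfile. The sketch below mirrors the stages listed above (pull, build, image, release); every command, URL, and name in it is illustrative, not KubeSphere's actual generated output.

    // Minimal declarative Jenkinsfile sketch; repo URL, image tag, and manifest path are hypothetical.
    pipeline {
      agent any
      stages {
        stage('Checkout')    { steps { git url: 'https://example.com/demo.git' } }      // pull code
        stage('Build')       { steps { sh 'mvn -B clean package' } }                    // compile + unit tests
        stage('Build Image') { steps { sh 'docker build -t demo:latest .' } }           // make the container image
        stage('Release')     { steps { sh 'kubectl apply -f deploy/k8s.yaml' } }        // deploy to the target K8S
      }
    }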

    Microservice governance is another feature users care about. KubeSphere's approach is to follow the community mainstream, which makes adoption easier for developers and lowers the barrier to entry: KubeSphere natively integrates with the Service Mesh framework Istio. Yu Shuang sees Istio as the future: as Function as a Service (FaaS) reshapes the old development model, Istio, free of any Java-stack lock-in, will exert ever greater influence.

    A Spring Cloud-compatible solution is provided

    With the microservice governance platform, QingCloud enables canary releases, blue-green deployments, circuit breaking, and distributed tracing. "In Istio these features take a lot of complex configuration and operations to achieve. We shield enterprises from all of that backend clutter; the customer just clicks a few times in the UI to get what they want. They don't need to know much about Istio; they don't even need to know the word Istio," Yu said. Keeping things simple for customers and keeping the hard parts for itself has long been QingCloud's philosophy.
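    Underneath, a canary release of this kind reduces to weighted routing rules in Istio. A minimal VirtualService sketch (service and subset names here are hypothetical) that sends 90% of traffic to the stable version and 10% to the canary:

    # Weight-based canary routing in Istio; host and subset names are illustrative.
    # The v1/v2 subsets would be defined in a DestinationRule (omitted here).
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: demo-canary
    spec:
      hosts:
      - demo-svc               # hypothetical service
      http:
      - route:
        - destination:
            host: demo-svc
            subset: v1         # stable version
          weight: 90
        - destination:
            host: demo-svc
            subset: v2         # canary version
          weight: 10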

    Indeed, in the future users may not need to know who provides the underlying infrastructure at all, only the application's results and experience. OpenPitrix, which QingCloud open-sourced on GitHub, enables cross-cloud, cross-platform application management and one-click deployment. With its backend migrated seamlessly into KubeSphere, it carries forward full-lifecycle, cross-platform application management, covering developers' application development, release, version control, operations, and monitoring.

    "Utmost simplicity, effortless mastery" was the theme of QingCloud's KubeSphere container platform launch event, and this minimalist, subtractive philosophy has guided KubeSphere since development began. QingCloud used a series of UI designs to give KubeSphere a "god's-eye view": during development, every feature's steps were compared against equivalent products and pared down one by one, so users achieve the same result with as few operations as possible. The result: for service release workflows, KubeSphere cuts the user's operational burden by 40%.

    The service release feature

    One-click release of business applications

    Clearly, KubeSphere's reworking of Kubernetes is comprehensive, which matches QingCloud's thesis that IT's future is cloud-based, containerized, and application-centric. As Zhou Xiaosi put it, for enterprises containers are not merely a technology but a remaking of company culture, demanding change across business thinking and processes: "How does an enterprise partition its business to achieve microservices? How do processes move from the old way to the DevOps way? None of this is easy. It takes money and effort to retrain staff, in mindset and in technology. Only then can digital transformation possibly succeed."

  • KubeSphere offline deployment files

    2022-04-10 22:54:51
    The file is a Baidu Cloud link to KubeSphere offline deployment files, saving you download time. kubesphere-images (image files), packages (dependency packages), kubekey-v2.0.0-linux-64bit.rpm
  • KubeSphere getting-started manual

    2021-12-24 12:04:22
    An entry-level manual written around kubesphere, suitable for newcomers who want to learn about k8s
  • kk file for installing kubesphere

    2022-04-22 17:54:25
    The kk file for installing kubesphere; hard to download from within China
  • KubeSphere quick deployment script

    2022-04-11 12:55:42
    A quick deployment script for kubesphere that saves you the time spent on OS configuration, time configuration, docker installation, image registry import, and kk installation. The script has been tested personally. You can prepare the script's dependency files yourself, or get them from here...
  • KubeSphere Console. The KubeSphere console is the web-based UI for a cluster. Getting started: before you begin, you need a KubeSphere cluster. Read the guide to install a cluster. Read on to start using KubeSphere. Feature map: Developer guide. Preparation: make sure the following software is installed and...
  • kubesphere-minimal.yaml

    2021-04-26 09:10:33
    The minimal installation file for kubesphere v2.1, matching kubernetes v1.17.3.
  • kubesphere-devops walkthrough video
  • This project is a sample project for kubesphere's built-in Spring Boot pipeline; see kubesphere for details. The project contains a Jenkinsfile in SCM: a Jenkinsfile, an online file ("Jenkinsfile in SCM" means the Jenkinsfile itself is kept under source code management (source control management)...
  • community: the KubeSphere community

    2021-04-07 13:59:12
    Welcome to the KubeSphere community! If you are looking for information on how to join us, you have come to the right place. Read on to learn how to get involved, contribute to KubeSphere code and documentation, propose new features and designs, and follow the latest KubeSphere community news...
  • Docker command to deploy Redis: docker run --name redis -p 6379:6379 -d --restart=always a4d3716dbb72 redis-server --appendonly yes --requirepass 123456 ... the configuration here is the same as what follows redis-server above ...
  • Installing KubeSphere on an existing Kubernetes cluster. English | Besides deployment on VMs and BM, KubeSphere also supports installation on existing Kubernetes clusters, both cloud-hosted and on-premises. Prerequisites: Kubernetes version 1.15.x, 1.16.x, 1.17.x, 1.18.x; ...
  • 2021.5.29: the cloud-native Meetup hosted by the KubeSphere community, KubeSphere and Friends 2021, Hangzhou stop, featuring a KubeSphere architect, a KubeEdge community Maintainer, a Nebula Graph graph-database engineer, a senior MySQL kernel developer, and the SegmentFault (思否) CTO...
  • KubeSphere Console. The KubeSphere console is the web-based UI for a KubeSphere cluster. Getting started: before you begin, you need a KubeSphere cluster. Read the installation guide to install a cluster. Read the KubeSphere console...
  • KubeSphere website. This project builds the new website for KubeSphere. Contributions of any kind are welcome! Thanks to these outstanding contributors, who have made our community and product grow quickly. Fork and clone the repo: first create your own fork of the repository, then clone your fork and enter...
  • Docs [deprecated]. This repository used to host the KubeSphere website, but as of v3.0 we have moved away from it. Prerequisites: first install the following three dependencies: Git, node.js, yarn (or npm, we recommend yarn). Check that they installed successfully: $ git --version git ...
  • KubeSphere retrospective and outlook, KubeSphere community annual report, KubeSphere Cloud cloud-native backup and disaster-recovery service, The Road of KubeSphere, The Map of Cloud Native
  • Offline installation packages for kubernetes and kubesphere; see the tutorial at https://blog.csdn.net/qq_38120778/article/details/124226357
  • kubesphere-api-doc

    2021-02-07 13:15:37
    kubesphere-api-doc
  • KubeSphere's vision is to build a cloud-native distributed operating system with Kubernetes as its kernel. Its architecture makes plug-and-play integration of third-party applications with cloud-native ecosystem components very easy, and it supports unified distribution and operations management of cloud-native applications across multiple clouds and clusters...

    Abstract

    KubeSphere's vision is to build a cloud-native distributed operating system with Kubernetes as its kernel. Its architecture makes it easy to integrate third-party applications with cloud-native ecosystem components in a plug-and-play fashion, and it supports unified distribution and operations management of cloud-native applications across multiple clouds and clusters.

    1. Preparing the installation environment

    • To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be 1.19.x, 1.20.x, 1.21.x, or 1.22.x (experimental support).
    • Make sure your machine meets the minimum hardware requirements: CPU > 1 core, memory > 2 GB.
    • Before installing, configure a default storage class in your Kubernetes cluster.
    • The CSR signing feature is activated in kube-apiserver when it is started with the --cluster-signing-cert-file and --cluster-signing-key-file parameters. See the RKE installation notes.
    • For preparation work before installing KubeSphere on Kubernetes, see the preparation documentation.

    2. Installing the Kubernetes environment

    You can refer to the following post to set up the Kubernetes environment.

    Kubernetes: Building a K8s Cluster in Practice (庄小焱, CSDN blog)

    3. Installing the NFS file system

    # On every machine
    
    yum install -y nfs-utils
    
    
    # On the master, expose the shared directory to all clients
    
    echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
    
    
    # Create the shared directory
    
    mkdir -p /nfs/data
    
    
    # On the master, enable and start the NFS services
    
    systemctl enable rpcbind
    systemctl enable nfs-server
    systemctl start rpcbind
    systemctl start nfs-server
    
    # Apply the export configuration
    
    exportfs -r
    
    
    # Check that the export took effect
    
    exportfs
    
    
    # Configure the NFS client (on the other nodes)
    
    showmount -e 172.31.0.4    # replace 172.31.0.4 with your NFS server's IP
    
    mkdir -p /nfs/data
    
    mount -t nfs 172.31.0.4:/nfs/data /nfs/data    # replace with your NFS server's IP
    # Configure the default StorageClass
    
    vim sc.yaml
    --------------------------------------------------------------------------
    ## Create a StorageClass
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
    parameters:
      archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
              # resources:
              #    limits:
              #      cpu: 10m
              #    requests:
              #      cpu: 10m
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: k8s-sigs.io/nfs-subdir-external-provisioner
                - name: NFS_SERVER
                  value: 172.31.0.4 ## set this to your own NFS server address
                - name: NFS_PATH  
                  value: /nfs/data  ## the directory shared by the NFS server
          volumes:
            - name: nfs-client-root
              nfs:
                server: 172.31.0.4
                path: /nfs/data
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
    # Confirm the configuration took effect
    
    kubectl get sc
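    Before moving on, you can also verify dynamic provisioning with a throwaway PVC and confirm that a PV is bound automatically. This check is an addition to the original steps; the file and claim names are arbitrary.

    # test-pvc.yaml: throwaway claim against the default StorageClass
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-test-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 200Mi

    kubectl apply -f test-pvc.yaml
    kubectl get pvc nfs-test-pvc    # STATUS should become Bound
    kubectl delete -f test-pvc.yaml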

    4. Installing the cluster metrics monitoring component

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --kubelet-insecure-tls
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            name: metrics-server
            ports:
            - containerPort: 4443
              name: https
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              periodSeconds: 10
            securityContext:
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
          - emptyDir: {}
            name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100
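    After applying the manifest above (saved here as metrics-server.yaml, an assumed filename) and waiting for the pod to become ready, a quick sanity check that the metrics pipeline works:

    kubectl apply -f metrics-server.yaml
    kubectl get pod -n kube-system | grep metrics-server
    kubectl top nodes    # prints per-node CPU/memory once metrics are being collected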

    5. Installing KubeSphere

    5.1 Download the core files

    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
    
    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml

    5.2 Modify cluster-configuration

    In cluster-configuration.yaml, specify the features you want to enable. See "Enable Pluggable Components" on the official site:
    
    https://kubesphere.com.cn/docs/pluggable-components/overview/
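    If you prefer to (or need to) enable a component after KubeSphere is already installed, the official docs also describe editing the ClusterConfiguration object in place instead of re-editing the file; roughly:

    # Enable a pluggable component post-install (per the KubeSphere docs)
    kubectl -n kubesphere-system edit clusterconfiguration ks-installer
    # set e.g. devops.enabled from false to true and save; ks-installer reconciles the change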

     kubesphere-installer.yaml

    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: clusterconfigurations.installer.kubesphere.io
    spec:
      group: installer.kubesphere.io
      versions:
      - name: v1alpha1
        served: true
        storage: true
      scope: Namespaced
      names:
        plural: clusterconfigurations
        singular: clusterconfiguration
        kind: ClusterConfiguration
        shortNames:
        - cc
    
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubesphere-system
    
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ks-installer
      namespace: kubesphere-system
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ks-installer
    rules:
    - apiGroups:
      - ""
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - apps
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - extensions
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - batch
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - rbac.authorization.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - apiregistration.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - apiextensions.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - tenant.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - certificates.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - devops.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - monitoring.coreos.com
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - logging.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - jaegertracing.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - storage.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - admissionregistration.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - policy
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - autoscaling
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - networking.istio.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - config.istio.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - iam.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - notification.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - auditing.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - events.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - core.kubefed.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - installer.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - storage.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - security.istio.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - monitoring.kiali.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - kiali.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - networking.k8s.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - kubeedge.kubesphere.io
      resources:
      - '*'
      verbs:
      - '*'
    - apiGroups:
      - types.kubefed.io
      resources:
      - '*'
      verbs:
      - '*'
    
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: ks-installer
    subjects:
    - kind: ServiceAccount
      name: ks-installer
      namespace: kubesphere-system
    roleRef:
      kind: ClusterRole
      name: ks-installer
      apiGroup: rbac.authorization.k8s.io
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        app: ks-install
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ks-install
      template:
        metadata:
          labels:
            app: ks-install
        spec:
          serviceAccountName: ks-installer
          containers:
          - name: installer
            image: kubesphere/ks-installer:v3.1.1
            imagePullPolicy: "Always"
            resources:
              limits:
                cpu: "1"
                memory: 1Gi
              requests:
                cpu: 20m
                memory: 100Mi
            volumeMounts:
            - mountPath: /etc/localtime
              name: host-time
          volumes:
          - hostPath:
              path: /etc/localtime
              type: ""
            name: host-time
    

    cluster-configuration.yaml

    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.1.1
    spec:
      persistence:
        storageClass: ""        # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
      authentication:
        jwtSecret: ""           # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
      local_registry: ""        # Add your private registry address if it is needed.
      etcd:
        monitoring: true       # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
        endpointIps: 172.31.0.4  # etcd cluster EndpointIps. It can be a bunch of IPs here.
        port: 2379              # etcd port.
        tlsEnable: true
      common:
        redis:
          enabled: true
        openldap:
          enabled: true
        minioVolumeSize: 20Gi # Minio PVC size.
        openldapVolumeSize: 2Gi   # openldap PVC size.
        redisVolumSize: 2Gi # Redis PVC size.
        monitoring:
          # type: external   # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
        es:   # Storage backend for logging, events and auditing.
          # elasticsearchMasterReplicas: 1   # The total number of master nodes. Even numbers are not allowed.
          # elasticsearchDataReplicas: 1     # The total number of data nodes.
          elasticsearchMasterVolumeSize: 4Gi   # The volume size of Elasticsearch master nodes.
          elasticsearchDataVolumeSize: 20Gi    # The volume size of Elasticsearch data nodes.
          logMaxAge: 7                     # Log retention time in built-in Elasticsearch. It is 7 days by default.
          elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
          basicAuth:
            enabled: false
            username: ""
            password: ""
          externalElasticsearchUrl: ""
          externalElasticsearchPort: ""
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
      alerting:                # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
        enabled: true         # Enable or disable the KubeSphere Alerting System.
        # thanosruler:
        #   replicas: 1
        #   resources: {}
      auditing:                # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants.
        enabled: true         # Enable or disable the KubeSphere Auditing Log System. 
      devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
        enabled: true             # Enable or disable the KubeSphere DevOps System.
        jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
        jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
        jenkinsVolumeSize: 8Gi     # Jenkins volume size.
        jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:                  # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
        enabled: true         # Enable or disable the KubeSphere Events System.
        ruler:
          enabled: true
          replicas: 2
      logging:                 # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
        enabled: true         # Enable or disable the KubeSphere Logging System.
        logsidecar:
          enabled: true
          replicas: 2
      metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
        enabled: false                   # Enable or disable metrics-server.
      monitoring:
        storageClass: ""                 # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
        # prometheusReplicas: 1          # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
        prometheusMemoryRequest: 400Mi   # Prometheus request memory.
        prometheusVolumeSize: 20Gi       # Prometheus PVC size.
        # alertmanagerReplicas: 1          # AlertManager Replicas.
      multicluster:
        clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the Host or Member Cluster.
      network:
        networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
          # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
          enabled: true # Enable or disable network policies.
        ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
          type: calico # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
        topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
          type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
      openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
        store:
          enabled: true # Enable or disable the KubeSphere App Store.
      servicemesh:         # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
        enabled: true     # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
      kubeedge:          # Add edge nodes to your cluster and deploy workloads on edge nodes.
        enabled: true   # Enable or disable KubeEdge.
        cloudCore:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          cloudhubPort: "10000"
          cloudhubQuicPort: "10001"
          cloudhubHttpsPort: "10002"
          cloudstreamPort: "10003"
          tunnelPort: "10004"
          cloudHub:
            advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
              - ""            # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
            nodeLimit: "100"
          service:
            cloudhubNodePort: "30000"
            cloudhubQuicNodePort: "30001"
            cloudhubHttpsNodePort: "30002"
            cloudstreamNodePort: "30003"
            tunnelNodePort: "30004"
        edgeWatcher:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          edgeWatcherAgent:
            nodeSelector: {"node-role.kubernetes.io/worker": ""}
            tolerations: []

    5.3 Run the installation

    kubectl apply -f kubesphere-installer.yaml
    
    kubectl apply -f cluster-configuration.yaml

    5.4 Watch the installation progress

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
    

    5.5 Fixing the "etcd monitoring certificate not found" problem

    kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
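    You can confirm the Secret exists before the etcd monitoring components retry:

    kubectl -n kubesphere-monitoring-system get secret kube-etcd-client-certs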

    6. Testing the installation and logging in

    Visit port 30880 on any node.
    
    Account: admin
    
    Password: P@88w0rd

     

  • Installing KubeSphere

    2022-03-09 19:46:24
    https://kubesphere.com.cn/ 1. Download the core files. If the download fails, copy the contents from the appendix. wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml wget ...

    https://kubesphere.com.cn/
    1. Download the core files
    If the download fails, copy the contents from the appendix.

    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
    
    wget https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
    

    Appendix:
    1. kubesphere-installer.yaml (identical to the kubesphere-installer.yaml reproduced in section 5.2 above)
    2. cluster-configuration.yaml (identical to the cluster-configuration.yaml reproduced in section 5.2 above)

    2. Modify cluster-configuration
    In cluster-configuration.yaml, specify the features you want to enable.
    See "Enable Pluggable Components" on the official site:
    https://kubesphere.com.cn/docs/pluggable-components/overview/
    3. Run the installation

    kubectl apply -f kubesphere-installer.yaml
    
    kubectl apply -f cluster-configuration.yaml
    
    
    

    4. Watch the installation progress

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
    
    

    Visit port 30880 on any node.
    Account: admin
    Password: P@88w0rd

    Fix the "etcd monitoring certificate not found" problem

    kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
    
  • The previous post covered installing single-node kubesphere; this one covers configuring and managing a cluster based on the single-node images. 2. Importing the images. Goal: we want cluster installation to still work when the deployment has no external network. On the three test machines, copy the files to kube_...

    Preface

    The previous post covered how to install the single-node version of kubesphere; this post covers how to configure and manage a cluster based on the single-node images.

    1. Exporting the images

    The following steps must be executed on the single-node machine from the previous post, otherwise they will have no effect.

    # Create the manifest file
    ./kk create manifest
    

    Review the manifest file.
    Note: be sure to uncomment the harbor section here.
    Write the configuration as completely as possible, so the packages needed later are bundled in.
    Of course, if mainfest.yaml gets too large, the packaged artifact will be too large as well.
    So we provide two demo files: mainfest_mini.yaml and mainfest_full.yaml.

    # Make a working copy of the file
    cp mainfest_mini.yaml mainfest.yaml
    

    Minimal version

    vim mainfest_mini.yaml
    # file begins >>>>>
    
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Manifest
    metadata:
      name: sample
    spec:
      arches:
      - amd64
      operatingSystems:
      - arch: amd64
        type: linux
        id: centos
        version: "7"
        osImage: CentOS Linux 7 (Core)
        repository:
          iso:
            localPath:
            url: "https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso"
      kubernetesDistributions:
      - type: kubernetes
        version: v1.21.5
      components:
        helm:
          version: v3.6.3
        cni:
          version: v0.9.1
        etcd:
          version: v3.4.13
        ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
        ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
        containerRuntimes:
        - type: docker
          version: 20.10.8
        crictl:
          version: v1.22.0
    
        docker-registry:
          version: "2"
        harbor:
          version: v2.4.1
        docker-compose:
          version: v2.2.2
      images:
      - docker.io/calico/cni:v3.20.0
      - docker.io/calico/kube-controllers:v3.20.0
      - docker.io/calico/node:v3.20.0
      - docker.io/calico/pod2daemon-flexvol:v3.20.0
      - docker.io/coredns/coredns:1.8.0
      - docker.io/csiplugin/snapshot-controller:v4.0.0
      - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
      - docker.io/kubesphere/ks-apiserver:v3.2.1
      - docker.io/kubesphere/ks-console:v3.2.1
      - docker.io/kubesphere/ks-controller-manager:v3.2.1
      - docker.io/kubesphere/ks-installer:v3.2.1
      - docker.io/kubesphere/kube-apiserver:v1.21.5
      - docker.io/kubesphere/kube-controller-manager:v1.21.5
      - docker.io/kubesphere/kube-proxy:v1.21.5
      - docker.io/kubesphere/kube-rbac-proxy:v0.8.0
      - docker.io/kubesphere/kube-scheduler:v1.21.5
      - docker.io/kubesphere/kube-state-metrics:v1.9.7
      - docker.io/kubesphere/kubectl:v1.21.0
      - docker.io/kubesphere/notification-manager-operator:v1.4.0
      - docker.io/kubesphere/notification-manager:v1.4.0
      - docker.io/kubesphere/notification-tenant-sidecar:v3.2.0
      - docker.io/kubesphere/pause:3.4.1
      - docker.io/kubesphere/prometheus-config-reloader:v0.43.2
      - docker.io/kubesphere/prometheus-operator:v0.43.2
      - docker.io/mirrorgooglecontainers/defaultbackend-amd64:1.4
      - docker.io/openebs/provisioner-localpv:2.10.1
      - docker.io/prom/alertmanager:v0.21.0
      - docker.io/prom/node-exporter:v0.18.1
      - docker.io/prom/prometheus:v2.26.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
      registry:
        auths: {}
    

    Full version

    mainfest_full.yaml
    
    # file begins >>>>>
    ---
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Manifest
    metadata:
      name: sample
    spec:
      arches:
      - amd64
      operatingSystems:
      - arch: amd64
        type: linux
        id: centos
        version: "7"
        repository:
          iso:
            localPath: ""
            url: "https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso"
      kubernetesDistributions:
      - type: kubernetes
        version: v1.21.5
      components:
        helm:
          version: v3.6.3
        cni:
          version: v0.9.1
        etcd:
          version: v3.4.13
        ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
        ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
        containerRuntimes:
        - type: docker
          version: 20.10.8
        crictl:
          version: v1.22.0
        ##
        # docker-registry:
        #   version: "2"
        harbor:
          version: v2.4.1
        docker-compose:
          version: v2.2.2
      images:
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.20.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.20.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.20.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.20.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.20.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
      - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
      - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
      - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
      - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.48.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
      - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
      - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
      - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
      - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.7.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher:v0.1.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/edge-watcher-agent:v0.1.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.2.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.2.0-2.249.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/jnlp-slave:3.27-1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.0-podman
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
      - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
      - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.26.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.43.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.43.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v1.9.7
      - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v0.18.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-prometheus-adapter-amd64:v0.6.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.21.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.18.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:7.4.3
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
      - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.7.0-1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.11.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
      - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.3
      - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.3.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
      - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
      - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
      - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
      - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
      - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
      - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
      - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
      - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
      - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
      - registry.cn-beijing.aliyuncs.com/kubesphereio/java:openjdk-8-jre-alpine
      - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
      - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
      - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
      - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
      registry:
        auths: {}
    <<<<< end of file
    
    #Export the images (for some reason I ended up running this twice).
    #The artifact size and export time depend on how much is listed in the manifest.
    ./kk artifact export -m manifest.yaml -o kubesphere.tar.gz
    #On success, the output ends like this:
    13:48:20 CST success: [LocalHost]
    13:48:20 CST [ChownOutputModule] Chown output file
    13:48:20 CST success: [LocalHost]
    13:48:20 CST [ChownWorkerModule] Chown ./kubekey dir
    13:48:20 CST success: [LocalHost]
    13:48:20 CST Pipeline[ArtifactExportPipeline] execute successful
    [root@localhost ~]# ll
    # kubesphere.tar.gz is the image artifact we just built
    
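    Before shipping the artifact into the offline network, a quick sanity check is worth doing. This is a minimal sketch I've added; it only assumes the .tar.gz name reflects a gzip-compressed tar:

    ls -lh kubesphere.tar.gz             # confirm the artifact exists and its size looks plausible
    tar -tzf kubesphere.tar.gz | head    # list the first few entries without unpacking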

    II. Importing the images

    Goal: we want the cluster installation to work even when the deployment environment has no internet access.

    Environment preparation

    Five test machines (three masters, two workers):

    role     ip             hostname
    master   192.168.3.65   kube_master01
    master   192.168.3.66   kube_master02
    master   192.168.3.67   kube_master03
    node     192.168.3.68   kube_node01
    node     192.168.3.69   kube_node02

    An etcd HA cluster is installed across the three masters, so any one of them can be used to manage the machines. In day-to-day use we can run kubectl on whichever master is convenient: every apiserver writes to the same highly available etcd store, and the scheduler then takes over from there.
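    For example, once the cluster is up, the same view should come back from any master, since each apiserver reads and writes the same etcd. A minimal check I've added, assuming kk has left an admin kubeconfig on each master as it normally does:

    ssh root@192.168.3.65 kubectl get nodes
    ssh root@192.168.3.67 kubectl get nodes   # identical output: both hit the same HA etcd-backed control plane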

    Copy the files to a master node

    scp kk root@192.168.3.65:/root
    scp kubesphere.tar.gz root@192.168.3.65:/root
    
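    Since the artifact can run to many gigabytes, it is worth confirming the copy arrived intact. This checksum step is my addition, not part of the original procedure:

    md5sum kubesphere.tar.gz                               # on the build machine
    ssh root@192.168.3.65 md5sum /root/kubesphere.tar.gz   # should print the same hash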

    Run the steps

    ssh root@192.168.3.65
    ##Create the config file
    ./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.21.5 -f config-sample.yaml
    ##Edit the config to set up the registry; pay particular attention to the hosts and registry sections
    cp config-sample.yaml  config.yaml
    cat config.yaml
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: kube_master01, address: 192.168.3.65, internalAddress: 192.168.3.65, user: root, password: "123456"}
      - {name: kube_master02, address: 192.168.3.66, internalAddress: 192.168.3.66, user: root, password: "123456"}
      - {name: kube_master03, address: 192.168.3.67, internalAddress: 192.168.3.67, user: root, password: "123456"}
      - {name: kube_node01, address: 192.168.3.68, internalAddress: 192.168.3.68, user: root, password: "123456"}
      - {name: kube_node02, address: 192.168.3.69, internalAddress: 192.168.3.69, user: root, password: "123456"}
      roleGroups:
        etcd:
        - kube_master01
        - kube_master02
        - kube_master03
        control-plane:
        - kube_master01
        - kube_master02
        - kube_master03
        worker:
        - kube_node01
        - kube_node02
        registry:
        - kube_master02
      controlPlaneEndpoint:
        ## Internal loadbalancer for apiservers
        internalLoadbalancer: haproxy
        domain: lb.kubesphere.local
        address: ""
        port: 6443
      kubernetes:
        version: v1.21.5
        clusterName: cluster.local
      network:
        plugin: calico
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
        ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
        multusCNI:
          enabled: false
      registry:
        type: harbor
        auths:
          "dockerhub.kubekey.local":
            username: admin
            password: Harbor12345
        plainHTTP: false
        privateRegistry: "dockerhub.kubekey.local"
        namespaceOverride: "kubesphereio"
        registryMirrors: []
        insecureRegistries: []
      addons: []
    
    
    
    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.2.1
    spec:
      persistence:
        storageClass: ""
      authentication:
        jwtSecret: ""
      local_registry: ""
      namespace_override: ""
      # dev_tag: ""
      etcd:
        monitoring: false
        endpointIps: localhost
        port: 2379
        tlsEnable: true
      common:
        core:
          console:
            enableMultiLogin: true
            port: 30880
            type: NodePort
        # apiserver:
        #  resources: {}
        # controllerManager:
        #  resources: {}
        redis:
          enabled: false
          volumeSize: 2Gi
        openldap:
          enabled: false
          volumeSize: 2Gi
        minio:
          volumeSize: 20Gi
        monitoring:
          # type: external
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
          GPUMonitoring:
            enabled: false
        gpu:
          kinds:
          - resourceName: "nvidia.com/gpu"
            resourceType: "GPU"
            default: true
        es:
          # master:
          #   volumeSize: 4Gi
          #   replicas: 1
          #   resources: {}
          # data:
          #   volumeSize: 20Gi
          #   replicas: 1
          #   resources: {}
          logMaxAge: 7
          elkPrefix: logstash
          basicAuth:
            enabled: false
            username: ""
            password: ""
          externalElasticsearchHost: ""
          externalElasticsearchPort: ""
      alerting:
        enabled: false
        # thanosruler:
        #   replicas: 1
        #   resources: {}
      auditing:
        enabled: false
        # operator:
        #   resources: {}
        # webhook:
        #   resources: {}
      devops:
        enabled: false
        jenkinsMemoryLim: 2Gi
        jenkinsMemoryReq: 1500Mi
        jenkinsVolumeSize: 8Gi
        jenkinsJavaOpts_Xms: 512m
        jenkinsJavaOpts_Xmx: 512m
        jenkinsJavaOpts_MaxRAM: 2g
      events:
        enabled: false
        # operator:
        #   resources: {}
        # exporter:
        #   resources: {}
        # ruler:
        #   enabled: true
        #   replicas: 2
        #   resources: {}
      logging:
        enabled: false
        containerruntime: docker
        logsidecar:
          enabled: true
          replicas: 2
          # resources: {}
      metrics_server:
        enabled: false
      monitoring:
        storageClass: ""
        # kube_rbac_proxy:
        #   resources: {}
        # kube_state_metrics:
        #   resources: {}
        # prometheus:
        #   replicas: 1
        #   volumeSize: 20Gi
        #   resources: {}
        #   operator:
        #     resources: {}
        #   adapter:
        #     resources: {}
        # node_exporter:
        #   resources: {}
        # alertmanager:
        #   replicas: 1
        #   resources: {}
        # notification_manager:
        #   resources: {}
        #   operator:
        #     resources: {}
        #   proxy:
        #     resources: {}
        gpu:
          nvidia_dcgm_exporter:
            enabled: false
            # resources: {}
      multicluster:
        clusterRole: none
      network:
        networkpolicy:
          enabled: false
        ippool:
          type: none
        topology:
          type: none
      openpitrix:
        store:
          enabled: false
      servicemesh:
        enabled: false
      kubeedge:
        enabled: false
        cloudCore:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          cloudhubPort: "10000"
          cloudhubQuicPort: "10001"
          cloudhubHttpsPort: "10002"
          cloudstreamPort: "10003"
          tunnelPort: "10004"
          cloudHub:
            advertiseAddress:
              - ""
            nodeLimit: "100"
          service:
            cloudhubNodePort: "30000"
            cloudhubQuicNodePort: "30001"
            cloudhubHttpsNodePort: "30002"
            cloudstreamNodePort: "30003"
            tunnelNodePort: "30004"
        edgeWatcher:
          nodeSelector: {"node-role.kubernetes.io/worker": ""}
          tolerations: []
          edgeWatcherAgent:
            nodeSelector: {"node-role.kubernetes.io/worker": ""}
            tolerations: []
    <<< end of file
    # The registry must be set up properly here, otherwise later updates will be very painful.
    # Install the registry:
    ./kk init registry -f config.yaml -a kubesphere.tar.gz
    .....
    22:26:09 CST skipped: [kube_master02]
    22:26:09 CST [InstallRegistryModule] Enable docker
    22:26:11 CST skipped: [kube_master02]
    22:26:11 CST [InstallRegistryModule] Install docker compose
    22:26:15 CST success: [kube_master02]
    22:26:15 CST [InstallRegistryModule] Sync harbor package
    22:27:44 CST success: [kube_master02]
    22:27:44 CST [InstallRegistryModule] Generate harbor config
    22:27:47 CST success: [kube_master02]
    22:27:47 CST [InstallRegistryModule] start harbor
    
    Local image registry created successfully. Address: dockerhub.kubekey.local
    
    22:28:24 CST success: [kube_master02]
    22:28:24 CST Pipeline[InitRegistryPipeline] execute successful
    

    Looks like the installation succeeded...

    ssh root@192.168.3.66
    [root@kube_master02 ~]# docker ps
    CONTAINER ID   IMAGE                                  COMMAND                  CREATED         STATUS                   PORTS                                                                                                                       NAMES
    0b6262ef1994   goharbor/nginx-photon:v2.4.1           "nginx -g 'daemon of…"   3 minutes ago   Up 3 minutes (healthy)   0.0.0.0:4443->4443/tcp, :::4443->4443/tcp, 0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp   nginx
    7577ff03905d   goharbor/harbor-jobservice:v2.4.1      "/harbor/entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-jobservice
    3ba916375569   goharbor/notary-server-photon:v2.4.1   "/bin/sh -c 'migrate…"   3 minutes ago   Up 3 minutes                                                                                                                                         notary-server
    6c6ab9420de0   goharbor/harbor-core:v2.4.1            "/harbor/entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-core
    4e66ca5bff32   goharbor/trivy-adapter-photon:v2.4.1   "/home/scanner/entry…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               trivy-adapter
    554c004cf1b2   goharbor/notary-signer-photon:v2.4.1   "/bin/sh -c 'migrate…"   3 minutes ago   Up 3 minutes                                                                                                                                         notary-signer
    1368d5c294c3   goharbor/redis-photon:v2.4.1           "redis-server /etc/r…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               redis
    955b1e91535f   goharbor/harbor-registryctl:v2.4.1     "/home/harbor/start.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               registryctl
    ae71922e7b43   goharbor/chartmuseum-photon:v2.4.1     "./docker-entrypoint…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               chartmuseum
    636a15e66450   goharbor/harbor-db:v2.4.1              "/docker-entrypoint.…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-db
    f041fefe6684   goharbor/harbor-portal:v2.4.1          "nginx -g 'daemon of…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               harbor-portal
    f030bb92b6ba   goharbor/registry-photon:v2.4.1        "/home/harbor/entryp…"   3 minutes ago   Up 3 minutes (healthy)                                                                                                                               registry
    fc4fb2a3474f   goharbor/harbor-log:v2.4.1             "/bin/sh -c /usr/loc…"   3 minutes ago   Up 3 minutes (healthy)   127.0.0.1:1514->10514/tcp                                                                                                   harbor-log
    
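    Before creating projects, it is worth probing that Harbor answers over HTTPS. This check is my addition: it assumes kk has already mapped dockerhub.kubekey.local in /etc/hosts (otherwise point the name at 192.168.3.66 yourself), and it uses Harbor 2.x's health endpoint, with -k for the self-signed certificate:

    curl -k https://dockerhub.kubekey.local/api/v2.0/health   # expect a JSON body reporting "healthy" components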

    Create the Harbor projects

    vim /create_project_harbor.sh
    #!/usr/bin/env bash
       
    # Copyright 2018 The KubeSphere Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
       
    url="https://dockerhub.kubekey.local"  #修改url的值为https://dockerhub.kubekey.local
    user="admin"
    passwd="Harbor12345"
       
    harbor_projects=(library
        kubesphereio
        kubesphere
        calico
        coredns
        openebs
        csiplugin
        minio
        mirrorgooglecontainers
        osixia
        prom
        thanosio
        jimmidyson
        grafana
        elastic
        istio
        jaegertracing
        jenkins
        weaveworks
        openpitrix
        joosthofman
        nginxdemos
        fluent
        kubeedge
    )
       
    for project in "${harbor_projects[@]}"; do
        echo "creating $project"
        curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k # append -k at the end of the curl command (self-signed certificate)
    done
    <<< end of file
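    The original post never shows the script being run; presumably it is executed on the registry node like any other shell script (my assumption, not shown in the source):

    chmod +x /create_project_harbor.sh
    bash /create_project_harbor.sh   # prints "creating <project>" for each entry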
    #Now edit the cluster configuration file again:
    vim config.yaml
      ...
      registry:
        type: harbor
        auths:
          "dockerhub.kubekey.local":
            username: admin
            password: Harbor12345
        plainHTTP: false
        privateRegistry: "dockerhub.kubekey.local"
        namespaceOverride: "kubesphereio"
        registryMirrors: []
        insecureRegistries: []
      addons: []
    

    Install the KubeSphere cluster

    ./kk create cluster -f config.yaml -a kubesphere.tar.gz --with-packages
    22:32:58 CST [ConfirmModule] Display confirmation form
    +---------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
    | name          | sudo | curl | openssl | ebtables | socat | ipset | conntrack | chrony | docker  | nfs client | ceph client | glusterfs client | time         |
    +---------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
    | kube_master01 | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:57 |
    | kube_master02 | y    | y    | y       | y        |       | y     |           | y      | 20.10.8 | y          |             | y                | PDT 07:32:57 |
    | kube_master03 | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:57 |
    | kube_node01   | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:58 |
    | kube_node02   | y    | y    | y       | y        |       | y     |           | y      |         | y          |             | y                | PDT 07:32:57 |
    +---------------+------+------+---------+----------+-------+-------+-----------+--------+---------+------------+-------------+------------------+--------------+
    
    .....
    # While this runs you may well find the disk filling up....
    failure: repodata/repomd.xml from base-local: [Errno 256] No more mirrors to try.
    file:///tmp/kubekey/iso/repodata/repomd.xml: [Errno -1] Error importing repomd.xml for base-local: Damaged repomd.xml file: Process exited with status 1
    failed: [kube_node1] [AddLocalRepository] exec failed after 1 retires: update local repository failed: Failed to exec command: sudo -E /bin/bash -c "yum clean all && yum makecache"
    Loaded plugins: fastestmirror, langpacks
    Cleaning repos: base-local
    
    #Never mind, let's retry while ignoring the errors
    ./kk create cluster -f config.yaml -a kubesphere.tar.gz --with-packages --ignore-err
    
    .....still fails
    failure: repodata/repomd.xml from base-local: [Errno 256] No more mirrors to try.
    file:///tmp/kubekey/iso/repodata/repomd.xml: [Errno -1] Error importing repomd.xml for base-local: Damaged repomd.xml file: Process exited with status 1
    
    Or the following error occurs:
    
    error: Pipeline[CreateClusterPipeline] execute failed: Module[CopyImagesToRegistryModule] exec failed:
    failed: [LocalHost] [CopyImagesToRegistry] exec failed after 1 retires: read index.json failed: open /root/kubekey/images/index.json: no such file or directory
    
    I tried every approach I could think of to get this to run, but simply could not make it work...
    

    I spent the better part of a day digging into offline deployment, and this is how it ended up...


    KubeSphere offline packaging problems (a roundup of installation issues)

    1. Downloading the files depends on an already existing cluster

    [root@kube_master ~]# ./kk create manifest
    /root/manifest-sample.yaml already exists. Are you sure you want to overwrite this file? [yes/no]: yes
    error: get kubernetes client failed: open /root/.kube/config: no such file or directory
    

    If the machine has never had Kubernetes or KubeSphere installed on it, create manifest simply refuses to run; I never understood this design choice.
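    The error shows kk looking for a kubeconfig at the default /root/.kube/config, so one workaround is to borrow a kubeconfig from any reachable cluster before generating the manifest. This is a sketch based only on that error path; <existing-master> is a placeholder for a machine that already runs Kubernetes:

    mkdir -p /root/.kube
    scp root@<existing-master>:/etc/kubernetes/admin.conf /root/.kube/config
    ./kk create manifest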

    2. Downloads frequently hang and get killed; only stopping the Docker service gets past it.

    downloading amd64 harbor v2.4.1 ...
    Killed
    
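    A sketch of the workaround the author describes (observed behavior, not a documented fix): stop the Docker service, retry the export, then restore the service.

    systemctl stop docker
    ./kk artifact export -m manifest.yaml -o kubesphere.tar.gz
    systemctl start docker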

    3. The manifest.yaml download config ships with no adequate explanation of how to use it, so downloads take an extremely long time and many of the files turn out to be useless. I recommend the manifest-sample.yaml file instead: simple and convenient (see the export command below).
    Downloading all of this peaked at close to 50 GB of disk usage on my machine.
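    In practice the recommendation amounts to exporting from the sample manifest instead of a hand-written one:

    ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz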

    4. The ISO file cannot be downloaded; you have to point the installer at a locally downloaded copy.

    The error:
    failed: [LocalHost] [DownloadISOFile] exec failed after 1 retires: Failed to download centos-7-amd64.iso iso file: curl -L -o /root/kubekey/centos-7-amd64.iso https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso error: exit status 35
    

    Modify manifest.yaml as follows. The ISO has to be downloaded from GitHub manually beforehand (a fetch command follows the snippet):

      operatingSystems:
      - arch: amd64
        type: linux
        id: centos
        version: "7"
        osImage: CentOS Linux 7 (Core)
        repository:
          iso:
            localPath: /root/centos-7-amd64-rpms.iso
            url: ""
    
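    The ISO can be fetched ahead of time on a connected machine, using the same URL the failing command printed:

    curl -L -o /root/centos-7-amd64-rpms.iso https://github.com/kubesphere/kubekey/releases/download/v2.0.0/centos-7-amd64-rpms.iso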

    5. The actual artifact contents bore no relation to my config file: even when I packaged with manifest_mini.yaml it still bundled in unrelated packages.
    As a result the whole artifact came to roughly 15 GB, ha.

    Summary

    KubeSphere is quite pleasant to use on a single machine, and the internet-connected installation path seems to be the well-supported one. For offline installation in to-B scenarios, though, I have basically given up.

    Tencent Cloud previously published an offline installation tutorial:
    https://cloud.tencent.com/developer/article/1802614
    Its offline bundle is around 10 GB with fixed versions, which is not necessarily a good fit for a production environment. I think the KubeSphere project should give serious thought to how to support offline deployment properly.
