  • Suitable for beginners' deployment experiments
  • Kubernetes installation notes

    2017-08-21 11:31:50
    These Kubernetes installation notes are based on kubernetes 1.7.2; the latest release at the time of writing was 1.7.4 (August 21, 2017).
  • Installing and using kubernetes + Ceph as persistent backend storage
  • kubernetes安装过程.md

    2020-04-15 07:19:31
    The most practical way to deploy k8s in practice! Deployments based on this method almost always succeed. I deployed it once myself and wrote this document while learning.
  • kubernetes installation and configuration in detail

    2018-09-21 16:54:01
    kubernetes installation and configuration in detail: docker configuration, route configuration, k8s monitoring examples, nginx examples, and more.
  • kubernetes installation script

    2018-02-28 16:16:33
    One-click kubernetes installation on CentOS 7 based on kubeadm, for learning and experiments. The servers need internet access.
  • Kubernetes installation

    2018-07-18 17:42:35
    Kubernetes quick installation and deployment guide. A Kubernetes master contains three core components: API Server, Scheduler, and Controller Manager, each corresponding to a service to start: kube-apiserver, kube-scheduler, and kube-controller-manager.
  • kubernetes installation and cluster setup

    2020-06-10 16:24:08


    k8s installation

    kubernetes official site

    Installation environment

    CentOS 8 is not supported

    IP                    Services                          Hardware requirements
    192.168.2.10 (node1)  Docker (installed), kubernetes    4 GB RAM, dual-core CPU
    192.168.2.20 (node2)  Docker (installed), kubernetes    4 GB RAM, dual-core CPU
    192.168.2.30 (node3)  Docker (installed), kubernetes    4 GB RAM, dual-core CPU

    Installation steps

    Environment preparation
    To make editing and transferring files easier, change each hostname to the node name listed above and write them all into the hosts file.

    SSH setup

    node1

    [root@localhost ~]# vim /etc/hosts
    192.168.2.10 node1
    192.168.2.20 node2
    192.168.2.30 node3
    

    Passwordless SSH

    This makes transferring files easier.

    node1

    [root@localhost ~]# ssh-keygen
    [root@localhost ~]# ssh-copy-id -i root@node2
    [root@localhost ~]# ssh-copy-id -i root@node3
    
    

    Transfer the hosts file

    [root@localhost ~]# scp /etc/hosts root@node2:/etc
    [root@localhost ~]# scp /etc/hosts root@node3:/etc
    
    

    Change the hostnames

    node1

    [root@localhost ~]# hostname node1
    [root@localhost ~]# bash
    [root@node1 ~]#
    

    node2

    [root@localhost ~]# hostname node2
    [root@localhost ~]# bash
    [root@node2 ~]#
    

    node3

    [root@localhost ~]# hostname node3
    [root@localhost ~]# bash
    [root@node3 ~]#
    

    Disable the firewall and SELinux

    node1/2/3

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    vim /etc/selinux/config 
    # change this line to
    SELINUX=disabled
    

    kubernetes installation environment requirements

    Official documentation

    Firewall ports
    If you do not disable the firewall, allow the ports the official documentation lists for each role (the original shows them as screenshots of the Master-node and Node-node port tables); a sketch of opening them follows below.
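
    The port-table screenshots are not reproduced here. As a sketch, assuming firewalld is kept running and using the port list published in the kubeadm documentation for this release:

    # Master node
    firewall-cmd --permanent --add-port=6443/tcp         # Kubernetes API server
    firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd server client API
    firewall-cmd --permanent --add-port=10250-10252/tcp  # kubelet, kube-scheduler, kube-controller-manager
    # Worker nodes
    firewall-cmd --permanent --add-port=10250/tcp        # kubelet API
    firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort services
    firewall-cmd --reload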

    Verify that every node's MAC address and product_uuid are unique

    kubernetes uses these two values to identify the nodes in the cluster. The product_uuid checks for the three nodes follow; a MAC check is sketched right below.
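
    A minimal sketch of the MAC check, run on each node and compared by eye (standard iproute2 plus awk):

    ip link show | awk '/link\/ether/ {print $2}'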

    node1

    [root@node1 ~]# cat /sys/class/dmi/id/product_uuid
    19D84D56-03C0-B76A-96EA-37D54065C278
    
    

    node2

    [root@node2 ~]# cat /sys/class/dmi/id/product_uuid
    D8304D56-8AB4-7444-D531-C67AF37581C7
    

    node3

    [root@node3 ~]# cat /sys/class/dmi/id/product_uuid
    69E84D56-2D4B-3E4E-9DE8-4D7B44052C25
    

    Disable the swap partition

    Do the same on all three machines; do not scp this file, because each machine's fstab differs.

    swapoff -a
    vim /etc/fstab 
    # comment out the line whose type is swap
    #/dev/mapper/centos-swap swap
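
    A non-interactive alternative (a one-liner sketch using GNU sed; it prefixes '#' to any line containing a swap entry):

    sed -ri '/\sswap\s/s/^/#/' /etc/fstab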
    

    Check that swap is now off

    [root@node1 ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:           2793        1024         206          14        1562        1334
    Swap:             0           0           0
    
    

    Install the third-party EPEL repository (it resolves many dependencies)

    node1/2/3

    yum install epel-release -y
    

    node1

    [root@node1 ~]# vim /etc/sysctl.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness = 0   # do not fall back to swap under memory pressure
    

    If those two bridge parameters report "No such file or directory", load these two modules first (repeat the commands if necessary):

    modprobe  ip_vs_rr
    modprobe br_netfilter
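
    Modules loaded with modprobe do not survive a reboot; a sketch that persists them via systemd's modules-load.d:

    cat > /etc/modules-load.d/k8s.conf << EOF
    ip_vs_rr
    br_netfilter
    EOF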
    

    Apply the parameters

    [root@node1 ~]# sysctl -p
    net.ipv4.ip_forward = 1
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    
    

    Copy the file to the other nodes

    [root@node1 ~]# scp /etc/sysctl.conf root@node2:/etc
    [root@node1 ~]# scp /etc/sysctl.conf root@node3:/etc
    

    node2

    [root@node2 ~]# modprobe  ip_vs_rr
    [root@node2 ~]# modprobe br_netfilter
    [root@node2 ~]# sysctl -p
    
    

    node3

    [root@node3 ~]# modprobe  ip_vs_rr
    [root@node3 ~]# modprobe br_netfilter
    [root@node3 ~]# sysctl -p
    
    

    kubernetes installation
    Here we install from the Aliyun mirror.
    kubernetes Aliyun mirror site
    node1

    Open the page above and use its CentOS yum repository.

    [root@node1 ~]# vim /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    

    Copy the yum repo to every host

    [root@node1 ~]# scp /etc/yum.repos.d/kubernetes.repo root@node2:/etc/yum.repos.d/
    [root@node1 ~]# scp /etc/yum.repos.d/kubernetes.repo root@node3:/etc/yum.repos.d/
    

    node1/2/3

    yum install -y kubelet kubeadm kubectl
    
    • kubeadm: the command that bootstraps the cluster.
    • kubelet: the component that runs on every machine in the cluster and does things like starting Pods and containers.
    • kubectl: the command-line tool for talking to your cluster.
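
    yum installs the latest packages by default; if the repo has moved past the version you intend to pass to kubeadm init, pin it explicitly (a sketch; the version is assumed to match the v1.18.3 used below):

    yum install -y kubelet-1.18.3 kubeadm-1.18.3 kubectl-1.18.3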

    Start the services on all three machines

    systemctl enable kubelet.service && systemctl start kubelet.service
    systemctl start docker.service
    

    Enable tab completion for the kubernetes commands

    [root@node1 ~]# source <(kubectl completion bash)
    [root@node1 ~]# source <(kubeadm completion bash)
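
    These source commands only affect the current shell; a sketch that makes completion persistent by appending to ~/.bashrc:

    echo 'source <(kubectl completion bash)' >> ~/.bashrc
    echo 'source <(kubeadm completion bash)' >> ~/.bashrc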
    

    Check the version

    [root@node1 ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:49:29Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
    
    

    Create the kubernetes cluster

    node1

    Running this command takes a while, since the images have to be downloaded.

    kubeadm init --apiserver-advertise-address 192.168.2.10 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.3 --pod-network-cidr=10.244.0.0/16
    
    Parameter meanings

    --apiserver-advertise-address: the address the API server advertises and listens on
    --image-repository: use the Aliyun image registry; the default is the Google registry
    --kubernetes-version: the kubernetes version to deploy
    --pod-network-cidr=10.244.0.0/16: the fixed address range for the flannel network
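
    To surface registry or version problems before init, the images can be pre-pulled (a standard kubeadm subcommand, using the same flags as above):

    kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.3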

    On success it returns output like the following:
    # if operating as a regular user, run these three commands
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # deploy the cluster network
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    # other kubernetes hosts join the cluster with the following command
    
    kubeadm join 192.168.2.10:6443 --token 1dnjl6.51yjfkjm2ys5p6gx \
        --discovery-token-ca-cert-hash sha256:e5b6bfc12b3b0e7101ffb6433ae0c0be7a7a24aaf322112b59e1b577c0f9256d 
    
    

    If you are the root user, run the following first

    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    The join token is valid for 24 hours after creation; kubeadm lets us manage our tokens, and a lost or expired token will cause join errors.
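
    If the join command is lost or the token has expired, a new one can be printed with (a standard kubeadm command):

    kubeadm token create --print-join-command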

    List the tokens on this machine

    [root@node1 ~]# kubeadm token list
    TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    1dnjl6.51yjfkjm2ys5p6gx   17h         2020-06-11T09:43:35+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    
    

    Deploy the cluster network

    Open https://kubernetes.io/docs/concepts/cluster-administration/addons/ in a browser
    and click flannel.

    Go into the Documentation directory.

    Find the kube-flannel.yml file.

    Copy its contents into a kube-flannel.yml file on the VM.

    [root@node1 ~]# vi kube-flannel.yml
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    
    Note the image referenced in the manifest; it has to be downloaded with docker.

    Pulling it on every host is fine (and advisable):

    [root@node1 ~]# docker pull quay.io/coreos/flannel:v0.12.0-amd64
    

    Deploy the cluster network

    [root@node1 ~]# kubectl apply -f kube-flannel.yml
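
    To confirm the flannel DaemonSet pods came up (label taken from the manifest above):

    kubectl -n kube-system get pods -l app=flannel -o wide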
    

    node2/3 join the cluster

    kubeadm join 192.168.2.10:6443 --token 1dnjl6.51yjfkjm2ys5p6gx \
        --discovery-token-ca-cert-hash sha256:e5b6bfc12b3b0e7101ffb6433ae0c0be7a7a24aaf322112b59e1b577c0f9256d 
    

    On completion the following is printed

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
     
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    node1

    Run kubectl get nodes on node1 to view the cluster nodes. Nodes whose STATUS is NotReady are not ready yet; wait a while and check again.

    [root@node1 ~]# kubectl get nodes
    NAME      STATUS   ROLES    AGE     VERSION
    server1   Ready    master   6h35m   v1.18.3
    server2   Ready    <none>   6h27m   v1.18.3
    server3   Ready    <none>   6h27m   v1.18.3
    
    

    If a node stays NotReady even after a long wait, remove the default cni network-plugin setting; change it on every node in the cluster.

    vim /var/lib/kubelet/kubeadm-flags.env
    # delete --network-plugin=cni
    

    Restart kubelet

    systemctl daemon-reload
    systemctl restart kubelet
    

    Only now does kubelet start successfully.
    Check the cluster state again.

    Run kubectl get pod --all-namespaces to view the pods in every namespace. If any are in PodInitializing, wait a while longer; it is a network issue and the plugin images have not finished downloading. Once everything is Running, you are done.

    [root@node1 ~]# kubectl get pod --all-namespaces
    NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
    kube-system   coredns-7ff77c879f-dvh7g            1/1     Running   0          6h38m
    kube-system   coredns-7ff77c879f-gk9zd            1/1     Running   0          6h38m
    kube-system   etcd-server1                        1/1     Running   0          6h38m
    kube-system   kube-apiserver-server1              1/1     Running   0          6h38m
    kube-system   kube-controller-manager-server1     1/1     Running   0          6h38m
    kube-system   kube-flannel-ds-amd64-tdcnh         1/1     Running   0          6h30m
    kube-system   kube-flannel-ds-amd64-thsxp         1/1     Running   2          6h30m
    kube-system   kube-flannel-ds-amd64-twqjf         1/1     Running   2          6h30m
    kube-system   kube-proxy-4drcp                    1/1     Running   0          6h30m
    kube-system   kube-proxy-fts9q                    1/1     Running   0          6h38m
    kube-system   kube-proxy-jnb7c                    1/1     Running   0          6h30m
    kube-system   kube-scheduler-server1              1/1     Running   0          6h38m
    
    

    The cluster setup succeeded.

  • kubernetes install Prometheus + Grafana

    kubernetes install Prometheus + Grafana

    Official website

    https://prometheus.io/

    GitHub

    https://github.com/coreos/kube-prometheus

    Component descriptions

    MetricServer: an aggregator of kubernetes cluster resource usage; it collects data for use inside the kubernetes cluster, e.g. by kubectl, hpa, the scheduler, etc.

    PrometheusOperator: a system monitoring and alerting toolbox, used to store monitoring data.

    NodeExporter: provides the key metric status data of each node.

    KubeStateMetrics: collects data on resource objects in the kubernetes cluster and defines alerting rules.

    Prometheus: collects data from the apiserver, scheduler, controller-manager, and kubelet components in pull mode, transmitted over the HTTP protocol.

    Grafana: a platform for visual data statistics and monitoring.

    Install

    Configure docker for an environment that can reach Google; docker will fetch some images from external registries.

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo touch /etc/systemd/system/docker.service.d/proxy.conf
    [root@k8s-master-node1 ~]# cat /etc/systemd/system/docker.service.d/proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://192.168.1.6:7890/" 
    Environment="HTTPS_PROXY=http://192.168.1.6:7890/" 
    Environment="NO_PROXY=localhost,127.0.0.1,.example.com"

    Changing the dockerd proxy is a little special: it actually changes the systemd configuration, so systemd must be reloaded and dockerd restarted for it to take effect.

    sudo systemctl daemon-reload
    sudo systemctl restart docker


    Download

    [root@k8s-master-node1 ~]# git clone https://github.com/coreos/kube-prometheus.git
    Cloning into 'kube-prometheus'...
    remote: Enumerating objects: 13409, done.
    remote: Counting objects: 100% (1908/1908), done.
    remote: Compressing objects: 100% (801/801), done.
    remote: Total 13409 (delta 1184), reused 1526 (delta 947), pack-reused 11501
    Receiving objects: 100% (13409/13409), 6.65 MiB | 5.21 MiB/s, done.
    Resolving deltas: 100% (8313/8313), done.
    [root@k8s-master-node1 ~]# 
    [root@k8s-master-node1 ~]# cd kube-prometheus/manifests
    [root@k8s-master-node1 ~/kube-prometheus/manifests]#

    Modify the grafana-service.yaml file to expose grafana as a NodePort service:

    [root@k8s-master-node1 ~/kube-prometheus/manifests]# cat grafana-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: grafana
        app.kubernetes.io/name: grafana
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 8.1.3
      name: grafana
      namespace: monitoring
    spec:
      type: NodePort
      ports:
      - name: http
        port: 3000
        targetPort: http
        nodePort: 31100
      selector:
        app.kubernetes.io/component: grafana
        app.kubernetes.io/name: grafana
        app.kubernetes.io/part-of: kube-prometheus

    Modify prometheus-service.yaml to NodePort as well:

    [root@k8s-master-node1 ~/kube-prometheus/manifests]# cat prometheus-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: prometheus
        app.kubernetes.io/name: prometheus
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 2.30.0
        prometheus: k8s
      name: prometheus-k8s
      namespace: monitoring
    spec:
      type: NodePort
      ports:
      - name: web
        port: 9090
        targetPort: web
        nodePort: 31200
      - name: reloader-web
        port: 8080
        targetPort: reloader-web
        nodePort: 31300
      selector:
        app: prometheus
        app.kubernetes.io/component: prometheus
        app.kubernetes.io/name: prometheus
        app.kubernetes.io/part-of: kube-prometheus
        prometheus: k8s
      sessionAffinity: ClientIP

    Modify alertmanager-service.yaml to NodePort:

    [root@k8s-master-node1 ~/kube-prometheus/manifests]# cat alertmanager-service.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        alertmanager: main
        app.kubernetes.io/component: alert-router
        app.kubernetes.io/name: alertmanager
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 0.23.0
      name: alertmanager-main
      namespace: monitoring
    spec:
      type: NodePort
      ports:
      - name: web
        port: 9093
        targetPort: web
        nodePort: 31400
      - name: reloader-web
        port: 8080
        targetPort: reloader-web
        nodePort: 31500
      selector:
        alertmanager: main
        app: alertmanager
        app.kubernetes.io/component: alert-router
        app.kubernetes.io/name: alertmanager
        app.kubernetes.io/part-of: kube-prometheus
      sessionAffinity: ClientIP
    [root@k8s-master-node1 ~/kube-prometheus/manifests]#

    Create the namespace and CRDs

    [root@k8s-master-node1 ~/kube-prometheus]# kubectl create -f /root/kube-prometheus/manifests/setup
    namespace/monitoring created
    customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
    customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
    clusterrole.rbac.authorization.k8s.io/prometheus-operator created
    clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
    deployment.apps/prometheus-operator created
    service/prometheus-operator created
    serviceaccount/prometheus-operator created

    Wait for those resources to become available, then install

    [root@k8s-master-node1 ~/kube-prometheus]# kubectl create -f /root/kube-prometheus/manifests/
    ...output omitted...
    [root@k8s-master-node1 ~/kube-prometheus]#
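
    A quick check that everything came up, and a way to read back the assigned NodePorts (namespace as created by the manifests above):

    kubectl -n monitoring get pods
    kubectl -n monitoring get svc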

    Visit Prometheus

    http://192.168.1.10:31200/targets


    Visit Grafana

    http://192.168.1.10:31100/


    Visit the alerting platform AlertManager

    http://192.168.1.10:31400/#/status


  • Google Kubernetes installer for Ubuntu. Supported platforms: Ubuntu. Attributes: key ['kubernetes']['container_runtime'], type: string, description: engine type, default: docker; key ['kubernetes']['roles']['master'], type: string, master server...
  • Installing redmine via kubernetes. For details see the blog post: https://mp.csdn.net/postedit/82082134. The document is in Excel format and includes screenshots of most of the steps, which makes it easy to learn from. Main contents: obtain the redmine yaml from github...
  • Installing openldap plus phpldapadmin via kubernetes. See the blog post: https://blog.csdn.net/engchina/article/details/82079340. Main contents: obtain the openldap yaml files from github; start the deployment and service; check...
  • kubernetes installation and deployment manual

    2021-11-01 20:05:12
    kubernetes installation and deployment manual
  • Ubuntu 18.04 Kubernetes installation 01

    2020-04-02 18:02:39

    Part One: common environment configuration (to avoid configuring machines one by one)

    Clone a new VM from the existing one, name it Kubernetes, boot it, and configure it as follows.
    1. Disable the swap space
    swapoff -a
    2. Keep swap from being enabled at boot
    vi /etc/fstab
    Comment out the line starting with swap.
    3. Disable the firewall
    ufw disable
    4. Configure DNS
    vi /etc/systemd/resolved.conf
    Uncomment the DNS line and add the DNS server: 114.114.114.114
    5. Install Docker
    Update the package sources
    sudo apt-get update
    Install the required dependencies
    sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
    Install the GPG key
    curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    Add the package source
    sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
    Update the package sources again
    sudo apt-get -y update
    Install Docker CE
    sudo apt-get -y install docker-ce

    6. Configure a Docker registry mirror
    Write the mirror configuration into /etc/docker/daemon.json (the original shows the file's contents only as a screenshot; a sketch follows below).
    Restart docker
    systemctl restart docker
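
    A minimal sketch of a typical daemon.json, assuming an Aliyun-style accelerator; the URL is a placeholder to replace with your own mirror address:

    sudo tee /etc/docker/daemon.json << 'EOF'
    {
      "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
    }
    EOF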

    7. Install the essential Kubernetes tools kubeadm, kubelet, kubectl
    Install the system tools
    apt-get update && apt-get install -y apt-transport-https
    Install the GPG key
    curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
    Add the package source; note: our release codename is bionic, but Aliyun does not support it yet, so we use 16.04's xenial
    cat << EOF >/etc/apt/sources.list.d/kubernetes.list
    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    EOF
    Install (a specific version can be pinned; see the sketch below)
    apt-get update && apt-get install -y kubelet kubeadm kubectl
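
    A sketch of pinning a version and holding it against upgrades (the 1.18.3-00 version string is an assumption; list the available ones with apt-cache madison kubelet):

    apt-get install -y kubelet=1.18.3-00 kubeadm=1.18.3-00 kubectl=1.18.3-00
    apt-mark hold kubelet kubeadm kubectl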

    8. Synchronize the time
    Set the timezone
    dpkg-reconfigure tzdata

    Choose Asia, then Shanghai.

    Time synchronization
    Install ntpdate
    apt-get install ntpdate
    Synchronize the system time against a network time source (cn.pool.ntp.org is a public NTP server located in China)
    ntpdate cn.pool.ntp.org
    Write the system time to the hardware clock
    hwclock --systohc

    Confirm the time
    date
    The output looks like the following (check it against your system time)
    Sun Jun 2 22:02:35 CST 2019

    9. Edit cloud.cfg
    Note: my machine did not have /etc/cloud/cloud.cfg. Running cloud-init --version suggested installing cloud-init;
    after running apt install cloud-init, the path /etc/cloud/cloud.cfg exists.

    Its main purpose here is to keep the hostname from being reset after a reboot.
    vi /etc/cloud/cloud.cfg
    The setting defaults to false; change it to true:
    preserve_hostname: true

    Then reboot
    reboot


    Part Two: Master and Node configuration

    Clone three VMs from the Kubernetes base image, named
    kubernetes-master, kubernetes-node-01, and kubernetes-node-02,
    and configure each according to the table below.

    1. Configure the IP
    # Guides online edit the /etc/netplan/50-cloud-init.yaml configuration file;
    on my machine it is /etc/netplan/01-network-manager-all.yaml. Change it as follows:

    network:
      ethernets:
          ens33:
             addresses: [192.168.160.110/24]
             gateway4: 192.168.160.2
             nameservers:
                 addresses: [192.168.160.2]
      version: 2
    


    2. Configure the hostname

    Change the hostname

    hostnamectl set-hostname kubernetes-master

    Configure hosts

    cat >> /etc/hosts << EOF
    192.168.160.110 kubernetes-master
    EOF
    Then reboot.
    node1 and node2 are configured the same way; remember to change the IP and hostname accordingly.

  • kubernetes installation related packages

    2021-04-05 17:52:39
    kubernetes installation related packages
  • Kubernetes installation and deployment walkthrough

    2020-05-16 11:38:30

    Introduction to Kubernetes

    Kubernetes is the most popular open-source container management platform today: it is the open-source version of Google's famous Borg. Google released Kubernetes in 2014.

    The name Kubernetes comes from Greek, meaning helmsman; K8S is an abbreviation, since there are exactly 8 letters between the first and last letters. Built on container technology, Kubernetes makes it easy to deploy, scale out, scale in, self-heal, service-discover, load-balance, log, and monitor cluster applications, greatly reducing day-to-day operations work.

    Everything in Kubernetes can be done through the Kubernetes API, which manipulates the objects in Kubernetes: Pod, Service, Volume, Namespace, and so on.

    Kubernetes architecture

    Pre-installation requirements

    1. One or more machines running CentOS 7 (x86_64)
    2. Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
    3. Full network connectivity between all machines in the cluster
    4. Internet access, needed to pull images
    5. Swap disabled

    1. Prepare the environment

      Run on master, node1, and node2.

    Disable the firewall
    $ systemctl stop firewalld
    $ systemctl disable firewalld
    
    Disable swap
    $ swapoff -a        # temporary
    $ vim /etc/fstab    # permanent
    
    Add the hostname-to-IP mappings (to /etc/hosts)
    192.168.31.140 k8s-master 
    192.168.31.141 k8s-node1
    192.168.31.142 k8s-node2
    
    Pass bridged IPv4 traffic to the iptables chains:
    $ cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
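
    The new file is not applied automatically; load it with the standard sysctl flag:
    $ sysctl --system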

    2. Install Docker

     Run on master, node1, and node2.

    $ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    $ yum -y install docker-ce-18.06.1.ce-3.el7
    $ systemctl enable docker && systemctl start docker
    $ docker --version

    3. Add the Aliyun YUM repository

    Run on master, node1, and node2.

    $ cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    4. Install kubeadm, kubelet, and kubectl

     Run on master, node1, and node2.

    $ yum -y install kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
    $ systemctl enable kubelet.service

    Deploy the kubernetes Master

    1. Run on 192.168.31.140 (Master). The default image registry cannot be reached from inside China, so specify the Aliyun registry instead.

    $ kubeadm init \
    --apiserver-advertise-address=192.168.31.140 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.15.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

    2. Download the official kube-flannel.yml. The site currently cannot be reached from inside China; work around this by finding its real IP address and appending it to the end of the hosts file.

    $ echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts
    $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
    

    3. Pull the flannel:v0.11.0-amd64 image on master, node1, and node2

    $ docker pull lizhenliang/flannel:v0.11.0-amd64
    

    Sometimes the network is too slow to download it; if another machine already has the image, you can copy it from one machine to another.

    # list all the images
    $ docker images
    
    # save the image to a tarball
    $ docker save -o v0.11.0-flannel.tar quay.io/coreos/flannel
    
    # load the image on the other machine
    docker load < v0.11.0-flannel.tar
    
    # install flannel
    $ kubectl apply -f kube-flannel.yml

    4. Join the nodes to the master: copy the command printed during initialization and run it on node1 and node2

    $ kubeadm join 192.168.31.140:6443 --token 78yk4x.yk07bitlne6z08uh --discovery-token-ca-cert-hash sha256:f3d660ab28b0278eb3679b7fb63fedd673d37a6e43a060b18080fc80c7a0ab22

    5. Once the images have been pulled, a success message is printed (shown as a screenshot in the original).

    6. Running the following shows the nodes are Ready

    $ kubectl get nodes

    Deploy kubernetes-dashboard

    1. Download the UI image

    $ docker pull  lizhenliang/kubernetes-dashboard-amd64:v1.10.1

    2. Edit the kubernetes-dashboard.yaml configuration

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    # Configuration to deploy release version of the Dashboard UI compatible with
    # Kubernetes 1.8.
    #
    # Example usage: kubectl create -f <this_file>
    
    # ------------------- Dashboard Secret ------------------- #
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kube-system
    type: Opaque
    
    ---
    # ------------------- Dashboard Service Account ------------------- #
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    
    ---
    # ------------------- Dashboard Role & Role Binding ------------------- #
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kubernetes-dashboard-minimal
      namespace: kube-system
    rules:
      # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["create"]
      # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["create"]
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
    - apiGroups: [""]
      resources: ["secrets"]
      resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
      verbs: ["get", "update", "delete"]
      # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["kubernetes-dashboard-settings"]
      verbs: ["get", "update"]
      # Allow Dashboard to get metrics from heapster.
    - apiGroups: [""]
      resources: ["services"]
      resourceNames: ["heapster"]
      verbs: ["proxy"]
    - apiGroups: [""]
      resources: ["services/proxy"]
      resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
      verbs: ["get"]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubernetes-dashboard-minimal
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard-minimal
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kube-system
    
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    ---
    # ------------------- Dashboard Deployment ------------------- #
    
    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
          - name: kubernetes-dashboard
            image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
            ports:
            - containerPort: 9090
              protocol: TCP
            args:
              #- --auto-generate-certificates
              # Uncomment the following line to manually specify Kubernetes API server Host
              # If not specified, Dashboard will attempt to auto discover the API server and connect
              # to it. Uncomment only if the default does not work.
              #- --apiserver-host=http://10.0.1.168:8080
            volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
            livenessProbe:
              httpGet:
                scheme: HTTP
                path: /
                port: 9090
              initialDelaySeconds: 30
              timeoutSeconds: 30
          volumes:
          - name: kubernetes-dashboard-certs
            secret:
              secretName: kubernetes-dashboard-certs
          - name: tmp-volume
            emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
    
    ---
    # ------------------- Dashboard Service ------------------- #
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      ports:
        - port: 9090
          targetPort: 9090
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    # ------------------------------------------------------------
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-external
      namespace: kube-system
    spec:
      ports:
        - port: 9090
          targetPort: 9090
          nodePort: 30090
      type: NodePort
      selector:
        k8s-app: kubernetes-dashboard

    3. Apply the configuration

    $ kubectl apply -f kubernetes-dashboard.yaml

    4. View the pod and service list

    $ kubectl get pods,svc -n kube-system

    5. Browse to IP:30090 to reach the UI

  • Environment initialization: # disable swap: swapoff -a; rm -f /swap.img; vim /etc/fstab # /swap.img ... step1: install dependencies: sudo apt-get update; sudo apt-get -y install apt-transport-https ca-certificates curl software-prop...
  • Environment: Ubuntu 16.04, kubernetes 15.1. Part One: change the host configuration (run on both the master and the worker nodes). Enter root mode: sudo passwd root; su. Change the hostname (master node to master, worker nodes to worker): vi /etc/hostname. Edit the hosts file: vi ...
  • Installing dashboard v1.10.1 on kubernetes - attached resource
  • kubernetes安装手册.txt

    2021-02-18 10:23:11
    A step-by-step kubernetes installation; the steps are fairly detailed, suitable for beginners
  • Kubernetes installation | k8s component installation

    2018-09-08 18:27:43
    --Last night the west wind withered the emerald trees; alone I climbed the high tower... Prepare the kubernetes certificates; operate on the master node. (1) Create the directories: mkdir $HOME/ssl && cd $HOME/ssl (2) Configure & generate the root ca: # configure root ca cat >...
  • The complete Kubernetes installation process on CentOS 7 (with attachments). Contents: Part 1: Nginx on Kubernetes application deployment; 1. Environment preparation; 1.1 Hardware and software environment; 1.2 Network topology; Part 2: Kubernetes and related component deployment; 2.1 Docker containers and private registry deployment...
  • The complete set of resources for installing Nginx-Ingress-Controller on Kubernetes 1.19.4, containing: deploy.yml jettech-kube-webhook-certgen-v1.5.0.tar k8s.gcr.io-ingress-nginx-controller-v0.41.2.tar nginx-1.19.5.tar
  • Aliyun ... /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=0 repo_gpgcheck=0...
  • kubernetes installation and deployment document

    2018-03-29 10:05:26
    kubernetes deployment document and installation package
  • Kubernetes installation, configuration, and service deployment

    2017-03-29 12:28:41
    Installing and configuring kubernetes; publishing services on kubernetes
  • Kubernetes installation and usage

    2017-07-23 13:05:33
    Installation: to get around the network block, run the following as root, or refer to this link: wget https://coding.net/u/scaffrey/p/hosts/git/raw/master/hosts; cp hosts /etc/hosts. Install: apt-get update && apt-get install -y apt-transport-https curl -s...
  • Kubernetes the Alta3 Way. This manual walks you through setting up Kubernetes the Alta3 way: a fully automated, Ansible-driven way to bring up a Kubernetes cluster. It is optimized for learning, which means taking the long road to make sure you understand the bootstrapping...
  • Installing the dashboard on Kubernetes on centos7

    2019-06-23 00:16:40
    2. Create the kubernetes-dashboard admin role 3. Get the token 4. Log in to the kubernetes-dashboard web UI with the admin role 1. Create a new directory; run on the master machine: # mkdir dashboard # cd dashboard Download the yaml file: # curl -o ...
