  • Kubeadm The purpose of this repo is to aggregate issues filed against the kubeadm component. What is Kubeadm? Kubeadm is a tool built to provide best-practice "fast paths" for creating Kubernetes ...
  • Deploying Dashboard 2.0.3 with kubeadm

    2020-07-23 23:38:15

    Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources. Dashboard gives you an overview of the applications running in the cluster, and lets you create or modify Kubernetes resources (such as Deployments, Jobs, and DaemonSets). For example, you can scale a Deployment, initiate a rolling update, restart a pod, or use a wizard to create a new application.

    Dashboard also reports the status of Kubernetes resources in the cluster and any errors that may have occurred.

    1. Download the recommended.yaml file

    Deploy the current latest dashboard release from GitHub, v2.0.3-beta8:
    https://github.com/kubernetes/dashboard/releases

    $ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3-beta8/aio/deploy/recommended.yaml
    

    Add the NodePort access type and port to the Service. My recommended.yaml file looks like this:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort        # add the access type
      ports:
        - port: 443
          nodePort: 30001   # add the nodePort
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
        # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
        # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
        verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.0.3
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                  # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
        spec:
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.4
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}
    

    Apply the configuration file:

    [root@master ~]# kubectl apply -f recommended.yaml
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created
    

    Check which node the pods are on and the service ports:

    $ kubectl -n kubernetes-dashboard get pod,svc -o wide
    

    Access the Dashboard page through any node's IP and the service's port 30001.

    Create the create-admin.yaml file:

    [root@master ~]# cat create-admin.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: admin-user
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kubernetes-dashboard
    

    Run it:

    $ kubectl apply -f create-admin.yaml
    

    Get the user's token for logging in:

    $ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
    
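The grep/awk pipeline above can be wrapped in a small helper so the token lands in a shell variable instead of being copied out of the `describe` output by hand. A sketch, not from the original post; `extract_token` is a hypothetical helper name:

```shell
# extract_token: read `kubectl describe secret` output on stdin and print
# only the bare token value, so it can be captured into a variable.
extract_token() {
  awk '$1 == "token:" {print $2}'
}

# Intended usage against a live cluster (assumes the admin-user secret
# created above):
#   TOKEN=$(kubectl -n kubernetes-dashboard describe secret \
#       $(kubectl -n kubernetes-dashboard get secret | awk '/admin-user/{print $1}') \
#       | extract_token)
```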

    Copy the token printed above and log in at https://192.168.13.130:30001/.
    To delete the dashboard:

    $ kubectl delete -f create-admin.yaml
    $ kubectl delete -f recommended.yaml
    

    To start it again:

    $ kubectl apply -f recommended.yaml	    
    $ kubectl apply -f create-admin.yaml
    
  • kubeadm is a tool released by the official community for quickly deploying a kubernetes cluster. This tool can deploy a kubernetes cluster with two commands: 1. Installation requirements. Before starting, the machines used to deploy the Kubernetes cluster must meet several conditions: one or more ...

    kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.

    This tool can deploy a Kubernetes cluster with just two commands (kubeadm init on the master and kubeadm join on each node).

    1. Installation requirements

    Before starting, the machines used to deploy the Kubernetes cluster must meet the following conditions:

    One or more machines running CentOS 7.x x86_64. Hardware configuration:

    2 GB of RAM or more, 2 CPUs or more, 30 GB of disk or more

    Full network connectivity between all machines in the cluster, and outbound internet access to pull images

    Swap disabled

    2. Learning objectives

    1. Install Docker and kubeadm on all nodes

    2. Deploy the Kubernetes master

    3. Deploy a container network plugin

    4. Deploy Kubernetes nodes and join them to the cluster

    5. Deploy the Dashboard web UI to view Kubernetes resources visually

    3. Prepare the environment

    Disable the firewall:

    systemctl stop firewalld

    systemctl disable firewalld

    iptables -F

     

    Disable SELinux permanently (takes effect after reboot):
    $ sed -i 's/enforcing/disabled/' /etc/selinux/config
    Disable it temporarily:
    $ setenforce 0
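The sed above replaces every occurrence of "enforcing" in the file, including comments. A slightly more targeted variant (an assumption about the config layout, not part of the original post) anchors the substitution on the SELINUX= key; `disable_selinux_config` is a hypothetical helper whose path argument exists only to make it testable:

```shell
# Rewrite only the SELINUX= setting, leaving comment lines untouched.
# The real target is /etc/selinux/config; a path can be passed for testing.
disable_selinux_config() {
  CONFIG="${1:-/etc/selinux/config}"
  sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CONFIG"
}
```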

     

    Disable swap temporarily:
    $ swapoff -a
    Disable it permanently by removing the swap entry:
    $ vim /etc/fstab
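Editing /etc/fstab by hand works, but the swap line can also be commented out with sed so the change survives reboots without an interactive editor. A sketch (`disable_swap_fstab` is a hypothetical helper; the path argument exists only for testing):

```shell
# Comment out any active swap entry in fstab so swap stays off after reboot.
# The real target is /etc/fstab; a path can be passed for testing.
disable_swap_fstab() {
  FSTAB="${1:-/etc/fstab}"
  # On lines containing a swap field, prefix with '#' unless already commented.
  sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$FSTAB"
}
```

Rerunning the function is harmless: already-commented lines are left alone.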

     

    Add hostname-to-IP mappings (remember to set the hostnames first):

    $ cat /etc/hosts

    192.168.0.11 k8s-master

    192.168.0.12 k8s-node1

    192.168.0.13 k8s-node2
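Appending these entries can be made idempotent, so rerunning a setup script does not duplicate lines. A sketch (`add_host_entry` is a hypothetical helper; the optional third argument is a file path used only for testing):

```shell
# Append "IP hostname" to the hosts file only if the hostname is not
# already present. Defaults to /etc/hosts.
add_host_entry() {
  HOSTS="${3:-/etc/hosts}"
  grep -q "[[:space:]]$2\$" "$HOSTS" || printf '%s %s\n' "$1" "$2" >> "$HOSTS"
}

# add_host_entry 192.168.0.11 k8s-master
# add_host_entry 192.168.0.12 k8s-node1
# add_host_entry 192.168.0.13 k8s-node2
```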

     

    Pass bridged IPv4 traffic to iptables chains:

    [root@k8s-node1 ~]# cat /etc/sysctl.d/k8s.conf

    net.bridge.bridge-nf-call-ip6tables = 1

    net.bridge.bridge-nf-call-iptables = 1

    [root@k8s-node1 ~]# sysctl --system
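The file above can also be written non-interactively with a heredoc before running sysctl --system. A sketch (`write_k8s_sysctl` is a hypothetical helper; the optional path argument exists for testing):

```shell
# Write the bridge sysctl settings in one step; the real target is
# /etc/sysctl.d/k8s.conf, after which `sysctl --system` loads them.
write_k8s_sysctl() {
  cat > "${1:-/etc/sysctl.d/k8s.conf}" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
}

# write_k8s_sysctl && sysctl --system
```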

     

    4. Install Docker/kubeadm/kubelet on all nodes

    4.1 Install Docker

    Kubernetes' default CRI (container runtime) is Docker, so install Docker first:

    [root@k8s-node1 ~]#  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

    [root@k8s-node1 ~]# yum -y install docker-ce

    [root@k8s-node1 ~]# systemctl enable docker && systemctl start docker

    [root@k8s-node1 ~]# docker --version

    4.2 Add the Alibaba Cloud YUM repository

    [root@k8s-node2 ~]# vim /etc/yum.repos.d/kubernetes.repo

    [kubernetes]

    name=Kubernetes

    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

    enabled=1

    gpgcheck=1

    repo_gpgcheck=1

    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
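The same repo file can be created non-interactively instead of typing it into vim. A sketch (`write_k8s_repo` is a hypothetical helper; note the yum repo directory is /etc/yum.repos.d/):

```shell
# Create the Kubernetes repo file in one step; the real target is
# /etc/yum.repos.d/kubernetes.repo. A path can be passed for testing.
write_k8s_repo() {
  cat > "${1:-/etc/yum.repos.d/kubernetes.repo}" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
}
```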

     

    4.3 Install kubeadm, kubelet, and kubectl

    [root@k8s-node1 ~]# yum -y install kubelet kubeadm kubectl

    [root@k8s-node1 ~]# systemctl enable kubelet.service

    kubeadm init --apiserver-advertise-address=192.168.30.21 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

    Fill in your own Kubernetes version and master host IP.

     

     

     

    5. Deploy the Kubernetes master

    Operate on the master host:

    [root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.30.21 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

     

    When initialization completes, output like the following is shown. This output is important: copy the token and hash from your own output, not from here.

    Your Kubernetes control-plane has initialized successfully!

     

    To start using your cluster, you need to run the following as a regular user:

     

      mkdir -p $HOME/.kube

      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

      sudo chown $(id -u):$(id -g) $HOME/.kube/config

     

    You should now deploy a pod network to the cluster.

    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

      https://kubernetes.io/docs/concepts/cluster-administration/addons/

     

    Then you can join any number of worker nodes by running the following on each as root:

     

    kubeadm join 192.168.30.21:6443 --token x8gdiq.sbcj8g4fmoocd5tl \

    --discovery-token-ca-cert-hash sha256:0b48e70fa8a268f8b88cd69b02cf87d8a2bf2efe519bb88dfa558de20d4a9993
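If this output is lost, the join command can be reconstructed: `kubeadm token create --print-join-command` prints a fresh one, and the CA cert hash can be recomputed from the cluster CA certificate with openssl. A sketch of the hash computation (`ca_cert_hash` is a hypothetical helper; it assumes an RSA CA key, which is kubeadm's default):

```shell
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA
# certificate (default /etc/kubernetes/pki/ca.crt): extract the public key,
# convert it to DER, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# kubeadm join <master-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:$(ca_cert_hash)
```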

     

    Set up the kubectl tool:

    [root@k8s-master ~]# mkdir -p $HOME/.kube

    [root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    [root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

    At this point the node is not ready yet (no network plugin has been installed):

    [root@k8s-master ~]# kubectl get nodes

    NAME         STATUS     ROLES    AGE   VERSION

    k8s-master   NotReady   master   10m   v1.15.0

     

    6. Install a pod network plugin (CNI)

    Operate on the master:

    [root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

    Check node readiness; our master node is now up:

    [root@k8s-master ~]# kubectl get nodes

    NAME         STATUS   ROLES    AGE   VERSION

    k8s-master   Ready    master   32m   v1.15.0

    Make sure the pods in the kube-system namespace have started:

    [root@k8s-master ~]# kubectl get pods -n kube-system

    NAME                                 READY   STATUS    RESTARTS   AGE

    coredns-bccdc95cf-5j5fd              1/1     Running   0          33m

    coredns-bccdc95cf-6plrt              1/1     Running   0          33m

    etcd-k8s-master                      1/1     Running   0          32m

    kube-apiserver-k8s-master            1/1     Running   0          33m

    kube-controller-manager-k8s-master   1/1     Running   0          32m

    kube-flannel-ds-amd64-l8dg8          1/1     Running   0          8m1s

    kube-proxy-lxn4w                     1/1     Running   0          33m

    kube-scheduler-k8s-master            1/1     Running   0          33m

     

     

    7. Join Kubernetes nodes

    To add new nodes to the cluster, run the kubeadm join command from the kubeadm init output on each node:

     

    [root@k8s-node2 ~]# kubeadm join 192.168.30.21:6443 --token x8gdiq.sbcj8g4fmoocd5tl \

    >     --discovery-token-ca-cert-hash sha256:0b48e70fa8a268f8b88cd69b02cf87d8a2bf2efe519bb88dfa558de20d4a9993

     

    Check the nodes:

    [root@k8s-master ~]# kubectl get node

    NAME         STATUS   ROLES    AGE   VERSION

    k8s-master   Ready    master   81m   v1.15.0

    k8s-node1    Ready    <none>   23m   v1.15.0

    k8s-node2    Ready    <none>   26m   v1.15.0

     

    8. Test the Kubernetes cluster

     

    Create a pod in the Kubernetes cluster and verify it runs correctly:

    [root@k8s-master ~]#  kubectl create deployment nginx --image=nginx

    [root@k8s-master ~]#  kubectl expose deployment nginx --port=80 --type=NodePort

    View the pods' details.

    (Here I brought up only one node; the other had a swap problem and was left off, but that does not affect the deployment.)

    [root@k8s-master ~]# kubectl get pods,svc -o wide

    NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES

    pod/nginx-554b9c67f9-sfxh2   1/1     Running   0          5m7s   10.244.1.2   k8s-node2   <none>           <none>

     

    NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE     SELECTOR

    service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        96m     <none>

    service/nginx        NodePort    10.1.66.144   <none>        80:30900/TCP   3m44s   app=nginx

    Access the application on a node:

    http://nodeip:port (here, port 30900)
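Rather than reading the NodePort off the screen, it can be parsed out of the kubectl output in a script. A sketch (`nodeport_of` is a hypothetical helper that reads the `kubectl get` output on stdin):

```shell
# nodeport_of SERVICE: read `kubectl get pods,svc -o wide` output on stdin
# and print the NodePort from the PORT(S) column (e.g. 80:30900/TCP -> 30900).
nodeport_of() {
  awk -v svc="service/$1" '$1 == svc {split($5, p, "[:/]"); print p[2]}'
}

# Intended usage against a live cluster:
#   PORT=$(kubectl get pods,svc -o wide | nodeport_of nginx)
#   curl "http://<node-ip>:$PORT"
```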

    9. Deploy the Dashboard

    [root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

    [root@k8s-master ~]# vim kubernetes-dashboard.yaml

     

    Line 112:   image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   (point to a domestic mirror; the default is Google's registry)

     

    Line 158:   type: NodePort   (add the access type)

    Line 159:   ports:

    Line 160:     - port: 443

    Apply the dashboard:

    [root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml

    Check pod status:

    [root@k8s-master ~]# kubectl get pods -n kube-system

    NAME                                 READY   STATUS    RESTARTS   AGE

    coredns-bccdc95cf-5j5fd              1/1     Running   0          121m

    coredns-bccdc95cf-6plrt              1/1     Running   0          121m

    etcd-k8s-master                      1/1     Running   0          120m

    kube-apiserver-k8s-master            1/1     Running   0          121m

    kube-controller-manager-k8s-master   1/1     Running   0          120m

    kube-flannel-ds-amd64-6m7ct          1/1     Running   0          64m

    kube-flannel-ds-amd64-l8dg8          1/1     Running   0          95m

    kube-proxy-lxn4w                     1/1     Running   0          121m

    kube-proxy-xdcgv                     1/1     Running   0          64m

    kube-scheduler-k8s-master            1/1     Running   0          121m

    kubernetes-dashboard-79ddd5-t7q57    1/1     Running   0          82s

     

    Check the port for access (31510 here); use https://192.168.30.23:31510:

    [root@k8s-master ~]# kubectl get pods,svc -n kube-system

    NAME                                     READY   STATUS    RESTARTS   AGE

    pod/coredns-bccdc95cf-5j5fd              1/1     Running   0          127m

    pod/coredns-bccdc95cf-6plrt              1/1     Running   0          127m

    pod/etcd-k8s-master                      1/1     Running   0          126m

    pod/kube-apiserver-k8s-master            1/1     Running   0          126m

    pod/kube-controller-manager-k8s-master   1/1     Running   0          126m

    pod/kube-flannel-ds-amd64-6m7ct          1/1     Running   0          69m

    pod/kube-flannel-ds-amd64-l8dg8          1/1     Running   0          101m

    pod/kube-proxy-lxn4w                     1/1     Running   0          127m

    pod/kube-proxy-xdcgv                     1/1     Running   0          69m

    pod/kube-scheduler-k8s-master            1/1     Running   0          126m

    pod/kubernetes-dashboard-79ddd5-t7q57    1/1     Running   0          6m53s

     

    NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE

    service/kube-dns               ClusterIP   10.1.0.10     <none>        53/UDP,53/TCP,9153/TCP   127m

    service/kubernetes-dashboard   NodePort    10.1.45.160   <none>        443:31510/TCP            6m53s

    The login page offers a choice of kubeconfig or token; we choose token login.

    Create a service account and bind it to the default cluster-admin cluster role:

    [root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system

    [root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

    Find the dashboard-admin secret:

    [root@k8s-master ~]# kubectl get secret -n kube-system

    Copy the secret name dashboard-admin-token-sx5gl.

     

     

    View its details and paste the token into the token field on the web page:

     

    [root@k8s-master ~]# kubectl describe secret dashboard-admin-token-sx5gl -n kube-system

    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc3g1Z2wiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTBmNTk4YWUtMWFlNS00YzNjLTgzZjUtOWRmNDg3MzJhNDVjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.QYPP7SDQnPp062yb_4XrdOE8xkmTergaTTTPulDADvXzyG2udAEU5AKfHNDH1ZXu5pw9RN9OgX5xUwE_OzQXpPoE5Qt0x2M3VOdpscW2pOw_KOUfnYtf-Aq6Z8c9KgsNdAtUkBHwFbMucL3tDH-Uxb9AdBMX6q5W9jbGlfMa0M6tp2o4zIcoqpli1qAMI_FjvNfkmWX0x4akIzsVeoocewdjzB8Ca-VyqEFZXCMULQv5L8z1RszCXZ4VgOnkHQB6AiVUGmJ9B8iwtCZu-SW2iwWaT-4iQeQvtM3HQTl5aZycaI26qUlsuUtBj5eqyJqugSGlidXJs5TPdn_xmF-FZg

     

     

    Reposted from: https://www.cnblogs.com/zc1741845455/p/11104514.html

  • Dashboard on kubeadm: 1. Because accessing the dashboard UI requires https, this test environment uses openssl for encrypted transport: [root@k8s-master ~]# openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048 ...

    Dashboard on kubeadm

    1. Accessing the dashboard UI requires https, so in this test environment openssl is used to encrypt the traffic:

    [root@k8s-master ~]# openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
    Generating RSA private key, 2048 bit long modulus
    ....................+++
    ........+++
    e is 65537 (0x10001)
    [root@k8s-master ~]# openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
    writing RSA key
    [root@k8s-master ~]# openssl req -new -key dashboard.key -out dashboard.csr
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:CN
    State or Province Name (full name) []:china
    Locality Name (eg, city) [Default City]:beijing
    Organization Name (eg, company) [Default Company Ltd]:qf
    Organizational Unit Name (eg, section) []:qf
    Common Name (eg, your name or your server's hostname) []:xingdian
    Email Address []:zhuangyaovip@163.com
    
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:
    [root@k8s-master ~]# openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
    Signature ok
    subject=/C=CN/ST=china/L=beijing/O=qf/OU=qf/CN=xingdian/emailAddress=zhuangyaovip@163.com
    Getting Private key
    
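Before distributing the certificate, it is worth confirming that dashboard.key actually matches dashboard.crt. A sketch that compares the public-key digests of the two files (`cert_matches_key` is a hypothetical helper, not part of the original post):

```shell
# cert_matches_key CERT KEY: succeed only if the certificate's public key
# matches the public key derived from the private key.
cert_matches_key() {
  c=$(openssl x509 -noout -pubkey -in "$1" | openssl sha256)
  k=$(openssl rsa -pubout -in "$2" 2>/dev/null | openssl sha256)
  [ "$c" = "$k" ]
}

# cert_matches_key dashboard.crt dashboard.key && echo "key and cert match"
```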

    2. Copy the generated key and certificate to the node machines:

    [root@k8s-master ~]# mkdir /opt/certs
    [root@k8s-master ~]# ls
    dashboard.crt  dashboard.csr  dashboard.key  dashboard.pass.key 
    [root@k8s-master ~]# mv dashboard.crt dashboard.key /opt/certs/
    [root@k8s-master ~]# scp -r /opt/certs  k8s-node-1:/opt/
    dashboard.crt                                                                           100% 1273   919.4KB/s   00:00    
    dashboard.key                                                                           100% 1675     1.5MB/s   00:00    
    [root@k8s-master ~]# scp -r /opt/certs  k8s-node-2:/opt/
    dashboard.crt                                                                           100% 1273   966.4KB/s   00:00    
    dashboard.key    
    

    3. Download the YAML file and change the image address and the Service type to NodePort:

    [root@k8s-master ~]# git clone https://github.com/blackmed/kubernetes-kubeadm.git
    [root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
    

    The YAML file in my git repo has already been modified; the changes are as follows:

    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
    

    Apply the YAML file:

    [root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
    

    4. Create an administrator role:

    [root@k8s-master ~]# vim kubernetes-admin.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: dashboard-admin
      namespace: kube-system
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: dashboard-admin
    subjects:
      - kind: ServiceAccount
        name: dashboard-admin
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    

    Apply the YAML file:

    [root@k8s-master ~]# kubectl apply -f kubernetes-admin.yaml
    

    5. Get the generated token for login:

    [root@k8s-master dashboard]# kubectl describe secret dashboard-admin  -n kube-system
    Name:         dashboard-admin-token-fsdcn
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: 6700f33f-8fc3-409c-b253-8796cf850014
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjE3OVpva3B2Z2drNGN3OGppcTVkc1hhbVVzY2NJclF5QlBEYWQwZ0tjUVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZnNkY24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjcwMGYzM2YtOGZjMy00MDljLWIyNTMtODc5NmNmODUwMDE0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.i4P9A96V9847mlzv1e4q4EtXU-2PwXebT1Ax85d_5GtNMetPr7tDadeciw09TlTK0Ju8MCicmN0UmPDTQ3gCD6B9zR7V1chIPh7GuiSKaYxHQFeRjcRqRBhNUREmtUd_F5CZR3nP5XwNoimVQuCLD2EdveXCr8WcZTG5E8fy7T2ip0PJ1emoD_V1CV49ldSu2AmN4h7LZ9X7o4CbSt_XVABQEIBHyMn3GkeC-Q-YOM6BWKviJM8kAynSFFNSyVzygzMqwzCfZqqNv9-FE0aAUq2jECvY-aFnFBqkLAIPX_vPIlailQu4mmUNctV-GlBw2yeY0y4Zd2OMXhFGxpzrQw
    
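The service-account token shown above is a JWT, so its payload can be decoded locally to confirm which account it belongs to before pasting it into the login page. A sketch (`jwt_payload` is a hypothetical helper that re-pads the base64url segment before decoding):

```shell
# jwt_payload TOKEN: decode the payload (second dot-separated segment) of a
# JWT. Converts base64url characters back to base64 and restores padding.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 -d
}

# jwt_payload "$TOKEN"   # shows the "sub" claim, e.g. the service account
```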

    6. Check the pods; the dashboard is running normally:

    [root@k8s-master dashboard]# kubectl  get pods --namespace=kube-system
    NAME                                    READY   STATUS    RESTARTS   AGE
    coredns-6955765f44-4t2jd                1/1     Running   0          32h
    coredns-6955765f44-ck62g                1/1     Running   0          32h
    etcd-k8s-master                         1/1     Running   2          32h
    kube-apiserver-k8s-master               1/1     Running   2          32h
    kube-controller-manager-k8s-master      1/1     Running   3          32h
    kube-flannel-ds-amd64-4n72n             1/1     Running   0          3h31m
    kube-flannel-ds-amd64-mpdsm             1/1     Running   0          99m
    kube-flannel-ds-amd64-vblsd             1/1     Running   0          99m
    kube-proxy-2f4jl                        1/1     Running   0          99m
    kube-proxy-8kmc4                        1/1     Running   0          99m
    kube-proxy-r4qsn                        1/1     Running   2          32h
    kube-scheduler-k8s-master               1/1     Running   3          32h
    kubernetes-dashboard-6745f84c7b-rkg4d   1/1     Running   0          5m25s
    

    7. Access it in a browser.

  • 使用kubeadm+dashboard构建k8s集群15版本 kubeadm是官方社区推出的一个用于快速部署kubernetes集群的工具。 这个工具能通过两条指令完成一个kubernetes集群的部署: 1. 安装要求 在开始之前,部署Kubernetes集群...

    使用kubeadm+dashboard构建k8s集群15版本

    kubeadm是官方社区推出的一个用于快速部署kubernetes集群的工具。

    这个工具能通过两条指令完成一个kubernetes集群的部署:

    1. 安装要求

    在开始之前,部署Kubernetes集群机器需要满足以下几个条件:

    一台或多台机器,操作系统 CentOS7.x-86_x64 硬件配置:

    2GB或更多RAM,2个CPU或更多CPU,硬盘30GB或更多

    集群中所有机器之间网络互通 可以访问外网,需要拉取镜像

    禁止swap分区
    2. 学习目标

    1. 在所有节点上安装Docker和kubeadm

    2. 部署Kubernetes Master

    3. 部署容器网络插件

    4. 部署 Kubernetes Node,将节点加入Kubernetes集群中

    5. 部署Dashboard Web页面,可视化查看Kubernetes资源
      3. 准备环境

       关闭防火墙: systemctl stop firewalld
       
       systemctl disable firewalld
       
       Iptables -F
       
        
       
       关闭selinux: $ sed -i 's/enforcing/disabled/' /etc/selinux/config $ setenforce 0
       
       临时 $ setenforce 0
       
        
       
       关闭swap: $ swapoff -a  $ 临时 $ vim /etc/fstab  $ 永久
      

      添加主机名与IP对应关系(记得设置主机名):

      $ cat /etc/hosts
       
       192.168.0.11 k8s-master
       
       192.168.0.12 k8s-node1
       
       192.168.0.13 k8s-node2
      

    将桥接的IPv4流量传递到iptables的链:

    $

    [root@k8s-node1 ~]# cat /etc/sysctl.d/k8s.conf
    
    net.bridge.bridge-nf-call-ip6tables = 1
    
    net.bridge.bridge-nf-call-iptables = 1
    
    [root@k8s-node1 ~]# sysctl --system
    
     
    4. 所有节点安装Docker/kubeadm/kubelet
    4.1 安装Docker
    
    Kubernetes默认CRI(容器运行时)为Docker,因此先安装Docker。
    
    [root@k8s-node1 ~]#  wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    
    [root@k8s-node1 ~]# yum -y install docker-ce
    
    [root@k8s-node1 ~]#systemctl enable docker && systemctl start docker
    
    [root@k8s-node1 ~]#docker --version
    4.2 添加阿里云YUM软件源
    
    [root@k8s-node2 ~]# vim /etc/repos.d/kubernetes.repo
    
    [kubernetes]
    
    name=Kubernetes
    
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    
    enabled=1
    
    gpgcheck=1
    
    repo_gpgcheck=1
    
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    
     
    
    4.3 安装kubeadm,kubelet和kubectl                       
    
    [root@k8s-node1 ~]# yum -y install kubelet kubeadm kubectl
    
    [root@k8s-node1 ~]# systemctl enable kubelet.service
    
    kubeadm init --apiserver-advertise-address=192.168.30.21 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
    
    按照自己的版本和master主机IP去填
    
     
    
     
    
     
    5. 部署Kubernetes Master
    
    master主机操作
    
    [root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.30.21 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
    
     
    
    初始化完成会显示以下内容:这里的东西很重要,颜色表明的要按自己的去复制
    
    Your Kubernetes control-plane has initialized successfully!
    
     
    
    To start using your cluster, you need to run the following as a regular user:
    
     
    
      mkdir -p $HOME/.kube
    
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
     
    
    You should now deploy a pod network to the cluster.
    
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
     
    
    Then you can join any number of worker nodes by running the following on each as root:
    
     
    
    kubeadm join 192.168.30.21:6443 --token x8gdiq.sbcj8g4fmoocd5tl \
    
    --discovery-token-ca-cert-hash sha256:0b48e70fa8a268f8b88cd69b02cf87d8a2bf2efe519bb88dfa558de20d4a9993
    
     
    
Set up the kubectl tool:
    
    [root@k8s-master ~]# mkdir -p $HOME/.kube
    
    [root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    
    [root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
The worker nodes have not joined yet:
    
    [root@k8s-master ~]# kubectl get nodes
    
    NAME         STATUS     ROLES    AGE   VERSION
    
    k8s-master   NotReady   master   10m   v1.15.0
    
     
6. Install a Pod network add-on (CNI)

Run on the master:

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Confirm the master node is now Ready:
    
    [root@k8s-master ~]# kubectl get nodes
    
    NAME         STATUS   ROLES    AGE   VERSION
    
    k8s-master   Ready    master   32m   v1.15.0
    
Confirm the pods in the kube-system namespace are running:
    
    [root@k8s-master ~]# kubectl get pods -n kube-system
    
    NAME                                 READY   STATUS    RESTARTS   AGE
    
    coredns-bccdc95cf-5j5fd              1/1     Running   0          33m
    
    coredns-bccdc95cf-6plrt              1/1     Running   0          33m
    
    etcd-k8s-master                      1/1     Running   0          32m
    
    kube-apiserver-k8s-master            1/1     Running   0          33m
    
    kube-controller-manager-k8s-master   1/1     Running   0          32m
    
    kube-flannel-ds-amd64-l8dg8          1/1     Running   0          8m1s
    
    kube-proxy-lxn4w                     1/1     Running   0          33m
    
    kube-scheduler-k8s-master            1/1     Running   0          33m
    
     
    
     
7. Join the Kubernetes nodes

To add new nodes to the cluster, run on each node the kubeadm join command printed by kubeadm init:

    
    [root@k8s-node2 ~]# kubeadm join 192.168.30.21:6443 --token x8gdiq.sbcj8g4fmoocd5tl \
    
    >     --discovery-token-ca-cert-hash sha256:0b48e70fa8a268f8b88cd69b02cf87d8a2bf2efe519bb88dfa558de20d4a9993
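If the original kubeadm init output has been lost, a fresh token can be issued on the master with `kubeadm token create --print-join-command`, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA: it is the sha256 digest of the CA's DER-encoded public key. A sketch of that derivation, demonstrated on a throwaway self-signed certificate; on a real master, replace `$tmp/ca.crt` with /etc/kubernetes/pki/ca.crt:

```shell
# Derive a kubeadm-style discovery hash: sha256 over the DER-encoded
# public key extracted from a CA certificate.
tmp=$(mktemp -d)
# Throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=kubernetes' \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```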
    
     
    
Check the nodes:
    
    [root@k8s-master ~]# kubectl get node
    
    NAME         STATUS   ROLES    AGE   VERSION
    
    k8s-master   Ready    master   81m   v1.15.0
    
    k8s-node1    Ready    <none>   23m   v1.15.0
    
    k8s-node2    Ready    <none>   26m   v1.15.0
    
     
8. Test the Kubernetes cluster

Create a pod in the cluster and verify it runs normally:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort

View the pod and service details. (At this point only one worker was up; the other had a swap problem and was left off, which does not affect the deployment.)
    
    [root@k8s-master ~]# kubectl get pods,svc -o wide
    
    NAME                         READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
    
    pod/nginx-554b9c67f9-sfxh2   1/1     Running   0          5m7s   10.244.1.2   k8s-node2   <none>           <none>
    
     
    
    NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE     SELECTOR
    
    service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        96m     <none>
    
    service/nginx        NodePort    10.1.66.144   <none>        80:30900/TCP   3m44s   app=nginx
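A few lines of shell can pull the NodePort out of that PORT(S) column; a minimal sketch, assuming the `80:30900/TCP` value shown above (on a live cluster, the jsonpath query in the comment is the more direct route):

```shell
# Extract the NodePort from the PORT(S) value "80:30900/TCP".
# On the master you could instead query it directly:
#   kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
ports='80:30900/TCP'
nodeport=${ports#*:}      # strip the cluster port -> "30900/TCP"
nodeport=${nodeport%%/*}  # strip the protocol    -> "30900"
echo "http://<node-ip>:$nodeport"
```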
    
Access the application via a node:

http://<node-ip>:30900 (the NodePort shown above)

9. Deploy the Dashboard

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml

Line 112:  image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # point at a domestic mirror; the default is Google's gcr.io image
Line 158:  type: NodePort   # add the service type
Line 159:  ports:
Line 160:    - port: 443

Apply the dashboard manifest:
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
Check the pod status:
    [root@k8s-master ~]# kubectl get pods -n kube-system
    NAME                                 READY   STATUS    RESTARTS   AGE
    coredns-bccdc95cf-5j5fd              1/1     Running   0          121m
    coredns-bccdc95cf-6plrt              1/1     Running   0          121m
    etcd-k8s-master                      1/1     Running   0          120m
    kube-apiserver-k8s-master            1/1     Running   0          121m
    kube-controller-manager-k8s-master   1/1     Running   0          120m
    kube-flannel-ds-amd64-6m7ct          1/1     Running   0          64m
    kube-flannel-ds-amd64-l8dg8          1/1     Running   0          95m
    kube-proxy-lxn4w                     1/1     Running   0          121m
    kube-proxy-xdcgv                     1/1     Running   0          64m
    kube-scheduler-k8s-master            1/1     Running   0          121m
    kubernetes-dashboard-79ddd5-t7q57    1/1     Running   0          82s
    
Find the service's NodePort (31510 here) and access the dashboard over HTTPS, e.g. https://192.168.30.23:31510:
    [root@k8s-master ~]# kubectl get pods,svc -n kube-system
    NAME                                     READY   STATUS    RESTARTS   AGE
    pod/coredns-bccdc95cf-5j5fd              1/1     Running   0          127m
    pod/coredns-bccdc95cf-6plrt              1/1     Running   0          127m
    pod/etcd-k8s-master                      1/1     Running   0          126m
    pod/kube-apiserver-k8s-master            1/1     Running   0          126m
    pod/kube-controller-manager-k8s-master   1/1     Running   0          126m
    pod/kube-flannel-ds-amd64-6m7ct          1/1     Running   0          69m
    pod/kube-flannel-ds-amd64-l8dg8          1/1     Running   0          101m
    pod/kube-proxy-lxn4w                     1/1     Running   0          127m
    pod/kube-proxy-xdcgv                     1/1     Running   0          69m
    pod/kube-scheduler-k8s-master            1/1     Running   0          126m
    pod/kubernetes-dashboard-79ddd5-t7q57    1/1     Running   0          6m53s
    
    NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
    service/kube-dns               ClusterIP   10.1.0.10     <none>        53/UDP,53/TCP,9153/TCP   127m
    service/kubernetes-dashboard   NodePort    10.1.45.160   <none>        443:31510/TCP            
    
The login page offers two options, kubeconfig and token; we use a token.

First create a service account and bind it to the default cluster-admin cluster role:

[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

Find the dashboard-admin secret:

[root@k8s-master ~]# kubectl get secret -n kube-system

Copy the secret name, dashboard-admin-token-sx5gl, then view its details and paste the token into the login page:

    [root@k8s-master ~]# kubectl describe secret dashboard-admin-token-sx5gl -n kube-system
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc3g1Z2wiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTBmNTk4YWUtMWFlNS00YzNjLTgzZjUtOWRmNDg3MzJhNDVjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.QYPP7SDQnPp062yb_4XrdOE8xkmTergaTTTPulDADvXzyG2udAEU5AKfHNDH1ZXu5pw9RN9OgX5xUwE_OzQXpPoE5Qt0x2M3VOdpscW2pOw_KOUfnYtf-Aq6Z8c9KgsNdAtUkBHwFbMucL3tDH-Uxb9AdBMX6q5W9jbGlfMa0M6tp2o4zIcoqpli1qAMI_FjvNfkmWX0x4akIzsVeoocewdjzB8Ca-VyqEFZXCMULQv5L8z1RszCXZ4VgOnkHQB6AiVUGmJ9B8iwtCZu-SW2iwWaT-4iQeQvtM3HQTl5aZycaI26qUlsuUtBj5eqyJqugSGlidXJs5TPdn_xmF-FZg
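The token is a JWT: three base64url segments separated by dots, and its payload can be inspected offline to confirm which namespace and service account it is bound to. A minimal sketch; base64url has no padding and uses `-`/`_` instead of `+`/`/`, so the decoder normalizes the input before calling `base64 -d`:

```shell
# Decode a base64url-encoded JWT segment.
b64url_decode() {
  local s=$1
  s=${s//-/+}; s=${s//_//}           # base64url alphabet -> base64
  case $(( ${#s} % 4 )) in           # restore stripped padding
    2) s="${s}==" ;;
    3) s="${s}=" ;;
  esac
  printf '%s' "$s" | base64 -d
}

# The first segment of the token above is the JWT header:
header=$(b64url_decode 'eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9')
echo "$header"   # {"alg":"RS256","kid":""}
# The second segment (cut -d. -f2 <<<"$token") carries the namespace
# and service-account claims.
```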
