  • kubeadm init fails: image pulls from k8s.gcr.io are refused (fixed by pulling from a Docker Hub mirror and retagging)

    1. The command below fails:

    kubeadm init \
    --kubernetes-version=v1.17.4 \
    --pod-network-cidr=10.244.0.0/16 \
     --service-cidr=10.96.0.0/12 \
     --apiserver-advertise-address=192.168.159.21
    

    2. Error output:

    [root@master ~]# kubeadm init \
    > --kubernetes-version=v1.17.4 \
    > --pod-network-cidr=10.244.0.0/16 \
    >  --service-cidr=10.96.0.0/12 \
    >  --apiserver-advertise-address=192.168.159.21
    W1109 10:26:28.265274    2406 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1109 10:26:28.265424    2406 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.4
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    
    ^C
    [root@master ~]#  kubeadm init \
    > --kubernetes-version=v1.17.4 \
    > --pod-network-cidr=10.244.0.0/16 \
    >  --service-cidr=10.96.0.0/12 \
    >  --apiserver-advertise-address=192.168.159.21
    W1109 10:29:33.395709    2645 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1109 10:29:33.396073    2645 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.4
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    
    
    error execution phase preflight: [preflight] Some fatal errors occurred:
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.4: output: Trying to pull repository k8s.gcr.io/kube-apiserver ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.187.82:443: connect: connection refused
    , error: exit status 1
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.4: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.187.82:443: connect: connection refused
    , error: exit status 1
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.4: output: Trying to pull repository k8s.gcr.io/kube-scheduler ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 142.251.8.82:443: connect: connection refused
    , error: exit status 1
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.4: output: Trying to pull repository k8s.gcr.io/kube-proxy ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 142.251.8.82:443: connect: connection refused
    , error: exit status 1
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 142.251.8.82:443: connect: connection refused
    , error: exit status 1
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Trying to pull repository k8s.gcr.io/etcd ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 142.251.8.82:443: connect: connection refused
    , error: exit status 1
    	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Trying to pull repository k8s.gcr.io/coredns ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: connect: connection refused
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    
    

    3. Cause:
    During installation, kubeadm needs to pull images from k8s.gcr.io, but that site is blocked in mainland China and cannot be reached, so the kubernetes installation cannot proceed. (The error above was produced with v1.17.4, while the fix below uses the v1.18.3 image set; substitute whatever tags kubeadm config images list reports for your version.)

    The workaround described here is to pull the images from Docker's default public registry (Docker Hub) instead and retag them, so that k8s.gcr.io never has to be reached.

    4. Solution: run kubeadm config images list to get the list of images that have to be pulled (sample output is sketched below).

    Some of the domestic mirror registries I tried do not carry the v1.18.3 images; pulling the k8s.gcr.io equivalents from the mirrorgcrio repositories on Docker Hub (https://hub.docker.com//mirrorgcrio/xxx) works.
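
    For reference, this is roughly what the list looks like; the exact tags depend on the kubeadm version installed, and the v1.18.3 set shown here matches the pull commands in the next step:

    # Print the images kubeadm will try to pull for the installed version
    kubeadm config images list
    # Typical output for a v1.18.3 control plane:
    # k8s.gcr.io/kube-apiserver:v1.18.3
    # k8s.gcr.io/kube-controller-manager:v1.18.3
    # k8s.gcr.io/kube-scheduler:v1.18.3
    # k8s.gcr.io/kube-proxy:v1.18.3
    # k8s.gcr.io/pause:3.2
    # k8s.gcr.io/etcd:3.4.3-0
    # k8s.gcr.io/coredns:1.6.7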

    5. Commands
    5.1 Pull the images with docker pull

    docker pull mirrorgcrio/kube-apiserver:v1.18.3
    docker pull mirrorgcrio/kube-controller-manager:v1.18.3
    docker pull mirrorgcrio/kube-scheduler:v1.18.3
    docker pull mirrorgcrio/kube-proxy:v1.18.3
    docker pull mirrorgcrio/pause:3.2
    docker pull mirrorgcrio/etcd:3.4.3-0
    docker pull mirrorgcrio/coredns:1.6.7
    

    5.2 Rename the images with docker tag

    docker tag mirrorgcrio/kube-apiserver:v1.18.3 k8s.gcr.io/kube-apiserver:v1.18.3
    docker tag mirrorgcrio/kube-controller-manager:v1.18.3 k8s.gcr.io/kube-controller-manager:v1.18.3
    docker tag mirrorgcrio/kube-scheduler:v1.18.3 k8s.gcr.io/kube-scheduler:v1.18.3
    docker tag mirrorgcrio/kube-proxy:v1.18.3 k8s.gcr.io/kube-proxy:v1.18.3
    docker tag mirrorgcrio/pause:3.2 k8s.gcr.io/pause:3.2
    docker tag mirrorgcrio/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
    docker tag mirrorgcrio/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
    

    5.3 Remove the original images with docker image rm

    docker image rm mirrorgcrio/kube-apiserver:v1.18.3
    docker image rm mirrorgcrio/kube-controller-manager:v1.18.3
    docker image rm mirrorgcrio/kube-scheduler:v1.18.3
    docker image rm mirrorgcrio/kube-proxy:v1.18.3
    docker image rm mirrorgcrio/pause:3.2
    docker image rm mirrorgcrio/etcd:3.4.3-0
    docker image rm mirrorgcrio/coredns:1.6.7
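
    The three steps above can also be scripted. A minimal sketch, assuming the mirrorgcrio repositories on Docker Hub carry the same tags that kubeadm config images list reports:

    #!/usr/bin/env bash
    # Pull each image from the mirrorgcrio mirror, retag it to the k8s.gcr.io
    # name that kubeadm expects, then drop the temporary mirror tag.
    set -e
    images="kube-apiserver:v1.18.3 kube-controller-manager:v1.18.3 kube-scheduler:v1.18.3
            kube-proxy:v1.18.3 pause:3.2 etcd:3.4.3-0 coredns:1.6.7"
    for img in $images; do
        docker pull "mirrorgcrio/${img}"
        docker tag  "mirrorgcrio/${img}" "k8s.gcr.io/${img}"
        docker image rm "mirrorgcrio/${img}"
    done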
    

    Once this is done, the kubernetes installation can continue.

  • [K8S] Deploying K8S with kubeadm init

    2021-08-07 10:17:37

    Official documentation -> https://kubernetes.io/zh/docs/home/

    @What is Kubernetes?
    https://v1-21.docs.kubernetes.io/zh/docs/concepts/overview/what-is-kubernetes/

    Access and tooling layer: web console, RESTful API, logging, monitoring, CI/CD
    PaaS service layer: unified service platform
    Container orchestration layer: K8S
    Container engine layer: Docker
    IaaS infrastructure layer: provides the basic runtime environment (physical machines, VMs, network, storage, ...)

    Note: Kubernetes is not a mere orchestration system; in fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It should not matter how you get from A to C, and centralized control is not required. The result is a system that is easier to use and more powerful, robust, resilient, and extensible.

    @What Kubernetes gives you...
    - Service discovery and load balancing
    - Storage orchestration
    - Automated rollouts and rollbacks
    - Automatic bin packing -> you can specify how much CPU and memory (RAM) each container needs; when containers have resource requests specified, Kubernetes can make better decisions about managing the resources for containers.
    - Self-healing
    - Secret and configuration management

    @Kubernetes components
    https://v1-21.docs.kubernetes.io/zh/docs/concepts/overview/components/

     Note: all control-plane components are usually started on the same machine, so the control plane (Control Plane) in the diagram can be thought of as the Master node.

    Master components (3 of them: kube-apiserver, kube-scheduler, kube-controller-manager)

    kube-apiserver: the unified entry point of the cluster and the coordinator of all components; it exposes its interface as a RESTful API. All create/update/delete/query and watch operations on object resources go through the APIServer, which then persists them to etcd.
    etcd: not strictly a Master component, though it can be deployed on the Master. etcd is a consistent and highly available key-value store used to hold cluster state data such as Pod and Service objects.
    kube-scheduler: watches for newly created Pods that have no node assigned and picks a node for them to run on.
    kube-controller-manager: handles the routine background tasks of the cluster; each resource has a corresponding controller, and the ControllerManager is responsible for managing these controllers.

     

    Node components

    kubelet: an agent that runs on every node in the cluster. It makes sure that containers are running in a Pod. The kubelet does not manage containers that were not created by Kubernetes.
    kube-proxy: a network proxy that runs on every node in the cluster and implements part of the Kubernetes Service concept. It maintains network rules on the node and provides layer-4 load balancing.
    Container runtime: the software responsible for running containers, e.g. Docker, containerd, CRI-O.

    @Deploying a cluster with kubeadm
    https://v1-20.docs.kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ (steps 1-6)
    https://v1-20.docs.kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/ (step 7)
    https://v1-20.docs.kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-join/ (step 8)
    For example, with 3 VMs, steps 1-6 below have to be run on every VM.
    1. Make sure the MAC address and product_uuid are unique on every node
    You can get the MAC address of the network interfaces with ip link or ifconfig -a
    You can check the product_uuid with sudo cat /sys/class/dmi/id/product_uuid

    2. Turn off the firewall

    systemctl stop firewalld
    systemctl disable firewalld

    3. Disable selinux
     

    # disable permanently
    sed -i 's/enforcing/disabled/' /etc/selinux/config
    # disable for the current session
    setenforce 0

    4. Let iptables see bridged traffic

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sudo sysctl --system
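
    To verify that the module is loaded and the settings have been applied (an optional quick check):

    # Confirm br_netfilter is loaded and the bridge sysctls are set to 1
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables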

    5. Install a container runtime, e.g. docker

    6. Install kubeadm, kubelet and kubectl; a specific version can be pinned
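
    On CentOS, a Kubernetes yum repository has to be configured before this step. A minimal sketch using the Aliyun mirror (the repository URL is an assumption based on common practice, not taken from this article; gpgcheck is disabled for simplicity):

    # Add a Kubernetes yum repo reachable from mainland China (assumed mirror URL)
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    EOF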

    yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
    systemctl enable kubelet

    Notes:
    kubeadm: the command that bootstraps the cluster.
    kubelet: runs on every node in the cluster and starts Pods and containers.
    kubectl: the command-line tool for talking to the cluster.
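
    A quick way to confirm what got installed (a sketch; output flags can vary slightly between versions, and --short on kubectl was still available in v1.20):

    # Check the installed component versions on each node
    kubeadm version -o short
    kubelet --version
    kubectl version --client --short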


    Step 7 is executed on the Master node only.
    7. Deploy the Master node
    kubeadm init initializes a Kubernetes control-plane node.

    kubeadm init \
      --apiserver-advertise-address=<Master IP> \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.20.0 \
      --service-cidr=x.x.x.x/xx \
      --pod-network-cidr=y.y.y.y/yy \
      --ignore-preflight-errors=all

    Notes (1):
    --apiserver-advertise-address: the address the cluster is advertised on (the Master IP)
    --image-repository: images are pulled from k8s.gcr.io by default; here the Aliyun mirror registry is specified instead
    --kubernetes-version: the K8s version
    --service-cidr: defaults to "10.96.0.0/12"; the IP range used for the virtual IPs of Services (the unified entry point for accessing Pods)
    --pod-network-cidr: the IP range the Pod network may use; it must match the Pod network defined later when the container network (CNI) is deployed
    --ignore-preflight-errors stringSlice: a list of checks whose errors will be shown as warnings; the value "all" ignores errors from all checks
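
    The same settings can also be expressed as a kubeadm config file instead of flags. A minimal sketch, assuming the kubeadm.k8s.io/v1beta2 API used by v1.20 (the advertise address below is the example Master IP from this article):

    # Write an InitConfiguration/ClusterConfiguration pair and point kubeadm init at it
    cat <<EOF > kubeadm-config.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.231.121
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.20.0
    imageRepository: registry.aliyuncs.com/google_containers
    networking:
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.244.0.0/16
    EOF
    kubeadm init --config kubeadm-config.yaml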

    Notes (2):
    With --service-cidr 10.96.0.0/12, the CLUSTER-IPs later assigned to Services come from this range:

    [root@k8s-master ~]# kubectl get svc -o wide
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        13d   <none>
    nginx        NodePort    10.105.63.231   <none>        80:32155/TCP   13d   app=nginx

    With --pod-network-cidr=10.244.0.0/16, the IPs later assigned to Pods come from this range:

    [root@k8s-master ~]# kubectl get pods -o wide
    NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    nginx-58f8c48d58-48xbn       1/1     Running   2          10d   10.244.169.173   k8s-node2   <none>           <none>
    [root@k8s-master ~]#

    At the end, kubeadm init prints output like the following:

    ...

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    (Note: run the following commands on the Master node)

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

      export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    (Note: save the following command; run it on each Node to join it to the cluster)

    kubeadm join 192.168.231.121:6443 --token cqqj2t.gwkj57io7aue66mr \
        --discovery-token-ca-cert-hash sha256:ac272bbfc0a687db9a37099f440ff0dc0f684909117aa501ee3cd4b7474ae7b0
    [root@k8s-master ~]#

    Step 8 is executed on every Node.
    8. Join the worker nodes to the cluster
    On each Node, run the kubeadm join command printed at the end of the kubeadm init step above.

    Step 9 is executed on the Master node.
    9. Deploy the container network (CNI), using calico as an example
    Download the yaml -> wget https://docs.projectcalico.org/manifests/calico.yaml
    Edit the Pod network it defines (CALICO_IPV4POOL_CIDR) so that it matches what --pod-network-cidr specified in kubeadm init earlier (a sketch of the edit follows this list).
    Run kubectl apply -f calico.yaml
    kubectl get pods -n kube-system to check the calico rollout
    kubectl get nodes to check the cluster
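
    A minimal sketch of that CIDR edit, assuming the manifest still carries calico's default 192.168.0.0/16 pool value (in some calico versions the CALICO_IPV4POOL_CIDR entry is commented out and must be uncommented first):

    # Point calico's Pod IP pool at the --pod-network-cidr value used by kubeadm init
    sed -i 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml
    kubectl apply -f calico.yaml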

    (END)

  • Worker node stuck in NotReady after kubeadm join: flannel pod in Init:0/1 (fixed by updating /etc/hosts and re-joining the node)

    Reposted from: https://www.cnblogs.com/liuyi778/p/12771259.html

     

    1. Symptoms

    1.1 Node status

    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES    AGE    VERSION
    master   Ready      master   2d2h   v1.18.2
    node1    NotReady   <none>   31m    v1.18.2
    [root@master ~]#

    1.2 Component status

    1.2.1 View from the master node

    [root@master ~]# kubectl get pod -n kube-system -o wide
    NAME                             READY   STATUS     RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
    coredns-7ff77c879f-78sl5         1/1     Running    2          2d2h   10.244.0.6   master   <none>           <none>
    coredns-7ff77c879f-pv744         1/1     Running    2          2d2h   10.244.0.7   master   <none>           <none>
    etcd-master                      1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>
    kube-apiserver-master            1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>
    kube-controller-manager-master   1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>
    kube-flannel-ds-amd64-h5skl      1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>
    kube-flannel-ds-amd64-mg4n5      0/1     Init:0/1   0          31m    10.1.1.13    node1    <none>           <none>
    kube-proxy-j7np7                 1/1     Running    0          31m    10.1.1.13    node1    <none>           <none>
    kube-proxy-x7s46                 1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>
    kube-scheduler-master            1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>

    1.2.2 Container status on node1

    root@node1:~# docker ps -a
    CONTAINER ID        IMAGE                                                COMMAND                  CREATED             STATUS              PORTS               NAMES
    76fee67569a2        registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   33 minutes ago      Up 33 minutes                           k8s_kube-proxy_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_0
    2c7fa6fa86a3        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 33 minutes ago      Up 33 minutes                           k8s_POD_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_0
    0d570648b79f        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 33 minutes ago      Up 33 minutes                           k8s_POD_kube-flannel-ds-amd64-mg4n5_kube-system_c7496136-fe22-438d-8267-9d69f705311e_0
    root@node1:~#

      

     

    1.3 Check the /etc/hosts configuration (if your hosts file looks fine you can skip this check, but the later steps still need to be done; tested, this does fix the problem - note left by zhoulidong, 2021-1-5)

    1.3.1 master node

    [root@master ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.1.1.11 master
    10.1.1.11 master

    As you can see, the mapping for the node1 node is missing.

    1.3.2 node1 node

    root@node1:~# cat /etc/hosts
    127.0.0.1   localhost
    127.0.1.1   node1

    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    Likewise, node1 has no host mapping for the master node.

    1.3.3 Update the host mappings

    master node

    [root@master ~]# echo '10.1.1.13 node1' >> /etc/hosts
    [root@master ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.1.1.11 master
    10.1.1.11 master
    10.1.1.13 node1

    node1 node

    root@node1:~# echo -e "10.1.1.11 master Master\n10.1.1.13 node1 Node1" >> /etc/hosts
    root@node1:~# cat /etc/hosts
    127.0.0.1   localhost
    127.0.1.1   node1

    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    10.1.1.11 master Master
    10.1.1.13 node1 Node1

    1.4 Host connectivity check

    1.4.1 master node

    [root@master ~]# ping node1 -c 5
    PING node1 (10.1.1.13) 56(84) bytes of data.
    64 bytes from node1 (10.1.1.13): icmp_seq=1 ttl=64 time=0.331 ms
    64 bytes from node1 (10.1.1.13): icmp_seq=2 ttl=64 time=0.330 ms
    64 bytes from node1 (10.1.1.13): icmp_seq=3 ttl=64 time=0.468 ms
    64 bytes from node1 (10.1.1.13): icmp_seq=4 ttl=64 time=0.614 ms
    64 bytes from node1 (10.1.1.13): icmp_seq=5 ttl=64 time=0.469 ms

    --- node1 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4002ms
    rtt min/avg/max/mdev = 0.330/0.442/0.614/0.107 ms

    1.4.2 node1 node

    root@node1:~# ping master -c 5
    PING master (10.1.1.11) 56(84) bytes of data.
    64 bytes from master (10.1.1.11): icmp_seq=1 ttl=64 time=0.479 ms
    64 bytes from master (10.1.1.11): icmp_seq=2 ttl=64 time=0.262 ms
    64 bytes from master (10.1.1.11): icmp_seq=3 ttl=64 time=0.249 ms
    64 bytes from master (10.1.1.11): icmp_seq=4 ttl=64 time=0.428 ms
    64 bytes from master (10.1.1.11): icmp_seq=5 ttl=64 time=0.308 ms

    --- master ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 94ms
    rtt min/avg/max/mdev = 0.249/0.345/0.479/0.092 ms

    2. Restart the k8s services

    2.1 Restart the services on all nodes

    master node

    [root@master ~]# systemctl restart kubelet docker
    [root@master ~]# kubectl get nodes
    The connection to the server 10.1.1.11:6443 was refused - did you specify the right host or port?
    [root@master ~]# kubectl get nodes
    NAME     STATUS     ROLES    AGE    VERSION
    master   Ready      master   2d2h   v1.18.2
    node1    NotReady   <none>   45m    v1.18.2

    node1 node

    root@node1:~# systemctl restart kubelet docker
    root@node1:~# docker ps -a
    CONTAINER ID        IMAGE                                               COMMAND                  CREATED              STATUS                          PORTS               NAMES
    9a8f714be9f6        0d40868643c6                                        "/usr/local/bin/kube…"   About a minute ago   Up About a minute                                   k8s_kube-proxy_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_2
    aceb8ae3a07b        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 About a minute ago   Up About a minute                                   k8s_POD_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_2
    dd608fbcc5f5        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 About a minute ago   Up About a minute                                   k8s_POD_kube-flannel-ds-amd64-mg4n5_kube-system_c7496136-fe22-438d-8267-9d69f705311e_0
    e9b073aa917e        0d40868643c6                                        "/usr/local/bin/kube…"   2 minutes ago        Exited (2) About a minute ago                       k8s_kube-proxy_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_1
    71d69c4dccc5        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 minutes ago        Exited (0) About a minute ago                       k8s_POD_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_1

    2.2 Delete the node1 node and re-join it

    2.2.1 Delete the node

    [root@master ~]# kubectl delete node node1
    node "node1" deleted

    2.2.2 Generate the join command

    [root@master ~]# kubeadm token create --print-join-command
    W0425 01:02:19.391867   62603 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    kubeadm join 10.1.1.11:6443 --token 757a06.wnp34zge3cdcqag6     --discovery-token-ca-cert-hash sha256:b1ab3a019f671de99e3af0d9fd023078ad64941a3b8cd56c2a65624f0a218642

    2.2.3 Remove all containers (on node1)

    root@node1:~# docker ps -qa | xargs docker rm -f
    5e71e6e988d8
    5c2ff662e72b
    9a8f714be9f6
    aceb8ae3a07b
    dd608fbcc5f5

    2.2.4 Re-join

    root@node1:~# kubeadm join 10.1.1.11:6443 --token 757a06.wnp34zge3cdcqag6     --discovery-token-ca-cert-hash sha256:b1ab3a019f671de99e3af0d9fd023078ad64941a3b8cd56c2a65624f0a218642
    W0425 01:03:08.461617   22573 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher

    Now let's fix these errors.

    2.3 Fix the re-join failure

    2.3.1 Delete the old kubelet config file

    root@node1:~# rm -f /etc/kubernetes/kubelet.conf

    2.3.2 Restart the k8s and docker services

    root@node1:~# systemctl restart docker kubelet
    root@node1:~#

    2.3.3 Delete the old ca file

    root@node1:~# rm -f /etc/kubernetes/pki/ca.crt
    root@node1:~#
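
    An alternative to deleting these files one by one is kubeadm reset, which wipes the node's previous join state in one go. A minimal sketch (it also clears the node's kubelet and CNI state, so only run it on a node you intend to re-join):

    # On node1: undo the previous kubeadm join, then restart the services and re-join
    kubeadm reset -f
    systemctl restart docker kubelet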

    2.4 Re-join again

    node1 node

    root@node1:~# kubeadm join 10.1.1.11:6443 --token 757a06.wnp34zge3cdcqag6     --discovery-token-ca-cert-hash sha256:b1ab3a019f671de99e3af0d9fd023078ad64941a3b8cd56c2a65624f0a218642
    W0425 01:09:45.778629   23773 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    master node

    [root@master ~]# kubectl get nodes
    NAME     STATUS   ROLES    AGE    VERSION
    master   Ready    master   2d2h   v1.18.2
    node1    Ready    <none>   38s    v1.18.2

    Done - success!

     

  • First kubeadm init run fails: ip_forward preflight error and k8s.gcr.io image pull timeouts

    The first run of kubeadm init failed; let's look at how to deal with it.

    # kubeadm init
    [init] Using Kubernetes version: v1.22.3
    [preflight] Running pre-flight checks
    	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    error execution phase preflight: [preflight] Some fatal errors occurred:
    	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    

    The docker service had not been enabled; enabling it takes care of that warning. As for the ipv4 error, echo 1 > /proc/sys/net/ipv4/ip_forward is enough; the setting controls whether IP forwarding is turned on.

    [root@VM-23-61-centos ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
    [root@VM-23-61-centos ~]# service network restart
    Restarting network (via systemctl):                        [  OK  ]
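
    Writing to /proc only lasts until the next reboot. A minimal sketch of making both fixes permanent (the sysctl.d file name here is arbitrary; any *.conf name under /etc/sysctl.d works):

    # Enable the docker service and persist IP forwarding across reboots
    systemctl enable docker.service
    cat <<EOF | sudo tee /etc/sysctl.d/ip_forward.conf
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system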
    

    Running ps -ef|grep -i docker shows the following process.

    docker pull k8s.gcr.io/kube-apiserver:v1.22.3
    

    In other words, kubeadm init is calling docker to pull the images.

    You would think the pull is in progress, but docker images shows nothing, and the docker.service logs contain more errors - almost everything is timing out. So let's try switching the registry source in the configuration.

    From the log, the following images need to be pulled.

    I1030 15:05:58.040211  181596 checks.go:855] pulling: k8s.gcr.io/kube-apiserver:v1.22.3
    I1030 15:07:13.314043  181596 checks.go:855] pulling: k8s.gcr.io/kube-controller-manager:v1.22.3
    I1030 15:08:28.598273  181596 checks.go:855] pulling: k8s.gcr.io/kube-scheduler:v1.22.3
    I1030 15:09:43.864626  181596 checks.go:855] pulling: k8s.gcr.io/kube-proxy:v1.22.3
    I1030 15:10:59.142041  181596 checks.go:855] pulling: k8s.gcr.io/pause:3.5
    I1030 15:12:14.433512  181596 checks.go:855] pulling: k8s.gcr.io/etcd:3.5.0-0
    I1030 15:13:29.702488  181596 checks.go:855] pulling: k8s.gcr.io/coredns/coredns:v1.8.4
    

    So run the commands below: pull the images first, then retag them.

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
    
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3 k8s.gcr.io/kube-apiserver:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
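
    As an alternative to the manual pull-and-retag above (a hedged sketch, not what the original log shows), kubeadm itself can pull from another repository, provided the same --image-repository is later passed to kubeadm init. Note that the coredns image path on mirrors sometimes differs from what kubeadm 1.22 expects, so coredns may still need the manual treatment:

    # Let kubeadm pull the control-plane images from the Aliyun mirror directly
    kubeadm config images pull \
      --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
      --kubernetes-version v1.22.3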
    

    After that, running kubeadm init again produces messages saying the images are already present.

    I1030 22:53:56.634982  398832 checks.go:838] using image pull policy: IfNotPresent
    I1030 22:53:56.662771  398832 checks.go:847] image exists: k8s.gcr.io/kube-apiserver:v1.22.3
    I1030 22:53:56.689402  398832 checks.go:847] image exists: k8s.gcr.io/kube-controller-manager:v1.22.3
    I1030 22:53:56.717194  398832 checks.go:847] image exists: k8s.gcr.io/kube-scheduler:v1.22.3
    I1030 22:53:56.743029  398832 checks.go:847] image exists: k8s.gcr.io/kube-proxy:v1.22.3
    I1030 22:53:56.770703  398832 checks.go:847] image exists: k8s.gcr.io/pause:3.5
    I1030 22:53:56.798320  398832 checks.go:847] image exists: k8s.gcr.io/etcd:3.5.0-0
    I1030 22:53:56.824973  398832 checks.go:847] image exists: k8s.gcr.io/coredns/coredns:v1.8.4
    