  • Adding nodes to a K8s cluster deployed with kubeadm
    2022-03-30 19:46:42

    Prepare the environment on the new node

    # Initial setup
    Disable the firewall and SELinux
    Disable the swap partition
    Sync the clock with the master
    Set the hostname
    
    # Install Docker
    yum install -y yum-utils device-mapper-persistent-data lvm2 git
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum install -y docker-ce
    systemctl start docker && systemctl enable docker
    
    # Pull the images
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.2
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.2
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.2
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
    
    # Retag the images to the names kubeadm expects
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
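The seven pull/tag pairs above can be folded into one loop. A minimal sketch (`gen_pull_retag` is just a name chosen here, not part of any tool); it only prints the commands so they can be reviewed before piping them to `sh`:

```shell
# Sketch: generate the pull/retag commands for the v1.20.2 image list above.
# Prints the commands; review them, then run: gen_pull_retag | sh
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
gen_pull_retag() {
    for img in kube-controller-manager:v1.20.2 kube-proxy:v1.20.2 \
               kube-apiserver:v1.20.2 kube-scheduler:v1.20.2 \
               coredns:1.7.0 etcd:3.4.13-0 pause:3.2; do
        echo "docker pull $MIRROR/$img"
        echo "docker tag $MIRROR/$img k8s.gcr.io/$img"
    done
}
gen_pull_retag
```

Keeping the image list in one place also makes version bumps a one-line change.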
    
    # Add the yum repo and install kubelet/kubeadm/kubectl
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    yum install -y kubelet-1.20.2-0.x86_64 kubeadm-1.20.2-0.x86_64 kubectl-1.20.2-0.x86_64 ipvsadm
    
    # Load the IPVS-related kernel modules
    # They must be reloaded after every reboot (add them to /etc/rc.local to load at boot)
    modprobe ip_vs
    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
    modprobe nf_conntrack_ipv4
    
    # Edit the file and make it executable so it runs at boot
    # vim /etc/rc.local
    # chmod +x /etc/rc.local
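The manual vim step can be scripted; a sketch that appends the modprobe lines idempotently (`RC_LOCAL` points at a demo file here so the snippet is safe to try; use `/etc/rc.local` on a real node):

```shell
# Sketch: idempotently persist the IPVS module loads across reboots.
# RC_LOCAL is a demo path for illustration; set it to /etc/rc.local on a real node.
RC_LOCAL=./rc.local.demo
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    # only append the line if it is not already present
    grep -qs "^modprobe $mod\$" "$RC_LOCAL" || echo "modprobe $mod" >> "$RC_LOCAL"
done
chmod +x "$RC_LOCAL"
```

Re-running it leaves the file unchanged, which matters for scripts that run on every provisioning pass.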
    
    # Configure forwarding parameters; the join may fail without them
    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness=0
    EOF
    sysctl --system
    
    # Detect Docker's cgroup driver (kubelet must use the same one)
    DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | awk 'NR==1 {print $3}')
    
    # Point kubelet at the same cgroup driver
    cat >/etc/sysconfig/kubelet<<EOF
    KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.2"
    EOF
    
    systemctl daemon-reload
    systemctl enable kubelet && systemctl restart kubelet
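A cgroup-driver mismatch between Docker and kubelet is a common reason kubelet fails to start after this step. A sketch of a check; `check_cgroup_driver` is a hypothetical helper, and the drivers are passed as arguments so it can be exercised without a live daemon (on a real node the first argument would come from `docker info`):

```shell
# Sketch: confirm kubelet's --cgroup-driver flag matches Docker's driver.
check_cgroup_driver() {
    docker_driver=$1   # e.g. systemd or cgroupfs
    kubelet_args=$2    # e.g. the contents of /etc/sysconfig/kubelet
    case "$kubelet_args" in
        *"--cgroup-driver=$docker_driver"*) echo "match: $docker_driver" ;;
        *) echo "MISMATCH: docker uses $docker_driver" ;;
    esac
}
check_cgroup_driver systemd '--cgroup-driver=systemd'   # -> match: systemd
```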
    
    
    

    Generate the join command on the master

    # Generate a new token and print the full join command
    kubeadm token create --print-join-command
    
    # Re-upload the control-plane certificates (needed only when adding a master)
    kubeadm init phase upload-certs --upload-certs
    

    Join the new node to the cluster

    kubeadm join 192.168.96.165:6443 --token vtxxim.sy0u93t20ixpg4sq   --discovery-token-ca-cert-hash sha256:308629a4406bfca94585345d0d15c00d95a9876bf772386cb3d54e9482af6fea
    
    # A master (control-plane) node can be added the same way:
    kubeadm join apiserver.cluster.local:6443 --token sc2ty3.ej38ceisi5lmt9ad  --discovery-token-ca-cert-hash sha256:42bf6e526b795854b61b7c0ca875f9a8292b989d44f0f51a4d8dec450711b89e   --control-plane --certificate-key 0c00611d30adffe68126477aa33613604c4a423ae2c06e125fe55f838a88b45f
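If the `--discovery-token-ca-cert-hash` value has been lost, it can be recomputed from the cluster CA certificate; this is the openssl pipeline the Kubernetes documentation gives for exactly that, wrapped here in a small function (`ca_cert_hash` is just a name used in this sketch). On the master the certificate is `/etc/kubernetes/pki/ca.crt`:

```shell
# Sketch: recompute the sha256 value used by --discovery-token-ca-cert-hash.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
}
# On the master:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```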
    
    

    Remove a node

    # Drain the pods off the node
    kubectl drain k8s-node3 --delete-local-data --force --ignore-daemonsets
    
    # Check node status; the drained node is marked unschedulable
    kubectl get nodes
    
    # Delete the node
    kubectl delete node k8s-node3
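The three removal steps can be wrapped so they are reviewable before anything is touched. A sketch under the assumption that a dry-run mode is wanted; `run` and `remove_node` are illustrative names, and with `DRY_RUN=1` the helper prints each command instead of executing it:

```shell
# Sketch: drain, inspect, then delete a node; DRY_RUN=1 prints the commands only.
NODE=${NODE:-k8s-node3}
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }
remove_node() {
    run kubectl drain "$NODE" --delete-local-data --force --ignore-daemonsets
    run kubectl get nodes
    run kubectl delete node "$NODE"
}
DRY_RUN=1 remove_node     # unset DRY_RUN to actually remove the node
```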
    
    
  • Run kubeadm token create --print-join-command; add --ttl 0 for a token that never expires

    Run:

     kubeadm token create --print-join-command
    

    Add the --ttl 0 flag so the token never expires:

     kubeadm token create --ttl 0 --print-join-command


  • Adding a new node to a K8s cluster

    2021-08-02 17:14:27

    Join a new node to a kubeadm-installed K8s cluster. New node hostname: node1

    1. Initialize the new node1 server

    # Disable the firewall
    systemctl stop firewalld 
    systemctl disable firewalld
     
    # Disable SELinux
    sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
    setenforce 0  # temporary
     
    # Disable swap
    swapoff -a  # temporary
    sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
     
    # Set the hostname according to the plan (node1)
    # hostnamectl set-hostname <hostname>
    hostnamectl set-hostname node1
    
     
    # Edit /etc/hosts on the new node1, and sync the same entries to the existing master and node
    cat >> /etc/hosts << EOF
    192.168.136.131 master
    192.168.136.132 node
    192.168.136.136 node1
    EOF
    
    # Set up passwordless SSH; run on the master
    ssh-copy-id node1
     
    # Pass bridged IPv4 traffic to the iptables chains
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system  # apply
    # Time synchronization
    yum install ntpdate -y
    ntpdate time.windows.com
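The hosts entries above must be identical on every machine, and appending them blindly duplicates lines on a re-run. A sketch of an idempotent append; `hosts.demo` is a demo path used here so the snippet is safe to try, with `/etc/hosts` being the real target:

```shell
# Sketch: append each cluster host entry only if the hostname is not already present.
HOSTS_FILE=./hosts.demo
while read -r ip name; do
    grep -qs " $name\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done << 'EOF'
192.168.136.131 master
192.168.136.132 node
192.168.136.136 node1
EOF
```

Running it twice leaves exactly the three lines in place.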

    2. Install Docker, kubelet, and kubeadm

    2.1 Install Docker

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    yum -y install docker-ce
    systemctl enable docker && systemctl start docker

    Configure a registry mirror to speed up image downloads:

    cat > /etc/docker/daemon.json << EOF
    {
      "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
    }
    EOF

    Restart Docker:

    systemctl daemon-reload
    systemctl restart docker
    docker info

    2.2 Add the Aliyun YUM repository

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    2.3 Install kubeadm and kubelet

    Pin the versions to match the existing cluster:

    yum install -y kubelet-1.20.4 kubeadm-1.20.4 
    systemctl enable kubelet

    3. Load the calico images

    Link: https://pan.baidu.com/s/1FCCob6eL_Iy9i_DpV_2QXA
    Extraction code: y2pv

    Load the image archive into Docker:

    docker load -i k8s-images-v1.20.4.tar.gz

    List the Docker images:

    docker images

    4. Join node1 to the cluster

    4.1 Get the join command; run on the master

    kubeadm token create --print-join-command

    4.2 Run the join command on node1

    kubeadm join 192.168.136.131:6443 --token 7419pd.7f1t458gunw31062     --discovery-token-ca-cert-hash sha256:e2426645cc62d3ef06e722d9cc464ecdb32bdb840a3a88d6d8989eb8cf5b1835

    4.3 Check the node list on the master

    kubectl get nodes

    A status of Ready means the node was added successfully.

  • Re-adding a node to a K8s cluster

    Reposted from https://www.cnblogs.com/ztxd/articles/13192064.html

    1. List the nodes and delete the target node (on the master)

    [root@k8s01 ~]# kubectl  get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    k8s01   Ready    master   40d   v1.15.3
    k8s02   Ready    <none>   40d   v1.15.3
    k8s03   Ready    <none>   40d   v1.15.3
    
    [root@k8s01 ~]# kubectl  delete nodes k8s03
    node "k8s03" deleted
    [root@k8s01 ~]# kubectl  get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    k8s01   Ready    master   40d   v1.15.3
    k8s02   Ready    <none>   40d   v1.15.3
    [root@k8s01 ~]#

    2. Clear the cluster state on the removed node

    [root@k8s03 ~]# kubeadm reset
    [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
    [reset] Are you sure you want to proceed? [y/N]: y
    [preflight] Running pre-flight checks
    W1017 15:43:41.491522    3010 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
    [reset] No etcd config found. Assuming external etcd
    [reset] Please, manually reset etcd to prevent further issues
    [reset] Stopping the kubelet service
    [reset] Unmounting mounted directories in "/var/lib/kubelet"
    [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    [reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
    
    The reset process does not reset or clean up iptables rules or IPVS tables.
    If you wish to reset iptables, you must do so manually.
    For example:
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    
    If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    to reset your system's IPVS tables.
    
    The reset process does not clean your kubeconfig files and you must remove them manually.
    Please, check the contents of the $HOME/.kube/config file.
    [root@k8s03 ~]# 

    Reset the network state:

    systemctl stop kubelet
    systemctl stop docker
    rm -rf /var/lib/cni/
    rm -rf /var/lib/kubelet/*
    rm -rf /etc/cni/
    ifconfig cni0 down
    ifconfig flannel.1 down
    ifconfig docker0 down
    ip link delete cni0
    ip link delete flannel.1
    systemctl start docker
    systemctl start kubelet
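The ifconfig/ip commands above error out on hosts where cni0 or flannel.1 was never created. A sketch of a guard; `safe_del_link` is a hypothetical helper that deletes a link only when it exists:

```shell
# Sketch: delete a network link only if it is present, so the reset
# sequence does not abort on a half-configured host.
safe_del_link() {
    if ip link show "$1" > /dev/null 2>&1; then
        ip link delete "$1"
    else
        echo "link $1 not present, skipping"
    fi
}
safe_del_link cni0
safe_del_link flannel.1
```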

    3. Generate a new join token on the master

    [root@k8s01 ~]# kubeadm token create --print-join-command
    kubeadm join 192.168.54.128:6443 --token mg4o13.4ilr1oi605tj850w     --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3
    [root@k8s01 ~]#

    4. Re-join the node to the K8s cluster

    [root@k8s03 ~]# kubeadm join 192.168.54.128:6443 --token mg4o13.4ilr1oi605tj850w     --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3
    [preflight] Running pre-flight checks
     [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
     [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@k8s03 ~]# 

    5. Check the whole cluster status

    [root@k8s01 ~]# kubectl  get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    k8s01   Ready    master   40d   v1.15.3
    k8s02   Ready    <none>   40d   v1.15.3
    k8s03   Ready    <none>   41s   v1.15.3
    [root@k8s01 ~]#

    6. Provisioning a node from scratch

    # Disable the firewall
    systemctl stop firewalld
    systemctl disable firewalld
    
    # Add iptables forwarding rules at boot
    echo "sleep 30 && /sbin/iptables -P FORWARD ACCEPT" >> /etc/rc.local
    echo "modprobe br_netfilter" >> /etc/rc.local
    chmod +x /etc/rc.local
    
    # Kernel parameters (heredoc instead of the original vim step, so it is scriptable)
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    vm.swappiness=0
    EOF
    
    
    # Disable swap
    swapoff -a && sysctl -w vm.swappiness=0
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    
    
    # Install the appropriate package versions (see the K8s package downloads)
    yum install -y kubernetes-cni kubelet kubectl kubeadm cri-tools screen conntrack ntp ipvsadm ipset iptables curl sysstat libseccomp wget vim net-tools git
    
    mkdir -p /etc/cni/net.d
    
    cat > /etc/cni/net.d/10-flannel.conflist << EOF
    {
        "name": "cbr0",
        "plugins": [
            {
                "type": "flannel",
                "delegate": {
                    "hairpinMode": true,
                    "isDefaultGateway": true
                }
            },
            {
                "type": "portmap",
                "capabilities": {
                    "portMappings": true
                }
            }
        ]
    }
    EOF
    
    
    # Start kubelet
    systemctl start kubelet
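Kubelet silently ignores a malformed CNI conflist, so it is worth validating the file as JSON before starting kubelet. A sketch, assuming python3 is available as the validator and writing to a demo path for illustration (the real target is /etc/cni/net.d/10-flannel.conflist):

```shell
# Sketch: write the flannel conflist to a demo path and check it parses as JSON.
CONF=./10-flannel.conflist
cat > "$CONF" << 'EOF'
{
  "name": "cbr0",
  "plugins": [
    { "type": "flannel",
      "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap",
      "capabilities": { "portMappings": true } }
  ]
}
EOF
python3 -m json.tool "$CONF" > /dev/null && echo "$CONF: valid JSON"
```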
    
    
    
    
    
    
