    Before you install, confirm that a StorageClass is already in place; see: https://blog.csdn.net/u011943534/article/details/100887530
    1. Prepare the images

    docker pull mysql:5.7
    docker save -o mysql-5.7.tar mysql:5.7
    

    Copy mysql-5.7.tar to the node that hosts the K8s image registry, then load, tag, and push it:

    docker load -i mysql-5.7.tar
    docker tag mysql:5.7 172.16.10.190:80/library/mysql:5.7
    docker push 172.16.10.190:80/library/mysql:5.7
    
    docker pull ist0ne/xtrabackup:1.0
    docker save -o xtrabackup_1.0.tar ist0ne/xtrabackup:1.0
    

    Copy xtrabackup_1.0.tar to the node that hosts the K8s image registry, then load, tag, and push it:

    docker load -i xtrabackup_1.0.tar
    docker tag ist0ne/xtrabackup:1.0 172.16.10.190:80/library/xtrabackup:1.0
    docker push 172.16.10.190:80/library/xtrabackup:1.0
    

    2. Create the namespace

    The manifests below all target the mysql namespace, so create that first:

    kubectl create ns mysql
    

    3. Create the ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql-cluster
      namespace: mysql
      labels:
        app: mysql-cluster
    data:
      master.cnf: |
        # Apply this config only on the master.
        [mysqld]
        log-bin
        log_bin_trust_function_creators=1
        lower_case_table_names=1
      slave.cnf: |
        # Apply this config only on slaves.
        [mysqld]
        super-read-only
        log_bin_trust_function_creators=1
    
    
    kubectl apply -f mysql-configmap.yaml
    

    4. Create the Services

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      namespace: mysql
    spec:
      ports:
      - port: 3306
        targetPort: 3306
      clusterIP: None
      selector:
        app: mysql
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-read
      namespace: mysql
    spec:
      type: NodePort
      ports:
      - port: 3306
        targetPort: 3306
        nodePort: 31045
      selector:
        app: mysql
    
    
    
    kubectl apply -f mysql-services.yaml
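
    The first Service above is headless (clusterIP: None); it is what gives each StatefulSet pod a stable DNS name such as mysql-0.mysql. Once the StatefulSet from step 5 is running, the records can be checked with a throwaway pod (a sketch; the busybox image tag is an assumption, any image with nslookup works):

    kubectl run dns-test -n mysql -i -t --rm --restart=Never \
      --image=busybox:1.28 -- nslookup mysql-0.mysql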
    

    5. Create the StatefulSet

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
      namespace: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 2
      template:
        metadata:
          labels:
            app: mysql
        spec:
          imagePullSecrets:
           - name: harborsecret
          initContainers:
          - name: init-mysql
            image: 172.16.10.190:80/library/mysql:5.7
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d/
              fi
            volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          - name: clone-mysql
            image: 172.16.10.190:80/library/xtrabackup:1.0
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on master (ordinal index 0).
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          containers:
          - name: mysql
            image: 172.16.10.190:80/library/mysql:5.7
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 500m
                memory: 1Gi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              exec:
                # Check we can execute queries over TCP (skip-networking is off).
                command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
              initialDelaySeconds: 5
              periodSeconds: 2
              timeoutSeconds: 1
          - name: xtrabackup
            image: 172.16.10.190:80/library/xtrabackup:1.0
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
    
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
              # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
                cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_slave_info xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from master. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm -f xtrabackup_binlog_info xtrabackup_slave_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
    
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
    
                echo "Initializing replication from clone position"
                mysql -h 127.0.0.1 \
                      -e "$(<change_master_to.sql.in), \
                              MASTER_HOST='mysql-0.mysql', \
                              MASTER_USER='root', \
                              MASTER_PASSWORD='', \
                              MASTER_CONNECT_RETRY=10; \
                            START SLAVE;" || exit 1
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
              fi
    
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          volumes:
          - name: conf
            emptyDir: {}
          - name: config-map
            configMap:
              name: mysql-cluster
      volumeClaimTemplates:
      - metadata:
          name: data
          annotations:
            volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    
    
    kubectl apply -f mysql-statefulset.yaml
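
    After applying, it helps to watch the pods come up in order and then confirm replication is actually running on the slave; a minimal check, assuming the pods come up as mysql-0 and mysql-1 (both grep'd columns should report Yes):

    kubectl get pods -n mysql -l app=mysql -w
    kubectl -n mysql exec mysql-1 -c mysql -- mysql -h 127.0.0.1 \
      -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running"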
    

    I. Create NFS-backed dynamic persistent storage

    1. Set up the NFS server

    yum install nfs-utils rpcbind -y
    systemctl start nfs
    systemctl start rpcbind
    systemctl enable nfs
    systemctl enable rpcbind
    mkdir -p /data/mysql/
    chmod 777 /data/mysql/
    vim /etc/exports
    /data/mysql/    *(rw,sync,no_root_squash,no_all_squash)
    
    systemctl restart rpcbind
    systemctl restart nfs
    showmount -e localhost
    
    Export list for localhost:
    /data/mysql *
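
    Optionally, verify the export is mountable from another node before wiring it into Kubernetes (a sketch; /mnt is just a test mount point, and 192.168.0.108 is the NFS server address used for NFS_SERVER further down):

    mount -t nfs 192.168.0.108:/data/mysql /mnt
    touch /mnt/nfs-write-test && rm /mnt/nfs-write-test
    umount /mnt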
    

    2. Install nfs-utils and rpcbind on every node

    yum install nfs-utils rpcbind -y
    systemctl start nfs
    systemctl start rpcbind
    systemctl enable nfs
    systemctl enable rpcbind
    

    3. Create the dynamic volume provisioner
    (1) Create the RBAC authorization

    vim  rbac.yaml
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: kube-system
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: kube-system
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
    
    kubectl apply -f rbac.yaml
    

    (2) Create the StorageClass

    vim class.yaml
    
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: asd  # must match PROVISIONER_NAME in nfs-client-provisioner.yaml
    parameters:
      archiveOnDelete: "false"
    
    kubectl apply -f class.yaml
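
    Confirm the StorageClass exists:

    kubectl get storageclass managed-nfs-storage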
    

    (3) Create the nfs-client-provisioner so that persistent volumes (PVs) are created automatically

    vim nfs-client-provisioner.yaml
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: nfs-client-provisioner
      namespace: kube-system
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: asd  # must match the provisioner field in class.yaml
                - name: NFS_SERVER
                  value: 192.168.0.108
                - name: NFS_PATH
                  value: /data/mysql
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.0.108
                path: /data/mysql
    
    kubectl apply -f nfs-client-provisioner.yaml
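
    Before moving on, it is worth checking that the provisioner pod is running and that dynamic provisioning actually works. A minimal test, using a hypothetical PVC named test-claim:

    kubectl get pods -n kube-system | grep nfs-client-provisioner

    vim test-claim.yaml

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
    spec:
      storageClassName: managed-nfs-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Mi

    kubectl apply -f test-claim.yaml
    kubectl get pvc test-claim    # STATUS should become Bound
    kubectl delete -f test-claim.yaml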
    

    II. Deploy the MySQL application

    1. Create the ConfigMap

    vim mysql-configmap.yaml
    
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql
      labels:
        app: mysql
    data:
      master.cnf: |
        # Apply this config only on the master.
        [mysqld]
        log-bin
      slave.cnf: |
        # Apply this config only on slaves.
        [mysqld]
        super-read-only
    
    kubectl apply -f mysql-configmap.yaml
    

    2. Create the Services

    vim mysql-services.yaml
    
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      ports:
      - name: mysql
        port: 3306
      clusterIP: None
      selector:
        app: mysql
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-read
      labels:
        app: mysql
    spec:
      ports:
      - name: mysql
        port: 3306
      selector:
        app: mysql
    
    kubectl apply -f mysql-services.yaml
    

    3. Create the StatefulSet

    vim mysql-statefulset.yaml
    
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
        spec:
          imagePullSecrets:
          - name: regcred
          initContainers:
          - name: init-mysql
            image: 192.168.0.107:80/heosun/mysql:5.7
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d/
              fi
            volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          - name: clone-mysql
            image: 192.168.0.107:80/heosun/xtrabackup:v1.0
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on master (ordinal index 0).
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          containers:
          - name: mysql
            image: 192.168.0.107:80/heosun/mysql:5.7
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 1Gi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              exec:
                # Check we can execute queries over TCP (skip-networking is off).
                command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
              initialDelaySeconds: 5
              periodSeconds: 2
              timeoutSeconds: 1
          - name: xtrabackup
            image: 192.168.0.107:80/heosun/xtrabackup:v1.0
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
    
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
              # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
                cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_slave_info xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from master. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm -f xtrabackup_binlog_info xtrabackup_slave_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
    
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
    
                echo "Initializing replication from clone position"
                mysql -h 127.0.0.1 \
                      -e "$(<change_master_to.sql.in), \
                              MASTER_HOST='mysql-0.mysql', \
                              MASTER_USER='root', \
                              MASTER_PASSWORD='', \
                              MASTER_CONNECT_RETRY=10; \
                            START SLAVE;" || exit 1
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
              fi
    
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          volumes:
          - name: conf
            emptyDir: {}
          - name: config-map
            configMap:
              name: mysql
      volumeClaimTemplates:
      - metadata:
          name: data
          annotations:
            volume.beta.kubernetes.io/storage-class: managed-nfs-storage
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
    
    kubectl apply -f mysql-statefulset.yaml
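
    Watch the three pods start in order (mysql-0, then mysql-1, then mysql-2) and check that each received a Bound PVC from the NFS StorageClass:

    kubectl get pods -l app=mysql -w
    kubectl get pvc    # data-mysql-0 .. data-mysql-2 should be Bound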
    

    III. Test the database

    1. Write data

    kubectl run mysql-client --image=mysql:5.7 -i --tty  --restart=Always --  mysql -h mysql-0.mysql
    
    CREATE DATABASE test;
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');
    

    2. Read the data back from a slave

    kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --  mysql -h mysql-read -e "SELECT * FROM test.messages"
    

    3. Read in a loop: the server-id keeps changing, showing that each query is served by a different instance

    kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --  bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
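
    Because cloning and replication setup are automated, growing or shrinking the slave pool is just a replica change; each new pod clones the data of its previous peer over port 3307 before joining. A sketch:

    kubectl scale statefulset mysql --replicas=5
    kubectl get pods -l app=mysql -w
    # Note: scaling back down does not delete the extra PVCs; remove them by hand if desired.
    kubectl scale statefulset mysql --replicas=3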
    

    The master/slave ConfigMap and the master/slave Services and StatefulSet have all been merged into a single file here; it should work as-is.

    Note: change the StorageClass name to match your cluster.
    Note: pass -n to specify the namespace when applying (see the example after the manifest).

    #application/mysql/mysql-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql
      labels:
        app: mysql
    data:
      master.cnf: |
        # Apply this config only on the master.
        [mysqld]
        log-bin
      slave.cnf: |
        # Apply this config only on slaves.
        [mysqld]
        super-read-only
    ---
    # application/mysql/mysql-services.yaml
    # Headless service for stable DNS entries of StatefulSet members.
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      ports:
        - name: mysql
          port: 3306
      clusterIP: None
      selector:
        app: mysql
    ---
    # Client service for connecting to any MySQL instance for reads.
    # For writes, you must instead connect to the master: mysql-0.mysql.
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-read
      labels:
        app: mysql
    spec:
      ports:
        - name: mysql
          port: 3306
      selector:
        app: mysql
    ---
    #application/mysql/mysql-statefulset.yaml
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
        spec:
          # Init containers that do some preparatory work
          initContainers:
            - name: init-mysql
              image: mysql:5.7
              # Configure a server-id for every MySQL node.
              # Ordinal 0 uses the master config; all other nodes use the slave config.
              command:
                - bash
                - "-c"
                - |
                  set -ex
                  # Generate mysql server-id from pod ordinal index.
                  [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
                  ordinal=${BASH_REMATCH[1]}
                  echo [mysqld] > /mnt/conf.d/server-id.cnf
                  # Add an offset to avoid reserved server-id=0 value.
                  echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
                  # Copy appropriate conf.d files from config-map to emptyDir.
                  if [[ $ordinal -eq 0 ]]; then
                    cp /mnt/config-map/master.cnf /mnt/conf.d/
                  else
                    cp /mnt/config-map/slave.cnf /mnt/conf.d/
                  fi
              volumeMounts:
                - name: conf
                  mountPath: /mnt/conf.d
                - name: config-map
                  mountPath: /mnt/config-map
            - name: clone-mysql
              image: yizhiyong/xtrabackup:latest
              # On every node except the master (ordinal 0), clone the data of the previous node
              command:
                - bash
                - "-c"
                - |
                  set -ex
                  # Skip the clone if data already exists.
                  [[ -d /var/lib/mysql/mysql ]] && exit 0
                  # Skip the clone on master (ordinal index 0).
                  [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
                  ordinal=${BASH_REMATCH[1]}
                  [[ $ordinal -eq 0 ]] && exit 0
                  # Clone data from previous peer.
                  ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
                  # Prepare the backup.
                  xtrabackup --prepare --target-dir=/var/lib/mysql
              volumeMounts:
                - name: data
                  mountPath: /var/lib/mysql
                  subPath: mysql
                - name: conf
                  mountPath: /etc/mysql/conf.d
          containers:
            - name: mysql
              image: mysql:5.7
              # Allow passwordless root login
              env:
                - name: MYSQL_ALLOW_EMPTY_PASSWORD
                  value: "1"
              ports:
                - name: mysql
                  containerPort: 3306
              volumeMounts:
                - name: data
                  mountPath: /var/lib/mysql
                  subPath: mysql
                - name: conf
                  mountPath: /etc/mysql/conf.d
              resources:
                # The official docs request 500m CPU and 1Gi memory; on a small local
                # cluster that caused "1 Insufficient cpu, 1 Insufficient memory"
                # scheduling errors, so the requests are reduced here.
                requests:
                  # m means milli-CPU: 100m is 0.1 CPU
                  cpu: 100m
                  # Mi is mebibytes: roughly 100 MB of memory
                  memory: 100Mi
              livenessProbe:
                # Probe liveness with "mysqladmin ping":
                # start 30s after the pod starts, check every 10s, 5s timeout
                exec:
                  command: [ "mysqladmin", "ping" ]
                initialDelaySeconds: 30
                periodSeconds: 10
                timeoutSeconds: 5
              readinessProbe:
                # Probe service readiness: start after 5s, check every 2s, 1s timeout
                exec:
                  # Check we can execute queries over TCP (skip-networking is off).
                  command: [ "mysql", "-h", "127.0.0.1", "-e", "SELECT 1" ]
                initialDelaySeconds: 5
                periodSeconds: 2
                timeoutSeconds: 1
            - name: xtrabackup
              image: yizhiyong/xtrabackup:latest
              ports:
                - name: xtrabackup
                  containerPort: 3307
              # Verify and parse the cloned backup files, then start replication
              command:
                - bash
                - "-c"
                - |
                  set -ex
                  cd /var/lib/mysql
                  # Determine binlog position of cloned data, if any.
                  if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                    # XtraBackup already generated a partial "CHANGE MASTER TO" query
                    # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
                    cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                    # Ignore xtrabackup_binlog_info in this case (it's useless).
                    rm -f xtrabackup_slave_info xtrabackup_binlog_info
                  elif [[ -f xtrabackup_binlog_info ]]; then
                    # We're cloning directly from master. Parse binlog position.
                    [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                    rm -f xtrabackup_binlog_info xtrabackup_slave_info
                    echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                          MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
                  fi
                  # Check if we need to complete a clone by starting replication.
                  if [[ -f change_master_to.sql.in ]]; then
                    echo "Waiting for mysqld to be ready (accepting connections)"
                    until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
                    echo "Initializing replication from clone position"
                    mysql -h 127.0.0.1 \
                          -e "$(<change_master_to.sql.in), \
                                  MASTER_HOST='mysql-0.mysql', \
                                  MASTER_USER='root', \
                                  MASTER_PASSWORD='', \
                                  MASTER_CONNECT_RETRY=10; \
                                START SLAVE;" || exit 1
                    # In case of container restart, attempt this at-most-once.
                    mv change_master_to.sql.in change_master_to.sql.orig
                  fi
                  # Start a server to send backups when requested by peers.
                  exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                    "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
              volumeMounts:
                - name: data
                  mountPath: /var/lib/mysql
                  subPath: mysql
                - name: conf
                  mountPath: /etc/mysql/conf.d
              resources:
                requests:
                  cpu: 100m
                  memory: 100Mi
          volumes:
            - name: conf
              emptyDir: { }
            - name: config-map
              configMap:
                name: mysql
      # PVC template for each pod's data volume
      volumeClaimTemplates:
        - metadata:
            name: data
            annotations:
              volume.beta.kubernetes.io/storage-class: data-nfs-storage
          spec:
            storageClassName: data-nfs-storage
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 1Gi
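
    A sketch of applying the combined manifest, assuming it was saved as mysql-all.yaml and the target namespace already exists (both names here are placeholders; remember to change data-nfs-storage to your own StorageClass first):

    kubectl create ns mysql-test
    kubectl apply -f mysql-all.yaml -n mysql-test
    kubectl get pods -n mysql-test -l app=mysql -w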
    

    Preface

    The previous article, "Quickly deploy and use the Apollo configuration center with Docker", gave a quick introduction to basic Apollo usage; see that article for the background.

    In production, however, the configuration center usually needs to be highly available, which means deploying it with Kubernetes. The official Apollo documentation covers Kubernetes deployment (https://github.com/ctripcorp/apollo/tree/master/scripts/apollo-on-kubernetes), but following it you will hit some pitfalls: issues met in the actual deployment, such as image building, multi-instance support for the portal service, and Ingress creation, are not covered there. All of them were solved during this deployment, and hopefully this helps anyone who has not deployed it before.

    Pages after a successful deployment

    After deployment completes, this is the Apollo login page:

    [Screenshot: login page]

    Log in with username/password apollo/admin to reach the portal:

    [Screenshot: the portal after deployment]

    The Kubernetes dashboard view of the deployment; this article deploys three environments: dev, fat, and pro.

    [Screenshot: the Kubernetes dashboard after deployment]

    Deployment process

    This article uses the latest Apollo release at the time of writing, 1.7.1, and all of the builds below are based on that version.

    I. Build the images

    First download the source, either from GitHub (https://github.com/ctripcorp/apollo) or, faster from inside China, from Gitee (https://gitee.com/nobodyiam/apollo). Then go into the directory

    /scripts/apollo-on-kubernetes

    to build the images.

    1. Install from the pre-built packages: fetch the Apollo archives

    You can download them from the official releases, but GitHub can be very slow, so the Baidu Cloud mirror below may be easier.

    A. If the download is slow, use the Baidu Cloud mirror

    Link: https://pan.baidu.com/s/1eLL2ocYE1uzXcvzO2Y3dNg

    Extraction code: nfvm

    B. Or download the pre-built Java packages from https://github.com/ctripcorp/apollo/releases

    (1) In scripts/apollo-on-kubernetes/, run
    wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-portal-1.7.1-github.zip

    (2) In scripts/apollo-on-kubernetes/, run
    wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-adminservice-1.7.1-github.zip

    (3) In scripts/apollo-on-kubernetes/, run
    wget https://github.com/ctripcorp/apollo/releases/download/v1.7.1/apollo-configservice-1.7.1-github.zip

    2. Unpack the archives to get the application jars

    Do not forget to rename the jars, dropping the version number.

    Unzip apollo-portal-1.7.1-github.zip, take apollo-portal-1.7.1.jar, rename it to apollo-portal.jar, and put it in scripts/apollo-on-kubernetes/apollo-portal-server.

    Unzip apollo-adminservice-1.7.1-github.zip, take apollo-adminservice-1.7.1.jar, rename it to apollo-adminservice.jar, and put it in scripts/apollo-on-kubernetes/apollo-admin-server.

    Unzip apollo-configservice-1.7.1-github.zip, take apollo-configservice-1.7.1.jar, rename it to apollo-configservice.jar, and put it in scripts/apollo-on-kubernetes/apollo-config-server.

    3. Build the images

    Note: many files reference the namespace at once, so decide on it before building; this article uses zizai.

    Four images need to be built: alpine-bash-3.8-image, apollo-config-server, apollo-admin-server, and apollo-portal-server. Each image's Dockerfile lives in the corresponding directory:

    [Screenshot: the four image directories under scripts/apollo-on-kubernetes]

    Run each build from the directory containing its Dockerfile.

    For example, in scripts/apollo-on-kubernetes/apollo-config-server run:

    docker build -t apollo-config-server:v1.7.1 .

    Note: four images in total. The overall flow is: build the image, tag it, then push it to the registry.

    The full set of commands, run in the corresponding directories:

    The alpine-bash-3.8 image:

    docker build -t alpine-bash:3.8 .
    docker tag alpine-bash:3.8 hub.thinkinpower.net/zizai/alpine-bash:3.8
    docker push hub.thinkinpower.net/zizai/alpine-bash:3.8

    The Apollo images:

    docker build -t apollo-config-server:v1.7.1 .
    docker tag apollo-config-server:v1.7.1 hub.xx.net/zizai/apollo-config-server:v1.7.1
    docker push hub.xx.net/zizai/apollo-config-server:v1.7.1

    docker build -t apollo-admin-server:v1.7.1 .
    docker tag apollo-admin-server:v1.7.1 hub.xx.net/zizai/apollo-admin-server:v1.7.1
    docker push hub.xx.net/zizai/apollo-admin-server:v1.7.1

    docker build -t apollo-portal-server:v1.7.1 .
    docker tag apollo-portal-server:v1.7.1 hub.thinkinpower.net/zizai/apollo-portal-server:v1.7.1
    docker push hub.thinkinpower.net/zizai/apollo-portal-server:v1.7.1

    II. Deploy Apollo to Kubernetes

    1. Create the databases

    A note first:

    In real production use, disks backed by distributed storage make performance problems very visible for an IO-intensive application like MySQL. So in practice MySQL is generally not managed inside Kubernetes but deployed independently on dedicated servers, while stateless applications such as web services still run in Kubernetes. A web service inside the cluster can then reach a database outside it in two ways: connect directly to the physical server's IP, or use a Kubernetes Endpoints object to map the external server to an in-cluster Service.

    This deployment uses an external MySQL as the database; MySQL itself is not deployed into Kubernetes.

    Run the scripts under scripts/apollo-on-kubernetes/db. The Apollo server side needs two databases, ApolloPortalDB and ApolloConfigDB: one config database script per configured environment, plus one portal database script. The scripts are in the repo at https://github.com/ctripcorp/apollo/tree/master/scripts/apollo-on-kubernetes/db. If Apollo runs with 4 environments (dev, test-alpha, test-beta, prod), import all of the files under scripts/apollo-on-kubernetes/db into MySQL.
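
    A sketch of importing them with the mysql client, assuming the scripts were downloaded locally and the external MySQL server accepts the admin account configured in the manifests below (the file names are placeholders for the scripts in scripts/apollo-on-kubernetes/db):

    mysql -h 10.29.254.48 -u admin -p < apolloportaldb.sql
    mysql -h 10.29.254.48 -u admin -p < apolloconfigdb.sql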


    2. Deploy the Kubernetes YAML files

    The official YAML can be downloaded and modified. Because this deployment uses its own registry images and was tested several times, the main changes are:

    (1) Remove the privileged security setting from the manifests:

    securityContext:
      privileged: true

    (2) Add the registry pull secret:

    imagePullSecrets:
      - name: registry-harbor

    (3) Set the image pull policy so the image is pulled every time:

    imagePullPolicy: Always

    (4) Add the MySQL connection settings.

    Only three environments are used here; the files that need changes are shown below:

    [Screenshot: the files that need changes]

    Because the changes are extensive, every file is listed below, using the dev environment (apollo-env-dev) as the example; the other environments only need the corresponding substitutions. When applying, it is best to run the files in the order (3), (2), (1).

    (1) service-apollo-admin-server-dev.yaml

    ---
    # configmap for apollo-admin-server-dev
    kind: ConfigMap
    apiVersion: v1
    metadata:
      namespace: zizai
      name: configmap-apollo-admin-server-dev
    data:
      application-github.properties: |
        spring.datasource.url = jdbc:mysql://service-mysql-for-apollo-dev-env.zizai:3306/DevApolloConfigDB?characterEncoding=utf8
        spring.datasource.username = admin
        spring.datasource.password = mysql-admin
        eureka.service.url = http://statefulset-apollo-config-server-dev-0.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-1.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-2.service-apollo-meta-server-dev:8080/eureka/
    ---
    kind: Service
    apiVersion: v1
    metadata:
      namespace: zizai
      name: service-apollo-admin-server-dev
      labels:
        app: service-apollo-admin-server-dev
    spec:
      ports:
        - protocol: TCP
          port: 8090
          targetPort: 8090
      selector:
        app: pod-apollo-admin-server-dev
      type: ClusterIP
      sessionAffinity: ClientIP
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      namespace: zizai
      name: deployment-apollo-admin-server-dev
      labels:
        app: deployment-apollo-admin-server-dev
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: pod-apollo-admin-server-dev
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: pod-apollo-admin-server-dev
        spec:
          imagePullSecrets:  # docker registry credentials; remove if not needed
            - name: registry-harbor
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    labelSelector:
                      matchExpressions:
                        - key: app
                          operator: In
                          values:
                            - pod-apollo-admin-server-dev
                    topologyKey: kubernetes.io/hostname
          volumes:
            - name: volume-configmap-apollo-admin-server-dev
              configMap:
                name: configmap-apollo-admin-server-dev
                items:
                  - key: application-github.properties
                    path: application-github.properties
          initContainers:
            - image: hub.thinkinpower.net/zizai/alpine-bash:3.8
              imagePullPolicy: Always
              name: check-service-apollo-config-server-dev
              command: ['bash', '-c', "curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1 --retry-max-time 120 service-apollo-config-server-dev.zizai:8080"]
          containers:
            - image: hub.thinkinpower.net/zizai/apollo-admin-server:v1.7.1
              imagePullPolicy: Always
              name: container-apollo-admin-server-dev
              ports:
                - protocol: TCP
                  containerPort: 8090
              volumeMounts:
                - name: volume-configmap-apollo-admin-server-dev
                  mountPath: /apollo-admin-server/config/application-github.properties
                  subPath: application-github.properties
              env:
                - name: APOLLO_ADMIN_SERVICE_NAME
                  value: "service-apollo-admin-server-dev.zizai"
              readinessProbe:
                tcpSocket:
                  port: 8090
                initialDelaySeconds: 10
                periodSeconds: 5
              livenessProbe:
                tcpSocket:
                  port: 8090
                initialDelaySeconds: 120
                periodSeconds: 10
          dnsPolicy: ClusterFirst
          restartPolicy: Always

    (2) service-apollo-config-server-dev.yaml

    ---
    # configmap for apollo-config-server-dev
    kind: ConfigMap
    apiVersion: v1
    metadata:
      namespace: zizai
      name: configmap-apollo-config-server-dev
    data:
      application-github.properties: |
        spring.datasource.url = jdbc:mysql://service-mysql-for-apollo-dev-env.zizai:3306/DevApolloConfigDB?characterEncoding=utf8
        spring.datasource.username = admin
        spring.datasource.password = mysql-admin
        eureka.service.url = http://statefulset-apollo-config-server-dev-0.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-1.service-apollo-meta-server-dev:8080/eureka/,http://statefulset-apollo-config-server-dev-2.service-apollo-meta-server-dev:8080/eureka/
    ---
    kind: Service
    apiVersion: v1
    metadata:
      namespace: zizai
      name: service-apollo-meta-server-dev
      labels:
        app: service-apollo-meta-server-dev
    spec:
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
      selector:
        app: pod-apollo-config-server-dev
      type: ClusterIP
      clusterIP: None
      sessionAffinity: ClientIP
    ---
    kind: Service
    apiVersion: v1
    metadata:
      namespace: zizai
      name: service-apollo-config-server-dev
      labels:
        app: service-apollo-config-server-dev
    spec:
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
          nodePort: 30002
      selector:
        app: pod-apollo-config-server-dev
      type: NodePort
      sessionAffinity: ClientIP
    ---
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      namespace: zizai
      name: statefulset-apollo-config-server-dev
      labels:
        app: statefulset-apollo-config-server-dev
    spec:
      serviceName: service-apollo-meta-server-dev
      replicas: 3
      selector:
        matchLabels:
          app: pod-apollo-config-server-dev
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: pod-apollo-config-server-dev
        spec:
          imagePullSecrets:  # docker registry credentials; remove if not needed
            - name: registry-harbor
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 100
                  podAffinityTerm:
                    labelSelector:
                      matchExpressions:
                        - key: app
                          operator: In
                          values:
                            - pod-apollo-config-server-dev
                    topologyKey: kubernetes.io/hostname
          volumes:
            - name: volume-configmap-apollo-config-server-dev
              configMap:
                name: configmap-apollo-config-server-dev
                items:
                  - key: application-github.properties
                    path: application-github.properties
          containers:
            - image: hub.thinkinpower.net/zizai/apollo-config-server:v1.7.1
              imagePullPolicy: Always
              name: container-apollo-config-server-dev
              ports:
                - protocol: TCP
                  containerPort: 8080
              volumeMounts:
                - name: volume-configmap-apollo-config-server-dev
                  mountPath: /apollo-config-server/config/application-github.properties
                  subPath: application-github.properties
              env:
                - name: APOLLO_CONFIG_SERVICE_NAME
                  value: "service-apollo-config-server-dev.zizai"
              readinessProbe:
                tcpSocket:
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 5
              livenessProbe:
                tcpSocket:
                  port: 8080
                initialDelaySeconds: 120
                periodSeconds: 10
          dnsPolicy: ClusterFirst
          restartPolicy: Always

    (3) service-mysql-for-apollo-dev-env.yaml

    ---
    # Service fronting the external MySQL server
    kind: Service
    apiVersion: v1
    metadata:
      namespace: zizai
      name: service-mysql-for-apollo-dev-env
      labels:
        app: service-mysql-for-apollo-dev-env
    spec:
      ports:
        - protocol: TCP
          port: 3306
          targetPort: 3306
      type: ClusterIP
      sessionAffinity: None
    ---
    kind: Endpoints
    apiVersion: v1
    metadata:
      namespace: zizai
      name: service-mysql-for-apollo-dev-env
    subsets:
      - addresses:
          - ip: 10.29.254.48
        ports:
          - protocol: TCP
            port: 3306
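
    After applying it, a quick check that the Service maps to the external address:

    kubectl get endpoints service-mysql-for-apollo-dev-env -n zizai    # should list 10.29.254.48:3306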

    3. Add an Ingress

    The official example exposes the portal through a NodePort, but in practice an Ingress is used to reach the portal.

    Note: because the portal is deployed with multiple instances, the Ingress must keep sessions sticky, otherwise login fails and the portal page cannot be reached. Concretely:

    metadata:
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"  # keep sessions sticky
        nginx.ingress.kubernetes.io/session-cookie-name: "route"
        nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
        nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"

    A complete Ingress example:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: zizai-apollo-portal
      namespace: zizai
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"  # keep sessions sticky
        nginx.ingress.kubernetes.io/session-cookie-name: "route"
        nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
        nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    spec:
      rules:
        - host: zizai-apollo-portal.test.thinkinpower.net
          http:
            paths:
              - path: /
                backend:
                  serviceName: service-apollo-portal-server
                  servicePort: 8070

    4. Configure nginx

    Point nginx at the Ingress.

    nginx config file: zizai-apollo-portal.test.thinkinpower.net.conf

    server {
        listen 80;
        server_name zizai-apollo-portal.test.thinkinpower.net;
        access_log /data/logs/nginx/zizai-apollo-portal.test.thinkinpower.net.access.log main;
        error_log /data/logs/nginx/zizai-apollo-portal.test.thinkinpower.net.error.log;
        root /data/webapps/zizai-apollo-portal.test.thinkinpower.net/test/static;
        index index.html index.htm;
        client_max_body_size 50m;

        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://kubernetes;  # upstream pointing at the cluster
        }
    }

    The portal can then be reached via the domain http://zizai-apollo-portal.test.thinkinpower.net.
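
    A quick smoke test from a machine whose DNS (or hosts file) points that name at the nginx server; an HTTP response from the portal confirms the nginx -> Ingress -> portal chain works:

    curl -I http://zizai-apollo-portal.test.thinkinpower.net/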

    (1) The Deployments created:

    [Screenshot: Deployments]

    (2) The ReplicaSets created:

    [Screenshot: ReplicaSets]

    (3) The Services created:

    [Screenshot: Services]

    (4) The Ingress created:

    [Screenshot: Ingress]

    (5) The ConfigMaps created:

    [Screenshot: ConfigMaps]

    III. Basic usage

    This article only shows basic usage; a later article will cover it in detail. Leave a comment if you need it.

    1. Create a project

    [Screenshot: creating a project]

    2. Pick an environment and add a timeout variable

    [Screenshot: adding the variable]

    3. If environments are still being set up, refreshing the page shows an "add missing environments" prompt

    [Screenshot: the "add missing environments" prompt]

    And that is a highly available Apollo deployed on Kubernetes. Suggestions are welcome in the comments.
