
    1. Harbor overview

    Harbor is an open-source Registry server project designed by VMware's China team for enterprise users. It includes features that enterprises need such as role-based access control (RBAC), LDAP, auditing, a management UI, self-registration, and HA.

    • Role-based access control - Users and Docker image repositories are organized and managed through "projects"; a user can have different permissions on multiple image repositories within the same namespace (project).

    • Image replication - Images can be replicated (synchronized) between multiple Registry instances, which is especially suitable for load balancing, high availability, hybrid-cloud, and multi-cloud scenarios.

    • Graphical user interface - Users can browse and search the Docker image repositories and manage projects and namespaces through a browser.

    • AD/LDAP support - Harbor can integrate with an enterprise's existing AD/LDAP for authentication and authorization.

    • Audit management - All operations against the image repositories can be recorded and traced for auditing.

    • Internationalization - Localized versions already exist in English, Chinese, German, Japanese, and Russian; more languages will be added.

    • RESTful API - A RESTful API gives administrators more control over Harbor and makes integration with other management software easier.
      Overview adapted from: https://www.oschina.net/p/harbor

    2. Deployment environment

    node2    172.25.7.2    swarm node
    node3    172.25.7.3    swarm node
    node4    172.25.7.4    swarm node
    node5    172.25.7.5    has harbor-offline-installer-v1.8.2.tgz

    The swarm, seen from node2, has three nodes.
    node5 has docker-compose installed.
    On node5, download harbor-offline-installer-v1.8.2.tgz.
    Generate a certificate:

    [root@node5 docker]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout reg.westos.org.key -x509 -days 365 -out reg.westos.org.crt
    

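The same certificate can be generated non-interactively by supplying the subject on the command line; a sketch, where the -subj fields are assumptions mirroring this setup (the CN must match the registry hostname):

```shell
# Non-interactive variant of the openssl command above.
# The subject fields are assumptions for this setup; CN must be the
# registry hostname (reg.westos.org) so docker trusts the connection.
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout reg.westos.org.key -x509 -days 365 \
    -subj "/C=CN/O=westos/CN=reg.westos.org" \
    -out reg.westos.org.crt
```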
    Change node5's hostname and add the name resolution for it.
    In the extracted harbor directory, edit the harbor.yml file.
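The fields that matter in harbor.yml for this setup are the hostname and the HTTPS certificate paths; a sketch of the relevant excerpt (the certificate locations are assumptions, point them at wherever the files generated above are stored):

```yaml
# harbor.yml excerpt (v1.8.x layout); the paths below are assumptions
hostname: reg.westos.org

https:
  port: 443
  certificate: /certs/reg.westos.org.crt
  private_key: /certs/reg.westos.org.key
```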
    Install:

    Add the name resolution on the physical host:
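A sketch of the /etc/hosts entry, using node5's address from the table above:

```
172.25.7.5    reg.westos.org
```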

    In the browser, go to: reg.westos.org
    After logging in, the page shows no images yet. Because the installation serves HTTPS on port 443, pushing images to the private registry requires the certificate and user authentication; otherwise the following errors occur:
    Error 1: the certificate is not trusted when pushing.
    Solution: copy the reg.westos.org.crt generated above to /etc/docker/certs.d/reg.westos.org/ and rename it ca.crt

    [root@reg reg.westos.org]# cp /etc/docker/reg.westos.org.crt ca.crt
    

    Error 2: with the certificate trusted, pushing again fails because no login was performed; permission is denied.
    Solution:
    docker login reg.westos.org # log in, then push again.
    Check in the browser:
    Pulling images:
    Copy the certificate under /etc/docker/certs.d/reg.westos.org/ on node5 (reg.westos.org) to the three nodes:
    The three nodes can then pull images:
    Create a user:
    Create a project and add the user wetos to it with the maintainer role.
    Before pushing images to the private registry, docker must log in as that user first.
    Then push:

    [root@node2 reg.westos.org]# docker tag ubuntu:latest reg.westos.org/test/ubuntu
    [root@node2 reg.westos.org]# docker push reg.westos.org/test/ubuntu
    The push refers to repository [reg.westos.org/test/ubuntu]
    5f70bf18a086: Pushed 
    11083b444c90: Pushed 
    9468150a390c: Pushed 
    56abdd66ba31: Pushed 
    latest: digest: sha256:4e709bde11754c2a27ed6e9b9ba55569647f83903f85cd8107e36162c5579984 size: 1151
    

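Harbor maps the first path component after the registry host to a project, which is why the image is retagged as reg.westos.org/test/ubuntu before the push. A quick illustration of how such a reference breaks down (pure string handling, no docker involved):

```shell
# Anatomy of the reference pushed above (illustration only)
ref="reg.westos.org/test/ubuntu"
registry=${ref%%/*}   # registry host: reg.westos.org
rest=${ref#*/}        # test/ubuntu
project=${rest%%/*}   # Harbor project: test
repo=${rest#*/}       # repository: ubuntu (tag defaults to :latest)
echo "$registry / $project / $repo"
```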
    Anonymous users can generally pull images, but not from private projects.
    Deploy a service in the cluster so that each node downloads the image automatically.
    The manager node node2 pulled the image successfully; checking the other nodes shows the same result.


    Docker security and encryption

    Using SSL to encrypt connections to a docker registry

    • Create a certs directory
    • Generate a self-signed certificate
    [root@server1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ./westos.org.key -x509 -days 365 -out ./westos.org.crt 
    Generating a 4096 bit RSA private key
    .......................................................................++
    ...............++
    writing new private key to './westos.org.key'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:cn
    State or Province Name (full name) []:shannxi
    Locality Name (eg, city) [Default City]:Xi'an
    Organization Name (eg, company) [Default Company Ltd]:westos
    Organizational Unit Name (eg, section) []:linux
    Common Name (eg, your name or your server's hostname) []:westos.org
    Email Address []:root@westos.org
    [root@server1 certs]# ls
    westos.org.crt  westos.org.key
    
    • Create the container
    [root@server1 ~]# ls
    anaconda-ks.cfg  certs  docker  game2048.tar  init_vm.sh
    [root@server1 ~]# docker run -d -p 443:443 --restart=always --name registry -v /opt/registry:/var/lib/registry -v /root/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/westos.org.crt -e REGISTRY_HTTP_TLS_KEY=/certs/westos.org.key  registry
    f4bf979077994df0e60c8f8a96261e6e41ce560c66e10a51e9b1a0a7bdf519eb
    [root@server1 ~]# docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
    f4bf97907799        registry            "/entrypoint.sh /etc…"   8 seconds ago       Up 6 seconds        0.0.0.0:443->443/tcp, 5000/tcp   registry
    [root@server1 ~]# docker port registry 
    443/tcp -> 0.0.0.0:443
    
    • Connect to the registry using the certificate
      Copy xxx.crt to /etc/docker/certs.d/westos.org/ca.crt (docker only loads CA files whose name ends in .crt)
    [root@server1 ~]# cp certs/westos.org.crt /etc/docker/certs.d/westos.org/ca.crt
    [root@server1 ~]# ll /etc/docker/certs.d/westos.org/ca.crt 
    -rw-r--r-- 1 root root 2098 Sep  8 20:23 /etc/docker/certs.d/westos.org/ca.crt
    

    Local:

    • Push:
    [root@server1 ~]# docker push westos.org/busybox
    The push refers to repository [westos.org/busybox]
    c632c18da752: Pushed 
    latest: digest: sha256:c2d41d2ba6d8b7b4a3ffec621578eb4d9a0909df29dfa2f6fd8a2e5fd0836aed size: 527
    

    Remote:

    [root@server2 ~]# docker pull westos.org/busybox
    Using default tag: latest
    latest: Pulling from busybox
    9c075fe2c773: Pull complete 
    Digest: sha256:c2d41d2ba6d8b7b4a3ffec621578eb4d9a0909df29dfa2f6fd8a2e5fd0836aed
    Status: Downloaded newer image for westos.org/busybox:latest
    westos.org/busybox:latest
    

    Configuring access for clients other than docker (e.g. curl)

    [root@server2 ~]# cp westos.org.crt /etc/pki/ca-trust/source/anchors/
    [root@server2 ~]# update-ca-trust 
    [root@server2 ~]# curl https://westos.org/v2/_catalog
    {"repositories":["busybox"]}
    

    Configuring user authentication

    • Generate the credentials with htpasswd
    htpasswd -cBb auth/htpasswd admin <password>  # -c creates the file; -B uses bcrypt, which registry requires; <password> is a placeholder
    
    • Create the registry container
    [root@server1 ~]# docker run -d -p 443:443 --restart=always --name registry -v /opt/registry:/var/lib/registry -v /root/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/westos.org.crt -e REGISTRY_HTTP_TLS_KEY=/certs/westos.org.key -v /root/auth:/auth -e REGISTRY_AUTH=htpasswd -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry
    bfe3305451d8feffdf0fa94b81fcb80394f11a4cdeea2d39a4628df026df6efc
    [root@server1 ~]# docker ps -a
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                            NAMES
    bfe3305451d8        registry            "/entrypoint.sh /etc…"   9 seconds ago       Up 7 seconds        0.0.0.0:443->443/tcp, 5000/tcp   registry
    [root@server1 ~]# 
    
    • Log in

    Harbor registry

    Installing the Harbor registry

    [root@server1 harbor]# ls
    common  common.sh  docker-compose.yml  harbor.v1.10.1.tar.gz  harbor.yml  install.sh  LICENSE  prepare
    [root@server1 harbor]# ./install.sh
    


    Using the Harbor registry

    You must log in before use.

    [root@server1 ~]# docker tag westos.org/busybox:latest westos.org/library/busybox
    [root@server1 ~]# docker push westos.org/library/busybox
    The push refers to repository [westos.org/library/busybox]
    c632c18da752: Pushed 
    latest: digest: sha256:c2d41d2ba6d8b7b4a3ffec621578eb4d9a0909df29dfa2f6fd8a2e5fd0836aed size: 527
    

    Create a project and a user.
    Add the user as a member of the project.
    Log in:

    [root@server2 ~]# docker login westos.org
    Username: zzzhq
    Password: 
    WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    

    Push

    [root@server2 ~]# docker push westos.org/test/busybox
    The push refers to repository [westos.org/test/busybox]
    be8b8b42328a: Pushed 
    latest: digest: sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002 size: 527
    

    Pull

    [root@server2 ~]# docker pull westos.org/test/busybox
    Using default tag: latest
    latest: Pulling from test/busybox
    df8698476c65: Pull complete 
    Digest: sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002
    Status: Downloaded newer image for westos.org/test/busybox:latest
    westos.org/test/busybox:latest
    

    Specifying the image registry for the daemon

    [root@server1 ~]# cat /etc/docker/daemon.json 
    {
      "insecure-registries": [
        "westos.org"
      ]
    }
    
    

    Restart docker

    [root@server1 ~]# docker pull library/busybox
    Using default tag: latest
    latest: Pulling from library/busybox
    df8698476c65: Pull complete 
    Digest: sha256:d366a4665ab44f0648d7a00ae3fae139d55e32f9712c67accd604bb55df9d05a
    Status: Downloaded newer image for busybox:latest
    docker.io/library/busybox:latest
    
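For this pull of library/busybox to be served by the local registry rather than docker.io, the daemon normally also needs westos.org configured as a registry mirror. A sketch of the daemon.json such a setup would use (the registry-mirrors entry is an assumption; only insecure-registries appears in the capture above):

```json
{
  "registry-mirrors": ["https://westos.org"],
  "insecure-registries": ["westos.org"]
}
```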

    docker-compose commands

    • docker-compose stop / docker-compose start - stop/start Harbor
    [root@server1 harbor]# docker-compose stop 
    Stopping nginx         ... done
    Stopping harbor-core   ... done
    Stopping registry      ... done
    Stopping harbor-portal ... done
    Stopping harbor-log    ... done
    [root@server1 harbor]# docker-compose start
    Starting log         ... done
    Starting registry    ... done
    Starting registryctl ... done
    Starting postgresql  ... done
    Starting portal      ... done
    Starting redis       ... done
    Starting core        ... done
    Starting jobservice  ... done
    Starting proxy       ... done
    
    • docker-compose down stops and removes the Harbor containers; docker-compose up recreates and starts the stack
    [root@server1 harbor]# docker-compose down
    Stopping harbor-jobservice ... done
    Stopping nginx             ... done
    Stopping harbor-core       ... done
    Stopping harbor-db         ... done
    Stopping registryctl       ... done
    Stopping redis             ... done
    Stopping registry          ... done
    Stopping harbor-portal     ... done
    Stopping harbor-log        ... done
    Removing harbor-jobservice ... done
    Removing nginx             ... done
    Removing harbor-core       ... done
    Removing harbor-db         ... done
    Removing registryctl       ... done
    Removing redis             ... done
    Removing registry          ... done
    Removing harbor-portal     ... done
    Removing harbor-log        ... done
    Removing network harbor_harbor
    [root@server1 harbor]# docker-compose up
    Creating network "harbor_harbor" with the default driver
    Creating harbor-log ... done
    Creating harbor-portal ... done
    Creating registry      ... done
    Creating registryctl   ... done
    Creating redis         ... done
    Creating harbor-db     ... done
    Creating harbor-core   ... done
    Creating nginx             ... done
    Creating harbor-jobservice ... done
    

    Docker networking

    Bridge mode

    When the docker engine starts, it creates a bridge called docker0; all containers attach to this bridge by default. Its address defaults to 172.17.0.1.

    Start an nginx container, then inspect it:
    docker inspect nginx

    "Gateway": "172.17.0.1",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "172.17.0.2",
    

    Inside the container:

    [root@server1 ~]# docker run -it --rm busybox
    / # ping baidu.com
    PING baidu.com (39.156.69.79): 56 data bytes
    64 bytes from 39.156.69.79: seq=0 ttl=45 time=81.037 ms
    64 bytes from 39.156.69.79: seq=1 ttl=45 time=247.098 ms
    ^C
    --- baidu.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 81.037/164.067/247.098 ms
    / # route -n
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
    / # cat /etc/resolv.conf 
    nameserver 114.114.114.114
    

    As long as the host can reach the network, its containers can too, and they share the host's DNS configuration.
    The drawback of bridge mode is that external hosts cannot reach a container directly; it can only be accessed through port mapping.

    Host mode

    A Network Namespace provides an isolated network environment: NICs, routes, iptables rules, and so on are all separated from other Network Namespaces. A Docker container is normally assigned its own Network Namespace. But if a container is started in host mode, it does not get an independent Network Namespace; it shares one with the host. The container does not virtualize its own NIC or configure its own IP; it uses the host's IP and ports.
    Containers are created in bridge mode by default; use --network host to select host mode.

    [root@server1 ~]# docker run -d --name nginx --network host nginx
    1a877cf68b3fd792f187397ff1fe022afa337e51ea71f9b1d73e099f3eae5f8c
    [root@server1 ~]# netstat -antlup
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      18802/nginx: master 
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3170/sshd           
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3283/master         
    tcp        0      0 172.25.254.101:22       172.25.254.1:46798      ESTABLISHED 3296/sshd: root@pts 
    tcp6       0      0 :::80                   :::*                    LISTEN      18802/nginx: master 
    tcp6       0      0 :::22                   :::*                    LISTEN      3170/sshd           
    tcp6       0      0 ::1:25                  :::*                    LISTEN      3283/master
    

    The container now listens directly on the host's port 80 and shares the host's network stack.
    The benefit is that the container communicates with the outside directly, without going through the bridge; the drawback is that the container loses isolation.

    None mode

    Use none mode when a container needs isolation and no network at all, for example a container that only stores important secrets.

    join (container) mode

    The new container does not create its own NIC or configure its own IP; instead it shares the IP and port range of a specified existing container.

    Some useful docker commands

    • docker logs - view a container's logs
    [root@server1 ~]# docker logs test 
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
    10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    /docker-entrypoint.sh: Configuration complete; ready for start up
    
    • docker stats - view containers' CPU and memory usage
    CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
    056bd91195ec        test                0.00%               1.391MiB / 991MiB   0.14%               0B / 0B             0B / 0B             2
    
    • docker network ls - list docker networks
    [root@server1 ~]# docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    b36a79642ad5        bridge              bridge              local
    989dfe7c1853        host                host                local
    667109ee3035        none                null                local
    

    Using cAdvisor

    cAdvisor is an open-source container monitoring tool from Google.
    cAdvisor (container advisor) gives container users insight into the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. For each container it keeps resource-isolation parameters, historical resource usage, histograms of complete historical resource usage, and network statistics. The data is exported per container and machine-wide.

    • Run
    [root@server1 ~]# docker run \
    --volume=/:/rootfs:ro   \
    --volume=/var/run:/var/run:ro   \
    --volume=/sys:/sys:ro   \
    --volume=/var/lib/docker/:/var/lib/docker:ro   \
    --volume=/dev/disk/:/dev/disk:ro   \
    --publish=8080:8080   \
    --detach=true   \
    --name=cadvisor   \
    --privileged   \
    --device=/dev/kmsg \
    google/cadvisor
    4ab54628f413b772dc3358859f0b38253bd94829db91329e325f15e87386b716
    
    • Access
      Browse to http://<host-ip>:8080 to reach the cAdvisor web UI.

    Docker custom networks

    docker provides three custom network drivers:

    • bridge
    • overlay
    • macvlan
      The bridge driver is similar to the default bridge network mode, but adds some new features.
      overlay and macvlan are used to create networks that span multiple hosts.

    bridge

    Create a custom bridge
    [root@server1 ~]# docker network create -d bridge mynet1
    2cb008aca3872d04229eedfd6ca7d11ac9036ff01dc86f2ff3e0ee9a98e17cc4
    [root@server1 ~]# docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    b36a79642ad5        bridge              bridge              local
    989dfe7c1853        host                host                local
    2cb008aca387        mynet1              bridge              local
    667109ee3035        none                null                local
    [root@server1 ~]# docker run -it --rm --network mynet1 busyboxplus
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    119: eth0@if120: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    
    Custom subnet and gateway
    [root@server1 ~]# docker network create -d bridge --subnet 172.22.0.0/24 --gateway 172.22.0.1 mynet2
    aba28c93549378f6e66b977449c3b8e476e62e96999aa93f2fb28602fae14cc3
    [root@server1 ~]# docker network inspect mynet2
    [
        {
            "Name": "mynet2",
            "Id": "aba28c93549378f6e66b977449c3b8e476e62e96999aa93f2fb28602fae14cc3",
            "Created": "2020-09-09T20:57:49.225747592+08:00",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.22.0.0/24",
                        "Gateway": "172.22.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {},
            "Options": {},
            "Labels": {}
        }
    ]
    
    Assign a fixed container IP
    [root@server1 ~]# docker run -it --rm --network mynet2 --ip 172.22.0.33  busyboxplus 
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    124: eth0@if125: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:16:00:21 brd ff:ff:ff:ff:ff:ff
        inet 172.22.0.33/24 brd 172.22.0.255 scope global eth0
           valid_lft forever preferred_lft forever
    

    Containers bridged to the same bridge can communicate with each other.
    Suppose vm2 is stopped and a new container vm3 takes over vm2's old IP; after vm2 is started again, vm1 can still reach vm2 by name, because on a user-defined network docker's embedded DNS resolves container names to their current IPs.

    [root@server1 ~]# docker exec -it vm1 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # [root@server1 ~]# docker run -it -d --name vm3 --network mynet1   busyboxplus 
    7a2c387315ff67a26e48b58707efd21eff1ba6ee33db91a9afc753503b44df91
    [root@server1 ~]# docker exec -it vm3 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    146: eth0@if147: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.3/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # [root@server1 ~]# docker run -it -d --name vm2 --network mynet1   busyboxplus 
    30568bb84a8e007bc53a49b4d76e6f80615a5a76b6ab18e8916640db2278c450
    [root@server1 ~]# docker exec -it vm2 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    148: eth0@if149: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:04 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.4/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # [root@server1 ~]# docker exec -it vm1 sh
    / # ping vm2
    PING vm2 (172.19.0.4): 56 data bytes
    64 bytes from 172.19.0.4: seq=0 ttl=64 time=0.322 ms
    64 bytes from 172.19.0.4: seq=1 ttl=64 time=0.231 ms
    64 bytes from 172.19.0.4: seq=2 ttl=64 time=0.237 ms
    ^C
    --- vm2 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.231/0.263/0.322 ms
    / # 
    
    Communicating across different bridges
    • Connect with docker network connect
    [root@server1 ~]# docker run -it -d --name vm2 --network mynet2   busyboxplus 
    197f38c6893985252a04182277d2d10093fbf1abb830c0aca8af4d56e5700f80
    [root@server1 ~]# docker exec -it vm1 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # [root@server1 ~]# docker exec -it vm2 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    152: eth0@if153: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.22.0.2/24 brd 172.22.0.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # [root@server1 ~]# docker network connect mynet2 vm1
    [root@server1 ~]# docker exec -it vm1 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    154: eth1@if155: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:16:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.22.0.3/24 brd 172.22.0.255 scope global eth1
           valid_lft forever preferred_lft forever
    / # ping vm2
    PING vm2 (172.22.0.2): 56 data bytes
    64 bytes from 172.22.0.2: seq=0 ttl=64 time=0.358 ms
    64 bytes from 172.22.0.2: seq=1 ttl=64 time=0.278 ms
    ^C
    --- vm2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.278/0.318/0.358 ms
    
    • Connect using container mode
      Container mode makes a newly created container share a Network Namespace with a specified existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the specified container's IP and port range. Apart from networking, the two containers remain isolated in other respects: filesystem, process list, and so on. Processes in the two containers can communicate through the lo device.
    [root@server1 ~]# docker run -d -it --name vm2 --network container:vm1 busyboxplus
    c2d3af703289f364e3c7c32c8c933bd4c24a9c481c0a0626bd9a92d660ba5808
    [root@server1 ~]# docker exec -it vm2 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # ping vm1
    PING vm1 (172.19.0.2): 56 data bytes
    64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.533 ms
    64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.238 ms
    ^C
    --- vm1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.238/0.385/0.533 ms
    / # 
    / # [root@server1 ~]# 
    [root@server1 ~]# docker exec -it vm1 sh
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.19.0.2/16 brd 172.19.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    

    This mode suits containers that need to talk to each other frequently over localhost, for example a web container and an application container.
    在这里插入图片描述

    [root@server1 ~]# docker run -d -it --name vm2 --network container:vm1 nginx
    0de7918c769d3c2da36d22d4cb9b4cff471a47ad725f4fc5c8755a17ea24c1c6
    / # netstat -antlu
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      
    tcp        0      0 127.0.0.11:35025        0.0.0.0:*               LISTEN      
    tcp        0      0 :::80                   :::*                    LISTEN      
    udp        0      0 127.0.0.11:34067        0.0.0.0:*                           
    / # curl localhost
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    
    • Connect containers with --link
    [root@server1 ~]# docker run -d --name test nginx
    a402506fa470430cc46671c652e0c16a55a38771463e89d950d94e9e49dd3ac6
    [root@server1 ~]# docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
    a402506fa470        nginx               "/docker-entrypoint.…"   14 seconds ago      Up 12 seconds       80/tcp              test
    [root@server1 ~]# docker run -it --name vm1 --link test:web busyboxplus
    / # ping web
    PING web (172.17.0.2): 56 data bytes
    64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.505 ms
    64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.249 ms
    64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.243 ms
    ^C
    --- web ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.243/0.332/0.505 ms
    / # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    158: eth0@if159: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue 
        link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # cat /etc/hosts
    127.0.0.1       localhost
    ::1     localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    172.17.0.2      web a402506fa470 test
    172.17.0.3      6ad45d325508
    / # env
    HOSTNAME=6ad45d325508
    SHLVL=1
    HOME=/
    WEB_PORT=tcp://172.17.0.2:80
    WEB_NAME=/vm1/web
    WEB_PORT_80_TCP_ADDR=172.17.0.2
    WEB_PORT_80_TCP_PORT=80
    WEB_PORT_80_TCP_PROTO=tcp
    TERM=xterm
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    WEB_ENV_PKG_RELEASE=1~buster
    WEB_PORT_80_TCP=tcp://172.17.0.2:80
    WEB_ENV_NGINX_VERSION=1.19.2
    WEB_ENV_NJS_VERSION=0.4.3
    PWD=/
    / # 
    

    The linked container gets a hosts entry and environment variables describing the link.

    [root@server1 ~]# docker stop test 
    test
    [root@server1 ~]# docker run -d  nginx            
    fe6c24fe2eeb13a144921d9d7abef7979543d6fd487b101164240d7cb90fbca0
    [root@server1 ~]# docker ps -a
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS               NAMES
    fe6c24fe2eeb        nginx               "/docker-entrypoint.…"   7 seconds ago       Up 4 seconds                    80/tcp              elegant_morse
    6ad45d325508        busyboxplus         "/bin/sh"                2 minutes ago       Exited (0) About a minute ago                       vm1
    a402506fa470        nginx               "/docker-entrypoint.…"   3 minutes ago       Exited (0) 32 seconds ago                           test
    [root@server1 ~]# docker inspect elegant_morse 
     "IPAddress": "172.17.0.2",
    [root@server1 ~]# docker start test 
    test
    [root@server1 ~]# docker start vm1
    vm1
    [root@server1 ~]# docker attach vm1
    / # cat /etc/hosts
    127.0.0.1       localhost
    ::1     localhost ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    172.17.0.3      web a402506fa470 test
    172.17.0.4      6ad45d325508
    / # env
    HOSTNAME=6ad45d325508
    SHLVL=1
    HOME=/
    WEB_PORT=tcp://172.17.0.3:80
    WEB_NAME=/vm1/web
    WEB_PORT_80_TCP_ADDR=172.17.0.3
    WEB_PORT_80_TCP_PORT=80
    WEB_PORT_80_TCP_PROTO=tcp
    TERM=xterm
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    WEB_ENV_PKG_RELEASE=1~buster
    WEB_PORT_80_TCP=tcp://172.17.0.3:80
    WEB_ENV_NGINX_VERSION=1.19.2
    WEB_ENV_NJS_VERSION=0.4.3
    PWD=/
    / # 
    

    When the linked container's IP changes, the hosts entry and the env variables are updated accordingly.

    Cross-host Docker communication

    Cross-host docker networking options

    Native to docker:

    • overlay
    • macvlan
      Third-party:
    • flannel
    • weave
    • calico
    The CNM (container network model)

    Sandbox: the container's network stack (namespace)
    Endpoint: attaches a sandbox to a network (veth)
    Network: contains a group of endpoints; endpoints on the same network can communicate with each other

    Using macvlan for cross-host Docker container networking

    macvlan characteristics:

    • A NIC virtualization technique provided by the linux kernel
    • Good performance: no bridging is needed; the host's physical NIC is used directly.

    Steps:

    • Add an extra NIC (eth1) to server1 and server2
    • Bring the NIC up and enable promiscuous mode
    [root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
    NAME=eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    
    [root@server1 ~]# ip link set eth1 up 
    [root@server1 ~]# ip link set eth1 promisc on
    

    在这里插入图片描述

    • Create the macvlan network on server1 and server2:
    ### on server1
    [root@server1 ~]# docker network create -d macvlan --subnet 172.10.0.0/24 --gateway 172.10.0.1 -o parent=eth1 vlan1
    b044216355b37ebaf8628fe1694e73cc242c86c17a010c17bfbcb8683defa883
    [root@server1 ~]# docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    b36a79642ad5        bridge              bridge              local
    989dfe7c1853        host                host                local
    667109ee3035        none                null                local
    b044216355b3        vlan1               macvlan             local
    ### same operations on server2
    
    • Test
    ### on server1
    [root@server1 ~]# docker run -d --name web --network vlan1 --ip 172.10.0.10 nginx
    b8b320373bb8f8b19295ff7221e62648050ef5087a5240951d6354a79554c67f
    ### on server2
    [root@server2 ~]# docker run -it --rm --network vlan1 --ip 172.10.0.20 busyboxplus
    / # curl 172.10.0.10
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    
    Problem and solution

    Containers on both hosts bypass docker0 entirely and use the physical NIC directly, so no NAT or port mapping is needed and throughput improves greatly.
    However, dedicating a physical NIC to every network is not realistic.

    • Solution:
      macvlan claims the NIC exclusively, but VLAN sub-interfaces allow multiple macvlan networks to share one NIC. VLANs can split a physical layer-2 network into up to 4094 isolated logical networks (VLAN IDs 1–4094), which greatly improves NIC reuse.
    ### same operations on server1 and server2
    [root@server1 ~]# docker network create -d macvlan --subnet 172.20.0.0/24 --gateway 172.20.0.1 -o parent=eth1.1 vlan2
    71294916fa0f1975361194350860e7b4f7b6828ffd255129dae0277ed126e64e
    
    [root@server1 ~]# docker run -d --name web2 --network vlan2 --ip 172.20.0.10 nginx
    178e08410adc5f300fffa9ac04d59a1850dad50993e428ffa86a8b00ff50fdc4
    [root@server2 ~]# docker run -it --rm --network vlan2 --ip 172.20.0.20 busyboxplus
    / # curl 172.20.0.10
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    / # curl 172.10.0.10
    curl: (7) Failed to connect to 172.10.0.10 port 80: No route to host
    

    macvlan networks are isolated from each other at layer 2 and cannot communicate directly, but they can be connected at layer 3 (i.e., from inside the containers) through a gateway.
    This is how VLAN sub-interfaces implement multiple macvlan networks on a single NIC.
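The sub-interface pattern above scales to many networks. A sketch that generates one macvlan network per VLAN ID (the helper names `vlan_subnet`/`make_vlan_networks` and the `172.<id>.0.0/24` numbering scheme are made up for illustration; the function only prints the docker commands, so it is safe to run anywhere):

```shell
# Hypothetical helpers: derive a /24 subnet and gateway from a VLAN ID.
vlan_subnet()  { echo "172.$1.0.0/24"; }
vlan_gateway() { echo "172.$1.0.1"; }

# Print one `docker network create` per VLAN sub-interface (eth1.10, eth1.20, ...).
# Pipe the output to `sh` on a prepared host to actually create the networks.
make_vlan_networks() {
  for id in "$@"; do
    echo "docker network create -d macvlan --subnet $(vlan_subnet "$id") --gateway $(vlan_gateway "$id") -o parent=eth1.$id vlan$id"
  done
}
make_vlan_networks 10 20 30
```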

    1. Environment preparation

    1.1 Install docker

    Link: https://blog.csdn.net/m0_46674735/article/details/111690794

    1.2 Install docker-compose

    Link: https://blog.csdn.net/m0_46674735/article/details/111799167

    2. Installing and deploying Harbor

    2.1 Harbor package download

    Link: https://github.com/goharbor/harbor/releases

    2.2 Upload and extract the Harbor package

    [root@localhost ~]# tar xzf harbor-offline-installer-v1.9.4.tgz -C /usr/local/
    

    2.3 Edit the Harbor configuration file

    2.3.1 Change into the harbor directory and edit the config

    [root@localhost ~]# cd /usr/local/harbor/
    [root@localhost harbor]# vim harbor.yml 
    

    2.3.2 Change the following settings

    The IP address to serve on
    (screenshot)
    The port to listen on (default is 80)
    (screenshot)
    The initial admin login password
    (screenshot)
    Save and exit when done.

    2.4 Run the install script

    2.4.1 The first install has to load many images and takes a while; be patient.

    (screenshot)

    2.4.2 The following output indicates the deployment succeeded

    (screenshot)

    2.4.3 Check the container status; once every container reports healthy, initialization is done and the UI is reachable from a browser

    (screenshot)
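The commands behind these screenshots are roughly the following (a sketch that only prints the steps, assuming Harbor was unpacked to /usr/local/harbor as in 2.2; run them on the Harbor host itself):

```shell
# Print the install/verify steps shown in the screenshots.
harbor_install_steps() {
  echo "cd /usr/local/harbor"
  echo "./install.sh"       # loads the offline images, then starts the compose stack
  echo "docker-compose ps"  # repeat until every service shows (healthy)
}
harbor_install_steps
```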

    2.5 Log in to Harbor from a browser

    2.5.1 Log in

    (screenshot)

    2.5.2 Change the UI language to Simplified Chinese

    (screenshot)

    3. Create a project in Harbor and push an image

    Create a new project named docker with a public access level.
    (screenshot)
    Verify that it was created:
    (screenshot)

    3.1 Push the mysql image to Harbor

    The push can be performed from any docker host that can reach Harbor.
    Here it is done on the host at 192.168.153.189.

    3.1.1 Edit the docker daemon configuration

    [root@localhost ~]# vim /etc/docker/daemon.json 
    

    Add the registry to the daemon configuration as shown:
    (screenshot)
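The screenshot shows the daemon.json content. For this setup (Harbor served over plain HTTP at 192.168.153.188) the standard way to allow pushes is the `insecure-registries` key; a sketch assuming that address:

```json
{
  "insecure-registries": ["192.168.153.188"]
}
```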

    3.1.2 After saving and exiting, restart docker

    [root@localhost ~]# systemctl daemon-reload
    [root@localhost ~]# systemctl restart docker
    

    3.1.3 Log in to Harbor with the admin account and password (both are admin here)

    (screenshot)

    3.1.4 Tag the mysql image with the Harbor registry IP and the project name

    [root@localhost ~]# docker tag mysql:5.7 192.168.153.188/docker/mysql:5.7
    

    3.1.5 Push the image with docker push image:tag

    [root@localhost ~]# docker push 192.168.153.188/docker/mysql:5.7
    The push refers to repository [192.168.153.188/docker/mysql]
    6c316520569e: Pushed 
    f6bef35c0067: Pushed 
    a6ea401b7864: Pushed 
    94bd7d7999de: Pushed 
    8df989cb6670: Pushed 
    f358b00d8ce7: Pushed 
    ae39983d39c4: Pushed 
    b55e8d7c5659: Pushed 
    e8fd11b2289c: Pushed 
    e9affce9cbe8: Pushed 
    316393412e04: Pushed 
    d0f104dc0a1f: Pushed 
    5.7: digest: sha256:b9c1994c82f94c13370b0d79efa703616a538bf55fcb7e0923892d5a5e753514 size: 2829
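The reference pushed above always has the shape registry/project/image:tag. A sketch that builds it explicitly (`harbor_ref` is a hypothetical helper, not a docker command):

```shell
# Build the fully qualified reference that `docker tag` / `docker push` expect.
harbor_ref() {  # usage: harbor_ref <registry> <project> <image> <tag>
  echo "$1/$2/$3:$4"
}

ref=$(harbor_ref 192.168.153.188 docker mysql 5.7)
echo "docker tag mysql:5.7 $ref"   # then: docker push "$ref"
```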
    

    3.1.6 Log in to Harbor and verify that the mysql image appears in the docker project

    (screenshot)

    3.2 Pull the mysql image from Harbor

    Pulling images from the Harbor registry also requires the daemon.json change from 3.1.1 followed by a restart of the docker service; this host was already configured above, so nothing further is needed.

    3.2.1 Delete the local mysql images

    [root@localhost ~]# docker rmi mysql:5.7
    Untagged: mysql:5.7
    [root@localhost ~]# docker rmi  192.168.153.188/docker/mysql:5.7
    Untagged: 192.168.153.188/docker/mysql:5.7
    Untagged: 192.168.153.188/docker/mysql@sha256:b9c1994c82f94c13370b0d79efa703616a538bf55fcb7e0923892d5a5e753514
    Deleted: sha256:a4cc8ac4386762cd0e8e3d9c7ca4ba6e84898aff2995762baaf47aef8cbaf063
    Deleted: sha256:58943f97772ae5603ec8a3d9ca0e1795361be5f5219e607907dd3bc36c40c024
    Deleted: sha256:058d93ef2bfb943ba6a19d8b679c702be96e34337901da9e1a07ad62b772bf3d
    Deleted: sha256:7bca77783fcf15499a0386127dd7d5c679328a21b6566c8be861ba424ac13e49
    Deleted: sha256:183d05512fa88dfa8c17abb9b6f09a79922d9e9ee001a33ef34d1bc094bf8f9f
    Deleted: sha256:165805124136fdee738ed19021a522bb53de75c2ca9b6ca87076f51c27385fd7
    Deleted: sha256:904abdc2d0bea0edbb1a8171d1a1353fa6de22150a9c5d81358799a5b6c38c8d
    Deleted: sha256:d26f7649f78cf789267fbbca8aeb234932e230109c728632c6b9fbc60ca5591b
    Deleted: sha256:7fcf7796e23ea5b42eb3bbd5bec160ba5f5f47ecb239053762f9cf766c143942
    Deleted: sha256:826130797a5760bcd2bb19a6c6d92b5f4860bbffbfa954f5d3fc627904a76e9d
    Deleted: sha256:53e0181c63e41fb85bce681ec8aadfa323cd00f70509107f7001a1d0614e5adf
    Deleted: sha256:d6854b83e83d7eb48fb0ef778c58a8b839adb932dd036a085d94a7c2db98f890
    Deleted: sha256:d0f104dc0a1f9c744b65b23b3fd4d4d3236b4656e67f776fe13f8ad8423b955c
    [root@localhost ~]# 
    

    3.2.2 Pull the image from the Harbor registry

    [root@localhost ~]# docker pull 192.168.153.188/docker/mysql:5.7
    5.7: Pulling from docker/mysql
    bf5952930446: Pull complete 
    8254623a9871: Pull complete 
    938e3e06dac4: Pull complete 
    ea28ebf28884: Pull complete 
    f3cef38785c2: Pull complete 
    894f9792565a: Pull complete 
    1d8a57523420: Pull complete 
    5f09bf1d31c1: Pull complete 
    1b6ff254abe7: Pull complete 
    74310a0bf42d: Pull complete 
    d398726627fd: Pull complete 
    784aa83a1bf2: Pull complete 
    Digest: sha256:b9c1994c82f94c13370b0d79efa703616a538bf55fcb7e0923892d5a5e753514
    Status: Downloaded newer image for 192.168.153.188/docker/mysql:5.7
    192.168.153.188/docker/mysql:5.7
    
    Verify the pull succeeded:
    [root@localhost ~]# docker images |grep mysql
    192.168.153.188/docker/mysql   5.7                 a4cc8ac43867        4 months ago        448MB
    
