
    OpenStack RDO Deployment Guide - 3 (Neutron Network Services)


    The network services currently supported by the OpenStack Neutron framework are: Load Balancing as a Service (LBaaS), VPN as a Service (VPNaaS), and Firewall as a Service (FWaaS).

    1. Install and configure the network services (on the network node)

    (1) Install the packages

    yum install openstack-neutron-vpn-agent openstack-neutron openswan haproxy

    Add to /etc/sysctl.conf (disables accepting and sending ICMP redirects):
    net.ipv4.conf.default.accept_redirects=0
    net.ipv4.conf.default.send_redirects=0
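The two settings can also be added idempotently with a short helper. This is a sketch: it writes to a demo path by default; point SYSCTL_FILE at /etc/sysctl.conf and reload with `sysctl -p` as root for real use.

```shell
# Append each setting only if it is not already present verbatim.
# SYSCTL_FILE defaults to a demo path here; use /etc/sysctl.conf for real.
SYSCTL_FILE="${SYSCTL_FILE:-/tmp/sysctl-demo.conf}"
touch "$SYSCTL_FILE"
for kv in "net.ipv4.conf.default.accept_redirects=0" \
          "net.ipv4.conf.default.send_redirects=0"; do
    # grep -qxF: match the whole line literally, quietly
    grep -qxF "$kv" "$SYSCTL_FILE" || echo "$kv" >> "$SYSCTL_FILE"
done
# Load the settings (requires root):
# sysctl -p "$SYSCTL_FILE"
```

Running it twice leaves the file unchanged, so it is safe to re-run.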

    (2) Additional requirements for VPNaaS

    VPNaaS requires the IPsec service; enable, start, and verify it:
    chkconfig ipsec on
    service ipsec start
    ipsec verify

    # Openswan can block reading /dev/random when entropy is low; linking it to
    # /dev/urandom avoids the stall (weaker entropy guarantees - acceptable in a lab):
    ln -s /dev/urandom /dev/random

    (3) Configure the services (all three are configured together here; as you can see, the configuration entries follow the same pattern)

    /usr/share/neutron/neutron-dist.conf
    service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
    service_provider = VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
    service_provider = FIREWALL:Iptables:neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver

    /etc/neutron/neutron.conf
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.services.vpn.plugin.VPNDriverPlugin

     

    /etc/neutron/fwaas_driver.ini
    [fwaas]
    #driver =neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
    enabled = True

     

    /etc/neutron/vpn_agent.ini
    [DEFAULT]
    # VPN-Agent configuration file
    # Note: vpn-agent inherits from l3-agent, so l3-agent config options also apply

    [vpnagent]
    #vpn device drivers which vpn agent will use
    #If we want to use multiple drivers, we need to define this option multiple times.
    vpn_device_driver=neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver
    #vpn_device_driver=another_driver

    [ipsec]
    #Status check interval
    #ipsec_status_check_interval=60

     

    /etc/neutron/lbaas_agent.ini
    [DEFAULT]
    # Show debugging output in log (sets DEBUG log level output).
    # debug = False
    debug = False

    # The LBaaS agent will resync its state with Neutron to recover from any
    # transient notification or rpc errors. The interval is number of
    # seconds between attempts.
    # periodic_interval = 10

    # LBaaS requires an interface driver be set. Choose the one that best
    # matches your plugin.
    # interface_driver =
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

    # Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC, NVP,
    # BigSwitch/Floodlight)
    # interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

    # Use veth for an OVS interface or not.
    # Support kernels with limited namespace support
    # (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
    # ovs_use_veth = False

    # Example of interface_driver option for LinuxBridge
    #interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

    # The agent requires a driver to manage the load balancer. HAProxy is the
    # open-source version.
    # device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
    device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

    # The user group
    # user_group = nogroup
    user_group = haproxy
    use_namespaces=True
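After editing these files, the affected services must be restarted to pick up the changes. The service names below are assumptions for an RDO-style SysV install (note that neutron-vpn-agent supersedes neutron-l3-agent, so only the former is restarted; neutron-server is restarted on the controller node, not here). A guarded sketch:

```shell
# Restart the agents configured above; skips gracefully where service(8)
# is unavailable, and warns (rather than aborts) on a missing service.
restart_neutron_agents() {
    command -v service >/dev/null 2>&1 || { echo "service(8) not found; restart manually"; return 0; }
    for svc in neutron-vpn-agent neutron-lbaas-agent; do
        service "$svc" restart || echo "warning: could not restart $svc"
    done
}
restart_neutron_agents
```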


  • OpenStack Ironic detailed deployment flow

    Published 2019-11-06 17:57:04

    1. The boot-instance request comes in through the Nova API and travels over the message queue to the Nova scheduler.
    2. The Nova scheduler applies filters and finds an eligible hypervisor; it also uses the flavor's extra_specs (such as cpu_arch) to match a target physical node.
    3. The Nova compute manager claims resources on the selected hypervisor.
    4. The Nova compute manager creates an (unbound) tenant virtual interface (VIF) in the Networking service, according to the network interfaces in the nova boot request. Note that the port's MAC address is generated randomly and is updated when the VIF is attached to a node, to correspond to that node's network interface card.
    5. Nova compute creates a spawn task containing all the information, such as which image to boot from, and calls driver.spawn from Nova compute's virt layer. During spawn, the virt driver: updates the target Ironic node with the deploy image, instance UUID, requested capabilities, and various flavor properties; validates the node's power and deploy interfaces via the Ironic API; and attaches the previously created VIFs to the node. Each Neutron port can be attached to any Ironic port or port group, with port groups taking priority over ports. On the Ironic side, this work is done by the network interface.
    6. Nova's Ironic driver issues a deploy request through the Ironic API to the Ironic conductor serving the bare-metal node.
    7. The virtual interfaces are configured and the Neutron API updates the DHCP port to set the PXE/TFTP options. With the neutron network interface, Ironic creates separate provisioning ports in the Networking service; with the flat network interface, the ports created by Nova are used for both provisioning and deployed instance networking.
    8. The Ironic node's boot interface prepares the PXE configuration and caches the deploy kernel and ramdisk.
    9. The Ironic node's management interface issues commands to start network-booting the node.
    10. The Ironic node's deploy interface caches the instance image (in the case of the iSCSI deploy interface), plus the kernel and ramdisk needed for netboot.
    11. The Ironic node's power interface instructs the node to power on.
    12. The node boots the deploy ramdisk.
    13. Depending on the exact driver, the conductor copies the image to the physical node over iSCSI.
    14. The node's boot interface switches the PXE config to reference the instance image and asks the ramdisk agent to soft power off the node; if that fails, the bare-metal node is powered off via IPMI/BMC.
    15. The deploy interface triggers the network interface to remove the provisioning ports (if any were created) and bind the tenant ports to the node; the node is then powered on.
    16. The bare-metal node's provisioning state is updated to active.
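The flow above can be reduced to a toy state machine over the node's provision states. This is an illustrative sketch, not Ironic's actual code or its complete state diagram:

```shell
# Toy model of the provision-state transitions the steps above walk through:
# available -> deploying -> active (and the teardown path back).
advance() {
    case "$1:$2" in
        available:deploy) echo "deploying" ;;
        deploying:done)   echo "active" ;;
        active:delete)    echo "deleting" ;;
        deleting:done)    echo "available" ;;
        *)                echo "invalid" ;;
    esac
}

state="available"
state=$(advance "$state" deploy)   # steps 6-13: conductor deploys the image
state=$(advance "$state" done)     # step 16: provisioning completes
echo "$state"                      # prints: active
```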

     

  • Hyperledger Fabric 2.x multi-host / distributed cluster deployment guide

    Published 2020-11-28 12:26:02

    Hardware environment: one CentOS 7 virtual machine and one Ubuntu virtual machine, two hosts in total. If you have the resources, try deploying across three or more hosts.

    Software environment

    Name            Version
    Fabric          2.2.0
    go              1.15
    docker          19.03.11
    docker-compose  1.12.0

    Deployment plan: 3 orderers and 2 organizations with 2 peer nodes each, implementing a multi-host/distributed Hyperledger Fabric deployment with static IPs.

    1. Install Go, docker, and docker-compose (required on both VM 1 and VM 2)

    Installing Go
    The download link below comes from https://studygolang.com/dl; if the wget command below stops working, fetch a fresh link from that site.
    Open a terminal and run:

    sudo wget -P /usr/local https://studygolang.com/dl/golang/go1.15.linux-amd64.tar.gz
    cd /usr/local
    sudo tar -zxvf go1.15.linux-amd64.tar.gz
    

    Add the environment variables. Open the editor:

    vim ~/.bashrc
    

    Copy the following into the .bashrc file (in vim: press i to insert, Esc when done, then :wq! to save and quit):

    export GOROOT=/usr/local/go
    export PATH=$PATH:$GOROOT/bin
    export GOPATH=$HOME/go
    export PATH=$PATH:/home/yujialing/go/src/github.com/hyperledger/fabric-samples/bin
    export FABRIC_CFG_PATH=/home/yujialing/go/src/github.com/hyperledger/fabric-samples/multiple-deployment
    
    source ~/.bashrc
    

    Verify that Go installed successfully:

    go version
    

    Installing docker

    sudo yum install -y yum-utils
    sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum makecache fast
    sudo yum -y install docker-ce
    

    Start docker-ce:

    sudo systemctl start docker
    sudo systemctl enable docker
    

    Add the current user to the docker group:

    sudo usermod -aG docker $USER
    newgrp - docker
    

    Check that docker installed successfully:

    docker -v
    

    If you cannot reach Docker Hub directly, configure a registry mirror; otherwise some later steps that download from external sites may fail with connection errors.

    vim /etc/docker/daemon.json
    
    {
    "registry-mirrors":["https://registry.docker-cn.com"]
    }
    

    (In vim: press i to insert, Esc when done, then :wq! to save and quit.)

    If the file cannot be written, edit it with sudo (e.g. sudo vim /etc/docker/daemon.json) rather than loosening permissions with chmod -R 777, which would make the directory world-writable.
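Before restarting docker, it is worth validating the JSON: a syntax error in daemon.json prevents the daemon from starting. A small sketch using python3's stdlib parser (assumes python3 is installed):

```shell
# Validate a daemon.json file; defaults to /etc/docker/daemon.json.
validate_daemon_json() {
    f="${1:-/etc/docker/daemon.json}"
    [ -f "$f" ] || { echo "no $f found"; return 0; }
    if python3 -m json.tool "$f" >/dev/null 2>&1; then
        echo "valid JSON: $f"
    else
        echo "INVALID JSON: $f"
        return 1
    fi
}
validate_daemon_json
```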

    Restart the docker service:

    systemctl restart docker.service
    

    Installing docker-compose

    curl -L https://get.daocloud.io/docker/compose/releases/download/1.12.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
    sudo mv ~/docker-compose /usr/local/bin/docker-compose 
    chmod +x /usr/local/bin/docker-compose
    

    Check that docker-compose installed successfully:

    docker-compose -v
    

    2. Download the Fabric binaries, docker images, and fabric-samples (on both VM 1 and VM 2)

    Download the Fabric binaries, docker images, and samples:

    cd /home/yujialing
    mkdir -p go/src/github.com/hyperledger
    cd /home/yujialing/go/src/github.com/hyperledger
    git clone https://github.com/hyperledger/fabric-samples.git
    cd fabric-samples
    git checkout 22393b629bcac7f7807cc6998aa44e06ecc77426
    curl -sSL https://bit.ly/2ysbOFE | bash -s -- 2.2.0 1.4.8 -s
    

    The last step takes a while; be patient.
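Once the download finishes (and the PATH exports from ~/.bashrc are in effect), a quick check confirms the Fabric binaries are actually reachable:

```shell
# Report whether each Fabric binary is on PATH.
check_fabric_binaries() {
    for bin in cryptogen configtxgen peer orderer; do
        if command -v "$bin" >/dev/null 2>&1; then
            echo "$bin: found"
        else
            echo "$bin: NOT on PATH (check the exports in ~/.bashrc)"
        fi
    done
}
check_fabric_binaries
```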

    3. Create the multi-host deployment folder

    Inside the fabric-samples folder.

    On both VM 1 and VM 2:

    mkdir multiple-deployment
    cd multiple-deployment
    

    The following is on VM 1 only: copy the chaincode (smart contract) named abstore from the Fabric samples into chaincode/go.

    mkdir -p chaincode/go
    cp -r ../chaincode/abstore/go/* chaincode/go
    

    Download the chaincode's dependencies:

    cd chaincode/go
    go env -w GOPROXY=https://goproxy.io,direct
    go env -w GO111MODULE=on
    go mod vendor
    

    You should now see a vendor folder under go/ holding the chaincode's (smart contract's) dependencies.

    4. Generate certificates and channel configuration

    On VM 1:

    In the multiple-deployment folder, create crypto-config.yaml and configtx.yaml:

    cd ../..
    touch crypto-config.yaml
    touch configtx.yaml
    

    Write the following into crypto-config.yaml:

    OrdererOrgs:
      - Name: Orderer
        Domain: example.com
        Specs:
          - Hostname: orderer0
          - Hostname: orderer1
          - Hostname: orderer2
    
    PeerOrgs:
      - Name: Org1
        Domain: org1.example.com
        EnableNodeOUs: true
        Template:
          Count: 2
        Users:
          Count: 1
      - Name: Org2
        Domain: org2.example.com
        EnableNodeOUs: true
        Template:
          Count: 2
        Users:
          Count: 1
    

    Write the following into configtx.yaml:

    Organizations:
        - &OrdererOrg
            Name: OrdererOrg
            ID: OrdererMSP
            MSPDir: crypto-config/ordererOrganizations/example.com/msp
            Policies:
                Readers:
                    Type: Signature
                    Rule: "OR('OrdererMSP.member')"
                Writers:
                    Type: Signature
                    Rule: "OR('OrdererMSP.member')"
                Admins:
                    Type: Signature
                    Rule: "OR('OrdererMSP.admin')"
    
        - &Org1
            Name: Org1MSP
            ID: Org1MSP
            MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
            Policies:
                Readers:
                    Type: Signature
                    Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
                Writers:
                    Type: Signature
                    Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
                Admins:
                    Type: Signature
                    Rule: "OR('Org1MSP.admin')"
                Endorsement:
                    Type: Signature
                    Rule: "OR('Org1MSP.peer')"
    
            AnchorPeers:
                - Host: peer0.org1.example.com
                  Port: 7051
    
        - &Org2
            Name: Org2MSP
            ID: Org2MSP
            MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
            Policies:
                Readers:
                    Type: Signature
                    Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')"
                Writers:
                    Type: Signature
                    Rule: "OR('Org2MSP.admin', 'Org2MSP.client')"
                Admins:
                    Type: Signature
                    Rule: "OR('Org2MSP.admin')"
                Endorsement:
                    Type: Signature
                    Rule: "OR('Org2MSP.peer')"
    
            AnchorPeers:
                - Host: peer0.org2.example.com
                  Port: 7051
    
    Capabilities:
        Channel: &ChannelCapabilities
            V2_0: true
        Orderer: &OrdererCapabilities
            V2_0: true
        Application: &ApplicationCapabilities
            V2_0: true
    
    Application: &ApplicationDefaults
    
        Organizations:
    
        Policies:
            Readers:
                Type: ImplicitMeta
                Rule: "ANY Readers"
            Writers:
                Type: ImplicitMeta
                Rule: "ANY Writers"
            Admins:
                Type: ImplicitMeta
                Rule: "MAJORITY Admins"
            LifecycleEndorsement:
                Type: ImplicitMeta
                Rule: "MAJORITY Endorsement"
            Endorsement:
                Type: ImplicitMeta
                Rule: "MAJORITY Endorsement"
    
        Capabilities:
            <<: *ApplicationCapabilities
    
    Orderer: &OrdererDefaults
    
        OrdererType: etcdraft
    
        Addresses: # orderer cluster nodes
            - orderer0.example.com:7050
            - orderer1.example.com:8050
            - orderer2.example.com:7050
        # Batch Timeout: The amount of time to wait before creating a batch
        BatchTimeout: 2s
    
        # Batch Size: Controls the number of messages batched into a block
        BatchSize:
    
            MaxMessageCount: 10
    
            AbsoluteMaxBytes: 99 MB
    
            PreferredMaxBytes: 512 KB
    
        Organizations:
    
        Policies:
            Readers:
                Type: ImplicitMeta
                Rule: "ANY Readers"
            Writers:
                Type: ImplicitMeta
                Rule: "ANY Writers"
            Admins:
                Type: ImplicitMeta
                Rule: "MAJORITY Admins"
            # BlockValidation specifies what signatures must be included in the block
            # from the orderer for the peer to validate it.
            BlockValidation:
                Type: ImplicitMeta
                Rule: "ANY Writers"
    
    Channel: &ChannelDefaults
    
        Policies:
            # Who may invoke the 'Deliver' API
            Readers:
                Type: ImplicitMeta
                Rule: "ANY Readers"
            # Who may invoke the 'Broadcast' API
            Writers:
                Type: ImplicitMeta
                Rule: "ANY Writers"
            # By default, who may modify elements at this config level
            Admins:
                Type: ImplicitMeta
                Rule: "MAJORITY Admins"
    
        Capabilities:
            <<: *ChannelCapabilities
    
    Profiles:
    
        TwoOrgsChannel:
            Consortium: SampleConsortium
            <<: *ChannelDefaults
            Application:
                <<: *ApplicationDefaults
                Organizations:
                    - *Org1
                    - *Org2
                Capabilities:
                    <<: *ApplicationCapabilities
    
        SampleMultiNodeEtcdRaft:
            <<: *ChannelDefaults
            Capabilities:
                <<: *ChannelCapabilities
            Orderer:
                <<: *OrdererDefaults
                OrdererType: etcdraft
                EtcdRaft:
                    Consenters:
                    - Host: orderer0.example.com
                      Port: 7050
                      ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/server.crt
                    - Host: orderer1.example.com
                      Port: 8050
                      ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt
                    - Host: orderer2.example.com
                      Port: 7050
                      ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
                Addresses:
                    - orderer0.example.com:7050
                    - orderer1.example.com:8050
                    - orderer2.example.com:7050
                Organizations:
                - *OrdererOrg
                Capabilities:
                    <<: *OrdererCapabilities
            Application:
                <<: *ApplicationDefaults
                Organizations:
                - <<: *OrdererOrg
            Consortiums:
                SampleConsortium:
                    Organizations:
                    - *Org1
                    - *Org2
    

    On VM 1:

    Generate the certificates with the cryptogen command:

    cryptogen generate --config=./crypto-config.yaml
    

    Generate the genesis block with the configtxgen command (create the channel-artifacts directory first if it does not exist):

    configtxgen -profile SampleMultiNodeEtcdRaft -channelID multiple-deployment-channel -outputBlock ./channel-artifacts/genesis.block
    

    Generate the channel configuration transaction with the configtxgen command:

    configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
    

    Define the anchor peer for organization 1:

    configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
    

    Define the anchor peer for organization 2:

    configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
    
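The five generation commands above can be wrapped in one guarded script (same profiles and channel IDs as above). It creates channel-artifacts/ first in case configtxgen does not, and exits gracefully if the Fabric binaries are not on PATH:

```shell
# Run the generation steps above in order; stop on the first failure.
# Run from the multiple-deployment directory on VM 1.
generate_artifacts() {
    for bin in cryptogen configtxgen; do
        command -v "$bin" >/dev/null 2>&1 || { echo "skipping: $bin not on PATH"; return 0; }
    done
    mkdir -p channel-artifacts
    cryptogen generate --config=./crypto-config.yaml || return 1
    configtxgen -profile SampleMultiNodeEtcdRaft -channelID multiple-deployment-channel \
        -outputBlock ./channel-artifacts/genesis.block || return 1
    configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx \
        -channelID mychannel || return 1
    configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx \
        -channelID mychannel -asOrg Org1MSP || return 1
    configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx \
        -channelID mychannel -asOrg Org2MSP || return 1
}
generate_artifacts
```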

    5. Copy the channel-artifacts and crypto-config folders to VM 2 (for example with scp).

    6. IP-to-domain mapping (must be set on both VM 1 and VM 2)

    Add the IP-address-to-hostname mappings by editing the /etc/hosts file:

    sudo vim /etc/hosts
    

    (In vim: press i to insert, Esc when done, then :wq! to save and quit.) Add entries in the form:

    <VM1 IP> orderer0.example.com
    <VM1 IP> orderer1.example.com
    <VM2 IP> orderer2.example.com
    <VM1 IP> peer0.org1.example.com
    <VM1 IP> peer1.org1.example.com
    <VM2 IP> peer0.org2.example.com
    <VM2 IP> peer1.org2.example.com
    
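Typing the seven entries by hand on both machines is error-prone; they can be generated from two variables instead. A sketch (the two default addresses are placeholders; substitute your own):

```shell
# Generate the host mappings from the two VM addresses.
VM1_IP="${VM1_IP:-192.168.1.101}"
VM2_IP="${VM2_IP:-192.168.1.102}"
gen_hosts() {
    cat <<EOF
$VM1_IP orderer0.example.com
$VM1_IP orderer1.example.com
$VM2_IP orderer2.example.com
$VM1_IP peer0.org1.example.com
$VM1_IP peer1.org1.example.com
$VM2_IP peer0.org2.example.com
$VM2_IP peer1.org2.example.com
EOF
}
gen_hosts
# Append on each machine (requires root):
#   gen_hosts | sudo tee -a /etc/hosts
```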

    After adding the entries, restart networking:

    Ubuntu

    sudo /etc/init.d/networking restart
    

    CentOS

    sudo /etc/init.d/network restart
    

    7. Write the network startup files and start the network

    On VM 1:

    touch docker-compose-up.yaml
    

    Write the following into docker-compose-up.yaml, remembering to replace the VM IP placeholders with your machines' actual addresses:

    version: '2'
    
    services:
      orderer0.example.com:
        container_name: orderer0.example.com
        image: hyperledger/fabric-orderer
        environment:
          - FABRIC_LOGGING_SPEC=DEBUG
          - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
          - ORDERER_GENERAL_BOOTSTRAPMETHOD=file
          - ORDERER_GENERAL_BOOTSTRAPFILE=/var/hyperledger/orderer/orderer.genesis.block
          - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
          - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
          # enabled TLS
          - ORDERER_GENERAL_TLS_ENABLED=true
          - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
          - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric
        command: orderer
        volumes:
            - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
            - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
            - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
        ports:
          - 7050:7050
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
    
      orderer1.example.com:
        container_name: orderer1.example.com
        image: hyperledger/fabric-orderer
        environment:
          - FABRIC_LOGGING_SPEC=INFO
          - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
          - ORDERER_GENERAL_BOOTSTRAPMETHOD=file
          - ORDERER_GENERAL_BOOTSTRAPFILE=/var/hyperledger/orderer/orderer.genesis.block
          - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
          - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
          # enabled TLS
          - ORDERER_GENERAL_TLS_ENABLED=true
          - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
          - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric
        command: orderer
        volumes:
            - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
            - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
            - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
        ports:
          - 8050:8050
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
    
      peer0.org1.example.com:
        container_name: peer0.org1.example.com
        image: hyperledger/fabric-peer
        environment:
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          - CORE_PEER_ID=peer0.org1.example.com
          - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
          - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
          - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
          - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
          - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
          - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
          - CORE_PEER_LOCALMSPID=Org1MSP
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_GOSSIP_USELEADERELECTION=true
          - CORE_PEER_GOSSIP_ORGLEADER=false
          - CORE_PEER_PROFILE_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
          # Allow more time for chaincode container to build on install.
          - CORE_CHAINCODE_EXECUTETIMEOUT=300s
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: peer node start
        volumes:
           - /var/run/:/host/var/run/
           - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
           - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
        ports:
          - 7051:7051
          - 7052:7052
          - 7053:7053
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
    
      peer1.org1.example.com:
        container_name: peer1.org1.example.com
        image: hyperledger/fabric-peer
        environment:
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          - CORE_PEER_ID=peer1.org1.example.com
          - CORE_PEER_ADDRESS=peer1.org1.example.com:8051
          - CORE_PEER_LISTENADDRESS=0.0.0.0:8051
          - CORE_PEER_CHAINCODEADDRESS=peer1.org1.example.com:8052
          - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
          - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:8051
          - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
          - CORE_PEER_LOCALMSPID=Org1MSP
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_GOSSIP_USELEADERELECTION=true
          - CORE_PEER_GOSSIP_ORGLEADER=false
          - CORE_PEER_PROFILE_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
          # Allow more time for chaincode container to build on install.
          - CORE_CHAINCODE_EXECUTETIMEOUT=300s
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: peer node start
        volumes:
           - /var/run/:/host/var/run/
           - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp
           - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls
        ports:
          - 8051:8051
          - 8052:8052
          - 8053:8053
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
    
      cli1:
        container_name: cli1
        image: hyperledger/fabric-tools
        tty: true
        stdin_open: true
        environment:
          - GOPATH=/opt/gopath
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          #- FABRIC_LOGGING_SPEC=DEBUG
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_ID=cli1
          - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
          - CORE_PEER_LOCALMSPID=Org1MSP
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
          - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: /bin/bash
        volumes:
            - /var/run/:/host/var/run/
            - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/multiple-deployment/chaincode/go
            - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
            - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
          - "peer0.org1.example.com:<VM1 IP>"
          - "peer1.org1.example.com:<VM1 IP>"
          - "peer0.org2.example.com:<VM2 IP>"
          - "peer1.org2.example.com:<VM2 IP>"
    
      cli2:
        container_name: cli2
        image: hyperledger/fabric-tools
        tty: true
        stdin_open: true
        environment:
          - GOPATH=/opt/gopath
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          #- FABRIC_LOGGING_SPEC=DEBUG
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_ID=cli2
          - CORE_PEER_ADDRESS=peer1.org1.example.com:8051
          - CORE_PEER_LOCALMSPID=Org1MSP
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
          - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: /bin/bash
        volumes:
            - /var/run/:/host/var/run/
            - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/multiple-deployment/chaincode/go
            - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
            - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
          - "peer0.org1.example.com:<VM1 IP>"
          - "peer1.org1.example.com:<VM1 IP>"
          - "peer0.org2.example.com:<VM2 IP>"
          - "peer1.org2.example.com:<VM2 IP>"
    
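On each VM, once its docker-compose-up.yaml is in place, the containers can be brought up. A guarded sketch (it skips gracefully if docker-compose is missing or the file has not been written yet):

```shell
# Bring up this VM's containers in the background and list what is running.
start_network() {
    command -v docker-compose >/dev/null 2>&1 || { echo "docker-compose not on PATH"; return 0; }
    [ -f docker-compose-up.yaml ] || { echo "no docker-compose-up.yaml in this directory"; return 0; }
    docker-compose -f docker-compose-up.yaml up -d || return 1
    docker ps --format '{{.Names}}'
}
start_network
```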

    On VM 2:

    touch docker-compose-up.yaml
    

    Write the following into docker-compose-up.yaml, remembering to replace the VM IP placeholders with your machines' actual addresses:

    version: '2'
    
    services:
      orderer2.example.com:
        container_name: orderer2.example.com
        image: hyperledger/fabric-orderer
        environment:
          - FABRIC_LOGGING_SPEC=INFO
          - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
          - ORDERER_GENERAL_BOOTSTRAPMETHOD=file
          - ORDERER_GENERAL_BOOTSTRAPFILE=/var/hyperledger/orderer/orderer.genesis.block
          - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
          - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
          # enabled TLS
          - ORDERER_GENERAL_TLS_ENABLED=true
          - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
          - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric
        command: orderer
        volumes:
            - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
            - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
            - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
        ports:
          - 7050:7050
        extra_hosts:
          - "orderer0.example.com:<VM1 IP>"
          - "orderer1.example.com:<VM1 IP>"
          - "orderer2.example.com:<VM2 IP>"
    
      peer0.org2.example.com:
        container_name: peer0.org2.example.com
        image: hyperledger/fabric-peer
        environment:
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          - CORE_PEER_ID=peer0.org2.example.com
          - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
          - CORE_PEER_LISTENADDRESS=0.0.0.0:7051
          - CORE_PEER_CHAINCODEADDRESS=peer0.org2.example.com:7052
          - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
          - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051
          - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.example.com:7051
          - CORE_PEER_LOCALMSPID=Org2MSP
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_GOSSIP_USELEADERELECTION=true
          - CORE_PEER_GOSSIP_ORGLEADER=false
          - CORE_PEER_PROFILE_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
          # Allow more time for chaincode container to build on install.
          - CORE_CHAINCODE_EXECUTETIMEOUT=300s
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: peer node start
        volumes:
           - /var/run/:/host/var/run/
           - ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
           - ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
        ports:
          - 7051:7051
          - 7052:7052
          - 7053:7053
        extra_hosts:
          - "orderer0.example.com:<VM1-IP>"
          - "orderer1.example.com:<VM1-IP>"
          - "orderer2.example.com:<VM2-IP>"
    
      peer1.org2.example.com:
        container_name: peer1.org2.example.com
        image: hyperledger/fabric-peer
        environment:
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          - CORE_PEER_ID=peer1.org2.example.com
          - CORE_PEER_ADDRESS=peer1.org2.example.com:8051
          - CORE_PEER_LISTENADDRESS=0.0.0.0:8051
          - CORE_PEER_CHAINCODEADDRESS=peer1.org2.example.com:8052
          - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052
          - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:8051
          - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org2.example.com:8051
          - CORE_PEER_LOCALMSPID=Org2MSP
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_GOSSIP_USELEADERELECTION=true
          - CORE_PEER_GOSSIP_ORGLEADER=false
          - CORE_PEER_PROFILE_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
          # Allow more time for chaincode container to build on install.
          - CORE_CHAINCODE_EXECUTETIMEOUT=300s
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: peer node start
        volumes:
           - /var/run/:/host/var/run/
           - ./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp
           - ./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls
        ports:
          - 8051:8051
          - 8052:8052
          - 8053:8053
        extra_hosts:
          - "orderer0.example.com:<VM1-IP>"
          - "orderer1.example.com:<VM1-IP>"
          - "orderer2.example.com:<VM2-IP>"
    
      cli1:
        container_name: cli1
        image: hyperledger/fabric-tools
        tty: true
        stdin_open: true
        environment:
          - GOPATH=/opt/gopath
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          #- FABRIC_LOGGING_SPEC=DEBUG
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_ID=cli1
          - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
          - CORE_PEER_LOCALMSPID=Org2MSP
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
          - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: /bin/bash
        volumes:
            - /var/run/:/host/var/run/
            - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/multiple-deployment/chaincode/go
            - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
            - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
        extra_hosts:
          - "orderer0.example.com:<VM1-IP>"
          - "orderer1.example.com:<VM1-IP>"
          - "orderer2.example.com:<VM2-IP>"
          - "peer0.org1.example.com:<VM1-IP>"
          - "peer1.org1.example.com:<VM1-IP>"
          - "peer0.org2.example.com:<VM2-IP>"
          - "peer1.org2.example.com:<VM2-IP>"
    
      cli2:
        container_name: cli2
        image: hyperledger/fabric-tools
        tty: true
        stdin_open: true
        environment:
          - GOPATH=/opt/gopath
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
          #- FABRIC_LOGGING_SPEC=DEBUG
          - FABRIC_LOGGING_SPEC=INFO
          - CORE_PEER_ID=cli2
          - CORE_PEER_ADDRESS=peer1.org2.example.com:8051
          - CORE_PEER_LOCALMSPID=Org2MSP
          - CORE_PEER_TLS_ENABLED=true
          - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.crt
          - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.key
          - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
          - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
        command: /bin/bash
        volumes:
            - /var/run/:/host/var/run/
            - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/multiple-deployment/chaincode/go
            - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
            - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
        extra_hosts:
          - "orderer0.example.com:<VM1-IP>"
          - "orderer1.example.com:<VM1-IP>"
          - "orderer2.example.com:<VM2-IP>"
          - "peer0.org1.example.com:<VM1-IP>"
          - "peer1.org1.example.com:<VM1-IP>"
          - "peer0.org2.example.com:<VM2-IP>"
          - "peer1.org2.example.com:<VM2-IP>"
    

    Next, start the network on both virtual machines.

    On both VM1 and VM2:

    docker-compose -f docker-compose-up.yaml up -d
    

    On VM1:

    Enter the cli1 container, i.e. interact with the network as peer0.org1:

    docker exec -it cli1 bash
    

    Create the channel. It is best to wait about 15 seconds after the network comes up before running this step: the orderers need time to handshake and elect a Raft leader, and running it too early fails with a "no Raft leader" error.

    peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
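
    Because the length of the Raft election window varies, the create step can also be wrapped in a small retry helper instead of a fixed sleep. This is a sketch: the `retry` function below is not part of Fabric, and the `peer` invocation is shown as a comment because it needs the running network.

```shell
#!/usr/bin/env bash
# retry <max_attempts> <delay_seconds> <command...>
# Re-runs the command until it succeeds or max_attempts is exhausted.
retry() {
  local max=$1 delay=$2 n=1
  shift 2
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}

# Intended use inside cli1 (placeholder CA path; see the full command above):
# retry 10 3 peer channel create -o orderer0.example.com:7050 -c mychannel \
#     -f ./channel-artifacts/channel.tx --tls --cafile <orderer-tls-ca>

# Self-contained demonstration with plain shell commands:
retry 3 0 true && echo "succeeded"
retry 2 0 false || echo "gave up after 2 attempts"
```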
    

    Join Org1's peer0 to the channel mychannel:

    peer channel join -b mychannel.block
    

    Update Org1's anchor peer:

    peer channel update -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    

    Exit the cli1 container, copy mychannel.block out of it into the multiple-deployment folder, then transfer the file to VM2:

    exit
    docker cp cli1:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block ./
    

    Next, on VM2:

    Copy mychannel.block into VM2's cli1 container (the Org2 peer0 role):

    docker cp mychannel.block cli1:/opt/gopath/src/github.com/hyperledger/fabric/peer/
    

    Enter the cli1 container:

    docker exec -it cli1 bash
    

    Join the channel:

    peer channel join -b mychannel.block
    

    Update Org2's anchor peer:

    peer channel update -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/Org2MSPanchors.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    

    8. Install and invoke the chaincode (smart contract)

    Back on VM1:

    Enter the cli1 container and package the chaincode:

    docker exec -it cli1 bash
    
    peer lifecycle chaincode package mycc.tar.gz --path github.com/hyperledger/multiple-deployment/chaincode/go --lang golang --label mycc_1
    

    Install the chaincode:

    peer lifecycle chaincode install mycc.tar.gz
    

    Exit the cli1 container, extract the packaged chaincode mycc.tar.gz from it, and copy it to the multiple-deployment directory on VM2:

    exit
    docker cp cli1:/opt/gopath/src/github.com/hyperledger/fabric/peer/mycc.tar.gz ./
    

    Next, on VM2:

    Exit the cli1 container and copy the chaincode package mycc.tar.gz from the multiple-deployment directory into the cli1 container (Org2's peer0):

    exit
    docker cp mycc.tar.gz cli1:/opt/gopath/src/github.com/hyperledger/fabric/peer/
    

    Re-enter the cli1 container and install the chaincode:

    docker exec -it cli1 bash
    peer lifecycle chaincode install mycc.tar.gz
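
    The install command prints the package ID needed below; if you lose it, `peer lifecycle chaincode queryinstalled` lists it again. A sketch of pulling the ID out of that listing (the output line format and the short hash are assumptions for illustration):

```shell
# extract_package_id <label>: read `peer lifecycle chaincode queryinstalled`
# output on stdin and print the package ID for the given label.
extract_package_id() {
  sed -n "s/^Package ID: \(${1}:[0-9a-f]*\), Label: ${1}\$/\1/p"
}

# Real use (inside cli1, with the network running):
# peer lifecycle chaincode queryinstalled | extract_package_id mycc_1

# Demonstration with one sample line in the same format:
echo 'Package ID: mycc_1:242b2752, Label: mycc_1' | extract_package_id mycc_1
```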
    

    Org2 approves the chaincode definition. Remember to replace the package ID in the command below with the one printed by the install command:

    peer lifecycle chaincode approveformyorg --channelID mychannel --name mycc --version 1.0 --init-required --package-id mycc_1:242b275209a0cacb00772667b69b4ed1d6efe91dab266042b0b7047ded06adb3 --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    

    Back on VM1:

    Re-enter the cli1 container and approve the chaincode definition for Org1:

    docker exec -it cli1 bash
    
    peer lifecycle chaincode approveformyorg --channelID mychannel --name mycc --version 1.0 --init-required --package-id mycc_1:242b275209a0cacb00772667b69b4ed1d6efe91dab266042b0b7047ded06adb3 --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
    

    Check whether the chaincode definition is ready to commit:

    peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name mycc --version 1.0 --init-required --sequence 1 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --output json
    

    Commit the chaincode definition:

    peer lifecycle chaincode commit -o orderer0.example.com:7050 --channelID mychannel --name mycc --version 1.0 --sequence 1 --init-required --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
    

    Initialize the chaincode:

    peer chaincode invoke -o orderer0.example.com:7050 --isInit --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["Init","a","100","b","100"]}' --waitForEvent
    

    On VM1, query a's balance:

    peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
    

    On VM2, run the same query; both sides can see a's balance.

    On VM2, transfer 10 from a to b:

    peer chaincode invoke -o orderer0.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["invoke","a","b","10"]}' --waitForEvent
    

    After the transfer, query a's balance:

    peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
    

    The result is 90: the transfer succeeded.

    On VM1, query a's balance again:

    peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
    

    VM2's operations have been synchronized to every node in the Fabric network.

    The Hyperledger Fabric 2.x multi-host (distributed cluster) deployment succeeded.

    To shut the network down, exit the cli1 container and run:

    docker-compose -f docker-compose-up.yaml down
    

    docker-compose-up.yaml also defines the peer1 nodes of Org1 and Org2, each with a cli2 container; you can enter cli2 and join them to the channel the same way as peer0, and they will sync the ledger.

    Discussion is welcome; if you spot a mistake, please leave a comment or a private message. Thank you.


    OpenStack RDO Deployment - 1 (Automated Deployment)


    0. Environment

    OS: CentOS 6.5 amd64

    Target: OpenStack Havana release + Open vSwitch + GRE

    Networks:

    10.1.101.0/24: external traffic

    192.168.200.0/24: management traffic

    192.168.201.0/24: tunnel traffic

    Three servers are planned:

    1. Controller: Nova, Neutron, Keystone, Cinder, Glance, Nagios

    hostname: nick-controller

    ip-eth0: 10.1.101.192

    ip-eth1: 192.168.200.192

    ip-eth2: 192.168.201.192

    2. Network: the Neutron agents

    hostname: nick-network

    ip-eth0: 10.1.101.191

    ip-eth1: 192.168.200.191

    ip-eth2: 192.168.201.191

    3. Compute: nova-compute, Neutron L2 agent

    hostname: nick-compute-1

    ip-eth0: 10.1.101.190

    ip-eth1: 192.168.200.190

    ip-eth2: 192.168.201.190

    1. Operating system configuration

    (1) /etc/hosts: make sure every node can ping every other node by hostname
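
    For example, using the management addresses from the layout in section 0 (hostnames and IPs are illustrative; substitute your own):

```
# /etc/hosts -- identical on all three nodes
192.168.200.192  nick-controller
192.168.200.191  nick-network
192.168.200.190  nick-compute-1
```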

    (2) resolv.conf: make sure DNS resolution works

    (3) SELinux: disabled

    (4) sshd: in /etc/ssh/sshd_config set ListenAddress 0.0.0.0

    (5) EPEL Repo:

    # rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

    # rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

    # rpm -Uvh http://mirrors.hustunique.com/epel/6/x86_64/epel-release-6-8.noarch.rpm

    (6) RDO Repo:

    # rpm -Uvh http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-7.noarch.rpm

    (7) Update the operating system to the latest packages

    # yum -y update

    # reboot

    2. RDO installation (on the controller node)

    (1) yum install -y ntp

    (2) yum install -y openstack-packstack

    Generate the RDO answer file:

    (3) packstack --gen-answer-file my_answers.txt

    3. Edit the answer file

    # Install Nagios monitoring

    CONFIG_NAGIOS_INSTALL=y

    # For testing Cinder: mounts a local file as the volume backend

    CONFIG_CINDER_VOLUMES_CREATE=y

    # Compute node(s)

    CONFIG_NOVA_COMPUTE_HOSTS=192.168.200.190

    # Enable GRE tunnels (RDO cannot yet deploy VXLAN directly)

    CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre

    # Tunnel ID range, mapped to tenant networks

    CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:10000

    # NIC used for tunnel traffic; a dedicated NIC is recommended

    CONFIG_NEUTRON_OVS_TUNNEL_IF=eth2

    # Network node services

    CONFIG_NEUTRON_DHCP_HOSTS=192.168.200.191

    CONFIG_NEUTRON_L3_HOSTS=192.168.200.191

    CONFIG_NEUTRON_LBAAS_HOSTS=192.168.200.191

    CONFIG_NEUTRON_METADATA_HOSTS=192.168.200.191
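
    The answer-file edits above can also be scripted. A sketch (the `set_cfg` helper is not part of packstack; it only rewrites keys that already exist in the file):

```shell
# set_cfg <file> <key> <value>: rewrite a KEY=... line in a packstack answer file.
set_cfg() {
  sed -i "s|^${2}=.*|${2}=${3}|" "$1"
}

# Demonstration on a small sample file:
cat > /tmp/my_answers.txt <<'EOF'
CONFIG_NAGIOS_INSTALL=n
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=local
EOF
set_cfg /tmp/my_answers.txt CONFIG_NAGIOS_INSTALL y
set_cfg /tmp/my_answers.txt CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE gre
cat /tmp/my_answers.txt
```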

    4. Automated deployment

    packstack --answer-file my_answers.txt

    5. Post-install fixes

    (1) Horizon access

    On the controller node, edit /etc/openstack-dashboard/local_settings:

    ALLOWED_HOSTS = [ '*' ]

     

    (2) Bridge configuration

    On the network node, the OVS bridges need one-time manual setup.

    Copy the external NIC (eth0) configuration to br-ex, and clear it from eth0:

    /etc/sysconfig/network-scripts/ifcfg-br-ex

    /etc/sysconfig/network-scripts/ifcfg-eth0

    Add the br-ex and br-int bridge devices and attach eth0 to br-ex:

    # ovs-vsctl add-br br-ex

    # ovs-vsctl add-port br-ex eth0

    # ovs-vsctl add-br br-int

    # service network restart
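
    An illustrative ifcfg pair for moving the address from eth0 to br-ex (the OVS-specific keys follow the usual RHEL network-scripts conventions; the IP and gateway are assumptions based on this article's external network):

```
# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.1.101.191
NETMASK=255.255.255.0
GATEWAY=10.1.101.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
```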


    On the compute node, the OVS integration bridge also needs one-time manual setup:

    # ovs-vsctl add-br br-int

    # service network restart

     

    (3) Virtual NIC MTU: lower the guest MTU to avoid unnecessary fragmentation (GRE encapsulation adds header overhead)

    On the network node, create a dnsmasq configuration file:

    /etc/neutron/dnsmasq-neutron.conf:

    with the content: dhcp-option-force=26,1400

    Then point the DHCP agent at it in its configuration file:

    /etc/neutron/dhcp_agent.ini:

    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # service neutron-dhcp-agent restart

     

    (4) Configure noVNC

    On the compute node, in /etc/nova/nova.conf:

    --vnc_enabled=true

    --vncserver_listen=0.0.0.0

    --vncserver_proxyclient_address=192.168.200.190

    --novncproxy_base_url=http://10.1.101.190:6080/vnc_auto.html

    --xvpvncproxy_base_url=http://10.1.101.190:6081/console

     

    (5) Libvirt configuration

    On the compute node:

    /etc/libvirt/libvirtd.conf:

    listen_tls = 0

    listen_tcp = 1

    auth_tcp = "none"

    auth_tls = "none"

     

    /etc/libvirt/qemu.conf:

    cgroup_device_acl = [

    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
    ]


    /etc/sysconfig/libvirtd:
    LIBVIRTD_ARGS="-d -l"

