# Setting up a multi-orderer Fabric environment: a detailed walkthrough
## Introduction
I wrote this article almost two years ago and had only shared it inside my company; I'm posting it here today as a record.
The setup described here consists of 3 Kafka nodes, 2 orderers, and 4 peers.
The Kafka nodes need a JDK;
the peers need Go and Docker.
Because of the constraints of the target environment, everything here is installed offline; see the attachments for the installation packages.
OS: Ubuntu 16.04.3
## hosts
Copy the hosts entries to every host.
```shell
vim /etc/hosts

192.168.3.181 node1 orderer1.local
192.168.3.183 node2 orderer2.local
192.168.3.185 node3 peer0.org1.local
192.168.3.184 kafka1
192.168.3.186 kafka2
192.168.3.119 kafka3

# Distribute the hosts file to the other machines
scp /etc/hosts root@xxx.xxx.x.IP:/etc/
```

## Kafka and ZooKeeper cluster setup
These steps are performed on kafka1, kafka2, and kafka3.
### Install the JDK
```shell
mkdir /usr/local/software
# Upload the installation packages to /usr/local/software/. To make distribution easier I keep all installers there, and the later steps assume this path.
# Distribute the software directory to the other servers
scp -r /usr/local/software root@XXX.XXX.X.IP:/usr/local/
# Unpack the JDK into /usr/local/
tar zxf /usr/local/software/jdk-8u144-linux-x64.tar.gz -C /usr/local/
# Set the environment variables. To keep things simple (and avoid missing anything later), this also sets the Kafka and Go variables needed on the other servers, so it is easiest to run this block on every machine.
echo "export JAVA_HOME=/usr/local/jdk1.8.0_144" >> ~/.profile
echo 'export JRE_HOME=${JAVA_HOME}/jre' >> ~/.profile
echo 'export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib' >> ~/.profile
echo 'export PATH=${JAVA_HOME}/bin:$PATH' >> ~/.profile
echo 'export KAFKA_HOME=/usr/local/kafka_2.10-0.10.2.0' >> ~/.profile
echo 'export PATH=$KAFKA_HOME/bin:$PATH' >> ~/.profile
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
echo 'export GOPATH=/root/go' >> ~/.profile

# Make the environment variables take effect
source ~/.profile
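# Optional sanity check (not in the original write-up): the JDK unpacked above
# should now be on PATH
java -version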
```

### Install Kafka and ZooKeeper
```shell
# Unpack ZooKeeper into /usr/local/
tar zxf /usr/local/software/zookeeper-3.4.10.tar.gz -C /usr/local/
# Unpack Kafka into /usr/local/
tar zxf /usr/local/software/kafka_2.10-0.10.2.0.tgz -C /usr/local/
# Edit server.properties: change the keys below if they already exist, otherwise append them
# The commands are listed per machine; run the matching set on each broker
```

kafka1 server.properties
```shell
sed -i 's/broker.id=0/broker.id=1/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka1:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################# Variables modified or added by the commands above ################
## broker.id=1
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181
## listeners=PLAINTEXT://kafka1:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
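# Optional sanity check (not in the original write-up): confirm the values actually landed in the file
grep -E 'broker.id|listeners|zookeeper.connect|log.dirs|replication' /usr/local/kafka_2.10-0.10.2.0/config/server.properties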
```

kafka2 server.properties
```shell
sed -i 's/broker.id=0/broker.id=2/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka2:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################# Variables modified or added by the commands above ################
## broker.id=2
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181
## listeners=PLAINTEXT://kafka2:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```

kafka3 server.properties
```shell
sed -i 's/broker.id=0/broker.id=3/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka3:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################# Variables modified or added by the commands above #################
## broker.id=3
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181
## listeners=PLAINTEXT://kafka3:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```

Modify the ZooKeeper configuration that ships with Kafka and create the myid file.
The commands are again listed per machine; run the matching set on each node.
```shell
# Create the Kafka and ZooKeeper data directories; they only need to match log.dirs / dataDir in the properties files
mkdir -p /data/kafka-logs
mkdir -p /data/zookeeper
```

zookeeper 1
```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 1 > /data/zookeeper/myid

################# Variables modified or added by the commands above #################
## zookeeper.properties
### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## Create myid with the value 1
```

zookeeper 2
```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 2 > /data/zookeeper/myid

################# Variables modified or added by the commands above #################
## zookeeper.properties
### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## Create myid with the value 2
```

zookeeper 3
```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >> /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 3 > /data/zookeeper/myid

################# Variables modified or added by the commands above #################
## zookeeper.properties
### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## Create myid with the value 3
```

Additional reference: optional server.properties parameters
```shell
broker.id=0 # Unique ID of this broker in the cluster, same idea as ZooKeeper's myid
port=19092 # Port Kafka serves on; the default is 9092
host.name=192.168.7.100 # Disabled by default; version 0.8.1 had a DNS-resolution bug that caused failures
num.network.threads=3 # Number of threads the broker uses for network processing
num.io.threads=8 # Number of threads the broker uses for disk I/O
log.dirs=/opt/kafka/kafkalogs/ # Directory where messages are stored; may be a comma-separated list. num.io.threads should not be smaller than the number of directories. With multiple directories, a new topic partition is persisted to whichever directory currently holds the fewest partitions
socket.send.buffer.bytes=102400 # Send buffer size; data is buffered until it reaches this size before being sent, which improves performance
socket.receive.buffer.bytes=102400 # Receive buffer size; data is buffered until it reaches a certain size before being written to disk
socket.request.max.bytes=104857600 # Maximum size of a request sent to or received from Kafka; must not exceed the JVM heap size
num.partitions=1 # Default number of partitions per topic
log.retention.hours=168 # Default maximum retention time for messages: 168 hours (7 days)
message.max.bytes=5242880 # Maximum message size: 5 MB
default.replication.factor=2 # Number of replicas kept per message, so another replica can keep serving if one fails
replica.fetch.max.bytes=5242880 # Maximum number of bytes fetched per replica request
log.segment.bytes=1073741824 # Kafka appends messages to segment files; when a file exceeds this size a new one is started
log.retention.check.interval.ms=300000 # Check every 300000 ms whether segments have exceeded log.retention.hours (168) and delete expired ones
log.cleaner.enable=false # Whether to enable log compaction; usually left off, though enabling it can improve performance
zookeeper.connect=192.168.7.100:12181,192.168.7.101:12181,192.168.7.107:12181 # ZooKeeper connection string
```

Start the services
```shell
# Start ZooKeeper
nohup zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties >> ~/zookeeper.log 2>&1 &
# Start Kafka
nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties >> ~/kafka.log 2>&1 &

# Check that the Kafka and ZooKeeper processes are running
jps
1462 Jps
1193 Kafka
937 QuorumPeerMain

# Kafka test:
# (1) On one machine, create the topic and start a producer:
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper kafka1:2181 --replication-factor 2 --partitions 1 --topic test
$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test
# (2) Consume the messages on the other two machines
$KAFKA_HOME/bin/kafka-console-consumer.sh --zookeeper kafka1:2181 --topic test --from-beginning
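# Optional check (not in the original write-up; requires netcat): query ZooKeeper
# with the "stat" four-letter command to confirm the quorum is serving requests
echo stat | nc kafka1 2181 | head -n 5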
```

Common Kafka deployment problems
```shell
# Check that listeners is set in server.properties
# Check that the log.dirs directory exists and is readable/writable
# Check that the initial value was written to the myid file
# Check that there is enough memory

# Other useful Kafka and ZooKeeper commands
# List all topics
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper kafka1:2181 --list
# Describe a single topic (test)
$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper kafka1:2181 --topic test

# Stop Kafka
kafka-server-stop.sh
# Stop ZooKeeper
zookeeper-server-stop.sh
```

The following steps are performed on node3, node4, node5, and node6.
## Install Go
```shell
# Unpack the Go tarball
tar zxf /usr/local/software/go1.9.linux-amd64.tar.gz -C /usr/local/
```

## Install Docker
```shell
# Install Docker from the .deb packages (Ubuntu .deb installers)
dpkg -i /usr/local/software/*.deb

# Start docker
service docker start
# Check the installed version
docker version

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May 4 22:06:06 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May 4 22:06:06 2017
 OS/Arch:      linux/amd64
 Experimental: false
# Load the Fabric images
docker load -i /usr/local/software/fabric-javaenv.tar.gz
docker load -i /usr/local/software/fabric-ccenv.tar.gz
docker load -i /usr/local/software/fabric-baseimage.tar.gz
docker load -i /usr/local/software/fabric-baseos.tar.gz
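# Optional check (not in the original write-up): confirm the images were loaded
docker images | grep fabric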
```

## Fabric configuration
### Configuration file overview
#### orderer.yaml
The configuration file the orderer reads at startup.
```yaml
---
################################################################################
#
# Orderer Configuration
#
# - This controls the type and configuration of the orderer.
#
################################################################################
General:

# Ledger Type: The ledger type to provide to the orderer.
# Two non-production ledger types are provided for test purposes only:
# - ram: An in-memory ledger whose contents are lost on restart.
# - json: A simple file ledger that writes blocks to disk in JSON format.
# Only one production ledger type is provided:
# - file: A production file-based ledger.
LedgerType: file

# Listen address: The IP on which to bind to listen.
ListenAddress: 127.0.0.1

# Listen port: The port on which to bind to listen.
ListenPort: 7050

# TLS: TLS settings for the GRPC server.
TLS:
Enabled: false
PrivateKey: tls/server.key
Certificate: tls/server.crt
RootCAs:
- tls/ca.crt
ClientAuthEnabled: false
ClientRootCAs:

# Log Level: The level at which to log. This accepts logging specifications
# per: fabric/docs/Setup/logging-control.md
LogLevel: info

# Genesis method: The method by which the genesis block for the orderer
# system channel is specified. Available options are "provisional", "file":
# - provisional: Utilizes a genesis profile, specified by GenesisProfile,
# to dynamically generate a new genesis block.
# - file: Uses the file provided by GenesisFile as the genesis block.
GenesisMethod: provisional

# Genesis profile: The profile to use to dynamically generate the genesis
# block to use when initializing the orderer system channel and
# GenesisMethod is set to "provisional". See the configtx.yaml file for the
# descriptions of the available profiles. Ignored if GenesisMethod is set to
# "file".
GenesisProfile: SampleInsecureSolo

# Genesis file: The file containing the genesis block to use when
# initializing the orderer system channel and GenesisMethod is set to
# "file". Ignored if GenesisMethod is set to "provisional".
GenesisFile: genesisblock

# LocalMSPDir is where to find the private crypto material needed by the
# orderer. It is set relative here as a default for dev environments but
# should be changed to the real location in production.
LocalMSPDir: msp

# LocalMSPID is the identity to register the local MSP material with the MSP
# manager. IMPORTANT: The local MSP ID of an orderer needs to match the MSP
# ID of one of the organizations defined in the orderer system channel's
# /Channel/Orderer configuration. The sample organization defined in the
# sample configuration provided has an MSP ID of "DEFAULT".
LocalMSPID: DEFAULT

# Enable an HTTP service for Go "pprof" profiling as documented at:
# https://golang.org/pkg/net/http/pprof
Profile:
Enabled: false
Address: 0.0.0.0:6060

# BCCSP configures the blockchain crypto service providers.
BCCSP:
# Default specifies the preferred blockchain crypto service provider
# to use. If the preferred provider is not available, the software
# based provider ("SW") will be used.
# Valid providers are:
# - SW: a software based crypto provider
# - PKCS11: a CA hardware security module crypto provider.
Default: SW

# SW configures the software based blockchain crypto provider.
SW:
# TODO: The default Hash and Security level needs refactoring to be
# fully configurable. Changing these defaults requires coordination
# SHA2 is hardcoded in several places, not only BCCSP
Hash: SHA2
Security: 256
# Location of key store. If this is unset, a location will be
# chosen using: 'LocalMSPDir'/keystore
FileKeyStore:
KeyStore:

################################################################################
#
# SECTION: File Ledger
#
# - This section applies to the configuration of the file or json ledgers.
#
################################################################################
FileLedger:

# Location: The directory to store the blocks in.
# NOTE: If this is unset, a new temporary location will be chosen every time
# the orderer is restarted, using the prefix specified by Prefix.
Location: /var/hyperledger/production/orderer

# The prefix to use when generating a ledger directory in temporary space.
# Otherwise, this value is ignored.
Prefix: hyperledger-fabric-ordererledger

################################################################################
#
# SECTION: RAM Ledger
#
# - This section applies to the configuration of the RAM ledger.
#
################################################################################
RAMLedger:

# History Size: The number of blocks that the RAM ledger is set to retain.
# WARNING: Appending a block to the ledger might cause the oldest block in
# the ledger to be dropped in order to limit the number total number blocks
# to HistorySize. For example, if history size is 10, when appending block
# 10, block 0 (the genesis block!) will be dropped to make room for block 10.
HistorySize: 1000

################################################################################
#
# SECTION: Kafka
#
# - This section applies to the configuration of the Kafka-based orderer, and
# its interaction with the Kafka cluster.
#
################################################################################
Kafka:

# Retry: What to do if a connection to the Kafka cluster cannot be established,
# or if a metadata request to the Kafka cluster needs to be repeated.
Retry:
# When a new channel is created, or when an existing channel is reloaded
# (in case of a just-restarted orderer), the orderer interacts with the
# Kafka cluster in the following ways:
# 1. It creates a Kafka producer (writer) for the Kafka partition that
# corresponds to the channel.
# 2. It uses that producer to post a no-op CONNECT message to that
# partition
# 3. It creates a Kafka consumer (reader) for that partition.
# If any of these steps fail, they will be re-attempted every
# <ShortInterval> for a total of <ShortTotal>, and then every
# <LongInterval> for a total of <LongTotal> until they succeed.
# Note that the orderer will be unable to write to or read from a
# channel until all of the steps above have been completed successfully.
ShortInterval: 5s
ShortTotal: 10m
LongInterval: 5m
LongTotal: 12h
# Affects the socket timeouts when waiting for an initial connection, a
# response, or a transmission. See Config.Net for more info:
# https://godoc.org/github.com/Shopify/sarama#Config
NetworkTimeouts:
DialTimeout: 10s
ReadTimeout: 10s
WriteTimeout: 10s
# Affects the metadata requests when the Kafka cluster is in the middle
# of a leader election.See Config.Metadata for more info:
# https://godoc.org/github.com/Shopify/sarama#Config
Metadata:
RetryBackoff: 250ms
RetryMax: 3
# What to do if posting a message to the Kafka cluster fails. See
# Config.Producer for more info:
# https://godoc.org/github.com/Shopify/sarama#Config
Producer:
RetryBackoff: 100ms
RetryMax: 3
# What to do if reading from the Kafka cluster fails. See
# Config.Consumer for more info:
# https://godoc.org/github.com/Shopify/sarama#Config
Consumer:
RetryBackoff: 2s

# Verbose: Enable logging for interactions with the Kafka cluster.
Verbose: false

# TLS: TLS settings for the orderer's connection to the Kafka cluster.
TLS:

# Enabled: Use TLS when connecting to the Kafka cluster.
Enabled: false

# PrivateKey: PEM-encoded private key the orderer will use for
# authentication.
PrivateKey:
# As an alternative to specifying the PrivateKey here, uncomment the
# following "File" key and specify the file name from which to load the
# value of PrivateKey.
#File: path/to/PrivateKey

# Certificate: PEM-encoded signed public key certificate the orderer will
# use for authentication.
Certificate:
# As an alternative to specifying the Certificate here, uncomment the
# following "File" key and specify the file name from which to load the
# value of Certificate.
#File: path/to/Certificate

# RootCAs: PEM-encoded trusted root certificates used to validate
# certificates from the Kafka cluster.
RootCAs:
# As an alternative to specifying the RootCAs here, uncomment the
# following "File" key and specify the file name from which to load the
# value of RootCAs.
#File: path/to/RootCAs

# Kafka version of the Kafka cluster brokers (defaults to 0.9.0.1)
Version:
```
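In practice most of these settings are overridden per host through environment variables rather than by editing the file on every node: the orderer maps variables of the form `ORDERER_<SECTION>_<KEY>` onto the YAML keys. A minimal sketch of the kind of overrides a startup script would set for this deployment (the values and paths are illustrative; the genesis block path assumes the channel-artifacts directory generated later by generateArtifacts.sh):

```shell
# Example orderer overrides; names follow the ORDERER_<SECTION>_<KEY> convention
export ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
export ORDERER_GENERAL_LOCALMSPID=OrdererMSP
export ORDERER_GENERAL_LOCALMSPDIR=/opt/fabric/configs/crypto-config/ordererOrganizations/orderer.local/orderers/orderer1.local/msp
export ORDERER_GENERAL_GENESISMETHOD=file
export ORDERER_GENERAL_GENESISFILE=/opt/fabric/configs/channel-artifacts/genesis.block
```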
#### crypto-config.yaml

Generates the network topology and certificates.

This file generates a certificate tree for each organization and its members. Each organization gets a root certificate (ca-cert), and peers and orderers are bound to their organization through it. Transactions and communications in Fabric are signed with a participant's private key (keystore) and verified with the corresponding public key (signcerts). Finally, `Count: 1` under `Users` means each template gets one ordinary user in addition to the Admin (the Admin is not included in this count), so we end up with a single user such as User1@org2.local. Adjust this file as needed to add or remove organizations and users.
```yaml
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
# ---------------------------------------------------------------------------
# Orderer
# ---------------------------------------------------------------------------
- Name: Orderer
Domain: orderer.local
# ---------------------------------------------------------------------------
# "Specs" - See PeerOrgs below for complete description
# ---------------------------------------------------------------------------
Specs:
- Hostname: orderer1.local
CommonName: orderer1.local
- Hostname: orderer2.local
CommonName: orderer2.local
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
# ---------------------------------------------------------------------------
# Org1
# ---------------------------------------------------------------------------
- Name: Org1
Domain: org1.local
# ---------------------------------------------------------------------------
# "Specs"
# ---------------------------------------------------------------------------
# Uncomment this section to enable the explicit definition of hosts in your
# configuration. Most users will want to use Template, below
#
# Specs is an array of Spec entries. Each Spec entry consists of two fields:
# - Hostname: (Required) The desired hostname, sans the domain.
# - CommonName: (Optional) Specifies the template or explicit override for
# the CN. By default, this is the template:
#
# "{{.Hostname}}.{{.Domain}}"
#
# which obtains its values from the Spec.Hostname and
# Org.Domain, respectively.
# ---------------------------------------------------------------------------
# Specs:
# - Hostname: foo # implicitly "foo.org1.example.com"
# CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
# - Hostname: bar
# - Hostname: baz
Specs:
- Hostname: peer0.org1.local
CommonName: peer0.org1.local
# ---------------------------------------------------------------------------
# "Template"
# ---------------------------------------------------------------------------
# Allows for the definition of 1 or more hosts that are created sequentially
# from a template. By default, this looks like "peer%d" from 0 to Count-1.
# You may override the number of nodes (Count), the starting index (Start)
# or the template used to construct the name (Hostname).
#
# Note: Template and Specs are not mutually exclusive. You may define both
# sections and the aggregate nodes will be created for you. Take care with
# name collisions
# ---------------------------------------------------------------------------
Template:
Count: 2
# Start: 5
# Hostname: {{.Prefix}}{{.Index}} # default
# ---------------------------------------------------------------------------
# "Users"
# ---------------------------------------------------------------------------
# Count: The number of user accounts _in addition_ to Admin
# ---------------------------------------------------------------------------
Users:
Count: 1
# ---------------------------------------------------------------------------
# Org2: See "Org1" for full specification
# ---------------------------------------------------------------------------
- Name: Org2
Domain: org2.local
Specs:
- Hostname: peer0.org2.local
CommonName: peer0.org2.local
Template:
Count: 2
Users:
Count: 1
```
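The generateArtifacts.sh script used later consumes this file through the cryptogen tool. For reference, the underlying call looks roughly like this (a sketch; it assumes cryptogen is on PATH and is run from the directory containing crypto-config.yaml):

```shell
# Generate MSP material for every org defined in crypto-config.yaml.
# The output lands in ./crypto-config/, which configtx.yaml below points at.
cryptogen generate --config=./crypto-config.yaml
```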
#### configtx.yaml
Generates the orderer genesis block and channel artifacts.

This file contains the definition of the network: two member organizations (Org1 and Org2), each managing and maintaining two peers. At the top, in the Profiles section, there are two profiles: one for the orderer genesis block (TwoOrgsOrdererGenesis) and one for the channel (TwoOrgsChannel). These two names matter because we pass them as parameters when generating the artifacts. The file also records the MSP directory location of each member; the crypto-config directory contains each entity's admin certificate, CA certificate, signing certificate, and private key.
```yaml
---
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:

TwoOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
TwoOrgsChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2

################################################################################
#
# Section: Organizations
#
# - This section defines the different organizational identities which will
# be referenced later in the configuration.
#
################################################################################
Organizations:

# SampleOrg defines an MSP using the sampleconfig. It should never be used
# in production but may be used as a template for other definitions
- &OrdererOrg
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: OrdererOrg

# ID to load the MSP definition as
ID: OrdererMSP

# MSPDir is the filesystem path which contains the MSP configuration
MSPDir: crypto-config/ordererOrganizations/orderer.local/msp
- &Org1
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: Org1MSP

# ID to load the MSP definition as
ID: Org1MSP
MSPDir: crypto-config/peerOrganizations/org1.local/msp
AnchorPeers:
# AnchorPeers defines the location of peers which can be used
# for cross org gossip communication. Note, this value is only
# encoded in the genesis block in the Application section context
# - Host: peer0.org1.local
# Port: 7051
# - Host: peer1.org1.local
# Port: 7051

- &Org2
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: Org2MSP

# ID to load the MSP definition as
ID: Org2MSP
MSPDir: crypto-config/peerOrganizations/org2.local/msp
AnchorPeers:
# AnchorPeers defines the location of peers which can be used
# for cross org gossip communication. Note, this value is only
# encoded in the genesis block in the Application section context
# - Host: peer0.org2.local
# Port: 7051
# - Host: peer1.org2.local
# Port: 7051
################################################################################
#
# SECTION: Orderer
#
# - This section defines the values to encode into a config transaction or
# genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

# Orderer Type: The orderer implementation to start
# Available types are "solo" and "kafka"
OrdererType: kafka

Addresses:
- orderer1.local:7050
- orderer2.local:7050
#- orderer3.local:7050
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s

# Batch Size: Controls the number of messages batched into a block
BatchSize:

# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 10

# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 99 MB

# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB

Kafka:
# Brokers: A list of Kafka brokers to which the orderer connects
# NOTE: Use IP:port notation
Brokers:
- kafka1:9092
- kafka2:9092
- kafka3:9092

# Organizations is the list of orgs which are defined as participants on
# the orderer side of the network
Organizations:

################################################################################
#
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

# Organizations is the list of orgs which are defined as participants on
# the application side of the network
Organizations:
```
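These are the profile names that generateArtifacts.sh passes to configtxgen later on. For reference, the underlying calls look roughly like this (a sketch; it assumes FABRIC_CFG_PATH points at the directory holding this configtx.yaml, and the channel name `mychannel` is only an example):

```shell
# Orderer system-channel genesis block
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# Channel creation transaction for the application channel
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
```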
#### core.yaml

The configuration file the peer reads at startup.
```yaml
###############################################################################
#
# LOGGING section
#
###############################################################################
logging:

# Default logging levels are specified here.
# Valid logging levels are case-insensitive strings chosen from
# CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG
# The overall default logging level can be specified in various ways,
# listed below from strongest to weakest:
#
# 1. The --logging-level=<level> command line option overrides all other
# default specifications.
#
# 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to
# all peer commands if defined as a non-empty string.
#
# 3. The value of peer that directly follows in this file. It can also
# be set via the environment variable CORE_LOGGING_PEER.
#
# If no overall default level is provided via any of the above methods,
# the peer will default to INFO (the value of defaultLevel in
# common/flogging/logging.go)

# Default for all modules running within the scope of a peer.
# Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL
# are not set
peer: info

# The overall default values mentioned above can be overridden for the
# specific components listed in the override section below.

# Override levels for various peer modules. These levels will be
# applied once the peer has completely started. They are applied at this
# time in order to be sure every logger has been registered with the
# logging package.
# Note: the modules listed below are the only acceptable modules at this
# time.
cauthdsl: warning
gossip: warning
ledger: info
msp: warning
policies: warning
grpc: error

# Message format for the peer logs
format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
# Peer section
#
###############################################################################
peer:

# The Peer id is used for identifying this Peer instance.
id: jdoe

# The networkId allows for logical seperation of networks
networkId: dev

# The Address at local network interface this Peer will listen on.
# By default, it will listen on all network interfaces
listenAddress: 0.0.0.0:7051

# The endpoint this peer uses to listen for inbound chaincode connections.
#
# The chaincode connection does not support TLS-mutual auth. Having a
# separate listener for the chaincode helps isolate the chaincode
# environment for enhanced security, so it is strongly recommended to
# uncomment chaincodeListenAddress and specify a protected endpoint.
#
# If chaincodeListenAddress is not configured or equals to the listenAddress,
# listenAddress will be used for chaincode connections. This is not
# recommended for production.
#
# chaincodeListenAddress: 127.0.0.1:7052

# When used as peer config, this represents the endpoint to other peers
# in the same organization for peers in other organization, see
# gossip.externalEndpoint for more info.
# When used as CLI config, this means the peer's endpoint to interact with
address: 0.0.0.0:7051

# Whether the Peer should programmatically determine its address
# This case is useful for docker containers.
addressAutoDetect: false

# Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the
# current setting
gomaxprocs: -1

# Gossip related configuration
gossip:
# Bootstrap set to initialize gossip with.
# This is a list of other peers that this peer reaches out to at startup.
# Important: The endpoints here have to be endpoints of peers in the same
# organization, because the peer would refuse connecting to these endpoints
# unless they are in the same organization as the peer.
bootstrap: 127.0.0.1:7051

# NOTE: orgLeader and useLeaderElection parameters are mutual exclusive.
# Setting both to true would result in the termination of the peer
# since this is undefined state. If the peers are configured with
# useLeaderElection=false, make sure there is at least 1 peer in the
# organization that its orgLeader is set to true.

# Defines whenever peer will initialize dynamic algorithm for
# "leader" selection, where leader is the peer to establish
# connection with ordering service and use delivery protocol
# to pull ledger blocks from ordering service. It is recommended to
# use leader election for large networks of peers.
useLeaderElection: false
# Statically defines peer to be an organization "leader",
# where this means that current peer will maintain connection
# with ordering service and disseminate block across peers in
# its own organization
orgLeader: true

# Overrides the endpoint that the peer publishes to peers
# in its organization. For peers in foreign organizations
# see 'externalEndpoint'
endpoint:
# Maximum count of blocks stored in memory
maxBlockCountToStore: 100
# Max time between consecutive message pushes(unit: millisecond)
maxPropagationBurstLatency: 10ms
# Max number of messages stored until a push is triggered to remote peers
maxPropagationBurstSize: 10
# Number of times a message is pushed to remote peers
propagateIterations: 1
# Number of peers selected to push messages to
propagatePeerNum: 3
# Determines frequency of pull phases(unit: second)
pullInterval: 4s
# Number of peers to pull from
pullPeerNum: 3
# Determines frequency of pulling state info messages from peers(unit: second)
requestStateInfoInterval: 4s
# Determines frequency of pushing state info messages to peers(unit: second)
publishStateInfoInterval: 4s
# Maximum time a stateInfo message is kept until expired
stateInfoRetentionInterval:
# Time from startup certificates are included in Alive messages(unit: second)
publishCertPeriod: 10s
# Should we skip verifying block messages or not (currently not in use)
skipBlockVerification: false
# Dial timeout(unit: second)
dialTimeout: 3s
# Connection timeout(unit: second)
connTimeout: 2s
# Buffer size of received messages
recvBuffSize: 20
# Buffer size of sending messages
sendBuffSize: 200
# Time to wait before pull engine processes incoming digests (unit: second)
digestWaitTime: 1s
# Time to wait before pull engine removes incoming nonce (unit: second)
requestWaitTime: 1s
# Time to wait before pull engine ends pull (unit: second)
responseWaitTime: 2s
# Alive check interval(unit: second)
aliveTimeInterval: 5s
# Alive expiration timeout(unit: second)
aliveExpirationTimeout: 25s
# Reconnect interval(unit: second)
reconnectInterval: 25s
# This is an endpoint that is published to peers outside of the organization.
# If this isn't set, the peer will not be known to other organizations.
externalEndpoint:
# Leader election service configuration
election:
# Longest time peer waits for stable membership during leader election startup (unit: second)
startupGracePeriod: 15s
# Interval gossip membership samples to check its stability (unit: second)
membershipSampleInterval: 1s
# Time passes since last declaration message before peer decides to perform leader election (unit: second)
leaderAliveThreshold: 10s
# Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second)
leaderElectionDuration: 5s

# EventHub related configuration
events:
# The address that the Event service will be enabled on the peer
address: 0.0.0.0:7053

# total number of events that could be buffered without blocking send
buffersize: 100

# timeout duration for producer to send an event.
# if < 0, if buffer full, unblocks immediately and not send
# if 0, if buffer full, will block and guarantee the event will be sent out
# if > 0, if buffer full, blocks till timeout
timeout: 10ms

# TLS Settings
# Note that peer-chaincode connections through chaincodeListenAddress is
# not mutual TLS auth. See comments on chaincodeListenAddress for more info
tls:
enabled: false
cert:
file: tls/server.crt
key:
file: tls/server.key
rootcert:
file: tls/ca.crt

# The server name use to verify the hostname returned by TLS handshake
serverhostoverride:

# Path on the file system where peer will store data (eg ledger). This
# location must be access control protected to prevent unintended
# modification that might corrupt the peer operations.
fileSystemPath: /var/hyperledger/production

# BCCSP (Blockchain crypto provider): Select which crypto implementation or
# library to use
BCCSP:
Default: SW
SW:
# TODO: The default Hash and Security level needs refactoring to be
# fully configurable. Changing these defaults requires coordination
# SHA2 is hardcoded in several places, not only BCCSP
Hash: SHA2
Security: 256
# Location of Key Store
FileKeyStore:
# If "", defaults to 'mspConfigPath'/keystore
# TODO: Ensure this is read with fabric/core/config.GetPath() once ready
KeyStore:

# Path on the file system where peer will find MSP local configurations
mspConfigPath: msp

# Identifier of the local MSP
# ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!----
# Deployers need to change the value of the localMspId string.
# In particular, the name of the local MSP ID of a peer needs
# to match the name of one of the MSPs in each of the channel
# that this peer is a member of. Otherwise this peer's messages
# will not be identified as valid by other nodes.
localMspId: DEFAULT

# Used with Go profiling tools only in none production environment. In
# production, it should be disabled (eg enabled: false)
profile:
enabled: false
listenAddress: 0.0.0.0:6060

###############################################################################
#
# VM section
#
###############################################################################
vm:

# Endpoint of the vm management system. For docker can be one of the following in general
# unix:///var/run/docker.sock
# http://localhost:2375
# https://localhost:2376
endpoint: unix:///var/run/docker.sock

# settings for docker vms
docker:
tls:
enabled: false
ca:
file: docker/ca.crt
cert:
file: docker/tls.crt
key:
file: docker/tls.key

# Enables/disables the standard out/err from chaincode containers for
# debugging purposes
attachStdout: false

# Parameters on creating docker container.
# Container may be efficiently created using ipam & dns-server for cluster
# NetworkMode - sets the networking mode for the container. Supported
# standard values are: `host`(default),`bridge`,`ipvlan`,`none`.
# Dns - a list of DNS servers for the container to use.
# Note: `Privileged` `Binds` `Links` and `PortBindings` properties of
# Docker Host Config are not supported and will not be used if set.
# LogConfig - sets the logging driver (Type) and related options
# (Config) for Docker. For more info,
# https://docs.docker.com/engine/admin/logging/overview/
# Note: Set LogConfig using Environment Variables is not supported.
hostConfig:
NetworkMode: host
Dns:
# - 192.168.0.1
LogConfig:
Type: json-file
Config:
max-size: "50m"
max-file: "5"
Memory: 2147483648

###############################################################################
#
# Chaincode section
#
###############################################################################
chaincode:
# This is used if chaincode endpoint resolution fails with the
# chaincodeListenAddress property
peerAddress:

# The id is used by the Chaincode stub to register the executing Chaincode
# ID with the Peer and is generally supplied through ENV variables
# the `path` form of ID is provided when installing the chaincode.
# The `name` is used for all other requests and can be any string.
id:
path:
name:

# Generic builder environment, suitable for most chaincode types
builder: $(DOCKER_NS)/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION)

golang:
# golang will never need more than baseos
runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

car:
# car may need more facilities (JVM, etc) in the future as the catalog
# of platforms are expanded. For now, we can just use baseos
runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

java:
# This is an image based on java:openjdk-8 with addition compiler
# tools added for java shim layer packaging.
# This image is packed with shim layer libraries that are necessary
# for Java chaincode runtime.
Dockerfile: |
from $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)

# Timeout duration for starting up a container and waiting for Register
# to come through. 1sec should be plenty for chaincode unit tests
startuptimeout: 300s

# Timeout duration for Invoke and Init calls to prevent runaway.
# This timeout is used by all chaincodes in all the channels, including
# system chaincodes.
# Note that during Invoke, if the image is not available (e.g. being
# cleaned up when in development environment), the peer will automatically
# build the image, which might take more time. In production environment,
# the chaincode image is unlikely to be deleted, so the timeout could be
# reduced accordingly.
executetimeout: 30s

# There are 2 modes: "dev" and "net".
# In dev mode, user runs the chaincode after starting peer from
# command line on local machine.
# In net mode, peer will run chaincode in a docker container.
mode: net

# keepalive in seconds. In situations where the communiction goes through a
# proxy that does not support keep-alive, this parameter will maintain connection
# between peer and chaincode.
# A value <= 0 turns keepalive off
keepalive: 0

# system chaincodes whitelist. To add system chaincode "myscc" to the
# whitelist, add "myscc: enable" to the list below, and register in
# chaincode/importsysccs.go
system:
cscc: enable
lscc: enable
escc: enable
vscc: enable
qscc: enable

# Logging section for the chaincode container
logging:
# Default level for all loggers within the chaincode container
level: info
# Override default level for the 'shim' module
shim: warning
# Format for the chaincode container logs
format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
# Ledger section - ledger configuration encompases both the blockchain
# and the state
#
###############################################################################
ledger:blockchain:
state:
# stateDatabase - options are "goleveldb", "CouchDB"
# goleveldb - default state database stored in goleveldb.
# CouchDB - store state database in CouchDB
stateDatabase: goleveldb
couchDBConfig:
# It is recommended to run CouchDB on the same server as the peer, and
# not map the CouchDB container port to a server port in docker-compose.
# Otherwise proper security must be provided on the connection between
# CouchDB client (on the peer) and server.
couchDBAddress: 127.0.0.1:5984
# This username must have read and write authority on CouchDB
username:
# The password is recommended to pass as an environment variable
# during start up (eg LEDGER_COUCHDBCONFIG_PASSWORD).
# If it is stored here, the file must be access control protected
# to prevent unintended users from discovering the password.
password:
# Number of retries for CouchDB errors
maxRetries: 3
# Number of retries for CouchDB errors during peer startup
maxRetriesOnStartup: 10
# CouchDB request timeout (unit: duration, e.g. 20s)
requestTimeout: 35s
# Limit on the number of records to return per query
queryLimit: 10000
history:
# enableHistoryDatabase - options are true or false
# Indicates if the history of key updates should be stored.
# All history 'index' will be stored in goleveldb, regardless if using
# CouchDB or alternate database for the state.
enableHistoryDatabase: true
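# Like the rest of core.yaml, the ledger settings above can usually be
# overridden at peer start-up with CORE_-prefixed environment variables
# instead of editing this file. A minimal sketch, assuming the standard
# Viper-style mapping used by Fabric 1.x (verify against your peer version
# before relying on it):
#   CORE_LEDGER_STATE_STATEDATABASE=CouchDB
#   CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=127.0.0.1:5984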
```

### Generating the configuration
The four files above are the core Fabric configuration files; in most cases you only need to adjust the relevant parameters in them.
Next we generate the configuration artifacts with a script.
```shell
# Create the fabric directory
mkdir -p /opt/fabric
# Unpack the required files
tar zxf /usr/local/software/hyperledger-fabric-linux-amd64-1.0.3.tar.gz -C /opt/fabric/
tar zxf /usr/local/software/fabric.tar.gz -C /opt/
######### The certificates and keys must be identical everywhere, so run the following steps on one machine only and then distribute the results. begin
# Enter the fabric configs directory
cd /opt/fabric/configs/
# Run the generateArtifacts.sh script
../scripts/generateArtifacts.sh
# The script does two things:
# 1. Based on crypto-config.yaml it generates the keys and certificates and stores them in a crypto-config folder created in the current path
# 2. It generates the genesis block and the channel artifacts and stores them in the generated channel-artifacts folder
# If you are curious, inspect the generated directory structure yourself; it is not listed here
# Distribute the whole fabric directory to all orderer and peer servers
scp -r /opt/fabric root@node"X":/opt/
######### end #########
# Files needed by the chaincode
mkdir -p ~/go/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
cp /usr/local/software/chaincode_example02.go ~/go/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02/
```
The peer start script defaults to peer0.org1.local, so we need to modify it on the other three nodes:
node4 peer1.org1.local
```shell
sed -i 's/peer0/peer1/g' /opt/fabric/start-peer.sh
```
node5 peer0.org2.local
```shell
sed -i 's/org1/org2/g' /opt/fabric/start-peer.sh
sed -i 's/Org1/Org2/g' /opt/fabric/start-peer.sh
```
node6 peer1.org2.local
```shell
sed -i 's/peer0/peer1/g' /opt/fabric/start-peer.sh
sed -i 's/org1/org2/g' /opt/fabric/start-peer.sh
sed -i 's/Org1/Org2/g' /opt/fabric/start-peer.sh
```
At this point the peer environment is ready. We do not start the peers yet; they will be started after the orderers are set up and running.
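Before moving on, it can help to confirm that the sed edits above actually landed in each node's start script. A minimal sketch, assuming only that start-peer.sh contains the peer and org names the sed commands rewrite:

```shell
# Print every line of the start script that mentions a peer or org name;
# on node4 you should see peer1/org1, on node5 peer0/org2, on node6 peer1/org2
grep -nE 'peer[01]|[Oo]rg[12]' /opt/fabric/start-peer.sh
```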
## Installing the orderer
node1, node2
```shell
# The packaged script is the start script for node1 (orderer1); on node2 (orderer2) it has to be adjusted first
# On node2 only, run: sed -i 's/orderer1/orderer2/g' /opt/fabric/start-orderer.sh
# Start the orderer
/opt/fabric/start-orderer.sh
# Watch the log
tail -99f ~/orderer.log
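# Optional sanity check (a sketch; it assumes the orderer listens on the
# default port 7050 as configured earlier): confirm the port is open
ss -lnt | grep 7050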
```
With kafka and the orderers already running, we can now start the peers.
```shell
# Start all four peers
/opt/fabric/start-peer.sh
# Watch the log
tail -99f ~/peer.log
```

## Installing and running the chaincode
The whole Fabric network is now up. Next we create a channel and install and run the chaincode. The example implements two accounts, a and b, which can transfer funds to each other.
First, run the following commands on one of the nodes (peer0.org1.local):
```shell
cd ~
# Set the CLI environment variables
export FABRIC_ROOT=/opt/fabric
export FABRIC_CFG_PATH=/opt/fabric/configs

export CORE_LOGGING_LEVEL=DEBUG
export CORE_LOGGING_PEER=debug
export CORE_LOGGING_FORMAT='%{color}%{time:15:04:05.000} [%{module}] [%{longfile}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
export CORE_CHAINCODE_LOGGING_LEVEL=DEBUG

export CORE_PEER_ID=peer0.org1.local
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/users/Admin@org1.local/msp
export CORE_PEER_ADDRESS=peer0.org1.local:7051

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/server.crt

export ordererCa=$FABRIC_CFG_PATH/crypto-config/ordererOrganizations/orderer.local/orderers/orderer1.local/msp/tlscacerts/tlsca.orderer.local-cert.pem
# Create the channel
# This creates a mychannel.block file in the current directory. The file is important:
# any other node that wants to join this channel needs it, so distribute it to the other peers.
$FABRIC_ROOT/bin/peer channel create -o orderer1.local:7050 -f $FABRIC_CFG_PATH/channel-artifacts/channel.tx -c mychannel -t 30 --tls true --cafile $ordererCa

# Join the channel
$FABRIC_ROOT/bin/peer channel join -b mychannel.block

# Update the anchor peer. Even without an anchor peer the network still runs;
# anchor peers are used for cross-org gossip (see the AnchorPeers comments in configtx.yaml).
$FABRIC_ROOT/bin/peer channel update -o orderer1.local:7050 -c mychannel -f $FABRIC_CFG_PATH/channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls true --cafile $ordererCa

# Install the chaincode
# Chaincode has to be installed on every peer that will use it; with this network,
# if all 4 peers want to operate on Example02 it must be installed 4 times.
$FABRIC_ROOT/bin/peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# Instantiate the chaincode
# Instantiation wraps the installed chaincode on the peer's machine, builds the Docker image
# and container for the channel, and lets us specify an endorsement policy.
# Note: the version must match the installed version (1.0).
$FABRIC_ROOT/bin/peer chaincode instantiate -o orderer1.local:7050 --tls true --cafile $ordererCa -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"

# The chaincode instance now exists with a=100 and b=200; query account a to verify
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
# Expected result: Query Result: 100

# Transfer 10 from account a to account b
$FABRIC_ROOT/bin/peer chaincode invoke -o orderer1.local:7050 --tls true --cafile $ordererCa -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
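# Optional check (a sketch): query account a again once the invoke has been
# committed; given the initial 100 and the transfer of 10, it should report 90
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
# Expected result: Query Result: 90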
```
Querying from another node (peer0.org2.local)
Everything so far was done under org1. Will org2, which joins the same channel (mychannel), see org1's changes? Let's join peer0.org2.local to mychannel and install the chaincode there.
```shell
# Set the CLI environment variables
export FABRIC_ROOT=/opt/fabric
export FABRIC_CFG_PATH=/opt/fabric/configs

export CORE_LOGGING_LEVEL=DEBUG
export CORE_LOGGING_PEER=debug
export CORE_LOGGING_FORMAT='%{color}%{time:15:04:05.000} [%{module}] [%{longfile}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
export CORE_CHAINCODE_LOGGING_LEVEL=DEBUG

export CORE_PEER_ID=peer0.org2.local
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/users/Admin@org2.local/msp
export CORE_PEER_ADDRESS=peer0.org2.local:7051

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/server.crt

export ordererCa=$FABRIC_CFG_PATH/crypto-config/ordererOrganizations/orderer.local/orderers/orderer2.local/msp/tlscacerts/tlsca.orderer.local-cert.pem
# cd into the directory that holds the distributed mychannel.block, then join the channel
$FABRIC_ROOT/bin/peer channel join -b mychannel.block

# Install the chaincode
$FABRIC_ROOT/bin/peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# The chaincode was already instantiated on org1, i.e. the corresponding block already exists,
# so it must not be instantiated again on org2; just query it
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
# The command takes quite a while before it returns:
# Query Result: 90
# That is because peer0.org2 also has to build the chaincode Docker image and start the container
# before it can answer through it. Running docker ps -a shows one more container.
```
The remaining nodes are handled the same way; remember to adjust the environment variables for each peer.
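For reference, only a handful of variables change from peer to peer. A minimal sketch for peer1.org1.local, derived from the naming scheme above (hypothetical; for org2 peers also change CORE_PEER_LOCALMSPID and CORE_PEER_MSPCONFIGPATH as in the org2 block):

```shell
# Only the peer-specific values change; the logging and FABRIC_* variables stay the same
export CORE_PEER_ID=peer1.org1.local
export CORE_PEER_ADDRESS=peer1.org1.local:7051
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer1.org1.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer1.org1.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer1.org1.local/tls/server.crt
```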
## Cleanup
```shell
# Clean up peer data
# Remove the docker containers and images created by the node
docker rm <container id>   # look it up with docker ps -a
docker rmi <image id>      # look it up with docker images
rm -rf /var/hyperledger/*

# Clean up orderer data
/opt/fabric/stop-orderer.sh
rm -rf /var/hyperledger/*

# Clean up kafka data
# Stop kafka
kafka-server-stop.sh
rm -rf /data/kafka-logs/*

# Clean up zookeeper data (zookeeper must be running):
/usr/local/zookeeper-3.4.10/bin/zkCli.sh
rmr /brokers
rmr /admin
rmr /config
rmr /isr_change_notification
quit

# Stop zookeeper
zookeeper-server-stop.sh
# kafka often refuses to stop cleanly; check the process with jps and kill it if necessary
rm -rf /data/zookeeper/version*
```

-
Hyperledger Fabric笔记--kafka共识的多orderer集群部署
2018-03-09 00:10:49
Hyperledger Fabric 1.0's default consensus is solo, i.e. single-node consensus. This article describes an orderer cluster deployment based on kafka consensus.
Deployment plan
- zookeeper nodes: 3
- kafka nodes: 4
- orderer nodes: 3
- peer nodes: 4
- cli nodes: 1
Environment preparation
- Configure the local GOPATH
- Download hyperledger/fabric to $GOPATH/src/github.com/hyperledger/fabric
- Download hyperledger/fabric-samples to $GOPATH/src/github.com/hyperledger/fabric-samples
- In the fabric project, create a folder kafka under examples, i.e. $GOPATH/src/github.com/hyperledger/fabric/examples/kafka. All configuration and scripts below live in this directory.
- Approach: simulate the deployment in local docker, following fabric-samples/first-network, fabric/examples/e2e_cli and the orderer configuration documentation
- Related resources: the YAML configs and scripts referenced below
Generate MSP certificates
Configure the orderer node information in crypto-config.yaml:
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer1
      - Hostname: orderer2
      - Hostname: orderer3
Generate the MSP certificates with the cryptogen tool (the org names printed afterwards are the command's output):
kafka $ cryptogen generate --config=./crypto-config.yaml
org1.example.com
org2.example.com
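If generation succeeded, the material for the three orderers defined above should now exist. A quick check, as a sketch that only assumes the default cryptogen output layout:

```shell
# The generated tree should contain both orderer and peer organizations
ls crypto-config
# expected: ordererOrganizations  peerOrganizations
ls crypto-config/ordererOrganizations/example.com/orderers
# expected: orderer1.example.com  orderer2.example.com  orderer3.example.com
```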
Generate the genesis block
Define the Orderer configuration to match crypto-config.yaml, for example:
Orderer: &OrdererExample
  OrdererType: kafka
  Addresses:
    - orderer1.example.com:7050
    - orderer2.example.com:7050
    - orderer3.example.com:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Kafka:
    Brokers:
      - k1:9092
      - k3:9092
Generate the genesis block with the configtxgen tool:
# Generate the ordering service genesis block
export FABRIC_CFG_PATH=$PWD
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# Generate the channel configuration transaction
export CHANNEL_NAME=mychannel
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
Define the organizations' anchor peers
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP
configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org2MSP
Create the YAML files
Write your own docker-compose YAML files, using the fabric-samples/first-network and fabric/examples/e2e_cli projects as references.
Current directory layout:
kafka $ ls
channel-artifacts  configtx.yaml  crypto-config  crypto-config.yaml  dc-kafka.yaml  kafka-base.yaml  orderer-base.yaml  peer-base.yaml  scripts
Note:
cp ../e2e_cli/scripts/script.sh scripts/script.sh
Start the whole network
docker-compose -f dc-kafka.yaml up -d
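Before creating the channel it is worth checking that every container actually came up. A minimal sketch using plain docker-compose/docker commands against the compose file above:

```shell
# All zookeeper, kafka, orderer, peer and cli containers should be listed as Up
docker-compose -f dc-kafka.yaml ps
# or, equivalently
docker ps --format '{{.Names}}\t{{.Status}}'
```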
Create and join the channel
docker exec -it cli bash
export CHANNEL_NAME=mychannel
peer channel create -o orderer1.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx
peer channel join -b mychannel.block
Switch the environment variables and join peer1.org1.example.com to the channel:
export CORE_PEER_ADDRESS=peer1.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin\@org1.example.com/msp
peer channel join -b mychannel.block
Then repeat the steps above to join the remaining nodes to the channel.
Install and instantiate the chaincode
Install the chaincode on peer0.org1.example.com:
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin\@org1.example.com/msp
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
# In another terminal, confirm the installation succeeded
docker exec peer0.org1.example.com ls /var/hyperledger/production/chaincodes
Then repeat the steps above to install the chaincode on the remaining nodes.
Once installed, pick any peer and instantiate the chaincode:
peer chaincode instantiate -o orderer1.example.com:7050 -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init", "a", "100", "b", "200"]}' -P "OR('Org1MSP.member', 'Org2MSP.member')"
Invoke or query the chaincode
Switch to any node's environment, then invoke or query.
export CORE_PEER_ADDRESS=peer0.org2.example.com:7051
export CORE_PEER_LOCALMSPID="Org2MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin\@org2.example.com/msp
peer chaincode invoke -o orderer1.example.com:7050 -C $CHANNEL_NAME -n mycc -c '{"Args":["invoke", "a", "b", "10"]}'
peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}'
Run it automatically with a script
Uncomment the cli container's command parameter in dc-kafka.yaml. Note the sleep 30 there: it gives the other containers time to bring their services up; without it the script may fail.
Related files
You can use the files below as a reference to write and generate your own configuration.
1. crypto-config.yaml
# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # # --------------------------------------------------------------------------- # "OrdererOrgs" - Definition of organizations managing orderer nodes # --------------------------------------------------------------------------- OrdererOrgs: # --------------------------------------------------------------------------- # Orderer # --------------------------------------------------------------------------- - Name: Orderer Domain: example.com # --------------------------------------------------------------------------- # "Specs" - See PeerOrgs below for complete description # --------------------------------------------------------------------------- Specs: - Hostname: orderer1 - Hostname: orderer2 - Hostname: orderer3 # --------------------------------------------------------------------------- # "PeerOrgs" - Definition of organizations managing peer nodes # --------------------------------------------------------------------------- PeerOrgs: # --------------------------------------------------------------------------- # Org1 # --------------------------------------------------------------------------- - Name: Org1 Domain: org1.example.com # --------------------------------------------------------------------------- # "Specs" # --------------------------------------------------------------------------- # Uncomment this section to enable the explicit definition of hosts in your # configuration. Most users will want to use Template, below # # Specs is an array of Spec entries. Each Spec entry consists of two fields: # - Hostname: (Required) The desired hostname, sans the domain. # - CommonName: (Optional) Specifies the template or explicit override for # the CN. By default, this is the template: # # "{{.Hostname}}.{{.Domain}}" # # which obtains its values from the Spec.Hostname and # Org.Domain, respectively. # --------------------------------------------------------------------------- # Specs: # - Hostname: foo # implicitly "foo.org1.example.com" # CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above # - Hostname: bar # - Hostname: baz # --------------------------------------------------------------------------- # "Template" # --------------------------------------------------------------------------- # Allows for the definition of 1 or more hosts that are created sequentially # from a template. By default, this looks like "peer%d" from 0 to Count-1. # You may override the number of nodes (Count), the starting index (Start) # or the template used to construct the name (Hostname). # # Note: Template and Specs are not mutually exclusive. You may define both # sections and the aggregate nodes will be created for you. Take care with # name collisions # --------------------------------------------------------------------------- Template: Count: 2 # Start: 5 # Hostname: {{.Prefix}}{{.Index}} # default # --------------------------------------------------------------------------- # "Users" # --------------------------------------------------------------------------- # Count: The number of user accounts _in addition_ to Admin # --------------------------------------------------------------------------- Users: Count: 1 # --------------------------------------------------------------------------- # Org2: See "Org1" for full specification # --------------------------------------------------------------------------- - Name: Org2 Domain: org2.example.com Template: Count: 2 Users: Count: 1
2. configtx.yaml
# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # --- ################################################################################ # # Profile # # - Different configuration profiles may be encoded here to be specified # as parameters to the configtxgen tool # ################################################################################ Profiles: TwoOrgsOrdererGenesis: Orderer: <<: *OrdererExample Organizations: - *OrdererOrg Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2 TwoOrgsChannel: Consortium: SampleConsortium Application: <<: *ApplicationDefaults Organizations: - *Org1 - *Org2 ################################################################################ # # Section: Organizations # # - This section defines the different organizational identities which will # be referenced later in the configuration. # ################################################################################ Organizations: # SampleOrg defines an MSP using the sampleconfig. It should never be used # in production but may be used as a template for other definitions - &OrdererOrg # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: OrdererOrg # ID to load the MSP definition as ID: OrdererMSP # MSPDir is the filesystem path which contains the MSP configuration MSPDir: crypto-config/ordererOrganizations/example.com/msp - &Org1 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org1MSP # ID to load the MSP definition as ID: Org1MSP MSPDir: crypto-config/peerOrganizations/org1.example.com/msp AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org1.example.com Port: 7051 - &Org2 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org2MSP # ID to load the MSP definition as ID: Org2MSP MSPDir: crypto-config/peerOrganizations/org2.example.com/msp AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org2.example.com Port: 7051 ################################################################################ # # SECTION: Orderer # # - This section defines the values to encode into a config transaction or # genesis block for orderer related parameters # ################################################################################ Orderer: &OrdererExample # Orderer Type: The orderer implementation to start # Available types are "solo" and "kafka" OrdererType: kafka Addresses: - orderer1.example.com:7050 - orderer2.example.com:7050 - orderer3.example.com:7050 # Batch Timeout: The amount of time to wait before creating a batch BatchTimeout: 2s # Batch Size: Controls the number of messages batched into a block BatchSize: # Max Message Count: The maximum number of messages to permit in a batch MaxMessageCount: 10 # Absolute Max Bytes: The absolute maximum number of bytes allowed for # the serialized messages in a batch. 
# 设置最大的区块大小。每个区块最大有Orderer.AbsoluteMaxBytes个字节(不包括头部)。 # 假定这里设置的值为A,记住这个值,这会影响怎样配置Kafka代理。 AbsoluteMaxBytes: 99 MB # Preferred Max Bytes: The preferred maximum number of bytes allowed for # the serialized messages in a batch. A message larger than the preferred # max bytes will result in a batch larger than preferred max bytes. # 设置每个区块建议的大小。Kafka对于相对小的消息提供更高的吞吐量;区块大小最好不要超过1MB。 PreferredMaxBytes: 512 KB Kafka: # Brokers: A list of Kafka brokers to which the orderer connects # NOTE: Use IP:port notation # 包括Kafka集群中至少两个代理的地址信息(IP:port) # 这个List不需要是完全的(这些是你的种子代理) # 这个代理表示当前Orderer所要连接的Kafka代理 Brokers: - k1:9092 - k3:9092 # Organizations is the list of orgs which are defined as participants on # the orderer side of the network Organizations: ################################################################################ # # SECTION: Application # # - This section defines the values to encode into a config transaction or # genesis block for application related parameters # ################################################################################ Application: &ApplicationDefaults # Organizations is the list of orgs which are defined as participants on # the application side of the network Organizations:
3. peer-base.yaml
# COPYRIGHT Hello Corp. All Rights Reserved. # # Author: Haley # version: '2' services: peer-base: image: hyperledger/fabric-peer environment: - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default - CORE_LOGGING_LEVEL=DEBUG - CORE_PEER_TLS_ENABLED=false - CORE_PEER_GOSSIP_USELEADERELECTION=true - CORE_PEER_GOSSIP_ORGLEADER=false - CORE_PEER_PROFILE_ENABLED=true - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key - CORE_PEER_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer command: peer node start
4. kafka-base.yaml
# COPYRIGHT Hello Corp. All Rights Reserved. # # Author: Haley # version: '2' services: zookeeper: image: hyperledger/fabric-zookeeper restart: always environment: - quorumListenOnAllIPs=true ports: - '2181' - '2888' - '3888' kafka: image: hyperledger/fabric-kafka restart: always environment: # message.max.bytes # The maximum size of envelope that the broker can receive. - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false - KAFKA_LOG_RETENTION_MS=-1 ports: - '9092'
5. orderer-base.yaml
# COPYRIGHT Hello Corp. All Rights Reserved. # # Author: Haley # version: '2' services: orderer.example.com: image: hyperledger/fabric-orderer environment: - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=example_default - ORDERER_GENERAL_LOGLEVEL=error - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 - ORDERER_GENERAL_LISTENPORT=7050 - ORDERER_GENERAL_GENESISMETHOD=file - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block - ORDERER_GENERAL_LOCALMSPID=OrdererMSP - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp # enabled TLS - ORDERER_GENERAL_TLS_ENABLED=false - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt] - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s - ORDERER_KAFKA_RETRY_LONGTOTAL=100s - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s - ORDERER_KAFKA_VERBOSE=true - ORDERER_KAFKA_BROKERS=[k1:9092,k2:9092,k3:9092,k4:9092] working_dir: /opt/gopath/src/github.com/hyperledger/fabric command: orderer volumes: - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block networks: default: aliases: - example ports: - "7050"
6. dc-kafka.yaml
# COPYRIGHT Hello Corp. All Rights Reserved. # # Author: Haley # version: '2' services: z1: extends: file: kafka-base.yaml service: zookeeper container_name: z1 hostname: z1 environment: # ID在集合中必须是唯一的并且应该有一个值在1-255之间。 - ZOO_MY_ID=1 # 组成ZK集合的服务器列表。客户端使用的列表必须与ZooKeeper服务器列表所拥有的每一个ZK服务器相匹配。 # 有两个端口号:第一个是追随者用来连接领导者的,第二个是领导人选举。 - ZOO_SERVERS=server.1=z1:2888:3888 server.2=z2:2888:3888 server.3=z3:2888:3888 # volumes: # - /var/run/:/host/var/run/ z2: extends: file: kafka-base.yaml service: zookeeper container_name: z2 hostname: z2 environment: - ZOO_MY_ID=2 - ZOO_SERVERS=server.1=z1:2888:3888 server.2=z2:2888:3888 server.3=z3:2888:3888 z3: extends: file: kafka-base.yaml service: zookeeper container_name: z3 hostname: z3 environment: - ZOO_MY_ID=3 - ZOO_SERVERS=server.1=z1:2888:3888 server.2=z2:2888:3888 server.3=z3:2888:3888 k1: extends: file: kafka-base.yaml service: kafka container_name: k1 hostname: k1 environment: - KAFKA_BROKER_ID=1 # min.insync.replicas=M --- 设置一个M值(例如1<M<N,查看下面的default.replication.factor) # 数据提交时会写入至少M个副本(这些数据然后会被同步并且归属到in-sync副本集合或ISR)。 # 其它情况,写入操作会返回一个错误。接下来: # 1. 如果channel写入的数据多达N-M个副本变的不可用,操作可以正常执行。 # 2. 如果有更多的副本不可用,Kafka不可以维护一个有M数量的ISR集合,因此Kafka停止接收写操作。Channel只有当同步M个副本后才可以重新可以写。 - KAFKA_MIN_INSYNC_REPLICAS=2 - KAFKA_DEFAULT_REPLICATION_FACTOR=3 # 指向Zookeeper节点的集合,其中包含ZK的集合。 - KAFKA_ZOOKEEPER_CONNECT=z1:2181,z2:2181,z3:2181 depends_on: - z1 - z2 - z3 k2: extends: file: kafka-base.yaml service: kafka container_name: k2 hostname: k2 environment: - KAFKA_BROKER_ID=2 - KAFKA_MIN_INSYNC_REPLICAS=2 - KAFKA_DEFAULT_REPLICATION_FACTOR=3 - KAFKA_ZOOKEEPER_CONNECT=z1:2181,z2:2181,z3:2181 depends_on: - z1 - z2 - z3 k3: extends: file: kafka-base.yaml service: kafka container_name: k3 hostname: k3 environment: - KAFKA_BROKER_ID=3 - KAFKA_MIN_INSYNC_REPLICAS=2 - KAFKA_DEFAULT_REPLICATION_FACTOR=3 - KAFKA_ZOOKEEPER_CONNECT=z1:2181,z2:2181,z3:2181 depends_on: - z1 - z2 - z3 k4: extends: file: kafka-base.yaml service: kafka container_name: k4 hostname: k4 environment: - KAFKA_BROKER_ID=4 - KAFKA_MIN_INSYNC_REPLICAS=2 - KAFKA_DEFAULT_REPLICATION_FACTOR=3 - KAFKA_ZOOKEEPER_CONNECT=z1:2181,z2:2181,z3:2181 depends_on: - z1 - z2 - z3 orderer1.example.com: extends: file: orderer-base.yaml service: orderer.example.com container_name: orderer1.example.com #environment: #- ORDERER_GENERAL_LOCALMSPID=OrdererMSP #- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp #- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key #- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt #- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt] volumes: - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls:/var/hyperledger/orderer/tls depends_on: - z1 - z2 - z3 - k1 - k2 - k3 - k4 orderer2.example.com: extends: file: orderer-base.yaml service: orderer.example.com container_name: orderer2.example.com #environment: #- ORDERER_GENERAL_LOCALMSPID=OrdererMSP #- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp #- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key #- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt #- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt] volumes: - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp - 
./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls:/var/hyperledger/orderer/tls depends_on: - z1 - z2 - z3 - k1 - k2 - k3 - k4 orderer3.example.com: extends: file: orderer-base.yaml service: orderer.example.com container_name: orderer3.example.com #environment: #- ORDERER_GENERAL_LOCALMSPID=OrdererMSP #- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp #- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key #- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt #- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt] volumes: - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/msp:/var/hyperledger/orderer/msp - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls:/var/hyperledger/orderer/tls depends_on: - z1 - z2 - z3 - k1 - k2 - k3 - k4 peer0.org1.example.com: extends: file: peer-base.yaml service: peer-base container_name: peer0.org1.example.com environment: - CORE_PEER_ID=peer0.org1.example.com - CORE_PEER_ADDRESS=peer0.org1.example.com:7051 - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051 - CORE_PEER_LOCALMSPID=Org1MSP volumes: - /var/run/:/host/var/run/ - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls ports: - 7051:7051 - 7052:7052 - 7053:7053 peer1.org1.example.com: extends: file: peer-base.yaml service: peer-base container_name: peer1.org1.example.com environment: - CORE_PEER_ID=peer1.org1.example.com - CORE_PEER_ADDRESS=peer1.org1.example.com:7051 - CORE_PEER_CHAINCODELISTENADDRESS=peer1.org1.example.com:7052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051 - CORE_PEER_LOCALMSPID=Org1MSP volumes: - /var/run/:/host/var/run/ - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls ports: - 8051:7051 - 8052:7052 - 8053:7053 peer0.org2.example.com: extends: file: peer-base.yaml service: peer-base container_name: peer0.org2.example.com environment: - CORE_PEER_ID=peer0.org2.example.com - CORE_PEER_ADDRESS=peer0.org2.example.com:7051 - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org2.example.com:7052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051 - CORE_PEER_LOCALMSPID=Org2MSP volumes: - /var/run/:/host/var/run/ - ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp - ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls ports: - 9051:7051 - 9052:7052 - 9053:7053 peer1.org2.example.com: extends: file: peer-base.yaml service: peer-base container_name: peer1.org2.example.com environment: - CORE_PEER_ID=peer1.org2.example.com - CORE_PEER_ADDRESS=peer1.org2.example.com:7051 - CORE_PEER_CHAINCODELISTENADDRESS=peer1.org2.example.com:7052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:7051 - CORE_PEER_LOCALMSPID=Org2MSP volumes: - /var/run/:/host/var/run/ - ./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp - 
./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls ports: - 10051:7051 - 10052:7052 - 10053:7053 cli: container_name: cli image: hyperledger/fabric-tools tty: true environment: - GOPATH=/opt/gopath - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock - CORE_LOGGING_LEVEL=DEBUG - CORE_PEER_ID=cli - CORE_PEER_ADDRESS=peer0.org1.example.com:7051 - CORE_PEER_LOCALMSPID=Org1MSP - CORE_PEER_TLS_ENABLED=false - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer #command: /bin/bash -c 'sleep 30; ./scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT' volumes: - /var/run/:/host/var/run/ - ../chaincode/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/ - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts depends_on: - orderer1.example.com - orderer2.example.com - orderer3.example.com - peer0.org1.example.com - peer1.org1.example.com - peer0.org2.example.com - peer1.org2.example.com
-
Fabric1.4 八、建立kafka共识的多orderer集群
2020-02-26 22:25:52
1. Introduction to ordering nodes
This section builds on the helloworld blockchain environment from the previous sections and deploys and tests ordering nodes in kafka mode.
Fabric's consensus model is Execute-Order-Validate: transactions are executed first, then ordered, and finally validated.
When creating the genesis block and the channel we use a configtx.yaml file. Its Orderer section contains an OrdererType parameter that can be set to "solo" or "kafka"; the previous examples all used solo, i.e. single-node consensus. Using a kafka cluster for orderer consensus gives the ordering service fault tolerance: when a client submits a transaction to a peer, the peer produces a read/write set that is sent to the orderer for consensus and ordering, and if a lone orderer suddenly goes down, requests fail and data can be lost. For production deployments of Fabric you therefore usually need fault-tolerant orderers, i.e. kafka mode, and a kafka-mode orderer cluster depends on kafka and zookeeper.
1.1 The ordering flow
A transaction's lifecycle consists of the following steps:
- the client first submits a proposal to a peer;
- the peer executes the smart contract against its data and sends the result to the orderer;
- the orderer orders all received proposals, packs them into blocks and delivers them to the peers;
- each peer opens every block and validates it; if validation passes, it writes the block to its local ledger, updates the world state, and sends an event to the client indicating the transaction has been committed to the ledger.
1.2 Atomic Broadcast (Total Order):
clients submit transactions -> orderers order the transactions and pack them into blocks -> every ledger writes the globally ordered blocks
1.2.1 Requirements for global ordering:
- globally unique, fault tolerant (CFT, BFT)
- network partitions (constraints on the partitioned-off nodes)
- strong consistency
- BFT (the fabric v1.4 orderer is CFT; that does not make Fabric as a whole CFT)
1.2.2 Block Cutting (batching rules):
BatchSize: batch size
- MaxMessageCount: the maximum number of messages in a batch
- AbsoluteMaxBytes: limits the size of a single transaction; anything larger is rejected
- PreferredMaxBytes: the combined size may exceed PreferredMaxBytes. For example, with PreferredMaxBytes=200 B, if the first 9 transactions are 100 B each and the 10th is 200 B, the 10th is still packed into the same batch, so the batch ends up larger than PreferredMaxBytes.
BatchTimeout: cut a batch after this timeout regardless of size
2. Deploying kafka-based ordering nodes
We will deploy three orderer nodes. Starting from the helloworld example of the previous sections, modify the configuration files and redeploy the blockchain network with the following steps.
Step 1. Modify crypto-config.yaml and add the Specs of OrdererOrgs:
Specs:
  - Hostname: orderer
  - Hostname: orderer2
  - Hostname: orderer3
The complete crypto-config.yaml is as follows:
# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # # --------------------------------------------------------------------------- # "OrdererOrgs" - Definition of organizations managing orderer nodes # --------------------------------------------------------------------------- OrdererOrgs: # --------------------------------------------------------------------------- # Orderer # --------------------------------------------------------------------------- - Name: Orderer Domain: example.com EnableNodeOUs: true # --------------------------------------------------------------------------- # "Specs" - See PeerOrgs below for complete description # --------------------------------------------------------------------------- Specs: - Hostname: orderer - Hostname: orderer2 - Hostname: orderer3 # --------------------------------------------------------------------------- # "PeerOrgs" - Definition of organizations managing peer nodes # --------------------------------------------------------------------------- PeerOrgs: # --------------------------------------------------------------------------- # Org1 # --------------------------------------------------------------------------- - Name: Org1 Domain: org1.example.com EnableNodeOUs: true Specs: - Hostname: peer0 - Hostname: peer1 - Hostname: peer2 - Hostname: peer3 - Hostname: peer4 # --------------------------------------------------------------------------- # "Specs" # --------------------------------------------------------------------------- # Uncomment this section to enable the explicit definition of hosts in your # configuration. Most users will want to use Template, below # # Specs is an array of Spec entries. Each Spec entry consists of two fields: # - Hostname: (Required) The desired hostname, sans the domain. # - CommonName: (Optional) Specifies the template or explicit override for # the CN. By default, this is the template: # # "{{.Hostname}}.{{.Domain}}" # # which obtains its values from the Spec.Hostname and # Org.Domain, respectively. # --------------------------------------------------------------------------- # Specs: # - Hostname: foo # implicitly "foo.org1.example.com" # CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above # - Hostname: bar # - Hostname: baz # --------------------------------------------------------------------------- # "Template" # --------------------------------------------------------------------------- # Allows for the definition of 1 or more hosts that are created sequentially # from a template. By default, this looks like "peer%d" from 0 to Count-1. # You may override the number of nodes (Count), the starting index (Start) # or the template used to construct the name (Hostname). # # Note: Template and Specs are not mutually exclusive. You may define both # sections and the aggregate nodes will be created for you. 
Take care with # name collisions # --------------------------------------------------------------------------- Template: Count: 3 # Start: 5 # Hostname: {{.Prefix}}{{.Index}} # default # --------------------------------------------------------------------------- # "Users" # --------------------------------------------------------------------------- # Count: The number of user accounts _in addition_ to Admin # --------------------------------------------------------------------------- Users: Count: 1 # --------------------------------------------------------------------------- # Org2: See "Org1" for full specification # --------------------------------------------------------------------------- - Name: Org2 Domain: org2.example.com EnableNodeOUs: true Specs: - Hostname: peer0 - Hostname: peer1 - Hostname: peer2 - Hostname: peer3 - Hostname: peer4 Template: Count: 3 Users: Count: 1
Step 2. Generate the certificate files
cryptogen generate --config=./crypto-config.yaml
Afterwards, three orderer certificate directories appear under crypto-config/ordererOrganizations/example.com/orderers:
orderer.example.com  orderer2.example.com  orderer3.example.com
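A quick way to double-check this, as a sketch that only assumes the default cryptogen output layout:

```shell
# Each orderer identity should have its own MSP and TLS material
ls crypto-config/ordererOrganizations/example.com/orderers
ls crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com
# expected subdirectories: msp  tls
```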
Step 3. Modify the configtx.yaml configuration file
Set OrdererType to kafka and configure the batching rules and the kafka broker addresses.
Orderer: &OrdererDefaults # Orderer Type: The orderer implementation to start # Available types are "solo" and "kafka" OrdererType: kafka Addresses: - orderer.example.com:7050 - orderer2.example.com:7050 - orderer3.example.com:7050 # Batch Timeout: The amount of time to wait before creating a batch BatchTimeout: 2s # Batch Size: Controls the number of messages batched into a block BatchSize: # Max Message Count: The maximum number of messages to permit in a batch MaxMessageCount: 10 # Absolute Max Bytes: The absolute maximum number of bytes allowed for # the serialized messages in a batch. AbsoluteMaxBytes: 99 MB # Preferred Max Bytes: The preferred maximum number of bytes allowed for # the serialized messages in a batch. A message larger than the preferred # max bytes will result in a batch larger than preferred max bytes. PreferredMaxBytes: 512 KB Kafka: # Brokers: A list of Kafka brokers to which the orderer connects # NOTE: Use IP:port notation Brokers: - kafka1:9092 - kafka2:9092 - kafka3:9092 # Organizations is the list of orgs which are defined as participants on # the orderer side of the network Organizations:
The complete configtx.yaml is as follows:
# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # --- ################################################################################ # # Section: Organizations # # - This section defines the different organizational identities which will # be referenced later in the configuration. # ################################################################################ Organizations: # SampleOrg defines an MSP using the sampleconfig. It should never be used # in production but may be used as a template for other definitions - &OrdererOrg # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: OrdererOrg # ID to load the MSP definition as ID: OrdererMSP # MSPDir is the filesystem path which contains the MSP configuration MSPDir: crypto-config/ordererOrganizations/example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('OrdererMSP.member')" Writers: Type: Signature Rule: "OR('OrdererMSP.member')" Admins: Type: Signature Rule: "OR('OrdererMSP.admin')" - &Org1 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org1MSP # ID to load the MSP definition as ID: Org1MSP MSPDir: crypto-config/peerOrganizations/org1.example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')" Writers: Type: Signature Rule: "OR('Org1MSP.admin', 'Org1MSP.client')" Admins: Type: Signature Rule: "OR('Org1MSP.admin')" # leave this flag set to true. AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org1.example.com Port: 7051 - &Org2 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org2MSP # ID to load the MSP definition as ID: Org2MSP MSPDir: crypto-config/peerOrganizations/org2.example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')" Writers: Type: Signature Rule: "OR('Org2MSP.admin', 'Org2MSP.client')" Admins: Type: Signature Rule: "OR('Org2MSP.admin')" AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. 
Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org2.example.com Port: 9051 - &Org3 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org3MSP # ID to load the MSP definition as ID: Org3MSP MSPDir: crypto-config/peerOrganizations/org3.example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('Org3MSP.admin', 'Org3MSP.peer', 'Org3MSP.client')" Writers: Type: Signature Rule: "OR('Org3MSP.admin', 'Org2MSP.client')" Admins: Type: Signature Rule: "OR('Org3MSP.admin')" AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org3.example.com Port: 13051 ################################################################################ # # SECTION: Capabilities # # - This section defines the capabilities of fabric network. This is a new # concept as of v1.1.0 and should not be utilized in mixed networks with # v1.0.x peers and orderers. Capabilities define features which must be # present in a fabric binary for that binary to safely participate in the # fabric network. For instance, if a new MSP type is added, newer binaries # might recognize and validate the signatures from this type, while older # binaries without this support would be unable to validate those # transactions. This could lead to different versions of the fabric binaries # having different world states. Instead, defining a capability for a channel # informs those binaries without this capability that they must cease # processing transactions until they have been upgraded. For v1.0.x if any # capabilities are defined (including a map with all capabilities turned off) # then the v1.0.x peer will deliberately crash. # ################################################################################ Capabilities: # Channel capabilities apply to both the orderers and the peers and must be # supported by both. # Set the value of the capability to true to require it. Channel: &ChannelCapabilities # V1.4.3 for Channel is a catchall flag for behavior which has been # determined to be desired for all orderers and peers running at the v1.4.3 # level, but which would be incompatible with orderers and peers from # prior releases. # Prior to enabling V1.4.3 channel capabilities, ensure that all # orderers and peers on a channel are at v1.4.3 or later. V1_4_3: true # V1.3 for Channel enables the new non-backwards compatible # features and fixes of fabric v1.3 V1_3: false # V1.1 for Channel enables the new non-backwards compatible # features and fixes of fabric v1.1 V1_1: false # Orderer capabilities apply only to the orderers, and may be safely # used with prior release peers. # Set the value of the capability to true to require it. Orderer: &OrdererCapabilities # V1.4.2 for Orderer is a catchall flag for behavior which has been # determined to be desired for all orderers running at the v1.4.2 # level, but which would be incompatible with orderers from prior releases. # Prior to enabling V1.4.2 orderer capabilities, ensure that all # orderers on a channel are at v1.4.2 or later. 
V1_4_2: true # V1.1 for Orderer enables the new non-backwards compatible # features and fixes of fabric v1.1 V1_1: false # Application capabilities apply only to the peer network, and may be safely # used with prior release orderers. # Set the value of the capability to true to require it. Application: &ApplicationCapabilities # V1.4.2 for Application enables the new non-backwards compatible # features and fixes of fabric v1.4.2. V1_4_2: true # V1.3 for Application enables the new non-backwards compatible # features and fixes of fabric v1.3. V1_3: false # V1.2 for Application enables the new non-backwards compatible # features and fixes of fabric v1.2 (note, this need not be set if # later version capabilities are set) V1_2: false # V1.1 for Application enables the new non-backwards compatible # features and fixes of fabric v1.1 (note, this need not be set if # later version capabilities are set). V1_1: false ################################################################################ # # SECTION: Application # # - This section defines the values to encode into a config transaction or # genesis block for application related parameters # ################################################################################ Application: &ApplicationDefaults # Organizations is the list of orgs which are defined as participants on # the application side of the network Organizations: # Policies defines the set of policies at this level of the config tree # For Application policies, their canonical path is # /Channel/Application/<PolicyName> Policies: Readers: Type: ImplicitMeta Rule: "ANY Readers" Writers: Type: ImplicitMeta Rule: "ANY Writers" Admins: Type: ImplicitMeta Rule: "MAJORITY Admins" Capabilities: <<: *ApplicationCapabilities ################################################################################ # # SECTION: Orderer # # - This section defines the values to encode into a config transaction or # genesis block for orderer related parameters # ################################################################################ Orderer: &OrdererDefaults # Orderer Type: The orderer implementation to start # Available types are "solo" and "kafka" OrdererType: kafka Addresses: - orderer.example.com:7050 - orderer2.example.com:7050 - orderer3.example.com:7050 # Batch Timeout: The amount of time to wait before creating a batch BatchTimeout: 2s # Batch Size: Controls the number of messages batched into a block BatchSize: # Max Message Count: The maximum number of messages to permit in a batch MaxMessageCount: 10 # Absolute Max Bytes: The absolute maximum number of bytes allowed for # the serialized messages in a batch. AbsoluteMaxBytes: 99 MB # Preferred Max Bytes: The preferred maximum number of bytes allowed for # the serialized messages in a batch. A message larger than the preferred # max bytes will result in a batch larger than preferred max bytes. 
PreferredMaxBytes: 512 KB Kafka: # Brokers: A list of Kafka brokers to which the orderer connects # NOTE: Use IP:port notation Brokers: - kafka1:9092 - kafka2:9092 - kafka3:9092 # Organizations is the list of orgs which are defined as participants on # the orderer side of the network Organizations: # Policies defines the set of policies at this level of the config tree # For Orderer policies, their canonical path is # /Channel/Orderer/<PolicyName> Policies: Readers: Type: ImplicitMeta Rule: "ANY Readers" Writers: Type: ImplicitMeta Rule: "ANY Writers" Admins: Type: ImplicitMeta Rule: "MAJORITY Admins" # BlockValidation specifies what signatures must be included in the block # from the orderer for the peer to validate it. BlockValidation: Type: ImplicitMeta Rule: "ANY Writers" ################################################################################ # # CHANNEL # # This section defines the values to encode into a config transaction or # genesis block for channel related parameters. # ################################################################################ Channel: &ChannelDefaults # Policies defines the set of policies at this level of the config tree # For Channel policies, their canonical path is # /Channel/<PolicyName> Policies: # Who may invoke the 'Deliver' API Readers: Type: ImplicitMeta Rule: "ANY Readers" # Who may invoke the 'Broadcast' API Writers: Type: ImplicitMeta Rule: "ANY Writers" # By default, who may modify elements at this config level Admins: Type: ImplicitMeta Rule: "MAJORITY Admins" # Capabilities describes the channel level capabilities, see the # dedicated Capabilities section elsewhere in this file for a full # description Capabilities: <<: *ChannelCapabilities ################################################################################ # # Profile # # - Different configuration profiles may be encoded here to be specified # as parameters to the configtxgen tool # ################################################################################ Profiles: TwoOrgsOrdererGenesis: <<: *ChannelDefaults Orderer: <<: *OrdererDefaults Organizations: - *OrdererOrg Capabilities: <<: *OrdererCapabilities Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2 - *Org3 TwoOrgsChannel: Consortium: SampleConsortium <<: *ChannelDefaults Application: <<: *ApplicationDefaults Organizations: - *Org1 - *Org2 - *Org3 Capabilities: <<: *ApplicationCapabilities SampleDevModeKafka: <<: *ChannelDefaults Capabilities: <<: *ChannelCapabilities Orderer: <<: *OrdererDefaults OrdererType: kafka Kafka: Brokers: - kafka.example.com:9092 Organizations: - *OrdererOrg Capabilities: <<: *OrdererCapabilities Application: <<: *ApplicationDefaults Organizations: - <<: *OrdererOrg Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2 - *Org3 SampleMultiNodeEtcdRaft: <<: *ChannelDefaults Capabilities: <<: *ChannelCapabilities Orderer: <<: *OrdererDefaults OrdererType: etcdraft EtcdRaft: Consenters: - Host: orderer.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt - Host: orderer2.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt - Host: orderer3.example.com Port: 7050 ClientTLSCert: 
crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt - Host: orderer4.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/server.crt - Host: orderer5.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/server.crt Addresses: - orderer.example.com:7050 - orderer2.example.com:7050 - orderer3.example.com:7050 - orderer4.example.com:7050 - orderer5.example.com:7050 Organizations: - *OrdererOrg Capabilities: <<: *OrdererCapabilities Application: <<: *ApplicationDefaults Organizations: - <<: *OrdererOrg Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2 - *Org3
Step 4. Generate the genesis block and the channel transaction
# Generate the genesis block. Make sure the channel-artifacts folder exists first; create it manually if it does not, otherwise the command errors out
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# Generate the channel configuration transaction; the channel name mychannel can be changed to your own
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
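To sanity-check the artifacts before wiring up docker-compose, configtxgen can also decode what it just produced. A minimal sketch, assuming a configtxgen build (1.x) that supports the inspect flags:

```shell
# Dump the genesis block and the channel creation tx as JSON for inspection
configtxgen -inspectBlock ./channel-artifacts/genesis.block
configtxgen -inspectChannelCreateTx ./channel-artifacts/mychannel.tx
```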
Step 5. Configure the docker-compose files
The docker-compose-orderer.yaml file contains 3 zookeeper services, 3 kafka services and 3 orderer node configurations.
Its content is as follows:
```yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

networks:
  hello:

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - quorumListenOnAllIPs=true
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 12181:2181
      - 12888:2888
      - 13888:3888
    networks:
      - hello

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - quorumListenOnAllIPs=true
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 22181:2181
      - 22888:2888
      - 23888:3888
    networks:
      - hello

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - quorumListenOnAllIPs=true
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 32181:2181
      - 32888:2888
      - 33888:3888
    networks:
      - hello

  kafka1:
    container_name: kafka1
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
    ports:
      - 19092:9092
    networks:
      - hello
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3

  kafka2:
    container_name: kafka2
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
    ports:
      - 29092:9092
    networks:
      - hello
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3

  kafka3:
    container_name: kafka3
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
    ports:
      - 39092:9092
    networks:
      - hello
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # Kafka-related settings
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[kafka1:9092,kafka2:9092,kafka3:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
    networks:
      - hello
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
      - kafka1
      - kafka2
      - kafka3

  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # Kafka-related settings
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[kafka1:9092,kafka2:9092,kafka3:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 27050:7050
    networks:
      - hello
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
      - kafka1
      - kafka2
      - kafka3

  orderer3.example.com:
    container_name: orderer3.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # Kafka-related settings
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[kafka1:9092,kafka2:9092,kafka3:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 37050:7050
    networks:
      - hello
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
      - kafka1
      - kafka2
      - kafka3
```
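A single indentation slip is easy to miss in a compose file this long, so before starting anything it can be worth letting docker-compose validate it. A quick sanity check, assuming the docker-compose v1 CLI used elsewhere in this article:

```shell
# Prints nothing and exits 0 if the file parses cleanly; otherwise shows the parse error.
docker-compose -f docker-compose-orderer.yaml config -q
```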
Step 6. Start all nodes
```shell
COMPOSE_FILE_ORG1_PEER0="docker-compose-org1-peer0.yaml"
COMPOSE_FILE_ORG1_PEER1="docker-compose-org1-peer1.yaml"
COMPOSE_FILE_ORG1_PEER2="docker-compose-org1-peer2.yaml"
COMPOSE_FILE_ORG2_PEER0="docker-compose-org2-peer0.yaml"
COMPOSE_FILE_ORG2_PEER1="docker-compose-org2-peer1.yaml"
COMPOSE_FILE_ORG3_PEER0="docker-compose-org3-peer0.yaml"
COMPOSE_FILE_ORG3_PEER1="docker-compose-org3-peer1.yaml"
COMPOSE_FILE_ORDERER="docker-compose-orderer.yaml"
COMPOSE_FILE_CA="docker-compose-ca.yaml"

docker-compose -f $COMPOSE_FILE_ORG1_PEER0 -f $COMPOSE_FILE_ORG1_PEER1 -f $COMPOSE_FILE_ORG1_PEER2 \
  -f $COMPOSE_FILE_ORG2_PEER0 -f $COMPOSE_FILE_ORG2_PEER1 \
  -f $COMPOSE_FILE_ORG3_PEER0 -f $COMPOSE_FILE_ORG3_PEER1 \
  -f $COMPOSE_FILE_ORDERER up -d

# Start the CA node; see Section 7 for the contents of startCA.sh
./startCA.sh
```
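Once the compose files are up, it is worth confirming that every container is actually running before moving on to channel creation. A minimal check, assuming the container names defined in the compose files above:

```shell
# Every orderer, kafka, zookeeper and peer container should report an "Up" status.
docker ps --format 'table {{.Names}}\t{{.Status}}' | grep -E 'orderer|kafka|zookeeper|peer'
```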
At this point, the distributed orderer deployment is complete.
Step 7. Verification
To verify that the distributed orderers work, we create a channel through the first orderer node and then instantiate chaincode through the second orderer node.
First, enter a fabric-tools CLI container: `docker exec -it cli_peer0_org1 bash`
Create the channel through the first orderer node:

```shell
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile $ORDERER_CA
```
Join the channel:

```shell
peer channel join -b mychannel.block
```
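If the join succeeded, the peer should now report the channel. An optional check from the same CLI container:

```shell
# "mychannel" should appear in the list of channels this peer has joined.
peer channel list
```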
Switch the CA certificate path to the second orderer:

```shell
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer2.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
```
Install the chaincode and instantiate it through the second orderer:

```shell
peer chaincode install -n mycc -p github.com/hyperledger/fabric/helloworld/chaincode/go/helloworld/ -v 1.0

peer chaincode instantiate -o orderer2.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["a","hello"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer','Org3MSP.peer')"
```
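Instantiation can take a moment because the chaincode container has to be built first. An optional way to confirm it finished, run from the same CLI container:

```shell
# Lists chaincodes instantiated on the channel; "mycc" at version 1.0 should appear.
peer chaincode list --instantiated -C mychannel
```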
If this step succeeds, the distributed orderer deployment is working; otherwise it fails with an error saying the channel cannot be found.
Run a chaincode query:

```shell
peer chaincode query -C mychannel -n mycc -c '{"function":"get","Args":["a"]}'
```
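To exercise the write path through the second orderer as well, an invoke can be sent before repeating the query. A sketch, assuming the helloworld chaincode also exposes a set function (the article itself only shows get):

```shell
# Hypothetical invoke: write a new value for key "a" via orderer2, then read it back.
peer chaincode invoke -o orderer2.example.com:7050 --tls --cafile $ORDERER_CA \
  -C mychannel -n mycc -c '{"function":"set","Args":["a","world"]}'

peer chaincode query -C mychannel -n mycc -c '{"function":"get","Args":["a"]}'
```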
-
HyperLedger Fabric 1.4.4: building a Kafka consensus cluster with multiple orderer nodes
2020-04-08 11:05:41 Configuration ... org1's peer0 and peer1, the orderer node, three Kafka and three ZooKeeper nodes. Preface: the setup itself is simple; most of the time went into getting Kafka right. Because machine resources were limited, the Kafka brokers are all deployed on the same ...

## Configuration

| Host | IP | Role |
| --- | --- | --- |
| bd223 | 10.1.24.223 | org2's peer0 and peer1, the orderer1 node |
| bd225 | 10.1.24.225 | org1's peer0 and peer1, the orderer node, three Kafka brokers and three ZooKeeper nodes |

## Preface
The setup itself is straightforward; most of the time was spent getting Kafka right. Because machine resources were limited, all the Kafka brokers were deployed on a single host. That caused one problem: the orderer node running on the same host as the Kafka cluster worked fine, but the orderer1 node on the other host failed with errors saying the Kafka service was not ready. The initial suspicion was the Kafka cluster's listener configuration, and that indeed turned out to be the cause.
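Before changing any configuration, a quick way to narrow this down is to check from the failing host whether the brokers' advertised endpoints are reachable at all. A minimal sketch, assuming the kafkaN.example.com names resolve on that host (via /etc/hosts or extra_hosts) and that nc is installed; the ports match the broker configuration shown later:

```shell
# Run on the host of the failing orderer1 node; each advertised host:port must be reachable.
for broker in kafka1.example.com:9082 kafka2.example.com:9083 kafka3.example.com:9084; do
  nc -vz "${broker%:*}" "${broker#*:}"
done
```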
## configtx.yaml

In first-network, the default configtx.yaml generates certificates and related files for five orderers (orderer.example.com, orderer2.example.com, orderer3.example.com, orderer4.example.com, orderer5.example.com). There is no orderer1, so modify configtx.yaml to add orderer1.example.com; alternatively, orderer2.example.com can be used directly
。修改后的文件如下:# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # --- ################################################################################ # # Section: Organizations # # - This section defines the different organizational identities which will # be referenced later in the configuration. # ################################################################################ Organizations: # SampleOrg defines an MSP using the sampleconfig. It should never be used # in production but may be used as a template for other definitions - &OrdererOrg # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: OrdererOrg # ID to load the MSP definition as ID: OrdererMSP # MSPDir is the filesystem path which contains the MSP configuration MSPDir: crypto-config/ordererOrganizations/example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('OrdererMSP.member')" Writers: Type: Signature Rule: "OR('OrdererMSP.member')" Admins: Type: Signature Rule: "OR('OrdererMSP.admin')" - &Org1 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org1MSP # ID to load the MSP definition as ID: Org1MSP MSPDir: crypto-config/peerOrganizations/org1.example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')" Writers: Type: Signature Rule: "OR('Org1MSP.admin', 'Org1MSP.client')" Admins: Type: Signature Rule: "OR('Org1MSP.admin')" # leave this flag set to true. AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org1.example.com Port: 7051 - &Org2 # DefaultOrg defines the organization which is used in the sampleconfig # of the fabric.git development environment Name: Org2MSP # ID to load the MSP definition as ID: Org2MSP MSPDir: crypto-config/peerOrganizations/org2.example.com/msp # Policies defines the set of policies at this level of the config tree # For organization policies, their canonical path is usually # /Channel/<Application|Orderer>/<OrgName>/<PolicyName> Policies: Readers: Type: Signature Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')" Writers: Type: Signature Rule: "OR('Org2MSP.admin', 'Org2MSP.client')" Admins: Type: Signature Rule: "OR('Org2MSP.admin')" AnchorPeers: # AnchorPeers defines the location of peers which can be used # for cross org gossip communication. Note, this value is only # encoded in the genesis block in the Application section context - Host: peer0.org2.example.com Port: 9051 ################################################################################ # # SECTION: Capabilities # # - This section defines the capabilities of fabric network. This is a new # concept as of v1.1.0 and should not be utilized in mixed networks with # v1.0.x peers and orderers. Capabilities define features which must be # present in a fabric binary for that binary to safely participate in the # fabric network. 
For instance, if a new MSP type is added, newer binaries # might recognize and validate the signatures from this type, while older # binaries without this support would be unable to validate those # transactions. This could lead to different versions of the fabric binaries # having different world states. Instead, defining a capability for a channel # informs those binaries without this capability that they must cease # processing transactions until they have been upgraded. For v1.0.x if any # capabilities are defined (including a map with all capabilities turned off) # then the v1.0.x peer will deliberately crash. # ################################################################################ Capabilities: # Channel capabilities apply to both the orderers and the peers and must be # supported by both. # Set the value of the capability to true to require it. Channel: &ChannelCapabilities # V1.4.3 for Channel is a catchall flag for behavior which has been # determined to be desired for all orderers and peers running at the v1.4.3 # level, but which would be incompatible with orderers and peers from # prior releases. # Prior to enabling V1.4.3 channel capabilities, ensure that all # orderers and peers on a channel are at v1.4.3 or later. V1_4_3: true # V1.3 for Channel enables the new non-backwards compatible # features and fixes of fabric v1.3 V1_3: false # V1.1 for Channel enables the new non-backwards compatible # features and fixes of fabric v1.1 V1_1: false # Orderer capabilities apply only to the orderers, and may be safely # used with prior release peers. # Set the value of the capability to true to require it. Orderer: &OrdererCapabilities # V1.4.2 for Orderer is a catchall flag for behavior which has been # determined to be desired for all orderers running at the v1.4.2 # level, but which would be incompatible with orderers from prior releases. # Prior to enabling V1.4.2 orderer capabilities, ensure that all # orderers on a channel are at v1.4.2 or later. V1_4_2: true # V1.1 for Orderer enables the new non-backwards compatible # features and fixes of fabric v1.1 V1_1: false # Application capabilities apply only to the peer network, and may be safely # used with prior release orderers. # Set the value of the capability to true to require it. Application: &ApplicationCapabilities # V1.4.2 for Application enables the new non-backwards compatible # features and fixes of fabric v1.4.2. V1_4_2: true # V1.3 for Application enables the new non-backwards compatible # features and fixes of fabric v1.3. V1_3: false # V1.2 for Application enables the new non-backwards compatible # features and fixes of fabric v1.2 (note, this need not be set if # later version capabilities are set) V1_2: false # V1.1 for Application enables the new non-backwards compatible # features and fixes of fabric v1.1 (note, this need not be set if # later version capabilities are set). 
V1_1: false ################################################################################ # # SECTION: Application # # - This section defines the values to encode into a config transaction or # genesis block for application related parameters # ################################################################################ Application: &ApplicationDefaults # Organizations is the list of orgs which are defined as participants on # the application side of the network Organizations: # Policies defines the set of policies at this level of the config tree # For Application policies, their canonical path is # /Channel/Application/<PolicyName> Policies: Readers: Type: ImplicitMeta Rule: "ANY Readers" Writers: Type: ImplicitMeta Rule: "ANY Writers" Admins: Type: ImplicitMeta Rule: "MAJORITY Admins" Capabilities: <<: *ApplicationCapabilities ################################################################################ # # SECTION: Orderer # # - This section defines the values to encode into a config transaction or # genesis block for orderer related parameters # ################################################################################ Orderer: &OrdererDefaults # Orderer Type: The orderer implementation to start # Available types are "solo","kafka" and "etcdraft" OrdererType: solo Addresses: - orderer.example.com:7050 # Batch Timeout: The amount of time to wait before creating a batch BatchTimeout: 2s # Batch Size: Controls the number of messages batched into a block BatchSize: # Max Message Count: The maximum number of messages to permit in a batch MaxMessageCount: 10 # Absolute Max Bytes: The absolute maximum number of bytes allowed for # the serialized messages in a batch. AbsoluteMaxBytes: 99 MB # Preferred Max Bytes: The preferred maximum number of bytes allowed for # the serialized messages in a batch. A message larger than the preferred # max bytes will result in a batch larger than preferred max bytes. PreferredMaxBytes: 512 KB Kafka: # Brokers: A list of Kafka brokers to which the orderer connects # NOTE: Use IP:port notation Brokers: - 127.0.0.1:9092 # EtcdRaft defines configuration which must be set when the "etcdraft" # orderertype is chosen. EtcdRaft: # The set of Raft replicas for this network. For the etcd/raft-based # implementation, we expect every replica to also be an OSN. Therefore, # a subset of the host:port items enumerated in this list should be # replicated under the Orderer.Addresses key above. 
Consenters: - Host: orderer.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt - Host: orderer1.example.com Port: 7060 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt - Host: orderer2.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt - Host: orderer3.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt - Host: orderer4.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/server.crt - Host: orderer5.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/server.crt # Organizations is the list of orgs which are defined as participants on # the orderer side of the network Organizations: # Policies defines the set of policies at this level of the config tree # For Orderer policies, their canonical path is # /Channel/Orderer/<PolicyName> Policies: Readers: Type: ImplicitMeta Rule: "ANY Readers" Writers: Type: ImplicitMeta Rule: "ANY Writers" Admins: Type: ImplicitMeta Rule: "MAJORITY Admins" # BlockValidation specifies what signatures must be included in the block # from the orderer for the peer to validate it. BlockValidation: Type: ImplicitMeta Rule: "ANY Writers" ################################################################################ # # CHANNEL # # This section defines the values to encode into a config transaction or # genesis block for channel related parameters. 
# ################################################################################ Channel: &ChannelDefaults # Policies defines the set of policies at this level of the config tree # For Channel policies, their canonical path is # /Channel/<PolicyName> Policies: # Who may invoke the 'Deliver' API Readers: Type: ImplicitMeta Rule: "ANY Readers" # Who may invoke the 'Broadcast' API Writers: Type: ImplicitMeta Rule: "ANY Writers" # By default, who may modify elements at this config level Admins: Type: ImplicitMeta Rule: "MAJORITY Admins" # Capabilities describes the channel level capabilities, see the # dedicated Capabilities section elsewhere in this file for a full # description Capabilities: <<: *ChannelCapabilities ################################################################################ # # Profile # # - Different configuration profiles may be encoded here to be specified # as parameters to the configtxgen tool # ################################################################################ Profiles: TwoOrgsOrdererGenesis: <<: *ChannelDefaults Orderer: <<: *OrdererDefaults Organizations: - *OrdererOrg Capabilities: <<: *OrdererCapabilities Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2 TwoOrgsChannel: Consortium: SampleConsortium <<: *ChannelDefaults Application: <<: *ApplicationDefaults Organizations: - *Org1 - *Org2 Capabilities: <<: *ApplicationCapabilities SampleDevModeKafka: <<: *ChannelDefaults Capabilities: <<: *ChannelCapabilities Orderer: <<: *OrdererDefaults OrdererType: kafka Kafka: Brokers: - kafka1.example.com:9082 - kafka2.example.com:9083 - kafka3.example.com:9084 Organizations: - *OrdererOrg Capabilities: <<: *OrdererCapabilities Application: <<: *ApplicationDefaults Organizations: - <<: *OrdererOrg Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2 SampleMultiNodeEtcdRaft: <<: *ChannelDefaults Capabilities: <<: *ChannelCapabilities Orderer: <<: *OrdererDefaults OrdererType: etcdraft EtcdRaft: Consenters: - Host: orderer.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt - Host: orderer1.example.com Port: 7060 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/server.crt - Host: orderer2.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt - Host: orderer3.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt - Host: orderer4.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer4.example.com/tls/server.crt - Host: orderer5.example.com Port: 7050 ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/server.crt ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer5.example.com/tls/server.crt Addresses: - orderer.example.com:7050 - 
orderer1.example.com:7060 - orderer2.example.com:7050 - orderer3.example.com:7050 - orderer4.example.com:7050 - orderer5.example.com:7050 Organizations: - *OrdererOrg Capabilities: <<: *OrdererCapabilities Application: <<: *ApplicationDefaults Organizations: - <<: *OrdererOrg Consortiums: SampleConsortium: Organizations: - *Org1 - *Org2
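After editing configtx.yaml (and crypto-config.yaml, if orderer1 was added there too), the crypto material and the genesis block need to be regenerated so the new orderer is part of the system channel. A sketch, assuming the usual first-network layout and the SampleDevModeKafka profile shown above; the byfn-sys-channel ID is the first-network default and may differ in your setup:

```shell
# Regenerate certificates and the genesis block with the extra orderer included.
../bin/cryptogen generate --config=./crypto-config.yaml
../bin/configtxgen -profile SampleDevModeKafka -channelID byfn-sys-channel \
  -outputBlock ./channel-artifacts/genesis.block
```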
## docker-compose-e2e-template.yaml

Add the configuration for the orderer1.example.com service, or whichever other orderer was chosen
节点的配置。增加后的文件如下:# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # version: '2' volumes: orderer.example.com: orderer1.example.com: peer0.org1.example.com: peer1.org1.example.com: peer0.org2.example.com: peer1.org2.example.com: services: ca0: image: hyperledger/fabric-ca:$IMAGE_TAG environment: - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server - FABRIC_CA_SERVER_CA_NAME=ca-org1 - FABRIC_CA_SERVER_TLS_ENABLED=true - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY ports: - "7054:7054" command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY -b admin:adminpw -d' volumes: - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config container_name: ca_peerOrg1 ca1: image: hyperledger/fabric-ca:$IMAGE_TAG environment: - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server - FABRIC_CA_SERVER_CA_NAME=ca-org2 - FABRIC_CA_SERVER_TLS_ENABLED=true - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/CA2_PRIVATE_KEY ports: - "8054:7054" command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/CA2_PRIVATE_KEY -b admin:adminpw -d' volumes: - ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config container_name: ca_peerOrg2 orderer.example.com: extends: file: base/docker-compose-base.yaml service: orderer.example.com container_name: orderer.example.com orderer1.example.com: extends: file: base/docker-compose-base.yaml service: orderer1.example.com container_name: orderer1.example.com peer0.org1.example.com: container_name: peer0.org1.example.com extends: file: base/docker-compose-base.yaml service: peer0.org1.example.com peer1.org1.example.com: container_name: peer1.org1.example.com extends: file: base/docker-compose-base.yaml service: peer1.org1.example.com peer0.org2.example.com: container_name: peer0.org2.example.com extends: file: base/docker-compose-base.yaml service: peer0.org2.example.com peer1.org2.example.com: container_name: peer1.org2.example.com extends: file: base/docker-compose-base.yaml service: peer1.org2.example.com
## base/docker-compose-base.yaml

Add the configuration for the orderer1.example.com service, or whichever other orderer was chosen, to this file as well
节点的配置。增加后的文件如下:# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # version: '2' services: orderer.example.com: container_name: orderer.example.com extends: file: peer-base.yaml service: orderer-base volumes: - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls - orderer.example.com:/var/hyperledger/production/orderer ports: - 7050:7050 orderer1.example.com: container_name: orderer1.example.com extends: file: peer-base.yaml service: orderer-base volumes: - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block - ../crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp - ../crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls - orderer1.example.com:/var/hyperledger/production/orderer ports: - 7060:7050 peer0.org1.example.com: container_name: peer0.org1.example.com extends: file: peer-base.yaml service: peer-base environment: - CORE_PEER_ID=peer0.org1.example.com - CORE_PEER_ADDRESS=peer0.org1.example.com:7051 - CORE_PEER_LISTENADDRESS=0.0.0.0:7051 - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052 - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052 - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org1.example.com:8051 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051 - CORE_PEER_LOCALMSPID=Org1MSP volumes: - /var/run/:/host/var/run/ - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls - peer0.org1.example.com:/var/hyperledger/production ports: - 7051:7051 peer1.org1.example.com: container_name: peer1.org1.example.com extends: file: peer-base.yaml service: peer-base environment: - CORE_PEER_ID=peer1.org1.example.com - CORE_PEER_ADDRESS=peer1.org1.example.com:8051 - CORE_PEER_LISTENADDRESS=0.0.0.0:8051 - CORE_PEER_CHAINCODEADDRESS=peer1.org1.example.com:8052 - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:8052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051 - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051 - CORE_PEER_LOCALMSPID=Org1MSP volumes: - /var/run/:/host/var/run/ - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls - peer1.org1.example.com:/var/hyperledger/production ports: - 8051:8051 peer0.org2.example.com: container_name: peer0.org2.example.com extends: file: peer-base.yaml service: peer-base environment: - CORE_PEER_ID=peer0.org2.example.com - CORE_PEER_ADDRESS=peer0.org2.example.com:9051 - CORE_PEER_LISTENADDRESS=0.0.0.0:9051 - CORE_PEER_CHAINCODEADDRESS=peer0.org2.example.com:9052 - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:9052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:9051 - CORE_PEER_GOSSIP_BOOTSTRAP=peer1.org2.example.com:10051 - CORE_PEER_LOCALMSPID=Org2MSP volumes: - /var/run/:/host/var/run/ - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp - 
../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls - peer0.org2.example.com:/var/hyperledger/production ports: - 9051:9051 peer1.org2.example.com: container_name: peer1.org2.example.com extends: file: peer-base.yaml service: peer-base environment: - CORE_PEER_ID=peer1.org2.example.com - CORE_PEER_ADDRESS=peer1.org2.example.com:10051 - CORE_PEER_LISTENADDRESS=0.0.0.0:10051 - CORE_PEER_CHAINCODEADDRESS=peer1.org2.example.com:10052 - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:10052 - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:10051 - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.example.com:9051 - CORE_PEER_LOCALMSPID=Org2MSP volumes: - /var/run/:/host/var/run/ - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls - peer1.org2.example.com:/var/hyperledger/production ports: - 10051:10051
## docker-compose-orderer1.yaml

Create the docker-compose file used to start the orderer1 service. Its contents:

```yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

volumes:
  orderer.example.com:
  orderer1.example.com:
  peer0.org1.example.com:
  peer1.org1.example.com:
  peer0.org2.example.com:
  peer1.org2.example.com:

services:
  orderer1.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer1.example.com
    container_name: orderer1.example.com
    extra_hosts:
      - "orderer.example.com:10.1.24.225"
      - "peer0.org1.example.com:10.1.24.225"
      - "peer1.org1.example.com:10.1.24.225"
      - "peer0.org2.example.com:10.1.24.223"
      - "peer1.org2.example.com:10.1.24.223"
      - "kafka1.example.com:10.1.24.225"
      - "kafka2.example.com:10.1.24.225"
      - "kafka3.example.com:10.1.24.225"
```
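On the remote host (bd223, 10.1.24.223 in the table above), orderer1 can then be brought up with this file once the crypto material and genesis block have been copied over. A minimal sketch, assuming the same directory layout as on the first host:

```shell
# Start only the orderer1 service defined above and watch its logs for Kafka connectivity.
docker-compose -f docker-compose-orderer1.yaml up -d
docker logs -f orderer1.example.com
```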
## docker-compose-kafka.yaml

Modify docker-compose-kafka.yaml as shown below.
配置文件如下:# Copyright IBM Corp. All Rights Reserved. # # SPDX-License-Identifier: Apache-2.0 # # NOTE: This is not the way a Kafka cluster would normally be deployed in production, as it is not secure # and is not fault tolerant. This example is a toy deployment that is only meant to exercise the Kafka code path # of the ordering service. version: '2' services: zookeeper1.example.com: container_name: zookeeper1.example.com hostname: zookeeper1.example.com image: hyperledger/fabric-zookeeper:latest ports: - "2171:2181" - "2878:2878" - "3878:3878" environment: # 这里是zookeeper客户端的端口号,一般无用 # 需要注意的是zookeeper的端口号依然是2181 # 所以一个主机装多个zookeeper的时候,需要将2181映射到外部的其他端口上 ZOOKEEPER_CLIENT_PORT: 32171 ZOOKEEPER_TICK_TIME: 2000 ZOO_MY_ID: 1 ZOO_SERVERS: server.1=zookeeper1.example.com:2878:3878 server.2=zookeeper2.example.com:2879:3879 server.3=zookeeper3.example.com:2880:3880 extra_hosts: - "kafka1.example.com:10.1.24.225" - "zookeeper2.example.com:10.1.24.225" - "kafka2.example.com:10.1.24.225" - "zookeeper3.example.com:10.1.24.225" - "kafka3.example.com:10.1.24.225" kafka1.example.com: container_name: kafka1.example.com image: hyperledger/fabric-kafka:latest depends_on: - zookeeper1.example.com - zookeeper2.example.com - zookeeper3.example.com ports: - "9082:9082" environment: - KAFKA_BROKER_ID=1 - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.example.com:2171,zookeeper2.example.com:2172,zookeeper3.example.com:2173 # 虽然我们是内网环境,但是因为给这个docker容器分配的ip地址是不确定的, # 所以需要通过这样的方式来实现监听所有转到这个端口的请求, # 一定要将端口号也修改成映射出去的端口,否则还是监听不到 - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9082 # 这个是kafka外网监听配置,我们正常使用过程中其实是用不到这个配置的,只是为了可以使用上面的0.0.0.0的配置 # 所以这里写任何hostname都是可以的,但是为了明确性,还是监听了kafka1.example.com,即容器名 - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka1.example.com:9082 - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=3 - KAFKA_MESSAGE_MAX_BYTES=1048576 # 1 * 1024 * 1024 B - KAFKA_REPLICA_FETCH_MAX_BYTES=1048576 # 1 * 1024 * 1024 B - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false - KAFKA_LOG_RETENTION_MS=-1 - KAFKA_MIN_INSYNC_REPLICAS=1 - KAFKA_DEFAULT_REPLICATION_FACTOR=3 extra_hosts: - "zookeeper1.example.com:10.1.24.225" - "zookeeper2.example.com:10.1.24.225" - "kafka2.example.com:10.1.24.225" - "zookeeper3.example.com:10.1.24.225" - "kafka3.example.com:10.1.24.225" zookeeper2.example.com: container_name: zookeeper2.example.com hostname: zookeeper2.example.com image: hyperledger/fabric-zookeeper:latest ports: - "2172:2181" - "2879:2879" - "3879:3879" environment: ZOOKEEPER_CLIENT_PORT: 32172 ZOOKEEPER_TICK_TIME: 2000 ZOO_MY_ID: 2 ZOO_SERVERS: server.1=zookeeper1.example.com:2878:3878 server.2=zookeeper2.example.com:2879:3879 server.3=zookeeper3.example.com:2880:3880 extra_hosts: - "zookeeper1.example.com:10.1.24.225" - "kafka1.example.com:10.1.24.225" - "kafka2.example.com:10.1.24.225" - "zookeeper3.example.com:10.1.24.225" - "kafka3.example.com:10.1.24.225" kafka2.example.com: container_name: kafka2.example.com image: hyperledger/fabric-kafka:latest depends_on: - zookeeper1.example.com - zookeeper2.example.com - zookeeper3.example.com ports: - "9083:9083" environment: - KAFKA_BROKER_ID=2 - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.example.com:2171,zookeeper2.example.com:2172,zookeeper3.example.com:2173 - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9083 - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka2.example.com:9083 - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=3 - KAFKA_MESSAGE_MAX_BYTES=1048576 # 1 * 1024 * 1024 B - KAFKA_REPLICA_FETCH_MAX_BYTES=1048576 # 1 * 1024 * 1024 B - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false - KAFKA_LOG_RETENTION_MS=-1 - KAFKA_MIN_INSYNC_REPLICAS=1 - 
KAFKA_DEFAULT_REPLICATION_FACTOR=3 extra_hosts: - "zookeeper1.example.com:10.1.24.225" - "kafka1.example.com:10.1.24.225" - "zookeeper2.example.com:10.1.24.225" - "zookeeper3.example.com:10.1.24.225" - "kafka3.example.com:10.1.24.225" zookeeper3.example.com: container_name: zookeeper3.example.com hostname: zookeeper3.example.com image: hyperledger/fabric-zookeeper:latest ports: - "2173:2181" - "2880:2880" - "3880:3880" environment: ZOOKEEPER_CLIENT_PORT: 32173 ZOOKEEPER_TICK_TIME: 2000 ZOO_MY_ID: 3 ZOO_SERVERS: server.1=zookeeper1.example.com:2878:3878 server.2=zookeeper2.example.com:2879:3879 server.3=zookeeper3.example.com:2880:3880 extra_hosts: - "zookeeper1.example.com:10.1.24.225" - "kafka1.example.com:10.1.24.225" - "zookeeper2.example.com:10.1.24.225" - "kafka2.example.com:10.1.24.225" - "kafka3.example.com:10.1.24.225" kafka3.example.com: container_name: kafka3.example.com image: hyperledger/fabric-kafka:latest depends_on: - zookeeper1.example.com - zookeeper2.example.com - zookeeper3.example.com ports: - "9084:9084" environment: - KAFKA_BROKER_ID=3 - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.example.com:2171,zookeeper2.example.com:2172,zookeeper3.example.com:2173 - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9084 - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka3.example.com:9084 - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=3 - KAFKA_MESSAGE_MAX_BYTES=1048576 # 1 * 1024 * 1024 B - KAFKA_REPLICA_FETCH_MAX_BYTES=1048576 # 1 * 1024 * 1024 B - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false - KAFKA_LOG_RETENTION_MS=-1 - KAFKA_MIN_INSYNC_REPLICAS=1 - KAFKA_DEFAULT_REPLICATION_FACTOR=3 extra_hosts: - "zookeeper1.example.com:10.1.24.225" - "kafka1.example.com:10.1.24.225" - "zookeeper2.example.com:10.1.24.225" - "kafka2.example.com:10.1.24.225" - "zookeeper3.example.com:10.1.24.225"
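Once the Kafka stack is up on bd225, the broker logs are the quickest way to confirm that each broker registered with ZooKeeper and is listening on its advertised port. A simple check, assuming the container names above:

```shell
# Each broker should log a "started" line from kafka.server.KafkaServer once it is ready.
for c in kafka1.example.com kafka2.example.com kafka3.example.com; do
  docker logs "$c" 2>&1 | grep -i 'started' | tail -n 1
done
```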
## Specifying the orderer1 hosts in the other yaml files

The files that need a hosts entry for orderer1 include docker-compose-orderer.yaml, docker-compose-org1.yaml, docker-compose-org2.yaml, and so on.

## Starting

For how to start everything, see my previous two articles:
HyperLedger Fabric 1.4.4 multi-host, multi-node deployment (solo consensus)
HyperLedger Fabric 1.4.4: building a Kafka consensus cluster with multiple nodes -
Fabric 1 orderer + 1 peer deployment files
2018-08-21 11:45:21 A custom multi-host deployment with one orderer server and one peer server; copy the configuration files to the different servers and run them with docker-compose. -
fabric006: the roles of the peer and orderer modules in Fabric
2018-12-22 20:18:22 As mentioned above, the peer and orderer modules are the most important modules in Fabric; just how important is briefly described below: ... in the previous step you only learned how many Orderer organizations and Peer organizations there are, not how many Ordere... -
Channel, Orderer and peer configuration in the genesis block
2020-03-02 10:10:38 Looking at configtx.yaml, which is fairly long, with some comments removed you can go straight to the code: # first-network/configtx.yaml ########################### define the different organization identities, referenced later in the configuration ##############... -
Hyperledger Fabric 1.0 from scratch (13): a distributed orderer scheme
2018-01-15 18:10:00 While setting up a HyperLedger Fabric environment we use a configtx.yaml file (see Hyperledger Fabric 1.0 from scratch (8): Fabric multi-node cluster production deployment), which is mainly used to build the genesis block (before building the genesis block you need to... -
Blockchain: the Hyperledger Fabric Kafka ordering service
2021-01-17 11:10:08 Before transactions are sent to the Committers for validation and acceptance, they must first be globally ordered by the ordering service. ... Orderer nodes (Ordering Service Nodes, OSNs) act as proxies in the network; multiple Orderer nodes connect to the Kafka cluster and use Kafka ... -
Dynamically adding an orderer node_Hyperledger Fabric: dynamically adding an organization to the network
2020-12-29 00:16:52 Based on Hyperledger Fabric 1.4. Official documentation: link. Dynamically adding an organization to a Fabric network is an important capability; the official docs are already quite detailed, and if you are able to... Starting the network again uses the official byfn example; assuming some familiarity with Fabric... -
An analysis of the HyperLedger Fabric orderer, and how to implement Byzantine fault-tolerant ordering for Fabric on top of Tendermint
2019-02-11 20:38:37 As an architecturally flexible enterprise blockchain platform, HyperLedger Fabric is being used in production by more and more companies. I previously shared an article, "HyperLedger Fabric in Ctrip's blockchain service platform", which introduced some of our... -
Hyperledger Fabric: a Fabric-CA usage demo (two organizations, one Orderer, three Peers)
2019-09-15 12:37:33 Note: reposted from ... This demonstrates how to use Fabric CA to generate certificates for each component and user and apply them in a multi-server, multi-node environment. The steps build on the Hyperledger ... -
Hyperledger: a Fabric-CA usage demo (two organizations, one Orderer, three Peers)
2018-06-26 15:24:00 This demonstrates how to use Fabric CA to generate certificates for each component and user and apply them in a multi-server, multi-node environment. The steps build on a fully manual, multi-server deployment of the Hyperledger Fabric project. If anything below is unclear... -
Deploying a multi-organization (4 org) fabric network across multiple hosts
2018-06-06 16:32:42 This deployment is based on the official e2e_cli example, i.e. a 4 Peer + 1 Orderer multi-node architecture; the 5 hosts are orderer.example.com, peer0.org1.example.com, peer0.org2... First make sure the official example with 2 orgs and two peers per org runs end to end, specifically... -
Multi-host deployment of Fabric on Alibaba Cloud: 1 orderer node and multiple peer nodes
2018-10-10 10:59:35 A multi-host deployment needs N servers with fixed IP addresses, with the orderer node and peer nodes placed on different servers, as shown in the figure. 1. First, change the crypto-config.yaml configuration file: OrdererOrgs: - Name: Orderer Domain: example.com PeerOrgs: - Name... -
HyperLedger Fabric 1.2 multi-host, multi-node deployment (10.3)
2018-09-01 19:45:00 Multi-host, multi-node means deploying several organizations and nodes on several machines. This case deploys one ordering (orderer) service, two organizations (org1, org2) and four peer nodes, two per organization, requiring five machines configured as follows; the deployment diagram is... -
Several pitfalls when deploying Fabric across multiple hosts on Alibaba Cloud
2018-06-28 20:22:25 From that blog post you can see that the 5 nodes used were on the same LAN; a multi-host deployment on Alibaba Cloud servers has a few extra things to watch for. 1. Different versions of the Fabric source lead to all kinds of later errors (for example the orderer failing to start); what you download straight from git is... -
HyperLedger Fabric 1.4.4 multi-host, multi-node deployment (solo consensus)
2020-03-30 17:02:35 The network built in this article has one orderer node, two organizations and four peer nodes. Five hosts would normally be needed, but due to resource limits only two are available; the method is the same either way. So this article puts one orderer node plus one organization's two peers on one host, and on the other host... -
Hyperledger Fabric 1.1.0: thoughts on starting multiple peer nodes locally
2020-03-09 15:41:02 I went through the relevant configuration files again; among them ... orderer.yaml sets the ordering node's listen address, which I set to 0.0.0.0:7050; core.yaml sets the peer's listen address: 0.0.0.0:7051 and the chaincode listen address: 0.0.0.0:705... -
Hyperledger Fabric multi-host deployment with Kafka consensus
2018-01-08 13:25:02 Based on "Hyperledger Fabric blockchain multi-host deployment...": 4 orderer nodes, 4 peer nodes, 5 zookeeper nodes, 5 kafka nodes. Configuration changes: crypto-config.yaml unchanged; configtx.yaml adjusted: Orderer: &OrdererDefaults Ordere -
Manual multi-host deployment of fabric
2018-08-29 10:36:27 The network has 1 orderer node and 4 organizations, each with 1 node. Name ip node Hostname Organization Server 1 10.11.6.118 orderer.example.... -
Inspecting the block creation policy of a Hyperledger Fabric or Composer network: block interval and transactions per block
2019-11-24 21:46:30 // 1. Enter docker and fetch the current... # peer channel fetch config -c composerchannel ./config.pb --orderer orderer.example.com:7050 // 2. Copy config.pb out of docker to the Ubuntu host $ docker cp b7200c... -
fabric 1.4.3 multi-node deployment
2019-09-05 11:59:02 Multi-host, multi-node means deploying several organizations and nodes on several machines. This case deploys one ordering (orderer) service, two organizations (org1, org2) and four peers, two per organization, requiring five servers planned as follows: orderer.example.com ... -
15. Solo multi-host, multi-node deployment
2018-11-21 08:38:35 3. Solo multi-host, multi-node deployment: all nodes are deployed separately, one node per host. Name IP Hostname Organization orderer 192.168.247.129 orderer.itcast.com Orderer peer0 192.168.247.141 peer0.... -
Fabric multi-host deployment
2019-04-17 15:40:45 Only two machines are used this time, one orderer and one peer. Edit the hosts file on both machines, with 194 as the orderer and 109 as the peer: 192.168.2.194 orderer.flt.cn 192.168.2.109 peer0.dev.flt.cn. Create a folder a_test to hold the generated files (after which... -
fabric 1.4 multi-host deployment steps
2020-10-30 10:21:49 The steps for a multi-host, multi-node deployment (1 orderer + 4 peers) are as follows: ==== 10.3.1 Deploy orderer.example.com: cd $GOPATH/src/github.com/hyperledger/fabric; mkdir multipeer; cd multipeer (or create the folder first, then run the commands below) ... -
HyperLedger Fabric 1.4 multi-host, multi-node deployment (10.3)
2018-09-01 19:45:00 Multi-host, multi-node means deploying several organizations and nodes on several machines. This case deploys one ordering (orderer) service, two organizations (org1, org2) and four peers, two per organization, requiring five machines configured as follows; the deployment structure diagram...