Building a Docker-based Hyperledger Fabric Multi-Host Environment (Part 1)

Plan:

Environment: Ubuntu 16.04

Docker  17.04.0-ce

go 1.7.4

consul v0.8.0.4


1. Project Preparation

Anyone who managed to find this article probably already has a fair understanding of Hyperledger Fabric (hereafter simply Fabric), so I won't introduce it at length.

First, the projects Fabric is built from:

Hyperledger Fabric, tag v1.0.0-alpha (the main Fabric project)

Hyperledger Fabric-sdk-node, branch v1.0.0-alpha (the SDK for Fabric's JavaScript applications)

 

Hyperledger Fabric-ca, branch v1.0.0-alpha (Fabric's permission-management project)


Two hosts, on the same subnet.


A couple of extra words here. A project cloned from a git server (whether github.com or a self-hosted server) is normally on the master branch by default, so after downloading the projects above, remember to switch branches. As I write this post, Fabric 1.0 has already had several pre-releases (alpha, alpha2, beta); we use the first release, v1.0.0-alpha.


A quick word on how to switch branches; those familiar with git can skip this:


The first way is to specify the branch while cloning:

git clone --branch <remote-branch> <repo-url>

The second way is to clone first, then switch:

git clone <repo-url>

 

git checkout <remote-branch>
# HEAD is now detached; you need to create a local branch bound to the remote one


git checkout -b <local-branch>  # binding done

 

PS: before switching branches, make sure everything local is fully committed and in sync with the remote, or git will report an error.
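Putting it together, here is a minimal sketch of fetching all three projects at the alpha release (the repository URLs are the public Hyperledger GitHub mirrors, which is an assumption on my part; adjust them if you clone from your own server):

git clone --branch v1.0.0-alpha https://github.com/hyperledger/fabric.git
git clone --branch v1.0.0-alpha https://github.com/hyperledger/fabric-sdk-node.git
git clone --branch v1.0.0-alpha https://github.com/hyperledger/fabric-ca.git
# confirm the checked-out ref in each repo
git -C fabric log -1 --oneline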


 

 

2. Obtaining the Images

There are many ways to obtain the images; you don't necessarily have to build them from source yourself. You can pull them from the address given on the official site, or simply run `docker pull` directly.

If you do want to build the images from source, it's very simple: switch into $fabric and $fabric-ca respectively and run `make docker`.

PS: a friendly reminder here: if you build from source, you'd better have a proxy ready, otherwise you'll suffer. I recommend pulling the pre-built images directly.
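For instance, a sketch of pulling the pre-built alpha images (the exact tag names are an assumption; check the hyperledger organization on Docker Hub for the tags matching your architecture):

for img in fabric-peer fabric-orderer fabric-ca fabric-couchdb; do
    docker pull hyperledger/$img:x86_64-1.0.0-alpha
    # retag so compose files that reference the bare image name still resolve
    docker tag hyperledger/$img:x86_64-1.0.0-alpha hyperledger/$img
done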


3. Environment Setup

Now for the most important part. Building a multi-host environment in which every service runs on Docker cannot avoid one problem: cross-host communication. Based on what I've gathered online, the ways to achieve cross-host communication fall into the following categories:

 

1> Host sharing: the most direct way is to share the host's network resources; the drawbacks of this approach are just as obvious.

 

2> Physical bridging: bridge the physical NIC into the virtual bridge Docker relies on, restricting container IPs to the physical host's subnet. This is much the same as option 1.

3> Swarm (the Docker-provided cluster-management project): in short, it puts multiple hosts under one Docker manager, decoupling low-level resource scheduling from client-side deployment; it is mainly for building Docker clusters. The Fabric community has a Kubernetes discussion group whose main direction now is combining cloud servers with Kubernetes to deploy a "cloud blockchain", making it convenient for small and medium-sized enterprises to roll out blockchain solutions. Personally, though, I think that misses the point: blockchain is inherently a decentralized technology, and hosting it entirely in the cloud is just centralization in disguise, since all of your data sits in the cloud provider's hands.

4> Overlay network: Swarm also uses overlay networks. Based on VXLAN, an overlay encapsulates a virtual network on top of the host network so that all containers land on the same subnet. Compared with direct physical bridging it has one extra benefit: it can dynamically resolve and route to a container's IP by its host name, so as long as a container's host name stays the same, containers can reach each other, which effectively gives us dynamic IP assignment.

 

The third approach would actually be somewhat more convenient, since we deploy the environment from a docker-compose.yml, and with Swarm a single `docker stack deploy -c <compose>.yml <name>` brings everything up in one foolproof step. But it has one drawback: which node each container ends up on is decided by the scheduler, whereas we want to pin each component to a specific host.

Reference: http://www.sdnlab.com/7241.html

All things considered, we ended up using docker-compose + overlay network to get the distributed deployment we want. Let's first look at the docker-compose.yaml under $fabric-sdk-node/test/fixtures:

version: '3'

services:
  ca0:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/a22daf356b2aab5792ea53e35f66fccef1d7f1aa2b3a2b92dbfbf96a448ea26a_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/a22daf356b2aab5792ea53e35f66fccef1d7f1aa2b3a2b92dbfbf96a448ea26a_sk -b admin:adminpw -d'
    volumes:
      - ./channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg1
  ca1:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org2
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/464d550fe9bf9e7d8976cdf59d1a5d472598f54c058c3546317c5c5fb0ddfd6e_sk
    ports:
      - "8054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/464d550fe9bf9e7d8976cdf59d1a5d472598f54c058c3546317c5c5fb0ddfd6e_sk -b admin:adminpw -d'
    volumes:
      - ./channel/crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg2

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/twoorgs.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/msp/orderer/keystore/e8a4fdaacf1ef1d925686f19f56eb558b6c71f0116d85916299fc1368de2d58a_sk
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/msp/orderer/signcerts/orderer.example.com-cert.pem
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/msp/orderer/cacerts/example.com-cert.pem, /etc/hyperledger/msp/peerOrg1/cacerts/org1.example.com-cert.pem, /etc/hyperledger/msp/peerOrg2/cacerts/org2.example.com-cert.pem]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    dns:
      - 8.8.8.8
      - 8.8.4.4
    volumes:
        - ./channel:/etc/hyperledger/configtx
        - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/:/etc/hyperledger/msp/orderer
        - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1
        - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/msp/peerOrg2

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_LOGGING_PEER=debug
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/msp/peer/keystore/ecd9f80eb183352d5d4176eeb692e30bbfba4c38813ff9a0b6b799b546dda1d8_sk
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/msp/peer/signcerts/peer0.org1.example.com-cert.pem
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/msp/peer/cacerts/org1.example.com-cert.pem
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fixtures_default
      - GOPATH=/tmp/gopath
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start --peer-defaultchain=false
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
        - ./src/github.com:/tmp/gopath/src/github.com
        - /var/run/:/host/var/run/
        - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peer
    depends_on:
      - orderer.example.com

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/msp/peer/keystore/81c71fd393054571c8d789f302a10390e0e83eb079b02a26a162ee26f02ff796_sk
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/msp/peer/signcerts/peer0.org2.example.com-cert.pem
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/msp/peer/cacerts/org2.example.com-cert.pem
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fixtures_default
      - GOPATH=/tmp/gopath
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start --peer-defaultchain=false
    ports:
      - 8051:7051
      - 8053:7053
    volumes:
        - ./src/github.com:/tmp/gopath/src/github.com
        - /var/run/:/host/var/run/
        - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/msp/peer
    depends_on:
      - orderer.example.com

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    ports:
      - 5984:5984
    environment:
      DB_URL: http://localhost:5984/member_db


 


The Fabric network it deploys is a single-orderer solo network, structured as follows:


OrdererMSP

  |----- orderer.example.com (ordering service node)

Org1MSP

  |----- peer0.org1.example.com (peer node, anchor peer)

  |----- ca0 (CA service)

Org2MSP

  |----- peer0.org2.example.com

  |----- ca1

DB

  |----- couchdb (ledger storage)

 

 

To deploy this across hosts, some of the environment variables need changing; we'll get to exactly how shortly. First let's build the overlay network environment. I mainly followed a blog post by Tony Bai, with some changes because my environment differs.

Reference: http://tonybai.com/2016/02/15/understanding-docker-multi-host-networking/

1) Create the consul service

Since the KV store is not the focus of this article and serves only as a prerequisite for creating and running the cross-host container network, a "cluster" containing a single server node will do.

Following the topology, we run a single consul agent on 101; for details on consul clusters, service registration, service discovery, and so on, see my earlier article.

$ consul agent -server -bootstrap-expect 1 -data-dir ./data -node=master -bind=10.10.126.101 -client=0.0.0.0 &

PS: consul unpacks into a single binary executable; you can simply put it somewhere on your $PATH.

 


2) Modify the Docker daemon's DOCKER_OPTS

As mentioned earlier, creating cross-host container networks with Docker 1.9+ requires reconfiguring the startup parameters of the Docker daemon on every host node:

On Ubuntu this configuration lives in /lib/systemd/system/docker.service:

ExecStart=/usr/bin/dockerd -H fd://

Change it to:

ExecStart=/usr/bin/dockerd -H fd:// --dns 8.8.8.8 --dns 8.8.4.4 -H tcp://0.0.0.0:2375 --cluster-advertise <name-of-physical-NIC, not necessarily eth0>:2375 --cluster-store consul://10.10.126.101:8500/network --storage-driver=devicemapper

PS: note that the service may well fail to start afterwards; don't panic. That's because --storage-driver defaults to aufs, and switching to devicemapper can produce errors.

In bash:

sudo /usr/bin/dockerd --storage-driver=devicemapper
# this prints the detailed error, which roughly says it cannot rename a file; let's call that file xxx

sudo apt-get install dmsetup

sudo dmsetup rm xxx
# be sure to back up your images before running this command: once it finishes, your previous images are gone

Alternatively,

change it to:

ExecStart=/usr/bin/dockerd -H fd:// --dns 8.8.8.8 --dns 8.8.4.4 -H tcp://0.0.0.0:2375 --cluster-advertise <name-of-physical-NIC, not necessarily eth0>:2375 --cluster-store consul://10.10.126.101:8500/network --storage-driver=aufs

aufs is Docker's default --storage-driver, mainly used for sharing file volumes between containers and the host. Its concrete differences from devicemapper still await investigation, but most current Linux systems support aufs, so there's no real need to switch.

 

A few more words here:

-H (or --host) configures the medium over which Docker clients (local and remote) communicate with the Docker daemon, and is also the service port of the Docker REST API. The default is /var/run/docker.sock (local only), but the daemon can also speak TCP so remote clients can reach it, as configured above. Unencrypted network communication uses port 2375, while TLS-encrypted connections use port 2376; both are registered and approved with IANA as well-known ports. -H may be given multiple times, as in the configuration above: the unix socket serves local docker clients, and the tcp port serves remote ones. So `docker pull ubuntu` goes over docker.sock, while `docker -H 10.10.126.101:2375 pull ubuntu` goes over the tcp socket.

--cluster-advertise configures this Docker daemon instance's address within the cluster;
--cluster-store configures the access address of the cluster's distributed KV store.

If you have ever modified iptables rules by hand, it's advisable to flush them before restarting the Docker daemon: sudo iptables -t nat -F, sudo iptables -t filter -F, and so on.

PS: for convenience I simply turned the firewall off. As I've also noted in another post, editing /etc/default/docker directly does not take effect.


3) Start the Docker daemon on each node

Taking 10.10.126.101 as an example:

$ sudo service docker start

$ ps -ef|grep docker
root      2069     1  0 Feb02 ?        00:01:41 /usr/bin/docker -d --dns 8.8.8.8 --dns 8.8.4.4 --storage-driver=devicemapper -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-advertise eth0:2375 --cluster-store consul://10.10.126.101:8500/network

After startup, the iptables nat and filter rules are no different from the initial state of a single-host Docker network.

Initial network driver types on node 101:
$docker network ls
NETWORK ID          NAME                DRIVER
47e57d6fdfe8        bridge              bridge
7c5715710e34        none                null
19cc2d0d76f7        host                host

4) Create overlay networks net1 and net2

On node 101, create net1:

$ sudo docker network create -d overlay net1

On node 71, create net2:

$ sudo docker network create -d overlay net2

Afterwards, on either node 71 or node 101, listing the current networks and driver types gives the following:

$ docker network ls
NETWORK ID          NAME                DRIVER
283b96845cbe        net2                overlay
da3d1b5fcb8e        net1                overlay
00733ecf5065        bridge              bridge
71f3634bf562        none                null
7ff8b1007c09        host                host

At this point, the iptables rules still show no change.
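You can also confirm that the overlay's metadata lives in the consul KV store rather than in either local daemon; inspecting from either node returns the same network (standard docker CLI):

$ docker network inspect net1    # same ID and subnet whether run on 101 or 71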

5) Run containers on the two overlay nets

We start containers under net1 and net2: one net1 container and one net2 container on each node:

101:
sudo docker run -itd --name net1c1 --net net1 ubuntu:14.04
sudo docker run -itd --name net2c1 --net net2 ubuntu:14.04

71:
sudo docker run -itd --name net1c2 --net net1 ubuntu:14.04
sudo docker run -itd --name net2c2 --net net2 ubuntu:14.04

Once they're up, we get the following network info (the container IPs may differ from the earlier topology diagram; they can change each time a container starts):

net1:
    net1c1 - 10.0.0.7
    net1c2 - 10.0.0.5

net2:
    net2c1 - 10.0.0.4
    net2c2 -  10.0.0.6


6) Container connectivity

From inside net1c1, let's check connectivity to net1 and net2:

root@021f14bf3924:/# ping net1c2
PING 10.0.0.5 (10.0.0.5) 56(84) bytes of data.
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.670 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.387 ms
^C
--- 10.0.0.5 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.387/0.528/0.670/0.143 ms

root@021f14bf3924:/# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
^C
--- 10.0.0.4 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms

As you can see, containers within net1 reach each other, but the two overlay nets net1 and net2 are isolated from one another.

PS: this is exactly what we said earlier: on the same overlay network, traffic can be routed to a target container by its host name.

 

If you've followed along this far, your overlay environment is ready, so let's discuss how docker-compose.yaml should be modified:

1> Split the Fabric network into two parts:

host1 (104): orderer, Org1, couchdb

host2 (121): Org2

So I split docker-compose.yaml into two files as well.

2> Network settings:

We created the overlay networks net1 and net2 earlier; for convenience we'll just use net1 as the designated network. If no network is specified, docker-compose generates a default bridge network <($dirname)_default>, which cannot connect across hosts.

3> Parameter changes:

CORE_VM_ENDPOINT=tcp://ip:port

Fabric currently has no dedicated VM for running chaincode: when chaincode is deployed, the peer automatically builds a Docker image for it, then executes the chaincode by running a container. This environment variable therefore specifies, at chaincode deployment time, the address the peer uses to talk to the Docker daemon.

CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net1

Specifies the network for the peer (and its chaincode containers).
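Since the peer now reaches the Docker daemon over TCP, it's worth checking from each host that the remote API actually answers before deploying chaincode (/version is part of Docker's standard REST API; the IP below is the one used in the example compose files that follow):

$ curl http://192.168.1.104:2375/version    # should return the daemon's version JSON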

4> Examples

host1 (104): docker-compose-0.yaml

version: '3'

services:
  ca0:
    container_name: ca_peerOrg1
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/a22daf356b2aab5792ea53e35f66fccef1d7f1aa2b3a2b92dbfbf96a448ea26a_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/a22daf356b2aab5792ea53e35f66fccef1d7f1aa2b3a2b92dbfbf96a448ea26a_sk -b admin:adminpw -d'
    volumes:
      - ./channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    networks:
      - net1
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/twoorgs.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/msp/orderer/keystore/e8a4fdaacf1ef1d925686f19f56eb558b6c71f0116d85916299fc1368de2d58a_sk
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/msp/orderer/signcerts/orderer.example.com-cert.pem
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/msp/orderer/cacerts/example.com-cert.pem, /etc/hyperledger/msp/peerOrg1/cacerts/org1.example.com-cert.pem, /etc/hyperledger/msp/peerOrg2/cacerts/org2.example.com-cert.pem]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - 7050:7050
    volumes:
        - ./channel:/etc/hyperledger/configtx
        - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/:/etc/hyperledger/msp/orderer
        - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1
        - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/msp/peerOrg2
    networks:
      - net1

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=tcp://192.168.1.104:2375
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_LOGGING_PEER=debug
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/msp/peer/keystore/ecd9f80eb183352d5d4176eeb692e30bbfba4c38813ff9a0b6b799b546dda1d8_sk
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/msp/peer/signcerts/peer0.org1.example.com-cert.pem
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/msp/peer/cacerts/org1.example.com-cert.pem
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net1
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start --peer-defaultchain=false
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
        - /var/run/:/host/var/run/
        - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peer
    depends_on:
      - orderer.example.com
    networks:
      - net1

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    ports:
      - 5984:5984
    environment:
      DB_URL: http://localhost:5984/member_db
    networks:
      - net1

networks:
  net1:
    external: true # referenced from outside; compose won't create it

host2 (121): docker-compose-1.yaml

version: '3'

services:
  ca1:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org2
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/464d550fe9bf9e7d8976cdf59d1a5d472598f54c058c3546317c5c5fb0ddfd6e_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/464d550fe9bf9e7d8976cdf59d1a5d472598f54c058c3546317c5c5fb0ddfd6e_sk -b admin:adminpw -d'
    volumes:
      - ./channel/crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg2
    networks:
      - net1 
  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=tcp://192.168.1.104:2375
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/msp/peer/keystore/81c71fd393054571c8d789f302a10390e0e83eb079b02a26a162ee26f02ff796_sk
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/msp/peer/signcerts/peer0.org2.example.com-cert.pem
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/msp/peer/cacerts/org2.example.com-cert.pem
      # # the following setting starts chaincode containers on the same
      # # bridge network as the peers
      # # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net1
      - GOPATH=/tmp/gopath
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start --peer-defaultchain=false
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
        - ./src/github.com:/tmp/gopath/src/github.com
        - /var/run/:/host/var/run/
        - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/msp/peer
        - ./channel:/etc/hyperledger/configtx
    networks:
      - net1

networks:
   net1:
     external: true

When starting, bring up docker-compose-0.yaml first:

  docker-compose -f docker-compose-0.yaml up --force-recreate

Then, on the other host, bring up docker-compose-1.yaml:

  docker-compose -f docker-compose-1.yaml up --force-recreate

Once they're running, exec into a container to get a bash shell:

  docker exec -it <container> bash

Then ping a container on the other host to verify connectivity. The Fabric images don't include ping, so we have to install it ourselves:

  apt-get update

  apt-get install inetutils-ping

If the ping succeeds, congratulations: you're halfway there. If it doesn't, don't panic; first shut down and clean up the current containers:

  docker-compose -f xx.yaml down

Then clean up unused Docker networks (make sure no containers are running at this point):

  docker network prune -f

After cleaning up, restart the Fabric network and try again.
