Multi-Host Deployment for First Network (Hyperledger Fabric v2)

KC Tam
7 min read · Aug 11, 2020


Overview

This is a rework of my previous article, published last December (2019), about deploying the raft-based First Network in a multi-host environment.

The previous article was done on Fabric v1.4. Here the demonstration is done on Fabric v2.2, which was released just a month ago. There are several changes compared to the previous article:

  • The generation of crypto material and channel artifacts is skipped. Instead, they are pre-generated and kept in the repository. This simplifies the initial steps, which previously involved copying material between hosts with scp.
  • First Network is not available in v2.2. As this setup is largely based on First Network, the relevant files were copied and kept in the repository.
  • The chaincode deployment in v2 is different from v1.4.
  • Scripts are created for easier deployment.

As a result, a repository is created for this setup. Once you have the swarm set up across the four hosts, you can clone the repository and start the demonstration. Those who wish to understand the details can always refer to the previous article.

The setup remains the same. The demonstration is performed in the following flow.

  1. Bring up four hosts in AWS
  2. Create an overlay with Docker Swarm
  3. Clone the repository on all hosts
  4. Bring up each host
  5. Bring up mychannel and join all peers to mychannel
  6. Deploy fabcar chaincode
  7. Test fabcar chaincode

Setup

First Network is composed of one orderer organization and two peer organizations. The orderer organization runs a raft-based ordering service cluster of five ordering service nodes (orderers). Each peer organization (Org1 and Org2) has two peers, peer0 and peer1. A channel mychannel is created and all peers join it.

The setup is identical to the previous article: here is how the network components are deployed on each host.

Demonstration

Step 1: Bring up four hosts in AWS

The four hosts are running Fabric v2.2 on Ubuntu 18.04 LTS.

Note: Since we do not use any managed blockchain features, feel free to use hosts from a different cloud provider, or even spread them across cloud providers. Just make sure the ports needed for communication are open between the hosts, as sketched below.
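
For reference, Docker Swarm itself needs the following ports between the hosts; the Fabric traffic between containers then travels inside the overlay network. Here is a sketch with ufw, assuming you manage the host firewall directly (on AWS you would open the same ports in the security group instead):

# Docker Swarm ports (open them between all four hosts)
sudo ufw allow 2377/tcp   # cluster management communications
sudo ufw allow 7946/tcp   # communication among nodes
sudo ufw allow 7946/udp   # communication among nodes
sudo ufw allow 4789/udp   # overlay (VXLAN) network traffic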

Here is my deployment.

Step 2: Create an overlay with Docker Swarm

On Host 1,

docker swarm init --advertise-addr <host-1 ip address>
docker swarm join-token manager

Note the output of docker swarm join-token manager as it is used immediately in the next step.

On Host 2, 3 and 4,

<output from join-token manager> --advertise-addr <host n ip>
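
For illustration, the resulting command on Host 2, 3 and 4 typically looks like the following, with the token abbreviated (the exact token comes from the join-token output on Host 1):

docker swarm join --token SWMTKN-1-<token> <host-1 ip>:2377 --advertise-addr <host-n ip>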

On Host 1, create the first-network overlay network, which is used by our network components (see each hostn.yaml in the repository).

docker network create --attachable --driver overlay first-network

Now we can check each host. All hosts share the same overlay (note the same network ID on all hosts).
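
One way to verify is to list the network on every host and compare the network IDs, for example:

docker network ls --filter name=first-network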

Terminals for Host 1, 2, 3 and 4 (from top to bottom)

With this the Docker Swarm is ready for our fabric network.

Step 3: Clone the repository on all hosts

On each host,

cd fabric-samples
git clone https://github.com/kctam/4host-swarm.git
cd 4host-swarm

Note: All four hosts have the same set of material. This is just for demonstration. In a real deployment each host should hold a customized set, containing only the material (crypto material) and containers (e.g. the CLI for its own organization) that it actually needs.

Step 4: Bring up each host

On each host, bring up the components with the script hostnup.sh. This script is just a docker-compose up with the corresponding configuration file.

./hostnup.sh
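
For reference, a minimal sketch of what such a script can look like on Host 1, assuming the compose file is named host1.yaml (check the actual scripts in the repository for the exact file names):

#!/bin/bash
# bring up the Host 1 components defined in the compose file
docker-compose -f host1.yaml up -d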

To check if everything works fine, check the containers running on each host. They should match our setup.

Containers in Host 1, 2, 3 and 4 (from top to bottom)

Step 5: Bring up mychannel and join all peers to mychannel

This is a standard process: generate the channel genesis block (for mychannel), join all the peers with this block file, and update the anchor peer transactions. As all commands are issued on the CLI (which is in Host 1), a script is created to perform these tasks.

On Host 1,

./mychannelup.sh
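
For those curious about what mychannelup.sh does, the flow is roughly the standard First Network sequence issued through the CLI container. A simplified sketch follows; it assumes the pre-generated artifacts keep their First Network names (channel.tx, Org1MSPanchors.tx), and the actual script in the repository covers all four peers:

ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
# create mychannel from the pre-generated channel transaction
docker exec cli peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile $ORDERER_CA
# join peer0.org1 (repeat for the other peers by overriding the CORE_PEER_* variables)
docker exec cli peer channel join -b mychannel.block
# update the anchor peer for Org1 (similarly for Org2)
docker exec cli peer channel update -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx --tls --cafile $ORDERER_CA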

After all peers join the channel, they should have the same ledger. We use the following command to check each host. All peers show the same blockchain height (3) and the same block hash.

Note that here we use the docker exec command directly on each peer container, not the CLI.

docker exec peerx.orgy.example.com peer channel getinfo -c mychannel
All peers have the same ledger (blockchain) after joining mychannel
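
The output on each peer looks roughly like this (hash values abbreviated; they differ from network to network but must match across peers):

Blockchain info: {"height":3,"currentBlockHash":"...","previousBlockHash":"..."}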

Now all the peers have joined mychannel, and the network is ready for chaincode deployment.

Step 6: Deploy fabcar chaincode

We follow the previous article and deploy the fabcar chaincode for demonstration. Note that in v2.2 the process is different from v1.4, as we are using the new chaincode lifecycle for deployment. For more information you can refer to the Fabric documentation (readthedocs) and this article.

Note that all peer commands are issued from the CLI container, which is on Host 1.

Package chaincode

# If not done before
pushd ../chaincode/fabcar/go
GO111MODULE=on go mod vendor
popd
# packaging
docker exec cli peer lifecycle chaincode package fabcar.tar.gz --path github.com/chaincode/fabcar/go --label fabcar_1

Install chaincode package to all peers

# peer0.org1
docker exec cli peer lifecycle chaincode install fabcar.tar.gz
# peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:8051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer lifecycle chaincode install fabcar.tar.gz
# peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer lifecycle chaincode install fabcar.tar.gz
# peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:10051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer lifecycle chaincode install fabcar.tar.gz

Once installation on each peer is complete, you can see that a new chaincode container image has been built on each host. It is not instantiated yet (you cannot see it in docker ps); it will be instantiated once the chaincode is committed. Also, although the command is issued on the CLI of Host 1, the chaincode container image is built on the host where the peer is running.

Chaincode container images are built after chaincode package is installed on each peer.
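
One quick way to see this on any host is to list the Docker images and look for the chaincode label, for example:

docker images | grep fabcar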

Approve chaincode for both organizations

Note that your package ID may not be the same as shown here. Use the package ID returned by the installation above.
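
If you did not note it down, you can also retrieve the package ID from the CLI container:

docker exec cli peer lifecycle chaincode queryinstalled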

# for org1
docker exec cli peer lifecycle chaincode approveformyorg --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --sequence 1 --waitForEvent --package-id fabcar_1:a976a3f2eb95c19b91322fc939dd37135837e0cfc5d52e4dbc3a2ef881d14179
# for org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer lifecycle chaincode approveformyorg --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --channelID mychannel --name fabcar --version 1 --sequence 1 --waitForEvent --package-id fabcar_1:a976a3f2eb95c19b91322fc939dd37135837e0cfc5d52e4dbc3a2ef881d14179

To check approval status,

docker exec cli peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1
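
Adding --output json makes the result easier to read; a sketch of the command and the kind of output to expect:

docker exec cli peer lifecycle chaincode checkcommitreadiness --channelID mychannel --name fabcar --version 1 --sequence 1 --output json
# expected output once both organizations have approved:
# {
#   "approvals": {
#     "Org1MSP": true,
#     "Org2MSP": true
#   }
# }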

Commit chaincode

docker exec cli peer lifecycle chaincode commit -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt --channelID mychannel --name fabcar --version 1 --sequence 1

To check commit status,

docker exec cli peer lifecycle chaincode querycommitted --channelID mychannel --name fabcar

Step 7: Test fabcar chaincode

We will invoke chaincode functions and see if the deployment is working well.

Invoke initLedger

As we are using a raft-based ordering cluster, we can specify any orderer to perform our transaction. When invoking initLedger here we specify orderer3.example.com to handle this process, which is running on Host 3. You can try the others.

docker exec cli peer chaincode invoke -o orderer3.example.com:9050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer3.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:9051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["initLedger"]}'

Query queryCar

After that, we can query a car record on the four peer nodes. We get back the same result from all of them, which shows that the fabric network is working well.

# peer0.org1
docker exec cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:8051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:9051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:10051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer chaincode query -n fabcar -C mychannel -c '{"Args":["queryCar","CAR0"]}'
query on peer0.org1.example.com
query on peer1.org1.example.com
query on peer0.org2.example.com
query on peer1.org2.example.com

Step 8: Tear down everything

On each host, tear down everything with the script hostndown.sh. This script uses docker-compose to shut down and remove the containers. In addition, all chaincode containers and chaincode images are removed.

./hostndown.sh
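
A minimal sketch of what such a teardown script can do, again assuming the compose file name used in the bring-up step (the actual script may differ in detail):

#!/bin/bash
# shut down and remove the containers defined in the compose file
docker-compose -f host1.yaml down
# remove any chaincode containers and images left behind (their names start with dev-)
docker rm -f $(docker ps -aq --filter name=dev-) 2>/dev/null
docker rmi -f $(docker images -q --filter reference='dev-*') 2>/dev/null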

Hope you enjoy this work.
