Another Multi-Node Setup of a Fabric Network

KC Tam
7 min read · Jul 30, 2019

Overview

In my previous work I showed how to use the extra_hosts directive to create a multi-node setup of a fabric network. I received some comments from readers, so here I provide another setup example of a multi-node fabric network. The basic idea is the same, with certain differences from the previous one. After working through these two setups you should be more comfortable designing this type of multi-node fabric network.

Network Setup

Here is the setup of this fabric network:

  • Orderer cluster (3 ordering service nodes) using Kafka cluster (3 Zookeepers and 4 Kafka brokers)
  • Two organizations, Org1 and Org2.
  • Each organization comes with two peers, peer0 and peer1, and a Certificate Authority (CA)
  • One channel, mychannel, is created and all peers join it.

These components are deployed in four nodes as follows. The distribution of components is mainly for demonstration, aiming to make them as distributed as possible. In real life, you need to consider the requirements and the arrangement of the organizations, and more nodes may be needed.

I also deploy a CLI for each peer. This makes the demonstration easier without involving too many environment variables. Again, in real life you may not need the CLI, as chaincode operations are mostly performed by client applications.

Configuration Files

You can find the configuration file for this network setup here.

As in a general fabric network, the configuration contains the following items:

  1. Crypto material configuration crypto-config.yaml and channel configuration configtx.yaml
  2. Docker compose files for each node (node1.yaml, node2.yaml, etc.)
  3. An environment variable file .env

For the sake of simplicity, I have generated the crypto material (stored in ./crypto-config) and the channel artifacts (stored in ./channel-artifacts). I have skipped how to generate these files here; you can refer to any of my previous articles to create them.
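
For reference, here is a minimal sketch of what a crypto-config.yaml for this topology (three orderers, two organizations with two peers each) typically looks like. Treat it as an outline of the cryptogen format, not the exact file in the repository:

```yaml
# crypto-config.yaml (sketch): input for the cryptogen tool
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2        # generates peer0 and peer1
    Users:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
```

Running cryptogen generate --config=./crypto-config.yaml against such a file produces the crypto-config directory used here.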

If you are just running a demonstration, the generated material should be good enough. If you need your own setup, simply create your own based on the configuration files.

In particular, the setup of the Fabric-CA containers (in node1.yaml and node3.yaml) for Org1 and Org2 depends on the generated crypto material: the variable FABRIC_CA_SERVER_CA_KEYFILE refers to the CA's private key file. If you generate your own crypto material, don't forget to update this variable based on your material. You can find the file name under crypto-config/peerOrganizations/org[x].example.com/ca.
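
As a sketch of where this variable sits (the service layout follows the usual fabric-samples convention, and the key file name is a placeholder for your generated _sk file), the CA service in node1.yaml contains something like:

```yaml
# node1.yaml (excerpt, sketch): Fabric-CA service for Org1
ca.org1.example.com:
  image: hyperledger/fabric-ca
  environment:
    - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    - FABRIC_CA_SERVER_CA_NAME=ca.org1.example.com
    - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    # replace the file name below with the *_sk file found under
    # crypto-config/peerOrganizations/org1.example.com/ca/
    - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/<your_sk_file>
```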

Bringing Up the Fabric Network

Now we can start our journey to bring up this fabric network.

Step 1: Create four Ubuntu virtual machines

Again I am using AWS EC2 t2.small instances with my fabric image. You can also start from a clean Ubuntu 18.04 LTS and install the software required for Hyperledger Fabric. For those interested in creating a machine image to speed up deployment, please refer to this article.

Here are my four instances. I have marked their public IPs as the Name for easy reference.

As usual, I open up all ports for demonstration purposes. Note that this time I am using public IP addresses for all instances. This means the nodes can be deployed anywhere, as long as the public IPs are reachable.

Step 2: Clone the Repository

First, clone the repository into fabric-samples on your localhost.

# localhost
cd fabric-samples
git clone https://github.com/kctam/fullgear-4node-setup.git
cd fullgear-4node-setup

Step 3: Update the .env to reflect the public IP of each node

The docker compose files refer to the environment variables NODE1, NODE2, etc. Here we update them in the .env file.
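
As a sketch, the .env file simply maps each NODE variable to a public IP; docker-compose substitutes these values wherever ${NODE1} etc. appear. The addresses below are placeholders; use your instances' actual public IPs:

```shell
# Write the .env file with the four node addresses
# (placeholder IPs shown; replace each value with the public IP
# of the corresponding EC2 instance)
cat > .env <<'EOF'
NODE1=18.222.0.1
NODE2=18.222.0.2
NODE3=18.222.0.3
NODE4=18.222.0.4
EOF

# quick check that the variables are in place
grep NODE1 .env
```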

Step 4: Upload the full directory to all four nodes

Archive the directory and scp it to each node.

#localhost
cd fabric-samples
tar cf fullgear-4node-setup.tar fullgear-4node-setup/
# for all four nodes
scp -i [key] fullgear-4node-setup.tar ubuntu@[node_address]:/home/ubuntu/fabric-samples/
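
The upload step above can be scripted as a loop instead of four manual commands. A minimal sketch (the addresses and key file name are placeholders) that only prints each command so you can inspect it first:

```shell
# Print the scp command for each node (replace the placeholder
# addresses and key path with your own, then drop the echo to
# actually run the uploads)
for node in 18.222.0.1 18.222.0.2 18.222.0.3 18.222.0.4; do
  echo "scp -i aws.pem fullgear-4node-setup.tar ubuntu@${node}:/home/ubuntu/fabric-samples/"
done
```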

Step 5: Open four terminals, one for each node

# for each node
ssh -i [key] ubuntu@[node address]

This is my four terminals.

Step 6: Bring up containers in each node

On each node we will bring up the containers using the corresponding docker compose file.

# each node
cd fabric-samples
tar xf fullgear-4node-setup.tar
cd fullgear-4node-setup
docker-compose -f node[n].yaml up -d

Note that due to component dependencies, not all containers (in particular Kafka) may come up on the first attempt. We can run docker-compose again to bring up those containers.

In the final picture we should have all these containers on each node, as indicated in the diagram. (Use docker ps to inspect the containers.)

Node 1: 6 containers
Node 2: 5 containers
Node 3: 6 containers
Node 4: 3 containers

Step 7: Set up mychannel and join all peers to it

First generate the block file mychannel.block from Node 1.

# Node 1
docker exec cli peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx
Block 0 (mychannel.block) is created

Join peer0.org1.example.com (at Node 1) to mychannel.

# Node 1
docker exec cli peer channel join -b mychannel.block
Node 1 (peer0.org1.example.com) joins mychannel

Now we copy the block file to the other nodes, using localhost as an intermediary and scp between the nodes.

# Node 1
docker cp cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block .
# localhost: copy from node 1
scp -i ~/Downloads/aws.pem ubuntu@[node1]:/home/ubuntu/fabric-samples/fullgear-4node-setup/mychannel.block .
# localhost: copy to node 2, 3 and 4
scp -i ~/Downloads/aws.pem mychannel.block ubuntu@[node2,3,4]:/home/ubuntu/fabric-samples/fullgear-4node-setup/
# on each node (2, 3 and 4), copy from the node filesystem into the cli container
docker cp mychannel.block cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/
# join node to mychannel for node 2, 3 and 4
docker exec cli peer channel join -b mychannel.block
Node 2 (peer1.org1.example.com) joins mychannel
Node 3 (peer0.org2.example.com) joins mychannel
Node 4 (peer1.org2.example.com) joins mychannel

Step 8: Get channel info

To check whether all four peers are in the same channel, we use the following commands on each node.

docker exec cli peer channel list
docker exec cli peer channel getinfo -c mychannel
Node 1 (peer0.org1.example.com) getinfo
Node 2 (peer1.org1.example.com) getinfo
Node 3 (peer0.org2.example.com) getinfo
Node 4 (peer1.org2.example.com) getinfo

We see that all peers (through their cli) have joined mychannel, and the latest blocks of the peers are identical. The network is now ready for chaincode operation.

Chaincode Operation (Fabcar)

For demonstration purposes, we will deploy Fabcar into mychannel.

Step 1: Install Fabcar chaincode to all nodes (peers)

# all nodes
docker exec cli peer chaincode install -n fabcar -v 1.0 -p github.com/chaincode/fabcar/go/

Step 2: Instantiate chaincode from peer0.org1.example.com

# Node 1
docker exec cli peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n fabcar -v 1.0 -c '{"Args":[]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"

Step 3: Invoke initLedger() to preload 10 car records into the ledger

Note that this can be invoked on any node. Here we do it at peer1.org1.example.com (Node 2).

# Node 2
docker exec cli peer chaincode invoke -o orderer1.example.com:7050 -C mychannel -n fabcar -c '{"Args":["initLedger"]}'

Step 4: Invoke queryAllCars() to get back the preloaded data from ledger

For demonstration, we do it at peer0.org2.example.com (Node 3),

# Node 3
docker exec cli peer chaincode invoke -o orderer1.example.com:7050 -C mychannel -n fabcar -c '{"Args":["queryAllCars"]}'

Step 5: Invoke changeCarOwner() for CAR0

Again for demonstration we do this on peer1.org2.example.com (Node 4).

# Node 4
docker exec cli peer chaincode invoke -o orderer0.example.com:7050 -C mychannel -n fabcar -c '{"Args":["changeCarOwner","CAR0","KC"]}'

Step 6: Query CAR0

This time we query CAR0 from peer0.org1.example.com (Node 1).

# Node 1
docker exec cli peer chaincode invoke -o orderer0.example.com:7050 -C mychannel -n fabcar -c '{"Args":["queryCar","CAR0"]}'

Note that we can use orderer0, orderer1 or orderer2 as the ordering service node. There is no requirement on which peer uses which orderer, as the orderer cluster is backed by the Kafka cluster. This shows the robustness of the ordering service.

Now we see the whole network is functioning: the ledger on the peers is updated on each chaincode invoke.

Clean Up

To clean things up, run the following on each node.

docker-compose -f node[n].yaml down --volumes
docker rm $(docker ps -aq)
docker rmi $(docker images dev-* -q)

You can either keep the four instances for upcoming demonstration, or simply terminate them.
