Overview
Since the demo of a multi-channel setup (link) on a localhost was published last December, I have kept receiving questions about multi-node deployment. This is by far the most common production setup, in which each organization runs its own infrastructure in its own datacenter.
Not surprisingly, there have been many attempts to build such a setup, and I now see more pre-built, pre-configured and user-friendly tools (e.g. Amazon Managed Blockchain or FabDep), so deploying multiple nodes is not as difficult as one might expect. Nevertheless, for those who wish to master Hyperledger Fabric, I still suggest hands-on practice of the whole process. This maintains deployment flexibility and, most importantly, helps one understand more about the Hyperledger Fabric architecture.
One challenge of a multi-node deployment is the communication between docker containers. On a localhost everything can talk to everything else; in real life this never happens. When I first researched successful multi-node setups, some of them relied on Docker Swarm (e.g. link). That is no surprise, as Hyperledger Fabric uses docker containers. But deployment with Docker Swarm always runs into challenges, not to mention the many concerns about Docker Swarm itself.
There is a simpler way from λ.eranga, using extra hosts. This makes the communication between running containers easier, provided that we feed in the necessary information when bringing up the network.
Inspired by this work, here I deploy a three-organization fabric network running on three separate machines. What makes it more fun is that I will deploy two channels on this setup and see how things work.
Setup
It is a very simple setup. In this fabric network we have only one orderer. There are three organizations in total, each of which has one peer node configured. For simplicity we take out most of the components and only observe the behaviour of setting up channels across multiple nodes.
We will first build a channel called channelall. All three organizations (peers) join channelall. After creation of channelall, we will install and instantiate the chaincode Simple Asset Chaincode (sacc) on channelall, and observe the behaviour of chaincode query and invocation across the three nodes.
Later we add one more channel, channel12, which only Org1 and Org2 join. We will again install and instantiate the sacc chaincode on channel12, and observe the behaviour of chaincode query and invocation from a non-member and from a cross-channel perspective.
Here is what the setup looks like.
Demo Steps
The process in this demo follows what we learn from Build Your First Network (BYFN). What’s new is to create four nodes on AWS.
Step 1: Bring Up the Four Nodes
Step 1.1: Create 4 AWS EC2 instances
Follow standard creation of EC2 instances. Note that,
- Select Ubuntu 16.04 LTS
- Use t2.small instances
- For demo purposes, just use a Security Group that opens everything. In production, only open the IPs and ports that are needed
We will designate the EC2 instances as follows:
- Orderer
- Node 1 for Org1
- Node 2 for Org2
- Node 3 for Org3
Step 1.2: Install prerequisite and Hyperledger Fabric related material
On each node, install the prerequisites (link) and Hyperledger Fabric related material (link), including the docker images, the tools and fabric-samples.
After installation, run Build Your First Network (BYFN) and see if the whole process completes.
$ cd fabric-samples/first-network
$ ./byfn.sh up
And bring down the BYFN with
$ ./byfn.sh down
Here I write down the Private IP and Public IP of all four nodes, and I will use these throughout the article. When you bring up your machines, make sure you are using your own set of addresses.
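For convenience, the public IP addresses used in this article (as they appear in the scp commands in Steps 4, 6 and 8) can be collected into shell variables. This is just a helper I add here; substitute your own instance addresses:

```shell
# Public IPs of the four EC2 instances in this demo (replace with your own).
export ORDERER_PUB=3.90.64.249
export NODE1_PUB=3.92.233.164
export NODE2_PUB=18.233.151.196
export NODE3_PUB=52.23.207.17

# Quick sanity check of the mapping.
echo "orderer=$ORDERER_PUB node1=$NODE1_PUB node2=$NODE2_PUB node3=$NODE3_PUB"
```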
Step 2: Generate Crypto and Network Artifacts
As in Build Your First Network (BYFN), we first prepare the configuration. In this demo I use my own localhost.
First, prepare a project folder. I create it inside fabric-samples. This is not a must, but we will refer to some directories later. If you use another directory, make sure you refer to the right contents.
In my localhost,
$ cd fabric-samples
$ mkdir 3node2channel && cd 3node2channel
We need to create directories for channel-artifacts and deployment. The directory crypto-config is created automatically when we run the tool later.
$ mkdir channel-artifacts
$ mkdir deployment
At this point we work on the two files: crypto-config.yaml and configtx.yaml.
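The crypto-config.yaml describes the organizations for cryptogen. The article does not list the file, so here is a minimal sketch modeled on the BYFN sample, with org names and domains matching the container names used in this article (one peer and one admin-plus-one user per org):

```yaml
# Sketch of crypto-config.yaml for this topology (based on the BYFN sample).
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer          # becomes orderer.example.com
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 1                     # one peer: peer0.org1.example.com
    Users:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 1
    Users:
      Count: 1
  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 1
    Users:
      Count: 1
```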
Later we will build channels using profile ChannelAll and Channel12.
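The profiles live in the Profiles section of configtx.yaml. Again the article does not list the file, so the sketch below shows only what the profile names imply, assuming the Organizations anchors (OrdererOrg, Org1, Org2, Org3) and the defaults are defined earlier in the file as in the BYFN sample:

```yaml
# Sketch of the Profiles section of configtx.yaml for this demo.
Profiles:
  ThreeOrgsOrdererGenesis:         # genesis block for the orderer
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
          - *Org2
          - *Org3
  ChannelAll:                      # channel with all three orgs
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
        - *Org3
  Channel12:                       # channel with Org1 and Org2 only
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
        - *Org2
```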
Here is what the directory looks like.
Now we can generate the required artifacts.
First, the crypto artifacts:
$ ../bin/cryptogen generate --config=./crypto-config.yaml
Now, the network artifacts. First we work on the genesis block for the orderer:
$ export FABRIC_CFG_PATH=$PWD
$ ../bin/configtxgen -profile ThreeOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
We also need the configuration transactions for both channels.
$ ../bin/configtxgen -profile ChannelAll -outputCreateChannelTx ./channel-artifacts/channelall.tx -channelID channelall
$ ../bin/configtxgen -profile Channel12 -outputCreateChannelTx ./channel-artifacts/channel12.tx -channelID channel12
After the generation we now have the artifacts required for our setup.
Step 3: Examine the Docker-Compose Files
Another set of important files are the docker compose files, which determine which containers are brought up and how they run in this setup.
They are all stored under the deployment directory. The structure is like this.
- docker-compose-base.yml: This is the base for both the orderer and the peers. This file is referred to in the other docker compose files.
- docker-compose-orderer.yml: This is for the orderer node.
- docker-compose-node#.yml: These are for the nodes of Org1, Org2 and Org3.
There are some remarks on these files.
- As mentioned, the directory structure matters. If you are not following the previous steps, make sure you update the directories (mainly the volumes)
- There are entries for extra hosts (extra_hosts). This is quite straightforward: we specify the other nodes with their (private) IP addresses. These entries are then written into /etc/hosts inside the containers.
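To make the extra_hosts idea concrete, here is a hypothetical fragment of what a service in docker-compose-node1.yml might contain. The placeholder addresses are intentional (I do not fill in the private IPs here; use your own):

```yaml
# Hypothetical fragment of docker-compose-node1.yml: map the hostnames of
# the other nodes to their private IPs so containers can reach them.
  peer0.org1.example.com:
    extra_hosts:
      - "orderer.example.com:<orderer-private-ip>"
      - "peer0.org2.example.com:<node2-private-ip>"
      - "peer0.org3.example.com:<node3-private-ip>"
```

With this in place, docker writes the entries into /etc/hosts of the container, so peer0.org1.example.com can resolve the orderer and the other peers by name.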
- Each Org comes with a peer and a CLI. We will use CLI for chaincode operations.
Here is the directory structure.
Step 4: Upload the 3node2channel Directory to All Nodes
On my localhost:
// fabric-samples directory
$ tar cf 3node2channel.tar 3node2channel/

// upload to all nodes (use your own key file and public IP addresses)
$ scp -i ~/Downloads/aws.pem 3node2channel.tar ubuntu@3.90.64.249:/home/ubuntu/fabric-samples/
$ scp -i ~/Downloads/aws.pem 3node2channel.tar ubuntu@3.92.233.164:/home/ubuntu/fabric-samples/
$ scp -i ~/Downloads/aws.pem 3node2channel.tar ubuntu@18.233.151.196:/home/ubuntu/fabric-samples/
$ scp -i ~/Downloads/aws.pem 3node2channel.tar ubuntu@52.23.207.17:/home/ubuntu/fabric-samples/
Step 5: Extract the Directory and Bring Up Nodes
On all nodes
// in fabric-samples
$ tar xf 3node2channel.tar
$ cd 3node2channel/deployment
For Orderer node
$ docker-compose -f docker-compose-orderer.yml up -d
$ docker ps
For Org1 node
$ docker-compose -f docker-compose-node1.yml up -d
$ docker ps
For Org2 node
$ docker-compose -f docker-compose-node2.yml up -d
$ docker ps
For Org3 node
$ docker-compose -f docker-compose-node3.yml up -d
$ docker ps
Step 6: Setup Channel channelall
Note: I saw some queries about an authentication error in Step 6; there are two possible reasons.
(1) My original copy-and-paste automatically converted the double quotes into ones the docker command doesn’t recognize. This has been fixed, so you can now copy and paste directly from this article.
(2) Make sure you are using the same set of crypto-config files. In my work I generate them on localhost and then scp them to all nodes (see Step 2). If you generate crypto-config on each node separately, you will encounter the authentication error, as the files on each node are not identical.
Here we take Node 1 to create the genesis block for channel channelall.
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c channelall -f /var/hyperledger/configs/channelall.tx
The file channelall.block is created.
We first let peer0.org1.example.com join channelall using this channelall.block file.
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel join -b channelall.block
We have peer0.org1.example.com now in channelall. We will do the same on peer0.org2.example.com and peer0.org3.example.com.
Note that the file is now inside the container peer0.org1.example.com on Node 1. We will use localhost as a “bridge” to send this file to Node 2 and Node 3, and then pass it to peer0.org2.example.com and peer0.org3.example.com, respectively.
Here are the steps (again, update the public IP addresses of your own instances):
// node1
$ docker cp peer0.org1.example.com:channelall.block .

// localhost
$ scp -i ~/Downloads/aws.pem ubuntu@3.92.233.164:/home/ubuntu/fabric-samples/3node2channel/deployment/channelall.block .
$ scp -i ~/Downloads/aws.pem channelall.block ubuntu@18.233.151.196:/home/ubuntu/fabric-samples/3node2channel/deployment/
$ scp -i ~/Downloads/aws.pem channelall.block ubuntu@52.23.207.17:/home/ubuntu/fabric-samples/3node2channel/deployment/

// node2
$ docker cp channelall.block peer0.org2.example.com:/channelall.block

// node3
$ docker cp channelall.block peer0.org3.example.com:/channelall.block
We see the file channelall.block is now in both Node 2 and Node 3.
Finally we can have peer0.org2.example.com and peer0.org3.example.com join channelall.
Node 2
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org2.example.com/msp" peer0.org2.example.com peer channel join -b channelall.block
Node 3
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org3.example.com/msp" peer0.org3.example.com peer channel join -b channelall.block
Now all three peers in the three organizations have joined channelall.
Step 7: Chaincode Operations
We are using Simple Asset Chaincode (sacc), which is already inside fabric-samples.
In all nodes, we install the sacc chaincode.
$ docker exec -it cli peer chaincode install -n mycc -p github.com/chaincode/sacc -v v0
After chaincode installation, we instantiate the chaincode on one of the nodes. In the instantiation we specify an initial value (key: a, value: 100). For the sake of simplicity, we also specify an endorsement policy that requires only one of the three organizations.
Node 1
$ docker exec -it cli peer chaincode instantiate -o orderer.example.com:7050 -C channelall -n mycc -v v0 -c '{"Args": ["a", "100"]}' -P "OR('Org1MSP.member', 'Org2MSP.member', 'Org3MSP.member')"
Now we can query from other nodes, say Node 2 for the value of a.
Node 2
$ docker exec -it cli peer chaincode query -C channelall -n mycc -c '{"Args":["query","a"]}'
And we see 100. We get the same result if we query from Node 1 and Node 3.
We now invoke the chaincode and set the value of a to 200. Here we do it on Node 3.
Node 3
$ docker exec -it cli peer chaincode invoke -o orderer.example.com:7050 -C channelall -n mycc -c '{"Args":["set","a", "200"]}'
Again, we check the value on Node 2.
Node 2
$ docker exec -it cli peer chaincode query -C channelall -n mycc -c '{"Args":["query","a"]}'
We now see 200, and the same result if we query from Node 1 and Node 3.
So our fabric network across three nodes and three organizations works well on channelall.
Step 8: Setup Channel channel12 and Chaincode Instantiation
This is the same as Steps 6 and 7, but now we work on channel12. I omit the screenshots as they are almost the same as in Step 6.
Here we take Node 1 to create the genesis block for channel channel12.
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c channel12 -f /var/hyperledger/configs/channel12.tx
The file channel12.block is created.
We first let peer0.org1.example.com join channel12 using this channel12.block file.
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel join -b channel12.block
We have peer0.org1.example.com now in channel12. We will do the same on peer0.org2.example.com.
Here are the steps (again, update the public IP addresses of your own instances):
// node1
$ docker cp peer0.org1.example.com:channel12.block .

// localhost
$ scp -i ~/Downloads/aws.pem ubuntu@3.92.233.164:/home/ubuntu/fabric-samples/3node2channel/deployment/channel12.block .
$ scp -i ~/Downloads/aws.pem channel12.block ubuntu@18.233.151.196:/home/ubuntu/fabric-samples/3node2channel/deployment/

// node2
$ docker cp channel12.block peer0.org2.example.com:/channel12.block
Finally we can have peer0.org2.example.com join channel12.
Node 2
$ docker exec -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org2.example.com/msp" peer0.org2.example.com peer channel join -b channel12.block
Now Org1 and Org2 have joined channel12.
We use sacc as the demonstration chaincode; it is already installed on the peers. What we need is to instantiate the chaincode on channel12. For the sake of demonstration, we specify a different initial value (key: b, value: 1).
Node 1
$ docker exec -it cli peer chaincode instantiate -o orderer.example.com:7050 -C channel12 -n mycc -v v0 -c '{"Args": ["b", "1"]}' -P "OR('Org1MSP.member', 'Org2MSP.member')"
Query from Node 2, and we see the correct result.
$ docker exec -it cli peer chaincode query -C channel12 -n mycc -c '{"Args":["query","b"]}'
Query from Node 3, and we see error as Node 3 (Org 3) is not in channel12.
Finally we will check whether we can get the key/value in different channels.
In Node 1, we try to get the value of b on channelall.
$ docker exec -it cli peer chaincode query -C channelall -n mycc -c '{"Args":["query","b"]}'
As we can see, asset b is not defined in the ledger of channelall.
Each channel has its own ledger. Even if a peer joins multiple channels, the ledger states of these channels are not shared.
Step 9: Clean Up
When we complete our setup, we can clean up everything.
On each node, use docker-compose to tear down the containers. We also clean up the images created during chaincode operation.
// orderer
$ docker-compose -f docker-compose-orderer.yml down

// node1
$ docker-compose -f docker-compose-node1.yml down
$ docker rm $(docker ps -aq)
$ docker rmi $(docker images net-* -q)

// node2
$ docker-compose -f docker-compose-node2.yml down
$ docker rm $(docker ps -aq)
$ docker rmi $(docker images net-* -q)

// node3
$ docker-compose -f docker-compose-node3.yml down
$ docker rm $(docker ps -aq)
$ docker rmi $(docker images net-* -q)
You can decide whether to keep the 3node2channel tar file and directory, and whether to stop the AWS EC2 instances (for the next demo) or terminate them.
Summary
In this article we showed how to deploy a three-node setup for three organizations. Although it may seem a bit complicated, the logic is quite straightforward. We largely use what we learned from Build Your First Network (BYFN), and with careful preparation of the docker compose files, we demonstrate how multiple channels are created and observe the behaviour of the same chaincode across multiple channels.