Multi-Host Setup with RAFT-based Ordering Service

KC Tam
Dec 22, 2019

Overview

This is another multi-host setup for Hyperledger Fabric: a Raft-based ordering service deployed with Docker Swarm as the multi-host container environment, which differs from my previous articles that used static IPs. We first introduce the Raft-based ordering service and Docker Swarm. With that, we go through the steps of bringing up this fabric network on four hosts in an AWS environment.

About Orderer Clustering

The ordering service plays a significant role in Hyperledger Fabric. It receives endorsed transactions and, after validation, arranges them in order and places them into a block. This new block is broadcast to the peers on the channel, and the peers commit the block to their ledgers.

There are three ordering mechanisms available in Hyperledger Fabric so far. If you first try First Network, the default setup is Solo, in which a single orderer provides the ordering service. This obviously does not give us any fault tolerance. Then there is the Kafka-based ordering service, a crash fault tolerant setup. With it we first see more than one orderer serving the network (an orderer cluster). Those who have tried it can tell how complex it looks: the orderer cluster is backed by a Kafka cluster, which is composed of Zookeeper and Kafka nodes. The recommended minimum is 3 Zookeeper nodes and 4 Kafka nodes (link).

Thanks to Raft, things are much neater. Introduced in release 1.4.1, Raft runs natively in the orderer code, and we no longer need external components to power orderer clustering.

You can refer to the introduction of the various ordering services in the Hyperledger Fabric documentation.

First Network with Raft-based Ordering Service

First Network is a very good example for learning Hyperledger Fabric: it contains all the elements of a fabric network, and everything is well scripted in byfn.sh. When we run it without any parameters, it brings up a two-org network with four peers in total and one orderer implementing the Solo ordering service. The option -o is needed when we want another type of ordering service.

./byfn.sh up -o <kafka | etcdraft>

The latest First Network design has prepared crypto material for five orderers. If you are deploying the Solo or Kafka-based ordering service, only the first orderer is running (defined in docker-compose-cli.yaml).

One difference when we specify raft is the generation of the genesis block using the configtxgen tool. The profile we specify is called SampleMultiNodeEtcdRaft, found inside configtx.yaml. The other operations remain the same. You can see we use this profile in Step 3 below.
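For reference, the SampleMultiNodeEtcdRaft profile lists the five orderers as Raft consenters, each identified by host, port and TLS certificates. Here is a trimmed sketch of what this section looks like in first-network's configtx.yaml (exact anchors and paths may differ slightly between releases):

```yaml
SampleMultiNodeEtcdRaft:
    <<: *ChannelDefaults
    Orderer:
        <<: *OrdererDefaults
        OrdererType: etcdraft
        EtcdRaft:
            Consenters:
            - Host: orderer.example.com
              Port: 7050
              ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
              ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
            - Host: orderer2.example.com
              Port: 7050
              ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
              ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/server.crt
            # ... orderer3, orderer4 and orderer5 follow the same pattern
```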

Finally we need to bring up the other four orderers, which are defined in docker-compose-etcdraft2.yaml.

Here is the deployed setup of the Raft-based First Network, and we can see the five orderers up and running.

Five orderers are running in Raft-based First Network

And here are the components of this deployment.

First Network with Raft-based Ordering Service

As in my previous articles, I am building this First Network with a Raft-based ordering service across multiple hosts, which is a more realistic deployment. Instead of using the byfn.sh script, we will start everything from scratch and observe how this First Network is built.

Various Ways for Multi-host Deployment

As Hyperledger Fabric components are deployed as containers, everything works fine when they all run on a single host. When they run on different hosts, we need a way to make these containers talk to one another.

While there is no official recommendation from Hyperledger Fabric, we have so far seen three ways.

Static IP: By specifying the host IP where each container runs, containers can communicate with each other. The host IPs are specified using extra_hosts in the docker-compose files, and once a container is running, these entries appear in /etc/hosts. It is straightforward, and we do not rely on any external components. The downside is that everything is statically configured, which makes adding or changing configuration challenging. In my previous two articles (link, link) I used this approach.

Docker Swarm: Docker Swarm is the container orchestration tool native to the Docker environment. In a nutshell, it provides an overlay network for containers across multiple hosts. Containers on this overlay network can communicate with one another as if they were on one large host. The upside, obviously, is that the original configuration can be reused with minimal modification, and no static information such as IP addresses is coded into the configuration. In this article we use Docker Swarm. The trade-off is that we now rely on an external component (Docker Swarm) for our fabric network, which may complicate setup and operation.

Kubernetes (k8s): K8s is by far the most popular container orchestration tool, and the mechanism is similar to Docker Swarm's. So far I have seen several articles attempting this, but the implementation seems more challenging than the previous two mechanisms.

I also tested static IPs for Raft-based clustering, as it is quite straightforward, and it works well. In this article I use Docker Swarm; you can make a comparison with my previous two multi-host setups using static IPs.

Demo Setup

We distribute the containers across four hosts. Here we use four EC2 instances running in AWS. As in other demonstrations, we do not use any AWS-specific features, just plain EC2 instances with Ubuntu and the required software. Communication is through public IPs. Feel free to use other, or even a mix of, cloud providers.

Raft-based First Network deployed in 4-host environment.

After the fabric network is up and running, we will use Fabcar as chaincode to test the setup. Refer to this article for more information about Fabcar.

The overall process is as follows:

  1. Bring up four AWS EC2 instances with the proper fabric prerequisites, tools and images.
  2. Form an overlay network and make all the four hosts join.
  3. Prepare everything on Host 1, including the crypto material, channel configuration transactions, docker-compose files for each node. Then copy the whole structure to all other hosts.
  4. Bring up all components with docker-compose files.
  5. Create channel and join all the peers to mychannel.
  6. Install and instantiate Fabcar chaincode.
  7. Invoke and query chaincode functions.

Demo

Step 1: Bring Up Hosts

Again I am using AWS EC2 t2.small instances, preloaded with the required components according to the fabric documentation. The details are omitted here: you can refer to this article for how to install the required components for a fabric node, and how to create an image for easy instantiation on AWS. Release 1.4.4 is used in this demo.

Note that for demo purposes I have a security group open for everything (all UDP, TCP and ICMP). For production, make sure you open only the ports needed.
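If you prefer to narrow the security group down right away, the following is the set of ports this demo needs between the hosts. This is an assumption based on the Docker Swarm and Fabric defaults used here; verify against your own configuration:

```
# Ports used between hosts in this demo (assumed defaults; verify first):
#   TCP 2377        swarm cluster management (docker swarm join)
#   TCP/UDP 7946    swarm node-to-node communication
#   UDP 4789        overlay network (VXLAN) data traffic
#   TCP 7050        orderers
#   TCP 7051        peers
#   TCP 22          ssh access from your workstation
```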

The four hosts. Note that as good practice I have labeled each with its public IP address.

Step 2: Form an Overlay Network with Docker Swarm

Now we can open four terminals, one for each host.

ssh -i <key> ubuntu@<public IP>

From Host 1,

docker swarm init --advertise-addr <host-1 ip address>
docker swarm join-token manager
On Host 1

Use the last output to add the other hosts as managers to this swarm.

From Host 2, 3 and 4,

<output from join-token manager> --advertise-addr <host n ip>

Finally we create an overlay network, first-network, which will be the network for our demo. This is done on one node only. If Docker Swarm works correctly, all nodes will see this overlay network.

From Host 1

docker network create --attachable --driver overlay first-network
docker network ls

On all the other hosts, we see first-network (with the same network ID).

Now the overlay network is formed. This information will be used in the docker-compose files later.
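Before moving on, it is worth a quick sanity check that the overlay really spans the hosts. One way (a sketch, assuming the network name created above) is to attach a throwaway container on two different hosts and ping one from the other by container name:

```
# On Host 1: start a test container named c1 on the overlay network
docker run -d --rm --name c1 --network first-network alpine sleep 300

# On Host 2: ping c1 by name through the overlay
docker run --rm --network first-network alpine ping -c 3 c1
```

If the ping succeeds, cross-host name resolution and routing on the overlay are working, which is exactly what the fabric containers will rely on.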

Step 3: Prepare Fabric Network Material in Host 1 and Copy to Others

One of the critical parts is to make sure all components share the same crypto material. We will use Host 1 to create the material and copy it to the other hosts.

In theory, we only need to ensure identities (certificates and signing keys) follow the required scheme: certificates for an organization (e.g. org1) are issued and signed by the same CA (ca.org1). This is ensured by the use of cryptogen. For the sake of simplicity, in this demo we create all the material on Host 1, and the whole directory is then copied to the other hosts.

We first go to fabric-samples directory, and create a raft-4node-swarm directory.

From Host 1

cd fabric-samples
mkdir raft-4node-swarm
cd raft-4node-swarm

We copy the crypto-config.yaml and configtx.yaml files directly from first-network.

cp ../first-network/crypto-config.yaml .
cp ../first-network/configtx.yaml .

Then we generate the required material.

../bin/cryptogen generate --config=./crypto-config.yaml
export FABRIC_CFG_PATH=$PWD
mkdir channel-artifacts
../bin/configtxgen -profile SampleMultiNodeEtcdRaft -outputBlock ./channel-artifacts/genesis.block
../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP

Now we prepare the docker-compose files for all hosts, largely based on what we have in First Network, with proper modifications. We are creating seven files here.

  • base/peer-base.yaml
  • base/docker-compose-base.yaml
  • host1.yaml
  • host2.yaml
  • host3.yaml
  • host4.yaml
  • .env

base/peer-base.yaml

base/docker-compose-base.yaml

host1.yaml

host2.yaml

host3.yaml

host4.yaml

.env

Here is what we will have in the directory.

Directory structure of the demo.

Here are the changes made to these files.

  • In base/peer-base.yaml, CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE is changed to the overlay network (first-network) we created previously.
  • In base/docker-compose-base.yaml, since the peers are now on different hosts, we change all port mappings back to 7051:7051. Updates are also made to the environment variables for each peer.
  • In all hostn.yaml files, the byfn network is declared as an external network mapped to the overlay network (first-network).
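To make the third change concrete, here is a sketch of the stanza involved in each hostn.yaml (together with the related line in base/peer-base.yaml), assuming the overlay network created in Step 2 is named first-network:

```yaml
# In each hostn.yaml: byfn is no longer created locally, but mapped
# to the pre-existing Swarm overlay network.
networks:
  byfn:
    external:
      name: first-network

# In base/peer-base.yaml (environment section of the peer-base service):
#   CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=first-network
```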

We now have everything on Host 1. Copy this directory to all the other hosts. As files cannot be copied directly between the EC2 instances in my setup, I use my localhost as a "bridge".

# on Host 1
cd ..
tar cf raft-4node-swarm.tar raft-4node-swarm/
# on my localhost
scp -i <key> ubuntu@<Host 1 IP>:/home/ubuntu/fabric-samples/raft-4node-swarm.tar .
scp -i <key> raft-4node-swarm.tar ubuntu@<Host 2, 3 and 4 IP>:/home/ubuntu/fabric-samples/
# on Host 2, 3 and 4
cd fabric-samples
tar xf raft-4node-swarm.tar
cd raft-4node-swarm

All nodes now have the same crypto material and the required docker-compose files. We are ready to bring up all the containers.

Step 4: Bring Up Containers in Each Host

We use docker-compose to bring up the containers on each host.

# on Host 1, 2, 3 and 4, bring up corresponding yaml file
docker-compose -f hostn.yaml up -d

Step 5: Create Channel and All Peer Nodes Join It

As we only have a CLI container on Host 1, all commands are issued from the Host 1 terminal.

Create the channel mychannel (this also produces mychannel.block, used when joining peers).

docker exec cli peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem

Join peer0.org1 to mychannel

docker exec cli peer channel join -b mychannel.block

Join peer1.org1 to mychannel

docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer channel join -b mychannel.block

Join peer0.org2 to mychannel

docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer channel join -b mychannel.block

Join peer1.org2 to mychannel

docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer channel join -b mychannel.block
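The three docker exec commands above differ only in the environment variables that select the target peer. If you find them unwieldy, they can be generated with a small loop. This is a convenience sketch under the same container names and crypto paths as above (it also sets CORE_PEER_MSPCONFIGPATH for peer1.org1, which matches the CLI default); it prints the commands so you can review them before piping the output to bash:

```shell
# Build (but do not run) the join commands for the three non-default peers.
# peer0.org1 is the CLI default and was joined separately above.
CRYPTO=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations
cmds=""
for peer in peer1.org1 peer0.org2 peer1.org2; do
  org=${peer#*.}            # org1 or org2
  msp="Org${org#org}MSP"    # Org1MSP or Org2MSP
  cmds+="docker exec -e CORE_PEER_ADDRESS=${peer}.example.com:7051 -e CORE_PEER_LOCALMSPID=${msp} -e CORE_PEER_MSPCONFIGPATH=${CRYPTO}/${org}.example.com/users/Admin@${org}.example.com/msp -e CORE_PEER_TLS_ROOTCERT_FILE=${CRYPTO}/${org}.example.com/peers/${peer}.example.com/tls/ca.crt cli peer channel join -b mychannel.block
"
done
echo "$cmds"   # review first; then run with: echo "$cmds" | bash
```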

Step 6: Install and Instantiate Fabcar Chaincode

From Host 1 terminal,

Install Fabcar chaincode to all peer nodes

# to peer0.org1
docker exec cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/
# to peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/
# to peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/
# to peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/fabcar/go/

Instantiate Fabcar chaincode to mychannel.

docker exec cli peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc -v 1.0 -c '{"Args":[]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')"

Step 7: Chaincode Invoke and Query

For demonstration, and per the Fabcar design, we first invoke the initLedger function to preload 10 car records into the ledger.

On peer0.org1,

docker exec cli peer chaincode invoke -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["initLedger"]}'

After that, we can query a car record from the four peer nodes. This shows that the fabric network is working well.

# from peer0.org1
docker exec cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# from peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# from peer0.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer0.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# from peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'
Query result from all four peer nodes.

Now we invoke the changeCarOwner function. This time we use orderer3.example.com (or any other orderer you like). After the chaincode is invoked, we query again to check.

docker exec cli peer chaincode invoke -o orderer3.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n mycc --peerAddresses peer0.org1.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt --peerAddresses peer0.org2.example.com:7051 --tlsRootCertFiles /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt -c '{"Args":["changeCarOwner","CAR0","KC"]}'
# from peer0.org1
docker exec cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'
# from peer1.org2
docker exec -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_LOCALMSPID="Org2MSP" -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer chaincode query -n mycc -C mychannel -c '{"Args":["queryCar","CAR0"]}'
Any orderer can be used for processing transactions.

You can try other orderers in the chaincode invoke. As they form a cluster, you will get back the same result, which means the orderer cluster is running properly.

Step 8: Clean Up

To clean up each host, we use docker-compose to bring the containers down and remove everything.

# each host
docker-compose -f hostn.yaml down -v

Summary

In this demonstration we built a Raft-based orderer cluster with First Network, with the containers running on four separate hosts. Docker Swarm binds the four hosts together such that containers running on different hosts can communicate. We no longer specify static IPs for fabric network components, and all containers talk to one another as if they were on the same host.

Docker Swarm comes with many nice tools for container orchestration. We are not using concepts like docker stack and service; rather, we use the existing Docker Compose files with minimal modification in order to run on Docker Swarm. If you wish to use docker stack and service, you will need to rework the Compose files or create another set of configuration files.
