Decentralized Ordering Service with Peer-Org-Owned Orderers

KC Tam
Jan 29, 2020

Overview

The Raft-based ordering service brings important value to Hyperledger Fabric. Before it, the Kafka-based implementation required an orderer organization to run a complicated setup, which left this part of the fabric network somewhat centralized. The Raft-based implementation reduces that complexity: no external components such as a Kafka cluster are needed. More importantly, it enables a fabric network in which peer organizations can run their own orderers, achieving a higher level of decentralization.

In most of today’s examples the orderer cluster is still deployed in one organization. Here I demonstrate an implementation in which the ordering service is provided by ordering service nodes owned by the peer organizations.

Quick Review of Ordering Service

The ordering service plays an important role in a fabric network: it maintains the network configuration, and it keeps generating new blocks of ordered transactions, which become permanent records once committed by the peer nodes.

There are three types of ordering service implementations available in release 1.4 (link).

Solo supports only one ordering service node (a.k.a. orderer). It provides no fault tolerance and is therefore only good for chaincode development and testing.

Kafka is the first cluster implementation of the ordering service, introduced in release 1.0. It supports more than one orderer and therefore provides crash fault tolerance. The magic behind it is the message handling system provided by a Kafka cluster. A production-grade Kafka setup requires a certain number of Kafka nodes and Zookeeper nodes, which makes deployment quite cumbersome. More importantly, this Kafka cluster needs to be under the control of a single organization, which makes the fabric network less decentralized than what a blockchain system promises. External components also bring version-compatibility problems, as Hyperledger Fabric and Kafka are two separate projects.

Raft was introduced in release 1.4.1, providing a better choice for a clustered ordering service. Instead of relying on an external Kafka cluster, Raft is implemented natively in the orderer code. This reduces the compatibility and manageability problems. It is considered the “go-to ordering service choice for production networks”.

Here is a quick comparison among the three implementations.

Three Ordering Service Implementations in Hyperledger Fabric Release 1.4

One eye-catching advantage of the Raft-based ordering service is that “each organization can have its own ordering nodes, participating in the ordering service”. This provides a level of decentralization not available before. Here is the difference between the two deployment models.

Raft-based implementation supports single Orderer Organization and Peer-Org owned orderer setup

And in this article, we are demonstrating this decentralized setup.

Setup Overview

There is a good design on the same topic: bring-your-own-orderer in aldredb’s repository (link). This design provides a complete picture, simulating how to join a new organization to an existing network. The steps are logical and implemented in well-written scripts. Besides, the crypto material (secret keys and certificates) of all components is generated directly using a Certificate Authority (CA). From the scripts we learn how to generate crypto material for the admin, orderers and peer nodes through the CA.

I take a different approach. Based on the First Network architecture, my objective is to use what we are already familiar with. For example, I use cryptogen to generate the required crypto material. Certain modifications of some components and configuration files are required, and they are described in more detail during the demonstration.

To make things more interesting, I use two separate hosts for the two peer organizations. Inside each organization (host), two orderers and two peers are provisioned, plus a CLI for the chaincode demonstration. Docker Swarm is used for this multi-host fabric network.

This is the setup of my demonstration.

Demonstration Setup

Step 1: Prepare two Fabric Hosts in AWS

As usual, we bring up two fabric hosts, each of which has the prerequisites (link) and the fabric software installed, including the fabric binary tools, fabric docker images and fabric samples (link). You can refer to my previous article on how to bring these up in AWS, and how to build an image (AWS Machine Image, AMI) for easier EC2 instantiation.

Bring up two fabric nodes

Now open up two terminals for the two hosts.

Step 2: Prepare a Docker Swarm environment

There are several ways to provide a container network across hosts; in this article I am using Docker Swarm. You can simply treat hosts connected through Docker Swarm as a single host. Refer to these articles (Docker Swarm, Static Extra Host) for more detail about multi-host deployment of a fabric network.

On Host 1,

docker swarm init --advertise-addr <host-1 ip address>
docker swarm join-token manager

Run the resulting join command on Host 2, appending its advertise address,

<output from join-token manager> --advertise-addr <host-2 ip address>

On Host 1, we create an overlay network first-network, which will be used in our setup.

docker network create --attachable --driver overlay first-network

When we check on both hosts, we see first-network (with the same network ID) created. Now we have the network for our fabric containers.
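We can verify this on either host; the network should appear with the same ID on both:

docker network ls --filter name=first-network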

Overlay first-network appears in both hosts in a Docker Swarm environment

Step 3: Generate crypto material

In this and the next two steps, we create the crypto material, channel artifacts and docker-compose files on Host 1. After that we copy the whole directory to Host 2, ensuring both hosts have the same crypto material. Note: in real life we should not share crypto material between organizations; what I am showing here is for demonstration only.

We first create a directory for this setup.

cd fabric-samples
mkdir raft-2node
cd raft-2node

And here is the configuration file crypto-config.yaml.
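Since the full file is embedded in the original post, here is a minimal sketch of its shape; the use of Specs entries (rather than the Template/Count pair of the First Network) and the exact field values are assumptions based on the points below.

# crypto-config.yaml (sketch): no OrdererOrgs section; orderers are
# declared as Specs inside each peer organization.
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: peer0
      - Hostname: peer1
    Users:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: peer0
      - Hostname: peer1
    Users:
      Count: 1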

If you compare this crypto-config.yaml with the First Network one, you will see some differences:

  • There is no orderer organization: only peer organizations are defined.
  • Instead of using a template to generate peerx, I specify the actual host names. For each organization there are two orderers (orderer0 and orderer1) and two peers (peer0 and peer1).
  • After we run cryptogen, all four nodes appear inside crypto-config/peerOrganizations/orgn.example.com/peers/. We will point to the right crypto material later in the docker-compose files.
  • Again, all nodes (orderers and peers) are under the same certificate authority, which is how cryptogen works. You can browse the ca folder inside each node and you will find they are identical.

We now generate the crypto material on Host 1.

../bin/cryptogen generate --config=./crypto-config.yaml

And we can see the crypto material for those nodes here.
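For example, listing the peers directory of org1 shows all four nodes together (assuming the layout described above):

ls crypto-config/peerOrganizations/org1.example.com/peers/
# orderer0.org1.example.com   orderer1.org1.example.com
# peer0.org1.example.com      peer1.org1.example.com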

Now the crypto material is in place. We move on to the next step: channel artifacts.

Step 4: Generate channel artifacts

Channel artifacts include three parts: the genesis block, the channel configuration transaction, and the anchor peer update transactions for the two organizations. The configuration is shown in configtx.yaml.

It is a sample configtx.yaml based on the First Network. We modify the Addresses inside the Orderer section to reflect the four orderers, and the Consenters inside EtcdRaft to reflect the location of the crypto material of these orderers.
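To make this concrete, here is a hedged sketch of the Orderer section; the key names follow the Fabric 1.4 etcdraft sample, and the TLS certificate paths simply follow the cryptogen output layout from step 3 (the full file is embedded in the original post).

# configtx.yaml excerpt (sketch)
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  Addresses:
    - orderer0.org1.example.com:7050
    - orderer1.org1.example.com:7050
    - orderer0.org2.example.com:7050
    - orderer1.org2.example.com:7050
  EtcdRaft:
    Consenters:
      # each consenter's TLS certs live under peerOrganizations, not ordererOrganizations
      - Host: orderer0.org1.example.com
        Port: 7050
        ClientTLSCert: crypto-config/peerOrganizations/org1.example.com/peers/orderer0.org1.example.com/tls/server.crt
        ServerTLSCert: crypto-config/peerOrganizations/org1.example.com/peers/orderer0.org1.example.com/tls/server.crt
      - Host: orderer1.org1.example.com
        Port: 7050
        ClientTLSCert: crypto-config/peerOrganizations/org1.example.com/peers/orderer1.org1.example.com/tls/server.crt
        ServerTLSCert: crypto-config/peerOrganizations/org1.example.com/peers/orderer1.org1.example.com/tls/server.crt
      - Host: orderer0.org2.example.com
        Port: 7050
        ClientTLSCert: crypto-config/peerOrganizations/org2.example.com/peers/orderer0.org2.example.com/tls/server.crt
        ServerTLSCert: crypto-config/peerOrganizations/org2.example.com/peers/orderer0.org2.example.com/tls/server.crt
      - Host: orderer1.org2.example.com
        Port: 7050
        ClientTLSCert: crypto-config/peerOrganizations/org2.example.com/peers/orderer1.org2.example.com/tls/server.crt
        ServerTLSCert: crypto-config/peerOrganizations/org2.example.com/peers/orderer1.org2.example.com/tls/server.crt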

We are using configtxgen to create the channel artifacts. Note that you need to create the channel-artifacts directory before running configtxgen.

export FABRIC_CFG_PATH=$PWD
mkdir channel-artifacts
../bin/configtxgen -profile OrdererGenesis -outputBlock ./channel-artifacts/genesis.block -channelID byfn-sys-channel
../bin/configtxgen -profile Channel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
../bin/configtxgen -profile Channel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
../bin/configtxgen -profile Channel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP

After that we will have these four files.
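A quick listing confirms the four artifacts (names follow the commands above):

ls channel-artifacts/
# channel.tx  genesis.block  Org1MSPanchors.tx  Org2MSPanchors.tx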

Finally we will work on the Docker-Compose files.

Step 5: Prepare Docker-Compose files for the two hosts

Per our design, we create two docker-compose files, one for each host, based on the First Network. The original First Network uses three files: docker-compose-cli.yaml, base/peer-base.yaml and base/docker-compose-base.yaml; I have combined them into one. Besides, certain points are updated according to our design, and they are highlighted below.

Here is docker-compose-host1.yaml.

Here is docker-compose-host2.yaml.

Some highlights

  • Only the containers related to org1 are included on host 1, and those related to org2 on host 2.
  • The crypto material for ordererx.orgy.example.com is correctly mapped in the volumes; see the volumes mapping of each orderer in each docker-compose file.
  • The environment variable ORDERER_GENERAL_LOCALMSPID is now Org1MSP and Org2MSP, respectively. In the original file it is OrdererMSP, which is not used in our setup.
  • The CLI exists on both hosts, but the environment variables are set such that the CLI on Host 1 defaults to peer0.org1.example.com, and the CLI on Host 2 defaults to peer0.org2.example.com.
  • In all peer nodes, we change the environment variable CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE to first-network, the overlay network we defined in the Docker Swarm cluster (see step 2). These points are illustrated in the excerpt below.
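To make the highlights concrete, here is a hedged excerpt of one orderer service as it might appear in docker-compose-host1.yaml; the environment variables follow the Fabric 1.4 orderer defaults, and anything not stated above (image tag, port mapping) is an assumption.

# docker-compose-host1.yaml excerpt (sketch)
version: '2'

networks:
  byfn:
    external:
      name: first-network   # the overlay network from step 2

services:
  orderer0.org1.example.com:
    container_name: orderer0.org1.example.com
    image: hyperledger/fabric-orderer:$IMAGE_TAG
    environment:
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      # Org1MSP instead of the OrdererMSP used by the First Network
      - ORDERER_GENERAL_LOCALMSPID=Org1MSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      # crypto material mapped from peerOrganizations, not ordererOrganizations
      - ./crypto-config/peerOrganizations/org1.example.com/peers/orderer0.org1.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/peerOrganizations/org1.example.com/peers/orderer0.org1.example.com/tls:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050   # orderer1 would map a different host port
    networks:
      - byfn

  # peer services additionally set, among the usual First Network variables:
  #   CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=first-network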

Now we can create these two files in our directory.

Finally, we copy the .env file directly from the First Network. Inside it, we only need the IMAGE_TAG variable.

cp ../first-network/.env .
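For reference, in the release-1.4 fabric-samples the first-network/.env contains just these variables (check your copy, as values may differ):

COMPOSE_PROJECT_NAME=net
IMAGE_TAG=latest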

Here is the complete content of the directory.

Step 6: Copy the whole directory to Host 2

Now we are ready to copy the whole directory to Host 2. The purpose is to guarantee that the crypto material is generated only once with cryptogen: certain information, such as the identities (digital certificates) of the components, is already written into the channel artifacts.

We use scp from my localhost, which serves as a bridge between the two instances.

On Host 1,

cd fabric-samples
tar cf raft-2node.tar raft-2node/

On my localhost,

scp -i <key> ubuntu@<host-1 ip>:/home/ubuntu/fabric-samples/raft-2node.tar .
scp -i <key> raft-2node.tar ubuntu@<host 2 ip>:/home/ubuntu/fabric-samples/

On Host 2,

cd fabric-samples
tar xf raft-2node.tar
cd raft-2node

Now we have the same set of files in this directory.

Step 7: Bring up Containers

With these files in both hosts, we can bring up the containers for each host (organization).

On Host 1

docker-compose -f docker-compose-host1.yaml up -d

On Host 2

docker-compose -f docker-compose-host2.yaml up -d

We can use docker ps to observe the running containers (a --format string gives a cleaner illustration).
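One possible format string (the columns shown are just a choice for readability):

docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"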

The running containers in both hosts.

Step 8: Bring up Channels

With all containers running in the two hosts, we can bring up the channel.

Bringing up the channel involves the following steps.

First, generate the channel genesis block from Host 1. Note that this command reaches orderer0.org1.example.com, and therefore the appropriate TLS CA certificate is specified.

docker exec cli peer channel create -o orderer0.org1.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/orderer0.org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem

After some error messages (typically seen while the Raft cluster is still electing its leader), we will get the Received block: 0 message.

This is the channel genesis block file for mychannel (called mychannel.block). The file is now in the CLI at Host 1.

Then join both peer0.org1 and peer1.org1 to mychannel using this block file.

# for peer0.org1
docker exec cli peer channel join -b mychannel.block
# for peer1.org1
docker exec -e CORE_PEER_ADDRESS=peer1.org1.example.com:7051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt cli peer channel join -b mychannel.block

Now we need to copy the block file mychannel.block from the CLI at Host 1 to the CLI at Host 2. We again use the localhost to perform this copy.

First we copy the file from CLI container to Host 1. On Host 1,

docker cp cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block .

The file is now on Host 1 (/home/ubuntu/fabric-samples/raft-2node/). On my localhost,

scp -i <key> ubuntu@<host-1 ip>:/home/ubuntu/fabric-samples/raft-2node/mychannel.block .
scp -i <key> mychannel.block ubuntu@<host 2 ip>:/home/ubuntu/fabric-samples/raft-2node/

The file is now on Host 2 (/home/ubuntu/fabric-samples/raft-2node/). On Host 2, copy it back to the CLI container.

docker cp mychannel.block cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/

We have the block file in CLI at Host 2.

Now we can join peer0.org2 and peer1.org2 to the channel with this block file. On Host 2,

# for peer0.org2
docker exec cli peer channel join -b mychannel.block
# for peer1.org2
docker exec -e CORE_PEER_ADDRESS=peer1.org2.example.com:7051 -e CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt cli peer channel join -b mychannel.block

Now all four peer nodes (from both organizations) have joined the channel.

Step 9: Use Fabcar to Test the Ordering Service

To observe how the orderer cluster works, we use the Fabcar chaincode as an example. You can refer to my previous article about Fabcar for more detail.

First, we install the chaincode on peer0.org1 and peer0.org2. (We are not using peer1 in either organization for this demo, but it does no harm to install on all peers.)

On Host 1,

docker exec cli peer chaincode install -n mycc -p github.com/chaincode/fabcar/go -v 0

On Host 2,

docker exec cli peer chaincode install -n mycc -p github.com/chaincode/fabcar/go -v 0

Then we instantiate the chaincode with a simple endorsement policy (a member of either organization can endorse). Note that I am using orderer0.org1.example.com in this case.

On Host 1,

docker exec cli peer chaincode instantiate -o orderer0.org1.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/orderer0.org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem -C mychannel -n mycc -v 0 -c '{"Args": []}' -P "OR('Org1MSP.member','Org2MSP.member')"

Per the Fabcar design, we invoke the initLedger() function to preload 10 car records. Here I am using another orderer, orderer1.org1.example.com, to process this transaction.

On Host 1,

docker exec cli peer chaincode invoke -o orderer1.org1.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/orderer1.org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem -C mychannel -n mycc -c '{"Args":["initLedger"]}'

The invoke is successful. We now check whether the car records are preloaded. We query CAR0 on Host 1.

docker exec cli peer chaincode query -C mychannel -n mycc -c '{"Args":["queryCar","CAR0"]}'
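If the preload succeeded, the query returns the stock Fabcar record for CAR0:

# expected output, per the sample data in the fabcar chaincode
{"make":"Toyota","model":"Prius","colour":"blue","owner":"Tomoko"}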

To see whether the fabric network is running successfully, we can perform the same query on Host 2 (peer0.org2); we should get the same result.

On Host 2,

docker exec cli peer chaincode query -C mychannel -n mycc -c '{"Args":["queryCar","CAR0"]}'

It will take a while, as a chaincode container is being instantiated. We will get the same result as on Host 1.

Next, we invoke another function, changeCarOwner(), on Host 2, this time using yet another orderer: orderer0.org2.example.com.

On Host 2,

docker exec cli peer chaincode invoke -o orderer0.org2.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/orderer0.org2.example.com/msp/tlscacerts/tlsca.org2.example.com-cert.pem -C mychannel -n mycc -c '{"Args":["changeCarOwner","CAR0","KC"]}'

Finally, we check both peers and see the ledger is updated accordingly.
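Re-running the earlier query on either host should now reflect the new owner:

docker exec cli peer chaincode query -C mychannel -n mycc -c '{"Args":["queryCar","CAR0"]}'
# {"make":"Toyota","model":"Prius","colour":"blue","owner":"KC"}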

Now we see that any orderer from the two organizations can provide the ordering service to the fabric network. The fabric network is operating correctly.

Step 10: Clean up

To close, we can tear everything down.

On Host 1

docker-compose -f docker-compose-host1.yaml down -v
docker rm $(docker ps -aq)
docker rmi $(docker images dev-* -q)

On Host 2

docker-compose -f docker-compose-host2.yaml down -v
docker rm $(docker ps -aq)
docker rmi $(docker images dev-* -q)

Summary

In this demonstration we have shown a Raft-based ordering service setup with orderer nodes owned by the peer organizations. This is a desirable setup, as it provides a higher level of decentralization than the Kafka-based implementation (in which all orderers are under one organization). If you wish to try a more complicated process, such as “adding a new organization with orderers and peers”, you can refer to the aldredb (link) setup.
