Awesome Rust

Many Rs: Rust Libraries for The MANY Protocol


Rust libraries for the MANY protocol.


  • MANY module interfaces
  • MANY common types
  • MANY message and transport layers
  • MANY client and server
  • Hardware Security Module
  • CLI developer's tools


  • Concise Binary Object Representation (CBOR): RFC 8949
  • CBOR Object Signing and Encryption (COSE): RFC 8152
  • Platform-independent API to cryptographic tokens: PKCS #11
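To make the first specification concrete, here is a minimal, illustrative sketch (not code from many-rs) of how CBOR (RFC 8949) encodes an unsigned integer: values below 24 fit in the initial byte of major type 0, and larger values use a 1-, 2-, 4-, or 8-byte big-endian follow-up.

```rust
// Illustrative only: CBOR (RFC 8949) unsigned-integer encoding, major type 0.
fn cbor_encode_u64(value: u64) -> Vec<u8> {
    match value {
        0..=23 => vec![value as u8],              // packed into the initial byte
        24..=0xff => vec![0x18, value as u8],     // 1-byte argument
        0x100..=0xffff => {
            let mut out = vec![0x19];             // 2-byte big-endian argument
            out.extend_from_slice(&(value as u16).to_be_bytes());
            out
        }
        0x1_0000..=0xffff_ffff => {
            let mut out = vec![0x1a];             // 4-byte big-endian argument
            out.extend_from_slice(&(value as u32).to_be_bytes());
            out
        }
        _ => {
            let mut out = vec![0x1b];             // 8-byte big-endian argument
            out.extend_from_slice(&value.to_be_bytes());
            out
        }
    }
}

fn main() {
    assert_eq!(cbor_encode_u64(10), vec![0x0a]);
    assert_eq!(cbor_encode_u64(500), vec![0x19, 0x01, 0xf4]);
    println!("ok");
}
```

The `h'…'` byte strings in the server status output later in this article are this same wire format, rendered in CBOR diagnostic notation.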

Developer tools


Here's a list of crates published by this repository and their purposes. You can visit their crates.io entries for more information.

The dependency graph between the crates in this repository looks like this:

graph TD;
  many-identity --> many-client;
  many-identity-dsa --> many-client;
  many-modules --> many-client;
  many-protocol --> many-client;
  many-server --> many-client;
  many-error --> many-identity-dsa;
  many-identity --> many-identity-dsa;
  many-error --> many-identity-hsm;
  many-identity --> many-identity-hsm;
  many-identity-dsa --> many-identity-hsm;
  many-protocol --> many-identity-hsm;
  many-error --> many-identity-webauthn;
  many-identity --> many-identity-webauthn;
  many-identity-dsa --> many-identity-webauthn;
  many-protocol --> many-identity-webauthn;
  many-error --> many-identity;
  many-client --> many-mock;
  many-error --> many-mock;
  many-identity --> many-mock;
  many-identity-dsa --> many-mock;
  many-identity-webauthn --> many-mock;
  many-modules --> many-mock;
  many-protocol --> many-mock;
  many-server --> many-mock;
  many-error --> many-modules;
  many-identity --> many-modules;
  many-identity-dsa --> many-modules;
  many-macros --> many-modules;
  many-protocol --> many-modules;
  many-types --> many-modules;
  many-error --> many-protocol;
  many-identity --> many-protocol;
  many-types --> many-protocol;
  many-error --> many-server;
  many-identity --> many-server;
  many-identity-dsa --> many-server;
  many-macros --> many-server;
  many-modules --> many-server;
  many-protocol --> many-server;
  many-types --> many-server;
  many-error --> many-types;
  many-identity --> many-types;
  many-client --> many;
  many-error --> many;
  many-identity --> many;
  many-identity-dsa --> many;
  many-identity-hsm --> many;
  many-mock --> many;
  many-modules --> many;
  many-protocol --> many;
  many-server --> many;
  many-types --> many;
  • many(crates, docs) – Contains the CLI tool to contact and diagnose MANY servers.
  • many-client(crates, docs) – Types and methods to talk to the MANY network.
  • many-error(crates, docs) – Error and Reason types, as defined by the specification.
  • many-identity(crates, docs) – Types for managing an identity, its address and traits related to signing/verification of messages.
  • many-identity-dsa(crates, docs) – Digital Signature identity, verifiers and utility functions. This crate has features for all supported algorithms (e.g. ed25519).
  • many-identity-hsm(crates, docs) – Hardware Security Module based identity, verifiers and utility functions.
  • many-identity-webauthn(crates, docs) – Verifiers for WebAuthn signed envelopes. This uses our custom WebAuthn format, which is not fully compliant with the WebAuthn standard. See the Lifted WebAuthn Auth Paper.
  • many-macros(crates, docs) – Contains macros to help with server and module declaration and implementations.
  • many-mock(crates, docs) – Utility types for creating mocked MANY servers.
  • many-modules(crates, docs) – All modules declared in the specification.
  • many-protocol(crates, docs) – Types exclusively associated with the protocol. This does not include types that are related to attributes or modules.
  • many-server(crates, docs) – Types and methods to create a MANY network server and neighborhood.
  • many-types(crates, docs) – General types related to CBOR encoding, or to the specification.


  1. Update your package database
# Ubuntu
$ sudo apt update

# CentOS
$ sudo yum update

# Archlinux
$ sudo pacman -Syu

2. Install Rust using rustup

$ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
$ source $HOME/.cargo/env

3. Install build dependencies

# Ubuntu
$ sudo apt install build-essential pkg-config clang libssl-dev libsofthsm2

# CentOS
$ sudo yum install clang gcc softhsm git pkgconf

# Archlinux
$ sudo pacman -S clang gcc softhsm git pkgconf

# macOS
$ git # running git will prompt you to install the Xcode Command Line Tools; follow the instructions

4. Build many-rs

$ git clone https://github.com/liftedinit/many-rs.git
$ cd many-rs
$ cargo build

5. Run tests

$ cargo test

Usage example

Below are some examples of how to use the many CLI.

Retrieve the MANY ID of a key

# Generate a new Ed25519 key
$ openssl genpkey -algorithm Ed25519 -out id1.pem

# Print the MANY ID of the key
$ ./target/debug/many id id1.pem

Retrieve the status of a running MANY server

$ ./target/debug/many message --server 'status' '{}'
{
    0: 1,
    1: "AbciModule(many-ledger)",
    2: h'a5010103270481022006215820e5cd546d5292af5d9f0ffd54b57ff555c51b91a249b9cf544010a3c01cfa75a2',
    3: 10000_1(h'01378dd9916915fb276116ff4bc13c04a4e413f663e04b710199c46021'),
    4: [0, 1, 2, 4, 6, 8, 9, 1002_1],
    5: "0.1.0",
    7: 300_1,
}


  1. Read our Contributing Guidelines
  2. Fork the project
  3. Create a feature branch (git checkout -b feature/fooBar)
  4. Commit your changes (git commit -am 'Add some fooBar')
  5. Push to the branch (git push origin feature/fooBar)
  6. Create a new Pull Request

Download details:

Author: liftedinit
Source code: 
License: Apache-2.0 license

#rust #rustlang #web3 #blockchain #protocol 

Ben Taylor

Top Layer 2 Crypto Projects | Top 30 Layer 2 Blockchain protocols

In this article, you will learn what Layer 2 blockchain protocols are and look at the top 30 Layer 2 blockchain protocols (crypto projects) in the crypto world. Let's go!

1. What Is Layer 2 in Blockchain?

Since the release of Bitcoin, blockchains have gained more and more popularity. Beyond payments, blockchain has broader application areas, since it is highly decentralized and secure.

No single blockchain can deliver decentralization, security, and scalability all at once. Most sacrifice at least one of these elements.

Bitcoin and Ethereum platforms are good examples. These platforms are competing to offer a secure, scalable and decentralized system. This has led to the discovery of solutions such as layer-2 protocols.

In this article, we will cover layer-2 blockchain solutions and explain how the layer-2 protocols work and why they are used in the blockchain network.

The concept of layer-2 blockchain

Layer-2 blockchain network operates on top of another network forming a secondary protocol. The layer-2 blockchain is different from the layer-1 blockchain since it does not depend on the layer-1 protocols (base layer).

The purpose of layer-2 protocols is to assist in validating transactions thus minimizing the tasks handled by the base layer. These protocols allow the blockchain network to process transactions faster. This gives the network ability to accommodate and manage many users.

Blockchain transactions are generally slow compared to conventional payment methods. The performance usually depends on the blockchain structure. Generally, blockchain transactions go through various phases before they are approved and completed, including acceptance, mining, distribution, and validation.

The layer-1 blockchain handles transaction validation. However, this impacts processing speed, which affects the blockchain's scalability and user experience. Layer-2 blockchains support the layer-1 blockchain by relieving it of specific tasks, enabling the layer-1 blockchain to focus on functions such as security and control.

The layer-2 blockchain reports to the layer-1 blockchain because layer-1 confirms every transaction to ensure the security of a network.

Importance of layer-2 blockchains

Below are some benefits of layer-2 protocols in the blockchain ecosystem:

Improved security

Layer-1 blockchain solutions rewrite the base layer protocol for more scalability, either by adding blocks to the chain or by increasing the speed of validating new blocks.

However, it is never good practice to interfere with the architecture of a blockchain, as it can lead to serious security concerns for the chain network.

Layer-2 blockchain solutions avoid this security challenge: because a layer-2 solution is built to complement the base layer, no change is made to the underlying protocol.

Increased transaction speed

In layer-2 protocols, a transaction is handled off-chain. This minimizes the chain’s workload, increasing the blockchain speed and scalability.

As mentioned earlier layer-2 solutions also relieve the main chain from performing tasks. So, the main chain can focus on the blockchain’s security and decentralization.

Furthermore, the layer-2 protocols can be enhanced to handle computations faster. This leads to increased throughput.

Increased scalability

The layer-2 blockchain was designed to provide scalability to the blockchain’s applications. The off-chain protocols ensure a higher throughput of the blockchain network.

This ensures that the blockchain’s applications can be scaled with ease so the users can enjoy a great experience regardless of the blockchain’s network load.

Reduced transaction fees

In a blockchain network, miners are responsible for transaction validation, using the blockchain's cryptographic algorithms to perform validations.

This process demands huge computing power as more users join the blockchain network. As a result, the volume of transactions grows, congesting the whole system.

However, miners may not have the processing resources required, so they prioritize the transactions that pay the highest fees.

The layer-2 blockchain minimizes the processing resources needed to handle the validations. This reduces the transaction costs since the miners can process more transactions.

Most used layer-2 blockchain protocols

Below are some of the most used layer-2 blockchain protocols:

Nested blockchains

Nested blockchains consist of the main chain and secondary chains designed such that a chain can operate on top of the other. The purpose of the main chain is to assign tasks and take control of all the parameters. The secondary chains then perform the transactions.

To better understand the analogy, let's use a company as an example. A supervisor can assign a huge task to several team members instead of a single individual. Upon completing the task, the team members report back to the supervisor, who approves and marks the work as complete.

The nested blockchain ensures that a primary chain can assign several secondary chains with tasks. Upon completing the assigned task, the chains can report back for approvals.

State channels

In state channels, parties transact directly with each other, off the main blockchain network. Blockchain users can perform transactions without involving the primary chain, and miners spend minimal time on them, leading to fast processing rates.

State channels do not need the transactions to be validated by the layer-1 blockchain. This is because validations of the resource are done through the smart-contract mechanism. Upon successful completion of the transaction, the resulting state is stored on the primary layer.

State channels protect the transaction details that are exchanged between different parties. However, the final transaction details are stored in the ledger to be accessed publicly to maintain records.
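The state-channel flow described above can be sketched in a few lines of Rust. This is a hypothetical illustration only (signatures and dispute resolution are omitted), not a real channel implementation:

```rust
// Hypothetical sketch of a two-party payment channel: state is updated
// off-chain, and only the final state would be settled on layer 1.
#[derive(Debug, Clone, PartialEq)]
struct ChannelState {
    nonce: u64,     // strictly increasing; the highest nonce wins at settlement
    balance_a: u64,
    balance_b: u64,
}

impl ChannelState {
    /// Apply an off-chain payment from A to B, returning the new state.
    fn pay_a_to_b(&self, amount: u64) -> Option<ChannelState> {
        if amount > self.balance_a {
            return None; // insufficient funds in the channel
        }
        Some(ChannelState {
            nonce: self.nonce + 1,
            balance_a: self.balance_a - amount,
            balance_b: self.balance_b + amount,
        })
    }
}

fn main() {
    let open = ChannelState { nonce: 0, balance_a: 100, balance_b: 0 };
    // Two off-chain updates; neither touches the main chain.
    let s1 = open.pay_a_to_b(30).unwrap();
    let s2 = s1.pay_a_to_b(20).unwrap();
    // Only the latest state would be submitted for on-chain settlement.
    assert_eq!((s2.balance_a, s2.balance_b, s2.nonce), (50, 50, 2));
    println!("ok");
}
```

Note how only the final state reaches the ledger: the intermediate states stay private between the two parties, matching the privacy property described above.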


Sidechains

A sidechain is a secondary blockchain linked to the primary chain through a two-way peg. Think of it like a forest: the trees are sidechains, and the forest itself is the primary chain. Sidechains are designed to handle large batches of transactions.

A sidechain assists the primary chain with validating various transactions in the blockchain. The primary chain then has ample time to deal with security and resolve disputes.

Sidechains are not the same as state channels: sidechains store transaction records on a publicly accessible ledger. Each sidechain also operates independently, so if a sidechain is attacked, the primary chain's operations are unaffected. However, sidechains take considerable time and effort to design and build.


Rollups

Rollups are layer-2 protocols that perform computations off the primary chain. Transaction details are transferred to the primary chain at given time intervals, which helps maintain records.

Also, rollups handle transactions without interfering with the primary layer. This leads to higher throughput at minimal transaction costs. Since transaction details of the rollups are stored on the primary layer, the rollup’s security can be guaranteed.
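The batching idea can be sketched as follows. This is an illustrative Rust sketch: real rollups post cryptographic commitments (e.g. Merkle roots or validity proofs) to layer 1, whereas `DefaultHasher` here is merely a stand-in and is not cryptographically secure.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative: a rollup bundles many L2 transactions and posts a single
// compact commitment to L1, instead of posting each transaction.
fn batch_commitment(txs: &[(&str, &str, u64)]) -> u64 {
    let mut h = DefaultHasher::new();
    for (from, to, amount) in txs {
        from.hash(&mut h);
        to.hash(&mut h);
        amount.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let batch = [("alice", "bob", 5), ("bob", "carol", 3)];
    // One small commitment stands in for the whole batch on layer 1.
    let c = batch_commitment(&batch);
    // Replaying the same batch reproduces the same commitment.
    assert_eq!(c, batch_commitment(&batch));
    println!("ok");
}
```

Because the per-transaction data stays off-chain and only the commitment (plus compressed call data) lands on layer 1, throughput rises and per-transaction fees fall, as the paragraph above describes.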

Optimistic rollups

Optimistic rollups assume that every transaction performed on the blockchain is valid unless it is challenged.

However, in most cases, optimistic rollups take a considerable amount of time to confirm transactions. This waiting period gives the rollup ample time to resolve a dispute if a challenge arises.

Zero-knowledge rollups

Zero-knowledge rollups perform computations off-chain before submitting a validity proof to the primary chain. They sometimes use a smart contract to hold funds on the base layer.

Funds are then released once the proof of validity has been submitted and the primary chain has validated and confirmed the transaction.


Top 30 Layer 2 Blockchain protocols

Here is the list of 30 Layer 2 crypto projects (sorted by trading volume).

1. Bitcoin Lightning Network – The Bitcoin Lightning Network is one of the best-known layer 2 solutions for Bitcoin. Like other layer 2 solutions, it takes transaction bundles from the main chain to be dealt with off-chain before transferring that information back. The Lightning Network also brings smart contracts to Bitcoin, which is a big improvement to the network overall.
Bitcoin lightning network promises the following benefits: instant payment, scalability, low cost and cross blockchains swaps.

As the name suggests, this layer 2 solution will introduce lightning-fast payments on the Bitcoin blockchain, as fast as milliseconds. The current Bitcoin average transaction time is about 10 minutes. However, it can vary largely if the network is congested.
The Bitcoin lightning network also claims that it is capable of processing millions to billions of TPS, which is many times higher than legacy payment providers like Visa.

By settling transactions off-chain as layer 2 solution, fees are greatly reduced, allowing for instant micropayments.

Finally, cross-chain atomic swaps can occur off-chain as long as the chains support the same cryptographic hash function. Bitcoin uses the SHA-256 cryptographic hash function in its algorithm.
2. Polygon – Polygon (formerly Matic) is a Layer 2 solution powering Ethereum scaling and infrastructure development. The great thing about Polygon is that it's already used by many projects like Sushiswap, Aavegotchi, Chain Games, Quickswap, etc.

Using Polygon, one can create optimistic rollup chains, ZK rollup chains, standalone chains, or any other kind of infrastructure required by the developer.
3. Optimism – Optimism (OP) is a layer-two blockchain on top of Ethereum. Optimism benefits from the security of the Ethereum mainnet and helps scale the Ethereum ecosystem by using optimistic rollups. That means transactions are trustlessly recorded on Optimism but ultimately secured on Ethereum.

Optimism's design process is built around the idea of long-term sustainability and not taking shortcuts to scalability. That is why it uses optimistic rollups and takes advantage of the consensus mechanism of Ethereum to scale the network. Blocks are constructed and executed on the L2 (Optimism), while user transactions are batched up and submitted to the L1 (Ethereum). The L2 has no mempool, and transactions are immediately accepted or rejected. This guarantees a smooth user experience while ensuring security through the Ethereum consensus mechanism.
4. Uniswap – Uniswap is one of the leading decentralized crypto trading protocols, supporting thousands of DeFi applications including token swapping, staking, voting, liquidity provision, and more.

You can use the Uniswap app to swap DeFi tokens directly on-chain through popular DeFi wallet providers on Ethereum, Polygon, Optimism and Arbitrum networks.
5. Loopring – This Layer 2 scaling solution for the Ethereum network features a payment/transfer facility and a decentralized exchange (DEX). LRC is the native token. The team has developed zkRollups, which are used to power the Loopring exchange.

Using the Loopring Pay facility, users can send and receive Ethereum (ETH) or Ethereum based assets, instantly and for free. The zkRollup scales Ethereum 1000x over its nominal capacity by transferring most processing on another chain.
6. dYdX – DYDX (dYdX) is the governance token for the layer 2 protocol of the eponymous non-custodial decentralized cryptocurrency exchange. It serves to facilitate the operation of layer 2 and allows traders, liquidity providers, and partners to contribute to the definition of the protocol's future as a community.

Token holders are granted the right to propose changes on the dYdX’s layer 2, and are presented with an opportunity to profit through token staking and trading fee discounts.

Built on StarkWare's StarkEx scalability engine, layer 2 is used for trading cross-margined perpetuals on the platform. The scaling solution allows dYdX to increase transaction speed, eliminate gas costs, reduce trading fees, and lower minimum trade sizes on the protocol.

An open-source platform with smart contract functionality, dYdX is designed for users to lend, borrow and trade crypto assets. Although dYdX supports spot trading, the main focus of the platform is on derivatives and margin trading.
7. Immutable X – Immutable X is a layer-two blockchain with zero gas fees, where operators can set their own trading fees. In contrast to other Ethereum scaling solutions, a 51% attack on Immutable X is unfeasible, as it is not a centralized side chain but inherits the native security of the Ethereum blockchain.

Immutable X uses zk-rollups, meaning assets are traded on the second-layer blockchain, but the validity proof of a transaction is stored on the layer-one blockchain, in this case Ethereum. Immutable X uses STARK proofs because, unlike SNARKs, they are post-quantum-secure and provide greater user security despite the greater cost.
8. OMG Network – OMG Network is a Plasma-based Layer 2 scaling solution that scales the Ethereum blockchain to thousands of TPS. It reduces fees by an estimated third, all while maintaining Ethereum's security.

OMG Network does so by transferring the processing of the transaction outside the Ethereum mainnet, using it only for the final settlement. Further, it batches the transactions, making the process more efficient and less resource-intensive.
9. MetisDAO – Based on the spirit of Optimistic Rollup, Metis is building an easy-to-use, highly scalable, low-cost, and fully functional Layer 2 framework (Metis Rollup) to fully support the application and business migration from Web 2.0 to Web 3.0. Its scalable protocol supports a wide range of use cases, including NFT platforms, decentralized Reddit-like social platforms, open-source developer communities, influencer communities, gaming communities, freelancer communities, crowdfunding, yield farming, DEX trading, and much more.

It also makes it easy to use pre-set tools to facilitate their development, manage collaboration, and enjoy the network effects of the world's largest decentralized finance ecosystem, without the costs and bottlenecks normally associated with Ethereum.

Metis' goal is to make building DApps and DACs on its platform so easy that even total blockchain novices can do it in a matter of minutes.
10. SKALE – SKALE Network is an open-source security and execution Ethereum Layer 2 scaling solution, which relies on elastic side chains to divert processing off the mainnet.

It consists of high-performance side chains executing sub-second block times, running up to 2,000 TPS per chain. Skale can support full-state smart contracts, decentralized storage, execute Rollups, and machine learning in EVM.

It offers speed and functionality without compromising security or decentralization, and can support thousands of independent blockchains of all subtypes.

It is fully compatible with Solidity and thus the Ethereum ecosystem, and is a collusion-resistant, leaderless network with a mathematically provably secure ABBA-based consensus.
11. Gnosis Chain – Gnosis Chain (formerly known as xDai) is a prediction market platform on the Ethereum network. Gnosis Chain provides its users with the chance to build their own prediction platform through the creation of a specific infrastructure layer.

Gnosis Chain is the associated execution-layer EVM chain for stable transactions. It uses the xDai token and includes a wide-ranging group of projects and users.
12. Boba Network – Boba is an L2 Ethereum scaling and augmenting solution built by the Enya team as core contributors to the OMG Foundation. Boba is a next-generation Ethereum Layer 2 Optimistic Rollup scaling solution that reduces gas fees, improves transaction throughput, and extends the capabilities of smart contracts.

Boba offers fast exits backed by community-driven liquidity pools, shrinking the Optimistic Rollup exit period from seven days to only a few minutes, while giving LPs incentivized yield farming opportunities.

Boba’s extensible smart contracts will enable developers across the Ethereum ecosystem to build dApps that invoke code executed on web-scale infrastructure such as AWS Lambda, making it possible to use algorithms that are either too expensive or impossible to execute on-chain.
13. ZKSpace – ZKSwap is a Layer 2 scaling solution, specifically an automated market maker (AMM) decentralized exchange (DEX) powered by zkRollup technology. Developed by L2Lab, it has already launched on the Ethereum mainnet.

It transfers all tokens to Layer 2 and guarantees consistency by continuously generating zero-knowledge proofs. It allows exchanges to execute swaps with zero gas fees and unlimited scalability.
14. Raiden Network – Raiden Network is an off-chain scaling solution enabling quick and cheap payments. It is Ethereum's version of Bitcoin's Lightning Network.

It complements the Ethereum blockchain and works with any ERC-20 token. The Raiden Network Token (RDN) supports a host of use cases such as micropayments, M2M Markets, API Access, and Decentralized Exchanges.
15. zkSync – zkSync is a scaling and privacy engine for Ethereum. Its current functionality scope includes low-gas transfers of ETH and ERC20 tokens on the Ethereum network, atomic swaps and limit orders, as well as native L2 NFT support.

zkSync is built on ZK Rollup architecture. ZK Rollup is an L2 scaling solution in which all funds are held by a smart contract on the mainchain, while computation and storage are performed off-chain. For every Rollup block, a state transition zero-knowledge proof (SNARK) is generated and verified by the mainchain contract. This SNARK includes the proof of the validity of every single transaction in the Rollup block. Additionally, the public data update for every block is published over the mainchain network in the cheap calldata.
16. StarkWare – StarkNet is a permissionless decentralized ZK-Rollup operating as an L2 network over Ethereum, where any dApp can achieve unlimited scale for its computation without compromising Ethereum's composability and security.

Cairo is a programming language for writing provable programs, where one party can prove to another that a certain computation was executed correctly. Cairo and similar proof systems can be used to provide scalability to blockchains.

StarkNet uses the Cairo programming language both for its infrastructure and for writing StarkNet contracts.
17. Aztec – Aztec Network (private layer 2 payments) is the first private ZK-rollup on Ethereum, enabling decentralized applications to access privacy and scale. Aztec's rollup is secured by its industry-standard PLONK proving mechanism, used by the leading zero-knowledge scaling projects.

Private: Aztec is the only zero-knowledge rollup built with a privacy-first architecture from the ground up, allowing users to access their favorite apps on Layer 1 completely privately.

Accessible: Proving Aztec transaction validity through zero-knowledge proofs on Ethereum reduces transaction costs by up to 100x.

Compliant: Programmably private system supports opt-in auditability and compliance while fully preserving confidentiality.
18. Zkopru – Zero-knowledge proofs make private transactions possible, while optimistic rollup technology lowers the cost to send tokens and NFTs on Ethereum's Layer 2.

Zkopru is a new protocol for the storage of crypto assets and gas-efficient private transactions on the Ethereum blockchain.

Zkopru wallets can be used to store and send ETH, ERC-20s, and NFTs anonymously at a lower cost than main net transfers.
19. Celestia – A modular consensus and data network. Celestia is a minimal blockchain that only orders and publishes transactions and does not execute them.

By decoupling the consensus and application execution layers, Celestia modularizes the blockchain technology stack and unlocks new possibilities for decentralized application builders, built to enable anyone to easily deploy their own blockchain with minimal overhead.
20. Fuel – An Optimistic Rollup chain. Sway is a domain-specific language (DSL) for the Fuel Virtual Machine (FuelVM), a blockchain-optimized VM designed for the Fuel blockchain. Sway is based on Rust and includes syntax to leverage a blockchain VM without needlessly verbose boilerplate.
21. AltLayer – AltLayer is a system of highly scalable application-dedicated execution layers that derive security from an underlying L1/L2. It is designed as a modular and pluggable framework for a multi-chain and multi-VM world.

At its core, AltLayer is a system of several optimistic rollup-like execution layers called flash layers with a novel innovation that makes them disposable and therefore highly resource-optimized.
22. Aztec – Aztec Network is the first private zk-rollup on Ethereum, enabling decentralized applications to access privacy and scale.

Shielding funds to Aztec creates a private note on Layer 2. Private notes can be traded, staked, and used to earn yield just like normal Ethereum assets–but with full privacy protection.


Conclusion

Layer-2 solutions are a main gateway toward a scalable, decentralized blockchain network. These protocols give miners adequate computing resources and reduce transaction fees.

With layer-2 protocols, blockchain can also be integrated into global commerce. This can assist in building networks useful in different industries.

Read more: Top Layer 1 Crypto Projects | Top 100 Layer 1 Blockchain protocols

I hope this article will help you. Don't forget to leave a like, comment, and share it with others. Thank you!

#blockchain #bitcoin #cryptocurrency #layers #protocol  

Ben Taylor

Top Layer 1 Crypto Projects | Top 100 Layer 1 Blockchain protocols

In this article, you will learn what Layer 1 blockchain protocols are and look at the top 100 Layer 1 blockchain protocols (crypto projects) in the crypto world. Let's go!

1. What Is Layer 1 in Blockchain?

A layer 1 blockchain solution refers to a collection of solutions tailored to improve the design of base protocols. The subtle modifications to the base protocol introduced by layer 1 solutions enable better scalability for the overall system. Many popular blockchain networks have been struggling with scalability issues.

Blockchain developers have been investing effort in scalability solutions, though there is still debate over the best alternatives. Some prominent layer 1 blockchain examples show how layer 1 options can take different approaches to scaling. For example, layer 1 solutions can increase the block size over the base protocol, allowing the network to process more transactions in each block.

The two other prominent approaches for implementing solutions in a layer 1 blockchain list include sharding and modification in consensus mechanisms. Some of the common examples of layer one blockchain networks include Bitcoin, Binance Smart Chain, Ethereum, Solana... All of these networks serve as the best examples to show the need for layer 1 solutions. 

Layer 1 scaling

A common problem with layer-1 networks is their inability to scale. Bitcoin and other big blockchains have been struggling to process transactions in times of increased demand. Bitcoin uses the Proof of Work (PoW) consensus mechanism, which requires a lot of computational resources. 

While PoW ensures decentralization and security, PoW networks also tend to slow down when the volume of transactions is too high. This increases transaction confirmation times and makes fees more expensive.

Blockchain developers have been working on scalability solutions for many years, but there is still a lot of discussion going on regarding the best alternatives. For layer-1 scaling, some options include:

1. Increasing block size, allowing more transactions to be processed in each block.

2. Changing the consensus mechanism used, such as with the upcoming Ethereum 2.0 update.

3. Implementing sharding, a form of database partitioning.
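The effect of option 1 can be estimated with simple arithmetic: the transaction ceiling is block capacity divided by the block interval. The numbers below are illustrative assumptions, not measurements of any real network:

```rust
// Back-of-the-envelope sketch: block size and block interval bound TPS.
// All inputs are illustrative assumptions.
fn max_tps(block_size_bytes: u64, avg_tx_bytes: u64, block_interval_secs: u64) -> u64 {
    // transactions per block, divided by seconds per block
    (block_size_bytes / avg_tx_bytes) / block_interval_secs
}

fn main() {
    // ~1 MB blocks, ~250-byte transactions, 600 s interval (Bitcoin-like):
    let btc_like = max_tps(1_000_000, 250, 600);
    assert_eq!(btc_like, 6); // in the ballpark of the often-quoted ~7 TPS
    // Doubling the block size doubles the ceiling, other things being equal:
    assert_eq!(max_tps(2_000_000, 250, 600), 13);
    println!("ok");
}
```

Under these assumptions a Bitcoin-like chain tops out at single-digit TPS, which is why block-size increases alone are considered an insufficient scaling lever.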

Layer 1 improvements require significant work to implement. In many cases, not all the network users will agree to the change. This can lead to community splits or even a hard fork, as happened with Bitcoin and Bitcoin Cash in 2017.

Types of Layer One Blockchain Solutions

From the most basic perspective, a layer 1 blockchain protocol must offer decentralization, security, and scalability. The layer one blockchain networks can guarantee better results for scalability through different approaches. Here are the two different types of layer 1 blockchain examples based on the approaches they follow for scalability. 

Consensus Protocol 

The first category of layer 1 blockchain solutions involves switching the consensus mechanism. Many conventional blockchain networks use Proof of Work, a resource-intensive and slow consensus mechanism. While Proof of Work supports decentralized consensus and security through cryptography, it presents notable setbacks for scalability.

On the contrary, a layer 1 blockchain protocol could leverage Proof of Stake as the consensus mechanism. Proof of Stake helps in achieving decentralized consensus on a blockchain network alongside authentication of block transactions according to stakes. However, Proof of Stake loses on security while providing better transaction speed. Therefore, new layer one blockchain improvements are necessary for resolving the scalability concerns while taking care of security.


Sharding

Another top feature among entries in a layer 1 blockchain list is sharding. It is a productive method, used primarily in database partitioning, that can be applied to distributed ledger technology in the blockchain. Sharding serves as one of the reliable layer 1 scaling solutions for increasing transaction throughput. As of now, sharding is still an experimental approach in the blockchain space.

It involves breaking a network down into a collection of separate database segments, referred to as shards. The division of the network and its nodes distributes the workload effectively and enables improved transaction speeds. Every shard in a layer 1 blockchain manages a subset of the whole network's activity, with its own transactions, separate blocks, and nodes.

With sharding, a node does not have to maintain a complete copy of the whole blockchain. Instead, each node reports completed work to the main chain and shares the state of its local data, such as address balances and other metrics.


Top 100 Layer 1 Blockchain protocols

The layer 1 blockchain ecosystem is growing at an unparalleled rate. Here is a list of 100 layer 1 crypto projects, sorted by trading volume.

Each entry gives the project's rank and name, its coin/token and a major exchange where it trades (where listed), followed by a short description.

1. Bitcoin (BTC, Binance): First, and most secure, PoW chain.
2. Ethereum (ETH, Binance): Ethereum is a decentralized open-source blockchain system that features its own cryptocurrency, Ether. ETH works as a platform for numerous other cryptocurrencies, as well as for the execution of decentralized smart contracts.
3. XRP (XRP, Binance): The XRP Ledger (XRPL) is an open-source, permissionless and decentralized technology. Benefits of the XRP Ledger include its low cost ($0.0002 per transaction), speed (transactions settle in 3-5 seconds), scalability (1,500 transactions per second) and inherently green attributes (carbon-neutral and energy-efficient).
4. Ethereum Classic (ETC, Binance): Ethereum Classic (ETC) is a hard fork of Ethereum (ETH). Its main function is as a smart contract network, with the ability to host and support decentralized applications (DApps).
5. Binance Smart Chain (BNB, Binance; https://www.bnbchain.org): Binance is a unique ecosystem of decentralized, blockchain-based networks. The company has grown to be the leading crypto exchange in a number of countries, and its side organizations are attracting significant interest as well.
6. Solana (SOL, Binance): The Solana protocol is designed to facilitate decentralized app (DApp) creation. It aims to improve scalability by introducing a proof-of-history (PoH) consensus combined with the underlying proof-of-stake (PoS) consensus of the blockchain.
7. Cardano (ADA, Binance): Cardano is one of the biggest blockchains to successfully use a proof-of-stake consensus mechanism, which is less energy-intensive than the proof-of-work algorithm relied upon by Bitcoin. Although the much larger Ethereum is going to be upgrading to PoS, this transition will only take place gradually.
8. EOS: The EOS Network is an open-source blockchain platform that prioritizes high performance, flexibility, security, and developer experience. As a third-generation blockchain platform powered by the EOS virtual machine, EOS has an extensible WebAssembly engine for deterministic execution of near fee-less transactions. The network is attractive because of its technology and community: it allows developers to build projects that other blockchains cannot support, and it offers multiple tools and educational resources to help users acclimate to the blockchain.
9. Dogecoin (DOGE, Binance): Dogecoin (DOGE) is based on the popular "doge" Internet meme and features a Shiba Inu on its logo.
10. Cosmos (ATOM, Binance): Cosmos is described as "Blockchain 3.0", and a big goal is ensuring that its infrastructure is straightforward to use. To this end, the Cosmos software development kit focuses on modularity, allowing a network to be easily built from chunks of code that already exist. Long-term, the hope is that complex applications will be straightforward to construct as a result.
11. Litecoin (LTC, Binance): Litecoin (LTC) is a cryptocurrency that was designed to provide fast, secure and low-cost payments by leveraging the unique properties of blockchain technology.
12. Tron (TRX, Binance): The platform was built to create a decentralized Internet and serves as a tool for developers to create DApps, acting as an alternative to Ethereum; anyone can create DApps on the TRON network.
13. NEAR Protocol (NEAR, Binance): NEAR Protocol is a layer-one blockchain designed as a community-run cloud computing platform that eliminates some of the limitations that have bogged down competing blockchains, such as low transaction speeds, low throughput and poor interoperability.
14. Polkadot (DOT, Binance): Polkadot is known as a layer-0 metaprotocol because it underlies and describes a format for a network of layer 1 blockchains known as parachains (parallel chains). As a metaprotocol, Polkadot is also capable of autonomously and forklessly updating its own codebase via on-chain governance according to the will of its token-holder community.
15. Filecoin: Filecoin aims to store data in a decentralized manner. Unlike cloud storage companies like Amazon Web Services or Cloudflare, which are prone to the problems of centralization, Filecoin leverages its decentralized nature to protect the integrity of data's location, making it easily retrievable and hard to censor.
16. Zilliqa (ZIL, Binance): Zilliqa is a public, permissionless blockchain designed to offer high throughput, with the ability to complete thousands of transactions per second. It seeks to solve the issue of blockchain scalability and speed by employing sharding as a second-layer scaling solution.
17. Waves (WAVES, Binance): Waves is a multi-purpose blockchain platform which supports various use cases including decentralized applications (DApps) and smart contracts (a smart-asset blockchain with a focus on Web3 and DeFi).
18. Stellar (XLM, Binance): The platform is designed to move financial resources swiftly and reliably at minimal cost. Stellar links people, banks and payment processors, and allows users to create, send and trade multiple types of crypto.
19. Bitcoin Cash (BCH, Binance): Bitcoin Cash is a peer-to-peer electronic cash system that aims to become sound global money with fast payments, micro fees, privacy, and high transaction capacity (big blocks). In the same way that physical money, such as a dollar bill, is handed directly to the person being paid, Bitcoin Cash payments are sent directly from one person to another.
20. Bitcoin SV (BSV): Bitcoin SV (BSV) emerged following a hard fork of the Bitcoin Cash (BCH) blockchain in 2018, which had in turn forked from the BTC blockchain a year earlier.
21. VeChain (VET, Binance): VeChain (VET) is a versatile enterprise-grade L1 smart contract platform. VeChain aims to use distributed governance and Internet of Things (IoT) technologies to create an ecosystem which solves major data hurdles for multiple global industries, from medical to energy, food & beverage to sustainability and SDG goals.
22. Monero: There are several things that make Monero unique. One of the project's biggest aims is achieving the greatest level of decentralization possible, meaning that a user doesn't need to trust anyone else on the network.
23. Dash: Dash's governance system, or treasury, distributes 10% of the block rewards for the development of the project in a competitive and decentralized way. This has allowed the creation of many funded organizations, including Dash Core Group. In addition, the Dash Foundation, which advocates for the adoption of the cryptocurrency, receives donations and offers paid individual and institutional memberships.
24. Zcash (ZEC, Binance): Zcash is a decentralized cryptocurrency focused on privacy and anonymity. Its main advantage lies in its optional anonymity, which allows for a level of privacy unattainable with transparent blockchains like Bitcoin or Ethereum.
25. Internet Computer: Chain Key Technology is the scientific breakthrough that powers the Internet Computer, making it possible to add new nodes, form subnets, implement scaling, replace defective nodes, restore subnets, and update the Internet Computer Protocol, its features, and bug fixes.
26. Theta Network (THETA, Binance): Theta (THETA) is a blockchain-powered network purpose-built for decentralized video streaming, data delivery and edge computing, making it more efficient, cost-effective and fair for industry participants.
27. Qtum (QTUM, Binance): Qtum is a proof-of-stake (PoS) smart contract open-source blockchain platform and value transfer protocol. Qtum is built on Bitcoin's UTXO transaction model, with the added functionality of smart contract execution and DApps.
28. Algorand (ALGO, Binance): Algorand was invented to speed up transactions and improve efficiency, in response to the slow transaction times of Bitcoin and other blockchains. Algorand is designed to have lower transaction fees and no mining (unlike Bitcoin's energy-intensive process), as it is based on a permissionless pure proof-of-stake (PoS) blockchain protocol.
29. Oasis Network: Oasis is the leading privacy-enabled and scalable layer-1 blockchain network. It combines high throughput and low gas fees with a secure architecture to provide a next-generation foundation for Web3, powering DeFi, GameFi, NFTs, the Metaverse, data tokenization and data DAOs.
30. Kusama: Kusama is built on Substrate, a blockchain-building kit developed by Parity Technologies. Kusama has almost the same codebase as Polkadot, one of the most successful interoperable blockchains.
31. NEO (NEO, Binance): One of the unique selling points of the Neo blockchain is its continuous development, which helps ensure that it is futureproof and able to cope with sudden increases in demand.
32. Tezos: Tezos is a blockchain network based on smart contracts, not too dissimilar to Ethereum. However, there is a big difference: Tezos aims to offer infrastructure that can evolve and improve over time without any danger of a hard fork, something both Bitcoin and Ethereum have suffered since they were created. People who hold XTZ can vote on proposals for protocol upgrades put forward by Tezos developers.
33. Elrond (EGLD, Binance): Elrond is a blockchain protocol that seeks to offer extremely fast transaction speeds by using sharding.
34. Hedera: Hedera is the most used, sustainable, enterprise-grade public network for the decentralized economy, allowing individuals and businesses to create powerful decentralized applications (DApps). Unlike most other cryptocurrency platforms, Hedera Hashgraph isn't built on top of a conventional blockchain; instead, it introduces a completely novel type of distributed ledger technology known as a hashgraph.
35. IOTA (MIOTA, Binance): IOTA is a distributed ledger technology focused on high throughput.
36. Flow (FLOW, Binance): Flow is a fast, decentralized, and developer-friendly blockchain, designed as the foundation for a new generation of games, apps, and the digital assets that power them.
37. Ark (ARK, Binance): ARK aims to solve the difficulty of working with blockchain technology and developing solutions that satisfy various use cases. The ARK Core Framework is designed to give developers easier access to blockchain technology.
38. Harmony: Harmony is a blockchain platform designed to facilitate the creation and use of decentralized applications (DApps). The network aims to innovate the way decentralized applications work by focusing on random state sharding, which allows blocks to be created in seconds.
39. Conflux (CFX, Binance): Conflux is a high-throughput layer 1 consensus blockchain that utilizes a unique Tree-Graph consensus algorithm, enabling the parallel processing of blocks and transactions for increased throughput and scalability.
40. IoTeX (IOTX, Binance): IoTeX has built a decentralized platform whose aim is to empower the open economics for machines: an open ecosystem where people and machines can interact with guaranteed trust, free will, and properly designed economic incentives.
41. Kava (KAVA, Binance): Kava is a layer-1 blockchain that combines the speed and interoperability of Cosmos with the developer power of Ethereum. The Kava Network uses a developer-optimized co-chain architecture.
42. Horizen: Horizen's blockchain network provides a unique sidechain solution that allows developers to build their own scalable blockchains with the ability to support tens of thousands of transactions per second while maintaining true decentralization across tens of thousands of nodes. Horizen also offers unique optional privacy features.
43. Cronos (CRO, Binance): Cronos Chain is a payment-focused chain based on Tendermint consensus.
44. Ravencoin (RVN, Binance): Ravencoin is a digital peer-to-peer (P2P) network that aims to implement a use-case-specific blockchain, designed to efficiently handle one specific function: the transfer of assets from one party to another.
45. Holochain (HOT, Binance): Holo is a peer-to-peer distributed platform for hosting decentralized applications built using Holochain, a framework for developing DApps that does not require the use of blockchain technology.
46. Decred: By combining battle-tested Proof of Work with an innovative take on Proof of Stake that places coin holders in charge of shaping the future, Decred is able to adapt to challenges and innovate at a rapid pace. Decred's security, privacy, scalability, and decentralized treasury empower stakeholders and provide them with the tools needed to enhance their financial sovereignty.
47. Ontology (ONT, Binance): The Ontology blockchain is a high-speed, low-cost public blockchain. It is designed to bring decentralized identity and data solutions to Web3, with the goal of increasing privacy, transparency, and trust.
48. IOST (IOST, Binance): IOST's blockchain infrastructure is open-source and designed to be secure and scalable.
49. WAX (WAXP, Binance): WAX (WAXP) is a purpose-built blockchain designed to make e-commerce transactions faster, simpler and safer for every party involved. The WAX blockchain uses delegated proof-of-stake (DPoS) as its consensus mechanism and is fully compatible with EOS.
50. Moonbeam (GLMR, Binance): Moonbeam is an Ethereum-compatible smart contract parachain on Polkadot. Moonbeam makes it easy to use popular Ethereum developer tools to build or redeploy Solidity projects in a Substrate-based environment.
51. Aelf (ELF, Binance): Aelf is an open-source blockchain network designed as a complete business solution. Its "one main chain + multiple side chains" structure lets developers independently deploy or run DApps on individual side chains to achieve resource isolation.
52. Helium (HNT): Helium (HNT) is a decentralized blockchain-powered network for Internet of Things (IoT) devices.
53. Waltonchain (WTC, Binance): Waltonchain (WTC) is building an ecosystem that melds blockchain, RFID technology, and IoT (Internet of Things). This translates to enhanced operational efficiency, especially for supply chain use cases such as high-end clothing identification, food & drug traceability, and logistics tracking.
54. Mina (MINA, Binance): Mina Protocol is a minimal "succinct blockchain" built to curtail computational requirements in order to run DApps more efficiently.
55. Coti (COTI, Binance): Traditional payment systems cost merchants and customers up to billions of dollars annually. COTI's white-label payment network is a global payment network that lets users and merchants transact freely through a digital wallet, coin and more.
56. Arweave (AR, Binance): Arweave is a decentralized storage network that seeks to offer a platform for the indefinite storage of data. Describing itself as "a collectively owned hard drive that never forgets," the network primarily hosts "the permaweb", a permanent, decentralized web with a number of community-driven applications and platforms.
57. Casper: Casper is a unique utilization of blockchain technology and the proof-of-stake (PoS) consensus method. Casper is introducing a new standard for blockchain energy consumption, and is 136,000% more energy-efficient than Bitcoin.
58. Hive (HIVE, Binance): Hive is a decentralized information-sharing network with an accompanying blockchain-based financial ledger built on the Delegated Proof of Stake (DPoS) protocol.
59. Fetch.ai (FET, Binance): Artificial intelligence for blockchain with autonomous agents.
60. Kadena (KDA, Binance): Kadena offers a public proof-of-work blockchain with unparalleled throughput by combining two separate consensus mechanisms: DAG and proof-of-work.
61. Dusk Network (DUSK, Binance): Dusk Network is a privacy blockchain for financial applications. It is a layer-1 blockchain that powers the Confidential Security Contract (XSC) standard and supports native confidential smart contracts.
62. DigiByte: A longstanding public blockchain and cryptocurrency, DigiByte uses five different algorithms to improve security, and originally aimed to improve on the Bitcoin blockchain's security, capacity and transaction speed.
63. Seele-N (SEELE, Gate): Seele-N (SEELE) is an anti-ASIC, sharded, high-throughput chain.
64. Stratis (STRAX, Binance): Stratis is a blockchain-as-a-service platform that offers several products and services for enterprises, including launching private sidechains, running full nodes, developing and deploying smart contracts, an initial coin offering platform, and a proof-of-identity application.
65. ThunderCore (TT, Huobi): ThunderCore is a secure, high-performance, EVM-compatible public blockchain with its own native currency.
66. Icon: ICON Network is a layer-one blockchain focused on building a multichain bridging solution that is scalable, chain-agnostic, and secure. ICON is a hub that connects partner blockchains with all other blockchains integrated via BTP.
67. Komodo (KMD, Binance): Komodo is an open-source technology provider that offers all-in-one blockchain solutions for developers and enterprises. Komodo builds technologies that enable anyone to launch branded decentralized exchanges, cross-protocol financial applications, and independent blockchains.
68. TomoChain (TOMO, Binance): TomoChain (TOMO) is a project that attempts to improve the scalability of the Ethereum (ETH) blockchain, primarily by increasing its transactions-per-second (TPS) capacity.
69. XinFin (XDC, Binance): The XDC Network (formerly called XinFin Network) is an enterprise-grade, EVM-compatible blockchain equipped with interoperable smart contracts.
70. Wanchain (WAN, Binance): Wanchain is a distributed ledger that allows for cross-chain transactions and the interoperability of multiple chains. Most cross-chain transactions are completed through third-party platforms; Wanchain's uniqueness lies in its decentralization. The platform employs multi-party computation and threshold secret sharing to manage accounts autonomously.
71. Celo (CELO, Binance): Celo is a blockchain ecosystem focused on increasing cryptocurrency adoption among smartphone users. The platform aims to host various stablecoins, with three, the Celo Dollar (CUSD), the Celo Euro (CEUR) and the Celo Brazilian Real (CREAL), already in use.
72. Lisk (LSK, Binance): The Lisk blockchain application platform allows interoperability between all application-specific blockchains built with the Lisk SDK. In other words, users will be able to use DeFi, NFT or Metaverse blockchain applications on Lisk.
73. Telos (TLOS, MEXC): Telos is a high-performance L1, home to the fastest EVM, based on the EOS core. Telos is the only blockchain to support the two leading smart contract standards, EVM and EOSIO; together these technologies make up the majority of the top DApps on popular tracking websites such as DappRadar.
74. LTO Network (LTO, Binance): Hybrid blockchain for business.
75. Acala (ACA, Binance): EVM-compatible parachain with a focus on DeFi and aUSD adoption across Polkadot.
76. Symbol (XYM, MEXC): Symbol aims to solve problems inherent in EVM-based platforms and omnichain solutions, where security is often defined at a smart-contract level (versus a network-wide level), where L2 validators are centralized and not incentivized, and where new features and functionality are decided by central client teams versus a fair and free market.
77. Nano (XNO, Binance): Nano is a lightweight cryptocurrency designed to facilitate secure, practically instant payments without fees, addressing some of the major limitations of both legacy financial infrastructure and many modern cryptocurrencies.
78. Aleph Zero (AZERO, Binance): Aleph Zero is a privacy-enhancing, proof-of-stake public blockchain with instant finality.
79. BitShares (BTS, Binance): One of BitShares' major distinguishing features is its integrated decentralized cryptocurrency exchange platform (DEX), which allows users to trade regular cryptocurrencies, as well as more traditional financial instruments (via BitAssets), without middlemen.
80. Nervos (CKB, Binance): The Nervos Network describes itself as an open-source public blockchain ecosystem and collection of protocols. The Nervos CKB (Common Knowledge Base) is the layer 1, proof-of-work public blockchain protocol of the Nervos Network.
81. Aion (AION, Binance): The main purpose of The OAN is to facilitate interoperability between different blockchains, allowing users and developers to create a variety of applications.
82. Vite (VITE, Binance): The Vite virtual machine maintains compatibility with the EVM and utilizes an asynchronous smart contract language, Solidity++.
83. Radix (XRD, MEXC): Radix (XRD) is a layer-one protocol specifically built for DeFi purposes. It employs a new consensus mechanism called Cerberus, intended to deliver the performance needed to fulfill its ambitious goal of creating a new, decentralized global financial system.
84. Elastos (ELA, Huobi): With Elastos, aside from gaining full ownership over your digital assets, you do not have to access the internet in order to run DApps; all DApps run on the Smart Web.
85. KILT Protocol (KILT, MEXC): KILT is a decentralized blockchain identity protocol for issuing verifiable, revocable, and anonymous claims-based credentials in Web 3.0.
86. Aeternity (AE, Gate): Aeternity (AE) is a blockchain platform that focuses on high-bandwidth transacting, purely functional smart contracts, and decentralized oracles.
87. Edgeware (EDG, Gate): Edgeware aims to provide the first smart contract platform on the Polkadot Network.
88. Dero (DERO, Kucoin): Dero is a crypto project that combines a proof-of-work blockchain with a DAG block structure and wholly anonymous transactions.
89. ProximaX (XPX, MEXC): ProximaX is an enterprise-grade infrastructure and development platform that integrates blockchain technology with distributed and decentralized service layers: storage, streaming, database, and Supercontract (enhanced smart contracts).
90. Concordium (CCD, MEXC): Privacy-centric, public and permissionless blockchain, built for businesses.
91. RChain (REV, MEXC): Blockchain protocol with the Rholang smart contract language.
92. Flowchain (FLC): Flowchain is an IoT project that leverages distributed ledger technology (DLT) for peer-to-peer IoT networks and real-time data transactions.
93. Sui (SUI): Sui is the first permissionless layer 1 blockchain designed from the ground up to enable creators and developers to build experiences that cater to the next billion users in Web3. Sui is horizontally scalable to support a wide range of application development with unrivaled speed at low cost.




The highlights of these layer 1 blockchain examples and their fundamentals show how layer 1 improvements address scalability. Blockchain usage is gradually expanding across different industries and many real-world use cases, and changes to the base protocol could drive a new wave of adoption. Blockchain networks struggle with scalability in their bid to ensure security and decentralization.

However, developers can find ways to achieve decentralized and secure blockchain networks without compromising scalability. Layer 1 scaling solutions not only improve throughput but also change the norms for community development. The foremost challenge for layer 1 solutions right now is limited awareness of them. Start exploring more about layer 1 solutions for blockchain scalability now.

Read more: Cryptocurrency APIs: Top 200 APIs for Developer and Traders

I hope this article helps you. Don't forget to leave a like and a comment, and share it with others. Thank you!

#blockchain #bitcoin #cryptocurrency #layers #protocol  

Top Layer 1 Crypto Projects | Top 100 Layer 1 Blockchain protocols
Elian Harber


Raftdb: A Simple Distributed Key Value Store Based on The Raft


raftdb is a simple distributed key-value store based on the Raft consensus protocol. It can run on Linux, macOS, and Windows.

Running raftdb

Building raftdb requires Go 1.9 or later. gvm is a great tool for installing and managing your versions of Go.

Starting and running a raftdb cluster is easy. Download raftdb like so:

mkdir -p $GOPATH/src/github/hanj4096
cd $GOPATH/src/github/hanj4096
git clone

Build raftdb like so:

cd $GOPATH/src/
go install ./...

Step 1: Modify the /etc/hosts File

Add your servers’ hostnames and IP addresses to each cluster server’s /etc/hosts file (the hostnames below are representative).

<Meta_1_IP> raft-cluster-host-01
<Meta_2_IP> raft-cluster-host-02
<Meta_3_IP> raft-cluster-host-03

Verification steps:

Before proceeding with the installation, verify on each server that the other servers are resolvable. Here is an example set of shell commands using ping:

ping -qc 1 raft-cluster-host-01

ping -qc 1 raft-cluster-host-02

ping -qc 1 raft-cluster-host-03

We highly recommend that each server be able to resolve the IP from the hostname alone as shown here. Resolve any connectivity issues before proceeding with the installation. A healthy cluster requires that every meta node can communicate with every other meta node.

Step 2: Bring up a cluster

Run your first raftdb node like so:

$GOPATH/bin/raftd -id node01  -haddr raft-cluster-host-01:8091 -raddr raft-cluster-host-01:8089 ~/.raftdb

Let's bring up 2 more nodes, so we have a 3-node cluster. That way we can tolerate the failure of 1 node:

$GOPATH/bin/raftd -id node02 -haddr raft-cluster-host-02:8091 -raddr raft-cluster-host-02:8089 -join raft-cluster-host-01:8091 ~/.raftdb

$GOPATH/bin/raftd -id node03 -haddr raft-cluster-host-03:8091 -raddr raft-cluster-host-03:8089 -join raft-cluster-host-01:8091 ~/.raftdb

Reading and writing keys

You can now set a key and read its value back:

curl -X POST raft-cluster-host-01:8091/key -d '{"foo": "bar"}' -L
curl -X GET raft-cluster-host-01:8091/key/foo -L

You can now delete a key and its value:

curl -X DELETE raft-cluster-host-01:8091/key/foo -L

Three read consistency levels

You can read a key's value at different read consistency levels:

curl -X GET "raft-cluster-host-01:8091/key/foo?level=stale"
curl -X GET "raft-cluster-host-01:8091/key/foo?level=default" -L
curl -X GET "raft-cluster-host-01:8091/key/foo?level=consistent" -L

Author: Hanj4096
Source Code: 
License: MIT license

#go #golang #protocol 

Elian Harber


Raftdb: A Simple Distributed Key-value Database


raftdb implements a simple distributed key-value database, using the Raft distributed consensus protocol.

Get started


go get


go build -o raftdb main.go

Three nodes

./raftdb -h=localhost -p=7001 -c=8001 -f=9001 -path=./raftdb.1

./raftdb -h=localhost -p=7002 -c=8002 -f=9002 -path=./raftdb.2

./raftdb -h=localhost -p=7003 -c=8003 -f=9003 -path=./raftdb.3


curl -XPOST http://localhost:7001/db/foo -d 'bar'


curl http://localhost:7001/db/foo

Client example

package main

import (
    "fmt"

    "github.com/hslam/raftdb/node" // import path assumed from the repo layout
)

func main() {
    client := node.NewClient("localhost:8001", "localhost:8002", "localhost:8003")
    key := "foo"
    value := "Hello World"
    if ok := client.Set(key, value); !ok {
        panic("set failed")
    }
    if result, ok := client.LeaseReadGet(key); ok && result != value {
        panic("lease read mismatch")
    }
    if result, ok := client.ReadIndexGet(key); ok {
        fmt.Println(result)
    }
}

Hello World


Running on a three-node cluster.



Read Index


Author: hslam
Source Code: 
License: MIT license

#go #golang #protocol 

Elian Harber


A Feature Complete and High Performance Multi-group Raft Library in Go

Dragonboat - A Multi-Group Raft library in Go / 中文版   


Dragonboat is a high performance multi-group Raft consensus library in pure Go.

Consensus algorithms such as Raft provide fault tolerance by allowing a system to continue operating as long as a majority of its member servers are available. For example, a Raft shard of 5 servers can make progress even if 2 servers fail. The system also appears to clients as a single entity with strong data consistency always provided. All Raft replicas can be used to handle read requests for aggregated read throughput.

Dragonboat handles all the technical difficulties associated with Raft so that users can focus on their application domains. It is also very easy to use; our step-by-step examples can help new users master it in half an hour.


  • Easy to use pure-Go APIs for building Raft based applications
  • Feature complete and scalable multi-group Raft implementation
  • Disk based and memory based state machine support
  • Fully pipelined and TLS mutual authentication support, ready for high latency open environment
  • Custom Raft log storage and transport support, easy to integrate with latest I/O techs
  • Prometheus based health metrics support
  • Built-in tool to repair Raft shards that permanently lost the quorum
  • Extensively tested including using Jepsen's Knossos linearizability checker, some results are here

All major features covered in Diego Ongaro's Raft thesis have been supported -

  • leader election, log replication, snapshotting and log compaction
  • membership change
  • pre-vote
  • ReadIndex protocol for read-only queries
  • leadership transfer
  • non-voting member
  • witness member
  • idempotent update transparent to applications
  • batching and pipelining
  • disk based state machine


Dragonboat is the fastest open source multi-group Raft implementation on Github.

For a 3-node system using mid-range hardware (details here) and an in-memory state machine, with RocksDB as the storage engine, Dragonboat can sustain 9 million writes per second when the payload is 16 bytes each, or 11 million mixed I/O operations per second at a 9:1 read:write ratio. High throughput is maintained in geographically distributed environments: when the RTT between nodes is 30ms, 2 million I/O operations per second can still be achieved using a much larger number of clients.

The number of concurrently active Raft groups affects overall throughput, as requests become harder to batch. On the other hand, having thousands of idle Raft groups has a much smaller impact on throughput.

The table below shows write latencies in milliseconds. Dragonboat has <5ms P99 write latency when handling 8 million writes per second at 16 bytes each. Read latency is lower than write latency, as the ReadIndex protocol employed for linearizable reads doesn't require fsync-ed disk I/O.

Ops | Payload Size | 99.9% percentile | 99% percentile | AVG

When tested on a single Raft group, Dragonboat can sustain writes at 1.25 million per second when payload is 16 bytes each, average latency is 1.3ms and the P99 latency is 2.6ms. This is achieved when using an average of 3 cores (2.8GHz) on each server.

As visualized below, stop-the-world pauses caused by Go 1.11's GC are sub-millisecond on highly loaded systems. Such very short stop-the-world pause times were further significantly reduced in Go 1.12. Golang's runtime.ReadMemStats reports that less than 1% of the available CPU time is used by GC on a highly loaded system.


  • x86_64/Linux, x86_64/MacOS or ARM64/Linux, Go 1.15 or 1.14

Getting Started

Master is our unstable branch for development; it is currently working towards the v4.0 release. Please use the latest released versions for any production purposes. For Dragonboat v3.3.x, please follow the instructions in v3.3.x's

Go 1.17 or above with Go module support is required.

Use the following command to add Dragonboat v3 into your project.

go get

Or you can use the following command to start using the development version of Dragonboat, which is currently at v4 for its APIs.

go get

By default, Pebble is used for storing Raft logs in Dragonboat. RocksDB and other storage engines are also supported; more info here.

You can also follow our examples on how to use Dragonboat.


FAQ, docs, step-by-step examples, DevOps doc, CHANGELOG and online chat are available.


Dragonboat examples are here.


Dragonboat is production ready.


For reporting bugs, please open an issue. To contribute improvements or new features, please send in a pull request.


  • 2022-06-03 We are working towards a v4.0 release which will come with API changes. See CHANGELOG for details.
  • 2021-01-20 Dragonboat v3.3 has been released, please check CHANGELOG for all changes.

Author: lni
Source Code: 
License: Apache-2.0 license

#go #golang #protocol #algorithm 

A Feature Complete and High Performance Multi-group Raft Library in Go

Elian Harber


Dragonboat-example: Examples for Dragonboat

About / 中文版 (Chinese version)

This repo contains examples for dragonboat.

The master branch and the release-3.3 branch of this repo target Dragonboat's master and v3.3.x releases.

Go 1.17 or a later release with Go module support is required.


The programs provided in this repo are examples - they are intentionally written in a straightforward way to help users understand the basics of the dragonboat library. They are not benchmark programs.


To download the example code to say $HOME/src/dragonboat-example:

$ cd $HOME/src
$ git clone

Build all examples:

$ cd $HOME/src/dragonboat-example
$ make


Click links below for more details.

Next Step

Author: lni
Source Code: 
License: Apache-2.0 license

#go #golang #protocol #algorithm 


Elian Harber


M2: A Simple Http Key/value Cache System Based on Raft Protocol


m2 is a simple http key/value cache system based on hashicorp/raft.


go get

Create Cluster

Start first node

./m2 --node_id 1 --port 8001 --raft_port 18001

then start the second node

./m2 --node_id 2 --port 8002 --raft_port 18002
curl -d 'nodeid=2&addr=' http://localhost:8001/raft/join

Key/Value Api


  • /set - set key&value
  • /get - get value
  • /del - del key

Query params expected are key and val

# set
curl "http://localhost:8001/set?key=foo&val=bar"
# or use post method 
# curl -d "key=foo&val=bar" http://localhost:8001/set
# output:ok

# get
curl "http://localhost:8002/get?key=foo"
# output:bar

# del
curl "http://localhost:8001/del?key=foo"
# output:ok

Raft Api


  • /raft/join - join raft cluster
  • /raft/leave - leave raft cluster
  • /raft/status - get raft node status
# join
curl "http://localhost:8001/raft/join?nodeid=2&addr="
# or use post method 
# curl -d "nodeid=2&addr=" http://localhost:8001/raft/join
# output:ok

# leave
curl "http://localhost:8001/raft/leave?nodeid=2&addr="
# output:removed successfully

# node status
curl "http://localhost:8001/raft/status"
# output:
{
    "applied_index": "2",
    "commit_index": "2",
    "fsm_pending": "0",
    "last_contact": "0",
    "last_log_index": "2",
    "last_log_term": "2",
    "last_snapshot_index": "0",
    "last_snapshot_term": "0",
    "latest_configuration": "[{Suffrage:Voter ID:1 Address:}]",
    "latest_configuration_index": "0",
    "num_peers": "0",
    "protocol_version": "3",
    "protocol_version_max": "3",
    "protocol_version_min": "0",
    "snapshot_version_max": "1",
    "snapshot_version_min": "0",
    "state": "Leader",
    "term": "2"
}


m2 uses badger-db as storage.

Author: Hashicorp
Source Code: 
License: MPL-2.0 license

#go #golang #protocol 


The Balancer Protocol V2 Core Smart Contracts Written in Solidity

Balancer V2 Monorepo

This repository contains the Balancer Protocol V2 core smart contracts, including the Vault and standard Pools, along with their tests, configuration, and deployment information.

For a high-level introduction to Balancer V2, see Introducing Balancer V2: Generalized AMMs.


This is a Yarn 2 monorepo, with the packages meant to be published in the pkg directory. Newly developed packages may not be published yet.

Active development occurs in this repository, which means some contracts in it might not be production-ready. Proceed with caution.


Build and Test

Before any tests can be run, the repository needs to be prepared:

$ yarn # install all dependencies
$ yarn build # compile all contracts

Most tests are standalone and simply require installation of dependencies and compilation. Some packages however have extra requirements. Notably, the v2-deployments package must have access to mainnet archive nodes in order to perform fork tests. For more details, head to its readme file.

In order to run all tests (including those with extra dependencies), run:

$ yarn test # run all tests

To instead run a single package's tests, run:

$ cd pkg/<package> # e.g. cd pkg/v2-vault
$ yarn test

You can see a sample report of a test run here.


Multiple independent reviews and audits were performed by Certora, OpenZeppelin and Trail of Bits. The latest reports from these engagements are located in the audits directory.

Bug bounties apply to most of the smart contracts hosted in this repository: head to Balancer V2 Bug Bounties to learn more.

All core smart contracts are immutable, and cannot be upgraded. See page 6 of the Trail of Bits audit:

Upgradeability | Not Applicable. The system cannot be upgraded.


  • All files in the openzeppelin directory of the v2-solidity-utils package are based on the OpenZeppelin Contracts library, and as such are licensed under the MIT License: see LICENSE.
  • The LogExpMath contract from the v2-solidity-utils package is licensed under the MIT License.
  • All other files, including tests and the pvt directory are unlicensed.

Download details:

Author: balancer-labs
Source code:
License: GPL-3.0 license

#solidity #smartcontract #ethereum #blockchain #typescript #protocol


Reid Rohan


Level-2pc: A Two-phase-commit Protocol for Leveldb


A two-phase-commit protocol for leveldb.


Provides strong-consistency for local-cluster replication.

Every node in your cluster can be writable and all reads from any node will be consistent.

Uses reconnect-core to support an injectable transport for e.g. browser compatibility.


The algorithm for how this works is here.




var level = require('level');
var Replicator = require('level-2pc');
var net = require('net');

var db1 = level('./db', { valueEncoding: 'json' });

var opts = {
  peers: [
    { host: 'localhost', port: 3001 },
    { host: 'localhost', port: 3002 }
  ]
};

var r = Replicator(db1, opts);

net.createServer(function(con) {
  var server = r.createServer();
  con.pipe(server).pipe(con);
}).listen(3000);


var db2 = level('./db2', { valueEncoding: 'json' }); // second node's own database

var opts = {
  peers: [
    { host: 'localhost', port: 3000 },
    { host: 'localhost', port: 3002 }
  ]
};

var r = Replicator(db2, opts);

net.createServer(function(con) {
  var server = r.createServer();
  con.pipe(server).pipe(con);
}).listen(3001);


var db3 = level('./db3', { valueEncoding: 'json' }); // third node's own database

var opts = {
  peers: [
    { host: 'localhost', port: 3000 },
    { host: 'localhost', port: 3001 }
  ]
};

var r = Replicator(db3, opts);

net.createServer(function(con) {
  var server = r.createServer();
  con.pipe(server).pipe(con);
}).listen(3002);


Now go ahead and write some data to one of the servers and watch the data magically appear in the other servers!

setTimeout(function() {

  db1.put('x', 100, function(err) {
    console.log(err || 'ok');
  });

  setTimeout(function() {
    db2.get('x', function(err, value) {
      console.log('db2:', value);
    });
    db3.get('x', function(err, value) {
      console.log('db3:', value);
    });
  }, 100);

}, 100);


When the server wants to connect to the peers that have been specified, it defaults to using tcp from the net module. You can inject any transportation layer you like by setting the transport property in the options object:

var net = require('net');

var opts = {
  transport: function() {
    return net.connect.apply(null, arguments);
  },
  peers: [ /* .. */ ]
};

var r = Replicator(db, opts);


Replicator(db, opts)

Returns a Replicator object, which is an EventEmitter.

db leveldb database object

opts options object with the following properties:

  • host host that other peers should connect to
  • port port that other peers should connect to
  • peers an array of objects that specify the host and port of each peer
  • minConsensus how many peers must connect initially or respond to quorum


r.createServer()

Returns a duplex rpc-stream that can be served over e.g. http or tcp or any other transport supporting node streams.


Closes connections to all peers.

Event: 'ready'

Emitted when the replicator is ready to replicate with other peers. Happens when the replicator has enough connections for the quorum, i.e. when the number of peers is above minConsensus.

Event: 'notready'

Emitted when the replicator is not ready to replicate with other peers. Happens when the replicator doesn't have enough connections for the quorum, i.e. when the number of peers goes below minConsensus.

Event: 'connect'

Emitted when the replicator has connected to a peer.

  • host host of the connected peer
  • port port of the connected peer

Event: 'error'

Emitted when there was an error in the connection between the replicator and a peer.

  • err error object

Event: 'disconnect'

Emitted when the replicator has disconnected from a peer.

  • host host of the disconnected peer
  • port port of the disconnected peer

Event: 'reconnect'

Emitted when the replicator tries to reconnect to a peer.

  • host retrying connection to this host
  • port retrying connection to this port

Event: 'fail'

Emitted when the replicator has tried to reconnect but failed too many times. There might be a problem with the connection, or the peer is simply offline.

  • host host of the failing peer
  • port port of the failing peer

Author: Heapwolf
Source Code: 

#javascript #node #protocol #leveldb 


Royce Reinger


Protobuf: A Pure Ruby Implementation Of Google's Protocol Buffers


Protobuf is an implementation of Google's protocol buffers in Ruby; version 2.5.0 is currently supported.



If you wish to compile .proto definitions to ruby, you will need to install Google's Protocol Buffers from your favorite package manager or from source. This gem currently supports protobuf up to 3.6.x.

Note: the compiled headers are not a runtime requirement for this library to work, they are only necessary if you wish to compile your definitions to ruby.

OSX Install

$ brew install protobuf


Ubuntu Install

$ sudo apt-get install -y protobuf

Gem Install

Once the protobuf package is installed, install this gem with RubyGems or Bundler.

$ gem install protobuf

Compiling Definitions

Protocol Buffers are great because they allow you to clearly define data storage or data transfer packets. Google officially supports Java, C++, and Python for compilation and usage. Let's make it ruby aware!

Let's say you have a definitions/foo/user.proto file that defines a User message.

- definitions
  |- foo
      |- user.proto
// definitions/foo/user.proto
package foo;

message User {
  optional string first_name = 1;
  optional string last_name = 2;
}

Now let's compile that definition to ruby:

$ protoc -I ./definitions --ruby_out ./lib definitions/foo/user.proto

The previous line will take whatever is defined in user.proto and output ruby classes to the ./lib directory, obeying the package directive. Your ./lib should now look like this:

- lib
  |- foo
      |- user.pb.rb

The generated file user.pb.rb should look something like this:

# lib/foo/user.pb.rb
module Foo
  class User < ::Protobuf::Message; end

  class User
    optional :string, :first_name, 1
    optional :string, :last_name, 2
  end
end

Note: The generator will pre-define all message/enum classes empty and then re-open to apply the defined fields. This is to prevent circular field dependency issues.
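The two-pass pattern the note describes can be shown without the protobuf gem. Below is a hypothetical miniature: a stand-in optional macro replaces ::Protobuf::Message, and the Account class exists only to show why pre-declaration breaks circular references:

```ruby
# Miniature of the generator's output pattern: pre-declare, then reopen.
module Foo
  class User; end     # pass 1: declared empty so other messages can reference it
  class Account; end  # (a second message could have a field of type User already)

  class User          # pass 2: reopened to apply the defined fields
    def self.fields
      @fields ||= []
    end

    # stand-in for the gem's field DSL
    def self.optional(type, name, tag)
      fields << [type, name, tag]
    end

    optional :string, :first_name, 1
    optional :string, :last_name, 2
  end
end
```

With the real gem, ::Protobuf::Message provides the optional/required/repeated macros, but the declare-empty-then-reopen shape is the same.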

The generated class is now just a plain old ruby object. You can use it however you wish. Recognize that you can also compile multiple protos at the same time, just use shell glob syntax.

$ protoc -I ./definitions --ruby_out ./lib definitions/**/*.proto

Compiling with rake

This library now provides compiler rake tasks that you can use directly or inherit from in your own tasks. The simplest solution is to load our compile.rake task file in your Rakefile, and you should automatically get a compile and a clean task.

# Rakefile
load 'protobuf/tasks/compile.rake'
$ bundle exec rake -T

# Only the first argument is required.
$ bx rake protobuf:compile[my_base_package]
$ bx rake protobuf:compile[my_base_package] PB_NO_CLEAN=1
$ bx rake protobuf:compile[my_base_package, src, defs, my-crazy-plugin, '.fuby']

The compile task takes one to five arguments. The first argument, the package, is the base package defined in your protos. The other four arguments, with their defaults, are as follows (in this order):

args.with_defaults(:source => 'definitions')
args.with_defaults(:destination => 'lib')
args.with_defaults(:plugin => 'ruby')
args.with_defaults(:file_extension => '.pb.rb')

The compile task will by default force a clean using the same arguments you passed to compile. To avoid cleaning before the compile, set the environment variable PB_NO_CLEAN=1.

The clean task accepts 1 to 3 arguments. The only required argument is the base package (just like the compile task). You can clean without prompt (force clean) by passing PB_FORCE_CLEAN=1.

args.with_defaults(:destination => 'lib')
args.with_defaults(:file_extension => '.pb.rb')

# Only the first argument is required.
$ bx rake protobuf:clean[my_base_package]
$ bx rake protobuf:clean[my_base_package] PB_FORCE_CLEAN=1
$ bx rake protobuf:clean[my_base_package, src, '.fuby']

You can also invoke these tasks via Rake's ruby API to provide sensible defaults for your project. This is nothing special about our tasks of course, it's just Rake.

# Rakefile
load 'protobuf/tasks/compile.rake'

task :compile do
  # do some stuff before compile

  # Invoke the protobuf compile task with your sensible defaults
  ::Rake::Task['protobuf:compile'].invoke('my_base_package', 'src', 'defs', 'my-crazy-plugin', '.fuby')

  # Make sure you "reenable" the compile task if you plan to call it again...
  ::Rake::Task['protobuf:compile'].reenable

  # Call the compile again with other args
  ::Rake::Task['protobuf:compile'].invoke('martian_pkg', 'martian_src', '../martians/defs', 'martian-plugin', '.martians.rb')

  # do some stuff after compile
end

task :clean do
  # ...
end

API Roadmap

This roadmap is not a definitive list of features or fixes I intend to add in the future, just the prominent ones that I'd like users to be aware may be coming. Any timeline or feature completeness is obviously subject to change.

Version 3.x

NOTE: As of 2014-02-19 v3.0.0.rc1 has been released. Go see the release notes.

  • Introduce plugin-style Server and Client interfaces.
  • Rework message <-> field interface.

Version 4.x

  • Extract ZMQ server/client to its own gem.
  • Extract Socket server/client to its own gem.

Probably gonna happen

  • Add config module and railtie for less janky integration for both client and server modes.
  • More complete examples and wiki guides.
  • Group encode/decode support (#84).
  • Custom options compiler support, the first option of which is ruby file encoding. If you have any custom options you think would make sense for this gem let me know by creating an issue.
  • YARD plugin (possibly different gem) that understands the message and service API and can produce good documentation on compiled classes.

See our Installation Guide on the wiki.


The wiki contains in-depth guides on the various ways to use this gem including compiling definitions, object APIs, services, clients, and even an API roadmap.


See recent changes in the release notes or the changelog.

Author: Ruby-protobuf
Source Code: 
License: MIT license

#ruby #protocol 


Gordon Taylor


Protocol-buffers: Protocol Buffers for Node.js


Protocol Buffers for Node.js

npm install protocol-buffers 


Assuming the following test.proto file exists

enum FOO {
  BAR = 1;
}

message Test {
  required float num = 1;
  required string payload = 2;
}

message AnotherOne {
  repeated FOO list = 1;
}

Use the above proto file to encode/decode messages by doing

var fs = require('fs')
var protobuf = require('protocol-buffers')

// pass a proto file as a buffer/string or pass a parsed protobuf-schema object
var messages = protobuf(fs.readFileSync('test.proto'))

var buf = messages.Test.encode({
  num: 42,
  payload: 'hello world'
})

console.log(buf) // should print a buffer

To decode a message use Test.decode

var obj = messages.Test.decode(buf)
console.log(obj) // should print an object similar to above

Enums are accessed in the same way as messages

var buf = messages.AnotherOne.encode({
  list: [
    messages.FOO.BAR
  ]
})

Nested enums are accessed as properties on the corresponding message

var buf = message.SomeMessage.encode({
  list: [
    // enum values are accessed as properties of SomeMessage here
  ]
})

See the Google Protocol Buffers docs for more information about the available types etc.
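To get a feel for the bytes encode produces, here is the standard protobuf varint and field-tag arithmetic. This is the general wire format from Google's encoding spec, not this library's internals:

```javascript
// Protobuf wire format basics: each field is a varint tag, then a payload.
// Varints use 7 bits per byte, with the high bit set on all but the last byte.
function varint(n) {
  var bytes = [];
  while (n > 0x7f) {
    bytes.push((n & 0x7f) | 0x80); // low 7 bits, continuation bit set
    n >>>= 7;
  }
  bytes.push(n);
  return bytes;
}

// tag = (field_number << 3) | wire_type
// wire types: 0 = varint, 1 = 64-bit, 2 = length-delimited, 5 = 32-bit
function tag(fieldNumber, wireType) {
  return varint((fieldNumber << 3) | wireType);
}
```

For example, the payload string in the Test message above (field 2, length-delimited) is prefixed with the tag byte 0x12.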

Compile to a file

Since v4 you can compile your schemas to a JavaScript file that you can require from Node. This means you do not have to parse the schemas at runtime, which is useful when running in the browser or on embedded devices. It also makes the dependency footprint a lot smaller.

# first install the cli tool
npm install -g protocol-buffers

# compile the schema
protocol-buffers test.proto -o messages.js

# then install the runtime dependency in the project
npm install --save protocol-buffers-encodings

That's it! Then in your application you can simply do

var messages = require('./messages')

var buf = messages.Test.encode({
  num: 42
})

The compilation functionality is also available as a JavaScript API for programmatic use:

var protobuf = require('protocol-buffers')

// protobuf.toJS() takes the same arguments as protobuf()
var js = protobuf.toJS(fs.readFileSync('test.proto'))
fs.writeFileSync('messages.js', js)


The cli tool supports protocol buffer imports by default.

Currently all imports are treated as public, and the public/weak keywords are not supported.

To use it programmatically you need to pass in a filename and a resolveImport hook:

var protobuf = require('protocol-buffers')
var messages = protobuf(null, {
  filename: 'initial.proto',
  resolveImport (filename) {
    // can return a Buffer, String or Schema
  }
})


This module is fast.

It uses code generation to build encoders/decoders for the protobuf schema that are as fast as possible. You can run the benchmarks yourself by doing npm run bench.

On my Macbook Air it gives the following results

Benchmarking JSON (baseline)
  Running object encoding benchmark...
  Encoded 1000000 objects in 2142 ms (466853 enc/s)

  Running object decoding benchmark...
  Decoded 1000000 objects in 970 ms (1030928 dec/s)

  Running object encoding+decoding benchmark...
  Encoded+decoded 1000000 objects in 3131 ms (319387 enc+dec/s)

Benchmarking protocol-buffers
  Running object encoding benchmark...
  Encoded 1000000 objects in 2089 ms (478698 enc/s)

  Running object decoding benchmark...
  Decoded 1000000 objects in 735 ms (1360544 dec/s)

  Running object encoding+decoding benchmark...
  Encoded+decoded 1000000 objects in 2826 ms (353857 enc+dec/s)

Note that JSON parsing/serialization in node is a native function that is really fast.
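A minimal sketch of the JSON baseline side of such a benchmark looks like this; the object shape is illustrative, and the real harness lives in the repo's bench script:

```javascript
// Round-trip n objects through JSON and report a rough throughput figure.
function benchJSON(n) {
  var obj = { num: 42, payload: 'hello world' };
  var start = Date.now();
  for (var i = 0; i < n; i++) {
    JSON.parse(JSON.stringify(obj));
  }
  var ms = Math.max(1, Date.now() - start); // avoid divide-by-zero on tiny runs
  return { ops: n, ms: ms, opsPerSec: Math.round(n * 1000 / ms) };
}
```

The protocol-buffers side of the benchmark is the same loop with messages.Test.encode/decode in place of JSON.stringify/JSON.parse.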

Leveldb encoding compatibility

Compiled protocol buffers messages are valid levelup encodings. This means you can pass them as valueEncoding and keyEncoding.

var level = require('level')
var db = level('db')

db.put('hello', {payload:'world'}, {valueEncoding:messages.Test}, function(err) {
  db.get('hello', {valueEncoding:messages.Test}, function(err, message) {
    console.log(message) // the decoded message
  })
})
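This works because a levelup encoding is just an object exposing an encode/decode pair plus a couple of flags, a contract compiled messages happen to satisfy. A dependency-free sketch of that contract (flag names follow the level-codec convention; confirm against its docs):

```javascript
// Any object of this shape can be passed as valueEncoding/keyEncoding.
// Compiled protocol-buffers messages expose the same encode/decode pair,
// with buffer: true since they produce binary output.
var jsonCodec = {
  type: 'json-example', // name levelup uses to identify the codec
  buffer: false,        // whether encode produces a Buffer
  encode: function (obj) { return JSON.stringify(obj); },
  decode: function (data) { return JSON.parse(String(data)); }
};
```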

Author: Mafintosh
Source Code: 
License: MIT license

#node #javascript #protocol 


Royce Reinger


Http-2: Pure Ruby Implementation Of HTTP/2 Protocol


Pure Ruby, framework- and transport-agnostic implementation of the HTTP/2 protocol and HPACK header compression with support for:

Protocol specifications:

Getting started

$> gem install http-2

This implementation makes no assumptions as how the data is delivered: it could be a regular Ruby TCP socket, your custom eventloop, or whatever other transport you wish to use - e.g. ZeroMQ, avian carriers, etc.

Your code is responsible for feeding data into the parser, which performs all of the necessary HTTP/2 decoding, state management and the rest, and vice versa, the parser will emit bytes (encoded HTTP/2 frames) that you can then route to the destination. Roughly, this works as follows:

require 'http/2'
socket = YourTransport.new

conn = HTTP2::Client.new
conn.on(:frame) {|bytes| socket << bytes }

while bytes = socket.read
  conn << bytes
end

Check out the provided client and server implementations for basic examples.

Connection lifecycle management

Depending on the role of the endpoint you must initialize either a Client or a Server object. Doing so picks the appropriate header compression / decompression algorithms and stream management logic. From there, you can subscribe to connection level events, or invoke appropriate APIs to allocate new streams and manage the lifecycle. For example:

# - Server ---------------
server = HTTP2::Server.new

server.on(:stream) { |stream| ... } # process inbound stream
server.on(:frame)  { |bytes| ... }  # encoded HTTP/2 frames

server.ping { ... } # run liveness check, process pong response
server.goaway # send goaway frame to the client

# - Client ---------------
client = HTTP2::Client.new
client.on(:promise) { |stream| ... } # process push promise

stream = client.new_stream # allocate new stream
stream.headers({':method' => 'post', ...}, end_stream: false)
stream.data(payload, end_stream: true)

Events emitted by the connection object:

  • :promise - client role only, fires once for each new push promise
  • :stream - server role only, fires once for each new client stream
  • :frame - fires once for every encoded HTTP/2 frame that needs to be sent to the peer

Stream lifecycle management

A single HTTP/2 connection can multiplex multiple streams in parallel: multiple requests and responses can be in flight simultaneously and stream data can be interleaved and prioritized. Further, the specification provides a well-defined lifecycle for each stream (see below).

The good news is, all of the stream management, and state transitions, and error checking is handled by the library. All you have to do is subscribe to appropriate events (marked with ":" prefix in diagram below) and provide your application logic to handle request and response processing.

                 PP   |        |   PP
             ,--------|  idle  |--------.
            /         |        |         \
           v          +--------+          v
    +----------+          |           +----------+
    |          |          | H         |          |
,---|:reserved |          |           |:reserved |---.
|   | (local)  |          v           | (remote) |   |
|   +----------+      +--------+      +----------+   |
|      | :active      |        |      :active |      |
|      |      ,-------|:active |-------.      |      |
|      | H   /   ES   |        |   ES   \   H |      |
|      v    v         +--------+         v    v      |
|   +-----------+          |          +-----------+  |
|   |:half_close|          |          |:half_close|  |
|   |  (remote) |          |          |  (local)  |  |
|   +-----------+          |          +-----------+  |
|        |                 v                |        |
|        |    ES/R    +--------+    ES/R    |        |
|        `----------->|        |<-----------'        |
| R                   | :close |                   R |
`-------------------->|        |<--------------------'

For sake of example, let's take a look at a simple server implementation:

conn = HTTP2::Server.new

# emits new streams opened by the client
conn.on(:stream) do |stream|
  stream.on(:active) { } # fires when stream transitions to open state
  stream.on(:close)  { } # stream is closed by client and server

  stream.on(:headers) { |head| ... } # header callback
  stream.on(:data) { |chunk| ... }   # body payload callback

  # fires when client terminates its request (i.e. request finished)
  stream.on(:half_close) do

    # ... generate_response

    # send response
    stream.headers({
      ":status" => 200,
      "content-type" => "text/plain"
    })

    # split response between multiple DATA frames
    # (response_chunk / last_chunk come from generate_response above)
    stream.data(response_chunk, end_stream: false)
    stream.data(last_chunk)
  end
end

Events emitted by the Stream object:

  • :reserved - fires exactly once when a push stream is initialized
  • :active - fires exactly once when the stream becomes active and is counted towards the open stream limit
  • :headers - fires once for each received header block (multi-frame blocks are reassembled before emitting this event)
  • :data - fires once for every DATA frame (no buffering)
  • :half_close - fires exactly once when the opposing peer closes its end of connection (e.g. client indicating that request is finished, or server indicating that response is finished)
  • :close - fires exactly once when both peers close the stream, or if the stream is reset
  • :priority - fires once for each received priority update (server only)


Each HTTP/2 stream has a priority value that can be sent when the new stream is initialized, and optionally reprioritized later:

client = HTTP2::Client.new

default_priority_stream = client.new_stream
custom_priority_stream = client.new_stream(priority: 42)

# sometime later: change priority value
custom_priority_stream.reprioritize(32000) # emits PRIORITY frame

On the opposite side, the server can optimize its stream processing order or resource allocation by accessing the stream priority value (stream.priority).

Flow control

Multiplexing multiple streams over the same TCP connection introduces contention for shared bandwidth resources. Stream priorities can help determine the relative order of delivery, but priorities alone are insufficient to control how the resource allocation is performed between multiple streams. To address this, HTTP/2 provides a simple mechanism for stream and connection flow control.

Connection and stream flow control is handled by the library: all streams are initialized with the default window size (64KB), and send/receive window updates are automatically processed - i.e. window is decremented on outgoing data transfers, and incremented on receipt of window frames. Similarly, if the window is exceeded, then data frames are automatically buffered until window is updated.

The only thing left is for your application to specify the logic as to when to emit window updates:

conn.buffered_amount     # check amount of buffered data
conn.window              # check current window size
conn.window_update(1024) # increment connection window by 1024 bytes

stream.buffered_amount     # check amount of buffered data
stream.window              # check current window size
stream.window_update(2048) # increment stream window by 2048 bytes
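The buffering behavior described above can be modeled in a few lines. This is an illustrative toy, not the gem's internal class:

```ruby
# Toy model of per-stream flow control: sends consume the window,
# overflow is buffered, and window updates drain whatever now fits.
class ToyWindow
  attr_reader :window

  def initialize(window = 65_535) # spec default initial window
    @window = window
    @buffer = []
  end

  def buffered_amount
    @buffer.map(&:bytesize).reduce(0, :+)
  end

  def send_data(bytes)
    if bytes.bytesize <= @window
      @window -= bytes.bytesize
      :sent
    else
      @buffer << bytes
      :buffered
    end
  end

  def window_update(increment)
    @window += increment
    while (chunk = @buffer.first) && chunk.bytesize <= @window
      @window -= chunk.bytesize
      @buffer.shift
    end
  end
end
```

The library does this accounting for you on both the connection and each stream; your only job is deciding when to call window_update.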

Server push

An HTTP/2 server can send multiple replies to a single client request. To do so, first it emits a "push promise" frame which contains the headers of the promised resource, followed by the response to the original request, as well as promised resource payloads (which may be interleaved). A simple example is in order:

conn = HTTP2::Server.new

conn.on(:stream) do |stream|
  stream.on(:headers) { |head| ... }
  stream.on(:data) { |chunk| ... }

  # fires when client terminates its request (i.e. request finished)
  stream.on(:half_close) do
    promise_header = { ':method' => 'GET',
                       ':authority' => 'localhost',
                       ':scheme' => 'https',
                       ':path' => "/other_resource" }

    # initiate server push stream
    push_stream = nil
    stream.promise(promise_header) do |push|
      push_stream = push
    end

    # send response
    stream.headers({
      ":status" => 200,
      "content-type" => "text/plain"
    })

    # split response between multiple DATA frames
    stream.data(response_chunk, end_stream: false)

    # now send the previously promised data
    push_stream.data(promised_data) # promised_data stands in for the pushed payload
  end
end

When a new push promise stream is sent by the server, the client is notified via the :promise event:

conn = HTTP2::Client.new
conn.on(:promise) do |push|
  # process push stream
end

The client can cancel any given push stream (via .close), or disable server push entirely by sending the appropriate settings frame:

client.settings(settings_enable_push: 0)


To run specs:


Author: igrigorik
Source Code: 
License: MIT license

#ruby #http #protocol 


Reid Rohan


Netvis: D3.js-based tool To Visualize Network Communication


NetVis is a highly customizable javascript framework for building interactive network visualizations:

Visualize any network activity by describing your network events in a straightforward JSON-based NetVis format detailing network nodes, events and messages.

Convert your server logs / network trace files to NetVis format and quickly visualize them. The generic nature of the tool means support for visualizing communication in any existing protocol, including IP, TCP, HTTP, TLS, Bitcoin or IPFS, and makes it a perfect tool for developing new network protocols.

Browse and traverse your network model with the d3-based graph visualization and time playback controls (play/pause/fast-forward/rewind/speed) for events.

Customize the look and appearance easily by overwriting the default View handlers in plain javascript. NetVis keeps form and function customization separate. Specifying custom colors and tags for nodes and messages, or things like depicting the nodes on a geographical map, is super simple.

NetVis is built by the IPFS and Filecoin team.

What can NetVis do for me?

Here is an example of the use case:

  1. Live nodes implementing protocols run, generating a real sequence of events. They store this sequence in one or many log files.
  2. The log files are consolidated into one netvis history.
  3. The history is fed into a simulator, which runs the visualization.

This means that the live nodes / producers need not emit netvis exactly; we can have a processing step in the pipeline that converts whatever the native protocol logs are into netvis (for example, combining two different entries, announcing an outgoing + incoming packet, into one single netvis message entry).

And it also means that simulators need not ingest netvis directly, but can also be processed to fit their purposes better. This makes netvis a middle-format that seeks to ensure all the necessary elements are present, and that both the producer and consumer programs handle them correctly.

netvis pipeline:

live nodes --> logs --> netvis logs --> simulator input --> simulator
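The "logs --> netvis logs" conversion step in that pipeline can be tiny. The sketch below turns a made-up native log line into a netvis-style entry; the input format is invented for illustration, and the output field names follow the example later in this README:

```javascript
// Hypothetical native log line: "<ISO time> SEND <source> <destination> <body...>"
function toNetvisEntry(line) {
  var parts = line.split(' ');
  return {
    time: parts[0],
    level: 'info',
    sourceNode: parts[2],
    destinationNode: parts[3],
    message: {
      time: parts[0],
      protocol: 'IPFS 0.1',
      type: 'send',
      contents: parts.slice(4).join(' ')
    }
  };
}
```

A real converter would also merge paired outgoing/incoming entries into a single message entry, as described above.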

NetVis format

See the specification draft.

Here is an example of a NetVis file:

      "time": "2014-11-12T11:34:01.077817100Z",
      "level": "info",
      "name": "Earth"

      "time": "2014-11-12T11:34:01.477817180Z",
      "level": "info",
      "destinationNode": "Qmd9uGaZ6vKTES5nezVyCZDP2zJzdii2EXWiCbyGYq1tZX",
          "request_id": "c655d844aed528caabfad155408ee5832ba64d78",
          "time": "2014-11-12T11:34:01.477817180Z",
          "protocol": "IPFS 0.1",
          "type": "join",
          "contents": "{\"body\":\"Hello Jupiter!This is Earth, bow to our might!\"}"

      "time": "2014-11-12T11:34:02.000000003Z",
      "level": "info",
      "sourceNode": "Qmd9uGaZ6vKTES5nezVyCZDP2zJzdii2EXWiCbyGYq1tZX",
          "request_id": "a001c4d79b323808729ecfe673d84048e1725b39a96049dce2241dbd11d6abf9",
          "time": "2014-11-12T11:34:01.900000003Z",
          "protocol": "IPFS 0.1",
          "type": "lol",
          "contents": "lol wat"

We see an example of simple network activity where a node "Earth" sends a message to "Jupiter" and gets a response.

Note that while the Earth node is defined with a nodeEntered event, Jupiter is only introduced implicitly, by being mentioned. That is acceptable; NetVis tries to deduce as much as possible.


Also see:

  • netvis project design and API doc
  • netvis network log file format specification
  • project development roadmap
  • internal design doc. If you are considering contributing, or just want to see how things work internally, awesome! That would be a good place to start.

(A good place to start is the showcasing page.)

Author: dborzov
Source Code: 
License: MIT license

#javascript #d3 #network #protocol 

Netvis: D3.js-based tool To Visualize Network Communication
Monty Boehm


ProtoBuf.jl: Julia Protobuf Implementation


Protocol buffers are a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more.

ProtoBuf.jl is a Julia implementation for protocol buffers.

Generating Code (from .proto files)

ProtoBuf.jl includes the protoc compiler version 3 binary appropriate for your operating system. The Julia code generator plugs into the protoc compiler. It is implemented as ProtoBuf.Gen, a sub-module of ProtoBuf. The callable program (as required by protoc) is provided as the script plugin/protoc-gen-julia for unix-like systems and plugin/protoc-gen-julia_win.bat for Windows.

For convenience, ProtoBuf.jl exports a protoc(args) command that sets up the PATH and environment correctly for the included protoc. For example, to generate Julia code from proto/plugin.proto into a corresponding file jlout/plugin.jl, run the following from a Julia REPL:

julia> using ProtoBuf

julia> ProtoBuf.protoc(`-I=proto --julia_out=jlout proto/plugin.proto`)

Each .proto file results in a corresponding .jl file, including one for each other .proto file it includes. Separate .jl files are generated with modules corresponding to each top-level package.

If a field name in a message or enum matches a Julia keyword, it is prepended with an _ character during code generation.

If a package contains a message which has the same name as the package itself, optionally set the JULIA_PROTOBUF_MODULE_POSTFIX=1 environment variable when running protoc; this will append _pb to the module names.

ProtoBuf map types are generated as Julia Dict types by default. They can also be generated as Array of key-values by setting the JULIA_PROTOBUF_MAP_AS_ARRAY=1 environment variable when running protoc.
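
For example, either variable can be scoped to a single code-generation call with Julia's Base.withenv (the paths here are illustrative):

    using ProtoBuf

    # Generate code with _pb-suffixed module names and map fields emitted as
    # arrays of key-value pairs; the variables only affect this protoc run.
    withenv("JULIA_PROTOBUF_MODULE_POSTFIX" => "1",
            "JULIA_PROTOBUF_MAP_AS_ARRAY" => "1") do
        ProtoBuf.protoc(`-I=proto --julia_out=jlout proto/plugin.proto`)
    end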

Julia Type Mapping

.proto Type | Julia Type     | Notes
----------- | -------------- | -----
int32       | Int32          | Uses variable-length encoding. Inefficient for encoding negative numbers; if your field is likely to have negative values, use sint32 instead.
int64       | Int64          | Uses variable-length encoding. Inefficient for encoding negative numbers; if your field is likely to have negative values, use sint64 instead.
uint32      | UInt32         | Uses variable-length encoding.
uint64      | UInt64         | Uses variable-length encoding.
sint32      | Int32          | Uses variable-length encoding. Signed int value. These encode negative numbers more efficiently than regular int32s.
sint64      | Int64          | Uses variable-length encoding. Signed int value. These encode negative numbers more efficiently than regular int64s.
fixed32     | UInt32         | Always four bytes. More efficient than uint32 if values are often greater than 2^28.
fixed64     | UInt64         | Always eight bytes. More efficient than uint64 if values are often greater than 2^56.
sfixed32    | Int32          | Always four bytes.
sfixed64    | Int64          | Always eight bytes.
string      | ByteString     | A string must always contain UTF-8 encoded or 7-bit ASCII text.
bytes       | Array{UInt8,1} | May contain any arbitrary sequence of bytes.
map         | Dict           | Can be generated as an Array of key-values by setting the environment variable JULIA_PROTOBUF_MAP_AS_ARRAY=1.

Well-Known Types

The protocol buffers well-known types are pre-generated and included in the package as a sub-module. The version of the code included with this package has additional changes to make them compatible with Julia.

You can refer to them in your code after including the following statement:

using ProtoBuf

While generating code for your .proto files that use well-known types, add ProtoBuf/gen to the list of includes, e.g.:

julia> using ProtoBuf

julia> ProtoBuf.protoc(`-I=proto -I=ProtoBuf/gen --julia_out=jlout proto/msg.proto`)

Though this generates code for the well-known types along with your messages, you only need to use the files generated for your messages.

Generic Services

The Julia code generator generates code for generic services if they are switched on for either C++ (cc_generic_services), Python (py_generic_services) or Java (java_generic_services).

To use generic services, users provide implementations of the RPC controller, RPC channel, and service methods.

The RPC Controller must be an implementation of ProtoRpcController. It is not currently used by the generated code except for passing it on to the RPC channel.

The RPC channel must implement call_method(channel, method_descriptor, controller, request) and return the response.
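A minimal sketch of that contract, assuming ProtoBuf's ProtoRpcChannel and ProtoRpcController abstract types and a MethodDescriptor carrying a name field; the handler-dictionary design and the LocalChannel name are invented for illustration:

    using ProtoBuf

    # In-process channel: dispatches each RPC to a local handler function
    # instead of a network transport.
    struct LocalChannel <: ProtoRpcChannel
        handlers::Dict{String,Function}   # method name => handler
    end

    function ProtoBuf.call_method(channel::LocalChannel, meth::MethodDescriptor,
                                  controller::ProtoRpcController, request)
        # Find the handler registered for this method and return its response.
        handler = channel.handlers[meth.name]
        return handler(request)
    end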

RPC method inputs or outputs that are defined as stream types are generated as a Channel of the corresponding type.

Service stubs are Julia types. Stubs can be constructed by passing an RPC channel to the constructor. For each service, two stubs are generated:

  • Stub: The asynchronous stub that takes a callback to invoke with the result on completion
  • BlockingStub: The blocking stub that returns the result on completion
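
As a sketch only: assuming protoc generated a service MyService with a method get_feature (the service, method, helper, and constructor names below are invented, and the exact generated signatures may differ), the two stubs would be used roughly like this:

    # channel: any implementation of the RPC channel contract described above.
    channel = make_channel()          # assumed helper
    controller = MyController()       # assumed ProtoRpcController implementation

    # Blocking stub: the call returns the response directly.
    blocking = MyServiceBlockingStub(channel)
    resp = get_feature(blocking, controller, req)

    # Asynchronous stub: pass a callback that is invoked with the result.
    stub = MyServiceStub(channel)
    get_feature(stub, controller, req) do resp
        @info "got response" resp
    end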


  • Extensions are not supported yet.
  • Groups are not supported. They are deprecated anyway.
  • Enums are declared as Int32 types in the generated code. For every enum, a separate named tuple is generated with fields matching the enum values. The lookup method can be used to verify valid values.
  • In order to use the code generator, you must have installed ProtoBuf in the base Julia environment (]activate; add ProtoBuf).

Using ProtoBuf

Julia code for protobuf message types can be generated via protoc (see "Generating Code (from .proto files)" above). Generated Julia code for a protobuf message looks something like:

mutable struct Description <: ProtoType
    # a bunch of internal fields
    function Description(; kwargs...)
        # code to initialize the internal fields
    end
end # mutable struct Description
const __meta_Description = Ref{ProtoMeta}()
function meta(::Type{Description})
    # code to initialize the metadata
end
function Base.getproperty(obj::Description, name::Symbol)
    # code to get properties
end
Reading and writing data structures using ProtoBuf is similar to serialization and deserialization. Methods writeproto and readproto can write and read Julia types from IO streams.

julia> using ProtoBuf                       # include protoc generated package here

julia> mutable struct MyType <: ProtoType   # a Julia composite type generated from protoc that
         ...                                # has intval::Int and strval::String as properties
         function MyType(; kwargs...)

julia> iob = PipeBuffer();

julia> writeproto(iob, MyType(; intval=10, strval="hello world"));   # write an instance of it

julia> data = readproto(iob, MyType());  # read it back into another instance

julia> data.intval
10

julia> data.strval
"hello world"

Reading a message from a file is very similar to reading from a stream. Here's an example that writes a message to a file and then reads it back.

julia> include("test_type.jl")

julia> mktemp() do path, io
           tt1 = TestType(; a="abc", b=true) # construct a message
           writeproto(io, tt1)  # write message to file
           close(io) # close the file handle
           open(path) do io2 # open the file we just wrote in read mode
               tt2 = readproto(io2, TestType()) # read message from the file
               @info("read back from file", tt1.a, tt1.b, tt2.a, tt2.b) # print written and read messages
           end
       end
┌ Info: read back from file
│   tt1.a = "abc"
│   tt1.b = true
│   tt2.a = "abc"
└   tt2.b = true

Contents of the generated code in test_type.jl:

using ProtoBuf
import ProtoBuf.meta

mutable struct TestType <: ProtoType
    # internal fields (metadata, values dict, and default-set, in that order)

    function TestType(; kwargs...)
        obj = new(meta(TestType), Dict{Symbol,Any}(), Set{Symbol}())
        values = obj.__protobuf_jl_internal_values
        symdict = obj.__protobuf_jl_internal_meta.symdict
        for nv in kwargs
            fldname, fldval = nv
            (fldname in keys(symdict)) || error(string(typeof(obj), " has no field with name ", fldname))
            fldtype = symdict[fldname].jtyp
            values[fldname] = isa(fldval, fldtype) ? fldval : convert(fldtype, fldval)
        end
        obj
    end
end #type TestType
const __meta_TestType = Ref{ProtoMeta}()
function meta(::Type{TestType})
    if !isassigned(__meta_TestType)
        __meta_TestType[] = target = ProtoMeta(TestType)
        allflds = Pair{Symbol,Union{Type,String}}[:a => AbstractString, :b => Bool]
        meta(target, TestType, allflds, [:a], ProtoBuf.DEF_FNUM, ProtoBuf.DEF_VAL, ProtoBuf.DEF_PACK, ProtoBuf.DEF_WTYPES, ProtoBuf.DEF_ONEOFS, ProtoBuf.DEF_ONEOF_NAMES)
    end
    __meta_TestType[]
end
function Base.getproperty(obj::TestType, name::Symbol)
    if name === :a
        return (obj.__protobuf_jl_internal_values[name])::AbstractString
    elseif name === :b
        return (obj.__protobuf_jl_internal_values[name])::Bool
    else
        getfield(obj, name)
    end
end

Setting and Getting Fields

Types used as protocol buffer structures are regular Julia types and the Julia syntax to set and get fields can be used on them. The generated type constructor makes it easier to set large types with many fields by passing name value pairs during construction: T(; name=val...).

Fields that are marked as optional may not be present in an instance of the struct that is read. Also, you may want to clear a set property of an instance. The following methods are exported to assist with this:

  • propertynames(obj) : returns a list of possible property names
  • setproperty!(obj, fld::Symbol, v) : sets obj.fld
  • getproperty(obj, fld::Symbol) : gets obj.fld if it has been set; throws an error otherwise
  • hasproperty(obj, fld::Symbol) : checks whether property fld has been set in obj
  • clear(obj, fld::Symbol) : clears property fld of obj
  • clear(obj) : clears all properties of obj

julia> using ProtoBuf

julia> mutable struct MyType <: ProtoType  # a Julia composite type
           ... # intval::Int

julia> mutable struct OptType <: ProtoType # and another one to contain it
           ... #opt::MyType

julia> iob = PipeBuffer();

julia> writeproto(iob, OptType(opt=MyType(intval=10)));

julia> readval = readproto(iob, OptType());

julia> hasproperty(readval, :opt)
true

julia> writeproto(iob, OptType());

julia> readval = readproto(iob, OptType());

julia> hasproperty(readval, :opt)
false

The isinitialized(obj::Any) method checks whether all mandatory fields are set. It is useful to check objects using this method before sending them. Method writeproto results in an exception if this condition is violated.

julia> using ProtoBuf

julia> import ProtoBuf.meta

julia> mutable struct TestType <: ProtoType
           ... # val::Any

julia> mutable struct TestFilled <: ProtoType
           ... # fld1::TestType (mandatory)
           ... # fld2::TestType

julia> tf = TestFilled();

julia> isinitialized(tf)      # false, since fld1 is not set
false

julia> tf.fld1 = TestType(val="");

julia> isinitialized(tf)      # true, even though fld2 is not set yet
true

Equality & Hash Value

It is possible for fields marked as optional to be in an "unset" state. Even bits type fields (isbitstype(T) == true) can be in this state though they may have valid contents. Such fields should then not be compared for equality or used for computing hash values. All ProtoBuf compatible types, by virtue of extending abstract ProtoType type, override hash, isequal and == methods to handle this.

Other Methods

  • copy!{T}(to::T, from::T) : shallow copy of objects
  • isfilled(obj) : same as isinitialized
  • lookup(en, val::Integer) : lookup the name (symbol) corresponding to an enum value
  • enumstr(enumname, enumvalue::Int32): returns a string with the enum field name matching the value
  • which_oneof(obj, oneof::Symbol): returns a symbol indicating the name of the field in the oneof group that is filled
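
For instance, with an invented generated enum MyEnum (recall from the notes above that enums are emitted as named tuples of Int32 values):

    using ProtoBuf

    # A generated enum is a named tuple of Int32 values (names invented here).
    const MyEnum = (UNKNOWN = Int32(0), STARTED = Int32(1), STOPPED = Int32(2))

    sym = lookup(MyEnum, 1)        # look up the symbol corresponding to value 1
    s = enumstr(MyEnum, Int32(2))  # the matching field name as a string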

Thread safety

Most of the book-keeping data for a protobuf struct is kept inside the struct instance, so that does not hinder thread-safe usage. However, struct instances themselves need to be locked if they are being read and written to from different threads, as is expected of any regular Julia struct.

Protobuf metadata for a struct (the information about fields and their properties as mentioned in the protobuf IDL definition) however is best initialized once and reused. It was not possible to generate code in such a way that it could be initialized when code is loaded and pre-compiled. This was because of the need to support nested and recursive struct references that protobuf allows - metadata for a struct could be defined only after the struct and all of its dependencies were defined. Metadata initialization had to be deferred to the first constructor call. But in order to reuse the metadata definition, it gets stored into a Ref that is set once. A process wide lock is used to make access to it thread safe. There is a small cost to be borne for that, and it should be negligible for most usages.

If an application wishes to eliminate that cost entirely, it can call the constructors of all protobuf structs it wishes to use first and then switch the lock off by calling ProtoBuf.enable_async_safety(false). Once all metadata definitions have been initialized, this allows them to be used without any further locking overhead. It can also be set to false for a single-threaded, synchronous application where it is known that no parallelism is possible.
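
The warm-up sequence described above, sketched with two assumed generated types MyTypeA and MyTypeB:

    using ProtoBuf

    # Construct each protobuf struct once; the first constructor call
    # initializes and caches that struct's metadata under the lock.
    MyTypeA()
    MyTypeB()

    # Metadata is now cached for both types; turn off the lock to avoid
    # its (small) overhead on further constructor calls.
    ProtoBuf.enable_async_safety(false)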

Both version 2 and 3 of the protobuf specification language are supported.

Author: JuliaIO
Source Code: 
License: View license

#julia #protocol

ProtoBuf.jl: Julia Protobuf Implementation