Best of Crypto


How to Integrate Chainlink Feed Pallet In Substrate-based Chains


This repository contains the Chainlink feed pallet as well as an example node showing how to integrate it in Substrate-based chains.

It also includes the pallet-chainlink for interacting with the Chainlink job-based oracle system.

How to integrate the Chainlink feed pallet into a runtime?

The pallet is added to the runtime like any regular pallet (see tutorial). It then needs to be configured. See the pallet readme for details.

The usage is simple:

// Look up the feed with id 0, failing if it has not been created.
let feed = T::Oracle::feed(0.into()).ok_or(Error::<T>::FeedMissing)?;
// Destructure the latest round to get the most recent answer.
let RoundData { answer, .. } = feed.latest_data();

See the template pallet for a full example showing how to access a price feed.
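For readers without a full runtime at hand, the same access pattern can be exercised against a minimal mock. The `FeedOracle`, `FeedInterface`, and `RoundData` names below mirror the pallet's traits, but everything here is re-declared as a self-contained sketch, not the pallet's actual definitions:

```rust
// Illustrative mock of the feed interface; not pallet-chainlink-feed itself.
#[derive(Clone, Debug, PartialEq)]
pub struct RoundData {
    pub answer: i128,
    pub round_id: u32,
}

pub trait FeedInterface {
    fn latest_data(&self) -> RoundData;
}

pub trait FeedOracle {
    type Feed: FeedInterface;
    fn feed(id: u32) -> Option<Self::Feed>;
}

struct MockFeed;
impl FeedInterface for MockFeed {
    fn latest_data(&self) -> RoundData {
        RoundData { answer: 42, round_id: 1 }
    }
}

struct MockOracle;
impl FeedOracle for MockOracle {
    type Feed = MockFeed;
    fn feed(id: u32) -> Option<MockFeed> {
        if id == 0 { Some(MockFeed) } else { None }
    }
}

// The same access pattern as in the pallet snippet above:
// look up the feed, then read the latest round's answer.
fn latest_price<T: FeedOracle>(feed_id: u32) -> Result<i128, &'static str> {
    let feed = T::feed(feed_id).ok_or("FeedMissing")?;
    let RoundData { answer, .. } = feed.latest_data();
    Ok(answer)
}

fn main() {
    assert_eq!(latest_price::<MockOracle>(0), Ok(42));
    assert_eq!(latest_price::<MockOracle>(7), Err("FeedMissing"));
}
```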

Run the example

substrate-node-example demonstrates how to use pallet-chainlink-feed end-to-end. To test:

  • start the chain using make run-temp (for a temporary node which cleans up after itself)
  • connect to the chain by pointing the Polkadot-JS Apps UI (or a locally hosted version) to the local dev node
  • specify the types by copying substrate-node-example/types.json into the input at Settings > Developer

You are now ready to send extrinsics to the pallet.

Download details:
Author: smartcontractkit
Source code:
License: View license

#smartcontract #blockchain #oracle #chainlink #polkadot #go  #rust 


AS Substrate: Collection Of Libraries Written in AssemblyScript


A collection of resources to develop proof of concept projects for Substrate in AssemblyScript. AssemblyScript compiles a strict subset of TypeScript to WebAssembly using Binaryen.

At the moment, this repository is mainly home to a collection of smart contract examples and a small smart contract library for writing contracts for Substrate's contracts pallet, but it might be extended with more examples in the future.


This repository uses yarn and yarn workspaces. You also need a fairly up-to-date version of Node.js.


The packages folder contains the PoC libraries and projects.


The contracts folder contains a number of example contracts that make use of the as-contracts package. The compiled example contracts in the contracts folder can be deployed and executed on any Substrate chain that includes the contracts pallet.

Getting started

  1. Clone the whole as-substrate repository:

$ git clone

  2. Install all dependencies:

$ yarn

  3. Compile all packages, projects, and contract examples to Wasm:

$ yarn build

To clean up all workspaces in the repository, run:

$ yarn clean

Write your own contract

The @substrate/as-contracts and @substrate/as-utils packages are not published to the npmjs registry. That's why you need to add the complete as-substrate repository as a dependency directly from git.

$ yarn add

# or

$ npm install

In your projects, you can then import the as-contracts functions directly from the node_modules folder.

The recommended way of writing smart contracts is using the Rust Smart Contract Language ink!.

Another way of writing Smart Contracts for Substrate is using the Solidity to Wasm compiler Solang.


Everything in this repository is highly experimental and should not be used for any professional or financial purposes.

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 license

#blockchain #assemblyscript #substrate  #smartcontract  #polkadot #rust 


Ledgeracio: CLI for Use with The Ledger Staking App

WARNING: This is alpha quality software and not suitable for production. It is incomplete and will have bugs.

Ledgeracio CLI

Ledgeracio is a command-line tool and a Ledger app designed for staking operations on Substrate-based networks.

Running ledgeracio --help will provide top-level usage instructions.

Ledgeracio CLI is intended to work with a special Ledgeracio Ledger app, but most of its commands will work with stock Kusama or Polkadot Ledger apps as well. This is less secure, however, as these apps do not enforce the same restrictions that the Ledgeracio app does. Using a stock app in production is not recommended.

The Polkadot app can be found here and the Kusama app can be found here. Other Substrate-based chains are currently not supported, but local devnets should work as long as their RPC API matches Kusama/Polkadot's.

Ledgeracio only supports Unix-like systems, and has mostly been tested on Linux. That said, it works on macOS and other Unix-like systems that provide the necessary support for userspace USB drivers.

What is Ledgeracio?

Ledgeracio is a CLI app to perform various tasks common to staking on Kusama and Polkadot, aka staking-ops. Ledgeracio is designed to reduce the risk of user error by way of an allowlist of validators that is set up and signed once and stored on the Ledger device. Furthermore, Ledgeracio can speed up the workflow considerably when compared to alternatives using Parity Signer + Polkadot{.js}.

This repository only contains the CLI. To submit transactions with Ledgeracio, you will also need the companion Ledger app that you can install from the Ledger app store for Polkadot and Kusama. Development versions of the apps are available at Zondax/ledger-polkadot and Zondax/ledger-kusama. Please do not use the unaudited versions in production. For instruction on how to setup and use your Ledger device with Polkadot/Kusama, see the Polkadot wiki.

The Ledgeracio CLI contains two binaries. The first, simply called ledgeracio, is used to submit transactions. The second, called ledgeracio-allowlist, is used to manage the Ledgeracio Ledger app’s list of allowed stash accounts. Generally, one will use ledgeracio for normal operations, and only use ledgeracio-allowlist when the list of allowed stash accounts must be changed. ledgeracio does not handle sensitive data, so it can safely be used on virtually any machine on which it will run. Some subcommands of ledgeracio-allowlist, however, generate and use secret keys, which are stored unencrypted on disk. Therefore, they MUST NOT be used except on trusted and secured machines. Ideally, these subcommands should be run on a machine that is reserved for provisioning of Ledger devices with the Ledgeracio app, and which has no network connectivity.

The allowlist serves to prevent one from accidentally nominating the wrong validator, which could result in a slash. It does NOT protect against malicious use of the device. Anyone with both the device and its PIN can uninstall the Ledgeracio app and install the standard Polkadot or Kusama app, which uses the same derivation path and thus can perform the same transactions.


  • An index is an integer, at least 1, specified in decimal. Indexes are used to determine which BIP44 derivation path to use.
  • Subcommands that take a single argument take it directly. Subcommands that take multiple arguments use keyword arguments, which are passed as --key value or --key=value. This avoids needing to memorize the order of arguments.
  • All commands require a network name, passed via the --network option. You might want to make a shell alias for this, such as
alias 'ledgeracio-polkadot=ledgeracio --network polkadot'
alias 'ledgeracio-kusama=ledgeracio --network kusama'

Getting Started

Allowlist signing

Provisioning the Ledgeracio Ledger app requires a trusted computer. This computer will store the secret key used to sign allowlists. This computer does not need network access, and generally should not have it. ledgeracio-allowlist does not encrypt the secret key, so operations that involve secret keys should only be done on machines that use encrypted storage.

Only devices used for nomination need to be provisioned. However, if you only intend to use the app for validator management, you should set an empty allowlist, which blocks all nominator operations.

First, ledgeracio-allowlist gen-key <file> is used to generate a secret key. The public part will be placed in <file>.pub and the secret part in <file>.sec. Both will be created with 0400 permissions, so that they are not accidentally overwritten or exposed. This operation requires a trusted computer. The public key file can be freely redistributed, while the secret key file should never leave the machine it was generated on.

You can now sign a textual allowlist file with ledgeracio-allowlist sign. A textual allowlist file has one SS58 address per line. Leading and trailing whitespace is stripped. If the first non-whitespace character on a line is # or ;, or if the line is empty or consists entirely of whitespace, it is considered to be a comment and ignored.
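The comment and whitespace rules above are easy to capture in code. The hypothetical `parse_allowlist` helper below (not part of the real CLI) shows exactly the filtering described:

```rust
// Sketch of the textual allowlist rules: one SS58 address per line,
// surrounding whitespace stripped; empty lines and lines whose first
// non-whitespace character is '#' or ';' are treated as comments.
fn parse_allowlist(input: &str) -> Vec<String> {
    input
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#') && !line.starts_with(';'))
        .map(str::to_owned)
        .collect()
}

fn main() {
    let file = "
        # validators we trust
        5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY

        ; a comment using the other comment character
        5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty
    ";
    // Two addresses survive; comments and blank lines are dropped.
    assert_eq!(parse_allowlist(file).len(), 2);
}
```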

ledgeracio-allowlist sign is invoked as follows:

ledgeracio-allowlist --network <network> sign --file <file> --nonce <nonce> --output <output> --secret <secret>

<file> is the allowlist file. <nonce> is the nonce, which is incorporated into the signed allowlist file named <output>. Ledgeracio apps keep track of the nonce of the most recent allowlist uploaded, and reject new uploads unless the new allowlist has a nonce higher than the old one. Nonces do not need to be contiguous, so skipping a nonce is okay. Signed allowlists are stored in a binary format.

Device provisioning

ledgeracio-allowlist is also used for device provisioning. To set the allowlist signing key, use ledgeracio-allowlist set-key. This command will only succeed once: if a key has already been uploaded, it will fail. The only way to change the allowlist signing key is to reinstall the Ledgeracio app, which does not result in any funds being lost.

ledgeracio-allowlist upload is used to upload an allowlist. The uploaded allowlist must have a nonce that is greater than the nonce of the previous allowlist. If there was no previous allowlist, any nonce is allowed.

To verify the signature of a binary allowlist file, use ledgeracio-allowlist inspect. This also displays the allowlist on stdout.

Ledgeracio Use

ledgeracio is used for staking operations. Before an account on a Ledger device can be used for staking, it must be chosen as a controller account. You can obtain the address by running ledgeracio <validator|nominator> address. The address can be directly pasted into a GUI tool, such as Polkadot{.js}.

ledgeracio nominator nominate is used to nominate an approved validator, and ledgeracio validator announce is used to announce intention to validate. ledgeracio [nominator|validator] set-payee is used to set the payment target. ledgeracio [nominator|validator] chill is used to stop staking, while ledgeracio [nominator|validator] show and ledgeracio [nominator|validator] show-address are used to display staking status. The first takes an index, while the second takes an address. show-address does not require a Ledger device. ledgeracio validator replace-key is used to set a validator’s session key.

Subcommand Reference

Allowlist handling: ledgeracio-allowlist

The Ledgeracio app enforces a list of allowed stash accounts. This is managed using the ledgeracio-allowlist command.

Some subcommands involve the generation or use of secret keys, which are stored on disk without encryption. These subcommands MUST NOT be used on untrusted machines. Ideally, they should be run on a machine that is reserved for provisioning of Ledgeracio apps, and which has no access to the Internet.

Key generation: ledgeracio-allowlist gen-key

This command takes one argument: the basename (filename without extension) of the keys to generate. The public key will be given the extension .pub and the secret key the extension .sec. The files will be generated with 0400 permissions, which means that they can only be read by the current user and the system administrator, and they cannot be written to except by the administrator. This is to prevent accidental overwrites.

The public key is not sensitive, and is required by anyone who wishes to verify signed allowlists and operate on the allowed accounts. It will be uploaded to the Ledger device by ledgeracio-allowlist set-key. The secret key allows generating signatures, and therefore must be kept secret. It should never leave the (preferably air gapped) machine it is generated on.

Uploading an allowlist signing key to a device: ledgeracio-allowlist set-key

This command takes one argument, the name of the public key file (including extension). The key will be parsed and uploaded to the Ledgeracio app running on the attached Ledger device. If it is not able to do so, Ledgeracio will print an error message and exit with a non-zero status.

If a key has already been uploaded, uploading a new key will fail. The only workaround is to reinstall the Ledgeracio app. This does not forfeit any funds stored on the device. We strongly recommend using separate Ledger devices for Ledgeracio and for cold storage.

The user will be required to confirm the upload via the Ledger UI. This allows the user to check that the correct key has been uploaded, instead of a key chosen by an attacker who has compromised the user’s machine.

Retrieving the uploaded key: ledgeracio-allowlist get-key

This command takes no arguments. The public key that has been uploaded will be retrieved and printed to stdout. If no public key has been uploaded, or if the app is not the Ledgeracio app, an error will be returned.

Signing an allowlist: ledgeracio-allowlist sign

This command takes the following arguments. All of them are mandatory.

  • --file <file>: the textual allowlist file to sign. See “Allowlist signing” above for its format.
  • --nonce <nonce>: The nonce to sign the file with. The nonce must be greater than the previous nonce, or the Ledgeracio app will reject the allowlist.
  • --output <output>: The name of the output file to write.
  • --secret <secret>: The name of the secret key file.

Inspecting a signed allowlist: ledgeracio-allowlist inspect

This command takes two arguments. Both of them are mandatory.

  • --file <file>: The name of the signed allowlist to inspect.
  • --public <public>: The name of the public key file that signed the allowlist. This command will fail if the signature cannot be verified.

Uploading an allowlist: ledgeracio-allowlist upload

This command takes one argument: the filename of the signed binary allowlist to upload. The command will fail if any of the following occurs:

  • There is no Ledger device connected.
  • The attached device is not running the Ledgeracio app.
  • The Ledgeracio app refuses the operation.

The Ledgeracio app will refuse the operation if:

  • No signing key has been uploaded.
  • The allowlist has not been signed by the public key stored in the app.
  • The nonce is not greater than that of the previously uploaded allowlist. If no allowlist has been previously uploaded, any nonce is allowed.
  • The user refuses the operation.
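The refusal rules above amount to a small acceptance check on the app side. A self-contained sketch (names are illustrative, and signature verification is reduced to a key comparison; the real app verifies a cryptographic signature):

```rust
// Illustrative model of the Ledgeracio app's upload checks.
struct AppState {
    signing_key: Option<[u8; 32]>, // uploaded via set-key, if any
    last_nonce: Option<u64>,       // nonce of the last accepted allowlist
}

struct Upload {
    signed_by: [u8; 32],
    nonce: u64,
}

fn accept(state: &AppState, upload: &Upload) -> Result<(), &'static str> {
    // Refuse if no signing key has been uploaded.
    let key = state.signing_key.as_ref().ok_or("no signing key set")?;
    // Refuse if not signed by the stored public key (signature check stubbed).
    if key != &upload.signed_by {
        return Err("bad signature");
    }
    // Refuse unless the nonce is strictly greater than the previous one;
    // if no allowlist was previously uploaded, any nonce is allowed.
    if let Some(last) = state.last_nonce {
        if upload.nonce <= last {
            return Err("nonce not increasing");
        }
    }
    Ok(())
}

fn main() {
    let state = AppState { signing_key: Some([1; 32]), last_nonce: Some(5) };
    assert_eq!(accept(&state, &Upload { signed_by: [1; 32], nonce: 6 }), Ok(()));
    assert_eq!(
        accept(&state, &Upload { signed_by: [1; 32], nonce: 5 }),
        Err("nonce not increasing")
    );
    assert_eq!(
        accept(&state, &Upload { signed_by: [2; 32], nonce: 7 }),
        Err("bad signature")
    );
}
```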

Metadata inspection: ledgeracio metadata

This command takes no arguments. It pretty-prints the chain metadata to stdout. It is primarily intended for debugging. Requires a network connection.

Properties inspection: ledgeracio properties

This command takes no arguments. It pretty-prints the chain properties to stdout. It is primarily intended for debugging. Requires a network connection.

Nominator operations: ledgeracio nominator

This command performs operations using nominator keys, that is, keys on a nominator derivation path. Requires a network connection. The following subcommands are available:

Displaying the address at an index: ledgeracio nominator address

This command takes an index as a parameter. The address on the device corresponding to that index is displayed on stdout.

Showing a nominator controller: ledgeracio nominator show

This command takes an index as parameter, and displays information about the corresponding nominator controller account.

Showing a nominator controller address: ledgeracio nominator show-address

This command takes an SS58-formatted address as parameter, and displays information about the corresponding nominator controller account. It does not require a Ledger device.

Nominating a new validator set: ledgeracio nominator nominate

This command takes an index followed by a list of SS58-formatted addresses. It uses the account at the provided index to nominate the provided validator stash accounts.

The user must confirm this action on the Ledger device. For security reasons, users MUST confirm that the addresses displayed on the device are the intended ones. A compromised host machine can send a set of accounts that is not the ones the user intended. If any of the addresses sent to the device are not on the allowlist, the transaction will not be signed.

Stopping nomination: ledgeracio nominator chill

This command stops the account at the provided index from nominating.

The user must confirm this action on the Ledger device.

Setting a payment target: ledgeracio nominator set-payee

This command takes an index as argument, and sets the payment target. The target must be one of Stash, Staked, or Controller (case-insensitive).
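The case-insensitive target parsing described above can be sketched as follows (the `Payee` enum and parser are illustrative stand-ins, not the CLI's actual types):

```rust
// Illustrative parser for the three payment targets:
// Stash, Staked, and Controller, matched case-insensitively.
#[derive(Debug, PartialEq)]
enum Payee {
    Stash,
    Staked,
    Controller,
}

impl std::str::FromStr for Payee {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "stash" => Ok(Payee::Stash),
            "staked" => Ok(Payee::Staked),
            "controller" => Ok(Payee::Controller),
            other => Err(format!("unknown payee: {}", other)),
        }
    }
}

fn main() {
    assert_eq!("STASH".parse::<Payee>(), Ok(Payee::Stash));
    assert_eq!("Staked".parse::<Payee>(), Ok(Payee::Staked));
    assert!("burn".parse::<Payee>().is_err());
}
```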

Validator operations: ledgeracio validator

This command handles validator operations. It requires a network connection, and has the following subcommands:

Displaying a validator address: ledgeracio validator address <index>

This command displays the address of the validator controller account at the given index.

Announcing an intention to validate: ledgeracio validator announce <index> [commission]

This command announces that the controller account at <index> intends to validate. An optional commission (as a decimal between 0 and 1 inclusive) may also be provided. If none is supplied, it defaults to 1, or 100%.
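On Substrate chains, validator commission is stored on chain as parts per billion (Perbill). The standalone helper below is an illustrative sketch of mapping the CLI's decimal argument, with its default of 1 (100%), onto that representation:

```rust
// Illustrative conversion of a decimal commission in [0, 1]
// to the on-chain parts-per-billion representation.
fn commission_parts_per_billion(commission: Option<f64>) -> Result<u32, &'static str> {
    let c = commission.unwrap_or(1.0); // default: 1, i.e. 100%
    if !(0.0..=1.0).contains(&c) {
        return Err("commission must be between 0 and 1 inclusive");
    }
    Ok((c * 1_000_000_000.0).round() as u32)
}

fn main() {
    assert_eq!(commission_parts_per_billion(None), Ok(1_000_000_000));
    assert_eq!(commission_parts_per_billion(Some(0.25)), Ok(250_000_000));
    assert!(commission_parts_per_billion(Some(1.5)).is_err());
}
```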

Cease validation: ledgeracio validator chill

This command stops validation.

The user must confirm this action on the Ledger device.

Setting the payment target: ledgeracio validator set-payee

This command is the validator version of ledgeracio nominator set-payee. See its documentation for details.

Displaying information on a given validator: ledgeracio validator show

This command is the validator version of ledgeracio nominator show. See its documentation for details.

Displaying information on a given validator address: ledgeracio validator show-address

This command is the validator version of ledgeracio nominator show-address. See its documentation for details.

Rotating a session key: ledgeracio validator replace-key <index> <keys>

This command sets the session keys of the validator controlled by the account at <index>. The keys must be in hexadecimal, as returned by the key rotation RPC call.

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate 


MultiSigil: Substrate Multisig Address Calculator for Your CLI

It is basically what it says on the tin. Since Substrate multisig addresses are deterministic, MultiSigil doesn't need to do any network connections — and can be used even before the chain has been started.


$ multi-sigil --help

multi-sigil 0.1.0
Parity Technologies <>
CLI for generating Substrate multisig addresses

    multi-sigil [OPTIONS] <THRESHOLD> <ADDRESSES>...

    <THRESHOLD>       The number of signatures needed to perform the operation
    <ADDRESSES>...    The addresses to use

    -h, --help       Prints help information
    -V, --version    Prints version information

        --network <NETWORK>    Network to calculate multisig for; defaults to Kusama [default: kusama]  [possible
                               values: kusama, polkadot]

Supported networks

Currently only Kusama and Polkadot are supported.

It should be fairly trivial to add support for other networks from the SS58 registry's list of supported networks; PRs are welcome!
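The determinism noted above comes from how Substrate derives multisig accounts: the signatories are sorted and hashed together with the threshold (the multisig pallet uses blake2_256 over SCALE-encoded data with the b"modlpy/utilisuba" prefix). The sketch below substitutes the standard library's hasher for blake2_256 purely to stay dependency-free; it demonstrates the property (same set and threshold, same result, regardless of input order), not the real address algorithm:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for the real derivation: sort the signatories,
// then hash (prefix, sorted signatories, threshold). The real pallet
// uses blake2_256 over SCALE-encoded data, not DefaultHasher.
fn multisig_id(mut signatories: Vec<&str>, threshold: u16) -> u64 {
    signatories.sort();
    let mut h = DefaultHasher::new();
    b"modlpy/utilisuba".hash(&mut h);
    signatories.hash(&mut h);
    threshold.hash(&mut h);
    h.finish()
}

fn main() {
    // Placeholder names instead of real SS58 addresses.
    let a = multisig_id(vec!["alice-stash", "bob-stash", "charlie-stash"], 2);
    let b = multisig_id(vec!["charlie-stash", "alice-stash", "bob-stash"], 2);
    assert_eq!(a, b); // the order of the addresses does not matter
    let c = multisig_id(vec!["alice-stash", "bob-stash", "charlie-stash"], 3);
    assert_ne!(a, c); // but the threshold does
}
```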

Download Details:
Author: paritytech
Source Code:
License: Apache-2.0 License

#blockchain  #polkadot  #smartcontract  #substrate 


Substrate Air-gapped: Transaction Construction, Decoding, and Signing Tools

[WIP] Substrate Airgapped

Tools to facilitate an air-gapped construction, decoding, and signing flow for transactions of FRAME-based chains.


  • substrate-airgapped-cli: CLI that combines all functionality of the available substrate-airgapped libraries.
  • substrate-airgapped: Where core components & functionality is being built out.
  • substrate-metadata: A wrapper around runtime metadata that can be used to programmatically get the call index of a transaction.



Please file an issue for any questions, feature requests, or additional examples.

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate 


A Project to Enable Writing Substrate Integration Tests Easily

Substrate Test Runner

Allows you to test:

  • Migrations
  • Runtime Upgrades
  • Pallets and general runtime functionality.

This works by running a full node with a ManualSeal-BABE™ hybrid consensus for block authoring.

The test runner provides two APIs of note:

  • seal_blocks(count: u32)

This tells the manual-seal authoring task running on the node to author count blocks, including any transactions in the transaction pool in those blocks.

  • submit_extrinsic<T: frame_system::Config>(call: impl Into<T::Call>, from: T::AccountId)

Given a Call and an AccountId, this creates an UncheckedExtrinsic with an empty signature and sends it to the node to be included in a future block.


The running node performs no signature verification, which allows us to author extrinsics for any account on chain.

How do I Use this?

/// tons of ignored imports
use substrate_test_runner::{TestRequirements, Node};

struct Requirements;

impl TestRequirements for Requirements {
    /// Provide a Block type with an OpaqueExtrinsic
    type Block = polkadot_core_primitives::Block;
    /// Provide an Executor type for the runtime
    type Executor = polkadot_service::PolkadotExecutor;
    /// Provide the runtime itself
    type Runtime = polkadot_runtime::Runtime;
    /// A touch of runtime api
    type RuntimeApi = polkadot_runtime::RuntimeApi;
    /// A pinch of SelectChain implementation
    type SelectChain = sc_consensus::LongestChain<TFullBackend<Self::Block>, Self::Block>;
    /// A slice of concrete BlockImport type
    type BlockImport = BlockImport<
        TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>,
        // ... remaining type parameters elided in this excerpt
    >;
    /// and a dash of SignedExtensions
    type SignedExtension = SignedExtra;

    /// Load the chain spec for your runtime here.
    fn load_spec() -> Result<Box<dyn sc_service::ChainSpec>, String> {
        let wasm_binary = polkadot_runtime::WASM_BINARY.ok_or("Polkadot development wasm not available")?;
        // Build a development chain spec whose genesis closure is:
        //     move || polkadot_development_config_genesis(wasm_binary)
        // (spec-construction boilerplate elided in this excerpt)
    }

    /// Optionally provide the base path if you want to fork an existing chain.
    // fn base_path() -> Option<&'static str> {
    //     Some("/home/seun/.local/share/polkadot")
    // }

    /// Create your signed extras here.
    fn signed_extras(
        from: <Self::Runtime as frame_system::Config>::AccountId,
    ) -> Self::SignedExtension {
        let nonce = frame_system::Module::<Self::Runtime>::account_nonce(from);
        // (construction of the SignedExtra tuple from `nonce` elided in this excerpt)
    }

    /// The function signature tells you all you need to know. ;)
    fn create_client_parts(config: &Configuration) -> Result<
        (
            Arc<TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>>,
            // ... backend, keystore, task manager, inherent providers,
            //     a boxed dyn ConsensusDataProvider whose Transaction is
            //     TransactionFor<TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>, ...>,
            //     select chain, and block import, elided in this excerpt
        ),
        sc_service::Error,
    > {
        let (client, backend, keystore, task_manager) =
            new_full_parts::<Self::Block, Self::RuntimeApi, Self::Executor>(config)?;
        let client = Arc::new(client);

        let inherent_providers = InherentDataProviders::new();
        let select_chain = sc_consensus::LongestChain::new(backend.clone());

        let (grandpa_block_import, ..) =
            sc_finality_grandpa::block_import(client.clone(), &(client.clone() as Arc<_>), select_chain.clone())?;

        let (block_import, babe_link) = sc_consensus_babe::block_import(
            // ... BABE configuration elided in this excerpt
        )?;

        let consensus_data_provider = BabeConsensusDataProvider::new(
            // ... client, epoch changes, and an authority set such as:
            vec![(AuthorityId::from(Alice.public()), 1000)],
        )
        .expect("failed to create ConsensusDataProvider");

        // ... assemble and return the parts (elided in this excerpt)
    }
}

/// And now for the most basic test
#[test]
fn simple_balances_test() {
    // given
    let mut node = Node::<Requirements>::new();

    type Balances = pallet_balances::Module<Runtime>;

    let (alice, bob) = (Sr25519Keyring::Alice.pair(), Sr25519Keyring::Bob.pair());
    let (alice_account_id, bob_account_id) = (
        // ... derive the AccountIds from the key pairs (elided in this excerpt)
    );

    // the function with_state allows us to read state, pretty cool right? :D
    let old_balance = node.with_state(|| Balances::free_balance(alice_account_id.clone()));

    // 70 dots
    let amount = 70_000_000_000_000;

    // Send extrinsic in action.
    node.submit_extrinsic(BalancesCall::transfer(bob_account_id.clone(), amount), alice_account_id.clone());

    // Produce blocks in action, powered by manual-seal™.
    node.seal_blocks(1);

    // we can check the new state :D
    let new_balance = node.with_state(|| Balances::free_balance(alice_account_id));

    // we can now make assertions on how state has changed:
    // Alice's free balance went down by the amount transferred.
    assert_eq!(old_balance, new_balance + amount);
}

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate #rust 


Dot Jaeger: Service for Visualizing & Collecting Traces From Parachain

Dot Jaeger

Service for visualizing and collecting traces from parachains.



  • Make sure you can access your JaegerUI Endpoint collecting traces from Parachain Validators.
  • Edit the Prometheus volumes path in docker-compose.yml to point at the prometheus.yml in the dot-jaeger repo
  • Start the external services (Prometheus + Grafana) with
docker-compose up

This starts Prometheus on port 9090 and Grafana on port 3000. The Grafana dashboard can be accessed at localhost:3000; the default login is user: admin, password: admin.

  • Start dot-jaeger in daemon mode with chosen arguments. The help command may be used for quick docs on the core app or any of the subcommands.
  • Log in to the local Grafana instance, and add dot-jaeger as a Prometheus source.
    • URL: localhost:9090
    • Access: Browser
  • Import the Dashboard from the Repository named Parachain Rococo Candidates-{{bunch of numbers}}
    • dashboard can be manipulated from grafana

Data should start showing up. The Grafana update interval can be modified in the top right.

Here's a Quick ASCIICast of the dot-jaeger and docker setup process

Recommended number of traces at once: 5-20. Asking for too many traces from the JaegerUI both requests large amounts of data (potentially slowing down any other services) and makes dot-jaeger slower, as it potentially has to sort the parent-child relationship of each span; this behaviour can be configured with the --recurse-children and --recurse-parents CLI options.


Usage: dot-jaeger [--service <service>] [--url <url>] [--limit <limit>] [--pretty-print] [--lookback <lookback>] <command> [<args>]

Jaeger Trace CLI App

  --service         name a specific node that reports to the Jaeger Agent from
                    which to query traces.
  --url             URL where Jaeger Service runs.
  --limit           maximum number of traces to return.
  --pretty-print    pretty print result
  --lookback        specify how far back in time to look for traces. In format:
                    `1h`, `1d`
  --help            display usage information

  traces            Use when observing many traces
  trace             Use when observing only one trace
  services          List of services reporting to the Jaeger Agent
  daemon            Daemonize Jaeger Trace collection to run at some interval


Usage: dot-jaeger daemon [--frequency <frequency>] [--port <port>] [--recurse-parents] [--recurse-children] [--include-unknown]

Daemonize Jaeger Trace collection to run at some interval

  --frequency       frequency to update jaeger metrics in milliseconds.
  --port            port to expose prometheus metrics at. Default 9186
  --recurse-parents fall back to recursing through parent traces if the
                    current span has one of a candidate hash or stage, but
                    not the other.
  --recurse-children
                    fall back to recursing through child traces if the
                    current span has one of a candidate hash or stage, but
                    not the other. Recursing children is slower than
                    recursing parents.
  --include-unknown include candidates that have a stage but no candidate
                    hash in the prometheus data.
  --help            display usage information


./dot-jaeger --url "http://JaegerUI:16686" --limit 10 --service polkadot-rococo-3-validator-5 daemon --recurse-children


Adding a new Stage

  • Modify the Stage enum and its associated Into/From implementations to accommodate the new stage
  • Modify the Prometheus gauges to add the new stage to the histograms

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate 


Testbed for Code Size Minimization Strategies in Rust

The Rust Programming Language

This is the main source code repository for Rust. It contains the compiler, standard library, and documentation.

Note: this README is for users rather than contributors. If you wish to contribute to the compiler, you should read the Getting Started section of the rustc-dev-guide instead.

Quick Start

Read "Installation" from The Book.

Installing from Source

The Rust build system uses a Python script called x.py to build the compiler, which manages the bootstrapping process. It lives in the root of the project.

The x.py command can be run directly on most systems in the following format:

./x.py <subcommand> [flags]

This is how the documentation and examples assume you are running x.py.

Systems such as Ubuntu 20.04 LTS do not create the necessary python command by default when Python is installed, which allows x.py to be run directly. In that case you can either create a symlink for python (Ubuntu provides the python-is-python3 package for this), or run x.py using Python itself:

# Python 3
python3 x.py <subcommand> [flags]

# Python 2.7
python2.7 x.py <subcommand> [flags]

More information about x.py can be found by running it with the --help flag or by reading the rustc dev guide.

Building on a Unix-like system

  1. Make sure you have installed the dependencies:
  • g++ 5.1 or later or clang++ 3.5 or later
  • python 3 or 2.7
  • GNU make 3.81 or later
  • cmake 3.13.4 or later
  • ninja
  • curl
  • git
  • ssl which comes in libssl-dev or openssl-devel
  • pkg-config if you are compiling on Linux and targeting Linux

2.   Clone the source with git:

git clone
cd rust

3.   Configure the build settings:

The Rust build system uses a file named config.toml in the root of the source tree to determine various configuration settings for the build. Copy the default config.toml.example to config.toml to get started.

cp config.toml.example config.toml

If you plan to use ./x.py install to create an installation, it is recommended that you set the prefix value in the [install] section to a directory.

Create the install directory first if you are not installing to the default location.

4.   Build and install:

./x.py build && ./x.py install

When complete, ./x.py install will place several programs into $PREFIX/bin: rustc, the Rust compiler, and rustdoc, the API-documentation tool. This install does not include Cargo, Rust's package manager. To build and install Cargo, you may run ./x.py install cargo or set the build.extended key in config.toml to true to build and install all tools.
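For example, the two settings mentioned above go into config.toml like this (the prefix path is illustrative):

```toml
[install]
# Install into a user-writable prefix rather than the default location.
prefix = "/home/user/rust-install"

[build]
# Build and install the extended toolset (Cargo and other tools) as well.
extended = true
```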

Building on Windows

There are two prominent ABIs in use on Windows: the native (MSVC) ABI used by Visual Studio, and the GNU ABI used by the GCC toolchain. Which version of Rust you need depends largely on what C/C++ libraries you want to interoperate with: for interop with software produced by Visual Studio use the MSVC build of Rust; for interop with GNU software built using the MinGW/MSYS2 toolchain use the GNU build.


MSYS2 can be used to easily build Rust on Windows:

Grab the latest MSYS2 installer and go through the installer.

Run mingw32_shell.bat or mingw64_shell.bat from wherever you installed MSYS2 (i.e. C:\msys64), depending on whether you want 32-bit or 64-bit Rust. (As of the latest version of MSYS2 you have to run msys2_shell.cmd -mingw32 or msys2_shell.cmd -mingw64 from the command line instead)

From this terminal, install the required tools:

# Update package mirrors (may be needed if you have a fresh install of MSYS2)
pacman -Sy pacman-mirrors

# Install build tools needed for Rust. If you're building a 32-bit compiler,
# then replace "x86_64" below with "i686". If you've already got git, python,
# or CMake installed and in PATH you can remove them from this list. Note
# that it is important that you do **not** use the 'python2', 'cmake' and 'ninja'
# packages from the 'msys2' subsystem. The build has historically been known
# to fail with these packages.
pacman -S git \
            make \
            diffutils \
            tar \
            mingw-w64-x86_64-python \
            mingw-w64-x86_64-cmake \
            mingw-w64-x86_64-gcc \
            mingw-w64-x86_64-ninja
Navigate to Rust's source code (or clone it), then build it:

./x.py build && ./x.py install


MSVC builds of Rust additionally require an installation of Visual Studio 2017 (or later) so rustc can use its linker. The simplest way is to get Visual Studio and check the “C++ build tools” and “Windows 10 SDK” workloads.

(If you're installing cmake yourself, be careful that “C++ CMake tools for Windows” doesn't get included under “Individual components”.)

With these dependencies installed, you can build the compiler in a cmd.exe shell with:

python x.py build

Currently, building Rust only works with some known versions of Visual Studio. If you have a more recent version installed and the build system doesn't understand it, you may need to force rustbuild to use an older version. This can be done by manually calling the appropriate vcvars file before running the bootstrap.

CALL "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"
python x.py build

Specifying an ABI

Each specific ABI can also be used from either environment (for example, using the GNU ABI in PowerShell) by using an explicit build triple. The available Windows build triples are:

  • GNU ABI (using GCC)
    • i686-pc-windows-gnu
    • x86_64-pc-windows-gnu
  • The MSVC ABI
    • i686-pc-windows-msvc
    • x86_64-pc-windows-msvc

The build triple can be specified by either specifying --build=<triple> when invoking commands, or by copying the config.toml file (as described in Installing From Source), and modifying the build option under the [build] section.
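
For instance, pinning the GNU ABI in config.toml could look like this sketch (the triple shown is one of the values listed above):

```toml
[build]
# Build a 64-bit GNU-ABI compiler instead of autodetecting the host triple
build = "x86_64-pc-windows-gnu"
```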

Configure and Make

While it's not the recommended build system, this project also provides a configure script and makefile (the latter of which just invokes x.py):

./configure
make && sudo make install

When using the configure script, the generated config.mk file may override the config.toml file. To go back to the config.toml file, delete the generated config.mk file.

Building Documentation

If you’d like to build the documentation, it’s almost the same:

./x.py doc

The generated documentation will appear under doc in the build directory for the ABI used. I.e., if the ABI was x86_64-pc-windows-msvc, the directory will be build\x86_64-pc-windows-msvc\doc.


Since the Rust compiler is written in Rust, it must be built by a precompiled "snapshot" version of itself (made in an earlier stage of development). As such, source builds require a connection to the Internet, to fetch snapshots, and an OS that can execute the available snapshot binaries.

Snapshot binaries are currently built and tested on several platforms:

| Platform / Architecture                    | x86 | x86_64 |
|--------------------------------------------|-----|--------|
| Windows (7, 8, 10, ...)                    | ✓   | ✓      |
| Linux (kernel 2.6.32, glibc 2.11 or later) | ✓   | ✓      |
| macOS (10.7 Lion or later)                 | (*) | ✓      |

(*): Apple dropped support for running 32-bit binaries starting from macOS 10.15 and iOS 11. Due to this decision from Apple, the targets are no longer useful to our users. Please read our blog post for more info.

You may find that other platforms work, but these are our officially supported build environments that are most likely to work.

Getting Help

The Rust community congregates in a few places:


If you are interested in contributing to the Rust project, please take a look at the Getting Started guide in the rustc-dev-guide.


The Rust Foundation owns and protects the Rust and Cargo trademarks and logos (the “Rust Trademarks”).

If you want to use these names or brands, please read the media guide.

Third-party logos may be subject to third-party copyrights and trademarks. See Licenses for details.

Download Details:
Author: paritytech
Source Code:
License: View license

#blockchain  #polkadot  #smartcontract  #substrate 

Testbed for Code Size Minimization Strategies in Rust

Contract Sizes: Comparisons EVM Vs. WASM Contract Code Sizes

Contract Code Size Comparison

The goal of this repository is to compare the sizes of compiled solidity contracts when compiled to EVM (with solc) versus WASM (with solang).

After some experimentation it turned out that a huge contributor to WASM code size is the smaller word size of WASM. Solidity treats 256-bit variables as value types and passes them on the stack. Solang generates four 32-bit stack accesses to emulate this. In order to improve comparability we do the following:

  • Patch all contracts used for comparisons to not use wide integers (use uint32 everywhere).
  • Pass --value-size 4 --address-size 4 to solang so that 32-bit is used for the builtin types (address, msg.value).

How to use this repository

Put solang in your PATH and run the script located in the root of this repository. The solc compiler will be downloaded automatically.

Test corpus

The current plan is to use the following sources as a test corpus:

Adding a new contract to the corpus from either of those sources is a time-consuming process because solang isn't a drop-in replacement. It tries hard to be one, but there are some things that won't work on solang: First, almost all contracts use EVM inline assembly, which obviously won't work on a compiler targeting another architecture. Second, differences in builtin types (address, balance) will prevent the compilation of most contracts.

Therefore we need to apply substantial changes to every contract before it can be added to the corpus in order to make it compile and establish comparability.


The following results show the compressed sizes (zstd) of the evm and wasm targets together with their compression ratio. Wasm relative describes the relative size of the compressed wasm output when compared to the evm output.

The concatenated row is what we get when we concatenate the uncompressed results of all contracts.
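
As a sketch of how the reported numbers relate to each other (the byte counts below are made-up examples, not measurements from the corpus):

```shell
# Made-up example sizes in bytes
evm_uncompressed=4000;  evm_compressed=1000
wasm_uncompressed=8000; wasm_compressed=1400

# Compression ratio: compressed size as a percentage of the uncompressed size
evm_ratio=$(( evm_compressed * 100 / evm_uncompressed ))
wasm_ratio=$(( wasm_compressed * 100 / wasm_uncompressed ))

# "Wasm relative": compressed wasm output compared to compressed evm output
wasm_relative=$(( wasm_compressed * 100 / evm_compressed ))

echo "EVM ratio: ${evm_ratio}%, WASM ratio: ${wasm_ratio}%, wasm relative: ${wasm_relative}%"
```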

Used solang version is commit c2a8bd9881e64e41565cdfe088ffe9464c74dae4.

| Contract | EVM Compressed | WASM Compressed | EVM Ratio | WASM Ratio | Wasm Relative |
|----------|----------------|-----------------|-----------|------------|---------------|

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate 

Contract Sizes: Comparisons EVM Vs. WASM Contract Code Sizes

Parity Tokio Ipc: Crate Abstracts interprocess Transport for UNIX


This crate abstracts interprocess transport for UNIX/Windows.

It utilizes unix sockets on UNIX (via tokio::net::UnixStream) and named pipes on windows (via tokio::net::windows::named_pipe module).

Endpoint is a transport-agnostic interface for incoming connections:

use parity_tokio_ipc::Endpoint;
use futures::stream::StreamExt;

// For testing purposes only - instead, use a path to an actual socket or a pipe
let addr = parity_tokio_ipc::dummy_endpoint();

let server = async move {
    Endpoint::new(addr)
        .incoming()
        .expect("Couldn't set up server")
        .for_each(|conn| async {
            match conn {
                Ok(stream) => println!("Got connection!"),
                Err(e) => eprintln!("Error when receiving connection: {:?}", e),
            }
        })
        .await;
};

let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap();
rt.block_on(server);

Download Details:
Author: paritytech
Source Code:
License: View license

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Parity Tokio Ipc: Crate Abstracts interprocess Transport for UNIX

UI for Substrate Bridges in Polkadot

The goal of the UI is to provide users with a convenient way of interacting with the Bridge - querying its state and sending transactions.

Configuring custom Substrate providers / chains

The project includes a .env file at root project directory that contains all the variables for running the bridge UI:


ℹ️ In case you need to overwrite any of the variables defined, please do so by creating a new .env.local file.

In case of questions about .env management please refer to this link: create-react-app env files

Custom Hashers for building connections

If any of the chains (or both) need to use a custom hasher function this one can be built and exported from the file: src/configs/chainsSetup/customHashers.ts. Then it is just a matter of referring the function name using variable REACT_APP_CUSTOM_HASHER_CHAIN_<Chain number> from .env file.
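
For example, assuming a hasher function exported as customHasherOne for the first chain (both the function name and the value below are hypothetical):

```
# .env.local (hypothetical values)
REACT_APP_CUSTOM_HASHER_CHAIN_1=customHasherOne
```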

Running the bridge

Please refer to this section of the Bridges project to run the bridge locally: running-the-bridge



yarn

This will install all the dependencies for the project.

yarn start

Runs the app in the development mode. Open http://localhost:3001 to view it in the browser.

yarn test

Runs the test suite.

yarn lint

Runs the linter & formatter.

Execute E2E test

Puppeteer is used for running E2E tests for bridges (only Chrome for now).


  • Have Chrome installed on your computer (this test requires it and will not download it when running).
  • Ensure that in your .env.local file the REACT_APP_IS_DEVELOPMENT and REACT_APP_KEYRING_DEV_LOAD_ACCOUNTS variables are set to true.
  • Make sure all the steps mentioned above have run in a separate terminal (yarn - yarn start) and the bridges application is running.
  • In a different terminal window, run the following command:

yarn run test:e2e-alone

customTypes config files process

There is an automated process that downloads all the required types.json files available in the deployments section of the parity-bridges-common repository. This hook is executed before the local development server starts, and during the lint/test/build process during deployment. In case there is an unexpected issue, you can run this process in isolation with:

yarn prestart

Learn More

For additional information about the Bridges Project please refer to parity-bridges-common repository.


To build the image, run:

docker build -t parity-bridges-ui:dev .

Now that the image is built, the container can be started with the following command, which will serve our app on port 8080.

docker run --rm -it -p 8080:80 parity-bridges-ui:dev

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate #rust 

UI for Substrate Bridges in Polkadot

Substrate Runtime and Contract Interactions for Polkadot

A Substrate node demonstrating two-way interactions between the runtime and Ink! smart contracts.


This Substrate project demonstrates through example how to interact between Substrate runtimes and ink! smart contracts through extrinsic calls and ink! chain extensions.


Sharing Substrate runtime functionality with ink! smart contracts is a powerful feature. Chains with unique runtime functionality can create rich application developer ecosystems by exposing choice pieces of their runtime. The inverse interaction of runtime to ink! smart contract calls may be similarly valuable. Runtime logic can query or set important context information at the smart contracts level.

Both of the types of interactions described above are frequently asked about in the context of support, and a recent example demonstrating how to perform these interactions had not been developed.


If you have not already, it is recommended to go through the ink! smart contracts tutorial or otherwise have written and compiled smart contracts according to the ink! docs. It is also recommended to have some experience with Substrate runtime development.

Ensure you have

  1. Installed Substrate according to the instructions
  2. Run:
rustup component add rust-src --toolchain nightly
rustup target add wasm32-unknown-unknown --toolchain nightly

3.   Installed cargo-contract:

# For Ubuntu or Debian users
sudo apt install binaryen
# For MacOS users
brew install binaryen

cargo install cargo-contract --vers ^0.15 --force --locked

Contract-to-Runtime Interactions

The project demonstrates contract-to-runtime interactions through the use of Chain extensions. Chain Extensions allow a runtime developer to extend runtime functions to smart contracts. In the case of this example, the functions being extended are a custom pallet extrinsic, and the pallet_balances::transfer extrinsic.

See also the rand-extension chain extension code example, which is one example that this project extended.

Runtime-to-Contract Interactions

Runtime-to-contract interactions are enabled through invocations of the pallet-contract's own bare_call method, invoked from a custom pallet extrinsic. The example extrinsic is called call_smart_contract and is meant to demonstrate calling an existing (uploaded and instantiated) smart contract generically. The caller specifies the account id of the smart contract to be called, the selector of the smart contract function (found in metadata.json in the compiled contract), and one argument to be passed to the smart contract function.



The cargo run command will perform an initial build. Use the following command to build the node without launching it:

cargo build --release

Smart contracts

To build the included smart contract example, first cd into smart-contracts/example-extension, then run:

cargo +nightly contract build


Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev --tmp

Local Contract Deployment

Once the smart contract is compiled, you may use the hosted Canvas UI. Please follow the Deploy Your Contract guide for specific instructions. This contract uses a default constructor, so there is no need to specify values for its constructor.

You may also use the Polkadotjs Apps UI to upload and instantiate the contract.

Example Usage

Ensure you have uploaded and instantiated the example contract.


Call the set_value smart contract function from a generic pallet extrinsic

  1. Browse to extrinsics in the Polkadotjs apps UI.
  2. Supply the necessary arguments to instruct our extrinsic to call the smart contract function. Enter the following values in the Submission tab:
    • dest: AccountId of the desired contract.
    • submit the following extrinsic: templateModule
    • selector: 0x00abcdef (note: this denotes the function to call, and is found in smart-contracts/example-extension/target/ink/metadata.json. See more here on the ink! selector macro)
    • arg: some u32 of your choice
    • gasLimit: 10000000000
  3. Submit Transaction -> Sign and Submit.

This extrinsic passed these arguments to the pallet_contracts::bare_call function, which resulted in our set_value smart contract function being called with the new u32 value. This value can now be verified by calling get_value and checking whether the new value is returned.


Call the insert_number extrinsic from the smart contract

  1. Browse to the Execute page in the hosted Canvas UI
  2. Under chain-extension-example, click Execute.
  3. Under Message to Send, select store_in_runtime.
  4. Enter some u32 to be stored.
  5. Ensure send as transaction is selected.
  6. Click Call

The smart contract function is less generic than the extrinsic used above, and so already knows how to call our custom runtime extrinsic through the chain extension that is set up. You can verify that the contract called the extrinsic by checking the contractEntry storage in the Polkadotjs UI.


To run the tests for the included example pallet, run cargo test in the root.


Build node with benchmarks enabled:

cargo build --release --features runtime-benchmarks

Then, to generate the weights into the pallet template's file:

./target/release/node-template benchmark \
 --chain dev \
 --pallet=pallet_template \
 --extrinsic='*' \
 --repeat=20 \
 --steps=50 \
 --execution wasm \
 --wasm-execution compiled \
 --raw \
 --output pallets/template/src/

Download Details:
Author: paritytech
Source Code:
License: Unlicense License

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Substrate Runtime and Contract Interactions for Polkadot

Decode Substrate with Backwards Compatible Metadata

De[code] Sub[strate]

† This software is experimental, and not intended for production use yet. Use at your own risk.

Encompassing decoder for substrate/polkadot/kusama types.

Gets type definitions from polkadot-js via JSON and decodes them into components that outline types and make decoding byte-strings possible, as long as the module/generic type names are known.

Supports Metadata versions from v8, which means all of Kusama (from CC1). Older networks are not supported (e.g. Alexander).

  • makes decoding generic types from the substrate rpc possible
  • requires parsing JSON with type definitions, and implementing traits TypeDetective and Decoder in order to work for arbitrary chains. However, if the JSON follows the same format as PolkadotJS definitions (look at definitions.json and overrides.json) it would be possible to simply deserialize into Polkadot structs and utilize those. The decoding itself is generic enough to allow it.
  • types must adhere to the conventions set out by polkadot decoding
    • type definitions for Polkadot (Kusama) are taken from Polkadot.js and deserialized into Rust (extras/polkadot)

Currently Supported Metadata Versions (From Kusama CC1):

  •  V8
  •  V9
  •  V10
  •  V11
  •  V12
  •  V13
  •  V14

(Tentative) Release & Maintenance

Note: This release description is in no way complete because of current & active development on legacy desub types & scale-info-based types. It is purely here as a record of things that should be taken into account in the future.

  • Depending on changes in legacy desub code, bump version in Cargo.toml for desub/, desub-current/, desub-legacy/, desub-common/, desub-json-resolver/
  • Note the upgrade-blocks present here and modify the hard-coded upgrade blocks as necessary in the desub file.
  • Take note of PR's that have been merged since the last release.
    • look over CHANGELOG. Make sure to include any PR's that were missed in the Unreleased section.
    • Move changes in Unreleased section to a new section corresponding to the version being released, making sure to keep the Unreleased header.
  • make a PR with these changes
  • once PR is merged, push a tag in the form vX.X.X (E.G v0.1.0)
git tag v0.1.0
git push --tags origin master
  • Once tags are pushed, a github workflow will start that will draft a release. You should be able to find the workflow running under Actions in the github repository.
    • NOTE: If something goes wrong it is OK. Delete the tag from the repo, re-create the tag locally and re-push. The workflow will run whenever a tag with the correct form is pushed. If more changes need to be made to the repo that will require another PR.
  • Once the workflow finishes, make changes to the resulting draft release if necessary, and hit publish.
  • Once published on GitHub, publish each crate that has changed to crates.io. Refer to the Cargo documentation for how to publish to crates.io.

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Decode Substrate with Backwards Compatible Metadata

Cargo Release Automation tooling Based on Substrate

cargo unleash em 🐉

cargo release automation tooling for massive mono-repos. Developed primarily for Parity Substrate.




Use cargo install to install:

cargo install cargo-unleash --version 1.0.0-alpha.13


Try and have it report what it would do on your mono repo with

cargo unleash em-dragons --dry-run

There are more options available on the CLI, just run with --help:

Release the crates of this massiv monorepo

USAGE:
    cargo-unleash [FLAGS] [OPTIONS] <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information
    -v, --verbose    Show verbose cargo output

OPTIONS:
    -l, --log <log>    Specify the log levels [default: warn]
    -m, --manifest-path <manifest-path>
            The path to workspace manifest

            Can either be the folder if the file is named `Cargo.toml` or the path to the specific
            `.toml`-manifest to load as the cargo workspace. [default: ./]

SUBCOMMANDS:
    add-owner      Add owners for a lot of crates
    check          Check whether crates can be packaged
    clean-deps     Check the package(s) for unused dependencies
    de-dev-deps    Deactivate the `[dev-dependencies]`
    em-dragons     Unleash 'em dragons
    help           Prints this message or the help of the given subcommand(s)
    rename         Rename a package
    set            Set a field in all manifests
    to-release     Calculate the packages and the order in which to release
    version        Messing with versioning


The main command is cargo unleash em-dragons, here is its help. All subcommands have extensive --help for you.

$ cargo-unleash em-dragons --help
Unleash 'em dragons

Package all selected crates, check them and attempt to publish them.

USAGE:
    cargo-unleash em-dragons [FLAGS] [OPTIONS]

FLAGS:
        --build
            Actually build the package in check

            By default, this only runs `cargo check` against the package build. Set this flag to
            have it run an actual `build` instead.

        --check-readme
            Generate & verify whether the Readme file has changed.

            When enabled, this will generate a Readme file from the crate's doc comments (using
            cargo-readme), and check whether the existing Readme (if any) matches.

        --dry-run
            dry run

        --empty-is-failure
            Consider no package matching the criteria an error

    -h, --help
            Prints help information

        --ignore-publish
            Ignore whether `publish` is set.

            If nothing else is specified, `publish = true` is assumed for every package. If publish
            is set to false or any registry, it is ignored by default. If you want to include it
            regardless, set this flag.

        --include-dev-deps
            Do not disable dev-dependencies

            By default we disable dev-dependencies before the run.

        --include-pre-deps
            Even if not selected by default, also include depedencies with a pre (cascading)

        --no-check
            dry run

    -V, --version
            Prints version information

OPTIONS:
        --owner <add-owner>
            Ensure we have the owner set as well

    -c, --changed-since <changed-since>
            Automatically detect the packages, which changed compared to the given git commit.

            Compares the current git `head` to the reference given, identifies which files changed
            and attempts to identify the packages and its dependents through that mechanism. You
            can use any `tag`, `branch` or `commit`, but you must be sure it is available (and up
            to date) locally.

    -i, --ignore-pre-version <ignore-pre-version>...
            Ignore version pre-releases

            Skip if the SemVer pre-release field is any of the listed. Mutually exclusive with
            `--package`

    -p, --packages <packages>...
            Only use the specfic set of packages

            Apply only to the packages named as defined. This is mutually exclusive with skip and
            ignore-version-pre.

    -s, --skip <skip>...
            Skip the package names matching ...

            Provide one or many regular expression that, if the package name matches, means we skip
            that package. Mutually exclusive with `--package`

        --token <token>
            the token to use for uploading

            If this is nor the environment variable are set, this falls back to the default value
            provided in the user directory [env: CRATES_TOKEN]

Common Usage Examples

Release all crates not having the -dev-pre version set

cargo-unleash em-dragons --ignore-pre-version dev

Check if a PR can be released (checking only changes in the PR compared to main)

cargo-unleash check --changed-since=main

Release all crates not having test in the name

cargo-unleash em-dragons --skip test

Set the pre-version to -dev

cargo-unleash version set-pre dev

Bump the pre-version, so for e.g. from alpha.1 to alpha.2 or beta.3 to beta.4:

cargo-unleash version bump-pre
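
Conceptually, bumping the pre-version is just incrementing the numeric tail of the SemVer pre-release field. A shell sketch of that string manipulation (for illustration only, not the actual cargo-unleash implementation):

```shell
# Take a version with a pre-release field and bump its numeric suffix
ver="1.0.0-alpha.1"
pre="${ver##*-}"                          # "alpha.1"
num="${pre##*.}"                          # "1"
bumped="${ver%-*}-${pre%.*}.$((num + 1))" # "1.0.0-alpha.2"
echo "$bumped"
```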

In the wild

Are you using the tooling and want to be mentioned here? Create an issue.

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Cargo Release Automation tooling Based on Substrate

Adz Demo for Polkadot Written in Rust



cargo build

Run Local Chain

./target/debug/node-template --dev --tmp

Build Chain Spec

./target/debug/node-template build-spec

Adz Demo

Using Nix

Install nix, and optionally direnv and lorri, for a fully plug-and-play experience in setting up the development environment. To get all the correct dependencies, activate direnv with direnv allow and lorri with lorri shell.

Rust Setup

First, complete the basic Rust setup instructions.


Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev --tmp


The cargo run command will perform an initial build. Use the following command to build the node without launching it:

cargo build --release

NOTE: You must use the release builds for parachains! The optimizations are required, as nodes built in debug mode are not expected to run fast enough to produce blocks.

Relay Chain

NOTE: In the following two sections, we document how to manually start a few relay chain nodes, start a parachain node (collator), and register the parachain with the relay chain.

We also have the polkadot-launch CLI tool that automates the following steps and helps you easily launch relay chains and parachains. However, it is still good to go through the following procedures once to understand the mechanism for running and registering a parachain.

Once the project has been built, the following command can be used to explore all parameters and subcommands:

./target/release/node-template -h


The provided cargo run command will launch a temporary node and its state will be discarded after you terminate the process. After the project has been built, there are other ways to launch the node.

Single-Node Development Chain

This command will start the single-node development chain with persistent state:

./target/release/node-template --dev

Start Relay Chain

We need n + 1 full validator nodes running on a relay chain to accept n parachain / parathread connections. Here we will start two relay chain nodes so we can have one parachain node connecting in later.

From the Polkadot working directory:

./target/release/node-template purge-chain --dev

Start the development chain with detailed logging:

RUST_LOG=debug RUST_BACKTRACE=1 ./target/release/node-template -lruntime=debug --dev

Connect with Polkadot-JS Apps Front-end

To connect to a relay chain, you must first reserve a ParaId for your parathread, which will become a parachain. To do this, you will need a sufficient amount of currency on the network account to reserve the ID.

Multi-Node Local Testnet

If you want to see the multi-node consensus algorithm in action, refer to our Start a Private Network tutorial.

Template Structure

A Substrate project such as this consists of a number of components that are spread across a few directories.


A blockchain node is an application that allows users to participate in a blockchain network. Substrate-based blockchain nodes expose a number of capabilities:

  • Networking: Substrate nodes use the libp2p networking stack to allow the nodes in the network to communicate with one another.
  • Consensus: Blockchains must have a way to come to consensus on the state of the network. Substrate makes it possible to supply custom consensus engines and also ships with several consensus mechanisms that have been built on top of Web3 Foundation research.
  • RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.

There are several files in the node directory - take special note of the following:

  • A chain specification is a source code file that defines a Substrate chain's initial (genesis) state. Chain specifications are useful for development and testing, and critical when architecting the launch of a production chain. Take note of the development_config and testnet_genesis functions, which are used to define the genesis state for the local development chain configuration. These functions identify some well-known accounts and use them to configure the blockchain's initial state.
  • This file defines the node implementation. Take note of the libraries that this file imports and the names of the functions it invokes. In particular, there are references to consensus-related topics, such as the longest chain rule, the Aura block authoring mechanism and the GRANDPA finality gadget.

After the node has been built, refer to the embedded documentation to learn more about the capabilities and configuration parameters that it exposes:

./target/release/node-template --help


In Substrate, the terms "runtime" and "state transition function" are analogous - they refer to the core logic of the blockchain that is responsible for validating blocks and executing the state changes they define. The Substrate project in this repository uses the FRAME framework to construct a blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules called "pallets". At the heart of FRAME is a helpful macro language that makes it easy to create pallets and flexibly compose them to create blockchains that can address a variety of needs.

Review the FRAME runtime implementation included in this template and note the following:

  • This file configures several pallets to include in the runtime. Each pallet configuration is defined by a code block that begins with impl $PALLET_NAME::Config for Runtime.
  • The pallets are composed into a single runtime by way of the construct_runtime! macro, which is part of the core FRAME Support library.


The runtime in this project is constructed using many FRAME pallets that ship with the core Substrate repository and a template pallet that is defined in the pallets directory.

A FRAME pallet is composed of a number of blockchain primitives:

  • Storage: FRAME defines a rich set of powerful storage abstractions that makes it easy to use Substrate's efficient key-value database to manage the evolving state of a blockchain.
  • Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched) from outside of the runtime in order to update its state.
  • Events: Substrate uses events to notify users of important changes in the runtime.
  • Errors: When a dispatchable fails, it returns an error.
  • Config: The Config configuration interface is used to define the types and parameters upon which a FRAME pallet depends.

Run in Docker

First, install Docker and Docker Compose.

Then run the following command to start a single node development chain.

./scripts/docker_run.sh


This command will first compile your code, and then start a local development network. You can also replace the default command (cargo build --release && ./target/release/node-template --dev --ws-external) by appending your own. A few useful ones are as follows.

# Run Substrate node without re-compiling
./scripts/docker_run.sh ./target/release/node-template --dev --ws-external

# Purge the local dev chain
./scripts/docker_run.sh ./target/release/node-template purge-chain --dev

Now that you have two relay chain nodes and a parachain node running alongside a relay chain light
client, the next step is to register the parachain on the relay chain with the following
steps (for details, refer to the Substrate Cumulus Workshop):

-   Go to the Polkadot Apps UI and connect to your relay chain.

-   Execute a sudo extrinsic on the relay chain by going to `Developer` -> `sudo` page.

-   Pick `paraSudoWrapper` -> `sudoScheduleParaInitialize(id, genesis)` as the extrinsic type,
    shown below.

        ![Polkadot Apps UI](docs/assets/ss01.png)

-   Set the `id: ParaId` to 2000 (or whatever ParaId you used above), and set the `parachain: Bool`
    option to **Yes**.

-   For the `genesisHead`, drag the genesis state file exported above, `para-2000-genesis`, in.

-   For the `validationCode`, drag the genesis wasm file exported above, `para-2000-wasm`, in.

> **Note**: When registering to the public Rococo testnet, ensure you set a **unique** `paraId`
> larger than 1,000. Values below 1,000 are reserved _exclusively_ for system parachains.

### Restart the Parachain (Collator)

The collator node may need to be restarted to get it functioning as expected. After a
[new epoch]( starts on the relay chain,
your parachain will come online. Once this happens, you should see the collator start
reporting _parachain_ blocks:

# Notice the relay epoch change! Only then do we start parachain collating!
2021-05-30 17:00:04 [Relaychain] 💤 Idle (2 peers), best: #30 (0xfc02…2a2a), finalized #28 (0x10ff…6539), ⬇ 1.0kiB/s ⬆ 0.3kiB/s
2021-05-30 17:00:04 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:06 [Relaychain] 👶 New epoch 3 launching at block 0x68bc…0605 (block slot 270402601 >= start slot 270402601).
2021-05-30 17:00:06 [Relaychain] 👶 Next epoch starts at slot 270402611
2021-05-30 17:00:06 [Relaychain] ✨ Imported #31 (0x68bc…0605)
2021-05-30 17:00:06 [Parachain] Starting collation. relay_parent=0x68bcc93d24a31a2c89800a56c7a2b275fe9ca7bd63f829b64588ae0d99280605 at=0xd42bb78354bc21770e3f0930ed45c7377558d2d8e81ca4d457e573128aabf271
2021-05-30 17:00:06 [Parachain] 🙌 Starting consensus session on top of parent 0xd42bb78354bc21770e3f0930ed45c7377558d2d8e81ca4d457e573128aabf271
2021-05-30 17:00:06 [Parachain] 🎁 Prepared block for proposing at 1 [hash: 0xf6507812bf60bf53af1311f775aac03869be870df6b0406b2969784d0935cb92; parent_hash: 0xd42b…f271; extrinsics (2): [0x1bf5…1d76, 0x7c9b…4e23]]
2021-05-30 17:00:06 [Parachain] 🔖 Pre-sealed block for proposal at 1. Hash now 0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae, previously 0xf6507812bf60bf53af1311f775aac03869be870df6b0406b2969784d0935cb92.
2021-05-30 17:00:06 [Parachain] ✨ Imported #1 (0x80fc…ccae)
2021-05-30 17:00:06 [Parachain] Produced proof-of-validity candidate. block_hash=0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:09 [Relaychain] 💤 Idle (2 peers), best: #31 (0x68bc…0605), finalized #29 (0xa6fa…9e16), ⬇ 1.2kiB/s ⬆ 129.9kiB/s
2021-05-30 17:00:09 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:12 [Relaychain] ✨ Imported #32 (0x5e92…ba30)
2021-05-30 17:00:12 [Relaychain] Moving approval window from session 0..=2 to 0..=3
2021-05-30 17:00:12 [Relaychain] ✨ Imported #32 (0x8144…74eb)
2021-05-30 17:00:14 [Relaychain] 💤 Idle (2 peers), best: #32 (0x5e92…ba30), finalized #29 (0xa6fa…9e16), ⬇ 1.4kiB/s ⬆ 0.2kiB/s
2021-05-30 17:00:14 [Parachain] 💤 Idle (0 peers), best: #0 (0xd42b…f271), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:18 [Relaychain] ✨ Imported #33 (0x8c30…9ccd)
2021-05-30 17:00:18 [Parachain] Starting collation. relay_parent=0x8c30ce9e6e9867824eb2aff40148ac1ed64cf464f51c5f2574013b44b20f9ccd at=0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:19 [Relaychain] 💤 Idle (2 peers), best: #33 (0x8c30…9ccd), finalized #30 (0xfc02…2a2a), ⬇ 0.7kiB/s ⬆ 0.4kiB/s
2021-05-30 17:00:19 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:22 [Relaychain] 👴 Applying authority set change scheduled at block #31
2021-05-30 17:00:22 [Relaychain] 👴 Applying GRANDPA set change to new set [(Public(88dc3417d5058ec4b4503e0c12ea1a0a89be200fe98922423d4334014fa6b0ee (5FA9nQDV...)), 1), (Public(d17c2d7823ebf260fd138f2d7e27d114c0145d968b5ff5006125f2414fadae69 (5GoNkf6W...)), 1)]
2021-05-30 17:00:22 [Relaychain] 👴 Imported justification for block #31 that triggers command Changing authorities, signaling voter.
2021-05-30 17:00:24 [Relaychain] ✨ Imported #34 (0x211b…febf)
2021-05-30 17:00:24 [Parachain] Starting collation. relay_parent=0x211b3c53bebeff8af05e8f283d59fe171b7f91a5bf9c4669d88943f5a42bfebf at=0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:24 [Parachain] 🙌 Starting consensus session on top of parent 0x80fc151d7ccf228b802525022b6de257e42388ec7dc3c1dd7de491313650ccae
2021-05-30 17:00:24 [Parachain] 🎁 Prepared block for proposing at 2 [hash: 0x10fcb3180e966729c842d1b0c4d8d2c4028cfa8bef02b909af5ef787e6a6a694; parent_hash: 0x80fc…ccae; extrinsics (2): [0x4a6c…1fc6, 0x6b84…7cea]]
2021-05-30 17:00:24 [Parachain] 🔖 Pre-sealed block for proposal at 2. Hash now 0x5087fd06b1b73d90cfc3ad175df8495b378fffbb02fea212cc9e49a00fd8b5a0, previously 0x10fcb3180e966729c842d1b0c4d8d2c4028cfa8bef02b909af5ef787e6a6a694.
2021-05-30 17:00:24 [Parachain] ✨ Imported #2 (0x5087…b5a0)
2021-05-30 17:00:24 [Parachain] Produced proof-of-validity candidate. block_hash=0x5087fd06b1b73d90cfc3ad175df8495b378fffbb02fea212cc9e49a00fd8b5a0
2021-05-30 17:00:24 [Relaychain] 💤 Idle (2 peers), best: #34 (0x211b…febf), finalized #31 (0x68bc…0605), ⬇ 1.0kiB/s ⬆ 130.1kiB/s
2021-05-30 17:00:24 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:29 [Relaychain] 💤 Idle (2 peers), best: #34 (0x211b…febf), finalized #32 (0x5e92…ba30), ⬇ 0.2kiB/s ⬆ 0.1kiB/s
2021-05-30 17:00:29 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #0 (0xd42b…f271), ⬇ 0 ⬆ 0
2021-05-30 17:00:30 [Relaychain] ✨ Imported #35 (0xee07…38a0)
2021-05-30 17:00:34 [Relaychain] 💤 Idle (2 peers), best: #35 (0xee07…38a0), finalized #33 (0x8c30…9ccd), ⬇ 0.9kiB/s ⬆ 0.3kiB/s
2021-05-30 17:00:34 [Parachain] 💤 Idle (0 peers), best: #1 (0x80fc…ccae), finalized #1 (0x80fc…ccae), ⬇ 0 ⬆ 0
2021-05-30 17:00:36 [Relaychain] ✨ Imported #36 (0xe8ce…4af6)
2021-05-30 17:00:36 [Parachain] Starting collation. relay_parent=0xe8cec8015c0c7bf508bf3f2f82b1696e9cca078e814b0f6671f0b0d5dfe84af6 at=0x5087fd06b1b73d90cfc3ad175df8495b378fffbb02fea212cc9e49a00fd8b5a0
2021-05-30 17:00:39 [Relaychain] 💤 Idle (2 peers), best: #36 (0xe8ce…4af6), finalized #33 (0x8c30…9ccd), ⬇ 0.6kiB/s ⬆ 0.1kiB/s
2021-05-30 17:00:39 [Parachain] 💤 Idle (0 peers), best: #2 (0x5087…b5a0), finalized #1 (0x80fc…ccae), ⬇ 0 ⬆ 0

Note the delay here! It may take some time for your relay chain to enter a new epoch.
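The epoch change in the log above is plain slot arithmetic: the epoch index is the number of whole epochs elapsed since the epoch-0 start slot. A minimal sketch, using the slot numbers from the log (the 10-slot epoch length follows from the "Next epoch starts at slot 270402611" line; the epoch-0 start slot is back-computed and is an assumption):

```rust
// Epoch index from a slot number: epoch = (slot - epoch0_start) / epoch_len.
// Slot numbers come from the log output above; the epoch-0 start slot is
// back-computed from "New epoch 3 launching" and is an assumption.
fn epoch_index(slot: u64, epoch0_start: u64, epoch_len: u64) -> u64 {
    (slot - epoch0_start) / epoch_len
}

fn main() {
    let epoch_len = 270_402_611 - 270_402_601; // 10 slots per epoch, per the log
    let epoch0_start = 270_402_601 - 3 * epoch_len; // assumed epoch-0 start slot
    assert_eq!(epoch_index(270_402_601, epoch0_start, epoch_len), 3);
    // The next epoch begins exactly at the boundary slot:
    assert_eq!(epoch_index(270_402_611, epoch0_start, epoch_len), 4);
}
```

With 6-second slots this gives roughly one-minute epochs on this local setup, which is why the collator sits idle for a short while before parachain blocks start.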

Rococo & Westend Relay Chain Testnets

Is this Cumulus Parachain Template compatible with the Rococo & Westend testnets? Yes!

  • Rococo is the testnet of Kusama (join the Rococo Faucet to get testing funds).
  • Westend is the testnet of Polkadot (join the Westend Faucet to get testing funds).

See the Cumulus Workshop for the latest instructions to register a parathread/parachain on a relay chain.

NOTE: When running the relay chain and parachain, you must use the same tagged version of Polkadot and Cumulus so that the collator can register successfully on the relay chain. You should test registering your parachain successfully on a local relay chain before attempting to connect to any running relay chain network!

Find chainspec files to connect to live networks here. Be sure to use the correct git release tag in these files, as they change from time to time and must match the live network!

These networks are under constant development, so please follow their progress and update your parachain in lockstep with the testnet changes if you wish to stay connected to the network. Do join the Parachain Technical matrix chat room to ask questions and connect with the parachain building teams.

Learn More

  • More detailed instructions to use Cumulus parachains are found in the Cumulus Workshop.
  • Refer to the upstream Substrate Node Template to learn more about the structure of this project, the capabilities it encapsulates and the way in which those capabilities are implemented.
  • Learn more about how a parachain block is added to a finalized chain here.

Download Details:
Author: paritytech
Source Code:
License: Unlicense License

#blockchain  #polkadot  #smartcontract  #substrate 

Adz Demo for Polkadot Written in Rust