Best of Crypto


Dia Substrate: Writes an Event As Signed Transaction for All Local Keys

DIA offchain worker

This offchain worker (ocw) fetches data from an endpoint and writes an event as a signed transaction for all local keys with subkey type dia!.


Node runtime

To add the ocw pallet to your node, add it to your runtime as follows (already done in this repository):

  1. Edit runtime/Cargo.toml:

Add the following under [dependencies]:

pallet-dia-ocw = { version = "2.0.0", default-features = false, path = "../../../frame/dia-ocw" }

Add "pallet-dia-ocw/std" under [features]:

std = [
    # ...
    "pallet-dia-ocw/std",
]

  2. Edit runtime/src/ like this:

Add the following:

impl pallet_dia_ocw::Trait for Runtime {
    type Event = Event;
    type Call = Call;
    type AuthorityId = pallet_dia_ocw::crypto::TestAuthId;
}

Insert DIAOCW: pallet_dia_ocw::{Module, Call, Event<T>}, into the Runtime enum:

    pub enum Runtime where
        Block = Block,
        NodeBlock = node_primitives::Block,
        UncheckedExtrinsic = UncheckedExtrinsic
    {
        // ...
        DIAOCW: pallet_dia_ocw::{Module, Call, Event<T>},
    }


For each block, this ocw automatically adds a signed transaction. The signer account needs to pay the fees.

Local development mode

  • Start the node and dev network by running cargo run -- --dev --tmp.
  • Create an account or add a subkey to an existing account, e.g. the example account Alice, via the author_insertKey RPC. A sketch of the call (the final public-key parameter is a placeholder for the key derived from the seed):
curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "author_insertKey",
    "params": [
      "dia!",
      "bottom drive obey lake curtain smoke basket hold race lonely fit walk//Alice",
      "<public key of the derived account>"
    ]
  }'

Download Details:
Author: diadata-org
Source Code:

#dia  #web3  #defi  #blockchain #substrate #rust 


Dia Data Key/Value Oracle Written using Wasm

DIA WASM oracle

This project contains the diadata key/value oracle written in WASM; it can be deployed to supported Substrate chains.

Functions of the wasm oracle

get: Returns the latest value of the asset symbol, with its timestamp.

set: Sets the latest value of an asset; requires a price and a timestamp. Can only be called by the owner of the contract.
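The two functions above amount to an owner-gated key/value store. A minimal Python model of that behaviour (a sketch only, not the actual WASM contract; all names here are illustrative):

```python
import time

class KeyValueOracle:
    """Toy model of an owner-gated key/value oracle: only the owner may set."""

    def __init__(self, owner):
        self.owner = owner
        self.store = {}

    def set(self, caller, symbol, price, timestamp):
        # Only the contract owner may write new values.
        if caller != self.owner:
            raise PermissionError("set() may only be called by the owner")
        self.store[symbol] = (price, timestamp)

    def get(self, symbol):
        # Returns the latest (price, timestamp) pair for the symbol.
        return self.store[symbol]

oracle = KeyValueOracle(owner="alice")
oracle.set("alice", "BTC", 42000.0, int(time.time()))
price, ts = oracle.get("BTC")
```

The owner check mirrors the contract's access control: a set from any other caller is rejected before the store is touched.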

Setup instructions for cargo contract

Deployed Contract

Network: Astar testnet (Shibuya) : YpfUaqH4zMcEo8Kw1egpPrjAGmBDWu1VVTLEEimXr2Kzevb

Running Oracle Service

Set required environment variables


After setting up the environment variables, run these commands to start the service:

cd oracle
npm run build
npm run start

Download Details:
Author: diadata-org
Source Code:

#dia #rust  #web3  #defi  #blockchain #wasm #oracle #substrate 

August Larson


Python Substrate Interface Library

Python Substrate Interface


This library specializes in interfacing with a Substrate node, providing additional convenience methods to deal with SCALE encoding/decoding (the default output and input format of the Substrate JSONRPC), metadata parsing, type registry management and versioning of types.



pip install substrate-interface


The following examples show how to initialize for supported chains:

Autodiscover mode

substrate = SubstrateInterface(
    url="wss://rpc.polkadot.io"
)

When only a URL is provided, the library tries to determine properties like ss58_format and type_registry_preset automatically by calling the RPC method system_properties.

At the moment this works for most MetadataV14-and-above chains like Polkadot, Kusama, Acala and Moonbeam; for other chains, the ss58_format (default 42) and type_registry (defaulting to the latest vanilla Substrate types) should be set manually.

Manually set required properties

Polkadot:

substrate = SubstrateInterface(
    url="wss://rpc.polkadot.io",
    ss58_format=0,
    type_registry_preset='polkadot'
)

Kusama:

substrate = SubstrateInterface(
    url="wss://kusama-rpc.polkadot.io/",
    ss58_format=2,
    type_registry_preset='kusama'
)

Rococo:

substrate = SubstrateInterface(
    url="wss://rococo-rpc.polkadot.io",
    ss58_format=42,
    type_registry_preset='rococo'
)

Westend:

substrate = SubstrateInterface(
    url="wss://westend-rpc.polkadot.io",
    ss58_format=42,
    type_registry_preset='westend'
)

Substrate Node Template

Compatible with the default development node template:

substrate = SubstrateInterface(
    url="ws://127.0.0.1:9944",
    ss58_format=42,
    type_registry_preset='substrate-node-template'
)


Retrieve extrinsics for a certain block

Method 1: access serialized value

# Set block_hash to None for chaintip
block_hash = "0x51d15792ff3c5ee9c6b24ddccd95b377d5cccc759b8e76e5de9250cf58225087"

# Retrieve extrinsics in block
result = substrate.get_block(block_hash=block_hash)

for extrinsic in result['extrinsics']:

    if 'address' in extrinsic.value:
        signed_by_address = extrinsic.value['address']
    else:
        signed_by_address = None

    print('\nPallet: {}\nCall: {}\nSigned by: {}'.format(
        extrinsic.value["call"]["call_module"],
        extrinsic.value["call"]["call_function"],
        signed_by_address
    ))

    # Loop through call params
    for param in extrinsic.value["call"]['call_args']:

        if param['type'] == 'Balance':
            param['value'] = '{} {}'.format(param['value'] / 10 ** substrate.token_decimals, substrate.token_symbol)

        print("Param '{}': {}".format(param['name'], param['value']))

Method 2: access nested objects

# Set block_hash to None for chaintip
block_hash = "0x51d15792ff3c5ee9c6b24ddccd95b377d5cccc759b8e76e5de9250cf58225087"

# Retrieve extrinsics in block
result = substrate.get_block(block_hash=block_hash)

for extrinsic in result['extrinsics']:

    if 'address' in extrinsic:
        signed_by_address = extrinsic['address'].value
    else:
        signed_by_address = None

    print('\nPallet: {}\nCall: {}\nSigned by: {}'.format(
        extrinsic["call"]["call_module"].name,
        extrinsic["call"]["call_function"].name,
        signed_by_address
    ))

    # Loop through call params
    for param in extrinsic["call"]['call_args']:

        if param['type'] == 'Balance':
            param['value'] = '{} {}'.format(param['value'] / 10 ** substrate.token_decimals, substrate.token_symbol)

        print("Param '{}': {}".format(param['name'], param['value']))

Subscribe to new block headers

def subscription_handler(obj, update_nr, subscription_id):

    print(f"New block #{obj['header']['number']} produced by {obj['author']}")

    if update_nr > 10:
        return {'message': 'Subscription will cancel when a value is returned', 'updates_processed': update_nr}

result = substrate.subscribe_block_headers(subscription_handler, include_author=True)
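The cancellation contract here — updates are pushed to the handler until it returns a non-None value, which then becomes the result of the call — can be sketched with a stand-in subscribe loop (`fake_subscribe` is purely illustrative; the real method streams headers over websocket):

```python
def fake_subscribe(handler):
    """Stand-in for subscribe_block_headers: push updates until the
    handler returns something other than None, then return that value."""
    update_nr = 0
    while True:
        obj = {'header': {'number': update_nr}}  # pretend block header
        result = handler(obj, update_nr, subscription_id="demo")
        if result is not None:
            return result  # handler returned a value -> unsubscribe, hand it back
        update_nr += 1

def subscription_handler(obj, update_nr, subscription_id):
    # Returning None keeps the subscription alive.
    if update_nr > 10:
        return {'updates_processed': update_nr}

result = fake_subscribe(subscription_handler)
```

The handler is called for updates 0 through 10 and returns None each time; on update 11 it returns a value, which ends the subscription.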

Storage queries

The modules and storage functions are provided in the metadata (see substrate.get_metadata_storage_functions()); parameters will be automatically converted to SCALE bytes (this also includes decoding of SS58 addresses).


result = substrate.query(
    module='System',
    storage_function='Account',
    params=['5GNJqTPyNqANBkUVMN1LPPrxXnFouWXoe2wNSmmEoLctxiZY']
)

print(result.value['nonce']) #  7695
print(result.value['data']['free']) # 635278638077956496

Get the account info at a specific block hash:

account_info = substrate.query(
    module='System',
    storage_function='Account',
    params=['5GNJqTPyNqANBkUVMN1LPPrxXnFouWXoe2wNSmmEoLctxiZY'],
    block_hash='0x51d15792ff3c5ee9c6b24ddccd95b377d5cccc759b8e76e5de9250cf58225087'
)

print(account_info['nonce'].value) #  7673
print(account_info['data']['free'].value) # 637747267365404068

Type information about how to format parameters

To retrieve more information about how to format the parameters of a storage function:

storage_function = substrate.get_metadata_storage_function("Tokens", "TotalIssuance")

print(storage_function.get_param_info())

# [{'variant': {'variants': [{'name': 'Token', 'fields': [{'name': None, 'type': 44, 'typeName': 'TokenSymbol', 'docs': []}], 'index': 0, 'docs': [], 'value': {'variant': {'variants': [{'name': 'ACA', 'fields': [], 'index': 0, 'docs': []}, {'name': 'AUSD', 'fields': [], 'index': 1, 'docs': []}, {'name': 'DOT', 'fields': [], 'index': 2, 'docs': []}, {'name': 'LDOT', 'fields': [], 'index': 3, 'docs': []}, {'name': 'RENBTC', 'fields': [], 'index': 20, 'docs': []}, {'name': 'CASH', 'fields': [], 'index': 21, 'docs': []}, {'name': 'KAR', 'fields': [], 'index': 128, 'docs': []}, {'name': 'KUSD', 'fields': [], 'index': 129, 'docs': []}, {'name': 'KSM', 'fields': [], 'index': 130, 'docs': []}, {'name': 'LKSM', 'fields': [], 'index': 131, 'docs': []}, {'name': 'TAI', 'fields': [], 'index': 132, 'docs': []}, {'name': 'BNC', 'fields': [], 'index': 168, 'docs': []}, {'name': 'VSKSM', 'fields': [], 'index': 169, 'docs': []}, {'name': 'PHA', 'fields': [], 'index': 170, 'docs': []}, {'name': 'KINT', 'fields': [], 'index': 171, 'docs': []}, {'name': 'KBTC', 'fields': [], 'index': 172, 'docs': []}]}}}, {'name': 'DexShare', 'fields': [{'name': None, 'type': 45, 'typeName': 'DexShare', 'docs': []}, {'name': None, 'type': 45, 'typeName': 'DexShare', 'docs': []}], 'index': 1, 'docs': [], 'value': {'variant': {'variants': [{'name': 'Token', 'fields': [{'name': None, 'type': 44, 'typeName': 'TokenSymbol', 'docs': []}], 'index': 0, 'docs': [], 'value': {'variant': {'variants': [{'name': 'ACA', 'fields': [], 'index': 0, 'docs': []}, {'name': 'AUSD', 'fields': [], 'index': 1, 'docs': []}, {'name': 'DOT', 'fields': [], 'index': 2, 'docs': []}, {'name': 'LDOT', 'fields': [], 'index': 3, 'docs': []}, {'name': 'RENBTC', 'fields': [], 'index': 20, 'docs': []}, {'name': 'CASH', 'fields': [], 'index': 21, 'docs': []}, {'name': 'KAR', 'fields': [], 'index': 128, 'docs': []}, {'name': 'KUSD', 'fields': [], 'index': 129, 'docs': []}, {'name': 'KSM', 'fields': [], 'index': 130, 'docs': []}, {'name': 
'LKSM', 'fields': [], 'index': 131, 'docs': []}, {'name': 'TAI', 'fields': [], 'index': 132, 'docs': []}, {'name': 'BNC', 'fields': [], 'index': 168, 'docs': []}, {'name': 'VSKSM', 'fields': [], 'index': 169, 'docs': []}, {'name': 'PHA', 'fields': [], 'index': 170, 'docs': []}, {'name': 'KINT', 'fields': [], 'index': 171, 'docs': []}, {'name': 'KBTC', 'fields': [], 'index': 172, 'docs': []}]}}}, {'name': 'Erc20', 'fields': [{'name': None, 'type': 46, 'typeName': 'EvmAddress', 'docs': []}], 'index': 1, 'docs': [], 'value': {'composite': {'fields': [{'name': None, 'type': 47, 'typeName': '[u8; 20]', 'docs': [], 'value': {'array': {'len': 20, 'type': 2, 'value': {'primitive': 'u8'}}}}]}}}, {'name': 'LiquidCrowdloan', 'fields': [{'name': None, 'type': 4, 'typeName': 'Lease', 'docs': []}], 'index': 2, 'docs': [], 'value': {'primitive': 'u32'}}, {'name': 'ForeignAsset', 'fields': [{'name': None, 'type': 36, 'typeName': 'ForeignAssetId', 'docs': []}], 'index': 3, 'docs': [], 'value': {'primitive': 'u16'}}]}}}, {'name': 'Erc20', 'fields': [{'name': None, 'type': 46, 'typeName': 'EvmAddress', 'docs': []}], 'index': 2, 'docs': [], 'value': {'composite': {'fields': [{'name': None, 'type': 47, 'typeName': '[u8; 20]', 'docs': [], 'value': {'array': {'len': 20, 'type': 2, 'value': {'primitive': 'u8'}}}}]}}}, {'name': 'StableAssetPoolToken', 'fields': [{'name': None, 'type': 4, 'typeName': 'StableAssetPoolId', 'docs': []}], 'index': 3, 'docs': [], 'value': {'primitive': 'u32'}}, {'name': 'LiquidCrowdloan', 'fields': [{'name': None, 'type': 4, 'typeName': 'Lease', 'docs': []}], 'index': 4, 'docs': [], 'value': {'primitive': 'u32'}}, {'name': 'ForeignAsset', 'fields': [{'name': None, 'type': 36, 'typeName': 'ForeignAssetId', 'docs': []}], 'index': 5, 'docs': [], 'value': {'primitive': 'u16'}}]}}]

The query_map() function can also be used to see examples of used parameters:

result = substrate.query_map("Tokens", "TotalIssuance")

# [[<scale_info::43(value={'DexShare': ({'Token': 'KSM'}, {'Token': 'LKSM'})})>, <U128(value=11513623028320124)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'Token': 'BNC'})})>, <U128(value=2689948474603237982)>], [<scale_info::43(value={'DexShare': ({'Token': 'KSM'}, {'ForeignAsset': 0})})>, <U128(value=5285939253205090)>], [<scale_info::43(value={'Token': 'VSKSM'})>, <U128(value=273783457141483)>], [<scale_info::43(value={'DexShare': ({'Token': 'KAR'}, {'Token': 'KSM'})})>, <U128(value=1175872380578192993)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'Token': 'KSM'})})>, <U128(value=3857629383220790030)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'ForeignAsset': 0})})>, <U128(value=494116000924219532)>], [<scale_info::43(value={'Token': 'KSM'})>, <U128(value=77261320750464113)>], [<scale_info::43(value={'Token': 'TAI'})>, <U128(value=10000000000000000000)>], [<scale_info::43(value={'Token': 'LKSM'})>, <U128(value=681009957030687853)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'Token': 'LKSM'})})>, <U128(value=4873824439975242272)>], [<scale_info::43(value={'Token': 'KUSD'})>, <U128(value=5799665835441836111)>], [<scale_info::43(value={'ForeignAsset': 0})>, <U128(value=2319784932899895)>], [<scale_info::43(value={'DexShare': ({'Token': 'KAR'}, {'Token': 'LKSM'})})>, <U128(value=635158183535133903)>], [<scale_info::43(value={'Token': 'BNC'})>, <U128(value=1163757660576711961)>]]

Using ScaleType objects

The result of the previous storage query example is a ScaleType object, more specifically a Struct.

The nested object structure of this account_info object is as follows:

account_info = <AccountInfo(value={'nonce': <U32(value=5)>, 'consumers': <U32(value=0)>, 'providers': <U32(value=1)>, 'sufficients': <U32(value=0)>, 'data': <AccountData(value={'free': 1152921503981846391, 'reserved': 0, 'misc_frozen': 0, 'fee_frozen': 0})>})>

Every ScaleType has the following characteristics:

Shorthand lookup of nested types

Inside the AccountInfo struct there are several U32 objects that represent, for example, the nonce or the number of providers, as well as another struct object AccountData which contains more nested types.

To access these nested structures formally, you can use:

account_info.value_object['data'].value_object['free']

As a convenient shorthand you can also use:

account_info['data']['free']

ScaleType objects are also iterable, so if the object is, for example, the 'others' list in the result Struct of Staking.eraStakers, it can be iterated via:

for other_info in era_stakers['others']:
    print(other_info['who'], other_info['value'])


Each ScaleType can produce a complete serialized version of itself via account_info.serialize(), so it can easily be stored or used to create JSON strings.

So the whole result of account_info.serialize() will be a dict containing the following:

{
    "nonce": 5,
    "consumers": 0,
    "providers": 1,
    "sufficients": 0,
    "data": {
        "free": 1152921503981846391,
        "reserved": 0,
        "misc_frozen": 0,
        "fee_frozen": 0
    }
}
Comparing values with ScaleType objects

It is possible to compare ScaleType objects directly to Python primitives; internally, the serialized value attribute is compared:

metadata_obj[1][1]['extrinsic']['version'] # '<U8(value=4)>'
metadata_obj[1][1]['extrinsic']['version'] == 4 # True
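That comparison behaviour can be modelled with an `__eq__` that falls back to the serialized value (a sketch of the mechanism, not the library's actual class):

```python
class FakeScaleType:
    """Sketch of a ScaleType-like wrapper: compares equal to its raw value."""

    def __init__(self, value):
        self.value = value

    def serialize(self):
        # Real ScaleType objects produce a JSON-serializable form here.
        return self.value

    def __eq__(self, other):
        if isinstance(other, FakeScaleType):
            return self.serialize() == other.serialize()
        # Compare against a plain Python primitive via the serialized value.
        return self.serialize() == other

version = FakeScaleType(4)
```

With this, `version == 4` is true even though `version` is a wrapper object, which is exactly the convenience the library provides.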

Storage subscriptions

When a callable is passed as the subscription_handler kwarg, a subscription will be created for the given storage query. Updates will be pushed to the callable, blocking execution until a final value is returned. This value is then returned as the result of the query, and the subscription is automatically cancelled.

def subscription_handler(account_info_obj, update_nr, subscription_id):

    if update_nr == 0:
        print('Initial account data:', account_info_obj.value)

    if update_nr > 0:
        # Do something with the update
        print('Account data changed:', account_info_obj.value)

    # The execution will block until an arbitrary value is returned, which will be the result of the `query`
    if update_nr > 5:
        return account_info_obj

result = substrate.query("System", "Account", ["5GNJqTPyNqANBkUVMN1LPPrxXnFouWXoe2wNSmmEoLctxiZY"],
                         subscription_handler=subscription_handler)


Query a mapped storage function

Mapped storage functions can be iterated over all key/value pairs; for this type of storage function, query_map can be used.

The result is a QueryMapResult object, which is an iterator:

# Retrieve the first 199 System.Account entries
result = substrate.query_map('System', 'Account', max_results=199)

for account, account_info in result:
    print(f"Free balance of account '{account.value}': {account_info.value['data']['free']}")

These results are transparently retrieved in batches capped by the page_size kwarg; currently the maximum page_size imposed by the RPC node is 1000.

# Retrieve all System.Account entries in batches of 200 (automatically appended by `QueryMapResult` iterator)
result = substrate.query_map('System', 'Account', page_size=200, max_results=400)

for account, account_info in result:
    print(f"Free balance of account '{account.value}': {account_info.value['data']['free']}")
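The batching behaviour — fetch one page of at most page_size entries per RPC round-trip, stop once max_results have been yielded — can be sketched with a generator over a stand-in page fetcher (`fetch_page` and `query_map_mock` are hypothetical names; the real iterator pages via RPC):

```python
def query_map_mock(fetch_page, page_size=100, max_results=None):
    """Yield key/value pairs page by page, honouring max_results."""
    start = 0
    yielded = 0
    while True:
        page = fetch_page(start, page_size)  # one "RPC round-trip" per page
        for item in page:
            if max_results is not None and yielded >= max_results:
                return
            yield item
            yielded += 1
        if len(page) < page_size:  # short page -> no more entries on chain
            return
        start += page_size

# Stand-in "chain state" with 450 entries:
state = [(f"account_{i}", i) for i in range(450)]

def fetch_page(start, size):
    return state[start:start + size]

entries = list(query_map_mock(fetch_page, page_size=200, max_results=400))
```

With page_size=200 and max_results=400, the generator performs two full page fetches and stops after 400 entries, mirroring how QueryMapResult transparently appends batches.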

Querying a DoubleMap storage function:

era_stakers = substrate.query_map(
    module='Staking',
    storage_function='ErasStakers',
    params=[100]
)

Create and send signed extrinsics

The following code snippet illustrates how to create a call, wrap it in a signed extrinsic and send it to the network:

from substrateinterface import SubstrateInterface, Keypair
from substrateinterface.exceptions import SubstrateRequestException

substrate = SubstrateInterface(
    url="ws://127.0.0.1:9944"
)

keypair = Keypair.create_from_mnemonic('episode together nose spoon dose oil faculty zoo ankle evoke admit walnut')

call = substrate.compose_call(
    call_module='Balances',
    call_function='transfer',
    call_params={
        'dest': '5E9oDs9PjpsBbxXxRE9uMaZZhnBAV38n2ouLB28oecBDdeQo',
        'value': 1 * 10**12
    }
)

extrinsic = substrate.create_signed_extrinsic(call=call, keypair=keypair)

try:
    receipt = substrate.submit_extrinsic(extrinsic, wait_for_inclusion=True)
    print("Extrinsic '{}' sent and included in block '{}'".format(receipt.extrinsic_hash, receipt.block_hash))

except SubstrateRequestException as e:
    print("Failed to send: {}".format(e))

The wait_for_inclusion keyword argument used in the example above will block until the node confirms that the extrinsic is successfully included in a block. The wait_for_finalization keyword will wait until the extrinsic is finalized. Note that these features are only available for websocket connections.

Examining the ExtrinsicReceipt object

The substrate.submit_extrinsic example above returns an ExtrinsicReceipt object, which contains information about the on-chain execution of the extrinsic. Because the block_hash is necessary to retrieve the triggered events from storage, most information is only available when wait_for_inclusion=True or wait_for_finalization=True is used when submitting an extrinsic.


receipt = substrate.submit_extrinsic(extrinsic, wait_for_inclusion=True)
print(receipt.is_success) # False
print(receipt.weight) # 216625000
print(receipt.total_fee_amount) # 2749998966
print(receipt.error_message['name']) # 'LiquidityRestrictions'

ExtrinsicReceipt objects can also be created for all existing extrinsics on-chain:

receipt = ExtrinsicReceipt.create_from_extrinsic_identifier(
    substrate=substrate, extrinsic_identifier="5233297-1"
)

print(receipt.is_success) # False
print(receipt.extrinsic.call_module.name) # 'Identity'
print(receipt.extrinsic.call.name) # 'remove_sub'
print(receipt.weight) # 359262000
print(receipt.total_fee_amount) # 2483332406
print(receipt.error_message['docs']) # [' Sender is not a sub-account.']

for event in receipt.triggered_events:
    print(f'* {event.value}')

ink! contract interfacing

Deploy a contract

Tested on canvas-node with the Flipper contract from the ink! tutorial:

substrate = SubstrateInterface(
    url="ws://127.0.0.1:9944"
)

keypair = Keypair.create_from_uri('//Alice')

# Deploy contract
code = ContractCode.create_from_contract_files(
    metadata_file=os.path.join(os.path.dirname(__file__), 'assets', 'flipper.json'),
    wasm_file=os.path.join(os.path.dirname(__file__), 'assets', 'flipper.wasm'),
    substrate=substrate
)

contract = code.deploy(
    keypair=keypair,
    endowment=10 ** 15,
    gas_limit=1000000000000,
    constructor="new",
    args={'init_value': True}
)

print(f'✅ Deployed @ {contract.contract_address}')

Work with an existing instance:

# Create contract instance from deterministic address
contract = ContractInstance.create_from_address(
    contract_address=contract_address,
    metadata_file=os.path.join(os.path.dirname(__file__), 'assets', 'flipper.json'),
    substrate=substrate
)

Read data from a contract:

result = contract.read(keypair, 'get')
print('Current value of "get":', result.contract_result_data)

Execute a contract call

# Do a gas estimation of the message
gas_predit_result = contract.read(keypair, 'flip')

print('Result of dry-run: ', gas_predit_result.contract_result_data)
print('Gas estimate: ', gas_predit_result.gas_required)

# Do the actual call
print('Executing contract call...')
contract_receipt = contract.exec(keypair, 'flip', args={}, gas_limit=gas_predit_result.gas_required)

if contract_receipt.is_success:
    print(f'Events triggered in contract: {contract_receipt.contract_events}')
else:
    print(f'Call failed: {contract_receipt.error_message}')

See complete code example for more details

Create mortal extrinsics

By default, immortal extrinsics are created, which means they have an indefinite lifetime for being included in a block. However, it is recommended to specify an expiry window, so that you know the extrinsic will be invalidated if it is not included in a block within a certain amount of time.

extrinsic = substrate.create_signed_extrinsic(call=call, keypair=keypair, era={'period': 64})

The period specifies the number of blocks the extrinsic is valid counted from current head.
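As a rough model of the validity window this creates: Substrate quantizes the period to a power of two between 4 and 65536, anchors the extrinsic at the current block, and expires it one (effective) period later. The sketch below is a simplified illustration of that arithmetic, not the actual MortalEra wire encoding; `mortality_window` is a hypothetical helper name.

```python
def mortality_window(current_block, period):
    """Approximate the validity window of a mortal extrinsic.

    The requested period is rounded up to a power of two in [4, 65536];
    the extrinsic's 'birth' is anchored at the current block and it
    expires `effective_period` blocks later.
    """
    p = 4
    while p < min(period, 65536):
        p *= 2
    birth = current_block
    death = current_block + p
    return birth, death, p

birth, death, effective_period = mortality_window(current_block=22719, period=64)
```

For period=64 (already a power of two) the window is exactly 64 blocks; a requested period of 100 would be rounded up to 128.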

Keypair creation and signing

mnemonic = Keypair.generate_mnemonic()
keypair = Keypair.create_from_mnemonic(mnemonic)
signature = keypair.sign("Test123")
if keypair.verify("Test123", signature):
    print('Signature is valid')

By default, a keypair uses SR25519 cryptography; alternatively, ED25519 and ECDSA can be explicitly specified:

keypair = Keypair.create_from_mnemonic(mnemonic, crypto_type=KeypairType.ECDSA)

Creating keypairs with soft and hard key derivation paths

mnemonic = Keypair.generate_mnemonic()
keypair = Keypair.create_from_uri(mnemonic + '//hard/soft')
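In the URI above, `//hard` is a hard junction and `/soft` a soft one. A small parser sketch of just the path syntax (the actual key derivation is done by the library; `parse_derivation_path` is a hypothetical helper):

```python
import re

def parse_derivation_path(uri):
    """Split a SURI-style derivation suffix into (junction, is_hard) pairs.

    '//name' marks a hard junction, '/name' a soft one. This only models
    the path grammar, not the cryptographic derivation itself.
    """
    junctions = []
    for match in re.finditer(r"(/?/)([^/]+)", uri):
        is_hard = match.group(1) == '//'
        junctions.append((match.group(2), is_hard))
    return junctions

path = parse_derivation_path('//hard/soft')
```

Hard junctions require the secret key to derive, while soft junctions can also be derived from the public key, which is why the distinction matters when sharing addresses.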

When the mnemonic is omitted, the default development mnemonic is used:

keypair = Keypair.create_from_uri('//Alice')

Creating ECDSA keypairs with BIP44 derivation paths

mnemonic = Keypair.generate_mnemonic()
keypair = Keypair.create_from_uri(f"{mnemonic}/m/44'/60'/0'/0/0", crypto_type=KeypairType.ECDSA)

Getting an estimate of network fees for an extrinsic in advance

keypair = Keypair(ss58_address="EaG2CRhJWPb7qmdcJvy3LiWdh26Jreu9Dx6R1rXxPmYXoDk")

call = substrate.compose_call(
    call_module='Balances',
    call_function='transfer',
    call_params={
        'dest': 'EaG2CRhJWPb7qmdcJvy3LiWdh26Jreu9Dx6R1rXxPmYXoDk',
        'value': 2 * 10 ** 3
    }
)

payment_info = substrate.get_payment_info(call=call, keypair=keypair)
# {'class': 'normal', 'partialFee': 2499999066, 'weight': 216625000}

Offline signing of extrinsics

This example generates a signature payload which can be signed on another (offline) machine and later on sent to the network with the generated signature.

  • Generate signature payload on online machine:
substrate = SubstrateInterface(
    url="ws://127.0.0.1:9944"
)

call = substrate.compose_call(
    call_module='Balances',
    call_function='transfer',
    call_params={
        'dest': '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY',
        'value': 2 * 10**8
    }
)

era = {'period': 64, 'current': 22719}
nonce = 0

signature_payload = substrate.generate_signature_payload(call=call, era=era, nonce=nonce)
  • Then on another (offline) machine generate the signature with given signature_payload:
keypair = Keypair.create_from_mnemonic("nature exchange gasp toy result bacon coin broccoli rule oyster believe lyrics")
signature = keypair.sign(signature_payload)
  • Finally on the online machine send the extrinsic with generated signature:
keypair = Keypair(ss58_address="5EChUec3ZQhUvY1g52ZbfBVkqjUY9Kcr6mcEvQMbmd38shQL")

extrinsic = substrate.create_signed_extrinsic(
    call=call,
    keypair=keypair,
    era=era,
    nonce=nonce,
    signature=signature
)

result = substrate.submit_extrinsic(
    extrinsic=extrinsic
)


Accessing runtime constants

All runtime constants are provided in the metadata (see substrate.get_metadata_constants()); to access these as a decoded ScaleType you can use the function substrate.get_constant():

constant = substrate.get_constant("Balances", "ExistentialDeposit")

print(constant.value) # 10000000000

Cleanup and context manager

At the end of the lifecycle of a SubstrateInterface instance, calling the close() method will do all the necessary cleanup, like closing the websocket connection.

When using the context manager this will be done automatically:

with SubstrateInterface(url="wss://rpc.polkadot.io") as substrate:
    events = substrate.query("System", "Events")

# connection is now closed

Keeping type registry presets up to date

:information_source: Only applicable for chains with metadata < V14

When on-chain runtime upgrades occur, types used in call- or storage functions can be added or modified. Therefore it is important to keep the type registry presets up to date, otherwise this can lead to decoding errors like RemainingScaleBytesNotEmptyException.

At the moment the type registry presets for Polkadot, Kusama, Rococo and Westend are being actively maintained for this library, and a check-and-update procedure can be triggered with:

substrate.update_type_registry_presets()
This will also activate the updated preset for the current instance.

It is also possible to always use the remote type registry preset from GitHub with the use_remote_preset kwarg when instantiating:

substrate = SubstrateInterface(
    url="wss://rpc.polkadot.io",
    ss58_format=0,
    type_registry_preset='polkadot',
    use_remote_preset=True
)

To check for updates after instantiating the substrate object, calling substrate.reload_type_registry() will download the most recent type registry preset from GitHub and apply the changes to the current object.

Contact and Support

For questions, please reach out to us on our matrix chat group: Polkascan Technical.

Download Details:
Source Code: 

#python  #blockchain #substrate 


AS Substrate: Collection Of Libraries Written in AssemblyScript


A collection of resources to develop proof of concept projects for Substrate in AssemblyScript. AssemblyScript compiles a strict subset of TypeScript to WebAssembly using Binaryen.

At the moment, this repository is mainly home to a collection of smart contract examples and a small smart contract library for writing contracts for Substrate's contracts pallet, but it might be extended with more examples in the future.


This repository uses yarn and yarn workspaces. You also need a fairly up-to-date version of node.


The packages folder contains the PoC libraries and projects.


The contracts folder contains a number of example contracts that make use of the as-contracts package. The compiled example contracts in the contracts folder can be deployed and executed on any Substrate chain that includes the contracts pallet.

Getting started

  1. Clone the whole as-substrate repository.
$ git clone

  2. Install all dependencies

$ yarn

  3. Compile all packages, projects and contract examples to wasm

$ yarn build

To clean up all workspaces in the repository, run:

$ yarn clean

Write your own contract

The @substrate/as-contracts and @substrate/as-utils packages are not published to the npmjs registry. That's why you need to add the complete as-substrate repository as a dependency directly from git.

$ yarn add

// or

$ npm install

In your projects, you can then import the as-contracts functions directly from the node_modules folder.

The recommended way of writing smart contracts is using the Rust Smart Contract Language ink!.

Another way of writing Smart Contracts for Substrate is using the Solidity to Wasm compiler Solang.


Everything in this repository is highly experimental and should not be used for any professional or financial purposes.

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 license

#blockchain #assemblyscript #substrate  #smartcontract  #polkadot #rust 


Ledgeracio: CLI for Use with The Ledger Staking App

WARNING: This is alpha quality software and not suitable for production. It is incomplete and will have bugs.

Ledgeracio CLI

Ledgeracio is a command-line tool and a Ledger app designed for staking operations on Substrate-based networks.

Running ledgeracio --help will provide top-level usage instructions.

Ledgeracio CLI is intended to work with a special Ledgeracio Ledger app, but most of its commands will work with stock Kusama or Polkadot Ledger apps as well. This is less secure, however, as these apps do not enforce the same restrictions that the Ledgeracio app does. Using a stock app in production is not recommended.

The Polkadot app can be found here and the Kusama app can be found here. Other Substrate-based chains are currently not supported, but local devnets should work as long as their RPC API matches Kusama/Polkadot's.

Ledgeracio only supports Unix-like systems, and has mostly been tested on Linux. That said, it works on macOS and other Unix-like systems that provide the necessary support for userspace USB drivers.

What is Ledgeracio?

Ledgeracio is a CLI app to perform various tasks common to staking on Kusama and Polkadot, aka staking-ops. Ledgeracio is designed to reduce the risk of user error by way of an allowlist of validators that is set up and signed once and stored on the Ledger device. Furthermore, Ledgeracio can speed up the workflow considerably when compared to alternatives using Parity Signer + Polkadot{.js}.

This repository only contains the CLI. To submit transactions with Ledgeracio, you will also need the companion Ledger app, which you can install from the Ledger app store for Polkadot and Kusama. Development versions of the apps are available at Zondax/ledger-polkadot and Zondax/ledger-kusama. Please do not use the unaudited versions in production. For instructions on how to set up and use your Ledger device with Polkadot/Kusama, see the Polkadot wiki.

The Ledgeracio CLI contains two binaries. The first, simply called ledgeracio, is used to submit transactions. The second, called ledgeracio-allowlist, is used to manage the Ledgeracio Ledger app’s list of allowed stash accounts. Generally, one will use ledgeracio for normal operations, and only use ledgeracio-allowlist when the list of allowed stash accounts must be changed. ledgeracio does not handle sensitive data, so it can safely be used on virtually any machine on which it will run. Some subcommands of ledgeracio-allowlist, however, generate and use secret keys, which are stored unencrypted on disk. Therefore, they MUST NOT be used except on trusted and secured machines. Ideally, these subcommands should be run on a machine that is reserved for provisioning of Ledger devices with the Ledgeracio app, and which has no network connectivity.

The allowlist serves to prevent one from accidentally nominating the wrong validator, which could result in a slash. It does NOT protect against malicious use of the device. Anyone with both the device and its PIN can uninstall the Ledgeracio app and install the standard Polkadot or Kusama app, which uses the same derivation path and thus can perform the same transactions.


  • An index is an integer, at least 1, specified in decimal. Indexes are used to determine which BIP44 derivation path to use.
  • Subcommands that take a single argument take it directly. Subcommands that take multiple arguments use keyword arguments, which are passed as --key value or --key=value. This avoids needing to memorize the order of arguments.
  • All commands require that a network name be passed as the first argument. You might want to make a shell alias for this, such as
alias 'ledgeracio-polkadot=ledgeracio --network polkadot'
alias 'ledgeracio-kusama=ledgeracio --network kusama'

Getting Started

Allowlist signing

Provisioning the Ledgeracio Ledger app requires a trusted computer. This computer will store the secret key used to sign allowlists. This computer does not need network access, and generally should not have it. ledgeracio-allowlist does not encrypt the secret key, so operations that involve secret keys should only be done on machines that use encrypted storage.

Only devices used for nomination need to be provisioned. However, if you only intend to use the app for validator management, you should set an empty allowlist, which blocks all nominator operations.

First, ledgeracio-allowlist gen-key <file> is used to generate a secret key. The public part will be placed in <file>.pub and the secret part in <file>.sec. Both will be created with 0400 permissions, so that they are not accidentally overwritten or exposed. This operation requires a trusted computer. The public key file can be freely redistributed, while the secret key file should never leave the machine it was generated on.

You can now sign a textual allowlist file with ledgeracio-allowlist sign. A textual allowlist file has one SS58 address per line. Leading and trailing whitespace is stripped. If the first non-whitespace character on a line is # or ;, or if the line is empty or consists entirely of whitespace, it is considered to be a comment and ignored.
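The textual format described above — one SS58 address per line, # or ; comments, blank lines ignored, whitespace stripped — is simple to parse. A Python sketch of those rules (Ledgeracio itself is written in Rust; this is only a model of the format):

```python
def parse_allowlist(text):
    """Parse a textual allowlist: one SS58 address per line.

    Leading/trailing whitespace is stripped; empty lines and lines whose
    first non-whitespace character is '#' or ';' are treated as comments
    and ignored.
    """
    addresses = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in '#;':
            continue  # comment or blank line
        addresses.append(line)
    return addresses

allowlist = parse_allowlist("""
# validators we trust
5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY
; disabled for now
  5FHneW46xGXgs5mUiveU4sbTyGBzmstUspZC92UhjJM694ty
""")
```

Note that indentation does not matter: the second address above is kept even though it is indented, because whitespace is stripped before the comment check.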

ledgeracio-allowlist sign is invoked as follows:

ledgeracio-allowlist --network <network> sign --file <file> --nonce <nonce> --output <output> --secret <secret>

<file> is the allowlist file. <nonce> is the nonce, which is incorporated into the signed allowlist file named <output>. Ledgeracio apps keep track of the nonce of the most recent allowlist uploaded, and reject new uploads unless the new allowlist has a nonce higher than the old one. Nonces do not need to be contiguous, so skipping a nonce is okay. Signed allowlists are stored in a binary format.
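The nonce rule — each upload must carry a strictly higher nonce than the last accepted one, with gaps allowed — is easy to model. A sketch of the device-side check (illustrative only, not Ledgeracio's code):

```python
class AllowlistSlot:
    """Models the app's replay protection: track the last accepted nonce."""

    def __init__(self):
        self.current_nonce = None
        self.allowlist = None

    def upload(self, allowlist, nonce):
        # The first upload accepts any nonce; later uploads must strictly increase.
        if self.current_nonce is not None and nonce <= self.current_nonce:
            raise ValueError(
                f"nonce {nonce} is not greater than current nonce {self.current_nonce}"
            )
        self.current_nonce = nonce
        self.allowlist = allowlist

slot = AllowlistSlot()
slot.upload(["addr1"], nonce=1)
slot.upload(["addr1", "addr2"], nonce=5)  # skipping nonces 2-4 is fine
```

Replaying an old signed allowlist (same or lower nonce) is rejected, which is the point of the scheme.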

Device provisioning

ledgeracio-allowlist is also used for device provisioning. To set the allowlist, use ledgeracio-allowlist set-key. This command will only succeed once. If an allowlist has already been uploaded, it will fail. The only way to change the allowlist signing key is to reinstall the Ledgeracio app, which does not result in any funds being lost.

ledgeracio-allowlist upload is used to upload an allowlist. The uploaded allowlist must have a nonce that is greater than the nonce of the previous allowlist. If there was no previous allowlist, any nonce is allowed.

To verify the signature of a binary allowlist file, use ledgeracio-allowlist inspect. This also displays the allowlist on stdout.

Ledgeracio Use

ledgeracio is used for staking operations. Before accounts on a Ledger device can be used for staking, they must be chosen as a controller account. You can obtain the address by running ledgeracio <validator|nominator> address. The address can be directly pasted into a GUI tool, such as Polkadot{.js}.

ledgeracio nominator nominate is used to nominate an approved validator, and ledgeracio validator announce is used to announce intention to validate. ledgeracio [nominator|validator] set-payee is used to set the payment target. ledgeracio [nominator|validator] chill is used to stop staking, while ledgeracio [nominator|validator] show and ledgeracio [nominator|validator] show-address are used to display staking status. The first takes an index, while the second takes an address. show-address does not require a Ledger device. ledgeracio validator replace-key is used to set a validator’s session key.

Subcommand Reference

Allowlist handling: ledgeracio-allowlist

The Ledgeracio app enforces a list of allowed stash accounts. This is managed using the ledgeracio-allowlist command.

Some subcommands involve the generation or use of secret keys, which are stored on disk without encryption. These subcommands MUST NOT be used on untrusted machines. Ideally, they should be run on a machine that is reserved for provisioning of Ledgeracio apps, and which has no access to the Internet.

Key generation: ledgeracio-allowlist gen-key

This command takes one argument: the basename (filename without extension) of the keys to generate. The public key will be given the extension .pub and the secret key the extension .sec. The files will be generated with 0400 permissions, which means that they can only be read by the current user and the system administrator, and they cannot be written to except by the administrator. This is to prevent accidental overwrites.

The public key is not sensitive, and is required by anyone who wishes to verify signed allowlists and operate on the allowed accounts. It will be uploaded to the Ledger device by ledgeracio-allowlist set-key. The secret key allows generating signatures, and therefore must be kept secret. It should never leave the (preferably air gapped) machine it is generated on.
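The 0400 behaviour can be sketched on Unix with the standard library (a hedged illustration, not Ledgeracio's code; the path is hypothetical):

```rust
use std::fs::{self, OpenOptions};
use std::io::Write;
use std::os::unix::fs::PermissionsExt;

/// Create a key file that fails if it already exists, then mark it 0400 so
/// only its owner can read it and later write attempts are refused.
fn write_key_0400(path: &str, bytes: &[u8]) -> std::io::Result<()> {
    // create_new fails if the file already exists, preventing overwrites
    let mut file = OpenOptions::new().create_new(true).write(true).open(path)?;
    file.write_all(bytes)?;
    drop(file);
    fs::set_permissions(path, fs::Permissions::from_mode(0o400))
}

fn main() -> std::io::Result<()> {
    let path = "/tmp/ledgeracio_demo.sec"; // hypothetical path for the demo
    let _ = fs::remove_file(path);
    write_key_0400(path, b"not a real secret key")?;
    let mode = fs::metadata(path)?.permissions().mode() & 0o777;
    assert_eq!(mode, 0o400); // owner read-only
    fs::remove_file(path)
}
```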

Uploading an allowlist signing key to a device: ledgeracio-allowlist set-key

This command takes one argument, the name of the public key file (including extension). The key will be parsed and uploaded to the Ledgeracio app running on the attached Ledger device. If it is not able to do so, Ledgeracio will print an error message and exit with a non-zero status.

If a key has already been uploaded, uploading a new key will fail. The only workaround is to reinstall the Ledgeracio app. This does not forfeit any funds stored on the device. We strongly recommend using separate Ledger devices for Ledgeracio and cold storage.

The user will be required to confirm the upload via the Ledger UI. This allows the user to check that the correct key has been uploaded, instead of a key chosen by an attacker who has compromised the user’s machine.

Retrieving the uploaded key: ledgeracio-allowlist get-key

This command takes no arguments. The public key that has been uploaded will be retrieved and printed to stdout. If no public key has been uploaded, or if the app is not the Ledgeracio app, an error will be returned.

Signing an allowlist: ledgeracio-allowlist sign

This command takes the following arguments. All of them are mandatory.

  • --file <file>: the textual allowlist file to sign. See the Allowlist signing section above for its format.
  • --nonce <nonce>: The nonce to sign the file with. The nonce must be greater than the previous nonce, or the Ledgeracio app will reject the allowlist.
  • --output <output>: The name of the output file to write.
  • --secret <secret>: The name of the secret key file.

Inspecting a signed allowlist: ledgeracio-allowlist inspect

This command takes two arguments. Both of them are mandatory.

  • --file <file>: The name of the signed allowlist to inspect.
  • --public <public>: The name of the public key file that signed the allowlist. This command will fail if the signature cannot be verified.

Uploading an allowlist: ledgeracio-allowlist upload

This command takes one argument: the filename of the signed binary allowlist to upload. The command will fail if any of the following occurs:

  • There is no Ledger device connected.
  • The attached device is not running the Ledgeracio app.
  • The Ledgeracio app refuses the operation.

The Ledgeracio app will refuse the operation if:

  • No signing key has been uploaded.
  • The allowlist has not been signed by the public key stored in the app.
  • The nonce is not greater than that of the previously uploaded allowlist. If no allowlist has been previously uploaded, any nonce is allowed.
  • The user refuses the operation.

Metadata inspection: ledgeracio metadata

This command takes no arguments. It pretty-prints the chain metadata to stdout. It is primarily intended for debugging. Requires a network connection.

Properties inspection: ledgeracio properties

This command takes no arguments. It pretty-prints the chain properties to stdout. It is primarily intended for debugging. Requires a network connection.

Nominator operations: ledgeracio nominator

This command performs operations using nominator keys, that is, keys on a nominator derivation path. Requires a network connection. The following subcommands are available:

Displaying the address at an index: ledgeracio nominator address

This command takes an index as a parameter. The address on the device corresponding to that index is displayed on stdout.

Showing a nominator controller: ledgeracio nominator show

This command takes an index as parameter, and displays information about the corresponding nominator controller account.

Showing a nominator controller address: ledgeracio nominator show-address

This command takes an SS58-formatted address as parameter, and displays information about the corresponding nominator controller account. It does not require a Ledger device.

Nominating a new validator set: ledgeracio nominator nominate

This command takes an index followed by a list of SS58-formatted addresses. It uses the account at the provided index to nominate the provided validator stash accounts.

The user must confirm this action on the Ledger device. For security reasons, users MUST confirm that the addresses displayed on the device are the intended ones. A compromised host machine can send a set of accounts that is not the ones the user intended. If any of the addresses sent to the device are not on the allowlist, the transaction will not be signed.

Stopping nomination: ledgeracio nominator chill

This command stops the account at the provided index from nominating.

The user must confirm this action on the Ledger device.

Setting a payment target: ledgeracio nominator set-payee

This command takes an index as argument, and sets the payment target. The target must be one of Stash, Staked, or Controller (case-insensitive).

Validator operations: ledgeracio validator

This command handles validator operations. It requires a network connection, and has the following subcommands:

Displaying a validator address: ledgeracio validator address <index>

This command displays the address of the validator controller account at the given index.

Announcing an intention to validate: ledgeracio validator announce <index> [commission]

This command announces that the controller account at <index> intends to validate. An optional commission (as a decimal between 0 and 1 inclusive) may also be provided. If none is supplied, it defaults to 1, or 100%.
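For intuition, a decimal commission maps to parts-per-billion, which is the precision Substrate's Perbill type uses on-chain. This hypothetical helper (not part of ledgeracio) shows the arithmetic:

```rust
/// Hypothetical helper: convert a commission given as a decimal in [0, 1]
/// into parts-per-billion, rejecting out-of-range values.
fn commission_to_parts_per_billion(c: f64) -> Option<u32> {
    if (0.0..=1.0).contains(&c) {
        Some((c * 1_000_000_000.0).round() as u32)
    } else {
        None
    }
}

fn main() {
    assert_eq!(commission_to_parts_per_billion(0.05), Some(50_000_000)); // 5%
    assert_eq!(commission_to_parts_per_billion(1.0), Some(1_000_000_000)); // the default: 100%
    assert_eq!(commission_to_parts_per_billion(1.5), None); // out of range
}
```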

Cease validation: ledgeracio validator chill

This command stops validation.

The user must confirm this action on the Ledger device.

Setting the payment target: ledgeracio validator set-payee

This command is the validator version of ledgeracio nominator set-payee. See its documentation for details.

Displaying information on a given validator: ledgeracio validator show

This command is the validator version of ledgeracio nominator show. See its documentation for details.

Displaying information on a given validator address: ledgeracio validator show-address

This command is the validator version of ledgeracio nominator show-address. See its documentation for details.

Rotating a session key: ledgeracio validator replace-key <index> <keys>

This command sets the session keys of the validator controlled by the account at <index>. The keys must be in hexadecimal, as returned by the key rotation RPC call.
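The hexadecimal blob typically comes from the node's key-rotation RPC (author_rotateKeys) with a 0x prefix. A minimal decoder sketch with no external crates, for illustration only:

```rust
/// Decode a possibly 0x-prefixed hex string into raw bytes.
/// Returns None for non-ASCII input, odd length, or non-hex digits.
fn decode_hex(s: &str) -> Option<Vec<u8>> {
    let s = s.strip_prefix("0x").unwrap_or(s);
    if !s.is_ascii() || s.len() % 2 != 0 {
        return None;
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).ok())
        .collect()
}

fn main() {
    assert_eq!(decode_hex("0xdeadbeef"), Some(vec![0xde, 0xad, 0xbe, 0xef]));
    assert_eq!(decode_hex("0xabc"), None); // odd length is invalid
}
```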

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate 

Ledgeracio: CLI for Use with The Ledger Staking App

MultiSigil: Substrate Multisig Address Calculator for Your CLI

It is basically what it says on the tin. Since Substrate multisig addresses are deterministic, MultiSigil doesn't need to do any network connections — and can be used even before the chain has been started.
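The determinism can be illustrated with a toy hash. The real on-chain derivation uses blake2_256 over a SCALE-encoded tuple of the b"modlpy/utilisuba" prefix, the sorted signatories, and the threshold; this sketch uses std's hasher purely to demonstrate the order-independence property:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustration only: sorting the signatories first means the derived address
/// does not depend on the order the addresses are supplied in.
fn demo_multisig_id(threshold: u16, mut signatories: Vec<&str>) -> u64 {
    signatories.sort();
    let mut hasher = DefaultHasher::new();
    (b"modlpy/utilisuba".as_slice(), signatories, threshold).hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let id = demo_multisig_id(2, vec!["alice", "bob", "charlie"]);
    // same set and threshold, different order => same address
    assert_eq!(id, demo_multisig_id(2, vec!["charlie", "alice", "bob"]));
    // a different threshold yields a different address
    assert_ne!(id, demo_multisig_id(3, vec!["alice", "bob", "charlie"]));
}
```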


$ multi-sigil --help

multi-sigil 0.1.0
Parity Technologies <>
CLI for generating Substrate multisig addresses

    multi-sigil [OPTIONS] <THRESHOLD> <ADDRESSES>...

    <THRESHOLD>       The number of signatures needed to perform the operation
    <ADDRESSES>...    The addresses to use

    -h, --help       Prints help information
    -V, --version    Prints version information

        --network <NETWORK>    Network to calculate multisig for; defaults to Kusama [default: kusama]  [possible
                               values: kusama, polkadot]

Supported networks

Currently only Kusama and Polkadot are supported.

It should be fairly trivial to add support of other networks from the list of supported in SS58 — PRs are welcome!

Download Details:
Author: paritytech
Source Code:
License: Apache-2.0 License

#blockchain  #polkadot  #smartcontract  #substrate 

MultiSigil: Substrate Multisig Address Calculator for Your CLI

Substrate Air-gapped: Vulnerable Decryption and Signing tools

[WIP] Substrate Airgapped

Tools to facilitate an air-gapped construction, decoding, and signing flow for transactions of FRAME-based chains.


  • substrate-airgapped-cli: CLI that combines all functionality of the available substrate-airgapped libraries.
  • substrate-airgapped: Where core components & functionality are being built out.
  • substrate-metadata: A wrapper around runtime metadata that can be used to programmatically get the call index of a transaction.



Please file an issue for any questions, feature requests, or additional examples

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate 

Substrate Air-gapped: Vulnerable Decryption and Signing tools

A Project to Enable Writing Substrate Integration Tests Easily

Substrate Test Runner

Allows you to test 

  • Migrations
  • Runtime Upgrades
  • Pallets and general runtime functionality.

This works by running a full node with a ManualSeal-BABE™ hybrid consensus for block authoring.

The test runner provides two APIs of note:

  • seal_blocks(count: u32)

This tells manual seal authorship task running on the node to author count number of blocks, including any transactions in the transaction pool in those blocks.

  • submit_extrinsic<T: frame_system::Config>(call: impl Into<T::Call>, from: T::AccountId)

Providing a Call and an AccountId, this creates an UncheckedExtrinsic with an empty signature and sends it to the node to be included in a future block.


The running node has no signature verification, which allows us to author extrinsics for any account on chain.

How do I Use this?

/// tons of ignored imports
use substrate_test_runner::{TestRequirements, Node};

struct Requirements;

impl TestRequirements for Requirements {
    /// Provide a Block type with an OpaqueExtrinsic
    type Block = polkadot_core_primitives::Block;
    /// Provide an Executor type for the runtime
    type Executor = polkadot_service::PolkadotExecutor;
    /// Provide the runtime itself
    type Runtime = polkadot_runtime::Runtime;
    /// A touch of runtime api
    type RuntimeApi = polkadot_runtime::RuntimeApi;
    /// A pinch of SelectChain implementation
    type SelectChain = sc_consensus::LongestChain<TFullBackend<Self::Block>, Self::Block>;
    /// A slice of concrete BlockImport type
    type BlockImport = BlockImport<
        TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>,
        // remaining type parameters elided
    >;
    /// and a dash of SignedExtensions
    type SignedExtension = SignedExtra;

    /// Load the chain spec for your runtime here.
    fn load_spec() -> Result<Box<dyn sc_service::ChainSpec>, String> {
        let wasm_binary = polkadot_runtime::WASM_BINARY.ok_or("Polkadot development wasm not available")?;
        // build a development chain spec from the genesis config, e.g. with
        // move || polkadot_development_config_genesis(wasm_binary)
        // (construction elided)
    }

    /// Optionally provide the base path if you want to fork an existing chain.
    // fn base_path() -> Option<&'static str> {
    //     Some("/home/seun/.local/share/polkadot")
    // }

    /// Create your signed extras here.
    fn signed_extras(
        from: <Self::Runtime as frame_system::Config>::AccountId,
    ) -> Self::SignedExtension {
        let nonce = frame_system::Module::<Self::Runtime>::account_nonce(from);
        // construct the SignedExtra tuple using `nonce` (elided)
    }

    /// The function signature tells you all you need to know. ;)
    fn create_client_parts(config: &Configuration) -> Result<
        (
            Arc<TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>>,
            // ... other parts elided ...
            Box<
                dyn ConsensusDataProvider<
                    Transaction = TransactionFor<
                        TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>,
                        Self::Block,
                    >,
                >,
            >,
        ),
        sc_service::Error,
    > {
        let (client, backend, keystore, task_manager) =
            new_full_parts::<Self::Block, Self::RuntimeApi, Self::Executor>(config)?;
        let client = Arc::new(client);

        let inherent_providers = InherentDataProviders::new();
        let select_chain = sc_consensus::LongestChain::new(backend.clone());

        let (grandpa_block_import, ..) =
            sc_finality_grandpa::block_import(client.clone(), &(client.clone() as Arc<_>), select_chain.clone())?;

        let (block_import, babe_link) = sc_consensus_babe::block_import(
            // configuration elided
        )?;

        let consensus_data_provider = BabeConsensusDataProvider::new(
            // client, babe_link, inherent providers and an authority set, e.g.
            vec![(AuthorityId::from(Alice.public()), 1000)],
        )
        .expect("failed to create ConsensusDataProvider");

        // assemble and return the parts (elided)
    }
}

/// And now for the most basic test
#[test]
fn simple_balances_test() {
    // given
    let mut node = Node::<Requirements>::new();

    type Balances = pallet_balances::Module<Runtime>;

    let (alice, bob) = (Sr25519Keyring::Alice.pair(), Sr25519Keyring::Bob.pair());
    let (alice_account_id, bob_account_id) = (
        MultiSigner::from(alice.public()).into_account(),
        MultiSigner::from(bob.public()).into_account(),
    );

    /// the function with_state allows us to read state, pretty cool right? :D
    let old_balance = node.with_state(|| Balances::free_balance(alice_account_id.clone()));

    // 70 dots
    let amount = 70_000_000_000_000;

    /// Send extrinsic in action.
    node.submit_extrinsic(BalancesCall::transfer(bob_account_id.clone(), amount), alice_account_id.clone());

    /// Produce blocks in action, Powered by manual-seal™.
    node.seal_blocks(1);

    /// we can check the new state :D
    let new_balance = node.with_state(|| Balances::free_balance(alice_account_id));

    /// we can now make assertions on how state has changed.
    assert_eq!(old_balance + amount, new_balance);
}

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate #rust 

A Project to Enable Writing Substrate Integration Tests Easily

Dot Jaeger: Service for Visualizing & Collecting Traces From Parachain

Dot Jaeger

service for visualizing and collecting traces from Parachains.



  • Make sure you can access your JaegerUI Endpoint collecting traces from Parachain Validators.
  • edit the docker-compose.yml prometheus volumes path with your path to prometheus.yml in the dot-jaeger repo
  • Start the external services (Prometheus + Grafana) with
docker-compose up

This starts Prometheus on port 9090 and grafana on port 3000. The Grafana dashboard can be accessed from localhost:3000, with the default login being user: admin password: admin

  • Start dot-jaeger in daemon mode with chosen arguments. The help command may be used for quick docs on the core app or any of the subcommands.
  • Login to local grafana instance, and add dot-jaeger as a Prometheus source.
    • URL: localhost:9090
    • Access: Browser
  • Import the Dashboard from the Repository named Parachain Rococo Candidates-{{bunch of numbers}}
    • dashboard can be manipulated from grafana

Data should start showing up. Grafana update interval can be modified in the top right

Here's a Quick ASCIICast of the dot-jaeger and docker setup process

Recommended number of traces at once: 5-20. Asking for too many traces from the JaegerUI both requests large amounts of data (potentially slowing down any other services) and makes dot-jaeger slower, as it potentially has to sort the parent-child relationship of each span, although this can be configured with the --recurse-children and --recurse-parents CLI options.
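The parent-recursion idea can be sketched as follows (a hedged model of the technique, not dot-jaeger's actual data structures): when a span knows its stage but not its candidate hash (or vice versa), walk up the parent chain until both are known.

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Span {
    parent: Option<u32>,
    candidate_hash: Option<&'static str>,
    stage: Option<u32>,
}

/// Walk up the parent chain until both a candidate hash and a stage are known.
fn resolve(spans: &HashMap<u32, Span>, id: u32) -> Option<(&'static str, u32)> {
    let (mut hash, mut stage, mut cur) = (None, None, Some(id));
    while let Some(i) = cur {
        let span = spans.get(&i)?;
        hash = hash.or(span.candidate_hash);
        stage = stage.or(span.stage);
        if let (Some(h), Some(s)) = (hash, stage) {
            return Some((h, s));
        }
        cur = span.parent;
    }
    None
}

fn main() {
    let mut spans = HashMap::new();
    spans.insert(1, Span { parent: None, candidate_hash: Some("0xabc"), stage: None });
    spans.insert(2, Span { parent: Some(1), candidate_hash: None, stage: Some(3) });
    // span 2 lacks a hash, but its parent supplies one
    assert_eq!(resolve(&spans, 2), Some(("0xabc", 3)));
}
```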


Usage: dot-jaeger [--service <service>] [--url <url>] [--limit <limit>] [--pretty-print] [--lookback <lookback>] <command> [<args>]

Jaeger Trace CLI App

  --service         name a specific node that reports to the Jaeger Agent from
                    which to query traces.
  --url             URL where Jaeger Service runs.
  --limit           maximum number of traces to return.
  --pretty-print    pretty print result
  --lookback        specify how far back in time to look for traces. In format:
                    `1h`, `1d`
  --help            display usage information

  traces            Use when observing many traces
  trace             Use when observing only one trace
  services          List of services reporting to the Jaeger Agent
  daemon            Daemonize Jaeger Trace collection to run at some interval


Usage: dot-jaeger daemon [--frequency <frequency>] [--port <port>] [--recurse-parents] [--recurse-children] [--include-unknown]

Daemonize Jaeger Trace collection to run at some interval

  --frequency       frequency to update jaeger metrics in milliseconds.
  --port            port to expose prometheus metrics at. Default 9186
  --recurse-parents fallback to recursing through parent traces if the current
                    span has one of a candidate hash or stage, but not the
                    other.
  --recurse-children
                    fallback to recursing through child traces if the current
                    span has one of a candidate hash or stage, but not the
                    other. Recursing children is slower than recursing parents.
  --include-unknown include candidates that have a stage but no candidate hash
                    in the prometheus data.
  --help            display usage information


./dot-jaeger --url "http://JaegerUI:16686" --limit 10 --service polkadot-rococo-3-validator-5 daemon --recurse-children


Adding a new Stage

  • Modify Stage enum and associated Into/From implementations to accommodate a new stage
  • Modify Prometheus Gauges to add new stage to Histograms

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate 

Dot Jaeger: Service for Visualizing & Collecting Traces From Parachain

Testbed for Code Size Minimization Strategies in Rust

The Rust Programming Language

This is the main source code repository for Rust. It contains the compiler, standard library, and documentation.

Note: this README is for users rather than contributors. If you wish to contribute to the compiler, you should read the Getting Started section of the rustc-dev-guide instead.

Quick Start

Read "Installation" from The Book.

Installing from Source

The Rust build system uses a Python script called x.py to build the compiler, which manages the bootstrapping process. It lives in the root of the project.

The x.py command can be run directly on most systems in the following format:

./x.py <subcommand> [flags]

This is how the documentation and examples assume you are running x.py.

Systems such as Ubuntu 20.04 LTS do not create the necessary python command by default when Python is installed that allows x.py to be run directly. In that case you can either create a symlink for python (Ubuntu provides the python-is-python3 package for this), or run x.py using Python itself:

# Python 3
python3 x.py <subcommand> [flags]

# Python 2.7
python2.7 x.py <subcommand> [flags]

More information about x.py can be found by running it with the --help flag or by reading the rustc dev guide.

Building on a Unix-like system

  1. Make sure you have installed the dependencies:
  • g++ 5.1 or later or clang++ 3.5 or later
  • python 3 or 2.7
  • GNU make 3.81 or later
  • cmake 3.13.4 or later
  • ninja
  • curl
  • git
  • ssl which comes in libssl-dev or openssl-devel
  • pkg-config if you are compiling on Linux and targeting Linux

2.   Clone the source with git:

git clone https://github.com/rust-lang/rust.git
cd rust

3.   Configure the build settings:

The Rust build system uses a file named config.toml in the root of the source tree to determine various configuration settings for the build. Copy the default config.toml.example to config.toml to get started.

cp config.toml.example config.toml

If you plan to use ./x.py install to create an installation, it is recommended that you set the prefix value in the [install] section to a directory.

Create the install directory first if you are not installing to the default location.

4.   Build and install:

./x.py build && ./x.py install

When complete, ./x.py install will place several programs into $PREFIX/bin: rustc, the Rust compiler, and rustdoc, the API-documentation tool. This install does not include Cargo, Rust's package manager. To build and install Cargo, you may run ./x.py install cargo or set the build.extended key in config.toml to true to build and install all tools.

Building on Windows

There are two prominent ABIs in use on Windows: the native (MSVC) ABI used by Visual Studio, and the GNU ABI used by the GCC toolchain. Which version of Rust you need depends largely on what C/C++ libraries you want to interoperate with: for interop with software produced by Visual Studio use the MSVC build of Rust; for interop with GNU software built using the MinGW/MSYS2 toolchain use the GNU build.


MSYS2 can be used to easily build Rust on Windows:

Grab the latest MSYS2 installer and go through the installer.

Run mingw32_shell.bat or mingw64_shell.bat from wherever you installed MSYS2 (i.e. C:\msys64), depending on whether you want 32-bit or 64-bit Rust. (As of the latest version of MSYS2 you have to run msys2_shell.cmd -mingw32 or msys2_shell.cmd -mingw64 from the command line instead)

From this terminal, install the required tools:

# Update package mirrors (may be needed if you have a fresh install of MSYS2)
pacman -Sy pacman-mirrors

# Install build tools needed for Rust. If you're building a 32-bit compiler,
# then replace "x86_64" below with "i686". If you've already got git, python,
# or CMake installed and in PATH you can remove them from this list. Note
# that it is important that you do **not** use the 'python2', 'cmake' and 'ninja'
# packages from the 'msys2' subsystem. The build has historically been known
# to fail with these packages.
pacman -S git \
            make \
            diffutils \
            tar \
            mingw-w64-x86_64-python \
            mingw-w64-x86_64-cmake \
            mingw-w64-x86_64-gcc \

Navigate to Rust's source code (or clone it), then build it:

./x.py build && ./x.py install


MSVC builds of Rust additionally require an installation of Visual Studio 2017 (or later) so rustc can use its linker. The simplest way is to get Visual Studio and check the “C++ build tools” and “Windows 10 SDK” workloads.

(If you're installing cmake yourself, be careful that “C++ CMake tools for Windows” doesn't get included under “Individual components”.)

With these dependencies installed, you can build the compiler in a cmd.exe shell with:

python x.py build

Currently, building Rust only works with some known versions of Visual Studio. If you have a more recent version installed and the build system doesn't understand, you may need to force rustbuild to use an older version. This can be done by manually calling the appropriate vcvars file before running the bootstrap.

CALL "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"
python x.py build

Specifying an ABI

Each specific ABI can also be used from either environment (for example, using the GNU ABI in PowerShell) by using an explicit build triple. The available Windows build triples are:

  • GNU ABI (using GCC)
    • i686-pc-windows-gnu
    • x86_64-pc-windows-gnu
  • The MSVC ABI
    • i686-pc-windows-msvc
    • x86_64-pc-windows-msvc

The build triple can be specified by either specifying --build=<triple> when invoking commands, or by copying the config.toml file (as described in Installing From Source), and modifying the build option under the [build] section.

Configure and Make

While it's not the recommended build system, this project also provides a configure script and makefile (the latter of which just invokes x.py).

make && sudo make install

When using the configure script, the generated config.mk file may override the config.toml file. To go back to the config.toml file, delete the generated config.mk file.

Building Documentation

If you’d like to build the documentation, it’s almost the same:

./x.py doc

The generated documentation will appear under doc in the build directory for the ABI used. I.e., if the ABI was x86_64-pc-windows-msvc, the directory will be build\x86_64-pc-windows-msvc\doc.


Since the Rust compiler is written in Rust, it must be built by a precompiled "snapshot" version of itself (made in an earlier stage of development). As such, source builds require a connection to the Internet, to fetch snapshots, and an OS that can execute the available snapshot binaries.

Snapshot binaries are currently built and tested on several platforms:

Platform / Architecture                       x86    x86_64
Windows (7, 8, 10, ...)                        ✓       ✓
Linux (kernel 2.6.32, glibc 2.11 or later)     ✓       ✓
macOS (10.7 Lion or later)                    (*)      ✓

(*): Apple dropped support for running 32-bit binaries starting from macOS 10.15 and iOS 11. Due to this decision from Apple, the targets are no longer useful to our users. Please read our blog post for more info.

You may find that other platforms work, but these are our officially supported build environments that are most likely to work.

Getting Help

The Rust community congregates in a few places:


If you are interested in contributing to the Rust project, please take a look at the Getting Started guide in the rustc-dev-guide.


The Rust Foundation owns and protects the Rust and Cargo trademarks and logos (the “Rust Trademarks”).

If you want to use these names or brands, please read the media guide.

Third-party logos may be subject to third-party copyrights and trademarks. See Licenses for details.

Download Details:
Author: paritytech
Source Code:
License: View license

#blockchain  #polkadot  #smartcontract  #substrate 

Testbed for Code Size Minimization Strategies in Rust

Contract Sizes: Comparing EVM vs. WASM Contract Code Sizes

Contract Code Size Comparison

The goal of this repository is to compare the sizes of compiled solidity contracts when compiled to EVM (with solc) versus WASM (with solang).

After some experimentation it turned out that a huge contributor to WASM code sizes is the smaller word size of WASM. Solidity treats 256bit variables as value types and passes them on the stack. Solang generates four 32bit stack accesses to emulate this. In order to improve comparability we do the following:

  • Patch all contracts used for comparisons to not use wide integers (use uint32 everywhere).
  • Pass --value-size 4 --address-size 4 to solang so that 32-bit is used for the builtin types (address, msg.value).
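The word-size effect can be seen in a small sketch: on a 32-bit target, a single 256-bit addition must be emulated with eight 32-bit limb additions plus carry handling, and that expansion applies to every wide operation in the contract.

```rust
/// One 256-bit addition emulated as eight 32-bit limb additions with carry,
/// limbs stored least-significant first.
fn add256(a: [u32; 8], b: [u32; 8]) -> [u32; 8] {
    let mut out = [0u32; 8];
    let mut carry = 0u64;
    for i in 0..8 {
        let sum = a[i] as u64 + b[i] as u64 + carry;
        out[i] = sum as u32;
        carry = sum >> 32;
    }
    out
}

fn main() {
    // u32::MAX + 1 overflows the lowest limb and carries into the next one
    let mut max_limb = [0u32; 8];
    max_limb[0] = u32::MAX;
    let mut one = [0u32; 8];
    one[0] = 1;
    let sum = add256(max_limb, one);
    assert_eq!(sum[0], 0);
    assert_eq!(sum[1], 1);
}
```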

How to use this repository

Put solang in your PATH and run the build script located in the root of this repository. The solc compiler will be downloaded automatically.

Test corpus

The current plan is to use the following sources as a test corpus:

Adding a new contract to the corpus from either of those sources is a time-consuming process because solang isn't a drop-in replacement. It tries hard to be one, but there are some things that won't work on solang: First, almost all contracts use EVM inline assembly, which obviously won't work on a compiler targeting another architecture. Second, differences in builtin types (address, balance) will prevent the compilation of most contracts.

Therefore we need to apply substantial changes to every contract before it can be added to the corpus in order to make it compile and establish comparability.


The following results show the compressed sizes (zstd) of the EVM and WASM targets together with their compression ratio. Wasm relative describes the relative size of the compressed WASM output when compared to the EVM output.

The concatenated row is what we get when we concatenate the uncompressed results of all contracts.
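Assuming "ratio" means compressed size divided by uncompressed size, the reported numbers reduce to simple arithmetic (the sizes below are hypothetical, for illustration only):

```rust
/// "Ratio" here is compressed / uncompressed; "wasm relative" is
/// compressed wasm / compressed evm.
fn ratio(compressed: u64, uncompressed: u64) -> f64 {
    compressed as f64 / uncompressed as f64
}

fn main() {
    let (evm_raw, evm_z) = (10_000u64, 4_000u64);
    let (wasm_raw, wasm_z) = (24_000u64, 6_000u64);
    assert!((ratio(evm_z, evm_raw) - 0.40).abs() < 1e-9);
    assert!((ratio(wasm_z, wasm_raw) - 0.25).abs() < 1e-9);
    let wasm_relative = wasm_z as f64 / evm_z as f64;
    assert!((wasm_relative - 1.5).abs() < 1e-9); // wasm is 1.5x the evm output
}
```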

Used solang version is commit c2a8bd9881e64e41565cdfe088ffe9464c74dae4.

Contract | EVM Compressed | WASM Compressed | EVM Ratio | WASM Ratio | Wasm Relative

Download Details:
Author: paritytech
Source Code:

#blockchain  #polkadot  #smartcontract  #substrate 

Contract Sizes: Comparing EVM vs. WASM Contract Code Sizes

Parity Tokio Ipc: Crate Abstracts interprocess Transport for UNIX


This crate abstracts interprocess transport for UNIX/Windows.

It utilizes unix sockets on UNIX (via tokio::net::UnixStream) and named pipes on windows (via tokio::net::windows::named_pipe module).

Endpoint is a transport-agnostic interface for incoming connections:

use parity_tokio_ipc::Endpoint;
use futures::stream::StreamExt;

// For testing purposes only - instead, use a path to an actual socket or a pipe
let addr = parity_tokio_ipc::dummy_endpoint();

let server = async move {
    Endpoint::new(addr)
        .incoming()
        .expect("Couldn't set up server")
        .for_each(|conn| async {
            match conn {
                Ok(stream) => println!("Got connection!"),
                Err(e) => eprintln!("Error when receiving connection: {:?}", e),
            }
        })
        .await;
};

let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap();
rt.block_on(server);

Download Details:
Author: paritytech
Source Code:
License: View license

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Parity Tokio Ipc: Crate Abstracts interprocess Transport for UNIX

UI for Substrate Bridges in Polkadot

The goal of the UI is to provide users with a convenient way of interacting with the Bridge - querying its state and sending transactions.

Configuring custom Substrate providers / chains

The project includes a .env file at root project directory that contains all the variables for running the bridge UI:


ℹ️In case you need to overwrite any of the variables defined, please do so by creating a new .env.local file.

In case of questions about .env management please refer to this link: create-react-app env files

Custom Hashers for building connections

If either chain (or both) needs a custom hasher function, it can be built and exported from the file src/configs/chainsSetup/customHashers.ts. Then it is just a matter of referencing the function name using the variable REACT_APP_CUSTOM_HASHER_CHAIN_<Chain number> from the .env file.

Running the bridge

Please refer to this section of the Bridges project to run the bridge locally: running-the-bridge



yarn

This will install all the dependencies for the project.

yarn start

Runs the app in the development mode. Open http://localhost:3001 to view it in the browser.

yarn test

Runs the test suite.

yarn lint

Runs the linter & formatter.

Execute E2E test

Puppeteer is used to run the E2E tests for bridges (only Chrome for now).


Before running the test:

a) Have Chrome installed on your computer (the test requires it and will not download it when running).
b) Ensure that in your .env.local file REACT_APP_IS_DEVELOPMENT and REACT_APP_KEYRING_DEV_LOAD_ACCOUNTS are set to true.
c) Make sure all the steps mentioned above have been run in a separate terminal (yarn, then yarn start) and the bridges application is running.
d) In a different terminal window, run the following command:

yarn run test:e2e-alone

customTypes config files process

There is an automated process that downloads all the required types.json files available in the deployments section of the parity-bridges-common repository. This hook runs before the local development server starts, and during the lint/test/build process on deployment. If there is an unexpected issue with this process, you can test it in isolation by running:

yarn prestart

Learn More

For additional information about the Bridges Project please refer to parity-bridges-common repository.


To build the image, run:

docker build -t parity-bridges-ui:dev .

Now that the image is built, the container can be started with the following command, which will serve the app on port 8080.

docker run --rm -it -p 8080:80 parity-bridges-ui:dev

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate #rust 

UI for Substrate Bridges in Polkadot

Substrate Runtime and Contract Interactions for Polkadot

A Substrate node demonstrating two-way interactions between the runtime and Ink! smart contracts.


This Substrate project demonstrates through example how to interact between Substrate runtimes and ink! smart contracts through extrinsic calls and ink! chain extensions.


Sharing Substrate runtime functionality with ink! smart contracts is a powerful feature. Chains with unique runtime functionality can create rich application developer ecosystems by exposing choice pieces of their runtime. The inverse interaction of runtime to ink! smart contract calls may be similarly valuable. Runtime logic can query or set important context information at the smart contracts level.

Both types of interaction described above are frequently asked about in support channels, yet no recent example demonstrating how to perform them had been developed.


If you have not already, it is recommended to go through the ink! smart contracts tutorial or otherwise have written and compiled smart contracts according to the ink! docs. It is also recommended to have some experience with Substrate runtime development.

Ensure you have

  1. Installed Substrate according to the instructions
  2. Run:
rustup component add rust-src --toolchain nightly
rustup target add wasm32-unknown-unknown --toolchain nightly

3.   Installed cargo-contract:

# For Ubuntu or Debian users
sudo apt install binaryen
# For MacOS users
brew install binaryen

cargo install cargo-contract --vers ^0.15 --force --locked

Contract-to-Runtime Interactions

The project demonstrates contract-to-runtime interactions through the use of chain extensions. Chain extensions allow a runtime developer to extend runtime functions to smart contracts. In this example, the functions being extended are a custom pallet extrinsic and the pallet_balances::transfer extrinsic.

See also the rand-extension chain extension code example, which this project builds on.
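In a chain extension, the contract passes a numeric function ID that the runtime matches on to route the call. The following is a minimal, runtime-free sketch of that dispatch shape; the IDs 1101/1102 and the stub handlers are assumptions for illustration, not this project's actual values:

```rust
/// Errors a chain-extension dispatcher can surface back to the contract.
#[derive(Debug, PartialEq)]
enum ExtensionError {
    UnknownFunction(u32),
}

/// Model of the runtime side: match on the func_id the contract passed
/// and route to the extended runtime functionality.
fn dispatch(func_id: u32, input: u32) -> Result<u32, ExtensionError> {
    match func_id {
        // 1101: store a value via a custom pallet extrinsic (stubbed here).
        1101 => Ok(store_in_runtime(input)),
        // 1102: a balance transfer (stubbed here).
        1102 => Ok(transfer(input)),
        id => Err(ExtensionError::UnknownFunction(id)),
    }
}

fn store_in_runtime(value: u32) -> u32 { value } // stub: echo the stored value
fn transfer(amount: u32) -> u32 { amount }       // stub: echo the transferred amount

fn main() {
    println!("{:?}", dispatch(1101, 7)); // a known func_id routes to its handler
    println!("{:?}", dispatch(9999, 0)); // an unknown func_id is rejected
}
```

In the real pallet, the handlers read their arguments from the contract's environment buffer and write results back, but the routing-by-ID structure is the same.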

Runtime-to-Contract Interactions

Runtime-to-contract interactions are enabled through invocations of pallet_contracts' own bare_call method, invoked from a custom pallet extrinsic. The example extrinsic is called call_smart_contract and is meant to demonstrate calling an existing (uploaded and instantiated) smart contract generically. The caller specifies the account ID of the smart contract to be called, the selector of the smart contract function (found in metadata.json in the compiled contract), and one argument to be passed to the smart contract function.



The cargo run command will perform an initial build. Use the following command to build the node without launching it:

cargo build --release

Smart contracts

To build the included smart contract example, first cd into smart-contracts/example-extension, then run:

cargo +nightly contract build


Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev --tmp

Local Contract Deployment

Once the smart contract is compiled, you may use the hosted Canvas UI. Please follow the Deploy Your Contract guide for specific instructions. This contract uses a default constructor, so there is no need to specify values for its constructor.

You may also use the Polkadotjs Apps UI to upload and instantiate the contract.

Example Usage

Ensure you have uploaded and instantiated the example contract.


Call the set_value smart contract function from a generic pallet extrinsic

  1. Browse to extrinsics in the Polkadotjs apps UI.
  2. Supply the necessary arguments to instruct our extrinsic to call the smart contract function. In the Submission tab, select the templateModule extrinsic and enter the following values:
    • dest: AccountId of the desired contract.
    • selector: 0x00abcdef (note: this denotes the function to call, and is found in smart-contracts/example-extension/target/ink/metadata.json. See more here on the ink! selector macro)
    • arg: some u32 of your choice
    • gasLimit: 10000000000
  3. Submit Transaction -> Sign and Submit.

This extrinsic passes these arguments to the pallet_contracts::bare_call function, which results in our set_value smart contract function being called with the new u32 value. The value can be verified by calling get_value and checking whether the new value is returned.
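The call data handed to pallet_contracts::bare_call is the 4-byte ink! selector followed by the SCALE encoding of the arguments; for a plain u32 argument, SCALE is simply the four little-endian bytes. A sketch of building that buffer by hand (the selector value is taken from the example above and should be treated as illustrative):

```rust
/// Build the input data for a contract call: 4-byte selector ++ SCALE(u32 arg).
/// SCALE encodes a plain u32 as its 4 little-endian bytes.
fn encode_call(selector: [u8; 4], arg: u32) -> Vec<u8> {
    let mut data = selector.to_vec();
    data.extend_from_slice(&arg.to_le_bytes());
    data
}

fn main() {
    // Selector 0x00abcdef with argument 42:
    let data = encode_call([0x00, 0xab, 0xcd, 0xef], 42);
    println!("{:02x?}", data); // [00, ab, cd, ef, 2a, 00, 00, 00]
}
```

This is the same byte layout the extrinsic assembles internally from the selector and arg values entered in the UI.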


Call the insert_number extrinsic from the smart contract

  1. Browse to the Execute page in the hosted Canvas UI
  2. Under chain-extension-example, click Execute.
  3. Under Message to Send, select store_in_runtime.
  4. Enter some u32 to be stored.
  5. Ensure send as transaction is selected.
  6. Click Call

The smart contract function is less generic than the extrinsic used above, and so already knows how to call our custom runtime extrinsic through the chain extension that is set up. You can verify that the contract called the extrinsic by checking the contractEntry storage in the Polkadotjs UI.


To run the tests for the included example pallet, run cargo test in the root.


Build node with benchmarks enabled:

cargo build --release --features runtime-benchmarks

Then, to generate the weights into the pallet template's file:

./target/release/node-template benchmark \
 --chain dev \
 --pallet=pallet_template \
 --extrinsic='*' \
 --repeat=20 \
 --steps=50 \
 --execution wasm \
 --wasm-execution compiled \
 --raw \
 --output pallets/template/src/

Download Details:
Author: paritytech
Source Code:
License: Unlicense License

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Substrate Runtime and Contract Interactions for Polkadot

Decode Substrate with Backwards Compatible Metadata

De[code] Sub[strate]

† This software is experimental, and not intended for production use yet. Use at your own risk.

Encompassing decoder for substrate/polkadot/kusama types.

Gets type definitions from polkadot-js via JSON and decodes them into components that outline types and make decoding byte-strings possible, as long as the module and generic type names are known.

Supports Metadata versions from v8, which means all of Kusama (from CC1). Older networks are not supported (e.g. Alexander).

  • makes decoding generic types from the substrate rpc possible
  • requires parsing JSON with type definitions, and implementing traits TypeDetective and Decoder in order to work for arbitrary chains. However, if the JSON follows the same format as PolkadotJS definitions (look at definitions.json and overrides.json) it would be possible to simply deserialize into Polkadot structs and utilize those. The decoding itself is generic enough to allow it.
  • types must adhere to the conventions set out by polkadot decoding
    • type definitions for Polkadot (Kusama) are taken from Polkadot.js and deserialized into Rust (extras/polkadot)
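To make the byte-string decoding concrete: most Substrate encodings bottom out in SCALE, whose compact integers pack the length mode into the low two bits of the first byte. Below is a minimal decoder for the one- and two-byte compact modes only (the full codec also has four-byte and big-integer modes, omitted from this sketch):

```rust
/// Decode a SCALE compact-encoded integer (single-byte and two-byte modes only).
/// Returns the value and the number of bytes consumed.
fn decode_compact(bytes: &[u8]) -> Option<(u32, usize)> {
    let first = *bytes.first()?;
    match first & 0b11 {
        // Mode 0b00: values 0..=63 in a single byte, value stored in the top 6 bits.
        0b00 => Some(((first >> 2) as u32, 1)),
        // Mode 0b01: values 64..=16383 in two little-endian bytes.
        0b01 => {
            let raw = u16::from_le_bytes([first, *bytes.get(1)?]);
            Some(((raw >> 2) as u32, 2))
        }
        // Four-byte and big-integer modes are not handled in this sketch.
        _ => None,
    }
}

fn main() {
    println!("{:?}", decode_compact(&[0x04]));       // compact(1)
    println!("{:?}", decode_compact(&[0x15, 0x01])); // compact(69)
}
```

desub generalizes this idea: the JSON type definitions tell it which types make up a module's data, and the SCALE rules for each type determine how many bytes to consume and how to interpret them.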

Currently Supported Metadata Versions (From Kusama CC1):

  •  V8
  •  V9
  •  V10
  •  V11
  •  V12
  •  V13
  •  V14

(Tentative) Release & Maintenance

Note: This release description is in no way complete, because of current and active development on both the legacy desub types and the scale-info based types. It is purely here as a record of things that should be taken into account in the future.

  • Depending on changes in legacy desub code, bump version in Cargo.toml for desub/, desub-current/, desub-legacy/, desub-common/, desub-json-resolver/
Note the upgrade-blocks present here and modify the hard-coded upgrade blocks as necessary in the desub file.
  • Take note of PR's that have been merged since the last release.
    • look over CHANGELOG. Make sure to include any PR's that were missed in the Unreleased section.
    • Move changes in Unreleased section to a new section corresponding to the version being released, making sure to keep the Unreleased header.
  • make a PR with these changes
Once the PR is merged, push a tag in the form vX.X.X (e.g. v0.1.0):
git tag v0.1.0
git push --tags origin master
Once the tag is pushed, a GitHub workflow will start that drafts a release. You should be able to find the workflow running under Actions in the GitHub repository.
NOTE: If something goes wrong, that is OK. Delete the tag from the repo, re-create the tag locally, and re-push. The workflow runs whenever a tag with the correct form is pushed. If more changes need to be made to the repo, that will require another PR.
  • Once the workflow finishes, make changes to the resulting draft release if necessary, and hit publish.
Once published on GitHub, publish each crate that has changed to crates.io. Refer to this for how to publish to crates.io.

Download Details:
Author: paritytech
Source Code:
License: GPL-3.0 License

#blockchain  #polkadot  #smartcontract  #substrate #rust 

Decode Substrate with Backwards Compatible Metadata