DIA offchain worker
This offchain worker (OCW) fetches data from an endpoint and writes an event as a signed transaction for all local keys with the subkey type "dia!".
To add the OCW pallet to your node, add it to your runtime like this (already done in this repository):
1. Edit runtime/Cargo.toml and add the following under [dependencies]:
pallet-dia-ocw = { version = "2.0.0", default-features = false, path = "../../../frame/dia-ocw" }
Then add "pallet-dia-ocw/std", under [features]:
[features]
std = [
    [...]
    "pallet-dia-ocw/std",
]
2. Edit runtime/src/lib.rs and add the following:
impl pallet_dia_ocw::Trait for Runtime {
    type Event = Event;
    type Call = Call;
    type AuthorityId = pallet_dia_ocw::crypto::TestAuthId;
}
Insert DIAOCW: pallet_dia_ocw::{Module, Call, Event<T>}, into the Runtime enum:
construct_runtime!(
    pub enum Runtime where
        Block = Block,
        NodeBlock = node_primitives::Block,
        UncheckedExtrinsic = UncheckedExtrinsic
    {
        // ...
        DIAOCW: pallet_dia_ocw::{Module, Call, Event<T>},
    }
);
For each block, this OCW automatically adds a signed transaction. The signer account needs to pay the fees.
Start a development node:
cargo run -- --dev --tmp
Then insert the key for Alice via RPC:
curl http://localhost:9933 -H "Content-Type:application/json;charset=utf-8" -d \
'{
"jsonrpc":"2.0",
"id":1,
"method":"author_insertKey",
"params": [
"dia!",
"bottom drive obey lake curtain smoke basket hold race lonely fit walk//Alice",
"0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d"
]
}'
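For scripted setups, the same RPC call can be made with py-substrate-interface (covered later in this collection); a minimal sketch, assuming the node's default WebSocket port and the key material from the curl example above:
from substrateinterface import SubstrateInterface

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")

# Insert the dia! subkey for Alice (same payload as the curl call above)
substrate.rpc_request("author_insertKey", [
    "dia!",
    "bottom drive obey lake curtain smoke basket hold race lonely fit walk//Alice",
    "0xd43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d",
])

# Watch a few blocks to observe the signed transactions the OCW submits
def handler(obj, update_nr, subscription_id):
    block = substrate.get_block(block_hash=obj['header']['parentHash'])
    for extrinsic in block['extrinsics']:
        print(extrinsic.value['call']['call_module'], extrinsic.value['call']['call_function'])
    if update_nr > 3:
        return True

substrate.subscribe_block_headers(handler)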
Download Details:
Author: diadata-org
Source Code: https://github.com/diadata-org/dia-substrate
License:
This project contains the diadata key/value oracle, written in WASM, which can be deployed to supported Substrate chains.
get: Gets the latest value of an asset symbol, with timestamp.
set: Sets the latest value of an asset; requires price and timestamp. Can be called only by the owner of the contract.
https://github.com/paritytech/cargo-contract
Network: Astar testnet (Shibuya) : YpfUaqH4zMcEo8Kw1egpPrjAGmBDWu1VVTLEEimXr2Kzevb
Set required environment variables
PRIVATE_KEY=
UNLOCK_PASSWORD=
CONTRACT_ADDRESS=
RPC_ADDRESS=
SYMBOLS=
After setting up the environment variables, run these commands to start the service:
cd oracle
npm run build
npm run start
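For a quick check that the service is writing prices, here is a hedged sketch that reads the contract's get message with py-substrate-interface (introduced later in this collection); the metadata file path and the 'symbol' argument name are assumptions:
import os
from substrateinterface import SubstrateInterface, Keypair
from substrateinterface.contracts import ContractInstance

substrate = SubstrateInterface(url=os.environ["RPC_ADDRESS"])

contract = ContractInstance.create_from_address(
    contract_address=os.environ["CONTRACT_ADDRESS"],
    metadata_file="metadata.json",  # assumed path to the metadata produced by cargo-contract
    substrate=substrate,
)

# 'symbol' is an assumed argument name; check the contract metadata for the real one
result = contract.read(Keypair.create_from_uri('//Alice'), 'get', args={'symbol': 'BTC'})
print(result.contract_result_data)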
Download Details:
Author: diadata-org
Source Code: https://github.com/diadata-org/dia-wasm-oracle
License:
Python Substrate Interface
This library specializes in interfacing with a Substrate node, providing additional convenience methods to deal with SCALE encoding/decoding (the default output and input format of the Substrate JSONRPC), metadata parsing, type registry management and versioning of types.
https://polkascan.github.io/py-substrate-interface/
pip install substrate-interface
The following examples show how to initialize for supported chains:
substrate = SubstrateInterface(
url="wss://rpc.polkadot.io"
)
When only a URL is provided, it tries to determine certain properties like ss58_format and type_registry_preset automatically by calling the RPC method system_properties.
At the moment this will work for most MetadataV14 and above chains like Polkadot, Kusama, Acala and Moonbeam; for other chains the ss58_format (default 42) and type_registry (defaults to the latest vanilla Substrate types) should be set manually.
Polkadot
substrate = SubstrateInterface(
url="wss://rpc.polkadot.io",
ss58_format=0,
type_registry_preset='polkadot'
)
Kusama
substrate = SubstrateInterface(
url="wss://kusama-rpc.polkadot.io/",
ss58_format=2,
type_registry_preset='kusama'
)
Rococo
substrate = SubstrateInterface(
url="wss://rococo-rpc.polkadot.io",
ss58_format=42,
type_registry_preset='rococo'
)
Westend
substrate = SubstrateInterface(
url="wss://westend-rpc.polkadot.io",
ss58_format=42,
type_registry_preset='westend'
)
Compatible with https://github.com/substrate-developer-hub/substrate-node-template
substrate = SubstrateInterface(
url="ws://127.0.0.1:9944",
ss58_format=42,
type_registry_preset='substrate-node-template'
)
# Set block_hash to None for chaintip
block_hash = "0x51d15792ff3c5ee9c6b24ddccd95b377d5cccc759b8e76e5de9250cf58225087"

# Retrieve extrinsics in block
result = substrate.get_block(block_hash=block_hash)

for extrinsic in result['extrinsics']:
    if 'address' in extrinsic.value:
        signed_by_address = extrinsic.value['address']
    else:
        signed_by_address = None

    print('\nPallet: {}\nCall: {}\nSigned by: {}'.format(
        extrinsic.value["call"]["call_module"],
        extrinsic.value["call"]["call_function"],
        signed_by_address
    ))

    # Loop through call params
    for param in extrinsic.value["call"]['call_args']:
        if param['type'] == 'Balance':
            param['value'] = '{} {}'.format(param['value'] / 10 ** substrate.token_decimals, substrate.token_symbol)

        print("Param '{}': {}".format(param['name'], param['value']))
The same block can also be traversed through ScaleType object access instead of the serialized .value dict:

# Set block_hash to None for chaintip
block_hash = "0x51d15792ff3c5ee9c6b24ddccd95b377d5cccc759b8e76e5de9250cf58225087"

# Retrieve extrinsics in block
result = substrate.get_block(block_hash=block_hash)

for extrinsic in result['extrinsics']:
    if 'address' in extrinsic:
        signed_by_address = extrinsic['address'].value
    else:
        signed_by_address = None

    print('\nPallet: {}\nCall: {}\nSigned by: {}'.format(
        extrinsic["call"]["call_module"].name,
        extrinsic["call"]["call_function"].name,
        signed_by_address
    ))

    # Loop through call params
    for param in extrinsic["call"]['call_args']:
        if param['type'] == 'Balance':
            param['value'] = '{} {}'.format(param['value'] / 10 ** substrate.token_decimals, substrate.token_symbol)

        print("Param '{}': {}".format(param['name'], param['value']))
def subscription_handler(obj, update_nr, subscription_id):
    print(f"New block #{obj['header']['number']} produced by {obj['author']}")

    if update_nr > 10:
        return {'message': 'Subscription will cancel when a value is returned', 'updates_processed': update_nr}

result = substrate.subscribe_block_headers(subscription_handler, include_author=True)
The modules and storage functions are provided in the metadata (see substrate.get_metadata_storage_functions()); parameters will be automatically converted to SCALE bytes (this also includes decoding of SS58 addresses).
result = substrate.query(
module='System',
storage_function='Account',
params=['F4xQKRUagnSGjFqafyhajLs94e7Vvzvr8ebwYJceKpr8R7T']
)
print(result.value['nonce']) # 7695
print(result.value['data']['free']) # 635278638077956496
account_info = substrate.query(
module='System',
storage_function='Account',
params=['F4xQKRUagnSGjFqafyhajLs94e7Vvzvr8ebwYJceKpr8R7T'],
block_hash='0x176e064454388fd78941a0bace38db424e71db9d5d5ed0272ead7003a02234fa'
)
print(account_info['nonce'].value) # 7673
print(account_info['data']['free'].value) # 637747267365404068
To retrieve more information about how to format the parameters of a storage function:
storage_function = substrate.get_metadata_storage_function("Tokens", "TotalIssuance")

print(storage_function.get_param_info())
# [{'variant': {'variants': [{'name': 'Token', 'fields': [{'name': None, 'type': 44, 'typeName': 'TokenSymbol', 'docs': []}], 'index': 0, 'docs': [], 'value': {'variant': {'variants': [{'name': 'ACA', 'fields': [], 'index': 0, 'docs': []}, {'name': 'AUSD', 'fields': [], 'index': 1, 'docs': []}, {'name': 'DOT', 'fields': [], 'index': 2, 'docs': []}, {'name': 'LDOT', 'fields': [], 'index': 3, 'docs': []}, {'name': 'RENBTC', 'fields': [], 'index': 20, 'docs': []}, {'name': 'CASH', 'fields': [], 'index': 21, 'docs': []}, {'name': 'KAR', 'fields': [], 'index': 128, 'docs': []}, {'name': 'KUSD', 'fields': [], 'index': 129, 'docs': []}, {'name': 'KSM', 'fields': [], 'index': 130, 'docs': []}, {'name': 'LKSM', 'fields': [], 'index': 131, 'docs': []}, {'name': 'TAI', 'fields': [], 'index': 132, 'docs': []}, {'name': 'BNC', 'fields': [], 'index': 168, 'docs': []}, {'name': 'VSKSM', 'fields': [], 'index': 169, 'docs': []}, {'name': 'PHA', 'fields': [], 'index': 170, 'docs': []}, {'name': 'KINT', 'fields': [], 'index': 171, 'docs': []}, {'name': 'KBTC', 'fields': [], 'index': 172, 'docs': []}]}}}, {'name': 'DexShare', 'fields': [{'name': None, 'type': 45, 'typeName': 'DexShare', 'docs': []}, {'name': None, 'type': 45, 'typeName': 'DexShare', 'docs': []}], 'index': 1, 'docs': [], 'value': {'variant': {'variants': [{'name': 'Token', 'fields': [{'name': None, 'type': 44, 'typeName': 'TokenSymbol', 'docs': []}], 'index': 0, 'docs': [], 'value': {'variant': {'variants': [{'name': 'ACA', 'fields': [], 'index': 0, 'docs': []}, {'name': 'AUSD', 'fields': [], 'index': 1, 'docs': []}, {'name': 'DOT', 'fields': [], 'index': 2, 'docs': []}, {'name': 'LDOT', 'fields': [], 'index': 3, 'docs': []}, {'name': 'RENBTC', 'fields': [], 'index': 20, 'docs': []}, {'name': 'CASH', 'fields': [], 'index': 21, 'docs': []}, {'name': 'KAR', 'fields': [], 'index': 128, 'docs': []}, {'name': 'KUSD', 'fields': [], 'index': 129, 'docs': []}, {'name': 'KSM', 'fields': [], 'index': 130, 'docs': []}, {'name': 'LKSM', 'fields': [], 'index': 131, 'docs': []}, {'name': 'TAI', 'fields': [], 'index': 132, 'docs': []}, {'name': 'BNC', 'fields': [], 'index': 168, 'docs': []}, {'name': 'VSKSM', 'fields': [], 'index': 169, 'docs': []}, {'name': 'PHA', 'fields': [], 'index': 170, 'docs': []}, {'name': 'KINT', 'fields': [], 'index': 171, 'docs': []}, {'name': 'KBTC', 'fields': [], 'index': 172, 'docs': []}]}}}, {'name': 'Erc20', 'fields': [{'name': None, 'type': 46, 'typeName': 'EvmAddress', 'docs': []}], 'index': 1, 'docs': [], 'value': {'composite': {'fields': [{'name': None, 'type': 47, 'typeName': '[u8; 20]', 'docs': [], 'value': {'array': {'len': 20, 'type': 2, 'value': {'primitive': 'u8'}}}}]}}}, {'name': 'LiquidCrowdloan', 'fields': [{'name': None, 'type': 4, 'typeName': 'Lease', 'docs': []}], 'index': 2, 'docs': [], 'value': {'primitive': 'u32'}}, {'name': 'ForeignAsset', 'fields': [{'name': None, 'type': 36, 'typeName': 'ForeignAssetId', 'docs': []}], 'index': 3, 'docs': [], 'value': {'primitive': 'u16'}}]}}}, {'name': 'Erc20', 'fields': [{'name': None, 'type': 46, 'typeName': 'EvmAddress', 'docs': []}], 'index': 2, 'docs': [], 'value': {'composite': {'fields': [{'name': None, 'type': 47, 'typeName': '[u8; 20]', 'docs': [], 'value': {'array': {'len': 20, 'type': 2, 'value': {'primitive': 'u8'}}}}]}}}, {'name': 'StableAssetPoolToken', 'fields': [{'name': None, 'type': 4, 'typeName': 'StableAssetPoolId', 'docs': []}], 'index': 3, 'docs': [], 'value': {'primitive': 'u32'}}, {'name': 'LiquidCrowdloan', 'fields': [{'name': None, 'type': 4, 
'typeName': 'Lease', 'docs': []}], 'index': 4, 'docs': [], 'value': {'primitive': 'u32'}}, {'name': 'ForeignAsset', 'fields': [{'name': None, 'type': 36, 'typeName': 'ForeignAssetId', 'docs': []}], 'index': 5, 'docs': [], 'value': {'primitive': 'u16'}}]}}]
The query_map() function can also be used to see examples of used parameters:
result = substrate.query_map("Tokens", "TotalIssuance")
print(list(result))
# [[<scale_info::43(value={'DexShare': ({'Token': 'KSM'}, {'Token': 'LKSM'})})>, <U128(value=11513623028320124)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'Token': 'BNC'})})>, <U128(value=2689948474603237982)>], [<scale_info::43(value={'DexShare': ({'Token': 'KSM'}, {'ForeignAsset': 0})})>, <U128(value=5285939253205090)>], [<scale_info::43(value={'Token': 'VSKSM'})>, <U128(value=273783457141483)>], [<scale_info::43(value={'DexShare': ({'Token': 'KAR'}, {'Token': 'KSM'})})>, <U128(value=1175872380578192993)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'Token': 'KSM'})})>, <U128(value=3857629383220790030)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'ForeignAsset': 0})})>, <U128(value=494116000924219532)>], [<scale_info::43(value={'Token': 'KSM'})>, <U128(value=77261320750464113)>], [<scale_info::43(value={'Token': 'TAI'})>, <U128(value=10000000000000000000)>], [<scale_info::43(value={'Token': 'LKSM'})>, <U128(value=681009957030687853)>], [<scale_info::43(value={'DexShare': ({'Token': 'KUSD'}, {'Token': 'LKSM'})})>, <U128(value=4873824439975242272)>], [<scale_info::43(value={'Token': 'KUSD'})>, <U128(value=5799665835441836111)>], [<scale_info::43(value={'ForeignAsset': 0})>, <U128(value=2319784932899895)>], [<scale_info::43(value={'DexShare': ({'Token': 'KAR'}, {'Token': 'LKSM'})})>, <U128(value=635158183535133903)>], [<scale_info::43(value={'Token': 'BNC'})>, <U128(value=1163757660576711961)>]]
The result of the previous storage query example is a ScaleType object, more specifically a Struct. The nested object structure of this account_info object is as follows:
account_info = <AccountInfo(value={'nonce': <U32(value=5)>, 'consumers': <U32(value=0)>, 'providers': <U32(value=1)>, 'sufficients': <U32(value=0)>, 'data': <AccountData(value={'free': 1152921503981846391, 'reserved': 0, 'misc_frozen': 0, 'fee_frozen': 0})>})>
Inside the AccountInfo struct there are several U32 objects that represent, for example, a nonce or the number of providers, as well as another struct object AccountData which contains more nested types.
These nested structures can be accessed formally using:
account_info.value_object['data'].value_object['free']
As a convenient shorthand you can also use:
account_info['data']['free']
ScaleType objects can also be converted automatically to an iterable, so if the object is, for example, the others list in the result Struct of Staking.eraStakers, it can be iterated via:
for other_info in era_stakers['others']:
    print(other_info['who'], other_info['value'])
Each ScaleType holds a complete serialized version of itself, available via account_info.serialize(), so it can easily be stored or used to create JSON strings. The whole result of account_info.serialize() will be a dict containing the following:
{
    "nonce": 5,
    "consumers": 0,
    "providers": 1,
    "sufficients": 0,
    "data": {
        "free": 1152921503981846391,
        "reserved": 0,
        "misc_frozen": 0,
        "fee_frozen": 0
    }
}
Comparing ScaleType objects: it is possible to compare ScaleType objects directly to Python primitives; internally, the serialized value attribute is compared:
metadata_obj[1][1]['extrinsic']['version'] # '<U8(value=4)>'
metadata_obj[1][1]['extrinsic']['version'] == 4 # True
When a callable is passed as the kwarg subscription_handler, a subscription will be created for the given storage query. Updates will be pushed to the callable, and execution will block until a final value is returned. This value will be returned as the result of the query, after which the subscription is automatically cancelled.
def subscription_handler(account_info_obj, update_nr, subscription_id):

    if update_nr == 0:
        print('Initial account data:', account_info_obj.value)

    if update_nr > 0:
        # Do something with the update
        print('Account data changed:', account_info_obj.value)

    # The execution will block until an arbitrary value is returned, which will be the result of the `query`
    if update_nr > 5:
        return account_info_obj

result = substrate.query("System", "Account", ["5GNJqTPyNqANBkUVMN1LPPrxXnFouWXoe2wNSmmEoLctxiZY"],
                         subscription_handler=subscription_handler)
print(result)
Mapped storage functions can be iterated over all their key/value pairs; for these types of storage functions, query_map can be used. The result is a QueryMapResult object, which is an iterator:
# Retrieve the first 199 System.Account entries
result = substrate.query_map('System', 'Account', max_results=199)

for account, account_info in result:
    print(f"Free balance of account '{account.value}': {account_info.value['data']['free']}")
These results are transparently retrieved in batches capped by the page_size kwarg; currently the maximum page_size allowed by the RPC node is 1000.
# Retrieve all System.Account entries in batches of 200 (automatically appended by `QueryMapResult` iterator)
result = substrate.query_map('System', 'Account', page_size=200, max_results=400)

for account, account_info in result:
    print(f"Free balance of account '{account.value}': {account_info.value['data']['free']}")
Querying a DoubleMap storage function:
era_stakers = substrate.query_map(
    module='Staking',
    storage_function='ErasStakers',
    params=[2100]
)
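The result is again a QueryMapResult and can be iterated the same way; a short sketch, assuming the usual Exposure value layout ('total', 'own', 'others'):
for validator_account, exposure in era_stakers:
    print(f"{validator_account.value}: total stake {exposure['total'].value}")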
The following code snippet illustrates how to create a call, wrap it in a signed extrinsic and send it to the network:
from substrateinterface import SubstrateInterface, Keypair
from substrateinterface.exceptions import SubstrateRequestException
substrate = SubstrateInterface(
url="ws://127.0.0.1:9944",
ss58_format=42,
type_registry_preset='kusama'
)
keypair = Keypair.create_from_mnemonic('episode together nose spoon dose oil faculty zoo ankle evoke admit walnut')
call = substrate.compose_call(
call_module='Balances',
call_function='transfer',
call_params={
'dest': '5E9oDs9PjpsBbxXxRE9uMaZZhnBAV38n2ouLB28oecBDdeQo',
'value': 1 * 10**12
}
)
extrinsic = substrate.create_signed_extrinsic(call=call, keypair=keypair)
try:
    receipt = substrate.submit_extrinsic(extrinsic, wait_for_inclusion=True)
    print("Extrinsic '{}' sent and included in block '{}'".format(receipt.extrinsic_hash, receipt.block_hash))
except SubstrateRequestException as e:
    print("Failed to send: {}".format(e))
The wait_for_inclusion keyword argument used in the example above will block until the node confirms that the extrinsic is successfully included in a block. The wait_for_finalization keyword will wait until the extrinsic is finalized. Note that this feature is only available for websocket connections.
The substrate.submit_extrinsic example above returns an ExtrinsicReceipt object, which contains information about the on-chain execution of the extrinsic. Because the block_hash is necessary to retrieve the triggered events from storage, most information is only available when wait_for_inclusion=True or wait_for_finalization=True is used when submitting an extrinsic.
Examples:
receipt = substrate.submit_extrinsic(extrinsic, wait_for_inclusion=True)
print(receipt.is_success) # False
print(receipt.weight) # 216625000
print(receipt.total_fee_amount) # 2749998966
print(receipt.error_message['name']) # 'LiquidityRestrictions'
ExtrinsicReceipt objects can also be created for all existing extrinsics on-chain:
receipt = ExtrinsicReceipt.create_from_extrinsic_identifier(
substrate=substrate, extrinsic_identifier="5233297-1"
)
print(receipt.is_success) # False
print(receipt.extrinsic.call_module.name) # 'Identity'
print(receipt.extrinsic.call.name) # 'remove_sub'
print(receipt.weight) # 359262000
print(receipt.total_fee_amount) # 2483332406
print(receipt.error_message['docs']) # [' Sender is not a sub-account.']
for event in receipt.triggered_events:
    print(f'* {event.value}')
Tested on canvas-node with the Flipper contract from the tutorial:
substrate = SubstrateInterface(
url="ws://127.0.0.1:9944",
type_registry_preset='canvas'
)
keypair = Keypair.create_from_uri('//Alice')
# Deploy contract
code = ContractCode.create_from_contract_files(
metadata_file=os.path.join(os.path.dirname(__file__), 'assets', 'flipper.json'),
wasm_file=os.path.join(os.path.dirname(__file__), 'assets', 'flipper.wasm'),
substrate=substrate
)
contract = code.deploy(
keypair=keypair,
endowment=10 ** 15,
gas_limit=1000000000000,
constructor="new",
args={'init_value': True},
upload_code=True
)
print(f'✅ Deployed @ {contract.contract_address}')
# Create contract instance from deterministic address
contract = ContractInstance.create_from_address(
contract_address=contract_address,
metadata_file=os.path.join(os.path.dirname(__file__), 'assets', 'flipper.json'),
substrate=substrate
)
result = contract.read(keypair, 'get')
print('Current value of "get":', result.contract_result_data)
# Do a gas estimation of the message
gas_predict_result = contract.read(keypair, 'flip')

print('Result of dry-run: ', gas_predict_result.contract_result_data)
print('Gas estimate: ', gas_predict_result.gas_required)

# Do the actual call
print('Executing contract call...')
contract_receipt = contract.exec(keypair, 'flip', args={}, gas_limit=gas_predict_result.gas_required)

if contract_receipt.is_success:
    print(f'Events triggered in contract: {contract_receipt.contract_events}')
else:
    print(f'Call failed: {contract_receipt.error_message}')
See the complete code example for more details.
By default, immortal extrinsics are created, which means they have an indefinite lifetime for being included in a block. However, it is recommended to specify an expiry window, so that you know that if the extrinsic is not included in a block within a certain number of blocks, it will be invalidated.
extrinsic = substrate.create_signed_extrinsic(call=call, keypair=keypair, era={'period': 64})
The period specifies the number of blocks the extrinsic is valid for, counted from the current head.
mnemonic = Keypair.generate_mnemonic()
keypair = Keypair.create_from_mnemonic(mnemonic)
signature = keypair.sign("Test123")

if keypair.verify("Test123", signature):
    print('Verified')
By default, a keypair uses SR25519 cryptography; alternatively, ED25519 and ECDSA can be specified explicitly:
keypair = Keypair.create_from_mnemonic(mnemonic, crypto_type=KeypairType.ECDSA)
mnemonic = Keypair.generate_mnemonic()
keypair = Keypair.create_from_uri(mnemonic + '//hard/soft')
By omitting the mnemonic the default development mnemonic is used:
keypair = Keypair.create_from_uri('//Alice')
mnemonic = Keypair.generate_mnemonic()
keypair = Keypair.create_from_uri(f"{mnemonic}/m/44'/60'/0'/0/0", crypto_type=KeypairType.ECDSA)
keypair = Keypair(ss58_address="EaG2CRhJWPb7qmdcJvy3LiWdh26Jreu9Dx6R1rXxPmYXoDk")
call = substrate.compose_call(
call_module='Balances',
call_function='transfer',
call_params={
'dest': 'EaG2CRhJWPb7qmdcJvy3LiWdh26Jreu9Dx6R1rXxPmYXoDk',
'value': 2 * 10 ** 3
}
)
payment_info = substrate.get_payment_info(call=call, keypair=keypair)
# {'class': 'normal', 'partialFee': 2499999066, 'weight': 216625000}
This example generates a signature payload which can be signed on another (offline) machine and later on sent to the network with the generated signature.
substrate = SubstrateInterface(
url="ws://127.0.0.1:9944",
ss58_format=42,
type_registry_preset='substrate-node-template',
)
call = substrate.compose_call(
call_module='Balances',
call_function='transfer',
call_params={
'dest': '5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY',
'value': 2 * 10**8
}
)
era = {'period': 64, 'current': 22719}
nonce = 0
signature_payload = substrate.generate_signature_payload(call=call, era=era, nonce=nonce)
Then, on the offline machine, sign the signature_payload:
keypair = Keypair.create_from_mnemonic("nature exchange gasp toy result bacon coin broccoli rule oyster believe lyrics")
signature = keypair.sign(signature_payload)
Finally, back on the online machine, build and submit the extrinsic using only the sender's public address and the generated signature:
keypair = Keypair(ss58_address="5EChUec3ZQhUvY1g52ZbfBVkqjUY9Kcr6mcEvQMbmd38shQL")
extrinsic = substrate.create_signed_extrinsic(
call=call,
keypair=keypair,
era=era,
nonce=nonce,
signature=signature
)
result = substrate.submit_extrinsic(
extrinsic=extrinsic
)
print(result.extrinsic_hash)
All runtime constants are provided in the metadata (see substrate.get_metadata_constants()); to access these as a decoded ScaleType you can use the function substrate.get_constant():
constant = substrate.get_constant("Balances", "ExistentialDeposit")
print(constant.value) # 10000000000
At the end of the lifecycle of a SubstrateInterface instance, calling the close() method will do all the necessary cleanup, like closing the websocket connection. When using the context manager, this is done automatically:
with SubstrateInterface(url="wss://rpc.polkadot.io") as substrate:
    events = substrate.query("System", "Events")

# connection is now closed
Note: this is only applicable for chains with metadata < V14.
When on-chain runtime upgrades occur, types used in call or storage functions can be added or modified. Therefore it is important to keep the type registry presets up to date, otherwise this can lead to decoding errors like RemainingScaleBytesNotEmptyException.
At the moment the type registry presets for Polkadot, Kusama, Rococo and Westend are being actively maintained for this library, and a check and update procedure can be triggered with:
substrate.reload_type_registry()
This will also activate the updated preset for the current instance.
It is also possible to always use the remote type registry preset from GitHub with the use_remote_preset kwarg when instantiating:
substrate = SubstrateInterface(
url="wss://rpc.polkadot.io",
ss58_format=0,
type_registry_preset='polkadot',
use_remote_preset=True
)
To check for updates after instantiating the substrate object, calling substrate.reload_type_registry() will download the most recent type registry preset from GitHub and apply the changes to the current object.
For questions, please reach out to us on our matrix chat group: Polkascan Technical.
Download Details:
Author:
Source Code:
License:
A collection of resources to develop proof of concept projects for Substrate in AssemblyScript. AssemblyScript compiles a strict subset of TypeScript to WebAssembly using Binaryen.
At the moment, this repository is mainly home to a collection of smart contract examples and a small smart contract library for writing contracts for Substrate's contracts pallet, but it might be extended with more examples in the future.
This repository is using yarn and yarn workspaces. You also need a fairly up-to-date version of node.
The packages folder contains the PoC libraries and projects.
The contracts folder contains a number of example contracts that make use of the as-contracts package. The compiled example contracts in the contracts folder can be deployed and executed on any Substrate chain that includes the contracts pallet.
1. Clone the as-substrate repository
$ git clone https://github.com/paritytech/as-substrate.git
2. Install all dependencies
$ yarn
3. Compile all packages, projects and contract examples to wasm
$ yarn build
To clean up all workspaces in the repository, run:
$ yarn clean
The @substrate/as-contracts and @substrate/as-utils packages are not published to the npmjs registry. That's why you need to add the complete as-substrate repository as a dependency directly from git.
$ yarn add https://github.com/paritytech/as-substrate.git
// or
$ npm install https://github.com/paritytech/as-substrate.git
In your projects, you can then import the as-contracts functions directly from the node_modules folder.
The recommended way of writing smart contracts is using the Rust smart contract language ink!.
Another way of writing Smart Contracts for Substrate is using the Solidity to Wasm compiler Solang.
Everything in this repository is highly experimental and should not be used for any professional or financial purposes.
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/as-substrate
License: GPL-3.0 license
#blockchain #assemblyscript #substrate #smartcontract #polkadot #rust
WARNING: This is alpha quality software and not suitable for production. It is incomplete and will have bugs.
Ledgeracio is a command-line tool and a Ledger app designed for staking operations on Substrate-based networks.
Running ledgeracio --help
will provide top-level usage instructions.
Ledgeracio CLI is intended to work with a special Ledgeracio Ledger app, but most of its commands will work with stock Kusama or Polkadot Ledger apps as well. This is less secure, however, as these apps do not enforce the same restrictions that the Ledgeracio app does. Using a stock app in production is not recommended.
The Polkadot app can be found here and the Kusama app can be found here. Other Substrate-based chains are currently not supported, but local devnets should work as long as their RPC API matches Kusama/Polkadot's.
Ledgeracio only supports Unix-like systems, and has mostly been tested on Linux. That said, it works on macOS and other Unix-like systems that provide the necessary support for userspace USB drivers.
Ledgeracio is a CLI app to perform various tasks common to staking on Kusama and Polkadot, aka staking-ops. Ledgeracio is designed to reduce the risk of user error by way of an allowlist of validators that is set up and signed once and stored on the Ledger device. Furthermore, Ledgeracio can speed up the workflow considerably when compared to alternatives using Parity Signer + Polkadot{.js}.
This repository only contains the CLI. To submit transactions with Ledgeracio, you will also need the companion Ledger app that you can install from the Ledger app store for Polkadot and Kusama. Development versions of the apps are available at Zondax/ledger-polkadot and Zondax/ledger-kusama. Please do not use the unaudited versions in production. For instruction on how to setup and use your Ledger device with Polkadot/Kusama, see the Polkadot wiki.
The Ledgeracio CLI contains two binaries. The first, simply called ledgeracio, is used to submit transactions. The second, called ledgeracio-allowlist, is used to manage the Ledgeracio Ledger app's list of allowed stash accounts. Generally, one will use ledgeracio for normal operations, and only use ledgeracio-allowlist when the list of allowed stash accounts must be changed. ledgeracio does not handle sensitive data, so it can safely be used on virtually any machine on which it will run. Some subcommands of ledgeracio-allowlist, however, generate and use secret keys, which are stored unencrypted on disk. Therefore, they MUST NOT be used except on trusted and secured machines. Ideally, these subcommands should be run on a machine that is reserved for provisioning of Ledger devices with the Ledgeracio app, and which has no network connectivity.
The allowlist serves to prevent one from accidentally nominating the wrong validator, which could result in a slash. It does NOT protect against malicious use of the device. Anyone with both the device and its PIN can uninstall the Ledgeracio app and install the standard Polkadot or Kusama app, which uses the same derivation path and thus can perform the same transactions.
Options may be passed as --key value or --key=value, which avoids needing to memorize the order of arguments. Handy aliases:
alias 'ledgeracio-polkadot=ledgeracio --network polkadot'
alias 'ledgeracio-kusama=ledgeracio --network kusama'
Provisioning the Ledgeracio Ledger app requires a trusted computer. This computer will store the secret key used to sign allowlists. This computer does not need network access, and generally should not have it. ledgeracio-allowlist does not encrypt the secret key, so operations that involve secret keys should only be done on machines that use encrypted storage.
Only devices used for nomination need to be provisioned. However, if you only intend to use the app for validator management, you should set an empty allowlist, which blocks all nominator operations.
First, ledgeracio-allowlist gen-key <file> is used to generate a secret key. The public part will be placed in <file>.pub and the secret part in <file>.sec. Both will be created with 0400 permissions, so that they are not accidentally overwritten or exposed. This operation requires a trusted computer. The public key file can be freely redistributed, while the secret key file should never leave the machine it was generated on.
You can now sign a textual allowlist file with ledgeracio-allowlist sign. A textual allowlist file has one SS58 address per line. Leading and trailing whitespace is stripped. If the first non-whitespace character on a line is # or ;, or if the line is empty or consists entirely of whitespace, it is considered a comment and ignored.
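To illustrate the file format, here is a minimal Python sketch (not part of Ledgeracio) that collects the addresses under exactly the rules just described:
def parse_allowlist(path):
    """One SS58 address per line; blank lines and lines starting with '#' or ';' are ignored."""
    addresses = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line[0] in '#;':
                continue
            addresses.append(line)
    return addresses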
ledgeracio-allowlist sign is invoked as follows:
ledgeracio-allowlist --network <network> sign --file <file> --nonce <nonce> --output <output> --secret <secret>
<file> is the allowlist file. <nonce> is the nonce, which is incorporated into the signed allowlist file named <output>. Ledgeracio apps keep track of the nonce of the most recent allowlist uploaded, and reject new uploads unless the new allowlist has a nonce higher than the old one. Nonces do not need to be contiguous, so skipping a nonce is okay. Signed allowlists are stored in a binary format.
ledgeracio-allowlist is also used for device provisioning. To set the allowlist signing key, use ledgeracio-allowlist set-key. This command will only succeed once. If a key has already been uploaded, it will fail. The only way to change the allowlist signing key is to reinstall the Ledgeracio app, which does not result in any funds being lost.
ledgeracio-allowlist upload is used to upload an allowlist. The uploaded allowlist must have a nonce that is greater than the nonce of the previous allowlist. If there was no previous allowlist, any nonce is allowed.
To verify the signature of a binary allowlist file, use ledgeracio-allowlist inspect. This also displays the allowlist on stdout.
ledgeracio is used for staking operations. Before accounts on a Ledger device can be used for staking, they must be chosen as a controller account. You can obtain the address by running ledgeracio <validator|nominator> address. The address can be directly pasted into a GUI tool, such as Polkadot{.js}.
ledgeracio nominator nominate is used to nominate an approved validator, and ledgeracio validator announce is used to announce an intention to validate. ledgeracio [nominator|validator] set-payee is used to set the payment target. ledgeracio [nominator|validator] chill is used to stop staking, while ledgeracio [nominator|validator] show and ledgeracio [nominator|validator] show-address are used to display staking status. The first takes an index, while the second takes an address. show-address does not require a Ledger device. ledgeracio validator replace-key is used to set a validator's session key.
ledgeracio-allowlist
The Ledgeracio app enforces a list of allowed stash accounts. This is managed using the ledgeracio-allowlist command.
Some subcommands involve the generation or use of secret keys, which are stored on disk without encryption. These subcommands MUST NOT be used on untrusted machines. Ideally, they should be run on a machine that is reserved for provisioning of Ledgeracio apps, and which has no access to the Internet.
ledgeracio-allowlist gen-key
This command takes one argument: the basename (filename without extension) of the keys to generate. The public key will be given the extension .pub and the secret key the extension .sec. The files will be generated with 0400 permissions, which means that they can only be read by the current user and the system administrator, and they cannot be written to except by the administrator. This is to prevent accidental overwrites.
The public key is not sensitive, and is required by anyone who wishes to verify signed allowlists and operate on the allowed accounts. It will be uploaded to the Ledger device by ledgeracio-allowlist set-key. The secret key allows generating signatures, and therefore must be kept secret. It should never leave the (preferably air gapped) machine it is generated on.
ledgeracio-allowlist set-key
This command takes one argument, the name of the public key file (including extension). The key will be parsed and uploaded to the Ledgeracio app running on the attached Ledger device. If it is not able to do so, Ledgeracio will print an error message and exit with a non-zero status.
If a key has already been uploaded, uploading a new key will fail. The only workaround is to reinstall the Ledgeracio app. This does not forfeit any funds stored on the device. We strongly recommend using separate Ledger devices for Ledgeracio and cold storage.
The user will be required to confirm the upload via the Ledger UI. This allows the user to check that the correct key has been uploaded, instead of a key chosen by an attacker who has compromised the user’s machine.
ledgeracio-allowlist get-key
This command takes no arguments. The public key that has been uploaded will be retrieved and printed to stdout. If no public key has been uploaded, or if the app is not the Ledgeracio app, an error will be returned.
ledgeracio-allowlist sign
This command takes the following arguments. All of them are mandatory.
--file <file>: The textual allowlist file to sign. See FORMATS.md for its format.
--nonce <nonce>: The nonce to sign the file with. The nonce must be greater than the previous nonce, or the Ledgeracio app will reject the allowlist.
--output <output>: The name of the output file to write.
--secret <secret>: The name of the secret key file.
ledgeracio-allowlist inspect
This command takes two arguments. Both of them are mandatory.
--file <file>: The name of the signed allowlist to inspect.
--public <public>: The name of the public key file that signed the allowlist. This command will fail if the signature cannot be verified.
ledgeracio-allowlist upload
This command takes one argument: the filename of the signed binary allowlist to upload. The command will fail, or the Ledgeracio app will refuse the operation, if the signature cannot be verified with the uploaded public key or if the nonce is not higher than that of the previously uploaded allowlist.
ledgeracio metadata
This command takes no arguments. It pretty-prints the chain metadata to stdout. It is primarily intended for debugging. Requires a network connection.
ledgeracio properties
This command takes no arguments. It pretty-prints the chain properties to stdout. It is primarily intended for debugging. Requires a network connection.
ledgeracio nominator
This command performs operations using nominator keys ― that is, keys on a nominator derivation path. Requires a network connection. The following subcommands are available:
ledgeracio nominator address
This command takes an index as a parameter. The address on the device corresponding to that index is displayed on stdout.
ledgeracio nominator show
This command takes an index as parameter, and displays information about the corresponding nominator controller account.
ledgeracio nominator show-address
This command takes an SS58-formatted address as parameter, and displays information about the corresponding nominator controller account. It does not require a Ledger device.
ledgeracio nominator nominate
This command takes an index followed by a list of SS58-formatted addresses. It uses the account at the provided index to nominate the provided validator stash accounts.
The user must confirm this action on the Ledger device. For security reasons, users MUST confirm that the addresses displayed on the device are the intended ones. A compromised host machine can send a set of accounts that is not the ones the user intended. If any of the addresses sent to the device are not on the allowlist, the transaction will not be signed.
ledgeracio nominator chill
This command stops the account at the provided index from nominating.
The user must confirm this action on the Ledger device.
ledgeracio nominator set-payee
This command takes an index as argument, and sets the payment target. The target must be one of Stash, Staked, or Controller (case-insensitive).
ledgeracio validator
This command handles validator operations. It requires a network connection, and has the following subcommands:
ledgeracio validator address <index>
This command displays the address of the validator controller account at the given index.
ledgeracio validator announce <index> [commission]
This command announces that the controller account at <index> intends to validate. An optional commission (as a decimal between 0 and 1 inclusive) may also be provided. If none is supplied, it defaults to 1, or 100%.
ledgeracio validator chill
This command stops validation.
The user must confirm this action on the Ledger device.
ledgeracio validator set-payee
This command is the validator version of ledgeracio nominator set-payee. See its documentation for details.
ledgeracio validator show
This command is the validator version of ledgeracio nominator show. See its documentation for details.
ledgeracio validator show-address
This command is the validator version of ledgeracio nominator show-address. See its documentation for details.
ledgeracio validator replace-key <index> <keys>
This command sets the session keys of the validator controlled by the account at <index>. The keys must be in hexadecimal, as returned by the key rotation RPC call.
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/ledgeracio
License: GPL-3.0 License
It is basically what it says on the tin. Since Substrate multisig addresses are deterministic, MultiSigil doesn't need to do any network connections — and can be used even before the chain has been started.
$ multi-sigil --help
multi-sigil 0.1.0
Parity Technologies <admin@parity.io>
CLI for generating Substrate multisig addresses
USAGE:
multi-sigil [OPTIONS] <THRESHOLD> <ADDRESSES>...
ARGS:
<THRESHOLD> The number of signatures needed to perform the operation
<ADDRESSES>... The addresses to use
FLAGS:
-h, --help Prints help information
-V, --version Prints version information
OPTIONS:
--network <NETWORK> Network to calculate multisig for; defaults to Kusama [default: kusama] [possible
values: kusama, polkadot]
Currently only Kusama and Polkadot are supported.
It should be fairly trivial to add support for other networks from the list supported in SS58 — PRs are welcome!
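For reference, the derivation itself is small. Here is a hedged Python sketch of pallet-utility's deterministic scheme (blake2_256 over the "modlpy/utilisuba" prefix, the SCALE-encoded sorted account ids, and the little-endian u16 threshold), using helper functions from py-substrate-interface; treat it as an illustration rather than a MultiSigil reimplementation:
import hashlib
from substrateinterface.utils.ss58 import ss58_decode, ss58_encode

def multisig_address(addresses, threshold, ss58_format=2):
    # Signatories must be sorted by their raw account id bytes
    account_ids = sorted(bytes.fromhex(ss58_decode(a)) for a in addresses)
    # SCALE compact length prefix (valid for fewer than 64 signatories), then the raw ids
    encoded = bytes([len(account_ids) << 2]) + b"".join(account_ids)
    entropy = b"modlpy/utilisuba" + encoded + threshold.to_bytes(2, "little")
    return ss58_encode(hashlib.blake2b(entropy, digest_size=32).digest(), ss58_format=ss58_format)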
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/MultiSigil
License: Apache-2.0 License
Tools to facilitate an air-gapped construction, decoding, and signing flow for transactions of FRAME-based chains.
Please file an issue for any questions, feature requests, or additional examples.
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/substrate-airgapped
License:
Allows you to test your runtime in a live environment. This works by running a full node with a ManualSeal-BABE™ hybrid consensus for block authoring.
The test runner provides two APIs of note:
seal_blocks(count: u32)
This tells the manual seal authorship task running on the node to author count blocks, including any transactions in the transaction pool in those blocks.
submit_extrinsic<T: frame_system::Config>(call: impl Into<T::Call>, from: T::AccountId)
Given a Call and an AccountId, this creates an UncheckedExtrinsic with an empty signature and sends it to the node to be included in a future block. The running node performs no signature verification, which allows us to author extrinsics for any account on chain.
/// tons of ignored imports
use substrate_test_runner::{TestRequirements, Node};

struct Requirements;

impl TestRequirements for Requirements {
    /// Provide a Block type with an OpaqueExtrinsic
    type Block = polkadot_core_primitives::Block;

    /// Provide an Executor type for the runtime
    type Executor = polkadot_service::PolkadotExecutor;

    /// Provide the runtime itself
    type Runtime = polkadot_runtime::Runtime;

    /// A touch of runtime api
    type RuntimeApi = polkadot_runtime::RuntimeApi;

    /// A pinch of SelectChain implementation
    type SelectChain = sc_consensus::LongestChain<TFullBackend<Self::Block>, Self::Block>;

    /// A slice of concrete BlockImport type
    type BlockImport = BlockImport<
        Self::Block,
        TFullBackend<Self::Block>,
        TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>,
        Self::SelectChain,
    >;

    /// and a dash of SignedExtensions
    type SignedExtension = SignedExtra;

    /// Load the chain spec for your runtime here.
    fn load_spec() -> Result<Box<dyn sc_service::ChainSpec>, String> {
        let wasm_binary = polkadot_runtime::WASM_BINARY.ok_or("Polkadot development wasm not available")?;

        Ok(Box::new(PolkadotChainSpec::from_genesis(
            "Development",
            "polkadot",
            ChainType::Development,
            move || polkadot_development_config_genesis(wasm_binary),
            vec![],
            None,
            Some("dot"),
            None,
            Default::default(),
        )))
    }

    /// Optionally provide the base path if you want to fork an existing chain.
    // fn base_path() -> Option<&'static str> {
    //     Some("/home/seun/.local/share/polkadot")
    // }

    /// Create your signed extras here.
    fn signed_extras(
        from: <Self::Runtime as frame_system::Config>::AccountId,
    ) -> Self::SignedExtension {
        let nonce = frame_system::Module::<Self::Runtime>::account_nonce(from);

        (
            frame_system::CheckSpecVersion::<Self::Runtime>::new(),
            frame_system::CheckTxVersion::<Self::Runtime>::new(),
            frame_system::CheckGenesis::<Self::Runtime>::new(),
            frame_system::CheckMortality::<Self::Runtime>::from(Era::Immortal),
            frame_system::CheckNonce::<Self::Runtime>::from(nonce),
            frame_system::CheckWeight::<Self::Runtime>::new(),
            pallet_transaction_payment::ChargeTransactionPayment::<Self::Runtime>::from(0),
            polkadot_runtime_common::claims::PrevalidateAttests::<Self::Runtime>::new(),
        )
    }

    /// The function signature tells you all you need to know. ;)
    fn create_client_parts(config: &Configuration) -> Result<
        (
            Arc<TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>>,
            Arc<TFullBackend<Self::Block>>,
            KeyStorePtr,
            TaskManager,
            InherentDataProviders,
            Option<Box<
                dyn ConsensusDataProvider<
                    Self::Block,
                    Transaction = TransactionFor<
                        TFullClient<Self::Block, Self::RuntimeApi, Self::Executor>,
                        Self::Block
                    >,
                >
            >>,
            Self::SelectChain,
            Self::BlockImport
        ),
        sc_service::Error
    > {
        let (
            client,
            backend,
            keystore,
            task_manager,
        ) = new_full_parts::<Self::Block, Self::RuntimeApi, Self::Executor>(config)?;
        let client = Arc::new(client);

        let inherent_providers = InherentDataProviders::new();
        let select_chain = sc_consensus::LongestChain::new(backend.clone());

        let (grandpa_block_import, ..) =
            sc_finality_grandpa::block_import(client.clone(), &(client.clone() as Arc<_>), select_chain.clone())?;

        let (block_import, babe_link) = sc_consensus_babe::block_import(
            sc_consensus_babe::Config::get_or_compute(&*client)?,
            grandpa_block_import,
            client.clone(),
        )?;

        let consensus_data_provider = BabeConsensusDataProvider::new(
            client.clone(),
            keystore.clone(),
            &inherent_providers,
            babe_link.epoch_changes().clone(),
            vec![(AuthorityId::from(Alice.public()), 1000)]
        )
        .expect("failed to create ConsensusDataProvider");

        Ok((
            client,
            backend,
            keystore,
            task_manager,
            inherent_providers,
            Some(Box::new(consensus_data_provider)),
            select_chain,
            block_import
        ))
    }
}
/// And now for the most basic test
#[test]
fn simple_balances_test() {
    // given
    let mut node = Node::<Requirements>::new();

    type Balances = pallet_balances::Module<Runtime>;

    let (alice, bob) = (Sr25519Keyring::Alice.pair(), Sr25519Keyring::Bob.pair());
    let (alice_account_id, bob_account_id) = (
        MultiSigner::from(alice.public()).into_account(),
        MultiSigner::from(bob.public()).into_account()
    );

    // the function with_state allows us to read state, pretty cool right? :D
    let old_balance = node.with_state(|| Balances::free_balance(bob_account_id.clone()));

    // 70 dots
    let amount = 70_000_000_000_000;

    // Send extrinsic in action.
    node.submit_extrinsic(BalancesCall::transfer(bob_account_id.clone(), amount), alice_account_id);

    // Produce blocks in action, Powered by manual-seal™.
    node.seal_blocks(1);

    // we can check the new state :D
    let new_balance = node.with_state(|| Balances::free_balance(bob_account_id));

    // we can now make assertions on how state has changed: Bob received the amount.
    assert_eq!(old_balance + amount, new_balance);
}
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/substrate-test-runner
License:
A service for visualizing and collecting traces from parachains.
Modify the volumes path with your path to prometheus.yml in the dot-jaeger repo, then run docker-compose up.
This starts Prometheus on port 9090 and Grafana on port 3000. The Grafana dashboard can be accessed from localhost:3000, with the default login being user: admin, password: admin.
Start dot-jaeger in daemon mode with chosen arguments. The help command may be used for quick docs on the core app or any of the subcommands. Add dot-jaeger as a Prometheus source (localhost:9090) and open the Parachain Rococo Candidates-{{bunch of numbers}} dashboard. Data should start showing up; the Grafana update interval can be modified in the top right.
Here's a Quick ASCIICast of the dot-jaeger and docker setup process
Recommended number of traces at once: 5-20. Asking for too many traces from the JaegerUI both requests large amounts of data (potentially slowing down any other services) and makes dot-jaeger slower, as it has to sort the parent-child relationship of each span; this behaviour can be configured with the --recurse-children and --recurse-parents CLI options.
Usage: dot-jaeger [--service <service>] [--url <url>] [--limit <limit>] [--pretty-print] [--lookback <lookback>] <command> [<args>]
Jaeger Trace CLI App
Options:
--service name a specific node that reports to the Jaeger Agent from
which to query traces.
--url URL where Jaeger Service runs.
--limit maximum number of traces to return.
--pretty-print pretty print result
--lookback specify how far back in time to look for traces. In format:
`1h`, `1d`
--help display usage information
Commands:
traces Use when observing many traces
trace Use when observing only one trace
services List of services reporting to the Jaeger Agent
daemon Daemonize Jaeger Trace collection to run at some interval
Usage: dot-jaeger daemon [--frequency <frequency>] [--port <port>] [--recurse-parents] [--recurse-children] [--include-unknown]
Daemonize Jaeger Trace collection to run at some interval
Options:
--frequency frequency to update jaeger metrics in milliseconds.
--port port to expose prometheus metrics at. Default 9186
--recurse-parents fallback to recursing through parent traces if the current
span has one of a candidate hash or stage, but not the
other.
--recurse-children
fallback to recursing through parent traces if the current
span has one of a candidate hash or stage but not the other.
Recursing children is slower than recursing parents.
--include-unknown include candidates that have a stage but no candidate hash
in the prometheus data.
--help display usage information
./dot-jaeger --url "http://JaegerUI:16686" --limit 10 --service polkadot-rococo-3-validator-5 daemon --recurse-children
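Once the daemon is running, a quick hedged check in Python that the exporter is serving data (the default port 9186 comes from the help text above; the /metrics path is the usual Prometheus convention and is an assumption here):
import urllib.request

# Print the first few lines of the exported metrics
with urllib.request.urlopen("http://localhost:9186/metrics") as resp:
    print(resp.read().decode()[:500])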
To accommodate a new stage, extend the Stage enum and its associated Into/From implementations in stage.rs.
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/dot-jaeger
License: GPL-3.0 License
This is the main source code repository for Rust. It contains the compiler, standard library, and documentation.
Note: this README is for users rather than contributors. If you wish to contribute to the compiler, you should read the Getting Started section of the rustc-dev-guide instead.
Read "Installation" from The Book.
The Rust build system uses a Python script called x.py to build the compiler, which manages the bootstrapping process. It lives in the root of the project.
The x.py command can be run directly on most systems in the following format:
./x.py <subcommand> [flags]
This is how the documentation and examples assume you are running x.py.
Systems such as Ubuntu 20.04 LTS do not create the necessary python command by default when Python is installed, which would allow x.py to be run directly. In that case you can either create a symlink for python (Ubuntu provides the python-is-python3 package for this), or run x.py using Python itself:
# Python 3
python3 x.py <subcommand> [flags]
# Python 2.7
python2.7 x.py <subcommand> [flags]
More information about x.py can be found by running it with the --help flag or reading the rustc dev guide.
1. Make sure you have installed the dependencies:
g++ 5.1 or later or clang++ 3.5 or later
python 3 or 2.7
GNU make 3.81 or later
cmake 3.13.4 or later
ninja
curl
git
ssl which comes in libssl-dev or openssl-devel
pkg-config if you are compiling on Linux and targeting Linux
2. Clone the source with git:
git clone https://github.com/rust-lang/rust.git
cd rust
3. Configure the build settings:
The Rust build system uses a file named config.toml in the root of the source tree to determine various configuration settings for the build. Copy the default config.toml.example to config.toml to get started.
cp config.toml.example config.toml
If you plan to use x.py install to create an installation, it is recommended that you set the prefix value in the [install] section to a directory. Create the install directory first if you are not installing to the default location.
4. Build and install:
./x.py build && ./x.py install
When complete, ./x.py install will place several programs into $PREFIX/bin: rustc, the Rust compiler, and rustdoc, the API-documentation tool. This install does not include Cargo, Rust's package manager. To build and install Cargo, you may run ./x.py install cargo or set the build.extended key in config.toml to true to build and install all tools.
There are two prominent ABIs in use on Windows: the native (MSVC) ABI used by Visual Studio, and the GNU ABI used by the GCC toolchain. Which version of Rust you need depends largely on what C/C++ libraries you want to interoperate with: for interop with software produced by Visual Studio use the MSVC build of Rust; for interop with GNU software built using the MinGW/MSYS2 toolchain use the GNU build.
MSYS2 can be used to easily build Rust on Windows:
Grab the latest MSYS2 installer and go through the installer.
Run mingw32_shell.bat or mingw64_shell.bat from wherever you installed MSYS2 (i.e. C:\msys64), depending on whether you want 32-bit or 64-bit Rust. (As of the latest version of MSYS2 you have to run msys2_shell.cmd -mingw32 or msys2_shell.cmd -mingw64 from the command line instead.)
From this terminal, install the required tools:
# Update package mirrors (may be needed if you have a fresh install of MSYS2)
pacman -Sy pacman-mirrors
# Install build tools needed for Rust. If you're building a 32-bit compiler,
# then replace "x86_64" below with "i686". If you've already got git, python,
# or CMake installed and in PATH you can remove them from this list. Note
# that it is important that you do **not** use the 'python2', 'cmake' and 'ninja'
# packages from the 'msys2' subsystem. The build has historically been known
# to fail with these packages.
pacman -S git \
make \
diffutils \
tar \
mingw-w64-x86_64-python \
mingw-w64-x86_64-cmake \
mingw-w64-x86_64-gcc \
mingw-w64-x86_64-ninja
Navigate to Rust's source code (or clone it), then build it:
./x.py build && ./x.py install
MSVC builds of Rust additionally require an installation of Visual Studio 2017 (or later) so rustc can use its linker. The simplest way is to get Visual Studio and check the “C++ build tools” and “Windows 10 SDK” workloads.
(If you're installing cmake yourself, be careful that “C++ CMake tools for Windows” doesn't get included under “Individual components”.)
With these dependencies installed, you can build the compiler in a cmd.exe
shell with:
python x.py build
Currently, building Rust only works with some known versions of Visual Studio. If you have a more recent version installed and the build system doesn't understand it, you may need to force rustbuild to use an older version. This can be done by manually calling the appropriate vcvars file before running the bootstrap.
CALL "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"
python x.py build
Each specific ABI can also be used from either environment (for example, using the GNU ABI in PowerShell) by using an explicit build triple. The available Windows build triples are:
i686-pc-windows-gnu
x86_64-pc-windows-gnu
i686-pc-windows-msvc
x86_64-pc-windows-msvc
The build triple can be specified either by passing --build=<triple> when invoking x.py commands, or by copying the config.toml file (as described in Installing From Source) and modifying the build option under the [build] section.
While it's not the recommended build system, this project also provides a configure script and makefile (the latter of which just invokes x.py).
./configure
make && sudo make install
When using the configure script, the generated config.mk file may override the config.toml file. To go back to the config.toml file, delete the generated config.mk file.
If you’d like to build the documentation, it’s almost the same:
./x.py doc
The generated documentation will appear under doc in the build directory for the ABI used. I.e., if the ABI was x86_64-pc-windows-msvc, the directory will be build\x86_64-pc-windows-msvc\doc.
Since the Rust compiler is written in Rust, it must be built by a precompiled "snapshot" version of itself (made in an earlier stage of development). As such, source builds require a connection to the Internet, to fetch snapshots, and an OS that can execute the available snapshot binaries.
Snapshot binaries are currently built and tested on several platforms:
Platform / Architecture | x86 | x86_64 |
---|---|---|
Windows (7, 8, 10, ...) | ✓ | ✓ |
Linux (kernel 2.6.32, glibc 2.11 or later) | ✓ | ✓ |
macOS (10.7 Lion or later) | (*) | ✓ |
(*): Apple dropped support for running 32-bit binaries starting from macOS 10.15 and iOS 11. Due to this decision from Apple, the targets are no longer useful to our users. Please read our blog post for more info.
You may find that other platforms work, but these are our officially supported build environments that are most likely to work.
The Rust community congregates in a few places:
If you are interested in contributing to the Rust project, please take a look at the Getting Started guide in the rustc-dev-guide.
The Rust Foundation owns and protects the Rust and Cargo trademarks and logos (the “Rust Trademarks”).
If you want to use these names or brands, please read the media guide.
Third-party logos may be subject to third-party copyrights and trademarks. See Licenses for details.
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/rustc-codesize-min
License: View license
1651822980
The goal of this repository is to compare the sizes of compiled solidity contracts when compiled to EVM (with solc) versus WASM (with solang).
After some experimentation it turned out that a huge contributor to WASM code sizes is the smaller word size of WASM. Solidity treats 256bit variables as value types and passes them on the stack. Solang generates four 32bit stack accesses to emulate this. In order to improve comparability we do the following:
- Use 32bit integer types in the test contracts (uint32 everywhere).
- Pass --value-size 4 --address-size 4 to solang so that 32bit is used for the builtin types (address, msg.value).
Put solang in your PATH and run compile.sh, which is located in the root of this repository. The solc compiler will be downloaded automatically.
The current plan is to use the following sources as a test corpus:
Adding a new contract to the corpus from either of those sources is a time-consuming process because solang isn't a drop-in replacement. It tries hard to be one, but some things won't work on solang: First, almost all contracts use EVM inline assembly, which obviously won't work on a compiler targeting another architecture. Second, differences in builtin types (address, balance) will prevent the compilation of most contracts.
Therefore we need to apply substantial changes to every contract before it can be added to the corpus, in order to make it compile and establish comparability.
The following results show the compressed sizes (zstd) of the evm and wasm targets together with their compression ratio. Wasm relative describes the relative size of the compressed wasm output when compared to the evm output.
The concatenated row is what we get when we concatenate the uncompressed results of all contracts.
The solang version used is commit c2a8bd9881e64e41565cdfe088ffe9464c74dae4.
Contract | EVM Compressed | WASM Compressed | EVM Ratio | WASM Ratio | Wasm Relative |
---|---|---|---|---|---|
UniswapV2Pair.sol | 3986 | 6912 | 44% | 33% | 173% |
UniswapV2Router02.sol | 5826 | 9219 | 30% | 28% | 158% |
ERC20PresetFixedSupply.sol | 2162 | 2891 | 50% | 34% | 133% |
concatenated | 11112 | 17397 | 34% | 28% | 156% |
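As a worked example of the last column: for UniswapV2Pair.sol the compressed wasm output is 6912 bytes versus 3986 bytes of compressed evm output, and 6912 / 3986 ≈ 1.73, hence the 173% in the Wasm Relative column.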
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/contract-sizes
License:
1651815660
This crate abstracts interprocess transport for UNIX/Windows.
It utilizes unix sockets on UNIX (via tokio::net::UnixStream
) and named pipes on windows (via tokio::net::windows::named_pipe
module).
Endpoint is a transport-agnostic interface for incoming connections:
use parity_tokio_ipc::Endpoint;
use futures::stream::StreamExt;
// For testing purposes only - instead, use a path to an actual socket or a pipe
let addr = parity_tokio_ipc::dummy_endpoint();
let server = async move {
    Endpoint::new(addr)
        .incoming()
        .expect("Couldn't set up server")
        .for_each(|conn| async {
            match conn {
                Ok(stream) => println!("Got connection!"),
                Err(e) => eprintln!("Error when receiving connection: {:?}", e),
            }
        })
        .await;
};
let rt = tokio::runtime::Builder::new_current_thread().enable_all().build().unwrap();
rt.block_on(server);
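For the client side, the crate also exposes Endpoint::connect, which yields a connection implementing AsyncRead + AsyncWrite. A minimal sketch, assuming a server is already listening on the same addr (the 4-byte ping payload is purely illustrative):
use parity_tokio_ipc::Endpoint;
use tokio::io::{AsyncReadExt, AsyncWriteExt};

let client = async move {
    // Connect to the path/pipe the server above is listening on
    let mut conn = Endpoint::connect(&addr).await.expect("Failed to connect client");
    conn.write_all(b"ping").await.expect("Unable to write to server");
    let mut buf = [0u8; 4];
    conn.read_exact(&mut buf).await.expect("Unable to read server response");
};
rt.block_on(client);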
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/parity-tokio-ipc
License: View license
1651808280
The goal of the UI is to provide the users a convenient way of interacting with the Bridge - querying its state and sending transactions.
The project includes a .env
file at root project directory that contains all the variables for running the bridge UI:
REACT_APP_CHAIN_1_CUSTOM_TYPES_URL=https://raw.githubusercontent.com/paritytech/parity-bridges-common/master/deployments/types-rialto.json
REACT_APP_CHAIN_1_SUBSTRATE_PROVIDER=wss://wss.rialto.brucke.link
REACT_APP_CHAIN_2_CUSTOM_HASHER=blake2Keccak256Hasher
REACT_APP_CHAIN_2_CUSTOM_TYPES_URL=https://raw.githubusercontent.com/paritytech/parity-bridges-common/master/deployments/types-millau.json
REACT_APP_CHAIN_2_SUBSTRATE_PROVIDER=wss://wss.millau.brucke.link
REACT_APP_LANE_ID=0x00000000
REACT_APP_KEYRING_DEV_LOAD_ACCOUNTS=false
REACT_APP_IS_DEVELOPMENT=false
ℹ️ In case you need to overwrite any of the variables defined, please do so by creating a new .env.local file.
In case of questions about .env
management please refer to this link: create-react-app env files
If either chain (or both) needs a custom hasher function, it can be built and exported from the file src/configs/chainsSetup/customHashers.ts. Then it is just a matter of referencing the function name through the variable REACT_APP_CUSTOM_HASHER_CHAIN_<Chain number> in the .env file.
Please refer to this section of the Bridges project to run the bridge locally: running-the-bridge
yarn
This will install all the dependencies for the project.
yarn start
Runs the app in the development mode. Open http://localhost:3001 to view it in the browser.
yarn test
Runs the test suite.
yarn lint
Runs the linter & formatter.
Puppeteer is used for running E2E tests for bridges (only Chrome for now).
Requirements:
a) Have Chrome installed on your computer (this test requires it and will not download it when running);
b) Ensure that in your .env.local file REACT_APP_IS_DEVELOPMENT and REACT_APP_KEYRING_DEV_LOAD_ACCOUNTS are set to true;
c) Make sure all the steps mentioned above have run in a separate terminal (yarn - yarn start) and the bridges application is running;
d) In a different terminal window run the following command:
yarn run test:e2e-alone
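For reference, a minimal .env.local sketch satisfying requirement b) above could look like this:
REACT_APP_IS_DEVELOPMENT=true
REACT_APP_KEYRING_DEV_LOAD_ACCOUNTS=true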
There is an automated process that downloads all the required types.json files available in the deployments section of the parity-bridges-common repository. This hook is executed before the local development server starts, as well as during the lint/test/build process during deployment. In case there is an unexpected issue with it, you can test this process in isolation by running:
yarn prestart
For additional information about the Bridges Project please refer to parity-bridges-common repository.
To build the image, run:
docker build -t parity-bridges-ui:dev .
Now that the image is built, the container can be started with the following command, which will serve our app on port 8080.
docker run --rm -it -p 8080:80 parity-bridges-ui:dev
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/parity-bridges-ui
License: GPL-3.0 License
1651800900
A Substrate node demonstrating two-way interactions between the runtime and Ink! smart contracts.
This Substrate project demonstrates through example how to interact between Substrate runtimes and ink! smart contracts through extrinsic calls and ink! chain extensions.
Sharing Substrate runtime functionality with ink! smart contracts is a powerful feature. Chains with unique runtime functionality can create rich application developer ecosystems by exposing choice pieces of their runtime. The inverse interaction of runtime to ink! smart contract calls may be similarly valuable. Runtime logic can query or set important context information at the smart contracts level.
Both of these types of interactions are frequently asked about in support channels, yet no recent example demonstrating how to perform them had been developed.
If you have not already, it is recommended to go through the ink! smart contracts tutorial or otherwise have written and compiled smart contracts according to the ink! docs. It is also recommended to have some experience with Substrate runtime development.
Ensure you have:
1. Added the nightly components needed for Wasm and contract builds:
rustup component add rust-src --toolchain nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
2. Installed cargo-contract along with its binaryen dependency:
# For Ubuntu or Debian users
sudo apt install binaryen
# For MacOS users
brew install binaryen
cargo install cargo-contract --vers ^0.15 --force --locked
The project demonstrates contract-to-runtime interactions through the use of Chain extensions. Chain Extensions allow a runtime developer to extend runtime functions to smart contracts. In the case of this example, the functions being extended are a custom pallet extrinsic, and the pallet_balances::transfer
extrinsic.
See also the rand-extension
chain extension code example, which is one example that this project extended.
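To give a feel for the runtime side, below is a trimmed sketch modeled on that rand-extension example rather than on this repository's exact code; the func_id and error strings are illustrative, Runtime stands for the concrete runtime type, and the trait bounds vary between pallet-contracts versions:
use codec::Encode;
use pallet_contracts::chain_extension::{
    ChainExtension, Environment, Ext, InitState, RetVal, SysConfig, UncheckedFrom,
};
use sp_runtime::DispatchError;

/// Illustrative extension exposing a runtime randomness source to contracts.
pub struct FetchRandomExtension;

impl ChainExtension<Runtime> for FetchRandomExtension {
    fn call<E: Ext>(func_id: u32, env: Environment<E, InitState>) -> Result<RetVal, DispatchError>
    where
        <E::T as SysConfig>::AccountId: UncheckedFrom<<E::T as SysConfig>::Hash> + AsRef<[u8]>,
    {
        match func_id {
            // The func_id must match the id declared on the ink! side.
            1101 => {
                let mut env = env.buf_in_buf_out();
                let random_seed = crate::RandomnessCollectiveFlip::random_seed().0;
                // SCALE-encode the result and write it back into the contract's buffer.
                env.write(&random_seed.encode(), false, None)
                    .map_err(|_| DispatchError::Other("ChainExtension failed to call random"))?;
            }
            _ => return Err(DispatchError::Other("Unimplemented func_id")),
        }
        Ok(RetVal::Converging(0))
    }
}
The extension is then wired into the runtime by setting type ChainExtension = FetchRandomExtension; in the runtime's pallet_contracts::Config implementation.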
Runtime-to-contract interactions are enabled through invocations of pallet-contracts' own bare_call method, invoked from a custom pallet extrinsic. The example extrinsic is called call_smart_contract and is meant to demonstrate calling an existing (uploaded and instantiated) smart contract generically. The caller specifies the account id of the smart contract to be called, the selector of the smart contract function (found in the metadata.json of the compiled contract), and one argument to be passed to the smart contract function.
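The input data handed to the contract is simply the 4-byte ink! selector followed by the SCALE-encoded argument. A small self-contained sketch of that encoding (the helper name is hypothetical; uses the parity-scale-codec crate):
use parity_scale_codec::Encode;

// Hypothetical helper: 4-byte ink! selector followed by the SCALE-encoded argument.
fn contract_call_data(selector: [u8; 4], arg: u32) -> Vec<u8> {
    let mut data = selector.to_vec();
    data.extend(arg.encode());
    data
}

fn main() {
    // 0x00abcdef is the example selector used in the walkthrough below.
    let data = contract_call_data([0x00, 0xab, 0xcd, 0xef], 42);
    assert_eq!(data.len(), 8); // 4 selector bytes + 4 bytes for the SCALE-encoded u32
    println!("{:02x?}", data);
}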
The cargo run
command will perform an initial build. Use the following command to build the node without launching it:
cargo build --release
To build the included smart contract example, first cd into smart-contracts/example-extension, then run:
cargo +nightly contract build
Use Rust's native cargo
command to build and launch the template node:
cargo run --release -- --dev --tmp
Once the smart contract is compiled, you may use the hosted Canvas UI. Please follow the Deploy Your Contract guide for specific instructions. This contract uses a default constructor, so there is no need to specify any constructor arguments.
You may also use the Polkadotjs Apps UI to upload and instantiate the contract.
Ensure you have uploaded and instantiated the example contract.
Call the set_value smart contract function from a generic pallet extrinsic:
1. In the extrinsics Submission tab, select templateModule and the call_smart_contract extrinsic.
2. Enter the account id of the uploaded and instantiated smart contract.
3. Enter the selector 0x00abcdef (note: this denotes the function to call, and is found in smart-contracts/example-extension/target/ink/metadata.json. See more here on the ink! selector macro).
4. Enter a u32 of your choice as the argument.
5. Set the gas limit to 10000000000.
6. Click Submit Transaction -> Sign and Submit.
This extrinsic passes these arguments to the pallet_contracts::bare_call function, which results in our set_value smart contract function being called with the new u32 value. This value can now be verified by calling get_value and checking whether the new value is returned.
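For orientation, the set_value/get_value interface exercised above might look roughly like the following in ink! (a sketch only; the repository's actual contract additionally wires up the chain extension):
use ink_lang as ink;

#[ink::contract]
mod example_extension {
    #[ink(storage)]
    pub struct ExampleExtension {
        value: u32,
    }

    impl ExampleExtension {
        /// The default constructor mentioned above: no arguments needed.
        #[ink(constructor)]
        pub fn default() -> Self {
            Self { value: 0 }
        }

        /// Reached via the 4-byte selector passed to call_smart_contract.
        #[ink(message)]
        pub fn set_value(&mut self, value: u32) {
            self.value = value;
        }

        #[ink(message)]
        pub fn get_value(&self) -> u32 {
            self.value
        }
    }
}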
Call the insert_number extrinsic from the smart contract:
1. Select the instantiated chain-extension-example contract and click Execute.
2. Under Message to Send, select store_in_runtime.
3. Enter a u32 to be stored.
4. Make sure send as transaction is selected.
5. Click Call.
The smart contract function is less generic than the extrinsic used above, and so already knows how to call our custom runtime extrinsic through the chain extension that is set up. You can verify that the contract called the extrinsic by checking the contractEntry storage in the Polkadotjs UI.
To run the tests for the included example pallet, run cargo test
in the root.
Build node with benchmarks enabled:
cargo build --release --features runtime-benchmarks
Then, to generate the weights into the pallet template's weights.rs
file:
./target/release/node-template benchmark \
--chain dev \
--pallet=pallet_template \
--extrinsic='*' \
--repeat=20 \
--steps=50 \
--execution wasm \
--wasm-execution compiled \
--raw \
--output pallets/template/src/weights.rs \
--template=./weight-template.hbs
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/Runtime-Contract-Interactions
License: Unlicense License
1651793340
De[code] Sub[strate]
† This software is experimental, and not intended for production use yet. Use at your own risk.
Encompassing decoder for substrate/polkadot/kusama types.
Gets type definitions from polkadot-js via JSON and decodes them into components that outline types and make decoding byte-strings possible, as long as the module and generic type names are known.
Supports Metadata versions from v8, which means all of Kusama (from CC1). Older networks are not supported (e.g. Alexander).
To work with arbitrary chains, one only needs to implement TypeDetective and Decoder. However, if the JSON follows the same format as PolkadotJS definitions (look at definitions.json and overrides.json) it would be possible to simply deserialize into Polkadot structs and utilize those. The decoding itself is generic enough to allow it.
Currently Supported Metadata Versions (From Kusama CC1): v8 and newer.
The workspace consists of the crates desub/, desub-current/, desub-legacy/, desub-common/ and desub-json-resolver/.
To release a new version:
1. Check the upgrade-blocks present here and modify the hard-coded upgrade blocks as necessary in the desub runtimes.rs file.
2. Make sure the CHANGELOG is up to date, with all pending changes under the Unreleased section.
3. Move the Unreleased section to a new section corresponding to the version being released, making sure to keep the Unreleased header.
4. Create a tag of the form vX.X.X (e.g. v0.1.0):
git tag v0.1.0
git push --tags origin master
5. Go to Actions in the github repository and run the publish workflow.
6. The crates are then published to crates.io. Refer to this for how to publish to crates.io.
Download Details:
Author: paritytech
Source Code: https://github.com/paritytech/desub
License: GPL-3.0 License