9 Favorite GitHub Repos for Web3 Developers

Introduction to Web3

Centralization has helped onboard billions of people to the World Wide Web and created the stable, robust infrastructure on which it lives. At the same time, a handful of centralized entities have a stranglehold on large swathes of the World Wide Web, unilaterally deciding what should and should not be allowed.

Web3 is the answer to this dilemma. Instead of a Web monopolized by large technology companies, Web3 embraces decentralization and is being built, operated, and owned by its users. Web3 puts power in the hands of individuals rather than corporations. Let's explore the core ideas that guide it.

Core ideas of Web3

Although it's challenging to provide a rigid definition of what Web3 is, a few core principles guide its creation.

  • Web3 is decentralized: instead of large swathes of the internet controlled and owned by centralized entities, ownership gets distributed amongst its builders and users.
  • Web3 is permissionless: everyone has equal access to participate in Web3, and no one gets excluded.
  • Web3 has native payments: it uses cryptocurrency for spending and sending money online instead of relying on the outdated infrastructure of banks and payment processors.
  • Web3 is trustless: it operates using incentives and economic mechanisms instead of relying on trusted third parties.

In this article, we'll share our 9 favorite GitHub repos for Web3 developers.

1.   foundry

Foundry is a blazing fast, portable and modular toolkit for Ethereum application development written in Rust.

Foundry consists of:

  • Forge: Ethereum testing framework (like Truffle, Hardhat and DappTools); see the test sketch below.
  • Cast: Swiss army knife for interacting with EVM smart contracts, sending transactions and getting chain data.
  • Anvil: local Ethereum node, akin to Ganache, Hardhat Network.
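
To give a flavor of Forge, here is a minimal test sketch. The Counter contract is hypothetical, and the sketch assumes the forge-std helper library that forge init installs by default:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "forge-std/Test.sol";

// A hypothetical contract under test.
contract Counter {
    uint256 public count;

    function increment() public {
        count += 1;
    }
}

// Forge treats functions prefixed with `test` as test cases.
contract CounterTest is Test {
    Counter counter;

    function setUp() public {
        counter = new Counter();
    }

    function testIncrement() public {
        counter.increment();
        assertEq(counter.count(), 1);
    }
}

Running forge test in the project root compiles and executes this test.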

Installation

Having issues? See the troubleshooting section.

First run the command below to get foundryup, the Foundry toolchain installer:

curl -L https://foundry.paradigm.xyz | bash

If you do not want to use the redirect, feel free to manually download the foundryup installation script from here.

Then, run foundryup in a new terminal session or after reloading your PATH.

Other ways to use foundryup, and other documentation, can be found here. Happy forging!

Installing from Source

If you want to install from source, you can do so as follows:

git clone https://github.com/foundry-rs/foundry
cd foundry
# install cast + forge
cargo install --path ./cli --profile local --bins --locked --force
# install anvil
cargo install --path ./anvil --profile local --locked --force

Or via cargo install --git https://github.com/foundry-rs/foundry --profile local --locked foundry-cli anvil.

Installing for CI with GitHub Actions

See the foundry-toolchain GitHub Action at https://github.com/foundry-rs/foundry-toolchain.

Installing via Docker

Foundry maintains a Docker image repository.

You can pull the latest release image like so:

docker pull ghcr.io/foundry-rs/foundry:latest

For examples and guides on using this image, see the Docker section in the book.

Features

  • Fast & flexible compilation pipeline
    • Automatic Solidity compiler version detection & installation (under ~/.svm)
    • Incremental compilation & caching: Only changed files are re-compiled
    • Parallel compilation
    • Non-standard directory structures support (e.g. Hardhat repos)
  • Tests are written in Solidity (like in DappTools)
  • Fast fuzz testing with shrinking of inputs & printing of counter-examples
  • Fast remote RPC forking mode, leveraging Rust's async infrastructure like tokio
  • Flexible debug logging
    • DappTools-style, using DsTest's emitted logs
    • Hardhat-style, using the popular console.sol contract (see the sketch below)
  • Portable (5-10MB) & easy to install without requiring Nix or any other package manager
  • Fast CI with the Foundry GitHub action.
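
For instance, Hardhat-style console logging in a Forge test might look like this minimal sketch (again assuming forge-std, which bundles console.sol):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "forge-std/Test.sol";
import "forge-std/console.sol";

contract LogTest is Test {
    function testLog() public {
        uint256 balance = 1 ether;
        // Printed when running `forge test -vv` (and for failing tests).
        console.log("balance is", balance);
    }
}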

View Github

2.   full-stack-ethereum

Building full stack apps with Solidity, Ethers.js, Hardhat, and The Graph

The things I was interested in were these:

  1. How to create, deploy, and test Ethereum smart contracts to local, test, and mainnet
  2. How to switch between local, test, and production environments / networks
  3. How to connect to and interact with the contracts using various environments from a front end like React, Vue, Svelte, or Angular

▶︎ Client Framework - React
▶︎ Ethereum development environment - Hardhat
▶︎ Ethereum Web Client Library - Ethers.js
▶︎ API layer - The Graph Protocol

View Github

3.   openzeppelin-contracts

OpenZeppelin Contracts is a library for secure smart contract development. Build on a solid foundation of community-vetted code.

Installation

$ npm install @openzeppelin/contracts

OpenZeppelin Contracts features a stable API, which means that your contracts won't break unexpectedly when upgrading to a newer minor version.

An alternative to npm is to use the GitHub repository (openzeppelin/openzeppelin-contracts) to retrieve the contracts. When doing this, make sure to specify the tag for a release such as v4.5.0, instead of using the master branch.

Usage

Once installed, you can use the contracts in the library by importing them:

pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";

contract MyCollectible is ERC721 {
    constructor() ERC721("MyCollectible", "MCO") {
    }
}
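
Contracts from the library are designed to compose. As a minimal sketch (a hypothetical variant of the MyCollectible example above), you might combine ERC721 with Ownable to gate minting, using the v4 API:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/Ownable.sol";

contract MyCollectible is ERC721, Ownable {
    uint256 private _nextId;

    constructor() ERC721("MyCollectible", "MCO") {}

    // Only the contract owner may mint; _safeMint is inherited from ERC721.
    function mint(address to) external onlyOwner {
        _safeMint(to, _nextId);
        _nextId += 1;
    }
}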

If you're new to smart contract development, head to Developing Smart Contracts to learn about creating a new project and compiling your contracts.

To keep your system secure, you should always use the installed code as-is, and neither copy-paste it from online sources nor modify it yourself. The library is designed so that only the contracts and functions you use are deployed, so you don't need to worry about it needlessly increasing gas costs.

View Github

4.   NFT-Marketplace-Tutorial

NFT marketplace tutorial by Alchemy

There is no better place to start your Web3 development journey than the Alchemy NFT tutorial repository. Using Solidity, Tailwind, and Ethers.js, you will learn how to build an NFT marketplace from scratch.

View Github

5.   rainbowkit

The best way to connect a wallet 🌈

RainbowKit is a React library that makes it easy to add wallet connection to your dapp.

  • 🔥 Out-of-the-box wallet management
  • ✅ Easily customizable
  • 🦄 Built on top of wagmi and ethers

Quick start

You can scaffold a new RainbowKit + wagmi + Next.js app with one of the following commands, using your package manager of choice:

npm init @rainbow-me/rainbowkit@latest
# or
yarn create @rainbow-me/rainbowkit@latest
# or
pnpm create @rainbow-me/rainbowkit@latest

Examples

The following examples are provided in the examples folder of this repo.

  • with-create-react-app
  • with-next
  • with-next-custom-button
  • with-next-mint-nft
  • with-next-siwe-next-auth
  • with-next-siwe-iron-session
  • with-remix

View Github

6.   create-dao

Development of this project is halted for now.

To use the npx package, run the following in a terminal: npx create-dao

Technologies

This project is built with the following open source libraries, frameworks and languages.

  • NextJS: front-end user interface
  • ChakraUI: a simple & modular component library
  • Hardhat: Ethereum development environment for professionals
  • @web3-ui/core: a set of React components and hooks made for web3-specific use cases

View Github

7.   free-Web3-resources

A list of FREE resources to make Web3 accessible to everyone.

  • Getting Involved
  • Web3 Roadmaps
  • Ethereum Development Tools
  • Ethereum free resources
  • Ethereum Languages
  • Solidity
  • Ethereum Clients
  • Ethereum in different languages
  • DAO Communities
  • SDKs
  • Oracles
  • Off Chain Data Protocols
  • NFT Marketplaces
  • Node Providers
  • File Storage
  • Ethereum Development Environment
  • Ethereum Development IDEs
  • Identity
  • Indexing
  • Client SDKs
  • Blockchains
  • Learning Platforms
  • Solana
  • Youtube Channels

View Github

8.   blockchain-in-js

Build your own blockchain!

JavaScript is one of the most used programming languages and is also relevant in Web3 development. This repository shows how to use JavaScript in Web3. It also gives an explain-like-I'm-five (ELI5) walk-through of blockchain technology.

View Github

9.   blockchain-tutorial

Write and publish your own blockchain in less than 200 lines of Go

This repo implements a couple of blockchain protocols in Go. If you have always wanted to build your own blockchain as a Web3 developer, it shows you a step-by-step approach to writing one in less than 200 lines of code.

View Github

Those are my 9 favorite GitHub repos for Web3 developers. If you found this interesting, please like and share to support.

#solidity #smartcontract #dao #web3 #blockchain


Building A Basic Dapp Project From Scratch using Solidity

dapp is a tool for building, testing and deploying smart contracts from the comfort of the command line.

As opposed to other tools, it does not use rpc to execute transactions. Instead, it invokes the hevm cli directly. This is faster, and allows for a lot of flexibility that isn't available in rpc, such as fuzz testing, symbolic execution, or cheat codes to modify mainnet state.


Installing

dapp is distributed as part of the Dapp tools suite.

Basic usage: a tutorial

Let's create a new dapp project. We make a new directory and initialize the dapp skeleton structure:

mkdir dapptutorial
cd dapptutorial
dapp init

This creates two contracts, Dapptutorial.sol and Dapptutorial.t.sol in the src subdirectory and installs our testing library ds-test in the lib subdirectory.

Dapptutorial.t.sol is a testing contract with two trivial tests, which we can run with dapp test.

Building

For the sake of this tutorial, let's change Dapptutorial.sol to a simple vault with an eth bounty that can be accessed by giving the password 42:

pragma solidity ^0.8.6;

contract Dapptutorial {
    receive() external payable {
    }

    function withdraw(uint password) public {
        require(password == 42, "Access denied!");
        payable(msg.sender).transfer(address(this).balance);
    }
}

Compile the contract by running dapp build. If you didn't make any mistakes, you should simply see:

+ dapp clean
+ rm -rf out

Unit testing

Let's write some tests for our vault. Change Dapptutorial.t.sol to the following. We'll go over what's going on in the next paragraph.

pragma solidity ^0.8.6;

import {DSTest} from "ds-test/test.sol";
import {Dapptutorial} from "./Dapptutorial.sol";

contract DapptutorialTest is DSTest {
    Dapptutorial dapptutorial;

    function setUp() public {
        dapptutorial = new Dapptutorial();
    }

    function test_withdraw() public {
        payable(address(dapptutorial)).transfer(1 ether);
        uint preBalance = address(this).balance;
        dapptutorial.withdraw(42);
        uint postBalance = address(this).balance;
        assertEq(preBalance + 1 ether, postBalance);
    }

    function testFail_withdraw_wrong_pass() public {
        payable(address(dapptutorial)).transfer(1 ether);
        uint preBalance = address(this).balance;
        dapptutorial.withdraw(1);
        uint postBalance = address(this).balance;
        assertEq(preBalance + 1 ether, postBalance);
    }

    // allow sending eth to the test contract
    receive() external payable {}
}

In the setUp() function, we are deploying the Dapptutorial contract. All following tests are run against the post-state of the setUp() function. The test_withdraw function first deposits 1 eth and then withdraws it by giving the correct password. We check that the call was successful by comparing the pre and post balance of the testing account using assertEq. You can try changing the right-hand side to postBalance + 1 and see what happens. Next, we test the case where the wrong password is given in testFail_withdraw_wrong_pass. Any function prefixed with testFail is expected to fail, either with a revert or by violating an assertion. Finally, since a successful call to withdraw sends eth to the testing contract, we have to remember to implement a receive function in it.

For more debugging information, run dapp test with the -v flag to print the calltrace for failing tests, or enter the interactive debugger by running dapp debug.

Property based testing

Now let's try something more interesting - property based testing and symbolically executed tests.

We can generalize our test_withdraw function to not use the hardcoded 1 ether, but instead take the value as a parameter:

function test_withdraw(uint amount) public {
    payable(address(dapptutorial)).transfer(amount);
    uint preBalance = address(this).balance;
    dapptutorial.withdraw(42);
    uint postBalance = address(this).balance;
    assertEq(preBalance + amount, postBalance);
}

A test that takes at least one parameter is interpreted as a "property based test", or "fuzz test", and will be run multiple times with different values given to the parameters. The number of times each test is run can be configured by the --fuzz-runs flag and defaults to 100.

Running this test with dapp test -v, we see that this test actually fails with error BalanceTooLow for very high values of amount.

By default, the testing contract is given a balance of 2**96 wei, so we have to restrict the type of amount to uint96 to make sure we don't try to transfer more than we have:

function test_withdraw(uint96 amount) public {
    payable(address(dapptutorial)).transfer(amount);
    uint preBalance = address(this).balance;
    dapptutorial.withdraw(42);
    uint postBalance = address(this).balance;
    assertEq(preBalance + amount, postBalance);
}

If a counterexample is found, it can be replayed or analyzed in the debugger using the --replay flag.

Symbolically executed tests

While property based testing runs each function repeatedly with new input values, symbolic execution leaves these values symbolic and tries to explore each possible execution path. This gives a stronger guarantee and is more powerful than property based testing, but is also more difficult, especially for complicated functions.

Continuing with our vault example, imagine that we forgot the password and did not have the source available. We can symbolically explore all possibilities to find the one that lets us withdraw by writing a proveFail test:

function proveFail_withdraw(uint guess) public {
    payable(address(dapptutorial)).transfer(1 ether);
    uint preBalance = address(this).balance;
    dapptutorial.withdraw(guess);
    uint postBalance = address(this).balance;
    assertEq(preBalance + 1 ether, postBalance);
}

When we run this with dapp test, we are given a counterexample:

Failure: proveFail_withdraw(uint256)

  Counterexample:

    result:   Successful execution
    calldata: proveFail_withdraw(42)

which demonstrates that if we give the password 42, it is possible to withdraw from the vault.

The symbolic execution engine is backed by an SMT solver. When symbolically executing more complex tests you may encounter test failures with an SMT Query Timeout message. In this case, consider increasing the smt timeout using the --smttimeout flag or DAPP_TEST_SMTTIMEOUT environment variable (the default timeout is 60000 ms). Note that this timeout is per smt query not per test, and that each test may execute multiple queries (at least one query for each potential path through the test method).

For more reading on property based testing and symbolic execution, see this tutorial on the Ethereum Foundation blog.

Invariant testing

While other forms of tests are always run against the post-state of the setUp() function in the testing contract, it can also be useful to check whether a property is satisfied at every possible contract state. This can be done with the invariant* testing type. When running an invariant test, hevm will invoke any state mutating function from all addresses returned by a call to targetContracts(), if such a function exists in the testing contracts. If no such method exists, it will invoke methods from any non-testing contract available after the setUp() function has been run, checking the invariant* after each run.

The --depth parameter determines how many transactions deep each test will run, while the --fuzz-runs parameter determines how many times the whole process is repeated.

Note that a revert in any of the randomly generated calls will not trigger a test failure. The goal of invariant tests is to find a state change that results in a violation of the assertions defined in the body of the test method, and since reverts do not result in a state change, they can be safely ignored. Reverts within the body of the invariant* test method will however still cause a test failure.

Example:

function invariant_totalSupply() public {
    assertEq(token.totalSupply(), initialTotalSupply);
}
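
The snippet above references a token and initialTotalSupply that live in the surrounding test contract. To make it self-contained, here is a sketch of how those pieces might fit together (the Token contract is hypothetical; targetContracts() tells hevm which contracts to fuzz between invariant checks):

pragma solidity ^0.8.6;

import {DSTest} from "ds-test/test.sol";

// A hypothetical token whose total supply should never change.
contract Token {
    uint256 public totalSupply = 1_000_000 * 1e18;
    mapping(address => uint256) public balanceOf;

    constructor() { balanceOf[msg.sender] = totalSupply; }

    function transfer(address to, uint256 amount) public {
        balanceOf[msg.sender] -= amount; // reverts on underflow; ignored by the fuzzer
        balanceOf[to] += amount;
    }
}

contract TokenInvariantTest is DSTest {
    Token token;
    uint256 initialTotalSupply;
    address[] targets;

    function setUp() public {
        token = new Token();
        initialTotalSupply = token.totalSupply();
        targets.push(address(token));
    }

    // hevm calls state-mutating functions on these contracts between checks.
    function targetContracts() public view returns (address[] memory) {
        return targets;
    }

    function invariant_totalSupply() public {
        assertEq(token.totalSupply(), initialTotalSupply);
    }
}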

If a counterexample is found, it can be replayed or analyzed in the debugger using the --replay flag.

SMTChecker testing

If you are using the standard JSON input mode and its field settings.modelChecker.engine is all, bmc or chc, Solidity's SMTChecker will be invoked when you run dapp build. If you wish to use that mode, these steps are recommended:

  • Run the usual compilation
  • Generate a separate input JSON with the SMTChecker enabled: export DAPP_SMTCHECKER=1 && dapp mk-standard-json &> dapp_smtchecker.json
  • Modify settings.modelChecker in the new JSON input accordingly. It is recommended that you use the contracts field settings.modelChecker.contracts to specify the main contracts you want to verify.
  • Tell dapp to use the new JSON as input: export DAPP_STANDARD_JSON=./dapp_smtchecker.json
  • Run dapp build

You may also want to change the settings.modelChecker.timeout and/or other fields in different runs.

Testing against RPC state

You can test how your contract interacts with already deployed contracts by letting the testing state be fetched from rpc with the --rpc flag.

Running dapp test with the --rpc flag enabled will cause every state fetching operation (such as SLOAD, EXTCODESIZE, CALL*, etc.) to request the state from $ETH_RPC_URL.

For example, if you want to try out wrapping ETH you could define WETH in the setUp() function:

import "ds-test/test.sol";

interface WETH {
    function balanceOf(address) external returns (uint);
    function deposit() external payable;
}

contract WethTest is DSTest {
    WETH weth;
    function setUp() public {
        weth = WETH(0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2);
    }

    function testWrap() public {
        assertEq(weth.balanceOf(address(this)), 0);
        weth.deposit{value: 1 ether}();
        assertEq(weth.balanceOf(address(this)), 1 ether);
    }
}

With ETH_RPC_URL set, you can run dapp test --rpc on this test or dapp debug --rpc to step through the testWrap function in the interactive debugger.

It is often useful to modify the state for testing purposes, for example to grant the testing contract with a balance of a particular token. This can be done using hevm cheat codes.
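
As a sketch of how that looks, the store cheat code writes directly to a contract's storage. The cheat code address and the store signature below are the well-known hevm ones; the storage slot of a token's balance mapping varies per token and must be checked per contract (DAI's balanceOf happens to live at slot 2):

pragma solidity ^0.8.6;

// Subset of the hevm cheat code interface, available at a well-known address.
interface Hevm {
    function store(address c, bytes32 loc, bytes32 val) external;
}

contract CheatCodeExample {
    Hevm constant hevm = Hevm(0x7109709ECfa91a80626fF3989D68f67F5b1DD12D);

    function giveDaiBalance(address dai, address who, uint256 amount) internal {
        // DAI keeps balances in a mapping at storage slot 2, so the balance
        // of `who` lives at keccak256(abi.encode(who, uint256(2))).
        hevm.store(dai, keccak256(abi.encode(who, uint256(2))), bytes32(amount));
    }
}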

Deployment

To deploy a contract, you can use dapp create:

dapp create Dapptutorial [<constructorArgs>] [<options>]

The --verify flag verifies the contract on etherscan (requires ETHERSCAN_API_KEY).

Configuration

The commands of dapp can be customized with environment variables or flags. These variables can be set at the prompt or in a .dapprc file.

Below is a list of the environment variables recognized by dapp. You can additionally control various block parameters when running unit tests by using the hevm specific environment variables.

  • DAPP_SRC (default: src): Directory for the project's Solidity contracts
  • DAPP_LIB (default: lib): Directory for installed Dapp packages
  • DAPP_OUT (default: out): Directory for compilation artifacts
  • DAPP_ROOT (default: .): Root directory of compilation
  • DAPP_SOLC_VERSION (default: 0.8.6): Solidity compiler version to use
  • DAPP_SOLC (default: n/a): solc binary to use
  • DAPP_LIBRARIES (default: automatically deployed): Library addresses to link to
  • DAPP_SKIP_BUILD (default: n/a): Avoid compiling this time
  • DAPP_COVERAGE (default: n/a): Print coverage data
  • DAPP_LINK_TEST_LIBRARIES (default: 1 when testing, else 0): Compile with libraries
  • DAPP_VERIFY_CONTRACT (default: yes): Attempt Etherscan verification
  • DAPP_ASYNC (default: n/a): Set to yes to skip waiting for etherscan verification to succeed
  • DAPP_STANDARD_JSON (default: $(dapp mk-standard-json)): Solidity compilation options
  • DAPP_SMTCHECKER (default: n/a): Set to 1 to output the default model checker settings when using dapp mk-standard-json; running dapp build will invoke the SMTChecker
  • DAPP_REMAPPINGS (default: $(dapp remappings)): Solidity remappings
  • DAPP_BUILD_OPTIMIZE (default: 0): Activate Solidity optimizer (0 or 1)
  • DAPP_BUILD_OPTIMIZE_RUNS (default: 200): Set the optimizer runs
  • DAPP_VIA_IR (default: 0): Change compilation pipeline to go through the Yul intermediate representation (0 or 1)
  • DAPP_TEST_MATCH (default: n/a): Only run test methods matching a regex
  • DAPP_TEST_VERBOSITY (default: 0): Sets how much detail dapp test logs; verbosity 1 shows traces for failing tests, 2 shows logs for all tests, 3 shows traces for all tests
  • DAPP_TEST_FFI (default: 0): Allow use of the ffi cheatcode in tests (0 or 1)
  • DAPP_TEST_FUZZ_RUNS (default: 200): How many iterations to use for each property test in your project
  • DAPP_TEST_DEPTH (default: 20): Number of transactions to sequence per invariant cycle
  • DAPP_TEST_SMTTIMEOUT (default: 60000): Timeout passed to the smt solver for symbolic tests (in ms, and per smt query)
  • DAPP_TEST_MAX_ITERATIONS (default: n/a): The number of times hevm will revisit a particular branching point when symbolically executing
  • DAPP_TEST_SOLVER (default: z3): Solver to use for symbolic execution (cvc4 or z3)
  • DAPP_TEST_COV_MATCH (default: n/a): Regex used to determine which files to print coverage reports for; prints all imported files by default (excluding tests and libs)
  • DAPP_TEST_REPLAY (default: n/a): Calldata for a specific property test case to replay in the debugger
  • HEVM_RPC (default: n/a): Set to yes to have hevm fetch state from rpc when running unit tests
  • ETH_RPC_URL (default: n/a): The url of the rpc server that should be used for any rpc calls
  • DAPP_TESTNET_RPC_PORT (default: 8545): Which port to expose the rpc server on when running dapp testnet
  • DAPP_TESTNET_RPC_ADDRESS (default: 127.0.0.1): Which ip address to bind the rpc server to when running dapp testnet
  • DAPP_TESTNET_CHAINID (default: 99): Which chain id to use when running dapp testnet
  • DAPP_TESTNET_PERIOD (default: 0): Blocktime to use for dapp testnet; 0 means blocks are produced instantly as soon as a transaction is received
  • DAPP_TESTNET_ACCOUNTS (default: 0): How many extra accounts to create when running dapp testnet (at least one is always created)
  • DAPP_TESTNET_gethdir (default: $HOME/.dapp/testnet): Root directory that should be used for dapp testnet data
  • DAPP_TESTNET_SAVE (default: n/a): Name of the subdirectory under ${DAPP_TESTNET_gethdir}/snapshots where the chain data from the current dapp testnet invocation should be saved
  • DAPP_TESTNET_LOAD (default: n/a): Name of the subdirectory under ${DAPP_TESTNET_gethdir}/snapshots from which dapp testnet chain data should be loaded
  • DAPP_BUILD_EXTRACT (default: n/a): Set to a non-null value to output .abi, .bin and .bin-runtime when using dapp build; uses legacy build mode
  • DAPP_BUILD_LEGACY (default: n/a): Set to a non-null value to compile using the --combined-json flag; this is provided for compatibility with older workflows

A global (always loaded) config file is located in ~/.dapprc. A local .dapprc can also be defined in your project's root, which overrides variables in the global config.

Whenever you run a dapp command the .dapprc files are sourced in order (global first, then the one in the current working directory, if it exists). If you wish to set configuration variables, you must use export as below:

export DAPP_SOLC_VERSION=0.8.6
export DAPP_REMAPPINGS=$(cat remappings.txt)
export DAPP_BUILD_OPTIMIZE=1
export DAPP_BUILD_OPTIMIZE_RUNS=1000000000
export DAPP_TEST_VERBOSITY=1

Under the hood .dapprc is interpreted as a shell script, which means you can add additional scripting logic which will be run whenever you use dapp. For example if you wanted to fuzz for many iterations in CI and only a few locally you could add this to your .dapprc:

if [ "$CI" == "true" ]
then
  export DAPP_TEST_FUZZ_RUNS=1000000 # In CI we want to fuzz for a long time.
else
  export DAPP_TEST_FUZZ_RUNS=1000 # When developing locally we only want to fuzz briefly.
fi

Precedence

There are multiple places to specify configuration options. They are read with the following precedence:

  1. command line flags
  2. local .dapprc
  3. global .dapprc
  4. locally set environment variables

solc version

You can specify a custom solc version to run within dapp with dapp --use <arg>. If the argument is of the form solc:x.y.z, the appropriate solc version will be temporarily installed. If the argument contains a /, it is interpreted as a path to a solc binary to be used.

You may also specify a solc version using the DAPP_SOLC_VERSION environment variable, which is equivalent to running dapp --use solc:${DAPP_SOLC_VERSION} manually.

You can install any supported solc "standalone" (i.e. add it to your $PATH) with:

nix-env -iA solc-static-versions.solc_x_y_z \
  -if https://github.com/dapphub/dapptools/tarball/master

For a list of the supported solc versions, check solc-static-versions.nix.

Commands

dapp init

dapp-init -- bootstrap a new dapp
Usage: dapp init

Initializes the current directory to the default dapp structure, installing ds-test and creating two boilerplate contracts in the src directory.

dapp build

dapp-build -- compile the source code
Usage: dapp build [--extract]

--extract:  After building, write the .abi, .bin and .bin-runtime
    files from the solc json into $DAPP_OUT. Beware of contract
    name collisions. This is provided for compatibility with older
    workflows. Implies `--legacy`.
--optimize: activate the solidity optimizer.
--via-ir: change compilation pipeline to go through the Yul intermediate representation
--legacy:   Compile using the `--combined-json` flag. Some options are
    missing from this format. This is provided for compatibility with older
    workflows.

Compiles the contracts in the src directory. The compiler options of the build are generated by the dapp mk-standard-json command, which infers most options from the project structure. For more customizability, you can define your own configuration json and point the DAPP_STANDARD_JSON environment variable at the file.

By default, dapp build uses dapp remappings to resolve Solidity import paths.

You can override this with the DAPP_REMAPPINGS environment variable.

dapp test

Usage: dapp test [<options>]

Options:
    -v, --verbose             trace output for failing tests
    --coverage                print coverage data
    --verbosity <number>      sets the verbosity to <number>
    --depth=<number>          number of transactions to sequence per invariant cycle
    --fuzz-runs <number>      number of times to run fuzzing tests
    --replay <string>         rerun a particular test case
    -m, --match <string>      only run test methods matching regex
    --cov-match <string>      only print coverage for files matching regex

RPC options:
    --rpc                     fetch remote state via ETH_RPC_URL
    --rpc-url <url>           fetch remote state via <url>
    --rpc-block <number>      block number (latest if not specified)

SMT options:
    --smttimeout <number>     timeout passed to the smt solver in ms (default 60000)
    --solver <string>         name of the smt solver to use (either "z3" or "cvc4")
    --max-iterations <number> number of times we may revisit a particular branching point during symbolic execution

dapp tests are written in Solidity using the ds-test module. To install it, run

dapp install ds-test

Every contract which inherits from DSTest will be treated as a test contract. If it has a setUp() function, it will be run before every test.

Every function prefixed with test is expected to succeed, while functions prefixed by testFail are expected to fail.

Functions prefixed with prove are run symbolically, expecting success, while functions prefixed with proveFail are run symbolically expecting failure.

The -v flag prints call traces for failing tests, --verbosity 2 will show ds-test events for all tests, while --verbosity 3 will show call traces for all tests.

If you provide --rpc, state will be fetched via rpc. Local changes take priority.

You can configure the testing environment using hevm specific environment variables.

To modify local state even more, you can use hevm cheat codes.

If your test function takes arguments, they will be randomly instantiated and the function will be run multiple times.

The number of times run is configurable using --fuzz-runs.

To step through a test in hevm interactive debugger, use dapp debug.

dapp test --match <regex> will only run tests that match the given regular expression. This will be matched against the file path of the test file, followed by the contract name and the test method, in the form src/my_test_file.sol:TestContract.test_name(). For example, to only run tests from the contract ContractA:

dapp test --match ':ContractA\.'

To run all tests, from all contracts, that contain either foo or bar in the test name:

dapp test --match '(foo|bar)'

To only run tests called 'test_this()' from TheContract in the src/test/a.t.sol file:

dapp test --match 'src/test/a\.t\.sol:TheContract\.test_this\(\)'

By default, dapp test also recompiles your contracts. To skip this, you can set the environment variable DAPP_SKIP_BUILD=1.

If you have any libraries in DAPP_SRC or DAPP_LIB with nonzero bytecode, they will be deployed locally and linked to by any contracts referring to them. This can be skipped by setting DAPP_LINK_TEST_LIBRARIES=0.

dapp debug

dapp-debug -- run unit tests interactively (hevm)
Usage: dapp debug [<options>]

Options:
   --rpc                 fetch remote state via ETH_RPC_URL
   --rpc-url=<url>       fetch remote state via <url>
   --rpc-block=<number>  block number (latest if not specified)

Enters the interactive debugger. See the hevm README for key bindings for navigation.

dapp create

dapp-create -- deploy a compiled contract (--verify on Etherscan)
Usage: dapp create <contractname> or
    dapp create <path>:<contractname>
Add --verify and export your ETHERSCAN_API_KEY to auto-verify on Etherscan

dapp address

dapp-address -- determine address of newly generated contract
Usage: dapp address <sender> <nonce>
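
The address it reports is the standard CREATE address: the rightmost 20 bytes of the keccak256 hash of the RLP encoding of (sender, nonce). As an illustrative Solidity sketch (a hypothetical helper, valid only for nonces 1 through 127, where RLP encodes the nonce as a single byte):

pragma solidity ^0.8.6;

library CreateAddress {
    // Standard CREATE address derivation, restricted to 0 < nonce <= 0x7f.
    function compute(address sender, uint8 nonce) internal pure returns (address) {
        require(nonce > 0 && nonce <= 0x7f, "nonce out of sketch range");
        bytes32 hash = keccak256(
            abi.encodePacked(
                bytes1(0xd6), // RLP list prefix: 0xc0 + 22 payload bytes
                bytes1(0x94), // RLP string prefix: 0x80 + 20 address bytes
                sender,
                bytes1(nonce) // nonces 1..127 encode as a single byte
            )
        );
        return address(uint160(uint256(hash)));
    }
}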

dapp install

dapp-install -- install a smart contract library
Usage: dapp install <lib>
<lib> may be:
- a Dapphub repo (ds-foo)
- the URL of a Dapphub repo (https://github.com/dapphub/ds-foo)
- a path to a repo in another Github org (org-name/repo-name)

You can also specify a version (or branch / commit hash) for the repository by suffixing the URL with @<version>. dapp install will then clone the repository and run git checkout --recurse-submodules $version.

If the project you want to install does not follow the typical dapp project structure, you may need to configure the DAPP_REMAPPINGS environment variable to be able to find it. For an example, see this repo.

dapp uninstall

dapp-uninstall -- remove a smart contract library
Usage: dapp uninstall <lib>

dapp update

dapp-update -- fetch all upstream lib changes
Usage: dapp update [<lib>]

Updates a project submodule in the lib subdirectory.

dapp snapshot

dapp-snapshot -- creates a snapshot of each test's gas usage
Usage: dapp snapshot

Saves a snapshot of each concrete test's gas usage in a .gas-snapshot file.

dapp check-snapshot

dapp-check-snapshot -- check snapshot is up to date
Usage: dapp check-snapshot

Runs dapp snapshot and exits with an error code if its output does not match the current .gas-snapshot file.

dapp upgrade

dapp-upgrade -- pull & commit all upstream lib changes
Usage: dapp upgrade [<lib>]

dapp testnet

Spins up a geth testnet.

dapp verify-contract

dapp-verify-contract -- verify contract source on etherscan
Usage: dapp verify-contract <path>:<contractname> <address> [constructorArgs]

Example: dapp verify-contract src/auth/authorities/RolesAuthority.sol:RolesAuthority 0x9ed0e..

Requires ETHERSCAN_API_KEY to be set.

seth chain will be used to determine on which network the contract is to be verified.

Automatically run when the --verify flag is passed to dapp create.

dapp mk-standard-json

Generates a Solidity settings input json using the structure of the current directory.

The following environment variables can be used to override settings:

  • DAPP_SRC
  • DAPP_REMAPPINGS
  • DAPP_BUILD_OPTIMIZE
  • DAPP_BUILD_OPTIMIZE_RUNS
  • DAPP_LIBRARIES
  • DAPP_SMTCHECKER

Link: https://github.com/dapphub/dapptools/tree/master/src/dapp#basic-usage-a-tutorial

#solidity #smartcontract #dapp #blockchain #dao


A Solidity Library Helps You Call Untrusted Contracts Safely

ExcessivelySafeCall

This solidity library helps you call untrusted contracts safely. Specifically, it seeks to prevent all possible ways that the callee can maliciously cause the caller to revert. Most of these revert cases are covered by the use of a low-level call. The main difference between address.call() and address.excessivelySafeCall() is that a regular solidity call will automatically copy returned bytes to memory without consideration of gas.

This is to say, a low-level solidity call will copy any amount of bytes to local memory. When bytes are copied from returndata to memory, the memory expansion cost is paid. This means that when using a standard solidity call, the callee can "returnbomb" the caller, imposing an arbitrary gas cost. Because this gas is paid by the caller and in the caller's context, it can cause the caller to run out of gas and halt execution.

To prevent returnbombing, we provide excessivelySafeCall and excessivelySafeStaticCall. These behave similarly to solidity's low-level calls, however, they allow the user to specify a maximum number of bytes to be copied to local memory. E.g. a user desiring a single return value should specify a _maxCopy of 32 bytes. Refusing to copy large blobs to local memory effectively prevents the callee from triggering local OOG reversion. We also recommend careful consideration of the gas amount passed to untrusted callees.

Consider the following contracts:

contract BadGuy {
    function youveActivateMyTrapCard() external pure returns (bytes memory) {
        assembly {
            // Revert with a huge returndata payload to punish naive callers.
            revert(0, 1000000)
        }
    }
}

contract Mark {
    function oops(address badGuy) public {
        bool success;
        bytes memory ret;

        // Mark pays a lot of gas for this copy 😬😬😬
        (success, ret) = badGuy.call{gas: SOME_GAS}(
            abi.encodeWithSelector(
                BadGuy.youveActivateMyTrapCard.selector
            )
        );

        // Mark may OOG here, preventing local state changes
        importantCleanup();
    }
}

contract ExcessivelySafeSam {
    using ExcessivelySafeCall for address;

    // Sam is cool and doesn't get returnbombed
    function sunglassesEmoji(address badGuy) public {
        bool success;
        bytes memory ret;

        (success, ret) = badGuy.excessivelySafeCall(
            SOME_GAS,
            32,  // <-- the magic. Copy no more than 32 bytes to memory
            abi.encodeWithSelector(
                BadGuy.youveActivateMyTrapCard.selector
            )
        );

        // Sam can afford to clean up after himself.
        importantCleanup();
    }
}

When would I use this

ExcessivelySafeCall prevents malicious callees from affecting post-execution cleanup (e.g. state-based replay protection). Given that a dev is unlikely to hard-code a call to a malicious contract, we expect most danger to come from dynamic dispatch protocols, where neither the callee nor the code being called is known to the developer ahead of time.

Dynamic dispatch in solidity is probably most useful for metatransaction protocols. This includes gas-abstraction relayers, smart contract wallets, bridges, etc.

Nomad uses excessively safe calls for safe processing of cross-domain messages. This guarantees that a message recipient cannot interfere with safe operation of the cross-domain communication channel and message processing layer.

Interacting with the repo

To install in your project:

  • install Foundry
  • forge install nomad-xyz/ExcessivelySafeCall

To run tests, use forge test.

A note on licensing:

Tests are licensed GPLv3, as they extend the DSTest contract. Non-test work is available under the user's choice of MIT or Apache-2.0.

Download details:

Author: nomad-xyz
Source code: https://github.com/nomad-xyz/ExcessivelySafeCall 

#solidity #smartcontract #dapp #blockchain #dao


Reaper: A Yield Aggregator on Fantom Built with Solidity

welcome to unhacked

unhacked is a weekly ctf, giving whitehats the chance to go back in time before real exploits and recover funds before the bad guys get them.

you are a whitehat, right anon?

meet reaper

reaper farm is a yield aggregator on fantom. their V2 vaults were hacked on 8/2.

there were a number of implementations of the vaults with damages totalling $1.7mm, but the exploit was the same on all of them, so let's just focus on one — a DAI vault hacked for over $400k.

review the code in this repo, find the exploit, and recover > $400k.

how to play

fork this repo and clone it locally.

update the .env file with an environment variable for FANTOM_RPC (it is already preset to the public RPC endpoint, which should work fine; if so, you can skip this step).

review the code in the src/ folder, which contains all the code at the time of the hack. you can explore the state of the contract before the hack using block 44000000. ex: cast call --rpc-url ${FANTOM_RPC} --block 44000000 0x77dc33dC0278d21398cb9b16CbFf99c1B712a87A "totalAssets()" | cast 2d

when you find an exploit, code it up in ReaperHack.t.sol. the test will pass if you succeed.

post on twitter for bragging rights and tag @unhackedctf. no cheating.

subscribe

for new weekly challenges and solutions, subscribe to the unhacked newsletter.

Download details:

Author: unhackedctf
Source code: https://github.com/unhackedctf/reaper 

#solidity #smartcontract #dapp #blockchain #dao #fantom


Paradigm CTF: My Notes and Code From The Paradigm CTF

Paradigm CTF

Had a great time with the Paradigm CTF this weekend. Hoped to get more done but I got 0xmonaco'd and only spent like 1/4 of the time on the challenges.

Challenges

0xMonaco

I haven't been this excited about a game in a long time, let alone a crypto game (have I ever been excited about one of those before?). I don't even tweet and I tweeted about it twice. I already included my progression of cars on twitter but I'll add some interesting tidbits here.

RampUp

RampUp was the bread-and-butter on day 1. The main idea was to focus on using race progression as a heuristic, where we start off relatively passive and get more aggressive as the race goes on. It defines race progression as the y-value of the first-place car, scaled down to 1-10 so it would be easier to work with.

RampUp mainly scales the amount it's willing to spend for items, always as a proportion of its balance. So, early in the game it may only be willing to spend 1/10 of its money, while later on it would be willing to spend half or more. It also scales the speed threshold which it generally always tries to maintain by spending more on boost when below target.
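
As a rough sketch of that heuristic (purely illustrative; the real car implements the 0xMonaco game interface, and every name below is hypothetical):

pragma solidity ^0.8.13;

contract RampUpSketch {
    // Race progression: the leader's y-position scaled down to 1..10.
    function progression(uint256 leaderY, uint256 finishY) internal pure returns (uint256) {
        uint256 p = (leaderY * 10) / finishY;
        return p == 0 ? 1 : p;
    }

    // Willing to spend a growing fraction of balance as the race progresses:
    // 1/10 of the balance early on, up to the whole balance near the finish.
    function maxSpend(uint256 balance, uint256 p) internal pure returns (uint256) {
        return (balance * p) / 10;
    }
}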

PhasedCar

RampUp did really well at first, but started to lose out to economy cars by running out of money. My main thinking was that linear scaling just doesn't cut it -- the marginal value of a boost on turn 100 is exponentially more than on turn 1. But granular exponential curves are hard, so let's do a piecewise function. A piecewise mindset also allowed for strategy changes beyond just item-value scaling.

This is the code I ended up using for the final hours of the competition. It was tweaked soooo many times and got pretty messy :)

LearningCar

I got super tired of tweaking parameters for PhasedCar, and started to feel like it wasn't getting any better. There were a lot of parameters that I wanted to include (other players' coin balances, speeds, etc) but nothing seemed to improve PhasedCar's performance. I've seen a CodeBullet video or two, so figured it was time to pit cars against each other in a genetic evolution.

I started by defining valuation functions for the items, returning the current unit cost that the car should be willing to pay. Then had to define a bunch of weight parameters that the genetic algorithm can easily tweak to update the valuation functions.

Finally there is the Genetics script which randomly generates mutations and attempts to move towards an (at least local) optimum weighting. The fitness function is generated by running a given car phenotype against a set of various cars and summing the score for each race, where the score is:

  • the distance behind first if the phenotype didn't get first
  • flat 100 points if we got first (don't want to over-incentivize getting first by a wide margin)

I set it up with some sane defaults and let it run all night, hoping to wake up to the most perfectly optimal car in history. The car I woke up to was great -- it could beat every other manually-generated car I had! I pushed it up to 0xmonaco and... it sucked. It sat at the starting blocks and shot a bunch of shells at second place until the very end of the race. Obviously it overfitted, but I think its strategy was to make the other cars run out of money and come from behind to win. Bold.

I spent a bit longer trying to get a better result but ended up giving up and going back to tinkering with PhasedCar.

Sourcecode

Worked on this one first, it looked pretty easy. Just have to generate some bytecode that returns the code at its own address. I planned on writing a simple contract to return address(this).code in the fallback, get the bytecode and post that. 3 minutes later I was stoked to be already done with a challenge .... Reverted: deploy/code-unsafe. shit, (ext)codecopy is an unsafe opcode.

Later on I was doing some googling and found this helpful hint. Problem though, it includes codesize as a hack to push 0x0 in a single byte, but that's also off-limits. Adding even a single extra byte complicates the code quite a bit so I was really hoping to make it work. No dice though (though it is possible).

An hour of docs and evm.codes later I had something that worked!

This one ended up being really fun, it felt great to hack out some raw bytecode that does what I want. Though in hindsight the bytecode-development process would have been much easier with etk.

Reversing

I wanted this one so bad. I had a lot of fun stepping through the bytecode in the forge debugger and slowly putting it together. Right when I thought I was getting close, forge test crashed my entire computer and I didn't have any other tools I liked enough to keep going. 0xMonaco was calling my name anyway.

Download details:

Author: marktoda
Source code: https://github.com/marktoda/paradigm-ctf 

#solidity #smartcontract #dapp #blockchain #dao


Building Aztec Connect Bridges with Foundry & Solidity

Aztec Connect Bridges

This repo has been built with Foundry. Given the interconnected nature of Aztec Connect Bridges with existing mainnet protocols, we decided Foundry / Forge offered the best support for testing. This repo should make debugging, mainnet-forking, impersonation and gas profiling simple. It makes sense to test Solidity contracts with Solidity, not with the added complication of Ethers / Typescript.

Writing a bridge

Developing a bridge is simple. It is done entirely in Solidity and no knowledge of cryptography underpinning Aztec Connect is required. Users of your bridge will get the full benefits of ironclad privacy and 10-30x gas savings.

Note: Currently adding a new bridge or asset to Aztec Connect is permissioned and requires our approval. Once Aztec Connect leaves beta this won't be the case anymore and developing a bridge will become completely permissionless.

To get started, follow the steps below:

  1. Fork this repository on GitHub, clone your fork and create a new branch:

git clone https://github.com/YOUR_USERNAME/aztec-connect-bridges.git
cd aztec-connect-bridges
git checkout -b your_username/bridge-name

  2. Install dependencies and build the repo:

yarn setup

  3. Copy and rename the following folders (e.g. rename example to uniswap):

src/bridges/example
src/test/example
src/client/example
src/specs/example
src/deployment/example

  4. Implement the bridges and tests. See the example bridge, example bridge tests and documentation of IDefiBridge for more details. For a more complex example check out other bridges in this repository.

  5. Debug your bridge:

forge test --match-contract YourBridge -vvv

  6. Write a deployment script. Make a script that inherits from the BaseDeployment.s.sol file. The base provides helper functions for listing assets/bridges and a getter for the rollup address. Use the env variables broadcast=false|true and network=mainnet|devnet|testnet to specify how to run it. With broadcast=true, the listBridge and listAsset helpers will be broadcast; otherwise they are simulated as if they came from the controller. See the example scripts from other bridges for inspiration on how to do it.

All bridges need to be submitted via PRs to this repo. To receive a grant payment we expect the following work to be done:

  1. a Solidity bridge that interfaces with the protocol you are bridging to (e.g. AAVE),
  2. tests in Solidity that test the bridge with production values and the deployed protocol that is currently on mainnet (you should test a range of assets, edge cases and use Forge's fuzzing abilities),
  3. tests that cover the full contract, with no untested functions or lines,
  4. an implementation of the Typescript bridge-data.ts class that tells a frontend developer how to use your bridge,
  5. an explanation of the flows your bridge supports, included as spec.md,
  6. NatSpec documentation of all the functions in all the contracts which are to be deployed on mainnet,
  7. a deployment script to deploy the bridge with proper configuration.

Before submitting a PR for a review make sure that the following is true:

  1. All the tests you wrote pass (forge test --match-contract TestName),
  2. there are no linting errors (yarn lint),
  3. you fetched upstream changes to your fork on GitHub and your branch has been rebased against the head of the master branch (not merged, if you are not sure how to rebase check out this article),
  4. the diff contains only changes related to the PR description,
  5. NatSpec documentation has already been written,
  6. a spec was written,
  7. a deployment script was written.

SDK

You can find more information about setting up connections to bridge contracts with the SDK on the Ethereum Interactions page of the docs.

Testing methodology

This repo includes an Infura key that allows forking from mainnet. We have included helpers to make testing easier (see example bridge tests).

In production a bridge is called by a user creating a client side proof via the Aztec SDK. These transaction proofs are sent to a rollup provider for aggregation. The rollup provider then sends the aggregate rollup proof with the sum of all users' proofs for a given bridgeCallData to your bridge contract.

A bridgeCallData uniquely defines the expected inputs/outputs of a DeFi interaction. It is a uint256 that represents a bit-string containing multiple fields. When unpacked its data is used to create a BridgeData struct in the rollup processor contract.

Structure of the bit-string is as follows (starting at the least significant bit):

bit position   bit length   definition        description
0              32           bridgeAddressId   id of bridge smart contract address
32             30           inputAssetA       asset id of 1st input asset
62             30           inputAssetB       asset id of 2nd input asset
92             30           outputAssetA      asset id of 1st output asset
122            30           outputAssetB      asset id of 2nd output asset
152            32           bitConfig         flags that describe asset types
184            64           auxData           custom auxiliary data for bridge-specific logic

Bit Config Definition:

bit   meaning
0     secondInputInUse
1     secondOutputInUse

Note 1: Last 8 bits of bridgeCallData bit-string are wasted because the circuits don't support values of full 256 bits (248 is the largest multiple of 8 that we can use).

Note 2: bitConfig is 32 bits large even though we only use 2 bits because we need it to be future proofed (e.g. we might add NFT support, and we would need new bit flag for that).
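
To make the layout concrete, here is an illustrative sketch of unpacking those fields with shifts and masks (field names follow the table above; see the rollup processor sources for the canonical decoding):

pragma solidity ^0.8.6;

library BridgeCallDataDecoder {
    uint256 private constant MASK_30 = (1 << 30) - 1;
    uint256 private constant MASK_32 = (1 << 32) - 1;
    uint256 private constant MASK_64 = (1 << 64) - 1;

    function decode(uint256 bridgeCallData)
        internal
        pure
        returns (
            uint256 bridgeAddressId,
            uint256 inputAssetA,
            uint256 inputAssetB,
            uint256 outputAssetA,
            uint256 outputAssetB,
            uint256 bitConfig,
            uint256 auxData
        )
    {
        bridgeAddressId = bridgeCallData & MASK_32;       // bits 0..31
        inputAssetA = (bridgeCallData >> 32) & MASK_30;   // bits 32..61
        inputAssetB = (bridgeCallData >> 62) & MASK_30;   // bits 62..91
        outputAssetA = (bridgeCallData >> 92) & MASK_30;  // bits 92..121
        outputAssetB = (bridgeCallData >> 122) & MASK_30; // bits 122..151
        bitConfig = (bridgeCallData >> 152) & MASK_32;    // bits 152..183
        auxData = (bridgeCallData >> 184) & MASK_64;      // bits 184..247
    }
}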

The rollup contract uses this to construct the function parameters to pass into your bridge contract (via the convert(...) function). It calls your bridge with a fixed amount of gas via a delegatecall through the DefiBridgeProxy.sol contract.

We decided to have 2 separate approaches to bridge testing:

In the first one it is expected that you call the convert function directly on the bridge contract. This allows for simple debugging because execution traces are simple. The disadvantage of this approach is that you have to take care of transferring tokens to and from the bridge (this is handled by the DefiBridgeProxy contract in the production environment). This type of test can be considered to be a unit test and an example of such a test is here.

In the second approach we construct a bridgeCallData, we mock proof data and verifier's response and we pass this data directly to the RollupProcessor's processRollup(...) function. The purpose of this test is to test the bridge in an environment that is as close to the final deployment as possible without spinning up all the rollup infrastructure (sequencer, proof generator etc.). This test can be considered an end-to-end test of the bridge and an example of such a test is here.

We encourage you to first write all tests as unit tests to be able to leverage simple traces while you are debugging the bridge. Once you make the bridge work in the unit tests environment convert the relevant tests to E2E. Converting unit tests to E2E is straightforward because we made the BridgeTestBase.sendDefiRollup(...) function return the same values as IDefiBridge.convert(...).

In production, the rollup contract will supply _totalInputValue of both input assets and use the _interactionNonce as a globally unique ID. For testing, you may provide this value.

The rollup contract will send _totalInputValue of _inputAssetA and _inputAssetB ahead of the call to convert. In production, the rollup contract already has these tokens as they are the users' funds. For testing, use the deal method from forge-std's Test class (see Example.t.sol for details). This method prefunds the rollup with sufficient tokens to enable the transfer.

After extending the Test class simply call:

  deal(address(dai), address(rollupProcessor), amount);

This will set the balance of the rollup to the amount required.

Sending Tokens Back to the Rollup

The rollup contract expects a bridge to have approved the rollupAddress to transfer the ERC20 tokens that are returned via outputValueA or outputValueB for a given asset. The DeFiBridgeProxy will attempt to recover the tokens by calling transferFrom(bridgeAddress, rollupAddress, amount) for these values once bridge.convert() or bridge.finalise() have executed (in production and in end-to-end tests). In unit tests it is expected that you transfer these tokens on your own.

ETH is returned to the rollup from a bridge by calling the payable function rollupContract.receiveETH(uint256 interactionNonce) with a msg.value. You must also set the outputValue of the corresponding outputAsset (A or B) to be the amount of ETH sent.
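
As a sketch of what that might look like from inside a bridge (the interface fragment below is inferred from the description above; the canonical interface lives in this repo):

pragma solidity ^0.8.6;

interface IRollupProcessor {
    function receiveETH(uint256 interactionNonce) external payable;
}

contract EthReturningBridgeSketch {
    IRollupProcessor public immutable rollupProcessor;

    constructor(IRollupProcessor _rollupProcessor) {
        rollupProcessor = _rollupProcessor;
    }

    function _returnEth(uint256 outputValueA, uint256 interactionNonce) internal {
        // Send the ETH owed for this interaction back to the rollup,
        // tagging it with the globally unique interaction nonce.
        rollupProcessor.receiveETH{value: outputValueA}(interactionNonce);
    }
}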

This repo supports TypeChain, so all Typescript bindings will be auto-generated and added to the typechain-types folder.

bridge-data.ts

This is a Typescript class designed to help a developer on the frontend use your bridge. You should implement the functions to fetch data from your bridge / L1.

Aztec Connect Background

You can also find more information on the Aztec docs site.

What is Aztec?

Aztec is a privacy-focused L2 that enables cheap, private interactions with Layer 1 smart contracts and liquidity, via a process called DeFi Aggregation. We use advanced zero-knowledge technology ("zk-zk rollups") to add privacy and significant gas savings to any Layer 1 protocol via Aztec Connect bridges.

What is a bridge?

A bridge is a Layer 1 Solidity contract deployed on mainnet that conforms a DeFi protocol to the interface the Aztec rollup expects. This allows the Aztec rollup contract to interact with the DeFi protocol via the bridge.

A bridge contract models any Layer 1 DeFi protocol as an asynchronous asset swap. You can specify up to two input assets and two output assets per bridge.

How does this work?

Users who have shielded assets on Aztec can construct a zero-knowledge proof instructing the Aztec rollup contract to make an external L1 contract call.

Rollup providers batch multiple L2 transaction intents on the Aztec Network together in a rollup. The rollup contract then makes an aggregate transaction against L1 DeFi contracts and returns the funds pro-rata to the users on L2.

How much does this cost?

Aztec Connect transactions can be batched together into one rollup. Each user in the batch pays a base fee to cover the verification of the rollup proof and their share of the L1 DeFi transaction gas cost. A single Aztec Connect transaction requires 3 base fees, or approximately 8,000 gas.

The user then pays their share of the L1 DeFi transaction. The rollup will call a bridge contract with a fixed, deterministic amount of gas, so the total fee is simply ~8000 gas + BRIDGE_GAS / BATCH_SIZE. For example, a bridge interaction costing 500,000 gas split across a batch of 50 users works out to roughly 8,000 + 10,000 gas per user.

What is private?

The source of funds for any Aztec Connect transaction is an Aztec-shielded asset. When a user interacts with an Aztec Connect Bridge contract, their identity is kept hidden, but balances sent to the bridge are public.

Virtual Assets

Aztec uses the concept of Virtual Assets or Position tokens to represent a share of assets held by a Bridge contract. This is far more gas efficient than minting ERC20 tokens. These are used when the bridge holds an asset that Aztec doesn't support (e.g. Uniswap Position NFTs or other non-fungible assets).

If the output asset of any interaction is specified as "VIRTUAL", the user will receive encrypted notes on Aztec representing their share of the position, but no tokens or ETH need to be transferred. The position tokens have an assetId that is the interactionNonce of the DeFi Bridge call. This is globally unique. Virtual assets can be used to construct complex flows, such as entering or exiting LP positions (e.g. one bridge contract can have multiple flows which are triggered using different input assets).

Example flows and asset configurations

  1. Swapping - 1 real input, 1 real output
  2. Swapping with incentives - 1 real input, 2 real outputs (2nd output reward token)
  3. Borrowing - 1 real input (collateral), 1 real output (borrowed asset, e.g. Dai), 1 virtual output (represents the position, e.g. position is a vault in MakerDao)
  4. Purchasing an NFT - 1 real input, 1 virtual output (asset representing NFT)
  5. Selling an NFT - 1 virtual input (asset representing NFT), 1 real output (e.g. ETH)
  6. Repaying a loan - 1 real input (e.g. Dai), 1 virtual input (representing the vault/position), 1 real output (collateral, e.g. ETH)
  7. Repaying a loan 2 - 1 real input (e.g. USDC), 1 virtual input (representing the position), 2 real outputs ( 1st output collateral, 2nd output reward token, e.g. AAVE)
  8. Partial loan repaying - 1 real input (e.g. Dai), 1 virtual input (representing the vault/position), 1 real output (collateral, e.g. ETH), 1 virtual output (representing the vault/position)
  9. Claiming fees from Uniswap position and redepositing them - 1 virtual input (represents LP position NFT), 1 virtual output (representing the modified LP position)

Please reach out on Discord with any questions. You can join our Discord here.

Download details:

Author: AztecProtocol
Source code: https://github.com/AztecProtocol/aztec-connect-bridges 

#solidity #smartcontract #dapp #blockchain #dao


General ERC20 to ERC1155 Token Wrapper Contract

Allows any ERC-20 token to be wrapped inside an ERC-1155 contract, thereby letting an ERC-20 token function as an ERC-1155 token.

Getting started

Install

yarn add @0xsequence/erc20-meta-token or npm install @0xsequence/erc20-meta-token

Usage from Solidity

pragma solidity ^0.7.4;

import '@0xsequence/erc20-meta-token/contracts/interfaces/IERC20Wrapper.sol';

contract ContractA {
  //...
  function f(address payable wrapperAddress, address ERC20tokenAddress, uint256 amount) public {
    // Deposit `amount` of the ERC-20 and credit `msg.sender` with metaTokens 1:1
    IERC20Wrapper(wrapperAddress).deposit(ERC20tokenAddress, msg.sender, amount);
  }
}

How does it work?

When you deposit ERC-20 tokens (e.g. DAI) in the wrapper contract, it will give you back ERC-1155 metaTokens (e.g. MetaDAI) at a 1:1 ratio. These metaTokens have native meta-transaction functionality, which allows you to transfer tokens without sending an on-chain transaction yourself: you simply sign a message and broadcast it to "executors". You can also "approve" addresses to transfer tokens on your behalf with a signed message instead of calling the ERC-20 approve() function.

If you want to transfer some metaTokens, you simply need to call safeTransferFrom(sender, recipient, ERC20tokenAddress, amount, metaTransactionData), where ERC20tokenAddress is the address of the ERC-20 token you want to transfer. Obtaining a balance is similar: balanceOf(user, ERC20tokenAddress).

You can, at any time, convert these metaTokens back to their original tokens by calling the withdraw() method. A sketch of this round trip is shown below.
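
As a rough sketch of that round trip, with the ERC-1155 surface declared inline so nothing about the wrapper's actual inheritance is assumed, and with the withdraw() signature assumed to mirror the deposit() shown earlier:

pragma solidity ^0.7.4;

// Minimal interface for the sketch; the real wrapper exposes the full
// ERC-1155 surface, and the withdraw() signature here is an assumption.
interface IMetaTokenSketch {
  function safeTransferFrom(address from, address to, uint256 id, uint256 amount, bytes calldata data) external;
  function balanceOf(address account, uint256 id) external view returns (uint256);
  function withdraw(address token, address payable recipient, uint256 amount) external;
}

contract MetaTokenRoundTrip {
  // Transfer metaTokens for a given ERC-20; the ERC-1155 id is the token's address.
  // Assumes msg.sender has approved this contract as an operator.
  function transferMeta(address wrapper, address erc20, address to, uint256 amount) external {
    uint256 id = uint256(uint160(erc20));
    IMetaTokenSketch(wrapper).safeTransferFrom(msg.sender, to, id, amount, "");
  }

  // Unwrap metaTokens back into the original ERC-20.
  function exit(address wrapper, address erc20, uint256 amount) external {
    IMetaTokenSketch(wrapper).withdraw(erc20, msg.sender, amount);
  }
}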

Gas Fee Abstraction

When transferring metaTokens, like metaDAI, you can specify in which currency you want the transaction fee to be paid in. By default, ERC20 token transfers require users to pay the fee in ETH, but with metaTokens, users can pay directly in any ERC20 token they wish. Hence, at a high level, users could transfer DAI by paying the transaction fee in DAI as well, never needing to possess ETH.

Why use the ERC-1155 Interface?

There are a few reasons why the ERC-1155 standard interface was chosen for this contract. First of all, since byte arrays need to be passed to the contract, supporting the ERC-20 interface for these metaTokens would not be possible (at least not without adding significant complexity). Secondly, having a single contract for all ERC-20s is simpler for developers and third parties. Indeed, you don't need to deploy a contract for every ERC-20 token users want to augment with meta-transaction functionality, and third parties don't need to maintain a list of which ERC-20 token address maps to which wrapper contract address.

In addition, it becomes easier to have multiple versions of wrapper contracts. Indeed, if 5 versions exist, you only need 5 contracts to support all ERC-20s across the five versions, compared to 5N contracts, where N is the number of ERC-20 contracts.

Security Review

erc20-meta-token has been audited by two independent parties and all issues discovered were addressed.

(Note: Agustín was hired as a full-time employee at Horizon after the audit was completed. Agustín did not take part in the writing of the erc20-meta-token contracts.)

Dev env & release

This repository is configured as a yarn workspace and has multiple package.json files. Specifically, we have the root ./package.json for the development environment, contract compilation, and testing. Contract source code and distribution files are packaged in "src/package.json".

To release a new version, make sure to bump the version, tag it, and run yarn release. The release command will publish the @0xsequence/erc20-meta-token package in the "src/" folder, separate from the root package. The advantage here is that application developers who consume @0xsequence/erc20-meta-token aren't required to install any of the devDependencies in their toolchains as our build and contract packages are separated.

Download details:

Author: 0xsequence
Source code: https://github.com/0xsequence/erc20-meta-token
License: View license

#solidity #smartcontract #dapp #blockchain #dao

General ERC20 to ERC1155 Token Wrapper Contract

Zk NFT: An NFT Powered By ZkSNARKs

🎲 Zero-knowledge NFT

Here I present zk-NFT, an NFT powered by zkSNARKs that flips the concept of an NFT upside down. Normally, NFTs contain metadata that is fully revealed. Here, zk-NFT allows users to create and prove ownership of NFTs, as well as their characteristics, without revealing the full underlying metadata.

This would allow interesting universes to be built on top of the NFTs. For example, two pets, represented by NFTs and their hidden attributes, can engage in battle, but through ZK, can be orchestrated in a way such that the attributes are not fully revealed, but only the battle results are. In addition, this creates unseen dynamics in trading through a fog-of-war type interaction.

zkSNARKs written in Circom and Solidity.

How it works

Here I propose a hypothetical game play involving fog-of-war type bids.

Suppose Alice plays a video game (powered by zk-NFT). First, the game asks Alice to create her character (mint the NFT). The character has three attributes: speed, agility, and endurance, and the total of these attributes cannot exceed 10.

Alice then creates her character and keeps these attributes private (off-chain) to herself and submits a hash of it onchain (committing the zk proof). After a few hours, many other players such as Bob also create their own unique characters.

As the game progresses, many players realize that characters with high speed and endurance values are currently useful, and Alice happens to have such a character. However, because the information is hidden, others don't know this. Seeing that her character is desirable, Alice decides to sell it.

In order to create speculation and movement in the market, Alice partially reveals (using SNARKs) that her character indeed has a speed attribute above 7, but doesn't reveal the exact stat (realistically she should be able to reveal any combination of proofs). This garners interest from buyers, and Bob decides to place a bid of 1 ETH. At the same time, Alice, without revealing the entire NFT, retains some leverage and information asymmetry.

Alice, seeing that Bob's bid is fair, accepts the bid and sells her character. When she accepts the bid, she hands over the rights to the character, and is also bound to simultaneously reveal all three attributes of her character. Bob is very happy: he knew that the speed attribute was higher than 7, and is pleased to find out that it's actually a 9, thinking he got a good deal. Alice also thinks she got a decent deal, since the other attributes of her character, agility and endurance, are low (but she didn't have to reveal them to Bob), and she suspects that in the future the game will value characters with high agility stats, so she was happy to sell. She continues the cycle by trying to buy other unrevealed characters ...

The Breakdown

Minting. Users first select attributes for their NFT in private, subject to some constraint. In this prototype, that means selecting three attributes for your character whose sum must be less than a maximum value. These attributes are then stored off-chain in a browser cache. To mint the NFT, a minting proof is submitted that commits to the hidden attribute values; see the sketch below.
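
A hypothetical sketch of the on-chain side, assuming a Circom-generated verifier behind a generic interface (the verifier interface and all names here are assumptions, not the actual zk-NFT code):

pragma solidity ^0.8.0;

// Stand-in for a Circom/snarkjs-generated verifier; the signature is assumed.
interface IVerifier {
    function verifyProof(bytes calldata proof, uint256[1] calldata publicInputs) external view returns (bool);
}

contract ZkNftMintSketch {
    IVerifier public immutable verifier;
    mapping(uint256 => address) public ownerOf;      // tokenId => owner
    mapping(uint256 => uint256) public commitmentOf; // tokenId => attribute commitment
    uint256 public nextId;

    constructor(IVerifier _verifier) {
        verifier = _verifier;
    }

    // `commitment` hashes the hidden attributes; the proof shows they sum to
    // at most the allowed maximum (e.g. 10) without revealing them.
    function mint(uint256 commitment, bytes calldata proof) external returns (uint256 tokenId) {
        require(verifier.verifyProof(proof, [commitment]), "invalid proof");
        tokenId = nextId++;
        ownerOf[tokenId] = msg.sender;
        commitmentOf[tokenId] = commitment;
    }
}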

Revealing and Speculating. Trading is an integral part of any NFT ecosystem, but how would people trade if the information is hidden? Here, we use a partial reveal scheme that allows users to first reveal a portion of their NFT's metadata, proven by zkSNARKs, without revealing everything. For example, seller A wants to prove to buyer B that his NFT has a "speed" attribute greater than 5 to encourage a purchase, but without specifying what that value is, to retain leverage. This partial reveal scheme provides the necessary speculation to engage buyers and sellers in a fog-of-war type interaction we haven't seen anywhere else in the NFT world.

Trading. In this version of the game, when a seller accepts a bid, they must reveal the full attributes of their NFT, effectively broadcasting it on-chain. It's possible to also swap the hidden attributes in off-chain ways. I'm currently exploring more trading variations!

Future work

I'm broadly interested in exploring how ZK can play out in gaming and community-owned characters.

Please follow me here and reach out here to jam on ideas!

Download details:

Author: kevinz917
Source code: https://github.com/kevinz917/zk-NFT

#solidity #smartcontract #dapp #blockchain #dao #nft #zkSNARKs

Zk NFT: An NFT Powered By ZkSNARKs

Orbit Model: A framework for building high gravity communities

Welcome to the Orbit Model

The Orbit Model is a framework for building high gravity communities. A high gravity community is one that excels at attracting and retaining members by providing an outstanding member experience.

Explore the model

The model will help you answer questions like:

  • How do I measure my community's engagement and growth?
  • How do I attract new people to my community?
  • Which members should I prioritize spending time with?
  • What contribution should I ask each community member to make?

Who can use the Orbit Model?

Product and developer communities. Form a community of your most enthusiastic users and developers.

Community-driven businesses. Learn who your most influential customers are and help increase their reach.

Open source projects. Build relationships that help convert users into contributors.

And anyone else focused on growing their community!

About the Orbit Model

The Orbit Model was developed by developer advocates for working with communities of software developers, but the principles apply to most things that are community-shaped. The model was first used in 2014 and put on GitHub in November 2019 so anyone can use it and contribute back. We aim to make this framework useful to open source maintainers, developer advocates, community managers, founders, and anyone interested in building a community. It is sponsored and maintained by Orbit.

Learn more on the about page.

Contributing

Contributions and questions are very welcome. Please open issues with ideas on how to improve the model including feedback, critiques, and information about how you're using it.

Read the contribution guide to learn more.

Tech Stack

We want to thank and acknowledge the open source projects used to build this site:

  • Next.js
  • MDX
  • G6
  • FontAwesome
  • Tailwind CSS

Changelog

May 2022

  • New website, updated concepts

July 2021

  • Updated Orbit Level names

March 2020

  • Created sections for love, reach, and gravity
  • Added calculations and example tables for each metric
  • Added Orbit KPIs section
  • Added sections about choosing and distributing orbit levels

December 2019

  • Added orbit levels
  • Improved introduction

June 2019

  • Repository created

Download details:

Author: orbit-love
Source code: https://github.com/orbit-love/orbit-model
License: MIT license

#solidity #smartcontract #dapp #blockchain #dao

Orbit Model: A framework for building high gravity communities

Build Smart Contracts for an IDO Use Case Using Solidity & TypeScript

EasyAuction

EasyAuction should become a platform to execute batch auctions for fair initial offerings and token buyback programs. Batch auctions are a market mechanism for matching limit orders of buyers and sellers of a particular good with one fair clearing price. Batch auctions have already established themselves in traditional finance as a tool for initial offerings, and in the blockchain ecosystem they are expected to become a fundamental DeFi building block for price discovery as well. The EasyAuction platform is inspired by the auction mechanism of Gnosis Protocol v1, which has shown significant product-market fit for initial dex offerings (IDOs) (cp. sales of DIA, mStable, etc.). EasyAuction significantly improves the user experience for IDOs by settling arbitrarily many bids with a single clearing price instead of roughly 28 orders, thereby making the mechanism fairer, easier to use, and more predictable. Given the emerging regulations for IDOs and utility token sales - see MiCA - EasyAuction intends to comply with them, enabling new projects a safe start without legal risks.

Use cases

Initial token offering with a fair price-discovery

This batch auction mechanism allows communities, projects, and DAOs to offer their future stakeholders fair participation within their ecosystem by acquiring utility and governance tokens. One of the promises of DeFi is the democratization and communitization of big corporate infrastructures to build open platforms for the public good. By running a batch auction, these projects facilitate one of the fairest distributions of stakes. Projects should demonstrate their interest in fair participation by setting a precedent for future processes by choosing this fair auction principle over private multi-stage sales. Auction-based initial offerings are also picking up in the traditional finance world. Google was one of the first big tech companies to use a similar auction mechanism with a unique clearing price for its IPO. Nowadays, companies like Slack and Spotify advocate similar approaches as well via direct listings: selling their stocks in the pre-open auction on the first day of trading. Overall, this market for initial token offerings is expected to grow significantly over time together with the ecosystem. Even Gnosis Protocol version 1, a platform not intended for this use case, was able to facilitate IDOs with a total of more than 20 million in funding.

Token buy back programs

Many decentralized projects have to buy back their tokens or auction off their tokens to clear deficits within their protocol. This EasyAuction platform allows them to schedule individual auctions to facilitate these kinds of operations.

Initial credit offering

First implementations of yTokens are live on the Ethereum blockchain. The initial price finding and matching of creditor and debtor for such credit instruments can be done in a very fair manner for these kinds of financial contracts.

Protocol description

The batch auction mechanism

In this auction type, a pre-defined amount of tokens is auctioned off. Anyone can bid to buy these tokens by placing a buy-order with a specified limit price during the whole bidding time. At the end of the auction, the final price is calculated as follows: the buy volumes from the highest bids are added up until this sum reaches the initial sell volume. The bid that lifts the overall buy volume to the initial sell volume is the bid defining the uniform clearing price. All bids with a higher price will be settled and traded against the initial sell volume at the clearing price. All bids with a lower price will not be considered for the settlement. The principle is best described by the following diagram (image from nytimes), and a simplified sketch of the rule is shown below:
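
Here is a simplified, out-of-protocol sketch of that clearing rule (not the actual EasyAuction code, which settles orders iteratively on-chain; bid volumes here are taken in units of the auctioned token for simplicity):

pragma solidity ^0.8.0;

contract ClearingPriceSketch {
    struct Bid {
        uint256 price;     // limit price of the bid
        uint256 buyVolume; // demanded amount, in units of the auctioned token
    }

    // `bids` must be sorted by descending price.
    function clearingPrice(Bid[] memory bids, uint256 sellVolume) public pure returns (uint256) {
        uint256 cumulative = 0;
        for (uint256 i = 0; i < bids.length; i++) {
            cumulative += bids[i].buyVolume;
            // The bid that lifts cumulative demand past the sell volume
            // sets the uniform clearing price for every matched bid.
            if (cumulative >= sellVolume) {
                return bids[i].price;
            }
        }
        revert("demand below sell volume");
    }
}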

Specific implementation batch auction

EasyAuction allows anyone to start a new auction of any ERC20 token (auctioningToken) against another ERC20 token (biddingToken). The entity initiating the auction, the auctioneer, has to define the amount of tokens to be sold, the minimal price for the auction, the end-time of the order cancellation period, the end-time of the auction, plus some additional parameters. The auctioneer initiates the auction with an Ethereum transaction transferring the auctioningTokens and setting the parameters. From this moment on, anyone can participate as a buyer and submit bids. Each buyer places sell-orders with a specified limit price into the system. Until the end-time of the order cancellation period, orders can still be canceled. Once this time has passed, orders can no longer be canceled, but new ones can still be placed until the auction end-date.

There is only one exception: if the auction is configured to allow atomic closures, then the next step - the price calculation - and placing one last order even after the auction end-date can be done within one Ethereum transaction. In all other cases, no further order placement is allowed once the auction end-date has passed, and the auction can be cleared by the on-chain price calculations. The price calculation can happen over several Ethereum transactions, in case the calculation consumes more gas than is available in one Ethereum block. By setting a minimum sell-amount per order, the gas for a price calculation can be restricted, and even forced to fit into one block. This can be useful in case the auction should be atomically closable.

Once the price of an auction has been calculated, everyone can claim their part of the auction. The auctioneer can withdraw the bought funds, and the buyers matched in the auction can withdraw their bought tokens. Buyers whose bids were priced too low and were not matched in the auction can withdraw their bidding funds.

Comparison to Dutch auctions

The proposed batch auction system has a number of advantages over Dutch auctions.

  • The biggest advantage is certainly that buyers don't have to wait for a certain time to submit orders, but that they can submit orders at any time. This makes the system much more convenient for users.
  • Dutch auctions have very high activity right before the auction closes. If pieces of the infrastructure are not working reliably during this time period, then prices can fall further than expected, causing a loss for the auctioneer. Also, high gas prices during this short time period can be a hindering factor for buyers trying to quickly join the auction.
  • Dutch auctions calculate their price based on block time. This pricing is hard to predict for all participants, as mining is a stochastic process, which makes the mining time of the next block unpredictable.
  • Dutch auctions cause a gas-price bidding war to close the auction. In a batch auction, by contrast, different buyers bid against other bidders in the mempool. Especially once EIP-1559 is implemented and inclusion of a transaction in the next block is guaranteed, bidders have to compete on limit prices instead of gas prices to get included in the auction.

Warnings

In case the auction is expected to raise more than 2^96 units of the biddingToken, don't start the auction, as it will not be settleable. This corresponds to about 79 billion DAI.

Prices between biddingToken and auctioningToken are expressed by a fraction whose components are stored as uint96. Make sure your expected prices are representable as such fractions.

Instructions

Backend

Install dependencies

git clone https://github.com/gnosis/ido-contracts
cd ido-contracts
yarn
yarn build

Running tests:

yarn test

Run migration:

yarn deploy --network $NETWORK

Verify on etherscan:

npx hardhat etherscan-verify --license None --network rinkeby

Running scripts

Create auctions

New auctions can be started with a hardhat script or via a safe app. The safe app can be found here: Auction-Starter. A new auction selling the token 0xc778417e063141139fce010982780140aa0cd5ab for 0x5592EC0cfb4dbc12D3aB100b257153436a1f0FEa can be started using the hardhat script as follows:

export NETWORK=<Your Network>
export GAS_PRICE_GWEI=<Your gas price>
export INFURA_KEY=<Your infura key>
export PK=<Your private key>
yarn hardhat initiateAuction --auctioning-token "0xc778417e063141139fce010982780140aa0cd5ab" --bidding-token "0x5592EC0cfb4dbc12D3aB100b257153436a1f0FEa" --sell-amount 0.1 --min-buy-amount 50 --network $NETWORK

Please look in the hardhat script /src/tasks/initiate_new_auction to better understand all parameters.

A more complex example for starting an auction would look like this:

yarn hardhat initiateAuction --auctioning-token "0xc778417e063141139fce010982780140aa0cd5ab" --bidding-token "0x5592ec0cfb4dbc12d3ab100b257153436a1f0fea" --sell-amount 0.5 --min-buy-amount 800 --auction-end-date 1619195139 --order-cancellation-end-date 1619195139 --allow-list-manager "0x80b8AcA4689EC911F048c4E0976892cCDE14031E" --allow-list-data "0x000000000000000000000000740a98f8f4fae0986fb3264fe4aacf94ac1ee96f"  --network $NETWORK

Settle auctions

Auctions can be settled with the clearAuction script permissionlessly by any account:

export NETWORK=<Your Network>
export GAS_PRICE_GWEI=<Your gas price>
export INFURA_KEY=<Your infura key>
export PK=<Your private key>
yarn hardhat clearAuction --auction-id <Your auction ID> --network $NETWORK

Allow-Listing: Generating signatures

Signatures for an auction with participation restrictions can be created as follows:

  1. Create a file: your_address_inputs.txt with comma separated addresses that should be allow-listed for the auction
  2. Initiate the auction and remember your auctionId
  3. Run the following script:
export NETWORK=<Your Network>
export INFURA_KEY=<Your infura key>
export PK=<Your private key _for the signing address_. The address for this key should not hold any ETH>
yarn hardhat generateSignatures --auction-id "Your auctionId" --file-with-address "./your_address_inputs.txt" --network $NETWORK

The generated signatures can be directly uploaded to the backend by adding the flag --post-to-api - or --post-to-dev-api in case you are testing with development environment - to the previous command. Uploading signatures allows all authorized users to create orders from the web interface without the extra friction of managing a signature.

Audit

The solidity code was audited by Adam Kolar, from the G0 Group. The report can be found here and here.

Download details:

Author: gnosis
Source code: https://github.com/gnosis/ido-contracts
License: LGPL-3.0 license

#solidity #smartcontract #dapp #blockchain #dao #ido #typescript

Build Smart Contracts for an IDO Use Case Using Solidity & TypeScript

Tally SafeGuard: A tool for on-chain Ethereum DAOs To Delegate Funds

Tally SafeGuard  

Project Abstract

SafeGuard is a governance accountability tool built around the Gnosis Safe multisig. It allows governance to retain veto power over Gnosis Safe contracts, and to reclaim funds entrusted to multisig holders without requiring the multisig signatories' approval.

Example use cases include giving final veto power on the execution of multisig transactions to token holders, and the ability to reclaim funds from a multisig where the signatories are unable or unwilling to sign a refund transaction.

Background

On-chain governance is slow and cumbersome, requiring coordination of many individual voters as well as substantial gas expenses. This can make it unsuitable for managing day-to-day operations or expenses.

Protocols can help scale their governance capacities and increase agility by delegating authority to smaller decision making bodies such as multi sigs. But this presents a tradeoff between reducing friction and maintaining security, as governance has no way to claw back funds or authority from a multisig without the cooperation of the multisig signers.

Problem

Current multisigs are a one-way delegation of power. Multisigs are secure but lack oversight, transparency, and accountability. This lack of accountability limits protocols' ability to delegate authority to subcommittees, bottlenecks protocol growth, and limits the effectiveness of governance.

Solution / Vision

Create a modular open source framework built around Gnosis MultiSig, OpenZeppelin Timelocks, Nomic Labs HardHat, and Compound Governor alpha that supports optimistic multi-sig governance.

The proposed solution would work alongside existing governance structures, require no modifications to existing smart contracts, and be entirely optional. The solution would work for any governance ecosystem where the underlying token is an ERC20 and uses a checkpointing system similar to Uniswap and Compound.

User Stories

As a governance, I want to entrust a budget to a multisig with the confidence that governance has the ability to reclaim funds without requiring the approval of the multisig signers.

As a governance, I want some quorum of governance token holders to have ultimate authority to delay or cancel transactions approved by multisig signers.

Preliminary Specification

The figure below shows how the proposed mechanism works and interacts with existing governance systems. New components are highlighted in yellow, while process flow is shown via the numbered elements.

Implementation Diagram

New components:

Guardian: a modified version of the Governor Alpha contract. The main governance timelock has the ability to change guardian parameters, including voting period, quorum, and proposal threshold.

Multisig timelock: a modified version of the Compound Timelock contract. The admin power to cancel transactions is split from the admin power to queue or execute transactions; a sketch of this split follows.
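
The following is a minimal sketch of that split (hypothetical code, not the actual SafeGuard contracts): the multisig admin can queue and execute, while a separate guardian, controlled by governance, can only cancel.

pragma solidity ^0.8.0;

contract SplitTimelockSketch {
    address public admin;     // multisig: may queue and execute
    address public guardian;  // governance: may only cancel
    uint256 public delay;
    mapping(bytes32 => uint256) public queuedAt; // txHash => time queued

    constructor(address _admin, address _guardian, uint256 _delay) {
        admin = _admin;
        guardian = _guardian;
        delay = _delay;
    }

    function queue(bytes32 txHash) external {
        require(msg.sender == admin, "only admin");
        queuedAt[txHash] = block.timestamp;
    }

    // Governance retains veto power without the signers' cooperation.
    function cancel(bytes32 txHash) external {
        require(msg.sender == guardian, "only guardian");
        delete queuedAt[txHash];
    }

    function execute(bytes32 txHash) external {
        require(msg.sender == admin, "only admin");
        uint256 eta = queuedAt[txHash];
        require(eta != 0 && block.timestamp >= eta + delay, "not ready");
        delete queuedAt[txHash];
        // ... perform the underlying call here ...
    }
}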

Process flow:

Token holders submit and approve a regular Governor Alpha proposal to transfer funds or authority to an external entity/multisig.

Multisig signers generate and sign a transaction, and queue the transaction into the multisig timelock.

If the main governance token holders believe that the multisig took improper actions, the token holders can cancel pending transactions or sweep funds and permissions from the multisig timelock back to the governance timelock. They do this by submitting and passing a proposal through the guardian contract.

If no action is taken to stop the queued transaction, it can be executed by the multisig after the timelock period elapses.

Development

Pre Requisites

Before running any command, make sure to install dependencies:

$ yarn install

Compile

Compile the smart contracts with Hardhat:

$ yarn compile

TypeChain

Compile the smart contracts and generate TypeChain artifacts:

$ yarn typechain

Lint Solidity

Lint the Solidity code:

$ yarn lint:sol

Lint TypeScript

Lint the TypeScript code:

$ yarn lint:ts

Test

Run the Mocha tests:

$ yarn test

Coverage

Generate the code coverage report:

$ yarn coverage

Report Gas

See the gas usage per unit test and average gas per method call:

$ REPORT_GAS=true yarn test

Clean

Delete the smart contract artifacts, the coverage reports and the Hardhat cache:

$ yarn clean

Download details:

Author: withtally
Source code: https://github.com/withtally/safeguard
License: MIT license

#solidity #smartcontract #dapp #blockchain #dao #typescript

Tally SafeGuard: A tool for on-chain Ethereum DAOs To Delegate Funds

Build Smart Contracts for 5 NFT Airdrop Methods Using Solidity

Merkle Distributor Airdrop

Introduction

NFT merkle airdrop

This demo introduces 5 methods for NFT airdrops:

  • Airdrop to a specific address
  • A signature implying the beneficiary is submitted to the blockchain for verification and airdropping
  • A signature following EIP-712 is submitted to the blockchain for verification and airdropping
  • A signature following EIP-712 is submitted to the blockchain for verification, signature check, and airdropping
  • A merkle proof is submitted to the blockchain for verification and airdropping

ERC20 merkle airdrop

Kindly refer to 1inch, dYdX, and Uniswap for more details; all of these projects utilize merkle proofs for airdropping. A minimal sketch of this claim pattern follows.
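
As a minimal sketch of that pattern (modeled loosely on Uniswap's merkle-distributor and using OpenZeppelin's MerkleProof library; simplified and unaudited):

pragma solidity ^0.8.0;

import '@openzeppelin/contracts/utils/cryptography/MerkleProof.sol';
import '@openzeppelin/contracts/token/ERC20/IERC20.sol';

contract MerkleAirdropSketch {
    IERC20 public immutable token;
    bytes32 public immutable merkleRoot;
    mapping(uint256 => bool) public claimed;

    constructor(IERC20 _token, bytes32 _root) {
        token = _token;
        merkleRoot = _root;
    }

    function claim(uint256 index, address account, uint256 amount, bytes32[] calldata proof) external {
        require(!claimed[index], 'already claimed');
        // The leaf commits to the claimant's slot in the off-chain list.
        bytes32 leaf = keccak256(abi.encodePacked(index, account, amount));
        require(MerkleProof.verify(proof, merkleRoot, leaf), 'invalid proof');
        claimed[index] = true;
        require(token.transfer(account, amount), 'transfer failed');
    }
}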

Demo: Red packet

We present a demo in which you can airdrop to your friends on holiday!

Please refer to the code: contracts/redpacket

Demo instruction

ERC721Basic. The simplest demo, where the issuer can airdrop an NFT to a specific address by calling "mint":

// Mint tokenId directly to `account`, expecting a Transfer event from the zero address
await expect(this.registry.connect(this.accounts[1]).mint(account, tokenId))
  .to.emit(this.registry, 'Transfer')
  .withArgs(ethers.constants.AddressZero, account, tokenId);

ERC721LazyMint
This is a demo where airdropping happens after verification. Consider the following case: before the real "mint" is called on the blockchain, the issuer first sends an email containing the token id to the user, who responds with a signature created over this token id. The issuer then wraps the user address (fetched from some database), token id, and signature as input to call "redeem" on the NFT contract, which mints a new NFT to the target user on verification success.

// Signature
this.token.signature = await this.accounts[1].signMessage(hashToken(this.token.tokenId, this.token.account));

// Post the signature to blockchain
await expect(this.registry.redeem(this.token.account, this.token.tokenId, this.token.signature))
  .to.emit(this.registry, 'Transfer')
  .withArgs(ethers.constants.AddressZero, this.token.account, this.token.tokenId);

ERC721LazyMintWith712
In the above example, we create the signature over a parameter that may be machine-friendly but is not that friendly to humans. Moreover, when MetaMask pops up for signing, we do not know what is being signed. We prefer a more structured, more readable scheme that lets us see what data will be signed. EIP-712 addresses this point: it can display the data to be signed to the user, and the generated signature will be verified by the smart contract. The whole process is similar to that in ERC721LazyMint.

Here is an example of creating a signature with EIP-712, completed with ethers v5's signer._signTypedData call (which takes the domain, types, and value):

this.token.signature = await this.accounts[1]._signTypedData(
  // Domain
  {
    name: 'Name',
    version: '1.0.0',
    chainId: this.chainId,
    verifyingContract: this.registry.address,
  },
  // Types
  {
    NFT: [
      { name: 'tokenId', type: 'uint256' },
      { name: 'account', type: 'address' },
    ],
  },
  // Value
  this.token,
);

ERC721LazyMintWith712SignatureChecker
This demo is similar to ERC721LazyMintWith712 but with a SignatureChecker inside.

function _verify(address signer, bytes32 digest, bytes memory signature)
    internal view returns (bool)
{
    return hasRole(MINTER_ROLE, signer) && SignatureChecker.isValidSignatureNow(signer, digest, signature);
}

ERC721MerkleDrop
A merkle proof is generated and sent to the blockchain for verification; if valid, the newly created NFT is minted to the specific user.

// Generate merkle proof offchain
this.token.proof = this.merkleTree.getHexProof(hashToken(this.token.tokenId, this.token.account));

// Verify the proof and mint to the user. The whole process happens onchain.
await expect(this.registry.redeem(this.token.account, this.token.tokenId, this.token.proof))
  .to.emit(this.registry, 'Transfer')
  .withArgs(ethers.constants.AddressZero, this.token.account, this.token.tokenId);

Quickstart

Merkle airdrop

Install dependencies

yarn

Test

npx hardhat test

HappyRedPacket

Set up environment

cp .env.example .env

## Please configure PRIVATE_KEY, PRIVATE_KEY1, PRIVATE_KEY2, INFURA_ID, PROJECT_ID, TARGET_ACCOUNT in .env
## PRIVATE_KEY is the private key of your wallet account, while TARGET_ACCOUNT is your wallet address.
## If you want multiple accounts trying to claim the same red packet, please configure multiple PRIVATE_KEYs.

Install dependencies

yarn

Deploy ERC20 smart contract. Execute the following command; we take "Token address" from the output as the address of the deployed contract.

npx hardhat run scripts/redpacket/1-deploySimpleToken.js --network kovan

## Console output
Deploying contracts with the account: 0x3238f24e7C752398872B768Ace7dd63c54CfEFEc
Account balance: 796474026501725149
Token address: 0xdc6999dC3f818B4f74550569CCC7C82091cA419F
1000000000

Deploy RedPacket smart contract. Execute the following command; we take "RedPacket address" from the output as the address of the deployed contract.

npx hardhat run scripts/redpacket/2-deployHappyRedPacket.js --network kovan

## Console output
Deploying contracts with the account: 0x3238f24e7C752398872B768Ace7dd63c54CfEFEc
Account balance: 783625061469463255
RedPacket address: 0x6F35e57a7421F5b04DDb47b67453A5a5Be32e58B

Create a red packet

npx hardhat run scripts/redpacket/3-createRedPacket.js --network kovan

## Console output
Approve Successfully
merkleTree Root: 0x5cc6f1ff34a2c6f871d40cdc4559468f96a7ec06d7bf6ab0f9b5aeccc9b33154
CreationSuccess Event, total: 10000   RedpacketId: 0x45eb11e56a1b699f5e99bd16785c84b73a8257c712e0d1f31306ab1e3423b2e0
Create Red Packet successfully

Claim packet

npx hardhat run scripts/redpacket/4-claimRedpacket.js --network kovan

## We can see "Sign Message:" followed by a signature verified by the smart contract

References

Link: https://github.com/Dapp-Learning-DAO/Dapp-Learning/tree/main/basic/42-merkle-distributor-airdrop

#solidity #smartcontract #dapp #blockchain #dao

Build Smart Contracts for 5 NFT Airdrop Methods Using Solidity

Optimism: A Layer2 Solution for Ethereum

Optimism

Abstract

Optimistic Rollup (OR) is a Layer 2 solution, meaning it is not an independent chain but relies on the Ethereum mainnet. The benefit of this construction is that it can run smart contracts at scale while enjoying the security of Ethereum, similar to Plasma but with lower transaction capacity. OR uses the OVM (Optimistic Virtual Machine), which is compatible with the EVM, allowing contracts to behave the same on both sides.

The name "Opmistic Rollup" comes from the characteristics of the solution itself。 Optimistic means less infomation for aggregator to publish ,and no need to provide any proof。 Rollup means transactions are submitted to L1 in bundles。

Test steps

ETH cross-chain with Optimism gateway

  • Deposit ETH to Optimism
    The Optimism testnet links to the Kovan testnet. Before we send transactions to Optimism, we need to deposit ETH to it first.
    Visit the Optimism gateway, then choose "Deposit" and input the ETH amount.

  • Wait for the deposit to finish
    If the deposit succeeds, you'll see the following message on the web page.

  • Check balance
    After the ETH deposit succeeds, check the balance on Optimism with MetaMask.

  • Install dependencies
yarn
  • Config env parameters
    Use the template .env.example to create .env, then configure PRIVATE_KEY and INFURA_ID in it
  • Deploy Contract
❯ npx hardhat run scripts/deploy.js --network optimism
Deploying contracts with the account: 0xa3F2Cf140F9446AC4a57E9B72986Ce081dB61E75
Account balance: 1500000000000000000
Token address: 0x0d29e73F0b1AE67e28495880636e2407e41480F2

ETH cross-chain with script

  • Deposit ETH to Optimism with a script
    In addition to doing the cross-chain transfer through the UI, we can also do it with a script.
    In the following script, ETH is deposited to Optimism by calling the cross-chain contract on the Kovan side.
npx hardhat run scripts/deposit-eth.js --network kovan

## It takes about 5 minutes to finish the deposit, after which 0.0001 ETH is added to your account on the Optimism side
  • Withdraw ETH to Kovan
    After depositing ETH to Optimism, we can also withdraw it back to Kovan.
    Similar to the deposit, we just call the cross-chain contract on the Optimism side, and ETH will be withdrawn to Kovan.
npx hardhat run scripts/withdraw-eth.js --network optimism

## It takes about 5 minutes to finish the withdrawal, after which 0.0001 ETH is added to your account on the Kovan side

Link: https://github.com/Dapp-Learning-DAO/Dapp-Learning/tree/main/basic/28-optimism-layer2

#solidity #smartcontract #dapp #blockchain #dao #optimism

Optimism: A Layer2 Solution for Ethereum

Arbitrum Layer2: More Dependent on Ethereum Virtual Machine (EVM)

Arbitrum

The difference between Arbitrum and Optimism lies in the Interactive Proving challenge.

Arbitrum is more dependent on the Ethereum Virtual Machine (EVM). When someone submits a challenge in Optimism, the transaction in question is run through the EVM. In contrast, Arbitrum uses an off-chain dispute resolution process to reduce the dispute to a single step within a transaction. The protocol then sends this one-step assertion (rather than the entire transaction) to the EVM for final validation. Therefore, conceptually speaking, the dispute resolution process of Optimism is much simpler than Arbitrum's.

Advantages of Interactive Proving:

  1. More efficient in the optimistic case;
  2. More efficient in the pessimistic case;
  3. Much higher per-tx gas limit;
  4. No limit on contract size
  5. More implementation flexibility

This means that, for disputed transactions, final confirmation on Arbitrum is delayed longer than on Optimism.

Arbitrum is cheaper in transaction cost for dispute settlement (on Layer 1).

The dispute resolution process of Optimism is simpler and faster than Arbitrum, because it only provides disputed transactions to EVM. Speed is an advantage of Optimism, because disputes can be resolved quickly and without interfering with future progress of the rollup chain.

The Arbitrum 2.0 protocol

The current Arbitrum protocol makes important advances over the original Arbitrum protocol in that it supports multiple pipelined DAs (disputable assertions). In the new protocol, each state can have at most one DA following from it. If a DA has no following state, then anybody can create a DA that follows it, creating a new branch point. The result will be a tree of possible futures, like the one shown below.

AVM

The Arbitrum Virtual Machine (AVM) is the interface between the Layer 1 and Layer 2 parts of Arbitrum. Layer 1 provides the AVM interface and ensures correct execution of the virtual machine.

Every Arbitrum chain has a single AVM which does all of the computation and maintains all of the storage for everything that happens on the chain.

Differences between the AVM and EVM are motivated by the needs of Arbitrum's Layer 2 protocol and Arbitrum's use of interactive proving to resolve disputes.

Gotchas

Block Numbers: Arbitrum vs. Ethereum

  • One Ethereum block may contain multiple Arbitrum blocks.
  • Arbitrum blocks use Layer 1's block timestamp.

Useful Addresses: https://developer.offchainlabs.com/docs/useful_addresses

L1 to L2 messaging

Ethereum to Arbitrum: Retryable Tickets

Retryable tickets are the Arbitrum protocol’s canonical method for passing generalized messages from Ethereum to Arbitrum. A retryable ticket is an L2 message encoded and delivered by L1; if gas is provided, it will be executed immediately. If no gas is provided or the execution reverts, it will be placed in the L2 retry buffer, where any user can re-execute for some fixed period (roughly one week).

L2 to L1 messaging

https://github.com/OffchainLabs/arbitrum-tutorials/tree/master/packages/outbox-execute

Quick Start

deploy SimpleToken

install dependencies

yarn

config env variables
Copy the .env.example file, rename it to .env, then modify PRIVATE_KEY and INFURA_ID

switch network to arbitrum testnet (arbitrum rinkeby)

Because the testnet is Arbitrum Rinkeby, we need to get some test tokens from the Ethereum Rinkeby network.

Then transfer the Ethereum Rinkeby test tokens to the Arbitrum testnet through the Arbitrum bridge; it will take about 10 minutes.

After a while, we can see that the balance on the Arbitrum testnet is no longer zero.

run script

npx hardhat run scripts/deploy.js --network arbitrum_rinkeby

output content (421611 is Arbitrum-Rinkeby chainId)

Network ChainId: 421611
Deploying contracts with the account: 0x....
Account balance: ...
Token address: 0x...

L1 to L2

node ./scripts/L1toL2.js

output:

Arbitrum Demo: Cross-chain Greeter
Lets Go ➡️ ...🚀
Deploying L1 Greeter 👋
deployed to 0x24b11e81B6477129f298e546c568C20e73b6DD5b
Deploying L2 Greeter 👋👋
deployed to 0x4998e921AC9Cd7ba3B2921aDA9dCedbDC1341465
...

TODO

  • L2 to L1 messaging demo

Link: https://github.com/Dapp-Learning-DAO/Dapp-Learning/tree/main/basic/27-Arbitrum-layer2

#solidity #smartcontract #dapp #blockchain #dao

Arbitrum Layer2: More Dependent on Ethereum Virtual Machine (EVM)

Build Smart Contract for Quadratic Voting & Quadratic Funding

Quadratic Voting and Quadratic Funding

Concept

In the governance of the public sphere, votes are needed to decide where the funds will be spent, and which projects will receive priority funding. For example, a city may allocate its budget for projects such as parks, hospitals, roads. Or a blockchain project funded by communities and institutions may allocate its budget for projects such as wallets, developer tools, document editing, hackathons, community podcasts, privacy agreements, etc.

There are usually two ways to vote: "one-person-one-vote" and "one-dollar-one-vote."

One-Person-One-Vote

The nature of "one-person-one-vote" is that no matter how much you care about something, you can only vote for it once. In Vitalik's article, one-person-one-vote is: if you care about an issue (or a public good/project), the cost of your first vote is extremely low, but if you want to continue to contribute, the cost becomes infinitely high (because you only have one vote).

One-Dollar-One-Vote

One-dollar-one-vote is a way to vote with money (or Token), people who care more about an issue can contribute more (provided you have enough money/tokens). For example, the PoS consensus implements this idea. This approach leads to the ability to buy influence.

Suppose a community wants to allocate its budget between two public infrastructure projects: a road and a garden on a street corner. Perhaps most people are more concerned about the road, but the rich man who lives on the corner is very concerned about having a garden there. The rich man will pay a lot of money, and as a result, projects that most people care about may lose out to projects that few people care about.

Think

What if we want to take into account people's concerns about different issues at the same time, without completely "buying influence"? That's when you can use quadratic voting and quadratic funding.

Quadratic Voting

Quadratic voting is a collective decision-making procedure which involves individuals allocating votes to express the degree of their preferences, rather than just the direction of their preferences. By doing so, quadratic voting seeks to address issues of voting paradox and majority rule. Quadratic voting works by allowing users to "pay" for additional votes on a given matter to express their support for given issues more strongly, resulting in voting outcomes that are aligned with the highest willingness to pay outcome, rather than just the outcome preferred by the majority regardless of the intensity of individual preferences. The payment for votes may be through either artificial or real currencies (e.g. with tokens distributed equally among voting members or with real money).

Quadratic voting is a variant of cumulative voting in the class of cardinal voting. It differs from cumulative voting by altering "the cost" and "the vote" relation from linear to quadratic.

Quadratic voting is based upon market principles, where each voter is given a budget of vote credits that they may spend at their own discretion in order to influence the outcome of a range of decisions. If a participant has strong support for or against a specific decision, additional votes can be allocated to proportionally demonstrate the voter's support. A vote pricing rule determines the cost of additional votes, with each vote becoming increasingly more expensive. By increasing voter credit costs, this demonstrates an individual's support and interest in the particular decision. If money is used, it is eventually cycled back to the voters on a per capita basis. Both E. Glen Weyl and Steven Lalley conducted research in which they claim to demonstrate that this decision-making policy expedites efficiency as the number of voters increases. The simplified formula for how quadratic voting functions is:

cost to the voter = (number of votes)^2

The formula in smart contracts is slightly different:

cost to the voter = 2^0 + 2^1 + 2^2 + ... + 2^(number of votes - 1)
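
A sketch of this cost schedule in Solidity (a hypothetical helper, not the repo's code; note the total is a geometric sum, 2^n - 1 for n votes):

pragma solidity ^0.8.0;

contract VoteCostSketch {
    // Total cost of n votes: 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1.
    function costOfVotes(uint256 n) public pure returns (uint256) {
        return (1 << n) - 1;
    }

    // Marginal cost of the n-th vote: 2^(n-1).
    function costOfNthVote(uint256 n) public pure returns (uint256) {
        require(n > 0, "n starts at 1");
        return 1 << (n - 1);
    }
}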

Code Review

  1. voteTool quadratic vote
  2. FinancingTool quadratic fund

Common One

In both cases, the ID is obtained using a hash:

function hash(bytes memory _b) public pure returns (bytes32) {
    return keccak256(_b);
}

Since dynamic value types such as bytes and string are stored as their hash when used as indexed event parameters, the hash value is used directly as the id value of the vote.

Common Two

The address 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE means accepting ETH; any other address means accepting that token.

Both addresses are examples and can be modified as required.

VoteTool

Quadratic voting: the more votes, the more money needed.

Example: the first vote costs 1 ETH, the second 2 ETH, the third 4 ETH, the fourth 8 ETH...

Then the cost of the n-th vote is:

cost of n-th vote = 2^(n-1)

Voting with both ETH and tokens is not implemented for now.

  • addProposal(uint256 _proposal) public onlyOwner add proposal
  • expireProposal(uint256 _proposal) public onlyOwner make a proposal expire (voters will no longer be able to vote)
  • vote(uint256 _proposal, uint256 _n) public payable vote
  • withdraw() public onlyOwner withdraw eth to voter

FinancingTool

(figure: quadratic_funding.png)

Each user's vote on a proposal counts as the square root of the total amount they contributed (sqrt(c_i)).

  1. Each green square represents the amount of a donation. The area of the large square C can be interpreted as the total grant pool amount, and the yellow area S can be interpreted as an externally supported grant pool. Now, the amount each contributor contributes is c_i, so the area of the big square is C = (sum(sqrt(c_i)))^2, and the match amount is S = C − sum(c_i)
  2. At any one time, if there is more than one contributor, then C > sum(c_i)
  3. If S and the subsidy pool are not exactly the same, they can be allocated proportionally according to the yellow area
  4. Multiple small donations can result in a large yellow area, allowing projects to win more funding allocations
/**
    Formula interpretation provided by Harry:
    Green: 
    project A: 1*1 = 1
    project B: 
    user1: 4: length 2
    user2: 16: length 4, total 6
    user1: 12, total: 6 - 2 + sqrt(4+12) = 8
    project C: 2*2=4
    project D: 3*3=9

    Total bottom length : 1+8+2+3=14

    Total square area: 14*14 = 196

    match amount = 196-(1+32+4+9) = 150

    Finally: 
    A: 1 + 1/14 * 150 = 11.714285714
    B: 32 + 8/14 * 150 = 117.714285714
    C: 4 + 2/14 * 150 = 25.428571429
    D: 9 + 3/14 * 150 = 41.142857143
*/
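
A sketch of this matching math in Solidity (a hypothetical helper mirroring the worked example above, not the repo's code; it assumes contributions whose square roots are exact, as in the example, since truncated integer roots could make C smaller than the contribution sum, and real code must handle such rounding carefully):

pragma solidity ^0.8.0;

contract QfMatchSketch {
    // Babylonian integer square root.
    function sqrt(uint256 x) internal pure returns (uint256 y) {
        uint256 z = (x + 1) / 2;
        y = x;
        while (z < y) {
            y = z;
            z = (x / z + z) / 2;
        }
    }

    // A project's "bottom length" is the sum of square roots of its donations.
    function lengthOf(uint256[] memory donations) public pure returns (uint256 len) {
        for (uint256 k = 0; k < donations.length; k++) {
            len += sqrt(donations[k]);
        }
    }

    // lengths[i]: project i's bottom length; totals[i]: project i's donation sum.
    // Returns project i's share of the match S = C - sum(totals), where
    // C = (sum of lengths)^2. E.g. for lengths 1,8,2,3 and totals 1,32,4,9:
    // C = 196, S = 150, and project B receives 8/14 * 150.
    function matchFor(uint256[] memory lengths, uint256[] memory totals, uint256 i)
        public pure returns (uint256)
    {
        uint256 totalLength = 0;
        uint256 totalContrib = 0;
        for (uint256 k = 0; k < lengths.length; k++) {
            totalLength += lengths[k];
            totalContrib += totals[k];
        }
        uint256 C = totalLength * totalLength; // area of the big square
        uint256 S = C - totalContrib;          // subsidy (yellow area)
        return S * lengths[i] / totalLength;   // pro-rata by length
    }
}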

struct Proposal {
    uint256 name;// proposal id
    uint256 amount;// received amount
    uint256 voteCount;
    address owner;
    address[] userAddrArr;// voter address
    uint8 isEnd;// 0 not end, 1 ended
}

struct UserVote {// Each user will have one instance of each proposal
    uint256 count;
    uint256 amount;
}

Case: each person can donate to multiple proposals within a certain period of time. After a proposal completes, a certain amount of extra time is allocated for additional donations. Once completion is confirmed, the proposal owner can claim the donated amount.

  • addProposal(uint256 _proposal) public onlyOwner add a new proposal; the code p.owner = msg.sender is wrong and can be modified
  • vote(uint256 _proposal, uint256 _inAmount) public payable vote and donate
  • addExtraAmount(address _maker, uint256 _inAmount) public payable add money to the subsidy pool; _maker is the donator
  • withdrawProposal(uint256 _proposal) public checkEnd withdraw money after proposal ended
  • function getResult(uint256 _proposal) public view returns (uint256, uint256) View the current contribution amount/share of the proposal

Quick Start

install dependencies

yarn

compile contracts

npx hardhat compile

run test scripts

npx hardhat test

Reference

Link: https://github.com/Dapp-Learning-DAO/Dapp-Learning/tree/main/basic/26-quadratic-vote&gitcoin

#solidity #smartcontract #dapp #blockchain #dao

Build Smart Contracts for Quadratic Voting & Quadratic Funding