Best of Crypto

1648172160

Node Exporter: Exporter for Machine Metrics in Algorand

Node exporter

Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.

The WMI exporter is recommended for Windows users.

Collectors

There is varying support for collectors on each operating system. The tables below list all existing collectors and the supported systems.

Collectors are enabled by providing a --collector.<name> flag. Collectors that are enabled by default can be disabled by providing a --no-collector.<name> flag.

Enabled by default

Name | Description | OS
arp | Exposes ARP statistics from /proc/net/arp. | Linux
bcache | Exposes bcache statistics from /sys/fs/bcache/. | Linux
bonding | Exposes the number of configured and active slaves of Linux bonding interfaces. | Linux
boottime | Exposes system boot time derived from the kern.boottime sysctl. | Darwin, Dragonfly, FreeBSD, NetBSD, OpenBSD
conntrack | Shows conntrack statistics (does nothing if no /proc/sys/net/netfilter/ present). | Linux
cpu | Exposes CPU statistics. | Darwin, Dragonfly, FreeBSD, Linux
diskstats | Exposes disk I/O statistics. | Darwin, Linux
edac | Exposes error detection and correction statistics. | Linux
entropy | Exposes available entropy. | Linux
exec | Exposes execution statistics. | Dragonfly, FreeBSD
filefd | Exposes file descriptor statistics from /proc/sys/fs/file-nr. | Linux
filesystem | Exposes filesystem statistics, such as disk space used. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD
hwmon | Exposes hardware monitoring and sensor data from /sys/class/hwmon/. | Linux
infiniband | Exposes network statistics specific to InfiniBand and Intel OmniPath configurations. | Linux
ipvs | Exposes IPVS status from /proc/net/ip_vs and stats from /proc/net/ip_vs_stats. | Linux
loadavg | Exposes load average. | Darwin, Dragonfly, FreeBSD, Linux, NetBSD, OpenBSD, Solaris
mdadm | Exposes statistics about devices in /proc/mdstat (does nothing if no /proc/mdstat present). | Linux
meminfo | Exposes memory statistics. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD
netclass | Exposes network interface info from /sys/class/net/. | Linux
netdev | Exposes network interface statistics such as bytes transferred. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD
netstat | Exposes network statistics from /proc/net/netstat. This is the same information as netstat -s. | Linux
nfs | Exposes NFS client statistics from /proc/net/rpc/nfs. This is the same information as nfsstat -c. | Linux
nfsd | Exposes NFS kernel server statistics from /proc/net/rpc/nfsd. This is the same information as nfsstat -s. | Linux
sockstat | Exposes various statistics from /proc/net/sockstat. | Linux
stat | Exposes various statistics from /proc/stat. This includes boot time, forks and interrupts. | Linux
textfile | Exposes statistics read from local disk. The --collector.textfile.directory flag must be set. | any
time | Exposes the current system time. | any
timex | Exposes selected adjtimex(2) system call stats. | Linux
uname | Exposes system information as provided by the uname system call. | Linux
vmstat | Exposes statistics from /proc/vmstat. | Linux
wifi | Exposes WiFi device and station statistics. | Linux
xfs | Exposes XFS runtime statistics. | Linux (kernel 4.4+)
zfs | Exposes ZFS performance statistics. | Linux

Disabled by default

Name | Description | OS
buddyinfo | Exposes statistics of memory fragments as reported by /proc/buddyinfo. | Linux
devstat | Exposes device statistics. | Dragonfly, FreeBSD
drbd | Exposes Distributed Replicated Block Device statistics (up to version 8.4). | Linux
interrupts | Exposes detailed interrupts statistics. | Linux, OpenBSD
ksmd | Exposes kernel and system statistics from /sys/kernel/mm/ksm. | Linux
logind | Exposes session counts from logind. | Linux
meminfo_numa | Exposes memory statistics from /proc/meminfo_numa. | Linux
mountstats | Exposes filesystem statistics from /proc/self/mountstats. Exposes detailed NFS client statistics. | Linux
ntp | Exposes local NTP daemon health to check time. | any
qdisc | Exposes queuing discipline statistics. | Linux
runit | Exposes service status from runit. | any
supervisord | Exposes service status from supervisord. | any
systemd | Exposes service and system status from systemd. | Linux
tcpstat | Exposes TCP connection status information from /proc/net/tcp and /proc/net/tcp6. (Warning: the current version has potential performance issues in high load situations.) | Linux

Textfile Collector

The textfile collector is similar to the Pushgateway, in that it allows exporting of statistics from batch jobs. It can also be used to export static metrics, such as what role a machine has. The Pushgateway should be used for service-level metrics. The textfile module is for metrics that are tied to a machine.

To use it, set the --collector.textfile.directory flag on the Node exporter. The collector will parse all files in that directory matching the glob *.prom using the text format.

To atomically push completion time for a cron job:

echo my_batch_job_completion_time $(date +%s) > /path/to/directory/my_batch_job.prom.$$
mv /path/to/directory/my_batch_job.prom.$$ /path/to/directory/my_batch_job.prom

To statically set roles for a machine using labels:

echo 'role{role="application_server"} 1' > /path/to/directory/role.prom.$$
mv /path/to/directory/role.prom.$$ /path/to/directory/role.prom
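The same write-then-rename pattern works for metrics that carry HELP/TYPE metadata. A minimal sketch, using a hypothetical metric name and a temporary directory in place of the configured textfile directory:

```shell
# Hypothetical metric and directory, shown only to illustrate the pattern.
dir=$(mktemp -d)
tmp="$dir/deploy_info.prom.$$"
{
  echo '# HELP node_deploy_timestamp_seconds Unix time of the last deploy.'
  echo '# TYPE node_deploy_timestamp_seconds gauge'
  echo "node_deploy_timestamp_seconds $(date +%s)"
} > "$tmp"
# mv within the same filesystem is atomic, so the collector never
# observes a half-written file.
mv "$tmp" "$dir/deploy_info.prom"
cat "$dir/deploy_info.prom"
```

Pointing --collector.textfile.directory at that directory would expose the metric on the next scrape.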

Filtering enabled collectors

The node_exporter will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.

For advanced use, the node_exporter can be passed an optional list of collectors to filter metrics. The collect[] parameter may be used multiple times. In the Prometheus configuration you can use this syntax under the scrape config:

  params:
    collect[]:
      - foo
      - bar

This can be useful for having different Prometheus servers collect specific metrics from nodes.
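As a sketch, a scrape config using this filter might look like the following; the job name and target are hypothetical:

```yaml
scrape_configs:
  - job_name: 'node_cpu_mem'                  # hypothetical job name
    static_configs:
      - targets: ['node1.example.com:9100']   # hypothetical target
    params:
      collect[]:                              # only these collectors' metrics are returned
        - cpu
        - meminfo
```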

Building and running

Prerequisites:

Building:

go get github.com/algorand/node_exporter
cd ${GOPATH-$HOME/go}/src/github.com/algorand/node_exporter
make
./node_exporter <flags>

To see all available configuration flags:

./node_exporter -h

Running tests

make test

Using Docker

The node_exporter is designed to monitor the host system. Deploying it as a Docker container is not recommended, because it requires access to the host system. Be aware that any non-root mount points you want to monitor will need to be bind-mounted into the container. If you do start a container for host monitoring, pass the --path.rootfs argument. This argument must match the path at which the host root is bind-mounted; the node_exporter will use path.rootfs as a prefix when accessing the host filesystem.

docker run -d \
  --net="host" \
  --pid="host" \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter \
  --path.rootfs /host

On some systems, the timex collector requires an additional Docker flag, --cap-add=SYS_TIME, in order to access the required syscalls.

Using a third-party repository for RHEL/CentOS/Fedora

There is a community-supplied COPR repository which closely follows upstream releases.

Download Details:
Author: algorand
Source Code: https://github.com/algorand/node_exporter
License: Apache-2.0 License

#algorand  #blockchain  #cryptocurrency #node #go #golang 


1648164780

Wallet Connect Bridge Automation for Algorand

This deploys https://github.com/aktionariat/walletconnect-bridge.git to our EKS clusters. While we are sharing this automation for others to benefit, the Algorand team does NOT make warranties regarding the stability / reliability of the referenced bridge implementation. Please research and make decisions around use at your own discretion.

For 2.0 support, see the v2.0 branch. It uses https://github.com/WalletConnect/walletconnect-monorepo.git

Scripts

scripts/build.sh

This builds a docker image for the walletconnect bridge. It currently versions through a timestamp and will produce docker images such as:

walletconnect/relay-server:latest
walletconnect/relay-server:latest-java
walletconnect/relay-server:1633462163-java

scripts/push.sh

$ scripts/push.sh -h
Usage: scripts/push.sh <-i IMAGE> [-r AWS_REGION] [-h]

This script pushes images to ECR. It determines which AWS account the shell it runs in has credentials for, creates the ECR repo if it does not already exist, and then pushes the image with the appropriate tag.

scripts/push.sh -i walletconnect/relay-server:1633456298-java

scripts/deploy.sh

$ scripts/deploy.sh -h
Usage: scripts/deploy.sh [-l VERSION] [-r AWS_REGION] [-n NAMESPACE] [-c CLASSIFIER] [-h]

This script will deploy to whichever Kubernetes cluster the shell it runs in has access to. When ingress is enabled, it is very opinionated: it expects the nginx ingress controller, external-dns, and Let's Encrypt. If you would like to use the ingress settings shown here, make sure that the load balancer supporting the ingress controller can handle the timeout.

scripts/status.sh

scripts/status.sh -h
Usage: scripts/status.sh [-n NAMESPACE] [-c CLASSIFIER] [-h]

This script shows some data about a deployed service.

scripts/status.sh
VERSION: 1633456298-java
ENDPOINT: wss://wallet-connect.default.dev.example.com/
NAME                                                     READY   STATUS    RESTARTS   AGE
wallet-connect-bridge-default-default-59669996d4-vr6bd   1/1     Running   0          15m

Test

Run DAPP locally

Clone the algorand example dapp repository.

git clone https://github.com/algorand/walletconnect-example-dapp

Edit the app configuration to use your endpoint; you can find the endpoint with the status script shown above. To do this, change the bridge variable in src/App.tsx: https://github.com/algorand/walletconnect-example-dapp/blob/master/src/App.tsx#L179

Next, start the app. You may get a prompt asking to allow your shell to use Chrome; please approve it.

npm install
npm run start

After following these steps you should see something like the following.

dapp-example

Wallet Connect Flow

Once you have a demo app running and configured to use your wallet connect bridge endpoint, you can try to register it with a wallet.

Navigate to the dapp in your browser, probably running in http://localhost:3000.

dapp-example

Click "Connect to WalletConnect" and you will see a QR code.

copy-demo-QR-code

This QR code is what you need to use to integrate the demo dapp with your wallet. Copy the QR code and then you can navigate to https://test.walletconnect.org/ and test it out.

demo-wallet-home

Paste the copied QR code where it says "Paste wc: url"

Check the console logs in dev tools if you run into any issues while working with this site.

demo-wallet-prompt

You will be prompted to either accept or reject integrating with your demo dapp. Click "Approve"

demo-wallet-success

If you see the following screen, your integration has been successful!

Download Details:
Author: algorand
Source Code: https://github.com/algorand/walletconnect-automation
License: MIT License

#algorand  #blockchain  #cryptocurrency 


1648157400

Ff Zeroize: A fork Of Zkcrypto/ff Crate with Zeroize Feature

ff-zeroize

  • ff-zeroize is a temporary crate that enables the zeroize feature for the ff crate.
  • ff is a finite field library written in pure Rust, with no unsafe{} code.

Disclaimers

  • This library does not provide constant-time guarantees.

Usage

Add the ff crate to your Cargo.toml:

[dependencies]
ff_zeroize = "0.6.1"

The ff crate contains Field, PrimeField, PrimeFieldRepr and SqrtField traits. See the documentation for more.

#![derive(PrimeField)]

If you need an implementation of a prime field, this library also provides a procedural macro that expands into an efficient implementation of a prime field when supplied with the modulus. PrimeFieldGenerator must be an element of Fp of order p-1 that is also a quadratic nonresidue.

First, enable the derive crate feature:

[dependencies]
ff_zeroize = { version = "0.6.1", features = ["derive"] }

And then use the macro like so:

extern crate rand;
#[macro_use]
extern crate ff;

#[derive(PrimeField)]
#[PrimeFieldModulus = "52435875175126190479447740508185965837690552500527637822603658699938581184513"]
#[PrimeFieldGenerator = "7"]
struct Fp(FpRepr);

And that's it! Fp now implements Field and PrimeField. Fp will also implement SqrtField if supported. The library implements FpRepr itself and derives PrimeFieldRepr for it.

Download Details:
Author: algorand
Source Code: https://github.com/algorand/ff-zeroize
License: View license

#algorand  #blockchain  #cryptocurrency #rust 


1648150020

Examples for the 2019 Derivhack Hackathon

Introduction

The goal of the ISDA Common Domain Model (CDM) is to allow financial institutions to have a coherent representation of financial instruments and events. This document shows how institutions can use the CDM and the Algorand blockchain to maintain separately owned but coherent financial databases with the following properties:

  1. Coherency: All institutions participating in a trade agree on the digital representation of that trade at any point in time.
  2. Privacy: The details of the trade are only revealed to the institutions which participate in it. Any other agent cannot learn anything about the trade.
  3. Lineage: Any modification in the state of a trade can refer to the previous state, generating a traceable lineage for the history of that trade.
  4. Ease of Use: Because the Algorand blockchain is a permissionless blockchain, institutions can interact with it using the software of their choice, without needing to set up their own distributed system. Algorand provides easy-to-use APIs that read from and write to the blockchain, and SDKs in Python, Go, Java and Javascript.

The Algorand Blockchain

The Algorand blockchain can process 1000 transactions per second with a latency of less than 5 seconds and ensures transaction finality with point-of-sale speed. The Algorand blockchain is a permissionless blockchain with hundreds of independently operating nodes distributed around the world. The Algorand blockchain allows developers to create their applications without having to set up their own distributed systems. In addition, Algorand provides extensive documentation, and provides SDKs in four languages (Go, Python, Java and Javascript) to interact with the blockchain.

Figure 1: Nodes running the Algorand client software around the world

Installing, Compiling and Running the Code

Dependencies

Running the code in this repository requires that you have

  1. A Unix-based OS such as Mac OS X or Linux
  2. Java
  3. Maven

Java and Maven Installation

OS X

These are bash scripts which install Java and Maven and set the correct paths to use them. These scripts are in the INSTALL folder and should be run in the following order

  1. install_brew.sh if the user does not have Homebrew installed (an OS X utility for installing programs)
  2. install_java.sh if the user does not have Java installed. This installs the OpenJDK
  3. install_maven.sh if the user does not have Maven installed
  4. install_mongo.sh if the user does not have MongoDB installed

Ubuntu

These are bash scripts which install Java and Maven and set the correct paths to use them. These scripts are in the INSTALL folder and should be run in the following order

  1. install_java_for_ubuntu.sh if the user does not have Java installed. This installs the OpenJDK
  2. install_maven_for_ubuntu.sh if the user does not have Maven installed
  3. install_mongo_for_ubuntu.sh if the user does not have MongoDB installed

Java library Installation

The main directory contains a pom.xml file which Maven uses to download Java libraries that the code depends on, including the Algorand Java SDK, and the Java implementation of the ISDA CDM.

The code has been tested on a computer running OS X Version 10.14.5, OpenJDK 13, and Maven version 3.6.1. and on an AWS instance ("4.15.0-1044-aws") running Ubuntu 18.04.2 LTS, OpenJDK 11 and Maven version 3.6.0

Compilation

A settings.xml file is provided in the project root directory; use it to install dependencies as follows:

mvn -s settings.xml clean install

You can also run

sh compile.sh

from the root directory.

Running the Code

To run the example code, type

sh run.sh 

in the root directory.

This script will start a MongoDB service and run the examples for the first three use cases in the hackathon. Ubuntu users need to uncomment the following line to run the mongo service on Ubuntu.

##UNCOMMENT THIS LINE FOR UBUNTU
# bash start_mongo_on_ubuntu.sh 

(OPTIONAL): Starting and Stopping MongoDB

The code needs a MongoDB service running to persist some information. The run.sh script starts this service automatically if it is not already running; however, we have also provided scripts to start and stop it manually.

To run the mongodb service, run

sh start_mongo.sh

To stop the mongodb service, run

sh stop_mongo.sh

Example Use Cases

Execution

In the Derivhack Hackathon, users are given a trade execution file and need to

  1. Load the JSON file into their system
  2. Create users in their distributed ledger corresponding to the parties in the execution
  3. Create a report of the execution

In this example, we use the Algorand blockchain to ensure different parties have consistent versions of the file, while keeping their datastores private. The information stored in the chain includes the global key of the execution, its lineage, and the file path where the user stored the Execution JSON object in their private data store.

The following function, from the class CommitExecution.java, reads a CDM event and creates Algorand accounts for all parties in the event. It then gets the executing party (Client 1's broker) and has that party send details of the execution to all other parties on the Algorand blockchain.

 public  class CommitExecution {

    public static void main(String [] args) throws Exception{
        
        //Read the input arguments and read them into files
        String fileName = args[0];
        String fileContents = ReadAndWrite.readFile(fileName);

         //Read the event file into a CDM object using the Rosetta object mapper
        ObjectMapper rosettaObjectMapper = RosettaObjectMapper.getDefaultRosettaObjectMapper();
        Event event = rosettaObjectMapper
                .readValue(fileContents, Event.class);
        
        //Create Algorand Accounts for all parties
        // and persist accounts to filesystem/database
        List<Party> parties = event.getParty();
        DB mongoDB = MongoUtils.getDatabase("users");
        parties.parallelStream()
                .map(party -> User.getOrCreateUser(party,mongoDB))
                .collect(Collectors.toList());

        //Get the execution
        Execution execution = event
                                .getPrimitive()
                                .getExecution().get(0)
                                .getAfter()
                                .getExecution();


        // Get the executing party  reference
        String executingPartyReference = execution.getPartyRole()
                .stream()
                .filter(r -> r.getRole() == PartyRoleEnum.EXECUTING_ENTITY)
                .map(r -> r.getPartyReference().getGlobalReference())
                .collect(MoreCollectors.onlyElement());

        // Get the executing party
        Party executingParty = event.getParty().stream()
                .filter(p -> executingPartyReference.equals(p.getMeta().getGlobalKey()))
                .collect(MoreCollectors.onlyElement());

        // Get all other parties
        List<Party> otherParties =  event.getParty().stream()
                .filter(p -> !executingPartyReference.equals(p.getMeta().getGlobalKey()))
                .collect(Collectors.toList());

        // Find or create the executing user
        User executingUser = User.getOrCreateUser(executingParty, mongoDB);
       
        //Send all other parties the contents of the event as a set of blockchain transactions
        List<User> users = otherParties.
                            parallelStream()
                            .map(p -> User.getOrCreateUser(p,mongoDB))
                            .collect(Collectors.toList());

        List<Transaction> transactions = users
                                            .parallelStream()
                                            .map(u->executingUser.sendEventTransaction(u,event,"execution"))
                                            .collect(Collectors.toList());
        
    }
}

The corresponding shell command to execute this function with the Block trades file is

##Commit the execution file to the blockchain
mvn -s settings.xml exec:java -Dexec.mainClass="com.algorand.demo.CommitExecution" \
 -Dexec.args="./Files/UC1_block_execute_BT1.json" -e -q

Allocation

The second use case for Derivhack is the allocation of trades. That is, the block trade execution given in use case 1 will be allocated among multiple accounts. Participants are also given a JSON CDM file specifying the allocation (https://github.com/algorand/DerivhackExamples/blob/master/Files/UC2_allocation_execution_AT1.json). Since allocations are CDM events, the same logic applies as in the Execution use case. To commit the allocation event to the blockchain, participants can use the following shell command:

mvn -s settings.xml exec:java -Dexec.mainClass="com.algorand.demo.CommitAllocation" \
 -Dexec.args="./Files/UC2_allocation_execution_AT1.json" -e -q

Affirmation

The third use case is the affirmation of the trade by the clients. Participants can look at the classes CommitAffirmation.java (https://github.com/algorand/DerivhackExamples/blob/master/src/main/java/com/algorand/demo/CommitAffirmation.java) and AffirmImpl.java (https://github.com/algorand/DerivhackExamples/blob/master/src/main/java/com/algorand/demo/AffirmationImpl.java) for examples of how to derive the affirmation of a trade from its allocation.

In the affirmation step, the client produces a CDM affirmation from the Allocation Event, and sends the affirmation to the broker over the Algorand Chain.


public class CommitAffirmation {
public static void main(String[] args){

        //Load the database to lookup users
        DB mongoDB = MongoUtils.getDatabase("users");

        //Load a file with client global keys
        String allocationFile = args[0];
        String allocationCDM = ReadAndWrite.readFile(allocationFile);
        ObjectMapper rosettaObjectMapper = RosettaObjectMapper.getDefaultRosettaObjectMapper();
        Event allocationEvent = null;
            try{
                allocationEvent = rosettaObjectMapper
                                    .readValue(allocationCDM, Event.class);
            }
            catch(java.io.IOException e){
                e.printStackTrace();
            }
                
       
        List<Trade> allocatedTrades = allocationEvent.getPrimitive().getAllocation().get(0).getAfter().getAllocatedTrade();
        //Keep track of the trade index
        int tradeIndex = 0;

        //Collect the affirmation transaction id and broker key in a file
        String result = "";
        //For each trade...
        for(Trade trade: allocatedTrades){

        //Get the broker that we need to send the affirmation to
        String brokerReference = trade.getExecution().getPartyRole()
            .stream()
            .filter(r -> r.getRole() == PartyRoleEnum.EXECUTING_ENTITY)
            .map(r -> r.getPartyReference().getGlobalReference())
            .collect(MoreCollectors.onlyElement());

            User broker = User.getUser(brokerReference,mongoDB);

        //Get the client reference for that trade
        String clientReference = trade.getExecution()
                                        .getPartyRole()
                                        .stream()
                                        .filter(r-> r.getRole()==PartyRoleEnum.CLIENT)
                                        .map(r->r.getPartyReference().getGlobalReference())
                                        .collect(MoreCollectors.onlyElement());
                
        // Load the client user, with algorand passphrase
        User user = User.getUser(clientReference,mongoDB);
        String algorandPassphrase = user.algorandPassphrase;

        // Confirm the user has received the global key of the allocation from the broker
        String receivedKey = AlgorandUtils.readEventTransaction( algorandPassphrase, allocationEvent.getMeta().getGlobalKey());
        assert receivedKey.equals(allocationEvent.getMeta().getGlobalKey()) : "Have not received allocation event from broker";
            //Compute the affirmation
            Affirmation affirmation = new AffirmImpl().doEvaluate(allocationEvent,tradeIndex).build();
                    
             //Send the affirmation to the broker
            Transaction transaction = 
                        user.sendAffirmationTransaction(broker, affirmation);
                    
            result += transaction.getTx() + "," + brokerReference +"\n";
                    
                
            tradeIndex = tradeIndex + 1;
        }
        try{
           ReadAndWrite.writeFile("./Files/AffirmationOutputs.txt", result);
        }
        catch(Exception e){
            e.printStackTrace();
        }
    }

}

Download Details:
Author: algorand
Source Code: https://github.com/algorand/DerivhackExamples
License: MIT License

#algorand  #blockchain  #cryptocurrency #java 


1648142640

Getting Started with Reach Auction in Algorand

reach-auction

Getting Started With Reach

Install Reach

Reach is designed to work on POSIX systems with make, Docker, and Docker Compose installed. The best way to install Docker on Mac and Windows is with Docker Desktop.

To confirm everything is installed try to run the following three commands and see no errors

$ make --version
$ docker --version
$ docker-compose --version

If you’re using Windows, consult the guide to using Reach on Windows.

Once you've confirmed that the Reach prerequisites are installed, choose a directory for this project such as:

$ mkdir -p ~/reach && cd ~/reach

Clone the Reach Auction demo application

Clone the repository using the following commands.

git clone https://github.com/algorand/reach-auction.git 

Navigate to the project folder

cd reach-auction

Next, download Reach by running

$ curl https://docs.reach.sh/reach -o reach ; chmod +x reach

Confirm the download worked by running

$ ./reach version

Since Reach is Dockerized, the images it uses need to be downloaded the first time it is used. This will happen automatically, but it can also be done manually now by running

$ ./reach update

You’ll know that everything is in order if you can run

$ ./reach compile --help

To determine which version is installed, run

$ ./reach hashes

Output should look similar to:

reach: fb449c94
reach-cli: fb449c94
react-runner: fb449c94
rpc-server: fb449c94
runner: fb449c94
devnet-algo: fb449c94
devnet-cfx: fb449c94
devnet-eth: fb449c94

All of the hashes listed should be the same; visit the #releases channel on the Reach Discord Server to check them against the current hashes.

More information: Detailed Reach install instructions can be found in the docs.

Download Details:
Author: algorand
Source Code: https://github.com/algorand/reach-auction
License:

#algorand  #blockchain  #cryptocurrency 


1648134900

Go Sumhash: A Go Implementation Of Algorand’s Subset-sum Hash Function

Sumhash

A Go implementation of Algorand’s subset-sum hash function. The library exports the subset sum hash function via a hash.Hash interface.

Install

go get github.com/algorand/go-sumhash

Alternatively, the same can be achieved by adding the import to a package:

import "github.com/algorand/go-sumhash"

and run go get without parameters.

Usage

Construct a sumhash instance with a block size of 512.

package main

import (
    "fmt"

    "github.com/algorand/go-sumhash"
)

func main() {
    h := sumhash.New512(nil)
    input := []byte("sumhash input")
    _, _ = h.Write(input)

    sum := h.Sum(nil)
    fmt.Printf("subset sum hash value: %X", sum)
}

Testing

go test ./...

Spec

The specification of the function as well as the security parameters can be found here

Download Details:
Author: algorand
Source Code: https://github.com/algorand/go-sumhash
License: MIT License

#algorand  #blockchain  #cryptocurrency #go #golang 


1648127580

C Sumhash: Algorand's Subset-sum Hash Function Implementation in C

C-Sumhash

Algorand's subset-sum hash function implementation in C.

Build And Tests

git clone https://github.com/algorand/c-sumhash
make

The make command builds the library and runs the tests. The output can be found in the build directory:

./build/libsumhash.a

Usage

#include <stdio.h>
#include <string.h>

#include "include/sumhash512.h"

int main() {
    char* input = "Algorand";
    sumhash512_state hash;
    sumhash512_init(&hash);
    sumhash512_update(&hash, (uint8_t*)input, strlen(input));
    uint8_t output [SUMHASH512_DIGEST_SIZE];
    sumhash512_final(&hash, output);

    return 0;
}

Simple API usage:

#include <stdio.h>
#include <string.h>

#include "include/sumhash512.h"

int main() {
    char* input = "Algorand";
    uint8_t output [SUMHASH512_DIGEST_SIZE];
    sumhash512(output, (uint8_t*)input, strlen(input));

    return 0;
}

The include/sumhash512.h header contains more information about how to use these functions.

Spec

The specification of the function as well as the security parameters can be found here

Download Details:
Author: algorand
Source Code: https://github.com/algorand/c-sumhash
License: MIT License

#algorand  #blockchain  #cryptocurrency #c 


1648120200

Algorand Node Sandbox

Algorand Sandbox

This is a fast way to create and configure an Algorand development environment with Algod and Indexer.

Docker Compose MUST be installed. Instructions.

On a Windows machine, Docker Desktop comes with the necessary tools. Please see the Windows section in getting started for more details.

Warning: Algorand Sandbox is not meant for production environments and should not be used to store secure Algorand keys. Updates may reset all the data and keys that are stored.

Usage

Use the sandbox command to interact with the Algorand Sandbox.

sandbox commands:
  up    [config]  -> start the sandbox environment.
  down            -> tear down the sandbox environment.
  reset           -> reset the containers to their initial state.
  clean           -> stops and deletes containers and data directory.
  test            -> runs some tests to demonstrate usage.
  enter [algod||indexer||indexer-db]
                  -> enter the sandbox container.
  version         -> print binary versions.
  copyTo <file>   -> copy <file> into the algod container. Useful for offline transactions & LogicSigs plus TEAL work.
  copyFrom <file> -> copy <file> from the algod container. Useful for offline transactions & LogicSigs plus TEAL work.

algorand commands:
  logs            -> stream algorand logs with the carpenter utility.
  status          -> get node status.
  goal (args)     -> run goal command like 'goal node status'.
  tealdbg (args)  -> run tealdbg command to debug program execution.

special flags for 'up' command:
  -v|--verbose           -> display verbose output when starting sandbox.
  -s|--skip-fast-catchup -> skip catchup when connecting to real network.
  -i|--interactive       -> start docker-compose in interactive mode.

Sandbox creates the following API endpoints:

  • algod:
    • address: http://localhost:4001
    • token: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  • kmd:
    • address: http://localhost:4002
    • token: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  • indexer:
    • address: http://localhost:8980
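
As an illustration, the algod endpoint above can be queried with nothing but the Python standard library; algod authenticates requests via the X-Algo-API-Token header. This is a minimal sketch assuming the sandbox is running with its default token:

```python
import urllib.request

ALGOD_ADDRESS = "http://localhost:4001"
ALGOD_TOKEN = "a" * 64  # default sandbox token shown above

def algod_request(path):
    """Build a GET request against the sandbox algod REST API."""
    return urllib.request.Request(
        ALGOD_ADDRESS + path,
        headers={"X-Algo-API-Token": ALGOD_TOKEN},
    )

req = algod_request("/v2/status")
# urllib.request.urlopen(req) would return the node status JSON
# once the sandbox is up; here we only build the request.
```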

Getting Started

Ubuntu and macOS

Make sure the docker daemon is running and docker-compose is installed.

Open a terminal and run:

git clone https://github.com/algorand/sandbox.git

in whatever local directory the sandbox should reside. Then:

cd sandbox
./sandbox up

This will run the sandbox shell script with the default configuration. See the Basic Configuration for other options.

Note for Ubuntu: You may need to alias docker to sudo docker or follow the steps in https://docs.docker.com/install/linux/linux-postinstall so that a non-root user can use the docker command.

Run the test command for examples of how to interact with the environment:

./sandbox test

Windows

Note: Be sure to use the latest version of Windows 10. Older versions may not work properly.

Note: While installing the following programs, several restarts may be required for windows to recognize the new software correctly.

Option 1: Using WSL 2

The installation instructions for Docker Desktop contain some of this but are repeated here.

  1. In order to work with Docker Desktop on Windows, WSL 2 is a prerequisite; install instructions are available here.
  2. Install Docker Desktop using the instructions available here.
  3. We recommend using the official Windows Terminal, available in the app store here.
  4. Install whatever distribution of Linux desired.
  5. Open the Windows Terminal with the distribution installed in the previous step and follow the instruction for Ubuntu and macOS above.

Option 2: Using Git for Windows / MSYS2

  1. Install Git for Windows: https://gitforwindows.org/
  2. Install and launch Docker for Windows: https://docs.docker.com/get-docker
  3. Open "Git Bash" and follow the instruction for Ubuntu and macOS above, in the "Git Bash" terminal.

Troubleshooting

  • If you see
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'.

check that you are using the latest versions of: Docker, Git for Windows, and Windows 10.

If this does not solve the issue, open an issue listing the versions of all the software used, as well as all the commands typed.

  • If you see
Error response from daemon: open \\.\pipe\docker_engine_linux: The system cannot find the file specified.

check that Docker is running.

Basic Configuration

Sandbox supports two primary modes of operation. By default, a private network will be created, which is only available from the local environment. There are also configurations available for the public networks which will attempt to connect to one of the long running Algorand networks and allow interaction with it.

To specify which configuration to run:

./sandbox up $CONFIG

Where $CONFIG is specified as one of the configurations in the sandbox directory.

For example to run a dev mode network, run:

./sandbox up dev

To switch the configuration:

./sandbox down
./sandbox clean
./sandbox up $NEW_CONFIG

Private Network

If no configuration is specified, the sandbox will be started with the release configuration, which is a private network. The other private network configurations are those not suffixed with net, namely beta, dev, and nightly.

The private network environment creates and funds a number of accounts in the algod container's local kmd, ready to use for testing transactions. These accounts can be reviewed using ./sandbox goal account list.

Private networks also include an Indexer service configured to synchronize against the private network. Because it doesn't require catching up to one of the long running networks it also starts very quickly.

The dev configuration runs a private network in dev mode. In this mode, every transaction sent to the node automatically generates a new block, rather than waiting for a new round in real time. This is extremely useful for fast e2e testing of an application.

Public Network

The mainnet, testnet, betanet, and devnet configurations configure the sandbox to connect to one of those long running networks. Once started, it will automatically attempt to catch up to the latest round. Catchup tends to take a while, and a progress bar is displayed to illustrate the progress.

Due to technical limitations, this configuration does not contain preconfigured accounts that can be used to transact immediately, and Indexer is not available. A new wallet and accounts may be created or imported at will, using the goal wallet new command to create a wallet and the goal account import or goal account new commands to add accounts.

Note: A newly created account will not be funded and won't be able to submit transactions until it is. If a testnet configuration is used, please visit the TestNet Dispenser to fund the newly created account.

Advanced configurations

The sandbox environment is completely configured using the config.* files in the root of this repository. For example, the default configuration for config.nightly is:

export ALGOD_CHANNEL="nightly"
export ALGOD_URL=""
export ALGOD_BRANCH=""
export ALGOD_SHA=""
export ALGOD_BOOTSTRAP_URL=""
export ALGOD_GENESIS_FILE=""
export INDEXER_URL="https://github.com/algorand/indexer"
export INDEXER_BRANCH="develop"
export INDEXER_SHA=""
export INDEXER_DISABLED=""

Indexer is always built from source since it can be done quickly. For most configurations, algod will be installed using our standard release channels, but building from source is also available by setting the git URL, branch, and optionally a specific SHA commit hash.

The up command looks for the config extension based on the argument provided. With a custom configuration pointed to a fork, the sandbox will start using the fork:

export ALGOD_CHANNEL=""
export ALGOD_URL="https://github.com/<user>/go-algorand"
export ALGOD_BRANCH="my-test-branch"
export ALGOD_SHA=""
export ALGOD_BOOTSTRAP_URL=""
export ALGOD_GENESIS_FILE=""
export INDEXER_URL="https://github.com/<user>/go-algorand"
export INDEXER_BRANCH="develop"
export INDEXER_SHA=""
export INDEXER_DISABLED=""

Working with files

Some Algorand commands require a file as input, for example when working with TEAL programs. In other cases, such as working with logic signatures or offline transactions, the output of a LogicSig or transaction may be needed.

To stage a file use the copyTo command. The file will be placed in the algod data directory, which is where sandbox executes goal. This means the files can be used without specifying their full path.

To copy a file from the sandbox (algod instance), use the copyFrom command. The file will be copied to the sandbox directory on the host filesystem.

copyTo example

These commands will stage two TEAL programs, then use them in a goal command:

~$ ./sandbox copyTo approval.teal
~$ ./sandbox copyTo clear.teal
~$ ./sandbox goal app create --approval-prog approval.teal --clear-prog clear.teal --creator YOUR_ACCOUNT  --global-byteslices 1 --global-ints 1 --local-byteslices 1 --local-ints 1

copyFrom example

These commands will create and copy a signed logic transaction file, created by goal, to be sent or communicated off-chain (e.g. by email or as a QR code) and submitted elsewhere:

~$ ./sandbox goal clerk send -f <source-account> -t <destination-account> --fee 1000 -a 1000000 -o "unsigned.txn"
~$ ./sandbox goal clerk sign --infile unsigned.txn --outfile signed.txn
~$ ./sandbox copyFrom "signed.txn"

Errors

If something goes wrong, check the sandbox.log file for details.

Debugging for teal developers

For detailed information on how to debug smart contracts and use the tealdbg CLI, please consult Algorand Development Portal :: Smart Contract Debugging.

The Algorand smart contract debugging process uses the tealdbg command line of the algod instance (the algod container in the sandbox).

Note: Always use tealdbg with the --listen 0.0.0.0 or --listen [IP ADDRESS] flag if access to tealdbg is needed from outside the algod docker container!

tealdbg examples

Debugging smart contract with Chrome Developer Tools (CDT): ~$ ./sandbox tealdbg debug ${TEAL_PROGRAM} -f cdt -d dryrun.json

Debugging smart contract with Web Interface (primal web UI) ~$ ./sandbox tealdbg debug ${TEAL_PROGRAM} -f web -d dryrun.json

The debugging endpoint port (default 9392) is forwarded directly to the host machine and can be used by Chrome Dev Tools for debugging Algorand TEAL smart contracts (go to chrome://inspect/ and configure port 9392 before using).

Note: If a port other than the default is needed, run tealdbg with --port YOUR_PORT, then modify the docker-compose.yml file and change all occurrences of the mapped 9392 port to the desired one.

ADVANCED: Sandbox Interactive Debugging with VSCode's Remote - Container Extension

For those looking to develop or extend algod or indexer it's highly recommended to test and debug using a realistic environment. Being able to interactively debug code with breakpoints and introspect the stack as the Algorand daemon communicates with a live network is quite useful. Here are steps that you can take if you want to run an interactive debugger with an indexer running on the sandbox. Analogous instructions work for algod as well.

Before starting, make sure you have VS-Code and have installed the Remote - Containers Extension.

  1. Inside docker-compose.yml add the key/val privileged: true under the indexer: service
  2. Start the sandbox with ./sandbox up YOUR_CONFIG and wait for it to be fully up and running
  • you may need to run a ./sandbox clean first
  • you can verify by seeing healthy output from ./sandbox test
  3. In VS Code, go to the Command Palette (on a Mac it's SHIFT-COMMAND-P) and enter Remote - Containers: Attach to Running Container
  4. The container of interest, e.g. /algorand-sandbox-indexer, should pop up; choose it
  5. The first time you attach to a container, you'll get the option of choosing which top-level directory inside the container to attach the file browser to. The default HOME (/opt/indexer in the case of indexer) is usually your best choice
  6. Next, VS Code should auto-detect that you're running a Go-based project and suggest various extensions to add into the container environment. You should do this
  7. Now navigate to the file you'd like to debug (e.g. api/handlers.go) and add a breakpoint as you usually would
  8. You'll need to identify the PID of the indexer process so you can attach to it. Choose Terminal -> New Terminal from the menu and run ps | egrep "daemon|PID". Note the resulting PID
  9. Now start the debugger with F5. It should give you the option to attach to a process and generate a launch.json with processId: 0 for you
  10. Modify the launch.json with the correct processId. An example launch.json is provided below
  11. Now you're ready to rumble! If you hit your sandbox endpoint with a well-formatted request, you should end up reaching and pausing at your breakpoint. For indexer, you would request against port 8980. See the curl example below

Example launch.json

{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Process",
      "type": "go",
      "request": "attach",
      "mode": "local",
      "processId": YOUR_PID_HERE
    }
  ]
}

Example curl command

~$ curl "localhost:8980/v2/accounts"

Download Details:
Author: algorand
Source Code: https://github.com/algorand/sandbox
License:

#algorand  #blockchain  #cryptocurrency #python 

Best of Crypto

1648112820

Wallet Connect: Open Protocol for Connecting Wallets to Dapps

WalletConnect v1.x.x

Open protocol for connecting Wallets to Dapps - https://walletconnect.org

Packages

SDK:
  • walletconnect - SDK

Clients:
  • @walletconnect/core - Core Client
  • @walletconnect/client - Isomorphic Client

Providers:
  • @walletconnect/ethereum-provider - Ethereum Provider
  • @walletconnect/truffle-provider - Truffle Provider
  • @walletconnect/web3-provider - Web3 Provider
  • @walletconnect/web3-subprovider - Web3 Subprovider

Helpers:
  • @walletconnect/browser-utils - Browser Utilities
  • @walletconnect/http-connection - HTTP Connection
  • @walletconnect/iso-crypto - Isomorphic Crypto
  • @walletconnect/qrcode-modal - QR Code Modal
  • @walletconnect/react-native-dapp - React-Native Dapp
  • @walletconnect/signer-connection - Signer Connection
  • @walletconnect/socket-transport - Socket Transport
  • @walletconnect/types - Typescript Types
  • @walletconnect/utils - Utility Library

Quick Start

Find quick start examples for your platform at https://docs.walletconnect.org/quick-start

Documentation

Read more about WalletConnect protocol and how to use our Clients at https://docs.walletconnect.org

Download Details:
Author: algorand
Source Code: https://github.com/algorand/walletconnect-monorepo
License: Apache-2.0 License

#algorand  #blockchain  #cryptocurrency #typescript #javascript 

Best of Crypto

1648105380

Pyteal Utils: PyTEAL Utility Methods Common in Many Smart Contracts

pyteal-utils

EXPERIMENTAL WIP

There is no guarantee to the API of this repository. It is subject to change without a tagged release.

This repository is meant to contain PyTEAL utility methods common in many Smart Contract programs.

Utils

Inline Assembly

  • InlineAssembly - Can be used to inject TEAL source directly into a PyTEAL program

Iter

  • accumulate
  • iterate - Provides a convenience method for calling a method n times

Math

  • odd - Returns 1 if x is odd
  • even - Returns 1 if x is even
  • factorial - Returns x! = x * (x-1) * (x-2) * ...
  • wide_factorial - Returns x! = x * (x-1) * (x-2) * ...
  • wide_power
  • exponential - Approximates e ** x for n iterations
  • log2
  • log10 - Returns log base 10 of the integer passed
  • ln - Returns natural log of x of the integer passed
  • pow10 - Returns 10 ** x
  • max - Returns the maximum of 2 integers
  • min - Returns the minimum of 2 integers
  • div_ceil - Returns the result of division rounded up to the next integer
  • saturation - Returns an output that is the value of n bounded to the upper and lower saturation values
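
For intuition, the plain-Python equivalents of the last two helpers above might look like this (illustrative sketches of the semantics only, not the PyTEAL implementations):

```python
def div_ceil(num, den):
    # Integer division rounded up to the next integer.
    return (num + den - 1) // den

def saturation(n, lo, hi):
    # Bound n to the range [lo, hi].
    return min(max(n, lo), hi)

assert div_ceil(7, 2) == 4        # 3.5 rounds up to 4
assert saturation(150, 0, 100) == 100
assert saturation(50, 0, 100) == 50
```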

Storage

  • GlobalBlob - Class holding static methods to work with the global storage of an application as a binary large object
  • LocalBlob - Class holding static methods to work with the local storage of an application as a binary large object
  • global_must_get - Returns the result of a global storage MaybeValue if it exists, else Assert and fail the program
  • global_get_else - Returns the result of a global storage MaybeValue if it exists, else return a default value
  • local_must_get - Returns the result of a local storage MaybeValue if it exists, else Assert and fail the program
  • local_get_else - Returns the result of a local storage MaybeValue if it exists, else return a default value

Strings

  • atoi - Converts a byte string representing a number to the integer value it represents
  • itoa - Converts an integer to the ascii byte string it represents
  • witoa - Converts a byte string interpreted as an integer to the ascii byte string it represents
  • head - Gets the first byte from a bytestring, returns as bytes
  • tail - Returns the string with the first character removed
  • suffix - Returns the last n bytes of a given byte string
  • prefix - Returns the first n bytes of a given byte string
  • rest
  • encode_uvarint - Returns the uvarint encoding of an integer
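
As a sketch of the semantics (plain Python, not the PyTEAL code), itoa and encode_uvarint behave roughly like:

```python
def itoa(i):
    # Integer to the ASCII decimal byte string it represents.
    return str(i).encode("ascii")

def encode_uvarint(x):
    # Varint encoding: 7 bits per byte, MSB set on every byte but the last.
    out = bytearray()
    while True:
        byte = x & 0x7F
        x >>= 7
        if x:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

assert itoa(123) == b"123"
assert encode_uvarint(300) == b"\xac\x02"
```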

Transactions

  • assert_common_checks - Calls all txn checker assert methods
  • assert_min_fee - Checks that the fee for a transaction is exactly equal to the current min fee
  • assert_no_rekey - Checks that the rekey_to field is empty, Assert if it is set
  • assert_no_close_to - Checks that the close_remainder_to field is empty, Assert if it is set
  • assert_no_asset_close_to - Checks that the asset_close_to field is empty, Assert if it is set

Common inner transaction operations

  • pay
  • axfer

Contributing

As a PyTEAL user, your contributions are extremely valuable for growing the PyTEAL utilities!

Please follow the contribution guide!

Prerequisites

Set up your PyTEAL environment

  1. Set up the sandbox and start it (dev mode recommended): ./sandbox up dev
  2. Clone this repo: git clone https://github.com/algorand/pyteal-utils.git
  3. Install Python dependencies: poetry install
  4. Activate a virtual env: poetry shell
  5. Configure pre-commit hooks: pre-commit install

Download Details:
Author: algorand
Source Code: https://github.com/algorand/pyteal-utils
License: MIT License

#algorand  #blockchain  #cryptocurrency #smartcontract 

Best of Crypto

1648098000

Testing Framework for Algorand SDKs

algorand-sdk-testing

Testing files for Algorand SDKs

About

The files in this repository are used for testing the different Algorand SDK implementations. By writing the tests once and sharing them amongst the SDKs we are able to increase the coverage of our tests, and avoid rewriting similar tests over and over again. In addition to test cases, we have a standard test environment which is managed by docker.

To define tests we use Cucumber, with feature files written in Gherkin syntax. Each SDK is responsible for finding a framework which can use these files. There are implementations for many popular programming languages.

We have different feature files for unit and integration tests. The unit tests should be run as a normal part of development to quickly identify bugs and regressions. Integration tests on the other hand take much longer to run and require a special test environment. The test environment is made up of multiple services and managed with docker compose.

Test Descriptions

Unit Tests

These reside in the unit features directory

  • @unit - Select all unit tests.
  • @unit.abijson - ABI types and method encoding/decoding unit tests.
  • @unit.algod - Algod REST API unit tests.
  • @unit.applications - Application endpoints added to Algod and Indexer.
  • @unit.atomic_transaction_composer - ABI / atomic transaction construction unit tests.
  • @unit.dryrun - Dryrun endpoint added to Algod.
  • @unit.feetest - Fee transaction encoding tests.
  • @unit.indexer - Indexer REST API unit tests.
  • @unit.indexer.logs - Application logs endpoints added to Indexer.
  • @unit.indexer.rekey - Rekey endpoints added to Algod and Indexer.
  • @unit.offline - The first unit tests we wrote for cucumber.
  • @unit.rekey - Rekey Transaction golden tests.
  • @unit.responses - REST Client Response serialization tests.
  • @unit.responses.231 - REST Client Unit Tests for Indexer 2.3.1+.
  • @unit.responses.genesis - REST Client Unit Tests for GetGenesis endpoint.
  • @unit.responses.messagepack - REST Client MessagePack Unit Tests.
  • @unit.responses.messagepack.231 - REST Client MessagePack Unit Tests for Indexer 2.3.1+.
  • @unit.tealsign - Test TEAL signature utilities.
  • @unit.transactions - Transaction encoding tests.
  • @unit.transactions.keyreg - Keyreg encoding tests.
  • @unit.transactions.payment - Payment encoding tests.

Integration Tests

These reside in the integration features directory

  • @abi - Test the Application Binary Interface (ABI) with atomic txn composition and execution.
  • @algod - General tests against algod REST endpoints.
  • @application.evaldelta - Test that eval delta fields are included in algod and indexer.
  • @applications.verified - Submit all types of application transactions and verify account state.
  • @assets - Submit all types of asset transactions.
  • @auction - Encode and decode bids for an auction.
  • @c2c - Test Contract to Contract invocations and ingestion.
  • @compile - Test the algod compile endpoint.
  • @dryrun - Test the algod dryrun endpoint.
  • @dryrun.testing - Test the testing harness that relies on the dryrun endpoint. Python only.
  • @indexer - Test all types of indexer queries and parameters against a static dataset.
  • @indexer.231 - REST Client Integration Tests for Indexer 2.3.1+.
  • @indexer.applications - Endpoints and parameters added to support applications.
  • @kmd - Test the kmd REST endpoints.
  • @rekey - Test the rekeying transactions.
  • @send - Test the ability to submit transactions to algod.

Test Implementation Status

Almost all the tags above are implemented by all 4 of our official SDKs.

However, a few are not fully supported:

  • @application.evaldelta - Java only
  • @dryrun.testing - Python only
  • @indexer.rekey - missing from Python and JS
  • @unit.responses.genesis - missing from Python and Java
  • @unit.responses.messagepack - missing from Python
  • @unit.responses.messagepack.231 - missing from Python and JS
  • @unit.responses.messagepack - missing from Python and JS
  • @unit.transactions.keyreg - Go only

SDK Overview

Full featured Algorand SDKs have 6 major components. Depending on the compatibility level, certain components may be missing. The components include:

  1. REST Clients
  2. Transaction Utilities
  3. Encoding Utilities
  4. Crypto Utilities
  5. TEAL Utilities
  6. Testing

SDK Overview

REST Client

The most basic functionality includes the REST clients for communicating with algod and indexer. These interfaces are defined by OpenAPI specifications:

  • algod v1 / indexer v1 (generated at build time at daemon/algod/api/swagger.json)
  • kmd v1 (generated at build time at daemon/kmd/api/swagger.json)
  • algod v2
  • indexer v2

Transaction Utilities

One of the basic features of an Algorand SDK is the ability to construct all types of Algorand transactions. This includes simple transactions of all types and the tooling to configure things like leases and atomic transfers (group transactions)

Encoding Utilities

In order to ensure transactions are compact and hash consistently, there are some special encoding requirements, and the SDKs must provide utilities to work with these encodings. Algorand uses MessagePack as a compact binary-encoded JSON alternative, and fields with default values are excluded from the encoded object. Additionally, to ensure consistent hashes, the fields must be alphabetized.
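
A minimal sketch of the canonicalization step described above (the field names and values here are made up for illustration; the real SDKs then MessagePack-encode the result):

```python
def canonicalize(fields):
    # Drop default (zero/empty) values and emit keys in alphabetical order,
    # as Algorand's canonical encoding requires.
    defaults = (0, "", b"", None)
    return {k: fields[k] for k in sorted(fields) if fields[k] not in defaults}

# Hypothetical transaction field map: zero amount and empty note are omitted.
txn = {"amt": 0, "fee": 1000, "type": "pay", "note": b""}
assert canonicalize(txn) == {"fee": 1000, "type": "pay"}
assert list(canonicalize(txn)) == ["fee", "type"]  # alphabetical key order
```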

Crypto Utilities

All things related to crypto to make it easier for developers to work with the blockchain. This includes standard things like ED25519 signing, up through Algorand specific LogicSig and MultiSig utilities. There are also some convenience methods for converting Mnemonics.

TEAL Utilities

Everything related to working with TEAL. This includes some utilities for parsing and validating compiled TEAL programs.

Testing

Each SDK has a number of unit tests specific to that particular SDK. The details of SDK-specific unit tests are up to the developer's discretion. There are also a large number of cucumber integration tests stored in this repository, which cover various unit-style tests and many integration tests. To assist with working in this environment, each SDK must provide tooling to download and install the cucumber files, a Dockerfile which configures an environment suitable for building the SDK and running the tests, and 3 makefile targets: make unit, make integration, and make docker-test. The rest of this document covers details of the Cucumber tests.

How to write tests

Tests consist of two things -- the feature files defined in this repository and some code snippets that map the text in the feature files to specific functions. The implementation process will vary by programming language and isn't covered here, refer to the relevant documentation for setting up a new SDK.

Tags

We use tags, and a simple directory structure, to organize our feature files. All cucumber implementations should allow specifying one or more tags to include, or exclude, when running tests.

Unit tests

All unit tests should be tagged with @unit so that unit tests can be run together during development for quick regression tests. For example, to run unit tests with java a tag filter is provided as follows:

~$ mvn test -Dcucumber.filter.tags="@unit"

This command will vary by cucumber implementation, the specific framework documentation should be referenced for details.

Adding a new test

When adding a new test to an existing feature file, or a new feature file, a new tag should be created which describes that test. For example, the templates feature file has a corresponding @templates tag. By adding a new tag for each feature we are able to add new tests to this repository without breaking the SDKs.

In order for this to work, each SDK maintains a whitelist of tags which have been implemented.

If a new feature file is created, the tag would go at the top of the file. If a new scenario is added the tag would go right above the scenario.

If possible, please run a formatter on the modified file. There are several, including one built into the VSCode Cucumber/Gherkin plugin.

Implementing tests in the SDK

The code snippets (or step definitions) live in the SDKs. Each SDK has a script which is able to clone this repository, and copy the tests into the correct locations.

When a test fails, the cucumber libraries we use print the code snippets which should be included in the SDK test code. The code snippets are empty functions which should be implemented according to the tests requirements. In many cases some state needs to be modified and stored outside of the functions in order to implement the test. Exactly how this state is managed is up to the developer. Refer to the cucumber documentation for tips about managing state. There may be better documentation in the specific cucumber language library you're using.
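
Conceptually, a step definition is just a function registered under the Gherkin step text, with shared state threaded through a context object. A toy sketch of that mapping (not any real Cucumber library; the step text and fields are invented for illustration):

```python
# Toy step registry: maps Gherkin step text to an implementation function.
STEPS = {}

def given(pattern):
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@given("a payment transaction")
def make_payment(context):
    # Shared state lives on the context, surviving across steps.
    context["txn"] = {"type": "pay", "fee": 1000}

context = {}
STEPS["a payment transaction"](context)
assert context["txn"]["type"] == "pay"
```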

Running tests

The SDKs come with a Makefile to coordinate running the cucumber test suites. There are 3 main targets:

  • unit: runs all of the short unit tests.
  • integration: runs all integration tests.
  • docker-test: installs feature file dependencies, starts the test environment, and runs the SDK tests in a docker container.

At a high level, the docker-test target is required to:

  1. clone algorand-sdk-testing.
  2. copy supported feature files from the features directory into the SDK.
  3. build and start the test environment by calling ./scripts/up.sh
  4. launch an SDK container using --network host which runs the cucumber test suite.

Running tests during development

This will vary by SDK. By calling up.sh the environment is available to the integration tests, and tests can be run locally with an IDE or debugger. This is often significantly faster than waiting for the entire test suite to run.

Some of the tests are stateful and will require restarting the environment before re-running the test.

Once the test environment is running you can use make unit and make integration to run tests.

Integration test environment

Docker compose is used to manage several containers which work together to provide the test environment. Currently that includes algod, kmd, indexer and a postgres database. The services run on specific ports with specific API tokens. Refer to docker-compose.yml and the docker directory for how this is configured.

Integration Test Environment

Start the test environment

There are a number of scripts to help with managing the test environment. The names should help you understand what they do, but to get started simply run up.sh to bring up a new environment, and down.sh to shut it down.

When starting the environment we avoid using the cache intentionally. It uses the go-algorand nightly build, and we want to ensure that the containers are always running against the most recent nightly build. In the future these scripts should be improved, but for now we completely avoid using cached docker containers to ensure that we don't accidentally run against a stale environment.

Download Details:
Author: algorand
Source Code:  https://github.com/algorand/algorand-sdk-testing
License: MIT License

#algorand  #blockchain  #cryptocurrency #testing #gherkin

Best of Crypto

1648094160

Algorand App for Ledger Nano S Built with C & Python

Algorand App for Ledger Nano S

Run make load to build and load the application onto the device. After installing and running the application, you can run cli/sign.py. Running without any arguments should print the address corresponding to the key on the Ledger device. To sign a transaction, run cli/sign.py input.tx output.tx; this will ask the Ledger device to sign the transaction from input.tx, and put the resulting signed transaction into output.tx. You can use goal clerk send .. -o input.tx to construct an input.tx file, and then use goal clerk rawsend to broadcast the output.tx file to the Algorand network.

Development notes

Python environment

  • sudo apt install python-hid python-hidapi python3-hid python3-hidapi
  • sudo pip install ledgerblue
  • Set up /etc/udev/rules.d based on these notes

Firmware update

Setting up a custom CA for automating app loading

  • Documentation
  • python -m ledgerblue.genCAPair
  • python -m ledgerblue.setupCustomCA --targetId 0x31100004 --public 040db5032de3dc9ac155959bca5e163d1ab35789192495c99b39dceb82dafb5ffad14ce7fd32d739388b6017c606f26028fdfa3e7000fa8c9793740a7aff839587 --name dev
  • export SCP_PRIVKEY=7f189771ea6ee2808e4a66e6b74600b7eadb720a7ccf06bfe2ac0f67c7103250

PRINTF-style debugging

  • python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName blup_0.9_misc_m1.hex --nocrc
  • python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName mcu_1.7-printf_over_0.9.hex --reverse --nocrc
  • ./usbtool/usbtool -v 0x2c97 log
  • Edit Makefile to enable PRINTF (and edit it back for production to disable PRINTF)

To go back to release firmware:

  • Instructions
  • python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName blup_0.9_misc_m1.hex --nocrc
  • python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName mcu_1.7_over_0.9.hex --reverse --nocrc

Python HID debugging

  • Pass debug=True to getDongle() in cli/sign.py

Glyph/icon

  • convert -resize 12 -extent 16x16 -gravity center -colors 2 ...

Assorted complaints / tricks

  • Need volatile for N_ variables; not correctly done in default example
  • ed25519 public keys have an extra garbage byte upfront, then 64 bytes of uncompressed X and Y points, in reverse byte order
  • Have to pass full message to cx_eddsa_sign() despite it being the "hash"
  • bip32 keygen returns 64 bytes instead of 32 needed for Ed25519; not documented anywhere
  • BSS not actually zeroed out; have to explicitly initialize global variables
  • Can't call PRINTF after UX_DISPLAY
  • Converting (int)-2 to (char) and then back to (int) produces 254; the base32 library broke as a result
  • Weird memory behavior
  • The app gets 4KBytes of SRAM for writable memory and stack. Look at debug/app.map to make sure there's nothing too large in SRAM (look between the _bss and _estack symbols), and check for large stack use in functions (look for large sub sp statements in debug/app.asm).
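
The (char) round-trip complaint above can be reproduced arithmetically: truncating an int to an unsigned 8-bit value maps -2 to 254 (sketched here in Python rather than C):

```python
# -2 stored into an unsigned 8-bit char and read back as an int:
assert (-2) & 0xFF == 254
# Any sentinel check comparing the round-tripped value to -2 then fails,
# which is how the base32 library broke.
```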

Download Details:
Author: algorand
Source Code: https://github.com/algorand/ledger-app-algorand
License: MIT License

#algorand  #blockchain  #cryptocurrency #c #python 

Best of Crypto

1648086780

Algorand SDK for Java 7+ to Interact with the Algorand Network

java-algorand-sdk

AlgoSDK is a Java library for communicating and interacting with the Algorand network. It contains a REST client for accessing algod instances over the web, and also exposes functionality for generating keypairs, mnemonics, creating transactions, signing transactions, and serializing data across the network.

Prerequisites

Java 7+ and Android minSdkVersion 16+

Installation

Maven:

<dependency>
    <groupId>com.algorand</groupId>
    <artifactId>algosdk</artifactId>
    <version>1.13.0-beta-1</version>
</dependency>

Quickstart

This program connects to a running sandbox private network, creates a payment transaction between two of its accounts, signs it with kmd, and reads the result from the Indexer.

import com.algorand.algosdk.account.Account;
import com.algorand.algosdk.crypto.Address;
import com.algorand.algosdk.kmd.client.ApiException;
import com.algorand.algosdk.kmd.client.KmdClient;
import com.algorand.algosdk.kmd.client.api.KmdApi;
import com.algorand.algosdk.kmd.client.model.*;
import com.algorand.algosdk.transaction.SignedTransaction;
import com.algorand.algosdk.transaction.Transaction;
import com.algorand.algosdk.util.Encoder;
import com.algorand.algosdk.v2.client.common.AlgodClient;
import com.algorand.algosdk.v2.client.common.IndexerClient;
import com.algorand.algosdk.v2.client.common.Response;
import com.algorand.algosdk.v2.client.model.PendingTransactionResponse;
import com.algorand.algosdk.v2.client.model.PostTransactionsResponse;
import com.algorand.algosdk.v2.client.model.TransactionsResponse;

import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Main {
    private static String token = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    private static KmdApi kmd = null;

    public static void main(String[] args) throws Exception {
        // Initialize algod/indexer v2 clients.
        AlgodClient algod = new AlgodClient("http://localhost", 4001, token);
        IndexerClient indexer = new IndexerClient("http://localhost", 8980);

        // Initialize KMD v1 client
        KmdClient kmdClient = new KmdClient();
        kmdClient.setBasePath("http://localhost:4002");
        kmdClient.setApiKey(token);
        kmd = new KmdApi(kmdClient);

        // Get accounts from sandbox.
        String walletHandle = getDefaultWalletHandle();
        List<Address> accounts  = getWalletAccounts(walletHandle);

        // Create a payment transaction
        Transaction tx1 = Transaction.PaymentTransactionBuilder()
                .lookupParams(algod) // lookup fee, firstValid, lastValid
                .sender(accounts.get(0))
                .receiver(accounts.get(1))
                .amount(1000000)
                .noteUTF8("test transaction!")
                .build();

        // Sign with KMD
        SignedTransaction stx1a = signTransactionWithKMD(tx1, walletHandle);
        byte[] stx1aBytes = Encoder.encodeToMsgPack(stx1a);

        // Sign with private key
        byte[] privateKey = lookupPrivateKey(accounts.get(0), walletHandle);
        Account account = new Account(privateKey);
        SignedTransaction stx1b = account.signTransaction(tx1);
        byte[] stx1bBytes = Encoder.encodeToMsgPack(stx1b);

        // KMD and signing directly should both be the same.
        if (!Arrays.equals(stx1aBytes, stx1bBytes)) {
            throw new RuntimeException("KMD disagrees with the manual signature!");
        }

        // Send transaction
        Response<PostTransactionsResponse> post = algod.RawTransaction().rawtxn(stx1aBytes).execute();
        if (!post.isSuccessful()) {
            throw new RuntimeException("Failed to post transaction");
        }

        // Wait for confirmation
        boolean done = false;
        while (!done) {
            Response<PendingTransactionResponse> txInfo = algod.PendingTransactionInformation(post.body().txId).execute();
            if (!txInfo.isSuccessful()) {
                throw new RuntimeException("Failed to check on tx progress");
            }
            if (txInfo.body().confirmedRound != null) {
                done = true;
            }
        }

        // Wait for indexer to index the round.
        Thread.sleep(5000);

        // Query indexer for the transaction
        Response<TransactionsResponse> transactions = indexer.searchForTransactions()
                .txid(post.body().txId)
                .execute();

        if (!transactions.isSuccessful()) {
            throw new RuntimeException("Failed to lookup transaction");
        }

        System.out.println("Transaction received! \n" + transactions.toString());
    }

    public static SignedTransaction signTransactionWithKMD(Transaction tx, String walletHandle) throws IOException, ApiException {
        SignTransactionRequest req = new SignTransactionRequest();
        req.transaction(Encoder.encodeToMsgPack(tx));
        req.setWalletHandleToken(walletHandle);
        req.setWalletPassword("");
        byte[] stxBytes = kmd.signTransaction(req).getSignedTransaction();
        return Encoder.decodeFromMsgPack(stxBytes, SignedTransaction.class);
    }

    public static byte[] lookupPrivateKey(Address addr, String walletHandle) throws ApiException {
        ExportKeyRequest req = new ExportKeyRequest();
        req.setAddress(addr.toString());
        req.setWalletHandleToken(walletHandle);
        req.setWalletPassword("");
        return kmd.exportKey(req).getPrivateKey();
    }

    public static String getDefaultWalletHandle() throws ApiException {
        for (APIV1Wallet w : kmd.listWallets().getWallets()) {
            if (w.getName().equals("unencrypted-default-wallet")) {
                InitWalletHandleTokenRequest tokenreq = new InitWalletHandleTokenRequest();
                tokenreq.setWalletId(w.getId());
                tokenreq.setWalletPassword("");
                return kmd.initWalletHandleToken(tokenreq).getWalletHandleToken();
            }
        }
        throw new RuntimeException("Default wallet not found.");
    }

    public static List<Address> getWalletAccounts(String walletHandle) throws ApiException, NoSuchAlgorithmException {
        List<Address> accounts = new ArrayList<>();

        ListKeysRequest keysRequest = new ListKeysRequest();
        keysRequest.setWalletHandleToken(walletHandle);
        for (String addr : kmd.listKeysInWallet(keysRequest).getAddresses()) {
            accounts.add(new Address(addr));
        }

        return accounts;
    }
}

Documentation

Javadoc can be found at https://algorand.github.io/java-algorand-sdk
Additional resources and code samples are located at https://developer.algorand.org.

Cryptography

AlgoSDK depends on org.bouncycastle:bcprov-jdk15on:1.61 for Ed25519 signatures, sha512/256 digests, and deserializing X.509-encoded Ed25519 private keys. The latter is the only explicit dependency on an external crypto library - all other references are abstracted through the JCA.
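
As a sketch of that JCA abstraction (illustrative code, not part of the SDK): a sha512/256 digest is requested by algorithm name, and any registered provider can satisfy it - Bouncy Castle, or the JDK's built-in provider on Java 9+:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative only: look up Algorand's sha512/256 digest through the JCA
// by algorithm name rather than calling a provider class directly.
public class DigestDemo {
    public static byte[] sha512_256(byte[] data) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-512/256").digest(data);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] digest = sha512_256("hello".getBytes());
        System.out.println(digest.length); // truncated SHA-512 variant: 32 bytes
    }
}
```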

Java 9+

When using cryptographic functionality on Java 9+, you may run into the following warning:

WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG

This is known behavior caused by the more restrictive module system in Java 9+, which Bouncy Castle does not yet fully support. The warning can be safely suppressed. We will monitor cryptographic packages for updates or alternative implementations.

Contributing to this Project

build

This project uses Maven.

To build

~$ mvn package

To run the example project, use the following command in the examples directory. Be sure to update the algod network address and the API token parameters (see examples/README for more information):

~$ mvn exec:java -Dexec.mainClass="com.algorand.algosdk.example.Main" -Dexec.args="127.0.0.1:8080 ***X-Algo-API-Token***"

To test

We use separate Java version targets for production and test code so that tests can use JUnit 5. Some IDEs, such as IntelliJ IDEA, do not support this well. If your IDE does not support mixed target and testTarget versions, enable the special ide profile as a workaround. Regardless of IDE support, the tests can be run from the command line; clean is included in case the IDE made an incremental build with Java 8.

~$ mvn clean test

There is also a special integration-test environment with shared tests. To run these, use the Makefile:

~$ make docker-test

deploying artifacts

The generated pom file provides Maven compatibility and deployment capabilities.

mvn clean install
mvn clean deploy -P github,default
mvn clean site -P github,default  # for javadoc
mvn clean deploy -P release,default

Android Support

Significant work has gone into ensuring Android compatibility (in particular for minSdkVersion 16). Note that the default crypto provider on Android does not provide Ed25519 signatures, so you will need to provide your own (e.g. Bouncy Castle).

Algod V2 and Indexer Code Generation

The classes com.algorand.algosdk.v2.client.algod.\*, com.algorand.algosdk.v2.client.indexer.\*, com.algorand.algosdk.v2.client.common.AlgodClient, and com.algorand.algosdk.v2.client.common.IndexerClient are generated from OpenAPI specifications in: algod.oas2.json and indexer.oas2.json.

The specification files can be obtained from:

A testing framework can also be generated with com.algorand.sdkutils.RunQueryMapperGenerator, and the tests can be run from com.algorand.sdkutils.RunAlgodV2Tests and com.algorand.sdkutils.RunIndexerTests.

Regenerate the Client Code

To actually regenerate the code, use run_generator.sh with paths to the *.oas2.json files mentioned above.

Updating the kmd REST client

The kmd REST client has not been upgraded to the new code generation; it is still largely autogenerated by swagger-codegen (https://github.com/swagger-api/swagger-codegen).

To regenerate the clients, first check out the latest swagger-codegen from the GitHub repo. (In particular, the Homebrew version is out of date and fails to handle raw byte arrays properly.) Note that OpenAPI 2.0 doesn't support unsigned types. Luckily we don't have any uint32 types in algod, so we can do a lossless type mapping from uint64 -> int64 (Long) -> BigInteger:

curl http://localhost:8080/swagger.json | sed -e 's/uint32/int64/g' > temp.json
swagger-codegen generate -i temp.json -l java -c config.json

config.json looks like:

{
  "library": "okhttp-gson",
  "java8": false,
  "hideGenerationTimestamp": true,
  "serializableModel": false,
  "supportJava6": true,
  "invokerPackage": "com.algorand.algosdk.{kmd or algod}.client",
  "apiPackage": "com.algorand.algosdk.{kmd or algod}.client.api",
  "modelPackage": "com.algorand.algosdk.{kmd or algod}.client.model"
}

Make sure you convert all uint32 types to Long types.

The generated code (as of April 2019) has one circular dependency involving client.Pair. The client package depends on client.auth, but client.auth uses client.Pair which is in the client package. One more problem is that uint64 is not a valid format in OpenAPI 2.0; however, we need to send large integers to the algod API (kmd is fine). To resolve this, we do the following manual pass on generated code:

  • Move Pair.java into the client.lib package
  • Find-and-replace Integer with BigInteger (for uint64), Long (for uint32), etc. in com.algorand.algosdk.algod and subpackages (unnecessary for kmd)
  • Run an Optimize Imports operation on generated code, to minimize dependencies.

Note that msgpack-java is good at using the minimal representation.
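
As a quick illustration of why the uint64 pass matters (hypothetical example, not generated code): Java's long is signed, so the top half of the uint64 range cannot be represented in it, while BigInteger has no fixed width.

```java
import java.math.BigInteger;

// Illustrative only: the largest uint64 value overflows Java's signed long
// but parses cleanly as a BigInteger.
public class Uint64Demo {
    public static BigInteger parseUint64(String s) {
        return new BigInteger(s); // arbitrary precision, so no overflow
    }

    public static void main(String[] args) {
        String maxUint64 = "18446744073709551615"; // 2^64 - 1; Long.parseLong would throw
        System.out.println(parseUint64(maxUint64).bitLength()); // needs all 64 bits
    }
}
```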

Download Details:
Author: algorand
Source Code: https://github.com/algorand/java-algorand-sdk
License: MIT License


A General Purpose OpenAPI Code Generator for Algorand

generator

This is a general purpose OpenAPI code generator. It is currently used to completely generate the HTTP code in the Java SDK, and generate some of the HTTP code in our Golang SDK.

Usage

We currently have two HTTP APIs, one for algod and one for indexer, so in most cases this tool is run once with each OpenAPI spec.

Build as a self-executing jar:

~$ mvn package -DskipTests
~$ java -jar target/generator-*-jar-with-dependencies.jar -h

You'll see that there are a number of subcommands:

  • java - the original Java SDK generator.
  • responses - generate randomized test files for SDK unit tests.
  • template - a generator that uses velocity templates rather than Java code to configure the code generation.

Code layout

The command line interface uses JCommander to define the command line interface. See Main.java.

The main code involves an OpenAPI parser / event generator and several listeners for the actual generation.

object layout

Templates

The template subcommand is using Apache Velocity as the underlying template engine. Things like variables, loops, and statements are all supported. So business logic can technically be implemented in the template if it's actually necessary.

Template files

There are three phases: client, query, and model. Each phase must provide two templates, one for generating the file contents and one for specifying the filename to write to. For query and model generation, the template is executed once for each query / model. If all results should go to the same file, return the same filename twice in a row and processing will exit early.

phase  | filename           | purpose
client | client.vm          | Client class with functions to call each query.
client | client_filename.vm | File to write to the client output directory.
query  | query.vm           | Template to use for generating query files.
query  | query_filename.vm  | File to write to the query output directory.
model  | model.vm           | Template to use for generating model files.
model  | model_filename.vm  | File to write to the model output directory.
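
As an illustrative sketch of a model phase pair (the file names follow the table above, but the fields referenced on $def and $props are hypothetical):

```
## model_filename.vm - emits the output file name for each model
${def.name}.java

## model.vm - emits the file body for that model
public class ${def.name} {
#foreach( $prop in $props )
    public ${prop.type} ${prop.name};
#end
}
```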

Output directories

The template command only runs the templates for which an output directory is provided. So if you just want to regenerate models, provide only the -m option.

  -c, --clientOutputDir
    Directory to write client file(s).
  -m, --modelsOutputDir
    Directory to write model file(s).
  -q, --queryOutputDir
    Directory to write query file(s).

Property files

The template subcommand accepts a --propertyFiles option. It can be provided multiple times, or as a comma separated list of files. Property files will be processed and bound to a velocity variable available to templates.
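
Property files use plain java.util.Properties syntax; for example (illustrative contents, not a file shipped with the generator):

```properties
# common_config.properties (hypothetical values)
package=com.algorand.v2.algod
model_skip=AccountParticipation,AssetParams
```

A template can then read these values through the bound variable, e.g. ${propFile.package}.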

template variables

For details on a type you can put it directly into your template. It will be serialized along with its fields for your reference. Here is a high level description of what is available:

template | variable | type                              | purpose
all      | str      | StringHelpers.java                | String utilities; see StringHelpers.java for details. There are simple things like $str.capitalize("someData") -> SomeData, and more complex helpers like $str.formatDoc($query.doc, "// "), which splits the document at the word boundary nearest to 80 characters without going over and adds a prefix to each new line.
all      | order    | OrderHelpers.java                 | Ordering utilities; see OrderHelpers.java for details. An example utility function is $order.propertiesWithOrdering($props, $preferred_order), where $props is a list of properties and $preferred_order is a string list to use when ordering the properties list.
all      | propFile | Properties                        | The contents of all property files are available with this variable. For example, if package=com.algorand.v2.algod is in the property file, the template may use ${propFile.package}.
all      | models   | HashMap<StructDef, List<TypeDef>> | A list of all models.
all      | queries  | List<QueryDef>                    | A list of all queries.
query    | q        | QueryDef                          | The current query definition.
model    | def      | StructDef                         | The current model definition if multiple files are being generated.
model    | props    | List<TypeDef>                     | A list of properties for the current model.

Example usage

In the following example, we are careful to generate the algod code first because the algod models are a strict subset of the indexer models. For that reason, we are able to reuse some overlapping models from indexer in algod.

~$ java -jar generator*jar template \
        -s algod.oas2.json \
        -t go_templates \
        -c algodClient \
        -m allModels \
        -q algodQueries \
        -p common_config.properties,algod_config.properties
~$ java -jar generator*jar template \
        -s indexer.oas2.json \
        -t go_templates \
        -c indexerClient \
        -m allModels \
        -q indexerQueries \
        -p common_config.properties,indexer_config.properties

Test Template

There is a test template that gives you some basic usage in the test_templates directory.

You can generate the test code in the output directory with the following commands:

~$ mkdir output
~$ java -jar target/generator-*-jar-with-dependencies.jar \
    template \
    -s /path/to/a/spec/file/indexer.oas2.json \
    -t test_templates/ \
    -m output \
    -q output \
    -c output \
    -p test_templates/my.properties

Golang Template

The Golang templates are in the go_templates directory.

The Golang HTTP API is only partially generated. The hand-written parts were not entirely consistent with the spec, which makes it difficult to regenerate them. Regardless, an attempt has been made. In the templates there are some macros which map "generated" values to the hand-written ones. For example, the query types have this mapping:

#macro ( queryType )
#if ( ${str.capitalize($q.name)} == "SearchForAccounts" )
SearchAccounts## The hand written client doesn't quite match the spec...
#elseif ( ${str.capitalize($q.name)} == "GetStatus" )
Status##
#elseif ( ${str.capitalize($q.name)} == "GetPendingTransactionsByAddress" )
PendingTransactionInformationByAddress##
#elseif ( ${str.capitalize($q.name)} == "GetPendingTransactions" )
PendingTransactions##
#else
${str.capitalize($q.name)}##
#end
#end

Other mappings are more specific to the language, such as the OpenAPI type to SDK type:

#macro ( toQueryType $param )##
#if ( $param.algorandFormat == "RFC3339 String" )
string##
#elseif ( $param.type == "integer" )
uint64##
#elseif ( $param.type == "string" )
string##
#elseif ( $param.type == "boolean" )
bool##
#elseif( $param.type == "binary" )
string##
#else
UNHANDLED TYPE
- ref: $!param.refType
- type: $!param.type
- array type: $!param.arrayType
- algorand format: $!param.algorandFormat
- format: $!param.format
##$unknown.type ## force a template failure because $unknown.type does not exist.
#end
#end

Because of this, we are phasing in code generation gradually by skipping some types. The skipped types are specified in the property files:

common_config.properties

model_skip=AccountParticipation,AssetParams,RawBlockJson,etc,...

algod_config.properties

query_skip=Block,BlockRaw,SendRawTransaction,SuggestedParams,etc,...

indexer_config.properties

query_skip=LookupAssetByID,LookupAccountTransactions,SearchForAssets,LookupAssetBalances,LookupAssetTransactions,LookupBlock,LookupTransactions,SearchForTransactions

Java Template

The Java templates are in the java_templates directory.

These are not used yet; they are the initial experiments for the template engine. Since the Java SDK has used code generation from the beginning, we should be able to fully migrate to the template engine eventually.

Automation

Preparing an external repository for automatic code generation

In general, the automation pipeline will build and run whatever Dockerfile is found in a repository's templates directory. For instructions on how to configure the templates directory, look at the repository template directory example.

If you are trying to verify that automatic code generation works as intended, we recommend creating a testing branch from that repository and using the SKIP_PR=true environment variable to avoid creating pull requests. If all goes according to plan, generated files should be available in the container's /repo directory.

Setting up the automatic generator

The automatic generator scripts depend on certain prerequisites that are listed in automation/REQUIREMENTS.md. Once those conditions have been satisfied, automatically generating code for external repositories should be as easy as building and running a particular SDK's templates/Dockerfile file.


Download Details:
Author: algorand
Source Code: https://github.com/algorand/generator
License:


The Official JavaScript SDK for Algorand

js-algorand-sdk

AlgoSDK is the official JavaScript library for communicating with the Algorand network. It's designed for modern browsers and Node.js.

Installation

Node.js

$ npm install algosdk

This package provides TypeScript types, but you will need TypeScript version 4.2 or higher to use them properly.

If you encounter errors in Webpack 5 or Vite projects, you may need to install extra dependencies.

Browser

Include a minified browser bundle directly in your HTML like so:

<script
  src="https://unpkg.com/algosdk@v1.15.0-beta.1/dist/browser/algosdk.min.js"
  integrity="sha384-wURu1H0s7z6Nj/AiP4O+0EorWZNvjiXwex7pNwtJH77x60mNs0Wm2zR37iUtHMwH"
  crossorigin="anonymous"
></script>

or

<script
  src="https://cdn.jsdelivr.net/npm/algosdk@v1.15.0-beta.1/dist/browser/algosdk.min.js"
  integrity="sha384-wURu1H0s7z6Nj/AiP4O+0EorWZNvjiXwex7pNwtJH77x60mNs0Wm2zR37iUtHMwH"
  crossorigin="anonymous"
></script>

Information about hosting the package for yourself, finding the browser bundles of previous versions, and computing the SRI hash is available here.

Quick Start

const token = 'Your algod API token';
const server = 'http://127.0.0.1';
const port = 8080;
const client = new algosdk.Algodv2(token, server, port);

(async () => {
  console.log(await client.status().do());
})().catch((e) => {
  console.log(e);
});

Documentation

Documentation for this SDK is available here: https://algorand.github.io/js-algorand-sdk/. Additional resources are available on https://developer.algorand.org.

Examples

Running examples requires access to a running node. Follow the instructions in Algorand's developer resources to install a node on your computer.

As portions of the codebase are written in TypeScript, example files cannot be run directly using node. Please refer to the instructions described in the examples/README.md file for more information regarding running the examples.

SDK Development

Building

To build a new version of the library, run:

npm run build

Generating Documentation

To generate the documentation website, run:

npm run docs

The static website will be located in the docs/ directory.

Testing

We have two test suites: mocha tests in this repo, and the Algorand SDK test suite from https://github.com/algorand/algorand-sdk-testing.

Node.js

To run the mocha tests in Node.js, run:

npm test

To run the SDK test suite in Node.js, run:

make docker-test

Browsers

The test suites can also run in browsers. To do so, set the environment variable TEST_BROWSER to one of our supported browsers. Currently we support testing in chrome and firefox. When TEST_BROWSER is set, the mocha and SDK test suites will run in that browser.

For example, to run mocha tests in Chrome:

TEST_BROWSER=chrome npm test

And to run SDK tests in Firefox:

TEST_BROWSER=firefox make docker-test

Code Style

This project enforces a modified version of the Airbnb code style.

We've set up linters and formatters to help catch errors and improve the development experience:

  • Prettier – ensures that code is formatted in a readable way.
  • ESLint – checks code for antipatterns as well as formatting.

If using the Visual Studio Code editor with the recommended extensions, ESLint errors should be highlighted in red and the Prettier extension should format code on every save.

Precommit Hook

The linters and formatters listed above should run automatically on each commit to catch errors early and save CI running time.

Download Details:
Author: algorand
Source Code: https://github.com/algorand/js-algorand-sdk
License: MIT License
