Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.
The WMI exporter is recommended for Windows users.
There is varying support for collectors on each operating system. The tables below list all existing collectors and the supported systems.
Collectors are enabled by providing a --collector.<name> flag. Collectors that are enabled by default can be disabled by providing a --no-collector.<name> flag.
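For example, a hypothetical invocation that enables the non-default ntp collector and disables the default wifi collector:

./node_exporter --collector.ntp --no-collector.wifi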
Enabled by default:

Name | Description | OS |
---|---|---|
arp | Exposes ARP statistics from /proc/net/arp. | Linux |
bcache | Exposes bcache statistics from /sys/fs/bcache/. | Linux |
bonding | Exposes the number of configured and active slaves of Linux bonding interfaces. | Linux |
boottime | Exposes system boot time derived from the kern.boottime sysctl. | Darwin, Dragonfly, FreeBSD, NetBSD, OpenBSD |
conntrack | Shows conntrack statistics (does nothing if no /proc/sys/net/netfilter/ present). | Linux |
cpu | Exposes CPU statistics. | Darwin, Dragonfly, FreeBSD, Linux |
diskstats | Exposes disk I/O statistics. | Darwin, Linux |
edac | Exposes error detection and correction statistics. | Linux |
entropy | Exposes available entropy. | Linux |
exec | Exposes execution statistics. | Dragonfly, FreeBSD |
filefd | Exposes file descriptor statistics from /proc/sys/fs/file-nr. | Linux |
filesystem | Exposes filesystem statistics, such as disk space used. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD |
hwmon | Exposes hardware monitoring and sensor data from /sys/class/hwmon/. | Linux |
infiniband | Exposes network statistics specific to InfiniBand and Intel OmniPath configurations. | Linux |
ipvs | Exposes IPVS status from /proc/net/ip_vs and stats from /proc/net/ip_vs_stats. | Linux |
loadavg | Exposes load average. | Darwin, Dragonfly, FreeBSD, Linux, NetBSD, OpenBSD, Solaris |
mdadm | Exposes statistics about devices in /proc/mdstat (does nothing if no /proc/mdstat present). | Linux |
meminfo | Exposes memory statistics. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD |
netclass | Exposes network interface info from /sys/class/net/. | Linux |
netdev | Exposes network interface statistics such as bytes transferred. | Darwin, Dragonfly, FreeBSD, Linux, OpenBSD |
netstat | Exposes network statistics from /proc/net/netstat. This is the same information as netstat -s. | Linux |
nfs | Exposes NFS client statistics from /proc/net/rpc/nfs. This is the same information as nfsstat -c. | Linux |
nfsd | Exposes NFS kernel server statistics from /proc/net/rpc/nfsd. This is the same information as nfsstat -s. | Linux |
sockstat | Exposes various statistics from /proc/net/sockstat. | Linux |
stat | Exposes various statistics from /proc/stat. This includes boot time, forks and interrupts. | Linux |
textfile | Exposes statistics read from local disk. The --collector.textfile.directory flag must be set. | any |
time | Exposes the current system time. | any |
timex | Exposes selected adjtimex(2) system call stats. | Linux |
uname | Exposes system information as provided by the uname system call. | Linux |
vmstat | Exposes statistics from /proc/vmstat. | Linux |
wifi | Exposes WiFi device and station statistics. | Linux |
xfs | Exposes XFS runtime statistics. | Linux (kernel 4.4+) |
zfs | Exposes ZFS performance statistics. | Linux |
Disabled by default:

Name | Description | OS |
---|---|---|
buddyinfo | Exposes statistics of memory fragments as reported by /proc/buddyinfo. | Linux |
devstat | Exposes device statistics. | Dragonfly, FreeBSD |
drbd | Exposes Distributed Replicated Block Device statistics (up to version 8.4). | Linux |
interrupts | Exposes detailed interrupt statistics. | Linux, OpenBSD |
ksmd | Exposes kernel and system statistics from /sys/kernel/mm/ksm. | Linux |
logind | Exposes session counts from logind. | Linux |
meminfo_numa | Exposes memory statistics from /proc/meminfo_numa. | Linux |
mountstats | Exposes filesystem statistics from /proc/self/mountstats, including detailed NFS client statistics. | Linux |
ntp | Exposes local NTP daemon health. | any |
qdisc | Exposes queuing discipline statistics. | Linux |
runit | Exposes service status from runit. | any |
supervisord | Exposes service status from supervisord. | any |
systemd | Exposes service and system status from systemd. | Linux |
tcpstat | Exposes TCP connection status information from /proc/net/tcp and /proc/net/tcp6. (Warning: the current version has potential performance issues under high load.) | Linux |
The textfile collector is similar to the Pushgateway, in that it allows exporting of statistics from batch jobs. It can also be used to export static metrics, such as what role a machine has. The Pushgateway should be used for service-level metrics. The textfile module is for metrics that are tied to a machine.
To use it, set the --collector.textfile.directory flag on the Node exporter. The collector will parse all files in that directory matching the glob *.prom using the text format.
To atomically push completion time for a cron job:
echo my_batch_job_completion_time $(date +%s) > /path/to/directory/my_batch_job.prom.$$
mv /path/to/directory/my_batch_job.prom.$$ /path/to/directory/my_batch_job.prom
To statically set roles for a machine using labels:
echo 'role{role="application_server"} 1' > /path/to/directory/role.prom.$$
mv /path/to/directory/role.prom.$$ /path/to/directory/role.prom
The node_exporter will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.

For advanced use the node_exporter can be passed an optional list of collectors to filter metrics. The collect[] parameter may be used multiple times. In the Prometheus configuration this syntax can be used under the scrape config:
params:
collect[]:
- foo
- bar
This can be useful for having different Prometheus servers collect specific metrics from nodes.
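Outside of Prometheus, the same filtering can be spot-checked against the metrics endpoint directly, assuming the default port 9100:

curl 'http://localhost:9100/metrics?collect[]=cpu&collect[]=meminfo'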
Prerequisites: the glibc-static package.

Building:
go get github.com/algorand/node_exporter
cd ${GOPATH-$HOME/go}/src/github.com/algorand/node_exporter
make
./node_exporter <flags>
To see all available configuration flags:
./node_exporter -h
To run the tests:
make test
The node_exporter is designed to monitor the host system. It's not recommended to deploy it as a Docker container because it requires access to the host system. Be aware that any non-root mount points you want to monitor will need to be bind-mounted into the container. If you start the container for host monitoring, specify the path.rootfs argument. This argument must match the path in the bind-mount of the host root. The node_exporter will use path.rootfs as a prefix to access the host filesystem.
docker run -d \
--net="host" \
--pid="host" \
-v "/:/host:ro,rslave" \
quay.io/prometheus/node-exporter \
--path.rootfs /host
On some systems, the timex collector requires an additional Docker flag, --cap-add=SYS_TIME, in order to access the required syscalls.
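A sketch of the earlier docker run command with that capability added:

docker run -d \
--net="host" \
--pid="host" \
--cap-add=SYS_TIME \
-v "/:/host:ro,rslave" \
quay.io/prometheus/node-exporter \
--path.rootfs /host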
There is a community-supplied COPR repository which closely follows upstream releases.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/node_exporter
License: Apache-2.0 License
This deploys https://github.com/aktionariat/walletconnect-bridge.git to our EKS clusters. While we are sharing this automation for others to benefit, the Algorand team does NOT make warranties regarding the stability / reliability of the referenced bridge implementation. Please research and make decisions around use at your own discretion.
For 2.0 support, see the v2.0 branch. It uses https://github.com/WalletConnect/walletconnect-monorepo.git
This builds a docker image for the walletconnect bridge. It is currently versioned with a timestamp and will produce docker images such as:
walletconnect/relay-server:latest
walletconnect/relay-server:latest-java
walletconnect/relay-server:1633462163-java
$ scripts/push.sh -h
Usage: scripts/push.sh <-i IMAGE> [-r AWS_REGION] [-h]
This script pushes images to ECR. It will check the aws account that the shell it runs in has credentials to talk to and creates the ECR repo if it does not already exist. It then pushes the image with the appropriate tag.
scripts/push.sh -i walletconnect/relay-server:1633456298-java
$ scripts/deploy.sh -h
Usage: scripts/deploy.sh [-l VERSION] [-r AWS_REGION] [-n NAMESPACE] [-c CLASSIFIER] [-h]
This script will deploy to the kubernetes cluster that the shell it runs in has access to. When ingress is enabled, it is very opinionated about running with the nginx ingress controller, external-dns and lets-encrypt. If you would like to use the settings shown here for your ingress rule, make sure that the load balancer supporting the ingress controller can handle the timeout.
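For example, to deploy the image version pushed above (version string reused from the push example purely for illustration):

scripts/deploy.sh -l 1633456298-java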
scripts/status.sh -h
Usage: scripts/status.sh [-n NAMESPACE] [-c CLASSIFIER] [-h]
This script shows some data about a deployed service.
scripts/status.sh
VERSION: 1633456298-java
ENDPOINT: wss://wallet-connect.default.dev.example.com/
NAME READY STATUS RESTARTS AGE
wallet-connect-bridge-default-default-59669996d4-vr6bd 1/1 Running 0 15m
Clone the algorand example dapp repository.
git clone https://github.com/algorand/walletconnect-example-dapp
Go to the page config and edit it to use your endpoint. You can find the endpoint using the status script shown before. You need to change the bridge variable in src/App.tsx to do this: https://github.com/algorand/walletconnect-example-dapp/blob/master/src/App.tsx#L179
Next you need to start the app. You will get a notification to allow your shell to use Chrome; please approve this.
npm install
npm run start
After following these steps you should see something like the following.
Once you have a demo app running and configured to use your wallet connect bridge endpoint, you can try to register it with a wallet.
Navigate to the dapp in your browser, probably running in http://localhost:3000.
Click "Connect to WalletConnect" and you will see a QR code
This QR code is what you need to use to integrate the demo dapp with your wallet. Copy the QR code and then you can navigate to https://test.walletconnect.org/ and test it out.
Paste the copied QR code where it says "Paste wc: url"
Check the console logs in dev tools if you run into any issues while working with this site.
You will be prompted to either accept or reject integrating with your demo dapp. Click "Approve"
If you see the following screen, your integration has been successful!
Download Details:
Author: algorand
Source Code: https://github.com/algorand/walletconnect-automation
License: MIT License
ff-zeroize is a temporary crate that enables zeroize features for the ff crate.

ff is a finite field library written in pure Rust, with no unsafe{} code.

Add the ff_zeroize crate to your Cargo.toml:
[dependencies]
ff_zeroize = "0.6.1"
The ff crate contains Field, PrimeField, PrimeFieldRepr and SqrtField traits. See the documentation for more.
If you need an implementation of a prime field, this library also provides a procedural macro that will expand into an efficient implementation of a prime field when supplied with the modulus. PrimeFieldGenerator must be an element of Fp of order p-1 that is also a quadratic nonresidue.
First, enable the derive crate feature:
[dependencies]
ff_zeroize = { version = "0.6.1", features = ["derive"] }
And then use the macro like so:
extern crate rand;
#[macro_use]
extern crate ff_zeroize as ff;
#[derive(PrimeField)]
#[PrimeFieldModulus = "52435875175126190479447740508185965837690552500527637822603658699938581184513"]
#[PrimeFieldGenerator = "7"]
struct Fp(FpRepr);
And that's it! Fp now implements Field and PrimeField. Fp will also implement SqrtField if supported. The library implements FpRepr itself and derives PrimeFieldRepr for it.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/ff-zeroize
License: View license
The goal of the ISDA Common Domain Model (CDM) is to allow financial institutions to have a coherent representation of financial instruments and events. This document shows how institutions can use the CDM and the Algorand blockchain to maintain separately owned but coherent financial databases.
The Algorand blockchain can process 1000 transactions per second with a latency of less than 5 seconds and ensures transaction finality with point-of-sale speed. The Algorand blockchain is a permissionless blockchain with hundreds of independently operating nodes distributed around the world. The Algorand blockchain allows developers to create their applications without having to set up their own distributed systems. In addition, Algorand provides extensive documentation, and provides SDKs in four languages (Go, Python, Java and Javascript) to interact with the blockchain.
Figure 1: Nodes running the Algorand client software around the world
Running the code in this repository requires Java, Maven, and MongoDB.

For OS X, bash scripts are provided which install these tools and set the correct paths to use them. These scripts are in the INSTALL folder and should be run in the following order:

- install_brew.sh, if the user does not have Homebrew installed (an OS X utility to install programs)
- install_java.sh, if the user does not have Java installed. This installs the OpenJDK.
- install_maven.sh, if the user does not have Maven installed
- install_mongo.sh, if the user does not have MongoDB installed

For Ubuntu, equivalent scripts are provided in the INSTALL folder and should be run in the following order:

- install_java_for_ubuntu.sh, if the user does not have Java installed. This installs the OpenJDK.
- install_maven_for_ubuntu.sh, if the user does not have Maven installed
- install_mongo_for_ubuntu.sh, if the user does not have MongoDB installed

The main directory contains a pom.xml file which Maven uses to download Java libraries that the code depends on, including the Algorand Java SDK, and the Java implementation of the ISDA CDM.
The code has been tested on a computer running OS X Version 10.14.5, OpenJDK 13, and Maven version 3.6.1, and on an AWS instance ("4.15.0-1044-aws") running Ubuntu 18.04.2 LTS, OpenJDK 11 and Maven version 3.6.0.
A settings.xml file is provided in the project root directory; use it to install dependencies as follows:
mvn -s settings.xml clean install
You can also run
sh compile.sh
from the root directory.
To run the example code, type
sh run.sh
in the root directory.
This script will start a MongoDB service and run the examples for the first three use cases in the hackathon. Ubuntu users need to uncomment the following line in run.sh to start the Mongo service on Ubuntu:
##UNCOMMENT THIS LINE FOR UBUNTU
# bash start_mongo_on_ubuntu.sh
The code needs to have a MongoDB service running to persist some information. Right now the run.sh script starts this service automatically if it is not running. However, scripts are also provided to start and stop it manually.
To run the mongodb service, run
sh start_mongo.sh
To stop the mongodb service, run
sh stop_mongo.sh
In the Derivhack Hackathon, users are given a trade execution file and need to commit it to their own data stores and to the blockchain.
In this example, we use the Algorand blockchain to ensure different parties have consistent versions of the file, while keeping their datastores private. The information stored in the chain includes the global key of the execution, its lineage, and the file path where the user stored the Execution JSON object in their private data store.
The following code, from the class CommitExecution.java, reads a CDM event and creates Algorand accounts for all parties in the event. It gets the executing party (Client 1's broker), and has this party send details of the execution to all other parties on the Algorand blockchain.
public class CommitExecution {
public static void main(String [] args) throws Exception{
//Read the input arguments and read them into files
String fileName = args[0];
String fileContents = ReadAndWrite.readFile(fileName);
//Read the event file into a CDM object using the Rosetta object mapper
ObjectMapper rosettaObjectMapper = RosettaObjectMapper.getDefaultRosettaObjectMapper();
Event event = rosettaObjectMapper
.readValue(fileContents, Event.class);
//Create Algorand Accounts for all parties
// and persist accounts to filesystem/database
List<Party> parties = event.getParty();
DB mongoDB = MongoUtils.getDatabase("users");
parties.parallelStream()
.map(party -> User.getOrCreateUser(party,mongoDB))
.collect(Collectors.toList());
//Get the execution
Execution execution = event
.getPrimitive()
.getExecution().get(0)
.getAfter()
.getExecution();
// Get the executing party reference
String executingPartyReference = execution.getPartyRole()
.stream()
.filter(r -> r.getRole() == PartyRoleEnum.EXECUTING_ENTITY)
.map(r -> r.getPartyReference().getGlobalReference())
.collect(MoreCollectors.onlyElement());
// Get the executing party
Party executingParty = event.getParty().stream()
.filter(p -> executingPartyReference.equals(p.getMeta().getGlobalKey()))
.collect(MoreCollectors.onlyElement());
// Get all other parties
List<Party> otherParties = event.getParty().stream()
.filter(p -> !executingPartyReference.equals(p.getMeta().getGlobalKey()))
.collect(Collectors.toList());
// Find or create the executing user
User executingUser = User.getOrCreateUser(executingParty, mongoDB);
//Send all other parties the contents of the event as a set of blockchain transactions
List<User> users = otherParties.
parallelStream()
.map(p -> User.getOrCreateUser(p,mongoDB))
.collect(Collectors.toList());
List<Transaction> transactions = users
.parallelStream()
.map(u->executingUser.sendEventTransaction(u,event,"execution"))
.collect(Collectors.toList());
}
}
The corresponding shell command to execute this function with the Block trades file is
##Commit the execution file to the blockchain
mvn -s settings.xml exec:java -Dexec.mainClass="com.algorand.demo.CommitExecution" \
-Dexec.args="./Files/UC1_block_execute_BT1.json" -e -q
The second use case for Derivhack is allocation of trades. That is, the block trade execution given in use case 1 will be allocated among multiple accounts. Participants are also given a JSON CDM file specifying the [allocation](https://github.com/algorand/DerivhackExamples/blob/master/Files/UC2_allocation_execution_AT1.json). Since allocations are CDM events, the same logic applies as in the Execution use case. To commit the allocation event to the blockchain, participants can use the following shell command:
mvn -s settings.xml exec:java -Dexec.mainClass="com.algorand.demo.CommitAllocation" \
-Dexec.args="./Files/UC2_allocation_execution_AT1.json" -e -q
The third use case is the affirmation of the trade by the clients. Participants can look at the classes CommitAffirmation.java (https://github.com/algorand/DerivhackExamples/blob/master/src/main/java/com/algorand/demo/CommitAffirmation.java) and AffirmationImpl.java (https://github.com/algorand/DerivhackExamples/blob/master/src/main/java/com/algorand/demo/AffirmationImpl.java) for examples of how to derive the Affirmation of a trade from its allocation.
In the affirmation step, the client produces a CDM affirmation from the Allocation Event, and sends the affirmation to the broker over the Algorand Chain.
public class CommitAffirmation {
public static void main(String[] args){
//Load the database to lookup users
DB mongoDB = MongoUtils.getDatabase("users");
//Load a file with client global keys
String allocationFile = args[0];
String allocationCDM = ReadAndWrite.readFile(allocationFile);
ObjectMapper rosettaObjectMapper = RosettaObjectMapper.getDefaultRosettaObjectMapper();
Event allocationEvent = null;
try{
allocationEvent = rosettaObjectMapper
.readValue(allocationCDM, Event.class);
}
catch(java.io.IOException e){
e.printStackTrace();
}
List<Trade> allocatedTrades = allocationEvent.getPrimitive().getAllocation().get(0).getAfter().getAllocatedTrade();
//Keep track of the trade index
int tradeIndex = 0;
//Collect the affirmation transaction id and broker key in a file
String result = "";
//For each trade...
for(Trade trade: allocatedTrades){
//Get the broker that we need to send the affirmation to
String brokerReference = trade.getExecution().getPartyRole()
.stream()
.filter(r -> r.getRole() == PartyRoleEnum.EXECUTING_ENTITY)
.map(r -> r.getPartyReference().getGlobalReference())
.collect(MoreCollectors.onlyElement());
User broker = User.getUser(brokerReference,mongoDB);
//Get the client reference for that trade
String clientReference = trade.getExecution()
.getPartyRole()
.stream()
.filter(r-> r.getRole()==PartyRoleEnum.CLIENT)
.map(r->r.getPartyReference().getGlobalReference())
.collect(MoreCollectors.onlyElement());
// Load the client user, with algorand passphrase
User user = User.getUser(clientReference,mongoDB);
String algorandPassphrase = user.algorandPassphrase;
// Confirm the user has received the global key of the allocation from the broker
String receivedKey = AlgorandUtils.readEventTransaction( algorandPassphrase, allocationEvent.getMeta().getGlobalKey());
assert receivedKey.equals(allocationEvent.getMeta().getGlobalKey()) : "Have not received allocation event from broker";
//Compute the affirmation
Affirmation affirmation = new AffirmImpl().doEvaluate(allocationEvent,tradeIndex).build();
//Send the affirmation to the broker
Transaction transaction =
user.sendAffirmationTransaction(broker, affirmation);
result += transaction.getTx() + "," + brokerReference +"\n";
tradeIndex = tradeIndex + 1;
}
try{
ReadAndWrite.writeFile("./Files/AffirmationOutputs.txt", result);
}
catch(Exception e){
e.printStackTrace();
}
}
}
Download Details:
Author: algorand
Source Code: https://github.com/algorand/DerivhackExamples
License: MIT License
Getting Started With Reach
Reach is designed to work on POSIX systems with make, Docker, and Docker Compose installed. The best way to install Docker on Mac and Windows is with Docker Desktop.
To confirm everything is installed, try to run the following three commands and see no errors:
$ make --version
$ docker --version
$ docker-compose --version
If you’re using Windows, consult the guide to using Reach on Windows.
Once you've confirmed that the Reach prerequisites are installed, choose a directory for this project such as:
$ mkdir -p ~/reach && cd ~/reach
Clone the repository using the following command:
git clone https://github.com/algorand/reach-auction.git
Navigate to the project folder:
cd reach-auction
Next, download Reach by running
$ curl https://docs.reach.sh/reach -o reach ; chmod +x reach
Confirm the download worked by running
$ ./reach version
Since Reach is Dockerized, when first used, the images it uses need to be downloaded. This will happen automatically when used for the first time, but can be done manually now by running
$ ./reach update
You’ll know that everything is in order if you can run
$ ./reach compile --help
To determine which version is installed, run:
$ ./reach hashes
Output should look similar to:
reach: fb449c94
reach-cli: fb449c94
react-runner: fb449c94
rpc-server: fb449c94
runner: fb449c94
devnet-algo: fb449c94
devnet-cfx: fb449c94
devnet-eth: fb449c94
All of the hashes listed should be the same. Visit the #releases channel on the Reach Discord Server to compare them against the current hashes.
More information: Detailed Reach install instructions can be found in the docs.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/reach-auction
License:
A Go implementation of Algorand’s subset-sum hash function. The library exports the subset-sum hash function via a hash.Hash interface.
go get github.com/algorand/go-sumhash
Alternatively the same can be achieved if you use import in a package:
import "github.com/algorand/go-sumhash"
and run go get without parameters.
Construct a sumhash instance with block size of 512.
package main
import (
"fmt"
"github.com/algorand/go-sumhash"
)
func main() {
h := sumhash.New512(nil)
input := []byte("sumhash input")
_, _ = h.Write(input)
sum := h.Sum(nil)
fmt.Printf("subset sum hash value: %X", sum)
}
To run the tests:
go test ./...
The specification of the function as well as the security parameters can be found here
Download Details:
Author: algorand
Source Code: https://github.com/algorand/go-sumhash
License: MIT License
Algorand's subset-sum hash function implementation in C.
git clone https://github.com/algorand/c-sumhash
make
The make command builds the library and runs the tests. The output can be found in the build directory:
./build/libsumhash.a
#include <stdio.h>
#include <string.h>
#include "include/sumhash512.h"

int main() {
    char* input = "Algorand";
    sumhash512_state hash;
    sumhash512_init(&hash);
    sumhash512_update(&hash, (uint8_t*)input, strlen(input));
    uint8_t output[SUMHASH512_DIGEST_SIZE];
    sumhash512_final(&hash, output);
    // Print the digest as hex.
    for (int i = 0; i < SUMHASH512_DIGEST_SIZE; i++) {
        printf("%02x", output[i]);
    }
    printf("\n");
    return 0;
}
Simple API usage:
#include <stdio.h>
#include <string.h>
#include "include/sumhash512.h"
int main() {
char* input = "Algorand";
uint8_t output [SUMHASH512_DIGEST_SIZE];
sumhash512(output, (uint8_t*)input, strlen(input));
return 0;
}
The include/sumhash512.h header contains more information about the usage of these functions.
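To compile one of the examples above against the static library, a command along these lines should work (the compiler and source file name are assumptions, not part of the repository):

cc -I. example.c ./build/libsumhash.a -o example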
The specification of the function as well as the security parameters can be found here
Download Details:
Author: algorand
Source Code: https://github.com/algorand/c-sumhash
License: MIT License
This is a fast way to create and configure an Algorand development environment with Algod and Indexer.
Docker Compose MUST be installed. Instructions.
On a Windows machine, Docker Desktop comes with the necessary tools. Please see the Windows section in getting started for more details.
Warning: Algorand Sandbox is not meant for production environments and should not be used to store secure Algorand keys. Updates may reset all the data and keys that are stored.
Use the sandbox command to interact with the Algorand Sandbox.
sandbox commands:
up [config] -> start the sandbox environment.
down -> tear down the sandbox environment.
reset -> reset the containers to their initial state.
clean -> stops and deletes containers and data directory.
test -> runs some tests to demonstrate usage.
enter [algod||indexer||indexer-db] -> enter the sandbox container.
version -> print binary versions.
copyTo <file> -> copy <file> into the algod container. Useful for offline transactions & LogicSigs plus TEAL work.
copyFrom <file> -> copy <file> from the algod container. Useful for offline transactions & LogicSigs plus TEAL work.
algorand commands:
logs -> stream algorand logs with the carpenter utility.
status -> get node status.
goal (args) -> run goal command like 'goal node status'.
tealdbg (args) -> run tealdbg command to debug program execution.
special flags for 'up' command:
-v|--verbose -> display verbose output when starting the sandbox.
-s|--skip-fast-catchup -> skip catchup when connecting to real network.
-i|--interactive -> start docker-compose in interactive mode.
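For example, a hypothetical start against TestNet with verbose output and fast catchup skipped:

./sandbox up testnet -v -s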
Sandbox creates the following API endpoints:
algod: http://localhost:4001 (API token: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa)
kmd: http://localhost:4002 (API token: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa)
indexer: http://localhost:8980
Make sure the docker daemon is running and docker-compose is installed.
Open a terminal and run:
git clone https://github.com/algorand/sandbox.git
In whatever local directory the sandbox should reside. Then:
cd sandbox
./sandbox up
This will run the sandbox shell script with the default configuration. See the Basic Configuration section for other options.

Note for Ubuntu: You may need to alias docker to sudo docker or follow the steps in https://docs.docker.com/install/linux/linux-postinstall so that a non-root user can use the docker command.
Run the test command for examples of how to interact with the environment:
./sandbox test
Note: Be sure to use the latest version of Windows 10. Older versions may not work properly.
Note: While installing the following programs, several restarts may be required for windows to recognize the new software correctly.
The installation instructions for Docker Desktop contain some of this but are repeated here.
Troubleshooting
the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'.
check that you are using the latest versions of: Docker, Git for Windows, and Windows 10.
If this does not solve the issue, open an issue including the versions of all the software used, as well as all the commands typed.
Error response from daemon: open \\.\pipe\docker_engine_linux: The system cannot find the file specified.
check that Docker is running.
Sandbox supports two primary modes of operation. By default, a private network will be created, which is only available from the local environment. There are also configurations available for the public networks which will attempt to connect to one of the long running Algorand networks and allow interaction with it.
To specify which configuration to run:
./sandbox up $CONFIG
Where $CONFIG is specified as one of the configurations in the sandbox directory.

For example, to run a dev mode network, run:
./sandbox up dev
To switch the configuration:
./sandbox down
./sandbox clean
./sandbox up $NEW_CONFIG
If no configuration is specified, the sandbox will be started with the release configuration, which is a private network. The other private network configurations are those not suffixed with net, namely beta, dev and nightly.
The private network environment creates and funds a number of accounts in the algod container's local kmd, ready to use for testing transactions. These accounts can be reviewed using ./sandbox goal account list.
Private networks also include an Indexer service configured to synchronize against the private network. Because it doesn't require catching up to one of the long running networks, it also starts very quickly.
The dev configuration runs a private network in dev mode. In this mode, every transaction sent to the node automatically generates a new block, rather than waiting for a new round in real time. This is extremely useful for fast e2e testing of an application.
The mainnet, testnet, betanet, and devnet configurations configure the sandbox to connect to one of those long running networks. Once started, it will automatically attempt to catch up to the latest round. Catchup tends to take a while, and a progress bar will be displayed to illustrate the progress.

Due to technical limitations, these configurations do not contain preconfigured accounts that may immediately transact, and Indexer is not available. A new wallet and accounts may be created or imported at will, using the goal wallet new command to create a wallet and the goal account import or goal account new commands to populate it.
Note: A newly created account will not be funded and won't be able to submit transactions until it is. If a testnet configuration is used, please visit the TestNet Dispenser to fund the newly created account.
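A sketch of that flow (the wallet name and mnemonic are placeholders):

./sandbox goal wallet new mywallet
./sandbox goal account new
./sandbox goal account import -m "<25-word mnemonic>"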
The sandbox environment is completely configured using the config.* files in the root of this repository. For example, the default configuration for config.nightly is:
export ALGOD_CHANNEL="nightly"
export ALGOD_URL=""
export ALGOD_BRANCH=""
export ALGOD_SHA=""
export ALGOD_BOOTSTRAP_URL=""
export ALGOD_GENESIS_FILE=""
export INDEXER_URL="https://github.com/algorand/indexer"
export INDEXER_BRANCH="develop"
export INDEXER_SHA=""
export INDEXER_DISABLED=""
Indexer is always built from source since it can be done quickly. For most configurations, algod will be installed using our standard release channels, but building from source is also available by setting the git URL, Branch and optionally a specific SHA commit hash.
The up command looks for the config extension based on the argument provided. With a custom configuration pointing at a fork, the sandbox will start using the fork:
export ALGOD_CHANNEL=""
export ALGOD_URL="https://github.com/<user>/go-algorand"
export ALGOD_BRANCH="my-test-branch"
export ALGOD_SHA=""
export ALGOD_BOOTSTRAP_URL=""
export ALGOD_GENESIS_FILE=""
export INDEXER_URL="https://github.com/<user>/go-algorand"
export INDEXER_BRANCH="develop"
export INDEXER_SHA=""
export INDEXER_DISABLED=""
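If the file above were saved as, say, config.fork (name chosen purely for illustration), the sandbox would be started against the fork with:

./sandbox up fork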
Some Algorand commands require using a file for the input, for example when working with TEAL programs. In some other cases, like working with logic signatures or transactions offline, the output from a LogicSig or transaction may be needed.

To stage a file use the copyTo command. The file will be placed in the algod data directory, which is where sandbox executes goal. This means the files can be used without specifying their full path.
To copy a file from sandbox (the algod instance) use the copyFrom command. The file will be copied to the sandbox directory on the host filesystem.

These commands will stage two TEAL programs, then use them in a goal command:
~$ ./sandbox copyTo approval.teal
~$ ./sandbox copyTo clear.teal
~$ ./sandbox goal app create --approval-prog approval.teal --clear-prog clear.teal --creator YOUR_ACCOUNT --global-byteslices 1 --global-ints 1 --local-byteslices 1 --local-ints 1
These commands will create and copy a signed logic transaction file, created by goal, to be sent or communicated off the chain (e.g. by email or as a QR code) and submitted elsewhere:
~$ ./sandbox goal clerk send -f <source-account> -t <destination-account> --fee 1000 -a 1000000 -o "unsigned.txn"
~$ ./sandbox goal clerk sign --infile unsigned.txn --outfile signed.txn
~$ ./sandbox copyFrom "signed.txn"
If something goes wrong, check the sandbox.log file for details.
For detailed information on how to debug smart contracts and use the tealdbg CLI, please consult the Algorand Developer Portal :: Smart Contract Debugging.
The Algorand smart contract debugging process uses the tealdbg command line of the algod instance (the algod container in sandbox).

Note: Always use tealdbg with the --listen 0.0.0.0 or --listen [IP ADDRESS] flag if access to tealdbg is needed from outside the algod docker container!
Debugging smart contract with Chrome Developer Tools (CDT): ~$ ./sandbox tealdbg debug ${TEAL_PROGRAM} -f cdt -d dryrun.json
Debugging smart contract with Web Interface (primal web UI) ~$ ./sandbox tealdbg debug ${TEAL_PROGRAM} -f web -d dryrun.json
The debugging endpoint port (default 9392) is forwarded directly to the host machine and can be used directly by Chrome Dev Tools for debugging Algorand TEAL smart contracts (go to chrome://inspect/ and configure port 9392 before using).
Note: If a different port is needed than the default, it may be changed by running tealdbg --port YOUR_PORT, then modifying the docker-compose.yml file to change all occurrences of the mapped 9392 port to the desired one.
Remote - Containers Extension

For those looking to develop or extend algod or indexer, it's highly recommended to test and debug using a realistic environment. Being able to interactively debug code with breakpoints and introspect the stack as the Algorand daemon communicates with a live network is quite useful. Here are steps that you can take if you want to run an interactive debugger with an indexer running on the sandbox. Analogous instructions work for algod as well.

Before starting, make sure you have VS Code and have installed the Remote - Containers Extension.

1. Add privileged: true under the indexer: service in docker-compose.yml.
2. Run ./sandbox up YOUR_CONFIG and wait for it to be fully up and running (if you have run the sandbox before, run ./sandbox clean first).
3. Sanity check with ./sandbox test.
4. In VS Code, run the command Remote - Containers: Attach to Running Container.
5. A container named /algorand-sandbox-indexer should pop up; choose it.
6. Open the source directory; /opt/indexer in the case of indexer is usually your best choice.
7. VS Code will detect a go based project and suggest various extensions to add into the container environment. You should do this.
8. Open a file (e.g. api/handlers.go) and add a breakpoint as you usually would.
9. Find the daemon process with ps | egrep "daemon|PID". Note the resulting PID.
10. Press F5. It should give you the option to attach to a process and generate a launch.json with processId: 0 for you.
11. Modify the launch.json with the correct processId. An example launch.json is provided below.
12. Trigger the breakpoint, e.g. with the curl example below.
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Attach to Process",
"type": "go",
"request": "attach",
"mode": "local",
"processId": YOUR_PID_HERE
}
]
}
Trigger the breakpoint with a curl command:
~$ curl "localhost:8980/v2/accounts"
Download Details:
Author: algorand
Source Code: https://github.com/algorand/sandbox
License:
Open protocol for connecting Wallets to Dapps - https://walletconnect.org
SDK | Description |
---|---|
walletconnect | SDK |

Clients | Description |
---|---|
@walletconnect/core | Core Client |
@walletconnect/client | Isomorphic Client |

Providers | Description |
---|---|
@walletconnect/ethereum-provider | Ethereum Provider |
@walletconnect/truffle-provider | Truffle Provider |
@walletconnect/web3-provider | Web3 Provider |
@walletconnect/web3-subprovider | Web3 Subprovider |

Helpers | Description |
---|---|
@walletconnect/browser-utils | Browser Utilities |
@walletconnect/http-connection | HTTP Connection |
@walletconnect/iso-crypto | Isomorphic Crypto |
@walletconnect/qrcode-modal | QR Code Modal |
@walletconnect/react-native-dapp | React-Native Dapp |
@walletconnect/signer-connection | Signer Connection |
@walletconnect/socket-transport | Socket Transport |
@walletconnect/types | Typescript Types |
@walletconnect/utils | Utility Library |
## Quick Start
Find quick start examples for your platform at https://docs.walletconnect.org/quick-start
Read more about WalletConnect protocol and how to use our Clients at https://docs.walletconnect.org
Download Details:
Author: algorand
Source Code: https://github.com/algorand/walletconnect-monorepo
License: Apache-2.0 License
#algorand #blockchain #cryptocurrency #typescript #javascript
EXPERIMENTAL WIP
There is no guarantee to the API of this repository. It is subject to change without a tagged release.
This repository is meant to contain PyTEAL utility methods common in many Smart Contract programs.
- InlineAssembly - Can be used to inject TEAL source directly into a PyTEAL program
- accumulate
- iterate - Provides a convenience method for calling a method n times
- odd - Returns 1 if x is odd
- even - Returns 1 if x is even
- factorial - Returns x! = x * (x-1) * (x-2) * ...
- wide_factorial - Returns x! = x * (x-1) * (x-2) * ...
- wide_power
- exponential - Approximates e ** x for n iterations
- log2
- log10 - Returns log base 10 of the integer passed
- ln - Returns the natural log of x, the integer passed
- pow10 - Returns 10 ** x
- max - Returns the maximum of 2 integers
- min - Returns the minimum of 2 integers
- div_ceil - Returns the result of division rounded up to the next integer
- saturation - Returns an output that is the value of n bounded to the upper and lower saturation values
- GlobalBlob - Class holding static methods to work with the global storage of an application as a binary large object
- LocalBlob - Class holding static methods to work with the local storage of an application as a binary large object
- global_must_get - Returns the result of a global storage MaybeValue if it exists, else Assert and fail the program
- global_get_else - Returns the result of a global storage MaybeValue if it exists, else return a default value
- local_must_get - Returns the result of a local storage MaybeValue if it exists, else Assert and fail the program
- local_get_else - Returns the result of a local storage MaybeValue if it exists, else return a default value
- atoi - Converts a byte string representing a number to the integer value it represents
- itoa - Converts an integer to the ascii byte string it represents
- witoa - Converts a byte string interpreted as an integer to the ascii byte string it represents
- head - Gets the first byte from a bytestring, returned as bytes
- tail - Returns the string with the first character removed
- suffix - Returns the last n bytes of a given byte string
- prefix - Returns the first n bytes of a given byte string
- rest
- encode_uvarint - Returns the uvarint encoding of an integer
- assert_common_checks - Calls all txn checker assert methods
- assert_min_fee - Checks that the fee for a transaction is exactly equal to the current min fee
- assert_no_rekey - Checks that the rekey_to field is empty, Assert if it is set
- assert_no_close_to - Checks that the close_remainder_to field is empty, Assert if it is set
- assert_no_asset_close_to - Checks that the asset_close_to field is empty, Assert if it is set
- Common inner transaction operations: pay, axfer
As a PyTEAL user, your contribution is extremely valuable to grow PyTEAL utilities!
Please follow the contribution guide!
- Start a sandbox (dev mode recommended): ./sandbox up dev
- Clone this repository: git clone https://github.com/algorand/pyteal-utils.git
- poetry install
- poetry shell
- pre-commit install
Download Details:
Author: algorand
Source Code: https://github.com/algorand/pyteal-utils
License: MIT License
algorand-sdk-testing
Testing files for Algorand SDKs
The files in this repository are used for testing the different Algorand SDK implementations. By writing the tests once and sharing them amongst the SDKs we are able to increase the coverage of our tests, and avoid rewriting similar tests over and over again. In addition to test cases, we have a standard test environment which is managed by docker.
To define tests we use cucumber, and feature files written with gherkin syntax. Each SDK is responsible for finding a framework which can use these files. There are implementations for many popular programming languages.
We have different feature files for unit and integration tests. The unit tests should be run as a normal part of development to quickly identify bugs and regressions. Integration tests on the other hand take much longer to run and require a special test environment. The test environment is made up of multiple services and managed with docker compose.
These reside in the unit features directory
tag | description |
---|---|
@unit | Select all unit tests. |
@unit.abijson | ABI types and method encoding/decoding unit tests. |
@unit.algod | Algod REST API unit tests. |
@unit.applications | Application endpoints added to Algod and Indexer. |
@unit.atomic_transaction_composer | ABI / atomic transaction construction unit tests. |
@unit.dryrun | Dryrun endpoint added to Algod. |
@unit.feetest | Fee transaction encoding tests. |
@unit.indexer | Indexer REST API unit tests. |
@unit.indexer.logs | Application logs endpoints added to Indexer. |
@unit.indexer.rekey | Rekey endpoints added to Algod and Indexer |
@unit.offline | The first unit tests we wrote for cucumber. |
@unit.rekey | Rekey Transaction golden tests. |
@unit.responses | REST Client Response serialization tests. |
@unit.responses.231 | REST Client Unit Tests for Indexer 2.3.1+ |
@unit.responses.genesis | REST Client Unit Tests for GetGenesis endpoint |
@unit.responses.messagepack | REST Client MessagePack Unit Tests |
@unit.responses.messagepack.231 | REST Client MessagePack Unit Tests for Indexer 2.3.1+ |
@unit.tealsign | Test TEAL signature utilities. |
@unit.transactions | Transaction encoding tests. |
@unit.transactions.keyreg | Keyreg encoding tests. |
@unit.transactions.payment | Payment encoding tests. |
These reside in the integration features directory
tag | description |
---|---|
@abi | Test the Application Binary Interface (ABI) with atomic txn composition and execution. |
@algod | General tests against algod REST endpoints. |
@application.evaldelta | Test that eval delta fields are included in algod and indexer. |
@applications.verified | Submit all types of application transactions and verify account state. |
@assets | Submit all types of asset transactions. |
@auction | Encode and decode bids for an auction. |
@c2c | Test Contract to Contract invocations and ingestion. |
@compile | Test the algod compile endpoint. |
@dryrun | Test the algod dryrun endpoint. |
@dryrun.testing | Test the testing harness that relies on dryrun endpoint. Python only. |
@indexer | Test all types of indexer queries and parameters against a static dataset. |
@indexer.231 | REST Client Integration Tests for Indexer 2.3.1+ |
@indexer.applications | Endpoints and parameters added to support applications. |
@kmd | Test the kmd REST endpoints. |
@rekey | Test the rekeying transactions. |
@send | Test the ability to submit transactions to algod. |
However, a few are not fully supported:
tag | SDK's which implement |
---|---|
@application.evaldelta | Java only |
@dryrun.testing | Python only |
@indexer.rekey | missing from Python and JS |
@unit.responses.genesis | missing from Python and Java |
@unit.responses.messagepack | missing from Python |
@unit.responses.messagepack.231 | missing from Python and JS |
@unit.transactions.keyreg | go only |
Full featured Algorand SDKs have 6 major components. Depending on the compatibility level, certain components may be missing. The components include:
The most basic functionality includes the REST clients for communicating with algod and indexer. These interfaces are defined by OpenAPI specifications:
One of the basic features of an Algorand SDK is the ability to construct all types of Algorand transactions. This includes simple transactions of all types and the tooling to configure things like leases and atomic transfers (group transactions).
In order to ensure transactions are compact and can hash consistently, there are some special encoding requirements. The SDKs must provide utilities to work with these encodings. Algorand uses MessagePack as a compact binary-encoded JSON alternative, and fields with default values are excluded from the encoded object. Additionally, to ensure consistent hashes, the fields must be alphabetized.
All things related to crypto to make it easier for developers to work with the blockchain. This includes standard things like ED25519 signing, up through Algorand specific LogicSig and MultiSig utilities. There are also some convenience methods for converting Mnemonics.
Everything related to working with TEAL. This includes some utilities for parsing and validating compiled TEAL programs.
Each SDK has a number of unit tests specific to that particular SDK. The details of SDK-specific unit tests are up to the developer's discretion. There are also a large number of cucumber integration tests stored in this repository which cover various unit-style tests and many integration tests. To assist with working in this environment each SDK must provide tooling to download and install the cucumber files, a Dockerfile which configures an environment suitable for building the SDK and running the tests, and 3 makefile targets: make unit, make integration, and make docker-test. The rest of this document relates to details about the Cucumber tests.
Tests consist of two things -- the feature files defined in this repository and some code snippets that map the text in the feature files to specific functions. The implementation process will vary by programming language and isn't covered here; refer to the relevant documentation for setting up a new SDK.
We use tags, and a simple directory structure, to organize our feature files. All cucumber implementations should allow specifying one or more tags to include, or exclude, when running tests.
All unit tests should be tagged with @unit so that unit tests can be run together during development for quick regression tests. For example, to run unit tests with java a tag filter is provided as follows:
~$ mvn test -Dcucumber.filter.tags="@unit"
This command will vary by cucumber implementation, the specific framework documentation should be referenced for details.
When adding a new test to an existing feature file, or a new feature file, a new tag should be created which describes that test. For example, the templates feature file has a corresponding @templates tag. By adding a new tag for each feature we are able to add new tests to this repository without breaking the SDKs.
In order for this to work, each SDK maintains a whitelist of tags which have been implemented.
If a new feature file is created, the tag would go at the top of the file. If a new scenario is added the tag would go right above the scenario.
If possible, please run a formatter on the modified file. There are several, including one built into the VSCode Cucumber/Gherkin plugin.
The code snippets (or step definitions) live in the SDKs. Each SDK has a script which is able to clone this repository, and copy the tests into the correct locations.
When a test fails, the cucumber libraries we use print the code snippets which should be included in the SDK test code. The code snippets are empty functions which should be implemented according to the tests requirements. In many cases some state needs to be modified and stored outside of the functions in order to implement the test. Exactly how this state is managed is up to the developer. Refer to the cucumber documentation for tips about managing state. There may be better documentation in the specific cucumber language library you're using.
The SDKs come with a Makefile to coordinate running the cucumber test suites. There are 3 main targets: make unit, make integration, and make docker-test.

At a high level, the docker-test target is required to:

- clone algorand-sdk-testing
- copy the features directory into the SDK
- start the test environment with ./scripts/up.sh
- run the SDK testing container with --network host, which runs the cucumber test suite

This will vary by SDK. By calling up.sh the environment is available to the integration tests, and tests can be run locally with an IDE or debugger. This is often significantly faster than waiting for the entire test suite to run.
Some of the tests are stateful and will require restarting the environment before re-running the test.
Once the test environment is running you can use make unit and make integration to run tests.
Docker compose is used to manage several containers which work together to provide the test environment. Currently that includes algod, kmd, indexer and a postgres database. The services run on specific ports with specific API tokens. Refer to docker-compose.yml and the docker directory for how this is configured.
There are a number of scripts to help with managing the test environment. The names should help you understand what they do, but to get started simply run up.sh to bring up a new environment, and down.sh to shut it down.
When starting the environment we avoid using the cache intentionally. It uses the go-algorand nightly build, and we want to ensure that the containers are always running against the most recent nightly build. In the future these scripts should be improved, but for now we completely avoid using cached docker containers to ensure that we don't accidentally run against a stale environment.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/algorand-sdk-testing
License: MIT License
#algorand #blockchain #cryptocurrency #testing #gherkin
Run make load to build and load the application onto the device.

After installing and running the application, you can run cli/sign.py. Running without any arguments should print the address corresponding to the key on the Ledger device. To sign a transaction, run cli/sign.py input.tx output.tx; this will ask the Ledger device to sign the transaction from input.tx, and put the resulting signed transaction into output.tx. You can use goal clerk send .. -o input.tx to construct an input.tx file, and then use goal clerk rawsend to broadcast the output.tx file to the Algorand network.
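Putting those pieces together, a hypothetical end-to-end signing flow (accounts and amount are placeholders):

goal clerk send -f <from-account> -t <to-account> -a 100000 -o input.tx
cli/sign.py input.tx output.tx
goal clerk rawsend -f output.tx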
Development notes

- Install host-side tooling:
sudo apt install python-hid python-hidapi python3-hid python3-hidapi
sudo pip install ledgerblue
- Set up udev rules in /etc/udev/rules.d based on these notes.
- Set up a custom CA for the device:
python -m ledgerblue.genCAPair
python -m ledgerblue.setupCustomCA --targetId 0x31100004 --public 040db5032de3dc9ac155959bca5e163d1ab35789192495c99b39dceb82dafb5ffad14ce7fd32d739388b6017c606f26028fdfa3e7000fa8c9793740a7aff839587 --name dev
export SCP_PRIVKEY=7f189771ea6ee2808e4a66e6b74600b7eadb720a7ccf06bfe2ac0f67c7103250
- To load debugging (printf-enabled) firmware:
python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName blup_0.9_misc_m1.hex --nocrc
python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName mcu_1.7-printf_over_0.9.hex --reverse --nocrc
- To watch the device's log output: ./usbtool/usbtool -v 0x2c97 log
- Edit the Makefile to enable PRINTF (and edit it back for production to disable PRINTF).
- To go back to release firmware:
python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName blup_0.9_misc_m1.hex --nocrc
python -m ledgerblue.loadMCU --targetId 0x01000001 --fileName mcu_1.7_over_0.9.hex --reverse --nocrc
- Pass debug=True to getDongle() in cli/sign.py.
- Icons can be prepared with convert -resize 12 -extent 16x16 -gravity center -colors 2 ...
- Use volatile for N_ variables; this is not correctly done in the default examples.
- The data to sign is passed to cx_eddsa_sign() despite it being the "hash".
- Casting (int)-2 to (char) and then back to (int) produces 254; the base32 library broke as a result.
- Check debug/app.map to make sure there's nothing too large in SRAM (look between the _bss and _estack symbols), and check for large stack use in functions (look for large sub sp statements in debug/app.asm).
Author: algorand
Source Code: https://github.com/algorand/ledger-app-algorand
License: MIT License
AlgoSDK is a Java library for communicating and interacting with the Algorand network. It contains a REST client for accessing algod instances over the web, and also exposes functionality for generating keypairs, mnemonics, creating transactions, signing transactions, and serializing data across the network.

Requirements: Java 7+ and Android minSdkVersion 16+.
Maven:
<dependency>
<groupId>com.algorand</groupId>
<artifactId>algosdk</artifactId>
<version>1.13.0-beta-1</version>
</dependency>
This program connects to a running sandbox private network, creates a payment transaction between two of the accounts, signs it with kmd, and reads the result from Indexer.
import com.algorand.algosdk.account.Account;
import com.algorand.algosdk.crypto.Address;
import com.algorand.algosdk.kmd.client.ApiException;
import com.algorand.algosdk.kmd.client.KmdClient;
import com.algorand.algosdk.kmd.client.api.KmdApi;
import com.algorand.algosdk.kmd.client.model.*;
import com.algorand.algosdk.transaction.SignedTransaction;
import com.algorand.algosdk.transaction.Transaction;
import com.algorand.algosdk.util.Encoder;
import com.algorand.algosdk.v2.client.common.AlgodClient;
import com.algorand.algosdk.v2.client.common.IndexerClient;
import com.algorand.algosdk.v2.client.common.Response;
import com.algorand.algosdk.v2.client.model.PendingTransactionResponse;
import com.algorand.algosdk.v2.client.model.PostTransactionsResponse;
import com.algorand.algosdk.v2.client.model.TransactionsResponse;
import java.io.IOException;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class Main {
private static String token = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
private static KmdApi kmd = null;
public static void main(String[] args) throws Exception {
// Initialize algod/indexer v2 clients.
AlgodClient algod = new AlgodClient("http://localhost", 4001, token);
IndexerClient indexer = new IndexerClient("http://localhost", 8980);
// Initialize KMD v1 client
KmdClient kmdClient = new KmdClient();
kmdClient.setBasePath("http://localhost:4002");
kmdClient.setApiKey(token);
kmd = new KmdApi(kmdClient);
// Get accounts from sandbox.
String walletHandle = getDefaultWalletHandle();
List<Address> accounts = getWalletAccounts(walletHandle);
// Create a payment transaction
Transaction tx1 = Transaction.PaymentTransactionBuilder()
.lookupParams(algod) // lookup fee, firstValid, lastValid
.sender(accounts.get(0))
.receiver(accounts.get(1))
.amount(1000000)
.noteUTF8("test transaction!")
.build();
// Sign with KMD
SignedTransaction stx1a = signTransactionWithKMD(tx1, walletHandle);
byte[] stx1aBytes = Encoder.encodeToMsgPack(stx1a);
// Sign with private key
byte[] privateKey = lookupPrivateKey(accounts.get(0), walletHandle);
Account account = new Account(privateKey);
SignedTransaction stx1b = account.signTransaction(tx1);
byte[] stx1bBytes = Encoder.encodeToMsgPack(stx1b);
// KMD and signing directly should both be the same.
if (!Arrays.equals(stx1aBytes, stx1bBytes)) {
throw new RuntimeException("KMD disagrees with the manual signature!");
}
// Send transaction
Response<PostTransactionsResponse> post = algod.RawTransaction().rawtxn(stx1aBytes).execute();
if (!post.isSuccessful()) {
throw new RuntimeException("Failed to post transaction");
}
// Wait for confirmation
boolean done = false;
while (!done) {
Response<PendingTransactionResponse> txInfo = algod.PendingTransactionInformation(post.body().txId).execute();
if (!txInfo.isSuccessful()) {
throw new RuntimeException("Failed to check on tx progress");
}
if (txInfo.body().confirmedRound != null) {
done = true;
}
}
// Wait for indexer to index the round.
Thread.sleep(5000);
// Query indexer for the transaction
Response<TransactionsResponse> transactions = indexer.searchForTransactions()
.txid(post.body().txId)
.execute();
if (!transactions.isSuccessful()) {
throw new RuntimeException("Failed to lookup transaction");
}
System.out.println("Transaction received! \n" + transactions.toString());
}
public static SignedTransaction signTransactionWithKMD(Transaction tx, String walletHandle) throws IOException, ApiException {
SignTransactionRequest req = new SignTransactionRequest();
req.transaction(Encoder.encodeToMsgPack(tx));
req.setWalletHandleToken(walletHandle);
req.setWalletPassword("");
byte[] stxBytes = kmd.signTransaction(req).getSignedTransaction();
return Encoder.decodeFromMsgPack(stxBytes, SignedTransaction.class);
}
public static byte[] lookupPrivateKey(Address addr, String walletHandle) throws ApiException {
ExportKeyRequest req = new ExportKeyRequest();
req.setAddress(addr.toString());
req.setWalletHandleToken(walletHandle);
req.setWalletPassword("");
return kmd.exportKey(req).getPrivateKey();
}
public static String getDefaultWalletHandle() throws ApiException {
for (APIV1Wallet w : kmd.listWallets().getWallets()) {
if (w.getName().equals("unencrypted-default-wallet")) {
InitWalletHandleTokenRequest tokenreq = new InitWalletHandleTokenRequest();
tokenreq.setWalletId(w.getId());
tokenreq.setWalletPassword("");
return kmd.initWalletHandleToken(tokenreq).getWalletHandleToken();
}
}
throw new RuntimeException("Default wallet not found.");
}
public static List<Address> getWalletAccounts(String walletHandle) throws ApiException, NoSuchAlgorithmException {
List<Address> accounts = new ArrayList<>();
ListKeysRequest keysRequest = new ListKeysRequest();
keysRequest.setWalletHandleToken(walletHandle);
for (String addr : kmd.listKeysInWallet(keysRequest).getAddresses()) {
accounts.add(new Address(addr));
}
return accounts;
}
}
Javadoc can be found at https://algorand.github.io/java-algorand-sdk.
Additional resources and code samples are located at https://developer.algorand.org.
AlgoSDK depends on org.bouncycastle:bcprov-jdk15on:1.61 for Ed25519 signatures, sha512/256 digests, and deserializing X.509-encoded Ed25519 private keys. The latter is the only explicit dependency on an external crypto library; all other references are abstracted through the JCA.
When using cryptographic functionality on Java 9+, you may run into the following warning:
WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG
This is known behavior, caused by more restrictive language features in Java 9+ that Bouncy Castle does not yet support. This warning can be suppressed safely. We will monitor cryptographic packages for updates or alternative implementations.
This project uses Maven.
~$ mvn package
To run the example project, use the following command in the examples directory. Be sure to update the algod network address and API token parameters (see examples/README for more information):
~$ mvn exec:java -Dexec.mainClass="com.algorand.algosdk.example.Main" -Dexec.args="127.0.0.1:8080 ***X-Algo-API-Token***"
We use separate version targets for production and testing code in order to use JUnit 5 for tests. Some IDEs, such as IDEA, do not support this very well. To work around the issue, enable the special ide profile (e.g., mvn test -P ide) if your IDE does not support mixed target and testTarget versions. Regardless of IDE support, the tests can be run from the command line. Here, clean is used in case an incremental build was made by the IDE with Java 8.
~$ mvn clean test
There is also a special integration test environment, and shared tests. To run these use the Makefile:
~$ make docker-test
The generated pom file provides maven compatibility and deploy capabilities.
mvn clean install
mvn clean deploy -P github,default
mvn clean site -P github,default # for javadoc
mvn clean deploy -P release,default
Significant work has gone into ensuring Android compatibility (in particular for minSdkVersion 16). Note that the default crypto provider on Android does not provide Ed25519 signatures, so you will need to provide your own (e.g. Bouncy Castle).
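As a minimal sketch (assuming the bcprov artifact shown above is on the classpath; the class and method names here are hypothetical, not part of the SDK), registering the full Bouncy Castle provider might look like:
import java.security.Security;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
public class CryptoSetup {
    public static void installBouncyCastle() {
        // Android ships a trimmed-down "BC" provider without Ed25519 support;
        // remove it (if present) and install the full Bouncy Castle provider.
        Security.removeProvider(BouncyCastleProvider.PROVIDER_NAME);
        Security.insertProviderAt(new BouncyCastleProvider(), 1);
    }
}
Call installBouncyCastle() once, early in application startup (for example in Application.onCreate() on Android), before performing any signing operations.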
The classes com.algorand.algosdk.v2.client.algod.\*, com.algorand.algosdk.v2.client.indexer.\*, com.algorand.algosdk.v2.client.common.AlgodClient, and com.algorand.algosdk.v2.client.common.IndexerClient are generated from the OpenAPI specifications in algod.oas2.json and indexer.oas2.json.
The specification files can be obtained from:
A testing framework can also be generated with com.algorand.sdkutils.RunQueryMapperGenerator, and the tests run from com.algorand.sdkutils.RunAlgodV2Tests and com.algorand.sdkutils.RunIndexerTests. To actually regenerate the code, use run_generator.sh with paths to the *.oas2.json files mentioned above.
Updating the kmd REST client
The kmd REST client has not been upgraded to use the new code generation; it is still largely autogenerated by swagger-codegen (https://github.com/swagger-api/swagger-codegen).
To regenerate the clients, first check out the latest swagger-codegen from the GitHub repo. (In particular, the Homebrew version is out of date and fails to handle raw byte arrays properly.) Note that OpenAPI 2.0 doesn't support unsigned types. Luckily we don't have any uint32 types in algod, so we can do a lossless type mapping from uint64 -> int64 (Long) -> BigInteger:
curl http://localhost:8080/swagger.json | sed -e 's/uint32/int64/g' > temp.json
swagger-codegen generate -i temp.json -l java -c config.json
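To see why this mapping is lossless, note that any uint64 bit pattern survives the trip through Java's signed long as long as it is re-interpreted as unsigned when widening to BigInteger. A standalone sketch (not part of the SDK):
import java.math.BigInteger;
public class Uint64Sketch {
    public static void main(String[] args) {
        // -1L has the same bit pattern as the uint64 maximum value.
        long raw = -1L;
        BigInteger asUnsigned = new BigInteger(Long.toUnsignedString(raw));
        System.out.println(asUnsigned); // prints 18446744073709551615
    }
}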
config.json looks like:
{
"library": "okhttp-gson",
"java8": false,
"hideGenerationTimestamp": true,
"serializableModel": false,
"supportJava6": true,
"invokerPackage": "com.algorand.algosdk.{kmd or algod}.client",
"apiPackage": "com.algorand.algosdk.{kmd or algod}.client.api",
"modelPackage": "com.algorand.algosdk.{kmd or algod}.client.model"
}
Make sure you convert all uint32 types to Long types.
The generated code (as of April 2019) has one circular dependency involving client.Pair. The client package depends on client.auth, but client.auth uses client.Pair, which is in the client package. One more problem is that uint64 is not a valid format in OpenAPI 2.0; however, we need to send large integers to the algod API (kmd is fine). To resolve this, we do the following manual pass on the generated code:
- Move Pair.java into the client.lib package
- Replace Integer with BigInteger (for uint64), Long (for uint32), etc. in com.algorand.algosdk.algod and subpackages (unnecessary for kmd)
- Run an Optimize Imports operation on the generated code, to minimize dependencies

Note that msgpack-java is good at using the minimal representation.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/java-algorand-sdk
License: MIT License
1648079400
This is a general purpose OpenAPI code generator. It is currently used to completely generate the HTTP code in the Java SDK, and generate some of the HTTP code in our Golang SDK.
We currently have two HTTP endpoints, one for algod and one for indexer, so in most cases this tool is run once with each OpenAPI spec.
~$ mvn package -DskipTests
~$ java -jar target/generator-*-jar-with-dependencies.jar -h
You'll see that there are a number of subcommands:
The command line interface is defined using JCommander. See Main.java.
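As a rough illustration of the JCommander pattern (the class names below are hypothetical; the generator's real definitions live in Main.java), a subcommand with options like -s and -m is declared as an annotated class:
import com.beust.jcommander.JCommander;
import com.beust.jcommander.Parameter;
import com.beust.jcommander.Parameters;
@Parameters(commandDescription = "Generate files from templates")
class TemplateCommand {
    @Parameter(names = {"-s", "--specfile"}, description = "OpenAPI spec file")
    String specfile;
    @Parameter(names = {"-m", "--modelsOutputDir"}, description = "Directory to write model file(s)")
    String modelsOutputDir;
}
public class CliSketch {
    public static void main(String[] args) {
        TemplateCommand template = new TemplateCommand();
        JCommander jc = JCommander.newBuilder()
                .addCommand("template", template)
                .build();
        jc.parse(args);
        System.out.println("Parsed subcommand: " + jc.getParsedCommand());
    }
}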
The main code involves an OpenAPI parser / event generator and several listeners for the actual generation.
The template subcommand uses Apache Velocity as the underlying template engine. Variables, loops, and statements are all supported, so business logic can technically be implemented in the template if it's actually necessary.
There are three phases: client, query, and model. Each phase must provide two templates: one for the file generation and one to specify the filename to be used. For query and model generation, the template is executed once for each query / model. If all results should go to the same file, return the same filename twice in a row and the processing will exit early.
phase | filename | purpose |
---|---|---|
client | client.vm | Client class with functions to call each query. |
client | client_filename.vm | File to write to the client output directory. |
query | query.vm | Template to use for generating query files. |
query | query_filename.vm | File to write to the query output directory. |
model | model.vm | Template to use for generating model files. |
model | model_filename.vm | File to write to the model output directory. |
The template command will only run the templates for which an output directory is provided. So if you just want to regenerate models, only use the -m option.
-c, --clientOutputDir
Directory to write client file(s).
-m, --modelsOutputDir
Directory to write model file(s).
-q, --queryOutputDir
Directory to write query file(s).
The template subcommand accepts a --propertyFiles option. It can be provided multiple times, or as a comma separated list of files. Property files will be processed and bound to a velocity variable available to templates.
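Conceptually, this binding amounts to loading a java.util.Properties object and placing it in the Velocity context. A simplified sketch of the idea (not the generator's actual wiring; PropertyBindingSketch is a hypothetical name):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import org.apache.velocity.VelocityContext;
public class PropertyBindingSketch {
    public static VelocityContext bind(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        VelocityContext ctx = new VelocityContext();
        // Templates can then read values such as ${propFile.package}.
        ctx.put("propFile", props);
        return ctx;
    }
}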
For details on a type you can put it directly into your template. It will be serialized along with its fields for your reference. Here is a high level description of what is available:
template | variable | type | purpose |
---|---|---|---|
all | str | StringHelpers.java | Some string utilities are available. See StringHelpers.java for details. There are simple things like $str.capitalize("someData") -> SomeData , and also some more complex helpers like $str.formatDoc($query.doc, "// ") which will split the document at the word boundary nearest to 80 characters without going over, and add a prefix to each new line. |
all | order | OrderHelpers.java | Some ordering utilities available. See OrderHelpers.java for details. An example utility function is $order.propertiesWithOrdering($props, $preferred_order) , where $props is a list of properties and $preferred_order is a string list to use when ordering the properties list. |
all | propFile | Properties | The contents of all property files are available with this variable. For example if package=com.algorand.v2.algod is in the property file, the template may use ${propFile.package} . |
all | models | HashMap<StructDef, List<TypeDef>> | A list of all models. |
all | queries | List<QueryDef> | A list of all queries. |
query | q | QueryDef | The current query definition. |
model | def | StructDef | The current model definition if multiple files are being generated. |
model | props | List<TypeDef> | A list of properties for the current model. |
In the following example, we are careful to generate the algod code first because the algod models are a strict subset of the indexer models. For that reason, we are able to reuse some overlapping models from indexer in algod.
~$ java -jar generator*jar template
-s algod.oas2.json
-t go_templates
-c algodClient
-m allModels
-q algodQueries
-p common_config.properties,algod_config.properties
~$ java -jar generator*jar template
-s indexer.oas2.json
-t go_templates
-c indexerClient
-m allModels
-q indexerQueries
-p common_config.properties,indexer_config.properties
There is a test template that gives you some basic usage in the test_templates directory.
You can generate the test code in the output directory with the following commands:
~$ mkdir output
~$ java -jar target/generator-*-jar-with-dependencies.jar \
template \
-s /path/to/a/spec/file/indexer.oas2.json \
-t test_templates/ \
-m output \
-q output \
-c output \
-p test_templates/my.properties
The Golang templates are in the go_templates directory.
The Golang HTTP API is only partially generated. The hand-written parts were not totally consistent with the spec, which makes it difficult to regenerate them. Regardless, an attempt has been made. In the templates there are some macros which map "generated" values to the hand-written ones. For example, the query types have this mapping:
#macro ( queryType )
#if ( ${str.capitalize($q.name)} == "SearchForAccounts" )
SearchAccounts## The hand written client doesn't quite match the spec...
#elseif ( ${str.capitalize($q.name)} == "GetStatus" )
Status##
#elseif ( ${str.capitalize($q.name)} == "GetPendingTransactionsByAddress" )
PendingTransactionInformationByAddress##
#elseif ( ${str.capitalize($q.name)} == "GetPendingTransactions" )
PendingTransactions##
#else
${str.capitalize($q.name)}##
#end
#end
Other mappings are more specific to the language, such as the OpenAPI type to SDK type:
#macro ( toQueryType $param )##
#if ( $param.algorandFormat == "RFC3339 String" )
string##
#elseif ( $param.type == "integer" )
uint64##
#elseif ( $param.type == "string" )
string##
#elseif ( $param.type == "boolean" )
bool##
#elseif( $param.type == "binary" )
string##
#else
UNHANDLED TYPE
- ref: $!param.refType
- type: $!param.type
- array type: $!param.arrayType
- algorand format: $!param.algorandFormat
- format: $!param.format
##$unknown.type ## force a template failure because $unknown.type does not exist.
#end
#end
Because of this, we are phasing in code generation gradually by skipping some types. The skipped types are specified in the property files:
common_config.properties
model_skip=AccountParticipation,AssetParams,RawBlockJson,etc,...
algod_config.properties
query_skip=Block,BlockRaw,SendRawTransaction,SuggestedParams,etc,...
indexer_config.properties
query_skip=LookupAssetByID,LookupAccountTransactions,SearchForAssets,LookupAssetBalances,LookupAssetTransactions,LookupBlock,LookupTransactions,SearchForTransactions
The Java templates are in the java_templates directory.
These are not used yet, they are the initial experiments for the template engine. Since the Java SDK has used code generation from the beginning, we should be able to fully migrate to the template engine eventually.
In general, the automation pipeline will build and run whatever Dockerfile is found in a repository's templates directory. For instructions on how to configure the templates directory, look at the repository template directory example.
If you are trying to verify that automatic code generation works as intended, we recommend creating a testing branch from that repository and using the SKIP_PR=true environment variable to avoid creating pull requests. If all goes according to plan, generated files should be available in the container's /repo directory.
The automatic generator scripts depend on certain prerequisites that are listed in automation/REQUIREMENTS.md. Once those conditions have been satisfied, automatically generating code for external repositories should be as easy as building and running a particular SDK's templates/Dockerfile.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/generator
License:
#algorand #blockchain #cryptocurrency #java #golang #openapi
1648072020
AlgoSDK is the official JavaScript library for communicating with the Algorand network. It's designed for modern browsers and Node.js.
$ npm install algosdk
This package provides TypeScript types, but you will need TypeScript version 4.2 or higher to use them properly.
If you encounter errors in Webpack 5 or Vite projects, you will need to install extra dependencies.
Include a minified browser bundle directly in your HTML like so:
<script
src="https://unpkg.com/algosdk@v1.15.0-beta.1/dist/browser/algosdk.min.js"
integrity="sha384-wURu1H0s7z6Nj/AiP4O+0EorWZNvjiXwex7pNwtJH77x60mNs0Wm2zR37iUtHMwH"
crossorigin="anonymous"
></script>
or
<script
src="https://cdn.jsdelivr.net/npm/algosdk@v1.15.0-beta.1/dist/browser/algosdk.min.js"
integrity="sha384-wURu1H0s7z6Nj/AiP4O+0EorWZNvjiXwex7pNwtJH77x60mNs0Wm2zR37iUtHMwH"
crossorigin="anonymous"
></script>
Information about hosting the package for yourself, finding the browser bundles of previous versions, and computing the SRI hash is available here.
const token = 'Your algod API token';
const server = 'http://127.0.0.1';
const port = 8080;
const client = new algosdk.Algodv2(token, server, port);
(async () => {
console.log(await client.status().do());
})().catch((e) => {
console.log(e);
});
Documentation for this SDK is available here: https://algorand.github.io/js-algorand-sdk/. Additional resources are available on https://developer.algorand.org.
Running examples requires access to a running node. Follow the instructions in Algorand's developer resources to install a node on your computer.
As portions of the codebase are written in TypeScript, example files cannot be run directly using node. Please refer to the instructions described in the examples/README.md file for more information regarding running the examples.
To build a new version of the library, run:
npm run build
To generate the documentation website, run:
npm run docs
The static website will be located in the docs/ directory.
We have two test suites: mocha tests in this repo, and the Algorand SDK test suite from https://github.com/algorand/algorand-sdk-testing.
To run the mocha tests in Node.js, run:
npm test
To run the SDK test suite in Node.js, run:
make docker-test
The test suites can also run in browsers. To do so, set the environment variable TEST_BROWSER to one of our supported browsers. Currently we support testing in chrome and firefox. When TEST_BROWSER is set, the mocha and SDK test suites will run in that browser.
For example, to run mocha tests in Chrome:
TEST_BROWSER=chrome npm test
And to run SDK tests in Firefox:
TEST_BROWSER=firefox make docker-test
This project enforces a modified version of the Airbnb code style.
We've set up linters and formatters to help catch errors and improve the development experience.
If using the Visual Studio Code editor with the recommended extensions, ESLint errors should be highlighted in red and the Prettier extension should format code on every save.
The linters and formatters listed above should run automatically on each commit to catch errors early and save CI running time.
Download Details:
Author: algorand
Source Code: https://github.com/algorand/js-algorand-sdk
License: MIT License
#algorand #blockchain #cryptocurrency #javascript #typescript