Corey Brooks

1663318714

What is Elixir Programming Language?

Elixir is a dynamic functional programming language built on top of the Erlang BEAM virtual machine. It excels at building concurrent, fault-tolerant applications at scale.

Elixir is a dynamic, functional language for building scalable and maintainable applications.

Elixir strikes a balance between expressiveness and readability. It runs on the Erlang VM, known for creating low-latency, distributed, and fault-tolerant systems. These capabilities allow Elixir developers to be productive in several domains, such as web development, embedded software, data pipelines, and multimedia processing, across a wide range of industries.

Here is a peek:

iex> "Elixir" |> String.graphemes() |> Enum.frequencies()
%{"E" => 1, "i" => 2, "l" => 1, "r" => 1, "x" => 1}

Check our getting started guide and our learning page to begin your journey with Elixir. Or keep scrolling for an overview of the platform, language, and tools.

Platform features

Scalability

All Elixir code runs inside lightweight threads of execution (called processes) that are isolated and exchange information via messages:

current_process = self()

# Spawn an Elixir process (not an operating system one!)
spawn_link(fn ->
  send(current_process, {:msg, "hello world"})
end)

# Block until the message is received
receive do
  {:msg, contents} -> IO.puts(contents)
end

Due to their lightweight nature, it is not uncommon to have hundreds of thousands of processes running concurrently on the same machine. Isolation allows processes to be garbage collected independently, reducing system-wide pauses and using all machine resources as efficiently as possible (vertical scaling).
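
That scale is easy to verify first-hand. A minimal sketch (the process count here is arbitrary) that spawns 100,000 processes and then shuts them down:

```elixir
# Spawn 100_000 lightweight BEAM processes, each waiting for a shutdown message.
pids =
  for _ <- 1..100_000 do
    spawn(fn ->
      receive do
        :stop -> :ok
      end
    end)
  end

IO.puts(length(pids))
# Tell every process to exit.
Enum.each(pids, fn pid -> send(pid, :stop) end)
```

On most machines this runs in a fraction of a second, well within the BEAM's default process limit.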

Processes are also able to communicate with other processes running on different machines in the same network. This provides the foundation for distribution, allowing developers to coordinate work across multiple nodes (horizontal scaling).

Fault-tolerance

The unavoidable truth about software running in production is that things will go wrong. Even more so when we take networks, file systems, and other third-party resources into account.

To cope with failures, Elixir provides supervisors which describe how to restart parts of your system when things go awry, going back to a known initial state that is guaranteed to work:

children = [
  TCP.Pool,
  {TCP.Acceptor, port: 4040}
]

Supervisor.start_link(children, strategy: :one_for_one)

The combination of fault-tolerance and event-driven programming via message passing makes Elixir an excellent choice for reactive programming and robust architectures.
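
The TCP modules above are placeholders. A runnable sketch of the same idea, using an Agent as a stand-in worker, shows the supervisor restarting a crashed child with a fresh, known-good state:

```elixir
# Supervise a named Agent, kill it, and observe that the supervisor
# replaces it with a fresh process holding the initial state again.
children = [
  %{id: :counter, start: {Agent, :start_link, [fn -> 0 end, [name: :counter]]}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

old_pid = Process.whereis(:counter)
Process.exit(old_pid, :kill)   # simulate a crash
Process.sleep(100)             # give the supervisor a moment to restart it
new_pid = Process.whereis(:counter)

IO.puts(new_pid != old_pid and Process.alive?(new_pid))
```

The :one_for_one strategy restarts only the crashed child; other strategies (:one_for_all, :rest_for_one) define different restart scopes.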

Language features

Functional programming

Functional programming promotes a coding style that helps developers write code that is concise and maintainable. For example, pattern matching allows developers to easily destructure data and access its contents:

%User{name: name, age: age} = User.get("John Doe")
name #=> "John Doe"

When mixed with guards, pattern matching allows us to elegantly match and assert specific conditions for some code to execute:

def drive(%User{age: age}) when age >= 16 do
  # Code that drives a car
end

drive(User.get("John Doe"))
#=> Fails if the user is under 16

Elixir relies heavily on those features to ensure your software is working under the expected constraints. And when it is not, don't worry, supervisors have your back!

Extensibility and DSLs

Elixir has been designed to be extensible, letting developers naturally extend the language to particular domains, in order to increase their productivity.

As an example, let's write a simple test case using Elixir's test framework called ExUnit:

defmodule MathTest do
  use ExUnit.Case, async: true

  test "can add two numbers" do
    assert 1 + 1 == 2
  end
end

The async: true option allows tests to run in parallel, using as many CPU cores as possible, while the assert functionality can introspect your code, providing great reports in case of failures. Those features are built using Elixir macros, making it possible to add new constructs as if they were part of the language itself.
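
As a toy illustration of what macros make possible (a simplified sketch, not ExUnit's actual implementation), a my_assert macro can report the source code of a failing expression because it receives the expression as an AST rather than a value:

```elixir
defmodule Assertions do
  # The macro receives `expr` as a quoted expression (AST), so it can
  # render the original source text into the error message.
  defmacro my_assert(expr) do
    source = Macro.to_string(expr)

    quote do
      if unquote(expr) do
        :ok
      else
        raise "assertion failed: #{unquote(source)}"
      end
    end
  end
end

defmodule Demo do
  require Assertions

  def check, do: Assertions.my_assert(1 + 1 == 2)
end

IO.inspect(Demo.check())  # :ok
```

Had the expression been false, the raised message would contain the literal text "1 + 1 == 2", which is exactly the kind of introspection ExUnit's assert builds on.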

Tooling features

A growing ecosystem

Elixir ships with a great set of tools to ease development. Mix is a build tool that allows you to easily create projects, manage tasks, run tests and more:

$ mix new my_app
$ cd my_app
$ mix test
.

Finished in 0.04 seconds (0.04s on load, 0.00s on tests)
1 test, 0 failures

Mix is also able to manage dependencies and integrates with the Hex package manager, which performs dependency resolution, fetches remote packages, and hosts documentation for the whole ecosystem.
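
Dependencies live in the deps function of mix.exs. A hypothetical sketch (the package names and versions are illustrative only):

```elixir
# In mix.exs -- dependencies are fetched with `mix deps.get`.
defp deps do
  [
    {:jason, "~> 1.4"},                 # a package resolved via Hex
    {:my_lib, github: "my_org/my_lib"}  # or pulled straight from a Git repository
  ]
end
```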

Interactive development

Tools like IEx (Elixir's interactive shell) are able to leverage many aspects of the language and platform to provide auto-complete, debugging tools, code reloading, as well as nicely formatted documentation:

$ iex
Interactive Elixir - press Ctrl+C to exit (type h() ENTER for help)
iex> h String.trim           # Prints the documentation for function
iex> i "Hello, World"        # Prints information about the given data type
iex> break! String.trim/1    # Sets a breakpoint in the String.trim/1 function
iex> recompile               # Recompiles the current project on the fly

Erlang compatible

Elixir runs on the Erlang VM giving developers complete access to Erlang's ecosystem, used by companies like Heroku, WhatsApp, Klarna and many more to build distributed, fault-tolerant applications. An Elixir programmer can invoke any Erlang function with no runtime cost:

iex> :crypto.hash(:md5, "Using crypto from Erlang OTP")
<<192, 223, 75, 115, ...>>

#elixir #erlang

Best of Crypto

1661233020

Building eWallet Backend for the OmiseGO SDKs Using Elixir

OmiseGO eWallet Server 

OmiseGO eWallet Server is a server application in the OmiseGO eWallet Suite that allows a provider (a business or individual) to set up and run its own digital wallet services through a local ledger. In the future it will connect to a decentralized blockchain exchange to form a federated network on the OMG network, allowing transparent exchange of any currency into any other.

Getting started

The quickest way to get OmiseGO eWallet Server running on macOS and Linux is to use Docker-Compose.

Install Docker and Docker-Compose

Download OmiseGO eWallet Server's docker-compose.yml:

curl -O -sSL https://raw.githubusercontent.com/omisego/ewallet/master/docker-compose.yml

Create docker-compose.override.yml either manually or use this auto-configuration script:

curl -O -sSL https://raw.githubusercontent.com/omisego/ewallet/master/docker-gen.sh
chmod +x docker-gen.sh
./docker-gen.sh > docker-compose.override.yml

Initialize the database and start the server:

docker-compose run --rm ewallet initdb
docker-compose run --rm ewallet seed
docker-compose up -d

Encountered a problem during the installation? See the Setup Troubleshooting Guide.

For other platforms or a more advanced setup, see alternative installation below.

Alternative installation

Upgrade

See Upgrading the eWallet Server.

Commands

The Docker image entrypoint is configured to recognize most commands used during normal operations. The way to invoke these commands depends on the installation method you chose.

  • In case of Docker-Compose, use docker-compose run --rm ewallet <command>
  • In case of Docker, use docker run -it --rm omisego/ewallet <command>
  • In case of bare metal, see the bare metal installation instructions.

initdb

For example:

  • docker-compose run --rm ewallet initdb (Docker-Compose)
  • docker run -it --rm omisego/ewallet:latest initdb (Docker)

These commands create the database if it does not already exist, or upgrade it if necessary. This command is expected to be run every time you upgrade the version of the OmiseGO eWallet Suite.

seed

For example:

  • docker-compose run --rm ewallet seed (Docker-Compose)
  • docker run -it --rm omisego/ewallet:latest seed (Docker)

These commands create the initial data in the database. If seed is run without arguments, it seeds the initial data for a production environment. The seed command may also be configured to seed other kinds of data:

  • seed --sample seeds sample data suitable for evaluating OmiseGO eWallet Server.
  • seed --e2e seeds data for end-to-end testing.
  • seed --settings seeds the application settings for the OmiseGO eWallet Server.

config

For example:

  • docker-compose run --rm ewallet config <key> <value> (Docker-Compose)
  • docker run -it --rm omisego/ewallet:latest config <key> <value> (Docker)

These commands update the given configuration key (see also the settings documentation) in the database. For keys whose values contain whitespace, such as gcs_credentials, you can prevent string splitting by wrapping the value in single or double quotes, e.g. config gcs_credentials "gcs configuration".

Documentation

All documentation can be found in the docs directory. It is recommended to take a look at the documentation for the version of the OmiseGO eWallet Server you are running.

API documentation

OmiseGO eWallet Server is meant to be run by the provider, and thus API documentation is available in the OmiseGO eWallet Server itself rather than as online documentation. You may review the API documentation at the following locations in the OmiseGO eWallet Server setup.

  • /api/admin/docs.ui for Admin API, used by server apps to manage tokens, accounts, transactions, global settings, etc.
  • /api/client/docs.ui for Client API, used by client apps to create transactions on behalf of users, manage user settings, etc.

In case you want to explore the API documentation without installing the OmiseGO eWallet Server, you may use our OmiseGO eWallet Staging. Please note that OmiseGO eWallet Staging tracks the development release, and there might be API differences from the stable release.

SDKs

These are SDKs for integrating with the OmiseGO eWallet Server. For example, to integrate a loyalty point system built on OmiseGO eWallet Server into an existing system.

It is also possible to run OmiseGO eWallet Server in standalone mode without needing to integrate it into an existing system. These apps demonstrate the capabilities of the OmiseGO eWallet Server as a physical point-of-sale server and client.

Community Efforts

We are thankful to our community for creating and maintaining these wonderful works that we otherwise could not have done ourselves. If you have ported any part of the OmiseGO eWallet Server to another platform, we will be happy to list them here. Submit us a pull request.

Contributing

Contributions to the OmiseGO eWallet Server can take the form of code, bug reports, feature suggestions, or any other sort of feedback. Please learn more from our contributing guide.

Support

The OmiseGO eWallet Server team closely monitors the following channels.

  • GitHub Issues: Browse or file a report for any bugs found
  • Gitter: Discuss features and suggestions in real-time
  • Stack Overflow: Search or create a new question with the tag omisego

If you need enterprise support or hosting solutions, please get in touch with us for more details.


Download details:

Author: omgnetwork
Source code: https://github.com/omgnetwork/ewallet
License: Apache-2.0 license

#omg #blockchain #web3 #ethereum #elixir #javascript

Best of Crypto

1661158920

ExPlasma: Elixir library for OMG Plasma Contracts Transaction Format

ExPlasma

ExPlasma is an Elixir library for encoding, decoding and validating transactions used for the OMG Network Plasma contracts.

Installation

The package can be installed by adding ex_plasma to your list of dependencies in mix.exs:

def deps do
  [
    {:ex_plasma, "~> 0.2.0"}
  ]
end

You will also need to specify some configurations in your config/config.exs:

config :ex_plasma,
  eip_712_domain: %{
    name: "OMG Network",
    salt: "0xfad5c7f626d80f9256ef01929f3beb96e058b8b4b0e3fe52d84f054c0e2a7a83",
    verifying_contract: "0xd17e1233a03affb9092d5109179b43d6a8828607",
    version: "1"
  }

Setup

ExPlasma requires Rust to be installed because it uses Rust NIFs for keccak hash and secp256k1.

  1. Clone the repo: git clone git@github.com:omgnetwork/ex_plasma.git
  2. Run mix compile in your terminal.
  3. If there are any unavailable dependencies, run mix deps.get.

Usage

To build a transaction use ExPlasma.Builder module:

{:ok, txn} =
  ExPlasma.payment_v1()
  |> ExPlasma.Builder.new()
  |> ExPlasma.Builder.add_input(blknum: 1, txindex: 0, oindex: 0)
  |> ExPlasma.Builder.add_output(output_type: 1, output_data: %{output_guard: <<1::160>>, token: <<0::160>>, amount: 1})
  |> ExPlasma.Builder.sign(["0x79298b0292bbfa9b15705c56b6133201c62b798f102d7d096d31d7637f9b2382"])
{:ok,
 %ExPlasma.Transaction{
   inputs: [
     %ExPlasma.Output{
       output_data: nil,
       output_id: %{blknum: 1, oindex: 0, txindex: 0},
       output_type: nil
     }
   ],
   metadata: <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0>>,
   nonce: nil,
   outputs: [
     %ExPlasma.Output{
       output_data: %{
         amount: 1,
         output_guard: <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 1>>,
         token: <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0>>
       },
       output_id: nil,
       output_type: 1
     }
   ],
   sigs: [
     <<236, 177, 165, 5, 109, 208, 210, 116, 68, 176, 199, 17, 168, 29, 30, 198,
       77, 45, 233, 147, 149, 38, 93, 136, 24, 98, 53, 218, 52, 177, 200, 127,
       26, 6, 138, 17, 36, 52, 97, 152, 240, 222, ...>>
   ],
   tx_data: 0,
   tx_type: 1,
   witnesses: []
}}

You can encode a transaction using ExPlasma.encode/2:

{:ok, rlp} = ExPlasma.encode(txn, signed: false)
{:ok,
 <<248, 116, 1, 225, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 59, 154, 202, 0, 238, 237, 1, 235, 148, 0, 0,
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 148, 0, 0, 0, 0, 0, 0,
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 128, 160, 0, 0, 0, 0, 0, 0, 0,
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0>>}

You can decode a transaction using ExPlasma.decode/2:

{:ok, txn} = ExPlasma.decode(rlp, signed: false)
{:ok,
 %ExPlasma.Transaction{
   inputs: [
     %ExPlasma.Output{
       output_data: nil,
       output_id: %{blknum: 1, oindex: 0, position: 1000000000, txindex: 0},
       output_type: nil
     }
   ],
   metadata: <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0>>,
   nonce: nil,
   outputs: [
     %ExPlasma.Output{
       output_data: %{
         amount: 1,
         output_guard: <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
           0, 1>>,
         token: <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0>>
       },
       output_id: nil,
       output_type: 1
     }
   ],
   sigs: [],
   tx_data: 0,
   tx_type: 1,
   witnesses: []
 }}

You can validate a transaction using ExPlasma.validate/1:

ExPlasma.validate(txn)

View the documentation

Testing

You can run the tests by running:

mix test
mix credo
mix dialyzer

This will load up Ganache and deploy the Plasma contracts.

Conformance test

To ensure we can encode/decode according to the contracts, we have a separate suite of conformance tests that loads up mock contracts to compare encoding results. You can run the test by:

make up-mocks
mix test --only conformance

This will spin up Ganache and deploy the mock contracts.

Contributing

  1. Fork it!
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Download details:

Author: omgnetwork
Source code: https://github.com/omgnetwork/ex_plasma
License: Apache-2.0 license

#omg #blockchain #web3 #ethereum #elixir 

Best of Crypto

1661147820

ExULID: Universally Unique Lexicographically Sortable Identifier (ULID) in Elixir

ExULID

Universally Unique Lexicographically Sortable Identifier (ULID) in Elixir. Implemented according to ulid/spec.

Why ULID?

UUID can be suboptimal for many use cases because:

  • It isn't the most character efficient way of encoding 128 bits of randomness
  • UUID v1/v2 is impractical in many environments, as it requires access to a unique, stable MAC address
  • UUID v3/v5 requires a unique seed and produces randomly distributed IDs, which can cause fragmentation in many data structures
  • UUID v4 provides no other information than randomness which can cause fragmentation in many data structures

Instead, herein is proposed ULID:

  • 128-bit compatibility with UUID
  • 1.21e+24 unique ULIDs per millisecond
  • Lexicographically sortable!
  • Canonically encoded as a 26 character string, as opposed to the 36 character UUID
  • Uses Crockford's base32 for better efficiency and readability (5 bits per character)
  • Case insensitive
  • No special characters (URL safe)
  • Monotonic sort order (correctly detects and handles the same millisecond)

Goodies that come with this library

  • It uses binary operations (so it's super fast!)
  • It can decode the timestamp back from the ULID
  • It includes tests from other languages' implementations, ensuring the consistency and correctness of the ULIDs produced.

Installation

Add ExULID as a dependency in your project's mix.exs:

def deps do
  [
    {:ex_ulid, "~> 0.1.0"}
  ]
end

Then run mix deps.get to resolve and install it.

Usage

Generate a ULID with the current time:

ExULID.ULID.generate()
#=> "01C9GJZZ3D530PE8Q0ZYV5HJ9K"

Generate a ULID for a specific time:

ExULID.ULID.generate(1469918176385)
#=> "01ARYZ6S41QJQECH4KPG6SEF3Y"

Decode the ULID back to get the timestamp and randomness:

ExULID.ULID.decode("01ARYZ6S41QJQECH4KPG6SEF3Y")
#=> {1469918176385, "QJQECH4KPG6SEF3Y"}
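
The timestamp lives entirely in the first 10 characters (5 bits per Crockford base32 character, the low 48 bits of which are the millisecond timestamp). A pure-Elixir sketch of just that decoding step, independent of the library:

```elixir
# Crockford's base32 alphabet (no I, L, O, or U).
alphabet = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"
values = alphabet |> String.graphemes() |> Enum.with_index() |> Map.new()

# The first 10 characters of a ULID encode the millisecond timestamp.
decode_time = fn ulid ->
  ulid
  |> String.slice(0, 10)
  |> String.graphemes()
  |> Enum.reduce(0, fn ch, acc -> acc * 32 + Map.fetch!(values, ch) end)
end

IO.puts(decode_time.("01ARYZ6S41QJQECH4KPG6SEF3Y"))  # 1469918176385
```

This recovers the same 1469918176385 timestamp that ExULID.ULID.decode/1 returns above; the library additionally decodes and validates the randomness component.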

Benchmark

$ mix run bench/run.exs
Operating System: macOS
CPU Information: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
Number of Available Cores: 4
Available memory: 16 GB
Elixir 1.6.4
Erlang 20.2.4
Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
parallel: 1
inputs: none specified
Estimated total run time: 7 s

Benchmarking encode...

Name             ips        average  deviation         median         99th %
encode       52.08 K       19.20 μs   ±116.33%          16 μs          60 μs


Operating System: macOS
CPU Information: Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
Number of Available Cores: 4
Available memory: 16 GB
Elixir 1.6.4
Erlang 20.2.4
Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
parallel: 1
inputs: none specified
Estimated total run time: 7 s

Benchmarking decode...

Name             ips        average  deviation         median         99th %
decode       18.86 K       53.03 μs    ±24.71%          50 μs         100 μs

Download details:

Author: omgnetwork
Source code: https://github.com/omgnetwork/ex_ulid
License: Apache-2.0 license

#omg #blockchain #web3 #ethereum #elixir 

Archie Powell

1658188800

Stemmer: An English (Porter2) Stemming Implementation in Elixir.

Stemmer  

An English (Porter2) stemming implementation in Elixir.

In linguistic morphology and information retrieval, stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root. - Wikipedia

Usage

The Stemmer.stem/1 function supports stemming a single word (String), a sentence (String) or a list of single words (List of Strings).

Stemmer.stem("capabilities")                    # => "capabl"
Stemmer.stem("extraordinary capabilities")      # => "extraordinari capabl"
Stemmer.stem(["extraordinary", "capabilities"]) # => ["extraordinari", "capabl"]

Compatibility

Stemmer is 100% compatible with the official Porter2 implementation; it is tested against the official diffs.txt, which contains more than 29,000 words.

Naive Bayes

Stemmer was built to support the Simple Bayes library. :heart:


Author:  fredwu
Source code: https://github.com/fredwu/stemmer
License:

#elixir #machine-learning 

Archie Powell

1658181600

Tensorflex: Tensorflow Bindings for The Elixir Programming Language

Tensorflex

The paper detailing Tensorflex was presented at NeurIPS/NIPS 2018 as part of the MLOSS workshop. The paper can be found here

Contents

How to run

  • You need to have the Tensorflow C API installed. Look here for details.
  • You also need the C library libjpeg. If you are using Linux or OSX, it should already be present on your machine; otherwise be sure to install it (brew install libjpeg for OSX, and sudo apt-get install libjpeg-dev for Ubuntu).
  • Simply add Tensorflex to your list of dependencies in mix.exs and you are good to go:
{:tensorflex, "~> 0.1.2"}

In case you want the latest development version use this:

{:tensorflex, github: "anshuman23/tensorflex"}

Documentation

Tensorflex contains three main structs which handle different datatypes: %Graph, %Matrix, and %Tensor. %Graph structs handle pre-trained graph models, %Matrix handles Tensorflex 2-D matrices, and %Tensor handles Tensorflow tensor types. The official Tensorflex documentation is present here; do note that this README only briefly discusses Tensorflex functionality.

read_graph/1:

Used for loading a Tensorflow .pb graph model in Tensorflex.

Reads in a pre-trained Tensorflow protobuf (.pb) Graph model binary file.

Returns a tuple {:ok, %Graph}.

%Graph is an internal Tensorflex struct which holds the name of the graph file and the binary definition data that is read in via the .pb file.

get_graph_ops/1:

Used for listing all the operations in a Tensorflow .pb graph.

Reads in a Tensorflex %Graph struct obtained from read_graph/1.

Returns a list of all the operation names (as strings) that populate the graph model.

create_matrix/3:

Creates a 2-D Tensorflex matrix from custom input specifications.

Takes three input arguments: number of rows in matrix (nrows), number of columns in matrix (ncols), and a list of lists of the data that will form the matrix (datalist).

Returns a %Matrix Tensorflex struct type.

matrix_pos/3:

Used for accessing an element of a Tensorflex matrix.

Takes in three input arguments: a Tensorflex %Matrix struct matrix, and the row (row) and column (col) values of the required element in the matrix. Both row and col here are NOT zero indexed.

Returns the value as float.

size_of_matrix/1:

Used for obtaining the size of a Tensorflex matrix.

Takes a Tensorflex %Matrix struct matrix as input.

Returns a tuple {nrows, ncols} where nrows represents the number of rows of the matrix and ncols represents the number of columns of the matrix.

append_to_matrix/2:

Appends a single row to the back of a Tensorflex matrix.

Takes a Tensorflex %Matrix matrix as input and a single row of data (with the same number of columns as the original matrix) as a list of lists (datalist) to append to the original matrix.

Returns the extended and modified %Matrix struct matrix.

matrix_to_lists/1:

Converts a Tensorflex matrix (back) to a list of lists format.

Takes a Tensorflex %Matrix struct matrix as input.

Returns a list of lists representing the data stored in the matrix.

NOTE: If the matrix contains very high dimensional data, typically obtained from a function like load_csv_as_matrix/2, then it is not recommended to convert the matrix back to a list of lists format due to a possibility of memory errors.

float64_tensor/2, float32_tensor/2, int32_tensor/2:

Creates a TF_DOUBLE, TF_FLOAT, or TF_INT32 tensor from Tensorflex matrices containing the values and dimensions specified.

Takes two arguments: a %Matrix matrix (matrix1) containing the values the tensor should have and another %Matrix matrix (matrix2) containing the dimensions of the required tensor.

Returns a tuple {:ok, %Tensor} where %Tensor represents an internal Tensorflex struct type that is used for holding tensor data and type.

float64_tensor/1, float32_tensor/1, int32_tensor/1, string_tensor/1:

Creates a TF_DOUBLE, TF_FLOAT, TF_INT32, or TF_STRING constant value one-dimensional tensor from the input value specified.

Takes in a float, int or string value (depending on function) as input.

Returns a tuple {:ok, %Tensor} where %Tensor represents an internal Tensorflex struct type that is used for holding tensor data and type.

float64_tensor_alloc/1, float32_tensor_alloc/1, int32_tensor_alloc/1:

Allocates a TF_DOUBLE, TF_FLOAT, or TF_INT32 tensor of specified dimensions.

This function is generally used to allocate output tensors that do not hold any value data yet, but will after the session is run for Inference. Output tensors of the required dimensions are allocated and then passed to the run_session/5 function to hold the output values generated as predictions.

Takes a Tensorflex %Matrix struct matrix as input.

Returns a tuple {:ok, %Tensor} where %Tensor represents an internal Tensorflex struct type that is used for holding the potential tensor data and type.

tensor_datatype/1:

Used to get the datatype of a created tensor.

Takes in a %Tensor struct tensor as input.

Returns a tuple {:ok, datatype} where datatype is an atom representing the list of Tensorflow TF_DataType tensor datatypes. Click here to view a list of all possible datatypes.

load_image_as_tensor/1:

Loads JPEG images into Tensorflex directly as a TF_UINT8 tensor of dimensions image height x image width x number of color channels.

This function is very useful if you wish to do image classification using Convolutional Neural Networks, or other Deep Learning Models. One of the most widely adopted and robust image classification models is the Inception model by Google. It makes classifications on images from over a 1000 classes with highly accurate results. The load_image_as_tensor/1 function is an essential component for the prediction pipeline of the Inception model (and for other similar image classification models) to work in Tensorflex.

Reads in the path to a JPEG image file (.jpg or .jpeg).

Returns a tuple {:ok, %Tensor} where %Tensor represents an internal Tensorflex struct type that is used for holding the tensor data and type. Here the created Tensor is a uint8 tensor (TF_UINT8).

NOTE: For now, only 3 channel RGB JPEG color images can be passed as arguments. Support for grayscale images and other image formats such as PNG will be added in the future.

load_csv_as_matrix/2:

Loads high-dimensional data from a CSV file as a Tensorflex 2-D matrix in a super-fast manner.

The load_csv_as_matrix/2 function is very fast: when compared with the read_csv function from the Python-based pandas data analysis library on the test.csv file from the MNIST Kaggle data (source), the following execution times were obtained:

  • read_csv: 2.549233 seconds
  • load_csv_as_matrix/2: 1.711494 seconds

This function takes in two arguments: a path to a valid CSV file (filepath) and optional arguments opts. These specify whether or not a header needs to be discarded in the CSV, and what the delimiter type is, by passing the atom :true or :false to the header: key and setting a string value for the delimiter: key. By default, the header is considered present (:true) and the delimiter is set to ,.

Returns a %Matrix Tensorflex struct type.

run_session/5:

Runs a Tensorflow session to generate predictions for a given graph, input data, and required input/output operations.

This function is the final step of the Inference (prediction) pipeline and generates output for a given set of input data, a pre-trained graph model, and the specified input and output operations of the graph.

Takes in five arguments: a pre-trained Tensorflow graph .pb model read in from the read_graph/1 function (graph), an input tensor with the dimensions and data required for the input operation of the graph to run (tensor1), an output tensor allocated with the right dimensions (tensor2), the name of the graph's input operation to which the input data is fed (input_opname), and the name of the graph's output operation where the outputs are obtained (output_opname). The input tensor is generally created from matrices, either manually or using the load_csv_as_matrix/2 function, and then passed through to one of the tensor creation functions. For image classification, load_image_as_tensor/1 can also be used to create the input tensor. The output tensor is created using the tensor allocation functions (generally those with alloc at the end of the function name).

Returns a List of Lists (similar to the matrix_to_lists/1 function) containing the generated predictions as per the output tensor dimensions.

add_scalar_to_matrix/2:

Adds scalar value to matrix.

Takes two arguments: %Matrix matrix and scalar value (int or float)

Returns a %Matrix modified matrix.

subtract_scalar_from_matrix/2:

Subtracts scalar value from matrix.

Takes two arguments: %Matrix matrix and scalar value (int or float)

Returns a %Matrix modified matrix.

multiply_matrix_with_scalar/2:

Multiplies scalar value with matrix.

Takes two arguments: %Matrix matrix and scalar value (int or float)

Returns a %Matrix modified matrix.

divide_matrix_by_scalar/2:

Divides matrix values by scalar.

Takes two arguments: %Matrix matrix and scalar value (int or float)

Returns a %Matrix modified matrix.

add_matrices/2:

Adds two matrices of same dimensions together.

Takes in two %Matrix matrices as arguments.

Returns the resultant %Matrix matrix.

subtract_matrices/2:

Subtracts matrix2 from matrix1.

Takes in two %Matrix matrices as arguments.

Returns the resultant %Matrix matrix.

tensor_to_matrix/1:

Converts the data stored in a 2-D tensor back to a 2-D matrix.

Takes in a single argument as a %Tensor tensor (any TF_Datatype).

Returns a %Matrix 2-D matrix.

NOTE: Tensorflex doesn't currently support 3-D matrices, and therefore tensors that are 3-D (such as created using the load_image_as_tensor/1 function) cannot be converted back to a matrix, yet. Support for 3-D matrices will be added soon.

Examples

Fully described examples are available on my blog here. A blog post covering classification on the Iris Dataset is available here.


INCEPTION CNN MODEL EXAMPLE:

Here we will briefly touch upon how to use the Google Inception V3 pre-trained graph model to do image classification across over 1,000 classes. First, the Inception V3 model can be downloaded here: http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz

After unzipping, see that it contains the graphdef .pb file (classify_image_graph_def.pb) which contains our graph definition, a test JPEG image that should identify/classify as a panda (cropped_panda.jpg), and a few other files I will detail later.

Now for running this in Tensorflex first the graph is loaded:

iex(1)> {:ok, graph} = Tensorflex.read_graph("classify_image_graph_def.pb")
2018-07-29 00:48:19.849870: W tensorflow/core/framework/op_def_util.cc:346] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
{:ok,
 %Tensorflex.Graph{
   def: #Reference<0.2597534446.2498625538.211058>,
   name: "classify_image_graph_def.pb"
 }}

Then the cropped_panda image is loaded using the new load_image_as_tensor function:

iex(2)> {:ok, input_tensor} = Tensorflex.load_image_as_tensor("cropped_panda.jpg")
{:ok,
 %Tensorflex.Tensor{
   datatype: :tf_uint8,
   tensor: #Reference<0.2597534446.2498625538.211093>
 }}

Then create the output tensor which will hold our output vector values. For the Inception model, the output is received as a 1008x1 tensor, as there are 1008 classes in the model:

iex(3)> out_dims = Tensorflex.create_matrix(1,2,[[1008,1]])
%Tensorflex.Matrix{
  data: #Reference<0.2597534446.2498625538.211103>,
  ncols: 2,
  nrows: 1
}

iex(4)> {:ok, output_tensor} = Tensorflex.float32_tensor_alloc(out_dims)
{:ok,
 %Tensorflex.Tensor{
   datatype: :tf_float,
   tensor: #Reference<0.2597534446.2498625538.211116>
 }}

Then the output results are read into a list called results. Also, the input operation in the Inception model is DecodeJpeg and the output operation is softmax:

iex(5)> results = Tensorflex.run_session(graph, input_tensor, output_tensor, "DecodeJpeg", "softmax")
2018-07-29 00:51:13.631154: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
[
  [1.059142014128156e-4, 2.8240500250831246e-4, 8.30648496048525e-5,
   1.2982363114133477e-4, 7.32232874725014e-5, 8.014426566660404e-5,
   6.63459359202534e-5, 0.003170756157487631, 7.931600703159347e-5,
   3.707312498590909e-5, 3.0997329304227605e-5, 1.4232713147066534e-4,
   1.0381334868725389e-4, 1.1057958181481808e-4, 1.4321311027742922e-4,
   1.203602587338537e-4, 1.3130248407833278e-4, 5.850398520124145e-5,
   2.641105093061924e-4, 3.1629020668333396e-5, 3.906813799403608e-5,
   2.8646905775531195e-5, 2.2863158665131778e-4, 1.2222197256051004e-4,
   5.956588938715868e-5, 5.421260357252322e-5, 5.996063555357978e-5,
   4.867801326327026e-4, 1.1005574924638495e-4, 2.3433618480339646e-4,
   1.3062104699201882e-4, 1.317620772169903e-4, 9.388553007738665e-5,
   7.076268957462162e-5, 4.281177825760096e-5, 1.6863139171618968e-4,
   9.093972039408982e-5, 2.611844101920724e-4, 2.7584232157096267e-4,
   5.157176201464608e-5, 2.144951868103817e-4, 1.3628098531626165e-4,
   8.007588621694595e-5, 1.7929042223840952e-4, 2.2831936075817794e-4,
   6.216531619429588e-5, 3.736453436431475e-5, 6.782123091397807e-5,
   1.1538144462974742e-4, ...]
]

Finally, we need to find which class has the maximum probability and identify its label. Since results is a list of lists, it is better to flatten it into a single list first. Then we need to find the index of the element in the new list which has the maximum value. Therefore:

iex(6)> max_prob = List.flatten(results) |> Enum.max
0.8849328756332397

iex(7)> Enum.find_index(results |> List.flatten, fn(x) -> x == max_prob end)
169
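The two steps above can also be combined into a single pass with Enum.with_index/1 and Enum.max_by/2 (shown here on a small stand-in list rather than the real session output):

```elixir
# A stand-in for the `results` list of lists returned by run_session.
results = [[1.0e-4, 0.88, 3.0e-5]]

# Find the maximum probability and its class index in one pass.
{max_prob, max_index} =
  results
  |> List.flatten()
  |> Enum.with_index()
  |> Enum.max_by(fn {prob, _index} -> prob end)
# max_prob is 0.88, max_index is 1
```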

We can thus see that the class predicted with the maximum probability (0.8849328756332397) for the image is 169. We now need to find out what label class index 169 corresponds to. For this we can look back into the unzipped Inception folder, where there is a file called imagenet_2012_challenge_label_map_proto.pbtxt. In this file we can find the string class identifier for class index 169: it is n02510455, on line 1556 of the file. Finally, we need to match this string identifier to a set of human-readable labels by referring to the file imagenet_synset_to_human_label_map.txt. There we can see that the string class n02510455 corresponds to the human labels giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (line 3691 of the file).
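This manual lookup can be automated with a small sketch. LabelLookup is our own hypothetical helper, and it assumes the two files use the formats described above: entry { target_class: ... target_class_string: "..." } blocks in the .pbtxt, and tab-separated identifier-to-labels lines in the synset map:

```elixir
defmodule LabelLookup do
  # proto_text: contents of imagenet_2012_challenge_label_map_proto.pbtxt
  # synset_text: contents of imagenet_synset_to_human_label_map.txt
  def human_label(proto_text, synset_text, class_index) do
    # Pull the string identifier (e.g. "n02510455") for the class index.
    [_, class_string] =
      Regex.run(
        ~r/target_class: #{class_index}\s+target_class_string: "([^"]+)"/,
        proto_text
      )

    # Find the synset line starting with that identifier; return its labels.
    synset_text
    |> String.split("\n", trim: true)
    |> Enum.find(&String.starts_with?(&1, class_string))
    |> String.split("\t", parts: 2)
    |> List.last()
  end
end
```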

Thus, we have correctly identified the animal in the image as a panda using Tensorflex!


RNN LSTM SENTIMENT ANALYSIS MODEL EXAMPLE:

A brief idea of what this example entails:

  • The Recurrent Neural Network utilizes Long-Short-Term-Memory (LSTM) cells for holding the state for the data flowing in through the network
  • In this example, we utilize the LSTM network for sentiment analysis on movie reviews data in Tensorflex. The trained models are originally created as part of an online tutorial (source) and are present in a Github repository here.

To do sentiment analysis in Tensorflex, however, we first need to do some preprocessing and prepare the graph model (.pb), as done multiple times before in other examples. For that, there are two scripts in the examples/rnn-lstm-example directory: freeze.py and create_input_data.py. Before explaining how these scripts work, you first need to download the original saved models as well as the datasets:

  • For the model, download from here and then store all the 4 model files in the examples/rnn-lstm-example/model folder
  • For the dataset, download from here. After decompressing, we do not need all the files, just the two numpy binaries wordsList.npy and wordVectors.npy. These will be used to encode our text data for feeding to our RNN as input.

Now, for the two Python scripts, freeze.py and create_input_data.py:

  • freeze.py: This is used to create our .pb model from the Python saved checkpoints. Here we will use the downloaded checkpoint files to create the .pb graph. Simply running python freeze.py after putting the model files in the correct directory will do the trick. In the same ./model/ folder, you will now see a file called frozen_model_lstm.pb. This is the file we will load into Tensorflex. In case you want to skip this step and just get the loaded graph, here is a Dropbox link
  • create_input_data.py: Even once we can load our model into Tensorflex, we still need some data to do inference on. For that, we will write our own example sentences and convert (encode) them to a numerical (int32) format that can be used by the network as input. You can inspect the code in the script to get an understanding of what is happening. Basically, the neural network takes as input a 24x250 int32 (matrix) tensor created from text which has been encoded as UTF-8. Running python create_input_data.py will give you two CSV files (one indicating positive sentiment and the other negative sentiment) which we will later load into Tensorflex. The two sentences converted are:
    • Negative sentiment sentence: That movie was terrible.
    • Positive sentiment sentence: That movie was the best one I have ever seen.

Both of these get converted to two files inputMatrixPositive.csv and inputMatrixNegative.csv (by create_input_data.py) which we load into Tensorflex next.

Inference in Tensorflex: Now we do sentiment analysis in Tensorflex. A few things to note:

  • The input graph operation is named Placeholder_1
  • The output graph operation is named add and is the eventual result of a matrix multiplication. Of this obtained result we only need the first row
  • Here the input is going to be an integer-valued matrix tensor of dimensions 24x250 representing our sentence/review
  • The output will have 2 columns, as there are 2 classes, for positive and negative sentiment respectively. Since we only need the first row, we will get our result in a 1x2 vector. If the value of the first column is higher than the second column, the network indicates a positive sentiment, otherwise a negative sentiment. All of this can be observed in the original repository in a Jupyter notebook here:

iex(1)> {:ok, graph} = Tensorflex.read_graph "examples/rnn-lstm-example/model/frozen_model_lstm.pb"
{:ok,
 %Tensorflex.Graph{
   def: #Reference<0.713975820.1050542081.11558>,
   name: "examples/rnn-lstm-example/model/frozen_model_lstm.pb"
 }}

iex(2)> Tensorflex.get_graph_ops graph
["Placeholder_1", "embedding_lookup/params_0", "embedding_lookup", "transpose/perm", "transpose", "rnn/Shape", "rnn/strided_slice/stack", "rnn/strided_slice/stack_1", "rnn/strided_slice/stack_2", "rnn/strided_slice", "rnn/stack/1", "rnn/stack", "rnn/zeros/Const", "rnn/zeros", "rnn/stack_1/1", "rnn/stack_1", "rnn/zeros_1/Const", "rnn/zeros_1", "rnn/Shape_1", "rnn/strided_slice_2/stack", "rnn/strided_slice_2/stack_1", "rnn/strided_slice_2/stack_2", "rnn/strided_slice_2", "rnn/time", "rnn/TensorArray", "rnn/TensorArray_1", "rnn/TensorArrayUnstack/Shape", "rnn/TensorArrayUnstack/strided_slice/stack", "rnn/TensorArrayUnstack/strided_slice/stack_1", "rnn/TensorArrayUnstack/strided_slice/stack_2", "rnn/TensorArrayUnstack/strided_slice", "rnn/TensorArrayUnstack/range/start", "rnn/TensorArrayUnstack/range/delta", "rnn/TensorArrayUnstack/range", "rnn/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3", "rnn/while/Enter", "rnn/while/Enter_1", "rnn/while/Enter_2", "rnn/while/Enter_3", "rnn/while/Merge", "rnn/while/Merge_1", "rnn/while/Merge_2", "rnn/while/Merge_3", "rnn/while/Less/Enter", "rnn/while/Less", "rnn/while/LoopCond", "rnn/while/Switch", "rnn/while/Switch_1", "rnn/while/Switch_2", "rnn/while/Switch_3", ...]

First we will try for positive sentiment:
iex(3)> input_vals = Tensorflex.load_csv_as_matrix("examples/rnn-lstm-example/inputMatrixPositive.csv", header: :false)
%Tensorflex.Matrix{
  data: #Reference<0.713975820.1050542081.13138>,
  ncols: 250,
  nrows: 24
}

iex(4)> input_dims = Tensorflex.create_matrix(1,2,[[24,250]])
%Tensorflex.Matrix{
  data: #Reference<0.713975820.1050542081.13575>,
  ncols: 2,
  nrows: 1
}

iex(5)> {:ok, input_tensor} = Tensorflex.int32_tensor(input_vals, input_dims)
{:ok,
 %Tensorflex.Tensor{
   datatype: :tf_int32,
   tensor: #Reference<0.713975820.1050542081.14434>
 }}

iex(6)> output_dims = Tensorflex.create_matrix(1,2,[[24,2]])
%Tensorflex.Matrix{
  data: #Reference<0.713975820.1050542081.14870>,
  ncols: 2,
  nrows: 1
}

iex(7)> {:ok, output_tensor} = Tensorflex.float32_tensor_alloc(output_dims)
{:ok,
 %Tensorflex.Tensor{
   datatype: :tf_float,
   tensor: #Reference<0.713975820.1050542081.15363>
 }}

We only need the first row, the rest do not indicate anything:

iex(8)> [result_pos | _ ] = Tensorflex.run_session(graph, input_tensor,output_tensor, "Placeholder_1", "add")
[
  [4.483788013458252, -1.273943305015564],
  [-0.17151066660881042, -2.165886402130127],
  [0.9569928646087646, -1.131581425666809],
  [0.5669126510620117, -1.3842089176177979],
  [-1.4346938133239746, -4.0750861167907715],
  [0.4680981934070587, -1.3494354486465454],
  [1.068990707397461, -2.0195648670196533],
  [3.427264451980591, 0.48857203125953674],
  [0.6307879686355591, -2.069119691848755],
  [0.35061028599739075, -1.700657844543457],
  [3.7612719535827637, 2.421398878097534],
  [2.7635951042175293, -0.7214710116386414],
  [1.146680235862732, -0.8688814640045166],
  [0.8996094465255737, -1.0183486938476563],
  [0.23605018854141235, -1.893072247505188],
  [2.8790698051452637, -0.37355837225914],
  [-1.7325369119644165, -3.6470277309417725],
  [-1.687785029411316, -4.903762340545654],
  [3.6726789474487305, 0.14170047640800476],
  [0.982108473777771, -1.554244875907898],
  [2.248904228210449, 1.0617655515670776],
  [0.3663095533847809, -3.5266385078430176],
  [-1.009346604347229, -2.901120901107788],
  [3.0659966468811035, -1.7605335712432861]
]

iex(9)> result_pos
[4.483788013458252, -1.273943305015564]

Thus we can clearly see that the RNN predicts a positive sentiment: the first column value is higher than the second. Next, for a negative sentiment:

iex(10)> input_vals = Tensorflex.load_csv_as_matrix("examples/rnn-lstm-example/inputMatrixNegative.csv", header: :false)
%Tensorflex.Matrix{
  data: #Reference<0.713975820.1050542081.16780>,
  ncols: 250,
  nrows: 24
}

iex(11)> {:ok, input_tensor} = Tensorflex.int32_tensor(input_vals,input_dims)
{:ok,              
 %Tensorflex.Tensor{
   datatype: :tf_int32,
   tensor: #Reference<0.713975820.1050542081.16788>
 }}

iex(12)> [result_neg|_] = Tensorflex.run_session(graph, input_tensor,output_tensor, "Placeholder_1", "add")
[
  [0.7635725736618042, 10.895986557006836],
  [2.205151319503784, -0.6267685294151306],
  [3.5995595455169678, -0.1240251287817955],
  [-1.6063352823257446, -3.586883068084717],
  [1.9608432054519653, -3.084211826324463],
  [3.772461414337158, -0.19421455264091492],
  [3.9185996055603027, 0.4442034661769867],
  [3.010765552520752, -1.4757057428359985],
  [3.23650860786438, -0.008513949811458588],
  [2.263028144836426, -0.7358709573745728],
  [0.206748828291893, -2.1945853233337402],
  [2.913491725921631, 0.8632720708847046],
  [0.15935257077217102, -2.9757845401763916],
  [-0.7757357358932495, -2.360766649246216],
  [3.7359719276428223, -0.7668198347091675],
  [2.2896337509155273, -0.45704856514930725],
  [-1.5497230291366577, -4.42919921875],
  [-2.8478822708129883, -5.541027545928955],
  [1.894787073135376, -0.8441318273544312],
  [0.15720489621162415, -2.699129819869995],
  [-0.18114641308784485, -2.988100051879883],
  [3.342879056930542, 2.1714375019073486],
  [2.906526565551758, 0.18969044089317322],
  [0.8568912744522095, -1.7559258937835693]
]
iex(13)> result_neg
[0.7635725736618042, 10.895986557006836]

Since the second column value is higher than the first, the RNN indicates a negative sentiment in this case. Our model works!
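The decision rule used in both cases (first column wins means positive, second column wins means negative) can be captured in a one-line helper:

```elixir
# Turn a 1x2 output row [positive_score, negative_score] into a label.
classify = fn [pos_score, neg_score] ->
  if pos_score > neg_score, do: :positive, else: :negative
end

classify.([4.483788013458252, -1.273943305015564])
# :positive
classify.([0.7635725736618042, 10.895986557006836])
# :negative
```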


Author: anshuman23
Source code: https://github.com/anshuman23/tensorflex
License: Apache-2.0 license

#elixir #tensorflow #machine-learning 

Tensorflex: Tensorflow Bindings for The Elixir Programming Language
Archie Powell

Emel: A Simple and Functional Machine Learning Library Written in Elixir

emel

Turn data into functions! A simple and functional machine learning library written in elixir.


Installation

The package can be installed by adding emel to your list of dependencies in mix.exs:

def deps do
  [
    {:emel, "~> 0.3.0"}
  ]
end

The docs can be found at https://hexdocs.pm/emel/0.3.0.

Usage

# set up the aliases for the module
alias Emel.Ml.KNearestNeighbors, as: KNN

dataset = [
  %{"x1" => 0.0, "x2" => 0.0, "x3" => 0.0, "y" => 0.0},
  %{"x1" => 0.5, "x2" => 0.5, "x3" => 0.5, "y" => 1.5},
  %{"x1" => 1.0, "x2" => 1.0, "x3" => 1.0, "y" => 3.0},
  %{"x1" => 1.5, "x2" => 1.5, "x3" => 1.5, "y" => 4.5},
  %{"x1" => 2.0, "x2" => 2.0, "x3" => 2.0, "y" => 6.0},
  %{"x1" => 2.5, "x2" => 2.5, "x3" => 2.5, "y" => 7.5},
  %{"x1" => 3.0, "x2" => 3.3, "x3" => 3.0, "y" => 9.0}
]

# turn the dataset into a function
f = KNN.predictor(dataset, ["x1", "x2", "x3"], "y", 2)

# make predictions
f.(%{"x1" => 1.725, "x2" => 1.725, "x3" => 1.725})
# 5.25
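For intuition, the k-nearest-neighbors regression that KNN.predictor/4 performs can be sketched in plain Elixir. This is our own illustration, not emel's actual implementation:

```elixir
defmodule KnnSketch do
  # Euclidean distance between two feature lists.
  defp distance(a, b) do
    a
    |> Enum.zip(b)
    |> Enum.map(fn {x, y} -> (x - y) * (x - y) end)
    |> Enum.sum()
    |> :math.sqrt()
  end

  # Average the label over the k rows nearest to `point`.
  def predict(dataset, features, label, k, point) do
    query = Enum.map(features, &Map.fetch!(point, &1))

    nearest =
      dataset
      |> Enum.sort_by(fn row ->
        distance(Enum.map(features, &Map.fetch!(row, &1)), query)
      end)
      |> Enum.take(k)

    Enum.sum(Enum.map(nearest, &Map.fetch!(&1, label))) / k
  end
end
```

With the dataset above and k = 2, the two rows nearest to 1.725 have y values 4.5 and 6.0, whose average is the same 5.25 that emel returns.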

Implemented Algorithms

alias Emel.Ml.DecisionTree, as: DecisionTree
alias Emel.Help.Model, as: Mdl
alias Emel.Math.Statistics, as: Stat

dataset = [
  %{risk: "high", collateral: "none", income: "low", debt: "high", credit_history: "bad"},
  %{risk: "high", collateral: "none", income: "moderate", debt: "high", credit_history: "unknown"},
  %{risk: "moderate", collateral: "none", income: "moderate", debt: "low", credit_history: "unknown"},
  %{risk: "high", collateral: "none", income: "low", debt: "low", credit_history: "unknown"},
  %{risk: "low", collateral: "none", income: "high", debt: "low", credit_history: "unknown"},
  %{risk: "low", collateral: "adequate", income: "high", debt: "low", credit_history: "unknown"},
  %{risk: "high", collateral: "none", income: "low", debt: "low", credit_history: "bad"},
  %{risk: "moderate", collateral: "adequate", income: "high", debt: "low", credit_history: "bad"},
  %{risk: "low", collateral: "none", income: "high", debt: "low", credit_history: "good"},
  %{risk: "low", collateral: "adequate", income: "high", debt: "high", credit_history: "good"},
  %{risk: "high", collateral: "none", income: "low", debt: "high", credit_history: "good"},
  %{risk: "moderate", collateral: "none", income: "moderate", debt: "high", credit_history: "good"},
  %{risk: "low", collateral: "none", income: "high", debt: "high", credit_history: "good"},
  %{risk: "high", collateral: "none", income: "moderate", debt: "high", credit_history: "bad"}
]

{training_set, test_set} = Mdl.training_and_test_sets(dataset, 0.75)

f = DecisionTree.classifier(training_set, [:collateral, :income, :debt, :credit_history], :risk)

predictions = Enum.map(test_set, fn row -> f.(row) end)
actual_values = Enum.map(test_set, fn %{risk: v} -> v end)
Stat.similarity(predictions, actual_values)
# 0.75
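Stat.similarity/2 here is presumably the fraction of predictions that match the actual values; an equivalent plain-Elixir check (our own sketch, not emel's code):

```elixir
# Fraction of positions where prediction equals the actual value.
similarity = fn predictions, actual_values ->
  matches =
    predictions
    |> Enum.zip(actual_values)
    |> Enum.count(fn {p, a} -> p == a end)

  matches / length(actual_values)
end

similarity.(["high", "low", "low", "moderate"], ["high", "low", "high", "moderate"])
# 0.75
```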


Author: mrdimosthenis
Source code: https://github.com/mrdimosthenis/emel
License:

#elixir #machine-learning 

Emel: A Simple and Functional Machine Learning Library Written in Elixir

EthContract: A Set Of Helper Methods to Help Query ETH Smart Contracts

EthContract

A set of helper methods to help query ETH smart contracts

Installation

If available in Hex, the package can be installed by adding eth_contract to your list of dependencies in mix.exs:

def deps do
  [
    {:eth_contract, "~> 0.1.0"}
  ]
end

Documentation can be generated with ExDoc and published on HexDocs. Once published, the docs can be found at https://hexdocs.pm/eth_contract.

Configuration

Add your JSON RPC provider URL in config.exs

config :ethereumex,
  url: "http://"

Usage

Load and parse the ABI

abi = EthContract.parse_abi("crypto_kitties.json")

Get meta given a token_id and method name

EthContract.meta(%{token_id: 45, method: "getKitty", contract: "0x06012c8cf97BEaD5deAe237070F9587f8E7A266d", abi: abi})

This will return a map with all the meta:

%{
  "birthTime" => 1511417999,
  "cooldownIndex" => 0,  
  "generation" => 0,
  "genes" => 626837621154801616088980922659877168609154386318304496692374110716999053,
  "isGestating" => false,
  "isReady" => true,
  "matronId" => 0,
  "nextActionAt" => 0,
  "sireId" => 0,
  "siringWithId" => 0
}

Download Details:
Author: zyield
Source Code: https://github.com/zyield/eth_contract
License: GPL-3.0 license

#blockchain  #solidity  #ethereum  #smartcontract #elixir 

EthContract: A Set Of Helper Methods to Help Query ETH Smart Contracts

Ethereumex: Elixir JSON-RPC Client for The Ethereum Blockchain

Ethereumex

Elixir JSON-RPC client for the Ethereum blockchain.

Check out the documentation here.

Installation

Add :ethereumex to your list of dependencies in mix.exs:

def deps do
  [
    {:ethereumex, "~> 0.9"}
  ]
end

Ensure :ethereumex is started before your application:

def application do
  [
    applications: [:ethereumex]
  ]
end

Configuration

In config/config.exs, add the Ethereum protocol host params to your config file:

config :ethereumex,
  url: "http://localhost:8545"

You can also configure the HTTP request timeout for requests sent to the Ethereum JSON-RPC (you can also overwrite this configuration in opts used when calling the client).

config :ethereumex,
  http_options: [pool_timeout: 5000, receive_timeout: 15_000],
  http_headers: [{"Content-Type", "application/json"}]

  • :pool_timeout - This timeout is applied when we check out a connection from the pool. Default value is 5_000.
  • :receive_timeout - The maximum time to wait for a response before returning an error. Default value is 15_000.

If you want to use IPC you will need to set a few things in your config.

First, specify the :client_type:

config :ethereumex,
  client_type: :ipc

This will resolve to :http by default.

Second, specify the :ipc_path:

config :ethereumex,
  ipc_path: "/path/to/ipc"

If you want to count the number of RPC calls per RPC method or overall, you can attach yourself to executed telemetry events. There are two events you can attach yourself to:

[:ethereumex] # has RPC method name in metadata

Emitted event: {:event, [:ethereumex], %{counter: 1}, %{method_name: "method_name"}}

or the more granular

[:ethereumex, <rpc_method>] # %{} metadata

Emitted event: {:event, [:ethereumex, :method_name_as_atom], %{counter: 1}, %{}}

Each event carries a single ticker that you can pass into your counters (like Statix.increment/2). Be sure to add :telemetry as a project dependency.
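A minimal sketch of such a handler, using a RpcCounter module of our own invention; the counts are kept in the process dictionary as a stand-in for a real metrics sink like Statix:

```elixir
defmodule RpcCounter do
  # Telemetry handler signature: event name, measurements, metadata, config.
  def handle_event([:ethereumex], %{counter: n}, metadata, _config) do
    method = Map.get(metadata, :method_name, "unknown")
    # Bump the per-method counter by the emitted tick.
    Process.put({:rpc_count, method}, (Process.get({:rpc_count, method}) || 0) + n)
  end
end

# Wiring it up (requires the :telemetry dependency):
# :telemetry.attach("count-rpc", [:ethereumex], &RpcCounter.handle_event/4, nil)
```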

The IPC client type opens a pool of connection workers (default size 5, with a maximum overflow of 2). You can configure the pool size:

config :ethereumex,
  ipc_worker_size: 5,
  ipc_max_worker_overflow: 2,
  ipc_request_timeout: 60_000

Test

Download parity and initialize the password file

$ make setup

Run parity

$ make run

Run tests

$ make test

Usage

Available methods:

IpcClient

You can follow along with any of these examples using IPC by replacing HttpClient with IpcClient.

Examples

iex> Ethereumex.HttpClient.web3_client_version
{:ok, "Parity//v1.7.2-beta-9f47909-20170918/x86_64-macos/rustc1.19.0"}

# Using the url option will overwrite the configuration
iex> Ethereumex.HttpClient.web3_client_version(url: "http://localhost:8545")
{:ok, "Parity//v1.7.2-beta-9f47909-20170918/x86_64-macos/rustc1.19.0"}

iex> Ethereumex.HttpClient.web3_sha3("wrong_param")
{:error, %{"code" => -32602, "message" => "Invalid params: invalid format."}}

iex> Ethereumex.HttpClient.eth_get_balance("0x407d73d8a49eeb85d32cf465507dd71d507100c1")
{:ok, "0x0"}

Note that all method names are snake-cased, so, for example, the shh_getMessages method has a corresponding Ethereumex.HttpClient.shh_get_messages/1 function. Signatures can be found in Ethereumex.Client.Behaviour. There are more examples in the tests.

eth_call example - Read only smart contract calls

In order to call a smart contract using the JSON-RPC interface you need to properly hash the data attribute (this will need to include the contract method signature along with arguments if any). You can do this manually or use a hex package like ABI to parse your smart contract interface or encode individual calls.

defp deps do
  [
    ...
    {:ethereumex, "~> 0.9"},
    {:ex_abi, "~> 0.5"}
    ...
  ]
end

Now load the ABI and pass the method signature. Note that the address needs to be converted to bytes:

address           = "0xF742d4cE7713c54dD701AA9e92101aC42D63F895" |> String.slice(2..-1) |> Base.decode16!(case: :mixed)
contract_address  = "0xC28980830dD8b9c68a45384f5489ccdAF19D53cC"
abi_encoded_data  = ABI.encode("balanceOf(address)", [address]) |> Base.encode16(case: :lower)

Now you can use eth_call to execute this smart contract command:

{:ok, balance_bytes} = Ethereumex.HttpClient.eth_call(%{
  data: "0x" <> abi_encoded_data,
  to: contract_address
})

To convert the balance into an integer:

balance_bytes
|> String.slice(2..-1)
|> Base.decode16!(case: :lower)
|> ABI.TypeDecoder.decode_raw([{:uint, 256}])
|> List.first()
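Since the returned value is a single uint256, the same conversion can also be done with just the standard library by parsing the hex string directly (the response value below is a hypothetical 32-byte word encoding 42):

```elixir
# Hypothetical eth_call response encoding the integer 42.
balance_bytes = "0x000000000000000000000000000000000000000000000000000000000000002a"

# Strip the "0x" prefix and parse the remainder as base-16.
"0x" <> hex = balance_bytes
String.to_integer(hex, 16)
# 42
```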

Custom requests

Many Ethereum protocol implementations support additional JSON-RPC API methods. To use them, you should call Ethereumex.HttpClient.request/3 method.

For example, let's call parity's personal_listAccounts method.

iex> Ethereumex.HttpClient.request("personal_listAccounts", [], [])
{:ok,
 ["0x71cf0b576a95c347078ec2339303d13024a26910",
  "0x7c12323a4fff6df1a25d38319d5692982f48ec2e"]}

Batch requests

To send batch requests use Ethereumex.HttpClient.batch_request/1 or Ethereumex.HttpClient.batch_request/2 method.

requests = [
   {:web3_client_version, []},
   {:net_version, []},
   {:web3_sha3, ["0x68656c6c6f20776f726c64"]}
 ]
 Ethereumex.HttpClient.batch_request(requests)
 {
   :ok,
   [
     {:ok, "Parity//v1.7.2-beta-9f47909-20170918/x86_64-macos/rustc1.19.0"},
     {:ok, "42"},
     {:ok, "0x47173285a8d7341e5e972fc677286384f802f8ef42a5ec5f03bbfa254cb01fad"}
   ]
 }

Built on Ethereumex

If you are curious what others are building with ethereumex, you might want to take a look at these projects:

exw3 - A high-level contract abstraction and other goodies similar to web3.js

eth - Ethereum utilities for Elixir.

eth_contract - A set of helper methods for calling ETH Smart Contracts via JSON RPC.

Contributing

  1. Fork it!
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Download Details:
Author: mana-ethereum
Source Code: https://github.com/mana-ethereum/ethereumex
License: MIT license

#blockchain  #solidity  #ethereum  #smartcontract #elixir #json #JSONrpc

Ethereumex: Elixir JSON-RPC Client for The Ethereum Blockchain

Examples Of GraphQL Elixir Plug Endpoints Mounted in Phoenix

GraphQL Phoenix Examples 

This is a Phoenix app containing examples of how to use plug_graphql which in turn uses the GraphQL Elixir Core

Installation

Clone this repo and start your Phoenix app:

  1. Install dependencies with mix deps.get
  2. Create your development database using mix ecto.create. NOTE: you may need to edit config/dev.exs to set up your database configuration if it is not configured for passwordless logins on localhost.
  3. Setup your DB for the Ecto example with mix ecto.migrate and mix run priv/repo/seeds.exs
  4. Start Phoenix endpoint with mix phoenix.server

Now you can visit localhost:4000 from your browser.

Examples

Using plug_graphql with Phoenix is very simple.

Mount your GraphQL endpoint like so:

  1. Define your schema in web/graphql (see https://github.com/graphql-elixir/hello_graphql_phoenix/tree/master/web/graphql)
  2. Mount your endpoint



Author: graphql-elixir
Source Code: https://github.com/graphql-elixir/hello_graphql_phoenix
License: View license

#graphql #elixir 

Examples Of GraphQL Elixir Plug Endpoints Mounted in Phoenix

Plot: GraphQL Parser and Resolver for Elixir.

Plot

A GraphQL parser and resolver for Elixir.

This project is still a work in progress, but the eventual goal is to support the full GraphQL spec.

Basic Usage

Build a basic AST from a doc

"{
  user {
    id,
    firstName,
    lastName
  }
}" |> Plot.parse
# returns:
{:ok,
[{:query, nil,
  [{:object, "user", nil, [],
    [{:field, "id", nil, []},
     {:field, "firstName", nil, []},
     {:field, "lastName", nil, []}]}]}]}

Build Plot objects from a doc

"{
  user {
    id,
    firstName,
    lastName
  }
}" |> Plot.parse_and_generate
# returns:
%Plot.Document{fragments: [],
 operations: [%Plot.Query{name: nil,
 objects:    [%Plot.Object{alias: nil, args: [], name: "user",
              fields: [%Plot.Field{alias: nil, args: [], name: "lastName"},
                       %Plot.Field{alias: nil, args: [], name: "firstName"},
                       %Plot.Field{alias: nil, args: [], name: "id"}]}]}],
variables:   []}

Query resolution

Query resolution works via Elixir protocols. Implement the Resolution protocol for the various nodes of your queries.

# Implementation
defimpl Plot.Resolution, for: Plot.Object do
  def resolve(%Plot.Object{name: "user"} = _obj, _context) do
    %{"firstName" => "Phil", "lastName" => "Burrows", dontInclude: "this"}
  end

  def resolve(%Plot.Object{name: "birthday"}, _context) do
    %{"day" => 15, "month" => 12, "year" => 2015}
  end
end

# Resolution
doc = "{user {firstName, lastName, birthday {month, year}}}" |> Plot.parse_and_generate
doc.operations |> Enum.at(0) |> Plot.Resolver.resolve
# returns:
[%{"user" => %{"birthday" => %{"month" => 12, "year" => 2015},
   "firstName" => "Phil", "lastName" => "Burrows"}}]

Author: peburrows
Source Code: https://github.com/peburrows/plot
License: 

#elixir #graphql 

Plot: GraphQL Parser and Resolver for Elixir.

Graphql: A Tool To Compile Graphql Queries into Native Elixir.

Graphql

A tool to compile graphql queries into native Elixir.

The goal is for people to define their schemas in Elixir with callbacks while the library handles the asynchronous requests, composes the results together, and sends a reply to the query. It would also be a great benefit to have a babeljs plugin that could replace graphql es6-templated strings with a cryptographic checksum of the query, and then compile those queries on the server.

For right now, this project focuses mainly on getting the very basics up and running.

References

Specification: http://facebook.github.io/graphql/

Reference Implementation: https://github.com/graphql/graphql-js/

Status

Sections from the RFC for the parser that are complete.

  • Grammar
    • 8.1 Tokens
      • 8.1.1 Ignored Source
    • 8.2 Syntax
      • 8.2.1 Document
      • 8.2.2 Operations
      • 8.2.3 Fragments
      • 8.2.4 Values
        • 8.2.4.1 Array Value
        • 8.2.4.2 Object Value
      • 8.2.5 Directives
      • 8.2.6 Types

Author: asonge
Source Code: https://github.com/asonge/graphql
License: Apache-2.0 License

#graphql #elixir 

Graphql: A Tool To Compile Graphql Queries into Native Elixir.

Graphql Parser: Elixir Bindings For Libgraphqlparser

GraphQL.Parser

An Elixir binding for libgraphqlparser implemented as a NIF for parsing GraphQL.

Introduction

GraphQL is a query language designed to build client applications by providing an intuitive and flexible syntax and system for describing their data requirements and interactions.

This library is an Elixir interface for the query language parser and not a full implementation of GraphQL. It takes a query string as input and outputs the AST in a format suitable for pattern matching. Use this library directly only if you want to write your own implementation of GraphQL or want to work with the AST for something else. For a full Elixir implementation, check out graphql-elixir. Head here if you are looking for the full GraphQL specification.

Requirements

Building and installing libgraphqlparser requires a C++ compiler that supports C++11, as well as cmake and make. Mac OS X or Linux is also required.

Installation

To get started quickly, add GraphQL.Parser to your deps in mix.exs:

defp deps do
  [{:graphql_parser, "~> 0.0.3"}]
end

then, update your deps:

$ mix deps.get

Installing libgraphqlparser

You need to install libgraphqlparser before attempting to compile & run this library. If you're on a Mac, you could do brew install libgraphqlparser.

But I'd recommend building and installing from source, because the library is constantly updated with critical bug fixes. It's in pretty early stages, so this is the most recommended approach. To install from source:

$ cd libgraphqlparser/
$ cmake .
$ make
$ make install

Once libgraphqlparser is successfully installed, do a mix compile to compile the NIF and you're good to go!

Examples

iex> GraphQL.Parser.parse "{ hello }"
{:ok,
 %{definitions: [%{directives: nil, kind: "OperationDefinition",
      loc: %{end: 10, start: 1}, name: nil, operation: "query",
      selectionSet: %{kind: "SelectionSet", loc: %{end: 10, start: 1},
        selections: [%{alias: nil, arguments: nil, directives: nil,
           kind: "Field", loc: %{end: 8, start: 3},
           name: %{kind: "Name", loc: %{end: 8, start: 3}, value: "hello"},
           selectionSet: nil}]}, variableDefinitions: nil}], kind: "Document",
   loc: %{end: 10, start: 1}}}

iex> GraphQL.Parser.parse! "{ hello }"
%{definitions: [%{directives: nil, kind: "OperationDefinition",
     loc: %{end: 10, start: 1}, name: nil, operation: "query",
     selectionSet: %{kind: "SelectionSet", loc: %{end: 10, start: 1},
       selections: [%{alias: nil, arguments: nil, directives: nil,
          kind: "Field", loc: %{end: 8, start: 3},
          name: %{kind: "Name", loc: %{end: 8, start: 3}, value: "hello"},
          selectionSet: nil}]}, variableDefinitions: nil}], kind: "Document",
  loc: %{end: 10, start: 1}}
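Because the AST is built from plain maps and lists, fields can be pulled out with ordinary pattern matching. A small sketch using a hand-built map of the same shape as the output above (trimmed to the fields we match on):

```elixir
# A hand-built map mirroring the shape of GraphQL.Parser.parse!/1 output.
doc = %{
  kind: "Document",
  definitions: [
    %{kind: "OperationDefinition", operation: "query", name: nil}
  ]
}

# Extract the operation type of the first definition.
%{definitions: [%{operation: operation} | _]} = doc
operation
# "query"
```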

iex> GraphQL.Parser.parse! " hello }"
** (GraphQL.Parser.SyntaxError) 1.2-6: syntax error, unexpected IDENTIFIER, expecting fragment or mutation or query or { on line
    lib/graphql/parser.ex:20: GraphQL.Parser.parse!/1

License

Copyright (c) 2015 Vignesh Rajagopalan

MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


Author: graphql-elixir
Source Code: https://github.com/graphql-elixir/graphql_parser
License: 

#elixir #graphql 

Graphql Relay: Relay Helpers for GraphQL Elixir.

GraphQL.Relay

This library contains helper functions that make it easier to set up a Relay-compatible GraphQL schema.

You do not need this library to create a Relay-compatible GraphQL schema; it just makes the job easier. To illustrate, compare what a Relay-compatible schema looks like without this library and with it.

This library relies on the GraphQL Elixir library.

Installation

Add graphql_relay to your list of dependencies in mix.exs:

def deps do
  [{:graphql_relay, "~> 0.5"}]
end

Configuration

Relay requires a schema.json file which is generated server-side, so we need a way of creating and updating this file.

In config/config.exs add the following config:

config :graphql_relay,
  schema_module: MyApp.Schema,
  schema_json_path: "#{Path.dirname(__DIR__)}/priv/graphql"

  • MyApp.Schema is a module with a schema function that returns your GraphQL schema
  • schema_json_path is the path where the generated JSON schema lives

Generate your schema with:

mix run -e GraphQL.Relay.generate_schema_json!
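The generator expects the configured schema_module to expose a schema function. A minimal stand-in for that contract (returning a plain map here instead of the real %GraphQL.Schema{} struct, so it runs without the GraphQL library installed):

```elixir
defmodule MyApp.Schema do
  # Stand-in: a real module would return a %GraphQL.Schema{} built with
  # the GraphQL Elixir library; the shape below is illustrative only.
  def schema do
    %{query: %{name: "Root", fields: %{greeting: %{type: :string}}}}
  end
end

IO.inspect(MyApp.Schema.schema().query.name)  # "Root"
```

The point is only the contract: graphql_relay calls your module's schema/0 and serializes the result to the configured schema_json_path.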

Phoenix Integration

In Phoenix, in the dev environment, you can regenerate the schema automatically after each modification to a GraphQL-related schema file.

Babel and Relay

Relay's Babel Plugin (Relay Docs, npm) and babel-relay-plugin-loader (npm, GitHub) rely on a schema.json file existing that contains the result of running the full GraphQL introspection query against your GraphQL endpoint. Babel needs this file for transpiling GraphQL queries for use with Relay.

Usage

See the Star Wars test schema for a simple example and the TodoMVC example Phoenix application for a full application example that uses Ecto as well.

Learning GraphQL and Relay

It's important that you understand GraphQL first and then Relay second. Relay is simply a convention for how to organize a GraphQL schema so that Relay clients can query the GraphQL server in a standard way.

Helpful Tools


Author: graphql-elixir
Source Code: https://github.com/graphql-elixir/graphql_relay
License: View license

#elixir #graphql 

Plug Graphql: Plug Integration For GraphQL Elixir

GraphQL Plug

plug_graphql is a Plug integration for the GraphQL Elixir implementation of Facebook's GraphQL.

This Plug allows you to easily mount a GraphQL endpoint in Phoenix. This example project shows you how:

Installation

  1. Make a new Phoenix app, or add the dependency to your existing app.
```sh
mix phoenix.new hello_graphql
cd hello_graphql
```

Alternatively, clone the example project:

```sh
git clone https://github.com/graphql-elixir/hello_graphql_phoenix
```
  2. Add plug_graphql to your list of dependencies and applications in mix.exs and install the package with mix deps.get.
```elixir
def application do
  # Add the application to your list of applications.
  # This will ensure that it will be included in a release.
  [applications: [:logger, :plug_graphql]]
end

def deps do
  [{:plug_graphql, "~> 0.3.1"}]
end
```

Usage

  1. Define a simple schema in web/graphql/test_schema.ex:
```elixir
defmodule TestSchema do
  def schema do
    %GraphQL.Schema{
      query: %GraphQL.Type.ObjectType{
        name: "Hello",
        fields: %{
          greeting: %{
            type: %GraphQL.Type.String{},
            args: %{
              name: %{
                type: %GraphQL.Type.String{}
              }
            },
            resolve: {TestSchema, :greeting}
          }
        }
      }
    }
  end

  def greeting(_, %{name: name}, _), do: "Hello, #{name}!"
  def greeting(_, _, _), do: "Hello, world!"
end
```
  2. Your API pipeline should include this as a minimum:
```elixir
pipeline :api do
  plug :accepts, ["json"]
end
```
  3. Mount the GraphQL endpoint as follows:
```elixir
scope "/api" do
  pipe_through :api

  forward "/", GraphQL.Plug, schema: {TestSchema, :schema}
end
```
  4. Start Phoenix:
```sh
mix phoenix.server
```
  5. Open your browser to http://localhost:4000/api?query={greeting} and you should see something like this:
```json
{
  "data": {
    "greeting": "Hello, world!"
  }
}
```
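The resolve: {TestSchema, :greeting} tuple in step 1 dispatches to TestSchema.greeting/3, whose two clauses pattern-match on the args map. A standalone sketch of the same multi-clause dispatch, runnable without the GraphQL library:

```elixir
defmodule GreetingSketch do
  # Same dispatch as TestSchema.greeting/3 above: the first clause
  # matches only when the args map contains a :name key.
  def greeting(_source, %{name: name}, _info), do: "Hello, #{name}!"
  # Fallback clause for any other args map.
  def greeting(_source, _args, _info), do: "Hello, world!"
end

IO.puts(GreetingSketch.greeting(nil, %{name: "Elixir"}, nil))  # Hello, Elixir!
IO.puts(GreetingSketch.greeting(nil, %{}, nil))                # Hello, world!
```

This is why the query {greeting} above returns "Hello, world!": with no name argument supplied, only the fallback clause matches.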

Contributions

These are early days; the GraphQL Elixir ecosystem needs a lot more work to be broadly useful.

However we can't get there without your help, so any questions, bug reports, feedback, feature requests and/or PRs are most welcome!

Acknowledgements

Thanks and appreciation go to the following contributors for PRs, discussions, answering many questions, and providing helpful feedback:

Thanks also to everyone who has submitted PRs, logged issues, given feedback or asked questions.


Author: graphql-elixir
Source Code: https://github.com/graphql-elixir/plug_graphql
License: View license

#elixir #graphql 
