Royce Reinger

9 Popular Rust Computation Libraries

In today's post we will learn about 9 popular Rust computation libraries.

What is Computation?

"Computation is always defined relative to a computational model that specifies the agent performing the computation. Computation is seen as a process generated by that agent.

"Any of the four computational models proposed in the 1930s—recursive functions, rewriting rules, lambda-calculus, and Turing machine—could have been used as the reference model for computation. The Turing machine won that designation because it most closely resembled the new generation of digital electronic computers.

Table of contents:

  • Argmin-rs/argmin [argmin] - A pure Rust optimization library.
  • Mikkyang/rust-blas - BLAS bindings.
  • Calebwin/emu - A language for GPGPU numerical computing from a Rust macro.
  • Indigits/scirust - Scientific computing library in Rust.
  • GuillaumeGomez/rust-GSL - GSL bindings.
  • Stainless-steel/lapack - LAPACK bindings.
  • Arrayfire/arrayfire-rust - Arrayfire bindings.
  • Autumnai/collenchyma - An extensible, pluggable, backend-agnostic framework for parallel, high-performance computations on CUDA, OpenCL and common host CPU. 
  • Statrs-dev/statrs - Robust statistical computation library in Rust.

1 - Argmin-rs/argmin [argmin]:

A pure Rust optimization library.

Argmin's goal is to offer a wide range of optimization algorithms with a consistent interface. It is type-agnostic by design, meaning that any type and/or math backend, such as nalgebra or ndarray, can be used -- even your own.

Observers allow one to track the progress of iterations, either by using one of the provided observers for logging to screen or disk, or by implementing your own.

An optional checkpointing mechanism helps to mitigate the negative effects of crashes in unstable computing environments.

Due to Rust's powerful generics and traits, most features can be replaced with your own tailored implementations.

Argmin is designed to simplify the implementation of optimization algorithms and as such can also be used as a toolbox for the development of new algorithms. One can focus on the algorithm itself, while the handling of termination, parameter vectors, populations, gradients, Jacobians and Hessians is taken care of by the library.
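
For a taste of the interface, here is a minimal sketch of minimizing the two-dimensional Rosenbrock function with steepest descent, assuming the trait-based API of recent argmin releases (the Rosenbrock struct and the chosen solver are purely illustrative):

use argmin::core::{CostFunction, Error, Executor, Gradient};
use argmin::solver::gradientdescent::SteepestDescent;
use argmin::solver::linesearch::MoreThuenteLineSearch;

// Illustrative 2D Rosenbrock problem.
struct Rosenbrock;

impl CostFunction for Rosenbrock {
    type Param = Vec<f64>;
    type Output = f64;

    fn cost(&self, p: &Self::Param) -> Result<Self::Output, Error> {
        Ok((1.0 - p[0]).powi(2) + 100.0 * (p[1] - p[0].powi(2)).powi(2))
    }
}

impl Gradient for Rosenbrock {
    type Param = Vec<f64>;
    type Gradient = Vec<f64>;

    fn gradient(&self, p: &Self::Param) -> Result<Self::Gradient, Error> {
        Ok(vec![
            -2.0 * (1.0 - p[0]) - 400.0 * p[0] * (p[1] - p[0].powi(2)),
            200.0 * (p[1] - p[0].powi(2)),
        ])
    }
}

fn main() -> Result<(), Error> {
    // Steepest descent with a More-Thuente line search, starting at (-1.2, 1.0).
    let linesearch = MoreThuenteLineSearch::new();
    let solver = SteepestDescent::new(linesearch);
    let res = Executor::new(Rosenbrock, solver)
        .configure(|state| state.param(vec![-1.2, 1.0]).max_iters(100))
        .run()?;
    println!("{res}");
    Ok(())
}

Since termination handling, parameter vectors and the solver loop are provided by the library, the only problem-specific code is the cost function and its gradient.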

Algorithms

  • Line searches
    • Backtracking line search
    • More-Thuente line search
    • Hager-Zhang line search
  • Trust region method
    • Cauchy point method
    • Dogleg method
    • Steihaug method
  • Steepest descent
  • Conjugate gradient method
  • Nonlinear conjugate gradient method
  • Newton methods
    • Newton’s method
    • Newton-CG
  • Quasi-Newton methods
    • BFGS
    • L-BFGS
    • DFP
    • SR1
    • SR1-TrustRegion
  • Gauss-Newton method
  • Gauss-Newton method with linesearch
  • Golden-section search
  • Landweber iteration
  • Brent’s method
  • Nelder-Mead method
  • Simulated Annealing
  • Particle Swarm Optimization

View on Github

2 - Mikkyang/rust-blas:

BLAS bindings.

Rust bindings and wrappers for BLAS (Basic Linear Algebra Subprograms).

Overview

RBLAS wraps each external call in a trait with the same name (but capitalized). This trait contains a single static method of the same name. These traits are generic over the four main types of numbers BLAS supports: f32, f64, Complex32, and Complex64.

For example, the functions cblas_saxpy, cblas_daxpy, cblas_caxpy, and cblas_zaxpy are called with the function Axpy::axpy.

Additionally, RBLAS introduces a few traits to shorten calls to these BLAS functions: Vector for types that implement vector-like characteristics and Matrix for types that implement matrix-like characteristics. The Vector trait is already implemented by Vec and slice ([]) types.
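
As an illustration of the naming scheme, the following sketch scales and adds two vectors through the Axpy trait; it assumes Axpy is exported at the crate root like Dot in the example below and that the call order is (alpha, x, y), which may differ slightly between versions:

extern crate rblas;

use rblas::Axpy;

fn main() {
    let x = vec![1.0, 2.0, 3.0];
    let mut y = vec![1.0, 1.0, 1.0];

    // y := 2.0 * x + y, dispatching to cblas_daxpy because the data is f64
    Axpy::axpy(&2.0, &x, &mut y);
    assert_eq!(y, vec![3.0, 5.0, 7.0]);
}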

Installation

By default, the library links with blas dynamically. To link to an alternate implementation, like OpenBLAS, use the environment variable CARGO_BLAS. If you've already built the bindings, you may need to clean and build again.

export CARGO_BLAS=openblas

Example

extern crate rblas;

use rblas::Dot;

fn main() {
    let x = vec![1.0, -2.0, 3.0, 4.0];
    let y = [1.0, 1.0, 1.0, 1.0, 7.0];

    let d = Dot::dot(&x, &y[..x.len()]);
    assert_eq!(d, 6.0);
}

Sugared Example (Soon to be Deprecated)

#[macro_use]
extern crate rblas as blas;
use blas::math::Mat;
use blas::{Matrix, Vector};
use blas::math::Marker::T;

fn main() {
    let x = vec![1.0, 2.0];
    let xr = &x as &Vector<_>;
    let i = mat![1.0, 0.0; 0.0, 1.0];
    let ir = &i as &Matrix<_>;

    assert!(xr + &x == 2.0 * xr);
    assert!(ir * xr == x);

    let dot = (xr ^ T) * xr;
    assert!(dot == 5.0);
}

View on Github

3 - Calebwin/emu:

A language for GPGPU numerical computing from a Rust macro.

Overview

Emu is a GPGPU library for Rust with a focus on portability, modularity, and performance.

It's a CUDA-esque compute-specific abstraction over WebGPU providing specific functionality to make WebGPU feel more like CUDA. Here's a quick run-down of highlight features...

Emu can run anywhere - Emu uses WebGPU to support DirectX, Metal, Vulkan (and also OpenGL and browser eventually) as compile targets. This allows Emu to run on pretty much any user interface including desktop, mobile, and browser. By moving heavy computations to the user's device, you can reduce system latency and improve privacy.

Emu makes compute easier - Emu makes WebGPU feel like CUDA. It does this by providing...

  • DeviceBox<T> as a wrapper for data that lives on the GPU (thereby ensuring type-safe data movement)
  • DevicePool as a no-config auto-managed pool of devices (similar to CUDA)
  • trait Cache - a no-setup-required LRU cache of JITed compute kernels.

Emu is transparent - Emu is a fully transparent abstraction. This means, at any point, you can decide to remove the abstraction and work directly with WebGPU constructs with zero overhead. For example, if you want to mix Emu with WebGPU-based graphics, you can do that with zero overhead. You can also swap out the JIT compiler artifact cache with your own cache, manage the device pool if you wish, and define your own compile-to-SPIR-V compiler that interops with Emu.

Emu is asynchronous - Emu is fully asynchronous. Most API calls will be non-blocking and can be synchronized by calls to DeviceBox::get when data is read back from device.

An example

Here's a quick example of Emu. You can find more in emu_core/examples and most recent documentation here.

First, we just import a bunch of stuff

use emu_glsl::*;
use emu_core::prelude::*;
use zerocopy::*;

We can define types of structures so that they can be safely serialized and deserialized to/from the GPU.

#[repr(C)]
#[derive(AsBytes, FromBytes, Copy, Clone, Default, Debug)]
struct Rectangle {
    x: u32,
    y: u32,
    w: i32,
    h: i32,
}

For this example, we make this entire function async but in reality you will only want small blocks of code to be async (like a bunch of asynchronous memory transfers and computation) and these blocks will be sent off to an executor to execute. You definitely don't want to do something like this where you are blocking (by doing an entire compilation step) in your async code.

async fn do_some_stuff() -> Result<(), Box<dyn std::error::Error>> {
    assert_device_pool_initialized().await;

    // first, we move a bunch of rectangles to the GPU
    let mut x: DeviceBox<[Rectangle]> = vec![Default::default(); 128].as_device_boxed()?;
    
    // then we compile some GLSL code using the GlslCompile compiler and
    // the GlobalCache for caching compiler artifacts
    let c = compile::<String, GlslCompile, _, GlobalCache>(
        GlslBuilder::new()
            .set_entry_point_name("main")
            .add_param_mut()
            .set_code_with_glsl(
            r#"
#version 450
layout(local_size_x = 1) in; // our thread block size is 1, that is we only have 1 thread per block

struct Rectangle {
    uint x;
    uint y;
    int w;
    int h;
};

// make sure to use only a single set and keep all your n parameters in n storage buffers in bindings 0 to n-1
// you shouldn't use push constants or anything OTHER than storage buffers for passing stuff into the kernel
// just use buffers with one buffer per binding
layout(set = 0, binding = 0) buffer Rectangles {
    Rectangle[] rectangles;
}; // this is used as both input and output for convenience

Rectangle flip(Rectangle r) {
    r.x = r.x + r.w;
    r.y = r.y + r.h;
    r.w *= -1;
    r.h *= -1;
    return r;
}

// there should be only one entry point and it should be named "main"
// ultimately, Emu has to kind of restrict how you use GLSL because it is compute focused
void main() {
    uint index = gl_GlobalInvocationID.x; // this gives us the index in the x dimension of the thread space
    rectangles[index] = flip(rectangles[index]);
}
            "#,
        )
    )?.finish()?;
    
    // we spawn 128 threads (really 128 thread blocks)
    unsafe {
        spawn(128).launch(call!(c, &mut x));
    }

    // this is the Future we need to block on to get stuff to happen
    // everything else is non-blocking in the API (except stuff like compilation)
    println!("{:?}", futures::executor::block_on(x.get())?);

    Ok(())
}

And last but certainly not least, we use an executor to execute.

fn main() {
    futures::executor::block_on(do_some_stuff()).expect("failed to do stuff on GPU");
}

View on Github

4 - Indigits/scirust:

Scientific computing library written in the Rust programming language.

The objective is to design a generic library which can be used as a backbone for scientific computing.

Current emphasis is less on performance and more on providing a comprehensive API.

Features

General

  • Pure Rust implementation
  • Focus on generic programming
  • Extensive unit tests for all features
  • Column major implementation

Matrices

  • Generic matrix class supporting various data-types (u8, i8, u16, i16, ... , f32, f64, Complex32, Complex64)
  • Views over parts of matrices
  • Comprehensive support for operations on matrices.
  • Views over sub-matrices with similar operations.
  • Special support for triangular matrices.

Linear algebra

  • Solving systems of linear equations
  • LDU factorization
  • Rank, Determinant, Inverse

About Rust and Building the project

If you are unfamiliar with Rust, you are recommended to go through The Rust Programming Language Book.

The library can be built and used using Cargo which is the official dependency management and build tool for Rust.

Working with matrices requires a lot of low level code. As a user of the library, we expect that you won't have to write the low level code yourself. If you are reading or debugging through the source code of the library, you will see a lot of low level code. Good knowledge of Rust will help you sail through it.

The library code is full of unit tests. These unit tests serve multiple purposes:

  • Making sure that the functions work as advertised.
  • Extensively testing those functions which use unsafe and low level features of Rust.
  • Learning about how to use the library features.

If you haven't already, please familiarize yourself with Unit Testing in Rust. Writing unit tests will help you write better and more reliable code.

View on Github

5 - GuillaumeGomez/rust-GSL:

GSL bindings. A Rust binding for the GSL library (the GNU Scientific Library).

The minimum supported Rust version is 1.54.

Installation

This binding requires the GSL library (version >= 2) to be installed:

Linux

# on debian based systems:
sudo apt-get install libgsl0-dev

macOS

brew install gsl

Apple silicon

Homebrew installs libraries under /opt/homebrew/include on Apple silicon to maintain backward compatibility with Rosetta 2.

After gsl has been installed in the usual way, use the environment variable:

RUSTFLAGS='-L /opt/homebrew/include'

before cargo run, cargo build, etc., to tell the compiler where gsl is located.

Usage

This crate works with Cargo and is on crates.io. Just add the following to your Cargo.toml file:

[dependencies]
GSL = "4.0"

You can see examples in the examples folder.

Building

To build rgsl, just run cargo build. However, if you want to use a specific version, you'll need to use the cargo features. For example:

cargo build --features v2_1

If a project depends on this version, don't forget to add in your Cargo.toml:

[dependencies.GSL]
version = "2"
features = ["v2_1"]

Documentation

You can access the rgsl documentation locally; just build it:

> cargo doc --open

Then open this file with an internet browser: file:///{rgsl_location}/target/doc/rgsl/index.html

You can also access the latest build of the documentation via the internet here.

View on Github

6 - Stainless-steel/lapack:

LAPACK bindings.

The package provides wrappers for LAPACK (Fortran).

Example

use lapack::*;

let n = 3;
let mut a = vec![3.0, 1.0, 1.0, 1.0, 3.0, 1.0, 1.0, 1.0, 3.0];
let mut w = vec![0.0; n as usize];
let mut work = vec![0.0; 4 * n as usize];
let lwork = 4 * n;
let mut info = 0;

unsafe {
    dsyev(b'V', b'U', n, &mut a, n, &mut w, &mut work, lwork, &mut info);
}

assert!(info == 0);
for (one, another) in w.iter().zip(&[2.0, 2.0, 5.0]) {
    assert!((one - another).abs() < 1e-14);
}

Development

The code is generated via a Python script based on the content of the lapack-sys submodule. To re-generate, run the following commands:

./bin/generate.py > src/lapack-sys.rs
rustfmt src/lapack-sys.rs

Contribution

Your contribution is highly appreciated. Do not hesitate to open an issue or a pull request. Note that any contribution submitted for inclusion in the project will be licensed according to the terms given in LICENSE.md.

View on Github

7 - Arrayfire/arrayfire-rust:

Arrayfire bindings.

ArrayFire is a high performance library for parallel computing with an easy-to-use API. It enables users to write scientific computing code that is portable across CUDA, OpenCL and CPU devices. This project provides Rust bindings for the ArrayFire library. The table below shows the Rust bindings' compatibility with ArrayFire. If you find any bugs, please report them here.

arrayfire-rust    ArrayFire
M.m.p1            M.m.p2

Only the major (M) and minor (m) version numbers need to match. p1 and p2 are patch/fix updates for arrayfire-rust and ArrayFire respectively, and they don't need to match.

Use from Crates.io

To use the rust bindings for ArrayFire from crates.io, the following requirements are to be met first.

  1. Download and install ArrayFire binaries based on your operating system. Depending on the method of your installation for Linux, steps (2) & (3) may not be required. If that is the case, proceed to step (4) directly.
  2. Set the environment variable AF_PATH to point to the ArrayFire installation root folder.
  3. Make sure to add the path to lib files to your path environment variables.
    • On Linux: do export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$AF_PATH/lib64
    • On OSX: do export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$AF_PATH/lib
    • On Windows: Add %AF_PATH%\lib to your PATH environment variable.
  4. Add arrayfire = "3.8" to the dependencies section of your project's Cargo.toml file. Make sure to change the version to latest available.

Once step (4) is over, you should be able to use ArrayFire in your Rust project. If you find any bugs, please report them here.

Build from Source

Edit build.conf to modify the build flags. The structure is a simple JSON blob. Currently Rust does not allow key:value pairs to be passed from the CLI. To use an existing ArrayFire installation, modify the first three JSON values. You can install ArrayFire using one of the following two ways.

To build arrayfire submodule available in the rust wrapper repository, you have to do the following.

git submodule update --init --recursive
cargo build # use --all to build all crates in the workspace

This is the recommended way to build the Rust wrapper, since the submodule points to the most compatible version of ArrayFire the Rust wrapper has been tested with. You can find the ArrayFire dependencies below.

Example

use arrayfire::*;

let num_rows: u64 = 5;
let num_cols: u64 = 3;
let dims = Dim4::new(&[num_rows, num_cols, 1, 1]);
let a = randu::<f32>(dims);
af_print!("Create a 5-by-3 matrix of random floats on the GPU", a);

Sample output

~/p/arrayfire_rust> cargo run --example helloworld
...
Create a 5-by-3 matrix of random floats on the GPU
[5 3 1 1]
    0.7402     0.4464     0.7762
    0.9210     0.6673     0.2948
    0.0390     0.1099     0.7140
    0.9690     0.4702     0.3585
    0.9251     0.5132     0.6814
...

Troubleshooting

If the build command fails with undefined references errors even after taking care of environment variables, we recommend doing a cargo clean and re-running cargo build or cargo test.

You can also use some environment variables mentioned in our book, such as AF_PRINT_ERRORS to print more elaborate error messages to console.

View on Github

8 - Autumnai/collenchyma:

An extensible, pluggable, backend-agnostic framework for parallel, high-performance computations on CUDA, OpenCL and common host CPU.

Collenchyma is an extensible, pluggable, backend-agnostic framework for parallel, high-performance computations on CUDA, OpenCL and common host CPU. It is fast, easy to build and provides an extensible Rust struct to execute operations on almost any machine, even if it does not have CUDA or OpenCL capable devices.

Collenchyma abstracts over the different computation languages (Native, OpenCL, Cuda) and lets you run highly performant code, thanks to easy parallelization, on servers, desktops or mobiles without the need to adapt your code for the machine you deploy to. Collenchyma does not require OpenCL or Cuda on the machine and automatically falls back to the native host CPU, making your application highly flexible and fast to build.

Collenchyma was started at Autumn to support the Machine Intelligence Framework Leaf with backend-agnostic, state-of-the-art performance.

Parallelizing Performance
Collenchyma makes it easy to parallelize computations on your machine, putting all the available cores of your CPUs/GPUs to use. Collenchyma provides optimized operations through Plugins that you can use right away to speed up your application.

Easily Extensible
Writing custom operations for GPU execution becomes easy with Collenchyma, as it already takes care of Framework peculiarities, memory management, safety and other overhead. Collenchyma provides Plugins (see examples below) that you can use to extend the Collenchyma backend with your own, business-specific operations.

Butter-smooth Builds
As Collenchyma does not require the installation of various frameworks and libraries, it will not add significantly to the build time of your application. Collenchyma checks at run-time if these frameworks can be used and gracefully falls back to the standard, native host CPU if they are not. No long and painful build procedures for you or your users.

Getting Started

If you're using Cargo, just add Collenchyma to your Cargo.toml:

[dependencies]
collenchyma = "0.0.8"

If you're using Cargo Edit, you can call:

$ cargo add collenchyma

Plugins

You can easily extend Collenchyma's Backend with more backend-agnostic operations through Plugins. Plugins provide a set of related operations - BLAS would be a good example. To extend Collenchyma's Backend with operations from a Plugin, just add the desired Plugin crate to your Cargo.toml file. Here is a list of available Collenchyma Plugins that you can use right away for your own application, or take as a starting point if you would like to create your own Plugin.

  • BLAS - Collenchyma plugin for backend-agnostic Basic Linear Algebra Subprogram Operations.
  • NN - Collenchyma plugin for Neural Network related algorithms.

You can easily write your own backend-agnostic, parallel operations and provide them to others via a Plugin - we are happy to feature your Plugin here, just send us a PR.

Examples

Collenchyma comes without any operations. The following example therefore assumes that you have added both collenchyma and the Collenchyma Plugin collenchyma-nn to your Cargo manifest.

extern crate collenchyma as co;
extern crate collenchyma_nn as nn;
use co::prelude::*;
use nn::*;

fn write_to_memory<T: Copy>(mem: &mut MemoryType, data: &[T]) {
    if let &mut MemoryType::Native(ref mut mem) = mem {
        let mut mem_buffer = mem.as_mut_slice::<T>();
        for (index, datum) in data.iter().enumerate() {
            mem_buffer[index] = *datum;
        }
    }
}

fn main() {
    // Initialize a CUDA Backend.
    let backend = Backend::<Cuda>::default().unwrap();
    // Initialize two SharedTensors.
    let mut x = SharedTensor::<f32>::new(backend.device(), &(1, 1, 3)).unwrap();
    let mut result = SharedTensor::<f32>::new(backend.device(), &(1, 1, 3)).unwrap();
    // Fill `x` with some data.
    let payload: &[f32] = &::std::iter::repeat(1f32).take(x.capacity()).collect::<Vec<f32>>();
    let native = Backend::<Native>::default().unwrap();
    x.add_device(native.device()).unwrap(); // Add native host memory
    x.sync(native.device()).unwrap(); // Sync to native host memory
    write_to_memory(x.get_mut(native.device()).unwrap(), payload); // Write to native host memory.
    x.sync(backend.device()).unwrap(); // Sync the data to the CUDA device.
    // Run the sigmoid operation, provided by the NN Plugin, on your CUDA enabled GPU.
    backend.sigmoid(&mut x, &mut result).unwrap();
    // See the result.
    result.add_device(native.device()).unwrap(); // Add native host memory
    result.sync(native.device()).unwrap(); // Sync the result to host memory.
    println!("{:?}", result.get(native.device()).unwrap().as_native().unwrap().as_slice::<f32>());
}

View on Github

9 - Statrs-dev/statrs:

Robust statistical computation library in Rust.

Statrs provides a host of statistical utilities for Rust scientific computing. Included are a number of common distributions that can be sampled (e.g. Normal, Exponential, Student's T, Gamma, Uniform, etc.) plus common statistical functions like the gamma function, beta function, and error function.

This library is a work-in-progress port of the statistical capabilities in the C# Math.NET library. All unit tests in the library are borrowed from Math.NET when possible and filled in when not.

This library is a work-in-progress and not complete. Planned for future releases are continued implementations of distributions as well as porting over more statistical utilities.

Please check out the documentation here.

Usage

Add the most recent release to your Cargo.toml

[dependencies]
statrs = "0.16"

Examples

Statrs comes with a number of commonly used distributions including Normal, Gamma, Student's T, Exponential, Weibull, etc. The common use case is to set up the distributions and sample from them, which depends on the rand crate for random number generation:

use statrs::distribution::Exp;
use rand::distributions::Distribution;

let mut r = rand::rngs::OsRng;
let n = Exp::new(0.5).unwrap();
print!("{}", n.sample(&mut r));

Statrs also comes with a number of useful utility traits for more detailed introspection of distributions:

use statrs::distribution::{Exp, Continuous, ContinuousCDF};
use statrs::statistics::Distribution;

let n = Exp::new(1.0).unwrap();
assert_eq!(n.mean(), Some(1.0));
assert_eq!(n.variance(), Some(1.0));
assert_eq!(n.entropy(), Some(1.0));
assert_eq!(n.skewness(), Some(2.0));
assert_eq!(n.cdf(1.0), 0.6321205588285576784045);
assert_eq!(n.pdf(1.0), 0.3678794411714423215955);

as well as utility functions including erf, gamma, ln_gamma, beta, etc.

use statrs::statistics::Distribution;
use statrs::distribution::FisherSnedecor;

let n = FisherSnedecor::new(1.0, 1.0).unwrap();
assert!(n.variance().is_none());
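
The special functions mentioned above can also be called directly. A small sketch, assuming they live under the statrs::function module as in recent releases:

use statrs::function::{beta, erf, gamma};

fn main() {
    // gamma(5) = 4! = 24 and beta(1, 1) = 1
    assert!((gamma::gamma(5.0) - 24.0).abs() < 1e-9);
    assert!((beta::beta(1.0, 1.0) - 1.0).abs() < 1e-12);
    // erf(0) = 0
    assert!(erf::erf(0.0).abs() < 1e-12);
    println!("ln_gamma(10) = {}", gamma::ln_gamma(10.0));
}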

Contributing

Want to contribute? Check out some of the issues marked help wanted

How to contribute

Clone the repo:

git clone https://github.com/statrs-dev/statrs

Create a feature branch:

git checkout -b <feature_branch> master

After committing your code:

git push -u origin <feature_branch>

Then submit a PR, preferably referencing the relevant issue.

View on Github

Thank you for following this article.

Related videos:

Rust: Creating A Simple Math Library

#rust #compute 


9 Popular Golang Libraries for Version Control

In today's post we will learn about 9 Popular Golang Libraries for Version Control.

What is Version Control?

Version control, also known as source control, is the practice of tracking and managing changes to software code. Version control systems are software tools that help software teams manage changes to source code over time. As development environments have accelerated, version control systems help software teams work faster and smarter. They are especially useful for DevOps teams since they help them to reduce development time and increase successful deployments.

Version control software keeps track of every modification to the code in a special kind of database. If a mistake is made, developers can turn back the clock and compare earlier versions of the code to help fix the mistake while minimizing disruption to all team members.

Table of contents:

  • Froggit-go - Froggit-Go is a Go library that allows performing actions on VCS providers.
  • GH - Scriptable server and net/http middleware for GitHub Webhooks.
  • Git2go - Go bindings for libgit2.
  • Githooks - Per-repo and shared Git hooks with version control and auto update.
  • Glab - An open-source GitLab command line tool bringing GitLab's cool features to your command line.
  • Go-git - Highly extensible Git implementation in pure Go.
  • Go-vcs - Manipulate and inspect VCS repositories in Go.
  • Hercules - Gaining advanced insights from Git repository history.
  • Hgo - Hgo is a collection of Go packages providing read-access to local Mercurial repositories.

1 - Froggit-go:

Froggit-Go is a Go library that allows performing actions on VCS providers. Currently supported providers are GitHub, Bitbucket Server, Bitbucket Cloud, and GitLab.

VCS Clients

Create Clients

GitHub

The GitHub API v3 is used.

// The VCS provider. Cannot be changed.
vcsProvider := vcsutils.GitHub
// API endpoint to GitHub. Leave empty to use the default - https://api.github.com
apiEndpoint := "https://github.example.com"
// Access token to GitHub
token := "secret-github-token"
// Logger
logger := log.Default()

client, err := vcsclient.NewClientBuilder(vcsProvider).ApiEndpoint(apiEndpoint).Token(token).Build()

GitLab

The GitLab API v4 is used.

// The VCS provider. Cannot be changed.
vcsProvider := vcsutils.GitLab
// API endpoint to GitLab. Leave empty to use the default - https://gitlab.com
apiEndpoint := "https://gitlab.example.com"
// Access token to GitLab
token := "secret-gitlab-token"
// Logger
logger := log.Default()

client, err := vcsclient.NewClientBuilder(vcsProvider).ApiEndpoint(apiEndpoint).Token(token).Build()

Bitbucket Server

The Bitbucket API 1.0 is used.

// The VCS provider. Cannot be changed.
vcsProvider := vcsclient.BitbucketServer
// API endpoint to Bitbucket server. Typically ends with /rest.
apiEndpoint := "https://git.acme.com/rest"
// Access token to Bitbucket
token := "secret-bitbucket-token"
// Logger
logger := log.Default()

client, err := vcsclient.NewClientBuilder(vcsProvider).ApiEndpoint(apiEndpoint).Token(token).Build()

Bitbucket Cloud

The Bitbucket Cloud API version 2.0 is used, and the version should be added to the apiEndpoint.

// The VCS provider. Cannot be changed.
vcsProvider := vcsutils.BitbucketCloud
// API endpoint to Bitbucket cloud. Leave empty to use the default - https://api.bitbucket.org/2.0
apiEndpoint := "https://bitbucket.example.com"
// Bitbucket username
username := "bitbucket-user"
// Password or Bitbucket "App Password"
token := "secret-bitbucket-token"
// Logger
logger := log.Default()

client, err := vcsclient.NewClientBuilder(vcsProvider).ApiEndpoint(apiEndpoint).Username(username).Token(token).Build()

View on Github

2 - GH:

Scriptable server and net/http middleware for GitHub Webhooks.

Commands and packages for GitHub services.

Installation

~ $ go get -u github.com/rjeczalik/gh

webhook 

Package webhook implements middleware for GitHub Webhooks. The user provides a webhook service object that handles events delivered by GitHub. The webhook handler verifies the payload signature delivered along with the event, unmarshals it into the corresponding event struct and dispatches control to the user service.

Examples

Notify Slack's channel about recent push:

package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"net/url"

	"github.com/rjeczalik/gh/webhook"
)

var (
	secret  = flag.String("secret", "", "GitHub webhook secret")
	token   = flag.String("token", "", "Slack API token")
	channel = flag.String("channel", "", "Slack channel name")
)

type slack struct{}

func (s slack) Push(e *webhook.PushEvent) {
	const format = "https://slack.com/api/chat.postMessage?token=%s&channel=%s&text=%s"
	text := url.QueryEscape(fmt.Sprintf("%s pushed to %s", e.Pusher.Email, e.Repository.Name))
	if _, err := http.Get(fmt.Sprintf(format, *token, *channel, text)); err != nil {
		log.Println(err)
	}
}

func main() {
	flag.Parse()
	log.Fatal(http.ListenAndServe(":8080", webhook.New(*secret, slack{})))
}

Notify HipChat's room about recent push:

package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"strings"

	"github.com/rjeczalik/gh/webhook"
)

var (
	secret = flag.String("secret", "", "GitHub webhook secret")
	token  = flag.String("token", "", "HipChat personal API token")
	room   = flag.String("room", "", "HipChat room ID")
)

type hipchat struct{}

func (h hipchat) Push(e *webhook.PushEvent) {
	url := fmt.Sprintf("https://api.hipchat.com/v2/room/%s/notification", *room)
	body := fmt.Sprintf(`{"message":"%s pushed to %s"}`, e.Pusher.Email, e.Repository.Name)
	req, err := http.NewRequest("POST", url, strings.NewReader(body))
	if err != nil {
		log.Println(err)
		return
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+*token)
	if _, err := http.DefaultClient.Do(req); err != nil {
		log.Println(err)
	}
}

func main() {
	flag.Parse()
	log.Fatal(http.ListenAndServe(":8080", webhook.New(*secret, hipchat{})))
}

View on Github

3 - Git2go:

Go bindings for libgit2.

Which Go version to use

Due to the fact that Go 1.11 module versions have semantic meaning and don't necessarily align with libgit2's release schedule, please consult the following table for a mapping between libgit2 and git2go module versions:

libgit2    git2go
main       (will be v34)
1.3        v33
1.2        v32
1.1        v31
1.0        v30
0.99       v29
0.28       v28
0.27       v27

You can import them in your project with the version's major number as a suffix. For example, if you have libgit2 v1.3 installed, you'd import git2go v33 with:

go get github.com/libgit2/git2go/v33
import "github.com/libgit2/git2go/v33"

which will ensure there are no sudden changes to the API.

The main branch follows the tip of libgit2 itself (with some lag) and as such has no guarantees on the stability of libgit2's API. Thus this only supports statically linking against libgit2.

Which branch to send Pull requests to

If there's something version-specific that you'd want to contribute to, you can send them to the release-${MAJOR}.${MINOR} branches, which follow libgit2's releases.

Installing

This project wraps the functionality provided by libgit2, so libgit2 needs to be available in order to perform the work.

If you're using a versioned branch, install libgit2 to your system via your system's package manager and then install git2go.

Versioned branch, dynamic linking

When linking dynamically against a released version of libgit2, install it via your system's package manager. CGo will take care of finding its pkg-config file and set up the linking. Import via Go modules, e.g. to work against libgit2 v1.3:

import "github.com/libgit2/git2go/v33"

Versioned branch, static linking

Follow the instructions for Versioned branch, dynamic linking, but pass the -tags static,system_libgit2 flag to all go commands that build any binaries. For instance:

go build -tags static,system_libgit2 github.com/my/project/...
go test -tags static,system_libgit2 github.com/my/project/...
go install -tags static,system_libgit2 github.com/my/project/...

main branch, or vendored static linking

If using main or building a branch with the vendored libgit2 statically, we need to build libgit2 first. In order to build it, you need cmake, pkg-config and a C compiler. You will also need the development packages for OpenSSL (outside of Windows or macOS) and LibSSH2 installed if you want libgit2 to support HTTPS and SSH respectively. Note that even if libgit2 is included in the resulting binary, its dependencies will not be.

Run go get -d github.com/libgit2/git2go to download the code and go to your $GOPATH/src/github.com/libgit2/git2go directory. From there, we need to build the C code and put it into the resulting go binary.

git submodule update --init # get libgit2
make install-static

will compile libgit2, link it into git2go and install it. The main branch is set up to follow the specific libgit2 version that is vendored, so trying dynamic linking may or may not work depending on the exact versions involved.

In order to let Go pass the correct flags to pkg-config, -tags static needs to be passed to all go commands that build any binaries. For instance:

go build -tags static github.com/my/project/...
go test -tags static github.com/my/project/...
go install -tags static github.com/my/project/...

One thing to take into account is that Go expects the pkg-config file to be within the same directory where make install-static was called, so the go.mod file may need to have a replace directive so that the correct setup is achieved. If git2go is checked out at $GOPATH/src/github.com/libgit2/git2go and your project at $GOPATH/src/github.com/my/project, the go.mod file of github.com/my/project might need to have a line like

replace github.com/libgit2/git2go/v33 => ../../libgit2/git2go

View on Github

4 - Githooks:

Per-repo and shared Git hooks with version control and auto update.

A platform-independent hooks manager written in Go to support shared hook repositories and per-repository Git hooks, checked into the working repository. This implementation is the Go port and successor of the original implementation (see Migration).

To make this work, the installer creates run-wrappers for Githooks that are installed into the .git/hooks folders automatically on git init and git clone. There's more to the story though. When one of the Githooks run-wrappers executes, Githooks starts up and tries to find matching hooks in the .githooks directory under the project root, and invokes them one-by-one. It also searches for hooks in configured shared hook repositories.

This Git hook manager supports:

  • Running repository checked-in hooks.
  • Running shared hooks from other Git repositories (with auto-update).
  • Git LFS support.
  • Command line interface.
  • Fast execution due to compiled executable. (even 2-3x faster with v2.1.1)
  • Fast parallel execution over threadpool.
  • Ignoring non-shared and shared hooks with patterns.
  • Automatic Githooks updates: Fully configurable for your own company by url/branch and deploy settings.
  • Bonus: Platform-independent dialog tool for user prompts inside your own hooks.

Layout and Options

Take this snippet of a Git repository layout as an example:

/
├── .githooks/
│    ├── commit-msg/          # All commit-msg hooks.
│    │    ├── validate        # Normal hook script.
│    │    └── add-text        # Normal hook script.
│    │
│    ├── pre-commit/          # All pre-commit hooks.
│    │    ├── .ignore.yaml    # Ignores relative to 'pre-commit' folder.
│    │    ├── 01-validate     # Normal hook script.
│    │    ├── 02-lint         # Normal hook script.
│    │    ├── 03-test.yaml    # Hook run configuration.
│    │    ├── docs.md         # Ignored in '.ignore.yaml'.
│    │    └── final/          # Batch folder 'final' which runs all in parallel.
│    │        ├── 01-validate # Normal hook script.
│    │        └── 02-upload   # Normal hook script.
│    │
│    ├── post-checkout/       # All post-checkout hooks.
│    │   ├── .all-parallel    # All hooks in this folder run in parallel.
│    │   └── ...
│    ├── ...
│    ├── .ignore.yaml         # Main ignores.
│    ├── .shared.yaml         # Shared hook configuration.
│    ├── .envs.yaml           # Environment variables passed to shared hooks.
│    └── .lfs-required        # LFS is required.
└── ...

All hooks to be executed live under the .githooks top-level folder, which should be checked into the repository. Inside, we can have directories with the name of the hook (like commit-msg and pre-commit above), or a file matching the hook name (like post-checkout in the example). The filenames in the directory do not matter, but the ones starting with a . (dotfiles) will be excluded by default. All others are executed in lexical order, following the rules of the Go function Walk. Subfolders such as final are treated as a parallel batch, and all hooks inside are by default executed in parallel over the thread pool. See Parallel Execution for details.

You can use the command line helper (a globally configured Git alias alias.hooks), that is git hooks list, to list all hooks that apply to the current repository and their current state.

Execution

If a file is executable, it is directly invoked; otherwise it is interpreted with the sh shell. On Windows that mostly means dispatching to the bash.exe from https://gitforwindows.org.

All parameters and standard input are forwarded from Git to the hooks. The standard output and standard error of any hook which Githooks runs are captured together [1] and printed to the standard error stream, which might or might not get read by Git itself (e.g. pre-push).

Hooks can also be specified by a run configuration in a corresponding YAML file, see Hook Run Configuration.

Hooks related to commit events (where it makes sense, not post-commit) will also have a ${STAGED_FILES} environment variable set, i.e. the list of staged and changed files according to git diff --cached --diff-filter=ACMR --name-only. File paths are separated by a newline \n. If you want to iterate in a shell script over them, and expect spaces in paths, you might want to set the IFS like this:

IFS="
"
for STAGED in ${STAGED_FILES}; do
    ...
done

The ACMR filter in the git diff will include staged files that are added, copied, modified or renamed.

[1] Note: This caveat exists basically because standard output and error might get interleaved badly, and so far no solution to this small problem has been tackled. It is far better to output both streams in the correct order, and therefore send them to the error stream, because that will not conflict in any way with Git (see fsmonitor-watchman, unsupported right now). If that poses a real problem for you, open an issue.

View on Github

5 - Glab:

An open-source GitLab command line tool bringing GitLab's cool features to your command line.

GLab is an open source GitLab CLI tool bringing GitLab to your terminal, next to where you are already working with git and your code, without switching between windows and browser tabs. Work with issues and merge requests, and watch running pipelines directly from your CLI, among other features. Inspired by gh, the official GitHub CLI tool.

glab is available for repositories hosted on GitLab.com and self-hosted GitLab Instances. glab supports multiple authenticated GitLab instances and automatically detects the authenticated hostname from the remotes available in the working git directory.

Usage

glab <command> <subcommand> [flags]


Documentation

Read the documentation for usage instructions.

Installation

Download a binary suitable for your OS at the releases page.

Quick Install

Supported Platforms: Linux and macOS

Homebrew

brew install glab

Updating (Homebrew):

brew upgrade glab

Alternatively, you can install glab by shell script:

curl -sL https://j.mp/glab-cli | sudo sh

or

curl -s https://raw.githubusercontent.com/profclems/glab/trunk/scripts/install.sh | sudo sh

Installs into /usr/bin.

NOTE: Please take care when running scripts in this fashion. Consider peeking at the install script itself and verify that it works as intended.

View on Github

6 - Go-git:

Highly extensible Git implementation in pure Go.

go-git is a highly extensible git implementation library written in pure Go.

It can be used to manipulate git repositories at low level (plumbing) or high level (porcelain), through an idiomatic Go API. It also supports several types of storage, such as in-memory filesystems, or custom implementations, thanks to the Storer interface.

It has been actively developed since 2015 and is used extensively by Keybase, Gitea and Pulumi, and by many other libraries and tools.

Project Status

After the legal issues with the src-d organization, the lack of updates for four months and the requirement to make a hard fork, the project is now back to normality.

The project is currently actively maintained by individual contributors, including several of the original authors, but also backed by a new company, gitsight, where go-git is a critical component used at scale.

Comparison with git

go-git aims to be fully compatible with git; all the porcelain operations are implemented to work exactly as git does.

git is a humongous project with years of development by thousands of contributors, making it challenging for go-git to implement all the features. You can find a comparison of go-git vs git in the compatibility documentation.

Installation

The recommended way to install go-git is:

import "github.com/go-git/go-git/v5" // with go modules enabled (GO111MODULE=on or outside GOPATH)
import "github.com/go-git/go-git" // with go modules disabled

Examples

Please note that the CheckIfError and Info functions used in the examples are from the examples package just to be used in the examples.

Basic example

A basic example that mimics the standard git clone command

// Clone the given repository to the given directory
Info("git clone https://github.com/go-git/go-git")

_, err := git.PlainClone("/tmp/foo", false, &git.CloneOptions{
    URL:      "https://github.com/go-git/go-git",
    Progress: os.Stdout,
})

CheckIfError(err)

Outputs:

Counting objects: 4924, done.
Compressing objects: 100% (1333/1333), done.
Total 4924 (delta 530), reused 6 (delta 6), pack-reused 3533

In-memory example

Cloning a repository into memory and printing the history of HEAD, just like git log does

// Clones the given repository in memory, creating the remote, the local
// branches and fetching the objects, exactly as:
Info("git clone https://github.com/go-git/go-billy")

r, err := git.Clone(memory.NewStorage(), nil, &git.CloneOptions{
    URL: "https://github.com/go-git/go-billy",
})

CheckIfError(err)

// Gets the HEAD history from HEAD, just like this command:
Info("git log")

// ... retrieves the branch pointed by HEAD
ref, err := r.Head()
CheckIfError(err)


// ... retrieves the commit history
cIter, err := r.Log(&git.LogOptions{From: ref.Hash()})
CheckIfError(err)

// ... just iterates over the commits, printing it
err = cIter.ForEach(func(c *object.Commit) error {
	fmt.Println(c)
	return nil
})
CheckIfError(err)

View on Github

7 - Go-vcs:

Manipulate and inspect VCS repositories in Go.

go-vcs is a library for manipulating and inspecting VCS repositories in Go. It currently supports Git and Mercurial (hg).

Note: the public API is experimental and subject to change until further notice.

Resolving dependencies

For hg blame, you need to install hglib: pip install python-hglib.

Installing

go get -u sourcegraph.com/sourcegraph/go-vcs/vcs

Implementation differences

The goal is to have all supported backends (git, gitcmd, hg, hgcmd) at feature parity, but until then, the following features are not yet available in every backend:

  • vcs.CommitsOptions.Path
  • vcs.BranchesOptions.MergedInto
  • vcs.BranchesOptions.IncludeCommit
  • vcs.BranchesOptions.BehindAheadBranch
  • vcs.Repository.Committers
  • vcs.FileLister
  • vcs.UpdateResult

Contributions that fill in the gaps are welcome!

Development

First-time installation of protobuf and other codegen tools

You need to install and run the protobuf compiler before you can regenerate Go code after you change the vcs.proto file.

Install protoc, the protobuf compiler. Find more details in the protobuf README.

On OS X, you can install it with Homebrew by running:

brew install --devel protobuf

Then make sure the protoc binary is in your $PATH.

Install gogo/protobuf.

go get -u github.com/gogo/protobuf/...

Install gopathexec:

go get -u sourcegraph.com/sourcegraph/gopathexec

Regenerating Go code after changing vcs.proto

go generate sourcegraph.com/sourcegraph/go-vcs/vcs/...

View on Github

8 - Hercules: 

Gaining advanced insights from Git repository history.

Installation

Grab the hercules binary from the Releases page. labours is installable from PyPi:

pip3 install labours

pip3 is the Python package manager.

Numpy and Scipy can be installed on Windows using http://www.lfd.uci.edu/~gohlke/pythonlibs/

Build from source

You are going to need Go (>= v1.11) and protoc.

git clone https://github.com/src-d/hercules && cd hercules
make
pip3 install -e ./python

Usage

The most useful and reliably up-to-date command line reference:

hercules --help

Some examples:

# Use "memory" go-git backend and display the burndown plot. "memory" is the fastest but the repository's git data must fit into RAM.
hercules --burndown https://github.com/go-git/go-git | labours -m burndown-project --resample month
# Use "file system" go-git backend and print some basic information about the repository.
hercules /path/to/cloned/go-git
# Use "file system" go-git backend, cache the cloned repository to /tmp/repo-cache, use Protocol Buffers and display the burndown plot without resampling.
hercules --burndown --pb https://github.com/git/git /tmp/repo-cache | labours -m burndown-project -f pb --resample raw

# Now something fun
# Get the linear history from git rev-list, reverse it
# Pipe to hercules, produce burndown snapshots for every 30 days grouped by 30 days
# Save the raw data to cache.yaml, so that later it is possible to run labours -i cache.yaml
# Pipe the raw data to labours, set text font size to 16pt, use Agg matplotlib backend and save the plot to output.png
git rev-list HEAD | tac | hercules --commits - --burndown https://github.com/git/git | tee cache.yaml | labours -m burndown-project --font-size 16 --backend Agg --output git.png

labours -i /path/to/yaml allows reading the output from hercules that was saved on disk.

Caching

It is possible to store the cloned repository on disk. The subsequent analysis can run on the corresponding directory instead of cloning from scratch:

# First time - cache
hercules https://github.com/git/git /tmp/repo-cache

# Second time - use the cache
hercules --some-analysis /tmp/repo-cache

View on Github

9 - Hgo:

Hgo is a collection of Go packages providing read-access to local Mercurial repositories.

Hgo is a collection of Go packages providing read-access to local Mercurial repositories. Only a subset of Mercurial's functionality is supported. It is possible to access revisions of files and to read changelogs, manifests, and tags.

Hgo supports the following repository features:

* revlogv1
* store
* fncache (no support for hash encoded names, though)
* dotencode

The Go packages have been implemented from scratch, based on information found in Mercurial's wiki.

The project should be considered unstable. The BUGS file lists known issues yet to be addressed.

cmd/hgo contains an example program that implements a few commands similar to a subset of Mercurial's hg.

View on Github

Thank you for following this article.

Related videos:

Source Control (a.k.a Version Control System) and DevOps

#go #golang #version 

Reid Rohan

Semantic-release/npm: Semantic-release Plugin to Publish A NPM Package

@semantic-release/npm

semantic-release plugin to publish an npm package.

Step              Description
verifyConditions  Verify the presence of the NPM_TOKEN environment variable, or an .npmrc file, and verify the authentication method is valid.
prepare           Update the package.json version and create the npm package tarball.
addChannel        Add a release to a dist-tag.
publish           Publish the npm package to the registry.

Install

$ npm install @semantic-release/npm -D

Usage

The plugin can be configured in the semantic-release configuration file:

{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/npm",
  ]
}

Configuration

Npm registry authentication

The npm authentication configuration is required and can be set via environment variables.

Both the token and the legacy (username, password and email) authentication are supported. It is recommended to use the token authentication. The legacy authentication is supported because the alternative npm registries Artifactory and npm-registry-couchapp only support that form of authentication.

Notes:

  • Only the auth-only level of npm two-factor authentication is supported; semantic-release will not work with the default auth-and-writes level.
  • The presence of an .npmrc file will override any specified environment variables.

Environment variables

Variable               Description
NPM_TOKEN              Npm token created via npm token create
NPM_USERNAME           Npm username created via npm adduser or on npmjs.com
NPM_PASSWORD           Password of the npm user
NPM_EMAIL              Email address associated with the npm user
NPM_CONFIG_USERCONFIG  Path to a non-default .npmrc file

Use either NPM_TOKEN for token authentication or NPM_USERNAME, NPM_PASSWORD and NPM_EMAIL for legacy authentication

Options

  • npmPublish - Whether to publish the npm package to the registry. If false, the package.json version will still be updated. Default: false if the package.json private property is true, true otherwise.
  • pkgRoot - Directory path to publish. Default: ".".
  • tarballDir - Directory path in which to write the package tarball. If false, the tarball is not kept on the file system. Default: false.

Note: The pkgRoot directory must contain a package.json. The version will be updated only in the package.json and npm-shrinkwrap.json within the pkgRoot directory.

Note: If you use a shareable configuration that defines one of these options you can set it to false in your semantic-release configuration in order to use the default value.

Npm configuration

The plugin uses the npm CLI which will read the configuration from .npmrc. See npm config for the option list.

The registry can be configured via the npm environment variable NPM_CONFIG_REGISTRY and will take precedence over the configuration in .npmrc.

The registry and dist-tag can be configured in the package.json and will take precedence over the configuration in .npmrc and NPM_CONFIG_REGISTRY:

{
  "publishConfig": {
    "registry": "https://registry.npmjs.org/",
    "tag": "latest"
  }
}

Examples

The npmPublish and tarballDir options can be used to skip the publishing to the npm registry and instead release the package tarball with another plugin. For example with the @semantic-release/github plugin:

{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    ["@semantic-release/npm", {
      "npmPublish": false,
      "tarballDir": "dist",
    }],
    ["@semantic-release/github", {
      "assets": "dist/*.tgz"
    }]
  ]
}

When publishing from a sub-directory with the pkgRoot option, the package.json and npm-shrinkwrap.json updated with the new version can be moved to another directory with a postversion script. For example with the @semantic-release/git plugin:

{
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    ["@semantic-release/npm", {
      "pkgRoot": "dist",
    }],
    ["@semantic-release/git", {
      "assets": ["package.json", "npm-shrinkwrap.json"]
    }]
  ]
}
{
  "scripts": {
    "postversion": "cp -r package.json .. && cp -r npm-shrinkwrap.json .."
  }
}

Download Details:

Author: Semantic-release
Source Code: https://github.com/semantic-release/npm 
License: MIT license

#javascript #npm #registry #version 


Juliavm: Julia Version Manager

juliavm

A Julia version manager

JuliaVM is a command-line tool which allows you to easily install, manage, and work with Julia environments and switch between them. It's inspired by rvm and nvm.

Install

Clone the repo:

git clone https://github.com/pmargreff/juliavm

Inside the repo, give the install script execute permission:

cd juliavm && chmod u+x install.sh

Run the install script:

./install.sh

Commands

  • juliavm ls-remote - list all remote versions
  • juliavm ls - list all local versions
  • juliavm install x.y.z [-ARCHITECTURE] - install version x.y.z; ARCHITECTURE is an optional param
  • juliavm use x.y.z [-ARCHITECTURE] - use version x.y.z; ARCHITECTURE is an optional param
  • juliavm update - update juliavm with the latest resources
  • juliavm uninstall [--hard] - uninstall juliavm and all Julia versions downloaded inside juliavm; with the --hard parameter it also uninstalls all Julia packages, otherwise a soft uninstall (which doesn't delete the major Julia packages) is performed
  • juliavm help - list all available commands

Architectures

  • -x64 - unix 64 bits
  • -x86 - unix 32 bits
  • If you don't pass the architecture, unix 64 bits will be used.

Only the Unix (32 and 64 bit) versions are supported right now; feel free to add OSX compatibility or use asdf instead.

Download Details:

Author: Pmargreff
Source Code: https://github.com/pmargreff/juliavm 
License: MIT license

#julia #version 

Royce Reinger

Rash_alt: Simple extension to Hashie::Mash for rubyified keys

rash 

Rash is an extension to Hashie (hashie/hashie).

Rash subclasses Hashie::Mash to convert all keys in the hash to underscore.

The purpose of this is for working with Java (or any other APIs) that return hashes (including nested ones) that have camelCased keys.

You will now be able to access those keys through underscored key names (camelCase still available).

Installation

Add this line to your application's Gemfile:

gem 'rash_alt', require: 'rash'

And then execute:

$ bundle

Or install it yourself as:

$ gem install rash_alt

Usage

@rash = Hashie::Mash::Rash.new({
  "varOne" => 1,
  "two" => 2,
  :three => 3,
  :varFour => 4,
  "fiveHumpHumps" => 5,
  :nested => {
    "NestedOne" => "One",
    :two => "two",
    "nested_three" => "three"
  },
  "nestedTwo" => {
    "nested_two" => 22,
    :nestedThree => 23
  }
})

@rash.var_one                 # => 1
@rash.two                     # => 2
@rash.three                   # => 3
@rash.var_four                # => 4
@rash.five_hump_humps         # => 5
@rash.nested.nested_one       # => "One"
@rash.nested.two              # => "two"
@rash.nested.nested_three     # => "three"
@rash.nested_two.nested_two   # => 22
@rash.nested_two.nested_three # => 23

Known Issue

You may see Hashie warnings like this:

WARN -- : You are setting a key that conflicts with a built-in method Hashie::Mash::Rash#varOne defined in Hashie::Mash::Rash. This can cause unexpected behavior when accessing the key as a property. You can still access the key via the #[] method.

If you want to disable this, use disable_warnings:

https://github.com/hashie/hashie#mash

Note on Patches/Pull Requests

  • Fork the project.
  • Make your feature addition or bug fix.
  • Add tests for it. This is important so I don't break it in a future version unintentionally.
  • Commit, do not mess with rakefile, version, or history. (if you want to have your own version, that is fine but bump version in a commit by itself I can ignore when I pull)
  • Send me a pull request. Bonus points for topic branches.

Copyright

  • Copyright (c) 2010 Tom Cocca.
  • Copyright (c) 2016 Shigenobu Nishikawa.

Acknowledgements

Download Details:

Author: Shishi
Source Code: https://github.com/shishi/rash_alt 
License: MIT license

#ruby #version #extension 

Royce Reinger

Http-cookie: A Ruby Library to Handle HTTP Cookies

HTTP::Cookie

HTTP::Cookie is a ruby library to handle HTTP cookies in a way both compliant with RFCs and compatible with today's major browsers.

It was originally a part of the Mechanize library, separated as an independent library in the hope of serving as a common component that is reusable from any HTTP related piece of software.

The following is an incomplete list of its features:

Its behavior is highly compatible with that of today's major web browsers.

It is based on and conforms to RFC 6265 (the latest standard for the HTTP cookie mechanism) to a high extent, with real world conventions deeply in mind.

It takes eTLD (effective TLD, also known as "Public Suffix") into account just as major browsers do, to reject cookies with an eTLD domain like "org", "co.jp", or "appspot.com". This feature is brought to you by the domain_name gem.

The number of cookies and the size are properly capped so that a cookie store does not get flooded.

It supports the legacy Netscape cookies.txt format for serialization, maximizing the interoperability with other implementations.

It supports the cookies.sqlite format adopted by Mozilla Firefox for backend store database which can be shared among multiple program instances.

It is relatively easy to add a new serialization format or a backend store because of its modular API.

Installation

Add this line to your application's Gemfile:

gem 'http-cookie'

And then execute:

$ bundle

Or install it yourself as:

$ gem install http-cookie

Usage

########################
# Client side example 1
########################

# Initialize a cookie jar
jar = HTTP::CookieJar.new

# Load from a file
jar.load(filename) if File.exist?(filename)

# Store received cookies, where uri is the origin of this header
header["Set-Cookie"].each { |value|
  jar.parse(value, uri)
}

# ...

# Set the Cookie header value, where uri is the destination URI
header["Cookie"] = HTTP::Cookie.cookie_value(jar.cookies(uri))

# Save to a file
jar.save(filename)


########################
# Client side example 2
########################

# Initialize a cookie jar using a Mozilla compatible SQLite3 backend
jar = HTTP::CookieJar.new(store: :mozilla, filename: 'cookies.sqlite')

# There is no need for load & save in this backend.

# Store received cookies, where uri is the origin of this header
header["Set-Cookie"].each { |value|
  jar.parse(value, uri)
}

# ...

# Set the Cookie header value, where uri is the destination URI
header["Cookie"] = HTTP::Cookie.cookie_value(jar.cookies(uri))


########################
# Server side example
########################

# Generate a domain cookie
cookie1 = HTTP::Cookie.new("uid", "u12345", domain: 'example.org',
                                            for_domain: true,
                                            path: '/',
                                            max_age: 7*86400)

# Add it to the Set-Cookie response header
header['Set-Cookie'] = cookie1.set_cookie_value

# Generate a host-only cookie
cookie2 = HTTP::Cookie.new("aid", "a12345", origin: my_url,
                                            path: '/',
                                            max_age: 7*86400)

# Add it to the Set-Cookie response header
header['Set-Cookie'] = cookie2.set_cookie_value

Incompatibilities with Mechanize::Cookie/CookieJar

There are several incompatibilities between Mechanize::Cookie/CookieJar and HTTP::Cookie/CookieJar. Below is how to rewrite existing code written for Mechanize::Cookie using the HTTP::Cookie equivalents:

Mechanize::Cookie.parse

The parameter order changed in HTTP::Cookie.parse.

  # before
  cookies1 = Mechanize::Cookie.parse(uri, set_cookie1)
  cookies2 = Mechanize::Cookie.parse(uri, set_cookie2, log)

  # after
  cookies1 = HTTP::Cookie.parse(set_cookie1, uri_or_url)
  cookies2 = HTTP::Cookie.parse(set_cookie2, uri_or_url, logger: log)
  # or you can directly store parsed cookies in your jar
  jar.parse(set_cookie1, uri_or_url)
  jar.parse(set_cookie1, uri_or_url, logger: log)

Mechanize::Cookie#version, #version=

There is no longer a sense of version in the HTTP cookie specification. The only version number ever defined was zero, and there will be no other version defined since the version attribute has been removed in RFC 6265.

Mechanize::Cookie#comment, #comment=

Ditto. The comment attribute has been removed in RFC 6265.

Mechanize::Cookie#set_domain

This method was unintentionally made public. Simply use HTTP::Cookie#domain=.

  # before
  cookie.set_domain(domain)

  # after
  cookie.domain = domain

Mechanize::CookieJar#add, #add!

Always use HTTP::CookieJar#add.

  # before
  jar.add!(cookie1)
  jar.add(uri, cookie2)

  # after
  jar.add(cookie1)
  cookie2.origin = uri; jar.add(cookie2)  # or specify origin in parse() or new()

Mechanize::CookieJar#clear!

Use HTTP::CookieJar#clear.

  # before
  jar.clear!

  # after
  jar.clear

Mechanize::CookieJar#save_as

Use HTTP::CookieJar#save.

  # before
  jar.save_as(file)

  # after
  jar.save(file)

Mechanize::CookieJar#jar

There is no direct access to the internal hash in HTTP::CookieJar since it has introduced an abstract store layer. If you want to tweak the internals of the hash store, try creating a new store class referring to the default store class HTTP::CookieJar::HashStore.

If you desperately need it you can access it by jar.store.instance_variable_get(:@jar), but there is no guarantee that it will remain available in the future.

HTTP::Cookie/CookieJar raise runtime errors to help migration, so after replacing the class names, try running your test code once to find out how to fix your code base.

File formats

The YAML serialization format has changed, and HTTP::CookieJar#load cannot import what is written in a YAML file saved by Mechanize::CookieJar#save_as. HTTP::CookieJar#load will not raise an exception if an incompatible YAML file is given, but the content is silently ignored.

Note that there is (obviously) no forward compatibility here. Trying to load a YAML file saved by HTTP::CookieJar with Mechanize::CookieJar will fail with a runtime error.

On the other hand, there has been (and will ever be) no change in the cookies.txt format, so use it instead if compatibility is significant.

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Download Details:

Author: Sparklemotion
Source Code: https://github.com/sparklemotion/http-cookie 
License: MIT license

#ruby #http #cookies 

Http-cookie: A Ruby Library to Handle HTTP Cookies
Kennith  Kuhic

Kennith Kuhic

1658379600

CogCompNLP: CogComp's Natural Language Processing Libraries and Demos

CogCompNLP

This project collects a number of core libraries for Natural Language Processing (NLP) developed by Cognitive Computation Group.

How to use it?

Depending on what you are after, follow one of the items:

  • To annotate raw text (i.e. no need to open the annotator boxes to retrain them) you should look into the pipeline.
  • To train and test an NLP annotator (i.e. you want to open an annotator box), see the list of components below and choose the desired one. We recommend using JDK8, as no other versions are officially supported and tested.
  • To read a corpus you should look into the corpus-readers module.
  • To do feature-extraction you should look into edison module.

CogComp's main NLP libraries

Each library contains detailed readme and instructions on how to use it. In addition the javadoc of the whole project is available here.

  • nlp-pipeline: Provides an end-to-end NLP processing application that runs a variety of NLP tools on input text.
  • core-utilities: Provides a set of NLP-friendly data structures and a number of NLP-related utilities that support writing NLP applications, running experiments, etc.
  • corpusreaders: Provides classes to read documents from corpora into core-utilities data structures.
  • curator: Supports use of CogComp NLP Curator, a tool to run NLP applications as services.
  • edison: A library for feature extraction from core-utilities data structures.
  • lemmatizer: An application that uses WordNet and simple rules to find the root forms of words in plain text.
  • tokenizer: An application that identifies sentence and word boundaries in plain text.
  • transliteration: An application that transliterates names between different scripts.
  • pos: An application that identifies the part of speech (e.g. verb + tense, noun + number) of each word in plain text.
  • ner: An application that identifies named entities in plain text according to two different sets of categories.
  • md: An application that identifies entity mentions in plain text.
  • relation-extraction: An application that identifies entity mentions, then identifies relation pairs among the detected mentions.
  • quantifier: A tool that detects mentions of quantities in text and normalizes them to a standard form.
  • inference: A suite of unified wrappers to a set of optimization libraries, as well as some basic approximate solvers.
  • depparse: An application that identifies the dependency parse tree of a sentence.
  • verbsense: A system that addresses the verb sense disambiguation (VSD) problem for English.
  • prepsrl: An application that identifies semantic relations expressed by prepositions and develops statistical learning models for predicting the relations.
  • commasrl: Software that extracts relations that commas participate in.
  • similarity: Software that compares objects -- especially Strings -- and returns a score indicating how similar they are.
  • temporal-normalizer: A temporal extractor and normalizer.
  • dataless-classifier: Classifies text into a user-specified label hierarchy from just the textual label descriptions.
  • external-annotators: A collection of useful external annotators.
  • Questions? Have a look at our FAQs.

Using each library programmatically

To include one of the modules in your Maven project, add the following snippet with the #modulename# and #version# entries replaced with the relevant module name and the version listed in this project's pom.xml file. Note that you also need to add the <repository> element for the CogComp maven repository inside the <repositories> element.

    <dependencies>
         ...
        <dependency>
            <groupId>edu.illinois.cs.cogcomp</groupId>
            <artifactId>#modulename#</artifactId>
            <version>#version#</version>
        </dependency>
        ...
    </dependencies>
    ...
    <repositories>
        <repository>
            <id>CogCompSoftware</id>
            <name>CogCompSoftware</name>
            <url>http://cogcomp.org/m2repo/</url>
        </repository>
    </repositories>

Citing

If you are using the framework, please cite our paper:

@inproceedings{2018_lrec_cogcompnlp,
    author = {Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nicholas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazhagan, Qiang Ning, Shaoshi Ling, Dan Roth},
    title = {CogCompNLP: Your Swiss Army Knife for NLP},
    booktitle = {11th Language Resources and Evaluation Conference},
    year = {2018},
    url = "http://cogcomp.org/papers/2018_lrec_cogcompnlp.pdf",
}

Author: CogComp
Source code: https://github.com/CogComp/cogcomp-nlp
License: View license

#machine-learning 

CogCompNLP: CogComp's Natural Language Processing Libraries and Demos
Mary  Turcotte

Mary Turcotte

1657267200

Rollup Plugin Glsl Optimize: Import GLSL Source Files As Strings

rollup-plugin-glsl-optimize 

Import GLSL source files as strings. Pre-processed, validated and optimized with Khronos Group SPIRV-Tools.

Primary use-case is processing WebGL2 / GLSL ES 300 shaders.

import frag from './shaders/myShader.frag';
console.log(frag);

Features

GLSL Optimizer

For WebGL2 / GLSL ES >= 300

With optimize: true (default) shaders will be compiled to SPIR-V (opengl semantics) and optimized for performance using the Khronos SPIR-V Tools Optimizer before being cross-compiled back to GLSL.

Shader Preprocessor

Shaders are preprocessed and validated using the Khronos Glslang Validator.

Macros are run at build time with support for C-style #include directives: *

#version 300 es
#include "postProcessingShared.glsl"
#include "dofCircle.glsl"

void main() {
  outColor = CircleDof(UVAndScreenPos, Color, ColorCoc);
}

* Via the GL_GOOGLE_include_directive extension. But an #extension directive is not required nor recommended in your final inlined code.

Supports glslify

Specify glslify: true to process shader sources with glslify (a node.js-style module system for GLSL).

And install glslify in your devDependencies with npm i -D glslify

Installation

npm i rollup-plugin-glsl-optimize -D

Khronos tools

This plugin uses the Khronos Glslang Validator, Khronos SPIRV-Tools Optimizer and Khronos SPIRV Cross compiler.

Binaries are automatically installed for:

  • Windows 64bit (MSVC 2017)
  • MacOS x86_64 (clang)
  • Ubuntu Trusty / Debian Buster amd64 (clang)

Paths can be manually provided / overridden with the GLSLANG_VALIDATOR, GLSLANG_OPTIMIZER, GLSLANG_CROSS environment variables.

Usage

// rollup.config.mjs
import {default as glslOptimize} from 'rollup-plugin-glsl-optimize';

export default {
    // ...
    plugins: [
        glslOptimize(),
    ]
};

Shader stages

The following shader stages are supported by the Khronos tools and recognized by file extension:

  • Vertex: .vs, .vert, .vs.glsl, .vert.glsl
  • Fragment: .fs, .frag, .fs.glsl, .frag.glsl
  • Geometry*: .geom, .geom.glsl
  • Compute*: .comp, .comp.glsl
  • Tess Control*: .tesc, .tesc.glsl
  • Tess Eval*: .tese, .tese.glsl

* Unsupported in WebGL2

Options

  • include : PathFilter (default table above) File extensions within rollup to include. Though this option can be reconfigured, shader stage detection still operates based on the table above.
  • exclude : PathFilter (default undefined) File extensions within rollup to exclude.
  • includePaths : string[] (default undefined) Additional search paths for #include directive (source file directory is always searched)

Features

  • optimize : boolean (default true) Optimize via SPIR-V as described in the Optimization section [requires WebGL2 / GLSL ES >= 300]. When disabled simply runs the preprocessor [all supported GLSL versions].
  • compress : boolean (default true) Strip all whitespace in the sources

Debugging

  • sourceMap : boolean (default true) Emit source maps. These contain the final preprocessed/optimized GLSL source (but not stripped of whitespace) to aid debugging.
  • emitLineDirectives : boolean (default false) Emit #line NN "original.file" directives for debugging - useful with #include. Note this requires the GL_GOOGLE_cpp_style_line_directive extension so the shader will fail to run in drivers that lack support.

Preprocessor

  • optimizerPreserveUnusedBindings : boolean (default true) Ensure that the optimizer preserves all declared bindings, even when those bindings are unused.
  • preamble : string (default undefined) Prepended to the shader source (after the #version directive, before the preprocessor runs)

glslify

  • glslify : boolean (default false) Process sources using glslify prior to all preprocessing, validation and optimization.
  • glslifyOptions (default undefined) When glslify enabled, pass these additional options to glslify.compile().

Advanced Options

  • optimizerDebugSkipOptimizer : boolean (default false) When optimize enabled, skip the SPIR-V optimizer - compiles to SPIR-V then cross-compiles back to GLSL immediately.
  • suppressLineExtensionDirective : boolean (default false) When emitLineDirectives enabled, suppress the GL_GOOGLE_cpp_style_line_directive directive.
  • extraValidatorParams, extraOptimizerParams, extraCrossParams : string[] (default undefined) Additional parameters for the Khronos Glslang Validator here, the Khronos SPIR-V Optimizer here, and the Khronos SPIR-V Cross compiler here.
  • glslangValidatorPath, glslangOptimizerPath, glslangCrossPath : string (default undefined) Provide / override binary tool paths.

It's recommended to instead use the environment variables GLSLANG_VALIDATOR, GLSLANG_OPTIMIZER, GLSLANG_CROSS where needed. They always take precedence if set.
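Putting a few of the documented options above together, a configuration sketch might look like this (the extra include path is hypothetical):

// rollup.config.mjs
import {default as glslOptimize} from 'rollup-plugin-glsl-optimize';

export default {
    plugins: [
        glslOptimize({
            optimize: true,                     // compile via SPIR-V and optimize (the default)
            compress: false,                    // keep whitespace to make debugging easier
            sourceMap: true,                    // emit source maps (the default)
            includePaths: ['src/shaders/lib'],  // extra search path for #include (hypothetical)
            glslify: false,                     // set to true to run sources through glslify first
        }),
    ]
};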

Changelog

Available in CHANGES.md.

Caveats & Known Issues

  • This plugin handles glsl and glslify by itself. Use with conflicting plugins (e.g. rollup-plugin-glsl, rollup-plugin-glslify) will cause unpredictable results.
  • Optimizer: lowp precision qualifier - emitted as mediump
    SPIR-V has a single RelaxedPrecision decoration for 16-32bit precision. However most implementations actually treat mediump and lowp equivalently, hence the lack of need for it in SPIR-V.

License

Released under the MIT license.
Strip whitespace function adapted from code by Vincent Wochnik (rollup-plugin-glsl).

Khronos tool binaries (built by the upstream projects) are distributed and installed with this plugin under the terms of the Apache License Version 2.0. See the corresponding LICENSE files installed in the bin folder and the binary releases.


Author: docd27
Source code: https://github.com/docd27/rollup-plugin-glsl-optimize
License: MIT license

#javascript #Rollup 

Rollup Plugin Glsl Optimize: Import GLSL Source Files As Strings

Hermann  Frami

Hermann Frami

1657151400

Serverless Version Tracker

Serverless Version Tracker

A serverless plugin for tracking deployed versions of your code.

Description

This plugin has a super simple function: after you run serverless deploy, it will create a local git tag based on the version of the Lambda function that you just deployed. For instance, if your function is named foo-production-index and a deploy creates Lambda version 56, this plugin will automatically create a local git tag foo-production-index-56.

This guarantees that you always know exactly what version of your source code is actually running in the cloud.

Usage requirements

By default, this plugin only runs for deployments to the production stage. If you'd like to customize this behavior, you can set the versionTrackerStages custom variable.
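A sketch of what that could look like in serverless.yml, assuming versionTrackerStages accepts a list of stage names (check the plugin's README for the exact shape):

custom:
  versionTrackerStages:
    - production
    - staging

plugins:
  - serverless-version-tracker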

Installation instructions

Install the Serverless Framework if you haven't already. Then choose your own adventure to install the plugin.

Install using Serverless plugin manager

serverless plugin install --name serverless-version-tracker

Install using npm

Install the module using npm:

npm install serverless-version-tracker --save-dev

Add serverless-version-tracker to the plugin list of your serverless.yml file:

plugins:
  - serverless-version-tracker

Quick start instructions

  1. Ensure that you have committed all of your changes to Git. The deploy will be aborted if the Git working directory is not clean. This prevents the possibility of deploying uncommitted / untraceable / volatile code.
  2. Run a deploy as normal (i.e. sls deploy --stage production). The plugin will automatically tag your local Git repository with the Lambda function name and new version.
  3. Make sure to push the new tag to your remote repository (the plugin won't do this for you).

Author: Danepowell
Source Code: https://github.com/danepowell/serverless-version-tracker 
License: 

#serverless #version #plugin 

Serverless Version Tracker
Hermann  Frami

Hermann Frami

1656453720

Serverless Prune Versions

Serverless prune versions

Overview

This plugin for the Serverless Framework removes old versions of AWS Lambda functions. This matters because, left to its own devices, the Serverless Framework creates a new Lambda version each time it updates your Lambda or Lambda Layer code in AWS. But if you aren't using the old versions then no harm, no foul - right? Unfortunately not: AWS Lambda stores the source code for every version it keeps, and there's a hard limit of 75GB per account for that storage. By removing old versions this plugin keeps you from hitting the storage limit, letting you worry about features instead of account limits.

Installation and setup

Install the plugin as a dev dependency in your project:

npm i serverless-prune-versions -D or yarn add serverless-prune-versions -D

Add the plugin to the plugins block of your serverless.yml file:

plugins:
  - serverless-prune-versions

Configuration

Because this plugin will delete deployed versions of your Lambda functions it is disabled by default and you must explicitly enable it.

This plugin uses the following default configuration:

  • Automatic (Boolean): should the plugin run automatically post-deployment. Default: false
  • Include Layers (Boolean): should the plugin remove Lambda Layer versions in addition to Lambda versions. Default: false
  • Number (Numeric): how many versions to retain. Default: 5

All properties can be changed by overriding values in the custom block of your serverless.yml. In this example the plugin will automatically run after every deployment and will remove all Lambda and Lambda Layer versions except for the 3 most recent.

custom:
  prune:
    automatic: true
    includeLayers: true
    number: 3

This is the minimal configuration needed for the plugin to run automatically after every deployment - it will only remove Lambda versions (not Lambda Layer versions) and will retain the last 5 versions (since those defaults weren't overridden).

custom:
  prune:
    automatic: true

Feedback appreciated! If you have an idea for how this library can be improved (or just a complaint/criticism) please open an issue.

Author: Manwaring
Source Code: https://github.com/manwaring/serverless-prune-versions 
License: MIT license

#serverless #version #lambda 

Serverless Prune Versions

Standard-version: Automate Versioning and CHANGELOG Generation

Standard Version

standard-version is deprecated. If you're a GitHub user, I recommend release-please as an alternative. I encourage folks to fork this repository and, if a fork gets popular, I will link to it in this README.

A utility for versioning using semver and CHANGELOG generation powered by Conventional Commits.

Having problems? Want to contribute? Join us on the node-tooling community Slack.

How It Works:

  1. Follow the Conventional Commits Specification in your repository.
  2. When you're ready to release, run standard-version.

standard-version will then do the following:

  1. Retrieves the current version of your repository by looking at packageFiles[1], falling back to the last git tag.
  2. Bumps the version in bumpFiles[1] based on your commits.
  3. Generates a changelog based on your commits (uses conventional-changelog under the hood).
  4. Creates a new commit including your bumpFiles[1] and updated CHANGELOG.
  5. Creates a new tag with the new version number.

bumpFiles, packageFiles and updaters

standard-version uses a few key concepts for handling version bumping in your project.

  • packageFiles – User-defined files where versions can be read from and be "bumped".
    • Examples: package.json, manifest.json
    • In most cases (including the default), packageFiles are a subset of bumpFiles.
  • bumpFiles – User-defined files where versions should be "bumped", but not explicitly read from.
    • Examples: package-lock.json, npm-shrinkwrap.json
  • updaters – Simple modules used for reading packageFiles and writing to bumpFiles.

By default, standard-version assumes you're working in a NodeJS based project... because of this, for the majority of projects you might never need to interact with these options.

That said, if you find yourself asking How can I use standard-version for additional metadata files, languages or version files? – these configuration options will help!

Installing standard-version

As a local npm run script

Install and add to devDependencies:

npm i --save-dev standard-version

Add an npm run script to your package.json:

{
  "scripts": {
    "release": "standard-version"
  }
}

Now you can use npm run release in place of npm version.

This has the benefit of making your repo/package more portable, so that other developers can cut releases without having to globally install standard-version on their machine.

As global bin

Install globally (add to your PATH):

npm i -g standard-version

Now you can use standard-version in place of npm version.

This has the benefit of allowing you to use standard-version on any repo/package without adding a dev dependency to each one.

Using npx

As of npm@5.2.0, npx is installed alongside npm. Using npx you can use standard-version without having to keep a package.json file by running: npx standard-version.

This method is especially useful when using standard-version in non-JavaScript projects.

Configuration

You can configure standard-version either by:

  1. Placing a standard-version stanza in your package.json (assuming your project is JavaScript).
  2. Creating a .versionrc, .versionrc.json or .versionrc.js.
  • If you are using a .versionrc.js your default export must be a configuration object, or a function returning a configuration object.

Any of the command line parameters accepted by standard-version can instead be provided via configuration. Please refer to the conventional-changelog-config-spec for details on available configuration options.

Customizing CHANGELOG Generation

By default (as of 6.0.0), standard-version uses the conventionalcommits preset.

This preset:

  • Adheres closely to the conventionalcommits.org specification.
  • Is highly configurable, following the configuration specification maintained here.
    • We've documented these config settings as a recommendation to other tooling makers.

There are a variety of dials and knobs you can turn related to CHANGELOG generation.

As an example, suppose you're using GitLab, rather than GitHub, you might modify the following variables:

  • commitUrlFormat: the URL format of commit SHAs detected in commit messages.
  • compareUrlFormat: the URL format used to compare two tags.
  • issueUrlFormat: the URL format used to link to issues.

Making these URLs match GitLab's format, rather than GitHub's.
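A hedged sketch of a .versionrc for GitLab-style URLs; the placeholder names ({{owner}}, {{repository}}, {{hash}}, {{previousTag}}, {{currentTag}}, {{id}}) and exact path layout come from the conventional-changelog-config-spec and your GitLab instance, so double-check them before use:

{
  "commitUrlFormat": "https://gitlab.example.com/{{owner}}/{{repository}}/-/commit/{{hash}}",
  "compareUrlFormat": "https://gitlab.example.com/{{owner}}/{{repository}}/-/compare/{{previousTag}}...{{currentTag}}",
  "issueUrlFormat": "https://gitlab.example.com/{{owner}}/{{repository}}/-/issues/{{id}}"
}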

CLI Usage

NOTE: To pass nested configurations to the CLI without defining them in the package.json use dot notation as the parameters e.g. --skip.changelog.

First Release

To generate your changelog for your first release, simply do:

# npm run script
npm run release -- --first-release
# global bin
standard-version --first-release
# npx
npx standard-version --first-release

This will tag a release without bumping the version in bumpFiles[1].

When you are ready, push the git tag and npm publish your first release. \o/

Cutting Releases

If you typically use npm version to cut a new release, do this instead:

# npm run script
npm run release
# or global bin
standard-version

As long as your git commit messages are conventional and accurate, you no longer need to specify the semver type - and you get CHANGELOG generation for free! \o/

After you cut a release, you can push the new git tag and npm publish (or npm publish --tag next) when you're ready.

Release as a Pre-Release

Use the flag --prerelease to generate pre-releases:

Suppose the last version of your code is 1.0.0, and your code to be committed has patched changes. Run:

# npm run script
npm run release -- --prerelease

This will tag your version as: 1.0.1-0.

If you want to name the pre-release, you specify the name via --prerelease <name>.

For example, suppose your pre-release should contain the alpha prefix:

# npm run script
npm run release -- --prerelease alpha

This will tag the version as: 1.0.1-alpha.0

Release as a Target Type Imperatively (npm version-like)

To forgo the automated version bump use --release-as with the argument major, minor or patch.

Suppose the last version of your code is 1.0.0, you've only landed fix: commits, but you would like your next release to be a minor. Simply run the following:

# npm run script
npm run release -- --release-as minor
# Or
npm run release -- --release-as 1.1.0

You will get version 1.1.0 rather than what would be the auto-generated version 1.0.1.

NOTE: you can combine --release-as and --prerelease to generate a release. This is useful when publishing experimental feature(s).

Prevent Git Hooks

If you use git hooks, like pre-commit, to test your code before committing, you can prevent hooks from being verified during the commit step by passing the --no-verify option:

# npm run script
npm run release -- --no-verify
# or global bin
standard-version --no-verify

Signing Commits and Tags

If you have your GPG key set up, add the --sign or -s flag to your standard-version command.

Lifecycle Scripts

standard-version supports lifecycle scripts. These allow you to execute your own supplementary commands during the release. The following hooks are available and execute in the order documented:

  • prerelease: executed before anything happens. If the prerelease script returns a non-zero exit code, versioning will be aborted, but it has no other effect on the process.
  • prebump/postbump: executed before and after the version is bumped. If the prebump script returns a version #, it will be used rather than the version calculated by standard-version.
  • prechangelog/postchangelog: executes before and after the CHANGELOG is generated.
  • precommit/postcommit: called before and after the commit step.
  • pretag/posttag: called before and after the tagging step.

Simply add the following to your package.json to configure lifecycle scripts:

{
  "standard-version": {
    "scripts": {
      "prebump": "echo 9.9.9"
    }
  }
}

As an example, to switch from using GitHub to track your items to using your project's Jira, use a postchangelog script to replace the URL fragment 'https://github.com/myproject/issues/' with a link to your Jira - assuming you have already installed replace:

{
  "standard-version": {
    "scripts": {
      "postchangelog": "replace 'https://github.com/myproject/issues/' 'https://myjira/browse/' CHANGELOG.md"
    }
  }
}

Skipping Lifecycle Steps

You can skip any of the lifecycle steps (bump, changelog, commit, tag), by adding the following to your package.json:

{
  "standard-version": {
    "skip": {
      "changelog": true
    }
  }
}

Committing Generated Artifacts in the Release Commit

If you want to commit generated artifacts in the release commit, you can use the --commit-all or -a flag. You will need to stage the artifacts you want to commit, so your release command could look like this:

{
  "standard-version": {
    "scripts": {
      "prerelease": "webpack -p --bail && git add <file(s) to commit>"
    }
  }
}
{
  "scripts": {
    "release": "standard-version -a"
  }
}

Dry Run Mode

Running standard-version with the --dry-run flag allows you to see what commands would be run, without committing to git or updating files.

# npm run script
npm run release -- --dry-run
# or global bin
standard-version --dry-run

Prefix Tags

Tags are prefixed with v by default. If you would like to prefix your tags with something else, you can do so with the -t flag.

standard-version -t @scope/package\@

This will prefix your tags to look something like @scope/package@2.0.0

If you do not want any tag prefix, you can use the -t flag and provide it with an empty string as the value.

Note: simply -t or --tag-prefix without any value will fall back to the default 'v'
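For example, a sketch of the documented empty-prefix usage, so tags look like 2.0.0 rather than v2.0.0:

standard-version --tag-prefix ""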

CLI Help

# npm run script
npm run release -- --help
# or global bin
standard-version --help

Code Usage

const standardVersion = require('standard-version')

// Options are the same as command line, except camelCase
// standardVersion returns a Promise
standardVersion({
  noVerify: true,
  infile: 'docs/CHANGELOG.md',
  silent: true
}).then(() => {
  // standard-version is done
}).catch(err => {
    console.error(`standard-version failed with message: ${err.message}`)
})

TIP: Use the silent option to prevent standard-version from printing to the console.

FAQ

How is standard-version different from semantic-release?

semantic-release is described as:

semantic-release automates the whole package release workflow including: determining the next version number, generating the release notes and publishing the package.

While both are based on the same foundation of structured commit messages, standard-version takes a different approach by handling versioning, changelog generation, and git tagging for you without automatic pushing (to GitHub) or publishing (to an npm registry). Use of standard-version only affects your local git repo - it doesn't affect remote resources at all. After you run standard-version, you can review your release state, correct mistakes and follow the release strategy that makes the most sense for your codebase.

We think they are both fantastic tools, and we encourage folks to use semantic-release instead of standard-version if it makes sense for their use-case.

Should I always squash commits when merging PRs?

The instructions to squash commits when merging pull requests assume that one PR equals, at most, one feature or fix.

If you have multiple features or fixes landing in a single PR and each commit uses a structured message, then you can do a standard merge when accepting the PR. This will preserve the commit history from your branch after the merge.

Although this will allow each commit to be included as separate entries in your CHANGELOG, the entries will not be able to reference the PR that pulled the changes in because the preserved commit messages do not include the PR number.

For this reason, we recommend keeping the scope of each PR to one general feature or fix. In practice, this allows you to use unstructured commit messages when committing each little change and then squash them into a single commit with a structured message (referencing the PR number) once they have been reviewed and accepted.

Can I use standard-version for additional metadata files, languages or version files?

As of version 7.1.0 you can configure multiple bumpFiles and packageFiles.

  1. Specify a custom bumpFile "filename"; this is the path to the file you want to "bump".
  2. Specify the bumpFile "updater"; this is how the file will be bumped.
     a. If you're using a common type, you can use one of standard-version's built-in updaters by specifying a type.
     b. If you're using a less-common version file, you can create your own updater.
// .versionrc
{
  "bumpFiles": [
    {
      "filename": "MY_VERSION_TRACKER.txt",
      // The `plain-text` updater assumes the file contents represents the version.
      "type": "plain-text"
    },
    {
      "filename": "a/deep/package/dot/json/file/package.json",
      // The `json` updater assumes the version is available under a `version` key in the provided JSON document.
      "type": "json"
    },
    {
      "filename": "VERSION_TRACKER.json",
      //  See "Custom `updater`s" for more details.
      "updater": "standard-version-updater.js"
    }
  ]
}

If using .versionrc.js as your configuration file, the updater may also be set as an object, rather than a path:

// .versionrc.js
const tracker = {
  filename: 'VERSION_TRACKER.json',
  updater: require('./path/to/custom-version-updater')
}

module.exports = {
  bumpFiles: [tracker],
  packageFiles: [tracker]
}

Custom updaters

An updater is expected to be a JavaScript module with at least two methods exposed: readVersion and writeVersion.

readVersion(contents = string): string

This method is used to read the version from the provided file contents.

The return value is expected to be a semantic version string.

writeVersion(contents = string, version: string): string

This method is used to write the version to the provided contents.

The return value will be written directly (overwrite) to the provided file.


Let's assume our VERSION_TRACKER.json has the following contents:

{
  "tracker": {
    "package": {
      "version": "1.0.0"
    }
  }
}

An acceptable standard-version-updater.js would be:

// standard-version-updater.js
const stringifyPackage = require('stringify-package')
const detectIndent = require('detect-indent')
const detectNewline = require('detect-newline')

module.exports.readVersion = function (contents) {
  return JSON.parse(contents).tracker.package.version;
}

module.exports.writeVersion = function (contents, version) {
  const json = JSON.parse(contents)
  let indent = detectIndent(contents).indent
  let newline = detectNewline(contents)
  json.tracker.package.version = version
  return stringifyPackage(json, indent, newline)
}

Author: Conventional-changelog
Source Code: https://github.com/conventional-changelog/standard-version 
License: ISC license

#node #javascript #version #git #cli 

Standard-version: Automate Versioning and CHANGELOG Generation
Waylon  Bruen

Waylon Bruen

1654748040

Govvv: "go Build" Wrapper to Add Version info To Golang Applications

govvv

The simple Go binary versioning tool that wraps the go build command.

[demo animation: https://cl.ly/0U2m441v392Q/intro-1.gif]

Stop worrying about -ldflags and go get github.com/ahmetb/govvv now.

Build Variables

  • main.GitCommit: short commit hash of the source tree (e.g. 0b5ed7a)
  • main.GitBranch: current branch name the code is built off (e.g. master)
  • main.GitState: whether there are uncommitted changes (clean or dirty)
  • main.GitSummary: output of git describe --tags --dirty --always (e.g. v1.0.0, v1.0.1-5-g585c78f-dirty, fbd157c)
  • main.BuildDate: RFC3339 formatted UTC date (e.g. 2016-08-04T18:07:54Z)
  • main.Version: contents of the ./VERSION file, if it exists, or the value passed via the -version option (e.g. 2.0.0)

Using govvv is easy

Just add the build variables you want to the main package and run:

  • go build → govvv build
  • go install → govvv install
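A minimal sketch of declaring these build variables in your main package (the names come from the table above; govvv injects the values via -ldflags at build time):

package main

import "fmt"

// Populated by govvv at build time via -ldflags -X;
// leave them as plain string variables with zero-value defaults.
var (
	Version   string
	GitCommit string
	GitBranch string
	GitState  string
	BuildDate string
)

func main() {
	fmt.Printf("version %s (%s@%s, %s, built %s)\n",
		Version, GitBranch, GitCommit, GitState, BuildDate)
}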

Version your app with govvv

Create a VERSION file in your build root directory and add a Version variable to your main package.

Do you have your own way of specifying Version? No problem:

govvv lets you specify custom -ldflags

Your existing -ldflags argument will still be preserved:

govvv build -ldflags "-X main.BuildNumber=$buildnum" myapp

and the -ldflags constructed by govvv will be appended to your flags.

Don’t want to depend on govvv? It’s fine!

You can just pass a -print argument and govvv will just print the go build command with -ldflags for you and will not execute the go tool:

$ govvv build -print
go build \
    -ldflags \
    "-X main.GitCommit=57b9870 -X main.GitBranch=dry-run -X main.GitState=dirty -X main.Version=0.1.0 -X main.BuildDate=2016-08-08T20:50:21Z"

Still don’t want to wrap the go tool? Well, try -flags to retrieve the LDFLAGS govvv prepares:

$ go build -ldflags="$(govvv -flags)"

Want to use a different package?

You can pass a -pkg argument with the full package name, and govvv will set the build variables in that package instead of main. For example:

# build with govvv
$ govvv build -pkg github.com/myacct/myproj/mypkg

# build with go
$ go build -ldflags="$(govvv -flags -pkg $(go list ./mypkg))"

Want to use a different version?

You can pass a -version argument with the desired version, and govvv will use the specified version instead of obtaining it from the ./VERSION file. For example:

# build with govvv
$ govvv build -version 1.2.3

# build with go
$ go build -ldflags="$(govvv -flags -version 1.2.3)"

Try govvv today

$ go get github.com/ahmetb/govvv

Author: Ahmetb
Source Code: https://github.com/ahmetb/govvv 
License: Apache-2.0 license

#go #golang #version 

Govvv: "go Build" Wrapper to Add Version info To Golang Applications
Awesome  Rust

Awesome Rust

1653036300

Emu: The Write-once-run-anywhere GPGPU Library for Rust

Overview

Emu is a GPGPU library for Rust with a focus on portability, modularity, and performance.

It's a CUDA-esque compute-specific abstraction over WebGPU providing specific functionality to make WebGPU feel more like CUDA. Here's a quick run-down of highlight features...

Emu can run anywhere - Emu uses WebGPU to support DirectX, Metal, Vulkan (and also OpenGL and browser eventually) as compile targets. This allows Emu to run on pretty much any user interface including desktop, mobile, and browser. By moving heavy computations to the user's device, you can reduce system latency and improve privacy.

Emu makes compute easier - Emu makes WebGPU feel like CUDA. It does this by providing...

  • DeviceBox<T> as a wrapper for data that lives on the GPU (thereby ensuring type-safe data movement)
  • DevicePool as a no-config auto-managed pool of devices (similar to CUDA)
  • trait Cache - a no-setup-required LRU cache of JITed compute kernels.

Emu is transparent - Emu is a fully transparent abstraction. This means, at any point, you can decide to remove the abstraction and work directly with WebGPU constructs with zero overhead. For example, if you want to mix Emu with WebGPU-based graphics, you can do that with zero overhead. You can also swap out the JIT compiler artifact cache with your own cache, manage the device pool if you wish, and define your own compile-to-SPIR-V compiler that interops with Emu.

Emu is asynchronous - Emu is fully asynchronous. Most API calls will be non-blocking and can be synchronized by calls to DeviceBox::get when data is read back from device.

An example

Here's a quick example of Emu. You can find more in emu_core/examples and most recent documentation here.

First, we just import a bunch of stuff

use emu_glsl::*;
use emu_core::prelude::*;
use zerocopy::*;

We can define types of structures so that they can be safely serialized and deserialized to/from the GPU.

#[repr(C)]
#[derive(AsBytes, FromBytes, Copy, Clone, Default, Debug)]
struct Rectangle {
    x: u32,
    y: u32,
    w: i32,
    h: i32,
}

For this example, we make this entire function async but in reality you will only want small blocks of code to be async (like a bunch of asynchronous memory transfers and computation) and these blocks will be sent off to an executor to execute. You definitely don't want to do something like this where you are blocking (by doing an entire compilation step) in your async code.

async fn do_some_stuff() -> Result<(), Box<dyn std::error::Error>> {
    assert_device_pool_initialized().await;

    // first, we move a bunch of rectangles to the GPU
    let mut x: DeviceBox<[Rectangle]> = vec![Default::default(); 128].as_device_boxed()?;
    
    // then we compile some GLSL code using the GlslCompile compiler and
    // the GlobalCache for caching compiler artifacts
    let c = compile::<String, GlslCompile, _, GlobalCache>(
        GlslBuilder::new()
            .set_entry_point_name("main")
            .add_param_mut()
            .set_code_with_glsl(
            r#"
#version 450
layout(local_size_x = 1) in; // our thread block size is 1, that is we only have 1 thread per block

struct Rectangle {
    uint x;
    uint y;
    int w;
    int h;
};

// make sure to use only a single set and keep all your n parameters in n storage buffers in bindings 0 to n-1
// you shouldn't use push constants or anything OTHER than storage buffers for passing stuff into the kernel
// just use buffers with one buffer per binding
layout(set = 0, binding = 0) buffer Rectangles {
    Rectangle[] rectangles;
}; // this is used as both input and output for convenience

Rectangle flip(Rectangle r) {
    r.x = r.x + r.w;
    r.y = r.y + r.h;
    r.w *= -1;
    r.h *= -1;
    return r;
}

// there should be only one entry point and it should be named "main"
// ultimately, Emu has to kind of restrict how you use GLSL because it is compute focused
void main() {
    uint index = gl_GlobalInvocationID.x; // this gives us the index in the x dimension of the thread space
    rectangles[index] = flip(rectangles[index]);
}
            "#,
        )
    )?.finish()?;
    
    // we spawn 128 threads (really 128 thread blocks)
    unsafe {
        spawn(128).launch(call!(c, &mut x));
    }

    // this is the Future we need to await to get stuff to happen
    // everything else is non-blocking in the API (except stuff like compilation)
    println!("{:?}", x.get().await?);

    Ok(())
}

And last but certainly not least, we use an executor to execute.

fn main() {
    futures::executor::block_on(do_some_stuff()).expect("failed to do stuff on GPU");
}

Built with Emu

Emu is relatively new but has already been used for GPU acceleration in a variety of projects.

  • Used in toil for GPU-accelerated linear algebra
  • Used in ipl3hasher for hash collision finding
  • Used in bigbang for simulating gravitational acceleration (used older version of Emu)

Getting started

The latest stable version is on Crates.io. To start using Emu, simply add the following line to your Cargo.toml.

[dependencies]
emu_core = "0.1.1"

To understand how to start using Emu, check out the docs. If you have any questions, please ask in the Discord.

Contributing

Feedback, discussion, PRs would all very much be appreciated. Some relatively high-priority, non-API-breaking things that have yet to be implemented are the following in rough order of priority.

  •  Ensure that WebGPU polling is done correctly in DeviceBox::get
  •  Add support for WGSL as input, use Naga for shader compilation
  •  Add WASM support in Cargo.toml
  •  Add benchmarks
  •  Reuse staging buffers between different DeviceBoxes
  •  Maybe use uniforms for DeviceBox<T> when T is small (maybe)

If you are interested in any of these or anything else, please don't hesitate to open an issue on GitHub or discuss more on Discord.

Download Details:
Author: calebwin
Source Code: https://github.com/calebwin/emu
License: MIT license

#rust  #rustlang 

Emu: The Write-once-run-anywhere GPGPU Library for Rust
Connor Mills

Connor Mills

1648805225

Getting Started with JavaScript Classes

Learn what you need to get started with JavaScript classes: the class constructor, properties (public, static and private) and methods.

JavaScript classes are a popular feature of JavaScript. This tutorial will help you learn what you should know to get started with JavaScript classes. You will learn about the class constructor, properties and methods. You will also learn what public, static and private class fields are.
 

A quick introduction

Before we dive into how to get started with JavaScript classes, let's quickly cover a few things. First, classes were added to JavaScript in the ES6 specification (ECMAScript 2015). Second, they are not a new feature per se. Classes basically provide a different way to create objects and work with prototypes and inheritance.

This is also why many JavaScript developers call classes syntactic sugar. They are correct. Classes are syntactic sugar: under the hood, you are still working with objects, prototypes and so on. The only real difference is in the syntax you are using. Another caveat is that class syntax will not work in IE; Babel will help you fix this.

That being said, there is nothing wrong with using JavaScript classes over other older options. It is mainly a matter of your preference. If you like them, use them. If you don’t, don’t. Now, let’s take a look at what you need to know to get started with JavaScript classes.

The syntax

The syntax of classes is easy to learn and remember. Every class starts with the class keyword, followed by the body of the class, a block of code wrapped in curly brackets. There are no parentheses and parameters like you know from functions. When you declare a new class, the convention is to start its name with a capital letter.

// Create new class called "MyClass":
class MyClass {
  // Body of the class.
}

Classes, constructor and parameters

When you declare a new class there are no parentheses where you could specify parameters. This doesn't mean that classes don't support parameters. They do; they just work with them in a different way. When you want to specify parameters for your class you have to use a method called constructor.

This constructor is a special method. You can create it only inside a class and only once. If you don't create this method yourself, JavaScript will automatically use the default constructor that is built into every class. The main job of this method is to execute the tasks you have specified whenever a new instance of the class is created.

An instance is basically a new object based on a specific class, and it inherits all properties and methods defined in that class. Every time you create a new instance of a class it will also automatically invoke the constructor method. This is useful when you want to do something whenever a new class instance is created.

For example, assigning properties initial values. Another thing the constructor allows is specifying parameters. The constructor method is a normal method; as such, it can also accept parameters. If you specify parameters for the constructor method, these parameters become parameters of the class itself.

When you create a new instance of the class, you can pass in some values as arguments, based on the parameters of the constructor. Otherwise, you can omit any parameters and use the constructor just to do some initial tasks. If you define your own constructor and replace the default, do so at the top of the class.

// Create new class "MyClass" with constructor,
// but without any parameters.
class MyClass {
  // Create constructor method without any parameters
  constructor() {
    // Code that will be executed
    // when a new class instance is created.
  }
}


// Create new class "MyClass"
// that accepts two parameters: name and age.
class MyClass {
  // Create constructor method
  // and specify "name" and "age" parameters.
  constructor(name, age) {
    // Create properties "name" and "age" on the class
    // and assign them values passed as arguments
    // for "name" and "age" parameters.
    this.name = name
    this.age = age
  }
}

this and classes

When you work with JavaScript classes it is very likely you will see the this keyword a lot. Basically all you need to know is this: when you use this inside a class, it refers to the class itself. When you create a new instance of that class, it refers to that very instance.

One thing that can help you is using your imagination. When you see this inside a class you can imagine replacing that this with the name of the class you are currently working with. This is, theoretically speaking, what’s happening.

// Create new class:
class MyClass {
  // Create constructor and define one parameter:
  constructor(name) {
    // This:
    this.name = name
    // Can be translated here to:
    // MyClass.name = name

    // When you create an instance of MyClass
    // it can be translated here to:
    // InstanceOfMyClass.name = name
  }
}

Class properties and methods

Every class can have an infinite number of properties, just like any object. In the beginning, there was only one way to define these properties: inside the constructor method. Note that it doesn't matter whether the constructor method accepts any parameters.

Even if the constructor method doesn’t accept any, defining class properties was still possible only inside it. This changed only to some degree. The constructor method is still the only place to define parameters for the class and assign their values to some class properties.

// Create new class:
class MyClass {
  // Create constructor and define one parameter:
  constructor(name) {
    // Create class property called "name"
    // and assign it a value of "name" parameter
    this.name = name

    // Create additional class properties:
    this.isHuman = true
    this.isAlive = true
  }
}

Other ways to create class properties are class fields. The names class fields and class properties are almost the same. Difference is that properties are defined inside constructor method while class fields are defined outside it, inside the class body. Other than that, class properties and class fields are basically interchangeable.

At this moment, there are three types of class fields: public, static and private. We will talk about each in the following sections. But first, let’s quickly talk about class methods.

Class methods

When you want to create a class method, you define it right inside the class body. Defining a class method is as simple as defining a function, with one difference: when you create a class method, you omit the function keyword and start with the method name. There is also no need for the this keyword when you define the method.

However, you will need this if you want to reference some property or method of the class you are working with. When you want to call a class method, you create a new instance of the class. Then, you call the method on that instance, using dot notation.

// Create new class with method:
class MyClass {
  // Create class method:
  myMethod() {
    return 'Hello!'
  }
}

// Create instance of "MyClass":
const myClassInstance = new MyClass()

// Call "myMethod" on "myClassInstance" instance:
myClassInstance.myMethod()
// Output:
// 'Hello!'


// Create new class with method using this:
class MyClass {
  // Create constructor and define one parameter:
  constructor(name) {
    // Create class property called "name"
    // and assign it a value of "name" parameter
    this.name = name
  }

  // Create class method:
  sayHi() {
    return `Hello, my name is ${this.name}.`
  }
}

// Create instance of "MyClass":
const joe = new MyClass('Joe')

// Call "sayHi" on "joe" instance:
joe.sayHi()
// Output:
// 'Hello, my name is Joe.'

Public class fields and methods

Class properties and public class fields are very similar. The main difference is that you define class properties in the constructor method. With class fields, you don’t need the constructor, because they are defined outside it. This also means that if you don’t need the constructor for something else, you can omit it.

However, if you want to define class parameters or do some work during class instantiation, you will still have to use the constructor. Another important difference is that public fields don’t use the this keyword. When you define a new public field, you start with the name of the field (property), not with this and a dot.

One more thing about public class fields and access. Fields you define as public will always be accessible from the inside as well as the outside of the class and its instances. This means that you will be able to access and modify them as you want. The same applies to public methods; they will all be accessible and modifiable.

One last thing: any class field or method you define is public by default. You can change this by defining the field or method as either static or private, using the corresponding keyword. Otherwise, JavaScript will automatically assume that the field or method should be public and make it that way.

// Create new class:
class Car {
  // Define class fields for "numOfWheels" and "fuel":
  numOfWheels = 4
  fuelType = 'electric'

  // Define public method:
  startEngine() {
    return 'Engine is running.'
  }
}

// Create instance of Car class:
const tesla = new Car()

// Log the value of public class field "fuelType":
console.log(tesla.fuelType)
// Output:
// 'electric'

// Call the "startEngine" method:
console.log(tesla.startEngine())
// Output:
// 'Engine is running.'
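
Public fields are also modifiable from the outside. Continuing with the "tesla" instance above, a quick sketch:

// Modify the public class field "numOfWheels"
// directly on the instance:
tesla.numOfWheels = 6

// Log the new value:
console.log(tesla.numOfWheels)
// Output:
// 6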

Static class fields and methods

The second type of class fields and methods is static. When you want to define a static class field or method, you add the keyword static before the field or method name. The main difference between static class fields and public class fields is that you can’t access static class fields on instances of the class.

You can access static class fields only on the class itself. The same applies to static methods: you can’t call them on instances of the class, only on the class itself. Static fields and methods are often used for utility purposes, for example doing cleanups, updates or keeping track of existing class instances.

When you work with static class fields, remember that you can’t reach them through this inside public or private instance methods. If you want to read or update a static field outside a static method, you have to reference it through the class name, as the constructor in the example below does with Car.numOfCopies.

class Car {
  // Declare static property to keep track
  // of how many instances of Car have been created.
  static numOfCopies = 0

  constructor() {
    // When new instance of Car is created
    // update the number of Car instances:
    Car.numOfCopies++
  }

  // Create static method to access
  // static field "numOfCopies".
  static getNumOfCopies() {
    // Return the value of "numOfCopies" field:
    return Car.numOfCopies
  }
}

// Log the number of instances of Car:
console.log(Car.getNumOfCopies())
// Output:
// 0

// Create instance of Car:
const porsche = new Car()

// Log number of instances of Car again:
console.log(Car.getNumOfCopies())
// Output:
// 1
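
To see that static fields and methods really don’t exist on instances, here is a short sketch continuing with the "porsche" instance from above:

// Static class fields are not available on instances:
console.log(porsche.numOfCopies)
// Output:
// undefined

// The same applies to static methods:
console.log(porsche.getNumOfCopies)
// Output:
// undefined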

Private class fields and methods

Private class fields and methods are the last type of fields and methods you can use. Private class fields and methods are basically the opposite of public fields and methods. When you define some field or method as private, you can work with it only inside the class. From the outside, it will be invisible.

This can be useful when you want to keep some data private, inaccessible from the outside and also from any class instance. The syntax for private fields and methods is simple: in order to define a private field or method, start the name with the # (hash) symbol.

When you want to access a private field, or call a private method, you also have to use the hash symbol. One interesting thing is that public methods can access private fields and methods. So, if you want, you can create a private field or method and then create a public method that accesses the private field or calls the private method. Both things will work.

class App {
  // Declare private field "version":
  #version = '1.0'

  // Create private method "getVersion":
  #getVersion() {
    return this.#version
  }

  // Create public method "getVersionPublic" to access
  // private field "version":
  getVersionPublic() {
    // Return the value of the "version" field:
    return this.#version
  }

  // Create another public method "callGetVersion"
  // that calls the private method "getVersion":
  callGetVersion() {
    return this.#getVersion()
  }
}

// Create instance of App:
const myApp = new App()

// Access the private field "version" through the public method:
console.log(myApp.getVersionPublic())
// Output:
// '1.0'

console.log(myApp.callGetVersion())
// Output:
// '1.0'
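
Trying to read the private field directly from the outside will not work. A minimal sketch (the access is commented out because it is a syntax error, and the exact error message depends on the JavaScript engine):

// Accessing the private field "#version"
// from outside the class is not allowed:
// console.log(myApp.#version)
// SyntaxError: Private field '#version' must be declared
// in an enclosing class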

Classes and instances

We have already talked about instances of classes a couple of times. It is time to talk about them in more detail. As I mentioned, instances are like new objects you create based on existing classes. The reason for creating new instances is that they automatically inherit the properties and methods you defined in the class they are based on.

This means that you don’t have to write the same code over and over again when you want to use it in multiple objects. What you can do is create one class and put the code you want to re-use there. When you need an object that can do all that stuff, you can use that class to create a new instance.

This instance will inherit the properties and methods you defined in that “parent” class and will be able to work with them. In order to create a new instance of a class, you declare a new variable. On the right side, you use the new keyword followed by the name of the class you want to instantiate and parentheses.

If the class accepts any parameters, you pass them inside the parentheses that follow the name of the class. Otherwise, you leave the parentheses empty. This way, you can create as many instances of a specific class as you want.

Remember that all properties and their values that you “hard-code” in the constructor of a specific class will be inherited by all instances of that class. Any properties you assign values passed as arguments will be dynamic; they will depend on the arguments you use during instantiation.

// Class without parameters:
class MyClass {
  // Create constructor:
  constructor() {
    // Create class property "isAlive" and assign it true.
    this.isAlive = true
  }
}

// Create instance of "MyClass" class:
const myClassInstance = new MyClass()

// log the value of "isAlive" property
// on "myClassInstance" instance:
console.log(myClassInstance.isAlive)
// Output:
// true


// Class with one parameter:
class MyClassTwo {
  // Create constructor and define one parameter:
  constructor(name) {
    // Create class property called "name"
    // and assign it a value of "name" parameter
    // and another boolean property "isAlive".
    this.name = name
    this.isAlive = true
  }
}

// Create instance of "MyClassTwo" class
// and pass in argument for "name" parameter:
const myClassInstanceTwo = new MyClassTwo('Jacob')

// log the value of "name" property
// on "myClassInstanceTwo" instance:
console.log(myClassInstanceTwo.name)
// Output:
// 'Jacob'

// Create another instance of "MyClassTwo" class
const myClassInstanceThree = new MyClassTwo('Tobias')

// log the value of "name" property
// on "myClassInstanceTwo" instance:
console.log(myClassInstanceThree.name)
// Output:
// 'Tobias'

Conclusion

JavaScript classes are an interesting feature that offers a new way of creating objects and working with prototypes and prototypal inheritance. I hope that this short and quick guide helped you understand at least the basics so you can get started with JavaScript classes.

Original article source at https://blog.alexdevero.com

#javascript #programming
