Laravel-nestedset: Effective Tree Structures in Laravel 4-8

This is a Laravel 4-8 package for working with trees in relational databases.

  • Laravel 5.7, 5.8, 6.0, 7.0, and 8.0 are supported since v5
  • Laravel 5.5 and 5.6 are supported since v4.3
  • Laravel 5.2, 5.3, and 5.4 are supported since v4
  • Laravel 5.1 is supported in v3
  • Laravel 4 is supported in v2

What are nested sets?

Nested sets, or the Nested Set Model, is a way to store hierarchical data effectively in a relational table. From Wikipedia:

The nested set model is to number the nodes according to a tree traversal, which visits each node twice, assigning numbers in the order of visiting, and at both visits. This leaves two numbers for each node, which are stored as two attributes. Querying becomes inexpensive: hierarchy membership can be tested by comparing these numbers. Updating requires renumbering and is therefore expensive.
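
For illustration, here is how a small tree might be numbered (the two stored attributes are the _lft and _rgt columns this package uses). A node X is a descendant of node Y exactly when Y's _lft < X's _lft and X's _rgt < Y's _rgt:

name           _lft  _rgt
Electronics       1     8
  Phones          2     5
    Android       3     4
  Laptops         6     7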

Applications

NSM shows good performance when the tree is updated rarely. It is tuned to be fast for retrieving related nodes, and is ideally suited for building a multi-depth menu or category tree for a shop.

Documentation

Suppose that we have a model Category; a $node variable is an instance of that model and the node that we are manipulating. It can be a fresh model or one from the database.

Relationships

A node has the following relationships, which are fully functional and can be eagerly loaded:

  • Node belongs to parent
  • Node has many children
  • Node has many ancestors
  • Node has many descendants

Inserting nodes

Moving and inserting nodes involves several database queries, so it is highly recommended to use transactions.

IMPORTANT! As of v4.2.0, a transaction is not started automatically.
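
A minimal sketch of wrapping a structural change in a transaction yourself (assuming the standard Laravel DB facade; $node and $parent are as in the examples below):

use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($node, $parent) {
    // The move and the save happen atomically.
    $node->appendToNode($parent)->save();
});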

Another important note is that structural manipulations are deferred until you call save on the model (some methods implicitly call save and return the boolean result of the operation).

Even a successful save doesn't mean that the node was moved. If your application depends on whether the node has actually changed its position, use the hasMoved method:

if ($node->save()) {
    $moved = $node->hasMoved();
}

Creating nodes

When you simply create a node, it will be appended to the end of the tree:

Category::create($attributes); // Saved as root
$node = new Category($attributes);
$node->save(); // Saved as root

In this case the node is considered a root, which means that it doesn't have a parent.

Making a root from existing node

// #1 Implicit save
$node->saveAsRoot();

// #2 Explicit save
$node->makeRoot()->save();

The node will be appended to the end of the tree.

Appending and prepending to the specified parent

If you want to make a node a child of another node, you can make it the last or the first child.

In the following examples, $parent is some existing node.

There are several ways to append a node:

// #1 Using deferred insert
$node->appendToNode($parent)->save();

// #2 Using parent node
$parent->appendNode($node);

// #3 Using parent's children relationship
$parent->children()->create($attributes);

// #4 Using node's parent relationship
$node->parent()->associate($parent)->save();

// #5 Using the parent attribute
$node->parent_id = $parent->id;
$node->save();

// #6 Using static method
Category::create($attributes, $parent);

And only a couple of ways to prepend:

// #1
$node->prependToNode($parent)->save();

// #2
$parent->prependNode($node);

Inserting before or after specified node

You can make $node a neighbor of the $neighbor node using the following methods:

$neighbor must exist; the target node can be fresh. If the target node exists, it will be moved to the new position and its parent will be changed if required.

# Explicit save
$node->afterNode($neighbor)->save();
$node->beforeNode($neighbor)->save();

# Implicit save
$node->insertAfterNode($neighbor);
$node->insertBeforeNode($neighbor);

Building a tree from array

When using the static method create on a node, it checks whether the attributes contain a children key. If they do, it creates more nodes recursively.

$node = Category::create([
    'name' => 'Foo',

    'children' => [
        [
            'name' => 'Bar',

            'children' => [
                [ 'name' => 'Baz' ],
            ],
        ],
    ],
]);

$node->children now contains a list of created child nodes.

Rebuilding a tree from array

You can easily rebuild a tree. This is useful for mass-changing the structure of the tree.

Category::rebuildTree($data, $delete);

$data is an array of nodes:

$data = [
    [ 'id' => 1, 'name' => 'foo', 'children' => [ ... ] ],
    [ 'name' => 'bar' ],
];

An id is specified for the node with the name foo, which means that the existing node will be filled and saved. If the node doesn't exist, a ModelNotFoundException is thrown. Also, this node has children specified, which is also an array of nodes; they will be processed in the same manner and saved as children of the node foo.

The node bar has no primary key specified, so it will be created.

$delete controls whether to delete nodes that already exist but are not present in $data. By default, nodes aren't deleted.

Rebuilding a subtree

As of v4.2.8 you can rebuild a subtree:

Category::rebuildSubtree($root, $data);

This constrains tree rebuilding to the descendants of the $root node.

Retrieving nodes

In some cases we will use an $id variable, which is the id of the target node.

Ancestors and descendants

Ancestors form a chain of parents up to the node. They are helpful for displaying breadcrumbs for the current category.

Descendants are all the nodes in a subtree, i.e. the children of a node, the children of those children, and so on.

Both ancestors and descendants can be eagerly loaded.

// Accessing ancestors
$node->ancestors;

// Accessing descendants
$node->descendants;

It is possible to load ancestors and descendants using a custom query:

$result = Category::ancestorsOf($id);
$result = Category::ancestorsAndSelf($id);
$result = Category::descendantsOf($id);
$result = Category::descendantsAndSelf($id);

In most cases, you need your ancestors to be ordered by the level:

$result = Category::defaultOrder()->ancestorsOf($id);

A collection of ancestors can be eagerly loaded:

$categories = Category::with('ancestors')->paginate(30);

// in view for breadcrumbs:
@foreach($categories as $i => $category)
    <small>{{ $category->ancestors->count() ? implode(' > ', $category->ancestors->pluck('name')->toArray()) : 'Top Level' }}</small><br>
    {{ $category->name }}
@endforeach

Siblings

Siblings are nodes that have the same parent.

$result = $node->getSiblings();

$result = $node->siblings()->get();

To get only next siblings:

// Get a sibling that is immediately after the node
$result = $node->getNextSibling();

// Get all siblings that are after the node
$result = $node->getNextSiblings();

// Get all siblings using a query
$result = $node->nextSiblings()->get();

To get previous siblings:

// Get a sibling that is immediately before the node
$result = $node->getPrevSibling();

// Get all siblings that are before the node
$result = $node->getPrevSiblings();

// Get all siblings using a query
$result = $node->prevSiblings()->get();

Getting related models from other table

Imagine that each category has many goods, i.e. a HasMany relationship is established. How can you get all goods of $category and every one of its descendants? Easy!

// Get ids of descendants
$categories = $category->descendants()->pluck('id');

// Include the id of category itself
$categories[] = $category->getKey();

// Get goods
$goods = Goods::whereIn('category_id', $categories)->get();

Including node depth

If you need to know at which level the node is:

$result = Category::withDepth()->find($id);

$depth = $result->depth;

The root node will be at level 0, children of root nodes will have a level of 1, and so on.

To get nodes of a specified level, you can apply a having constraint:

$result = Category::withDepth()->having('depth', '=', 1)->get();

IMPORTANT! This will not work in database strict mode

Default order

All nodes are strictly organized internally. By default, no order is applied, so nodes may appear in random order; this doesn't affect displaying a tree. You can order nodes alphabetically or by another index.

But in some cases hierarchical order is essential. It is required for retrieving ancestors and can be used to order menu items.

To apply tree order, the defaultOrder method is used:

$result = Category::defaultOrder()->get();

You can get nodes in reversed order:

$result = Category::reversed()->get();

To shift a node up or down inside its parent and affect the default order:

$bool = $node->down();
$bool = $node->up();

// Shift node by 3 siblings
$bool = $node->down(3);

The result of the operation is a boolean value indicating whether the node has changed its position.

Constraints

Various constraints can be applied to the query builder (see the example after this list):

  • whereIsRoot() to get only root nodes;
  • hasParent() to get non-root nodes;
  • whereIsLeaf() to get only leaves;
  • hasChildren() to get non-leaf nodes;
  • whereIsAfter($id) to get every node (not just siblings) that is after the node with the specified id;
  • whereIsBefore($id) to get every node that is before the node with the specified id.
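
For instance, to fetch only root categories or only leaves (a short sketch using the constraints above):

$roots = Category::whereIsRoot()->get();
$leaves = Category::whereIsLeaf()->get();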

Descendants constraints:

$result = Category::whereDescendantOf($node)->get();
$result = Category::whereNotDescendantOf($node)->get();
$result = Category::orWhereDescendantOf($node)->get();
$result = Category::orWhereNotDescendantOf($node)->get();
$result = Category::whereDescendantAndSelf($id)->get();

// Include target node into result set
$result = Category::whereDescendantOrSelf($node)->get();

Ancestor constraints:

$result = Category::whereAncestorOf($node)->get();
$result = Category::whereAncestorOrSelf($id)->get();

$node can be either a primary key of the model or a model instance.

Building a tree

After getting a set of nodes, you can convert it to a tree. For example:

$tree = Category::get()->toTree();

This will fill the parent and children relationships on every node in the set, and you can render the tree using a recursive algorithm:

$nodes = Category::get()->toTree();

$traverse = function ($categories, $prefix = '-') use (&$traverse) {
    foreach ($categories as $category) {
        echo PHP_EOL.$prefix.' '.$category->name;

        $traverse($category->children, $prefix.'-');
    }
};

$traverse($nodes);

This will output something like this:

- Root
-- Child 1
--- Sub child 1
-- Child 2
- Another root

Building flat tree

Also, you can build a flat tree: a list of nodes where child nodes come immediately after their parent node. This is helpful when you get nodes with a custom order (e.g. alphabetical) and don't want to use recursion to iterate over them.

$nodes = Category::get()->toFlatTree();

The previous example will output:

Root
Child 1
Sub child 1
Child 2
Another root

Getting a subtree

Sometimes you don't need the whole tree to be loaded, just the subtree of a specific node, as shown in the following example:

$root = Category::descendantsAndSelf($rootId)->toTree()->first();

In a single query we get the root of a subtree and all of its descendants, which are accessible via the children relation.

If you don't need the $root node itself, do the following instead:

$tree = Category::descendantsOf($rootId)->toTree($rootId);

Deleting nodes

To delete a node:

$node->delete();

IMPORTANT! Any descendants of the node will also be deleted!

IMPORTANT! Nodes are required to be deleted as models; don't try to delete them using a query like this:

Category::where('id', '=', $id)->delete();

This will break the tree!

The SoftDeletes trait is supported, also at the model level.
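
A minimal sketch of a soft-deletable node model (assuming the standard Laravel SoftDeletes trait; the class name is illustrative):

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;
use Kalnoy\Nestedset\NodeTrait;

class Category extends Model
{
    use SoftDeletes, NodeTrait;
}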

Helper methods

To check if a node is a descendant of another node:

$bool = $node->isDescendantOf($parent);

To check whether the node is a root:

$bool = $node->isRoot();

Other checks:

  • $node->isChildOf($other);
  • $node->isAncestorOf($other);
  • $node->isSiblingOf($other);
  • $node->isLeaf()

Checking consistency

You can check whether a tree is broken (i.e. has some structural errors):

$bool = Category::isBroken();

It is possible to get error statistics:

$data = Category::countErrors();

It will return an array with the following keys:

  • oddness -- the number of nodes that have a wrong set of lft and rgt values
  • duplicates -- the number of nodes that have the same lft or rgt values
  • wrong_parent -- the number of nodes whose parent_id value doesn't correspond to their lft and rgt values
  • missing_parent -- the number of nodes whose parent_id points to a node that doesn't exist

Fixing tree

Since v3.1 the tree can be fixed. Using parentage info from the parent_id column, proper _lft and _rgt values are set for every node.

Node::fixTree();

Scoping

Imagine that you have a Menu model and a MenuItem model. There is a one-to-many relationship set up between these models: MenuItem has a menu_id attribute for joining the models together, and MenuItem incorporates nested sets. Obviously, you would want to process each tree separately based on the menu_id attribute. In order to do so, you need to specify this attribute as a scope attribute:

protected function getScopeAttributes()
{
    return [ 'menu_id' ];
}
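
For context, getScopeAttributes is defined on the node model itself; a minimal sketch:

use Illuminate\Database\Eloquent\Model;
use Kalnoy\Nestedset\NodeTrait;

class MenuItem extends Model
{
    use NodeTrait;

    protected function getScopeAttributes()
    {
        return [ 'menu_id' ];
    }
}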

But now, in order to execute a custom query, you need to provide the attributes that are used for scoping:

MenuItem::scoped([ 'menu_id' => 5 ])->withDepth()->get(); // OK
MenuItem::descendantsOf($id)->get(); // WRONG: returns nodes from other scope
MenuItem::scoped([ 'menu_id' => 5 ])->fixTree(); // OK

When requesting nodes using a model instance, scopes are applied automatically based on the attributes of that model:

$node = MenuItem::findOrFail($id);

$node->siblings()->withDepth()->get(); // OK

To get a scoped query builder using an instance:

$node->newScopedQuery();

Scoping and eager loading

Always use a scoped query when eager loading:

MenuItem::scoped([ 'menu_id' => 5 ])->with('descendants')->findOrFail($id); // OK
MenuItem::with('descendants')->findOrFail($id); // WRONG

Requirements

  • PHP >= 5.4
  • Laravel >= 4.1

It is highly suggested to use a database that supports transactions (like MySQL's InnoDB) to secure the tree from possible corruption.

Installation

To install the package, run in a terminal:

composer require kalnoy/nestedset

Setting up from scratch

The schema

For users of Laravel 5.5 and above:

Schema::create('table', function (Blueprint $table) {
    ...
    $table->nestedSet();
});

// To drop columns
Schema::table('table', function (Blueprint $table) {
    $table->dropNestedSet();
});

For prior Laravel versions:

...
use Kalnoy\Nestedset\NestedSet;

Schema::create('table', function (Blueprint $table) {
    ...
    NestedSet::columns($table);
});

To drop columns:

...
use Kalnoy\Nestedset\NestedSet;

Schema::table('table', function (Blueprint $table) {
    NestedSet::dropColumns($table);
});

The model

Your model should use the Kalnoy\Nestedset\NodeTrait trait to enable nested sets:

use Kalnoy\Nestedset\NodeTrait;

class Foo extends Model {
    use NodeTrait;
}

Migrating existing data

Migrating from other nested set extension

If your previous extension used a different set of columns, you just need to override the following methods in your model class:

public function getLftName()
{
    return 'left';
}

public function getRgtName()
{
    return 'right';
}

public function getParentIdName()
{
    return 'parent';
}

// Specify parent id attribute mutator
public function setParentAttribute($value)
{
    $this->setParentIdAttribute($value);
}

Migrating from basic parentage info

If your tree contains parent_id info, you need to add two columns to your schema:

$table->unsignedInteger('_lft');
$table->unsignedInteger('_rgt');

After setting up your model, you only need to fix the tree to fill the _lft and _rgt columns:

MyModel::fixTree();

Download Details:

Author: lazychaser
Source Code: https://github.com/lazychaser/laravel-nestedset 

#php #laravel #tree #structure 


Structures Of Arrays That Behave Like Arrays Of Structures

StructsOfArrays

A traditional Julia array of immutable objects is an array of structures. Fields of a given object are stored adjacent in memory. However, this often inhibits SIMD optimizations. StructsOfArrays implements the classic structure of arrays optimization. The contents of a given field for all objects is stored linearly in memory, and different fields are stored in different arrays. This permits SIMD optimizations in more cases and can also save a bit of memory if the object contains padding. It is especially useful for arrays of complex numbers.

Usage

You can construct a StructOfArrays directly:

using StructsOfArrays
A = StructOfArrays(Complex128, 10, 10)

or by converting an AbstractArray:

A = convert(StructOfArrays, complex(randn(10), randn(10)))

Beyond that, there's not much to say. Assignment and indexing work as with other AbstractArray types. Indexing a StructOfArrays{T} yields an object of type T, and you can assign objects of type T to a given index. The "magic" is in the optimizations that the alternative memory layout allows LLVM to perform.
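
For instance (a short sketch, keeping the Complex128 element type from the examples above):

A = StructOfArrays(Complex128, 2, 2)
A[1, 1] = 1.0 + 2.0im   # assign an object of type Complex128
z = A[1, 1]             # indexing yields a Complex128 again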

While you can create a StructOfArrays of non-isbits immutables, this is probably slower than an ordinary array, since a new object must be heap allocated every time the StructOfArrays is indexed. In practice, StructsOfArrays works best with isbits immutables such as Complex{T}.

Benchmark

using StructsOfArrays
regular = complex(randn(1000000), randn(1000000))
soa = convert(StructOfArrays, regular)

function f(x, a)
    s = zero(eltype(x))
    @simd for i in 1:length(x)
        @inbounds s += x[i] * a
    end
    s
end

using Benchmarks
@benchmark f(regular, 0.5+0.5im)
@benchmark f(soa, 0.5+0.5im)

The time for f(regular, 0.5+0.5im) is:

Average elapsed time: 1.244 ms
  95% CI for average: [1.183 ms, 1.305 ms]
Minimum elapsed time: 1.177 ms

and for f(soa, 0.5+0.5im):

Average elapsed time: 832.198 μs
  95% CI for average: [726.349 μs, 938.048 μs]
Minimum elapsed time: 713.730 μs

In this case, the StructOfArrays is about 1.5x faster than the ordinary array. Inspection of the generated code demonstrates that f(soa, a) uses SIMD instructions, while f(regular, a) does not.

Download Details:

Author: JuliaArrays
Source Code: https://github.com/JuliaArrays/StructsOfArrays.jl 
License: View license

#julia #arrays #structure 


Experimental New Implementation Of StrPack.jl-like Functionality

StructIO   

Generates IO methods (pack, unpack) from structure definitions. Also defines packed_sizeof to give the on-disk size of a packed structure, which is smaller than what sizeof would give if the struct is marked as align_packed.

Example usage

julia> using StructIO

julia> @io struct TwoUInt64s
           x::UInt64
           y::UInt64
       end

julia> buf = IOBuffer(collect(UInt8(1):UInt8(16))); 

julia> seekstart(buf); unpack(buf, TwoUInt64s) # Default endianness depends on machine
TwoUInt64s(0x0807060504030201, 0x100f0e0d0c0b0a09)

julia> seekstart(buf); unpack(buf, TwoUInt64s, :BigEndian)
TwoUInt64s(0x0102030405060708, 0x090a0b0c0d0e0f10)
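pack works in the other direction, and packed_sizeof reports the packed on-disk size (a short sketch, assuming pack mirrors unpack's IO-first argument order):

julia> out = IOBuffer(); pack(out, TwoUInt64s(0x0807060504030201, 0x100f0e0d0c0b0a09));

julia> packed_sizeof(TwoUInt64s)
16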

Download Details:

Author: Keno
Source Code: https://github.com/Keno/StructIO.jl 
License: View license

#julia #structure #generate 


10 Best Unofficial Set Of Patterns for Structuring Projects with Go

In today's post we will learn about 10 of the best unofficial sets of patterns for structuring projects with Go.

What is Project Layout?

A project layout is the convention for organizing the files and directories of a codebase: where application entry points, libraries, configuration, and supporting tooling live within the repository.

Table of contents:

  • Ardanlabs/service - A starter kit for building production grade scalable web service applications.
  • Cookiecutter-golang - A Go application boilerplate template for quick starting projects following production best practices.
  • Go-sample - A sample layout for Go application projects with the real code.
  • Go-starter - An opinionated production-ready RESTful JSON backend template, highly integrated with VSCode DevContainers.
  • Go-todo-backend - Go Todo Backend example using modular project layout for product microservice.
  • Gobase - A simple skeleton for a golang application with a basic setup for a real golang application.
  • Golang-standards/project-layout - Set of common historical and emerging project layout patterns in the Go ecosystem. Note: despite the org-name they do not represent official golang standards, see this issue for more information. Nonetheless, some may find the layout useful.
  • Golang-templates/seed - Go application GitHub repository template.
  • Insidieux/inizio - Golang project layout generator with plugins.
  • Modern-go-application - Go application boilerplate and example applying modern practices.

1 - Ardanlabs/service: A starter kit for building production grade scalable web service applications.

Ultimate Go: Service with Kubernetes

This class teaches how to build production-level services in Go leveraging the power of Kubernetes. From the beginning, you will pair program with the instructor walking through the design philosophies and guidelines for building services in Go. With each new feature that is added to the service, you will learn how to deploy to and manage the Kubernetes environment used to run the service.

Purchase Video

The entire training class has been recorded to be made available to those who can't have the class taught at their company or who can't attend a conference. This is the entire class material.

ardanlabs.com/education

Our Experience

We have taught Go to thousands of developers all around the world since 2014. There is no other company that has been doing it longer, and our material has proven to jump-start developers 6 to 12 months ahead in their knowledge of Go. We know what knowledge developers need in order to be productive and efficient when writing software in Go.

Our classes are perfect for intermediate-level developers who have at least a few months to years of experience writing code in Go. Our classes provide a very deep knowledge of the programming language, with a big push on language mechanics, design philosophies and guidelines. We focus on teaching how to write code with a priority on consistency, integrity, readability and simplicity. We cover a lot about "if performance matters" with a focus on mechanical sympathy, data-oriented design, decoupling and writing/debugging production software.

Our Teacher

William Kennedy (@goinggodotnet)

William Kennedy is a managing partner at Ardan Labs in Miami, Florida. Ardan Labs is a high-performance development and training firm working with startups and fortune 500 companies. He is also a co-author of the book Go in Action, the author of the blog GoingGo.Net, and a founding member of GoBridge which is working to increase Go adoption through diversity.

Video Training
Ultimate Go Video Ardan Labs YouTube Channel

Blog
Going Go

Writing
Running MongoDB Queries Concurrently With Go
Go In Action

Articles
IT World Canada

View on Github

2 - Cookiecutter-golang: A Go application boilerplate template for quick starting projects following production best practices.

Powered by Cookiecutter, Cookiecutter Golang is a framework for jumpstarting production-ready go projects quickly.

Features

  • Generous Makefile with management commands
  • Uses go dep (with optional go module support; requires go 1.11)
  • Injects the build time and git commit hash at build time.

Optional Integrations

  • Can use viper for env var config
  • Can use cobra for cli tools
  • Can use logrus for logging
  • Can create dockerfile for building go binary and dockerfile for final go binary (no code in final container)
  • If docker is used adds docker management commands to makefile
  • Option of TravisCI, CircleCI or None

Constraints

  • Uses dep or mod for dependency management
  • Only maintained 3rd party libraries are used.

This project now uses docker multistage builds; you need at least docker version v17.05.0-ce to use the Dockerfile in this template. You can read more about multistage builds here.

Docker

This template uses docker multistage builds to make images slimmer; the final container holds only the project binary and assets, with no source code whatsoever.

You can find the image Dockerfile in this repo and more information about docker multistage builds in this blog post.

Apps run under a non-root user and also with dumb-init.

Usage

Let's pretend you want to create a project called "echoserver". Rather than starting from scratch maybe copying some files and then editing the results to include your name, email, and various configuration issues that always get forgotten until the worst possible moment, get cookiecutter to do all the work.

First, get Cookiecutter. Trust me, it's awesome:

$ pip install cookiecutter

Alternatively, you can install cookiecutter with homebrew:

$ brew install cookiecutter

Finally, to run it based on this template, type:

$ cookiecutter https://github.com/lacion/cookiecutter-golang.git

You will be asked about your basic info (name, project name, app name, etc.). This info will be used to customize your new project.

Warning: After this point, change 'Luis Morales', 'lacion', etc to your own information.

View on Github

3 - Go-sample: A sample layout for Go application projects with the real code.

A sample layout for Go application projects with the real code.

Where it all comes from?

Ideas used to create the architecture and structure:

Requirements

Installation

git clone https://github.com/zitryss/perfmon.git

Usage

docker-compose up --build

View on Github

4 - Go-starter: An opinionated production-ready RESTful JSON backend template, highly integrated with VSCode DevContainers.

go-starter is an opinionated production-ready RESTful JSON backend template written in Go, highly integrated with VSCode DevContainers by allaboutapps.

Usage

Please find more detailed information regarding the history, usage and other whys? of this project in our FAQ.

Demo

A demo go-starter service is deployed at https://go-starter.allaboutapps.at for you to play around with.

Please visit our FAQ to find out more about the limitations of this demo environment.

Requirements

Requires the following local setup for development:

This project makes use of the Remote - Containers extension provided by Visual Studio Code. A local installation of the Go tool-chain is no longer required when using this setup.

Please refer to the official installation guide how this works for your host OS and head to our FAQ: How does our VSCode setup work? if you encounter issues.

Quickstart

Create a new git repository through the GitHub template repository feature (use this template). You will then start with a single initial commit in your own repository.

# Clone your new repository, cd into it, then easily start the docker-compose dev environment through our helper
./docker-helper.sh --up

You should be inside the 'service' docker container with a bash shell.

development@94242c61cf2b:/app$ # inside your container...

# Shortcut for make init, make build, make info and make test
make all

# Print all available make targets
make help

Merge with the go-starter template repository to get future updates

These steps are not necessary if you have a "real" fork.

If your new project is generated from a template project (you have a single commit), you want to run the following command immediately and before applying any changes. Otherwise you won't be able to easily merge upstream go-starter changes into your own repository (see GitHub Template Repositories, Refusing to merge unrelated histories and FAQ: I want to compare or update my project/fork to the latest go-starter master).

make git-merge-go-starter
# Attempting to execute 'git merge --no-commit --no-ff go-starter/master' into your current HEAD.
# Are you sure? [y/N]y
# git merge --no-commit --no-ff --allow-unrelated-histories go-starter/master

git commit -m "Initial merge of unrelated go-starter template history"

Set project module name for your new project

To replace all occurrences of allaboutapps.dev/aw/go-starter (our internal module name of this project) with your desired project's module name, do the following:

development@94242c61cf2b:/app$ # inside your container...

# Set a new go project module name.
make set-module-name
# allaboutapps.dev/<GIT_PROJECT>/<GIT_REPO> (internal only)
# github.com/<USER>/<PROJECT>
# e.g. github.com/majodev/my-service

The above command writes your new go module name to tmp/.modulename and go.mod. It actually sets it everywhere in **/*, so this step is typically only required once. If you need to merge changes from the upstream go-starter later, you may want to run make force-module-name to set your own go module name everywhere again (especially relevant for new files / import paths). See our FAQ for more information about this update flow.

Optionally you may want to move the original README.md and LICENSE away:

development@94242c61cf2b:/app$ # inside your container...

# Optionally you may want to move our LICENSE and README.md away.
mv README.md README-go-starter.md
mv LICENSE LICENSE-go-starter

# Optionally create a new README.md for your project.
make get-module-name > README.md

View on Github

5 - Go-todo-backend: Go Todo Backend example using modular project layout for product microservice.

Go Todo Backend example using a modular project layout for a product microservice. It's suitable as a starting point for a medium to large project.

This example uses Chi as the HTTP router and REL for database access.

Feature:

  • Modular Project Structure.
  • Full example including tests.
  • Docker deployment.
  • Compatible with todobackend.

Installation

Prerequisite

  1. Install mockery for interface mock generation.
  2. Install rel cli for database migration.

Running

Prepare .env.

cp .env.sample .env

Start postgresql and create database.

docker-compose up -d

Prepare database schema.

rel migrate

Build and Running

make

View on Github

6 - Gobase: A simple skeleton for a golang application with a basic setup for a real golang application.

This is a simple skeleton for a golang application, inspired by development experience and updated according to github.com/golang-standards/project-layout.

How to use?

  1. Clone the repository (with a git client: git clone github.com/wajox/gobase [project_name]) or use it as a template on github for creating a new project
  2. Replace github.com/wajox/gobase with [your_pkg_name] in all the files

Structure

  • /api - OpenAPI specs, documentation generated by swag
  • /cmd - apps
  • /db - database migrations and seeds
  • /docs - documentation
  • /internal - application sources for internal usage
  • /pkg - application sources for external usage (SDK and libraries)
  • /test - some stuff for testing purposes

Commands

# install dev tools (wire, golangci-lint, swag, ginkgo)
make install-tools

# start test environment from docker-compose-test.yml
make start-docker-compose-test

# stop test environment from docker-compose-test.yml
make stop-docker-compose-test

# build application
make build

# run all tests
make test-all

# run go generate
make gen

# generate OpenAPI docs with swag
make swagger

# generate source code from .proto files
make proto

# generate dependencies with wire
make deps

Create new project

With clonegopkg

# install clonegopkg
go install github.com/wajox/clonegopkg@latest

# create your project
clonegopkg clone git@github.com:wajox/gobase.git github.com/wajox/newproject

# push to your git repository
cd ~/go/src/github.com/wajox/newproject
git add .
git commit -m "init project from gobase template"
git remote add origin git@github.com:wajox/newproject.git
git push origin master

View on Github

7 - Golang-standards/project-layout: Set of common historical and emerging project layout patterns in the Go ecosystem. Note: despite the org-name they do not represent official golang standards, see this issue for more information. Nonetheless, some may find the layout useful.

Overview

This is a basic layout for Go application projects. It's not an official standard defined by the core Go dev team; however, it is a set of common historical and emerging project layout patterns in the Go ecosystem. Some of these patterns are more popular than others. It also has a number of small enhancements along with several supporting directories common to any large enough real world application.

If you are trying to learn Go, or if you are building a PoC or a simple project for yourself, this project layout is overkill. Start with something really simple instead (a single main.go file and go.mod is more than enough). As your project grows, keep in mind that it'll be important to make sure your code is well structured, otherwise you'll end up with messy code full of hidden dependencies and global state.

When you have more people working on the project you'll need even more structure. That's when it's important to introduce a common way to manage packages/libraries. When you have an open source project, or when you know other projects import the code from your project repository, that's when it's important to have private (aka internal) packages and code.

Clone the repository, keep what you need and delete everything else! Just because it's there doesn't mean you have to use it all. None of these patterns are used in every single project. Even the vendor pattern is not universal.

With Go 1.14 Go Modules are finally ready for production. Use Go Modules unless you have a specific reason not to use them and if you do then you don’t need to worry about $GOPATH and where you put your project. The basic go.mod file in the repo assumes your project is hosted on GitHub, but it's not a requirement. The module path can be anything though the first module path component should have a dot in its name (the current version of Go doesn't enforce it anymore, but if you are using slightly older versions don't be surprised if your builds fail without it). See Issues 37554 and 32819 if you want to know more about it.

This project layout is intentionally generic and it doesn't try to impose a specific Go package structure.

This is a community effort. Open an issue if you see a new pattern or if you think one of the existing patterns needs to be updated.

If you need help with naming, formatting and style start by running gofmt and golint. Also make sure to read these Go code style guidelines and recommendations:

See Go Project Layout for additional background information.

More about naming and organizing packages as well as other code structure recommendations:

A Chinese Post about Package-Oriented-Design guidelines and Architecture layer

Go Directories

/cmd

Main applications for this project.

The directory name for each application should match the name of the executable you want to have (e.g., /cmd/myapp).

Don't put a lot of code in the application directory. If you think the code can be imported and used in other projects, then it should live in the /pkg directory. If the code is not reusable or if you don't want others to reuse it, put that code in the /internal directory. You'll be surprised what others will do, so be explicit about your intentions!

It's common to have a small main function that imports and invokes the code from the /internal and /pkg directories and nothing else.

See the /cmd directory for examples.

/internal

Private application and library code. This is the code you don't want others importing in their applications or libraries. Note that this layout pattern is enforced by the Go compiler itself. See the Go 1.4 release notes for more details. Note that you are not limited to the top level internal directory. You can have more than one internal directory at any level of your project tree.

You can optionally add a bit of extra structure to your internal packages to separate your shared and non-shared internal code. It's not required (especially for smaller projects), but it's nice to have visual clues showing the intended package use. Your actual application code can go in the /internal/app directory (e.g., /internal/app/myapp) and the code shared by those apps in the /internal/pkg directory (e.g., /internal/pkg/myprivlib).

View on Github

8 - Golang-templates/seed: Go application GitHub repository template.

This is a GitHub repository template for Go. It has been created for ease-of-use for anyone who wants to:

  • quickly get into Go without losing too much time on environment setup,
  • create a new repository with basic Continuous Integration.

It includes:

Star this repository if you find it valuable and worth maintaining.

Watch this repository to get notified about new releases, issues, etc.

Usage

  1. Sign up on Codecov and configure Codecov GitHub Application for all repositories.
  2. Click the Use this template button (alt. clone or download this repository).
  3. Replace all occurrences of golang-templates/seed with your_org/repo_name in all files.
  4. Replace all occurrences of seed with repo_name in Dockerfile.
  5. Update the following files:

Setup

Below you can find sample instructions on how to set up the development environment. Of course, you can use other tools like GoLand, Vim, or Emacs. However, take notice that the Visual Studio Code Go extension is officially supported by the Go team.

Local Machine

Follow these steps if you are OK installing and using Go on your machine.

  1. Install Go.
  2. Install Visual Studio Code.
  3. Install Go extension.
  4. Clone and open this repository.
  5. F1 -> Go: Install/Update Tools -> (select all) -> OK.

Development Container

Follow these steps if you do not want to install Go on your machine and you prefer to use a Development Container instead.

  1. Install Visual Studio Code.
  2. Follow Developing inside a Container - Getting Started.
  3. Clone and open this repository.
  4. F1 -> Remote-Containers: Reopen in Container.
  5. F1 -> Go: Install/Update Tools -> (select all) -> OK.

The Development Container configuration mixes Docker in Docker and Go definitions. Thanks to it you can use go, docker, docker-compose inside the container.

View on Github

9 - Insidieux/inizio: Golang project layout generator with plugins.

inizio is a simple binary which allows generating/bootstrapping a golang project with a predefined layout.

This project can easily be extended, because it also supports plugins for generation, based on the go-plugin package.

Installing

Install inizio by running:

go get github.com/insidieux/inizio/cmd/inizio

Usage

inizio \
  --plugins.config /etc/inizio/plugins.yaml \
  --plugins.path /usr/local/bin/inizio-plugins \
    path-to-project

Example

(See the inizio.gif demo animation in the repository.)

Ensure that $GOPATH/bin is added to your $PATH.

View on Github

10 - Modern-go-application: Go application boilerplate and example applying modern practices.

Go application boilerplate and example applying modern practices

This repository tries to collect the best practices of application development using the Go language. In addition to the language-specific details, it also implements various language-independent practices.

Some of the areas Modern Go Application touches:

  • architecture
  • package structure
  • building the application
  • testing
  • configuration
  • running the application (e.g. in Docker)
  • developer environment/experience
  • telemetry

To help adopting these practices, this repository also serves as a boilerplate for new applications.

First steps

To create a new application from the boilerplate, clone this repository (if you haven't already) into your GOPATH, then execute the following:

chmod +x init.sh && ./init.sh
? Package name (github.com/sagikazarmark/modern-go-application)
? Project name (modern-go-application)
? Binary name (modern-go-application)
? Service name (modern-go-application)
? Friendly service name (Modern Go Application)
? Update README (Y/n)
? Remove init script (y/N) y

It updates every import path and name in the repository to your project's values. Review and commit the changes.

Load generation

To test or demonstrate the application, it comes with a simple load generation tool. You can use it to test the example endpoints and generate some load (for example, in order to fill dashboards with data).

Follow the instructions in etc/loadgen.

Inspiration

See INSPIRATION.md for links to articles, projects, code examples that somehow inspired me while working on this project.

View on Github

Thank you for following this article.

Related videos:

Kat Zien - How Do You Structure Your Go Apps

#go #golang #structure #pattern 


6 Popular Node.js Tree Structure Libraries

In today's post we will learn about 6 Popular Node.js Tree Structure Libraries.

A tree view or a tree structure is a UI component that provides a hierarchical view of your complex data structures. Some items in a tree view may have a number of subitems, and the user can expand & collapse them by clicking on the parent node.

1 - Tree-model-js

Manipulate and traverse tree-like structures in JavaScript.

Installation

Node

TreeModel is available as an npm module so you can install it with npm install tree-model and use it in your script:

var TreeModel = require('tree-model'),
    tree = new TreeModel(),
    root = tree.parse({name: 'a', children: [{name: 'b'}]});
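
You can then traverse the parsed tree, for example with walk, which visits nodes depth-first (a short sketch based on TreeModel's documented API):

root.walk(function (node) {
    console.log(node.model.name); // prints "a", then "b"
});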

TypeScript

Type definitions are already bundled with the package, which should just work with npm install.

You can manually find the definition files in the types folder.

View on Github

2 - Functional-red-black-tree

A purely functional red-black tree data structure.

Install

npm install functional-red-black-tree

Example

Here is an example of some basic usage:

//Load the library
var createTree = require("functional-red-black-tree")

//Create a tree
var t1 = createTree()

//Insert some items into the tree
var t2 = t1.insert(1, "foo")
var t3 = t2.insert(2, "bar")

//Remove something
var t4 = t3.remove(1)

API

var createTree = require("functional-red-black-tree")

View on Github

3 - Splay-tree

Fast splay-tree data structure

Install

npm i -S splaytree
import SplayTree from 'splaytree';
const tree = new SplayTree();

Or get it from CDN

<script src="https://unpkg.com/splaytree"></script>
<script>
  var tree = new SplayTree();
  ...
</script>

Example

import Tree from 'splaytree';

const t = new Tree();
t.insert(5);
t.insert(-10);
t.insert(0);
t.insert(33);
t.insert(2);

console.log(t.keys()); // [-10, 0, 2, 5, 33]
console.log(t.size);   // 5
console.log(t.min());  // -10
console.log(t.max());  // 33

t.remove(0);
console.log(t.size);   // 4

View on Github

4 - Node-interval-tree

An Interval Tree data structure.

Usage

import IntervalTree from 'node-interval-tree'
const intervalTree = new IntervalTree<string>()

Insert

intervalTree.insert(low, high, 'foo')

Insert an interval with associated data into the tree. Intervals with the same low and high value can be inserted, as long as their data is different. Data can be any JS primitive or object. low and high have to be numbers where low <= high (also the case for all other operations with low and high). Returns true if successfully inserted, false if nothing inserted.

Search

intervalTree.search(low, high)

Search all intervals that overlap low and high arguments, both of them inclusive. Low and high values don't need to be in the tree themselves. Returns an array of all data objects of the intervals in the range [low, high]; doesn't return the intervals themselves.

View on Github

5 - Array-to-tree

Convert a plain array of nodes (with pointers to parent nodes) to a nested data structure

Installation

$ npm install array-to-tree --save

Usage

var arrayToTree = require('array-to-tree');

var dataOne = [
  {
    id: 1,
    name: 'Portfolio',
    parent_id: undefined
  },
  {
    id: 2,
    name: 'Web Development',
    parent_id: 1
  },
  {
    id: 3,
    name: 'Recent Works',
    parent_id: 2
  },
  {
    id: 4,
    name: 'About Me',
    parent_id: undefined
  }
];

arrayToTree(dataOne);

/*
 * Output:
 *
 * Portfolio
 *   Web Development
 *     Recent Works
 * About Me
 */

var dataTwo = [
  {
    _id: 'ec654ec1-7f8f-11e3-ae96-b385f4bc450c',
    name: 'Portfolio',
    parent: null
  },
  {
    _id: 'ec666030-7f8f-11e3-ae96-0123456789ab',
    name: 'Web Development',
    parent: 'ec654ec1-7f8f-11e3-ae96-b385f4bc450c'
  },
  {
    _id: 'ec66fc70-7f8f-11e3-ae96-000000000000',
    name: 'Recent Works',
    parent: 'ec666030-7f8f-11e3-ae96-0123456789ab'
  },
  {
    _id: '32a4fbed-676d-47f9-a321-cb2f267e2918',
    name: 'About Me',
    parent: null
  }
];

arrayToTree(dataTwo, {
  parentProperty: 'parent',
  customID: '_id'
});

/*
 * Output:
 *
 * Portfolio
 *   Web Development
 *     Recent Works
 * About Me
 */

View on Github

6 - Treeize

Converts row data (in JSON/associative array format) to tree structure based on column naming conventions.

Installation

npm install treeize

Why?

APIs usually require data in a deep object graph/collection form, but SQL results (especially heavily joined data), excel, csv, and other flat data sources that we're often forced to drive our applications from represent data in a very "flat" way. Treeize takes this flattened data and, based on simple column/attribute naming conventions, remaps it into a deep object graph - all without the overhead/hassle of hydrating a traditional ORM.

What it does...

// Treeize turns flat associative data (as from SQL queries) like this:
var peopleData = [
  {
    'name': 'John Doe',
    'age': 34,
    'pets:name': 'Rex',
    'pets:type': 'dog',
    'pets:toys:type': 'bone'
  },
  {
    'name': 'John Doe',
    'age': 34,
    'pets:name': 'Rex',
    'pets:type': 'dog',
    'pets:toys:type': 'ball'
  },
  {
    'name': 'Mary Jane',
    'age': 19,
    'pets:name': 'Mittens',
    'pets:type': 'kitten',
    'pets:toys:type': 'yarn'
  },
  {
    'name': 'Mary Jane',
    'age': 19,
    'pets:name': 'Fluffy',
    'pets:type': 'cat'
  }
];


// ...or flat array-of-values data (as from CSV/excel) like this:
var peopleData = [
  ['name', 'age', 'pets:name', 'pets:type', 'pets:toys:type'], // header row
  ['John Doe', 34, 'Rex', 'dog', 'bone'],
  ['John Doe', 34, 'Rex', 'dog', 'ball'],
  ['Mary Jane', 19, 'Mittens', 'kitten', 'yarn'],
  ['Mary Jane', 19, 'Fluffy', 'cat', null]
];


// ...via a dead-simple implementation:
var Treeize   = require('treeize');
var people    = new Treeize();

people.grow(peopleData);


// ...into deep API-ready object graphs like this:
people.getData() == [
  {
    name: 'John Doe',
    age: 34,
    pets: [
      {
        name: 'Rex',
        type: 'dog',
        toys: [
          { type: 'bone' },
          { type: 'ball' }
        ]
      }
    ]
  },
  {
    name: 'Mary Jane',
    age: 19,
    pets: [
      {
        name: 'Mittens',
        type: 'kitten',
        toys: [
          { type: 'yarn' }
        ]
      },
      {
        name: 'Fluffy',
        type: 'cat'
      }
    ]
  }
];

View on Github

Thank you for following this article. 

#node #tree #structure 


5 Popular Node.js Graph Structure Libraries

In today's post we will learn about 5 Popular Node.js Graph Structure Libraries. 

This is a graph data structure with topological sort and shortest path algorithms. This library provides a minimalist implementation of a directed graph data structure. 

1 - Graph-data-structure

A graph data structure with topological sort and shortest path algorithms.

Installing

This library is distributed only via NPM. Install by running

npm install graph-data-structure

Require it in your code like this.

var Graph = require("graph-data-structure");

Examples

ABC

To create a graph instance, invoke Graph as a constructor function.

var graph = Graph();

Add some nodes and edges with addNode and addEdge.

graph.addNode("a");
graph.addNode("b");
graph.addEdge("a", "b");

Nodes are added implicitly when edges are added.

graph.addEdge("b", "c");

Now we have the following graph: a → b → c.

Topological sorting can be done by invoking topologicalSort like this.

graph.topologicalSort(); // Returns ["a", "b", "c"]
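
The shortest path algorithm works on the same instance (a short sketch; in this library, edges default to a weight of 1 unless one is set):

graph.shortestPath("a", "c"); // Returns the path ["a", "b", "c"]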

View on Github

2 - Node-open-graph

An Open Graph implementation for Node.js.

Install

npm install open-graph

Usage

const og = require('open-graph');

const url = 'http://github.com/samholmes/node-open-graph/raw/master/test.html';

og(url, function (err, meta) {
  console.log(meta);
});

Outputs:

{
  title: 'OG Testing',
  type: 'website',
  url: 'http://github.com/samholmes/node-open-graph/raw/master/test.html',
  site_name: 'irrelavent',
  description: 'This is a test bed for Open Graph protocol.',
  image: {
    url: 'http://google.com/images/logo.gif',
    width: '100',
    height: '100'
  }
}

View on Github

3 - Data-structures

Fast, light and hassle-free JavaScript data structures, written in CoffeeScript.

Installation and Usage

Server-side:

Using npm:

npm install data-structures

Then where needed:

var Heap = require('data-structures').Heap;
var heap = new Heap();
heap.add(3);
heap.removeMin();

Alternatively, you can directly use the compiled JavaScript version in the "distribution" folder. It's always in sync with the CoffeeScript one.

View on Github

4 - Acausal

Easily create and pick values from Random Distributions and Markov Chains.

Installation

Run:

npm install -s acausal

Basic Examples:

import { MarkovChain, Distribution, Random } from 'acausal';

// Random Rarity Distribution
const dist = new Distribution({ seed: 1 });
dist.add('Green', 10);    // Common
dist.add('Blue', 5);      // Uncommon
dist.add('Purple', 1);    // Rare

dist.pick(10);

/* Results in:
[
  'Green',  'Green',  'Green',  'Blue',  'Green',
  'Blue',  'Purple', 'Green',  'Green',  'Green'
]
*/

// Markov Chain Name Generator
const mc = new MarkovChain({ seed: 1 });
mc.addSequence('alice'.split(''));
mc.addSequence('bob'.split(''));
mc.addSequence('erwin'.split(''));

console.log(mc.generate({ order: 1 }));

/* Results in:

[ 'a', 'l', 'i', 'n' ]

*/

// Random Numbers
const rand = new Random({ seed: 1 });

rand.integer(1, 6); // Roll 1d6

// Results in: 6

View on Github

5 - Graph-json

A JSON backed graph structure with advanced identification algorithms

JSON Scheme (DirectedGraph):

{
    "type": "object",
    "properties": {
        "nodes": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "required": true
                    }
                }
            }
        },
        "edges": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "required": true
                    },
                    "from": {
                        "type": "string",
                        "required": true
                    },
                    "to": {
                        "type": "string",
                        "required": true
                    }
                }
            }
        }
    }
}

DirectedGraph

You can use the DirectedGraph library in your project by either calling

var DirectedGraph = require('graph-json').DirectedGraph;

or

var DirectedGraph = require('graph-json').DG;

View on Github

Thank you for following this article. 

Related videos:

JavaScript Data Structures: Getting Started

#node #graph #structure 


Molly.jl: Molecular Simulation in Julia

Molly.jl     

Much of science can be explained by the movement and interaction of molecules. Molecular dynamics (MD) is a computational technique used to explore these phenomena, from noble gases to biological macromolecules. Molly.jl is a pure Julia package for MD, and for the simulation of physical systems more broadly. The package is described in a talk at the JuliaMolSim minisymposium at JuliaCon 2022.

At the minute the package is a proof of concept for MD in Julia. It is not production ready, though it can do some cool things and is under active development. Implemented features include:

  • Non-bonded interactions - Lennard-Jones Van der Waals/repulsion force, electrostatic Coulomb potential and reaction field, gravitational potential, soft sphere potential, Mie potential.
  • Bonded interactions - harmonic and Morse bonds, bond angles, torsion angles, harmonic position restraints.
  • Interface to allow definition of new interactions, simulators, thermostats, neighbor finders, loggers etc.
  • Read in OpenMM force field files and coordinate files supported by Chemfiles.jl. There is also some support for Gromacs files.
  • Andersen, Berendsen and velocity rescaling thermostats.
  • Verlet, velocity Verlet, Störmer-Verlet and flexible Langevin integrators.
  • Steepest descent energy minimization.
  • Replica exchange molecular dynamics.
  • Periodic, triclinic and infinite boundary conditions in a cubic box.
  • Flexible loggers to track arbitrary properties throughout simulations.
  • Cutoff algorithms for non-bonded interactions.
  • Various neighbor list implementations to speed up the calculation of non-bonded forces, including use of CellListMap.jl.
  • Implicit solvent GBSA methods.
  • Unitful.jl compatibility so numbers have physical meaning.
  • Automatic multithreading.
  • GPU acceleration on CUDA-enabled devices.
  • Run with Float64 or Float32.
  • Some analysis functions, e.g. RDF.
  • Visualise simulations as animations with Makie.jl.
  • Physical agent-based modelling.
  • Differentiable molecular simulation. This is a unique feature of the package and the focus of its current development.

Features not yet implemented include:

  • Simulators such as metadynamics.
  • Other temperature or pressure coupling methods.
  • Particle mesh Ewald summation.
  • Protein preparation - solvent box, add hydrogens etc.
  • Quantum mechanical modelling.
  • Constrained bonds and angles.
  • Domain decomposition algorithms.
  • Alchemical free energy calculations.
  • High test coverage.
  • API stability.
  • High GPU performance.

Installation

Julia is required, with Julia v1.7 or later required to get the latest version of Molly. Install Molly from the Julia REPL. Enter the package mode by pressing ] and run add Molly.
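
From the REPL, that looks like:

julia> ]
pkg> add Molly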

Usage

Some examples are given here, see the documentation for more on how to use the package.

Simulation of a Lennard-Jones fluid:

using Molly

n_atoms = 100
boundary = CubicBoundary(2.0u"nm", 2.0u"nm", 2.0u"nm")
temp = 298.0u"K"
atom_mass = 10.0u"u"

atoms = [Atom(mass=atom_mass, σ=0.3u"nm", ϵ=0.2u"kJ * mol^-1") for i in 1:n_atoms]
coords = place_atoms(n_atoms, boundary; min_dist=0.3u"nm")
velocities = [velocity(atom_mass, temp) for i in 1:n_atoms]
pairwise_inters = (LennardJones(),)
simulator = VelocityVerlet(
    dt=0.002u"ps",
    coupling=AndersenThermostat(temp, 1.0u"ps"),
)

sys = System(
    atoms=atoms,
    pairwise_inters=pairwise_inters,
    coords=coords,
    velocities=velocities,
    boundary=boundary,
    loggers=(temp=TemperatureLogger(100),),
)

simulate!(sys, simulator, 10_000)

Simulation of a protein:

using Molly

sys = System(
    joinpath(dirname(pathof(Molly)), "..", "data", "5XER", "gmx_coords.gro"),
    joinpath(dirname(pathof(Molly)), "..", "data", "5XER", "gmx_top_ff.top");
    loggers=(
        temp=TemperatureLogger(10),
        writer=StructureWriter(10, "traj_5XER_1ps.pdb"),
    ),
)

temp = 298.0u"K"
random_velocities!(sys, temp)
simulator = VelocityVerlet(
    dt=0.0002u"ps",
    coupling=AndersenThermostat(temp, 1.0u"ps"),
)

simulate!(sys, simulator, 5_000)

The above 1 ps simulation looks something like this when you view it in VMD (see the MD simulation animation in the repository).

Contributing

Contributions are very welcome - see the roadmap issue for more.

Join the molly channel on the JuliaMolSim Zulip to discuss the usage and development of Molly.jl.

Download Details:

Author: JuliaMolSim
Source Code: https://github.com/JuliaMolSim/Molly.jl 
License: View license

#julia #structure


AtomsBase.jl: A Julian Abstract interface for Atomic Structures

AtomsBase

A Julian abstract interface for atomic structures.  

AtomsBase is currently in the relatively early stages of development and we very much want developer/user input! If you think anything about it should be added/removed/changed, please file an issue or chime in on the discussion on an existing one! (Look particularly for issues with the question label.)

AtomsBase is an abstract interface for representation of atomic geometries in Julia. It aims to be a lightweight means of facilitating interoperability between various tools including...

  • chemical simulation engines (e.g. density functional theory, molecular dynamics, etc.)
  • file I/O with standard formats (.cif, .xyz, ...)
  • numerical tools: sampling, integration schemes, etc.
  • automatic differentiation and machine learning systems
  • visualization (e.g. plot recipes)

Currently, the design philosophy is to be as lightweight as possible, with only a small set of required function dispatches to make adopting the interface into existing packages easy. We also provide a couple of standard flexible implementations of the interface that we envision to be broadly applicable. If features beyond these are required we encourage developers to open PRs or provide their own implementations. For more on how to use the package, see the documentation.
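
As a brief sketch of what using the interface can look like, here is a hydrogen molecule built with one of the provided convenience constructors and queried through the interface accessors (a minimal example, assuming AtomsBase and Unitful are installed; the names follow the documented convenience implementations):

using AtomsBase
using Unitful

# Build a hydrogen molecule with one of the flexible standard implementations
hydrogen = isolated_system([
    :H => [0.0, 0.0, 0.0]u"Å",
    :H => [0.0, 0.0, 0.74]u"Å",
])

# Downstream tools interact with it only through the abstract interface
atomic_symbol(hydrogen)  # symbols of all atoms
position(hydrogen)       # positions of all atoms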

Installation

AtomsBase can be installed using the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run

pkg> add AtomsBase

Packages Using AtomsBase

A number of packages (not all yet registered) currently make use of this interface; see the repository README for the list, and please feel free to send a PR to add to it!

Download Details:

Author: JuliaMolSim
Source Code: https://github.com/JuliaMolSim/AtomsBase.jl 
License: MIT license

#julia #structure 

GATT (Generic ATTribute Profile) Structured Data for BLE Communication

GATT (Generic ATTribute profile)

GATT (Generic ATTribute profile) is the information protocol at the heart of Bluetooth Low Energy communication. This package provides structured representations for GATT IDs, which identify services, characteristics, and descriptors. This package also includes many SIG GATT IDs for easy reference in your BLE code.

Usage

Reference existing SIG GATT IDs, or define your own.

import 'package:gatt/gatt.dart';

// Reference official SIG IDs:
print(batteryLevelCharacteristicId);

// Construct a SIG ID with the leading 16-bit or 32-bit UUID:
print(const SigGattId(0x1234));

// Construct a custom GATT ID:
print(const GattId(0x1234, '-0000-1111-2222-3456789ABCDE'));

// Access the leading 32 bits as an integer:
print(const GattId(0x1234, '-0000-1111-2222-3456789ABCDE').leadingHexInt);
// prints the integer form of 0x1234

// Access the leading 32 bits as a hex string:
print(const GattId(0x1234, '-0000-1111-2222-3456789ABCDE').asUuid4Bytes);
// prints "00001234"

// Access the lower 16 bits of the leading 32 bits as a hex string:
print(const GattId(0x1234, '-0000-1111-2222-3456789ABCDE').asUuid2Bytes);
// prints "1234"

Installing

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add gatt

With Flutter:

 $ flutter pub add gatt

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  gatt: ^0.0.1

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:gatt/gatt.dart';

Contributing

Please follow the standard Flutter Bounty Hunter contribution guidelines.

Learn more

Adafruit introduction to GATT

Bluetooth SIG assigned numbers

Bluetooth SIG UUIDs

Download Details:

Author: Flutter-Bounty-Hunters
Source Code: https://github.com/Flutter-Bounty-Hunters/ble/ 
License: MIT

#flutter #dart #structure 

Strcode: Structure Your Code Better

The strcode (short for structuring code) package contains tools to organize and abstract your code better. It consists of

  • An RStudio Add-in that lets you quickly add code block separators and titles (possibly with unique identifiers) to divide your work into sections. The titles are recognized as sections by RStudio, which enhances the coding experience further.
  • A function sum_str that summarizes the code structure based on the separators and their comments added with the Add-in. For one or more files, it can cat the structure to the console or a file.
  • An RStudio Add-in that lets you insert a code anchor, that is, a hash sequence which can be used to uniquely identify a line in a large code base.

 

Installation

You can install the package from GitHub.

# install.packages("devtools")
devtools::install_github("lorenzwalthert/strcode")

Structuring code

We suggest three levels of granularity for code structuring, where higher-level blocks can contain lower-level blocks.

  • level 1 sections, which are high-level blocks that can be separated as follows:
#   ____________________________________________________________________________
#   A title                                                                 ####
  • level 2 sections, which are medium-level blocks that can be separated as follows:
##  ............................................................................
##  A subtitle                                                              ####
  • level 3 sections, which are low-level blocks that can be separated as follows:
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### One more                                                                ####

You can notice from the above that

  • The number of # used in front of the break character (___, ..., . .) corresponds to the level of granularity that is separated.
  • The break characters ___, ..., . . were chosen such that they reflect the level of granularity, namely ___ has a much higher visual density than . ..
  • Each block has an (optional) short title on what that block is about.
  • Every title ends with ####. Therefore, the titles are recognized by RStudio as sections. This has the advantage that you can get a quick summary of your code in RStudio's code pane and you can fold sections just as you can fold code, function declarations or if statements. See the pictures below for details.

The separators all have length 80. The value is looked up in the global option strcode$char_length and can therefore be changed by the user.

By default, breaks and titles are inserted via a Shiny Gadget, but this default can be overridden by setting the option strcode$insert_with_shiny to FALSE and hence only inserting the break.
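
A hypothetical sketch of overriding both options, assuming strcode reads them as a list from R's global options (the exact structure is an assumption, not documented API):

# e.g. in your .Rprofile
options(strcode = list(
  char_length       = 70,    # shorter separator lines
  insert_with_shiny = FALSE  # insert breaks directly, skipping the gadget
))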

Anchoring sections

Sometimes it is required to refer to a code section, which can be done by title. A better way, however, is to use a unique hash sequence - let us call it a code anchor - to create an arguably unique reference to that section. A code anchor in strcode is enclosed by #< and ># so all anchors can be found using regular expressions. You can add section breaks that include a hash. That might look like this:

##  .................. #< 685c967d4e78477623b861d533d0937a ># ..................
##  An anchored section                                                     ####

Insert a code anchor

Code anchors might prove helpful in other situations where one wants to anchor a single line. That is also possible with strcode. An example of a code anchor is the following:

#< 56f5139874167f4f5635b42c37fd6594 >#
this_is_a_super_important_but_hard_to_describe_line_so_let_me_anchor_it

The hash sequences in strcode are produced with the R package digest.

Summarizing code

Once code has been structured by adding sections (as above), it can easily be summarized or represented in a compact and abstract form. This is particularly handy when the codebase is large, when a lot of people work on the code or when new people join a project. The function sum_str is designed for the purpose of extracting separators and respective comments, in order to provide high level code summaries. It is highly customizable and flexible, with a host of options. Thanks to RStudio's API, you can even create summaries of the file you are working on, simply by typing sum_str() in the console. The file presented in the example section below can be summarized as follows:

sum_str(path_in = "placeholder_code/example.R", 
        file_out = "",
        width = 40,
        granularity = 2,
        lowest_sep = FALSE, 
        header = TRUE)
#> Summarized structure of placeholder_code/example.R
#> 
#> line  level section
#> 2    #   _
#> 3    #   function test
#> 6    ##  -A: pre-processing
#> 57   ##  B: actual function
#> 83   #   ____________________________________
#> 84   #   function test2
#> 87   ##  A: pre-processing
#> 138  ##  B: actual function
#> 169  ##  test
  • path_in specifies a directory or filenames for looking for content to summarize.
  • file_out indicates where to dump the output.
  • width gives the width of the output in characters.
  • granularity = 2 indicates that we want two of the three levels of granularity to be contained in the summary, leaving out level 3 comments.
  • Similarly, we use lowest_sep = FALSE to indicate that the lowest separators (given the granularity) should be omitted between the titles of the sections.
  • header was set to TRUE, so the column names were reported as well. Note that they are slightly off since knitr uses a different tab length; in the R console and, more importantly, in the output file, they are aligned.

Example of improved legibility

To demonstrate the improvement in legibility, we give an extended example with some placeholder code.

#   ____________________________________________________________________________
#   function test                                                           ####
test <- function(x) {
##  ............................................................................
##  A: pre-processing                                                       ####
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### a: assertive tests                                                      ####
  # x
  if(missing(x) || is.null(x)){ 
    x <- character()
  }
  assert(
    # use check within assert
    check_character(x),
    check_factor(x),
    check_numeric(x)
  )
  
  # levels 
  if(!missing(levels)){
    assert(
      check_character(levels),
      check_integer(levels),
      check_numeric(levels))
    levels <- na.omit(levels)
    
  }
  
  # labels
  if(!missing(labels)){
    assert(
      check_character(labels),
      check_numeric(labels),
      check_factor(labels)
      )
  }
  
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### b: coercion / remove missing                                            ####
  x <- as.character(x)
  uniq_x <- unique(na.omit(x), nmax = nmax)
  
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### c: warnings                                                             ####
  
  if(length(breaks) == 1) {
    if(breaks > max(x) - min(x) + 1) {
      stop("range too small for the number of breaks specified")
    }
    if(length(x) <= breaks) {
      warning("breaks is a scalar not smaller than the length of x")
    }
  }  
  
##  ............................................................................
##  B: actual function                                                      ####
   variable <- paste("T", period, "nog_", sector, sep = "")
   variable <- paste(variable, "==", 1, sep = "")

   arg <- substitute(variable)
   r <- eval(arg, idlist.data[[1]])
   a <<- 1
   
   was_factor <- FALSE
   if (is.factor(yes)) {
     yes <- as.character(yes)
     was_factor <- TRUE
   } 
   if (is.factor(no)) {
     no <- as.character(no)
     was_factor <- TRUE
   }
   out <- ifelse(test, yes, no)
   if(was_factor) {
     cfactor(out)
   } else {
     out
   } 
   
##  ............................................................................
}
#   ____________________________________________________________________________
#   function test2                                                          ####
test2 <- function(x) {
##  ............................................................................
##  A: pre-processing                                                       ####
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### a: assertive tests                                                      ####
  # x
  if(missing(x) || is.null(x)){ 
    x <- character()
  }
  assert(
    # use check within assert
    check_character(x),
    check_factor(x),
    check_numeric(x)
  )
  
  # levels 
  if(!missing(levels)){
    assert(
      check_character(levels),
      check_integer(levels),
      check_numeric(levels))
    levels <- na.omit(levels)
    
  }
  
  # labels
  if(!missing(labels)){
    assert(
      check_character(labels),
      check_numeric(labels),
      check_factor(labels)
      )
  }
  
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### b: coercion / remove missing                                            ####
  x <- as.character(x)
  uniq_x <- unique(na.omit(x), nmax = nmax)
  
### .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
### c: warnings                                                             ####
  
  if(length(breaks) == 1) {
    if(breaks > max(x) - min(x) + 1) {
      stop("range too small for the number of breaks specified")
    }
    if(length(x) <= breaks) {
      warning("breaks is a scalar not smaller than the length of x")
    }
  }  
  
##  ............................................................................
##  B: actual function                                                      ####
   variable <- paste("T", period, "nog_", sector, sep = "")
   variable <- paste(variable, "==", 1, sep = "")

   arg <- substitute(variable)
   r <- eval(arg, idlist.data[[1]])
   a <<- 1
   
   was_factor <- FALSE
   if (is.factor(yes)) {
     yes <- as.character(yes)
     was_factor <- TRUE
   } 
   if (is.factor(no)) {
     no <- as.character(no)
     was_factor <- TRUE
   }
   out <- ifelse(test, yes, no)
   if(was_factor) {
     cfactor(out)
   } else {
     out
   } 
   
##  ............................................................................
}

Download Details:

Author: lorenzwalthert
Source Code: https://github.com/lorenzwalthert/strcode 
License: MIT license

#r #structure #rstudio 


XSim.jl: Simulate Sequence Data & Complicated Pedigree Structures

XSim

XSim is a fast and user-friendly tool to simulate sequence data and complicated pedigree structures.

Features

  • An efficient CPOS algorithm
  • Using founders that are characterized by real genome sequence data
  • Complicated pedigree structures among descendants

Quick-start

# Load XSim
using XSim

# Simulate a genome with 10 chromosomes and 100 markers on each chromosome.
build_genome(n_chr=10, n_loci=100)
# Simulate two independent traits controlled by 3 and 8 QTLs, respectively.
build_phenome([3, 8])

# Initialize founders
n_sires = 3
n_dams  = 20
sires   = Founders(n_sires)
dams    = Founders(n_dams)

# Define parameters
args     = Dict(# mating
                :nA               => 3,
                :nB_per_A         => 5,
                :n_per_mate       => 2,
                :ratio_malefemale => 1.0,
                # selection
                :h2               => [.8, .5],
                :weights          => [.6, .4],
                # breeding
                :n_gens           => 5,
                :n_select_A       => 3,
                :n_select_B       => 20)

# Breeding program
sires_new, dams_new   = breed(sires, dams; args...)

# Inspect the results
summary(sires + dams)
summary(sires_new + dams_new)

Citing XSimV2

Bibliography

Chen, C.J., D. Garrick, R. Fernando, E. Karaman, C. Stricker, M. Keehan, and H. Cheng. 2022. XSim version 2: simulation of modern breeding programs. G3 Genes|Genomes|Genetics 12:jkac032. doi:10.1093/g3journal/jkac032.

BibTeX

@article{chen_xsim_2022,
 title = {{XSim} version 2: simulation of modern breeding programs},
 volume = {12},
 issn = {2160-1836},
 url = {https://doi.org/10.1093/g3journal/jkac032},
 doi = {10.1093/g3journal/jkac032},
 number = {4},
 urldate = {2022-05-26},
 journal = {G3 Genes{\textbar}Genomes{\textbar}Genetics},
 author = {Chen, Chunpeng James and Garrick, Dorian and Fernando, Rohan and Karaman, Emre and Stricker, Chris and Keehan, Michael and Cheng, Hao},
 month = apr,
 year = {2022},
}

Help

Old users may install the old version of XSim with using Pkg; Pkg.add(name="XSim", version="0.5").

  • Authors: Hao Cheng, Rohan Fernando, Dorian Garrick
  • Citing XSim

Cheng H, Garrick D, and Fernando R (2015) XSim: Simulation of descendants from ancestors with sequence data. G3: Genes-Genomes-Genetics, 5(7):1415-1417.



Download Details:

Author: Reworkhow
Source Code: https://github.com/reworkhow/XSim.jl 
License: GPL-2.0 license

#julia #data #structure 

Bytewise-core: Binary Serialization Of Arbitrarily Complex Structures

bytewise-core

Binary serialization of arbitrarily complex structures that sort element-wise

Allows efficient comparison of a variety of useful data structures in a way that respects the sort order defined by typewise.

This library defines a total order for well-structured keyspaces in key value stores. The ordering is a superset of the sorting algorithm defined by IndexedDB and the one defined by CouchDB. This serialization makes it easy to take advantage of the benefits of structured indexing in systems with fast but naïve binary indexing (key/value databases).

Order of Supported Structures

This package is a barebones kernel of bytewise, containing only the structures most often used to create structured keyspaces.

This is the top level order of the various structures that may be encoded:

  • null
  • false
  • true
  • Number (numeric)
  • Date (time-wise)
  • Buffer, Uint8Array (bit-wise)
  • String (character-wise)
  • Array (element-wise)
  • undefined

Structured types like Array may recursively contain any other supported structures.
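
As a quick sketch (not from the README) of what this order means in practice: the type tag leads every encoding, so a plain byte-level comparison of the encoded buffers already sorts values according to the order above:

var assert = require('assert')
var encode = require('./').encode

// null < false < true < Number in the encoded byte space
assert(encode(null).compare(encode(false)) < 0)
assert(encode(false).compare(encode(true)) < 0)
assert(encode(true).compare(encode(42)) < 0)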

Usage

encode serializes any supported type and returns a Buffer, or throws if an unsupported structure is provided.

var assert = require('assert')
var bytewise = require('./')
var encode = bytewise.encode

// Numbers are stored in 9 bytes -- 1 byte for the type tag and an 8 byte float
assert.equal(encode(12345).toString('hex'), '4240c81c8000000000')
// Negative numbers are stored as positive numbers, but with a lower type tag and their bits inverted
assert.equal(encode(-12345).toString('hex'), '41bf37e37fffffffff')

// The `toString` method of `Buffer` values returned by `encode` is augmented
// to use "hex" encoding by default. This ensures bytewise encoding still
// works when bytewise keys are accidentally coerced to strings.
assert.equal(encode(-12345) + '', '41bf37e37fffffffff')

// All numbers, integer or floating point, are stored as IEEE 754 doubles
assert.equal(encode(1.2345) + '', '423ff3c083126e978d')
assert.equal(encode(-1.2345) + '', '41c00c3f7ced916872')

// Serialization does not preserve the sign bit, so 0 is indistinguishable from -0
assert.equal(encode(-0) + '', '420000000000000000')
assert.equal(encode(0) + '', '420000000000000000')

// Strings are encoded as utf8, prefixed with their type tag (0x70, or the "p" character)
assert.equal(encode('foo').toString('utf8'), 'pfoo')
assert.equal(encode('föo').toString('utf8'), 'pföo')

// Arrays are just a series of values, separated by and terminated with a null byte
assert.equal(encode([ 'foo', 'bar' ]) + '', 'a070666f6f00706261720000')

// Items in arrays are delimited by null bytes, and a final end byte marks the end of the array
assert.equal(encode([ 'foo' ]).toString('binary'), '\xa0pfoo\x00\x00')

// Complex types like arrays can be arbitrarily nested, and fixed-sized types don't require a terminating byte
assert.equal(encode([ [ 'foo', 10 ], 'bar' ]) + '', 'a0a070666f6f0042402400000000000000706261720000')

decode parses a buffer and returns the structured data.

var decode = bytewise.decode
var key = 'a0a070666f6f0042402400000000000000706261720000'

// Decode takes a buffer and decodes a bytewise value
assert.deepEqual(decode(new Buffer(key, 'hex')), [ [ 'foo', 10 ], 'bar' ])

// String input can be decoded, defaulting to hex
assert.deepEqual(decode(key), [ [ 'foo', 10 ], 'bar' ])

// An alternate string encoding can be provided when initializing bytewise
// TODO

Use Cases

Take a look at the bytewise library for an idea of what kind of stuff this could be useful for.

Issues

Issues should be reported here.

Author: Deanlandolt
Source Code: https://github.com/deanlandolt/bytewise-core 
License: MIT

#javascript #structure 

Bytewise: Binary Serialization Which Sorts Bytewise

bytewise

Binary serialization of arbitrarily complex structures that sort element-wise

Allows efficient comparison of a variety of useful data structures in a way that respects the sort order defined by typewise.

The bytewise-core library defines a total order for well-structured keyspaces in key value stores. The ordering is a superset of the sorting algorithm defined by IndexedDB and the one defined by CouchDB. This serialization makes it easy to take advantage of the benefits of structured indexing in systems with fast but naïve binary indexing (key/value databases).

NB: use charwise if possible

The charwise library gives you almost everything bytewise does, but much faster, and it's much more actively developed and maintained. You should probably try it first.

Order of Supported Structures

This is the top level order of the various structures that may be encoded:

  • null
  • false
  • true
  • Number (numeric)
  • Date (numeric, epoch offset)
  • Buffer (bitwise)
  • String (lexicographic)
  • Array (componentwise)
  • undefined

These specific structures can be used to serialize the vast majority of javascript values in a way that can be sorted in an efficient, complete and sensible manner. Each value is prefixed with a type tag, and we do some bit munging to encode our values in such a way as to carefully preserve the desired sort behavior, even in the presence of structural nesting.

For example, negative numbers are stored as a different type from positive numbers, with its sign bit stripped and its bytes inverted to ensure numbers with a larger magnitude come first. Infinity and -Infinity can also be encoded -- they are nullary types, encoded using just their type tag. The same can be said of null and undefined, and the boolean values false, true. Date instances are stored just like Number instances -- but as in IndexedDB -- Date sorts after Number (including Infinity). Buffer data can be stored in the raw, and is sorted before String data. Then come the collection types (just Array for the time being).

Unsupported Structures

This serialization accommodates a wide range of javascript structures, but it is not exhaustive. Complex structures with reference cycles cannot be serialized. NaN is also illegal anywhere in a serialized value -- its presence very likely indicates an error, but more importantly, sorting on NaN is nonsensical by definition. Objects which are instances of Error are also rejected, as are Invalid Date objects. If and when we support more complex collection types, WeakMap and WeakSet objects will never be serializable as they cannot be enumerated. Attempts to serialize any values which include these structures will throw an error.
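
For instance, a minimal sketch (not from the README) of the failure mode described above:

var assert = require('assert');
var bytewise = require('./');

// Values containing NaN or reference cycles throw on encode
assert.throws(function () { bytewise.encode(NaN); });

var cycle = [];
cycle.push(cycle);
assert.throws(function () { bytewise.encode(cycle); });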

Usage

encode serializes any supported type and returns a buffer, or throws if an unsupported structure is provided.

var assert = require('assert');
var bytewise = require('./');
var encode = bytewise.encode;

// Many types can be represented using only their type tag, a single byte
assert.equal(encode(null).toString('binary'), '\x10');
assert.equal(encode(false).toString('binary'), '\x20');
assert.equal(encode(true).toString('binary'), '\x21');
assert.equal(encode(undefined).toString('binary'), '\xf0');

// Numbers are stored in 9 bytes -- 1 byte for the type tag and an 8 byte float
assert.equal(encode(12345).toString('hex'), '4240c81c8000000000');
// Negative numbers are stored as positive numbers, but with a lower type tag and their bits inverted
assert.equal(encode(-12345).toString('hex'), '41bf37e37fffffffff');

// The `toString` method of `Buffer` values returned by `encode` is augmented
// to use "hex" encoding by default. This ensures bytewise encoding still
// works when bytewise keys are accidentally coerced to strings.
assert.equal(encode(true) + '', '21');

// All numbers, integer or floating point, are stored as IEEE 754 doubles
assert.equal(encode(1.2345) + '', '423ff3c083126e978d');
assert.equal(encode(-1.2345) + '', '41c00c3f7ced916872');

// Serialization does not preserve the sign bit, so 0 is indistinguishable from -0
assert.equal(encode(-0) + '', '420000000000000000');
assert.equal(encode(0) + '', '420000000000000000');

// We can even serialize Infinity and -Infinity, though we just use their type tag
assert.equal(encode(-Infinity) + '', '40');
assert.equal(encode(Infinity) + '', '43');

// Dates are stored just like numbers, but with different (and higher) type tags
assert.equal(encode(new Date(-12345)) + '', '51bf37e37fffffffff');
assert.equal(encode(new Date(12345)) + '', '5240c81c8000000000');

// Strings are encoded as utf8, prefixed with their type tag (0x70, or the "p" character)
assert.equal(encode('foo').toString('utf8'), 'pfoo');
assert.equal(encode('föo').toString('utf8'), 'pföo');

// Buffers are also left alone, other than being prefixed with their type tag (0x60)
assert.equal(encode(new Buffer('ff00fe01', 'hex')) + '', '60ff00fe01');

// Arrays are just a series of values terminated with a null byte
assert.equal(encode([ true, -1.2345 ]) + '', 'a02141c00c3f7ced91687200');

// Strings are also legible when embedded in complex structures like arrays
// Items in arrays are delimited by null bytes, and a final end byte marks the end of the array
assert.equal(encode([ 'foo' ]).toString('binary'), '\xa0pfoo\x00\x00');

// The 0x01 and 0xfe bytes are used to escape high and low bytes while preserving the correct collation
assert.equal(encode([ new Buffer('ff00fe01', 'hex') ]) + '', 'a060fefe0101fefd01020000');

// Complex types like arrays can be arbitrarily nested, and fixed-sized types don't require a terminating byte
assert.equal(encode([ [ 'foo', true ], 'bar' ]).toString('binary'), '\xa0\xa0pfoo\x00\x21\x00pbar\x00\x00');

// Objects are just string-keyed maps, stored like arrays: [ k1, v1, k2, v2, ... ]
// NYI in this version
// assert.equal(encode({ foo: true, bar: 'baz' }).toString('binary'), '\xb0pfoo\x00\x21pbar\x00pbaz\x00\x00');

decode parses a buffer and returns the structured data, or throws if malformed:

var samples = [
  'foo √',
  null,
  '',
  new Date('2000-01-01T00:00:00Z'),
  42,
  undefined,
  [ undefined ],
  -1.1,
  {},
  [],
  true,
  { bar: 1 },
  [ { bar: 1 }, { bar: [ 'baz' ] } ],
  -Infinity,
  false
];
var result = samples.map(bytewise.encode).map(bytewise.decode);
assert.deepEqual(samples, result);

compare is just a convenience bytewise comparison function:

var sorted = [
  null,
  false,
  true,
  -Infinity,
  -1.1,
  42,
  new Date('2000-01-01Z'),
  '',
  'foo √',
  [],
  [ { bar: 1 }, { bar: [ 'baz' ] } ],
  [ undefined ],
  {},
  { bar: 1 },
  undefined
];

var result = samples.map(bytewise.encode).sort(bytewise.compare).map(bytewise.decode);
assert.deepEqual(sorted, result);

Use Cases

Numeric indexing

This is surprisingly difficult to do with vanilla LevelDB -- basic approaches require ugly hacks like left-padding numbers to make them sort lexicographically (which is prone to overflow problems). You could write a one-off comparator function in C, but there are a number of drawbacks to this as well. This serialization solves the problem in a clean and generalized way, in part by taking advantage of properties of the byte sequences defined by the IEEE 754 floating point standard.

Namespaces, partitions and patterns

This is another really basic and oft-needed amenity that isn't very easy out of the box in LevelDB. We reserve the lowest and highest bytes as abstract tags representing low and high key sentinels, allowing you to faithfully request all values in any portion of an array. Arrays can be used as namespaces without any leaky hacks, or even more detailed slicing can be done per element to implement wildcards or even more powerful pattern semantics for specific elements in the array keyspace.

Document storage

It may be reasonably fast to encode and decode, but JSON.stringify isn't terribly useful for storing objects as document records in a way that supports range queries, where LevelDB and its ilk excel. This serialization allows you to build indexes on top of your documents, as well as expanding the range of serializable types available beyond JSON.

Multilevel language-sensitive collation

You have a bunch of language-specific strings you want to index, but at the time of indexing you're not sure how finely sorted you need them. Queries may or may not care about case or punctuation differences, for instance. You can index your string as an array of weights, most-to-least specific, prefixed by the collation language (since the weights are language-sensitive). There are mechanisms available to compress this array to keep its size reasonable.

Full-text search

Full-text indexing is a natural extension of the language-sensitive collation use case described above. Add a little lexing and stemming and basic full text search is close at hand. Structured indexes can be employed to make other more interesting search features possible as well.

CouchDB-style "joins"

Build a view that colocates related subrecords, taking advantage of component-wise sorting of arrays to interleave them. This is a technique employed by CouchDB, leveraging its very similar collation semantics to keep related records grouped together hierarchically. More recently, Akiban has formalized this concept of table grouping and brought it to the SQL world. Again, bytewise sorting extends naturally to their notion of hierarchical keys.

Emulating other systems

Clients that wish to employ a subset of the full range of possible types above can preprocess values to coerce them into the desired simpler forms before serializing. For instance, if you were to build CouchDB-style indexing, you could round-trip values through a JSON encode cycle (to get just the subset of types supported by CouchDB) before passing them to encode, resulting in a collation that is identical to CouchDB's. Emulating IndexedDB's collation would at least require preprocessing away Buffer data and undefined values and normalizing for the ES6 types.
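
A small sketch of that CouchDB-style coercion (the helper name encodeCouchStyle is ours, not part of the library):

var bytewise = require('./');

// Round-tripping through JSON reduces values to JSON's type subset
// (the types CouchDB supports) before bytewise encoding
function encodeCouchStyle(value) {
  return bytewise.encode(JSON.parse(JSON.stringify(value)));
}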

Issues

Issues should be reported here.

Author: Deanlandolt
Source Code: https://github.com/deanlandolt/bytewise 
License: MIT

#bytes #javascript #structure 

Mergo: Merging Go Structs and Maps Since 2013

Mergo

A helper to merge structs and maps in Golang. Useful for configuration default values, avoiding messy if-statements.

Mergo merges same-type structs and maps by setting default values in zero-value fields. Mergo won't merge unexported (private) fields; it will recursively merge any exported one. It also won't merge structs inside maps (because they are not addressable using Go reflection).

Also a lovely comune (municipality) in the Province of Ancona in the Italian region of Marche.

Status

It is ready for production use. It is used in several projects by Docker, Google, The Linux Foundation, VMWare, Shopify, etc.

Important note

Please keep in mind that a problematic PR broke 0.3.9. I reverted it in 0.3.10, and I consider it stable but not bug-free. Also, this version adds support for Go modules.

Keep in mind that in 0.3.2, Mergo changed the Merge() and Map() signatures to support transformers. I added an optional, variadic argument so that it won't break existing code.

If you were using Mergo before April 6th, 2015, please check your project works as intended after updating your local copy with go get -u github.com/imdario/mergo. I apologize for any issue caused by its previous behavior and any future bug that Mergo could cause in existing projects after the change (release 0.2.0).

Mergo in the wild

Install

go get github.com/imdario/mergo

// use in your .go code
import (
    "github.com/imdario/mergo"
)

Usage

You can only merge same-type structs with exported fields (initialized as the zero value of their type) and same-type maps. Mergo won't merge unexported (private) fields but will recursively merge any exported one. It won't merge empty struct values, as they are zero values too. Also, maps will be merged recursively, except for structs inside maps (because they are not addressable using Go reflection).

if err := mergo.Merge(&dst, src); err != nil {
    // ...
}

Also, you can merge overwriting values using the transformer WithOverride.

if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
    // ...
}

Additionally, you can map a map[string]interface{} to a struct (and vice versa, from struct to map), following the same restrictions as in Merge(). Keys are capitalized to find each corresponding exported field.

if err := mergo.Map(&dst, srcMap); err != nil {
    // ...
}

Warning: if you map a struct to a map, it won't do so recursively. Don't expect Mergo to map struct members of your struct as map[string]interface{}. They will just be assigned as values.
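
As a minimal sketch (hypothetical struct and values, not from the README), Map fills a zero-valued struct from a map, capitalizing keys to match exported fields:

package main

import (
    "fmt"

    "github.com/imdario/mergo"
)

type Config struct {
    Host string
    Port int
}

func main() {
    var dst Config
    src := map[string]interface{}{"host": "localhost", "port": 8080}
    // "host" and "port" are capitalized to match the exported fields Host and Port
    if err := mergo.Map(&dst, src); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(dst) // {localhost 8080}
}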

Here is a nice example:

package main

import (
    "fmt"
    "github.com/imdario/mergo"
)

type Foo struct {
    A string
    B int64
}

func main() {
    src := Foo{
        A: "one",
        B: 2,
    }
    dest := Foo{
        A: "two",
    }
    mergo.Merge(&dest, src)
    fmt.Println(dest)
    // Will print
    // {two 2}
}

Note: if tests are failing due to a missing package, please execute:

go get gopkg.in/yaml.v2

Transformers

Transformers allow you to merge specific types differently from the default behavior. In other words, you can now customize how some types are merged. For example, time.Time is a struct; it doesn't have a simple zero value, but IsZero can return true because its fields have zero values. How can we merge a non-zero time.Time?

package main

import (
    "fmt"
    "reflect"
    "time"

    "github.com/imdario/mergo"
)

type timeTransformer struct {
}

func (t timeTransformer) Transformer(typ reflect.Type) func(dst, src reflect.Value) error {
    if typ == reflect.TypeOf(time.Time{}) {
        return func(dst, src reflect.Value) error {
            if dst.CanSet() {
                isZero := dst.MethodByName("IsZero")
                result := isZero.Call([]reflect.Value{})
                if result[0].Bool() {
                    dst.Set(src)
                }
            }
            return nil
        }
    }
    return nil
}

type Snapshot struct {
    Time time.Time
    // ...
}

func main() {
    src := Snapshot{time.Now()}
    dest := Snapshot{}
    mergo.Merge(&dest, src, mergo.WithTransformers(timeTransformer{}))
    fmt.Println(dest)
    // Will print
    // { 2018-01-12 01:15:00 +0000 UTC m=+0.000000001 }
}

Contact me

If I can help you, if you have an idea, or if you are using Mergo in your projects, don't hesitate to drop me a line (or a pull request): @im_dario

About

Written by Dario Castañé.

Author: imdario
Source Code: https://github.com/imdario/mergo 
License: BSD-3-Clause license

#go #golang #structure 

How to structure GraphQL server code

Out of the box, GraphQL provides you with two abstractions: a schema and resolvers.

The first layer of abstraction is provided by the GraphQL schema. It hides the details of the backend architecture from the frontend. The schema can even be written in GraphQL schema language, making it quite concise and easy to read or maintain. The most important decision at this layer is choosing which types and fields to create. This is specific to each application, but there are some general rules you can follow. For the sake of brevity I’ll skip them here, but if you’re interested, watch out for a future post on our GraphQL blog.
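
As a minimal, hypothetical illustration of this first layer, here is a schema written in GraphQL schema language (the type and field names are invented for the example):

type Post {
  id: ID!
  title: String
  author: Author
}

type Author {
  id: ID!
  name: String
  posts: [Post]
}

type Query {
  post(id: ID!): Post
}

The frontend only ever sees these types and fields; whether posts come from SQL, a REST service, or a cache is a backend detail hidden behind the resolvers.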

#graphql #code #structure
