Castore: Better DevX for Event Sourcing in TypeScript

Castore 🦫

Better DevX for Event Sourcing in TypeScript

Castore provides a unified interface for implementing Event Sourcing in TypeScript 🦸‍♂️.

🤔 Why use Castore?

💬 Verbosity: Castore classes are designed to keep your code DRY and provide the optimal developer experience. Event Sourcing is hard, don't make it harder!

πŸ“ Strong typings: We love type inference, we know you will to!

πŸ„β€β™‚οΈ Interfaces before implementations: Castore provides a standard interface to modelize common event sourcing patterns in TypeScript. But it DOES NOT enforce any particular implementation (storage service, messaging system etc...). You can use Castore in React apps, containers or lambdas, it's up to you! Some common implementations are provided, but you are free to use any implementation you want via custom classes, as long as they follow the required interfaces.

πŸ‘ Enforces best practices: Gained from years of usage like using integer versions instead of timestamps, transactions for multi-store events and state-carrying transfer events for projections.

🛠 Rich suite of helpers: Like a mock events builder to help you write tests.

Table of contents

Events

The first step in your ✨ Castore journey ✨ is to define your business events! 🦫

Castore lets you easily create the Event Types which will constitute your Event Store. Simply use the EventType class and start defining, once and for all, your events! 🎉

import { EventType } from "@castore/core";

export const userCreatedEvent = new EventType<
  // TS type of the event type identifier
  'USER_CREATED',
  // TS type of the event details
  {
    aggregateId: string;
    version: number;
    type: 'USER_CREATED';
    timestamp: string;
    payload: { name: string; age: number };
  }
>({
  // Runtime event type identifier
  type: 'USER_CREATED',
});

const userRemovedEvent = ...

const eventTypes = [
  userCreatedEvent,
  userRemovedEvent,
];

You can also define your events with JSON Schemas or Zod schemas; see the @castore/json-schema-event and @castore/zod-event documentation for implementation details 🦫

Once you're happy with your set of EventTypes, you can move on to step 2: attaching the EventTypes to an actual EventStore!

Event Store

Welcome to the heart of Castore: the EventStore ❤️
The EventStore class lets you instantiate an object containing all the methods you will need to interact with your event sourcing store. 💪

const userEventStore = new EventStore({
  eventStoreId: 'user-event-store-id',
  eventTypes,
  // 👇 See #reducer sub-section
  reducer,
  // 👇 See #storage_adapters section
  storageAdapter,
});

Reducer

The reducer needed in the EventStore initialization is the function that will be applied to the sorted array of events in order to build the aggregates ⚙️. It works like your usual Redux reducer!

Basically, it consists of a function implementing switch cases for all event types and returning the aggregate updated with your business logic. 🧠

Here is an example reducer for our User Event Store.

export const usersReducer = (
  userAggregate: UserAggregate,
  event: UserEventsDetails,
): UserAggregate => {
  const { version, aggregateId } = event;

  switch (event.type) {
    case 'USER_CREATED': {
      const { name, age } = event.payload;

      return {
        aggregateId,
        version,
        name,
        age,
        status: 'CREATED',
      };
    }
    case 'USER_REMOVED':
      return {
        ...userAggregate,
        version,
        status: 'REMOVED',
      };
  }
};

Storage Adapter

You can store your events in many different ways. To specify how to store them (in memory, DynamoDB...), Castore implements Storage Adapters.

Adapters offer an interface between the Event Store class and your storage method 💾.

To be able to use your EventStore, you will need to attach a Storage Adapter 🔗.

All the Storage Adapters have the same interface, and you can create your own if you want to implement new storage methods!

So far, Castore supports 2 Storage Adapters ✨:

  • in-memory
  • DynamoDB
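
For instance, attaching the in-memory adapter could look like the sketch below (the package and class names are assumptions for illustration; check the adapter's documentation for the exact imports):

// Hypothetical import; the actual package and class names may differ
import { InMemoryStorageAdapter } from '@castore/inmemory-event-storage-adapter';

// Passed as the `storageAdapter` option of the EventStore shown above
const storageAdapter = new InMemoryStorageAdapter();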

Event Store Interface

Now that our Event Store has been instantiated with a reducer and a Storage Adapter, we can start using it to actually populate our database with events and retrieve business data from it 🌈.

To do that, the Event Store class exposes several methods, including the following two:

pushEvent: Takes an object containing event details and puts it in the database. It will throw if the event's version already exists!

getAggregate: Returns the output of the reducer applied to the array of all events.

Here is a quick example showing how an application would use these two methods:

const removeUser = async (userId: string) => {
  // get the aggregate for that userId,
  // which is a representation of our user's state
  const { aggregate } = await userEventStore.getAggregate(userId);

  // use the aggregate to check the user status
  if (aggregate.status === 'REMOVED') {
    throw new Error('User already removed');
  }

  // put the USER_REMOVED event in the event store 🦫
  await userEventStore.pushEvent({
    aggregateId: userId,
    version: aggregate.version + 1,
    type: 'USER_REMOVED',
    // timestamp is typed as a string in the event details
    timestamp: new Date().toISOString(),
  });
};

Going Further 🏃‍♂️

We've only covered the basic functionalities of the Event Store!

The Event Store class actually implements other very useful methods 💪

Here is a small recap of these methods:

getEvents: Returns the list of all events for a given aggregateId.

listAggregateIds: Returns the list of all aggregateIds present in the Event Store.

simulateAggregate: Simulates the aggregate you would have obtained with getAggregate at a given date.
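
For illustration, here is a minimal sketch of how these methods might be used together (the return shapes and the simulationDate option are assumptions based on the examples above, not confirmed signatures):

// Assumed return shapes ({ events }, { aggregateIds }); check the docs
const { events } = await userEventStore.getEvents(userId);
const { aggregateIds } = await userEventStore.listAggregateIds();

// Rebuild the aggregate as it would have been at a past date
const pastAggregate = userEventStore.simulateAggregate(events, {
  simulationDate: '2022-01-01T00:00:00.000Z',
});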


Download Details:

Author: castore-dev
Source Code: https://github.com/castore-dev/castore

License:  MIT license

#typescript 

Highly Available Prometheus Setup with Long Term Storage Capabilities

Overview

Thanos is a set of components that can be composed into a highly available metric system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments.

Thanos is a CNCF Incubating project.

Thanos leverages the Prometheus 2.0 storage format to cost-efficiently store historical metric data in any object storage while retaining fast query latencies. Additionally, it provides a global query view across all Prometheus installations and can merge data from Prometheus HA pairs on the fly.

Concretely, the aims of the project are:

  1. Global query view of metrics.
  2. Unlimited retention of metrics.
  3. High availability of components, including Prometheus.

Features

  • Global querying view across all connected Prometheus servers
  • Deduplication and merging of metrics collected from Prometheus HA pairs
  • Seamless integration with existing Prometheus setups
  • Any object storage as its only, optional dependency
  • Downsampling historical data for massive query speedup
  • Cross-cluster federation
  • Fault-tolerant query routing
  • Simple gRPC "Store API" for unified data access across all metric data
  • Easy integration points for custom metric providers

Architecture Overview

Deployment with Sidecar (architecture diagram).

Deployment with Receive (architecture diagram).

Thanos Philosophy

The philosophy of Thanos and our community borrows much from the UNIX philosophy and the Go programming language.

  • Each subcommand should do one thing and do it well
    • e.g. thanos query proxies incoming calls to known store API endpoints, merging the results
  • Write components that work together
    • e.g. blocks should be stored in the native Prometheus format
  • Make it easy to read, write, and run components
    • e.g. reduce complexity in system design and implementation

Releases

The main branch should be stable and usable. Every commit to main builds a Docker image named main-<date>-<sha> in quay.io/thanos/thanos and in the thanosio/thanos Docker Hub repository (mirror).

We also perform minor releases every 6 weeks.

For each minor release, we build tarballs for major platforms and release Docker images.

See release process docs for details.

Getting Started

Contributing

Contributions are very welcome! See our CONTRIBUTING.md for more information.

Community

Thanos is an open source project and we value and welcome new contributors and members of the community. Here are ways to get in touch with the community:

Adopters

See Adopters List.

Maintainers

See MAINTAINERS.md

Download Details:

Author: Thanos-io
Source Code: https://github.com/thanos-io/thanos 
License: Apache-2.0 license

#go #golang #storage #monitoring 

Redisstorage: Redis Based Storage Backend for Colly

Redis Storage for Colly

This is a redis based storage backend for Colly collectors.

Install

go get -u github.com/gocolly/redisstorage

Usage

import (
    "github.com/gocolly/colly"
    "github.com/gocolly/redisstorage"
)

c := colly.NewCollector()

storage := &redisstorage.Storage{
    Address:  "127.0.0.1:6379",
    Password: "",
    DB:       0,
    Prefix:   "job01",
}

err := c.SetStorage(storage)
if err != nil {
    panic(err)
}

Bugs

Bugs or suggestions? Visit the issue tracker or join #colly on freenode

Download Details:

Author: Gocolly
Source Code: https://github.com/gocolly/redisstorage 
License: Apache-2.0 license

#go #golang #storage 

10 Favorite PHP Libraries for Data Structure and Storage

In today's post we will learn about 10 Favorite PHP Libraries for Data Structure and Storage.

What are Data Structures?

A data structure is a way of arranging data on a computer so that it can be accessed and updated efficiently. Data structures are used not only for organizing data, but also for processing, retrieving, and storing it.

Table of contents:

  • CakePHP Collection - A simple collections library.
  • Fractal - A library for converting complex data structures to JSON output.
  • Ginq - Another PHP library based on .NET's LINQ.
  • JsonMapper - A library that maps nested JSON structures onto PHP classes.
  • JSON Machine - Provides iteration over huge JSONs using simple foreach
  • Knapsack - Collection library inspired by Clojure's sequences.
  • Msgpack.php - A pure PHP implementation of the MessagePack serialization format.
  • PINQ - A PHP library based on .NET's LINQ (Language Integrated Query).
  • Serializer - A library for serialising and de-serialising data.
  • YaLinqo - Yet Another LINQ to Objects for PHP.

1 - CakePHP Collection:

A simple collections library.

The collection classes provide a set of tools to manipulate arrays or Traversable objects. If you have ever used underscore.js, you have an idea of what you can expect from the collection classes.

Usage

Collections can be created using an array or Traversable object. A simple use of a Collection would be:

use Cake\Collection\Collection;

$items = ['a' => 1, 'b' => 2, 'c' => 3];
$collection = new Collection($items);

// Create a new collection containing elements
// with a value greater than one.
$overOne = $collection->filter(function ($value, $key, $iterator) {
    return $value > 1;
});

The Collection\CollectionTrait allows you to integrate collection-like features into any Traversable object you have in your application as well.
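
As a minimal sketch (the MyItems class is hypothetical), mixing the trait into your own Traversable could look like this:

use Cake\Collection\CollectionTrait;

// Hypothetical class: any Traversable can mix in the collection methods
class MyItems implements IteratorAggregate
{
    use CollectionTrait;

    private $items = ['a' => 1, 'b' => 2, 'c' => 3];

    public function getIterator(): Traversable
    {
        return new ArrayIterator($this->items);
    }
}

$overOne = (new MyItems())->filter(function ($value) {
    return $value > 1;
});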

Documentation

Please make sure you check the official documentation

View on Github

2 - Fractal:

A library for converting complex data structures to JSON output.

Fractal provides a presentation and transformation layer for complex data output, like that found in RESTful APIs, and works really well with JSON. Think of this as a view layer for your JSON/YAML/etc.

When building an API it is common for people to just grab stuff from the database and pass it to json_encode(). This might be passable for "trivial" APIs, but if they are in use by the public or used by mobile applications, then this will quickly lead to inconsistent output.
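
As a brief sketch of that view layer (using a hypothetical $book row; Manager, Item and toJson come from Fractal's documented API):

use League\Fractal\Manager;
use League\Fractal\Resource\Item;

$manager = new Manager();

// Hypothetical raw row, e.g. fetched from the database
$book = ['id' => '1', 'title' => 'Moby Dick', 'yr' => '1851'];

// The transformer decides exactly what is exposed and how it is cast
$resource = new Item($book, function (array $book) {
    return [
        'id'    => (int) $book['id'],
        'title' => $book['title'],
        'year'  => (int) $book['yr'],
    ];
});

echo $manager->createData($resource)->toJson();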

Goals

  • Create a protective shield between source data and output, so schema changes do not affect users
  • Systematic type-casting of data, to avoid foreach()ing through and (bool)ing everything
  • Include (a.k.a embedding, nesting or side-loading) relationships for complex data structures
  • Work with standards like HAL and JSON-API but also allow custom serialization
  • Support the pagination of data results, for small and large data sets alike
  • Generally ease the subtle complexities of outputting data in a non-trivial API

This package is compliant with PSR-1, PSR-2 and PSR-4. If you notice compliance oversights, please send a patch via pull request.

Install

Via Composer

$ composer require league/fractal

Requirements

The following versions of PHP are supported by this version:

>= PHP 7.4

Todo

  • add HAL serializers

Testing

$ phpunit

Documentation

Fractal has full documentation, powered by Jekyll.

Contribute to this documentation in the gh-pages branch.

View on Github

3 - Ginq:

Another PHP library based on .NET's LINQ.

Array handling in PHP? Be happy with Ginq!

Ginq is a DSL that handles PHP arrays and iterators in a unified way.

Ginq is inspired by LINQ to Objects, but is not a clone.

Many functions in Ginq are evaluated lazily, and no actions are taken until the results are actually needed. This feature brings you many benefits.

Install

composer.json:

{
    "require": {
        "ginq/ginq": "dev-master"
    }
}

see: https://packagist.org/packages/ginq/ginq

Usage

$xs = Ginq::from(array(1,2,3,4,5,6,7,8,9,10))
        ->where(function($x) { return $x % 2 != 0; })
        ->select(function($x) { return $x * $x; });

You pass Ginq data and build a query with it. In the example above, you ask Ginq to choose the odd numbers and square each of them.

But Ginq does nothing yet; it only knows that you want the chosen numbers squared.

Let's execute foreach loop with Ginq to get the result.

foreach ($xs as $x) { echo "$x "; }

The result is

1 9 25 49 81

You got the expected result!

Next, you can get an array with toList.

$xs->toList();

array(1,9,25,49,81);

Ginq also has functions well known from SQL, such as join(), orderBy(), and groupBy(), in addition to the select() and where() shown above.

Selector and Predicate

Most methods in Ginq receive a closure as an argument.

You may not be familiar with closures, but they are very simple, and there are just three types of closures in Ginq to remember: predicate, selector, and connection selector.

Predicate

A closure passed to a method that filters elements, such as where(), is called a predicate.

A predicate is a closure that receives a value (and optionally its key) from the elements and returns a boolean.

function ($v, [$k]) { return $v % 2 == 0; }

You get even numbers when you pass this closure to where(). You can omit the second argument when you don't need it.

Selector

A closure passed to a method that performs a projection, such as select(), is called a selector.

A selector is a closure that receives a value and key from the elements, creates a new value or key, and returns it.

function ($v, [$k]) { return $v * $v ; }

You get the squares of the original numbers when you pass this closure to select().

Selectors are also used to specify the grouping key with groupBy() and the sorting key with orderBy().

Connection Selector

A connection selector is a selector that combines two elements into one; it is used with join() and zip().

function ($v0, $v1, [$k0, $k1]) { return array($v0, $v1) ; }

This function receives 4 arguments, two values and two keys, and then creates a new value or key and returns it. You can omit the arguments you don't need.

Here is a zip() example that combines elements from two arrays pairwise.

$foods  = array("meat", "pasta", "salad");
$spices = array("thyme", "basil", "dill");

$xs = Ginq::from($foods)
        ->zip($spices, function($f, $s) {
            return "$f with $s!";
        })
        ;

foreach ($xs as $x) { echo "$x\n"; }

meat with thyme!
pasta with basil!
salad with dill!

View on Github

4 - JsonMapper:

A library that maps nested JSON structures onto PHP classes.

Takes data retrieved from a JSON web service and converts it into nested objects and arrays - using your own model classes.

Starting from a base object, it maps JSON data on class properties, converting them into the correct simple types or objects.

It's a bit like the native SOAP parameter mapping PHP's SoapClient gives you, but for JSON. It does not rely on any schema, only your PHP class definitions.

Type detection works by parsing @var docblock annotations of class properties, as well as type hints in setter methods.

You do not have to modify your model classes by adding JSON specific code; it works automatically by parsing already-existing docblocks.

Keywords: deserialization, hydration

Usage

Basic usage

  1. Register an autoloader that can load PSR-0 compatible classes.
  2. Create a JsonMapper object instance
  3. Call the map or mapArray method, depending on your data

Map a normal object:

<?php
require 'autoload.php';
$mapper = new JsonMapper();
$contactObject = $mapper->map($jsonContact, new Contact());
?>

Map an array of objects:

<?php
require 'autoload.php';
$mapper = new JsonMapper();
$contactsArray = $mapper->mapArray(
    $jsonContacts, array(), 'Contact'
);
?>

Instead of array() you may also use ArrayObject and descending classes.

Example

JSON from an address book web service:

{
    "name": "Sheldon Cooper",
    "address": {
        "street": "2311 N. Los Robles Avenue",
        "city": "Pasadena"
    }
}

Your local Contact class:

<?php
class Contact
{
    /**
     * Full name
     * @var string
     */
    public $name;

    /**
     * @var Address
     */
    public $address;
}
?>

Your local Address class:

<?php
class Address
{
    public $street;
    public $city;

    public function getGeoCoords()
    {
        //do something with $street and $city
    }
}
?>

Your application code:

<?php
$json = json_decode(file_get_contents('http://example.org/sheldon.json'));
$mapper = new JsonMapper();
$contact = $mapper->map($json, new Contact());

echo "Geo coordinates for " . $contact->name . ": "
    . var_export($contact->address->getGeoCoords(), true);
?>

View on Github

5 - JSON Machine:

Provides iteration over huge JSONs using simple foreach

Very easy to use and memory efficient drop-in replacement for inefficient iteration of big JSON files or streams for PHP >=7.0. See TL;DR. No dependencies in production except optional ext-json.

TL;DR

<?php

use \JsonMachine\Items;

// this often causes Allowed Memory Size Exhausted
- $users = json_decode(file_get_contents('500MB-users.json'));

// this usually takes few kB of memory no matter the file size
+ $users = Items::fromFile('500MB-users.json');

foreach ($users as $id => $user) {
    // just process $user as usual
    var_dump($user->name);
}

Random access like $users[42] is not yet possible. Use above-mentioned foreach and find the item or use JSON Pointer.

Count the items via iterator_count($users). Remember it will still have to internally iterate the whole thing to get the count and thus will take about the same time.
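
For instance, a tiny sketch of counting (streaming the same hypothetical 500MB-users.json):

<?php

use \JsonMachine\Items;

$users = Items::fromFile('500MB-users.json');

// iterator_count() streams through the whole file once to produce the count
$count = iterator_count($users);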

Requires ext-json if used out of the box. See Decoders.

Introduction

JSON Machine is an efficient, easy-to-use and fast JSON stream/pull/incremental/lazy (whatever you name it) parser based on generators developed for unpredictably long JSON streams or documents. Main features are:

  • Constant memory footprint for unpredictably large JSON documents.
  • Ease of use. Just iterate JSON of any size with foreach. No events and callbacks.
  • Efficient iteration on any subtree of the document, specified by JSON Pointer
  • Speed. Performance critical code contains no unnecessary function calls, no regular expressions and uses native json_decode to decode JSON document items by default. See Decoders.
  • Parses not only streams but any iterable that produces JSON chunks.
  • Thoroughly tested. More than 200 tests and 1000 assertions.

Parsing JSON documents

Parsing a document

Let's say that fruits.json contains this huge JSON document:

// fruits.json
{
    "apple": {
        "color": "red"
    },
    "pear": {
        "color": "yellow"
    }
}

It can be parsed this way:

<?php

use \JsonMachine\Items;

$fruits = Items::fromFile('fruits.json');

foreach ($fruits as $name => $data) {
    // 1st iteration: $name === "apple" and $data->color === "red"
    // 2nd iteration: $name === "pear" and $data->color === "yellow"
}

Parsing a JSON array instead of a JSON object follows the same logic. The key in a foreach will be the numeric index of an item.

If you prefer JSON Machine to return arrays instead of objects, use new ExtJsonDecoder(true) as a decoder.

<?php

use JsonMachine\JsonDecoder\ExtJsonDecoder;
use JsonMachine\Items;

$objects = Items::fromFile('path/to.json', ['decoder' => new ExtJsonDecoder(true)]);

Follow CHANGELOG.

Parsing a subtree

If you want to iterate only the results subtree in this fruits.json:

// fruits.json
{
    "results": {
        "apple": {
            "color": "red"
        },
        "pear": {
            "color": "yellow"
        }
    }
}

use JSON Pointer /results as pointer option:

<?php

use \JsonMachine\Items;

$fruits = Items::fromFile('fruits.json', ['pointer' => '/results']);
foreach ($fruits as $name => $data) {
    // The same as above, which means:
    // 1st iteration: $name === "apple" and $data->color === "red"
    // 2nd iteration: $name === "pear" and $data->color === "yellow"
}

Note:

The value of results is not loaded into memory at once; only one item in results is held at a time. It is always one item in memory at a time at the level/subtree you are currently iterating. Thus, the memory consumption is constant.

View on Github

6 - Knapsack:

Collection library inspired by Clojure's sequences.

Knapsack is a collection library for PHP >= 5.6 that implements most of the sequence operations proposed by Clojure's sequences, plus some additional ones. All its features are available as functions (for functional programming) and as collection pipeline object methods.

The heart of Knapsack is its Collection class. However, each of its methods calls a simple function of the same name that does the actual heavy lifting; these are located in the DusanKasan\Knapsack namespace. Collection is a Traversable implementor (via IteratorAggregate) that accepts a Traversable object, an array, or even a callable that produces a Traversable object or array as its constructor argument. It provides most of Clojure's sequence functionality plus some extra features. It is also immutable - operations performed on the collection will return a new collection (or value) instead of modifying the original collection.

Most of the methods of Collection return lazy collections (such as filter/map/etc.). However, some return non-lazy collections (reverse) or simple values (count). For these operations, all of the items in the collection must be iterated over (and realized). There are also operations (drop) that iterate over some items of the collection but do not affect/return them in the result. This behaviour, as well as laziness, is noted for each of the operations.

If you want more example usage beyond what is provided here, check the specs and/or scenarios. There are also performance tests you can run on your machine and see the computation time impact of this library (the output of these is included below).

Feel free to report any issues you find. I will do my best to fix them as soon as possible, but community pull requests to fix them are more than welcome.

Installation

Require this package using Composer.

composer require dusank/knapsack

Usage

Instantiate via static or dynamic constructor

use DusanKasan\Knapsack\Collection;

$collection1 = new Collection([1, 2, 3]);
$collection2 = Collection::from([1, 2, 3]); //preferred since you can call methods on its result directly.

Work with arrays, Traversable objects or callables that produce Traversables

$collection1 = Collection::from([1, 2, 3]);
$collection2 = Collection::from(new ArrayIterator([1, 2, 3]));

//Used because a Generator cannot be rewound
$collection3 = Collection::from(function() { //must have 0 arguments
    foreach ([1, 2, 3] as $value) {
        yield $value;
    }
});

Basic map/reduce

$result = Collection::from([1, 2])
    ->map(function($v) {return $v*2;})
    ->reduce(function($tmp, $v) {return $tmp+$v;}, 0);
    
echo $result; //6

The same map/reduce using Knapsack's collection functions

$result = reduce(
    map(
        [1, 2], 
        function($v) {return $v*2;}
    ),
    function($tmp, $v) {return $tmp+$v;},
    0
);

echo $result; //6

Get first 5 items of Fibonacci's sequence

$result = Collection::iterate([1,1], function($v) {
        return [$v[1], $v[0] + $v[1]]; //[1, 2], [2, 3] ...
    })
    ->map('\DusanKasan\Knapsack\first') //one of the collection functions
    ->take(5);
    
foreach ($result as $item) {
    echo $item . PHP_EOL;
}

//1
//1
//2
//3
//5

If an array or Traversable would be returned from a function that returns an item from the collection, it can be converted to a Collection using an optional flag. By default, the item is returned as is.

$result = Collection::from([[[1]]])
    ->first(true)
    ->first();
    
var_dump($result); //[1]

Collections are immutable

function multiplyBy2($v)
{
    return $v * 2;
}

function multiplyBy3($v)
{
    return $v * 3;
}

function add($a, $b)
{
    return $a + $b;
}

$collection = Collection::from([1, 2]);

$result = $collection
    ->map('multiplyBy2')
    ->reduce('add', 0);
    
echo $result; //6

//On the same collection
$differentResult = $collection
    ->map('multiplyBy3')
    ->reduce('add', 0);
    
echo $differentResult; //9

Keys are not unique by design

Making keys unique would harm performance. This is only a problem if you need to call toArray(); in that case, call values() first.

$result = Collection::from([1, 2])->concat([3,4]);
    
//arrays have unique keys
$result->toArray(); //[3,4]
$result->values()->toArray(); //[1, 2, 3, 4]

//When iterating, you can have multiple keys.
foreach ($result as $key => $item) {
    echo $key . ':' . $item . PHP_EOL;
}

//0:1
//1:2
//0:3
//1:4

Collection trait is provided

If you wish to use all the Collection methods in your existing classes directly, without proxying their calls, you can just use the provided CollectionTrait. This will work on any Traversable by default. In any other class you will have to override the getItems() method provided by the trait. Keep in mind that after calling filter or any other method that returns a collection, the returned type will actually be Collection, not the original Traversable.

class AwesomeIterator extends ArrayIterator {
    use CollectionTrait;
}

$iterator = new AwesomeIterator([1, 2, 3]);
$iterator->size(); //3

View on Github

7 - Msgpack.php:

A pure PHP implementation of the MessagePack serialization format.

Installation

The recommended way to install the library is through Composer:

composer require rybakit/msgpack

Usage

Packing

To pack values you can either use an instance of a Packer:

$packer = new Packer();
$packed = $packer->pack($value);

or call a static method on the MessagePack class:

$packed = MessagePack::pack($value);

In the examples above, the method pack automatically packs a value depending on its type. However, not all PHP types can be uniquely translated to MessagePack types. For example, the MessagePack format defines map and array types, which are represented by a single array type in PHP. By default, the packer will pack a PHP array as a MessagePack array if it has sequential numeric keys starting from 0, and as a MessagePack map otherwise:

$mpArr1 = $packer->pack([1, 2]);               // MP array [1, 2]
$mpArr2 = $packer->pack([0 => 1, 1 => 2]);     // MP array [1, 2]
$mpMap1 = $packer->pack([0 => 1, 2 => 3]);     // MP map {0: 1, 2: 3}
$mpMap2 = $packer->pack([1 => 2, 2 => 3]);     // MP map {1: 2, 2: 3}
$mpMap3 = $packer->pack(['a' => 1, 'b' => 2]); // MP map {a: 1, b: 2}

However, sometimes you need to pack a sequential array as a MessagePack map. To do this, use the packMap method:

$mpMap = $packer->packMap([1, 2]); // {0: 1, 1: 2}

Here is a list of type-specific packing methods:

$packer->packNil();           // MP nil
$packer->packBool(true);      // MP bool
$packer->packInt(42);         // MP int
$packer->packFloat(M_PI);     // MP float (32 or 64)
$packer->packFloat32(M_PI);   // MP float 32
$packer->packFloat64(M_PI);   // MP float 64
$packer->packStr('foo');      // MP str
$packer->packBin("\x80");     // MP bin
$packer->packArray([1, 2]);   // MP array
$packer->packMap(['a' => 1]); // MP map
$packer->packExt(1, "\xaa");  // MP ext

Check the "Custom types" section below on how to pack custom types.

Packing options

The Packer object supports a number of bitmask-based options for fine-tuning the packing process (defaults are marked with *):

  • FORCE_STR - Forces PHP strings to be packed as MessagePack UTF-8 strings
  • FORCE_BIN - Forces PHP strings to be packed as MessagePack binary data
  • DETECT_STR_BIN* - Detects the MessagePack str/bin type automatically

  • FORCE_ARR - Forces PHP arrays to be packed as MessagePack arrays
  • FORCE_MAP - Forces PHP arrays to be packed as MessagePack maps
  • DETECT_ARR_MAP* - Detects the MessagePack array/map type automatically

  • FORCE_FLOAT32 - Forces PHP floats to be packed as 32-bit MessagePack floats
  • FORCE_FLOAT64* - Forces PHP floats to be packed as 64-bit MessagePack floats

The type detection mode (DETECT_STR_BIN/DETECT_ARR_MAP) adds some overhead which can be noticed when you pack large (16- and 32-bit) arrays or strings. However, if you know the value type in advance (for example, you only work with UTF-8 strings or/and associative arrays), you can eliminate this overhead by forcing the packer to use the appropriate type, which will save it from running the auto-detection routine. Another option is to explicitly specify the value type. The library provides 2 auxiliary classes for this, Map and Bin. Check the "Custom types" section below for details.
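
As a quick sketch of the explicit-type route (assuming the Map and Bin wrappers are handled by the packer out of the box, as in recent versions of the library):

use MessagePack\Packer;
use MessagePack\Type\Bin;
use MessagePack\Type\Map;

$packer = new Packer();

// Explicitly typed values skip the str/bin and array/map auto-detection
$packedMap = $packer->pack(new Map([1, 2])); // MP map {0: 1, 1: 2}
$packedBin = $packer->pack(new Bin("\x80")); // MP bin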

Examples:

// detect str/bin type and pack PHP 64-bit floats (doubles) to MP 32-bit floats
$packer = new Packer(PackOptions::DETECT_STR_BIN | PackOptions::FORCE_FLOAT32);

// these will throw MessagePack\Exception\InvalidOptionException
$packer = new Packer(PackOptions::FORCE_STR | PackOptions::FORCE_BIN);
$packer = new Packer(PackOptions::FORCE_FLOAT32 | PackOptions::FORCE_FLOAT64);

View on Github

8 - PINQ:

A PHP library based on .NET's LINQ (Language Integrated Query).

What is PINQ?

Based on .NET's LINQ (Language Integrated Query), PINQ unifies querying across arrays/iterators and external data sources in a single readable and concise fluent API.

An example

$youngPeopleDetails = $people
        ->where(function ($row) { return $row['age'] <= 50; })
        ->orderByAscending(function ($row) { return $row['firstName']; })
        ->thenByAscending(function ($row) { return $row['lastName']; })
        ->take(50)
        ->indexBy(function ($row) { return $row['phoneNumber']; })
        ->select(function ($row) { 
            return [
                'fullName'    => $row['firstName'] . ' ' . $row['lastName'],
                'address'     => $row['address'],
                'dateOfBirth' => $row['dateOfBirth'],
            ]; 
        });

Installation

PINQ is compatible with >= PHP 7.3

Install the package via composer:

composer require timetoogo/pinq

View on Github

9 - Serializer:

A library for serialising and de-serialising data.

Introduction

This library allows you to (de-)serialize data of any complexity. Currently, it supports XML and JSON.

It also provides you with a rich tool-set to adapt the output to your specific needs.

Built-in features include:

  • (De-)serialize data of any complexity; circular references and complex exclusion strategies are handled gracefully.
  • Supports many built-in PHP types (such as dates, intervals)
  • Integrates with Doctrine ORM, et al.
  • Supports versioning, e.g. for APIs
  • Configurable via XML, YAML, or Annotations
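
As a minimal sketch of a round trip (based on the library's documented builder API; MyObject is a placeholder class):

use JMS\Serializer\SerializerBuilder;

// Placeholder value object for illustration
class MyObject
{
    public $name = 'foo';
}

$serializer = SerializerBuilder::create()->build();

$json = $serializer->serialize(new MyObject(), 'json'); // {"name":"foo"}
$object = $serializer->deserialize($json, MyObject::class, 'json');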

Notes

You are browsing the code for the 3.x version; if you are interested in the 1.x or 2.x version, check the 1.x and 2.x branches.

The version 3.x is the supported version (master branch). The 1.x and 2.x versions are not supported anymore.

For the 1.x and 2.x branches there will be no additional feature releases.
Security issues will be fixed until 1 January 2020, and only critical bugs will receive fixes until 1 September 2019.

Instructions on how to upgrade to 3.x are available in the UPGRADING document.

Professional Support

For eventual paid support please write an email to goetas@gmail.com.

Documentation

Learn more about the serializer in its documentation.

View on Github

10 - YaLinqo:

Yet Another LINQ to Objects for PHP.

Features

  • The most complete port of .NET LINQ to PHP, with many additional methods.
  • Lazy evaluation, error messages and other behavior of original LINQ.
  • Detailed PHPDoc and online reference based on PHPDoc for all methods. Articles are adapted from original LINQ documentation from MSDN.
  • 100% unit test coverage.
  • Best performance among full-featured LINQ ports (YaLinqo, Ginq, Pinq), at least 2x faster than the closest competitor, see performance tests.
  • Callback functions can be specified as closures (like function ($v) { return $v; }), PHP "function pointers" (either strings like 'strnatcmp' or arrays like array($object, 'methodName')), string "lambdas" using various syntaxes ('"$k = $v"', '$v ==> $v+1', '($v, $k) ==> $v + $k', '($v, $k) ==> { return $v + $k; }').
  • Keys are as important as values. Most callback functions receive both values and the keys; transformations can be applied to both values and the keys; keys are never lost during transformations, if possible.
  • SPL interfaces Iterator, IteratorAggregate etc. are used throughout the code and can be used interchangeably with Enumerable.
  • Redundant collection classes are avoided, native PHP arrays are used everywhere.
  • Composer support (package on Packagist).
  • No external dependencies.

Example

Process sample data:

// Data
$products = array(
    array('name' => 'Keyboard',    'catId' => 'hw', 'quantity' =>  10, 'id' => 1),
    array('name' => 'Mouse',       'catId' => 'hw', 'quantity' =>  20, 'id' => 2),
    array('name' => 'Monitor',     'catId' => 'hw', 'quantity' =>   0, 'id' => 3),
    array('name' => 'Joystick',    'catId' => 'hw', 'quantity' =>  15, 'id' => 4),
    array('name' => 'CPU',         'catId' => 'hw', 'quantity' =>  15, 'id' => 5),
    array('name' => 'Motherboard', 'catId' => 'hw', 'quantity' =>  11, 'id' => 6),
    array('name' => 'Windows',     'catId' => 'os', 'quantity' => 666, 'id' => 7),
    array('name' => 'Linux',       'catId' => 'os', 'quantity' => 666, 'id' => 8),
    array('name' => 'Mac',         'catId' => 'os', 'quantity' => 666, 'id' => 9),
);
$categories = array(
    array('name' => 'Hardware',          'id' => 'hw'),
    array('name' => 'Operating systems', 'id' => 'os'),
);

// Put products with non-zero quantity into matching categories;
// sort categories by name;
// sort products within categories by quantity descending, then by name.
$result = from($categories)
    ->orderBy('$cat ==> $cat["name"]')
    ->groupJoin(
        from($products)
            ->where('$prod ==> $prod["quantity"] > 0')
            ->orderByDescending('$prod ==> $prod["quantity"]')
            ->thenBy('$prod ==> $prod["name"]'),
        '$cat ==> $cat["id"]', '$prod ==> $prod["catId"]',
        '($cat, $prods) ==> array(
            "name" => $cat["name"],
            "products" => $prods
        )'
    );

// Alternative shorter syntax using default variable names
$result2 = from($categories)
    ->orderBy('$v["name"]')
    ->groupJoin(
        from($products)
            ->where('$v["quantity"] > 0')
            ->orderByDescending('$v["quantity"]')
            ->thenBy('$v["name"]'),
        '$v["id"]', '$v["catId"]',
        'array(
            "name" => $v["name"],
            "products" => $e
        )'
    );

// Closure syntax, maximum support in IDEs, but verbose and hard to read
$result3 = from($categories)
    ->orderBy(function ($cat) { return $cat['name']; })
    ->groupJoin(
        from($products)
            ->where(function ($prod) { return $prod["quantity"] > 0; })
            ->orderByDescending(function ($prod) { return $prod["quantity"]; })
            ->thenBy(function ($prod) { return $prod["name"]; }),
        function ($cat) { return $cat["id"]; },
        function ($prod) { return $prod["catId"]; },
        function ($cat, $prods) {
            return array(
                "name" => $cat["name"],
                "products" => $prods
            );
        }
    );

print_r($result->toArrayDeep());

Output (compacted):

Array (
    [hw] => Array (
        [name] => Hardware
        [products] => Array (
            [0] => Array ( [name] => Mouse       [catId] => hw [quantity] =>  20 [id] => 2 )
            [1] => Array ( [name] => CPU         [catId] => hw [quantity] =>  15 [id] => 5 )
            [2] => Array ( [name] => Joystick    [catId] => hw [quantity] =>  15 [id] => 4 )
            [3] => Array ( [name] => Motherboard [catId] => hw [quantity] =>  11 [id] => 6 )
            [4] => Array ( [name] => Keyboard    [catId] => hw [quantity] =>  10 [id] => 1 )
        )
    )
    [os] => Array (
        [name] => Operating systems
        [products] => Array (
            [0] => Array ( [name] => Linux       [catId] => os [quantity] => 666 [id] => 8 )
            [1] => Array ( [name] => Mac         [catId] => os [quantity] => 666 [id] => 9 )
            [2] => Array ( [name] => Windows     [catId] => os [quantity] => 666 [id] => 7 )
        )
    )
)

Requirements

  • Version 1 (stable): PHP 5.3 or higher.
  • Version 2 (stable): PHP 5.5 or higher.
  • Version 3 (pre-alpha): PHP 7.0 or higher.

Usage

Add to composer.json:

{
    "require": {
        "athari/yalinqo": "^2.0"
    }
}

Add to your PHP script:

require_once 'vendor/autoload.php';
use \YaLinqo\Enumerable;

// 'from' can be called as a static method or via a global function shortcut
Enumerable::from(array(1, 2, 3));
from(array(1, 2, 3));

View on Github

Thank you for following this article.

Related videos:

PHP Standard Library Part 1: Datastructures

#php #datastructure  #storage 

Easy to Use But Powerful Storage for Flutter and Web

Easy to use but powerful storage for Flutter and web.

Note on deprecation

TL;DR: Everything FluffyBox implemented became part of Hive.

FluffyBox started as a fork of Hive, extending its features for better web performance. After FluffyBox accumulated many improvements, we added all our patches to the original Hive code, making FluffyBox obsolete.

In general, the API of FluffyBox remains 100% the same in Hive. Note that Boxes in BoxCollections were renamed to CollectionBox in the updated Hive.

Hive includes FluffyBox since release 2.2.0.


Motivation

Hive lacks performance on web, and the IndexedDB API does not fit the Hive API that well. FluffyBox acts as a wrapper over IndexedDB and Hive; it has an API similar to Hive's but supports lazily loading keys, uses ObjectStores instead of databases for boxes (creating the ObjectStores at start time), and supports transactions.

Features

  • Simple API with Boxes highly inspired by Hive
  • Just uses native indexedDB on web and Hive on native
  • Nothing is loaded to memory until it is needed
  • You have BoxCollections (Databases in Hive and IndexedDB) and Boxes (tables in Hive and ObjectStores in IndexedDB)
  • Transactions to speed up dozens of write actions

Getting started

Add FluffyBox to your pubspec.yaml:

  fluffybox: <latest-version>

Usage

  // Create a box collection
  final collection = await BoxCollection.open(
    'MyFirstFluffyBox', // Name of your database
    {'cats', 'dogs'}, // Names of your boxes
    path: './', // Path where to store your boxes (Only used in Flutter / Dart IO)
    key: HiveCipher(), // Key to encrypt your boxes (Only used in Flutter / Dart IO)
  );

  // Open your boxes. Optional: Give it a type.
  final catsBox = await collection.openBox<Map>('cats');

  // Put something in
  await catsBox.put('fluffy', {'name': 'Fluffy', 'age': 4});
  await catsBox.put('loki', {'name': 'Loki', 'age': 2});

  // Get values of type (immutable) Map?
  final loki = await catsBox.get('loki');
  print('Loki is ${loki?['age']} years old.');

  // Returns a List of values
  final cats = await catsBox.getAll(['loki', 'fluffy']);
  print(cats);

  // Returns a List<String> of all keys
  final allCatKeys = await catsBox.getAllKeys();
  print(allCatKeys);

  // Returns a Map<String, Map> with all keys and entries
  final catMap = await catsBox.getAllValues();
  print(catMap);

  // delete one or more entries
  await catsBox.delete('loki');
  await catsBox.deleteAll(['loki', 'fluffy']);

  // ...or clear the whole box at once
  await catsBox.clear();

  // Speed up write actions with transactions
  await collection.transaction(
    () async {
      await catsBox.put('fluffy', {'name': 'Fluffy', 'age': 4});
      await catsBox.put('loki', {'name': 'Loki', 'age': 2});
      // ...
    },
    boxNames: ['cats'], // By default all boxes become blocked.
    readOnly: false,
  );

Credits

Special thanks to Hive and its contributors for making this package possible.

We bundle Hive ourselves in order to already include https://github.com/hivedb/hive/pull/852 and thus be able to prevent database corruption.

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add fluffybox

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  fluffybox: ^0.5.0

Alternatively, your editor might support dart pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:fluffybox/fluffybox.dart'; 

example/fluffybox_example.dart

import 'package:fluffybox/fluffybox.dart';

void main() async {
  // Create a box collection
  final collection = await BoxCollection.open(
    'MyFirstFluffyBox', // Name of your database
    {'cats', 'dogs'}, // Names of your boxes
  );

  // Open your boxes. Optional: Give it a type.
  final catsBox = await collection.openBox<Map>('cats');

  // Put something in
  await catsBox.put('fluffy', {'name': 'Fluffy', 'age': 4});
  await catsBox.put('loki', {'name': 'Loki', 'age': 2});

  // Get values of type (immutable) Map?
  final loki = await catsBox.get('loki');
  print('Loki is ${loki?['age']} years old.');

  // Returns a List of values
  final cats = await catsBox.getAll(['loki', 'fluffy']);
  print(cats);

  // Returns a List<String> of all keys
  final allCatKeys = await catsBox.getAllKeys();
  print(allCatKeys);

  // Returns a Map<String, Map> with all keys and entries
  final catMap = await catsBox.getAllValues();
  print(catMap);

  // delete one or more entries
  await catsBox.delete('loki');
  await catsBox.deleteAll(['loki', 'fluffy']);

  // ...or clear the whole box at once
  await catsBox.clear();

  // Speed up write actions with transactions
  await collection.transaction(
    () async {
      await catsBox.put('fluffy', {'name': 'Fluffy', 'age': 4});
      await catsBox.put('loki', {'name': 'Loki', 'age': 2});
      // ...
    },
    boxNames: ['cats'], // By default all boxes become blocked.
    readOnly: false,
  );
} 

Download Details:

Author: company

Source Code: https://gitlab.com/famedly/company/frontend/libraries/fluffybox

#flutter #storage #android #ios 

Julia Bindings to KyotoCabinet Library (key Value Storage)

Julia binding for KyotoCabinet

This package provides bindings for KyotoCabinet key-value storage.

Installation

Pkg.add("KyotoCabinet")

Generic interface

using KyotoCabinet

To open a database, use the open method:

db = open("db.kch", "r")
# db::Dict{Array{Uint8,1},Array{Uint8,1}}
close(db)

There is also a bracketed version:

open(Db{K,V}(), "db.kch", "w+") do db
  # db::Dict{K,V}
  # do stuff...
end

The Db object implements basic collection and Dict methods.

open(Db{String,String}(), "db.kch", "w+") do db
  # Basic getindex, setindex! methods
  db["a"] = "1"
  println(db["a"])

  # Dict methods also implemented:
  # haskey, getkey, get, get!, delete!, pop!
  if (!haskey(db, "x"))
    x = get(db, "x", "default")
    y = get!(db, "y", "set_value_if_non_exists")
  end
end

Iteration over records, keys, and values is supported:

for (k, v) = db
  println("k=$k v=$v")
end
for k = keys(db)
  println("k=$k")
end

Serialization/Deserialization

KyotoCabinet treats keys and values as byte arrays. To make it work with arbitrary types, you need to define pack/unpack methods.

immutable K
  x::Int
end

immutable V
  a::Int
  b::String
end

function KyotoCabinet.pack(k::K)
  io = IOBuffer()
  write(io, int32(k.x))
  takebuf_array(io)
end
function KyotoCabinet.unpack(T::Type{K}, buf::Array{Uint8,1})
  io = IOBuffer(buf)
  x = read(io, Int32)
  K(int(x))
end

function KyotoCabinet.pack(v::V)
  io = IOBuffer()
  write(io, int32(v.a))
  write(io, int32(length(v.b)))
  write(io, v.b)
  takebuf_array(io)
end
function KyotoCabinet.unpack(T::Type{V}, buf::Array{Uint8,1})
  io = IOBuffer(buf)
  a = read(io, Int32)
  l = read(io, Int32)
  b = bytestring(read(io, Uint8, l))
  V(int(a), b)
end

After that these types can be used as keys/values:

open(Db{K, V}(), "db.kch", "w+") do db
  db[K(1)] = V(1, "a")
  db[K(1999999999)] = V(2, repeat("b",100))

  k = K(1)
  println(db[k])
end

KyotoCabinet specific

There are also KyotoCabinet specific methods.

Database info

# Get the path of the database file
p = path(db)

Compare-and-swap

cas(db::Db, key, old, new)

Compare-and-swap method. Update the value only if it's in the expected state. Returns true if the value has been updated.

cas(db, "k", "old", "new") # update only if db["k"] == "old"
cas(db, "k", "old", ())    # remove record, only if db["k"] == "old"
cas(db, "k", (), "new")    # add record, only if "k" not in db

Bulk operations

# Updates records in one operation, atomically if needed.
bulkset!(db, ["a" => "1", "b" => "2"], true)

# Removes records in one operation, atomically if needed.
bulkdelete!(db, ["a", "b"], true)

Download Details:

Author: Tuzzeg
Source Code: https://github.com/tuzzeg/kyotocabinet.jl 
License: View license

#julia #key #value #storage 

Dexie.js: A Minimalistic Wrapper for indexedDB

Dexie.js

Dexie.js is a wrapper library for indexedDB - the standard database in the browser. https://dexie.org

Why?

Dexie solves three main issues with the native IndexedDB API:

  1. Ambiguous error handling
  2. Poor queries
  3. Code complexity

Dexie provides a neat database API with a well thought-through API design, robust error handling, extendability, change tracking awareness and extended KeyRange support (case insensitive search, set matches and OR operations).

Hello World

<!doctype html>
<html>
 <head>
  <script src="https://unpkg.com/dexie@latest/dist/dexie.js"></script>
  <script>
   //
   // Declare Database
   //
   var db = new Dexie("FriendDatabase");
   db.version(1).stores({
     friends: "++id,name,age"
   });

   //
   // Manipulate and Query Database
   //
   db.friends.add({name: "Josephine", age: 21}).then(function() {
       return db.friends.where("age").below(25).toArray();
   }).then(function (youngFriends) {
       alert ("My young friends: " + JSON.stringify(youngFriends));
   }).catch(function (e) {
       alert ("Error: " + (e.stack || e));
   });
  </script>
 </head>
</html>

Yes, it's that simple.

An equivalent modern version (works in all modern browsers):

<!doctype html>
<html>
 <head>
  <script type="module">
   import Dexie from "https://unpkg.com/dexie@latest/dist/modern/dexie.mjs";
   //
   // Declare Database
   //
   const db = new Dexie("FriendDatabase");
   db.version(1).stores({
     friends: "++id,name,age"
   });

   //
   // Manipulate and Query Database
   //
   try {
     await db.friends.add({name: "Josephine", age: 21});
     const youngFriends = await db.friends.where("age").below(25).toArray();
     alert (`My young friends: ${JSON.stringify(youngFriends)}`);
   } catch (e) {
     alert (`Error: ${e}`);
   }
  </script>
 </head>
</html>

Tutorial

API Reference

Samples

Performance

Dexie has kick-ass performance. Its bulk methods take advantage of a lesser-known feature in IndexedDB that makes it possible to store records without listening to every onsuccess event. This maximizes performance.
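
For example, a short sketch reusing the FriendDatabase declared above (bulkAdd is among the supported operations listed below):

// Add many friends in a single bulk operation
db.friends.bulkAdd([
  {name: "Foo", age: 31},
  {name: "Bar", age: 29}
]).then(function () {
  console.log("Done adding friends");
}).catch(function (e) {
  console.error("Error: " + e);
});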

Supported operations

above(key): Collection;
aboveOrEqual(key): Collection;
add(item, key?): Promise;
and(filter: (x) => boolean): Collection;
anyOf(keys[]): Collection;
anyOfIgnoreCase(keys: string[]): Collection;
below(key): Collection;
belowOrEqual(key): Collection;
between(lower, upper, includeLower?, includeUpper?): Collection;
bulkAdd(items: Array): Promise;
bulkDelete(keys: Array): Promise;
bulkPut(items: Array): Promise;
clear(): Promise;
count(): Promise;
delete(key): Promise;
distinct(): Collection;
each(callback: (obj) => any): Promise;
eachKey(callback: (key) => any): Promise;
eachPrimaryKey(callback: (key) => any): Promise;
eachUniqueKey(callback: (key) => any): Promise;
equals(key): Collection;
equalsIgnoreCase(key): Collection;
filter(fn: (obj) => boolean): Collection;
first(): Promise;
get(key): Promise;
inAnyRange(ranges): Collection;
keys(): Promise;
last(): Promise;
limit(n: number): Collection;
modify(changeCallback: (obj: T, ctx:{value: T}) => void): Promise;
modify(changes: { [keyPath: string]: any } ): Promise;
noneOf(keys: Array): Collection;
notEqual(key): Collection;
offset(n: number): Collection;
or(indexOrPrimaryKey: string): WhereClause;
orderBy(index: string): Collection;
primaryKeys(): Promise;
put(item: T, key?: Key): Promise;
reverse(): Collection;
sortBy(keyPath: string): Promise;
startsWith(key: string): Collection;
startsWithAnyOf(prefixes: string[]): Collection;
startsWithAnyOfIgnoreCase(prefixes: string[]): Collection;
startsWithIgnoreCase(key: string): Collection;
toArray(): Promise;
toCollection(): Collection;
uniqueKeys(): Promise;
until(filter: (value) => boolean, includeStopEntry?: boolean): Collection;
update(key: Key, changes: { [keyPath: string]: any }): Promise;

This is a mix of methods from WhereClause, Table and Collection. Dive into the API reference to see the details.
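
A quick sketch of how these chain together (reusing the friends table from the examples above):

// where() returns a WhereClause, startsWithIgnoreCase() narrows it to a
// Collection, limit() refines it, and toArray() resolves to the records.
const youngJos = await db.friends
  .where("name")
  .startsWithIgnoreCase("jo")
  .limit(10)
  .toArray();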

Hello World (Typescript)

import Dexie, { Table } from 'dexie';

interface Friend {
    id?: number;
    name?: string;
    age?: number;
}

//
// Declare Database
//
class FriendDatabase extends Dexie {
    public friends!: Table<Friend, number>; // id is number in this case

    public constructor() {
        super("FriendDatabase");
        this.version(1).stores({
            friends: "++id,name,age"
        });
    }
}

const db = new FriendDatabase();

db.transaction('rw', db.friends, async() => {

    // Make sure we have something in DB:
    if ((await db.friends.where({name: 'Josephine'}).count()) === 0) {
        const id = await db.friends.add({name: "Josephine", age: 21});
        alert (`Added friend with id ${id}`);
    }

    // Query:
    const youngFriends = await db.friends.where("age").below(25).toArray();

    // Show result:
    alert ("My young friends: " + JSON.stringify(youngFriends));

}).catch(e => {
    alert(e.stack || e);
});

Samples

https://dexie.org/docs/Samples

https://github.com/dexie/Dexie.js/tree/master/samples

Knowledge Base

https://dexie.org/docs/Questions-and-Answers

Website

https://dexie.org

Install over npm

npm install dexie

Download

For those who don't like package managers, here are the download links:

Legacy:

https://unpkg.com/dexie@latest/dist/dexie.min.js

https://unpkg.com/dexie@latest/dist/dexie.min.js.map

Modern:

https://unpkg.com/dexie@latest/dist/modern/dexie.min.mjs

https://unpkg.com/dexie@latest/dist/modern/dexie.min.mjs.map

Typings:

https://unpkg.com/dexie@latest/dist/dexie.d.ts

Contributing

Here is a little cheat-sheet for how to symlink your app's node_modules/dexie to a place where you can edit the source, version-control your changes, and create pull requests back to Dexie. This assumes you've already run npm install dexie --save for the app you are developing.

Fork Dexie.js from the web gui on github

Clone your fork locally by launching a shell/command window and cd to a neutral place (like ~repos/, c:\repos or whatever)

Run the following commands:

git clone https://github.com/YOUR-USERNAME/Dexie.js.git dexie
cd dexie
npm install
npm run build
npm link

cd to your app directory and write:

npm link dexie

Your app's node_modules/dexie/ is now sym-linked to the Dexie.js clone on your hard drive, so any change you make there will propagate to your app. Build dexie.js using npm run build or npm run watch. The latter will react to any source file change and rebuild the dist files.

That's it. Now you're up and running to test and commit changes to files under dexie/src/* or dexie/test/*, and the changes will instantly affect the app you are developing.

Pull requests are more than welcome. Some advice:

  • Run npm test before making a pull request.
  • If you find an issue, a unit test that reproduces it is lovely ;). If you don't know where to put it, put it in test/tests-misc.js. We use qunit. Just look at existing tests in tests-misc.js to see how they should be written. Tests are transpiled in the build script so you can use ES6 if you like.

Build

npm install
npm run build

Test

npm test

Watch

npm run watch

Download Details:

Author: Dexie
Source Code: https://github.com/dexie/Dexie.js 
License: Apache-2.0 license

#javascript #database #storage 

Dart Library for Abstraction Over A Single Item Storage

single_item_shared_prefs

SharedPreferences/UserDefaults persistent Storage implementation.

This package is an addon to the single_item_storage package and offers a Storage implementation using the shared_preferences package and dart JSON converters, json.encode and json.decode, to store items.

Getting started

Create a new instance by providing fromMap and toMap item converters, an itemKey as the key for this item in shared preferences, and an optional sharedPreferences instance.

Storage<User> storage = CachedStorage<User>(SharedPrefsStorage(
  itemKey: 'model.user.key',
  fromMap: (map) => User.fromMap(map),
  toMap: (item) => item.toMap(),
));

@JsonSerializable()
class User {
  final String id;
  final String email;

  factory User.fromMap(Map<String, dynamic> json) => _$UserFromJson(json);
  Map<String, dynamic> toMap() => _$UserToJson(this);

  User(this.id, this.email);
}

To store primitive values that don't need a converter use the .primitive named constructor.

/* Supported primitive types: 
 - bool
 - double
 - int
 - String
 - List<String>
 */
SharedPrefsStorage<int>.primitive(itemKey: 'cow_counter')

If the sharedPreferences is omitted, then SharedPreferences.getInstance is used.

Notice that the SharedPrefsStorage is wrapped in CachedStorage to add in-memory caching for better performance.

When defining the to/from map converters, keep in mind that the map values can only be: number, boolean, string, null, list, or a map with string keys, as defined in json.encode and json.decode from the dart:convert package.

This example uses json_serializable as Map converter for convenience.

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add single_item_shared_prefs

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  single_item_shared_prefs: ^0.0.4

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:single_item_shared_prefs/single_item_shared_prefs.dart'; 

Download Details:

Author: dimitar-zabaznoski

Source Code: https://github.com/dimitar-zabaznoski/single-item-storage-dart

#dart #storage 

Code_id_storage: A Local Storage Package for the Company code.id

code_id_flutter

A combined Flutter library of reusable code for internal use

Installing

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add code_id_storage

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  code_id_storage: ^0.0.8

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:code_id_storage/code_id_storage.dart';

About the Package

This package is used internally by the code.id company.

For usage instructions, you can contact the author.

Original article source at: https://pub.dev/packages/code_id_storage 

#flutter #dart #id #storage 

Monty Boehm

SnpArrays.jl: Compressed Storage for SNP Data

SnpArrays.jl

Routines for reading and manipulating compressed storage of biallelic SNP (Single-Nucleotide Polymorphism) data.

Data from genome-wide association studies (GWAS) are often saved as a PLINK binary biallelic genotype table or .bed file. To be useful, such files should be accompanied by a .fam file, containing metadata on the rows of the table, and a .bim file, containing metadata on the columns. The .fam and .bim files are in tab-separated format.

Linear algebra operations on the PLINK formatted data now support multi-threading and GPU (CUDA) computing.

Installation

This package requires Julia v1.5 or later, which can be obtained from https://julialang.org/downloads/ or by building Julia from the sources in the JuliaLang/julia repository.

This package is registered in the default Julia package registry, and can be installed through standard package installation procedure.

using Pkg
pkg"add SnpArrays"

(If you instead enter the Pkg REPL mode by pressing ], use the backspace key to return to the Julia REPL.)

Citation

If you use OpenMendel analysis packages in your research, please cite the following reference in the resulting publications:

Zhou H, Sinsheimer JS, Bates DM, Chu BB, German CA, Ji SS, Keys KL, Kim J, Ko S, Mosher GD, Papp JC, Sobel EM, Zhai J, Zhou JJ, Lange K. OPENMENDEL: a cooperative programming project for statistical genetics. Hum Genet. 2020 Jan;139(1):61-71. doi: 10.1007/s00439-019-02001-z. Epub 2019 Mar 26. PMID: 30915546; PMCID: PMC6763373.

Acknowledgments

The current implementation incorporates ideas from the BEDFiles.jl package by Doug Bates (@dmbates).

Chris Elrod (@chriselrod) helped us accelerate CPU linear algebra through his great support of the LoopVectorization.jl package.

This project has been supported by the National Institutes of Health under awards R01GM053275, R01HG006139, R25GM103774, and 1R25HG011845.

Download Details:

Author: OpenMendel
Source Code: https://github.com/OpenMendel/SnpArrays.jl 
License: View license

#julia #storage #data 


A User-space File System for interacting with Google Cloud Storage

gcsfuse is a user-space file system for interacting with Google Cloud Storage.

Current status

Please treat gcsfuse as beta-quality software. Use it for whatever you like, but be aware that bugs may lurk, and that we reserve the right to make small backwards-incompatible changes.

The careful user should be sure to read semantics.md for information on how gcsfuse maps file system operations to GCS operations, and especially on surprising behaviors. The list of open issues may also be of interest.

Installing

See installing.md for full installation instructions for Linux and macOS.

Mounting

Prerequisites

GCS credentials are automatically loaded using Google application default credentials, or a JSON key file can be specified explicitly using --key-file. If you haven't already done so, the easiest way to set up your credentials for testing is to run the gcloud tool:

gcloud auth login

See mounting.md for more information on credentials.

Invoking gcsfuse

To mount a bucket using gcsfuse over an existing directory /path/to/mount, invoke it like this:

gcsfuse my-bucket /path/to/mount

Important: You should run gcsfuse as the user who will be using the file system, not as root. Do not use sudo.

The gcsfuse tool will exit successfully after mounting the file system. Unmount in the usual way for a fuse file system on your operating system:

umount /path/to/mount         # macOS
fusermount -u /path/to/mount  # Linux

If you are mounting a bucket that was populated with objects by some other means besides gcsfuse, you may be interested in the --implicit-dirs flag. See the notes in semantics.md for more information.
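For example, to mount such a bucket with implicit directories enabled (bucket name and mount point are placeholders):

gcsfuse --implicit-dirs my-bucket /path/to/mount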

See mounting.md for more detail, including notes on running in the foreground and fstab compatibility.

Performance

Latency and rsync

Writing files to and reading files from GCS has a much higher latency than using a local file system. If you are reading or writing one small file at a time, this may cause you to achieve a low throughput to or from GCS. If you want high throughput, you will need to either use larger files to smooth across latency hiccups or read/write multiple files at a time.

Note in particular that this heavily affects rsync, which reads and writes only one file at a time. You might try using gsutil -m rsync to transfer multiple files to or from your bucket in parallel instead of plain rsync with gcsfuse.
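For example, a parallel transfer that bypasses the mount entirely might look like this (paths are placeholders):

gsutil -m rsync -r /local/dir gs://my-bucket/dir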

Rate limiting

If you would like to rate limit traffic to/from GCS in order to set limits on your GCS spending on behalf of gcsfuse, you can do so:

  • The flag --limit-ops-per-sec controls the rate at which gcsfuse will send requests to GCS.
  • The flag --limit-bytes-per-sec controls the egress bandwidth from gcsfuse to GCS.

All rate limiting is approximate, and is performed over an 8-hour window. By default, there are no limits applied.
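For example (illustrative values; check gcsfuse --help for the exact units and defaults of these flags):

gcsfuse --limit-ops-per-sec=100 --limit-bytes-per-sec=5000000 my-bucket /path/to/mount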

Upload procedure control

Uploads are implemented as a retry loop with exponential backoff for failed requests to the GCS backend. The --max-retry-sleep flag caps the backoff duration: once the backoff exceeds this limit, retrying stops. The default is 1 minute; a value of 0 disables retries.

GCS round trips

By default, gcsfuse uses two forms of caching to save round trips to GCS, at the cost of consistency guarantees. These caching behaviors can be controlled with the flags --stat-cache-capacity, --stat-cache-ttl and --type-cache-ttl. See semantics.md for more information.
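For example, to trade those saved round trips for stronger consistency, you could disable both caches (a sketch using the flags above):

gcsfuse --stat-cache-ttl 0 --type-cache-ttl 0 my-bucket /path/to/mount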

Timeouts

If you are using FUSE for macOS, be aware that by default it will give gcsfuse only 60 seconds to respond to each file system operation. This means that if you write and then flush a large file and your upstream bandwidth is insufficient to write it all to GCS within 60 seconds, your gcsfuse file system may become unresponsive. This behavior can be tuned using the daemon_timeout mount option. See issue #196 for details.

Downloading object contents

Behind the scenes, when a newly-opened file is first modified, gcsfuse downloads the entire backing object's contents from GCS. The contents are stored in a local temporary file whose location is controlled by the flag --temp-dir. Later, when the file is closed or fsync'd, gcsfuse writes the contents of the local file back to GCS as a new object generation.
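For example, to stage temporary files on a larger or faster disk (the directory is a placeholder):

gcsfuse --temp-dir /mnt/ssd/gcsfuse-tmp my-bucket /path/to/mount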

Files that have not been modified are read portion by portion on demand. gcsfuse uses a heuristic to detect when a file is being read sequentially, and will issue fewer, larger read requests to GCS in this case.

The consequence of this is that gcsfuse is relatively efficient when reading or writing entire large files, but will not be particularly fast for small numbers of random writes within larger files, and to a lesser extent the same is true of small random reads. Performance when copying large files into GCS is comparable to gsutil (see issue #22 for testing notes). There is some overhead due to the staging of data in a local temporary file, as discussed above.

Note that new and modified files are also fully staged in the local temporary directory until they are written out to GCS due to being closed or fsync'd. Therefore the user must ensure that there is enough free space available to handle staged content when writing large files.

Other performance issues

If you notice otherwise unreasonable performance, please file an issue.

Support

gcsfuse is open source software, released under the Apache license. It is distributed as-is, without warranties or conditions of any kind.

For support, visit Server Fault. Tag your questions with gcsfuse and google-cloud-platform, and make sure to look at previous questions and answers before asking a new one. For bugs and feature requests, please file an issue.

Versioning

gcsfuse version numbers are assigned according to Semantic Versioning. Note that the current major version is 0, which means that we reserve the right to make backwards-incompatible changes.

Download Details:

Author: GoogleCloudPlatform
Source Code: https://github.com/GoogleCloudPlatform/gcsfuse 
License: Apache-2.0 license

#go #golang #google #cloud #storage 


Lotusdb: A Fast K/v Database Compatible with LSM Tree and B+ Tree

LotusDB is a fast k/v database compatible with LSM tree and B+ tree.

Key features:

  • Combine the advantages of LSM and B+ tree
  • Fast read/write performance
  • Much lower read and space amplification than typical LSM

Design Overview

LotusDB is inspired by the SLM-DB paper from USENIX FAST '19; the WiscKey paper was also a major influence.

Quick Start

1. Embedded usage: see examples, and the minimal sketch below.
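As a minimal sketch of embedded usage (modeled on the repository's basic example; names like DefaultOptions, Open, Put and Get reflect the API at the time of writing and may change between releases):

package main

import (
    "fmt"

    "github.com/flower-corp/lotusdb"
)

func main() {
    // Open a database rooted at a local directory.
    opts := lotusdb.DefaultOptions("/tmp/lotusdb")
    db, err := lotusdb.Open(opts)
    if err != nil {
        panic(err)
    }
    defer db.Close()

    // Write a key/value pair, then read it back.
    if err := db.Put([]byte("name"), []byte("lotusdb")); err != nil {
        panic(err)
    }
    val, err := db.Get([]byte("name"))
    if err != nil {
        panic(err)
    }
    fmt.Println(string(val)) // prints "lotusdb"
}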

Documentation

see wiki.

Contributing

see CONTRIBUTING.md

Download Details:

Author: flower-corp
Source Code: https://github.com/flower-corp/lotusdb 
License: Apache-2.0 license

#go #golang #database #storage 


Rook: Storage Orchestration for Kubernetes

What is Rook?

Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.

Rook turns storage software into self-managing, self-scaling, and self-healing storage services. It does this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling and orchestration platform to perform its duties.

Rook integrates deeply into cloud native environments leveraging extension points and providing a seamless experience for scheduling, lifecycle management, resource management, security, monitoring, and user experience.

For more details about the storage solutions currently supported by Rook, please refer to the project status section below. We plan to continue adding support for other storage systems and environments based on community demand and engagement in future releases. See our roadmap for more details.

Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated level project. If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Rook plays a role, read the CNCF announcement.

Getting Started and Documentation

For installation, deployment, and administration, see our Documentation.
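As a rough illustration of what a Ceph quickstart typically looks like on an existing Kubernetes cluster (a sketch only; the manifest names follow the Rook examples directory and vary between releases, so see the Documentation for the authoritative steps):

git clone --single-branch --branch master https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml   # install the Rook operator
kubectl create -f cluster.yaml                                # declare a Ceph cluster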

Contributing

We welcome contributions. See Contributing to get started.

Report a Bug

For filing bugs, suggesting improvements, or requesting new features, please open an issue.

Reporting Security Vulnerabilities

If you find a vulnerability or a potential vulnerability in Rook please let us know immediately at cncf-rook-security@lists.cncf.io. We'll send a confirmation email to acknowledge your report, and we'll send an additional email when we've identified the issues positively or negatively.

For further details, please see the complete security release process.

Contact

Please use the project's community channels, such as the Rook Slack, to reach members of the community:

Community Meeting

A regular community meeting takes place every other Tuesday at 9:00 AM PT (Pacific Time). Convert to your local timezone.

Any changes to the meeting schedule will be added to the agenda doc and posted to Slack #announcements.

Anyone who wants to discuss the direction of the project, design and implementation reviews, or general questions with the broader community is welcome and encouraged to join.

Project Status

The status of each storage provider supported by Rook can be found in the table below. Each API group is assigned its own individual status to reflect their varying maturity and stability. More details about API versioning and status in Kubernetes can be found on the Kubernetes API versioning page, but the key difference between the statuses are summarized below:

  • Alpha: The API may change in incompatible ways in a later software release without notice, recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
  • Beta: Support for the overall features will not be dropped, though details may change. Support for upgrading or migrating between versions will be provided, either through automation or manual steps.
  • Stable: Features will appear in released software for many subsequent versions and support for upgrading between versions will be provided with software automation in the vast majority of scenarios.
  • Name: Ceph
  • Details: Ceph is a distributed storage system that provides file, block and object storage and is deployed in large scale production clusters.
  • API Group: ceph.rook.io/v1
  • Status: Stable

This repo is for the Ceph storage provider. The Cassandra and NFS storage providers moved to a separate repo to allow for each storage provider to have an independent development and release schedule.

Official Releases

Official releases of Rook can be found on the releases page. Please note that it is strongly recommended that you use official releases of Rook, as unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases. Builds from the master branch can have functionality changed and even removed at any time without compatibility support and without prior notice.

Download Details:

Author: Rook
Source Code: https://github.com/rook/rook 
License: Apache-2.0 license

#go #golang #kubernetes #storage 


Storj: Ongoing Storj V3 Development

Storj V3 Network  

Storj is building a decentralized cloud storage network.


Storj is an S3-compatible platform and suite of decentralized applications that allows you to store data in a secure and decentralized manner. Your files are encrypted, broken into little pieces and stored in a global decentralized network of computers. Luckily, we also support allowing you (and only you) to retrieve those files!

Contributing to Storj

All of our code for Storj v3 is open source. If anything feels off, or if you feel that some functionality is missing, please check out the contributing page. There you will find instructions for sharing your feedback, building the tool locally, and submitting pull requests to the project.

A Note about Versioning

While we are practicing semantic versioning for our client libraries such as uplink, we are not practicing semantic versioning in this repo, as we do not intend for it to be used via Go modules. We may have backwards-incompatible changes between minor and patch releases in this repo.

Start using Storj

Our wiki has documentation and tutorials; its three tutorials are a good place to start.

Support

If you have any questions or suggestions please reach out to us on our community forum or file a ticket at https://support.storj.io/.


Check out our white paper for more info!


Download Details:

Author: Storj
Source Code: https://github.com/storj/storj 
License: AGPL-3.0 license

#go #golang #storage 
