
BlueCap: iOS Bluetooth LE Framework

BlueCap

iOS Bluetooth LE Framework

Features

  • A futures interface replacing protocol implementations.
  • Timeout for Peripheral connection, Service scan, Service + Characteristic discovery and Characteristic read/write.
  • A DSL for specification of GATT profiles.
  • Characteristic profile types encapsulating serialization and deserialization.
  • Example applications implementing CentralManager and PeripheralManager.
  • A full-featured, extendable scanner and Peripheral simulator available in the App Store.
  • Thread safe.
  • Comprehensive test coverage.

Requirements

  • iOS 12.0+
  • Xcode 11.3.1

Installation

CocoaPods

CocoaPods is an Xcode dependency manager. It is installed with the following command,

gem install cocoapods

Requires CocoaPods 1.1+

Add BlueCapKit to your project Podfile,

platform :ios, '12.0'
use_frameworks!

target 'Your Target Name' do
  pod 'BlueCapKit', '~> 0.7'
end

To enable DBUG output, add a post_install hook to your Podfile.
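The hook itself is not reproduced here; a minimal sketch, assuming DBUG is passed as a Swift compiler flag for the BlueCapKit target (adjust to your setup), might look like,

post_install do |installer|
  installer.pods_project.targets.each do |target|
    if target.name == 'BlueCapKit'
      target.build_configurations.each do |config|
        # Pass the DBUG compilation condition to the Swift compiler
        config.build_settings['OTHER_SWIFT_FLAGS'] = '-D DBUG'
      end
    end
  end
end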

Carthage

Carthage is a decentralized dependency manager for Xcode projects. It can be installed using Homebrew,

brew update
brew install carthage

To add BlueCapKit to your Cartfile

github "troystribling/BlueCap" ~> 0.7

To download and build BlueCapKit.framework run the command,

carthage update

then add BlueCapKit.framework to your project.

If desired, use the --no-build option,

carthage update --no-build

This will only download BlueCapKit. Then follow the steps in Manual to add it to a project.

Manual

  1. Place the BlueCap project somewhere in your project directory. You can either copy it or add it as a git submodule.
  2. Open the BlueCap project folder and drag BlueCapKit.xcodeproj into the project navigator of your application's Xcode project.
  3. Under your project's Info tab, set the iOS Deployment Target to 12.0 and verify that the BlueCapKit.xcodeproj iOS Deployment Target is also 12.0.
  4. Under the General tab for your project target, add the top BlueCapKit.framework as an Embedded Binary.
  5. Under the Build Phases tab, add BlueCapKit.framework as a Target Dependency and, under Link Binary With Libraries, add CoreLocation.framework and CoreBluetooth.framework.

Getting Started

With BlueCap it is possible to easily implement CentralManager and PeripheralManager applications, serialize and deserialize messages exchanged with Bluetooth devices, and define reusable GATT profiles. The BlueCap asynchronous interface uses Futures instead of the usual block interface or protocol-delegate pattern. Futures can be chained, with the result of one passed as input to the next. This simplifies application implementation because no state needs to be persisted between asynchronous calls, and code is neither distributed over multiple files (as with protocol-delegate) nor deeply nested (as with block interfaces). This section gives a brief overview of how an application is constructed; the following sections describe the supported use cases. Example applications are also available.

CentralManager

A simple CentralManager implementation will be described that scans for Peripherals advertising a TiSensorTag Accelerometer Service, connects on Peripheral discovery, discovers the Service and its Characteristics, and subscribes to accelerometer data updates.

All applications begin by calling CentralManager#whenStateChanges.

let manager = CentralManager(options: [CBCentralManagerOptionRestoreIdentifierKey : "us.gnos.BlueCap.central-manager-documentation" as NSString])

let stateChangeFuture = manager.whenStateChanges()

To start scanning for Peripherals advertising the TiSensorTag Accelerometer Service follow whenStateChanges() with CentralManager#startScanning and combine the two with the SimpleFutures FutureStream#flatMap combinator. An application error object is also defined,

public enum AppError : Error {
    case invalidState
    case resetting
    case poweredOff
    case unknown
    case unlikely
}

let serviceUUID = CBUUID(string: TISensorTag.AccelerometerService.uuid)

let scanFuture = stateChangeFuture.flatMap { [weak manager] state -> FutureStream<Peripheral> in
    guard let manager = manager else {
        throw AppError.unlikely
    }
    switch state {
    case .poweredOn:
        return manager.startScanning(forServiceUUIDs: [serviceUUID])
    case .poweredOff:
        throw AppError.poweredOff
    case .unauthorized, .unsupported:
        throw AppError.invalidState
    case .resetting:
        throw AppError.resetting
    case .unknown:
        throw AppError.unknown
    }
}

scanFuture.onFailure { [weak manager] error in
    guard let appError = error as? AppError else {
        return
    }
    switch appError {
    case .invalidState:
        break
    case .resetting:
        manager?.reset()
    case .poweredOff:
        break
    case .unknown:
        break
    case .unlikely:
        break
    }
}

Here when .poweredOn is received the scan is started. On all other state changes the appropriate error is thrown and handled in the error handler.

To connect discovered peripherals the scan is followed by Peripheral#connect and combined with FutureStream#flatMap,

var peripheral: Peripheral?

let connectionFuture = scanFuture.flatMap { [weak manager] discoveredPeripheral -> FutureStream<Void> in
    manager?.stopScanning()
    peripheral = discoveredPeripheral
    return discoveredPeripheral.connect(connectionTimeout: 10.0)
}

Here the scan is also stopped after a peripheral with the desired service UUID is discovered.

The Peripheral Services and Characteristics need to be discovered, and connection errors need to be handled. Service and Characteristic discovery are performed by Peripheral#discoverServices and Service#discoverCharacteristics, and more errors are added to AppError.
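The Characteristic UUIDs referenced below (dataUUID, enabledUUID, updatePeriodUUID) are assumed to be defined from the TISensorTag profile, following the same pattern as serviceUUID above,

let dataUUID = CBUUID(string: TISensorTag.AccelerometerService.Data.uuid)
let enabledUUID = CBUUID(string: TISensorTag.AccelerometerService.Enabled.uuid)
let updatePeriodUUID = CBUUID(string: TISensorTag.AccelerometerService.UpdatePeriod.uuid)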

public enum AppError : Error {
    case dataCharacteristicNotFound
    case enabledCharacteristicNotFound
    case updateCharacteristicNotFound
    case serviceNotFound
    case invalidState
    case resetting
    case poweredOff
    case unknown
    case unlikely
}

let discoveryFuture = connectionFuture.flatMap { [weak peripheral] () -> Future<Void> in
    guard let peripheral = peripheral else {
        throw AppError.unlikely
    }
    return peripheral.discoverServices([serviceUUID])
}.flatMap { [weak peripheral] () -> Future<Void> in
    guard let peripheral = peripheral, let service = peripheral.services(withUUID: serviceUUID)?.first else {
        throw AppError.serviceNotFound
    }
    return service.discoverCharacteristics([dataUUID, enabledUUID, updatePeriodUUID])
}

discoveryFuture.onFailure { [weak peripheral] error in
    switch error {
    case PeripheralError.disconnected:
        peripheral?.reconnect()
    case AppError.serviceNotFound:
        break
    default:
        break
    }
}

Here a reconnect attempt is made if the Peripheral is disconnected, and the AppError.serviceNotFound error is handled. Finally, read and subscribe to the data Characteristic and handle the dataCharacteristicNotFound error.

public enum AppError : Error {
    case dataCharacteristicNotFound
    case enabledCharacteristicNotFound
    case updateCharacteristicNotFound
    case serviceNotFound
    case invalidState
    case resetting
    case poweredOff
    case unknown
}

var accelerometerDataCharacteristic: Characteristic?

let subscriptionFuture = discoveryFuture.flatMap { [weak peripheral] () -> Future<Void> in
    guard let peripheral = peripheral, let service = peripheral.services(withUUID: serviceUUID)?.first else {
        throw AppError.serviceNotFound
    }
    guard let dataCharacteristic = service.characteristics(withUUID: dataUUID)?.first else {
        throw AppError.dataCharacteristicNotFound
    }
    accelerometerDataCharacteristic = dataCharacteristic
    return dataCharacteristic.read(timeout: 10.0)
}.flatMap { [weak accelerometerDataCharacteristic] () -> Future<Void> in
    guard let accelerometerDataCharacteristic = accelerometerDataCharacteristic else {
        throw AppError.dataCharacteristicNotFound
    }
    return accelerometerDataCharacteristic.startNotifying()
}.flatMap { [weak accelerometerDataCharacteristic] () -> FutureStream<Data?> in
    guard let accelerometerDataCharacteristic = accelerometerDataCharacteristic else {
        throw AppError.dataCharacteristicNotFound
    }
    return accelerometerDataCharacteristic.receiveNotificationUpdates(capacity: 10)
}

subscriptionFuture.onFailure { [weak peripheral] error in
    switch error {
    case PeripheralError.disconnected:
        peripheral?.reconnect()
    case AppError.serviceNotFound:
        break
    case AppError.dataCharacteristicNotFound:
        break
    default:
        break
    }
}

These examples can be written as a single flatMap chain as shown in the CentralManager Example.

PeripheralManager

A simple PeripheralManager application that emulates a TiSensorTag Accelerometer Service supporting all of its Characteristics will be described. It will advertise the Service and respond to write requests on the writable Characteristics.

First the Characteristics and Service are created, and the Characteristics are then added to the Service,

// create accelerometer service
let accelerometerService = MutableService(uuid: TISensorTag.AccelerometerService.uuid)

// create accelerometer data characteristic
let accelerometerDataCharacteristic = MutableCharacteristic(profile: RawArrayCharacteristicProfile<TISensorTag.AccelerometerService.Data>())

// create accelerometer enabled characteristic
let accelerometerEnabledCharacteristic = MutableCharacteristic(profile: RawCharacteristicProfile<TISensorTag.AccelerometerService.Enabled>())

// create accelerometer update period characteristic
let accelerometerUpdatePeriodCharacteristic = MutableCharacteristic(profile: RawCharacteristicProfile<TISensorTag.AccelerometerService.UpdatePeriod>())

// add characteristics to service
accelerometerService.characteristics = [accelerometerDataCharacteristic, accelerometerEnabledCharacteristic, accelerometerUpdatePeriodCharacteristic]

Next create the PeripheralManager add the Service and start advertising.

enum AppError: Error {
    case invalidState
    case resetting
    case poweredOff
    case unsupported
    case unlikely
}

let manager = PeripheralManager(options: [CBPeripheralManagerOptionRestoreIdentifierKey : "us.gnos.BlueCap.peripheral-manager-documentation" as NSString])

let startAdvertiseFuture = manager.whenStateChanges().flatMap { [weak manager] state -> Future<Void> in
    guard let manager = manager else {
        throw AppError.unlikely
    }
    switch state {
    case .poweredOn:
        manager.removeAllServices()
        return manager.add(accelerometerService)
    case .poweredOff:
        throw AppError.poweredOff
    case .unauthorized, .unknown:
        throw AppError.invalidState
    case .unsupported:
        throw AppError.unsupported
    case .resetting:
        throw AppError.resetting
    }
}.flatMap { [weak manager] _ -> Future<Void> in
    guard let manager = manager else {
        throw AppError.unlikely
    }
    return manager.startAdvertising(TISensorTag.AccelerometerService.name, uuids: [CBUUID(string: TISensorTag.AccelerometerService.uuid)])
}

startAdvertiseFuture.onFailure { [weak manager] error in
    switch error {
    case AppError.poweredOff:
        manager?.reset()
    case AppError.resetting:
        manager?.reset()
    default:
        break
    }
    manager?.stopAdvertising()
}

Now respond to write requests on the writable Characteristics via accelerometerUpdatePeriodFuture and accelerometerEnabledFuture.

// respond to Update Period write requests
let accelerometerUpdatePeriodFuture = startAdvertiseFuture.flatMap {
    accelerometerUpdatePeriodCharacteristic.startRespondingToWriteRequests()
}

accelerometerUpdatePeriodFuture.onSuccess { [weak accelerometerUpdatePeriodCharacteristic] (request, _) in
    guard let accelerometerUpdatePeriodCharacteristic = accelerometerUpdatePeriodCharacteristic else {
        return
    }
    guard let value = request.value, value.count > 0 && value.count <= 8 else {
        return
    }
    accelerometerUpdatePeriodCharacteristic.value = value
    accelerometerUpdatePeriodCharacteristic.respondToRequest(request, withResult: CBATTError.success)
}

// respond to Enabled write requests
let accelerometerEnabledFuture = startAdvertiseFuture.flatMap {
    accelerometerEnabledCharacteristic.startRespondingToWriteRequests(capacity: 2)
}

accelerometerEnabledFuture.onSuccess { [weak accelerometerEnabledCharacteristic] (request, _) in
    guard let accelerometerEnabledCharacteristic = accelerometerEnabledCharacteristic else {
        return
    }
    guard let value = request.value, value.count == 1 else {
        return
    }
    accelerometerEnabledCharacteristic.value = value
    accelerometerEnabledCharacteristic.respondToRequest(request, withResult: CBATTError.success)
}

See PeripheralManager Example for details.

Test Cases

Test Cases are available. To run them, type

pod install

and run the tests from the Test navigator in the generated workspace.

Examples

Examples are available that implement both CentralManager and PeripheralManager. The BlueCap app is also available. The example projects are constructed using either CocoaPods or Carthage. The CocoaPods projects require installing the Pod before building,

pod install

and Carthage projects require,

carthage update

  • BlueCap: provides CentralManager, PeripheralManager and iBeacon Ranging with implementations of GATT profiles. In CentralManager mode a scanner for Bluetooth LE peripherals is provided. In PeripheralManager mode an emulation of any of the included GATT profiles or an iBeacon is supported. In iBeacon Ranging mode beacon regions can be configured and monitored.
  • CentralManager: implements a BLE CentralManager scanning for services advertising the TiSensorTag Accelerometer Service. When a Peripheral is discovered a connection is established, services are discovered, the accelerometer is enabled and the application subscribes to accelerometer data updates. It is also possible to change the data update period.
  • CentralManagerWithProfile: a version of CentralManager that uses GATT Profile Definitions to create services.
  • PeripheralManager: implements a BLE PeripheralManager advertising a TiSensorTag Accelerometer Service. PeripheralManager uses the onboard accelerometer to provide data updates.
  • PeripheralManagerWithProfile: a version of PeripheralManager that uses GATT Profile Definitions to create services.
  • Beacon: a Peripheral emulating an iBeacon.
  • Beacons: iBeacon ranging.

Documentation

BlueCap supports many features that simplify writing Bluetooth LE applications. Use cases with example implementations are described in each of the following sections.

CentralManager: The BlueCap CentralManager implementation replaces CBCentralManagerDelegate and CBPeripheralDelegate protocol implementations with a Scala Futures interface using SimpleFutures.

PeripheralManager: The BlueCap PeripheralManager implementation replaces CBPeripheralManagerDelegate protocol implementations with a Scala Futures interface using SimpleFutures.

Serialization/Deserialization: Serialization and deserialization of device messages.

GATT Profile Definition: Define reusable GATT profiles and add profiles to the BlueCap app.

Download Details:

Author: Troystribling
Source Code: https://github.com/troystribling/BlueCap 
License: MIT license

#swift #ios #serialization 


Serialization: Serialization tools for IPC and Data Storage in PHP

amphp/serialization

AMPHP is a collection of event-driven libraries for PHP designed with fibers and concurrency in mind. amphp/serialization is a library providing serialization tools for IPC and data storage in PHP.

Installation

This package can be installed as a Composer dependency.

composer require amphp/serialization

Serializer

The main interface of this library is Amp\Serialization\Serializer.

<?php

namespace Amp\Serialization;

interface Serializer
{
    /**
     * @param mixed $data
     *
     * @throws SerializationException
     */
    public function serialize($data): string;

    /**
     * @return mixed
     *
     * @throws SerializationException
     */
    public function unserialize(string $data);
}

JSON

JSON serialization can be used with the JsonSerializer.

Native Serialization

Native serialization (serialize / unserialize) can be used with the NativeSerializer.

Passthrough Serialization

Sometimes you already have a string and don't want to apply additional serialization. In these cases, you can use the PassthroughSerializer.

Compression

Often, serialized data can be compressed quite well. If you don't need interoperability with other systems deserializing the data, you can compress your serialized payloads by wrapping your Serializer instance in a CompressingSerializer.
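A minimal sketch, under the assumption that NativeSerializer and CompressingSerializer expose plain constructors (check the package documentation),

<?php

require 'vendor/autoload.php';

use Amp\Serialization\CompressingSerializer;
use Amp\Serialization\NativeSerializer;

// Wrap a serializer so payloads are transparently compressed/decompressed.
$serializer = new CompressingSerializer(new NativeSerializer());

$payload = $serializer->serialize(['key' => 'value']);
var_dump($serializer->unserialize($payload)); // ['key' => 'value']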

Security

If you discover any security related issues, please email me@kelunik.com instead of using the issue tracker.

Download Details:

Author: amphp
Source Code: https://github.com/amphp/serialization 
License: MIT license

#php #serialization 


Protobuf.js: Protocol Buffers for JavaScript (& TypeScript)

Protobuf.js

Protocol Buffers are a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more, originally designed at Google.

protobuf.js is a pure JavaScript implementation with TypeScript support for node.js and the browser. It's easy to use, blazingly fast and works out of the box with .proto files!

Installation

node.js

$> npm install protobufjs [--save --save-prefix=~]
var protobuf = require("protobufjs");

The command line utility lives in the protobufjs-cli package and must be installed separately:

$> npm install protobufjs-cli [--save --save-prefix=~]

Note that this library's versioning scheme is not semver-compatible for historical reasons. For guaranteed backward compatibility, always depend on ~6.A.B instead of ^6.A.B (hence the --save-prefix above).

Browsers

Development:

<script src="//cdn.jsdelivr.net/npm/protobufjs@7.X.X/dist/protobuf.js"></script>

Production:

<script src="//cdn.jsdelivr.net/npm/protobufjs@7.X.X/dist/protobuf.min.js"></script>

Remember to replace the version tag with the exact release your project depends upon.

The library supports CommonJS and AMD loaders and also exports globally as protobuf.

Distributions

Where bundle size is a factor, there are additional stripped-down versions of the full library (~19kb gzipped) available that exclude certain functionality:

When working with JSON descriptors (i.e. generated by pbjs) and/or reflection only, see the light library (~16kb gzipped) that excludes the parser. CommonJS entry point is:

var protobuf = require("protobufjs/light");

When working with statically generated code only, see the minimal library (~6.5kb gzipped) that also excludes reflection. CommonJS entry point is:

var protobuf = require("protobufjs/minimal");
Distribution locations:

  • Full: https://cdn.jsdelivr.net/npm/protobufjs/dist/
  • Light: https://cdn.jsdelivr.net/npm/protobufjs/dist/light/
  • Minimal: https://cdn.jsdelivr.net/npm/protobufjs/dist/minimal/

Usage

Because JavaScript is a dynamically typed language, protobuf.js introduces the concept of a valid message in order to provide the best possible performance (and, as a side product, proper typings):

Valid message

A valid message is an object (1) not missing any required fields and (2) exclusively composed of JS types understood by the wire format writer.

There are two possible types of valid messages, and the encoder is able to work with both of these for convenience (a short sketch follows this list):

  • Message instances (explicit instances of message classes with default values on their prototype) always (have to) satisfy the requirements of a valid message by design and
  • Plain JavaScript objects that just so happen to be composed in a way satisfying the requirements of a valid message as well.
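For example, using the AwesomeMessage type from the examples further below,

// a message instance, valid by design
var instance = AwesomeMessage.create({ awesomeField: "hello" });

// a plain object that happens to satisfy the requirements of a valid message
var plain = { awesomeField: "hello" };

// the encoder works with both
AwesomeMessage.encode(instance).finish();
AwesomeMessage.encode(plain).finish();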

In a nutshell, the wire format writer understands the following types:

Field type; expected JS type (create, encode); conversion (fromObject):

  • s-/u-/int32, s-/fixed32: number (32 bit integer); fromObject applies value | 0 if signed, value >>> 0 if unsigned
  • s-/u-/int64, s-/fixed64: Long-like (optimal) or number (53 bit integer); fromObject applies Long.fromValue(value) with long.js, parseInt(value, 10) otherwise
  • float, double: number; fromObject applies Number(value)
  • bool: boolean; fromObject applies Boolean(value)
  • string: string; fromObject applies String(value)
  • bytes: Uint8Array (optimal), Buffer (optimal under node) or Array.<number> (8 bit integers); fromObject applies base64.decode(value) if a string, and an Object with non-zero .length is assumed to be buffer-like
  • enum: number (32 bit integer); fromObject looks up the numeric id if a string
  • message: valid message; fromObject applies Message.fromObject(value)
  • Explicit undefined and null are considered as not set if the field is optional.
  • Repeated fields are Array.<T>.
  • Map fields are Object.<string,T> with the key being the string representation of the respective value or an 8 characters long binary hash string for Long-likes.
  • Types marked as optimal provide the best performance because no conversion step (i.e. number to low and high bits or base64 string to buffer) is required.

Toolset

With that in mind and again for performance reasons, each message class provides a distinct set of methods with each method doing just one thing. This avoids unnecessary assertions / redundant operations where performance is a concern but also forces a user to perform verification (of plain JavaScript objects that might just so happen to be a valid message) explicitly where necessary - for example when dealing with user input.

Note that Message below refers to any message class.

Message.verify(message: Object): null|string
verifies that a plain JavaScript object satisfies the requirements of a valid message and thus can be encoded without issues. Instead of throwing, it returns the error message as a string, if any.

var payload = "invalid (not an object)";
var err = AwesomeMessage.verify(payload);
if (err)
  throw Error(err);

Message.encode(message: Message|Object [, writer: Writer]): Writer
encodes a message instance or valid plain JavaScript object. This method does not implicitly verify the message and it's up to the user to make sure that the payload is a valid message.

var buffer = AwesomeMessage.encode(message).finish();

Message.encodeDelimited(message: Message|Object [, writer: Writer]): Writer
works like Message.encode but additionally prepends the length of the message as a varint.
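For example,

var buffer = AwesomeMessage.encodeDelimited(message).finish();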

Message.decode(reader: Reader|Uint8Array): Message
decodes a buffer to a message instance. If required fields are missing, it throws a util.ProtocolError with an instance property set to the so far decoded message. If the wire format is invalid, it throws an Error.

try {
  var decodedMessage = AwesomeMessage.decode(buffer);
} catch (e) {
    if (e instanceof protobuf.util.ProtocolError) {
      // e.instance holds the so far decoded message with missing required fields
    } else {
      // wire format is invalid
    }
}

Message.decodeDelimited(reader: Reader|Uint8Array): Message
works like Message.decode but additionally reads the length of the message prepended as a varint.
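For example,

var decodedMessage = AwesomeMessage.decodeDelimited(buffer);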

Message.create(properties: Object): Message
creates a new message instance from a set of properties that satisfy the requirements of a valid message. Where applicable, it is recommended to prefer Message.create over Message.fromObject because it doesn't perform possibly redundant conversion.

var message = AwesomeMessage.create({ awesomeField: "AwesomeString" });

Message.fromObject(object: Object): Message
converts any non-valid plain JavaScript object to a message instance using the conversion steps outlined within the table above.

var message = AwesomeMessage.fromObject({ awesomeField: 42 });
// converts awesomeField to a string

Message.toObject(message: Message [, options: ConversionOptions]): Object
converts a message instance to an arbitrary plain JavaScript object for interoperability with other libraries or storage. The resulting plain JavaScript object might still satisfy the requirements of a valid message depending on the actual conversion options specified, but most of the time it does not.

var object = AwesomeMessage.toObject(message, {
  enums: String,  // enums as string names
  longs: String,  // longs as strings (requires long.js)
  bytes: String,  // bytes as base64 encoded strings
  defaults: true, // includes default values
  arrays: true,   // populates empty arrays (repeated fields) even if defaults=false
  objects: true,  // populates empty objects (map fields) even if defaults=false
  oneofs: true    // includes virtual oneof fields set to the present field's name
});

For reference, the following diagram aims to display relationships between the different methods and the concept of a valid message:

Toolset Diagram

In other words: verify indicates that calling create or encode directly on the plain object will result in a valid message or succeed, respectively. fromObject, on the other hand, does conversion from a broader range of plain objects to create valid messages.

Examples

Using .proto files

It is possible to load existing .proto files using the full library, which parses and compiles the definitions to ready to use (reflection-based) message classes:

// awesome.proto
package awesomepackage;
syntax = "proto3";

message AwesomeMessage {
    string awesome_field = 1; // becomes awesomeField
}

protobuf.load("awesome.proto", function(err, root) {
    if (err)
        throw err;

    // Obtain a message type
    var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

    // Exemplary payload
    var payload = { awesomeField: "AwesomeString" };

    // Verify the payload if necessary (i.e. when possibly incomplete or invalid)
    var errMsg = AwesomeMessage.verify(payload);
    if (errMsg)
        throw Error(errMsg);

    // Create a new message
    var message = AwesomeMessage.create(payload); // or use .fromObject if conversion is necessary

    // Encode a message to an Uint8Array (browser) or Buffer (node)
    var buffer = AwesomeMessage.encode(message).finish();
    // ... do something with buffer

    // Decode an Uint8Array (browser) or Buffer (node) to a message
    var message = AwesomeMessage.decode(buffer);
    // ... do something with message

    // If the application uses length-delimited buffers, there is also encodeDelimited and decodeDelimited.

    // Maybe convert the message back to a plain object
    var object = AwesomeMessage.toObject(message, {
        longs: String,
        enums: String,
        bytes: String,
        // see ConversionOptions
    });
});

Additionally, promise syntax can be used by omitting the callback, if preferred:

protobuf.load("awesome.proto")
    .then(function(root) {
       ...
    });

Using JSON descriptors

The library utilizes JSON descriptors that are equivalent to a .proto definition. For example, the following is identical to the .proto definition seen above:

// awesome.json
{
  "nested": {
    "awesomepackage": {
      "nested": {
        "AwesomeMessage": {
          "fields": {
            "awesomeField": {
              "type": "string",
              "id": 1
            }
          }
        }
      }
    }
  }
}

JSON descriptors closely resemble the internal reflection structure:

Type (T); extends; type-specific properties:

  • ReflectionObject: options
  • Namespace (extends ReflectionObject): nested
  • Root (extends Namespace): nested
  • Type (extends Namespace): fields
  • Enum (extends ReflectionObject): values
  • Field (extends ReflectionObject): rule, type, id
  • MapField (extends Field): keyType
  • OneOf (extends ReflectionObject): oneof (array of field names)
  • Service (extends Namespace): methods
  • Method (extends ReflectionObject): type, requestType, responseType, requestStream, responseStream
  • Bold properties are required. Italic types are abstract.
  • T.fromJSON(name, json) creates the respective reflection object from a JSON descriptor
  • T#toJSON() creates a JSON descriptor from the respective reflection object (its name is used as the key within the parent)

Exclusively using JSON descriptors instead of .proto files enables the use of just the light library (the parser isn't required in this case).

A JSON descriptor can either be loaded the usual way:

protobuf.load("awesome.json", function(err, root) {
    if (err) throw err;

    // Continue at "Obtain a message type" above
});

Or it can be loaded inline:

var jsonDescriptor = require("./awesome.json"); // exemplary for node

var root = protobuf.Root.fromJSON(jsonDescriptor);

// Continue at "Obtain a message type" above

Using reflection only

Both the full and the light library include full reflection support. One could, for example, define the .proto definitions seen in the examples above using just reflection:

...
var Root  = protobuf.Root,
    Type  = protobuf.Type,
    Field = protobuf.Field;

var AwesomeMessage = new Type("AwesomeMessage").add(new Field("awesomeField", 1, "string"));

var root = new Root().define("awesomepackage").add(AwesomeMessage);

// Continue at "Create a new message" above
...

Detailed information on the reflection structure is available within the API documentation.

Using custom classes

Message classes can also be extended with custom functionality and it is also possible to register a custom constructor with a reflected message type:

...

// Define a custom constructor
function AwesomeMessage(properties) {
    // custom initialization code
    ...
}

// Register the custom constructor with its reflected type (*)
root.lookupType("awesomepackage.AwesomeMessage").ctor = AwesomeMessage;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above

(*) Besides referencing its reflected type through AwesomeMessage.$type and AwesomeMessage#$type, the respective custom class is automatically populated with:

  • AwesomeMessage.create
  • AwesomeMessage.encode and AwesomeMessage.encodeDelimited
  • AwesomeMessage.decode and AwesomeMessage.decodeDelimited
  • AwesomeMessage.verify
  • AwesomeMessage.fromObject, AwesomeMessage.toObject and AwesomeMessage#toJSON

Afterwards, decoded messages of this type are instanceof AwesomeMessage.

Alternatively, it is also possible to reuse and extend the internal constructor if custom initialization code is not required:

...

// Reuse the internal constructor
var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage").ctor;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above

Using services

The library also supports consuming services but it doesn't make any assumptions about the actual transport channel. Instead, a user must provide a suitable RPC implementation, which is an asynchronous function that takes the reflected service method, the binary request and a node-style callback as its parameters:

function rpcImpl(method, requestData, callback) {
    // perform the request using an HTTP request or a WebSocket for example
    var responseData = ...;
    // and call the callback with the binary response afterwards:
    callback(null, responseData);
}

Below is a working example using the grpc npm package.

const grpc = require('grpc')

const Client = grpc.makeGenericClientConstructor({})
const client = new Client(
  grpcServerUrl,
  grpc.credentials.createInsecure()
)

const rpcImpl = function(method, requestData, callback) {
  client.makeUnaryRequest(
    method.name,
    arg => arg,
    arg => arg,
    requestData,
    callback
  )
}

Example:

// greeter.proto
syntax = "proto3";

service Greeter {
    rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
    string name = 1;
}

message HelloReply {
    string message = 1;
}
...
var Greeter = root.lookup("Greeter");
var greeter = Greeter.create(/* see above */ rpcImpl, /* request delimited? */ false, /* response delimited? */ false);

greeter.sayHello({ name: 'you' }, function(err, response) {
    console.log('Greeting:', response.message);
});

Services also support promises:

greeter.sayHello({ name: 'you' })
    .then(function(response) {
        console.log('Greeting:', response.message);
    });

There is also an example for streaming RPC.

Note that the service API is meant for clients. Implementing a server-side endpoint pretty much always requires transport channel (i.e. http, websocket, etc.) specific code with the only common denominator being that it decodes and encodes messages.

Usage with TypeScript

The library ships with its own type definitions and modern editors like Visual Studio Code will automatically detect and use them for code completion.

The npm package depends on @types/node because of Buffer and @types/long because of Long. If you are not building for node and/or not using long.js, it should be safe to exclude them manually.

Using the JS API

The API shown above works pretty much the same with TypeScript. However, because everything is typed, accessing fields on instances of dynamically generated message classes requires either using bracket-notation (i.e. message["awesomeField"]) or explicit casts. Alternatively, it is possible to use a typings file generated for its static counterpart.

import { load } from "protobufjs"; // respectively "./node_modules/protobufjs"

load("awesome.proto", function(err, root) {
  if (err)
    throw err;

  // example code
  const AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

  let message = AwesomeMessage.create({ awesomeField: "hello" });
  console.log(`message = ${JSON.stringify(message)}`);

  let buffer = AwesomeMessage.encode(message).finish();
  console.log(`buffer = ${Array.prototype.toString.call(buffer)}`);

  let decoded = AwesomeMessage.decode(buffer);
  console.log(`decoded = ${JSON.stringify(decoded)}`);
});

Using generated static code

If you generated static code to bundle.js using the CLI and its type definitions to bundle.d.ts, then you can just do:

import { AwesomeMessage } from "./bundle.js";

// example code
let message = AwesomeMessage.create({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);

Using decorators

The library also includes an early implementation of decorators.

Note that decorators are an experimental feature in TypeScript and that declaration order is important depending on the JS target. For example, @Field.d(2, AwesomeArrayMessage) requires that AwesomeArrayMessage has been defined earlier when targeting ES5.

import { Message, Type, Field, OneOf } from "protobufjs/light"; // respectively "./node_modules/protobufjs/light.js"

export class AwesomeSubMessage extends Message<AwesomeSubMessage> {

  @Field.d(1, "string")
  public awesomeString: string;

}

export enum AwesomeEnum {
  ONE = 1,
  TWO = 2
}

@Type.d("SuperAwesomeMessage")
export class AwesomeMessage extends Message<AwesomeMessage> {

  @Field.d(1, "string", "optional", "awesome default string")
  public awesomeField: string;

  @Field.d(2, AwesomeSubMessage)
  public awesomeSubMessage: AwesomeSubMessage;

  @Field.d(3, AwesomeEnum, "optional", AwesomeEnum.ONE)
  public awesomeEnum: AwesomeEnum;

  @OneOf.d("awesomeSubMessage", "awesomeEnum")
  public which: string;

}

// example code
let message = new AwesomeMessage({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);

Supported decorators are:

Type.d(typeName?: string)   (optional)
annotates a class as a protobuf message type. If typeName is not specified, the constructor's runtime function name is used for the reflected type.

Field.d<T>(fieldId: number, fieldType: string | Constructor<T>, fieldRule?: "optional" | "required" | "repeated", defaultValue?: T)
annotates a property as a protobuf field with the specified id and protobuf type.

MapField.d<T extends { [key: string]: any }>(fieldId: number, fieldKeyType: string, fieldValueType: string | Constructor<{}>)
annotates a property as a protobuf map field with the specified id, protobuf key and value type.

OneOf.d<T extends string>(...fieldNames: string[])
annotates a property as a protobuf oneof covering the specified fields.

Other notes:

  • Decorated types reside in protobuf.roots["decorated"] using a flat structure, so no duplicate names.
  • Enums are copied to a reflected enum with a generic name on decorator evaluation because referenced enum objects have no runtime name the decorator could use.
  • Default values must be specified as arguments to the decorator instead of using a property initializer for proper prototype behavior.
  • Property names on decorated classes must not be renamed on compile time (i.e. by a minifier) because decorators just receive the original field name as a string.

ProTip! Not as pretty, but you can use decorators in plain JavaScript as well.

Additional documentation

Protocol Buffers

protobuf.js

Community

Performance

The package includes a benchmark that compares protobuf.js performance to native JSON (as far as this is possible) and Google's JS implementation. On an i7-2600K running node 6.9.1 it yields:

benchmarking encoding performance ...

protobuf.js (reflect) x 541,707 ops/sec ±1.13% (87 runs sampled)
protobuf.js (static) x 548,134 ops/sec ±1.38% (89 runs sampled)
JSON (string) x 318,076 ops/sec ±0.63% (93 runs sampled)
JSON (buffer) x 179,165 ops/sec ±2.26% (91 runs sampled)
google-protobuf x 74,406 ops/sec ±0.85% (86 runs sampled)

   protobuf.js (static) was fastest
  protobuf.js (reflect) was 0.9% ops/sec slower (factor 1.0)
          JSON (string) was 41.5% ops/sec slower (factor 1.7)
          JSON (buffer) was 67.6% ops/sec slower (factor 3.1)
        google-protobuf was 86.4% ops/sec slower (factor 7.3)

benchmarking decoding performance ...

protobuf.js (reflect) x 1,383,981 ops/sec ±0.88% (93 runs sampled)
protobuf.js (static) x 1,378,925 ops/sec ±0.81% (93 runs sampled)
JSON (string) x 302,444 ops/sec ±0.81% (93 runs sampled)
JSON (buffer) x 264,882 ops/sec ±0.81% (93 runs sampled)
google-protobuf x 179,180 ops/sec ±0.64% (94 runs sampled)

  protobuf.js (reflect) was fastest
   protobuf.js (static) was 0.3% ops/sec slower (factor 1.0)
          JSON (string) was 78.1% ops/sec slower (factor 4.6)
          JSON (buffer) was 80.8% ops/sec slower (factor 5.2)
        google-protobuf was 87.0% ops/sec slower (factor 7.7)

benchmarking combined performance ...

protobuf.js (reflect) x 275,900 ops/sec ±0.78% (90 runs sampled)
protobuf.js (static) x 290,096 ops/sec ±0.96% (90 runs sampled)
JSON (string) x 129,381 ops/sec ±0.77% (90 runs sampled)
JSON (buffer) x 91,051 ops/sec ±0.94% (90 runs sampled)
google-protobuf x 42,050 ops/sec ±0.85% (91 runs sampled)

   protobuf.js (static) was fastest
  protobuf.js (reflect) was 4.7% ops/sec slower (factor 1.0)
          JSON (string) was 55.3% ops/sec slower (factor 2.2)
          JSON (buffer) was 68.6% ops/sec slower (factor 3.2)
        google-protobuf was 85.5% ops/sec slower (factor 6.9)

These results are achieved by

  • generating type-specific encoders, decoders, verifiers and converters at runtime
  • configuring the reader/writer interface according to the environment
  • using node-specific functionality where beneficial and, of course
  • avoiding unnecessary operations through splitting up the toolset.

You can also run the benchmark ...

$> npm run bench

and the profiler yourself (the latter requires a recent version of node):

$> npm run prof <encode|decode|encode-browser|decode-browser> [iterations=10000000]

Note that as of this writing, the benchmark suite performs significantly slower on node 7.2.0 compared to 6.9.1 because moths.

Compatibility

  • Works in all modern and not-so-modern browsers except IE8.
  • Because the internals of this package do not rely on google/protobuf/descriptor.proto, options are parsed and presented literally.
  • If typed arrays are not supported by the environment, plain arrays will be used instead.
  • Support for pre-ES5 environments (except IE8) can be achieved by using a polyfill.
  • Support for Content Security Policy-restricted environments (like Chrome extensions without unsafe-eval) can be achieved by generating and using static code instead.
  • If a proper way to work with 64 bit values (uint64, int64 etc.) is required, just install long.js alongside this library. All 64 bit numbers will then be returned as a Long instance instead of a possibly unsafe JavaScript number.
  • For descriptor.proto interoperability, see ext/descriptor

Building

To build the library or its components yourself, clone it from GitHub and install the development dependencies:

$> git clone https://github.com/dcodeIO/protobuf.js.git
$> cd protobuf.js
$> npm install

Building the respective development and production versions with their respective source maps to dist/:

$> npm run build

Building the documentation to docs/:

$> npm run docs

Building the TypeScript definition to index.d.ts:

$> npm run types

Browserify integration

By default, protobuf.js integrates into any browserify build-process without requiring any optional modules. Hence:

If int64 support is required, explicitly require the long module somewhere in your project as it will be excluded otherwise. This assumes that a global require function is present that protobuf.js can call to obtain the long module.

If there is no global require function present after bundling, it's also possible to assign the long module programmatically:

var Long = ...;

protobuf.util.Long = Long;
protobuf.configure();

If you have any special requirements, there is the bundler for reference.

Download Details:

Author: Protobufjs
Source Code: https://github.com/protobufjs/protobuf.js 
License: View license

#typescript #javascript #serialization 


Closure: Serialize Closures (anonymous Functions)

Opis Closure  

Serializable closures

Opis Closure is a library that aims to overcome PHP's limitations regarding closure serialization by providing a wrapper that will make all closures serializable.

The library's key features:

  • Serialize any closure
  • Serialize arbitrary objects
  • Doesn't use eval for closure serialization or unserialization
  • Works with any PHP version that has support for closures
  • Supports PHP 7 syntax
  • Handles all variables referenced/imported in use() and automatically wraps all referenced/imported closures for proper serialization
  • Handles recursive closures
  • Handles magic constants like __FILE__, __DIR__, __LINE__, __NAMESPACE__, __CLASS__, __TRAIT__, __METHOD__ and __FUNCTION__.
  • Automatically resolves all class names, function names and constant names used inside the closure
  • Track closure's residing source by using the #trackme directive
  • Simple and very fast parser
  • Any error or exception that might occur when executing an unserialized closure can be caught and treated properly
  • You can serialize/unserialize any closure unlimited times, even those previously unserialized (this is possible because eval() is not used for unserialization)
  • Handles static closures
  • Supports cryptographically signed closures
  • Provides a reflector that can give you information about the serialized closure
  • Provides an analyzer for SuperClosure library
  • Automatically detects when the scope and/or the bound object of a closure needs to be serialized in order for the closure to work after deserialization

Documentation

The full documentation for this library can be found here.

License

Opis Closure is licensed under the MIT License (MIT).

Requirements

  • PHP ^5.4 || ^7.0 || ^8.0

Installation

Opis Closure is available on Packagist and it can be installed from a command line interface by using Composer.

composer require opis/closure

Or you could directly reference it into your composer.json file as a dependency

{
    "require": {
        "opis/closure": "^3.5"
    }
}

Migrating from 2.x

If your project needs to support PHP 5.3 you can continue using the 2.x version of Opis Closure. Otherwise, assuming you are not using one of the removed/refactored classes or features (see CHANGELOG), migrating to version 3.x is simply a matter of updating your composer.json file.

Semantic versioning

Opis Closure follows semantic versioning specifications.

Arbitrary object serialization

We've added this feature in order to support the serialization of a closure's bound object. The implementation is far from perfect, and it's really hard to make it work flawlessly. We will try to improve this, but we can't guarantee anything. So our advice regarding the Opis\Closure\serialize|unserialize functions is to use them with caution.
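A minimal sketch of these functions, assuming Composer autoloading is in place (the typed-closure syntax below requires PHP 7+),

<?php

require 'vendor/autoload.php';

$double = function (int $n): int {
    return $n * 2;
};

// Serialize the closure, then restore and invoke it.
$serialized = \Opis\Closure\serialize($double);
$restored = \Opis\Closure\unserialize($serialized);

echo $restored(21); // 42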

Download Details:

Author: opis
Source Code: https://github.com/opis/closure 
License: MIT license

#php #closure #serialization 


Serialize and Deserialize Php-Serialized Strings in Dart

php_serializer

Serialize and deserialize Php-Serialized strings

Basically the equivalent to the php-functions serialize and unserialize.

Usage:

If you only need basic objects, simply run phpSerialize on any List, Map, String, int or double and send the resulting String to Php, where it can be deserialized with unserialize().

The opposite direction is very similar. Just pass a String generated by Php's serialize() function to phpDeserialize and get the resulting objects.

Advanced Usage: Objects

If you need additional objects, which would be encoded with a leading O:, these functions require additional information, which has to be provided to enable this functionality.

For every class that should be (de-)serializable there has to be an instance of PhpSerializationObjectInformation, which contains the Fully Qualified Class Name from Php (the class name including the namespace), a method to convert a Dart object to a list of properties, and another method which does the opposite.

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add php_serializer

With Flutter:

 $ flutter pub add php_serializer

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  php_serializer: ^1.1.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:php_serializer/php_serializer.dart'; 

example/example.md

Transfer data from Php to Dart

<?php
$trivialArray = [
    1, 1, 2, 3, 5, 8
];
echo serialize($trivialArray);
//Output: a:6:{i:0;i:1;i:1;i:1;i:2;i:2;i:3;i:3;i:4;i:5;i:5;i:8;}
void main() {
  String inputFromPhp = 'a:6:{i:0;i:1;i:1;i:1;i:2;i:2;i:3;i:3;i:4;i:5;i:5;i:8;}';
  assert([1, 1, 2, 3, 5, 8] == phpDeserialize(inputFromPhp));
}

Transfer data from Dart to Php

void main() {
  List<int> trivialList = [1, 1, 2, 3, 5, 8];
  print(phpSerialize(trivialList));
  //Output: a:6:{i:0;i:1;i:1;i:1;i:2;i:2;i:3;i:3;i:4;i:5;i:5;i:8;}
}
<?php
$inputFromDart = 'a:6:{i:0;i:1;i:1;i:1;i:2;i:2;i:3;i:3;i:4;i:5;i:5;i:8;}';
\assert([1, 1, 2, 3, 5, 8] === unserialize($inputFromDart)); 

Download Details:

Author: Manu311

Source Code: https://github.com/Manu311/dart-php_serializer

#dart #serialization 


Caching.jl: Memoization Mechanism

Caching.jl

Memory and disk memoizer written in Julia.

Installation

The installation can be done through the usual channels (manually, by cloning the repository, or by installing it through the Julia REPL).
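For example, from the Julia REPL (assuming the package is registered under the name Caching),

using Pkg
Pkg.add("Caching")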

Reporting Bugs

Please file an issue to report a bug or request a feature.

References

[1] https://en.wikipedia.org/wiki/Memoization

[2] https://en.wikipedia.org/wiki/Cache_replacement_policies

For other caching solutions, check out also LRUCache.jl, Memoize.jl and Anamnesis.jl

Download Details:

Author: zgornel
Source Code: https://github.com/zgornel/Caching.jl 
License: MIT license

#julia #serialization #memoize


A Handy Swift JSON-Object Serialization/Deserialization Library

HandyJSON

To deal with the crash on iOS 15 beta 3, please try version 5.0.4-beta.

HandyJSON is a framework written in Swift that makes it easy to convert model objects (pure classes/structs) to and from JSON on iOS.

Compared with others, the most significant feature of HandyJSON is that it does not require objects to inherit from NSObject (it uses reflection rather than KVC), nor to implement a 'mapping' function (it writes values directly to memory to achieve property assignment).

HandyJSON depends entirely on the memory layout rules inferred from the Swift runtime code. We are watching those rules and will follow any changes.

Chinese documentation (中文文档) is also available.

Sample Code

Deserialization

class BasicTypes: HandyJSON {
    var int: Int = 2
    var doubleOptional: Double?
    var stringImplicitlyUnwrapped: String!

    required init() {}
}

let jsonString = "{\"doubleOptional\":1.1,\"stringImplicitlyUnwrapped\":\"hello\",\"int\":1}"
if let object = BasicTypes.deserialize(from: jsonString) {
    print(object.int)
    print(object.doubleOptional!)
    print(object.stringImplicitlyUnwrapped)
}

Serialization


let object = BasicTypes()
object.int = 1
object.doubleOptional = 1.1
object.stringImplicitlyUnwrapped = "hello"

print(object.toJSON()!) // serialize to dictionary
print(object.toJSONString()!) // serialize to JSON string
print(object.toJSONString(prettyPrint: true)!) // serialize to pretty JSON string

Features

  • Serialize/deserialize objects to/from JSON
  • Naturally uses object property names for mapping; no need to specify a mapping relationship
  • Supports almost all types in Swift, including enum
  • Supports struct
  • Custom transformations
  • Type adaption, such as a string JSON field mapping to an int property, or an int JSON field mapping to a string property

An overview of the supported types can be found in the file BasicTypes.swift.

Requirements

iOS 8.0+/OSX 10.9+/watchOS 2.0+/tvOS 9.0+

Swift 3.0+ / Swift 4.0+ / Swift 5.0+

Installation

To use with Swift 5.0/5.1 ( Xcode 10.2+/11.0+ ), version == 5.0.2

To use with Swift 4.2 ( Xcode 10 ), version == 4.2.0

To use with Swift 4.0, version >= 4.1.1

To use with Swift 3.x, version >= 1.8.0

For Legacy Swift2.x support, take a look at the swift2 branch.

Cocoapods

Add the following line to your Podfile:

pod 'HandyJSON', '~> 5.0.2'

Then, run the following command:

$ pod install

Carthage

You can add a dependency on HandyJSON by adding the following line to your Cartfile:

github "alibaba/HandyJSON" ~> 5.0.2

Manually

You can integrate HandyJSON into your project manually by doing the following steps:

  • Open up Terminal, cd into your top-level project directory, and add HandyJSON as a submodule:
git init && git submodule add https://github.com/alibaba/HandyJSON.git

Open the new HandyJSON folder, drag the HandyJSON.xcodeproj into the Project Navigator of your project.

Select your application project in the Project Navigator, open the General panel in the right window.

Click on the + button under the Embedded Binaries section.

You will see two different HandyJSON.xcodeproj folders each with four different versions of the HandyJSON.framework nested inside a Products folder.

It does not matter which Products folder you choose from, but it does matter which HandyJSON.framework you choose.

Select one of the four HandyJSON.framework which matches the platform your Application should run on.

Congratulations!

Deserialization

The Basics

To support deserialization from JSON, a class/struct needs to conform to the 'HandyJSON' protocol. It is a true protocol, not a class inherited from NSObject.

To conform to 'HandyJSON', a class needs to implement an empty initializer.

class BasicTypes: HandyJSON {
    var int: Int = 2
    var doubleOptional: Double?
    var stringImplicitlyUnwrapped: String!

    required init() {}
}

let jsonString = "{\"doubleOptional\":1.1,\"stringImplicitlyUnwrapped\":\"hello\",\"int\":1}"
if let object = BasicTypes.deserialize(from: jsonString) {
    // โ€ฆ
}

Support Struct

For a struct, since the compiler provides a default empty initializer, we get it for free.

struct BasicTypes: HandyJSON {
    var int: Int = 2
    var doubleOptional: Double?
    var stringImplicitlyUnwrapped: String!
}

let jsonString = "{\"doubleOptional\":1.1,\"stringImplicitlyUnwrapped\":\"hello\",\"int\":1}"
if let object = BasicTypes.deserialize(from: jsonString) {
    // โ€ฆ
}

But also notice that if you have a designated initializer overriding the default one in the struct, you should explicitly declare an empty one (no required modifier needed).
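For example, a hypothetical Point struct with its own designated initializer would also declare an empty one,

struct Point: HandyJSON {
    var x: Int = 0
    var y: Int = 0

    // a designated initializer overrides the compiler-provided empty one
    init(x: Int, y: Int) {
        self.x = x
        self.y = y
    }

    // so an empty initializer must be declared explicitly (no required modifier needed)
    init() {}
}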

Support Enum Property

To be convertible, an enum must conform to the HandyJSONEnum protocol. Nothing else needs to be done.

enum AnimalType: String, HandyJSONEnum {
    case Cat = "cat"
    case Dog = "dog"
    case Bird = "bird"
}

struct Animal: HandyJSON {
    var name: String?
    var type: AnimalType?
}

let jsonString = "{\"type\":\"cat\",\"name\":\"Tom\"}"
if let animal = Animal.deserialize(from: jsonString) {
    print(animal.type?.rawValue)
}

Optional/ImplicitlyUnwrappedOptional/Collections/...

'HandyJSON' supports classes/structs composed of optional, implicitly-unwrapped-optional, array, dictionary, Objective-C base type, nested type etc. properties.

class BasicTypes: HandyJSON {
    var bool: Bool = true
    var intOptional: Int?
    var doubleImplicitlyUnwrapped: Double!
    var anyObjectOptional: Any?

    var arrayInt: Array<Int> = []
    var arrayStringOptional: Array<String>?
    var setInt: Set<Int>?
    var dictAnyObject: Dictionary<String, Any> = [:]

    var nsNumber = 2
    var nsString: NSString?

    required init() {}
}

let object = BasicTypes()
object.intOptional = 1
object.doubleImplicitlyUnwrapped = 1.1
object.anyObjectOptional = "StringValue"
object.arrayInt = [1, 2]
object.arrayStringOptional = ["a", "b"]
object.setInt = [1, 2]
object.dictAnyObject = ["key1": 1, "key2": "stringValue"]
object.nsNumber = 2
object.nsString = "nsStringValue"

let jsonString = object.toJSONString()!

if let object = BasicTypes.deserialize(from: jsonString) {
    // ...
}

Designated Path

HandyJSON supports deserialization from designated path of JSON.

class Cat: HandyJSON {
    var id: Int64!
    var name: String!

    required init() {}
}

let jsonString = "{\"code\":200,\"msg\":\"success\",\"data\":{\"cat\":{\"id\":12345,\"name\":\"Kitty\"}}}"

if let cat = Cat.deserialize(from: jsonString, designatedPath: "data.cat") {
    print(cat.name)
}

Composition Object

Notice that all the properties of a class/struct that need to be deserialized should be of types conforming to HandyJSON.

class Component: HandyJSON {
    var aInt: Int?
    var aString: String?

    required init() {}
}

class Composition: HandyJSON {
    var aInt: Int?
    var comp1: Component?
    var comp2: Component?

    required init() {}
}

let jsonString = "{\"num\":12345,\"comp1\":{\"aInt\":1,\"aString\":\"aaaaa\"},\"comp2\":{\"aInt\":2,\"aString\":\"bbbbb\"}}"

if let composition = Composition.deserialize(from: jsonString) {
    print(composition)
}

Inheritance Object

For a subclass that needs deserialization, its superclass needs to conform to HandyJSON.

class Animal: HandyJSON {
    var id: Int?
    var color: String?

    required init() {}
}

class Cat: Animal {
    var name: String?

    required init() {}
}

let jsonString = "{\"id\":12345,\"color\":\"black\",\"name\":\"cat\"}"

if let cat = Cat.deserialize(from: jsonString) {
    print(cat)
}

JSON Array

If the first level of a JSON text is an array, we turn it into an array of objects.

class Cat: HandyJSON {
    var name: String?
    var id: String?

    required init() {}
}

let jsonArrayString: String? = "[{\"name\":\"Bob\",\"id\":\"1\"}, {\"name\":\"Lily\",\"id\":\"2\"}, {\"name\":\"Lucy\",\"id\":\"3\"}]"
if let cats = [Cat].deserialize(from: jsonArrayString) {
    cats.forEach({ (cat) in
        // ...
    })
}

Mapping From Dictionary

HandyJSON supports mapping a Swift dictionary to a model.

var dict = [String: Any]()
dict["doubleOptional"] = 1.1
dict["stringImplicitlyUnwrapped"] = "hello"
dict["int"] = 1
if let object = BasicTypes.deserialize(from: dict) {
    // ...
}

Custom Mapping

HandyJSON lets you customize the key mapping to JSON fields, or the parsing method of any property. All you need to do is implement an optional mapping function and do the customization in it.

We brought the transformer over from ObjectMapper. If you are familiar with it, it's almost the same here.

class Cat: HandyJSON {
    var id: Int64!
    var name: String!
    var parent: (String, String)?
    var friendName: String?

    required init() {}

    func mapping(mapper: HelpingMapper) {
        // specify 'cat_id' field in json map to 'id' property in object
        mapper <<<
            self.id <-- "cat_id"

        // specify 'parent' field in json parse as following to 'parent' property in object
        mapper <<<
            self.parent <-- TransformOf<(String, String), String>(fromJSON: { (rawString) -> (String, String)? in
                if let parentNames = rawString?.split(separator: "/").map(String.init) {
                    return (parentNames[0], parentNames[1])
                }
                return nil
            }, toJSON: { (tuple) -> String? in
                if let _tuple = tuple {
                    return "\(_tuple.0)/\(_tuple.1)"
                }
                return nil
            })

        // specify 'friend.name' path field in json map to 'friendName' property
        mapper <<<
            self.friendName <-- "friend.name"
    }
}

let jsonString = "{\"cat_id\":12345,\"name\":\"Kitty\",\"parent\":\"Tom/Lily\",\"friend\":{\"id\":54321,\"name\":\"Lily\"}}"

if let cat = Cat.deserialize(from: jsonString) {
    print(cat.id)
    print(cat.parent)
    print(cat.friendName)
}

Date/Data/URL/Decimal/Color

HandyJSON provides some useful transformers for non-basic types.

class ExtendType: HandyJSON {
    var date: Date?
    var decimal: NSDecimalNumber?
    var url: URL?
    var data: Data?
    var color: UIColor?

    func mapping(mapper: HelpingMapper) {
        mapper <<<
            date <-- CustomDateFormatTransform(formatString: "yyyy-MM-dd")

        mapper <<<
            decimal <-- NSDecimalNumberTransform()

        mapper <<<
            url <-- URLTransform(shouldEncodeURLString: false)

        mapper <<<
            data <-- DataTransform()

        mapper <<<
            color <-- HexColorTransform()
    }

    public required init() {}
}

let object = ExtendType()
object.date = Date()
object.decimal = NSDecimalNumber(string: "1.23423414371298437124391243")
object.url = URL(string: "https://www.aliyun.com")
object.data = Data(base64Encoded: "aGVsbG8sIHdvcmxkIQ==")
object.color = UIColor.blue

print(object.toJSONString()!)
// it prints:
// {"date":"2017-09-11","decimal":"1.23423414371298437124391243","url":"https:\/\/www.aliyun.com","data":"aGVsbG8sIHdvcmxkIQ==","color":"0000FF"}

let mappedObject = ExtendType.deserialize(from: object.toJSONString()!)!
print(mappedObject.date)
...

Exclude Property

If any non-basic property of a class/struct cannot conform to HandyJSON/HandyJSONEnum, or you simply do not want it to take part in deserialization, you should exclude it in the mapping function.

class NotHandyJSONType {
    var dummy: String?
}

class Cat: HandyJSON {
    var id: Int64!
    var name: String!
    var notHandyJSONTypeProperty: NotHandyJSONType?
    var basicTypeButNotWantedProperty: String?

    required init() {}

    func mapping(mapper: HelpingMapper) {
        mapper >>> self.notHandyJSONTypeProperty
        mapper >>> self.basicTypeButNotWantedProperty
    }
}

let jsonString = "{\"name\":\"cat\",\"id\":\"12345\"}"

if let cat = Cat.deserialize(from: jsonString) {
    print(cat)
}

Update Existing Model

HandyJSON supports updating an existing model from a given JSON string or dictionary.

class BasicTypes: HandyJSON {
    var int: Int = 2
    var doubleOptional: Double?
    var stringImplicitlyUnwrapped: String!

    required init() {}
}

var object = BasicTypes()
object.int = 1
object.doubleOptional = 1.1

let jsonString = "{\"doubleOptional\":2.2}"
JSONDeserializer.update(object: &object, from: jsonString)
print(object.int)
print(object.doubleOptional)

Supported Property Type

Int/Bool/Double/Float/String/NSNumber/NSString

RawRepresentable enum

NSArray/NSDictionary

Int8/Int16/Int32/Int64/UInt8/UInt16/UInt32/UInt64

Optional<T>/ImplicitlyUnwrappedOptional<T> // T is one of the above types

Array<T> // T is one of the above types

Dictionary<String, T> // T is one of the above types

Nested combinations of the above

Serialization

The Basics

A class/struct that needs to serialize to JSON should also conform to the HandyJSON protocol.

class BasicTypes: HandyJSON {
    var int: Int = 2
    var doubleOptional: Double?
    var stringImplicitlyUnwrapped: String!

    required init() {}
}

let object = BasicTypes()
object.int = 1
object.doubleOptional = 1.1
object.stringImplicitlyUnwrapped = "hello"

print(object.toJSON()!) // serialize to dictionary
print(object.toJSONString()!) // serialize to JSON string
print(object.toJSONString(prettyPrint: true)!) // serialize to pretty JSON string

Mapping And Excluding

It works just like deserialization. An excluded property takes part in neither deserialization nor serialization, and the mapper items define both the deserializing and serializing rules. Refer to the usage above; a combined example follows.
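
A minimal sketch combining a key remap with an exclusion on one model (field names illustrative):

class User: HandyJSON {
    var id: Int64!
    var nickname: String?
    var cache: String? // runtime-only state, not part of the JSON

    required init() {}

    func mapping(mapper: HelpingMapper) {
        // one mapper item drives both directions: 'user_id' on the JSON side
        mapper <<<
            self.id <-- "user_id"

        // excluded from both serialization and deserialization
        mapper >>> self.cache
    }
}

let user = User()
user.id = 42
user.nickname = "handy"
user.cache = "not serialized"
print(user.toJSONString()!) // e.g. {"user_id":42,"nickname":"handy"}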

FAQ

Q: Why is the mapping function not working for an inherited object?

A: You should define an empty mapping function in the superclass (the root class, if there is more than one layer) and override it in the subclass, as sketched below.

The same applies to the didFinishMapping function.
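
A minimal sketch of that pattern (class names illustrative):

class Animal: HandyJSON {
    var id: Int?

    required init() {}

    // empty mapping in the root class so subclasses can override it
    func mapping(mapper: HelpingMapper) {}
}

class Cat: Animal {
    var name: String?

    required init() {}

    override func mapping(mapper: HelpingMapper) {
        // subclass-specific rule: 'cat_name' in JSON maps to 'name'
        mapper <<<
            self.name <-- "cat_name"
    }
}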

Q: Why is my didSet/willSet not working?

A: Since HandyJSON assigns properties by writing values to memory directly, it does not trigger any property observers. You need to invoke the didSet/willSet logic explicitly after/before the deserialization.

Since version 1.8.0, however, HandyJSON handles dynamic properties through the KVC mechanism, which does trigger KVO. That means if you really need didSet/willSet, you can define your model as follows:

class BasicTypes: NSObject, HandyJSON {
    dynamic var int: Int = 0 {
        didSet {
            print("oldValue: ", oldValue)
        }
        willSet {
            print("newValue: ", newValue)
        }
    }

    public override required init() {}
}

In this situation, NSObject and dynamic are both needed.

In versions since 1.8.0, HandyJSON also offers a didFinishMapping function in which you can put observing logic.

class BasicTypes: HandyJSON {
    var int: Int?

    required init() {}

    func didFinishMapping() {
        print("you can fill some observing logic here")
    }
}

It may help.

Q: How do I support an enum property?

A: If your enum conforms to the RawRepresentable protocol, see Support Enum Property above, or use the EnumTransform:

enum EnumType: String {
    case type1, type2
}

class BasicTypes: HandyJSON {
    var type: EnumType?

    func mapping(mapper: HelpingMapper) {
        mapper <<<
            type <-- EnumTransform()
    }

    required init() {}
}

let object = BasicTypes()
object.type = EnumType.type2
print(object.toJSONString()!)
let mappedObject = BasicTypes.deserialize(from: object.toJSONString()!)!
print(mappedObject.type)

Otherwise, you should implement your custom mapping function.

enum EnumType {
    case type1, type2
}

class BasicTypes: HandyJSON {
    var type: EnumType?

    func mapping(mapper: HelpingMapper) {
        mapper <<<
            type <-- TransformOf<EnumType, String>(fromJSON: { (rawString) -> EnumType? in
                if let _str = rawString {
                    switch (_str) {
                    case "type1":
                        return EnumType.type1
                    case "type2":
                        return EnumType.type2
                    default:
                        return nil
                    }
                }
                return nil
            }, toJSON: { (enumType) -> String? in
                if let _type = enumType {
                    switch (_type) {
                    case EnumType.type1:
                        return "type1"
                    case EnumType.type2:
                        return "type2"
                    }
                }
                return nil
            })
    }

    required init() {}
}

Credit

  • reflection: After the first version, which used the Swift mirror mechanism, HandyJSON imported the reflection library and rewrote some of its code for inspecting class properties.
  • ObjectMapper: To make HandyJSON more compatible with the prevailing style, the mapping function supports the Transform design from ObjectMapper, and some test cases are imported from ObjectMapper.

Download Details:

Author: Alibaba
Source Code: https://github.com/alibaba/HandyJSON 
License: View license

#swift #serialization #json


Protobuf.js: Protocol Buffers for JavaScript (& TypeScript)

Protobuf.js

Protocol Buffers are a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more, originally designed at Google.

protobuf.js is a pure JavaScript implementation with TypeScript support for node.js and the browser. It's easy to use, blazingly fast and works out of the box with .proto files!

Contents

Installation
How to include protobuf.js in your project.

Usage
A brief introduction to using the toolset.

  • Valid Message
  • Toolset
     

Examples
A few examples to get you started.

  • Using .proto files
  • Using JSON descriptors
  • Using reflection only
  • Using custom classes
  • Using services
  • Usage with TypeScript

Additional documentation
A list of available documentation resources.

Performance
A few internals and a benchmark on performance.

Compatibility
Notes on compatibility regarding browsers and optional libraries.

Building
How to build the library and its components yourself.

Installation

node.js

$> npm install protobufjs [--save --save-prefix=~]

var protobuf = require("protobufjs");

The command line utility lives in the protobufjs-cli package and must be installed separately:

$> npm install protobufjs-cli [--save --save-prefix=~]

Note that this library's versioning scheme is not semver-compatible for historical reasons. For guaranteed backward compatibility, always depend on ~6.A.B instead of ^6.A.B (hence the --save-prefix above).
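
For example, a package.json dependency entry following that scheme (the version number is illustrative):

{
  "dependencies": {
    "protobufjs": "~6.11.3"
  }
}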

Browsers

Development:

<script src="//cdn.jsdelivr.net/npm/protobufjs@7.X.X/dist/protobuf.js"></script>

Production:

<script src="//cdn.jsdelivr.net/npm/protobufjs@7.X.X/dist/protobuf.min.js"></script>

Remember to replace the version tag with the exact release your project depends upon.

The library supports CommonJS and AMD loaders and also exports globally as protobuf.

Distributions

Where bundle size is a factor, there are additional stripped-down versions of the full library (~19kb gzipped) available that exclude certain functionality:

When working with JSON descriptors (i.e. generated by pbjs) and/or reflection only, see the light library (~16kb gzipped) that excludes the parser. CommonJS entry point is:

var protobuf = require("protobufjs/light");

When working with statically generated code only, see the minimal library (~6.5kb gzipped) that also excludes reflection. CommonJS entry point is:

var protobuf = require("protobufjs/minimal");

Distribution | Location
Full         | https://cdn.jsdelivr.net/npm/protobufjs/dist/
Light        | https://cdn.jsdelivr.net/npm/protobufjs/dist/light/
Minimal      | https://cdn.jsdelivr.net/npm/protobufjs/dist/minimal/

Usage

Because JavaScript is a dynamically typed language, protobuf.js introduces the concept of a valid message in order to provide the best possible performance (and, as a side product, proper typings):

Valid message

A valid message is an object (1) not missing any required fields and (2) exclusively composed of JS types understood by the wire format writer.

There are two possible types of valid messages and the encoder is able to work with both of these for convenience:

  • Message instances (explicit instances of message classes with default values on their prototype) always (have to) satisfy the requirements of a valid message by design and
  • Plain JavaScript objects that just so happen to be composed in a way satisfying the requirements of a valid message as well.

In a nutshell, the wire format writer understands the following types (the expected JS type for create/encode, and the conversion applied by fromObject):

  • s-/u-/int32, s-/fixed32: number (32 bit integer). fromObject: value | 0 if signed, value >>> 0 if unsigned.
  • s-/u-/int64, s-/fixed64: Long-like (optimal) or number (53 bit integer). fromObject: Long.fromValue(value) with long.js, parseInt(value, 10) otherwise.
  • float, double: number. fromObject: Number(value).
  • bool: boolean. fromObject: Boolean(value).
  • string: string. fromObject: String(value).
  • bytes: Uint8Array (optimal), Buffer (optimal under node) or Array.<number> (8 bit integers). fromObject: base64.decode(value) if a string; an object with non-zero .length is assumed to be buffer-like.
  • enum: number (32 bit integer). fromObject: looks up the numeric id if a string.
  • message: valid message. fromObject: Message.fromObject(value).

  • Explicit undefined and null are considered as not set if the field is optional.
  • Repeated fields are Array.<T>.
  • Map fields are Object.<string,T> with the key being the string representation of the respective value or an 8 characters long binary hash string for Long-likes.
  • Types marked as optimal provide the best performance because no conversion step (i.e. number to low and high bits or base64 string to buffer) is required.

Toolset

With that in mind and again for performance reasons, each message class provides a distinct set of methods with each method doing just one thing. This avoids unnecessary assertions / redundant operations where performance is a concern but also forces a user to perform verification (of plain JavaScript objects that might just so happen to be a valid message) explicitly where necessary - for example when dealing with user input.

Note that Message below refers to any message class.

Message.verify(message: Object): null|string
verifies that a plain JavaScript object satisfies the requirements of a valid message and thus can be encoded without issues. Instead of throwing, it returns the error message as a string, if any.

var payload = "invalid (not an object)";
var err = AwesomeMessage.verify(payload);
if (err)
  throw Error(err);

Message.encode(message: Message|Object [, writer: Writer]): Writer
encodes a message instance or valid plain JavaScript object. This method does not implicitly verify the message and it's up to the user to make sure that the payload is a valid message.

var buffer = AwesomeMessage.encode(message).finish();

Message.encodeDelimited(message: Message|Object [, writer: Writer]): Writer
works like Message.encode but additionally prepends the length of the message as a varint.

Message.decode(reader: Reader|Uint8Array): Message
decodes a buffer to a message instance. If required fields are missing, it throws a util.ProtocolError with an instance property set to the so far decoded message. If the wire format is invalid, it throws an Error.

try {
  var decodedMessage = AwesomeMessage.decode(buffer);
} catch (e) {
    if (e instanceof protobuf.util.ProtocolError) {
      // e.instance holds the so far decoded message with missing required fields
    } else {
      // wire format is invalid
    }
}

Message.decodeDelimited(reader: Reader|Uint8Array): Message
works like Message.decode but additionally reads the length of the message prepended as a varint.
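
A minimal sketch of a delimited round trip (the buffer holds a varint length prefix followed by the message bytes):

var buffer  = AwesomeMessage.encodeDelimited(message).finish();
var decoded = AwesomeMessage.decodeDelimited(buffer); // reads the varint length first, then the message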

Message.create(properties: Object): Message
creates a new message instance from a set of properties that satisfy the requirements of a valid message. Where applicable, it is recommended to prefer Message.create over Message.fromObject because it doesn't perform possibly redundant conversion.

var message = AwesomeMessage.create({ awesomeField: "AwesomeString" });

Message.fromObject(object: Object): Message
converts any non-valid plain JavaScript object to a message instance using the conversion steps outlined within the table above.

var message = AwesomeMessage.fromObject({ awesomeField: 42 });
// converts awesomeField to a string

Message.toObject(message: Message [, options: ConversionOptions]): Object
converts a message instance to an arbitrary plain JavaScript object for interoperability with other libraries or storage. The resulting plain JavaScript object might still satisfy the requirements of a valid message depending on the actual conversion options specified, but most of the time it does not.

var object = AwesomeMessage.toObject(message, {
  enums: String,  // enums as string names
  longs: String,  // longs as strings (requires long.js)
  bytes: String,  // bytes as base64 encoded strings
  defaults: true, // includes default values
  arrays: true,   // populates empty arrays (repeated fields) even if defaults=false
  objects: true,  // populates empty objects (map fields) even if defaults=false
  oneofs: true    // includes virtual oneof fields set to the present field's name
});

For reference, the following diagram aims to display relationships between the different methods and the concept of a valid message:

Toolset Diagram

In other words: verify indicates that calling create or encode directly on the plain object will succeed and result in a valid message. fromObject, on the other hand, performs conversion from a broader range of plain objects to create valid messages. A small sketch of the distinction follows.
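
A minimal sketch contrasting the two paths (AwesomeMessage as above):

var payload = { awesomeField: "hello" };
var err = AwesomeMessage.verify(payload);
if (!err) {
    // payload already satisfies the valid message requirements
    var message = AwesomeMessage.create(payload);
}

// fromObject accepts a broader range of plain objects and converts as needed
var converted = AwesomeMessage.fromObject({ awesomeField: 42 }); // 42 becomes "42"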

Examples

Using .proto files

It is possible to load existing .proto files using the full library, which parses and compiles the definitions to ready to use (reflection-based) message classes:

// awesome.proto
package awesomepackage;
syntax = "proto3";

message AwesomeMessage {
    string awesome_field = 1; // becomes awesomeField
}
protobuf.load("awesome.proto", function(err, root) {
    if (err)
        throw err;

    // Obtain a message type
    var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

    // Exemplary payload
    var payload = { awesomeField: "AwesomeString" };

    // Verify the payload if necessary (i.e. when possibly incomplete or invalid)
    var errMsg = AwesomeMessage.verify(payload);
    if (errMsg)
        throw Error(errMsg);

    // Create a new message
    var message = AwesomeMessage.create(payload); // or use .fromObject if conversion is necessary

    // Encode a message to an Uint8Array (browser) or Buffer (node)
    var buffer = AwesomeMessage.encode(message).finish();
    // ... do something with buffer

    // Decode an Uint8Array (browser) or Buffer (node) to a message
    var message = AwesomeMessage.decode(buffer);
    // ... do something with message

    // If the application uses length-delimited buffers, there is also encodeDelimited and decodeDelimited.

    // Maybe convert the message back to a plain object
    var object = AwesomeMessage.toObject(message, {
        longs: String,
        enums: String,
        bytes: String,
        // see ConversionOptions
    });
});

Additionally, promise syntax can be used by omitting the callback, if preferred:

protobuf.load("awesome.proto")
    .then(function(root) {
       ...
    });

Using JSON descriptors

The library utilizes JSON descriptors that are equivalent to a .proto definition. For example, the following is identical to the .proto definition seen above:

// awesome.json
{
  "nested": {
    "awesomepackage": {
      "nested": {
        "AwesomeMessage": {
          "fields": {
            "awesomeField": {
              "type": "string",
              "id": 1
            }
          }
        }
      }
    }
  }
}

JSON descriptors closely resemble the internal reflection structure:

Type (T)         | Extends          | Type-specific properties
ReflectionObject |                  | options
Namespace        | ReflectionObject | nested
Root             | Namespace        | nested
Type             | Namespace        | fields
Enum             | ReflectionObject | values
Field            | ReflectionObject | rule, type, id
MapField         | Field            | keyType
OneOf            | ReflectionObject | oneof (array of field names)
Service          | Namespace        | methods
Method           | ReflectionObject | type, requestType, responseType, requestStream, responseStream
  • Bold properties are required. Italic types are abstract.
  • T.fromJSON(name, json) creates the respective reflection object from a JSON descriptor
  • T#toJSON() creates a JSON descriptor from the respective reflection object (its name is used as the key within the parent)

Exclusively using JSON descriptors instead of .proto files enables the use of just the light library (the parser isn't required in this case).

A JSON descriptor can either be loaded the usual way:

protobuf.load("awesome.json", function(err, root) {
    if (err) throw err;

    // Continue at "Obtain a message type" above
});

Or it can be loaded inline:

var jsonDescriptor = require("./awesome.json"); // exemplary for node

var root = protobuf.Root.fromJSON(jsonDescriptor);

// Continue at "Obtain a message type" above

Using reflection only

Both the full and the light library include full reflection support. One could, for example, define the .proto definitions seen in the examples above using just reflection:

...
var Root  = protobuf.Root,
    Type  = protobuf.Type,
    Field = protobuf.Field;

var AwesomeMessage = new Type("AwesomeMessage").add(new Field("awesomeField", 1, "string"));

var root = new Root().define("awesomepackage").add(AwesomeMessage);

// Continue at "Create a new message" above
...

Detailed information on the reflection structure is available within the API documentation.

Using custom classes

Message classes can also be extended with custom functionality and it is also possible to register a custom constructor with a reflected message type:

...

// Define a custom constructor
function AwesomeMessage(properties) {
    // custom initialization code
    ...
}

// Register the custom constructor with its reflected type (*)
root.lookupType("awesomepackage.AwesomeMessage").ctor = AwesomeMessage;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above

(*) Besides referencing its reflected type through AwesomeMessage.$type and AwesomeMessage#$type, the respective custom class is automatically populated with:

  • AwesomeMessage.create
  • AwesomeMessage.encode and AwesomeMessage.encodeDelimited
  • AwesomeMessage.decode and AwesomeMessage.decodeDelimited
  • AwesomeMessage.verify
  • AwesomeMessage.fromObject, AwesomeMessage.toObject and AwesomeMessage#toJSON

Afterwards, decoded messages of this type are instanceof AwesomeMessage.

Alternatively, it is also possible to reuse and extend the internal constructor if custom initialization code is not required:

...

// Reuse the internal constructor
var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage").ctor;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above

Using services

The library also supports consuming services but it doesn't make any assumptions about the actual transport channel. Instead, a user must provide a suitable RPC implementation, which is an asynchronous function that takes the reflected service method, the binary request and a node-style callback as its parameters:

function rpcImpl(method, requestData, callback) {
    // perform the request using an HTTP request or a WebSocket for example
    var responseData = ...;
    // and call the callback with the binary response afterwards:
    callback(null, responseData);
}

Below is a working example of an implementation using the grpc npm package.

const grpc = require('grpc')

const Client = grpc.makeGenericClientConstructor({})
const client = new Client(
  grpcServerUrl,
  grpc.credentials.createInsecure()
)

const rpcImpl = function(method, requestData, callback) {
  client.makeUnaryRequest(
    method.name,
    arg => arg,
    arg => arg,
    requestData,
    callback
  )
}

Example:

// greeter.proto
syntax = "proto3";

service Greeter {
    rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
    string name = 1;
}

message HelloReply {
    string message = 1;
}
...
var Greeter = root.lookup("Greeter");
var greeter = Greeter.create(/* see above */ rpcImpl, /* request delimited? */ false, /* response delimited? */ false);

greeter.sayHello({ name: 'you' }, function(err, response) {
    console.log('Greeting:', response.message);
});

Services also support promises:

greeter.sayHello({ name: 'you' })
    .then(function(response) {
        console.log('Greeting:', response.message);
    });

There is also an example for streaming RPC.

Note that the service API is meant for clients. Implementing a server-side endpoint pretty much always requires transport channel (i.e. http, websocket, etc.) specific code with the only common denominator being that it decodes and encodes messages.

Usage with TypeScript

The library ships with its own type definitions and modern editors like Visual Studio Code will automatically detect and use them for code completion.

The npm package depends on @types/node because of Buffer and @types/long because of Long. If you are not building for node and/or not using long.js, it should be safe to exclude them manually.
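
One way to exclude them, as a sketch, is an explicit types list in tsconfig.json so that neither @types/node nor @types/long is picked up automatically:

{
  "compilerOptions": {
    "types": []
  }
}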

Using the JS API

The API shown above works pretty much the same with TypeScript. However, because everything is typed, accessing fields on instances of dynamically generated message classes requires either using bracket-notation (i.e. message["awesomeField"]) or explicit casts. Alternatively, it is possible to use a typings file generated for its static counterpart.

import { load } from "protobufjs"; // respectively "./node_modules/protobufjs"

load("awesome.proto", function(err, root) {
  if (err)
    throw err;

  // example code
  const AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

  let message = AwesomeMessage.create({ awesomeField: "hello" });
  console.log(`message = ${JSON.stringify(message)}`);

  let buffer = AwesomeMessage.encode(message).finish();
  console.log(`buffer = ${Array.prototype.toString.call(buffer)}`);

  let decoded = AwesomeMessage.decode(buffer);
  console.log(`decoded = ${JSON.stringify(decoded)}`);
});

Using generated static code

If you generated static code to bundle.js using the CLI and its type definitions to bundle.d.ts, then you can just do:

import { AwesomeMessage } from "./bundle.js";

// example code
let message = AwesomeMessage.create({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);

Using decorators

The library also includes an early implementation of decorators.

Note that decorators are an experimental feature in TypeScript and that declaration order is important depending on the JS target. For example, @Field.d(2, AwesomeArrayMessage) requires that AwesomeArrayMessage has been defined earlier when targeting ES5.

import { Message, Type, Field, OneOf } from "protobufjs/light"; // respectively "./node_modules/protobufjs/light.js"

export class AwesomeSubMessage extends Message<AwesomeSubMessage> {

  @Field.d(1, "string")
  public awesomeString: string;

}

export enum AwesomeEnum {
  ONE = 1,
  TWO = 2
}

@Type.d("SuperAwesomeMessage")
export class AwesomeMessage extends Message<AwesomeMessage> {

  @Field.d(1, "string", "optional", "awesome default string")
  public awesomeField: string;

  @Field.d(2, AwesomeSubMessage)
  public awesomeSubMessage: AwesomeSubMessage;

  @Field.d(3, AwesomeEnum, "optional", AwesomeEnum.ONE)
  public awesomeEnum: AwesomeEnum;

  @OneOf.d("awesomeSubMessage", "awesomeEnum")
  public which: string;

}

// example code
let message = new AwesomeMessage({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);

Supported decorators are:

Type.d(typeName?: string)   (optional)
annotates a class as a protobuf message type. If typeName is not specified, the constructor's runtime function name is used for the reflected type.

Field.d<T>(fieldId: number, fieldType: string | Constructor<T>, fieldRule?: "optional" | "required" | "repeated", defaultValue?: T)
annotates a property as a protobuf field with the specified id and protobuf type.

MapField.d<T extends { [key: string]: any }>(fieldId: number, fieldKeyType: string, fieldValueType: string | Constructor<{}>)
annotates a property as a protobuf map field with the specified id, protobuf key and value type.

OneOf.d<T extends string>(...fieldNames: string[])
annotates a property as a protobuf oneof covering the specified fields.

Other notes:

  • Decorated types reside in protobuf.roots["decorated"] using a flat structure, so no duplicate names.
  • Enums are copied to a reflected enum with a generic name on decorator evaluation because referenced enum objects have no runtime name the decorator could use.
  • Default values must be specified as arguments to the decorator instead of using a property initializer for proper prototype behavior.
  • Property names on decorated classes must not be renamed on compile time (i.e. by a minifier) because decorators just receive the original field name as a string.

ProTip! Not as pretty, but you can use decorators in plain JavaScript as well.

Additional documentation

Protocol Buffers

protobuf.js

Community

Performance

The package includes a benchmark that compares protobuf.js performance to native JSON (as far as this is possible) and Google's JS implementation. On an i7-2600K running node 6.9.1 it yields:

benchmarking encoding performance ...

protobuf.js (reflect) x 541,707 ops/sec ±1.13% (87 runs sampled)
protobuf.js (static) x 548,134 ops/sec ±1.38% (89 runs sampled)
JSON (string) x 318,076 ops/sec ±0.63% (93 runs sampled)
JSON (buffer) x 179,165 ops/sec ±2.26% (91 runs sampled)
google-protobuf x 74,406 ops/sec ±0.85% (86 runs sampled)

   protobuf.js (static) was fastest
  protobuf.js (reflect) was 0.9% ops/sec slower (factor 1.0)
          JSON (string) was 41.5% ops/sec slower (factor 1.7)
          JSON (buffer) was 67.6% ops/sec slower (factor 3.1)
        google-protobuf was 86.4% ops/sec slower (factor 7.3)

benchmarking decoding performance ...

protobuf.js (reflect) x 1,383,981 ops/sec ±0.88% (93 runs sampled)
protobuf.js (static) x 1,378,925 ops/sec ±0.81% (93 runs sampled)
JSON (string) x 302,444 ops/sec ±0.81% (93 runs sampled)
JSON (buffer) x 264,882 ops/sec ±0.81% (93 runs sampled)
google-protobuf x 179,180 ops/sec ±0.64% (94 runs sampled)

  protobuf.js (reflect) was fastest
   protobuf.js (static) was 0.3% ops/sec slower (factor 1.0)
          JSON (string) was 78.1% ops/sec slower (factor 4.6)
          JSON (buffer) was 80.8% ops/sec slower (factor 5.2)
        google-protobuf was 87.0% ops/sec slower (factor 7.7)

benchmarking combined performance ...

protobuf.js (reflect) x 275,900 ops/sec ±0.78% (90 runs sampled)
protobuf.js (static) x 290,096 ops/sec ±0.96% (90 runs sampled)
JSON (string) x 129,381 ops/sec ±0.77% (90 runs sampled)
JSON (buffer) x 91,051 ops/sec ±0.94% (90 runs sampled)
google-protobuf x 42,050 ops/sec ±0.85% (91 runs sampled)

   protobuf.js (static) was fastest
  protobuf.js (reflect) was 4.7% ops/sec slower (factor 1.0)
          JSON (string) was 55.3% ops/sec slower (factor 2.2)
          JSON (buffer) was 68.6% ops/sec slower (factor 3.2)
        google-protobuf was 85.5% ops/sec slower (factor 6.9)

These results are achieved by

  • generating type-specific encoders, decoders, verifiers and converters at runtime
  • configuring the reader/writer interface according to the environment
  • using node-specific functionality where beneficial and, of course
  • avoiding unnecessary operations through splitting up the toolset.

You can also run the benchmark ...

$> npm run bench

and the profiler yourself (the latter requires a recent version of node):

$> npm run prof <encode|decode|encode-browser|decode-browser> [iterations=10000000]

Note that as of this writing, the benchmark suite performs significantly slower on node 7.2.0 compared to 6.9.1 for reasons that remain unclear.

Compatibility

  • Works in all modern and not-so-modern browsers except IE8.
  • Because the internals of this package do not rely on google/protobuf/descriptor.proto, options are parsed and presented literally.
  • If typed arrays are not supported by the environment, plain arrays will be used instead.
  • Support for pre-ES5 environments (except IE8) can be achieved by using a polyfill.
  • Support for Content Security Policy-restricted environments (like Chrome extensions without unsafe-eval) can be achieved by generating and using static code instead.
  • If a proper way to work with 64 bit values (uint64, int64 etc.) is required, just install long.js alongside this library. All 64 bit numbers will then be returned as a Long instance instead of a possibly unsafe JavaScript number.
  • For descriptor.proto interoperability, see ext/descriptor

Building

To build the library or its components yourself, clone it from GitHub and install the development dependencies:

$> git clone https://github.com/dcodeIO/protobuf.js.git
$> cd protobuf.js
$> npm install

Building the respective development and production versions with their respective source maps to dist/:

$> npm run build

Building the documentation to docs/:

$> npm run docs

Building the TypeScript definition to index.d.ts:

$> npm run types

Browserify integration

By default, protobuf.js integrates into any browserify build-process without requiring any optional modules. Hence:

If int64 support is required, explicitly require the long module somewhere in your project as it will be excluded otherwise. This assumes that a global require function is present that protobuf.js can call to obtain the long module.
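
For example, a single side-effect require anywhere in your sources is enough to make browserify include it in the bundle:

// ensure browserify bundles long.js so protobuf.js can pick it up at runtime
require("long");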

If there is no global require function present after bundling, it's also possible to assign the long module programmatically:

var Long = ...;

protobuf.util.Long = Long;
protobuf.configure();

If you have any special requirements, there is the bundler for reference.

Download Details:

Author: Protobufjs
Source Code: https://github.com/protobufjs/protobuf.js 
License: View license

#javascript #serialization #typescript 


SerialPorts.jl: SerialPort IO Streams in Julia Backed By PySerial

SerialPorts

SerialPorts.jl lets you work with devices over serial communication with Julia. It is designed to mimic regular file IO as in the Base Julia library.

This package requires PySerial, which is used through PyCall. Conda is used as a fallback so cross-platform installation is simple.

Check out LibSerialPort.jl if you want to avoid the Python dependency.

Quick Start

A SerialPort has a minimal API similar to IOStream in Julia.

A brief example:

using SerialPorts
s = SerialPort("/dev/ttyACM1", 250000)
write(s, "Hello World!\n")
close(s)

open, close, read, write, bytesavailable, and readavailable are all defined for SerialPort.

In order to see the attached serial devices, use list_serialports().

The Arduino submodule provides functionality for manipulating Arduinos over serial. SerialPorts.Arduino.reset(s::SerialPort) will reset an Arduino.
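
A quick sketch combining these pieces (the device path and baud rate are illustrative):

using SerialPorts

println(list_serialports())   # enumerate attached serial devices

s = SerialPort("/dev/ttyACM1", 115200)
write(s, "ping\n")
sleep(0.1)                    # give the device a moment to respond
if bytesavailable(s) > 0
    println(readavailable(s)) # read whatever has arrived
end
close(s)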

Download Details:

Author: JuliaIO
Source Code: https://github.com/JuliaIO/SerialPorts.jl 
License: View license

#julia #devices #serialization 


10 Best Golang Libraries and Tools for Binary Serialization

In today's post we will learn about 10 Best Golang Libraries and tools for Binary Serialization. 

What is binary serialization?

Binary serialization allows modifying private members inside an object, and therefore changing its state. Because of this, serialization frameworks that operate on the public API surface, like .NET's System.Text.Json, are generally recommended instead.

Table of contents:

  • Asn1 - Asn.1 BER and DER encoding library for golang.
  • Bambam - generator for Cap'n Proto schemas from go.
  • Bel - Generate TypeScript interfaces from Go structs/interfaces. Useful for JSON RPC.
  • Binstruct - Golang binary decoder for mapping data into the structure.
  • Cbor - Small, safe, and easy CBOR encoding and decoding library.
  • Colfer - Code generation for the Colfer binary format.
  • Csvutil - High Performance, idiomatic CSV record encoding and decoding to native Go structures.
  • Elastic - Convert slices, maps or any other unknown value across different types at run-time, no matter what.
  • Fixedwidth - Fixed-width text formatting (UTF-8 supported).
  • Fwencoder - Fixed width file parser (encoding and decoding library) for Go.
  • Go-capnproto - Cap'n Proto library and parser for go.

1 - Asn1:

Asn.1 BER and DER encoding library for golang.

-- import "github.com/PromonLogicalis/asn1"

Package asn1 implements encoding and decoding of ASN.1 data structures using both the Basic Encoding Rules (BER) and its subset, the Distinguished Encoding Rules (DER).

This package is highly inspired by the Go standard package "encoding/asn1" while supporting additional features such as BER encoding and decoding and ASN.1 CHOICE types.

By default and for convenience, the package uses DER for encoding and BER for decoding. However, it's possible to use a Context object to set the desired encoding and decoding rules as well as other options.

Restrictions:

  • BER allows STRING types, such as OCTET STRING and BIT STRING, to be encoded as constructed types containing inner elements that should be concatenated to form the complete string. The package does not support that, but in the future decoding of constructed strings should be included.

Usage

func Decode

func Decode(data []byte, obj interface{}) (rest []byte, err error)

Decode parses the given BER data into obj. The argument obj should be a reference to the value that will hold the parsed data. Decode uses a default Context and is equivalent to:

rest, err := asn1.NewContext().Decode(data, &obj)

func DecodeWithOptions

func DecodeWithOptions(data []byte, obj interface{}, options string) (rest []byte, err error)

DecodeWithOptions parses the given BER data into obj using the additional options. The argument obj should be a reference to the value that will hold the parsed data. It uses a default Context and is equivalent to:

rest, err := asn1.NewContext().DecodeWithOptions(data, &obj, options)

func Encode

func Encode(obj interface{}) (data []byte, err error)

Encode returns the DER encoding of obj. Encode uses a default Context and it's equivalent to:

data, err = asn1.NewContext().Encode(obj)

func EncodeWithOptions

func EncodeWithOptions(obj interface{}, options string) (data []byte, err error)

EncodeWithOptions returns the DER encoding of obj using additional options. EncodeWithOptions uses a default Context and it's equivalent to:

data, err = asn1.NewContext().EncodeWithOptions(obj, options)
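
Putting the two package-level helpers together, a minimal round trip might look like this (a sketch based on the signatures above; the struct is illustrative):

package main

import (
	"fmt"
	"log"

	"github.com/PromonLogicalis/asn1"
)

type Person struct {
	Name string
	Age  int
}

func main() {
	// Encode to DER using the package-level default Context.
	data, err := asn1.Encode(Person{Name: "Ana", Age: 34})
	if err != nil {
		log.Fatal(err)
	}

	// Decode the bytes back into a struct; rest holds any trailing bytes.
	var decoded Person
	rest, err := asn1.Decode(data, &decoded)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v (rest: %d bytes)\n", decoded, len(rest))
}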

type Choice

type Choice struct {
	Type    reflect.Type
	Options string
}

Choice represents one option available for a CHOICE element.

type Context

type Context struct {
}

Context keeps options that affect the ASN.1 encoding and decoding

Use the NewContext() function to create a new Context instance:

ctx := asn1.NewContext()
// Set options, e.g.:
ctx.SetDer(true, true)
// And call the decode or encode functions:
bytes, err := ctx.EncodeWithOptions(value, "explicit,application,tag:5")
...

func NewContext

func NewContext() *Context

NewContext creates and initializes a new context. The returned Context does not contain any registered choices and is set to DER encoding and BER decoding.

func (*Context) AddChoice

func (ctx *Context) AddChoice(choice string, entries []Choice) error

AddChoice registers a list of types as options to a given choice.

The string choice refers to a choice name defined in an element via additional options for DecodeWithOptions and EncodeWithOptions, or via struct tags.

For example, considering that a field "Value" can be an INTEGER or an OCTET STRING indicating two types of errors, each error with a different tag number, the following can be used:

// Error types
type SimpleError string
type ComplexError string
// The main object
type SomeSequence struct {
	// ...
	Value	interface{}	`asn1:"choice:value"`
	// ...
}
// A Context with the registered choices
ctx := asn1.NewContext()
ctx.AddChoice("value", []asn1.Choice {
	{
		Type: reflect.TypeOf(int(0)),
	},
	{
		Type: reflect.TypeOf(SimpleError("")),
		Options: "tag:1",
	},
	{
		Type: reflect.TypeOf(ComplexError("")),
		Options: "tag:2",
	},
})

View on Github

2 - Bambam:

Generator for Cap'n Proto schemas from go.

bambam: auto-generate capnproto schema from your golang source files.

Adding capnproto serialization to an existing Go project used to mean writing a lot of boilerplate.

Not anymore.

Given a set of golang (Go) source files, bambam will generate a capnproto schema. Even better: bambam will also generate translation functions to readily convert between your golang structs and the new capnproto structs.

prereqs

You'll need a recent (up-to-date) version of go-capnproto. If you installed go-capnproto before, you'll want to update it [>= f9f239fc7f5ad9611cf4e88b10080a4b47c3951d / 16 Nov 2014].

Capnproto and go-capnproto should both be installed and on your PATH.

To install: run make. This lets us record the git commit in LASTGITCOMMITHASH to provide accurate version info. Otherwise you'll get an 'undefined: LASTGITCOMMITHASH' failure.

# be sure go-capnproto and capnpc are installed first.

$ go get -t github.com/glycerine/bambam  # the -t pulls in the test dependencies.

# ignore the initial compile error about 'undefined: LASTGITCOMMITHASH'. `make` will fix that.
$ cd $GOPATH/src/github.com/glycerine/bambam
$ make  # runs tests, build if all successful
$ go install

use

use: bambam -o outdir -p package myGoSourceFile.go myGoSourceFile2.go ...
     # Bambam makes it easy to use Capnproto serialization[1] from Go.
     # Bambam reads .go files and writes a .capnp schema and Go bindings.
     # options:
     #   -o="odir" specifies the directory to write to (created if need be).
     #   -p="main" specifies the package header to write (e.g. main, mypkg).
     #   -X exports private fields of Go structs. Default only maps public fields.
     #   -version   shows build version with git commit hash
     #   -OVERWRITE modify .go files in-place, adding capid tags (write to -o dir by default).
     # required: at least one .go source file for struct definitions. Must be last, after options.
     #
     # [1] https://github.com/glycerine/go-capnproto 

demo

See rw.go.txt. To see all the files compiled together in one project: (a) comment out the defer in the rw_test.go file; (b) run go test; (c) cd testdir_* and look at the sample project files there; (d) run go build in the testdir_ to rebuild the binary. Notice that you will need all three .go files to successfully build. The two .capnp files should be kept so you can read your data from any capnp-supported language. Here's what is what in that example directory:

rw.go             # your original go source file (in this test)
translateCapn.go  # generated by bambam after reading rw.go
schema.capnp      # generated by bambam after reading rw.go
schema.capnp.go   # generated by `capnpc -ogo schema.capnp` <- you have to do this yourself or in your Makefile.
go.capnp          # always necessary boilerplate to let capnpc work, just copy it from bambam/go.capnp to your build dir.

View on Github

3 - Bel:

Generate TypeScript interfaces from Go structs/interfaces. Useful for JSON RPC.

Getting started

bel is easy to use. There are two steps involved: extract the Typescript information, and generate the Typescript code.

package main

import (
    "github.com/32leaves/bel"
)

type Demo struct {
    Foo string `json:"foo,omitempty"`
    Bar uint32
    Baz struct {
        FirstField  bool
        SecondField *string
    }
}

func main() {
    ts, err := bel.Extract(Demo{})
    if err != nil {
        panic(err)
    }

    err = bel.Render(ts)
    if err != nil {
        panic(err)
    }
}

produces something akin to (sans formatting):

export interface Demo {
    foo?: string
    Bar: number
    Baz: {
        FirstField: boolean
        SecondField: string
    }
}

Converting interfaces

You can also convert Golang interfaces to TypeScript interfaces. This is particularly handy for JSON RPC:

package main

import (
    "os"
    "github.com/32leaves/bel"
)

type DemoService interface {
    SayHello(name, msg string) (string, error)
}

func main() {
    ts, err := bel.Extract((*DemoService)(nil))
    if err != nil {
        panic(err)
    }

    err = bel.Render(ts)
    if err != nil {
        panic(err)
    }
}

produces something akin to (sans formatting):

export interface DemoService {
    SayHello(arg0: string, arg1: string): string
}

View on Github

4 - Binstruct:

Golang binary decoder for mapping data into the structure.

Install

go get -u github.com/ghostiam/binstruct

Examples

ZIP decoder
PNG decoder

Use

For struct

From file or other io.ReadSeeker:

package main

import (
	"encoding/binary"
	"fmt"
	"log"
	"os"

	"github.com/ghostiam/binstruct"
)

func main() {
	file, err := os.Open("testdata/file.bin")
	if err != nil {
		log.Fatal(err)
	}

	type dataStruct struct {
		Arr []int16 `bin:"len:4"`
	}

	var actual dataStruct
	decoder := binstruct.NewDecoder(file, binary.BigEndian)
	// decoder.SetDebug(true) // you can enable the output of bytes read for debugging
	err = decoder.Decode(&actual)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%+v", actual)

	// Output:
	// {Arr:[1 2 3 4]}
}

From bytes

package main

import (
	"fmt"
	"log"
	
	"github.com/ghostiam/binstruct"
)

func main() {
	data := []byte{
		0x00, 0x01,
		0x00, 0x02,
		0x00, 0x03,
		0x00, 0x04,
	}

	type dataStruct struct {
		Arr []int16 `bin:"len:4"`
	}

	var actual dataStruct
	err := binstruct.UnmarshalBE(data, &actual) // UnmarshalLE() or Unmarshal()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("%+v", actual)

	// Output: {Arr:[1 2 3 4]}
}

View on Github

5 - Cbor:

Small, safe, and easy CBOR encoding and decoding library.

fxamacker/cbor is a modern CBOR codec in Go. It's like encoding/json for CBOR with time-saving features. It balances security, usability, speed, data size, program size, and other competing factors.

Features include CBOR tags, duplicate map key detection, float64→32→16, and Go struct tags (toarray, keyasint, omitempty). API is close to encoding/json plus predefined CBOR options like Core Deterministic Encoding, Preferred Serialization, CTAP2, etc.

Using CBOR Preferred Serialization with Go struct tags (toarray, keyasint, omitempty) reduces programming effort and creates smaller encoded data size.

Microsoft Corporation had NCC Group produce a security assessment (PDF) which includes portions of this library in its scope.

fxamacker/cbor has 98% coverage and is fuzz tested. It won't exhaust RAM decoding 9 bytes of bad CBOR data. It's used by Arm Ltd., Berlin Institute of Health at Charité, Chainlink, ConsenSys, Dapper Labs, Duo Labs (cisco), EdgeX Foundry, Mozilla, Netherlands (govt), Oasis Labs, Taurus SA, Teleport, and others.

Install with go get github.com/fxamacker/cbor/v2 and import "github.com/fxamacker/cbor/v2".
See Quick Start to save time.
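
As a quick taste of the encoding/json-like API, a minimal round trip might look like this (a sketch; the struct and tags are illustrative):

package main

import (
	"fmt"
	"log"

	"github.com/fxamacker/cbor/v2"
)

type Animal struct {
	Name string `cbor:"1,keyasint"` // keyasint encodes the map key as integer 1
	Legs int    `cbor:"2,keyasint"`
}

func main() {
	// Marshal/Unmarshal mirror encoding/json.
	b, err := cbor.Marshal(Animal{Name: "cat", Legs: 4})
	if err != nil {
		log.Fatal(err)
	}

	var a Animal
	if err := cbor.Unmarshal(b, &a); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", a)
}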

What is CBOR?

CBOR is a concise binary data format inspired by JSON and MessagePack. CBOR is defined in RFC 8949 (December 2020) which obsoletes RFC 7049 (October 2013).

CBOR is an Internet Standard by IETF. It's used in other standards like WebAuthn by W3C, COSE (RFC 8152), CWT (RFC 8392), CDDL (RFC 8610) and more.

Reasons for choosing CBOR vary by project. Some projects replaced protobuf, encoding/json, encoding/gob, etc. with CBOR. For example, by replacing protobuf with CBOR in gRPC.

Why fxamacker/cbor?

fxamacker/cbor balances competing factors such as speed, size, safety, usability, and maintainability.

Killer features include Go struct tags like toarray, keyasint, etc. They reduce encoded data size, improve speed, and reduce programming effort. For example, toarray automatically translates a Go struct to/from a CBOR array.

Modern CBOR features include Core Deterministic Encoding and Preferred Encoding. Other features include CBOR tags, big.Int, float64→32→16, an API like encoding/json, and more.

Security features include the option to detect duplicate map keys and options to set various max limits. And it's designed to make concurrent use of CBOR options easy and free from side-effects.

To prevent crashes, it has been fuzz-tested since before release 1.0 and code coverage is kept above 98%.

For portability and safety, it avoids using unsafe, which makes it portable and protected by Go1's compatibility guidelines.

For performance, it uses safe optimizations. When used properly, fxamacker/cbor can be faster than CBOR codecs that rely on unsafe. However, speed is only one factor and should be considered together with other competing factors.

CBOR Security

fxamacker/cbor is secure. It rejects malformed CBOR data and has an option to detect duplicate map keys. It doesn't crash when decoding bad CBOR data. It has extensive tests, coverage-guided fuzzing, data validation, and avoids Go's unsafe package.

Decoding 9 or 10 bytes of malformed CBOR data shouldn't exhaust memory. For example,
[]byte{0x9B, 0x00, 0x00, 0x42, 0xFA, 0x42, 0xFA, 0x42, 0xFA, 0x42}

Codec                  | Decode bad 10 bytes to interface{}         | Decode bad 10 bytes to []byte
fxamacker/cbor 1.0-2.3 | 49.44 ns/op, 24 B/op, 2 allocs/op*         | 51.93 ns/op, 32 B/op, 2 allocs/op*
ugorji/go 1.2.6        | ⚠️ 45021 ns/op, 262852 B/op, 7 allocs/op   | 💥 runtime: out of memory: cannot allocate
ugorji/go 1.1-1.1.7    | 💥 runtime: out of memory: cannot allocate | 💥 runtime: out of memory: cannot allocate

*Speed and memory are for latest codec version listed in the row (compiled with Go 1.17.5).

fxamacker/cbor CBOR safety settings include: MaxNestedLevels, MaxArrayElements, MaxMapPairs, and IndefLength.

For more info, see:

View on Github

6 - Colfer:

Code generation for the Colfer binary format.

Colfer is a binary serialization format optimized for speed and size.

The project's compiler colf(1) generates source code from schema definitions to marshal and unmarshal data structures.

This is free and unencumbered software released into the public domain. The format is inspired by Protocol Buffers.

Language Support

  • C, ISO/IEC 9899:2011 compliant a.k.a. C11, C++ compatible
  • Go, a.k.a. golang
  • Java, Android compatible
  • JavaScript, a.k.a. ECMAScript, NodeJS compatible

Features

  • Simple and straightforward in use
  • No dependencies other than the core library
  • Both faster and smaller than the competition
  • Robust against malicious input
  • Maximum of 127 fields per data structure
  • No support for enumerations
  • Framed; suitable for concatenation/streaming

TODO's

  • Rust and Python support
  • Protocol revision

Use

Download a prebuilt compiler or run go get -u github.com/pascaldekloe/colfer/cmd/colf to make one yourself. Homebrew users can also brew install colfer.

The command prints its own manual when invoked without arguments.

NAME
	colf - compile Colfer schemas

SYNOPSIS
	colf [-h]
	colf [-vf] [-b directory] [-p package] \
		[-s expression] [-l expression] C [file ...]
	colf [-vf] [-b directory] [-p package] [-t files] \
		[-s expression] [-l expression] Go [file ...]
	colf [-vf] [-b directory] [-p package] [-t files] \
		[-x class] [-i interfaces] [-c file] \
		[-s expression] [-l expression] Java [file ...]
	colf [-vf] [-b directory] [-p package] \
		[-s expression] [-l expression] JavaScript [file ...]

DESCRIPTION
	The output is source code for either C, Go, Java or JavaScript.

	For each operand that names a file of a type other than
	directory, colf reads the content as schema input. For each
	named directory, colf reads all files with a .colf extension
	within that directory. If no operands are given, the contents of
	the current directory are used.

	A package definition may be spread over several schema files.
	The directory hierarchy of the input is not relevant to the
	generated code.

OPTIONS
  -b directory
    	Use a base directory for the generated code. (default ".")
  -c file
    	Insert a code snippet from a file.
  -f	Normalize the format of all schema input on the fly.
  -h	Prints the manual to standard error.
  -i interfaces
    	Make all generated classes implement one or more interfaces.
    	Use commas as a list separator.
  -l expression
    	Set the default upper limit for the number of elements in a
    	list. The expression is applied to the target language under
    	the name ColferListMax. (default "64 * 1024")
  -p package
    	Compile to a package prefix.
  -s expression
    	Set the default upper limit for serial byte sizes. The
    	expression is applied to the target language under the name
    	ColferSizeMax. (default "16 * 1024 * 1024")
  -t files
    	Supply custom tags with one or more files. Use commas as a list
    	separator. See the TAGS section for details.
  -v	Enable verbose reporting to standard error.
  -x class
    	Make all generated classes extend a super class.

TAGS
	Tags, a.k.a. annotations, are source code additions for structs
	and/or fields. Input for the compiler can be specified with the
	-t option. The data format is line-oriented.

		<line> :≡ <qual> <space> <code> ;
		<qual> :≡ <package> '.' <dest> ;
		<dest> :≡ <struct> | <struct> '.' <field> ;

	Lines starting with a '#' are ignored (as comments). Java output
	can take multiple tag lines for the same struct or field. Each
	code line is applied in order of appearance.

EXIT STATUS
	The command exits 0 on success, 1 on error and 2 when invoked
	without arguments.

EXAMPLES
	Compile ./io.colf with compact limits as C:

		colf -b src -s 2048 -l 96 C io.colf

	Compile ./*.colf with a common parent as Java:

		colf -p com.example.model -x com.example.io.IOBean Java

BUGS
	Report bugs at <https://github.com/pascaldekloe/colfer/issues>.

	Text validation is not part of the marshalling and unmarshalling
	process. C and Go just pass any malformed UTF-8 characters. Java
	and JavaScript replace unmappable content with the '?' character
	(ASCII 63).

SEE ALSO
	protoc(1), flatc(1)

It is recommended to commit the generated source code into the respective version control to preserve build consistency and minimise the need for compiler installations. Alternatively, you may use the Maven plugin.

<plugin>
	<groupId>net.quies.colfer</groupId>
	<artifactId>colfer-maven-plugin</artifactId>
	<version>1.11.2</version>
	<configuration>
		<packagePrefix>com/example</packagePrefix>
	</configuration>
</plugin>

View on Github

7 - Csvutil:

High Performance, idiomatic CSV record encoding and decoding to native Go structures.

Package csvutil provides fast, idiomatic, and dependency free mapping between CSV and Go (golang) values.

This package is not a CSV parser; it is based on the Reader and Writer interfaces, which are implemented by, e.g., the standard Go csv package. This makes it possible to choose any other CSV writer or reader that may be more performant.

Installation

go get github.com/jszwec/csvutil

Requirements

  • Go1.8+

Example

Unmarshal

The nice and easy Unmarshal uses the standard Go csv.Reader with its default options. Use Decoder for streaming and more advanced use cases.

var csvInput = []byte(`
name,age,CreatedAt
jacek,26,2012-04-01T15:00:00Z
john,,0001-01-01T00:00:00Z`,
	)

	type User struct {
		Name      string `csv:"name"`
		Age       int    `csv:"age,omitempty"`
		CreatedAt time.Time
	}

	var users []User
	if err := csvutil.Unmarshal(csvInput, &users); err != nil {
		fmt.Println("error:", err)
	}

	for _, u := range users {
		fmt.Printf("%+v\n", u)
	}

	// Output:
	// {Name:jacek Age:26 CreatedAt:2012-04-01 15:00:00 +0000 UTC}
	// {Name:john Age:0 CreatedAt:0001-01-01 00:00:00 +0000 UTC}

Marshal

Marshal is using the Go std csv.Writer with its default options. Use Encoder for streaming or to use a different Writer.

type Address struct {
		City    string
		Country string
	}

	type User struct {
		Name string
		Address
		Age       int `csv:"age,omitempty"`
		CreatedAt time.Time
	}

	users := []User{
		{
			Name:      "John",
			Address:   Address{"Boston", "USA"},
			Age:       26,
			CreatedAt: time.Date(2010, 6, 2, 12, 0, 0, 0, time.UTC),
		},
		{
			Name:    "Alice",
			Address: Address{"SF", "USA"},
		},
	}

	b, err := csvutil.Marshal(users)
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(string(b))

	// Output:
	// Name,City,Country,age,CreatedAt
	// John,Boston,USA,26,2010-06-02T12:00:00Z
	// Alice,SF,USA,,0001-01-01T00:00:00Z

Unmarshal and metadata

It may happen that your CSV input will not always have the same header. In addition to your base fields you may get extra metadata that you would still like to store. Decoder provides Unused method, which after each call to Decode can report which header indexes were not used during decoding. Based on that, it is possible to handle and store all these extra values.

type User struct {
	Name      string            `csv:"name"`
	City      string            `csv:"city"`
	Age       int               `csv:"age"`
	OtherData map[string]string `csv:"-"`
}

csvReader := csv.NewReader(strings.NewReader(`
name,age,city,zip
alice,25,la,90005
bob,30,ny,10005`))

dec, err := csvutil.NewDecoder(csvReader)
if err != nil {
	log.Fatal(err)
}

header := dec.Header()
var users []User
for {
	u := User{OtherData: make(map[string]string)}

	if err := dec.Decode(&u); err == io.EOF {
		break
	} else if err != nil {
		log.Fatal(err)
	}

	for _, i := range dec.Unused() {
		u.OtherData[header[i]] = dec.Record()[i]
	}
	users = append(users, u)
}

fmt.Println(users)

// Output:
// [{alice la 25 map[zip:90005]} {bob ny 30 map[zip:10005]}]

View on Github

8 - Elastic:

Convert slices, maps or any other unknown value across different types at run-time, no matter what.

Converts Go types no matter what

elastic is a simple library that converts any type to another in the best way possible. This is useful when the type is only known at run-time, which usually happens when serializing data. elastic allows your code to be flexible regarding type conversion, if that is what you're looking for.

It is also capable of seeing through alias types and converting slices and maps to and from other types of slices and maps, provided there is some logical way to convert them.

Default conversion can be overridden by providing custom conversion functions for specific types. Struct types can also implement the ConverterTo interface to help with conversion to and from specific types.

Quick examples:

convert value types:

// Note that using elastic wouldn't make sense if you are certain
// f is a float64 at compile time.
var f interface{} = float64(5.5)
var i int

err := elastic.Set(&i, f)
if err != nil {
	log.Fatal(err)
}

fmt.Println(i) // prints 5

convert slices:

var ints []int
err = elastic.Set(&ints, []interface{}{1, 2, 3, "4", float64(5), 6})
if err != nil {
	log.Fatal(err)
}

fmt.Println(ints) // prints [1 2 3 4 5 6]

convert maps:

someMap := map[string]interface{}{
	"1": "uno",
	"2": "dos",
	"3": "tres",
}

intmap := make(map[int]string)
err = elastic.Set(&intmap, someMap)
if err != nil {
	log.Fatal(err)
}
fmt.Println(intmap) // prints map[1:uno 2:dos 3:tres]

Simple API:

elastic.Convert()

Converts the passed value to the target type

Syntax:

elastic.Convert(source interface{}, targetType reflect.Type) (interface{}, error)

  • source: the value to convert
  • targetType: the type you want to convert source to

Returns

The converted value or an error if it fails.
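To illustrate, here is a minimal usage sketch based solely on the signature above; the string-to-int conversion mirrors the earlier slice example (treat the exact behaviour as an assumption and check the package docs):

// Build the target type at run-time and convert.
v, err := elastic.Convert("4", reflect.TypeOf(0))
if err != nil {
	log.Fatal(err)
}
i := v.(int) // Convert returns interface{}, so assert the concrete type
fmt.Println(i) // prints 4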

View on Github

9 - Fixedwidth:

Fixed-width text formatting (UTF-8 supported).

Fixedwidth is a Go package that provides a simple way to define fixed-width data; fast encoding and decoding are also project goals.

Character encoding supported

UTF-8

Getting Started

Installation

To start using Fixedwidth, run go get:

$ go get github.com/huydang284/fixedwidth

How we limit a struct field

To limit a struct field, we use the fixed tag.

Example:

type people struct {
    Name string `fixed:"10"`
    Age  int    `fixed:"3"`
}

If the value of a struct field is longer than the defined limit, the excess characters will be truncated.

Otherwise, if the value is shorter than the limit, additional spaces will be appended. For example, with Name limited to 10 characters, "Christopher" (11 characters) is encoded as "Christophe", while "Huy" is padded with seven trailing spaces.

Encoding

We can use the Marshal function directly to encode fixed-width data.

package main

import (
    "fmt"
    "github.com/huydang284/fixedwidth"
)

type people struct {
    Name string `fixed:"10"`
    Age  int    `fixed:"3"`
}

func main() {
    me := people{
        Name: "Huy",
        Age:  25,
    }
    data, _ := fixedwidth.Marshal(me)
    fmt.Println(string(data))
}

The result will be:

Huy       25 
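Decoding is the reverse operation. Below is a minimal sketch that assumes the package also exposes an Unmarshal counterpart (an assumption based on the package's stated decoding support; check the repository for the exact API):

// Decode the fixed-width record back into the struct (assumed API).
var p people
if err := fixedwidth.Unmarshal([]byte("Huy       25 "), &p); err != nil {
    log.Fatal(err)
}
fmt.Printf("%+v\n", p) // expected: {Name:Huy Age:25}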

View on Github

10 - Fwencoder:

Fixed width file parser (encoding and decoding library) for Go.

This library is used to parse fixed-width table data like:

Name            Address               Postcode Phone          Credit Limit Birthday
Evan Whitehouse V4560 Camel Back Road 3122     (918) 605-5383    1000000.5 19870101
Chuck Norris    P.O. Box 872          77868    (713) 868-6003     10909300 19651203

Install

To install the library use the following command:

$ go get -u github.com/o1egl/fwencoder

Decoding example

Parsing data from io.Reader:

type Person struct {
	Name        string
	Address     string
	Postcode    int
	Phone       string
	CreditLimit float64   `json:"Credit Limit"`
	Bday        time.Time `column:"Birthday" format:"20060102"`
}

f, _ := os.Open("/path/to/file")
defer f.Close()

var people []Person
err := fwencoder.UnmarshalReader(f, &people)

You can also parse data from a byte array:

b, _ := ioutil.ReadFile("/path/to/file")
var people []Person
err := fwencoder.Unmarshal(b, &people)
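Encoding works the other way around. A brief sketch, assuming the library provides a Marshal counterpart to Unmarshal (an assumption here; consult the repository for the exact signature):

// Encode the slice back into a fixed-width table (assumed API).
out, err := fwencoder.Marshal(&people)
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(out))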

View on Github

Thank you for following this article.

Related videos:

Serialize protobuf message - Golang

#go #golang #binary #serialization 

10 Best Golang Libraries and Tools for Binary Serialization

A Concise Binary Object Representation Serialization Library in Julia

CBOR.jl  

CBOR.jl is a Julia package for working with the CBOR data format, providing straightforward encoding and decoding for Julia types.

About CBOR

The Concise Binary Object Representation is a data format that's based upon an extension of the JSON data model, whose stated design goals include: small code size, small message size, and extensibility without the need for version negotiation. The format is formally defined in RFC 7049.

Usage

Add the package

Pkg.add("CBOR")

and add the module

using CBOR

Encoding and Decoding

Encoding and decoding follow the simple pattern

bytes = encode(data)

data = decode(bytes)

where bytes is of type Array{UInt8, 1}, and data returned from decode() is usually of the same type that was passed into encode() but always contains the original data.

Primitive Integers

All Signed and Unsigned types, except Int128 and UInt128, are encoded as CBOR Type 0 or Type 1

> encode(21)
1-element Array{UInt8,1}: 0x15

> encode(-135713)
5-element Array{UInt8,1}: 0x3a 0x00 0x02 0x12 0x20

> bytes = encode(typemax(UInt64))
9-element Array{UInt8,1}: 0x1b 0xff 0xff 0xff 0xff 0xff 0xff 0xff 0xff

> decode(bytes)
18446744073709551615

Byte Strings

An AbstractVector{UInt8} is encoded as CBOR Type 2

> encode(UInt8[x*x for x in 1:10])
11-element Array{UInt8, 1}: 0x4a 0x01 0x04 0x09 0x10 0x19 0x24 0x31 0x40 0x51 0x64

Strings

Strings are encoded as CBOR Type 3

> encode("Valar morghulis")
16-element Array{UInt8,1}: 0x6f 0x56 0x61 0x6c 0x61 ... 0x68 0x75 0x6c 0x69 0x73

> bytes = encode("ืืชื” ื™ื›ื•ืœ ืœืงื—ืช ืืช ืกื•ืก ืืœ ื”ืžื™ื, ืื‘ืœ ืืชื” ืœื ื™ื›ื•ืœ ืœื”ื•ื›ื™ื— ืฉื•ื ื“ื‘ืจ ืืžื™ืชื™")
119-element Array{UInt8,1}: 0x78 0x75 0xd7 0x90 0xd7 ... 0x99 0xd7 0xaa 0xd7 0x99

> decode(bytes)
"ืืชื” ื™ื›ื•ืœ ืœืงื—ืช ืืช ืกื•ืก ืืœ ื”ืžื™ื, ืื‘ืœ ืืชื” ืœื ื™ื›ื•ืœ ืœื”ื•ื›ื™ื— ืฉื•ื ื“ื‘ืจ ืืžื™ืชื™"

Floats

Float64, Float32 and Float16 are encoded as CBOR Type 7

> encode(1.23456789e-300)
9-element Array{UInt8, 1}: 0xfb 0x01 0xaa 0x74 0xfe 0x1c 0x13 0x2c 0x0e

> bytes = encode(Float32(pi))
5-element Array{UInt8, 1}: 0xfa 0x40 0x49 0x0f 0xdb

> decode(bytes)
3.1415927f0

Arrays

AbstractVector and Tuple types, except of course AbstractVector{UInt8}, are encoded as CBOR Type 4

> bytes = encode((-7, -8, -9))
4-element Array{UInt8, 1}: 0x83 0x26 0x27 0x28

> decode(bytes)
3-element Array{Any, 1}: -7 -8 -9

> bytes = encode(["Open", 1, 4, 9.0, "the pod bay doors hal"])
39-element Array{UInt8, 1}: 0x85 0x64 0x4f 0x70 0x65 ... 0x73 0x20 0x68 0x61 0x6c

> decode(bytes)
5-element Array{Any, 1}: "Open" 1 4 9.0 "the pod bay doors hal"

> bytes = encode([log2(x) for x in 1:10])
91-element Array{UInt8, 1}: 0x8a 0xfb 0x00 0x00 0x00 ... 0x4f 0x09 0x79 0xa3 0x71

> decode(bytes)
10-element Array{Any, 1}: 0.0 1.0 1.58496 2.0 2.32193 2.58496 2.80735 3.0 3.16993 3.32193

Maps

An AbstractDict type is encoded as CBOR Type 5

> d = Dict()
> d["GNU's"] = "not UNIX"
> d[Float64(e)] = [2, "+", 0.718281828459045]

> bytes = encode(d)
38-element Array{UInt8, 1}: 0xa2 0x65 0x47 0x4e 0x55 ... 0x28 0x6f 0x8a 0xd2 0x56

> decode(bytes)
Dict{Any,Any} with 2 entries:
  "GNU's"           => "not UNIX"
  2.718281828459045 => Any[0x02, "+", 0.718281828459045]

Tagging

To tag one of the above types, encode a Tag with first being a non-negative integer, and second being the data you want to tag.

> bytes = encode(Tag(80, "web servers"))

> data = decode(bytes)
0x50 => "web servers"

There exists an IANA registry which assigns certain meanings to tags; for example, a string tagged with a value of 32 is to be interpreted as a Uniform Resource Locator. To decode a tagged CBOR data item, and then to automatically interpret the meaning of the tag, use decode_with_iana.

For example, a Julia BigInt type is encoded as an Array{UInt8, 1} containing the bytes of its hexadecimal representation, and tagged with a value of 2 or 3

> b = BigInt(factorial(20))
2432902008176640000

> bytes = encode(b * b * -b)
34-element Array{UInt8,1}: 0xc3 0x58 0x1f 0x13 0xd4 ... 0xff 0xff 0xff 0xff 0xff

To decode bytes without interpreting the meaning of the tag, use decode

> decode(bytes)
0x03 => UInt8[0x96, 0x58, 0xd1, 0x85, 0xdb ... 0xff, 0xff, 0xff, 0xff, 0xff]

To decode bytes and to interpret the meaning of the tag, use decode_with_iana

> decode_with_iana(bytes)
-14400376622525549608547603031202889616850944000000000000

Currently, only BigInt is supported for automatically tagged encoding and decoding; more Julia types will be added in the future.

Composite Types

A generic DataType that isn't one of the above types is encoded through encode using reflection. This is supported only if all of the fields of the type belong to one of the above types.

For example, say you have a user-defined type Point

mutable struct Point
    x::Int64
    y::Float64
    space::String
end

point = Point(1, 3.4, "Euclidean")

When point is passed into encode, it is first converted to a Dict containing the symbolic names of its fields as keys associated to their respective values, and a "type" key associated to the type's symbolic name, like so

Dict{Any, Any} with 4 entries:
  "x"     => 0x01
  "type"  => "Point"
  "y"     => 3.4
  "space" => "Euclidean"

The Dict is then encoded as CBOR Type 5.

Indefinite length collections

To encode collections of indefinite length, you can just wrap any iterator in the CBOR.UndefLength type. Make sure that your iterator knows its eltype, e.g. to create a bytestring / string / Dict indefinite length encoding. The eltype mapping is:

Vector{UInt8} -> bytestring
String -> bytestring
Pair -> Dict
Any -> List

If the eltype is unknown, but you still want to enforce it, use this constructor:

CBOR.UndefLength{String}(iter)

First, create some Julia iterator of unknown length

function producer(ch::Channel)
    for i in 1:10
        put!(ch,i*i)
    end
end
iter = Channel(producer)

encode it with UndefLength

> bytes = encode(UndefLength(iter))
18-element Array{UInt8, 1}: 0x9f 0x01 0x04 0x09 0x10 ... 0x18 0x51 0x18 0x64 0xff

> decode(bytes)
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

While encoding an indefinite length Map, first produce the key and then the value for each key-value pair, or produce Pairs!

function cubes(ch::Channel)
    for i in 1:10
        put!(ch, i)       # key
        put!(ch, i*i*i)   # value
    end
end

> bytes = encode(UndefLength{Pair}(Channel(cubes)))
34-element Array{UInt8, 1}: 0xbf 0x01 0x01 0x02 0x08 ... 0x0a 0x19 0x03 0xe8 0xff

> decode(bytes)
Dict(7=>343,4=>64,9=>729,10=>1000,2=>8,3=>27,5=>125,8=>512,6=>216,1=>1)

Note that when an indefinite length CBOR Type 2 or Type 3 is decoded, the result is a concatenation of the individual elements.

function producer(ch::Channel)
    for c in ["F", "ire", " ", "and", " ", "Blo", "od"]
        put!(ch,c)
    end
end

> bytes = encode(UndefLength{String}(Channel(producer)))
23-element Array{UInt8, 1}: 0x7f 0x61 0x46 0x63 0x69 ... 0x6f 0x62 0x6f 0x64 0xff

> decode(bytes)
"Fire and Blood"

Caveats

Encoding a UInt128 and an Int128 isn't supported; use a BigInt instead.

Decoding CBOR data that isn't well-formed is unpredictable.

Download Details:

Author: JuliaIO
Source Code: https://github.com/JuliaIO/CBOR.jl 
License: MIT license

#julia #serialization 

A Concise Binary Object Representation Serialization Library in Julia

Royce Reinger

1662363841

JSON-builder: The Serializing Counterpart to Json-parser

The serializing counterpart to json-parser.

As with json-parser: BSD licensed, almost ANSI C89 apart from a single use of snprintf.

Usage

Quick example (docs coming soon):

json_value * arr = json_array_new(0);
json_array_push(arr, json_string_new("Hello world!"));
json_array_push(arr, json_integer_new(128));

char * buf = malloc(json_measure(arr));
json_serialize(buf, arr);

printf("%s\n", buf);
[ "Hello world!", 128 ]

json-builder is fully interoperable with json-parser:

char json[] = "[ 1, 2, 3 ]";

json_settings settings = {};
settings.value_extra = json_builder_extra;  /* space for json-builder state */

char error[128];
json_value * arr = json_parse_ex(&settings, json, strlen(json), error);

/* Now serialize it again. */
char * buf = malloc(json_measure(arr));
json_serialize(buf, arr);

printf("%s\n", buf);
[ 1, 2, 3 ]

Note that values created by or modified by json-builder must be freed with json_builder_free instead of json_value_free; otherwise the memory of the builder state will be leaked.

Modes

  • json_serialize_mode_multiline - Generate multi-line JSON, for example:

[
  1,
  2,
  3
]

  • json_serialize_mode_single_line - Generate JSON on a single line, for example:

[ 1, 2, 3 ]

  • json_serialize_mode_packed - Generate JSON as tightly packed as possible, for example:

[1,2,3]

Options

json_serialize_opt_CRLF - use CR/LF (Windows) line endings

json_serialize_opt_pack_brackets - do not leave spaces around brackets (e.g. [ 1, 2 ] becomes [1, 2])

json_serialize_opt_no_space_after_comma - do not leave spaces after commas

json_serialize_opt_no_space_after_colon - do not leave spaces after colons (inside objects)

json_serialize_opt_use_tabs - indent using tabs instead of spaces when in multi-line mode

indent_size - the number of tabs or spaces to indent with in multi-line mode

Download Details:

Author: json-parser 
Source Code: https://github.com/json-parser/json-builder 
License: BSD-2-Clause license

#json #serialization 

JSON-builder: The Serializing Counterpart to Json-parser

Nat Grady

1659683100

Qs: Quick Serialization Of R Objects


Quick serialization of R objects

qs provides an interface for quickly saving and reading objects to and from disk. The goal of this package is to provide a lightning-fast and complete replacement for the saveRDS and readRDS functions in R.

Inspired by the fst package, qs uses a similar block-compression design using either the lz4 or zstd compression libraries. It differs in that it applies a more general approach for attributes and object references.

saveRDS and readRDS are the standard for serialization of R data, but these functions are not optimized for speed. On the other hand, fst is extremely fast, but only works on data.frames and certain column types.

qs is both extremely fast and general: it can serialize any R object like saveRDS and is just as fast as, and sometimes faster than, fst.

Usage

library(qs)
df1 <- data.frame(x = rnorm(5e6), y = sample(5e6), z=sample(letters, 5e6, replace = T))
qsave(df1, "myfile.qs")
df2 <- qread("myfile.qs")

Installation

# CRAN version
install.packages("qs")

# CRAN version compile from source (recommended)
remotes::install_cran("qs", type = "source", configure.args = "--with-simd=AVX2")

Features

The table below compares the features of different serialization approaches in R.

Feature               qs   fst                 saveRDS
Not Slow              ✔    ✔                   ❌
Numeric Vectors       ✔    ✔                   ✔
Integer Vectors       ✔    ✔                   ✔
Logical Vectors       ✔    ✔                   ✔
Character Vectors     ✔    ✔                   ✔
Character Encoding    ✔    (vector-wide only)  ✔
Complex Vectors       ✔    ❌                   ✔
Data.Frames           ✔    ✔                   ✔
On disk row access    ❌    ✔                   ❌
Random column access  ❌    ✔                   ❌
Attributes            ✔    Some                ✔
Lists / Nested Lists  ✔    ❌                   ✔
Multi-threaded        ✔    ✔                   ❌

qs also includes a number of advanced features:

  • For character vectors, qs also has the option of using the new ALTREP system (R version 3.5+) to quickly read in string data.
  • For numerical data (numeric, integer, logical and complex vectors) qs implements byte shuffling filters (adopted from the Blosc meta-compression library). These filters utilize extended CPU instruction sets (either SSE2 or AVX2).
  • qs also efficiently serializes S4 objects, environments, and other complex objects.

For certain types of data, these features can additionally increase performance by orders of magnitude. See the sections below for more details.

Summary Benchmarks

The following benchmarks were performed comparing qs, fst and saveRDS/readRDS in base R for serializing and de-serializing a medium sized data.frame with 5 million rows (approximately 115 Mb in memory):

data.frame(a = rnorm(5e6), 
           b = rpois(5e6, 100),
           c = sample(starnames$IAU, 5e6, T),
           d = sample(state.name, 5e6, T),
           stringsAsFactors = F)

qs is highly parameterized and can be tuned by the user to extract as much speed and compression as possible, if desired. For simplicity, qs comes with 4 presets, which trade off speed and compression ratio: "fast", "balanced", "high" and "archive".

The plots below summarize the performance of saveRDS, qs and fst with various parameters:

[Plots: serializing and de-serializing benchmark timings for saveRDS, qs and fst]

(Benchmarks are based on qs ver. 0.21.2, fst ver. 0.9.0 and R 3.6.1.)

Benchmarking write and read speed is a bit tricky and depends highly on a number of factors, such as operating system, the hardware being run on, the distribution of the data, or even the state of the R instance. Reading data is also further subjected to various hardware and software memory caches.

Generally speaking, qs and fst are considerably faster than saveRDS regardless of whether single-threaded or multi-threaded compression is used. qs also manages to achieve a superior compression ratio through various optimizations (e.g. see the "Byte Shuffle" section below).

ALTREP character vectors

The ALTREP system (new as of R 3.5.0) allows package developers to represent R objects using their own custom memory layout. This allows a potentially large speedup in processing certain types of data.

In qs, ALTREP character vectors are implemented via the stringfish package and can be used by setting use_alt_rep=TRUE in the qread function. The benchmark below shows the time it takes to qread several million random strings (nchar = 80) with and without ALTREP.

[Plot: qread timings with and without ALTREP]

The large speedup demonstrates why one would want to consider the system, but there are caveats. Downstream processing functions must be ALTREP-aware. See the stringfish package for more details.

Byte shuffle

Byte shuffling (adopted from the Blosc meta-compression library) is a way of re-organizing data to be more amenable to compression. An integer contains four bytes, and the limits of an integer in R are +/- 2^31-1. However, most real data doesn't use anywhere near the full range of possible integer values. For example, if the data were representing percentages, 0% to 100%, three of the four bytes would be unused and zero.

Byte shuffling rearranges the data such that all of the first bytes are blocked together, all of the second bytes are blocked together, and so on. This procedure often makes it very easy for compression algorithms to find repeated patterns and can often improve compression ratio by orders of magnitude. In the example below, shuffle compression achieves a compression ratio of over 1:1000. See ?qsave for more details.

# With byte shuffling
x <- 1:1e8
qsave(x, "mydat.qs", preset = "custom", shuffle_control = 15, algorithm = "zstd")
cat( "Compression Ratio: ", as.numeric(object.size(x)) / file.info("mydat.qs")$size, "\n" )
# Compression Ratio:  1389.164

# Without byte shuffling
x <- 1:1e8
qsave(x, "mydat.qs", preset = "custom", shuffle_control = 0, algorithm = "zstd")
cat( "Compression Ratio: ", as.numeric(object.size(x)) / file.info("mydat.qs")$size, "\n" )
# Compression Ratio:  1.479294 

Serializing to memory

You can use qs to directly serialize objects to memory.

Example:

library(qs)
x <- qserialize(c(1, 2, 3))
qdeserialize(x)
[1] 1 2 3

Serializing objects to ASCII

The qs package includes two sets of utility functions for converting binary data to ASCII:

  • base85_encode and base85_decode
  • base91_encode and base91_decode

These functions are similar to base64 encoding functions found in various packages, but offer greater efficiency.

Example:

enc <- base91_encode(qserialize(datasets::mtcars, preset = "custom", compress_level = 22))
dec <- qdeserialize(base91_decode(enc))

(Note: base91 strings contain double quote characters (") and need to be single quoted if stored as a string.)

See the help files for additional details and history behind these algorithms.

Using qs within Rcpp

qs functions can be called directly within C++ code via Rcpp.

Example C++ script:

// [[Rcpp::depends(qs)]]
#include <Rcpp.h>
#include <qs.h>
using namespace Rcpp;

// [[Rcpp::export]]
void test() {
  qs::qsave(IntegerVector::create(1,2,3), "/tmp/myfile.qs", "high", "zstd", 1, 15, true, 1);
}

R side:

library(qs)
library(Rcpp)
sourceCpp("test.cpp")
# save file using Rcpp interface
test()
# read in file created through Rcpp interface
qread("/tmp/myfile.qs")
[1] 1 2 3

The C++ functions do not have default parameters; all parameters must be specified.

Future developments

  • Additional compression algorithms
  • Improved ALTREP serialization
  • Re-write of multithreading code
  • Mac M1 optimizations (NEON) and checking

Future versions will be backwards compatible with the current version.

Author: Traversc
Source Code: https://github.com/traversc/qs 

#r #serialization #data

Qs: Quick Serialization Of R Objects

Apache SkyWalking | APM, Application Performance Monitoring System

Apache SkyWalking

SkyWalking: an APM (application performance monitoring) system, especially designed for microservices, cloud native and container-based architectures.

Abstract

SkyWalking is an open source APM system, including monitoring, tracing and diagnosing capabilities for distributed systems in Cloud Native architectures. The core features are as follows.

  • Service, service instance, endpoint (URI) metrics analysis
  • Root cause analysis. Profile the code at runtime, powered by the in-process agent and the eBPF profiler.
  • Service topology map analysis
  • Service instance and endpoint (URI) dependency analysis
  • Slow service and endpoint detection
  • Performance optimization
  • Distributed tracing and context propagation
  • Database access metrics. Detect slow database access statements (including SQL statements).
  • Message queue performance and consuming latency monitoring
  • Browser performance monitoring
  • Infrastructure (VM, network, disk, etc.) monitoring
  • Collaboration across metrics, traces, and logs
  • Alerting

SkyWalking supports collecting telemetry data (metrics, traces, and logs) from multiple sources and in multiple formats, including

  1. Java, .NET Core, NodeJS, PHP, and Python auto-instrument agents.
  2. Go, C++, and Rust SDKs.
  3. Agent profiling for Java and Python.
  4. eBPF profiling for C, C++, Golang, and Rust.
  5. Lua agent, especially for Nginx, OpenResty and Apache APISIX.
  6. Browser agent.
  7. Service Mesh Observability. Control plane and data plane.
  8. Metrics system, including Prometheus, OpenTelemetry, Micrometer(Spring Sleuth), Zabbix.
  9. Logs.
  10. Zipkin v1/v2 traces. (No analysis.)

SkyWalking OAP uses STAM (Streaming Topology Analysis Method) to analyze topology in tracing-based agent scenarios for better performance. Read the STAM paper for more details.

Documentation

NOTICE: SkyWalking 8.0+ uses v3 protocols. They are incompatible with previous releases.

Downloads

Please head to the releases page to download a release of Apache SkyWalking.

Compiling project

Follow this document.

Code of conduct

This project adheres to the Contributor Covenant code of conduct. By participating, you are expected to uphold this code. Please follow the REPORTING GUIDELINES to report unacceptable behavior.

Live Demo

Download details:
Author: apache
Source code: https://github.com/apache/skywalking
License: Apache-2.0 license

#java #serialization

Apache SkyWalking | APM, Application Performance Monitoring System

Lawnchair - A Free, Open-source Home App for android & Java

Lawnchair 12.1

Lawnchair is a free, open-source home app for Android. Taking Launcher3, Android's default home app, as a starting point, it ports Pixel Launcher features and introduces rich options for customization. This branch houses the codebase of Lawnchair 12.1, currently in alpha and based on Launcher3 from Android 12.1. For Lawnchair 9, 10, 11, and 12, see the branches with the 9-, 10-, 11-, and 12- prefixes, respectively.

Contribute code

Whether you've fixed a bug or introduced a new feature, we welcome pull requests! (If you'd like to make a larger change and check with us first, you can do so via Lawnchair's Telegram group chat.) To help translate Lawnchair 12.1 instead, please see "Translate."

You can use Git to clone this repository:

git clone --recursive https://github.com/LawnchairLauncher/lawnchair.git

To build the app, select the lawnWithQuickstepDebug build type. Should you face errors relating to the iconloaderlib and searchuilib projects, run git submodule update --init --recursive.

Here are a few contribution tips:

The lawnchair package houses Lawnchair's own code, whereas the src package includes a clone of the Launcher3 codebase with modifications. Generally, place new files in the former, keeping changes to the latter to a minimum.

You can use either Java or, preferably, Kotlin.

Make sure your code is logical and well formatted. If using Kotlin, see "Coding conventions" in the Kotlin documentation.

Set 12.1-dev as the base branch for pull requests.

Translate

You can help translate Lawnchair 12.1 on Crowdin. Here are a few tips:

When using quotation marks, insert the symbols specific to the target language, as listed in this table.

Lawnchair uses title case for some English UI text. Title case isn't used in other languages; opt for sentence case instead.

Some English terminology may have no commonly used equivalents in other languages. In such cases, use short descriptive phrases; for example, the equivalent of bottom row for dock.

Quick links

Download details:
Author: LawnchairLauncher
Source code: https://github.com/LawnchairLauncher/lawnchair
License: View license

#java #serialization

Lawnchair - A Free, Open-source Home App for android & Java