Sasha Roberts

Reform: Form Objects Decoupled From Models In Ruby


Form objects decoupled from your models.

Reform gives you a form object with validations and nested setup of models. It is completely framework-agnostic and doesn't care about your database.

Although reform can be used in any Ruby framework, it comes with Rails support, works with simple_form and other form gems, allows nesting forms to implement has_one and has_many relationships, can compose a form from multiple objects and gives you coercion.

Full Documentation

Reform is part of the Trailblazer framework. Full documentation is available on the project site.

Reform 2.2

Temporary note: Reform 2.2 does not automatically load Rails files anymore (e.g. ActiveModel::Validations). You need the reform-rails gem, see Installation.

Defining Forms

Forms are defined in separate classes. Often, these classes partially map to a model.

class AlbumForm < Reform::Form
  property :title
  validates :title, presence: true
end

Fields are declared using ::property. Validations work exactly as you know them from Rails or other frameworks. Note that validations no longer go into the model.


Forms have a ridiculously simple API with only a handful of public methods.

  1. #initialize always requires a model that the form represents.
  2. #validate(params) updates the form's fields with the input data (only the form, not the model) and then runs all validations. The return value is the boolean result of the validations.
  3. #errors returns validation messages in a classic ActiveModel style.
  4. #sync writes form data back to the model. This will only use setter methods on the model(s).
  5. #save (optional) will call #save on the model and nested models. Note that this implies a #sync call.
  6. #prepopulate! (optional) will run pre-population hooks to "fill out" your form before rendering.

In addition to the main API, forms expose accessors to the defined properties. This is used for rendering or manual operations.


Setup

In your controller or operation you create a form instance and pass in the models you want to work on.

class AlbumsController
  def new
    @form =
  end
end

This will also work as an editing form with an existing album.

def edit
  @form =
end

Reform will read property values from the model in setup. In our example, the AlbumForm will call album.title to populate the title field.

Rendering Forms

Your @form is now ready to be rendered, either do it yourself or use something like Rails' #form_for, simple_form or formtastic.

= form_for @form do |f|
  = f.input :title

Nested forms and collections can be easily rendered with fields_for, etc. Note that you no longer pass the model to the form builder, but the Reform instance.

Optionally, you might want to use the #prepopulate! method to pre-populate fields and prepare the form for rendering.


Validation

After form submission, you need to validate the input.

class SongsController
  def create
    @form =

    #=> params: {song: {title: "Rio", length: "366"}}

    if @form.validate(params[:song])
      # ...
    end
  end
end

The #validate method first updates the values of the form - the underlying model is still treated as immutable and remains unchanged. It then runs all validations you provided in the form.

It's the only entry point for updating the form. This is by design, as separating writing and validation doesn't make sense for a form.

This allows rendering the form after validate with the data that has been submitted. However, don't get confused, the model's values are still the old, original values and are only changed after a #save or #sync operation.

Syncing Back

After validation, you have two choices: either call #save and let Reform sort out the rest. Or call #sync, which will write all the properties back to the model. In a nested form, this works recursively, of course.

It's then up to you what to do with the updated models - they're still unsaved.

Saving Forms

The easiest way to save the data is to call #save on the form.

if @form.validate(params[:song])  #=> populates album with incoming data
              #   by calling @form.album.title=.
else
  # handle validation errors.
end

This will sync the data to the model and then call on it.

Sometimes, you need to do saving manually.

Default values

Reform allows default values to be provided for properties.

class AlbumForm < Reform::Form
  property :price_in_cents, default: 9_95
end

Saving Forms Manually

Calling #save with a block will provide a nested hash of the form's properties and values. This does not call #save on the models and allows you to implement the saving yourself.

The block parameter is a nested hash of the form input.
 do |hash|
  hash      #=> {title: "Greatest Hits"}
end

You can always access the form's model. This is helpful when you were using populators to set up objects when validating.
 do |hash|
  album = @form.model
end



Reform provides support for nested objects. Let's say the Album model keeps some associations.

class Album < ActiveRecord::Base
  has_one  :artist
  has_many :songs
end

The implementation details do not really matter here; as long as your album exposes readers and writers like Album#artist and Album#songs, you can define nested forms.

class AlbumForm < Reform::Form
  property :title
  validates :title, presence: true

  property :artist do
    property :full_name
    validates :full_name, presence: true
  end

  collection :songs do
    property :name
  end
end

You can also reuse an existing form from elsewhere using :form.

property :artist, form: ArtistForm

Nested Setup

Reform will wrap defined nested objects in their own forms. This happens automatically when instantiating the form.

album.songs #=> [<Song name:"Run To The Hills">]

form =
form.songs[0] #=> <SongForm model: <Song name:"Run To The Hills">>
form.songs[0].name #=> "Run To The Hills"

Nested Rendering

When rendering a nested form you can use the form's readers to access the nested forms.

= text_field :title,         @form.title
= text_field "artist[name]",

Or use something like #fields_for in a Rails environment.

= form_for @form do |f|
  = f.text_field :title

  = f.fields_for :artist do |a|
    = a.text_field :name

Nested Processing

validate will assign values to the nested forms. sync and save work analogously to the non-nested form, just in a recursive way.

The block form of #save would give you the following data.
 do |nested|
  nested #=> {title:  "Greatest Hits",
         #    artist: {name: "Duran Duran"},
         #    songs: [{title: "Hungry Like The Wolf"},
         #            {title: "Last Chance On The Stairways"}]
         #   }
end

Manual saving with a block is discouraged. Instead, check the Disposable docs to find out how to implement your manual tweak with the official API.

Populating Forms

Very often, you need to give Reform some information about how to create or find nested objects when validating. This directive is called a populator and is documented here.


Installation

Add this line to your Gemfile:

gem "reform"

Reform works fine with Rails 3.1-5.0. However, inheritance of validations with ActiveModel::Validations is broken in Rails 3.2 and 4.0.

Since Reform 2.2, you have to add the reform-rails gem to your Gemfile to automatically load ActiveModel/Rails files.

gem "reform-rails"

Since Reform 2.0 you need to specify which validation backend you want to use (unless you're in a Rails environment where ActiveModel will be used).

To use ActiveModel (not recommended, as it is very outdated).

require "reform/form/active_model/validations"
Reform::Form.class_eval do
  include Reform::Form::ActiveModel::Validations
end

To use dry-validation (recommended).

require "reform/form/dry"
Reform::Form.class_eval do
  feature Reform::Form::Dry
end

Put this in an initializer or on top of your script.


Compositions

Reform allows you to map multiple models to one form. The complete documentation is here; this is how it works.

class AlbumForm < Reform::Form
  include Composition

  property :id,    on: :album
  property :title, on: :album
  property :songs, on: :cd
  property :cd_id, on: :cd, from: :id
end

When initializing a composition, you have to pass a hash that contains the composees.
 album, cd: CD.find(1))


Reform comes with many more optional features, like hash fields, coercion, virtual fields, and so on. Check the full documentation here.

Reform is part of the Trailblazer project. Please buy my book to support the development and learn everything about Reform - there are two chapters dedicated to Reform!

Security And Strong_parameters

By explicitly defining the form layout using ::property, there is no need to protect against unwanted input. strong_parameters or attr_accessible become obsolete. Reform will simply ignore undefined incoming parameters.

This is not Reform 1.x!

Temporary note: This is the README and API for Reform 2. On the public API, only a few tiny things have changed. Here are the Reform 1.2 docs.

Anyway, please upgrade and report problems and do not simply assume that we will magically find out what needs to get fixed. When in trouble, join us on Gitter.

Full documentation for Reform is available online, or support us and grab the Trailblazer book. There is an Upgrading Guide to help you migrate through versions.


Great thanks to Blake Education for giving us the freedom and time to develop this project in 2013 while working on their project.

Author: trailblazer
Source code:
License:  MIT license

#ruby  #ruby-on-rails

Reid Rohan


Replicate Couchdb Data into Leveldb In Real Time with Follow


Replicate couchdb data into leveldb in real time with follow. Must be used with sublevel.


The following example illustrates the simplest use case. It will synchronize couchdb data into a leveldb located at /tmp/level-npm and store the data as (key, value) = (, JSON.stringify(data.doc)), where data is a JSON chunk received from the couch.

var levelCouchSync = require('level-couch-sync')
var levelup = require('levelup')
var sublevel = require('level-sublevel')

var db = sublevel(levelup('/tmp/level-npm'))
levelCouchSync('', db, 'registry-sync')

If you provide a map/iterator function you can decide for yourself what kind of data you want to persist. An easy way to accomplish this is to create more sublevels and shove data into them. This example shows how you can store basic package metadata in a sublevel named 'package':

var levelCouchSync = require('level-couch-sync')
var levelup = require('levelup')
var sublevel = require('level-sublevel')

var db = sublevel(levelup('/tmp/level-npm-advanced'))
var packageDb = db.sublevel('package')

levelCouchSync('', db, 'registry-sync',
    function (data, emit) {
      var doc = data.doc
      emit(, JSON.stringify({
        name        :,
        author      :,
        repository  : doc.repository
      }), packageDb)
    })
Each emit() call adds a (key, value, sublevel) triplet to a batch operation that is executed once the iterator returns, which means you can call emit() many times each time the iterator is invoked.

levelCouchSync() returns an EventEmitter, which you can attach listeners to. The following example logs package versions and progress to stdout. See the events section for more details.

// ..
var sync = levelCouchSync(url, db, 'registry-sync')

sync.on('data', function (data) {
  console.log(, data.doc.versions && Object.keys(data.doc.versions))
})
sync.on('progress', function (ratio) {
  console.log(Math.floor(ratio*10000)/100 + '%')
})
Run the samples in the example/ folder and try it out! It should work on all systems where levelup can be compiled. If you want to take a closer look at what the data looks like you can use lev, which is an awesome cli tool for viewing any leveldb. All you need is a path to it.


The API is very simple and only contains one function.

require('level-couch-sync')(sourceUrl, db, metaDb[, map])

This function returns an EventEmitter instance and has three mandatory arguments and one optional.

  • sourceUrl is a string pointing out the url to the couch we are getting the updates from
  • db must be a level-sublevel instance and is used to store the data if there is no map iterator provided
  • metaDb must be a level-sublevel instance or a string. If it's a string, a sublevel will be created with that name. metaDb handles metadata of the ongoing transfer and keeps track of the update_seq, which means that if the process crashes, it will automatically continue where it left off
  • map(data, emit) is an iterator function called for each JSON data received from the couch. The first argument data is the JSON received from the couch. emit(key, value, sublevel) is a function you call each time you want to persist some data. It takes the following three arguments:
    • key is a string and is the key used to store the value
    • value is an object that you are free to build as you please
    • sublevel is a level-sublevel instance used to store the key and the value


Events

level-couch-sync emits various events as the leveldb is synchronized with the couch:

  • sync.emit('data', data) emitted for each data object received from the couch
  • sync.emit('progress', ratio) emitted each time data has been written to levelup. The ratio is defined as how much data that has been written from the current update sequence. When there is something to be read from the couch then 0 < ratio < 1.0 and when ratio > 1.0 it means we are syncing live!
  • sync.emit('fail', err) emitted when there is an error fetching the sourceUrl from couchdb, before the request is retried using fibonacci backoff
  • sync.emit('max', maxSeq) emitted when a request has been made to the source url. maxSeq is the value of the update_seq property in the JSON response
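The fibonacci backoff mentioned for the 'fail' event simply spaces retries along the fibonacci sequence, so repeated failures back off progressively. A minimal sketch of such a delay schedule (an illustration of the strategy, not the module's actual code):

```javascript
// Illustrative fibonacci backoff: retry delays follow 1, 1, 2, 3, 5, 8, ...
// (units are arbitrary; the real module picks its own time base).
function fibonacciDelays(retries) {
  var delays = []
  var a = 1, b = 1
  for (var i = 0; i < retries; i++) {
    delays.push(a)
    var next = a + b
    a = b
    b = next
  }
  return delays
}

console.log(fibonacciDelays(6)) // [ 1, 1, 2, 3, 5, 8 ]
```

Compared with exponential backoff, fibonacci growth ramps up more gently while still bounding how hard a flaky couch gets hammered.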

progress bar

you can create a progress bar like the one used in npmd, just provide a name for the couchdb, and a function that returns a tagline describing the document that was updated.

sync.createProgressBar(name, function (data) {
  return toTagline(data)
})

By default, name is the url of the couchdb instance, and the tagline will be doc._id+'@'+doc._rev.

Author: Dominictarr
Source Code: 
License: MIT

#javascript #node #sync 

Hunter Krajcik


Ecp_sync_plugin: Added AES Encryption, Added QR Code Scanner


A new Flutter plugin.

Getting Started

Check how to install

Internet Connection Check

A Flutter plugin to check whether the internet connection is active or not.

Barcode Scanner

A Flutter plugin for scanning 2D barcodes and QR codes.


  • [x] Scan 2D barcodes
  • [x] Scan QR codes
  • [x] Control the flash while scanning
  • [x] Permission handling

Getting Started


For Android, you must do the following before you can use the plugin:

Add the camera permission to your AndroidManifest.xml

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CAMERA" />

Now you can depend on the ecp_sync_plugin package in your pubspec.yaml file:

  ecp_sync_plugin: ^3.7.7

Click "Packages get" in Android Studio or run flutter packages get in your project folder.


To use on iOS, you must add the camera usage description to your Info.plist

<key>NSCameraUsageDescription</key>
<string>This app requires access to the Camera.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app requires access to the microphone.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app requires access to the photo library.</string>


Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add ecp_sync_plugin

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

  ecp_sync_plugin: ^5.0.0+1

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:ecp_sync_plugin/ecp_sync_plugin.dart';


import 'dart:async';
import 'dart:io';

import 'package:flutter/material.dart';
import 'package:ecp_sync_plugin/ecp_sync_plugin.dart';

void main() => runApp(MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _platformVersion = 'Unknown';
  String imagePath = '';
  EcpSyncPlugin _battery = EcpSyncPlugin();
  Map _batteryState;
  StreamSubscription<Map> _batteryStateSubscription;
  TextEditingController controller = TextEditingController();
  bool imageFound = false;

  @override
  void initState() {
    super.initState();

    _batteryStateSubscription =
        _battery.onBatteryStateChanged.listen((Map state) {
      setState(() {
        _batteryState = state;
        try {
          if (state['type'] == '2001') {
            var detailsData = state['Details'];
            imageFound = true;
            imagePath = detailsData;
          }
        } catch (e) {
          // ignore malformed events
        }
      });
    });

    controller.addListener(() {
      // Do something here
    });
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  Future<void> initPlatformState(int type) async {
    String platformVersion;

    if (type == 3) {
      /*String descriptionData = await EcpSyncPlugin().scanBarCodes(
          "true", "true", "false", "#ffffff", "false", "Nexxt", "Scan here");*/
      String descriptionData = await EcpSyncPlugin().checkPermission("Android");
    } else if (type == 100) {
      String descriptionData = await EcpSyncPlugin().startScanChiperLab();
    } else if (type == 200) {
      String descriptionData = await EcpSyncPlugin().stopScanChiperLab();
    } else if (type == 201) {
      String descriptionData = await EcpSyncPlugin()
          .pickFileFromGallary(Platform.isAndroid ? "android" : "IOS", "1");
    }

    if (!mounted) return;

    setState(() {
      _platformVersion = platformVersion;
    });
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('ECP Plugin Sync'),
        ),
        body: Center(
          child: Column(
            children: <Widget>[
              Text('Running on: $_platformVersion\n'),
              Text('Progress on: $_batteryState\n'),
              TextField(
                controller: controller,
                decoration: InputDecoration(
                    border: InputBorder.none,
                    hintText: 'Please enter a search term'),
              ),
              Container(
                  height: 100,
                  width: 100,
                  child: imageFound
                      ? Image.file(File(imagePath))
                      : Image.asset("assets/download.jpeg")),
            ],
          ),
        ),
      ),
    );
  }

  @override
  void dispose() {
    _batteryStateSubscription?.cancel();
    controller.dispose();
    super.dispose();
  }
}

Original article source at: 

#flutter #dart #sync #plugin 

Lawrence Lesch


Node-levelup-sync: LevelDB - Node.js Style with Sync Supports


Fast & simple storage - a Node.js-style LevelDB wrapper 


LevelDB is a simple key/value data store built by Google, inspired by BigTable. It's used in Google Chrome and many other products. LevelDB supports arbitrary byte arrays as both keys and values, singular get, put and delete operations, batched put and delete, bi-directional iterators and simple compression using the very fast Snappy algorithm.

LevelUP aims to expose the features of LevelDB in a Node.js-friendly way. All standard Buffer encoding types are supported, as is a special JSON encoding. LevelDB's iterators are exposed as a Node.js-style readable stream, and a matching writeable stream converts writes to batch operations.

LevelDB stores entries sorted lexicographically by keys. This makes LevelUP's ReadStream interface a very powerful query mechanism.
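As a toy illustration of why sorted keys matter: keys sharing a prefix are contiguous, so a range scan (which is what ReadStream performs with its range options) is just a contiguous slice. The prefixes and data below are illustrative, not levelup itself:

```javascript
// Toy illustration: lexicographically sorted keys make prefix/range scans trivial.
// LevelDB keeps entries in exactly this order on disk.
var keys = ['user!zoe', 'post!1', 'user!amy', 'post!2', 'user!bob'].sort()

// Everything between 'user!' and 'user!\xff' is the "user" range,
// mirroring the gte/lte style options a ReadStream accepts.
var users = keys.filter(function (k) {
  return k >= 'user!' && k <= 'user!\xff'
})

console.log(users) // [ 'user!amy', 'user!bob', 'user!zoe' ]
```

This is why key design (namespacing with prefixes and separators) is the main schema decision in a LevelDB application.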

LevelUP is an OPEN Open Source Project, see the Contributing section to find out what this means.

Relationship to LevelDOWN

LevelUP is designed to be backed by LevelDOWN which provides a pure C++ binding to LevelDB and can be used as a stand-alone package if required.

As of version 0.9, LevelUP no longer requires LevelDOWN as a dependency, so you must npm install leveldown when you install LevelUP.

LevelDOWN is now optional because LevelUP can be used with alternative backends, such as level.js in the browser or MemDOWN for a pure in-memory store.

LevelUP will look for LevelDOWN and throw an error if it can't find it in its Node require() path. It will also tell you if the installed version of LevelDOWN is incompatible.

The level package is available as an alternative installation mechanism. Install it instead to automatically get both LevelUP & LevelDOWN. It exposes LevelUP on its export (i.e. you can var leveldb = require('level')).

Tested & supported platforms

  • Linux: including ARM platforms such as Raspberry Pi and Kindle!
  • Mac OS
  • Solaris: including Joyent's SmartOS & Nodejitsu
  • Windows: Node 0.10 and above only. See installation instructions for node-gyp's dependencies here, you'll need these (free) components from Microsoft to compile and run any native Node add-on in Windows.

Basic usage

First you need to install LevelUP!

$ npm install levelup leveldown


$ npm install level

(this second option requires you to use LevelUP by calling var levelup = require('level'))

All operations are asynchronous although they don't necessarily require a callback if you don't need to know when the operation was performed.

var levelup = require('levelup')

// 1) Create our database, supply location and options.
//    This will create or open the underlying LevelDB store.
var db = levelup('./mydb')

// 2) put a key & value
db.put('name', 'LevelUP', function (err) {
  if (err) return console.log('Ooops!', err) // some kind of I/O error

  // 3) fetch by key
  db.get('name', function (err, value) {
    if (err) return console.log('Ooops!', err) // likely the key was not found

    // ta da!
    console.log('name=' + value)
  })
})


Special operations exposed by LevelDOWN

levelup(location[, options[, callback]])

levelup(options[, callback ])

levelup(db[, callback ])

levelup() is the main entry point for creating a new LevelUP instance and opening the underlying store with LevelDB.

This function returns a new instance of LevelUP and will also initiate an open() operation. Opening the database is an asynchronous operation which will trigger your callback if you provide one. The callback should take the form: function (err, db) {} where the db is the LevelUP instance. If you don't provide a callback, any read & write operations are simply queued internally until the database is fully opened.

This leads to two alternative ways of managing a new LevelUP instance:

levelup(location, options, function (err, db) {
  if (err) throw err
  db.get('foo', function (err, value) {
    if (err) return console.log('foo does not exist')
    console.log('got foo =', value)
  })
})

// vs the equivalent:

var db = levelup(location, options) // will throw if an error occurs
db.get('foo', function (err, value) {
  if (err) return console.log('foo does not exist')
  console.log('got foo =', value)
})

The location argument is available as a read-only property on the returned LevelUP instance.

The levelup(options, callback) form (with optional callback) is only available where you provide a valid 'db' property on the options object (see below). Only for back-ends that don't require a location argument, such as MemDOWN.

For example:

var levelup = require('levelup')
var memdown = require('memdown')
var db = levelup({ db: memdown })

The levelup(db, callback) form (with optional callback) is only available where db is a factory function, as would be provided as a 'db' property on an options object (see below). Only for back-ends that don't require a location argument, such as MemDOWN.

For example:

var levelup = require('levelup')
var memdown = require('memdown')
var db = levelup(memdown)


levelup() takes an optional options object as its second argument; the following properties are accepted:

'createIfMissing' (boolean, default: true): If true, will initialise an empty database at the specified location if one doesn't already exist. If false and a database doesn't exist you will receive an error in your open() callback and your database won't open.

'errorIfExists' (boolean, default: false): If true, you will receive an error in your open() callback if the database exists at the specified location.

'compression' (boolean, default: true): If true, all compressible data will be run through the Snappy compression algorithm before being stored. Snappy is very fast and shouldn't gain much speed by disabling so leave this on unless you have good reason to turn it off.

'cacheSize' (number, default: 8 * 1024 * 1024): The size (in bytes) of the in-memory LRU cache with frequently used uncompressed block contents.

'keyEncoding' and 'valueEncoding' (string, default: 'utf8'): The encoding of the keys and values passed through Node.js' Buffer implementation (see Buffer#toString()).

'utf8' is the default encoding for both keys and values so you can simply pass in strings and expect strings from your get() operations. You can also pass Buffer objects as keys and/or values and conversion will be performed.

Supported encodings are: hex, utf8, ascii, binary, base64, ucs2, utf16le.

'json' encoding is also supported, see below.
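Conceptually, an encoding is just an encode/decode pair applied on put and get. A sketch of what the 'json' and 'hex' encodings do (illustrative of the concept, not levelup's internal code):

```javascript
// What a valueEncoding does conceptually: encode on put, decode on get.
var jsonEncoding = {
  encode: function (obj) { return JSON.stringify(obj) },
  decode: function (str) { return JSON.parse(str) }
}

// Buffer-backed encodings like 'hex' convert through Node's Buffer.
var hexEncoding = {
  encode: function (str) { return Buffer.from(str, 'utf8').toString('hex') },
  decode: function (hex) { return Buffer.from(hex, 'hex').toString('utf8') }
}

var stored = jsonEncoding.encode({ name: 'LevelUP' })
console.log(stored)                      // '{"name":"LevelUP"}'
console.log(jsonEncoding.decode(stored)) // { name: 'LevelUP' }
console.log(hexEncoding.decode(hexEncoding.encode('abc'))) // 'abc'
```

This is why mixing encodings within one store is risky: a value written as 'json' and read back as 'utf8' comes out as the raw serialized string.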

'db' (object, default: LevelDOWN): LevelUP is backed by LevelDOWN to provide an interface to LevelDB. You can completely replace the use of LevelDOWN by providing a "factory" function that will return a LevelDOWN API compatible object given a location argument. For further information, see MemDOWN, a fully LevelDOWN API compatible replacement that uses a memory store rather than LevelDB. Also see Abstract LevelDOWN, a partial implementation of the LevelDOWN API that can be used as a base prototype for a LevelDOWN substitute.

Additionally, each of the main interface methods accepts an optional options object that can be used to override 'keyEncoding' and 'valueEncoding'.


open() opens the underlying LevelDB store. In general you should never need to call this method directly as it's automatically called by levelup().

However, it is possible to reopen a database after it has been closed with close(), although this is not generally advised.



close() closes the underlying LevelDB store. The callback will receive any error encountered during closing as the first argument.

You should always clean up your LevelUP instance by calling close() when you no longer need it to free up resources. A LevelDB store cannot be opened by multiple instances of LevelDB/LevelUP simultaneously.

db.put(key, value[, options][, callback])

put() is the primary method for inserting data into the store. Both the key and value can be arbitrary data objects.

The callback argument is optional but if you don't provide one and an error occurs then expect the error to be thrown.


Encoding of the key and value objects will adhere to 'keyEncoding' and 'valueEncoding' options provided to levelup(), although you can provide alternative encoding settings in the options for put() (it's recommended that you stay consistent in your encoding of keys and values in a single store).

If you provide a 'sync' value of true in your options object, LevelDB will perform a synchronous write of the data; although the operation will be asynchronous as far as Node is concerned. Normally, LevelDB passes the data to the operating system for writing and returns immediately, however a synchronous write will use fsync() or equivalent so your callback won't be triggered until the data is actually on disk. Synchronous filesystem writes are significantly slower than asynchronous writes but if you want to be absolutely sure that the data is flushed then you can use 'sync': true.

db.get(key[, options][, callback])

get() is the primary method for fetching data from the store. The key can be an arbitrary data object. If it doesn't exist in the store then the callback will receive an error as its first argument. A not-found err object will be of type 'NotFoundError' so you can check err.type == 'NotFoundError' or you can perform a truthy test on the property err.notFound.

db.get('foo', function (err, value) {
  if (err) {
    if (err.notFound) {
      // handle a 'NotFoundError' here
      return
    }
    // I/O or other error, pass it up the callback chain
    return callback(err)
  }

  // .. handle `value` here
})


Encoding of the key object will adhere to the 'keyEncoding' option provided to levelup(), although you can provide alternative encoding settings in the options for get() (it's recommended that you stay consistent in your encoding of keys and values in a single store).

LevelDB will by default fill the in-memory LRU Cache with data from a call to get. Disabling this is done by setting fillCache to false.

db.del(key[, options][, callback])

del() is the primary method for removing data from the store.

db.del('foo', function (err) {
  if (err) {
    // handle I/O or other error
  }
})


Encoding of the key object will adhere to the 'keyEncoding' option provided to levelup(), although you can provide alternative encoding settings in the options for del() (it's recommended that you stay consistent in your encoding of keys and values in a single store).

A 'sync' option can also be passed, see put() for details on how this works.

db.batch(array[, options][, callback]) (array form)

batch() can be used for very fast bulk-write operations (both put and delete). The array argument should contain a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation inside LevelDB. Each operation is contained in an object having the following properties: type, key, value, where the type is either 'put' or 'del'. In the case of 'del' the 'value' property is ignored. Any entries with a 'key' of null or undefined will cause an error to be returned on the callback and any 'type': 'put' entry with a 'value' of null or undefined will return an error.

var ops = [
    { type: 'del', key: 'father' }
  , { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' }
  , { type: 'put', key: 'dob', value: '16 February 1941' }
  , { type: 'put', key: 'spouse', value: 'Kim Young-sook' }
  , { type: 'put', key: 'occupation', value: 'Clown' }
]

db.batch(ops, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('Great success dear leader!')
})
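The validation-then-atomic-apply behaviour described above can be sketched with a plain in-memory stand-in. The `applyBatch` helper and the Map-backed store are illustrative only, not LevelUP's implementation; the point is that a bad entry fails the whole batch before anything is written:

```javascript
// Sketch: how an array-form batch() might validate and apply operations
// atomically, assuming a Map as the backing store (illustrative names).
function applyBatch (store, ops) {
  // Validate everything first, so a bad entry rejects the whole batch
  // with no partial commits.
  for (var i = 0; i < ops.length; i++) {
    var op = ops[i]
    if (op.key === null || op.key === undefined)
      return new Error('key cannot be `null` or `undefined`')
    if (op.type === 'put' && (op.value === null || op.value === undefined))
      return new Error('value cannot be `null` or `undefined`')
  }
  // All entries are valid; apply them sequentially.
  ops.forEach(function (op) {
    if (op.type === 'put') store.set(op.key, op.value)
    else if (op.type === 'del') store.delete(op.key) // 'value' is ignored for 'del'
  })
  return null
}
```

A batch containing a 'put' without a value returns an error and leaves the store untouched, mirroring the rule stated above.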


See put() for a discussion on the options object. You can overwrite default 'keyEncoding' and 'valueEncoding' and also specify the use of sync filesystem operations.

In addition to encoding options for the whole batch you can also overwrite the encoding per operation, like:

var ops = [{
    type          : 'put'
  , key           : new Buffer([1, 2, 3])
  , value         : { some: 'json' }
  , keyEncoding   : 'binary'
  , valueEncoding : 'json'
}]

db.batch() (chained form)

batch(), when called with no arguments will return a Batch object which can be used to build, and eventually commit, an atomic LevelDB batch operation. Depending on how it's used, it is possible to obtain greater performance when using the chained form of batch() over the array form.

db.batch()
  .put('name', 'Yuri Irsenovich Kim')
  .put('dob', '16 February 1941')
  .put('spouse', 'Kim Young-sook')
  .put('occupation', 'Clown')
  .write(function () { console.log('Done!') })

batch.put(key, value[, options])

Queue a put operation on the current batch, not committed until a write() is called on the batch.

The optional options argument can be used to override the default 'keyEncoding' and/or 'valueEncoding'.

This method may throw a WriteError if there is a problem with your put (such as the value being null or undefined).

batch.del(key[, options])

Queue a del operation on the current batch, not committed until a write() is called on the batch.

The optional options argument can be used to override the default 'keyEncoding'.

This method may throw a WriteError if there is a problem with your delete.


batch.clear()

Clear all queued operations on the current batch; any previously queued operations will be discarded.


batch.write([callback])

Commit the queued operations for this batch. All operations not cleared will be written to the database atomically; that is, they will either all succeed or fail with no partial commits. The optional callback will be called when the operation has completed, with an error argument if an error occurred; if no callback is supplied and an error occurs then this method will throw a WriteError.
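The chained surface can be sketched as a small queue that only touches the database when write() is called. This is a hypothetical `makeBatch` helper delegating to the array-form db.batch(), not LevelUP's actual Batch class:

```javascript
// Sketch of the chained-batch API: put()/del() queue operations, clear()
// discards the queue, write() commits everything in one atomic batch.
function makeBatch (db) {
  var ops = []
  var batch = {
    put: function (key, value) { ops.push({ type: 'put', key: key, value: value }); return batch },
    del: function (key) { ops.push({ type: 'del', key: key }); return batch },
    clear: function () { ops = []; return batch },
    write: function (callback) { db.batch(ops, callback) } // all-or-nothing commit
  }
  return batch
}
```

Returning `batch` from each method is what makes the `.put(...).del(...).write(...)` chaining style work.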


db.isOpen()

A LevelUP object can be in one of the following states:

  • "new" - newly created, not opened or closed
  • "opening" - waiting for the database to be opened
  • "open" - successfully opened the database, available for use
  • "closing" - waiting for the database to be closed
  • "closed" - database has been successfully closed, should not be used

isOpen() will return true only when the state is "open".


db.isClosed()

See isOpen().

isClosed() will return true only when the state is "closing" or "closed"; this can be useful for determining whether read and write operations are permissible.


db.createReadStream([options])

You can obtain a ReadStream of the full database by calling the createReadStream() method. The resulting stream is a complete Node.js-style Readable Stream where 'data' events emit objects with 'key' and 'value' pairs. You can also use the gt, lt and limit options to control the range of keys that are streamed.

db.createReadStream()
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })
  .on('close', function () {
    console.log('Stream closed')
  })
  .on('end', function () {
    console.log('Stream ended')
  })

The standard pause(), resume() and destroy() methods are implemented on the ReadStream, as is pipe() (see below). 'data', 'error', 'end' and 'close' events are emitted.

Additionally, you can supply an options object as the first parameter to createReadStream() with the following options:

'gt' (greater than), 'gte' (greater than or equal) define the lower bound of the range to be streamed. Only records where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the records streamed will be the same.

'lt' (less than), 'lte' (less than or equal) define the higher bound of the range to be streamed. Only key/value pairs where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the records streamed will be the same.

'start', 'end': legacy range options; use 'gte' / 'lte' instead.

'reverse' (boolean, default: false): set to true to reverse the stream output. Beware that due to the way LevelDB works, a reverse seek will be slower than a forward seek.

'keys' (boolean, default: true): whether the 'data' event should contain keys. If set to true and 'values' set to false then 'data' events will simply be keys, rather than objects with a 'key' property. Used internally by the createKeyStream() method.

'values' (boolean, default: true): whether the 'data' event should contain values. If set to true and 'keys' set to false then 'data' events will simply be values, rather than objects with a 'value' property. Used internally by the createValueStream() method.

'limit' (number, default: -1): limit the number of results collected by this stream. This number represents a maximum number of results and may not be reached if you get to the end of the data first. A value of -1 means there is no limit. When reverse=true the highest keys will be returned instead of the lowest keys.

'fillCache' (boolean, default: false): whether LevelDB's LRU-cache should be filled with data read.

'keyEncoding' / 'valueEncoding' (string): the encoding applied to each read piece of data.
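The way the gt/gte/lt/lte, reverse and limit options combine to select a key range can be sketched as a plain function over a sorted array of keys. The `selectRange` name is illustrative, not a LevelUP internal:

```javascript
// Sketch: apply range, reverse and limit options to an already-sorted key list.
function selectRange (sortedKeys, opts) {
  opts = opts || {}
  var keys = sortedKeys.filter(function (k) {
    if (opts.gt  !== undefined && !(k >  opts.gt))  return false
    if (opts.gte !== undefined && !(k >= opts.gte)) return false
    if (opts.lt  !== undefined && !(k <  opts.lt))  return false
    if (opts.lte !== undefined && !(k <= opts.lte)) return false
    return true
  })
  // reverse flips the order, but the set of records in range stays the same
  if (opts.reverse) keys = keys.slice().reverse()
  // limit caps the result count; -1 (the default) means no limit
  var limit = opts.limit === undefined ? -1 : opts.limit
  return limit < 0 ? keys : keys.slice(0, limit)
}
```

Note how reverse combined with limit yields the highest keys rather than the lowest, as described for the 'limit' option above.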


db.createKeyStream()

A KeyStream is a ReadStream where the 'data' events are simply the keys from the database, so it can be used like a traditional stream rather than an object stream.

You can obtain a KeyStream either by calling the createKeyStream() method on a LevelUP object or by passing an options object to createReadStream() with keys set to true and values set to false.

db.createKeyStream()
  .on('data', function (data) {
    console.log('key=', data)
  })

// same as:
db.createReadStream({ keys: true, values: false })
  .on('data', function (data) {
    console.log('key=', data)
  })


db.createValueStream()

A ValueStream is a ReadStream where the 'data' events are simply the values from the database, so it can be used like a traditional stream rather than an object stream.

You can obtain a ValueStream either by calling the createValueStream() method on a LevelUP object or by passing an options object to createReadStream() with values set to true and keys set to false.

db.createValueStream()
  .on('data', function (data) {
    console.log('value=', data)
  })

// same as:
db.createReadStream({ keys: false, values: true })
  .on('data', function (data) {
    console.log('value=', data)
  })


db.createWriteStream()

A WriteStream can be obtained by calling the createWriteStream() method. The resulting stream is a complete Node.js-style Writable Stream which accepts objects with 'key' and 'value' pairs on its write() method.

The WriteStream will buffer writes and submit them as batch() operations when writes occur within the same tick.

var ws = db.createWriteStream()

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

ws.write({ key: 'name', value: 'Yuri Irsenovich Kim' })
ws.write({ key: 'dob', value: '16 February 1941' })
ws.write({ key: 'spouse', value: 'Kim Young-sook' })
ws.write({ key: 'occupation', value: 'Clown' })
ws.end()
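The buffer-then-batch idea can be sketched with an explicit flush. In the real WriteStream the flush happens automatically on the next tick; here `makeWriteBuffer` and its explicit `flush()` are illustrative names so the behaviour is easy to observe:

```javascript
// Sketch: writes made before a flush are queued and committed as one batch.
function makeWriteBuffer (db) {
  var queue = []
  return {
    write: function (entry) {
      queue.push({ type: entry.type || 'put', key: entry.key, value: entry.value })
    },
    flush: function (callback) {
      var ops = queue
      queue = []
      db.batch(ops, callback) // one batch per accumulated burst of writes
      return ops.length
    }
  }
}
```

Any number of write() calls between flushes turns into a single atomic batch, which is why streaming many small writes is still efficient.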

The standard write(), end(), destroy() and destroySoon() methods are implemented on the WriteStream. 'drain', 'error', 'close' and 'pipe' events are emitted.

You can specify encodings both for the whole stream and individual entries:

To set the encoding for the whole stream, provide an options object as the first parameter to createWriteStream() with 'keyEncoding' and/or 'valueEncoding'.

To set the encoding for an individual entry:

writeStream.write({
    key           : new Buffer([1, 2, 3])
  , value         : { some: 'json' }
  , keyEncoding   : 'binary'
  , valueEncoding : 'json'
})

write({ type: 'put' })

If individual write() operations are performed with a 'type' property of 'del', they will be passed on as 'del' operations to the batch.

var ws = db.createWriteStream()

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

ws.write({ type: 'del', key: 'name' })
ws.write({ type: 'del', key: 'dob' })
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ type: 'del', key: 'occupation' })
ws.end()

db.createWriteStream({ type: 'del' })

If the WriteStream is created with a 'type' option of 'del', all write() operations will be interpreted as 'del', unless explicitly specified as 'put'.

var ws = db.createWriteStream({ type: 'del' })

ws.on('error', function (err) {
  console.log('Oh my!', err)
})
ws.on('close', function () {
  console.log('Stream closed')
})

ws.write({ key: 'name' })
ws.write({ key: 'dob' })
// but it can be overridden
ws.write({ type: 'put', key: 'spouse', value: 'Ri Sol-ju' })
ws.write({ key: 'occupation' })
ws.end()

Pipes and Node Stream compatibility

A ReadStream can be piped directly to a WriteStream, allowing for easy copying of an entire database. A simple copy() operation is included in LevelUP that performs exactly this on two open databases:

function copy (srcdb, dstdb, callback) {
  srcdb.createReadStream().pipe(dstdb.createWriteStream()).on('close', callback)
}

The ReadStream is also fstream-compatible which means you should be able to pipe to and from fstreams. So you can serialize and deserialize an entire database to a directory where keys are filenames and values are their contents, or even into a tar file using node-tar. See the fstream functional test for an example. (Note: I'm not really sure there's a great use-case for this but it's a fun example and it helps to harden the stream implementations.)

KeyStreams and ValueStreams can be treated like standard streams of raw data. If 'keyEncoding' or 'valueEncoding' is set to 'binary' the 'data' events will simply be standard Node Buffer objects straight out of the data store.

db.db.approximateSize(start, end, callback)

approximateSize() can be used to get the approximate number of bytes of file system space used by the range [start..end). The result may not include recently written data.

var db = require('level')('./huge.db')

db.db.approximateSize('a', 'c', function (err, size) {
  if (err) return console.error('Ooops!', err)
  console.log('Approximate size of range is %d', size)
})

Note: approximateSize() is available via LevelDOWN, which by default is accessible as the db property of your LevelUP instance. This is a specific LevelDB operation and is not likely to be available where you replace LevelDOWN with an alternative back-end via the 'db' option.


db.db.getProperty(property)

getProperty() can be used to get internal details from LevelDB. When issued with a valid property string, a readable string will be returned (this method is synchronous).

Currently, the only valid properties are:

'leveldb.num-files-at-levelN': returns the number of files at level N, where N is an integer representing a valid level (e.g. "0").

'leveldb.stats': returns a multi-line string describing statistics about LevelDB's internal operation.

'leveldb.sstables': returns a multi-line string describing all of the sstables that make up contents of the current database.

var db = require('level')('./huge.db')
db.db.getProperty('leveldb.num-files-at-level0')
// → '243'

Note: getProperty() is available via LevelDOWN, which by default is accessible as the db property of your LevelUP instance. This is a specific LevelDB operation and is not likely to be available where you replace LevelDOWN with an alternative back-end via the 'db' option.

leveldown.destroy(location, callback)

destroy() is used to completely remove an existing LevelDB database directory. You can use this function in place of a full directory rm if you want to be sure to only remove LevelDB-related files. If the directory only contains LevelDB files, the directory itself will be removed as well. If there are additional, non-LevelDB files in the directory, those files, and the directory, will be left alone.

The callback will be called when the destroy operation is complete, with a possible error argument.

Note: destroy() is available via LevelDOWN which you will have to install separately, e.g.:

require('leveldown').destroy('./huge.db', function (err) { console.log('done!') })

leveldown.repair(location, callback)

repair() can be used to attempt a restoration of a damaged LevelDB store. From the LevelDB documentation:

If a DB cannot be opened, you may attempt to call this method to resurrect as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function on a database that contains important information.

You will find information on the repair operation in the LOG file inside the store directory.

repair() can also be used to perform a compaction of the LevelDB log into table files.

The callback will be called when the repair operation is complete, with a possible error argument.

Note: repair() is available via LevelDOWN which you will have to install separately, e.g.:

require('leveldown').repair('./huge.db', function (err) { console.log('done!') })


Events

LevelUP emits events when the callbacks to the corresponding methods are called.

  • db.emit('put', key, value) emitted when a new value is 'put'
  • db.emit('del', key) emitted when a value is deleted
  • db.emit('batch', ary) emitted when a batch operation has executed
  • db.emit('ready') emitted when the database has opened ('open' is a synonym)
  • db.emit('closed') emitted when the database has closed
  • db.emit('opening') emitted when the database is opening
  • db.emit('closing') emitted when the database is closing

If you do not pass a callback to an async function, and there is an error, LevelUP will emit('error', err) instead.

JSON data

If you specify 'json' encoding for keys and/or values, you can supply JavaScript objects to LevelUP and receive them back from all fetch operations, including ReadStreams. LevelUP will automatically stringify your objects and store them as utf8, and will parse the strings back into objects before passing them back to you.

Custom encodings

A custom encoding may be provided by passing in an object as a value for keyEncoding or valueEncoding (wherever accepted). It must have the following properties:

{
    encode : function (val) { ... }
  , decode : function (val) { ... }
  , buffer : boolean // encode returns a buffer and decode accepts a buffer
  , type   : String  // name of this encoding type
}
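A concrete object matching this interface might round-trip values through JSON, much like the built-in 'json' encoding. This is purely illustrative (the `jsonEncoding` name and 'example-json' type are made up); you would pass an object shaped like this as keyEncoding or valueEncoding:

```javascript
// A minimal custom encoding: JSON in, JSON out.
var jsonEncoding = {
  encode: function (val) { return JSON.stringify(val) },
  decode: function (val) { return JSON.parse(val) },
  buffer: false,        // encode returns a string, not a Buffer
  type: 'example-json'  // name of this encoding type
}
```

Because decode(encode(x)) returns a structurally equal object, values stored with this encoding come back as live JavaScript objects.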

Extending LevelUP

A list of Node.js LevelDB modules and projects can be found in the wiki.

When attempting to extend the functionality of LevelUP, it is recommended that you consider using level-hooks and/or level-sublevel. level-sublevel is particularly helpful for keeping additional, extension-specific, data in a LevelDB store. It allows you to partition a LevelUP instance into multiple sub-instances that each correspond to discrete namespaced key ranges.

Multi-process access

LevelDB is thread-safe but is not suitable for access from multiple processes. You should only ever have a LevelDB database open from a single Node.js process. Node.js clusters are made up of multiple processes, so a LevelUP instance cannot be shared between them either.

See the wiki for some LevelUP extensions, including multilevel, that may help if you require a single data store to be shared across processes.

Getting support

There are multiple ways you can find help in using LevelDB in Node.js:

  • IRC: you'll find an active group of LevelUP users in the ##leveldb channel on Freenode, including most of the contributors to this project.
  • Mailing list: there is an active Node.js LevelDB Google Group.
  • GitHub: you're welcome to open an issue here on this GitHub repository if you have a question.


Contributing

LevelUP is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the file for more details.


Contributors

LevelUP is only possible due to the excellent work of the following contributors:

Rod Vagg (GitHub/rvagg, Twitter/@rvagg)
John Chesley (GitHub/chesles, Twitter/@chesles)
Jake Verbaten (GitHub/raynos, Twitter/@raynos2)
Dominic Tarr (GitHub/dominictarr, Twitter/@dominictarr)
Max Ogden (GitHub/maxogden, Twitter/@maxogden)
Lars-Magnus Skog (GitHub/ralphtheninja, Twitter/@ralphtheninja)
David Björklund (GitHub/kesla, Twitter/@david_bjorklund)
Julian Gruber (GitHub/juliangruber, Twitter/@juliangruber)
Paolo Fragomeni (GitHub/hij1nx, Twitter/@hij1nx)
Anton Whalley (GitHub/No9, Twitter/@antonwhalley)
Matteo Collina (GitHub/mcollina, Twitter/@matteocollina)
Pedro Teixeira (GitHub/pgte, Twitter/@pgte)
James Halliday (GitHub/substack, Twitter/@substack)


A large portion of the Windows support comes from code by Krzysztof Kowalczyk @kjk, see his Windows LevelDB port here. If you're using LevelUP on Windows, you should give him your thanks!

License & copyright

Author: Snowyu
Source Code:  
License: View license

#javascript #node #sync 

Node-levelup-sync: LevelDB - Node.js Style with Sync Supports
Hermann Frami


Serverless Plugin for S3 Sync


With this plugin for serverless, you can sync local folders to S3 buckets after your service is deployed.


Add the NPM package to your project:

# Via yarn
$ yarn add serverless-s3bucket-sync

# Via npm
$ npm install serverless-s3bucket-sync

Add the plugin to your serverless.yml:

plugins:
  - serverless-s3bucket-sync


Configure S3 bucket syncing in serverless.yml with references to your local folder and the name of the S3 bucket.

    - folder: relative/folder
      bucket: bucket-name

That's it! With the next deployment, serverless will sync your local folder relative/folder with the S3 bucket named bucket-name.


You can use sls sync to synchronize all buckets without deploying your serverless stack.


You are welcome to contribute to this project! 😘

To make sure you have a pleasant experience, please read the code of conduct. It outlines core values and beliefs and will make working together a happier experience.

Author: sbstjn
Source Code: 
License: MIT license

#serverless #s3 #sync #aws 

Serverless Plugin for S3 Sync
Hermann Frami


How to Synchronize Local Folders and S3 Prefixes for Serverless Framework

Serverless S3 Sync

A plugin to sync local directories and S3 prefixes for Serverless Framework ⚡ .

Use Case

  • Static Website (serverless-s3-sync) & Contact form backend (serverless).
  • SPA (serverless) & assets (serverless-s3-sync).


Run npm install in your Serverless project.

$ npm install --save serverless-s3-sync

Add the plugin to your serverless.yml file

plugins:
  - serverless-s3-sync

Compatibility with Serverless Framework

Version 2.0.0 is compatible with Serverless Framework v3, but it uses the legacy logging interface. Version 3.0.0 and later uses the new logging interface.

serverless-s3-sync | Serverless Framework
v1.x               | v1.x, v2.x
v2.0.0             | v1.x, v2.x, v3.x
≥ v3.0.0           | v3.x


custom:
  s3Sync:
    # A simple configuration for copying static assets
    - bucketName: my-static-site-assets # required
      bucketPrefix: assets/ # optional
      localDir: dist/assets # required

    # An example of possible configuration options
    - bucketName: my-other-site
      localDir: path/to/other-site
      deleteRemoved: true # optional, indicates whether sync deletes files no longer present in localDir. Defaults to 'true'
      acl: public-read # optional
      followSymlinks: true # optional
      defaultContentType: text/html # optional
      params: # optional
        - index.html:
            CacheControl: 'no-cache'
        - "*.js":
            CacheControl: 'public, max-age=31536000'
      bucketTags: # optional, these are appended to existing S3 bucket tags (overwriting tags with the same key)
        tagKey1: tagValue1
        tagKey2: tagValue2

    # This references bucket name from the output of the current stack
    - bucketNameKey: AnotherBucketNameOutputKey
      localDir: path/to/another

    # ... but can also reference it from the output of another stack,
    # see
    - bucketName: ${cf:another-cf-stack-name.ExternalBucketOutputKey}
      localDir: path

resources:
  Resources:
    AssetsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-static-site-assets
    OtherSiteBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-other-site
        AccessControl: PublicRead
        WebsiteConfiguration:
          IndexDocument: index.html
          ErrorDocument: error.html
    AnotherBucket:
      Type: AWS::S3::Bucket
  Outputs:
    AnotherBucketNameOutputKey:
      Value: !Ref AnotherBucket


Run sls deploy: local directories and S3 prefixes are synced.

Run sls remove: S3 objects in S3 prefixes are removed.

Run sls deploy --nos3sync to deploy your serverless stack without syncing local directories and S3 prefixes.

Run sls remove --nos3sync to remove your serverless stack without removing S3 objects from the target S3 buckets.

sls s3sync

Sync local directories and S3 prefixes.

Offline usage

If you are also using the plugins serverless-offline and serverless-s3-local, sync can be supported during development by placing the bucket configuration(s) into the buckets object and specifying the alternate endpoint (see below).

custom:
  s3Sync:
    # an alternate s3 endpoint
    endpoint: http://localhost:4569
    buckets:
    # A simple configuration for copying static assets
    - bucketName: my-static-site-assets # required
      bucketPrefix: assets/ # optional
      localDir: dist/assets # required
# ...

As per serverless-s3-local's instructions, once a local credentials profile is configured, run sls offline start --aws-profile s3local to sync to the local S3 bucket instead of Amazon S3.

bucketNameKey will not work in offline mode and can only be used in conjunction with valid AWS credentials, use bucketName instead.

Run sls deploy for normal deployment.

Always disable auto sync

custom:
  s3Sync:
    # Disable sync when sls deploy and sls remove
    noSync: true
    buckets:
    # A simple configuration for copying static assets
    - bucketName: my-static-site-assets # required
      bucketPrefix: assets/ # optional
      localDir: dist/assets # required
# ...

Author: k1LoW
Source Code: 

#serverless #s3 #sync 

How to Synchronize Local Folders and S3 Prefixes for Serverless Framework
Rocio O'Keefe


in_sync_interface: Interface for In_sync Feature


Interface for in_sync feature.


Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add in_sync_interface

With Flutter:

 $ flutter pub add in_sync_interface

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  in_sync_interface: ^1.2.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:in_sync_interface/in_sync_interface.dart';

Author: Innim
Source Code: 
License: BSD-3-Clause license

#dart #flutter #sync 

in_sync_interface: Interface for In_sync Feature

Drive: Google Drive Client for The Commandline


drive is a tiny program to pull or push Google Drive files.

drive was originally developed by Burcu Dogan while working on the Google Drive team. Since she is very busy and no longer able to maintain it, I took over drive on Thursday, 1st January 2015. This repository contains the latest version of the code.



go 1.9.X or higher is required. See here for installation instructions and platform installers.

  • Make sure to set your GOPATH in your env, .bashrc or .bash_profile file. If you have not yet set it, you can do so like this:
cat << ! >> ~/.bashrc
> export GOPATH=\$HOME/gopath
> export PATH=\$GOPATH:\$GOPATH/bin:\$PATH
> !
source ~/.bashrc # To reload the settings and get the newly set ones # Or open a fresh terminal

The above setup will ensure that the drive binary after compilation can be invoked from your current path.

From sources

To install from the latest source, run:

go get -u


  • In order to address issue #138, where debug information should be bundled with the binary, you'll need to run:
go get && drive-gen

In case you need a specific binary, e.g. for Debian folks (issue #271 and issue #277):

go get -u

That should produce a binary drive-google


To bundle debug information with the binary, you can run:

go get -u && drive-gen drive-google


  • Using godep
cd $GOPATH/src/ && godep save
  • Unravelling/Restoring dependencies
cd $GOPATH/src/ && godep restore

Please see file drive-gen/ for more information.

Platform Packages

For packages on your favorite platform, please see file Platform

Is your platform missing a package? Feel free to prepare / contribute an installation package and then submit a PR to add it in.

Automation Scripts

You can install scripts for automating major drive commands and syncing from the drive-google wiki. Some screenshots are available there.

Cross Compilation

See file Makefile which currently supports cross compilation. Just run make and then inspect the binaries in directory bin.

Supported platforms to cross compile to:

  • ARMv5.
  • ARMv6.
  • ARMv7.
  • ARMv8.
  • Darwin (OS X).
  • Linux.

Also inspect file bin/md5Sums.txt after the cross compilation.

API keys

Optionally set the GOOGLE_API_CLIENT_ID and GOOGLE_API_CLIENT_SECRET environment variables to use your own API keys.


Hyphens: - vs --

A single hyphen - can be used to specify options. However two hyphens -- can be used with any options in the provided examples below.


Before you can use drive, you'll need to mount your Google Drive directory on your local file system:

OAuth2.0 credentials

drive init ~/gdrive
cd ~/gdrive

Google Service Account credentials

drive init --service-account-file <gsa_json_file_path> ~/gdrive
cd ~/gdrive

Where <gsa_json_file_path> must be a Google Service Account credentials file in JSON form. This feature was implemented as requested by:

De Initializing

The opposite of drive init: it removes your local credentials as well as associated configuration files.

drive deinit [-no-prompt]

For a complete deinitialization, don't forget to revoke account access, please see revoking account access

Traversal Depth

Before talking about the features of drive, it is useful to know about "Traversal Depth".

Throughout this README, the term "Traversal Depth" refers to the number of nodes/hops/items that it takes to get from one parent to its children. In the options that allow it, you'll have a flag option -depth <n> where n is an integer.

Traversal terminates on encountering a traversal depth of zero (0).

A negative depth indicates infinity, so traverse as deep as you can.

A positive depth helps control the reach.


|- A/
	|- B/
	|- C/
		|- C1
		|- C2
			|- C10/
			|- CTX/
				| - Music
				| - Summary.txt

On the first level relative to A/, i.e. depth 1, we'll have:

B, C

On the third level relative to C/, i.e. depth 3, we'll have:

Items: Music, Summary.txt

The items encountered in depth 3 traversal relative to C/ are:

		|- C1
		|- C2
			|- C10/
			|- CTX/
				| - Music
				| - Summary.txt

No items are within the reach of depth -1 relative to B/ since B/ has no children.

Items within the reach of depth -1 relative to CTX/ are:

  		| - Music
  		| - Summary.txt
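The depth rule (0 stops traversal, a negative depth means "no limit", a positive depth counts hops from the starting node) can be sketched over a plain tree. The `{ name, children }` representation and the `itemsWithinDepth` helper are illustrative, not drive's own code:

```javascript
// Sketch: collect the names of all items within `depth` hops of `node`.
function itemsWithinDepth (node, depth) {
  if (depth === 0) return []            // traversal terminates at depth 0
  var names = []
  var children = node.children || []
  children.forEach(function (child) {
    names.push(child.name)
    // a negative depth stays negative: traverse as deep as possible
    names = names.concat(itemsWithinDepth(child, depth < 0 ? depth : depth - 1))
  })
  return names
}
```

Applied to the example tree above, depth 1 relative to A/ yields B and C, and depth 3 relative to C/ reaches down to Music and Summary.txt.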

Configuring General Settings

drive supports resource configuration files (.driverc) that you can place globally (in your home directory), locally (in the mounted drive directory), or in the directory that you are running an operation from, relative to the root. The entries in a .driverc file are key-value pairs where the key is any of the arguments that you'd get from running

drive <command> -h
# e.g
drive push -h

and the value is the argument that you'd ordinarily supply on the commandline. .driverc configurations can be optionally grouped in sections. See #778.

For example:

cat << ! >> ~/.driverc
> # My global .driverc file
> export=doc,pdf
> depth=100
> no-prompt=true
> # For lists
> [list]
> depth=2
> long=true
> # For pushes
> [push]
> verbose=false
> # For stats
> [stat]
> depth=3
> # For pulls and pushes
> [pull/push]
> no-clobber=true
> !

cat << ! >> ~/emm.odeke-drive/.driverc
> # The root main .driverc
> depth=-1
> hidden=false
> no-clobber=true
> exports-dir=$HOME/exports
> !

cat << $ >> ~/emm.odeke-drive/fall2015Classes/.driverc
> # My global .driverc file
> exports-dir=$HOME/Desktop/exports
> export=pdf,csv,txt
> hidden=true
> depth=10
> exclude-ops=delete,update
> $
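The key-value-with-sections layout shown above can be sketched with a tiny parser. This is an illustrative `parseDriverc` helper under assumed rules (comments start with '#', `[section]` lines scope the keys that follow, keys before any section apply globally), not drive's actual parser:

```javascript
// Sketch: parse .driverc text into { global: {...}, section: {...} } maps.
function parseDriverc (text) {
  var settings = { global: {} }
  var section = 'global'
  text.split('\n').forEach(function (line) {
    line = line.trim()
    if (!line || line[0] === '#') return            // skip blanks and comments
    var m = line.match(/^\[(.+)\]$/)
    if (m) {                                        // a [section] header
      section = m[1]
      settings[section] = settings[section] || {}
      return
    }
    var eq = line.indexOf('=')                      // key=value pair
    settings[section][line.slice(0, eq)] = line.slice(eq + 1)
  })
  return settings
}
```

With this shape, a command like list would merge the global keys with the keys under its own [list] section.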

Excluding and Including Objects

drive allows you to specify a '.driveignore' file, similar to your .gitignore, in the root directory of the mounted drive. Blank lines and those prefixed by '#' are treated as comments and skipped.

For example:

cat << $ >> .driveignore
> # My drive ignore file
> \.gd$
> \.so$
> \.swp$
> $


Pattern matching is done by regular expressions, so make sure any suffix you use is a valid regular expression.

Go's regular expressions don't have a negative lookahead mechanism (i.e. "exclude all but"), which would normally be achieved in other languages or regex engines with "?!"; see the golang-nuts thread !topic/golang-nuts/7qgSDWPIh_E. This was reported and requested in issue #535. A use case might be ignoring all but, say, .bashrc files or .dotfiles. To achieve this behavior, prefix the path with "!".

Sample .driveignore with the exclude and include clauses combined

cat << $ >> .driveignore
> ^\.
> !^\.bashrc # .bashrc files won't be ignored
> _export$ # _export files are to be ignored
> !must_export$ # the exception to the clause: anything matching must_export$ won't be ignored
> $
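The combined exclude/include semantics can be sketched as follows. The `makeIgnoreMatcher` helper is illustrative, not drive's implementation; it assumes each non-comment line is a regular expression and a leading '!' marks an exception that re-includes matching paths:

```javascript
// Sketch: build an ignored(path) predicate from .driveignore-style lines.
function makeIgnoreMatcher (lines) {
  var ignores = []
  var excepts = []
  lines.forEach(function (line) {
    line = line.trim()
    if (!line || line[0] === '#') return                    // comments / blanks
    if (line[0] === '!') excepts.push(new RegExp(line.slice(1)))
    else ignores.push(new RegExp(line))
  })
  return function ignored (path) {
    // an exception ("!") wins over any ignore pattern
    if (excepts.some(function (re) { return re.test(path) })) return false
    return ignores.some(function (re) { return re.test(path) })
  }
}
```

Under the sample file above, .gitignore and data_export would be ignored while .bashrc and must_export would not.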


The pull command downloads data that does not exist locally but does remotely on Google Drive, and may delete local data that is not present remotely. Run it without any arguments to pull all of the files from the current path:

drive pull

To pull and decrypt your data that is stored encrypted at rest on Google Drive, use flag -decryption-password:

See Issue #543

drive pull -decryption-password '$JiME5Umf' influx.txt

Pulling by matches is also supported

cd ~/myDrive/content/2015
drive pull -matches vines docx

To force download from paths that otherwise would be marked with no-changes

drive pull -force

To pull specific files or directories, pass in one or more paths:

drive pull photos/img001.png docs

Pulling by id is also supported

drive pull -id 0fM9rt0Yc9RTPaDdsNzg1dXVjM0E 0fM9rt0Yc9RTPaTVGc1pzODN1NjQ 0fM9rt0Yc9RTPV1NaNFp5WlV3dlU

pull optionally allows you to pull content up to a desired depth.

Say you would like to get just folder items until the second level

drive pull -depth 2 heavy-files summaries

Traverse deep to infinity and beyond

drive pull -depth -1 all-my-files

Pulling starred files is allowed as well

drive pull -starred
drive pull -starred -matches content
drive pull -starred -all # Pull all the starred files that aren't in the trash
drive pull -starred -all -trashed # Pull all the starred files in the trash

Like most commands .driveignore can be used to filter which files to pull.

  • Note: Use drive pull -hidden to also pull files starting with . like .git.

To selectively pull by type, e.g. file vs directory/folder, you can use the flags:

  • -files
  • -directories

drive pull -files a1/b2
drive pull -directories tf1

Verifying Checksums

Due to popular demand, by default, checksum verification is turned off. It was deemed to be quite vigorous and unnecessary for most cases, in which size + modTime differences are sufficient to detect file changes. The discussion stemmed from issue #117.

However, modTime differences on their own do not warrant a resync of the contents of a file. Modification time changes are operations of their own and can be made:

  • locally by, touching a file (chtimes).
  • remotely by just changing the modTime meta data.
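For instance, a modTime-only change can be produced locally with plain touch; the content (and therefore the checksum) is unchanged. Here report.txt is a hypothetical file, and the drive command at the end is shown as a comment, not executed:

```shell
# Create a file, then bump only its modification time (a chtimes operation).
printf 'unchanged content\n' > report.txt
before=$(stat -c %Y report.txt)   # mtime as a Unix timestamp (GNU stat)
sleep 1
touch report.txt                  # content untouched, mtime advances
after=$(stat -c %Y report.txt)
[ "$after" -gt "$before" ] && echo "mtime changed, content identical"
# With checksums on, a pull could skip this file despite the newer mtime:
#   drive pull -ignore-checksum=false report.txt
```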

To turn checksum verification back on:

drive pull -ignore-checksum=false

drive also supports piping pulled content to stdout, which can be accomplished by:

drive pull -piped path1 path2

  • In relation to issue #529, you can change the max retry count for exponential backoff. Using a count < 0 falls back to the default count of 20:

drive pull -retry-count 14 documents/2016/March videos/2013/September

Exporting Docs

By default, the pull command will export Google Docs documents as PDF files. To specify other formats, use the -export option:

drive pull -export pdf,rtf,docx,txt

To explicitly export instead of using -force

drive pull -export pdf,rtf,docx,txt -explicitly-export

By default, the exported files will be placed in a new directory suffixed by _exports in the same path. To export the files to a different directory, use the -exports-dir option:

drive pull -export pdf,rtf,docx,txt -exports-dir ~/Desktop/exports

Otherwise, you can export files to the same directory as requested in issue #660, by using pull flag -same-exports-dir. For example:

drive pull -explicitly-export -exports-dir ~/Desktop/exp -export pdf,txt,odt -same-exports-dir 
+ /test-exports/
+ /test-exports/few
+ /test-exports/influx
Addition count 3
Proceed with the changes? [Y/n]:y
Exported '/Users/emmanuelodeke/' to '/Users/emmanuelodeke/Desktop/exp/influx.pdf'
Exported '/Users/emmanuelodeke/' to '/Users/emmanuelodeke/Desktop/exp/influx.txt'
Exported '/Users/emmanuelodeke/' to '/Users/emmanuelodeke/Desktop/exp/few.pdf'

Supported formats:

  • doc, docx
  • jpeg, jpg
  • gif
  • html
  • odt
  • ods
  • rtf
  • pdf
  • png
  • ppt, pptx
  • svg
  • txt, text
  • xls, xlsx


Pushing

The push command uploads data to Google Drive to mirror data stored locally.

Like pull, you can run it without any arguments to push all of the files from the current path, or you can pass in one or more paths to push specific files or directories.

push also allows you to push content up to a desired traversal depth e.g

drive push -depth 1 head-folders

drive push expects each path to be within the context of the mounted drive. If the drive is mounted locally at ~/grdrive, drive push ~/xyz.txt may not execute as desired: each path should reference a file or directory under the root of the mounted directory.

In relation to issue #612, you can also push multiple paths that are children of the root of the mounted drive to a destination, using key -destination.

For example, to push the contents of music/Travi$+Future and integrals/complex/compilations directly to a1/b2/c3:

drive push -destination a1/b2/c3 music/Travi$+Future integrals/complex/compilations

To enable checksum verification during a push:

drive push -ignore-checksum=false

To keep your data encrypted at rest remotely on Google Drive:

drive push -encryption-password '$JiME5Umf' influx.txt

For E2E encryption discussions, see issue #543.

drive also supports pushing content piped from stdin which can be accomplished by:

drive push -piped path

To selectively push by type, i.e. files vs directories/folders, use flags -files or -directories:

drive push -files a1/b2
drive push -directories tf1

Like most commands .driveignore can be used to filter which files to push.

  • Note: Use drive push -hidden to also push files starting with . like .git.

Here is an example using drive to backup the current working directory. It pushes a tar.gz archive created on the fly. No archive file is made on the machine running the command, so it doesn't waste disk space.

tar czf - . | drive push -piped backup-$(date +"%m-%d-%Y-%T").tar.gz


In response to #107 and numerous other issues related to confusion about clashing paths, drive can now auto-rename clashing files. Pass flag -fix-clashes during a pull or push, and drive will try to rename clashing files by adding a unique suffix at the end of the name, right before the file extension (if one exists). Without -fix-clashes, drive aborts when it encounters clashing names. If you'd like to turn off this safety entirely, pass in flag -ignore-name-clashes.

In relation to #57 and @rakyll's #49, which described scenarios in which data was getting totally clobbered and was unrecoverable, drive now tries to play it safe and warns you if your data could potentially be lost, e.g. during a to-disk clobber for which you have no backup. At least with a push you have the luxury of untrashing content. To disable this safety, run drive with flag -ignore-conflict e.g:

drive pull -ignore-conflict collaboration_documents

Playing the safety card even more, if you want to get only changes that are non-clobberable, i.e. only additions, run drive with flag -no-clobber e.g:

drive pull -no-clobber Makefile

For safety with non-clobberable changes, i.e. only additions:

drive push -no-clobber

  • Due to the reasons above, drive should be able to warn you in case of total clobbers on data. To turn off this behaviour/safety, pass in the -ignore-conflict flag i.e:

drive push -ignore-conflict sure_of_content

To push without user input (i.e. without a prompt), use either:

drive push -quiet

or:

drive push -no-prompt

To get Google Drive to convert a file to its native Google Docs format

drive push -convert

As an extra feature, you can ask Google Drive to attempt Optical Character Recognition (OCR) on png, gif, pdf and jpg files:

drive push -ocr

Note: To use OCR, your account must have this feature enabled. You can find out whether your account has OCR enabled by running:

drive features

Pulling And Pushing Notes

MimeType inference is from the file's extension.

If you would like to coerce a certain mimeType during pushes, use flag -coerce-mime <short-key>. See List of MIME type short keys for the full list of short keys.

drive push -coerce-mime docx my_test_doc
  • Excluding certain operations can be done for both pull and push by passing in flag -exclude-ops <csv_crud_values>:

drive pull -exclude-ops "delete,update" vines
drive push -exclude-ops "create" sensitive_files

  • To show more information during pushes or pulls, e.g. the current operation, pass in option -verbose:

drive pull -verbose 2015/Photos content
drive push -verbose Music Fall2014

  • In relation to issue #529, you can change the max retry count for exponential backoff. Using a count < 0 falls back to the default count of 20:

drive push -retry-count 4 a/bc/def terms

You can also specify the upload chunk size used to push each file with flag -upload-chunk-size, whose value is in bytes. If you don't specify this flag, the internal Google APIs use a default of 8 MiB, from constant googleapi.DefaultUploadChunkSize. Please note that your value has to be a multiple of, and at least, the minimum upload chunk size of 256 KiB, from constant googleapi.MinUploadChunkSize. If -upload-chunk-size is not set but -upload-rate-limit is, -upload-chunk-size will be the same as -upload-rate-limit.

To limit the upload bandwidth, set -upload-rate-limit=n, where n is in KiB/s; the default is unlimited.
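Since the chunk size must be a multiple of the 256 KiB minimum, a quick way to compute a valid value is shown below. The 1 MiB choice and big.iso are arbitrary examples, and the drive command is a sketch, shown as a comment:

```shell
# googleapi.MinUploadChunkSize is 256 KiB; any -upload-chunk-size value
# must be at least this and a multiple of it.
MIN=$((256 * 1024))      # 262144 bytes
CHUNK=$((MIN * 4))       # 1 MiB
[ $((CHUNK % MIN)) -eq 0 ] && echo "valid upload chunk size: $CHUNK bytes"
# Sketch (not executed here):
#   drive push -upload-chunk-size=$CHUNK -upload-rate-limit=512 big.iso
```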

End to End Encryption

See Issue #543

This can be toggled by supplying a non-empty password, i.e.:

  • -encryption-password for a push.
  • -decryption-password for a pull.

When you supply argument -encryption-password during a push, drive will encrypt your data and store it encrypted at rest remotely. It can only be decrypted by you, when you perform a pull with the corresponding -decryption-password.

drive push -encryption-password '$400lsGO1Di3' few-ones.mp4 newest.mkv
drive pull -decryption-password '$400lsGO1Di3' few-ones.mp4 newest.mkv

If you supply the wrong password, you'll be warned that the content cannot be decrypted:

$ drive pull -decryption-password "4nG5troM" few-ones.mp4 newest.mkv
message corrupt or incorrect password

To push or pull your content normally, without any encryption or decryption attempts, simply skip passing in a password.


Publishing

The pub command publishes a file or directory globally so that anyone can view it on the web using the link returned.

drive pub photos
  • Publishing by fileId is also supported
drive pub -id 0fM9rt0Yc9RTPV1NaNFp5WlV3dlU 0fM9rt0Yc9RTPSTZEanBsamZjUXM


Unpublishing

The unpub command is the opposite of pub. It unpublishes a previously published file or directory.

drive unpub photos
  • Unpublishing by fileId is also supported
drive unpub -id 0fM9rt0Yc9RTPV1NaNFp5WlV3dlU 0fM9rt0Yc9RTPSTZEanBsamZjUXM

Sharing and Emailing

The share command enables you to share a set of files with specific users and assign them specific roles as well as specific generic access to the files. It also allows for email notifications on share.

drive share -emails, -message "This is the substring file I told you about" -role reader,writer -type group mnt/substringfinder.c projects/kmp.c
$ drive share -emails, -role reader,commenter -type user influx traversal/notes/conquest

For example to share a file with users of a mailing list and a custom message

drive share -emails -message "Here is the drive code" -role group mnt/drive
  • By default, an email notification is sent (even if -message is not specified). To turn off email notification, use -notify=false
$ drive share -notify=false -emails, -role reader,commenter -type user influx traversal/notes/conquest
  • The share command also supports sharing by fileId
drive share -emails developers@developers.devs -message "Developers, developers developers" -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U
  • You can also share a file to only those with the link. As per #568, this file won't be publicly indexed. To turn this option on when sharing the file, use flag -with-link.
drive share -with-link ComedyPunchlineDrumSound.mp3


Unsharing

The unshare command revokes access of a specific accountType to a set of files.

When no -role is given, it assumes by default that you want to revoke all access, i.e. "reader", "writer" and "commenter":

drive unshare -type group mnt/drive
drive unshare -emails, -type user,group -role reader,commenter infinity newfiles/confidential
  • Also supports unsharing by fileId
drive unshare -type group -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U

Starring Or Unstarring

To star or unstar documents,

drive star information quest/A/B/C
drive star -id 0fM9rt0Yc9RTPaDdsNzg1dXVjM0E 0fM9rt0Yc9RTPaTVGc1pzODN1NjQ 0fM9rt0Yc9RTPV1NaNFp5WlV3dlU
drive unstar information quest/A/B/C
drive unstar -id 0fM9rt0Yc9RTPaDdsNzg1dXVjM0E 0fM9rt0Yc9RTPaTVGc1pzODN1NjQ 0fM9rt0Yc9RTPV1NaNFp5WlV3dlU


Diffing

The diff command compares local files with their remote equivalents. It allows multiple paths to be passed in, e.g:

drive diff changeLogs.log notes sub-folders/

You can diff to a desired depth

drive diff -depth 2 sub-folders/ contacts/ listings.txt

You can also switch the base, either local or remote, by using flag -base-local:

drive diff -base-local=true assignments photos # To use local as the base
drive diff -base-local=false infocom photos # To use remote as the base

To diff only for shallow changes, that is only name differences, file modTimes and types, use flag -skip-content-check:

drive diff -skip-content-check


Touching

Files that exist remotely can be touched, i.e. their modification time updated to that on the remote server, using the touch command:

drive touch Photos/img001.png logs/log9907.txt

For example to touch all files that begin with digits 0 to 9:

drive touch -matches $(seq 0 9)
  • Also supports touching of files by fileId
drive touch -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U
  • You can also touch files to a desired depth of nesting within their parent folders.
drive touch -depth 3 mnt newest flux
drive touch -depth -1 -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U
drive touch -depth 1 -matches $(seq 0 9)
  • You can also touch and explicitly set the modification time for files by:
drive touch -time 20120202120000 ComedyPunchlineDrumSound.mp3
/share-testing/ComedyPunchlineDrumSound.mp3: 2012-02-02 12:00:00 +0000 UTC
  • Specify the time format that you'd like to use when specifying the time e.g
drive touch -format "2006-01-02-15:04:05.0000Z" -time "2016-02-03-08:12:15.0070Z" outf.go
/share-testing/outf.go: 2016-02-03 08:12:15 +0000 UTC

The time format has to be expressed relative to Go's reference time "Mon Jan 2 15:04:05 -0700 MST 2006". See the documentation for time formatting at time.Parse.

  • Specify the touch time as an offset from the clock on your machine, where:
  • minus (-) means ago, e.g. 30 hours ago -> -30h
  • blank or plus (+) means from now, e.g. 10 minutes -> 10m or +10m
drive touch -duration -30h ComedyPunchlineDrumSound.mp3 outf.go
/share-testing/outf.go: 2016-09-10 08:06:39 +0000 UTC
/share-testing/ComedyPunchlineDrumSound.mp3: 2016-09-10 08:06:39 +0000 UTC

Trashing And Untrashing

Files can be trashed using the trash command:

drive trash Demo

To trash files that match a prefix, e.g. all files that begin with Untitled or Make:

Note: This option uses the current working directory as the parent that the paths belong to.

drive trash -matches Untitled Make

Files that have been trashed can be restored using the untrash command:

drive untrash Demo

To untrash files that match a certain prefix pattern

drive untrash -matches pQueue photos Untitled
  • Also supports trashing/untrashing by fileId
drive trash -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U
drive untrash -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U

Emptying The Trash

Emptying the trash will permanently delete all trashed files. Caution: They cannot be recovered after running this command.

drive emptytrash


Deleting

Deleting items will PERMANENTLY remove them from your drive. This operation is irreversible.

drive delete flux.mp4
drive delete -matches onyx swp
  • Also supports deletion by fileIds
drive delete -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U


Listing Files

The list command shows a paginated list of files present remotely.

Run it without arguments to list all files in the current directory's remote equivalent:

drive list

Pass in a directory path to list files in that directory:

drive list photos

To list matches

drive list -matches mp4 go

The -trashed option can be specified to show trashed files in the listing:

drive list -trashed photos

To get detailed information about the listings e.g owner information and the version number of all listed files:

drive list -owners -l -version
  • Also supports listing by fileIds
drive list -depth 3 -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U
  • Listing allows for sorting by fields e.g. name, version, size, modtime, lastModifiedByMeTime (lvt), md5. To sort in reverse order, suffix _r or - to the selected key

e.g. to sort first by modTime, then by size largest-to-smallest, and finally by highest number of saves:

drive list -sort modtime,size_r,version_r Photos
  • For advanced listing
drive list -skip-mime mp4,doc,txt
drive list -match-mime xls,docx
drive list -exact-title url_test,Photos


Stating Files

The stat command shows detailed file information, for example people with whom it is shared, their roles and accountTypes, the fileId, etc. It is useful for determining whom and what you want set when performing share/unshare:

drive stat mnt

By default stat won't recursively stat a directory; to enable recursive stating:

drive stat -r mnt
  • Also supports stat-ing by fileIds
drive stat -r -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U


drive stat -depth 4 -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U

Printing URL

The url command prints out the URL of a file. You can specify multiple paths, relative to root or by id:

drive url Photos/2015/07/Releases intros/flux
drive url -id  0Bz5qQkvRAeVEV0JtZl4zVUZFWWx  1Pwu8lzYc9RTPTEpwYjhRMnlSbDQ 0Cz5qUrvDBeX4RUFFbFZ5UXhKZm8

Editing Description

You can edit the description of a file like this

drive edit-desc -description "This is a new file description" freshFolders/1.txt commonCore/
drive edit-description -description "This is a new file description" freshFolders/1.txt commonCore/

Even more conveniently by piping content

cat fileDescriptions | drive edit-desc -piped  targetFile influx/1.txt

Retrieving MD5 Checksums

The md5sum command quickly retrieves the md5 checksums of the files on your drive. The result can be fed into the "md5sum -c" shell command to validate the integrity of the files on Drive versus the local copies.

Check that files on Drive are present and match local files:

~/MyDrive/folder$ drive md5sum | md5sum -c

Do a two-way diff (will also locate files missing on either side)

~/MyDrive/folder$ diff <(drive md5sum) <(md5sum *)

Same as above, but include subfolders

~/MyDrive/folder$ diff <(drive md5sum -r) <(find * -type f | sort | xargs md5sum)

Compare across two different Drive accounts, including subfolders

~$ diff <(drive md5sum -r MyDrive/folder) <(drive md5sum -r OtherDrive/otherfolder)
  • Note: Running the drive md5sum command retrieves pre-computed md5 sums from Drive; its speed is proportional to the number of files on Drive. Running the shell md5sum command on local files requires reading through the files; its speed is proportional to the size of the files.
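To see the checksum-list format involved using purely local tools (sample.txt is a throwaway example; drive itself is not invoked here):

```shell
# `md5sum -c` consumes lines of the form "<md5>  <relative path>", which is
# the same shape of listing that `drive md5sum` produces for remote files.
printf 'hello\n' > sample.txt
md5sum sample.txt > sums.txt
md5sum -c sums.txt    # prints: sample.txt: OK
```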

Retrieving FileId

You can retrieve just the fileId for specified paths

drive id [-depth n] [paths...]
drive file-id [-depth n] [paths...]

For example:

drive file-id -depth 2 dup-tests bug-reproductions
# drive file-id -depth 2 dup-tests bug-reproductions
FileId                                           Relative Path
"0By5qKlgRJeV2NB1OTlpmSkg8TFU"                   "/dup-tests"
"0Bz5wQlgRJeP2QkRSenBTaUowU3c"                   "/dup-tests/influx_0"
"0Cu5wQlgRJeV2d2VmY29HV217TFE"                   "/dup-tests/a"
"0Cy5wQlgRJeX2WXVFMnQyQ2NDRTQ"                   "/dup-tests/influx"
"0Cy5wQlgRJeP2YGMiOC15OEpUZnM"                   "/bug-reproductions"
"0Cy5wQlgRJeV2MzFtTm50NVV5NW8"                   "/bug-reproductions/drive-406"
"1xmXPziMPEgq2dK-JqaUytKz_By8S_7_RVY79ceRoZwv"	 "info-bulletins"

Retrieving Quota

The quota command prints information about your drive, such as the account type, bytes used/free, and the total amount of storage available.

drive quota

Retrieving Features

The features command provides information about the features present on the drive being queried, as well as the request limit in queries per second:

drive features


Creating

drive allows you to create an empty file or folder remotely. Sample usage:

drive new -folder flux
drive new -mime-key doc bofx
drive new -mime-key folder content
drive new -mime-key presentation ProjectsPresentation
drive new -mime-key sheet Hours2015Sept
drive new -mime-key form taxForm2016 taxFormCounty
drive new flux.txt oxen.pdf # Allow auto type resolution from the extension


Opening

The open command allows files to be opened in the default file browser or the default web browser, either by path or by id, for paths that exist at least remotely:

drive open -file-browser=false -web-browser f1/f2/f3 jamaican.mp4
drive open -file-browser -id 0Bz8qQkpZAeV9T1PObvs2Y3BMQEj 0Y9jtQkpXAeV9M1PObvs4Y3BNRFk


Copying

drive allows you to copy content remotely without having to explicitly download and then reupload it:

drive copy -r mnt flagging
  • Also supports copying by fileIds
drive copy -r -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U ../content


Moving

drive allows you to move content remotely between folders. To do so:

drive move photos/2015 angles library archives/storage
  • Also supports moving by fileId
drive move -id 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 0fM9rt0Yc9kJRPSTFNk9kSTVvb0U ../../new_location

Google Drive supports a multi-parent folder structure, where one file/folder can be placed in more than one parent folder. It consumes no extra disk space in the cloud, but after pulling such a structure the same file may appear several times in your local file structure. Pushing non-deduplicated folder structures back may also break things, so be careful.

To place a file/folder into a new parent folder while keeping the old one as well, use the -keep-parent option:

$ drive move -keep-parent photos/2015 angles library second_parent_folder


Renaming

drive allows you to rename a file/folder remotely. Renaming requires two arguments, i.e. <relativePath/To/source or Id> <newName>.

To perform a rename:

drive rename url_test url_test_results
drive rename openSrc/2015 2015-Contributions
  • Also supports renaming by fileId

drive rename 0fM9rt0Yc9RTPeHRfRHRRU0dIY97 fluxing

To turn off renaming locally or remotely, use flags -local=false or -remote=false. By default both are turned on.

For example

drive rename -local=false -remote=true a/b/c/d/e/f flux

Command Aliases

drive supports a few aliases to make usage familiar to the utilities in your shell e.g:

  • cp : copy
  • ls : list
  • mv : move
  • rm : delete

Detecting And Fixing Clashes

You can deal with clashes by using command drive clashes.

  • To list clashes, you can do

drive clashes [-depth n] [paths...]
drive clashes -list [-depth n] [paths...] # To be more explicit

  • To fix clashes, you can do:
drive clashes -fix [-fix-mode mode] [-depth n] [paths...]

There are two available modes for -fix-mode:

  • rename: this is the default behavior
  • trash: trashing both new and old files

.desktop Files

As previously mentioned, Google Docs, Drawings, Presentations, Sheets etc. and all files affiliated with them cannot be downloaded raw, but only exported. Due to popular demand, Linux users desired the ability to have *.desktop files that enable such a file to be opened appropriately by an external opener. Thus, by default on Linux, drive will create *.desktop files for files that fall into this category.

To turn off this behavior, you can set flag -desktop-links to false e.g

drive pull -desktop-links=false

Fetching And Pruning Missing Index Files

  • index

If you would like to fetch missing index files for files that would otherwise not need any modifications, run:

drive index path1 path2 path3/path3.1 # To fetch any missing indices in those paths
drive index -id 0CLu4lbUI9RTRM80k8EMoe5JQY2z

You can also fetch specific files by prefix matches

drive index -matches mp3 jpg
  • prune

In case you have deleted files remotely but never through drive, and feel like you have stale indices, running drive index -prune will search your entire indices dir for index files that do not exist remotely and remove them:

drive index -prune
  • prune-and-index: to combine both operations (prune and then fetch) for indices:
drive index -all-ops

Drive server

To enable services like QR-code sharing, you'll need the server running; it serves content once invoked in a web browser, allowing resources to be accessed on another device, e.g. your mobile phone:

go get && drive-server




The server recognizes these environment variables:

  • DRIVE_SERVER_PORT : default is 8010
  • DRIVE_SERVER_HOST : default is localhost

If the required keys are not set in your env, you can supply them inline:

DRIVE_SERVER_PUB_KEY=<pub_key> DRIVE_SERVER_PRIV_KEY=<priv_key> [DRIVE...] drive-server

QR Code Share

Instead of traditionally copying long links, drive can now let you share a link to a file by means of a QR code that is generated after a redirect through your web browser.

From then on, you can use your mobile device or any other QR code reader to get to that file. For this to work, you have to have the drive-server running.

As long as the server is running on a known domain, you can start getting qr-links, i.e.:

drive qr vines/kevin-hart.mp4 notes/caches.pdf
drive qr -address books/newest.pdf maps/infoGraphic.png
drive qr -address https://my.server books/newest.pdf maps/infoGraphic.png

That should open up a browser with the QR code that when scanned will open up the desired file.


About

The about command provides information about the program as well as about your Google Drive. Think of it as a hybrid between the features and quota commands.

drive about

OR for detailed information

drive about -features -quota


Help

Run the help command without any arguments to see information about the commands that are available:

drive help

Pass in the name of a command to get information about that specific command and the options that can be passed to it.

drive help push

To get help for all the commands

drive help all

Filing Issues

In case of any issue, you can file one by using command issue, aka report-issue, aka report. It takes flags -title, -body and -piped.

  • If -piped is set, it expects to read the body from standard input.

A successful issue-filing request will open up the project's issue tracker in your web browser.

drive issue -title "Can't open my file" -body "Drive trips out every time"
drive report-issue -title "Can't open my file" -body "Drive trips out every time"
cat bugReport.txt | drive issue -piped -title "push: dump on pushing from this directory"

Revoking Account Access

To revoke OAuth Access of drive to your account, when logged in with your Google account, go to and revoke the desired permissions


To remove drive from your computer, you'll need to delete:

  • $GOPATH/bin/drive
  • $GOPATH/src/
  • $GOPATH/pkg/
  • $GOPATH/pkg/
  • Also, do not forget to revoke drive's access to your account when uninstalling.

Applying Patches

To apply code patches, e.g. in the midst of bug fixes, you'll just need a little bit of git fiddling.

For example, to patch your code with that on remote branch patch-1, go into the source code directory, fetch all content from the git remote, check out the patch branch, then run the go installation: something like this.

cd $GOPATH/src/
git fetch --all
git checkout patch-1
git pull origin patch-1
go get

Why Another Google Drive Client?

Background sync is not just hard, it is stupid. Here are my technical and philosophical rants about why it is not worth implementing:

Too racy. Data is shared between your remote resource, local disk and sometimes your sync daemon's in-memory structs. Any party could touch a file at any time. It is hard to lock these actions. You end up working with multiple isolated copies of the same file, trying to determine which is the latest version that should be synced across the different contexts.

It requires great scheduling to perform well within your existing environmental constraints. On top of that, file attributes have an impact on the sync strategy. Large files block; you wouldn't want to sit and wait for a VM image to get synced before you can start working on a tiny text file.

It needs to read your mind to understand your priorities. Which file do you need most? It needs to read your mind to foresee your future actions. Suppose I'm editing a file and saving the changes from time to time: why not wait until I feel confident enough to commit the changes remotely?

drive is not a sync daemon; instead, it provides:

Upstreaming and downstreaming. Unlike a sync command, we provide pull and push actions. The user decides what to do with their local copy, and when. Make some changes, then either push the file remotely or revert it to the remote version. You can perform these actions with a user prompt:

  echo "hello" > hello.txt
  drive push # pushes hello.txt to Google Drive
  echo "more text" >> hello.txt
  drive pull # overwrites the local changes with the remote version

Working with a specific file or directory, optionally non-recursively. If you recently uploaded a large VM image to Google Drive, yet only need a few text files to work on, simply push/pull only the exact files you'd like to work with:

  echo "hello" > hello.txt
  drive push hello.txt # pushes only the specified file
  drive pull path/to/a/b path2/to/c/d/e # pulls the remote directory recursively

Better I/O scheduling. One of the major goals is to provide better scheduling to improve upload/download times.

Possibility to support multiple accounts. Pull from or push to multiple Google Drive remotes. Possibility to support multiple backends. Why not push to Dropbox or Box as well?

Known Issues

  • It probably doesn't work on Windows.
  • Google Drive allows a directory to contain files/directories with the same name. The client doesn't handle these cases yet. We don't recommend using drive if you have such files/directories, to avoid data loss.
  • Race conditions occur if the remote is modified while we're trying to update a file. Google Drive provides resource versioning with ETags; we should use ETags to avoid racy cases.
  • drive rejects reading from namedPipes because they could infinitely hang. See issue #208.

Reaching Out

Doing anything interesting with drive or want to share your favorite tips and tricks? Check out the wiki and feel free to reach out with ideas for features or requests.


This project is not supported nor maintained by Google.

Author: odeke-em
Source Code: 
License: Apache-2.0 license



Sync-exec: Node/npm Module to Imitate Fs.execSync


An fs.execSync replacement until you get it natively from node 0.12+

Upgrading to node 0.12.x is usually safe. At that point sync-exec will use the native child_process.execSync.

You can still force the emulated version passing {forceEmulated: true} to the options argument.


Inspired by exec-sync, but comes with a few advantages:

  • no libc requirement (no node-gyp compilation)
  • no external dependencies
  • returns the exit status code
  • you can pass execSync options
  • multiple commands should work pretty safely


[sudo] npm install sync-exec


exec(cmd[, timeout][, options]);


var exec = require('sync-exec');

// { stdout: '1\n',
//   stderr: '',
//   status: 0 }
console.log(exec('echo 1'));

// You can even pass options, just like for child_process.exec
console.log(exec('ls -la', {cwd: '/etc'}));

// Times out after 1 second, throws an error
exec('sleep 3; echo 1', 1000);

How it works (if you care)

Your command's STDOUT and STDERR outputs are channeled to files, and the exit status code is saved as well. Synchronous file readers start listening to these files right after. Once outputting is done, the values get picked up, the tmp files get deleted, and the values are returned to your code.
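That flow can be sketched in shell terms (a rough analogy to the module's internals, not its actual code):

```shell
# Run a command with stdout, stderr and the exit status each captured in a
# file, then read the results back -- the same idea sync-exec implements.
tmp=$(mktemp -d)
sh -c 'echo out; echo err >&2; exit 3' >"$tmp/stdout" 2>"$tmp/stderr"
echo $? > "$tmp/status"
echo "stdout: $(cat "$tmp/stdout")"   # prints: stdout: out
echo "status: $(cat "$tmp/status")"   # prints: status: 3
```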

Author: Gvarsanyi
Source Code: 
License: MIT license


Hermann Frami


A Simple Wrapper Around Amplify AppSync Simulator

This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.


npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator


This plugin relies on your serverless.yml file and on the serverless-offline plugin.

plugins:
  - serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
  - serverless-appsync-simulator
  - serverless-offline

Note: Order is important serverless-appsync-simulator must go before serverless-offline

To start the simulator, run the following command:

sls offline start

You should see in the logs something like:

Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002


Put options under custom.appsync-simulator in your serverless.yml file

| option | default | description |
| --- | --- | --- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing invoke url for external functions |
| dynamoDb.endpoint | http://localhost:8000 | Dynamodb endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, port is taken from dynamodb-local conf |
| dynamoDb.region | localhost | Dynamodb region. Specify it if you're connecting to a remote Dynamodb instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values (mysql \| postgres) |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | - *.graphql<br>- *.vtl | Array of glob patterns to watch for hot-reloading. |


custom:
  appsync-simulator:
    location: '.webpack/service' # use webpack build directory
    dynamoDb:
      endpoint: 'http://my-custom-dynamo:8000'


By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change a yml file.

Hot-reloading relies on watchman. Make sure it is installed on your system.

You can change the files being watched with the watch option, which is then passed to watchman as the match expression.


custom:
  appsync-simulator:
    watch:
      - ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
      - "*.graphql"                                 # => a string like this is equivalent to `["match", "*.graphql"]`

Or you can opt out by setting the option to an empty array or to false.

Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that tool instead.

Resource CloudFormation functions resolution

This plugin supports resolving some resources from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other CloudFormation functions such as Fn::Join, Fn::Sub, etc.

Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported CloudFormation functions, refer to its documentation.

Basic usage

You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.

provider:
  environment:
    BUCKET_NAME:
      Ref: MyBucket # resolves to `my-bucket-name`

resources:
  Resources:
    MyDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: myTable
    MyBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket-name

# in your appsync config
dataSources:
  - type: AMAZON_DYNAMODB
    name: dynamosource
    config:
      tableName:
        Ref: MyDbTable # resolves to `myTable`

Override (or mock) values

Sometimes references cannot be resolved, because they come from an Output of another CloudFormation stack; or you might want to use mocked values in your local environment.

In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.

  • refMap takes a mapping of resource name to value pairs
  • getAttMap takes a mapping of resource name to attribute/value pairs
  • importValueMap takes a mapping of import name to value pairs


custom:
  appsync-simulator:
    refMap:
      # Override `MyDbTable` resolution from the previous example.
      MyDbTable: 'mock-myTable'
    getAttMap:
      # define ElasticSearchInstance DomainName
      ElasticSearchInstance:
        DomainEndpoint: 'localhost:9200'
    importValueMap:
      other-service-api-url: ''

# in your appsync config
dataSources:
  - type: AMAZON_ELASTICSEARCH
    name: elasticsource
    config:
      # endpoint resolves as 'http://localhost:9200'
      endpoint:
        Fn::Join:
          - ''
          - - https://
            - Fn::GetAtt:
                - ElasticSearchInstance
                - DomainEndpoint

Key-value mock notation

In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.

This notation can be used with all mocks - refMap, getAttMap and importValueMap

# somewhere in your serverless.yml:
    Fn::ImportValue: other-service-api-${self:provider.stage}-url

# mock the value in the simulator options:
custom:
  appsync-simulator:
    importValueMap:
      - key: other-service-api-${self:provider.stage}-url
        value: ''


This plugin only tries to resolve the following parts of the yml tree:

  • provider.environment
  • functions[*].environment
  • custom.appSync

If you have the need of resolving others, feel free to open an issue and explain your use case.

For now, the resources that can be automatically resolved by Ref: are:

  • DynamoDb tables
  • S3 Buckets

Feel free to open a PR or an issue to extend them as well.

External functions

When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".

custom:
  appsync-simulator:
    functions:
      addUser:
        url: http://localhost:3016/2015-03-31/functions/addUser/invocations
        method: post

Supported Resolver types

This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.

From AWS Amplify:

  • NONE

Implemented by this plugin

  • HTTP

Relational Database

Sample VTL for a create mutation

#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #set( $discard = $cols.add("$toSnake") )
  #if( $util.isBoolean($ctx.args.input[$entry]) )
      #if( $ctx.args.input[$entry] )
        #set( $discard = $vals.add("1") )
      #else
        #set( $discard = $vals.add("0") )
      #end
  #else
      #set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
  #end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
  #set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
  #set( $colStr = "($colStr)" )
#end
{
  "version": "2018-05-29",
  "statements":   ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
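For reference, the camelCase to snake_case conversion the template performs on each input key can be sketched in plain JavaScript (same regex and replacement as above):

```javascript
// camelCase -> snake_case, mirroring the VTL:
// $entry.replaceAll("([a-z])([A-Z]+)", "$1_$2").toLowerCase()
function toSnake(key) {
  return key.replace(/([a-z])([A-Z]+)/g, "$1_$2").toLowerCase();
}

console.log(toSnake("createdAt")); // "created_at"
```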

Sample VTL for an update mutation

#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
  #set( $cur = $ctx.args.input[$entry] )
  #set( $regex = "([a-z])([A-Z]+)")
  #set( $replacement = "$1_$2")
  #set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
  #if( $util.isBoolean($cur) )
      #if( $cur )
        #set ( $cur = "1" )
      #else
        #set ( $cur = "0" )
      #end
  #end
  #if ( $util.isNullOrEmpty($update) )
      #set($update = "$toSnake$equals'$cur'" )
  #else
      #set($update = "$update,$toSnake$equals'$cur'" )
  #end
#end
{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}

Sample resolver for a delete mutation

{
  "version": "2018-05-29",
  "statements":   ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}

Sample mutation response VTL with support for handling AWSDateTime

#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
    #set ($index = $index + 1)
    #if ( $column["typeName"] == "timestamptz" )
        #set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
        #set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
        #set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
        $util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
    #end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
    #set ( $s = $mapKey.split("_") )
    #set ( $camelCase="" )
    #set ( $isFirst=true )
    #foreach($entry in $s)
        #if ( $isFirst )
          #set ( $first = $entry.substring(0,1) )
        #else
          #set ( $first = $entry.substring(0,1).toUpperCase() )
        #end
        #set ( $isFirst=false )
        #set ( $stringLength = $entry.length() )
        #set ( $remaining = $entry.substring(1, $stringLength) )
        #set ( $camelCase = "$camelCase$first$remaining" )
    #end
    $util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$util.toJson($response)
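The snake_case back to camelCase loop at the end of that response template is equivalent to this short JavaScript sketch:

```javascript
// snake_case -> camelCase, mirroring the response template's loop:
// the first segment stays as-is, every following segment is capitalized.
function toCamel(key) {
  return key
    .split("_")
    .map((part, i) => (i === 0 ? part : part.charAt(0).toUpperCase() + part.slice(1)))
    .join("");
}

console.log(toCamel("created_at")); // "createdAt"
```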

Using Variable Map

Variable map support is limited and does not differentiate between number and string data types; please inject values directly if needed.

null, true, and false values will be escaped properly.

{
  "version": "2018-05-29",
  "statements":   [
    "UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
    "SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
  ],
  variableMap: {
    ":ID": $ctx.args.id,
##    ":TIMESTAMP": $ctx.args.newerThan -- this will be handled as a string!
  }
}


Author: Serverless-appsync
Source Code: 
License: MIT License

#serverless #sync #graphql 

A Simple Wrapper Around Amplify AppSync Simulator
Hermann Frami


Serverless AWS AppSync Offline Plugin


This is a wrapper for the excellent AppSync Emulator.

This Plugin Requires


  • Emulates AppSync with the AppSync Emulator; depends on Serverless-AppSync-Plugin
  • Connect to any DynamoDB or install DynamoDB Local
  • Start DynamoDB Local with all the parameters supported (e.g port, inMemory, sharedDb)
  • Table Creation for DynamoDB Local

This plugin is updated by its users; I just do maintenance and ensure that PRs are relevant to the community. In other words, if you find a bug or want a new feature, please help us by becoming one of the contributors.


Install Plugin

npm install --save serverless-appsync-offline

Then in serverless.yml add the following entry to the plugins array: serverless-appsync-offline

plugins:
  - serverless-appsync-offline

Using the Plugin

  1. Add Appsync Resource definitions to your Serverless configuration, as defined here:

Start appsync-offline

sls appsync-offline start

All CLI options are optional:

--port                    -p  Port to provide the graphql api. Default: dynamic
--dynamoDbPort            -d  Port to access the dynamoDB. Default: dynamic
--inMemory                -i  DynamoDB will run in memory, instead of using a database file. When you stop DynamoDB, none of the data will be saved. Note that you cannot specify both -dbPath and -inMemory at once.
--dbPath                  -b  The directory where DynamoDB will write its database file. If you do not specify this option, the file will be written to the current directory. Note that you cannot specify both -dbPath and -inMemory at once. For the path, the current working directory is <projectroot>/node_modules/serverless-appsync-offline/dynamob. For example, to create <projectroot>/node_modules/serverless-appsync-offline/dynamob/<mypath> you should specify -b <mypath>/ or --dbPath <mypath>/ with a forward slash at the end.
--sharedDb                -h  DynamoDB will use a single database file, instead of using separate files for each credential and region. If you specify -sharedDb, all DynamoDB clients will interact with the same set of tables regardless of their region and credential configuration.
--delayTransientStatuses  -t  Causes DynamoDB to introduce delays for certain operations. DynamoDB can perform some tasks almost instantaneously, such as create/update/delete operations on tables and indexes; however, the actual DynamoDB service requires more time for these tasks. Setting this parameter helps DynamoDB simulate the behavior of the Amazon DynamoDB web service more closely. (Currently, this parameter introduces delays only for global secondary indexes that are in either CREATING or DELETING status.)
--optimizeDbBeforeStartup -o  Optimizes the underlying database tables before starting up DynamoDB on your computer. You must also specify -dbPath when you use this parameter.

All the above options can be added to serverless.yml to set default configuration: e.g.

Minimum Options:

custom:
  appsync-offline:
    port: 62222
    dynamodb:
      server:
        port: 8000

All Options:

custom:
  appsync-offline:
    port: 62222
    dynamodb:
      client:
        # if endpoint is provided, no local database server is started and appsync connects to the endpoint - e.g. serverless-dynamodb-local
        endpoint: "http://localhost:8000"
        region: localhost
        accessKeyId: a
        secretAccessKey: a
      server:
        port: 8000
        dbPath: "./.dynamodb"
        inMemory: false
        sharedDb: false
        delayTransientStatuses: false
        optimizeDbBeforeStartup: false

How to Query:

curl -X POST \
  http://localhost:62222/graphql \
  -H 'Content-Type: application/json' \
  -H 'x-api-key: APIKEY' \
  -d '{
    "query": "{ hello { world } }"
  }'

Note: If you're using API_KEY as your authenticationType, then an x-api-key header has to be present in the request. The actual value of the key doesn't matter.

Using DynamoDB Local in your code

You need to add the following parameters to the AWS NODE SDK dynamodb constructor

e.g. for dynamodb document client sdk

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient({
    region: 'localhost',
    endpoint: 'http://localhost:8000'
});

e.g. for the plain dynamodb sdk

var dynamodb = new AWS.DynamoDB({
    region: 'localhost',
    endpoint: 'http://localhost:8000'
});

Using with serverless-offline plugin

When using this plugin with serverless-offline, it is difficult to use the above syntax, since the code should use DynamoDB Local during development and DynamoDB Online after provisioning in AWS. Therefore we suggest you use the serverless-dynamodb-client plugin in your code.
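The idea behind that plugin can be sketched as follows (a hedged illustration, not the plugin's actual code): serverless-offline sets the IS_OFFLINE environment variable when running locally, which lets you pick between DynamoDB Local and the SDK's default AWS endpoint:

```javascript
// Hypothetical helper: choose DynamoDB client options based on IS_OFFLINE,
// which serverless-offline sets when running locally.
function dynamoClientConfig(env) {
  if (env.IS_OFFLINE) {
    // point the SDK at DynamoDB Local
    return { region: 'localhost', endpoint: 'http://localhost:8000' };
  }
  // empty config: the SDK falls back to the real AWS region/endpoint
  return {};
}

// Usage: new AWS.DynamoDB.DocumentClient(dynamoClientConfig(process.env))
console.log(dynamoClientConfig({ IS_OFFLINE: 'true' }).endpoint);
```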

The serverless appsync-offline start command can be triggered automatically when using serverless-offline plugin.

Add both plugins to your serverless.yml file:

plugins:
  - serverless-appsync-offline
  - serverless-offline

Make sure that serverless-appsync-offline is above serverless-offline so it will be loaded earlier.

Now your local Appsync and the DynamoDB database will be automatically started before running serverless offline.


SLS_DEBUG=* NODE_DEBUG=appsync-* yarn offline


SLS_DEBUG=* NODE_DEBUG=appsync-* yarn sls appsync-offline start

Using with serverless-offline and serverless-webpack plugin

Run serverless offline start. In comparison with serverless offline, the start command will fire an init and an end lifecycle hook, which serverless-offline and serverless-appsync-offline need in order to shut down both resources.

Add plugins to your serverless.yml file:

plugins:
  - serverless-webpack
  - serverless-appsync-offline
  - serverless-offline # serverless-offline needs to be last in the list

custom:
  appsync-offline:
    # when using serverless-webpack it (by default) outputs all the build assets to `<projectRoot>/.webpack/service`
    # this will let appsync-offline know where to find those compiled files
    buildPrefix: .webpack/service


The AppSync Emulator does not support CloudFormation syntax (e.g. tableName: { Ref: UsersTable }) in dataSources.

Author: Aheissenberger
Source Code: 
License: MIT

#serverless #aws #sync 

Daron Moore


Immerhin: Send Patches Around to Keep The System in Sync


The core idea is to use patches to keep the UI in sync between client and server, multiple clients, or multiple windows.

It uses Immer as an interface for state mutations and provides a convenient way to group mutations into a single transaction, and enables undo/redo out of the box.

Play with it on Codesandbox


  1. Sync application state using patches
  2. Get undo/redo for free
  3. Sync to the server
  4. Server agnostic
  5. State management library agnostic (a container interface)
  6. Small bundle size
  7. Sync between iframes (not implemented yet)
  8. Sync between tabs (not implemented yet)
  9. Resolve conflicts (not implemented yet)
  10. Provide server handler (not implemented yet)


import store, { sync } from "immerhin";

// Create containers for each state. The sync engine only cares that the result has a "value" and a "dispatch(newValue)"
const container1 = createContainer(initialValue);
const container2 = createContainer(initialValue);

// - Explicitly enable containers for transactions
// - Define a namespace for each container, so that the server knows which object it has to patch.
store.register("container1", container1);
store.register("container2", container2);

// Create the actual transaction that will:
// - generate patches
// - update states
// - inform all subscribers
// - register a transaction for potential undo/redo and sync calls
store.createTransaction(
  [container1, container2],
  (value1, value2) => {
    // ...
  }
);

// Setup periodic sync with a fetch, or do this with Websocket
setInterval(async () => {
  const entries = sync();
  await fetch("/patch", { method: "POST", payload: JSON.stringify(entries) });
}, 1000);

// Undo/redo
store.undo();
store.redo();


How it works


A container is an interface that provides a .value and implements a .dispatch(value) method so that a value can be updated and propagated to all consumers.

You can use anything to create containers: it could be a Redux store, an observable, or nano state.

You can use the same container instance to subscribe to the changes across the entire application.
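For illustration, a minimal hand-rolled container satisfying that interface might look like this (a sketch; the subscribe method is an extra convenience for consumers, not something immerhin requires):

```javascript
// Minimal container: a mutable .value plus a .dispatch(newValue) that
// updates the value and notifies subscribers.
function createContainer(initialValue) {
  const listeners = new Set();
  return {
    value: initialValue,
    dispatch(newValue) {
      this.value = newValue;
      listeners.forEach((listener) => listener(newValue));
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe
    },
  };
}

const container = createContainer(0);
container.subscribe((v) => console.log("new value:", v));
container.dispatch(42); // container.value is now 42
```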

Example using nano state:

import { createContainer, useValue } from "react-nano-state";
const myContainer = createContainer(initialValue);

// I can call a dispatch from anywhere
myContainer.dispatch(newValue);

// I can subscribe to updates in React
const Component = () => {
  const [value, setValue] = useValue(myContainer);
  // ...
};

Container registration

We register containers for two reasons:

  1. To define a namespace for each container so that whoever consumes the changes knows which object to apply the patches to.
  2. Ensure that the container was intentionally registered to be synced to the server and be part of undo/redo transactions. You may not want this for every container since you can use them for ephemeral states.


store.register("myName", myContainer);

Creating a transaction

A transaction is a set of changes applied to a set of states. When you apply changes to the states inside a transaction, you are essentially telling the engine which changes are associated with the same user action so that undo/redo can use that as a single step to work with.

A call to store.createTransaction() does all of this:

  • generate patches (using Immer)
  • update states and inform all subscribers (by calling container.dispatch(newValue))
  • register a transaction for potential undo/redo and sync calls


store.createTransaction(
  [container1, container2],
  (value1, value2) => {
    // ...
  }
);


Calling undo() and redo() functions will essentially apply the right patch for the value and dispatch the update.


The sync() function returns all changes queued up for a sync since the last call. With the return value of sync(), you can do anything you want, for example, send it to your server.


// Setup periodic sync with a fetch, or do this with Websocket
setInterval(async () => {
  const entries = sync();
  await fetch("/patch", { method: "POST", payload: JSON.stringify(entries) });
}, 1000);

Example entries:

[
  {
    "transactionId": "6243062b469f516835327f65",
    "changes": [
      {
        "namespace": "root",
        "patches": [
          {
            "op": "replace",
            "path": ["children", 1],
            "value": {
              "component": "Box",
              "id": "6241f55791596f2467df9c2a",
              "style": {},
              "children": []
            }
          },
          {
            "op": "replace",
            "path": ["children", 2],
            "value": {
              "component": "Box",
              "id": "6241f55a91596f2467df9c36",
              "style": {},
              "children": []
            }
          },
          {
            "op": "replace",
            "path": ["children", "length"],
            "value": 3
          }
        ]
      }
    ]
  }
]
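On the receiving side you would apply each change's patches to the object identified by its namespace; Immer's applyPatches handles the general case. For just the "replace" operations shown above, a minimal hand-written applier (a sketch; immerhin does not ship a server handler yet, as noted earlier) could look like:

```javascript
// Apply "replace" patches from sync() entries to a state object in place.
// A real server would pick the target object by each change's namespace;
// here a single state stands in for the "root" namespace.
function applyEntries(state, entries) {
  for (const { changes } of entries) {
    for (const { patches } of changes) {
      for (const { op, path, value } of patches) {
        if (op !== "replace") continue;
        // walk to the parent of the addressed location...
        let target = state;
        for (const key of path.slice(0, -1)) target = target[key];
        // ...and replace the final key
        target[path[path.length - 1]] = value;
      }
    }
  }
  return state;
}
```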

Create a new store

If you want to have multiple separate undoable states, create a separate store for each. They add to the same sync queue in the end.

import { Store } from "immerhin";

const store = new Store();

Author: webstudio-is
Source Code:
License: MIT License

#sync #react 

Jada Grady


Sync CSV spreadsheet content to Figma - Figma Tutorial

This video tutorial is a complete step-by-step guide showing you how to sync content from a CSV spreadsheet to your Figma designs using the CopyDoc plugin –

#figma #sync 

Anne Klocko


How to PWA Offline Save and Sync with Vaadin Fusion

In this video, Marcus Hellberg teaches you how to save data locally while offline, so you can sync it to the server once you are back online.

0:00 - Intro
0:46 - App overview
1:26 - Saving data locally
3:32 - Saving data back to the server when connected
5:02 - Recap


#pwa #sync 

Max Weber


ObjectBox Tutorial - Flutter Local Database with Sync implementation

It's been a while since my last tutorial video. Today I'm back with a new video about ObjectBox and its Sync functionality. We will create an Order App and a Chef App, integrate ObjectBox into them and, last but not least, set up the Sync server.

#flutter #dart #objectbox #sync
