PHPfastcache: A High-performance Backend Cache System

⚠️ Please note that V9 is mostly a PHP 8 type-aware update of Phpfastcache, with some significant changes!

As V9 is not fully compatible with previous versions, please read the migration guide carefully to ensure the smoothest migration possible. One of the biggest changes is the configuration system, which is now an object replacing the primitive array we used to implement back then. Also, please note that V9 requires PHP 8 or higher to work properly.


Simple Yet Powerful PHP Caching Class

More information in the Wiki. The simplicity of abstraction: one class for many cache backends, so you don't need to rewrite your code over and over again.

Supported drivers to date *

💡 Feel free to propose a driver by opening a new pull request; they are welcome!

| Regular drivers | High-performance drivers | Development drivers | Cluster-aggregated drivers |
| --- | --- | --- | --- |
| Apcu (APC support removed) | Arangodb | Devnull | FullReplicationCluster |
| Dynamodb (AWS) | Cassandra | Devrandom | SemiReplicationCluster |
| Files | CouchBasev3 (Couchbase for SDK 2 support removed) | Memstatic | MasterSlaveReplicationCluster |
| Firestore (GCP) | Couchdb | | RandomReplicationCluster |
| Leveldb | Mongodb | | |
| Memcache(d) | Predis | | |
| Solr (via Solarium 6.x) | Redis | | |
| Sqlite | Ssdb | | |
| Wincache | Zend Memory Cache | | |
| Zend Disk Cache | | | |

* Driver descriptions available in DOCS/DRIVERS.md


Because caching does not mean weakening your code

Phpfastcache has been developed over the years with 3 main goals:

  • Performance: We optimized, and keep optimizing, the code to provide you with the lightest library possible
  • Security: Because caching strategies can sometimes come with unwanted vulnerabilities, we do our best to provide you with a library as safe and strong as possible
  • Portability: No matter what operating system you're working on, we did our best to provide you with the most cross-platform code possible

Rich Development API

Phpfastcache provides you with a lot of useful APIs:

Item API (ExtendedCacheItemInterface)

| Method | Return | Description |
| --- | --- | --- |
| addTag($tagName) | ExtendedCacheItemInterface | Adds a tag |
| addTags(array $tagNames) | ExtendedCacheItemInterface | Adds multiple tags |
| append($data) | ExtendedCacheItemInterface | Appends data to a string or an array (push) |
| decrement($step = 1) | ExtendedCacheItemInterface | Decrements an integer item (counterpart of increment()) |
| expiresAfter($ttl) | ExtendedCacheItemInterface | Allows you to extend the lifetime of an entry without altering its value (formerly known as touch()) |
| expiresAt($expiration) | ExtendedCacheItemInterface | Sets the expiration time for this cache item (as a DateTimeInterface object) |
| get() | mixed | The getter; obviously, returns your cache object |
| getCreationDate() | \DatetimeInterface | Gets the creation date for this cache item (as a DateTimeInterface object) * |
| getDataAsJsonString() | string | Returns the data as a well-formatted JSON string |
| getEncodedKey() | string | Returns the final, internal item identifier (key), generally used for debugging purposes |
| getExpirationDate() | \DatetimeInterface | Gets the expiration date as a Datetime object |
| getKey() | string | Returns the item identifier (key) |
| getLength() | int | Gets the data length if the data is a string, an array, or an object implementing the \Countable interface |
| getModificationDate() | \DatetimeInterface | Gets the modification date for this cache item (as a DateTimeInterface object) * |
| getTags() | string[] | Gets the tags |
| getTagsAsString($separator = ', ') | string | Gets the tags as a string separated by $separator |
| getTtl() | int | Gets the remaining Time To Live as an integer |
| increment($step = 1) | ExtendedCacheItemInterface | Increments an integer item |
| isEmpty() | bool | Checks whether the data is empty, regardless of the hit/miss status |
| isExpired() | bool | Checks whether your cache entry is expired |
| isHit() | bool | Checks whether your cache entry exists and is still valid; the equivalent of isset() |
| isNull() | bool | Checks whether the data is null, regardless of the hit/miss status |
| prepend($data) | ExtendedCacheItemInterface | Prepends data to a string or an array (unshift) |
| removeTag($tagName) | ExtendedCacheItemInterface | Removes a tag |
| removeTags(array $tagNames) | ExtendedCacheItemInterface | Removes multiple tags |
| set($value) | ExtendedCacheItemInterface | The setter; the value can be anything except resources or non-serializable objects (e.g. PDO objects, file pointers, etc.) |
| setCreationDate($expiration) | \DatetimeInterface | Sets the creation date for this cache item (as a DateTimeInterface object) * |
| setEventManager($evtMngr) | ExtendedCacheItemInterface | Sets the event manager |
| setExpirationDate() | ExtendedCacheItemInterface | Alias of expiresAt() (for more code logic) |
| setModificationDate($expiration) | \DatetimeInterface | Sets the modification date for this cache item (as a DateTimeInterface object) * |
| setTags(array $tags) | ExtendedCacheItemInterface | Sets multiple tags |

* Requires the configuration directive "itemDetailedDate" to be enabled, else a \LogicException will be thrown

ItemPool API (ExtendedCacheItemPoolInterface)

| Method (alphabetical order) | Return | Description |
| --- | --- | --- |
| appendItemsByTag($tagName, $data) | bool | Appends items by a tag |
| appendItemsByTags(array $tagNames, $data) | bool | Appends items by one of multiple tag names |
| attachItem($item) | void | (Re-)attaches an item to the pool |
| clear() | bool | Allows you to completely empty the cache and restart from the beginning |
| commit() | bool | Persists any deferred cache items |
| decrementItemsByTag($tagName, $step = 1) | bool | Decrements items by a tag |
| decrementItemsByTags(array $tagNames, $step = 1) | bool | Decrements items by one of multiple tag names |
| deleteItem($key) | bool | Deletes an item |
| deleteItems(array $keys) | bool | Deletes one or more items |
| deleteItemsByTag($tagName) | bool | Deletes items by a tag |
| deleteItemsByTags(array $tagNames, int $strategy) | bool | Deletes items by one of multiple tag names |
| detachItem($item) | void | Detaches an item from the pool |
| getConfig() | ConfigurationOption | Returns the configuration object |
| getConfigOption($optionName) | mixed | Returns a configuration value by its key $optionName |
| getDefaultConfig() | ConfigurationOption | Returns the default configuration object (not altered by the object instance) |
| getDriverName() | string | Returns the current driver name (without the namespace) |
| getEventManager() | EventManagerInterface | Gets the event manager |
| getHelp() | string | Provides very basic help for a specific driver |
| getInstanceId() | string | Returns the instance ID |
| getItem($key) | ExtendedCacheItemInterface | Retrieves an item; returns an empty item if not found |
| getItems(array $keys) | ExtendedCacheItemInterface[] | Retrieves one or more items; returns an array of items |
| getItemsAsJsonString(array $keys) | string | Returns a JSON string that represents an array of items |
| getItemsByTag($tagName, $strategy) | ExtendedCacheItemInterface[] | Returns items by a tag |
| getItemsByTags(array $tagNames, $strategy) | ExtendedCacheItemInterface[] | Returns items by one of multiple tag names |
| getItemsByTagsAsJsonString(array $tagNames, $strategy) | string | Returns a JSON string that represents an array of the corresponding items |
| getStats() | DriverStatistic | Returns the cache statistics as an object, useful for checking the disk space used by the cache, etc. |
| hasEventManager() | bool | Checks whether an event manager is set |
| hasItem($key) | bool | Tests whether an item exists |
| incrementItemsByTag($tagName, $step = 1, $strategy) | bool | Increments items by a tag |
| incrementItemsByTags(array $tagNames, $step = 1, $strategy) | bool | Increments items by one of multiple tag names |
| isAttached($item) | bool | Verifies whether an item is (still) attached |
| prependItemsByTag($tagName, $data, $strategy) | bool | Prepends items by a tag |
| prependItemsByTags(array $tagNames, $data, $strategy) | bool | Prepends items by one of multiple tag names |
| save(CacheItemInterface $item) | bool | Persists a cache item immediately |
| saveDeferred(CacheItemInterface $item) | bool | Sets a cache item to be persisted later |
| saveMultiple(...$items) | bool | Persists multiple cache items immediately |
| setEventManager(EventManagerInterface $evtMngr) | ExtendedCacheItemPoolInterface | Sets the event manager |

🆕 in V8: Multiple strategies ($strategy) are now supported for tagging:

  • TaggableCacheItemPoolInterface::TAG_STRATEGY_ONE allows you to get cache item(s) by at least ONE of the specified matching tag(s). Default behavior.
  • TaggableCacheItemPoolInterface::TAG_STRATEGY_ALL allows you to get cache item(s) by ALL of the specified matching tag(s) (the cache item can have additional tag(s))
  • TaggableCacheItemPoolInterface::TAG_STRATEGY_ONLY allows you to get cache item(s) by ONLY the specified matching tag(s) (the cache item cannot have additional tag(s))

It also supports multiple instances, tagging, and setting up a dedicated folder for caching. Look at our examples folder for more information.

Phpfastcache versioning API

Phpfastcache provides a class that gives you basic information about your Phpfastcache installation:

  • Get the API version (Item+Pool interface) with Phpfastcache\Api::getVersion();
  • Get the API changelog (Item+Pool interface) with Phpfastcache\Api::getChangelog();
  • Get the Phpfastcache version with Phpfastcache\Api::getPhpfastcacheVersion();
  • Get the Phpfastcache changelog with Phpfastcache\Api::getPhpfastcacheChangelog();

Want to keep it simple?

😅 Good news: as of V6, a PSR-16 adapter is provided to keep caching as simple as possible, using very basic getters/setters:

  • get($key, $default = null);
  • set($key, $value, $ttl = null);
  • delete($key);
  • clear();
  • getMultiple($keys, $default = null);
  • setMultiple($values, $ttl = null);
  • deleteMultiple($keys);
  • has($key);

Basic usage:

<?php

use Phpfastcache\Helper\Psr16Adapter;

$defaultDriver = 'Files';
$Psr16Adapter = new Psr16Adapter($defaultDriver);

if (!$Psr16Adapter->has('test-key')) {
    // Setter action
    $data = 'lorem ipsum';
    $Psr16Adapter->set('test-key', $data, 300); // 5 minutes
} else {
    // Getter action
    $data = $Psr16Adapter->get('test-key');
}


/**
* Do your stuff with $data
*/

Internally, the PSR-16 adapter calls the Phpfastcache API via the cache manager.


Introducing events

📣 As of V6, Phpfastcache provides an event mechanism. You can subscribe to an event by passing a Closure to an active event:

<?php

use Phpfastcache\EventManager;
use Phpfastcache\Core\Item\ExtendedCacheItemInterface;
use Phpfastcache\Core\Pool\ExtendedCacheItemPoolInterface;

/**
* Bind the event callback
*/
EventManager::getInstance()->onCacheGetItem(function(ExtendedCacheItemPoolInterface $itemPool, ExtendedCacheItemInterface $item){
    $item->set('[HACKED BY EVENT] ' . $item->get());
});

An event callback can be unbound, but you MUST have given the callback a name beforehand:

<?php
use Phpfastcache\EventManager;
use Phpfastcache\Core\Item\ExtendedCacheItemInterface;
use Phpfastcache\Core\Pool\ExtendedCacheItemPoolInterface;

/**
* Bind the event callback
*/
EventManager::getInstance()->onCacheGetItem(function(ExtendedCacheItemPoolInterface $itemPool, ExtendedCacheItemInterface $item){
    $item->set('[HACKED BY EVENT] ' . $item->get());
}, 'myCallbackName');


/**
* Unbind the event callback
*/
EventManager::getInstance()->unbindEventCallback('onCacheGetItem', 'myCallbackName');

🆕 As of V8, you can simply subscribe to every event of Phpfastcache.

More information about the implementation and the events is available in the Wiki.


Introducing new helpers

📚 As of V6, Phpfastcache provides some helpers to make your code easier to write.

More may come in the future; feel free to contribute!


Introducing aggregated cluster support

Check out the WIKI to learn how to implement the aggregated cache clustering feature.


As Fast To Implement As Opening a Beer

👍 Step 1: Include phpFastCache in your project with composer:

composer require phpfastcache/phpfastcache

🚧 Step 2: Set up your website code to implement the phpFastCache calls (with Composer)

<?php
use Phpfastcache\CacheManager;
use Phpfastcache\Config\ConfigurationOption;

// Setup File Path on your config files
// Please note that as of the V6.1 the "path" config 
// can also be used for Unix sockets (Redis, Memcache, etc)
CacheManager::setDefaultConfig(new ConfigurationOption([
    'path' => '/var/www/phpfastcache.com/dev/tmp', // or in windows "C:/tmp/"
]));

// In your class, function, you can call the Cache
$InstanceCache = CacheManager::getInstance('files');

/**
 * Try to get $products from Caching First
 * product_page is "identity keyword";
 */
$key = "product_page";
$CachedString = $InstanceCache->getItem($key);

$your_product_data = [
    'First product',
    'Second product',
    'Third product'
     /* ... */
];

if (!$CachedString->isHit()) {
    $CachedString->set($your_product_data)->expiresAfter(5);//in seconds, also accepts Datetime
    $InstanceCache->save($CachedString); // Save the cache item just like you do with doctrine and entities

    echo 'FIRST LOAD // WROTE OBJECT TO CACHE // RELOAD THE PAGE AND SEE // ';
    echo implode(', ', $CachedString->get()); // get() returns the product array, so join it for display

} else {
    echo 'READ FROM CACHE // ';
    echo $CachedString->get()[0];// Will print 'First product'
}

/**
 * use your products here or return them;
 */
echo implode('<br />', $CachedString->get());// Will echo your product list

💾 Legacy support (Without Composer)

* See the file examples/withoutComposer.php for more information.
⚠️ The legacy autoload will be removed in the next major release ⚠️
Please include Phpfastcache through composer by running composer require phpfastcache/phpfastcache.

⚡ Step 3: Enjoy! Your website is now faster than lightning!

For curious developers, there are a lot of other examples available here.

💥 Phpfastcache support

Found an issue or have an idea? Come here and let us know!

Download Details:

Author: PHPSocialNetwork
Source Code: https://github.com/PHPSocialNetwork/phpfastcache 
License: MIT license


7 Favorite Node.js CouchDB Libraries

In today's post we will learn about 7 Favorite Node.js CouchDB Libraries. 

CouchDB is the database for the web. CouchDB is a powerful system that uses JSON for storing your documents. You can also use the power of JavaScript to index, combine, and transform your documents with this amazing technology. It is one of the most effective tools for serving web apps across various devices, and its feature-packed qualities make it easy even for beginners to churn out applications and solutions for the web.

1 - Couchdb-nano

Nano: The official Apache CouchDB library for Node.js

Installation

  1. Install npm
  2. npm install nano

or save nano as a dependency of your project with

npm install --save nano

Note the minimum required version of Node.js is 10.

Getting started

To use nano you need to connect it to your CouchDB install, to do that:

const nano = require('nano')('http://localhost:5984');

Note: The URL you supply may also contain authentication credentials e.g. http://admin:mypassword@localhost:5984.

To create a new database:

nano.db.create('alice');

and to use an existing database:

const alice = nano.db.use('alice');

Under the hood, calls like nano.db.create make HTTP API calls to the CouchDB service. Such operations are asynchronous. There are two ways to receive the asynchronous data back from the library.
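For instance, a minimal sketch of both styles, mirroring the full nano README quoted later in this post (database name illustrative):

// Promise style
nano.db.create('alice').then((data) => {
  // success - response is in 'data'
}).catch((err) => {
  // failure - error information is in 'err'
})

// Callback style
nano.db.create('alice', (err, data) => {
  // errors are in 'err' & response is in 'data'
})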

View on Github

2 - Cradle

A high-level CouchDB client for Node.js

Installation

  $ npm install cradle

Synopsis

  var assert = require('assert'); // needed for assert.equal below
  var cradle = require('cradle');
  var db = new(cradle.Connection)().database('starwars');

  db.get('vader', function (err, doc) {
      doc.name; // 'Darth Vader'
      assert.equal(doc.force, 'dark');
  });

  db.save('skywalker', {
      force: 'light',
      name: 'Luke Skywalker'
  }, function (err, res) {
      if (err) {
          // Handle error
      } else {
          // Handle success
      }
  });
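If the database doesn't exist yet, you can create it first. A brief hedged addition (db.create() is part of cradle's documented API; the error handling is illustrative):

  db.create(function (err) {
      // the database may already exist; inspect `err` before ignoring it
  });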

View on Github

3 - Node-couchdb

ES2015-compatible package to interact with CouchDB

Installation

npm install node-couchdb --save

API

Constructor

node-couchdb exports a constructor which accepts one object argument with the following properties: host (127.0.0.1 by default), port (5984 by default), protocol (http by default), cache (one of the plugins, null by default), auth (an object with properties {user, pass}), and timeout for all requests (5000 by default). All object fields are optional.

ES Module:

import NodeCouchDb from 'node-couchdb';

Common JS:

const NodeCouchDb = require('node-couchdb');
// node-couchdb instance with default options
const couch = new NodeCouchDb({
    auth: {
        user: AUTH_USER,
        pass: AUTH_PASS
    }
});

// node-couchdb instance with Memcached
const MemcacheNode = require('node-couchdb-plugin-memcached');
const couchWithMemcache = new NodeCouchDb({
    cache: new MemcacheNode,
    auth: {
        user: AUTH_USER,
        pass: AUTH_PASS
    }
});

// node-couchdb instance talking to external service
const couchExternal = new NodeCouchDb({
    host: 'couchdb.external.service',
    protocol: 'https',
    port: 6984,
    auth: {
        user: AUTH_USER,
        pass: AUTH_PASS
    }
});

All node-couchdb methods return Promise instances, which resolve if everything works as expected and reject with an Error instance that usually has code and body fields. See the package source and tests for more info.
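For example, a hedged sketch of that Promise contract (listDatabases() is assumed from the package's documented API; the fields checked are those mentioned above):

couch.listDatabases().then((dbs) => {
    console.log(dbs); // array of database names
}).catch((err) => {
    // rejected Errors usually carry `code` and `body` fields
    console.error(err.code, err.body);
});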

View on Github

4 - Caminte

CaminteJS is a cross-db ORM for Node.js, providing a common interface to access most popular database formats.

Installation

First install node.js. Then:

$ npm install caminte --save

Usage

var caminte = require('caminte');
var Schema  = caminte.Schema;
var schema  = new Schema('redis', {port: 6379});

// define models
var Post = schema.define('Post', {
    title:     { type: schema.String,  limit: 255 },
    userId:    { type: schema.Number },
    content:   { type: schema.Text },
    created:   { type: schema.Date,    default: Date.now },
    updated:   { type: schema.Date },
    published: { type: schema.Boolean, default: false, index: true }
});

var User = schema.define('User', {
    name:       { type: schema.String,  limit: 255 },
    bio:        { type: schema.Text },
    email:      { type: schema.String,  limit: 155, unique: true },
    approved:   { type: schema.Boolean, default: false, index: true },
    joinedAt:   { type: schema.Date,    default: Date.now },
    age:        { type: schema.Number },
    gender:     { type: schema.String,  limit: 10 }
});

// setup hooks
Post.afterUpdate = function (next) {
    this.updated = new Date();
    this.save();
    next();
};

// define any custom method for instance
User.prototype.getNameAndAge = function () {
    return this.name + ', ' + this.age;
};

// define scope
Post.scope('active', { published : true });

// setup validations
User.validatesPresenceOf('name', 'email');
User.validatesUniquenessOf('email', {message: 'email is not unique'});
User.validatesInclusionOf('gender', {in: ['male', 'female']});
User.validatesNumericalityOf('age', {int: true});

// setup relationships
User.hasMany(Post,   {as: 'posts',  foreignKey: 'userId'});

// Common API methods

var user = new User({ 
    name:       'Alex',
    email:      'example@domain.aga',
    age:        40,
    gender:     'male'
});

user.isValid(function (valid) {
    if (!valid) {
        return console.log(user.errors);
    }
    user.save(function (err) {
        if (err) {
            return console.log(err);
        }
        console.log('User created');
    });
})

// just instantiate model
new Post
// save model (of course async)
Post.create(cb);
// all posts
Post.all(cb)
// all posts by user
Post.all({where: {userId: user.id}, order: 'id', limit: 10, skip: 20});
// the same as prev
user.posts(cb)
// get one latest post
Post.findOne({where: {published: true}, order: 'date DESC'}, cb);
// same as new Post({userId: user.id});
user.posts.build
// save as Post.create({userId: user.id}, cb);
user.posts.create(cb)
// find instance by id
User.findById(1, cb)
// count instances
User.count([conditions, ]cb)
// destroy instance
user.destroy(cb);
// destroy all instances
User.destroyAll(cb);

// models also accessible in schema:
schema.models.User;
schema.models.Post;

View on Github

5 - Node-couchdb

An extendable couch client lib for Nodejs

Installation

npm install couch-db --save

Usage

About Options

Most classes in this lib accept an options object that lets you configure how requests are made to the server.

Any option that you can pass to request can be set here, so you can control things like strictSSL and proxy yourself.

Are there any additional options used by couch-db itself?

Only one: request. The request option lets the user take full control of how requests are sent to the server; of course, you have to follow the request API. Via this option you can add a cache layer to reduce requests (via modules like modified), or even intercept the response.

So, except for the request field, you can treat the options the same as the options in request.

You can go and see the doc there.

Create a couch server

var couch = require('couch-db'),
    server = couch('http://localhost:5984');
/// or 
server = couch('https://localhost:6984', {
    rejectUnauthorized: false // this will pass to request
});

Or

var CouchDB = require('couch-db').CouchDB;
    server = new CouchDB('http://localhost:5984');

View on Github

6 - Couchdb-promises

Yet another Node module for CouchDB that uses ES6 promises. No dependencies.

Installation

npm install couchdb-promises

Example.js

const db = require('couchdb-promises')({
  baseUrl: 'http://localhost:5984', // required
  requestTimeout: 10000
})
const dbName = 'testdb'

Get info

db.getInfo()
.then(console.log)
// { headers: { ... },
//   data:
//    { couchdb: 'Welcome',
//      version: '2.0.0',
//      vendor: { name: 'The Apache Software Foundation' } },
//   status: 200,
//   message: 'OK - Request completed successfully',
//   duration: 36 }
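Continuing in the same promise-chain style, a hedged sketch that creates a database and adds a document (createDatabase and createDocument, and their exact signatures, are assumptions based on the package's naming; dbName comes from the example above):

db.createDatabase(dbName)
  .then(() => db.createDocument(dbName, { name: 'Bob' })) // create db, then a doc in it
  .then(console.log)
  .catch(console.error)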

View on Github

7 - Node-couchdb-api

An async wrapper for the CouchDB API, following Node.js conventions

Installation

$ npm install couchdb-api

Usage

var couchdb = require("couchdb-api");

// connect to a couchdb server (defaults to http://localhost:5984)
var server = couchdb.srv();

// test it out!
server.info(function (err, response) {
    console.log(response);

    // should get { couchdb: "Welcome", version: "1.0.1" }
    // if something went wrong, the `err` argument would provide the error that CouchDB provides
});

// select a database
var db = server.db("my-database");

db.info(function (err, response) {
    console.log(response);

    // should see the basic statistics for your test database
    // if you chose a non-existent db, you'd get { error: "not_found", reason: "no_db_file" } via `err`
});

View on Github

Thank you for following this article. 



Couchup: A CouchDB Implementation on top Of Levelup

couchup

couchup is a database. The goal is to build a data model well suited for mobile applications that may need to work offline, sync later, and maintain smart client-side caches. This data model is inspired by CouchDB but diverges greatly in the way it handles and resolves revision history and conflicts. couchup implements a "most writes win" conflict resolution scheme and does not require, or even allow, user-specified conflict resolution.

Another goal of couchup is to be performant and modular. This repository only implements the base document storage layer. Indexes, attachments, and replicators are implemented as additional modules.

The tradeoffs couchup has made in revision tree storage along with some other simple optimizations mean that couchup already has better write performance than CouchDB and the same consistency guarantees.

API

var couchup = require('couchup')
  , store = couchup('./dbdir')
  ;

store.put('databaseName', function (e, db) {
  if (e) throw e
  db.put({_id:'key', prop:'value'}, function (e, info) {
    if (e) throw e
    db.put({_id:'key', _rev:info.rev, prop:'newvalue'}, function (e, info) {
      if (e) throw e
      db.get('key', function (e, doc) {
        if (e) throw new Error('doc not found')
        console.log(doc)
      })
    })
  })
})
db.compact() // remove old revisions and sequences from the database.
db.info(function (e, i) {
  if (e) throw e
  console.log(i.update_seq, i.doc_count)
})

SLEEP Support

var changes = db.sleep()
changes.on('entry', function (entry) {
  console.log(entry.seq, entry.id)
})
changes.on('end', function () {
  console.log('done')
})

It can be used with sleep-ref for replicating over the network via TCP, TLS, HTTP, and HTTPS.

var sleepref = require('sleep-ref')
  , s = sleepref(db.sleep.bind(db))
  ;
http.createServer(s.httpHandler.bind(s)).listen(8080, function () {
  db2.pull('http://localhost:8080/', function (e) {
    if (e) throw e
    // all replicated over the network
  })
})

You can also replicate between database objects in process.

db.clone(db2, function (e) {
  if (e) throw e
  // all replicated
})

Incompatibilities w/ CouchDB

Pull replication from CouchDB works and will continue to work continuously if you aren't updating the couchup node you're writing it to. Bi-Directional replication with CouchDB will eventually result in conflicts on the CouchDB side because couchup converts CouchDB's revision tree to a linear revision sequence.

Similarly, push replication to Apache CouchDB will work once but writing again will likely cause unnecessary conflicts on the CouchDB side.

Author: Mikeal
Source Code: https://github.com/mikeal/couchup 
License: 


CouchDB vs. LevelDB: Comparing State Database Options

Hyperledger Fabric is an open-source, permissioned blockchain framework used to develop enterprise blockchain solutions and applications. The Hyperledger Fabric project is one of the major blockchain projects within Hyperledger, a multi-project collaborative effort started in 2015 by the Linux Foundation to advance cross-industry blockchain technologies. Currently, Hyperledger Fabric has more than 120,000 contributing organizations and more than 15,000 contributing engineers working together to advance the project.

Hyperledger Fabric has a modular (plug-and-play) architecture and can reach more than 1,000 transactions per second (TPS), upgradable to 20,000 TPS. Hyperledger Fabric also provides a particular set of features that sets it apart from other blockchain technologies and makes adoption by enterprises easier.

Hyperledger Fabric has seen many implementations and is widely used in industries such as banking and finance, international trade, and the Internet of Things. Hyperledger Fabric consists of many components, such as endorsers, committers, and databases.

Options for interacting on a Hyperledger Fabric network

To interact on a Hyperledger Fabric network, chaincodes (also known as smart contracts) are used. These smart contracts can be written in different programming languages, including Go, Node.js, and Java.

Hyperledger Fabric stores the state of transactions executed by chaincodes in databases. There are two main types of records stored by a Hyperledger Fabric network:

  • Transaction logs: This is the blockchain aspect of the Hyperledger Fabric network, comprising a record of all the transactions that resulted in the current state of the network. The data stored in the transaction log is immutable and is stored in a LevelDB database
  • World state: This is the current value of the ledger at a given point in time. The main database of a Fabric network is, by default, a LevelDB database and is only populated when a given transaction completes; however, it can be replaced with a CouchDB database

In this article, we are going to discuss how to choose between the CouchDB and LevelDB databases as your world state database.

Enjoy!

Why not use a database like PostgreSQL or MySQL?

Hyperledger Fabric only provides support for LevelDB and CouchDB, and does not support databases such as PostgreSQL, MongoDB, or MySQL. A discussion in this Hyperledger forum states that, in order to provide support for pluggable databases, many changes would have to be made to the network and the database itself. The project has been removed from Hyperledger's development pipeline for the time being.

However, there are still ways to set up a database such as PostgreSQL as your state database. You can fork the Fabric project and plug in a database like PostgreSQL, MySQL, or MongoDB by following these instructions. This IBM example also shows a method you can use to set up PostgreSQL in your Hyperledger Fabric project.

What is LevelDB?

The LevelDB database is a fast database that allows you to store key-value pairs. LevelDB was written at Google by Sanjay Ghemawat and Jeff Dean, and it is currently the only database used by Hyperledger Fabric to store transaction logs. As mentioned earlier, it is also the default database used to store the world state in Hyperledger Fabric.

LevelDB key-value pairs are stored as arbitrary byte arrays. LevelDB provides support only for key, key range, and composite key queries, i.e. Put(key, value), GET(key, value), and Delete(key).

There is no support for SQL queries, indexes, or relational data models, as LevelDB is not a SQL database.
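To make those operations concrete, here is a minimal sketch from Node.js using the community level package (not part of Hyperledger Fabric; the database path and keys are illustrative):

const level = require('level')

async function main () {
  // open (or create) a LevelDB database on disk, storing values as JSON
  const db = level('./state-db', { valueEncoding: 'json' })

  await db.put('asset1', { owner: 'alice', value: 100 }) // Put(key, value)
  const asset = await db.get('asset1')                   // GET(key)
  console.log(asset)
  await db.del('asset1')                                 // Delete(key)
}

main()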

Advantages of LevelDB

  • LevelDB is a simple, fast database with less overhead compared to CouchDB
  • Data is stored and mapped with keys
    • You can perform simple operations, such as Put(key, value), GET(Key, Value), and Delete(Key), which makes LevelDB less complex to use
  • LevelDB supports forward and backward iteration over the data, snapshots, and changes in atomic batches

Limitations of LevelDB

  • There is no support for indexes and queries
  • Only simple operations, such as Put(key, value), GET(key, value), and Delete(key), are possible
  • None of the familiar features of a relational data model, as it is not a SQL database; data can only be stored in key-value pairs
  • Querying large datasets from a LevelDB database is inefficient and complex, as complex queries and indexes are not allowed

What is CouchDB?

The CouchDB database is an open-source, document-oriented database implemented in Erlang that collects and stores data in JSON format. Hyperledger Fabric users can replace the default LevelDB world state database with a CouchDB database, which is also the only alternative available.

CouchDB is a NoSQL database that allows rich queries of the stored JSON content. The default query language in a CouchDB database is JavaScript, and data can be stored without a set schema in JSON format.

CouchDB supports indexes and pagination. You can also perform queries based on keys, as with the LevelDB database.

Advantages of CouchDB

  • CouchDB allows JSON queries and indexes, making it easier to meet the auditing and reporting requirements of a Hyperledger Fabric network than with LevelDB
  • As a document-oriented database, CouchDB allows you to store data as arrays or dictionaries in the database
  • CouchDB data is exposed via an HTTP URI, which makes it possible to perform HTTP operations (GET, DELETE, PUT, POST) against your data (see the sketch after this list)
  • CouchDB makes querying large chaincode datasets more efficient and flexible with indexes
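As an illustration of that HTTP interface, a minimal hedged sketch using Node's built-in fetch (host, credentials, and document names are illustrative):

(async () => {
  const auth = 'Basic ' + Buffer.from('admin:password').toString('base64')

  // write a document over plain HTTP
  const put = await fetch('http://localhost:5984/mydb/asset1', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json', Authorization: auth },
    body: JSON.stringify({ owner: 'alice', value: 100 })
  })
  console.log(await put.json()) // e.g. { ok: true, id: 'asset1', rev: '1-...' }

  // read it back with GET
  const get = await fetch('http://localhost:5984/mydb/asset1', {
    headers: { Authorization: auth }
  })
  console.log(await get.json())
})()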

Limitations of CouchDB

  • CouchDB runs as a separate database alongside the Hyperledger Fabric network
  • More commitment is needed in terms of database administration, setup, configuration, etc. to run the network

Choosing a state database for your next Hyperledger Fabric project

Before choosing a state database, you should know that you must settle on a database up front, because Hyperledger Fabric does not allow you to change the database after the network has been launched.

If you are running a simple Hyperledger Fabric project with a low volume of transactions, LevelDB's ease of use and speed make it highly preferable.

However, for more complex projects, a CouchDB database is preferable. CouchDB has all the capabilities of the LevelDB database, such as getting and setting data based on keys, but it also provides access to additional features, such as storing data in JSON format, indexes, issuing queries, and pagination, all of which make working with your data easier.

The features provided by CouchDB make it easier to manage a Hyperledger Fabric network with large transaction volumes. CouchDB also makes it easy to complete the auditing and reporting requirements needed in a modern organization running a Hyperledger Fabric network.

Conclusion

In this article, we talked about the world state database options available in Hyperledger Fabric: CouchDB and LevelDB. We explained the advantages and limitations of each database and discussed how to choose a database for your next Hyperledger Fabric project.

Source: https://blog.logrocket.com/couchdb-vs-leveldb-comparing-state-database-options/


Nano 10.0: The Official Apache CouchDB Library for Node.js

Nano

Official Apache CouchDB library for Node.js.

Features:

  • Minimalistic - There is only a minimum of abstraction between you and CouchDB.
  • Pipes - Proxy requests from CouchDB directly to your end user. ( ...AsStream functions only)
  • Promises - The vast majority of library calls return native Promises.
  • TypeScript - Detailed TypeScript definitions are built in.
  • Errors - Errors are proxied directly from CouchDB: if you know CouchDB you already know nano.

Installation

  1. Install npm
  2. npm install nano

or save nano as a dependency of your project with

npm install --save nano

Note the minimum required version of Node.js is 10.

Table of contents

Getting started

Tutorials & screencasts

Configuration

Database functions

Document functions

Partitioned database functions

Multipart functions

Attachments functions

Views and design functions

Using cookie authentication

Advanced features

Tests

Release

Getting started

To use nano you need to connect it to your CouchDB install, to do that:

const nano = require('nano')('http://localhost:5984');

Note: The URL you supply may also contain authentication credentials e.g. http://admin:mypassword@localhost:5984.

To create a new database:

nano.db.create('alice');

and to use an existing database:

const alice = nano.db.use('alice');

Under the hood, calls like nano.db.create make HTTP API calls to the CouchDB service. Such operations are asynchronous. There are two ways to receive the asynchronous data back from the library:

  1. Promises
nano.db.create('alice').then((data) => {
  // success - response is in 'data'
}).catch((err) => {
  // failure - error information is in 'err'
})

or in the async/await style:

try {
  const response = await nano.db.create('alice')
  // succeeded
  console.log(response)
} catch (e) {
  // failed
  console.error(e)
}
  2. Callbacks
nano.db.create('alice', (err, data) => {
  // errors are in 'err' & response is in 'data'
})

In nano the callback function always receives three arguments:

  • err - The error, if any.
  • body - The HTTP response body from CouchDB, if no error. JSON-parsed body, binary for non-JSON responses.
  • header - The HTTP response header from CouchDB, if no error.

The documentation will follow the async/await style.


A simple but complete example in the async/await style:

async function asyncCall() {
  await nano.db.destroy('alice')
  await nano.db.create('alice')
  const alice = nano.use('alice')
  const response = await alice.insert({ happy: true }, 'rabbit')
  return response
}
asyncCall()

Running this example will produce:

you have inserted a document with an _id of rabbit.
{ ok: true,
  id: 'rabbit',
  rev: '1-6e4cb465d49c0368ac3946506d26335d' }

You can also see your document in futon (http://localhost:5984/_utils).

Configuration

Configuring nano to use your database server is as simple as:

const nano = require('nano')('http://localhost:5984')
const db = nano.use('foo');

If you don't need to instrument database objects you can simply:

// nano parses the URL and knows this is a database
const db = require('nano')('http://localhost:5984/foo');

You can also specify further configuration options by passing an object literal to the require instead of a URL string:

// nano parses the URL and knows this is a database
const opts = {
  url: 'http://localhost:5984/foo',
  requestDefaults: {
    proxy: {
      protocol: 'http',
      host: 'myproxy.net'
    },
    headers: {
      customheader: 'MyCustomHeader'
    }
  }
};
const db = require('nano')(opts);

Nano works perfectly well over HTTPS as long as the SSL cert is signed by a certification authority known by your client operating system. If you have a custom or self-signed certificate, you may need to create your own HTTPS agent and pass it to Nano e.g.

const httpsAgent = new https.Agent({
  ca: '/path/to/cert',
  rejectUnauthorized: true,
  keepAlive: true,
  maxSockets: 6
})
const nano = Nano({
  url: process.env.COUCH_URL,
  requestDefaults: {
    agent: httpsAgent,
  }
})

Please check axios for more information on the defaults. They support features like proxies, timeouts, etc.

You can tell nano to not parse the URL (maybe the server is behind a proxy, is accessed through a rewrite rule or other):

// nano does not parse the URL and return the server api
// "http://localhost:5984/prefix" is the CouchDB server root
const couch = require('nano')(
  { url : "http://localhost:5984/prefix"
    parseUrl : false
  });
const db = couch.use('foo');

Pool size and open sockets

A very important configuration parameter, if you have a high-traffic website and are using nano, is the HTTP pool size. By default, the Node.js HTTP global agent allows an infinite number of active connections to run simultaneously. This can be limited to a user-defined number (maxSockets) of requests that are "in flight", while others are kept in a queue. Here's an example explicitly using the Node.js HTTP agent configured with custom options:

const http = require('http')
const myagent = new http.Agent({
  keepAlive: true,
  maxSockets: 25
})

const db = require('nano')({
  url: 'http://localhost:5984/foo',
  requestDefaults : {
    agent : myagent
  }
});

TypeScript

There is a full TypeScript definition included in the nano package. Your TypeScript editor will show you hints as you write your code with the nano library and your own custom classes:

import * as Nano  from 'nano'

let n = Nano('http://USERNAME:PASSWORD@localhost:5984')
let db = n.db.use('people')

interface iPerson extends Nano.MaybeDocument {
  name: string,
  dob: string
}

class Person implements iPerson {
  _id: string
  _rev: string
  name: string
  dob: string

  constructor(name: string, dob: string) {
    this._id = undefined
    this._rev = undefined
    this.name = name
    this.dob = dob
  }

  processAPIResponse(response: Nano.DocumentInsertResponse) {
    if (response.ok === true) {
      this._id = response.id
      this._rev = response.rev
    }
  }
}

let p = new Person('Bob', '2015-02-04')
db.insert(p).then((response) => {
  p.processAPIResponse(response)
  console.log(p)
})

Database functions

nano.db.create(name, [opts], [callback])

Creates a CouchDB database with the given name, with options opts.

await nano.db.create('alice', { n: 3 })

nano.db.get(name, [callback])

Get information about the database name:

const info = await nano.db.get('alice')

nano.db.destroy(name, [callback])

Destroys the database name:

await nano.db.destroy('alice')

nano.db.list([callback])

Lists all the CouchDB databases:

const dblist = await nano.db.list()

nano.db.listAsStream()

Lists all the CouchDB databases as a stream:

nano.db.listAsStream()
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

nano.db.compact(name, [designname], [callback])

Compacts name; if designname is specified, also compacts its views.
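For example (database and design document names illustrative):

await nano.db.compact('alice')
await nano.db.compact('alice', 'characters')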

nano.db.replicate(source, target, [opts], [callback])

Replicates source to target with options opts. The target database has to exist; add create_target: true to opts to create it prior to replication:

const response = await nano.db.replicate('alice',
                  'http://admin:password@otherhost.com:5984/alice',
                  { create_target:true })

nano.db.replication.enable(source, target, [opts], [callback])

Enables replication using the new CouchDB api from source to target with options opts. target has to exist, add create_target:true to opts to create it prior to replication. Replication will survive server restarts.

const response = await nano.db.replication.enable('alice',
                  'http://admin:password@otherhost.com:5984/alice',
                  { create_target:true })

nano.db.replication.query(id, [opts], [callback])

Queries the state of replication using the new CouchDB API. The id comes from the response given by the call to replication.enable:

const r = await nano.db.replication.enable('alice',
                  'http://admin:password@otherhost.com:5984/alice',
                   { create_target:true })
const q = await nano.db.replication.query(r.id)

nano.db.replication.disable(id, [opts], [callback])

Disables replication using the new CouchDB API. The id comes from the response given by the call to replication.enable:

const r = await nano.db.replication.enable('alice',
                   'http://admin:password@otherhost.com:5984/alice',
                   { create_target:true })
await nano.db.replication.disable(r.id);

nano.db.changes(name, [params], [callback])

Asks for the changes feed of name; params contains additions to the query string.

const c = await nano.db.changes('alice')

nano.db.changesAsStream(name, [params])

Same as nano.db.changes but returns a stream.

nano.db.changesAsStream('alice').pipe(process.stdout);

nano.db.info([callback])

Gets database information:

const info = await nano.db.info()

nano.use(name)

Returns a database object that allows you to perform operations against that database:

const alice = nano.use('alice');
await alice.insert({ happy: true }, 'rabbit')

The database object can be used to access the Document Functions.

nano.db.use(name)

Alias for nano.use

nano.db.scope(name)

Alias for nano.use

nano.scope(name)

Alias for nano.use

nano.request(opts, [callback])

Makes a custom request to CouchDB. This can be used to create your own HTTP request to the CouchDB server, to perform operations where there is no nano function that encapsulates it. The available opts are:

  • opts.db – the database name
  • opts.method – the http method, defaults to get
  • opts.path – the full path of the request, overrides opts.doc and opts.att
  • opts.doc – the document name
  • opts.att – the attachment name
  • opts.qs – query string parameters, appended after any existing opts.path, opts.doc, or opts.att
  • opts.content_type – the content type of the request, default to json
  • opts.headers – additional http headers, overrides existing ones
  • opts.body – the document or attachment body
  • opts.encoding – the encoding for attachments
  • opts.multipart – array of objects for multipart request
  • opts.stream - if true, a request object is returned. Default false and a Promise is returned.
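For instance, a minimal sketch using only the opts listed above (database and document names illustrative):

const response = await nano.request({
  db: 'alice',
  doc: 'rabbit',
  method: 'get',
  qs: { revs_info: true } // appended as query string parameters
})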

nano.relax(opts, [callback])

Alias for nano.request

nano.config

An object containing the nano configurations, possible keys are:

  • url - the CouchDB URL
  • db - the database name

nano.updates([params], [callback])

Listen to db updates, the available params are:

  • params.feed – Type of feed. Can be one of:
    • longpoll: Closes the connection after the first event.
    • continuous: Sends a line of JSON per event. Keeps the socket open until timeout.
    • eventsource: Like continuous, but sends the events in EventSource format.
  • params.timeout – Number of seconds until CouchDB closes the connection. Default is 60.
  • params.heartbeat – Whether CouchDB will send a newline character (\n) on timeout. Default is true.
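For example, a hedged sketch using the params described above (values illustrative):

const dbUpdates = await nano.updates({ feed: 'longpoll', timeout: 30 })
console.log(dbUpdates)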

Document functions

db.insert(doc, [params], [callback])

Inserts doc in the database with optional params. If params is a string, it's assumed it is the intended document _id. If params is an object, it's passed as query string parameters and docName is checked for defining the document _id:

const alice = nano.use('alice');
const response = await alice.insert({ happy: true }, 'rabbit')

The insert function can also be used with the method signature db.insert(doc,[callback]), where the doc contains the _id field e.g.

const alice = nano.use('alice')
const response = await alice.insert({ _id: 'myid', happy: true })

and also used to update an existing document, by including the _rev token in the document being saved:

const alice = nano.use('alice')
const response = await alice.insert({ _id: 'myid', _rev: '1-23202479633c2b380f79507a776743d5', happy: false })

db.destroy(docname, rev, [callback])

Removes a document from CouchDB whose _id is docname and whose revision is rev:

const response = await alice.destroy('rabbit', '3-66c01cdf99e84c83a9b3fe65b88db8c0')

db.get(docname, [params], [callback])

Gets a document from CouchDB whose _id is docname:

const doc = await alice.get('rabbit')

or with optional query string params:

const doc = await alice.get('rabbit', { revs_info: true })

db.head(docname, [callback])

Same as get, but a lightweight version that returns headers only:

const headers = await alice.head('rabbit')

Note: if you call alice.head in the callback style, the headers are returned to you as the third argument of the callback function.

db.bulk(docs, [params], [callback])

Bulk operations (update/delete/insert) on the database; refer to the CouchDB doc, e.g.:

const documents = [
  { a:1, b:2 },
  { _id: 'tiger', striped: true}
];
const response = await alice.bulk({ docs: documents })

db.list([params], [callback])

List all the docs in the database.

const doclist = await alice.list()
doclist.rows.forEach((doc) => {
  console.log(doc);
});

or with optional query string additions params:

const doclist = await alice.list({include_docs: true})

db.listAsStream([params])

List all the docs in the database as a stream.

alice.listAsStream()
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.fetch(docnames, [params], [callback])

Bulk fetch of the database documents. docnames are specified as per the CouchDB doc. Additional query string params can be specified; include_docs is always set to true.

const keys = ['tiger', 'zebra', 'donkey'];
const data = await alice.fetch({keys: keys})

db.fetchRevs(docnames, [params], [callback])

** changed in version 6 **

Bulk fetch of the revisions of the database documents. docnames are specified as per the CouchDB doc. Additional query string params can be specified; this is the same method as fetch, but include_docs is not automatically set to true.
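For example, mirroring the fetch call above but returning only the revisions:

const keys = ['tiger', 'zebra', 'donkey'];
const revs = await alice.fetchRevs({keys: keys})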

db.createIndex(indexDef, [callback])

Create index on database fields, as specified in CouchDB doc.

const indexDef = {
  index: { fields: ['foo'] },
  name: 'fooindex'
};
const response = await alice.createIndex(indexDef)

Reading Changes Feed

Nano provides a low-level API for making calls to CouchDB's changes feed, or if you want a reliable, resumable changes feed follower, then you need the changesReader.

There are three ways to start listening to the changes feed:

  1. changesReader.start() - to listen to changes indefinitely by repeated "long poll" requests. This mode continues to poll for changes forever.
  2. changesReader.get() - to listen to changes until the end of the changes feed is reached, by repeated "long poll" requests. Once a response with zero changes is received, the 'end' event will indicate the end of the changes and polling will stop.
  3. changesReader.spool() - listens to changes in one long HTTP request (as opposed to repeated round trips); spool is faster but less reliable.

Note: for .get() & .start(), the sequence of API calls can be paused by calling changesReader.pause() and resumed by calling changesReader.resume().

Set up your database connection and then choose changesReader.start() to listen to that database's changes:

const db = nano.db.use('mydb')
db.changesReader.start()
  .on('change', (change) => { console.log(change) })
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
  }).on('seq', (s) => {
    console.log('sequence token', s);
  }).on('error', (e) => {
    console.error('error', e);
  })

Note: you probably want to monitor either the change or batch event, not both.

If you want changesReader to hold off making the next _changes API call until you are ready, then supply wait:true in the options to get/start. The next request will only fire when you call changesReader.resume():

db.changesReader.get({wait: true})
  .on('batch', (b) => {
    console.log('a batch of', b.length, 'changes has arrived');
    // do some asynchronous work here and call "changesReader.resume()"
    // when you're ready for the next API call to be dispatched.
    // In this case, wait 5s before the next changes feed request.
    setTimeout( () => {
      db.changesReader.resume()
    }, 5000)
  }).on('end', () => {
    console.log('changes feed monitoring has stopped');
  });

You may supply a number of options when you start to listen to the changes feed:

| Parameter | Description | Default value | e.g. |
| --- | --- | --- | --- |
| batchSize | The maximum number of changes to ask CouchDB for per HTTP request. This is the maximum number of changes you will receive in a batch event. | 100 | 500 |
| since | The position in the changes feed to start from, where 0 means the beginning of time, now means the current position, or a string token indicates a fixed position in the changes feed | now | 390768-g1AAAAGveJzLYWBgYMlgTmGQ |
| includeDocs | Whether to include document bodies or not | false | true |
| wait | For get/start mode, automatically pause the changes reader after each request. When the user calls resume(), the changes reader will resume. | false | true |
| fastChanges | Adds a seq_interval parameter to fetch changes more quickly | false | true |
| selector | Filters the changes feed with the supplied Mango selector | null | {"name":"fred"} |
| timeout | The number of milliseconds a changes feed request waits for data | 60000 | 10000 |
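For instance, combining a few of these options (values illustrative):

db.changesReader.start({ since: '0', batchSize: 500, includeDocs: true })
  .on('batch', (b) => { console.log('got', b.length, 'changes') })
  .on('error', (e) => { console.error('error', e) })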

The events it emits are as follows:

| Event | Description | Data |
| --- | --- | --- |
| change | Each detected change is emitted individually. Only available in get/start modes. | A change object |
| batch | Each batch of changes is emitted in bulk in quantities up to batchSize. | An array of change objects |
| seq | Each new sequence token (per HTTP request). This token can be passed into ChangesReader as the since parameter to resume changes feed consumption from a known point. Only available in get/start modes. | String |
| error | On a fatal error, a descriptive object is returned and change consumption stops. | Error object |
| end | Emitted when the end of the changes feed is reached. ChangesReader.get() mode only. | Nothing |

The ChangesReader library will handle many transient errors, such as network connectivity issues, service capacity limits, and malformed data, but it will emit an error event and exit when fed incorrect authentication credentials or an invalid since token.

The change event delivers a change object that looks like this:

{
    "seq": "8-g1AAAAYIeJyt1M9NwzAUBnALKiFOdAO4gpRix3X",
    "id": "2451be085772a9e588c26fb668e1cc52",
    "changes": [{
        "rev": "4-061b768b6c0b6efe1bad425067986587"
    }],
    "doc": {
        "_id": "2451be085772a9e588c26fb668e1cc52",
        "_rev": "4-061b768b6c0b6efe1bad425067986587",
        "a": 3
    }
}

N.B

  • doc is only present if includeDocs:true is supplied
  • seq is not present for every change

The id is the unique identifier of the document that changed and the changes array contains the document revision tokens that were written to the database.

The batch event delivers an array of change objects.
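
A batch handler typically just loops over that array, e.g.:

db.changesReader.get({ batchSize: 200 })
  .on('batch', (b) => {
    for (const change of b) {
      console.log(change.id, change.changes.map((c) => c.rev))
    }
  })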

Partition Functions

Functions related to partitioned databases.

Create a partitioned database by passing { partitioned: true } to db.create:

await nano.db.create('my-partitioned-db', { partitioned: true })

The database can be used as normal:

const db = nano.db.use('my-partitioned-db')

but documents must have a two-part _id made up of <partition key>:<document id>. They are inserted with db.insert as normal:

const doc = { _id: 'canidae:dog', name: 'Dog', latin: 'Canis lupus familiaris' }
await db.insert(doc)

Documents can be retrieved by their _id using db.get:

const doc = await db.get('canidae:dog')

Mango indexes can be created to operate on a per-partition basis by supplying partitioned: true on creation:

const i = {
  ddoc: 'partitioned-query',
  index: { fields: ['name'] },
  name: 'name-index',
  partitioned: true,
  type: 'json'
}

// instruct CouchDB to create the index
await db.createIndex(i)

Search indexes can be created by writing a design document with opts.partitioned = true:

// the search definition
const func = function(doc) {
  index('name', doc.name)
  index('latin', doc.latin)
}

// the design document containing the search definition function
const ddoc = {
  _id: '_design/search-ddoc',
  indexes: {
    'search-index': {
      index: func.toString()
    }
  },
  options: {
    partitioned: true
  }
}
 
await db.insert(ddoc)

MapReduce views can be created by writing a design document with opts.partitioned = true:

const func = function(doc) {
  emit(doc.family, doc.weight)
}

// Design Document
const ddoc = {
  _id: '_design/view-ddoc',
  views: {
    'family-weight': {
      map: func.toString(),
      reduce: '_sum'
    }
  },
  options: {
    partitioned: true
  }
}

// create design document
await db.insert(ddoc)

db.partitionInfo(partitionKey, [callback])

Fetch the stats of a single partition:

const stats = await db.partitionInfo('canidae')

db.partitionedList(partitionKey, [params], [callback])

Fetch documents from a database partition:

// fetch document id/revs from a partition
const docs = await db.partitionedList('canidae')

// add document bodies but limit size of response
const docs = await db.partitionedList('canidae', { include_docs: true, limit: 5 })

db.partitionedListAsStream(partitionKey, [params])

Fetch documents from a partition as a stream:

// fetch document id/revs from a partition
db.partitionedListAsStream('canidae')
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

// add document bodies but limit size of response
db.partitionedListAsStream('canidae', { include_docs: true, limit: 5 })
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.partitionedFind(partitionKey, query, [params])

Query documents from a partition by supplying a Mango selector:

// find document whose name is 'wolf' in the 'canidae' partition
await db.partitionedFind('canidae', { 'selector' : { 'name': 'Wolf' }})

db.partitionedFindAsStream(partitionKey, query)

Query documents from a partition by supplying a Mango selector as a stream:

// find document whose name is 'wolf' in the 'canidae' partition
db.partitionedFindAsStream('canidae', { 'selector' : { 'name': 'Wolf' }})
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)

db.partitionedSearch(partitionKey, designName, searchName, params, [callback])

Search documents from a partition by supplying a Lucene query:

const params = {
  q: 'name:\'Wolf\''
}
await db.partitionedSearch('canidae', 'search-ddoc', 'search-index', params)
// { total_rows: ... , bookmark: ..., rows: [ ...] }

db.partitionedSearchAsStream(partitionKey, designName, searchName, params)

Search documents from a partition by supplying a Lucene query as a stream:

const params = {
  q: 'name:\'Wolf\''
}
db.partitionedSearchAsStream('canidae', 'search-ddoc', 'search-index', params)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)
// { total_rows: ... , bookmark: ..., rows: [ ...] }

db.partitionedView(partitionKey, designName, viewName, params, [callback])

Fetch documents from a MapReduce view from a partition:

const params = {
  startkey: 'a',
  endkey: 'b',
  limit: 1
}
await db.partitionedView('canidae', 'view-ddoc', 'view-name', params)
// { rows: [ { key: ... , value: [Object] } ] }

db.partitionedViewAsStream(partitionKey, designName, viewName, params)

Fetch documents from a MapReduce view from a partition as a stream:

const params = {
  startkey: 'a',
  endkey: 'b',
  limit: 1
}
db.partitionedViewAsStream('canidae', 'view-ddoc', 'view-name', params)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout)
// { rows: [ { key: ... , value: [Object] } ] }

Multipart functions

db.multipart.insert(doc, attachments, params, [callback])

Inserts a doc together with attachments and params. If params is a string, it's assumed to be the intended document _id. If params is an object, it's passed as query string parameters and docName is checked for defining the _id. Refer to the doc for more details. The attachments parameter must be an array of objects with name, data and content_type properties.

const fs = require('fs');

fs.readFile('rabbit.png', async (err, data) => {
  if (!err) {
    await alice.multipart.insert({ foo: 'bar' }, [{name: 'rabbit.png', data: data, content_type: 'image/png'}], 'mydoc')
  }
});

db.multipart.get(docname, [params], [callback])

Get docname together with its attachments via multipart/related request with optional query string additions params. The multipart response body is a Buffer.

const response = await alice.multipart.get('rabbit')

Attachments functions

db.attachment.insert(docname, attname, att, contenttype, [params], [callback])

Inserts an attachment attname to docname; in most cases params.rev is required. Refer to the CouchDB doc for more details.

const fs = require('fs');

fs.readFile('rabbit.png', async (err, data) => {
  if (!err) {
    await alice.attachment.insert('rabbit', 
      'rabbit.png', 
      data, 
      'image/png',
      { rev: '12-150985a725ec88be471921a54ce91452' })
  }
});

db.attachment.insertAsStream(docname, attname, att, contenttype, [params])

As of Nano 9.x, db.attachment.insertAsStream is deprecated. Simply pass a readable stream to db.attachment.insert as the third parameter.
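
For example (a minimal sketch, reusing the document and revision from the previous example):

const fs = require('fs');

// pass a readable stream as the attachment body
const rs = fs.createReadStream('rabbit.png');
await alice.attachment.insert('rabbit',
  'rabbit.png',
  rs,
  'image/png',
  { rev: '12-150985a725ec88be471921a54ce91452' })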

db.attachment.get(docname, attname, [params], [callback])

Get docname's attachment attname with optional query string additions params.

const fs = require('fs');

const body = await alice.attachment.get('rabbit', 'rabbit.png')
fs.writeFile('rabbit.png', body, (err) => { if (err) console.error(err) })

db.attachment.getAsStream(docname, attname, [params])

const fs = require('fs');
alice.attachment.getAsStream('rabbit', 'rabbit.png')
  .on('error', (e) => console.error('error', e))
  .pipe(fs.createWriteStream('rabbit.png'));

db.attachment.destroy(docname, attname, [params], [callback])

changed in version 6

Destroy attachment attname of docname's revision rev.

const response = await alice.attachment.destroy('rabbit', 'rabbit.png', {rev: '1-4701d73a08ce5c2f2983bf7c9ffd3320'})

Views and design functions

db.view(designname, viewname, [params], [callback])

Calls a view of the specified designname with optional query string params. If you're looking to filter the view results by key(s), pass an array of keys, e.g. { keys: ['key1', 'key2', 'key_n'] }, as params.

const body = await alice.view('characters', 'happy_ones', { key: 'Tea Party', include_docs: true })
body.rows.forEach((doc) => {
  console.log(doc.value)
})

or

const body = await alice.view('characters', 'soldiers', { keys: ['Hearts', 'Clubs'] })

When params is not supplied, or no keys are specified, it will simply return all documents in the view:

const body = await alice.view('characters', 'happy_ones')
const body = await alice.view('characters', 'happy_ones', { include_docs: true })

db.viewAsStream(designname, viewname, [params])

Same as db.view but returns a stream:

alice.viewAsStream('characters', 'happy_ones', {reduce: false})
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

db.viewWithList(designname, viewname, listname, [params], [callback])

Calls a list function fed by the given view from the specified design document.

const body = await alice.viewWithList('characters', 'happy_ones', 'my_list')

db.viewWithListAsStream(designname, viewname, listname, [params], [callback])

Calls a list function fed by the given view from the specified design document as a stream.

alice.viewWithListAsStream('characters', 'happy_ones', 'my_list')
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

db.show(designname, showname, doc_id, [params], [callback])

Calls a show function from the specified design for the document specified by doc_id with optional query string additions params.

const doc = await alice.show('characters', 'format_doc', '3621898430')

Take a look at the CouchDB wiki for possible query parameters and more information on show functions.

db.atomic(designname, updatename, docname, [body], [callback])

Calls the design's update function with the specified doc as input.

const response = await db.atomic('update', 'in-place', 'foobar', {field: 'foo', value: 'bar'})

Note that the data is sent in the body of the request. An example update handler follows:

"updates": {
  "in-place" : "function(doc, req) {
      var request_body = JSON.parse(req.body)
      var field = request_body.field
      var value = request_body.value
      var message = 'set ' + field + ' to ' + value
      doc[field] = value
      return [doc, message]
  }"
}

db.search(designname, searchname, params, [callback])

Calls a search index of the specified design document with optional query string additions params.

const response = await alice.search('characters', 'happy_ones', { q: 'cat' })

or

const drilldown = [['author', 'Dickens'], ['publisher', 'Penguin']]
const response = await alice.search('inventory', 'books', { q: '*:*', drilldown: drilldown })

Check out the tests for a fully functioning example.

db.searchAsStream(designname, searchname, params)

Calls a search index of the specified design document with optional query string additions params and returns a stream.

alice.searchAsStream('characters', 'happy_ones', { q: 'cat' }).pipe(process.stdout);

db.find(selector, [callback])

Perform a "Mango" query by supplying a JavaScript object containing a selector:

// find documents where the name = "Brian" and age > 25.
const q = {
  selector: {
    name: { "$eq": "Brian"},
    age : { "$gt": 25 }
  },
  fields: [ "name", "age", "tags", "url" ],
  limit:50
};
const response = await alice.find(q)

db.findAsStream(selector)

Perform a "Mango" query by supplying a JavaScript object containing a selector, but return a stream:

// find documents where the name = "Brian" and age > 25.
const q = {
  selector: {
    name: { "$eq": "Brian"},
    age : { "$gt": 25 }
  },
  fields: [ "name", "age", "tags", "url" ],
  limit:50
};
alice.findAsStream(q)
  .on('error', (e) => console.error('error', e))
  .pipe(process.stdout);

Using cookie authentication

Nano supports making requests using CouchDB's cookie authentication functionality. If you initialise Nano so that it is cookie-aware, you may call nano.auth first to get a session cookie. Nano will behave like a web browser, remembering your session cookie and refreshing it if a new one is received in a future HTTP response.

const nano = require('nano')({
  url: 'http://localhost:5984',
  requestDefaults: {
    jar: true
  }
})
const username = 'user'
const userpass = 'pass'
const db = nano.db.use('mydb')

// authenticate
await nano.auth(username, userpass)

// requests from now on are authenticated
const doc = await db.get('mydoc')
console.log(doc)

The second request works because the nano library has remembered the AuthSession cookie that was invisibly returned by the nano.auth call.

When you have a session, you can see what permissions you have by calling the nano.session function:

const doc = await nano.session()
// { userCtx: { roles: [ '_admin', '_reader', '_writer' ], name: 'rita' },  ok: true }

Advanced features

Getting uuids

If your application needs to generate UUIDs, then CouchDB can provide some for you:

const response = await nano.uuids(3)
// { uuids: [
// '5d1b3ef2bc7eea51f660c091e3dffa23',
// '5d1b3ef2bc7eea51f660c091e3e006ff',
// '5d1b3ef2bc7eea51f660c091e3e007f0',
//]}

The first parameter is the number of uuids to generate. If omitted, it defaults to 1.
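
So, called with no arguments, it returns a single UUID:

const { uuids } = await nano.uuids()
// uuids is an array containing one UUID string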

Extending nano

nano is minimalistic, but you can add your own features with nano.request(opts).

For example, to create a function to retrieve a specific revision of the rabbit document:

function getrabbitrev(rev) {
  return nano.request({ db: 'alice',
                 doc: 'rabbit',
                 method: 'get',
                 params: { rev: rev }
               });
}

getrabbitrev('4-2e6cdc4c7e26b745c2881a24e0eeece2').then((body) => {
  console.log(body);
});

Pipes

You can pipe the return values of certain nano functions like any other stream. For example, if our rabbit document has an attachment with the name picture.png, you can pipe it to a writable stream:

const fs = require('fs');
const nano = require('nano')('http://127.0.0.1:5984/');
const alice = nano.use('alice');
alice.attachment.getAsStream('rabbit', 'picture.png')
  .on('error', (e) => console.error('error', e))
  .pipe(fs.createWriteStream('/tmp/rabbit.png'));

then open /tmp/rabbit.png and you will see the rabbit picture.

Functions that return streams instead of a Promise are:

  • nano.db.listAsStream

attachment functions:

  • db.attachment.getAsStream
  • db.attachment.insertAsStream

and document level functions

  • db.listAsStream
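
These stream-returning functions make it easy to proxy CouchDB data straight to an end user. A minimal sketch using Node's built-in http module (the port is an assumption; alice is the database opened in the example above):

const http = require('http')

http.createServer((req, res) => {
  res.writeHead(200, { 'content-type': 'application/json' })
  // stream all documents in the database directly to the client
  alice.listAsStream({ include_docs: true })
    .on('error', (e) => res.end(JSON.stringify({ error: e.message })))
    .pipe(res)
}).listen(3000)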

Logging

When instantiating Nano, you may supply the function that will perform the logging of requests and responses. In its simplest form, simply pass console.log as your logger:

const nano = Nano({ url: process.env.COUCH_URL, log: console.log })
// all requests and responses will be sent to console.log

You may supply your own logging function to format the data before output:

const url = require('url')
const logger = (data) => {
  // only output logging if there is an environment variable set
  if (process.env.LOG === 'nano') {
    // if this is a request
    if (typeof data.err === 'undefined') {
      const u = new url.URL(data.uri)
      console.log(data.method, u.pathname, data.qs)
    } else {
      // this is a response
      const prefix = data.err ? 'ERR' : 'OK'
      console.log(prefix, data.headers.statusCode, JSON.stringify(data.body).length)
    }
  }
}
const nano = Nano({ url: process.env.COUCH_URL, log: logger })
// all requests and responses will be formatted by my code
// GET /cities/_all_docs { limit: 5 }
// OK 200 468

Tutorials, examples in the wild & screencasts

Roadmap

Check issues

Tests

To run (and configure) the test suite simply:

cd nano
npm install
npm run test

Meta

https://freenode.org/

Release

To create a new release of nano, run the following commands on the main branch:

  npm version {patch|minor|major}
  git push origin main --tags
  npm publish

Download Details:
Author: Apache
Source Code: https://github.com/apache/couchdb-nano
License: Apache-2.0 License

#node #nodejs #nano #apache #couchdb


How to Create A Custom Test Database Containing Fake Data

About

Generate fake but valid, data-filled databases for test purposes, using the most popular patterns (AFAIK). Current support: sqlite, mysql, postgresql, mongodb, redis, couchdb.

Installation

Installation through PyPI retrieves 'fake-factory' as the main dependency.

pip install fake2db

Optional requirements

PostgreSQL

pip install psycopg2

For psycopg2 to install, you need pg_config on your system.

On Mac, the solution is to install postgresql:

brew install postgresql

On CentOS, the solution is to install postgresql-devel:

sudo yum install postgresql-devel

Mongodb

pip install pymongo

Redis

pip install redis

MySQL

mysql connector is needed for mysql db generation:

http://dev.mysql.com/downloads/connector/python/

CouchDB

pip install couchdb

Usage

--rows argument is pretty clear :) an integer

--db argument takes 6 possible options : sqlite, mysql, postgresql, mongodb, redis, couchdb

--name argument is OPTIONAL. When it is absent, fake2db will name DBs randomly.

--host argument is OPTIONAL. Hostname to use for database connection. Not used for sqlite.

--port argument is OPTIONAL. Port to use for database connection. Not used for sqlite.

--username argument is OPTIONAL. Username for the database user.

--password argument is OPTIONAL. Password for database user. Only supported for mysql & postgresql.

--locale argument is OPTIONAL. The localization of data to be generated ('en_US' as default).

--seed argument is OPTIONAL. Integer for seeding the random generator to produce the same data set between runs. Note: uuid4 values are still generated randomly.

fake2db --rows 200 --db sqlite

fake2db --rows 1500 --db postgresql --name test_database_postgre

fake2db --db postgresql --rows 2500 --host container.local --password password --user docker

fake2db --rows 200 --db sqlite --locale cs_CZ --seed 1337

In addition to the databases supported in the db argument, you can also run fake2db with FoundationDB SQL Layer. Once SQL Layer is installed, simply use the postgresql generator and specify the SQL Layer port. For example:

fake2db --rows 200 --db postgresql --port 15432

Custom Database Generation

If you want to create a custom db/table, you have to provide the --custom parameter followed by the column items you want. At this point in time, I have mapped all the possible column items you can use here:

https://github.com/emirozer/fake2db/blob/master/fake2db/custom.py

Feed any keys you want to the custom flag:

fake2db.py --rows 250 --db mysql --username mysql --password somepassword --custom name date country

fake2db.py --rows 1500 --db mysql --password randompassword --custom currency_code credit_card_full credit_card_provider

fake2db.py --rows 20 --db mongodb --custom name date country

Sample output - SQLite

[screenshots omitted]

Author: emirozer
Source Code: https://github.com/emirozer/fake2db
License: GPL-2.0 License

#sqlite #mysql #postgresql #mongodb #redis #couchdb 


Introduction to CouchDB

What is CouchDB? | How to Install and Set Up CouchDB | CRUD in CouchDB | NoSQL Database


This CouchDB video will help you understand all the basics of CouchDB, and it will show you how to install, set up, and perform CRUD operations in CouchDB from scratch. It also covers the advantages of using CouchDB.
The following pointers are covered in this video:
00:00:00 Agenda
00:00:56 Why should You use CouchDB?
00:06:56 What is CouchDB?
00:09:31 How CouchDB works?
00:11:24 CouchDB Installation
00:15:06 CRUD operations in CouchDB

#couchdb #nosql #database


CouchDB vs. MariaDB - Which is Better?

Every day you visit grocery stores, clothing stores, e-commerce portals, or a bank. Ever wondered how they keep track of customers, employees, and other crucial information?

The answer is Database.

Database

In simple terms, a database is a collection of information. It is organized to ensure easy accessibility, management, and updates.

A database (DB) is useful when you need to store data or facilitate searching for particular information. Storing and modifying large volumes of data can be easily managed with databases, and searching and sorting are quick and easy.

Types of Databases

There are various types of databases:

  1. Relational database
  2. Distributed database
  3. Cloud database
  4. Graph database
  5. NoSQL database

Apache CouchDB

Apache CouchDB is a NoSQL database. NoSQL databases are beneficial for the management of large amounts of distributed and unstructured data. CouchDB was initially released in 2005; the current version, 3.1.1, was released in September 2020. It supports cross-platform operating systems.

The open-source Apache CouchDB employs multiple formats and protocols to store, transfer, and process data. It is document-oriented: implemented in Erlang, CouchDB uses JSON for data storage, JavaScript (with MapReduce) as its query language, and HTTP as its API.

Unlike a relational database, it does not store data in tables. Instead, it treats each database as a collection of independent documents. The most distinguishing feature of CouchDB is its multi-master replication, which provides the scalability needed to build high-performance systems.
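
Because the API is plain HTTP and the storage format is JSON, documents can be read and written with any HTTP client. A minimal sketch using JavaScript's built-in fetch (Node.js 18+), assuming a local CouchDB that does not require authentication and a hypothetical database mydb:

// create a document
await fetch('http://localhost:5984/mydb/mydoc', {
  method: 'PUT',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ hello: 'world' })
})

// read it back
const res = await fetch('http://localhost:5984/mydb/mydoc')
console.log(await res.json())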

MariaDB

MariaDB is one of the most widely used relational databases. A relational DB uses a tabular structure for storing data, which allows easy reorganization and multiple access patterns.

MariaDB is an open-source, community-developed database. It is highly compatible with MySQL, matching its APIs and commands, and it includes storage engines such as MyRocks, Aria, and ColumnStore.

MariaDB is developed by MariaDB Corporation Ab and the MariaDB Foundation. It was initially released in 2009, and the latest stable version was released in February 2021. It is compatible with the Linux, Windows, and macOS operating systems.

#database #mariadb #couchdb


Multi-User Applications With PouchDB and IBM Cloudant

Imagine a web application used by many users at the same time. It can become necessary to keep them in sync: we want to ensure that they all look at the same fresh data, and we might want them to interact with each other. Think about co-editing documents on Google Drive, chat applications, etc. We've implemented a simple solution for seamless synchronization of application state in real time, using a NoSQL database hosted in the cloud.
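
The classic pattern behind this kind of synchronization is a local PouchDB database that continuously replicates with a remote CouchDB or Cloudant database. A minimal sketch (the database name and remote URL are hypothetical):

const PouchDB = require('pouchdb')

const local = new PouchDB('appstate')
const remote = new PouchDB('https://username:password@host.cloudant.com/appstate')

// live two-way replication that retries on connection loss
local.sync(remote, { live: true, retry: true })
  .on('change', (info) => { /* apply the change to the UI */ })
  .on('error', (err) => console.error('sync error', err))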

#javascript #cloud #tutorial #web application #couchdb #vuejs #vue #cloudant


How to Install CouchDB on Ubuntu 20.04

Apache CouchDB is a free and open-source NoSQL database developed by the Apache Software Foundation. It can be used as a single-node or clustered database.

CouchDB server stores its data in named databases, which contain documents with a JSON structure. Each document consists of a number of fields and attachments. Fields can include text, numbers, lists, booleans, and more. CouchDB includes a RESTful HTTP API that allows you to read, create, edit, and delete database documents.

This article covers the steps of installing the latest version of CouchDB on Ubuntu 20.04.

Installing CouchDB on Ubuntu is relatively straightforward. We’ll enable the CouchDB APT repository, import the repository GPG key, and install the CouchDB package.

#ubuntu #couchdb


How to Install CouchDB on Ubuntu 18.04

CouchDB is a free and open-source fault-tolerant NoSQL database maintained by the Apache Software Foundation.

CouchDB server stores its data in named databases, which contain documents with a JSON structure. Each document consists of a number of fields and attachments. Fields can include text, numbers, lists, booleans, and more. It includes a RESTful HTTP API that allows you to read, create, edit, and delete database documents.

In this tutorial, we will cover the process of installing the latest version of CouchDB on Ubuntu 18.04.

#ubuntu 18.04 #couchdb
