An open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for over 200 supported car makes and models.
openpilot is an open source driver assistance system. Currently, openpilot performs the functions of Adaptive Cruise Control (ACC), Automated Lane Centering (ALC), Forward Collision Warning (FCW), and Lane Departure Warning (LDW) for a growing variety of supported car makes, models, and model years. In addition, while openpilot is engaged, a camera-based Driver Monitoring (DM) feature alerts distracted and asleep drivers. See more about the vehicle integration and limitations.
To use openpilot in a car, you need four things: a supported device to run the software, the openpilot software itself, a supported car, and a car harness to connect the device to the car.
We have detailed instructions for how to mount the device in a car.
All openpilot services can run as usual on a PC without requiring special hardware or a car. You can also run openpilot on recorded or simulated data to develop or experiment with openpilot.
With openpilot's tools, you can plot logs, replay drives, and watch the full-res camera streams. See the tools README for more information.
You can also run openpilot in simulation with the CARLA simulator. This allows openpilot to drive around a virtual car on your Ubuntu machine. The whole setup should only take a few minutes but does require a decent GPU.
A PC running openpilot can also control your vehicle if it is connected to a webcam, a black panda, and a harness.
openpilot is developed by comma and by users like you. We welcome both pull requests and issues on GitHub. Bug fixes and new car ports are encouraged. Check out the contributing docs.
Documentation related to openpilot development can be found on docs.comma.ai. Information about running openpilot (e.g. FAQ, fingerprinting, troubleshooting, custom forks, community hardware) should go on the wiki.
You can add support for your car by following guides we have written for Brand and Model ports. Generally, a car with adaptive cruise control and lane keep assist is a good candidate. Join our Discord to discuss car ports: most car makes have a dedicated channel.
Want to get paid to work on openpilot? comma is hiring.
And follow us on Twitter.
By default, openpilot uploads the driving data to our servers. You can also access your data through comma connect. We use your data to train better models and improve openpilot for everyone.
openpilot is open source software: the user is free to disable data collection if they wish to do so.
openpilot logs the road-facing cameras, CAN, GPS, IMU, magnetometer, thermal sensors, crashes, and operating system logs. The driver-facing camera is only logged if you explicitly opt-in in settings. The microphone is not recorded.
By using openpilot, you agree to our Privacy Policy. You understand that use of this software or its related services will generate certain types of user data, which may be logged and stored at the sole discretion of comma. By accepting this agreement, you grant an irrevocable, perpetual, worldwide right to comma for the use of this data.
.
├── cereal              # The messaging spec and libs used for all logs
├── common              # Library like functionality we've developed here
├── docs                # Documentation
├── opendbc             # Files showing how to interpret data from cars
├── panda               # Code used to communicate on CAN
├── third_party         # External libraries
├── system              # Generic services
│   ├── camerad         # Driver to capture images from the camera sensors
│   ├── clocksd         # Broadcasts current time
│   ├── hardware        # Hardware abstraction classes
│   ├── logcatd         # systemd journal as a service
│   └── proclogd        # Logs information from /proc
└── selfdrive           # Code needed to drive the car
    ├── assets          # Fonts, images, and sounds for UI
    ├── athena          # Allows communication with the app
    ├── boardd          # Daemon to talk to the board
    ├── car             # Car specific code to read states and control actuators
    ├── controls        # Planning and controls
    ├── debug           # Tools to help you debug and do car ports
    ├── locationd       # Precise localization and vehicle parameter estimation
    ├── loggerd         # Logger and uploader of car data
    ├── manager         # Daemon that starts/stops all other daemons as needed
    ├── modeld          # Driving and monitoring model runners
    ├── monitoring      # Daemon to determine driver attention
    ├── navd            # Turn-by-turn navigation
    ├── sensord         # IMU interface code
    ├── test            # Unit tests, system tests, and a car simulator
    └── ui              # The UI
Author: Commaai
Source Code: https://github.com/commaai/openpilot
License: MIT license
Native Mac APIs for Golang!
MacDriver is a toolkit for working with Apple/Mac APIs and frameworks in Go. It currently has 2 parts:
The objc package wraps the Objective-C runtime to dynamically interact with Objective-C objects and classes:
cls := objc.NewClass("AppDelegate", "NSObject")
cls.AddMethod("applicationDidFinishLaunching:", func(app objc.Object) {
	fmt.Println("Launched!")
})
objc.RegisterClass(cls)

delegate := objc.Get("AppDelegate").Alloc().Init()
app := objc.Get("NSApplication").Get("sharedApplication")
app.Set("delegate:", delegate)
app.Send("run")
The cocoa, webkit, and core packages wrap objc with wrapper types for parts of the Apple/Mac APIs. They're being added to as needed by hand until we can automate this process with schema data. These packages effectively let you use Apple APIs as if they were native Go libraries, letting you write Mac applications (potentially also iOS, watchOS, etc.) as Go applications:
func main() {
	app := cocoa.NSApp_WithDidLaunch(func(notification objc.Object) {
		config := webkit.WKWebViewConfiguration_New()
		wv := webkit.WKWebView_Init(core.Rect(0, 0, 1440, 900), config)
		url := core.URL("http://progrium.com")
		req := core.NSURLRequest_Init(url)
		wv.LoadRequest(req)

		w := cocoa.NSWindow_Init(core.Rect(0, 0, 1440, 900),
			cocoa.NSClosableWindowMask|cocoa.NSTitledWindowMask,
			cocoa.NSBackingStoreBuffered, false)
		w.SetContentView(wv)
		w.MakeKeyAndOrderFront(w)
		w.Center()
	})
	app.SetActivationPolicy(cocoa.NSApplicationActivationPolicyRegular)
	app.ActivateIgnoringOtherApps(true)
	app.Run()
}
examples/largetype - A Contacts/Quicksilver-style Large Type utility in under 80 lines:
examples/pomodoro - A menu bar pomodoro timer in under 80 lines:
examples/topframe - An always-on-top webview with transparent background in 120 lines [requires Go 1.16+]:
NEW: See progrium/topframe for a more fully-featured standalone version!
Eventually we can generate most of the wrapper APIs using bridgesupport and/or doc schemas. However, the number of APIs is pretty ridiculous so there are lots of edge cases I wouldn't know how to automate yet. We can just continue to create them by hand as needed until we have enough coverage/confidence to know how we'd generate wrappers.
The original objc and variadic packages were written by Mikkel Krautz. The variadic package is some assembly magic to make everything possible, since libobjc relies heavily on variadic function calls, which aren't possible out of the box in Cgo.
Author: Progrium
Source Code: https://github.com/progrium/macdriver
License: MIT license
A BSON parser Node.JS native addon.
BSON is short for Binary JSON and is the binary-encoded serialization of JSON-like documents. You can learn more about it in the specification.
While this library is compatible with the mongodb driver version 4+, bson-ext will soon be deprecated and no longer supported. It is strongly recommended that js-bson be used instead.
NOTE: bson-ext version 4+ works with js-bson version 4+ and the mongodb driver version 4+.
npm install bson-ext
A simple example of how to use BSON in Node.js:
// Get the BSON parser class
const BSON = require('bson-ext')
// Get the Long type
const Long = BSON.Long
// Create a document containing a Long
const doc = { long: Long.fromNumber(100) }
// Serialize the document
const data = BSON.serialize(doc)
console.log('data:', data)
// Deserialize the resulting Buffer
const docRoundTrip = BSON.deserialize(data)
console.log('docRoundTrip:', docRoundTrip)
To build a new version, perform the following operations:
npm install
npm run build
For all BSON types documentation, please refer to the documentation for the MongoDB Node.js driver.
The BSON serialize method takes a JavaScript object and an optional options object and returns a Node.js Buffer.
/**
 * The BSON library accepts plain javascript objects.
 * It serializes to BSON by iterating the keys.
 */
interface Document {
  [key: string]: any;
}

interface SerializeOptions {
  /** the serializer will check if keys are valid. */
  checkKeys?: boolean;
  /** serialize the javascript functions **(default:false)**. */
  serializeFunctions?: boolean;
  /** serialize will not emit undefined fields **(default:true)** */
  ignoreUndefined?: boolean;
}

/**
 * Serialize a Javascript object.
 *
 * @param object - the Javascript object to serialize.
 * @returns Buffer object containing the serialized object.
 */
function serialize(object: Document, options?: SerializeOptions): Buffer;
The BSON serializeWithBufferAndIndex method takes an object, a target buffer instance and an optional options object and returns the end serialization index in the final buffer.
/**
* Serialize a Javascript object using a predefined Buffer and index into the buffer,
* useful when pre-allocating the space for serialization.
*
* @param object - the Javascript object to serialize.
* @param finalBuffer - the Buffer you pre-allocated to store the serialized BSON object.
* @returns the index pointing to the last written byte in the buffer.
*/
function serializeWithBufferAndIndex(object: Document, finalBuffer: Buffer, options?: SerializeOptions): number;
The BSON calculateObjectSize method takes a JavaScript object and an optional options object and returns the size of the BSON object.
interface CalculateObjectSizeOptions {
  /** serialize the javascript functions **(default:false)**. */
  serializeFunctions?: boolean;
  /** serialize will not emit undefined fields **(default:true)** */
  ignoreUndefined?: boolean;
}
/**
* Calculate the bson size for a passed in Javascript object.
*
* @param object - the Javascript object to calculate the BSON byte size for
* @returns size of BSON object in bytes
* @public
*/
function calculateObjectSize(object: Document, options?: CalculateObjectSizeOptions): number;
The BSON deserialize method takes a Node.js Buffer and an optional options object and returns a deserialized JavaScript object.
interface DeserializeOptions {
  /** evaluate functions in the BSON document scoped to the object deserialized. */
  evalFunctions?: boolean;
  /** cache evaluated functions for reuse. */
  cacheFunctions?: boolean;
  /** when deserializing a Long, will fit it into a Number if it's smaller than 53 bits. */
  promoteLongs?: boolean;
  /** when deserializing a Binary, will return it as a Node.js Buffer instance. */
  promoteBuffers?: boolean;
  /** when deserializing, will promote BSON values to their closest Node.js equivalent types. */
  promoteValues?: boolean;
  /** allows specifying which fields to return as unserialized raw buffers. */
  fieldsAsRaw?: Document;
  /** return BSON regular expressions as BSONRegExp instances. */
  bsonRegExp?: boolean;
  /** allows the buffer to be larger than the parsed BSON object. */
  allowObjectSmallerThanBufferSize?: boolean;
  /** offset into the buffer to begin reading the document from. */
  index?: number;
}
/**
* Deserialize data as BSON.
*
* @param buffer - the buffer containing the serialized set of BSON documents.
* @returns returns the deserialized Javascript Object.
* @public
*/
function deserialize(buffer: Buffer | ArrayBufferView | ArrayBuffer, options?: DeserializeOptions): Document;
The BSON deserializeStream method takes a Node.js Buffer and a startIndex, and allows more control over the deserialization of a Buffer containing concatenated BSON documents.
/**
* Deserialize stream data as BSON documents.
*
* @param data - the buffer containing the serialized set of BSON documents.
* @param startIndex - the start index in the data Buffer where the deserialization is to start.
* @param numberOfDocuments - number of documents to deserialize.
* @param documents - an array where to store the deserialized documents.
* @param docStartIndex - the index in the documents array from where to start inserting documents.
* @param options - additional options used for the deserialization.
* @returns next index in the buffer after deserialization **x** numbers of documents.
* @public
*/
function deserializeStream(data: Buffer | ArrayBufferView | ArrayBuffer, startIndex: number, numberOfDocuments: number, documents: Document[], docStartIndex: number, options: DeserializeOptions): number;
Author: Mongodb-js
Source Code: https://github.com/mongodb-js/bson-ext
License: Apache-2.0 license
Node.js driver for the Instagram API. In production at http://totems.co, aggregating more than 200 data points per second.
npm install instagram-node
You will need a client_id/client_secret pair from the app you are building, or an access_token from a user that uses your app.

var ig = require('instagram-node').instagram();
// Every call to `ig.use()` overrides the `client_id/client_secret`
// or `access_token` previously entered if they exist.
ig.use({ access_token: 'YOUR_ACCESS_TOKEN' });
ig.use({ client_id: 'YOUR_CLIENT_ID',
client_secret: 'YOUR_CLIENT_SECRET' });
app.post('/like/:media_id', function(req, res, next) {
  var ig = require('instagram-node').instagram({});
  ig.use({ access_token: 'YOUR_ACCESS_TOKEN' });

  ig.add_like(req.param('media_id'), {
    sign_request: {
      client_secret: 'YOUR_CLIENT_SECRET',
      // Then you can specify the request:
      client_req: req
      // or provide the IP on your own:
      // ip: 'XXX.XXX.XXX.XXX'
    }
  }, function(err) {
    // handle err here
    return res.send('OK');
  });
});
Instagram uses the standard OAuth authentication flow in order to allow apps to act on a user's behalf. Therefore, the API provides two convenience methods to help you authenticate your users. The first, get_authorization_url, can be used to redirect an unauthenticated user to the Instagram login screen based on a redirect_uri string and an optional options object containing an optional scope array and an optional state string. The second method, authorize_user, can be used to retrieve and set an access token for a user, allowing your app to act fully on his/her behalf. This method takes three parameters: a response_code which is sent as a GET parameter once a user has authorized your app and Instagram has redirected them back to your authorization redirect URI, a redirect_uri which is the same one supplied to get_authorization_url, and a callback that takes two parameters err and result. err will be populated if and only if the request to authenticate the user has failed for some reason. Otherwise, it will be null and result will be populated with a JSON object representing Instagram's confirmation response that the user is indeed authorized. See Instagram's authentication documentation for more information.
Below is an example of how one might authenticate a user within an ExpressJS app.
var http = require('http');
var express = require('express');
var api = require('instagram-node').instagram();

var app = express();

app.configure(function() {
  // The usual...
});

api.use({
  client_id: YOUR_CLIENT_ID,
  client_secret: YOUR_CLIENT_SECRET
});

var redirect_uri = 'http://yoursite.com/handleauth';

exports.authorize_user = function(req, res) {
  res.redirect(api.get_authorization_url(redirect_uri, { scope: ['likes'], state: 'a state' }));
};

exports.handleauth = function(req, res) {
  api.authorize_user(req.query.code, redirect_uri, function(err, result) {
    if (err) {
      console.log(err.body);
      res.send("Didn't work");
    } else {
      console.log('Yay! Access token is ' + result.access_token);
      res.send('You made it!!');
    }
  });
};

// This is where you would initially send users to authorize
app.get('/authorize_user', exports.authorize_user);

// This is your redirect URI
app.get('/handleauth', exports.handleauth);

http.createServer(app).listen(app.get('port'), function(){
  console.log("Express server listening on port " + app.get('port'));
});
Using the API
Once you've setup the API and/or authenticated, here is the full list of what you can do:
/********************************/
/* USERS */
/********************************/
ig.user('user_id', function(err, result, remaining, limit) {});
/* OPTIONS: { [count], [min_id], [max_id] }; */
ig.user_self_feed([options,] function(err, medias, pagination, remaining, limit) {});
/* OPTIONS: { [count], [min_timestamp], [max_timestamp], [min_id], [max_id] }; */
ig.user_media_recent('user_id', [options,] function(err, medias, pagination, remaining, limit) {});
/* OPTIONS: { [count], [min_timestamp], [max_timestamp], [min_id], [max_id] }; */
ig.user_self_media_recent([options,] function(err, medias, pagination, remaining, limit) {});
/* OPTIONS: { [count], [max_like_id] }; */
ig.user_self_liked([options,] function(err, medias, pagination, remaining, limit) {});
/* OPTIONS: { [count] }; */
ig.user_search('username', [options,] function(err, users, remaining, limit) {});
/********************************/
/* RELATIONSHIP */
/********************************/
/* OPTIONS: { [count], [cursor] }; */
ig.user_follows('user_id', function(err, users, pagination, remaining, limit) {});
/* OPTIONS: { [count], [cursor] }; */
ig.user_followers('user_id', function(err, users, pagination, remaining, limit) {});
ig.user_self_requested_by(function(err, users, remaining, limit) {});
ig.user_relationship('user_id', function(err, result, remaining, limit) {});
ig.set_user_relationship('user_id', 'follow', function(err, result, remaining, limit) {});
/********************************/
/* MEDIAS */
/********************************/
ig.media('media_id', function(err, media, remaining, limit) {});
/* OPTIONS: { [min_timestamp], [max_timestamp], [distance] }; */
ig.media_search(48.4335645654, 2.345645645, [options,] function(err, medias, remaining, limit) {});
ig.media_popular(function(err, medias, remaining, limit) {});
/********************************/
/* COMMENTS */
/********************************/
ig.comments('media_id', function(err, result, remaining, limit) {});
ig.add_comment('media_id', 'your comment', function(err, result, remaining, limit) {});
ig.del_comment('media_id', 'comment_id', function(err, remaining, limit) {});
/********************************/
/* LIKES */
/********************************/
ig.likes('media_id', function(err, result, remaining, limit) {});
ig.add_like('media_id', function(err, remaining, limit) {});
ig.del_like('media_id', function(err, remaining, limit) {});
/********************************/
/* TAGS */
/********************************/
ig.tag('tag', function(err, result, remaining, limit) {});
/* OPTIONS: { [min_tag_id], [max_tag_id] }; */
ig.tag_media_recent('tag', [options,] function(err, medias, pagination, remaining, limit) {});
ig.tag_search('query', function(err, result, remaining, limit) {});
/********************************/
/* LOCATIONS */
/********************************/
ig.location('location_id', function(err, result, remaining, limit) {});
/* OPTIONS: { [min_id], [max_id], [min_timestamp], [max_timestamp] }; */
ig.location_media_recent('location_id', [options,] function(err, result, pagination, remaining, limit) {});
/* SPECS: { lat, lng, [foursquare_v2_id], [foursquare_id] }; */
/* OPTIONS: { [distance] }; */
ig.location_search({ lat: 48.565464564, lng: 2.34656589 }, [options,] function(err, result, remaining, limit) {});
/********************************/
/* GEOGRAPHIES */
/********************************/
/* OPTIONS: { [min_id], [count] } */
ig.geography_media_recent(geography_id, [options,] function(err, result, pagination, remaining, limit) {});
/********************************/
/* SUBSCRIPTIONS */
/********************************/
ig.subscriptions(function(err, result, remaining, limit){});
ig.del_subscription({id:1}, function(err,subscriptions,limit){})
/* OPTIONS: { [verify_token] } */
ig.add_tag_subscription('funny', 'http://MYHOST/tag/funny', [options,] function(err, result, remaining, limit){});
/* OPTIONS: { [verify_token] } */
ig.add_geography_subscription(48.565464564, 2.34656589, 100, 'http://MYHOST/geography', [options,] function(err, result, remaining, limit){});
/* OPTIONS: { [verify_token] } */
ig.add_user_subscription('http://MYHOST/user', [options,] function(err, result, remaining, limit){});
/* OPTIONS: { [verify_token] } */
ig.add_location_subscription(1257285, 'http://MYHOST/location/1257285', [options,] function(err, result, remaining, limit){});
Subscriptions are callbacks from Instagram to your app when new things happen. They should be web-accessible, and return req.query['hub.challenge'] on GET. Read more here. After you subscribe, Instagram will call back your web URL whenever a new post, user action, etc. happens.
You can get your subscriptions with this:
ig.subscriptions(function(err, subscriptions, remaining, limit){
console.log(subscriptions);
});
You can delete all your subscriptions with this:
ig.del_subscription({ all: true }, function(err, subscriptions, remaining, limit){});
or just one with this:
ig.del_subscription({ id: 1 }, function(err, subscriptions, remaining, limit){});
When errors occur, you receive an error object with default properties, but we also add some other things:
// Available when the error comes from Instagram API
err.code; // code from Instagram
err.error_type; // error type from Instagram
err.error_message; // error message from Instagram
// If the error occurs while requesting the API
err.status_code; // the response status code
err.body; // the received body
and:
err.retry(); // Lets you retry in the same conditions as before
When you use functions like user_media_recent or tag_media_recent, you will get a pagination object in your callback. This object is basically the same as what Instagram would give you, but there is a next() function that lets you retrieve the next results without caring about anything.
var ig = require('instagram-node').instagram();

var hdl = function(err, result, pagination, remaining, limit) {
  // Your implementation here
  if (pagination.next) {
    pagination.next(hdl); // Will get second page results
  }
};

ig.tag_media_recent('test', hdl);
Put the following in your environment:
export INSTAGRAM_ACCESS_TOKEN=YOUR_ACCESS_TOKEN
Then just use
make test
hello@totems.co
Author: Totemstech
Source Code: https://github.com/totemstech/instagram-node
In today's post we will learn about 10 Favorite Libraries for NoSQL Database Drivers in Go.
What are NoSQL Database Drivers?
A NoSQL database provides a mechanism for the storage and retrieval of data that uses looser consistency models than typical relational databases in order to achieve horizontal scaling and higher availability. Some authors refer to them as "Not only SQL" to emphasize that some NoSQL systems do allow SQL-like query languages to be used.
Aerospike Client Go

This library is compatible with Go 1.9+ and supports the following operating systems: Linux, Mac OS X (Windows builds are possible, but untested).
Up-to-date documentation is available in the package documentation.
The following is a very simple example of CRUD operations in an Aerospike database.
package main

import (
	"fmt"

	aero "github.com/aerospike/aerospike-client-go"
)

// This is only for this example.
// Please handle errors properly.
func panicOnError(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// define a client to connect to
	client, err := aero.NewClient("127.0.0.1", 3000)
	panicOnError(err)

	key, err := aero.NewKey("test", "aerospike", "key")
	panicOnError(err)

	// define some bins with data
	bins := aero.BinMap{
		"bin1": 42,
		"bin2": "An elephant is a mouse with an operating system",
		"bin3": []interface{}{"Go", 2009},
	}

	// write the bins
	err = client.Put(nil, key, bins)
	panicOnError(err)

	// read it back!
	rec, err := client.Get(nil, key)
	panicOnError(err)
	fmt.Printf("Record: %v\n", rec)

	// delete the key, and check if key exists
	existed, err := client.Delete(nil, key)
	panicOnError(err)
	fmt.Printf("Record existed before delete? %v\n", existed)
}
More examples illustrating the use of the API are located in the examples directory. Details about the API are available in the docs directory.
To install the library into your GOPATH: go get github.com/aerospike/aerospike-client-go
To update it: go get -u github.com/aerospike/aerospike-client-go
Using gopkg.in is also supported: go get -u gopkg.in/aerospike/aerospike-client-go.v1
Arangolite is a lightweight ArangoDB driver for Go.
It focuses on pure AQL querying. See AranGO for a more ORM-like experience.
To install Arangolite:
go get -u github.com/solher/arangolite/v2
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/solher/arangolite/v2"
	"github.com/solher/arangolite/v2/requests"
)

type Node struct {
	arangolite.Document
}

func main() {
	ctx := context.Background()

	// We declare the database definition.
	db := arangolite.NewDatabase(
		arangolite.OptEndpoint("http://localhost:8529"),
		arangolite.OptBasicAuth("root", "rootPassword"),
		arangolite.OptDatabaseName("_system"),
	)

	// The Connect method does two things:
	// - Initializes the connection if needed (JWT authentication).
	// - Checks the database connectivity.
	if err := db.Connect(ctx); err != nil {
		log.Fatal(err)
	}

	// We create a new database.
	err := db.Run(ctx, nil, &requests.CreateDatabase{
		Name: "testDB",
		Users: []map[string]interface{}{
			{"username": "root", "passwd": "rootPassword"},
			{"username": "user", "passwd": "password"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// We sign in as the newly created user on the new database.
	// We could eventually rerun a "db.Connect()" to confirm the connectivity.
	db.Options(
		arangolite.OptBasicAuth("user", "password"),
		arangolite.OptDatabaseName("testDB"),
	)

	// We create a new "nodes" collection.
	if err := db.Run(ctx, nil, &requests.CreateCollection{Name: "nodes"}); err != nil {
		log.Fatal(err)
	}

	// We declare a new AQL query with options and bind parameters.
	key := "48765564346"
	r := requests.NewAQL(`
		FOR n
		IN nodes
		FILTER n._key == @key
		RETURN n
	`, key).
		Bind("key", key).
		Cache(true).
		BatchSize(500) // The caching feature is unavailable prior to ArangoDB 2.7

	// The Run method returns all the query results of every page
	// available in the cursor and unmarshals them into the given struct.
	// Cancelling the context cancels every running request.
	nodes := []Node{}
	if err := db.Run(ctx, &nodes, r); err != nil {
		log.Fatal(err)
	}

	// The Send method gives more control to the user and doesn't follow an eventual cursor.
	// It returns a raw result object.
	result, err := db.Send(ctx, r)
	if err != nil {
		log.Fatal(err)
	}
	nodes = []Node{}
	result.UnmarshalResult(&nodes)

	for result.HasMore() {
		result, err = db.Send(ctx, &requests.FollowCursor{Cursor: result.Cursor()})
		if err != nil {
			log.Fatal(err)
		}
		tmp := []Node{}
		result.UnmarshalResult(&tmp)
		nodes = append(nodes, tmp...)
	}

	fmt.Println(nodes)
}
This library is compatible with Go 1.11+.

Datastore Connectivity for Aerospike (asc)

Aerospike client/policy config params:

keyColumn, keyColumnName
Defines the name of the column used as the record key ('id' by default). It can be specified per table, e.g. events.keyColumn = code.

excludedColumns
List of columns to be excluded from the record (e.g. id, in case we need it only as the record key).

dateFormat
ISO date format used for time.Time conversion.

optimizeLargeScan
Experimental feature that first scans all keys, writes them to disk, and then separate goroutines scan the data using the dumped keys. You can optionally specify scanBaseDirectory.
The following is a very simple example of CRUD operations with dsc.

config.yaml:
driverName: aerospike
parameters:
  namespace: test
  host: 127.0.0.1
  dateFormat: yyyy-MM-dd hh:mm:ss
package main

import (
	"fmt"
	"log"

	_ "github.com/aerospike/aerospike-client-go"
	_ "github.com/viant/asc"
	"github.com/viant/dsc"
)

type Interest struct {
	Id                 int `autoincrement:"true"`
	Name               string
	ExpiryTimeInSecond int `column:"expiry"`
	Category           string
}

func main() {
	config, err := dsc.NewConfigFromURL("config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	factory := dsc.NewManagerFactory()
	manager, err := factory.Create(config)
	if err != nil {
		log.Fatal(err)
	}
	// manager := factory.CreateFromURL("file:///etc/myapp/datastore.json")

	id := 1 // example record key
	interest := &Interest{}
	success, err := manager.ReadSingle(interest, "SELECT id, name, expiry, category FROM interests WHERE id = ?", []interface{}{id}, nil)
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("Found: %v %v\n", success, interest)

	var interests = make([]*Interest, 0)
	err = manager.ReadAll(&interests, "SELECT id, name, expiry, category FROM interests", nil, nil)
	if err != nil {
		panic(err.Error())
	}

	interests = []*Interest{
		{Name: "Abc", ExpiryTimeInSecond: 3600, Category: "xyz"},
		{Name: "Def", ExpiryTimeInSecond: 3600, Category: "xyz"},
		{Id: 20, Name: "Ghi", ExpiryTimeInSecond: 3600, Category: "xyz"},
	}
	inserted, updated, err := manager.PersistAll(&interests, "interests", nil)
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("Inserted %v, updated: %v\n", inserted, updated)

	deleted, err := manager.DeleteAll(&interests, "interests", nil)
	if err != nil {
		panic(err.Error())
	}
	fmt.Printf("Deleted %v\n", deleted)

	var records = []map[string]interface{}{}
	err = manager.ReadAll(&records, "SELECT id, name, expiry, category FROM interests", nil, nil)
	if err != nil {
		panic(err.Error())
	}
}
Build and install the forestdb library (run make install to install it), then copy the headers:

cd <forestdb_project_dir> && mkdir /usr/local/include/libforestdb && cp include/libforestdb/* /usr/local/include/libforestdb

Then get the Go package:

go get -u -v -t github.com/couchbase/goforestdb
// Open a database
db, _ := Open("test", nil)
// Close it properly when we're done
defer db.Close()
// Store the document
doc, _ := NewDoc([]byte("key"), nil, []byte("value"))
defer doc.Close()
db.Set(doc)
// Lookup the document
doc2, _ := NewDoc([]byte("key"), nil, nil)
defer doc2.Close()
db.Get(doc2)
// Delete the document
doc3, _ := NewDoc([]byte("key"), nil, nil)
defer doc3.Close()
db.Delete(doc3)
A smart client for Couchbase in Go.
This is an unofficial version of a Couchbase Golang client. If you are looking for the official Couchbase Golang client, please see CB-go (https://github.com/couchbaselabs/gocb).
This is an evolving package, but it provides a useful interface to a Couchbase server, including all of the pool/bucket discovery features, compatible key distribution with other clients, and vbucket motion awareness so applications can continue to operate during rebalances.
It also supports view querying with source node randomization, so you don't hammer a single node to do all the work.
go get github.com/couchbase/go-couchbase
c, err := couchbase.Connect("http://dev-couchbase.example.com:8091/")
if err != nil {
	log.Fatalf("Error connecting: %v", err)
}

pool, err := c.GetPool("default")
if err != nil {
	log.Fatalf("Error getting pool: %v", err)
}

bucket, err := pool.GetBucket("default")
if err != nil {
	log.Fatalf("Error getting bucket: %v", err)
}

bucket.Set("someKey", 0, []string{"an", "example", "list"})
Go client for Pilosa high performance distributed index.
Download the library into your GOPATH using:
go get github.com/pilosa/go-pilosa
After that, you can import the library in your code using:
import "github.com/pilosa/go-pilosa"
Assuming a Pilosa server is running at localhost:10101 (the default):
package main

import (
	"fmt"

	"github.com/pilosa/go-pilosa"
)

func main() {
	var err error

	// Create the default client
	client := pilosa.DefaultClient()

	// Retrieve the schema
	schema, err := client.Schema()

	// Create an Index object
	myindex := schema.Index("myindex")

	// Create a Field object
	myfield := myindex.Field("myfield")

	// make sure the index and the field exist on the server
	err = client.SyncSchema(schema)

	// Send a Set query. If err is non-nil, response will be nil.
	response, err := client.Query(myfield.Set(5, 42))

	// Send a Row query. If err is non-nil, response will be nil.
	response, err = client.Query(myfield.Row(5))

	// Get the result
	result := response.Result()

	// Act on the result
	if result != nil {
		columns := result.Row().Columns
		fmt.Println("Got columns: ", columns)
	}

	// You can batch queries to improve throughput
	response, err = client.Query(myindex.BatchQuery(
		myfield.Row(5),
		myfield.Row(10)))
	if err != nil {
		fmt.Println(err)
	}
	for _, result := range response.Results() {
		// Act on the result
		fmt.Println(result.Row().Columns)
	}
}
Go-ReJSON is a Go client for ReJSON Redis Module.
ReJSON is a Redis module that implements ECMA-404 The JSON Data Interchange Standard as a native data type. It allows storing, updating and fetching JSON values from Redis keys (documents).
Primary features of ReJSON Module:
* Full support of the JSON standard
* JSONPath-like syntax for selecting element inside documents
* Documents are stored as binary data in a tree structure, allowing fast access to sub-elements
* Typed atomic operations for all JSON values types
Each and every feature of the ReJSON Module is fully incorporated in the project.
Go-ReJSON supports both the type-safe Go-Redis/Redis client and the print-like Redigo client (GoModule/Redigo). Use whichever of the two clients you prefer; Go-ReJSON provides all of its features and functionality in a generic, standard way with either.
Support for mediocregopher/radix and other Redis clients is on our roadmap. Any contribution adding support for other clients is heartily welcome.
go get github.com/nitishm/go-rejson/v4
package main

import (
	"context"
	"encoding/json"
	"flag"
	"fmt"
	"log"

	goredis "github.com/go-redis/redis/v8"
	"github.com/gomodule/redigo/redis"
	"github.com/nitishm/go-rejson/v4"
)

// Name - student name
type Name struct {
	First  string `json:"first,omitempty"`
	Middle string `json:"middle,omitempty"`
	Last   string `json:"last,omitempty"`
}

// Student - student object
type Student struct {
	Name Name `json:"name,omitempty"`
	Rank int  `json:"rank,omitempty"`
}

func Example_JSONSet(rh *rejson.Handler) {
	student := Student{
		Name: Name{
			"Mark",
			"S",
			"Pronto",
		},
		Rank: 1,
	}

	res, err := rh.JSONSet("student", ".", student)
	if err != nil {
		log.Fatalf("Failed to JSONSet")
		return
	}

	if res.(string) == "OK" {
		fmt.Printf("Success: %s\n", res)
	} else {
		fmt.Println("Failed to Set: ")
	}

	studentJSON, err := redis.Bytes(rh.JSONGet("student", "."))
	if err != nil {
		log.Fatalf("Failed to JSONGet")
		return
	}

	readStudent := Student{}
	err = json.Unmarshal(studentJSON, &readStudent)
	if err != nil {
		log.Fatalf("Failed to JSON Unmarshal")
		return
	}

	fmt.Printf("Student read from redis : %#v\n", readStudent)
}

func main() {
	var addr = flag.String("Server", "localhost:6379", "Redis server address")
	rh := rejson.NewReJSONHandler()
	flag.Parse()

	// Redigo Client
	conn, err := redis.Dial("tcp", *addr)
	if err != nil {
		log.Fatalf("Failed to connect to redis-server @ %s", *addr)
	}
	defer func() {
		_, err = conn.Do("FLUSHALL")
		err = conn.Close()
		if err != nil {
			log.Fatalf("Failed to communicate to redis-server @ %v", err)
		}
	}()
	rh.SetRedigoClient(conn)
	fmt.Println("Executing Example_JSONSET for Redigo Client")
	Example_JSONSet(rh)

	// GoRedis Client
	cli := goredis.NewClient(&goredis.Options{Addr: *addr})
	defer func() {
		if err := cli.FlushAll(context.Background()).Err(); err != nil {
			log.Fatalf("goredis - failed to flush: %v", err)
		}
		if err := cli.Close(); err != nil {
			log.Fatalf("goredis - failed to communicate to redis-server: %v", err)
		}
	}()
	rh.SetGoRedisClient(cli)
	fmt.Println("\nExecuting Example_JSONSET for GoRedis Client")
	Example_JSONSet(rh)
}
Couchbase Go Client
The Go SDK library allows you to connect to a Couchbase cluster from Go. It is written in pure Go, and uses the included gocbcore library to handle communicating to the cluster over the Couchbase binary protocol.
To install the latest stable version, run:
go get github.com/couchbase/gocb/v2@latest
To install the latest developer version, run:
go get github.com/couchbase/gocb/v2@master
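A minimal connect-and-write sketch using the v2 API; the address, credentials, bucket name, and document below are placeholders:

package main

import (
	"log"

	"github.com/couchbase/gocb/v2"
)

func main() {
	// Connect to the cluster (placeholder address and credentials).
	cluster, err := gocb.Connect("couchbase://localhost", gocb.ClusterOptions{
		Username: "Administrator",
		Password: "password",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Open a bucket and its default collection (placeholder bucket name).
	bucket := cluster.Bucket("my-bucket")
	collection := bucket.DefaultCollection()

	// Upsert a simple document.
	if _, err := collection.Upsert("my-doc", map[string]string{"hello": "world"}, nil); err != nil {
		log.Fatal(err)
	}
}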
You can run tests in the usual Go way:
go test -race ./...
This will execute both the unit test suite and the integration test suite. By default, the integration test suite is run against a mock Couchbase Server. See the testmain_test.go file for information on command line arguments for running tests against a real server instance.
Releases are targeted for every third Tuesday of the month. This is subject to change based on priorities.
Linting is performed using golangci-lint. To run:
make lint
gocosmos - database/sql driver for Azure Cosmos DB

Go driver for the Azure Cosmos DB SQL API which can be used with the standard database/sql package. A REST client for the Azure Cosmos DB SQL API is also included.
package main

import (
	"github.com/btnguyen2k/gocosmos"
)

func main() {
	cosmosDbConnStr := "AccountEndpoint=https://localhost:8081/;AccountKey=<cosmosdb-account-key>"
	client, err := gocosmos.NewRestClient(nil, cosmosDbConnStr)
	if err != nil {
		panic(err)
	}

	dbSpec := gocosmos.DatabaseSpec{Id: "mydb", Ru: 400}
	result := client.CreateDatabase(dbSpec)
	if result.Error() != nil {
		panic(result.Error())
	}
	// database "mydb" has been created successfully
}
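The package also registers a database/sql driver named gocosmos. A minimal sketch of the same database creation through the standard interface (same placeholder connection string as above; the CREATE DATABASE syntax follows the project README):

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/btnguyen2k/gocosmos"
)

func main() {
	connStr := "AccountEndpoint=https://localhost:8081/;AccountKey=<cosmosdb-account-key>"
	db, err := sql.Open("gocosmos", connStr)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// CREATE DATABASE is part of the driver's SQL dialect.
	if _, err := db.Exec("CREATE DATABASE IF NOT EXISTS mydb WITH ru=400"); err != nil {
		panic(err)
	}
	fmt.Println("database ready")
}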
Connection string syntax for Cosmos DB
AccountEndpoint=<cosmosdb-endpoint>;AccountKey=<cosmosdb-account-key>;TimeoutMs=<timeout-in-ms>;Version=<cosmosdb-api-version>;AutoId=<true/false>;InsecureSkipVerify=<true/false>
AccountEndpoint: (required) endpoint to access Cosmos DB. For example, the endpoint for the Azure Cosmos DB Emulator running locally is https://localhost:8081/.
AccountKey: (required) account key to authenticate.
TimeoutMs: (optional) operation timeout in milliseconds. Default value is 10 seconds if not specified.
Version: (optional) version of the Cosmos DB API to use. Default value is 2018-12-31 if not specified. See: https://docs.microsoft.com/en-us/rest/api/cosmos-db/#supported-rest-api-versions.
AutoId: (optional, available since v0.1.2) see the auto id section.
InsecureSkipVerify: (optional, available since v0.1.4) if true, disables CA verification for the https endpoint (useful to run against a test/dev env with a local/docker Cosmos DB emulator).

godis

godis is a Redis client implemented in Go, modeled after Jedis. It implements most Redis commands, including normal commands, cluster commands, sentinel commands, pipeline commands, and transaction commands. If you have ever used Jedis, you can use godis easily: it has almost the same methods as Jedis.
Notably, godis implements a distributed lock in both single and cluster modes, and godis's lock is much faster than Redisson: on an i7 machine (8 cores, 32 GB RAM), running a 100,000-iteration loop with 8 threads where the business code is just count++, Redisson needs 18-20 seconds while godis needs just 7 seconds.
godis has many test cases to make sure it is stable.
Installation:
go get -u github.com/piaohao/godis
or use go.mod:
require github.com/piaohao/godis latest

Quick Start

Basic example:
package main

import (
	"github.com/piaohao/godis"
)

func main() {
	redis := godis.NewRedis(&godis.Option{
		Host: "localhost",
		Port: 6379,
		Db:   0,
	})
	defer redis.Close()
	redis.Set("godis", "1")
	arr, _ := redis.Get("godis")
	println(arr)
}
Using a pool:
package main

import (
	"github.com/piaohao/godis"
)

func main() {
	option := &godis.Option{
		Host: "localhost",
		Port: 6379,
		Db:   0,
	}
	pool := godis.NewPool(&godis.PoolConfig{}, option)
	redis, _ := pool.GetResource()
	defer redis.Close()
	redis.Set("godis", "1")
	arr, _ := redis.Get("godis")
	println(arr)
}
Thank you for following this article.
In today's post we will learn about 10 Popular Libraries for Relational Database Drivers in Go.
What is a Relational Database (RDBMS)?
A relational database is a type of database that stores and provides access to data points that are related to one another. Relational databases are based on the relational model, an intuitive, straightforward way of representing data in tables. In a relational database, each row in the table is a record with a unique ID called the key. The columns of the table hold attributes of the data, and each record usually has a value for each attribute, making it easy to establish the relationships among data points.
Apache Avatica/Phoenix SQL Driver
Apache Calcite's Avatica Go is a Go database/sql driver for the Avatica server.
Avatica is a sub-project of Apache Calcite.
Install using Go modules:
$ go get github.com/apache/calcite-avatica-go
The Phoenix/Avatica driver implements Go's database/sql/driver interface, so import the database/sql package and the driver:
import "database/sql"
import _ "github.com/apache/calcite-avatica-go/v5"
db, err := sql.Open("avatica", "http://localhost:8765")
Then simply use the database connection to query some data, for example:

rows, err := db.Query("SELECT COUNT(*) FROM test")
if err != nil {
	// handle the error
}
defer rows.Close()
For more details, see the home page.
Release notes for all published versions are available on the history page.
Datastore Connectivity for BigQuery (bgc)
This library is compatible with Go 1.5+
This library uses SQL mode and the streaming API to insert data by default. To use legacy SQL, add the /* USE LEGACY SQL */ hint; in this case you will not be able to fetch repeated and nested fields.
insertMethod
To control the insert method, provide config.parameters with the following value:
_table_name_.insertMethod = "load"
Note that if streaming is used, UPDATE and DELETE statements are currently not supported.

insertIdColumn
For streaming you can specify which column to use as insertId with the following config.params:
_table_name_.insertMethod = "stream"
_table_name_.insertIdColumn = "sessionId"

streamBatchCount
Controls the row count in a batch (default 9999).

insertWaitTimeoutInMs
When inserting data, this library checks for up to 60 seconds whether the data has been added. To control this behaviour you can set insertWaitTimeoutInMs (default 60 seconds). To disable this mechanism set insertWaitTimeoutInMs: -1.

insertMaxRetires
Retries the insert on a 503 internal error.

datasetId
Default dataset.

pageSize
The maximum number of rows of data to return per page of results (default 500). In addition to this limit, responses are also limited to 10 MB.
Google secrets for service account:
a) set the GOOGLE_APPLICATION_CREDENTIALS environment variable
b) the credential can be a name (with extension) of the JSON secret file placed into the ~/.secret/ folder
config.yaml
driverName: bigquery
credentials: bq # place your big query secret json to ~/.secret/bg.json
parameters:
  datasetId: myDataset
c) full URL to secret file
config.yaml
driverName: bigquery
credentials: file://tmp/secret/mySecret.json
parameters:
datasetId: myDataset
Secret file has to specify the following attributes:
type Config struct {
	// google cloud credential
	ClientEmail  string `json:"client_email,omitempty"`
	TokenURL     string `json:"token_uri,omitempty"`
	PrivateKey   string `json:"private_key,omitempty"`
	PrivateKeyID string `json:"private_key_id,omitempty"`
	ProjectID    string `json:"project_id,omitempty"`
}
Private key (pem)
config.yaml
driverName: bigquery
credentials: bq # place your big query secret json to ~/.secret/bg.json
parameters:
  serviceAccountId: "***@developer.gserviceaccount.com"
  datasetId: MyDataset
  projectId: spheric-arcadia-98015
  privateKeyPath: /tmp/secret/bq.pem
firebirdsql (Go firebird sql driver)
Firebird RDBMS https://firebirdsql.org SQL driver for Go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/nakagami/firebirdsql"
)

func main() {
	var n int
	conn, _ := sql.Open("firebirdsql", "user:password@servername/foo/bar.fdb")
	defer conn.Close()
	conn.QueryRow("SELECT Count(*) FROM rdb$relations").Scan(&n)
	fmt.Println("Relations count=", n)
}
See also driver_test.go
package main

import (
	"fmt"

	"github.com/nakagami/firebirdsql"
)

func main() {
	dsn := "user:password@servername/foo/bar.fdb"
	events := []string{"my_event", "order_created"}
	fbEvent, _ := firebirdsql.NewFBEvent(dsn)
	defer fbEvent.Close()
	sbr, _ := fbEvent.Subscribe(events, func(event firebirdsql.Event) { // or use SubscribeChan
		fmt.Printf("event: %s, count: %d, id: %d, remote id: %d\n", event.Name, event.Count, event.ID, event.RemoteID)
	})
	defer sbr.Unsubscribe()
	go func() {
		fbEvent.PostEvent(events[0])
		fbEvent.PostEvent(events[1])
	}()
	<-make(chan struct{}) // wait
}
See also _example
user:password@servername[:port_number]/database_name_or_file[?param1=value1[&param2=value2]...]
go-adodb
Microsoft ADODB driver conforming to the built-in database/sql interface
This package can be installed with the go get command:
go get github.com/mattn/go-adodb
If you hit an issue where your app crashes, try adding a blank import of runtime/cgo as shown below.
import (
...
_ "runtime/cgo"
)
API documentation can be found here: http://godoc.org/github.com/mattn/go-adodb
Examples can be found under the ./_example
directory
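For illustration, a hedged sketch of querying a Microsoft Access file through the driver; the provider string, file name, and table name are placeholders:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-adodb"
)

func main() {
	// Placeholder OLE DB connection string for a local Access file.
	db, err := sql.Open("adodb", "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=my.mdb;")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var n int
	if err := db.QueryRow("SELECT COUNT(*) FROM my_table").Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rows:", n)
}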
A pure Go MSSQL driver for Go's database/sql package
Requires Go 1.8 or above.
Install with go get github.com/denisenkom/go-mssqldb.
The recommended connection string uses a URL format: sqlserver://username:password@host/instance?param1=value&param2=value
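For example, a minimal sketch opening a connection with the URL format (server address, credentials, and database are placeholders):

package main

import (
	"database/sql"
	"log"

	_ "github.com/denisenkom/go-mssqldb"
)

func main() {
	// Placeholder URL-format connection string.
	dsn := "sqlserver://sa:mypass@localhost?database=master&connection+timeout=30"
	db, err := sql.Open("sqlserver", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}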
Other supported formats are listed below.
user id - enter the SQL Server Authentication user id or the Windows Authentication user id in the DOMAIN\User format. On Windows, if user id is empty or missing, Single-Sign-On is used. The user domain is case-sensitive as defined in the connection string.
password
database
connection timeout - in seconds (default is 0 for no timeout). Recommended to set to 0 and use context to manage query and connection timeouts.
dial timeout - in seconds (default is 15), set to 0 for no timeout
encrypt
  disable - data sent between client and server is not encrypted.
  false - data sent between client and server is not encrypted beyond the login packet. (Default)
  true - data sent between client and server is encrypted.
app name - the application name (default is go-mssqldb)
server - host or host\instance (default localhost)
port - used only when there is no instance in server (default 1433)
keepAlive - in seconds; 0 to disable (default is 30)
failoverpartner - host or host\instance (default is no partner)
failoverport - used only when there is no instance in failoverpartner (default 1433)
packet size - in bytes; 512 to 32767 (default is 4096)
log - logging flags (default 0/no logging, 63 for full logging)
TrustServerCertificate
certificate - the file that contains the public key certificate of the CA that signed the SQL Server certificate. The specified certificate overrides the Go platform-specific CA certificates.
hostNameInCertificate - specifies the Common Name (CN) in the server certificate. Default value is the server host.
ServerSPN - the Kerberos SPN (Service Principal Name) for the server. Default is MSSQLSvc/host:port.
Workstation ID - the workstation name (default is the host name)
ApplicationIntent - can be given the value ReadOnly to initiate a read-only connection to an Availability Group listener. The database must be specified when connecting with ApplicationIntent set to ReadOnly.

go-oci8

Golang Oracle database driver conforming to the Go database/sql interface.
Install Oracle full client or Instant Client:
Install a C/C++ compiler
Install pkg-config, edit your package config file oci8.pc (examples below), then set the environment variable PKG_CONFIG_PATH to the oci8.pc file location (or use the Go tag noPkgConfig and set the environment variables CGO_CFLAGS and CGO_LDFLAGS instead)
Go get with Go version 1.9 or higher
go get github.com/mattn/go-oci8
Try the simple select example:
https://godoc.org/github.com/mattn/go-oci8#example-package--SqlSelect
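Along those lines, a hedged minimal select; the DSN is a placeholder in the user/password@host:port/SID form:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-oci8"
)

func main() {
	// Placeholder Oracle DSN.
	db, err := sql.Open("oci8", "scott/tiger@localhost:1521/XE")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var now string
	if err := db.QueryRow("SELECT TO_CHAR(SYSDATE) FROM dual").Scan(&now); err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", now)
}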
If you have a build error, it is normally because of a misconfiguration; make sure to search closed issues for help.
prefix=/devel/target/XXXXXXXXXXXXXXXXXXXXXXXXXX
exec_prefix=${prefix}
libdir=C:/app/instantclient_12_2/sdk/oci/lib/msvc
includedir=C:/app/instantclient_12_2/sdk/include
glib_genmarshal=glib-genmarshal
gobject_query=gobject-query
glib_mkenums=glib-mkenums
Name: oci8
Description: oci8 library
Libs: -L${libdir} -loci
Cflags: -I${includedir}
Version: 12.2
prefix=/devel/target/XXXXXXXXXXXXXXXXXXXXXXXXXX
exec_prefix=${prefix}
libdir=/usr/lib/oracle/12.2/client64/lib
includedir=/usr/include/oracle/12.2/client64
glib_genmarshal=glib-genmarshal
gobject_query=gobject-query
glib_mkenums=glib-mkenums
Name: oci8
Description: oci8 library
Libs: -L${libdir} -lclntsh
Cflags: -I${includedir}
Version: 12.2
Please install pkg-config with brew if not already present. Download the Instant Client and the SDK and unpack them, e.g. in your Downloads folder, and create therein a file named oci8.pc. Please replace <username> with your actual username.
prefixdir=/Users/<username>/Downloads/instantclient_12_2/
libdir=${prefixdir}
includedir=${prefixdir}/sdk/include
Name: OCI
Description: Oracle database driver
Version: 12.2
Libs: -L${libdir} -lclntsh
Cflags: -I${includedir}
You also have to set these environment variables (e.g. permanently by adding them to your .bashrc):
export LD_LIBRARY_PATH=/Users/<username>/Downloads/instantclient_12_2
export PKG_CONFIG_PATH=/Users/<username>/Downloads/instantclient_12_2
Go MySQL Driver is a MySQL driver for Go's (golang) database/sql package.
Simply install the package to your $GOPATH with the go tool from a shell:
$ go get -u github.com/go-sql-driver/mysql
Make sure Git is installed on your machine and in your system's PATH.
Go MySQL Driver is an implementation of Go's database/sql/driver interface. You only need to import the driver and can then use the full database/sql API. Use mysql as driverName and a valid DSN as dataSourceName:
import (
	"database/sql"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

// ...

db, err := sql.Open("mysql", "user:password@/dbname")
if err != nil {
	panic(err)
}
// See "Important settings" section.
db.SetConnMaxLifetime(time.Minute * 3)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(10)
Examples are available in our Wiki.
db.SetConnMaxLifetime() is required to ensure connections are closed by the driver safely before the connection is closed by the MySQL server, OS, or other middleware. Since some middleware closes idle connections after 5 minutes, we recommend a timeout shorter than 5 minutes. This setting helps with load balancing and changing system variables too.
db.SetMaxOpenConns() is highly recommended to limit the number of connections used by the application. There is no recommended limit number because it depends on the application and the MySQL server.
db.SetMaxIdleConns() is recommended to be set to the same value as db.SetMaxOpenConns(). When it is smaller than SetMaxOpenConns(), connections can be opened and closed much more frequently than you expect. Idle connections can be closed by db.SetConnMaxLifetime(). If you want to close idle connections more rapidly, you can use db.SetConnMaxIdleTime() since Go 1.15.
Description
A sqlite3 driver that conforms to the built-in database/sql interface.
Supported Golang version: See .github/workflows/go.yaml.
This package follows the official Golang Release Policy.
Installation
This package can be installed with the go get command:
go get github.com/mattn/go-sqlite3
go-sqlite3 is a cgo package. If you want to build your app using go-sqlite3, you need gcc. However, after you have built and installed go-sqlite3 with go install github.com/mattn/go-sqlite3 (which requires gcc), you can build your app without relying on gcc in the future.
Important: because this is a CGO enabled package, you are required to set the environment variable CGO_ENABLED=1 and have a gcc compiler present within your path.
API Reference
API documentation can be found here.
Examples can be found under the examples directory.
Connection String
When creating a new SQLite database or a connection to an existing one, additional options can be given with the file name. This is also known as a DSN (Data Source Name) string.
Options are appended after the filename of the SQLite database. The database filename and options are separated by a ? (question mark). Options should be URL-encoded (see url.QueryEscape).
This also applies when using an in-memory database instead of a file.
Options can be given using the following format: KEYWORD=VALUE, and multiple options can be combined with the & ampersand.
This library supports DSN options of SQLite itself and provides additional options.
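For example, a minimal sketch using a file-backed database with two common SQLite URI options (cache and mode):

package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// Options are appended after the filename, separated by '?' and joined with '&'.
	db, err := sql.Open("sqlite3", "file:test.db?cache=shared&mode=rwc")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY)"); err != nil {
		log.Fatal(err)
	}
}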
Go DRiver for ORacle
godror is a package which is a database/sql/driver.Driver for connecting to Oracle DB, using Anthony Tuininga's excellent OCI wrapper, ODPI-C.
Note that godror requires CGO_ENABLED=1, so cross-compilation is hard.
Run:
go get github.com/godror/godror@latest
Then install the Oracle Client libraries and you're ready to go!
godror is a cgo package. If you want to build your app using godror, you need gcc (a C compiler).
Important: because this is a CGO enabled package, you are required to set the environment variable CGO_ENABLED=1 and have a gcc compiler present within your path.
See Godror Installation for more information.
To connect to Oracle Database use sql.Open("godror", dataSourceName), where dataSourceName is a logfmt-encoded parameter list. Specify at least "user", "password" and "connectString". For example:
db, err := sql.Open("godror", `user="scott" password="tiger" connectString="dbhost:1521/orclpdb1"`)
The connectString can be ANYTHING that SQL*Plus or Oracle Call Interface (OCI) accepts: a service name, an Easy Connect string like host:port/service_name, or a connect descriptor like (DESCRIPTION=...).
You can specify the connection timeout in seconds with "?connect_timeout=15". Ping uses this timeout, NOT the Deadline in Context! Note that connect_timeout requires at least a 19c client.
For more connection options, see Godror Connection Handling.
To use the godror-specific functions, you'll need a *godror.conn. That's what godror.DriverConn is for! See z_qrcn_test.go for using that to reach NewSubscription.
Use ExecContext and mark each OUT parameter with sql.Out.
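A hedged sketch of calling a stored procedure with an OUT parameter this way; the procedure get_greeting and its signature are hypothetical:

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/godror/godror"
)

func main() {
	db, err := sql.Open("godror", `user="scott" password="tiger" connectString="dbhost:1521/orclpdb1"`)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Hypothetical procedure: PROCEDURE get_greeting(p_name IN VARCHAR2, p_msg OUT VARCHAR2)
	var msg string
	_, err = db.ExecContext(context.Background(),
		"BEGIN get_greeting(:1, :2); END;",
		"world",             // IN parameter
		sql.Out{Dest: &msg}, // OUT parameter
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(msg)
}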
As sql.DB will close the statement ASAP, for long-lived objects (LOB, REF CURSOR), you have to keep the Stmt alive: Prepare the statement, and Close it only after you are finished with the Lob/Rows.
Use ExecContext and an interface{} or a database/sql/driver.Rows as the sql.Out destination, then either use the driver.Rows interface, or transform it into a regular *sql.Rows with godror.WrapRows, or (since Go 1.12) just Scan into a *sql.Rows.
As sql.DB will close the statement ASAP, you have to keep the Stmt alive: Prepare the statement, and Close it only after you are finished with the Rows.
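Continuing the sketch above (db and ctx as before), a hedged example of reading a hypothetical REF CURSOR OUT parameter by scanning into a *sql.Rows (Go 1.12+), keeping the Stmt alive until the Rows are consumed:

// Hypothetical procedure: PROCEDURE list_items(p_rc OUT SYS_REFCURSOR)
stmt, err := db.PrepareContext(ctx, "BEGIN list_items(:1); END;")
if err != nil {
	log.Fatal(err)
}
defer stmt.Close() // keep the Stmt alive until done with the Rows

var rows *sql.Rows
if _, err := stmt.ExecContext(ctx, sql.Out{Dest: &rows}); err != nil {
	log.Fatal(err)
}
defer rows.Close()

for rows.Next() {
	var id int64
	if err := rows.Scan(&id); err != nil {
		log.Fatal(err)
	}
	fmt.Println(id)
}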
For examples, see Anthony Tuininga's presentation about Go (page 41)!
Go FreeTDS wrapper. Native SQL Server database driver.
FreeTDS libraries must be installed on the system.
Mac
brew install freetds
Ubuntu, Debian...
sudo apt-get install freetds-dev
go get github.com/minus5/gofreetds
http://godoc.org/github.com/minus5/gofreetds
Name of the driver is mssql.
db, err := sql.Open("mssql", connStr)
...
row := db.QueryRow("SELECT au_fname, au_lname name FROM authors WHERE au_id = ?", "172-32-1176")
..
var firstName, lastName string
err = row.Scan(&firstName, &lastName)
Full example in example/mssql.
What I'm missing in database/sql is calling stored procedures, handling return values and output params, and especially handling multiple result sets, all of which is supported by FreeTDS and of course by gofreetds.
Connect:
pool, err := freetds.NewConnPool("user=ianic;pwd=ianic;database=pubs;host=iow")
defer pool.Close()
...
//get connection
conn, err := pool.Get()
defer conn.Close()
Execute stored procedure:
rst, err := conn.ExecSp("sp_help", "authors")
Read sp return value, and output params:
returnValue := rst.Status()
var param1, param2 int
rst.ParamScan(&param1, &param2)
Read sp resultset (fill the struct):
author := &Author{}
rst.Scan(author)
Read next resultset:
if rst.NextResult() {
	for rst.Next() {
		var v1, v2 string
		rst.Scan(&v1, &v2)
	}
}
Full example in example/stored_procedure
Thank you for following this article.
This Julia package is an interface to ScyllaDB / Cassandra and is based on the Datastax CPP driver implementing the CQL v3 binary protocol. The package is missing many features, but it does two things quite well:
Now, it's probably easy to extend this package to enable other features, but I haven't taken the time to do so. If you find this useful but are missing a small set of features I can probably implement them if you file an issue. CQLdriver is compatible and depends on DataFrames and JuliaDB.
Currently the following data-types are supported:
Julia Type | CQL type |
---|---|
Vector{UInt8} | BLOB |
String | TEXT |
String | VARCHAR |
Date | DATE |
Int8 | TINYINT |
Int16 | SMALLINT |
Int32 | INTEGER |
Int64 | BIGINT |
Int64 | COUNTER |
Bool | BOOLEAN |
Float32 | FLOAT |
Float64 | DOUBLE |
DateTime | TIMESTAMP |
UUID | UUID |
UUID | TIMEUUID |
Example use
cqlinit() will return a tuple with 2 pointers and a UInt16 error code which you can check. If the returned value is 0 then you're in good shape. It also lets you tune some performance characteristics of your connection.
julia> session, cluster, err = cqlinit("192.168.1.128, 192.168.1.140")
julia> const CQL_OK = 0x0000
julia> @assert err == CQL_OK
julia> cqlclose(session, cluster)
julia> hosts = "192.168.1.128, 192.168.1.140"
julia> session, cluster, err = cqlinit(hosts, threads = 1, connections = 2,
queuesize = 4096, bytelimit = 65536, requestlimit = 256,
username="admin", password="s3cr!t")
julia> cqlclose(session, cluster)
The driver tries to be smart about detecting all the nodes in the cluster and keeping the connection alive.
cqlwrite() takes a DataFrame with named columns, or a JuliaDB table. Make sure that the column names in your DataFrame are the same as those in the table you are writing to. By default it will write 1000 rows per batch and will make 5 attempts at writing each batch.
For appending new rows to tables:
julia> table = "data.refrigerator"
julia> data = DataFrame(veggies = ["Carrots", "Broccoli"], amount = [3, 5])
julia> err = cqlwrite(session, table, data)
For updating a table you must provide additional arguments. Consider the following statement which updates a table that uses counters: UPDATE data.car SET speed = speed + ?, temp = temp + ? WHERE partid = ?
The query below is analogous to the statement above:
julia> table = "data.car"
julia> data = DataFrame(speed=[1,2], temp=[4,5], partid=["wheel1","wheel2"])
julia> err = cqlwrite(session,
table,
data[:,[:speed, :temp]],
update = data[:,[:partid]],
batchsize = 10000,
retries = 6,
counter = true)
cqlread() pulls down data in 10000-row pages by default. It will do 5 retries per page and collate everything into a DataFrame with typed and named columns.
julia> query = "SELECT * FROM data.car"
julia> err, output = cqlread(session, query)
(0x0000, 2×3 DataFrames.DataFrame
│ Row │ speed │ temp │ partid │
├─────┼───────┼──────┼──────────┤
│ 1 │ 1 │ 4 │ "wheel1" │
│ 2 │ 2 │ 5 │ "wheel2" │)
Changing the page size might affect performance. You can also increase the number of characters allowed for string types.
julia> query = "SELECT * FROM data.bigtable LIMIT 1000000"
julia> err, output = cqlread(session,
query,
pgsize = 15000,
retries = 6,
strlen = 1024)
You can send in an array of different queries and the driver will execute them asynchronously and return an array of resulting dataframes.
julia> query = ["SELECT * FROM data.bigtable WHERE driver=124","SELECT * FROM data.smalltable WHERE car=144"]
julia> err, output = cqlread(session,
query,
concurrency=500,
timeout = 12000)
cqlexec()
runs your command on the database and returns 0x0000 if everything went OK.
julia> cmd = "CREATE TABLE test.example (id int, data text, PRIMARY KEY (id));"
julia> err = cqlexec(session, cmd)
Author: r3tex
Source Code: https://github.com/r3tex/CQLdriver.jl
License: View license
1661176407
The chesslinkdriver Flutter package allows you to quickly get your ChessLink board connected to your Android/iOS application.
Feature | Function | Supported |
---|---|---|
Get Version | getVersion() | ✅ |
Reset | reset() | ✅ |
Get Board Status | getStatus() | ✅ |
Extinguish All Leds | extinguishAllLeds() | ✅ |
Set LED's | setLeds() | ✅ |
Set LED Brightness | setLedBrightness() | ✅ |
Get LED Brightness | - | ❌ |
Set Status Report Time | setAutomaticReportsTime() | ✅ |
Get Status Report Time | - | ❌ |
Set Automatic Reports | setAutomaticReports() | ✅ |
Get Automatic Reports | - | ❌ |
Set Scan time | setScanTime() | ✅ |
Get Scan time | - | ❌ |
Set eONE Settings | setEONESettings() | ✅ |
Get eONE Settings | getEONESettings() | ✅ |
Run this command:
With Dart:
$ dart pub add chesslinkdriver
With Flutter:
$ flutter pub add chesslinkdriver
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
chesslinkdriver: ^0.0.10
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:chesslinkdriver/ChessLink.dart';
import 'package:chesslinkdriver/ChessLinkCommunicationClient.dart';
import 'package:chesslinkdriver/ChessLinkMessage.dart';
import 'package:chesslinkdriver/protocol/Answer.dart';
import 'package:chesslinkdriver/protocol/Command.dart';
import 'package:chesslinkdriver/protocol/commands/ExtinguishAllLeds.dart';
import 'package:chesslinkdriver/protocol/commands/GetEONESettings.dart';
import 'package:chesslinkdriver/protocol/commands/GetStatus.dart';
import 'package:chesslinkdriver/protocol/commands/GetVersion.dart';
import 'package:chesslinkdriver/protocol/commands/Reset.dart';
import 'package:chesslinkdriver/protocol/commands/SetAutomaticReports.dart';
import 'package:chesslinkdriver/protocol/commands/SetAutomaticReportsTime.dart';
import 'package:chesslinkdriver/protocol/commands/SetEONESettings.dart';
import 'package:chesslinkdriver/protocol/commands/SetLedBrightness.dart';
import 'package:chesslinkdriver/protocol/commands/SetLeds.dart';
import 'package:chesslinkdriver/protocol/commands/SetScanTime.dart';
import 'package:chesslinkdriver/protocol/model/ChessLinkBoardType.dart';
import 'package:chesslinkdriver/protocol/model/EONESettings.dart';
import 'package:chesslinkdriver/protocol/model/LEDPattern.dart';
import 'package:chesslinkdriver/protocol/model/RequestConfig.dart';
import 'package:chesslinkdriver/protocol/model/StatusReportSendInterval.dart';
example/lib/main.dart
import 'dart:async';
import 'package:flutter_reactive_ble/flutter_reactive_ble.dart';
import 'package:flutter_stateless_chessboard/flutter_stateless_chessboard.dart'
as cb;
import 'package:chesslinkdriver/ChessLinkCommunicationClient.dart';
import 'package:flutter/material.dart';
import 'package:chesslinkdriver/ChessLink.dart';
import 'package:chesslinkdriver/protocol/model/LEDPattern.dart';
import 'package:chesslinkdriver/protocol/model/EONESettings.dart';
import 'package:chesslinkdriver/protocol/model/StatusReportSendInterval.dart';
import 'package:permission_handler/permission_handler.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key}) : super(key: key);
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
ChessLink connectedBoard;
Uuid _serviceId = Uuid.parse("49535343-fe7d-4ae5-8fa9-9fafd205e455");
Uuid _characteristicReadId =
Uuid.parse("49535343-1e4d-4bd9-ba61-23c647249616");
Uuid _characteristicWriteId =
Uuid.parse("49535343-8841-43f4-a8d4-ecbe34729bb3");
Duration scanDuration = Duration(seconds: 10);
List<DiscoveredDevice> devices = [];
bool scanning = false;
final flutterReactiveBle = FlutterReactiveBle();
Timer updateSquareLedTimer;
StreamSubscription<ConnectionStateUpdate> connection;
String version;
EONESettings eoneSettings;
Future<void> reqPermission() async {
await Permission.locationWhenInUse.request();
await Permission.bluetoothConnect.request();
await Permission.bluetoothScan.request();
}
Future<void> listDevices() async {
setState(() {
scanning = true;
devices = [];
});
await reqPermission();
// Listen to scan results
final sub = flutterReactiveBle.scanForDevices(
withServices: [], scanMode: ScanMode.lowLatency).listen((device) async {
if (!device.name.contains("MILLENNIUM") ||
devices.indexWhere((e) => e.id == device.id) > -1) return;
setState(() {
devices.add(device);
});
}, onError: (e) {
print(e);
});
// Stop scanning
Future.delayed(scanDuration, () {
sub.cancel();
setState(() {
scanning = false;
});
});
}
void connect(DiscoveredDevice device) async {
connection = flutterReactiveBle
.connectToDevice(
id: device.id,
connectionTimeout: const Duration(seconds: 2),
).listen((connectionState) async {
print(connectionState.connectionState);
if (connectionState.connectionState == DeviceConnectionState.disconnected) {
disconnect();
return;
}
if (connectionState.connectionState != DeviceConnectionState.connected) {
return;
}
final read = QualifiedCharacteristic(
serviceId: _serviceId,
characteristicId: _characteristicReadId,
deviceId: device.id);
final write = QualifiedCharacteristic(
serviceId: _serviceId,
characteristicId: _characteristicWriteId,
deviceId: device.id);
ChessLinkCommunicationClient client =
ChessLinkCommunicationClient((v) => flutterReactiveBle.writeCharacteristicWithResponse(write, value: v));
flutterReactiveBle
.subscribeToCharacteristic(read)
.listen(client.handleReceive);
ChessLink nBoard = new ChessLink();
await nBoard.init(client, initialDelay: Duration(seconds: 1));
setState(() {
connectedBoard = nBoard;
});
updateSquareLedTimer = Timer.periodic(Duration(milliseconds: 200), (t) => lightChangeSquare());
}, onError: (Object e) {
print(e);
});
}
void getVersion() async {
String _version = await connectedBoard.getVersion();
setState(() {
version = _version;
});
}
void setLedPatternB1ToC3() async {
LEDPattern ledPattern = LEDPattern();
ledPattern.set(
"B1",
LEDPattern.generateSquarePattern(
true, true, true, true, false, false, false, false));
ledPattern.set(
"C3",
LEDPattern.generateSquarePattern(
false, false, false, false, true, true, true, true));
connectedBoard.setLeds(ledPattern, slotTime: Duration(milliseconds: 100));
}
void disconnect() async {
if (updateSquareLedTimer != null) {
updateSquareLedTimer.cancel();
updateSquareLedTimer = null;
}
connection.cancel();
setState(() {
connectedBoard = null;
});
}
Map<String, String> board;
Map<String, String> oldBoard;
void lightChangeSquare() async {
if (board == null) {
return;
}
List<String> squares = [];
if (oldBoard != null) {
for (String sq in board.keys) {
bool hasPiece = board[sq] != null;
bool isNew = board[sq] != oldBoard[sq];
if (hasPiece && isNew) squares.add(sq);
}
}
oldBoard = board;
if (squares.length > 0) {
await connectedBoard.turnOnLeds(squares);
}
}
String boardToFen(Map<String, String> board) {
String res = "";
for (var i = 0; i < 8; i++) {
int free = 0;
for (var j = 0; j < 8; j++) {
String square =
ChessLink.RANKS.reversed.elementAt(j) + ChessLink.ROWS[i];
String piece = board[square];
if (piece == null) {
free++;
} else {
if (free > 0) {
res += free.toString();
free = 0;
}
res += piece;
}
}
if (free > 0) {
res += free.toString();
}
res += "/";
}
return res.substring(0, res.length - 1) + ' w KQkq - 0 1';
}
Widget connectedBoardButtons() {
return Column(
children: [
SizedBox(height: 25),
Center(
child: StreamBuilder(
stream: connectedBoard?.getBoardUpdateStream(),
builder:
(context, AsyncSnapshot<Map<String, String>> snapshot) {
if (!snapshot.hasData) return Text("-");
board = snapshot.data.map((key, value) => MapEntry(key, value == "O" ? "P" : value));
String fen = boardToFen(board);
return cb.Chessboard(
size: 300,
fen: fen,
orientation: cb.Color.BLACK, // optional
lightSquareColor:
Color.fromRGBO(240, 217, 181, 1), // optional
darkSquareColor:
Color.fromRGBO(181, 136, 99, 1), // optional
);
})),
TextButton(
onPressed: getVersion,
child: Text("Get Version (" + (version ?? "_") + ")")),
TextButton(
onPressed: () => connectedBoard.turnOnAllLeds(),
child: Text("Turn on LED's")),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
TextButton(
onPressed: setLedPatternB1ToC3,
child: Text("Turn on B1 -> C3")),
TextButton(
onPressed: () => connectedBoard.turnOnSingleLed("A1"),
child: Text("Turn on A1 LED")),
],
),
TextButton(
onPressed: () => connectedBoard.extinguishAllLeds(),
child: Text("Turn off LED's")),
TextButton(
onPressed: () => connectedBoard.getStatus(),
child: Text("Get Board")),
TextButton(
onPressed: () => connectedBoard.reset(), child: Text("Reset")),
TextButton(onPressed: disconnect, child: Text("Disconnect")),
],
);
}
Widget additionalSettings() {
return Column(
children: [
SizedBox(height: 25),
Text("SetAutomaticReports"),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
TextButton(
onPressed: () => connectedBoard
.setAutomaticReports(StatusReportSendInterval.disabled),
child: Text("disabled")),
TextButton(
onPressed: () => connectedBoard
.setAutomaticReports(StatusReportSendInterval.onEveryScan),
child: Text("onEveryScan")),
TextButton(
onPressed: () => connectedBoard
.setAutomaticReports(StatusReportSendInterval.onChange),
child: Text("onChange")),
TextButton(
onPressed: () => connectedBoard
.setAutomaticReports(StatusReportSendInterval.withSetTime),
child: Text("withSetTime")),
],
),
SizedBox(height: 25),
Text("SetAutomaticReportsTime"),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
TextButton(
onPressed: () => connectedBoard
.setAutomaticReportsTime(Duration(milliseconds: 200)),
child: Text("200ms")),
TextButton(
onPressed: () => connectedBoard
.setAutomaticReportsTime(Duration(milliseconds: 500)),
child: Text("500ms")),
TextButton(
onPressed: () => connectedBoard
.setAutomaticReportsTime(Duration(seconds: 1)),
child: Text("1s")),
],
),
SizedBox(height: 25),
Text("SetBrightness"),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
TextButton(
onPressed: () => connectedBoard.setLedBrightness(0),
child: Text("dim(0)")),
TextButton(
onPressed: () => connectedBoard.setLedBrightness(0.5),
child: Text("middle(0.5)")),
TextButton(
onPressed: () => connectedBoard.setLedBrightness(1),
child: Text("full(1)")),
],
),
SizedBox(height: 25),
Text("Set ChessRules (eONE exclusiv)"),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
TextButton(
onPressed: () => connectedBoard.setEONESettings(EONESettings(false, false, false, false, false, true)),
child: Text("Turn On")),
TextButton(
onPressed: () => connectedBoard.setEONESettings(EONESettings(false, false, false, false, false, false)),
child: Text("Turn Off")),
],
),
SizedBox(height: 25),
Text("setScanTime"),
Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
TextButton(
onPressed: () =>
connectedBoard.setScanTime(Duration(milliseconds: 31)),
child: Text("32 Scans/Sec")),
TextButton(
onPressed: () =>
connectedBoard.setScanTime(Duration(milliseconds: 41)),
child: Text("24.4 Scans/Sec")),
TextButton(
onPressed: () =>
connectedBoard.setScanTime(Duration(milliseconds: 523)),
child: Text("1.9 Scans/Sec")),
],
),
SizedBox(height: 25),
Text("EONESettings"),
TextButton(
onPressed: () => connectedBoard.getEONESettings().then((newEoneSetting) => setState(() { eoneSettings = newEoneSetting; })),
child: Text("Retrieve Current Settings")
),
eoneSettings != null ? Column(
children: [
Text("AutoReverse: " + (eoneSettings.boardAutoReverseEnabled ? "Yes" : "No")),
Text("IsReversed: " + (eoneSettings.boardIsReversed ? "Yes" : "No")),
Text("ChessRules: " + (eoneSettings.chessRulesEnabled ? "Yes" : "No")),
Text("ErrLEDsOnStartup: " + (eoneSettings.errorLedsEnabledOnStartup ? "Yes" : "No")),
Text("ErrLEDsTurnedOffOnLCom: " + (eoneSettings.errorLedsTurnedOffOnLCommand ? "Yes" : "No")),
Text("LEDsAreShowingErrors: " + (eoneSettings.ledsAreShowingErrors ? "Yes" : "No")),
],
) : Text("-"),
],
);
}
Widget deviceList() {
return Column(
mainAxisAlignment: MainAxisAlignment.center,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
SizedBox(height: 25),
Center(
child: scanning
? CircularProgressIndicator()
: TextButton(
child: Text("List Devices"),
onPressed: listDevices,
)),
Flexible(
child: ListView.builder(
itemCount: devices.length,
itemBuilder: (context, index) => ListTile(
title: Text(devices[index].name),
subtitle: Text(devices[index].id.toString()),
onTap: () => connect(devices[index]),
))),
SizedBox(height: 24)
],
);
}
@override
Widget build(BuildContext context) {
Widget content = connectedBoard == null
? deviceList()
: TabBarView(
children: [
connectedBoardButtons(),
additionalSettings(),
],
);
Widget appBar = connectedBoard == null
? AppBar(
title: Text("chesslinkdriver example"),
)
: AppBar(
title: Text("chesslinkdriver example"),
bottom: TabBar(
tabs: [
Tab(text: "Overview"),
Tab(text: "Additional"),
],
),
);
return DefaultTabController(
length: 2, child: Scaffold(appBar: appBar, body: content));
}
}
Author: mono424
Source Code: https://github.com/mono424/chesslinkdriver
License: MIT license
1660740900
This project aims to fully implement the FIGfont spec in JavaScript. It works in the browser and with Node.js. You can see it in action here: http://patorjk.com/software/taag/ (the figlet.js file was written to power that application)
___________.___ ________.__ __ __
\_ _____/| |/ _____/| | _____/ |_ |__| ______
| __) | / \ ___| | _/ __ \ __\ | |/ ___/
| \ | \ \_\ \ |_\ ___/| | | |\___ \
\___ / |___|\______ /____/\___ >__| /\ /\__| /____ >
\/ \/ \/ \/ \______| \/
Install:
npm install figlet
Simple usage:
var figlet = require('figlet');
figlet('Hello World!!', function(err, data) {
if (err) {
console.log('Something went wrong...');
console.dir(err);
return;
}
console.log(data)
});
That should print out:
_ _ _ _ __ __ _ _ _ _
| | | | ___| | | ___ \ \ / /__ _ __| | __| | | |
| |_| |/ _ \ | |/ _ \ \ \ /\ / / _ \| '__| |/ _` | | |
| _ | __/ | | (_) | \ V V / (_) | | | | (_| |_|_|
|_| |_|\___|_|_|\___/ \_/\_/ \___/|_| |_|\__,_(_|_)
Calling the figlet object as a function is shorthand for calling the text function. This method allows you to create ASCII Art from text. It takes in 3 parameters: the input text, an options object, and a callback function.
Example:
figlet.text('Boo!', {
font: 'Ghost',
horizontalLayout: 'default',
verticalLayout: 'default',
width: 80,
whitespaceBreak: true
}, function(err, data) {
if (err) {
console.log('Something went wrong...');
console.dir(err);
return;
}
console.log(data);
});
That will print out:
.-. .-') ,---.
\ ( OO ) | |
;-----.\ .-'),-----. .-'),-----. | |
| .-. | ( OO' .-. '( OO' .-. '| |
| '-' /_)/ | | | |/ | | | || |
| .-. `. \_) | |\| |\_) | |\| || .'
| | \ | \ | | | | \ | | | |`--'
| '--' / `' '-' ' `' '-' '.--.
`------' `-----' `-----' '--'
This method is the synchronous version of the method above.
Example:
console.log(figlet.textSync('Boo!', {
font: 'Ghost',
horizontalLayout: 'default',
verticalLayout: 'default',
width: 80,
whitespaceBreak: true
}));
That will print out:
.-. .-') ,---.
\ ( OO ) | |
;-----.\ .-'),-----. .-'),-----. | |
| .-. | ( OO' .-. '( OO' .-. '| |
| '-' /_)/ | | | |/ | | | || |
| .-. `. \_) | |\| |\_) | |\| || .'
| | \ | \ | | | | \ | | | |`--'
| '--' / `' '-' ' `' '-' '.--.
`------' `-----' `-----' '--'
The options object has several parameters which you can set:
Type: String
Default value: 'Standard'
A string value that indicates the FIGlet font to use.
Type: String
Default value: 'default'
A string value that indicates the horizontal layout to use. FIGlet fonts have 5 possible values for this: "default", "full", "fitted", "controlled smushing", and "universal smushing". "default" does the kerning the way the font designer intended, "full" uses full letter spacing, "fitted" moves the letters together until they almost touch, and "controlled smushing" and "universal smushing" are common FIGlet kerning setups.
Type: String
Default value: 'default'
A string value that indicates the vertical layout to use. FIGlet fonts have 5 possible values for this: "default", "full", "fitted", "controlled smushing", and "universal smushing". "default" does the kerning the way the font designer intended, "full" uses full letter spacing, "fitted" moves the letters together until they almost touch, and "controlled smushing" and "universal smushing" are common FIGlet kerning setups.
Type: Number
Default value: undefined
This option allows you to limit the width of the output. For example, if you want your output to be a max of 80 characters wide, you would set this option to 80. Example
Type: Boolean
Default value: false
This option works in conjunction with "width". If this option is set to true, then the library will attempt to break text up on whitespace when limiting the width. Example
The 2 layout options allow you to override a font's default "kerning". Below you can see how this affects the text. The string "Kerning" was printed using the "Standard" font with horizontal layouts of "default", "fitted" and then "full".
_ __ _
| |/ /___ _ __ _ __ (_)_ __ __ _
| ' // _ \ '__| '_ \| | '_ \ / _` |
| . \ __/ | | | | | | | | | (_| |
|_|\_\___|_| |_| |_|_|_| |_|\__, |
|___/
_ __ _
| |/ / ___ _ __ _ __ (_) _ __ __ _
| ' / / _ \| '__|| '_ \ | || '_ \ / _` |
| . \| __/| | | | | || || | | || (_| |
|_|\_\\___||_| |_| |_||_||_| |_| \__, |
|___/
_ __ _
| |/ / ___ _ __ _ __ (_) _ __ __ _
| ' / / _ \ | '__| | '_ \ | | | '_ \ / _` |
| . \ | __/ | | | | | | | | | | | | | (_| |
|_|\_\ \___| |_| |_| |_| |_| |_| |_| \__, |
|___/
In most cases you'll either use the default setting or the "fitted" setting. Most fonts don't support vertical kerning, but a handful of them do (like the "Standard" font).
The metadata function allows you to retrieve a font's default options and header comment. Example usage:
figlet.metadata('Standard', function(err, options, headerComment) {
if (err) {
console.log('something went wrong...');
console.dir(err);
return;
}
console.dir(options);
console.log(headerComment);
});
The fonts function allows you to get a list of all of the available fonts. Example usage:
figlet.fonts(function(err, fonts) {
if (err) {
console.log('something went wrong...');
console.dir(err);
return;
}
console.dir(fonts);
});
The synchronous version of the fonts method
console.log(figlet.fontsSync());
Allows you to use a font from another source.
const fs = require('fs');
const path = require('path');
let data = fs.readFileSync(path.join(__dirname, 'myfont.flf'), 'utf8');
figlet.parseFont('myfont', data);
console.log(figlet.textSync('myfont!', 'myfont'));
Webpack/React usage will be very similar to what's talked about in the "Getting Started - The Browser" section. The main difference is that you import fonts via the importable-fonts folder. Example:
import figlet from 'figlet';
import standard from 'figlet/importable-fonts/Standard.js'
figlet.parseFont('Standard', standard);
figlet.text('test', {
font: 'Standard',
}, function(err, data) {
console.log(data);
});
The browser API is the same as the Node API with the exception of the "fonts" method not being available. The browser version also requires the fetch API (or a shim) for its loadFont function.
Example usage:
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/fetch/1.0.0/fetch.min.js"></script>
<script type="text/javascript" src="figlet.js"></script>
<script>
figlet(inputText, 'Standard', function(err, text) {
if (err) {
console.log('something went wrong...');
console.dir(err);
return;
}
console.log(text);
});
</script>
The browser API supports a synchronous mode as long as the fonts used are preloaded.
Example:
figlet.defaults({fontPath: "assets/fonts"});
figlet.preloadFonts(["Standard", "Ghost"], ready);
function ready(){
console.log(figlet.textSync("ASCII"));
console.log(figlet.textSync("Art", "Ghost"));
}
That will print out:
_ ____ ____ ___ ___
/ \ / ___| / ___||_ _||_ _|
/ _ \ \___ \ | | | | | |
/ ___ \ ___) || |___ | | | |
/_/ \_\|____/ \____||___||___|
('-. _ .-') .-') _
( OO ).-.( \( -O ) ( OO) )
/ . --. / ,------. / '._
| \-. \ | /`. '|'--...__)
.-'-' | | | / | |'--. .--'
\| |_.' | | |_.' | | |
| .-. | | . '.' | |
| | | | | |\ \ | |
`--' `--' `--' '--' `--'
See the examples folder for a more robust front-end example.
To use figlet.js on the command line, install figlet-cli:
npm install -g figlet-cli
And then you should be able to run it from the command line. Example:
figlet -f "Dancing Font" "Hi"
For more info see the figlet-cli.
Author: Patorjk
Source Code: https://github.com/patorjk/figlet.js
License: MIT license
1660622943
This package reports to Google Cloud Error Reporting. It's multi-platform. HTTP requests are supported via the http and dio packages.
See the /example folder.
Future main() async {
final config = Config(
projectId: "PROJECT_ID",
key:"API_KEY",
service: 'my-app',
version: "1.0.0");
final reporter = StackDriverErrorReporter();
reporter.start(config);
runZonedGuarded(() async {
runApp(const MyApp());
}, (error, trace) {
reporter.report(err: error, trace: trace);
});
}
Run this command:
With Flutter:
$ flutter pub add stackdriver_dart
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
stackdriver_dart: ^1.0.7
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:stackdriver_dart/stackdriver_dart.dart';
example/lib/main.dart
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:stackdriver_dart/stackdriver_dart.dart';
class MyApp extends StatefulWidget {
const MyApp({Key? key}) : super(key: key);
@override
_MyAppState createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: "Report Sample",
supportedLocales: const [
Locale('ja', 'JP'),
Locale('en', ''),
],
home: Container(
child: const Center(
child: Text(
"Report Sample",
style: TextStyle(
fontFamily: 'NotoSansJP',
fontSize: 24,
fontWeight: FontWeight.w200,
color: Color.fromRGBO(0, 102, 238, 0.08),
),
),
),
),
);
}
}
Future main() async {
final config = Config(
projectId: "PROJECT_ID",
key:"API_KEY",
service: 'my-app',
version: "1.0.0");
final reporter = StackDriverErrorReporter();
reporter.start(config);
runZonedGuarded(() async {
runApp(const MyApp());
}, (error, trace) {
reporter.report(err: error, trace: trace);
});
}
Author: Ueki-tomohiro
Source Code: https://github.com/ueki-tomohiro/stackdriver_dart
License: MIT license
1660327500
RNeo4j is Neo4j's R driver. It allows you to read and write data from / to Neo4j directly from your R environment.
First and foremost, download Neo4j!
If you're on Windows, download the .exe and follow the instructions. You'll get a GUI where you simply press "Start" to start Neo4j.
If you're on OS X, you can download either the .dmg or the .tar.gz. The .dmg will give you a GUI where you simply press "Start" to start Neo4j. Otherwise, download the .tar.gz, unzip, navigate to the directory, and execute ./bin/neo4j start.
If you're on Linux, you have to use the .tar.gz. Download the .tar.gz, unzip, navigate to the directory, and execute ./bin/neo4j start.
You may also find neo4j in your distribution's package manager.
These dependencies are only required if you want to use the Bolt interface. They must be present at build time, and libneo4j-client must also be present at runtime.
Rust:
- Windows: make sure Rust and MinGW are on your PATH (e.g. C:\RTools\MinGW\bin, and C:\MinGW\bin)
- OS X: brew install rust (or https://rustup.rs but see "Rust Path" section)
- Debian/Ubuntu: sudo apt-get install cargo
- Arch: sudo pacman -S rust
Clang:
- OS X: brew install llvm
- Debian/Ubuntu: sudo apt-get install clang libclang-dev
- Arch: sudo pacman -S clang
- Your distribution's package manager may not have a package named clang. It may be called llvm.
libneo4j-client:
- OS X: brew install cleishm/neo4j/libneo4j-client
- Debian/Ubuntu: sudo apt-get install libneo4j-client-dev
By default, on *nix systems (such as Linux and OS X), rustup only sets the PATH in your shell. That means that if you try to build RNeo4j in a GUI application like RStudio, it may fail. To work around this issue, simply build RNeo4j in a terminal.
Newer versions of GCC require removing the -Werror from GCC_CFLAGS in configure.ac.
Run these commands in your shell:
git clone https://github.com/cleishm/libneo4j-client
cd libneo4j-client
./autogen.sh
./configure --disable-tools
sudo make install
See https://github.com/cleishm/libneo4j-client for more details
install.packages("RNeo4j")
devtools::install_github("nicolewhite/RNeo4j")
Go to the latest release and download the source code. You can then install with install.packages.
install.packages("/path/to/file.tar.gz", repos=NULL, type="source")
library(RNeo4j)
graph = startGraph("http://localhost:7474/db/data/")
If you have authentication enabled, pass your username and password.
graph = startGraph("http://localhost:7474/db/data/", username="neo4j", password="password")
nicole = createNode(graph, "Person", name="Nicole", age=24)
greta = createNode(graph, "Person", name="Greta", age=24)
kenny = createNode(graph, "Person", name="Kenny", age=27)
shannon = createNode(graph, "Person", name="Shannon", age=23)
r1 = createRel(greta, "LIKES", nicole, weight=7)
r2 = createRel(nicole, "LIKES", kenny, weight=1)
r3 = createRel(kenny, "LIKES", shannon, weight=3)
r4 = createRel(nicole, "LIKES", shannon, weight=5)
If you're returning tabular results, use cypher, which will give you a data.frame.
query = "
MATCH (nicole:Person)-[r:LIKES]->(p:Person)
WHERE nicole.name = 'Nicole'
RETURN nicole.name, r.weight, p.name
"
cypher(graph, query)
## nicole.name r.weight p.name
## 1 Nicole 5 Shannon
## 2 Nicole 1 Kenny
For anything more complicated, use cypherToList, which will give you a list.
query = "
MATCH (nicole:Person)-[:LIKES]->(p:Person)
WHERE nicole.name = 'Nicole'
RETURN nicole, COLLECT(p.name) AS friends
"
cypherToList(graph, query)
## [[1]]
## [[1]]$nicole
## < Node >
## Person
##
## $name
## [1] "Nicole"
##
## $age
## [1] 24
##
##
## [[1]]$friends
## [[1]]$friends[[1]]
## [1] "Shannon"
##
## [[1]]$friends[[2]]
## [1] "Kenny"
Both cypher and cypherToList accept parameters. These parameters can be passed individually or as a list.
query = "
MATCH (p1:Person)-[r:LIKES]->(p2:Person)
WHERE p1.name = {name1} AND p2.name = {name2}
RETURN p1.name, r.weight, p2.name
"
cypher(graph, query, name1="Nicole", name2="Shannon")
## p1.name r.weight p2.name
## 1 Nicole 5 Shannon
cypher(graph, query, list(name1="Nicole", name2="Shannon"))
## p1.name r.weight p2.name
## 1 Nicole 5 Shannon
p = shortestPath(greta, "LIKES", shannon, max_depth=4)
n = nodes(p)
sapply(n, "[[", "name")
## [1] "Greta" "Nicole" "Shannon"
p = shortestPath(greta, "LIKES", shannon, max_depth=4, cost_property="weight")
n = nodes(p)
sapply(n, "[[", "name")
## [1] "Greta" "Nicole" "Kenny" "Shannon"
p$weight
## [1] 11
library(igraph)
query = "
MATCH (n)-->(m)
RETURN n.name, m.name
"
edgelist = cypher(graph, query)
ig = graph.data.frame(edgelist, directed=F)
betweenness(ig)
## Nicole Greta Kenny Shannon
## 2 0 0 0
closeness(ig)
## Nicole Greta Kenny Shannon
## 0.3333333 0.2000000 0.2500000 0.2500000
igraph
plot(ig)
ggnet
library(network)
library(GGally)
net = network(edgelist)
ggnet(net, label.nodes=TRUE)
visNetwork
Read this blog post and check out this slide deck.
library(hflights)
hflights = hflights[sample(nrow(hflights), 1000), ]
row.names(hflights) = NULL
head(hflights)
## Year Month DayofMonth DayOfWeek DepTime ArrTime UniqueCarrier FlightNum
## 1 2011 1 15 6 927 1038 XE 2885
## 2 2011 10 10 1 2001 2322 XE 4243
## 3 2011 6 15 3 1853 2108 CO 670
## 4 2011 4 10 7 2100 102 CO 410
## 5 2011 1 25 2 739 1016 XE 3083
## 6 2011 9 13 2 1745 1841 CO 1204
## TailNum ActualElapsedTime AirTime ArrDelay DepDelay Origin Dest Distance
## 1 N34110 131 113 -10 -3 IAH COS 809
## 2 N13970 141 127 2 19 IAH CMH 986
## 3 N36207 255 231 15 -2 IAH SFO 1635
## 4 N76517 182 162 -18 5 IAH EWR 1400
## 5 N12922 157 128 0 -6 IAH MKE 984
## 6 N35271 56 34 -7 -5 IAH SAT 191
## TaxiIn TaxiOut Cancelled CancellationCode Diverted
## 1 6 12 0 0
## 2 4 10 0 0
## 3 5 19 0 0
## 4 7 13 0 0
## 5 4 25 0 0
## 6 3 19 0 0
addConstraint(graph, "Carrier", "name")
addConstraint(graph, "Airport", "name")
query = "
CREATE (flight:Flight {number: {FlightNum} })
SET flight.year = TOINT({Year}),
flight.month = TOINT({DayofMonth}),
flight.day = TOINT({DayOfWeek})
MERGE (carrier:Carrier {name: {UniqueCarrier} })
CREATE (flight)-[:OPERATED_BY]->(carrier)
MERGE (origin:Airport {name: {Origin} })
MERGE (dest:Airport {name: {Dest} })
CREATE (flight)-[o:ORIGIN]->(origin)
CREATE (flight)-[d:DESTINATION]->(dest)
SET o.delay = TOINT({DepDelay}),
o.taxi_time = TOINT({TaxiOut})
SET d.delay = TOINT({ArrDelay}),
d.taxi_time = TOINT({TaxiIn})
"
tx = newTransaction(graph)
for(i in 1:nrow(hflights)) {
row = hflights[i, ]
appendCypher(tx, query,
FlightNum=row$FlightNum,
Year=row$Year,
DayofMonth=row$DayofMonth,
DayOfWeek=row$DayOfWeek,
UniqueCarrier=row$UniqueCarrier,
Origin=row$Origin,
Dest=row$Dest,
DepDelay=row$DepDelay,
TaxiOut=row$TaxiOut,
ArrDelay=row$ArrDelay,
TaxiIn=row$TaxiIn)
}
commit(tx)
summary(graph)
## This To That
## 1 Flight OPERATED_BY Carrier
## 2 Flight ORIGIN Airport
## 3 Flight DESTINATION Airport
Error in curl::curl_fetch_memory(url, handle = handle) :
Couldn't connect to server
Neo4j probably isn't running. Make sure Neo4j is running first. It's also possible you have localhost resolution issues; try connecting to http://127.0.0.1:7474/db/data/ instead.
Error: client error: (401) Unauthorized
Neo.ClientError.Security.AuthorizationFailed
No authorization header supplied.
You have auth enabled on Neo4j and either didn't provide your username and password or they were invalid. You can pass a username and password to startGraph.
graph = startGraph("http://localhost:7474/db/data/", username="neo4j", password="password")
You can also disable auth by editing the following line in conf/neo4j-server.properties.
# Require (or disable the requirement of) auth to access Neo4j
dbms.security.auth_enabled=false
Check out the contributing doc if you'd like to contribute!
Author: Nicolewhite
Source Code: https://github.com/nicolewhite/Rneo4j
License: View license
1660315989
Dear R and rmongodb users: the rmongodb project is based on legacy C drivers. For the moment, I (@dselivanov) don't have time for rmongodb development, and unless someone ports the new drivers, the package will remain with outdated functionality (see issues). If any of you want to undertake package maintenance, let me know.
For new R / MongoDB users, I recommend starting with the mongolite package, which is much more actively maintained.
Usage
Once you have installed the package, it may be loaded from within R like any other package:
library("rmongodb")
# connect to your local mongodb
mongo <- mongo.create()
# create query object
query <- mongo.bson.from.JSON('{"age": 27}')
# Find the first 100 records
# in collection people of database test where age == 27
cursor <- mongo.find(mongo, "test.people", query, limit=100L)
# Step through the matching records and display them
while (mongo.cursor.next(cursor))
print(mongo.cursor.value(cursor))
mongo.cursor.destroy(cursor)
res <- mongo.find.batch(mongo, "test.people", query, limit=100L)
mongo.disconnect(mongo)
mongo.destroy(mongo)
There is also one demo available:
library("rmongodb")
demo(teachers_aid)
Basic Overview of using the rmongodb package for R - December 2013, Brock Tibert
code examples: https://gist.github.com/Btibert3/7751989
R with MongoDB - MongoDB Munich conference, October 2013, Markus Schmidberger
Accessing mongodb from R - January 2012, Sean Davis
code examples: http://watson.nci.nih.gov/~sdavis/blog/rmongodb-using-R-with-mongo/
R and MongoDB - June 2013, WenSui
example comparing rmongodb and RMongo: http://www.r-bloggers.com/r-and-mongodb/
rmongodb – R Driver for MongoDB - September 2011, Gerald Lindsly
blog post: http://www.r-bloggers.com/rmongodb-r-driver-for-mongodb/
Stackoverflow questions and answers:
http://stackoverflow.com/questions/tagged/rmongodb
Installing MongoDB, MongoDB Doku
http://docs.mongodb.org/manual/installation/
Getting Started with MongoDB, MongoDB Doku
http://docs.mongodb.org/manual/tutorial/getting-started/
MongoDB - Getting Started Guide, April 2013, Satish
Development
To install the development version of rmongodb, it's easiest to use the devtools package:
# install.packages("devtools")
library(devtools)
install_github("mongosoup/rmongodb")
We advise using RStudio (www.rstudio.org) for package development. The RStudio .Rproj file is included in the repository.
We use a three step version number system, e.g. v1.2.1:
This is an R (www.r-project.org) extension supporting access to MongoDB (www.mongodb.org) using the mongodb-c-driver (http://docs.mongodb.org/ecosystem/drivers/c/).
The latest stable version is available on CRAN: http://cran.r-project.org/package=rmongodb
Thanks to Gerald Lindsly and MongoDB, Inc. (formerly 10gen) for the initial work.
In October 2013, MongoSoup and Markus Schmidberger took over the development and maintenance of the R package.
Since October 2014 the package has been maintained by Dmitriy Selivanov. Please feel free to send us issues or pull requests via GitHub: https://github.com/mongosoup/rmongodb
Furthermore, I'm happy to get your feedback personally via email: selivanov.dmitriy (at) gmail.com.
Author: dselivanov
Source Code: https://github.com/dselivanov/rmongodb
1660299780
The MongoDB supported driver for Go.
The recommended way to get started using the MongoDB Go driver is by using Go modules to install the dependency in your project. This can be done either by importing packages from go.mongodb.org/mongo-driver and having the build step install the dependency, or by explicitly running
go get go.mongodb.org/mongo-driver/mongo
When using a version of Go that does not support modules, the driver can be installed using dep by running
dep ensure -add "go.mongodb.org/mongo-driver/mongo"
To get started with the driver, import the mongo package and create a mongo.Client with the Connect function:
import (
"context"
"time"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
"go.mongodb.org/mongo-driver/mongo/readpref"
)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
Make sure to defer a call to Disconnect after instantiating your client:
defer func() {
if err = client.Disconnect(ctx); err != nil {
panic(err)
}
}()
For more advanced configuration and authentication, see the documentation for mongo.Connect.
Calling Connect does not block for server discovery. If you wish to know if a MongoDB server has been found and connected to, use the Ping method:
ctx, cancel = context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
err = client.Ping(ctx, readpref.Primary())
To insert a document into a collection, first retrieve a Database and then a Collection instance from the Client:
collection := client.Database("testing").Collection("numbers")
The Collection instance can then be used to insert documents:
ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
res, err := collection.InsertOne(ctx, bson.D{{"name", "pi"}, {"value", 3.14159}})
id := res.InsertedID
To use bson.D, you will need to add "go.mongodb.org/mongo-driver/bson" to your imports.
Your import statement should now look like this:
import (
"context"
"log"
"time"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
"go.mongodb.org/mongo-driver/mongo/readpref"
)
Several query methods return a cursor, which can be used like this:
ctx, cancel = context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cur, err := collection.Find(ctx, bson.D{})
if err != nil { log.Fatal(err) }
defer cur.Close(ctx)
for cur.Next(ctx) {
var result bson.D
err := cur.Decode(&result)
if err != nil { log.Fatal(err) }
// do something with result....
}
if err := cur.Err(); err != nil {
log.Fatal(err)
}
For methods that return a single item, a SingleResult instance is returned:
var result struct {
Value float64
}
filter := bson.D{{"name", "pi"}}
ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
err = collection.FindOne(ctx, filter).Decode(&result)
if err == mongo.ErrNoDocuments {
// Do something when no record was found
fmt.Println("record does not exist")
} else if err != nil {
log.Fatal(err)
}
// Do something with result...
Additional examples and documentation can be found under the examples directory and on the MongoDB Documentation website.
For help with the driver, please post in the MongoDB Community Forums.
New features and bugs can be reported on jira: https://jira.mongodb.org/browse/GODRIVER
The driver tests can be run against several database configurations. The simplest configuration is a standalone mongod with no auth, no ssl, and no compression. To run these basic driver tests, make sure a standalone MongoDB server instance is running at localhost:27017. To run the tests, you can run make (on Windows, run nmake). This will run coverage, run go-lint, run go-vet, and build the examples.
To test a replica set or sharded cluster, set MONGODB_URI="<connection-string>" for the make command. For example, for a local replica set named rs1 comprised of three nodes on ports 27017, 27018, and 27019:
MONGODB_URI="mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs1" make
To test authentication and TLS, first set up a MongoDB cluster with auth and TLS configured. Testing authentication requires a user with the root role on the admin database. Here is an example command that would run a mongod with TLS correctly configured for tests. Either set or replace PATH_TO_SERVER_KEY_FILE and PATH_TO_CA_FILE with paths to their respective files:
mongod \
--auth \
--tlsMode requireTLS \
--tlsCertificateKeyFile $PATH_TO_SERVER_KEY_FILE \
--tlsCAFile $PATH_TO_CA_FILE \
--tlsAllowInvalidCertificates
To run the tests with make, set:
- MONGO_GO_DRIVER_CA_FILE to the location of the CA file used by the database
- MONGO_GO_DRIVER_KEY_FILE to the location of the client key file
- MONGO_GO_DRIVER_PKCS8_ENCRYPTED_KEY_FILE to the location of the pkcs8 client key file encrypted with the password string: password
- MONGO_GO_DRIVER_PKCS8_UNENCRYPTED_KEY_FILE to the location of the unencrypted pkcs8 key file
- MONGODB_URI to the connection string of the server
- AUTH=auth
- SSL=ssl
For example:
AUTH=auth SSL=ssl \
MONGO_GO_DRIVER_CA_FILE=$PATH_TO_CA_FILE \
MONGO_GO_DRIVER_KEY_FILE=$PATH_TO_CLIENT_KEY_FILE \
MONGO_GO_DRIVER_PKCS8_ENCRYPTED_KEY_FILE=$PATH_TO_ENCRYPTED_KEY_FILE \
MONGO_GO_DRIVER_PKCS8_UNENCRYPTED_KEY_FILE=$PATH_TO_UNENCRYPTED_KEY_FILE \
MONGODB_URI="mongodb://user:password@localhost:27017/?authSource=admin" \
make
Notes:
- The --tlsAllowInvalidCertificates flag is required on the server for the test suite to work correctly.
- The test suite requires the auth database to be set with ?authSource=admin, not /admin.
The MongoDB Go Driver supports wire protocol compression using Snappy, zLib, or zstd. To run tests with wire protocol compression, set MONGO_GO_DRIVER_COMPRESSOR to snappy, zlib, or zstd. For example:
MONGO_GO_DRIVER_COMPRESSOR=snappy make
Ensure the --networkMessageCompressors flag on mongod or mongos includes zlib if testing zLib compression.
Check out the project page for tickets that need completing. See our contribution guidelines for details.
Commits to master are run automatically on evergreen.
See our common issues documentation for troubleshooting frequently encountered issues.
@ashleymcnamara - Mongo Gopher Artwork
Author: Mongodb
Source Code: https://github.com/mongodb/mongo-go-driver
License: Apache-2.0 license
1660261140
Microsoft ADODB driver conforming to the built-in database/sql interface
This package can be installed with the go get command:
go get github.com/mattn/go-adodb
If you run into crashes in your apps, try adding a blank import of runtime/cgo like below.
import (
...
_ "runtime/cgo"
)
API documentation can be found here: http://godoc.org/github.com/mattn/go-adodb
Examples can be found under the ./_example directory.
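For a quick start, a minimal hedged sketch (the provider string and table name are made up; adjust them for your data source):

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-adodb" // registers the "adodb" driver name
)

func main() {
    db, err := sql.Open("adodb", `Provider=Microsoft.ACE.OLEDB.12.0;Data Source=example.mdb`)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var n int
    if err := db.QueryRow(`SELECT COUNT(*) FROM items`).Scan(&n); err != nil {
        log.Fatal(err)
    }
    fmt.Println("rows:", n)
}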
Author: Mattn
Source Code: https://github.com/mattn/go-adodb
License: MIT license
1660252800
Current version: v6.2.1 (RethinkDB v2.4)
Please note that this version of the driver only supports versions of RethinkDB using the v0.4 protocol (any versions of the driver older than RethinkDB 2.0 will not work).
If you need any help you can find me on the RethinkDB slack in the gorethink channel.
go get gopkg.in/rethinkdb/rethinkdb-go.v6
Replace v6 with v5 or v4 to use previous versions.
package rethinkdb_test
import (
"fmt"
"log"
r "gopkg.in/rethinkdb/rethinkdb-go.v6"
)
func Example() {
session, err := r.Connect(r.ConnectOpts{
Address: url, // endpoint without http
})
if err != nil {
log.Fatalln(err)
}
res, err := r.Expr("Hello World").Run(session)
if err != nil {
log.Fatalln(err)
}
var response string
err = res.One(&response)
if err != nil {
log.Fatalln(err)
}
fmt.Println(response)
// Output:
// Hello World
}
Setting up a basic connection with RethinkDB is simple:
embedmd:# (example_connect_test.go go /func ExampleConnect() {/ /^}/)
func ExampleConnect() {
var err error
session, err = r.Connect(r.ConnectOpts{
Address: url,
})
if err != nil {
log.Fatalln(err.Error())
}
}
See the documentation for a list of supported arguments to Connect().
The driver uses a connection pool at all times, by default it creates and frees connections automatically. It's safe for concurrent use by multiple goroutines.
To configure the connection pool, InitialCap, MaxOpen and Timeout can be specified during connection. If you wish to change the value of InitialCap or MaxOpen during runtime, the functions SetInitialPoolCap and SetMaxOpenConns can be used.
embedmd:# (example_connect_test.go go /func ExampleConnect_connectionPool() {/ /^}/)
func ExampleConnect_connectionPool() {
var err error
session, err = r.Connect(r.ConnectOpts{
Address: url,
InitialCap: 10,
MaxOpen: 10,
})
if err != nil {
log.Fatalln(err.Error())
}
}
To connect to a RethinkDB cluster which has multiple nodes you can use the following syntax. When connecting to a cluster with multiple nodes queries will be distributed between these nodes.
embedmd:# (example_connect_test.go go /func ExampleConnect_cluster() {/ /^}/)
func ExampleConnect_cluster() {
var err error
session, err = r.Connect(r.ConnectOpts{
Addresses: []string{url},
// Addresses: []string{url1, url2, url3, ...},
})
if err != nil {
log.Fatalln(err.Error())
}
}
When DiscoverHosts is true, any nodes added to the cluster after the initial connection will be added to the pool of available nodes used by RethinkDB-go. Unfortunately, the canonical address of each server in the cluster MUST be set, as otherwise clients will try to connect to the database nodes locally. For more information about how to set a RethinkDB server's canonical address, see http://www.rethinkdb.com/docs/config-file/.
To log in with a username and password you should first create a user. This can be done by writing to the users system table and then granting that user access to any tables or databases they need access to. These queries can also be executed in the RethinkDB admin console.
err := r.DB("rethinkdb").Table("users").Insert(map[string]string{
"id": "john",
"password": "p455w0rd",
}).Exec(session)
...
err = r.DB("blog").Table("posts").Grant("john", map[string]bool{
"read": true,
"write": true,
}).Exec(session)
...
Finally, the username and password should be passed to Connect when creating your session, for example:
session, err := r.Connect(r.ConnectOpts{
Address: "localhost:28015",
Database: "blog",
Username: "john",
Password: "p455w0rd",
})
Please note that DiscoverHosts will not work with user authentication at this time due to the fact that RethinkDB restricts access to the required system tables.
This library is based on the official drivers so the code on the API page should require very few changes to work.
To view full documentation for the query functions check the API reference or GoDoc
Slice Expr Example
r.Expr([]interface{}{1, 2, 3, 4, 5}).Run(session)
Map Expr Example
r.Expr(map[string]interface{}{"a": 1, "b": 2, "c": 3}).Run(session)
Get Example
r.DB("database").Table("table").Get("GUID").Run(session)
Map Example (Func)
r.Expr([]interface{}{1, 2, 3, 4, 5}).Map(func (row Term) interface{} {
return row.Add(1)
}).Run(session)
Map Example (Implicit)
r.Expr([]interface{}{1, 2, 3, 4, 5}).Map(r.Row.Add(1)).Run(session)
Between (Optional Args) Example
r.DB("database").Table("table").Between(1, 10, r.BetweenOpts{
Index: "num",
RightBound: "closed",
}).Run(session)
For any queries which use callbacks the function signature is important, as your function needs to be a valid RethinkDB-go callback; you can see an example of this in the map example above. The simplified explanation is that all arguments must be of type r.Term. This is because of how the query is sent to the database (your callback is not actually executed in your Go application but encoded as JSON and executed by RethinkDB). The return argument can be anything you want it to be (as long as it is a valid return value for the current query), so it usually makes sense to return interface{}. Here is an example of a callback for the conflict callback of an insert operation:
r.Table("test").Insert(doc, r.InsertOpts{
Conflict: func(id, oldDoc, newDoc r.Term) interface{} {
return newDoc.Merge(map[string]interface{}{
"count": oldDoc.Add(newDoc.Field("count")),
})
},
})
As shown above in the Between example, optional arguments are passed to the function as a struct. Each function that has optional arguments has a related struct. These structs are named in the format FunctionNameOpts; for example, BetweenOpts is the related struct for Between.
For query cancellation, use the Context argument in RunOpts. If Context is nil and ReadTimeout or WriteTimeout is not 0 in ConnectOpts, a Context will be formed by the summation of these timeouts. For unlimited timeouts for Changes(), pass context.Background().
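For example (a sketch; the table name is made up):

ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()

// The query is cancelled if it has not finished within 2 seconds.
res, err := r.Table("posts").Run(session, r.RunOpts{Context: ctx})
if err != nil {
    log.Fatalln(err)
}
defer res.Close()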
Different result types are returned depending on what function is used to execute the query.
- Run returns a cursor which can be used to view all rows returned.
- RunWrite returns a WriteResponse and should be used for queries such as Insert, Update, etc.
- Exec sends a query to the server and closes the connection immediately after reading the response from the database. If you do not wish to wait for the response then you can set the NoReply flag.
Example:
res, err := r.DB("database").Table("tablename").Get(key).Run(session)
if err != nil {
// error
}
defer res.Close() // Always ensure you close the cursor to ensure connections are not leaked
Cursors have a number of methods available for accessing the query results:
- Next retrieves the next document from the result set, blocking if necessary.
- All retrieves all documents from the result set into the provided slice.
- One retrieves the first document from the result set.
Examples:
var row interface{}
for res.Next(&row) {
// Do something with row
}
if res.Err() != nil {
// error
}
var rows []interface{}
err := res.All(&rows)
if err != nil {
// error
}
var row interface{}
err := res.One(&row)
if err == r.ErrEmptyResult {
// row not found
}
if err != nil {
// error
}
When passing structs to Expr (and functions that use Expr such as Insert and Update) the structs are encoded into a map before being sent to the server. Each exported field is added to the map unless the field's tag is "-", or the field is empty and its tag specifies the "omitempty" option.
Each field's default name in the map is the field name, but it can be specified in the struct field's tag value. The "rethinkdb" key in the struct field's tag value is the key name, followed by an optional comma and options. Examples:
// Field is ignored by this package.
Field int `rethinkdb:"-"`
// Field appears as key "myName".
Field int `rethinkdb:"myName"`
// Field appears as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `rethinkdb:"myName,omitempty"`
// Field appears as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `rethinkdb:",omitempty"`
// When the tag name includes an index expression
// a compound field is created
Field1 int `rethinkdb:"myName[0]"`
Field2 int `rethinkdb:"myName[1]"`
NOTE: It is strongly recommended that struct tags are used to explicitly define the mapping between your Go type and how the data is stored by RethinkDB. This is especially important when using an Id field, as by default RethinkDB will create a field named id as the primary key (note that the RethinkDB field is lowercase but the Go version starts with a capital letter).
When encoding maps with non-string keys, the key values are automatically converted to strings where possible; however, it is recommended that you use strings where possible (for example map[string]T).
If you wish to use the json tags for RethinkDB-go then you can call SetTags("rethinkdb", "json") when starting your program; this will cause RethinkDB-go to check for json tags after checking for rethinkdb tags. By default this feature is disabled. This function will also let you support any other tags; the driver will check for tags in the same order as the parameters.
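A short sketch of the fallback (the struct is made up and carries only json tags):

// Call once at program start-up: check "rethinkdb" tags first, then "json".
r.SetTags("rethinkdb", "json")

type Post struct {
    ID    string `json:"id,omitempty"` // picked up via the json fallback
    Title string `json:"title"`
}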
NOTE: Old-style gorethink struct tags are supported but deprecated.
RethinkDB contains some special types which can be used to store special value types; currently supported are binary values, times and geometry data types. RethinkDB-go supports these data types natively, however there are some gotchas:
- Time types: pass a time.Time value to your query; due to the way Go works, type aliasing or embedding is not supported here.
- Binary types: pass a byte slice ([]byte) to your query.
- Geometry types: use the github.com/rethinkdb/rethinkdb-go/types package. Any of the types (Geometry, Point, Line and Lines) can be passed to a query to create a RethinkDB geometry type.
RethinkDB unfortunately does not support compound primary keys using multiple fields, however it does support compound keys using an array of values. For example, if you wanted to create a compound key for a book where the key contained the author ID and book name then the ID might look like this: ["author_id", "book name"]. Luckily RethinkDB-go allows you to easily manage these keys while keeping the fields separate in your structs. For example:
type Book struct {
AuthorID string `rethinkdb:"id[0]"`
Name string `rethinkdb:"id[1]"`
}
// Creates the following document in RethinkDB
{"id": [AUTHORID, NAME]}
Sometimes you may want to use a Go struct that references a document in another table. Instead of creating a new struct which is just used when writing to RethinkDB, you can annotate your struct with the reference tag option. This will tell RethinkDB-go that when encoding your data it should "pluck" the ID field from the nested document and use that instead.
This is all quite complicated, so hopefully this example helps. First let's assume you have two types, Author and Book, and you want to insert a new book into your database, but you don't want to include the entire author struct in the books table. As you can see, the Author field in the Book struct has some extra tags. Firstly, we have added the reference tag option, which tells RethinkDB-go to pluck a field from the Author struct instead of inserting the whole author document. We also have the rethinkdb_ref tag, which tells RethinkDB-go to look for the id field in the Author document; without this tag RethinkDB-go would instead look for the author_id field.
type Author struct {
ID string `rethinkdb:"id,omitempty"`
Name string `rethinkdb:"name"`
}
type Book struct {
ID string `rethinkdb:"id,omitempty"`
Title string `rethinkdb:"title"`
Author Author `rethinkdb:"author_id,reference" rethinkdb_ref:"id"`
}
The resulting data in RethinkDB should look something like this:
{
"author_id": "author_1",
"id": "book_1",
"title": "The Hobbit"
}
If you wanted to read back the book with the author included then you could run the following RethinkDB-go query:
r.Table("books").Get("1").Merge(func(p r.Term) interface{} {
return map[string]interface{}{
"author_id": r.Table("authors").Get(p.Field("author_id")),
}
}).Run(session)
You are also able to reference an array of documents, for example if each book stored multiple authors you could do the following:
type Book struct {
ID string `rethinkdb:"id,omitempty"`
Title string `rethinkdb:"title"`
Authors []Author `rethinkdb:"author_ids,reference" rethinkdb_ref:"id"`
}
{
"author_ids": ["author_1", "author_2"],
"id": "book_1",
"title": "The Hobbit"
}
The query for reading the data back is slightly more complicated but is very similar:
r.Table("books").Get("book_1").Merge(func(p r.Term) interface{} {
return map[string]interface{}{
"author_ids": r.Table("authors").GetAll(r.Args(p.Field("author_ids"))).CoerceTo("array"),
}
})
Marshalers/Unmarshalers
Sometimes the default behaviour for converting Go types to and from ReQL is not desired. For these situations the driver allows you to implement both the Marshaler and Unmarshaler interfaces. These interfaces might look familiar if you are used to the encoding/json package; however, instead of dealing with []byte the interfaces deal with interface{} values (which are later encoded by the encoding/json package when communicating with the database).
A good example of how to use these interfaces is in the types package: in this package the Point type is encoded as the GEOMETRY pseudo-type instead of a normal JSON object.
On the other side, you can implement external encode/decode functions with the SetTypeEncoding function.
By default the driver logs are disabled; when enabled the driver will log errors when it fails to connect to the database. If you would like more verbose error logging you can call r.SetVerbose(true).
Alternatively, if you wish to modify the logging behaviour you can modify the logger provided by github.com/sirupsen/logrus. For example, the following code completely disables the logger:
// Enabled
r.Log.Out = os.Stderr
// Disabled
r.Log.Out = ioutil.Discard
The driver supports opentracing-go. You can enable this feature by setting UseOpentracing to true in the ConnectOpts. The driver will then expect an opentracing.Span in the RunOpts.Context and will start new child spans for queries. You also need to configure a tracer in your program yourself.
The driver starts span for the whole query, from the first byte is sent to the cursor closed, and second-level span for each query for fetching data.
So you can trace how much time you program spends for RethinkDB queries.
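A minimal sketch of how this fits together; the address, table, and span name are placeholders, UseOpentracing and RunOpts.Context come from the text above, and the context plumbing is standard opentracing-go (adjust the driver import path and version to your setup):
import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
)

// Connect with tracing enabled (address is a placeholder).
session, err := r.Connect(r.ConnectOpts{
	Address:        "localhost:28015",
	UseOpentracing: true,
})
if err != nil {
	// handle error
}

// A tracer must already be registered, e.g. via opentracing.SetGlobalTracer.
span := opentracing.StartSpan("people-query") // hypothetical span name
defer span.Finish()

// The driver picks the parent span out of RunOpts.Context.
ctx := opentracing.ContextWithSpan(context.Background(), span)

cursor, err := r.Table("people").Run(session, r.RunOpts{Context: ctx})
if err != nil {
	// handle error
}
defer cursor.Close()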
The driver includes the ability to mock queries, meaning that you can test your code without needing to talk to a real RethinkDB cluster; this is perfect for ensuring that your application has high unit test coverage.
To write tests with mocking you should create an instance of Mock and then set up expectations using On and Return. Expectations allow you to define what results should be returned when a known query is executed: pass the query term you want to mock to On, and the response and error to Return. If a non-nil error is passed to Return then that error will be returned any time the query is executed; if no error is passed then a cursor will be built from the value passed to Return. Once all your expectations have been created, execute your queries using the Mock instead of a Session.
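For instance, mocking an error result might look like this (the test name, key, and error message are made up for illustration; errors and testing come from the standard library):
func TestGetMissing(t *testing.T) {
	mock := r.NewMock()
	// A non-nil error passed to Return is returned whenever this query runs.
	mock.On(r.Table("people").Get("missing")).Return(nil, errors.New("not found"))

	_, err := r.Table("people").Get("missing").Run(mock)
	if err == nil || err.Error() != "not found" {
		t.Errorf("expected the mocked error, got: %v", err)
	}

	mock.AssertExpectations(t)
}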
Here is an example that shows how to mock a query that returns multiple rows; the resulting cursor can be used as normal.
func TestSomething(t *testing.T) {
	mock := r.NewMock()
	mock.On(r.Table("people")).Return([]interface{}{
		map[string]interface{}{"id": 1, "name": "John Smith"},
		map[string]interface{}{"id": 2, "name": "Jane Smith"},
	}, nil)

	cursor, err := r.Table("people").Run(mock)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	var rows []interface{}
	err = cursor.All(&rows)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	// Test result of rows
	mock.AssertExpectations(t)
}
If you want the cursor to block on some of the response values, you can pass in a value of type chan interface{} and the cursor will block until a value is available to read on the channel. Alternatively, you can pass in a function with the signature func() interface{}; the cursor will call the function, which may block (a sketch of this form follows the channel example below). Here is the example above adapted to use a channel.
func TestSomething(t *testing.T) {
	mock := r.NewMock()
	ch := make(chan []interface{})
	mock.On(r.Table("people")).Return(ch, nil)

	go func() {
		ch <- []interface{}{
			map[string]interface{}{"id": 1, "name": "John Smith"},
			map[string]interface{}{"id": 2, "name": "Jane Smith"},
		}
		ch <- []interface{}{map[string]interface{}{"id": 3, "name": "Jack Smith"}}
		close(ch)
	}()

	cursor, err := r.Table("people").Run(mock)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	var rows []interface{}
	err = cursor.All(&rows)
	if err != nil {
		t.Errorf("err is: %v", err)
	}

	// Test result of rows
	mock.AssertExpectations(t)
}
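The function form mentioned above works the same way; here is a minimal sketch (how and whether the function blocks is up to you):
mock.On(r.Table("people")).Return(func() interface{} {
	// Called by the cursor when it needs more data; it may block, e.g. on a
	// channel or a timer, before returning the next batch of rows.
	return []interface{}{map[string]interface{}{"id": 1, "name": "John Smith"}}
}, nil)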
The mocking implementation is based on the amazing https://github.com/stretchr/testify library; thanks to @stretchr for their awesome work!
Everyone wants their project's benchmarks to be speedy. And while we know that RethinkDB and the RethinkDB-go driver are quite fast, our primary goal is for our benchmarks to be correct. They are designed to give you, the user, an accurate picture of writes per second (w/s). If you come up with an accurate test that meets this aim, please submit a pull request.
Thanks to @jaredfolkins for the contribution.
| Type | Value |
|---|---|
| Model Name | MacBook Pro |
| Model Identifier | MacBookPro11,3 |
| Processor Name | Intel Core i7 |
| Processor Speed | 2.3 GHz |
| Number of Processors | 1 |
| Total Number of Cores | 4 |
| L2 Cache (per Core) | 256 KB |
| L3 Cache | 6 MB |
| Memory | 16 GB |
BenchmarkBatch200RandomWrites 20 557227775 ns/op
BenchmarkBatch200RandomWritesParallel10 30 354465417 ns/op
BenchmarkBatch200SoftRandomWritesParallel10 100 761639276 ns/op
BenchmarkRandomWrites 100 10456580 ns/op
BenchmarkRandomWritesParallel10 1000 1614175 ns/op
BenchmarkRandomSoftWrites 3000 589660 ns/op
BenchmarkRandomSoftWritesParallel10 10000 247588 ns/op
BenchmarkSequentialWrites 50 24408285 ns/op
BenchmarkSequentialWritesParallel10 1000 1755373 ns/op
BenchmarkSequentialSoftWrites 3000 631211 ns/op
BenchmarkSequentialSoftWritesParallel10 10000 263481 ns/op
Many functions have examples and are viewable in the godoc; alternatively, view some more fully featured examples on the wiki.
Another good place to find examples is the tests: almost every term has a couple of tests that demonstrate how it can be used.
Author: Rethinkdb
Source Code: https://github.com/rethinkdb/rethinkdb-go
License: Apache-2.0 license