Official Dgraph Go client which communicates with the server using gRPC.
Before using this client, we highly recommend that you go through tour.dgraph.io and docs.dgraph.io to understand how to run and work with Dgraph.
Use Discuss Issues for reporting issues about this repository.
Depending on the version of Dgraph that you are connecting to, you will have to use a different version of this client, with its corresponding import path.
Dgraph version | dgo version | dgo import path |
---|---|---|
dgraph 1.0.X | dgo 1.X.Y | "github.com/dgraph-io/dgo" |
dgraph 1.1.X | dgo 2.X.Y | "github.com/dgraph-io/dgo/v2" |
dgraph 20.03.0 | dgo 200.03.0 | "github.com/dgraph-io/dgo/v200" |
dgraph 20.07.0 | dgo 200.03.0 | "github.com/dgraph-io/dgo/v200" |
dgraph 20.11.0 | dgo 200.03.0 | "github.com/dgraph-io/dgo/v200" |
dgraph 21.03.0 | dgo 210.03.0 | "github.com/dgraph-io/dgo/v210" |
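For example, to pin the client matching a Dgraph 21.03 cluster, a go.mod entry would use the v210 module path from the table (the exact patch version shown is illustrative):

```
require github.com/dgraph-io/dgo/v210 v210.03.0
```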
Note: One of the most important API breakages from dgo v1 to v2 is in the function dgo.Txn.Mutate. This function returns an *api.Assigned value in v1 but an *api.Response in v2.
Note: There is no breaking API change from v2 to v200, but we have decided to follow the CalVer versioning scheme.
A dgraphClient object can be initialized by passing it a list of api.DgraphClient clients as variadic arguments. Connecting to multiple Dgraph servers in the same cluster allows for better distribution of the workload. The following code snippet shows just one connection.
conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
if err != nil {
log.Fatal(err)
}
defer conn.Close()
dgraphClient := dgo.NewDgraphClient(api.NewDgraphClient(conn))
If your server has Access Control Lists enabled (Dgraph v1.1 or above), the client must be logged in before accessing data. Use the Login endpoint:
Calling Login obtains and remembers the access and refresh JWT tokens. All subsequent operations via the logged-in client send along the stored access token.
err := dgraphClient.Login(ctx, "user", "passwd")
// Check error
If your server additionally has namespaces (Dgraph v21.03 or above), use the LoginIntoNamespace API.
err := dgraphClient.LoginIntoNamespace(ctx, "user", "passwd", 0x10)
// Check error
To set the schema, create an instance of api.Operation and use the Alter endpoint.
op := &api.Operation{
Schema: `name: string @index(exact) .`,
}
err := dgraphClient.Alter(ctx, op)
// Check error
Operation contains other fields as well, including DropAttr and DropAll. DropAll is useful if you wish to discard all the data and start from a clean slate without bringing the instance down. DropAttr is used to drop all the data related to a predicate.
Starting with Dgraph version 20.03.0, indexes can be computed in the background. Set the RunInBackground field of the api.Operation to true before passing it to the Alter function. You can find more details here.
op := &api.Operation{
Schema: `name: string @index(exact) .`,
RunInBackground: true,
}
err := dgraphClient.Alter(ctx, op)
To create a transaction, call dgraphClient.NewTxn(), which returns a *dgo.Txn object. This operation incurs no network overhead.
It is good practice to call txn.Discard(ctx) in a defer statement right after the transaction is created. Calling txn.Discard(ctx) after txn.Commit(ctx) is a no-op, and txn.Discard(ctx) can be called multiple times with no additional side effects.
txn := dgraphClient.NewTxn()
defer txn.Discard(ctx)
Read-only transactions can be created by calling dgraphClient.NewReadOnlyTxn(). Read-only transactions are useful for increasing read speed because they can circumvent the usual consensus protocol. They cannot contain mutations: calling txn.Commit() results in an error, and calling txn.Discard() is a no-op.
txn.Mutate(ctx, mu) runs a mutation. It takes a context.Context and an *api.Mutation object. You can set the data using JSON or RDF N-Quad format.
To use JSON, use the fields SetJson and DeleteJson, which accept a string representing the nodes to be added or removed, respectively (either as a JSON map or a list). To use RDF, use the fields SetNquads and DelNquads, which accept a string of valid RDF triples (one per line) to be added or removed, respectively. The protobuf object also contains Set and Del fields, which accept a list of RDF triples that have already been parsed into the internal format. These fields are mainly used internally; users planning on using RDF should use SetNquads and DelNquads instead.
We define a Person struct to represent a person and marshal an instance of it for use with the Mutation object.
type Person struct {
Uid string `json:"uid,omitempty"`
Name string `json:"name,omitempty"`
DType []string `json:"dgraph.type,omitempty"`
}
p := Person{
Uid: "_:alice",
Name: "Alice",
DType: []string{"Person"},
}
pb, err := json.Marshal(p)
if err != nil {
log.Fatal(err)
}
mu := &api.Mutation{
SetJson: pb,
}
res, err := txn.Mutate(ctx, mu)
if err != nil {
log.Fatal(err)
}
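The same node can also be created with RDF instead of JSON. As a minimal, server-free sketch of what a SetNquads payload looks like, the helper below (a hypothetical utility, not part of dgo) composes one triple per line:

```go
package main

import "fmt"

// nquad formats one RDF triple in the N-Quad syntax Dgraph accepts:
// <subject> <predicate> "object" .
func nquad(subject, predicate, object string) string {
	return fmt.Sprintf("%s <%s> %q .", subject, predicate, object)
}

func main() {
	// Two triples describing a blank node; the result could be passed
	// as []byte(payload) in api.Mutation{SetNquads: ...}.
	payload := nquad("_:alice", "name", "Alice") + "\n" +
		nquad("_:alice", "dgraph.type", "Person")
	fmt.Println(payload)
}
```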
For a more complete example, see GoDoc.
Sometimes you only want to commit a mutation without querying anything further. In such cases, you can set mu.CommitNow = true to indicate that the mutation must be committed immediately.
A mutation can also be run using txn.Do.
mu := &api.Mutation{
SetJson: pb,
}
req := &api.Request{CommitNow:true, Mutations: []*api.Mutation{mu}}
res, err := txn.Do(ctx, req)
if err != nil {
log.Fatal(err)
}
You can run a query by calling txn.Query(ctx, q), passing in a DQL query string. If you want to pass a map of variables to be set in the query, call txn.QueryWithVars(ctx, q, vars) with the variables map as the third argument.
Let's run the following query with a variable $a:
q := `query all($a: string) {
all(func: eq(name, $a)) {
name
}
}`
res, err := txn.QueryWithVars(ctx, q, map[string]string{"$a": "Alice"})
fmt.Printf("%s\n", res.Json)
You can also use the txn.Do function to run a query.
req := &api.Request{
Query: q,
Vars: map[string]string{"$a": "Alice"},
}
res, err := txn.Do(ctx, req)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%s\n", res.Json)
When running a schema query for the predicate name, the schema response is found in the Json field of api.Response, as shown below:
q := `schema(pred: [name]) {
type
index
reverse
tokenizer
list
count
upsert
lang
}`
res, err := txn.Query(ctx, q)
if err != nil {
log.Fatal(err)
}
fmt.Printf("%s\n", res.Json)
The txn.Do function allows you to run upserts consisting of one query and one mutation. Variables can be defined in the query and used in the mutation. You can also use txn.Do to perform a query followed by a mutation.
To learn more about upserts, we highly recommend going through the docs at https://docs.dgraph.io/mutations/#upsert-block.
query := `
query {
user as var(func: eq(email, "wrong_email@dgraph.io"))
}`
mu := &api.Mutation{
SetNquads: []byte(`uid(user) <email> "correct_email@dgraph.io" .`),
}
req := &api.Request{
Query: query,
Mutations: []*api.Mutation{mu},
CommitNow:true,
}
// Update email only if matching uid found.
if _, err := dg.NewTxn().Do(ctx, req); err != nil {
log.Fatal(err)
}
The upsert block also allows specifying a conditional mutation block using an @if directive. The mutation is executed only when the specified condition is true; if the condition is false, the mutation is silently ignored. See more about conditional upserts here.
query := `
query {
user as var(func: eq(email, "wrong_email@dgraph.io"))
}`
mu := &api.Mutation{
Cond: `@if(eq(len(user), 1))`, // Only mutate if "wrong_email@dgraph.io" belongs to a single user.
SetNquads: []byte(`uid(user) <email> "correct_email@dgraph.io" .`),
}
req := &api.Request{
Query: query,
Mutations: []*api.Mutation{mu},
CommitNow:true,
}
// Update email only if exactly one matching uid is found.
if _, err := dg.NewTxn().Do(ctx, req); err != nil {
log.Fatal(err)
}
A transaction can be committed using the txn.Commit(ctx) method. If your transaction consisted solely of calls to txn.Query or txn.QueryWithVars, with no calls to txn.Mutate, then calling txn.Commit is not necessary.
An error will be returned if other transactions running concurrently modify the same data that was modified in this transaction. It is up to the user to retry transactions when they fail.
txn := dgraphClient.NewTxn()
// Perform some queries and mutations.
err := txn.Commit(ctx)
if err == dgo.ErrAborted {
// Retry or handle error
}
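The retry advice above can be captured in a small helper. The sketch below is dgo-independent: errAborted is a stand-in for the aborted-transaction error dgo returns, and attempt is any function that runs one query-and-commit cycle:

```go
package main

import (
	"errors"
	"fmt"
)

// errAborted is a stand-in for the error dgo returns when a conflicting
// concurrent transaction forces an abort.
var errAborted = errors.New("transaction aborted")

// withRetry re-runs attempt up to maxTries times, but only when the
// failure is an abort; any other error is returned immediately.
func withRetry(maxTries int, attempt func() error) error {
	var err error
	for i := 0; i < maxTries; i++ {
		if err = attempt(); !errors.Is(err, errAborted) {
			return err
		}
	}
	return err
}

func main() {
	tries := 0
	retryErr := withRetry(3, func() error {
		tries++
		if tries < 3 {
			return errAborted // simulate two conflicts before success
		}
		return nil
	})
	fmt.Println(tries, retryErr) // 3 <nil>
}
```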
Metadata headers such as authentication tokens can be set through the context of gRPC methods. Below is an example of how to set a header named "auth-token".
// The following piece of code shows how one can set metadata with
// auth-token, to allow Alter operation, if the server requires it.
md := metadata.New(nil)
md.Append("auth-token", "the-auth-token-value")
ctx := metadata.NewOutgoingContext(context.Background(), md)
dg.Alter(ctx, &op)
Please use the following snippet to connect to a Slash GraphQL or Slash Enterprise backend.
conn, err := dgo.DialSlashEndpoint("https://your.endpoint.dgraph.io/graphql", "api-token")
if err != nil {
log.Fatal(err)
}
defer conn.Close()
dgraphClient := dgo.NewDgraphClient(api.NewDgraphClient(conn))
Make sure you have dgraph installed before you run the tests. The following command runs the unit and integration tests:
go test -v ./...
Author: Dgraph-io
Source Code: https://github.com/dgraph-io/dgo
License: Apache-2.0 license
Go 1.15 was released on 11 August 2020. Highlighted updates and features include substantial improvements to the Go linker, improved allocation for small objects at high core counts, X.509 CommonName deprecation, GOPROXY support for skipping proxies that return errors, a new embedded tzdata package, several core library improvements, and more.
In line with Go's backward-compatibility promise, after upgrading to Go 1.15 almost all existing Go applications and programs continue to compile and run as they did on older Go versions.
go-elasticsearch
The official Go client for Elasticsearch.
Language clients are forward compatible, meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions, and without guarantees.
When using Go modules, include the version in the import path, and specify either an explicit version or a branch:
require github.com/elastic/go-elasticsearch/v8 v8.0.0
require github.com/elastic/go-elasticsearch/v7 7.17
It's possible to use multiple versions of the client in a single project:
// go.mod
github.com/elastic/go-elasticsearch/v7 v7.17.0
github.com/elastic/go-elasticsearch/v8 v8.0.0
// main.go
import (
elasticsearch7 "github.com/elastic/go-elasticsearch/v7"
elasticsearch8 "github.com/elastic/go-elasticsearch/v8"
)
// ...
es7, _ := elasticsearch7.NewDefaultClient()
es8, _ := elasticsearch8.NewDefaultClient()
The main branch of the client is compatible with the current master branch of Elasticsearch.
Add the package to your go.mod file:
require github.com/elastic/go-elasticsearch/v8 main
Or, clone the repository:
git clone --branch main https://github.com/elastic/go-elasticsearch.git $GOPATH/src/github.com/elastic/go-elasticsearch
A complete example:
mkdir my-elasticsearch-app && cd my-elasticsearch-app
cat > go.mod <<-END
module my-elasticsearch-app
require github.com/elastic/go-elasticsearch/v8 main
END
cat > main.go <<-END
package main
import (
"log"
"github.com/elastic/go-elasticsearch/v8"
)
func main() {
es, _ := elasticsearch.NewDefaultClient()
log.Println(elasticsearch.Version)
log.Println(es.Info())
}
END
go run main.go
The elasticsearch package ties together two separate packages for calling the Elasticsearch APIs and transferring data over HTTP: esapi and elastictransport, respectively.
Use the elasticsearch.NewDefaultClient() function to create the client with the default settings.
es, err := elasticsearch.NewDefaultClient()
if err != nil {
log.Fatalf("Error creating the client: %s", err)
}
res, err := es.Info()
if err != nil {
log.Fatalf("Error getting response: %s", err)
}
defer res.Body.Close()
log.Println(res)
// [200 OK] {
// "name" : "node-1",
// "cluster_name" : "go-elasticsearch"
// ...
NOTE: It is critical to both close the response body and to consume it, in order to reuse persistent TCP connections in the default HTTP transport. If you're not interested in the response body, call io.Copy(ioutil.Discard, res.Body).
When you export the ELASTICSEARCH_URL environment variable, it will be used to set the cluster endpoint(s). Separate multiple addresses with a comma.
To set the cluster endpoint(s) programmatically, pass a configuration object to the elasticsearch.NewClient() function.
cfg := elasticsearch.Config{
Addresses: []string{
"http://localhost:9200",
"http://localhost:9201",
},
// ...
}
es, err := elasticsearch.NewClient(cfg)
To set the username and password, include them in the endpoint URL, or use the corresponding configuration options.
cfg := elasticsearch.Config{
// ...
Username: "foo",
Password: "bar",
}
To set a custom certificate authority used to sign the certificates of cluster nodes, use the CACert configuration option.
cert, _ := ioutil.ReadFile(*cacert)
cfg := elasticsearch.Config{
// ...
CACert: cert,
}
To configure other HTTP settings, pass an http.Transport object in the configuration object.
cfg := elasticsearch.Config{
Transport: &http.Transport{
MaxIdleConnsPerHost: 10,
ResponseHeaderTimeout: time.Second,
TLSClientConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
// ...
},
// ...
},
}
See the _examples/configuration.go and _examples/customization.go files for more examples of configuring and customizing the client, and the _examples/security folder for an example of a security configuration.
The following example demonstrates a more complex usage. It fetches the Elasticsearch version from the cluster, indexes a couple of documents concurrently, and prints the search results, using a lightweight wrapper around the response body.
// $ go run _examples/main.go
package main
import (
"bytes"
"context"
"encoding/json"
"log"
"strconv"
"strings"
"sync"
"github.com/elastic/go-elasticsearch/v8"
"github.com/elastic/go-elasticsearch/v8/esapi"
)
func main() {
log.SetFlags(0)
var (
r map[string]interface{}
wg sync.WaitGroup
)
// Initialize a client with the default settings.
//
// An `ELASTICSEARCH_URL` environment variable will be used when exported.
//
es, err := elasticsearch.NewDefaultClient()
if err != nil {
log.Fatalf("Error creating the client: %s", err)
}
// 1. Get cluster info
//
res, err := es.Info()
if err != nil {
log.Fatalf("Error getting response: %s", err)
}
defer res.Body.Close()
// Check response status
if res.IsError() {
log.Fatalf("Error: %s", res.String())
}
// Deserialize the response into a map.
if err := json.NewDecoder(res.Body).Decode(&r); err != nil {
log.Fatalf("Error parsing the response body: %s", err)
}
// Print client and server version numbers.
log.Printf("Client: %s", elasticsearch.Version)
log.Printf("Server: %s", r["version"].(map[string]interface{})["number"])
log.Println(strings.Repeat("~", 37))
// 2. Index documents concurrently
//
for i, title := range []string{"Test One", "Test Two"} {
wg.Add(1)
go func(i int, title string) {
defer wg.Done()
// Build the request body.
var b strings.Builder
b.WriteString(`{"title" : "`)
b.WriteString(title)
b.WriteString(`"}`)
// Set up the request object.
req := esapi.IndexRequest{
Index: "test",
DocumentID: strconv.Itoa(i + 1),
Body: strings.NewReader(b.String()),
Refresh: "true",
}
// Perform the request with the client.
res, err := req.Do(context.Background(), es)
if err != nil {
log.Fatalf("Error getting response: %s", err)
}
defer res.Body.Close()
if res.IsError() {
log.Printf("[%s] Error indexing document ID=%d", res.Status(), i+1)
} else {
// Deserialize the response into a map.
var r map[string]interface{}
if err := json.NewDecoder(res.Body).Decode(&r); err != nil {
log.Printf("Error parsing the response body: %s", err)
} else {
// Print the response status and indexed document version.
log.Printf("[%s] %s; version=%d", res.Status(), r["result"], int(r["_version"].(float64)))
}
}
}(i, title)
}
wg.Wait()
log.Println(strings.Repeat("-", 37))
// 3. Search for the indexed documents
//
// Build the request body.
var buf bytes.Buffer
query := map[string]interface{}{
"query": map[string]interface{}{
"match": map[string]interface{}{
"title": "test",
},
},
}
if err := json.NewEncoder(&buf).Encode(query); err != nil {
log.Fatalf("Error encoding query: %s", err)
}
// Perform the search request.
res, err = es.Search(
es.Search.WithContext(context.Background()),
es.Search.WithIndex("test"),
es.Search.WithBody(&buf),
es.Search.WithTrackTotalHits(true),
es.Search.WithPretty(),
)
if err != nil {
log.Fatalf("Error getting response: %s", err)
}
defer res.Body.Close()
if res.IsError() {
var e map[string]interface{}
if err := json.NewDecoder(res.Body).Decode(&e); err != nil {
log.Fatalf("Error parsing the response body: %s", err)
} else {
// Print the response status and error information.
log.Fatalf("[%s] %s: %s",
res.Status(),
e["error"].(map[string]interface{})["type"],
e["error"].(map[string]interface{})["reason"],
)
}
}
if err := json.NewDecoder(res.Body).Decode(&r); err != nil {
log.Fatalf("Error parsing the response body: %s", err)
}
// Print the response status, number of results, and request duration.
log.Printf(
"[%s] %d hits; took: %dms",
res.Status(),
int(r["hits"].(map[string]interface{})["total"].(map[string]interface{})["value"].(float64)),
int(r["took"].(float64)),
)
// Print the ID and document source for each hit.
for _, hit := range r["hits"].(map[string]interface{})["hits"].([]interface{}) {
log.Printf(" * ID=%s, %s", hit.(map[string]interface{})["_id"], hit.(map[string]interface{})["_source"])
}
log.Println(strings.Repeat("=", 37))
}
// Client: 8.0.0-SNAPSHOT
// Server: 8.0.0-SNAPSHOT
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// [201 Created] updated; version=1
// [201 Created] updated; version=1
// -------------------------------------
// [200 OK] 2 hits; took: 5ms
// * ID=1, map[title:Test One]
// * ID=2, map[title:Test Two]
// =====================================
As you can see in the example above, the esapi package allows you to call the Elasticsearch APIs in two distinct ways: either by creating a struct, such as IndexRequest, and calling its Do() method with a context and the client, or by calling the Search() function on the client directly, using option functions such as WithIndex(). See more information and examples in the package documentation.
The elastictransport package handles the transfer of data to and from Elasticsearch, including retrying failed requests, keeping a connection pool, discovering cluster nodes, and logging.
Read more about the client internals and usage in the following blog posts:
The esutil package provides convenience helpers for working with the client. At the moment, it provides the esutil.JSONReader() and esutil.BulkIndexer helpers.
The _examples folder contains a number of recipes and comprehensive examples to get you started with the client, including configuration and customization of the client, using a custom certificate authority (CA) for security (TLS), mocking the transport for unit tests, embedding the client in a custom type, building queries, performing requests individually and in bulk, and parsing the responses.
Author: Elastic
Source Code: https://github.com/elastic/go-elasticsearch
License: Apache-2.0 License
This is an old version of the Aerospike Go client (v4.x.x). The newest version, v6, has migrated to Go modules, which required us to put it in the v6 branch. All the latest changes to the library are applied in that branch and documented in the CHANGELOG.
Official Aerospike Client library for Go.
This library is compatible with Go 1.9+ and supports the following operating systems: Linux, Mac OS X (Windows builds are possible, but untested).
Up-to-date documentation is available in the .
You can refer to the test files for idiomatic use cases.
Please refer to CHANGELOG.md for release notes, or if you encounter breaking changes.
The following is a very simple example of CRUD operations in an Aerospike database.
package main
import (
"fmt"
aero "github.com/aerospike/aerospike-client-go"
)
// This is only for this example.
// Please handle errors properly.
func panicOnError(err error) {
if err != nil {
panic(err)
}
}
func main() {
// define a client to connect to
client, err := aero.NewClient("127.0.0.1", 3000)
panicOnError(err)
key, err := aero.NewKey("test", "aerospike", "key")
panicOnError(err)
// define some bins with data
bins := aero.BinMap{
"bin1": 42,
"bin2": "An elephant is a mouse with an operating system",
"bin3": []interface{}{"Go", 2009},
}
// write the bins
err = client.Put(nil, key, bins)
panicOnError(err)
// read it back!
rec, err := client.Get(nil, key)
panicOnError(err)
// delete the key, and check if key exists
existed, err := client.Delete(nil, key)
panicOnError(err)
fmt.Printf("Record existed before delete? %v\n", existed)
}
More examples illustrating the use of the API are located in the examples directory.
Details about the API are available in the docs directory.
Go version v1.12+ is required.
To install the latest stable version of Go, visit http://golang.org/dl/
Aerospike Go client implements the wire protocol, and does not depend on the C client. It is goroutine friendly, and works asynchronously.
Supported operating systems:
To install the library in your GOPATH: go get github.com/aerospike/aerospike-client-go
To update: go get -u github.com/aerospike/aerospike-client-go
Using gopkg.in is also supported: go get -u gopkg.in/aerospike/aerospike-client-go.v1
go run <filename.go>
go build -o <output> <filename.go>
go build -o benchmark tools/benchmark/benchmark.go
We are making every effort to improve the client's performance. In our reference benchmarks, the Go client performs almost as well as the C client.
To read about performance variables, please refer to docs/performance.md
This library is packaged with a number of tests. The tests require the Ginkgo and Gomega libraries.
Before running the tests, you need to update the dependencies:
$ go get .
To run all the test cases with race detection:
$ ginkgo -r -race
A variety of example applications are provided in the examples directory.
A variety of clones of the original tools are provided in the tools directory. They show how to use the library's more advanced features to re-implement the same functionality in a more concise way.
A benchmark utility is provided in the tools/benchmark directory. See tools/benchmark/README.md for details.
Simple API documentation is available in the docs directory. The latest up-to-date docs can be found in .
To build the library for App Engine, build it with the build tag app_engine. Aggregation functionality is not available in this build.
To make the library both flexible and fast, we had to integrate the reflection API (methods with [Get/Put/...]Object names) tightly into the library. If you want to avoid mixing those APIs into your app inadvertently, you can use the build tag as_performance to remove them from the build.
Author: Aerospike
Source Code: https://github.com/aerospike/aerospike-client-go
License: Apache-2.0 license
newsapi
Go client implementation for the NewsAPI.
go get github.com/jellydator/newsapi-go
Create a client using the NewClient function. By default only an API key is required; other parameters can be set using variadic option functions.
client := newsapi.NewClient("apiKey", newsapi.WithHTTPClient(&http.Client{
Timeout: 5 * time.Second,
}))
Everything retrieves all articles based on the provided parameters. Full endpoint documentation can be viewed here.
articles, pageCount, err := client.Everything(context.Background(), newsapi.EverythingParams{
Query: "cryptocurrency",
})
if err != nil {
// handle error
}
// success
TopHeadlines retrieves top-headline articles based on the provided parameters. Full endpoint documentation can be viewed here.
articles, pageCount, err := client.TopHeadlines(context.Background(), newsapi.TopHeadlinesParams{
Query: "cryptocurrency",
})
if err != nil {
// handle error
}
// success
Sources retrieves the available sources based on the provided parameters. Full endpoint documentation can be viewed here.
sources, err := client.Sources(context.Background(), newsapi.SourceParams{
Categories: []newsapi.Category{
newsapi.CategoryBusiness,
newsapi.CategoryScience,
},
})
if err != nil {
// handle error
}
// success
Author: jellydator
Source Code: https://github.com/jellydator/newsapi-go
License: