Changing the last character of a file

I want to continuously write JSON objects to a file. To be able to read it back, I need to wrap them in an array. I don't want to read the whole file just to append to it. So here is what I'm doing now:

comma := []byte(", ")
file, err := os.OpenFile(erp.TransactionsPath, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0666)
if err != nil {
    return err
}
transaction, err := json.Marshal(t)
if err != nil {
    return err
}
transaction = append(transaction, comma...)
file.Write(transaction)

But with this implementation I will need to add the [ ] brackets by hand (or via some script) before reading. How can I add each new object before the closing bracket on every write?

How to build a JSON API with Python

The JSON API specification is a powerful way of enabling communication between client and server. It specifies the structure of the requests and responses sent between the two, using the JSON format.

As a data format, JSON has the advantages of being lightweight and readable. This makes it very easy to work with quickly and productively. The specification is designed to minimise the number of requests and the amount of data that needs sending between client and server.

Here, you can learn how to create a basic JSON API using Python and Flask. Then, the rest of the article will show you how to try out some of the features the JSON API specification has to offer.

Flask is a Python library that provides a 'micro-framework' for web development. It is great for rapid development as it comes with a simple-yet-extensible core functionality.

A really basic example of how to send a JSON-like response using Flask is shown below:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def example():
   return '{"name":"Bob"}'

if __name__ == '__main__':
    app.run()

This article will use two add-ons for Flask: Flask-SQLAlchemy and Flask-REST-JSONAPI.

The big picture

The end goal is to create an API that allows client-side interaction with an underlying database. There will be a couple of layers between the database and the client - a data abstraction layer and a resource manager layer.

Here's an overview of the steps involved:

  1. Define a database using Flask-SQLAlchemy
  2. Create a data abstraction with Marshmallow-JSONAPI
  3. Create resource managers with Flask-REST-JSONAPI
  4. Create URL endpoints and start the server with Flask

This example will use a simple schema describing modern artists and their relationships to different artworks.

Install everything

Before getting started, you'll need to set up the project. This involves creating a workspace and virtual environment, installing the modules required, and creating the main Python and database files for the project.

From the command line create a new directory and navigate inside.

$ mkdir flask-jsonapi-demo
$ cd flask-jsonapi-demo/

It is good practice to create virtual environments for each of your Python projects. You can skip this step, but it is strongly recommended.

$ python -m venv .venv
$ source .venv/bin/activate

Once your virtual environment has been created and activated, you can install the modules needed for this project.

$ pip install flask-rest-jsonapi flask-sqlalchemy

Everything you'll need will be installed as the requirements for these two extensions. This includes Flask itself, and SQLAlchemy.

The next step is to create a Python file and database for the project.

$ touch application.py artists.db

Create the database schema

Here, you will start modifying application.py to define and create the database schema for the project.

Open application.py in your preferred text editor. Begin by importing some modules. For clarity, modules will be imported as you go.

Next, create an object called app as an instance of the Flask class.

After that, use SQLAlchemy to connect to the database file you created. The final step is to define and create a table called artists.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

# Create a new Flask application
app = Flask(__name__)

# Set up SQLAlchemy
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///artists.db'
db = SQLAlchemy(app)

# Define a class for the Artist table
class Artist(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)
    birth_year = db.Column(db.Integer)
    genre = db.Column(db.String)

# Create the table
db.create_all()

Creating an abstraction layer

The next step uses the Marshmallow-JSONAPI module to create a logical data abstraction over the tables just defined.

The reason to create this abstraction layer is simple. It gives you more control over how your underlying data is exposed via the API. Think of this layer as a lens through which the API client can view the underlying data clearly, and only the bits they need to see.

In the code below, the data abstraction layer is defined as a class which inherits from Marshmallow-JSONAPI's Schema class. It will provide access via the API to both single records and multiple records from the artists table.

Inside this block, the Meta class defines some metadata. Specifically, the name of the URL endpoint for interacting with single records will be artist_one, where each artist will be identified by a URL parameter <id>. The name of the endpoint for interacting with many records will be artist_many.

The remaining attributes defined relate to the columns in the artists table. Here, you can control further how each is exposed via the API.

For example, when making POST requests to add new artists to the database, you can make sure the name field is mandatory by setting required=True.

And if for any reason you didn't want the birth_year field to be returned when making GET requests, you can specify so by setting load_only=True.

from marshmallow_jsonapi.flask import Schema
from marshmallow_jsonapi import fields

# Create data abstraction layer
class ArtistSchema(Schema):
    class Meta:
        type_ = 'artist'
        self_view = 'artist_one'
        self_view_kwargs = {'id': '<id>'}
        self_view_many = 'artist_many'

    id = fields.Integer()
    name = fields.Str(required=True)
    birth_year = fields.Integer(load_only=True)
    genre = fields.Str()

Create resource managers and URL endpoints

The final piece of the puzzle is to create a resource manager and corresponding endpoint for each of the routes /artists and /artists/<id>.

Each resource manager is defined as a class that inherits from the Flask-REST-JSONAPI classes ResourceList and ResourceDetail.

Here they take two attributes. schema is used to indicate the data abstraction layer the resource manager uses, and data_layer indicates the session and data model that will be used for the data layer.

Next, define api as an instance of Flask-REST-JSONAPI's Api class, and create the routes for the API with api.route(). This method takes three arguments - the resource manager class, the endpoint name, and the URL path.

The last step is to write a main loop to launch the app in debug mode when the script is run directly. Debug mode is great for development, but it is not suitable for running in production.

# Create resource managers and endpoints

from flask_rest_jsonapi import Api, ResourceDetail, ResourceList

class ArtistMany(ResourceList):
    schema = ArtistSchema
    data_layer = {'session': db.session,
                  'model': Artist}

class ArtistOne(ResourceDetail):
    schema = ArtistSchema
    data_layer = {'session': db.session,
                  'model': Artist}

api = Api(app)
api.route(ArtistMany, 'artist_many', '/artists')
api.route(ArtistOne, 'artist_one', '/artists/<int:id>')

# main loop to run app in debug mode
if __name__ == '__main__':
    app.run(debug=True)

Make GET and POST requests

Now you can start using the API to make HTTP requests. This could be from a web browser, or from a command line tool like curl, or from within another program (e.g., a Python script using the Requests library).

To launch the server, run the application.py script with:

$ python application.py

In your browser, navigate to http://localhost:5000/artists.  You will see a JSON output of all the records in the database so far. Except, there are none.

To start adding records to the database, you can make a POST request. One way of doing this is from the command line using curl. Alternatively, you could use a tool like Insomnia, or perhaps code up a simple HTML user interface that posts data using a form.

With curl, from the command line:

curl -i -X POST -H 'Content-Type: application/json' -d '{"data":{"type":"artist", "attributes":{"name":"Salvador Dali", "birth_year":1904, "genre":"Surrealism"}}}' http://localhost:5000/artists

Now if you navigate to http://localhost:5000/artists, you will see the record you just added. If you were to add more records, they would all show here as well, as this URL path calls the artist_many endpoint.

To view just a single artist by their id number, you can navigate to the relevant URL. For example, to see the first artist, try http://localhost:5000/artists/1.
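
Earlier it was mentioned that you can also call the API from within another program. As a rough illustration, here is a minimal sketch using the Python Requests library (an assumption on my part; any HTTP client will do), with the server from application.py running locally:

# request_demo.py - a rough sketch using the Requests library (pip install requests)
# Assumes the server from application.py is running at http://localhost:5000
import requests

# POST a new artist (mirrors the curl example above)
new_artist = {
    "data": {
        "type": "artist",
        "attributes": {"name": "Salvador Dali", "birth_year": 1904, "genre": "Surrealism"}
    }
}
response = requests.post("http://localhost:5000/artists",
                         json=new_artist,
                         headers={"Content-Type": "application/json"})
print(response.status_code)  # 201 when the record is created

# GET all artists, then a single artist by id
print(requests.get("http://localhost:5000/artists").json())
print(requests.get("http://localhost:5000/artists/1").json())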

Filtering and sorting

One of the neat features of the JSON API specification is the ability to return the response in more useful ways by defining some parameters in the URL. For instance, you can sort the results according to a chosen field, or filter based on some criteria.

Flask-REST-JSONAPI comes with this built in.

To sort artists in order of birth year, just navigate to http://localhost:5000/artists?sort=birth_year. In a web application, this would save you from needing to sort results on the client side, which could be costly in terms of performance and therefore impact the user experience.

Filtering is also easy. You append to the URL the criteria you wish to filter on, contained in square brackets. There are three pieces of information to include:

  • "name" - the field you are filtering by (e.g., birth_year)
  • "op" - the filter operation ("equal to", "greater than", "less than" etc.)
  • "val" - the value to filter against (e.g., 1900)

For example, the URL below retrieves artists whose birth year is greater than 1900:

http://localhost:5000/artists?filter=[{"name":"birth_year","op":"gt","val":1900}]

This functionality makes it much easier to retrieve only relevant information when calling the API. This is valuable for improving performance, especially when retrieving potentially large volumes of data over a slow connection.
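
If you are calling the API from Python rather than the browser, the filter can be passed programmatically. A small sketch, again assuming the Requests library and a locally running server:

import json
import requests

# Build the filter as a JSON string and let Requests URL-encode it as a query parameter
filters = json.dumps([{"name": "birth_year", "op": "gt", "val": 1900}])
response = requests.get("http://localhost:5000/artists",
                        params={"sort": "birth_year", "filter": filters})
print(response.json())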

Pagination

Another feature of the JSON API specification that aids performance is pagination. This is when large responses are sent over several "pages", rather than all in one go. You can control the page size and the number of the page you request in the URL.

So, for example, you could receive 100 results over 10 pages instead of loading all 100 in one go. The first page would contain results 1-10, the second page would contain results 11-20, and so on.

To specify the number of results you want to receive per page, you can add the parameter ?page[size]=X to the URL, where X is the number of results. Flask-REST-JSONAPI uses 30 as the default page size.

To request a given page number, you can add the parameter ?page[number]=X, where X is the page number. You can combine both parameters as shown below:

http://localhost:5000/artists?page[size]=2&page[number]=2

This URL sets the page size to two results per page, and asks for the second page of results. This would return the third and fourth results from the overall response.

Relationships

Almost always, data in one table will be related to data stored in another. For instance, if you have a table of artists, chances are you might also want a table of artworks. Each artwork is related to the artist who created it.

The JSON API specification allows you to work with relational data easily, and Flask-REST-JSONAPI lets you take advantage of this. Here, this will be demonstrated by adding an artworks table to the database, and including relationships between artist and artwork.

To implement the artworks example, it will be necessary to make a few changes to the code in application.py.

First, make a couple of extra imports, then create a new table which relates each artwork to an artist:

from marshmallow_jsonapi.flask import Relationship
from flask_rest_jsonapi import ResourceRelationship

# Define the Artwork table
class Artwork(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String)
    artist_id = db.Column(db.Integer, 
        db.ForeignKey('artist.id'))
    artist = db.relationship('Artist',
        backref=db.backref('artworks'))

Next, rewrite the abstraction layer:

# Create data abstraction 
class ArtistSchema(Schema):
    class Meta:
        type_ = 'artist'
        self_view = 'artist_one'
        self_view_kwargs = {'id': '<id>'}
        self_view_many = 'artist_many'

    id = fields.Integer()
    name = fields.Str(required=True)
    birth_year = fields.Integer(load_only=True)
    genre = fields.Str()
    artworks = Relationship(self_view = 'artist_artworks',
        self_view_kwargs = {'id': '<id>'},
        related_view = 'artwork_many',
        many = True,
        schema = 'ArtworkSchema',
        type_ = 'artwork')

class ArtworkSchema(Schema):
    class Meta:
        type_ = 'artwork'
        self_view = 'artwork_one'
        self_view_kwargs = {'id': '<id>'}
        self_view_many = 'artwork_many'

    id = fields.Integer()
    title = fields.Str(required=True)
    artist_id = fields.Integer(required=True)

This defines an abstraction layer for the artwork table, and adds a relationship between artist and artwork to the ArtistSchema class.

Next, define new resource managers for accessing artworks many at once and one at a time, and also for accessing the relationships between artist and artwork.

class ArtworkMany(ResourceList):
    schema = ArtworkSchema
    data_layer = {'session': db.session,
                  'model': Artwork}

class ArtworkOne(ResourceDetail):
    schema = ArtworkSchema
    data_layer = {'session': db.session,
                  'model': Artwork}

class ArtistArtwork(ResourceRelationship):
    schema = ArtistSchema
    data_layer = {'session': db.session,
                  'model': Artist}

Finally, add some new endpoints:

api.route(ArtworkOne, 'artwork_one', '/artworks/<int:id>')
api.route(ArtworkMany, 'artwork_many', '/artworks')
api.route(ArtistArtwork, 'artist_artworks',
    '/artists/<int:id>/relationships/artworks')

Run application.py and try posting some data from the command line via curl:

curl -i -X POST -H 'Content-Type: application/json' -d '{"data":{"type":"artwork", "attributes":{"title":"The Persistence of Memory", "artist_id":1}}}' http://localhost:5000/artworks

This will create an artwork related to the artist with id=1.

In the browser, navigate to http://localhost:5000/artists/1/relationships/artworks. This should show the artworks related to the artist with id=1. This saves you from writing a more complex URL with parameters to filter artworks by their artist_id field. You can quickly list all the relationships between a given artist and their artworks.

Another feature is the ability to include related results in the response when calling the artist_one endpoint:

http://localhost:5000/artists/1?include=artworks

This will return the usual response for the artists endpoint, and also results for each of that artist's artworks.

Sparse Fields

One last feature worth mentioning - sparse fields. When working with large data resources with many complex relationships, response sizes can grow very quickly. It is helpful to retrieve only the fields you are interested in.

The JSON API specification lets you do this by adding a fields parameter to the URL. For example, the URL below gets the response for a given artist and their related artworks. However, instead of returning all the fields for each artwork, it returns only the title.

http://localhost:5000/artists/1?include=artworks&fields[artwork]=title

This is again very helpful for improving performance, especially over slow connections. As a general rule, you should only make requests to and from the server with the minimal amount of data required.

Final remarks

The JSON API specification is a very useful framework for sending data between server and client in a clean, flexible format. This article has provided an overview of what you can do with it, with a worked example in Python using the Flask-REST-JSONAPI library.

So what will you do next? There are many possibilities. The example in this article has been a simple proof-of-concept, with just two tables and a single relationship between them. You can develop an application as sophisticated as you like, and create a powerful API to interact with it using all the tools provided here.

Thanks for reading, and keep coding in Python!

Understanding Protocol Buffers - Will they replace JSON?

In this article, we will see what a protocol buffer is and how it works, and ask whether protocol buffers will replace JSON.

What are Protocol Buffers?

Protocol buffers are a flexible, efficient way of serializing structured data: you define how you want your data to be structured once, then you use specially generated source code to write and read your structured data.

In other words, it is a way of encoding structured data in an efficient and extensible format.

Protocol Buffers are developed and backed by Google. Other companies have their own equivalents: Facebook has Apache Thrift, and Microsoft has Microsoft Bond. Protocol Buffers also underpin a concrete Remote Procedure Call (RPC) framework, gRPC.

Protocol Buffers vs JSON

Before we compare protocol buffers with JSON (JavaScript Object Notation), we will see how JSON is used in the current tech world.

Let’s consider an application life cycle when a user clicks a button in an application.

The browser makes an API request to the Server.

The Server requests data from the Database.

Once the database returns the data to the application server, the server sends it to the browser to display.

Here, the browser and the server communicate using JSON.

JSON is based on key-value pairs. JSON has different value types, such as:

  • Array
  • Boolean
  • Number
  • Object
  • String

Above all, each entity in JSON should have a key and a value associated with it.

Now, let’s come back to the topic of why we need Protobuf over JSON and what kind of benefits we get using protocol buffers.

Why Protocol buffers over JSON

Firstly, protocol buffers have more data types than JSON. Protocol buffers are not only a message format; they are also a set of rules and tools that define how messages are exchanged.

Simple Analogy

Let’s say that you need to travel from location A to location B every day.

Thousands of other people travel between these two locations at the same time every day.

If you travel in an SUV, you take up more space on the road and traffic delays your journey. If you travel by bike instead, you reach the destination in far less time and cause less traffic.

Let’s relate this to protocol buffers:

Location A to B - the sender and the receiver

Roads - the network bandwidth

Data - the vehicle that you are traveling in

Sending data between a source and a destination through a network is then like:

XML -> traveling by truck

JSON -> traveling by SUV (better than a truck)

Protobuf -> traveling by bike (the best way to reduce network bandwidth and allow more requests to flow through the network)

How Do Protocol Buffers work?

Protocol buffers work through binary serialization. The sender encodes the data using an agreed schema and streams it to the receiver as binary data.

The receiver decodes the binary stream with the same schema to recover the message.

Implementing Protocol Buffer

As mentioned before, protocol buffers work with a schema to transmit the message.

A schema is a set of fields, each identified by a name and a numbered tag.

Fields can be marked with keywords such as required, optional and repeated. A schema also allows messages to be extensible.

For example, the UserList message below is composed of repeated User messages.

message UserList {
  repeated User user = 1;
}

message User{
  required string name = 1;
  required string email = 2;
}

  • You specify how you want the information you are serializing to be structured by defining protocol buffer message types in .proto files.

  • That is to say, each protocol buffer message is a logical record of information, containing a series of name-value pairs.

The above is a very basic example of a .proto file that defines messages containing information about a User model.

As you can see, the message format is simple – each message type has one or more uniquely numbered fields, and each field has a name and a value type, where value types can be numbers, booleans, strings, etc.

Once you’ve defined your messages, you run the protocol buffer compiler for your application’s language on your .proto file to generate data access classes.
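
To make this concrete, here is a rough Python sketch. It assumes the messages above live in a file called user.proto and have been compiled with protoc (the file and variable names are illustrative):

# Assumes user.proto (containing the messages above) was compiled with:
#   protoc --python_out=. user.proto
# which generates the data access module user_pb2.py
import user_pb2

# Build a message using the generated class
user = user_pb2.User(name="Alice", email="alice@example.com")

# Encode it to a compact binary string - this is what travels over the network
payload = user.SerializeToString()

# The receiver decodes the bytes with the same schema to recover the message
decoded = user_pb2.User()
decoded.ParseFromString(payload)
print(decoded.name, decoded.email)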

This repository contains the implementation of a to-do application with Protobufs in a NodeJS application.

9 Reasons Why We Switched from Python to Go

Switching to a new language is always a big step, especially when only one of your team members has prior experience with that language. Early this year, we switched our primary programming language from Python to Go. This post will explain some of the reasons why we decided to leave Python behind and make the switch to Go.

Reasons to Use Go

Reason 1 — Performance

Go is extremely fast. The performance is similar to that of Java or C++. For our use case, Go is typically 30 times faster than Python. Here’s a small benchmark game comparing Go vs Java.

Reason 2 — Language Performance Matters

For many applications, the programming language is simply the glue between the app and the database. The performance of the language itself usually doesn’t matter much.

Stream, however, is an API provider powering the feed infrastructure for 500 companies and more than 200 million end users. We’ve been optimizing Cassandra, PostgreSQL, Redis, etc. for years, but eventually, you reach the limits of the language you’re using.

Python is a great language but its performance is pretty sluggish for use cases such as serialization/deserialization, ranking and aggregation. We frequently ran into performance issues where Cassandra would take 1ms to retrieve the data and Python would spend the next 10ms turning it into objects.

Reason 3 — Developer Productivity & Not Getting Too Creative

Have a look at this little snippet of Go code from the How I Start Go tutorial. (This is a great tutorial and a good starting point to pick up a bit of Go.)

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type openWeatherMap struct{}

func (w openWeatherMap) temperature(city string) (float64, error) {
	resp, err := http.Get("http://api.openweathermap.org/data/2.5/weather?APPID=YOUR_API_KEY&q=" + city)
	if err != nil {
		return 0, err
	}

	defer resp.Body.Close()

	var d struct {
		Main struct {
			Kelvin float64 `json:"temp"`
		} `json:"main"`
	}

	if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
		return 0, err
	}

	log.Printf("openWeatherMap: %s: %.2f", city, d.Main.Kelvin)
	return d.Main.Kelvin, nil
}

If you’re new to Go, there’s not much that will surprise you when reading that little code snippet. It showcases multiple assignments, data structures, pointers, formatting and a built-in HTTP library.

When I first started programming I always loved using Python’s more advanced features. Python allows you to get pretty creative with the code you’re writing. For instance, you can:

  • Use MetaClasses to self-register classes upon code initialization (a rough sketch follows this list)
  • Swap out True and False
  • Add functions to the list of built-in functions
  • Overload operators via magic methods
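
For instance, here is a rough, purely illustrative sketch of the first trick (not code from our codebase): a metaclass that registers every subclass the moment it is defined.

# Purely illustrative: metaclass-based self-registration of classes
REGISTRY = {}

class AutoRegister(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # don't register the abstract base class itself
            REGISTRY[name] = cls
        return cls

class Handler(metaclass=AutoRegister):
    pass

class JsonHandler(Handler):
    pass

print(REGISTRY)  # {'JsonHandler': <class '__main__.JsonHandler'>}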

These features are fun to play around with but, as most programmers will agree, they often make the code harder to understand when reading someone else’s work.

Go forces you to stick to the basics. This makes it very easy to read anyone’s code and immediately understand what’s going on.

Note: How “easy” it is really depends on your use case, of course. If you want to create a basic CRUD API I’d still recommend Django + DRF, or Rails.

Reason 4 — Concurrency & Channels

As a language, Go tries to keep things simple. It doesn’t introduce many new concepts. The focus is on creating a simple language that is incredibly fast and easy to work with. The only area where it does get innovative is goroutines and channels. (To be 100% correct the concept of CSP started in 1977, so this innovation is more of a new approach to an old idea.) Goroutines are Go’s lightweight approach to threading, and channels are the preferred way to communicate between goroutines.

Goroutines are very cheap to create and only take a few KBs of additional memory. Because Goroutines are so light, it is possible to have hundreds or even thousands of them running at the same time.

You can communicate between goroutines using channels. The Go runtime handles all the complexity. The goroutines and channel-based approach to concurrency makes it very easy to use all available CPU cores and handle concurrent IO — all without complicating development. Compared to Python/Java, running a function on a goroutine requires minimal boilerplate code. You simply prepend the function call with the keyword “go”:

package main

import (
	"fmt"
	"time"
)

func say(s string) {
	for i := 0; i < 5; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(s)
	}

}

func main() {
	go say("world")
	say("hello")
}

https://tour.golang.org/concurrency/1

Go’s approach to concurrency is very easy to work with. It’s an interesting approach compared to Node where the developer has to pay close attention to how asynchronous code is handled.

Another great aspect of concurrency in Go is the race detector. This makes it easy to figure out if there are any race conditions within your asynchronous code.

Here are a few good resources to get started with Go and channels:

Reason 5 — Fast Compile Time

Our largest microservice written in Go currently takes 6 seconds to compile. Go’s fast compile times are a major productivity win compared to languages like Java and C++, which are famous for sluggish compilation speeds. I like sword fighting, but it’s even nicer to get things done while I still remember what the code is supposed to do.

Reason 6 — The Ability to Build a Team

First of all, let’s start with the obvious: there are not as many Go developers compared to older languages like C++ and Java. According to StackOverflow, 38% of developers know Java, 19.3% know C++ and only 4.6% know Go. GitHub data shows a similar trend: Go is more widely used than languages such as Erlang, Scala and Elixir, but less popular than Java and C++.

Fortunately, Go is a very simple and easy to learn language. It provides the basic features you need and nothing else. The new concepts it introduces are the “defer” statement and built-in management of concurrency with “go routines” and channels. (For the purists: Go isn’t the first language to implement these concepts, just the first to make them popular.) Any Python, Elixir, C++, Scala or Java dev that joins a team can be effective at Go within a month because of its simplicity.

We’ve found it easier to build a team of Go developers compared to many other languages. If you’re hiring people in competitive ecosystems like Boulder and Amsterdam this is an important benefit.

Reason 7 — Strong Ecosystem

For a team of our size (~20 people) the ecosystem matters. You simply can’t create value for your customers if you have to reinvent every little piece of functionality. Go has great support for the tools we use. Solid libraries were already available for Redis, RabbitMQ, PostgreSQL, Template parsing, Task scheduling, Expression parsing and RocksDB.

Go’s ecosystem is a major win compared to other newer languages like Rust or Elixir. It’s of course not as good as languages like Java, Python or Node, but it’s solid and for many basic needs you’ll find high-quality packages already available.

Reason 8 — Gofmt, Enforced Code Formatting

Let’s start with what is Gofmt? And no, it’s not a swear word. Gofmt is an awesome command line utility, built into the Go compiler for formatting your code. In terms of functionality it’s very similar to Python’s autopep8. While the show Silicon Valley portrays otherwise, most of us don’t really like to argue about tabs vs spaces. It’s important that formatting is consistent, but the actual formatting standard doesn’t really matter all that much. Gofmt avoids all of this discussion by having one official way to format your code.

Reason 9 — gRPC and Protocol Buffers

Go has first-class support for protocol buffers and gRPC. These two tools work very well together for building microservices which need to communicate via RPC. You only need to write a manifest where you define the RPC calls that can be made and what arguments they take. Both server and client code are then automatically generated from this manifest. The resulting code is fast, has a very small network footprint and is easy to use.

From the same manifest you can even generate client code for many different languages, such as C++, Java, Python and Ruby. No more ambiguous REST endpoints for internal traffic, where you have to write almost the same client and server code every time.
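
As a purely hypothetical sketch of what the generated client code looks like from Python (the service, RPC and module names below are invented for illustration), calling an RPC is roughly:

# Hypothetical example: assumes a user_service.proto manifest defining
#   service UserService { rpc GetUser (UserRequest) returns (User); }
# compiled with grpcio-tools, which generates user_service_pb2.py and user_service_pb2_grpc.py
import grpc
import user_service_pb2
import user_service_pb2_grpc

# Open a channel to the server and create the generated client stub
channel = grpc.insecure_channel("localhost:50051")
stub = user_service_pb2_grpc.UserServiceStub(channel)

# The stub exposes one typed method per RPC declared in the manifest
user = stub.GetUser(user_service_pb2.UserRequest(id=1))
print(user.name)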

Disadvantages of Using Golang

Disadvantage 1 — Lack of Frameworks

Go doesn’t have a single dominant framework like Rails for Ruby, Django for Python or Laravel for PHP. This is a topic of heated debate within the Go community, as many people advocate that you shouldn’t use a framework to begin with. I totally agree that this is true for some use cases. However, if someone wants to build a simple CRUD API they will have a much easier time with Django/DRF, Rails, Laravel or Phoenix.

Disadvantage 2 — Error Handling

Go handles errors by simply returning an error from a function and expecting your calling code to handle the error (or to return it up the calling stack). While this approach works, it’s easy to lose track of what went wrong, which makes it harder to provide a meaningful error to your users. The errors package solves this problem by allowing you to add context and a stack trace to your errors.

Another issue is that it’s easy to forget to handle an error by accident. Static analysis tools like errcheck and megacheck are handy to avoid making these mistakes.

While these workarounds work well, it doesn’t feel quite right. You’d expect proper error handling to be supported by the language.

Disadvantage 3 — Package Management

Go’s package management is by no means perfect. By default, it doesn’t have a way to specify a specific version of a dependency and there’s no way to create reproducible builds. Python, Node and Ruby all have better systems for package management. However, with the right tools, Go’s package management works quite well.

You can use Dep to manage your dependencies to allow specifying and pinning versions. Apart from that, we’ve contributed an open-source tool called VirtualGo which makes it easier to work on multiple projects written in Go.

Python vs Go

One interesting experiment we conducted was taking our ranked feed functionality in Python and rewriting it in Go. Have a look at this example of a ranking method:

{
	"functions": {
		"simple_gauss": {
			"base": "decay_gauss",
			"scale": "5d",
			"offset": "1d",
			"decay": "0.3"
		},
		"popularity_gauss": {
			"base": "decay_gauss",
			"scale": "100",
			"offset": "5",
			"decay": "0.5"
		}
	},
	"defaults": {
		"popularity": 1
	},
	"score": "simple_gauss(time)*popularity"
}

Both the Python and Go code need to do the following to support this ranking method:

  1. Parse the expression for the score. In this case, we want to turn this string “simple_gauss(time)*popularity” into a function that takes an activity as input and returns a score as output.
  2. Create partial functions based on the JSON config. For example, we want “simple_gauss” to call “decay_gauss” with a scale of 5 days, offset of 1 day and a decay factor of 0.3.
  3. Parse the “defaults” configuration so you have a fallback if a certain field is not defined on an activity.
  4. Use the function from step 1 to score all activities in the feed.

Developing the Python version of the ranking code took roughly 3 days. That includes writing the code, unit tests and documentation. Next, we spent approximately 2 weeks optimizing the code. One of the optimizations was translating the score expression (simple_gauss(time)*popularity) into an abstract syntax tree. We also implemented caching logic which pre-computed the score for certain times in the future.
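
To make the AST step concrete, here is a purely illustrative sketch (not Stream’s actual ranking code) of parsing such a score expression with Python’s built-in ast module and evaluating it:

# Illustrative only: parse a score expression into an AST, compile it once,
# then evaluate it against different activities
import ast

expr = ast.parse("simple_gauss(time)*popularity", mode="eval")
code = compile(expr, "<score>", "eval")

# Stand-in scoring function and activity fields
env = {"simple_gauss": lambda t: 0.5, "time": 3, "popularity": 2}
print(eval(code, env))  # 1.0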

In contrast, developing the Go version of this code took roughly 4 days. The performance didn’t require any further optimization. So while the initial bit of development was faster in Python, the Go based version ultimately required substantially less work from our team. As an added benefit, the Go code performed roughly 40 times faster than our highly-optimized Python code.

Now, this is just a single example of the performance gains we’ve experienced by switching to Go. It is, of course, comparing apples to oranges:

  • The ranking code was my first project in Go
  • The Go code was built after the Python code, so the use case was better understood
  • The Go library for expression parsing was of exceptional quality

Your mileage will vary. Some other components of our system took substantially more time to build in Go compared to Python. As a general trend, we see that developing Go code takes slightly more effort. However, we spend much less time optimizing the code for performance.

Elixir vs Go — The Runner Up

Another language we evaluated is Elixir. Elixir is built on top of the Erlang virtual machine. It’s a fascinating language and we considered it since one of our team members has a ton of experience with Erlang.

For our use cases, we noticed that Go’s raw performance is much better. Both Go and Elixir will do a great job serving thousands of concurrent requests. However, if you look at individual request performance, Go is substantially faster for our use case. Another reason why we chose Go over Elixir was the ecosystem. For the components we required, Go had more mature libraries whereas, in many cases, the Elixir libraries weren’t ready for production usage. It’s also harder to train/find developers to work with Elixir.

These reasons tipped the balance in favor of Go. The Phoenix framework for Elixir looks awesome though and is definitely worth a look.

Conclusion

Go is a very performant language with great support for concurrency. It is almost as fast as languages like C++ and Java. While it does take a bit more time to build things using Go compared to Python or Ruby, you’ll save a ton of time spent on optimizing the code.

We have a small development team at Stream powering the feeds for over 200 million end users. Go’s combination of a great ecosystem, easy onboarding for new developers, fast performance, solid support for concurrency and a productive programming environment make it a great choice.

Stream still leverages Python for our dashboard, site and machine learning for personalized feeds. We won’t be saying goodbye to Python anytime soon, but going forward all performance-intensive code will be written in Go.

Json Javascript database for Node.js, Electron and Browser

lowdb is a small local JSON database powered by Lodash (supports Node, Electron and the Browser)

Install

npm install lowdb

Alternatively, if you're using yarn

yarn add lowdb

A UMD build is also available on unpkg for testing and quick prototyping:

<script src="https://unpkg.com/lodash/lodash.min.js"></script>
<script src="https://unpkg.com/lowdb/dist/low.min.js"></script>
<script src="https://unpkg.com/lowdb/dist/LocalStorage.min.js"></script>
<script>
  var adapter = new LocalStorage('db')
  var db = low(adapter)
</script>

How to use LowDB

const low = require('lowdb')
const FileSync = require('lowdb/adapters/FileSync')

const adapter = new FileSync('db.json')
const db = low(adapter)

// Set some defaults (required if your JSON file is empty)
db.defaults({ posts: [], user: {}, count: 0 })
  .write()

// Add a post
db.get('posts')
  .push({ id: 1, title: 'lowdb is awesome'})
  .write()

// Set a user using Lodash shorthand syntax
db.set('user.name', 'typicode')
  .write()
  
// Increment count
db.update('count', n => n + 1)
  .write()

Data is saved to db.json

{
  "posts": [
    { "id": 1, "title": "lowdb is awesome"}
  ],
  "user": {
    "name": "typicode"
  },
  "count": 1
}

You can use any of the powerful lodash functions, like _.get and _.find with shorthand syntax.

// For performance, use .value() instead of .write() if you're only reading from db
db.get('posts')
  .find({ id: 1 })
  .value()

Lowdb is perfect for CLIs, small servers, Electron apps and npm packages in general.

It supports Node and the browser and uses the lodash API, so it's very simple to learn. Actually, if you know Lodash, you already know how to use lowdb.

Important: lowdb doesn't support Cluster and may have issues with very large JSON files (~200MB).

API

low(adapter)

Returns a lodash chain with additional properties and functions described below.

db.[...].write() and db.[...].value()

write() executes the chain, persists the database via the adapter, and returns the result of the chain.

On the other hand, value() is just _.prototype.value() and should be used to execute a chain that doesn't change database state.

db.set('user.name', 'typicode')
  .write()

Please note that db.[...].write() is syntactic sugar and equivalent to

db.set('user.name', 'typicode')
  .value()

db.write()

db._

Database lodash instance. Use it to add your own utility functions or third-party mixins like underscore-contrib or lodash-id.

db._.mixin({
  second: function(array) {
    return array[1]
  }
})

db.get('posts')
  .second()
  .value()

db.getState()

Returns database state.

db.getState() // { posts: [ ... ] }

db.setState(newState)

Replaces database state.

const newState = {}
db.setState(newState)

db.write()

Persists database using adapter.write (depending on the adapter, may return a promise).

// With lowdb/adapters/FileSync
db.write()
console.log('State has been saved')

// With lowdb/adapters/FileAsync
db.write()
  .then(() => console.log('State has been saved'))

db.read()

Reads the source using adapter.read (depending on the adapter, may return a promise).

// With lowdb/FileSync
db.read()
console.log('State has been updated')

// With lowdb/FileAsync
db.read()
  .then(() => console.log('State has been updated'))

Adapters API

Please note this only applies to adapters bundled with Lowdb. Third-party adapters may have different options.

For convenience, FileSync, FileAsync and LocalStorage accept the following options:

  • defaultValue if file doesn't exist, this value will be used to set the initial state (default: {})
  • serialize/deserialize functions used before writing and after reading (default: JSON.stringify and JSON.parse)

const adapter = new FileSync('array.yaml', {
  defaultValue: [],
  serialize: (array) => toYamlString(array),
  deserialize: (string) => fromYamlString(string)
})

Guide

How to query

With lowdb, you get access to the entire lodash API, so there are many ways to query and manipulate data. Here are a few examples to get you started.

Please note that data is returned by reference; this means that modifications to returned objects may change the database. To avoid such behaviour, you need to use .cloneDeep().

Also, the execution of methods is lazy, that is, execution is deferred until .value() or .write() is called.

Reading from existing JSON file

If you are reading from a file adapter, the path is relative to execution path (CWD) and not to your code.

my_project/
  src/
    my_example.js
  db.json 

So then you read it like this:

// file src/my_example.js
const adapter = new FileSync('db.json')
const db = low(adapter)

// With lowdb/FileAsync
db.read()
  .then(() => console.log('Content of my_project/db.json is loaded'))

Examples

Check if posts exists.

db.has('posts')
  .value()

Set posts.

db.set('posts', [])
  .write()

Sort the top five posts.

db.get('posts')
  .filter({published: true})
  .sortBy('views')
  .take(5)
  .value()

Get post titles.

db.get('posts')
  .map('title')
  .value()

Get the number of posts.

db.get('posts')
  .size()
  .value()

Get the title of first post using a path.

db.get('posts[0].title')
  .value()

Update a post.

db.get('posts')
  .find({ title: 'low!' })
  .assign({ title: 'hi!'})
  .write()

Remove posts.

db.get('posts')
  .remove({ title: 'low!' })
  .write()

Remove a property.

db.unset('user.name')
  .write()

Make a deep clone of posts.

db.get('posts')
  .cloneDeep()
  .value()

How to use id based resources

Being able to get data using an id can be quite useful, particularly in servers. To add id-based resources support to lowdb, you have 2 options.

shortid is more minimalist and returns a unique id that you can use when creating resources.

const shortid = require('shortid')

const postId = db
  .get('posts')
  .push({ id: shortid.generate(), title: 'low!' })
  .write()
  .id

const post = db
  .get('posts')
  .find({ id: postId })
  .value()

lodash-id provides a set of helpers for creating and manipulating id-based resources.

const lodashId = require('lodash-id')
const FileSync = require('lowdb/adapters/FileSync')

const adapter = new FileSync('db.json')
const db = low(adapter)

db._.mixin(lodashId)

// We need to set some default values if the collection does not exist yet
// We can also store a reference to our collection
const collection = db
  .defaults({ posts: [] })
  .get('posts')

// Insert a new post...
const newPost = collection
  .insert({ title: 'low!' })
  .write()

// ...and retrieve it using its id
const post = collection
  .getById(newPost.id)
  .value()

How to create custom adapters

low() accepts a custom Adapter, so you can save your data to virtually any storage using any format.

class MyStorage {
  constructor() {
    // ...
  }

  read() {
    // Should return data (object or array) or a Promise
  }

  write(data) {
    // Should return nothing or a Promise
  }
}

const adapter = new MyStorage(args)
const db = low(adapter)

See src/adapters for examples.

How to encrypt data

FileSync, FileAsync and LocalStorage accept custom serialize and deserialize functions. You can use them to add encryption logic.

const adapter = new FileSync('db.json', {
  serialize: (data) => encrypt(JSON.stringify(data)),
  deserialize: (data) => JSON.parse(decrypt(data))
})

Changelog

See changes for each version in the release notes.

Limits

Lowdb is a convenient method for storing data without setting up a database server. It is fast enough and safe to be used as an embedded database.

However, if you seek high performance and scalability more than simplicity, you should probably stick to traditional databases like MongoDB.

Source Code

https://github.com/typicode/lowdb

How to create JSON String in Javascript

JavaScript's JSON.stringify() converts a JavaScript value into a JSON string, optionally replacing values if a replacer function is specified, or including only the specified properties if a replacer array is specified.

Javascript JSON.stringify()

JSON.stringify() is a built-in function that lets us take a JavaScript object or array and create a JSON string out of it.

JSON.stringify() converts the value to JSON notation representing it:

If a value has a toJSON() method, that method is responsible for defining what data will be serialized.

Number, Boolean, and String objects are converted to their corresponding primitive values during stringification, in accordance with the traditional conversion semantics.

undefined, functions, and symbols are not valid JSON values.

If any such values are encountered during conversion, they are either omitted (when found in an object) or changed to null (when found in an array).

JSON.stringify() can return undefined when passed “pure” values like JSON.stringify(function(){}) or JSON.stringify(undefined).

All Symbol-keyed properties are ignored entirely, even when using a replacer function.
Instances of Date implement a toJSON() function that returns a string (the same as date.toISOString()), so they are treated as strings.

The numbers Infinity and NaN, as well as null, are all considered null.
All other object instances (including Map, Set, WeakMap, and WeakSet) have only their enumerable properties serialized.

Syntax
JSON.stringify(value, replacer, space)


It accepts three parameters.

  1. value: It is the value that is to be converted into a JSON string.

  2. replacer: It is an optional parameter. This parameter can be an altering function or an array used as a selection filter for the stringification. If this value is empty or null, all properties of the object are included in the resulting string.

  3. space: It is also an optional parameter. The space argument is used to control spacing in the final string generated by JSON.stringify(). It can be a number or a string: if it is a number, the final string is indented by that many spaces; if it is a string, that string (up to 10 characters of it) is used for indentation.

JSON.stringify Example

Let’s convert a JavaScript object to a string.

// app.js

let obj = { name: "Krunal", age: 27, city: "Rajkot" };
let jsonString = JSON.stringify(obj);
console.log(jsonString);

See the output.


➜  es git:(master) ✗ node app
{"name":"Krunal","age":27,"city":"Rajkot"}
➜  es git:(master) ✗

The jsonString is now a string, and ready to be sent to a server.

Stringify JavaScript Array

In the above example, we stringified a JavaScript object.

Let’s stringify a JavaScript array.

// app.js

let arr = ['Millie Bobby Brown', 'Finn Wolfhard', 'Noah Schnapp'];
let jsonString = JSON.stringify(arr);
console.log(jsonString);

See the output.


➜  es git:(master) ✗ node app
["Millie Bobby Brown","Finn Wolfhard","Noah Schnapp"]
➜  es git:(master) ✗


Stringify JavaScript String

Let’s pass the string and see the output.


// app.js

let str = 'Eleven character in Stranger Things played by Millie';
let jsonString = JSON.stringify(str);
console.log(jsonString);

See the output.

➜  es git:(master) ✗ node app
"Eleven character in Stranger Things played by Millie"
➜  es git:(master) ✗

Stringify JavaScript Boolean, Integer

See the following code.

// app.js

let x = 11;
let y = true;

let jsonX = JSON.stringify(x);
let jsonY = JSON.stringify(y);

console.log(jsonX);
console.log(jsonY);

See the output.


➜  es git:(master) ✗ node app
11
true
➜  es git:(master) ✗

Stringify JavaScript Null, NaN, undefined

See the following code example.


// app.js

let x = null;
let y = undefined;
let z = NaN;
let jsonX = JSON.stringify(x);
let jsonY = JSON.stringify(y);
let jsonZ = JSON.stringify(z);
console.log(jsonX);
console.log(jsonY);
console.log(jsonZ);

See the output.

➜  es git:(master) ✗ node app
null
undefined
null
➜  es git:(master) ✗

JSON.stringify replacer parameter

The replacer parameter can be either a function or an array.

As a function, it takes two parameters: the key and the value being stringified.

The object in which the key was found is provided as the replacer’s this parameter.
Initially, the replacer function is called with an empty string as the key, representing the object being stringified.

It is then called for each property of the object or array being stringified.

It should return the value that should be added to the JSON string, as follows:

  1. If you return a Number, the string corresponding to that number is used as the value for the property when added to the JSON string.

  2. If you return a String, that string is used as the property’s value when adding it to the JSON string.

  3. If you return a Boolean, “true” or “false” is used as the property’s value, as appropriate, when adding it to the JSON string.

  4. If you return null, null will be added to the JSON string.

  5. If you return any other object, the object is recursively stringified into the JSON string, calling the replacer function on each property, unless the object is a function, in which case nothing is added to the JSON string.

  6. If you return undefined, the property is not included (i.e., filtered out) in the output JSON string.

See the following code example.


// app.js

const replacer = (key, value) => {
  // Filtering out properties
  if (Number.isInteger(value)) {
    return undefined;
  }
  return value;
}

let shows = {
  hbo: 'Game of Thrones',
  netflix: 'Stranger Things',
  disneyplus: 'Mandalorian',
  appletvplus: 1
};
let res = JSON.stringify(shows, replacer);
console.log(res);

In the above function, if the value of an object property is an integer, we filter it out, and the remaining data is logged to the console.


➜  es git:(master) ✗ node app
{"hbo":"Game of Thrones","netflix":"Stranger Things","disneyplus":"Mandalorian"}
➜  es git:(master) ✗

JSON.stringify space parameter

The space argument may be used to control the spacing in the final string.

  1. If it is a number, successive levels in the stringification will each be indented by that many space characters (up to 10).

  2. If it is a string, successive levels will be indented by this string (or the first ten characters of it).

See the following code example.


// app.js

const obj = {
  character: 'Eleven',
  actor: 'Millie'
}

console.log(JSON.stringify(obj, null, ' '));

See the output.


➜  es git:(master) ✗ node app
{
 "character": "Eleven",
 "actor": "Millie"
}
➜  es git:(master) ✗

How to find the length of a JSON string

Let’s find the length of a JSON string.


// app.js

let jsonString = '{"name":"Millie Bobby Brown"}';
console.log("The string has " + jsonString.length + " characters");

See the output.


➜  es git:(master) ✗ node app
The string has 29 characters
➜  es git:(master) ✗

That wraps up this guide to creating JSON strings in JavaScript with JSON.stringify().