Elian Harber


MikkelHJuul/ld: Lean Database


Lean database: a simple database, just an RPC-based server with the basic Get/Set/Delete operations. The database can ingest any value and store it at a key; it only supports RPC, via gRPC. The encoded binary message is stored and served without touching the value on the server side. To that end it is mostly a gRPC cache, but I intend it to be a more general building block.

The database operates on the key level only. If you need secondary indexes, you need to maintain two versions of the data, or create the index (id-to-id mapping) table yourself. Some key-value databases offer more solutions than this; this one does not, and will not: offering too many solutions most often leads to poorer solutions in general.

There is no Query language to clutter your code! I know, awesome, right?!

This project started out as a learning project for myself, to learn Go, RPC and gRPC.

The project is written in Go. It will be packaged as a scratch container (linux/amd64). I will not support other ways of downloading; as always, you can simply go build.

Docker images

Images are mjuul/ld:<tag> and the Alpine-based mjuul/ld:<tag>-client. There is also a standalone client container, mjuul/ld-client.

The container mjuul/ld:<tag> is just a scratch container with the linux/amd64 binary as entrypoint.

The container mjuul/ld:<tag>-client is based on the image mjuul/ld-client adding the binary ld to it and running that at startup. The client serves as an interactive shell for the database, see client.


This project exposes BadgerDB. You should be able to use the BadgerDB CLI tool on the database files.


Hashmap Get-Set-Delete semantics! With bidirectional streaming RPCs. No lists, because aggregation of data should be kept to a minimum. The APIs for Get and Delete further implement unidirectional server-side streams for querying via KeyRange.

I consider the gRPC API feature complete, though the underlying implementation may change to enable better database configuration and/or usage of this code as a library. Maturity may also bring changes to the server implementation.

See test for client implementations. The testing package builds on data from DMI's Free Data initiative (specifically the lightning data set), but can easily be changed to ingest other data; ingestion and reading are separated into two different clients.

Working with the API

The API is expandable. Because of how gRPC encoding works you can replace the bytes type value tag on the client side with whatever you want. This way you could use it to store dynamically typed objects using Any. Or you can save and query the database with a fixed or reflected type.

The test folder holds two small programs that implement a fixed type: my_message.proto.

The client uses reflection to serialize/deserialize JSON to a message, given a .proto file.

CRUD - why not CRUD?

CRUD operations must be implemented client-side; use Get -> [decision] -> Set to implement create or update the way you want, e.g.

    Create      Get -> if {empty response} -> Set
    Update      Get/Delete -> if {non-empty} -> [map?] -> Set

To have done this server-side would cause a lot of friction. All embeddable key-value databases, to my knowledge, implement Get-Set-Delete semantics, so whether you go with bolt/bbolt or badger you would always end up with this friction; so naturally you implement it without CRUD semantics. Implementing a concurrent GetMany/SetMany ping-pong client-service feels a lot more elegant anyway.
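The Get -> [decision] -> Set flows above can be sketched as follows. Here a plain map stands in for the ld RPCs (the real calls are gRPC streams), and the sketch deliberately ignores the race between Get and Set that a concurrent client would have to consider:

```go
package main

import "fmt"

// store stands in for the ld server's hashmap Get-Set-Delete
// semantics. Illustration only: the real operations are gRPC calls.
type store map[string][]byte

func (s store) Get(key string) []byte    { return s[key] }
func (s store) Set(key string, v []byte) { s[key] = v }
func (s store) Delete(key string)        { delete(s, key) }

// create implements: Get -> if {empty response} -> Set.
func create(s store, key string, v []byte) bool {
	if s.Get(key) != nil {
		return false // key already exists; refuse to overwrite
	}
	s.Set(key, v)
	return true
}

// update implements: Get -> if {non-empty} -> [map?] -> Set.
func update(s store, key string, f func([]byte) []byte) bool {
	old := s.Get(key)
	if old == nil {
		return false // nothing to update
	}
	s.Set(key, f(old))
	return true
}

func main() {
	s := store{}
	fmt.Println(create(s, "a", []byte("v1"))) // true
	fmt.Println(create(s, "a", []byte("v2"))) // false: already created
	update(s, "a", func(b []byte) []byte { return append(b, '!') })
	fmt.Println(string(s.Get("a")))
}
```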


The server is configured via flags or environment variables:

flag            ENV             default     description
-port            PORT            5326        "5326" spells out "lean" in T9 keyboards
-db-location     DB_LOCATION     ld_badger   The folder location where badger stores its database-files
-in-mem          IN_MEM          false       save data in memory (or not); setting this to true ignores db-location
-log-level       LOG_LEVEL       INFO        the logging level of the server

The container mjuul/ld:<tag>-client does not support flags for ld; use environment variables instead (since it is ld-client that is the entrypoint).

Comparison to ProfaneDB

ProfaneDB uses field options to find your object's key; it can ingest a list (repeated), your key can be composite, and you don't have to think about your key. (I envy the design a bit (it's shiny), but then again I don't feel it is the best design.)

ld forces you to design your key, and forces single-object (no aggregation, non-repeated) thinking.

ProfaneDB does not support any type of non-singular key query; you have to build query objects with very specific knowledge of your keys. This may force you to make fewer keys and do more work in the client. (You may end up searching for a needle in a haystack, or completely losing a key.)

ld supports KeyRanges: you can make very specific keys, and more of them, think about the key design, and query via From, To, Prefix and/or Pattern syntax.
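As an illustration of how such a KeyRange might be evaluated against keys, here is a small local sketch. Note that the exact matching rules (inclusive From, exclusive To, regexp Pattern) are assumptions made for this sketch, not taken from the ld source:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// KeyRange mirrors the query fields named in the text (From, To,
// Prefix, Pattern). The semantics below are assumptions: From is
// inclusive, To is exclusive, Pattern is a regexp.
type KeyRange struct {
	From, To, Prefix, Pattern string
}

// Match reports whether a key falls inside the range; empty fields
// are treated as "no constraint".
func (r KeyRange) Match(key string) bool {
	if r.From != "" && key < r.From {
		return false
	}
	if r.To != "" && key >= r.To {
		return false
	}
	if r.Prefix != "" && !strings.HasPrefix(key, r.Prefix) {
		return false
	}
	if r.Pattern != "" {
		ok, err := regexp.MatchString(r.Pattern, key)
		if err != nil || !ok {
			return false
		}
	}
	return true
}

func main() {
	// Specific, well-designed keys (e.g. timestamps) make range and
	// prefix queries natural.
	r := KeyRange{Prefix: "2020-09-", Pattern: `T1[0-2]`}
	fmt.Println(r.Match("2020-09-10T11:00")) // true
	fmt.Println(r.Match("2020-09-10T14:00")) // false
}
```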

ProfaneDB uses an inbuilt extension for its .proto. Pro: you can use their .proto file as-is. Con: Google's Any type is just like a map, and requires the implementer to send type information with each object on the wire.

ld uses the underlying protocol buffers encoding design. Con: this forces the implementer to edit their .proto file, which is an anti-pattern. Pro: while the database will not know anything about the value it saves, the type is packed binary and can be serialised.

ld supports bulk operations (via stream methods) natively; ProfaneDB does so via a repeated nested object. Memory-wise, streaming is preferred.


  • benchmarks

Author: MikkelHJuul
Source Code: https://github.com/MikkelHJuul/ld 
License: Unlicense

#go #golang #id #database 

Ruth Nabimanya


System Databases in SQL Server


In SSMS, many of us may have noticed System Databases under the Database folder. But how many of us know their purpose? In this article, let's discuss the system databases in SQL Server.

System Database

Fig. 1 System Databases

There are five system databases; they are created when SQL Server is installed.

Master

  • This database contains all the system-level information in SQL Server, in the form of metadata.
  • Because of the master database, we are able to access SQL Server (on-premise SQL Server).

Model

  • This database is used as a template for new databases.
  • Whenever a new database is created, it initially starts as a copy of the model database.

MSDB

  • This database is where a service called SQL Server Agent stores its data.
  • SQL Server Agent is in charge of automation, which includes entities such as jobs, schedules, and alerts.

Tempdb

  • Tempdb is where SQL Server stores temporary data such as work tables, sort space, and row-versioning information.
  • Users can create their own temporary tables, and those are stored in tempdb.
  • This database is destroyed and recreated every time the SQL Server instance restarts.

Resource

  • The resource database is a hidden, read-only database that holds the definitions of all system objects.
  • When we query a system object in a database, it appears to reside in the sys schema of the local database, but its definition actually resides in the resource database.

#sql server #master system database #model system database #msdb system database #sql server system databases #ssms #system database #system databases in sql server #tempdb system database

Django-allauth: A simple Boilerplate to Setup Authentication


A simple Boilerplate to Setup Authentication using Django-allauth, with a custom template for login and registration using django-crispy-forms.

Getting Started


  • Python 3.8.6 or higher

Project setup

# clone the repo
$ git clone https://github.com/yezz123/Django-Authentication

# move to the project folder
$ cd Django-Authentication

Creating virtual environment

  • Create a virtual environment for this project:
# create a virtual environment for python 3
$ virtualenv venv

# activate the virtual environment
$ source venv/bin/activate

# on Windows, activate from the Scripts folder instead:
# venv\Scripts\activate

Configure the Environment

Environment variables

SECRET_KEY = #random string
DEBUG = #True or False
ALLOWED_HOSTS = #localhost
DATABASE_NAME = #database name (You can just use the default if you want to use SQLite)
DATABASE_USER = #database user for postgres
DATABASE_PASSWORD = #database password for postgres
DATABASE_HOST = #database host for postgres
DATABASE_PORT = #database port for postgres
ACCOUNT_EMAIL_VERIFICATION = #mandatory or optional
EMAIL_BACKEND = #email backend
EMAIL_HOST = #email host
EMAIL_HOST_PASSWORD = #email host password
EMAIL_USE_TLS = # if your email use tls
EMAIL_PORT = #email port

Change all the environment variables in .env.sample, and don't forget to rename it to .env.

Run the project

After setting up the environment, you can run the project using the Makefile provided in the project folder:

 @echo "Targets:"
 @echo "    make install" #install requirements
 @echo "    make makemigrations" #prepare migrations
 @echo "    make migrations" #migrate database
 @echo "    make createsuperuser" #create superuser
 @echo "    make run_server" #run the server
 @echo "    make lint" #lint the code using black
 @echo "    make test" #run the tests using Pytest

Preconfigured Packages

Includes preconfigured packages to kick-start Django-Authentication by just setting the appropriate configuration.

django-allauth: Integrated set of Django applications addressing authentication, registration, account management as well as 3rd party (social) account authentication.
django-crispy-forms: Provides you with a crispy filter and {% crispy %} tag that will let you control the rendering behavior of your Django forms in a very elegant and DRY way.


  • Django-Authentication is a simple project, so you can contribute to it by just adding your code to the project to improve it.
  • If you have any questions, please feel free to open an issue or create a pull request.

Download Details:
Author: yezz123
Source Code: https://github.com/yezz123/Django-Authentication
License: MIT License

#django #python 

iOS App Dev


SingleStore: The One Stop Shop For Everything Data

  • SingleStore works toward helping businesses embrace digital innovation by operationalising “all data through one platform for all the moments that matter”

The pandemic has brought a period of transformation across businesses globally, pushing data and analytics to the forefront of decision making. Starting from enabling advanced data-driven operations to creating intelligent workflows, enterprise leaders have been looking to transform every part of their organisation.

SingleStore is one of the leading companies in the world, offering a unified database to facilitate fast analytics for organisations looking to embrace diverse data and accelerate their innovations. It provides an SQL platform to help companies aggregate, manage, and use the vast trove of data distributed across silos in multiple clouds and on-premise environments.


#featured #data analytics #data warehouse augmentation #database #database management #fast analytics #memsql #modern database #modernising data platforms #one stop shop for data #singlestore #singlestore data analytics #singlestore database #singlestore one stop shop for data #singlestore unified database #sql #sql database

Ruth Nabimanya


How to Efficiently Choose the Right Database for Your Applications

Finding the right database solution for your application is not easy. Learn how to efficiently find a database for your applications.

At iQIYI, one of the largest online video sites in the world, we're experienced in database selection across several fields: Online Transactional Processing (OLTP), Online Analytical Processing (OLAP), Hybrid Transaction/Analytical Processing (HTAP), SQL, and NoSQL.

Today, I’ll share with you:

  • What criteria to use for selecting a database.
  • What databases we use at iQIYI.
  • Some decision models to help you efficiently pick a database.
  • Tips for choosing your database.

I hope this post can help you easily find the right database for your applications.

#database architecture #database application #database choice #database management system #database management tool

Brain Crist


How to Build a Pokedex React App with a Slash GraphQL Backend

Frontend developers want interactions with the backends of their web applications to be as painless as possible. Requesting data from the database or making updates to records stored in the database should be simple, so that frontend developers can focus on what they do best: creating beautiful and intuitive user interfaces.

GraphQL makes working with databases easy. Rather than relying on backend developers to create specific API endpoints that return pre-selected data fields when querying the database, frontend developers can make simple requests to the backend and retrieve the exact data that they need—no more, no less. This level of flexibility is one reason why GraphQL is so appealing.

Even better, you can use a _hosted_ GraphQL backend—Slash GraphQL (by Dgraph). This service is brand new and was publicly released on September 10, 2020. With Slash GraphQL, I can create a new backend endpoint, specify the schema I want for my graph database, and—voila!—be up and running in just a few steps.

The beauty of a hosted backend is that you don’t need to manage your own backend infrastructure, create and manage your own database, or create API endpoints. All of that is taken care of for you.

In this article, we’re going to walk through some of the basic setup for Slash GraphQL and then take a look at how I built a Pokémon Pokédex app with React and Slash GraphQL in just a few hours!

#development #web developement #databases #graph databases #reactjs #database design #database architecture #pokemon #graph databases in the cloud #dgraph