Introducing MikroORM, TypeScript data-mapper ORM with Identity Map

MikroORM is a simple TypeScript ORM for Node.js based on the Data Mapper, Unit of Work and Identity Map patterns. It supports MongoDB, MySQL, PostgreSQL and SQLite.

Motivation

During my early days at university, I remember how quickly I fell in love with object-oriented programming and the concepts of object-relational mapping and Domain-Driven Design. Back then, I was mainly a PHP programmer (although we did a lot of Java/Hibernate at school), so a natural choice for me was to start using Doctrine.

A few years ago, when I switched from PHP to Node.js (and later to TypeScript), I was really confused. How come there is nothing similar to Hibernate or Doctrine in the JavaScript world? About a year ago, I finally came across TypeORM, and when I read this line in the readme, I thought I had found what I was looking for:

TypeORM is highly influenced by other ORMs, such as Hibernate, Doctrine and Entity Framework.

I started playing with it immediately, but I got disappointed very quickly. No Identity Map that would keep track of all loaded entities. No Unit of Work that would handle transaction isolation. No unified API for references, and very strange support for accessing just the identifier without populating the entity. The MongoDB driver (which I was aiming to use) was experimental, and I had a lot of problems setting it up. After a few days of struggle, I walked away from it.

By that time, I started to think about writing something myself. And that is how MikroORM started!

MikroORM is a TypeScript ORM for Node.js based on the Data Mapper, Unit of Work and Identity Map patterns.

Currently it supports MongoDB, MySQL, PostgreSQL and SQLite databases, and more can be supported via custom drivers. It has first-class TypeScript support, while staying backward compatible with vanilla JavaScript.

Installation

First install the module via yarn or npm, and do not forget to install the database driver as well. Next you will need to enable support for decorators in tsconfig.json via the experimentalDecorators flag. Then call MikroORM.init as part of bootstrapping your application.

The last step is to provide a forked EntityManager for each request, so that each request has its own unique Identity Map. To do so, you can use the EntityManager.fork() method. Another way, which is more DI friendly, is to create a new request context for each request, which will use some dark magic in the background to always pick the right EntityManager for you.

# using yarn
$ yarn add mikro-orm mongodb # for mongo
$ yarn add mikro-orm mysql2  # for mysql
$ yarn add mikro-orm pg      # for postgresql
$ yarn add mikro-orm sqlite  # for sqlite

# or npm
$ npm i -s mikro-orm mongodb # for mongo
$ npm i -s mikro-orm mysql2  # for mysql
$ npm i -s mikro-orm pg      # for postgresql
$ npm i -s mikro-orm sqlite  # for sqlite
{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es2017",
    "moduleResolution": "node",
    "declaration": true,
    "strict": true,
    "strictPropertyInitialization": false,
    "experimentalDecorators": true
  }
}
const orm = await MikroORM.init({
  entities: [Author, Book, BookTag],
  dbName: 'my-db-name',
  clientUrl: '...', // defaults to 'mongodb://127.0.0.1:27017' for mongodb driver
  type: 'mongo', // one of 'mongo', 'mysql', 'postgresql', 'sqlite', defaults to 'mongo'
});

console.log(orm.em); // access EntityManager via em property
const app = express();

app.use((req, res, next) => {
  req.em = orm.em.fork(); // save the fork to req object
  next();
});

app.get('/books', async (req, res) => {
  const books = await req.em.find(Book, {}); // use the fork via req.em
  res.json(books);
});
const app = express();

// by providing request context, creating forked EntityManager will be handled automatically
app.use((req, res, next) => {
  RequestContext.create(orm.em, next);
});

Defining entities

To define an entity, simply create a class and decorate it. Here is an example of Book entity defined for MongoDB driver:

import { ObjectID } from 'mongodb';
import { Collection, Entity, IEntity, ManyToMany, ManyToOne, PrimaryKey, Property } from 'mikro-orm';
import { Author, BookTag, Publisher } from '.';

@Entity()
export class Book {

  @PrimaryKey()
  _id: ObjectID;

  @Property()
  createdAt = new Date();

  @Property({ onUpdate: () => new Date() })
  updatedAt = new Date();

  @Property()
  title: string;

  @ManyToOne()
  author: Author;

  @ManyToOne()
  publisher: Publisher;

  @ManyToMany({ entity: () => BookTag, inversedBy: 'books' })
  tags = new Collection<BookTag>(this);

  constructor(title: string, author: Author) {
    this.title = title;
    this.author = author;
  }

}

export interface Book extends IEntity<string> { }

As you can see, it’s pretty simple and straightforward. Entities are simple JavaScript objects (so-called POJOs), decorated with the @Entity decorator (for TypeScript), or accompanied by a schema definition object (for vanilla JavaScript). There are no real restrictions: you do not have to extend any base class, and you are more than welcome to use entity constructors to specify required parameters, so the entity is always kept in a valid state. The only requirement is to define the primary key property.

You might be curious about the last line with Book as an interface. This is called interface merging, and it is there to let TypeScript know the entity will have some extra API methods (like init() or isInitialized()) available, as they will be monkey-patched during the discovery process. More about this can be found in the docs.
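
As a small illustration of what this buys you, the patched methods type-check on any loaded entity (a minimal sketch reusing the Book entity above; strict null checks are ignored for brevity):

const book = await orm.em.findOne(Book, '...');

// thanks to the merged IEntity interface, the patched API type-checks:
console.log(book.isInitialized()); // true, the entity was just loaded
console.log(book.author.isInitialized()); // false, author is only a reference here
await book.author.init(); // loads the author, the same object stays in the Identity Map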

Persisting entities with EntityManager

To save entity state to the database, you need to persist it. Persist takes care of deciding whether to use insert or update, and computes the appropriate change set. As a result, only changed fields will be updated in the database.

MikroORM comes with support for cascading persist and remove operations. Cascade persist is enabled by default, which means that by persisting an entity, all referenced entities will be automatically persisted too.

const author = new Author('Jon Snow', '[email protected]');
author.born = new Date();

const publisher = new Publisher('7K publisher');

const book1 = new Book('My Life on The Wall, part 1', author);
book1.publisher = publisher;
const book2 = new Book('My Life on The Wall, part 2', author);
book2.publisher = publisher;
const book3 = new Book('My Life on The Wall, part 3', author);
book3.publisher = publisher;

// just persist books, author and publisher will be automatically cascade persisted
await orm.em.persistAndFlush([book1, book2, book3]);

// or one by one
orm.em.persistLater(book1);
orm.em.persistLater(book2);
orm.em.persistLater(book3);
await orm.em.flush(); // flush everything to database at once
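
To see the change-set computation described above in action, here is a minimal sketch of an update flow (it assumes the author from the previous snippet has already been flushed; the date is a made-up value):

// load a managed entity and change a single field
const jon = await orm.em.findOne(Author, { name: 'Jon Snow' });
jon.born = new Date('1990-03-23'); // hypothetical value

// flush() computes the change set and issues an update
// containing only the changed `born` field
await orm.em.flush();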

Fetching entities

To fetch entities from the database you can use the find() and findOne() methods of EntityManager:

// find all authors with name matching 'Jon', and populate all of their books
const authors = await orm.em.find(Author, { name: /Jon/ }, ['books']);

for (const author of authors) {
  console.log(author.name); // Jon Snow

  for (const book of author.books) {
    console.log(book.title); // initialized
    console.log(book.author.isInitialized()); // true
    console.log(book.author.id);
    console.log(book.author.name); // Jon Snow
    console.log(book.publisher); // just reference
    console.log(book.publisher.isInitialized()); // false
    console.log(book.publisher.id);
    console.log(book.publisher.name); // undefined
  }
}

A more convenient way of fetching entities from the database is to use EntityRepository, which carries the entity name so you do not have to pass it to every find() and findOne() call:

import { QueryOrder } from 'mikro-orm';

const booksRepository = orm.em.getRepository(Book);

// with sorting, limit and offset parameters, populating author references
const books = await booksRepository.find({ author: '...' }, ['author'], { title: QueryOrder.DESC }, 2, 1);

// or with options object
const books = await booksRepository.find({ author: '...' }, {
  populate: ['author'],
  limit: 2,
  offset: 1,
  sort: { title: QueryOrder.DESC },
});

console.log(books); // Book[]

Working with references

Entity associations are mapped to entity references. A reference is an entity that has at least its identifier (primary key) set. The reference is stored in the Identity Map, so you will get the same object reference when fetching the same document from the database.

Thanks to this concept, MikroORM offers a unified API for accessing entity references, regardless of whether the entity is initialized or not. Even if you do not populate an association, its reference will be there with the primary key set. You can call await entity.init() to initialize the entity. This will trigger a database call and populate the entity, keeping the same object reference in the Identity Map.

const book = await orm.em.findOne(Book, '...');
console.log(book.author); // reference with ID only, instance of Author entity

// this will get the same reference as we already have in book.author
const author = orm.em.getReference(Author, book.author.id);
console.log(author.id); // accessing the id will not trigger any db call
console.log(author.isInitialized()); // false
console.log(author.name); // undefined
console.log(author === book.author); // true

// this will trigger db call, we could also use orm.em.findOne(Author, author.id) to do the same
await author.init();
console.log(author.isInitialized()); // true
console.log(author.name); // defined

Identity Map and Unit of Work

MikroORM uses the Identity Map in the background to track objects. This means that whenever you fetch an entity via the EntityManager, MikroORM will keep a reference to it inside its UnitOfWork and will always return the same instance of it, even if you query the same entity via different properties. This also means you can compare entities via strict equality operators (=== and !==):

const authorRepository = orm.em.getRepository(Author);
const jon = await authorRepository.findOne({ name: 'Jon Snow' }, ['books']);
const jon2 = await authorRepository.findOne({ email: '[email protected]' });
const authors = await authorRepository.findAll(['books']);

// identity map in action
console.log(jon === authors[0]); // true
console.log(jon === jon2); // true

// as we always have one instance, books will be populated also here
console.log(jon2.books);

Another benefit of the Identity Map is that it allows us to skip some database calls. When you try to load an already managed entity by its identifier, the one from the Identity Map will be returned without querying the database.
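
For example, loading the same author a second time by its identifier results in no additional query (a minimal sketch, assuming an Author named 'Jon Snow' exists in the database):

const jon = await orm.em.findOne(Author, { name: 'Jon Snow' });

// the entity is now managed, so loading it again by its primary key
// is served from the Identity Map without querying the database
const jonAgain = await orm.em.findOne(Author, jon.id);
console.log(jon === jonAgain); // true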

The power of the Unit of Work is in running all queries inside a batch, wrapped in a transaction (if supported by the given driver). This approach is usually more performant than firing queries from various places.

Collections

OneToMany and ManyToMany collections are stored in a Collection wrapper. It implements the iterator protocol, so you can use a for...of loop to iterate over it.

Another way to access collection items is to use bracket syntax, just like when you access array items. Keep in mind that this approach will not check whether the collection is initialized, while using the get method will throw an error in that case.

// find author and populate his books collection
const author = await orm.em.findOne(Author, '...', ['books']);

for (const book of author.books) {
  console.log(book); // instance of Book
}

// bracket syntax does not check whether the collection is initialized
const book = author.books[0];

author.books.add(book);
console.log(author.books.contains(book)); // true
author.books.remove(book);
console.log(author.books.contains(book)); // false
author.books.add(book);
console.log(author.books.count()); // 1
console.log(author.books.getItems()); // Book[]
console.log(author.books.getIdentifiers()); // array of primary keys of all items
author.books.removeAll();

More information about collections can be found in the docs.

What’s next?

So you have read through the whole article, got here, and you are still not satisfied? There are more articles to come (beginning with an integration manual for popular frameworks like Express or NestJS), but you can take a look at some of the advanced features covered in the docs right now:

![](https://cdn-images-1.medium.com/max/1600/1*4877k4Hq9dPdtmvg9hnGFA.jpeg)

To start playing with MikroORM, go through the quick start and read the docs. You can also take a look at example integrations with some popular frameworks.

Originally published by Martin Adámek at https://medium.com/dailyjs/introducing-mikro-orm-typescript-data-mapper-orm-with-identity-map-9ba58d049e02

Thanks for reading! ❤️ If you liked this post, share it with all of your programming buddies! Follow me on Facebook | Twitter
