.NET vs Node.js


Hello there! You might be here because you want to code up your shiny brand new blog and start writing articles that captivate your readers and earn you a following. Perhaps you've had an amazing idea for a brand new web application that will change everything, or maybe you simply want to create a simple website to present yourself and can't be bothered with WordPress or PHP.

But the problem you face is one that most developers have these days.

Which DAMN tech stack to use???

We are spoiled with choices like never before in the history of development. There are many web development libraries, programming languages, and frameworks, but here we'll focus only on Node.js and .NET Core, specifically ASP.NET.

Both of these technologies have firmly established themselves in the world of web development over the last few years. Both pride themselves on performance and scalability, both have groups of devoted fans and highly opinionated developers behind them.

In this article we'll discuss how they are alike and how they are different, where their respective strengths lie and where they might experience shortcomings and hopefully along the way I can help you make up your mind or at least ease your decision a bit.


I have a confession to make: I'm a .NET developer, and I might have a bit of a bias towards my favorite framework. However, the purpose of this article is not to decide which of the two is better, but simply to point out the differences between them in an objective, honest way. Most of us developers love the technology tribes we've established, but there will hopefully be none of that here.

Learning Curve

Here's another confession: most of these points will most likely start with 'it depends', followed by a short explanation of how different people have different experience levels and different preferences. Well... this point depends heavily on your pre-existing knowledge. ASP.NET comes with a premade project/file structure, premade boilerplate, and some nifty examples of how to use the MVC pattern.

To a complete beginner this can be incredibly helpful, as it provides an easy entry point into an architecture that is widely considered an industry standard; to those who prefer to learn as they go, it can be a hindrance or downright confusing. Node.js with Express, on the other hand, is as barebones as it gets.

You install the express module via npm, and then you're on your own: what pattern you choose, how you structure your project, and so on is up to you.

To a complete beginner or even an experienced developer this may be confusing, but it's nothing that a Google search or a look at the documentation couldn't fix.

Programming Language

Now there is a substantial difference between C# and JavaScript. C# is a fully object-oriented, statically typed programming language. Really understanding the OOP philosophy is something even some experienced developers struggle with, and with OOP comes a ton of different patterns to use, each with its own pros and cons. It's very easy to drown in the depth of information on C#.

JavaScript, on the other hand, is a dynamically, weakly typed programming language, which means you won't have to worry about remembering what type of variable can contain what information, to what length or decimal precision, and so forth. Classical OOP can be simulated but, as of 2019, never fully achieved in JavaScript. It pains me greatly to say this, but JavaScript is a lot simpler for a beginner to pick up and create something with than C#.
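To make that concrete, here's a tiny, runnable sketch of JavaScript's dynamic, weak typing; nothing like this would compile in C#:

```javascript
// A quick sketch of dynamic, weak typing in action.
let value = 42;        // holds a number...
value = 'forty-two';   // ...and now a string, with no compiler complaint

console.log(1 + '1');   // '11'  - the number is coerced to a string
console.log(1 == '1');  // true  - loose equality coerces types
console.log(1 === '1'); // false - strict equality does not
```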

Development Time

Obligatory it depends on your pre-existing knowledge and experience.
Both languages are hugely popular and therefore provide you with an ocean of knowledge one Google search away. As I've mentioned above, ASP.NET comes with premade project templates that let you jump straight into developing your web app. However, a quick search reveals a ton of Yeoman generators for Node.js that you can use freely.

But hey that's just the starting phase. How about the actual development time?

I firmly believe that I can create a fully functional web app or RESTful API faster in ASP.NET than I could in Node.js. This has a lot to do with the tooling Microsoft provides, and with the fact that the most important modules, plugins, and NuGet packages in ASP.NET were all created by the same company.

This means there are fewer surprises: method names don't suddenly switch from camelCase to snake_case, and so on. With Node.js that is not the case, as each node module is often written by a different developer, and these developers may have different preferences, ideas, and standards. This in turn makes me consult the documentation more often than I would like, sometimes even for stuff I have done a million times. Better safe than sorry, right?

Stability


As I've mentioned in the point above, ASP.NET's most used and important NuGet packages are all developed by Microsoft, which means they will always work together and always stay consistent, whereas most node modules are written and maintained by different developers with different philosophies.

This obviously has a big effect on the stability of your web app. One day you could decide to update your project and all its dependencies, only to discover that they no longer work well together, or that a method you used all over your project has been deprecated or even renamed.

This can ruin anyone's day, I know I wasn't too happy when it happened to me.

Furthermore, these developers can stop maintaining their modules at any time and leave you out to dry. Or the company behind a widely used node module could be bought by another company, and its direction could change.

When it comes to stability, I personally like to side with Microsoft. For all their faults, and I know they can have quite a few, at least you know they'll always be around, and they run a tight ship when it comes to their developer tools, frameworks, and libraries.

Performance


Oh boy, the big one. Both the Node.js camp and the .NET camp will claim that under certain conditions their framework of choice will outperform the other. There are articles on the web where companies boosted their performance substantially by switching from Node to .NET, and vice versa.

What they don't tell you are the gory details: how old was their previous codebase, how experienced was the team that developed it, how many mistakes and problems did the project contain, and so on.

Both Node.js and .NET are capable of outperforming or underperforming against each other.

I know you're sick of hearing 'it depends', but this point really does lie with the developer or development team. After all, a good workman does not blame his tools. Having said that, when both are optimally written and allowed to run on the same system under the same workload, .NET objectively outperforms Node.js by quite a bit. This has a lot to do with the differences between JavaScript and C#. There's a whole blog post I could write on why C# outperforms many other languages and how it achieves that, but that's for another time.

BUT keep in mind that unless you are facing heavy traffic, huge request volumes, or performance-intensive tasks, the real-world difference between the two, once human error is added to the equation, will be minuscule. Only treat performance as a deciding metric if you really have to squeeze out every last percent, and even then, servers cost a heck of a lot less per year than developers do.

Modules & Tools

Node.js has npm, .NET has NuGet. As of 18 November 2019, the raw package counts are clearly on Node.js' side. However, Microsoft develops and maintains a great deal of the most important NuGet packages. When it comes to tooling, .NET is the clear winner, as it has some of the most powerful and well-maintained tools out there.

Visual Studio and Rider spring to mind, and lately you can even use Visual Studio Code as a full IDE. As of 2019 there is still no official or unofficial Node.js-focused IDE, so your best bet will most likely be Visual Studio Code with Node.js development plugins.

Hosting


Both ASP.NET and Node.js can be hosted on Azure, AWS, a VPS, Google Cloud, Heroku, and so on. Once again we are spoiled for choice. However, there are more dedicated ASP.NET hosts overall, and with quantity come options.

Async vs Sync

This right here is the key difference between the two. Node.js is asynchronous by default, while ASP.NET is synchronous by default but allows you to define asynchronous methods.


Synchronous basically means that you can only execute a single operation at any given time; all other operations are blocked until the original operation has fully executed.


Asynchronous means that you can execute multiple operations at the same time without blocking other operations. Sounds simple enough, right? In fact, it even sounds like async is always the way to go.

Hold up there, not so fast. Async is great and all, but it can cause a ton of headaches when handled improperly. Imagine for a moment that two users submit a request at the exact same time for the exact same code block.

Except user A wants to delete or update the code block, and user B wishes to read it. This means user B will either see information that should no longer exist at that moment, see stale information, or request information that no longer exists. Two wrong results and one correct one; I don't like those odds.
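Here's a hedged sketch of that race using an in-memory record and two simulated requests; the timings are invented purely to force the "delete wins" interleaving:

```javascript
// An in-memory "record" accessed by two simulated concurrent requests.
let record = { id: 1, text: 'hello' };

function readRecord() {
  // User B's read: resolves after 10ms with whatever the record is by then.
  return new Promise((resolve) => setTimeout(() => resolve(record), 10));
}

function deleteRecord() {
  // User A's delete: finishes after 5ms, winning the race.
  return new Promise((resolve) => setTimeout(() => { record = null; resolve(); }, 5));
}

Promise.all([readRecord(), deleteRecord()]).then(([seen]) => {
  console.log('user B saw:', seen); // null - the record was already deleted
});
```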

Furthermore, more often than not I have had to force synchronicity into my Node.js API or web app for one reason or another. This is why I personally prefer the ASP.NET way of doing it.

The default state of it all is synchronous until async is required.

Heck if you want you can make the entire ASP.NET webapp Async.


As per usual there are no clear winners and the choice boils largely down to personal preference. But I hope that I managed to clear up some things or help you make your decision.

Happy coding!

Getting Started With Threads in NodeJS


Many people wonder how a single-threaded Node.js can compete with multithreaded back ends. As such, it may seem counterintuitive that so many huge companies pick Node as their back end, given its supposed single-threaded nature. To know why, we have to understand what we really mean when we say that Node is single-threaded.

JavaScript was created to be just good enough to do simple things on the web, like validate a form or, say, create a rainbow-colored mouse trail. It was only in 2009 that Ryan Dahl, creator of Node.js, made it possible for developers to use the language to write back-end code.

Back-end languages, which generally support multithreading, have all kinds of mechanisms for syncing values between threads and other thread-oriented features. To add support for such things to JavaScript would require changing the entire language, which wasn’t really Dahl’s goal. For plain JavaScript to support multithreading, he had to create a workaround. Let’s explore …

How Node.js really works

Node.js uses two kinds of threads: a main thread handled by event loop and several auxiliary threads in the worker pool.

Event loop is the mechanism that takes callbacks (functions) and registers them to be executed at some point in the future. It operates in the same thread as the proper JavaScript code. When a JavaScript operation blocks the thread, the event loop is blocked as well.

Worker pool is an execution model that spawns and handles separate threads, which then synchronously perform the task and return the result to the event loop. The event loop then executes the provided callback with said result.

In short, it takes care of asynchronous I/O operations — primarily, interactions with the system’s disk and network. It is mainly used by modules such as fs (I/O-heavy) or crypto (CPU-heavy). Worker pool is implemented in libuv, which results in a slight delay whenever Node needs to communicate internally between JavaScript and C++, but this is hardly noticeable.

With both of these mechanisms, we are able to write code like this:

fs.readFile(path.join(__dirname, './package.json'), (err, content) => {
  if (err) {
    return null;
  }
  console.log(content.toString());
});

The aforementioned fs module tells the worker pool to use one of its threads to read the contents of a file and notify the event loop when it is done. The event loop then takes the provided callback function and executes it with the content of the file.

Above is an example of non-blocking code; as such, we don't have to wait synchronously for something to happen. We tell the worker pool to read the file and call the provided function with the result. Since the worker pool has its own threads, the event loop can continue executing normally while the file is being read.

It’s all good until there’s a need to synchronously execute some complex operation: any function that takes too long to run will block the thread. If an application has many such functions, it could significantly decrease the throughput of the server or freeze it altogether. In this case, there’s no way of delegating the work to the worker pool.
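To make this concrete, here's a small sketch of a blocking call starving the event loop; the recursive Fibonacci is just a stand-in for any CPU-heavy synchronous work:

```javascript
// A synchronous, CPU-heavy function like this blocks the event loop:
// timers, I/O callbacks, and incoming requests all wait until it returns.
function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

setTimeout(() => console.log('timer fired'), 0);

const result = fibonacci(30); // even a 0ms timer has to wait for this
console.log('fibonacci(30) =', result); // prints before 'timer fired'
```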

Fields that require complex calculations — such as AI, machine learning, or big data — couldn’t really use Node.js efficiently due to the operations blocking the main (and only) thread, making the server unresponsive. That was the case up until Node.js v10.5.0 came about, which added support for multiple threads.

Introducing: worker_threads

The worker_threads module is a package that allows us to create fully functional multithreaded Node.js applications.

A thread worker is a piece of code (usually taken out of a file) spawned in a separate thread.

Note that the terms thread worker, worker, and thread are often used interchangeably; they all refer to the same thing.

To start using thread workers, we have to import the worker_threads module. Let’s start by creating a function to help us spawn these thread workers, and then we’ll talk a little bit about their properties.

import { Worker } from 'worker_threads';

type WorkerCallback = (err: any, result?: any) => any;

export function runWorker(path: string, cb: WorkerCallback, workerData: object | null = null) {
  const worker = new Worker(path, { workerData });

  worker.on('message', cb.bind(null, null));
  worker.on('error', cb);
  worker.on('exit', (exitCode) => {
    if (exitCode === 0) {
      return null;
    }
    return cb(new Error(`Worker has stopped with code ${exitCode}`));
  });

  return worker;
}

To create a worker, we have to create an instance of the Worker class. In the first argument, we provide a path to the file that contains the worker’s code; in the second, we provide an object containing a property called workerData. This is the data we’d like the thread to have access to when it starts running.

Note that whether you use JavaScript itself or something that transpiles to JavaScript (e.g., TypeScript), the path should always refer to files with either .js or .mjs extensions.

I would also like to point out why we used the callback approach as opposed to returning a promise that would be resolved when the message event is fired. This is because workers can dispatch many message events, not just one.

As you can see in the example above, the communication between threads is event-based, which means we are setting up listeners to be called once a given event is sent by the worker.

Here are the most common events:

worker.on('error', (error) => {});

The error event is emitted whenever there’s an uncaught exception inside the worker. The worker is then terminated, and the error is available as the first argument in the provided callback.

worker.on('exit', (exitCode) => {});

exit is emitted whenever a worker exits. If process.exit() was called inside the worker, exitCode would be provided to the callback. If the worker was terminated with worker.terminate(), the code would be 1.

worker.on('online', () => {});

online is emitted whenever a worker stops parsing the JavaScript code and starts the execution. It’s not used very often, but it can be informative in specific cases.

worker.on('message', (data) => {});

message is emitted whenever a worker sends data to the parent thread.

Now let’s take a look at how the data is being shared between threads.

Exchanging data between threads

To send the data to the other thread, we use the port.postMessage() method. It has the following signature:

port.postMessage(data[, transferList])

The port object can be either parentPort or an instance of MessagePort — more on that later.

The data argument

The first argument — here called data — is an object that is copied to the other thread. It can contain anything the copying algorithm supports.

The data is copied by the structured clone algorithm. Per Mozilla:

It builds up a clone by recursing through the input object while maintaining a map of previously visited references in order to avoid infinitely traversing cycles.

The algorithm doesn’t copy functions, errors, property descriptors, or prototype chains. It should also be noted that copying objects in this way is different than with JSON because it can contain circular references and typed arrays, for example, whereas JSON cannot.

By supporting the copying of typed arrays, the algorithm makes it possible to share memory between threads.
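As an aside, the same algorithm has since been exposed as the global structuredClone() (available in Node.js 17+), which makes the difference from JSON easy to demonstrate:

```javascript
// What the structured clone algorithm handles that JSON cannot.
const original = { values: new Int32Array([1, 2, 3]) };
original.self = original; // circular reference

const copy = structuredClone(original);
console.log(copy.self === copy);                // true - the cycle is preserved
console.log(copy.values instanceof Int32Array); // true - typed array preserved

try {
  JSON.stringify(original);
} catch (err) {
  console.log('JSON.stringify throws a', err.name); // TypeError on the cycle
}
```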

Sharing memory between threads

People may argue that modules like cluster or child_process enabled the use of threads a long time ago. Well, yes and no.

The cluster module can create multiple node instances with one master process routing incoming requests between them. Clustering an application allows us to effectively multiply the server’s throughput; however, we can’t spawn a separate thread with the cluster module.

People tend to use tools like PM2 to cluster their applications as opposed to doing it manually inside their own code, but if you’re interested, you can read my post on how to use the cluster module.

The child_process module can spawn any executable regardless of whether it’s JavaScript. It is pretty similar, but it lacks several important features that worker_threads has.

Specifically, thread workers are more lightweight and share the same process ID as their parent threads. They can also share memory with their parent threads, which allows them to avoid serializing big payloads of data and, as a result, send the data back and forth much more efficiently.

Now let’s take a look at an example of how to share memory between threads. In order for the memory to be shared, an instance of ArrayBuffer or SharedArrayBuffer must be sent to the other thread as the data argument or inside the data argument.

Here’s a worker that shares memory with its parent thread:

import { parentPort } from 'worker_threads';

parentPort.on('message', () => {
  const numberOfElements = 100;
  const sharedBuffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * numberOfElements);
  const arr = new Int32Array(sharedBuffer);

  for (let i = 0; i < numberOfElements; i += 1) {
    arr[i] = Math.round(Math.random() * 30);
  }

  parentPort.postMessage({ arr });
});

First, we create a SharedArrayBuffer with the memory needed to contain 100 32-bit integers. Next, we create an instance of Int32Array, which will use the buffer to save its structure, then we just fill the array with some random numbers and send it to the parent thread.

In the parent thread:

import path from 'path';
import { runWorker } from '../run-worker';

const worker = runWorker(path.join(__dirname, 'worker.js'), (err, { arr }) => {
  if (err) {
    return null;
  }
  arr[0] = 5;
});

// The worker above waits for a message before filling the buffer.
worker.postMessage({});

By changing arr[0] to 5, we actually change it in both threads.

Of course, by sharing memory, we risk changing a value in one thread and having it changed in the other. But we also gain a very nice feature along the way: the value doesn’t need to be serialized to be available in another thread, which greatly increases efficiency. Simply remember to manage references to the data properly in order for it to be garbage-collected once you finish working with it.
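The effect is easy to see even without a second thread; two typed-array views over the same SharedArrayBuffer observe each other's writes instantly:

```javascript
// Two Int32Array views over one SharedArrayBuffer: writes through one view
// are visible through the other - nothing is copied or serialized.
const shared = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const viewA = new Int32Array(shared);
const viewB = new Int32Array(shared);

viewA[0] = 42;
console.log(viewB[0]); // 42 - both views share the same underlying memory
```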

Sharing an array of integers is fine, but what we’re really interested in is sharing objects — the default way of storing information. Unfortunately, there is no SharedObjectBuffer or similar, but we can create a similar structure ourselves.

The transferList argument

transferList can only contain ArrayBuffer and MessagePort. Once they are transferred to the other thread, they can no longer be used in the sending thread; the memory is moved to the other thread and, thus, is unavailable in the sending one.

For the time being, we can’t transfer network sockets by including them in the transferList (which we can do with the child_process module).

Creating a channel for communications

Communication between threads is made through ports, which are instances of the MessagePort class and enable event-based communication.

There are two ways of using ports to communicate between threads. The first is the default and the easier of the two. Within the worker’s code, we import an object called parentPort from the worker_threads module and use the object’s .postMessage() method to send messages to the parent thread.

Here’s an example:

import { parentPort } from 'worker_threads';

const data = {
  // ...
};

parentPort.postMessage(data);

parentPort is an instance of MessagePort that Node.js created for us behind the scenes to enable communication with the parent thread. This way, we can communicate between threads by using parentPort and worker objects.

The second way of communicating between threads is to actually create a MessageChannel on our own and send it to the worker. Here’s how we could create a new MessagePort and share it with our worker:

import path from 'path';
import { Worker, MessageChannel } from 'worker_threads';

const worker = new Worker(path.join(__dirname, 'worker.js'));
const { port1, port2 } = new MessageChannel();

port1.on('message', (message) => {
  console.log('message from worker:', message);
});

worker.postMessage({ port: port2 }, [port2]);

After creating port1 and port2, we set up event listeners on port1 and send port2 to the worker. We have to include it in the transferList for it to be transferred to the worker side.

And now, inside the worker:

import { parentPort, MessagePort } from 'worker_threads';

parentPort.on('message', (data) => {
  const { port }: { port: MessagePort } = data;
  port.postMessage('here is your message!');
});

This way, we use the port that was sent by the parent thread.

Using parentPort is not necessarily a wrong approach, but it’s better to create a new MessagePort with an instance of MessageChannel and then share it with the spawned worker (read: separation of concerns).

Note that in the examples below, I use parentPort to keep things simple.

Two ways of using workers

There are two ways we can use workers. The first is to spawn a worker, execute its code, and send the result to the parent thread. With this approach, each time a new task comes up, we have to create a worker all over again.

The second way is to spawn a worker and set up listeners for the message event. Each time the message is fired, it does the work and sends the result back to the parent thread, which keeps the worker alive for later usage.

Node.js documentation recommends the second approach because of how much effort it takes to actually create a thread worker, which requires creating a virtual machine and parsing and executing the code. This method is also much more efficient than constantly spawning workers.

This approach is called worker pool because we create a pool of workers and keep them waiting, dispatching the message event to do the work when needed.

Here’s an example of a file that contains a worker that is spawned, executed, and then closed:

import { parentPort } from 'worker_threads';

const collection = [];

for (let i = 0; i < 10; i += 1) {
  collection[i] = i;
}

parentPort.postMessage(collection);

After sending the collection to the parent thread, it simply exits.

And here’s an example of a worker that can wait for a long period of time before it is given a task:

import { parentPort } from 'worker_threads';

parentPort.on('message', (data: any) => {
  // doSomething is a placeholder for the actual work
  const result = doSomething(data);
  parentPort.postMessage(result);
});

Useful properties available in the worker_threads module

There are a few properties available inside the worker_threads module:

isMainThread


The property is true when not operating inside a worker thread. If you feel the need, you can include a simple if statement at the start of a worker file to make sure it is only run as a worker.

import { isMainThread } from 'worker_threads';

if (isMainThread) {
  throw new Error('It is not a worker');
}

workerData


Data included in the worker’s constructor by the spawning thread.

const worker = new Worker(path, { workerData });

In the worker thread:

import { workerData } from 'worker_threads';

parentPort


The aforementioned instance of MessagePort used to communicate with the parent thread.

threadId


A unique identifier assigned to the worker.

Now that we know the technical details, let’s implement something and test out our knowledge in practice.

Implementing setTimeout

Our setTimeout is an infinite loop that, as the name implies, times out after a given period. In each iteration, it checks whether the starting date plus the given number of milliseconds has been reached.

import { parentPort, workerData } from 'worker_threads';

const time = Date.now();

while (true) {
  if (time + workerData.time <= Date.now()) {
    // Notify the parent that the time has passed, then exit.
    parentPort.postMessage(null);
    break;
  }
}

This particular implementation spawns a thread, executes its code, and then exits after it’s done.

Let’s try implementing the code that will make use of this worker. First, let’s create a state in which we’ll keep track of the spawned workers:

const timeoutState: { [key: string]: Worker } = {};

And now the function that takes care of creating workers and saving them into the state:

export function setTimeout(callback: (err: any) => any, time: number) {
  const id = uuidv4();
  const worker = runWorker(
    path.join(__dirname, './timeout-worker.js'),
    (err) => {
      if (!timeoutState[id]) {
        return null;
      }
      timeoutState[id] = null;

      if (err) {
        return callback(err);
      }
      return callback(null);
    },
    { time },
  );

  timeoutState[id] = worker;

  return id;
}

First we use the UUID package to create a unique identifier for our worker, then we use the previously defined helper function runWorker to get the worker. We also pass to the worker a callback function to be fired once the worker sends some data. Finally, we save the worker in the state and return the id.

Inside the callback function, we have to check whether the worker still exists in the state, because cancelTimeout() could have removed it. If it does exist, we remove it from the state and invoke the callback passed to the setTimeout function.

The cancelTimeout function uses the .terminate() method to force the worker to quit and removes that worker from the state:

export function cancelTimeout(id: string) {
  if (timeoutState[id]) {
    timeoutState[id].terminate();
    timeoutState[id] = undefined;
    return true;
  }
  return false;
}

If you’re interested, I also implemented setInterval here, but since it has nothing to do with threads (we reuse the code of setTimeout), I have decided not to include the explanation here.

I have created a little test code for the purpose of checking how much this approach differs from the native one. You can review the code here. These are the results:

native setTimeout { ms: 7004, averageCPUCost: 0.1416 }
worker setTimeout { ms: 7046, averageCPUCost: 0.308 }

We can see that there’s a slight delay in our setTimeout — about 40ms — due to the worker being created. The average CPU cost is also a little bit higher, but nothing unbearable (the CPU cost is an average of the CPU usage across the whole duration of the process).

If we could reuse the workers, we would lower the delay and CPU usage, which is why we’ll now take a look at how to implement our own worker pool.

Implementing a worker pool

As mentioned above, a worker pool is a given number of previously created workers sitting and listening for the message event. Once the message event is fired, they do the work and send back the result.

To better illustrate what we’re going to do, here’s how we would create a worker pool of eight thread workers:

const pool = new WorkerPool(path.join(__dirname, './test-worker.js'), 8);

If you are familiar with limiting concurrent operations, then you will see that the logic here is almost the same, just a different use case.

As shown in the code snippet above, we pass to the constructor of WorkerPool the path to the worker and the number of workers to spawn.

export class WorkerPool<T, N> {
  private queue: QueueItem<T, N>[] = [];
  private workersById: { [key: number]: Worker } = {};
  private activeWorkersById: { [key: number]: boolean } = {};

  public constructor(public workerPath: string, public numberOfThreads: number) {
    this.init();
  }
}

Here, we have additional properties like workersById and activeWorkersById, in which we can save existing workers and the IDs of currently running workers, respectively. There’s also queue, in which we can save objects with the following structure:

type QueueCallback<N> = (err: any, result?: N) => void;

interface QueueItem<T, N> {
  callback: QueueCallback<N>;
  getData: () => T;
}

callback is just the default node callback, with error as its first argument and the possible result as the second. getData is the function passed to the worker pool’s .run() method (explained below), which is called once the item starts being processed. The data returned by the getData function will be passed to the worker thread.

Inside the .init() method, we create the workers and save them in the states:

private init() {
  if (this.numberOfThreads < 1) {
    return null;
  }

  for (let i = 0; i < this.numberOfThreads; i += 1) {
    const worker = new Worker(this.workerPath);

    this.workersById[i] = worker;
    this.activeWorkersById[i] = false;
  }
}

To avoid infinite loops, we first ensure the number of threads is at least 1. We then create the given number of workers and save them by their index in the workersById state. We track whether they are currently running in the activeWorkersById state, which at first is false for all of them.

Now we have to implement the aforementioned .run() method to set up a task to run once a worker is available.

public run(getData: () => T) {
  return new Promise<N>((resolve, reject) => {
    const availableWorkerId = this.getInactiveWorkerId();

    const queueItem: QueueItem<T, N> = {
      getData,
      callback: (error, result) => {
        if (error) {
          return reject(error);
        }
        return resolve(result);
      },
    };

    if (availableWorkerId === -1) {
      this.queue.push(queueItem);
      return null;
    }

    this.runWorker(availableWorkerId, queueItem);
  });
}

Inside the function passed to the promise, we first check whether there’s a worker available to process the data by calling the .getInactiveWorkerId():

private getInactiveWorkerId(): number {
  for (let i = 0; i < this.numberOfThreads; i += 1) {
    if (!this.activeWorkersById[i]) {
      return i;
    }
  }

  return -1;
}

Next, we create a queueItem, in which we save the getData function passed to the .run() method as well as the callback. In the callback, we either resolve or reject the promise depending on whether the worker passed an error to the callback.

If the availableWorkerId is -1, then there is no available worker, and we add the queueItem to the queue. If there is an available worker, we call the .runWorker() method to execute the worker.

In the .runWorker() method, we have to set inside the activeWorkersById state that the worker is currently being used; set up event listeners for message and error events (and clean them up afterwards); and, finally, send the data to the worker.

private async runWorker(workerId: number, queueItem: QueueItem<T, N>) {
  const worker = this.workersById[workerId];
  this.activeWorkersById[workerId] = true;

  const messageCallback = (result: N) => {
    queueItem.callback(null, result);
    cleanUp();
  };

  const errorCallback = (error: any) => {
    queueItem.callback(error);
    cleanUp();
  };

  const cleanUp = () => {
    worker.removeAllListeners('message');
    worker.removeAllListeners('error');
    this.activeWorkersById[workerId] = false;

    if (!this.queue.length) {
      return null;
    }

    this.runWorker(workerId, this.queue.shift());
  };

  worker.once('message', messageCallback);
  worker.once('error', errorCallback);
  worker.postMessage(await queueItem.getData());
}

First, by using the passed workerId, we get the worker reference from the workersById state. Then, inside activeWorkersById, we set the [workerId] property to true so we know not to run anything else while the worker is busy.

Next, we create messageCallback and errorCallback to be called on message and error events, respectively, then register said functions to listen for the event and send the data to the worker.

Inside the callbacks, we call the queueItem’s callback, then call the cleanUp function. Inside the cleanUp function, we make sure event listeners are removed since we reuse the same worker many times. If we didn’t remove the listeners, we would have a memory leak; essentially, we would slowly run out of memory.

Inside the activeWorkersById state, we set the [workerId] property to false and check if the queue is empty. If it isn’t, we remove the first item from the queue and call the worker again with a different queueItem.
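The listener-cleanup concern above can be demonstrated in isolation. The sketch below is my own illustration: it uses a plain EventEmitter in place of a Worker (both share the same listener API) to show how listeners accumulate when they are registered per task but never removed.

```typescript
import { EventEmitter } from 'events';

// Leaky pattern: a fresh .on() listener per task, never removed
const worker = new EventEmitter();
for (let task = 0; task < 3; task += 1) {
  worker.on('message', () => {
    /* handle one result */
  });
}
console.log(worker.listenerCount('message')); // 3: listeners accumulate

// Safe pattern: .once() removes its listener after the first event
const clean = new EventEmitter();
clean.once('message', () => {
  /* handle one result */
});
clean.emit('message');
console.log(clean.listenerCount('message')); // 0: removed after firing
```

This is why the pool registers handlers with `.once()` and additionally removes listeners in `cleanUp` before reusing a worker.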

Let’s create a worker that does some calculations after receiving the data in the message event:

import { isMainThread, parentPort } from 'worker_threads';

if (isMainThread) {
  throw new Error("It's not a worker");
}

const doCalcs = (data: any) => {
  const collection = [];

  for (let i = 0; i < 1000000; i += 1) {
    collection[i] = Math.round(Math.random() * 100000);
  }

  return collection.sort((a, b) => {
    if (a > b) {
      return 1;
    }

    return -1;
  });
};

parentPort.on('message', (data: any) => {
  const result = doCalcs(data);

  parentPort.postMessage(result);
});

The worker creates an array of 1 million random numbers and then sorts them. It doesn’t really matter what happens as long as it takes some time to finish.

Here’s an example of a simple usage of the worker pool:

import * as path from 'path';

const pool = new WorkerPool<{ i: number }, number>(path.join(__dirname, './test-worker.js'), 8);

const items = [...new Array(100)].fill(null);

Promise.all(
  items.map(async (_, i) => {
    await pool.run(() => ({ i }));

    console.log('finished', i);
  }),
).then(() => {
  console.log('finished all');
});

We start by creating a pool of eight workers. We then create an array with 100 elements, and for each element, we run a task in the worker pool. First, eight tasks will be executed immediately, and the rest will be put in the queue and gradually executed. By using a worker pool, we don’t have to create a worker each time, which vastly improves efficiency.


The worker_threads module provides a fairly easy way to add multithreading support to our applications. By delegating heavy CPU computations to other threads, we can significantly increase our server's throughput. With official threads support, we can expect more developers and engineers from fields like AI, machine learning, and big data to start using Node.js.

Top 10 NodeJS Frameworks For Developers in 2020

Node.js is an open-source JavaScript runtime. In this tutorial, we share the top 10 Node.js frameworks for developers in 2020: Hapi, Express, Koa, Sails, Meteor, Derby, Total, Adonis, Nest, and LoopBack.

Table of Contents
  • What is Node?
    • Why Node is Special?
  • Architecture of Node
  • NodeJS Frameworks
    • 1. Hapi.js
    • 2. Express.js
    • 3. Koa.js
    • 4. Sails.js
    • 5. Meteor.js
    • 6. Derby.js
    • 7. Total.js
    • 8. Adonis.js
    • 9. Nest.js
    • 10. LoopBack.js
What is Node?

Node, also called Node.js ("js" stands for JavaScript), is an open-source, cross-platform runtime environment for executing JavaScript code outside of the browser. To run JavaScript on backend servers, a virtual machine such as Google's V8 executes the JS; Node is essentially a wrapper around virtual machines like V8, with built-in modules providing rich features through an easy-to-use asynchronous API.

Backend services such as APIs (Application Programming Interfaces) are built with Node. These services power client applications like web apps inside web browsers and mobile apps on mobile devices. Users see and interact with these client apps, which are just the surface; the apps talk to services sitting on a server or in the cloud to store data, send emails, push notifications, kick off workflows, and more.

Node is ideal for highly scalable, data-intensive, real-time backend services that power real-time applications.
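As a minimal illustration of such a backend service, here is a sketch of a tiny JSON API built with only Node's built-in http module; the route path, port, and payload are invented for the example:

```typescript
import * as http from 'http';

// Pure routing logic, kept separate from the server so it is easy to test
function route(url: string | undefined): { status: number; body: string } {
  if (url === '/api/greeting') {
    return { status: 200, body: JSON.stringify({ message: 'hello from Node' }) };
  }
  return { status: 404, body: JSON.stringify({ error: 'not found' }) };
}

const server = http.createServer((req, res) => {
  const { status, body } = route(req.url);
  res.writeHead(status, { 'Content-Type': 'application/json' });
  res.end(body);
});

// Start serving (commented out here so the snippet exits when run as a script):
// server.listen(3000, () => console.log('API listening on port 3000'));
```

Every framework in the list below ultimately builds on this same http primitive, adding routing, middleware, and conventions on top.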

Why Node is Special?

  • Great for prototyping and agile development.
  • Builds super fast and highly scalable services.
  • Supports JavaScript, a widely used language.
  • Cleaner and more consistent codebase.
  • Large ecosystem of open-source libraries.
Architecture of Node

Traditionally, the browser provided the runtime environment for JS code. Every browser has a JS engine that converts JS code to machine code. For instance, Microsoft Edge has Chakra, Firefox has SpiderMonkey, and Chrome has V8.

To execute JS outside of the browser, the fastest of these engines, V8, was embedded into a C++ program; the result is called Node. Node is therefore a runtime environment for JS code.

It contains the JS engine that executes JS code, but also provides certain objects and capabilities for JS code that are not available inside browsers.
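For example, here are a couple of those Node-only capabilities, next to one browser object Node deliberately lacks (all names below are standard Node globals and modules):

```typescript
import * as os from 'os';

console.log(typeof process.version); // 'string': Node's process object, absent in browsers
console.log(os.platform());          // e.g. 'linux': direct OS access, impossible in browser JS
console.log(typeof (globalThis as any).document); // 'undefined': no DOM outside the browser
```

Conversely, browser objects like `document` and `window` simply do not exist in the Node environment.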

NodeJS Frameworks

Let us now look at the popular NodeJs Frameworks:

1. Hapi.js

Hapi.js was introduced by Eran Hammer at Walmart while trying to handle Black Friday traffic. It is a powerful and robust open-source framework for developing JSON APIs; API servers, websites, and HTTP proxy applications are all built with hapi.js. Key features such as input validation, caching, configuration-based functionality, error handling, and logging, together with a well-developed plugin system, make Hapi one of the most preferred frameworks. It is used to build applications and provide technology solutions by several large-scale websites such as PayPal and Disney.


  • Code reusability
  • No external dependencies
  • Security
  • Integrated architecture: a comprehensive authorization and authentication API available in a Node framework.

2. Express.js

Built by TJ Holowaychuk, Express.js is a flexible and minimal Node.js application framework that provides a robust set of features for web and mobile applications and is specifically designed for building single-page, multi-page, and hybrid applications.

Express has no out-of-the-box object-relational mapping engine. Express isn't built around specific components, having "no opinion" regarding what technologies you plug into it. This freedom, coupled with lightning-fast setup and the pure JavaScript environment of Node, makes Express a strong candidate for agile development and rapid prototyping. Express is most popular with startups that want to build a product as quickly as possible and don't have very much legacy code.

The framework has the advantage of continuous updates and reforms of all the core features. It is a minimalist framework that is used to build several mobile applications and APIs.

3. Koa.js

Koa.js is developed and maintained by the creators of Express.js, the most widely used Node.js framework. Koa is an object containing an array of middleware functions that are composed and executed in a stack-like manner upon request, making it easier for web developers to build fast and scalable network applications with JavaScript. It improves interoperability and robustness, and makes writing middleware much more enjoyable.

Many web developers, at present, even use Node.js to write both frontend and backend of a web application in JavaScript. Web developers can further accelerate the development of custom web applications and application programming interfaces (APIs) by using several Node.js frameworks.

4. Sails.js

Sails.js is a model-view-controller (MVC) framework for Node.js that follows the principle of "convention over configuration." It is inspired by the Ruby on Rails web framework and emulates the familiar MVC pattern to build single-page apps, REST APIs, and real-time apps. It makes extensive use of code generators that allow building applications with less hand-written code. The framework is built on top of Socket.io, a JavaScript library for adding real-time, bidirectional, event-based communication to applications, and Express.js, one of the most popular Node.js libraries.

5. Meteor.js

Meteor.js is a platform for building applications using Node.js with any frontend framework, such as Angular, React, or even Blaze, Meteor's own frontend framework. It uses MongoDB as its default database.


  • Zero-configuration build tools providing code splitting and dynamic imports.
  • Faster development, as it comes with real-time features.
  • Nicely integrated frontend and backend.
  • Meteor methods let you define server-side functionality on the server and call those methods directly from the client side, without interacting with a hidden API.
  • Accounts and user authentication are excellent with Meteor.
  • An excellent platform for building apps, since no code separation is required: everything is part of one codebase that communicates smoothly.

6. Derby.js

DerbyJS is an open-source, full-stack framework for building modern realtime web applications. It uses PubSub and is compatible with any database. We can use npm to add features and functionality to a Derby project; third-party libraries are not loaded automatically or included globally, so one has to "require" them as in any Node.js project. Derby is focused on letting users create fast-loading realtime web apps, and it is flexible and extensible. Templates can be rendered in the browser and on the server; in a browser, DerbyJS renders with fast, native DOM methods.


  • Realtime Collaboration
  • Server Rendering
  • Components and data binding
  • Modular

7. Total.js

Total.js is a modular, modern Node.js framework supporting the MVC architecture. Client-side frameworks such as Angular.js, Polymer, Backbone.js, and Bootstrap are fully compatible with it. The framework is extensible and asynchronous, and offers excellent performance and stability. No build tools such as Grunt are required, which makes it easy to use. It also has an embedded NoSQL database and supports arrays and other prototypes.


  • Rapid support and bug fixing
  • Supports RESTful routing
  • Supports video streaming
  • Supports themes
  • Supports workers
  • Supports sitemap
  • Supports WebSocket
  • Supports models, modules, packages, and isomorphic code
  • Supports Image processing via GM or IM
  • Supports generators
  • Supports localization with diff tool and CSV export
  • Supports restrictions and redirections

8. Adonis.js

Adonis is a Node.js framework with a hardcore MVC structure, a design pattern that breaks functionality up into different sections of the application. Adonis uses the Edge template engine, which is really easy to use.


  • It has its own CLI (command-line interface).
  • It is similar to Laravel, so it is easy to learn.
  • Validators check that the data flowing into controllers has the right format and emit messages when errors occur.

9. Nest.js

NestJS is a progressive Node.js framework for building efficient, reliable, and scalable server-side applications; it helps developers create modular, highly scalable, and maintainable backends.

It implements the MVC (Model-View-Controller) pattern and provides extensibility. The outstanding feature of NestJS is its native support for TypeScript, which lets you access optional static type-checking along with strong tooling for large apps and the latest ECMAScript features.
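To show what that optional static type-checking buys you, here is a generic TypeScript sketch; the interface and function are invented for illustration and are not NestJS-specific code:

```typescript
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name} (#${user.id})`;
}

console.log(greet({ id: 1, name: 'Ada' })); // Hello, Ada (#1)

// greet({ id: '1', name: 'Ada' });
// ^ rejected at compile time: type 'string' is not assignable to type 'number'
```

Mistakes like the commented-out call are caught by the compiler rather than surfacing at runtime, which is what makes large server-side codebases easier to maintain.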


  • Extensible: Allows the use of any other libraries because of modular architecture, thus making it truly flexible.
  • Versatile: It offers an adaptable ecosystem that is a fully-fledged backbone for all kinds of server-side applications.
  • Progressive: Brings design patterns and sophisticated solutions to the Node.js world by taking advantage of the latest JavaScript features.

10. LoopBack.js

LoopBack is a Node.js framework with an easy-to-use CLI and a dynamic API explorer. It allows you to create your models based on your schema or dynamic models in the absence of a schema. It is compatible with a good number of REST services and a wide variety of databases, including MySQL, Oracle, MongoDB, Postgres, and more.

It allows a user to build a server API that maps to another server, almost like creating an API that acts as a proxy for another API. It supports native mobile and browser SDKs for clients such as Android/Java, iOS, and browser JavaScript (Angular).


  • Unbelievably extensible
  • GraphQL support

Learning new frameworks can be overwhelming and requires a lot of research before starting. The frameworks mentioned above are among the most popular and offer different features. Which framework do you use or prefer? Do you have more frameworks to share? Comment below!

What is Dotnet?

Introduction to the .NET Framework:
The .NET Framework is a software framework designed and developed by Microsoft. Microsoft began developing it in the late 1990s, originally under the name Next Generation Windows Services (NGWS). The first version released was .NET Framework 1.0.

Features of the .NET Framework include:
  • Language independence
  • Type safety
  • Memory management

The .NET Framework is used to develop three kinds of applications:
  1. Web applications
  2. Windows applications
  3. Mobile applications
The .NET Framework supports around 60 programming languages. Of these, 11 were designed and developed by Microsoft; the rest are supported by the framework but were not developed by Microsoft. Some of the Microsoft-developed languages are:

  1. C#.NET
  6. Windows PowerShell
  7. IronRuby
  8. IronPython
  9. C Omega
  10. ASML (Abstract State Machine Language)

Main components of the .NET Framework

The .NET Framework has four main types of components:
  1. Common Language Runtime (CLR)
  2. Framework Class Library (FCL)
  3. Core languages (WinForms, ASP.NET, ADO.NET)
  4. Other modules (WCF, WPF, WF, CardSpace, LINQ, Entity Framework, Parallel LINQ, etc.)

CLR: CLR stands for Common Language Runtime. It is the virtual machine component of the .NET Framework: an execution engine that converts a given program into native code. The CLR acts as an interface between the framework and the operating system, and it provides services such as type safety, memory management, thread management, remoting, and robustness. The CLR is responsible for running the programs of any .NET language and also helps manage code: code that runs under the CLR is called managed code, and code that runs outside it is called unmanaged code.

FCL: FCL stands for Framework Class Library. The FCL is a collection of standard, reusable class libraries and object-oriented methods used to develop applications. To install the .NET Framework, both the CLR and the FCL must be installed on the system.

The following diagram shows the structure of the .NET Framework.

Is .NET platform-independent or platform-dependent?
A platform is essentially the combination of operating system architecture and CPU architecture. Platform-dependent means a programming language runs only on a specific operating system and does not support others.
The .NET Framework runs only on Windows-based operating systems, so .NET is platform-dependent.
However, .NET applications can be made platform-independent by using the Mono framework, which lets them run on other operating systems. Mono is a third-party implementation of .NET that was originally sponsored by Novell; the project was later taken over by Xamarin, now part of Microsoft.
Release history of the .NET Framework with the corresponding Visual Studio and Windows versions:
  • Visual Studio .NET
  • Visual Studio .NET 2003
  • Visual Studio 2005
  • Expression Blend
  • Visual Studio 2008 (Windows 7, 8, 8.1, 10)
  • Visual Studio 2010
  • Visual Studio 2012
  • Visual Studio 2013
  • Visual Studio 2015 (Windows 10 v1507)
  • Visual Studio 2015 Update 1 (Windows 10 v1511)
Some important points:
To develop .NET applications, Microsoft Visual Studio must be installed, and Visual Studio in turn requires the .NET Framework to be installed on the system.
In some older versions of Windows, such as XP SP1, SP2, and SP3, the .NET Framework shipped with the installation media.
What is .NET used for?
.NET is a feature-rich framework used to develop different next-generation applications, including:
  • Business functions
  • Interoperable applications
  • Multi-tiered software architectures
  • Mobile apps
Advantages of .NET:
  • Object-oriented: Code written in the .NET Framework is organized entirely as objects.
  • Caching: .NET's caching is fast and easy to use.
  • Easy maintenance: Code written in the .NET Framework is simple and easily maintainable, because source code and HTML can be combined together.
  • Time-saving: .NET removes a large part of the coding effort, so we save time when building .NET applications.
  • Simplicity: Performing tasks in the .NET Framework is simple and easy to understand.
  • Feature-rich: The .NET Framework provides a range of features that developers can use to create powerful applications.
  • Consistency: The framework manages and monitors all of its processes. If one process stops, another takes over, which keeps the application consistent.
  • Monitoring: .NET provides automatic monitoring, which makes it easy to notice problems such as infinite loops and memory leaks.
Disadvantages of .NET:
  • Limited object-relational (OR) support: OR support is found to be limited at times because it is available through Entity Framework only.
  • Slower than native code: Managed code running on the .NET Framework is slower than native code.
  • Vendor lock-in: Future development depends on Microsoft alone.
  • Expensive: In some cases, .NET applications can turn out to be very expensive.
Reasons to learn .NET:
Here are a few reasons to learn .NET for your career:
  • Available resources
  • Large community
  • Visual Studio
  • Multiple server platforms
  • Job opportunities
These are a few details on .NET; I hope you now have a basic idea of what it is.