How to Become a Better Node.js Developer in 2020

In this Node.js tutorial, you’ll see 20 ways to become a better Node.js developer in 2020.

1. Use TypeScript features thoughtfully
Big things are happening in the testing field. According to the recent State of JS survey, developers’ satisfaction with testing tools has increased more than in any other domain. A revolution is happening among test runners as well: the good old veterans Mocha and Jasmine are losing the top spots to the sophisticated new kids in town, Jest and Ava. Thanks to the modern approach they bring, it’s possible to test more, cover more ground and find more bugs. Why?
Some of the traditional tools were designed for CI or for occasional test execution. In times when teams deploy daily, discovering a bug after 4 hours is not good enough. Modern tooling allows running tests, including component tests with a DB, constantly, even during coding. This approach allows for testing more layers and more use cases earlier; it’s called ‘shift left’ (read more about it below).
The long-awaited ES6 modules were unflagged recently, so you might be tempted to use them right away. They bring great opportunities to Node land: a modern syntax for importing modules, compatibility with frontend syntax (important for package maintainers that need to support both Node and browser runtimes) and asynchronous module resolution that opens the door to top-level async/await and better tree shaking. Cool. However, there are some implications one must be aware of before jumping on the ESM wagon: not all the supporting features are implemented yet. For example, it’s still unclear how test-double libraries like Sinon and Jest can ‘mock’ such modules, so your wagon might break down on the side of the road with smoke.
Given all of these considerations, what’s your strategy? Jump straight into the ESM water and work around the issues? Use ESM with Babel/TypeScript as a safety net? Or keep on with good old CommonJS ‘require’ but avoid incompatible syntax like __filename, __dirname, JSON resolution and others? There are no strict answers here, but at least we can strive to ask the right questions.
Great techniques exist in different paradigms that you can embrace without changing your architecture. Also, most companies have a variety of app/microservice types: data-driven search and reporting apps, others based on heavy logic, and some that are just streams of data. Why apply the same treatment to these different requirements? If all you have is a screwdriver, every challenge starts looking like a screw.
Ensure you’re familiar with layered architectures like n-tier, DDD and Hexagonal/Onion/Clean. They look very different but their primary principle is similar: isolating the domain (i.e. core data schema and business logic) from the surrounding tech (e.g. APIs, DB). Also introduce yourself to streaming-style architectures, which are seeing a great increase in popularity. Then, spend some time with data-driven architectures, which are best implemented nowadays with GraphQL frameworks.
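The shared principle of those layered styles can be sketched in a few lines of plain Node. In the hexagonal flavor, the domain logic below knows nothing about Express or Postgres; it only sees a ‘port’ (a repository interface) whose concrete adapter is injected from the outside. All names and the order shape are hypothetical:

```javascript
// Domain service: pure business rules, testable without any DB or HTTP server
function makeOrderService(ordersRepository) {
  return {
    async placeOrder(order) {
      if (!order.items || order.items.length === 0) {
        throw new Error('An order must contain at least one item');
      }
      order.total = order.items.reduce((sum, item) => sum + item.price, 0);
      return ordersRepository.save(order); // delegate persistence to the port
    },
  };
}

// An in-memory adapter; a real app would plug in a Postgres/Mongo adapter instead
const inMemoryRepository = {
  orders: [],
  async save(order) {
    this.orders.push(order);
    return order;
  },
};

const service = makeOrderService(inMemoryRepository);
service
  .placeOrder({ items: [{ price: 10 }, { price: 5 }] })
  .then((saved) => console.log('order total:', saved.total)); // order total: 15
```

Swapping the adapter is how the same domain code runs against a real DB in production and an in-memory one in tests.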
Speaking of GraphQL, it’s interesting how some of its flavors disrupt the traditional separation between the API and data-access layers: instead of repeating similar code and schemas twice, these frameworks allow you to declaratively define the entire app with one schema. This approach can greatly boost time to market for data-driven apps that are not likely to embed complex logic.
After years of Express life, we need a little Nest(.js). With such amazing growth, you simply can’t ignore it. I would argue that Nest.js is the most remarkable thing that happened to Node.js in 2018/2019: for the first time, we have a full-fledged consensus framework like Java’s Spring and Python’s Django. Until 2018, teams without strong design skills had to architect their backends themselves, spend significant time on plumbing and reinvent the wheel. Being one who engages with ~15 projects every year, believe me, I’ve seen so many types of wheels. Too many. My friend and colleague Gil Tayar funnily adapts the Anna Karenina principle to software: ‘All happy projects are alike; each unhappy project is unhappy in its own way.’
Unlike Express & co., Nest.js brings a full-fledged, batteries-included framework (e.g. it handles the data-access layer, validation, etc.). Its design style is highly ‘inspired’ by Angular: opinionated, TypeScript-based and embodying heavy modularization constructs. That said, it still offers great flexibility in choosing its sub-frameworks. Given all these goodies, I have no doubt that teams taking their first steps with Node.js will move way faster with Nest.js than with a minimalist Express approach.
For all its greatness, it’s not flawless. One may wonder: does the heavily modularized Angular approach, which was designed to ease the pain of huge frontend codebases, suit backend needs? Aren’t we jumping too far from minimal Express to a huge and highly opinionated framework? Are all of these heavy modularity features needed in a world of small microservices? Or, equivalently, isn’t it promoting monoliths (“I can easily handle 30,000 LOC in my code base”)?
At least we now have an option to choose from.
Have you heard about the epic Cloudflare downtime where a developer who wanted to experiment with some feature in production rendered a big part of the internet down? Nothing will boost your confidence and speed more than knowing that your deployment engine catches errors before your users do. A bunch of techniques provide this magic. Each achieves it in its own way, but the overall idea is the same: serve the next version to a limited group of users and measure whether it seems stable. Going with this approach, we’re actually separating the deployment phase from the release phase. Some say it’s as important as testing; I suggest that anything that measures our pipeline is a TEST.
What are these techniques? Canary is the most well-known and simple. It tunes the routing so the next version is deployed and served to a group of users, starting with users who are more likely to tolerate bugs (e.g. office employees, non-paying customers); as confidence grows, it is served to more and more users. This might sound complex, but frameworks such as Istio for K8S and AWS serverless handle most of the heavy lifting. The next technique, feature flagging, is more powerful but also demands getting your hands dirty. It basically suggests wrapping feature code with condition criteria that tell which users should benefit from the new feature. Usually, it also comes with a dashboard for product managers to turn features on and off. This allows non-technical users to be part of the party and also supports finer-grained, advanced criteria. For example, using flags you may activate some experimental feature only for users from a specific city, on a specific machine instance, with a specific browser. One last super-interesting technique to look for is traffic shadowing, which I’ll leave you to read about.
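The core of feature flagging fits in a few lines. A minimal, hypothetical evaluator is sketched below; real products (LaunchDarkly, Unleash and friends) add the dashboards, percentage rollouts and audit trails on top. The flag name and user fields are made up:

```javascript
// Each flag maps to a rule deciding which users get the experimental code path
const flags = {
  newCheckout: (user) => user.city === 'Berlin' && user.browser === 'chrome',
};

function isEnabled(flagName, user) {
  const rule = flags[flagName];
  return typeof rule === 'function' ? rule(user) : false; // unknown flags are off
}

console.log(isEnabled('newCheckout', { city: 'Berlin', browser: 'chrome' })); // true
console.log(isEnabled('newCheckout', { city: 'Paris', browser: 'chrome' }));  // false
```

In application code the feature is then wrapped: `if (isEnabled('newCheckout', user)) { /* new path */ }`, which is exactly what lets product managers release without deploying.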
The value of these techniques is immense: unlike code testing, which demands great effort all the time, tuning your routers for canary deployment happens only once(!), and it also ‘tests’ your code under a realistic production environment. Learn about this fascinating world and plumb one of these techniques into your pipeline.
The shift-left concept puts forward a sensible claim: the later a bug is discovered, the pricier it is to fix. Consider a case where you discover a performance issue late, on a staging environment; after a short analysis it turns out that the fundamental DB data model must be changed, which is likely to incur significant code changes. Some researchers claim it might cost up to 640 times more when a bug is discovered too late, in production. In plain words, the traditional model where a developer focuses on unit testing only and then, weeks later, QA performs realistic E2E and advanced tests is slow and pricey. This well-known diagram brings the point home safely.
Test more things sooner; discover bugs early. How can we translate this idea into tangible development tasks? Run a diversified set of tests as part of every commit and even during coding: component/API tests with a real (in-memory?) DB just like you run unit tests, tests with realistic production input using dedicated property-based libraries, security scanners, performance load tests and more. See below a list with dozens of tests one can run across the pipeline.
‘Testing in production’ is a mega-trend in the testing community. It’s based on an idea called ‘shift right’, which suggests that traditional tests on development and staging environments are less realistic and probably won’t prevent enough issues. Modern production has so many moving parts and parties that many issues are likely to occur, or get discovered, only in production. Consequently, many tests must be conducted on the production environment itself, for monitoring purposes but also to better test future versions (e.g. serving a small portion of traffic to the next version). The most straightforward production test is monitoring, but many other advanced techniques exist, like traffic shadowing, A/B tests (as a technical measure), load testing, tap-compare, soak testing and others.
So should we shift left or right? Both. A modern approach to software delivery is not just thinking about tests but about a pipeline. Given the many phases that exist until the next version is served to the user (planning, development, deployment, release), each one is another opportunity to notice issues, stop, or build accumulating confidence. Code testing is a significant step in the pipeline, but plugging other tests into the pipeline will provide more confidence.
The most popular Node.js interview question might vanish soon: ‘Is Node.js really single-threaded?’. As of version 11.7, we welcome a new family member in the async toolbox: worker threads. This tool, unlike any other, can address a very painful blind spot in Node. If 100% of the requests are CPU-intensive, no web framework, including Go and Java ones, can help tame this beast. However, a more common workload is when only 1–10% of requests grab the CPU for a long time; most non-Node frameworks prevent this automatically (a thread per request). Node.js couldn’t: when serving 1,000 req/sec, it’s enough for 1 to be CPU-intensive for all the other 999 to suffer. There was no remedy for this pain; child processes, for example, are too slow to spin up and can’t share memory. Good news: this is now tamable. Worker threads can spin up a dedicated event loop so the main one remains snappy.
Now for some bad news: worker threads are not lightweight threads that one spawns in no time on demand. They actually duplicate the entire engine, so they can be quite slow to start, and until they are up, CPU-bound requests will suffer additional delay. For this reason, consider a thread pool (link below).
The DevOps storm means different things to different teams. For some, it is about making Dev also perform Ops work (e.g. being on-call); for others it’s more of a recommendation to plan early for production. At a minimum, developers are expected to understand the production runtime, as it highly affects coding decisions and patterns, mostly the decisions that sit at the intersection of Dev and Ops.
A few examples: it’s a well-known practice to ensure all outgoing requests are retried upon failure (see the retry and circuit-breaker patterns); this can be done at the infrastructure level using K8S Istio, or in the code itself using dedicated packages. Which one would you prefer, and why? Interesting choice, isn’t it? Let’s discuss other scenarios: K8S might kill and relocate pods, and when it sends a kill signal, the web server might be handling 2,000 users. If the pod just crashes, they will quickly become 2,000 angry users unless you implement a graceful and thoughtful shutdown. What is the grace period? Well, this requires some Kubernetes learning, right? Sometimes the kill signals from K8S won’t even reach your code if you use the ‘npm start’ command. Why? This requires some understanding of how Docker processes and signals are managed (the first link below answers this question). One other interesting challenge is settling two contradicting things: how can test tools run within Docker containers during the pipeline but then be removed before production? One last interesting example is configuring the allowed memory per container: given that V8 recently stopped limiting the heap size, which will now keep growing as needed, this might interfere with K8S resource limits (a common best practice). Make a decision and align the Node.js side with the K8S side.
All of these challenges call you to dig deeper into the fascinating world of Docker clusters (or serverless, if you wish).
If you can’t think like an attacker, you can’t think like a defender. In 2020, you shouldn’t outsource the defense work to third-party companies or rely solely on static security scanners: the number of attack types is overwhelming (the development pipeline and npm are the latest trends). Developer training is the key: bake security DNA into yourself and your team and add a security touch to everything. A useful way to deepen your security understanding is to go through examples of vulnerable code and attack vectors. See below a few example links that might greatly help.
Monitoring is a crucial setup that should be well hardened, and it demands cooperation between Ops and Dev. No monitoring solution can be perfect without developers’ involvement. Two popular monitoring systems, ELK and Prometheus, sound like sysadmin toys, but in fact developers can learn a lot by configuring them. In any case, the mandatory activity for developers is being involved in exposing the metrics.
Ops folks know little about the event loop and how to monitor it (an npm package does this); only you can propose and implement this important metric. Only developers can suggest the right V8 monitoring limits and alerts. Developers might even write automated tests to ensure that when application errors are thrown, the right metrics are incremented. Another valuable activity is custom applicative metrics: coding some measurement of user activities can be very effective in tracking production anomalies. Consider an e-commerce app: if the number of purchases is tracked and it suddenly drops dramatically in production, this is likely to imply some underlying issue.
Obviously, it’s better to delve into the details and read the manuals. In no way am I suggesting careless use of technologies. What I do propose, however, is that becoming familiar with the high level is better than knowing nothing, and it can build the motivation to keep exploring. Should I have used ML taxonomy wrongly above, please understand: I’ve just started my ML journey.
Your sleep quality and stress level matter far, far more than the languages you use or the practices you follow. Nothing else comes close: not type systems, not TDD, not formal methods, not ANYTHING.
It’s packed with examples and research; I urge you to visit it and dig into those pearls of wisdom.
Did you remember to wrap all your Express routes with try/catch, then call next() with the error, and finally return an appropriate status to the user? If not, your process may crash without a trace. If yes, you just spent significant time on plumbing straightforward pieces that add no value to your business. Isn’t this what frameworks are here for? Though Fastify and Koa won’t handle all the error paths for you (e.g. uncaught exceptions), they address this with a modern approach that requires less effort, and they both natively support async routes. These are just a few examples where a modern and maintained framework could do better for you.
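That Express plumbing can at least be paid for once. Express 4 does not forward rejected promises from async handlers to its error middleware, so many teams write a tiny wrapper like the one below instead of try/catch in every route. It’s demonstrated here with stub req/res/next objects, no Express required:

```javascript
// Wrap an async route so any rejection is routed to next(), i.e. to the
// error-handling middleware, instead of becoming an unhandled rejection
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// A route that throws; without the wrapper this rejection would be lost
const failingRoute = asyncHandler(async () => {
  throw new Error('DB is down');
});

failingRoute({}, {}, (err) => console.log('error middleware got:', err.message));
```

Fastify and Koa make this wrapper unnecessary, which is exactly the ‘less effort’ point above.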
The last commit to the Express project was pushed some 6 months ago… Since then, Fastify and Koa have seen dozens of builds, and they keep improving. It’s frankly not appropriate for the Node.js ecosystem to rely so heavily on a library that doesn’t keep in step with the times.
That said, most of the community’s tools and docs rely on Express. I hope to see community leaders, course makers and bloggers creating more content on its modern alternatives.
I published a similar post in 2019, and many of its bullets remain important in 2020. Here are some specific recommendations:
Our journey toward quality and safe deploys usually centers around testing. The caveat of code-based testing (e.g. unit tests) is its price: every new piece of functionality demands writing more testing code. This usually pays off, but it’s still painful. Some tools, like linters, scanners and static analyzers, offer a different deal: for a one-time setup they will discover bugs forever. This is a great opportunity to lower the price of building confidence, almost a free lunch. The list of tools grows every year, so keep following and enrich your CI; below I’ve included a few examples of modern tools.
For example, they let you discover extraneous packages that aren’t declared in package.json, and much more.
We often surround ourselves with favorite technologies and ignore alternatives based on prejudice. Here are some typical sentences I hear in my network: ‘Functional programming is not practical’, ‘REST API is dead’, ‘TDD is not for me’, ‘ORMs are evil’, ‘TypeScript is too verbose’.
These are false dichotomies: it’s not a binary question. All these paradigms embody many different ideas, and still we tend to adopt all or ignore all. For example, do Functional Programming’s currying and monads feel weird? That’s fair; consider other, more mainstream FP ideas like pure functions. Ignoring TypeScript because OOP is not your style? Maybe use only its type system and stick to vanilla JS objects and functions. Cherry-pick ideas and features, not a whole package.
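Cherry-picking in practice: one mainstream FP idea, pure functions, applied in plain JS without touching currying or monads. The cart shape and function name are made up for the sketch:

```javascript
// A pure function: returns a new cart instead of mutating the input,
// which makes it trivially unit-testable and safe to reuse
const applyDiscount = (cart, percent) => ({
  ...cart,
  items: cart.items.map((item) => ({
    ...item,
    price: item.price * (1 - percent / 100),
  })),
});

const cart = { items: [{ price: 100 }] };
const discounted = applyDiscount(cart, 10);
console.log(discounted.items[0].price); // 90
console.log(cart.items[0].price);       // 100: the original was not mutated
```

No type classes, no category theory, and yet most of the practical benefit of FP is already there.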
By no means am I suggesting that this is the best stack. It is, however, a diversified stack that mixes and matches multiple ideas from many paradigms. Obviously, assemble your own stack; just don’t be afraid of tiptoeing into unexplored territory and getting inspiration from many sources of wisdom.
p.s. I’m not advocating becoming a jack of too many trades. Actually mastering some technologies is important. My point is to be pragmatic and open to great ideas. Don’t be “that screwdriver guy”; enrich your mindset, diversify your toolbox.

20. Get inspiration from these 5 great starter projects
Starter (boilerplate) projects are a genuine source of knowledge: just skim through the code for 10–20 minutes and get many ideas to embrace. I’ve packed below some quality starters; each brings a unique approach, so you can enrich your mindset with new paradigms.
Internationalization is a difficult undertaking, but using the Intl API is an easy way to get started; it’s great to see this new API in the JS language and available for use. Soon, you’ll be able to use it in the browser with confidence, as modern browsers support the major Intl features. Have a look at the browser compatibility charts to see which browsers and versions of Node are supported.
Use Intl.RelativeTimeFormat for language-sensitive relative time formatting.
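A quick taste of Intl.RelativeTimeFormat in Node. The non-English output assumes a full-ICU Node build (the default since Node 13; earlier versions need the full-icu package mentioned below):

```javascript
// Language-sensitive relative time with zero dependencies: Intl ships with V8
const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
console.log(rtf.format(-1, 'day')); // "yesterday"
console.log(rtf.format(2, 'hour')); // "in 2 hours"

// Switching locales is just a constructor argument (needs full ICU data)
const rtfEs = new Intl.RelativeTimeFormat('es');
console.log(rtfEs.format(-1, 'day')); // "hace 1 día"
```

The `numeric: 'auto'` option is what turns `-1 day` into the idiomatic "yesterday" rather than "1 day ago".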
Full ICU NPM package:
Speaker: Mx Kassian Wren | DevRel, Cloudflare.
WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.
Blazor isn’t the only WebAssembly-powered experiment that’s out of the gate. Consider Pyodide, which aims to put Python in the browser, complete with an advanced math toolkit for data analysis.
And WebAssembly is still evolving rapidly. Its current implementation is a minimum viable product: just enough to be useful in some important scenarios, but not an all-purpose approach to developing on the web. As WebAssembly is adopted, it will improve. For example, if platforms like Blazor catch on, WebAssembly is likely to add support for direct DOM access. Browser makers are already planning to add garbage collection and multithreading, so runtimes don’t need to implement these details themselves.