Will WebAssembly replace JavaScript in the future?

JavaScript is flourishing. But thanks to WebAssembly, its death may be just a matter of time.

Some programming languages are loved. Others are only tolerated. For many programmers, JavaScript is an example of the latter — a language that every front-end developer needs to understand but no one needs to like.

Ten years ago, it wasn’t obvious that JavaScript was set to rule the world. Other platforms, like Java applets, Flash, and Silverlight, were also in the running. All three needed a browser plug-in to do their work, and all three replaced HTML with a different approach to building user interfaces. This approach let them get far ahead of JavaScript in features — for example, adding video, animation, and drawing long before we had the <video> element, the CSS Animations specification, or the HTML canvas. But it also spelled their downfall. When mobile browsing exploded and HTML shifted to embrace it, these plug-in platforms became obsolete.

Here’s another irony. At the same time that JavaScript was conquering the world, a tiny seed was planted that may, sometime in the future, spell the end of JavaScript. That seed was an experimental technology called asm.js.

But before we get to that, let’s take a step back to survey the situation today.

Transpiling: The current approach

As long as we’ve had JavaScript, developers have been trying to get around it. One early approach was to use plug-ins to take the code out of the browser. (That failed.) Another idea was to make development tools that could convert code — in other words, take code written in another more respectable language, and transform it into JavaScript. That way developers could get the run-everywhere support they wanted but still keep their hands clean.

The process of converting one language to another is called transpiling, and it has some obvious stumbling blocks. High-level languages have different features, syntax, and idioms, and you can’t always map a line in one to an equivalent construct in another. And even when you can, danger lurks. What happens if the community stops developing your favorite transpiler? Or if the transpiler introduces bugs of its own? What if you want to plug into a JavaScript framework like Angular, React, or Vue? And how do you collaborate on a team if you don’t speak the same language?

As in many cases with coding, the tool is only as good as the community behind it.

Today, transpilers are common, but they’re almost always used in just one way — to handle backward compatibility.

Developers write the most modern JavaScript possible, and then use a transpiler like Babel to convert their code into the equivalent (but less elegant) old-school JavaScript that works everywhere. Or — even better — they use TypeScript (a modernized flavor of JavaScript that adds features like static typing, generics, and non-nullable types) and then transpile that into JavaScript. Either way, you’re still playing in the walled garden of JavaScript.
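As a rough illustration, here’s a line of modern JavaScript a developer might write today:

const greet = (user) => `Hello, ${user?.name ?? 'stranger'}`;

And a simplified sketch of the old-school equivalent a transpiler like Babel might emit for older browsers (real output also injects small helper functions, so treat this as an approximation):

var greet = function (user) {
  var name = user == null ? undefined : user.name;
  return 'Hello, ' + (name != null ? name : 'stranger');
};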

Asm.js: A stepping stone

The first glimmer of a new possibility came from asm.js, a quirky experiment cooked up by the developers at Mozilla in 2013. They were looking for a way to run high-performance code inside a browser. But unlike the plug-ins, asm.js didn’t try to work around the browser. Instead, it aimed to tunnel straight through the JavaScript virtual machine.

At its heart, asm.js is a terse, optimized JavaScript syntax. It runs faster than normal JavaScript because it avoids the slow dynamic parts of the language. But web browsers that recognize it can also apply other optimizations, boosting performance much more dramatically. In other words, asm.js follows the golden rule — don’t break the web — while offering a pathway to future improvements. The Firefox team used asm.js, along with a transpiling tool called Emscripten, to take real-time 3D games built in C++ and put them inside a web browser, running on nothing more than JavaScript and raw ambition.

The Unreal engine running on asm.js

The most important part of asm.js was the way it forced developers to rethink the role of JavaScript. Asm.js code is JavaScript, but it’s not meant for coders to read or write by hand. Instead, asm.js code is meant to be built by an automated process (a transpiler) and fed straight to the browser. JavaScript is the medium but not the message.
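To get a feel for that, here’s a hand-written toy module in the asm.js style (real Emscripten output is far longer and denser; this is only a sketch of the idiom):

function AsmModule(stdlib, foreign, heap) {
  "use asm";                  // opts this module into the asm.js subset

  function add(x, y) {
    x = x | 0;                // the coercion doubles as a type annotation: 32-bit integer
    y = y | 0;
    return (x + y) | 0;       // the result is a 32-bit integer too
  }

  return { add: add };
}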

WebAssembly: A new technology

Although the asm.js experiment produced a few dazzling demos, it was largely ignored by working developers. To them, it was just another interesting piece of over-the-horizon technology. But that changed with the creation of WebAssembly.

WebAssembly is both the successor to asm.js, and a significantly different technology. It’s a compact, binary format for code. Like asm.js, WebAssembly code is fed into the JavaScript execution environment. It gets the same sandbox and the same runtime environment. Also like asm.js, WebAssembly is compiled in a way that makes further efficiencies possible. But now these efficiencies are more dramatic than before, and the browser can skip the JavaScript parsing stage altogether. For an ordinary bit of logic (say, a time-consuming calculation), WebAssembly is far faster than regular JavaScript and nearly as fast as natively compiled code.

A simplified look at the WebAssembly processing pipeline

If you’re curious what WASM looks like, imagine you have a C function like this:

// A recursive factorial, written in ordinary C
int factorial(int n) {
  if (n == 0)
    return 1;
  else
    return n * factorial(n - 1);
}

It would compile to WebAssembly that, written out in the human-readable text format, looks like this:

get_local 0          ;; push n
i64.eqz              ;; is n equal to 0?
if (result i64)
    i64.const 1      ;; base case: 0! is 1
else
    get_local 0      ;; n (left operand of the final multiply)
    get_local 0      ;; n again, used to build the recursive argument
    i64.const 1
    i64.sub          ;; n - 1
    call 0           ;; factorial(n - 1)
    i64.mul          ;; n * factorial(n - 1)
end

When it’s sent over the wire, WASM code is further condensed into a binary encoding.

WebAssembly is designed to be a target for compilers. You’ll never write it by hand. (But you could, if you want to take a deep-dive exploration.)
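Loading a compiled module from JavaScript is straightforward. Here’s a minimal sketch, assuming the factorial above was compiled into a file named factorial.wasm and exported under the name factorial (the i64 signature surfaces in JavaScript as a BigInt, in browsers that support that integration):

async function runFactorial() {
  // Stream, compile, and instantiate the module in one step.
  const { instance } = await WebAssembly.instantiateStreaming(fetch('factorial.wasm'));

  // Call the exported function; 64-bit integers cross the boundary as BigInt values.
  return instance.exports.factorial(10n);   // 3628800n
}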

WebAssembly first appeared in 2015. Today, it’s fully supported by the big four browsers (Chrome, Edge, Safari, and Firefox) on desktop and mobile. It isn’t supported in Internet Explorer, although backward compatibility is possible by converting the WebAssembly code to asm.js. (Performance will suffer. Please let IE fade into obscurity!)

WebAssembly and the future of web development

Out of the box, WebAssembly gives developers a way to write optimized code routines, usually in C++. This is a powerful ability, but it has a relatively narrow scope. It’s useful if you need to improve the performance of complex calculations. (For example, fastq.bio used WebAssembly to speed up their DNA sequencing calculations.) It’s also important if you’re porting high-performance games or writing an emulator that runs inside your browser. If this were all there was to WebAssembly, it wouldn’t be nearly as exciting — and it wouldn’t have any hope of displacing JavaScript. But WebAssembly also opens a narrow pathway for framework developers to squeeze their platforms into the JavaScript environment.

Here’s where things take an interesting turn. WebAssembly can’t sidestep JavaScript, because it’s locked into the JavaScript runtime environment. In fact, WebAssembly needs to run alongside at least some ordinary JavaScript code, because it doesn’t have direct access to the page. That means it can’t manipulate the DOM or receive events without going through a layer of JavaScript.
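In practice, that layer is a small amount of JavaScript “glue.” A minimal sketch (the file name app.wasm, the import updateHeading, and the export run are all hypothetical):

// Any function handed to the module at instantiation time becomes its only
// window onto the page.
const imports = {
  env: {
    updateHeading: (value) => {
      document.querySelector('h1').textContent = 'Result: ' + value;
    },
  },
};

WebAssembly.instantiateStreaming(fetch('app.wasm'), imports)
  .then(({ instance }) => instance.exports.run());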

This sounds like a deal-breaking limitation. But clever developers have found ways to smuggle their runtimes in through WebAssembly. For example, Microsoft’s Blazor framework downloads a miniature .NET runtime as a compiled WASM file. This runtime deals with the JavaScript interop, and it provides basic services (like garbage collection) and higher-level features (layout, routing, and user interface widgets). In other words, Blazor uses a virtual machine that lives inside another virtual machine, which is either an Inception-level paradox or a clever way to create a non-JavaScript application framework that runs in the browser.

Blazor isn’t the only WebAssembly-powered experiment that’s out of the gate. Consider Pyodide, which aims to put Python in the browser, complete with an advanced math toolkit for data analysis.

This is the future. WebAssembly, which started out as a compilation target for C++, Rust, and not much more, is quickly being exploited for far more ambitious experiments. Soon it will allow non-JavaScript frameworks to compete with JavaScript-based standbys like Angular, React, and Vue.

And WebAssembly is still evolving rapidly. Its current implementation is a minimum viable product — just enough to be useful in some important scenarios, but not an all-purpose approach to developing on the web. As WebAssembly is adopted, it will improve. For example, if platforms like Blazor catch on, WebAssembly is likely to add support for direct DOM access. Browser makers are already planning to add garbage collection and multithreading, so runtimes don’t need to implement these details themselves.

If this path of evolution seems long and doubtful, consider the lessons of JavaScript. First, we saw that if something is possible in JavaScript, someone will do it. Then, we learned that if something is done often enough, browsers will make it work better. And so on. If WebAssembly is popular, it will feed a virtuous cycle of enhancement that could easily overtake the native advantages of JavaScript.

It’s often said that WebAssembly was not built to replace JavaScript. But that’s true of every revolutionary platform. JavaScript was not designed to replace browser-embedded Java. Web applications were not designed to replace desktop applications. But once they could, they did.


JavaScript developers, should you be using Web Workers?

Do you think JavaScript developers should be making more use of Web Workers to shift execution off of the main thread?

Originally published by David Gilbertson at https://medium.com

So, Web Workers. Those wonderful little critters that allow us to execute JavaScript off the main thread.

Also known as “no, you’re thinking of Service Workers”.


Before I get into the meat of the article, please sit for a lesson in how computers work:

[Animated diagram: a single CPU timeline that turns red while the CPU is busy and green while it’s free]

Understood? Good.

For the red/green colourblind, let me explain. While a CPU is doing one thing, it can’t be doing another thing, which means you can’t sort a big array while a user scrolls the screen.

This is bad, if you have a big array and users with fingers.

Enter, Web Workers. These split open the atomic concept of a ‘CPU’ and allow us to think in terms of threads. We can use one thread to handle user-facing work like touch events and rendering the UI, and different threads to carry out all other work.

Check that out, the main thread is green the whole way through, ready to receive and respond to the gentle caress of a user.
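In code, that split looks something like the sketch below (bigArray and render() stand in for your own data and UI code, and the file name is made up):

// main.js: hand the heavy work to a worker and keep the main thread free
const worker = new Worker('sort-worker.js');
worker.postMessage(bigArray);               // send the data off the main thread
worker.onmessage = (e) => render(e.data);   // render the result when it comes back

// sort-worker.js: the heavy lifting happens here, off the main thread
self.onmessage = (e) => {
  const sorted = e.data.slice().sort((a, b) => a - b);
  self.postMessage(sorted);
};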

You’re excited (I can tell): if we only have UI code on the main thread and all other code can go in a worker, things are going to be amazing (said the way Oprah would say it).

But cool your jets for just a moment, because websites are mostly about the UI — it’s why we have screens. And a lot of a user’s interactions with your site will be tapping on the screen, waiting for a response, reading, tapping, looking, reading, and so on.

So we can’t just say “here’s some JS that takes 20ms to run, chuck it on a thread”, we must think about where that execution time exists in the user’s world of tap, read, look, read, tap…

I like to boil this down to one specific question:

Is the user waiting anyway?

Imagine we have created some sort of git-repository-hosting website that shows all sorts of things about a repository. We have a cool feature called ‘issues’. A user can even click an ‘issues’ tab in our website to see a list of all issues relating to the repository. Groundbreaking!

When our users click this issues tab, the site is going to fetch the issue data, process it in some way — perhaps sort, or format dates, or work out which icon to show — then render the UI.

Inside the user’s computer, that’ll look exactly like this.

Look at that processing stage, locking up the main thread even though it has nothing to do with the UI! That’s terrible, in theory.

But think about what the human is actually doing at this point. They’re waiting for the common trio of network/process/render; just sittin’ around with less to do than the Bolivian Navy.

Because we care about our users, we show a loading indicator to let them know we’ve received their request and are working on it — putting the human in a ‘waiting’ state. Let’s add that to the diagram.

Now that we have a human in the picture, we can mix in a Web Worker and think about the impact it will have on their life:

Hmmm.

First thing to note is that we’re not doing anything in parallel. We need the data from the network before we process it, and we need to process the data before we can render the UI. The elapsed time doesn’t change.

(BTW, the time involved in moving data to a Web Worker and back is negligible: 1ms per 100 KB is a decent rule of thumb.)

So we can move work off the main thread and have a page that is responsive during that time, but to what end? If our user is sitting there looking at a spinner for 600ms, have we enriched their experience by having a responsive screen for the middle third?

No.

I’ve fudged these diagrams a little bit to make them the gorgeous specimens of graphic design that they are, but they’re not really to scale.

When responding to a user request, you’ll find that the network and DOM-manipulating part of any given task take much, much longer than the pure-JS data processing part.

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread).

Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

First, let’s split instances of ‘updating a store’ into two categories:

  1. Updating a store in response to a user interaction, then updating the UI in response to the data change
  2. Not that first one

In the first scenario, a user taps a button on the screen — perhaps to change the sort order of a list. The store updates, and this results in a re-rendering of the DOM (since that’s the point of a store).

Let me just delete one thing from the previous diagram:

In my experience, it is rare that the store-updating step goes beyond a few dozen milliseconds, and is generally followed by ten times that in DOM updating, layout, and paint. If I’ve got a site that’s taking longer than this, I’d be asking questions about why I have so much data in the browser and so much DOM, rather than on which thread I should do my processing.

So the question we’re faced with is the same one from above: the user tapped something on the screen, we’re going to work on that request for hopefully less than a second, why would we want to make the screen responsive during that time?

OK what about the second scenario, where a store update isn’t in response to a user interaction? Performing an auto-save, for example — there’s nothing more annoying than an app becoming unresponsive doing something you didn’t ask it to do.

Actually there’s heaps of things more annoying than that. Teens, for example.

Anyhoo, if you’re doing an auto-save and taking 100ms to process data client-side before sending it off to a server, then you should absolutely use a Web Worker.

In fact, any ‘background’ task that the user hasn’t asked for, or isn’t waiting for, is a good candidate for moving to a Web Worker.
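A rough sketch of that auto-save case (the endpoint and the “processing” are stand-ins for whatever your app actually does):

// main.js: the user never asked for this work and isn't waiting on it
const autosaveWorker = new Worker('autosave-worker.js');

function scheduleAutosave(draft) {
  autosaveWorker.postMessage(draft);        // typing stays smooth while we process
}

autosaveWorker.onmessage = (e) => {
  fetch('/api/drafts', { method: 'POST', body: e.data });
};

// autosave-worker.js: a stand-in for that ~100ms of client-side processing
self.onmessage = (e) => {
  self.postMessage(JSON.stringify(e.data));
};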

The matter of value

Complexity is expensive, and implementing Web Workers ain’t cheap.

If you’re using a bundler — and you are — you’ll have a lot of reading to do, and probably npm packages to install. If you’ve got a create-react-app app, prepare to eject (and put aside two days twice a year to update 30 different packages when the next version of Babel/Redux/React/ESLint comes out).

Also, if you want to share anything fancier than plain data between a worker and the main thread you’ve got some more reading to do (comlink is your friend).
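For what it’s worth, comlink hides most of that ceremony behind ordinary function calls. A minimal sketch (assumes comlink is installed from npm and your bundler or browser handles module workers):

// worker.js: expose an object; its methods become callable from the main thread
import * as Comlink from 'comlink';

Comlink.expose({
  heavyCalculation(numbers) {
    return numbers.reduce((sum, n) => sum + n, 0);
  },
});

// main.js: wrap the worker; every call returns a promise and runs off-thread
import * as Comlink from 'comlink';

const api = Comlink.wrap(new Worker('./worker.js', { type: 'module' }));
api.heavyCalculation([1, 2, 3]).then((result) => console.log(result)); // 6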

What I’m getting at is this: if the benefit is real, but minimal, then you’ve gotta ask if there’s something else you could spend a day or two on with a greater benefit to your users.

This thinking is true of everything, of course, but I’ve found that Web Workers have a particularly poor benefit-to-effort ratio.

Hey David, why you hate Web Workers so bad?

Good question.

This is a doweling jig:

I own a doweling jig. I love my doweling jig. If I need to drill a hole into the end of a piece of wood and ensure that it’s perfectly perpendicular to the surface, I use my doweling jig.

But I don’t use it to eat breakfast. For that I use a spoon.

Four years ago I was working on some fancy animations. They looked slick on a fast device, but janky on a slow one. So I wrote fireball-js, which executes a rudimentary performance benchmark on the user’s device and returns a score, allowing me to run my animations only on devices that would render them smoothly.

Where’s the best spot to run some CPU intensive code that the user didn’t request? On a different thread, of course. A Web Worker was the correct tool for the job.

Fast forward to 2019 and you’ll find me writing a routing algorithm for a mapping application. This requires parsing a big fat GeoJSON map into a collection of nodes and edges, to be used when a user asks for directions. The processing isn’t in response to a user request and the user isn’t waiting on it. And so, a Web Worker is the correct tool for the job.

It was only when doing this that it dawned on me: in the intervening quartet of years, I have seen exactly zero other instances where Web Workers would have improved the user experience.

Contrast this with a recent resurgence in Web Worker wonderment, and combine that contrast with the fact that I couldn’t think of anything else to write about, then concatenate that combined contrast with my contrarian character and you’ve got yourself a blog post telling you that maybe Web Workers are a teeny-tiny bit overhyped.

Thanks for reading

