Understanding Memoization In JavaScript

A simple yet thorough explanation of memoization in JavaScript.

As our applications grow and begin to carry out heavier computations, the need for speed 🏎️ grows with them, and the optimization of processes becomes a necessity. When we ignore this concern, we end up with programs that take a long time to run and consume a monstrous chunk of system resources during execution.

Memoization is an optimization technique that speeds up applications by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

If this doesn’t make much sense to you yet, that’s okay. This article provides an in-depth explanation of why memoization is necessary, what it is, how it can be implemented and when it should be used.

What is memoization?

Memoization is an optimization technique that speeds up applications by storing the results of expensive function calls and returning the cached result when the same inputs are supplied again.
Same definition again? 🙈 Let’s break it down this time.

It is clear to us at this point that the aim of memoization is to reduce the time taken and amount of resources consumed in the execution of “expensive function calls”.

What is an expensive function call? Don’t get confused, we aren’t spending money here. In the context of computer programs, the two major resources we have are time and memory. Thus, an expensive function call is a function call that consumes huge chunks of these two resources during execution due to heavy computation.

However, as with money, we need to be economical. For this, memoization uses caching to store the results of our function calls for quick and easy access at a later time.

Thus, when an expensive function has been called once, the result is stored in a cache such that whenever the function is called again within our application, the result would be returned very quickly from the cache without redoing any calculations.

Why is memoization important?

Here is a practical example that shows the importance of memoization:

Imagine you were reading a new novel with a pretty attractive cover at the park. Each time a person passes by, they are drawn by the cover, so they ask for the name of the book and its author. The first time the question is asked, you turn the cover and read out the title and the name of the author. Now more and more people keep stopping by and asking the same question. You’re a very nice person 🙂 , so you answer them all. Eventually, you write the title and the author’s name on a card and place it beside you, so that whenever the question comes up again you simply point to the card instead of flipping the book over.

Notice the similarity? With memoization, when a function is provided an input, it does the required computation and stores the result in a cache before returning the value. If the same input is ever received in the future, the function won’t have to do the work over and over again. It simply provides the answer from the cache (memory).
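The book analogy maps directly onto code. Here is a minimal sketch (the memoizedSquare function and its cache are invented purely for illustration):

```javascript
// A minimal sketch of the idea above; the function and inputs are invented for illustration.
const cache = {};

function memoizedSquare(n) {
  if (n in cache) {
    return cache[n]; // answer retrieved from "memory", no recomputation
  }
  const result = n * n; // the "expensive" computation (trivial here)
  cache[n] = result;    // store the answer before returning it
  return result;
}

memoizedSquare(4); // computes 16 and caches it
memoizedSquare(4); // returns 16 straight from the cache
```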

How does memoization work?

The concept of memoization in JavaScript relies mainly on two concepts. They are:

Closures

A closure is the combination of a function bundled together with references to its surrounding state, i.e. the lexical environment within which that function was declared.
Not quite clear? I think so too.

To gain a clearer understanding, let us quickly examine the concept of lexical scope in JavaScript. Lexical scope simply refers to the physical location of variables and blocks as specified by the programmer while writing code.

Take a look at this very popular code snippet adapted from Kyle Simpson’s book, “You Don’t Know JS”:

function foo(a) {
  var b = a + 2;
  function bar(c) {
    console.log(a, b, c);
  }
  bar(b * 2);
}

foo(3); // 3, 5, 10

From this code snippet we can identify three scopes:

- The global scope, which contains the identifier foo
- The scope of foo, which contains the identifiers a, b and bar
- The scope of bar, which contains the identifier c

Looking carefully at the code above, we notice that the function bar has access to the variables a and b by virtue of the fact that it is nested inside of foo. In effect, bar is stored along with its surrounding environment. Thus, we say that bar has a closure over the scope of foo.

You may understand this in the context of heredity, in that an individual will have access to and exhibit inherited traits even outside of their immediate environment. This logic highlights another factor about closures, which leads us into our second main concept.

Returning functions from functions

Closures allow us to invoke an inner function outside its enclosing function while maintaining access to the enclosing function’s lexical scope (i.e. the identifiers declared within it).

Let’s make a little adjustment to the code in our previous example to explain this.

function foo() {
  var a = 2;

  function bar() {
    console.log(a);
  }
  return bar;
}

var baz = foo();
baz(); // 2

Ahaaa! Interesting, don’t you think?

Notice how the function foo returns another function, bar. Observe that we execute foo and assign the returned value to baz. Since foo returns a function, baz now holds a reference to the bar function defined inside of foo.

What’s most interesting about this is that when we execute the function baz outside the lexical scope of foo, we still get the value of a, i.e. 2, logged to our console. How is this possible? 😕

Remember that bar will always have access to the variables in foo (inherited traits), even when executed outside of foo's scope (far away from home).
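The same principle can be sketched with a simple counter (the names here are invented): the returned function keeps its enclosing scope alive between calls, which is exactly the property a memoization cache relies on.

```javascript
function counter() {
  let count = 0; // lives in counter's scope
  return function () {
    count += 1;  // still reachable after counter() has returned
    return count;
  };
}

const next = counter();
next(); // 1
next(); // 2; the closed-over count persists between calls
```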

Now let’s see how memoization utilizes these concepts using some more code samples. 💪🏾

Case Study: The Fibonacci Sequence

What is the Fibonacci sequence?

The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding numbers. Depending on the chosen starting point, it is written as either:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

OR

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

The Challenge: Write a function to return the nth element in the Fibonacci sequence, where the sequence is:

[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …]

Knowing that each value is a sum of the previous two, a recursive solution to this problem will be:

function fibonacci(n) {
    if (n <= 1) {
        return 1
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

Concise and accurate indeed! But there’s a problem. Notice that in consistently reducing the size of the problem (the value of n) until the terminating case is reached, a lot more work is done and time consumed to arrive at our solution, because certain values in the sequence are evaluated over and over again. If we expand the call tree for fib(5), we see that the Fibonacci numbers at indices 0, 1, 2 and 3 are each computed repeatedly on different branches. This is known as redundant computation, and it is exactly what memoization stands to eliminate.
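To make the redundancy concrete, here is a rough instrumented variant (the calls counter is added purely for illustration) that tallies how many times the function body runs for a single top-level call:

```javascript
let calls = 0;

function fib(n) {
  calls += 1; // count every invocation, including the repeated ones
  if (n <= 1) {
    return 1;
  }
  return fib(n - 1) + fib(n - 2);
}

fib(20);
console.log(calls); // 21891 invocations for a single fib(20)
```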

Now let’s fix this with memoization.

function fibonacci(n, memo) {
    memo = memo || {}
    if (memo[n]) {
        return memo[n]
    }
    if (n <= 1) {
        return 1
    }
    return memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
}

In the code snippet above, we adjust the function to accept an optional parameter known as memo. We use the memo object as a cache to store the Fibonacci numbers with their respective indices as keys, to be retrieved whenever they are required later in the course of execution.

memo = memo || {}

Here, we check if memo was received as an argument when the function was called. If it was, we initialize it for use, but if it wasn’t, we set it to an empty object.

if (memo[n]) {
    return memo[n]
}

Next, we check if there’s a cached value for the current key n and we return its value if there is.

As in the solution before, we specify a terminating case for when n is less than or equal to 1.

At the end, we recursively call the function with a smaller value of n, while passing the cache (memo) into each call for use during computation. This ensures that when a value has been evaluated before and cached, we do not perform the expensive computation a second time; we simply retrieve the value from the memo cache.

Notice that we add the final result to the cache before returning it.
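As a quick sanity check, here is the memoized function repeated in a self-contained form. As a side note, in ES2015+ the memo = memo || {} fallback can be expressed as a default parameter; the behaviour is identical:

```javascript
// Same logic as above, with a default parameter in place of memo = memo || {}.
function fibonacci(n, memo = {}) {
  if (memo[n]) {
    return memo[n]
  }
  if (n <= 1) {
    return 1
  }
  return memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
}

fibonacci(10) // 89, the element at index 10 of [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...]
```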

Wheeew!!! Let’s celebrate the good work so far! 🙂

Let’s see how much better we’ve made things!

Testing performance with JSPerf

Follow this link to the performance test on JSPerf. There, we run a test to evaluate the time it’d take to execute fibonacci(20) using both methods. See the results below:

😲 Wow!!! This is super impressive. The memoized fibonacci function is the fastest, as expected, and by an astonishing margin. It executes 126,762 ops/sec, far more than the purely recursive solution, which executes 1,751 ops/sec and is approximately 99% slower.

Now we’ve seen just how much memoization can impact the performance of our applications on a functional level. Does this mean that for every expensive function within our application, we would have to create a variation that is modified to maintain an internal cache?

No. Recall that by returning functions from functions, we allow the returned function to retain access to its parent function’s scope even when executed outside of it. This makes it possible to transfer certain features and properties (traits) from the enclosing function to the function that is returned.

Let’s apply this to memoization as we write our own memoizer function.

A Functional Approach

In the code snippet below, we create a higher order function called memoizer. With this function, we will be able to easily apply memoization to any function.

function memoizer(fun) {
    let cache = {}
    return function (n) {
        if (cache[n]) {
            return cache[n]
        } else {
            let result = fun(n)
            cache[n] = result
            return result
        }
    }
}

Above, we simply create a new function called memoizer which accepts the function fun to be memoized as a parameter. Within the function we create a cache object for storing the results of our function executions for future use.

From the memoizer function, we return a new function which can access the cache no matter where it is executed due to the principle of closure as discussed above.

Within the returned function, we use an if..else statement to check if there is already a cached value for the specified key (parameter) n. If there is, we retrieve and return it. If there isn’t, we calculate the result using the function to be memoized, fun. Afterwards, we add the result to the cache using the appropriate key n, so that it may be accessed from there on future occasions. At the end, we return the calculated result.

Pretty smooth!
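Putting the pieces together, here is a sketch of memoizer in use. The slowDouble function and its computations counter are invented for the demonstration; the counter shows that the underlying work runs only once per distinct input:

```javascript
function memoizer(fun) {
  let cache = {};
  return function (n) {
    if (cache[n]) {
      return cache[n];
    } else {
      let result = fun(n);
      cache[n] = result;
      return result;
    }
  };
}

let computations = 0;
const slowDouble = (n) => {
  computations += 1; // track how often the real work actually runs
  return n * 2;
};

const fastDouble = memoizer(slowDouble);
fastDouble(21); // 42, computed and cached
fastDouble(21); // 42, served from the cache; computations is still 1
```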

To apply the memoizer function to the recursive fibonacci function initially considered, we call the memoizer function passing the function as an argument.

const fibonacciMemoFunction = memoizer(fibonacciRecursive)

Testing our memoizer function

When we compare our memoizer function with the sample case above, here’s the result:

😲 😲 😲 No way!!! Our memoizer function produced the fastest solution at 42,982,762 ops/sec; the previously considered solutions are, by comparison, effectively 100% slower.

How’s that for optimization!

As regards memoization, we have now considered the what, why and how. Now let’s take a look at the when.

When to memoize a function

Of course, memoization is amazing and you may now be tempted to memoize all your functions. That could turn out very unproductive. Here are three cases in which memoization would be beneficial:

  1. For expensive function calls, i.e. functions that carry out heavy computations
  2. For functions with a limited and highly recurring input range, such that cached values are frequently reused
  3. For pure functions, i.e. functions that always return the same output when called with the same input
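One caveat worth illustrating with code: memoization is only safe for pure functions, i.e. those whose output depends solely on their input. In this contrived sketch (memoize, scale and multiplier are all invented names), the memoized function reads external state, so the cache happily serves stale answers:

```javascript
function memoize(fn) {
  const cache = {};
  return (n) => (n in cache ? cache[n] : (cache[n] = fn(n)));
}

let multiplier = 2;
const scale = memoize((n) => n * multiplier); // impure: reads external state

scale(10);      // 20, computed and cached
multiplier = 3; // external state changes...
scale(10);      // still 20: the stale cached value, not 30
```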

All done! You get it now don’t you?

Memoization Libraries

Here are some libraries that provide memoization functionality:

- Lodash (_.memoize)
- memoizee
- fast-memoize
- moize

Conclusion

With memoization, we are able to prevent our functions from re-calculating the same results over and over again. It’s now time for you to put this knowledge to work.

You may go forth and memoize your entire codebase! 😅 (Just kidding)

Further Reading

To learn more about the techniques and concepts discussed in this article, you may use the following links:

Memoization

Understanding the Underlying Processes of JavaScript’s Closures and Scope Chain

Higher Order Functions

Implementing Memoization in JavaScript



JavaScript developers: should you be using Web Workers?

Do you think JavaScript developers should be making more use of Web Workers to shift execution off of the main thread?

Originally published by David Gilbertson at https://medium.com

So, Web Workers. Those wonderful little critters that allow us to execute JavaScript off the main thread.

Also known as “no, you’re thinking of Service Workers”.


Before I get into the meat of the article, please sit for a lesson in how computers work:

Understood? Good.

For the red/green colourblind, let me explain. While a CPU is doing one thing, it can’t be doing another thing, which means you can’t sort a big array while a user scrolls the screen.

This is bad, if you have a big array and users with fingers.

Enter, Web Workers. These split open the atomic concept of a ‘CPU’ and allow us to think in terms of threads. We can use one thread to handle user-facing work like touch events and rendering the UI, and different threads to carry out all other work.

Check that out, the main thread is green the whole way through, ready to receive and respond to the gentle caress of a user.

You’re excited (I can tell), if we only have UI code on the main thread and all other code can go in a worker, things are going to be amazing (said the way Oprah would say it).

But cool your jets for just a moment, because websites are mostly about the UI — it’s why we have screens. And a lot of a user’s interactions with your site will be tapping on the screen, waiting for a response, reading, tapping, looking, reading, and so on.

So we can’t just say “here’s some JS that takes 20ms to run, chuck it on a thread”, we must think about where that execution time exists in the user’s world of tap, read, look, read, tap…

I like to boil this down to one specific question:

Is the user waiting anyway?

Imagine we have created some sort of git-repository-hosting website that shows all sorts of things about a repository. We have a cool feature called ‘issues’. A user can even click an ‘issues’ tab in our website to see a list of all issues relating to the repository. Groundbreaking!

When our users click this issues tab, the site is going to fetch the issue data, process it in some way — perhaps sort, or format dates, or work out which icon to show — then render the UI.

Inside the user’s computer, that’ll look exactly like this.

Look at that processing stage, locking up the main thread even though it has nothing to do with the UI! That’s terrible, in theory.

But think about what the human is actually doing at this point. They’re waiting for the common trio of network/process/render; just sittin’ around with less to do than the Bolivian Navy.

Because we care about our users, we show a loading indicator to let them know we’ve received their request and are working on it — putting the human in a ‘waiting’ state. Let’s add that to the diagram.

Now that we have a human in the picture, we can mix in a Web Worker and think about the impact it will have on their life:

Hmmm.

First thing to note is that we’re not doing anything in parallel. We need the data from the network before we process it, and we need to process the data before we can render the UI. The elapsed time doesn’t change.

(BTW, the time involved in moving data to a Web Worker and back is negligible: 1ms per 100 KB is a decent rule of thumb.)

So we can move work off the main thread and have a page that is responsive during that time, but to what end? If our user is sitting there looking at a spinner for 600ms, have we enriched their experience by having a responsive screen for the middle third?

No.

I’ve fudged these diagrams a little bit to make them the gorgeous specimens of graphic design that they are, but they’re not really to scale.

When responding to a user request, you’ll find that the network and DOM-manipulating part of any given task take much, much longer than the pure-JS data processing part.

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread).

Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

First, let’s split instances of ‘updating a store’ into two categories:

  1. Updating a store in response to a user interaction, then updating the UI in response to the data change
  2. Not that first one

In the first scenario, a user taps a button on the screen — perhaps to change the sort order of a list. The store updates, and this results in a re-rendering of the DOM (since that’s the point of a store).

Let me just delete one thing from the previous diagram:

In my experience, it is rare that the store-updating step goes beyond a few dozen milliseconds, and is generally followed by ten times that in DOM updating, layout, and paint. If I’ve got a site that’s taking longer than this, I’d be asking questions about why I have so much data in the browser and so much DOM, rather than on which thread I should do my processing.

So the question we’re faced with is the same one from above: the user tapped something on the screen, we’re going to work on that request for hopefully less than a second, why would we want to make the screen responsive during that time?

OK what about the second scenario, where a store update isn’t in response to a user interaction? Performing an auto-save, for example — there’s nothing more annoying than an app becoming unresponsive doing something you didn’t ask it to do.

Actually there’s heaps of things more annoying than that. Teens, for example.

Anyhoo, if you’re doing an auto-save and taking 100ms to process data client-side before sending it off to a server, then you should absolutely use a Web Worker.

In fact, any ‘background’ task that the user hasn’t asked for, or isn’t waiting for, is a good candidate for moving to a Web Worker.

The matter of value

Complexity is expensive, and implementing Web Workers ain’t cheap.

If you’re using a bundler — and you are — you’ll have a lot of reading to do, and probably npm packages to install. If you’ve got a create-react-app app, prepare to eject (and put aside two days twice a year to update 30 different packages when the next version of Babel/Redux/React/ESLint comes out).

Also, if you want to share anything fancier than plain data between a worker and the main thread you’ve got some more reading to do (comlink is your friend).

What I’m getting at is this: if the benefit is real, but minimal, then you’ve gotta ask if there’s something else you could spend a day or two on with a greater benefit to your users.

This thinking is true of everything, of course, but I’ve found that Web Workers have a particularly poor benefit-to-effort ratio.

Hey David, why you hate Web Workers so bad?

Good question.

This is a doweling jig:

I own a doweling jig. I love my doweling jig. If I need to drill a hole into the end of a piece of wood and ensure that it’s perfectly perpendicular to the surface, I use my doweling jig.

But I don’t use it to eat breakfast. For that I use a spoon.

Four years ago I was working on some fancy animations. They looked slick on a fast device, but janky on a slow one. So I wrote fireball-js, which executes a rudimentary performance benchmark on the user’s device and returns a score, allowing me to run my animations only on devices that would render them smoothly.

Where’s the best spot to run some CPU intensive code that the user didn’t request? On a different thread, of course. A Web Worker was the correct tool for the job.

Fast forward to 2019 and you’ll find me writing a routing algorithm for a mapping application. This requires parsing a big fat GeoJSON map into a collection of nodes and edges, to be used when a user asks for directions. The processing isn’t in response to a user request and the user isn’t waiting on it. And so, a Web Worker is the correct tool for the job.

It was only when doing this that it dawned on me: in the intervening quartet of years, I have seen exactly zero other instances where Web Workers would have improved the user experience.

Contrast this with a recent resurgence in Web Worker wonderment, and combine that contrast with the fact that I couldn’t think of anything else to write about, then concatenate that combined contrast with my contrarian character and you’ve got yourself a blog post telling you that maybe Web Workers are a teeny-tiny bit overhyped.

Thanks for reading


Further reading

An Introduction to Web Workers

JavaScript Web Workers: A Beginner’s Guide

Using Web Workers to Real-time Processing

How to use Web Workers in Angular app

Using Web Workers with Angular CLI

