JavaScript — from callbacks to async/await

JavaScript is synchronous. This means it will execute your code block in order, after hoisting. Before the code executes, var and function declarations are “hoisted” to the top of their scope.
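For example, a minimal sketch of hoisting in action:

console.log(hoistedVar) // logs undefined, the declaration was hoisted but the assignment was not
hoistedFunction() // logs 'I am hoisted', function declarations are hoisted whole

var hoistedVar = 'assigned later'

function hoistedFunction() {
  console.log('I am hoisted')
}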

This is an example of synchronous code:

console.log('1')

console.log('2')

console.log('3')

This code will reliably log “1 2 3”.

Asynchronous requests will wait for a timer to finish or for a request to respond while the rest of the code keeps executing. Then, when the time is right, a callback springs these asynchronous requests into action.

This is an example of asynchronous code:

console.log('1')

setTimeout(function afterTwoSeconds() {
  console.log('2')
}, 2000)

console.log('3')

This will actually log “1 3 2”, since the “2” is inside a setTimeout which, in this example, only executes after two seconds. Your application does not hang waiting for the two seconds to finish. Instead it keeps executing the rest of the code, and when the timeout is over it runs afterTwoSeconds.
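Even a timeout of zero keeps this order: the callback is queued and only runs once the current synchronous code has finished, which hints that a queue (the event loop) is involved rather than a simple timer. A quick sketch:

console.log('1')

setTimeout(function afterZeroSeconds() {
  console.log('2')
}, 0)

console.log('3')

This still logs “1 3 2”.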

You may ask “Why is this useful?” or “How do I get my async code to become sync?”. Hopefully I can show you the answers.

“The problem”

Let us say our goal is to search for a GitHub user and get all the repositories of that user. The thing is, we don’t know the exact name of the user. So we have to list all the users with a similar name, along with their respective repositories.

It doesn’t need to be super fancy, just something like a list of each user and their repositories.

In these examples the request code will use XHR (XMLHttpRequest). You can replace it with jQuery’s $.ajax or the more recent native approach called fetch. Both will give you the promises approach out of the gate.

The code will change slightly depending on your approach, but as a starter:

// The url argument can be something like 'https://api.github.com/users/daspinola/repos'

function request(url) {
  const xhr = new XMLHttpRequest();
  xhr.timeout = 2000;
  xhr.onreadystatechange = function(e) {
    if (xhr.readyState === 4) { // 4 means the request is done
      if (xhr.status === 200) {
        // Code here for the server answer when successful
      } else {
        // Code here for the server answer when not successful
      }
    }
  }
  xhr.ontimeout = function () {
    // Well, it took too long; do some code here to handle that
  }
  xhr.open('get', url, true)
  xhr.send();
}
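For comparison, here is a minimal sketch of the same starter using the fetch approach mentioned above. fetch already returns a promise, so the success and error branches move into the .then and .catch of whoever calls it:

function request(url) {
  // Note: fetch has no equivalent of xhr.timeout; an AbortController is the usual workaround
  return fetch(url)
    .then(function(response) {
      if (!response.ok) {
        throw response.status // fail with the status code, mirroring the XHR version
      }
      return response.text() // the raw response body, like xhr.response
    })
}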

Remember that in these examples the important part is not what the end result of the code is. Instead, your goal should be to understand the differences between the approaches and how you can leverage them in your development.

Callback

You can save a reference to a function in a variable when using JavaScript. Then you can use it as an argument of another function, to execute later. This is our “callback”.

One example would be:

// Execute the function "doThis" with another function as its argument, in this case "andThenThis".
// "doThis" will execute whatever code it has and, when it finishes, run "andThenThis".

doThis(andThenThis)

// Inside "doThis" it's referenced as "callback", which is just a variable holding the reference to the function

function andThenThis() {
  console.log('and then this')
}

// You can name it whatever you want; "callback" is the common choice

function doThis(callback) {
  console.log('this first')

  // The '()' is what tells your code to execute the function reference; otherwise you would just be passing the reference around

  callback()
}

Using the callback to solve our problem allows us to do something like this to the request function we defined earlier:

function request(url, callback) {
  const xhr = new XMLHttpRequest();
  xhr.timeout = 2000;
  xhr.onreadystatechange = function(e) {
    if (xhr.readyState === 4) {
      if (xhr.status === 200) {
        callback(null, xhr.response)
      } else {
        callback(xhr.status, null)
      }
    }
  }
  xhr.ontimeout = function () {
    console.log('Timeout')
  }
  xhr.open('get', url, true)
  xhr.send();
}

Our function for the request now accepts a callback, so that when the request completes the callback is called, both in case of error and in case of success.

const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`

request(userGet, function handleUsersList(error, users) {
  if (error) throw error
  const list = JSON.parse(users).items

  list.forEach(function(user) {
    request(user.repos_url, function handleReposList(err, repos) {
      if (err) throw err
      // Handle the repositories list here
    })
  })
})

Breaking this down:

  • We make a request to search for users with a name similar to the one we want
  • After the request is complete we execute the callback handleUsersList
  • If there is no error, we parse the server response into an object using JSON.parse
  • Then we iterate over our user list, since it can have more than one user
  • For each user we request their repositories list, using the repos_url returned for that user in the first response
  • When the repositories request completes, its callback handleReposList is executed
  • That callback handles either its error or the response with the list of repositories for that user

Note: Sending the error as the first parameter is a common practice, especially when using Node.js.
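Node.js core modules follow this convention. For example, fs.readFile hands you the error first and the data second:

const fs = require('fs')

// './some-file.txt' is just an illustrative path
fs.readFile('./some-file.txt', 'utf8', function(err, data) {
  if (err) return console.error('Reading failed', err)
  console.log('File contents', data)
})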

A more “complete” and readable approach would have some error handling. We would keep the callbacks separate from the request execution.

Something like this:

try {
  request(userGet, handleUsersList)
} catch (e) {
  console.error('Request boom! ', e)
}

function handleUsersList(error, users) {
  if (error) throw error
  const list = JSON.parse(users).items

  list.forEach(function(user) {
    request(user.repos_url, handleReposList)
  })
}

function handleReposList(err, repos) {
  if (err) throw err
  
  // Handle the repositories list here
  console.log('My very few repos', repos)
}

This ends up having problems like racing and poor error handling. (The try/catch above, for instance, only catches errors thrown synchronously by request, not the ones thrown later inside the callbacks.) Racing happens because you don’t control which user you will get first. We request the information for all of them, in case there is more than one, without taking any order into account: user 10 can come first and user 2 last. We have a possible solution later in the article.

The main problem with callbacks is that maintenance and readability can become a pain. They sort of already are, and the code hardly does anything. This is known as callback hell, which can be avoided with our next approach.

Promises

Promises can make your code more readable. A new developer can come to the code base and see a clear order of execution in your code.

To create a promise you can use:

const myPromise = new Promise(function(resolve, reject) {
  
  // code here
  
  if (codeIsFine) {
    resolve('fine')
  } else {
    reject('error')
  }

})

myPromise
  .then(function whenOk(response) {
    console.log(response)
    return response
  })
  .catch(function notOk(err) {
    console.error(err)
  })

Let us decompose it:

  • A promise is initialized with a function that receives resolve and reject
  • Put your async code inside that function
  • Call resolve when everything happens as desired
  • Otherwise, call reject
  • When a resolve happens, the .then method will execute for that promise
  • When a reject happens, the .catch will be triggered instead

Things to bear in mind (there is a sketch after this list):

  • resolve and reject only accept one parameter: resolve('yey', 'works') will only send 'yey' to the .then callback function
  • If you chain multiple .then calls, add a return if you don’t want the next .then value to be undefined
  • When a reject is caught with .catch, a .then chained after it will still execute; you can see that .then as an “always executes”
  • With a chain of .then calls, if an error happens in the first one, subsequent .then calls are skipped until a .catch is found
  • A promise has three states:
  • pending: while waiting for a resolve or reject to happen
  • resolved
  • rejected
  • Once it is in a resolved or rejected state it cannot be changed
Note: You can create promises without the function at the moment of declaration. The way I’m showing it is just a common way of doing it.

“Theory, theory, theory…I’m confused” you may say.

Let’s use our request example with a promise to try to clear things up:

function request(url) {
  return new Promise(function (resolve, reject) {
    const xhr = new XMLHttpRequest();
    xhr.timeout = 2000;
    xhr.onreadystatechange = function(e) {
      if (xhr.readyState === 4) {
        if (xhr.status === 200) {
          resolve(xhr.response)
        } else {
          reject(xhr.status)
        }
      }
    }
    xhr.ontimeout = function () {
      reject('timeout')
    }
    xhr.open('get', url, true)
    xhr.send();
  })
}

In this scenario, when you execute request it will return something like this: a promise, pending until it is resolved or rejected.

const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`

const myPromise = request(userGet)

console.log('will be pending when logged', myPromise)

myPromise
  .then(function handleUsersList(users) {
    console.log('when resolve is found it comes here with the response, in this case users ', users)

    const list = JSON.parse(users).items
    return Promise.all(list.map(function(user) {
      return request(user.repos_url)
    }))
  })
  .then(function handleReposList(repos) {
    console.log('All users repos in an array', repos)
  })
  .catch(function handleErrors(error) {
    console.log('when a reject is executed it will come here ignoring the then statement ', error)
  })

This is how we solve racing and some of the error handling problems. The code is still a bit convoluted, but it’s a way to show you that this approach can also create readability problems.

A quick fix would be to separate the callbacks like so:

const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`

const userRequest = request(userGet)

// Just by reading this part out loud you have a good idea of what the code does
userRequest
  .then(handleUsersList)
  .then(repoRequest)
  .then(handleReposList)
  .catch(handleErrors)

function handleUsersList(users) {
  return JSON.parse(users).items
}

function repoRequest(users) {
  return Promise.all(users.map(function(user) {
    return request(user.repos_url)
  }))
}

function handleReposList(repos) {
  console.log('All users repos in an array', repos)
}

function handleErrors(error) {
  console.error('Something went wrong ', error)
}

By looking at what userRequest is waiting on, in order, through the .then calls, you can get a sense of what we expect from this code block. Everything is more or less separated by responsibility.

This is “scratching the surface” of what promises are. To get great insight into how they work, I cannot recommend this article enough.

Generators

Another approach is to use generators. This is a bit more advanced, so if you are just starting out feel free to jump to the next topic.

One use of generators is that they allow you to have async code looking like sync code.

They are represented by a * in a function and look something like:

function* foo() {
  yield 1
  const args = yield 2
  console.log(args)
}
var fooIterator = foo()

console.log(fooIterator.next().value) // will log 1
console.log(fooIterator.next().value) // will log 2

fooIterator.next('aParam') // logs 'aParam' via the console.log inside the generator

Instead of returning with a return, generators have a yield statement. It pauses the function execution until a .next is called on that iterator. It is similar to a promise’s .then, which only executes when the resolve comes back.

Our request function would look like this:

function request(url) {
  return function(callback) {
    const xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function(e) {
      if (xhr.readyState === 4) {
        if (xhr.status === 200) {
          callback(null, xhr.response)
        } else {
          callback(xhr.status, null)
        }
      }
    }
    xhr.ontimeout = function () {
      console.log('timeout')
    }
    xhr.open('get', url, true)
    xhr.send()
  }
}

We want to have the url as an argument. But instead of executing the request right away, we want it to execute only when we have a callback to handle the response.

Our generator would be something like:

function* list() {
  const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`

  const users = yield request(userGet) // pause until .next sends us the parsed users list

  yield // pause again so each repos request can be pulled one at a time

  for (let i = 0; i < users.length; i++) {
    yield request(users[i].repos_url)
  }
}

It will:

  • Wait until the first request is prepared
  • Yield a function reference expecting a callback for the first request (our request function accepts a url and returns a function that expects a callback)
  • Expect the users list to be sent in the following .next
  • Iterate over the users
  • Wait for a .next for each of the users
  • Yield the respective callback-expecting function for each one

So an execution of this would be:

try {
  const iterator = list()
  iterator.next().value(function handleUsersList(err, users) {
    if (err) throw err
    const list = JSON.parse(users).items
    
    // send the list of users to the iterator
    iterator.next(list)
    
    list.forEach(function(user) {
      iterator.next().value(function userRepos(error, repos) {
        if (error) throw error

        // Handle each individual user repo here
        console.log(user, JSON.parse(repos))
      })
    })
  })  
} catch (e) {
  console.error(e)
}

We could separate the callback functions like we did previously. You get the deal by now; one takeaway is that we can now handle each individual user’s repository list individually.
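To avoid driving the iterator by hand, you could wrap the .next calls in a small runner. This is just a sketch of a simplified, co-style helper; the run function and its sequential usage are my own illustration, assuming the thunk-returning request defined above:

// Minimal runner for generators that yield "thunks":
// functions expecting an error-first callback, like our request(url)
function run(generatorFunction) {
  const iterator = generatorFunction()

  function step(err, data) {
    if (err) return console.error(err)
    const result = iterator.next(data)
    if (result.done) return
    // result.value is the function returned by request(url)
    result.value(step)
  }

  step()
}

run(function* () {
  const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`
  const users = JSON.parse(yield request(userGet)).items

  // Note: this runs the repo requests one at a time, in order
  for (const user of users) {
    const repos = yield request(user.repos_url)
    console.log(user, JSON.parse(repos))
  }
})

A runner like this trades the manual .next calls for sequential execution; libraries like co generalize the idea.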

I have mixed feelings about generators. On one hand, I can get a grasp of what is expected of the code just by looking at the generator.

But its execution ends up having problems similar to the callback hell.

Like async/await, a compiler is recommended, because generators aren’t supported in older browser versions.

Also, generators aren’t that common in my experience, so they may generate confusion in codebases maintained by various developers.

An awesome insight into how generators work can be found in this article. And here is another great resource.

Async/Await

This method seems like a mix of generators with promises. You just have to tell your code which functions are to be async, and which parts of the code will have to await for that promise to finish.

sumTwentyAfterTwoSeconds(10)
  .then(result => console.log('after 2 seconds', result))

async function sumTwentyAfterTwoSeconds(value) {
  const remainder = afterTwoSeconds(20)
  return value + await remainder
}

function afterTwoSeconds(value) {
  return new Promise(resolve => {
    setTimeout(() => { resolve(value) }, 2000);
  });
}

In this scenario:

  • We declare sumTwentyAfterTwoSeconds as an async function
  • We tell our code to wait for the resolve or reject of our promise function afterTwoSeconds
  • It will only end up in the .then when the await operations finish
  • In this case there is only one

Applying this to our request we leave it as a promise as seen earlier:

function request(url) {
  return new Promise(function(resolve, reject) {
    const xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function(e) {
      if (xhr.readyState === 4) {
        if (xhr.status === 200) {
          resolve(xhr.response)
        } else {
          reject(xhr.status)
        }
      }
    }
    xhr.ontimeout = function () {
      reject('timeout')
    }
    xhr.open('get', url, true)
    xhr.send()
  })
}

We create our async function with the needed awaits like so:

async function list() {
  const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`
  
  const users = await request(userGet)
  const usersList = JSON.parse(users).items
  
  usersList.forEach(async function (user) {
    const repos = await request(user.repos_url)
    
    handleRepoList(user, repos)
  })
}

function handleRepoList(user, repos) {
  const userRepos = JSON.parse(repos)
  
  // Handle each individual user repo here

  console.log(user, userRepos)
}

So now we have an async list function that handles the requests. Another async function is needed inside the forEach so that we can await the list of repos for each user and manipulate it (more on a caveat of this forEach right after we call it).

We call it as:

list()
  .catch(e => console.error(e))
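One caveat: forEach does not wait for the async callbacks it spawns, so list() itself resolves before the repositories arrive, and the order in which handleRepoList runs is not guaranteed. A minimal sketch of a variation, assuming the same request function, if you want list() to wait for everything and keep the original order:

async function list() {
  const userGet = `https://api.github.com/search/users?page=1&q=daspinola&type=Users`

  const users = await request(userGet)
  const usersList = JSON.parse(users).items

  // Fire all the repo requests at once, then wait for all of them to finish
  const repos = await Promise.all(usersList.map(function(user) {
    return request(user.repos_url)
  }))

  // Promise.all keeps the results in the same order as usersList
  repos.forEach(function(repo, index) {
    handleRepoList(usersList[index], repo)
  })
}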

This and the promises approach are my favorites since the code is easy to read and change. You can read about async/await more in depth here.

A downside of using async/await is that it isn’t supported on the front-end by older browsers, and on the back-end you need Node 8 or later.

You can use a compiler like Babel to help solve that.

“Solution”

You can see the end code accomplishing our initial goal using async/await in this snippet.

A good thing to do is to try it yourself in the various forms referenced in this article.

Conclusion

Depending on the scenario you might find yourself using:

  • async/await
  • callbacks
  • a mix of these

It’s up to you what fits your purposes. And what lets you maintain the code so that it is understandable to others and your future self.

Note: Any of these approaches becomes slightly less verbose when you use alternatives for the requests, like $.ajax or fetch.

Let me know what you would do differently, and any other ways you’ve found to make each approach more readable.

JavaScript developers: should you be using Web Workers?

Do you think JavaScript developers should be making more use of Web Workers to shift execution off of the main thread?

Originally published by David Gilbertson at https://medium.com

So, Web Workers. Those wonderful little critters that allow us to execute JavaScript off the main thread.

Also known as “no, you’re thinking of Service Workers”.

Before I get into the meat of the article, please sit for a lesson in how computers work:

Understood? Good.

For the red/green colourblind, let me explain. While a CPU is doing one thing, it can’t be doing another thing, which means you can’t sort a big array while a user scrolls the screen.

This is bad, if you have a big array and users with fingers.

Enter, Web Workers. These split open the atomic concept of a ‘CPU’ and allow us to think in terms of threads. We can use one thread to handle user-facing work like touch events and rendering the UI, and different threads to carry out all other work.

Check that out, the main thread is green the whole way through, ready to receive and respond to the gentle caress of a user.

You’re excited (I can tell): if we only have UI code on the main thread and all other code can go in a worker, things are going to be amazing (said the way Oprah would say it).

But cool your jets for just a moment, because websites are mostly about the UI — it’s why we have screens. And a lot of a user’s interactions with your site will be tapping on the screen, waiting for a response, reading, tapping, looking, reading, and so on.

So we can’t just say “here’s some JS that takes 20ms to run, chuck it on a thread”, we must think about where that execution time exists in the user’s world of tap, read, look, read, tap…

I like to boil this down to one specific question:

Is the user waiting anyway?

Imagine we have created some sort of git-repository-hosting website that shows all sorts of things about a repository. We have a cool feature called ‘issues’. A user can even click an ‘issues’ tab in our website to see a list of all issues relating to the repository. Groundbreaking!

When our users click this issues tab, the site is going to fetch the issue data, process it in some way — perhaps sort, or format dates, or work out which icon to show — then render the UI.

Inside the user’s computer, that’ll look exactly like this.

Look at that processing stage, locking up the main thread even though it has nothing to do with the UI! That’s terrible, in theory.

But think about what the human is actually doing at this point. They’re waiting for the common trio of network/process/render; just sittin’ around with less to do than the Bolivian Navy.

Because we care about our users, we show a loading indicator to let them know we’ve received their request and are working on it — putting the human in a ‘waiting’ state. Let’s add that to the diagram.

Now that we have a human in the picture, we can mix in a Web Worker and think about the impact it will have on their life:

Hmmm.

First thing to note is that we’re not doing anything in parallel. We need the data from the network before we process it, and we need to process the data before we can render the UI. The elapsed time doesn’t change.

(BTW, the time involved in moving data to a Web Worker and back is negligible: 1ms per 100 KB is a decent rule of thumb.)
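For reference, a minimal sketch of what shipping that processing to a worker could look like. The file name and the helper functions (bigArrayOfIssues, renderIssuesList, formatIssue) are illustrative, not from a real codebase:

// main.js: hand the data over and keep the main thread free
const worker = new Worker('worker.js')

worker.postMessage({ issues: bigArrayOfIssues })

worker.onmessage = function(event) {
  renderIssuesList(event.data) // back on the main thread for the DOM work
}

// worker.js: no DOM access here, just the data processing
self.onmessage = function(event) {
  const processed = event.data.issues.map(formatIssue) // sort, format dates, pick icons...
  self.postMessage(processed)
}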

So we can move work off the main thread and have a page that is responsive during that time, but to what end? If our user is sitting there looking at a spinner for 600ms, have we enriched their experience by having a responsive screen for the middle third?

No.

I’ve fudged these diagrams a little bit to make them the gorgeous specimens of graphic design that they are, but they’re not really to scale.

When responding to a user request, you’ll find that the network and DOM-manipulating part of any given task take much, much longer than the pure-JS data processing part.

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread).

Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

First, let’s split instances of ‘updating a store’ into two categories:

  1. Updating a store in response to a user interaction, then updating the UI in response to the data change
  2. Not that first one

In the first scenario, a user taps a button on the screen, perhaps to change the sort order of a list. The store updates, and this results in a re-rendering of the DOM (since that’s the point of a store).

Let me just delete one thing from the previous diagram:

In my experience, it is rare that the store-updating step goes beyond a few dozen milliseconds, and is generally followed by ten times that in DOM updating, layout, and paint. If I’ve got a site that’s taking longer than this, I’d be asking questions about why I have so much data in the browser and so much DOM, rather than on which thread I should do my processing.

So the question we’re faced with is the same one from above: the user tapped something on the screen, we’re going to work on that request for hopefully less than a second, why would we want to make the screen responsive during that time?

OK what about the second scenario, where a store update isn’t in response to a user interaction? Performing an auto-save, for example — there’s nothing more annoying than an app becoming unresponsive doing something you didn’t ask it to do.

Actually there’s heaps of things more annoying than that. Teens, for example.

Anyhoo, if you’re doing an auto-save and taking 100ms to process data client-side before sending it off to a server, then you should absolutely use a Web Worker.

In fact, any ‘background’ task that the user hasn’t asked for, or isn’t waiting for, is a good candidate for moving to a Web Worker.

The matter of value

Complexity is expensive, and implementing Web Workers ain’t cheap.

If you’re using a bundler — and you are — you’ll have a lot of reading to do, and probably npm packages to install. If you’ve got a create-react-app app, prepare to eject (and put aside two days twice a year to update 30 different packages when the next version of Babel/Redux/React/ESLint comes out).

Also, if you want to share anything fancier than plain data between a worker and the main thread you’ve got some more reading to do (comlink is your friend).

What I’m getting at is this: if the benefit is real, but minimal, then you’ve gotta ask if there’s something else you could spend a day or two on with a greater benefit to your users.

This thinking is true of everything, of course, but I’ve found that Web Workers have a particularly poor benefit-to-effort ratio.

Hey David, why you hate Web Workers so bad?

Good question.

This is a doweling jig:

I own a doweling jig. I love my doweling jig. If I need to drill a hole into the end of a piece of wood and ensure that it’s perfectly perpendicular to the surface, I use my doweling jig.

But I don’t use it to eat breakfast. For that I use a spoon.

Four years ago I was working on some fancy animations. They looked slick on a fast device, but janky on a slow one. So I wrote fireball-js, which executes a rudimentary performance benchmark on the user’s device and returns a score, allowing me to run my animations only on devices that would render them smoothly.

Where’s the best spot to run some CPU intensive code that the user didn’t request? On a different thread, of course. A Web Worker was the correct tool for the job.

Fast forward to 2019 and you’ll find me writing a routing algorithm for a mapping application. This requires parsing a big fat GeoJSON map into a collection of nodes and edges, to be used when a user asks for directions. The processing isn’t in response to a user request and the user isn’t waiting on it. And so, a Web Worker is the correct tool for the job.

It was only when doing this that it dawned on me: in the intervening quartet of years, I have seen exactly zero other instances where Web Workers would have improved the user experience.

Contrast this with a recent resurgence in Web Worker wonderment, and combine that contrast with the fact that I couldn’t think of anything else to write about, then concatenate that combined contrast with my contrarian character and you’ve got yourself a blog post telling you that maybe Web Workers are a teeny-tiny bit overhyped.
