These are the features in ES6 that you should know

ES6 brings more features to the JavaScript language. Some new syntax allows you to write code in a more expressive way, some features complete the functional programming toolbox, and some features are questionable.

let and const

There are two ways of declaring a variable (let and const), plus one that has become obsolete (var).

let

let declares and optionally initializes a variable in the current scope. The current scope can be either a module, a function, or a block. The value of a variable that is not initialized is undefined.

Scope defines the lifetime and visibility of a variable. Variables are not visible outside the scope in which they are declared.

Consider the next code that emphasizes let block scope:

let x = 1;
{ 
  let x = 2;
}
console.log(x); //1

In contrast, the var declaration has no block scope:

var x = 1;
{ 
  var x = 2;
}
console.log(x); //2

The for loop statement, with the let declaration, creates a new variable local to the block scope, for each iteration. The next loop creates five closures over five different i variables.

(function run(){
  for(let i=0; i<5; i++){
    setTimeout(function log(){
      console.log(i); //0 1 2 3 4
    }, 100);
  }
})();

Writing the same code with var will create five closures, over the same variable, so all closures will display the last value of i.

The log() function is a closure. For more on closures, take a look at Discover the power of closures in JavaScript.

const

const declares a variable that cannot be reassigned. It becomes a constant only when the assigned value is immutable.

An immutable value is a value that, once created, cannot be changed. Primitive values are immutable, objects are mutable.

const freezes the variable; Object.freeze() freezes the object.

The initialization of the const variable is mandatory.
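
A small sketch of the difference (the counter and book names below are just for illustration):

const counter = 1;
//counter = 2; would throw TypeError: Assignment to constant variable.

const book = { title: "JavaScript: The Good Parts" };
book.title = "ES6"; //allowed: the object itself is still mutable
Object.freeze(book);
book.title = "ES5"; //ignored (throws a TypeError in strict mode)
console.log(book.title); //"ES6"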

Modules

Before modules, a variable declared outside any function was a global variable.

With modules, a variable declared outside any function is hidden and not available to other modules unless it is explicitly exported.

Exporting makes a function or object available to other modules. In the next example, I export functions from different modules:

//module "./TodoStore.js"
export default function TodoStore(){}
//module "./UserStore.js"
export default function UserStore(){}

Importing makes a function or object from other modules available to the current module.

import TodoStore from "./TodoStore";
import UserStore from "./UserStore";
const todoStore = TodoStore();
const userStore = UserStore();
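
Modules can also expose several members with named exports. A small illustrative sketch (the module name and functions below are made up):

//module "./mathUtils.js"
export function add(a, b){ return a + b; }
export function multiply(a, b){ return a * b; }

//importing module
import { add, multiply } from "./mathUtils";
add(1, 2); //3
multiply(2, 3); //6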

Spread/Rest

The ... operator can be either the spread operator or the rest parameter, depending on where it is used. Consider the next example:

const numbers = [1, 2, 3];
const arr = ['a', 'b', 'c', ...numbers];
console.log(arr);
//["a", "b", "c", 1, 2, 3]

This is the spread operator. Now look at the next example:

function process(x,y, ...arr){
  console.log(arr)
}
process(1,2,3,4,5);
//[3, 4, 5]
function processArray(...arr){
  console.log(arr)
}
processArray(1,2,3,4,5);
//[1, 2, 3, 4, 5]

This is the rest parameter.

arguments

With the rest parameter we can replace the arguments pseudo-parameter. The rest parameter is a real array; arguments is only array-like.

function addNumber(total, value){
  return total + value;
}
function sum(...args){
  return args.reduce(addNumber, 0);
}
sum(1,2,3); //6
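
The spread operator can also pass an array as individual arguments to a function, replacing Function.prototype.apply(). Reusing the sum() function above:

const numbers = [1, 2, 3];
sum(...numbers); //6
//equivalent to sum(1, 2, 3)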

Cloning

The spread operator makes the cloning of objects and arrays simpler and more expressive.

Object spread properties were standardized as part of ES2018.

const book = { title: "JavaScript: The Good Parts" };
//clone with Object.assign()
const clone1 = Object.assign({}, book);
//clone with the spread operator
const clone2 = { ...book };

const arr = [1, 2, 3];
//clone with slice()
const cloneArr1 = arr.slice();
//clone with the spread operator
const cloneArr2 = [...arr];
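
Keep in mind that both Object.assign() and the spread operator create shallow clones: nested objects are shared, not copied. A small sketch (the names below are just for illustration):

const original = { title: "ES6", author: { name: "Nate" } };
const shallowClone = { ...original };
shallowClone.author.name = "John";
console.log(original.author.name); //"John" - the nested object is shared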

Concatenation

In the next example, the spread operator is used to concatenate arrays:

const part1 = [1, 2, 3];
const part2 = [4, 5, 6];
//concatenate with concat()
const arr1 = part1.concat(part2);
//concatenate with the spread operator
const arr2 = [...part1, ...part2];

Merging objects

Like Object.assign(), the spread operator can be used to merge the properties of one or more objects into a new object.

const authorGateway = { 
  getAuthors : function() {},
  editAuthor: function() {}
};
const bookGateway = { 
  getBooks : function() {},
  editBook: function() {}
};
//copy with Object.assign()
const gateway1 = Object.assign({},
  authorGateway,
  bookGateway);

//copy with the spread operator
const gateway2 = {
  ...authorGateway,
  ...bookGateway
};
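
When the merged objects have properties with the same name, the last one wins, with both Object.assign() and the spread operator. A minimal sketch:

const defaults = { pageSize: 10, sort: "asc" };
const options = { pageSize: 25 };
const settings = { ...defaults, ...options };
console.log(settings); //{pageSize: 25, sort: "asc"}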

Property short-hands

Consider the next code:

function BookGateway(){
  function getBooks() {}
  function editBook() {}
  
  return {
    getBooks: getBooks,
    editBook: editBook
  }
}

With property short-hands, when the property name and the name of the variable used as the value are the same, we can just write the key once.

function BookGateway(){
  function getBooks() {}
  function editBook() {}
  
  return {
    getBooks,
    editBook
  }
}

Here is another example:

const todoStore = TodoStore();
const userStore = UserStore();
    
const stores = {
  todoStore,
  userStore
};

Destructuring assignment

Consider the next code:

function TodoStore(args){
  const helper = args.helper;
  const dataAccess = args.dataAccess;
  const userStore = args.userStore;
}

With destructuring assignment syntax, it can be written like this:

function TodoStore(args){
   const { 
      helper, 
      dataAccess, 
      userStore } = args;
}

or even better, with the destructuring syntax in the parameter list:

function TodoStore({ helper, dataAccess, userStore }){}

Below is the function call:

TodoStore({ 
  helper: {}, 
  dataAccess: {}, 
  userStore: {} 
});
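
Destructuring can also be combined with default values, so missing properties fall back to a default. A small sketch reusing the same parameters:

function TodoStore({ helper = {}, dataAccess = {}, userStore = {} } = {}){
  //helper, dataAccess and userStore are never undefined here
}
TodoStore({ helper: {} }); //dataAccess and userStore use the defaults
TodoStore();               //all three parameters use the defaults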

Default parameters

Functions can have default parameters. Look at the next example:

function log(message, mode = "Info"){
  console.log(mode + ": " + message);
}
log("An info");
//Info: An info
log("An error", "Error");
//Error: An error

Template string literals

Template strings are defined with the backtick (`) character. With template strings, the previous logging message can be written like this:

function log(message, mode = "Info"){
  console.log(`${mode}: ${message}`);
}

Template strings can be defined on multiple lines. However, a better option is to keep the long text messages as resources, in a database for example.

See below a function that generates an HTML string spanning multiple lines:

function createTodoItemHtml(todo){
  return `<li>
    <div>${todo.title}</div>
    <div>${todo.userName}</div>
  </li>`;
}

Proper tail-calls

A recursive function is tail recursive when the recursive call is the last thing the function does. Tail recursive functions can perform better than non-tail-recursive functions: with the optimization, the recursive call does not create a new stack frame for each call, it reuses a single stack frame.

ES6 brings the tail-call optimization in strict mode.

The following function should benefit from the tail-call optimization.

function print(from, to){
  const n = from;
  if (n > to)  return;
  
  console.log(n);
  //the last statement is the recursive call 
  print(n + 1, to); 
}
print(1, 10);

Note: tail-call optimization is not supported by most major browser engines (at the time of writing, only Safari implements it).

Promises

A promise is an object that represents the eventual result of an asynchronous operation. Promises are easier to combine than callbacks. As you can see in the next example, it is easy to call a function when all promises are resolved, or when the first one settles.

function getTodos() { return fetch("/todos"); }
function getUsers() { return fetch("/users"); }
function getAlbums(){ return fetch("/albums"); }
const getPromises = [
  getTodos(), 
  getUsers(), 
  getAlbums()
];
Promise.all(getPromises).then(doSomethingWhenAll);
Promise.race(getPromises).then(doSomethingWhenOne);
function doSomethingWhenAll(){}
function doSomethingWhenOne(){}

The fetch() function, part of the Fetch API, returns a promise.

Promise.all() returns a promise that resolves when all of the input promises have resolved (and rejects as soon as one of them rejects). Promise.race() returns a promise that settles as soon as the first of the input promises resolves or rejects.

A promise can be in one of three states: pending, resolved, or rejected. A promise stays pending until it is either resolved or rejected.

Promises support a chaining system that allows you to pass data through a set of functions. In the next example, the result of getTodos() is passed as input to toJson(), then its result is passed as input to getTopPriority(), and then its result is passed as input to renderTodos(). When an error is thrown or a promise is rejected, handleError() is called.

getTodos()
  .then(toJson)
  .then(getTopPriority)
  .then(renderTodos)
  .catch(handleError);
function toJson(response){}
function getTopPriority(todos){}
function renderTodos(todos){}
function handleError(error){}

In the previous example, .then() handles the success scenario and .catch() handles the error scenario. If there is an error at any step, the chain control jumps to the closest rejection handler down the chain.

Promise.resolve() returns a resolved promise. Promise.reject() returns a rejected promise.
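
A minimal sketch of both:

Promise.resolve(1)
  .then(value => console.log(value)); //1

Promise.reject(new Error("Something failed"))
  .catch(error => console.log(error.message)); //"Something failed"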

Class

class is syntactic sugar for creating objects with a custom prototype. It has a better syntax than the previous approach, the function constructor. Check out the next example:

class Service {
  doSomething(){ console.log("doSomething"); }
}
let service = new Service();
console.log(service.__proto__ === Service.prototype); //true

All methods defined in the Service class will be added to the Service.prototype object. Instances of the Service class share the same prototype (Service.prototype) and delegate method calls to it. Methods are defined once on Service.prototype and then inherited by all instances.
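
A quick check (building on the previous Service class) shows that instances share the same method through the prototype:

const service1 = new Service();
const service2 = new Service();
console.log(service1.doSomething === service2.doSomething); //true
console.log(Object.getPrototypeOf(service1) === Service.prototype); //true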

Inheritance

Classes can “inherit” from other classes. Below is an example of inheritance where the SpecialService class “inherits” from the Service class:

class Service {
  doSomething(){ console.log("doSomething"); }
}
class SpecialService extends Service {
  doSomethingElse(){ console.log("doSomethingElse"); }  
}
let specialService = new SpecialService();
specialService.doSomething();
specialService.doSomethingElse();

All methods defined in the SpecialService class will be added to the SpecialService.prototype object. All instances will delegate method calls to the SpecialService.prototype object. If the method is not found in SpecialService.prototype, it will be searched in the Service.prototype object. If it is still not found, it will be searched in Object.prototype.
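
The prototype chain from the previous example can be verified directly:

console.log(Object.getPrototypeOf(SpecialService.prototype) === Service.prototype); //true
console.log(Object.getPrototypeOf(Service.prototype) === Object.prototype); //true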

Class can become a bad feature

Even if they seem encapsulated, all members of a class are public. You still need to manage problems with this losing context. The public API is mutable.

class can become a bad feature if you neglect the functional side of JavaScript. class may give the impression of a class-based language when JavaScript is both a functional programming language and a prototype-based language.

Encapsulated objects can be created with factory functions. Consider the next example:

function Service() {
  function doSomething(){ console.log("doSomething"); }
  
  return Object.freeze({
     doSomething
  });
}

This time all members are private by default. The public API is immutable. There is no need to manage issues with this losing context.
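
Here is a short sketch of how the frozen API behaves, continuing the previous factory function:

const service = Service();
service.doSomething(); //"doSomething"

//reassigning a method is ignored (it throws a TypeError in strict mode)
service.doSomething = function(){ console.log("something else"); };
service.doSomething(); //still "doSomething"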

class may be used as an exception when it is required by a component framework. This was the case with React, but it is no longer the case since React Hooks.

For more on why to favor factory functions, take a look at Class vs Factory function: exploring the way forward.

Arrow functions

Arrow functions can create anonymous functions on the fly. They can be used to create small callbacks, with a shorter syntax.

Let’s take a collection of to-dos. A to-do has an id, a title, and a completed boolean property. Now, consider the next code that selects only the titles from the collection:

const titles = todos.map(todo => todo.title);

or the next example selecting only the todos that are not completed:

const filteredTodos = todos.filter(todo => !todo.completed);

this

Arrow functions don’t have their own this and arguments. As a result, you may see the arrow function used to fix problems with this losing context. I think that the best way to avoid this problem is to not use this at all.
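
The next sketch illustrates the difference with a hypothetical Timer example: the arrow function passed to setInterval() takes this from the enclosing Timer() call, while a regular function would get its own this.

function Timer(){
  this.seconds = 0;
  setInterval(() => {
    //arrow function: this is the Timer instance created with new Timer()
    this.seconds += 1;
  }, 1000);
  //with function(){ this.seconds += 1; } instead, this would not point to
  //the Timer instance and seconds would never be updated
}
const timer = new Timer(); //timer.seconds increases every second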

Arrow functions can become a bad feature

Arrow functions can become a bad feature when used to the detriment of named functions. This will create readability and maintainability problems. Look at the next code written only with anonymous arrow functions:

const newTodos = todos
  .filter(todo => !todo.completed && todo.type === "RE")
  .map(todo => ({
    title: todo.title,
    userName: users[todo.userId].name
  }))
  .sort((todo1, todo2) =>
    todo1.userName.localeCompare(todo2.userName));

Now, check out the same logic refactored to pure functions with intention revealing names and decide which of them is easier to understand:

//partial() pre-applies users to toTodoView (e.g. _.partial from lodash)
const newTodos = todos
  .filter(isTopPriority)
  .map(partial(toTodoView, users))
  .sort(ascByUserName);

function isTopPriority(todo){
  return !todo.completed && todo.type === "RE";
}

function toTodoView(users, todo){
  return {
    title: todo.title,
    userName: users[todo.userId].name
  };
}

function ascByUserName(todo1, todo2){
  return todo1.userName.localeCompare(todo2.userName);
}

What’s more, anonymous arrow functions appear as (anonymous) in the Call Stack, which makes debugging harder.

For more on why to favor named functions, take a look at How to make your code better with intention-revealing function names.

Less code doesn’t necessarily mean more readable code. Look at the next example and see which version is easier for you to understand:

//with arrow function
const prop = key => obj => obj[key];
//with function keyword
function prop(key){
   return function(obj){
      return obj[key];
   }
}

Pay attention when returning an object literal. In the next example, getSampleTodo() returns undefined because the braces are parsed as a function body, not as an object literal.

const getSampleTodo = () => { title : "A sample todo" };
getSampleTodo();
//undefined
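
Wrapping the object literal in parentheses tells the parser it is an expression, not a function body:

const getSampleTodo = () => ({ title: "A sample todo" });
getSampleTodo();
//{title: "A sample todo"}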

Generators

I think the ES6 generator is an unnecessary feature that makes code more complicated.

Calling a generator function creates a generator object that has a next() method. Each call to next() returns an object with a value property. ES6 generators promote the use of loops. Take a look at the code below:

function* sequence(){
  let count = 0;
  while(true) {
    count += 1;
    yield count;
  }
}
const generator = sequence();
generator.next().value;//1
generator.next().value;//2
generator.next().value;//3

The same generator can be implemented more simply with a closure.

function sequence(){
  let count = 0;
  return function(){
    count += 1;
    return count;
  }
}
const generator = sequence();
generator();//1
generator();//2
generator();//3

For more examples with functional generators, take a look at Let’s experiment with functional generators and the pipeline operator in JavaScript.

Conclusion

let and const declare and initialize variables.

Modules encapsulate functionality and expose only a small part.

The spread operator, rest parameter, and property shorthand make things easier to express.

Promises and tail recursion complete the functional programming toolbox.

JavaScript developers: should you be using Web Workers?

Do you think JavaScript developers should be making more use of Web Workers to shift execution off of the main thread?

Originally published by David Gilbertson at https://medium.com

So, Web Workers. Those wonderful little critters that allow us to execute JavaScript off the main thread.

Also known as “no, you’re thinking of Service Workers”.

Photo by Caleb Jones on Unsplash

Before I get into the meat of the article, please sit for a lesson in how computers work:

Understood? Good.

For the red/green colourblind, let me explain. While a CPU is doing one thing, it can’t be doing another thing, which means you can’t sort a big array while a user scrolls the screen.

This is bad, if you have a big array and users with fingers.

Enter, Web Workers. These split open the atomic concept of a ‘CPU’ and allow us to think in terms of threads. We can use one thread to handle user-facing work like touch events and rendering the UI, and different threads to carry out all other work.

Check that out, the main thread is green the whole way through, ready to receive and respond to the gentle caress of a user.

You’re excited (I can tell), if we only have UI code on the main thread and all other code can go in a worker, things are going to be amazing (said the way Oprah would say it).

But cool your jets for just a moment, because websites are mostly about the UI — it’s why we have screens. And a lot of a user’s interactions with your site will be tapping on the screen, waiting for a response, reading, tapping, looking, reading, and so on.

So we can’t just say “here’s some JS that takes 20ms to run, chuck it on a thread”, we must think about where that execution time exists in the user’s world of tap, read, look, read, tap…

I like to boil this down to one specific question:

Is the user waiting anyway?

Imagine we have created some sort of git-repository-hosting website that shows all sorts of things about a repository. We have a cool feature called ‘issues’. A user can even click an ‘issues’ tab in our website to see a list of all issues relating to the repository. Groundbreaking!

When our users click this issues tab, the site is going to fetch the issue data, process it in some way — perhaps sort, or format dates, or work out which icon to show — then render the UI.

Inside the user’s computer, that’ll look exactly like this.

Look at that processing stage, locking up the main thread even though it has nothing to do with the UI! That’s terrible, in theory.

But think about what the human is actually doing at this point. They’re waiting for the common trio of network/process/render; just sittin’ around with less to do than the Bolivian Navy.

Because we care about our users, we show a loading indicator to let them know we’ve received their request and are working on it — putting the human in a ‘waiting’ state. Let’s add that to the diagram.

Now that we have a human in the picture, we can mix in a Web Worker and think about the impact it will have on their life:

Hmmm.

First thing to note is that we’re not doing anything in parallel. We need the data from the network before we process it, and we need to process the data before we can render the UI. The elapsed time doesn’t change.

(BTW, the time involved in moving data to a Web Worker and back is negligible: 1ms per 100 KB is a decent rule of thumb.)
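
To make the mechanics concrete, here is a rough sketch of that round trip (the file name, bigArray, and renderList are made up for illustration): the main thread and the worker communicate by posting messages.

//main thread
const worker = new Worker("sort-worker.js"); //hypothetical worker file
worker.onmessage = (event) => {
  renderList(event.data); //placeholder for your UI code
};
worker.postMessage(bigArray); //placeholder for your data

//sort-worker.js
onmessage = (event) => {
  const sorted = event.data.slice().sort();
  postMessage(sorted); //send the result back to the main thread
};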

So we can move work off the main thread and have a page that is responsive during that time, but to what end? If our user is sitting there looking at a spinner for 600ms, have we enriched their experience by having a responsive screen for the middle third?

No.

I’ve fudged these diagrams a little bit to make them the gorgeous specimens of graphic design that they are, but they’re not really to scale.

When responding to a user request, you’ll find that the network and DOM-manipulating part of any given task take much, much longer than the pure-JS data processing part.

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread).

Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

First, let’s split instances of ‘updating a store’ into two categories:

  1. Updating a store in response to a user interaction, then updating the UI in response to the data change
  2. Not that first one

In the first scenario, a user taps a button on the screen — perhaps to change the sort order of a list. The store updates, and this results in a re-rendering of the DOM (since that’s the point of a store).

Let me just delete one thing from the previous diagram:

In my experience, it is rare that the store-updating step goes beyond a few dozen milliseconds, and is generally followed by ten times that in DOM updating, layout, and paint. If I’ve got a site that’s taking longer than this, I’d be asking questions about why I have so much data in the browser and so much DOM, rather than on which thread I should do my processing.

So the question we’re faced with is the same one from above: the user tapped something on the screen, we’re going to work on that request for hopefully less than a second, why would we want to make the screen responsive during that time?

OK what about the second scenario, where a store update isn’t in response to a user interaction? Performing an auto-save, for example — there’s nothing more annoying than an app becoming unresponsive doing something you didn’t ask it to do.

Actually there’s heaps of things more annoying than that. Teens, for example.

Anyhoo, if you’re doing an auto-save and taking 100ms to process data client-side before sending it off to a server, then you should absolutely use a Web Worker.

In fact, any ‘background’ task that the user hasn’t asked for, or isn’t waiting for, is a good candidate for moving to a Web Worker.
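
As a hedged sketch of that auto-save case (the file name and endpoint below are made up), the expensive processing moves into the worker while the main thread only sends and receives messages:

//main thread: the user isn't waiting for any of this
const autoSaveWorker = new Worker("auto-save-worker.js"); //hypothetical file
autoSaveWorker.onmessage = (event) => {
  fetch("/save", { method: "POST", body: event.data }); //hypothetical endpoint
};

function autoSave(appState){
  autoSaveWorker.postMessage(appState);
}

//auto-save-worker.js: the ~100ms of client-side processing happens here
onmessage = (event) => {
  const payload = JSON.stringify(event.data); //stand-in for the real processing
  postMessage(payload);
};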

The matter of value

Complexity is expensive, and implementing Web Workers ain’t cheap.

If you’re using a bundler — and you are — you’ll have a lot of reading to do, and probably npm packages to install. If you’ve got a create-react-app app, prepare to eject (and put aside two days twice a year to update 30 different packages when the next version of Babel/Redux/React/ESLint comes out).

Also, if you want to share anything fancier than plain data between a worker and the main thread you’ve got some more reading to do (comlink is your friend).

What I’m getting at is this: if the benefit is real, but minimal, then you’ve gotta ask if there’s something else you could spend a day or two on with a greater benefit to your users.

This thinking is true of everything, of course, but I’ve found that Web Workers have a particularly poor benefit-to-effort ratio.

Hey David, why you hate Web Workers so bad?

Good question.

This is a doweling jig:

I own a doweling jig. I love my doweling jig. If I need to drill a hole into the end of a piece of wood and ensure that it’s perfectly perpendicular to the surface, I use my doweling jig.

But I don’t use it to eat breakfast. For that I use a spoon.

Four years ago I was working on some fancy animations. They looked slick on a fast device, but janky on a slow one. So I wrote fireball-js, which executes a rudimentary performance benchmark on the user’s device and returns a score, allowing me to run my animations only on devices that would render them smoothly.

Where’s the best spot to run some CPU intensive code that the user didn’t request? On a different thread, of course. A Web Worker was the correct tool for the job.

Fast forward to 2019 and you’ll find me writing a routing algorithm for a mapping application. This requires parsing a big fat GeoJSON map into a collection of nodes and edges, to be used when a user asks for directions. The processing isn’t in response to a user request and the user isn’t waiting on it. And so, a Web Worker is the correct tool for the job.

It was only when doing this that it dawned on me: in the intervening quartet of years, I have seen exactly zero other instances where Web Workers would have improved the user experience.

Contrast this with a recent resurgence in Web Worker wonderment, and combine that contrast with the fact that I couldn’t think of anything else to write about, then concatenate that combined contrast with my contrarian character and you’ve got yourself a blog post telling you that maybe Web Workers are a teeny-tiny bit overhyped.

Thanks for reading

Further reading

An Introduction to Web Workers

JavaScript Web Workers: A Beginner’s Guide

Using Web Workers to Real-time Processing

How to use Web Workers in Angular app

Using Web Workers with Angular CLI

