ES5 to ESNext — here’s every feature added to JavaScript since 2015

I wrote this article to help you move from pre-ES6 knowledge of JavaScript and get you quickly up to speed with the most recent advancements of the language.

JavaScript today is in the privileged position to be the only language that can run natively in the browser, and is highly integrated and optimized for that.

The future of JavaScript is going to be brilliant. Keeping up with the changes shouldn’t be harder than it already is, and my goal here is to give you a quick yet comprehensive overview of the new stuff available to us.

Table of Contents

Introduction to ECMAScript

ES2015

  • let and const
  • Arrow Functions
  • Classes
  • Default parameters
  • Template Literals
  • Destructuring assignments
  • Enhanced Object Literals
  • For-of loop
  • Promises
  • Modules
  • New String methods
  • New Object methods
  • The spread operator
  • Set
  • Map
  • Generators

ES2016

  • Array.prototype.includes()
  • Exponentiation Operator

ES2017

  • String padding
  • Object.values()
  • Object.entries()
  • Object.getOwnPropertyDescriptors()
  • Trailing commas
  • Async functions
  • Shared Memory and Atomics

ES2018

  • Rest/Spread Properties
  • Asynchronous iteration
  • Promise.prototype.finally()
  • Regular Expression improvements

ESNext


Introduction to ECMAScript

Whenever you read about JavaScript you’ll inevitably see one of these terms: ES3, ES5, ES6, ES7, ES8, ES2015, ES2016, ES2017, ECMAScript 2017, ECMAScript 2016, ECMAScript 2015… what do they mean?

They are all referring to a standard, called ECMAScript.

ECMAScript is the standard upon which JavaScript is based, and it’s often abbreviated to ES.

Besides JavaScript, other languages implement (or implemented) ECMAScript, including:

  • ActionScript, the Flash scripting language
  • JScript, the Microsoft scripting dialect

but of course JavaScript is the most popular and widely used implementation of ES.

Why this weird name? Ecma International is a Swiss standards association which is in charge of defining international standards.

When JavaScript was created, it was presented by Netscape and Sun Microsystems to Ecma and they gave it the name ECMA-262 alias ECMAScript.

This press release by Netscape and Sun Microsystems (the maker of Java) might help explain the name choice, which, according to Wikipedia, might have involved legal and branding issues with Microsoft, which was also on the committee.

After IE9, Microsoft stopped branding its ES support in browsers as JScript and started calling it JavaScript (at least, I could not find references to it any more).

So as of 201x, the only popular language supporting the ECMAScript spec is JavaScript.

Current ECMAScript version

The current ECMAScript version is ES2018.

It was released in June 2018.

What is TC39

TC39 is the committee that evolves JavaScript.

The members of TC39 are companies involved in JavaScript and browser vendors, including Mozilla, Google, Facebook, Apple, Microsoft, Intel, PayPal, SalesForce and others.

Every standard version proposal must go through various stages, which are explained here.

ES Versions

I found it puzzling why sometimes an ES version is referenced by edition number and sometimes by year, and the year being one less than you might expect from the edition number (ES6 is ES2015) only adds to the general confusion around JS/ES 😄

Before ES2015, ECMAScript specifications were commonly called by their edition. So ES5 is the official name for the ECMAScript specification update published in 2009.

Why does this happen? During the process that led to ES2015, the name was changed from ES6 to ES2015, but since this was done late, people still referenced it as ES6, and the community has not left the edition naming behind — the world is still calling ES releases by edition number.

This table should clear things up a bit:

| Edition | Official name   | Year published |
| ------- | --------------- | -------------- |
| ES5     | ECMAScript 5    | 2009           |
| ES6     | ECMAScript 2015 | 2015           |
| ES7     | ECMAScript 2016 | 2016           |
| ES8     | ECMAScript 2017 | 2017           |
| ES9     | ECMAScript 2018 | 2018           |

Let’s dive into the specific features added to JavaScript since ES5. Let’s start with the ES2015 features.

let and const

Until ES2015, var was the only construct available for defining variables.

var a = 0

If you forget to add var you will be assigning a value to an undeclared variable, and the results might vary.

In modern environments, with strict mode enabled, you will get an error. In older environments (or with strict mode disabled) this will initialize the variable and assign it to the global object.

If you don’t initialize the variable when you declare it, it will have the undefined value until you assign a value to it.

var a //typeof a === 'undefined'

You can redeclare the variable many times, overriding it:

var a = 1
var a = 2

You can also declare multiple variables at once in the same statement:

var a = 1, b = 2

The scope is the portion of code where the variable is visible.

A variable initialized with var outside of any function is assigned to the global object, has a global scope and is visible everywhere. A variable initialized with var inside a function is assigned to that function, it’s local and is visible only inside it, just like a function parameter.

Any variable defined in a function with the same name as a global variable takes precedence over the global variable, shadowing it.

It’s important to understand that a block (identified by a pair of curly braces) does not define a new scope. A new scope is only created when a function is created, because var does not have block scope, but function scope.

Inside a function, any variable defined in it is visible throughout the whole function body. Even if a variable is declared at the end of the function, it can still be referenced at the beginning, because before executing the code JavaScript moves all variable declarations to the top (this is called hoisting). To avoid confusion, always declare variables at the beginning of a function.
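
Here is a minimal sketch of hoisting in action; note that only the declaration is hoisted, not the assignment:

function example() {
  console.log(a) //undefined, not a ReferenceError: the declaration of `a` was hoisted
  var a = 1
  console.log(a) //1
}

example()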

Using let

let is a new feature introduced in ES2015 and it’s essentially a block scoped version of var. Its scope is limited to the block, statement or expression where it’s defined, and all the contained inner blocks.

Modern JavaScript developers might choose to only use let and completely discard the use of var.

If _let_ seems an obscure term, just read _let color = 'red'_ as let the color be red and it all makes much more sense
Defining let outside of any function, contrary to var, does not create a property on the global object.
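
A minimal sketch of the difference in scoping between var and let:

if (true) {
  var a = 1 //function (or global) scoped
  let b = 2 //block scoped
}

console.log(a) //1
console.log(b) //ReferenceError: b is not defined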

Using const

Variables declared with var or let can be changed later on in the program, and reassigned. Once a const is initialized, its value can never be changed again, and it can’t be reassigned to a different value.

const a = 'test'

We can’t assign a different literal to the a const. We can however mutate a if it’s an object that provides methods that mutate its contents.

const does not provide immutability, just makes sure that the reference can’t be changed.
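
For example, an object declared with const can still be mutated, but the binding itself cannot be reassigned:

const person = { name: 'Flavio' }
person.name = 'Roger' //allowed: we are mutating the object's contents
person = { name: 'Roger' } //TypeError: Assignment to constant variable.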

const has block scope, same as let.

Modern JavaScript developers might choose to always use const for variables that don’t need to be reassigned later in the program, because we should always use the simplest construct available to avoid making errors down the road.

Arrow Functions

Arrow functions, since their introduction, changed forever how JavaScript code looks (and works).

In my opinion this change was so welcome that you now rarely see the function keyword in modern codebases, although it still has its uses.

Visually, it’s a simple and welcome change, which allows you to write functions with a shorter syntax, from:

const myFunction = function() {
  //...
}

to

const myFunction = () => {
  //...
}

If the function body contains just a single statement, you can omit the brackets and write all on a single line:

const myFunction = () => doSomething()

Parameters are passed in the parentheses:

const myFunction = (param1, param2) => doSomething(param1, param2)

If you have one (and just one) parameter, you could omit the parentheses completely:

const myFunction = param => doSomething(param)

Thanks to this short syntax, arrow functions encourage the use of small functions.

Implicit return

Arrow functions allow you to have an implicit return: values are returned without having to use the return keyword.

It works when there is a one-line statement in the function body:

const myFunction = () => 'test'
myFunction() //'test'

Another example: when returning an object, remember to wrap the curly braces in parentheses to avoid them being interpreted as the function body braces:

const myFunction = () => ({ value: 'test' })
myFunction() //{value: 'test'}

How this works in arrow functions

this is a concept that can be complicated to grasp, as it varies a lot depending on the context and also varies depending on the mode of JavaScript (strict mode or not).

It’s important to clarify this concept because arrow functions behave very differently compared to regular functions.

When defined as a method of an object, in a regular function this refers to the object, so you can do:

const car = {
  model: 'Fiesta',
  manufacturer: 'Ford',
  fullName: function() {
    return `${this.manufacturer} ${this.model}`
  }
}

calling car.fullName() will return "Ford Fiesta".

The this scope with arrow functions is inherited from the execution context. An arrow function does not bind this at all, so its value will be looked up in the call stack, so in this code car.fullName() will not work, and will return the string "undefined undefined":

const car = {
  model: 'Fiesta',
  manufacturer: 'Ford',
  fullName: () => {
    return `${this.manufacturer} ${this.model}`
  }
}

Due to this, arrow functions are not suited as object methods.

Arrow functions cannot be used as constructors either: instantiating an object from one will raise a TypeError.

This is where regular functions should be used instead, when dynamic context is not needed.
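
A quick sketch of what happens if you try (Car is just a hypothetical name):

const Car = model => {
  this.model = model
}

new Car('Fiesta') //TypeError: Car is not a constructor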

This is also a problem when handling events. DOM Event listeners set this to be the target element, and if you rely on this in an event handler, a regular function is necessary:

const link = document.querySelector('#link')
link.addEventListener('click', () => {
  // this === window
})

const link = document.querySelector('#link')
link.addEventListener('click', function() {
  // this === link
})

Classes

JavaScript has quite an uncommon way to implement inheritance: prototypical inheritance. Prototypal inheritance, while in my opinion great, is unlike most other popular programming language’s implementation of inheritance, which is class-based.

People coming from Java or Python or other languages had a hard time understanding the intricacies of prototypal inheritance, so the ECMAScript committee decided to sprinkle syntactic sugar on top of prototypical inheritance so that it resembles how class-based inheritance works in other popular implementations.

This is important: JavaScript under the hood is still the same, and you can access an object prototype in the usual way.

A class definition

This is how a class looks.

class Person {
  constructor(name) {
    this.name = name
  }

  hello() {
    return 'Hello, I am ' + this.name + '.'
  }
}

A class has an identifier, which we can use to create new objects using new ClassIdentifier().

When the object is initialized, the constructor method is called, with any parameters passed.

A class also has as many methods as it needs. In this case hello is a method and can be called on all objects derived from this class:

const flavio = new Person('Flavio')
flavio.hello()

Class inheritance

A class can extend another class, and objects initialized using that class inherit all the methods of both classes.

If the inherited class has a method with the same name as one of the classes higher in the hierarchy, the closest method takes precedence:

class Programmer extends Person {
  hello() {
    return super.hello() + ' I am a programmer.'
  }
}

const flavio = new Programmer('Flavio')
flavio.hello()

(the above program prints “Hello, I am Flavio. I am a programmer.”)

Classes do not have explicit class variable declarations, but you must initialize any variable in the constructor.

Inside a class, you can reference the parent class calling super().

Static methods

Normally methods are defined on the instance, not on the class.

Static methods are executed on the class instead:

class Person {
  static genericHello() {
    return 'Hello'
  }
}

Person.genericHello() //Hello

Private methods

JavaScript does not have a built-in way to define private or protected methods.

There are workarounds, but I won’t describe them here.

Getters and setters

You can add methods prefixed with get or set to create a getter and setter, which are two different pieces of code that are executed based on what you are doing: accessing the variable, or modifying its value.

class Person {
  constructor(name) {
    this._name = name //store the value in a backing property, so the setter doesn't call itself
  }

  set name(value) {
    this._name = value
  }

  get name() {
    return this._name
  }
}

If you only have a getter, the property cannot be set: any attempt at doing so will be ignored (or will throw a TypeError in strict mode, which class bodies always use):

class Person {
  constructor(name) {
    this._name = name
  }

  get name() {
    return this._name
  }
}

If you only have a setter, you can change the value but not access it from the outside:

class Person {
  constructor(name) {
    this._name = name
  }

  set name(value) {
    this._name = value
  }
}

Default parameters

This is a doSomething function which accepts param1.

const doSomething = (param1) => {
}

We can add a default value for param1 if the function is invoked without specifying a parameter:

const doSomething = (param1 = 'test') => {
}

This works for more parameters as well, of course:

const doSomething = (param1 = 'test', param2 = 'test2') => {
}

What if you have a unique object with parameter values in it?

Once upon a time, if we had to pass an object of options to a function, in order to have default values of those options if one of them was not defined, you had to add a little bit of code inside the function:

const colorize = (options) => {
  if (!options) {
    options = {}
  }

  const color = ('color' in options) ? options.color : 'yellow'
  ...
}

With destructuring you can provide default values, which simplifies the code a lot:

const colorize = ({ color = 'yellow' }) => {
  ...
}

If no object is passed when calling our colorize function, similarly we can assign an empty object by default:

const spin = ({ color = 'yellow' } = {}) => {
  ...
}

Template Literals

Template Literals allow you to work with strings in a novel way compared to ES5 and below.

The syntax at a first glance is very simple, just use backticks instead of single or double quotes:

const a_string = `something`

They are unique because they provide a lot of features that normal strings built with quotes do not, in particular:

  • let and const
  • Arrow Functions
  • Classes
  • Default parameters
  • Template Literals
  • Destructuring assignments
  • Enhanced Object Literals
  • For-of loop
  • Promises
  • Modules
  • New String methods
  • New Object methods
  • The spread operator
  • Set
  • Map
  • Generators

Let’s dive into each of these in detail.

Multiline strings

Pre-ES6, to create a string spanning over two lines you had to use the \ character at the end of a line:

const string =
  'first part \
second part'

This allows to create a string on 2 lines, but it’s rendered on just one line:

first part second part

To render the string on multiple lines as well, you explicitly need to add \n at the end of each line, like this:

const string =
  'first line\n \
second line'

or

const string = 'first line\n' + 'second line'

Template literals make multiline strings much simpler.

Once a template literal is opened with the backtick, you just press enter to create a new line, with no special characters, and it’s rendered as-is:

const string = `Hey
this
string
is awesome!`

Keep in mind that space is meaningful, so doing this:

const string = `First
                Second`

is going to create a string like this:

First
                Second

an easy way to fix this problem is by having an empty first line, and appending the trim() method right after the closing backtick, which will eliminate any space before the first character:

const string = `
First
Second`.trim()

Interpolation

Template literals provide an easy way to interpolate variables and expressions into strings.

You do so by using the ${...} syntax:

const variable = 'test'
const string = `something ${variable}` //something test

inside the ${} you can add anything, even expressions:

const string = `something ${1 + 2 + 3}`
const string2 = `something ${foo() ? 'x' : 'y'}`

Template tags

Tagged templates is one feature that might sound less useful at first for you, but it’s actually used by lots of popular libraries around, like Styled Components or Apollo, the GraphQL client/server lib, so it’s essential to understand how it works.

In Styled Components template tags are used to define CSS strings:

const Button = styled.button`
  font-size: 1.5em;
  background-color: black;
  color: white;
`

In Apollo template tags are used to define a GraphQL query schema:

const query = gql`
  query {
    ...
  }
`

The styled.button and gql template tags highlighted in those examples are just functions:

function gql(literals, ...expressions) {}

this function returns a string, which can be the result of any kind of computation.

literals is an array containing the template literal content tokenized by the expressions interpolations.

expressions contains all the interpolations.

If we take an example above:

const string = `something ${1 + 2 + 3}`

literals is an array with two items. The first is something, the string until the first interpolation, and the second is an empty string, the space between the end of the first interpolation (we only have one) and the end of the string.

expressions in this case is an array with a single item, 6.

A more complex example is:

const string = `something
another ${'x'}
new line ${1 + 2 + 3}
test`

in this case literals is an array where the first item is:

`something
another `

the second is:

`new line `

and the third is:

`test`

expressions in this case is an array with two items, x and 6.

The function that is passed those values can do anything with them, and this is the power of this kind feature.

The most simple example is replicating what the string interpolation does, by joining literals and expressions:

const interpolated = interpolate`I paid ${10}€`

and this is how interpolate works:

function interpolate(literals, ...expressions) {
  let string = ``
  for (const [i, val] of expressions.entries()) {
    string += literals[i] + val
  }
  string += literals[literals.length - 1]
  return string
}

Destructuring assignments

Given an object, you can extract just some values and put them into named variables:

const person = {
  firstName: 'Tom',
  lastName: 'Cruise',
  actor: true,
  age: 54 //made up
}

const { firstName: name, age } = person

name and age contain the desired values.

The syntax also works on arrays:

const a = [1, 2, 3, 4, 5]
const [first, second] = a

This statement creates 3 new variables by getting the items with index 0, 1, 4 from the array a:

const [first, second, , , fifth] = a

Enhanced Object Literals

In ES2015 Object Literals gained superpowers.

Simpler syntax to include variables

Instead of doing

const something = 'y'
const x = {
  something: something
}

you can do

const something = 'y'
const x = {
  something
}

Prototype

A prototype can be specified with

const anObject = { y: 'y' }
const x = {
  __proto__: anObject
}

super()

const anObject = { y: 'y', test: () => 'zoo' }
const x = {
  __proto__: anObject,
  test() {
    return super.test() + 'x'
  }
}

x.test() //zoox

Dynamic properties

const x = {
  ['a' + '_' + 'b']: 'z'
}

x.a_b //z

For-of loop

ES5 back in 2009 introduced forEach() loops. While nice, they offered no way to break, like for loops always did.

ES2015 introduced the **for-of** loop, which combines the conciseness of forEach with the ability to break:

//iterate over the value
for (const v of ['a', 'b', 'c']) {
  console.log(v)
}

//get the index as well, using `entries()`
for (const [i, v] of ['a', 'b', 'c'].entries()) {
  console.log(i) //index
  console.log(v) //value
}

Notice the use of const. This loop creates a new scope in every iteration, so we can safely use that instead of let.

The difference with for...in is:

  • for...of iterates over the property values
  • for...in iterates over the property names
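
A minimal sketch of both points, breaking out of a for-of loop and the difference with for...in:

const letters = ['a', 'b', 'c']

for (const v of letters) {
  if (v === 'b') break //for-of lets you break, unlike forEach()
  console.log(v) //'a'
}

for (const i in letters) {
  console.log(i) //'0', '1', '2': the property names (the indexes)
}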

Promises

A promise is commonly defined as a proxy for a value that will eventually become available.

Promises are one way to deal with asynchronous code, without writing too many callbacks in your code.

Async functions use the promises API as their building block, so understanding them is fundamental even if in newer code you’ll likely use async functions instead of promises.

How promises work, in brief

Once a promise has been called, it will start in pending state. This means that the caller function continues the execution, while it waits for the promise to do its own processing, and give the caller function some feedback.

At this point, the caller function waits for it to either return the promise in a resolved state, or in a rejected state, but as you know JavaScript is asynchronous, so the function continues its execution while the promise does it work.

Which JS APIs use promises?

In addition to your own code and library code, promises are used by standard modern Web APIs such as:

  • the Battery API
  • the Fetch API
  • Service Workers

It’s unlikely that in modern JavaScript you’ll find yourself not using promises, so let’s start diving right into them.

Creating a promise

The Promise API exposes a Promise constructor, which you initialize using new Promise():

let done = true

const isItDoneYet = new Promise((resolve, reject) => {
  if (done) {
    const workDone = 'Here is the thing I built'
    resolve(workDone)
  } else {
    const why = 'Still working on something else'
    reject(why)
  }
})

As you can see the promise checks the done global constant, and if that’s true, we return a resolved promise, otherwise a rejected promise.

Using resolve and reject we can communicate back a value, in the above case we just return a string, but it could be an object as well.

Consuming a promise

In the last section, we introduced how a promise is created.

Now let’s see how the promise can be consumed or used.

const isItDoneYet = new Promise()
//...

const checkIfItsDone = () => {
  isItDoneYet
    .then(ok => {
      console.log(ok)
    })
    .catch(err => {
      console.error(err)
    })
}

Running checkIfItsDone() will execute the isItDoneYet() promise and will wait for it to resolve, using the then callback, and if there is an error, it will handle it in the catch callback.

Chaining promises

A promise can be returned to another promise, creating a chain of promises.

A great example of chaining promises is given by the Fetch API, a layer on top of the XMLHttpRequest API, which we can use to get a resource and queue a chain of promises to execute when the resource is fetched.

The Fetch API is a promise-based mechanism, and calling fetch() is equivalent to defining our own promise using new Promise().

Example of chaining promises

const status = response => {
  if (response.status >= 200 && response.status < 300) {
    return Promise.resolve(response)
  }
  return Promise.reject(new Error(response.statusText))
}

const json = response => response.json()

fetch('/todos.json')
  .then(status)
  .then(json)
  .then(data => {
    console.log('Request succeeded with JSON response', data)
  })
  .catch(error => {
    console.log('Request failed', error)
  })

In this example, we call fetch() to get a list of TODO items from the todos.json file found in the domain root, and we create a chain of promises.

Running fetch() returns a response, which has many properties, and within those we reference:

  • status, a numeric value representing the HTTP status code
  • statusText, a status message, which is OK if the request succeeded

response also has a json() method, which returns a promise that will resolve with the content of the body processed and transformed into JSON.

So given those premises, this is what happens: the first promise in the chain is a function that we defined, called status(), that checks the response status and if it’s not a success response (between 200 and 299), it rejects the promise.

This operation will cause the promise chain to skip all the chained promises listed and will skip directly to the catch() statement at the bottom, logging the Request failed text along with the error message.

If that succeeds instead, it calls the json() function we defined. Since the previous promise, when successful, returned the response object, we get it as an input to the second promise.

In this case, we return the data JSON processed, so the third promise receives the JSON directly:

.then((data) => {
  console.log('Request succeeded with JSON response', data)
})

and we log it to the console.

Handling errors

In the above example, in the previous section, we had a catch that was appended to the chain of promises.

When anything in the chain of promises fails and raises an error or rejects the promise, the control goes to the nearest catch() statement down the chain.

new Promise((resolve, reject) => {
  throw new Error('Error')
}).catch(err => {
  console.error(err)
})

// or

new Promise((resolve, reject) => {
  reject('Error')
}).catch(err => {
  console.error(err)
})

Cascading errors

If inside the catch() you raise an error, you can append a second catch() to handle it, and so on.

new Promise((resolve, reject) => {
  throw new Error('Error')
})
  .catch(err => {
    throw new Error('Error')
  })
  .catch(err => {
    console.error(err)
  })

Orchestrating promises

Promise.all()

If you need to synchronize different promises, Promise.all() helps you define a list of promises, and execute something when they are all resolved.

Example:

const f1 = fetch('/something.json')
const f2 = fetch('/something2.json')

Promise.all([f1, f2])
  .then(res => {
    console.log('Array of results', res)
  })
  .catch(err => {
    console.error(err)
  })

The ES2015 destructuring assignment syntax allows you to also do

Promise.all([f1, f2]).then(([res1, res2]) => {
  console.log('Results', res1, res2)
})

You are not limited to using fetch of course, any promise is good to go.

Promise.race()

Promise.race() runs as soon as one of the promises you pass to it resolves, and it runs the attached callback just once with the result of the first promise resolved.

Example:

const promiseOne = new Promise((resolve, reject) => {
  setTimeout(resolve, 500, 'one')
})
const promiseTwo = new Promise((resolve, reject) => {
  setTimeout(resolve, 100, 'two')
})

Promise.race([promiseOne, promiseTwo]).then(result => {
  console.log(result) // 'two'
})

Modules

ES Modules is the ECMAScript standard for working with modules.

While Node.js has been using the CommonJS standard for years, the browser never had a module system, as every major decision such as a module system must be first standardized by ECMAScript and then implemented by the browser.

This standardization process completed with ES2015 and browsers started implementing this standard trying to keep everything well aligned, working all in the same way, and now ES Modules are supported in Chrome, Safari, Edge and Firefox (since version 60).

Modules are very cool, because they let you encapsulate all sorts of functionality, and expose this functionality to other JavaScript files, as libraries.

The ES Modules Syntax

The syntax to import a module is:

import package from 'module-name'

while CommonJS uses

const package = require('module-name')

A module is a JavaScript file that exports one or more values (objects, functions or variables), using the export keyword. For example, this module exports a function that returns a string uppercase:


export default str => str.toUpperCase()

In this example, the module defines a single, default export, so it can be an anonymous function. Otherwise it would need a name to distinguish it from other exports.

Now, any other JavaScript module can import the functionality offered by uppercase.js by importing it.

An HTML page can add a module by using a <script> tag with the special type="module" attribute:

<script type="module" src="index.js"></script>

It’s important to note that any script loaded with type="module" is loaded in strict mode.

In this example, the uppercase.js module defines a default export, so when we import it, we can assign it a name we prefer:

import toUpperCase from './uppercase.js'

and we can use it:

toUpperCase('test') //'TEST'

You can also use an absolute path for the module import, to reference modules defined on another domain:

import toUpperCase from 'https://flavio-es-modules-example.glitch.me/uppercase.js'

This is also valid import syntax:

import { toUpperCase } from '/uppercase.js'
import { toUpperCase } from '../uppercase.js'

This is not:

import { toUpperCase } from 'uppercase.js'
import { toUpperCase } from 'utils/uppercase.js'

It’s either absolute, or has a ./ or / before the name.

Other import/export options

We saw this example above:

export default str => str.toUpperCase()

This creates one default export. In a file however you can export more than one thing, by using this syntax:

const a = 1
const b = 2
const c = 3

export { a, b, c }

Another module can import all of those exports, as a single namespace object, using

import * as module from 'module'

You can import just a few of those exports, using the destructuring assignment:

import { a } from 'module'
import { a, b } from 'module'

You can rename any import, for convenience, using as:

import { a, b as two } from 'module'

You can import the default export, and any non-default export by name, like in this common React import:

import React, { Component } from 'react'

You can see an ES Modules example here: https://glitch.com/edit/#!/flavio-es-modules-example?path=index.html

CORS

Modules are fetched using CORS. This means that if you reference scripts from other domains, they must have a valid CORS header that allows cross-site loading (like Access-Control-Allow-Origin: *)

What about browsers that do not support modules?

Use a combination of type="module" and nomodule:

<script type="module" src="module.js"></script>
<script nomodule src="fallback.js"></script>

Wrapping up modules

ES Modules are one of the biggest features introduced in modern browsers. They are part of ES6 but the road to implement them has been long.

We can now use them! But we must also remember that having more than a few modules is going to have a performance hit on our pages, as it’s one more step that the browser must perform at runtime.

Webpack is probably going to still be a huge player even if ES Modules land in the browser, but having such a feature directly built in the language is huge for a unification of how modules work client-side and on Node.js as well.

New String methods

Any string value got some new instance methods:

  • repeat()
  • codePointAt()

repeat()

Repeats the strings for the specified number of times:

'Ho'.repeat(3) //'HoHoHo'

Returns an empty string if there is no parameter, or the parameter is 0. If the parameter is negative you’ll get a RangeError.

codePointAt()

This method can be used to handle Unicode characters that cannot be represented by a single 16-bit Unicode unit, but need 2 instead.

Using charCodeAt() you need to retrieve the first and the second code unit, and combine them. Using codePointAt() you get the whole character in one call.

For example, this Chinese character “𠮷” is composed of 2 UTF-16 code units:

"𠮷".charCodeAt(0).toString(16) //d842"𠮷".charCodeAt(1).toString(16) //dfb7

If you create a new character by combining those unicode characters:

"\ud842\udfb7" //"𠮷"

You can get the same result using codePointAt():

"𠮷".codePointAt(0) //20bb7

If you create a new character by combining those unicode characters:

"\u{20bb7}" //"𠮷"

More on Unicode and working with it in my Unicode guide.

New Object methods

ES2015 introduced several static methods under the Object namespace:

  • Object.is()
  • Object.assign()
  • Object.setPrototypeOf()

Object.is()

This method aims to help with comparing values.

Usage:

Object.is(a, b)

The result is always false unless:

  • a and b are the same exact object
  • a and b are equal strings (strings are equal when composed of the same characters)
  • a and b are equal numbers (numbers are equal when their value is equal)
  • a and b are both undefined, both null, both NaN, both true or both false

0 and -0 are different values in JavaScript, so pay attention in this special case (convert all to +0 using the + unary operator before comparing, for example).
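
A few concrete results, showing how Object.is() differs from the === operator:

Object.is('a', 'a') //true
Object.is({}, {}) //false: two different objects
Object.is(NaN, NaN) //true, while NaN === NaN is false
Object.is(0, -0) //false, while 0 === -0 is true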

Object.assign()

Introduced in ES2015, this method copies all the enumerable own properties of one or more objects into another.

Its primary use case is to create a shallow copy of an object.

const copied = Object.assign({}, original)

Being a shallow copy, primitive values are cloned while object references are copied (not the objects themselves), so if you edit an object property in the original object, it is modified in the copied object too, since the referenced inner object is the same:

const original = {
  name: 'Fiesta',
  car: {
    color: 'blue'
  }
}

const copied = Object.assign({}, original)

original.name = 'Focus'
original.car.color = 'yellow'

copied.name //Fiesta
copied.car.color //yellow

I mentioned “one or more”:

const wisePerson = {
  isWise: true
}

const foolishPerson = {
  isFoolish: true
}

const wiseAndFoolishPerson = Object.assign({}, wisePerson, foolishPerson)

console.log(wiseAndFoolishPerson) //{ isWise: true, isFoolish: true }

Object.setPrototypeOf()

Set the prototype of an object. Accepts two arguments: the object and the prototype.

Usage:

Object.setPrototypeOf(object, prototype)

Example:

const animal = {
  isAnimal: true
}

const mammal = {
  isMammal: true
}

mammal.__proto__ = animal
mammal.isAnimal //true

const dog = Object.create(animal)

dog.isAnimal //true
console.log(dog.isMammal) //undefined

Object.setPrototypeOf(dog, mammal)

dog.isAnimal //true
dog.isMammal //true

The spread operator

You can expand an array, an object or a string using the spread operator ...

Let’s start with an array example. Given

const a = [1, 2, 3]

you can create a new array using

const b = [...a, 4, 5, 6]

You can also create a copy of an array using

const c = [...a]

This works for objects as well. Clone an object with:

const newObj = { ...oldObj }

Using strings, the spread operator creates an array with each char in the string:

const hey = 'hey'
const arrayized = [...hey] // ['h', 'e', 'y']

This operator has some pretty useful applications. The most important one is the ability to use an array as function argument in a very simple way:

const f = (foo, bar) => {}
const a = [1, 2]
f(...a)

(In the past you could do this using f.apply(null, a) but that’s not as nice and readable.)

The rest element is useful when working with array destructuring:

const numbers = [1, 2, 3, 4, 5]
const [first, second, ...others] = numbers

and spread elements:

const numbers = [1, 2, 3, 4, 5]
const sum = (a, b, c, d, e) => a + b + c + d + e
const total = sum(...numbers) // 15

ES2018 introduces rest properties, which are the same but for objects.

Rest properties:

const { first, second, ...others } = {
  first: 1,
  second: 2,
  third: 3,
  fourth: 4,
  fifth: 5
}

first // 1
second // 2
others // { third: 3, fourth: 4, fifth: 5 }

Spread properties allow us to create a new object by combining the properties of the object passed after the spread operator:

const items = { first, second, ...others }
items //{ first: 1, second: 2, third: 3, fourth: 4, fifth: 5 }

Set

A Set data structure allows us to add data to a container.

A Set is a collection of objects or primitive types (strings, numbers or booleans), and you can think of it as a Map where values are used as map keys, with the map value always being a boolean true.

Initialize a Set

A Set is initialized by calling:

const s = new Set()

Add items to a Set

You can add items to the Set by using the add method:

s.add('one')
s.add('two')

A set only stores unique elements, so calling s.add('one') multiple times won’t add new items.

You can’t add multiple elements to a set at the same time. You need to call add() multiple times.

Check if an item is in the set

Once an element is in the set, we can check if the set contains it:

s.has('one') //true
s.has('three') //false

Delete an item from a Set by key

Use the delete() method:

s.delete('one')

Determine the number of items in a Set

Use the size property:

s.size

Delete all items from a Set

Use the clear() method:

s.clear()

Iterate the items in a Set

Use the keys() or values() methods - they are equivalent:

for (const k of s.keys()) {
  console.log(k)
}

for (const k of s.values()) {
  console.log(k)
}

The entries() method returns an iterator, which you can use like this:

const i = s.entries()
console.log(i.next())

calling i.next() will return each element as a { value, done: false } object until the iterator ends, at which point done is true.

You can also use the forEach() method on the set:

s.forEach(v => console.log(v))

or you can just use the set in a for…of loop:

for (const k of s) {  console.log(k)}

Initialize a Set with values

You can initialize a Set with a set of values:

const s = new Set([1, 2, 3, 4])

Convert the Set keys into an array

const a = [...s.keys()]
// or
const a = [...s.values()]

A WeakSet

A WeakSet is a special kind of Set.

In a Set, items are never garbage collected. A WeakSet instead lets all its items be freely garbage collected. Every key of a WeakSet is an object. When the reference to this object is lost, the value can be garbage collected.

Here are the main differences:

  1. you cannot iterate over the WeakSet
  2. you cannot clear all items from a WeakSet
  3. you cannot check its size

A WeakSet is generally used by framework-level code, and only exposes these methods:

  • add()
  • has()
  • delete()
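
A minimal sketch of WeakSet usage (the user object is just a hypothetical example):

const ws = new WeakSet()
let user = { name: 'Flavio' }

ws.add(user)
ws.has(user) //true
ws.delete(user)

user = null //once no other reference to the object exists, it can be garbage collected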

Map

A Map data structure allows us to associate data to a key.

Before ES6

Before its introduction, people generally used objects as maps, by associating some object or value to a specific key value:

const car = {}
car['color'] = 'red'
car.owner = 'Flavio'
console.log(car['color']) //red
console.log(car.color) //red
console.log(car.owner) //Flavio
console.log(car['owner']) //Flavio

Enter Map

ES6 introduced the Map data structure, providing us a proper tool to handle this kind of data organization.

A Map is initialized by calling:

const m = new Map()

Add items to a Map

You can add items to the map by using the set method:

m.set('color', 'red')
m.set('age', 2)

Get an item from a map by key

And you can get items out of a map by using get:

const color = m.get('color')
const age = m.get('age')

Delete an item from a map by key

Use the delete() method:

m.delete('color')

Delete all items from a map

Use the clear() method:

m.clear()

Check if a map contains an item by key

Use the has() method:

const hasColor = m.has('color')

Find the number of items in a map

Use the size property:

const size = m.size

Initialize a map with values

You can initialize a map with a set of values:

const m = new Map([['color', 'red'], ['owner', 'Flavio'], ['age', 2]])

Map keys

Just like any value (object, array, string, number) can be used as the value of the key-value entry of a map item, any value can be used as the key, even objects.

If you try to get a non-existing key using get() out of a map, it will return undefined.
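
For example, here is a sketch using an object as a key (the tag object is hypothetical):

const m = new Map()
const tag = { name: 'important' }

m.set(tag, 'this note is important')
m.get(tag) //'this note is important'
m.get({ name: 'important' }) //undefined: a different object, even if it looks the same
m.get('missing') //undefined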

Weird situations you’ll almost never find in real life

const m = new Map()
m.set(NaN, 'test')
m.get(NaN) //test

const m = new Map()
m.set(+0, 'test')
m.get(-0) //test

Iterate over map keys

Map offers the keys() method we can use to iterate on all the keys:

for (const k of m.keys()) {
  console.log(k)
}

Iterate over map values

The Map object offers the values() method we can use to iterate on all the values:

for (const v of m.values()) {
  console.log(v)
}

Iterate over map key, value pairs

The Map object offers the entries() method we can use to iterate on all the values:

for (const [k, v] of m.entries()) {
  console.log(k, v)
}

which can be simplified to

for (const [k, v] of m) {
  console.log(k, v)
}

Convert the map keys into an array

const a = [...m.keys()]

Convert the map values into an array

const a = [...m.values()]

WeakMap

A WeakMap is a special kind of map.

In a map object, items are never garbage collected. A WeakMap instead lets all its items be freely garbage collected. Every key of a WeakMap is an object. When the reference to this object is lost, the value can be garbage collected.

Here are the main differences:

  1. you cannot iterate over the WeakMap
  2. you cannot clear all items from a WeakMap
  3. you cannot check its size

A WeakMap exposes those methods, which are equivalent to the Map ones:

  • get()
  • set()
  • has()
  • delete()

The use cases of a WeakMap are less evident than the ones of a Map, and you might never find the need for them, but essentially it can be used to build a memory-sensitive cache that is not going to interfere with garbage collection, or for careful encapsulation and information hiding.
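
A minimal sketch of a WeakMap used as an object-keyed cache (the names here are hypothetical):

const cache = new WeakMap()
let user = { name: 'Flavio' }

cache.set(user, 'expensive computation result')
cache.get(user) //'expensive computation result'
cache.has(user) //true

user = null //once the key object is unreachable, its entry can be garbage collected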

Generators

Generators are a special kind of function with the ability to pause itself, and resume later, allowing other code to run in the meantime.

See the full JavaScript Generators Guide for a detailed explanation of the topic.

The code decides that it has to wait, so it lets other code “in the queue” to run, and keeps the right to resume its operations “when the thing it’s waiting for” is done.

All this is done with a single, simple keyword: yield. When a generator contains that keyword, the execution is halted.

A generator can contain many yield keywords, thus halting itself multiple times, and it’s identified by the function* keyword, whose asterisk is not to be confused with the pointer dereference operator used in lower level programming languages such as C, C++ or Go.

Generators enable whole new paradigms of programming in JavaScript, allowing:

  • 2-way communication while a generator is running
  • long-lived while loops which do not freeze your program

Here is an example of a generator which explains how it all works.

function *calculator(input) {
  var doubleThat = 2 * (yield (input / 2))
  var another = yield (doubleThat)
  return (input * doubleThat * another)
}

We initialize it with

const calc = calculator(10)

Then we start the iterator on our generator:

calc.next()

This first iteration starts the iterator. The code returns this object:

{
  done: false,
  value: 5
}

What happens is: the code runs the function, with input = 10 as it was passed in the generator constructor. It runs until it reaches the yield, and returns the content of yield: input / 2 = 5. So we got a value of 5, and the indication that the iteration is not done (the function is just paused).

In the second iteration we pass the value 7:

calc.next(7)

and what we got back is:

{
  done: false,
  value: 14
}

7 was placed as the value of doubleThat. Important: you might read it as if input / 2 was the argument, but that’s just the return value of the first iteration. We now skip that, use the new input value, 7, and multiply it by 2.

We then reach the second yield, and that returns doubleThat, so the returned value is 14.

In the next, and last, iteration, we pass in 100

calc.next(100)

and in return we got

{
  done: true,
  value: 14000
}

As the iteration is done (no more yield keywords found) and we just return (input * doubleThat * another) which amounts to 10 * 14 * 100.

Those were the features introduced in ES2015. Let’s now dive into ES2016 which is much smaller in scope.

Array.prototype.includes()

This feature introduces a more readable syntax for checking if an array contains an element.

With ES6 and lower, to check if an array contained an element you had to use indexOf, which checks the index in the array, and returns -1 if the element is not there.

Since -1 is a truthy value, you could not simply negate the result of indexOf, for example:

if (![1,2].indexOf(3)) {
  console.log('Not found')
}

With this feature introduced in ES7 we can do

if (![1,2].includes(3)) {
  console.log('Not found')
}

Exponentiation Operator

The exponentiation operator ** is the equivalent of Math.pow(), but brought into the language instead of being a library function.

Math.pow(4, 2) == 4 ** 2

This feature is a nice addition for math intensive JS applications.

The ** operator is standardized across many languages including Python, Ruby, MATLAB, Lua, Perl and many others.

Those were the features introduced in 2016. Let’s now dive into 2017

String padding

The purpose of string padding is to add characters to a string, so it reaches a specific length.

ES2017 introduces two String methods: padStart() and padEnd().

padStart(targetLength [, padString])
padEnd(targetLength [, padString])

Sample usage:
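
For example, a few results (padStart() pads on the left, padEnd() on the right; the pad string defaults to a space):

'test'.padStart(6) //'  test'
'test'.padStart(8, 'abcd') //'abcdtest'
'12'.padStart(4, '0') //'0012'
'test'.padEnd(6, '!') //'test!!'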

Object.values()

This method returns an array containing all the object own property values.

Usage:

const person = { name: 'Fred', age: 87 }
Object.values(person) // ['Fred', 87]

Object.values() also works with arrays:

const people = ['Fred', 'Tony']
Object.values(people) // ['Fred', 'Tony']

Object.entries()

This method returns an array containing all the object own properties, as an array of [key, value] pairs.

Usage:

const person = { name: 'Fred', age: 87 }
Object.entries(person) // [['name', 'Fred'], ['age', 87]]

Object.entries() also works with arrays:

const people = ['Fred', 'Tony']
Object.entries(people) // [['0', 'Fred'], ['1', 'Tony']]

Object.getOwnPropertyDescriptors()

This method returns all own (non-inherited) properties descriptors of an object.

Any object in JavaScript has a set of properties, and each of these properties has a descriptor.

A descriptor is a set of attributes of a property, and it’s composed by a subset of the following:

  • value: the value of the property
  • writable: true if the property can be changed
  • get: a getter function for the property, called when the property is read
  • set: a setter function for the property, called when the property is set to a value
  • configurable: if false, the property cannot be removed nor can any attribute be changed, except its value
  • enumerable: true if the property is enumerable

Object.getOwnPropertyDescriptors(obj) accepts an object, and returns an object with the set of descriptors.
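
For example, this is what a descriptor looks like for a plain data property (the output is reproduced as a comment):

const dog = { breed: 'Siberian Husky' }

Object.getOwnPropertyDescriptors(dog)
// {
//   breed: {
//     value: 'Siberian Husky',
//     writable: true,
//     enumerable: true,
//     configurable: true
//   }
// }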

In what way is this useful?

ES6 gave us Object.assign(), which copies all enumerable own properties from one or more objects, and returns a new object.

However there is a problem with that, because it does not correctly copy properties with non-default attributes.

If an object for example has just a setter, it’s not correctly copied to a new object, using Object.assign().

For example with

const person1 = {
  set name(newName) {
    console.log(newName)
  }
}

This won’t work:

const person2 = {}
Object.assign(person2, person1)

But this will work:

const person3 = {}
Object.defineProperties(
  person3,
  Object.getOwnPropertyDescriptors(person1)
)

As you can see with a simple console test:

person1.name = 'x'
"x"

person2.name = 'x'

person3.name = 'x'
"x"

person2 misses the setter, it was not copied over.

The same limitation goes for shallow cloning objects with Object.create().

Trailing commas

This feature allows to have trailing commas in function declarations, and in functions calls:

const doSomething = (var1, var2,) => {
  //...
}

doSomething('test2', 'test2',)

This change will encourage developers to stop the ugly “comma at the start of the line” habit.

Async functions

JavaScript evolved in a very short time from callbacks to promises (ES2015), and since ES2017 asynchronous JavaScript is even simpler with the async/await syntax.

Async functions are a combination of promises and generators, and basically, they are a higher level abstraction over promises. Let me repeat: async/await is built on promises.

Why were async/await introduced?

They reduce the boilerplate around promises, and the “don’t break the chain” limitation of chaining promises.

When Promises were introduced in ES2015, they were meant to solve a problem with asynchronous code, and they did, but over the 2 years that separated ES2015 and ES2017, it was clear that promises could not be the final solution.

Promises were introduced to solve the famous callback hell problem, but they introduced complexity on their own, and syntax complexity.

They were good primitives around which a better syntax could be exposed to developers, so when the time was right we got async functions.

They make the code look like it’s synchronous, but it’s asynchronous and non-blocking behind the scenes.

How it works

An async function returns a promise, like in this example:

const doSomethingAsync = () => {
  return new Promise(resolve => {
    setTimeout(() => resolve('I did something'), 3000)
  })
}

When you want to call this function you prepend await, and the calling code will stop until the promise is resolved or rejected. One caveat: the client function must be defined as async. Here’s an example:

const doSomething = async () => {
  console.log(await doSomethingAsync())
}

A quick example

This is a simple example of async/await used to run a function asynchronously:

const doSomethingAsync = () => {
  return new Promise(resolve => {
    setTimeout(() => resolve('I did something'), 3000)
  })
}

const doSomething = async () => {
  console.log(await doSomethingAsync())
}

console.log('Before')
doSomething()
console.log('After')

The above code will print the following to the browser console:

Before
After
I did something //after 3s

Promise all the things

Prepending the async keyword to any function means that the function will return a promise.

Even if it’s not doing so explicitly, it will internally make it return a promise.

This is why this code is valid:

const aFunction = async () => {
  return 'test'
}

aFunction().then(alert) // This will alert 'test'

and it’s the same as:

const aFunction = async () => {
  return Promise.resolve('test')
}

aFunction().then(alert) // This will alert 'test'

The code is much simpler to read

As you can see in the example above, our code looks very simple. Compare it to code using plain promises, with chaining and callback functions.

And this is a very simple example, the major benefits will arise when the code is much more complex.

For example here’s how you would get a JSON resource, and parse it, using promises:

const getFirstUserData = () => {
  return fetch('/users.json') // get users list
    .then(response => response.json()) // parse JSON
    .then(users => users[0]) // pick first user
    .then(user => fetch(`/users/${user.name}`)) // get user data
    .then(userResponse => userResponse.json()) // parse JSON
}

getFirstUserData()

And here is the same functionality provided using await/async:

const getFirstUserData = async () => {
  const response = await fetch('/users.json') // get users list
  const users = await response.json() // parse JSON
  const user = users[0] // pick first user
  const userResponse = await fetch(`/users/${user.name}`) // get user data
  const userData = await userResponse.json() // parse JSON
  return userData
}

getFirstUserData()

Multiple async functions in series

Async functions can be chained very easily, and the syntax is much more readable than with plain promises:

const promiseToDoSomething = () => {
  return new Promise(resolve => {
    setTimeout(() => resolve('I did something'), 10000)
  })
}

const watchOverSomeoneDoingSomething = async () => {
  const something = await promiseToDoSomething()
  return something + ' and I watched'
}

const watchOverSomeoneWatchingSomeoneDoingSomething = async () => {
  const something = await watchOverSomeoneDoingSomething()
  return something + ' and I watched as well'
}

watchOverSomeoneWatchingSomeoneDoingSomething().then(res => {
  console.log(res)
})

Will print:

I did something and I watched and I watched as well

Easier debugging

Debugging promises is hard because the debugger will not step over asynchronous code.

Async/await makes this very easy because to the compiler it’s just like synchronous code.

Shared Memory and Atomics

WebWorkers are used to create multithreaded programs in the browser.

They offer a messaging protocol via events. Since ES2017, you can create a shared memory array between web workers and their creator, using a SharedArrayBuffer.

Since it’s unknown how much time writing to a shared memory portion takes to propagate, Atomics are a way to enforce that when reading a value, any kind of writing operation is completed.
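
Here is a minimal sketch of the API involved, not a complete worker setup (the worker variable is assumed to be a Web Worker created elsewhere):

//main thread
const buffer = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT)
const shared = new Int32Array(buffer)
worker.postMessage(buffer) //a SharedArrayBuffer can be posted to a worker

//in either thread
Atomics.store(shared, 0, 42) //atomically write 42 at index 0
Atomics.load(shared, 0) //42: guaranteed to see completed writes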

Any more detail on this can be found in the spec proposal, which has since been implemented.

This was ES2017. Let me now introduce the ES2018 features

Rest/Spread Properties

ES2015 introduced the concept of a rest element when working with array destructuring:

const numbers = [1, 2, 3, 4, 5]
const [first, second, ...others] = numbers

and spread elements:

const numbers = [1, 2, 3, 4, 5]
const sum = (a, b, c, d, e) => a + b + c + d + e
const total = sum(...numbers) // 15

ES2018 introduces the same but for objects.

Rest properties:

const { first, second, ...others } = {
  first: 1,
  second: 2,
  third: 3,
  fourth: 4,
  fifth: 5
}

first // 1
second // 2
others // { third: 3, fourth: 4, fifth: 5 }

Spread properties allow to create a new object by combining the properties of the object passed after the spread operator:

const items = { first, second, ...others }
items //{ first: 1, second: 2, third: 3, fourth: 4, fifth: 5 }

Asynchronous iteration

The new construct for-await-of allows you to iterate over an async iterable object in a loop:

for await (const line of readLines(filePath)) {
  console.log(line)
}

Since this uses await, you can use it only inside async functions, like a normal await.
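As a self-contained sketch (the async generator and the delay below are made up for illustration, since readLines() isn't defined here), you can drive for-await-of with an async generator:

const delay = ms => new Promise(resolve => setTimeout(resolve, ms))

async function* generateLines() {
  for (const line of ['first line', 'second line']) {
    await delay(100) // pretend each line arrives asynchronously
    yield line
  }
}

const printLines = async () => {
  for await (const line of generateLines()) {
    console.log(line)
  }
}

printLines() // 'first line', then 'second line'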

Promise.prototype.finally()

When a promise is fulfilled, its then() callbacks are called, one after another.

If something fails along the way, the then() callbacks are skipped and the catch() callback is executed instead.

finally() allows you to run some code regardless of whether the promise was fulfilled or rejected:

fetch('file.json')
  .then(data => data.json())
  .catch(error => console.error(error))
  .finally(() => console.log('finished'))

Regular Expression improvements

ES2018 introduced a number of improvements regarding Regular Expressions. I recommend my tutorial on them, available at https://flaviocopes.com/javascript-regular-expressions/.

Here are the ES2018 specific additions.

RegExp lookbehind assertions: match a string depending on what precedes it

This is a lookahead: you use ?= to match a string that’s followed by a specific substring:

/Roger(?= Waters)/

/Roger(?= Waters)/.test('Roger is my dog') //false
/Roger(?= Waters)/.test('Roger is my dog and Roger Waters is a famous musician') //true

?! performs the inverse operation, matching if a string is not followed by a specific substring:

/Roger(?! Waters)/

/Roger(?! Waters)/.test('Roger is my dog') //true
/Roger(?! Waters)/.test('Roger Waters is a famous musician') //false

Lookaheads use the ?= symbol. They were already available.

Lookbehinds, a new feature, use ?<=.

/(?<=Roger) Waters/
/(?<=Roger) Waters/.test('Pink Waters is my dog') //false
/(?<=Roger) Waters/.test('Roger is my dog and Roger Waters is a famous musician') //true

A lookbehind is negated using ?<!:

/(?<!Roger) Waters/
/(?<!Roger) Waters/.test('Pink Waters is my dog') //true
/(?<!Roger) Waters/.test('Roger is my dog and Roger Waters is a famous musician') //false

Unicode property escapes \p{…} and \P{…}

In a regular expression pattern you can use \d to match any digit, \s to match any whitespace character, \w to match any alphanumeric character, and so on.

This new feature extends the concept to all Unicode characters, introducing \p{…} and its negation \P{…}.

Any Unicode character has a set of properties. For example, Script determines the language family, ASCII is a boolean that's true for ASCII characters, and so on. You can put a property inside the curly braces, and the regex will check that it is true:

/^\p{ASCII}+$/u.test('abc')   //✅
/^\p{ASCII}+$/u.test('[email protected]')  //✅
/^\p{ASCII}+$/u.test('ABC🙃') //❌

ASCII_Hex_Digit is another boolean property that checks if the string only contains valid hexadecimal digits:

/^\p{ASCII_Hex_Digit}+$/u.test('0123456789ABCDEF') //✅
/^\p{ASCII_Hex_Digit}+$/u.test('h')                //❌

There are many other boolean properties, which you check just by adding their name inside the curly braces, including Uppercase, Lowercase, White_Space, Alphabetic, Emoji and more:

/^\p{Lowercase}$/u.test('h') //✅
/^\p{Uppercase}$/u.test('H') //✅

/^\p{Emoji}+$/u.test('H')   //❌
/^\p{Emoji}+$/u.test('🙃🙃') //✅

In addition to those binary properties, you can check any of the Unicode character properties to match a specific value. In this example, I check if the string is written in the Greek or Latin alphabet:

/^\p{Script=Greek}+$/u.test('ελληνικά') //✅
/^\p{Script=Latin}+$/u.test('hey') //✅

Read more about all the properties you can use directly on the proposal.

Named capturing groups

In ES2018 a capturing group can be assigned a name, rather than just being assigned a slot in the result array:

const re = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/
const result = re.exec('2015-01-02')

// result.groups.year === '2015'
// result.groups.month === '01'
// result.groups.day === '02'

The s flag for regular expressions

The s flag, short for single line, causes the . to match newline characters as well. Without it, the dot matches any regular character but not a newline:

/hi.welcome/.test('hi\nwelcome') // false
/hi.welcome/s.test('hi\nwelcome') // true

ESNext

What’s next? ESNext.

ESNext is a name that always indicates the next version of JavaScript.

The current ECMAScript version is ES2018. It was released in June 2018.

Historically JavaScript editions have been standardized during the summer, so we can expect ECMAScript 2019 to be released in summer 2019.

So at the time of writing, ES2018 has been released, and ESNext is ES2019.

Proposals to the ECMAScript standard are organized in stages. Stages 1–3 are an incubator of new features, and features reaching Stage 4 are finalized as part of the new standard.

At the time of writing we have a number of features at Stage 4. I will introduce them in this section. The latest versions of the major browsers should already implement most of those.

Some of those changes are mostly for internal use, but it’s also good to know what is going on.

There are other features at Stage 3, which might be promoted to Stage 4 in the next few months, and you can check them out on this GitHub repository: https://github.com/tc39/proposals.

Array.prototype.{flat,flatMap}

flat() is a new array instance method that can create a one-dimensional array from a multidimensional array.

Example:

['Dog', ['Sheep', 'Wolf']].flat()
//[ 'Dog', 'Sheep', 'Wolf' ]

By default it only flattens one level deep, but you can add a parameter to set the number of levels you want to flatten the array to. Set it to Infinity to flatten all levels:

['Dog', ['Sheep', ['Wolf']]].flat()
//[ 'Dog', 'Sheep', [ 'Wolf' ] ]

['Dog', ['Sheep', ['Wolf']]].flat(2)
//[ 'Dog', 'Sheep', 'Wolf' ]

['Dog', ['Sheep', ['Wolf']]].flat(Infinity)
//[ 'Dog', 'Sheep', 'Wolf' ]

If you are familiar with the JavaScript map() method of an array, you know that using it you can execute a function on every element of an array.

flatMap() is a new Array instance method that combines flat() with map(). It's useful when calling a function that returns an array in the map() callback, but you want the resulting array to be flat:

['My dog', 'is awesome'].map(words => words.split(' '))
//[ [ 'My', 'dog' ], [ 'is', 'awesome' ] ]

['My dog', 'is awesome'].flatMap(words => words.split(' '))
//[ 'My', 'dog', 'is', 'awesome' ]

Optional catch binding

Sometimes we don’t need to have a parameter bound to the catch block of a try/catch.

We previously had to do:

try {
  //...
} catch (e) {
  //handle error
}

even if we never needed to use e to analyze the error. We can now simply omit it:

try {
  //...
} catch {
  //handle error
}

Object.fromEntries()

Object has had an entries() method since ES2017.

It returns an array containing all of the object's own properties, as [key, value] pairs:

const person = { name: 'Fred', age: 87 }
Object.entries(person) // [['name', 'Fred'], ['age', 87]]

ES2019 introduces a new Object.fromEntries() method, which can create a new object from such an array of properties:

const person = { name: 'Fred', age: 87 }
const entries = Object.entries(person)
const newPerson = Object.fromEntries(entries)

person !== newPerson // true: same properties, but a different object
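One handy pattern, shown as a small sketch here (it is not from the original example), is transforming an object by mapping over its entries and rebuilding it:

const person = { name: 'Fred', age: 87 }

const shouting = Object.fromEntries(
  Object.entries(person).map(([key, value]) => [key, String(value).toUpperCase()])
)

shouting // { name: 'FRED', age: '87' }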

String.prototype.{trimStart,trimEnd}

This feature has been part of v8/Chrome for almost a year now, and it’s going to be standardized in ES2019.

trimStart()

Returns a new string with whitespace removed from the start of the original string:

'Testing'.trimStart() //'Testing'
' Testing'.trimStart() //'Testing'
' Testing '.trimStart() //'Testing '
'Testing '.trimStart() //'Testing '

trimEnd()

Returns a new string with whitespace removed from the end of the original string:

'Testing'.trimEnd() //'Testing'
' Testing'.trimEnd() //' Testing'
' Testing '.trimEnd() //' Testing'
'Testing '.trimEnd() //'Testing'

Symbol.prototype.description

You can now retrieve the description of a symbol by accessing its description property instead of having to use the toString() method:

const testSymbol = Symbol('Test')
testSymbol.description // 'Test'

JSON improvements

Before this change, the line separator (\u2028) and paragraph separator (\u2029) symbols were not allowed in strings parsed as JSON.

Using JSON.parse(), those characters resulted in a SyntaxError but now they parse correctly, as defined by the JSON standard.
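A quick way to see the change (the escape below just puts a literal U+2028 inside the JSON string):

// Before ES2019 this threw a SyntaxError; now it parses fine
JSON.parse('"\u2028"') // a one-character string containing U+2028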

Well-formed JSON.stringify()

Fixes the JSON.stringify() output when it processes lone UTF-16 surrogate code points (U+D800 to U+DFFF).

Before this change calling JSON.stringify() would return a malformed Unicode character (a “�”).

Now those surrogate code points can be safely represented as strings using JSON.stringify(), and transformed back into their original representation using JSON.parse().
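For example, as a quick sketch:

// A lone surrogate is now emitted as an escape sequence instead of a raw "�"
JSON.stringify('\uD800') // '"\ud800"'

// ...and it survives a round trip
JSON.parse(JSON.stringify('\uD800')) === '\uD800' // true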

Function.prototype.toString()

Functions have always had an instance method called toString() which returns a string containing the function code.

ES2019 introduced a change to the return value to avoid stripping comments and other characters like whitespace, exactly representing the function as it was defined.

If previously we had

function /* this is bar */ bar () {}

The behavior was this:

bar.toString() // 'function bar() {}'

now the new behavior is:

bar.toString(); // 'function /* this is bar */ bar () {}'

Wrapping up, I hope this article helped you catch up on some of the latest JavaScript additions, and the new features we’ll see in 2019.


New Features in TypeScript 3.7 and How to Use Them

The TypeScript 3.7 release is coming soon, and it's going to be a big one.

The target release date is November 5th, and there are some seriously exciting headline features included:

  • Assert signatures.
  • Recursive type aliases.
  • Top-level await.
  • Null coalescing.
  • Optional chaining.

Personally, I'm super excited about this: these features are going to whisk away all sorts of annoyances that I've been fighting in TypeScript while building HTTP Toolkit.

If you haven't been paying close attention to the TypeScript development process though, it's probably not clear what half of these mean, or why you should care. Let's talk through all of them.

Assert Signatures

This is a brand-new and little-known TypeScript feature, which allows you to write functions that act like type guards as a side-effect, rather than explicitly returning their boolean result.

It's easiest to demonstrate this with a JavaScript example:

function assertString(input) { 
  if (typeof input === 'string') 
    return; 
  else 
    throw new Error('Input must be a string!'); 
} 
function doSomething(input) { 
  assertString(input); 
  // ... Use input, confident that it's a string 
} 
doSomething('abc'); // All good 
doSomething(123);   // Throws an error

This pattern is neat and useful, and you can't use it in TypeScript today.

TypeScript can't know that you've guaranteed the type of input after it's run assertString. Typically, people just make the argument input: string to avoid this, and that's good. But, it also just pushes the type checking problem somewhere else, and in cases where you just want to fail hard, it's useful to have this option available.

Fortunately, soon we will be able to:

// With TS 3.7 
function assertString(input: any): asserts input is string { // <-- the magic 
  if (typeof input === 'string') 
    return; 
  else 
    throw new Error('Input must be a string!'); 
} 

function doSomething(input: string | number) { 
  assertString(input); 
  // input's type is just 'string' here 
}

Here asserts input is string means that if this function ever returns, TypeScript can narrow the type of input to string, just as if it was inside an if block with a type guard.

To make this safe, that means if the assert statement isn't true then your assert function must either throw an error or not return at all (kill the process, infinite loop, you name it).

That's the basics, but this actually lets you pull some really neat tricks:

// With TS 3.7 

// Asserts that input is truthy, throwing immediately if not: 
function assert(input: any): asserts input { // <-- not a typo 
  if (!input) 
    throw new Error('Not a truthy value'); 
} 

declare const x: number | string | undefined; 
assert(x); // Narrows x to number | string 

// Also usable with type guarding expressions! 
assert(typeof x === 'string'); // Narrows x to string 

// -- Or use assert in your tests: -- 
const a: Result | Error = doSomethingTestable(); 
expect(a).is.instanceOf(Result); // 'instanceOf' could 'asserts a is Result' 
expect(a.resultValue).to.equal(123); // a.resultValue is now legal 

// -- Use as a safer ! that throws immediately if you're wrong -- 
function assertDefined<T>(obj: T): asserts obj is NonNullable<T> { 
  if (obj === undefined || obj === null) { 
    throw new Error('Must not be a nullable value'); 
  } 
} 

declare const x: string | undefined; 
// Gives y just 'string' as a type, could throw elsewhere later: 
const y = x!; 
// Gives z 'string' as a type, or throws immediately if you're wrong: 
assertDefined(x); 
const z = x; 

// -- Or even update types to track a function's side-effects -- 
type X<T extends string | {}> = { value: T }; 

// Use asserts to narrow types according to side effects: 
function setX<T extends string | {}>(x: X<any>, v: T): asserts x is X<T> { 
  x.value = v; 
} 

declare let x: X<any>; 
// x is now { value: any }; 
setX(x, 123); 
// x is now { value: number };

This is still in flux, so don't take it as the definite result, and keep an eye on the pull request if you want the final details.

There's even a discussion there about allowing functions to assert something and return a type, which would let you extend the final example above to track a much wider variety of side effects, but we'll have to wait and see how that plays out.

Top-Level Await

Async/await is amazing and makes promises dramatically cleaner to use.

Unfortunately, though, you can't use them at the top level. This might not be something you care about much in a TS library or application, but if you're writing a runnable script or using TypeScript in a REPL, then this gets super annoying.

It's even worse if you're used to frontend development, since top-level await has been working nicely in the Chrome and Firefox console for a couple of years now.

Fortunately though, a fix is coming. This is actually a general stage-3 JS proposal, so it'll be everywhere else eventually too, but for TS devs 3.7 is where the magic happens.

This one's simple, but let's have another quick demo anyway:


// Your only solution right now for a script that does something async: 
async function doEverything() { 
  ... 
  const response = await fetch('http://example.com'); 
  ... 
} 
  
doEverything(); // <- eugh (could use an IIFE instead, but even more eugh)

With top-level await:

// With TS 3.7: 
// Your script: ... 
const response = await fetch('http://example.com'); 
// ...

There's a notable gotcha here: if you're not writing a script, or using a REPL, don't write this at the top level, unless you really know what you're doing!

It's totally possible to use this to write modules that do blocking async steps when imported. That can be useful for some niche cases, but people tend to assume that their import statement is a synchronous, reliable, and fairly quick operation, and you could easily hose your codebase's startup time if you start blocking imports for complex async processes (even worse, processes that can fail).

This is somewhat mitigated by the semantics of imports of async modules: they're imported and run in parallel, so the importing module effectively waits for Promise.all(importedModules) before being executed.
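A rough sketch of what that looks like (the module names and fetched URLs are hypothetical):

// a.mjs — does async work at the top level
export const a = await fetch('/a.json').then(response => response.json());

// b.mjs — likewise
export const b = await fetch('/b.json').then(response => response.json());

// main.mjs — a.mjs and b.mjs are evaluated in parallel, and this module's
// body runs only once both of them have finished (or one has failed)
import { a } from './a.mjs';
import { b } from './b.mjs';
console.log(a, b);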

Rich Harris wrote an excellent piece on a previous version of this spec (before that change, when imports ran sequentially and this problem was much worse), which makes for good background reading on the risks here if you're interested.

It's also worth noting that this is only useful for module systems that support asynchronous imports. There isn't yet a formal spec for how TS will handle this, but it likely means you'll need a very recent target configuration and either ES modules or Webpack v5 (whose alphas have experimental support) at runtime.

Recursive Type Aliases

If you've ever tried to define a recursive type in TypeScript, you may have run into StackOverflow questions like this: https://stackoverflow.com/questions/47842266/recursive-types-in-typescript.

Right now, you can't. Interfaces can be recursive, but there are limitations to their expressiveness, and type aliases can't. That means right now, you need to combine the two: define a type alias and extract the recursive parts of the type into interfaces. It works, but it's messy, and we can do better.

As a concrete example, this is the suggested type definition for JSON data:

type JSONValue = 
  | string 
  | number 
  | boolean 
  | JSONObject 
  | JSONArray; 

interface JSONObject { [x: string]: JSONValue; } 
interface JSONArray extends Array<JSONValue> { }

That works, but the extra interfaces are only there because they're required to get around the recursion limitation.

Fixing this requires no new syntax; it just removes that restriction, so the below compiles:

// With TS 3.7: 
type JSONValue = 
  | string 
  | number 
  | boolean 
  | { [x: string]: JSONValue } 
  | Array<JSONValue>;

Right now, that fails to compile with Type alias 'JSONValue' circularly references itself. Soon though, soon...

Null Coalescing

Aside from being difficult to spell, this one is quite simple and easy. It's based on a JavaScript stage-3 proposal, which means it'll also be coming to your favorite vanilla JavaScript environment soon (if it hasn't already).

In JavaScript, there's a common pattern for handling default values, and falling back to the first valid result of a defined group. It looks something like this:

// Use the first of firstResult/secondResult which is truthy: 
const result = firstResult || secondResult; 
// Use configValue from provided options if truthy, or 'default' if not: 
this.configValue = options.configValue || 'default';

This is useful in a host of cases, but due to some interesting quirks in JavaScript, it can catch you out. If firstResult or options.configValue can meaningfully be set to false, an empty string or 0, then this code has a bug. If those values are set, then when considered as booleans they're falsy, so the fallback value (secondResult / 'default') is used anyway.

Null coalescing fixes this. Instead of the above, you'll be able to write:

// With TS 3.7: 
// Use the first of firstResult/secondResult which is *defined*: 
const result = firstResult ?? secondResult; 
// Use configSetting from provided options if *defined*, or 'default' if not: 
this.configValue = options.configValue ?? 'default';

?? differs from || in that it falls through to the next value only if the first argument is null or undefined, not falsy. That fixes our bug. If you pass false as firstResult, that will be used instead of secondResult because, while it's falsy, it is still defined, and that's all that's required.

It's simple but super-useful, as it takes away a whole class of bugs.

Optional Chaining

Last but not least, optional chaining is another stage-3 proposal that is making its way into TypeScript.

This is designed to solve an issue faced by developers in every language: how do you get data out of a data structure when some or all of it might not be present?

Right now, you might do something like this:

// To get data.key1.key2, if any level could be null/undefined: 
let result = data ? (data.key1 ? data.key1.key2 : undefined) : undefined; 
// Another equivalent alternative: 
let result = ((data || {}).key1 || {}).key2;

Nasty! This gets much much worse if you need to go deeper, and although the second example works at runtime, it won't even compile in TypeScript, since the first step could be {}, in which case key1 isn't a valid key at all.

This gets still more complicated if you're trying to get into an array, or there's a function call somewhere in this process.

There's a host of other approaches to this, but they're all noisy, messy & error-prone. With optional chaining, you can do this:

// With TS 3.7: 
// Returns the value if it's all defined & non-null, or undefined if not. 
let result = data?.key1?.key2; 
// The same, through an array index or property, if possible: 
array?.[0]?.['key']; 
// Call a method, but only if it's defined: 
obj.method?.(); 
// Get a property, or return 'default' if any step is not defined: 
let result = data?.key1?.key2 ?? 'default';

The last case shows how neatly some of these features dovetail together: null coalescing plus optional chaining is a match made in heaven.

One gotcha: this will return undefined for missing values, even if they were null, e.g. in cases like (null)?.key (returns undefined). A small point, but one to watch out for if you have a lot of null in your data structures.

That's the lot! That should outline all the essentials for these features, but there are lots of smaller improvements, fixes, and editor support improvements coming too, so take a look at the official roadmap if you want to get into the nitty-gritty.

How to build a stable Node.js project architecture

Product development processes that involve JavaScript are often accompanied by Node.js, a JavaScript runtime environment. The birth of this technology has certainly turned the use of JS upside down. Today, JavaScript is among the most preferred languages for building apps, thanks to Node.js.

What is so special about this technology? To answer this, let's reflect not only on its benefits but also on its architectural limitations and the ways to deal with its cons.

A brief history of Node.js

Node.js was introduced by Ryan Dahl in 2009. The technology is mostly used for building an app's server side (back-end development). What's special about Node.js is that the technology is asynchronous.

This means that the server continues to process other client requests without forcing the client to wait until a previously sent request has been processed. Let's say it's Node.js' "value proposition" for anyone who would like to create reliable JS-based apps.

What is Node JS commonly used for?

The non-blocking I/O model behind the platform makes it a great fit for building real-time web apps, mobile products, chats, data-streaming apps, browser games, APIs and medium-performance JS apps.

Examples and showcases of real-time Node.js applications

Among the companies that rely on Node.js for part of their stack are LinkedIn, Yahoo, IBM, Netflix, PayPal, Uber and others.

Charts from 2017 show where else Node is used apart from the back-end, and what Node.js is used for from a business point of view.

If you have worked with Node.js, you probably already know that it gives you the following:

  • Since Node.js is built with the help of C++, you can call C functions

  • An asynchronous nature, which speeds up the app and provides the ability to multitask. Apart from this, the non-blocking I/O approach suits high-traffic, real-time websites and resources.

  • Stable multiplatform app development

  • Lots of Node.js development tools for better workflow: npm-s, Express, Socket.io, etc. (we’ll touch upon these a bit later in the article)

  • Clear and flexible learning curve

But Node.js' architectural limitations mean you lose the opportunity to:

  • Create computation-heavy apps, such as those involving 3D projection or heavy number crunching.
    As Node.js is single-threaded, it is not the right fit for such projects. All the actions happen on a single thread and can overload the CPU. For these types of apps or software, it's better to use languages with multithreading support, like C or C#. For full-scale video games, you can use Unity.

  • Use some npm packages.

Not all of the npm packages we mentioned in the pros are high quality and stable. Thus, you have to filter them properly and choose only the reliable ones. (Author's note: think of npm packages like plugins in WordPress.)

  • Take full advantage of relational databases, given Node.js' architecture.

Node.js works best with document-oriented databases, like MongoDB.

Let's get into the technical details, best practices for Node.js development, and more detailed practical tips on working with this platform.

  • Application Specifics
    First and foremost, think about what type of application you plan to release. To further proceed with Node.js project architecture building, ask yourself some of the following questions:

  • Are you going to build a real-time web app with Node.js? Is it meant to be a mobile or a console one? Or maybe it's a multiplatform app?

  • What data should the app operate with? Are these databases, files, or remote storages (like Amazon S3)?

  • Do you plan to use special software in your application? Are sophisticated data processing algorithms such as face detection or text recognition included in your app's planned functionality?

  • Does your application need extra hardware, like a camera, microphone, various sensors, or any other related devices?

  • What are the architectural specifics of the future app? Is this meant to be a client-server, MVC, or some other type of architecture?

If you've answered all of these questions, make sure that Node.js is able to fully meet all requirements set for your project development. For example, if you need an API server that works with several types of databases, Node.js might be a good choice. But if you need an application that is designed to build 3D graphics using DirectX, you might want to get better acquainted with C++.

Let's assume that your application uses special temperature and contamination sensors. You can pair such features with a Raspberry Pi, Arduino or any other special device to go further and create a 'smart' functionality model. But before you start, make sure that the driver of any mentioned device is compatible with Node.js.

Best practices for Node.js development workflow

Typically, an application is written by a team of developers. Everyone on the team is unique, as is their code style. Therefore, it's recommended to settle on and take into account the following code organization nuances before the development stage.

  • Functional development style, or object-oriented programming patterns?

Since JS is a weakly typed language and allows you to write your code freestyle, it's still better to agree upon a single set of code-writing rules within your team. This will keep most misunderstandings away and will help your colleagues get a better understanding of the project's code.

  • Code style
    Discuss code-writing dos and don'ts with your teammates. Check the quality of what is written using Node.js development tools like a linter.

  • Your team's experience with the integration of third-party services and devices in your application (e.g. Google Maps, data collection, analytics tools, communication services, etc.)

  • The data models you're going to work with (files, databases or third-party APIs)

  • The communication and data exchange tools you're going to use (REST API, Blouse Protocol, Socket.io, GraphQL, DDP protocol)

  • Possibility to utilize 3rd party libraries (hardware libraries, special algorithms)

The scope of work and its specifics can vary widely. Let's say one of your teammates works with an SQL database while someone else deals with the Amazon API. Each of your colleagues is then assigned their own particular tasks.

Don't reinvent the wheel

The Node.js community is currently thriving. A lot of neat features have already been invented and written by other developers. So before you create a particular piece of functionality for your application, check whether someone else has already solved the exact same problem.

If you're not the only one who has experienced a certain issue, you might find the solution among npm packages. This is an entire catalogue of ready-made, useful libraries that will make life much easier for you.

The same applies to frameworks. Think about whether to use one of them to speed up the process of building a real-time web or mobile app with Node.js.

Let's say you're dealing with a REST API: you can try out Express.js. If you need to interact with a particular database type, you can turn to frameworks such as Mongoose.js, or an SQL equivalent, depending on which database type you need.

Although packages can benefit your project, there are some significant dangers to be aware of. Given the fact that these solutions are open source, there are several threats to bear in mind:

  • Duplicates

There are a lot of packages already, and some of them clone others. Unfortunately, this trend is only growing. Be careful and make sure you choose the right, unique package.

  • Malicious code
    Since these packages are not supervised, anyone can write anything they want. Read more about security issues in the article by David Gilbertson, 'I'm harvesting credit card numbers and passwords from your site. Here's how'. So if your product has to provide AAA-level security, check each code snippet of any package you install meticulously.

Always stay ahead of the times

The JS and Node.js community is constantly growing. ES standards are frequently updated. Old features are being replaced by new, better ones, which are then implemented in Node.js. Thus it's important to always monitor the state of the art of the technology.

For instance,

[callback hell](http://callbackhell.com/)
fs.action(source, function (err, res) {
  if (err) {
    console.log('Error: ' + err)
  } else {
    res.action(function (err, res) {
      if (err) {
        console.log('Error: ' + err)
      }
    })
  }
})

Was replaced with

[promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
fs.action(source)
  .then(res => res.action())
  .then(res => res.action())
  .then(res => res.action())
  .catch(err => console.log('Error: ' + err))

Now we can use async/await as the alternative to Promises:

[async/await](https://blog.risingstack.com/async-await-node-js-7-nightly)
try {
  const res = await fs.action(source);
  const res1 = await res.action(source);
  const res2 = await res1.action(source);
  const res3 = await res2.action(source);
} catch (error) {
  console.log('Error: ' + error)
}

As of now, the community's opinions on whether to use promises or async/await vary.

Try to update your knowledge base with the new Node.js and ES releases on a regular basis. This will help you keep the development process modern.

Node.js app development techniques and tips

“Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.”
― Frank Herbert.

To overcome Node.js' architectural limitations and issues ranging from trivial to challenging, keep to a clear development structure.

Although a Node.js project might consist of only one file, this does not mean that you should pile everything up into one great mess.

Make your code as readable and understandable for others as it can be. The following recommendations will help:

  • Follow the structure guidelines of the framework you are using. If you are not using one, place your code in directories and subdirectories in the most logical way possible.

  • File naming

Keep to a single agreed file-naming convention. For example, choose one of these: ErrorHandler, errorHandler, error_handler or error-handler. Try to name files according to their purpose, not according to their functionality. For example, it's better to name a file Notifier than NotifyAllUsersByEmailSMSLocal.

  • The entry point must not contain unnecessary code lines

The entry point is the main.js, app.js or www file that is requested to launch your application. Such files should contain only calls to certain methods or classes, and nothing more.

  • index.js.

Keeping only import/export statements in these files is generally considered a justified practice (see the sketch below).
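A minimal sketch of such an index.js (the module names are just examples, not from the original article):

// index.js — re-exports only, no logic
module.exports = {
  Notifier: require('./Notifier'),
  ErrorHandler: require('./ErrorHandler'),
}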

Tips for better Node.js project architecture

The way the code is written reflects the 'face' (reputation) of the programmer. It also shows how the entire development team approaches building the app with a certain technology or language, Node.js in our case. Therefore, always try to keep it up to date and structured (and comprehensible for other developers). Code readability is one of the main ways to build a stable, real-time web app with Node.js.

  • Use code quality control tools, like Lint

This tool will help you catch trivial errors before they slip through. It will also allow you to keep the code in one unified form.

  • Keep track of your file sizes

Files that are too large are difficult to navigate and understand. The optimal file size is 300-500 lines or less. So if you notice that the code keeps growing within the same file, turn it into a directory with several files inside.

  • Comment on your code
    When you write a universal module that will be used in several places, don’t forget to create a quick guide on how to utilize this code.
/**
 * Provide sending notification for users by Local, Email , SMS
 *
 * @example
 *          new Notifier(user).notify(['sms', 'email'], ...)
 **/
class Notifier {

You can also leave instructions for methods in your code

/**
 * Provide parse date for single format on project
 * @param Date
 * @example
 *        dateToString(date) => String
 * **/
 

If you're developing a particular REST API, you can embed instructions on its usage in the code itself. It's recommended, though, to create complete documentation/guidelines and store them in a single resource.

/**
* Provide Api for Account
  Account Register  POST /api/v1/account/
  @params
         email {string}
         password {string}
  Account Login  POST /api/v1/account/login
  @params
         email {string}
         password {string}

  Account Logout  GET /api/v1/account/logout
  @header
         Authorization: Bearer {token}
 **/
 

There are two sides to the coin, though.

To follow Node.js architecture best practices, keep your code as descriptive and organized as possible, but don't overdo it.

Don't comment on every line of code. That will do more harm than good and only make the development process more complex. A better tip is to comment on a code snippet's purpose rather than its functionality (what it does). Depending on the development style (callback, promise, async/await), write and use only one (if possible) general error handler/processor.

Error handling

Error handling is another important aspect of Node.js development best practices to bear in mind.

Since JavaScript is not as strict as Java, all the responsibility lies with the development team.

First and foremost, always try to handle errors. Otherwise it can lead to uncontrolled app behavior.

  • With callback
const withoutErrors = callback => (err, updatedTank) => {
  if (err) {
    return // do something
  }
  return callback(updatedTank);
};
fs.action(withoutErrors(data => ...))

With Promise

const handleError = error => {
  if (error) {
    return // do something
  }
};
fs.action()
  .then(data => ...)
  .then(data => ...)
  .catch(handleError)
  • With async / await
class Actions {
  async action1 (data) {
    return fs.action(data)
  }
  async action2 (data) {
    return fs.action(data)
  }
  ....
}

try {
  await new Actions().action1();
  await new Actions().action2();
} catch (error) {
  return handleError(error)
}

Node JS development tools
  • Gulp

A toolkit that lets you launch several apps simultaneously. It might be useful if you'd like to run several services at the same time with one command.

  • Nodemon

A hot-reload feature for Node.js. This tool automatically restarts your project after any code change is made. Quite a handy tool during Node.js project architecture development.

  • Forever, pm2
    These two packages ensure the app launches when the operating system starts.

  • Winston
    Lets you record the app's logs to a primary source (a file or database). The package comes in handy when you need the app to work remotely and you don't have full access to it.

  • Threads
    A tool designed to make working with threads easier.

To sum up

We hope that the Node.js architecture best practices presented in this article will help you reach the most desirable result when looking for a way to build performant, real-time web or mobile apps with Node.js.

Build a CMS with Laravel and Vue

A CMS (Content Management System) helps content creators produce content in an easily consumable format. In this tutorial series, we will consider how to build a simple CMS from scratch using Laravel and Vue.

Build a CMS with Laravel and Vue - Part 1: Setting up

The birth of the internet has since redefined content accessibility for the better, causing a distinct rise in content consumption across the globe. The average user of the internet consumes and produces some form of content formally or informally.

An example of an effort at formal content creation is when someone makes a blog post about their work so that a targeted demographic can easily find their website. This type of content is usually served and managed by a CMS (Content Management System). Some popular ones are WordPress, Drupal, and SilverStripe.

A CMS helps content creators produce content in an easily consumable format. In this tutorial series, we will consider how to build a simple CMS from scratch using Laravel and Vue.

Our CMS will be able to make new posts, update existing posts, delete posts that we do not need anymore, and also allow users to make comments on posts, which will be updated in real time using Pusher. We will also be able to add featured images to posts to give them some visual appeal.

When we are done, we will be able to have a CMS that looks like this:

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.

The source code for this project is available here on GitHub.

Installing the Laravel CLI

The first thing we need to do is install the Laravel CLI and the Laravel dependencies. The CLI will be instrumental in creating new Laravel projects whenever we need one. Laravel requires PHP and a few other tools and extensions, so we need to install these first before installing the CLI.

Here’s a list of the dependencies as documented on the official Laravel documentation:

Let’s install them one at a time.

Installing PHP

Open a fresh instance of the terminal and paste the following command:

    # Linux Users
    $ sudo apt-get install php7.2

    # Mac users
    $ brew install php72


As of the time of writing this article, PHP 7.2 is the latest stable version of PHP, so the command above installs it on your machine.

On completion, you can check that PHP has been installed to your machine with the following command:

    $ php -v


Installing the Mbstring extension

To install the mbstring extension for PHP, paste the following command in the open terminal:

    # Linux users
    $ sudo apt-get install php7.2-mbstring

    # Mac users
    # You don't have to do anything as it is installed automatically.


To check if the mbstring extension has been installed successfully, you can run the command below:

    $ php -m | grep mbstring


Installing the XML PHP extension

To install the XML extension for PHP, paste the following command in the open terminal:

    # Linux users
    $ sudo apt-get install php-xml

    # Mac users
    # You don't have to do anything as it is installed automatically.


To check if the xml extension has been installed successfully, you can run the command below:

    $ php -m | grep xml


Installing the ZIP PHP extension

To install the zip extension for PHP, paste the following command in your terminal:

    # Linux users
    $ sudo apt-get install php7.2-zip

    # Mac users
    # You don't have to do anything as it is installed automatically.


To check if the zip extension has been installed successfully, you can run the command below:

    $ php -m | grep zip


Installing curl

To install curl, paste the following command in your terminal:

    # Linux users
    $ sudo apt-get install curl

    # Mac users using Homebrew (https://brew.sh)
    $ brew install curl


To verify that curl has been installed successfully, run the following command:

    $ curl --version


Installing Composer

Now that we have curl installed on our machine, let’s pull in Composer with this command:

    $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer


For us to run Composer in the future without calling sudo, we may need to change the permissions; however, you should only do this if you have problems installing packages:

    $ sudo chown -R $USER ~/.composer/


Installing the Laravel installer

At this point, we can already create a new Laravel project using Composer’s create-project command, which looks like this:

    $ composer create-project --prefer-dist laravel/laravel project-name


But we will go one step further and install the Laravel installer using composer:

    $ composer global require "laravel/installer"


After the installation, we will need to add the PATH to the bashrc file so that our terminal can recognize the laravel command:

    $ echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' >> ~/.bashrc
    $ source ~/.bashrc


Creating the CMS project

Now that we have the official Laravel CLI installed on our machine, let’s create our CMS project using the installer. In your terminal window, cd to the project directory you want to create the project in and run the following command:

    $ laravel new cms


We will navigate into the project directory and serve the application using PHP’s web server:

    $ cd cms
    $ php artisan serve


Now, when we visit http://127.0.0.1:8000/, we will see the default Laravel template:

Setting up the database

In this series, we will be using MySQL as our database system so a prerequisite for this section is that you have MySQL installed on your machine.

Make sure MySQL is installed and configured on your machine before continuing.


You will also need a special driver that makes it possible for PHP to work with MySQL; you can install it with this command:

    # Linux users
    $ sudo apt-get install php7.2-mysql

    # Mac Users
    # You don't have to do anything as it is installed automatically.


Load the project directory in your favorite text editor and there should be a .env file in the root of the folder. This is where Laravel stores its environment variables.

Create a new MySQL database and call it laravelcms. In the .env file, update the database configuration keys as seen below:

    DB_CONNECTION=mysql
    DB_HOST=127.0.0.1
    DB_PORT=3306
    DB_DATABASE=laravelcms
    DB_USERNAME=YourUsername
    DB_PASSWORD=YourPassword


Setting up user roles

Like most content management systems, we are going to have a user role system so that our blog can have multiple types of users: the admin and the regular user. The admin should be able to create a post and perform other CRUD operations on a post. The regular user, on the other hand, should be able to view and comment on a post.

For us to implement this functionality, we need to implement user authentication and add a simple role authorization system.

Setting up user authentication

Laravel provides user authentication out of the box, which is great, and we can key into the feature by running a single command:

    $ php artisan make:auth


The above will create all that’s necessary for authentication in our application so we do not need to do anything extra.

Setting up role authorization

We need a model for the user roles so let’s create one and an associated migration file:

    $ php artisan make:model Role -m


In the database/migrations folder, find the newly created migration file and update the CreateRolesTable class with this snippet:

    <?php // File: ./database/migrations/*_create_roles_table.php

    // [...]

    class CreateRolesTable extends Migration
    {
        public function up()
        {
            Schema::create('roles', function (Blueprint $table) {
                $table->increments('id');
                $table->string('name');
                $table->string('description');
                $table->timestamps();
            });
        }

        public function down()
        {
            Schema::dropIfExists('roles');
        }
    }

We intend to create a many-to-many relationship between the User and Role models so let’s add a relationship method on both models.

Open the User model and add the following method:

    // File: ./app/User.php
    public function roles() 
    {
        return $this->belongsToMany(Role::class);
    }

Open the Role model and include the following method:

    // File: ./app/Role.php
    public function users() 
    {
        return $this->belongsToMany(User::class);
    }

We are also going to need a pivot table to associate each user with a matching role so let’s create a new migration file for the role_user table:

    $ php artisan make:migration create_role_user_table


In the database/migrations folder, find the newly created migration file and update the CreateRoleUserTable class with this snippet:

    // File: ./database/migrations/*_create_role_user_table.php
    <?php 

    // [...]

    class CreateRoleUserTable extends Migration
    {

        public function up()
        {
            Schema::create('role_user', function (Blueprint $table) {
                $table->increments('id');
                $table->integer('role_id')->unsigned();
                $table->integer('user_id')->unsigned();
            });
        }

        public function down()
        {
            Schema::dropIfExists('role_user');
        }
    }

Next, let’s create seeders that will populate the users and roles tables with some data. In your terminal, run the following command to create the database seeders:

    $ php artisan make:seeder RoleTableSeeder
    $ php artisan make:seeder UserTableSeeder


In the database/seeds folder, open the RoleTableSeeder.php file and replace the contents with the following code:

    // File: ./database/seeds/RoleTableSeeder.php
    <?php 

    use App\Role;
    use Illuminate\Database\Seeder;

    class RoleTableSeeder extends Seeder
    {
        public function run()
        {
            $role_regular_user = new Role;
            $role_regular_user->name = 'user';
            $role_regular_user->description = 'A regular user';
            $role_regular_user->save();

            $role_admin_user = new Role;
            $role_admin_user->name = 'admin';
            $role_admin_user->description = 'An admin user';
            $role_admin_user->save();
        }
    }

Open the UserTableSeeder.php file and replace the contents with the following code:

    // File: ./database/seeds/UserTableSeeder.php
    <?php 

    use Illuminate\Database\Seeder;
    use Illuminate\Support\Facades\Hash;
    use App\User;
    use App\Role;

    class UserTableSeeder extends Seeder
    {

        public function run()
        {
            $user = new User;
            $user->name = 'Samuel Jackson';
            $user->email = '[email protected]';
            $user->password = bcrypt('samuel1234');
            $user->save();
            $user->roles()->attach(Role::where('name', 'user')->first());

            $admin = new User;
            $admin->name = 'Neo Ighodaro';
            $admin->email = '[email protected]';
            $admin->password = bcrypt('neo1234');
            $admin->save();
            $admin->roles()->attach(Role::where('name', 'admin')->first());
        }
    }

We also need to update the DatabaseSeeder class. Open the file and update the run method as seen below:

    // File: ./database/seeds/DatabaseSeeder.php
    <?php 

    // [...]

    class DatabaseSeeder extends Seeder
    {
        public function run()
        {
            $this->call([
                RoleTableSeeder::class, 
                UserTableSeeder::class,
            ]);
        }
    }

Next, let's update the User model. We will be adding a checkRoles method that checks what role a user has. We will return a 404 page when a user doesn't have the expected role for a page. Open the User model and add these methods:

    // File: ./app/User.php
    public function checkRoles($roles) 
    {
        if ( ! is_array($roles)) {
            $roles = [$roles];    
        }

        if ( ! $this->hasAnyRole($roles)) {
            auth()->logout();
            abort(404);
        }
    }

    public function hasAnyRole($roles): bool
    {
        return (bool) $this->roles()->whereIn('name', $roles)->first();
    }

    public function hasRole($role): bool
    {
        return (bool) $this->roles()->where('name', $role)->first();
    }

Let’s modify the RegisterController.php file in the Controllers/Auth folder so that a default role, the user role, is always attached to a new user at registration.

Open the RegisterController and update the create action with the following code:

    // File: ./app/Http/Controllers/Auth/RegisterController.php
    protected function create(array $data)
    {       
        $user = User::create([
            'name'     => $data['name'],
            'email'    => $data['email'],
            'password' => bcrypt($data['password']),
        ]);

        $user->roles()->attach(\App\Role::where('name', 'user')->first());

        return $user;
    }

Now let’s migrate and seed the database so that we can log in with the sample accounts. To do this, run the following command in your terminal:

    $ php artisan migrate:fresh --seed


In order to test that our roles work as they should, we will make an update to the HomeController.php file. Open the HomeController and update the index method as seen below:

    // File: ./app/Http/Controllers/HomeController.php
    public function index(Request $request)
    {
        $request->user()->checkRoles('admin');

        return view('home');
    }

Now, only administrators should be able to see the dashboard. In a more complex application, we would use a middleware to do this instead.

We can test that this works by serving the application and logging in both user accounts; Samuel Jackson and Neo Ighodaro.

Remember that in our UserTableSeeder.php file, we defined Samuel as a regular user and Neo as an admin, so Samuel should see a 404 error after logging in and Neo should be able to see the homepage.

Testing the application

Let’s serve the application with this command:

    $ php artisan serve


When we try logging in with Samuel’s credentials, we should see this:

On the other hand, we will get logged in with Neo’s credentials because he has an admin account:

We will also confirm that whenever a new user registers, they are assigned the role of a regular user. We will create a new user and call him Greg; he should see a 404 error right after:

It works just as we wanted it to; however, it doesn't really make sense to redirect a regular user to a 404 page. Instead, we will edit the HomeController so that it redirects users based on their roles, that is, it redirects a regular user to a regular homepage and an admin to an admin dashboard.

Open the HomeController.php file and update the index method as seen below:

    // File: ./app/Http/Controllers/HomeController.php
    public function index(Request $request)
    {
        if ($request->user()->hasRole('user')) {
            return redirect('/');
        }

        if ($request->user()->hasRole('admin')){
            return redirect('/admin/dashboard');
        }
    }

If we serve our application and try to log in using the admin account, we will hit a 404 error because we do not have a controller or a view for the admin/dashboard route. In the next article, we will start building the basic views for the CMS.

Conclusion

In this tutorial, we learned how to install a fresh Laravel app on our machine and pulled in all the needed dependencies. We also learned how to configure the Laravel app to work with a MySQL database, created our models and migration files, and seeded the database using database seeders.

In the next part of this series, we will start building the views for the application.

The source code for this project is available on Github.

Build a CMS with Laravel and Vue - Part 2: Implementing posts

In the previous part of this series, we set up user authentication and role authorization but we didn’t create any views for the application yet. In this section, we will create the Post model and start building the frontend for the application.

Our application allows different levels of accessibility for two kinds of users: the regular user and the admin. In this chapter, we will focus on building the view that the regular users are permitted to see.

Before we build any views, let’s create the Post model as it is imperative to rendering the view.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.
Setting up the Post model

We will create the Post model with an associated resource controller and a migration file using this command:

    $ php artisan make:model Post -mr


Let’s navigate into the database/migrations folder and update the CreatePostsTable class that was generated for us:

    // File: ./database/migrations/*_create_posts_table.php
    <?php 

    // [...]

    class CreatePostsTable extends Migration
    {
        public function up()
        {
            Schema::create('posts', function (Blueprint $table) {
                $table->increments('id');
                $table->integer('user_id')->unsigned();
                $table->string('title');
                $table->text('body');
                $table->binary('image')->nullable();
                $table->timestamps();
            });
        }

        public function down()
        {
            Schema::dropIfExists('posts');
        }
    }

We included a user_id property because we want to create a relationship between the User and Post models. A Post also has an image field, which is where its associated image’s address will be stored.

Creating a database seeder for the Post table

We will create a new seeder file for the posts table using this command:

    $ php artisan make:seeder PostTableSeeder


Let’s navigate into the database/seeds folder and update the PostTableSeeder.php file:

    // File: ./database/seeds/PostTableSeeder.php
    <?php 

    use App\Post;
    use Illuminate\Database\Seeder;

    class PostTableSeeder extends Seeder
    {
        public function run()
        {
            $post = new Post;
            $post->user_id = 2;
            $post->title = "Using Laravel Seeders";
            $post->body = "Laravel includes a simple method of seeding your database with test data using seed classes. All seed classes are stored in the database/seeds directory. Seed classes may have any name you wish, but probably should follow some sensible convention, such as UsersTableSeeder, etc. By default, a DatabaseSeeder class is defined for you. From this class, you may use the  call method to run other seed classes, allowing you to control the seeding order.";
            $post->save();

            $post = new Post;
            $post->user_id = 2;
            $post->title = "Database: Migrations";
            $post->body = "Migrations are like version control for your database, allowing your team to easily modify and share the application's database schema. Migrations are typically paired with Laravel's schema builder to easily build your application's database schema. If you have ever had to tell a teammate to manually add a column to their local database schema, you've faced the problem that database migrations solve.";
            $post->save();
        }
    }

When we run this seeder, it will create two new posts and assign both of them to the admin user whose ID is 2. We are attaching both posts to the admin user because the regular users are only allowed to view posts and make comments; they can’t create a post.

Let’s open the DatabaseSeeder and update it with the following code:

    // File: ./app/database/seeds/DatabaseSeeder.php
    <?php 

    use Illuminate\Database\Seeder;

    class DatabaseSeeder extends Seeder
    {
        public function run()
        {
            $this->call([
                RoleTableSeeder::class,
                UserTableSeeder::class,
                PostTableSeeder::class,
            ]);
        }
    }

We will use this command to migrate our tables and seed the database:

    $ php artisan migrate:fresh --seed


Defining the relationships

Just as we previously created a many-to-many relationship between the User and Role models, we need to create a different kind of relationship between the Post and User models.

We will define the relationship as a one-to-many relationship because a user will have many posts but a post will only ever belong to one user.

Open the User model and include the method below:

    // File: ./app/User.php
    public function posts()
    {
        return $this->hasMany(Post::class);
    }

Open the Post model and include the method below:

    // File: ./app/Post.php
    public function user()
    {
        return $this->belongsTo(User::class);
    }
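
With both sides of the relationship in place, Eloquent lets us traverse it from either model. A quick sketch of what this enables, for example inside php artisan tinker:

    // Illustrative usage of the relationship (not part of the tutorial code)
    $user = App\User::find(2);
    $titles = $user->posts->pluck('title'); // titles of every post written by this user

    $author = App\Post::first()->user->name; // the author of the first post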

Setting up the routes

At this point, our application does not have a front page listing all the posts. Let's create one so anyone can see the created posts. Aside from the front page, we also need a single post page so a user can read a specific post.

Let’s include two new routes to our routes/web.php file:

  • A route for the front page that lists all the posts:

    Route::get('/', 'PostController@all');

  • A route for viewing a single post:

    Route::get('/posts/{post}', 'PostController@single');

With these two new routes added, here's what the routes/web.php file should look like:

    // File: ./routes/web.php
    <?php 

    Auth::routes();
    Route::get('/posts/{post}', 'PostController@single');
    Route::get('/home', 'HomeController@index')->name('home');
    Route::get('/', 'PostController@all');

Setting up the Post controller

In this section, we want to define the handler action methods that we registered in the routes/web.php file so that our application knows how to render the matching views.

First, let’s add the all() method:

    // File: ./app/Http/Controllers/PostController.php
    public function all()
    {
        return view('landing', [
            'posts' => Post::latest()->paginate(5)
        ]);
    }

Here, we retrieve five posts per page and send them to the landing view. We will create this view shortly.

Next, let’s add the single() method to the controller:

    // File: ./app/Http/Controllers/PostController.php
    public function single(Post $post)
    {
        return view('single', compact('post'));
    }

In the method above, we used a feature of Laravel named route model binding to map the URL parameter to a Post instance with the same ID. We are returning a single view, which we will create shortly. This will be the view for the single post page.
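
For comparison, without route model binding we would have to resolve the model ourselves. A rough equivalent of the same method, written by hand:

    // Illustrative equivalent without route model binding
    public function single($id)
    {
        // findOrFail() automatically returns a 404 when no post matches the id
        $post = Post::findOrFail($id);

        return view('single', compact('post'));
    }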

Building our views

Laravel uses a templating engine called Blade for its frontend. We will use Blade to build these parts of the frontend before switching to Vue in the next chapter.

Navigate to the resources/views folder and create two new Blade files:

  1. landing.blade.php
  2. single.blade.php

These are the files that will load the views for the landing page and single post page. Before we start writing any code in these files, we want to create a simple layout template that our page views can use as a base.

In the resources/views/layouts folder, create a Blade template file and call it master.blade.php. This is where we will define the inheritable template for our single and landing pages.

Open the master.blade.php file and update it with this code:

    <!-- File: ./resources/views/layouts/master.blade.php -->
    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta name="description" content="">
        <meta name="author" content="Neo Ighodaro">
        <title>LaravelCMS</title>
        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
        <style> 
        body {
          padding-top: 54px;
        }
        @media (min-width: 992px) {
          body {
              padding-top: 56px;
          }
        }
        </style>
      </head>
      <body>
        <nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
          <div class="container">
            <a class="navbar-brand" href="/">LaravelCMS</a>
            <div class="collapse navbar-collapse" id="navbarResponsive">
              <ul class="navbar-nav ml-auto">
                 @if (Route::has('login'))
                    @auth
                    <li class="nav-item">
                         <a class="nav-link" href="{{ url('/home') }}">Home</a>
                    </li>
                    <li class="nav-item">
                      <a class="nav-link" href="{{ route('logout') }}"
                                           onclick="event.preventDefault();
                                                         document.getElementById('logout-form').submit();">
                        Log out
                      </a>
                      <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;">
                        @csrf
                      </form>
                     </li>
                     @else
                     <li class="nav-item">
                         <a class="nav-link" href="{{ route('login') }}">Login</a>
                    </li>
                     <li class="nav-item">
                         <a class="nav-link" href="{{ route('register') }}">Register</a>
                     </li>
                     @endauth
                 @endif
              </ul>
            </div>
          </div>
        </nav>

        <div id="app">
            @yield('content')
        </div>

        <footer class="py-5 bg-dark">
          <div class="container">
            <p class="m-0 text-center text-white">Copyright &copy; LaravelCMS 2018</p>
          </div>
        </footer>
      </body>
    </html>

Now we can inherit this template in the landing.blade.php file, open it and update it with this code:

    {{-- File: ./resources/views/landing.blade.php --}}
    @extends('layouts.master')

    @section('content')
    <div class="container">
      <div class="row align-items-center">
        <div class="col-md-8 mx-auto">
          <h1 class="my-4 text-center">Welcome to the Blog </h1>

          @foreach ($posts as $post)
          <div class="card mb-4">
            <img class="card-img-top" src=" {!! !empty($post->image) ? '/uploads/posts/' . $post->image :  'http://placehold.it/750x300' !!} " alt="Card image cap">
            <div class="card-body">
              <h2 class="card-title text-center">{{ $post->title }}</h2>
              <p class="card-text"> {{ str_limit($post->body, $limit = 280, $end = '...') }} </p>
              <a href="/posts/{{ $post->id }}" class="btn btn-primary">Read More &rarr;</a>
            </div>
            <div class="card-footer text-muted">
              Posted {{ $post->created_at->diffForHumans() }} by
              <a href="#">{{ $post->user->name }} </a>
            </div>
          </div>
          @endforeach

        </div>
      </div>
    </div>
    @endsection
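
Because the all() method paginates the posts five per page, we could optionally render pagination links under the loop. This is a small, optional addition that is not part of the markup above:

    {{-- Hypothetical addition, placed right after @endforeach --}}
    {{ $posts->links() }}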

Let’s do the same with the single.blade.php file, open it and update it with this code:

    {{-- File: ./resources/views/single.blade.php --}}
    @extends('layouts.master')

    @section('content')
    <div class="container">
      <div class="row">
        <div class="col-lg-10 mx-auto">
          <h3 class="mt-4">{{ $post->title }} <span class="lead"> by <a href="#"> {{ $post->user->name }} </a></span> </h3>
          <hr>
          <p>Posted {{ $post->created_at->diffForHumans() }} </p>
          <hr>
          <img class="img-fluid rounded" src=" {!! !empty($post->image) ? '/uploads/posts/' . $post->image :  'http://placehold.it/750x300' !!} " alt="">
          <hr>
          <p class="lead">{{ $post->body }}</p>
          <hr>
          <div class="card my-4">
            <h5 class="card-header">Leave a Comment:</h5>
            <div class="card-body">
              <form>
                <div class="form-group">
                  <textarea class="form-control" rows="3"></textarea>
                </div>
                <button type="submit" class="btn btn-primary">Submit</button>
              </form>
            </div>
          </div>
        </div>
      </div>
    </div>
    @endsection

Testing the application

We can test the application to see that things work as we expect. When we serve the application, we expect to see a landing page and a single post page. We also expect to see two posts because that’s the number of posts we seeded into the database.

We will serve the application using this command:

    $ php artisan serve


We can visit http://localhost:8000 to see the application.

We have used simple placeholder images here because we haven’t built the admin dashboard that allows CRUD operations to be performed on posts.

In the coming chapters, we will add the ability for an admin to include a custom image when creating a new post.

Conclusion

In this chapter, we created the Post model and defined a relationship on it to the User model. We also built the landing page and single page.

In the next part of this series, we will develop the API that will be the medium for communication between the admin user and the post items.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 3: Building an API

In the previous part of this series, we initialized the posts resource and started building the frontend of the CMS. We designed the front page that shows all the posts and the single post page using Laravel’s templating engine, Blade.

In this part of the series, we will start building the API for the application. We will create an API for CRUD operations that an admin will perform on posts and we will test the endpoints using Postman.

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.
Building the API using Laravel’s API resources

The Laravel framework makes it very easy to build APIs. It has an API resources feature that we can easily adopt in our project. You can think of API resources as a transformation layer between Eloquent models and the JSON responses that will be sent back by our API.

Allowing mass assignment on specified fields

Since we are going to be performing CRUD operations on the posts in the application, we have to explicitly specify that it’s permitted for some fields to be mass-assigned data. For security reasons, Laravel prevents mass assignment of data to model fields by default.

Open the Post.php file and include this line of code:

    // File: ./app/Post.php
    protected $fillable = ['user_id', 'title', 'body', 'image'];
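
With those fields whitelisted, mass assignment becomes available on the model. As an illustration (this is not the code the tutorial uses), a post could then be created with a single create() call:

    // Hypothetical mass-assignment usage enabled by $fillable
    $post = Post::create([
        'user_id' => 2,
        'title'   => 'A mass-assigned post',
        'body'    => 'Only fields listed in $fillable are accepted here.',
    ]);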

Defining API routes

We will use the apiResource()method to generate only API routes. Open the routes/api.php file and add the following code:

    // File: ./routes/api.php
    Route::apiResource('posts', 'PostController');
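
For reference, this single call registers the standard RESTful routes for the resource, roughly:

    // Routes registered by Route::apiResource('posts', 'PostController')
    // GET       /api/posts         -> PostController@index
    // POST      /api/posts         -> PostController@store
    // GET       /api/posts/{post}  -> PostController@show
    // PUT/PATCH /api/posts/{post}  -> PostController@update
    // DELETE    /api/posts/{post}  -> PostController@destroy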


Creating the Post resource

At the beginning of this section, we already talked about what Laravel’s API resources are. Here, we create a resource class for our Post model. This will enable us to retrieve Post data and return formatted JSON format.

To create a resource class for our Post model run the following command in your terminal:

    $ php artisan make:resource PostResource


A new PostResource.php file will be available in the app/Http/Resources directory of our application. Open up the PostResource.php file and replace the toArray() method with the following:

    // File: ./app/Http/Resources/PostResource.php
    public function toArray($request)
    {
        return [
            'id' => $this->id,
            'title' => $this->title,
            'body' => $this->body,
            'image' => $this->image,
            'created_at' => (string) $this->created_at,
            'updated_at' => (string) $this->updated_at,
        ];
    }

The job of this toArray() method is to convert our Post resource into an array. As seen above, we have specified the fields on our Post model that we want returned as JSON when we make a request for posts.

We are also explicitly casting the created_at and updated_at dates to strings so that they are returned as date strings; they are normally instances of Carbon.

Now that we have created a resource class for our Post model, we can start building the API’s action methods in our PostController and return instances of the PostResource where we want.

Adding the action methods to the Post controller

The usual actions performed on a post include the following:

  1. Creating a post
  2. Reading a single post or a list of posts
  3. Updating an existing post
  4. Deleting a post

In the last article, we already implemented a kind of ‘Read’ functionality when we defined the all and single methods. These methods allow users to browse through posts on the homepage.

In this section, we will define the methods that will resolve our API requests for creating, reading, updating and deleting posts.

The first thing we want to do is import the PostResource class at the top of the PostController.php file:

    // File: ./app/Http/Controllers/PostController.php
    use App\Http\Resources\PostResource;

Building the handler action for the create operation

In the PostController update the store() action method with the code snippet below. It will allow us to validate and create a new post:

    // File: ./app/Http/Controllers/PostController.php
    public function store(Request $request)
    {
        $this->validate($request, [
            'title' => 'required',
            'body' => 'required',
            'user_id' => 'required',            
            'image' => 'required|mimes:jpeg,png,jpg,gif,svg',
        ]);

        $post = new Post;

        if ($request->hasFile('image')) {
            $image = $request->file('image');
            $name = str_slug($request->title).'.'.$image->getClientOriginalExtension();
            $destinationPath = public_path('/uploads/posts');
            $imagePath = $destinationPath . "/" . $name;
            $image->move($destinationPath, $name);
            $post->image = $name;
        }

        $post->user_id = $request->user_id;
        $post->title = $request->title;
        $post->body = $request->body;
        $post->save();

        return new PostResource($post);
    }

Here’s a breakdown of what this method does:

  1. Validate the request, requiring the title, body, user_id and image fields, and checking that the image is one of the accepted types.
  2. If an image file was uploaded, build a file name from the post title, move the file into the public/uploads/posts directory, and store the file name on the post.
  3. Assign the remaining fields, save the post, and return it wrapped in a PostResource.

Building the handler action for the read operations

What we want here is to be able to read all the created posts or a single post. This is possible because the apiResource() method defines the API routes using standard REST rules.

This means that a GET request to http://127.0.0.1:8000/api/posts should be resolved by the index() action method. Let's update the index() method with the following code:

    // File: ./app/Http/Controllers/PostController.php
    public function index()
    {
        return PostResource::collection(Post::latest()->paginate(5));
    }

This method will allow us to return a JSON formatted collection of all of the stored posts. We also want to paginate the response as this will allow us to create a better view on the admin dashboard.

Following the RESTful conventions discussed above, a GET request to http://127.0.0.1:8000/api/posts/{id} should be resolved by the show() action method. Let's update the method accordingly:

    // File: ./app/Http/Controllers/PostController.php
    public function show(Post $post)
    {
        return new PostResource($post);
    }

Awesome, now this method will return a single instance of a post resource upon API query.

Building the handler action for the update operation

Next, let’s update the update() method in the PostController class. It will allow us to modify an existing post:

    // File: ./app/Http/Controllers/PostController.php
    public function update(Request $request, Post $post)
    {
        $this->validate($request, [
            'title' => 'required',
            'body' => 'required',
        ]);

        $post->update($request->only(['title', 'body']));

        return new PostResource($post);
    }

This method receives a request and a post id as parameters, then we use route model binding to resolve the id into an instance of a Post. First, we validate the $request attributes, then we update the title and body fields of the resolved post.

Building the handler action for the delete operation

Let’s update the destroy() method in the PostController class. This method will allow us to remove an existing post:

    // File: ./app/Http/Controllers/PostController.php
    public function destroy(Post $post)
    {
        $post->delete();

        return response()->json(null, 204);
    }

In this method, we resolve the Post instance, then delete it and return a 204 response code.

Our methods are complete. We have a method to handle our CRUD operations, however, we haven’t built the frontend for the admin dashboard.

At the end of the second article, we defined the HomeController@index() action method like this:

    public function index(Request $request)
    {
        if ($request->user()->hasRole('user')) {
            return view('home');
        }

        if ($request->user()->hasRole('admin')) {
            return redirect('/admin/dashboard');
        }
    }

This allowed us to redirect regular users to the view home, and admin users to the URL /admin/dashboard. At this point in this series, a visit to /admin/dashboard will fail because we have neither defined it as a route with a handler Controller nor built a view for it.

Let’s create the AdminController with this command:

    $ php artisan make:controller AdminController


We will add the /admin/ route to our routes/web.php file:

    Route::get('/admin/{any}', 'AdminController@index')->where('any', '.*');

Let’s update the AdminController.php file to use the auth middleware and include an index() action method:

    // File: ./app/Http/Controllers/AdminController.php
    <?php 

    namespace App\Http\Controllers;

    class AdminController extends Controller
    {
        public function __construct()
        {
            $this->middleware('auth');
        }

        public function index()
        {
            if (request()->user()->hasRole('admin')) {
                return view('admin.dashboard');
            }

            if (request()->user()->hasRole('user')) {
                return redirect('/home');
            }
        }
    }

In the index() action method, we included a check that ensures only admin users can visit the admin dashboard and perform CRUD operations on posts.

We will not start building the admin dashboard in this article but will test that our API works properly. We will use Postman to make requests to the application.

Testing the application

Let’s test that our API works as expected. We will, first of all, serve the application using this command:

    $ php artisan serve


We can visit http://localhost:8000 to see our application and there should be exactly two posts available; these are the posts we seeded into the database during the migration:

Now let’s create a new post over the API interface using Postman. Send a POST request as seen below:

Now let’s update this post we just created. In Postman, we will pass only the title and body fields to a PUT request.

To make it easy, you can just copy the payload below and use the raw request data type for the Body:

    {
      "title": "We made an edit to the Post on APIs",
      "body": "To a developer, 'What's an API?' might be a straightforward - if not exactly simple - question. But to anyone who doesn't have experience with code. APIs can come across as confusing or downright intimidating."
    }


Finally, let’s delete the post using Postman:

We are sure the post is deleted because the response status is 204 No Content as we specified in the PostController.

Conclusion

In this chapter, we learned about Laravel's API resources and created a resource class for the Post model. We also used the apiResource() method to generate API-only routes for our application. We wrote the methods that handle the API operations and tested them using Postman.

In the next part, we will build the admin dashboard and develop the logic that will enable the admin user to manage posts over the API.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 4: Building the dashboard

In the last article of this series, we built the API interface and used Laravel API resources to return neatly formatted JSON responses. We tested that the API works as we defined it to using Postman.

In this part of the series, we will start building the admin frontend of the CMS. This is the first part of the series where we will integrate Vue and explore Vue’s magical abilities.

When we are done with this part, our application will have some added functionalities as seen below:

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.
Building the frontend

Laravel ships with Vue out of the box, so we do not need to use the Vue CLI or reference Vue from a CDN. This makes it possible to keep the entire application, frontend and backend, in a single codebase.

Every fresh Laravel installation includes some Vue files by default; we can see them in the resources/assets/js/components folder.

Setting up Vue and VueRouter

Before we can start using Vue in our application, we need to first install some dependencies using NPM. To install the dependencies that come by default with Laravel, run the command below:

    $ npm install


We will be managing all of the routes for the admin dashboard using vue-router so let’s pull it in:

    $ npm install --save vue-router


When the installation is complete, the next thing we want to do is open the resources/assets/js/app.js file and replace its contents with the code below:

    // File: ./resources/assets/js/app.js
    require('./bootstrap');

    import Vue from 'vue'
    import VueRouter from 'vue-router'
    import Homepage from './components/Homepage'
    import Read from './components/Read'

    Vue.use(VueRouter)

    const router = new VueRouter({
        mode: 'history',
        routes: [
            {
                path: '/admin/dashboard',
                name: 'read',
                component: Read,
                props: true
            },
        ],
    });

    const app = new Vue({
        el: '#app',
        router,
        components: { Homepage },
    });

In the snippet above, we imported the VueRouter and added it to the Vue application. We also imported a Homepage and a Read component. These are the components where we will write our markup so let’s create both files.

Open the resources/assets/js/components folder and create four files:

  1. Homepage.vue
  2. Read.vue
  3. Create.vue
  4. Update.vue

In the resources/assets/js/app.js file, we defined a routes array and in it, we registered a read route. During render time, this route’s path will be mapped to the Read component.

In the previous article, we specified that admin users should be shown an admin.dashboard view in the index() method; however, we never created this view. Let's create it now. Open the resources/views folder and create a new folder called admin. Within the new resources/views/admin folder, create a file called dashboard.blade.php. This is the entry point to the admin dashboard; from here on, we will let VueRouter handle everything else.

Open the resources/views/admin/dashboard.blade.php file and paste in the following code:

    <!-- File: ./resources/views/admin/dashboard.blade.php -->
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <title> Welcome to the Admin dashboard </title>
        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
        <style>
            html, body {
            background-color: #202B33;
            color: #738491;
            font-family: "Open Sans";
            font-size: 16px;
            font-smoothing: antialiased;
            overflow: hidden;
            }
        </style>
    </head>
    <body>

      <script src="{{ asset('js/app.js') }}"></script>
    </body>
    </html>

Our goal here is to integrate Vue into the application, so we included the resources/assets/js/app.js file with this line of code:

    <script src="{{ asset('js/app.js') }}"></script>


For our app to work, we need a root element to bind our Vue instance to. Before the <script> tag, add this snippet of code:

    <div id="app">
      <Homepage 
        :user-name='@json(auth()->user()->name)' 
        :user-id='@json(auth()->user()->id)'
      ></Homepage>
    </div>

We earlier defined the Homepage component as the wrapping component, which is why we pulled it in here as the root component. Some of the frontend components need details of the logged-in admin user to perform CRUD operations, so we passed down the userName and userId props to the Homepage component.

We need to prevent the CSRF error from occurring in our Vue frontend, so include this snippet of code just before the <title> tag:

    <meta name="csrf-token" content="{{ csrf_token() }}">
    <script> window.Laravel = { csrfToken: '{{ csrf_token() }}' } </script>

This snippet ensures that the correct token is always available to our frontend; Laravel provides CSRF protection out of the box.

At this point, this should be the contents of your resources/views/admin/dashboard.blade.php file:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <meta name="csrf-token" content="{{ csrf_token() }}">
        <script> window.Laravel = { csrfToken: '{{ csrf_token() }}' } </script>
        <title> Welcome to the Admin dashboard </title>
        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
        <style>
          html, body {
            background-color: #202B33;
            color: #738491;
            font-family: "Open Sans";
            font-size: 16px;
            font-smoothing: antialiased;
            overflow: hidden;
          }
        </style>
    </head>
    <body>
    <div id="app">
      <Homepage 
        :user-name='@json(auth()->user()->name)' 
        :user-id='@json(auth()->user()->id)'>
      </Homepage>
    </div>
    <script src="{{ asset('js/app.js') }}"></script>
    </body>
    </html>

Setting up the Homepage view

Open the Homepage.vue file that we created some time ago and include this markup template:

    <!-- File: ./resources/app/js/components/Homepage.vue -->
    <template>
      <div>
        <nav>
          <section>
            <a style="color: white" href="/admin/dashboard">Laravel-CMS</a> &nbsp; ||  &nbsp;
            <a style="color: white" href="/">HOME</a>
            <hr>
            <ul>
               <li>
                 <router-link :to="{ name: 'create', params: { userId } }">
                   NEW POST
                 </router-link>
               </li>
            </ul>
          </section>
        </nav>
        <article>
          <header>
            <header class="d-inline">Welcome, {{ userName }}</header>
            <p @click="logout" class="float-right mr-3" style="cursor: pointer">Logout</p>
          </header>
          <div> 
            <router-view></router-view> 
          </div>
        </article>
      </div>
    </template>

We added a router-link in this template, which routes to the Create component.

We are passing the userId data to the create component because a userId is required during Post creation.

Let’s include some styles so that the page looks good. Below the closing template tag, paste the following code:

    <style scoped>
      @import url(https://fonts.googleapis.com/css?family=Dosis:300|Lato:300,400,600,700|Roboto+Condensed:300,700|Open+Sans+Condensed:300,600|Open+Sans:400,300,600,700|Maven+Pro:400,700);
      @import url("https://netdna.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.css");
      * {
        -moz-box-sizing: border-box;
        -webkit-box-sizing: border-box;
        box-sizing: border-box;
      }
      header {
        color: #d3d3d3;
      }
      nav {
        position: absolute;
        top: 0;
        bottom: 0;
        right: 82%;
        left: 0;
        padding: 22px;
        border-right: 2px solid #161e23;
      }
      nav > header {
        font-weight: 700;
        font-size: 0.8rem;
        text-transform: uppercase;
      }
      nav section {
        font-weight: 600;
      }
      nav section header {
        padding-top: 30px;
      }
      nav section ul {
        list-style: none;
        padding: 0px;
      }
      nav section ul a {
        color: white;
        text-decoration: none;
        font-weight: bold;
      }
      article {
        position: absolute;
        top: 0;
        bottom: 0;
        right: 0;
        left: 18%;
        overflow: auto;
        border-left: 2px solid #2a3843;
        padding: 20px;
      }
      article > header {
        height: 60px;
        border-bottom: 1px solid #2a3843;
      }
    </style>

Next, let’s add the <script> section that will use the props we passed down from the parent component. We will also define the method that controls the log out feature here. Below the closing style tag, paste the following code:

    <script>
    export default {
      props: {
        userId: {
          type: Number,
          required: true
        },
        userName: {
          type: String,
          required: true
        }
      },
      data() {
        return {};
      },
      methods: {
        logout() {
          axios.post("/logout").then(() => {
            window.location = "/";
          });
        }
      }
    };
    </script>

Setting up the Read view

In the resources/assets/js/app.js file, we defined the path of the read component as /admin/dashboard, which is the same address as the Homepage component. This will make sure the Read component always loads by default.

In the Read component, we want to load all of the available posts. We are also going to add Update and Delete options to each post. Clicking on these options will lead to the update and delete views respectively.

Open the Read.vue file and paste the following:

    <!-- File: ./resources/app/js/components/Read.vue -->
    <template>
        <div id="posts">
            <p class="border p-3" v-for="post in posts">
                {{ post.title }}
                <router-link :to="{ name: 'update', params: { postId : post.id } }">
                    <button type="button" class="p-1 mx-3 float-right btn btn-light">
                        Update
                    </button>
                </router-link>
                <button 
                    type="button" 
                    @click="deletePost(post.id)" 
                    class="p-1 mx-3 float-right btn btn-danger"
                >
                    Delete
                </button>
            </p>
            <div>
                <button 
                    v-if="next" 
                    type="button" 
                    @click="navigate(next)" 
                    class="m-3 btn btn-primary"
                >
                  Next
                </button>
                <button 
                    v-if="prev" 
                    type="button" 
                    @click="navigate(prev)" 
                    class="m-3 btn btn-primary"
                >
                  Previous
                </button>
            </div>
        </div>
    </template>

Above, we have the template to handle the posts that are loaded from the API. Next, paste the following below the closing template tag:

    <script>
    export default {
      mounted() {
        this.getPosts();
      },
      data() {
        return {
          posts: {},
          next: null,
          prev: null
        };
      },
      methods: {
        getPosts(address) {
          axios.get(address ? address : "/api/posts").then(response => {
            this.posts = response.data.data;
            this.prev = response.data.links.prev;
            this.next = response.data.links.next;
          });
        },
        deletePost(id) {
          axios.delete("/api/posts/" + id).then(response => this.getPosts())
        },
        navigate(address) {
          this.getPosts(address)
        }
      }
    };
    </script>

In the script above, we defined a getPosts() method that requests a list of posts from the backend server. We also defined a posts object as a data property. This object will be populated whenever posts are received from the backend server.

We defined next and prev data properties to store the pagination links, and we only display the pagination buttons when they are available.

Lastly, we defined a deletePost() method that takes the id of a post as a parameter and sends a DELETE request to the API interface using Axios.

Testing the application

Now that we have completed the first few components, we can serve the application using this command:

    $ php artisan serve


We will also build the assets so that our JavaScript is compiled. To do this, we will run the command below in the root of the project folder:

    $ npm run dev


We can visit the application’s URL http://localhost:8000 and log in as an admin user, and delete a post:

Conclusion

In this part of the series, we started building the admin dashboard using Vue. We installed VueRouter to make the admin dashboard a SPA. We added the homepage view of the admin dashboard and included read and delete functionalities.

We are not done with the dashboard just yet. In the next part, we will add the views that let us create and update posts.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 5: Completing our dashboards

In the previous part of this series, we built the first pieces of the admin dashboard using Vue. We also made it a SPA with VueRouter, which means that navigating between pages does not cause the browser to reload.

We only built the wrapper component and the Read component that retrieves the posts to be loaded so an admin can manage them.

Here’s a recording of what we ended up with, in the last article:

In this article, we will build the view that will allow users to create and update posts. We will start writing code in the Update.vue and Create.vue files that we created in the previous article.

When we are done with this part, we will have additional functionality for creating and updating posts:

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.
Including the new routes in VueRouter

In the previous article, we only defined the route for the Read component. We now need to include the route configuration for the new components we are about to build: Create and Update.

Open the resources/assets/js/app.js file and replace the contents with the code below:

    require('./bootstrap');

    import Vue from 'vue'
    import VueRouter from 'vue-router'
    import Homepage from './components/Homepage'
    import Create from './components/Create'
    import Read from './components/Read'
    import Update from './components/Update'

    Vue.use(VueRouter)

    const router = new VueRouter({
        mode: 'history',
        routes: [
            {
                path: '/admin/dashboard',
                name: 'read',
                component: Read,
                props: true
            },
            {
                path: '/admin/create',
                name: 'create',
                component: Create,
                props: true
            },
            {
                path: '/admin/update',
                name: 'update',
                component: Update,
                props: true
            },
        ],
    });

    const app = new Vue({
        el: '#app',
        router,
        components: { Homepage },
    });

Above, we added two new components to the JavaScript file: Create and Update. We also registered them with the router so that they can be loaded at the specified URLs.

Building the create view

Open the Create.vue file and update it with this markup template:

    <!-- File: ./resources/app/js/components/Create.vue -->
    <template>
      <div class="container">
        <form>
          <div :class="['form-group m-1 p-3', (successful ? 'alert-success' : '')]">
            <span v-if="successful" class="label label-sucess">Published!</span>
          </div>
          <div :class="['form-group m-1 p-3', error ? 'alert-danger' : '']">
            <span v-if="errors.title" class="label label-danger">
              {{ errors.title[0] }}
            </span>
            <span v-if="errors.body" class="label label-danger"> 
              {{ errors.body[0] }} 
            </span>
            <span v-if="errors.image" class="label label-danger"> 
              {{ errors.image[0] }} 
            </span>
          </div>

          <div class="form-group">
            <input type="title" ref="title" class="form-control" id="title" placeholder="Enter title" required>
          </div>

          <div class="form-group">
            <textarea class="form-control" ref="body" id="body" placeholder="Enter a body" rows="8" required></textarea>
          </div>

          <div class="custom-file mb-3">
            <input type="file" ref="image" name="image" class="custom-file-input" id="image" required>
            <label class="custom-file-label" >Choose file...</label>
          </div>

          <button type="submit" @click.prevent="create" class="btn btn-primary block">
            Submit
          </button>
        </form>
      </div>
    </template>

Above, we have the template for the Create component. If there is an error during post creation, a field indicates the specific error. When a post is successfully published, a message says so.

Let’s include the script logic that will perform the sending of posts to our backend server and read back the response.

After the closing template tag add this:

    <script>
    export default {
      props: {
        userId: {
          type: Number,
          required: true
        }
      },
      data() {
        return {
          error: false,
          successful: false,
          errors: []
        };
      },
      methods: {
        create() {
          const formData = new FormData();
          formData.append("title", this.$refs.title.value);
          formData.append("body", this.$refs.body.value);
          formData.append("user_id", this.userId);
          formData.append("image", this.$refs.image.files[0]);

          axios
            .post("/api/posts", formData)
            .then(response => {
              this.successful = true;
              this.error = false;
              this.errors = [];
            })
            .catch(error => {
              if (!_.isEmpty(error.response)) {
                if (error.response.status === 422) {
                  this.errors = error.response.data.errors;
                  this.successful = false;
                  this.error = true;
                }
              }
            });

          this.$refs.title.value = "";
          this.$refs.body.value = "";
        }
      }
    };
    </script>

In the script above, we defined a create() method that takes the values of the input fields and uses the Axios library to send them to the API interface on the backend server. Within this method, we also update the status of the operation, so that an admin user can know when a post is created successfully or not.

Building the update view

Let’s start building the Update component. Open the Update.vue file and update it with this markup template:

    <!-- File: ./resources/app/js/components/Update.vue -->
    <template>
      <div class="container">
        <form>
          <div :class="['form-group m-1 p-3', successful ? 'alert-success' : '']">
            <span v-if="successful" class="label label-sucess">Updated!</span>
          </div>

          <div :class="['form-group m-1 p-3', error ? 'alert-danger' : '']">
            <span v-if="errors.title" class="label label-danger">
              {{ errors.title[0] }}
            </span>
            <span v-if="errors.body" class="label label-danger">
              {{ errors.body[0] }}
            </span>
          </div>

          <div class="form-group">
            <input type="title" ref="title" class="form-control" id="title" placeholder="Enter title" required>
          </div>

          <div class="form-group">
            <textarea class="form-control" ref="body" id="body" placeholder="Enter a body" rows="8" required></textarea>
          </div>

          <button type="submit" @click.prevent="update" class="btn btn-primary block">
            Submit
          </button>
        </form>
      </div>
    </template>

This template is similar to the one in the Create component. Let’s add the script for the component.

Below the closing template tag, paste the following:

    <script>
    export default {
      mounted() {
        this.getPost();
      },
      props: {
        postId: {
          type: Number,
          required: true
        }
      },
      data() {
        return {
          error: false,
          successful: false,
          errors: []
        };
      },
      methods: {
        update() {
          let title = this.$refs.title.value;
          let body = this.$refs.body.value;

          axios
            .put("/api/posts/" + this.postId, { title, body })
            .then(response => {
              this.successful = true;
              this.error = false;
              this.errors = [];
            })
            .catch(error => {
              if (!_.isEmpty(error.response)) {
                if (error.response.status === 422) {
                  this.errors = error.response.data.errors;
                  this.successful = false;
                  this.error = true;
                }
              }
            });
        },
        getPost() {
          axios.get("/api/posts/" + this.postId).then(response => {
            this.$refs.title.value = response.data.data.title;
            this.$refs.body.value = response.data.data.body;
          });
        }
      }
    };
    </script>


In the script above, we call the getPost() method as soon as the component is mounted. The getPost() method fetches the data of a single post from the backend server using the postId.

When Axios sends back the data for the post, we update the input fields in this component so they can be updated.

Finally, the update() method takes the values of the fields in the component and sends them to the backend server for an update. If the update fails, we get instant feedback.

Testing the application

To test that our changes work, we want to refresh the database and restore it back to a fresh state. To do this, run the following command in your terminal:

    $ php artisan migrate:fresh --seed


Next, let’s compile our JavaScript files and assets. This will make sure all the changes we made in the Vue component and the app.js file gets built. To recompile, run the command below in your terminal:

    $ npm run dev


Lastly, we need to serve the application. To do this, run the following command in your terminal window:

    $ php artisan serve


We will visit the application at http://localhost:8000 and log in as an admin user. From the dashboard, you can test the create and update features:

Conclusion

In this part of the series, we updated the dashboard to include the Create and Update component so the administrator can add and update posts.

In the next article, we will add support for comments and make them update in realtime.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 6: Adding Realtime Comments

In the previous part of this series, we finished building the admin dashboard using Vue. We added the Create and Update components, which are used for creating a new post and updating an existing post.

Here’s a screen recording of what we have been able to achieve:

In this final part of the series, we will be adding support for comments. We will also ensure that the comments on each post are updated in realtime, so a user doesn’t have to refresh the page to see new comments.

When we are done, our application will have new features and will work like this:

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.
Adding comments to the backend

When we were creating the API, we did not add the support for comments to the post resource, so we will have to do so now. Open the API project in your text editor as we will be modifying the project a little.

The first thing we want to do is create a model, controller, and a migration for the comment resource. To do this, open your terminal and cd to the project directory and run the following command:

    $ php artisan make:model Comment -mc


The command above will create a model called Comment, a controller called CommentController, and a migration file in the database/migrations directory.

Updating the comments migration file

To update the comments migration, navigate to the database/migrations folder and find the newly created migration file for the Comment model. Let's update the up() method in the file:

    // File: ./database/migrations/*_create_comments_table.php
    public function up()
    {
        Schema::create('comments', function (Blueprint $table) {
            $table->increments('id');
            $table->timestamps();
            $table->integer('user_id')->unsigned();
            $table->integer('post_id')->unsigned();
            $table->text('body');
        });
    }

We included user_id and post_id fields because we intend to create a link between the comments, users, and posts. The body field will contain the actual comment.

Defining the relationships among the Comment, User, and Post models

In this application, a comment will belong to both a user and a post, because a user makes a comment on a specific post. We need to define the relationships that tie everything together.

Open the User model and include this method:

    // File: ./app/User.php
    public function comments()
    {
        return $this->hasMany(Comment::class);
    }

This is a relationship that simply says that a user can have many comments. Now let’s define the same relationship on the Post model. Open the Post.php file and include this method:

    // File: ./app/Post.php
    public function comments()
    {
        return $this->hasMany(Comment::class);
    }

Finally, we will include two methods in the Comment model to complete the second half of the relationships we defined in the User and Post models.

Open the app/Comment.php file and include these methods:

    // File: ./app/Comment.php
    public function user()
    {
        return $this->belongsTo(User::class);
    }

    public function post()
    {
        return $this->belongsTo(Post::class);
    }

Since we want to be able to mass assign data to specific fields of a comment instance during comment creation, we will include this array of permitted assignments in the app/Comment.php file:

    protected $fillable = ['user_id', 'post_id', 'body'];

We can now run our database migration for our comments:

    $ php artisan migrate


Configuring Laravel to broadcast events using Pusher

We already said that comments will update in realtime and that we will build this with Pusher, so we need to enable Laravel's event broadcasting feature.

Open the config/app.php file and uncomment the following line in the providers array:

    App\Providers\BroadcastServiceProvider::class,


Next, we need to configure the broadcast driver in the .env file:

    BROADCAST_DRIVER=pusher


Let’s pull in the Pusher PHP SDK using composer:

    $ composer require pusher/pusher-php-server


Configuring Pusher

To use Pusher in this application, you need a Pusher account. You can create a free Pusher account here, then log in to your dashboard and create an app.

Once you have created an app, we will use the app details to configure Pusher in the .env file:

    PUSHER_APP_ID=xxxxxx
    PUSHER_APP_KEY=xxxxxxxxxxxxxxxxxxxx
    PUSHER_APP_SECRET=xxxxxxxxxxxxxxxxxxxx
    PUSHER_APP_CLUSTER=xx


Update the Pusher keys with the app credentials provided for you under the Keys section on the Overview tab on the Pusher dashboard.

Broadcasting an event for when a new comment is sent

To make comments update in realtime, we have to broadcast an event whenever a comment is created. We will create a new event called CommentSent, which will be fired whenever a new comment is successfully saved.

Run this command in your terminal:

    php artisan make:event CommentSent


A new file will be created in the app/Events directory. Open the CommentSent.php file and ensure that it implements the ShouldBroadcast interface.

Open and replace the file with the following code:

    // File: ./app/Events/CommentSent.php
    <?php 

    namespace App\Events;

    use App\Comment;
    use App\User;
    use Illuminate\Broadcasting\Channel;
    use Illuminate\Queue\SerializesModels;
    use Illuminate\Broadcasting\PrivateChannel;
    use Illuminate\Broadcasting\PresenceChannel;
    use Illuminate\Foundation\Events\Dispatchable;
    use Illuminate\Broadcasting\InteractsWithSockets;
    use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

    class CommentSent implements ShouldBroadcast
    {
        use Dispatchable, InteractsWithSockets, SerializesModels;

        public $user;

        public $comment;

        public function __construct(User $user, Comment $comment)
        {
            $this->user = $user;

            $this->comment = $comment;
        }

        public function broadcastOn()
        {
            return new PrivateChannel('comment');
        }
    }

In the code above, we created two public properties, user and comment, to hold the data that will be passed to the channel we are broadcasting on. We also created a private channel called comment. We are using a private channel so that only authenticated clients can subscribe to the channel.
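
By default, Laravel serializes the event's public properties (user and comment) into the broadcast payload. If we ever wanted to trim exactly what goes over the wire, we could optionally add a broadcastWith() method to the event; a sketch, not required for this tutorial:

    // Hypothetical payload customization on the CommentSent event
    public function broadcastWith()
    {
        return [
            'user'    => ['name' => $this->user->name],
            'comment' => ['body' => $this->comment->body],
        ];
    }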

Defining the routes for handling operations on a comment

We created a controller for the comment model earlier but we haven’t defined the web routes that will redirect requests to be handled by that controller.

Open the routes/web.php file and include the code below:

    // File: ./routes/web.php
    Route::get('/{post}/comments', 'CommentController@index');
    Route::post('/{post}/comments', 'CommentController@store');

Setting up the action methods in the CommentController

We need to include two methods in the CommentController.php file; these methods will be responsible for storing and retrieving comments. In the store() method, we will also broadcast an event when a new comment is created.

Open the CommentController.php file and replace its contents with the code below:

    // File: ./app/Http/Controllers/CommentController.php
    <?php 

    namespace App\Http\Controllers;

    use App\Comment;
    use App\Events\CommentSent;
    use App\Post;
    use Illuminate\Http\Request;

    class CommentController extends Controller
    {
        public function store(Post $post)
        {
            $this->validate(request(), [
                'body' => 'required',
            ]);

            $user = auth()->user();

            $comment = Comment::create([
                'user_id' => $user->id,
                'post_id' => $post->id,
                'body' => request('body'),
            ]);

            broadcast(new CommentSent($user, $comment))->toOthers();

            return ['status' => 'Message Sent!'];
        }

        public function index(Post $post)
        {
            return $post->comments()->with('user')->get();
        }
    }

In the store() method above, we validate the request and then create a new comment on the post. After the comment has been created, we broadcast the CommentSent event to other clients so they can update their comment lists in realtime.

In the index() method, we simply return the comments belonging to a post along with the user that made each comment.

Adding a layer of authentication

Let’s add a layer of authentication that ensures that only authenticated users can listen on the private comment channel we created.

Add the following code to the routes/channels.php file:

    // File: ./routes/channels.php
    Broadcast::channel('comment', function ($user) {
        return auth()->check();
    });

Adding comments to the frontend

In the second article of this series, we created the view for the single post landing page in the single.blade.php file, but we didn’t add the comments functionality. We are going to add it now. We will be using Vue to build the comments for this application so the first thing we will do is include Vue in the frontend of our application.

Open the master layout template and include the compiled JavaScript bundle (which contains Vue) in its <head> tag. Just before the <title> tag in the master.blade.php file, include this snippet:

    <!-- File: ./resources/views/layouts/master.blade.php -->
    <meta name="csrf-token" content="{{ csrf_token() }}">
    <script src="{{ asset('js/app.js') }}" defer></script>

The csrf_token() meta tag is there so that users cannot forge requests against our application. All our requests will pick up the randomly generated CSRF token and send it along with every request.
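
For reference, the default resources/assets/js/bootstrap.js that ships with Laravel reads this meta tag and attaches the token to every axios request; it looks roughly like this:

    // File: ./resources/assets/js/bootstrap.js (excerpt, roughly as shipped with Laravel)
    let token = document.head.querySelector('meta[name="csrf-token"]');

    if (token) {
        // Send the CSRF token as a header with every axios request
        window.axios.defaults.headers.common['X-CSRF-TOKEN'] = token.content;
    } else {
        console.error('CSRF token not found');
    }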

Related: CSRF in Laravel: how VerifyCsrfToken works and how to prevent attacks

Now the next thing we want to do is update the resources/assets/js/app.js file so that it includes a template for the comments view.

Open the file and replace its contents with the code below:

    require('./bootstrap');

    import Vue          from 'vue'
    import VueRouter    from 'vue-router'
    import Homepage from './components/Homepage'
    import Create   from './components/Create'
    import Read     from './components/Read'
    import Update   from './components/Update'
    import Comments from './components/Comments'

    Vue.use(VueRouter)

    const router = new VueRouter({
        mode: 'history',
        routes: [
            {
                path: '/admin/dashboard',
                name: 'read',
                component: Read,
                props: true
            },
            {
                path: '/admin/create',
                name: 'create',
                component: Create,
                props: true
            },
            {
                path: '/admin/update',
                name: 'update',
                component: Update,
                props: true
            },
        ],
    });

    const app = new Vue({
        el: '#app',
        components: { Homepage, Comments },
        router,
    });

Above we imported the Comments component and then added it to the list of components in the application's Vue instance.

Now create a Comments.vue file in the resources/assets/js/components directory. This is where all the code for our comment view will go. We will populate this file later on.

Installing Pusher and Laravel Echo

For us to be able to use Pusher and subscribe to events on the frontend, we need to pull in both Pusher and Laravel Echo. We will do so by running this command:

    $ npm install --save laravel-echo pusher-js


Now let’s configure Laravel Echo to work in our application. In the resources/assets/js/bootstrap.js file, find and uncomment this snippet of code:

    import Echo from 'laravel-echo'

    window.Pusher = require('pusher-js');

    window.Echo = new Echo({
         broadcaster: 'pusher',
         key: process.env.MIX_PUSHER_APP_KEY,
         cluster: process.env.MIX_PUSHER_APP_CLUSTER,
         encrypted: true
    });

Now let’s import the Comments component into the single.blade.php file and pass along the required props.

Open the single.blade.php file and replace its contents with the code below:

    {{-- File: ./resources/views/single.blade.php --}}
    @extends('layouts.master')

    @section('content')
    <div class="container">
      <div class="row">
        <div class="col-lg-10 mx-auto">
          <br>
          <h3 class="mt-4">
            {{ $post->title }} 
            <span class="lead">by <a href="#">{{ $post->user->name }}</a></span>
          </h3>
          <hr>
          <p>Posted {{ $post->created_at->diffForHumans() }}</p>
          <hr>
          <img class="img-fluid rounded" src="{!! !empty($post->image) ? '/uploads/posts/' . $post->image : 'http://placehold.it/750x300' !!}" alt="">
          <hr>
          <div>
            <p>{{ $post->body }}</p>
            <hr>
            <br>
          </div>

          @auth
          <Comments
              :post-id='@json($post->id)' 
              :user-name='@json(auth()->user()->name)'>
          </Comments>
          @endauth
        </div>
      </div>
    </div>
    @endsection

Building the comments view

Open the Comments.vue file and add the following markup template below:

    <template>
      <div class="card my-4">
        <h5 class="card-header">Leave a Comment:</h5>
        <div class="card-body">
          <form>
            <div class="form-group">
              <textarea ref="body" class="form-control" rows="3"></textarea>
            </div>
            <button type="submit" @click.prevent="addComment" class="btn btn-primary">
              Submit
            </button>
          </form>
        </div>
        <p class="border p-3" v-for="comment in comments">
           <strong>{{ comment.user.name }}</strong>: 
           <span>{{ comment.body }}</span>
        </p>
      </div>
    </template>

Now, we’ll add a script that defines two methods: fetchComments(), which loads the existing comments for the post, and addComment(), which posts a new comment.

In the same file, add the following below the closing template tag:

    <script>
    export default {
      props: {
        userName: {
          type: String,
          required: true
        },
        postId: {
          type: Number,
          required: true
        }
      },
      data() {
        return {
          comments: []
        };
      },

      created() {
        this.fetchComments();

        Echo.private("comment").listen("CommentSent", e => {
            this.comments.push({
              user: {name: e.user.name},
              body: e.comment.body,
            });
        });
      },

      methods: {
        fetchComments() {
          axios.get("/" + this.postId + "/comments").then(response => {
            this.comments = response.data;
          });
        },

        addComment() {
          let body = this.$refs.body.value;
          axios.post("/" + this.postId + "/comments", { body }).then(response => {
            this.comments.push({
              user: {name: this.userName},
              body: this.$refs.body.value
            });
            this.$refs.body.value = "";
          });
        }
      }
    };
    </script>

In the created() method above, we first made a call to the fetchComments() method, then we created a listener to the private comment channel using Laravel Echo. Once this listener is triggered, the comments property is updated.

Testing the application

Now let’s test the application to see if it is working as intended. Before running the application, we need to refresh our database so as to revert any changes. To do this, run the command below in your terminal:

    $ php artisan migrate:fresh --seed


Next, let’s build the application so that all the changes will be compiled and included as a part of the JavaScript file. To do this, run the following command on your terminal:

    $ npm run dev


Finally, let’s serve the application using this command:

    $ php artisan serve


To test that our application works, visit the application URL http://localhost:8000 in two separate browser windows and log in to the application as a different user in each window.

Finally, we will make a comment on the same post in each of the browser windows and check that it updates in realtime in the other window.

Conclusion

In this final tutorial of this series, we created the comments feature of the CMS and also made it realtime. We were able to accomplish the realtime functionality using Pusher.

In this entire series, we learned how to build a CMS using Laravel and Vue.

The source code for this article series is available on GitHub.

Learn More

Build a Basic CRUD App with Laravel and Vue

Fullstack Vue App with Node, Express and MongoDB

Build a Simple CRUD App with Spring Boot and Vue.js

Build a Basic CRUD App with Laravel and Angular

Build a Basic CRUD App with Laravel and React

PHP Programming Language - PHP Tutorial for Beginners

Vuejs 2 Authentication Tutorial

Vue Authentication And Route Handling Using Vue-router

Vue JS 2 - The Complete Guide (incl. Vue Router & Vuex)

Nuxt.js - Vue.js on Steroids

MEVP Stack Vue JS 2 Course: MySQL + Express.js + Vue.js +PHP

Build Web Apps with Vue JS 2 & Firebase

10 Tips for Building and Maintaining Large Vue.js Projects

Here are the top best practices I've developed while working on Vue projects with a large code base. These tips will help you develop more efficient code that is easier to maintain and share.

When freelancing this year, I had the opportunity to work on some large Vue applications. I am talking about projects with more than 😰 a dozen Vuex stores, a high number of components (sometimes hundreds) and many views (pages). 😄 It was actually quite a rewarding experience for me as I discovered many interesting patterns to make the code scalable. I also had to fix some bad practices that resulted in the famous spaghetti code dilemma. 🍝

Thus, today I’m sharing 10 best practices with you that I would recommend to follow if you are dealing with a large code base. 🧚🏼‍♀️

1. Use Slots to Make Your Components Easier to Understand and More Powerful

I recently wrote an article about some important things you need to know regarding slots in Vue.js. It highlights how slots can make your components more reusable and easier to maintain and why you should use them.

🧐 But what does this have to do with large Vue.js projects? A picture is usually worth a thousand words, so I will paint you a picture about the first time I deeply regretted not using them.

One day, I simply had to create a popup. Nothing really complex at first sight as it was just including a title, a description and some buttons. So what I did was to pass everything as props. I ended up with three props that you could use to customize the component, and an event was emitted when people clicked on the buttons. Easy peasy! 😅

But, as the project grew over time, the team requested that we display a lot of other new things in it: form fields, different buttons depending on which page it was displayed on, cards, a footer, and the list goes on. I figured out that if I kept using props to make this component evolve, it would be ok. But god, 😩 how wrong I was! The component quickly became too complex to understand as it was including countless child components, using way too many props and emitting a large number of events. 🌋 I came to experience that terrible situation in which when you make a change somewhere and somehow it ends up breaking something else on another page. I had built a Frankenstein monster instead of a maintainable component! 🤖

However, things could have been better if I had relied on slots from the start. I ended up refactoring everything to come up with this tiny component. Easier to maintain, faster to understand and way more extendable!

<template>
  <div class="c-base-popup">
    <div v-if="$slots.header" class="c-base-popup__header">
      <slot name="header"></slot>
    </div>
    <div v-if="$slots.subheader" class="c-base-popup__subheader">
      <slot name="subheader"></slot>
    </div>
    <div class="c-base-popup__body">
      <h1>{{ title }}</h1>
      <p v-if="description">{{ description }}</p>
    </div>
    <div v-if="$slots.actions" class="c-base-popup__actions">
      <slot name="actions"></slot>
    </div>
    <div v-if="$slots.footer" class="c-base-popup__footer">
      <slot name="footer"></slot>
    </div>
  </div>
</template>

<script>
export default {
  props: {
    description: {
      type: String,
      default: null
    },
    title: {
      type: String,
      required: true
    }
  }
}
</script>

My point is that, from experience, knowing when to use slots makes a big difference in a project's future maintainability. Way fewer events are being emitted, the code is easier to understand, and it offers way more flexibility as you can display whatever components you wish inside.

⚠️ As a rule of thumb, keep in mind that when you end up duplicating your child components' props inside their parent component, you should start using slots at that point.
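
For example, assuming the popup above is registered as BasePopup, a parent can now fill only the slots it needs. The component name, child components and handlers below are hypothetical:

<BasePopup title="Delete this article?" description="This action cannot be undone.">
  <template v-slot:header>
    <img src="/icons/warning.svg" alt="Warning icon">
  </template>
  <template v-slot:actions>
    <button @click="confirmDeletion">Confirm</button>
    <button @click="closePopup">Cancel</button>
  </template>
</BasePopup>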

2. Organize Your Vuex Store Properly

Usually, new Vue.js developers start to learn about Vuex because they stumbled upon one of these two issues:

  • Either they need to access the data of a given component from another one that’s actually too far apart in the tree structure, or
  • They need the data to persist after the component is destroyed.

That's when they create their first Vuex store, learn about modules and start organizing them in their application. 💡

The thing is that there is no single pattern to follow when creating modules. However, 👆🏼 I highly recommend you think about how you want to organize them. From what I've seen, most developers prefer to organize them per feature. For instance:

  • Auth.
  • Blog.
  • Inbox.
  • Settings.

😜 On my side, I find it easier to understand when they are organized according to the data models they fetch from the API. For example:

  • Users
  • Teams
  • Messages
  • Widgets
  • Articles

Which one you choose is up to you. The only thing to keep in mind is that a well-organized Vuex store will result in a more productive team in the long run. It will also make newcomers better predisposed to wrap their minds around your code base when they join your team.
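
For instance, a minimal store sketch organized per data model might look like this (the module file names are assumptions):

// A sketch of a Vuex store split into per-model modules
import Vue from "vue";
import Vuex from "vuex";

// Each module lives in its own file under store/modules/
import users from "./modules/users";
import teams from "./modules/teams";
import messages from "./modules/messages";
import articles from "./modules/articles";

Vue.use(Vuex);

export default new Vuex.Store({
  modules: {
    users,
    teams,
    messages,
    articles
  }
});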

3. Use Actions to Make API Calls and Commit the Data

Most of my API calls (if not all) are made inside my Vuex actions. You may wonder: why is that a good place to do so? 🤨

🤷🏼‍♀️ Simply because most of them fetch the data I need to commit in my store. Besides, they provide a level of encapsulation and reusability I really enjoy working with. Here are some other reasons I do so:

  • If I need to fetch the first page of articles in two different places (let's say the blog and the homepage), I can just call the appropriate dispatcher with the right parameters. The data will be fetched, committed and returned with no duplicated code other than the dispatcher call (see the sketch after this list).

  • If I need to create some logic to avoid fetching this first page when it has already been fetched, I can do so in one place. In addition to decreasing the load on my server, I am also confident that it will work everywhere.

  • I can track most of my Mixpanel events inside these actions, making the analytics code base really easy to maintain. I do have some applications where all the Mixpanel calls are solely made in the actions. 😂 I can't tell you how much of a joy it is to work this way when I don't have to understand what is tracked from what is not and when they are being sent.
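
To make the first two points concrete, here is a minimal sketch of an "articles" module whose action fetches the first page, commits it and skips the request when the data is already there. The endpoint, state and mutation names are assumptions:

// A sketch of an "articles" Vuex module with a fetch-and-commit action
import axios from "axios";

export default {
  namespaced: true,

  state: () => ({
    firstPage: []
  }),

  mutations: {
    SET_FIRST_PAGE(state, articles) {
      state.firstPage = articles;
    }
  },

  actions: {
    async fetchFirstPage({ state, commit }) {
      // Skip the network call if the first page was already fetched
      if (state.firstPage.length) {
        return state.firstPage;
      }
      const { data } = await axios.get("/api/articles?page=1");
      commit("SET_FIRST_PAGE", data);
      return data;
    }
  }
};

The blog and the homepage can then both call this.$store.dispatch("articles/fetchFirstPage") and get exactly the same behavior.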

4. Simplify Your Code Base with mapState, mapGetters, mapMutations and mapActions

There usually is no need to create multiple computed properties or methods when you just need to access your state/getters or call your actions/mutations inside your components. Using mapState, mapGetters, mapMutations and mapActions can help you shorten your code and make things easier to understand by grouping what is coming from your store modules in one place.

// NPM
import { mapState, mapGetters, mapActions, mapMutations } from "vuex";

export default {
  computed: {
    // Accessing root properties
    ...mapState("my_module", ["property"]),
    // Accessing getters
    ...mapGetters("my_module", ["property"]),
    // Accessing non-root properties
    ...mapState("my_module", {
      property: state => state.object.nested.property
    })
  },

  methods: {
    // Accessing actions
    ...mapActions("my_module", ["myAction"]),
    // Accessing mutations
    ...mapMutations("my_module", ["myMutation"])
  }
};

All the information you'll need on these handy helpers is available here in the official Vuex documentation. 🤩

5. Use API Factories

I usually like to create a this.$api helper that I can call anywhere to fetch my API endpoints. At the root of my project, I have an api folder that includes all my classes (see one of them below).

api
├── auth.js
├── notifications.js
└── teams.js

Each one is grouping all the endpoints for its category. Here is how I initialize this pattern with a plugin in my Nuxt applications (it is quite a similar process in a standard Vue app).

// PROJECT: API
import Auth from "@/api/auth";
import Teams from "@/api/teams";
import Notifications from "@/api/notifications";

export default (context, inject) => {
  if (process.client) {
    const token = localStorage.getItem("token");
    // Set token when defined
    if (token) {
      context.$axios.setToken(token, "Bearer");
    }
  }
  // Initialize API repositories
  const repositories = {
    auth: Auth(context.$axios),
    teams: Teams(context.$axios),
    notifications: Notifications(context.$axios)
  };
  inject("api", repositories);
};

// PROJECT: API - api/auth.js (one of the repository classes listed above)
export default $axios => ({
  forgotPassword(email) {
    return $axios.$post("/auth/password/forgot", { email });
  },

  login(email, password) {
    return $axios.$post("/auth/login", { email, password });
  },

  logout() {
    return $axios.$get("/auth/logout");
  },

  register(payload) {
    return $axios.$post("/auth/register", payload);
  }
});

Now, I can simply call them in my components or Vuex actions like this:

export default {
  methods: {
    onSubmit() {
      try {
        this.$api.auth.login(this.email, this.password);
      } catch (error) {
        console.error(error);
      }
    }
  }
};

6. Use $config to access your environment variables (especially useful in templates)

Your project probably has some global configuration variables defined in some files:

config
├── development.json
└── production.json

I like to quickly access them through a this.$config helper, especially when I am inside a template. As always, it's quite easy to extend the Vue object:

// NPM
import Vue from "vue";

// PROJECT: COMMONS
import development from "@/config/development.json";
import production from "@/config/production.json";

if (process.env.NODE_ENV === "production") {
  Vue.prototype.$config = Object.freeze(production);
} else {
  Vue.prototype.$config = Object.freeze(development);
}
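
Assuming the JSON files define keys such as appName and apiUrl, a template can then read them directly:

<!-- A hypothetical template reading values from this.$config -->
<a :href="$config.apiUrl" target="_blank">
  {{ $config.appName }}
</a>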

7. Follow a Single Convention to Name Your Commits

As the project grows, you will need to browse the history for your components on a regular basis. If your team does not follow the same convention to name their commits, it will make it harder to understand what each one does.

I always use and recommend the Angular commit message guidelines. I follow them in every project I work on, and in many cases other team members quickly figure out that it's better to follow them too.

Following these guidelines leads to more readable messages that make commits easier to track when looking through the project history. In a nutshell, here is how it works:

git commit -am "<type>(<scope>): <subject>"

# Here are some samples
git commit -am "docs(changelog): update changelog to beta.5"
git commit -am "fix(release): need to depend on latest rxjs and zone.js"

Have a look at their README file to learn more about it and its conventions.

8. Always Freeze Your Package Versions When Your Project is in Production

I know... All packages should follow the semantic versioning rules. But the reality is, some of them don't. 😅

To avoid having to wake up in the middle of the night because one of your dependencies broke your entire project, locking all your package versions should make your mornings at work less stressful. 😇

What it means is simply this: avoid versions prefixed with ^:

{
  "name": "my project",

  "version": "1.0.0",

  "private": true,

  "dependencies": {
    "axios": "0.19.0",
    "imagemin-mozjpeg": "8.0.0",
    "imagemin-pngquant": "8.0.0",
    "imagemin-svgo": "7.0.0",
    "nuxt": "2.8.1",
  },

  "devDependencies": {
    "autoprefixer": "9.6.1",
    "babel-eslint": "10.0.2",
    "eslint": "6.1.0",
    "eslint-friendly-formatter": "4.0.1",
    "eslint-loader": "2.2.1",
    "eslint-plugin-vue": "5.2.3"
  }
}

9. Use Vue Virtual Scroller When Displaying a Large Amount of Data

When you need to display a lot of rows in a given page or when you need to loop over a large amount of data, you might have noticed that the page can quickly become quite slow to render. To fix this, you can use vue-virtual-scroller.

npm install vue-virtual-scroller

It will render only the visible items in your list and re-use components and dom elements to be as efficient and performant as possible. It really is easy to use and works like a charm! ✨

<template>
  <RecycleScroller
    class="scroller"
    :items="list"
    :item-size="32"
    key-field="id"
    v-slot="{ item }"
  >
    <div class="user">
      {{ item.name }}
    </div>
  </RecycleScroller>
</template>
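
The template above assumes RecycleScroller is registered and that a list array is available; a minimal sketch of the accompanying script (with an assumed data shape) could be:

// Registering RecycleScroller locally and providing the items for the template above
import { RecycleScroller } from "vue-virtual-scroller";
import "vue-virtual-scroller/dist/vue-virtual-scroller.css";

export default {
  components: { RecycleScroller },
  data() {
    return {
      list: [
        { id: 1, name: "Alice" },
        { id: 2, name: "Bob" }
      ]
    };
  }
};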

10. Track the Size of Your Third-Party Packages

When a lot of people work in the same project, the number of installed packages can quickly become incredibly high if no one is paying attention to them. To avoid your application becoming slow (especially on slow mobile networks), I use the import cost package in Visual Studio Code. This way, I can see right from my editor how large an imported module library is, and can check out what's wrong when it's getting too large.

For instance, in a recent project, the entire lodash library was imported (which is approximately 24kB gzipped). The issue? Only the cloneDeep method was used. By identifying this issue with the import cost package, we fixed it with:

npm remove lodash
npm install lodash.clonedeep

The clonedeep function could then be imported where needed:

import cloneDeep from "lodash.clonedeep";

⚠️ To optimize things even further, you can also use the Webpack Bundle Analyzer package to visualize the size of your webpack output files with an interactive zoomable treemap.
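
If you go that route, a minimal sketch of a plain webpack setup could look like this (in a Vue CLI project the plugin would typically go under configureWebpack.plugins instead):

// A minimal sketch of wiring up webpack-bundle-analyzer in a webpack config
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin()
  ]
};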


Do you have other best practices when dealing with a large Vue code base? Feel free to tell me in the comments below

Understand JavaScript Async and Await with Examples for Beginners

In the beginning, there were callbacks. **A callback is nothing special but a function that is executed at some later time.** Due to JavaScript’s asynchronous nature, callbacks are required in many places where the results are not available immediately.

Here’s an example of reading a file in Node.js (asynchronously) —

fs.readFile(__filename, 'utf-8', (err, data) => {
  if (err) {
    throw err;
  }
  console.log(data);
});

Problems arise when we want to do multiple async operations. Imagine this hypothetical scenario (where all operations are async) —

  • We query our database for the user Arfat. We read the profile_img_url and fetch the image from someServer.com.

  • After fetching the image, we transform it into a different format, say PNG to JPEG.

  • If the transformation is successful, we send the user an email.

  • We log this task in our file transformations.log with the timestamp.

The code for something like this would look like the following (the original article shows it as an image) —
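
Here is a rough sketch, assuming hypothetical callback-style helpers (queryDatabase, downloadImage, transformImage, sendEmail and logTaskInFile):

// All of these callback-style helpers are hypothetical
queryDatabase({ username: 'Arfat' }, (err, user) => {
  if (err) throw err;
  downloadImage(user.profile_img_url, (err, image) => {
    if (err) throw err;
    transformImage(image, 'jpeg', (err, transformed) => {
      if (err) throw err;
      sendEmail(user.email, (err) => {
        if (err) throw err;
        logTaskInFile('transformations.log', Date.now(), (err) => {
          if (err) throw err;
          console.log('Task finished!');
        });
      });
    });
  });
});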

Note the nesting of callbacks and the staircase of }) at the end. This is affectionately called Callback Hell or the Pyramid of Doom because of the shape the code takes. Some disadvantages of this are —

  • The code becomes harder to read as one has to move from left to right to understand.

  • Error handling is complicated and often leads to bad code.

To overcome this problem, JavaScript gods created Promises. Now, instead of nesting callbacks inward, we can chain them. Here’s an example —

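A sketch of the same flow rewritten with promise chaining (the same hypothetical helpers as above, now returning promises instead of taking callbacks):

// Chained promises instead of nested callbacks
queryDatabase({ username: 'Arfat' })
  .then(user =>
    downloadImage(user.profile_img_url)
      .then(image => transformImage(image, 'jpeg'))
      .then(() => sendEmail(user.email))
  )
  .then(() => logTaskInFile('transformations.log', Date.now()))
  .catch(err => console.error(err));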

The flow has become a familiar top-to-bottom rather than left-to-right as in callbacks, which is a plus. However, Promises still suffer from some problems —

  • We still have to give a callback to every .then.

  • Instead of using a normal try/catch, we have to use .catch for error handling.

  • Looping over multiple promises in a sequence is challenging and non-intuitive.

As a demonstration of the last point, try this challenge!

The Challenge

Let’s assume that we have a for loop that prints 0 to 10 at random intervals (0 to n seconds). We need to modify it using promises to print sequentially 0 to 10. For example, if 0 takes 6 seconds to print and 1 takes two seconds to print, then 1 should wait for 0 to print and so on.

Needless to say, don’t use async/await or .sort function. We’ll have a solution towards the end.

Async Functions

Introduced in ES2017 (ES8), async functions make working with promises much easier.

  • It is important to note that async functions work on top of promises.

  • They are not a fundamentally different concept.

  • They can be thought of as an alternate way of writing promise-based code.

  • We can avoid chaining promises altogether using async/await.

  • They allow asynchronous execution while maintaining a regular, synchronous feel.

Hence, an understanding of promises is required before you can fully understand async/await concepts.

Syntax

They consist of two main keywords- async and await. async is used to make a function asynchronous. It unlocks the use of await inside these functions. Using await in any other case is a syntax error.

// With function declaration
async function myFn() {
  // await ...
}
// With arrow function
const myFn = async () => {
  // await ...
}
function myFn() {
  // await fn(); (Syntax Error since no async) 
}

Notice the use of async keyword at the beginning of the function declaration. In the case of arrow function, async is put after the = sign and before the parentheses.

Async functions can also be put on an object as methods, or in class declarations as follows.

// As an object's method
const obj = {
  async getName() {
    return fetch('https://www.example.com');
  }
}
// In a class
class Obj {
  async getResource() {
    return fetch('https://www.example.com');
  }
}

Note: Class constructors and getters/setters cannot be async.

Semantics and Evaluation Rules

Async functions are normal JavaScript functions with the following differences —

An async function always returns a promise.

async function fn() {
  return 'hello';
}
fn().then(console.log)
// hello

The function fn returns 'hello'. Because we have used async, the return value 'hello' is wrapped in a promise (via Promise constructor).

Here’s an alternate representation without using async

function fn() {
  return Promise.resolve('hello');
}
fn().then(console.log);
// hello

In this, we are manually returning a promise instead of using async.

A slightly more accurate way to say the same thing — The body of an async function is always wrapped in a new Promise.

If the return value is primitive, async functions return a promise-wrapped version of the value. However, when the return value is already a promise object, its resolution is adopted by the new promise that the async function returns.

// A primitive value gets wrapped in a promise
const p = Promise.resolve('hello')
p instanceof Promise;
// true
// An existing promise is returned as it is
Promise.resolve(p) === p;
// true

What happens when you throw an error inside an async function?

For example —

async function foo() {
  throw Error('bar');
}
foo().catch(console.log);

foo() will return a rejected promise if the error is uncaught. Instead of Promise.resolve, the error is wrapped with Promise.reject and returned. See the Error Handling section later.

The net effect is that whatever you return, you will always get a promise out of an async function.

Async functions pause at each await <expression>.

An await acts on an expression. When the expression is a promise, the evaluation of the async function halts until the promise is resolved. When the expression is a non-promise value, it is converted to a promise using Promise.resolve and then resolved.

// utility function to cause delay
// and get random value
const delayAndGetRandom = (ms) => {
  return new Promise(resolve => setTimeout(
    () => {
      const val = Math.trunc(Math.random() * 100);
      resolve(val);
    }, ms
  ));
};
async function fn() {
  const a = await 9;
  const b = await delayAndGetRandom(1000);
  const c = await 5;
  await delayAndGetRandom(1000);
  
  return a + b * c;
}
// Execute fn
fn().then(console.log);

Let’s examine the function fn line by line —

  • When fn is executed, the first line to be evaluated is const a = await 9;. It is internally transformed into const a = await Promise.resolve(9);.

  • Since we are using await, fn pauses until the variable a gets a value. In this case, the promise resolves it to 9.

  • delayAndGetRandom(1000) causes fn to pause until the function delayAndGetRandom is resolved which is after 1 second. So, fn effectively pauses for 1 second.

  • Also, delayAndGetRandom resolves with a random value. Whatever is passed in the resolve function, that is assigned to the variable b.

  • c gets the value of 5 similarly and we delay for 1 second again using await delayAndGetRandom(1000). We don’t use the resolved value in this case.

  • Finally, we compute the result a + b * c which is wrapped in a Promise using Promise.resolve. This wrapped promise is returned.

Note: If this pause and resume are reminding you of ES6 generators, it’s because there are good reasons for it.

The Solution

Let’s use async/await for the hypothetical problem listed at the beginning of the article —

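A sketch of what finishMyTask could look like (using the same hypothetical promise-based helpers as before):

// finishMyTask rewritten with async/await
async function finishMyTask() {
  try {
    const user = await queryDatabase({ username: 'Arfat' });
    const image = await downloadImage(user.profile_img_url);
    await transformImage(image, 'jpeg');
    await sendEmail(user.email);
    await logTaskInFile('transformations.log', Date.now());
    console.log('Task finished!');
  } catch (err) {
    console.error(err);
  }
}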

We make an async function finishMyTask and use await to wait for the result of operations such as queryDatabase, sendEmail, logTaskInFile etc.

If we contrast this solution with the solutions using promises above, we find that it is roughly the same number of lines of code. However, async/await has made it simpler in terms of syntactical complexity. There aren’t multiple callbacks and .then/.catch chains to keep track of.

Now, let’s solve the challenge of numbers listed above. Here are two implementations —

const wait = (i, ms) => new Promise(resolve => setTimeout(() => resolve(i), ms));

// Implementation One (Using for-loop)
const printNumbers = () => new Promise((resolve) => {
  let pr = Promise.resolve(0);
  for (let i = 1; i <= 10; i += 1) {
    pr = pr.then((val) => {
      console.log(val);
      return wait(i, Math.random() * 1000);
    });
  }
  resolve(pr);
});

// Implementation Two (Using Recursion)

const printNumbersRecursive = () => {
  return Promise.resolve(0).then(function processNextPromise(i) {

    if (i === 10) {
      return undefined;
    }

    return wait(i, Math.random() * 1000).then((val) => {
      console.log(val);
      return processNextPromise(i + 1);
    });
  });
};

If you want, you can run them yourself at repl.it console.

If you were allowed async function, the task would have been much simpler.

async function printNumbersUsingAsync() {
  for (let i = 0; i < 10; i++) {
    await wait(i, Math.random() * 1000);
    console.log(i);
  }
}

This implementation is also provided in the repl.it console.

Error Handling

As we saw in the Semantics section, an uncaught Error() is wrapped in a rejected promise. However, we can use try-catch in async functions to handle errors synchronously. Let’s begin with this utility function —

async function canRejectOrReturn() {
  // wait one second
  await new Promise(res => setTimeout(res, 1000));

  // Reject with ~50% probability
  if (Math.random() > 0.5) {
    throw new Error('Sorry, number too big.')
  }

  return 'perfect number';
}

canRejectOrReturn() is an async function and it will either resolve with 'perfect number' or reject with Error('Sorry, number too big').

Let’s look at the code example —

async function foo() {
  try {
    await canRejectOrReturn();
  } catch (e) {
    return 'error caught';
  }
}

Since we are awaiting canRejectOrReturn, its own rejection will be turned into a throw and the catch block will execute. That is, foo will either resolve with undefined (because we are not returning anything in try) or it will resolve with 'error caught'. It will never reject since we used a try-catch block to handle the error in the foo function itself.

Here’s another example —

async function foo() {
  try {
    return canRejectOrReturn();
  } catch (e) {
    return 'error caught';
  }
}

Note that we are returning (and not awaiting) canRejectOrReturn from foo this time. foo will either resolve with 'perfect number' or reject with Error('Sorry, number too big'). The catch block will never be executed.

It is because we return the promise returned by canRejectOrReturn. Hence, the resolution of foo becomes the resolution of canRejectOrReturn. You can break return canRejectOrReturn() into two lines to see it more clearly (note the missing await in the first line) —

try {
    const promise = canRejectOrReturn();
    return promise;
}

Let’s see the usage of await and return together —

async function foo() {
  try {
    return await canRejectOrReturn();
  } catch (e) {
    return 'error caught';
  }
}

In this case, foo resolves with either 'perfect number' or 'error caught'. There is no rejection. It is like the first example above with just await. Except, we resolve with the value that canRejectOrReturn produces rather than undefined.

You can break return await canRejectOrReturn(); to see the effect —

try {
    const value  = await canRejectOrReturn();
    return value;
}
// ...

Common Mistakes and Gotchas

Because of an intricate play of Promise-based and async/await concepts, there are some subtle errors that can creep into the code. Let’s look at them —

Not using await

In some cases, we forget to use the await keyword before a promise or return it. Here’s an example —

async function foo() {
  try {
    canRejectOrReturn();
  } catch (e) {
    return 'caught';
  }
}

Note that we are not using await or return. foo will always resolve with undefined without waiting for 1 second. However, the promise does start its execution. If there are side-effects, they will happen. If it throws an error or rejects, then UnhandledPromiseRejectionWarning will be issued.

async functions in callbacks

We often use async functions in .map or .filter as callbacks. Let’s take an example — Suppose we have a function fetchPublicReposCount(username) that fetches the number of public GitHub repositories a user has. We have three users whose counts we want to fetch. Let’s see the code —


const url = 'https://api.github.com/users';
// Utility fn to fetch repo counts
const fetchPublicReposCount = async (username) => {
  const response = await fetch(`${url}/${username}`);
  const json = await response.json();
  return json['public_repos'];
}

We want to fetch repo counts of ['ArfatSalman', 'octocat', 'norvig']. We may do something like this —


const users = [
  'ArfatSalman',
  'octocat',
  'norvig'
];
const counts = users.map(async username => {
  const count = await fetchPublicReposCount(username);
  return count;
});

Note the async in the callback to .map. We might expect that the counts variable will contain the numbers of repos. However, as we have seen earlier, all async functions return promises. Hence, counts is actually an array of promises. .map calls the anonymous callback with every username, and each invocation returns a promise, which .map keeps in the resulting array.

Too sequential using await

We may also think of a solution such as —

async function fetchAllCounts(users) {
  const counts = [];
  for (let i = 0; i < users.length; i++) {
    const username = users[i];
    const count = await fetchPublicReposCount(username);
    counts.push(count);
  }
  return counts;
}

We are manually fetching each count and appending it to the counts array. The problem with this code is that until the first username’s count is fetched, the next fetch will not start. Only one repo count is fetched at a time.

If a single fetch takes 300 ms, then fetchAllCounts will take ~900 ms for 3 users. As we can see, the total time grows linearly with the number of users. Since the repo counts do not depend on each other, we can parallelize the operation.

We can fetch users concurrently instead of doing them sequentially. We are going to utilize .map and Promise.all.

async function fetchAllCounts(users) {
  const promises = users.map(async username => {
    const count = await fetchPublicReposCount(username);
    return count;
  });
  return Promise.all(promises);
}

Promise.all receives an array of promises as input and returns a promise as output. The returned promise resolves with an array of all promise resolutions or rejects with the first rejection. However, it may not be feasible to start all promises at the same time. Maybe you want to complete promises in batches. You can look at p-map for limited concurrency.
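
For example, p-map lets you cap how many requests are in flight at once; a minimal sketch (assuming p-map is installed):

// Fetch counts with at most two requests running at a time
import pMap from 'p-map';

async function fetchAllCountsInBatches(users) {
  return pMap(users, fetchPublicReposCount, { concurrency: 2 });
}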

Conclusion

Async functions have become really important. With the introduction of Async Iterators, async functions will see even more adoption. A good understanding of them is important for the modern JavaScript developer. I hope this article sheds some light on that. :)