Introduction to JSON - Java

Learn the fundamentals of parsing and manipulating JSON data in Java with the venerable Jackson library.

Programming plays an indispensable role in software development, but while it is effective at systematically solving problems, it falls short in a few important categories. In the cloud and distributed age of software, data plays a pivotal role in generating revenue and sustaining an effective product. Whether configuration data or data exchanged over a web-based Application Programming Interface (API), having a compact, human-readable data language is essential to maintaining a dynamic system and allowing non-programmer domain experts to understand the system.

While binary-based data can be very compact, it is unreadable without a proper editor or extensive knowledge. Instead, most systems use a text-based data language, such as the eXtensible Markup Language (XML), YAML Ain't Markup Language (YAML), or JavaScript Object Notation (JSON). While XML and YAML have advantages of their own, JSON has become a front-runner in the realm of configuration and Representational State Transfer (REST) APIs. It combines simplicity with just enough richness to easily express a wide variety of data.

While JSON is simple enough for programmers and non-programmers alike to read and write, as programmers, our systems must be able to produce and consume JSON, which can be far from simple. Fortunately, numerous Java JSON libraries efficiently tackle this task. In this article, we will cover the basics of JSON, including an overview and common use cases, as well as how to serialize Java objects into JSON and how to deserialize JSON into Java objects using the venerable Jackson library. Note that the entirety of the code used for this tutorial can be found on GitHub.

The Basics of JSON

JSON did not start out as a text-based configuration language. Strictly speaking, it is a subset of JavaScript object literal notation: code that represents the fields of an object and that a JavaScript engine can evaluate to recreate that object. At its most elementary level, a JSON object is a set of key-value pairs, surrounded by braces and with a colon separating each key from its value; the key in this pair is a string, surrounded by quotation marks, and the value is a JSON value. A JSON value can be any of the following:

  • another JSON object
  • a string
  • a number
  • the boolean literal true 
  • the boolean literal false 
  • the literal null 
  • an array of JSON values

For example, a simple JSON object, with only string values, could be:

{"firstName": "John", "lastName": "Doe"}

Likewise, a value can be another JSON object, a number, either of the boolean literals, or the null literal, as seen in the following example:

{
  "firstName": "John", 
  "lastName": "Doe",
  "alias": null,
  "age": 29,
  "isMale": true,
  "address": {
    "street": "100 Elm Way",
    "city": "Foo City",
    "state": "NJ",
    "zipCode": "01234"
  }
}

A JSON array is a comma-separated list of values surrounded by square brackets. This list does not contain keys; instead, it only contains any number of values, including zero. An array containing zero values is called an empty array. For example:

{
  "firstName": "John",
  "lastName": "Doe",
  "cars": [
    {"make": "Ford", "model": "F150", "year": 2018},
    {"make": "Subaru", "model": "BRZ", "year": 2016},
  ],
  "children": []
}

JSON arrays may be heterogeneous, containing values of mixed types, such as: 

["hi", {"name": "John Doe"}, null, 125]

It is a widely followed convention, though, for arrays to contain the same type of data. Likewise, objects in an array conventionally have the same keys and value types, as seen in the previous example: each value in the cars array has a key make with a string value, a key model with a string value, and a key year with a numeric value.

It is important to note that the order of key-value pairs in a JSON object does not factor into the equality of two JSON objects; objects are unordered sets of pairs. JSON arrays, by contrast, are ordered: two arrays with the same values in a different order are not equal. For example, the following two JSON objects are equivalent:

{"firstName": "John", "lastName": "Doe"}
{"lastName": "Doe", "firstName": "John"}

JSON Files

A file containing JSON data uses the .json file extension and has a root of either a JSON object or a JSON array. JSON files may not contain more than one root object. If more than one object is desired, the root of the JSON file should be an array containing the desired objects. For example, both of the following are valid JSON files:

[
  {"name": "John Doe"},
  {"name": "Jane Doe"}
]


{
  "name": "John Doe",
  "age": 29
}

While it is valid for a JSON file to have an array as its root, it is also common practice for the root to be an object with a single key containing an array. This makes the purpose of the array explicit:

{
  "accounts": [
    {"name": "John Doe"},
    {"name": "Jane Doe"}
  ]
}

Advantages of JSON

JSON has three main advantages: (1) it can be easily read by humans, (2) it can be efficiently parsed by code, and (3) it is expressive. The first advantage is evident by reading a JSON file. Although some files can become large and complex — containing many different objects and arrays — it is the size of these files that is cumbersome, not the language in which they are written. For example, the snippets above can be easily read without requiring context or using a specific set of tools.

Likewise, code can also parse JSON relatively easily; the entirety of the JSON language can be reduced to a set of simple state machines, as described in ECMA-404, The JSON Data Interchange Standard. We will see in the following sections that this simplicity has translated into Java JSON parsing libraries that can quickly and accurately consume JSON strings or files and produce user-defined Java objects.

Lastly, the ease of interpretation does not detract from its expressiveness. Nearly any object structure can be recursively reduced to a JSON object or array. Fields that are difficult to serialize into JSON values can be stringified, with the string used in place of the object representation. For example, if we wished to store a regular expression as a value, we could simply turn the regular expression into a string.
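
For instance, a hypothetical validation rule could carry its regular expression as a plain string value:

{"field": "zipCode", "pattern": "^[0-9]{5}$"}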

With an understanding of the basics of JSON, we can now delve into the details of interfacing with JSON snippets and files through Java applications.

Serializing Java Objects to JSON

While the process of tokenizing and parsing JSON strings is tedious and intricate, there are numerous libraries in Java that encapsulate this logic into easy-to-use objects and methods that make interfacing with JSON nearly trivial. One of the most popular among these libraries is FasterXML Jackson.

Jackson centers around a simple concept: the ObjectMapper. An ObjectMapper is an object that encapsulates the logic for taking a Java object and serializing it to JSON, and for taking JSON and deserializing it into a specified Java object. In the former case, we can create objects that represent our model and delegate the JSON creation logic to the ObjectMapper. Following the previous examples, we can create two classes to represent our model: a person and an address:

public class Person {
    private String firstName;
    private String lastName;
    private String alias;
    private int age;
    private boolean isMale;
    private Address address;
    public Person(String firstName, String lastName, String alias, int age, boolean isMale, Address address) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.alias = alias;
        this.age = age;
        this.isMale = isMale;
        this.address = address;
    }
    public Person() {}
    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public String getAlias() {
        return alias;
    }
    public void setAlias(String alias) {
        this.alias = alias;
    }
    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
    public boolean isMale() {
        return isMale;
    }
    public void setMale(boolean isMale) {
        this.isMale = isMale;
    }
    public Address getAddress() {
        return address;
    }
    public void setAddress(Address address) {
        this.address = address;
    }
    @Override
    public String toString() {
        return "Person [firstName=" + firstName + ", lastName=" + lastName + ", alias=" + alias + ", age=" + age
                + ", isMale=" + isMale + ", address=" + address + "]";
    }
}
public class Address {
    private String street;
    private String city;
    private String state;
    private String zipCode;
    public Address(String street, String city, String state, String zipCode) {
        this.street = street;
        this.city = city;
        this.state = state;
        this.zipCode = zipCode;
    }
    public Address() {}
    public String getStreet() {
        return street;
    }
    public void setStreet(String street) {
        this.street = street;
    }
    public String getCity() {
        return city;
    }
    public void setCity(String city) {
        this.city = city;
    }
    public String getState() {
        return state;
    }
    public void setState(String state) {
        this.state = state;
    }
    public String getZipCode() {
        return zipCode;
    }
    public void setZipCode(String zipCode) {
        this.zipCode = zipCode;
    }
    @Override
    public String toString() {
        return "Address [street=" + street + ", city=" + city + ", state=" + state + ", zipCode=" + zipCode + "]";
    }
}

The Person and Address classes are both Plain Old Java Objects (POJOs) and do not contain any JSON-specific code. Instead, we can instantiate these classes and pass the instantiated objects to the writeValueAsString method of an ObjectMapper. This method produces a JSON representation of the supplied object as a String. For example:

Address johnDoeAddress = new Address("100 Elm Way", "Foo City", "NJ", "01234");
Person johnDoe = new Person("John", "Doe", null, 29, true, johnDoeAddress);

ObjectMapper mapper = new ObjectMapper();
String json = mapper.writeValueAsString(johnDoe);

System.out.println(json);

Executing this code, we obtain the following output:

{"firstName":"John","lastName":"Doe","alias":null,"age":29,"address":{"street":"100 Elm Way","city":"Foo City","state":"NJ","zipCode":"01234"},"male":true}

If we pretty print (format) this JSON output using a JSON validator and formatter, we see that it nearly matches our original example:

{
  "firstName":"John",
  "lastName":"Doe",
  "alias":null,
  "age":29,
  "address":{
    "street":"100 Elm Way",
    "city":"Foo City",
    "state":"NJ",
    "zipCode":"01234"
  },
  "male":true
}

By default, Jackson will recursively traverse the fields of the object supplied to the writeValueAsString method, using the name of the field as the key and serializing the value into an appropriate JSON value. In the case of String fields, the value is a string; for numbers (such as int), the result is numeric; for booleans, the result is a boolean; for objects (such as Address), this serialization process is recursively repeated, resulting in a JSON object.

Two noticeable differences between our original JSON and the JSON output by Jackson are (1) that the order of the key-value pairs is different and (2) that there is a male key instead of isMale. In the first case, Jackson does not necessarily output JSON keys in the same order in which the corresponding fields appear in a Java object. For example, the male key is the last key in the output even though the isMale field appears before the address field in the Person class. In the second case, by default, boolean getters of the form isFooBar result in key-value pairs with a key of the form fooBar.

If we want the key name to be isMale, as it was in our original JSON example, we can annotate the getter for the desired field with the JsonProperty annotation and explicitly supply a key name:

public class Person {
    // ...
    @JsonProperty("isMale")
    public boolean isMale() {
        return isMale;
    }
    // ...
}

Rerunning the application results in the following JSON string, which matches our desired output:

{
  "firstName":"John",
  "lastName":"Doe",
  "alias":null,
  "age":29,
  "address":{
    "street":"100 Elm Way",
    "city":"Foo City",
    "state":"NJ",
    "zipCode":"01234"
  },
  "isMale":true
}

In cases where we wish to remove null values from the JSON output (where a missing key implicitly denotes a null value rather than explicitly including the null value), we can annotate the affected class as follows:

@JsonInclude(Include.NON_NULL)
public class Person {
    // ...
}

This results in the following JSON string:

{
  "firstName":"John",
  "lastName":"Doe",
  "age":29,
  "address":{
    "street":"100 Elm Way",
    "city":"Foo City",
    "state":"NJ",
    "zipCode":"01234"
  },
  "isMale":true
}

While the resulting JSON string is semantically equivalent to the JSON string that explicitly included the null value, the second string saves space. This saving is more significant for larger objects that have many null fields.

Serializing Collections

In order to create a JSON array from Java objects, we can pass a Collection object (or any subclass) to the ObjectMapper. In many cases, a List will be used, as it most closely resembles the JSON array abstraction. For example, we can serialize a List of Person objects as follows:

Address johnDoeAddress = new Address("100 Elm Way", "Foo City", "NJ", "01234");
Person johnDoe = new Person("John", "Doe", null, 29, true, johnDoeAddress);

Address janeDoeAddress = new Address("200 Boxer Road", "Bar City", "NJ", "09876");
Person janeDoe = new Person("Jane", "Doe", null, 27, false, janeDoeAddress);

List<Person> people = List.of(johnDoe, janeDoe);

ObjectMapper mapper = new ObjectMapper();
String json = mapper.writeValueAsString(people);

System.out.println(json);

Executing this code results in the following output:

[
  {
    "firstName":"John",
    "lastName":"Doe",
    "age":29,
    "address":{
      "street":"100 Elm Way",
      "city":"Foo City",
      "state":"NJ",
      "zipCode":"01234"
    },
    "isMale":true
  },
  {
    "firstName":"Jane",
    "lastName":"Doe",
    "age":27,
    "address":{
      "street":"200 Boxer Road",
      "city":"Bar City",
      "state":"NJ",
      "zipCode":"09876"
    },
    "isMale":false
  }
]

Writing Serialized Objects to a File

Apart from storing the resulting JSON in a String variable, we can write the JSON directly to an output file using the writeValue method, supplying a File object that corresponds to the desired output file (such as john.json):

Address johnDoeAddress = new Address("100 Elm Way", "Foo City", "NJ", "01234");
Person johnDoe = new Person("John", "Doe", null, 29, true, johnDoeAddress);

ObjectMapper mapper = new ObjectMapper();
mapper.writeValue(new File("john.json"), johnDoe);

The serialized Java object can also be written to any OutputStream, allowing the output JSON to be streamed to the desired location (e.g., over a network or to a remote file system).
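
As a minimal sketch, writing to an in-memory stream works the same way as writing to any other OutputStream (a socket stream, a file stream, and so on):

ByteArrayOutputStream out = new ByteArrayOutputStream(); // from java.io
mapper.writeValue(out, johnDoe);
System.out.println(out.toString("UTF-8")); // the same JSON string writeValueAsString produced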

Deserializing JSON Into Java Objects

With an understanding of how to serialize existing objects into JSON, it is just as important to understand how to deserialize existing JSON into objects. This process is common when reading configuration files and resembles its serialization counterpart, but it differs in some important ways. First, JSON does not inherently retain type information: looking at a snippet of JSON, it is impossible to know the type of the object it represents (without explicitly adding an extra key, as is the case with the _class key in a MongoDB database). In order to properly deserialize a JSON snippet, we must explicitly inform the deserializer of the expected type. Second, keys present in the JSON snippet may not correspond to fields in the deserialized object; this occurs, for example, when a newer version of an object, containing a new field, has been serialized into JSON, but the JSON is deserialized into an older version of the class.

In order to tackle the first problem, we must explicitly inform the ObjectMapper the desired type of the deserialized object. For example, we can supply a JSON string representing a Person object and instruct the ObjectMapper to deserialize it into a Person object:

String json = "{"firstName":"John","lastName":"Doe","alias":"Jay","
+ ""age":29,"address":{"street":"100 Elm Way","city":"Foo City","
+ ""state":"NJ","zipCode":"01234"},"isMale":true}";

ObjectMapper mapper = new ObjectMapper();
Person johnDoe = mapper.readValue(json, Person.class);
System.out.println(johnDoe);

Executing this snippet results in the following output:

Person [firstName=John, lastName=Doe, alias=Jay, age=29, isMale=true, address=Address [street=100 Elm Way, city=Foo City, state=NJ, zipCode=01234]]

It is important to note that there are two requirements that the deserialized type (or any recursively deserialized types) must have:

  1. The class must have a default constructor
  2. The class must have setters for each of the fields

Additionally, the isMale getter must be decorated with the JsonProperty annotation (added in the previous section) denoting that the key name for that field is isMale. Removing this annotation will result in an exception during deserialization because the ObjectMapper, by default, is configured to fail if a key is found in the JSON that does not correspond to a property in the supplied type. In particular, without the annotation the ObjectMapper only knows about a male property, so when it encounters the unknown isMale key in the JSON, deserialization fails.

In order to remedy this, we can annotate the Person class with the JsonIgnoreProperties annotation. For example:

@JsonIgnoreProperties(ignoreUnknown = true)
public class Person {
// ...
}
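
As an alternative to annotating each class, the same behavior can be toggled on the mapper itself; note that this is a global setting affecting every type the mapper reads:

ObjectMapper mapper = new ObjectMapper();
// DeserializationFeature lives in com.fasterxml.jackson.databind
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);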

We can then add a new key, such as favoriteColor, to the JSON snippet and still deserialize it into a Person object. It is important to note, though, that the value of the favoriteColor key will be lost in the deserialized Person object, since there is no field to store it:

String json = "{"firstName":"John","lastName":"Doe","alias":"Jay","
+ ""age":29,"address":{"street":"100 Elm Way","city":"Foo City","
+ ""state":"NJ","zipCode":"01234"},"isMale":true,"favoriteColor":"blue"}";

ObjectMapper mapper = new ObjectMapper();
Person johnDoe = mapper.readValue(json, Person.class);
System.out.println(johnDoe);

Executing this snippet results in the following output:

Person [firstName=John, lastName=Doe, alias=Jay, age=29, isMale=true, address=Address [street=100 Elm Way, city=Foo City, state=NJ, zipCode=01234]]

Deserializing Collections

In some cases, the JSON we wish to deserialize has an array as its root value. To deserialize JSON of this form, we can instruct the ObjectMapper to deserialize the supplied JSON snippet into a Collection (or a subclass) and then cast the result to a Collection with the proper generic argument. In many cases, a List is used, as in the following code:

String json = "[{"firstName":"John","lastName":"Doe","age":29,"
+ ""address":{"street":"100 Elm Way","city":"Foo City","
+ ""state":"NJ","zipCode":"01234"},"isMale":true},"
+ "{"firstName":"Jane","lastName":"Doe","age":27,"
+ ""address":{"street":"200 Boxer Road","city":"Bar City","
+ ""state":"NJ","zipCode":"09876"},"isMale":false}]";

ObjectMapper mapper = new ObjectMapper();
@SuppressWarnings("unchecked")
List<Person> people = (List<Person>) mapper.readValue(json, List.class);

System.out.println(people);

Executing this snippet results in the following output:

[{firstName=John, lastName=Doe, age=29, address={street=100 Elm Way, city=Foo City, state=NJ, zipCode=01234}, isMale=true}, {firstName=Jane, lastName=Doe, age=27, address={street=200 Boxer Road, city=Bar City, state=NJ, zipCode=09876}, isMale=false}]

It is important to note that the cast used above is an unchecked cast, since the ObjectMapper cannot verify the generic argument of List a priori. In fact, because we supplied the raw List.class, Jackson has no element type to target and deserializes each element into a Map rather than a Person, which is why the output above shows key=value pairs instead of our Person.toString format.
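
If we want actual Person instances back, Jackson's TypeReference (from com.fasterxml.jackson.core.type) can carry the full generic type into readValue:

ObjectMapper mapper = new ObjectMapper();
// The anonymous subclass captures List<Person>, so Jackson targets Person elements
List<Person> people = mapper.readValue(json, new TypeReference<List<Person>>() {});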

Reading JSON From an Input File

Although it is useful to deserialize JSON from a string, it is common for the desired JSON to come from a file rather than a supplied string. In this case, the ObjectMapper can be supplied any InputStream or File object from which to read. For example, we can create a file named john.json in the application's working directory which contains the following:

{"firstName":"John","lastName":"Doe","alias":"Jay","age":29,"address":{"street":"100 Elm Way","city":"Foo City","state":"NJ","zipCode":"01234"},"isMale":true}

Then, the following snippet can be used to read from this file:

ObjectMapper mapper = new ObjectMapper();
Person johnDoe = mapper.readValue(new File("john.json"), Person.class);
System.out.println(johnDoe);

Executing this snippet results in the following output:

Person [firstName=John, lastName=Doe, alias=Jay, age=29, isMale=true, address=Address [street=100 Elm Way, city=Foo City, state=NJ, zipCode=01234]]

Conclusion

While programming is a self-evident part of software development, it is not the only skill that developers must possess. In many applications, data drives an application: configuration files create dynamism, and web-service data allows disparate parts of a system to communicate with one another. In particular, JSON has become one of the de facto data languages in many Java applications. Learning not only how this language is structured but how to incorporate JSON into Java applications can add an important set of tools to a Java developer's toolbox.

Originally published on https://dzone.com

Introduction to New Features in TypeScript 3.7 and How to Use Them

The TypeScript 3.7 release is coming soon, and it's going to be a big one.

The target release date is November 5th, and there are some seriously exciting headline features included:

  • Assert signatures.
  • Recursive type aliases.
  • Top-level await.
  • Null coalescing.
  • Optional chaining.

Personally, I'm super excited about these; they're going to whisk away all sorts of annoyances that I've been fighting in TypeScript while building HTTP Toolkit.

If you haven't been paying close attention to the TypeScript development process though, it's probably not clear what half of these mean, or why you should care. Let's talk through all of them.

Assert Signatures

This is a brand-new and little-known TypeScript feature, which allows you to write functions that act like type guards as a side-effect, rather than explicitly returning their boolean result.

It's easiest to demonstrate this with a JavaScript example:

function assertString(input) { 
  if (typeof input === 'string') 
    return; 
  else 
    throw new Error('Input must be a string!'); 
} 
function doSomething(input) { 
  assertString(input); 
  // ... Use input, confident that it's a string 
} 
doSomething('abc'); // All good
doSomething(123); // Throws an error

This pattern is neat and useful, but you can't use it in TypeScript today.

TypeScript can't know that you've guaranteed the type of input after it's run assertString. Typically, people just make the argument input: string to avoid this, and that's good. But, it also just pushes the type checking problem somewhere else, and in cases where you just want to fail hard, it's useful to have this option available.

Fortunately, soon we will be able to:

// With TS 3.7
function assertString(input: any): asserts input is string { // <-- the magic
  if (typeof input === 'string')
    return;
  else
    throw new Error('Input must be a string!');
}

function doSomething(input: string | number) {
  assertString(input);
  // input's type is just 'string' here
}

Here asserts input is string means that if this function ever returns, TypeScript can narrow the type of input to string, just as if it was inside an if block with a type guard.

To make this safe, that means if the assert statement isn't true then your assert function must either throw an error or not return at all (kill the process, infinite loop, you name it).

That's the basics, but this actually lets you pull some really neat tricks:

// With TS 3.7

// Asserts that input is truthy, throwing immediately if not:
function assert(input: any): asserts input { // <-- not a typo
  if (!input)
    throw new Error('Not a truthy value');
}

declare const x: number | string | undefined;
assert(x); // Narrows x to number | string

// Also usable with type guarding expressions!
assert(typeof x === 'string'); // Narrows x to string

// -- Or use assert in your tests: --
const a: Result | Error = doSomethingTestable();
expect(a).is.instanceOf(Result); // 'instanceOf' could 'asserts a is Result'
expect(a.resultValue).to.equal(123); // a.resultValue is now legal

// -- Use as a safer ! that throws immediately if you're wrong: --
function assertDefined<T>(obj: T): asserts obj is NonNullable<T> {
  if (obj === undefined || obj === null) {
    throw new Error('Must not be a nullable value');
  }
}

declare const x: string | undefined;
// Gives y just 'string' as a type, but could throw elsewhere later:
const y = x!;
// Gives z 'string' as a type, or throws immediately if you're wrong:
assertDefined(x);
const z = x;

// -- Or even update types to track a function's side-effects: --
type X<T extends string | {}> = { value: T };

// Use asserts to narrow types according to side effects:
function setX<T extends string | {}>(x: X<any>, v: T): asserts x is X<T> {
  x.value = v;
}

declare let x: X<any>;
// x is now { value: any };
setX(x, 123);
// x is now { value: number };

This is still in flux, so don't take it as the definitive result, and keep an eye on the pull request if you want the final details.

There's even a discussion there about allowing functions to assert something and return a type, which would let you extend the final example above to track a much wider variety of side effects, but we'll have to wait and see how that plays out.

Top-Level Await

Async/await is amazing and makes promises dramatically cleaner to use.

Unfortunately, though, you can't use await at the top level. This might not be something you care about much in a TS library or application, but if you're writing a runnable script or using TypeScript in a REPL, then this gets super annoying.

It's even worse if you're used to frontend development, since top-level await has been working nicely in the Chrome and Firefox console for a couple of years now.

Fortunately though, a fix is coming. This is actually a general stage-3 JS proposal, so it'll be everywhere else eventually too, but for TS devs 3.7 is where the magic happens.

This one's simple, but let's have another quick demo anyway:


// Your only solution right now for a script that does something async: 
async function doEverything() { 
  ... 
  const response = await fetch('http://example.com'); 
  ... 
} 
  
doEverything(); // <- eugh (could use an IIFE instead, but even more eugh)

With top-level await:

// With TS 3.7: 
// Your script: ... 
const response = await fetch('http://example.com'); 
// ...

There's a notable gotcha here: if you're not writing a script, or using a REPL, don't write this at the top level, unless you really know what you're doing!

It's totally possible to use this to write modules that do blocking async steps when imported. That can be useful for some niche cases, but people tend to assume that their import statement is a synchronous, reliable, and fairly quick operation, and you could easily hose your codebase's startup time if you start blocking imports for complex async processes (even worse, processes that can fail).

This is somewhat mitigated by the semantics of imports of async modules: they're imported and run in parallel, so the importing module effectively waits for Promise.all(importedModules) before being executed.
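
As a sketch of what that looks like in practice (the module names here are hypothetical):

// data.ts: does async work at import time
export const data = await fetch('http://example.com/data').then(r => r.json());

// main.ts: this import implicitly waits for data.ts's top-level await to settle
import { data } from './data';
console.log(data);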

Rich Harris wrote an excellent piece on a previous version of this spec (before that change, when imports ran sequentially and this problem was much worse), which makes for good background reading on the risks here if you're interested.

It's also worth noting that this is only useful for module systems that support asynchronous imports. There isn't yet a formal spec for how TS will handle this, but it likely means you'll need a very recent target configuration, and either ES Modules or Webpack v5 (whose alphas have experimental support) at runtime.

Recursive Type Aliases

If you've ever tried to define a recursive type in TypeScript, you may have run into StackOverflow questions like this one: https://stackoverflow.com/questions/47842266/recursive-types-in-typescript.

Right now, you can't. Interfaces can be recursive, but there are limitations to their expressiveness, and type aliases can't. That means right now, you need to combine the two: define a type alias and extract the recursive parts of the type into interfaces. It works, but it's messy, and we can do better.

As a concrete example, this is the suggested type definition for JSON data:

type JSONValue = | string | number | boolean | JSONObject | JSONArray; 
interface JSONObject { [x: string]: JSONValue; } 
interface JSONArray extends Array<JSONValue> { }

That works, but the extra interfaces are only there because they're required to get around the recursion limitation.

Fixing this requires no new syntax; it just removes that restriction, so the below compiles:

// With TS 3.7: 
type JSONValue = | string | number | boolean | { [x: string]: JSONValue } | Array<JSONValue>;

Right now, that fails to compile with: Type alias 'JSONValue' circularly references itself. Soon though, soon...
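
Once that restriction is lifted, the alias is directly usable; a quick sketch:

// With TS 3.7:
const payload: JSONValue = {
  name: 'http-toolkit',
  ports: [8000, 8001],
  tls: { enabled: true }
};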

Null Coalescing

Aside from being difficult to spell, this one is quite simple and easy. It's based on a JavaScript stage-3 proposal, which means it'll also be coming to your favorite vanilla JavaScript environment soon (if it hasn't already).

In JavaScript, there's a common pattern for handling default values, and falling back to the first valid result of a defined group. It looks something like this:

// Use the first of firstResult/secondResult which is truthy: 
const result = firstResult || secondResult; 
// Use configValue from provided options if truthy, or 'default' if not: 
this.configValue = options.configValue || 'default';

This is useful in a host of cases, but due to some interesting quirks in JavaScript, it can catch you out. If firstResult or options.configValue can meaningfully be set to false, an empty string, or 0, then this code has a bug: when considered as booleans, those values are falsy, so the fallback value (secondResult / 'default') is used anyway.
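
A quick sketch of that bug in action (the names here are illustrative):

const options = { configValue: 0 };      // 0 is a meaningful setting here
const value = options.configValue || 10; // value is 10: the 0 was silently discarded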

Null coalescing fixes this. Instead of the above, you'll be able to write:

// With TS 3.7: 
// Use the first of firstResult/secondResult which is *defined*: 
const result = firstResult ?? secondResult; 
// Use configSetting from provided options if *defined*, or 'default' if not: 
this.configValue = options.configValue ?? 'default';

?? differs from || in that it falls through to the next value only if the first argument is null or undefined, not falsy. That fixes our bug. If you pass false as firstResult, that will be used instead of secondResult because, while it's falsy, it is still defined, and that's all that's required.

It's simple but super-useful, as it takes away a whole class of bugs.

Optional Chaining

Last but not least, optional chaining is another stage-3 proposal that is making its way into TypeScript.

This is designed to solve an issue faced by developers in every language: how do you get data out of a data structure when some or all of it might not be present?

Right now, you might do something like this:

// To get data.key1.key2, if any level could be null/undefined: 
let result = data ? (data.key1 ? data.key1.key2 : undefined) : undefined; 
// Another equivalent alternative: 
let result = ((data || {}).key1 || {}).key2;

Nasty! This gets much, much worse if you need to go deeper, and although the second example works at runtime, it won't even compile in TypeScript, since the first step could be {}, in which case key1 isn't a valid key at all.

This gets still more complicated if you're trying to get into an array, or there's a function call somewhere in this process.

There's a host of other approaches to this, but they're all noisy, messy & error-prone. With optional chaining, you can do this:

// With TS 3.7: 
// Returns the value if it's all defined & non-null, or undefined if not.
let result = data?.key1?.key2; 
// The same, through an array index or property, if possible: 
array?.[0]?.['key']; 
// Call a method, but only if it's defined: 
obj.method?.(); 
// Get a property, or return 'default' if any step is not defined: 
let result = data?.key1?.key2 ?? 'default';

The last case shows how neatly some of these dovetail together: null coalescing + optional chaining is a match made in heaven.

One gotcha: this will return undefined for missing values, even if they were null, e.g. in cases like (null)?.key (returns undefined). A small point, but one to watch out for if you have a lot of null in your data structures.

That's the lot! That should outline all the essentials for these features, but there are lots of smaller improvements, fixes, and editor support improvements coming too, so take a look at the official roadmap if you want to get into the nitty-gritty.

Kubernetes Fundamentals - Learn Kubernetes from the Beginning

Kubernetes is about orchestrating containerized apps. Docker is great for your first few containers, but as soon as you need to run on multiple machines, scale up and down, and distribute the load, you need an orchestrator: you need Kubernetes.

This is the first part of a series of articles on Kubernetes, cause this topic is BIG!

  • Part I - From the beginning, Part I, Basics, Deployment and Minikube: In this part, we cover why Kubernetes, some history and some basic concepts like deploying, Nodes, Pods.
  • Part II - Introducing Services and Labeling: In this part, we deepen our knowledge of Pods and Nodes. We also introduce Services and labeling using labels to query our artifacts.
  • Part III - Scaling: Here we cover how to scale our app
  • Part IV - Auto scaling: In this part we look at how to set up auto-scaling so we can handle sudden large increases of incoming requests

Kubernetes

So what do we know about Kubernetes?

It's an open-source system for automating deployment, scaling, and management of containerized applications

Let's start with the name. It's Greek for helmsman, the person who steers the ship, which is why the logo looks like this, a steering wheel on a boat:

It's also called K8s: take the K, drop the 8 characters "ubernete", and keep the s. Now you can impress your friends that you know why it's referred to as K8s.

Here is some more Jeopardy knowledge on its origin. Kubernetes was born out of Google's internal systems called Borg and Omega. It was donated to the Cloud Native Computing Foundation (CNCF) in 2014. It's written in Go/Golang.

If we see past all this trivia knowledge, it was built by Google as a response to their own experience handling a ton of containers. It's also Open Source and battle-tested to handle really large systems, like planet-scale large systems.

So the sales pitch is:

Run billions of containers a week, Kubernetes can scale without increasing your ops team

Sounds amazing, right? Billions of containers, cause we are all Google-sized. No? :) Well, even if you have something like 10-100 containers, it's for you.

In this part I hope to cover the following:

  • Why Kubernetes and Orchestration in General
  • Hello world: Minikube basics, talking through Minikube, simple deploy example
  • Cluster and basic commands, Nodes
  • Deployments, what it is and deploying an app
  • Pods and Nodes, explain concepts and troubleshooting
Part I - From the beginning, Basics, Deployment and Minikube

Why Orchestration

Well, it all started with containers. Containers gave us the ability to create repeatable environments, so dev, staging, and prod all looked and functioned the same way. We got predictability, and containers were also lightweight, as they drew resources from the host operating system. Such a great breakthrough for developers and ops, but the container API is really only good for managing a few containers at a time. Larger systems might consist of hundreds or thousands of containers, which need to be managed as well so we can do things like scheduling, load balancing, distribution, and more.

At this point, we need orchestration: the ability for a system to handle all these container instances. This is where Kubernetes comes in.

Getting started

Ok ok, let's say I buy into all of this, how do I get started?

Impatient, ey? Sure, let's start to do something practical with Minikube.

Ok, sounds good I'm a coder, I like practical stuff. What is Minikube?

Minikube is a tool that lets us run Kubernetes locally

Oh, sweet, millions of containers on my little machine?

Well, no, let's start with a few and learn Kubernetes basics while at it.

Installation

To install Minikube, let's go to this installation page.

It's just a few short steps, in which we install:

  • a Hypervisor
  • Kubectl (Kube control tool)
  • Minikube

Run

Get that thing up and running by typing:

minikube start

It should look something like this:

You can also ensure that kubectl has been correctly installed and is running:

kubectl version

Should give you something like this in response:

Ok, now we are ready to learn Kubernetes.

Learning kubectl and basic concepts

In learning Kubernetes, let's do so by learning more about kubectl, a command-line program that lets us interact with our Cluster and lets us deploy and manage applications on said Cluster.

The word Cluster just means a group of similar things, but in the context of Kubernetes it means a Master and multiple worker machines called Nodes. Nodes were historically called Minions, but not so anymore.

The master decides what will run on the Nodes, which includes things like scheduled workloads or containerized apps. That brings us to our next command:

kubectl get nodes

This should give us a result like this:

What this tells us is what Nodes we have available to do work.

Next up let's try to run our first app on Kubernetes with the run command like so:

kubectl run kubernetes-first-app --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

This should give us a response like so:

Next up, let's check that everything is up and running with the command:

kubectl get deployments

This shows the following in the terminal:

In putting our app on the cluster by invoking the run command, Kubernetes performed a few things behind the scenes. It:

  • searched for a suitable node where an instance of the application could be run, there was only one node so it got chosen
  • scheduled the application to run on that Node
  • configured the cluster to reschedule the instance on a new Node when needed

Next up, we are going to introduce the concept of a Pod. So what is a Pod?

A Pod is the smallest deployable unit and consists of one or more containers, for example, Docker containers. That's all we are going to say about Pods at the moment, but if you really, really want to know more, have a read here

The reason for mentioning Pods at this point is that our container and app are placed inside of a Pod. Furthermore, Pods run in a private, isolated network that, although visible from other Pods and services, cannot be accessed from outside the cluster. This means we can't reach our app with, say, a curl command.

We can change that though. There is more than one way to expose our application to the outside world; for now, however, we will use a proxy.

Now open up a 2nd terminal window and type:

kubectl proxy

This will expose the Kubernetes API through a local proxy that we can query with HTTP requests. The result should look like:

Instead of typing kubectl version we can now type curl http://localhost:8001/version and get the same results:

The API Server inside of Kubernetes has created an endpoint for each Pod, keyed by the Pod's name. So the next step is to find out the Pod name:

kubectl get pods

This will list all the pods you have, it should just be one pod at this point and look something like this:

Then you can just save that down to a variable. One way to do that is with a go-template that pulls out each Pod's name (a sketch; with our single Pod, this captures just that one name):
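
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo $POD_NAME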

Lastly, we can now do an HTTP call to learn more about our pod:

curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME

This will give us a long JSON response back (I trimmed it a bit but it goes on and on...)

Maybe that's not super interesting for us as app developers. We want to know how our app is doing. The best way to know that is by looking at the logs. Let's do that with this command:

kubectl logs $POD_NAME

As you can see below, we now get logs from our app:

Now that we know the Pod's name, we can do all sorts of things, like checking its environment variables or even stepping inside the container and looking at its contents.

kubectl exec $POD_NAME env

This yields the following result:

Now let's step inside the container:

kubectl exec -ti $POD_NAME bash

We are inside! This means we can even see what the source code looks like:

cat server.js

Inside of our container, we can now reach the running app by typing:

curl http://localhost:8080

Summary Part I

This is where we will stop for now.
What did we actually learn?

  • Kubernetes, its origin and what it is
  • Orchestration and why you will soon need it
  • Concepts like Master, Nodes and Pods
  • Minikube, kubectl and how to deploy an image onto our Cluster
Part II - Introducing Services and Labeling

In this part we will cover the following:

  • Deepen our knowledge on Pods and Nodes
  • Introduce Services and labeling
  • Perform an exercise, involving setting labels on Pods and using labels to query our artifacts
Concepts revisited

When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them. So Pods are tied to Nodes and will continue to exist until terminated or deleted. Let's try to educate ourselves a bit more on Pods and Nodes, and let's also introduce a new topic: Services.

Pods

Pods are the atomic unit on the Kubernetes platform, i.e. the smallest possible deployable unit.

We've stated the above before but it's worth mentioning again.

What else is there to know?

A Pod is an abstraction that represents a group of one or more containers, for example, Docker or rkt, and some shared resources for those containers. Those resources include:

  • Shared storage, as Volumes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use

A Pod can have more than one container; if it does, it is so the other containers can support the primary application.
Typical examples of helper applications are data pullers, data pushers, and proxies. You can read more on that use case here

The containers in a Pod share an IP address and port space and are:

  • Always co-located
  • Co-scheduled

Let me show you an image to make it easier to visualize:

As we can see above a Pod can have a lot of different artifacts in them that are able to communicate and support the app in some way.

Nodes

A Pod always runs on a Node

So a Node is the Pod's parent?

Yes.

A Node is a worker machine and may be either a virtual or a physical machine, depending on the cluster

Each Node is managed by the Master. A Node can have multiple pods.

So it's a one-to-many relationship

The Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster

Every Kubernetes Node runs at least:

  • a Kubelet, responsible for the Pod spec; it talks to the container runtime through the CRI interface

  • a Kube proxy, the main interface for network communications between Nodes

  • a container runtime (like Docker or rkt), responsible for pulling the container image from a registry, unpacking the container, and running the application.

Ok, so a Node contains a Kubelet, a container runtime, and one to many Pods. I think I got it.

Let's show an image to make this info stick, cause it's quite important that we know what goes on, at least at a high level:

Services

Pods are mortal; they can die. Pods, in fact, have a lifecycle.

When a worker node dies, the Pods running on the Node are also lost.

What happens to our apps? :(

You might think they and their data are lost, but not so. The whole point of Kubernetes is to not let that happen. We normally deploy something like a ReplicaSet.

A ReplicaSet, what do you mean?

A ReplicaSet is a high-level artifact that can drive the cluster back to the desired state by creating new Pods to keep your application running.

Ok, so if a Pod goes down, the ReplicaSet just creates a new Pod (or Pods) in its place?

Yes, exactly that. If you focus on defining a desired state the rest is up to Kubernetes.

Phew sounds really great then.

This concept of desired state is a very important one. You need to specify how many containers you want of each kind, at all times.

Oh so 4 database containers, 3 services etc?

Yes exactly.

So you don't have to care about the details, just tell Kubernetes what state you want and it does the rest. If something goes down, Kubernetes ensures it comes back up again to the desired state.
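
In declarative form, that desired state is literally a field in a manifest. Here is a sketch of a Deployment for the app we ran in part I (the apiVersion and label names are assumptions based on that example):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-first-app
spec:
  replicas: 4              # the desired state: keep 4 Pods running at all times
  selector:
    matchLabels:
      run: kubernetes-first-app
  template:
    metadata:
      labels:
        run: kubernetes-first-app
    spec:
      containers:
        - name: kubernetes-first-app
          image: gcr.io/google-samples/kubernetes-bootcamp:v1
          ports:
            - containerPort: 8080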

Each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.

Ok?

Yea, think of it like this: if a Pod containing your app goes down and another Pod is created in its place running your app, users should still be able to use your app after that.

Ok I got it. Makes me think...

The motivation for a Service

You should never refer to a Pod by its IP address; just think what happens when a Pod goes down and comes back up again, but this time with a different IP. It is for that reason a Service exists.

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them.

Makes me think of routers and subnets

Yea I guess you can say there is a resemblance in there somewhere.

Services enable a loose coupling between dependent Pods and are defined using a YAML or JSON file, just like all Kubernetes objects.

That's handy, just JSON and YAML :)

Services and Labels

The set of Pods targeted by a Service is usually determined by a LabelSelector.

Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. We can expose them through a proxy though as we showed in part I.

Wait, go back a second here, you said LabelSelector. I wasn't quite following?

Remember how we couldn't refer to Pods by IP, cause Pods might go down and a new Pod could come back in its place?

Yes

Well, labels are the answer to how Services and Pods are able to communicate. This is what we mean by loose coupling. By applying labels like, for example, frontend, backend, release, and so on to Pods, we are able to refer to Pods by their logical name rather than their specifics, i.e. IP number.

Oh I get it, so it's a high-level domain language

Mm, kind of.

Services and Traffic

Services allow your applications to receive traffic.

Services can be exposed in different ways by specifying a type in the ServiceSpec (service specification); a sketch manifest follows the list below.

  • ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
  • NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
  • LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
  • ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
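
As a sketch, a NodePort Service manifest selecting the Pods from our earlier deployment might look like this (the run label is an assumption carried over from that example):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-first-app
spec:
  type: NodePort
  selector:
    run: kubernetes-first-app
  ports:
    - port: 8080
      targetPort: 8080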

Ok I think I get it. Ensure I'm speaking externally to a Service instead of specific Pods. Depending on what I expose the Service as, that leads to different behavior?

Yea that's correct.

You said something about labels though, how do we create and apply them to Pods?

Yea, let's talk about that next.

Labels

As we just mentioned, Services are the abstraction that allows pods to die and replicate in Kubernetes without impacting your application.

Now, Services match a set of Pods using labels and selectors, which allows us to operate on Pods as a group.

Labels are key/value pairs attached to objects and can be used in any number of ways:

  • Designate objects for development, test, and production
  • Embed version tags
  • Classify an object using tags

Labels can be attached to objects at creation time or later on. They can be modified at any time.

Lab - Fun with Labels and kubectl

It's a good idea to have read the first part of this series, where we create a deployment. If you haven't, you need to first create a deployment like so:

kubectl run kubernetes-first-app --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

Now we should be good to go.

Ok. I know you are probably all tired of the theory by now.

I bet you are just itching to learn more hands on Kubernetes with kubectl.

Well, the time for that has come :). We will do two things:

  1. Create a Service and learn how we can expose our app using said Service
  2. Learn about Labeling and how we can improve our querying game by having appropriate labels on our artifacts.

Let's create a new service.

We will get acquainted with the expose command.

Let's check for existing Pods:

kubectl get pods

Next let's see what services we have:

kubectl get services

Next, let's create a Service like so:

kubectl expose deployment/kubernetes-first-app --type="NodePort" --port 8080

As you can see above, we are just targeting one of our deployments, kubernetes-first-app, and referring to it with [type]/[deployment name], the type here being deployment.

We expose it as a service of type NodePort, and finally, we choose to expose it at port 8080.

Now run kubectl get services again and see the results:

As you can see, we now have two services in use: our basic kubernetes service and our newly created kubernetes-first-app.

Next up we need to grab the port of our service and assign that to a variable:

export NODE_PORT=$(kubectl get services/kubernetes-first-app -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT

We now have our port stored in the environment variable NODE_PORT, and we are ready to start communicating with our service like so:

curl $(minikube ip):$NODE_PORT

Which leads to the following output:

Creating and applying Labels

When we created our deployment and our Pod, it was automatically assigned a label.

By typing

kubectl describe deployment

we can see the name of said label.

Next up, we can query the Pods by that same label:

kubectl get pods -l run=kubernetes-first-app

Above we are using -l to query for a specific label, run, with the value kubernetes-first-app. This gives us the following result:

You can do a similar query to your services:

kubectl get services -l run=kubernetes-first-app

That just shows that you can query on different levels, for specific Pods or Services that have Pods with that label.

Next up, we will look at how to change the label.

First let's get the name of the pod, like so:

POD_NAME=kubernetes-first-app-669789f4f8-6glpx

Above I'm just assigning what my Pod is called to a variable POD_NAME. Check with kubectl get pods what your Pod is called.

Then we can add/apply the new label like so:

kubectl label pod $POD_NAME app=v1

Verify that the new label has been set, like so:

kubectl describe pod

or

kubectl describe pods $POD_NAME

As you can see from the result our new label app=v1 has been appended to existing labels.

Now we can query like so:

kubectl get pods -l app=v1

That's pretty much how labeling works: how to get available labels, apply them, and use them in a query. Ensure you give them descriptive names, like an app version, a certain environment, or a name like frontend or backend, something that makes sense to your situation.

Clean up

Ok, so we created a service. We should learn how to clean up after ourselves. Run the following command to remove our service:

kubectl delete service -l run=kubernetes-first-app

Verify the service is no longer there with:

kubectl get services

also, ensure our exposed IP and port can no longer be reached:

curl $(minikube ip):$NODE_PORT

Just because the service is gone doesn't mean the app is gone. The app should still be reachable from inside the Pod:

kubectl exec -ti $POD_NAME curl localhost:8080

Summary Part II

So what did we learn? We learned a bit more on Pods and Nodes. Furthermore, we learned that we shouldn't speak directly to Pods but rather use a high-level abstraction such as Services. Services use labels as a way to define a domain language and apply those to different Pods.

Ok, so we understand a bit more on Kubernetes and how different concepts relate. We mentioned something called desired state a number of times but we didn't go into detail on how to set such a state. That's our next part in this series where we will cover how to set the desired state and how Kubernetes maintains it, so stay tuned.

Part III - Scaling

This third part aims to show how you scale your application. We can easily set the number of Replicas we want of a certain application and let Kubernetes figure out how to do that. This is us defining a so-called desired state.

When traffic increases, we will need to scale the application to keep up with user demand. We've talked about deployments and services; now let's talk scaling.

What does scaling mean in the context of Kubernetes?

We get more Pods. More Pods that are scheduled to nodes.

Now it's time to talk about desired state again, which we mentioned in previous parts.

This is where we relinquish control to Kubernetes. All we need to do is tell Kubernetes how many Pods we want and Kubernetes does the rest.

So we tell Kubernetes about the number of Pods we want, what does that mean? What does Kubernetes do for us?

It means we get multiple instances of our application. It also means traffic is being distributed to all of our Pods, i.e., load balancing.

Furthermore, Kubernetes, or more specifically, services within Kubernetes will monitor which Pods are available and send traffic to those Pods.

Scaling demo Lab

If you haven't followed the first two parts I do recommend you go back and have a read. What you need for the following to work is at least a deployment. So if you haven't created one, here is how:

kubectl run kubernetes-first-app --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080

Let's have a look at our deployments:

kubectl get deployments

Let's look closer at the response we get:

We have three pieces of information that are important to us. First, we have the READY column, which we should read as CURRENT STATE/DESIRED STATE. Next up is the UP-TO-DATE column, which shows the number of replicas that were updated to match the desired state.
Lastly, we have the AVAILABLE column, which shows how many replicas we have available to do work.

Let's scale

Now, let's do some scaling. For that we will use the scale command like so:

kubectl scale deployments/kubernetes-first-app --replicas=4 

As we can see above, the number of replicas was increased to 4, and Kubernetes is thereby ready to load balance any incoming requests.

Let's have a look at our Pods next:
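
kubectl get pods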

When we asked for 4 replicas we got 4 Pods.

We can see that this scaling operation took place by using the describe command, like so:

kubectl describe deployments/kubernetes-first-app

The output gives us quite a lot of information, on our Replicas for example, but there is also some other information in there that we will explain later on.

Does it load balance?

The whole point of the scaling was so that we could balance the load of incoming requests. That means that the same Pod would not handle all the requests; instead, different Pods would be hit.
We can easily try this out, now that we have scaled our app to contain 4 replicas of itself.

So far we used the describe command to describe the deployment, but we can also use it to describe our service and find its IP and port. Once we have the IP and port, we can hit the service with different HTTP requests.

kubectl describe services/kubernetes-first-app

Especially look at the NodePort and the Endpoints. NodePort is the port value that we want to hit with an HTTP request.
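The relevant excerpt of the describe output looks something like this; the endpoint addresses are illustrative, and the port value will differ per cluster:

Type:                     NodePort
NodePort:                 <unset>  30450/TCP
Endpoints:                172.17.0.4:8080,172.17.0.5:8080,172.17.0.6:8080 + 1 more...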

Now we will actually invoke the cURL command and ensure that it hits a different Pod each time, thereby proving that our load balancing is working. Let's first store the NodePort in a variable:

NODE_PORT=30450
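The value 30450 is simply what the describe output showed in this walkthrough; use the value from your own cluster. Alternatively, the official Kubernetes bootcamp tutorial extracts the port with a Go template, which we can adapt to our service name:

NODE_PORT=$(kubectl get services/kubernetes-first-app -o go-template='{{(index .spec.ports 0).nodePort}}')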

Next up the cURL call:

curl $(minikube ip):$NODE_PORT

Run the call 4 times. Judging by the output and the name of the responding instance, we can see that we are hitting a different Pod for each request. Thereby we see that the load balancing is working.
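To fire off those 4 calls in one go, a simple shell loop does the trick:

for i in 1 2 3 4; do curl $(minikube ip):$NODE_PORT; done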

Scaling down

So far we have scaled up. We managed to go from one Pod to 4 Pods thanks to the scale command. We can use the same command to scale down, like so:

kubectl scale deployments/kubernetes-first-app --replicas=2 

Now if we are really quick running the next command, we can see how the Pods are being removed as Kubernetes adjusts to the new desired state:

kubectl get pods

2 out of 4 Pods show a status of Terminating, as only 2 Pods are needed to maintain the new desired state.

Running our command again, we see that only 2 Pods remain and that our new desired state has therefore been reached.

We can also look at our deployment with kubectl get deployments to see that our scale instruction has been applied correctly.

Self-healing

Self-healing is Kubernetes' way of ensuring that the desired state is maintained. Pods don't heal themselves; Pods can and do die. What happens instead is that a new Pod appears in its place, thanks to Kubernetes.

So how do we test this?

Glad you asked. We can delete a Pod and see what happens. So how do we do that? We use the delete command. We need to know the name of our Pod, though, so we need to call get pods first. So let's start with that:

kubectl get pods

Then let's pick one of our two Pods, kubernetes-first-app-669789f4f8-6glpx, and assign its name to a variable:

POD_NAME=kubernetes-first-app-669789f4f8-6glpx

Now remove it:

kubectl delete pods $POD_NAME

Let's be quick about it and check our Pod status with get pods. The deleted Pod should show a status of Terminating.

Wait some time, then echo out our variable $POD_NAME and run get pods again.

So what does the output tell us? It tells us that the Pod we deleted is truly deleted, but it also tells us that the desired state of two replicas has been restored by spinning up a new Pod. What we are seeing is *self-healing* at work.

Different ways to scale

Ok, we looked at a way to scale by explicitly saying how many replicas we want of a certain deployment. Sometimes, however, we might want a different way to scale, namely auto-scaling. Auto-scaling means not having to set the exact number of replicas you want, but rather relying on Kubernetes to create the number of replicas it thinks it needs. So how would Kubernetes know that? Well, it can look at more than one thing, but a common metric is CPU utilization. Let's say you have a booking site and suddenly someone releases Bruce Springsteen tickets; you are likely to want to rely on auto-scaling, because the next day, when the tickets are all sold out, you want the number of Pods to go back to normal, and you wouldn't want to do that manually.

Auto-scaling is a topic I plan to cover in more detail in a future article, so if you are really curious about how that is done, I recommend you have a look here

Summary Part III

Ok. So we did it. We managed to scale an app by creating replicas of it. It wasn't so hard to accomplish. We showed how we only needed to provide Kubernetes with a desired state and that it would do its utmost to preserve said state, which is also called *self-healing*. Furthermore, we mentioned that there is another way to scale, namely auto-scaling, but decided to leave that topic for another article. Hopefully, you are now more in awe of how amazing Kubernetes is and how easy it is to scale your app.

Part IV - Auto scaling

In this article, we will cover the following:

  • Why auto scaling: we will discuss different scenarios in which it makes sense to rely on auto scaling instead of statically defining a desired state
  • How: we will talk about Horizontal Pod Autoscaling, the concept/feature that allows us to scale in an elastic way
  • Lab - let's scale: we will look at how to actually set this up in kubectl and simulate a ton of incoming requests. We will then inspect the results and see that Kubernetes acts the way we expect

Why

So in our last part, we talked about desired state. That's an OK strategy until something unforeseen happens and suddenly you get a great influx of traffic. This is likely to happen to businesses such as e-commerce sites around a big sale, or to ticket vendors when tickets to a popular event are released.

Events like these are anomalies that force you to quickly scale up. The other side of the coin is that at some point you need to scale down, or you suddenly have overcapacity that you have to pay for. What you really want is for the scaling to act in an elastic way, so it scales up when you need it to and scales down when there is less traffic.

How

Horizontal auto-scaling, what does it mean?

It's a concept in Kubernetes that can scale the number of Pods we need. It can do so on a replication controller, deployment or replica set. It usually looks at CPU utilization, but it can be made to look at other things by using something called custom metrics support, so it's customizable.

It consists of two parts: a resource and a controller. The controller checks utilization, or whatever metric you decided on, to ensure that the number of replicas matches your specification. If need be, it spins up more Pods or removes them. The default is to check every 15 seconds, but you can change that with the --horizontal-pod-autoscaler-sync-period flag.

The underlying algorithm that decides the number of replicas looks like this:

desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
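To make the formula concrete, here is a quick worked example using numbers we will actually see later in this lab: with one replica running, a current CPU metric of 339% and a target of 50%, we get

desiredReplicas = ceil[1 * (339 / 50)] = ceil[6.78] = 7

so the autoscaler scales the deployment out to 7 replicas.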

Lab - let's scale

Ok, the first thing we need to do is configure our deployment so that it scales automatically rather than sticking to a statically defined desired state.

We have two things we need to specify when we do autoscaling:

  • min/max: we define a minimum and maximum in terms of how many Pods we want
  • CPU: in this version we set a certain CPU utilization percentage. When utilization goes above that, the deployment scales out as needed. Think of this one as an IF clause: if the CPU value is greater than the threshold, try to scale

Set up

Before we can attempt our scaling experiment we need to make sure we have the correct add-ons enabled. You can easily see what add-ons you have enabled by typing:

minikube addons list

What we need, to be able to auto-scale, is for the heapster and metrics-server add-ons to show as enabled in that list.

Heapster enables Container Cluster Monitoring and Performance Analysis.

The metrics server provides metrics via the Resource Metrics API. The Horizontal Pod Autoscaler uses this API to collect metrics.

We can easily enable them both with the following commands (we will need them for auto-scaling to show correct data):

minikube addons enable heapster

and

minikube addons enable metrics-server

We need to do one more thing, namely enable custom metrics, which we do by starting Minikube with the following flag:

minikube start --extra-config kubelet.EnableCustomMetrics=true

Ok, now we are good to go.

Running the experiment

We need to do the following to run our experiment

  • Create a deployment
  • Apply autoscaling
  • Bombard the deployment with incoming requests
  • Watch how the auto scaling changes things

Create a deployment

kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80

Above we are creating the deployment php-apache and exposing it as a service on port 80, using the image k8s.gcr.io/hpa-example.

It should tell us the following:

service/php-apache created
deployment.apps/php-apache created

Autoscaling

Next up we will use the command autoscale. We will use it like so:

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

It should say something like:

horizontalpodautoscaler.autoscaling/php-apache autoscaled

Above we are applying auto-scaling to the deployment php-apache. As you can see, we are applying both a min/max range and CPU-based auto scaling, which means we give a rule for how the auto scaling should happen:

If CPU load is >= 50%, create a new Pod, up to a maximum of 10 Pods. If the load is low, gradually go back down to one Pod.

Bombard with requests

Next step is to send a ton of requests against our deployment and see our auto-scaling doing its work. So how do we do that?

First off, let's check the current status of our horizontal pod autoscaler, or hpa for short, by typing:

kubectl get hpa

This should give us something like this:
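On a freshly created autoscaler with no load, the output typically looks like this (values are illustrative):

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          1m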

The above shows us two pieces of information. The first is the TARGETS column, which shows our CPU utilization as actual usage/trigger value. The next bit of interest is the REPLICAS column, which shows us the number of copies, 1 at the moment.

For our next trick, open up a separate terminal tab. What we need to do is set things up so we can send a ton of requests.

Next up we create a container using this command:

kubectl run -i --tty load-generator --image=busybox /bin/sh

This should take us to a prompt within the container. This is followed by:

while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

The command above should result in an endless stream of OK! responses, one per request served.

This will just go on and on until you hit CTRL+C, but leave it be for now.

This throws a ton of requests at our deployment in a while true loop.

I thought while true loops were bad?

They are, but we are only going to run it for a minute so that the auto scaling can happen. Yes, your machine's fans may get loud, but don't worry :)

Let this go on for a minute or so, then enter the following command into the first terminal tab (not the one running the requests):

kubectl get hpa

It should now show something like this:
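The exact numbers will vary, but the output should now be along these lines (matching the values we discuss below):

NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   339%/50%   1         10        7          6m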

As you can see from the above, the TARGETS column looks different and now says 339%/50%, meaning the current CPU load is well above the 50% trigger value, and REPLICAS is 7, which means the deployment has gone from 1 to 7 replicas. So as you can see, we have been bombarding it pretty hard.

Now go to the second terminal and hit CTRL+C, or the load generator will just keep hammering the cluster.

It will actually take a few minutes for Kubernetes to cool off and get the values back to normal. A first look at the Pods situation shows us the following:

kubectl get pods

As you can see, we still have 7 Pods up and running, but let's wait a minute or two and the extra Pods should be terminated:

Ok, now we are back to normal.

Summary Part IV

We did some great stuff in this article. We managed to set up auto-scaling and bombard it with requests, without frying our CPU, hopefully ;)

We also managed to learn some new Kubernetes commands while at it and got to see auto-scaling at work giving us new Pods based on our specification.

How to build a stable Node.js project architecture

Often, a product development process that involves JavaScript is accompanied by the use of Node.js, a JavaScript runtime environment. The birth of this technology has certainly turned the use of JS upside-down. Today, JavaScript is in the category of the most preferred languages for building apps, thanks to Node.js.

What is so special about this technology? To answer this, let's reflect not only on this technology's benefits but also on its architectural limitations and the ways to deal with the cons.

Node JS brief history

Node.js was introduced by Ryan Dahl in 2009. The technology is mostly used for building an app's server side, that is, for back-end development. What's special about Node.js is that the technology is asynchronous.

This means that the server continues to process other client requests without making a client wait until a previously sent request has been processed. Let's say it's Node.js' "value proposition" for all who would like to create reliable JS-based apps.
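As a minimal sketch of what that non-blocking behavior looks like in practice (the file name here is purely illustrative):

const fs = require('fs');

// Node registers a callback and immediately moves on to other work
fs.readFile('./config.json', 'utf8', (err, data) => {
  if (err) return console.error(err);
  console.log('file contents:', data);
});

// This line prints before the file contents do
console.log('still free to handle other requests...');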

What is Node JS commonly used for?

The non-blocking I/O machinery behind the runtime makes it a great fit for building real-time web apps with Node.js, as well as mobile products, chats, data streaming apps, browser games, APIs and medium-performance JS apps.

Node JS real-time applications examples and showcase

Among the companies that rely on Node.js for part of their tech stack are LinkedIn, Yahoo, IBM, Netflix, PayPal, Uber and others.

Apart from the back-end, Node is used in a number of other areas (2017 data), and from a business point of view it is applied across a wide range of products.

If you have worked with Node.js, you probably already know that with Node.js you can gain the following:

  • Since Node.js is built with the help of C++, you can call C functions

  • An asynchronous nature, which accelerates the app's functioning and provides the ability to multitask. Apart from this, the non-blocking I/O model suits high-traffic, real-time websites and resource creation.

  • Stable multiplatform app development

  • Lots of Node.js development tools for a better workflow: npm packages, Express, Socket.io, etc. (we'll touch upon these a bit later in the article)

  • A clear and gentle learning curve

But with Node.js' architectural limitations, you lose the opportunity to:

  • Create computation-heavy apps, such as apps with 3D projection elements or heavy calculation apps.
    As Node.js is single-threaded, it is not the right fit for such projects. All the actions happen on a single thread and hence overload the CPU. For such types of apps or software, it's better to utilize multithreaded languages like C or C#. For full-scale video games, you can use Unity.

  • Use just any npm package.

Not all of the npm packages we've mentioned in the pros are of high quality and stable. Thus, you have to filter them properly and choose only the reliable ones. (Author's note: think of npm packages like plugins in WordPress.)

  • Utilize relational databases at full power, taking Node.js' architecture into consideration.

Node.js works best with document-oriented databases like MongoDB.

Let's get into the tech details and best practices for Node.js development, with more detailed practical tips on working with this platform.

  • Application Specifics
    First and foremost, think about what type of application you plan to release. To further proceed with Node.js project architecture building, ask yourself some of the following questions:

  • Are you going to build a real-time web app with Node.js? Is it meant to be a mobile or a console one? Or maybe it's a multiplatform app?

  • What data should the app operate with? Are these databases, files, or remote storages (like Amazon S3)?

  • Do you plan to use special software in your application? Are sophisticated data processing algorithms such as face detection or text recognition enlisted in your app's functionality business plan?

  • Does your application need extra hardware, like a camera, microphone, various sensors, or any other related devices?

  • What are the architectural specifics of the future app? Is it meant to be a client-server app, MVC, or maybe some other type of architecture?

If you've answered all of these questions, make sure that Node.js is able to fully meet all the requirements set for your project. For example, if you need an API server that works with several types of databases, Node.js might be a good choice. But if you need an application that is designed to build 3D graphics using DirectX, you might want to get acquainted with C++ a bit more closely.

Let's assume that your application uses special temperature and contamination sensors. You can pair such features with RaspberryPi, Arduino or any other special device to go further and create a 'smart' functionality model. But before you start, make sure that the driver of any mentioned device is compatible with Node.js.

Best practices for Node.js development workflow

Typically, an application is written by a team of developers. Everyone in the team is unique, and so is their code style. Therefore, it's recommended to settle on the following code organization nuances before the development stage.

  • Functional development style or object-oriented programming patterns?

Since JS is a weakly-typed language and allows you to write your code freestyle, it's still better to agree upon a single set of code-writing rules within your team. This will keep most misunderstandings away and will help your colleagues get a better understanding of the project's code.

  • Code style
    Discuss code-writing dos and don'ts with your teammates. Check the quality of what is written using Node.js development tools such as a linter

  • Your team's experience with the integration of third-party services and devices in your application (i.e. Google Maps, data collection, analytics tools, e-communication means, etc.)

  • Data models you’re up to work with (files, databases or third-party APIs)

  • Communication and data exchange tools you're going to use (REST API, Blouse Protocol, Socket.io, GraphQL, DDP protocol)

  • Possibility to utilize 3rd party libraries (hardware libraries, special algorithms)

The scope of work and its specifics can vary widely. Let's say one of your teammates works with an SQL database while someone else deals with the Amazon API. Thus, each of your colleagues is assigned their own particular tasks.

Don't reinvent the wheel

Currently, the Node.js community is up and thriving. A lot of neat features have already been invented and written by other developers. So before you create a particular piece of functionality for your application, check whether someone else has encountered the exact same problem before.

If you’re not the only one who has experienced a certain issue, you might find the solution in npm packages. This is an entire catalogue of many ready-made useful libraries that will make life much easier for you.

The same applies to frameworks. Think about whether using one of them could speed up the process of building a real-time web app with Node.js, or a mobile one.

Let's say you're dealing with a REST API: you can try out Express.js. If you need to interact with a particular database type, you can refer to frameworks such as Mongoose.js, or a SQL-oriented library, depending on which database type you need.
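For instance, a minimal Express sketch of a REST endpoint could look like the following (the route, data and port are illustrative):

const express = require('express');

const app = express();

// A tiny REST endpoint returning JSON
app.get('/api/posts', (req, res) => {
  res.json([{ id: 1, title: 'Hello Node' }]);
});

app.listen(3000, () => console.log('API listening on port 3000'));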

Although packages can benefit your project, there are some significant dangers to be aware of. Given that these solutions are open source, there are several threats to bear in mind:

  • Duplicates

There are too many packages already and some of them clone others. Unfortunately, this trend is only growing. Be careful and make sure you choose the right and unique npm package.

  • Malicious code
    Since these packages are not supervised, anyone can write anything they want. Read more about security issues in the article by David Gilbertson, 'I'm harvesting credit card numbers and passwords from your site. Here's how'. So if your product has to provide AAA-grade security, meticulously check each code snippet of any package you install.

Always stay ahead of the times

The JS and Node.js community is constantly growing. ES standards are frequently updated. Old features are being replaced by new, better ones and implemented in Node.js. Thus, it's important to always monitor the technology's state of the art.

For instance,

[callback hell](http://callbackhell.com/)
fs.action(source, function (err, res) {
  if (err) {
    console.log('Error: ' + err)
  } else {
    res.action(function (err, res) {
      if (err) {
        console.log('Error: ' + err)
      }
    })
  }
})

Was replaced with

[promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
fs.action(source)
  .then(res => res.action())
  .then(res => res.action())
  .then(res => res.action())
  .catch(err =>  console.log('Error : ' + err))
	

Now we can use async/await as the alternative to Promises:

[async/await](https://blog.risingstack.com/async-await-node-js-7-nightly)
try {
  const res = await fs.action(source);
  const res1 = await res.action(source);
  const res2 = await res1.action(source);
  const res3 = await res2.action(source);
} catch (error) {
  console.log('Error: ' + error)
}

As for now, opinions in the community on whether to use promises or async/await vary.

Try to update your knowledge base with the new Node.js and ES releases on a regular basis. This will help you keep the development process modern.

Node.js app development techniques and tips

“Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.”
― Frank Herbert.

To overcome Node.js' architectural limitations and all kinds of trivial-to-challenging issues, keep to a clear development structure.

A Node.js project might consist of only one file, but this does not mean that you should pile everything up into one great mess.

Make your code as readable and understandable for others as it can be. The following recommendations will help:

  • Follow the instructions on the structure development for the framework you are using. If you do not use any, try to place your code in the directories and subdirectories in the most logical way possible.

  • File naming

Keep to a single, agreed-upon file naming scheme. For example, choose one of these: ErrorHandler, errorHandler, error_handler or error-handler. Try to name files according to their purpose rather than their functionality. For example, it's better to name a file Notifier than NotifyAllUsersByEmailSMSLocal.

  • The entry point must not contain unnecessary code lines

The entry point is the main.js, app.js or www file that is requested to launch your application. Such files should contain only certain method or class calls, and nothing more.

  • index.js

Keeping only imports/exports in these files is generally considered a justified practice.
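A minimal sketch of such an index.js, with module names that are purely illustrative:

// File: ./services/index.js
// Nothing but re-exports lives here
module.exports = {
  Notifier: require('./Notifier'),
  Logger: require('./Logger'),
};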

Tips for better Node.js project architecture

The way the code is written signifies the 'face' (reputation) of the programmer. It also shows how the entire development team deals with the app's creation using a certain technology or language, Node.js in our case. Therefore, always try to keep the code up-to-date, structured and comprehensible for other developers. Code readability is one of the main ways to build a stable, real-time web app with Node.js.

  • Use code quality control tools, like Lint

This tool will give trivial errors no chance and help you keep a keen eye on small slips. It will also allow you to keep the code in one unified form.

  • Keep track of your file sizes

Files that are too large are difficult to navigate and understand. The optimal file size is 300-500 lines or less. So if you notice that the code keeps growing within the same file, turn it into a directory with several files inside.

  • Comment on your code
    When you write a universal module that will be used in several places, don’t forget to create a quick guide on how to utilize this code.
/**
 * Provides sending notifications to users by Local, Email, SMS
 *
 * @example
 *          new Notifier(user).notify(['sms', 'email'], ...)
 **/
class Notifier {
  // ...
}

You can also leave instructions for methods in your code

/**
 * Provide parse date for single format on project
 * @param Date
 * @example
 *        dateToString(date) => String
 * **/
 

If you're developing a particular REST API, you can embed usage instructions in the code itself. It's recommended, though, to create complete documentation/guidelines and store them on a single resource.

/**
* Provide Api for Account
  Account Register  POST /api/v1/account/
  @params
         email {string}
         password {string}
  Account Login  POST /api/v1/account/login
  @params
         email {string}
         password {string}

  Account Logout  GET /api/v1/account/logout
  @header
         Authorization: Bearer {token}
 **/
 

There are two sides to this coin, though.

To ensure the use of Node.js architecture best practices, keep your code as descriptive and organized as possible, but don't overdo it.

Don't comment on every line of code. That will do more harm than good and only make the development process more complex. A better tip is to comment on the code snippet's purpose, not its functionality (what it does). Depending on the development style (callback, promise, async/await), write and use only one (if possible) general error handler/processor.
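As a tiny, purely illustrative sketch of that difference:

// Harmful: restates the functionality
// increase attempts by one
attempts++;

// Helpful: states the purpose
// Retry once more before we give up and alert the user
attempts++;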

Error handling

Error handling is another important aspect among the best practices for Node.js development to bear in mind.

Since JavaScript is not as strict as Java, all the responsibility lies with the development team.

First and foremost, always try to handle errors. Otherwise, it can lead to uncontrolled app behavior.

  • With a callback

const withoutErrors = callback => (err, updatedTank) => {
  if (err) {
    return // do something
  }
  return callback(updatedTank);
};
fs.action(withoutErrors(data => ...))

  • With a Promise

const handleError = error => {
  if (error) {
    return // do something
  }
};
fs.action()
    .then(data => ...)
    .then(data => ...)
    .catch(handleError)
  • With async / await

class Actions {
  async action1 (data) {
    return fs.action(data)
  }
  async action2 (data) {
    return fs.action(data)
  }
  // ...
}

try {
  await new Actions().action1();
  await new Actions().action2();
} catch (error) {
  return handleError(error)
}

Node JS development tools
  • Gulp

A toolkit which allows you to launch several tasks simultaneously. It might be useful if you'd like to run several services at the same time with one command.

  • Nodemon

A hot-reload tool for Node.js. It automatically restarts your project after any code change is made. A quite handy tool during Node.js project architecture development.

  • Forever, pm2
    These two packages ensure the app's launch on (OS) system start.

  • Winston
    Provides the opportunity to record the app's logs to a primary source (file or database). The package comes in handy when you need the app to work remotely and you don't have full access to it.

  • Threads
    A library designed to make working with threads easier.

To sum up

We hope that the Node.js architecture best practices presented in this article will help you reach the most desirable results when looking for a way to build performant real-time web or mobile apps with Node.js.

Build a CMS with Laravel and Vue

A CMS (Content Management System) helps content creators produce content in an easily consumable format. In this tutorial series, we will consider how to build a simple CMS from scratch using Laravel and Vue.

Build a CMS with Laravel and Vue - Part 1: Setting up

The birth of the internet has since redefined content accessibility for the better, causing a distinct rise in content consumption across the globe. The average user of the internet consumes and produces some form of content formally or informally.

An example of an effort at formal content creation is when someone makes a blog post about their work so that a targeted demographic can easily find their website. This type of content is usually served and managed by a CMS (Content Management System). Some popular ones are WordPress, Drupal, and SilverStripe.

A CMS helps content creators produce content in an easily consumable format. In this tutorial series, we will consider how to build a simple CMS from scratch using Laravel and Vue.

Our CMS will be able to make new posts, update existing posts, delete posts that we do not need anymore, and also allow users to comment on posts, with comments updated in realtime using Pusher. We will also be able to add featured images to posts to give them some visual appeal.

When we are done, we will be able to have a CMS that looks like this:

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.

Installing the Laravel CLI

The first thing we need to do is install the Laravel CLI and the Laravel dependencies. The CLI will be instrumental in creating new Laravel projects whenever we need to. Laravel requires PHP and a few other tools and extensions, so we need to install these before installing the CLI.

The official Laravel documentation lists the dependencies: PHP itself plus several extensions (Mbstring, XML and Zip among them), along with curl and Composer.

Let's install them one at a time.

Installing PHP

Open a fresh instance of the terminal and paste the following command:

    # Linux Users
    $ sudo apt-get install php7.2

    # Mac users
    $ brew install php72


As of the time of writing this article, PHP 7.2 is the latest stable version of PHP, so the command above installs it on your machine.

On completion, you can check that PHP has been installed to your machine with the following command:

    $ php -v


Installing the Mbstring extension

To install the mbstring extension for PHP, paste the following command in the open terminal:

    # Linux users
    $ sudo apt-get install php7.2-mbstring

    # Mac users
    # You don't have to do anything as it is installed automatically.


To check if the mbstring extension has been installed successfully, you can run the command below:

    $ php -m | grep mbstring


Installing the XML PHP extension

To install the XML extension for PHP, paste the following command in the open terminal:

    # Linux users
    $ sudo apt-get install php-xml

    # Mac users
    # You don't have to do anything as it is installed automatically.


To check if the xml extension has been installed successfully, you can run the command below:

    $ php -m | grep xml


Installing the ZIP PHP extension

To install the zip extension for PHP, paste the following command in your terminal:

    # Linux users
    $ sudo apt-get install php7.2-zip

    # Mac users
    # You don't have to do anything as it is installed automatically.


To check if the zip extension has been installed successfully, you can run the command below:

    $ php -m | grep zip


Installing curl

To install curl, paste the following command in your terminal:

    # Linux users
    $ sudo apt-get install curl

    # Mac users using Homebrew (https://brew.sh)
    $ brew install curl


To verify that curl has been installed successfully, run the following command:

    $ curl --version


Installing Composer

Now that we have curl installed on our machine, let’s pull in Composer with this command:

    $ curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer


To run Composer in the future without calling sudo, we may need to change the permissions. However, you should only do this if you have problems installing packages:

    $ sudo chown -R $USER ~/.composer/


Installing the Laravel installer

At this point, we can already create a new Laravel project using Composer’s create-project command, which looks like this:

    $ composer create-project --prefer-dist laravel/laravel project-name


But we will go one step further and install the Laravel installer using composer:

    $ composer global require "laravel/installer"


After the installation, we will need to add Composer's global bin directory to the PATH in the .bashrc file so that our terminal can recognize the laravel command:

    $ echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' >> ~/.bashrc
    $ source ~/.bashrc


Creating the CMS project

Now that we have the official Laravel CLI installed on our machine, let’s create our CMS project using the installer. In your terminal window, cd to the project directory you want to create the project in and run the following command:

    $ laravel new cms


We will navigate into the project directory and serve the application using PHP’s web server:

    $ cd cms
    $ php artisan serve


Now, when we visit http://127.0.0.1:8000/, we will see the default Laravel template:

Setting up the database

In this series, we will be using MySQL as our database system so a prerequisite for this section is that you have MySQL installed on your machine.

If you don't have MySQL installed yet, install and configure it for your platform first.

You will also need a special driver that makes it possible for PHP to work with MySQL. You can install it with this command:

    # Linux users
    $ sudo apt-get install php7.2-mysql

    # Mac Users
    # You don't have to do anything as it is installed automatically.


Load the project directory in your favorite text editor and there should be a .env file in the root of the folder. This is where Laravel stores its environment variables.

Create a new MySQL database and call it laravelcms. In the .env file, update the database configuration keys as seen below:

    DB_CONNECTION=mysql
    DB_HOST=127.0.0.1
    DB_PORT=3306
    DB_DATABASE=laravelcms
    DB_USERNAME=YourUsername
    DB_PASSWORD=YourPassword


Setting up user roles

Like most content management systems, we are going to have a user role system so that our blog can have multiple types of users: the admin and the regular user. The admin should be able to create a post and perform other CRUD operations on a post. The regular user, on the other hand, should be able to view and comment on a post.

For us to implement this functionality, we need to implement user authentication and add a simple role authorization system.

Setting up user authentication

Laravel provides user authentication out of the box, which is great, and we can key into the feature by running a single command:

    $ php artisan make:auth


The above will create all that’s necessary for authentication in our application so we do not need to do anything extra.

Setting up role authorization

We need a model for the user roles so let’s create one and an associated migration file:

    $ php artisan make:model Role -m


In the database/migrations folder, find the newly created migration file and update the CreateRolesTable class with this snippet:

    <?php // File: ./database/migrations/*_create_roles_table.php

    // [...]

    class CreateRolesTable extends Migration
    {
        public function up()
        {
            Schema::create('roles', function (Blueprint $table) {
                $table->increments('id');
                $table->string('name');
                $table->string('description');
                $table->timestamps();
            });
        }

        public function down()
        {
            Schema::dropIfExists('roles');
        }
    }

We intend to create a many-to-many relationship between the User and Role models so let’s add a relationship method on both models.

Open the User model and add the following method:

    // File: ./app/User.php
    public function roles() 
    {
        return $this->belongsToMany(Role::class);
    }

Open the Role model and include the following method:

    // File: ./app/Role.php
    public function users() 
    {
        return $this->belongsToMany(User::class);
    }

We are also going to need a pivot table to associate each user with a matching role so let’s create a new migration file for the role_user table:

    $ php artisan make:migration create_role_user_table


In the database/migrations folder, find the newly created migration file and update the CreateRoleUserTable class with this snippet:

    // File: ./database/migrations/*_create_role_user_table.php
    <?php 

    // [...]

    class CreateRoleUserTable extends Migration
    {

        public function up()
        {
            Schema::create('role_user', function (Blueprint $table) {
                $table->increments('id');
                $table->integer('role_id')->unsigned();
                $table->integer('user_id')->unsigned();
            });
        }

        public function down()
        {
            Schema::dropIfExists('role_user');
        }
    }

Next, let’s create seeders that will populate the users and roles tables with some data. In your terminal, run the following command to create the database seeders:

    $ php artisan make:seeder RoleTableSeeder
    $ php artisan make:seeder UserTableSeeder


In the database/seeds folder, open the RoleTableSeeder.php file and replace the contents with the following code:

    // File: ./database/seeds/RoleTableSeeder.php
    <?php 

    use App\Role;
    use Illuminate\Database\Seeder;

    class RoleTableSeeder extends Seeder
    {
        public function run()
        {
            $role_regular_user = new Role;
            $role_regular_user->name = 'user';
            $role_regular_user->description = 'A regular user';
            $role_regular_user->save();

            $role_admin_user = new Role;
            $role_admin_user->name = 'admin';
            $role_admin_user->description = 'An admin user';
            $role_admin_user->save();
        }
    }

Open the UserTableSeeder.php file and replace the contents with the following code:

    // File: ./database/seeds/UserTableSeeder.php
    <?php 

    use Illuminate\Database\Seeder;
    use Illuminate\Support\Facades\Hash;
    use App\User;
    use App\Role;

    class UserTableSeeder extends Seeder
    {

        public function run()
        {
            $user = new User;
            $user->name = 'Samuel Jackson';
            $user->email = '[email protected]';
            $user->password = bcrypt('samuel1234');
            $user->save();
            $user->roles()->attach(Role::where('name', 'user')->first());

            $admin = new User;
            $admin->name = 'Neo Ighodaro';
            $admin->email = '[email protected]';
            $admin->password = bcrypt('neo1234');
            $admin->save();
            $admin->roles()->attach(Role::where('name', 'admin')->first());
        }
    }

We also need to update the DatabaseSeeder class. Open the file and update the run method as seen below:

    // File: ./database/seeds/DatabaseSeeder.php
    <?php 

    // [...]

    class DatabaseSeeder extends Seeder
    {
        public function run()
        {
            $this->call([
                RoleTableSeeder::class, 
                UserTableSeeder::class,
            ]);
        }
    }

Next, let's update the User model. We will be adding a checkRoles method that checks what role a user has. We will return a 404 page when a user doesn't have the expected role for a page. Open the User model and add these methods:

    // File: ./app/User.php
    public function checkRoles($roles) 
    {
        if ( ! is_array($roles)) {
            $roles = [$roles];    
        }

        if ( ! $this->hasAnyRole($roles)) {
            auth()->logout();
            abort(404);
        }
    }

    public function hasAnyRole($roles): bool
    {
        return (bool) $this->roles()->whereIn('name', $roles)->first();
    }

    public function hasRole($role): bool
    {
        return (bool) $this->roles()->where('name', $role)->first();
    }

Let’s modify the RegisterController.php file in the Controllers/Auth folder so that a default role, the user role, is always attached to a new user at registration.

Open the RegisterController and update the create action with the following code:

    // File: ./app/Http/Controllers/Auth/RegisterController.php
    protected function create(array $data)
    {       
        $user = User::create([
            'name'     => $data['name'],
            'email'    => $data['email'],
            'password' => bcrypt($data['password']),
        ]);

        $user->roles()->attach(\App\Role::where('name', 'user')->first());

        return $user;
    }

Now let’s migrate and seed the database so that we can log in with the sample accounts. To do this, run the following command in your terminal:

    $ php artisan migrate:fresh --seed


In order to test that our roles work as they should, we will make an update to the HomeController.php file. Open the HomeController and update the index method as seen below:

    // File: ./app/Http/Controllers/HomeController.php
    public function index(Request $request)
    {
        $request->user()->checkRoles('admin');

        return view('home');
    }

Now, only administrators should be able to see the dashboard. In a more complex application, we would use a middleware to do this instead.
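For illustration only, such a middleware could lean on the hasRole helper we added earlier. The class below is a hypothetical sketch, not part of this series' code; it would still need to be registered in app/Http/Kernel.php (for example under the key role):

    <?php // Hypothetical file: ./app/Http/Middleware/CheckRole.php

    namespace App\Http\Middleware;

    use Closure;

    class CheckRole
    {
        // Usage once registered: ->middleware('role:admin')
        public function handle($request, Closure $next, $role)
        {
            if (! $request->user() || ! $request->user()->hasRole($role)) {
                abort(404);
            }

            return $next($request);
        }
    }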

We can test that this works by serving the application and logging in both user accounts; Samuel Jackson and Neo Ighodaro.

Remember that in our UserTableSeeder.php file, we defined Samuel as a regular user and Neo as an admin, so Samuel should see a 404 error after logging in and Neo should be able to see the homepage.

Testing the application

Let’s serve the application with this command:

    $ php artisan serve


When we try logging in with Samuel’s credentials, we should see this:

On the other hand, we will be able to log in with Neo's credentials because he has an admin account:

We will also confirm that whenever a new user registers, they are assigned the role of a regular user. We will create a new user called Greg; he should see a 404 error right after registering:

It works just as we wanted it to. However, it doesn't really make sense to redirect a regular user to a 404 page. Instead, we will edit the HomeController so that it redirects users based on their roles, that is, a regular user to a regular homepage and an admin to an admin dashboard.

Open the HomeController.php file and update the index method as seen below:

    // File: ./app/Http/Controllers/HomeController.php
    public function index(Request $request)
    {
        if ($request->user()->hasRole('user')) {
            return redirect('/');
        }

        if ($request->user()->hasRole('admin')){
            return redirect('/admin/dashboard');
        }
    }

If we serve our application and try to log in using the admin account, we will hit a 404 error because we do not have a controller or a view for the admin/dashboard route. In the next article, we will start building the basic views for the CMS.

Conclusion

In this tutorial, we learned how to install a fresh Laravel app on our machine and pull in all the needed dependencies. We also learned how to configure the Laravel app to work with a MySQL database. Finally, we created our models and migration files and seeded the database using database seeders.

In the next part of this series, we will start building the views for the application.

The source code for this project is available on Github.

Build a CMS with Laravel and Vue - Part 2: Implementing posts

In the previous part of this series, we set up user authentication and role authorization but we didn’t create any views for the application yet. In this section, we will create the Post model and start building the frontend for the application.

Our application allows different levels of accessibility for two kinds of users: the regular user and the admin. In this chapter, we will focus on building the views that the regular users are permitted to see.

Before we build any views, let’s create the Post model as it is imperative to rendering the view.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.
Setting up the Post model

We will create the Post model with an associated resource controller and a migration file using this command:

    $ php artisan make:model Post -mr


Let’s navigate into the database/migrations folder and update the CreatePostsTable class that was generated for us:

    // File: ./app/database/migrations/*_create_posts_table.php
    <?php 

    // [...]

    class CreatePostsTable extends Migration
    {
        public function up()
        {
            Schema::create('posts', function (Blueprint $table) {
                $table->increments('id');
                $table->integer('user_id')->unsigned();
                $table->string('title');
                $table->text('body');
                $table->binary('image')->nullable();
                $table->timestamps();
            });
        }

        public function down()
        {
            Schema::dropIfExists('posts');
        }
    }

We included a user_id property because we want to create a relationship between the User and Post models. A Post also has an image field, which is where its associated image’s address will be stored.

Creating a database seeder for the Post table

We will create a new seeder file for the posts table using this command:

    $ php artisan make:seeder PostTableSeeder


Let’s navigate into the database/seeds folder and update the PostTableSeeder.php file:

    // File: ./app/database/seeds/PostTableSeeder.php
    <?php 

    use App\Post;
    use Illuminate\Database\Seeder;

    class PostTableSeeder extends Seeder
    {
        public function run()
        {
            $post = new Post;
            $post->user_id = 2;
            $post->title = "Using Laravel Seeders";
            $post->body = "Laravel includes a simple method of seeding your database with test data using seed classes. All seed classes are stored in the database/seeds directory. Seed classes may have any name you wish, but probably should follow some sensible convention, such as UsersTableSeeder, etc. By default, a DatabaseSeeder class is defined for you. From this class, you may use the  call method to run other seed classes, allowing you to control the seeding order.";
            $post->save();

            $post = new Post;
            $post->user_id = 2;
            $post->title = "Database: Migrations";
            $post->body = "Migrations are like version control for your database, allowing your team to easily modify and share the application's database schema. Migrations are typically paired with Laravel's schema builder to easily build your application's database schema. If you have ever had to tell a teammate to manually add a column to their local database schema, you've faced the problem that database migrations solve.";
            $post->save();
        }
    }

When we run this seeder, it will create two new posts and assign both of them to the admin user, whose ID is 2. We are attaching both posts to the admin user because regular users are only allowed to view posts and make comments; they can't create a post.

Let’s open the DatabaseSeeder and update it with the following code:

    // File: ./app/database/seeds/DatabaseSeeder.php
    <?php 

    use Illuminate\Database\Seeder;

    class DatabaseSeeder extends Seeder
    {
        public function run()
        {
            $this->call([
                RoleTableSeeder::class,
                UserTableSeeder::class,
                PostTableSeeder::class,
            ]);
        }
    }

We will use this command to migrate our tables and seed the database:

    $ php artisan migrate:fresh --seed


Defining the relationships

Just as we previously created a many-to-many relationship between the User and Role models, we need to create a different kind of relationship between the Post and User models.

We will define the relationship as a one-to-many relationship because a user will have many posts but a post will only ever belong to one user.

Open the User model and include the method below:

    // File: ./app/User.php
    public function posts()
    {
        return $this->hasMany(Post::class);
    }

Open the Post model and include the method below:

    // File: ./app/Post.php
    public function user()
    {
        return $this->belongsTo(User::class);
    }

Setting up the routes

At this point in our application, we do not have a front page that lists all the posts. Let's create one so anyone can see all of the created posts. Aside from the front page, we also need a single post page in case a user wants to read a specific post.

Let's include two new routes in our routes/web.php file:

    Route::get('/', 'PostController@all');
    Route::get('/posts/{post}', 'PostController@single');

With these two new routes added, here's what the routes/web.php file should look like:

    // File: ./routes/web.php
    <?php 

    Auth::routes();

    Route::get('/posts/{post}', 'PostController@single');
    Route::get('/home', 'HomeController@index')->name('home');
    Route::get('/', 'PostController@all');

Setting up the Post controller

In this section, we want to define the handler action methods that we registered in the routes/web.php file so that our application knows how to render the matching views.

First, let’s add the all() method:

    // File: ./app/Http/Controllers/PostController.php
    public function all()
    {
        return view('landing', [
            'posts' => Post::latest()->paginate(5)
        ]);
    }

Here, we retrieve five posts per page and send them to the landing view. We will create this view shortly.

Next, let’s add the single() method to the controller:

    // File: ./app/Http/Controllers/PostController.php
    public function single(Post $post)
    {
        return view('single', compact('post'));
    }

In the method above, we used a feature of Laravel named route model binding to map the URL parameter to a Post instance with the same ID. We are returning a single view, which we will create shortly. This will be the view for the single post page.
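For contrast, without route model binding we would resolve the model ourselves. A quick sketch of the equivalent:

    // Equivalent without route model binding (sketch)
    public function single($id)
    {
        $post = Post::findOrFail($id);

        return view('single', compact('post'));
    }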

Building our views

Laravel uses a templating engine called Blade for its frontend. We will use Blade to build these parts of the frontend before switching to Vue in the next chapter.

Navigate to the resources/views folder and create two new Blade files:

  1. landing.blade.php
  2. single.blade.php

These are the files that will load the views for the landing page and single post page. Before we start writing any code in these files, we want to create a simple layout template that our page views can use as a base.

In the resources/views/layouts folder, create a Blade template file and call it master.blade.php. This is where we will define the inheritable template for our single and landing pages.

Open the master.blade.php file and update it with this code:

    <!-- File: ./resources/views/layouts/master.blade.php -->
    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta name="description" content="">
        <meta name="author" content="Neo Ighodaro">
        <title>LaravelCMS</title>
        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
        <style> 
        body {
          padding-top: 54px;
        }
        @media (min-width: 992px) {
          body {
              padding-top: 56px;
          }
        }
        </style>
      </head>
      <body>
        <nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
          <div class="container">
            <a class="navbar-brand" href="/">LaravelCMS</a>
            <div class="collapse navbar-collapse" id="navbarResponsive">
              <ul class="navbar-nav ml-auto">
                 @if (Route::has('login'))
                    @auth
                    <li class="nav-item">
                         <a class="nav-link" href="{{ url('/home') }}">Home</a>
                    </li>
                    <li class="nav-item">
                      <a class="nav-link" href="{{ route('logout') }}"
                                           onclick="event.preventDefault();
                                                         document.getElementById('logout-form').submit();">
                        Log out
                      </a>
                      <form id="logout-form" action="{{ route('logout') }}" method="POST" style="display: none;">
                        @csrf
                      </form>
                     </li>
                     @else
                     <li class="nav-item">
                         <a class="nav-link" href="{{ route('login') }}">Login</a>
                    </li>
                     <li class="nav-item">
                         <a class="nav-link" href="{{ route('register') }}">Register</a>
                     </li>
                     @endauth
                 @endif
              </ul>
            </div>
          </div>
        </nav>

        <div id="app">
            @yield('content')
        </div>

        <footer class="py-5 bg-dark">
          <div class="container">
            <p class="m-0 text-center text-white">Copyright &copy; LaravelCMS 2018</p>
          </div>
        </footer>
      </body>
    </html>

Now we can inherit this template in the landing.blade.php file, open it and update it with this code:

    {{-- File: ./resources/views/landing.blade.php --}}
    @extends('layouts.master')

    @section('content')
    <div class="container">
      <div class="row align-items-center">
        <div class="col-md-8 mx-auto">
          <h1 class="my-4 text-center">Welcome to the Blog </h1>

          @foreach ($posts as $post)
          <div class="card mb-4">
            <img class="card-img-top" src=" {!! !empty($post->image) ? '/uploads/posts/' . $post->image :  'http://placehold.it/750x300' !!} " alt="Card image cap">
            <div class="card-body">
              <h2 class="card-title text-center">{{ $post->title }}</h2>
              <p class="card-text"> {{ str_limit($post->body, $limit = 280, $end = '...') }} </p>
              <a href="/posts/{{ $post->id }}" class="btn btn-primary">Read More &rarr;</a>
            </div>
            <div class="card-footer text-muted">
              Posted {{ $post->created_at->diffForHumans() }} by
              <a href="#">{{ $post->user->name }} </a>
            </div>
          </div>
          @endforeach

        </div>
      </div>
    </div>
    @endsection

Let’s do the same with the single.blade.php file, open it and update it with this code:

    {{-- File: ./resources/views/single.blade.php --}}
    @extends('layouts.master')

    @section('content')
    <div class="container">
      <div class="row">
        <div class="col-lg-10 mx-auto">
          <h3 class="mt-4">{{ $post->title }} <span class="lead"> by <a href="#"> {{ $post->user->name }} </a></span> </h3>
          <hr>
          <p>Posted {{ $post->created_at->diffForHumans() }} </p>
          <hr>
          <img class="img-fluid rounded" src=" {!! !empty($post->image) ? '/uploads/posts/' . $post->image :  'http://placehold.it/750x300' !!} " alt="">
          <hr>
          <p class="lead">{{ $post->body }}</p>
          <hr>
          <div class="card my-4">
            <h5 class="card-header">Leave a Comment:</h5>
            <div class="card-body">
              <form>
                <div class="form-group">
                  <textarea class="form-control" rows="3"></textarea>
                </div>
                <button type="submit" class="btn btn-primary">Submit</button>
              </form>
            </div>
          </div>
        </div>
      </div>
    </div>
    @endsection

Testing the application

We can test the application to see that things work as we expect. When we serve the application, we expect to see a landing page and a single post page. We also expect to see two posts because that’s the number of posts we seeded into the database.

We will serve the application using this command:

    $ php artisan serve


We can visit http://localhost:8000 to see the application:

We have used simple placeholder images here because we haven’t built the admin dashboard that allows CRUD operations to be performed on posts.

In the coming chapters, we will add the ability for an admin to include a custom image when creating a new post.

Conclusion

In this chapter, we created the Post model and defined a relationship on it to the User model. We also built the landing page and single page.

In the next part of this series, we will develop the API that will serve as the medium of communication between the admin user and the posts.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 3: Building an API

In the previous part of this series, we initialized the posts resource and started building the frontend of the CMS. We designed the front page that shows all the posts and the single post page using Laravel’s templating engine, Blade.

In this part of the series, we will start building the API for the application. We will create an API for CRUD operations that an admin will perform on posts and we will test the endpoints using Postman.

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.

Building the API using Laravel’s API resources

The Laravel framework makes it very easy to build APIs. It has an API resources feature that we can easily adopt in our project. You can think of API resources as a transformation layer between Eloquent models and the JSON responses that will be sent back by our API.
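
To make the idea of a transformation layer concrete, here is a minimal, hypothetical usage sketch (the actual PostResource class is created later in this section):

    // Hypothetical usage sketch, assuming the PostResource class we build below.
    use App\Http\Resources\PostResource;
    use App\Post;

    // Wrapping a single model: serialized as {"data": {...}} by default.
    $single = new PostResource(Post::find(1));

    // Wrapping many models: serialized as {"data": [{...}, {...}]}.
    $many = PostResource::collection(Post::all());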

Allowing mass assignment on specified fields

Since we are going to be performing CRUD operations on the posts in the application, we have to explicitly specify which fields may be mass-assigned. For security reasons, Laravel prevents mass assignment of data to model fields by default.

Open the Post.php file and include this line of code:

    // File: ./app/Post.php
    protected $fillable = ['user_id', 'title', 'body', 'image'];
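
To illustrate what the $fillable list unlocks, here is a hypothetical snippet (not part of the project code); without the $fillable property, array-based writes like these would throw a MassAssignmentException:

    // Hypothetical illustration only: $fillable permits these array-based writes.
    use App\Post;

    $post = Post::create([
        'user_id' => 1,
        'title'   => 'A mass-assigned post',
        'body'    => 'Created in one call via Post::create()',
    ]);

    // update() obeys the same fillable rules.
    $post->update(['title' => 'A new title']);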

Defining API routes

We will use the apiResource() method to generate API-only routes. Open the routes/api.php file and add the following code:

    // File: ./routes/api.php
    Route::apiResource('posts', 'PostController');
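
For reference, this single line registers five RESTful routes. Here is a comment sketch of the mapping Laravel generates (routes in routes/api.php are automatically prefixed with /api):

    // Routes registered by Route::apiResource('posts', 'PostController'):
    //
    // Verb        URI                  Controller action
    // GET         /api/posts           index()
    // POST        /api/posts           store()
    // GET         /api/posts/{post}    show()
    // PUT/PATCH   /api/posts/{post}    update()
    // DELETE      /api/posts/{post}    destroy()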


Creating the Post resource

At the beginning of this section, we talked about what Laravel’s API resources are. Here, we create a resource class for our Post model. This will enable us to retrieve Post data and return it as properly formatted JSON.

To create a resource class for our Post model, run the following command in your terminal:

    $ php artisan make:resource PostResource


A new PostResource.php file will be available in the app/Http/Resources directory of our application. Open up the PostResource.php file and replace the toArray() method with the following:

    // File: ./app/Http/Resources/PostResource.php
    public function toArray($request)
    {
        return [
            'id' => $this->id,
            'title' => $this->title,
            'body' => $this->body,
            'image' => $this->image,
            'created_at' => (string) $this->created_at,
            'updated_at' => (string) $this->updated_at,
        ];
    }

The job of this toArray() method is to convert our Post resource into an array. As seen above, we have specified the fields on our Post model that we want returned as JSON when we make a request for posts.

We are also explicitly casting the created_at and updated_at dates to strings so that they are returned as date strings; they are normally instances of Carbon.
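
With this resource in place, a single post returned through it would look roughly like the following (the values are illustrative; note that Laravel wraps the outermost resource in a data key by default):

    {
      "data": {
        "id": 1,
        "title": "My first post",
        "body": "This is the body of my first post.",
        "image": "my-first-post.jpg",
        "created_at": "2018-10-01 12:00:00",
        "updated_at": "2018-10-01 12:00:00"
      }
    }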

Now that we have created a resource class for our Post model, we can start building the API’s action methods in our PostController and return instances of the PostResource where we want.

Adding the action methods to the Post controller

The usual actions performed on a post include the following:

  1. Creating a new post
  2. Reading an existing post, or a list of posts
  3. Updating an existing post
  4. Deleting an existing post

In the last article, we already implemented a kind of ‘Read’ functionality when we defined the all and single methods. These methods allow users to browse through posts on the homepage.

In this section, we will define the methods that will resolve our API requests for creating, reading, updating and deleting posts.

The first thing we want to do is import the PostResource class at the top of the PostController.php file:

    // File: ./app/Http/Controllers/PostController.php
    use App\Http\Resources\PostResource;

Building the handler action for the create operation

In the PostController, update the store() action method with the code snippet below. It will allow us to validate and create a new post:

    // File: ./app/Http/Controllers/PostController.php
    public function store(Request $request)
    {
        $this->validate($request, [
            'title' => 'required',
            'body' => 'required',
            'user_id' => 'required',            
            'image' => 'required|mimes:jpeg,png,jpg,gif,svg',
        ]);

        $post = new Post;

        if ($request->hasFile('image')) {
            $image = $request->file('image');
            $name = str_slug($request->title).'.'.$image->getClientOriginalExtension();
            $destinationPath = public_path('/uploads/posts');
            $imagePath = $destinationPath . "/" . $name;
            $image->move($destinationPath, $name);
            $post->image = $name;
        }

        $post->user_id = $request->user_id;
        $post->title = $request->title;
        $post->body = $request->body;
        $post->save();

        return new PostResource($post);
    }

Here’s a breakdown of what this method does:

  1. It validates the incoming request, requiring the title, body and user_id fields, as well as an image file of one of the accepted types.
  2. If an image file was uploaded, it builds a file name from the slugged title, moves the file into the public/uploads/posts directory, and stores the name on the post.
  3. It assigns the remaining fields to a new Post instance, saves it, and returns the newly created post wrapped in a PostResource.

Building the handler action for the read operations

What we want here is to be able to read all the created posts or a single post. This is possible because the apiResource() method defines the API routes using standard REST rules.

This means that a GET request to this address, http://127.0.0.1:8000/api/posts, should be resolved by the index() action method. Let’s update the index method with the following code:

    // File: ./app/Http/Controllers/PostController.php
    public function index()
    {
        return PostResource::collection(Post::latest()->paginate(5));
    }

This method returns a JSON-formatted collection of all of the stored posts. We also paginate the response, as this will allow us to create a better view on the admin dashboard.
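
Because this is a paginated resource collection, the JSON response also carries links and meta blocks alongside data. A trimmed, illustrative example of the shape (our Vue frontend will later read links.prev and links.next to render pagination buttons):

    {
      "data": [
        { "id": 2, "title": "Second post", "body": "...", "image": null, "created_at": "...", "updated_at": "..." },
        { "id": 1, "title": "First post", "body": "...", "image": null, "created_at": "...", "updated_at": "..." }
      ],
      "links": {
        "first": "http://localhost:8000/api/posts?page=1",
        "last": "http://localhost:8000/api/posts?page=1",
        "prev": null,
        "next": null
      },
      "meta": { "current_page": 1, "per_page": 5, "total": 2 }
    }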

Following the RESTful conventions as we discussed above, a GET request to this address, http://127.0.0.1:8000/api/posts/{id}, should be resolved by the show() action method. Let’s update the method with the fitting snippet:

    // File: ./app/Http/Controllers/PostController.php
    public function show(Post $post)
    {
        return new PostResource($post);
    }

Awesome, now this method will return a single instance of a post resource upon API query.

Building the handler action for the update operation

Next, let’s update the update() method in the PostController class. It will allow us to modify an existing post:

    // File: ./app/Http/Controllers/PostController.php
    public function update(Request $request, Post $post)
    {
        $this->validate($request, [
            'title' => 'required',
            'body' => 'required',
        ]);

        $post->update($request->only(['title', 'body']));

        return new PostResource($post);
    }

This method receives a request and a Post instance as parameters; route model binding resolves the id in the URL into that Post instance. First, we validate the $request attributes, then we update the title and body fields of the resolved post.

Building the handler action for the delete operation

Let’s update the destroy() method in the PostController class. This method will allow us to remove an existing post:

    // File: ./app/Http/Controllers/PostController.php
    public function destroy(Post $post)
    {
        $post->delete();

        return response()->json(null, 204);
    }

In this method, we resolve the Post instance, then delete it and return a 204 response code.

Our methods are now complete: we have handlers for each of our CRUD operations. However, we haven’t built the frontend for the admin dashboard yet.

At the end of the second article, we defined the HomeController@index() action method like this:

    public function index(Request $request)
    {
        if ($request->user()->hasRole('user')) {
            return view('home');
        }

        if ($request->user()->hasRole('admin')) {
            return redirect('/admin/dashboard');
        }
    }

This allowed us to redirect regular users to the view home, and admin users to the URL /admin/dashboard. At this point in this series, a visit to /admin/dashboard will fail because we have neither defined it as a route with a handler Controller nor built a view for it.

Let’s create the AdminController with this command:

    $ php artisan make:controller AdminController


We will add a catch-all /admin/{any} route to our routes/web.php file; this sends every URL under /admin/ to the AdminController, leaving VueRouter free to handle the sub-routes on the client side:

    Route::get('/admin/{any}', 'AdminController@index')->where('any', '.*');

Let’s update the AdminController.php file to use the auth middleware and include an index() action method:

    // File: ./app/Http/Controllers/AdminController.php
    <?php 

    namespace App\Http\Controllers;

    class AdminController extends Controller
    {
        public function __construct()
        {
            $this->middleware('auth');
        }

        public function index()
        {
            if (request()->user()->hasRole('admin')) {
                return view('admin.dashboard');
            }

            if (request()->user()->hasRole('user')) {
                return redirect('/home');
            }
        }
    }

In the index() action method, we included a snippet that ensures that only admin users can visit the admin dashboard and perform CRUD operations on posts.

We will not start building the admin dashboard in this article but will test that our API works properly. We will use Postman to make requests to the application.

Testing the application

Let’s test that our API works as expected. We will, first of all, serve the application using this command:

    $ php artisan serve


We can visit http://localhost:8000 to see our application and there should be exactly two posts available; these are the posts we seeded into the database during the migration:

Now let’s create a new post over the API interface using Postman. Send a POST request as seen below:

Now let’s update this post we just created. In Postman, we will pass only the title and body fields to a PUT request.

To make it easy, you can just copy the payload below and use the raw request data type for the Body:

    {
      "title": "We made an edit to the Post on APIs",
      "body": "To a developer, 'What's an API?' might be a straightforward - if not exactly simple - question. But to anyone who doesn't have experience with code. APIs can come across as confusing or downright intimidating."
    }


Finally, let’s delete the post using Postman:

We are sure the post is deleted because the response status is 204 No Content as we specified in the PostController.

Conclusion

In this chapter, we learned about Laravel’s API resources and we created a resource class for the Post model. We also used the apiResource() method to generate API-only routes for our application. We wrote the methods to handle the API operations and tested them using Postman.

In the next part, we will build the admin dashboard and develop the logic that will enable the admin user to manage posts over the API.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 4: Building the dashboard

In the last article of this series, we built the API interface and used Laravel API resources to return neatly formatted JSON responses. We also tested with Postman that the API works as we defined it.

In this part of the series, we will start building the admin frontend of the CMS. This is the first part of the series where we will integrate Vue and explore Vue’s magical abilities.

When we are done with this part, our application will have some added functionalities as seen below:

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.

Building the frontend

Laravel ships with Vue out of the box so we do not need to use the Vue CLI or reference Vue from a CDN. This makes it possible for us to have all of our application, both the frontend and the backend, in a single codebase.

Every newly created Laravel installation includes some Vue files by default. We can see these files when we navigate into the resources/assets/js/components folder.

Setting up Vue and VueRouter

Before we can start using Vue in our application, we need to first install some dependencies using NPM. To install the dependencies that come by default with Laravel, run the command below:

    $ npm install


We will be managing all of the routes for the admin dashboard using vue-router so let’s pull it in:

    $ npm install --save vue-router


When the installation is complete, the next thing we want to do is open the resources/assets/js/app.js file and replace its contents with the code below:

    // File: ./resources/assets/js/app.js
    require('./bootstrap');

    import Vue from 'vue'
    import VueRouter from 'vue-router'
    import Homepage from './components/Homepage'
    import Read from './components/Read'

    Vue.use(VueRouter)

    const router = new VueRouter({
        mode: 'history',
        routes: [
            {
                path: '/admin/dashboard',
                name: 'read',
                component: Read,
                props: true
            },
        ],
    });

    const app = new Vue({
        el: '#app',
        router,
        components: { Homepage },
    });

In the snippet above, we imported the VueRouter and added it to the Vue application. We also imported a Homepage and a Read component. These are the components where we will write our markup so let’s create both files.

Open the resources/assets/js/components folder and create four files:

  1. Homepage.vue
  2. Create.vue
  3. Read.vue
  4. Update.vue

In the resources/assets/js/app.js file, we defined a routes array and in it, we registered a read route. During render time, this route’s path will be mapped to the Read component.

In the previous article, we specified that admin users should be shown an admin.dashboard view in the index method; however, we didn’t create this view, so let’s create it now. Open the resources/views folder and create a new folder called admin. Within the new resources/views/admin folder, create a new file called dashboard.blade.php. This is going to be the entry point to the admin dashboard; from this route onward, we will let VueRouter handle everything else.

Open the resources/views/admin/dashboard.blade.php file and paste in the following code:

    <!-- File: ./resources/views/admin/dashboard.blade.php -->
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <title> Welcome to the Admin dashboard </title>
        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
        <style>
            html, body {
            background-color: #202B33;
            color: #738491;
            font-family: "Open Sans";
            font-size: 16px;
            font-smoothing: antialiased;
            overflow: hidden;
            }
        </style>
    </head>
    <body>

      <script src="{{ asset('js/app.js') }}"></script>
    </body>
    </html>

Our goal here is to integrate Vue into the application, so we included the resources/assets/js/app.js file with this line of code:

    <script src="{{ asset('js/app.js') }}"></script>


For our app to work, we need a root element to bind our Vue instance onto. Before the <script> tag, add this snippet of code:

    <div id="app">
      <Homepage 
        :user-name='@json(auth()->user()->name)' 
        :user-id='@json(auth()->user()->id)'
      ></Homepage>
    </div>

We earlier defined the Homepage component as the wrapping component; that’s why we pulled it in here as the root component. For some of the frontend components to work correctly, we require some details of the logged-in admin user when performing CRUD operations. This is why we passed down the userName and userId props to the Homepage component.

We need to prevent the CSRF error from occurring in our Vue frontend, so include this snippet of code just before the <title> tag:

    <meta name="csrf-token" content="{{ csrf_token() }}">
    <script> window.Laravel = { csrfToken: '{{ csrf_token() }}' } </script>

This snippet ensures that the correct token is always included in our frontend; Laravel provides the CSRF protection for us out of the box.

At this point, this should be the contents of your resources/views/admin/dashboard.blade.php file:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <meta name="csrf-token" content="{{ csrf_token() }}">
        <script> window.Laravel = { csrfToken: '{{ csrf_token() }}' } </script>
        <title> Welcome to the Admin dashboard </title>
        <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
        <style>
          html, body {
            background-color: #202B33;
            color: #738491;
            font-family: "Open Sans";
            font-size: 16px;
            font-smoothing: antialiased;
            overflow: hidden;
          }
        </style>
    </head>
    <body>
    <div id="app">
      <Homepage 
        :user-name='@json(auth()->user()->name)' 
        :user-id='@json(auth()->user()->id)'>
      </Homepage>
    </div>
    <script src="{{ asset('js/app.js') }}"></script>
    </body>
    </html>

Setting up the Homepage view

Open the Homepage.vue file that we created earlier and include this markup template:

    <!-- File: ./resources/assets/js/components/Homepage.vue -->
    <template>
      <div>
        <nav>
          <section>
            <a style="color: white" href="/admin/dashboard">Laravel-CMS</a> &nbsp; ||  &nbsp;
            <a style="color: white" href="/">HOME</a>
            <hr>
            <ul>
               <li>
                 <router-link :to="{ name: 'create', params: { userId } }">
                   NEW POST
                 </router-link>
               </li>
            </ul>
          </section>
        </nav>
        <article>
          <header>
            <header class="d-inline">Welcome, {{ userName }}</header>
            <p @click="logout" class="float-right mr-3" style="cursor: pointer">Logout</p>
          </header>
          <div> 
            <router-view></router-view> 
          </div>
        </article>
      </div>
    </template>

We added a router-link in this template, which routes to the Create component.

We are passing the userId data to the create component because a userId is required during Post creation.

Let’s include some styles so that the page looks good. Below the closing template tag, paste the following code:

    <style scoped>
      @import url(https://fonts.googleapis.com/css?family=Dosis:300|Lato:300,400,600,700|Roboto+Condensed:300,700|Open+Sans+Condensed:300,600|Open+Sans:400,300,600,700|Maven+Pro:400,700);
      @import url("https://netdna.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.css");
      * {
        -moz-box-sizing: border-box;
        -webkit-box-sizing: border-box;
        box-sizing: border-box;
      }
      header {
        color: #d3d3d3;
      }
      nav {
        position: absolute;
        top: 0;
        bottom: 0;
        right: 82%;
        left: 0;
        padding: 22px;
        border-right: 2px solid #161e23;
      }
      nav > header {
        font-weight: 700;
        font-size: 0.8rem;
        text-transform: uppercase;
      }
      nav section {
        font-weight: 600;
      }
      nav section header {
        padding-top: 30px;
      }
      nav section ul {
        list-style: none;
        padding: 0px;
      }
      nav section ul a {
        color: white;
        text-decoration: none;
        font-weight: bold;
      }
      article {
        position: absolute;
        top: 0;
        bottom: 0;
        right: 0;
        left: 18%;
        overflow: auto;
        border-left: 2px solid #2a3843;
        padding: 20px;
      }
      article > header {
        height: 60px;
        border-bottom: 1px solid #2a3843;
      }
    </style>

Next, let’s add the <script> section that will use the props we passed down from the parent component. We will also define the method that controls the log out feature here. Below the closing style tag, paste the following code:

    <script>
    export default {
      props: {
        userId: {
          type: Number,
          required: true
        },
        userName: {
          type: String,
          required: true
        }
      },
      data() {
        return {};
      },
      methods: {
        logout() {
          axios.post("/logout").then(() => {
            window.location = "/";
          });
        }
      }
    };
    </script>

Setting up the Read view

In the resources/assets/js/app.js file, we defined the path of the read component as /admin/dashboard, which is the same address as the Homepage component. This will make sure the Read component always loads by default.

In the Read component, we want to load all of the available posts. We are also going to add Update and Delete options to each post. Clicking on these options will lead to the update and delete views respectively.

Open the Read.vue file and paste the following:

    <!-- File: ./resources/assets/js/components/Read.vue -->
    <template>
        <div id="posts">
            <p class="border p-3" v-for="post in posts">
                {{ post.title }}
                <router-link :to="{ name: 'update', params: { postId : post.id } }">
                    <button type="button" class="p-1 mx-3 float-right btn btn-light">
                        Update
                    </button>
                </router-link>
                <button 
                    type="button" 
                    @click="deletePost(post.id)" 
                    class="p-1 mx-3 float-right btn btn-danger"
                >
                    Delete
                </button>
            </p>
            <div>
                <button 
                    v-if="next" 
                    type="button" 
                    @click="navigate(next)" 
                    class="m-3 btn btn-primary"
                >
                  Next
                </button>
                <button 
                    v-if="prev" 
                    type="button" 
                    @click="navigate(prev)" 
                    class="m-3 btn btn-primary"
                >
                  Previous
                </button>
            </div>
        </div>
    </template>

Above, we have the template to handle the posts that are loaded from the API. Next, paste the following below the closing template tag:

    <script>
    export default {
      mounted() {
        this.getPosts();
      },
      data() {
        return {
          posts: [],
          next: null,
          prev: null
        };
      },
      methods: {
        getPosts(address) {
          axios.get(address ? address : "/api/posts").then(response => {
            this.posts = response.data.data;
            this.prev = response.data.links.prev;
            this.next = response.data.links.next;
          });
        },
        deletePost(id) {
          axios.delete("/api/posts/" + id).then(response => this.getPosts())
        },
        navigate(address) {
          this.getPosts(address)
        }
      }
    };
    </script>

In the script above, we defined a getPosts() method that requests a list of posts from the backend server. We also defined a posts object as a data property. This object will be populated whenever posts are received from the backend server.

We defined next and prev data string properties to store the pagination links, and we only display the pagination options where they are available.

Lastly, we defined a deletePost() method that takes the id of a post as a parameter and sends a DELETE request to the API interface using Axios.

Testing the application

Now that we have completed the first few components, we can serve the application using this command:

    $ php artisan serve


We will also build the assets so that our JavaScript is compiled for us. To do this, we will run the command below in the root of the project folder:

    $ npm run dev


We can visit the application’s URL http://localhost:8000 and log in as an admin user, and delete a post:

Conclusion

In this part of the series, we started building the admin dashboard using Vue. We installed VueRouter to make the admin dashboard a SPA. We added the homepage view of the admin dashboard and included read and delete functionalities.

We are not done with the dashboard just yet. In the next part, we will add the views that let us create and update posts.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 5: Completing our dashboards

In the previous part of this series, we built the first parts of the admin dashboard using Vue. We also made it into an SPA with VueRouter; this means that navigating between pages does not cause the browser to reload.

We only built the wrapper component and the Read component that retrieves the posts to be loaded so an admin can manage them.

Here’s a recording of what we ended up with in the last article:

In this article, we will build the view that will allow users to create and update posts. We will start writing code in the Update.vue and Create.vue files that we created in the previous article.

When we are done with this part, we will have additional functionalities for creating and updating posts:

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.

Including the new routes in VueRouter

In the previous article, we only defined the route for the Read component. We now need to include the route configuration for the new components that we are about to build: Update and Create.

Open the resources/assets/js/app.js file and replace the contents with the code below:

    require('./bootstrap');

    import Vue from 'vue'
    import VueRouter from 'vue-router'
    import Homepage from './components/Homepage'
    import Create from './components/Create'
    import Read from './components/Read'
    import Update from './components/Update'

    Vue.use(VueRouter)

    const router = new VueRouter({
        mode: 'history',
        routes: [
            {
                path: '/admin/dashboard',
                name: 'read',
                component: Read,
                props: true
            },
            {
                path: '/admin/create',
                name: 'create',
                component: Create,
                props: true
            },
            {
                path: '/admin/update',
                name: 'update',
                component: Update,
                props: true
            },
        ],
    });

    const app = new Vue({
        el: '#app',
        router,
        components: { Homepage },
    });

Above, we have added two new components to the JavaScript file: the Create and Update components. We also added them to the router so that they can be loaded using the specified URLs.

Building the create view

Open the Create.vue file and update it with this markup template:

    <!-- File: ./resources/assets/js/components/Create.vue -->
    <template>
      <div class="container">
        <form>
          <div :class="['form-group m-1 p-3', (successful ? 'alert-success' : '')]">
            <span v-if="successful" class="label label-sucess">Published!</span>
          </div>
          <div :class="['form-group m-1 p-3', error ? 'alert-danger' : '']">
            <span v-if="errors.title" class="label label-danger">
              {{ errors.title[0] }}
            </span>
            <span v-if="errors.body" class="label label-danger"> 
              {{ errors.body[0] }} 
            </span>
            <span v-if="errors.image" class="label label-danger"> 
              {{ errors.image[0] }} 
            </span>
          </div>

          <div class="form-group">
            <input type="title" ref="title" class="form-control" id="title" placeholder="Enter title" required>
          </div>

          <div class="form-group">
            <textarea class="form-control" ref="body" id="body" placeholder="Enter a body" rows="8" required></textarea>
          </div>

          <div class="custom-file mb-3">
            <input type="file" ref="image" name="image" class="custom-file-input" id="image" required>
            <label class="custom-file-label" >Choose file...</label>
          </div>

          <button type="submit" @click.prevent="create" class="btn btn-primary block">
            Submit
          </button>
        </form>
      </div>
    </template>

Above we have the template for the Create component. If there is an error during post creation, there will be a field indicating the specific error. When a post is successfully published, there will also be a message saying it was successful.

Let’s include the script logic that sends posts to our backend server and reads back the response.

After the closing template tag add this:

    <script>
    export default {
      props: {
        userId: {
          type: Number,
          required: true
        }
      },
      data() {
        return {
          error: false,
          successful: false,
          errors: []
        };
      },
      methods: {
        create() {
          const formData = new FormData();
          formData.append("title", this.$refs.title.value);
          formData.append("body", this.$refs.body.value);
          formData.append("user_id", this.userId);
          formData.append("image", this.$refs.image.files[0]);

          axios
            .post("/api/posts", formData)
            .then(response => {
              this.successful = true;
              this.error = false;
              this.errors = [];
            })
            .catch(error => {
              if (!_.isEmpty(error.response)) {
                if (error.response.status === 422) {
                  this.errors = error.response.data.errors;
                  this.successful = false;
                  this.error = true;
                }
              }
            });

          this.$refs.title.value = "";
          this.$refs.body.value = "";
        }
      }
    };
    </script>

In the script above, we defined a create() method that takes the values of the input fields and uses the Axios library to send them to the API interface on the backend server. Within this method, we also update the status of the operation, so that an admin user can know when a post is created successfully or not.

Building the update view

Let’s start building the Update component. Open the Update.vue file and update it with this markup template:

    <!-- File: ./resources/assets/js/components/Update.vue -->
    <template>
      <div class="container">
        <form>
          <div :class="['form-group m-1 p-3', successful ? 'alert-success' : '']">
            <span v-if="successful" class="label label-sucess">Updated!</span>
          </div>

          <div :class="['form-group m-1 p-3', error ? 'alert-danger' : '']">
            <span v-if="errors.title" class="label label-danger">
              {{ errors.title[0] }}
            </span>
            <span v-if="errors.body" class="label label-danger">
              {{ errors.body[0] }}
            </span>
          </div>

          <div class="form-group">
            <input type="title" ref="title" class="form-control" id="title" placeholder="Enter title" required>
          </div>

          <div class="form-group">
            <textarea class="form-control" ref="body" id="body" placeholder="Enter a body" rows="8" required></textarea>
          </div>

          <button type="submit" @click.prevent="update" class="btn btn-primary block">
            Submit
          </button>
        </form>
      </div>
    </template>

This template is similar to the one in the Create component. Let’s add the script for the component.

Below the closing template tag, paste the following:

    <script>
    export default {
      mounted() {
        this.getPost();
      },
      props: {
        postId: {
          type: Number,
          required: true
        }
      },
      data() {
        return {
          error: false,
          successful: false,
          errors: []
        };
      },
      methods: {
        update() {
          let title = this.$refs.title.value;
          let body = this.$refs.body.value;

          axios
            .put("/api/posts/" + this.postId, { title, body })
            .then(response => {
              this.successful = true;
              this.error = false;
              this.errors = [];
            })
            .catch(error => {
              if (!_.isEmpty(error.response)) {
                if (error.response.status === 422) {
                  this.errors = error.response.data.errors;
                  this.successful = false;
                  this.error = true;
                }
              }
            });
        },
        getPost() {
          axios.get("/api/posts/" + this.postId).then(response => {
            this.$refs.title.value = response.data.data.title;
            this.$refs.body.value = response.data.data.body;
          });
        }
      }
    };
    </script>


In the script above, we make a call to the getPost() method as soon as the component is mounted. The getPost() method fetches the data of a single post from the backend server, using the postId.

When Axios sends back the data for the post, we update the input fields in this component so they can be updated.

Finally, the update() method takes the values of the fields in the component and attempts to send them to the backend server for an update. In a situation where the update fails, we get instant feedback.

Testing the application

To test that our changes work, we want to refresh the database and restore it back to a fresh state. To do this, run the following command in your terminal:

    $ php artisan migrate:fresh --seed


Next, let’s compile our JavaScript files and assets. This will make sure all the changes we made in the Vue component and the app.js file gets built. To recompile, run the command below in your terminal:

    $ npm run dev


Lastly, we need to serve the application. To do this, run the following command in your terminal window:

    $ php artisan serve


We will visit the application at http://localhost:8000 and log in as an admin user. From the dashboard, you can test the create and update features:

Conclusion

In this part of the series, we updated the dashboard to include the Create and Update components so the administrator can add and update posts.

In the next article, we will add support for comments on posts and make them update in realtime.

The source code for this project is available here on Github.

Build a CMS with Laravel and Vue - Part 6: Adding Realtime Comments

In the previous part of this series, we finished building the admin dashboard of the application using Vue. We were able to add the Create and Update components, which are used for creating a new post and updating an existing post.

Here’s a screen recording of what we have been able to achieve:

In this final part of the series, we will be adding support for comments. We will also ensure that the comments on each post are updated in realtime, so a user doesn’t have to refresh the page to see new comments.

When we are done, our application will have new features and will work like this:

The source code for this project is available here on GitHub.

Prerequisites

To follow along with this series, a few things are required:

  • Basic knowledge of PHP.
  • Basic knowledge of the Laravel framework.
  • Basic knowledge of JavaScript (ES6 syntax).
  • Basic knowledge of Vue.
  • Postman installed on your machine.

Adding comments to the backend

When we were creating the API, we did not add the support for comments to the post resource, so we will have to do so now. Open the API project in your text editor as we will be modifying the project a little.

The first thing we want to do is create a model, controller, and a migration for the comment resource. To do this, open your terminal and cd to the project directory and run the following command:

    $ php artisan make:model Comment -mc


The command above will create a model called Comment, a controller called CommentController, and a migration file in the database/migrations directory.

Updating the comments migration file

To update the comments migration, navigate to the database/migrations folder and find the newly created migration file for the Comment model. Let’s update the up() method in that file:

    // File: ./database/migrations/*_create_comments_table.php
    public function up()
    {
        Schema::create('comments', function (Blueprint $table) {
            $table->increments('id');
            $table->timestamps();
            $table->integer('user_id')->unsigned();
            $table->integer('post_id')->unsigned();
            $table->text('body');
        });
    }

We included user_id and post_id fields because we intend to create a link between the comments, users, and posts. The body field will contain the actual comment.
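
If you wanted to enforce these links at the database level as well, you could add foreign key constraints. This is an optional, hypothetical addition inside the same migration, not required for the rest of this series:

    // Optional, hypothetical addition: enforce referential integrity between
    // comments, users, and posts with foreign key constraints.
    Schema::table('comments', function (Blueprint $table) {
        $table->foreign('user_id')->references('id')->on('users')->onDelete('cascade');
        $table->foreign('post_id')->references('id')->on('posts')->onDelete('cascade');
    });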

Defining the relationships among the Comment, User, and Post models

In this application, a comment will belong to a user and a post because a user can make a comment on a specific post, so we need to define the relationship that ties everything up.

Open the User model and include this method:

    // File: ./app/User.php
    public function comments()
    {
        return $this->hasMany(Comment::class);
    }

This is a relationship that simply says that a user can have many comments. Now let’s define the same relationship on the Post model. Open the Post.php file and include this method:

    // File: ./app/Post.php
    public function comments()
    {
        return $this->hasMany(Comment::class);
    }

Finally, we will include two methods in the Comment model to complete the second half of the relationships we defined in the User and Post models.

Open the app/Comment.php file and include these methods:

    // File: ./app/Comment.php
    public function user()
    {
        return $this->belongsTo(User::class);
    }

    public function post()
    {
        return $this->belongsTo(Post::class);
    }

Since we want to be able to mass assign data to specific fields of a comment instance during comment creation, we will include this array of permitted assignments in the app/Comment.php file:

    protected $fillable = ['user_id', 'post_id', 'body'];

We can now run our database migration for our comments:

    $ php artisan migrate


Configuring Laravel to broadcast events using Pusher

We already said that the comments will have realtime functionality and that we will build this using Pusher, so we need to enable Laravel’s event broadcasting feature.

Open the config/app.php file and uncomment the following line in the providers array:

    App\Providers\BroadcastServiceProvider::class,


Next, we need to configure the broadcast driver in the .env file:

    BROADCAST_DRIVER=pusher


Let’s pull in the Pusher PHP SDK using composer:

    $ composer require pusher/pusher-php-server


Configuring Pusher

For us to use Pusher in this application, it is a prerequisite that you have a Pusher account. You can create a free Pusher account here, then log in to your dashboard and create an app.

Once you have created an app, we will use the app details to configure Pusher in the .env file:

    PUSHER_APP_ID=xxxxxx
    PUSHER_APP_KEY=xxxxxxxxxxxxxxxxxxxx
    PUSHER_APP_SECRET=xxxxxxxxxxxxxxxxxxxx
    PUSHER_APP_CLUSTER=xx


Update the Pusher keys with the app credentials provided for you under the Keys section on the Overview tab on the Pusher dashboard.

Broadcasting an event for when a new comment is sent

To make the comment update realtime, we have to broadcast an event based on the comment creation activity. We will create a new event and call it CommentSent; it will be fired whenever a new comment is successfully created.

Run this command in your terminal:

    php artisan make:event CommentSent


There will be a newly created file in the app/Events directory. Open the CommentSent.php file and ensure that it implements the ShouldBroadcast interface.

Open and replace the file with the following code:

    // File: ./app/Events/CommentSent.php
    <?php 

    namespace App\Events;

    use App\Comment;
    use App\User;
    use Illuminate\Broadcasting\Channel;
    use Illuminate\Queue\SerializesModels;
    use Illuminate\Broadcasting\PrivateChannel;
    use Illuminate\Broadcasting\PresenceChannel;
    use Illuminate\Foundation\Events\Dispatchable;
    use Illuminate\Broadcasting\InteractsWithSockets;
    use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

    class CommentSent implements ShouldBroadcast
    {
        use Dispatchable, InteractsWithSockets, SerializesModels;

        public $user;

        public $comment;

        public function __construct(User $user, Comment $comment)
        {
            $this->user = $user;

            $this->comment = $comment;
        }

        public function broadcastOn()
        {
            return new PrivateChannel('comment');
        }
    }

In the code above, we created two public properties, user and comment, to hold the data that will be passed to the channel we are broadcasting on. We also created a private channel called comment. We are using a private channel so that only authenticated clients can subscribe to the channel.

Defining the routes for handling operations on a comment

We created a controller for the comment model earlier but we haven’t defined the web routes that will redirect requests to be handled by that controller.

Open the routes/web.php file and include the code below:

    // File: ./routes/web.php
    Route::get('/{post}/comments', 'CommentController@index');
    Route::post('/{post}/comments', 'CommentController@store');

Setting up the action methods in the CommentController

We need to include two methods in the CommentController.php file; these methods will be responsible for storing and retrieving comments. In the store() method, we will also broadcast an event when a new comment is created.

Open the CommentController.php file and replace its contents with the code below:

    // File: ./app/Http/Controllers/CommentController.php
    <?php 

    namespace App\Http\Controllers;

    use App\Comment;
    use App\Events\CommentSent;
    use App\Post;
    use Illuminate\Http\Request;

    class CommentController extends Controller
    {
        public function store(Post $post)
        {
            $this->validate(request(), [
                'body' => 'required',
            ]);

            $user = auth()->user();

            $comment = Comment::create([
                'user_id' => $user->id,
                'post_id' => $post->id,
                'body' => request('body'),
            ]);

            broadcast(new CommentSent($user, $comment))->toOthers();

            return ['status' => 'Message Sent!'];
        }

        public function index(Post $post)
        {
            return $post->comments()->with('user')->get();
        }
    }

In the store method above, we validate and then create a new comment on the post. After the comment has been created, we broadcast the CommentSent event to other clients so they can update their comments list in realtime.

In the index method, we simply return the comments belonging to a post, along with the user that made each comment.
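
The shape of that index response is what our Vue component will consume shortly. A trimmed, illustrative example with made-up values (the eager-loaded user relationship is nested under each comment):

    [
      {
        "id": 1,
        "user_id": 2,
        "post_id": 1,
        "body": "Great post!",
        "created_at": "2018-10-01 12:00:00",
        "updated_at": "2018-10-01 12:00:00",
        "user": { "id": 2, "name": "Jane Doe" }
      }
    ]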

Adding a layer of authentication

Let’s add a layer of authentication that ensures that only authenticated users can listen on the private comment channel we created. When Laravel Echo subscribes to a private channel, it sends an authorization request to Laravel’s /broadcasting/auth endpoint, which in turn consults the callbacks defined in the routes/channels.php file.

Add the following code to the routes/channels.php file:

    // File: ./routes/channels.php
    Broadcast::channel('comment', function ($user) {
        return auth()->check();
    });

Adding comments to the frontend

In the second article of this series, we created the view for the single post landing page in the single.blade.php file, but we didn’t add the comments functionality. We are going to add it now. We will be using Vue to build the comments for this application so the first thing we will do is include Vue in the frontend of our application.

Open the master layout template and include Vue in its <head> tag. Just before the <title> tag in the master.blade.php file, include this snippet:

    <!-- File: ./resources/views/layouts/master.blade.php -->
    <meta name="csrf-token" content="{{ csrf_token() }}">
    <script src="{{ asset('js/app.js') }}" defer></script>

The csrf_token() is there so that users cannot forge requests in our application. Laravel’s default bootstrap.js reads the randomly generated token from this meta tag and attaches it to every Axios request as the X-CSRF-TOKEN header.


Now the next thing we want to do is update the resources/assets/js/app.js file so that it includes a template for the comments view.

Open the file and replace its contents with the code below:

    require('./bootstrap');

    import Vue          from 'vue'
    import VueRouter    from 'vue-router'
    import Homepage from './components/Homepage'
    import Create   from './components/Create'
    import Read     from './components/Read'
    import Update   from './components/Update'
    import Comments from './components/Comments'

    Vue.use(VueRouter)

    const router = new VueRouter({
        mode: 'history',
        routes: [
            {
                path: '/admin/dashboard',
                name: 'read',
                component: Read,
                props: true
            },
            {
                path: '/admin/create',
                name: 'create',
                component: Create,
                props: true
            },
            {
                path: '/admin/update',
                name: 'update',
                component: Update,
                props: true
            },
        ],
    });

    const app = new Vue({
        el: '#app',
        components: { Homepage, Comments },
        router,
    });

Above, we imported the Comments component and then added it to the list of components in the application’s Vue instance.

Now create a Comments.vue file in the resources/assets/js/components directory. This is where all the code for our comment view will go. We will populate this file later on.

Installing Pusher and Laravel Echo

For us to be able to use Pusher and subscribe to events on the frontend, we need to pull in both Pusher and Laravel Echo. We will do so by running this command:

    $ npm install --save laravel-echo pusher-js


Now let’s configure Laravel Echo to work in our application. In the resources/assets/js/bootstrap.js file, find and uncomment this snippet of code:

    import Echo from 'laravel-echo'

    window.Pusher = require('pusher-js');

    window.Echo = new Echo({
         broadcaster: 'pusher',
         key: process.env.MIX_PUSHER_APP_KEY,
         cluster: process.env.MIX_PUSHER_APP_CLUSTER,
         encrypted: true
    });

Now let’s import the Comments component into the single.blade.php file and pass along the required props.

Open the single.blade.php file and replace its contents with the code below:

    {{-- File: ./resources/views/single.blade.php --}}
    @extends('layouts.master')

    @section('content')
    <div class="container">
      <div class="row">
        <div class="col-lg-10 mx-auto">
          <br>
          <h3 class="mt-4">
            {{ $post->title }} 
            <span class="lead">by <a href="#">{{ $post->user->name }}</a></span>
          </h3>
          <hr>
          <p>Posted {{ $post->created_at->diffForHumans() }}</p>
          <hr>
          <img class="img-fluid rounded" src="{!! !empty($post->image) ? '/uploads/posts/' . $post->image : 'http://placehold.it/750x300' !!}" alt="">
          <hr>
          <div>
            <p>{{ $post->body }}</p>
            <hr>
            <br>
          </div>

          @auth
          <Comments
              :post-id='@json($post->id)' 
              :user-name='@json(auth()->user()->name)'>
          </Comments>
          @endauth
        </div>
      </div>
    </div>
    @endsection

Building the comments view

Open the Comments.vue file and add the following markup template below:

    <template>
      <div class="card my-4">
        <h5 class="card-header">Leave a Comment:</h5>
        <div class="card-body">
          <form>
            <div class="form-group">
              <textarea ref="body" class="form-control" rows="3"></textarea>
            </div>
            <button type="submit" @click.prevent="addComment" class="btn btn-primary">
              Submit
            </button>
          </form>
        </div>
        <p class="border p-3" v-for="comment in comments">
           <strong>{{ comment.user.name }}</strong>: 
           <span>{{ comment.body }}</span>
        </p>
      </div>
    </template>

Now, we’ll add a script that defines two methods:

  1. fetchComments(), which retrieves the existing comments for this post from the backend, and
  2. addComment(), which submits a new comment to the backend.

In the same file, add the following below the closing template tag:

    <script>
    export default {
      props: {
        userName: {
          type: String,
          required: true
        },
        postId: {
          type: Number,
          required: true
        }
      },
      data() {
        return {
          comments: []
        };
      },

      created() {
        this.fetchComments();

        Echo.private("comment").listen("CommentSent", e => {
            this.comments.push({
              user: {name: e.user.name},
              body: e.comment.body,
            });
        });
      },

      methods: {
        fetchComments() {
          axios.get("/" + this.postId + "/comments").then(response => {
            this.comments = response.data;
          });
        },

        addComment() {
          let body = this.$refs.body.value;
          axios.post("/" + this.postId + "/comments", { body }).then(response => {
            this.comments.push({
              user: {name: this.userName},
              body: this.$refs.body.value
            });
            this.$refs.body.value = "";
          });
        }
      }
    };
    </script>

In the created() method above, we first made a call to the fetchComments() method, then we created a listener to the private comment channel using Laravel Echo. Once this listener is triggered, the comments property is updated.

Testing the application

Now let’s test the application to see if it is working as intended. Before running the application, we need to refresh our database so as to revert any changes. To do this, run the command below in your terminal:

    $ php artisan migrate:fresh --seed


Next, let’s build the application so that all the changes will be compiled and included as a part of the JavaScript file. To do this, run the following command on your terminal:

    $ npm run dev


Finally, let’s serve the application using this command:

    $ php artisan serve


To test that our application works, visit the application URL http://localhost:8000 in two separate browser windows, and log in to the application as a different user in each window.

We will finally make a comment on the same post on each of the browser windows and check that it updates in realtime on the other window:

Conclusion

In this final tutorial of this series, we created the comments feature of the CMS and also made it realtime. We were able to accomplish the realtime functionality using Pusher.

In this entire series, we learned how to build a CMS using Laravel and Vue.

The source code for this article series is available here on Github.

10 Tips for Building and Maintaining Large Vue.js Projects

Here are the top best practices I've developed while working on Vue projects with a large code base. These tips will help you develop more efficient code that is easier to maintain and share.

When freelancing this year, I had the opportunity to work on some large Vue applications. I am talking about projects with more than 😰 a dozen Vuex stores, a high number of components (sometimes hundreds) and many views (pages). 😄 It was actually quite a rewarding experience for me as I discovered many interesting patterns to make the code scalable. I also had to fix some bad practices that resulted in the famous spaghetti code dilemma. 🍝

Thus, today I’m sharing 10 best practices with you that I would recommend to follow if you are dealing with a large code base. 🧚🏼‍♀️

1. Use Slots to Make Your Components Easier to Understand and More Powerful

I recently wrote an article about some important things you need to know regarding slots in Vue.js. It highlights how slots can make your components more reusable and easier to maintain and why you should use them.

🧐 But what does this have to do with large Vue.js projects? A picture is usually worth a thousand words, so I will paint you a picture about the first time I deeply regretted not using them.

One day, I simply had to create a popup. Nothing really complex at first sight, as it just included a title, a description and some buttons. So I passed everything as props. I ended up with three props for customizing the component, and an event was emitted when people clicked on the buttons. Easy peasy! 😅

But as the project grew over time, the team requested that we display a lot of other new things in it: form fields, different buttons depending on which page it was displayed on, cards, a footer, and the list goes on. I figured that if I kept using props to make this component evolve, it would be ok. But god, 😩 how wrong I was! The component quickly became too complex to understand, as it included countless child components, used way too many props and emitted a large number of events. 🌋 I found myself in that terrible situation where a change in one place somehow breaks something on another page. I had built a Frankenstein monster instead of a maintainable component! 🤖

However, things could have been better if I had relied on slots from the start. I ended up refactoring everything to come up with this tiny component. Easier to maintain, faster to understand and way more extendable!

<template>
  <div class="c-base-popup">
    <div v-if="$slots.header" class="c-base-popup__header">
      <slot name="header" />
    </div>
    <div v-if="$slots.subheader" class="c-base-popup__subheader">
      <slot name="subheader" />
    </div>
    <div class="c-base-popup__body">
      <h1>{{ title }}</h1>
      <p v-if="description">{{ description }}</p>
    </div>
    <div v-if="$slots.actions" class="c-base-popup__actions">
      <slot name="actions" />
    </div>
    <div v-if="$slots.footer" class="c-base-popup__footer">
      <slot name="footer" />
    </div>
  </div>
</template>

<script>
export default {
  props: {
    description: {
      type: String,
      default: null
    },
    title: {
      type: String,
      required: true
    }
  }
}
</script>

My point is that, from experience, projects built by developers who know when to use slots make a big difference in future maintainability. Way fewer events are emitted, the code is easier to understand, and it offers way more flexibility, as you can display whatever components you wish inside.

⚠️ As a rule of thumb, keep in mind that when you end up duplicating your child components' props inside their parent component, you should start using slots at that point.
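To illustrate how a parent would use this popup, here is a quick sketch. The BasePopup registration name and the slot contents are assumptions for illustration:

<template>
  <BasePopup title="Delete article" description="This action cannot be undone.">
    <!-- Each named slot maps to one of the v-if blocks in the component above -->
    <template #actions>
      <button @click="$emit('cancel')">Cancel</button>
      <button @click="$emit('confirm')">Delete</button>
    </template>
    <template #footer>
      <small>You can restore deleted articles within 30 days.</small>
    </template>
  </BasePopup>
</template>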

2. Organize Your Vuex Store Properly

Usually, new Vue.js developers start to learn about Vuex because they stumbled upon one of these two issues:

  • Either they need to access the data of a given component from another one that’s actually too far apart in the tree structure, or
  • They need the data to persist after the component is destroyed.

That's when they create their first Vuex store, learn about modules and start organizing them in their application. 💡

The thing is that there is no single pattern to follow when creating modules. However, 👆🏼 I highly recommend you think about how you want to organize them. From what I've seen, most developers prefer to organize them per feature. For instance:

  • Auth.
  • Blog.
  • Inbox.
  • Settings.

😜 On my side, I find it easier to understand when they are organized according to the data models they fetch from the API. For example:

  • Users
  • Teams
  • Messages
  • Widgets
  • Articles

Which one you choose is up to you. The only thing to keep in mind is that a well-organized Vuex store will result in a more productive team in the long run. It will also make newcomers better predisposed to wrap their minds around your code base when they join your team.
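As a rough sketch of the second approach, each module maps to one data model. The module names and file layout below are hypothetical:

// store/index.js -- hypothetical layout, one Vuex module per data model
import Vue from "vue";
import Vuex from "vuex";

import users from "./modules/users";
import messages from "./modules/messages";

Vue.use(Vuex);

export default new Vuex.Store({
  modules: {
    users,    // everything fetched from the users API lives here
    messages  // everything fetched from the messages API lives here
  }
});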

3. Use Actions to Make API Calls and Commit the Data

Most of my API calls (if not all) are made inside my Vuex actions. You may wonder: why is that a good place to do so? 🤨

🤷🏼‍♀️ Simply because most of them fetch the data I need to commit in my store. Besides, they provide a level of encapsulation and reusability I really enjoy working with. Here are some other reasons I do so:

  • If I need to fetch the first page of articles in two different places (let's say the blog and the homepage), I can just call the appropriate dispatcher with the right parameters. The data will be fetched, committed and returned with no duplicated code other than the dispatcher call (see the sketch after this list).

  • If I need to create some logic to avoid fetching this first page when it has already been fetched, I can do so in one place. In addition to decreasing the load on my server, I am also confident that it will work everywhere.

  • I can track most of my Mixpanel events inside these actions, making the analytics code base really easy to maintain. I do have some applications where all the Mixpanel calls are solely made in the actions. 😂 I can't tell you how much of a joy it is to work this way when I don't have to untangle what is tracked from what is not, or when events are being sent.
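Here is a minimal sketch of this pattern. The module, endpoint and mutation names are assumptions for illustration; the action fetches, commits and guards against refetching in one place:

// store/modules/articles.js -- hypothetical module
import axios from "axios";

export default {
  namespaced: true,

  state: () => ({
    firstPage: null
  }),

  mutations: {
    SET_FIRST_PAGE(state, articles) {
      state.firstPage = articles;
    }
  },

  actions: {
    async fetchFirstPage({ state, commit }) {
      // Skip the request if the first page was already fetched
      if (state.firstPage) {
        return state.firstPage;
      }
      const response = await axios.get("/articles?page=1");
      commit("SET_FIRST_PAGE", response.data);
      return response.data;
    }
  }
};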

4. Simplify Your Code Base with mapState, mapGetters, mapMutations and mapActions

There usually is no need to create multiple computed properties or methods when you just need to access your state/getters or call your actions/mutations inside your components. Using mapState, mapGetters, mapMutations and mapActions can help you shorten your code and make things easier to understand by grouping what is coming from your store modules in one place.

// NPM
import { mapState, mapGetters, mapActions, mapMutations } from "vuex";

export default {
  computed: {
    // Accessing root properties
    ...mapState("my_module", ["property"]),
    // Accessing getters (named distinctly so the mapped keys don't collide)
    ...mapGetters("my_module", ["myGetter"]),
    // Accessing non-root properties
    ...mapState("my_module", {
      nestedProperty: state => state.object.nested.property
    })
  },

  methods: {
    // Accessing actions
    ...mapActions("my_module", ["myAction"]),
    // Accessing mutations
    ...mapMutations("my_module", ["myMutation"])
  }
};

All the information you'll need on these handy helpers is available here in the official Vuex documentation. 🤩

5. Use API Factories

I usually like to create a this.$api helper that I can call anywhere to fetch my API endpoints. At the root of my project, I have an api folder that includes all my classes (see one of them below).

api
├── auth.js
├── notifications.js
└── teams.js

Each one groups all the endpoints for its category. Here is how I initialize this pattern with a plugin in my Nuxt applications (the process is quite similar in a standard Vue app).

// Nuxt plugin (e.g. plugins/api.js) that injects the repositories as $api
// PROJECT: API
import Auth from "@/api/auth";
import Teams from "@/api/teams";
import Notifications from "@/api/notifications";

export default (context, inject) => {
  if (process.client) {
    const token = localStorage.getItem("token");
    // Set token when defined
    if (token) {
      context.$axios.setToken(token, "Bearer");
    }
  }
  // Initialize API repositories
  const repositories = {
    auth: Auth(context.$axios),
    teams: Teams(context.$axios),
    notifications: Notifications(context.$axios)
  };
  inject("api", repositories);
};

// api/auth.js -- the auth factory, receiving the configured axios instance
export default $axios => ({
  forgotPassword(email) {
    return $axios.$post("/auth/password/forgot", { email });
  },

  login(email, password) {
    return $axios.$post("/auth/login", { email, password });
  },

  logout() {
    return $axios.$get("/auth/logout");
  },

  register(payload) {
    return $axios.$post("/auth/register", payload);
  }
});

Now, I can simply call them in my components or Vuex actions like this:

export default {
  methods: {
    async onSubmit() {
      try {
        // Await the call so a failed request is actually caught below
        await this.$api.auth.login(this.email, this.password);
      } catch (error) {
        console.error(error);
      }
    }
  }
};

6. Use $config to access your environment variables (especially useful in templates)

Your project probably has some global configuration variables defined in a few files:

config
├── development.json
└── production.json

I like to quickly access them through a this.$config helper, especially when I am inside a template. As always, it's quite easy to extend the Vue prototype:

// NPM
import Vue from "vue";

// PROJECT: COMMONS
import development from "@/config/development.json";
import production from "@/config/production.json";

if (process.env.NODE_ENV === "production") {
  Vue.prototype.$config = Object.freeze(production);
} else {
  Vue.prototype.$config = Object.freeze(development);
}
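Once this is set, the values are available in any template. The apiUrl key below is an assumption about what your config files might contain:

<template>
  <!-- $config resolves to development.json or production.json -->
  <a :href="$config.apiUrl">API</a>
</template>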

7. Follow a Single Convention to Name Your Commits

As the project grows, you will need to browse the history for your components on a regular basis. If your team does not follow the same convention to name their commits, it will make it harder to understand what each one does.

I always use and recommend the Angular commit message guidelines. I follow them in every project I work on, and in many cases other team members quickly figure out that it's better to follow them too.

Following these guidelines leads to more readable messages that make commits easier to track when looking through the project history. In a nutshell, here is how it works:

git commit -am "<type>(<scope>): <subject>"

# Here are some samples
git commit -am "docs(changelog): update changelog to beta.5"
git commit -am "fix(release): need to depend on latest rxjs and zone.js"

Have a look at their README file to learn more about it and its conventions.

8. Always Freeze Your Package Versions When Your Project is in Production

I know... All packages should follow the semantic versioning rules. But the reality is, some of them don't. 😅

To avoid having to wake up in the middle of the night because one of your dependencies broke your entire project, locking all your package versions should make your mornings at work less stressful. 😇

Concretely, this means avoiding versions prefixed with ^:

{
  "name": "my-project",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "axios": "0.19.0",
    "imagemin-mozjpeg": "8.0.0",
    "imagemin-pngquant": "8.0.0",
    "imagemin-svgo": "7.0.0",
    "nuxt": "2.8.1"
  },
  "devDependencies": {
    "autoprefixer": "9.6.1",
    "babel-eslint": "10.0.2",
    "eslint": "6.1.0",
    "eslint-friendly-formatter": "4.0.1",
    "eslint-loader": "2.2.1",
    "eslint-plugin-vue": "5.2.3"
  }
}
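If you want npm to pin exact versions by default instead of prefixing them with ^, you can pass --save-exact per install, or set it once in your project's .npmrc:

# Pin a single package at install time
npm install --save-exact axios@0.19.0

# Or make exact versions the default for the whole project
echo "save-exact=true" >> .npmrc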

9. Use Vue Virtual Scroller When Displaying a Large Amount of Data

When you need to display a lot of rows in a given page or when you need to loop over a large amount of data, you might have noticed that the page can quickly become quite slow to render. To fix this, you can use vue-virtual-scroller.

npm install vue-virtual-scroller

It renders only the visible items in your list and reuses components and DOM elements to be as efficient and performant as possible. It really is easy to use and works like a charm! ✨

<template>
  <RecycleScroller
    class="scroller"
    :items="list"
    :item-size="32"
    key-field="id"
    v-slot="{ item }"
  >
    <div class="user">
      {{ item.name }}
    </div>
  </RecycleScroller>
</template>
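For completeness, the component has to be registered before the template above will work. A minimal global registration (for a Vue 2 app, consistent with the other examples here) looks like this:

// main.js -- register RecycleScroller globally, along with its styles
import Vue from "vue";
import { RecycleScroller } from "vue-virtual-scroller";
import "vue-virtual-scroller/dist/vue-virtual-scroller.css";

Vue.component("RecycleScroller", RecycleScroller);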

10. Track the Size of Your Third-Party Packages

When a lot of people work on the same project, the number of installed packages can quickly become incredibly high if no one is paying attention to them. To keep your application from becoming slow (especially on slow mobile networks), I use the Import Cost extension in Visual Studio Code. This way, I can see right from my editor how large an imported module is, and can check what's wrong when it gets too large.

For instance, in a recent project, the entire lodash library was imported (which is approximately 24kB gzipped). The issue? Only the cloneDeep method was used. By identifying this issue with the import cost package, we fixed it with:

npm remove lodash
npm install lodash.clonedeep

The clonedeep function could then be imported where needed:

import cloneDeep from "lodash.clonedeep";

⚠️ To optimize things even further, you can also use the Webpack Bundle Analyzer package to visualize the size of your webpack output files with an interactive zoomable treemap.
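As a rough sketch of wiring in the analyzer (assuming a plain webpack setup; Vue CLI and Nuxt each have their own way to enable it):

// webpack.config.js -- opens an interactive treemap of the bundle on build
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

module.exports = {
  plugins: [new BundleAnalyzerPlugin()]
};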


Do you have other best practices when dealing with a large Vue code base? Feel free to tell me in the comments below.