Top 10 Web Development Trends for 2020

Emerging technologies like artificial intelligence (with its subfields of machine learning and natural language processing) and blockchain all point toward one thing: businesses in the future will be data-driven, and these emerging technologies will help organizations put every iota of their data to work.

This article covers the most prominent upcoming web development trends for 2020, including some of the most path-breaking developments in web technologies.

1. Artificial Intelligence & Analytics:

AI is gradually becoming the be-all and end-all of our lives. Business interactions have been transformed by AI tools that give companies a way to analyze trends. Data analytics, AI, and machine learning (ML) combined have opened up an entirely different world of business solutions. AI forms an integral part of data science and works in conjunction with machine learning, which refers to the ability of machines to learn from users' previous behavior and serve them with intelligence that imitates human intellectual capacities.
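As a toy illustration of the "learn from previous behavior" idea (a hypothetical sketch, not tied to any particular ML framework), a few lines of Python can rank items by how often a user has interacted with them:

```python
from collections import Counter

def recommend(history, top_n=2):
    """Rank items by how often the user interacted with them before."""
    counts = Counter(history)
    return [item for item, _ in counts.most_common(top_n)]

# Past clicks collected from a (hypothetical) web application:
clicks = ["shoes", "hats", "shoes", "bags", "shoes", "hats"]
print(recommend(clicks))  # most frequently clicked items first
```

Real recommendation systems replace the frequency count with trained models, but the input (past behavior) and output (a ranked prediction) have the same shape.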

JavaScript frameworks like React and Angular give developers and software development companies a straightforward path to AI-enabled solutions. A few lines of code, plus pre-built libraries imported while developing an application with React Native, AngularJS, or Node.js, can add artificial-intelligence capabilities to a business solution. JavaScript frameworks are simple and easy to use.

2. Programming Languages:

A programming language is the crux of application development. Ever since programming languages came into being, languages like C have given programmers control over executables and applications at the memory level. Cross-platform application development supports both compiled and interpreted languages in order to deliver the desired results.

Web development is not possible without a programming language. Various languages such as Java, C#, Objective-C, C++, R, and Python (along with frameworks such as Laravel for PHP) have been adopted for developing native, hybrid, and cross-platform applications.

Python has seen a major evolution in terms of its functionality. Python is an open-source language whose pre-built code libraries enable scripting and plug-in integration. Python also has a commendable community support structure, and it is highly effective for developing interactive, scalable, and robust web applications.
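As a minimal sketch of Python in web development (standard library only; no particular framework assumed), a bare WSGI application, the interface most Python web frameworks build on, looks like this:

```python
def app(environ, start_response):
    """A minimal WSGI application: every request gets a plain-text greeting."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from Python!"]

# To serve it locally with nothing but the standard library:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Frameworks like Flask and Django wrap this same callable protocol with routing, templating, and ORM layers.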

3. Internet of Things (IoT):

What if we told you that you could control your electronic devices with just one command? We are sure this is something you would love to do. With the Internet of Things (IoT), controlling home devices has become a cakewalk.
Smart homes embedded with IoT are a concept whose moment has arrived. IoT is a system of interconnected devices, objects, people, or animals with assigned identifiers that can exchange data across a network without requiring human-to-machine or human-to-human interaction.
Home innovation ideas built on IoT are skyrocketing, and so is the development of software applications that provide full functionality to manage and control home appliances.

4. Distributed Cloud:

The distributed cloud is in vogue and is being used to serve data and applications from multiple geographic locations. The increasing demand for data centers has driven cloud providers to offer distributed cloud services.
In the context of information technology, a distributed cloud means that information is shared across diverse systems and platforms that may differ in location. Its main benefit is faster communication for global services, which leads to responsive communication in any chosen region. The distributed cloud is scattered across various public cloud locations outside the boundaries of the physical data centers devised by the cloud provider, yet it remains controlled and managed by the provider. 2020 is expected to be a big year for the distributed cloud.

5. Voice Search and NLP:

Voice search is trending, and trending big. It is used in developing intuitive web and mobile applications. Assistants like Cortana, Google Assistant, Alexa, and Siri are embraced by users worldwide because their voice-search support makes users feel more comfortable using search engines to gather the desired information. Natural language processing (NLP), the technology that lets software interpret spoken and written language, is also trending high, and NLP applications are used extensively across industry verticals like medicine, customer support, sales, and many others.
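As a toy sketch of the kind of text processing behind a voice query (standard library only; real assistants use far more sophisticated NLP models, and the stop-word list here is an illustrative assumption), here is a naive keyword extractor:

```python
from collections import Counter

# A tiny illustrative stop-word list; real NLP pipelines use much larger ones.
STOP_WORDS = {"the", "is", "in", "a", "of", "what", "for"}

def keywords(query, top_n=3):
    """Naive keyword extraction: lowercase, drop stop words, rank by frequency."""
    words = [w for w in query.lower().split() if w not in STOP_WORDS]
    return [w for w, _ in Counter(words).most_common(top_n)]

print(keywords("what is the weather in the city of Paris"))
```

A search backend would then match the extracted keywords against its index instead of the raw spoken sentence.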

6. Motion UI:

Yet another emerging technology in web applications is Motion UI. The availability of powerful tools supporting animations forms the basis of Motion UI, which helps boost conversions through interactive web UIs. Motion UI provides transition and motion effects that developers can generate within native apps. The modern version of Motion UI offers scalable CSS patterns and a gamut of JavaScript libraries while allowing coherent animation integration into websites.

7. Blockchain:

Blockchain was introduced with the motive of developing cryptocurrencies that could be used on the net, but it has also made inroads into web development thanks to its unique offerings. A blockchain is a distributed ledger holding an exhaustive list of immutable transactional records that are cryptographically signed and placed sequentially in the network. In the forthcoming years, blockchain is slated to gain importance, especially when used pragmatically.
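The "cryptographically chained, immutable records" idea can be sketched in a few lines of Python (a didactic toy, not a production ledger; real blockchains add digital signatures, consensus, and proof-of-work on top):

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """Each block's hash covers its payload and the link to the previous block."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash, "hash": block_hash(data, prev_hash)}

def verify(chain):
    """Valid if every hash matches its contents and every link matches its predecessor."""
    return all(
        block["hash"] == block_hash(block["data"], block["prev_hash"])
        and (i == 0 or block["prev_hash"] == chain[i - 1]["hash"])
        for i, block in enumerate(chain)
    )

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("alice pays bob", genesis["hash"])]
print(verify(chain))  # True; changing any block's data breaks verification
```

Because every block's hash is an input to the next block's hash, tampering with one record invalidates the entire tail of the chain, which is what makes the records effectively immutable.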

8. Single Page Applications (SPA):

Single-page applications (SPAs) are receiving a lot of traction because they conserve resources and load faster. They help end users find short, crisp, and accurate information without getting lost. With an SPA, data is placed exactly where it will motivate customers. Intermingling interactive elements with futuristic technologies such as voice search helps businesses attain a higher rate of conversion, and hence SPAs rank high among the top web development trends for 2020.

9. Chatbots and Customer Support:

The power of being there for your customers every day, at any hour, is the idea behind chatbots. Chatbots imitate human conversation and are intelligently programmed with pre-stored information to improve customer interaction and offer support round the clock.

The main motive for setting up chatbots is to elevate business performance while making customer conversations more interactive, so that customers feel bonded to the business. It is predicted that by 2020, chatbots and AI-enabled technical customer support will be implemented by more than 80% of businesses.
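A rule-based bot of the "pre-stored information" kind can be sketched in a few lines of Python (a hypothetical toy with made-up responses; production chatbots layer NLP and dialogue state on top):

```python
# Pre-stored keyword -> answer pairs (illustrative content).
RESPONSES = {
    "hours": "We are open 24/7 -- that's the whole point of a chatbot!",
    "price": "Our plans start at $9/month.",
    "human": "Transferring you to a support agent...",
}

def reply(message):
    """Match the first known keyword in the message; fall back to a default."""
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("What are your HOURS?"))
```

Even this trivial matcher shows the round-the-clock property: the lookup costs nothing to run at 3 a.m., which is exactly what a human support desk cannot offer.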

10. Human Augmentation:

This trending technology is the application of diverse technology modules to improve humans' cognitive and physical experiences. One example is the use of gene-editing tools such as CRISPR.

To gain a competitive edge in developing state-of-the-art solutions, make sure you watch and follow the above-listed trends in 2020. This year is set to bring revolutionary developments that will take web development to the next level.

Website Development - Top Reasons Why Your Business Needs A Creative Website

Currently, most business owners want a website so they can expand their business and increase sales in local and international markets. Here are the top reasons why you need a website.

Node.js Command Line Fun

Let's have some command line fun with Node.js :

  1. Install colors: npm install -g colors

  2. Install cfonts: npm install -g cfonts

  3. Link colors and cfonts: npm link colors and npm link cfonts

  4. Save the following code as love.js

  5. Run love.js:  node love.js

  6. Output

var colors = require('colors');     // extends String with color helpers such as .rainbow
const CFonts = require('cfonts');

var interval = 4000;                // delay in ms between banner redraws

// Print the LOVE banner in the given color.
function d0(col1) {
    CFonts.say('    LOVE     ', {
        font: 'block',              // define the font face
        align: 'left',              // define text alignment
        colors: [col1],             // define all colors
        background: 'transparent',  // define the background color, you can also use `backgroundColor` here as key
        letterSpacing: 1,           // define letter spacing
        lineHeight: 1,              // define the line height
        space: true,                // define if the output text should have empty lines on top and on the bottom
        maxLength: '0'              // define how many characters can be on one line
    });
}

// The bodies of d1()-d5() were truncated in the original post; they are
// reconstructed here as wrappers that redraw the banner in different colors.
function d1() { d0('red'); }
function d2() { d0('green'); }
function d3() { d0('yellow'); }
function d4() { d0('blue'); }
function d5() { d0('magenta'); }

d1();
setTimeout(d2, interval);
setTimeout(d3, interval * 2);
setTimeout(d4, interval * 3);
setTimeout(d5, interval * 4);

console.log('   ***     ***                   ***     ***                   ***     ***'.rainbow)
console.log(' **   ** **   **               **   ** **   **               **   ** **   **'.rainbow)
console.log('*       *       *             *       *       *             *       *       *'.rainbow)
console.log('*               *             *               *             *               *'.rainbow)
console.log(' *     LOVE    *               *     LOVE    *               *     LOVE    *'.rainbow)
console.log('  **         **   ***     ***   **         **   ***     ***   **         **'.rainbow)
console.log('    **     **   **   ** **   **   **     **   **   ** **   **   **     **'.rainbow)
console.log('      ** **    *       *       *    ** **    *       *       *    ** **'.rainbow)
console.log('        *      *               *      *      *               *      *'.rainbow)
console.log('                *     LOVE    *               *     LOVE    *'.rainbow)
console.log('   ***     ***   **         **   ***     ***   **         **   ***     ***'.rainbow)
console.log(' **   ** **   **   **     **   **   ** **   **   **     **   **   ** **   **'.rainbow)
console.log('*       *       *    ** **    *       *       *    ** **    *       *       *'.rainbow)
console.log('*               *      *      *               *      *      *               *'.rainbow)
console.log(' *     LOVE    *               *     LOVE    *               *     LOVE    *'.rainbow)
console.log('  **         **   ***     ***   **         **   ***     ***   **         **'.rainbow)
console.log('    **     **   **   ** **   **   **     **   **   ** **   **   **     **'.rainbow)
console.log('      ** **    *       *       *    ** **    *       *       *    ** **'.rainbow)
console.log('        *      *               *      *      *               *      *'.rainbow)
console.log('                *     LOVE    *               *     LOVE    *'.rainbow)
console.log('                 **         **                 **         **'.rainbow)
console.log('                   **     **                     **     **'.rainbow)
console.log('                     ** **                         ** **'.rainbow)
console.log('                       *                             *'.rainbow)

Fun coding! Thank you

How to Ping monitoring between Kubernetes nodes

When diagnosing issues in a Kubernetes cluster, we often notice "flickering"* of one of the cluster nodes, which usually happens in a random and strange manner. That's why we have felt the need for a tool that can test the reachability of one node from another and present the results as Prometheus metrics. Having that, we would also want to create graphs in Grafana and quickly locate the failed node (and, if necessary, reschedule all pods away from it and conduct the required maintenance).

  • By "flickering" I mean behavior in which a node randomly becomes NotReady and later returns to work, or in which part of the traffic fails to reach pods on neighboring nodes.

Why do such situations happen at all? One common cause is connectivity issues at a switch in the data center. For example, once, while we were setting up a vswitch in Hetzner, one of the nodes became unavailable through its vswitch port and turned out to be completely unreachable on the local network.

Our last requirement was to run this service directly in Kubernetes, so we would be able to deploy everything via Helm charts. (In the case of, say, Ansible we would have to define roles for each of the various environments: AWS, GCE, bare metal, etc.) Since we haven’t found a ready-made solution for this, we’ve decided to implement our own.

Script and configs

The main component of our solution is a script that watches the .status.addresses value of each node. If this value has changed for some node (e.g., a new node has been added), our script passes the list of nodes to the chart via Helm values, and the chart renders it into a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ping-exporter-config
  namespace: d8-system
data:
  targets.json: >
    {{ .Values.pingExporter.targets | toJson }}

Here is how .Values.pingExporter.targets will look:
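Judging by the fields the script reads (name and ipAddress for cluster targets; name and host for external targets), a purely illustrative value would be:

```json
{
  "cluster_targets": [
    {"name": "node-1", "ipAddress": ""},
    {"name": "node-2", "ipAddress": ""}
  ],
  "external_targets": [
    {"name": "google-dns", "host": ""}
  ]
}
```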


That’s the Python script itself:

#!/usr/bin/env python3

import subprocess
import prometheus_client
import re
import statistics
import os
import json
import glob
import better_exchook
import datetime

better_exchook.install()

FPING_CMDLINE = "/usr/sbin/fping -p 1000 -C 30 -B 1 -q -r 1".split(" ")
FPING_REGEX = re.compile(r"^(\S*)\s*: (.*)$", re.MULTILINE)
CONFIG_PATH = "/config/targets.json"

registry = prometheus_client.CollectorRegistry()

prometheus_exceptions_counter = \
    prometheus_client.Counter('kube_node_ping_exceptions', 'Total number of exceptions', [], registry=registry)

prom_metrics_cluster = {"sent": prometheus_client.Counter('kube_node_ping_packets_sent_total',
                                                          'ICMP packets sent',
                                                          ['destination_node', 'destination_node_ip_address'],
                                                          registry=registry),
                "received": prometheus_client.Counter('kube_node_ping_packets_received_total',
                                                      'ICMP packets received',
                                                      ['destination_node', 'destination_node_ip_address'],
                                                      registry=registry),
                "rtt": prometheus_client.Counter('kube_node_ping_rtt_milliseconds_total',
                                                 'round-trip time',
                                                 ['destination_node', 'destination_node_ip_address'],
                                                 registry=registry),
                "min": prometheus_client.Gauge('kube_node_ping_rtt_min', 'minimum round-trip time',
                                               ['destination_node', 'destination_node_ip_address'],
                                               registry=registry),
                "max": prometheus_client.Gauge('kube_node_ping_rtt_max', 'maximum round-trip time',
                                               ['destination_node', 'destination_node_ip_address'],
                                               registry=registry),
                "mdev": prometheus_client.Gauge('kube_node_ping_rtt_mdev',
                                                'mean deviation of round-trip times',
                                                ['destination_node', 'destination_node_ip_address'],
                                                registry=registry)}

prom_metrics_external = {"sent": prometheus_client.Counter('external_ping_packets_sent_total',
                                                           'ICMP packets sent',
                                                           ['destination_name', 'destination_host'],
                                                           registry=registry),
                "received": prometheus_client.Counter('external_ping_packets_received_total',
                                                      'ICMP packets received',
                                                      ['destination_name', 'destination_host'],
                                                      registry=registry),
                "rtt": prometheus_client.Counter('external_ping_rtt_milliseconds_total',
                                                 'round-trip time',
                                                 ['destination_name', 'destination_host'],
                                                 registry=registry),
                "min": prometheus_client.Gauge('external_ping_rtt_min', 'minimum round-trip time',
                                               ['destination_name', 'destination_host'],
                                               registry=registry),
                "max": prometheus_client.Gauge('external_ping_rtt_max', 'maximum round-trip time',
                                               ['destination_name', 'destination_host'],
                                               registry=registry),
                "mdev": prometheus_client.Gauge('external_ping_rtt_mdev',
                                                'mean deviation of round-trip times',
                                                ['destination_name', 'destination_host'],
                                                registry=registry)}
def validate_envs():
    envs = {"MY_NODE_NAME": os.getenv("MY_NODE_NAME"),
            "PROMETHEUS_TEXTFILE_DIR": os.getenv("PROMETHEUS_TEXTFILE_DIR"),
            "PROMETHEUS_TEXTFILE_PREFIX": os.getenv("PROMETHEUS_TEXTFILE_PREFIX")}

    for k, v in envs.items():
        if not v:
            raise ValueError("{} environment variable is empty".format(k))

    return envs

def compute_results(results):
    computed = {}

    matches = FPING_REGEX.finditer(results)
    for match in matches:
        host = match.group(1)
        ping_results = match.group(2)
        if "duplicate" in ping_results:
            continue
        splitted = ping_results.split(" ")
        if len(splitted) != 30:
            raise ValueError("ping returned wrong number of results: \"{}\"".format(splitted))

        positive_results = [float(x) for x in splitted if x != "-"]
        if len(positive_results) > 0:
            computed[host] = {"sent": 30, "received": len(positive_results),
                              "rtt": sum(positive_results),
                              "max": max(positive_results), "min": min(positive_results),
                              "mdev": statistics.pstdev(positive_results)}
        else:
            computed[host] = {"sent": 30, "received": len(positive_results), "rtt": 0,
                              "max": 0, "min": 0, "mdev": 0}
    if not len(computed):
        raise ValueError("regex match \"{}\" found nothing in fping output \"{}\"".format(FPING_REGEX, results))
    return computed

def call_fping(ips):
    cmdline = FPING_CMDLINE + ips
    process = subprocess.run(cmdline, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT, universal_newlines=True)
    if process.returncode == 3:
        raise ValueError("invalid arguments: {}".format(cmdline))
    if process.returncode == 4:
        raise OSError("fping reported syscall error: {}".format(process.stderr))

    return process.stdout

envs = validate_envs()

files = glob.glob(envs["PROMETHEUS_TEXTFILE_DIR"] + "*")
for f in files:
    os.remove(f)
labeled_prom_metrics = {"cluster_targets": [], "external_targets": []}

while True:
    with open(CONFIG_PATH, "r") as f:
        config = json.loads(f.read())
        config["external_targets"] = [] if config["external_targets"] is None else config["external_targets"]
        for target in config["external_targets"]:
            target["name"] = target["host"] if "name" not in target.keys() else target["name"]

    if labeled_prom_metrics["cluster_targets"]:
        for metric in labeled_prom_metrics["cluster_targets"]:
            if (metric["node_name"], metric["ip"]) not in [(node["name"], node["ipAddress"]) for node in config['cluster_targets']]:
                for k, v in prom_metrics_cluster.items():
                    v.remove(metric["node_name"], metric["ip"])

    if labeled_prom_metrics["external_targets"]:
        for metric in labeled_prom_metrics["external_targets"]:
            if (metric["target_name"], metric["host"]) not in [(target["name"], target["host"]) for target in config['external_targets']]:
                for k, v in prom_metrics_external.items():
                    v.remove(metric["target_name"], metric["host"])

    labeled_prom_metrics = {"cluster_targets": [], "external_targets": []}

    for node in config["cluster_targets"]:
        metrics = {"node_name": node["name"], "ip": node["ipAddress"], "prom_metrics": {}}

        for k, v in prom_metrics_cluster.items():
            metrics["prom_metrics"][k] = v.labels(node["name"], node["ipAddress"])

        labeled_prom_metrics["cluster_targets"].append(metrics)

    for target in config["external_targets"]:
        metrics = {"target_name": target["name"], "host": target["host"], "prom_metrics": {}}

        for k, v in prom_metrics_external.items():
            metrics["prom_metrics"][k] = v.labels(target["name"], target["host"])

        labeled_prom_metrics["external_targets"].append(metrics)

    out = call_fping([prom_metric["ip"]   for prom_metric in labeled_prom_metrics["cluster_targets"]] + \
                     [prom_metric["host"] for prom_metric in labeled_prom_metrics["external_targets"]])
    computed = compute_results(out)

    for dimension in labeled_prom_metrics["cluster_targets"]:
        result = computed[dimension["ip"]]
        dimension["prom_metrics"]["sent"].inc(result["sent"])
        dimension["prom_metrics"]["received"].inc(result["received"])
        dimension["prom_metrics"]["rtt"].inc(result["rtt"])
        dimension["prom_metrics"]["min"].set(result["min"])
        dimension["prom_metrics"]["max"].set(result["max"])
        dimension["prom_metrics"]["mdev"].set(result["mdev"])

    for dimension in labeled_prom_metrics["external_targets"]:
        result = computed[dimension["host"]]
        dimension["prom_metrics"]["sent"].inc(result["sent"])
        dimension["prom_metrics"]["received"].inc(result["received"])
        dimension["prom_metrics"]["rtt"].inc(result["rtt"])
        dimension["prom_metrics"]["min"].set(result["min"])
        dimension["prom_metrics"]["max"].set(result["max"])
        dimension["prom_metrics"]["mdev"].set(result["mdev"])

    prometheus_client.write_to_textfile(
        envs["PROMETHEUS_TEXTFILE_DIR"] + envs["PROMETHEUS_TEXTFILE_PREFIX"] + envs["MY_NODE_NAME"] + ".prom", registry)

This script runs on each K8s node and sends ICMP packets to every instance of the Kubernetes cluster once per second (the -p 1000 -C 30 options of fping above). The collected results are stored in text files.

The script is included in the Docker image:

FROM python:3.6-alpine3.8
COPY rootfs /
RUN pip3 install --upgrade pip && pip3 install -r requirements.txt && apk add --no-cache fping
ENTRYPOINT ["python3", "/app/"]

Also, we have created a ServiceAccount and a corresponding role with the only permission provided — to get the list of nodes (so we can know their addresses):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ping-exporter
  namespace: d8-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: d8-system:ping-exporter
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: d8-system:kube-ping-exporter
subjects:
- kind: ServiceAccount
  name: ping-exporter
  namespace: d8-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: d8-system:ping-exporter

Finally, we need a DaemonSet which runs on all instances of the cluster:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ping-exporter
  namespace: d8-system
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: ping-exporter
  template:
    metadata:
      labels:
        name: ping-exporter
    spec:
      terminationGracePeriodSeconds: 0
      tolerations:
      - operator: "Exists"
      hostNetwork: true
      serviceAccountName: ping-exporter
      priorityClassName: cluster-low
      containers:
      - image:
        name: ping-exporter
        env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: PROMETHEUS_TEXTFILE_DIR
            value: /node-exporter-textfile/
          - name: PROMETHEUS_TEXTFILE_PREFIX
            value: ping-exporter_
        volumeMounts:
          - name: textfile
            mountPath: /node-exporter-textfile
          - name: config
            mountPath: /config
      volumes:
        - name: textfile
          hostPath:
            path: /var/run/node-exporter-textfile
        - name: config
          configMap:
            name: ping-exporter-config
      imagePullSecrets:
      - name: private-registry

The remaining operational details of this solution:

  • When the Python script is executed, its results (that is, text files stored on the host machine in the /var/run/node-exporter-textfile directory) are passed to the node-exporter DaemonSet.

  • This node-exporter is launched with the --collector.textfile.directory=/host/textfile argument, where /host/textfile is a hostPath mount of /var/run/node-exporter-textfile. (You can read more about the textfile collector in the node-exporter documentation.)

  • In the end, node-exporter reads these files, and Prometheus collects all data from the node-exporter.

What are the results?

Now it is time to enjoy the long-awaited results. With the metrics created, we can use and, of course, visualize them. Here is how they look.
First, there is a general selector where you can choose the nodes whose "to" and "from" connectivity you want to check. You get a summary table with ping results for the selected nodes over the period specified in the Grafana dashboard:

And here are graphs with the combined statistics about selected nodes:

We also have a list of records, where each record links to graphs for the specific node selected as the Source node:

If you expand such a record, you will see detailed ping statistics from the current node to all other nodes selected as Destination nodes:

And here are the relevant graphs:

What would the graphs look like when pings between nodes go bad?

If you observe something like that in real life, it's time for troubleshooting!
Finally, here is our visualization for pinging external hosts:

We can check either the overall view for all nodes or a graph for any particular node:

This can be useful when connectivity issues affect only specific nodes.

This article was originally written and published in Russian by Flant's engineer Andrey Sidorov.