Top 10 Web Development Trends for 2020

Emerging technologies such as Artificial Intelligence, machine learning (ML), natural language processing (NLP) and blockchain all point towards one thing: businesses of the future will be data-driven, and these technologies will help organizations make use of every iota of data to get there.

This article looks at the most prominent web development trends for 2020, including some of the most path-breaking developments in web technologies.

1. Artificial Intelligence & Analytics:

AI is gradually becoming the be-all and end-all of our lives. Business interactions have become more powerful thanks to AI tools that give companies a way to analyze trends. Data analytics, AI and machine learning (ML) combined have opened up an entirely different world of business solutions. AI forms an integral part of data science and works in conjunction with machine learning, which allows machines to learn from users' previous behavior and serve them with intelligence that imitates human intellectual capacities.

A host of JavaScript frameworks such as React and Angular empowers developers and software development companies to build polished AI-driven solutions with little effort. A few lines of code and some pre-built libraries imported into an application built with React Native, React VR, AngularJS or Node.js are often enough to create ingenious business solutions that leverage artificial intelligence. JavaScript frameworks are also simple to use.

2. Programming Languages:

A programming language is the crux of application development. Ever since programming languages came into being, languages like C have given programmers control over executables and applications down to the memory level. Cross-platform application development supports both compiled and interpreted languages to deliver the desired results.

Web development is not possible without a programming language. Languages such as Java, C#, Objective-C, C++, R and Python, along with frameworks such as Laravel, have been adopted for developing native, hybrid and cross-platform applications.

Python has seen a major evolution in terms of its functionality. It is an open-source language with a wealth of pre-built libraries that make scripting and plug-in integration straightforward, and it has commendable community support. It is highly effective for developing interactive, scalable and robust web applications.
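
As a tiny illustration of how little code a Python web service needs, here is a minimal sketch, assuming the Flask micro-framework is installed (Flask is used purely as an example and is not something this article prescribes):

# Minimal sketch of a Python web endpoint, assuming Flask (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/greeting")
def greeting():
    # Return a small JSON payload; a real application would pull this from a database or service.
    return jsonify(message="Hello from a minimal Python web service")

if __name__ == "__main__":
    app.run(port=5000)  # development server only; use a WSGI server in production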

3. Internet of Things (IoT):

What if we told you that you could control your electronic devices with a single command? We are sure this is something you would love to do. With the Internet of Things (IoT), controlling home devices has become a cakewalk.
Smart homes built around IoT are a concept whose day has come. IoT is a system of interconnected devices, objects, people or animals with assigned identifiers that can exchange data across a network without requiring human-to-machine or human-to-human interaction.
Home-automation ideas built on IoT are skyrocketing, and so is the development of software applications that provide full functionality to manage and control home appliances.
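
To make the data-exchange idea concrete, here is a rough sketch of a "device" publishing a sensor reading over MQTT, a protocol commonly used in IoT. The paho-mqtt library, the broker address and the topic name are illustrative assumptions, not something the article specifies:

# Hypothetical example: a smart-home sensor publishing one reading to an MQTT broker.
import json
import paho.mqtt.publish as publish  # pip install paho-mqtt

reading = {"device_id": "thermostat-01", "temperature_c": 21.5}
publish.single(
    topic="home/livingroom/temperature",   # placeholder topic
    payload=json.dumps(reading),
    hostname="broker.example.com",         # placeholder broker address
)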

4. Distributed Cloud:

The distributed cloud is in vogue; it serves data and applications from multiple geographic locations. The increasing demand for data centers has driven cloud providers to offer distributed cloud services.
In the context of information technology, distributed cloud means that information is shared across diverse systems and platforms that may sit in different locations. Its main benefit is faster communication for global services, which translates into responsive service in whichever region is chosen. The distributed cloud is spread across public cloud locations outside the boundaries of the provider's physical data centers, yet it is still controlled and managed by the provider. 2020 is expected to be a big year for distributed cloud platforms.

5. Voice Search and NLP:

Voice search is trending, and trending big. It is used in developing intuitive web and mobile applications. Assistants like Cortana, Google Assistant, Siri and Alexa (on devices such as the Amazon Echo) have been embraced by users worldwide because their voice search support makes people feel more comfortable using search engines to gather the information they need. Natural language processing (NLP), the technology that lets these assistants understand spoken queries, is also trending high, and NLP applications are used extensively across industry verticals such as medicine, sports, sales, public speaking and customer support.

6. Motion UI:

Yet another emerging technology in web applications is Motion UI. It is built on a set of powerful tools that support animation, and it helps boost conversions through interactive web UIs. Motion UI provides building blocks that help developers generate transition and motion effects within native apps. The current version of Motion UI offers scalable CSS patterns together with a range of JavaScript libraries, allowing animations to be integrated coherently into websites.

7. Blockchain:

Blockchain was introduced with the motive of developing cryptocurrencies that could be used on the internet, but it has also made inroads into web development thanks to its unique offerings. A blockchain is a distributed ledger holding an exhaustive list of immutable transaction records that are cryptographically signed by their creators and placed sequentially in the network. In the coming years, blockchain is slated to gain importance, especially when used pragmatically.
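
The "immutable records linked in sequence" idea can be illustrated in a few lines of Python. This is only a toy sketch of hash-chaining, not a real blockchain (there is no consensus, signing or networking here):

import hashlib
import json
import time

def make_block(data, previous_hash):
    # Each block's hash covers its own contents plus the previous block's hash,
    # so changing any earlier record invalidates every block after it.
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block({"note": "genesis"}, "0" * 64)
payment = make_block({"from": "alice", "to": "bob", "amount": 10}, genesis["hash"])
print(payment["previous_hash"] == genesis["hash"])  # True: the records are chained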

8. Single Page Applications (SPA):

A single-page application (SPA) is gaining a lot of traction because it conserves resources and loads faster. It helps end-users find short, crisp and accurate information without getting lost. With an SPA, content sits exactly where it motivates customers. Combined with interactive elements and futuristic technologies such as voice search, SPAs help businesses attain higher conversion rates, which is why they rank high among the top web development trends for 2020.

9. Chatbots and Customer Support:

The idea behind chatbots is the power of being there for your customers all day, every day. Chatbots imitate human conversation and are programmed with pre-stored information to improve customer interaction and offer support around the clock.

The main motive for setting up chatbots is to elevate business performance while making customer conversations more interactive, so that customers feel bonded with the business. It is predicted that by 2020, chatbots and AI-enabled customer support will be implemented by more than 80% of businesses (source: dzone.com).
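
At its simplest, "programmed with pre-stored information" can mean nothing more than keyword matching against a small knowledge base. A purely illustrative sketch (real chatbots typically rely on NLP models rather than fixed keywords):

# Pre-stored answers keyed by keywords that may appear in a customer's message.
FAQ = {
    "price": "Our basic plan starts at $10/month.",
    "refund": "Refunds are processed within 5 business days.",
    "hours": "Support is available around the clock.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "Let me connect you with a human agent."  # fall back to a person

print(reply("What are your support hours?"))  # -> "Support is available around the clock."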

10. Human Augmentation:

This trending technology applies assorted technology modules to improve human cognitive and physical capabilities. One example is the use of CRISPR technology to edit genes.

To gain a competitive edge in developing state-of-the-art solutions, make sure you watch and follow the trends listed above in 2020. This year is set to bring some revolutionary developments that will take web development to the next level.

Website Development - Top Reasons Why Your Business Need A Creative Website?

Currently, most business owners want a website to expand their business and increase sales in markets ranging from local to international. Here are the top reasons why you need one.

Node.js Command Line Fun

Let's have some command-line fun with Node.js:

  1. Install colors: npm install -g colors

  2. Install cfonts: npm install -g cfonts

  3. Link colors and cfonts: npm link colors and npm link cfonts

  4. Save the following code as love.js

  5. Run love.js:  node love.js

  6. Output

// 'colors' extends String.prototype with color helpers such as .rainbow (used below)
var colors = require('colors');

const CFonts = require('cfonts');

// Redraw the banner in a new color five times per 4-second cycle, for 19 cycles.
const interval = 4000;
for (let i = 1; i < 20; i++) {
    setTimeout(d1, i * interval);
    setTimeout(d2, i * interval + (interval / 5));
    setTimeout(d3, i * interval + (2 * interval / 5));
    setTimeout(d4, i * interval + (3 * interval / 5));
    setTimeout(d5, i * interval + (4 * interval / 5));
}

// Each dN clears the terminal ('\x1Bc' is the ANSI reset sequence) and redraws in a new color.
function d1() {
    console.log('\x1Bc');
    d0('green');
}

function d2() {
    console.log('\x1Bc');
    d0('blue');
}

function d3() {
    console.log('\x1Bc');
    d0('red');
}

function d4() {
    console.log('\x1Bc');
    d0('yellow');
}

function d5() {
    console.log('\x1Bc');
    d0('magenta');
}

// d0 prints the "LOVE" banner with cfonts in the given color, then a rainbow ASCII pattern.
function d0(col1) {

CFonts.say('    LOVE     ', {
    font: 'block',              // define the font face
    align: 'left',              // define text alignment
  //  colors: ['red'],          // define all colors (fixed value)
    colors: [col1],             // define all colors (passed in by the caller)
    background: 'transparent',  // define the background color, you can also use `backgroundColor` here as key
    letterSpacing: 1,           // define letter spacing
    lineHeight: 1,              // define the line height
    space: true,                // define if the output text should have empty lines on top and on the bottom
    maxLength: '0',             // define how many characters can be on one line
});

console.log('   ***     ***                   ***     ***                   ***     ***'.rainbow)
console.log(' **   ** **   **               **   ** **   **               **   ** **   **'.rainbow)
console.log('*       *       *             *       *       *             *       *       *'.rainbow)
console.log('*               *             *               *             *               *'.rainbow)
console.log(' *     LOVE    *               *     LOVE    *               *     LOVE    *'.rainbow)
console.log('  **         **   ***     ***   **         **   ***     ***   **         **'.rainbow)
console.log('    **     **   **   ** **   **   **     **   **   ** **   **   **     **'.rainbow)
console.log('      ** **    *       *       *    ** **    *       *       *    ** **'.rainbow)
console.log('        *      *               *      *      *               *      *'.rainbow)
console.log('                *     LOVE    *               *     LOVE    *'.rainbow)
console.log('   ***     ***   **         **   ***     ***   **         **   ***     ***'.rainbow)
console.log(' **   ** **   **   **     **   **   ** **   **   **     **   **   ** **   **'.rainbow)
console.log('*       *       *    ** **    *       *       *    ** **    *       *       *'.rainbow)
console.log('*               *      *      *               *      *      *               *'.rainbow)
console.log(' *     LOVE    *               *     LOVE    *               *     LOVE    *'.rainbow)
console.log('  **         **   ***     ***   **         **   ***     ***   **         **'.rainbow)
console.log('    **     **   **   ** **   **   **     **   **   ** **   **   **     **'.rainbow)
console.log('      ** **    *       *       *    ** **    *       *       *    ** **'.rainbow)
console.log('        *      *               *      *      *               *      *'.rainbow)
console.log('                *     LOVE    *               *     LOVE    *'.rainbow)
console.log('                 **         **                 **         **'.rainbow)
console.log('                   **     **                     **     **'.rainbow)
console.log('                     ** **                         ** **'.rainbow)
console.log('                       *                             *'.rainbow)
}

Fun coding! Thank you

How to Ping monitoring between Kubernetes nodes

When diagnosing issues in a Kubernetes cluster, we often notice “flickering”* of one of the cluster nodes, which usually happens in a random and strange manner. That's why we have long felt the need for a tool that can test the reachability of one node from another and present the results as Prometheus metrics. With such a tool we would also want to create graphs in Grafana and quickly locate the failed node (and, if necessary, reschedule all pods away from it and carry out the required maintenance).

  • By “flickering” I mean behavior where a node randomly becomes NotReady and later returns to normal operation, or where, for example, part of the traffic fails to reach pods on neighboring nodes.

Why do such situations occur at all? One of the common causes is connectivity issues at the switch in the data center. For example, while setting up a vSwitch in Hetzner once, one of the nodes became unavailable through its vSwitch port and turned out to be completely unreachable on the local network.

Our last requirement was to run this service directly in Kubernetes so that we could deploy everything via Helm charts. (In the case of, say, Ansible, we would have to define roles for each of the various environments: AWS, GCE, bare metal, etc.) Since we did not find a ready-made solution for this, we decided to implement our own.

Script and configs

The main component of our solution is a script that watches the .status.addresses value of each node. If this value has changed for some node (for example, a new node has been added), our script passes the list of nodes as a ConfigMap to the chart via Helm values:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ping-exporter-config
  namespace: d8-system
data:
  targets.json: >
    {{ .Values.pingExporter.targets | toJson }}

Here is what .Values.pingExporter.targets looks like:

"cluster_targets":[{"ipAddress":"192.168.191.11","name":"kube-a-3"},{"ipAddress":"192.168.191.12","name":"kube-a-2"},{"ipAddress":"192.168.191.22","name":"kube-a-1"},{"ipAddress":"192.168.191.23","name":"kube-db-1"},{"ipAddress":"192.168.191.9","name":"kube-db-2"},{"ipAddress":"51.75.130.47","name":"kube-a-4"}],"external_targets":[{"host":"8.8.8.8","name":"google-dns"},{"host":"youtube.com"}]}

That’s the Python script itself:

#!/usr/bin/env python3

import subprocess
import prometheus_client
import re
import statistics
import os
import json
import glob
import better_exchook
import datetime

better_exchook.install()

# fping flags: -p 1000 = 1000 ms between pings to each target, -C 30 = 30 pings per target
# with individual round-trip times reported, -B 1 = no interval back-off, -q = quiet output,
# -r 1 = one retry
FPING_CMDLINE = "/usr/sbin/fping -p 1000 -C 30 -B 1 -q -r 1".split(" ")
FPING_REGEX = re.compile(r"^(\S*)\s*: (.*)$", re.MULTILINE)
CONFIG_PATH = "/config/targets.json"

registry = prometheus_client.CollectorRegistry()

prometheus_exceptions_counter = \
    prometheus_client.Counter('kube_node_ping_exceptions', 'Total number of exceptions', [], registry=registry)

prom_metrics_cluster = {"sent": prometheus_client.Counter('kube_node_ping_packets_sent_total',
                                                  'ICMP packets sent',
                                                  ['destination_node', 'destination_node_ip_address'],
                                                  registry=registry),
                "received": prometheus_client.Counter('kube_node_ping_packets_received_total',
                                                      'ICMP packets received',
                                                     ['destination_node', 'destination_node_ip_address'],
                                                     registry=registry),
                "rtt": prometheus_client.Counter('kube_node_ping_rtt_milliseconds_total',
                                                 'round-trip time',
                                                ['destination_node', 'destination_node_ip_address'],
                                                registry=registry),
                "min": prometheus_client.Gauge('kube_node_ping_rtt_min', 'minimum round-trip time',
                                               ['destination_node', 'destination_node_ip_address'],
                                               registry=registry),
                "max": prometheus_client.Gauge('kube_node_ping_rtt_max', 'maximum round-trip time',
                                               ['destination_node', 'destination_node_ip_address'],
                                               registry=registry),
                "mdev": prometheus_client.Gauge('kube_node_ping_rtt_mdev',
                                                'mean deviation of round-trip times',
                                                ['destination_node', 'destination_node_ip_address'],
                                                registry=registry)}


prom_metrics_external = {"sent": prometheus_client.Counter('external_ping_packets_sent_total',
                                                  'ICMP packets sent',
                                                  ['destination_name', 'destination_host'],
                                                  registry=registry),
                "received": prometheus_client.Counter('external_ping_packets_received_total',
                                                      'ICMP packets received',
                                                     ['destination_name', 'destination_host'],
                                                     registry=registry),
                "rtt": prometheus_client.Counter('external_ping_rtt_milliseconds_total',
                                                 'round-trip time',
                                                ['destination_name', 'destination_host'],
                                                registry=registry),
                "min": prometheus_client.Gauge('external_ping_rtt_min', 'minimum round-trip time',
                                               ['destination_name', 'destination_host'],
                                               registry=registry),
                "max": prometheus_client.Gauge('external_ping_rtt_max', 'maximum round-trip time',
                                               ['destination_name', 'destination_host'],
                                               registry=registry),
                "mdev": prometheus_client.Gauge('external_ping_rtt_mdev',
                                                'mean deviation of round-trip times',
                                                ['destination_name', 'destination_host'],
                                                registry=registry)}

def validate_envs():
    envs = {"MY_NODE_NAME": os.getenv("MY_NODE_NAME"), "PROMETHEUS_TEXTFILE_DIR": os.getenv("PROMETHEUS_TEXTFILE_DIR"),
            "PROMETHEUS_TEXTFILE_PREFIX": os.getenv("PROMETHEUS_TEXTFILE_PREFIX")}

    for k, v in envs.items():
        if not v:
            raise ValueError("{} environment variable is empty".format(k))

    return envs


@prometheus_exceptions_counter.count_exceptions()
def compute_results(results):
    computed = {}

    matches = FPING_REGEX.finditer(results)
    for match in matches:
        host = match.group(1)
        ping_results = match.group(2)
        if "duplicate" in ping_results:
            continue
        splitted = ping_results.split(" ")
        if len(splitted) != 30:
            raise ValueError("ping returned wrong number of results: \"{}\"".format(splitted))

        positive_results = [float(x) for x in splitted if x != "-"]
        if len(positive_results) > 0:
            computed[host] = {"sent": 30, "received": len(positive_results),
                            "rtt": sum(positive_results),
                            "max": max(positive_results), "min": min(positive_results),
                            "mdev": statistics.pstdev(positive_results)}
        else:
            computed[host] = {"sent": 30, "received": len(positive_results), "rtt": 0,
                            "max": 0, "min": 0, "mdev": 0}
    if not len(computed):
        raise ValueError("regex match\"{}\" found nothing in fping output \"{}\"".format(FPING_REGEX, results))
    return computed


@prometheus_exceptions_counter.count_exceptions()
def call_fping(ips):
    cmdline = FPING_CMDLINE + ips
    process = subprocess.run(cmdline, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT, universal_newlines=True)
    if process.returncode == 3:
        raise ValueError("invalid arguments: {}".format(cmdline))
    if process.returncode == 4:
        raise OSError("fping reported syscall error: {}".format(process.stderr))

    return process.stdout


envs = validate_envs()

# Remove stale textfile metrics left over from previous runs
files = glob.glob(envs["PROMETHEUS_TEXTFILE_DIR"] + "*")
for f in files:
    os.remove(f)

labeled_prom_metrics = {"cluster_targets": [], "external_targets": []}

# Main loop: re-read the target list, (re)label the metrics, ping every target with fping,
# update the counters and gauges, and write them to a textfile for node-exporter.
while True:
    with open(CONFIG_PATH, "r") as f:
        config = json.loads(f.read())
        config["external_targets"] = [] if config["external_targets"] is None else config["external_targets"]
        for target in config["external_targets"]:
            target["name"] = target["host"] if "name" not in target.keys() else target["name"]

    if labeled_prom_metrics["cluster_targets"]:
        for metric in labeled_prom_metrics["cluster_targets"]:
            if (metric["node_name"], metric["ip"]) not in [(node["name"], node["ipAddress"]) for node in config['cluster_targets']]:
                for k, v in prom_metrics_cluster.items():
                    v.remove(metric["node_name"], metric["ip"])

    if labeled_prom_metrics["external_targets"]:
        for metric in labeled_prom_metrics["external_targets"]:
            if (metric["target_name"], metric["host"]) not in [(target["name"], target["host"]) for target in config['external_targets']]:
                for k, v in prom_metrics_external.items():
                    v.remove(metric["target_name"], metric["host"])


    labeled_prom_metrics = {"cluster_targets": [], "external_targets": []}

    for node in config["cluster_targets"]:
        metrics = {"node_name": node["name"], "ip": node["ipAddress"], "prom_metrics": {}}

        for k, v in prom_metrics_cluster.items():
            metrics["prom_metrics"][k] = v.labels(node["name"], node["ipAddress"])

        labeled_prom_metrics["cluster_targets"].append(metrics)

    for target in config["external_targets"]:
        metrics = {"target_name": target["name"], "host": target["host"], "prom_metrics": {}}

        for k, v in prom_metrics_external.items():
            metrics["prom_metrics"][k] = v.labels(target["name"], target["host"])

        labeled_prom_metrics["external_targets"].append(metrics)

    out = call_fping([prom_metric["ip"]   for prom_metric in labeled_prom_metrics["cluster_targets"]] + \
                     [prom_metric["host"] for prom_metric in labeled_prom_metrics["external_targets"]])
    computed = compute_results(out)

    for dimension in labeled_prom_metrics["cluster_targets"]:
        result = computed[dimension["ip"]]
        dimension["prom_metrics"]["sent"].inc(computed[dimension["ip"]]["sent"])
        dimension["prom_metrics"]["received"].inc(computed[dimension["ip"]]["received"])
        dimension["prom_metrics"]["rtt"].inc(computed[dimension["ip"]]["rtt"])
        dimension["prom_metrics"]["min"].set(computed[dimension["ip"]]["min"])
        dimension["prom_metrics"]["max"].set(computed[dimension["ip"]]["max"])
        dimension["prom_metrics"]["mdev"].set(computed[dimension["ip"]]["mdev"])

    for dimension in labeled_prom_metrics["external_targets"]:
        result = computed[dimension["host"]]
        dimension["prom_metrics"]["sent"].inc(computed[dimension["host"]]["sent"])
        dimension["prom_metrics"]["received"].inc(computed[dimension["host"]]["received"])
        dimension["prom_metrics"]["rtt"].inc(computed[dimension["host"]]["rtt"])
        dimension["prom_metrics"]["min"].set(computed[dimension["host"]]["min"])
        dimension["prom_metrics"]["max"].set(computed[dimension["host"]]["max"])
        dimension["prom_metrics"]["mdev"].set(computed[dimension["host"]]["mdev"])

    prometheus_client.write_to_textfile(
        envs["PROMETHEUS_TEXTFILE_DIR"] + envs["PROMETHEUS_TEXTFILE_PREFIX"] + envs["MY_NODE_NAME"] + ".prom", registry)

This script runs on every Kubernetes node and sends ICMP packets to all instances of the cluster (with the fping settings above, each round sends 30 probes per target at one-second intervals). The collected results are stored in text files.

The script is included in the Docker image:

FROM python:3.6-alpine3.8
COPY rootfs /
WORKDIR /app
RUN pip3 install --upgrade pip && pip3 install -r requirements.txt && apk add --no-cache fping
ENTRYPOINT ["python3", "/app/ping-exporter.py"]
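
The requirements.txt referenced in the Dockerfile is not shown in the article; judging by the script's third-party imports, it would presumably contain at least:

prometheus_client
better_exchook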

Also, we have created a ServiceAccount and a corresponding ClusterRole with a single permission: listing nodes (so we can know their addresses):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ping-exporter
  namespace: d8-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: d8-system:ping-exporter
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: d8-system:kube-ping-exporter
subjects:
- kind: ServiceAccount
  name: ping-exporter
  namespace: d8-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: d8-system:ping-exporter

Finally, we need a DaemonSet that runs on every node of the cluster:

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ping-exporter
  namespace: d8-system
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: ping-exporter
  template:
    metadata:
      labels:
        name: ping-exporter
    spec:
      terminationGracePeriodSeconds: 0
      tolerations:
      - operator: "Exists"
      hostNetwork: true
      serviceAccountName: ping-exporter
      priorityClassName: cluster-low
      containers:
      - image: private-registry.flant.com/ping-exporter/ping-exporter:v1
        name: ping-exporter
        env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: PROMETHEUS_TEXTFILE_DIR
            value: /node-exporter-textfile/
          - name: PROMETHEUS_TEXTFILE_PREFIX
            value: ping-exporter_
        volumeMounts:
          - name: textfile
            mountPath: /node-exporter-textfile
          - name: config
            mountPath: /config
      volumes:
        - name: textfile
          hostPath:
            path: /var/run/node-exporter-textfile
        - name: config
          configMap:
            name: ping-exporter-config
      imagePullSecrets:
      - name: private-registry

The remaining operational details of this solution:

  • When the Python script runs, its results (that is, the text files stored on the host machine in the /var/run/node-exporter-textfile directory) are picked up by the node-exporter DaemonSet.

  • The node-exporter is launched with the --collector.textfile.directory /host/textfile argument, where /host/textfile is a hostPath mount of /var/run/node-exporter-textfile. (You can read more about node-exporter's textfile collector here.)

  • In the end, node-exporter reads these files, and Prometheus scrapes all the data from node-exporter; an example of querying the resulting metrics follows below.
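
As an illustration of how these metrics can be consumed, here is a rough sketch that queries Prometheus for the per-node packet-loss percentage over its HTTP API (the Prometheus address is a placeholder; the metric names are the ones defined in the script above):

import requests

# 5-minute packet-loss percentage per destination node, derived from the two counters above.
query = (
    "100 * (1 - rate(kube_node_ping_packets_received_total[5m])"
    " / rate(kube_node_ping_packets_sent_total[5m]))"
)
resp = requests.get("http://prometheus.example.com:9090/api/v1/query", params={"query": query})
for row in resp.json()["data"]["result"]:
    print(row["metric"].get("destination_node"), row["value"][1])

The same PromQL expression can be used directly as a Grafana panel query.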

What are the results?

Now it is time to enjoy the long-awaited results. Once the metrics are being collected, we can use them and, of course, visualize them. Here is how they look.
First, there is a general selector where you can choose the nodes whose “to” and “from” connectivity you want to check. You get a summary table with ping results for the selected nodes over the period specified in the Grafana dashboard:

And here are graphs with the combined statistics about selected nodes:

Also, there is a list of records, each of which links to graphs for a specific node selected in the Source node field:

If you expand such a record, you will see detailed ping statistics from the current node to all nodes selected in the Destination nodes field:

And here are the relevant graphs:

What would the graphs look like if pings between nodes were bad?

If you see something like this in real life, it is time for troubleshooting!
Finally, here is our visualization for pinging external hosts:

We can check either this overall view for all nodes, or a graph for any particular node only:

It might be useful when you observe connectivity issues affecting some specific nodes only.

This article was originally written and published in Russian by Flant's engineer Andrey Sidorov.