DevOps Trends in 2020

Here's a look at what is going to be growing in DevOps in 2020.

Marc Andreessen, the founder of Netscape, famously said that software is eating the world. He also said that every company is a software company these days, and that software companies are poised to take over broad swathes of the economy. In 2020 you'll see this clearly in DevOps, where continuous updates transform the way software is delivered to a nearly limitless marketplace. DevOps has become a must to thrive in this highly competitive technological world.

Introduction to DevOps

While firms define DevOps differently, we can define it as a mindset a team adopts to take its engineering momentum to new heights. DevOps is mostly about eliminating barriers in engineering, mainly the cultural obstacles that come between idea and execution, making the process of shipping software better, faster, cheaper, and more secure.

Whatever you may call it, it should all come down to automation at the end of the day, which in turn should help firms develop fast, ship fast, fail fast, recover fast, and learn fast.

From the SDLC model to today, things have changed tremendously. The term DevOps was coined in 2009, and it advocated a cultural transformation along with technical principles in which everything was treated as code. Then came practices like CI/CD, but software was still written as a big monolith, and this presented numerous challenges.

So in 2011, the microservices architecture was introduced, advocating fine-grained, loosely coupled components, each with a specific task to carry out.

Applications written following this loosely coupled, microservices-based architecture were termed cloud-native. Firms are transitioning from VMs to Kubernetes to serverless, depending on their business needs and goals.

According to a slide from Black Hat USA 2019 by Kelly Shortridge and Dr. Nicole Forsgren, four factors are important when benchmarking yourself against the elite performers in the DevOps industry.

  1. Lead time for change
  2. Release frequency
  3. Time to recovery
  4. Change failure rate

DevOps performance metrics
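As a concrete illustration, these metrics can be computed from a simple deployment log. The sketch below uses made-up records over a one-week window; the record format is invented for the example (lead time for change would additionally need commit timestamps, so it is omitted here):

```python
from datetime import date

# Hypothetical deployment log for one week:
# (deploy date, caused a failure in production?, hours needed to restore service)
deployments = [
    (date(2020, 1, 6), False, 0.0),
    (date(2020, 1, 8), True, 1.5),
    (date(2020, 1, 10), False, 0.0),
    (date(2020, 1, 13), False, 0.0),
]

failures = [hours for _, failed, hours in deployments if failed]

release_frequency = len(deployments)  # releases in the one-week window
change_failure_rate = len(failures) / len(deployments)
mean_time_to_recovery = sum(failures) / max(1, len(failures))

print(f"Release frequency: {release_frequency}/week")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to recovery: {mean_time_to_recovery} hours")
```

Elite performers keep release frequency high while holding change failure rate and time to recovery low.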

In this article, we will see what the future holds for DevOps.

1. Cloud-Native Will Become a Necessity

The Diamanti survey of more than 500 IT leaders suggests that container technology, by all measures, has matured dramatically in a single year and moved from developer experimentation to production. Cloud-native technologies will rise to new heights, especially Kubernetes adoption, and they give firms a strong advantage in faster time to market.

Why Cloud-Native?

Adopting cloud-native practices means better innovation, advanced transformation, and a richer customer experience. As described in my other article, "Cloud-Native DevOps," cloud-native fundamentally boosts cloud automation, which refers to automatically managing the installation, configuration, and supervision of cloud computing services. It is about using technology to make more reliable business decisions for your cloud computing resources at the right time.

According to Oracle's predictions about the future of cloud-native, it is estimated that 80% of enterprise IT will move to the cloud by 2025. The CNCF survey results showed that the use of cloud-native technologies in production has grown over 200%.

In 2018, Abby Kearns, executive director of the open-source platform-as-a-service provider Cloud Foundry Foundation, delivered a keynote at LinuxCon + ContainerCon + CloudOpen China that offered a more in-depth view of cloud-native and its future.

“Cloud-native technologies and cloud-native applications are growing,” Kearns said. “Over the next 18 months, there will be a 100 percent increase in the number of cloud-native applications organizations are writing and using. This means you can no longer just invest in IT, but need to invest in cloud and cloud technologies as well.” (Quoted from Abby Kearns’s keynote, "Shaping the Cloud-Native Future")

The U.S. Air Force is one of the best examples from her talk of how Agile an organization can become by using bleeding-edge technology and cloud-native principles. The U.S. Air Force has implemented Agile practices and is now taking advantage of the cloud, developing apps to run on multiple clouds.

2. There Will Be a Rise in The Container Registry Services

This point could have been included in the cloud-native section itself, but I think it needs special attention, as most software companies are now adopting container registries that help developers store and manage artifacts and all their dependencies for a smooth software development life cycle.

Just as managing application source code in a version-controlled repository such as Git is crucial, so is managing Docker images. Docker provides similar capabilities for managing images, which can be stored locally on your development machine or on a remote container registry such as Docker Hub.

But sometimes these images are prone to security issues and can be easily accessed by attackers. Hence, modern firms need a safe and secure way of managing and maintaining their container images: container registries.

Container registries have become a must to have when it comes to DevOps teams working with containerized applications and microservices architecture. With the popularity of Docker and cloud-native applications increasing day by day, container image management has become a vital part of modern software development. A container registry is simply a collection of repositories that are primarily made to store container images.

At a recent KubeCon conference in San Diego, JFrog announced its own container registry. Built on the robustness of Artifactory, the JFrog Container Registry is a hardened, proven, free container registry. It is scalable, hybrid, and comes with Artifactory's polished UI.

Other notable container registries available in the market today include:

  • Amazon Elastic Container Registry (ECR)
  • Docker Hub
  • JFrog Container Registry
  • Azure Container Registry
  • Google Container Registry

Private container registries allow companies to apply policies, security, access controls, and more to how they manage containers. A container registry should offer features such as hybrid operation, a Docker registry, a Helm registry, generic repositories, remote repositories, virtual repositories, and rich metadata.

Where Is the Need for A Container Registry?

There are a few reasons container registries are necessary.

  1. As cloud-native approaches increase, we see rapid digital transformations happening at the enterprise level with tools like Kubernetes, Docker, Artifactory, Helm, and Istio. The whole world is rapidly moving to container registry technologies, which are the future of shipping containerized applications fast and securely.
  2. The hybrid competition is heating up. Most of the cloud providers are providing registries for their services with a free tier since they know its importance.
  3. Docker containers tend to take up a lot of storage space and are moved around a lot. This means high storage costs, and sometimes security can be an issue. As a result, we see some firms using Artifactory for everything except Docker and using other free tools to manage their containers. It is great to see that JFrog has its own container registry now :)
  4. Registries can act as remote and virtual container repositories, with rich metadata, which are useful factors in DevOps.
  5. To gain valuable insights about your artifacts.
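To make the registry idea concrete: most registries implement the Docker Registry HTTP API v2, where `GET /v2/_catalog` lists repositories and `GET /v2/<name>/tags/list` lists an image's tags. The sketch below parses responses in that shape; the payloads are made-up sample data, not output from a live registry:

```python
import json

# Sample JSON in the shape returned by the Docker Registry HTTP API v2
catalog_response = '{"repositories": ["team/frontend", "team/backend"]}'
tags_response = '{"name": "team/frontend", "tags": ["1.0", "1.1", "latest"]}'

repositories = json.loads(catalog_response)["repositories"]
tags = json.loads(tags_response)["tags"]

for repo in repositories:
    print("repository:", repo)
print("team/frontend tags:", ", ".join(tags))
```

Against a real registry, the same payloads would come from an HTTPS request to, e.g., `https://registry.example.com/v2/_catalog` (the hostname here is a placeholder).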

3. Golang and DevOps Will Thrive Together

Golang as a programming language will create an even greater impact on the DevOps community than it already has. Most DevOps tools, like Kubernetes, Helm, Docker, etcd, and Istio, are written in Go. Joe Beda, the creator of Kubernetes, writes about why Kubernetes is written in Go.

Golang is excellent for working in environments where you can’t or don’t want to install dependencies since it compiles into a stand-alone binary. Without having to get the whole environment set up, you can get things done in a much faster way than other programming languages.

Areas of Go development

JFrog surveyed over a thousand developers at the most recent GopherCon conferences in London and San Diego to better understand the Go community and the general sentiment toward Go modules.

What did they find?

  • Go Developers are highly engaged
  • Over 82% of Go developers use a Go version capable of using Go modules, and nearly the same total either use them now or expect to by mid-2020.
  • Go Modules adoption is high
  • GoLang is used widely across industries
  • Choosing Go modules is hard

Also, take a look at my article that talks about what makes Go so good for DevOps.

4. Security Will Become an Even Higher Priority

Security is getting more priority in the development life cycle than ever. It is becoming everybody's responsibility rather than just the security experts'.

Global DevSecOps market

Even though the word DevSecOps seems like just another buzzword, giving security more importance is required. DevSecOps creates security awareness and a shared knowledge base within the organization to tighten security in the software development process. The Capital One breach earlier this year made cloud security a pressing concern, and hence the focus is on securing data in the public cloud.

The Samsung Note 7 disaster explains a lot about why quality and safety checks are so crucial at the beginning of the process and at each stage of the development life cycle. Specialists speculate that one of the problems with the Note 7 phones involved its battery management system. This system monitors the electric current and stops the charging process when the battery is full. A fault in this system led the battery cell to overcharge, become unstable, and eventually explode.

This bug fix cost Samsung nearly $17 billion. Had the company caught the issue earlier, they could have saved a lot of money and the brand reputation.

To develop a strategic approach to make security a must in the organization, here are some points to take into consideration:

  • Start small, have security checkpoints at each stage of the development life cycle from the beginning.
  • For developers, make security part of their job and part of their performance review.
  • Development and operation teams need to treat security as quality.
  • Don't separate DevOps and security; unite them thoroughly, make it your engineering team's mantra.
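As a tiny illustration of what such a checkpoint can look like, the sketch below fails a hypothetical pipeline stage when a pinned dependency version appears in an advisory list. All package names, versions, and advisories here are invented for the example:

```python
# Hypothetical pinned dependencies, as parsed from a lockfile
pinned = {"libfoo": "1.2.0", "libbar": "2.0.1", "libbaz": "0.9.4"}

# Hypothetical advisory feed: package -> set of known-vulnerable versions
advisories = {"libbar": {"2.0.0", "2.0.1"}, "libqux": {"3.1.0"}}

def vulnerable(pinned, advisories):
    """Return the pinned packages whose exact version appears in an advisory."""
    return sorted(
        name for name, version in pinned.items()
        if version in advisories.get(name, set())
    )

findings = vulnerable(pinned, advisories)
if findings:
    print("Security gate FAILED for:", ", ".join(findings))
```

A real pipeline would pull advisories from a vulnerability database and fail the build on any finding; the point is that the check runs automatically at every stage, not once at the end.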

Chaos engineering principles will be adopted by many firms to check the stability and reliability of their systems and to surface security concerns. Intentionally harming systems can help you find bigger bugs and makes it harder for hackers to find loopholes. This also helps organizations find bugs before their customers do. The aim is to keep making your systems stronger than ever.
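In miniature, chaos engineering means injecting failures on purpose and verifying that the recovery path really works. The sketch below (all names are invented) randomly fails a dependency call and checks that a simple retry wrapper still returns a correct answer:

```python
import random

random.seed(42)  # deterministic for this example

def flaky_service():
    """A dependency that fails roughly half the time, simulating injected chaos."""
    if random.random() < 0.5:
        raise ConnectionError("injected failure")
    return "ok"

def call_with_retry(fn, attempts=5):
    """The recovery path under test: retry until success, or give up."""
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue
    raise RuntimeError("service never recovered")

result = call_with_retry(flaky_service)
print(result)
```

Production chaos tools (e.g., Chaos Monkey) do the same thing at the infrastructure level, killing instances or pods to verify the system's self-healing actually triggers.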

5. Open Source Will Grow Beyond Boundaries

Open source gets more and more attention because of the advantages and flexibility it brings to developers. Open source is on the move: a recent survey by Synopsys found that almost 70% of corporate organizations either contribute to or have open-source projects.

Why Open Source?

Open-source software is a great way for developers to improve their skills. Open source provides a path for developers to:

  • Learn new techniques and efficient ways to solve problems.
  • Collaborate and gain experience working together on projects.
  • Feel a sense of belonging; contributing to open-source software makes you part of something big, a community with the same goals and mindset.

At the recent Open Source India 2019 conference, we surveyed almost 300 open-source professionals; below are the responses when we asked why they liked open-source software. Customization is what most people like about open-source software.

Why developers like open source

A recent research study by CB Insights estimates that the open-source services industry will exceed $17B in 2019 and reach nearly $33B by 2022.

The big giants like Microsoft, Google, Intel, and Facebook — which are not open-source companies — are actively contributing to various projects on GitHub. Google employees have made 5,500 collective contributions in 2018. Many of these contributions have helped smaller, independent projects.

There's a lot of support for Google's open-source projects like Kubernetes, Istio, and Knative, which are in high demand. As corporate-sponsored projects become more popular, independent developers will continue to contribute. This shows that the giants should come forward and help the open-source community grow.

For example, Microsoft’s Visual Studio Code project has over 19,000 contributors in total. With thousands of developers contributing, these tech giants benefit from the free developer input and direct user feedback. This allows organizations to build better software faster. Open source technology has definitely gone mainstream and has a bright future.

Cheryl Hung, Director of Ecosystem at the Cloud Native Computing Foundation, made it clear in her recent talk at The Linux Foundation Open Source Summit Europe that large companies are now working on open-source projects, especially Kubernetes, which has created a huge community.

6. Serverless Is Still New but Has a Bright Future

Deploying in milliseconds is the future, and many firms are already making full use of serverless architecture. The serverless market is expected to reach $7.7B by 2021. According to RightScale's 2018 State of the Cloud report, serverless is the fastest-growing cloud service model today, with an annual growth rate of 75%, and it is expected to go beyond expectations in 2020.

Current serverless computing options include:

  • AWS Lambda
  • Microsoft Azure
  • Google Cloud Platform
  • IBM Bluemix/OpenWhisk

Why developers prefer serverless:

  • Developer productivity
  • Faster deployments
  • Enhanced scalability
  • Great user experience
  • Lower costs and less infrastructure to worry about

Adopting serverless

In May 2017, Microsoft CEO, Satya Nadella, acknowledged the potential of serverless and its ability to change the mechanics of cloud computing.

He said, “But one of the things that I think is going to change how we think about logic completely is 'serverless'… So the serverless computation is going to fundamentally not only change the economics of what is back-end computing, but it’s going to be the core of the future of distributed computing.”

Lego’s Journey to Serverless

Lego's journey to serverless shows how your journey can start with a small step and end up a big success. A Black Friday/Cyber Monday disaster made them move to serverless. Lego had a legacy system that included Oracle ATG, with eight servers talking to the same database, SAP behind them, and a tax system behind that.

With this legacy system, they went live for their annual Black Friday/Cyber Monday event, which turned into a disaster when the system couldn't handle the sharp vertical traffic peak. A cascade of events followed: the tax system went down first, which took SAP down, and as a result the whole Lego e-commerce platform was down for a flat two hours. This put them at a considerable loss.

Lego's serverless journey

This event made them think about serverless. Why?

After the disaster, the Lego team decided to move to the cloud, put a simple API in front, put Lambda behind it, and just use it. This was the first step toward serverless at Lego, and it also pushed them toward a microservices architecture and even DevOps and automation.

The Lego team started with a single Lambda to calculate sales tax, and now it makes extensive use of Lambda.

The whole talk is here.
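For a sense of how small that first serverless step can be, here is a sketch of an AWS-Lambda-style Python handler for a sales-tax calculation. The event fields and tax rates are invented for illustration; this is not Lego's actual code:

```python
# A minimal AWS-Lambda-style handler: the runtime calls handler(event, context).
TAX_RATES = {"NY": 0.08875, "CA": 0.0725}  # illustrative rates only

def handler(event, context):
    """Compute sales tax for an order total in the given US state."""
    state = event["state"]
    amount = event["amount"]
    rate = TAX_RATES.get(state)
    if rate is None:
        return {"statusCode": 400, "error": f"unknown state {state}"}
    return {"statusCode": 200, "tax": round(amount * rate, 2)}

# Invoking the handler directly, as a unit test would
print(handler({"state": "NY", "amount": 100.0}, None))
```

Because a handler is just a function of an event, it can be unit-tested locally and deployed without provisioning any servers, which is exactly the appeal of starting small.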

7. Digital Transformations Will Set Examples for Others

We will see many organizations getting out of their comfort zones and trying out new technologies and even the traditional sectors like healthcare, financial institutes, governments will see an overall drastic improvement with digital transformation by embracing cloud-native and DevOps practices. Let us see some interesting recent case studies.

See how FedEx, a courier delivery services company, found its way to digital transformation. FedEx didn't have enough IT professionals to work with modern cloud-native and DevOps processes, but it didn't stop there. FedEx knew it lacked the right skills in its talent pool of engineers, and hence CIO Rob Carter found a solution: FedEx became a university of sorts, teaching its own engineers advanced computing skills and the modern way of software development.

For this initiative, a team named "The Cloud Dojo" was created. The Dojo comprises a cross-functional team of expert cloud developers, security professionals, and operations specialists, co-located in one place. The aim was to bring traditional engineering up to speed with modern cloud practices: DevOps, cloud-native, rewriting legacy applications to run in the cloud, and automation. This homegrown Cloud Dojo team has reskilled more than 2,500 software programmers.

To date, FedEx has rewritten more than 200 production applications for the cloud, with more than 300 apps on tap. FedEx's Cloud Dojo team won the 2019 CIO 100 Award for IT excellence. Read the whole story and Carter's tips for CIOs.

Box’s Digital Transformation Journey

A few years ago at Box, it was taking up to six months to build a new microservice. Fast forward to today, it takes only a couple of days. How did they manage to speed up? Two key factors made it possible:

  1. Kubernetes technology
  2. DevOps practices

Founded in 2005, Box began as a monolithic PHP application that had grown over time to millions of lines of code. The monolithic nature of the application led to very tightly coupled designs, and this tight coupling got in their way, preventing them from innovating as quickly as they wanted. Bugs in one part of the application would require them to roll back the entire application.

With so many engineers working on the same code base of millions of lines, bugs were not uncommon, and it was increasingly hard to ship features or even bug fixes on time. So they looked for a solution and decided to go with the microservices approach. But then they started to face another set of problems, and that's where Kubernetes came in.

See the full video talk by Kunal Parmar, Senior Engineering Manager at Box.

8. Multi-Cloud Will Flourish to New Heights

The multi-cloud approach will flourish. The majority of enterprises have a hybrid cloud strategy, and many applications are written to run on-prem and off-prem, potentially on multiple public cloud environments. Google's cloud services platform Anthos is an amazing validation that multi-cloud is going to be flexible and cost-effective for software firms.

Azure and AWS being the leaders in this space are going to dictate the multi-cloud future.

According to the recent RightScale 2019 State of the Cloud Report, it is seen that 84% of enterprises have a multi-cloud strategy.

Multi-cloud strategy

Multi-cloud is highly relevant to today's growing market trends. According to a recent IDC survey named "Cloud Repatriation Accelerates in a Multicloud World" multi-cloud best describes today’s cloud reality.

Multi-cloud deployment

While there is a lot of talk about cloud cost optimization and vendor lock-in, multi-cloud addresses some crucial concerns here: it is the model companies use to avoid vendor lock-in, optimize costs, improve security and data sovereignty, and minimize downtime.

Embracing DevOps is just the conversation starter, and there is a long way to go. The number of companies adopting it is increasing day by day, and the growing dependency on the cloud is making the DevOps market a big one. Allied Market Research estimates the DevOps market will reach $9.40B globally by 2023, at an 18.7% CAGR. DevOps brings development and operations together and gives teams greater confidence and freedom to ship with higher velocity and quality.

DevOps isn't done growing yet; it is evolving day by day and has a bright future. As we all know, 2018 was the Year of Enterprise DevOps, according to Forrester. Businesses that follow DevOps practices recover 24 times faster from failures and spend 50% less time remediating security issues, and DevOps is proven to produce happier, more engaged teams.

I hope these DevOps trends give you an idea of where the market is heading and how you can prepare yourself to be more agile and release fast.

Thank you for reading! Please share if you liked it!

Develop this one fundamental skill if you want to become a successful developer

Throughout my career, a multitude of people have asked me what it takes to become a successful developer.

It’s a common question newbies and those looking to switch careers often ask — mostly because they see the potential paycheck. There is also a Hollywood level of coolness attached to working with computers nowadays. Being a programmer or developer is akin to being a doctor or lawyer. There is job security.

But a lot of people who try to enter the profession don’t make it. So what is it that separates those who make it and those who don’t? 

Read full article here

Docker for front-end developers

This is a short and simple guide to Docker, useful for front-end developers.

Since Docker’s release in 2013, the use of containers has been on the rise, and it’s now become a part of the stack in most tech companies out there. Sadly, when it comes to front-end development, this concept is rarely touched.

Therefore, when front-end developers have to interact with containerization, they often struggle a lot. That is exactly what happened to me a few weeks ago when I had to interact with some services in my company that I normally don’t deal with.

The task itself was quite easy, but due to a lack of knowledge of how containerization works, it took almost two full days to complete it. After this experience, I now feel more secure when dealing with containers and CI pipelines, but the whole process was quite painful and long.

The goal of this post is to teach you the core concepts of Docker and how to manipulate containers so you can focus on the tasks you love!

The what and why for Docker

Let’s start by defining what Docker is in plain, approachable language (with some help from Docker Curriculum):

Docker is a tool that allows developers, sys-admins, etc. to easily deploy their applications in a sandbox (called containers) to run on the host operating system.
The key benefit of using containers is that they package up code and all its dependencies so the application runs quickly and reliably regardless of the computing environment.

This decoupling allows container-based applications to be deployed easily and consistently regardless of where the application will be deployed: a cloud server, internal company server, or your personal computer.


In the Docker ecosystem, there are a few key definitions you’ll need to know to understand what the heck they are talking about:

  • Image: The blueprints of your application, which forms the basis of containers. It is a lightweight, standalone, executable package of software that includes everything needed to run an application, i.e., code, runtime, system tools, system libraries, and settings.
  • Containers: These are defined by the image and any additional configuration options provided on starting the container, including but not limited to the network connections and storage options.
  • Docker daemon: The background service running on the host that manages the building, running, and distribution of Docker containers. The daemon is the process that runs in the OS the clients talk to.
  • Docker client: The CLI that allows users to interact with the Docker daemon. It can also be in other forms of clients, too, such as those providing a UI interface.
  • Docker Hub: A registry of images. You can think of the registry as a directory of all available Docker images. If required, you can host your own Docker registries and pull images from there.

‘Hello, World!’ demo

To fully understand the aforementioned terminologies, let’s set up Docker and run an example.

The first step is installing Docker on your machine. To do that, go to the official Docker page, choose your current OS, and start the download. You might have to create an account, but don’t worry, they won’t charge you in any of these steps.

After installing Docker, open your terminal and execute docker run hello-world. You should see the following message:

➜ ~ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

Let’s see what actually happened behind the scenes:

  1. docker is the command that enables you to communicate with the Docker client.
  2. When you run docker run [name-of-image], the Docker daemon will first check if you have a local copy of that image on your computer. Otherwise, it will pull the image from Docker Hub. In this case, the name of the image is hello-world.
  3. Once you have a local copy of the image, the Docker daemon will create a container from it, which will produce the message Hello from Docker!
  4. The Docker daemon then streams the output to the Docker client and sends it to your terminal.

Node.js demo

The “Hello, World!” Docker demo was quick and easy, but the truth is we were not using all Docker’s capabilities. Let’s do something more interesting. Let’s run a Docker container using Node.js.

So, as you might guess, we need to somehow set up a Node environment in Docker. Luckily, the Docker team has created an amazing marketplace where you can search for Docker images inside their public Docker Hub. To look for a Node.js image, you just need to type “node” in the search bar, and you most probably will find this one.

So the first step is to pull the image from the Docker Hub, as shown below:

➜ ~ docker pull node

Then you need to set up a basic Node app. Create a file called node-test.js, and let’s do a simple HTTP request using JSON Placeholder. The following snippet will fetch a Todo and print the title:

const https = require('https');

https
  .get('https://jsonplaceholder.typicode.com/todos/1', response => {
    let todo = '';

    response.on('data', chunk => {
      todo += chunk;
    });

    response.on('end', () => {
      console.log(`The title is "${JSON.parse(todo).title}"`);
    });
  })
  .on('error', error => {
    console.error('Error: ' + error.message);
  });

I wanted to avoid using external dependencies like node-fetch or axios to keep the focus of the example on Node itself, not on dependency management.

Let’s see how to run a single file using the Node image and explain the docker run flags:

➜ ~ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app -w /usr/src/app node node node-test.js

  • -it runs the container in the interactive mode, where you can execute several commands inside the container.
  • --rm automatically removes the container after finishing its execution.
  • --name [name] provides a name to the process running in the Docker daemon.
  • -v [local-path: docker-path] mounts a local directory into Docker, which allows exchanging information or access to the file system of the current system. This is one of my favorite features of Docker!
  • -w [docker-path] sets the working directory (start route). By default, this is /.
  • node is the name of the image to run. It always comes after all the docker run flags.
  • node node-test.js are instructions for the container. These always come after the name of the image.

The output of running the previous command should be: The title is "delectus aut autem".

React.js demo

Since this post is focused on front-end developers, let’s run a React application in Docker!

Let’s start with a base project. For that, I recommend using the create-react-app CLI, but you can use whatever project you have at hand; the process will be the same.

➜ ~ npx create-react-app react-test
➜ ~ cd react-test
➜ ~ yarn start

You should be able to see the homepage of the create-react-app project. Then, let’s introduce a new concept, the Dockerfile.

In essence, a Dockerfile is a simple text file with instructions on how to build your Docker images. In this file, you’d normally specify the image you want to use, which files will be inside, and whether you need to execute some commands before building.

Let’s now create a file inside the root of the react-test project. Name this Dockerfile, and write the following:

# Select the image to use
FROM node

## Install dependencies in the root of the Container
COPY package.json yarn.lock ./
ENV NODE_PATH=/node_modules
ENV PATH=$PATH:/node_modules/.bin
RUN yarn

# Add project files to /app route in Container
ADD . /app

# Set working dir to /app
WORKDIR /app

# expose port 3000
EXPOSE 3000
When working in yarn projects, the recommendation is to move node_modules out of /app and into the container root. This takes advantage of the cache that yarn provides. Therefore, you can freely do rm -rf node_modules/ inside your React application.

After that, you can build a new image given the above Dockerfile, which will run the commands defined step by step.

➜ ~ docker image build -t react:test .

To check if the Docker image is available, you can run docker image ls.

➜ ~ docker image ls
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
react         test     b530cde7aba1   50 minutes ago   1.18GB
hello-world   latest   fce289e99eb9   7 months ago     1.84kB

Now it’s time to run the container by using the command you used in the previous examples: docker run.

➜ ~ docker run -it -p 3000:3000 react:test /bin/bash

Be aware of the -it flag, which, after you run the command, will give you a prompt inside the container. Here, you can run the same commands as in your local environment, e.g., yarn start or yarn build.

To quit the container, just type exit, but remember that the changes you make in the container won’t remain when you restart it. In case you want to keep the changes to the container in your file system, you can use the -v flag and mount the current directory into /app.

➜ ~ docker run -it -p 3000:3000 -v $(pwd):/app react:test /bin/bash

root@<container-id>:/app# yarn build

After the command is finished, you can check that you now have a /build folder inside your local project.


This has been an amazing journey into the fundamentals of how Docker works.

Learn More

Thanks for reading

WordPress in Docker. Part 1: Dockerization

This entry-level guide will tell you why and how to Dockerize your WordPress projects.