What’s the job of a developer? Writing software, of course. But that wasn’t always the case.

Back in the day, each application had its own server. And since each server has a finite amount of resources, developers constantly had to think about not overloading its capacity. If an application needed to run on a different server, the whole setup had to be repeated from scratch.

Enterprises soon realized that this was rather inefficient: on the one hand, developers weren’t happy because they had to worry about things other than their code. On the other hand, plenty of compute resources went unused whenever an application didn’t take up 100 percent of the server’s capacity.

Fast-forward to today: when was the last time you worried about server architecture? Probably a while back.

That doesn’t mean that we’ve gotten rid of all considerations regarding CPU, memory, storage, and so on. Even though developers are increasingly being freed from such concerns, the process is far from over.

As of now, we haven’t reached the end of the road, where virtualization means never worrying about servers again. And it’s not yet clear which technology will win out. Will it be virtual machines? Containers? Or serverless computing? It’s an ongoing debate, and one that is worth studying in detail.

Is Serverless The End Of Kubernetes?

A comparison of both technologies to stop the debate about what is better.


The advent of virtual machines

Enter the 1960s, when virtual machines (VMs) were first invented. Instead of using a bare-metal server for a single application at a time, people started thinking about spinning up multiple operating systems (OS) on one server. This would allow multiple applications to run separately, each on its own OS.

At IBM and other companies, engineers achieved just that: they emulated the physical server, i.e., they created virtual representations of it. A so-called hypervisor is in charge of allocating compute resources to the VMs and making sure that the VMs don’t interfere with one another.

Not only does this use the compute resources of one server more efficiently. You can also run an application on many different servers without having to reconfigure it each time. And if a server doesn’t have a VM with the right OS, you can just clone one from another server.

In addition to all that, VMs can make servers more secure because you can scan them for malicious software. If you find some, you can just restore the VM to an earlier snapshot, which essentially erases the malware.

With these advantages in mind, it’s no wonder that VMs dominate the space, and bare-metal deployments are pretty rare. Linux distributions such as Ubuntu are common guest operating systems. Java and Python, meanwhile, run on process VMs (stripped-down virtual machines such as the JVM and the CPython interpreter), which lets developers execute their code on any machine.
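That last kind of VM is worth a quick look: a language runtime compiles source code to portable bytecode and executes it, so the same program runs wherever the runtime is installed. Here’s a minimal sketch in Python, using the standard-library `dis` module to peek at the bytecode instructions CPython’s virtual machine actually executes (the exact instruction names vary by Python version):

```python
import dis

def add(a, b):
    # CPython compiles this function to bytecode for its
    # process VM, not to native machine code
    return a + b

# List the VM instructions CPython will execute for add()
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

The bytecode is identical on Windows, Linux, or macOS; the portability that system VMs provide at the whole-OS level, process VMs provide at the level of a single program.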


