The server is dead, long live the server!

Or so the battle cry of the serverless revolution goes. Take even a quick glance through the industry press of the last few years, and it would be easy to conclude that the traditional server model is dead, and that within a few years we will all be running serverless architectures.

As anyone who works in the industry knows, and as we’ve also pointed out in our article on the state of serverless computing, this isn’t true. Despite many articles expounding the virtues of the serverless revolution, it has not come to pass. In fact, recent research indicates that the revolution may have already stalled.

Some of the promises made for serverless models have undoubtedly been realized, but not all of them. Not by a long shot.


In this article, I want to look at why, although serverless models have found great utility in specific, well-defined circumstances, their lack of agility and flexibility still stands in the way of more widespread adoption.

The Promise of Serverless Computing

Before we get to the problems with serverless computing, let’s look at what it was supposed to provide. The promises of the serverless revolution have been numerous and, at times, very ambitious.

For those new to the term, a quick definition. Serverless computing refers to an architecture in which applications (or parts of applications) run on demand within execution environments that are typically hosted remotely, though it is also possible to host serverless systems in-house. Building resilient serverless systems has been a major concern of sysadmins and SaaS companies alike over the past few years, because (it is claimed) this architecture offers several key advantages over the “traditional” server and client model:

  1. Serverless models don’t require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can produce generic code, upload it to the serverless framework, and watch it run (see the sketch after this list).
  2. The resources used on serverless frameworks are typically billed by the minute (or even by the second), so clients only pay for the time their code is actually running. This contrasts favorably with the traditional cloud-based virtual machine, where you often end up paying for a machine that sits idle much of the time.
  3. Scalability has also been a major draw. Resources in serverless frameworks can be assigned dynamically, meaning they can absorb sudden spikes in demand.
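
To make the first point concrete, here is a minimal sketch of what that “generic code” can look like, written in the style of an AWS Lambda Python handler. The function name, the event shape, and the API-gateway-style response format are assumptions chosen for illustration; the point is simply that the developer writes a function, not a server.

```python
import json

# A minimal sketch of a serverless function, in the style of an AWS Lambda
# Python handler. There is no server process to manage: the platform invokes
# handler() on demand and tears the execution environment down afterwards.
def handler(event, context):
    # The triggering event is assumed here to be an HTTP request routed
    # through an API gateway; other event sources (queues, storage
    # notifications, timers) deliver different event shapes.
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # Billing covers only the time spent executing this body; once the
    # function returns, no resources remain allocated to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Nothing in the snippet references an operating system, a web server, or a scaling policy; that division of responsibility is the core of the serverless pitch.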

In short, this means that serverless models are supposed to deliver flexible, cheap, scalable solutions. When put like that, it’s amazing that we didn’t come up with this idea earlier.

