Serverless is the next iteration in cloud management. First, we let go of physical hardware servers and moved everything into the cloud because, hey, why bother managing all that hardware? That shift turned cloud infrastructure into big business for behemoths like Amazon and Google. Now they’re saying: why bother managing a server at all? What you really want to do is run code, right? Serverless is an architecture where code runs in a managed service container that is totally isolated from server-level concerns like operating systems, web servers, and updates.
In truth, this creates a whole host of trade-offs. The promised benefits are simplicity, automatic scaling, and low cost. The idea is that to create an application, all you have to do is upload your code and you’re off! Because the service provider manages provisioning for you automatically, scaling is fast and transparent. And because you’re only paying for the time you actually use, instead of a fixed cost, you save money. Sounds perfect. Indeed, it sounds like a great sell to managers and corporate executives. But while serverless is fantastic in some use cases, it’s not that simple.
For example, Amazon AWS is a fantastic platform in some ways, but I doubt too many people have accused it of being “simple.” It’s an expert platform made for technical specialists and engineered to provide maximum control. This results in a bewildering array of options and documentation that sometimes leaves me aching for a simple command prompt and a few “sudo apt-get install” commands. More than a few startups have simply traded their Linux gurus for AWS gurus.
The benefit of simplicity can be offset by the cost of learning the service provider’s system. Instead of interacting with well-known, proven technologies like Linux and Apache or Nginx, you’re working with a new system like Amazon’s or Google’s cloud infrastructure. Serverless functions are private by default and have to be exposed through an API gateway to become public, adding another layer of complexity. As the complexity of the serverless application grows, the complexity of these proprietary interactions grows with it. This has two effects: 1) your team has to become expert in a totally new system, often every bit as complex as the one it already knew; and 2) you’re becoming increasingly locked into whatever cloud provider you chose, just as the benefits of even using a serverless environment diminish (a bit like one of those finger-trap toys).
What are the ideal use cases for serverless? My thought is: discrete units of code that are not likely to grow greatly in complexity but that have unpredictable or intermittent usage (thus maximizing the benefit of the pay-as-you-go model) or the potential to vary greatly in demand (thus benefiting from the great scaling capacity of serverless technologies).
Serverless functions should also be stateless (there’s no disk to write to) and have relatively short running times. If you need to write to disk, serverless is out, although with databases, cloud logging, and cloud file repositories, it’s hard to imagine too many places where this is a huge problem. Further, serverless systems are optimized for short-running code. You start to lose the cost benefit if your code runs for a long time, and most providers enforce maximum execution times of between 300 and 900 seconds.
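To make the statelessness point concrete, here is a minimal sketch of what a stateless handler looks like in plain Java. The class and method names are hypothetical and not tied to any provider SDK; a real AWS Lambda handler would implement the RequestHandler interface from the aws-lambda-java-core library, but the shape is the same: no instance fields, no disk writes, and output derived entirely from the input.

```java
// Minimal sketch of a stateless serverless handler (hypothetical class,
// not tied to any provider SDK). A real AWS Lambda handler would implement
// com.amazonaws.services.lambda.runtime.RequestHandler<I, O>, but the
// constraints are the same: no instance state, no local disk, short-running.
public class GreetingHandler {

    // No fields: each invocation is independent, so the provider can
    // spin containers up and down freely between calls without losing
    // anything the function depends on.
    public String handleRequest(String name) {
        return "Hello, " + name + "!";
    }
}
```

Anything that must survive between invocations, such as session state, uploaded files, or logs, goes to an external service like a database or object store rather than to local disk.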
Choose Between Java Serverless Options
Sign Up for AWS Account with Billing
Create AWS Access Keys
Install and Configure AWS CLI
Create AWS Role
Download the Project from GitHub
Configure Okta JWT Auth
Create the Lambda
Create an AWS API Gateway
Test Your API Gateway URL
Generate a JWT Token
Test the Protected Serverless Function
#serverless #aws #java #cloud #web-development