With the advent of serverless computing services such as AWS Lambda, Google Cloud Functions, Azure Functions, and Spotinst, developers are reaping the many benefits that serverless computing offers, chief among them less responsibility for managing an app’s backend, improved automation, and effortless scaling.

These benefits let developers focus on innovation and the product itself instead of spending time on admin tasks and infrastructure. Serverless computing is also a natural fit for event-driven applications, a fast-growing category that is likely to represent a larger share of future corporate application portfolios.

Nevertheless, going serverless comes with challenges that need to be addressed before it can truly make its mark on the software development world.

Serverless Security Concerns

Many enterprises already employ serverless architectures to build and deploy their services and software. But even though serverless has greatly helped developers with its inherent scalability and compatibility with other cloud services, it is not impervious to security concerns.

Research by PureSec highlights that misconfigured cloud services and erroneous settings are the most frequent causes of leaks of confidential, sensitive information. Misconfiguration can give attackers an entry point into a serverless architecture and open the door to potential Man-in-the-Middle (MitM) attacks.

In fact, serverless architecture makes things so convenient for developers that it can lead to “poor code hygiene”, which enlarges the attack surface and lures developers into making poor security decisions. This doesn’t mean you should not go serverless, but you should be mindful of security. There’s a great article about identifying serverless security risks.
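One common hygiene issue is trusting event data blindly. The sketch below is a hypothetical AWS Lambda handler in Python (the function name, event field, and the assumption that the ImageMagick `convert` binary is available are all illustrative, not from the article); it contrasts passing untrusted event input into a shell command with validating it first.

```python
import re
import subprocess

ALLOWED_NAME = re.compile(r"^[A-Za-z0-9_.-]{1,64}$")  # whitelist of safe characters

def handler(event, context):
    """Hypothetical Lambda handler that resizes an image named in the event."""
    image_name = event.get("image_name", "")

    # Risky: building a shell string from untrusted event data invites injection.
    # subprocess.run(f"convert {image_name} -resize 50% out.png", shell=True)

    # Safer: validate the input against a whitelist and avoid the shell entirely.
    if not ALLOWED_NAME.match(image_name):
        return {"statusCode": 400, "body": "invalid image name"}

    subprocess.run(
        ["convert", image_name, "-resize", "50%", "out.png"],
        check=True,
    )
    return {"statusCode": 200, "body": f"resized {image_name}"}
```

The same principle applies to scoping permissions: granting each function only the resources it actually needs limits what a misconfiguration can expose.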

Dormancy Concerns — Cold vs Warm

With serverless architectures, generally no copies of a function are running on standby, so a function’s first invocation is a cold hit: the runtime and code must be initialized before the request is served, unlike a warm hit, where the code is already running before the request arrives. The result is increased invocation latency.
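As a rough illustration, on AWS Lambda’s Python runtime any work done at module scope runs once per cold start and is reused by subsequent warm invocations of the same container, so expensive setup is usually hoisted out of the handler. The bucket name and config key below are hypothetical.

```python
import json
import boto3

# Module-scope setup runs once per cold start; warm invocations reuse it.
s3 = boto3.client("s3")
CONFIG_BUCKET = "my-config-bucket"  # hypothetical bucket name

# Load configuration during initialization instead of on every request.
_config = json.loads(
    s3.get_object(Bucket=CONFIG_BUCKET, Key="config.json")["Body"].read()
)

def handler(event, context):
    """Warm invocations skip the setup above and only run this body."""
    greeting = _config.get("greeting", "hello")
    return {"statusCode": 200, "body": f"{greeting}, {event.get('name', 'world')}"}
```

Beyond this, teams often keep copies warm with scheduled “keep-warm” pings or, on AWS, provisioned concurrency, trading a small standing cost for lower tail latency.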

#serverless
