Alfie Mellor


How to Build Serverless Applications on AWS?

Overview of Serverless Architecture

Serverless does not mean that there is no server. It means you don’t have to manage the server on which your application runs. Not very long ago, companies and individuals bought and managed their own hardware and software, from networking infrastructure and data stores to servers, hiring specialized teams for each responsibility. Companies then started outsourcing some of these duties, and eventually the cloud arrived.

Combined with virtualization, the cloud laid the groundwork for Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and similar offerings, which made companies and individuals happy. These technologies and trends allowed for more outsourcing and, as a result, more focus on business logic. Lead times shortened, and taking software from requirements gathering to production became easier, cheaper, and quicker.

Then came the containerization wave, where single units are deployed that use just enough resources from the host, and the same thing happened: new services emerged, such as Containers as a Service (CaaS). This article gives an overview of building Serverless applications on AWS.

It is all about building and deploying software more easily, cheaply, and quickly, while reducing risk and increasing efficiency. Serverless is simply a step forward in this movement or evolution. With each of these advancements (cloud, IaaS, PaaS, CaaS), more of the burden shifted from manual work to outsourced management and maintenance of the infrastructure, the platform, or even higher levels of the stack. But some things still had to be handled by developers: building server-side code for functionality unrelated to the core product, such as routing, security, and authentication and authorization, as well as supporting and debugging the server side of the application. Serverless came to solve one more problem: building and maintaining the server side of the application.

Serverless means building software by focusing on business logic without thinking about how you are going to serve it, as if there were no server, just business logic. This doesn’t mean that developers do no server-side work at all; there may still be some configuration and integration work. But problems like debugging server technologies, scaling, and handling failovers, all the problems back-end developers used to go through, are gone with Serverless. So, Serverless is building software without worrying about servers.
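In practice, "just business logic" often boils down to a single handler function. A minimal sketch in Python (the event shape and greeting are illustrative, not taken from any specific application):

```python
# A minimal Lambda-style handler: the platform receives the event,
# invokes this function, and returns whatever it returns to the caller.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Everything around this function, such as provisioning, scaling, and failover, is the provider's problem.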

Why Serverless?

Serverless helps users build modern, higher-level applications with increased speed and agility and a lower cost of ownership. Building a Serverless application means the developer doesn’t have to worry about operating or managing servers; instead, he can focus entirely on developing the core product, i.e., the project he has been assigned. This saves effort, energy, and time, all of which can be put into building the best-quality products.

Benefits of Building Serverless Applications

  • No Server Management – Since there is no server to provision, the user doesn’t have to manage any.
  • Flexible Scaling – Application scaling adjusts automatically to the required capacity.
  • Pay for Value – Users pay only for what they use.
  • Automated High Availability – Serverless offers automated fault tolerance and built-in availability. Users don’t have to worry about these capabilities because the underlying services take care of them.

Some Important Constraints for Building Serverless Applications

  • The Serverless way requires a new way of thinking.
  • New architectural patterns and styles are used in building the software as traditional patterns may not be suitable.
  • Event-driven and distributed patterns work well in this model.
  • The architectural design chosen might have to be refined.

How to Build Serverless Applications?

Suppose an enterprise has a client on one side and the developers on the other, and in between, the main thing happens. Basically, we write business logic and deploy it to a provider, say Amazon, which encapsulates code units in the form of functions; that is where the FaaS acronym, Function as a Service, comes from. Whenever a client request comes to your application, a notification reaches a service that is listening for client requests. The service then tries to locate the code responsible for answering the request. When it finds it, it loads it into a container, the code gets executed, the answer is constructed, and it is sent back to the client.
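The request flow above can be pictured as a toy dispatcher: look up the registered function for a request, "load" it, execute it, and return the answer. This is purely illustrative (a real provider does this across fleets of machines, with containers and queues in between):

```python
# In-memory function registry: route -> function, the way a deploy
# step would register uploaded code with the provider.
FUNCTIONS = {}

def register(route, fn):
    FUNCTIONS[route] = fn

def dispatch(route, event):
    # Locate the code responsible for answering this request.
    fn = FUNCTIONS.get(route)
    if fn is None:
        return {"statusCode": 404, "body": "no function registered for this route"}
    # "Load into a container" and execute; the return value is the response.
    return fn(event)

# Register one function, as a deployment would.
register("GET /hello", lambda event: {"statusCode": 200, "body": "hi " + event["user"]})
```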

The other aspect of building Serverless applications is that many responsibilities, such as authentication and routing, are done for you through Backend as a Service. You may have heard of Amazon API Gateway; such technologies belong to Serverless computing. They are considered back-end services, and you can use them to your advantage. This is Serverless in a nutshell.

Benefits of Enabling Serverless Architecture

  • Reduces server-side management and maintenance work
  • Reduces cost
  • Reduces risk and increases efficiency
  • No need to worry about security patches and updates
  • Auto-scaling of resources
  • Significantly shorter prototyping cycles and lead times

Overview of AWS Serverless Solutions

AWS Serverless comes down to a few vital things. First, you should not have to think about managing servers: no physical machines, no virtual machines, no containers, nothing that involves thinking about an operating system or individual compute resources.

Second, AWS Serverless services scale with usage: as requests come in, AWS takes those requests, processes them using the service, and responds as necessary.

Third, users don’t have to pay for idle. A number of industry statistics suggest that in most enterprises, the majority of IT resources sit idle around 80% of the time, which is a lot of money spent on capacity that is rarely or never used. In the world of Serverless, you don’t have to think about capacity planning in the traditional way.

Overview of AWS Lambda

AWS Lambda processes trillions of requests across hundreds of thousands of active customers every month. Lambda is currently available in all 18 AWS regions, and as a foundational service it launches in every new region that AWS opens. AWS has a number of customers using Lambda to build highly available, scalable, and secure services: Thomson Reuters processes 4,000 requests per second for its product insights analytics platform, using Lambda and Amazon Kinesis to track mobile metrics in real time, and FINRA performs half a trillion validations of stock trades daily for fraud and anomaly detection.

Customers are adopting Lambda because running highly available, large-scale systems is a lot of work. First, you need load balancing at every layer of your architecture. You do this so you have redundancy, but also so that you can handle more traffic than a single server is able to serve.

When enterprises plan to build a new service, they need to prepare for and provision these load-balancing layers between the primary architecture components. You also have to make sure these systems are configured with appropriate routing rules so that your load is distributed evenly.

Second, on the point of what a single server can serve, you need to support scaling up, so that if you have more traffic than your current service layer can handle, you can continue to serve it. You also need to be able to scale down after traffic peaks, so that you are not indefinitely over-provisioned, which of course is wasteful. When you plan to build a new service, you also need to prepare for and provision auto-scaling layers that sit in front of your fleet, evaluate its capacity, scale up with traffic volume and stress on your server pool, and scale back down as peak traffic decreases.

Third, continuing on the point of system failure: you need to consider not only when a single host fails but also a complete breakdown of an entire data center or Availability Zone. For this, you need to instrument each of your services with health checks based on fundamental service metrics and, if a host shows as unhealthy, stop routing traffic to it. Then you need to repeat this for every single system and service component that you build.

Lambda takes care of all this system administration and more, helping developers focus on business logic and writing code rather than administering systems.

AWS Lambda Features

  • Load Balancing
  • Auto Scaling
  • Handling Failures
  • Preserving Security Isolation
  • Managing Utilization

Overview of Lambda Architecture

Lambda’s architecture is split into a control plane and a data plane. The control plane is where engineers and developers typically interact with the Lambda service. On that part of the system there is a set of developer tools, such as the Lambda console, the SAM CLI, and IDE toolchains. Underneath those tools is a set of control-plane APIs for configuration and resource management. When you create or upload a function, you interact with these APIs, and resource management packages up your code and puts it into the Lambda service. At this point, the data plane picks up.

Data plane – First, asynchronous invokes arrive from systems like DynamoDB, Kinesis, and SQS, and a group of components that work together, i.e., pollers, state managers, and a leasing service, processes those events. Events processed through that system are then sent to the synchronous invoke path. In the synchronous invoke area of the system, there is a front end, the counting service, the worker manager, the workers, and the placement service.

Front-end invoke – Responsible for orchestrating both synchronous and asynchronous invokes. The first thing it does is authenticate callers, making sure that only valid callers reach the function and call invoke.

Counting service – Responsible for providing a region-wide view of customer concurrency to help enforce the configured concurrency limits. It keeps track of the current concurrency of the functions executing on the service: if it is below the granted limit, the invoke is allowed to proceed, and if it hits the concurrency limit, the request will be throttled. AWS has some intelligence in place to help make sure users get their full concurrency. The service uses a quorum-based protocol designed for high throughput and a low latency of under 1.5 milliseconds.
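At its core, what the counting service enforces can be modeled as a simple in-flight counter with a limit. The sketch below is a single-process stand-in for what is really a distributed, quorum-based system:

```python
class CountingService:
    """Toy model: tracks in-flight executions and throttles past the limit."""

    def __init__(self, concurrency_limit):
        self.limit = concurrency_limit
        self.in_flight = 0

    def try_acquire(self):
        # Below the granted concurrency: the invoke proceeds.
        if self.in_flight >= self.limit:
            return False  # at the limit: throttle this request
        self.in_flight += 1
        return True

    def release(self):
        # An invoke finished; free up one slot.
        self.in_flight -= 1
```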

Worker manager – The worker manager is responsible for tracking container idle and busy state and scheduling incoming invoke requests to the available containers. It handles the workflow steps around function invocation, including environment variable setup and compute metering. One key thing it does is optimize for running code on a warm sandbox.
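The warm-sandbox optimization can be pictured as: prefer an idle, already-initialized sandbox, and fall back to a cold start only when none exists. Again, a toy single-machine model, not the real service:

```python
class WorkerManager:
    """Toy scheduler: reuses warm (idle) sandboxes, cold-starts otherwise."""

    def __init__(self):
        self.idle = []  # warm sandboxes, ready to be reused

    def schedule(self):
        if self.idle:
            return ("warm", self.idle.pop())    # reuse: no initialization cost
        return ("cold", {"runtime": "loaded"})  # stand-in for provisioning a new sandbox

    def invoke_complete(self, sandbox):
        # The worker notified us that the invoke finished; the sandbox is idle again.
        self.idle.append(sandbox)
```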

Worker – An essential component of the system architecture, responsible for provisioning a secure environment for code execution. It creates and manages a collection of sandboxes and sets limits on them, such as the memory and CPU available for function execution. It downloads customer code and mounts it for performance, and it manages multiple language runtimes. It is also responsible for notifying the worker manager when a sandbox invoke completes.

Placement service – Responsible for placing sandboxes on workers to maximize packing density without impacting the customer experience or cold-start latency. It is the intelligence that helps determine where to put a sandbox when a function is ready for execution. It also monitors worker health and decides when to mark a worker as unhealthy.

Building Serverless Applications on AWS

  • Step 1 – Search for API gateway and open it.
  • Step 2 – Click on Get Started, then select new API.
  • Step 3 – Enter the details i.e., API name and description, and then click on create API.

Then a new window appears. Now you have to create the resource. For this, click on Actions and choose Create Resource. Then fill in the resource name and resource path as per your choice. (There is one option in the window, ‘Enable API Gateway CORS’. Enable it if you want to call the API from a different domain.) Then click on Create Resource.

Step 4 – The resource has been created. The next thing is to create a method. Choose the method as per your choice.

Step 5 – The next thing is choosing the integration point of your method. For this, choose Mock. Now you can see the execution path (Method Request > Integration Request > Integration Response > Method Response).

Step 6 – Now select Integration Response and expand the section. Then expand Body Mapping Templates, generate a template, and save it all.
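For reference, the body mapping template of a mock integration is simply the static JSON you want the API to return; for example (the fields here are arbitrary placeholders, not required names):

```json
{
  "statusCode": 200,
  "message": "Hello from the mock integration"
}
```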

Step 7 – Now, to test the API, you will need an API URL. To get it, you have to deploy the API. For this, click on Actions and then choose Deploy API. Fill in the details and deploy.

Step 8 – Now you will find a URL on the screen. This is the root URL, and you cannot access it directly. You have to combine the HTTP method’s resource path with the URL, so append the method’s resource detail to the root URL.

Step 9 – Next, search for Lambda. There you will find different templates.

Step 10 – For now, use Author from scratch. When you click on it, you will get a new window. Fill in the details and click on Create Function (you have to assign a role; you can create your own. For this, go to IAM, create a role, and assign the required access to it).

Step 11 – Click on Create Function. This creates a new function. Just put in the same JSON that you created and configured before, and click on Save and Test.

Step 12 – Now, the next thing to do is connect your Lambda function with DynamoDB. First, include the AWS SDK. Then create a document client to connect to DynamoDB. Next, set the table name and specify the parameters. After making the changes, save and run your code.
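The article’s console steps use the JavaScript SDK’s document client; the same wiring in Python with boto3 looks roughly like this. The table name and attribute names are placeholders, so match them to your own table:

```python
TABLE_NAME = "Users"  # placeholder: use your DynamoDB table name

def build_item(event):
    # Pure helper: shape the incoming event into a DynamoDB item.
    return {"userId": event["userId"], "name": event["name"]}

def handler(event, context):
    import boto3  # imported lazily so the helper above runs without AWS installed
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    table.put_item(Item=build_item(event))
    return {"statusCode": 200, "body": "saved"}
```

Keeping the item-shaping logic in a pure function makes it easy to test locally before deploying.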

Step 13 – Check that all your parameters are correct.

Step 14 – Configure the test event and create the event.

Step 15 – Click on the test. You will see your data is successfully updated.

Step 16 – Now, go to DynamoDB. You will see that your data has been updated, which means the Lambda function you created is working fine.

Step 17 – The next step is to integrate this lambda function with API Gateway.

Step 18 – Go to the API Gateway console and change the integration type to Lambda Function.

Step 19 – Select the Lambda region, i.e., the region where you have hosted the Lambda function. Click on Save. You will see certain messages; click OK.

Step 20 – Now, you need to deploy this API.

Step 21 – Click on the action and deploy API.

Step 22 – Now you can check the updated data from Postman.

This means we are ready with two components, the API Gateway and the Lambda function, and the integration of both. To proceed further, we need an HTML front end with text boxes, hosted in an S3 bucket. Then we will open the front end in a browser and test from there, entering details into those text boxes. Once submitted, we should be able to see that data in DynamoDB.

#Serverless #aws

Christa Stehr


How To Unite AWS KMS with Serverless Application Model (SAM)

The Basics

AWS KMS is a Key Management Service that lets you create cryptographic keys that you can use to encrypt and decrypt data, as well as other keys. You can read more about it here.

Important points about Keys

Please note that the customer master keys (CMKs) you generate can only be used to encrypt small amounts of data, like passwords or an RSA key. You can use AWS KMS CMKs to generate, encrypt, and decrypt data keys. However, AWS KMS does not store, manage, or track your data keys, or perform cryptographic operations with them.

You must use and manage data keys outside of AWS KMS. The KMS API uses AWS KMS CMKs in its encryption operations, and they cannot accept more than 4 KB (4,096 bytes) of data. To encrypt application data, use the server-side encryption features of an AWS service, or a client-side encryption library such as the AWS Encryption SDK or the Amazon S3 encryption client.
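This data-key pattern is often called envelope encryption: encrypt the payload with a generated data key, then encrypt the data key itself under the CMK and store only the wrapped key alongside the ciphertext. The sketch below simulates both operations with a toy keystream cipher so it runs locally; in real code the wrapping and unwrapping are KMS GenerateDataKey/Decrypt calls, and the payload cipher comes from a library such as the AWS Encryption SDK:

```python
import hashlib
import os

def toy_encrypt(key, data):
    # Stand-in XOR stream cipher (keystream = SHA-256 of key + counter).
    # NOT real cryptography; it only illustrates the envelope pattern.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def envelope_encrypt(cmk, plaintext):
    data_key = os.urandom(32)                  # KMS GenerateDataKey would return this
    ciphertext = toy_encrypt(data_key, plaintext)
    wrapped_key = toy_encrypt(cmk, data_key)   # KMS encrypts the data key under the CMK
    return wrapped_key, ciphertext             # store both; discard the plaintext key

def envelope_decrypt(cmk, wrapped_key, ciphertext):
    data_key = toy_encrypt(cmk, wrapped_key)   # XOR stream: decrypting == encrypting
    return toy_encrypt(data_key, ciphertext)
```

The CMK never touches the bulk data, which is why the 4 KB limit on KMS operations is not a problem in practice.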


We want to create signup and login forms for a website.

Passwords should be encrypted and stored in DynamoDB database.

What do we need?

  1. KMS key to encrypt and decrypt data
  2. DynamoDB table to store password.
  3. Lambda functions & APIs to process Login and Sign up forms.
  4. Sign up/ Login forms in HTML.

Let’s implement it as a Serverless Application Model (SAM) template!

Let’s first create the key that we will use to encrypt and decrypt passwords.

      EncryptionKey:  # logical ID for the key resource
        Type: AWS::KMS::Key
        Properties:
          Description: CMK for encrypting and decrypting
          KeyPolicy:
            Version: '2012-10-17'
            Id: key-default-1
            Statement:
              - Sid: Enable IAM User Permissions
                Effect: Allow
                Principal:
                  AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
                Action: kms:*
                Resource: '*'
              - Sid: Allow administration of the key
                Effect: Allow
                Principal:
                  AWS: !Sub arn:aws:iam::${AWS::AccountId}:user/${KeyAdmin}
                Action:
                  - kms:Create*
                  - kms:Describe*
                  - kms:Enable*
                  - kms:List*
                  - kms:Put*
                  - kms:Update*
                  - kms:Revoke*
                  - kms:Disable*
                  - kms:Get*
                  - kms:Delete*
                  - kms:ScheduleKeyDeletion
                  - kms:CancelKeyDeletion
                Resource: '*'
              - Sid: Allow use of the key
                Effect: Allow
                Principal:
                  AWS: !Sub arn:aws:iam::${AWS::AccountId}:user/${KeyUser}
                Action:
                  - kms:DescribeKey
                  - kms:Encrypt
                  - kms:Decrypt
                  - kms:ReEncrypt*
                  - kms:GenerateDataKey
                  - kms:GenerateDataKeyWithoutPlaintext
                Resource: '*'

The important thing in the above snippet is the KeyPolicy. KMS requires a key administrator and a key user. As a best practice, your key administrator and key user should be two separate users in your organisation. We are allowing all permissions to the root user.

So if your key administrator leaves the organisation, the root user will still be able to delete this key. As you can see, **KeyAdmin** can manage the key but not use it, and **KeyUser** can only use the key. **${KeyAdmin}** and **${KeyUser}** are parameters in the SAM template.

You will be asked to provide values for these parameters during SAM deploy.

#aws #serverless #aws-sam #aws-key-management-service #aws-certification #aws-api-gateway #tutorial-for-beginners #aws-blogs

Ashish parmar


Serverless Applications - Pros and Cons to Help Businesses Decide - Prismetric

In the past few years, especially after Amazon Web Services (AWS) introduced its Lambda platform, serverless architecture became the business realm’s buzzword. The increasing popularity of serverless applications saw market leaders like Netflix, Airbnb, Nike, etc., adopting the serverless architecture to handle their backend functions better. Moreover, serverless architecture’s market size is expected to reach a whopping $9.17 billion by the year 2023.


Why use serverless computing?
As a business, it is best to approach a professional mobile app development company to build apps that are deployed on various servers; nevertheless, businesses should understand that the benefit of serverless applications lies in the ideal business implementations it makes possible, not in the hype created by cloud vendors. With serverless architecture, developers can easily run arbitrary code on demand without worrying about the underlying hardware.

But as is the case with all game-changing trends, many businesses opt for serverless applications just for the sake of being up-to-date with their peers without thinking about the actual need of their business.

Serverless applications work well with stateless use cases, which execute cleanly and hand off to the next operation in a sequence. On the other hand, serverless architecture is not a fit for predictable applications with a lot of reading and writing in the backend system.

Another benefit of working with serverless software architecture is that the third-party service provider charges based on the total number of requests. As the number of requests increases, the charge is bound to increase, but it will still cost significantly less than a dedicated IT infrastructure.
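To make the per-request model concrete, here is a back-of-the-envelope cost calculation. The prices are assumptions for illustration only (check the provider’s current price list), roughly modeled on Lambda’s request-plus-GB-second billing:

```python
def serverless_cost(requests, avg_ms, memory_gb,
                    price_per_million=0.20, price_per_gb_s=0.0000166667):
    # Assumed prices, for illustration only.
    request_cost = requests / 1_000_000 * price_per_million
    compute_cost = requests * (avg_ms / 1000) * memory_gb * price_per_gb_s
    return request_cost + compute_cost
```

One million 100 ms invocations at 128 MB come out well under a dollar with these assumed rates, and zero traffic costs exactly zero, which is the key contrast with an always-on server.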

Defining serverless software architecture
In serverless software architecture, the application logic is implemented in an environment where operating systems, servers, and virtual machines are not visible. The application logic is, of course, still executed on an operating system running on physical servers, but the difference is that managing that infrastructure is the sole responsibility of the service provider, and the mobile app developer focuses only on writing code.

There are two different approaches when it comes to serverless applications. They are

Backend as a service (BaaS)
Function as a service (FaaS)

  1. Backend as a service (BaaS)
    A growing number of third-party services provide server-side logic and maintain its internal state. This has led to applications that have no server-side logic or application-specific backend of their own; they depend on third-party services for everything.

Examples of such third-party services are Auth0 and AWS Cognito (authentication as a service), Amazon Kinesis and Keen IO (analytics as a service), and many more.

  2. Function as a Service (FaaS)
    FaaS is the modern alternative to traditional architecture when the application still requires server-side logic. With Function as a Service, the developer can focus on implementing stateless functions that are triggered by events and communicate efficiently with the external world.

FaaS serverless architecture is majorly used with microservices architecture. AWS Lambda, Google Cloud Functions, etc., are some examples of FaaS implementations.

Pros of Serverless applications
There are specific ways in which serverless applications can redefine how business is done in the modern age, and they have some distinct advantages over traditional cloud platforms. Here are a few –

🔹 Highly Scalable
The flexible nature of serverless architecture makes it ideal for scaling applications. A benefit of serverless applications is that the vendor runs each function in a separate container, allowing them to be optimized automatically and effectively. Moreover, unlike in the traditional cloud, one doesn’t need to purchase a fixed amount of resources with serverless applications and can be as flexible as possible.

🔹 Cost-Effective
As organizations don’t need to spend hundreds and thousands of dollars on hardware, they also don’t need to pay engineers to maintain it. The serverless pricing model is execution-based: the organization is charged according to the executions it has made.

The company using serverless applications is allotted a specific amount of execution time, and the pricing of an execution depends on the memory required. Different costs associated with a physical or virtual server, like presence detection, access authorization, image processing, etc., are completely eliminated with serverless applications.

🔹 Focuses on user experience
As companies don’t have to think about maintaining servers, they can focus on more productive things, like developing and improving customer-facing features. A recent survey says that about 56% of users are either using or planning to use serverless applications in the coming six months.

Moreover, the money companies save with serverless apps, since they don’t have to maintain any hardware, can be utilized to enhance the level of customer service and the features of the apps.

🔹 Ease of migration
It is easy to get started with serverless applications by porting individual features and operating them as on-demand events. For example, in a CMS, a video plugin requires transcoding video into different formats and bitrates. If an organization wished to do this on a WordPress server, it might not be a good fit, as it would require resources dedicated to serving pages rather than encoding video.

Moreover, serverless functions are an optimal fit for handling tasks like metadata encoding and creation. Similarly, serverless apps can be used in other plugins that are often prone to critical vulnerabilities.

Cons of serverless applications
Despite having some clear benefits, serverless applications are not suitable for every single use case. We have listed the top things that an organization should keep in mind while opting for serverless applications.

🔹 Complete dependence on third-party vendor
In the realm of serverless applications, the third-party vendor is the king, and organizations have no option but to play by their rules. For example, if an application is built on Lambda, it is not easy to port it to Azure. The same is the case for coding languages: at present, only Python and Node.js developers have the luxury of choosing between all the existing serverless options.

Therefore, if you are planning to consider serverless applications for your next project, make sure that your vendor has everything needed to complete the project.

🔹 Challenges in debugging with traditional tools
It isn’t easy to perform debugging, especially for large enterprise applications that include various individual functions. Traditional tools provide no option to attach a debugger in the public cloud, so the organization can either do the debugging process locally or use logging for the same purpose. In addition, the DevOps tools in the serverless world do not yet support the idea of quickly deploying small bits of code into running applications.

#serverless-application #serverless #serverless-computing #serverless-architeture #serverless-application-prosand-cons

Matt Towne


Serverless CI/CD on the AWS Cloud

CI/CD pipelines have long played a major role in speeding up the development and deployment of cloud-native apps. Cloud services like AWS lend themselves to more agile deployment through the services they offer as well as approaches such as Infrastructure as Code. There is no shortage of tools to help you manage your CI/CD pipeline as well.

While the majority of development teams have streamlined their pipelines to take full advantage of cloud-native features, there is still so much that can be done to refine CI/CD even further. The entire pipeline can now be built as code and managed either via Git as a single source of truth or by using visual tools to help guide the process.

The entire process can be fully automated. Even better, it can be made serverless, which allows the CI/CD pipeline to operate with immense efficiency. Git branches can even be utilized as a base for multiple pipelines. Thanks to three tools from Amazon, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy, serverless CI/CD on the AWS cloud is now easy to set up.
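To give a flavor of the pipeline-as-code idea, a minimal CodeBuild buildspec for a SAM-based project might look like the following. The runtime version, commands, and stack name are placeholders for your own project, not something prescribed by this article:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.12            # placeholder runtime for your project
  build:
    commands:
      - pip install -r requirements.txt   # project dependencies
      - pytest                            # run tests before deploying
      - sam build
      - sam deploy --no-confirm-changeset --stack-name my-app   # placeholder stack name
```

Checked into the repository, this file lets CodeBuild rebuild and redeploy the application on every push, with no build server to maintain.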

#aws #aws codebuild #aws codecommit #aws codedeploy #cd #cd pipeline #ci #ci/cd processes #ci/cd workflow #serverless

How to Secure Your AWS Serverless Application?

Security is no joke, one of the key things you have to visit whenever you’re architecting a new serverless application.

A serverless application is basically an idea, like any other application, and it has logic. You need computing power to execute your logic, object storage in case you need it, a place to store your information, and a way to tell your logic: go go go!

Pretty simple!

The beauty of serverless is that we don’t care where and how our logic gets stored and executed, although information security teams would get really mad if they read this statement. Anyway, what really matters for us as cloud developers, as I like to call us, is that our logic is getting executed and customers are happy. Win-win situation!

But I wish it were that way. We all need to care about our own applications’ security. Some people would say: Security?! Hacked! We got exposed! Well, sometimes these things happen. But you, as a cloud developer, can help prevent this from happening. Let’s dive into it together, shall we?

#serverless #aws #aws serverless

Seamus Quitzon


AWS Cost Allocation Tags and Cost Reduction

Bob had just arrived in the office for his first day of work as the newly hired chief technical officer when he was called into a conference room by the president, Martha, who immediately introduced him to the head of accounting, Amanda. They exchanged pleasantries, and then Martha got right down to business:

“Bob, we have several teams here developing software applications on Amazon and our bill is very high. We think it’s unnecessarily high, and we’d like you to look into it and bring it under control.”

Martha placed a screenshot of the Amazon Web Services (AWS) billing report on the table and pointed to it.

“This is a problem for us: We don’t know what we’re spending this money on, and we need to see more detail.”

Amanda chimed in, “Bob, look, we have financial dimensions that we use for reporting purposes, and I can provide you with some guidance regarding some information we’d really like to see such that the reports that are ultimately produced mirror these dimensions — if you can do this, it would really help us internally.”

“Bob, we can’t stress how important this is right now. These projects are becoming very expensive for our business,” Martha reiterated.

“How many projects do we have?” Bob inquired.

“We have four projects in total: two in the aviation division and two in the energy division. If it matters, the aviation division has 75 developers and the energy division has 25 developers,” Martha responded.

Bob understood the problem and responded, “I’ll see what I can do and have some ideas. I might not be able to give you retrospective insight, but going forward, we should be able to get a better idea of what’s going on and start to bring the cost down.”

The meeting ended with Bob heading to find his desk. Cost allocation tags should help us, he thought to himself as he looked for someone who might know where his office is.

#aws #aws cloud #node js #cost optimization #aws cli #well architected framework #aws cost report #cost control #aws cost #aws tags