Stateful Programming Models in Serverless Functions

Chris Gillum explores two stateful programming models: workflows and actors. He discusses how they can simplify development and how they enable stateful and long-running application patterns within ephemeral, Serverless compute environments. He explains why he is making a big bet on these programming models in the Azure Functions service.

What is Serverless? Learn Serverless by creating a Slack App

Serverless architecture is the industry's latest buzzword and many of the largest tech companies have begun to embrace it.

In this article, we'll learn what it is and why you should use it. We'll also set up AWS, create our serverless app, and create a slack app!

What is Serverless?

Serverless is a cloud computing paradigm in which the developer no longer has to worry about maintaining a server – they just focus on the code.

Cloud providers, such as AWS or Azure, are now responsible for executing code and maintaining servers by dynamically allocating their resources. A variety of events can trigger code execution, including cron jobs, http requests, or database events.

The code that developers send to the cloud is usually just a function, so serverless architecture is often implemented using Functions-as-a-Service, or FaaS. The major cloud providers offer FaaS platforms, such as AWS Lambda and Azure Functions.
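
To make this concrete, here's a minimal sketch of what a FaaS function might look like on AWS Lambda (the handler name and the event contents are purely illustrative):

module.exports.handler = async (event) => {
  // The cloud provider invokes this function in response to an event
  // (an HTTP request, a cron schedule, a database change, and so on).
  console.log('received event:', JSON.stringify(event));

  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from a serverless function!' }),
  };
};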

Why Serverless?

Not only does serverless allow developers to just focus on code, but it has many other benefits as well.

Since cloud providers are now responsible for executing code and dynamically allocating resources based on event triggers, you typically only pay per request, or when your code is being executed.

Additionally, since cloud providers are handling your servers, you don't have to worry about scaling up – the cloud provider will handle it. This makes serverless apps lower cost, easier to maintain, and easier to scale.

Setting up AWS Lambda

For this tutorial, I will be using AWS Lambda, so first, we'll create an AWS account. I find AWS's UI hard to understand and difficult to navigate, so I will be adding screenshots for each step.

Once you log in, you should see this:

Next, we'll set up an IAM user. An IAM (Identity and Access Management) user interacts with AWS and its resources on your behalf. This allows you to create different IAM users with different permissions and purposes, without compromising the security of your root user account.

Click on the "services" tab at the top of the page, and type "IAM" into the bar:

Click on the first result, and you'll see, on the left-hand sidebar, that you're at the dashboard. Click on the "Users" option so we can create our new IAM user.

Click on the "Add user" button to create a new user. Fill in the details as follows:

You can name your user anything you'd like, but I went with serverless-admin. Be sure that your user has "Programmatic access" to AWS, not "AWS Management Console Access". You'd use the latter for teammates, or other humans who need access to AWS. We just need this user to interact with AWS Lambda, so we can just give them programmatic access.

For permissions, I've chosen to attach existing policies since I don't have any groups, and I don't have any existing users that I want to copy permissions for. In this example, I will create the user with Administrator access since it's just for a personal project; however, if you were to use a serverless app in an actual production environment, your IAM user should be scoped to only access Lambda-necessary parts of AWS. (Instructions can be found here).

I didn't add any tags and created the user. It's vital to save the information given to you on the next screen - the Access key ID and Secret access key.

Don't leave this screen without copying down both! You won't be able to see the Secret access key again after this screen.

Finally, we'll add these credentials to the AWS CLI. Use this guide to get the AWS CLI set up.

Make sure you have it installed by running aws --version. You should see something like this:

Then run aws configure and fill in the prompts:
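
The prompts look roughly like this (the values shown are placeholders; paste in the Access key ID and Secret access key you saved earlier):

aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-2
Default output format [None]: json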

I have the default region as us-east-2 already set up, but you can use this to determine what your region is.

To make sure that you have your credentials set up correctly, you can run cat ~/.aws/credentials in your terminal.

If you want to configure a profile other than your default, you can run the command as follows: aws configure --profile [profile name].
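
If everything is set up correctly, the credentials file will contain something like this (the values below are placeholders; the second block only appears if you configured a named profile):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[serverless-admin]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx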

If you had trouble following the steps, you can also check out AWS's documentation.

Set up serverless

Go to your terminal and install the serverless package globally using npm: npm i -g serverless (more info on serverless here). Your terminal should look something like this:

Next, navigate to the directory where you want to create the app, then run serverless and follow the prompts:

For this application, we'll be using Node.js. You can name your app anything you want, but I've called mine exampleSlackApp.

Open your favorite code editor to the contents in exampleSlackApp (or whatever you've called your application).

First, we'll take a look at serverless.yml. You'll see there's a lot of commented code here describing the different options you can use in the file. Definitely give it a read, but I've deleted it down to just:

service: exampleslackapp

provider:
  name: aws
  runtime: nodejs10.x
  region: us-east-2

functions:
  hello:
    handler: handler.hello

I've included region since the default is us-east-1 but my aws profile is configured for us-east-2.

Let's deploy what we already have by running serverless deploy in the directory of the app that serverless just created for us. The output should look something like this:

And if you run serverless invoke -f hello in your terminal, it'll run the app, and you should see:

{
    "statusCode": 200,
    "body": "{\n  \"message\": \"Go Serverless v1.0! Your function executed successfully!\",\n  \"input\": {}\n}"
}

For further proof that our function is live, you can head back to the AWS console. Go to the services dropdown, search for "Lambda", and click on the first option ("Run code without thinking about servers").

And here's your app!

Next, we'll explore actually using serverless by building our slack app. Our slack app will post a random Ron Swanson quote to slack using a slash command like this:

The following steps don't necessarily have to be done in the order that I've done them, so if you want to skip around, feel free!

Adding the API to our code

I'm using this API to generate Ron Swanson quotes since the docs are fairly simple (and of course, it's free). To see how requests are made and what gets returned, you can just put this URL in your browser:

https://ron-swanson-quotes.herokuapp.com/v2/quotes

You should see something like this:
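
The response is a JSON array containing a single quote string, something along these lines (the quote is random on every request):

["Give 100%. 110% is impossible. Only idiots recommend that."]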

So, we can take our initial function and modify it as such:

module.exports.hello = (event) => {
  getRon();
};

Note: I've removed the async portion

and getRon looks like this (it uses the request package, so you'll need to run npm install request and require it at the top of the file, as shown in the full code later):

function getRon() {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

Now, let's check if it works. To test this code locally, in your terminal: serverless invoke local -f hello. Your output should look something like:

serverless invoke -f hello would run the code that you've deployed, as we saw in previous sections. serverless invoke local -f hello, however, runs your local code, so it's useful for testing. Go ahead and deploy using serverless deploy!

Create your Slack App

To create your slack app, follow this link. It'll make you sign into a slack workspace first, so be sure you're a part of one that you can add this app to. I've created a testing one for my purposes. You'll be prompted with this modal. You can fill in whatever you want, but here's what I have as an example:

From there, you'll be taken to the homepage for your app. You should definitely explore these pages and the options. For example, I've added the following customization to my app:

Next, we need to add some permissions to the app:

To get an OAuth Access Token, you have to add some scopes and permissions, which you can do by scrolling down:

I've added "Modify your public channels" so that the bot could write to a channel, "Send messages as Ron Swanson" so when the message gets posted, it looks like a user called Ron Swanson is posting the message, and slash commands so the user can "request" a quote as shown in the screenshot at the beginning of the article. After you save the changes, you should be able to scroll back up to OAuths & Permissions to see:

Click the button to Install App to Workspace, and you'll have an OAuth Access Token! We'll come back to this in a second, so either copy it down or remember it's in this spot.

Connect Code and Slack App

In AWS Lambda, find your slack app function. Your Function Code section should show our updated code with the call to our Ron Swanson API (if it does not, go back to your terminal and run serverless deploy).

Scroll below that to the section that says "Environment Variables", and put your Slack OAuth Access Token here (you can name the key whatever you'd like):

Let's go back to our code and add Slack into our function. At the top of our file, we can declare a const with our new OAuth Token:

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN;

process.env just grabs our environment variables (additional reading). Next, let's take a look at the Slack API to figure out how to post a message to a channel.

The parts of the Slack chat.postMessage documentation most relevant to us are the endpoint URL and the token, channel, and text arguments. So, to make this API request, I'll use request, passing in an object called options:

  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general', // hard coding for now
      text: 'I am here',
    }
  }

and we can make the request:

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })

Finally, I'll wrap the whole thing in a function:

function postRon(quote) {
  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general',
      text: quote,
    }
  }

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

and we can call it from getRon like this:

function getRon() {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    postRon(body.substring(2, body.length - 2)) // strips the surrounding [" and "] from the API response
  })
}

So our code should all in all look like this:

'use strict';
let request = require('request');

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN

module.exports.hello = (event) => {
  getRon();
};

function getRon() {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    postRon(body.substring(2, body.length - 2))
  })
}

function postRon(quote) {
  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general',
      text: quote,
    }
  }

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

Now let's test! Unfortunately, our environment variable in AWS Lambda isn't available to us when we run serverless invoke local -f hello. There are a few ways you can approach this, but for our purposes, you can just replace the value for SLACK_OAUTH_TOKEN with your actual OAuth Token (make sure it's a string). But be sure you switch it back before you push it up to version control!
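
One alternative to temporarily hardcoding the token (a sketch, not what we do in this article) is to define the variable in serverless.yml, since variables declared under provider.environment are passed both to the deployed Lambda and to serverless invoke local. The token value below is a placeholder – don't commit a real one to version control:

provider:
  name: aws
  runtime: nodejs10.x
  region: us-east-2
  environment:
    OAUTH_TOKEN: xoxp-your-token-here # placeholder, never commit a real token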

Run serverless invoke local -f hello, and hopefully you should see a message like this in your #general channel:

Please note that I put down my channel name as 'general' since it's my test workspace; however, if you're in an actual workspace, you should create a separate channel for testing apps, and put the message there instead while you're testing.

And in your terminal, you should see something like:

If that works, go ahead and deploy it using serverless deploy. If it does not, the best way to debug this is to adjust code and run serverless invoke local -f hello.

Adding slash command

The last and final part is adding a slash command! Go back to your function's home page in AWS Lambda and look for the button that says "Add trigger":

Click on the button to get to the "Add trigger" page, and select "API Gateway" from the list:

I've filled in the information based on defaults mostly:

I've also left this API open for use – however, if you're using this in production, you should discuss with your team what the standard protocol should be. "Add" the API, and you should receive an API endpoint. Hold on to this, because we'll need it for the next step.

Let's switch back over to our slack app and add a slash command:

Click on "Create New Command" and it should pop up with a new window to create a command. Here's how I filled mine out:

You can enter anything you want for "command" and "short description" but for "request URL", you should put your API endpoint.

Finally, we'll go back to our code to make some final adjustments. If you try to use the slash command, you should receive some kind of error back – this is because slack expects a response and AWS expects you to give a response when the endpoint is hit. So, we'll change our function to allow a callback (for reference):

module.exports.hello = (event,context,callback) => {
  getRon(callback);
};

and then we'll change getRon to do something with the callback:

function getRon(callback) {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    callback(null, SUCCESS_RESPONSE)
    postRon(body.substring(2, body.length - 2))
  })
}

where SUCCESS_RESPONSE is at the top of the file:

const SUCCESS_RESPONSE = {
  statusCode: 200,
  body: null
}

You can put the callback here or in postRon – it just depends on what your purposes are with the callback.

Our code at this point now looks something like:

'use strict';
let request = require('request');

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN

const SUCCESS_RESPONSE = {
  statusCode: 200,
  body: null
}

module.exports.hello = (event,context,callback) => {
  getRon(callback);
};

function getRon(callback) {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    callback(null, SUCCESS_RESPONSE)
    postRon(body.substring(2, body.length - 2))
  })
}

function postRon(quote) {
  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general',
      text: quote,
    }
  }

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

You should be able to use the /ron command in slack now and get a Ron Swanson quote back. If you don't, you can use Cloudwatch logs to see what went wrong:

The way our code works now, we've hardcoded in the channel name. But, what we actually want is for the quote to get posted in the message where you used /ron.

So, we can now use the event portion of our function.

module.exports.hello = (event,context,callback) => {
  console.log(event)
  getRon(callback);
};

Use /ron to run the function, and then check your Cloudwatch logs to see what gets logged to the console (you may need to refresh). Check on the most recent logs and you should see something like this:

The first item in this list (where it says "resource", "path", etc.) is the event, so if you expand that, you'll see a long list of things, but what we're looking for is 'body' all the way down at the bottom:

Body is a string with some relevant information in it, one of them being "channel_id". We can use channel_id (or channel_name) and pass it into the function that creates our slack message. For your convenience, I've already parsed this string: event.body.split("&")[3].split("=")[1] should give you the channel_id. I hardcoded in which entry (3) the channel_id was for simplicity.
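
For reference, here's an illustrative (and truncated) example of what the decoded body string might look like – splitting it on "&" makes channel_id the fourth entry, which is why the index of 3 mentioned above works:

token=abcd1234&team_id=T0001&team_domain=example&channel_id=C2147483705&channel_name=general&user_id=U2147483697&user_name=ron&command=%2Fron&text=&response_url=...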

Now, we can alter our code to save that string as a variable:

let channel = 'general' // our fallback

module.exports.hello = (event,context,callback) => {
  console.log(event)
  channel = event.body.split("&")[3].split("=")[1]
  console.log(context)
  getRon(callback);
};

and in postRon:

  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: channel,
      text: quote,
    }
  }

Finally, if you use a slack command in any channel in your workspace, you should be able to see a Ron Swanson quote pop up! If not, as I mentioned before, the most common tools I use to debug serverless apps are serverless invoke local -f <function name> and Cloudwatch logs.

Hopefully you were successfully able to create a functioning Slack application! I've included resources and background reading dispersed throughout the article and I'm happy to answer any questions you may have!

Final Repo with code: https://github.com/lsurasani/ron-swanson-slack-app/

Everything You Need to Know about Cloud Computing

In this Cloud Computing Full Course Tutorial for Beginners, we'll give you everything you need to know about Cloud Computing! We'll cover the fundamentals of Cloud Computing, the cloud lifecycle, and important concepts of AWS, Azure, and the Google Cloud Platform. We'll tell you how you can become a cloud computing engineer, and explain how the three major cloud computing platforms are different from one another. Finally, we'll cover some important cloud computing interview questions that would definitely help you out in a cloud computing interview. Whenever possible, each of these concepts is explained in a practical manner to ensure easy understanding. Now, let's take a deep dive into this cloud computing full course!

The following topics are covered in this Cloud Computing full course:

  1. Before Cloud Computing 01:15
  2. What is Cloud Computing 02:50
  3. Benefits of Cloud Computing 03:44
  4. Types of Cloud Computing 05:44
  5. Lifecycle of Cloud Computing 09:41
  6. What is AWS 13:14
  7. History of AWS 16:47
  8. Service AWS Offers 18:11
  9. How does AWS Make Lives Easier 20:26
  10. AWS Tutorial 22:50
  11. How did AWS become so successful 22:50
  12. The services AWS provides 31:35
  13. The future of AWS 38:53
  14. What is Azure 45:47
  15. Azure Services 47:32
  16. Uses of Azure 51:11
  17. Azure Tutorial 51:50
  18. What is Microsoft Azure 01:01:31
  19. what are the services Azure offers 01:08:00
  20. How is Azure better than other Cloud Services 2:11:29
  21. Companies that use Azure 2:15:08
  22. AWS vs Azure 2:29:50
  23. Comparison round Market share and options 02:33:02
  24. AWS vs Google Cloud 02:38:17
  25. AWS vs Azure vs GCP 02:45:37
  26. What is Amazon Web services 02:46:13
  27. What is Microsoft Azure 02:46:25
  28. What is Google Cloud Platform 02:46:34
  29. Compute Services 02:47:49
  30. Storage Service 02:50:49
  31. Key Cloud tools 02:54:19
  32. Companies using cloud providers 02:55:16
  33. Advantages 02:55:40
  34. Disadvantages 02:57:43
  35. AWS Certifications 02:59:28
  36. Which is suitable for you 03:00:13
  37. Azure Certifications 03:06:46
  38. What is an Azure Certifications 03:06:55
  39. What makes Azure so Great 03:07:31
  40. Who is a Cloud Computing Engineer 3:24:46
  41. Steps to become a Cloud Computing Engineer 03:23:13
  42. Cloud Computing Engineer salary 03:27:28
  43. AWS Interview Questions Part-1 03:28:31
  44. AWS Interview Questions Part-2 04:59:02
  45. Multiple Choice Questions 05:49:50
  46. Azure Interview Questions 05:59:58

Why become a Cloud Architect?
With the increasing focus on cloud computing and infrastructure over the last several years, cloud architects are in great demand worldwide. Many organizations have moved to cloud platforms for better scalability, mobility, and security, and cloud solutions architects are among the highest paid professionals in the IT industry.

According to a study by Goldman Sachs, cloud computing is one of the top three initiatives planned by IT executives as they make cloud infrastructure an integral part of their organizations. According to Forbes, enterprise IT architects with cloud computing expertise are earning a median salary of $137,957.

How to Build and Deploy a Back End System with Serverless

This article will teach you how to build and deploy everything you need to be able to build a back-end for your application. We'll be using AWS to host all of this and deploying it all using the Serverless Framework.

By the end of this article you'll know how to:

  • Set up your AWS account to work with the Serverless Framework
  • Set up a Serverless Project and deploy a Lambda
  • Create private cloud storage with S3 bucket and upload files from your computer
  • Deploy an API using API Gateway and AWS Lambda
  • Create a serverless database table with AWS DynamoDB
  • Create an API to get data from your DynamoDB table
  • Create an API to add data to your DynamoDB table
  • Create APIs to store files and get files from your S3 bucket
  • Secure all of your API endpoints with API keys

Being able to do all these things gives you the ability to create all the functionality needed from most application back ends.

Serverless Setup with AWS

The Serverless Framework is a tool that we can use as developers to configure and deploy services from our computers. There's a bit of setup to allow all of this to work together and this section will show you how to do that.

To allow serverless to do work on your account, you need to set up a user for it. To do this, navigate into AWS and search for "IAM" (Identity and Access Management).

Once in the IAM Page, click on Users in the list on the left hand side. This will open the list of users on your account. From here we'll be clicking Add user.

We need to create a user which has Programmatic access. I've called mine ServerlessAccount, but the name doesn't matter too much.

Next, we need to give the user some permissions. When in the permissions screen, select Attach existing policies directly and then select AdministratorAccess. This will give our serverless framework permission to create all the resources it needs to.

We don't need to add any tags, so we can move straight onto Review.

In the review window, you'll see the user has been given an Access key ID and a Secret access key. We'll be needing those in the next part so keep this page open.

Serverless Install and Configuration

Now that we've created our user, we need to install the Serverless Framework on our machine.

Open up a terminal and run this command to install serverless globally on your computer. If you haven't got NodeJS installed check out this page.

npm install -g serverless

Now that we've got serverless installed, we need to set up the credentials for serverless to use. Run this command, putting your access key ID and Secret access key in the correct places.

serverless config credentials --provider aws --key ${Your access key ID} --secret ${Your secret access key} --profile serverlessUser

Once this has been run, you're all set up with serverless.

Deploying Your First AWS Lambda

With our serverlessUser set up, we want to deploy something using the Serverless Framework. We can use Serverless templates to set up a basic project that we can deploy. This will be the base for the whole of this Serverless project.

In your terminal we can create a serverless project from a template. This command will create a NodeJS serverless project in the folder of myServerlessProject.

serverless create --template aws-nodejs --path myServerlessProject

If you now open the folder up in your code editor we can look at what we've created.

We've got two files worth talking about: handler.js and serverless.yml

handler.js

This file is a function that will be uploaded as a Lambda function to your AWS account. Lambda functions are great and we'll use a lot more of them later on in this article.

serverless.yml

This is a very important file for us. This is where all the configuration for our deployment goes. It tells Serverless what runtime to use, which account to deploy to, and what to deploy.

We need to make a change to this file so that our deployment works properly. In the provider object we need to add a new line of profile: serverlessUser. This tells Serverless to use the AWS credentials we created in the last section.
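
After that change, the provider section of serverless.yml should look something like this (your runtime and region may differ depending on the template):

provider:
    name: aws
    runtime: nodejs10.x
    profile: serverlessUser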

We can scroll down to functions and see that we have one function which is called hello and points to the function within the handler.js file. This means we will be deploying this Lambda function as part of this project.

We'll learn a lot more about this serverless.yml file later on in this article.

Deploying Our Project

Now that we've looked at the files it's time to do our first deployment. Open up a terminal and navigate to our project folder. Deploying is as simple as typing

serverless deploy

This takes a while but when it's done we can check that everything has deployed successfully.

Open up your browser and navigate to your AWS account. Search for Lambda and you'll see a list of all your Lambda functions. (If you don't see any then check that your region is set to N. Virginia). You should see the myserverlessproject-dev-hello Lambda which contains the exact code that is in the handler.js file in your project folder.

Deploying an S3 Bucket and Uploading Files

In this section we're going to learn how we can deploy an Amazon S3 bucket and then sync up files from our computer. This is how we can start using S3 as cloud storage for our files.

Open up the serverless.yml file and remove all the commented out lines. Scroll to the bottom of the file and this is where we're going to add our S3 resources by adding this code

resources:
    Resources:
        DemoBucketUpload:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: EnterAUniqueBucketNameHere

Change the name of the bucket and we're ready to deploy again. Open up your terminal again and run serverless deploy. You may get an error saying that the bucket name is not unique, in which case you'll need to change the bucket name, save the file and rerun the command.

If it is successful we can then go and see our new S3 bucket in our AWS Console through our browser. Search for S3 and then you should see your newly created bucket.

Syncing up your files

Having a bucket is great but now we need to put files in the bucket. We're going to be using a serverless plugin called S3 Sync to do this for us. To add this plugin to our project we need to define the plugins. After your provider object, add this code:

plugins:
    - serverless-s3-sync

This plugin also needs some custom configuration so we add another field to our serverless.yml file, changing out the bucket name for yours.

custom:
    s3Sync:
        - bucketName: YourUniqueBucketName
          localDir: UploadData

This section of code is telling the S3 Sync plugin to upload the contents of the UploadData folder to our bucket. We don't currently have that folder so we need to create it and add some files. You can add a text file, an image or whatever you want to be uploaded, just make sure there is at least 1 file in the folder.

The last thing we need to do is to install the plugin. Luckily, all Serverless plugins are also npm packages so we can install it by running npm install --save-dev serverless-s3-sync in our terminal.

As we've done before, we can now run serverless deploy and wait for the deployment to complete. Once it is complete we can go back into our browser and into our bucket and we should see all the files that we put in the UploadData folder in our project.

Creating an API with Lambda and API Gateway

In this section we'll learn to do one of the most useful things with Serverless: create an API. Creating an API allows you to do so many things, from getting data from databases, S3 storage, hitting other APIs and much more!

To create the API we first need to create a new Lambda function to handle the request. We're going to make a few Lambdas so we're going to create a lambdas folder in our project with two subfolders common and endpoints.

Inside the endpoints folder we can add a new file called getUser.js. This API is going to allow someone to make a request and get back data based on an ID of a user. This is the code for the API:

const Responses = require('../common/API_Responses');

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.ID) {
        // failed without an ID
        return Responses._400({ message: 'missing the ID from the path' });
    }

    let ID = event.pathParameters.ID;

    if (data[ID]) {
        // return the data
        return Responses._200(data[ID]);
    }

    //failed as ID not in the data
    return Responses._400({ message: 'no ID in data' });
};

const data = {
    1234: { name: 'Anna Jones', age: 25, job: 'journalist' },
    7893: { name: 'Chris Smith', age: 52, job: 'teacher' },
    5132: { name: 'Tom Hague', age: 23, job: 'plasterer' },
};

If the request doesn't contain an ID then we return a failed response. If there is data for that ID then we return that data. If there isn't data for that user ID then we also return a failure response.

As you may have noticed we are requiring in the Responses object from API_Responses. These responses are going to be common to every API that we make so making this code importable is a smart move. Create a new file called API_Responses.js in the common folder and put this code:

const Responses = {
    _200(data = {}) {
        return {
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Methods': '*',
                'Access-Control-Allow-Origin': '*',
            },
            statusCode: 200,
            body: JSON.stringify(data),
        };
    },

    _400(data = {}) {
        return {
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Methods': '*',
                'Access-Control-Allow-Origin': '*',
            },
            statusCode: 400,
            body: JSON.stringify(data),
        };
    },
};

module.exports = Responses;

This is a set of functions used to simplify creating the correctly formatted response needed when using a Lambda with API Gateway (which we'll do in a second). The methods add headers and a status code, and stringify any data that needs to be returned.

Now that we have the code for our API, we need to set it up in our serverless.yml file. Scroll to the functions section of the serverless.yml file. In the last part of this guide we deployed the hello function, but we no longer need that. Delete the functions object and replace it with this:

functions:
    getUser:
        handler: lambdas/endpoints/getUser.handler
        events:
            - http:
                  path: get-user/{ID}
                  method: GET
                  cors: true

This code is creating a new Lambda function called getUser, whose code is the handler method exported from lambdas/endpoints/getUser.js. We then define the events that can trigger this lambda function to run.

To make a Lambda into an API we can add a http event. This tells serverless to add an API Gateway to this account and then we can define the API endpoint using path. In this case get-user/{ID} means the url will be https://${something-provided-by-API-Gateway}/get-user/{ID}, where the ID is passed into the Lambda as a path parameter. We also set the method to GET and enable CORS so that we could access this endpoint from a front end application if we wanted.

We can now deploy again, and this time we can use the shorthand command sls deploy. It only saves a few characters, but it saves a lot of typos. When this is completed we'll get an output that also includes a list of endpoints. We can copy our endpoint and head over to a browser to test it out.

If we paste our API URL into our browser and then add an ID of 5132 to the end, we should get back a response of { name: 'Tom Hague', age: 23, job: 'plasterer' }. If we enter a different ID such as 1234, we'll get different data, but entering an ID of 7890 or not entering an ID at all will return an error.

If we want to add more data to our API, we can simply add a new row to the data object in the getUser.js file. We can then run a special command which only deploys one function, sls deploy -f ${functionName} so for us that is:

sls deploy -f getUser

If you now make a request using the ID of the new data, the API will return that new data instead of an error.

Creating a Database on AWS

DynamoDB is a fully hosted, non-relational database on AWS. This is the perfect solution for storing data that you need to access and update regularly. In this section we're going to learn how we can create a DynamoDB table with Serverless.

In our serverless.yml file we're going to add some configuration to the Resources section.

resources:
    Resources:
        DemoBucketUpload:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: ${self:custom.bucketName}
        # New Code
        MyDynamoDbTable:
            Type: AWS::DynamoDB::Table
            Properties:
                TableName: ${self:custom.tableName}
                AttributeDefinitions:
                    - AttributeName: ID
                      AttributeType: S
                KeySchema:
                    - AttributeName: ID
                      KeyType: HASH
                BillingMode: PAY_PER_REQUEST

In this code we can see that we are creating a new dynamoDB table with a TableName of ${self:custom.tableName}, defining an attribute of ID and setting the billing mode to pay per request.

This is our first look at the use of variables in our serverless.yml file. We can use variables for a few reasons and they can make our jobs much easier. In this case, we're referencing a variable of custom.tableName. We can then reference this variable from multiple locations without having to copy and paste the table name. To get this to work we also need to add the tableName to the custom section. In our case we're going to add the line tableName: player-points to create a table to store the points a player has. This table name only needs to be unique to your account.
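
With that line added, the custom section might look roughly like this (this sketch assumes the demo bucket name has also been moved into a bucketName variable, since the resource definition above references ${self:custom.bucketName}):

custom:
    tableName: player-points
    bucketName: YourUniqueBucketName
    s3Sync:
        - bucketName: ${self:custom.bucketName}
          localDir: UploadData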

When defining a table you need to define at least one of the fields, which will be your unique identifying field. Because DynamoDB is a non-relational database, you don't need to define the full schema. In our case we've defined the ID, stating that it has an attribute type of string (S) and a key type of HASH.

The last part of the definition is the billing mode. There are two ways to pay for DynamoDB:

  • pay per request
  • provisioned resources.

Provisioned resources lets you define how much data you're going to be reading from and writing to the table. The issues with this are that if you start using more than you provisioned, your requests get throttled, and that you pay for the resources even if no one is using them.

Pay per request is much simpler: you just pay for each request. This means that if no one is using the table you pay nothing, and if you have hundreds of people using it at once, all the requests still work. You pay slightly more per request for this added flexibility, but in the long run it usually works out cheaper.

Once we've run sls deploy again we can open up our AWS console and search for DynamoDB. We should be able to see our new table and we can see that there is nothing in there.

To add data to the table, click Create item and give it a unique ID. Then click the plus button, choose Append, and select the type String to add a field called name with a value of Jess. Add a Number field called score, set to 12. Click Save and you now have data in your Dynamo table.

Getting Data from your DynamoDB Table

Now that we have our Dynamo table created, we want to be able to get and add data to the table. We're going to start with getting data from the table with a get endpoint.

We're going to create a new file in our endpoints folder called getPlayerScore.js. This Lambda endpoint is going to handle the requests for a user and get that data from the Dynamo table.

const Responses = require('../common/API_Responses');
const Dynamo = require('../common/Dynamo');

const tableName = process.env.tableName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.ID) {
        // failed without an ID
        return Responses._400({ message: 'missing the ID from the path' });
    }

    let ID = event.pathParameters.ID;

    const user = await Dynamo.get(ID, tableName).catch(err => {
        console.log('error in Dynamo Get', err);
        return null;
    });

    if (!user) {
        return Responses._400({ message: 'Failed to get user by ID' });
    }

    return Responses._200({ user });
};

The code used here is very similar to the code inside the getUser.js file. We are checking that a path parameter of ID exists, getting the user data and then returning the user. The main difference is how we are getting the user.

We have imported the Dynamo function object and are calling Dynamo.get. We're passing in the ID and the table name and then catching any errors. We now need to create that Dynamo function object in a new file called Dynamo.js in the common folder.

const AWS = require('aws-sdk');

const documentClient = new AWS.DynamoDB.DocumentClient();

const Dynamo = {
    async get(ID, TableName) {
        const params = {
            TableName,
            Key: {
                ID,
            },
        };

        const data = await documentClient.get(params).promise();

        if (!data || !data.Item) {
            throw Error(`There was an error fetching the data for ID of ${ID} from ${TableName}`);
        }
        console.log(data);

        return data.Item;
    },
};
module.exports = Dynamo;

Reading and writing to Dynamo requires a reasonable amount of code. We could write that code every time we want to use Dynamo but it is much cleaner to have functions to simplify the process for us.

The file first imports AWS and then creates an instance of the DynamoDB Document Client. The document client is the easiest way for us to work with Dynamo from our Lambdas. We create a Dynamo object with an async get function. The only things we need to make a request are an ID and a table name. We format those into the correct parameter format for the DocumentClient, await a documentClient.get request and make sure that we add .promise() to the end. This turns the request from a callback to a promise which is much easier to work with. We check that we managed to get an item from Dynamo and then we return that item.

Now that we have all the code we need, we have to update our serverless.yml file too. The first thing to do is to add our new API endpoint by adding it to our list of functions.

    getPlayerScore:
        handler: lambdas/endpoints/getPlayerScore.handler
        events:
            - http:
                  path: get-player-score/{ID}
                  method: GET
                  cors: true

There are two more changes that we need to make to get our endpoint working:

  • environment variables
  • permissions

You may have noticed in the getPlayerScore.js file we had a line of code like this:

const tableName = process.env.tableName;

This is where we are getting the table name from the environment variables of the Lambda. To create our Lambda with the correct environment variables, we need to set a new object in the provider called environment with a field of tableName and a value of ${self:custom.tableName}. This will ensure that we are making the request to the correct table.

We also need to give our Lambdas permissions to access Dynamo. We have to add another field to the provider called iamRoleStatements. This has an array of policies which can allow or disallow access to certain services or resources.

provider:
    name: aws
    runtime: nodejs10.x
    profile: serverlessUser
    region: eu-west-1
    environment:
        tableName: ${self:custom.tableName}
    iamRoleStatements:
        - Effect: Allow
          Action:
              - dynamodb:*
          Resource: '*'

As all this has been added to the provider object, it will be applied to all Lambdas.

We can now run sls deploy again to deploy our new endpoint. When that is done we should get an output with a new endpoint of https://${something-provided-by-API-Gateway}/get-player-score/{ID}. If we copy that URL into a browser tab and add the ID of the player that we created in the last section, we should get a response.

Adding New Data to DynamoDB

Being able to get data from Dynamo is cool, but it's quite useless if we can't add new data to the table as well. We're going to be creating a POST endpoint to create new data in our Dynamo table.

Start by creating a new file in our endpoints folder called createPlayerScore.js and adding this code:

const Responses = require('../common/API_Responses');
const Dynamo = require('../common/Dynamo');

const tableName = process.env.tableName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.ID) {
        // failed without an ID
        return Responses._400({ message: 'missing the ID from the path' });
    }

    let ID = event.pathParameters.ID;
    const user = JSON.parse(event.body);
    user.ID = ID;

    const newUser = await Dynamo.write(user, tableName).catch(err => {
        console.log('error in dynamo write', err);
        return null;
    });

    if (!newUser) {
        return Responses._400({ message: 'Failed to write user by ID' });
    }

    return Responses._200({ newUser });
};

This code is very similar to the getPlayerScore code with a few changes. We are getting the user from the body of the request, adding the ID to the user and then passing that to a Dynamo.write function. We need to parse the event body as API Gateway stringifies it before passing it to the Lambda.

We now need to modify the common Dynamo.js file to add the .write method. This performs very similar steps to the .get function and returns the newly created data.

    async write(data, TableName) {
        if (!data.ID) {
            throw Error('no ID on the data');
        }

        const params = {
            TableName,
            Item: data,
        };

        const res = await documentClient.put(params).promise();

        if (!res) {
            throw Error(`There was an error inserting ID of ${data.ID} in table ${TableName}`);
        }

        return data;
    }

We've created the endpoint and common code so the last thing we need to do is modify the serverless.yml file. As we added the environment variable and permissions in the last section we just need to add the function and API configuration. This endpoint is different from the previous two because the method is POST instead of GET.

    createPlayerScore:
        handler: lambdas/endpoints/createPlayerScore.handler
        events:
            - http:
                  path: create-player-score/{ID}
                  method: POST
                  cors: true

Deploying this with sls deploy will now create three endpoints, including our create-player-score endpoint. Testing a POST endpoint is more complex than a GET request but luckily there are tools to help us out. I use Postman to test all my endpoints as it makes it quick and easy.

Create a new request and paste in your create-player-score url. You need to change the request type to POST and set the ID at the end of the URL. Because we're doing a POST request we can send up data within the body of the request. Click body then raw and select JSON as the body type. You can then add the data that you want to put into your table. When you click Send, you should get a successful response.
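
For example, a body along these lines will work (the fields are entirely up to you, since DynamoDB is schemaless apart from the ID, which the Lambda takes from the path):

{
    "name": "Tom",
    "score": 42
}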

To validate that your data has been added to the table, you can make a get-player-score request with the ID of the new data you just created. You can also go into the Dynamo console and look at all the items in the table.

Creating S3 GET and POST Endpoints

Dynamo is a brilliant database storage solution but sometimes it isn't the best storage solution. If you've got data that isn't going to change and you want to save some money, or if you want to store files other than JSON then you might want to consider Amazon S3.

Creating endpoints to get and create files in S3 is very similar to DynamoDB. We need to create two endpoint files and a common S3 file, and modify the serverless.yml file.

We're going to start with adding a file to S3. Create a createFile.js file in the endpoints folder, adding this code:

const Responses = require('../common/API_Responses');
const S3 = require('../common/S3');

const bucket = process.env.bucketName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.fileName) {
        // failed without an fileName
        return Responses._400({ message: 'missing the fileName from the path' });
    }

    let fileName = event.pathParameters.fileName;
    const data = JSON.parse(event.body);

    const newData = await S3.write(data, fileName, bucket).catch(err => {
        console.log('error in S3 write', err);
        return null;
    });

    if (!newData) {
        return Responses._400({ message: 'Failed to write data by filename' });
    }

    return Responses._200({ newData });
};

This code is almost identical to the createPlayerScore.js code, but uses a fileName instead of an ID and S3.write instead of Dynamo.write.

Now we need to create our S3 common code to simplify requests made to S3.

const AWS = require('aws-sdk');
const s3Client = new AWS.S3();

const S3 = {
    async write(data, fileName, bucket) {
        const params = {
            Bucket: bucket,
            Body: JSON.stringify(data),
            Key: fileName,
        };
        const newData = await s3Client.putObject(params).promise();
        if (!newData) {
            throw Error('there was an error writing the file');
        }
        return newData;
    },
};
module.exports = S3;

Again, the code in this file is very similar to the code in Dynamo.js, with a few differences around the parameters for the request.

The last thing we need to do for writing to S3 is change the serverless.yml file. We need to do four things: add environment variables, add permissions, add the function, and add an S3 bucket.

In the provider we can add a new environment variable of bucketName: ${self:custom.s3UploadBucket}.

To add permission to read and write to S3 we can add a new permission to the existing policy. Straight after - dynamodb:* we can add the line - s3:*.
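
After that change, the policy in the provider section looks like this:

    iamRoleStatements:
        - Effect: Allow
          Action:
              - dynamodb:*
              - s3:*
          Resource: '*'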

Adding the function is the same as we've been doing with all our functions. Make sure that the path has a parameter of fileName, as that is what you are checking for in your endpoint code.

    createFile:
        handler: lambdas/endpoints/createFile.handler
        events:
            - http:
                  path: create-file/{fileName}
                  method: POST
                  cors: true

Lastly we need to create a new bucket to upload these files into. In the custom section we need to add a new field of s3UploadBucket and set it to a unique bucket name. We also need to configure the resource. After the Dynamo table config, we can add this to create a new bucket for our file uploads.

        s3UploadBucket:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: ${self:custom.s3UploadBucket}

With this set up it is time to deploy again. Running sls deploy again will deploy the new upload bucket as well as the S3 write endpoint. To test the write endpoint, we'll need to head back over to Postman.

Copy the create-file URL that you get when Serverless has completed the deployment, paste it into Postman, and change the request type to POST. Next, we need to add the filename that we are uploading to the end of the URL; in our case we're going to be uploading car.json. The last thing we need to do is add the data to the request. Select Body, then raw, and select JSON as the body type. You can add whatever JSON data you would like, but here's some example data:

{
	"model": "Ford Focus",
	"year": 2018,
	"colour": "red"
}

When you post this data up, you should get a 200 response with an ETag reference to the file. Going into the console and opening your new S3 bucket, you should be able to see car.json.

Getting Data from S3

Now that we can upload data to S3, we want to be able to get it back too. We start by creating a getFile.js file inside the endpoints folder.

const Responses = require('../common/API_Responses');
const S3 = require('../common/S3');

const bucket = process.env.bucketName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.fileName) {
        // failed without an fileName
        return Responses._400({ message: 'missing the fileName from the path' });
    }

    const fileName = event.pathParameters.fileName;

    const file = await S3.get(fileName, bucket).catch(err => {
        console.log('error in S3 get', err);
        return null;
    });

    if (!file) {
        return Responses._400({ message: 'Failed to read data by filename' });
    }

    return Responses._200({ file });
};

This should look pretty similar to the GET endpoints we've created before. The differences are the use of the fileName path parameter, S3.get, and returning the file.

Inside the common S3.js file we need to add the get function. The main difference between this and getting from Dynamo is that when we get from S3, the result is not a JSON response but a Buffer. This means that if we upload a JSON file, it won't come back down in JSON format, so we check if we're getting a JSON file and then transform the Buffer back into a JSON string.

    async get(fileName, bucket) {
        const params = {
            Bucket: bucket,
            Key: fileName,
        };
        let data = await s3Client.getObject(params).promise();
        if (!data) {
            throw Error(`Failed to get file ${fileName}, from ${bucket}`);
        }
        if (fileName.slice(fileName.length - 4, fileName.length) == 'json') {
            data = data.Body.toString();
        }
        return data;
    }

Back in our serverless.yml file, we can add a new function and endpoint for getting files and we've already configured the permissions and environment variables.

    getFile:
        handler: lambdas/endpoints/getFile.handler
        events:
            - http:
                  path: get-file/{fileName}
                  method: GET
                  cors: true

As we're creating a new endpoint we need to do a full deployment again with sls deploy. We can then take the new get-file endpoint and paste it into a browser or Postman. If we add car.json to the end of the request we'll receive the JSON data that we uploaded earlier in this section.

Securing Your Endpoints with API Keys

Being able to create API endpoints quickly and easily with Serverless is great for starting a project and creating a proof of concept. When it gets to creating a production version of your application you need to start being more careful around who can access your endpoints. You don't want anybody being able to hit your APIs.

There are loads of ways to secure your APIs, and in this section we're going to implement API keys. If you don't pass the API key with the request, it fails with an unauthorised message. You can then control who you give the API keys to, and therefore who has access to your APIs.

You can also add usage policies to your API keys so that you can control how much each person uses your API. This allows you to create tiered usage plans for your service.

To start we're going to be creating a simple API Key. To do this we need to go into our serverless.yml file and add some configuration to the provider.

    apiKeys:
        - myFirstAPIKey

This will create a new API key. Now we need to tell serverless which API endpoints to protect with the API key. This is done per endpoint so that some of the APIs can be protected whilst others stay public. We specify that an endpoint needs to be protected by adding the option private: true.

    getUser:
        handler: lambdas/endpoints/getUser.handler
        events:
            - http:
                  path: get-user/{ID}
                  method: GET
                  cors: true
                  private: true

You can then add this field to as many of your APIs as you would like. To deploy this we can run sls deploy again. When this completes, you will get back an API key in the return values. This is very important and we'll use it very soon. If you try to make a request to your get-user API now, you should get a 401 Unauthorised error.

To get the request to succeed, you now need to pass up an API key in the headers of the request. To do this we need to use Postman or another API request tool and add a header to our GET request. In Postman, select the Authorisation tab and choose the API Key type. The key needs to be x-api-key and the value is the key that you got as an output from your serverless deploy.

When we now make the request, we get a successful response. This means that the only people who can access your API are people who you have given your API key to.
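
If you want to call a protected endpoint from code rather than from Postman, the same header applies. Here's a minimal sketch using Node's built-in https module – the hostname, path and key are placeholders for your own values:

const https = require('https');

const options = {
    hostname: 'your-api-id.execute-api.eu-west-1.amazonaws.com', // placeholder
    path: '/dev/get-user/1234',
    method: 'GET',
    headers: {
        'x-api-key': 'YOUR_API_KEY', // the key printed by sls deploy
    },
};

const req = https.request(options, res => {
    let body = '';
    res.on('data', chunk => (body += chunk));
    res.on('end', () => console.log(res.statusCode, body));
});

req.on('error', err => console.error(err));
req.end();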

This is great, but we can do more. We can add a usage policy to this API key. This is where we can limit the number of requests a month as well as the rate at which requests can be made. This is great for running a SaaS product, as you can provide an API key that gives a user a set number of API calls.

To create a usage plan we need to add a new object in the provider. The quota section defines how many requests can be made using that API key. You can change the period to either DAY or WEEK if that would suit your application better.

The throttle section allows you to control how frequently your API endpoints can be hit. Adding a throttle rate limit sets a maximum number of requests per second. This is very useful as it stops people from setting up a denial of service attack. The burstLimit allows the API to be hit more often than your rateLimit but only for a short period of time, normally a few seconds.

    usagePlan:
        quota:
            limit: 10
            period: MONTH
        throttle:
            burstLimit: 2
            rateLimit: 1

If we were to deploy this again, the deployment would fail as we would be trying to deploy the same API key. API keys need to be unique so we have to change the name of the API key. When we deploy this and copy our new API key into Postman, we'll be able to make requests as we normally would. If we try and make too many requests per second or reach the maximum number of requests then we'll get a 429 error of

{
    "message": "Limit Exceeded"
}

This means that you can't use this API key again until next month.

Whilst creating a usage plan is great, you often want to give different people different levels of access to your services. You might give free users 100 requests per month while paying users get 1000. You might want different payment plans which give different numbers of requests. You would also probably want a master API key for yourself which has unlimited requests!

To do this we can set up multiple groups of API keys that each have their own usage policy. We need to change the apiKeys and usagePlan sections.

    apiKeys:
        - free:
              - MyAPIKey3
        - paid:
              - MyPaidKey3
    usagePlan:
        - free:
              quota:
                  limit: 10
                  period: MONTH
              throttle:
                  burstLimit: 2
                  rateLimit: 1
        - paid:
              quota:
                  period: MONTH
                  limit: 1000
              throttle:
                  burstLimit: 20
                  rateLimit: 10

Once you've saved and deployed this you'll get two new API keys, each with a different level of access to your API endpoints.

Thanks for reading this guide!