Everything You Need to Know about Cloud Computing

In this Cloud Computing Full Course Tutorial for Beginners, we'll give you everything you need to know about Cloud Computing! We'll cover the fundamentals of Cloud Computing, the cloud lifecycle, and important concepts of AWS, Azure, and the Google Cloud Platform. We'll tell you how you can become a cloud computing engineer, and explain how the three major cloud computing platforms are different from one another. Finally, we'll cover some important cloud computing interview questions that would definitely help you out in a cloud computing interview. Whenever possible, each of these concepts are explained in a practical manner to ensure easy understanding. Now, let's take a deep dive into this cloud computing full course!

The following topics are explained in this Cloud Computing full course:

  1. Before Cloud Computing 01:15
  2. What is Cloud Computing 02:50
  3. Benefits of Cloud Computing 03:44
  4. Types of Cloud Computing 05:44
  5. Lifecycle of Cloud Computing 09:41
  6. What is AWS 13:14
  7. History of AWS 16:47
  8. Services AWS Offers 18:11
  9. How does AWS Make Lives Easier 20:26
  10. AWS Tutorial 22:50
  11. How did AWS become so successful 22:50
  12. The services AWS provides 31:35
  13. The future of AWS 38:53
  14. What is Azure 45:47
  15. Azure Services 47:32
  16. Uses of Azure 51:11
  17. Azure Tutorial 51:50
  18. What is Microsoft Azure 01:01:31
  19. What are the services Azure offers 01:08:00
  20. How is Azure better than other Cloud Services 02:11:29
  21. Companies that use Azure 02:15:08
  22. AWS vs Azure 02:29:50
  23. Comparison round: Market share and options 02:33:02
  24. AWS vs Google Cloud 02:38:17
  25. AWS vs Azure vs GCP 02:45:37
  26. What is Amazon Web services 02:46:13
  27. What is Microsoft Azure 02:46:25
  28. What is Google Cloud Platform 02:46:34
  29. Compute Services 02:47:49
  30. Storage Service 02:50:49
  31. Key Cloud tools 02:54:19
  32. Companies using cloud providers 02:55:16
  33. Advantages 02:55:40
  34. Disadvantages 02:57:43
  35. AWS Certifications 02:59:28
  36. Which is suitable for you 03:00:13
  37. Azure Certifications 03:06:46
  38. What is an Azure Certification 03:06:55
  39. What makes Azure so Great 03:07:31
  40. Who is a Cloud Computing Engineer 03:24:46
  41. Steps to become a Cloud Computing Engineer 03:23:13
  42. Cloud Computing Engineer salary 03:27:28
  43. AWS Interview Questions Part-1 03:28:31
  44. AWS Interview Questions Part-2 04:59:02
  45. Multiple Choice Questions 05:49:50
  46. Azure Interview Questions 05:59:58

Why become a Cloud Architect?
With the increasing focus on cloud computing and infrastructure over the last several years, cloud architects are in great demand worldwide. Many organizations have moved to cloud platforms for better scalability, mobility, and security, and cloud solutions architects are among the highest paid professionals in the IT industry.

According to a study by Goldman Sachs, cloud computing is one of the top three initiatives planned by IT executives as they make cloud infrastructure an integral part of their organizations. According to Forbes, enterprise IT architects with cloud computing expertise are earning a median salary of $137,957.

How to build a basic Serverless Application using AWS Lambda

In this post, I'm going to show you how easy it is to build a basic serverless application using a few AWS services, starting with AWS Lambda.

One of the architectural patterns I've championed is Serverless Architecture. In short, a serverless application utilizes an array of cloud applications and services to perform logic that would normally be constrained to a single application hosted on a constantly running server.

Serverless itself has many benefits. It typically has reduced operational cost, as you are only paying for an application to run when it needs to. It also typically has reduced development cost, as many things you’d typically have to worry about with a legacy application, such as OS and System configurations, are handled for you by most cloud providers.

In this post, I’m going to show you how easy it is to build a basic serverless application using a few AWS services.

AWS Lambda

AWS Lambda is a service that allows you to run functions when they need to run, and only pay for them when they do. AWS allows you to upload an executable in a variety of languages, and handles spinning up and tearing down containers for you as it needs them. In this blog, I’ll be using Golang, also called Go, a language built at Google with scalability and concurrency in mind. Some of the other languages supported out of the box by AWS Lambda include Python, Node.js, and Java, to name a few. Some others, such as C++, can be used if you supply a custom runtime.

Building a Lambda Function

Let’s say we want to build a really simple API that returns information about cars. It’s not something that’s going to be used very often, and it doesn’t have to perform any complex logic. It’s just retrieving data and sending it to a client. For brevity, we’re going to avoid having a database of any kind and pretend that we magically have all of this information in memory.

In Go, there are no classes. Instead, objects are represented by structs, similar to C. Let’s build a quick struct to represent a car:

type Car struct {
  Model string `json:"model"`
  Color string `json:"color"`
  Year int `json:"year"`
}

As you can see, we have a Car struct with three fields: Model, Color, and Year. The fields are capitalized so that they are exported, which is similar to public fields in some other languages. They are annotated with json tags that tell the encoding/json package how to map each field to a JSON key when serializing the struct, and how to deserialize JSON into an instance of the struct. More on that later.

Okay, so we have our struct now. Let’s say we want to build an HTTP endpoint that returns one of my favorite cars, a red 1999 Chevrolet Corvette. The response could look something like this:

GET /myfavoritecar

{
  "model": "Corvette",
  "color": "red",
  "year": 1999
}

AWS Lambda programs need to conform to a specific format which depends on the language they’re written in. All of them will have a handler function of some kind, which will usually include two parameters: a context and an input, the latter of which is usually some kind of JSON payload. In our example, we’re going to have our lambda function be triggered by requests to an application load balancer, which requires a specific contract be followed. More on that can be read here.

After following the AWS documentation, structs to represent an input to our Lambda function and its output will look something like this:

type LambdaPayload struct {
   RequestContext struct {
      Elb struct {
         TargetGroupArn string `json:"targetGroupArn"`
      } `json:"elb"`
   } `json:"requestContext"`
   HTTPMethod            string            `json:"httpMethod"`
   Path                  string            `json:"path"`
   Headers               map[string]string `json:"headers"`
   QueryStringParameters map[string]string `json:"queryStringParameters"`
   Body                  string            `json:"body"`
   IsBase64Encoded       bool              `json:"isBase64Encoded"`
}


type LambdaResponse struct {
   IsBase64Encoded   bool   `json:"isBase64Encoded"`
   StatusCode        int    `json:"statusCode"`
   StatusDescription string `json:"statusDescription"`
   Headers           struct {
      SetCookie   string `json:"Set-cookie"`
      ContentType string `json:"Content-Type"`
   } `json:"headers"`
   Body string `json:"body"`
}

There are fields that represent some meta info about our requests, such as Request Methods and response codes, as well as fields for request paths (in our case, /myfavoritecar), maps for headers and query parameters, the bodies of the requests and responses, and whether or not either are base 64 encoded. We can assume our lambda will automatically conform to this contract, as AWS will ensure that, and start writing our function!

When all is said and done, our function will look something like this:

func lambda_handler(ctx context.Context, payload LambdaPayload) (LambdaResponse, error) {
  response := &LambdaResponse{}
  response.Headers.ContentType = "text/html"
  response.StatusCode = http.StatusBadRequest
  response.StatusDescription = http.StatusText(http.StatusBadRequest)
  if payload.HTTPMethod == http.MethodGet && payload.Path == "/myfavoritecar" {
    car := &Car{}
    car.Model = "Corvette"
    car.Color = "Red"
    car.Year = 1999
    res, err := json.Marshal(car)
    if err != nil {
      fmt.Println(err)
      response.StatusCode = http.StatusInternalServerError
      response.StatusDescription = http.StatusText(http.StatusInternalServerError)
      return *response, err
    }
    response.Headers.ContentType = "application/json"
    response.Body = string(res)
    response.StatusCode = http.StatusOK
    response.StatusDescription = http.StatusText(http.StatusOK)
    return *response, nil
  }
  return *response, nil
}

As you can see, we check that the request is a GET request and that we are looking for my favorite car by checking the path of the request. Otherwise, we return a Bad Request response. If we are serving the right request, we create a Car struct, assign the proper values, and marshal it as JSON (convert the struct to a JSON string) using Golang’s (great) built in JSON library. We create an instance of the Lambda response struct in either case, populate the relevant fields (and the content-type where applicable) and return it. Now, we just need a main function that specifies the above should run and we’re done:

func main() {
  lambda.Start(lambda_handler)
}

That’s it! Now let’s deploy it to AWS and see how it runs.

Deploying the Lambda to AWS

The first thing we need to do is create a function package. This differs from language to language; for Golang it is quite simple. Here’s a sample bash script for doing so:

env GOOS=linux GOARCH=amd64 go build -o /tmp/lambda golang/*.go
zip -j /tmp/lambda.zip /tmp/lambda
aws lambda update-function-code --function-name car-lambda --region us-east-1 --zip-file fileb:///tmp/lambda.zip

As you can see, we use the Go toolchain to build an executable for the amd64 architecture with Linux as the OS (Go executables differ depending on where they are meant to run, and AWS Lambda requires this target). The build looks in a folder called golang for any .go files and compiles them into an executable. Then, we zip it up and call the AWS CLI to update the function code of our car-lambda function, which we haven't created yet. Let's do that now.

We can create our Lambda by going to the AWS Lambda page in the AWS Console, and clicking Create Function. We can name it car-lambda, and choose Go 1.x as our runtime. For execution role, we can choose the “Create a new role with basic Lambda permissions” option. That’s it! Now, we can run our script (assuming we have the AWS and Go CLI installed) and see output similar to the following:

{
  "FunctionName": "car-lambda",
  "FunctionArn": "arn:aws:lambda:us-east-1:YOUR_ACCOUNT_ID:function:car-lambda",
  "Runtime": "go1.x",
  "Role": "arn:aws:iam::YOUR_ACCOUNT_ID:role/service-role/car-lambda-role-w4cmi5fv",
  "Handler": "hello",
  "CodeSize": 4887939,
  "Description": "",
  "Timeout": 15,
  "MemorySize": 512,
  "LastModified": "2019-09-20T19:46:19.393+0000",
  "CodeSha256": "6h9d94EOfBj6uArZZKkDalbat7+xtlC83rOR+vBEaws=",
  "Version": "$LATEST",
  "TracingConfig": {
    "Mode": "PassThrough"
  },
  "RevisionId": "7d4cbfd8-2174-4a85-a9e0-043016f0a1d0"
}

We’re almost done deploying our lambda! However, you’ll see that the “handler” is set to “hello”. This is an AWS default. We need to change it to “lambda” (as that’s what I called my lambda program). You can do so by going to the Lambda’s page on AWS and scrolling down to the “handler” text field. You can change it to whatever you named your lambda program if you named it something different.

All right. NOW we’re done. Phew.

The next thing we need to do on AWS is create an ALB (Application Load Balancer). We can do so on the EC2 page by clicking the Load Balancers link, followed by Create Load Balancer. On the following screen, we click “Create” on the ALB square to be greeted with a page like the following (I’ve gone ahead and filled in the Name field and made it internet-facing):

For security purposes, I haven’t selected a VPC in the above image, but you’d want to choose a VPC and public subnets that are relevant to your AWS account. On the following screens, we need to set up a target group that routes traffic from the ALB to our lambda function. Select “New target group” from the dropdown, and “Lambda” as the target type. I named it car-lambda-tg.

For Step 5, choose the lambda we created earlier. When we create it, you’ll see something like this after it’s done provisioning the ALB for you:

If your ALB is in a VPC, make sure it is reachable from whichever network you’re on. You can do so by configuring it with public subnets and attaching a security group to the ALB (go to the VPC page, and then click Security Groups) that allows traffic from specific (or all) IP addresses. Read more about security groups. Then, when you try to go to the ALB’s DNS in your browser, you will (probably) see something like this:

This is good! We see the Bad Request we programmed earlier. Now, if we go to the /myfavoritecar endpoint….

We did it! Our lambda is now fully up and running on AWS. We have an API that can return responses, and we don't have to worry about setting up and paying for an entire server and its pain points. We can now extend it as much as we want through other APIs, endpoints, a database, more lambdas, and more. The possibilities are endless. I hope this blog post was helpful for you. As you can see, it's super easy to get up and running with a serverless API.

Top Ten Cloud Tools From AWS

Count down the top ten from the number one cloud vendor in the world.

Amazon Web Services offers robust, secure, and easy-to-operate tools for databases, storage, running operations, and so on. Some of the popular global companies that use AWS are Netflix, Unilever, Airbnb, BMW, and the Met Office. We've created this list to help you run your business more profitably. Here's our list of the top ten AWS services:

Here are the criteria for adding the services to the AWS services overview list:

  • Each tool responds to current trends in the cloud market
  • It will be in demand in 2020
  • It’s secure and robust
  • It’s cost-efficient
1. Simple Storage Service (S3)

S3 takes first place in the list of AWS services for storing and protecting any amount of data in a variety of situations: powering websites and mobile software, backup and recovery, archiving, corporate programs, IoT gadgets, and big data analytics.

S3 offers simple-as-a-pie administration tools that let you operate data and fine-tune access restrictions to reach your commercial or legal requirements.

What kind of value you get with S3:

  • Painless increase and reduction of storage resources in accordance with current needs.
  • Data storage in various classes of S3 storage that provide different levels of access to information at the corresponding prices. You will get a significant cost reduction.
  • Protection of data from unauthorized access with encryption and access restriction tools.
  • Classification of your data, simple operational, and reporting tools.
2. Elastic Compute Cloud (EC2)

Elastic Compute Cloud (EC2) is one of the most widely-used AWS services, providing secure, scalable computing assets in the cloud. It helps developers by simplifying cloud computing across the web.

EC2's easy-to-use web interface lets you access and set up computing assets with almost no effort. It gives users full control over the resources.

What value you will get:

  • Amazon EC2 lets you increase or decrease computing power in minutes, not hours or days.
  • You have complete control over your instances, including root access and all the features available on any other machine.
  • Adaptable cloud hosting. The service allows you to choose different types of operating systems and software packages.
3. Lambda

AWS Lambda is the AWS service that lets you run code without needing to provision or operate servers. You pay only for the compute time you use; when your code is not executing, no fee is charged.

With Lambda, you can run almost any kind of software or server services without the need for any administrative operations. All you need to do is to upload the code, and Lambda will provide all the necessary resources for its execution, scaling availability on demand.

You may like these features:

  • Lambda allows you to run code automatically, without the need for server provisioning.
  • It automatically scales your program up or down, running the code in response to each trigger.
  • When working with AWS Lambda, you pay for every 100 ms of code execution plus the number of triggers.
4. Glacier

Glacier is a safe, reliable, and highly cost-effective cloud storage solution for archiving data and long-term backup storage.

The cost of the service is only $0.004 per month for storing a gigabyte of data. This is a great cost reduction when compared to local storage solutions.
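At that rate, the savings are easy to sanity-check with quick arithmetic. Here's a sketch using only the $0.004-per-GB-month figure above (always verify against current AWS pricing before budgeting):

```go
package main

import "fmt"

func main() {
	const ratePerGBMonth = 0.004 // USD, the Glacier rate quoted above
	const gigabytes = 1024.0     // 1 TB of archived data
	const months = 12.0          // kept for a full year

	cost := ratePerGBMonth * gigabytes * months
	fmt.Printf("1 TB archived for a year: $%.2f\n", cost) // $49.15
}
```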

You have three retrieval options for various use cases: Expedited retrievals take 1-5 minutes, Standard retrievals take 3-5 hours, and Bulk retrievals take 5-12 hours.

Here are some more benefits that you can get from Glacier:

  • Glacier offers enhanced integration with AWS CloudTrail for logging, monitoring, and retaining storage API call activity for audit purposes. You also get several methods of encryption.
  • Glacier is developed to be the most cost-effective object storage class.
  • There is a community of Amazon object storage services that comprises thousands of consulting companies, system integrators, and independent software vendors.
5. Lex

Lex is one of the top AWS services, offering a scalable, secure, and easy-to-use end-to-end platform for creating, publishing, and monitoring bots. It combines automatic speech recognition (ASR) with natural language understanding (NLU) technologies to create a speech understanding system.

Lex offers two kinds of requests:

  • Confirmation requests that allow you to confirm a specific action before it is executed
  • Error-handling requests that allow you to ask the user to re-enter something to clarify information

Lex, by default, supports integration with AWS Lambda to retrieve data, update, and execute business logic. One more thing that you may like is single-click multi-platform deployment.

6. Polly

Polly is Amazon's text-to-speech service with access to a significant number of languages. Polly is accessible via an API that returns an audio stream you can use directly in your program.

You pay only for the number of symbols that you transcribe into voice. The book Harry Potter and the Sorcerer’s Stone contains about 385,000 characters and text-to-voice conversion could cost as little as $2. You can benefit from Polly if you use it for commercial or personal purposes.

Here are the key benefits:

  • Natural voice
  • Speech storage and distribution
  • Real-time streaming
  • Configuring and managing voice output
7. Simple Queuing Service (SQS)

Simple Queue Service is a fully managed message queuing service that can decouple and scale microservices, distributed systems, and serverless programs.

SQS offers two types of message queues. Standard queues provide maximum throughput, best-effort ordering, and at-least-once delivery. FIFO queues have limited throughput, but ensure that messages are processed exactly once and exclusively in the order they are sent.

8. Simple Notification Service (SNS)

Simple Notification Service is a highly reliable, secure, fully managed pub/sub messaging service. It can also decouple microservices, distributed systems, and serverless programs.

SNS is one of the most versatile AWS services. You can deliver messages to subscribers over a variety of protocols at any time of the day or night. Your own software can publish messages to SNS, and it will push them out to your subscribers. It's fast and cost-efficient.

There are different variants of payments, but you will pay about $2 for 100 thousand email notifications, and this is pretty cheap.

9. Internet of Things (IoT)

AWS IoT is one of the trendier AWS services that offers software solutions, operations, and data services. It allows you to connect devices safely, gather information, and perform activities based on the information received locally, even if there is no Internet connection.

Operating services let you supervise, manage, and protect a large and diverse fleet of devices. Data services help capitalize on IoT data.

What you can get with IoT:

  • Device software, so you can connect and operate your peripherals
  • Control services for the protection, control, and management of devices in the cloud
  • Data services to help you capitalize on IoT data
10. Athena

Athena is an interactive query service from AWS that simplifies the process of analyzing data in S3 using standard SQL. Athena is serverless, so there is no infrastructure to configure or operate, and you can start analyzing data right away. You don't even need to load data into Athena, since the service works directly with data stored in S3.

What kind of benefits you get with Athena:

  • It's very easy to get started with the Athena console
  • You can easily create queries using standard SQL
  • You pay per query
  • Integration with AWS Glue gives you optimized query performance and cost reduction
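For example, assuming you've already defined a table called access_logs over log files in S3 (the table and column names here are hypothetical), a query is just standard SQL:

```sql
-- Count hits per status code for 2019, reading straight from files in S3.
SELECT status, COUNT(*) AS hits
FROM access_logs
WHERE year = '2019'
GROUP BY status
ORDER BY hits DESC;
```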

Thanks for reading


Further reading

AWS Certified Solution Architect Associate

AWS Lambda vs. Azure Functions vs. Google Functions

Running TensorFlow on AWS Lambda using Serverless

Deploy Docker Containers With AWS CodePipeline

A Complete Guide on Deploying a Node app to AWS with Docker

Create and Deploy AWS and AWS Lambda using Serverless Framework

Introduction To AWS Lambda

What is Serverless? Learn Serverless by creating a Slack App

In this Serverless architecture tutorial, we'll learn what Serverless is and why you should use it, hands-on, by creating a Slack app. Along the way, we'll set up AWS and build our serverless app!

Serverless architecture is the industry's latest buzzword and many of the largest tech companies have begun to embrace it.

In this article, we'll learn what it is and why you should use it. We'll also set up AWS, create our serverless app, and create a slack app!

What is Serverless?

Serverless is a cloud computing paradigm in which the developer no longer has to worry about maintaining a server – they just focus on the code.

Cloud providers, such as AWS or Azure, are now responsible for executing code and maintaining servers by dynamically allocating their resources. A variety of events can trigger code execution, including cron jobs, http requests, or database events.

The code that developers send to the cloud is usually just a function, so serverless architecture is often implemented using Functions-as-a-Service, or FaaS. The major cloud providers offer frameworks for FaaS, such as AWS Lambda and Azure Functions.

Why Serverless?

Not only does serverless allow developers to just focus on code, but it has many other benefits as well.

Since cloud providers are now responsible for executing code and dynamically allocating resources based on event triggers, you typically pay only per request, i.e., when your code is actually being executed.

Additionally, since cloud providers are handling your servers, you don't have to worry about scaling up – the cloud provider will handle it. This makes serverless apps lower cost, easier to maintain, and easier to scale.

Setting up AWS Lambda

For this tutorial, I will be using AWS Lambda, so first, we'll create an AWS account. I find AWS's UI hard to understand and difficult to navigate, so I will be adding screenshots for each step.

Once you log in, you should see this:

Next, we'll set up an IAM user. An IAM (Identity and Access Management) user interacts with AWS and its resources on your behalf. This allows you to create different IAM users with different permissions and purposes, without compromising the security of your root user account.

Click on the "services" tab at the top of the page, and type "IAM" into the bar:

Click on the first result, and you'll see, on the left-hand sidebar, that you're at the dashboard. Click on the "Users" option to create our new IAM user.

Click on the "Add user" button to create a new user. Fill in the details as follows:

You can name your user anything you'd like, but I went with serverless-admin. Be sure that your user has "Programmatic access" to AWS, not "AWS Management Console Access". You'd use the latter for teammates, or other humans who need access to AWS. We just need this user to interact with AWS Lambda, so we can just give them programmatic access.

For permissions, I've chosen to attach existing policies since I don't have any groups, and I don't have any existing users that I want to copy permissions for. In this example, I will create the user with Administrator access since it's just for a personal project; however, if you were to use a serverless app in an actual production environment, your IAM user should be scoped to only access Lambda-necessary parts of AWS. (Instructions can be found here).

I didn't add any tags and created the user. It's vital to save the information given to you on the next screen: the Access Key ID and Secret Access Key.

Don't leave this screen without copying down both! You won't be able to see the Secret access key again after this screen.

Finally, we'll add these credentials to the AWS CLI. Use this guide to get the AWS CLI set up.

Make sure you have it installed by running aws --version. You should see something like this:

Then run aws configure and fill in the prompts:

I have the default region as us-east-2 already set up, but you can use this to determine what your region is.

To make sure that you have your credentials set up correctly, you can run cat ~/.aws/credentials in your terminal.
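If the configuration worked, that file follows the standard INI-style credentials layout; it should look something like this, with your real keys in place of the placeholders:

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```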

If you want to configure a profile other than your default, you can run the command as follows: aws configure --profile [profile name].

If you had trouble following the steps, you can also check out AWS's documentation.

Set up serverless

Go to your terminal and install the serverless package globally using npm: npm i -g serverless (more info on serverless here). Your terminal should look something like this:

Next, navigate to the directory where you want to create the app, then run serverless and follow the prompts:

For this application, we'll be using Node.js. You can name your app anything you want, but I've called mine exampleSlackApp.

Open your favorite code editor to the contents in exampleSlackApp (or whatever you've called your application).

First, we'll take a look at serverless.yml. You'll see there's a lot of commented code here describing the different options you can use in the file. Definitely give it a read, but I've deleted it down to just:

service: exampleslackapp

provider:
  name: aws
  runtime: nodejs10.x
  region: us-east-2

functions:
  hello:
    handler: handler.hello

I've included region since the default is us-east-1 but my aws profile is configured for us-east-2.

Let's deploy what we already have by running serverless deploy in the directory of the app that serverless just created for us. The output should look something like this:

And if you run serverless invoke -f hello in your terminal, it'll run the app, and you should see:

{
    "statusCode": 200,
    "body": "{\n  \"message\": \"Go Serverless v1.0! Your function executed successfully!\",\n  \"input\": {}\n}"
}

For further proof that our app is live, you can head back to the AWS console. Go to the services dropdown, search for "Lambda", and click on the first option ("Run code without thinking about servers").

And here's your app!

Next, we'll explore actually using serverless by building our Slack app. Our Slack app will post a random Ron Swanson quote to Slack using a slash command like this:

The following steps don't necessarily have to be done in the order that I've done them, so if you want to skip around, feel free!

Adding the API to our code

I'm using this API to generate Ron Swanson quotes since the docs are fairly simple (and of course, it's free). To see how requests are made and what gets returned, you can just put this URL in your browser:

https://ron-swanson-quotes.herokuapp.com/v2/quotes

You should see something like this:

So, we can take our initial function and modify it as such:

module.exports.hello = (event) => {
  getRon();
};

Note: I've removed the async portion

and getRon looks like:

function getRon() {
  // uses the `request` package: npm i request, then require it at the top of the file
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

Now, let's check if it works. To test this code locally, in your terminal: serverless invoke local -f hello. Your output should look something like:

serverless invoke -f hello would run the code that you've deployed, as we saw in previous sections. serverless invoke local -f hello, however, runs your local code, so it's useful for testing. Go ahead and deploy using serverless deploy!

Create your Slack App

To create your Slack app, follow this link. It'll make you sign in to a Slack workspace first, so be sure you're a part of one that you can add this app to. I've created a testing one for my purposes. You'll be prompted with this modal. You can fill in whatever you want, but here's what I have as an example:

From there, you'll be taken to the homepage for your app. You should definitely explore these pages and the options. For example, I've added the following customization to my app:

Next, we need to add some permissions to the app:

To get an OAuth Access Token, you have to add some scope and permissions, which you can do by scrolling down:

I've added "Modify your public channels" so that the bot can write to a channel, and "Send messages as Ron Swanson" so that when the message gets posted, it looks like a user called Ron Swanson is posting it. I've also added slash commands so the user can "request" a quote, as shown in the screenshot at the beginning of the article. After you save the changes, you should be able to scroll back up to OAuth & Permissions to see:

Click the button to Install App to Workspace, and you'll have an OAuth Access Token! We'll come back to this in a second, so either copy it down or remember it's in this spot.

Connect Code and Slack App

In AWS Lambda, find your Slack app function. Your Function Code section should show our updated code with the call to our Ron Swanson API (if it does not, go back to your terminal and run serverless deploy).

Scroll below that to the section that says "Environment Variables", and put your Slack OAuth Access Token here (you can name the key whatever you'd like):

Let's go back to our code and add Slack into our function. At the top of our file, we can declare a const with our new OAuth Token:

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN

process.env just grabs our environment variables (additional reading). Next, let's take a look at the Slack API to figure out how to post a message to a channel.
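As a quick, self-contained sketch of how process.env behaves (the OAUTH_TOKEN value below is a stand-in set by the script itself, not a real token):

```javascript
// process.env is a plain object mapping environment-variable names to
// string values. Assigning here is only for demonstration; in Lambda the
// value comes from the Environment Variables section of the console.
process.env.OAUTH_TOKEN = 'demo-value';

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN;
console.log(SLACK_OAUTH_TOKEN);        // 'demo-value'
console.log(typeof SLACK_OAUTH_TOKEN); // 'string', env vars are always strings
```

Keep in mind that environment variables always come through as strings, so anything numeric or boolean needs to be converted by hand.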

The two pictures above, taken from the API docs, are the most relevant to us. So, to make this API request, I'll use request by passing in an object called options:

  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general', // hard coding for now
      text: 'I am here',
    }
  }

and we can make the request:

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })

Finally, I'll wrap the whole thing in a function:

function postRon(quote) {
  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general',
      text: quote,
    }
  }

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

and we can call it from getRon like this:

function getRon() {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    postRon(body.substring(2, body.length - 2)) // strips the surrounding [" and "] from the response body
  })
}
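To see why that substring call is there: the quotes API returns a JSON array containing a single string, so slicing off the first and last two characters strips the leading '["' and trailing '"]'. Here's a minimal sketch (the body below is sample data); JSON.parse is a sturdier way to get the same result:

```javascript
// The quotes API returns a body like '["<quote>"]'. Stripping the first
// and last two characters removes '["' and '"]', leaving the bare quote.
const body = '["Never half-ass two things. Whole-ass one thing."]';

const bySubstring = body.substring(2, body.length - 2);
const byJsonParse = JSON.parse(body)[0]; // parse the array, take its first element

console.log(bySubstring); // the bare quote, identical to byJsonParse
```

Note that the substring trick assumes the quote contains no characters that JSON escapes; JSON.parse handles those correctly, so it's the safer choice if you're unsure about the data.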

So, all in all, our code should look like this:

'use strict';
let request = require('request');

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN

module.exports.hello = (event) => {
  getRon();
};

function getRon() {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    postRon(body.substring(2, body.length - 2))
  })
}

function postRon(quote) {
  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general',
      text: quote,
    }
  }

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

Now let's test! Unfortunately, our environment variable in AWS Lambda isn't available to us when we run serverless invoke local -f hello. There are a few ways you can approach this, but for our purposes, you can just replace the value for SLACK_OAUTH_TOKEN with your actual OAuth Token (make sure it's a string). But be sure you switch it back before you push it up to version control!
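Another option is a fallback default, sketched below with a hypothetical placeholder value: in Lambda the environment variable is set, so it wins; locally it's unset, so the placeholder is used. You'd still swap in your real token only while testing locally, and never commit it:

```javascript
// If OAUTH_TOKEN is set (as it is in Lambda), use it; otherwise fall back
// to a placeholder. 'xoxb-placeholder' is NOT a real token, just a stand-in
// you would temporarily replace while testing locally.
const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN || 'xoxb-placeholder';

console.log(SLACK_OAUTH_TOKEN);
```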

Run serverless invoke local -f hello, and hopefully you should see a message like this in your #general channel:

Please note that I put down my channel name as 'general' since it's my test workspace; however, if you're in an actual workspace, you should create a separate channel for testing apps, and put the message there instead while you're testing.

And in your terminal, you should see something like:

If that works, go ahead and deploy it using serverless deploy. If it does not, the best way to debug this is to adjust code and run serverless invoke local -f hello.

Adding a Slash Command

The final part is adding a slash command! Go back to your function's home page in AWS Lambda and look for the button that says "Add trigger":

Click on the button to get to the "Add trigger" page, and select "API Gateway" from the list:

I've filled in the information based on defaults mostly:

I've also left this API open for use – however, if you're using this in production, you should discuss with your team what the standard protocol should be. "Add" the API, and you should receive an API endpoint. Hold on to it, because we'll need it for the next step.

Let's switch back over to our Slack app and add a slash command:

Click on "Create New Command" and it should pop up with a new window to create a command. Here's how I filled mine out:

You can enter anything you want for "command" and "short description" but for "request URL", you should put your API endpoint.

Finally, we'll go back to our code to make some final adjustments. If you try to use the slash command now, you'll receive some kind of error back – this is because Slack expects a response, and AWS expects your function to return one when the endpoint is hit. So, we'll change our function to accept a callback (for reference):

module.exports.hello = (event,context,callback) => {
  getRon(callback);
};

and then we'll change getRon to do something with the callback:

function getRon(callback) {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    callback(null, SUCCESS_RESPONSE)
    postRon(body.substring(2, body.length - 2))
  })
}

where SUCCESS_RESPONSE is at the top of the file:

const SUCCESS_RESPONSE = {
  statusCode: 200,
  body: null
}

You can put the callback here or in postRon – it just depends on what your purposes are with the callback.
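To make the callback contract concrete, here's a minimal sketch using a stripped-down stand-in for our handler (not the real deployed function):

```javascript
const SUCCESS_RESPONSE = {
  statusCode: 200,
  body: null
}

// A stand-in handler: API Gateway treats the callback's second argument as
// the HTTP response, so it needs at least a statusCode.
const handler = (event, context, callback) => {
  callback(null, SUCCESS_RESPONSE);
};

// Invoke it with a mock callback that just records what it receives,
// which is roughly what serverless invoke local does for you.
let received;
handler({}, {}, (err, resp) => { received = resp; });

console.log(received.statusCode); // 200
```

Passing null as the first callback argument signals success; passing an Error there instead would make Lambda report a failed invocation.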

Our code at this point now looks something like:

'use strict';
let request = require('request');

const SLACK_OAUTH_TOKEN = process.env.OAUTH_TOKEN

const SUCCESS_RESPONSE = {
  statusCode: 200,
  body: null
}

module.exports.hello = (event,context,callback) => {
  getRon(callback);
};

function getRon(callback) {
  request('https://ron-swanson-quotes.herokuapp.com/v2/quotes', function (err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
    callback(null, SUCCESS_RESPONSE)
    postRon(body.substring(2, body.length - 2))
  })
}

function postRon(quote) {
  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: 'general',
      text: quote,
    }
  }

  request(options, function(err, resp, body) {
    console.log('error:', err)
    console.log('statusCode:', resp && resp.statusCode)
    console.log('body', body)
  })
}

You should be able to use the /ron command in Slack now and get a Ron Swanson quote back. If you don't, you can use CloudWatch logs to see what went wrong:

The way our code works now, the channel name is hardcoded. What we actually want is for the quote to be posted in the channel where you used /ron.

So, we can now use the event portion of our function.

module.exports.hello = (event,context,callback) => {
  console.log(event)
  getRon(callback);
};

Use /ron to run the function, and then check your CloudWatch logs to see what gets logged to the console (you may need to refresh). Check the most recent logs, and you should see something like this:

The first item in this list (where it says "resource", "path", etc.) is the event, so if you expand that, you'll see a long list of things, but what we're looking for is 'body' all the way down at the bottom:

The body is a string with some relevant information in it, including "channel_id". We can use channel_id (or channel_name) and pass it into the function that creates our Slack message. For your convenience, I've already parsed this string: event.body.split("&")[3].split("=")[1] should give you the channel_id. For simplicity, I've hardcoded which entry (3) holds the channel_id.
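As a sketch of that parsing (the payload below is illustrative, not a real Slack request): the split-based approach assumes channel_id is the fourth field, while looking it up by name with URLSearchParams survives field reordering:

```javascript
// Slack sends the slash-command payload as URL-encoded form data.
const body = 'token=abc&team_id=T1&team_domain=test&channel_id=C123&channel_name=general';

// Positional approach: hardcode that channel_id is entry 3.
const bySplit = body.split('&')[3].split('=')[1];

// Name-based lookup, robust to field order (URLSearchParams is built into Node).
const byName = new URLSearchParams(body).get('channel_id');

console.log(bySplit, byName); // both 'C123'
```

URLSearchParams also URL-decodes values for you, which the raw split does not, so it's the safer option if any field might contain encoded characters.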

Now, we can alter our code to save that string as a variable:

let channel = 'general' // our fallback

module.exports.hello = (event,context,callback) => {
  console.log(event)
  channel = event.body.split("&")[3].split("=")[1]
  console.log(context)
  getRon(callback);
};

and in postRon:

  let options = {
    url: 'https://slack.com/api/chat.postMessage',
    headers: {
      'Accept': 'application/json',
    },
    method: 'POST',
    form: {
      token: SLACK_OAUTH_TOKEN,
      channel: channel,
      text: quote,
    }
  }

Finally, if you use the slash command in any channel in your workspace, you should see a Ron Swanson quote pop up! If not, as I mentioned before, the most common tools I use to debug serverless apps are serverless invoke local -f <function name> and CloudWatch logs.

Hopefully you were able to create a functioning Slack application! I've included resources and background reading throughout the article, and I'm happy to answer any questions you may have!

Final Repo with code: https://github.com/lsurasani/ron-swanson-slack-app/