1656749532
A quick note about AWS Directory Services. This post is a summary of notes from the course Ultimate AWS Certified Solutions Architect Professional by Stephane Maarek. If you want detailed learning, please buy Stephane Maarek's course.
https://medium.com/codex/aws-certified-solutions-architect-professional-identity-federation-directory-services-895807d86497
1656748020
This Serverless plugin simplifies the integration of Sentry with the popular Serverless Framework and AWS Lambda.
Currently, we support the Node.js 12, 14, and 16 runtimes for AWS Lambda. Other platforms can be added by providing a respective integration library. Pull requests are welcome!
The serverless-sentry-plugin and serverless-sentry-lib libraries are not affiliated with Functional Software Inc. (Sentry), Serverless, or Amazon Web Services; they are developed independently and in my spare time.
Sentry integration splits into two components:
For a detailed overview of how to use the serverless-sentry-lib refer to its README.md.
Install the @sentry/node
module as a production dependency (so it gets packaged together with your source code):
npm install --save @sentry/node
Install the serverless-sentry-lib as a production dependency as well:
npm install --save serverless-sentry-lib
Install this plugin as a development dependency (you don't want to package it with your release artifacts):
npm install --save-dev serverless-sentry
Check out the examples below on how to integrate it with your project by updating serverless.yml
as well as your Lambda handler code.
The Serverless Sentry Plugin allows configuration of the library through the serverless.yml
and will create release and deployment information for you (if desired). This is the recommended way of using the serverless-sentry-lib
library.
The plugin determines your environment during deployment and adds the SENTRY_DSN
environment variables to your Lambda function. All you need to do is to load the plugin and set the dsn
configuration option as follows:
service: my-serverless-project
provider:
# ...
plugins:
- serverless-sentry
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
The actual reporting to Sentry happens in platform-specific libraries. Currently, only Node.js and Python are supported.
Each library provides a withSentry helper that acts as a decorator around your original AWS Lambda handler code and is configured via this plugin or manually through environment variables.
For more details refer to the individual libraries' repositories:
Old, now unsupported libraries:
For maximum flexibility, this library is implemented as a wrapper around your original AWS Lambda handler code (your handler.js
or similar function). The withSentry
higher-order function adds error and exception handling and takes care of configuring the Sentry client automatically.
withSentry
is pre-configured to reasonable defaults and doesn't need any configuration. It will automatically load and configure @sentry/node
which needs to be installed as a peer dependency.
Original Lambda Handler Code:
exports.handler = async function (event, context) {
console.log("EVENT: \n" + JSON.stringify(event, null, 2));
return context.logStreamName;
};
New Lambda Handler Code Using withSentry
For Sentry Reporting
const withSentry = require("serverless-sentry-lib"); // This helper library
exports.handler = withSentry(async function (event, context) {
console.log("EVENT: \n" + JSON.stringify(event, null, 2));
return context.logStreamName;
});
ES6 Module: Original Lambda Handler Code:
export async function handler(event, context) {
console.log("EVENT: \n" + JSON.stringify(event, null, 2));
return context.logStreamName;
}
ES6 Module: New Lambda Handler Code Using withSentry
For Sentry Reporting
import withSentry from "serverless-sentry-lib"; // This helper library
export const handler = withSentry(async (event, context) => {
console.log("EVENT: \n" + JSON.stringify(event, null, 2));
return context.logStreamName;
});
Once your Lambda handler code is wrapped in withSentry
, it will be extended with automatic error reporting. Whenever your Lambda handler sets an error response, the error is forwarded to Sentry with additional context information. For more details about the different configuration options available refer to the serverless-sentry-lib documentation.
Configure the Sentry plugin using the following options in your serverless.yml
:
- dsn - Your Sentry project's DSN URL (required)
- enabled - Specifies whether this SDK should activate and send events to Sentry (defaults to true)
- environment - Explicitly set the Sentry environment (defaults to the Serverless stage)

In order for some features such as releases and deployments to work, you need to grant API access to this plugin by setting the following options:

- organization - Organization name
- project - Project name
- authToken - API authentication token with project:write access

Important: You need to make sure you're using Auth Tokens, not API Keys, which are deprecated.
Releases are used by Sentry to provide you with additional context when determining the cause of an issue. The plugin can automatically create releases for you and tag all messages accordingly. To find out more about releases in Sentry, refer to the official documentation.
In order to enable release tagging, you need to set the release
option in your serverless.yml
:
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz
organization: my-sentry-organization
project: my-sentry-project
authToken: my-sentry-auth-token
release:
version: <RELEASE VERSION>
refs:
- repository: <REPOSITORY NAME>
commit: <COMMIT HASH>
previousCommit: <COMMIT HASH>
- version - Set the release version used in Sentry. Use any of the below values:
  - git - Uses the current git commit hash or tag as release identifier.
  - random - Generates a random release during deployment.
  - true - First tries to determine the release via git and falls back to random if Git is not available.
  - false - Disable release versioning.
- refs - If you have set up Sentry to collect commit data, you can use commit refs to associate your commits with your Sentry releases. Refer to the Sentry Documentation for details about how to use commit refs. If you set your version to git (or true), the refs options are populated automatically and don't need to be set.

Tip: If you see {"refs":["Invalid repository names: xxxxx/yyyyyyy"]} and your repository provider is not supported by Sentry (currently only GitHub or GitLab with Sentry Integrations), you have the following options:

- Set refs: false. This will not automatically populate the refs, but it also dismisses your commit id as version.
- Set refs: true and version: true to populate the version with the commit short id.

If you don't specify any refs, you can also use the short notation for release and simply set it to the desired release version as follows:
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz
release: <RELEASE VERSION>
If you don't need or want the plugin to create releases and deployments, you can omit the authToken
, organization
and project
options. Messages and exceptions sent by your Lambda functions will still be tagged with the release version and show up grouped in Sentry nonetheless.
Pro Tip: Using a fixed string in combination with Serverless variables allows you to inject your release version through the command line, e.g. when running on your continuous integration machine.
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz
organization: my-sentry-organization
project: my-sentry-project
authToken: my-sentry-auth-token
release:
version: ${opt:sentryVersion}
refs:
- repository: ${opt:sentryRepository}
commit: ${opt:sentryCommit}
And then deploy your project using the command-line options from above:
sls deploy --sentryVersion 1.0.0 --sentryRepository foo/bar --sentryCommit 2da95dfb
Tip when using Sentry with multiple projects: Releases in Sentry are specific to the organization and can span multiple projects. Take this into consideration when choosing a version name. If your version applies to the current project only, you should prefix it with your project name.
If no option for release
is provided, releases and deployments are disabled.
Sourcemap files can be uploaded to Sentry to display source files in stack traces rather than the compiled versions. This only uploads existing output files; you'll need to configure your bundling tool to generate sourcemaps separately. You'll also need to have releases configured (see above).
Default options:
custom:
sentry:
sourceMaps: true
Add a custom prefix (required if your app is not deployed at the filesystem root):
custom:
sentry:
sourceMaps:
urlPrefix: /var/task
In addition, you can configure the Sentry error reporting on a service as well as a per-function level. For more details about the individual configuration options see the serverless-sentry-lib documentation.
- autoBreadcrumbs - Automatically create breadcrumbs (see Sentry Raven docs, defaults to true)
- filterLocal - Don't report errors from local environments (defaults to true)
- captureErrors - Capture Lambda errors (defaults to true)
- captureUnhandledRejections - Capture unhandled Promise rejections (defaults to true)
- captureUncaughtException - Capture unhandled exceptions (defaults to true)
- captureMemoryWarnings - Monitor memory usage (defaults to true)
- captureTimeoutWarnings - Monitor execution timeouts (defaults to true)

# serverless.yml
service: my-serverless-project
provider:
# ...
plugins:
- serverless-sentry
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
captureTimeoutWarnings: false # disable timeout warnings globally for all functions
functions:
FuncFoo:
handler: Foo.handler
description: Hello World
sentry:
captureErrors: false # Disable error capturing for this specific function only
captureTimeoutWarnings: true # Turn timeout warnings back on
FuncBar:
handler: Bar.handler
sentry: false # completely turn off Sentry reporting
In some cases, it might be desirable to use a different Sentry configuration depending on the currently deployed stage. To make this work we can use a built-in Serverless variable resolution trick:
# serverless.yml
plugins:
- serverless-sentry
custom:
config:
default:
sentryDsn: ""
prod:
sentryDsn: "https://xxxx:yyyy@sentry.io/zzzz" # URL provided by Sentry
sentry:
dsn: ${self:custom.config.${self:provider.stage}.sentryDsn, self:custom.config.default.sentryDsn}
captureTimeoutWarnings: false # disable timeout warnings globally for all functions
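The variable expression above first looks up the DSN for the current stage and falls back to the default entry. Serverless resolves this internally; as a plain-JavaScript sketch of the same lookup logic (resolveDsn is a hypothetical helper for illustration only):

```javascript
// Stage-keyed config mirroring the serverless.yml custom.config block above.
const config = {
  default: { sentryDsn: "" },
  prod: { sentryDsn: "https://xxxx:yyyy@sentry.io/zzzz" },
};

// Resolve the DSN for a stage, falling back to the "default" entry when no
// stage-specific block exists -- the same behavior as the
// ${self:custom.config.${self:provider.stage}.sentryDsn, ...} expression.
function resolveDsn(stage) {
  const stageConfig = config[stage];
  return stageConfig ? stageConfig.sentryDsn : config.default.sentryDsn;
}
```

With this setup, non-production stages resolve to an empty DSN, which effectively disables reporting there.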
Double-check the DSN settings in your serverless.yml and compare them with what Sentry shows you in your project settings under "Client Keys (DSN)". You need a URL in the following format - see the Sentry Quick Start:
{PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}/{PATH}{PROJECT_ID}
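As an illustration of that format, here is a small sketch that splits a DSN string into its components (parseDsn is a hypothetical helper for illustration; the Sentry SDK parses the DSN for you):

```javascript
// Split a DSN of the form {PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}/{PROJECT_ID}
// into its parts, returning null when the string doesn't match the format.
function parseDsn(dsn) {
  const m = dsn.match(/^(\w+):\/\/([^:@]+)(?::([^@]+))?@([^/]+)\/(.+)$/);
  if (!m) return null;
  const [, protocol, publicKey, secretKey, host, projectId] = m;
  return { protocol, publicKey, secretKey, host, projectId };
}
```

A quick check like this can tell you whether a value copied from the Sentry UI was truncated or mangled before you spend time debugging the plugin itself.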
Also, make sure to add the plugin to your plugins list in the serverless.yml
:
plugins:
- serverless-sentry
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
Make sure to set the authToken
, organization
as well as project
options in your serverless.yml
, and set release
to a non-empty value as shown in the example below:
plugins:
- serverless-sentry
custom:
sentry:
dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
organization: my-sentry-organization
project: my-sentry-project
authToken: my-sentry-auth-token
release: git
Check out the filterLocal configuration setting. If you test Sentry locally and want to make sure your messages are sent, set this flag to false. Once done testing, don't forget to switch it back to true; otherwise you'll spam your Sentry projects with meaningless errors from local code changes.
- false or unset.
- enabled flag. Thanks to aaronbannin for the contribution.
- custom.sentry.
- captureUncaughtException configuration option. This already exists in serverless-sentry-lib but was never exposed in the plugin.
- SENTRY_DSN is not set but simply disable Sentry integration.
- sls deploy -f MyFunction). Thanks to dominik-meissner!
- serverless-sentry-plugin requires the use of serverless-sentry-lib v2.x.x
- raven to the Unified Node.js SDK @sentry/node.
- withSentry higher-order function. Passing the Sentry instance is now optional.
- sls invoke local. Thanks to sifrenette for his contribution.
- serverless-sentry-lib v1.1.x.
- serverless-sentry-lib as well!
- sentry: false in the serverless.yml.

Thank you for supporting me and my projects.
Author: Arabold
Source Code: https://github.com/arabold/serverless-sentry-plugin
License: MIT license
1656720000
Hello everyone,
In this video, we will cover some theory about AWS DynamoDB that will be very useful for our application, so let's understand it before using it. We will learn the core components of DynamoDB with examples given in the documentation. In the next parts, we will implement it in our application.
1656684300
This is a tiny example to show how to deploy the Next.js app on AWS Lambda using Apex Up.
A custom server is needed to run your Next.js app on AWS Lambda. In this example, express will be used.
// server.js
const express = require("express");
const next = require("next");
const port = parseInt(process.env.PORT, 10) || 3000;
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });
const handle = app.getRequestHandler();
app
.prepare()
.then(() => {
const server = express();
server.get("/", (req, res) => {
return app.render(req, res, "/", req.params);
});
server.get("/about", (req, res) => {
return app.render(req, res, "/about", req.params);
});
server.get("*", (req, res) => {
return handle(req, res);
});
server.listen(port, err => {
if (err) throw err;
console.log(`> Ready on http://localhost:${port}`);
});
})
.catch(ex => {
console.log(ex);
process.exit(1);
});
package.json
with custom server.js
{
"scripts": {
"dev": "node server.js",
"build": "next build",
"start": "NODE_ENV=production node server.js"
}
}
$ curl -sf https://up.apex.sh/install | sh
# verify the installation
$ up version
You need an AWS account, and it is recommended to use IAM programmatically for security and convenience. If you have already installed awscli or awsebcli, etc., you have a ~/.aws/credentials file storing your AWS credentials. It allows you to use the AWS_PROFILE environment variable. If you don't have one, please create it and save your account access key and secret key in it.
# ~/.aws/credentials
[my-aws-account-for-lambda]
aws_access_key_id = xxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxx
An IAM policy allows Up to access your AWS resources in order to deploy your Next.js app on Lambda. Go to AWS IAM, create the new policy, and attach it to your AWS account.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"acm:*",
"cloudformation:Create*",
"cloudformation:Delete*",
"cloudformation:Describe*",
"cloudformation:ExecuteChangeSet",
"cloudformation:Update*",
"cloudfront:*",
"cloudwatch:*",
"ec2:*",
"ecs:*",
"events:*",
"iam:AttachRolePolicy",
"iam:CreatePolicy",
"iam:CreateRole",
"iam:DeleteRole",
"iam:DeleteRolePolicy",
"iam:GetRole",
"iam:PassRole",
"iam:PutRolePolicy",
"lambda:AddPermission",
"lambda:Create*",
"lambda:Delete*",
"lambda:Get*",
"lambda:InvokeFunction",
"lambda:List*",
"lambda:RemovePermission",
"lambda:Update*",
"logs:Create*",
"logs:Describe*",
"logs:FilterLogEvents",
"logs:Put*",
"logs:Test*",
"route53:*",
"route53domains:*",
"s3:*",
"ssm:*",
"sns:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "apigateway:*",
"Resource": "arn:aws:apigateway:*::/*"
}
]
}
Create an up.json file:

{
"name": "nextjs-example",
// aws account profile in ~/.aws/credentials
"profile": "my-aws-account-for-lambda",
"regions": ["ap-northeast-2"],
"lambda": {
// min 128, default 512
"memory": 256,
// AWS Lambda supports node.js 8.10 latest
"runtime": "nodejs8.10"
},
"proxy": {
"command": "npm start",
"timeout": 25,
"listen_timeout": 15,
"shutdown_timeout": 15
},
"stages": {
"development": {
"proxy": {
"command": "yarn dev"
}
}
},
"environment": {
// you can hydrate env variables as you want.
"NODE_ENV": "production"
},
"error_pages": {
"variables": {
"support_email": "admin@my-email.com",
"color": "#2986e2"
}
}
}
$ yarn build
Create a .upignore file

Up will inspect your files to compose the deployment package for Lambda. First, Up reads .gitignore and ignores the files listed there. After that, .upignore is read. By default, Up ignores dotfiles, so you need to negate the .next directory in .upignore in order for Up to build the package with it.
# .upignore
!.next
$ up
If you encounter the cold start issue, refer to the lambda warmer: https://github.com/mattdamon108/lambda-warmer
Download Details:
Author: mattdamon108
Source Code: https://github.com/mattdamon108/nextjs-with-lambda
License:
#nextjs #react #javascript #Lambda #aws
1656663300
Serverless Select Plugin
Select which functions are to be deployed based on region and stage.
Note: Requires Serverless v1.12.x or higher.
Install via npm in the root of your Serverless service:
npm install serverless-plugin-select --save-dev
Add the plugin to the plugins array in your Serverless serverless.yml; you should place it at the top of the list:

plugins:
- serverless-plugin-select
- ...
Add regions or stages to your functions to select them for deployment.
Run deploy command sls deploy --stage [STAGE NAME] --region [REGION NAME]
or sls deploy function --stage [STAGE NAME] --region [REGION NAME] --function [FUNCTION NAME]
Functions will be deployed based on your selection
All done!
How does it work? When the deployment region or stage doesn't match a function's regions or stages, that function is removed from the deployment.
regions - Function accepted deployment regions.
functions:
hello:
regions:
- eu-west-1
- ...
stages - Function accepted deployment stages.

functions:
hello:
stages:
- dev
- ...
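The selection rule described above can be sketched in plain JavaScript (illustrative only; the actual plugin hooks into the Serverless deploy lifecycle rather than exposing a function like this):

```javascript
// Keep a function only if it declares no regions/stages constraints, or if
// its lists include the current deployment stage and region -- the rule the
// plugin applies when pruning functions from a deployment.
function selectFunctions(functions, stage, region) {
  const kept = {};
  for (const [name, fn] of Object.entries(functions)) {
    const stageOk = !fn.stages || fn.stages.includes(stage);
    const regionOk = !fn.regions || fn.regions.includes(region);
    if (stageOk && regionOk) kept[name] = fn;
  }
  return kept;
}
```

So a function with `stages: [dev]` survives a `--stage dev` deploy but is dropped from a `--stage prod` one, while a function with no constraints is always deployed.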
Help us make this plugin better and future-proof:

- npm install
- git checkout -b new_feature
- npm run lint
Author: FidelLimited
Source Code: https://github.com/FidelLimited/serverless-plugin-select
License: MIT license
1656644125
Hi guys, in this post I'm going to introduce the books that helped me a lot on the path to becoming a Cloud DevOps Engineer from a Full-stack Developer. Hope it is useful for everyone; the books I recommend go from basic to advanced.
https://medium.com/codex/side-story-books-you-should-read-to-become-devops-for-beginner-b75384ef6774
1656636720
Serverless Framework: Deploy on Scaleway Functions
The Scaleway functions plugin for Serverless Framework allows users to deploy their functions and containers to Scaleway Functions with a simple serverless deploy
.
Serverless Framework handles everything from creating namespaces to function/code deployment by calling API endpoints under the hood.
npm install serverless -g)

Let's work in ~/my-srvless-projects:
# mkdir ~/my-srvless-projects
# cd ~/my-srvless-projects
The easiest way to create a project is to use one of our templates. The list of templates is here
Let's use python3
serverless create --template-url https://github.com/scaleway/serverless-scaleway-functions/tree/master/examples/python3 --path myService
Once it's done, we can install mandatory node packages used by serverless
cd myService
npm i
Note: these packages are only used by serverless, they are not shipped with your functions.
Your functions are defined in the serverless.yml
file created:
service: scaleway-python3
configValidationMode: off
useDotenv: true
provider:
name: scaleway
runtime: python310
# Global Environment variables - used in every functions
env:
test: test
# Storing credentials in this file is strongly discouraged for security reasons; please refer to README.md for best practices
scwToken: <scw-token>
scwProject: <scw-project-id>
# region in which the deployment will happen (default: fr-par)
scwRegion: <scw-region>
plugins:
- serverless-scaleway-functions
package:
patterns:
- '!node_modules/**'
- '!.gitignore'
- '!.git/**'
functions:
first:
handler: handler.py
# Local environment variables - used only in given function
env:
local: local
Note: provider.name and plugins MUST NOT be changed; they enable us to use the Scaleway provider.

This file contains the configuration of one namespace containing one or more functions (in this example, only one) of the same runtime (here python3).
The different parameters are:
- service: your namespace name
- useDotenv: load environment variables from .env files (default: false), read Security and secret management
- configValidationMode: configuration validation: 'error' (fatal error), 'warn' (logged to the output) or 'off' (default: warn)
- provider.runtime: the runtime of your functions (check the supported runtimes above)
- provider.env: environment variables attached to your namespace are injected into all your namespace functions
- provider.secret: secret environment variables attached to your namespace are injected into all your namespace functions, see this example project
- scwToken: Scaleway token you got in prerequisites
- scwProject: Scaleway org id you got in prerequisites
- scwRegion: Scaleway region in which the deployment will take place (default: fr-par)
- package.patterns: usually you don't need to configure it. Enables you to include/exclude directories to/from the deployment
- functions: configuration of your functions. It's a YAML dictionary, with the key being the function name
  - handler (Required): file or function which will be executed. See the next section for runtime-specific handlers
  - env (Optional): environment variables specific to the current function
  - secret (Optional): secret environment variables specific to the current function, see this example project
  - minScale (Optional): how many function instances we keep running (default: 0)
  - maxScale (Optional): maximum number of instances this function can scale to (default: 20)
  - memoryLimit: RAM allocated to the function instances. See the introduction for the list of supported values
  - timeout: the maximum duration in seconds that the request will wait to be served before it times out (default: 300 seconds)
  - runtime (Optional): runtime of the function, if you need to deploy multiple functions with different runtimes in your Serverless project. If absent, provider.runtime will be used to deploy the function, see this example project.
  - events (Optional): list of events to trigger your functions (e.g. trigger a function based on a schedule with CRONJobs). See the events section below
  - custom_domains (Optional): list of custom domains, refer to the Custom Domain Documentation

Your configuration file may contain sensitive data; your project ID and your token must not be shared and must not be committed to VCS.
To keep your information safe and be able to share or commit your serverless.yml file, you should remove your credentials from it. Then you can:

- use a .env file and keep it secret

To use a .env file you can modify your serverless.yml file as follows:

# This will allow the plugin to read your .env file
useDotenv: true
provider:
name: scaleway
runtime: node16
scwToken: ${env:SCW_SECRET_KEY}
scwProject: ${env:SCW_DEFAULT_PROJECT_ID}
scwRegion: ${env:SCW_REGION}
And then create a .env file next to your serverless.yml file, containing the following values:
SCW_SECRET_KEY=XXX
SCW_DEFAULT_PROJECT_ID=XXX
SCW_REGION=fr-par
You can use this pattern to hide your secrets (for example a connection string to a database or an S3 bucket).
Based on the chosen runtime, the handler variable of a function might vary.
Node has two module systems: CommonJS
modules and ECMAScript
(ES
) modules. By default, Node treats your code files as CommonJS modules, however ES modules have also been available since the release of node16
runtime on Scaleway Serverless Functions. ES modules give you a more modern way to re-use your code.
According to the official documentation, to use ES modules you can specify the module type in package.json
, as in the following example:
...
"type": "module",
...
This then enables you to write your code for ES modules:
export {handle};
function handle (event, context, cb) {
return {
body: process.version,
statusCode: 200,
};
};
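A handler of this shape can be exercised locally with plain node; the sketch below rewrites the example in CommonJS so it runs without any module-type setting (illustrative only):

```javascript
// Same shape as the ES-module handler above, in CommonJS form. The returned
// object is the HTTP response the platform serves for the function.
function handle(event, context, cb) {
  return {
    body: process.version, // e.g. the Node version string
    statusCode: 200,
  };
}

// Invoking it locally is just a plain function call with a dummy event/context:
const response = handle({}, {});
```

This kind of direct call is a cheap way to check a handler's return shape before deploying.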
The use of ES modules is encouraged, since they are more efficient and make setup and debugging much easier.
Note that using "type": "module"
or "type": "commonjs"
in your package.json will enable/disable some features in Node runtime. For a comprehensive list of differences, please refer to the official documentation, the following is a summary only:
- commonjs is used as the default value
- commonjs allows you to use require/module.exports (synchronous code loading; it basically copies all file contents)
- module allows you to use import/export ES6 instructions (asynchronous loading, more optimized as it imports only the pieces of code you need)

Specify the path to your handler file (from serverless.yml), omitting ./ and ../, and add the exported function to use as a handler:
- src
- handlers
- firstHandler.js => module.exports.myFirstHandler = ...
- secondHandler.js => module.exports.mySecondHandler = ...
- serverless.yml
In serverless.yml:
provider:
# ...
runtime: node16
functions:
first:
handler: src/handlers/firstHandler.myFirstHandler
second:
handler: src/handlers/secondHandler.mySecondHandler
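The handler strings above follow a simple convention: everything after the last dot is the exported function name, and the rest is the file path. A small sketch of that split (splitHandler is a hypothetical helper for illustration; the plugin resolves handlers internally):

```javascript
// Split "path/to/file.exportedName" at the last dot into the handler file
// path and the exported function name.
function splitHandler(handler) {
  const i = handler.lastIndexOf(".");
  return { file: handler.slice(0, i), fn: handler.slice(i + 1) };
}
```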
Similar to node
, path to handler file src/testing/handler.py
:
- src
- handlers
- firstHandler.py => def my_first_handler
- secondHandler.py => def my_second_handler
- serverless.yml
In serverless.yml:
provider:
# ...
runtime: python310 # or python37, python38, python39
functions:
first:
handler: src/handlers/firstHandler.my_first_handler
second:
handler: src/handlers/secondHandler.my_second_handler
Specify the path to your handler's package; for example, if I have the following structure:
- src
- testing
- handler.go -> package main in src/testing subdirectory
- second
- handler.go -> package main in src/second subdirectory
- serverless.yml
- handler.go -> package main at the root of project
Your serverless.yml functions
should look something like this:
provider:
# ...
runtime: go118
functions:
main:
handler: "."
testing:
handler: src/testing
second:
handler: src/second
With events, you may link your functions to specific triggers, which might include CRON Schedule (time-based), MQTT Queues (publishing on a topic will trigger the function), or S3 Object update (uploading an object will trigger the function).

Note that we do not include HTTP triggers in our event types, as an HTTP endpoint is created for every function. Triggers are just another way to invoke your function, but you will always be able to execute your code via HTTP.
Here is a list of supported triggers on Scaleway Serverless, and the configuration parameters required to deploy them:
- rate: CRON schedule (UNIX format) on which your function will be executed
- input: key-value mapping to define arguments that will be passed into your function's event object during execution

To link a trigger to your function, you may define a key events in your function:
functions:
handler: myHandler.handle
events:
# "events" is a list of triggers, the first key being the type of trigger.
- schedule:
# CRON Job Schedule (UNIX Format)
rate: '1 * * * *'
# Input variable are passed in your function's event during execution
input:
key: value
key2: value2
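A quick sanity check for the rate value is that it should be a 5-field UNIX cron expression. A minimal sketch (isValidCronRate is a hypothetical helper for illustration; Scaleway validates schedules on its side):

```javascript
// Check that a schedule rate has the 5 whitespace-separated fields of a
// UNIX cron expression (minute, hour, day of month, month, day of week).
// This only counts fields; it does not validate each field's range.
function isValidCronRate(rate) {
  return rate.trim().split(/\s+/).length === 5;
}
```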
You may link events to your containers too (see the section Managing containers below for more information on how to deploy containers):
custom:
containers:
mycontainer:
directory: my-directory
# Events key
events:
- schedule:
rate: '1 * * * *'
input:
key: value
key2: value2
You may refer to the following examples:

Custom domains allow users to use their own domains.

For domain configuration, please refer to the Scaleway documentation.

Integration with Serverless Framework example:
functions:
first:
handler: handler.handle
# Local environment variables - used only in given function
env:
local: local
custom_domains:
- func1.scaleway.com
- func2.scaleway.com
Note: As your domain must have a record pointing to your function hostname, you should deploy your function once to read its hostname. Custom domain configuration will be available after the first deploy.

Note: Serverless Framework will consider the configuration file as the source of truth. If you create a domain with other tools (Scaleway's console, CLI, or API), you must add the created domain to your serverless configuration file; otherwise it will be deleted, as Serverless Framework gives priority to its own configuration.
Requirements: You need to have Docker installed to be able to build and push your image to your Scaleway registry.
You must define your containers inside the custom.containers
field in your serverless.yml manifest. Each container must specify the relative path of its application directory (containing the Dockerfile, and all files related to the application to deploy):
custom:
containers:
mycontainer:
directory: my-container-directory
# port: 8080
# Environment only available in this container
env:
MY_VARIABLE: "my-value"
Here is an example of the files you should have; the directory containing your Dockerfile and scripts is my-container-directory.
.
├── my-container-directory
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── server.py
│   └── (...)
├── node_modules
│   ├── serverless-scaleway-functions
│   └── (...)
├── package-lock.json
├── package.json
└── serverless.yml
Scaleway's platform will automatically inject a PORT environment variable on which your server should be listening for incoming traffic. By default, this PORT is 8080. You may change the port
in your serverless.yml
.
You may use the container example to get started.
The serverless logs
command lets you watch the logs of a specific function or container.
Pass the function or container name you want to fetch the logs for with --function
:
serverless logs --function <function_or_container_name>
The serverless info command gives you information about your current deployment state in JSON format.
MUST use this library if you plan to develop with Golang).

This plugin is mainly developed and maintained by the Scaleway Serverless Team, but you are free to open issues or discuss with us on our Community Slack channels #serverless-containers and #serverless-functions.
Author: Scaleway
Source Code: https://github.com/scaleway/serverless-scaleway-functions
License: MIT license
1656621900
This serverless plugin includes a set of utilities to implement custom workflows for AWS Sagemaker Groundtruth.

Currently includes:

Any pull request will be warmly welcomed!

Ideas for future implementation:
npm install --save-dev serverless-sagemaker-groundtruth
In order to use this module, you need to add a groundtruthTasks key to your serverless.yml file:
...
plugins:
- serverless-sagemaker-groundtruth
functions:
pre-example:
handler: handler.pre
name: pre
post-example:
handler: handler.postObjectDetection
name: post
groundtruthTasks:
basic:
pre: pre-example
post: post-example
template: app/templates/object-detection/basic.liquid.html
serverless groundtruth serve \
--groundtruthTask <groundtruthTask-name> \
--manifest <s3-uri or local file> \
--row <row index>
Here, we create a puppeteer module which draws random bounding boxes (using the hasard library):
const BbPromise = require('bluebird')
const h = require('hasard');
/**
 * This function binds a sequence of actions made by the user before submitting the form.
 * This is an example showing how to simulate a user's bounding-box actions.
* @param {Page} page puppeteer page instance see https://github.com/puppeteer/puppeteer
* This page is open and running in the annotation page
* @param {Object} manifestRow the object from the manifest file row
* @param {Object} prelambdaOutput the output object from the prelambda result
* @returns {Promise} the promise is resolved once the user has done all needed actions on the form
*/
module.exports = function({
page,
manifestRow,
workerId
}){
// we draw 5 boxes for each worker
const nBoxes = 5;
// Cat and Dog
const nCategories = 2;
// Using the technique from https://github.com/puppeteer/puppeteer/issues/858#issuecomment-438540596 to select the node
return page.evaluateHandle(`document.querySelector("body > crowd-form > form > crowd-bounding-box").shadowRoot.querySelector("#annotation-area-container > div > div > div")`)
.then(imageCanvas => {
return imageCanvas.boundingBox()
}).then(boundingBox => {
// define a random bounding box over the image canvas using hasard library
// see more example in https://www.npmjs.com/package/hasard
const width = h.reference(h.integer(0, Math.floor(boundingBox.width)));
const height = h.reference(h.integer(0, Math.floor(boundingBox.height)));
const top = h.add(h.integer(0, h.substract(Math.floor(boundingBox.width), width)), Math.floor(boundingBox.x));
const left = h.add(h.integer(0, h.substract(Math.floor(boundingBox.height), height)), Math.floor(boundingBox.y));
const randomAnnotation = h.object({
box: h.array([
top,
left,
width,
height
]),
category: h.integer(0, nCategories-1)
});
const workerAnnotations = randomAnnotation.run(nBoxes)
return BbPromise.map(workerAnnotations, ({box, category}) => {
return page.evaluateHandle(`document.querySelector("body > crowd-form > form > crowd-bounding-box").shadowRoot.querySelector("#react-mount-point > div > div > awsui-app-layout > div > div.awsui-app-layout__tools.awsui-app-layout--open > aside > div > span > div > div.label-pane-content > div:nth-child(${category+1})")`)
.then(categoryButton => categoryButton.click())
.then(() => page.mouse.move(box[0], box[1]))
.then(() => page.mouse.down())
.then(() => page.mouse.move(box[0]+box[2], box[1]+box[3]))
.then(() => page.mouse.up());
}, {concurrency: 1})
}).then(() => {
console.log(`${workerId} actions simulation done on ${JSON.stringify(manifestRow)}`)
// at the end we return nothing, serverless-sagemaker-groundtruth will automatically request the output from the page
})
}
serverless groundtruth test e2e \
--groundtruthTask <groundtruthTask-name> \
--manifest <s3-uri or local file> \
--puppeteerModule <path to the module> \
--workerIds a,b,c
You can use serverless-sagemaker-groundtruth
functions in your nodejs code by using
const gtLibs = require('serverless-sagemaker-groundtruth/lib')
/**
* @param {String} template path to the liquid template file
* @param {String} labelAttributeName labelAttributeName to use as output of the postLambda function
 * @param {Object} manifestRow js object representing the manifest row
* @param {Function} preLambda js function to use as pre lambda function
* @param {Number} [port=3000] port to use to serve the web page
* @param {Function} postLambda js function to use as post lambda function
 * @param {Array.<String>} workerIds ids of the workers to simulate
* @param {PuppeteerModule} puppeteerMod module that simulate the behavior of a worker
* @returns {Promise.<PostLambdaOutput>}
*/
return gtLibs.endToEnd({
template,
labelAttributeName,
manifestRow,
preLambda,
port,
postLambda,
workerIds,
puppeteerMod
});
You need to make sure that your post lambda function is compatible with using a local filename in event.payload.s3Uri. You can use gtLibs.loadFile if you need such a function.
Your template should be submitted using a button that matches the button.awsui-button[type="submit"] selector.
Author: Piercus
Source Code: https://github.com/piercus/serverless-sagemaker-groundtruth
License:
1656614460
With this plugin for serverless, you can sync local folders to S3 buckets after your service is deployed.
Add the NPM package to your project:
# Via yarn
$ yarn add serverless-s3bucket-sync
# Via npm
$ npm install serverless-s3bucket-sync
Add the plugin to your serverless.yml
:
plugins:
- serverless-s3bucket-sync
Configure the S3 bucket syncing in serverless.yml with references to your local folder and the name of the S3 bucket.
custom:
s3-sync:
- folder: relative/folder
bucket: bucket-name
That's it! With the next deployment, serverless will sync your local folder relative/folder
with the S3 bucket named bucket-name
.
You can use sls sync to synchronize all buckets without deploying your serverless stack.
You are welcome to contribute to this project!
To make sure you have a pleasant experience, please read the code of conduct. It outlines core values and beliefs and will make working together a happier experience.
Author: sbstjn
Source Code: https://github.com/sbstjn/serverless-s3bucket-sync
License: MIT license
1656554820
A Serverless plugin that bundles Ruby gems from the Gemfile and deploys them to a Lambda layer automatically while running serverless deploy.
It auto-configures the AWS Lambda layer and RUBY_PATH for all the functions.
Install serverless. Refer here for serverless installation instructions.
Navigate to your serverless project and install the plugin
sls plugin install -n serverless-ruby-layer
This will add the plugin to package.json
and the plugins section of serverless.yml
.
serverless.yml
service: basic
plugins:
- serverless-ruby-layer
provider:
name: aws
runtime: ruby2.5
functions:
hello:
handler: handler.hello
Gemfile
source 'https://rubygems.org'
gem 'httparty'
Running sls deploy automatically deploys the required gems from the Gemfile to an AWS Lambda layer and makes the gems available to the RUBY_PATH of the function hello.handler.
Refer to the example and docs for more details.
The plugin operation can be customized by specifying the custom
configuration under rubyLayer
.
For example, to use docker environment for packing gem, the below configuration is added to serverless.yml
custom:
rubyLayer:
use_docker: true
For more details, refer to the docs here.
Using the custom configuration, the plugin can be utilized for the below cases:
- Gems with native extensions such as http and Nokogiri - Example - Docs
- Gems that depend on OS packages such as pg, mysql and RMagick - PG Example - Docs
- Bundler.require(:default) to require all gems in handler.rb by respecting Gemfile.lock - Example - Docs
Check out the documentation here for:
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update the tests as appropriate.
Refer Guidelines for more information.
Author: Navarasu
Source Code: https://github.com/navarasu/serverless-ruby-layer
License: MIT license
1656547380
A serverless framework plugin so that your functions know how to use resources created by cloudformation.
In short, whether you are running your function as a lambda, or locally on your machine, the physical name or ARN of each resource that was part of your CloudFormation template will be available as an environment variable keyed to its logical name prefixed with CF_
.
For lambdas running on AWS, this plugin will set environment variables on your functions within lambda using the Lambda environment variable support.
For running functions locally, it will also create a local env file for use in reading in environment variables for a specific region-stage-function while running functions locally. These are by default stored in a directory named: .serverless-resources-env
in files named .<region>_<stage>_<function-name>
. Ex: ./.serverless-resources-env/.us-east-1_dev_hello
. These environment variables are set automatically by the plugin when running serverless invoke local -f ...
.
Breaking Changes in 0.3.0: See below
You have a CloudFormation template all set, and you are writing your functions. Now you are ready to use the resources created as part of your CF template. Well, you need to know about them! You could deploy and then try to manage configuration for these resources, or you can use this module, which will automatically set environment variables that map the logical resource name to the physical resource name for resources within the CloudFormation file.
You have defined resources in your serverless.yml called mySQS
and myTable
, and you want to actually use these in your function so you need their ARN or the actual table name that was created.
const sqs_arn = process.env.CF_mySQS;
const my_dynamo_table_name = process.env.CF_myTable;
This plugin attaches to the deploy post-deploy hook. After the stack is deployed to AWS, the plugin determines the name of the cloud formation stack, and queries AWS for all resources in this stack.
After deployment, this plugin will fetch all the CF resources for the current stack (stage, i.e. 'dev'). It will then use the AWS SDK to set the physical id of each resource as an environment variable prefixed with CF_.
It will also create a file with these values in a .properties file format named ./.serverless-resources-env/.<region>_<stage>_<function-name>. These are then pulled in during a local invocation (serverless invoke local -f...). Each region, stage, and function gets its own file. When invoking locally, the module automatically selects the correct .env information based on which region and stage are set.
This means no code or config changes, no matter how many regions and stages you deploy to.
The lambdas always know exactly where to find their resources, whether that resource is a DynamoDB, SQS, SNS, or anything else.
npm install serverless-resources-env --save
Add the plugin to the serverless.yml.
plugins:
- serverless-resources-env
Set your resources as normal:
resources:
Resources:
testTopic1:
Type: AWS::SNS::Topic
testTopic2:
Type: AWS::SNS::Topic
Set which resources you want exported on each function.
functions:
hello:
handler: handler.hello
custom:
env-resources:
- testTopic1
- testTopic2
At version 0.2.0 and before, all resources were exported to both the local .env file and to each function automatically.
This caused issues with AWS limits on the amount of information that could be exported as env variables onto lambdas deployed within AWS. This also exposed resources as env variables that were not needed by functions, as it was setting all resources, not just the ones the function needed.
Starting at version 0.3.0 a list of which resources are to be exported to each function are required to be a part of the function definition in the .yml file, if the function needs any of these environment variables. (See current install instructions above)
This also means that specific env files are needed per region / stage / function. This can potentially be a lot of files, and therefore these files were moved to a sub-folder: .serverless-resources-env by default.
If you see the error Unexpected key 'Environment' found in params, your aws-sdk is out of date. Setting environment variables on lambdas is new. See the important note above.
You may need to upgrade the version of the package aws-sdk
being used by the serverless framework.
In the 1.1.0 serverless framework, the aws-sdk
is pegged at version 2.6.8 in the npm-shrinkwrap.json
of serverless.
If you have installed serverless locally as part of your project you can just upgrade the sdk. npm upgrade aws-sdk
.
If you have installed serverless globally, you will need to change to the serverless directory and run npm upgrade aws-sdk
from there.
The following commands should get it done:
cd `npm list serverless -g | head -n 1`/node_modules/serverless
npm upgrade aws-sdk
By default, the mapping is written to a .env file located at ./.serverless-resources-env/.<region>_<stage-name>_env
. This can be overridden by setting an option in serverless.yml.
custom:
resource-output-dir: .alt-resource-dir
functions:
hello:
custom:
resource-output-file: .alt-file-name
Author: Rurri
Source Code: https://github.com/rurri/serverless-resources-env
License: MIT license
1656543480
Serverless stackTags will update the tags for all the resources that support tagging. The issue is that it only applies them once, during creation. If you update the tag values after deployment, they won't be reflected in the next deployment; you would have to remove the stack and redeploy to get the new tags applied. This plugin solves that issue for AWS.
npm install serverless-plugin-resource-tagging
provider:
name: XXX
stackTags:
Tag1: "Tag1 value"
Tag2: "Tag2 value"
plugins:
- serverless-plugin-resource-tagging
AWS::Lambda::Function
AWS::SQS::Queue
AWS::Kinesis::Stream
AWS::DynamoDB::Table
AWS::S3::Bucket
AWS::ApiGateway::Stage
AWS::CloudFront::Distribution
AWS::Logs::LogGroup
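The effect on the compiled CloudFormation template can be sketched roughly as follows. This is an illustrative sketch only, not the plugin's actual code; applyStackTags is a made-up name:

```javascript
// Resource types the plugin tags (per the list above).
const SUPPORTED = new Set([
  'AWS::Lambda::Function',
  'AWS::SQS::Queue',
  'AWS::Kinesis::Stream',
  'AWS::DynamoDB::Table',
  'AWS::S3::Bucket',
  'AWS::ApiGateway::Stage',
  'AWS::CloudFront::Distribution',
  'AWS::Logs::LogGroup',
]);

// Illustrative: re-apply provider.stackTags to the Tags property of every
// supported resource in the compiled CloudFormation template.
const applyStackTags = (template, stackTags) => {
  const tagList = Object.entries(stackTags).map(([Key, Value]) => ({ Key, Value }));
  for (const resource of Object.values(template.Resources || {})) {
    if (SUPPORTED.has(resource.Type)) {
      resource.Properties = resource.Properties || {};
      resource.Properties.Tags = tagList;
    }
  }
  return template;
};
```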
Author: ilayanambi86
Source Code: https://github.com/ilayanambi86/serverless-plugin-resource-tagging
License:
1656528660
serverless-reqvalidator-plugin
Serverless plugin to set a specific request validator on a method
npm install serverless-reqvalidator-plugin
This requires you to have the documentation plugin serverless-aws-documentation installed.
Specify plugin
plugins:
- serverless-reqvalidator-plugin
- serverless-aws-documentation
In serverless.yml, create a custom resource for your request validator:
xMyRequestValidator:
Type: "AWS::ApiGateway::RequestValidator"
Properties:
Name: 'my-req-validator'
RestApiId:
Ref: ApiGatewayRestApi
ValidateRequestBody: true
ValidateRequestParameters: false
For every function on which you wish to use the validator, set the property reqValidatorName: 'xMyRequestValidator' to match the resource you described:
debug:
handler: apis/admin/debug/debug.debug
timeout: 10
events:
- http:
path: admin/debug
method: get
cors: true
private: true
reqValidatorName: 'xMyRequestValidator'
The serverless framework allows us to share resources among several stacks. Therefore a CloudFormation Output has to be specified in one stack. This Output can be imported in another stack to make use of it. For more information see here.
Specify a request validator in a different stack:
plugins:
- serverless-reqvalidator-plugin
service: my-service-a
functions:
hello:
handler: handler.myHandler
events:
- http:
path: hello
reqValidatorName: 'myReqValidator'
resources:
Resources:
xMyRequestValidator:
Type: "AWS::ApiGateway::RequestValidator"
Properties:
Name: 'my-req-validator'
RestApiId:
Ref: ApiGatewayRestApi
ValidateRequestBody: true
ValidateRequestParameters: false
Outputs:
xMyRequestValidator:
Value:
Ref: xMyRequestValidator
Export:
Name: myReqValidator
Make use of the exported request validator in stack b:
plugins:
- serverless-reqvalidator-plugin
service: my-service-b
functions:
hello:
handler: handler.myHandler
events:
- http:
path: hello
reqValidatorName:
Fn::ImportValue: 'myReqValidator'
service:
name: my-service
plugins:
- serverless-webpack
- serverless-reqvalidator-plugin
- serverless-aws-documentation
provider:
name: aws
runtime: nodejs6.10
region: eu-west-2
environment:
NODE_ENV: ${self:provider.stage}
custom:
documentation:
api:
info:
version: '1.0.0'
title: My API
description: This is my API
tags:
-
name: User
description: User Management
models:
- name: MessageResponse
contentType: "application/json"
schema:
type: object
properties:
message:
type: string
- name: RegisterUserRequest
contentType: "application/json"
schema:
required:
- email
- password
properties:
email:
type: string
password:
type: string
- name: RegisterUserResponse
contentType: "application/json"
schema:
type: object
properties:
result:
type: string
- name: 400JsonResponse
contentType: "application/json"
schema:
type: object
properties:
message:
type: string
statusCode:
type: number
commonModelSchemaFragments:
MethodResponse400Json:
statusCode: '400'
responseModels:
"application/json": 400JsonResponse
functions:
signUp:
handler: handler.signUp
events:
- http:
documentation:
summary: "Register user"
description: "Registers new user"
tags:
- User
requestModels:
"application/json": RegisterUserRequest
method: post
path: signup
reqValidatorName: onlyBody
methodResponses:
- statusCode: '200'
responseModels:
"application/json": RegisterUserResponse
- ${self:custom.commonModelSchemaFragments.MethodResponse400Json}
package:
include:
handler.ts
resources:
Resources:
onlyBody:
Type: "AWS::ApiGateway::RequestValidator"
Properties:
Name: 'only-body'
RestApiId:
Ref: ApiGatewayRestApi
ValidateRequestBody: true
ValidateRequestParameters: false
Author: RafPe
Source Code: https://github.com/RafPe/serverless-reqvalidator-plugin
License:
1656521280
Register function names with AWS SSM Parameter Store
Requirements:
This plugin creates an SSM Parameter with your functions' fully qualified Lambda function names as values. The main motivation for this plugin is to remove the dependency that any client code would have on the AWS stack, as the stack name is part of the fully qualified Lambda function name. Using this plugin, it is easier to move functions between stacks with fewer changes to client code and configuration.
One caveat is the fact that any IAM policies that are written for these functions will still need to be updated. In the case of Serverless configuration, if you use the built-in SSM Parameter resolution, then it might be as simple as just redeploying any client upstream services.
Install via npm in the root of your Serverless service:
npm install serverless-plugin-registry --save-dev
Add the plugin to the plugins array in your serverless.yml:
plugins:
- serverless-plugin-registry
service: ServerlessPluginRegistry
provider:
stage: ${opt:stage, "Test"}
functions:
Hello:
handler: hello.js
This will produce an SSM Parameter with
Name: /ServerlessPluginRegistry/Test/Hello/FunctionName
Value: ServerlessPluginRegistry-Test-Hello
service: ServerlessPluginRegistry
provider:
stage: ${opt:stage, "Test"}
custom:
registry:
baseName: /Registry/${self:provider.stage}
functions:
Hello:
handler: hello.js
This will produce an SSM Parameter with
Name: /Registry/Test/Hello/FunctionName
Value: ServerlessPluginRegistry-Test-Hello
service: ServerlessPluginRegistry
provider:
stage: ${opt:stage, "Test"}
functions:
Hello:
handler: hello.js
registry:
baseName: /Registry/${self:provider.stage}
This will produce an SSM Parameter with
Name: /Registry/Test/Hello/FunctionName
Value: ServerlessPluginRegistry-Test-Hello
service: ServerlessPluginRegistry
provider:
stage: ${opt:stage, "Test"}
functions:
Hello:
handler: hello.js
registry:
baseName: /Registry/${self:provider.stage}
HowAreYou:
handler: howAreYou.js
registry:
register: true
Goodbye:
handler: goodbye.js
This will produce only two SSM Parameters:
Name: /Registry/Test/Hello/FunctionName
Value: ServerlessPluginRegistry-Test-Hello
Name: /ServerlessPluginRegistry/Test/HowAreYou/FunctionName
Value: ServerlessPluginRegistry-Test-HowAreYou
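The naming scheme in these examples can be summarized with a small illustrative helper (parameterName, defaultBaseName and functionValue are not part of the plugin):

```javascript
// Default base name is /<service>/<stage>; each registered function gets
// a parameter at <baseName>/<functionLogicalName>/FunctionName whose value
// is the fully qualified Lambda name <service>-<stage>-<functionName>.
const defaultBaseName = (service, stage) => `/${service}/${stage}`;

const parameterName = (baseName, functionName) =>
  `${baseName}/${functionName}/FunctionName`;

const functionValue = (service, stage, functionName) =>
  `${service}-${stage}-${functionName}`;
```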
Help us make this plugin better and future-proof.
npm install
git checkout -b new_feature
npm run lint
Author: Aronim
Source Code: https://github.com/aronim/serverless-plugin-registry
License: MIT license
1656498780
A Serverless v1.x plugin to build and deploy Ruby Rack applications using Serverless. Compatible Rack application frameworks include Sinatra, Cuba and Padrino.
It provides a rack serve command for serving your application locally during development, as well as remote execution of Ruby code (rack exec), rake tasks (rack rake) and shell commands (rack command).
Install the plugin:
sls plugin install -n serverless-rack
This will automatically add the plugin to package.json
and the plugins section of serverless.yml
.
project
βββ api.rb
βββ config.ru
βββ Gemfile
βββ serverless.yml
A regular Sinatra application.
require 'sinatra'
get '/cats' do
'Cats'
end
get '/dogs/:id' do
'Dog'
end
require './api'
run Sinatra::Application
All functions that will use Rack need to have rack_adapter.handler
set as the Lambda handler and use the default lambda-proxy
integration for API Gateway. This configuration example treats API Gateway as a transparent proxy, passing all requests directly to your Sinatra application, and letting the application handle errors, 404s etc.
service: example
provider:
name: aws
runtime: ruby2.5
plugins:
- serverless-rack
functions:
api:
handler: rack_adapter.handler
events:
- http: ANY /
- http: ANY /{proxy+}
Add Sinatra to the application bundle.
source 'https://rubygems.org'
gem 'sinatra'
Simply run the serverless deploy command as usual:
$ bundle install --path vendor/bundle
$ sls deploy
Serverless: Packaging Ruby Rack handler...
Serverless: Packaging gem dependencies using docker...
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (1.64 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
..............
Serverless: Stack update finished...
You'll need to include any gems that your application uses in the bundle that's deployed to AWS Lambda. This plugin helps you out by doing this automatically, as long as you specify your required gems in a Gemfile:
source 'https://rubygems.org'
gem 'rake'
gem 'sinatra'
For more information, see https://bundler.io/docs.html.
If your application depends on any gems that include compiled binaries, these must be compiled for the lambda execution environment. Enabling the dockerizeBundler
configuration option will fetch and build the gems using a docker image that emulates the lambda environment:
custom:
rack:
dockerizeBundler: true
The default docker image that will be used will match the runtime you are using. That is, if you are using the ruby2.7
runtime, then the docker image will be logandk/serverless-rack-bundler:ruby2.7
. You can override the docker image with the dockerImage
configuration option:
custom:
rack:
dockerImage: lambci/lambda:build-ruby2.5
You can use the automatic bundling functionality of serverless-rack without the Rack request handler itself by including the plugin in your serverless.yml
configuration, without specifying rack_adapter.handler
as the handler for any of your lambda functions. This will omit the Rack handler from the package, but include any gems specified in the Gemfile
.
If you don't want to use automatic gem bundling you can set custom.rack.enableBundler
to false
:
custom:
rack:
enableBundler: false
In order to pass additional arguments to bundler
when installing requirements, the bundlerArgs
configuration option is available:
custom:
rack:
bundlerArgs: --no-cache
If your bundler
executable is not in $PATH
, set the path explicitly using the bundlerBin
configuration option:
custom:
rack:
bundlerBin: /path/to/bundler
If your Rack configuration file (config.ru
) is not in ./
, set the path explicitly using the configPath
configuration option:
custom:
rack:
configPath: path/to/config.ru
For convenience, a sls rack serve
command is provided to run your Rack application locally. This command requires the rack
gem to be installed, and acts as a simple wrapper for rackup
.
By default, the server will start on port 5000.
$ sls rack serve
[2019-01-03 18:13:21] INFO WEBrick 1.4.2
[2019-01-03 18:13:21] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux-gnu]
[2019-01-03 18:13:21] INFO WEBrick::HTTPServer#start: pid=25678 port=5000
Configure the port using the -p
parameter:
$ sls rack serve -p 8000
[2019-01-03 18:13:21] INFO WEBrick 1.4.2
[2019-01-03 18:13:21] INFO ruby 2.5.1 (2018-03-29) [x86_64-linux-gnu]
[2019-01-03 18:13:21] INFO WEBrick::HTTPServer#start: pid=25678 port=8000
When running locally, an environment variable named IS_OFFLINE
will be set to True
. So, if you want to know when the application is running locally, check ENV["IS_OFFLINE"]
.
For use with the serverless-offline
plugin, run sls rack install
prior to sls offline
.
The rack exec
command lets you execute ruby code remotely:
$ sls rack exec -c "puts (1 + Math.sqrt(5)) / 2"
1.618033988749895
$ cat count.rb
3.times do |i|
puts i
end
$ sls rack exec -f count.rb
0
1
2
The rack command
command lets you execute shell commands remotely:
$ sls rack command -c "pwd"
/var/task
$ cat script.sh
#!/bin/bash
echo "dlrow olleh" | rev
$ sls rack command -f script.sh
hello world
The rack rake
command lets you execute Rake tasks remotely:
$ sls rack rake -t "db:rollback STEP=3"
If you'd like to be explicit about which routes and HTTP methods should pass through to your application, see the following example:
service: example
provider:
name: aws
runtime: ruby2.5
plugins:
- serverless-rack
functions:
api:
handler: rack_adapter.handler
events:
- http:
path: cats
method: get
integration: lambda-proxy
- http:
path: dogs/{id}
method: get
integration: lambda-proxy
If you use custom domain names with API Gateway, you might have a base path that is at the beginning of your path, such as the stage (/dev
, /stage
, /prod
). In this case, set the API_GATEWAY_BASE_PATH
environment variable to let serverless-rack
know.
The example below uses the serverless-domain-manager plugin to handle custom domains in API Gateway:
service: example
provider:
name: aws
runtime: ruby2.5
environment:
API_GATEWAY_BASE_PATH: ${self:custom.customDomain.basePath}
plugins:
- serverless-rack
- serverless-domain-manager
functions:
api:
handler: rack_adapter.handler
events:
- http: ANY /
- http: ANY {proxy+}
custom:
customDomain:
basePath: ${opt:stage}
domainName: mydomain.name.com
stage: ${opt:stage}
createRoute53Record: true
In order to accept file uploads from HTML forms, make sure to add multipart/form-data
to the list of content types with Binary Support in your API Gateway API. The serverless-apigw-binary Serverless plugin can be used to automate this process.
Keep in mind that, when building Serverless applications, uploading directly to S3 from the browser is usually the preferred approach.
The raw context and event from AWS Lambda are both accessible through the Rack request. The following example shows how to access them when using Sinatra:
require 'sinatra'
get '/' do
puts request.env['serverless.event']
puts request.env['serverless.context']
end
By default, all MIME types starting with text/
and the following whitelist are sent through API Gateway in plain text. All other MIME types will have their response body base64 encoded (and the isBase64Encoded
API Gateway flag set) in order to be delivered by API Gateway as binary data (remember to add any binary MIME types that you're using to the Binary Support list in API Gateway).
This is the default whitelist of plain text MIME types:
application/json
application/javascript
application/xml
application/vnd.api+json
image/svg+xml
In order to add additional plain text MIME types to this whitelist, use the textMimeTypes
configuration option:
custom:
rack:
textMimeTypes:
- application/custom+json
- application/vnd.company+json
The AWS API Gateway to Rack mapping module is available as a gem.
Use this gem if you need to deploy Ruby functions to handle API Gateway events directly, without using the Serverless framework.
gem install --install-dir vendor/bundle serverless-rack
Initialize your Rack application and in your Lambda event handler, call the request mapper:
require 'serverless_rack'
$app ||= Proc.new do |env|
['200', {'Content-Type' => 'text/html'}, ['A barebones rack app.']]
end
def handler(event:, context:)
handle_request(app: $app, event: event, context: context)
end
Author: logandk
Source Code: https://github.com/logandk/serverless-rack
License: MIT license