AWS

Amazon Web Services is a subsidiary of Amazon that provides on-demand cloud computing platforms to individuals, companies, and governments on a paid subscription basis.

Quan Huynh

1656749532

AWS Certified Solutions Architect Professional - Directory Services

A quick note about AWS Directory Services. This post is a quick note from the course Ultimate AWS Certified Solutions Architect Professional by Stephane Maarek. The only purpose of this post is a summary; if you want detailed learning, please buy Stephane Maarek's course.

#aws

https://medium.com/codex/aws-certified-solutions-architect-professional-identity-federation-directory-services-895807d86497

Hermann Frami

1656748020

Serverless Sentry Plugin

⚑️ Serverless Sentry Plugin

About

This Serverless plugin simplifies the integration of Sentry with the popular Serverless Framework and AWS Lambda.

Currently, we support Lambda Runtimes for Node.js 12, 14, and 16 for AWS Lambda. Other platforms can be added by providing a respective integration library. Pull Requests are welcome!

The serverless-sentry-plugin and serverless-sentry-lib libraries are not affiliated with Functional Software Inc. (Sentry), Serverless, or Amazon Web Services; they are developed independently and in my spare time.

Benefits

  • Easy to use. Promised 🀞
  • Integrates with Serverless Framework as well as the AWS Serverless Application Model for AWS Lambda (though no use of any framework is required).
  • Wraps your Node.js code with Sentry error capturing.
  • Forwards any errors returned by your AWS Lambda function to Sentry.
  • Warns if your code is about to hit the execution timeout limit.
  • Warns if your Lambda function is low on memory.
  • Reports unhandled promise rejections.
  • Reports uncaught exceptions.
  • Serverless, Sentry, as well as this library are all Open Source. Yay! πŸŽ‰
  • TypeScript support

Overview

Sentry integration splits into two components:

  1. This plugin, which simplifies installation with the Serverless Framework
  2. The serverless-sentry-lib, which performs the runtime monitoring and error reporting.

For a detailed overview of how to use the serverless-sentry-lib refer to its README.md.

Installation

Install the @sentry/node module as a production dependency (so it gets packaged together with your source code):

npm install --save @sentry/node

Install the serverless-sentry-lib as a production dependency as well:

npm install --save serverless-sentry-lib

Install this plugin as a development dependency (you don't want to package it with your release artifacts):

npm install --save-dev serverless-sentry

Check out the examples below on how to integrate it with your project by updating serverless.yml as well as your Lambda handler code.

Usage

The Serverless Sentry Plugin allows configuration of the library through the serverless.yml and will create release and deployment information for you (if wanted). This is the recommended way of using the serverless-sentry-lib library.

▢️ Step 1: Load the Plugin

The plugin determines your environment during deployment and adds the SENTRY_DSN environment variable to your Lambda functions. All you need to do is load the plugin and set the dsn configuration option as follows:

service: my-serverless-project
provider:
  # ...
plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry

▢️ Step 2: Wrap Your Function Handler Code

The actual reporting to Sentry happens in platform-specific libraries. Currently, only Node.js and Python are supported.

Each library provides a withSentry helper that acts as a decorator around your original AWS Lambda handler code and is configured via this plugin or manually through environment variables.

For more details refer to the individual libraries' repositories:

Old, now unsupported libraries:

Node.js

For maximum flexibility, this library is implemented as a wrapper around your original AWS Lambda handler code (your handler.js or similar function). The withSentry higher-order function adds error and exception handling and takes care of configuring the Sentry client automatically.

withSentry is pre-configured to reasonable defaults and doesn't need any configuration. It will automatically load and configure @sentry/node which needs to be installed as a peer dependency.

Original Lambda Handler Code:

exports.handler = async function (event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
};

New Lambda Handler Code Using withSentry For Sentry Reporting

const withSentry = require("serverless-sentry-lib"); // This helper library

exports.handler = withSentry(async function (event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
});

ES6 Module: Original Lambda Handler Code:

export async function handler(event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
}

ES6 Module: New Lambda Handler Code Using withSentry For Sentry Reporting

import withSentry from "serverless-sentry-lib"; // This helper library

export const handler = withSentry(async (event, context) => {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
});

Once your Lambda handler code is wrapped in withSentry, it will be extended with automatic error reporting. Whenever your Lambda handler sets an error response, the error is forwarded to Sentry with additional context information. For more details about the different configuration options available refer to the serverless-sentry-lib documentation.

Plugin Configuration Options

Configure the Sentry plugin using the following options in your serverless.yml:

  • dsn - Your Sentry project's DSN URL (required)
  • enabled - Specifies whether this SDK should activate and send events to Sentry (defaults to true)
  • environment - Explicitly set the Sentry environment (defaults to the Serverless stage)
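Putting these options together, a minimal configuration could look like the following (all values are placeholders; enabled and environment are optional):

```yaml
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
    enabled: true # optional, defaults to true
    environment: staging # optional, defaults to the Serverless stage
```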

Sentry API access

In order for some features such as releases and deployments to work, you need to grant API access to this plugin by setting the following options:

  • organization - Organization name
  • project - Project name
  • authToken - API authentication token with project:write access

πŸ‘‰ Important: Make sure you're using Auth Tokens, not API Keys, which are deprecated.

Releases

Releases are used by Sentry to provide you with additional context when determining the cause of an issue. The plugin can automatically create releases for you and tag all messages accordingly. To find out more about releases in Sentry, refer to the official documentation.

In order to enable release tagging, you need to set the release option in your serverless.yml:

custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    organization: my-sentry-organization
    project: my-sentry-project
    authToken: my-sentry-api-key
    release:
      version: <RELEASE VERSION>
      refs:
        - repository: <REPOSITORY NAME>
          commit: <COMMIT HASH>
          previousCommit: <COMMIT HASH>

version - Set the release version used in Sentry. Use any of the below values:

  • git - Uses the current git commit hash or tag as release identifier.
  • random - Generates a random release during deployment.
  • true - First tries to determine the release via git and falls back to random if Git is not available.
  • false - Disable release versioning.
  • any fixed string - Use a fixed string for the release. Serverless variables are allowed.

refs - If you have set up Sentry to collect commit data, you can use commit refs to associate your commits with your Sentry releases. Refer to the Sentry Documentation for details about how to use commit refs. If you set your version to git (or true), the refs options are populated automatically and don't need to be set.

πŸ‘‰ Tip {"refs":["Invalid repository names: xxxxx/yyyyyyy"]}: If your repository provider is not supported by Sentry (currently only GitHub or GitLab with Sentry Integrations), you have the following options:

  1. Set refs: false; this will not automatically populate the refs but also dismisses your commit id as the version
  2. Set refs: true and version: true to populate the version with the commit short id

If you don't specify any refs, you can also use the short notation for release and simply set it to the desired release version as follows:

custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    release: <RELEASE VERSION>

If you don't need or want the plugin to create releases and deployments, you can omit the authToken, organization and project options. Messages and exceptions sent by your Lambda functions will still be tagged with the release version and show up grouped in Sentry nonetheless.

πŸ‘‰ Pro Tip: The possibility to use a fixed string in combination with Serverless variables allows you to inject your release version through the command line, e.g. when running on your continuous integration machine.

custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    organization: my-sentry-organization
    project: my-sentry-project
    authToken: my-sentry-api-key
    release:
      version: ${opt:sentryVersion}
      refs:
        - repository: ${opt:sentryRepository}
          commit: ${opt:sentryCommit}

And then deploy your project using the command-line options from above:

sls deploy --sentryVersion 1.0.0 --sentryRepository foo/bar --sentryCommit 2da95dfb

πŸ‘‰ Tip when using Sentry with multiple projects: Releases in Sentry are specific to the organization and can span multiple projects. Take this into consideration when choosing a version name. If your version applies to the current project only, you should prefix it with your project name.

If no option for release is provided, releases and deployments are disabled.

Source Maps

Source map files can be uploaded to Sentry so that stack traces show your original source files rather than the compiled versions. The plugin only uploads existing build output; you'll need to configure your bundling tool to generate source maps separately. You'll also need to have releases configured, see above.

Default options:

custom:
  sentry:
    sourceMaps: true

Add a custom prefix (required if your app is not at the filesystem root):

custom:
  sentry:
    sourceMaps:
      urlPrefix: /var/task

Enabling and Disabling Error Reporting Features

In addition, you can configure the Sentry error reporting on a service as well as a per-function level. For more details about the individual configuration options see the serverless-sentry-lib documentation.

  • autoBreadcrumbs - Automatically create breadcrumbs (see Sentry Raven docs, defaults to true)
  • filterLocal - Don't report errors from local environments (defaults to true)
  • captureErrors - Capture Lambda errors (defaults to true)
  • captureUnhandledRejections - Capture unhandled Promise rejections (defaults to true)
  • captureUncaughtException - Capture unhandled exceptions (defaults to true)
  • captureMemoryWarnings - Monitor memory usage (defaults to true)
  • captureTimeoutWarnings - Monitor execution timeouts (defaults to true)

Example Configuration

# serverless.yml
service: my-serverless-project
provider:
  # ...
plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
    captureTimeoutWarnings: false # disable timeout warnings globally for all functions
functions:
  FuncFoo:
    handler: Foo.handler
    description: Hello World
    sentry:
      captureErrors: false # Disable error capturing for this specific function only
      captureTimeoutWarnings: true # Turn timeout warnings back on
  FuncBar:
    handler: Bar.handler
    sentry: false # completely turn off Sentry reporting

Example: Configuring Sentry based on stage

In some cases, it might be desired to use a different Sentry configuration depending on the currently deployed stage. To make this work, we can use a built-in Serverless variable resolution trick:

# serverless.yml
plugins:
  - serverless-sentry
custom:
  config:
    default:
      sentryDsn: ""
    prod:
      sentryDsn: "https://xxxx:yyyy@sentry.io/zzzz" # URL provided by Sentry

  sentry:
    dsn: ${self:custom.config.${self:provider.stage}.sentryDsn, self:custom.config.default.sentryDsn}
    captureTimeoutWarnings: false # disable timeout warnings globally for all functions

Troubleshooting

No errors are reported in Sentry

Double-check the DSN settings in your serverless.yml and compare them with what Sentry shows you in your project settings under "Client Keys (DSN)". You need a URL in the following format (see the Sentry Quick Start):

{PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}/{PATH}{PROJECT_ID}

Also, make sure to add the plugin to your plugins list in the serverless.yml:

plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry

The plugin doesn't create any releases or deployments

Make sure to set the authToken, organization as well as project options in your serverless.yml, and set release to a non-empty value as shown in the example below:

plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
    organization: my-sentry-organization
    project: my-sentry-project
    authToken: my-sentry-api-key
    release: git

I'm testing my Sentry integration locally but no errors or messages are reported

Check out the filterLocal configuration setting. If you test Sentry locally and want to make sure your messages are sent, set this flag to false. Once done testing, don't forget to switch it back to true, as otherwise you'll spam your Sentry projects with meaningless errors from local code changes.
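For local testing, the flag can be toggled in serverless.yml (DSN value is a placeholder):

```yaml
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    filterLocal: false # local testing only; set back to true before committing
```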

Version History

2.5.3

  • Increased number of parallel uploads of source map artifacts for better performance.

2.5.2

  • Upload multiple source map artifacts in parallel for better performance.

2.5.1

  • Fix #63: Upload source maps serially to avoid running out of sockets.
  • Correctly disable uploading source maps if the config setting is false or unset.

2.5.0

  • Added support for uploading Source Maps to Sentry. Thanks to jonmast for the contribution.
  • Fixed an issue in the configuration validation. Thanks to DonaldoLog for the fix.
  • Fixed an issue if using
  • Updated dependencies.

2.4.0

  • Explicitly check for enabled flag. Thanks to aaronbannin for the contribution.
  • Explicit peer dependency on Serverless
  • Updated dependencies minor versions; locked TypeScript to 4.4 for now

2.3.0

  • Added configuration validation. Serverless will now warn if you pass an invalid configuration value in custom.sentry.

2.2.0

  • Added captureUncaughtException configuration option. This already exists in serverless-sentry-lib but was never exposed in the plugin.
  • Don't fail if SENTRY_DSN is not set but simply disable Sentry integration.

2.1.0

  • Support for deploying individual functions only (sls deploy -f MyFunction). Thanks to dominik-meissner!
  • Improved documentation. Thanks to aheissenberger
  • Updated dependencies.

2.0.2

  • Fixed custom release names not being set. Thanks to botond-veress!

2.0.1

  • Fixed error when creating new Sentry releases. Thanks to dryror!

2.0.0

  • This version of serverless-sentry-plugin requires the use of serverless-sentry-lib v2.x.x
  • Rewrite using TypeScript. The use of TypeScript in your project is fully optional, but if you do, we got you covered!
  • Added new default uncaught exception handler.
  • Dropped support for Node.js 6 and 8. The only supported versions are Node.js 10 and 12.
  • Upgrade from Sentry SDK raven to the Unified Node.js SDK @sentry/node.
  • Simplified integration using withSentry higher-order function. Passing the Sentry instance is now optional.
  • Thanks to @aheissenberger and @Vadorequest for their contributions to this release! πŸ€—

1.2.0

  • Fixed a compatibility issue with Serverless 1.28.0.

1.1.1

  • Support for sls invoke local. Thanks to sifrenette for his contribution.

1.1.0

  • ⚠️ Dropped support for Node 4.3. AWS deprecates Node 4.3 starting July 31, 2018.
  • Pair with serverless-sentry-lib v1.1.x.

1.0.0

  • Version falls back to git hash if no tag is set for the current head (#15).
  • Fixed reporting bugs in the local environment despite config telling otherwise (#17). This requires an update of serverless-sentry-lib as well!

1.0.0-rc.4

  • Fixed an issue with creating random version numbers

1.0.0-rc.3

  • Allow disabling Sentry for specific functions by settings sentry: false in the serverless.yml.
  • Added support for the Serverless Offline Plugin.

1.0.0-rc.2

  • Fixed an issue with the plugin not being initialized properly when deploying an existing artifact.

1.0.0-rc.1

To-Dos

  •  Bring back automatic instrumentation of the Lambda code during packaging
  •  Provide CLI commands to create releases and perform other operations in Sentry
  •  Ensure all exceptions and messages have been sent to Sentry before returning; see #338.

Support

Thank you for supporting me and my projects.

Author: Arabold
Source Code: https://github.com/arabold/serverless-sentry-plugin 
License: MIT license

#serverless #aws #sentry #lambda 

Maryse Reinger

1656720000

Learn About AWS DynamoDB Features and Components

Hello everyone,

In this video, we will cover some of the theory behind AWS DynamoDB, which will be very useful for our application, and make sure we understand it before using it. We will learn the core components of DynamoDB, with examples taken from the documentation.

In the next parts, we will implement it in our application.

#dynamodb #aws 

Osiki Douglas

1656684300

How to Deploy The Next.js App on AWS Lambda using Apex Up

Next.js with AWS Lambda

This is a tiny example to show how to deploy the Next.js app on AWS Lambda using Apex Up.

Next.js app with custom server

A custom server is needed to run your Next.js app on AWS Lambda. In this example, express will be used.

// server.js

const express = require("express");
const next = require("next");

const port = parseInt(process.env.PORT, 10) || 3000;
const dev = process.env.NODE_ENV !== "production";
const app = next({ dev });
const handle = app.getRequestHandler();

app
  .prepare()
  .then(() => {
    const server = express();

    server.get("/", (req, res) => {
      return app.render(req, res, "/", req.params);
    });

    server.get("/about", (req, res) => {
      return app.render(req, res, "/about", req.params);
    });

    server.get("*", (req, res) => {
      return handle(req, res);
    });

    server.listen(port, err => {
      if (err) throw err;
      console.log(`> Ready on http://localhost:${port}`);
    });
  })
  .catch(ex => {
    console.log(ex);
    process.exit(1);
  });

Update package.json to use the custom server.js:

{
  "scripts": {
    "dev": "node server.js",
    "build": "next build",
    "start": "NODE_ENV=production node server.js"
  }
}

Install Apex Up

$ curl -sf https://up.apex.sh/install | sh

# verify the installation
$ up version

AWS credentials

You need an AWS account, and it is recommended to use an IAM user with programmatic access for security and convenience. If you have already installed awscli or awsebcli, etc., you have a ~/.aws/credentials file storing your AWS credentials, which allows you to use the AWS_PROFILE environment variable. If you don't, please create one and save your account's access key and secret key in it.

# ~/.aws/credentials

[my-aws-account-for-lambda]
aws_access_key_id = xxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxx

IAM policy for Up CLI

The IAM policy allows Up to access your AWS resources in order to deploy your Next.js app on Lambda. Go to AWS IAM, create the new policy, and attach it to your AWS account.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "acm:*",
        "cloudformation:Create*",
        "cloudformation:Delete*",
        "cloudformation:Describe*",
        "cloudformation:ExecuteChangeSet",
        "cloudformation:Update*",
        "cloudfront:*",
        "cloudwatch:*",
        "ec2:*",
        "ecs:*",
        "events:*",
        "iam:AttachRolePolicy",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:GetRole",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "lambda:AddPermission",
        "lambda:Create*",
        "lambda:Delete*",
        "lambda:Get*",
        "lambda:InvokeFunction",
        "lambda:List*",
        "lambda:RemovePermission",
        "lambda:Update*",
        "logs:Create*",
        "logs:Describe*",
        "logs:FilterLogEvents",
        "logs:Put*",
        "logs:Test*",
        "route53:*",
        "route53domains:*",
        "s3:*",
        "ssm:*",
        "sns:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "apigateway:*",
      "Resource": "arn:aws:apigateway:*::/*"
    }
  ]
}

Create up.json file

{
  "name": "nextjs-example",
  // aws account profile in ~/.aws/credentials
  "profile": "my-aws-account-for-lambda",
  "regions": ["ap-northeast-2"],
  "lambda": {
    // min 128, default 512
    "memory": 256,
    // AWS Lambda supports node.js 8.10 latest
    "runtime": "nodejs8.10"
  },
  "proxy": {
    "command": "npm start",
    "timeout": 25,
    "listen_timeout": 15,
    "shutdown_timeout": 15
  },
  "stages": {
    "development": {
      "proxy": {
        "command": "yarn dev"
      }
    }
  },
  "environment": {
    // you can hydrate env variables as you want.
    "NODE_ENV": "production"
  },
  "error_pages": {
    "variables": {
      "support_email": "admin@my-email.com",
      "color": "#2986e2"
    }
  }
}

Build the Next.js app before deploy

$ yarn build

Create .upignore file

Up inspects your files to compose the package it deploys to Lambda. First, Up reads .gitignore and ignores the files listed there; after that, .upignore is read. By default, Up ignores dotfiles, so you need to negate the .next directory in .upignore so that Up includes it in the package.

# .upignore

!.next

Deploy

$ up

Learn More

You may encounter the cold start issue, then refer to the lambda warmer. https://github.com/mattdamon108/lambda-warmer

Download Details:
Author: mattdamon108
Source Code: https://github.com/mattdamon108/nextjs-with-lambda
License:

#nextjs #react #javascript #Lambda #aws

Hermann Frami

1656663300

Serverless Select Plugin

Serverless Select Plugin

Select which functions are to be deployed based on region and stage.

Note: Requires Serverless v1.12.x or higher.

Setup

  • Install via npm in the root of your Serverless service:

npm install serverless-plugin-select --save-dev

  • Add the plugin to the plugins array in your serverless.yml; you should place it at the top of the list:

plugins:
  - serverless-plugin-select
  - ...

  • Add regions or stages to your functions to select them for deployment.

  • Run the deploy command: sls deploy --stage [STAGE NAME] --region [REGION NAME] or, for a single function, sls deploy function --stage [STAGE NAME] --region [REGION NAME] --function [FUNCTION NAME].

  • Functions will be deployed based on your selection.

All done!

Function

How does it work? When the deployment region or stage doesn't match a function's regions or stages, that function is removed from the deployment.

regions - Function accepted deployment regions.

functions:
  hello:
    regions:
      - eu-west-1
      - ...

stages - Function accepted deployment stages.

functions:
  hello:
    stages:
      - dev
      - ...
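Both filters can be combined; a function is then deployed only when stage and region both match. For example:

```yaml
functions:
  hello:
    handler: handler.hello
    regions:
      - eu-west-1
    stages:
      - dev
```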

Contribute

Help us make this plugin better and future proof.

  • Clone the code
  • Install the dependencies with npm install
  • Create a feature branch git checkout -b new_feature
  • Lint with standard npm run lint

Author: FidelLimited
Source Code: https://github.com/FidelLimited/serverless-plugin-select 
License: MIT license

#serverless #plugin #aws #nodejs 

Quan Huynh

1656644125

Books you should read to become DevOps for beginner

Hi guys! In this post, I'm going to introduce the books that helped me a lot on the path to becoming a Cloud DevOps Engineer from a Full-stack Developer. I hope it is useful for everyone; the books I recommend go from basic to advanced.

#devops #aws #kubernetes 

https://medium.com/codex/side-story-books-you-should-read-to-become-devops-for-beginner-b75384ef6774

Hermann Frami

1656636720

Serverless Framework: Deploy on Scaleway Functions

Serverless Framework: Deploy on Scaleway Functions

The Scaleway functions plugin for Serverless Framework allows users to deploy their functions and containers to Scaleway Functions with a simple serverless deploy.

Serverless Framework handles everything from creating namespaces to function/code deployment by calling API endpoints under the hood.

Requirements

  • Install node.js
  • Install Serverless CLI (npm install serverless -g)

Let's work in ~/my-srvless-projects

# mkdir ~/my-srvless-projects
# cd ~/my-srvless-projects

Create a Project

The easiest way to create a project is to use one of our templates. The list of templates is here

Let's use python3

serverless create --template-url https://github.com/scaleway/serverless-scaleway-functions/tree/master/examples/python3 --path myService

Once that's done, we can install the node packages used by serverless:

cd myService
npm i

Note: these packages are only used by serverless, they are not shipped with your functions.

Configure your functions

Your functions are defined in the serverless.yml file created:

service: scaleway-python3
configValidationMode: off

useDotenv: true

provider:
  name: scaleway
  runtime: python310
  # Global Environment variables - used in every functions
  env:
    test: test
  # Storing credentials in this file is strongly discouraged for security reasons; please refer to README.md for best practices
  scwToken: <scw-token>
  scwProject: <scw-project-id>
  # region in which the deployment will happen (default: fr-par)
  scwRegion: <scw-region>

plugins:
  - serverless-scaleway-functions
  
package:
  patterns:
    - '!node_modules/**'
    - '!.gitignore'
    - '!.git/**'

functions:
  first:
    handler: handler.py
    # Local environment variables - used only in given function
    env:
      local: local

Note: provider.name and plugins MUST NOT be changed, they enable us to use the scaleway provider

This file contains the configuration of one namespace containing one or more functions (in this example, only one) of the same runtime (here python3)

The different parameters are:

  • service: your namespace name
  • useDotenv: Load environment variables from .env files (default: false), read Security and secret management
  • configValidationMode: Configuration validation: 'error' (fatal error), 'warn' (logged to the output) or 'off' (default: warn)
  • provider.runtime: the runtime of your functions (check the supported runtimes above)
  • provider.env: environment variables attached to your namespace are injected to all your namespace functions
  • provider.secret: secret environment variables attached to your namespace are injected to all your namespace functions, see this example project
  • scwToken: Scaleway token you got in prerequisites
  • scwProject: Scaleway org id you got in prerequisites
  • scwRegion: Scaleway region in which the deployment will take place (default: fr-par)
  • package.patterns: usually, you don't need to configure it. Enable to include/exclude directories to/from the deployment
  • functions: Configuration of your functions. It's a YAML dictionary, with the key being the function name
    • handler (Required): file or function which will be executed. See the next section for runtime specific handlers
    • env (Optional): environment variables specific for the current function
    • secret (Optional): secret environment variables specific for the current function, see this example project
    • minScale (Optional): how many function instances we keep running (default: 0)
    • maxScale (Optional): maximum number of instances this function can scale to (default: 20)
    • memoryLimit: ram allocated to the function instances. See the introduction for the list of supported values
    • timeout: is the maximum duration in seconds that the request will wait to be served before it times out (default: 300 seconds)
    • runtime: (Optional) runtime of the function, if you need to deploy multiple functions with different runtimes in your Serverless Project. If absent, provider.runtime will be used to deploy the function, see this example project.
    • events (Optional): List of events to trigger your functions (e.g, trigger a function based on a schedule with CRONJobs). See events section below
    • custom_domains (Optional): List of custom domains, refer to Custom Domain Documentation
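For illustration, a function combining several of these optional settings might look like this (the values are arbitrary examples, not defaults):

```yaml
functions:
  first:
    handler: handler.py
    minScale: 1 # keep one instance warm
    maxScale: 10
    memoryLimit: 256
    timeout: 20
    env:
      local: local
```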

Security and secret management

Your configuration file may contain sensitive data; your project ID and your token must not be shared and must not be committed to version control.

To keep your information safe and still be able to share or commit your serverless.yml file, you should remove your credentials from the file. Then you can:

  • use global environment variables
  • use .env file and keep it secret

To use a .env file, you can modify your serverless.yml file as follows:

# This will allow the plugin to read your .env file
useDotenv: true

provider:
  name: scaleway
  runtime: node16

  scwToken: ${env:SCW_SECRET_KEY}
  scwProject: ${env:SCW_DEFAULT_PROJECT_ID}
  scwRegion: ${env:SCW_REGION}

And then create a .env file next to your serverless.yml file, containing the following values:

SCW_SECRET_KEY=XXX
SCW_DEFAULT_PROJECT_ID=XXX
SCW_REGION=fr-par

You can use this pattern to hide your secrets (for example, a connection string to a database or an S3 bucket).
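Inside a function, such a value is then read from the environment at runtime. A minimal Node sketch, assuming a hypothetical variable name DB_CONNECTION_STRING injected via provider.env or the function's env:

```javascript
// Reads a hypothetical secret (DB_CONNECTION_STRING) injected through the
// namespace or function environment; never hard-code it in the source.
function handle(event, context, cb) {
  const dsn = process.env.DB_CONNECTION_STRING || "";
  return {
    statusCode: dsn ? 200 : 500,
    body: dsn ? "database configured" : "missing DB_CONNECTION_STRING",
  };
}

module.exports = { handle };
```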

Functions Handler

Based on the chosen runtime, the handler variable on function might vary.

Using ES Modules

Node has two module systems: CommonJS modules and ECMAScript (ES) modules. By default, Node treats your code files as CommonJS modules, however ES modules have also been available since the release of node16 runtime on Scaleway Serverless Functions. ES modules give you a more modern way to re-use your code.

According to the official documentation, to use ES modules you can specify the module type in package.json, as in the following example:

  ...
  "type": "module",
  ...

This then enables you to write your code for ES modules:

export { handle };

function handle(event, context, cb) {
  return {
    body: process.version,
    statusCode: 200,
  };
}

The use of ES modules is encouraged, since they are more efficient and make setup and debugging much easier.

Note that using "type": "module" or "type": "commonjs" in your package.json will enable/disable some features in Node runtime. For a comprehensive list of differences, please refer to the official documentation, the following is a summary only:

  • commonjs is used as default value
  • commonjs allows you to use require/module.exports (synchronous code loading, it basically copies all file contents)
  • module allows you to use import/export ES6 instructions (asynchronous loading, more optimized as it imports only the pieces of code you need)
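For comparison, the ES module handler shown earlier can be written as a CommonJS module (the default) using module.exports:

```javascript
// CommonJS version of the same handler: module.exports replaces `export`.
function handle(event, context, cb) {
  return {
    body: process.version,
    statusCode: 200,
  };
}

module.exports = { handle };
```

Whichever style you choose, the handler reference in your serverless.yml stays the same.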

Node

Path to your handler file (from serverless.yml), omitting ./ and ../, plus the exported function to use as a handler:

- src
  - handlers
    - firstHandler.js  => module.exports.myFirstHandler = ...
    - secondHandler.js => module.exports.mySecondHandler = ...
- serverless.yml

In serverless.yml:

provider:
  # ...
  runtime: node16
functions:
  first:
    handler: src/handlers/firstHandler.myFirstHandler
  second:
    handler: src/handlers/secondHandler.mySecondHandler

Python

Similar to Node, specify the path to the handler file followed by the handler function name:

- src
  - handlers
    - firstHandler.py  => def my_first_handler
    - secondHandler.py => def my_second_handler
- serverless.yml

In serverless.yml:

provider:
  # ...
  runtime: python310 # or python37, python38, python39
functions:
  first:
    handler: src/handlers/firstHandler.my_first_handler
  second:
    handler: src/handlers/secondHandler.my_second_handler

Golang

Specify the path to your handler's package. For example, given the following structure:

- src
  - testing
    - handler.go -> package main in src/testing subdirectory
  - second
    - handler.go -> package main in src/second subdirectory
- serverless.yml
- handler.go -> package main at the root of project

Your serverless.yml functions should look something like this:

provider:
  # ...
  runtime: go118
functions:
  main:
    handler: "."
  testing:
    handler: src/testing
  second:
    handler: src/second

Events

With events, you can link your functions to specific triggers, such as a CRON schedule (time-based), MQTT queues (publishing on a topic triggers the function), or S3 object updates (uploading an object triggers the function).

Note that we do not include HTTP triggers in our event types, as an HTTP endpoint is created for every function. Triggers are simply an additional way to invoke your function; you will always be able to execute your code via HTTP.

Here is a list of supported triggers on Scaleway Serverless, and the configuration parameters required to deploy them:

  • schedule: Trigger your function based on CRON schedules
    • rate: CRON Schedule (UNIX Format) on which your function will be executed
    • input: key-value mapping to define arguments that will be passed into your function's event object during execution.

To link a trigger to your function, define an events key on your function:

functions:
  handler: myHandler.handle
  events:
    # "events" is a list of triggers, the first key being the type of trigger.
    - schedule:
        # CRON Job Schedule (UNIX Format)
        rate: '1 * * * *'
        # Input variables are passed into your function's event during execution
        input:
          key: value
          key2: value2

You may link events to your containers too (see the Managing containers section below for more information on how to deploy containers):

custom:
  containers:
    mycontainer:
      directory: my-directory
      # Events key
      events:
        - schedule:
            rate: '1 * * * *'
            input:
              key: value
              key2: value2

You may refer to the examples provided in the plugin repository.

Custom domains

Custom domains allow you to serve your functions from your own domain names.

For domain configuration, please refer to the Scaleway documentation.

Example of integration with the Serverless Framework:

functions:
  first:
    handler: handler.handle
    # Local environment variables - used only in given function
    env:
      local: local
    custom_domains:
      - func1.scaleway.com
      - func2.scaleway.com

Note: as your domain must have a record pointing to your function's hostname, you should deploy your function once to read its hostname. Custom domain configurations will be available after the first deploy.

Note: Serverless Framework will consider the configuration file as the source of truth.

If you create a domain with other tools (Scaleway's console, CLI or API), you must reference the created domain in your serverless configuration file. Otherwise it will be deleted, as the Serverless Framework gives priority to its own configuration.

Managing containers

Requirements: You need to have Docker installed to be able to build and push your image to your Scaleway registry.

You must define your containers inside the custom.containers field in your serverless.yml manifest. Each container must specify the relative path of its application directory (containing the Dockerfile, and all files related to the application to deploy):

custom:
  containers:
    mycontainer:
      directory: my-container-directory
      # port: 8080
      # Environment only available in this container 
      env:
        MY_VARIABLE: "my-value"

Here is an example of the files you should have; the directory containing your Dockerfile and scripts is my-container-directory:

.
β”œβ”€β”€ my-container-directory
β”‚   β”œβ”€β”€ Dockerfile
β”‚   β”œβ”€β”€ requirements.txt
β”‚   β”œβ”€β”€ server.py
β”‚   └── (...)
β”œβ”€β”€ node_modules
β”‚   β”œβ”€β”€ serverless-scaleway-functions
β”‚   └── (...)
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ package.json
└── serverless.yml

Scaleway's platform will automatically inject a PORT environment variable on which your server should be listening for incoming traffic. By default, this PORT is 8080. You may change the port in your serverless.yml.

You may use the container example to get started.

Logs

The serverless logs command lets you watch the logs of a specific function or container.

Pass the function or container name you want to fetch the logs for with --function:

serverless logs --function <function_or_container_name>

Info

The serverless info command gives you information about your current deployment state in JSON format.

Documentation and useful Links

Contributing

This plugin is mainly developed and maintained by the Scaleway Serverless Team, but you are free to open issues or discuss with us on our Community Slack channels #serverless-containers and #serverless-functions.

Author: Scaleway
Source Code: https://github.com/scaleway/serverless-scaleway-functions 
License: MIT license

#serverless #function #aws #lambda 

Serverless Framework: Deploy on Scaleway Functions
Hermann Frami

1656621900

Serverless Sagemaker Groundtruth

Serverless-sagemaker-groundtruth

This serverless plugin includes a set of utilities to implement custom workflows for AWS Sagemaker Groundtruth.

Currently includes :

  • Serve a liquid template from a manifest file + pre-lambda, the same way it is done on AWS Sagemaker Groundtruth
  • Run end-to-end tests: pre-lambda -> labelling simulation -> post-lambda

Any pull request will be warmly welcomed!

Ideas for future implementation :

  • Create Tasks from serverless CLI
  • Test Chained tasks
  • Expose nodejs api to integrate with testing suites

Installation

npm install --save-dev serverless-sagemaker-groundtruth

Usage as a serverless plugin

Example serverless.yml

In order to use this module, you need to add a groundtruthTasks key into your serverless.yml file

...

plugins:
  - serverless-sagemaker-groundtruth

functions:
  pre-example: 
    handler: handler.pre
    name: pre
  post-example: 
    handler: handler.postObjectDetection
    name: post

groundtruthTasks:
  basic:
    pre: pre-example
    post: post-example
    template: app/templates/object-detection/basic.liquid.html

Serve a liquid template against a manifest file

serverless groundtruth serve \
  --groundtruthTask <groundtruthTask-name> \
  --manifest <s3-uri or local file> \
  --row <row index>

Test e2e behavior of sagemaker groundtruth workflow

The puppeteer module example

Here, we create a puppeteer module which draws random bounding boxes (using the hasard library):

const BbPromise = require('bluebird')
const h = require('hasard');

/**
* This function is binding a sequence of actions made by the user before submitting the form
* This is an example showing how to simulate a user's bounding box actions
* @param {Page} page puppeteer page instance see https://github.com/puppeteer/puppeteer
* This page is open and running in the annotation page
* @param {Object} manifestRow the object from the manifest file row
* @param {Object} prelambdaOutput the output object from the prelambda result
* @returns {Promise} the promise is resolved once the user has done all needed actions on the form
*/

module.exports = function({
    page, 
    manifestRow,  
    workerId
}){
    
    // we draw 5 boxes for each worker
    const nBoxes = 5;
    
    // Cat and Dog
    const nCategories = 2;
    
    // Using the technique from https://github.com/puppeteer/puppeteer/issues/858#issuecomment-438540596 to select the node
    return page.evaluateHandle(`document.querySelector("body > crowd-form > form > crowd-bounding-box").shadowRoot.querySelector("#annotation-area-container > div > div > div")`)
        .then(imageCanvas => {
            return imageCanvas.boundingBox()
        }).then(boundingBox => {
            
            // define a random bounding box over the image canvas using hasard library
            // see more example in https://www.npmjs.com/package/hasard
            const width = h.reference(h.integer(0, Math.floor(boundingBox.width)));
            const height = h.reference(h.integer(0, Math.floor(boundingBox.height)));
            const top = h.add(h.integer(0, h.substract(Math.floor(boundingBox.width), width)), Math.floor(boundingBox.x));
            const left = h.add(h.integer(0, h.substract(Math.floor(boundingBox.height), height)), Math.floor(boundingBox.y));

            const randomAnnotation = h.object({
                box: h.array([
                    top,
                    left,
                    width,
                    height
                ]),
                category: h.integer(0, nCategories-1)
            });
            
            const workerAnnotations = randomAnnotation.run(nBoxes)
            
            return BbPromise.map(workerAnnotations, ({box, category}) => {
                return page.evaluateHandle(`document.querySelector("body > crowd-form > form > crowd-bounding-box").shadowRoot.querySelector("#react-mount-point > div > div > awsui-app-layout > div > div.awsui-app-layout__tools.awsui-app-layout--open > aside > div > span > div > div.label-pane-content > div:nth-child(${category+1})")`)
                    .then(categoryButton => categoryButton.click())
                    .then(() => page.mouse.move(box[0], box[1]))
                    .then(() => page.mouse.down())
                    .then(() => page.mouse.move(box[0]+box[2], box[1]+box[3]))
                    .then(() => page.mouse.up());
                
            }, {concurrency: 1})
        }).then(() => {
            console.log(`${workerId} actions simulation done on ${JSON.stringify(manifestRow)}`)
            // at the end we return nothing, serverless-sagemaker-groundtruth will automatically request the output from the page
        })
}

The end to end command

serverless groundtruth test e2e \
    --groundtruthTask <groundtruthTask-name> \
    --manifest <s3-uri or local file> \
    --puppeteerModule <path to the module> \
    --workerIds a,b,c

Usage programmatically

You can use serverless-sagemaker-groundtruth functions in your nodejs code by using

const gtLibs = require('serverless-sagemaker-groundtruth/lib')

endToEnd


/**
* @param {String} template path to the liquid template file
* @param {String} labelAttributeName labelAttributeName to use as output of the postLambda function
* @param {Object} manifestRow js object representing the manifest row
* @param {Function} preLambda js function to use as pre lambda function
* @param {Number} [port=3000]  port to use to serve the web page
* @param {Function} postLambda js function to use as post lambda function
* @param {Array.<String>} workerIds ids of the workers to simulate
* @param {PuppeteerModule} puppeteerMod module that simulate the behavior of a worker
* @returns {Promise.<PostLambdaOutput>}
*/

return gtLibs.endToEnd({
    template,
    labelAttributeName, 
    manifestRow,
    preLambda,
    port,
    postLambda,
    workerIds,
    puppeteerMod
});

Remarks

Local consolidation request file compatibility

You need to make sure that your post lambda function is compatible with using a local filename in event.payload.s3Uri. You can use gtLibs.loadFile if you need such a function.

Template

Your template should be submitted using a button matching the button.awsui-button[type="submit"] selector.

Author: Piercus
Source Code: https://github.com/piercus/serverless-sagemaker-groundtruth 
License: 

#serverless #aws #plugin 

Serverless Sagemaker Groundtruth
Hermann Frami

1656614460

Serverless Plugin for S3 Sync

⚑️ Serverless Plugin for S3 Sync   

With this plugin for serverless, you can sync local folders to S3 buckets after your service is deployed.

Usage

Add the NPM package to your project:

# Via yarn
$ yarn add serverless-s3bucket-sync

# Via npm
$ npm install serverless-s3bucket-sync

Add the plugin to your serverless.yml:

plugins:
  - serverless-s3bucket-sync

Configuration

Configure the S3 bucket sync in serverless.yml with references to your local folder and the name of the S3 bucket.

custom:
  s3-sync:
    - folder: relative/folder
      bucket: bucket-name

That's it! With the next deployment, serverless will sync your local folder relative/folder with the S3 bucket named bucket-name.

Sync

You can use sls sync to synchronize all buckets without deploying your serverless stack.

Contribution

You are welcome to contribute to this project! 😘

To make sure you have a pleasant experience, please read the code of conduct. It outlines core values and beliefs and will make working together a happier experience.

Author: sbstjn
Source Code: https://github.com/sbstjn/serverless-s3bucket-sync 
License: MIT license

#serverless #s3 #sync #aws 

Serverless Plugin for S3 Sync
Hermann Frami

1656554820

Serverless Ruby Layer

Serverless-ruby-layer

A Serverless Plugin which bundles ruby gems from Gemfile and deploys them to the lambda layer automatically while running serverless deploy.

It auto-configures the AWS lambda layer and RUBY_PATH to all the functions.

Install

Install ⚑️ serverless. Refer here for serverless installation instructions.

Navigate to your serverless project and install the plugin

sls plugin install -n serverless-ruby-layer

This will add the plugin to package.json and the plugins section of serverless.yml.

Getting Started

Simple Usage

serverless.yml

service: basic

plugins:
  - serverless-ruby-layer

provider:
  name: aws
  runtime: ruby2.5

functions:
  hello:
    handler: handler.hello

Gemfile

  source 'https://rubygems.org'
  gem 'httparty'

Running sls deploy automatically deploys the gems required in the Gemfile to an AWS Lambda layer and makes the gems available on the RUBY_PATH of the function hello.handler.

Refer to the example and docs for more details.

Customization

The plugin operation can be customized by specifying the custom configuration under rubyLayer.

For example, to use a docker environment for packaging gems, add the configuration below to serverless.yml:

custom:
  rubyLayer:
    use_docker: true

For more details, refer to the docs here.

Usage

Using the custom configuration, the plugin can be utilized for the cases below:

  • Using locally installed bundler for gems without any native extensions - Example - Docs
  • Using Docker for gems with OS native C extensions or system libraries like http, Nokogiri - Example - Docs
  • Preinstall OS packages (yum packages) for gems which require OS native system libraries like pg, mysql, RMagick - PG Example - Docs
  • Using a Dockerfile for gems which require another OS Linux image or system libraries and utilities - Example - Docs
  • Using Docker / Dockerfile with environment variable - Docker Example - DockerFile Example - Docs
  • Include / Exclude specific functions from layer configuration - Include Example , Exclude Example - Docs
  • Exclude test and development related gems from layer - Example - Docs
  • Using Bundler.require(:default) to require all gems in handler.rb by respecting Gemfile.lock - Example - Docs

Documentation

Check out the documentation here for,

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update the tests as appropriate.

Refer Guidelines for more information.

Author: Navarasu
Source Code: https://github.com/navarasu/serverless-ruby-layer 
License: MIT license

#serverless #ruby #layers #aws 

Serverless Ruby Layer
Hermann Frami

1656547380

Serverless Resources Env

Serverless-resources-env

A serverless framework plugin so that your functions know how to use resources created by cloudformation.

In short, whether you are running your function as a lambda, or locally on your machine, the physical name or ARN of each resource that was part of your CloudFormation template will be available as an environment variable keyed to its logical name prefixed with CF_.

For lambdas running on AWS, this plugin will set environment variables on your functions within lambda using the Lambda environment variable support.

For running functions locally, it will also create a local env file for use in reading in environment variables for a specific region-stage-function while running functions locally. These are by default stored in a directory named: .serverless-resources-env in files named .<region>_<stage>_<function-name>. Ex: ./.serverless-resources-env/.us-east-1_dev_hello. These environment variables are set automatically by the plugin when running serverless invoke local -f ....

Breaking Changes in 0.3.0: See below

Why?

You have a CloudFormation template all set, and you are writing your functions. Now you are ready to use the resources created as part of your CF template. Well, you need to know about them! You could deploy and then try to manage configuration for these resources, or you can use this module, which will automatically set environment variables that map the logical resource name to the physical resource name for resources within the CloudFormation file.

Example:

You have defined resources in your serverless.yml called mySQS and myTable, and you want to actually use these in your function so you need their ARN or the actual table name that was created.

const sqs_arn = process.env.CF_mySQS;
const my_dynamo_table_name = process.env.CF_myTable;

How it works

This plugin attaches to the deploy post-deploy hook. After the stack is deployed to AWS, the plugin determines the name of the cloud formation stack, and queries AWS for all resources in this stack.

After deployment, this plugin will fetch all the CF resources for the current stack (stage, i.e. 'dev'). It will then use the AWS SDK to set the physical ID of each resource as an environment variable prefixed with CF_.

It will also create a file with these values in a .properties file format named ./serverless-resources-env/.<region>_<stage>_<function-name>. These are then pulled in during a local invocation (serverless invoke local -f...) Each region, stage, and function will get its own file. When invoking locally the module will automatically select the correct .env information based on which region and stage is set.

This means no code changes, or config changes no matter how many regions, and stages you deploy to.

The lambdas always know exactly where to find their resources, whether that resource is a DynamoDB, SQS, SNS, or anything else.

Install / Setup

npm install serverless-resources-env --save

Add the plugin to the serverless.yml.

plugins:
  - serverless-resources-env

Set your resources as normal:

resources:
  Resources:
    testTopic1:
      Type: AWS::SNS::Topic
    testTopic2:
      Type: AWS::SNS::Topic

Set which resources you want exported on each function.

functions:
  hello:
    handler: handler.hello
    custom:
      env-resources:
        - testTopic1
        - testTopic2

Breaking Changes since 0.2.0

At version 0.2.0 and before, all resources were exported to both the local .env file and to each function automatically.

This caused issues with AWS limits on the amount of information that could be exported as env variables onto lambdas deployed within AWS. This also exposed resources as env variables that were not needed by functions, as it was setting all resources, not just the ones the function needed.

Starting at version 0.3.0, a list of the resources to be exported to each function must be part of the function definition in the .yml file, if the function needs any of these environment variables (see the current install instructions above).

This also means that specific env files are needed per region / stage / function. This can potentially be a lot of files, and these files were therefore moved to a sub-folder, .serverless-resources-env by default.

Common Errors

Unexpected key 'Environment' found in params. Your aws-sdk is out of date. Setting environment variables on lambdas is new. See the Important note above.

You may need to upgrade the version of the package aws-sdk being used by the serverless framework.

In the 1.1.0 serverless framework, the aws-sdk is pegged at version 2.6.8 in the npm-shrinkwrap.json of serverless.

If you have installed serverless locally as part of your project you can just upgrade the sdk. npm upgrade aws-sdk.

If you have installed serverless globally, you will need to change to the serverless directory and run npm upgrade aws-sdk from there.

The following commands should get it done:

cd `npm list serverless -g | head -n 1`/node_modules/serverless
npm upgrade aws-sdk

Config

By default, the mapping is written to a .env file located at ./.serverless-resources-env/.<region>_<stage-name>_env. This can be overridden by setting an option in serverless.yml.

custom:
  resource-output-dir: .alt-resource-dir
functions:
  hello:
    custom:
      resource-output-file: .alt-file-name

Author: Rurri
Source Code: https://github.com/rurri/serverless-resources-env 
License: MIT license

#serverless #aws #env #plugin 

Serverless Resources Env
Hermann Frami

1656543480

Serverless Plugin Resource Tagging

serverless-plugin-resource-tagging

Serverless stackTags updates the tags for all resources that support tagging, but only once, when the stack is created. If you update the tag values after deployment, the change won't be reflected in the next deployment; you would have to remove the stack and redeploy for the new tags to take effect. This plugin solves that issue for AWS.

Note:

  • This plugin is only for AWS.
  • This plugin supports API Gateway stage tags even if the stage is not configured in serverless.yml and CloudFormation created one.

Using this plugin

npm install serverless-plugin-resource-tagging

serverless.yml

provider:
    name: XXX
    stackTags:
        Tag1: "Tag1 value"
        Tag2: "Tag2 value"
plugins:
  - serverless-plugin-resource-tagging

Supported AWS resources

AWS::Lambda::Function
AWS::SQS::Queue
AWS::Kinesis::Stream
AWS::DynamoDB::Table
AWS::S3::Bucket
AWS::ApiGateway::Stage
AWS::CloudFront::Distribution
AWS::Logs::LogGroup

Author: ilayanambi86
Source Code: https://github.com/ilayanambi86/serverless-plugin-resource-tagging 
License: 

#serverless #plugin #tags  #aws 

Serverless Plugin Resource Tagging
Hermann Frami

1656528660

Serverless Reqvalidator Plugin

serverless-reqvalidator-plugin

Serverless plugin to set a specific request validator on a method.

Installation

npm install serverless-reqvalidator-plugin

Requirements

This requires you to have the documentation plugin installed:

serverless-aws-documentation

Using plugin

Specify plugin

plugins:
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation

In serverless.yml create custom resource for request validators

    xMyRequestValidator:  
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'my-req-validator'
        RestApiId: 
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: false  

For every function you wish to use the validator with, set the property reqValidatorName: 'xMyRequestValidator' to match the resource you described:

  debug:
    handler: apis/admin/debug/debug.debug
    timeout: 10
    events:
      - http:
          path: admin/debug
          method: get
          cors: true
          private: true 
          reqValidatorName: 'xMyRequestValidator'

Use Validator specified in different Stack

The Serverless Framework allows us to share resources among several stacks. A CloudFormation Output has to be specified in one stack; this Output can then be imported in another stack to make use of it. For more information, see here.

Specify a request validator in a different stack:

plugins:
  - serverless-reqvalidator-plugin
service: my-service-a
functions:
  hello:
    handler: handler.myHandler
    events:
      - http:
          path: hello
          reqValidatorName: 'myReqValidator'

resources:
  Resources:
    xMyRequestValidator:
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'my-req-validator'
        RestApiId: 
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: false
  Outputs:
    xMyRequestValidator:
      Value:
        Ref: xMyRequestValidator
      Export:
        Name: myReqValidator

Make use of the exported request validator in stack b:

plugins:
  - serverless-reqvalidator-plugin
service: my-service-b
functions:
  hello:
    handler: handler.myHandler
    events:
      - http:
          path: hello
          reqValidatorName:
            Fn::ImportValue: 'myReqValidator'

Full example

service:
  name: my-service

plugins:
  - serverless-webpack
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation

provider:
  name: aws
  runtime: nodejs6.10
  region: eu-west-2
  environment:
    NODE_ENV: ${self:provider.stage}
custom:
  documentation:
    api:
      info:
        version: '1.0.0'
        title: My API
        description: This is my API
      tags:
        -
          name: User
          description: User Management
    models:
      - name: MessageResponse
        contentType: "application/json"
        schema:
          type: object
          properties:
            message:
              type: string
      - name: RegisterUserRequest
        contentType: "application/json"
        schema:
          required: 
            - email
            - password
          properties:
            email:
              type: string
            password:
              type: string
      - name: RegisterUserResponse
        contentType: "application/json"
        schema:
          type: object
          properties:
            result:
              type: string
      - name: 400JsonResponse
        contentType: "application/json"
        schema:
          type: object
          properties:
            message:
              type: string
            statusCode:
              type: number
  commonModelSchemaFragments:
    MethodResponse400Json:
      statusCode: '400'
      responseModels:
        "application/json": 400JsonResponse

functions:
  signUp:
    handler: handler.signUp
    events:
      - http:
          documentation:
            summary: "Register user"
            description: "Registers new user"
            tags:
              - User
            requestModels:
              "application/json": RegisterUserRequest
          method: post
          path: signup
          reqValidatorName: onlyBody
          methodResponses:
              - statusCode: '200'
                responseModels:
                  "application/json": RegisterUserResponse
              - ${self:custom.commonModelSchemaFragments.MethodResponse400Json}

package:
  include:
    handler.ts

resources:
  Resources:
    onlyBody:  
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'only-body'
        RestApiId: 
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: false

Author: RafPe
Source Code: https://github.com/RafPe/serverless-reqvalidator-plugin 
License: 

#serverless #plugin #framework #aws 

Serverless Reqvalidator Plugin
Hermann Frami

1656521280

Serverless Plugin Registry

Serverless Registry Plugin

Register function names with AWS SSM Parameter Store

Requirements:

  • Serverless v1.12.x or higher.
  • AWS provider

How it works

This plugin creates an SSM Parameter with your functions' fully qualified Lambda Function names as values. The main motivation for this plugin is to remove the dependency that any client code would have on the AWS Stack, as the stack name is part of the fully qualified Lambda Function name. Using this plugin, it is easier to move functions between stacks with fewer changes to client code and configuration.

Caveats

One caveat is that any IAM policies written for these functions will still need to be updated. In the case of Serverless configuration, if you use the built-in SSM Parameter resolution, it might be as simple as redeploying any upstream client services.

Setup

Install via npm in the root of your Serverless service:

npm install serverless-plugin-registry --save-dev

Add the plugin to the plugins array in your serverless.yml:

plugins:
  - serverless-plugin-registry

Default Behavior

service: ServerlessPluginRegistry

provider:
  stage: ${opt:stage, "Test"}

functions:
  Hello:
    handler: hello.js

This will produce an SSM Parameter with

  • Name: /ServerlessPluginRegistry/Test/Hello/FunctionName
  • Value: ServerlessPluginRegistry-Test-Hello
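The naming convention above can be expressed as a small helper (a sketch; registryParameterName is a hypothetical name, not part of the plugin):

```javascript
// Sketch of the SSM parameter-name convention described above:
// <baseName or /<service>/<stage>>/<functionName>/FunctionName
function registryParameterName(service, stage, fnName, baseName) {
    const base = baseName || `/${service}/${stage}`;
    return `${base}/${fnName}/FunctionName`;
}
```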

Global Base Name

service: ServerlessPluginRegistry

provider:
  stage: ${opt:stage, "Test"}

custom:
  registry:
    baseName: /Registry/${self:provider.stage}

functions:
  Hello:
    handler: hello.js

This will produce an SSM Parameter with

  • Name: /Registry/Test/Hello/FunctionName
  • Value: ServerlessPluginRegistry-Test-Hello

Function Base Name

service: ServerlessPluginRegistry

provider:
  stage: ${opt:stage, "Test"}

functions:
  Hello:
    handler: hello.js    
    registry:
      baseName: /Registry/${self:provider.stage}

This will produce an SSM Parameter with

  • Name: /Registry/Test/Hello/FunctionName
  • Value: ServerlessPluginRegistry-Test-Hello

Only Publish Select Functions

service: ServerlessPluginRegistry

provider:
  stage: ${opt:stage, "Test"}

functions:
  Hello:
    handler: hello.js    
    registry:
      baseName: /Registry/${self:provider.stage}
  HowAreYou:
    handler: howAreYou.js    
    registry:
      register: true
  Goodbye:
    handler: goodbye.js    

This will produce only two SSM Parameters:

Name: /Registry/Test/Hello/FunctionName

Value: ServerlessPluginRegistry-Test-Hello

Name: /ServerlessPluginRegistry/Test/HowAreYou/FunctionName

Value: ServerlessPluginRegistry-Test-HowAreYou

Contribute

Help us make this plugin better and future-proof.

  • Clone the code
  • Install the dependencies with npm install
  • Create a feature branch git checkout -b new_feature
  • Lint with standard npm run lint

Author: Aronim
Source Code: https://github.com/aronim/serverless-plugin-registry 
License: MIT license

#serverless #plugin #registry #aws 

Serverless Plugin Registry
Hermann Frami

1656498780

Serverless-rack: Serverless Plugin to Deploy Ruby Rack Applications

Serverless-rack

A Serverless v1.x plugin to build and deploy Ruby Rack applications using Serverless. Compatible Rack application frameworks include Sinatra, Cuba and Padrino.

Features

  • Transparently converts API Gateway and ALB requests to and from standard Rack requests
  • Supports anything you'd expect from Rack such as redirects, cookies, file uploads etc.
  • Bundler integration, including dockerized bundling of binary dependencies
  • Convenient rack serve command for serving your application locally during development
  • CLI commands for remote execution of Ruby code (rack exec), Rake tasks (rack rake) and shell commands (rack command)

Install

sls plugin install -n serverless-rack

This will automatically add the plugin to package.json and the plugins section of serverless.yml.

Sinatra configuration example

project
β”œβ”€β”€ api.rb
β”œβ”€β”€ config.ru
β”œβ”€β”€ Gemfile
└── serverless.yml

api.rb

A regular Sinatra application.

require 'sinatra'

get '/cats' do
  'Cats'
end

get '/dogs/:id' do
  'Dog'
end

config.ru

require './api'
run Sinatra::Application

serverless.yml

All functions that will use Rack need to have rack_adapter.handler set as the Lambda handler and use the default lambda-proxy integration for API Gateway. This configuration example treats API Gateway as a transparent proxy, passing all requests directly to your Sinatra application, and letting the application handle errors, 404s etc.

service: example

provider:
  name: aws
  runtime: ruby2.5

plugins:
  - serverless-rack

functions:
  api:
    handler: rack_adapter.handler
    events:
      - http: ANY /
      - http: ANY /{proxy+}

Gemfile

Add Sinatra to the application bundle.

source 'https://rubygems.org'

gem 'sinatra'

Deployment

Simply run the serverless deploy command as usual:

$ bundle install --path vendor/bundle
$ sls deploy
Serverless: Packaging Ruby Rack handler...
Serverless: Packaging gem dependencies using docker...
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (1.64 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
..............
Serverless: Stack update finished...

Usage

Automatic bundling of gems

You'll need to include any gems that your application uses in the bundle that's deployed to AWS Lambda. This plugin helps you out by doing this automatically, as long as you specify your required gems in a Gemfile:

source 'https://rubygems.org'

gem 'rake'
gem 'sinatra'

For more information, see https://bundler.io/docs.html.

Dockerized bundling

If your application depends on any gems that include compiled binaries, these must be compiled for the lambda execution environment. Enabling the dockerizeBundler configuration option will fetch and build the gems using a docker image that emulates the lambda environment:

custom:
  rack:
    dockerizeBundler: true

The default docker image that will be used will match the runtime you are using. That is, if you are using the ruby2.7 runtime, then the docker image will be logandk/serverless-rack-bundler:ruby2.7. You can override the docker image with the dockerImage configuration option:

custom:
  rack:
    dockerImage: lambci/lambda:build-ruby2.5

Bundler configuration

You can use the automatic bundling functionality of serverless-rack without the Rack request handler itself by including the plugin in your serverless.yml configuration, without specifying rack_adapter.handler as the handler for any of your lambda functions. This will omit the Rack handler from the package, but include any gems specified in the Gemfile.

If you don't want to use automatic gem bundling you can set custom.rack.enableBundler to false:

custom:
  rack:
    enableBundler: false

In order to pass additional arguments to bundler when installing requirements, the bundlerArgs configuration option is available:

custom:
  rack:
    bundlerArgs: --no-cache

If your bundler executable is not in $PATH, set the path explicitly using the bundlerBin configuration option:

custom:
  rack:
    bundlerBin: /path/to/bundler

Rack configuration file

If your Rack configuration file (config.ru) is not in ./, set the path explicitly using the configPath configuration option:

custom:
  rack:
    configPath: path/to/config.ru

Local server

For convenience, a sls rack serve command is provided to run your Rack application locally. This command requires the rack gem to be installed, and acts as a simple wrapper for rackup.

By default, the server will start on port 5000.

$ sls rack serve
[2019-01-03 18:13:21] INFO  WEBrick 1.4.2
[2019-01-03 18:13:21] INFO  ruby 2.5.1 (2018-03-29) [x86_64-linux-gnu]
[2019-01-03 18:13:21] INFO  WEBrick::HTTPServer#start: pid=25678 port=5000

Configure the port using the -p parameter:

$ sls rack serve -p 8000
[2019-01-03 18:13:21] INFO  WEBrick 1.4.2
[2019-01-03 18:13:21] INFO  ruby 2.5.1 (2018-03-29) [x86_64-linux-gnu]
[2019-01-03 18:13:21] INFO  WEBrick::HTTPServer#start: pid=25678 port=8000

When running locally, an environment variable named IS_OFFLINE will be set to True. So, if you want to know when the application is running locally, check ENV["IS_OFFLINE"].
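A minimal sketch of that check, assuming you want to branch on local vs. deployed behavior (the helper name and the config paths here are hypothetical):

```ruby
# IS_OFFLINE is set to "True" by `sls rack serve` (and by
# serverless-offline); illustrative helper, not part of the plugin.
def running_offline?
  ENV["IS_OFFLINE"] == "True"
end

# Hypothetical use: pick a config file depending on where we run.
config_source = running_offline? ? "config/local.yml" : "config/production.yml"
```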

For use with the serverless-offline plugin, run sls rack install prior to sls offline.

Remote command execution

The rack exec command lets you execute ruby code remotely:

$ sls rack exec -c "puts (1 + Math.sqrt(5)) / 2"
1.618033988749895

$ cat count.rb
3.times do |i|
  puts i
end

$ sls rack exec -f count.rb
0
1
2

The rack command command lets you execute shell commands remotely:

$ sls rack command -c "pwd"
/var/task

$ cat script.sh
#!/bin/bash
echo "dlrow olleh" | rev

$ sls rack command -f script.sh
hello world

The rack rake command lets you execute Rake tasks remotely:

$ sls rack rake -t "db:rollback STEP=3"

Explicit routes

If you'd like to be explicit about which routes and HTTP methods should pass through to your application, see the following example:

service: example

provider:
  name: aws
  runtime: ruby2.5

plugins:
  - serverless-rack

functions:
  api:
    handler: rack_adapter.handler
    events:
      - http:
          path: cats
          method: get
          integration: lambda-proxy
      - http:
          path: dogs/{id}
          method: get
          integration: lambda-proxy

Custom domain names

If you use custom domain names with API Gateway, you might have a base path that is at the beginning of your path, such as the stage (/dev, /stage, /prod). In this case, set the API_GATEWAY_BASE_PATH environment variable to let serverless-rack know.

The example below uses the serverless-domain-manager plugin to handle custom domains in API Gateway:

service: example

provider:
  name: aws
  runtime: ruby2.5
  environment:
    API_GATEWAY_BASE_PATH: ${self:custom.customDomain.basePath}

plugins:
  - serverless-rack
  - serverless-domain-manager

functions:
  api:
    handler: rack_adapter.handler
    events:
      - http: ANY /
      - http: ANY {proxy+}

custom:
  customDomain:
    basePath: ${opt:stage}
    domainName: mydomain.name.com
    stage: ${opt:stage}
    createRoute53Record: true

File uploads

In order to accept file uploads from HTML forms, make sure to add multipart/form-data to the list of content types with Binary Support in your API Gateway API. The serverless-apigw-binary Serverless plugin can be used to automate this process.

Keep in mind that, when building Serverless applications, uploading directly to S3 from the browser is usually the preferred approach.

Raw context and event

The raw context and event from AWS Lambda are both accessible through the Rack request. The following example shows how to access them when using Sinatra:

require 'sinatra'

get '/' do
  puts request.env['serverless.event']
  puts request.env['serverless.context']
end

Text MIME types

By default, all MIME types starting with text/ and the following whitelist are sent through API Gateway in plain text. All other MIME types will have their response body base64 encoded (and the isBase64Encoded API Gateway flag set) in order to be delivered by API Gateway as binary data (remember to add any binary MIME types that you're using to the Binary Support list in API Gateway).

This is the default whitelist of plain text MIME types:

  • application/json
  • application/javascript
  • application/xml
  • application/vnd.api+json
  • image/svg+xml

In order to add additional plain text MIME types to this whitelist, use the textMimeTypes configuration option:

custom:
  rack:
    textMimeTypes:
      - application/custom+json
      - application/vnd.company+json
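The plain-text-versus-base64 decision described above can be sketched as a simple predicate. This is an illustration of the rule, assuming the default whitelist listed earlier plus any extra textMimeTypes entries; it is not the gem's internal code:

```ruby
# Default plain-text whitelist from the section above.
DEFAULT_TEXT_MIME_TYPES = %w[
  application/json
  application/javascript
  application/xml
  application/vnd.api+json
  image/svg+xml
].freeze

# True if the response body should pass through as plain text;
# otherwise it would be base64 encoded (with isBase64Encoded set).
def plain_text?(mime_type, extra_types = [])
  mime_type.start_with?("text/") ||
    DEFAULT_TEXT_MIME_TYPES.include?(mime_type) ||
    extra_types.include?(mime_type)
end

puts plain_text?("text/html")   # true
puts plain_text?("image/png")   # false
puts plain_text?("application/custom+json", ["application/custom+json"]) # true
```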

Usage without Serverless

The AWS API Gateway to Rack mapping module is available as a gem.

Use this gem if you need to deploy Ruby functions to handle API Gateway events directly, without using the Serverless framework.

gem install --install-dir vendor/bundle serverless-rack

Initialize your Rack application and, in your Lambda event handler, call the request mapper:

require 'serverless_rack'

$app ||= Proc.new do |env|
  ['200', {'Content-Type' => 'text/html'}, ['A barebones rack app.']]
end

def handler(event:, context:)
  handle_request(app: $app, event: event, context: context)
end
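To make the mapping concrete, here is a greatly simplified, self-contained sketch of what handle_request does conceptually: build a Rack env hash from the API Gateway event, call the app, and shape the result into a Lambda proxy response. The real gem handles far more (headers, query strings, base64 bodies, multi-value parameters), so treat this as an illustration only:

```ruby
require 'stringio'

# Hypothetical, minimal stand-in for the gem's request mapper.
def minimal_handle_request(app:, event:, context:)
  env = {
    'REQUEST_METHOD'      => event['httpMethod'] || 'GET',
    'PATH_INFO'           => event['path'] || '/',
    'QUERY_STRING'        => '',
    'SERVER_NAME'         => 'localhost',
    'SERVER_PORT'         => '443',
    'rack.input'          => StringIO.new(event['body'] || ''),
    'rack.errors'         => $stderr,
    'rack.url_scheme'     => 'https',
    # Raw event/context exposed as shown in the Sinatra example above.
    'serverless.event'    => event,
    'serverless.context'  => context
  }
  status, headers, body = app.call(env)
  { statusCode: status.to_i, headers: headers, body: body.join }
end

app = Proc.new do |env|
  ['200', { 'Content-Type' => 'text/html' }, ['A barebones rack app.']]
end

response = minimal_handle_request(
  app: app,
  event: { 'httpMethod' => 'GET', 'path' => '/' },
  context: {}
)
puts response[:statusCode] # 200
puts response[:body]       # A barebones rack app.
```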

Author: logandk
Source Code: https://github.com/logandk/serverless-rack 
License: MIT license

#serverless #aws #ruby #rails 
