Serverless

Build web, mobile and IoT applications using AWS Lambda and API Gateway, Azure Functions, Google Cloud Functions, and more.

Hermann Frami

Serverless Sequelize Migrations

A plugin to manage Sequelize migrations in Serverless projects.

Features:

  • Create migration file
  • List pending and executed migrations
  • Apply pending migrations
  • Revert applied migrations
  • Reset all applied migrations

Documentation

Installation

  1. Add Serverless Sequelize Migrations to your project:
npm install --save serverless-sequelize-migrations
  2. Inside your serverless.yml file, add the following entry to the plugins section (if there is no plugins section in the file, you'll need to add it):
plugins:
  - serverless-sequelize-migrations

You can check whether the plugin is ready to be used through the terminal. To do so, type the following command on the CLI:

serverless

The console should display SequelizeMigrations as one of the available plugins in your Serverless project.

Setting up Sequelize

For the plugin to work correctly, you have to set the database information as environment variables in the provider section of your service, as follows:

provider:
  environment:
    DB_DIALECT: 'database_dialect' # one of 'mysql' | 'mariadb' | 'postgres' | 'mssql'
    DB_NAME: 'database_name'
    DB_USERNAME: 'database_username'
    DB_PASSWORD: 'database_password'
    DB_HOST: 'database_host'
    DB_PORT: 'database_port'

or by using DB_CONNECTION_URL:

provider:
  environment:
    DB_CONNECTION_URL: database_dialect://database_username:database_password@database_host:database_port/database_name

Replace the variables with the information of your own database.

Note: this plugin does not support creating the database itself.

As per Sequelize docs, you'll have to manually install the driver for your database of choice:

# One of the following:
$ npm install --save pg pg-hstore # Postgres
$ npm install --save mysql2 # MySQL
$ npm install --save mariadb # MariaDB
$ npm install --save tedious # Microsoft SQL Server

Usage and command line options

To see the available commands of the plugin, run sls migrations on the terminal. The following should appear:

Plugin: SequelizeMigrations
migrations .................... Sequelize migrations management for Serverless
migrations create ............. Create a migration file
migrations up ................. Execute all pending migrations
migrations down ............... Rolls back one or more migrations
migrations reset .............. Rolls back all migrations
migrations list ............... Shows a list of migrations
    --path / -p ........................ Specify the migrations path (default is './migrations')
    --verbose / -v ..................... Shows sequelize logs

For any of these commands, you can specify two parameters:

  • --path to inform the path of migrations on your project.
  • --verbose to show sequelize execution logs during the operations.

In order to see the options of each command individually, you can run sls migrations <command> --help on the terminal.

The commands that accept options, and their respective options, are presented below:

  • migrations create
--name / -n (required) ... Specify the name of the migration to be created
  • migrations up
--rollback / -r .......... Rolls back applied migrations in case of error (default is false)
--dbDialect ........................ Specify the database dialect (one of: 'mysql', 'mariadb', 'postgres', 'mssql')
--dbHost ........................... Specify the database host
--dbPort ........................... Specify the database port
--dbName ........................... Specify the database name
--dbUsername ....................... Specify the database username
--dbPassword ....................... Specify the database password
  • migrations down
--times / -t ............. Specify how many times to roll back (default is 1)
--name / -n .............. Specify the name of the migration to be rolled back (e.g. "--name create-users.js")
--dbDialect ........................ Specify the database dialect (one of: 'mysql', 'mariadb', 'postgres', 'mssql')
--dbHost ........................... Specify the database host
--dbPort ........................... Specify the database port
--dbName ........................... Specify the database name
--dbUsername ....................... Specify the database username
--dbPassword ....................... Specify the database password
  • migrations list
--status / -s ............ Specify the status of migrations to be listed (--status pending [default] or --status executed)
--dbDialect ........................ Specify the database dialect (one of: 'mysql', 'mariadb', 'postgres', 'mssql')
--dbHost ........................... Specify the database host
--dbPort ........................... Specify the database port
--dbName ........................... Specify the database name
--dbUsername ....................... Specify the database username
--dbPassword ....................... Specify the database password
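
For reference, running sls migrations create --name create-users generates a timestamped migration file in the migrations path. A minimal sketch of the standard Sequelize migration skeleton such a file typically contains (the exact template produced by the plugin may differ):

"use strict";

module.exports = {
  // Applied by `sls migrations up`
  up: async (queryInterface, Sequelize) => {
    await queryInterface.createTable("users", {
      id: { type: Sequelize.INTEGER, autoIncrement: true, primaryKey: true },
      name: { type: Sequelize.STRING, allowNull: false },
      createdAt: { type: Sequelize.DATE, allowNull: false },
      updatedAt: { type: Sequelize.DATE, allowNull: false },
    });
  },

  // Reverted by `sls migrations down`
  down: async (queryInterface) => {
    await queryInterface.dropTable("users");
  },
};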

Custom migrations path

You can also define a migrations path variable in the custom section of your service file.

custom:
  migrationsPath: './custom/migrations/path'

Important: if you pass the --path option on the CLI, this configuration will be ignored.

Credits and inspiration

This plugin was first based on Nevon's serverless-pg-migrations.

Author: Manelferreira
Source Code: https://github.com/manelferreira/serverless-sequelize-migrations 
License: MIT license

#serverless #sequelize 


⚑️ Serverless Sentry Plugin

About

This Serverless plugin simplifies the integration of Sentry with the popular Serverless Framework and AWS Lambda.

Currently, we support the Node.js 12, 14, and 16 runtimes on AWS Lambda. Other platforms can be added by providing a respective integration library. Pull Requests are welcome!

The serverless-sentry-plugin and serverless-sentry-lib libraries are not affiliated with Functional Software Inc., Sentry, Serverless, or Amazon Web Services, but are developed independently and in my spare time.

Benefits

  • Easy to use. Promised 🀞
  • Integrates with Serverless Framework as well as the AWS Serverless Application Model for AWS Lambda (though no use of any framework is required).
  • Wraps your Node.js code with Sentry error capturing.
  • Forwards any errors returned by your AWS Lambda function to Sentry.
  • Warns if your code is about to hit the execution timeout limit.
  • Warns if your Lambda function is low on memory.
  • Reports unhandled promise rejections.
  • Reports uncaught exceptions.
  • Serverless, Sentry, as well as this library are all Open Source. Yay! πŸŽ‰
  • TypeScript support

Overview

Sentry integration splits into two components:

  1. This plugin, which simplifies installation with the Serverless Framework
  2. The serverless-sentry-lib, which performs the runtime monitoring and error reporting.

For a detailed overview of how to use the serverless-sentry-lib refer to its README.md.

Installation

Install the @sentry/node module as a production dependency (so it gets packaged together with your source code):

npm install --save @sentry/node

Install the serverless-sentry-lib as a production dependency as well:

npm install --save serverless-sentry-lib

Install this plugin as a development dependency (you don't want to package it with your release artifacts):

npm install --save-dev serverless-sentry

Check out the examples below on how to integrate it with your project by updating serverless.yml as well as your Lambda handler code.

Usage

The Serverless Sentry Plugin allows configuration of the library through the serverless.yml and will create release and deployment information for you (if wanted). This is the recommended way of using the serverless-sentry-lib library.

▢️ Step 1: Load the Plugin

The plugin determines your environment during deployment and adds the SENTRY_DSN environment variable to your Lambda functions. All you need to do is to load the plugin and set the dsn configuration option as follows:

service: my-serverless-project
provider:
  # ...
plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry

▢️ Step 2: Wrap Your Function Handler Code

The actual reporting to Sentry happens in platform-specific libraries. Currently, only Node.js and Python are supported.

Each library provides a withSentry helper that acts as a decorator around your original AWS Lambda handler code and is configured via this plugin or manually through environment variables.

For more details refer to the individual libraries' repositories:

Old, now unsupported libraries:

Node.js

For maximum flexibility, this library is implemented as a wrapper around your original AWS Lambda handler code (your handler.js or similar function). The withSentry higher-order function adds error and exception handling and takes care of configuring the Sentry client automatically.

withSentry is pre-configured to reasonable defaults and doesn't need any configuration. It will automatically load and configure @sentry/node which needs to be installed as a peer dependency.

Original Lambda Handler Code:

exports.handler = async function (event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
};

New Lambda Handler Code Using withSentry For Sentry Reporting

const withSentry = require("serverless-sentry-lib"); // This helper library

exports.handler = withSentry(async function (event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
});

ES6 Module: Original Lambda Handler Code:

export async function handler(event, context) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
}

ES6 Module: New Lambda Handler Code Using withSentry For Sentry Reporting

import withSentry from "serverless-sentry-lib"; // This helper library

export const handler = withSentry(async (event, context) => {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2));
  return context.logStreamName;
});

Once your Lambda handler code is wrapped in withSentry, it will be extended with automatic error reporting. Whenever your Lambda handler sets an error response, the error is forwarded to Sentry with additional context information. For more details about the different configuration options available refer to the serverless-sentry-lib documentation.
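
For example (a minimal sketch based on the wrapper behavior described above), any error thrown inside the wrapped handler is captured and forwarded to Sentry before the invocation fails:

const withSentry = require("serverless-sentry-lib");

exports.handler = withSentry(async (event) => {
  const body = JSON.parse(event.body || "{}");
  if (!body.userId) {
    // This error response is forwarded to Sentry with additional context information
    throw new Error("Missing userId in request body");
  }
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
});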

Plugin Configuration Options

Configure the Sentry plugin using the following options in your serverless.yml:

  • dsn - Your Sentry project's DSN URL (required)
  • enabled - Specifies whether this SDK should activate and send events to Sentry (defaults to true)
  • environment - Explicitly set the Sentry environment (defaults to the Serverless stage)

Sentry API access

In order for some features such as releases and deployments to work, you need to grant API access to this plugin by setting the following options:

  • organization - Organization name
  • project - Project name
  • authToken - API authentication token with project:write access

πŸ‘‰ Important: You need to make sure you’re using Auth Tokens not API Keys, which are deprecated.

Releases

Releases are used by Sentry to provide you with additional context when determining the cause of an issue. The plugin can automatically create releases for you and tag all messages accordingly. To find out more about releases in Sentry, refer to the official documentation.

In order to enable release tagging, you need to set the release option in your serverless.yml:

custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    organization: my-sentry-organization
    project: my-sentry-project
    authToken: my-sentry-api-key
    release:
      version: <RELEASE VERSION>
      refs:
        - repository: <REPOSITORY NAME>
          commit: <COMMIT HASH>
          previousCommit: <COMMIT HASH>

version - Set the release version used in Sentry. Use any of the below values:

  • git - Uses the current git commit hash or tag as release identifier.
  • random - Generates a random release during deployment.
  • true - First tries to determine the release via git and falls back to random if Git is not available.
  • false - Disable release versioning.
  • any fixed string - Use a fixed string for the release. Serverless variables are allowed.

refs - If you have set up Sentry to collect commit data, you can use commit refs to associate your commits with your Sentry releases. Refer to the Sentry Documentation for details about how to use commit refs. If you set your version to git (or true), the refs options are populated automatically and don't need to be set.

πŸ‘‰ Tip: If you get an error like {"refs":["Invalid repository names: xxxxx/yyyyyyy"]}, your repository provider is not supported by Sentry (currently only GitHub or GitLab with Sentry Integrations). In that case you have the following options:

  1. set refs: false; this will not automatically populate the refs, but it also discards your commit ID as the version
  2. set refs: true and version: true to populate the version with the commit short id

If you don't specify any refs, you can also use the short notation for release and simply set it to the desired release version as follows:

custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    release: <RELEASE VERSION>

If you don't need or want the plugin to create releases and deployments, you can omit the authToken, organization and project options. Messages and exceptions sent by your Lambda functions will still be tagged with the release version and show up grouped in Sentry nonetheless.

πŸ‘‰ Pro Tip: The possibility to use a fixed string in combination with Serverless variables allows you to inject your release version through the command line, e.g. when running on your continuous integration machine.

custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz
    organization: my-sentry-organization
    project: my-sentry-project
    authToken: my-sentry-api-key
    release:
      version: ${opt:sentryVersion}
      refs:
        - repository: ${opt:sentryRepository}
          commit: ${opt:sentryCommit}

And then deploy your project using the command-line options from above:

sls deploy --sentryVersion 1.0.0 --sentryRepository foo/bar --sentryCommit 2da95dfb

πŸ‘‰ Tip when using Sentry with multiple projects: Releases in Sentry are specific to the organization and can span multiple projects. Take this into consideration when choosing a version name. If your version applies to the current project only, you should prefix it with your project name.

If no option for release is provided, releases and deployments are disabled.

Source Maps

Source map files can be uploaded to Sentry so that stack traces show your original source files rather than the compiled versions. This only uploads source maps your build already outputs; you'll need to configure your bundling tool separately. You'll also need to have releases configured (see above).

Default options:

custom:
  sentry:
    sourceMaps: true

Add a custom prefix (required if your app is not at the filesystem root):

custom:
  sentry:
    sourceMaps:
      urlPrefix: /var/task

Enabling and Disabling Error Reporting Features

In addition, you can configure the Sentry error reporting on a service as well as a per-function level. For more details about the individual configuration options see the serverless-sentry-lib documentation.

  • autoBreadcrumbs - Automatically create breadcrumbs (see Sentry Raven docs, defaults to true)
  • filterLocal - Don't report errors from local environments (defaults to true)
  • captureErrors - Capture Lambda errors (defaults to true)
  • captureUnhandledRejections - Capture unhandled Promise rejections (defaults to true)
  • captureUncaughtException - Capture unhandled exceptions (defaults to true)
  • captureMemoryWarnings - Monitor memory usage (defaults to true)
  • captureTimeoutWarnings - Monitor execution timeouts (defaults to true)

Example Configuration

# serverless.yml
service: my-serverless-project
provider:
  # ...
plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
    captureTimeoutWarnings: false # disable timeout warnings globally for all functions
functions:
  FuncFoo:
    handler: Foo.handler
    description: Hello World
    sentry:
      captureErrors: false # Disable error capturing for this specific function only
      captureTimeoutWarnings: true # Turn timeout warnings back on
  FuncBar:
    handler: Bar.handler
    sentry: false # completely turn off Sentry reporting

Example: Configuring Sentry based on stage

In some cases, it might be desirable to use a different Sentry configuration depending on the currently deployed stage. To make this work we can use a built-in Serverless variable resolution trick:

# serverless.yml
plugins:
  - serverless-sentry
custom:
  config:
    default:
      sentryDsn: ""
    prod:
      sentryDsn: "https://xxxx:yyyy@sentry.io/zzzz" # URL provided by Sentry

  sentry:
    dsn: ${self:custom.config.${self:provider.stage}.sentryDsn, self:custom.config.default.sentryDsn}
    captureTimeoutWarnings: false # disable timeout warnings globally for all functions

Troubleshooting

No errors are reported in Sentry

Double-check the DSN settings in your serverless.yml and compare them with what Sentry shows you in your project settings under "Client Keys (DSN)". You need a URL in the following format - see the Sentry Quick Start:

{PROTOCOL}://{PUBLIC_KEY}:{SECRET_KEY}@{HOST}/{PATH}{PROJECT_ID}

Also, make sure to add the plugin to your plugins list in the serverless.yml:

plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry

The plugin doesn't create any releases or deployments

Make sure to set the authToken, organization as well as project options in your serverless.yml, and set release to a non-empty value as shown in the example below:

plugins:
  - serverless-sentry
custom:
  sentry:
    dsn: https://xxxx:yyyy@sentry.io/zzzz # URL provided by Sentry
    organization: my-sentry-organization
    project: my-sentry-project
    authToken: my-sentry-api-key
    release: git

I'm testing my Sentry integration locally but no errors or messages are reported

Check out the filterLocal configuration setting. If you test Sentry locally and want to make sure your messages are sent, set this flag to false. Once done testing, don't forget to switch it back to true; otherwise you'll spam your Sentry projects with meaningless errors from local code changes.

Version History

2.5.3

  • Increased number of parallel uploads of source map artifacts for better performance.

2.5.2

  • Upload multiple source map artifacts in parallel for better performance.

2.5.1

  • Fix #63: Upload source maps serially to avoid running out of sockets.
  • Correctly disable uploading source maps if the config setting is false or unset.

2.5.0

  • Added support for uploading Source Maps to Sentry. Thanks to jonmast for the contribution.
  • Fixed an issue in the configuration validation. Thanks to DonaldoLog for the fix.
  • Fixed an issue if using
  • Updated dependencies.

2.4.0

  • Explicitly check for enabled flag. Thanks to aaronbannin for the contribution.
  • Explicit peer dependency on Serverless
  • Updated dependencies minor versions; locked TypeScript to 4.4 for now

2.3.0

  • Added configuration validation. Serverless will now warn if you pass an invalid configuration value in custom.sentry.

2.2.0

  • Added captureUncaughtException configuration option. This already exists in serverless-sentry-lib but was never exposed in the plugin.
  • Don't fail if SENTRY_DSN is not set but simply disable Sentry integration.

2.1.0

  • Support for deploying individual functions only (sls deploy -f MyFunction). Thanks to dominik-meissner!
  • Improved documentation. Thanks to aheissenberger
  • Updated dependencies.

2.0.2

  • Fixed custom release names not being set. Thanks to botond-veress!

2.0.1

  • Fixed error when creating new Sentry releases. Thanks to dryror!

2.0.0

  • This version of serverless-sentry-plugin requires the use of serverless-sentry-lib v2.x.x
  • Rewrite using TypeScript. The use of TypeScript in your project is fully optional, but if you do, we got you covered!
  • Added new default uncaught exception handler.
  • Dropped support for Node.js 6 and 8. The only supported versions are Node.js 10 and 12.
  • Upgrade from Sentry SDK raven to the Unified Node.js SDK @sentry/node.
  • Simplified integration using withSentry higher-order function. Passing the Sentry instance is now optional.
  • Thank you @aheissenberger and @Vadorequest for their contributions to this release! πŸ€—

1.2.0

  • Fixed a compatibility issue with Serverless 1.28.0.

1.1.1

  • Support for sls invoke local. Thanks to sifrenette for his contribution.

1.1.0

  • ⚠️ Dropped support for Node 4.3. AWS deprecates Node 4.3 starting July 31, 2018.
  • Pair with serverless-sentry-lib v1.1.x.

1.0.0

  • Version falls back to git hash if no tag is set for the current head (#15).
  • Fixed reporting bugs in the local environment despite config telling otherwise (#17). This requires an update of serverless-sentry-lib as well!

1.0.0-rc.4

  • Fixed an issue with creating random version numbers

1.0.0-rc.3

  • Allow disabling Sentry for specific functions by setting sentry: false in the serverless.yml.
  • Added support for the Serverless Offline Plugin.

1.0.0-rc.2

  • Fixed an issue with the plugin not being initialized properly when deploying an existing artifact.

1.0.0-rc.1

To-Dos

  •  Bring back automatic instrumentation of the Lambda code during packaging
  •  Provide CLI commands to create releases and perform other operations in Sentry
  •  Ensure all exceptions and messages have been sent to Sentry before returning; see #338.

Support

Thank you for supporting me and my projects.

Author: Arabold
Source Code: https://github.com/arabold/serverless-sentry-plugin 
License: MIT license

#serverless #aws #sentry #lambda 


Serverless Select Plugin

Select which functions are to be deployed based on region and stage.

Note: Requires Serverless v1.12.x or higher.

Setup

  • Install via npm in the root of your Serverless service:
npm install serverless-plugin-select --save-dev
  • Add the plugin to the plugins array in your Serverless serverless.yml, placing it at the top of the list:
plugins:
  - serverless-plugin-select
  - ...
  • Add regions or stages to your functions to select them for deployment.
  • Run the deploy command: sls deploy --stage [STAGE NAME] --region [REGION NAME] or sls deploy function --stage [STAGE NAME] --region [REGION NAME] --function [FUNCTION NAME]
  • Functions will be deployed based on your selection.
  • All done!

Function

How it works? When the deployment region or stage doesn't match a function's regions or stages, that function is removed from the deployment.

  • regions - Function accepted deployment regions.
functions:
  hello:
    regions:
      - eu-west-1
      - ...
  • stages - Function accepted deployment stages.
functions:
  hello:
    stages:
      - dev
      - ...

Contribute

Help us make this plugin better and future-proof.

  • Clone the code
  • Install the dependencies with npm install
  • Create a feature branch git checkout -b new_feature
  • Lint with standard npm run lint

Author: FidelLimited
Source Code: https://github.com/FidelLimited/serverless-plugin-select 
License: MIT license

#serverless #plugin #aws #nodejs 


Serverless Secret Baker

Serverless Secret Baker is a Serverless Framework Plugin for secure, performant, and deterministic secret management using AWS Systems Manager Parameter Store and AWS KMS.  


How it works

AWS Systems Manager Parameter Store is responsible for storing and managing your versioned secret values. You can create and update your secrets in Parameter Store using your own workflow via the AWS Console or the AWS CLI. When uploading secrets, Parameter Store will use KMS to perform the actual encryption of the secret and store the resulting ciphertext. It is important to choose a customer managed KMS CMK (customer managed key) rather than an AWS managed KMS CMK in this step in order to have the flexibility to decrypt the secrets at runtime, as we'll see later.

Serverless Secret Baker is responsible for automatically retrieving the ciphertext stored in Parameter Store and storing it in a well-known file in your bundled application during serverless deploy. Neither Serverless Secret Baker nor the Serverless Framework ever sees the decrypted secret values.

Runtime Code Snippet for KMS Decryption is responsible for reading the ciphertext from the well-known file and decrypting it via KMS APIs for use in the application. Serverless Secret Baker provides sample code snippets in both Python and Node for performing this operation. Only Lambda functions with an IAM role that enables decryption via the specified KMS CMK will be able to decrypt the secrets.

Why all this fuss?

There are many solutions for secret management with AWS Lambda. Unfortunately, a lot of the solutions unnecessarily expose the secrets in plain text, incur latency by invoking an API call to decrypt a secret with every lambda invocation, or require potentially complex cache invalidation for when a secret is rotated.

Here are some common patterns to secret management and their downsides:

  1. Use Lambda Environment Variables: The plaintext value of the secret is exposed insecurely within the CloudFormation template. AWS explicitly recommends not storing sensitive information in Lambda Environment Variables, as it is not encrypted in transit during deploy time.
  2. Use the Serverless Framework's built-in support for AWS Parameter Store: Using the built-in syntax ${ssm:/path/to/secret~true} retrieves the plaintext secret at packaging time and stores it in an environment variable. This has the same downsides as 1).
  3. Use AWS Parameter Store or AWS Secrets Manager at Runtime: Requires either retrieving the secret via API at every invocation of the Lambda (latency) or retrieving it once and caching the secret in the Lambda global scope. If caching the secret in global scope, a cache invalidation strategy is needed to refresh the secret when it is updated in Parameter Store / Secrets Manager, to prevent Lambdas from using old, potentially invalid secrets.

This plugin addresses these concerns by focusing on:

  1. Security: Secrets should always be encrypted at rest and in transit. The secrets are stored in Parameter Store using a custom KMS CMK. The only time it is decrypted is at lambda invocation.
  2. Performance: Minimize external dependencies and API calls. The secrets are retrieved directly from KMS. There is no runtime dependency on Parameter Store or Secrets Manager. In addition, the secret can be cached in the Lambda global scope so only a single API call per warmed up lambda is needed.
  3. Deterministic State: Complex cache invalidation strategies are not needed. Because the ciphertext is bundled with the Lambda at deploy time, the secrets can be modified at the source in AWS Parameter Store without affecting the runtime state. In order to apply the new secrets, a new deployment of the Lambdas is required, allowing it to go through a CI/CD pipeline to catch any potential errors with secrets and to ensure that all the Lambdas get the new secret at the same time.

Step by step

  1. Create a symmetric customer managed KMS CMK
  2. Upload secrets as "SecureString" to SSM Parameter Store via the AWS Console or AWS CLI, specifying the customer managed CMK created in step 1
  3. Install this plugin via serverless plugin install --name serverless-secret-baker
  4. Add to your serverless.yml the following to specify which secrets to retrieve from parameter store:
custom:
  secretBaker:
    - MY_SECRET

The plugin will create a json file called secret-baker-secrets.json with all the secrets and include it in your application during packaging. In the above example the ciphertext and ARN of the AWS Parameter Store parameter located at MY_SECRET will be stored in the file under the key MY_SECRET.

See example code in examples folder for reference.

  5. Ensure your Lambda has permission to decrypt the secret at runtime using the CMK. Example:
iamRoleStatements:
  - Effect: Allow
    Action:
      - kms:Decrypt
    Resource:
      - # REPLACE with ARN for your CMK
  6. Add a code snippet in your application to decrypt the secret:
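
A minimal Node.js sketch of such a snippet (not the plugin's official example; it assumes the AWS SDK v2 KMS client and that each entry in secret-baker-secrets.json exposes the base64-encoded ciphertext under a ciphertext key):

const { KMS } = require("aws-sdk");
const secrets = require("./secret-baker-secrets.json"); // bundled by the plugin during packaging

const kms = new KMS();
let cachedSecret; // cached in the Lambda global scope, so only one KMS call per warm container

async function getMySecret() {
  if (!cachedSecret) {
    const { Plaintext } = await kms
      .decrypt({ CiphertextBlob: Buffer.from(secrets.MY_SECRET.ciphertext, "base64") })
      .promise();
    cachedSecret = Plaintext.toString("utf-8");
  }
  return cachedSecret;
}

module.exports = { getMySecret };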

Advanced Configuration

If you would like to name your secrets something different than the path in Parameter Store you can specify a name and path in the configuration like so:

custom:
  secretBaker:
    # Retrieves the latest encrypted secret at the given parameter store path
    MY_SECRET: /path/to/ssm/secret

You can also pin your secrets to specific versions in Parameter Store to have a deterministic secret value:

custom:
  secretBaker:
    # Retrieves the version 2 encrypted secret at the given parameter store path 
    MY_SECRET: /path/to/ssm/secret:2

Alternate syntax explicitly defining name and path is also supported:

custom:
  secretBaker:
    - name: CUSTOM_SECRET
      path: a/custom/secret/path 

This allows you to mix styles:

custom:
  secretBaker:
    - MY_SECRET
    - MY_OTHER_SECRET
    - name: CUSTOM_SECRET
      path: a/custom/secret/path 

Preserve the encrypted secrets file

The secrets file, secret-baker-secrets.json, is automatically generated at the start of every serverless deploy, serverless package, serverless invoke local, and serverless offline command. By default, the secrets file will also be automatically removed upon command completion so as not to leave it in your source directory. If you'd like to preserve the secrets file, pass in the CLI option --no-secret-baker-cleanup.

Author: Vacasaoss
Source Code: https://github.com/vacasaoss/serverless-secret-baker 
License: MIT license

#serverless #framework #plugin 


Serverless Plugin Scripts

serverless-plugin-scripts 

Add scripting capabilities to the Serverless Framework.

Caution

This project is in maintenance mode, and it will not get any new features.

Installation

Install the plugin in your Serverless (v1.0 or higher) project:

npm install --save serverless-plugin-scripts

And activate it by adding the following configuration to your serverless.yml file:

plugins:
  - serverless-plugin-scripts

Usage

Custom commands

To add a custom command to the Serverless CLI, just define a custom.scripts.commands property in your serverless.yml file:

custom:
  scripts:
    commands:
      hello: echo Hello from ${self:service} service!

You can now run serverless hello to execute the hello command.

Simple hooks

It is possible to define simple hooks for existing Serverless CLI commands by adding a custom.scripts.hooks property in your serverless.yml file:

custom:
  scripts:
    hooks:
      'deploy:createDeploymentArtifacts': npm run compile

The next time you run serverless deploy, your script will be automatically invoked during the deploy:createDeploymentArtifacts lifecycle event.

To find out about existing lifecycle events, check out this page.

Author: Mvila
Source Code: https://github.com/mvila/serverless-plugin-scripts 
License: MIT

#serverless #plugin #script 


Serverless Scriptable Plugin

What's this plugin for?

This plugin allows you to write scripts to customize Serverless behavior, for Serverless 1.x and above.

It also supports running Node.js scripts in any build stage.

Features:

  • Run any command or Node.js script in any stage of the Serverless lifecycle
  • Add custom commands to Serverless, e.g. npx serverless YOUR-COMMAND

Quick Start

  1. Install
npm install --save-dev serverless-scriptable-plugin
  2. Add to Serverless config
plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    # add custom hooks
    hooks:
      before:package:createDeploymentArtifacts: npm run build

    # or custom commands
    commands:
      migrate: echo Running migration

Upgrade from <=1.1.0

This serverless-scriptable-plugin now supports event hooks and custom commands. Here's an example of upgrading to the latest schema. The previous config schema still works for backward compatibility.

Example that using the previous schema:

plugins:
  - serverless-scriptable-plugin

custom:
  scriptHooks:
    before:package:createDeploymentArtifacts: npm run build

Changed to:

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    hooks:
      before:package:createDeploymentArtifacts: npm run build

Examples

Customize package behavior

The following config uses Babel for transcompilation and packages only the required folders: dist and node_modules, without aws-sdk

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    hooks:
      before:package:createDeploymentArtifacts: npm run build

package:
  exclude:
    - '**/**'
    - '!dist/**'
    - '!node_modules/**'
    - node_modules/aws-sdk/**

Add a custom command

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    hooks:
      before:migrate:command: echo before migrating
      after:migrate:command: echo after migrating
    commands:
      migrate: echo Running migration

Then you can run this command with:

$ npx serverless migrate
Running command: echo before migrating
before migrating
Running command: echo Running migration
Running migration
Running command: echo after migrating
after migrating

Deploy python function

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
      hooks:
        before:package:createDeploymentArtifacts: ./package.sh

# serverless will use the package generated by `./package.sh`
package:
  artifact: .serverless/package.zip

and a package.sh script file to build the zip file (see https://docs.aws.amazon.com/lambda/latest/dg/python-package.html):

  PACKAGE_FILE=.serverless/package.zip
  rm -f $PACKAGE_FILE && rm -rf output && mkdir -p output
  pip install -r requirements.txt --target output/libs
  # You can use the following command to install if you are using pipenv
  # pipenv requirements > output/requirements.txt && pip install -r output/requirements.txt --target output/libs
  (cd output/libs && zip -r ../../$PACKAGE_FILE . -x '*__pycache__*')
  (zip -r $PACKAGE_FILE your-src-folder -x '*__pycache__*')

Serverless will then deploy the zip file you built to AWS Lambda.

Run any command as a hook script

It's possible to run any command as the hook script, e.g. use the following command to zip the required folders

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    hooks:
      before:package:createDeploymentArtifacts: zip -q -r .serverless/package.zip src node_modules

service: service-name
package:
  artifact: .serverless/package.zip

Dynamically change resources

Create a CloudWatch Logs subscription filter for all Lambda function log groups, e.g. to subscribe them to a Kinesis stream:

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    hooks:
      after:package:compileEvents: build/serverless/add-log-subscriptions.js

provider:
  logSubscriptionDestinationArn: 'arn:aws:logs:ap-southeast-2:{account-id}:destination:'

and in build/serverless/add-log-subscriptions.js file:

const resources = serverless.service.provider.compiledCloudFormationTemplate.Resources;
const logSubscriptionDestinationArn = serverless.service.provider.logSubscriptionDestinationArn;

Object.keys(resources)
  .filter(name => resources[name].Type === 'AWS::Logs::LogGroup')
  .forEach(logGroupName => resources[`${logGroupName}Subscription`] = {
      Type: "AWS::Logs::SubscriptionFilter",
      Properties: {
        DestinationArn: logSubscriptionDestinationArn,
        FilterPattern: ".",
        LogGroupName: { "Ref": logGroupName }
      }
    }
  );

Run multiple commands

It's possible to run multiple commands for the same serverless event, e.g. Add CloudWatch log subscription and dynamodb auto scaling support

plugins:
  - serverless-scriptable-plugin

custom:
  scriptable:
    hooks:
      after:package:createDeploymentArtifacts: 
        - build/serverless/add-log-subscriptions.js
        - build/serverless/add-dynamodb-auto-scaling.js

service: service-name
package:
  artifact: .serverless/package.zip
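
The add-dynamodb-auto-scaling.js script referenced above is not shown in this README. A rough sketch of what it could look like, following the same pattern as add-log-subscriptions.js (resource names, capacity values, and the dynamodbAutoScalingRoleArn provider property are illustrative assumptions):

const resources = serverless.service.provider.compiledCloudFormationTemplate.Resources;

Object.keys(resources)
  .filter(name => resources[name].Type === 'AWS::DynamoDB::Table')
  .forEach(tableName => {
    // Register the table's read capacity as an Application Auto Scaling target
    resources[`${tableName}ReadScalableTarget`] = {
      Type: 'AWS::ApplicationAutoScaling::ScalableTarget',
      Properties: {
        ServiceNamespace: 'dynamodb',
        ScalableDimension: 'dynamodb:table:ReadCapacityUnits',
        ResourceId: { 'Fn::Sub': ['table/${Table}', { Table: { Ref: tableName } }] },
        MinCapacity: 1,   // illustrative values
        MaxCapacity: 100,
        RoleARN: serverless.service.provider.dynamodbAutoScalingRoleArn, // assumed custom provider property
      },
    };
    // Attach a target-tracking policy that keeps read utilization around 70%
    resources[`${tableName}ReadScalingPolicy`] = {
      Type: 'AWS::ApplicationAutoScaling::ScalingPolicy',
      Properties: {
        PolicyName: `${tableName}ReadAutoScaling`,
        PolicyType: 'TargetTrackingScaling',
        ScalingTargetId: { Ref: `${tableName}ReadScalableTarget` },
        TargetTrackingScalingPolicyConfiguration: {
          TargetValue: 70,
          PredefinedMetricSpecification: { PredefinedMetricType: 'DynamoDBReadCapacityUtilization' },
        },
      },
    };
  });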

Suppress console output

You can control what is shown while running commands, in case there is sensitive info in the command or the console output.

custom:
  scriptable:
    showStdoutOutput: false # Default true. true: output stdout to console, false: output nothing
    showStderrOutput: false # Default true. true: output stderr to console, false: output nothing
    showCommands: false # Default true. true: show the command before execute, false: do not show commands

    hooks:
      ...
    commands:
      ...

Hooks

The Serverless lifecycle hooks differ between providers; here's a reference for AWS hooks: https://gist.github.com/HyperBrain/50d38027a8f57778d5b0f135d80ea406#file-lifecycle-cheat-sheet-md

Change Log

Version 0.8.0 and above

Version 0.7.1

  • [Feature] Fix vulnerability warning by removing unnecessary dev dependencies

Version 0.7.0

  • [Feature] Return a promise object to let Serverless wait until the script is finished

Version 0.6.0

  • [Feature] Support executing multiple scripts/commands for the same Serverless event

Version 0.5.0

  • [Feature] Supported serverless variables in script/command
  • [Improvement] Integrated with codeclimate for code analysis and test coverage

Version 0.4.0

  • [Feature] Supported colored output in script/command
  • [Improvement] Integrated with travis for CI

Version 0.3.0

  • [Feature] Support executing any command for a Serverless event

Version 0.2.0

  • [Feature] Support executing a JavaScript file for a Serverless event

Author: weixu365
Source Code: https://github.com/weixu365/serverless-scriptable-plugin 
License: MIT license

#serverless #plugin 


Serverless Framework: Deploy on Scaleway Functions

The Scaleway functions plugin for Serverless Framework allows users to deploy their functions and containers to Scaleway Functions with a simple serverless deploy.

Serverless Framework handles everything from creating namespaces to function/code deployment by calling API endpoints under the hood.

Requirements

  • Install node.js
  • Install Serverless CLI (npm install serverless -g)

Let's work in ~/my-srvless-projects

# mkdir ~/my-srvless-projects
# cd ~/my-srvless-projects

Create a Project

The easiest way to create a project is to use one of our templates. The list of templates is here

Let's use python3

serverless create --template-url https://github.com/scaleway/serverless-scaleway-functions/tree/master/examples/python3 --path myService

Once it's done, we can install the node packages used by serverless:

cd mypython3functions
npm i

Note: these packages are only used by serverless, they are not shipped with your functions.

Configure your functions

Your functions are defined in the serverless.yml file created:

service: scaleway-python3
configValidationMode: off

useDotenv: true

provider:
  name: scaleway
  runtime: python310
  # Global Environment variables - used in every functions
  env:
    test: test
  # Storing credentials in this file is strongly discouraged for security reasons; please refer to README.md about best practices
  scwToken: <scw-token>
  scwProject: <scw-project-id>
  # region in which the deployment will happen (default: fr-par)
  scwRegion: <scw-region>

plugins:
  - serverless-scaleway-functions
  
package:
  patterns:
    - '!node_modules/**'
    - '!.gitignore'
    - '!.git/**'

functions:
  first:
    handler: handler.py
    # Local environment variables - used only in given function
    env:
      local: local

Note: provider.name and plugins MUST NOT be changed, they enable us to use the scaleway provider

This file contains the configuration of one namespace containing one or more functions (in this example, only one) of the same runtime (here python3)

The different parameters are:

  • service: your namespace name
  • useDotenv: Load environment variables from .env files (default: false), read Security and secret management
  • configValidationMode: Configuration validation: 'error' (fatal error), 'warn' (logged to the output) or 'off' (default: warn)
  • provider.runtime: the runtime of your functions (check the supported runtimes above)
  • provider.env: environment variables attached to your namespace are injected to all your namespace functions
  • provider.secret: secret environment variables attached to your namespace are injected to all your namespace functions, see this example project
  • scwToken: Scaleway token you got in prerequisites
  • scwProject: Scaleway org id you got in prerequisites
  • scwRegion: Scaleway region in which the deployment will take place (default: fr-par)
  • package.patterns: usually, you don't need to configure it. Enable to include/exclude directories to/from the deployment
  • functions: Configuration of your functions. It's a YAML dictionary, with the key being the function name
    • handler (Required): file or function which will be executed. See the next section for runtime specific handlers
    • env (Optional): environment variables specific for the current function
    • secret (Optional): secret environment variables specific for the current function, see this example project
    • minScale (Optional): how many function instances we keep running (default: 0)
    • maxScale (Optional): maximum number of instances this function can scale to (default: 20)
    • memoryLimit: RAM allocated to the function instances. See the introduction for the list of supported values
    • timeout: the maximum duration in seconds that the request will wait to be served before it times out (default: 300 seconds)
    • runtime: (Optional) runtime of the function, if you need to deploy multiple functions with different runtimes in your Serverless Project. If absent, provider.runtime will be used to deploy the function, see this example project.
    • events (Optional): List of events to trigger your functions (e.g, trigger a function based on a schedule with CRONJobs). See events section below
    • custom_domains (Optional): List of custom domains, refer to Custom Domain Documentation

Security and secret management

Your configuration file may contain sensitive data; your project ID and your token must not be shared and must not be committed to VCS.

To keep your information safe and be able to share or commit your serverless.yml file, you should remove your credentials from the file. Then you can:

  • use global environment variables
  • use .env file and keep it secret

To use a .env file, you can modify your serverless.yml file as follows:

# This will allow the plugin to read your .env file
useDotenv: true

provider:
  name: scaleway
  runtime: node16

  scwToken: ${env:SCW_SECRET_KEY}
  scwProject: ${env:SCW_DEFAULT_PROJECT_ID}
  scwRegion: ${env:SCW_REGION}

Then create a .env file next to your serverless.yml file, containing the following values:

SCW_SECRET_KEY=XXX
SCW_DEFAULT_PROJECT_ID=XXX
SCW_REGION=fr-par

You can use this pattern to hide your secrets (for example, a connection string to a database or an S3 bucket).

Functions Handler

Based on the chosen runtime, the handler variable of a function might vary.

Using ES Modules

Node has two module systems: CommonJS modules and ECMAScript (ES) modules. By default, Node treats your code files as CommonJS modules, however ES modules have also been available since the release of node16 runtime on Scaleway Serverless Functions. ES modules give you a more modern way to re-use your code.

According to the official documentation, to use ES modules you can specify the module type in package.json, as in the following example:

  ...
  "type": "module",
  ...

This then enables you to write your code for ES modules:

export {handle};

function handle (event, context, cb) {
    return {
        body: process.version,
        statusCode: 200,
    };
};

The use of ES modules is encouraged, since they are more efficient and make setup and debugging much easier.

Note that using "type": "module" or "type": "commonjs" in your package.json will enable/disable some features in Node runtime. For a comprehensive list of differences, please refer to the official documentation, the following is a summary only:

  • commonjs is used as default value
  • commonjs allows you to use require/module.exports (synchronous code loading, it basically copies all file contents)
  • module allows you to use import/export ES6 instructions (asynchronous loading, more optimized as it imports only the pieces of code you need)

Node

Path to your handler file (from serverless.yml), omitting ./ and ../, plus the exported function to use as a handler:

- src
  - handlers
    - firstHandler.js  => module.exports.myFirstHandler = ...
    - secondHandler.js => module.exports.mySecondHandler = ...
- serverless.yml

In serverless.yml:

provider:
  # ...
  runtime: node16
functions:
  first:
    handler: src/handlers/firstHandler.myFirstHandler
  second:
    handler: src/handlers/secondHandler.mySecondHandler
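
As a minimal sketch (assuming the event/context/callback signature shown in the ES modules example above), src/handlers/firstHandler.js could look like:

// src/handlers/firstHandler.js
module.exports.myFirstHandler = (event, context, cb) => {
  // Return an HTTP-style response object
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from myFirstHandler" }),
  };
};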

Python

Similar to node, path to handler file src/testing/handler.py:

- src
  - handlers
    - firstHandler.py  => def my_first_handler
    - secondHandler.py => def my_second_handler
- serverless.yml

In serverless.yml:

provider:
  # ...
  runtime: python310 # or python37, python38, python39
functions:
  first:
    handler: src/handlers/firstHandler.my_first_handler
  second:
    handler: src/handlers/secondHandler.my_second_handler

Golang

Path to your handler's package, for example if I have the following structure:

- src
  - testing
    - handler.go -> package main in src/testing subdirectory
  - second
    - handler.go -> package main in src/second subdirectory
- serverless.yml
- handler.go -> package main at the root of project

Your serverless.yml functions should look something like this:

provider:
  # ...
  runtime: go118
functions:
  main:
    handler: "."
  testing:
    handler: src/testing
  second:
    handler: src/second

Events

With events, you may link your functions with specific triggers, which might include CRON Schedule (Time based), MQTT Queues (Publish on a topic will trigger the function), S3 Object update (Upload an object will trigger the function).

Note that we do not include HTTP triggers in our event types, as an HTTP endpoint is created for every function. Triggers are just another way to trigger your function, but you will always be able to execute your code via HTTP.

Here is a list of supported triggers on Scaleway Serverless, and the configuration parameters required to deploy them:

  • schedule: Trigger your function based on CRON schedules
    • rate: CRON Schedule (UNIX Format) on which your function will be executed
    • input: key-value mapping to define arguments that will be passed into your function's event object during execution.

To link a Trigger to your function, you may define a key events in your function:

functions:
  first:
    handler: myHandler.handle
    events:
      # "events" is a list of triggers, the first key being the type of trigger.
      - schedule:
          # CRON Job Schedule (UNIX Format)
          rate: '1 * * * *'
          # Input variables are passed in your function's event during execution
          input:
            key: value
            key2: value2

You may link events to your containers too (see the section Managing containers below for more information on how to deploy containers):

custom:
  containers:
    mycontainer:
      directory: my-directory
      # Events key
      events:
        - schedule:
            rate: '1 * * * *'
            input:
              key: value
              key2: value2

You may refer to the following examples:

Custom domains

Custom domains allow users to use their own domains.

For domain configuration, please refer to the Scaleway documentation.

Example of integration with the Serverless Framework:

functions:
  first:
    handler: handler.handle
    # Local environment variables - used only in given function
    env:
      local: local
    custom_domains:
      - func1.scaleway.com
      - func2.scaleway.com

Note: as your domain must have a record pointing to your function hostname, you should deploy your function once to read its hostname. Custom domain configurations will be available after the first deploy.

Note: Serverless Framework will consider the configuration file as the source of truth.

If you create a domain with other tools (Scaleway's Console, CLI or API), you must reference the created domain in your serverless configuration file. Otherwise it will be deleted, as the Serverless Framework gives priority to its own configuration.

Managing containers

Requirements: You need to have Docker installed to be able to build and push your image to your Scaleway registry.

You must define your containers inside the custom.containers field in your serverless.yml manifest. Each container must specify the relative path of its application directory (containing the Dockerfile, and all files related to the application to deploy):

custom:
  containers:
    mycontainer:
      directory: my-container-directory
      # port: 8080
      # Environment only available in this container 
      env:
        MY_VARIABLE: "my-value"

Here is an example of the files you should have; the directory containing your Dockerfile and scripts is my-container-directory.

.
β”œβ”€β”€ my-container-directory
β”‚   β”œβ”€β”€ Dockerfile
β”‚   β”œβ”€β”€ requirements.txt
β”‚   β”œβ”€β”€ server.py
β”‚   └── (...)
β”œβ”€β”€ node_modules
β”‚   β”œβ”€β”€ serverless-scaleway-functions
β”‚   └── (...)
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ package.json
└── serverless.yml

Scaleway's platform will automatically inject a PORT environment variable on which your server should be listening for incoming traffic. By default, this PORT is 8080. You may change the port in your serverless.yml.

You may use the container example to get started.

Logs

The serverless logs command lets you watch the logs of a specific function or container.

Pass the function or container name you want to fetch the logs for with --function:

serverless logs --function <function_or_container_name>

Info

The serverless info command gives you information about your current deployment state in JSON format.

Documentation and useful Links

Contributing

This plugin is mainly developed and maintained by the Scaleway Serverless Team, but you are free to open issues or discuss with us on our Community Slack channels #serverless-containers and #serverless-functions.

Author: Scaleway
Source Code: https://github.com/scaleway/serverless-scaleway-functions 
License: MIT license

#serverless #function #aws #lambda 


Serverless Framework Plugin to Export AWS SAM Templates for A Service

Serverless SAM 

Serverless-sam is a plugin for the Serverless framework that makes it easy to create Serverless Application Model (SAM) templates from an application. The plugin adds the sam command to the serverless cli.

Installation

From your Serverless application directory, use npm to install the plugin:

$ npm install --save-dev serverless-sam

Once you have installed the plugin, add it to your serverless.yml file in the plugins section.

service: my-serverless-service

plugins:
  - serverless-sam

frameworkVersion: ">=1.1.0 <2.0.0"
...

Usage

Use the sam export command to generate a SAM definition from your service. Use the --output or -o option to set the name for the SAM template file.

$ serverless sam export --output ./sam-template.yml

Once you have exported the template, you can follow the standard procedure with the AWS CLI to deploy the service. First, the package command reads the generated templates, uploads the packaged functions to an S3 bucket for deployment, and generates an output template with the S3 links to the function packages.

$ aws cloudformation package \
    --template-file /path_to_template/template.yaml \
    --s3-bucket bucket-name \
    --output-template-file packaged-template.yaml

The next step is to deploy the output template from the package command:

$ aws cloudformation deploy \
    --template-file /path_to_template/packaged-template.yaml \
    --stack-name my-new-stack \
    --capabilities CAPABILITY_IAM

Author: SAPessi
Source Code: https://github.com/SAPessi/serverless-sam 
License: Apache-2.0 license

#serverless #node #plugin #lambda 


Serverless Sagemaker Groundtruth

Serverless-sagemaker-groundtruth

This serverless plugin includes a set of utilities to implement custom workflows for AWS Sagemaker Groundtruth

Currently includes:

  • Serve a liquid template from a manifest file + pre-lambda, the same way it is done on AWS Sagemaker Groundtruth
  • Run end-to-end tests: pre-lambda -> labelling simulation -> post-lambda

Any pull request will be warmly welcomed!

Ideas for future implementation :

  • Create Tasks from serverless CLI
  • Test Chained tasks
  • Expose nodejs api to integrate with testing suites

Installation

npm install --save-dev serverless-sagemaker-groundtruth

Usage as a serverless plugin

Example serverless.yml

In order to use this module, you need to add a groundtruthTasks key into your serverless.yml file

...

plugins:
  - serverless-sagemaker-groundtruth

functions:
  pre-example: 
    handler: handler.pre
    name: pre
  post-example: 
    handler: handler.postObjectDetection
    name: post

groundtruthTasks:
  basic:
    pre: pre-example
    post: post-example
    template: app/templates/object-detection/basic.liquid.html

Serve a liquid template against a manifest file

serverless groundtruth serve \
  --groundtruthTask <groundtruthTask-name> \
  --manifest <s3-uri or local file> \
  --row <row index>

Test e2e behavior of sagemaker groundtruth workflow

The puppeteer module example

Here, we create a puppeteer module which draws random bounding boxes (using the hasard library):

const BbPromise = require('bluebird')
const h = require('hasard');

/**
* This function is binding a sequence of actions made by the user before submitting the form
* This is an example showing how to simulate a user's bounding box actions
* @param {Page} page puppeteer page instance see https://github.com/puppeteer/puppeteer
* This page is open and running in the annotation page
* @param {Object} manifestRow the object from the manifest file row
* @param {Object} prelambdaOutput the output object from the prelambda result
* @returns {Promise} the promise is resolved once the user has done all needed actions on the form
*/

module.exports = function({
    page, 
    manifestRow,  
    workerId
}){
    
    // we draw 5 boxes for each worker
    const nBoxes = 5;
    
    // Cat and Dog
    const nCategories = 2;
    
    // Using the technique from https://github.com/puppeteer/puppeteer/issues/858#issuecomment-438540596 to select the node
    return page.evaluateHandle(`document.querySelector("body > crowd-form > form > crowd-bounding-box").shadowRoot.querySelector("#annotation-area-container > div > div > div")`)
        .then(imageCanvas => {
            return imageCanvas.boundingBox()
        }).then(boundingBox => {
            
            // define a random bounding box over the image canvas using the hasard library
            // see more examples at https://www.npmjs.com/package/hasard
            const width = h.reference(h.integer(0, Math.floor(boundingBox.width)));
            const height = h.reference(h.integer(0, Math.floor(boundingBox.height)));
            // top-left corner of the box, constrained so the box always stays inside the canvas
            const left = h.add(h.integer(0, h.substract(Math.floor(boundingBox.width), width)), Math.floor(boundingBox.x));
            const top = h.add(h.integer(0, h.substract(Math.floor(boundingBox.height), height)), Math.floor(boundingBox.y));

            const randomAnnotation = h.object({
                // box is [left, top, width, height] in page coordinates
                box: h.array([
                    left,
                    top,
                    width,
                    height
                ]),
                category: h.integer(0, nCategories - 1)
            });
            
            const workerAnnotations = randomAnnotation.run(nBoxes)
            
            return BbPromise.map(workerAnnotations, ({box, category}) => {
                return page.evaluateHandle(`document.querySelector("body > crowd-form > form > crowd-bounding-box").shadowRoot.querySelector("#react-mount-point > div > div > awsui-app-layout > div > div.awsui-app-layout__tools.awsui-app-layout--open > aside > div > span > div > div.label-pane-content > div:nth-child(${category+1})")`)
                    .then(categoryButton => categoryButton.click())
                    .then(() => page.mouse.move(box[0], box[1]))
                    .then(() => page.mouse.down())
                    .then(() => page.mouse.move(box[0]+box[2], box[1]+box[3]))
                    .then(() => page.mouse.up());
                
            }, {concurrency: 1})
        }).then(() => {
            console.log(`${workerId} actions simulation done on ${JSON.stringify(manifestRow)}`)
            // at the end we return nothing, serverless-sagemaker-groundtruth will automatically request the output from the page
        })
}

The end to end command

serverless groundtruth test e2e \
    --groundtruthTask <groundtruthTask-name> \
    --manifest <s3-uri or local file> \
    --puppeteerModule <path to the module> \
    --workerIds a,b,c
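
For example, using the basic task from above with the puppeteer module shown earlier (the module path is illustrative), simulating three workers a, b and c:

serverless groundtruth test e2e \
    --groundtruthTask basic \
    --manifest ./manifests/dataset.manifest.jsonl \
    --puppeteerModule ./puppeteer-module.js \
    --workerIds a,b,c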

Usage programmatically

You can use serverless-sagemaker-groundtruth functions in your Node.js code by requiring the library:

const gtLibs = require('serverless-sagemaker-groundtruth/lib')

endToEnd


/**
* @param {String} template path to the liquid template file
* @param {String} labelAttributeName labelAttributeName to use as output of the postLambda function
* @param {Object} manifestRow js object representing the manifest row
* @param {Function} preLambda js function to use as the pre-lambda function
* @param {Number} [port=3000] port to use to serve the web page
* @param {Function} postLambda js function to use as the post-lambda function
* @param {Array.<String>} workerIds ids of the workers to simulate
* @param {PuppeteerModule} puppeteerMod module that simulates the behavior of a worker
* @returns {Promise.<PostLambdaOutput>}
*/

return gtLibs.endToEnd({
    template,
    labelAttributeName, 
    manifestRow,
    preLambda,
    port,
    postLambda,
    workerIds,
    puppeteerMod
});
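
Putting it together, a minimal test script might look like the sketch below. It assumes your handler.js exports pre and postObjectDetection as in the serverless.yml example above; the label attribute name and manifest row are purely illustrative:

const gtLibs = require('serverless-sagemaker-groundtruth/lib');
const handler = require('./handler');
const puppeteerMod = require('./puppeteer-module'); // the module from the example above

gtLibs.endToEnd({
    template: 'app/templates/object-detection/basic.liquid.html',
    labelAttributeName: 'bounding-box', // illustrative
    manifestRow: {'source-ref': 's3://my-bucket/images/0001.jpg'}, // illustrative
    preLambda: handler.pre,
    port: 3000,
    postLambda: handler.postObjectDetection,
    workerIds: ['a', 'b', 'c'],
    puppeteerMod
}).then(output => {
    console.log(JSON.stringify(output, null, 2));
});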

Remarks

Local consolidation request file compatibility

You need to make sure that your post-lambda function is compatible with receiving a local filename in event.payload.s3Uri. You can use gtLibs.loadFile if you need such a function.
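
As a minimal sketch (not the library's official example), a post-lambda could rely on gtLibs.loadFile to read either an s3:// URI or a local path. The exact loadFile signature is an assumption here, so check serverless-sagemaker-groundtruth/lib before relying on it:

const gtLibs = require('serverless-sagemaker-groundtruth/lib');

module.exports.postObjectDetection = (event, context, callback) => {
    // event.payload.s3Uri is an s3:// URI in production, but a local file path
    // when running the e2e simulation; gtLibs.loadFile is assumed to handle both
    // and to resolve with the raw file content (verify the actual signature)
    Promise.resolve(gtLibs.loadFile(event.payload.s3Uri))
        .then(content => {
            const annotations = JSON.parse(content);
            // ... consolidate the per-worker annotations here ...
            callback(null, annotations);
        })
        .catch(callback);
};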

Template

Your template should be submitted using a button that matches the button.awsui-button[type="submit"] selector.

Author: Piercus
Source Code: https://github.com/piercus/serverless-sagemaker-groundtruth 
License: 

#serverless #aws #plugin 

Serverless Sagemaker Groundtruth
Hermann Frami

1656614460

Serverless Plugin for S3 Sync

⚑️ Serverless Plugin for S3 Sync   

With this plugin for serverless, you can sync local folders to S3 buckets after your service is deployed.

Usage

Add the NPM package to your project:

# Via yarn
$ yarn add serverless-s3bucket-sync

# Via npm
$ npm install serverless-s3bucket-sync

Add the plugin to your serverless.yml:

plugins:
  - serverless-s3bucket-sync

Configuration

Configure the S3 bucket syncing in serverless.yml with references to your local folder and the name of the S3 bucket.

custom:
  s3-sync:
    - folder: relative/folder
      bucket: bucket-name

That's it! With the next deployment, serverless will sync your local folder relative/folder with the S3 bucket named bucket-name.

Sync

You can use sls sync to synchronize all buckets without deploying your serverless stack.
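
For example, run it from your project directory:

$ sls sync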

Contribution

You are welcome to contribute to this project! 😘

To make sure you have a pleasant experience, please read the code of conduct. It outlines core values and beliefs and will make working together a happier experience.

Author: sbstjn
Source Code: https://github.com/sbstjn/serverless-s3bucket-sync 
License: MIT license

#serverless #s3 #sync #aws 

Serverless Plugin for S3 Sync
Hermann Frami

1656606960

How to Synchronize Local Folders and S3 Prefixes for Serverless Framework

Serverless S3 Sync

A plugin to sync local directories and S3 prefixes for Serverless Framework ⚑ .

Use Case

  • Static website (serverless-s3-sync) & contact form backend (serverless).
  • SPA (serverless) & assets (serverless-s3-sync).

Install

Run npm install in your Serverless project.

$ npm install --save serverless-s3-sync

Add the plugin to your serverless.yml file

plugins:
  - serverless-s3-sync

Compatibility with Serverless Framework

Version 2.0.0 is compatible with Serverless Framework v3, but it uses the legacy logging interface. Version 3.0.0 and later uses the new logging interface.

  • serverless-s3-sync v1.x: Serverless Framework v1.x, v2.x
  • serverless-s3-sync v2.0.0: Serverless Framework v1.x, v2.x, v3.x
  • serverless-s3-sync β‰₯ v3.0.0: Serverless Framework v3.x

Setup

custom:
  s3Sync:
    # A simple configuration for copying static assets
    - bucketName: my-static-site-assets # required
      bucketPrefix: assets/ # optional
      localDir: dist/assets # required

    # An example of possible configuration options
    - bucketName: my-other-site
      localDir: path/to/other-site
      deleteRemoved: true # optional, indicates whether sync deletes files no longer present in localDir. Defaults to 'true'
      acl: public-read # optional
      followSymlinks: true # optional
      defaultContentType: text/html # optional
      params: # optional
        - index.html:
            CacheControl: 'no-cache'
        - "*.js":
            CacheControl: 'public, max-age=31536000'
      bucketTags: # optional, these are appended to existing S3 bucket tags (overwriting tags with the same key)
        tagKey1: tagValue1
        tagKey2: tagValue2

    # This references bucket name from the output of the current stack
    - bucketNameKey: AnotherBucketNameOutputKey
      localDir: path/to/another

    # ... but can also reference it from the output of another stack,
    # see https://www.serverless.com/framework/docs/providers/aws/guide/variables#reference-cloudformation-outputs
    - bucketName: ${cf:another-cf-stack-name.ExternalBucketOutputKey}
      localDir: path

resources:
  Resources:
    AssetsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-static-site-assets
    OtherSiteBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-other-site
        AccessControl: PublicRead
        WebsiteConfiguration:
          IndexDocument: index.html
          ErrorDocument: error.html
    AnotherBucket:
      Type: AWS::S3::Bucket
  Outputs:
    AnotherBucketNameOutputKey:
      Value: !Ref AnotherBucket

Usage

Run sls deploy, local directories and S3 prefixes are synced.

Run sls remove, S3 objects in S3 prefixes are removed.

Run sls deploy --nos3sync, deploy your serverless stack without syncing local directories and S3 prefixes.

Run sls remove --nos3sync, remove your serverless stack without removing S3 objects from the target S3 buckets.

sls s3sync

Sync local directories and S3 prefixes.

Offline usage

If you are also using the plugins serverless-offline and serverless-s3-local, sync can be supported during development by placing the bucket configuration(s) into the buckets object and specifying the alternate endpoint (see below).

custom:
  s3Sync:
    # an alternate s3 endpoint
    endpoint: http://localhost:4569
    buckets:
    # A simple configuration for copying static assets
    - bucketName: my-static-site-assets # required
      bucketPrefix: assets/ # optional
      localDir: dist/assets # required
# ...

As per serverless-s3-local's instructions, once a local credentials profile is configured, run sls offline start --aws-profile s3local to sync to the local S3 bucket instead of Amazon S3.

bucketNameKey will not work in offline mode and can only be used in conjunction with valid AWS credentials; use bucketName instead.

Run sls deploy for a normal deployment.

Always disable auto sync

custom:
  s3Sync:
    # Disable sync when sls deploy and sls remove
    noSync: true
    buckets:
    # A simple configuration for copying static assets
    - bucketName: my-static-site-assets # required
      bucketPrefix: assets/ # optional
      localDir: dist/assets # required
# ...

Author: k1LoW
Source Code: https://github.com/k1LoW/serverless-s3-sync 
License: 

#serverless #s3 #sync 

How to Synchronize Local Folders and S3 Prefixes for Serverless Framework
Hermann Frami

1656599580

Serverless S3 Remover

serverless-s3-remover

A plugin for serverless to make buckets empty before removing them.

Usage

Run the following command:

$ npm install serverless-s3-remover

Add to your serverless.yml

plugins:
  - serverless-s3-remover

custom:
  remover:
     buckets:
       - my-bucket-1
       - my-bucket-2

You can specify any number of buckets that you want.

Now you can make all buckets empty by running:

$ sls s3remove

When removing

When removing the serverless stack, this plugin automatically makes the buckets empty before removing the stack.

$ sls remove

Using Prompt

You can enable a confirmation prompt before a bucket is emptied.

custom:
  remover:
    prompt: true # default value is `false`
    buckets:
      - remover-bucket-a
      - remover-bucket-b

Populating the configuration object before using it

custom:
  boolean:
    true: true
    false: false
  remover:
    prompt: ${self:custom.boolean.${opt:s3-remover-prompt, 'true'}}

You can use the command line argument --s3-remover-prompt false to disable the prompt feature.
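
For example, to remove the stack without being asked for confirmation:

$ sls remove --s3-remover-prompt false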

Author: Sinofseven
Source Code: https://github.com/sinofseven/serverless-s3-remover 
License: MIT license

#serverless #s3 #remove #plugin 

Serverless S3 Remover
Hermann Frami

1656592140

Serverless S3 Local

serverless-s3-local

serverless-s3-local is a Serverless plugin that runs an S3 clone locally. It is aimed at accelerating development of AWS Lambda functions through local testing, and it works well together with serverless-offline.

Installation

Use npm

npm install serverless-s3-local --save-dev

Use serverless plugin install

sls plugin install --name serverless-s3-local

Example

serverless.yaml

service: serverless-s3-local-example
provider:
  name: aws
  runtime: nodejs12.x
plugins:
  - serverless-s3-local
  - serverless-offline
custom:
# Uncomment only if you want to collaborate with serverless-plugin-additional-stacks
# additionalStacks:
#    permanent:
#      Resources:
#        S3BucketData:
#            Type: AWS::S3::Bucket
#            Properties:
#                BucketName: ${self:service}-data
  s3:
    host: localhost
    directory: /tmp
resources:
  Resources:
    NewResource:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: local-bucket
functions:
  webhook:
    handler: handler.webhook
    events:
      - http:
          method: GET
          path: /
  s3hook:
    handler: handler.s3hook
    events:
      - s3: local-bucket
        event: s3:*

handler.js (AWS SDK v2)

const AWS = require("aws-sdk");

module.exports.webhook = (event, context, callback) => {
  const S3 = new AWS.S3({
    s3ForcePathStyle: true,
    accessKeyId: "S3RVER", // This specific key is required when working offline
    secretAccessKey: "S3RVER",
    endpoint: new AWS.Endpoint("http://localhost:4569"),
  });
  S3.putObject({
    Bucket: "local-bucket",
    Key: "1234",
    Body: Buffer.from("abcd") // Buffer.from avoids the deprecated Buffer constructor
  }, () => callback(null, "ok"));
};

module.exports.s3hook = (event, context) => {
  console.log(JSON.stringify(event));
  console.log(JSON.stringify(context));
  console.log(JSON.stringify(process.env));
};

handler.js (AWS SDK v3)

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

module.exports.webhook = (event, context, callback) => {
  const client = new S3Client({
    forcePathStyle: true,
    credentials: {
      accessKeyId: "S3RVER", // This specific key is required when working offline
      secretAccessKey: "S3RVER",
    },
    endpoint: "http://localhost:4569",
  });
  client
    .send(
      new PutObjectCommand({
        Bucket: "local-bucket",
        Key: "1234",
        Body: Buffer.from("abcd"),
      })
    )
    .then(() => callback(null, "ok"));
};

module.exports.s3hook = (event, context) => {
  console.log(JSON.stringify(event));
  console.log(JSON.stringify(context));
  console.log(JSON.stringify(process.env));
};

Configuration options

Configuration options can be defined in multiple ways. They will be parsed with the following priority:

  • custom.s3 in serverless.yml
  • custom.serverless-offline in serverless.yml
  • Default values (see the list below)

  • address: The host/IP to bind the S3 server to (string, default 'localhost')
  • host: The host where internal S3 calls are made. Should be the same as address (string)
  • port: The port that the S3 server will listen to (number, default 4569)
  • directory: The location where the S3 files will be created. The directory must exist, it won't be created (string, default './buckets')
  • accessKeyId: The Access Key Id to authenticate requests (string, default 'S3RVER')
  • secretAccessKey: The Secret Access Key to authenticate requests (string, default 'S3RVER')
  • cors: The S3 CORS configuration XML. See AWS docs (string | Buffer)
  • website: The S3 Website configuration XML. See AWS docs (string | Buffer)
  • noStart: Set to true if you already have an S3rver instance running (boolean, default false)
  • allowMismatchedSignatures: Prevent SignatureDoesNotMatch errors for all well-formed signatures (boolean, default false)
  • silent: Suppress S3rver log messages (boolean, default false)
  • serviceEndpoint: Override the AWS service root for subdomain-style access (string, default amazonaws.com)
  • httpsProtocol: To enable HTTPS, specify a directory (relative to your cwd, typically your project dir) containing both cert.pem and key.pem files (string)
  • vhostBuckets: Disable vhost-style access for all buckets (boolean, default true)
  • buckets: Extra bucket names that will be created after starting S3 local (string)
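
For instance, a custom.s3 block that overrides a few of these defaults might look like the following (the values are illustrative):

custom:
  s3:
    address: 0.0.0.0
    port: 8000
    directory: /tmp/s3-local
    accessKeyId: S3RVER
    secretAccessKey: S3RVER
    silent: true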

Feature

  • Start local S3 server with specified root directory and port.
  • Create buckets at launching.
  • Support serverless-plugin-additional-stacks
  • Support serverless-webpack
  • Support serverless-plugin-existing-s3
  • Support S3 events.

Working with IaC tools

If you want to work with IaC tools such as Terraform, you have to manage the bucket creation process yourself. In this case, please follow the steps below.

  1. Comment out the S3 bucket configuration in the resources section of serverless.yml.
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: local-bucket
  2. Create the bucket directory in the s3rver working directory.
$ mkdir /tmp/local-bucket

Triggering AWS Events offline

This plugin will create a temporary directory to store mock S3 info. You must use the AWS CLI to trigger events locally. First, set up a new profile using aws configure, i.e. aws configure --profile s3local. The default credentials are:

aws_access_key_id = S3RVER
aws_secret_access_key = S3RVER

You can now use this profile to trigger events. For example, to trigger a put-object on a file at ~/tmp/data.csv in a local bucket, run:

aws --endpoint http://localhost:4569 s3 cp ~/tmp/data.csv s3://local-bucket/userdata.csv --profile s3local

You should see the event trigger in the serverless offline console, e.g. info: PUT /local-bucket/userdata.csv 200 16ms 0b, and a new object with metadata will appear in your local bucket.

Author: ar90n
Source Code: https://github.com/ar90n/serverless-s3-local 
License: MIT license

#serverless #s3 #lambda #local 

Serverless S3 Local
Hermann Frami

1656584580

Serverless S3 Encryption

Serverless-s3-encryption

Set or remove the encryption settings on the S3 buckets in your serverless stack.

This plugin runs on the after:deploy hook, but you can also run it manually with: sls s3-encryption update

Install

npm install --save-dev serverless-s3-encryption

Usage

See the example below for how to modify your serverless.yml

# serverless.yml

plugins:
  # ...
  - serverless-s3-encryption

custom:
  # ...
  s3-encryption:
    buckets:
      MyEncryptedBucket:
        # see: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putBucketEncryption-property
        # accepted values: none, AES256, aws:kms
        SSEAlgorithm: AES256
        # only if SSEAlgorithm is aws:kms
        KMSMasterKeyID: STRING_VALUE 

resources:
  Resources:
    MyEncryptedBucket:
      Type: "AWS::S3::Bucket"
      Description: my encrypted bucket
      DeletionPolicy: Retain

Author: Tradle
Source Code: https://github.com/tradle/serverless-s3-encryption 
License: 

#serverless #s3 #encryption 

Serverless S3 Encryption
Hermann Frami

1656577200

Serverless S3 Deploy

serverless-s3-deploy

Plugin for serverless to deploy files to a variety of S3 Buckets

Note: This project is currently not maintained.

Installation

 npm install --save-dev serverless-s3-deploy

Usage

Add to your serverless.yml:

  plugins:
    - serverless-s3-deploy

  custom:
    assets:
      targets:
       - bucket: my-bucket
         files:
          - source: ../assets/
            globs: '**/*.css'
          - source: ../app/
            globs:
              - '**/*.js'
              - '**/*.map'
       - bucket: my-other-bucket
         empty: true
         prefix: subdir
         files:
          - source: ../email-templates/
            globs: '**/*.html'

You can specify any number of targets that you want. Each target has a bucket and a prefix.

bucket is either the name of your S3 bucket or a reference to a CloudFormation resource created in the same serverless configuration file. See below for additional details.

You can specify source relative to the current directory.

Each source has its own list of globs, which can be either a single glob, or a list of globs.

Setting empty to true will delete all files inside the bucket before uploading the new content to the S3 bucket. The prefix value is respected and files outside the prefix will not be deleted.

Now you can upload all of these assets to your bucket by running:

$ sls s3deploy

If you have defined multiple buckets, you can limit your deployment to a single bucket with the --bucket option:

$ sls s3deploy --bucket my-bucket

ACL

You can optionally specify an ACL for the uploaded files on a per-target basis:

  custom:
    assets:
      targets:
        - bucket: my-bucket
          acl: private
          files:

The default value is private. The accepted values are the standard S3 canned ACLs (see the AWS S3 documentation).

Content Type

The plugin will attempt to determine the appropriate Content Type for each file using mime-types. If one can't be determined, a default fallback of 'application/octet-stream' will be used.

You can override this fallback per-source by setting defaultContentType.

  custom:
    assets:
      targets:
        - bucket: my-bucket
          files:
            - source: html/
              defaultContentType: text/html
              ...

Other Headers

Additional headers can be included per target by providing a headers object.

See http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html for more details.

  custom:
    assets:
      targets:
        - bucket: my-bucket
          files:
            - source: html/
              headers:
                CacheControl: max-age=31104000 # 1 year

Resolving References

A common use case is to create the S3 buckets in the resources section of your serverless configuration and then reference it in your S3 plugin settings:

  custom:
    assets:
      targets:
        - bucket:
            Ref: MyBucket
          files:
            - source: html/

  resources:
    # AWS CloudFormation Template
    Resources:
      MyBucket:
        Type: AWS::S3::Bucket
        Properties:
          AccessControl: PublicRead
          WebsiteConfiguration:
            IndexDocument: index.html
            ErrorDocument: index.html

You can disable the resolving with the following flag:

  custom:
    assets:
      resolveReferences: false

Auto-deploy

If you want s3deploy to run automatically after a deploy, set the auto flag:

  custom:
    assets:
      auto: true

IAM Configuration

You're going to need an IAM policy that supports this deployment. This might be a good starting point:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${bucket}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::${bucket}/*"
            ]
        }
    ]
}

Upload concurrency

If you want to tweak the upload concurrency, change uploadConcurrency config:

custom:
  assets:
    # defaults to 3
    uploadConcurrency: 1

Verbosity

Verbosity can be enabled using either of these methods:

Configuration:

  custom:
    assets:
      verbose: true

CLI:

  sls s3deploy -v

Author: Funkybob
Source Code: https://github.com/funkybob/serverless-s3-deploy 
License: MIT license

#serverless #deploy #s3 #plugin 

Serverless S3 Deploy