Hermann Frami

Serverless Export Env Plugin

⚡️ Serverless Export Env Plugin 

About

The Serverless Framework offers a very powerful feature: You can reference AWS resources anywhere from within your serverless.yml and it will automatically resolve them to their respective values during deployment. However, this only works properly once your code is deployed to AWS. The Serverless Export Env Plugin extends the Serverless Framework's built-in variable resolution capabilities by adding support for many additional CloudFormation intrinsic functions (Fn::GetAtt, Fn::Join, Fn::Sub, etc.) as well as variable references (AWS::Region, AWS::StackId, etc.).

The Serverless Export Env Plugin helps solve two main use cases:

  1. It automatically hooks into sls invoke local and sls offline start (see the Serverless Offline Plugin) and helps resolve your environment variables. This is fully transparent to your application and other plugins.
  2. Invoke sls export-env from the command line to generate a .env file on your local filesystem. Then use a library such as dotenv to import it into your code, e.g. during local integration tests.
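
For example, a local integration test might load the generated file with dotenv before exercising any code that reads process.env. A minimal sketch (the variable name MY_TABLE_NAME is just a hypothetical example):

// Load the .env file generated by `sls export-env` (".env" is the plugin's default file name)
require("dotenv").config({ path: ".env" });

// All exported variables are now available on process.env as usual
console.log(process.env.MY_TABLE_NAME); // hypothetical variable name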

Usage

Add the npm package to your project:

# Via yarn
$ yarn add arabold/serverless-export-env --dev

# Via npm
$ npm install arabold/serverless-export-env --save-dev

Add the plugin to your serverless.yml. It should be listed first to ensure it can resolve your environment variables before other plugins see them:

plugins:
  - serverless-export-env

That's it! You can now call sls export-env in your terminal to generate the .env file. Or, you can run sls invoke local -f FUNCTION or sls offline start to run your code locally as usual.

Examples

sls export-env

This will export all project-wide environment variables into a .env file in your project root folder.
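
For illustration, the generated file might contain entries like these (variable names and values are hypothetical):

S3_BUCKET_URL=https://s3.amazonaws.com/my-example-bucket
DYNAMODB_TABLE=my-service-dev-table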

sls export-env --function MyFunction --filename .env-MyFunction

This will export environment variables of the MyFunction Lambda function into a .env-MyFunction file in your project root folder.

Referencing CloudFormation resources

As mentioned before, the Serverless Framework allows you to reference AWS resources anywhere from within your serverless.yml and it will automatically resolve them to their respective values during deployment. However, Serverless' built-in variable resolution is limited and will not always work when run locally. The Serverless Export Env Plugin extends this functionality and automatically resolves commonly used intrinsic functions and initializes your local environment properly.

Supported intrinsic functions

  • Condition Functions
    • Fn::And
    • Fn::Equals
    • Fn::If
    • Fn::Not
    • Fn::Or
  • Fn::FindInMap
  • Fn::GetAtt
  • Fn::GetAZs
  • Fn::Join
  • Fn::Select
  • Fn::Split
  • Fn::Sub (at the moment, only key-value map substitution is supported)
  • Fn::ImportValue
  • Ref

Examples

provider:
  environment:
    S3_BUCKET_URL:
      Fn::Join:
        - ""
        - - https://s3.amazonaws.com/
          - Ref: MyBucket

Or the short version:

provider:
  environment:
    S3_BUCKET_URL: !Join ["", [https://s3.amazonaws.com/, Ref: MyBucket]]

You can then access the environment variable in your code the usual way (e.g. process.env.S3_BUCKET_URL).
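
A minimal sketch of consuming the resolved value at runtime (the object key below is a hypothetical example):

// Read the bucket URL that the plugin resolved into the environment
const bucketUrl = process.env.S3_BUCKET_URL;

// Build a full object URL from it
const objectUrl = `${bucketUrl}/uploads/avatar.png`;
console.log(objectUrl);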

Configuration

The plugin supports various configuration options under custom.export-env in your serverless.yml file:

custom:
  export-env:
    filename: .env
    overwrite: false
    enableOffline: true

Configuration Options

Option | Default | Description
filename | .env | Target file name to write the environment variables to, relative to the project root.
enableOffline | true | Evaluate the environment variables when running sls invoke local or sls offline start.
overwrite | false | Overwrite the file even if it already exists.
refMap | {} | Mapping of resource resolutions for the Ref function.
getAttMap | {} | Mapping of resource resolutions for the Fn::GetAtt function.
importValueMap | {} | Mapping of resource resolutions for the Fn::ImportValue function.

Custom Resource Resolution

The plugin will try its best to resolve resource references like Ref, Fn::GetAtt, and Fn::ImportValue for you. However, sometimes this might fail, or you might want to use mocked values instead. In those cases, you can override those values using the refMap, getAttMap, and importValueMap options.

  • refMap takes a mapping of resource name to value pairs.
  • getAttMap takes a mapping of resource name to attribute/value pairs.
  • importValueMap takes a mapping of import name to value pairs.
custom:
  export-env:
    refMap:
      # Resolve `!Ref MyDynamoDbTable` as `mock-myTable`
      MyDynamoDbTable: "mock-myTable"
    getAttMap:
      # Resolve `!GetAtt MyElasticSearchInstance.DomainEndpoint` as `localhost:9200`
      MyElasticSearchInstance:
        DomainEndpoint: "localhost:9200"
    importValueMap:
      # Resolve `!ImportValue MyLambdaFunction` as `arn:aws:lambda:us-east-2::function:my-lambda-function`
      MyLambdaFunction: "arn:aws:lambda:us-east-2::function:my-lambda-function"
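
For example, assuming provider.environment defines a variable ELASTIC_SEARCH_ENDPOINT via !GetAtt MyElasticSearchInstance.DomainEndpoint (a hypothetical setup), your handler would see the mocked value when run locally:

// During `sls invoke local`, the plugin substitutes the mocked value from getAttMap
console.log(process.env.ELASTIC_SEARCH_ENDPOINT); // "localhost:9200"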

👉 Generally, it is recommended to avoid the use of intrinsic functions in your environment variables. Often, the same can be achieved by simply predefining a resource name and then manually constructing the desired variable values. To share resources between different Serverless services, check out the ${cf:stackName.outputKey} variable resolution mechanism.

Command-Line Options

Running sls export-env will, by default, only export global environment variables into your .env file (those defined under provider.environment in your serverless.yml). If you want to generate the .env file for a specific function, pass the function name as a command-line argument as follows:

sls export-env --function hello --filename .env-hello
Option | Description
filename | Target file name to write the environment variables to, relative to the project root.
overwrite | Overwrite the file even if it already exists.
function | Name of a function for which to generate the .env file.
all | Merge environment variables of all functions into a single .env file.

Provided lifecycle events

  • export-env:collect - Collect environment variables from Serverless
  • export-env:resolve - Resolve CloudFormation references and import variables
  • export-env:apply - Set environment variables when testing Lambda functions locally
  • export-env:write - Write environment variables to file
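
Other plugins (or your own) can hook into these events in the usual Serverless way. A minimal sketch, assuming a hypothetical companion plugin that wants to run after the .env file has been written:

"use strict";

// Hypothetical companion plugin that reacts to the export-env lifecycle
class MyEnvAuditPlugin {
  constructor(serverless) {
    this.serverless = serverless;
    this.hooks = {
      // Runs after the export-env plugin has written the .env file
      "after:export-env:write": () => this.afterWrite(),
    };
  }

  afterWrite() {
    this.serverless.cli.log("export-env finished writing the .env file");
  }
}

module.exports = MyEnvAuditPlugin;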

Migrating from 1.x to 2.x

  • Running sls invoke local or sls offline start will no longer create or update your .env file. If you want to create a .env file, simply run sls export-env instead.
  • By default, the plugin will no longer overwrite any existing .env file. To enable overwriting existing files, either specify --overwrite in the command-line or set the custom.export-env.overwrite configuration option.
  • Resource Outputs values (resources.Resources.Outputs.*) are no longer exported automatically. This had always been a workaround and caused more problems than it solved. The plugin will now try its best to resolve Fn::GetAtt and other references for you, so there should be little need for the old behavior anymore. Add the desired value as an environment variable to provider.environment instead.
  • Running sls export-env will no longer merge the environment variables of all functions into a single .env file. Instead, pass the name of the desired function as the --function argument on the command line. If no function name is specified, only project-wide environment variables will get exported. To bring back the old behavior, pass --all on the command line and it will generate a file including all environment variables of all functions. However, please be aware that the behavior is undefined if functions use conflicting values for the same environment variable name.
  • The configuration options filename and pathFromRoot have been merged into a single filename option. You can now specify relative paths in filename, such as ./dist/.env. Make sure the target folder exists!

Releases

2.1.0

  • Compatibility with Serverless v3.x
  • Updated dependencies minor versions

2.0.0

  • Removed optimistic variable resolution for Fn::GetAtt as it was not working properly and caused hard-to-solve issues. If you rely on Fn::GetAtt in your environment variables, define a custom resolution using the getAttMap configuration option.

alpha.1

  • Added --all command line parameter to merge the environment variables of all functions into a single .env file. Please note that the behavior is undefined if functions use conflicting values for the same environment variable name.

alpha.0

  • Complete rewrite of the variable resolver. We now use the amazing cfn-resolver-lib library. This allows us to support not only Ref and Fn::ImportValue as in previous releases, but also to automatically resolve the most commonly used intrinsic functions.

1.x Releases

1.4.4

  • Reverted changes in 1.4.1. Unfortunately, we broke the semver contract by introducing a breaking feature in a patch update. This feature needs to be rethought and added back in a 1.5.x release as optional. Until then, I had to remove it again.

1.4.3

  • Internal version (not published)

1.4.2

  • Fixed some compatibility issues with the latest Serverless framework release. Thanks to pgrzesik for the necessary updates.

1.4.1

  • Disabled calls to the real AWS infrastructure when running with Serverless Offline. Thanks to marooned071 for the contribution.

1.4.0

  • Collect and set resource values from the actual CloudFormation stack output. Thanks to andersquist for his contribution!
  • Fix error when serverless.yml doesn't contain a custom section. Thanks to michael-wolfenden!

1.3.1

  • Explicitly set environment variables during local invocation of the Lambda (sls invoke local)

1.3.0

  • Support different output filename and path. Thanks to philiiiiiipp.
  • Export Outputs as environment variables. Thanks to lielran.
  • Updated to latest dependencies

1.2.0

  • Use operating-system-specific end-of-line when creating .env file

1.1.3

  • Fixed an issue with AWS::AccountId being resolved as [Object Promise] instead of the actual value.

1.1.2

  • Fixed an issue with CloudFormation resources not being resolved properly if the stack has more than 100 resources or exports.

1.1.1

  • Fix issue with multiple environment variables for function (thanks to @Nevon).

1.1.0

  • Support Fn::Join operation (contribution by @jonasho)
  • Support pseudo parameters AWS::Region, AWS::AccountId, AWS::StackId and AWS::StackName.

1.0.2

  • The plugin now properly resolves and sets the environment variables if a Lambda function is invoked locally (sls invoke local -f FUNCTION). This allows seamless local testing, as if the function were deployed on AWS.

1.0.1

  • Corrected plugin naming
  • Improved documentation

1.0.0

  • This is the initial release with all basic functionality

Author: Arabold
Source Code: https://github.com/arabold/serverless-export-env 
License: MIT license

#serverless #aws #export #env 

Hermann Frami

Serverless Plugin for Microservice Code Management and Deployment

Serverless M

Serverless M (or Serverless Modular) is a plugin for the Serverless Framework. This plugin helps you manage multiple serverless projects with a single serverless.yml file. It gives you supercharged CLI options that you can use to create new features, build them in a single file, and deploy them all in parallel.

Currently, this plugin is tested only with the following stack:

  • AWS
  • NodeJS λ
  • REST API (you can use other events as well)

Prerequisites

Make sure you have the serverless CLI installed

# Install serverless globally
$ npm install serverless -g

Getting Started

To start a serverless modular project locally, you can either start with the ES5 or ES6 templates or add it as a plugin to an existing project.

ES6 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev

ES5 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular --save-dev

If you don't want to use the templates above, you can simply add the plugin to your existing project.

Adding it as a plugin

plugins:
  - serverless-modular

You are now ready to start building your serverless modular functions.

API Reference

The Serverless Modular CLI can be accessed with:

# Serverless Modular CLI
$ serverless modular

# shorthand
$ sls m

The Serverless Modular CLI is based on five main commands:

  • sls m init
  • sls m feature
  • sls m function
  • sls m build
  • sls m deploy

init command

sls m init

The serverless init command helps in creating a basic .gitignore that is useful for serverless modular.

The basic .gitignore for serverless modular looks like this

#node_modules
node_modules

#sm main functions
sm.functions.yml

#serverless file generated by build
src/**/serverless.yml

#main serverless directories generated for sls deploy
.serverless

#feature serverless directories generated sls deploy
src/**/.serverless

#serverless logs file generated for main sls deploy
.sm.log

#serverless logs file generated for feature sls deploy
src/**/.sm.log

#Webpack config copied in each feature
src/**/webpack.config.js

feature command

The feature command helps in building new features for your project

options (feature Command)

This command comes with three options

--name: Specify the name you want for your feature

--remove: Set the value to true if you want to remove the feature

--basePath: Specify the base path you want for your feature. This base path should be unique across all features; it helps when running offline with the offline plugin and for API Gateway

option | shortcut | required | values | default value
--name | -n | | string | N/A
--remove | -r | | true, false | false
--basePath | -p | | string | same as name

Examples (feature Command)

Creating a basic feature

# Creating a jedi feature
$ sls m feature -n jedi

Creating a feature with different base path

# A feature with different base path
$ sls m feature -n jedi -p tatooine

Deleting a feature

# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true

function command

The function command helps in adding a new function to a feature

options (function Command)

This command comes with four options

--name: Specify the name you want for your function

--feature: Specify the name of the existing feature

--path: Specify the path for the HTTP endpoint. This helps when running offline with the offline plugin and for API Gateway

--method: Specify the HTTP method. This helps when running offline with the offline plugin and for API Gateway

option | shortcut | required | values | default value
--name | -n | | string | N/A
--feature | -f | | string | N/A
--path | -p | | string | same as name
--method | -m | | string | 'GET'

Examples (function Command)

Creating a basic function

# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi

Creating a basic function with different path and method

# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST

build command

The build command helps in building the project for local or global scope

options (build Command)

This command comes with two options

--scope: Specify the scope of the build; use this together with the --feature option

--feature: Specify the name of the existing feature you want to build

option | shortcut | required | values | default value
--scope | -s | | string | local
--feature | -f | | string | N/A

Saving build Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    build:
      scope: local

Examples (build Command)

all feature build (local scope)

# Building all local features
$ sls m build

Single feature build (local scope)

# Building a single feature
$ sls m build -f jedi -s local

All features build global scope

# Building all features with global scope
$ sls m build -s global

deploy command

The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command)

options (deploy Command)

This command comes with four options

--sm-parallel: Specify whether you want to deploy in parallel (deployments will only run in parallel when doing multiple deployments)

--sm-scope: Specify whether you want to deploy local features or global

--sm-features: Specify the local features you want to deploy (comma-separated if multiple)

option | shortcut | required | values | default value
--sm-parallel | | | true, false | true
--sm-scope | | | local, global | local
--sm-features | | | string | N/A
--sm-ignore-build | | | string | false

Saving deploy Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true

Examples (deploy Command)

Deploy all features locally

# deploy all local features
$ sls m deploy

Deploy all features globally

# deploy all global features
$ sls m deploy --sm-scope global

Deploy single feature

# deploy a single feature
$ sls m deploy --sm-features jedi

Deploy Multiple features

# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side

Deploy Multiple features in sequence

# deploy multiple features in sequence
$ sls m deploy  --sm-features jedi,sith,dark_side --sm-parallel false

Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular 
License: MIT license

#serverless #aws #node #lambda 

Hermann Frami

Serverless Resources Env

Serverless-resources-env

A Serverless Framework plugin that lets your functions know how to use resources created by CloudFormation.

In short, whether you are running your function as a lambda, or locally on your machine, the physical name or ARN of each resource that was part of your CloudFormation template will be available as an environment variable keyed to its logical name prefixed with CF_.

For lambdas running on AWS, this plugin will set environment variables on your functions within lambda using the Lambda environment variable support.

For running functions locally, it will also create a local env file containing the environment variables for a specific region, stage, and function. These are by default stored in a directory named .serverless-resources-env, in files named .<region>_<stage>_<function-name>. Ex: ./.serverless-resources-env/.us-east-1_dev_hello. These environment variables are set automatically by the plugin when running serverless invoke local -f ....

Breaking Changes in 0.3.0: See below

Why?

You have a CloudFormation template all set, and you are writing your functions. Now you are ready to use the resources created as part of your CF template. Well, you need to know about them! You could deploy and then try to manage configuration for these resources, or you can use this module, which will automatically set environment variables that map the logical resource name to the physical resource name for resources within the CloudFormation file.

Example:

You have defined resources in your serverless.yml called mySQS and myTable, and you want to actually use these in your function, so you need their ARN or the actual table name that was created.

const sqs_arn = process.env.CF_mySQS;
const my_dynamo_table_name = process.env.CF_myTable;
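
A minimal sketch of using one of these values with the AWS SDK (v2 style; the key shape is a hypothetical example):

const AWS = require("aws-sdk");
const documentClient = new AWS.DynamoDB.DocumentClient();

module.exports.hello = async () => {
  // CF_myTable holds the physical table name resolved by the plugin
  const result = await documentClient
    .get({ TableName: process.env.CF_myTable, Key: { id: "example-id" } })
    .promise();

  return { statusCode: 200, body: JSON.stringify(result.Item) };
};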

How it works

This plugin attaches to the post-deploy hook of the deploy command. After the stack is deployed to AWS, the plugin determines the name of the CloudFormation stack and queries AWS for all resources in this stack.

After deployment, this plugin will fetch all the CloudFormation resources for the current stack (stage, i.e. 'dev'). It will then use the AWS SDK to set the physical ID of each resource as an environment variable prefixed with CF_.

It will also create a file with these values in a .properties file format, named ./.serverless-resources-env/.<region>_<stage>_<function-name>. These are then pulled in during a local invocation (serverless invoke local -f ...). Each region, stage, and function will get its own file. When invoking locally, the module will automatically select the correct .env information based on which region and stage are set.

This means no code changes, or config changes no matter how many regions, and stages you deploy to.

The lambdas always know exactly where to find their resources, whether that resource is a DynamoDB, SQS, SNS, or anything else.

Install / Setup

npm install serverless-resources-env --save

Add the plugin to the serverless.yml.

plugins:
  - serverless-resources-env

Set your resources as normal:

resources:
  Resources:
    testTopic1:
      Type: AWS::SNS::Topic
    testTopic2:
      Type: AWS::SNS::Topic

Set which resources you want exported on each function.

functions:
  hello:
    handler: handler.hello
    custom:
      env-resources:
        - testTopic1
        - testTopic2

Breaking Changes since 0.2.0

At version 0.2.0 and before, all resources were exported to both the local .env file and to each function automatically.

This caused issues with AWS limits on the amount of information that could be exported as env variables onto lambdas deployed within AWS. This also exposed resources as env variables that were not needed by functions, as it was setting all resources, not just the ones the function needed.

Starting at version 0.3.0 a list of which resources are to be exported to each function are required to be a part of the function definition in the .yml file, if the function needs any of these environment variables. (See current install instructions above)

This also means that specific env files are needed per region / stage / function. This can potentially be a lot of files and therefore these files were also moved to a sub-folder. .serverless-resources-env by default.

Common Errors

Unexpected key 'Environment' found in params. Your aws-sdk is out of date. Setting environment variables on lambdas is new. See the Important note above.

You may need to upgrade the version of the package aws-sdk being used by the serverless framework.

In the 1.1.0 serverless framework, the aws-sdk is pegged at version 2.6.8 in the npm-shrinkwrap.json of serverless.

If you have installed serverless locally as part of your project you can just upgrade the sdk. npm upgrade aws-sdk.

If you have installed serverless globally, you will need to change to the serverless directory and run npm upgrade aws-sdk from there.

The following commands should get it done:

cd `npm list serverless -g | head -n 1`/node_modules/serverless
npm upgrade aws-sdk

Config

By default, the mapping is written to a .env file located at ./.serverless-resources-env/.<region>_<stage-name>_env. This can be overridden by setting an option in serverless.yml.

custom:
  resource-output-dir: .alt-resource-dir
functions:
  hello:
    custom:
      resource-output-file: .alt-file-name

Author: Rurri
Source Code: https://github.com/rurri/serverless-resources-env 
License: MIT license

#serverless #aws #env #plugin 

Hermann Frami

Serverless Plugin Datadog

Datadog recommends the Serverless Framework Plugin for developers using the Serverless Framework to deploy their serverless applications. The plugin automatically enables instrumentation for applications to collect metrics, traces, and logs by:

  • Installing the Datadog Lambda library to your Lambda functions as a Lambda layer.
  • Installing the Datadog Lambda Extension to your Lambda functions as a Lambda layer (addExtension) or subscribing the Datadog Forwarder to your Lambda functions' log groups (forwarderArn).
  • Making the required configuration changes, such as adding environment variables or additional tracing layers, to your Lambda functions.

Getting started

To quickly get started, follow the installation instructions for Python, Node.js, Ruby, Java, Go, or .NET and view your function's enhanced metrics, traces, and logs in Datadog.

After installation is complete, configure the advanced options to suit your monitoring needs.

Upgrade

Each version of the plugin is published with a specific set of versions of the Datadog Lambda layers. To pick up new features and bug fixes provided by the latest versions of Datadog Lambda layers, upgrade the serverless framework plugin. Test the new version before applying it on your production applications.

Configuration parameters

To further configure your plugin, use the following custom parameters in your serverless.yml:

Parameter | Description
site | Set which Datadog site to send data to, such as datadoghq.com (default), datadoghq.eu, us3.datadoghq.com, us5.datadoghq.com, or ddog-gov.com. This parameter is required when collecting telemetry using the Datadog Lambda Extension.
apiKey | [Datadog API key][7]. This parameter is required when collecting telemetry using the Datadog Lambda Extension. Alternatively, you can also set the DATADOG_API_KEY environment variable in your deployment environment.
appKey | Datadog app key. Only needed when the monitors field is defined. Alternatively, you can also set the DATADOG_APP_KEY environment variable in your deployment environment.
apiKeySecretArn | An alternative to using the apiKey field. The ARN of the secret that is storing the Datadog API key in AWS Secrets Manager. Remember to add the secretsmanager:GetSecretValue permission to the Lambda execution role.
apiKMSKey | An alternative to using the apiKey field. Datadog API key encrypted using KMS. Remember to add the kms:Decrypt permission to the Lambda execution role.
env | When set along with addExtension, a DD_ENV environment variable is added to all Lambda functions with the provided value. Otherwise, an env tag is added to all Lambda functions with the provided value. Defaults to the stage value of the serverless deployment.
service | When set along with addExtension, a DD_SERVICE environment variable is added to all Lambda functions with the provided value. Otherwise, a service tag is added to all Lambda functions with the provided value. Defaults to the service value of the serverless project.
version | When set along with addExtension, a DD_VERSION environment variable is added to all Lambda functions with the provided value. When set along with forwarderArn, a version tag is added to all Lambda functions with the provided value.
tags | A comma-separated list of key:value pairs as a single string. When set along with extensionLayerVersion, a DD_TAGS environment variable is added to all Lambda functions with the provided value. When set along with forwarderArn, the plugin parses the string and sets each key:value pair as a tag on all Lambda functions.
enableXrayTracing | Set true to enable X-Ray tracing on the Lambda functions and API Gateway integrations. Defaults to false.
enableDDTracing | Enable Datadog tracing on the Lambda function. Defaults to true.
enableDDLogs | Enable Datadog log collection using the Lambda Extension. Defaults to true. Note: This setting has no effect on logs sent by the Datadog Forwarder.
monitors | When defined, the Datadog plugin configures monitors for the deployed function. Requires setting DATADOG_API_KEY and DATADOG_APP_KEY in your environment. To learn how to define monitors, see To Enable and Configure a Recommended Serverless Monitor.
captureLambdaPayload | [Captures incoming and outgoing AWS Lambda payloads][17] in the Datadog APM spans for Lambda invocations. Defaults to false.
enableSourceCodeIntegration | Enable [Datadog source code integration][18] for the function. Defaults to true.
subscribeToApiGatewayLogs | Enable automatic subscription of the Datadog Forwarder to API Gateway log groups. Requires setting forwarderArn. Defaults to true.
subscribeToHttpApiLogs | Enable automatic subscription of the Datadog Forwarder to HTTP API log groups. Requires setting forwarderArn. Defaults to true.
subscribeToWebsocketLogs | Enable automatic subscription of the Datadog Forwarder to WebSocket log groups. Requires setting forwarderArn. Defaults to true.
forwarderArn | The ARN of the Datadog Forwarder to be subscribed to the Lambda or API Gateway log groups.
addLayers | Whether to install the Datadog Lambda library as a layer. Defaults to true. Set to false when you plan to package the Datadog Lambda library with your function's deployment package on your own so that you can install a specific version of the Datadog Lambda library ([Python][8] or [Node.js][9]).
addExtension | Whether to install the Datadog Lambda Extension as a layer. Defaults to true. When enabled, it's required to set the apiKey and site.
exclude | When set, this plugin ignores all specified functions. Use this parameter if you have any functions that should not include Datadog functionality. Defaults to [].
enabled | When set to false, the Datadog plugin stays inactive. Defaults to true. You can control this option using an environment variable. For example, use enabled: ${strToBool(${env:DD_PLUGIN_ENABLED, true})} to activate/deactivate the plugin during deployment. Alternatively, you can also use the value passed in through --stage to control this option (see example).
customHandler | When set, the specified handler is set as the handler for all the functions.
failOnError | When set, this plugin throws an error if any custom Datadog monitors fail to create or update. This occurs after deploy, but will cause the result of serverless deploy to return a nonzero exit code (to fail user CI). Defaults to false.
integrationTesting | Set true when running integration tests. This bypasses the validation of the Forwarder ARN and the addition of Datadog Monitor output links. Defaults to false.
logLevel | The log level; set to DEBUG for extended logging.

To use any of these parameters, add a custom > datadog section to your serverless.yml similar to this example:

custom:
  datadog:
    apiKeySecretArn: "{Datadog_API_Key_Secret_ARN}"
    enableXrayTracing: false
    enableDDTracing: true
    enableDDLogs: true
    subscribeToAccessLogs: true
    forwarderArn: arn:aws:lambda:us-east-1:000000000000:function:datadog-forwarder
    exclude:
      - dd-excluded-function

Webpack

If you are using a bundler, such as webpack, see Serverless Tracing and Webpack.

TypeScript

You may encounter the error of missing type definitions. To resolve the error, add datadog-lambda-js and dd-trace to the devDependencies list of your project's package.json.

If you are using serverless-typescript, make sure that serverless-datadog is above the serverless-typescript entry in your serverless.yml. The plugin will automatically detect .ts files.

plugins:
  - serverless-plugin-datadog
  - serverless-typescript

Disable Plugin for Particular Environment

If you'd like to turn off the plugin based on the environment (passed via --stage), you can use something similar to the example below.

provider:
  stage: ${opt:stage, 'dev'}

custom:
  staged: ${self:custom.stageVars.${self:provider.stage}, {}}

  stageVars:
    dev:
      dd_enabled: false

  datadog:
    enabled: ${self:custom.staged.dd_enabled, true}

Serverless Monitors

There are seven recommended monitors with default values pre-configured.

Monitor | Metrics | Threshold | Serverless Monitor ID
High Error Rate | aws.lambda.errors/aws.lambda.invocations | >= 10% | high_error_rate
Timeout | aws.lambda.duration.max/aws.lambda.timeout | >= 1 | timeout
Out of Memory | aws.lambda.enhanced.out_of_memory | > 0 | out_of_memory
High Iterator Age | aws.lambda.iterator_age.maximum | >= 24 hrs | high_iterator_age
High Cold Start Rate | aws.lambda.enhanced.invocations(cold_start:true)/aws.lambda.enhanced.invocations | >= 20% | high_cold_start_rate
High Throttles | aws.lambda.throttles/aws.lambda.invocations | >= 20% | high_throttles
Increased Cost | aws.lambda.enhanced.estimated_cost | ↑20% | increased_cost

To Enable and Configure a Recommended Serverless Monitor

To create a recommended monitor, you must use its respective serverless monitor ID. Note that you must also set the DATADOG_API_KEY and DATADOG_APP_KEY in your environment.

If you’d like to further configure the parameters for a recommended monitor, you can directly define the parameter values below the serverless monitor ID. Parameters not specified under a recommended monitor will use the default recommended value. The query parameter for recommended monitors cannot be directly modified and will default to using the query value as defined above; however, you may change the threshold value in query by re-defining it within the options parameter. To delete a monitor, remove the monitor from the serverless.yml template. For further documentation on how to define monitor parameters, see the Datadog Monitors API.

Monitor creation occurs after the function is deployed. In the event that a monitor is unsuccessfully created, the function will still be successfully deployed.

To create a recommended monitor with the default values

Define the appropriate serverless monitor ID without specifying any parameter values

custom:
  datadog:
    addLayers: true
    monitors:
      - high_error_rate:

To configure a recommended monitor

custom:
  datadog:
    addLayers: true
    monitors:
      - high_error_rate:
          name: "High Error Rate with Modified Warning Threshold"
          message: "More than 10% of the function’s invocations were errors in the selected time range. Notify @data.dog@datadoghq.com @slack-serverless-monitors"
          tags: ["modified_error_rate", "serverless", "error_rate"]
          require_full_window: true
          priority: 2
          options:
            include_tags: true
            notify_audit: true
            thresholds:
              ok: 0.025
              warning: 0.05

To delete a monitor

Removing the serverless monitor ID and its parameters will delete the monitor.

To Enable and Configure a Custom Monitor

To define a custom monitor, you must define a unique serverless monitor ID string in addition to passing in the API key and Application key, DATADOG_API_KEY and DATADOG_APP_KEY, in your environment. The query parameter is required but every other parameter is optional. Define a unique serverless monitor ID string and specify the necessary parameters below. For further documentation on monitor parameters, see the Datadog Monitors API.

custom:
  datadog:
    addLayers: true
    monitors:
      - custom_monitor_id:
          name: "Custom Monitor"
          query: "max(next_1w):forecast(avg:system.load.1{*}, 'linear', 1, interval='60m', history='1w', model='default') >= 3"
          message: "Custom message for custom monitor. Notify @data.dog@datadoghq.com @slack-serverless-monitors"
          tags: ["custom_monitor", "serverless"]
          priority: 3
          options:
            enable_logs_sample: true
            require_full_window: true
            include_tags: false
            notify_audit: true
            notify_no_data: false
            thresholds:
              ok: 1
              warning: 2

Breaking Changes

v5.0.0

  • When used in conjunction with the Datadog Extension, this plugin sets service and env tags through environment variables instead of Lambda resource tags.
  • The enableTags parameter was replaced by the new service and env parameters.

v4.0.0

  • The Datadog Lambda Extension is now the default mechanism for transmitting telemetry to Datadog.

Opening Issues

If you encounter a bug with this package, let us know by filing an issue! Before opening a new issue, please search the existing issues to avoid duplicates.

When opening an issue, include your Serverless Framework version, Python/Node.js version, and stack trace if available. Also, please include the steps to reproduce when appropriate.

You can also open an issue for a feature request.

Contributing

If you find an issue with this package and have a fix, open a pull request following the procedures.

Community

For product feedback and questions, join the #serverless channel in the Datadog community on Slack.

Author: DataDog
Source Code: https://github.com/DataDog/serverless-plugin-datadog 
License: View license

#serverless #datadog #plugin 

How To Customize WordPress Plugins? (4 Easy Ways To Do It)

WordPress needs no introduction. It has been around for quite a long time, and up till now it has given a tough fight to other leading web development technologies. The main reason behind its remarkable success is that it is highly customizable and also SEO-friendly. Other benefits include open-source technology, security, user-friendliness, and the thousands of free plugins it offers.

Talking of WordPress plugins, they are pieces of software that enable you to add more features to a website. They are easy to integrate into your website and don’t hamper the performance of the site. WordPress, as a leading technology, offers many out-of-the-box plugins.

However, WordPress will not always be able to meet all your needs. Hence, you may have to customize a WordPress plugin to provide the functionality you want. WordPress plugins are easy to install and customize. You don’t have to build the solution from scratch, and that’s one of the reasons why small and medium-sized businesses love it. It doesn’t need a hefty investment or the hiring of an in-house development team. You can use the core functionality of the plugin and expand it as you like.

In this blog, we will talk in depth about plugins and how to customize WordPress plugins to improve the functionality of your web applications.

What Is The Working Of The WordPress Plugins?

Developing your own plugin requires some knowledge of the way plugins work. This ensures the better functioning of the customized plugins and avoids mistakes that can hamper the experience on your site.

1. Hooks

Plugins operate primarily using hooks. Just as a hook attaches one thing to another, a feature or piece of functionality is hooked to your website; the piece of code interacts with the other components present on the website. There are two types of hooks: a. Actions and b. Filters.

A. Action

If you want something to happen at a particular time, you need to use a WordPress “action” hook. With actions, you can add, change and improve the functionality of your plugin. It allows you to attach a new action that can be triggered by your users on the website.

There are several predefined actions available in WordPress, and custom WordPress plugin development also allows you to develop your own actions. This way you can make your plugin function as you want. It also allows you to set the values with which the hooked function is called. The add_action function will then connect that function to a specific action.

B. Filters

Filters are hooks that accept a single variable or a series of variables and send them back after modifying them. They allow you to change the content displayed to the user.

You can apply a filter on your website with the apply_filters function, and then you can define the filter under the function. To add a filter hook on the website, you have to pass the $tag (the filter name) and $value (the filtered value or variable); this allows the hook to work. You can also pass extra function values under $var.

Once you have made your filter, you can register it with the add_filter function. This will activate your filter so that it runs when the specific hook is triggered. You can then manipulate the variable and return it.

2. Shortcodes

Shortcodes are a good way to create and display custom functionality of your website to visitors. They are small bits of code that can be placed in posts and pages, as well as in menus and widgets.

There are many plugins that use shortcodes. By creating your very own shortcode, you too can customize a WordPress plugin. You can create your own shortcode with the add_shortcode function. The name of the shortcode is the first argument, and the second is the function that produces its output when the shortcode is triggered. That output function can receive the attributes, content, and shortcode name.

3. Widgets

Other than hooks and shortcodes, you can use widgets to add functionality to the site. A good way to create a widget is by extending the WP_Widget class. Widgets render a user-friendly experience, as they have an object-oriented design approach and the functions and values are stored in a single entity.

How To Customize WordPress Plugins?

There are various methods to customize WordPress plugins. Depending on your needs and the degree of customization you wish to make to the plugin, choose the right option for you. Also, keep in mind that it requires a bit of technical knowledge, so find an expert WordPress plugin development company in case you lack the knowledge to do it yourself.

1. Hire A Plugin Developer

One of the best ways to customize a WordPress plugin is by hiring a plugin developer. There are many plugin developers listed in the WordPress directory. You can contact them and collaborate with world-class WordPress developers. It is quite easy to find a WordPress plugin developer.

Since it is not much work and doesn’t pay well or for the long term, a lot of developers will be unwilling to collaborate, but you will eventually find people.

2. Creating A Supporting Plugin

If you are looking for added functionality in an already existing plugin, go for this option. It is a cheap way to meet your needs, and creating a supporting plugin takes very little time as it has a very limited scope. Furthermore, you can extend a plugin's current feature set without altering its base code.

However, to do so, you have to hire a WordPress developer as it also requires some technical knowledge.

3. Use Custom Hooks

Use the WordPress hooks to integrate some other feature into an existing plugin. You can add an action or a filter as per your need and improve the functionality of the website.

If the plugin you want to customize has the hook, you don’t have to do much to customize it. You can write your own plugin that works with these hooks. This way you don’t have to build a WordPress plugin right from scratch. If the hook is not present in the plugin code, you can contact a WordPress developer or write the code yourself. It may take some time, but it works.

Once the hook is added, you just have to manually patch each one upon the release of the new plugin update.

4. Override Callbacks

The last way to customize WordPress plugins is by overriding callbacks. You can alter the core functionality of a WordPress plugin with this method and completely change the way it functions with your website. It is a way to completely transform the plugin. By adding your own custom callbacks, you can create the exact functionality you desire.

We suggest you go for a web developer proficient in WordPress as this requires a good amount of technical knowledge and the working of a plugin.

Read More

#customize wordpress plugins #how to customize plugins in wordpress #how to customize wordpress plugins #how to edit plugins in wordpress #how to edit wordpress plugins #wordpress plugin customization