Serverless-framework Headless Chrome Plugin
A Serverless-framework plugin which bundles the @serverless-chrome/lambda package and ensures that Headless Chrome is running when your function handler is invoked.
Install with yarn:
yarn add --dev serverless-plugin-chrome
Install with npm:
npm install --save-dev serverless-plugin-chrome
Requires Node 6.10 runtime.
Add the following plugin to your serverless.yml:
plugins:
  - serverless-plugin-chrome
Then, in your handler code, do whatever you want. Chrome will be running!
const CDP = require('chrome-remote-interface')

module.exports.hello = (event, context, callback, chrome) => {
  // Chrome is already running!
  CDP.Version()
    .then((versionInfo) => {
      callback(null, {
        statusCode: 200,
        body: JSON.stringify({
          versionInfo,
          chrome,
        }),
      })
    })
    .catch((error) => {
      callback(null, {
        statusCode: 500,
        body: JSON.stringify({
          error,
        }),
      })
    })
}
Further details are available in the Serverless Lambda example.
Example functions are available here.
Local development is supported. You must install the chrome-launcher package in your project. A locally installed version of Chrome will be launched.
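For example, assuming npm as your package manager:
npm install --save-dev chrome-launcher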
Command line flags (or "switches")
The behavior of Chrome varies between platforms. It may be necessary to experiment with flags to get the results you desire. On Lambda, default flags are used, but in development no default flags are used.
You can pass custom flags with which to launch Chrome using the custom section in serverless.yml. For example:
plugins:
  - serverless-plugin-chrome

custom:
  chrome:
    flags:
      - --window-size=1280,1696 # Letter size
      - --hide-scrollbars
      - --ignore-certificate-errors
    functions:
      - enableChromeOnThisFunctionName
      - mySuperChromeFunction
It is also possible to enable Chrome on only specific functions in your service using the custom.chrome.functions configuration. For example:
custom:
  chrome:
    functions:
      - enableChromeOnThisFunctionName
      - mySuperChromeFunction
You can enable debugging/logging output by specifying the DEBUG env variable in the provider section of serverless.yml. For example:
provider:
  name: aws
  runtime: nodejs6.10
  environment:
    DEBUG: "*"

plugins:
  - serverless-plugin-chrome
Load order is important.
For example, if you're using the serverless-webpack plugin, your plugin section should be:
plugins:
  - serverless-plugin-chrome # 1st
  - serverless-webpack
However, with the serverless-plugin-typescript plugin, the order is:
plugins:
  - serverless-plugin-typescript
  - serverless-plugin-chrome # 2nd
I keep getting a timeout error when deploying and it's really annoying.
Indeed, that is annoying. I've had the same problem, and so that's why it's now here in this troubleshooting section. This may be an issue in the underlying AWS SDK when using a slower Internet connection. Try changing the AWS_CLIENT_TIMEOUT environment variable to a higher value. For example, in your command prompt enter the following and try deploying again:
export AWS_CLIENT_TIMEOUT=3000000
Aaaaaarggghhhhhh!!!
Uuurrrggghhhhhh! Have you tried filing an Issue?
Author: Adieuadieu
Source Code: https://github.com/adieuadieu/serverless-chrome/tree/master/packages/serverless-plugin
License: MIT license
#serverless #chrome #plugin #aws
Serverless Kubeless Offline Plugin
This Serverless plugin emulates Kubeless on your local machine without minikube to speed up your development cycles. To do so, it starts an HTTP server that handles the request's lifecycle like Kubeless does and invokes your handlers.
Features: serverless-webpack support.
For Serverless v1.x only.
First, add Serverless Kubeless Offline to your project:
npm install serverless-kubeless-offline --save-dev
Then inside your project's serverless.yml file add the following entry to the plugins section: serverless-kubeless-offline. If there is no plugins section you will need to add it to the file.
It should look something like this:
plugins:
  - serverless-kubeless-offline
You can check whether you have successfully installed the plugin by running the serverless command line:
serverless
The console should display KubelessOfflinePlugin as one of the plugins now available in your Serverless project.
In your project root run:
serverless offline start
or sls offline start.
To list all the options for the plugin run:
sls offline --help
All CLI options are optional:
--port -p Port to listen on. Default: 3000
--httpsProtocol -H To enable HTTPS, specify directory (relative to your cwd, typically your project dir) for local key and certificate files.
This is how to generate local key and certificate (valid for 365 days) files for HTTPS using openssl:
openssl req -newkey rsa:2048 -new -nodes -keyout key.pem -out csr.pem
openssl x509 -req -days 365 -in csr.pem -signkey key.pem -out server.crt
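As a sketch, assuming you place the generated key.pem and server.crt in a certs directory inside your project, you could then start the plugin with HTTPS enabled on a custom port:
# hypothetical invocation; certs is whatever directory holds your key and certificate files
sls offline start --port 4000 --httpsProtocol certs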
Any of the CLI options can be added to your serverless.yml. For example:
custom:
  serverless-kubeless-offline:
    port: 4000
Options passed on the command line override YAML options.
By default you can send your requests to http://localhost:3000/. Please note that:
- You'll need to restart the plugin if you modify your serverless.yml.
- The plugin handles CORS preflight (OPTIONS) requests for you, but additional CORS headers for your responses should be set by your handlers.
- When running with this plugin, process.env.IS_OFFLINE is true.
- When the Content-Type header is set to 'application/json' on a request, Kubeless will JSON.parse the body and place it at event.data, and so does the plugin. But if you send any other Content-Type, Kubeless and this plugin will parse the body as a string and place it at event.data. You can always access the request and response objects directly in a Kubeless environment through event.extensions.request and event.extensions.response (a short handler sketch follows this list).
- Running the serverless kubeless start command will fire an init and an end lifecycle hook which is needed for serverless-offline to switch off resources.
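For illustration, a minimal handler sketch following the conventions described above (the module and function names are hypothetical, not part of the plugin):
// handler.js -- hypothetical example
module.exports.echo = (event, context) => {
  // event.data is parsed JSON for application/json requests, otherwise the raw body string
  const payload = event.data
  // the underlying request/response objects are also exposed
  const userAgent = event.extensions.request.headers['user-agent']
  return JSON.stringify({ payload, userAgent })
}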
Add plugins to your serverless.yml file:
plugins:
  - serverless-webpack
  - serverless-kubeless
  - serverless-kubeless-offline # serverless-kubeless-offline needs to be last in the list
Serverless Kubeless Offline plugin will respond to the overall framework settings and output additional information to the console in debug mode. In order to do this you will have to set the SLS_DEBUG environment variable. You can run the following in the command line to switch to debug mode execution.
Unix:
export SLS_DEBUG=*
Windows:
SET SLS_DEBUG=*
Interactive debugging is also possible for your project if you have installed the node-inspector module and the Chrome browser. You can then run the following command line inside your project's root.
Initial installation: npm install -g node-inspector
For each debug run: node-debug sls offline
The system will start in wait status. This will also automatically start the Chrome browser and wait for you to set breakpoints for inspection. Set the breakpoints as needed and, then, click the play button for the debugging to continue.
Depending on the breakpoint, you may need to call the URL path for your function in a separate browser window for your serverless function to be run and made available for debugging.
This plugin simulates the NodeJS runtime in Kubeless for many practical purposes, good enough for development - but is not a perfect simulator. Specifically, Kubeless currently runs on Node v6.x and v8.x, whereas Kubeless Offline runs on your own runtime where no memory limits are enforced.
The HTTP server in this plugin mimics the NodeJS server (https://github.com/kubeless/kubeless/blob/master/docker/runtime/nodejs/kubeless.js) in the Kubeless runtime as closely as possible. If you find any discrepancies, please file an issue.
This plugin is heavily inspired by the Serverless Offline Plugin (especially its README). A big thank you to all the contributors there!
It is also mutually incompatible with the Serverless Offline Plugin, since they both define and emit the same events in order to be compatible with serverless-webpack. You cannot add both this plugin and the standard Serverless Offline Plugin, which simulates AWS Lambda/API Gateway, to the same Serverless service.
The vast majority of the actual server code is taken from the Kubeless team's (https://github.com/kubeless) NodeJS runtime server. Without them, this plugin wouldn't even make sense. Thanks for making it worth the time to build tools around your tools. ;)
Yes, thank you! Please update the docs and tests and add your name to the package.json file.
Author: usefulio
Source Code: https://github.com/usefulio/serverless-kubeless-offline
License: MIT license
Serverless M (or Serverless Modular) is a plugin for the Serverless Framework. This plugin helps you manage multiple serverless projects with a single serverless.yml file. It gives you super-charged CLI options that you can use to create new features, build them in a single file, and deploy them all in parallel.
Currently this plugin is tested for the below stack only
Make sure you have the serverless CLI installed
# Install serverless globally
$ npm install serverless -g
To start the serverless modular project locally you can either start with es5 or es6 templates or add it as a plugin
# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService
# Step 2. Change directory
$ cd myModularService
# Step 3. Create a package.json file
$ npm init
# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev
# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService
# Step 2. Change directory
$ cd myModularService
# Step 3. Create a package.json file
$ npm init
# Step 4. Install dependencies
$ npm i serverless-modular --save-dev
If you don't want to use the templates above you can just add it to your existing project:
plugins:
  - serverless-modular
Now you are all set to start building your serverless modular functions.
The Serverless Modular CLI can be accessed by:
# Serverless Modular CLI
$ serverless modular
# shorthand
$ sls m
Serverless Modular CLI is based on 5 main commands:
sls m init
sls m feature
sls m function
sls m build
sls m deploy
sls m init
The serverless init command helps in creating a basic .gitignore that is useful for serverless modular.
The basic .gitignore for serverless modular looks like this:
#node_modules
node_modules
#sm main functions
sm.functions.yml
#serverless file generated by build
src/**/serverless.yml
#main serverless directories generated for sls deploy
.serverless
#feature serverless directories generated sls deploy
src/**/.serverless
#serverless logs file generated for main sls deploy
.sm.log
#serverless logs file generated for feature sls deploy
src/**/.sm.log
#Webpack config copied in each feature
src/**/webpack.config.js
The feature command helps in building new features for your project
This command comes with three options
--name: Specify the name you want for your feature
--remove: set value to true if you want to remove the feature
--basePath: Specify the base path you want for your feature; this base path should be unique across all features. It helps when running offline with the offline plugin and for API Gateway
options | shortcut | required | values | default value |
---|---|---|---|---|
--name | -n | ✅ | string | N/A |
--remove | -r | ❎ | true, false | false |
--basePath | -p | ❎ | string | same as name |
Creating a basic feature
# Creating a jedi feature
$ sls m feature -n jedi
Creating a feature with different base path
# A feature with different base path
$ sls m feature -n jedi -p tatooine
Deleting a feature
# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true
The function command helps in adding a new function to a feature
This command comes with four options
--name: Specify the name you want for your function
--feature: Specify the name of the existing feature
--path: Specify the path for the HTTP endpoint; helps when running offline with the offline plugin and for API Gateway
--method: Specify the HTTP method; helps when running offline with the offline plugin and for API Gateway
options | shortcut | required | values | default value |
---|---|---|---|---|
--name | -n | ✅ | string | N/A |
--feature | -f | ✅ | string | N/A |
--path | -p | ❎ | string | same as name |
--method | -m | ❎ | string | 'GET' |
Creating a basic function
# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi
Creating a basic function with different path and method
# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST
The build command helps in building the project for local or global scope
This command comes with two options
--scope: Specify the scope of the build; use this with the --feature option
--feature: Specify the name of the existing feature you want to build
options | shortcut | required | values | default value |
---|---|---|---|---|
--scope | -s | ❎ | string | local |
--feature | -f | ❎ | string | N/A |
Saving build Config in serverless.yml
You can also save config in serverless.yml file
custom:
  smConfig:
    build:
      scope: local
All features build (local scope)
# Building all local features
$ sls m build
Single feature build (local scope)
# Building a single feature
$ sls m build -f jedi -s local
All features build (global scope)
# Building all features with global scope
$ sls m build -s global
The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command)
This command comes with four options
--sm-parallel: Specify if you want to deploy in parallel (will only run in parallel when doing multiple deployments)
--sm-scope: Specify if you want to deploy local features or global
--sm-features: Specify the local features you want to deploy (comma separated if multiple)
options | shortcut | required | values | default value |
---|---|---|---|---|
--sm-parallel | ❎ | ❎ | true, false | true |
--sm-scope | ❎ | ❎ | local, global | local |
--sm-features | ❎ | ❎ | string | N/A |
--sm-ignore-build | ❎ | ❎ | string | false |
Saving deploy Config in serverless.yml
You can also save config in serverless.yml file
custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true
Deploy all features locally
# deploy all local features
$ sls m deploy
Deploy all features globally
# deploy all global features
$ sls m deploy --sm-scope global
Deploy single feature
# deploy a single feature
$ sls m deploy --sm-features jedi
Deploy Multiple features
# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side
Deploy Multiple features in sequence
# deploy multiple features in sequence
$ sls m deploy --sm-features jedi,sith,dark_side --sm-parallel false
Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular
License: MIT license
serverless-finch
A Serverless Framework plugin for deployment of static website assets of your Serverless project to AWS S3.
npm install --save serverless-finch
First, update your serverless.yml by adding the following:
plugins:
  - serverless-finch

custom:
  client:
    bucketName: unique-s3-bucketname # (see Configuration Parameters below)
    # [other configuration parameters] (see Configuration Parameters below)
NOTE: For full example configurations, please refer to the examples folder.
Second, create a website folder in the root directory of your Serverless project. This is where your distribution-ready website should live. By default the plugin expects the files to live in a folder called client/dist, but this is configurable with the distributionFolder option (see the Configuration Parameters below).
The plugin uploads the entire distributionFolder to S3 and configures the bucket to host the website and make it publicly available, also setting other options based on the Configuration Parameters specified in serverless.yml.
To test the plugin initially you can copy/run the following commands in the root directory of your Serverless project to get a quick sample website for deployment:
mkdir -p client/dist
touch client/dist/index.html
touch client/dist/error.html
echo "Go Serverless" >> client/dist/index.html
echo "error page" >> client/dist/error.html
Third, run the plugin, and visit your new website!
serverless client deploy [--region $REGION] [--no-delete-contents] [--no-config-change] [--no-policy-change] [--no-cors-change]
The plugin should output the location of your newly deployed static site to the console.
Note: See Command-line Parameters for details on command above
WARNING: The plugin will overwrite any data you have in the bucket name you set above if it already exists.
If later on you want to take down the website you can use:
serverless client remove
bucketName
required
custom:
  client:
    bucketName: unique-s3-bucketname
Use this parameter to specify a unique name for the S3 bucket that your files will be uploaded to.
tags
optional, default: none
custom:
  client:
    ...
    tags:
      tagKey: tagvalue
      tagKey2: tagValue2
    ...
Use this parameter to specify a list of tags as key:value pairs that will be assigned to your bucket.
distributionFolder
optional, default: client/dist
custom:
  client:
    ...
    distributionFolder: path/to/files
    ...
Use this parameter to specify the path that contains your website files to be uploaded. This path is relative to the path that your serverless.yaml configuration file resides in.
indexDocument
optional, default: index.html
custom:
  client:
    ...
    indexDocument: file-name.ext
    ...
The name of your index document inside your distributionFolder. This is the file that will be served to a client visiting the base URL for your website.
errorDocument
optional, default: error.html
custom:
  client:
    ...
    errorDocument: file-name.ext
    ...
The name of your error document inside your distributionFolder. This is the file that will be served to a client if their initial request returns an error (e.g. 404). For an SPA, you may want to set this to the same document specified in indexDocument so that all requests are redirected to your index document and routing can be handled on the client side by your SPA.
bucketPolicyFile
custom:
  client:
    ...
    bucketPolicyFile: path/to/policy.json
    ...
Use this parameter to specify the path to a single custom policy file. If not set, it defaults to a config for a basic static website. Currently, only JSON is supported. In your policy, make sure that your resource has the correct bucket name specified above: "Resource": "arn:aws:s3:::BUCKET_NAME/*",
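For reference, a minimal public-read policy for a static website might look like the following (a sketch; replace BUCKET_NAME with your bucketName):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}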
Note: You can also use ${env:PWD} if you want to dynamically specify the policy within your repo. For example:
custom:
  client:
    ...
    bucketPolicyFile: "${env:PWD}/path/to/policy.json"
    ...
Additionally, you will want to specify different policies depending on your stage, using ${self:provider.stage} to ensure your BUCKET_NAME corresponds to the stage.
custom:
  client:
    ...
    bucketPolicyFile: "/path/to/policy-${self:provider.stage}.json"
    ...
corsFile
custom:
  client:
    ...
    corsFile: path/to/cors.json
    ...
Path to a JSON file defining the bucket CORS configuration. If not set, it defaults to the configuration defined here. See the docs above on the bucketPolicyFile option for how to provide a dynamic file path.
objectHeaders
optional, no default
custom:
  client:
    ...
    objectHeaders:
      ALL_OBJECTS:
        - name: header-name
          value: header-value
        ...
      'someGlobPattern/*.html':
        - name: header-name
          value: header-value
        ...
      specific-directory/:
        - name: header-name
          value: header-value
        ...
      specific-file.ext:
        - name: header-name
          value: header-value
        ...
      ... # more file- or folder-specific rules
    ...
Use the objectHeaders option to set HTTP response headers to be sent to clients requesting uploaded files from your website.
Headers may be specified globally for all files in the bucket by adding a name, value pair to the ALL_OBJECTS property of the objectHeaders option. They may also be specified for specific folders or files within your site by specifying properties with names like specific-directory/ (trailing slash required to indicate folder) or specific-file.ext, where the folder and/or file paths are relative to distributionFolder.
Headers with more specificity will take precedence over more general ones. For instance, if 'Cache-Control' was set to 'max-age=100' in ALL_OBJECTS and to 'max-age=500' in my/folder/, the files in my/folder/ would get a header of 'Cache-Control: max-age=500'.
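For example, the Cache-Control scenario above could be written as (a sketch; my/folder/ is just an illustrative path):
custom:
  client:
    ...
    objectHeaders:
      ALL_OBJECTS:
        - name: Cache-Control
          value: max-age=100
      my/folder/:
        - name: Cache-Control
          value: max-age=500
    ...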
redirectAllRequestsTo
optional, no default
custom:
  client:
    ...
    redirectAllRequestsTo:
      hostName: hostName
      protocol: protocol # "http" or "https"
    ...
Use the redirectAllRequestsTo option if you want to route all traffic coming to your website to a different address. hostName is the address that requests should be redirected to (e.g. 'www.other-website.com'). protocol is the protocol to use for the redirect and must be either 'http' or 'https'.
routingRules
optional, no default
custom:
  client:
    ...
    routingRules:
      - redirect:
          hostName: hostName
          httpRedirectCode: httpCode
          protocol: protocol # "http" or "https"
          replaceKeyPrefixWith: prefix
          replaceKeyWith: [object]
        condition:
          keyPrefixEquals: prefix
          httpErrorCodeReturnedEquals: httpCode
      - ...
    ...
The routingRules option can be used to define rules for when and how certain requests to your site should be redirected. Each rule in the routingRules list consists of a (required) redirect definition and (optionally) a condition on which the redirect is applied.

The redirect property of each rule has five optional parameters:
- hostName is the name of the host that the request should be redirected to (e.g. 'www.other-site.com'). Defaults to the host from the original request.
- httpRedirectCode is the HTTP status code to use for the redirect (e.g. 301, 303, 308).
- protocol is the protocol to use for the redirect and must be 'http' or 'https'. Defaults to the protocol from the original request.
- replaceKeyPrefixWith specifies the string to replace the portion of the route specified in keyPrefixEquals with in the redirect. For instance, if you want to redirect requests for pages starting with '/images' to pages starting with '/assets/images', you can specify keyPrefixEquals as '/images' and replaceKeyPrefixWith as '/assets/images'. Cannot be specified along with replaceKeyWith.
- replaceKeyWith specifies a specific page to redirect requests to (e.g. 'landing.html'). Cannot be specified along with replaceKeyPrefixWith.

The condition property has two optional parameters:
- keyPrefixEquals specifies that requests to pages starting with the specified value should be redirected. Often used with the replaceKeyPrefixWith and replaceKeyWith redirect properties.
- httpErrorCodeReturnedEquals specifies that requests resulting in the given HTTP error code (e.g. 404, 500) should be redirected.

If condition is not specified, then all requests will be redirected in accordance with the specified redirect properties.
uploadOrder
optional, no default
custom:
  client:
    ...
    uploadOrder:
      - .*
      - .*/assets/.*
      - service-worker\.js
      - index\.html
    ...
The uploadOrder option can be used to control the order that files are uploaded to the bucket. Each entry is evaluated as a case-insensitive regular expression. Unmatched files are uploaded first.
When combined with --no-delete-contents this can help with zero downtime (e.g. by uploading assets before the html files that depend on them).
keyPrefix
optional, no default
custom:
  client:
    ...
    keyPrefix: s3-folder/possible-sub-folder
    ...
Use the keyPrefix option to upload files to a prefixed S3 path. You can use this to specify a key prefix path such as static so the deployment matches the naming conventions of popular frontend frameworks and tools.
sse
optional, no default
custom:
  client:
    ...
    sse: AES256
    ...
Enable server side encryption for the uploaded files. You can use AES256 or aws:kms.
manageResources
optional, default: true (the plugin does manage your resources by default)
custom:
  client:
    ...
    manageResources: false
    ...
This allows you to opt out of having serverless-finch create or configure the s3 bucket. Instead, you can rely on an existing bucket or a CloudFormation definition.
--region
optional, defaults to the value specified in the provider section of serverless.yml
serverless client deploy --region $REGION
Use this parameter to specify what AWS region your bucket will be deployed in.
This option will always determine the deployment region if specified. If region is not specified via the CLI, we use the region option specified under custom/client in serverless.yml. If that is not specified, we use the Serverless region specified under provider in serverless.yml.
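For example, to set the region under custom/client instead of on the command line (a minimal sketch; us-west-2 is a placeholder):
custom:
  client:
    ...
    region: us-west-2
    ...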
--no-delete-contents
optional, default: false (deletes contents by default)
serverless client deploy --no-delete-contents
Use this parameter if you do not want to delete the contents of your bucket before deployment. Files uploaded during deployment will still replace any corresponding files already in your bucket.
--no-config-change
optional, default: false (overwrites bucket configuration by default)
serverless client deploy --no-config-change
Use this parameter if you do not want to overwrite the bucket configuration when deploying to your bucket.
--no-policy-change
optional, default: false (overwrites bucket policy by default)
serverless client deploy --no-policy-change
Use this parameter if you do not want to overwrite the bucket policy when deploying to your bucket.
--no-cors-change
optional, default: false (overwrites bucket CORS configuration by default)
serverless client deploy --no-cors-change
Use this parameter if you do not want to overwrite the bucket CORS configuration when deploying to your bucket.
--no-confirm
optional, default: false (disables confirmation prompt)
serverless client deploy --no-confirm
Use this parameter if you do not want a confirmation prompt to interrupt automated builds.
Please read our contribution guide.
See Releases for releases after v2.6.0.
keyPrefix option - Pull 102 - Joseph
sse option to allow you to encrypt files with Server Side Encryption using AES256 or aws:kms - Pull 91 - Severi Haverila
manageResources option to allow you to tell serverless-finch to not interact with your S3 bucket - Pull 75 - sprockow
keyPrefix option to enable working with S3 folders - Pull 76 - Archanium
distributionFolder configuration value. This enables you to upload your website files from a custom directory (Pull 12 - pradel)
remove option to tear down what you deploy. (Pull 10 thanks to redroot)
Forked from the serverless-client-s3 plugin.
Author: fernando-mc
Source Code: https://github.com/fernando-mc/serverless-finch
License: MIT license
serverless-reqvalidator-plugin
Serverless plugin to set specific validator request on method
npm install serverless-reqvalidator-plugin
This requires you to have the documentation plugin serverless-aws-documentation installed.
Specify the plugins:
plugins:
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation
In serverless.yml create a custom resource for request validators:
xMyRequestValidator:
  Type: "AWS::ApiGateway::RequestValidator"
  Properties:
    Name: 'my-req-validator'
    RestApiId:
      Ref: ApiGatewayRestApi
    ValidateRequestBody: true
    ValidateRequestParameters: false
For every function you wish to use the validator with, set the property reqValidatorName: 'xMyRequestValidator' to match the resource you described:
debug:
  handler: apis/admin/debug/debug.debug
  timeout: 10
  events:
    - http:
        path: admin/debug
        method: get
        cors: true
        private: true
        reqValidatorName: 'xMyRequestValidator'
The serverless framework allows us to share resources among several stacks. Therefore a CloudFormation Output has to be specified in one stack. This Output can be imported in another stack to make use of it. For more information see here.
Specify a request validator in a different stack:
plugins:
  - serverless-reqvalidator-plugin

service: my-service-a

functions:
  hello:
    handler: handler.myHandler
    events:
      - http:
          path: hello
          reqValidatorName: 'myReqValidator'

resources:
  Resources:
    xMyRequestValidator:
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'my-req-validator'
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: false
  Outputs:
    xMyRequestValidator:
      Value:
        Ref: xMyRequestValidator
      Export:
        Name: myReqValidator
Make use of the exported request validator in stack b:
plugins:
  - serverless-reqvalidator-plugin

service: my-service-b

functions:
  hello:
    handler: handler.myHandler
    events:
      - http:
          path: hello
          reqValidatorName:
            Fn::ImportValue: 'myReqValidator'
service:
  name: my-service

plugins:
  - serverless-webpack
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation

provider:
  name: aws
  runtime: nodejs6.10
  region: eu-west-2
  environment:
    NODE_ENV: ${self:provider.stage}

custom:
  documentation:
    api:
      info:
        version: '1.0.0'
        title: My API
        description: This is my API
    tags:
      - name: User
        description: User Management
    models:
      - name: MessageResponse
        contentType: "application/json"
        schema:
          type: object
          properties:
            message:
              type: string
      - name: RegisterUserRequest
        contentType: "application/json"
        schema:
          required:
            - email
            - password
          properties:
            email:
              type: string
            password:
              type: string
      - name: RegisterUserResponse
        contentType: "application/json"
        schema:
          type: object
          properties:
            result:
              type: string
      - name: 400JsonResponse
        contentType: "application/json"
        schema:
          type: object
          properties:
            message:
              type: string
            statusCode:
              type: number
  commonModelSchemaFragments:
    MethodResponse400Json:
      statusCode: '400'
      responseModels:
        "application/json": 400JsonResponse

functions:
  signUp:
    handler: handler.signUp
    events:
      - http:
          documentation:
            summary: "Register user"
            description: "Registers new user"
            tags:
              - User
            requestModels:
              "application/json": RegisterUserRequest
          method: post
          path: signup
          reqValidatorName: onlyBody
          methodResponses:
            - statusCode: '200'
              responseModels:
                "application/json": RegisterUserResponse
            - ${self:custom.commonModelSchemaFragments.MethodResponse400Json}
    package:
      include:
        - handler.ts

resources:
  Resources:
    onlyBody:
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'only-body'
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: false
Author: RafPe
Source Code: https://github.com/RafPe/serverless-reqvalidator-plugin
License: