1549982453
I have a util/config.json file like below:
{
  "dev": {
    "db": { "host": "localhost", "user": "root", "pass": "*****" },
    "json_indentation": 4,
    "database": "my-app-db-dev"
  },
  "prod": {
    "db": { "host": "somethingt", "user": "root", "pass": "****" },
    "json_indentation": 4,
    "database": "my-app-db-dev"
  }
}
from a util/config.js file, I have exported it to my app.js
const config = require('./config.json')
const environment = process.env.NODE_ENV || 'dev'
const defaultConfig = config.environment;
console.log(defaultConfig) // shows undefined why?
module.exports = defaultConfig
app.js is like so:
const express = require('express')
const config = require('./util/config')
but my defaultConfig is undefined and I can't fetch the data for dev (running npm run dev in the app.js directory), even though I have set the NODE_ENV variable like this in my package.json:
"scripts": {
  "dev": "set NODE_ENV=dev && nodemon app.js",
  "prod": "set NODE_ENV=prod && nodemon app.js"
}
#node-js
1549984826
You can try the following snippet. config.environment looks for a property literally named "environment", which doesn't exist on your config object; bracket notation looks up the key stored in the environment variable instead:
const config = require('./config.json')
const environment = process.env.NODE_ENV || 'dev'
const defaultConfig = config[environment];
console.log(defaultConfig) // now logs the settings for the current environment
module.exports = defaultConfig
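One more gotcha worth checking on Windows (this is an assumption about how your scripts run, not something visible in the snippet): set NODE_ENV=dev && nodemon app.js stores the value with a trailing space ("dev "), so the lookup can fail even with bracket notation. Either remove the space before && or trim the value:
const environment = (process.env.NODE_ENV || 'dev').trim() // guards against "dev " from Windows set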
1651383480
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless yml file and on the serverless-offline
plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important. serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| option | default | description |
| --- | --- | --- |
| apiKey | 0123456789 | When using API_KEY as the authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, the value of this option will be used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda functions handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing invoke urls to external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf. |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
appsync-simulator:
location: '.webpack/service' # use webpack build directory
dynamoDb:
endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
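For example, a typical installation looks like this (assuming Homebrew on macOS; see the watchman documentation for other platforms):
brew install watchman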
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
appsync-simulator:
watch:
- ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
- "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out by leaving an empty array or setting the option to false.
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to that instead.
Resource CloudFormation functions resolution
This plugin supports some resources resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported cfn functions, refer to the documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
environment:
BUCKET_NAME:
Ref: MyBucket # resolves to `my-bucket-name`
resources:
Resources:
MyDbTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: myTable
...
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket-name
...
# in your appsync config
dataSources:
- type: AMAZON_DYNAMODB
name: dynamosource
config:
tableName:
Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from Cloudformation; or you might want to use mocked values in your local environment. In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.
- refMap takes a mapping of resource name to value pairs
- getAttMap takes a mapping of resource name to attribute/values pairs
- importValueMap takes a mapping of import name to values pairs

Example:
custom:
appsync-simulator:
refMap:
# Override `MyDbTable` resolution from the previous example.
MyDbTable: 'mock-myTable'
getAttMap:
# define ElasticSearchInstance DomainName
ElasticSearchInstance:
DomainEndpoint: 'localhost:9200'
importValueMap:
other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
- type: AMAZON_ELASTICSEARCH
name: elasticsource
config:
# endpoint resolves as 'http://localhost:9200'
endpoint:
Fn::Join:
- ''
- - https://
- Fn::GetAtt:
- ElasticSearchInstance
- DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name. This notation can be used with all mocks - refMap, getAttMap and importValueMap:
provider:
environment:
FINISH_ACTIVITY_FUNCTION_ARN:
Fn::ImportValue: other-service-api-${self:provider.stage}-url
custom:
appsync-simulator:
importValueMap:
- key: other-service-api-${self:provider.stage}-url
value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you have the need of resolving others, feel free to open an issue and explain your use case.
For now, the supported resources to be automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".
custom:
appsync-simulator:
functions:
addUser:
url: http://localhost:3016/2015-03-31/functions/addUser/invocations
method: post
addPost:
url: https://jsonplaceholder.typicode.com/posts
method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed. null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
variableMap: {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
1651319520
Serverless APIGateway Service Proxy
This Serverless Framework plugin supports the AWS service proxy integration feature of API Gateway. You can directly connect API Gateway to AWS services without Lambda.
Run serverless plugin install in your Serverless project.
serverless plugin install -n serverless-apigateway-service-proxy
Here is the list of services this plugin supports for now: Kinesis Streams, SQS, S3, SNS, DynamoDB, and EventBridge. It will expand to other services in the future; please send a pull request if you are interested in one.
Define settings of the AWS services you want to integrate under custom > apiGatewayServiceProxies
and run serverless deploy
.
Sample syntax for Kinesis proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- kinesis: # partition key is set to the API Gateway request id by default
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey: 'hardcordedkey' # use static partitionkey
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis/{myKey} # use path parameter
method: post
partitionKey:
pathParam: myKey
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey:
bodyParam: data.myKey # use body parameter
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey:
queryStringParam: myKey # use query string param
streamName: { Ref: 'YourStream' }
cors: true
- kinesis: # PutRecords
path: /kinesis
method: post
action: PutRecords
streamName: { Ref: 'YourStream' }
cors: true
resources:
Resources:
YourStream:
Type: AWS::Kinesis::Stream
Properties:
ShardCount: 1
Sample request after deploying.
curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/kinesis -d '{"message": "some data"}' -H 'Content-Type:application/json'
Sample syntax for SQS proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs -d '{"message": "testtest"}' -H 'Content-Type:application/json'
If you'd like to pass additional data to the integration request, you can do so by including your custom API Gateway request parameters in serverless.yml like so:
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
requestParameters:
'integration.request.querystring.MessageAttribute.1.Name': "'cognitoIdentityId'"
'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'context.identity.cognitoIdentityId'
'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
'integration.request.querystring.MessageAttribute.2.Name': "'cognitoAuthenticationProvider'"
'integration.request.querystring.MessageAttribute.2.Value.StringValue': 'context.identity.cognitoAuthenticationProvider'
'integration.request.querystring.MessageAttribute.2.Value.DataType': "'String'"
The alternative way to pass MessageAttribute parameters is via a request body mapping template. See the SQS section under Customizing request body mapping templates.
Simplified response template customization
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message": "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Full response customization
If you want more control over the integration response, you can provide an array of objects for the response value:
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
response:
- statusCode: 200
selectionPattern: '2\\d{2}'
responseParameters: {}
responseTemplates:
application/json: |-
{ "message": "accepted" }
The object keys correspond to the API Gateway integration response object.
Sample syntax for S3 proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
key: static-key.json # use static key
cors: true
- s3:
path: /s3/{myKey} # use path param
method: get
action: GetObject
bucket:
Ref: S3Bucket
key:
pathParam: myKey
cors: true
- s3:
path: /s3
method: delete
action: DeleteObject
bucket:
Ref: S3Bucket
key:
queryStringParam: key # use query string param
cors: true
resources:
Resources:
S3Bucket:
Type: 'AWS::S3::Bucket'
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/s3 -d '{"message": "testtest"}' -H 'Content-Type:application/json'
Similar to the SQS support, you can customize the default request parameters in serverless.yml like so:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
If you'd like to use custom API Gateway request templates, you can do so like this:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: get
action: GetObject
bucket:
Ref: S3Bucket
request:
template:
application/json: |
#set ($specialStuff = $context.request.header.x-special)
#set ($context.requestOverride.path.object = $specialStuff.replaceAll('_', '-'))
{}
Note that if the client does not provide a Content-Type header in the request, API Gateway defaults to application/json.
A new customization parameter lets you set a custom Path Override in API Gateway other than the default {bucket}/{object}. This parameter is optional and, if not set, falls back to {bucket}/{object}. The Path Override will add {bucket}/ automatically in front.
Please keep in mind that key or path.object still needs to be set at the moment (maybe this will be made optional later on).
Usage (with two path parameters, folder and file, and a fixed file extension):
custom:
apiGatewayServiceProxies:
- s3:
path: /s3/{folder}/{file}
method: get
action: GetObject
pathOverride: '{folder}/{file}.xml'
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.folder': 'method.request.path.folder'
'integration.request.path.file': 'method.request.path.file'
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
This will result in API Gateway setting the Path Override attribute to {bucket}/{folder}/{file}.xml. So, for example, if you navigate to the API Gateway endpoint /language/en, it will fetch the file in S3 from {bucket}/language/en.xml.
You can use a greedy path for deeper folders. The aforementioned example can also be shortened by a greedy approach. Thanks to @taylorreece for mentioning this.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3/{myPath+}
method: get
action: GetObject
pathOverride: '{myPath}.xml'
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.myPath': 'method.request.path.myPath'
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
This will translate, for example, /s3/a/b/c to a/b/c.xml.
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
key: static-key.json
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message": "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Sample syntax for SNS proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
resources:
Resources:
SNSTopic:
Type: AWS::SNS::Topic
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sns -d '{"message": "testtest"}' -H 'Content-Type:application/json'
Simplified response template customization
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message": "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Full response customization
If you want more control over the integration response, you can provide an array of objects for the response value:
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
response:
- statusCode: 200
selectionPattern: '2\d{2}'
responseParameters: {}
responseTemplates:
application/json: |-
{ "message": "accepted" }
The object keys correspond to the API Gateway integration response object.
Content Handling and Pass Through Behaviour customization
If you want to work with binary fata, you can not specify contentHandling
and PassThrough
inside the request
object.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
request:
contentHandling: CONVERT_TO_TEXT
passThrough: WHEN_NO_TEMPLATES
The allowed values correspond with the API Gateway Method integration for ContentHandling and PassthroughBehavior
Sample syntax for DynamoDB proxy in serverless.yml. Currently, the supported DynamoDB operations are PutItem, GetItem and DeleteItem.
custom:
apiGatewayServiceProxies:
- dynamodb:
path: /dynamodb/{id}/{sort}
method: put
tableName: { Ref: 'YourTable' }
hashKey: # set pathParam or queryStringParam as a partitionkey.
pathParam: id
attributeType: S
rangeKey: # required if also using sort key. set pathParam or queryStringParam.
pathParam: sort
attributeType: S
action: PutItem # specify the action you want to perform on the table
condition: attribute_not_exists(Id) # optional Condition Expressions parameter for the table
cors: true
- dynamodb:
path: /dynamodb
method: get
tableName: { Ref: 'YourTable' }
hashKey:
queryStringParam: id # use query string parameter
attributeType: S
rangeKey:
queryStringParam: sort
attributeType: S
action: GetItem
cors: true
- dynamodb:
path: /dynamodb/{id}
method: delete
tableName: { Ref: 'YourTable' }
hashKey:
pathParam: id
attributeType: S
action: DeleteItem
cors: true
resources:
Resources:
YourTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: YourTable
AttributeDefinitions:
- AttributeName: id
AttributeType: S
- AttributeName: sort
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
- AttributeName: sort
KeyType: RANGE
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
Sample request after deploying.
curl -XPUT https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/dynamodb/<hashKey>/<sortkey> \
-d '{"name":{"S":"john"},"address":{"S":"xxxxx"}}' \
-H 'Content-Type:application/json'
Sample syntax for EventBridge proxy in serverless.yml
.
custom:
apiGatewayServiceProxies:
- eventbridge: # source and detailType are hardcoded; detail defaults to POST body
path: /eventbridge
method: post
source: 'hardcoded_source'
detailType: 'hardcoded_detailType'
eventBusName: { Ref: 'YourBus' }
cors: true
- eventbridge: # source and detailType as path parameters
path: /eventbridge/{detailTypeKey}/{sourceKey}
method: post
detailType:
pathParam: detailTypeKey
source:
pathParam: sourceKey
eventBusName: { Ref: 'YourBus' }
cors: true
- eventbridge: # source, detail, and detailType as body parameters
path: /eventbridge/{detailTypeKey}/{sourceKey}
method: post
detailType:
bodyParam: data.detailType
source:
bodyParam: data.source
detail:
bodyParam: data.detail
eventBusName: { Ref: 'YourBus' }
cors: true
resources:
Resources:
YourBus:
Type: AWS::Events::EventBus
Properties:
Name: YourEventBus
Sample request after deploying.
curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/eventbridge -d '{"message": "some data"}' -H 'Content-Type:application/json'
To set CORS configurations for your HTTP endpoints, simply modify your event configurations as follows:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors: true
Setting cors to true assumes a default configuration which is equivalent to:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
headers:
- Content-Type
- X-Amz-Date
- Authorization
- X-Api-Key
- X-Amz-Security-Token
- X-Amz-User-Agent
allowCredentials: false
Configuring the cors property sets the Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Credentials headers in the CORS preflight response. To enable the Access-Control-Max-Age preflight response header, set the maxAge property in the cors object:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
maxAge: 86400
If you are using CloudFront or another CDN for your API Gateway, you may want to set up a Cache-Control header to allow OPTIONS requests to be cached to avoid the additional hop.
To enable the Cache-Control header on preflight response, set the cacheControl property in the cors object:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
headers:
- Content-Type
- X-Amz-Date
- Authorization
- X-Api-Key
- X-Amz-Security-Token
- X-Amz-User-Agent
allowCredentials: false
cacheControl: 'max-age=600, s-maxage=600, proxy-revalidate' # Caches on browser and proxy for 10 minutes and doesn't allow the proxy to serve out-of-date content
You can pass in any supported authorization type:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
# optional - defaults to 'NONE'
authorizationType: 'AWS_IAM' # can be one of ['NONE', 'AWS_IAM', 'CUSTOM', 'COGNITO_USER_POOLS']
# when using 'CUSTOM' authorization type, one should specify authorizerId
# authorizerId: { Ref: 'AuthorizerLogicalId' }
# when using 'COGNITO_USER_POOLS' authorization type, one can specify a list of authorization scopes
# authorizationScopes: ['scope1','scope2']
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
Source: AWS::ApiGateway::Method docs
You can indicate whether the method requires clients to submit a valid API key using the private flag:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
private: true
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
which is the same syntax used in the Serverless framework.
Source: Serverless: Setting API keys for your Rest API
Source: AWS::ApiGateway::Method docs
By default, the plugin will generate a role with the required permissions for each service type that is configured.
You can configure your own role by setting the roleArn attribute:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
roleArn: # Optional. A default role is created when not configured
Fn::GetAtt: [CustomS3Role, Arn]
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
CustomS3Role:
# Custom Role definition
Type: 'AWS::IAM::Role'
The plugin allows one to specify which parameters the API Gateway method accepts.
A common use case is to pass custom data to the integration request:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
cors: true
acceptParameters:
'method.request.header.Custom-Header': true
requestParameters:
'integration.request.querystring.MessageAttribute.1.Name': "'custom-Header'"
'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'method.request.header.Custom-Header'
'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
resources:
Resources:
SqsQueue:
Type: 'AWS::SQS::Queue'
Any published SQS message will have the Custom-Header value added as a message attribute.
If you'd like to add content types or customize the default templates, you can do so by including your custom API Gateway request mapping template in serverless.yml like so:
# Required for using Fn::Sub
plugins:
- serverless-cloudformation-sub-variables
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'MyStream' }
request:
template:
text/plain:
Fn::Sub:
- |
#set($msgBody = $util.parseJson($input.body))
#set($msgId = $msgBody.MessageId)
{
"Data": "$util.base64Encode($input.body)",
"PartitionKey": "$msgId",
"StreamName": "#{MyStreamArn}"
}
- MyStreamArn:
Fn::GetAtt: [MyStream, Arn]
It is important that the mapping template returns a valid application/json string.
Source: How to connect SNS to Kinesis for cross-account delivery via API Gateway
Customizing SQS request templates requires us to force all requests to use an application/x-www-form-urlencoded style body. The plugin sets the Content-Type header to application/x-www-form-urlencoded for you, but API Gateway will still look for the template under the application/json request template type, so that is where you need to configure your request body in serverless.yml:
custom:
apiGatewayServiceProxies:
- sqs:
path: /{version}/event/receiver
method: post
queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
request:
template:
application/json: |-
#set ($body = $util.parseJson($input.body))
Action=SendMessage##
&MessageGroupId=$util.urlEncode($body.event_type)##
&MessageDeduplicationId=$util.urlEncode($body.event_id)##
&MessageAttribute.1.Name=$util.urlEncode("X-Custom-Signature")##
&MessageAttribute.1.Value.DataType=String##
&MessageAttribute.1.Value.StringValue=$util.urlEncode($input.params("X-Custom-Signature"))##
&MessageBody=$util.urlEncode($input.body)
Note that the ## at the end of each line is an empty comment. In VTL this has the effect of stripping the newline from the end of the line (as it is commented out), which makes API Gateway read all the lines in the template as one line.
Be careful when mixing additional requestParameters into your SQS endpoint, as you may overwrite the integration.request.header.Content-Type and stop the request template from being parsed correctly. You may also unintentionally create conflicts between parameters passed using requestParameters and those in your request template. Typically you should only use the request template if you need to manipulate the incoming request body in some way.
Your custom template must also set the Action and MessageBody parameters, as these will not be added for you by the plugin.
When using a custom request body, headers sent by a client will no longer be passed through to the SQS queue (PassthroughBehavior is automatically set to NEVER). You will need to pass through headers sent by the client explicitly in the request body. Also, any custom querystring parameters in the requestParameters array will be ignored. These also need to be added via the custom request body.
Similar to the Kinesis support, you can customize the default request mapping templates in serverless.yml like so:
# Required for using Fn::Sub
plugins:
- serverless-cloudformation-sub-variables
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
request:
template:
application/json:
Fn::Sub:
- "Action=Publish&Message=$util.urlEncode('This is a fixed message')&TopicArn=$util.urlEncode('#{MyTopicArn}')"
- MyTopicArn: { Ref: MyTopic }
It is important that the mapping template returns a valid application/x-www-form-urlencoded string.
Source: Connect AWS API Gateway directly to SNS using a service integration
You can customize the response body by providing mapping templates for success, server errors (5xx) and client errors (4xx).
Templates must be in JSON format. If a template isn't provided, the integration response will be returned as-is to the client.
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'MyStream' }
response:
template:
success: |
{
"success": true
}
serverError: |
{
"success": false,
"errorMessage": "Server Error"
}
clientError: |
{
"success": false,
"errorMessage": "Client Error"
}
Author: Serverless-operations
Source Code: https://github.com/serverless-operations/serverless-apigateway-service-proxy
License:
1672833558
Bash has many environment variables for various purposes. The set command of Bash is used to modify or display the different attributes and parameters of the shell environment. This command has many options to perform different types of tasks. The uses of the set command for various purposes are described in this tutorial.
set [options] [arguments]
This command can be used with different types of options and arguments for different purposes. If no option or argument is used with this command, the shell variables are printed. The minus sign (-) is used with the command’s option to enable that option and the plus sign (+) is used with the command’s option to disable that option.
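For example, here is a quick sketch of the minus/plus convention using the -x (tracing) option:
set -x #Enable command tracing
echo "this command is traced"
set +x #Disable command tracing again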
Three exit values can be returned by this command which are mentioned in the following:
The purposes of the most commonly used options of the set command are described in this part of the tutorial.
Option | Purpose |
---|---|
-a | Marks variables and functions that are created or modified for export to the environment. |
-b | Reports the termination of background jobs immediately. |
-B | Enables brace expansion. |
-C | Disables overwriting of existing files via output redirection. |
-e | Exits immediately when a command returns a non-zero exit status. |
-f | Disables filename generation (globbing). |
-h | Remembers (hashes) the location of commands as they are looked up. |
-m | Enables job control. |
-n | Reads commands without executing them. |
-t | Exits after reading and executing a single command. |
-u | Treats unset variables as errors when substituting. |
-v | Prints the shell input lines as they are read. |
-x | Displays the commands and their arguments as they are executed. It is mainly used to debug scripts. |
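Multiple options can also be combined in one call. For example, a common defensive pattern at the top of a script (a sketch using the -e, -u and -x options from the table above):
#!/bin/bash
#Exit on error, treat unset variables as errors, and trace every command
set -eux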
The uses of the set command with different options are shown in this part of the tutorial.
Example 1: Using the Set Command with -a Option
Create a Bash file with the following script that enables the "set -a" option and initializes three variables named $v1, $v2, and $v3. These variables can be accessed from the shell after the script is sourced.
#!/bin/bash
#Enable -a option to read the values of the variables
set -a
#Initialize three variables
v1=78
v2=50
v3=35
Source the script using the following command (sourcing is required so that the variables survive after the script finishes):
$ source set1.bash
Read the values of the variable using the “echo” command:
$ echo $v1 $v2 $v3
The following output appears after executing the previous commands:
Example 2: Using the Set Command with -C Option
Run the "cat" command to create a text file named testfile.txt. Next, run the "set -C" command to disable the overwriting feature. Then, run the "cat" command again to try to overwrite the file and check whether the overwriting feature is disabled or not.
$ cat > testfile.txt
$ set -C
$ cat > testfile.txt
The following output appears after executing the previous commands:
Example 3: Using the Set Command with -x Option
Create a Bash file with the following script that declares a numeric array of 6 elements. The values of the array are printed using a for loop.
#!/bin/bash
#Declare an array
arr=(67 3 90 56 2 80)
#iterate the array values
for value in ${arr[@]}
do
echo $value
done
Execute the previous script with the following command:
$ bash set3.bash
Enable the debugging option using the following command:
$ set -x
The following output appears after executing the provided commands:
Example 4: Using the Set Command with -e Option
Create a Bash file with the following script that reads a file using the "cat" command before and after running the "set -e" command.
#!/bin/bash
#Read a non-existing file without setting set -e
cat myfile.txt
echo "Reading a file..."
#Set the set command with -e option
set -e
#Read a non-existing file after setting set -e
cat myfile.txt
echo "Reading a file..."
The following output appears after executing the provided commands. The first error message is shown because the file does not exist in the current location, and the next message is then printed anyway. But after executing the "set -e" command, the execution stops right after displaying the error message.
Example 5: Using the Set Command with -u Option
Create a Bash file with the following script that initializes a variable and prints the initialized and an uninitialized variable before and after running the "set -u" command.
#!/bin/bash
#Assign value to a variable
strvar="Bash Programming"
printf "$strvar $intvar\n"
#Set the set command with -u option
set -u
#Assign value to a variable
strvar="Bash Programming"
printf "\n$strvar $intvar\n"
The following output appears after executing the previous script. Here, the error is printed for the uninitialized variable:
Example 6: Using the Set Command with -f Option
Run the following command to print the list of all text files of the current location:
$ ls *.txt
Run the following command to disable the globbing:
$ set -f
Run the following command again to print the list of all text files of the current location:
$ ls *.txt
The following output appears after executing the previous commands. Based on the output, the "ls *.txt" command did not work after setting the "set -f" command:
Example 7: Split the String Using the Set Command with Variable
Create a Bash file with the following script that splits a string value based on spaces using the "set --" command with a variable. The split values are printed later.
#!/bin/bash
#Define a string variable
myvar="Learn bash programming"
#Set the set command without option and with variable
set -- $myvar
#Print the split value
printf "$1\n$2\n$3\n"
The following output appears after executing the previous script. The string value is divided into three parts based on the spaces, and these parts are printed:
The uses of the different options of the "set" command have been shown in this tutorial using multiple examples to illustrate the basic uses of this command.
Original article source at: https://linuxhint.com/
1669099573
In this article, we will learn what face recognition is and how it differs from face detection. We will go briefly over the theory of face recognition and then jump into the coding section. At the end of this article, you will be able to make a face recognition program for recognizing faces in images as well as in a live webcam feed.
In computer vision, one essential problem we are trying to figure out is how to automatically detect objects in an image without human intervention. Face detection can be thought of as such a problem where we detect human faces in an image. There may be slight differences between human faces, but overall, it is safe to say that there are certain features associated with all human faces. There are various face detection algorithms, but the Viola-Jones Algorithm is one of the oldest methods still in use today, and we will use it later in the article. You can go through the Viola-Jones Algorithm after completing this article, as I'll link it at the end.
Face detection is usually the first step towards many face-related technologies, such as face recognition or verification. However, face detection can have very useful applications. The most successful application of face detection would probably be photo taking. When you take a photo of your friends, the face detection algorithm built into your digital camera detects where the faces are and adjusts the focus accordingly.
For a tutorial on Real-Time Face detection
Now that we are successful in making algorithms that can detect faces, can we also recognise whose faces they are?
Face recognition is a method of identifying or verifying the identity of an individual using their face. There are various algorithms that can do face recognition but their accuracy might vary. Here I am going to describe how we do face recognition using deep learning.
So now let us understand how we recognise faces using deep learning. We make use of face embedding in which each face is converted into a vector and this technique is called deep metric learning. Let me further divide this process into three simple steps for easy understanding:
Face Detection: The very first task we perform is detecting faces in the image or video stream. Now that we know the exact location/coordinates of face, we extract this face for further processing ahead.
Feature Extraction: Now that we have cropped the face out of the image, we extract features from it. Here we are going to use face embeddings to extract the features out of the face. A neural network takes an image of the person’s face as input and outputs a vector which represents the most important features of a face. In machine learning, this vector is called embedding and thus we call this vector as face embedding. Now how does this help in recognizing faces of different persons?
While training the neural network, the network learns to output similar vectors for faces that look similar. For example, if I have multiple images of faces within different timespan, of course, some of the features of my face might change but not up to much extent. So in this case the vectors associated with the faces are similar or in short, they are very close in the vector space. Take a look at the below diagram for a rough idea:
Now after training the network, the network learns to output vectors that are closer to each other(similar) for faces of the same person(looking similar). The above vectors now transform into:
We are not going to train such a network here as it takes a significant amount of data and computation power to train such networks. We will use a pre-trained network trained by Davis King on a dataset of ~3 million images. The network outputs a vector of 128 numbers which represent the most important features of a face.
Now that we know how this network works, let us see how we use this network on our own data. We pass all the images in our data to this pre-trained network to get the respective embeddings and save these embeddings in a file for the next step.
Comparing faces: Now that we have face embeddings for every face in our data saved in a file, the next step is to recognise a new image that is not in our data. So the first step is to compute the face embedding for the image using the same network we used above and then compare this embedding with the rest of the embeddings we have. We recognise the face if the generated embedding is closer or similar to any other embedding, as shown below:
So we passed two images: one of the images is of Vladimir Putin and the other of George W. Bush. In our example above, we did not save the embeddings for Putin, but we saved the embeddings of Bush. Thus, when we compared the two new embeddings with the existing ones, the vector for Bush is closer to the other face embeddings of Bush, whereas the face embeddings of Putin are not closer to any other embedding, and thus the program cannot recognise him.
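In code, this comparison boils down to a distance check between the embedding vectors. Here is a minimal sketch (the 0.6 threshold is the conventional default tolerance used by dlib and the face_recognition library):

import numpy as np

def is_match(known_embedding, candidate_embedding, threshold=0.6):
    # two faces are considered the same person when their 128-d
    # embeddings are close enough in Euclidean distance
    return np.linalg.norm(known_embedding - candidate_embedding) <= threshold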
In the field of Artificial Intelligence, Computer Vision is one of the most interesting and challenging tasks. Computer Vision acts like a bridge between computer software and the visualizations around us. It allows computer software to understand and learn about the visualizations in the surroundings. For example: determining a fruit based on its color, shape and size. This task can be very easy for the human brain; however, in the Computer Vision pipeline, first we gather the data, then we perform data processing activities, and then we train and teach the model how to distinguish between fruits based on their size, shape and color.
Currently, various packages are available to perform machine learning, deep learning and computer vision tasks. By far, OpenCV is the best-known module for such complex activities. OpenCV is an open-source library. It is supported by various programming languages such as R and Python, and it runs on most platforms, such as Windows, Linux and macOS.
To know more about how face recognition works on opencv, check out the free course on face recognition in opencv.
Advantages of OpenCV:
Installation:
Here we will be focusing on installing OpenCV for Python only. We can install OpenCV using pip or conda (for an Anaconda environment).
Using pip, the installation process of openCV can be done by using the following command in the command prompt.
pip install opencv-python
If you are using anaconda environment, either you can execute the above code in anaconda prompt or you can execute the following code in anaconda prompt.
conda install -c conda-forge opencv
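Either way, you can quickly verify the installation with a one-liner (a simple check, assuming a working Python environment):
python -c "import cv2; print(cv2.__version__)"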
In this section, we shall implement face recognition using OpenCV and Python. First, let us see the libraries we will need and how to install them:
OpenCV is an image and video processing library and is used for image and video analysis, like facial detection, license plate reading, photo editing, advanced robotic vision, optical character recognition, and a whole lot more.
The dlib library, maintained by Davis King, contains our implementation of “deep metric learning” which is used to construct our face embeddings used for the actual recognition process.
The face_recognition library, created by Adam Geitgey, wraps around dlib’s facial recognition functionality, and this library is super easy to work with and we will be using this in our code. Remember to install dlib library first before you install face_recognition.
To install OpenCV, type in command prompt
pip install opencv-python
I have tried various ways to install dlib on Windows but the easiest of all of them is via Anaconda. First, install Anaconda (here is a guide to install it) and then use this command in your command prompt:
conda install -c conda-forge dlib
Next to install face_recognition, type in command prompt
pip install face_recognition
Now that we have all the dependencies installed, let us start coding. We will have to create three files: one will take our dataset and extract a face embedding for each face using dlib, saving these embeddings to a file. In the next file, we will compare new faces with the saved embeddings to recognise faces in images, and in the last one we will do the same on a live webcam feed.
First, you need to get a dataset or even create one of your own. Just make sure to arrange all images in folders, with each folder containing images of just one person.
Next, save the dataset in a folder next to the file you are going to create.
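Here is a minimal sketch of such a script (the "dataset" folder name is an assumption; the embeddings are pickled to "face_enc", the file name used in the rest of the article):

import os
import pickle
import cv2
import face_recognition

dataset_dir = "dataset"  # one sub-folder per person, named after that person
known_encodings = []
known_names = []

for person in os.listdir(dataset_dir):
    person_dir = os.path.join(dataset_dir, person)
    for filename in os.listdir(person_dir):
        image = cv2.imread(os.path.join(person_dir, filename))
        # face_recognition expects RGB while OpenCV loads BGR
        rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        # detect face locations, then compute a 128-d embedding per face
        boxes = face_recognition.face_locations(rgb, model="hog")
        for encoding in face_recognition.face_encodings(rgb, boxes):
            known_encodings.append(encoding)
            known_names.append(person)

# save the embeddings so the recognition scripts can reuse them
with open("face_enc", "wb") as f:
    pickle.dump({"encodings": known_encodings, "names": known_names}, f)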
Now that we have stored the embeddings in a file named "face_enc", we can use them to recognise faces in images or in a live video stream.
Next is the script to recognise faces on a live webcam feed.
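A minimal sketch, assuming a haar cascade for detection (as mentioned below) and the "face_enc" file from the previous step:

import pickle
import cv2
import face_recognition

# load the known embeddings computed by the previous script
data = pickle.loads(open("face_enc", "rb").read())

# haar cascade bundled with OpenCV for fast face detection
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

video = cv2.VideoCapture(0)
while True:
    ret, frame = video.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # convert haar (x, y, w, h) boxes to face_recognition's (top, right, bottom, left)
    boxes = [(y, x + w, y + h, x) for (x, y, w, h) in faces]
    encodings = face_recognition.face_encodings(rgb, boxes)
    for (x, y, w, h), encoding in zip(faces, encodings):
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            # vote among all known faces that matched
            counts = {}
            for i, matched in enumerate(matches):
                if matched:
                    counts[data["names"][i]] = counts.get(data["names"][i], 0) + 1
            name = max(counts, key=counts.get)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
video.release()
cv2.destroyAllWindows()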
https://www.youtube.com/watch?v=fLnGdkZxRkg
Although in the example above we have used a haar cascade to detect faces, you can also use face_recognition.face_locations to detect a face, as we did in the previous script.
The script for detecting and recognising faces in images is very similar to what you saw above. Try it yourself first; a sketch follows below.
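A minimal sketch of the image version, under the same assumptions ("face_enc" from the first script; the input path example.png is hypothetical):

import pickle
import cv2
import face_recognition

data = pickle.loads(open("face_enc", "rb").read())

image = cv2.imread("example.png")  # hypothetical input image path
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# detect faces and compute their embeddings in one pass
boxes = face_recognition.face_locations(rgb, model="hog")
encodings = face_recognition.face_encodings(rgb, boxes)

for (top, right, bottom, left), encoding in zip(boxes, encodings):
    matches = face_recognition.compare_faces(data["encodings"], encoding)
    name = "Unknown"
    if True in matches:
        counts = {}
        for i, matched in enumerate(matches):
            if matched:
                counts[data["names"][i]] = counts.get(data["names"][i], 0) + 1
        name = max(counts, key=counts.get)
    cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.putText(image, name, (left, top - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)

cv2.imshow("Image", image)
cv2.waitKey(0)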
This brings us to the end of this article where we learned about face recognition.
You can also upskill with Great Learning’s PGP Artificial Intelligence and Machine Learning Course. The course offers mentorship from industry leaders, and you will also have the opportunity to work on real-time industry-relevant projects.
Original article source at: https://www.mygreatlearning.com
1561523460
This Matplotlib cheat sheet introduces you to the basics that you need to plot your data with Python and includes code samples.
Data visualization and storytelling with your data are essential skills that every data scientist needs to communicate insights gained from analyses effectively to any audience out there.
For most beginners, the first package that they use to get in touch with data visualization and storytelling is, naturally, Matplotlib: it is a Python 2D plotting library that enables users to make publication-quality figures. But, what might be even more convincing is the fact that other packages, such as Pandas, intend to build more plotting integration with Matplotlib as time goes on.
However, what might slow down beginners is the fact that this package is pretty extensive. There is so much that you can do with it and it might be hard to still keep a structure when you're learning how to work with Matplotlib.
DataCamp has created a Matplotlib cheat sheet for those who might already know how to use the package to their advantage to make beautiful plots in Python, but who still want to keep a one-page reference handy. Of course, for those who don't know how to work with Matplotlib, this might be the extra push to be convinced and to finally get started with data visualization in Python.
You'll see that this cheat sheet presents you with the six basic steps that you can go through to make beautiful plots.
Check out the infographic by clicking on the button below:
With this handy reference, you'll familiarize yourself in no time with the basics of Matplotlib: you'll learn how you can prepare your data, create a new plot, use some basic plotting routines to your advantage, add customizations to your plots, and save, show and close the plots that you make.
What might have looked difficult before will definitely be more clear once you start using this cheat sheet! Use it in combination with the Matplotlib Gallery and the documentation.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
>>> import numpy as np
>>> x = np.linspace(0, 10, 100)
>>> y = np.cos(x)
>>> z = np.sin(x)
>>> data = 2 * np.random.random((10, 10))
>>> data2 = 3 * np.random.random((10, 10))
>>> Y, X = np.mgrid[-3:3:100j, -3:3:100j]
>>> U = -1 - X**2 + Y
>>> V = 1 + X - Y**2
>>> from matplotlib.cbook import get_sample_data
>>> img = np.load(get_sample_data('axes_grid/bivariate_normal.npy'))
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> fig2 = plt.figure(figsize=plt.figaspect(2.0))
>>> fig.add_axes()
>>> ax1 = fig.add_subplot(221) #row-col-num
>>> ax3 = fig.add_subplot(212)
>>> fig3, axes = plt.subplots(nrows=2,ncols=2)
>>> fig4, axes2 = plt.subplots(ncols=3)
>>> plt.savefig('foo.png') #Save figures
>>> plt.savefig('foo.png', transparent=True) #Save transparent figures
>>> plt.show()
>>> fig, ax = plt.subplots()
>>> lines = ax.plot(x,y) #Draw points with lines or markers connecting them
>>> ax.scatter(x,y) #Draw unconnected points, scaled or colored
>>> axes[0,0].bar([1,2,3],[3,4,5]) #Plot vertical rectangles (constant width)
>>> axes[1,0].barh([0.5,1,2.5],[0,1,2]) #Plot horizontal rectangles (constant height)
>>> axes[1,1].axhline(0.45) #Draw a horizontal line across axes
>>> axes[0,1].axvline(0.65) #Draw a vertical line across axes
>>> ax.fill(x,y,color='blue') #Draw filled polygons
>>> ax.fill_between(x,y,color='yellow') #Fill between y values and 0
>>> fig, ax = plt.subplots()
>>> im = ax.imshow(img, #Colormapped or RGB arrays
cmap= 'gist_earth',
interpolation= 'nearest',
vmin=-2,
vmax=2)
>>> axes2[0].pcolor(data2) #Pseudocolor plot of 2D array
>>> axes2[0].pcolormesh(data) #Pseudocolor plot of 2D array
>>> CS = plt.contour(Y,X,U) #Plot contours
>>> axes2[2].contourf(data) #Plot filled contours
>>> axes2[2]= ax.clabel(CS) #Label a contour plot
>>> axes[0,1].arrow(0,0,0.5,0.5) #Add an arrow to the axes
>>> axes[1,1].quiver(y,z) #Plot a 2D field of arrows
>>> axes[0,1].streamplot(X,Y,U,V) #Plot 2D vector fields
>>> ax1.hist(y) #Plot a histogram
>>> ax3.boxplot(y) #Make a box and whisker plot
>>> ax3.violinplot(z) #Make a violin plot
The basic steps to creating plots with matplotlib are:
1. Prepare data
2. Create plot
3. Plot
4. Customize plot
5. Save plot
6. Show plot
>>> import matplotlib.pyplot as plt
>>> x = [1,2,3,4] #Step 1
>>> y = [10,20,25,30]
>>> fig = plt.figure() #Step 2
>>> ax = fig.add_subplot(111) #Step 3
>>> ax.plot(x, y, color= 'lightblue', linewidth=3) #Step 3, 4
>>> ax.scatter([2,4,6],
[5,15,25],
color= 'darkgreen',
marker= '^' )
>>> ax.set_xlim(1, 6.5)
>>> plt.savefig('foo.png' ) #Step 5
>>> plt.show() #Step 6
>>> plt.cla() #Clear an axis
>>> plt.clf() #Clear the entire figure
>>> plt.close() #Close a window
>>> plt.plot(x, x, x, x**2, x, x** 3)
>>> ax.plot(x, y, alpha = 0.4)
>>> ax.plot(x, y, c= 'k')
>>> fig.colorbar(im, orientation= 'horizontal')
>>> im = ax.imshow(img,
cmap= 'seismic' )
>>> fig, ax = plt.subplots()
>>> ax.scatter(x,y,marker= ".")
>>> ax.plot(x,y,marker= "o")
>>> plt.plot(x,y,linewidth=4.0)
>>> plt.plot(x,y,ls= 'solid')
>>> plt.plot(x,y,ls= '--')
>>> plt.plot(x,y,'--' ,x**2,y**2,'-.' )
>>> plt.setp(lines,color= 'r',linewidth=4.0)
>>> ax.text(1,
-2.1,
'Example Graph',
style= 'italic' )
>>> ax.annotate("Sine",
xy=(8, 0),
xycoords= 'data',
xytext=(10.5, 0),
textcoords= 'data',
arrowprops=dict(arrowstyle= "->",
connectionstyle="arc3"),)
>>> plt.title(r'$\sigma_i=15$', fontsize=20)
Limits & Autoscaling
>>> ax.margins(x=0.0,y=0.1) #Add padding to a plot
>>> ax.axis('equal') #Set the aspect ratio of the plot to 1
>>> ax.set(xlim=[0,10.5],ylim=[-1.5,1.5]) #Set limits for x-and y-axis
>>> ax.set_xlim(0,10.5) #Set limits for x-axis
Legends
>>> ax.set(title= 'An Example Axes', #Set a title and x-and y-axis labels
ylabel= 'Y-Axis',
xlabel= 'X-Axis')
>>> ax.legend(loc= 'best') #No overlapping plot elements
Ticks
>>> ax.xaxis.set(ticks=range(1,5), #Manually set x-ticks
ticklabels=[3,100, 12,"foo" ])
>>> ax.tick_params(axis= 'y', #Make y-ticks longer and go in and out
direction= 'inout',
length=10)
Subplot Spacing
>>> fig3.subplots_adjust(wspace=0.5, #Adjust the spacing between subplots
hspace=0.3,
left=0.125,
right=0.9,
top=0.9,
bottom=0.1)
>>> fig.tight_layout() #Fit subplot(s) in to the figure area
Axis Spines
>>> ax1.spines[ 'top'].set_visible(False) #Make the top axis line for a plot invisible
>>> ax1.spines['bottom' ].set_position(( 'outward',10)) #Move the bottom axis line outward
Have this Cheat Sheet at your fingertips
Original article source at https://www.datacamp.com
#matplotlib #cheatsheet #python