Last week, I started working on a new NextJS application. After getting my application set up and deployed to Vercel (which is amazing), I wanted to write some tests. The first thing I did was check the NextJS documentation and look for instructions on how to get started with testing.
Searching for “testing” in the NextJS documentation
After trying and failing to find anything in their documentation, I resigned myself to figuring it out myself. So I created a new NextJS project using create-next-app and got started.
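In case it helps anyone following along, the commands I would reach for to bootstrap a project and add a test runner look roughly like this; the project name and the choice of Jest with React Testing Library are my own, not something the NextJS docs prescribe:

npx create-next-app@latest my-app
cd my-app
npm install --save-dev jest @testing-library/react @testing-library/jest-dom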
#testing #react #javascript #nextjs #programming
This serverless plugin is a wrapper for amplify-appsync-simulator made for testing AppSync APIs built with serverless-appsync-plugin.
Install
npm install serverless-appsync-simulator
# or
yarn add serverless-appsync-simulator
Usage
This plugin relies on your serverless.yml file and on the serverless-offline plugin.
plugins:
- serverless-dynamodb-local # only if you need dynamodb resolvers and you don't have an external dynamodb
- serverless-appsync-simulator
- serverless-offline
Note: Order is important. serverless-appsync-simulator must go before serverless-offline.
To start the simulator, run the following command:
sls offline start
You should see in the logs something like:
...
Serverless: AppSync endpoint: http://localhost:20002/graphql
Serverless: GraphiQl: http://localhost:20002
...
Configuration
Put options under custom.appsync-simulator in your serverless.yml file.
| Option | Default | Description |
| ------ | ------- | ----------- |
| apiKey | 0123456789 | When using API_KEY as authentication type, the key to authenticate to the endpoint. |
| port | 20002 | AppSync operations port; if using multiple APIs, this value is used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20002, 20012, 20022, etc.) |
| wsPort | 20003 | AppSync subscriptions port; if using multiple APIs, this value is used as a starting point, and each other API will have a port of lastPort + 10 (e.g. 20003, 20013, 20023, etc.) |
| location | . (base directory) | Location of the lambda function handlers. |
| refMap | {} | A mapping of resource resolutions for the Ref function |
| getAttMap | {} | A mapping of resource resolutions for the GetAtt function |
| importValueMap | {} | A mapping of resource resolutions for the ImportValue function |
| functions | {} | A mapping of external functions for providing an invoke url for external functions |
| dynamoDb.endpoint | http://localhost:8000 | DynamoDB endpoint. Specify it if you're not using serverless-dynamodb-local. Otherwise, the port is taken from the dynamodb-local conf |
| dynamoDb.region | localhost | DynamoDB region. Specify it if you're connecting to a remote DynamoDB instance. |
| dynamoDb.accessKeyId | DEFAULT_ACCESS_KEY | AWS Access Key ID to access DynamoDB |
| dynamoDb.secretAccessKey | DEFAULT_SECRET | AWS Secret Key to access DynamoDB |
| dynamoDb.sessionToken | DEFAULT_ACCESS_TOKEEN | AWS Session Token to access DynamoDB, only if you have temporary security credentials configured on AWS |
| dynamoDb.* | | You can add every configuration accepted by the DynamoDB SDK |
| rds.dbName | | Name of the database |
| rds.dbHost | | Database host |
| rds.dbDialect | | Database dialect. Possible values: mysql, postgres |
| rds.dbUsername | | Database username |
| rds.dbPassword | | Database password |
| rds.dbPort | | Database port |
| watch | *.graphql, *.vtl | Array of glob patterns to watch for hot-reloading. |
Example:
custom:
appsync-simulator:
location: '.webpack/service' # use webpack build directory
dynamoDb:
endpoint: 'http://my-custom-dynamo:8000'
Hot-reloading
By default, the simulator will hot-reload when changes to *.graphql or *.vtl files are detected. Changes to *.yml files are not supported (yet? - this is a Serverless Framework limitation). You will need to restart the simulator each time you change your yml files.
Hot-reloading relies on watchman. Make sure it is installed on your system.
You can change the files being watched with the watch option, which is then passed to watchman as the match expression.
e.g.
custom:
appsync-simulator:
watch:
- ["match", "handlers/**/*.vtl", "wholename"] # => array is interpreted as the literal match expression
- "*.graphql" # => string like this is equivalent to `["match", "*.graphql"]`
Or you can opt out of hot-reloading by setting the watch option to an empty array or to false.
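For example, a configuration that disables watching entirely might look like this (a minimal sketch based on the option described above):

custom:
  appsync-simulator:
    watch: false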
Note: Functions should not require hot-reloading, unless you are using a transpiler or a bundler (such as webpack, babel or typescript), in which case you should delegate hot-reloading to it instead.
Resource CloudFormation functions resolution
This plugin supports some resource resolution from the Ref, Fn::GetAtt and Fn::ImportValue functions in your yaml file. It also supports some other Cfn functions such as Fn::Join, Fn::Sub, etc.
Note: Under the hood, this feature relies on the cfn-resolver-lib package. For more info on supported Cfn functions, refer to the documentation.
You can reference resources in your functions' environment variables (that will be accessible from your lambda functions) or datasource definitions. The plugin will automatically resolve them for you.
provider:
environment:
BUCKET_NAME:
Ref: MyBucket # resolves to `my-bucket-name`
resources:
Resources:
MyDbTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: myTable
...
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: my-bucket-name
...
# in your appsync config
dataSources:
- type: AMAZON_DYNAMODB
name: dynamosource
config:
tableName:
Ref: MyDbTable # resolves to `myTable`
Sometimes, some references cannot be resolved, as they come from an Output from Cloudformation; or you might want to use mocked values in your local environment.
In those cases, you can define (or override) those values using the refMap, getAttMap and importValueMap options.
- refMap takes a mapping of resource name to value pairs
- getAttMap takes a mapping of resource name to attribute/value pairs
- importValueMap takes a mapping of import name to value pairs

Example:
custom:
appsync-simulator:
refMap:
# Override `MyDbTable` resolution from the previous example.
MyDbTable: 'mock-myTable'
getAttMap:
# define ElasticSearchInstance DomainName
ElasticSearchInstance:
DomainEndpoint: 'localhost:9200'
importValueMap:
other-service-api-url: 'https://other.api.url.com/graphql'
# in your appsync config
dataSources:
- type: AMAZON_ELASTICSEARCH
name: elasticsource
config:
# endpoint resolves as 'https://localhost:9200'
endpoint:
Fn::Join:
- ''
- - https://
- Fn::GetAtt:
- ElasticSearchInstance
- DomainEndpoint
In some special cases you will need to use the key-value mock notation. A good example is when you need to include the serverless stage value (${self:provider.stage}) in the import name.
This notation can be used with all mocks - refMap, getAttMap and importValueMap:
provider:
environment:
FINISH_ACTIVITY_FUNCTION_ARN:
Fn::ImportValue: other-service-api-${self:provider.stage}-url
custom:
serverless-appsync-simulator:
importValueMap:
- key: other-service-api-${self:provider.stage}-url
value: 'https://other.api.url.com/graphql'
This plugin only tries to resolve the following parts of the yml tree:
provider.environment
functions[*].environment
custom.appSync
If you need other parts resolved, feel free to open an issue and explain your use case.
For now, the supported resources that are automatically resolved by Ref: are:
Feel free to open a PR or an issue to extend them as well.
External functions
When a function is not defined within the current serverless file, you can still call it by providing an invoke url, which should point to a REST method. Make sure you specify "get" or "post" for the method. The default is "get", but you probably want "post".
custom:
appsync-simulator:
functions:
addUser:
url: http://localhost:3016/2015-03-31/functions/addUser/invocations
method: post
addPost:
url: https://jsonplaceholder.typicode.com/posts
method: post
Supported Resolver types
This plugin supports resolvers implemented by amplify-appsync-simulator, as well as custom resolvers.
From AWS Amplify:
Implemented by this plugin
#set( $cols = [] )
#set( $vals = [] )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#set( $discard = $cols.add("$toSnake") )
#if( $util.isBoolean($ctx.args.input[$entry]) )
#if( $ctx.args.input[$entry] )
#set( $discard = $vals.add("1") )
#else
#set( $discard = $vals.add("0") )
#end
#else
#set( $discard = $vals.add("'$ctx.args.input[$entry]'") )
#end
#end
#set( $valStr = $vals.toString().replace("[","(").replace("]",")") )
#set( $colStr = $cols.toString().replace("[","(").replace("]",")") )
#if ( $valStr.substring(0, 1) != '(' )
#set( $valStr = "($valStr)" )
#end
#if ( $colStr.substring(0, 1) != '(' )
#set( $colStr = "($colStr)" )
#end
{
"version": "2018-05-29",
"statements": ["INSERT INTO <name-of-table> $colStr VALUES $valStr", "SELECT * FROM <name-of-table> ORDER BY id DESC LIMIT 1"]
}
#set( $update = "" )
#set( $equals = "=" )
#foreach( $entry in $ctx.args.input.keySet() )
#set( $cur = $ctx.args.input[$entry] )
#set( $regex = "([a-z])([A-Z]+)")
#set( $replacement = "$1_$2")
#set( $toSnake = $entry.replaceAll($regex, $replacement).toLowerCase() )
#if( $util.isBoolean($cur) )
#if( $cur )
#set ( $cur = "1" )
#else
#set ( $cur = "0" )
#end
#end
#if ( $util.isNullOrEmpty($update) )
#set($update = "$toSnake$equals'$cur'" )
#else
#set($update = "$update,$toSnake$equals'$cur'" )
#end
#end
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> SET $update WHERE id=$ctx.args.input.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.input.id"]
}
{
"version": "2018-05-29",
"statements": ["UPDATE <name-of-table> set deleted_at=NOW() WHERE id=$ctx.args.id", "SELECT * FROM <name-of-table> WHERE id=$ctx.args.id"]
}
#set ( $index = -1)
#set ( $result = $util.parseJson($ctx.result) )
#set ( $meta = $result.sqlStatementResults[1].columnMetadata)
#foreach ($column in $meta)
#set ($index = $index + 1)
#if ( $column["typeName"] == "timestamptz" )
#set ($time = $result["sqlStatementResults"][1]["records"][0][$index]["stringValue"] )
#set ( $nowEpochMillis = $util.time.parseFormattedToEpochMilliSeconds("$time.substring(0,19)+0000", "yyyy-MM-dd HH:mm:ssZ") )
#set ( $isoDateTime = $util.time.epochMilliSecondsToISO8601($nowEpochMillis) )
$util.qr( $result["sqlStatementResults"][1]["records"][0][$index].put("stringValue", "$isoDateTime") )
#end
#end
#set ( $res = $util.parseJson($util.rds.toJsonString($util.toJson($result)))[1][0] )
#set ( $response = {} )
#foreach($mapKey in $res.keySet())
#set ( $s = $mapKey.split("_") )
#set ( $camelCase="" )
#set ( $isFirst=true )
#foreach($entry in $s)
#if ( $isFirst )
#set ( $first = $entry.substring(0,1) )
#else
#set ( $first = $entry.substring(0,1).toUpperCase() )
#end
#set ( $isFirst=false )
#set ( $stringLength = $entry.length() )
#set ( $remaining = $entry.substring(1, $stringLength) )
#set ( $camelCase = "$camelCase$first$remaining" )
#end
$util.qr( $response.put("$camelCase", $res[$mapKey]) )
#end
$utils.toJson($response)
Variable map support is limited and does not differentiate between number and string data types; please inject them directly if needed. null, true, and false values will be escaped properly.
{
"version": "2018-05-29",
"statements": [
"UPDATE <name-of-table> set deleted_at=NOW() WHERE id=:ID",
"SELECT * FROM <name-of-table> WHERE id=:ID and unix_timestamp > $ctx.args.newerThan"
],
variableMap: {
":ID": $ctx.args.id,
## ":TIMESTAMP": $ctx.args.newerThan -- This will be handled as a string!!!
}
}
Requires
Author: Serverless-appsync
Source Code: https://github.com/serverless-appsync/serverless-appsync-simulator
License: MIT License
The shift towards microservices and modular applications makes testing more important and more challenging at the same time. You have to make sure that the microservices running in containers perform well and as intended, but you can no longer rely on conventional testing strategies to get the job done.
This is where new testing approaches are needed. Testing your microservices applications require the right approach, a suitable set of tools, and immense attention to details. This article will guide you through the process of testing your microservices and talk about the challenges you will have to overcome along the way. Let’s get started, shall we?
Traditionally, testing a monolith application meant configuring a test environment and setting up all of the application components in a way that matched the production environment. It took time to set up the testing environment, and there were a lot of complexities around the process.
Testing also requires the application to run in full. It is not possible to test monolith apps on a per-component basis, mainly because there is usually a base code that ties everything together, and the app is designed to run as a complete app to work properly.
Microservices running in containers offer one particular advantage: universal compatibility. You don’t have to match the testing environment with the deployment architecture exactly, and you can get away with testing individual components rather than the full app in some situations.
Of course, you will have to embrace the new cloud-native approach across the pipeline. Rather than creating critical dependencies between microservices, you need to treat each one as a semi-independent module.
The only monolith or centralized portion of the application is the database, but this too is an easy challenge to overcome. As long as you have a persistent database running on your test environment, you can perform tests at any time.
Keep in mind that there are additional things to focus on when testing microservices.
Test containers are the method of choice for many developers. Unlike monolith apps, which let you use stubs and mocks for testing, microservices need to be tested in test containers. Many CI/CD pipelines actually integrate production microservices as part of the testing process.
As mentioned before, there are many ways to test microservices effectively, but the one approach that developers now use reliably is contract testing. Loosely coupled microservices can be tested in an effective and efficient way using contract testing, mainly because this testing approach focuses on contracts; in other words, it focuses on how components or microservices communicate with each other.
Syntax and semantics construct how components communicate with each other. By defining syntax and semantics in a standardized way and testing microservices based on their ability to generate the right message formats and meet behavioral expectations, you can rest assured knowing that the microservices will behave as intended when deployed.
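As a concrete illustration, a consumer-side contract test written with Pact's JavaScript DSL (run under Jest) might look roughly like the sketch below. The service names, endpoint, and response fields are invented for the example rather than taken from any real project.

import { Pact } from '@pact-foundation/pact';
import axios from 'axios';

// The consumer declares the interactions it expects from the provider.
const provider = new Pact({ consumer: 'WebApp', provider: 'UserService', port: 1234 });

describe('UserService contract', () => {
  beforeAll(() => provider.setup());
  afterAll(() => provider.finalize());

  it('returns user 42 in the agreed format', async () => {
    await provider.addInteraction({
      state: 'user 42 exists',
      uponReceiving: 'a request for user 42',
      withRequest: { method: 'GET', path: '/users/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: 42, name: 'Jane' },
      },
    });

    // The consumer exercises the mock provider; Pact records and verifies the contract.
    const res = await axios.get('http://localhost:1234/users/42');
    expect(res.data.id).toBe(42);

    await provider.verify();
  });
});

The resulting pact file can then be replayed against the real provider in its own pipeline, which is what lets both sides evolve and deploy independently.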
It is easy to fall into the trap of making testing microservices complicated, but there are ways to avoid this problem. Testing microservices doesn’t have to be complicated at all when you have the right strategy in place.
There are several ways to test microservices too, including:
What's important to note is that these testing approaches allow for asynchronous testing. After all, asynchronous development is what makes developing microservices so appealing in the first place. By allowing for asynchronous testing, you can also make sure that components or microservices can be updated independently of one another.
#blog #microservices #testing #caylent #contract testing #end-to-end testing #hoverfly #integration testing #microservices #microservices architecture #pact #testing #unit testing #vagrant #vcr
Serverless APIGateway Service Proxy
This Serverless Framework plugin supports the AWS service proxy integration feature of API Gateway. You can directly connect API Gateway to AWS services without Lambda.
Run serverless plugin install in your Serverless project.
serverless plugin install -n serverless-apigateway-service-proxy
Here is the list of services this plugin supports for now: Kinesis Streams, SQS, S3, SNS, DynamoDB, and EventBridge. It will expand to other services in the future; please open a pull request if you are interested in adding one.
Define settings of the AWS services you want to integrate under custom > apiGatewayServiceProxies and run serverless deploy.
Sample syntax for Kinesis proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- kinesis: # partitionkey is set apigateway requestid by default
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey: 'hardcordedkey' # use static partitionkey
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis/{myKey} # use path parameter
method: post
partitionKey:
pathParam: myKey
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey:
bodyParam: data.myKey # use body parameter
streamName: { Ref: 'YourStream' }
cors: true
- kinesis:
path: /kinesis
method: post
partitionKey:
queryStringParam: myKey # use query string param
streamName: { Ref: 'YourStream' }
cors: true
- kinesis: # PutRecords
path: /kinesis
method: post
action: PutRecords
streamName: { Ref: 'YourStream' }
cors: true
resources:
Resources:
YourStream:
Type: AWS::Kinesis::Stream
Properties:
ShardCount: 1
Sample request after deploying.
curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/kinesis -d '{"message": "some data"}' -H 'Content-Type:application/json'
Sample syntax for SQS proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sqs -d '{"message": "testtest"}' -H 'Content-Type:application/json'
If you'd like to pass additional data to the integration request, you can do so by including your custom API Gateway request parameters in serverless.yml like so:
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
requestParameters:
'integration.request.querystring.MessageAttribute.1.Name': "'cognitoIdentityId'"
'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'context.identity.cognitoIdentityId'
'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
'integration.request.querystring.MessageAttribute.2.Name': "'cognitoAuthenticationProvider'"
'integration.request.querystring.MessageAttribute.2.Value.StringValue': 'context.identity.cognitoAuthenticationProvider'
'integration.request.querystring.MessageAttribute.2.Value.DataType': "'String'"
The alternative way to pass MessageAttribute parameters is via a request body mapping template.
See the SQS section under Customizing request body mapping templates
Simplified response template customization
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message: "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Full response customization
If you want more control over the integration response, you can provide an array of objects for the response value:
custom:
apiGatewayServiceProxies:
- sqs:
path: /queue
method: post
queueName: !GetAtt MyQueue.QueueName
cors: true
response:
- statusCode: 200
selectionPattern: '2\\d{2}'
responseParameters: {}
responseTemplates:
application/json: |-
{ "message": "accepted" }
The object keys correspond to the API Gateway integration response object.
Sample syntax for S3 proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
key: static-key.json # use static key
cors: true
- s3:
path: /s3/{myKey} # use path param
method: get
action: GetObject
bucket:
Ref: S3Bucket
key:
pathParam: myKey
cors: true
- s3:
path: /s3
method: delete
action: DeleteObject
bucket:
Ref: S3Bucket
key:
queryStringParam: key # use query string param
cors: true
resources:
Resources:
S3Bucket:
Type: 'AWS::S3::Bucket'
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/s3 -d '{"message": "testtest"}' -H 'Content-Type:application/json'
Similar to the SQS support, you can customize the default request parameters in serverless.yml like so:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
If you'd like to use custom API Gateway request templates, you can do so like this:
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: get
action: GetObject
bucket:
Ref: S3Bucket
request:
template:
application/json: |
#set ($specialStuff = $context.request.header.x-special)
#set ($context.requestOverride.path.object = $specialStuff.replaceAll('_', '-'))
{}
Note that if the client does not provide a Content-Type header in the request, API Gateway defaults to application/json.
This plugin adds a new customization parameter that lets the user set a custom Path Override in API Gateway other than the default {bucket}/{object}. This parameter is optional and, if not set, will fall back to {bucket}/{object}. The Path Override will add {bucket}/ automatically in front. Please keep in mind that key or path.object still needs to be set at the moment (maybe this will be made optional later on).
Usage (with two path parameters, folder and file, and a fixed file extension):
custom:
apiGatewayServiceProxies:
- s3:
path: /s3/{folder}/{file}
method: get
action: GetObject
pathOverride: '{folder}/{file}.xml'
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.folder': 'method.request.path.folder'
'integration.request.path.file': 'method.request.path.file'
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
This will result in API Gateway setting the Path Override attribute to {bucket}/{folder}/{file}.xml. So, for example, if you navigate to the API Gateway endpoint /language/en, it will fetch the file in S3 from {bucket}/language/en.xml.
Greedy path parameters can be used for deeper folders. The aforementioned example can also be shortened with a greedy approach. Thanks to @taylorreece for mentioning this.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3/{myPath+}
method: get
action: GetObject
pathOverride: '{myPath}.xml'
bucket:
Ref: S3Bucket
cors: true
requestParameters:
# if requestParameters has a 'integration.request.path.object' property you should remove the key setting
'integration.request.path.myPath': 'method.request.path.myPath'
'integration.request.path.object': 'context.requestId'
'integration.request.header.cache-control': "'public, max-age=31536000, immutable'"
This will translate, for example, /s3/a/b/c to a/b/c.xml.
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- s3:
path: /s3
method: post
action: PutObject
bucket:
Ref: S3Bucket
key: static-key.json
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message: "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Sample syntax for SNS proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
resources:
Resources:
SNSTopic:
Type: AWS::SNS::Topic
Sample request after deploying.
curl https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/sns -d '{"message": "testtest"}' -H 'Content-Type:application/json'
Simplified response template customization
You can get a simple customization of the responses by providing a template for the possible responses. The template is assumed to be application/json.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
response:
template:
# `success` is used when the integration response is 200
success: |-
{ "message: "accepted" }
# `clientError` is used when the integration response is 400
clientError: |-
{ "message": "there is an error in your request" }
# `serverError` is used when the integration response is 500
serverError: |-
{ "message": "there was an error handling your request" }
Full response customization
If you want more control over the integration response, you can provide an array of objects for the response value:
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
cors: true
response:
- statusCode: 200
selectionPattern: '2\d{2}'
responseParameters: {}
responseTemplates:
application/json: |-
{ "message": "accepted" }
The object keys correspond to the API Gateway integration response object.
Content Handling and Pass Through Behaviour customization
If you want to work with binary data, you can specify contentHandling and passThrough inside the request object.
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
request:
contentHandling: CONVERT_TO_TEXT
passThrough: WHEN_NO_TEMPLATES
The allowed values correspond with the API Gateway Method integration for ContentHandling and PassthroughBehavior
Sample syntax for DynamoDB proxy in serverless.yml. Currently, the supported DynamoDB operations are PutItem, GetItem and DeleteItem.
custom:
apiGatewayServiceProxies:
- dynamodb:
path: /dynamodb/{id}/{sort}
method: put
tableName: { Ref: 'YourTable' }
hashKey: # set pathParam or queryStringParam as a partitionkey.
pathParam: id
attributeType: S
rangeKey: # required if also using sort key. set pathParam or queryStringParam.
pathParam: sort
attributeType: S
action: PutItem # specify the action you want to perform on the table
condition: attribute_not_exists(Id) # optional Condition Expressions parameter for the table
cors: true
- dynamodb:
path: /dynamodb
method: get
tableName: { Ref: 'YourTable' }
hashKey:
queryStringParam: id # use query string parameter
attributeType: S
rangeKey:
queryStringParam: sort
attributeType: S
action: GetItem
cors: true
- dynamodb:
path: /dynamodb/{id}
method: delete
tableName: { Ref: 'YourTable' }
hashKey:
pathParam: id
attributeType: S
action: DeleteItem
cors: true
resources:
Resources:
YourTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: YourTable
AttributeDefinitions:
- AttributeName: id
AttributeType: S
- AttributeName: sort
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
- AttributeName: sort
KeyType: RANGE
ProvisionedThroughput:
ReadCapacityUnits: 1
WriteCapacityUnits: 1
Sample request after deploying.
curl -XPUT https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/dynamodb/<hashKey>/<sortkey> \
-d '{"name":{"S":"john"},"address":{"S":"xxxxx"}}' \
-H 'Content-Type:application/json'
Sample syntax for EventBridge proxy in serverless.yml.
custom:
apiGatewayServiceProxies:
- eventbridge: # source and detailType are hardcoded; detail defaults to POST body
path: /eventbridge
method: post
source: 'hardcoded_source'
detailType: 'hardcoded_detailType'
eventBusName: { Ref: 'YourBusName' }
cors: true
- eventbridge: # source and detailType as path parameters
path: /eventbridge/{detailTypeKey}/{sourceKey}
method: post
detailType:
pathParam: detailTypeKey
source:
pathParam: sourceKey
eventBusName: { Ref: 'YourBusName' }
cors: true
- eventbridge: # source, detail, and detailType as body parameters
path: /eventbridge/{detailTypeKey}/{sourceKey}
method: post
detailType:
bodyParam: data.detailType
source:
bodyParam: data.source
detail:
bodyParam: data.detail
eventBusName: { Ref: 'YourBusName' }
cors: true
resources:
Resources:
YourBusName:
Type: AWS::Events::EventBus
Properties:
Name: YourEventBus
Sample request after deploying.
curl https://xxxxxxx.execute-api.us-east-1.amazonaws.com/dev/eventbridge -d '{"message": "some data"}' -H 'Content-Type:application/json'
To set CORS configurations for your HTTP endpoints, simply modify your event configurations as follows:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors: true
Setting cors to true assumes a default configuration which is equivalent to:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
headers:
- Content-Type
- X-Amz-Date
- Authorization
- X-Api-Key
- X-Amz-Security-Token
- X-Amz-User-Agent
allowCredentials: false
Configuring the cors property sets the Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Credentials headers in the CORS preflight response. To enable the Access-Control-Max-Age preflight response header, set the maxAge property in the cors object:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
maxAge: 86400
If you are using CloudFront or another CDN for your API Gateway, you may want to set up a Cache-Control header to allow OPTIONS requests to be cached to avoid the additional hop.
To enable the Cache-Control header on the preflight response, set the cacheControl property in the cors object:
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'YourStream' }
cors:
origin: '*'
headers:
- Content-Type
- X-Amz-Date
- Authorization
- X-Api-Key
- X-Amz-Security-Token
- X-Amz-User-Agent
allowCredentials: false
cacheControl: 'max-age=600, s-maxage=600, proxy-revalidate' # Caches on browser and proxy for 10 minutes and doesn't allow proxy to serve out-of-date content
You can pass in any supported authorization type:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
# optional - defaults to 'NONE'
authorizationType: 'AWS_IAM' # can be one of ['NONE', 'AWS_IAM', 'CUSTOM', 'COGNITO_USER_POOLS']
# when using 'CUSTOM' authorization type, one should specify authorizerId
# authorizerId: { Ref: 'AuthorizerLogicalId' }
# when using 'COGNITO_USER_POOLS' authorization type, one can specify a list of authorization scopes
# authorizationScopes: ['scope1','scope2']
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
Source: AWS::ApiGateway::Method docs
You can indicate whether the method requires clients to submit a valid API key by using the private flag:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
private: true
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
which is the same syntax used in the Serverless Framework.
Source: Serverless: Setting API keys for your Rest API
Source: AWS::ApiGateway::Method docs
By default, the plugin will generate a role with the required permissions for each service type that is configured.
You can configure your own role by setting the roleArn attribute:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SQSQueue', 'QueueName'] }
cors: true
roleArn: # Optional. A default role is created when not configured
Fn::GetAtt: [CustomS3Role, Arn]
resources:
Resources:
SQSQueue:
Type: 'AWS::SQS::Queue'
CustomS3Role:
# Custom Role definition
Type: 'AWS::IAM::Role'
The plugin allows one to specify which parameters the API Gateway method accepts.
A common use case is to pass custom data to the integration request:
custom:
apiGatewayServiceProxies:
- sqs:
path: /sqs
method: post
queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
cors: true
acceptParameters:
'method.request.header.Custom-Header': true
requestParameters:
'integration.request.querystring.MessageAttribute.1.Name': "'custom-Header'"
'integration.request.querystring.MessageAttribute.1.Value.StringValue': 'method.request.header.Custom-Header'
'integration.request.querystring.MessageAttribute.1.Value.DataType': "'String'"
resources:
Resources:
SqsQueue:
Type: 'AWS::SQS::Queue'
Any published SQS message will have the Custom-Header value added as a message attribute.
If you'd like to add content types or customize the default templates, you can do so by including your custom API Gateway request mapping template in serverless.yml like so:
# Required for using Fn::Sub
plugins:
- serverless-cloudformation-sub-variables
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'MyStream' }
request:
template:
text/plain:
Fn::Sub:
- |
#set($msgBody = $util.parseJson($input.body))
#set($msgId = $msgBody.MessageId)
{
"Data": "$util.base64Encode($input.body)",
"PartitionKey": "$msgId",
"StreamName": "#{MyStreamArn}"
}
- MyStreamArn:
Fn::GetAtt: [MyStream, Arn]
It is important that the mapping template returns a valid application/json string.
Source: How to connect SNS to Kinesis for cross-account delivery via API Gateway
Customizing SQS request templates requires us to force all requests to use an application/x-www-form-urlencoded style body. The plugin sets the Content-Type header to application/x-www-form-urlencoded for you, but API Gateway will still look for the template under the application/json request template type, so that is where you need to configure your request body in serverless.yml:
custom:
apiGatewayServiceProxies:
- sqs:
path: /{version}/event/receiver
method: post
queueName: { 'Fn::GetAtt': ['SqsQueue', 'QueueName'] }
request:
template:
application/json: |-
#set ($body = $util.parseJson($input.body))
Action=SendMessage##
&MessageGroupId=$util.urlEncode($body.event_type)##
&MessageDeduplicationId=$util.urlEncode($body.event_id)##
&MessageAttribute.1.Name=$util.urlEncode("X-Custom-Signature")##
&MessageAttribute.1.Value.DataType=String##
&MessageAttribute.1.Value.StringValue=$util.urlEncode($input.params("X-Custom-Signature"))##
&MessageBody=$util.urlEncode($input.body)
Note that the ## at the end of each line is an empty comment. In VTL this has the effect of stripping the newline from the end of the line (as it is commented out), which makes API Gateway read all the lines in the template as one line.
Be careful when mixing additional requestParameters into your SQS endpoint, as you may overwrite the integration.request.header.Content-Type and stop the request template from being parsed correctly. You may also unintentionally create conflicts between parameters passed using requestParameters and those in your request template. Typically you should only use the request template if you need to manipulate the incoming request body in some way.
Your custom template must also set the Action and MessageBody parameters, as these will not be added for you by the plugin.
When using a custom request body, headers sent by a client will no longer be passed through to the SQS queue (PassthroughBehavior is automatically set to NEVER). You will need to pass through headers sent by the client explicitly in the request body. Also, any custom querystring parameters in the requestParameters array will be ignored. These also need to be added via the custom request body.
Similar to the Kinesis support, you can customize the default request mapping templates in serverless.yml like so:
# Required for using Fn::Sub
plugins:
- serverless-cloudformation-sub-variables
custom:
apiGatewayServiceProxies:
- sns:
path: /sns
method: post
topicName: { 'Fn::GetAtt': ['SNSTopic', 'TopicName'] }
request:
template:
application/json:
Fn::Sub:
- "Action=Publish&Message=$util.urlEncode('This is a fixed message')&TopicArn=$util.urlEncode('#{MyTopicArn}')"
- MyTopicArn: { Ref: MyTopic }
It is important that the mapping template returns a valid application/x-www-form-urlencoded string.
Source: Connect AWS API Gateway directly to SNS using a service integration
You can customize the response body by providing mapping templates for success, server errors (5xx) and client errors (4xx).
Templates must be in JSON format. If a template isn't provided, the integration response will be returned as-is to the client.
custom:
apiGatewayServiceProxies:
- kinesis:
path: /kinesis
method: post
streamName: { Ref: 'MyStream' }
response:
template:
success: |
{
"success": true
}
serverError: |
{
"success": false,
"errorMessage": "Server Error"
}
clientError: |
{
"success": false,
"errorMessage": "Client Error"
}
Author: Serverless-operations
Source Code: https://github.com/serverless-operations/serverless-apigateway-service-proxy
License:
Automation and segregation can help you build better software
If you write automated tests and deliver them to the customer, they can verify that the software is working properly. And, at the end of the day, they paid for it.
Ok. We can segregate or separate the tests according to some criteria. For example, "white box" tests are used to measure the internal quality of the software, in addition to the expected results. They are very useful for knowing the percentage of lines of code executed, the cyclomatic complexity, and several other software metrics. Unit tests are white box tests.
#testing #software testing #regression tests #unit tests #integration tests
GitHub Repo:- https://github.com/PradhumnaPancholi/Figbot
Hey everyone! A little while ago, I was learning Dapp Tools as it has fantastic tools for developing and auditing smart contracts. And although I loved the experience, I soon learned that it is in the clandestine development stage. This means that casual/individual users can not depend on maintainers for support and updates.
Then I stumbled upon Foundry. It has everything that Dapp Tools offers apart from built-in symbolic execution (which is not a problem for me, as I use Manticore by Trail of Bits). And since that is auditing related, it is not a hindrance to smart contract development by any stretch of the imagination.
After working with Foundry for a bit, I enjoyed the experience and wanted to share that with others. Hence, this article.
This article will go through the benefits of Foundry, the installation process, developing an NFT (because everyone is interested in that), testing the contract, and deploying it with Figment Datahub.
Foundry is a blazing fast, portable and modular toolkit for Ethereum application development written in Rust.
Foundry is made up of three components:
Today's focus is going to be on Forge. But I will be posting in-depth articles on Cast and Anvil in the upcoming weeks.
There are many smart contract development tools like Truffle, Hardhat, and Brownie. But one of my primary reasons for looking into Dapp Tools in the first place was native Solidity tests. Writing smart contracts is not hard when switching between frameworks like Hardhat and Brownie. And they are incredible tools with plugins, but one needs to be well versed in JavaScript/TypeScript and Python to perform testing.
Foundry allows us to write our tests natively in Solidity. This saves a lot of time onboarding new developers and makes the process smoother. In my experience of helping people navigate their way into smart contract development, I have learned that the best and most efficient way for junior developers to engage with DAO/community-maintained projects is by writing tests and learning about the code-base itself. I remember Scoopy Trooples once mentioned on Bankless that they used the same approach while developing Alchemix Finance.
In addition to that, built-in fuzzing, cheatcodes, Cast, and Anvil make it a solid suite for testing smart contracts. There will be more detailed articles on those components coming soon.
Let’s dive in and build an NFT project now.
If you are on Mac or Linux, all you need to do is run two commands:
curl -L https://foundry.paradigm.xyz | bash
foundryup
Make sure to close the terminal before running foundryup.
And Voila! You are all done.
For Windows, you need to have Rust installed and then run:
cargo install --git https://github.com/foundry-rs/foundry --locked
For this article, we will be creating a simple NFT project called Figbots.
Start by creating a directory called "Figbots" and run forge init once you are inside the directory. This command will create a Foundry project for you with git initialized.
Let's take a quick look at the folder structure. You have three primary folders, namely src, lib, and test. Very much self-explanatory here: you write your contracts in src, tests in test, and lib contains all the libraries you installed, e.g., OpenZeppelin. In addition to that, you get foundry.toml, which contains all the configurations, just like hardhat.config.js and brownie-config.yaml if you have used those frameworks. Another sweet thing is .github, where you can write your GitHub Actions. I find it really helpful for tests when working in a team.
Let's start building! We will create a simple NFT called Figbot with a limited supply, a cost (for minting), and withdrawal. With this approach, we can cover edge cases for different tests. First of all, rename Contract.sol and test/Contract.t.sol to Figbot.sol and Figbot.t.sol respectively. Now, we can't write smart contracts without OpenZeppelin, can we?
Installing libraries with Foundry is slightly different than Hardhat and Brownie. We don’t have npm or pip packages. We install libraries directly from the Source (GitHub repo) in Foundry.
forge install Openzeppelin/openzeppelin-contracts
Now we can import the ERC721URIStorage.sol extension to create our NFT. To check that everything is alright, we can run the command forge build, and it will compile our project. The compiler will yell at you if there is something wrong. Otherwise, you will get a successful compile.
Just like any other package manager, Forge allows you to use forge install <lib>, forge remove <lib>, and forge update <lib> to manage your dependencies.
We will be using three contracts from OpenZeppelin: Counters, ERC721URIStorage, and Ownable. It's also time to upload our asset to IPFS using Pinata. We use the Ownable contract to set the deploying address as owner and get access to the onlyOwner modifier, which allows only the owner to withdraw funds; Counters to help us with token id(s); and ERC721URIStorage to keep the NFT contract simple.
1. Setting the constants: MAX_SUPPLY to 100, COST to 0.69 ether, and TOKEN_URI to the CID we receive from Pinata.
2. Using Counter for token id:
using Counters for Counters.Counter;
Counters.Counter private tokenIds;
3. ERC721 constructor:
constructor() ERC721("Figbot", "FBT") {}
4. Mint function:
- Require that msg.value is greater than COST
- Revert if tokenIds.current() is greater than or equal to MAX_SUPPLY
- Call _safeMint and _setTokenURI (a minimal sketch of the whole function follows below)
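Putting the pieces above together, the contract body might look roughly like this. This is my own sketch of what the article describes; the exact names, the TOKEN_URI placeholder, and the error messages are assumptions rather than the article's verbatim code.

uint256 public constant MAX_SUPPLY = 100;
uint256 public constant COST = 0.69 ether;
string public constant TOKEN_URI = "ipfs://<your-CID-from-Pinata>"; // CID uploaded via Pinata

function mint() external payable {
    // revert unless the caller sent at least the mint cost
    require(msg.value >= COST, "Insufficient funds sent");
    // revert once the collection is fully minted
    require(tokenIds.current() < MAX_SUPPLY, "Max supply reached");
    tokenIds.increment();
    uint256 newItemId = tokenIds.current();
    _safeMint(msg.sender, newItemId);
    _setTokenURI(newItemId, TOKEN_URI);
}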
5. Withdraw function:
function withdrawFunds() external onlyOwner {
    uint256 balance = address(this).balance;
    require(balance > 0, "No ether left to withdraw");
    (bool success, ) = (msg.sender).call{value: balance}("");
    require(success, "Withdrawal Failed");
    emit Withdraw(msg.sender, balance);
}
6. TotalSupply function:
function totalSupply() public view returns (uint256) {
    return tokenIds.current();
}
As we all know, testing our smart contracts is really important. In this section, we will be writing some tests to get a solid understanding of forge test and get used to writing tests in native Solidity. We will be using three Foundry cheatcodes (I love them!) to manage account states to fit our test scenarios.
We will be testing for the following scenarios:
We can have complex logic in our smart contracts, and contracts are expected to behave differently depending on the state, the account used to invoke them, time, etc. To deal with such scenarios, we can use cheatcodes to manage the state of the blockchain. We access these cheatcodes through the vm instance, which is part of Foundry's Test library.
We will be using three cheatcodes in our tests :
startPrank: Sets msg.sender for all subsequent calls until stopPrank is called.
stopPrank: Stops an active prank started by startPrank, resetting msg.sender and tx.origin to the values before startPrank was called.
deal: Sets the balance of the provided address to the given balance.
Foundry comes with a built-in testing library. We start by importing this test library and our contract (the one we want to test), defining the test contract, setting state variables, and writing the setUp function.
pragma solidity ^0.8.13;
import"forge-std/Test.sol";
import "../src/Figbot.sol";
contract FigbotTest is Test {
Figbot figbot;
address owner = address(0x1223);
address alice = address(0x1889);
address bob = address(0x1778);
function setUp() public {
vm.startPrank(owner);
figbot = new Figbot();
vm.stopPrank();
}
}
For state variables, we create a variable figbot of type Figbot. This is also the place where I like to define user accounts. In Foundry, you can describe an address by using the syntax address(0x1243); you can use any four hexadecimal characters for this. I have created the accounts named owner, alice, and bob, respectively.
Now our setUp function. This is a requirement for writing tests in Foundry. This is where we do all the deployments and things of that nature. I used the cheatcode startPrank to switch the user to the "owner". By default, Foundry uses a specific address to deploy test contracts. But that makes it harder to test functions with special privileges like withdrawFunds. Hence, we switch to the "owner" account for this deployment.
Let's start with a simple assertion test to learn the Foundry conventions. By convention, all test functions must have the prefix test, and we use assertEq to test whether two values are equal.
We call our MaxSupply function and test if the result value is 100, as we described in our contract.
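A sketch of what this first test can look like, assuming MAX_SUPPLY is exposed as a public constant on the contract (so Solidity generates a getter for it):

function testMaxSupply() public {
    // the public constant getter should return the cap we set
    assertEq(figbot.MAX_SUPPLY(), 100);
}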
And we use forge test to run our tests.
And Voila!!! we have a passed test.
Now that we have written a simple test, let's write one with cheatcodes for the primary function of our contract: mint. We switch the caller to alice with startPrank, give her enough ether with deal, call mint, and then check that balanceOf alice is 1.
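A sketch of such a test, reusing the alice account from setUp and the mint function sketched earlier (the exact values are assumptions):

function testMint() public {
    vm.deal(alice, 1 ether);           // give alice enough ether to cover the mint cost
    vm.startPrank(alice);              // all following calls come from alice
    figbot.mint{value: 0.69 ether}();
    vm.stopPrank();
    assertEq(figbot.balanceOf(alice), 1);
}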
We have another testing function used for tests that we expect to fail. The prefix used for such a test is testFail. We will test whether the mint function reverts if the caller has insufficient funds:
- Switch the user account to Bob
- Set Bob's balance to 0.5 ether (our NFT costs 0.69 ether)
- Call the mint function (it will be reverted due to not having enough funds)
- Check if balanceOf Bob is 1 (a sketch of this test follows below)
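In code, this scenario might look like the following sketch (again assuming the mint function from earlier):

function testFailMintWithInsufficientFunds() public {
    vm.deal(bob, 0.5 ether);           // not enough to cover the 0.69 ether cost
    vm.startPrank(bob);
    figbot.mint{value: 0.5 ether}();   // reverts, which is what testFail expects
    vm.stopPrank();
    assertEq(figbot.balanceOf(bob), 1);
}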
Because the mint didn't go through, the balance of Bob is not going to be 1. Hence, the test will fail, which is exactly what we use testFail for. So when you run forge test, this test will pass.
Here we will test a function that only the "owner" can successfully perform. For this test, we will call the withdrawFunds function (if successful, it should make the owner's balance 0.69 ether). To verify, we assert that the owner's balance is 0.69 ether; a sketch of this test follows below.
Now that we have tested our contract, it is time to deploy it. We need the private key of a wallet (with some Rinkeby test ETH) and an RPC URL. For our RPC URL, we will use Figment DataHub.
Figment DataHub provides us with infrastructure to develop on Web 3. It supports multiple chains like Ethereum, Celo, Solana, Terra, etc.
You can get your RPC URL for Rinkeby from under the “Protocols” tab.
Open your terminal to enter both of these things as environment variables.
export FIG_RINKEBY_URL=<Your RPC endpoint>
export PVT_KEY=<Your wallets private key>
Once we have the environment variables, we are all set to deploy:
forge create Figbot --rpc-url=$FIG_RINKEBY_URL --private-key=$PVT_KEY
We are almost done here. So far, we have written, tested, and deployed a smart contract with Foundry and Figment DataHub. But we are not entirely done just yet. We are now going to verify our contract. We will need to set up our Etherscan API key for that.
export ETHERSCAN_API=<Your Etherscan API Key>
And now we can verify our smart contract.
forge verify-contract --chain-id <Chain-Id> --num-of-optimizations 200 --compiler-version <Compiler Version> src/<Contract File>:<Contract> $ETHERSCAN_API
Congratulations! Now you can write, test, and deploy smart contracts using Foundry. I hope you enjoyed and learned from this article. I indeed enjoyed writing this. Feel free to let me know your thoughts about it.
This story was originally published at https://hackernoon.com/how-to-build-an-nft-project-with-foundry-and-figment-datahub
#nft #datahub