We’re seeing more and more people using Serverless to deploy web applications. The benefits are huge—lightning-fast deployments, automatic scaling, and pay-per-execution pricing.
But moving to serverless has a learning curve as well. You need to learn the intricacies of the platform you’re using, including low-level details like the format of the request input and the required shape of the response output. This can get in the way and slow down your development process.
Today, I come with good news: your existing web framework tooling will work seamlessly with Serverless. In this post, I’ll show you how to use the popular Node web framework Express.js to deploy a Serverless REST API. This means you can use your existing code + the vast Express.js ecosystem while still getting all the benefits of Serverless 💥!
Below is a step-by-step walkthrough of creating a new Serverless service using Express.js. We will:
- Deploy a simple API endpoint
- Add a DynamoDB table and two endpoints to create and retrieve a User
- Set up path-specific routing for more granular metrics and monitoring
- Configure your environment for local development
- Convert an existing Express application to run on Serverless
To get started, you’ll need the Serverless Framework installed. You’ll also need your environment configured with AWS credentials.
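If you don’t have these yet, one common way to set them up (assuming you use npm and already have an AWS access key pair; the placeholders below are yours to fill in) is:
$ npm install -g serverless
$ serverless config credentials --provider aws --key <YOUR_ACCESS_KEY_ID> --secret <YOUR_SECRET_ACCESS_KEY>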
Let’s start with something easy—deploying a single endpoint. First, create a new directory with a package.json file:
$ mkdir my-express-application && cd my-express-application
$ npm init -f
Then, let’s install a few dependencies. We’ll install the express framework, as well as serverless-http:
$ npm install --save express serverless-http
The serverless-http package is a handy piece of middleware that handles the interface between your Node.js application and the specifics of API Gateway. Huge thanks to Doug Moscrop for developing it.
With our libraries installed, let’s create an index.js file that has our application code:
// index.js
const serverless = require('serverless-http');
const express = require('express')
const app = express()
app.get('/', function (req, res) {
res.send('Hello World!')
})
module.exports.handler = serverless(app);
This is a very simple application that returns "Hello World!" when a request comes in on the root path /.
It’s straight out of the Express documentation, with two small additions. First, we imported the serverless-http package at the top. Second, we exported a handler function, which is our application wrapped by the serverless-http package.
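As a simplified mental model (not the actual serverless-http implementation), the exported handler takes the API Gateway proxy event, turns it into a normal Node request for Express, and converts the Express response back into the proxy-response shape Lambda expects:

// Simplified mental model only; serverless-http handles many more details.
const serverless = require('serverless-http');
const express = require('express');

const app = express();
app.get('/', (req, res) => res.send('Hello World!'));

// `handler` is what Lambda invokes. API Gateway hands it a proxy event
// roughly like { httpMethod: 'GET', path: '/', headers: {...}, body: null }
// and receives back a proxy response roughly like
// { statusCode: 200, headers: {...}, body: 'Hello World!' }.
module.exports.handler = serverless(app);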
To get this application deployed, let’s create a serverless.yml in our working directory:
# serverless.yml

service: my-express-application

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1

functions:
  app:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
This is a pretty basic configuration. We’ve created one function, app, which uses the exported handler from our index.js file. Finally, it’s configured with some HTTP triggers.
We’ve used a very broad path matching so that all requests on this domain are routed to this function. All of the HTTP routing logic will be done inside the Express application.
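For example, you could add another route to the Express app (the /hello/:name route below is purely illustrative, not part of the service we build in this post) and it would be picked up automatically, with no change to serverless.yml:

// In index.js, after `const app = express()`: a hypothetical extra route.
// The catch-all 'ANY {proxy+}' event already forwards requests for it here.
app.get('/hello/:name', function (req, res) {
  res.send(`Hello, ${req.params.name}!`);
});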
Now, deploy your function:
$ sls deploy
... snip ...
Service Information
service: my-express-application
stage: dev
region: us-east-1
stack: my-express-application-dev
api keys:
None
endpoints:
ANY - https://bl4r0gjjv5.execute-api.us-east-1.amazonaws.com/dev
ANY - https://bl4r0gjjv5.execute-api.us-east-1.amazonaws.com/dev/{proxy+}
functions:
app: my-express-application-dev-app
After a minute, the console will show your endpoints in the Service Information section. Navigate to that route in your browser:
Your application is live!
It’s fun to get a simple endpoint live, but it’s not very valuable. Often, your application will need to persist some sort of state to be useful. Let’s add a DynamoDB table as our backing store.
For this simple example, let’s say we’re storing Users in a database. We want to store them by userId, which is a unique identifier for a particular user.
First, we’ll need to configure our serverless.yml to provision the table. This involves three parts:
- provisioning the table in the resources section;
- adding IAM permissions so our functions can access the table; and
- passing the table name to our functions as an environment variable.

Change your serverless.yml to look as follows:
# serverless.yml

service: my-express-application

custom:
  tableName: 'users-table-${self:provider.stage}'

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - { "Fn::GetAtt": ["UsersDynamoDBTable", "Arn"] }
  environment:
    USERS_TABLE: ${self:custom.tableName}

functions:
  app:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

resources:
  Resources:
    UsersDynamoDBTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:custom.tableName}
We provisioned the table in the resources section using CloudFormation syntax. We also added IAM permissions for our functions under the iamRoleStatements portion of the provider block. Finally, we passed the table name as the environment variable USERS_TABLE in the environment portion of the provider block.
Now, let’s update our application to use the table. We’ll implement two endpoints: POST /users to create a new user, and GET /users/:userId to get information on a particular user.
First, install the aws-sdk and body-parser packages; body-parser is used for parsing the body of HTTP requests:
$ npm install --save aws-sdk body-parser
Then update your index.js as follows:
// index.js

const serverless = require('serverless-http');
const bodyParser = require('body-parser');
const express = require('express');
const app = express();
const AWS = require('aws-sdk');

const USERS_TABLE = process.env.USERS_TABLE;
const dynamoDb = new AWS.DynamoDB.DocumentClient();

app.use(bodyParser.json({ strict: false }));

app.get('/', function (req, res) {
  res.send('Hello World!');
});

// Get User endpoint
app.get('/users/:userId', function (req, res) {
  const params = {
    TableName: USERS_TABLE,
    Key: {
      userId: req.params.userId,
    },
  };

  dynamoDb.get(params, (error, result) => {
    if (error) {
      console.log(error);
      return res.status(400).json({ error: 'Could not get user' });
    }
    if (result.Item) {
      const { userId, name } = result.Item;
      res.json({ userId, name });
    } else {
      res.status(404).json({ error: 'User not found' });
    }
  });
});

// Create User endpoint
app.post('/users', function (req, res) {
  const { userId, name } = req.body;
  if (typeof userId !== 'string') {
    return res.status(400).json({ error: '"userId" must be a string' });
  } else if (typeof name !== 'string') {
    return res.status(400).json({ error: '"name" must be a string' });
  }

  const params = {
    TableName: USERS_TABLE,
    Item: {
      userId: userId,
      name: name,
    },
  };

  dynamoDb.put(params, (error) => {
    if (error) {
      console.log(error);
      return res.status(400).json({ error: 'Could not create user' });
    }
    res.json({ userId, name });
  });
});

module.exports.handler = serverless(app);
In addition to the base “Hello World” endpoint, we now have two new endpoints:
- GET /users/:userId for getting a User
- POST /users for creating a new User

Let’s deploy the service and test it out!
$ sls deploy
We’ll use curl for these examples. Set the BASE_DOMAIN variable to your unique domain and base path so it’s easier to reuse:
export BASE_DOMAIN=https://bl4r0gjjv5.execute-api.us-east-1.amazonaws.com/dev
Then, let’s create a user:
$ curl -H "Content-Type: application/json" -X POST ${BASE_DOMAIN}/users -d '{"userId": "alexdebrie1", "name": "Alex DeBrie"}'
{"userId":"alexdebrie1","name":"Alex DeBrie"}
Nice! We’ve created a new user! Now, let’s retrieve the user with the GET /users/:userId endpoint:
$ curl -H "Content-Type: application/json" -X GET ${BASE_DOMAIN}/users/alexdebrie1
{"userId":"alexdebrie1","name":"Alex DeBrie"}
Perfect!
This isn’t a full-fledged REST API, and you’ll want to add things like error handling, authentication, and additional business logic, but it does give you a framework within which to build those pieces.
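As one example of where you might take it next (a sketch, not something the service above requires), Express lets you centralize error handling in a final middleware, and a catch-all 404 keeps unknown routes tidy:

// Hypothetical additions near the bottom of index.js, before the export.
// Express treats a 4-argument middleware as an error handler.
app.use(function (req, res, next) {
  res.status(404).json({ error: 'Not found' });
});

app.use(function (err, req, res, next) {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});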
Let’s take another look at our function configuration in serverless.yml:
functions:
  app:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
We’re forwarding all traffic on the domain to our application and letting Express handle the entirety of the routing logic. There is a benefit to this—I don’t have to manually string up all my routes and functions. I can also limit the impact of cold-starts on lightly-used routes.
However, we also lose some of the benefits of the serverless architecture: the ability to isolate bits of logic into separate functions and get a decent look at my application from standard metrics. If each route is handled by a different Lambda function, then I can see, for each endpoint, how many times it is invoked, how many errors it produces, and how long it takes, straight from the standard Lambda metrics.
Luckily, you can still get these things if you want them! You can configure your serverless.yml so that different routes are routed to different instances of your function. Each function instance will have the same code, but they’ll be segmented for metrics purposes:
# serverless.yml

functions:
  app:
    handler: index.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  getUser:
    handler: index.handler
    events:
      - http: 'GET /users/{proxy+}'
  createUser:
    handler: index.handler
    events:
      - http: 'POST /users'
Now, all requests to GET /users/:userId will be handled by the getUser instance of your application, and all requests to POST /users will be handled by the createUser instance. Any other requests will be handled by the main app instance of your function.
Again, none of this is required, and it’s a bit of an overweight solution, since each endpoint-specific function will include the full application code for your other endpoints. However, it strikes a good balance between the development speed of using the tools you’re used to and the per-endpoint granularity that serverless application patterns provide.
When developing an application, it’s nice to rapidly iterate by developing and testing locally rather than doing a full deploy between changes. In this section, I’ll show you how to configure your environment for local development.
First, let’s use the serverless-offline plugin. This plugin helps emulate the API Gateway environment for local development.

Install the serverless-offline plugin:
$ npm install --save-dev serverless-offline
Then add the plugin to your serverless.yml:
# serverless.yml

plugins:
  - serverless-offline
Then, start the serverless-offline server:
$ sls offline start
Serverless: Starting Offline: dev/us-east-1.
Serverless: Routes for app:
Serverless: ANY /
Serverless: ANY /{proxy*}
Serverless: Routes for getUser:
Serverless: GET /users/{proxy*}
Serverless: Routes for createUser:
Serverless: POST /users
Serverless: Offline listening on http://localhost:3000
Then navigate to your root page on localhost:3000 in your browser:
It works! If you make a change in your index.js file, it will be picked up the next time you hit your endpoint. This greatly speeds up development.
While this works easily for a stateless endpoint like “Hello World!”, it’s a little trickier for our /users endpoints that interact with a database.
Luckily, there’s a plugin for doing local development with a local DynamoDB emulator! We’ll use the serverless-dynamodb-local plugin for this.
First, let’s install the plugin:
$ npm install --save-dev serverless-dynamodb-local
Then, let’s add the plugin to our serverless.yml. Note that it must come before the serverless-offline plugin. We’ll also add some config in the custom block so that it locally creates the tables defined in the resources block:
# serverless.yml

plugins:
  - serverless-dynamodb-local
  - serverless-offline # serverless-offline needs to be last in the list

custom:
  tableName: 'users-table-${self:provider.stage}'
  dynamodb:
    start:
      migrate: true
Then, run a command to install DynamoDB local:
$ sls dynamodb install
Finally, we need to make some small changes to our application code. When instantiating our DynamoDB client, we’ll add in some special configuration if we’re in a local, offline environment. The serverless-offline plugin sets the environment variable IS_OFFLINE to true, so we’ll use that to handle our config. Change the beginning of index.js to the following:
// index.js

const serverless = require('serverless-http');
const bodyParser = require('body-parser');
const express = require('express');
const app = express();
const AWS = require('aws-sdk');

const USERS_TABLE = process.env.USERS_TABLE;
const IS_OFFLINE = process.env.IS_OFFLINE;

let dynamoDb;
if (IS_OFFLINE === 'true') {
  dynamoDb = new AWS.DynamoDB.DocumentClient({
    region: 'localhost',
    endpoint: 'http://localhost:8000',
  });
  console.log(dynamoDb);
} else {
  dynamoDb = new AWS.DynamoDB.DocumentClient();
}

app.use(bodyParser.json({ strict: false }));

... rest of application code ...
Now, our DocumentClient constructor is configured to use DynamoDB local if we’re running locally, and the default options if running in Lambda.
Let’s see if it works. Start up your offline server again:
$ sls offline start
Dynamodb Local Started, Visit: http://localhost:8000/shell
Serverless: DynamoDB - created table users-table-dev
Serverless: Starting Offline: dev/us-east-1.
Serverless: Routes for app:
Serverless: ANY /
Serverless: ANY /{proxy*}
Serverless: Routes for getUser:
Serverless: GET /users/{proxy*}
Serverless: Routes for createUser:
Serverless: POST /users
Serverless: Offline listening on http://localhost:3000
Let’s run our curl command from earlier to hit our local endpoint and create a user:
$ curl -H "Content-Type: application/json" -X POST http://localhost:3000/users -d '{"userId": "alexdebrie1", "name": "Alex DeBrie"}'
{"userId":"alexdebrie1","name":"Alex DeBrie"}
And then retrieve the user:
$ curl -H "Content-Type: application/json" -X GET http://localhost:3000/users/alexdebrie1
{"userId":"alexdebrie1","name":"Alex DeBrie"}
It works just like it did on Lambda!
This local setup can really speed up your workflow while still allowing you to emulate a close approximation of the Lambda environment.
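If you want some data in the local table before poking around in the browser, one option (a hypothetical helper script, assuming the local DynamoDB is running on port 8000 and the users-table-dev table was created by the migrate step) is to reuse the same DocumentClient configuration:

// seed-local.js (hypothetical helper, not part of the service itself):
// puts a sample user into the local DynamoDB table.
const AWS = require('aws-sdk');

const dynamoDb = new AWS.DynamoDB.DocumentClient({
  region: 'localhost',
  endpoint: 'http://localhost:8000',
});

dynamoDb.put({
  TableName: 'users-table-dev',
  Item: { userId: 'alexdebrie1', name: 'Alex DeBrie' },
}, (error) => {
  if (error) {
    console.error('Could not seed user', error);
  } else {
    console.log('Seeded user alexdebrie1');
  }
});

Run it with node seed-local.js while sls offline start is running in another terminal.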
If you already have an existing Express application, it’s very easy to convert it to a Serverless-friendly application. Do the following steps:

First, install the serverless-http package: npm install --save serverless-http

Then, add the serverless-http configuration to your Express application. You’ll need to import the serverless-http library at the top of your file:

`const serverless = require('serverless-http');`

then export your wrapped application:

`module.exports.handler = serverless(app);`
For reference, an example application might look like this:
// app.js

const serverless = require('serverless-http'); // <-- Add this.
const express = require('express');
const app = express();

// ... all your Express code ...

module.exports.handler = serverless(app); // <-- Add this.
Finally, set up your serverless.yml with a single function that captures all traffic:
# serverless.yml

service: express-app

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1

functions:
  app:
    handler: app.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
That’s it! Run sls deploy and your app will deploy!
Note that if you use other resources (databases, credentials, etc.), you’ll need to make sure those make it into your application.
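For example (a sketch following the same pattern we used for USERS_TABLE; DATABASE_URL is just an illustrative name), you can pass a connection string through the provider’s environment block and read it in your app with process.env:

# serverless.yml (sketch)
provider:
  environment:
    DATABASE_URL: ${env:DATABASE_URL} # picked up from your shell at deploy time

Your Express code can then read it with process.env.DATABASE_URL, exactly as we did with USERS_TABLE.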
#serverless #node-js #express #rest #api
Not babashka. Node.js babashka!?
Ad-hoc CLJS scripting on Node.js.
Experimental. Please report issues here.
Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.
Additional goals and features are covered in the sections below.
Nbb requires Node.js v12 or newer.
CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).
Install nbb from NPM:
$ npm install nbb -g
Omit -g for a local install.
Try out an expression:
$ nbb -e '(+ 1 2 3)'
6
And then install some other NPM libraries to use in the script. E.g.:
$ npm install csv-parse shelljs zx
Create a script which uses the NPM libraries:
(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))

(prn (path/resolve "."))
(prn (term-size))
(println (count (str (fs/readFileSync *file*))))
(prn (sh/ls "."))
(prn (csv-parse "foo,bar"))
(prn (zxfs/existsSync *file*))
(zx/$ #js ["ls"])
Call the script:
$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs
Nbb has first class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:
(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))
Using this macro, we can make async code look more like sync code. Consider this puppeteer example:
(-> (.launch puppeteer)
    (.then (fn [browser]
             (-> (.newPage browser)
                 (.then (fn [page]
                          (-> (.goto page "https://clojure.org")
                              (.then #(.screenshot page #js{:path "screenshot.png"}))
                              (.catch #(js/console.log %))
                              (.then #(.close browser)))))))))
Using plet this becomes:
(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
  (.close browser))
See the puppeteer example for the full code.
Since v0.0.36, nbb includes promesa, which is a library to deal with promises. The above plet macro is similar to promesa.core/let.
$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)' 0.17s user 0.02s system 109% cpu 0.168 total
The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.
Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.
To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.
To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:
$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"
and then feed it to the --classpath argument:
$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]
Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.
The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:
(ns foo
  (:require [nbb.core :refer [*file*]]))
(prn *file*) ;; "/private/tmp/foo.cljs"
(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"
Nbb includes reagent.core, which will be lazily loaded when required. You can use this together with ink to create a TUI application:
$ npm install ink
ink-demo.cljs:
(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))

(defonce state (r/atom 0))

(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))

(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])

(render (r/as-element [hello]))
Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included, with the let and do! macros. An example:
(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))

(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))

(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3
Also see API docs.
Since nbb v0.0.75 applied-science/js-interop is available:
(ns example
  (:require [applied-science.js-interop :as j]))
(def o (j/lit {:a 1 :b 2 :c {:d 1}}))
(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1
Most of this library is supported in nbb, except the following:
- :syms
- .-x notation. In nbb, you must use keywords.

See the example of what is currently supported.
See the examples directory for small examples.
Also check out these projects built with nbb:
See API documentation.
See this gist on how to convert an nbb script or project to shadow-cljs.
Prerequisites:
To build:
bb release
Run bb tasks for more project-related tasks.
Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb
License: EPL-1.0
#node #javascript
The REST acronym stands for “REpresentational State Transfer,” a style designed to take advantage of existing HTTP protocols when used for Web APIs. It is very flexible in that it is not tied to specific resources or methods and can handle different calls and data formats. Because a REST API is not constrained to an XML format like SOAP, it can return multiple other formats depending on what is needed. If a service adheres to this style, it is considered a “RESTful” application. REST allows components to access and manage functions within another application.
REST was initially defined in Roy Fielding’s doctoral dissertation twenty years ago. He proposed these standards as an alternative to SOAP (the Simple Object Access Protocol, a simple standard for accessing objects and exchanging structured messages within a distributed computing environment). REST (or RESTful) defines the general rules used to regulate the interactions between web apps utilizing the HTTP protocol for CRUD (create, retrieve, update, delete) operations.
An API (or Application Programming Interface) provides a method of interaction between two systems.
A RESTful API uses HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. This allows two pieces of software to communicate with each other. In essence, a REST API is a set of remote calls using standard methods to return data in a specific format.
The systems that interact in this manner can be very different. Each app may use a unique programming language, operating system, database, etc. So, how do we create a system that can easily communicate with and understand other apps? This is where the REST API is used as an interaction system.
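For instance (a purely illustrative sketch against a made-up https://api.example.com service; the paths and fields are hypothetical), the standard HTTP methods map onto CRUD operations like this:

// Illustrative only: a hypothetical JSON API at https://api.example.com.
// Requires an environment with fetch (browsers, or Node.js 18+).
async function demo() {
  // GET reads a resource:
  const res = await fetch('https://api.example.com/users/42');
  const user = await res.json(); // e.g. { "id": 42, "name": "Ada" }

  // POST creates a resource; PUT updates it; DELETE removes it:
  await fetch('https://api.example.com/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Ada' }),
  });
}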
When using a RESTful API, we should determine in advance what resources we want to expose to the outside world. Typically, the RESTful API service is implemented, keeping the following ideas in mind:
The features of the REST API design style state:
For REST to fit this model, we must adhere to the following rules:
- Client-server separation of concerns
- Stateless requests (each request carries all the information needed to handle it)
- Cacheable responses
- A uniform interface between client and server
- A layered system
- Code on demand (optional)
#tutorials #api #application #application programming interface #crud #http #json #programming #protocols #representational state transfer #rest #rest api #rest api graphql #rest api json #rest api xml #restful #soap #xml #yaml
Express.js Tutorial: Building RESTful APIs with Node and Express. A Node.js tutorial on building an API with GET and POST methods.
#express #apis #node #restful apis #express.js
Perl script converts PDF files to Gerber format
Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.
The general workflow is as follows:
Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).
See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.
#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;
use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)
##############################################################################################
#configurable settings:
#change values here instead of in main pfg2gerb.pl file
use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call
#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software. \nGerber files MAY CONTAIN ERRORS. Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG
use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC
use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)
#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1);
#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
.010, -.001, #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
.031, -.014, #used for vias
.041, -.020, #smallest non-filled plated hole
.051, -.025,
.056, -.029, #useful for IC pins
.070, -.033,
.075, -.040, #heavier leads
# .090, -.043, #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
.100, -.046,
.115, -.052,
.130, -.061,
.140, -.067,
.150, -.079,
.175, -.088,
.190, -.093,
.200, -.100,
.220, -.110,
.160, -.125, #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
.090, -.040, #want a .090 pad option, but use dummy hole size
.065, -.040, #.065 x .065 rect pad
.035, -.040, #.035 x .065 rect pad
#traces:
.001, #too thin for real traces; use only for board outlines
.006, #minimum real trace width; mainly used for text
.008, #mainly used for mid-sized text, not traces
.010, #minimum recommended trace width for low-current signals
.012,
.015, #moderate low-voltage current
.020, #heavier trace for power, ground (even if a lighter one is adequate)
.025,
.030, #heavy-current traces; be careful with these ones!
.040,
.050,
.060,
.080,
.100,
.120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);
#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size: parsed PDF diameter: error:
# .014 .016 +.002
# .020 .02267 +.00267
# .025 .026 +.001
# .029 .03167 +.00267
# .033 .036 +.003
# .040 .04267 +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};
#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
CIRCLE_ADJUST_MINX => 0,
CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
CIRCLE_ADJUST_MAXY => 0,
SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};
#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches
#line join/cap styles:
use constant
{
CAP_NONE => 0, #butt (none); line is exact length
CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
#number of elements in each shape type:
use constant
{
RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
rect => RECT_SHAPELEN,
line => LINE_SHAPELEN,
curve => CURVE_SHAPELEN,
circle => CIRCLE_SHAPELEN,
);
#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions
# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?
#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes.
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes
#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (1/72 * .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches
# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)
# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time
# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const
use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool
my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time
print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load
#############################################################################################
#junk/experiment:
#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html
#my $caller = "pdf2gerb::";
#sub cfg
#{
# my $proto = shift;
# my $class = ref($proto) || $proto;
# my $settings =
# {
# $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
# };
# bless($settings, $class);
# return $settings;
#}
#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;
#print STDERR "read cfg file\n";
#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names
#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }
Author: swannman
Source Code: https://github.com/swannman/pdf2gerb
License: GPL-3.0 license