General-use HAPI server front-end implemented in node.js

HAPI Server Front-End

A generic HAPI front-end server.

1. About

The intended use case for this server-side software is a data provider that wants to serve data through a HAPI API. With this software, the data provider only needs

  1. HAPI metadata, in one of a variety of forms, for a collection of datasets and
  2. a command-line program that returns at least headerless HAPI CSV for all parameters in the dataset over the full time range of available data. Optionally, the command-line program can take as inputs a start and stop time, a list of one or more parameters to output, and an output format (sample output is shown below)

to be able to serve data through a HAPI API from their server. This software handles

  1. HAPI metadata validation,
  2. request validation and error responses,
  3. logging and alerts,
  4. time and parameter subsetting (as needed), and
  5. generation of HAPI JSON or HAPI binary (as needed).

A list of catalogs that are served using this software is given at http://hapi-server.org/servers.
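
For reference, a headerless HAPI CSV response is simply time-ordered rows with an ISO 8601 timestamp in the first column followed by one column per parameter value. A hypothetical two-parameter example of the output expected from the command-line program in item 2 above:

 1970-01-01T00:00:00Z,1.0,10
 1970-01-01T00:01:00Z,1.1,11
 1970-01-01T00:02:00Z,1.2,12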

2. Installation

Binary packages are available for OS X x64, Linux x64, and Linux ARMv7l (e.g., Raspberry Pi).

A Docker image is also available.

Installation and startup commands for the binary packages and the Docker image are given below. See the Development section for instructions on installing from source.

OS X x64:

 curl -L https://github.com/hapi-server/server-nodejs/releases/download/v0.9.5/hapi-server-v0.9.5-darwin-x64.tgz | tar zxf -
 cd hapi-server-v0.9.5
 ./hapi-server --open

Linux x64:

 curl -L https://github.com/hapi-server/server-nodejs/releases/download/v0.9.5/hapi-server-v0.9.5-linux-x64.tgz | tar zxf -
 cd hapi-server-v0.9.5
 ./hapi-server --open

Linux ARMv7l:

 curl -L https://github.com/hapi-server/server-nodejs/releases/download/v0.9.5/hapi-server-v0.9.5-linux-armv7l.tgz | tar zxf -
 cd hapi-server-v0.9.5
 ./hapi-server --open

Docker:

docker pull rweigel/hapi-server:v0.9.5
docker run -dit --name hapi-server-v0.9.5 --expose 8999 -p 8999:8999 rweigel/hapi-server:v0.9.5
docker exec -it hapi-server-v0.9.5 ./hapi-server
# Open http://localhost:8999/TestData/hapi in a web browser

3. Examples

List of Included Examples

The following examples are included in the metadata directory. The examples can be run using

./hapi-server -f metadata/FILENAME.json

where FILENAME.json is one of the file names listed below (e.g., Example0.json).

  • Example0.json - A Python program dumps a full dataset in the headerless HAPI CSV format; the server handles time and parameter subsetting and creation of HAPI Binary and JSON. See section 3.1.
  • Example1.json - Same as Example0 except the Python program handles time subsetting.
  • Example2.json - Same as Example0 except the Python program handles time and parameter subsetting and creation of HAPI CSV and Binary. See section 3.2.
  • Example3.json - Same as Example2 except the HAPI info metadata for each dataset is stored in an external file.
  • Example4.json - Same as Example2 except the HAPI info metadata for each dataset is generated by a command-line program.
  • Example5.json - Same as Example2 except catalog metadata is stored in an external file.
  • Example6.json - Same as Example2 except catalog metadata is generated by a command-line program.
  • Example7.json - Same as Example2 except that catalog metadata is returned from a URL.
  • Example8.json - A dataset in headerless HAPI CSV format is stored in a single file; the server handles parameter and time subsetting and creation of HAPI JSON and Binary.
  • Example9.json - A dataset in headerless HAPI CSV format is returned by a URL; the server handles parameter and time subsetting and creation of HAPI JSON and Binary.
  • AutoplotExample1.json - A dataset is stored in a single CDF file and AutoplotDataServer is used to generate HAPI CSV. See section 3.6.
  • AutoplotExample2.json - A dataset is stored in multiple ASCII files and AutoplotDataServer is used to subset in time. See section 3.6.
  • TestData.json - A test dataset used to test HAPI clients.
  • SSCWeb.json - Data from a non-HAPI web service is made available from a HAPI server. See section 3.3.
  • INTERMAGNET.json - Data in ASCII files on an FTP site is made available from a HAPI server. See section 3.5.
  • QinDenton.json - Data in a single ASCII file is converted to headerless HAPI CSV by a Python program. See section 3.4.

3.1 Serve data from a minimal Python program

In this example, we assume that the command-line program that returns a dataset has only the minimal required capability: when executed, it generates headerless HAPI CSV with all parameters in the dataset over the full time range of available data. The server handles time and parameter subsetting and the generation of HAPI Binary and JSON.

The Python script Example.py returns HAPI-formatted CSV data (with no header) with two parameters. To serve these data, only a configuration file, Example0.json, is needed. The configuration file contains the information used to call the command-line program along with HAPI metadata that describes the output of Example.py. Details about the configuration file format are given in the Metadata section.

The calling syntax of Example.py is

python Example.py
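
A minimal program of this kind only needs to write headerless HAPI CSV for the full time range to stdout. The following sketch is illustrative and is not the actual Example.py (whose parameters and time range differ); it emits one day of made-up 1-minute data:

# minimal_example.py - illustrative sketch of a minimal HAPI data program.
# Writes headerless HAPI CSV (all parameters, full time range) to stdout.
from datetime import datetime, timedelta

start = datetime(1970, 1, 1)
for i in range(1440):  # one day at 1-minute cadence
    t = start + timedelta(minutes=i)
    # Time column first, then one column per parameter.
    print("%s,%.1f,%d" % (t.strftime("%Y-%m-%dT%H:%M:%SZ"), 0.1 * i, i % 10))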

To run this example locally after installation, execute

./hapi-server --file metadata/Example0.json

and then open http://localhost:8999/Example0/hapi. You should see the same landing page as that at http://hapi-server.org/servers/Example0/hapi. Note that the --open command-line switch can be used to automatically open the landing page, e.g.,

./hapi-server --file metadata/Example0.json --open

3.2 Serve data from an enhanced Python program

The Python script Example.py can actually subset parameters and time and provide binary output. To force the server to use these capabilities, we need to modify the server configuration metadata in Example0.json. The changes are replacing

"command": "python bin/Example.py"

with

"command": "python bin/Example.py --params ${parameters} --start ${start} --stop ${stop} --fmt ${format}"

and adding

"formats": ["csv","binary"]

The modified file is Example2.json. To run this example locally after installation, execute

./hapi-server --file metadata/Example2.json

and then open http://localhost:8999/Example2/hapi. The command-line program now produces binary output and performs parameter subsetting as needed, so the response time for data requests should decrease.

The server responses will be identical to those in the previous example. You should see the same landing page as that at http://hapi-server.org/servers/Example2/hapi.
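
To illustrate the template substitution, a request for a single parameter over one day (request syntax as defined by the HAPI specification) would cause the server to execute something like

python bin/Example.py --params scalar --start 1970-01-01Z --stop 1970-01-02Z --fmt binary

where the parameter name scalar and the time values are illustrative.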

3.3 Serve data from a non-HAPI web service

A non-HAPI server can be quickly made HAPI compliant by using this server as a pass-through. Data from SSCWeb, which is available from a REST API, has been made available through a HAPI API at http://hapi-server.org/servers/SSCWeb/hapi. The configuration file is SSCWeb.json and the command line program is SSCWeb.js. Note that the metadata file SSCWeb.json was created using code in metadata/SSCWeb.

To run this example locally after installation, execute

./hapi-server --file metadata/SSCWeb.json --open

You should see the same landing page as that at http://hapi-server.org/servers/SSCWeb/hapi.

3.4 Serve data stored in a single file

The Qin-Denton dataset contains multiple parameters stored in a single large file.

The command-line program that produces HAPI CSV from this file is QinDenton.py and the metadata is in QinDenton.json.

To run this example, use

./hapi-server --file metadata/QinDenton.json

3.5 Serve data stored in multiple files

INTERMAGNET provides ground magnetometer data from over 150 stations at 1-minute and 1-second cadence; the data are stored in daily files on an FTP site.

The command-line program that produces HAPI CSV is INTERMAGNET.py and the metadata is in INTERMAGNET.json. The code that produces the metadata is in metadata/INTERMAGNET. To run this example, execute

./hapi-server --file metadata/INTERMAGNET.json --open

3.6 Serve data read by Autoplot

Nearly any data file that can be read by Autoplot can be served using this server.

Serving data requires at most two steps:

  1. Generating an Autoplot URI for each parameter; and (in some cases)
  2. Writing (by hand) metadata for each parameter.

Example 1

The first example serves data stored in a single CDF file. The configuration file is AutoplotExample1.json.

In this example, step 2 above (writing metadata by hand) is not required because the data file contains metadata in a format that Autoplot can translate to HAPI metadata.

To run this example locally, execute

./hapi-server --file metadata/AutoplotExample1.json

Example 2

The second example serves data stored in multiple ASCII files. The configuration file is AutoplotExample2.json.

To run this example locally, execute

./hapi-server --file metadata/AutoplotExample2.json

4. Usage

List command-line options:

./hapi-server -h

  --help, -h    Show help 
  --file, -f    Catalog configuration file
  --port, -p    Server port [default:8999]             
  --conf, -c    Server configuration file
  --ignore, -i  Start server even if metadata errors
  --open, -o    Open web page on start
  --test, -t    Run URL tests and exit
  --verify, -v  Run verification tests and exit

Basic usage:

./hapi-server --file metadata/TestData.json

This starts a HAPI server at http://localhost:8999/TestData/hapi that serves the datasets specified in the catalog ./metadata/TestData.json.

Multiple catalogs can be served by providing multiple catalog files on the command line:

./hapi-server --file CATALOG1.json --file CATALOG2.json

For example

./hapi-server --file metadata/TestData.json --file metadata/Example1.json

will serve the two catalogs at

http://localhost:8999/TestData/hapi
http://localhost:8999/Example1/hapi

The page at http://localhost:8999/ will link to these two URLs.

5. Server Configuration

5.1 conf/config.json

The variables HAPISERVERPATH, HAPISERVERHOME, NODEEXE, and PYTHONEXE can be set in conf/config.json or as environment variables. These variables can be used in commands, files, and URLs in the server metadata (the file passed using the command-line --file switch).

The default configuration file is conf/config.json; an alternative location can be set using a command-line argument, e.g.,

./hapi-server -c /tmp/config.json

To set variables using environment variables, use, e.g.,

PYTHONEXE=/opt/python/bin/python ./hapi-server

Variables set as environment variables take precedence over those set in conf/config.json.
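
For example, a conf/config.json that overrides two of these variables might look like the following sketch (the paths are illustrative; only the variables you need to override must be set):

{
	"HAPISERVERHOME": "/var/www/hapiserver",
	"PYTHONEXE": "/opt/python/bin/python"
}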

HAPISERVERPATH and HAPISERVERHOME

These two variables can be used in metadata to reference a directory. For example,

"catalog": "$HAPISERVERHOME/mymetadata/Data.json"

By default, $HAPISERVERPATH is the installation directory (the directory containing the shell launch script hapi-server) and should not be changed as it is referenced in the demonstration metadata files. Modify HAPISERVERHOME in conf/config.json to use a custom path.

All relative paths in commands in metadata files are relative to the directory where hapi-server was executed.

For example, if

/tmp/hapi-server --file metadata/TestData.json

is executed from /home/username, the file

/home/username/metadata/TestData.json

is read, and relative paths in TestData.json have /home/username/ prepended.

PYTHONEXE

This is the command used to call Python. By default, it is python. If python is not in the path, this can be set using a relative or absolute path. Python is used by several of the demonstration catalogs.

Example:

"command": "$PYTHONEXE $HAPISERVERHOME/mybin/Data.py"

NODEEXE

This is the command used to call NodeJS. By default, it is the command used to start the server. The start-up script looks for a NodeJS executable in $HAPISERVERPATH/bin and then tries node and then nodejs.
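
For example, to point the server at a specific NodeJS executable (path illustrative):

NODEEXE=/usr/local/bin/node ./hapi-server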

5.2 Apache

To expose a URL through Apache, (1) enable mod_proxy and mod_proxy_http, (2) add the following to a <VirtualHost> node in an Apache Virtual Hosts file,

<VirtualHost *:80>
	ProxyPass /TestData http://localhost:8999/TestData retry=1
	ProxyPassReverse /TestData http://localhost:8999/TestData
</VirtualHost>

and (3) include this file in the Apache start-up configuration file.

If serving multiple catalogs, use

<VirtualHost *:80>
	ProxyPass /servers http://localhost:8999/servers retry=1
	ProxyPassReverse /servers http://localhost:8999/servers
</VirtualHost>

5.3 Nginx

For Nginx, add the following to nginx.conf

location /TestData {
    proxy_pass http://localhost:8999/TestData;
}

If serving multiple catalogs, use

location /servers {
    proxy_pass http://localhost:8999/servers;
}

6. Metadata

The metadata required for this server is similar to the /catalog and /info responses of a HAPI server.

The server requires that the /catalog response is combined with the /info response for all datasets in the catalog in a single JSON catalog configuration file. Additional information about how to generate data must also be included in this JSON file.

The top-level structure of the configuration file is

{
	"server": { // See section 6.1
		"id": "",
		"prefix": "",
		"landing": "",
		"contact": "",
		"landingFile": "",
		"landingPath": "",
		"catalog-update": null
	},
	"catalog": array or string, // See section 6.2
	"data": { // See section 6.3
		"command": "Command line template",
		 or
		"file": "HAPI CSV file",
		"fileformat": "one of 'csv', 'binary', 'json'",
		 or
		"url": "URL that returns HAPI data",
		"urlformat": "one of 'csv', 'binary', 'json'",
		"contact": "Email address if error in command line program",
		"testcommands": [
			{
				"command": string,
				"Nlines": integer,
				"Nbytes": integer,
				"Ncommas": integer
			},
			...
		],
		"testurls": [
			{
				"url": string,
				"Nlines": integer,
				"Nbytes": integer,
				"Ncommas": integer
			},
			...
		]
	}
}

A variety of examples are given in ./metadata and described in the Examples section; options for the catalog property are described below.

The string command in the data node is a command-line template that produces a headerless HAPI data response. It can have placeholders for the time range of data to return (${start} and ${stop}), a dataset id (${id}), a comma-separated list of parameters (${parameters}), and an output format (${format}). For example,

python ./bin/Example.py --dataset ${id} --parameters \
	${parameters} --start ${start} --stop ${stop} --format ${format}

6.1 server

The server node has the form

"server": {
	"id": "", 		// Default is file name without extension.
	"prefix": "", 	// Default is id.
	"contact": "", 	// Required. Server will not start without this set.
	"landingFile": "",
	"landingPath": "",
	"catalog-update": null // How often in seconds to re-read content
						   // in the catalog node (5.2).
}

6.1.1 id and prefix

By default, the id is the name of the server configuration file without its extension. For example, if the server is started with

./hapi-server --file metadata/TestData.json

then id=TestData and prefix=TestData.

By default, this catalog would be served from

http://localhost:8999/TestData/hapi

The TestData part of the URL can be changed to TestData2 by setting prefix=TestData2.
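
For example, a server node such as the following sketch (the contact value is illustrative) serves the catalog at http://localhost:8999/TestData2/hapi:

"server": {
	"prefix": "TestData2",
	"contact": "admin@example.com"
}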

6.1.2 contact

This element must not be empty or the server will not start. It should be at minimum the email address of a system administrator.

6.1.3 landingFile and landingPath

landingFile is the file to serve in response to requests for

http://localhost:8999/TestData/hapi

By default, the landing page served is single.htm from the HAPI server UI codebase. The double-underscore variables in this file are replaced using the information in the metadata file (e.g., __CONTACT__ is replaced with the server.contact value). A different landing page can be served by setting the landingFile configuration variable, e.g., "landingFile": "$HAPISERVERPATH/public/index.htm", where $HAPISERVERPATH is described in the Server Configuration section.

If landingFile has local CSS and JS dependencies, set landingPath to be the local directory of the referenced files. Several possible settings are

	"landingFile": "$HAPISERVERPATH/index.htm", 
	// $HAPISERVERPATH will be replaced with location of hapi-server binary
	"landingPath": "/var/www/public/" // Location of CSS and JS files
	// If index.htm has <script src="index.js">, index.js should be in /var/www/public/

To serve a directory listing, use

	"landingFile": "",
	"landingPath": "/var/www/public/"
	// Server will look for index.htm and index.html in /var/www/public/. If not
	// found, directory listing of /var/www/public/ will be served.

6.1.4 catalog-update

This is an integer number of seconds corresponding to how often the catalog node should be updated. Use this if the catalog node is not static.
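
For example, to re-read the catalog node once per hour, use

"catalog-update": 3600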

6.2 catalog

The catalog node can be either a string or an array.

In the case that it is an array, it should contain either the combined HAPI /catalog and /info response (6.2.1) or a /catalog response with references to the /info response for each dataset (6.2.2).

In the case that it is a string (6.2.3), the string is either a file containing a catalog array or a command-line template that returns a catalog array.

6.2.1 Combined HAPI /catalog and /info object

If catalog is an array, it should have the same format as a HAPI /catalog response (each object in the array has an id property and an optional title property) with the addition of an info property that is the HAPI /info response for that id (i.e., the response to /info?id=dataset1).

"catalog":
 [
	{
		"id": "dataset1",
		"title": "a dataset",
		"info": {
				"startDate": "2000-01-01Z",
				"stopDate": "2000-01-02Z",
				"parameters": [...]
		}
	},
	{
		"id": "dataset2",
		"title": "another dataset",
		"info": {
			"startDate": "2000-01-01Z",
			"stopDate": "2000-01-02Z",
			"parameters": [...]
		}
	}
 ]

In the following subsections, this type of JSON structure is referred to as a fully resolved catalog.

Examples of this type of catalog include Example0.json, Example1.json, and Example2.json in the metadata directory.

6.2.2 /catalog response with file or command template for info object

The info value can be a path to an info JSON file

"catalog": 
 [
	{
		"id": "dataset1",
		"title": "a dataset",
		"info": "relativepath/to/dataset2/info_file.json"
	},
	{
		"id": "dataset2",
		"title": "another dataset",
		"info": "/absolutepath/to/dataset2/info_file.json"
	}
 ]

See also Example3.json.

Alternatively, the metadata for each dataset may be produced by the execution of a command-line program. Before execution, the string ${id}, if found, is replaced with the requested dataset id. In the following, execution of bin/program --id dataset1 should write the HAPI JSON response for /info?id=dataset1 to stdout, and execution of program2 should produce the HAPI JSON corresponding to the query /info?id=dataset2.

"catalog":
 [
	{
		"id": "dataset1",
		"title": "a dataset",
		"info": "bin/program --id ${id}" 
	},
	{
		"id": "dataset2",
		"title": "another dataset",
		"info": "program2"
	}
 ]

See also Example4.json.

6.2.3 References to a command-line template or file

The catalog value can be a command-line program that generates a fully resolved catalog, e.g.,

"catalog": "program --arg1 val1 ..."

The command-line program should write a fully resolved catalog (the combined /catalog and /info structure described in section 6.2.1) to stdout.

The catalog value can also be the path to a file containing a fully resolved catalog. See also Example5.json.

6.3 data

The data node describes how the server obtains data: a command-line template (command), a file containing headerless HAPI data (file, with the format given by fileformat), or a URL that returns HAPI data (url, with the format given by urlformat). The optional testcommands and testurls arrays give commands or URLs along with expected output sizes (Nlines, Nbytes, Ncommas) that are used for testing. See the top-level structure in section 6 and the examples in section 3.
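
A minimal data node using a command-line template might look like the following sketch (the command, contact address, and test values are illustrative):

"data": {
	"command": "python bin/Example.py --start ${start} --stop ${stop}",
	"contact": "admin@example.com",
	"testcommands": [
		{
			"command": "python bin/Example.py --start 1970-01-01 --stop 1970-01-02",
			"Nlines": 1440
		}
	]
}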

7. Development

7.1 Installation

Install nodejs (tested with v8) using either the standard installer or NVM.

NVM installation notes (see also https://github.com/nvm-sh/nvm#install--update-script):

# Install Node Version Manager
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash

# Open a new shell (see displayed instructions from above command)

# Install and use node.js version 8
nvm install 8
# Clone the server repository
git clone https://github.com/hapi-server/server-nodejs

# Install dependencies
cd server-nodejs; npm install

# Start server
node server.js

# Run tests; Python 2.7+ required for certain tests.
npm test

8. Contact

Please submit questions, bug reports, and feature requests to the issue tracker at https://github.com/hapi-server/server-nodejs/issues.
