Lawrence Lesch

REST API Boilerplate using NodeJS and KOA2, Typescript

Node - Koa - Typescript Project 

The main purpose of this repository is to build a good project setup and workflow for writing a Node.js REST API in TypeScript using Koa and an SQL DB.

Koa is a new web framework designed by the team behind Express, which aims to be a smaller, more expressive, and more robust foundation for web applications and APIs. By leveraging async functions, Koa allows you to ditch callbacks and greatly improve error handling. Koa does not bundle any middleware within its core, and it provides an elegant suite of methods that make writing servers fast and enjoyable.

Through GitHub Actions CI, this boilerplate is deployed here! You can make requests to the different defined endpoints and see how it works. The following Authorization header must be set (already signed with the boilerplate's secret) to pass the JWT middleware:

HEADER (DEMO)

Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.7oxEVGy4VEtaDQyLiuoDvzdO0AyrNrJ_s9NU3vko5-k

AVAILABLE ENDPOINTS DEMO | SWAGGER DOCS DEMO

When running the project locally with watch-server, with the .env file configured exactly like the .example.env file, the swagger docs will be available at http://localhost:3000/swagger-html, and the bearer token for authorization should be as follows:

HEADER (LOCALHOST BASED ON DEFAULT SECRET KEY 'your-secret-whatever')

Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.rgOobROftUYSWphkdNfxoN2cgKiqNXd4Km4oz6Ex4ng
Method | Resource   | Description
GET    | /          | Simple hello world response
GET    | /users     | Returns the collection of users present in the DB
GET    | /users/:id | Returns the user with the specified id
POST   | /users     | Creates a user in the DB (user object to be included in the request's body)
PUT    | /users/:id | Updates an already created user in the DB (user object to be included in the request's body)
DELETE | /users/:id | Deletes a user from the DB (the JWT token's user ID must match the user you want to delete)
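
As a quick way to try the demo from code, here is a sketch using the global fetch available in modern Node; the deployed host is not spelled out in this README, so BASE_URL is a placeholder:

const BASE_URL = 'https://<deployed-app>.herokuapp.com'; // placeholder for the deployed boilerplate
const DEMO_TOKEN = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6IjEiLCJuYW1lIjoiSmF2aWVyIEF2aWxlcyIsImVtYWlsIjoiYXZpbGVzbG9wZXouamF2aWVyQGdtYWlsLmNvbSJ9.7oxEVGy4VEtaDQyLiuoDvzdO0AyrNrJ_s9NU3vko5-k';

// GET /users with the demo JWT in the Authorization header
const res = await fetch(`${BASE_URL}/users`, {
    headers: { Authorization: `Bearer ${DEMO_TOKEN}` },
});
console.log(res.status, await res.json());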

Pre-reqs

To build and run this app locally you will need:

  • Node.js and npm
  • Docker (optional, for the local database and Adminer)

Features:

  • Nodemon - server auto-restarts when code changes
  • Koa v2
  • TypeORM (SQL DB) with basic CRUD included
  • Swagger decorator (auto generated swagger docs)
  • Class-validator - Decorator based entities validation
  • Docker-compose ready to go
  • Postman (newman) integration tests
  • Locust load tests
  • Jest unit tests
  • GitHub Actions - CI for building and testing the project
  • Cron jobs prepared

Included middleware:

  • @koa/router
  • koa-bodyparser
  • Winston Logger
  • JWT auth koa-jwt
  • Helmet (security headers)
  • CORS
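
A minimal sketch of how these pieces might be wired together in server.ts (order and names are assumptions based on this README, not a copy of the actual file; note the unprotected routes are registered before the JWT middleware, matching the protected/unprotected split described in the changelog):

import Koa from 'koa';
import Router from '@koa/router';
import bodyParser from 'koa-bodyparser';
import helmet from 'koa-helmet';
import cors from '@koa/cors';
import jwt from 'koa-jwt';

const app = new Koa();

// Unprotected routes (hello world, swagger docs) go before the JWT middleware.
const unprotectedRouter = new Router();
unprotectedRouter.get('/', (ctx) => { ctx.body = 'Hello World!'; });

// Protected routes: only reachable with a valid JWT.
const protectedRouter = new Router();
protectedRouter.get('/users', (ctx) => { /* ... return users from the DB ... */ });

app.use(helmet());     // security headers
app.use(cors());       // CORS with default options
app.use(bodyParser()); // parse JSON request bodies
app.use(unprotectedRouter.routes()).use(unprotectedRouter.allowedMethods());
app.use(jwt({ secret: process.env.JWT_SECRET || 'your-secret-whatever' }));
app.use(protectedRouter.routes()).use(protectedRouter.allowedMethods());

app.listen(3000);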

Getting Started

  • Clone the repository
git clone --depth=1 https://github.com/javieraviles/node-typescript-koa-rest.git <project_name>
  • Install dependencies
cd <project_name>
npm install
  • Run the project directly in TS
npm run watch-server
  • Build and run the project in JS
npm run build
npm run start
  • Run integration or load tests
npm run test:integration:local (newman needed)
npm run test:load (locust needed)
  • Run unit tests
npm run test
  • Run unit tests with coverage
npm run test:coverage
  • Run unit tests on Jest watch mode
npm run test:watch

Docker (optional)

A docker-compose file has been added to the project with a PostgreSQL image (user, password, and db name already set to what the ORM config expects) and an Adminer image (an easy-to-use web DB client).

Once you have Docker installed, it is as easy as going to the project folder and executing the command 'docker-compose up'; both the PostgreSQL server and the Adminer client will then be running on ports 5432 and 8080 respectively, with all the config you need to start playing around.

If you use Docker natively, the database host you will need in the ORM configuration file will be localhost. If you run Docker on older Windows versions, however, you will be using Boot2Docker, and your virtual machine will probably use 192.168.99.100 as its network adapter IP (if not, the command docker-machine ip will tell you). This means your database host will be the aforementioned IP, and if you want to access the web DB client you will also need to go to http://192.168.99.100:8080

Setting up the Database - ORM

This API is prepared to work with an SQL database, using TypeORM. In this case we are using PostgreSQL, which is why 'pg' has been included in the package.json. If you were to use a different SQL database, remember to install the corresponding driver.

The ORM configuration and connection to the database could be specified in the file 'ormconfig.json'. Here, however, the connection is set up directly in the 'server.ts' file, because an environment variable containing the database URL is used to provide the connection data. This is prepared for Heroku, which provides a Postgres connection string as an env variable. Locally, it is mocked with the Docker Postgres instance, as can be seen in ".example.env".

It is important to notice that, when serving the project directly from *.ts files using ts-node, the ORM configuration should point at the *.ts file paths; once the project is built (transpiled) and run as plain JS, it needs to be changed accordingly to find the built JS files:

"entities": [
      "dist/entity/**/*.js"
   ],
   "migrations": [
      "dist/migration/**/*.js"
   ],
   "subscribers": [
      "dist/subscriber/**/*.js"
   ]
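
For reference, when running directly with ts-node, the same sections would instead point at the source .ts files (a sketch mirroring the block above):

"entities": [
      "src/entity/**/*.ts"
   ],
   "migrations": [
      "src/migration/**/*.ts"
   ],
   "subscribers": [
      "src/subscriber/**/*.ts"
   ]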

**NOTE: this is now automatically handled by the NODE_ENV variable too.

Notice that if NODE_ENV is set to development, the ORM config won't be using SSL to connect to the DB. Otherwise it will.

And because Heroku uses self-signed certificates, this bit has been added; please take it out if connecting to a local DB without SSL.

createConnection({
    ...
    extra: {
        ssl: {
            rejectUnauthorized: false // Heroku uses self signed certificates
        }
    }
 })
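
Putting the two notes above together, the NODE_ENV-driven connection setup might look roughly like this (a sketch, not the exact server.ts; the entity import and option shapes are assumptions):

import { createConnection } from 'typeorm';
import { User } from './entity/user';

const isDevMode = process.env.NODE_ENV === 'development';

createConnection({
    type: 'postgres',
    url: process.env.DATABASE_URL, // Heroku-style connection string
    entities: [User],
    ssl: !isDevMode, // no SSL in development, SSL otherwise
    extra: isDevMode ? undefined : {
        ssl: { rejectUnauthorized: false } // Heroku uses self-signed certificates
    }
});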

You can find an implemented CRUD for the user entity in the corresponding controller, controller/user.ts, and its routes in the routes.ts file.

Entities validation

This project uses the class-validator library, a decorator-based entity validation library, which is used directly in the entity files as follows:

import { Length, IsEmail, IsNotEmpty } from 'class-validator';

export class User {
    @Length(10, 100) // the email string must be between 10 and 100 characters long
    @IsEmail() // the string must comply with a standard email format
    @IsNotEmpty() // the string can't be empty
    email: string;
}

Once the decorators have been set in the entity, you can validate from anywhere as follows:

import { validate } from 'class-validator';

const user = new User();
user.email = "avileslopez.javier@gmail"; // should not pass; needs the ending .com to be a valid email

validate(user).then(errors => { // errors is an array of validation errors
    if (errors.length > 0) {
        console.log("Validation failed. Errors: ", errors); // code will get here, printing an "IsEmail" error
    } else {
        console.log("Validation succeeded");
    }
});

For further documentation regarding validations, see the class-validator docs.

Environment variables

Create a .env file (or just rename the .example.env) containing all the env variables you want to set; the dotenv library will take care of setting them. This project uses four variables at the moment:

  • PORT -> the port the server will be started on; Heroku sets this env variable automatically
  • NODE_ENV -> the environment; the value development sets the logger to debug level and is also important for CI. In addition, it determines whether the ORM connects to the DB over SSL or not.
  • JWT_SECRET -> secret value; JWT tokens should be signed with this value
  • DATABASE_URL -> DB connection data in connection-string format
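
A minimal sketch of how these variables might be gathered into a typed config object (field names are assumptions; the changelog mentions an IConfig interface in config.ts, but the actual file may differ):

import dotenv from 'dotenv';

dotenv.config({ path: '.env' });

export interface IConfig {
    port: number;
    debugLogging: boolean;
    jwtSecret: string;
    databaseUrl: string;
}

const config: IConfig = {
    port: +(process.env.PORT || 3000),
    debugLogging: process.env.NODE_ENV === 'development',
    jwtSecret: process.env.JWT_SECRET || 'your-secret-whatever',
    databaseUrl: process.env.DATABASE_URL || 'postgres://user:pass@localhost:5432/apidb'
};

export { config };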

Getting TypeScript

TypeScript itself is simple to add to any project with npm.

npm install -D typescript

If you're using VS Code then you're good to go! VS Code will detect and use the TypeScript version you have installed in your node_modules folder. For other editors, make sure you have the corresponding TypeScript plugin.

Project Structure

The most obvious difference in a TypeScript + Node project is the folder structure. TypeScript (.ts) files live in your src folder and after compilation are output as JavaScript (.js) in the dist folder.

The full folder structure of this app is explained below:

Note! Make sure you have already built the app using npm run build

Name | Description
dist | Contains the distributable (or output) from your TypeScript build. This is the code you ship
node_modules | Contains all your npm dependencies
src | Contains your source code that will be compiled to the dist dir
src/server.ts | Entry point to your Koa app
.github/workflows/ci.yml | GitHub Actions CI configuration
loadtests/locustfile.py | Locust load tests
integrationtests/node-koa-typescript.postman_collection.json | Postman integration test collection
copyStaticAssets.ts | Build script that copies images, fonts, and JS libs to the dist folder
package.json | File that contains npm dependencies as well as build scripts
docker-compose.yml | Docker PostgreSQL and Adminer images in case you want to load the DB from Docker
tsconfig.json | Config settings for compiling server code written in TypeScript
.eslintrc and .eslintignore | Config settings for ESLint code style checking
.example.env | Env variables file example, to be renamed to .env
Dockerfile and .dockerignore | The app is dockerized to be deployed from CI in a more standard way; not needed for dev

Configuring TypeScript compilation

TypeScript uses the file tsconfig.json to adjust project compile options. Let's dissect this project's tsconfig.json, starting with the compilerOptions section, which details how your project is compiled.

    "compilerOptions": {
        "module": "commonjs",
        "target": "es2017",
        "lib": ["es6"],
        "noImplicitAny": true,
        "strictPropertyInitialization": false,
        "moduleResolution": "node",
        "sourceMap": true,
        "outDir": "dist",
        "baseUrl": ".",
        "experimentalDecorators": true,
        "emitDecoratorMetadata": true,  
        }
    },
compilerOptions | Description
"module": "commonjs" | The output module type (in your .js files). Node uses commonjs, so that is what we use
"target": "es2017" | The output language level. Node supports ES2017, so we can target that here
"lib": ["es6"] | Needed for TypeORM
"noImplicitAny": true | Enables a stricter setting which throws errors when something has a default any value
"moduleResolution": "node" | TypeScript attempts to mimic Node's module resolution strategy. Read more here
"sourceMap": true | We want source maps to be output alongside our JavaScript
"outDir": "dist" | Location to output .js files after compilation
"baseUrl": "." | Part of configuring module resolution
"paths": {...} | Part of configuring module resolution
"experimentalDecorators": true | Needed for TypeORM. Allows use of @Decorators
"emitDecoratorMetadata": true | Needed for TypeORM. Allows use of @Decorators

The rest of the file defines the TypeScript project context. The project context is basically a set of options that determine which files are compiled when the compiler is invoked with a specific tsconfig.json. In this case, we use the following to define our project context:

    "include": [
        "src/**/*"
    ]

include takes an array of glob patterns of files to include in the compilation. This project is fairly simple and all of our .ts files are under the src folder. For more complex setups, you can include an exclude array of glob patterns that removes specific files from the set defined with include. There is also a files option which takes an array of individual file names which overrides both include and exclude.

Running the build

All the different build steps are orchestrated via npm scripts. Npm scripts basically allow us to call (and chain) terminal commands via npm. This is nice because most JavaScript tools have easy to use command line utilities allowing us to not need grunt or gulp to manage our builds. If you open package.json, you will see a scripts section with all the different scripts you can call. To call a script, simply run npm run <script-name> from the command line. You'll notice that npm scripts can call each other which makes it easy to compose complex builds out of simple individual build scripts. Below is a list of all the scripts this template has available:

Npm Script | Description
start | Does the same as 'npm run serve'. Can be invoked with npm start
build | Full build. Runs ALL build tasks (build-ts, lint, copy-static-assets)
serve | Runs node on dist/server/server.js, which is the app's entry point
watch-server | Nodemon; the process restarts if it crashes. Continuously watches .ts files and re-compiles to .js
build-ts | Compiles all source .ts files to .js files in the dist folder
lint | Runs ESLint check and fix on project files
copy-static-assets | Calls script that copies JS libs, fonts, and images to the dist directory
test:integration:<env> | Executes the Postman integration test collection using Newman on any env (local or heroku)
test:load | Executes Locust load tests using a specific configuration
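
For orientation, the scripts section of package.json might look roughly like this (a sketch based on the table above; the exact commands in the repo may differ):

"scripts": {
    "start": "npm run serve",
    "serve": "node dist/server/server.js",
    "build": "npm run build-ts && npm run lint && npm run copy-static-assets",
    "build-ts": "tsc",
    "lint": "eslint . --ext .ts --fix",
    "copy-static-assets": "ts-node copyStaticAssets.ts",
    "watch-server": "nodemon --watch src -e ts --exec ts-node src/server.ts"
}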

CI: GitHub Actions

Using GitHub Actions, a pipeline deploys the application to Heroku and runs tests against it, checking that the deployed application is healthy. The pipeline can be found at /.github/workflows/test.yml. It performs the following:

  • Build the project
    • Install Node
    • Install dependencies
    • Build the project (transpile to JS)
    • Run unit tests
  • Deploy to Heroku
    • Install Docker cli
    • Build the application container
    • Install Heroku cli
    • Login into Heroku
    • Push Docker image to Heroku
    • Trigger release in Heroku
  • Run integration tests
    • Install Node
    • Install Newman
    • Run Postman collection using Newman against deployed app in Heroku
  • Run load tests
    • Install Python
    • Install Locust
    • Run Locust load tests against deployed app in Heroku

ESLint

Since TSLint is now deprecated, ESLint feels like the way to go, as it also supports TypeScript. ESLint is a static code analysis tool for identifying problematic patterns in JavaScript/TypeScript code.

ESLint rules

Like most linters, ESLint has a wide set of configurable rules as well as support for custom rule sets. All rules are configured through .eslintrc. In this project, we are using a fairly basic set of rules with no additional custom rules.

Running ESLint

Like the rest of our build steps, we use npm scripts to invoke ESLint. To run ESLint you can call the main build script or just the ESLint task.

npm run build   // runs full build including ESLint format check
npm run lint    // runs ESLint check + fix

Notice that ESLint is not part of the main watch task. It can be annoying for ESLint to clutter the output window while you are in the middle of writing a function, so I elected to run it only during the full build. If you are interested in seeing ESLint feedback as soon as possible, I strongly recommend the ESLint extension for VS Code.

Register cron jobs

The cron dependency has been added to the project together with its types. A cron.ts file has been created, where a cron job is set up using a cron expression configured in the config.ts file.

import { CronJob } from 'cron';
import { config } from './config';

const cron = new CronJob(config.cronJobExpression, () => {
    console.log('Executing cron job once every hour');
});

export { cron };

From the server.ts, the cron job gets started:

import { cron } from './cron';
// Register cron job to do any action needed
cron.start();

Integrations and load tests

Integration tests are a Postman collection with assertions, which gets executed using Newman from the CI (GitHub Actions). It can be found at /integrationtests/node-koa-typescript.postman_collection.json; it can be opened in Postman and modified very easily. Feel free to install Newman in your local environment and trigger the npm run test:integration:local command, which will use the local environment file (instead of the heroku dev one) to run your Postman collection faster than from Postman itself.

Load tests are a Locust file with assertions, which gets executed from the CI (GitHub Actions). It can be found at /loadtests/locustfile.py; it is written in Python and can be executed locally against any host once Python and Locust are installed on your dev machine.

**NOTE: at the end of load tests, an endpoint to remove all created test users is called.

Logging

Winston is designed to be a simple and universal logging library with support for multiple transports.

A "logger" middleware passing a winstonInstance has been created. Current configuration of the logger can be found in the file "logger.ts". It will log 'error' level to an error.log file and 'debug' or 'info' level (depending on NODE_ENV environment variable, debug if == development) to the console.

// Logger middleware -> use winston as logger (logger.ts with config)
app.use(logger(winston));
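
Based on that description, a sketch of what logger.ts might configure (the transport choices and names are assumptions, not the actual file):

import winston from 'winston';

const winstonInstance = winston.createLogger({
    transports: [
        // 'error' level goes to a file
        new winston.transports.File({ filename: 'error.log', level: 'error' }),
        // the console gets 'debug' in development, 'info' otherwise
        new winston.transports.Console({
            level: process.env.NODE_ENV === 'development' ? 'debug' : 'info'
        })
    ]
});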

Authentication - Security

The idea is to keep the API as clean as possible, therefore auth will be done from the client using an auth provider such as Auth0. The client making requests to the API should include the JWT in the Authorization header as "Authorization: Bearer <token>". HS256 is used as the algorithm: the secret is known by both your API and your client and is used to sign the token, so make sure you keep it hidden.

As can be seen in the server.ts file, a JWT middleware has been added, passing the secret from an environment variable. The middleware validates that every request to the routes below it MUST include a valid JWT signed with the same secret. The middleware automatically sets the payload information in ctx.state.user.

// JWT middleware -> below this line, routes are only reached if JWT token is valid, secret as env variable
app.use(jwt({ secret: config.jwtSecret }));

Go to the website https://jwt.io/ to create JWT tokens for testing/debugging purposes. Select the HS256 algorithm and include the generated token in the Authorization header to pass through the jwt middleware.

Custom 401 handling -> if you don't want to expose koa-jwt errors to users:

app.use(function (ctx, next) {
  return next().catch((err) => {
    if (err.status === 401) {
      ctx.status = 401;
      ctx.body = 'Protected resource, use Authorization header to get access\n';
    } else {
      throw err;
    }
  });
});

If you want to authenticate from the API itself, and you don't fancy the idea of an auth provider like Auth0, have a look at jsonwebtoken — JSON Web Token signing and verification.
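
If you go that route, signing a token with jsonwebtoken might look like this (a sketch; the payload mirrors the demo token used earlier in this README):

import jwt from 'jsonwebtoken';

// Sign a payload with the shared secret using HS256
const token = jwt.sign(
    { id: '1', name: 'Javier Aviles', email: 'avileslopez.javier@gmail.com' },
    process.env.JWT_SECRET || 'your-secret-whatever',
    { algorithm: 'HS256' }
);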

CORS

This boilerplate uses @koa/cors, a simple CORS middleware for koa. If you are not sure what this is about, click here.

// Enable CORS with default options
app.use(cors());

Have a look at the official @koa/cors docs in case you want to specify the 'origin' or 'allowMethods' properties.
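
For example, restricting the allowed origin and methods might look like this (the values are placeholders):

// Enable CORS only for a specific origin and set of methods
app.use(cors({
    origin: 'https://your-frontend.example.com',
    allowMethods: ['GET', 'POST', 'PUT', 'DELETE']
}));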

Helmet

This boilerplate uses koa-helmet, a wrapper for helmet to work with koa. It provides important security headers to make your app more secure by default.

Usage is the same as helmet's. Helmet offers 11 security middleware functions (clickjacking, DNS prefetching, Security Policy...); everything is set by default here.

// Enable helmet with default options
app.use(helmet());

Have a look at Official koa-helmet docs in case you want to customize which security middlewares are enabled.

Dependencies

Dependencies are managed through package.json. In that file you'll find two sections:

dependencies

Package | Description
dotenv | Loads environment variables from the .env file
koa | Node web framework
koa-bodyparser | A body parser for Koa
koa-jwt | Middleware to validate JWT tokens
@koa/router | Router middleware for Koa
koa-helmet | Wrapper for helmet; important security headers to make the app more secure
@koa/cors | Cross-Origin Resource Sharing (CORS) for Koa
pg | PostgreSQL driver, needed for the ORM
reflect-metadata | Used by TypeORM to implement decorators
typeorm | A very cool SQL ORM
winston | Logging library
class-validator | Decorator-based entity validation
koa-swagger-decorator | Uses decorators to automatically generate swagger docs for @koa/router
cron | Registers cron jobs in Node

devDependencies

Package | Description
@types | Dependencies in this folder are .d.ts files used to provide types
nodemon | Utility that automatically restarts the node process when it crashes
ts-node | Enables directly running TS files. Used to run copy-static-assets.ts
eslint | Linter for JavaScript/TypeScript files
typescript | JavaScript compiler/type checker that boosts JavaScript productivity
shelljs | Portable Unix shell commands for Node.js

To install or update these dependencies you can use npm install or npm update.

Changelog

1.8.0

  • Unit tests included using Jest (Thanks to @rafapaezbas)
  • Upgrade all dependencies
  • Upgrade to Node 14

1.7.1

  • Upgrading Locust + fixing load tests
  • Improving Logger

1.7.0

  • Migrating TSLint (deprecated already) to ESLint
  • Node version upgraded from 10.x.x to 12.0.0 (LTS)
  • Now CI installs from package-lock.json using npm ci. Beyond guaranteeing that you only get what is in your lock file, it is also much faster (2x-10x!) than npm install when you don't start with a node_modules.
  • Included integration tests using Newman for the local env too
  • koa-router deprecated; now using the new fork from the Koa team, @koa/router
  • Dependencies updated, some @types removed as more and more libraries include their own types now!
  • Typescript to latest

1.6.1

  • Fixing CI
  • Improving integration tests robustness

1.6.0

  • CI migrated from Travis to Github actions
  • cron dependency -> register cron jobs
  • Node app dockerized -> now it is directly pushed as a Docker image to Heroku from CI, not using any webhook
  • Added postman integration tests, executed from Github actions CI using Newman
  • Added locust load tests, executed from Github actions CI
  • PRs merged: 47, 48 and 49. Thanks to everybody!

1.5.0

  • koa-swagger-decorator -> generate swagger docs with decorators in the endpoints
  • Split routes into protected and unprotected. Hello world + swagger docs are not protected by JWT
  • Some dependencies have been updated

1.4.2

  • Fix -> npm run watch-server is now working properly, live-reloading changes in the code (Issue 39)
  • Fix -> Logging levels were not correctly mapped. Thanks to @atamano for the PR Pull Request 35
  • Some code leftovers removed

1.4.1

  • Fix -> After updating winston to 3.0.0, it was throwing an error when logging errors into file
  • Fix -> Config in config.ts wasn't implementing IConfig interface

1.4.0

  • Dotenv lib updated, no changes needed (they are dropping node4 support)
  • Class-validator lib updated, no changes needed (cool features added like IsPhoneNumber or custom context for decorators)
  • Winston lib updated to 3.0.0, some amendments needed to format the console log. Removed the @types as Winston now supports Typescript natively!
  • Some devDependencies updated as well

1.3.0

  • CORS added
  • Syntax full REST
  • Some error handling improvement

1.2.0

  • Heroku deployment added

1.1.0

  • Added Helmet for security
  • Some async/await bad practices fixed

Download Details:

Author: javieraviles
Source Code: https://github.com/javieraviles/node-typescript-koa-rest 
License: MIT license

#typescript #heroku #docker #jwt #node 

Wilford Pagac

What is REST API? An Overview | Liquid Web

What is REST?

The REST acronym is defined as a “REpresentational State Transfer” and is designed to take advantage of existing HTTP protocols when used for Web APIs. It is very flexible in that it is not tied to resources or methods and has the ability to handle different calls and data formats. Because REST API is not constrained to an XML format like SOAP, it can return multiple other formats depending on what is needed. If a service adheres to this style, it is considered a “RESTful” application. REST allows components to access and manage functions within another application.

REST was initially defined by Roy Fielding in his dissertation twenty years ago. He proposed these standards as an alternative to SOAP (the Simple Object Access Protocol, a simple standard for accessing objects and exchanging structured messages within a distributed computing environment). REST (or RESTful) defines the general rules used to regulate the interactions between web apps utilizing the HTTP protocol for CRUD (create, retrieve, update, delete) operations.

What is an API?

An API (or Application Programming Interface) provides a method of interaction between two systems.

What is a RESTful API?

A RESTful API (or application program interface) uses HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. This allows two pieces of software to communicate with each other. In essence, REST API is a set of remote calls using standard methods to return data in a specific format.
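
As a quick illustration, the same resource can be read, created, updated, and deleted purely through HTTP methods (a sketch; the URL and payload are hypothetical):

const base = 'https://api.example.com/users';

await fetch(base);                              // GET    -> read the collection
await fetch(`${base}/1`);                       // GET    -> read one resource
await fetch(base, {                             // POST   -> create
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Ada' })
});
await fetch(`${base}/1`, {                      // PUT    -> update
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Ada Lovelace' })
});
await fetch(`${base}/1`, { method: 'DELETE' }); // DELETE -> remove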

The systems that interact in this manner can be very different. Each app may use a unique programming language, operating system, database, etc. So, how do we create a system that can easily communicate with and understand other apps? This is where the REST API is used as an interaction system.

When using a RESTful API, we should determine in advance what resources we want to expose to the outside world. Typically, the RESTful API service is implemented, keeping the following ideas in mind:

  • Format: There should be no restrictions on the data exchange format
  • Implementation: REST is based entirely on HTTP
  • Service Definition: Because REST is very flexible, API can be modified to ensure the application understands the request/response format.
  • The RESTful API focuses on resources and how efficiently you perform operations with it using HTTP.

The features of the REST API design style state:

  • Each entity must have a unique identifier.
  • Standard methods should be used to read and modify data.
  • It should provide support for different types of resources.
  • The interactions should be stateless.

For REST to fit this model, we must adhere to the following rules:

  • Client-Server Architecture: The interface is separate from the server-side data repository. This affords flexibility and the development of components independently of each other.
  • Detachment: The client connections are not stored on the server between requests.
  • Cacheability: It must be explicitly stated whether the client can store responses.
  • Multi-level: The API should work whether it interacts directly with a server or through an additional layer, like a load balancer.

#tutorials #api #application #application programming interface #crud #http #json #programming #protocols #representational state transfer #rest #rest api #rest api graphql #rest api json #rest api xml #restful #soap #xml #yaml

Chloe Butler

Pdf2gerb: Perl Script Converts PDF Files to Gerber format

pdf2gerb

Perl script converts PDF files to Gerber format

Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.


pdf2gerb_cfg.pm

#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)


##############################################################################################
#configurable settings:
#change values here instead of in main pfg2gerb.pl file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors no worky on Windows? this must be set < first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affect final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
#traces:
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .012,
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .025,
    .030,  #heavy-current traces; be careful with these ones!
    .040,
    .050,
    .060,
    .080,
    .100,
    .120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .0004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINX => 0,
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    CIRCLE_ADJUST_MAXY => 0,
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
    
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, 5, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (1/72 * .12 = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#############################################################################################
#junk/experiment:

#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html

#my $caller = "pdf2gerb::";

#sub cfg
#{
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;
#}

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code: https://github.com/swannman/pdf2gerb

License: GPL-3.0 license

#perl 

An API-First Approach For Designing Restful APIs | Hacker Noon

I’ve been working with Restful APIs for some time now and one thing that I love to do is to talk about APIs.

So, today I will show you how to build an API using the API-First approach and Design First with OpenAPI Specification.

First things first: if you don't know what an API-First approach means, it would be nice if you stopped reading this and checked the blog post that I wrote for Farfetch's blog, where I explain everything that you need to know to start an API using API-First.

Preparing the ground

Before you get your hands dirty, let’s prepare the ground and understand the use case that will be developed.

Tools

If you want to reproduce the examples that will be shown here, you will need some of the items below.

  • NodeJS
  • OpenAPI Specification
  • Text Editor (I’ll use VSCode)
  • Command Line

Use Case

To keep it easy to understand, let's use the Todo List App; it is a very common concept within the software development community.

#api #rest-api #openai #api-first-development #api-design #apis #restful-apis #restful-api

Lets Cms

Unilevel MLM Wordpress Rest API FrontEnd | UMW Rest API Woocommerce

Unilevel MLM WordPress REST API FrontEnd | UMW REST API WooCommerce, Price USA, Philippines: Our APIs handle all the Unilevel MLM WooCommerce end-user functionality, such as customer login/register. You can request any type of information listed below; our API will provide managed results for all your frontend needs, which will be useful for applications such as mobile apps.
The Business-to-Customer REST API for Unilevel MLM WooCommerce will empower your WooCommerce site with the most powerful Unilevel MLM WooCommerce REST API; you will be able to get and send data to your marketplace from other mobile apps or websites using HTTP REST API requests.
Our plugin uses JWT authentication for the authorization process.

The REST API Unilevel MLM WooCommerce plugin contains the following APIs:
User Login Rest API
User Register Rest API
User Join Rest API
Get User info Rest API
Get Affiliate URL Rest API 
Get Downlines list Rest API
Get Bank Details Rest API
Save Bank Details Rest API
Get Genealogy JSON Rest API
Get Total Earning Rest API
Get Current Balance Rest API
Get Payout Details Rest API
Get Payout List Rest API
Get Commissions List Rest API
Withdrawal Request Rest API
Get Withdrawal List Rest API

If you want more information or have any queries regarding the Unilevel MLM REST API WooCommerce WordPress plugin, you can contact our experts through
Skype: jks0586, 
Mail: letscmsdev@gmail.com,
Website: www.letscms.com, www.mlmtrees.com,
Call/WhatsApp/WeChat: +91-9717478599.  

more information : https://www.mlmtrees.com/product/unilevel-mlm-woocommerce-rest-api-addon

Visit Documentation : https://letscms.com/documents/umw_apis/umw-apis-addon-documentation.html

#Unilevel_MLM_WooCommerce_Rest_API's_Addon #umw_mlm_rest_api #rest_api_woocommerce_unilevel #rest_api_in_woocommerce #rest_api_woocommerce #rest_api_woocommerce_documentation #rest_api_woocommerce_php #api_rest_de_woocommerce #woocommerce_rest_api_in_android #woocommerce_rest_api_in_wordpress #Rest_API_Woocommerce_unilevel_mlm #wp_rest_api_woocommerce

Lets Cms

Opencart REST API extensions - V3.x | Rest API Integration, Affiliate

Opencart REST API extensions - V3.x | REST API Integration: OpenCart APIs are fully integrated with the OpenCart REST API, interacting with your OpenCart site by sending and receiving data as JSON (JavaScript Object Notation) objects. Using the OpenCart REST API you can register customers and purchase products, and it provides access to OpenCart content that is publicly accessible via the REST API. These APIs also support e-commerce mobile apps.

Opencart REST API 
The OCRESTAPI module allows customers to purchase products from the website, just like e-commerce APIs; mobile-version APIs are also available.

Opencart Rest APIs List 
Customer Registration GET APIs.
Customer Registration POST APIs.
Customer Login GET APIs.
Customer Login POST APIs.
Checkout Confirm GET APIs.
Checkout Confirm POST APIs.


If you want to know more about the Opencart REST API, you can contact us at -
Skype: jks0586,
Email: letscmsdev@gmail.com,
Website: www.letscms.com, www.mlmtrees.com
Call/WhatsApp/WeChat: +91–9717478599.

Download : https://www.opencart.com/index.php?route=marketplace/extension/info&extension_id=43174&filter_search=ocrest%20api
View Documentation : https://www.letscms.com/documents/api/opencart-rest-api.html
More Information : https://www.letscms.com/blog/Rest-API-Opencart
VIDEO : https://vimeo.com/682154292  

#opencart_api_for_android #Opencart_rest_admin_api #opencart_rest_api #Rest_API_Integration #oc_rest_api #rest_api_ecommerce #rest_api_mobile #rest_api_opencart #rest_api_github #rest_api_documentation #opencart_rest_admin_api #rest_api_for_opencart_mobile_app #opencart_shopping_cart_rest_api #opencart_json_api