
Build a REST API to manage users and roles using Firebase and Node.js

Originally published by Joaquin at https://www.toptal.com

Introduction

Almost every app requires some level of authorization system. In some cases, validating a username/password pair against our Users table is enough, but often we need a more fine-grained permissions model to allow certain users to access certain resources and restrict them from others. Building a system to support the latter is not trivial and can be very time-consuming. In this tutorial, we’ll learn how to build a role-based auth API using Firebase, which will help us get up and running quickly.

Role-based Auth

In this authorization model, access is granted to roles instead of specific users, and a user can have one or more roles depending on how you design your permission model. Resources, on the other hand, require certain roles in order for a user to access them.


Firebase

Firebase Authentication

In a nutshell, Firebase Authentication is an extensible token-based auth system and provides out-of-the-box integrations with the most common providers such as Google, Facebook, and Twitter, among others.

It enables us to use custom claims which we’ll leverage to build a flexible role-based API.

We can set any JSON value into the claims (e.g., { role: 'admin' } or { role: 'manager' }).

Once set, custom claims will be included in the token that Firebase generates, and we can read the value to control access.

It also comes with a very generous free quota, which in most cases will be more than enough.
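As a quick illustration of the custom claims flow described above (this is only a sketch; uid and idToken are placeholders, and the tutorial wires this up properly later):

// Attach a custom claim to a user; it will be embedded in the ID tokens Firebase issues afterwards.
await admin.auth().setCustomUserClaims(uid, { role: 'manager' });
 
// When verifying a token later, the claim is available on the decoded payload.
const decodedToken = await admin.auth().verifyIdToken(idToken);
console.log(decodedToken.role); // 'manager'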

Firebase Functions

Functions are a fully-managed serverless platform service. We just need to write our code in Node.js and deploy it. Firebase takes care of scaling the infrastructure on demand, server configuration, and more. In our case, we’ll use it to build our API and expose it via HTTP to the web.

Firebase allows us to set Express.js apps as handlers for different paths. For example, you can create an Express app and hook it to /mypath, and all requests coming to this route will be handled by the app you configured.

From within the context of a function, you have access to the whole Firebase Authentication API, using the Admin SDK.

This is how we’ll create the user API.

What We’ll Build

So before we get started, let’s take a look at what we’ll build. We are going to create a REST API with the following endpoints:

  • POST /users: creates a new user
  • GET /users: lists all users
  • GET /users/:id: gets a user by ID
  • PATCH /users/:id: updates a user
  • DELETE /users/:id: deletes a user

Each of these endpoints will handle authentication, validate authorization, perform the corresponding operation, and finally return a meaningful HTTP code.

We’ll create the authentication and authorization functions required to validate the token and check if the claims contain the required role to execute the operation.

Building the API

In order to build the API, we’ll need:

  • A Firebase project
  • firebase-tools installed

First, log in to Firebase:

firebase login 

Next, initialize a Functions project:

firebase init
 
? Which Firebase CLI features do you want to set up for this folder? ...
(O) Functions: Configure and deploy Cloud Functions
 
? Select a default Firebase project for this directory: {your-project}
 
? What language would you like to use to write Cloud Functions? TypeScript
 
? Do you want to use TSLint to catch probable bugs and enforce style? Yes
 
? Do you want to install dependencies with npm now? Yes

At this point, you will have a Functions folder, with minimum setup to create Firebase Functions.

At src/index.ts there’s a helloWorld example, which you can uncomment to validate that your Functions setup works. Then you can cd into functions and run npm run serve. This command will transpile the code and start the local server.

You can check the results at http://localhost:5000/{your-project}/us-central1/helloWorld

Notice the function is exposed on a path that matches the name it’s exported with in index.ts: helloWorld.
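For reference, the scaffolded example in src/index.ts looks roughly like this (it may differ slightly between CLI versions):

import * as functions from 'firebase-functions';
 
export const helloWorld = functions.https.onRequest((request, response) => {
   response.send("Hello from Firebase!");
});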

Creating a Firebase HTTP Function

Now let’s code our API. We are going to create an HTTP Firebase function and hook it to the /api path.

First, install Express: npm install express.

On the src/index.ts we will:

  • Initialize the firebase-admin SDK module with admin.initializeApp();
  • Set an Express app as the handler of our api https endpoint
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
import * as express from 'express';
 
admin.initializeApp();
 
const app = express();
 
export const api = functions.https.onRequest(app);

Now, all requests going to /api will be handled by the app instance.

The next thing we’ll do is configure the app instance to support CORS and add JSON body parser middleware. This way we can make requests from any URL and parse JSON formatted requests.

We’ll first install required dependencies.

npm install --save cors body-parser 
npm install --save-dev @types/cors 

And then:

//...
import * as cors from 'cors';
import * as bodyParser from 'body-parser';
 
//...
const app = express();
app.use(bodyParser.json());
app.use(cors({ origin: true }));
 
export const api = functions.https.onRequest(app);

Finally, we will configure the routes that the app will handle.

//...
import { routesConfig } from './users/routes-config';
//…
app.use(cors({ origin: true }));
routesConfig(app)
 
export const api = functions.https.onRequest(app);

Firebase Functions allows us to set an Express app as the handler, and any path after the one you set up at functions.https.onRequest(app);—in this case, api—will also be handled by the app. This allows us to write specific endpoints such as api/users and set a handler for each HTTP verb, which we’ll do next.

Let’s create the file src/users/routes-config.ts.

Here, we’ll set a create handler at POST '/users':

import { Application } from "express";
import { create } from "./controller";
 
export function routesConfig(app: Application) {
   app.post('/users',
       create
   );
}

Now, we’ll create the src/users/controller.ts file.

In this function, we first validate that all fields are present in the request body, and next, we create the user and set the custom claims.

We are just passing { role } to setCustomUserClaims; the other fields are already set by Firebase.

If no errors occur, we return a 201 code with the uid of the created user.

import { Request, Response } from "express";
import * as admin from 'firebase-admin'
 
export async function create(req: Request, res: Response) {
   try {
       const { displayName, password, email, role } = req.body
 
       if (!displayName || !password || !email || !role) {
           return res.status(400).send({ message: 'Missing fields' })
       }
 
       const { uid } = await admin.auth().createUser({
           displayName,
           password,
           email
       })
       await admin.auth().setCustomUserClaims(uid, { role })
 
       return res.status(201).send({ uid })
   } catch (err) {
       return handleError(res, err)
   }
}
 
function handleError(res: Response, err: any) {
   return res.status(500).send({ message: `${err.code} - ${err.message}` });
}

Now, let’s secure the handler by adding authorization. To do that, we’ll add a couple of handlers to our create endpoint. With express.js, you can set a chain of handlers that will be executed in order. Within a handler, you can execute code and pass it to the next() handler or return a response. What we’ll do is first authenticate the user and then validate if it is authorized to execute. If the user doesn’t have the required role, we’ll return a 403.

On file src/users/routes-config.ts:

//...
import { isAuthenticated } from "../auth/authenticated";
import { isAuthorized } from "../auth/authorized";
 
export function routesConfig(app: Application) {
   app.post('/users',
       isAuthenticated,
       isAuthorized({ hasRole: ['admin', 'manager'] }),
       create
   );
}

Let’s create the file src/auth/authenticated.ts.

In this function, we’ll validate the presence of the authorization bearer token in the request header. Then we’ll decode it with admin.auth().verifyIdToken() and persist the user’s uid, role, and email in res.locals, which we’ll later use to validate authorization.

If the token is invalid, we return a 401 response to the client:

import { Request, Response } from "express";
import * as admin from 'firebase-admin'
 
export async function isAuthenticated(req: Request, res: Response, next: Function) {
   const { authorization } = req.headers
 
   if (!authorization)
       return res.status(401).send({ message: 'Unauthorized' });
 
   if (!authorization.startsWith('Bearer'))
       return res.status(401).send({ message: 'Unauthorized' });
 
   const split = authorization.split('Bearer ')
   if (split.length !== 2)
       return res.status(401).send({ message: 'Unauthorized' });
 
   const token = split[1]
 
   try {
       const decodedToken: admin.auth.DecodedIdToken = await admin.auth().verifyIdToken(token);
       console.log("decodedToken", JSON.stringify(decodedToken))
       res.locals = { ...res.locals, uid: decodedToken.uid, role: decodedToken.role, email: decodedToken.email }
       return next();
   }
   catch (err) {
       console.error(`${err.code} -  ${err.message}`)
       return res.status(401).send({ message: 'Unauthorized' });
   }
}

Now, let’s create a src/auth/authorized.ts file.

In this handler, we extract the user’s info from the res.locals we set previously and validate whether it has a role required to execute the operation. For operations that allow the same user to execute them, we also check that the ID in the request params matches the uid in the auth token.

import { Request, Response } from "express";
 
export function isAuthorized(opts: { hasRole: Array<'admin' | 'manager' | 'user'>, allowSameUser?: boolean }) {
   return (req: Request, res: Response, next: Function) => {
       const { role, email, uid } = res.locals
       const { id } = req.params
 
       if (opts.allowSameUser && id && uid === id)
           return next();
 
       if (!role)
           return res.status(403).send();
 
       if (opts.hasRole.includes(role))
           return next();
 
       return res.status(403).send();
   }
}

With these two methods, we’ll be able to authenticate requests and authorize them based on the role in the incoming token. That’s great, but since Firebase doesn’t let us set custom claims from the project console, we won’t be able to execute any of these endpoints yet. To get around this, we can create a root user from the Firebase Authentication console and add an email comparison in the code. Now, when firing requests as this user, we’ll be able to execute all operations.

//…
  const { role, email, uid } = res.locals
  const { id } = req.params
 
  if (email === 'your-root-user-email@domain.com')
    return next();
//…
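As an alternative not covered in the original tutorial, you could also bootstrap the first admin with a one-off script that calls the Admin SDK directly (the uid is a placeholder, and the script assumes service account credentials are configured locally):

// bootstrap-admin.ts: run once with service account credentials configured.
import * as admin from 'firebase-admin';
 
admin.initializeApp();
 
async function grantAdminRole(uid: string) {
   // Overwrites any existing custom claims for this user.
   await admin.auth().setCustomUserClaims(uid, { role: 'admin' });
   console.log(`Granted admin role to ${uid}`);
}
 
grantAdminRole('{uid-of-your-root-user}').then(() => process.exit(0));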

Now, let’s add the rest of the CRUD operations to src/users/routes-config.ts.

For operations that get or update a single user, where the :id param is sent, we also allow the same user to execute the operation.

export function routesConfig(app: Application) {
   //…
   // lists all users
   app.get('/users', [
       isAuthenticated,
       isAuthorized({ hasRole: ['admin', 'manager'] }),
       all
   ]);
   // get :id user
   app.get('/users/:id', [
       isAuthenticated,
       isAuthorized({ hasRole: ['admin', 'manager'], allowSameUser: true }),
       get
   ]);
   // updates :id user
   app.patch('/users/:id', [
       isAuthenticated,
       isAuthorized({ hasRole: ['admin', 'manager'], allowSameUser: true }),
       patch
   ]);
   // deletes :id user
   app.delete('/users/:id', [
       isAuthenticated,
       isAuthorized({ hasRole: ['admin', 'manager'] }),
       remove
   ]);
}

Now let’s implement these handlers in src/users/controller.ts. In these operations, we leverage the Admin SDK to interact with Firebase Authentication and perform the respective operations. As we did previously for the create operation, we return a meaningful HTTP code for each one.

For the update operation, we validate that all fields are present and override customClaims with those sent in the request:

//…
 
export async function all(req: Request, res: Response) {
   try {
       const listUsers = await admin.auth().listUsers()
       const users = listUsers.users.map(user => {
            const customClaims = (user.customClaims || { role: '' }) as { role?: string }
            const role = customClaims.role ? customClaims.role : ''
           return {
               uid: user.uid,
               email: user.email,
               displayName: user.displayName,
               role,
               lastSignInTime: user.metadata.lastSignInTime,
               creationTime: user.metadata.creationTime
           }
       })
 
       return res.status(200).send({ users })
   } catch (err) {
       return handleError(res, err)
   }
}
 
export async function get(req: Request, res: Response) {
   try {
       const { id } = req.params
       const user = await admin.auth().getUser(id)
       return res.status(200).send({ user })
   } catch (err) {
       return handleError(res, err)
   }
}
 
export async function patch(req: Request, res: Response) {
   try {
       const { id } = req.params
       const { displayName, password, email, role } = req.body
 
       if (!id || !displayName || !password || !email || !role) {
            return res.status(400).send({ message: 'Missing fields' })
       }
 
       const user = await admin.auth().updateUser(id, { displayName, password, email })
       await admin.auth().setCustomUserClaims(id, { role })
       return res.status(204).send({ user })
   } catch (err) {
       return handleError(res, err)
   }
}
 
export async function remove(req: Request, res: Response) {
   try {
       const { id } = req.params
       await admin.auth().deleteUser(id)
       return res.status(204).send({})
   } catch (err) {
       return handleError(res, err)
   }
}
 
//…

Now we can run the function locally. To do that, you first need to set up a service account key so the Admin SDK can connect to the Auth API locally. Then run:

npm run serve 
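If you haven’t set up the key yet, one common approach (the file path here is a placeholder) is to download a service account key from Project Settings > Service accounts in the Firebase console and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at it before starting the local server:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your-service-account-key.json"
npm run serve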

Deploy the API

Great! Now that we have written our role-based API, we can deploy it to the web and start using it. Deploying with Firebase is super easy: we just need to run firebase deploy. Once the deploy is completed, we can access our API at the published URL.
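If your project contains other Firebase resources and you only want to deploy the API, you can scope the command to Functions:

firebase deploy --only functions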

You can check the API URL at https://console.firebase.google.com/u/0/project/{your-project}/functions/list.

In my case, it is https://us-central1-joaq-lab.cloudfunctions.net/api.

Consuming the API

Once our API is deployed, we have several ways to use it—in this tutorial, I’ll cover how to use it via Postman or from an Angular app.

If we enter the List All Users URL (/api/users) in any browser, we’ll get a 401 Unauthorized response.

The reason for this is that when sending the request from a browser, we are performing a GET request without auth headers. This means our API is actually working as expected!

Our API is secured via tokens—in order to generate such a token, we need to call Firebase’s Client SDK and log in with a valid user/password credential. When successful, Firebase will send a token back in the response which we can then add to the header of any following request we want to perform.
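If you just want a token for testing without building a client app, a rough sketch using Firebase Auth’s REST endpoint looks like this (the Web API key, credentials, project name, and idToken are placeholders):

# Exchange email/password for an ID token.
curl -s "https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key={your-web-api-key}" \
  -H "Content-Type: application/json" \
  -d '{"email":"admin@example.com","password":"your-password","returnSecureToken":true}'
 
# Use the returned idToken as a Bearer token against the deployed API.
curl "https://us-central1-{your-project}.cloudfunctions.net/api/users" \
  -H "Authorization: Bearer {idToken}"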

From an Angular App

In this tutorial, I’ll just go over the important pieces needed to consume the API from an Angular app. The full repository can be accessed here; creating the Angular app and configuring @angular/fire step by step is beyond the scope of this post.

So, back to signing in, we’ll have a SignInComponent with a <form> to let the user enter an email and password.

//…
<form [formGroup]="form">
   <div class="form-group">
     <label>Email address</label>
     <input type="email"
            formControlName="email"
            class="form-control"
            placeholder="Enter email">
  </div>
   <div class="form-group">
     <label>Password</label>
     <input type="password"
            formControlName="password"
            class="form-control"
            placeholder="Password">
   </div>
</form>
 
//…

And in the component class, we call signInWithEmailAndPassword using the AngularFireAuth service.

//…
 
form: FormGroup = new FormGroup({
   email: new FormControl(''),
   password: new FormControl('')
})
 
constructor(
   private afAuth: AngularFireAuth
) { }
 
async signIn() {
   try {
     const { email, password } = this.form.value
     await this.afAuth.auth.signInWithEmailAndPassword(email, password)
   } catch (err) {
     console.log(err)
   }
}
 
//…

 At this point, we can sign in to our Firebase project.

And when we inspect the network requests in the DevTools, we can see that Firebase returns a token after verifying our user and password.

This token is the one we will send in the request headers when calling the API we’ve built. One way to add the token to all requests is to use an HttpInterceptor.

The following file shows how to get the token from AngularFireAuth and add it to the request headers. We then provide the interceptor in the AppModule.

http-interceptors/auth-token.interceptor.ts

// Imports added for completeness; the @angular/fire import path may vary by library version.
import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest, HTTP_INTERCEPTORS } from '@angular/common/http';
import { AngularFireAuth } from '@angular/fire/auth';
import { Observable } from 'rxjs';
import { take, switchMap } from 'rxjs/operators';
 
@Injectable({ providedIn: 'root' })
export class AuthTokenHttpInterceptor implements HttpInterceptor {
 
   constructor(
       private auth: AngularFireAuth
   ) { }
 
   intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
       return this.auth.idToken.pipe(
           take(1),
           switchMap(idToken => {
               let clone = req.clone()
               if (idToken) {
                   clone = clone.clone({ headers: req.headers.set('Authorization', 'Bearer ' + idToken) });
               }
               return next.handle(clone)
           })
       )
   }
}
 
export const AuthTokenHttpInterceptorProvider = {
   provide: HTTP_INTERCEPTORS,
   useClass: AuthTokenHttpInterceptor,
   multi: true
}

 app.module.ts

@NgModule({
//…
providers: [
   AuthTokenHttpInterceptorProvider
]
//…
})
export class AppModule { }

Once the interceptor is set, we can make requests to our API using HttpClient. For example, here’s a UserService where we list all users, get a user by its ID, and create a user.

//…
 
export type CreateUserRequest = { displayName: string, password: string, email: string, role: string }
 
@Injectable({
  providedIn: 'root'
})
export class UserService {
 
  private baseUrl = '{your-functions-url}/api/users'
 
  constructor(
     private http: HttpClient
  ) { }
 
  get users$(): Observable<User[]> {
     return this.http.get<{ users: User[] }>(`${this.baseUrl}`).pipe(
       map(result => {
         return result.users
       })
     )
  }
 
  user$(id: string): Observable<User> {
     return this.http.get<{ user: User }>(`${this.baseUrl}/${id}`).pipe(
       map(result => {
         return result.user
       })
     )
  }
 
  create(user: CreateUserRequest) {
     return this.http.post(`${this.baseUrl}`, user)
  }
}

 Now, we can call the API to get the user by its ID and list all users from a component like this:

//…
   <ul *ngIf="user$ | async; let user"
       class="list-group">
     <li class="list-group-item d-flex justify-content-between align-items-center">
       <div>
         <h5 class="mb-1">{{user.displayName}}</h5>
         <small>{{user.email}}</small>
       </div>
       <span class="badge badge-primary badge-pill">{{user.role?.toUpperCase()}}</span>
     </li>
   </ul>
 
   <ul *ngIf="users$ | async; let users"
       class="list-group">
     <li *ngFor="let user of users"
         class="list-group-item d-flex justify-content-between align-items-center">
       <div>
         <h5 class="mb-1">{{user.displayName}}</h5>
         <small class="d-block">{{user.email}}</small>
         <small class="d-block">{{user.uid}}</small>
       </div>
       <span class="badge badge-primary badge-pill">{{user.role?.toUpperCase()}}</span>
     </li>
   </ul>
//…

 

//…
 
users$: Observable<User[]>
user$: Observable<User>
 
constructor(
   private userService: UserService,
   private userForm: UserFormService,
   private modal: NgbModal,
   private afAuth: AngularFireAuth
) { }
 
ngOnInit() {
   this.users$ = this.userService.users$
 
   this.user$ = this.afAuth.user.pipe(
     filter(user => !!user),
     switchMap(user => this.userService.user$(user.uid))
   )
}
 
//…

And here’s the result.

Notice that if we sign in with a user whose role is user, only the Me section will be rendered.

And we’ll see a 403 in the network inspector for the list-all-users request.

From Postman

Postman is a tool for building and making requests to APIs. This way, we can simulate calling our API from any client app or a different service.

What we’ll demo is how to send a request to list all users.

Once we open the tool, we set the URL https://us-central1-{your-project}.cloudfunctions.net/api/users:

Next, on the Authorization tab, we choose Bearer Token and set the value to the token we extracted from DevTools earlier.

Conclusion

Congratulations! You’ve made it through the whole tutorial and now you’ve learned to create a user role-based API on Firebase.

We’ve also covered how to consume it from an Angular app and Postman.

Let’s recap the most important things:

  1. Firebase allows you to get quickly up and running with an enterprise-level auth API, which you can extend later on.
  2. Almost every project requires authorization—if you need to control access using a role-based model, Firebase Authentication lets you get started very quickly.
  3. The role-based model grants access to roles rather than to specific users, and resources validate that the requesting user holds one of the required roles.
  4. Using an Express.js app on Firebase Functions, we can create a REST API and set handlers to authenticate and authorize requests.
  5. Leveraging built-in custom claims, you can create a role-based auth API and secure your app.

Thanks for reading

If you liked this post, share it with all of your programming buddies!




#node-js #firebase #rest #api


NBB: Ad-hoc CLJS Scripting on Node.js

Nbb

Not babashka. Node.js babashka!?

Ad-hoc CLJS scripting on Node.js.

Status

Experimental. Please report issues here.

Goals and features

Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.

Additional goals and features are:

  • Fast startup without relying on a custom version of Node.js.
  • Small artifact (current size is around 1.2MB).
  • First class macros.
  • Support building small TUI apps using Reagent.
  • Complement babashka with libraries from the Node.js ecosystem.

Requirements

Nbb requires Node.js v12 or newer.

How does this tool work?

CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).

Usage

Install nbb from NPM:

$ npm install nbb -g

Omit -g for a local install.

Try out an expression:

$ nbb -e '(+ 1 2 3)'
6

And then install some other NPM libraries to use in the script. E.g.:

$ npm install csv-parse shelljs zx

Create a script which uses the NPM libraries:

(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))

(prn (path/resolve "."))

(prn (term-size))

(println (count (str (fs/readFileSync *file*))))

(prn (sh/ls "."))

(prn (csv-parse "foo,bar"))

(prn (zxfs/existsSync *file*))

(zx/$ #js ["ls"])

Call the script:

$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs

Macros

Nbb has first class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:

(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))

Using this macro we can look async code more like sync code. Consider this puppeteer example:

(-> (.launch puppeteer)
      (.then (fn [browser]
               (-> (.newPage browser)
                   (.then (fn [page]
                            (-> (.goto page "https://clojure.org")
                                (.then #(.screenshot page #js{:path "screenshot.png"}))
                                (.catch #(js/console.log %))
                                (.then #(.close browser)))))))))

Using plet this becomes:

(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
      (.close browser))

See the puppeteer example for the full code.

Since v0.0.36, nbb includes promesa which is a library to deal with promises. The above plet macro is similar to promesa.core/let.

Startup time

$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)'   0.17s  user 0.02s system 109% cpu 0.168 total

The baseline startup time for a script is about 170ms seconds on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.

Dependencies

NPM dependencies

Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.

Classpath

To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.

To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:

$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"

and then feed it to the --classpath argument:

$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]

Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.

Current file

The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:

(ns foo
  (:require [nbb.core :refer [*file*]]))

(prn *file*) ;; "/private/tmp/foo.cljs"

(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"

Reagent

Nbb includes reagent.core which will be lazily loaded when required. You can use this together with ink to create a TUI application:

$ npm install ink

ink-demo.cljs:

(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))

(defonce state (r/atom 0))

(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))

(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])

(render (r/as-element [hello]))

Promesa

Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included with the let and do! macros. An example:

(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))

(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))

(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3

Also see API docs.

Js-interop

Since nbb v0.0.75 applied-science/js-interop is available:

(ns example
  (:require [applied-science.js-interop :as j]))

(def o (j/lit {:a 1 :b 2 :c {:d 1}}))

(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1

Most of this library is supported in nbb, except the following:

  • destructuring using :syms
  • property access using .-x notation. In nbb, you must use keywords.

See the example of what is currently supported.

Examples

See the examples directory for small examples.

Also check out these projects built with nbb:

API

See API documentation.

Migrating to shadow-cljs

See this gist on how to convert an nbb script or project to shadow-cljs.

Build

Prequisites:

  • babashka >= 0.4.0
  • Clojure CLI >= 1.10.3.933
  • Node.js 16.5.0 (lower version may work, but this is the one I used to build)

To build:

  • Clone and cd into this repo
  • bb release

Run bb tasks for more project-related tasks.

Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb 
License: EPL-1.0

#node #javascript

Wilford  Pagac

Wilford Pagac

1594289280

What is REST API? An Overview | Liquid Web

What is REST?

The REST acronym is defined as a “REpresentational State Transfer” and is designed to take advantage of existing HTTP protocols when used for Web APIs. It is very flexible in that it is not tied to resources or methods and has the ability to handle different calls and data formats. Because REST API is not constrained to an XML format like SOAP, it can return multiple other formats depending on what is needed. If a service adheres to this style, it is considered a “RESTful” application. REST allows components to access and manage functions within another application.

REST was initially defined in a dissertation by Roy Fielding’s twenty years ago. He proposed these standards as an alternative to SOAP (The Simple Object Access Protocol is a simple standard for accessing objects and exchanging structured messages within a distributed computing environment). REST (or RESTful) defines the general rules used to regulate the interactions between web apps utilizing the HTTP protocol for CRUD (create, retrieve, update, delete) operations.

What is an API?

An API (or Application Programming Interface) provides a method of interaction between two systems.

What is a RESTful API?

A RESTful API (or application program interface) uses HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. This allows two pieces of software to communicate with each other. In essence, REST API is a set of remote calls using standard methods to return data in a specific format.

The systems that interact in this manner can be very different. Each app may use a unique programming language, operating system, database, etc. So, how do we create a system that can easily communicate and understand other apps?? This is where the Rest API is used as an interaction system.

When using a RESTful API, we should determine in advance what resources we want to expose to the outside world. Typically, the RESTful API service is implemented, keeping the following ideas in mind:

  • Format: There should be no restrictions on the data exchange format
  • Implementation: REST is based entirely on HTTP
  • Service Definition: Because REST is very flexible, API can be modified to ensure the application understands the request/response format.
  • The RESTful API focuses on resources and how efficiently you perform operations with it using HTTP.

The features of the REST API design style state:

  • Each entity must have a unique identifier.
  • Standard methods should be used to read and modify data.
  • It should provide support for different types of resources.
  • The interactions should be stateless.

For REST to fit this model, we must adhere to the following rules:

  • Client-Server Architecture: The interface is separate from the server-side data repository. This affords flexibility and the development of components independently of each other.
  • Detachment: The client connections are not stored on the server between requests.
  • Cacheability: It must be explicitly stated whether the client can store responses.
  • Multi-level: The API should work whether it interacts directly with a server or through an additional layer, like a load balancer.

#tutorials #api #application #application programming interface #crud #http #json #programming #protocols #representational state transfer #rest #rest api #rest api graphql #rest api json #rest api xml #restful #soap #xml #yaml

Chloe  Butler

Chloe Butler

1667425440

Pdf2gerb: Perl Script Converts PDF Files to Gerber format

pdf2gerb

Perl script converts PDF files to Gerber format

Pdf2Gerb generates Gerber 274X photoplotting and Excellon drill files from PDFs of a PCB. Up to three PDFs are used: the top copper layer, the bottom copper layer (for 2-sided PCBs), and an optional silk screen layer. The PDFs can be created directly from any PDF drawing software, or a PDF print driver can be used to capture the Print output if the drawing software does not directly support output to PDF.

The general workflow is as follows:

  1. Design the PCB using your favorite CAD or drawing software.
  2. Print the top and bottom copper and top silk screen layers to a PDF file.
  3. Run Pdf2Gerb on the PDFs to create Gerber and Excellon files.
  4. Use a Gerber viewer to double-check the output against the original PCB design.
  5. Make adjustments as needed.
  6. Submit the files to a PCB manufacturer.

Please note that Pdf2Gerb does NOT perform DRC (Design Rule Checks), as these will vary according to individual PCB manufacturer conventions and capabilities. Also note that Pdf2Gerb is not perfect, so the output files must always be checked before submitting them. As of version 1.6, Pdf2Gerb supports most PCB elements, such as round and square pads, round holes, traces, SMD pads, ground planes, no-fill areas, and panelization. However, because it interprets the graphical output of a Print function, there are limitations in what it can recognize (or there may be bugs).

See docs/Pdf2Gerb.pdf for install/setup, config, usage, and other info.


pdf2gerb_cfg.pm

#Pdf2Gerb config settings:
#Put this file in same folder/directory as pdf2gerb.pl itself (global settings),
#or copy to another folder/directory with PDFs if you want PCB-specific settings.
#There is only one user of this file, so we don't need a custom package or namespace.
#NOTE: all constants defined in here will be added to main namespace.
#package pdf2gerb_cfg;

use strict; #trap undef vars (easier debug)
use warnings; #other useful info (easier debug)


##############################################################################################
#configurable settings:
#change values here instead of in main pdf2gerb.pl file

use constant WANT_COLORS => ($^O !~ m/Win/); #ANSI colors don't work on Windows; must be set before the first DebugPrint() call

#just a little warning; set realistic expectations:
#DebugPrint("${\(CYAN)}Pdf2Gerb.pl ${\(VERSION)}, $^O O/S\n${\(YELLOW)}${\(BOLD)}${\(ITALIC)}This is EXPERIMENTAL software.  \nGerber files MAY CONTAIN ERRORS.  Please CHECK them before fabrication!${\(RESET)}", 0); #if WANT_DEBUG

use constant METRIC => FALSE; #set to TRUE for metric units (only affects final numbers in output files, not internal arithmetic)
use constant APERTURE_LIMIT => 0; #34; #max #apertures to use; generate warnings if too many apertures are used (0 to not check)
use constant DRILL_FMT => '2.4'; #'2.3'; #'2.4' is the default for PCB fab; change to '2.3' for CNC

use constant WANT_DEBUG => 0; #10; #level of debug wanted; higher == more, lower == less, 0 == none
use constant GERBER_DEBUG => 0; #level of debug to include in Gerber file; DON'T USE FOR FABRICATION
use constant WANT_STREAMS => FALSE; #TRUE; #save decompressed streams to files (for debug)
use constant WANT_ALLINPUT => FALSE; #TRUE; #save entire input stream (for debug ONLY)

#DebugPrint(sprintf("${\(CYAN)}DEBUG: stdout %d, gerber %d, want streams? %d, all input? %d, O/S: $^O, Perl: $]${\(RESET)}\n", WANT_DEBUG, GERBER_DEBUG, WANT_STREAMS, WANT_ALLINPUT), 1);
#DebugPrint(sprintf("max int = %d, min int = %d\n", MAXINT, MININT), 1); 

#define standard trace and pad sizes to reduce scaling or PDF rendering errors:
#This avoids weird aperture settings and replaces them with more standardized values.
#(I'm not sure how photoplotters handle strange sizes).
#Fewer choices here gives more accurate mapping in the final Gerber files.
#units are in inches
use constant TOOL_SIZES => #add more as desired
(
#round or square pads (> 0) and drills (< 0):
    .010, -.001,  #tiny pads for SMD; dummy drill size (too small for practical use, but needed so StandardTool will use this entry)
    .031, -.014,  #used for vias
    .041, -.020,  #smallest non-filled plated hole
    .051, -.025,
    .056, -.029,  #useful for IC pins
    .070, -.033,
    .075, -.040,  #heavier leads
#    .090, -.043,  #NOTE: 600 dpi is not high enough resolution to reliably distinguish between .043" and .046", so choose 1 of the 2 here
    .100, -.046,
    .115, -.052,
    .130, -.061,
    .140, -.067,
    .150, -.079,
    .175, -.088,
    .190, -.093,
    .200, -.100,
    .220, -.110,
    .160, -.125,  #useful for mounting holes
#some additional pad sizes without holes (repeat a previous hole size if you just want the pad size):
    .090, -.040,  #want a .090 pad option, but use dummy hole size
    .065, -.040, #.065 x .065 rect pad
    .035, -.040, #.035 x .065 rect pad
#traces:
    .001,  #too thin for real traces; use only for board outlines
    .006,  #minimum real trace width; mainly used for text
    .008,  #mainly used for mid-sized text, not traces
    .010,  #minimum recommended trace width for low-current signals
    .012,
    .015,  #moderate low-voltage current
    .020,  #heavier trace for power, ground (even if a lighter one is adequate)
    .025,
    .030,  #heavy-current traces; be careful with these ones!
    .040,
    .050,
    .060,
    .080,
    .100,
    .120,
);
#Areas larger than the values below will be filled with parallel lines:
#This cuts down on the number of aperture sizes used.
#Set to 0 to always use an aperture or drill, regardless of size.
use constant { MAX_APERTURE => max((TOOL_SIZES)) + .004, MAX_DRILL => -min((TOOL_SIZES)) + .004 }; #max aperture and drill sizes (plus a little tolerance)
#DebugPrint(sprintf("using %d standard tool sizes: %s, max aper %.3f, max drill %.3f\n", scalar((TOOL_SIZES)), join(", ", (TOOL_SIZES)), MAX_APERTURE, MAX_DRILL), 1);

#NOTE: Compare the PDF to the original CAD file to check the accuracy of the PDF rendering and parsing!
#for example, the CAD software I used generated the following circles for holes:
#CAD hole size:   parsed PDF diameter:      error:
#  .014                .016                +.002
#  .020                .02267              +.00267
#  .025                .026                +.001
#  .029                .03167              +.00267
#  .033                .036                +.003
#  .040                .04267              +.00267
#This was usually ~ .002" - .003" too big compared to the hole as displayed in the CAD software.
#To compensate for PDF rendering errors (either during CAD Print function or PDF parsing logic), adjust the values below as needed.
#units are pixels; for example, a value of 2.4 at 600 dpi = .004 inch, 2 at 600 dpi = .0033"
use constant
{
    HOLE_ADJUST => -0.004 * 600, #-2.6, #holes seemed to be slightly oversized (by .002" - .004"), so shrink them a little
    RNDPAD_ADJUST => -0.003 * 600, #-2, #-2.4, #round pads seemed to be slightly oversized, so shrink them a little
    SQRPAD_ADJUST => +0.001 * 600, #+.5, #square pads are sometimes too small by .00067, so bump them up a little
    RECTPAD_ADJUST => 0, #(pixels) rectangular pads seem to be okay? (not tested much)
    TRACE_ADJUST => 0, #(pixels) traces seemed to be okay?
    REDUCE_TOLERANCE => .001, #(inches) allow this much variation when reducing circles and rects
};

#Also, my CAD's Print function or the PDF print driver I used was a little off for circles, so define some additional adjustment values here:
#Values are added to X/Y coordinates; units are pixels; for example, a value of 1 at 600 dpi would be ~= .002 inch
use constant
{
    CIRCLE_ADJUST_MINX => 0,
    CIRCLE_ADJUST_MINY => -0.001 * 600, #-1, #circles were a little too high, so nudge them a little lower
    CIRCLE_ADJUST_MAXX => +0.001 * 600, #+1, #circles were a little too far to the left, so nudge them a little to the right
    CIRCLE_ADJUST_MAXY => 0,
    SUBST_CIRCLE_CLIPRECT => FALSE, #generate circle and substitute for clip rects (to compensate for the way some CAD software draws circles)
    WANT_CLIPRECT => TRUE, #FALSE, #AI doesn't need clip rect at all? should be on normally?
    RECT_COMPLETION => FALSE, #TRUE, #fill in 4th side of rect when 3 sides found
};

#allow .012 clearance around pads for solder mask:
#This value effectively adjusts pad sizes in the TOOL_SIZES list above (only for solder mask layers).
use constant SOLDER_MARGIN => +.012; #units are inches

#line join/cap styles:
use constant
{
    CAP_NONE => 0, #butt (none); line is exact length
    CAP_ROUND => 1, #round cap/join; line overhangs by a semi-circle at either end
    CAP_SQUARE => 2, #square cap/join; line overhangs by a half square on either end
    CAP_OVERRIDE => FALSE, #cap style overrides drawing logic
};
    
#number of elements in each shape type:
use constant
{
    RECT_SHAPELEN => 6, #x0, y0, x1, y1, count, "rect" (start, end corners)
    LINE_SHAPELEN => 6, #x0, y0, x1, y1, count, "line" (line seg)
    CURVE_SHAPELEN => 10, #xstart, ystart, x0, y0, x1, y1, xend, yend, count, "curve" (bezier 2 points)
    CIRCLE_SHAPELEN => 5, #x, y, r, count, "circle" (center + radius)
};
#const my %SHAPELEN =
#Readonly my %SHAPELEN =>
our %SHAPELEN =
(
    rect => RECT_SHAPELEN,
    line => LINE_SHAPELEN,
    curve => CURVE_SHAPELEN,
    circle => CIRCLE_SHAPELEN,
);

#panelization:
#This will repeat the entire body the number of times indicated along the X or Y axes (files grow accordingly).
#Display elements that overhang PCB boundary can be squashed or left as-is (typically text or other silk screen markings).
#Set "overhangs" TRUE to allow overhangs, FALSE to truncate them.
#xpad and ypad allow margins to be added around outer edge of panelized PCB.
use constant PANELIZE => {'x' => 1, 'y' => 1, 'xpad' => 0, 'ypad' => 0, 'overhangs' => TRUE}; #number of times to repeat in X and Y directions

# Set this to 1 if you need TurboCAD support.
#$turboCAD = FALSE; #is this still needed as an option?

#CIRCAD pad generation uses an appropriate aperture, then moves it (stroke) "a little" - we use this to find pads and distinguish them from PCB holes. 
use constant PAD_STROKE => 0.3; #0.0005 * 600; #units are pixels
#convert very short traces to pads or holes:
use constant TRACE_MINLEN => .001; #units are inches
#use constant ALWAYS_XY => TRUE; #FALSE; #force XY even if X or Y doesn't change; NOTE: needs to be TRUE for all pads to show in FlatCAM and ViewPlot
use constant REMOVE_POLARITY => FALSE; #TRUE; #set to remove subtractive (negative) polarity; NOTE: must be FALSE for ground planes

#PDF uses "points", each point = 1/72 inch
#combined with a PDF scale factor of .12, this gives 600 dpi resolution (1 / (1/72 * .12) = 600 dpi)
use constant INCHES_PER_POINT => 1/72; #0.0138888889; #multiply point-size by this to get inches

# The precision used when computing a bezier curve. Higher numbers are more precise but slower (and generate larger files).
#$bezierPrecision = 100;
use constant BEZIER_PRECISION => 36; #100; #use const; reduced for faster rendering (mainly used for silk screen and thermal pads)

# Ground planes and silk screen or larger copper rectangles or circles are filled line-by-line using this resolution.
use constant FILL_WIDTH => .01; #fill at most 0.01 inch at a time

# The max number of characters to read into memory
use constant MAX_BYTES => 10 * M; #bumped up to 10 MB, use const

use constant DUP_DRILL1 => TRUE; #FALSE; #kludge: ViewPlot doesn't load drill files that are too small so duplicate first tool

my $runtime = time(); #Time::HiRes::gettimeofday(); #measure my execution time

print STDERR "Loaded config settings from '${\(__FILE__)}'.\n";
1; #last value must be truthful to indicate successful load


#############################################################################################
#junk/experiment:

#use Package::Constants;
#use Exporter qw(import); #https://perldoc.perl.org/Exporter.html

#my $caller = "pdf2gerb::";

#sub cfg
#{
#    my $proto = shift;
#    my $class = ref($proto) || $proto;
#    my $settings =
#    {
#        $WANT_DEBUG => 990, #10; #level of debug wanted; higher == more, lower == less, 0 == none
#    };
#    bless($settings, $class);
#    return $settings;
#}

#use constant HELLO => "hi there2"; #"main::HELLO" => "hi there";
#use constant GOODBYE => 14; #"main::GOODBYE" => 12;

#print STDERR "read cfg file\n";

#our @EXPORT_OK = Package::Constants->list(__PACKAGE__); #https://www.perlmonks.org/?node_id=1072691; NOTE: "_OK" skips short/common names

#print STDERR scalar(@EXPORT_OK) . " consts exported:\n";
#foreach(@EXPORT_OK) { print STDERR "$_\n"; }
#my $val = main::thing("xyz");
#print STDERR "caller gave me $val\n";
#foreach my $arg (@ARGV) { print STDERR "arg $arg\n"; }

Download Details:

Author: swannman
Source Code: https://github.com/swannman/pdf2gerb

License: GPL-3.0 license

#perl 

An API-First Approach For Designing Restful APIs | Hacker Noon

I’ve been working with Restful APIs for some time now and one thing that I love to do is to talk about APIs.

So, today I will show you how to build an API using the API-First approach and Design First with OpenAPI Specification.

First things first: if you don’t know what an API-First approach means, it would be a good idea to stop reading this and check the blog post that I wrote for the Farfetch blog, where I explain everything you need to know to start an API using API-First.

Preparing the ground

Before you get your hands dirty, let’s prepare the ground and understand the use case that will be developed.

Tools

If you want to reproduce the examples shown here, you will need the items below.

  • NodeJS
  • OpenAPI Specification
  • Text Editor (I’ll use VSCode)
  • Command Line

Use Case

To keep things easy to understand, let’s use a Todo List app; it is a very common concept, familiar even beyond the software development community.

#api #rest-api #openai #api-first-development #api-design #apis #restful-apis #restful-api