How to use Google's fully typed API SDK in an Angular web application

Step by step: from integration, through authorisation, to usage

Motivation

Task: We need to access/display the events of a private Google calendar.
Problem: You cannot put a private calendar into an iframe or query its events using just an API key.
Requirements: compatible with Angular, with TypeScript support (service wrappers/classes and data model types).
Solution: There's google-api-nodejs-client, which provides all we need:

Google’s officially supported Node.js client library for accessing Google APIs. Support for authorization and authentication with OAuth 2.0, API Keys and JWT is included.

Now we just need to figure out how to replace that Node.js part with Angular.

Integration

Unfortunately, this library is built for Node.js, and there is a slight problem integrating it into Webpack-built apps (which includes Angular): it relies on some Node.js built-ins that don't exist in the browser.
That is not going to stop us, as there is a workaround (discussed here).

First, we need to extend a Webpack build.

npm i -D @angular-builders/custom-webpack

Then we need to replace the builder in the architect section of angular.json; we also include the path to a custom Webpack config in options:

"architect": {
  "build": {
    "builder": "@angular-builders/custom-webpack:browser",
    "options": {
      "customWebpackConfig": {
        "path": "./extra-webpack.config.js"
      },
      ...
    },
    ...
  },
  "serve": {
    "builder": "@angular-builders/custom-webpack:dev-server",
    ...
  },
}

Explanations can be found in Angular’s builder docs.

The “hack”

Now, what is the actual "hack"? We are about to pretend that everything this Node.js library needs exists in the browser:

  1. Simulate some of the Node.js runtime internals, like global and Buffer.
  2. Mock a few libraries that are required at runtime: fs, child_process and https-proxy-agent.

Providing fallbacks for global and Buffer

Add the following polyfills under the application imports in your src/polyfills.ts.

import * as process from 'process';
(window as any).process = process;
import { Buffer } from 'buffer';
(window as any).Buffer = Buffer;

Don't forget to install these packages with npm i -D process buffer.

Put this in your index.html <head> tag. It gets rid of errors related to accessing global by substituting it with window.

<script>
  if (global === undefined) {
    var global = window;
  }
</script>

Mocking the libraries

In the extra-webpack.config.js referenced from angular.json above, we tell Webpack to alias the problematic modules to our mocks:

const path = require('path');
module.exports = {
  resolve: {
    extensions: ['.js'],
    alias: {
      fs: path.resolve(__dirname, 'src/mocks/fs.mock.js'),
      child_process: path.resolve(
        __dirname,
        'src/mocks/child_process.mock.js'
      ),
      'https-proxy-agent': path.resolve(
        __dirname,
        'src/mocks/https-proxy-agent.mock.js',
      ),
    },
  },
};

What is going on here: we are telling Webpack to replace one import (file) with another. Notice that we put all the mock files in src/mocks, so it is easier for colleagues working on the project to understand what these files are for.

The code inside these mocks is rather simple. We just need to stub a few methods that are called but not actually needed, so they can "do" nothing.
Both fs and child_process will look like this:

module.exports = {
  readFileSync() {},
  readFile() {},
};

The https-proxy-agent mock is even simpler, as it can be just
module.exports = {};.

Setting up the access

  1. Create a new GCP project or use an existing one.
  2. In the GCP console, select your project, go to the Library tab and enable the APIs you want to use (in our case Google Calendar and Analytics).
  3. Customize your OAuth consent screen (name, restrictions, …)
    • set all the scopes for the Google APIs you need (calendar, analytics, …)
  4. Create OAuth credentials
    • needed when accessing private data (like calendar events or analytics)
  5. Proceed to authentication using the public and private OAuth keys.

Authentication

You can easily access a lot of public data from the API with just an API key (things like public calendars), but as soon as you need some private data (a private calendar, for example) you will need to authenticate. That can be done via the OAuth client.

Provide Google’s OAuth2Client in your app.module.ts providers array. It should look like this:

{
  provide: OAuth2Client,
  useValue: new OAuth2Client(
    // You get these in your GCP project credentials
    environment.G_API_CLIENT_ID,
    environment.G_API_CLIENT_SECRET,
    // URL where you'll handle successful authentication
    environment.G_API_REDIRECT,
  ),
},

We will be using redirect-based auth, so the next step is generating the auth URL:

window.location.href = this.oauth2Client.generateAuthUrl({
  // 'offline' also gets a refresh_token
  access_type: 'offline',
  // put any scopes you need here
  scope: [
    // in the first example we want to read calendar events
    'https://www.googleapis.com/auth/calendar.events.readonly',
    'https://www.googleapis.com/auth/calendar.readonly',
    // in the second example we read analytics data
    'https://www.googleapis.com/auth/analytics.readonly',
  ],
});

Thanks to the refresh_token, the OAuth client is able to handle the token exchange even after the access token expires, so we don't have to go through Google's auth screen every hour.
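The redirect back to your app carries an authorization code that must be exchanged for tokens on the G_API_REDIRECT page. Here is a minimal sketch; extractAuthCode is a helper name introduced here for illustration, and the commented part assumes the OAuth2Client from google-auth-library provided earlier:

```typescript
// Pull the `code` query parameter out of the redirect URL.
export function extractAuthCode(redirectUrl: string): string | null {
  return new URL(redirectUrl).searchParams.get('code');
}

// Exchange sketch, assuming the injected OAuth2Client instance:
// const code = extractAuthCode(window.location.href);
// if (code) {
//   const { tokens } = await this.oauth2Client.getToken(code);
//   // `tokens` includes refresh_token on the first consent when access_type is 'offline'
//   this.oauth2Client.setCredentials(tokens);
// }
```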

Example usages

If you like exploring documentation, visit the google-apis docs, or have a look at the Calendar API, since it is used in the example below.

Using the Calendar Service SDK

Permissions

Make sure that the account you are using does have access to the calendar you want to read events from.

Code Example

Provide the Calendar class with the default auth method; in our case it is OAuth. Extend your app.module.ts providers array with this:

{
  provide: calendar_v3.Calendar,
  useFactory: 
    (auth: OAuth2Client) => new calendar_v3.Calendar({ auth }),
  deps: [OAuth2Client],
},

Now we have access to the complete set of Google Calendar API features through a fully typed SDK interface.

You can get the calendar as you would any other Angular service: declare it as a constructor parameter and dependency injection will provide it for you.

constructor(
  private calendar: calendar_v3.Calendar,
) {}

Here is an example of how to get the list of events of a specific calendar. We also filter for events that are "today" and not deleted/cancelled.

this.calendar.events.list({
  // required; an email, or email-like ID, of a calendar
  calendarId: CALENDAR_ID,
  // optional arguments that let you filter/specify the wanted events
  timeMin: startOfDay(today).toISOString(),
  timeMax: endOfDay(today).toISOString(),
  showDeleted: false,
  singleEvents: true,
});
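The list call resolves with the events under res.data.items. As an illustration, here is a hypothetical helper (describeEvents is not part of the SDK) that turns those items into display strings:

```typescript
// Shape of the fields we read from calendar_v3.Schema$Event.
interface EventLike {
  summary?: string | null;
  start?: { dateTime?: string | null; date?: string | null } | null;
}

// Map events to "start - title" strings; all-day events only carry `date`.
export function describeEvents(items: EventLike[]): string[] {
  return items.map(
    (e) => `${e.start?.dateTime ?? e.start?.date ?? '?'} - ${e.summary ?? '(no title)'}`,
  );
}

// Usage sketch with the typed client:
// const res = await this.calendar.events.list({ calendarId: CALENDAR_ID /* , ... */ });
// console.log(describeEvents(res.data.items ?? []));
```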

You can also perform other tasks, such as creating events, but don't forget to request the proper scopes with your authentication.

Using the Analytics SDK

Setting permissions

⮕ Analytics Console ⮕ Admin ⮕ Account column ⮕ User Management ⮕
⮕ Select the user ⮕ Activate the "Read & Analyze" checkbox

Getting the “View ID”

⮕ Analytics Console ⮕ Admin ⮕ View column ⮕ View Settings ⮕
⮕ Copy the "View ID" number

Code Example

Same as in the previous example, you'll need a provider. Provide the Analytics class with the default auth method. Extend your app.module.ts providers array with this:

{
  provide: analytics_v3.Analytics,
  useFactory:
    (auth: OAuth2Client) => new analytics_v3.Analytics({ auth }),
  deps: [OAuth2Client],
},

Again, you'll have it ready to be injected by DI into any injectable class.

constructor(
  private analytics: analytics_v3.Analytics,
) {}

This example will get the specified metrics for the desired time range. In this case, we’ll see total page views for the last 30 days.

this.analytics.data.ga.get({
  ids: 'ga:xxxxxxxxx', // replace xxxxxxxxx with your view ID
  'start-date': '30daysAgo',
  'end-date': 'today',
  metrics: 'ga:pageviews',
})
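The response carries the numbers as strings under res.data.rows (e.g. rows: [["12345"]]). A small hypothetical helper (totalPageviews is a name introduced here, not an SDK function) to sum up the first metric column:

```typescript
// Sum the first column of the GA rows; tolerate a missing rows array.
export function totalPageviews(rows: string[][] | undefined): number {
  return (rows ?? []).reduce((sum, row) => sum + Number(row[0] ?? 0), 0);
}

// Usage sketch:
// const res = await this.analytics.data.ga.get({ /* ...as above... */ });
// console.log(totalPageviews(res.data.rows));
```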

Conclusion

Having a fully typed SDK for an API makes a big difference compared to having to dig everything out of the docs, especially the data models. The auth part of the problem is also handled pretty conveniently, so it is not a show-stopper for people who don't know how, or don't want, to manage that themselves.
Overall, it is a lot easier to create features when you have the project environment and tools set up just right.

Angular – introducing semantic versioning

Using semantic versioning is about managing expectations. It's about managing how the user of your application, or library, will react when a change happens to it. Changes will happen for various reasons, either to fix something broken in the code or to add/alter/remove a feature. The way authors of frameworks or libraries convey the impact of a certain change is by incrementing the version number of the software.
Production-ready software usually has version 1.0, or 1.0.0 if you want to be more specific.

There are three different levels of change that can happen when updating your software. Either you patch it and effectively correct something. Or you make a minor change, which essentially means you add functionality. Or lastly, you make a major change, which might completely change how your software works. Let's describe these changes in more detail in the following sections.

Patch change
A patch change means we increment the rightmost digit by one. Changing the said software from 1.0.0 to 1.0.1 is a small change, usually a bug fix. As a user of that software, you don't have to worry; if anything, you should be happy that something is suddenly working better. The point is, you can safely start using 1.0.1.

Minor change
This means the version is increased from 1.0.0 to 1.1.0. We are dealing with a more significant change, as we increase the middle digit by one. This number should be increased when functionality is added to the software while remaining backward compatible. Also in this case, it should be safe to adopt the 1.1.0 version of the software.

Major change
At this stage, the version number increases from 1.0.0 to 2.0.0. Now, this is where you need to be on the lookout. At this stage, things might have changed so much that constructs have been renamed or removed. It might not be compatible with earlier versions. I'm saying it might, because a lot of software authors still ensure that there is decent backward compatibility, but the main point here is that there is no warranty, no contract, guaranteeing that it will still work.
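The three rules above can be condensed into a small helper; this is an illustrative function, not part of any Angular or npm API:

```typescript
// True when upgrading from `from` to `to` is expected to be backward
// compatible under semantic versioning (same major, equal or newer otherwise).
export function isSafeUpgrade(from: string, to: string): boolean {
  const [fMaj, fMin, fPat] = from.split('.').map(Number);
  const [tMaj, tMin, tPat] = to.split('.').map(Number);
  if (tMaj !== fMaj) return false;       // major change: no guarantee
  if (tMin !== fMin) return tMin > fMin; // minor change: additive, safe
  return tPat >= fPat;                   // patch change: bug fix, safe
}
```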

What about Angular?
The first version of Angular was known by most people as Angular 1; it later became known as AngularJS. It did not use semantic versioning. Most people still refer to it as Angular 1.
Then Angular came along and in 2016 it reached production readiness.

Angular decided to adopt semantic versioning and this caused a bit of confusion in the developer community, especially when it was announced that there would be an Angular 4 and 5, and so on. Google, as well as the Google Developer Experts, started to explain that it wanted people to call the latest version of the framework Angular - just Angular.

You can always argue on the wisdom of that decision, but the fact remains, the new Angular is using semantic versioning. This means Angular is the same platform as Angular 4, as well as Angular 11, and so on, if that ever comes out. Adopting semantic versioning means that you as a user of Angular can rely on things working the same way until Google decides to increase the major version. Even then it's up to you if you want to remain on the latest major version or want to upgrade your existing apps.

Migrating from AngularJS to Angular

A hybrid system architecture running both AngularJS and Angular

Intro

Dealing with legacy code/technologies is never fun, and the path to migration isn't always as straightforward as you'd want. If you are a small startup trying to balance business requirements, scarce resources and aggressive deadlines, it becomes even more complicated.

This is the situation one of the startups I was advising was facing.

A bit of background

The startup had been developing a SaaS product for the last 2 years and (at the time) had around 15 clients worldwide. In these 2 years their code base grew pretty fast, which led to quite a lot of hastily, recklessly written code. There was nobody to blame; this is pretty common in the startup world, where business needs move way faster than you expect and you start sacrificing code quality for quantity.

The system architecture was pretty simple. 
• a frontend application written in AngularJS (split into multiple modules that were selected at build time depending on the clients’ configuration)
• a backend application written in Python 2.7 and Django 1.9 using a Mysql database
• Celery for running async tasks

Each client would get their own isolated environment deployed on AWS:
• Apache in front of the Django application (deployed on multiple EC2 instances behind an ELB)
• AngularJS build deployed on individual S3 buckets with CloudFront in front of them

Path to migration

A few months before starting the migration, development was getting very slow, features were not coming out as fast, deadlines were missed and clients were reporting more issues with every update that we were rolling out. It was at this time that we started thinking more seriously about some kind of refactoring or major improvement.

We didn't know exactly what we were going to "refactor/improve", so we started off by answering three questions (I recommend that anyone who is thinking about a migration/refactoring think really hard about how to answer them):

1st question: Why is refactoring necessary now?

This is a very important question to answer, because it helps you understand the value of the migration and it helps to keep the team focused on the desired outcome. For example, "because I don't like the way the code is written" isn't a good enough reason. The reason has to have a clear value proposition that somehow, directly or indirectly, benefits the customers.

For us it was mainly three things:
 1. feature development was becoming painfully slow;
 2. the code was unpredictable: we would work in one part of the application and break 3 other parts without realizing it;
 3. single point of failure: only 1 engineer knew the FE code base completely and only he could develop new features on it (out of a team of only 5 engineers)

So our goal was simple:

improve FE development velocity and remove the single point of failure by empowering other engineers to develop FE features

2nd question: Who is going to do the migration?

You can answer this question either now or after the 3rd question. Depending on the size of the company and on the available resources it can be one person, several people, an entire team, etc…

We were in a difficult situation. The only developer who could work on this couldn’t because he was busy building critical features for our customers. Luckily we had one senior backend engineer who wanted to get some FE exposure so he volunteered to take on the task. We also decided to time-box a proof of concept at 2 weeks. We did this because we didn’t know how long it would take to figure out a solution or whether the engineer could actually do this task since he hadn’t worked on FE before.

3rd question: What are we actually going to do?

The answer here usually involves some discovery time, a few tech proposals and a general overview of the options with the entire team, while weighing the pros and cons of each.

For us, one thing was clear from the start: we didn't want to invest any resources into learning/onboarding engineers on AngularJS. AngularJS had already entered long-term support and we didn't want our engineers to invest time in something that might not benefit them long term. This meant that refactoring the existing AngularJS code was not an option. So we started looking at Angular6…

The migration

There are multiple approaches to running a hybrid app with different frameworks. After reviewing some options, we decided that, for us, the best way forward was to simply have 2 separate FE applications deployed: the legacy AngularJS one and the new Angular one. This meant that state in one app could not be transferred to the other application, which wasn't a big deal for us, since only new modules were going to be developed in Angular and our modules didn't share state with each other.

From the client’s perspective everything would look like one application, except for a page reload when they would move between the applications.

Pros to this approach

  • speed: get something up and running without untangling legacy code
  • safety: no risk of breaking the current live app, since it would be a new code base deployed next to the old one (especially important since a developer with no previous exposure to the project was working on it)
  • stop legacy development: we stop adding more code to an already unmanageable codebase

Cons to this approach:

  • maintaining legacy code: it didn't address feature improvements on existing modules; old modules would still be in AngularJS for an undefined period of time
  • duplicating parts of the code: since the new app had to look and feel like the old one, any themes and custom components would have to be written in both places. Some parts of the layout (like the header, menu, etc.) would also have to be duplicated in the new app, and any changes to those components would have to be made in both apps

We already knew of a new module that we wanted to build, so we started from scratch with a new Angular 6 project and used this new module for our 2-week proof of concept.

Step 1— same domain

Have both apps running on the same domain so that they have access to the same cookies and local data. This was extremely important, since only the AngularJS app would continue handling authentication & authorization.

Step 2— look and feel

The goal was to make the new app look the same as the original application. So we:
 • copied over all the stylesheets
 • implemented the base layout of the application (header & menu drawer)

Step 3 — authentication & authorization

We had to duplicate the authorization logic in the Angular6 app and make sure the correct session tokens were available to allow access to the module.

Step 4— routing between apps

Since our main navigation links could take you to either app, we decided to move all that logic to a backend service called menu-service. This would eliminate the need to implement navigation changes in both apps and would also allow for greater runtime control over which navigation buttons we show.

Example:

HEADER: Authorization: Bearer xxxxx
GET menu-service/v1/menu/?type=0|1 (0: legacy, 1: new)
[{
  "slug": "refresh",
  "name" : "Refresh",
  "icon" : "fa-refresh",
  "type" : 1  
 }, {
  "slug": "module1",
  "name" : "Module1",
  "icon" : "fa-module1",
  "type" : 1
}, {
  "slug": "module2",
  "name" : "Module2",
  "icon" : "fa-module2",
  "type" : 0
}, {
  "slug": "logout",
  "name" : "Logout",
  "icon" : "fa-logout",
  "type" : 0
}]

In the above example, based on the type value, we can identify that module1 and refresh are links to the new application, while module2 and logout are links in the old application.
This information allows each application to decide whether to use its internal routing mechanism or do a window.location redirect.

Example of routing in the Angular app (AngularJS does something similar):

export class MenuService {
  constructor(private router: Router) {}

  onMenuItemClicked(menuItem): void {
    if (menuItem.type === 1) {
      this.router.navigate([menuItem.slug]);
    } else {
      window.location.href = `${legacy_endpoint}/${menuItem.slug}`;
    }
  }
}

Step 5— building/deployment on a real environment

Like I mentioned in the beginning, the AngularJS application was deployed to an AWS S3 bucket and exposed through CloudFront, to take advantage of the massively scaled and globally distributed infrastructure offered by AWS.

The result we wanted was the following: anything that has the URL https://hostname/v2/ is routed to the Angular application and everything else is routed to the legacy AngularJS app.

We used base-href and deploy-url to make sure our Angular6 application builds accordingly:

ng build --base-href /v2/ --deploy-url /v2/

Unfortunately, we didn't manage to achieve the desired routing behavior with AWS CloudFront. This was a big disappointment, since we had to pivot to a less optimal solution. (If anyone has a suggestion on how to do this in CloudFront, I'd love to hear it.)

We ended up with the following structure:
• each app deployed in a NGINX Docker container

# AngularJS — Dockerfile:
FROM nginx:alpine
COPY dist /usr/share/nginx/html
--------------------------------------------------------------------
# Angular6 — Dockerfile:
FROM nginx:alpine
COPY dist /usr/share/nginx/html/v2

• AWS ALB with path routing
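With the ALB doing path routing to the two containers, each NGINX instance still needs an SPA fallback so that deep links resolve to the right index.html. A sketch of what that might look like; paths assume the Dockerfiles above, and your actual config may differ:

```nginx
# Angular6 container: the app lives under /v2/ (see the second Dockerfile)
server {
  listen 80;
  root /usr/share/nginx/html;

  # Deep links like /v2/some/route fall back to the Angular index.html
  location /v2/ {
    try_files $uri $uri/ /v2/index.html;
  }
}
```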

Step 6: Local development

Local development for the AngularJS application didn't have to change. However, in order to develop the Angular6 app, you also had to run the AngularJS application to be able to authenticate and get the appropriate session tokens.

We were already using Docker to deploy our applications as containers, so we added a Makefile target to run the latest image from our Docker repository:

# Angular6 — Makefile:
AWS_REPOSITORY = xxx.dkr.ecr.eu-central-1.amazonaws.com
JS_APP_NAME = angular-js
...
run-local: 
  docker run -p 8080:80 $(AWS_REPOSITORY)/$(JS_APP_NAME):latest

Conclusion

This might not be the cleanest or most optimal solution; however, it was the fastest way towards our goals, and that was the most important thing to us.

The goal of this post isn't to teach you how to do an AngularJS to Angular6 migration, but to showcase our path when dealing with such a task.

Further reading:

An in-depth look at Angular’s ng template

Angular 8 Data (Event & Property) Binding Tutorial: Build your First Angular App

Angular 8 Forms Tutorial - Reactive Forms Validation Example

What is the difference between Angular and AngularJS?

AngularJS and Angular have been trending ever since Angular 2 was announced. There are a lot of differences between the two. AngularJS was written in JavaScript, whereas Angular is written in TypeScript.