Top 5 Python Frameworks For Test Automation In 2019

In this article, we'll list the top 5 Python frameworks for test automation in 2019.

After being voted the best programming language of 2018, Python continues to rise up the charts and currently ranks as the 3rd most popular programming language, just after Java and C, as per the index published by Tiobe. With the increasing use of the language, the popularity of Python-based test automation frameworks is increasing as well. Naturally, developers and testers get a little confused when it comes to choosing the best framework for their project. While choosing one, you should weigh several things: the quality of scripts the framework encourages, the simplicity of its test cases, and the techniques it offers for running modules and finding their weaknesses. This is my attempt to compare the top 5 Python frameworks for test automation in 2019, along with their advantages and disadvantages, so you can choose the ideal Python framework for test automation according to your needs.

Robot Framework

Used mostly for acceptance testing and acceptance test-driven development, Robot Framework is one of the top Python test frameworks. Although it is developed in Python, it can also run on IronPython, which is .NET-based, and on the Java-based Jython. Robot Framework is compatible across all platforms – Windows, macOS and Linux.

What Are The Prerequisites?

Yes, there are a few:

  • First of all, you can use Robot Framework (RF) only if you have Python 2.7.14 or a later version installed. Python 3.6.4 is also widely used nowadays, and the code snippets in the official RF documentation include notes on the changes required for it.
  • You will also need to install ‘pip’, the Python package manager.
  • Finally, you should download a development environment. A popular choice among developers is PyCharm Community Edition. However, since the code snippets are not IDE-dependent, you can use any IDE you have worked with earlier.

What Are The Advantages & Disadvantages Of Robot?

Let’s take a look at the advantages and disadvantages of Robot as a test automation framework compared with other Python frameworks.

Pros

  • The keyword-driven, test-oriented DSL makes test cases easy to write and read, even for team members with little development experience.
  • It ships with rich built-in libraries, and external libraries such as the Selenium library are available for web testing.
  • It runs on Python, Jython and IronPython, works across Windows, macOS and Linux, and generates detailed HTML logs and reports out of the box.

Cons

  • The keyword-driven approach becomes restrictive when you need a complex automation framework; going beyond the provided keywords means dropping down into Python code.
  • Creating custom keywords and libraries takes more effort than writing plain Python test code in pytest or unittest.
  • Parallel test execution is not supported out of the box.

Is Robot The Top Python Test Framework For You?

If you are a beginner in the automation domain and have less experience in development, Robot is easier to use than pytest or PyUnit, since it has rich built-in libraries and uses an easier, test-oriented DSL. However, if you want to develop a complex automation framework, it is better to switch to pytest or another framework that involves writing Python code.

If you are new to Robot Framework, here is a document that will help you run your first automation script using Robot Framework with Selenium.
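
Since that switch to Python code is often gradual, it helps to know that Robot keywords can be backed by plain Python: any public function in a Python file imported with the Library setting becomes a keyword. The sketch below is a hypothetical custom keyword library; the file name, function name and the usage shown in the comments are made up purely for illustration.

# CustomKeywords.py -- a hypothetical Robot Framework keyword library.
# A .robot suite could load it with:   Library    CustomKeywords.py
# and then call the keyword as:        Title Should Contain    ${title}    My App

def title_should_contain(title, expected="My App"):
    """Fail the test if the page title does not contain the expected text."""
    if expected.lower() not in title.lower():
        raise AssertionError(f"Expected '{expected}' in page title, got: {title}")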

Pytest

Used for all kinds of software testing, pytest is another top Python test framework for test automation. Being open source and easy to learn, the tool can be used by QA teams, development teams, individual practice groups and open source projects. Because of useful features like ‘assert rewriting’, many projects on the internet, including big names like Dropbox and Mozilla, have switched from unittest (PyUnit) to pytest. Let’s take a deep dive and find out what’s so special about this Python framework.

What Are The Prerequisites?

Apart from a working knowledge of Python, pytest does not need anything complex. All you need is a working machine with a command-line interface, the Python package manager (pip), and an IDE for development.
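
To give a feel for how little boilerplate pytest needs, here is a minimal sketch: plain functions whose names start with test_, plain assert statements (pytest's assert rewriting prints both sides of a failed comparison), and an optional fixture for shared setup. The file, function and fixture names are only illustrative.

# test_cart.py -- run it with: pytest test_cart.py
import pytest

@pytest.fixture
def cart():
    # Shared setup: pytest injects this fresh list into any test that asks for 'cart'.
    return []

def test_cart_starts_empty(cart):
    assert cart == []

def test_adding_an_item(cart):
    cart.append("book")
    # A plain assert is enough; on failure pytest shows the compared values.
    assert len(cart) == 1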

What Are The Advantages & Disadvantages Of pytest?

Pros

  • Test cases are compact: plain functions and plain assert statements are enough, with no boilerplate test classes required.
  • Assert rewriting produces detailed failure messages, and fixtures make setup and teardown reusable across tests.
  • It scales from simple unit tests to complex functional test suites, and a large ecosystem of plugins is available.

Cons

The fact that pytest uses its own special routines means you have to compromise on compatibility. You will be able to write test cases conveniently, but you won’t be able to run those test cases with any other testing framework.

Is Pytest The Top Python Test Framework For You?

Well, you have to start by learning a full-fledged language, but once you get the hang of it, you get features like static code analysis, support for multiple IDEs and, most importantly, the ability to write effective test cases. For writing functional test cases and developing a complex framework, it is better than unittest, but its advantage over Robot Framework is slim if your aim is to develop a simple framework.

If you are considering Pytest as a top Python test framework for you then here is a guide to test automation using pytest and Selenium WebDriver.

UnitTest, Also Known As PyUnit

Unittest, or PyUnit, is the standard unit testing framework that comes with Python. It is heavily inspired by JUnit. The base class TestCase provides the assertion methods and all the setup and cleanup routines. Every method in a TestCase subclass whose name starts with “test” runs as a test case. You can use the load methods and the TestSuite class to group and load tests; together, they let you build customized test runners. Just like Selenium testing with JUnit, unittest can also generate XML reports using unittest-xml-reporting.
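
As a quick illustration of those pieces (TestCase, setUp and test methods), here is a minimal sketch; the class and method names are only illustrative.

# test_squares.py -- run it with: python -m unittest test_squares.py
import unittest

class SquareTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test method.
        self.values = [1, 2, 3]

    def test_squares_are_positive(self):
        for value in self.values:
            self.assertGreater(value * value, 0)

if __name__ == "__main__":
    unittest.main()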

What Are The Prerequisites?

There are no special prerequisites, since unittest comes with Python by default. To use it, you need a standard knowledge of Python; if you want to install additional modules, you will also need pip along with an IDE for development.

What Are The Advantages & Disadvantages Of PyUnit?

Pros

Being part of Python’s standard library, Unittest offers several advantages.

  • It ships with Python, so there is nothing extra to install and no external dependencies to manage.
  • Anyone familiar with JUnit or other xUnit-style frameworks will find the API immediately familiar.
  • Test execution is straightforward, and XML reports can be generated through unittest-xml-reporting.

Angular 7 CRUD with Nodejs and MySQL Example

Hey there, today we will proceed to create a CRUD demo with MySQL, Express, Angular 7 and Node.js (MEAN stack) from scratch using the Angular CLI.

Below are the requirements for creating the CRUD app on the MEAN stack:

  • Node.js
  • Angular CLI
  • Angular 7
  • Mysql
  • IDE or Text Editor

We assume that you already have the above tools/frameworks available and that you are familiar with what each of them does.

So now we will proceed step by step to achieve the task.

1. Update Angular CLI and Create Angular 7 Application

First, we have to update the Angular CLI to the latest version. Open the terminal, go to the project folder and type the command below to update the Angular CLI:

sudo npm install -g @angular/cli

Once the above task finishes, the next step is to create a new Angular application. Go to your project folder and type the command below:

ng new angular7-crud

Then go to the newly created folder of the Angular application with cd angular7-crud and type ng serve. Now open the browser, go to http://localhost:4200, and you should see this page.

[Screenshot: the freshly created Angular 7 app running at http://localhost:4200]

2. Create a server with Node.js, Express and MySQL for REST APIs

Create a separate folder named server for the server-side code, then move inside it and create server.js by typing touch server.js.

Let’s have a look at the server.js file:

let app = require('express')(),
    server = require('http').Server(app),
    bodyParser = require('body-parser'),
    express = require('express'),
    cors = require('cors'),
    http = require('http'),
    path = require('path');

let articleRoute = require('./Routes/article'),
    util = require('./Utilities/util');

app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

app.use(cors());

app.use('/article', articleRoute);

// catch 404 and forward to error handler
app.use(function (req, res, next) {
    next();
});

/* first route to check if the server is running: serve the built client */
app.get('*', (req, res) => {
    res.sendFile(path.join(__dirname, '../server/client/dist/index.html'));
});

// error handler - registered last so that errors from the routes reach it
app.use(function (err, req, res, next) {
    return res.send({ "statusCode": util.statusCode.FIVE_ZERO_ZERO, "statusMessage": util.statusMessage.SOMETHING_WENT_WRONG });
});

server.listen(3000, function () {
    console.log('app listening on port: 3000');
});

In the above file you can see the required packages at the top. Below that, body parsing, middleware and routing are set up.

The next task is to create the routes. Create a folder named ‘Routes’ and add a file article.js inside it.

Add the code below for routing in article.js inside the Routes folder:

let express = require('express'),
    router = express.Router(),
    util = require('../Utilities/util'),
    articleService = require('../Services/article');

/** API to create an article */
router.post('/create-article', (req, res) => {
    articleService.createArticle(req.body, (data) => {
        res.send(data);
    });
});

/** API to update an article */
router.put('/update-article', (req, res) => {
    articleService.updateArticle(req.body, (data) => {
        res.send(data);
    });
});

/** API to delete an article */
router.delete('/delete-article', (req, res) => {
    articleService.deleteArticle(req.query, (data) => {
        res.send(data);
    });
});

/** API to get the list of articles */
router.get('/get-article', (req, res) => {
    articleService.getArticle(req.query, (data) => {
        res.send(data);
    });
});

/** API to get an article by id */
router.get('/get-article-by-id', (req, res) => {
    articleService.getArticleById(req.query, (data) => {
        res.send(data);
    });
});

module.exports = router;

Now create a folder named Utilities for all the config, common methods and MySQL connection config.

First, add the config values in a file named config.js:

let environment = "dev";

let serverURLs = {
    "dev": {
        "NODE_SERVER": "http://localhost",
        "NODE_SERVER_PORT": "3000",
        "MYSQL_HOST": 'localhost',
        "MYSQL_USER": 'root',
        "MYSQL_PASSWORD": 'password',
        'MYSQL_DATABASE': 'demo_angular7_crud',
    }
}

let config = {
    "DB_URL_MYSQL": {
        "host": `${serverURLs[environment].MYSQL_HOST}`,
        "user": `${serverURLs[environment].MYSQL_USER}`,
        "password": `${serverURLs[environment].MYSQL_PASSWORD}`,
        "database": `${serverURLs[environment].MYSQL_DATABASE}`
    },
    "NODE_SERVER_PORT": {
        "port": `${serverURLs[environment].NODE_SERVER_PORT}`
    },
    "NODE_SERVER_URL": {
        "url": `${serverURLs[environment].NODE_SERVER}`
    }
};

module.exports = {
    config: config
};

Now configure the MySQL connection. I am writing the database connection in a separate file, so create a file named mysqlConfig.js under the Utilities folder and add the code below for the MySQL connection:

var config = require("../Utilities/config").config;
var mysql = require('mysql');
var connection = mysql.createConnection({
    host: config.DB_URL_MYSQL.host,
    user: config.DB_URL_MYSQL.user,
    password: config.DB_URL_MYSQL.password,
    database: config.DB_URL_MYSQL.database,
});

connection.connect(() => {
    require('../Models/Article').initialize();
});

let getDB = () => {
    return connection;
}

module.exports = {
    getDB: getDB
}

Now create a separate file named util.js to hold common methods and shared status codes/messages:

// Define status codes
let statusCode = {
    OK: 200,
    FOUR_ZERO_FOUR: 404,
    FOUR_ZERO_THREE: 403,
    FOUR_ZERO_ONE: 401,
    FIVE_ZERO_ZERO: 500,
    // referenced by the services below
    INTERNAL_SERVER_ERROR: 500
};

// Define status messages
let statusMessage = {
    SERVER_BUSY: 'Our Servers are busy. Please try again later.',
    DATA_UPDATED: 'Data updated successfully.',
    DELETE_DATA: 'Data deleted successfully.',
    // referenced by server.js and the services below
    SOMETHING_WENT_WRONG: 'Something went wrong. Please try again.',
    PARAMS_MISSING: 'Required parameters are missing.'
};

module.exports = {
    statusCode: statusCode,
    statusMessage: statusMessage
}

The next part is the model. Create a folder named Models, create a file Article.js inside it, and add the code below:

let mysqlConfig = require("../Utilities/mysqlConfig");
 
let initialize = () => {
mysqlConfig.getDB().query("create table IF NOT EXISTS article (id INT auto_increment primary key, category VARCHAR(30), title VARCHAR(24))");
 
}
 
module.exports = {
initialize: initialize
}

Now create a DAO folder and add a file articleDAO.js for the common MySQL query functions:

let dbConfig = require("../Utilities/mysqlConfig");

let getArticle = (criteria, callback) => {
    //criteria.aricle_id ? conditions += ` and aricle_id = '${criteria.aricle_id}'` : true;
    dbConfig.getDB().query(`select * from article where 1`, criteria, callback);
}
 
let getArticleDetail = (criteria, callback) => {
    let conditions = "";
criteria.id ? conditions += ` and id = '${criteria.id}'` : true;
dbConfig.getDB().query(`select * from article where 1 ${conditions}`, callback);
}
 
let createArticle = (dataToSet, callback) => {
    console.log("insert into article set ? ", dataToSet);
    dbConfig.getDB().query("insert into article set ? ", dataToSet, callback);
}
 
let deleteArticle = (criteria, callback) => {
let conditions = "";
criteria.id ? conditions += ` and id = '${criteria.id}'` : true;
console.log(`delete from article where 1 ${conditions}`);
dbConfig.getDB().query(`delete from article where 1 ${conditions}`, callback);
 
}
 
let updateArticle = (criteria,dataToSet,callback) => {
    let conditions = "";
let setData = "";
criteria.id ? conditions += ` and id = '${criteria.id}'` : true;
dataToSet.category ? setData += `category = '${dataToSet.category}'` : true;
dataToSet.title ? setData += `, title = '${dataToSet.title}'` : true;
console.log(`UPDATE article SET ${setData} where 1 ${conditions}`);
dbConfig.getDB().query(`UPDATE article SET ${setData} where 1 ${conditions}`, callback);
}
module.exports = {
getArticle : getArticle,
createArticle : createArticle,
deleteArticle : deleteArticle,
updateArticle : updateArticle,
getArticleDetail : getArticleDetail
}

Now create a Services folder and add a file article.js for all the API logic:

let async = require('async'),
parseString = require('xml2js').parseString;
 
let util = require('../Utilities/util'),
articleDAO = require('../DAO/articleDAO');
//config = require("../Utilities/config").config;
 
 
/**API to create the atricle */
let createArticle = (data, callback) => {
async.auto({
article: (cb) => {
var dataToSet = {
"category":data.category?data.category:'',
"title":data.title,
}
console.log(dataToSet);
articleDAO.createArticle(dataToSet, (err, dbData) => {
if (err) {
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.SERVER_BUSY });
return;
}
 
cb(null, { "statusCode": util.statusCode.OK, "statusMessage": util.statusMessage.DATA_UPDATED,"result":dataToSet });
});
}
//]
}, (err, response) => {
callback(response.article);
});
}
 
/**API to update the article */
let updateArticle = (data,callback) => {
async.auto({
articleUpdate :(cb) =>{
if (!data.id) {
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.PARAMS_MISSING })
return;
}
console.log('phase 1');
var criteria = {
id : data.id,
}
var dataToSet={
"category": data.category,
"title":data.title,
}
console.log(criteria,'test',dataToSet);
articleDAO.updateArticle(criteria, dataToSet, (err, dbData) => {
    if (err) {
        cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.SERVER_BUSY });
        return;
    }
    else {
        cb(null, { "statusCode": util.statusCode.OK, "statusMessage": util.statusMessage.DATA_UPDATED, "result": dataToSet });
    }
});
}
}, (err,response) => {
callback(response.articleUpdate);
});
}
 
/**API to delete the subject */
let deleteArticle = (data,callback) => {
console.log(data,'data to set')
async.auto({
removeArticle :(cb) =>{
if (!data.id) {
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.PARAMS_MISSING })
return;
}
var criteria = {
id : data.id,
}
articleDAO.deleteArticle(criteria,(err,dbData) => {
if (err) {
console.log(err);
cb(null, { "statusCode": util.statusCode.FOUR_ZERO_ONE, "statusMessage": util.statusMessage.SERVER_BUSY });
return;
}
cb(null, { "statusCode": util.statusCode.OK, "statusMessage": util.statusMessage.DELETE_DATA });
});
}
}, (err,response) => {
callback(response.removeArticle);
});
}
 
/***API to get the article list */
let getArticle = (data, callback) => {
async.auto({
article: (cb) => {
articleDAO.getArticle({},(err, data) => {
if (err) {
cb(null, {"errorCode": util.statusCode.INTERNAL_SERVER_ERROR,"statusMessage": util.statusMessage.SERVER_BUSY});
return;
}
cb(null, data);
return;
});
}
}, (err, response) => {
callback(response.article);
})
}
 
/***API to get the article detail by id */
let getArticleById = (data, callback) => {
async.auto({
article: (cb) => {
let criteria = {
"id":data.id
}
articleDAO.getArticleDetail(criteria,(err, data) => {
if (err) {
console.log(err,'error----');
cb(null, {"errorCode": util.statusCode.INTERNAL_SERVER_ERROR,"statusMessage": util.statusMessage.SERVER_BUSY});
return;
}
cb(null, data[0]);
return;
});
}
}, (err, response) => {
callback(response.article);
})
}
 
module.exports = {
createArticle : createArticle,
updateArticle : updateArticle,
deleteArticle : deleteArticle,
getArticle : getArticle,
getArticleById : getArticleById
};

3. Create an Angular component for performing the CRUD tasks on articles

ng g component article

The above command will generate all the required files for the article component and will also automatically add this component to app.module.ts.

create src/app/article/article.component.css (0 bytes)
create src/app/article/article.component.html (23 bytes)
create src/app/article/article.component.spec.ts (614 bytes)
create src/app/article/article.component.ts (321 bytes)
update src/app/app.module.ts (390 bytes)

Now we need to add the HTTP module to app.module.ts. Open and edit src/app/app.module.ts and add the import; since the service below uses the Http class from @angular/http, we import HttpModule and add it to the @NgModule imports after BrowserModule. Our app.module.ts will now have the following code:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { ReactiveFormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';
 
import { AppComponent } from './app.component';
import { ArticleComponent } from './article.component';
import { ArticleService } from './article.service';
 
@NgModule({
imports: [
BrowserModule,
HttpModule,
ReactiveFormsModule
],
declarations: [
AppComponent,
ArticleComponent
],
providers: [
ArticleService
],
bootstrap: [
AppComponent
]
})
export class AppModule { }

Now create a service file where we will make all the requests to the server for the CRUD operations. The command for creating a service is ng g service article; for now, I have just created a file named article.service.ts. Let's have a look at the code inside this file.

import { Injectable } from '@angular/core';
import { Http, Response, Headers, URLSearchParams, RequestOptions } from '@angular/http';
import { Observable } from 'rxjs';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
 
import { Article } from './article';
 
@Injectable()
export class ArticleService {
//URL for CRUD operations
    articleUrl = "http://localhost:3000/article";
    //Create constructor to get Http instance
    constructor(private http:Http) {
    }
    
    //Fetch all articles
getAllArticles(): Observable<Article[]> {
return this.http.get(this.articleUrl+"/get-article")
              .map(this.extractData)
         .catch(this.handleError);
 
}
    //Create article
createArticle(article: Article):Observable<number> {
     let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
let options = new RequestOptions({ headers: cpHeaders });
return this.http.post(this.articleUrl+"/create-article", article, options)
.map(success => success.status)
.catch(this.handleError);
}
    //Fetch article by id
getArticleById(articleId: string): Observable<Article> {
        let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
        let options = new RequestOptions({ headers: cpHeaders });
        console.log(this.articleUrl +"/get-article-by-id?id="+ articleId);
        return this.http.get(this.articleUrl +"/get-article-by-id?id="+ articleId)
             .map(this.extractData)
             .catch(this.handleError);
}   
    //Update article
updateArticle(article: Article):Observable<number> {
     let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
        let options = new RequestOptions({ headers: cpHeaders });
return this.http.put(this.articleUrl +"/update-article", article, options)
.map(success => success.status)
.catch(this.handleError);
}
//Delete article    
deleteArticleById(articleId: string): Observable<number> {
        let cpHeaders = new Headers({ 'Content-Type': 'application/json' });
        let options = new RequestOptions({ headers: cpHeaders });
        return this.http.delete(this.articleUrl +"/delete-article?id="+ articleId)
             .map(success => success.status)
             .catch(this.handleError);
}   
    private extractData(res: Response) {
        let body = res.json();
return body;
}
private handleError (error: Response | any) {
        console.error(error.message || error);
        return Observable.throw(error.status);
}
}

In the above file we have made all the HTTP requests for the CRUD operations. Observables from the rxjs library are used to handle the data returned by the HTTP requests.

Now let's move to the next file, article.component.ts. Here we have all the logic of the app. Let's have a look at the code inside this file:

import { Component, OnInit } from '@angular/core';
import { FormControl, FormGroup, Validators } from '@angular/forms';
 
import { ArticleService } from './article.service';
import { Article } from './article';
 
@Component({
selector: 'app-article',
templateUrl: './article.component.html',
styleUrls: ['./article.component.css']
})
export class ArticleComponent implements OnInit {
//Component properties
allArticles: Article[];
statusCode: number;
requestProcessing = false;
articleIdToUpdate = null;
processValidation = false;
//Create form
articleForm = new FormGroup({
title: new FormControl('', Validators.required),
category: new FormControl('', Validators.required)   
});
//Create constructor to get service instance
constructor(private articleService: ArticleService) {
}
//Create ngOnInit() and load articles
ngOnInit(): void {
     this.getAllArticles();
}
//Fetch all articles
 
getAllArticles() {
        this.articleService.getAllArticles()
         .subscribe(
data => this.allArticles = data,
                errorCode => this.statusCode = errorCode);
                
}
//Handle create and update article
onArticleFormSubmit() {
     this.processValidation = true;
     if (this.articleForm.invalid) {
     return; //Validation failed, exit from method.
     }
     //Form is valid, now perform create or update
this.preProcessConfigurations();
     let article = this.articleForm.value;
     if (this.articleIdToUpdate === null) {
     //Generate article id then create article
this.articleService.getAllArticles()
     .subscribe(articles => {
            
         //Generate article id    
         let maxIndex = articles.length - 1;
         let articleWithMaxIndex = articles[maxIndex];
         let articleId = articleWithMaxIndex.id + 1;
         article.id = articleId;
         console.log(article,'this is form data---');
         //Create article
    this.articleService.createArticle(article)
             .subscribe(successCode => {
                    this.statusCode = successCode;
                    this.getAllArticles();  
                    this.backToCreateArticle();
                 },
                 errorCode => this.statusCode = errorCode
             );
         });        
     } else {
  //Handle update article
article.id = this.articleIdToUpdate;        
     this.articleService.updateArticle(article)
     .subscribe(successCode => {
         this.statusCode = successCode;
                 this.getAllArticles();  
                    this.backToCreateArticle();
             },
         errorCode => this.statusCode = errorCode);  
     }
}
//Load article by id to edit
loadArticleToEdit(articleId: string) {
this.preProcessConfigurations();
this.articleService.getArticleById(articleId)
     .subscribe(article => {
            console.log(article,'poiuytre');
         this.articleIdToUpdate = article.id;
                    this.articleForm.setValue({ title: article.title, category: article.category });
                    this.processValidation = true;
                    this.requestProcessing = false;
         },
         errorCode => this.statusCode = errorCode);
}
//Delete article
deleteArticle(articleId: string) {
this.preProcessConfigurations();
this.articleService.deleteArticleById(articleId)
     .subscribe(successCode => {
         //this.statusCode = successCode;
                    //Expecting success code 204 from server
                    this.statusCode = 204;
                 this.getAllArticles();  
                 this.backToCreateArticle();
             },
         errorCode => this.statusCode = errorCode);
}
//Perform preliminary processing configurations
preProcessConfigurations() {
this.statusCode = null;
     this.requestProcessing = true;
}
//Go back from update to create
backToCreateArticle() {
this.articleIdToUpdate = null;
this.articleForm.reset(); 
     this.processValidation = false;
}
}

Now we have to render everything in the browser, so let's have a look inside the article.component.html file.

<h1 class="text-center">Angular 7 CRUD Demo App</h1>
<h3 class="text-center" *ngIf="articleIdToUpdate; else create">
Update Article for Id: {{articleIdToUpdate}}
</h3>
<ng-template #create>
<h3 class="text-center"> Create New Article </h3>
</ng-template>
<div>
<form [formGroup]="articleForm" (ngSubmit)="onArticleFormSubmit()">
<table class="table-striped" style="margin:0 auto;">
<tr><td>Enter Title</td><td><input formControlName="title">
   <label *ngIf="articleForm.get('title').invalid && processValidation" [ngClass] = "'error'"> Title is required. </label>
 </td></tr>
<tr><td>Enter Category</td><td><input formControlName="category">
   <label *ngIf="articleForm.get('category').invalid && processValidation" [ngClass] = "'error'"> Category is required. </label>
  </td></tr>  
<tr><td colspan="2">
   <button class="btn btn-default" *ngIf="!articleIdToUpdate">CREATE</button>
    <button class="btn btn-default" *ngIf="articleIdToUpdate">UPDATE</button>
   <button (click)="backToCreateArticle()" *ngIf="articleIdToUpdate">Go Back</button>
  </td></tr>
</table>
</form>
<br/>
<div class="text-center" *ngIf="statusCode; else processing">
<div *ngIf="statusCode === 201" [ngClass] = "'success'">
   Article added successfully.
</div>
<div *ngIf="statusCode === 409" [ngClass] = "'success'">
Article already exists.
</div>   
<div *ngIf="statusCode === 200" [ngClass] = "'success'">
Article updated successfully.
</div>   
<div *ngIf="statusCode === 204" [ngClass] = "'success'">
Article deleted successfully.
</div>   
<div *ngIf="statusCode === 500" [ngClass] = "'error'">
Internal Server Error.
</div> 
</div>
<ng-template #processing>
  <img *ngIf="requestProcessing" src="assets/images/loading.gif">
</ng-template>
</div>
<h3 class="text-center">Article List</h3>
<table class="table-striped" style="margin:0 auto;" *ngIf="allArticles">
<tr><th> Id</th> <th>Title</th><th>Category</th><th></th><th></th></tr>
<tr *ngFor="let article of allArticles" >
<td>{{article.id}}</td> <td>{{article.title}}</td> <td>{{article.category}}</td>
  <td><button class="btn btn-default" type="button" (click)="loadArticleToEdit(article.id)">Edit</button> </td>
  <td><button class="btn btn-default" type="button" (click)="deleteArticle(article.id)">Delete</button></td>
</tr>
</table>

Since I have created two separate folders, server and client, for the Node.js and Angular parts, run both apps with npm start in two terminal tabs.

In the browser, open http://localhost:4200. The app will look like this:

[Screenshot: the finished Angular 7 CRUD app]

That’s all for now. Thank you for reading, and I hope this post will be very helpful for creating CRUD operations with Angular 7, Node.js & MySQL.


An Introduction to New Features in TypeScript 3.7 and How to Use Them

The TypeScript 3.7 release is coming soon, and it's going to be a big one.

The target release date is November 5th, and there are some seriously exciting headline features included:

  • Assert signatures.
  • Recursive type aliases.
  • Top-level await.
  • Null coalescing.
  • Optional chaining.

Personally, I'm super excited about this release: these features are going to whisk away all sorts of annoyances that I've been fighting in TypeScript while building HTTP Toolkit.

If you haven't been paying close attention to the TypeScript development process though, it's probably not clear what half of these mean, or why you should care. Let's talk through all of them.

Assert Signatures

This is a brand-new and little-known TypeScript feature, which allows you to write functions that act like type guards as a side-effect, rather than explicitly returning their boolean result.

It's easiest to demonstrate this with a JavaScript example:

function assertString(input) { 
  if (typeof input === 'string') 
    return; 
  else 
    throw new Error('Input must be a string!'); 
} 
function doSomething(input) { 
  assertString(input); 
  // ... Use input, confident that it's a string 
} 
doSomething('abc'); // All good
doSomething(123); // Throws an error

This pattern is neat and useful, and you can't use it in TypeScript today.

TypeScript can't know that you've guaranteed the type of input after it's run assertString. Typically, people just make the argument input: string to avoid this, and that's good. But, it also just pushes the type checking problem somewhere else, and in cases where you just want to fail hard, it's useful to have this option available.

Fortunately, soon we will:

// With TS 3.7
function assertString(input: any): asserts input is string { // <-- the magic
  if (typeof input === 'string')
    return;
  else
    throw new Error('Input must be a string!');
}

function doSomething(input: string | number) {
  assertString(input);
  // input's type is just 'string' here
}

Here asserts input is string means that if this function ever returns, TypeScript can narrow the type of input to string, just as if it was inside an if block with a type guard.

To make this safe, that means if the assert statement isn't true then your assert function must either throw an error or not return at all (kill the process, infinite loop, you name it).

That's the basics, but this actually lets you pull some really neat tricks:

// With TS 3.7

// Asserts that input is truthy, throwing immediately if not:
function assert(input: any): asserts input { // <-- not a typo
  if (!input)
    throw new Error('Not a truthy value');
}

declare const x: number | string | undefined;
assert(x); // Narrows x to number | string

// Also usable with type guarding expressions!
assert(typeof x === 'string'); // Narrows x to string

// -- Or use assert in your tests: --
const a: Result | Error = doSomethingTestable();
expect(a).is.instanceOf(Result); // 'instanceOf' could 'asserts a is Result'
expect(a.resultValue).to.equal(123); // a.resultValue is now legal

// -- Use as a safer ! that throws immediately if you're wrong --
function assertDefined<T>(obj: T): asserts obj is NonNullable<T> {
  if (obj === undefined || obj === null) {
    throw new Error('Must not be a nullable value');
  }
}

declare const x: string | undefined;
// Gives y just 'string' as a type, could throw elsewhere later:
const y = x!;
// Gives z 'string' as a type, or throws immediately if you're wrong:
assertDefined(x);
const z = x;

// -- Or even update types to track a function's side-effects --
type X<T extends string | {}> = { value: T };

// Use asserts to narrow types according to side effects:
function setX<T extends string | {}>(x: X<any>, v: T): asserts x is X<T> {
  x.value = v;
}

declare let x: X<any>;
// x is now { value: any };
setX(x, 123);
// x is now { value: number };

This is still in flux, so don't take it as the definite result, and keep an eye on the pull request if you want the final details.

There's even a discussion there about allowing functions to assert something and return a type, which would let you extend the final example above to track a much wider variety of side effects, but we'll have to wait and see how that plays out.

Top-Level Await

Async/await is amazing and makes promises dramatically cleaner to use.

Unfortunately, though, you can't use them at the top level. This might not be something you care about much in a TS library or application, but if you're writing a runnable script or using TypeScript in a REPL, then this gets super annoying.

It's even worse if you're used to frontend development, since top-level await has been working nicely in the Chrome and Firefox console for a couple of years now.

Fortunately though, a fix is coming. This is actually a general stage-3 JS proposal, so it'll be everywhere else eventually too, but for TS devs 3.7 is where the magic happens.

This one's simple, but let's have another quick demo anyway:


// Your only solution right now for a script that does something async: 
async function doEverything() { 
  ... 
  const response = await fetch('http://example.com'); 
  ... 
} 
  
doEverything(); // <- eugh (could use an IIFE instead, but even more eugh)

With top-level await:

// With TS 3.7: 
// Your script: ... 
const response = await fetch('http://example.com'); 
// ...

There's a notable gotcha here: if you're not writing a script, or using a REPL, don't write this at the top level, unless you really know what you're doing!

It's totally possible to use this to write modules that do blocking async steps when imported. That can be useful for some niche cases, but people tend to assume that their import statement is a synchronous, reliable, and fairly quick operation, and you could easily hose your codebase's startup time if you start blocking imports for complex async processes (even worse, processes that can fail).

This is somewhat mitigated by the semantics of imports of async modules: they're imported and run in parallel, so the importing module effectively waits for Promise.all(importedModules) before being executed.

Rich Harris wrote an excellent piece on a previous version of this spec (before that change, when imports ran sequentially and this problem was much worse), which makes for good background reading on the risks here if you're interested.

It's also worth noting that this is only useful for module systems that support asynchronous imports. There isn't yet a formal spec for how TS will handle this, but it likely means using a very recent target configuration and either ES Modules or Webpack v5 (whose alphas have experimental support) at runtime.

Recursive Type Aliases

If you've ever tried to define a recursive type in TypeScript, you may have run into StackOverflow questions like this: https://stackoverflow.com/questions/47842266/recursive-types-in-typescript.

Right now, you can't. Interfaces can be recursive, but there are limitations to their expressiveness, and type aliases can't. That means right now, you need to combine the two: define a type alias and extract the recursive parts of the type into interfaces. It works, but it's messy, and we can do better.

As a concrete example, this is the suggested type definition for JSON data:

type JSONValue = | string | number | boolean | JSONObject | JSONArray; 
interface JSONObject { [x: string]: JSONValue; } 
interface JSONArray extends Array<JSONValue> { }

That works, but the extra interfaces are only there because they're required to get around the recursion limitation.

Fixing this requires no new syntax; it just removes that restriction, so the below compiles:

// With TS 3.7: 
type JSONValue = | string | number | boolean | { [x: string]: JSONValue } | Array<JSONValue>;

Right now, that fails to compile with Type alias 'JSONValue' circularly references itself. Soon though, soon...

Null Coalescing

Aside from being difficult to spell, this one is quite simple and easy. It's based on a JavaScript stage-3 proposal, which means it'll also be coming to your favorite vanilla JavaScript environment soon (if it hasn't already).

In JavaScript, there's a common pattern for handling default values, and falling back to the first valid result of a defined group. It looks something like this:

// Use the first of firstResult/secondResult which is truthy: 
const result = firstResult || secondResult; 
// Use configValue from provided options if truthy, or 'default' if not: 
this.configValue = options.configValue || 'default';

This is useful in a host of cases, but due to some interesting quirks in JavaScript, it can catch you out. If firstResult or options.configValue can meaningfully be set to false, an empty string or 0, then this code has a bug. If those values are set, then when considered as booleans they're falsy, so the fallback value (secondResult / 'default') is used anyway.

Null coalescing fixes this. Instead of the above, you'll be able to write:

// With TS 3.7: 
// Use the first of firstResult/secondResult which is *defined*: 
const result = firstResult ?? secondResult; 
// Use configSetting from provided options if *defined*, or 'default' if not: 
this.configValue = options.configValue ?? 'default';

?? differs from || in that it falls through to the next value only if the first argument is null or undefined, not falsy. That fixes our bug. If you pass false as firstResult, that will be used instead of secondResult because, while it's falsy, it is still defined, and that's all that's required.

It's simple but super-useful, as it takes away a whole class of bugs.

Optional Chaining

Last but not least, optional chaining is another stage-3 proposal that is making its way into TypeScript.

This is designed to solve an issue faced by developers in every language: how do you get data out of a data structure when some or all of it might not be present?

Right now, you might do something like this:

// To get data.key1.key2, if any level could be null/undefined: 
let result = data ? (data.key1 ? data.key1.key2 : undefined) : undefined; 
// Another equivalent alternative: 
let result = ((data || {}).key1 || {}).key2;

Nasty! This gets much much worse if you need to go deeper, and although the second example works at runtime, it won't even compile in TypeScript, since the first step could be {}, in which case key1 isn't a valid key at all.

This gets still more complicated if you're trying to get into an array, or there's a function call somewhere in this process.

There's a host of other approaches to this, but they're all noisy, messy & error-prone. With optional chaining, you can do this:

// With TS 3.7: 
// Returns the value if it's all defined & non-null, or undefined if not.
let result = data?.key1?.key2; 
// The same, through an array index or property, if possible: 
array?.[0]?.['key']; 
// Call a method, but only if it's defined: 
obj.method?.(); 
// Get a property, or return 'default' if any step is not defined: 
let result = data?.key1?.key2 ?? 'default';

The last case shows how neatly some of these dovetail together: null coalescing + optional chaining is a match made in heaven.

One gotcha: this will return undefined for missing values, even if they were null, e.g. in cases like (null)?.key (returns undefined). A small point, but one to watch out for if you have a lot of null in your data structures.

That's the lot! That should outline all the essentials for these features, but there are lots of smaller improvements, fixes, and editor support improvements coming too, so take a look at the official roadmap if you want to get into the nitty-gritty.

How to Build a Stable Node.js Project Architecture

Product development that involves JavaScript is often accompanied by Node.js, a JavaScript runtime environment. The birth of this technology has certainly turned the use of JS upside down. Today, JavaScript is among the most preferred languages for building apps, thanks to Node.js.

What is so special about this technology? To answer this, let's look not only at its benefits but also at its architectural limitations and the ways to deal with them.

Node JS brief history

Node.js was introduced by Ryan Dahl in 2009. The technology is mostly used for building an app's server side (back-end). What's special about Node.js is that it is asynchronous.

This means the server continues to process other client requests without forcing the client to wait until a previously sent request has been processed. Let's say it's Node.js' "value proposition" for anyone who would like to create reliable JS-based apps.

What is Node JS commonly used for?

Its non-blocking I/O model makes Node.js a great way to build real-time web apps, mobile products, chats, data streaming apps, browser games, APIs and medium-performance JS apps.

Node JS real-time applications examples and showcase

Among the companies that rely on Node.js for part of their stack are LinkedIn, Yahoo, IBM, Netflix, PayPal, Uber and others.

Let’s see where else Node is used apart from back-end (2017 data):
[Chart: where Node.js is used apart from the back-end]

From a business point of view, Node.js is used for:

[Chart: business use cases of Node.js]

If you have worked with Node.js, you probably already know that with Node.js you can gain the following:

  • Since Node.js is built with the help of C++, you can call C functions

  • Its asynchronous nature accelerates the app and provides the ability to multitask. Apart from this, the non-blocking I/O model suits high-traffic, real-time websites and resources.

  • Stable multiplatform app development

  • Lots of Node.js development tools for a better workflow: npm packages, Express, Socket.io, etc. (we'll touch upon these a bit later in the article)

  • A clear and flexible learning curve

But Node.js' architectural limitations mean you lose the opportunity to:

  • Create computation-heavy apps, such as those involving 3D projection or heavy calculations.
    As Node.js is single-threaded, it is not the right fit for such projects. All the work happens on a single thread and therefore overloads the CPU. For these types of apps or software, it's better to use multithreaded languages like C or C#. For full-scale video games, you can use Unity.

  • Use some npm packages.

Not all of the npm packages we mentioned in the pros are of high quality and stable. Thus, you have to filter them properly and choose only the reliable ones. (Author's note: think of npm packages like plugins in WordPress.)

  • Use relational databases at full power, given Node.js' architecture.

Node.js works best with document-oriented databases like MongoDB.

Let’s get into the technical details, best practices for Node.js development, and more detailed practical tips on working with this platform.

  • Application Specifics
    First and foremost, think about what type of application you plan to release. To further proceed with Node.js project architecture building, ask yourself some of the following questions:

  • Are you going to build a real-time web app with Node.js? Is it meant to be a mobile or a console one? Or maybe it's a multiplatform app?

  • What data should the app operate with? Are these databases, files, or remote storages (like Amazon S3)?

  • Do you plan to use special software in your application? Are sophisticated data processing algorithms such as face detection or text recognition enlisted in your app's functionality business plan?

  • Does your application need an extra hardware, like camera, microphone, various sensors, or any other related devices?

  • What are the architectural specifics of the future app? Is this meant to be a client server, MVC or maybe any other type of architecture?

If you've answered all of these questions, make sure that Node.js is able to fully meet all the requirements set for your project. For example, if you need an API server that works with several types of databases, Node.js might be a good choice. But if you need an application designed to render 3D graphics using DirectX, you might want to get better acquainted with C++.

Let's assume that your application uses special temperature and contamination sensors. You can pair such features with a Raspberry Pi, Arduino or any other special device to go further and create a 'smart' functionality model. But before you start, make sure that the driver of any such device is compatible with Node.js.

Best practices for Node.js development workflow

Typically, an application is written by a team of developers. Everyone on the team is unique, and so is their code style. Therefore, it's recommended to agree on the following code organization nuances before the development stage.

  • Functional development style or object-oriented programming patterns?

Since JS is a weakly-typed language and allows you to write your code freestyle, it's still better to agree on a single set of code-writing rules within your team. This will prevent most misunderstandings and help your colleagues get a better understanding of the project's code.

  • Code style
    Discuss code-writing dos and don'ts with your teammates. Check the quality of what is written using Node.js development tools like lint.

  • Your team's experience with the integration of third-party services and devices in your application (i.e. Google Maps, data collection, analytics tools, e-communication means, etc.)

  • Data models you are going to work with (files, databases or third-party APIs)

  • Communication and data exchange tools you're going to use (REST API, Blouse Protocol, Socket.io, GraphQL, DDP protocol)

  • The possibility of using 3rd-party libraries (hardware libraries, special algorithms)

The scope of work and its specifics can vary widely. Let's say one of your teammates works with the SQL database while someone else deals with the Amazon API; each of your colleagues is assigned their particular tasks.

Don't reinvent the wheel

The Node.js community is currently up and thriving, and a lot of neat features have already been invented and written by other developers. So before you create a particular piece of functionality for your application, check whether someone else has encountered the exact same problem before.

If you're not the only one who has experienced a certain issue, you might find the solution in npm packages. This is an entire catalogue of ready-made useful libraries that will make life much easier for you.

The same applies to frameworks. Think about whether to use any of them to speed up building a real-time web app with Node.js or a mobile one.

Let's say you're dealing with a REST API: you can try out Express.js. If you need to interact with a particular database type, you can turn to frameworks such as Mongoose.js or an SQL library, depending on which database type you need.

Although packages can benefit your project, there are some significant dangers to be aware of. Given the fact that these solutions are open source, there are several threats to bear in mind:

  • Duplicates

There are too many packages already and some of them clone the others. Unfortunately, this trend is only growing. Be careful and make sure you choose the right and unique npm package.

  • Malicious code
    Since these packages are not supervised, anyone can write anything they want. Read more about security issues in the article by David Gilbertson, ‘I’m harvesting credit card numbers and passwords from your site. Here’s how’. So if your product has to provide AAA-grade security, meticulously check each code snippet of any package you install.

Always stay ahead of the times

The JS and Node.js community is constantly growing. ES standards are frequently updated; old features are replaced by new, better ones and implemented in Node.js. Thus, it's important to always keep an eye on the technology's state of the art.

For instance,

[callbackHell](http://callbackhell.com/)
fs.action(source, function (err, res) {
  if (err) {
    console.log('Error: ' + err)
  } else {
    res.action(function (err, res) {
      if (err) {
        console.log('Error: ' + err)
      }
    })
  }
})

Was replaced with

[promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)
fs.action(source)
  .then(res => res.action())
  .then(res => res.action())
  .then(res => res.action())
  .catch(err =>  console.log('Error : ' + err))
	

Now we can use async/await as the alternative to Promises:

[async/await](https://blog.risingstack.com/async-await-node-js-7-nightly)
try {
  const res = await fs.action(source);
  const res1 = await res.action(source);
  const res2 = await res1.action(source);
  const res3 = await res2.action(source);
} catch (error) {
  console.log('Error: ' + error)
}

For now, the community's opinions on whether to use promises or async/await vary.

Try to update your knowledge of new Node.js and ES releases on a regular basis. This will help you keep the development process modern.

Node.js app development techniques and tips

“Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.”
― Frank Herbert.

To overcome Node.js architecture limitations and issues ranging from trivial to challenging, keep to a clear development structure.

Although a Node.js project might consist of only one file, this does not mean that you should pile everything up into one great mess.

Make your code as readable and understandable for others as it can be. The following recommendations help:

  • Follow the instructions on the structure development for the framework you are using. If you do not use any, try to place your code in the directories and subdirectories in the most logical way possible.

  • File naming

Keep to a single, agreed file-naming scheme. For example, choose one of these: ErrorHandler, errorHandler, error_handler or error-handler. Try to name files according to their purpose rather than their functionality. For example, it's better to name a file Notifier than NotifyAllUsersByEmailSMSLocal.

  • The entry point must not contain unnecessary code lines

The entry point consists of the main.js, app.js and www files that are used to launch your application. Such files should contain only calls to certain methods or classes, nothing more.

  • index.js

Keeping only imports/exports in these files is generally considered a justified practice.

Tips for better Node.js project architecture

The way the code is written reflects the 'face' (reputation) of the programmer. It also shows how the entire development team deals with building the app using a certain technology or language – Node.js in our case. Therefore, always try to keep it up-to-date and structured (and comprehensible for other developers). Code readability is one of the main ways to build a stable, real-time web app with Node.js.

  • Use code quality control tools, like lint

This tool will give even trivial errors no chance to slip through. It will also allow you to keep the code in one unified form.

  • Keep track of your file sizes

Overly large files are difficult to navigate and understand. The optimal file size is 300-500 lines or fewer, so if you notice that the code keeps growing within the same file, turn it into a directory with several files inside.

  • Comment on your code
    When you write a universal module that will be used in several places, don’t forget to create a quick guide on how to utilize this code.
/**
 * Provide sending notification for users by Local, Email , SMS
 *
 * @example
 *          new Notifier(user).notify(['sms', 'email'], ...)
 **/
class Notifier {

You can also leave instructions for methods in your code

/**
 * Provide parse date for single format on project
 * @param Date
 * @example
 *        dateToString(date) => String
 * **/
 

If you’re developing a particular REST API, you can embed instructions on its usage in the code itself. It's recommended, though, to create complete documentation/guidelines and store them in a single resource.

/**
* Provide Api for Account
  Account Register  POST /api/v1/account/
  @params
         email {string}
         password {string}
  Account Login  POST /api/v1/account/login
  @params
         email {string}
         password {string}

  Account Logout  GET /api/v1/account/logout
  @header
         Authorization: Bearer {token}
 **/
 

There are two sides to the coin, though.

To ensure the use of Node.js architecture best practices, keep your code as descriptive and organized as possible, but don't overdo it.

Don't comment every line of code. It will do more harm than good and only make the development process more complex. The better tip is to comment on the code snippet's purpose rather than its functionality (what it does). Depending on the development style (callback, promise, async/await), write and use only one (if possible) general error handler/processor.

Error handling

Error handling is another important aspect among the best practices for Node.js development to bear in mind.

Since JavaScript is not as strict as Java, all the responsibility lies with the development team.

First and foremost, always try to handle errors. Otherwise they can lead to uncontrolled app behavior.

  • With callback
const withoutErrors = callback => (err, updatedTank) => {
  if (err) {
    return // do something
  }
  return callback(updatedTank);
};
fs.action(withoutErrors(data => ...))

  • With Promise

const handleError = error => {
  if (error) {
    return // do something
  }
};
fs.action()
    .then(data => ...)
    .then(data => ...)
    .catch(handleError)
  • With async / await
class Actions {
  async action1 (data) {
    return fs.action(data)
  }
  async action2 (data) {
    return fs.action(data)
  }
  ....
}

try {
  await new Actions().action1();
  await new Actions().action1();
} catch (error) {
  return handleError(error)
}

Node JS development tools
  • Gulp

A toolkit that allows you to launch several tasks or apps simultaneously. It might be useful if you’d like to run several services at the same time with one command.

  • Nodemon

A hot-reload tool for Node.js. It automatically restarts your project after any code change is made, which is quite handy during Node.js project development.

  • Forever, pm2
    These two packages ensure that the app is launched when the operating system (OS) starts.

  • Winston
    Lets you record the app’s logs to a primary source (a file or a database). The package helps when the app runs remotely and you don’t have full access to it.

  • Threads
    A tool designed to make working with threads easier.

To sum up with

We hope that the Node.js architecture best practices presented in this article will help you achieve the desired result when looking for a way to build performant, real-time web or mobile apps with Node.js.

How to Set up an SMS Notification With Python

Hi everyone :) Today I am beginning a new series of posts specifically aimed at Python beginners. The concept is rather simple: I'll do a fun project, in as few lines of code as possible, and will try out as many new tools as possible.

For example, today we will learn to use the Twilio API and the Twitch API, and we'll see how to deploy the project on Heroku. I'll show you how you can have your own "Twitch Live" SMS notifier, in 30 lines of code, and for 12 cents a month.

Prerequisite: You only need to know how to run Python on your machine and some basic commands in git (commit & push). If you need help with these, I can recommend these 2 articles to you:

Python 3 Installation & Setup Guide

The Ultimate Git Command Tutorial for Beginners from Adrian Hajdin.

What you'll learn:

  • Twitch API
  • Twilio API
  • Deploying on Heroku
  • Setting up a scheduler on Heroku

What you will build:

The specifications are simple: we want to receive an SMS as soon as a specific Twitcher starts live streaming. We want to know when this person goes live and when they stop streaming. We want this whole thing to run by itself, all day long.

We will split the project into 3 parts. First, we will see how to programmatically know if a particular Twitcher is online. Then we will see how to receive an SMS when this happens. We will finish by seeing how to make this piece of code run every X minutes, so we never miss another moment of our favorite streamer's life.

Is this Twitcher live?

To know if a Twitcher is live, we can do one of two things. The first is to go to the Twitcher's URL and check whether the "Live" badge is there.

Screenshot of a Twitcher live streaming.

This approach involves scraping and is not easily done in Python in fewer than 20 or so lines of code. Twitch runs a lot of JavaScript, and a simple requests.get() won't be enough.

For scraping to work, in this case, we would need to scrape this page inside Chrome to get the same content as what you see in the screenshot. This is doable, but it will take much more than 30 lines of code. If you'd like to learn more, don't hesitate to check my recent web scraping guide.

So instead of trying to scrape Twitch, we will use their API. For those unfamiliar with the term, an API is a programmatic interface that allows websites to expose their features and data to anyone, mainly developers. In Twitch's case, their API is exposed through HTTP, which means that we can get lots of information and do lots of things by just making a simple HTTP request.

Get your API key

To do this, you have to first create a Twitch API key. Many services enforce authentication for their APIs to ensure that no one abuses them or to restrict access to certain features by certain people.

Please follow these steps to get your API key:

  • Create a Twitch account
  • Now create a Twitch dev account -> "Signing up with Twitch" top right
  • Go to your "dashboard" once logged in
  • "Register your application"
  • Name -> Whatever, Oauth redirection URL -> http://localhost, Category -> Whatever

You should now see, at the bottom of your screen, your client-id. Keep this for later.

Is that Twitcher streaming now?

With your API key in hand, we can now query the Twitch API to have the information we want, so let's begin to code. The following snippet just consumes the Twitch API with the correct parameters and prints the response.

# requests is the go-to package in Python to make HTTP requests
# https://2.python-requests.org/en/master/
import requests

# This is one of the routes where Twitch exposes data,
# they have many more: https://dev.twitch.tv/docs
endpoint = "https://api.twitch.tv/helix/streams?"

# In order to authenticate we need to pass our API key through a header
headers = {"Client-ID": "<YOUR-CLIENT-ID>"}

# The previously set endpoint needs some parameters: here, the Twitcher we want to follow
# Disclaimer, I don't even know who this is, but he was the first one on Twitch to have a live stream so I could have nice examples
params = {"user_login": "Solary"}

# It is now time to make the actual request
response = requests.get(endpoint, params=params, headers=headers)
print(response.json())

The output should look like this:

{
   'data':[
      {
         'id':'35289543872',
         'user_id':'174955366',
         'user_name':'Solary',
         'game_id':'21779',
         'type':'live',
         'title':"Wakz duoQ w/ Tioo - GM 400LP - On récupère le chall après les -250LP d'inactivité !",
         'viewer_count':4073,
         'started_at':'2019-08-14T07:01:59Z',
         'language':'fr',
         'thumbnail_url':'https://static-cdn.jtvnw.net/previews-ttv/live_user_solary-{width}x{height}.jpg',
         'tag_ids':[
            '6f655045-9989-4ef7-8f85-1edcec42d648'
         ]
      }
   ],
   'pagination':{
      'cursor':'eyJiIjpudWxsLCJhIjp7Ik9mZnNldCI6MX19'
   }
}

This data format is called JSON and is easily readable. The data key holds an array of all the currently active streams. The type key tells us that the stream is currently live; in other cases (an error, for example) this key will be empty.

So if we want to create a boolean variable in Python that stores whether the current user is streaming, all we have to append to our code is:

json_response = response.json()

# We get only streams
streams = json_response.get('data', [])

# We create a small function, (a lambda), that tests if a stream is live or not
is_active = lambda stream: stream.get('type') == 'live'
# We filter our array of streams with this function so we only keep streams that are active
streams_active = filter(is_active, streams)

# any returns True if streams_active has at least one element, else False
at_least_one_stream_active = any(streams_active)

print(at_least_one_stream_active)

At this point, at_least_one_stream_active is True when your favourite Twitcher is live.

Let's now see how to get notified by SMS.

Send me a text, NOW!

So to send a text to ourselves, we will use the Twilio API. Just go over there and create an account. When asked to confirm your phone number, please use the phone number you want to use in this project. This way you'll be able to use the $15 of free credit Twilio offers to new users. At around 1 cent a text, it should be enough for your bot to run for one year.

If you go to the console, you'll see your Account SID and your Auth Token; save them for later. Also click on the big red button "Get My Trial Number", follow the steps, and save that number for later too.

Sending a text with the Twilio Python API is very easy, as they provide a package that does the annoying stuff for you. Install the package with pip install twilio and just do:

from twilio.rest import Client
client = Client(<Your Account SID>, <Your Auth Token>)
client.messages.create(
	body='Test MSG',from_=<Your Trial Number>,to=<Your Real Number>)

And that is all you need to send yourself a text, amazing right?

Putting everything together

We will now put everything together, and shorten the code a bit so we manage to stay under 30 lines of Python code.

import requests
from twilio.rest import Client
endpoint = "https://api.twitch.tv/helix/streams?"
headers = {"Client-ID": "<YOUR-CLIENT-ID>"}
params = {"user_login": "Solary"}
response = requests.get(endpoint, params=params, headers=headers)
json_response = response.json()
streams = json_response.get('data', [])
is_active = lambda stream: stream.get('type') == 'live'
streams_active = filter(is_active, streams)
at_least_one_stream_active = any(streams_active)
if at_least_one_stream_active:
    client = Client(<Your Account SID>, <Your Auth Token>)
    client.messages.create(body='LIVE !!!', from_=<Your Trial Number>, to=<Your Real Number>)

Avoiding double notifications

This snippet works great, but if it runs every minute on a server, then as soon as our favorite Twitcher goes live we will receive an SMS every minute.

We need a way to store the fact that we were already notified that our Twitcher is live and that we don't need to be notified anymore.

The good thing with the Twilio API is that it offers a way to retrieve our message history, so we just have to retrieve the last SMS we sent to see if we already sent a text notifying us that the twitcher is live.

Here is what we are going to do, in pseudocode:

if favorite_twitcher_live and last_sent_sms is not live_notification:
	send_live_notification()
if not favorite_twitcher_live and last_sent_sms is live_notification:
	send_live_is_over_notification()

This way we will receive a text as soon as the stream starts, and another one when it is over, without getting spammed in between - perfect, right? Let's code it:

# reusing our Twilio client
last_messages_sent = client.messages.list(limit=1)
last_message_id = last_messages_sent[0].sid
last_message_data = client.messages(last_message_id).fetch()
last_message_content = last_message_data.body

Let's now put everything together again:

import requests
from twilio.rest import Client
client = Client(<Your Account SID>, <Your Auth Token>)

endpoint = "https://api.twitch.tv/helix/streams?"
headers = {"Client-ID": "<YOUR-CLIENT-ID>"}
params = {"user_login": "Solary"}
response = requests.get(endpoint, params=params, headers=headers)
json_response = response.json()
streams = json_response.get('data', [])
is_active = lambda stream: stream.get('type') == 'live'
streams_active = filter(is_active, streams)
at_least_one_stream_active = any(streams_active)

last_messages_sent = client.messages.list(limit=1)
if last_messages_sent:
    last_message_id = last_messages_sent[0].sid
    last_message_data = client.messages(last_message_id).fetch()
    last_message_content = last_message_data.body
    online_notified = "LIVE" in last_message_content
    offline_notified = not online_notified
else:
    online_notified, offline_notified = False, False

if at_least_one_stream_active and not online_notified:
    client.messages.create(body='LIVE !!!', from_=<Your Trial Number>, to=<Your Real Number>)
if not at_least_one_stream_active and not offline_notified:
    client.messages.create(body='OFFLINE !!!', from_=<Your Trial Number>, to=<Your Real Number>)

And voilà!

You now have a snippet of code, in less than 30 lines of Python, that will send you a text as soon as your favourite Twitcher goes online or offline, without spamming you.

We just now need a way to host and run this snippet every X minutes.

The quest for a host

To host and run this snippet we will use Heroku. Heroku is honestly one of the easiest ways to host an app on the web. The downside is that it is really expensive compared to other solutions out there. Fortunately for us, they have a generous free plan that will allow us to do what we want for almost nothing.

If you don't already have one, you need to create a Heroku account. You also need to download and install the Heroku client.

You now have to move your Python script into its own folder, and don't forget to add a requirements.txt file to it. Its content is simply:

requests
twilio

This is to ensure that Heroku downloads the correct dependencies.

cd into this folder and just do a heroku create --app <app name>.

If you go on your app dashboard you'll see your new app.

We now need to initialize a git repo and push the code on Heroku:

git init
heroku git:remote -a <app name>
git add .
git commit -am 'Deploy breakthrough script'
git push heroku master

Your app is now on Heroku, but it is not doing anything. Since this little script can't accept HTTP requests, going to <app name>.herokuapp.com won't do anything. But that should not be a problem.

To have this script running 24/7 we need to use a simple Heroku add-on called "Heroku Scheduler". To install this add-on, click on the "Configure Add-ons" button on your app dashboard.

Then, on the search bar, look for Heroku Scheduler:

Click on the result, and click on "Provision"

If you go back to your App dashboard, you'll see the add-on:

Click on the "Heroku Scheduler" link to configure a job. Then click on "Create Job". Here select "10 minutes", and for run command select python <name_of_your_script>.py. Click on "Save job".

While everything we have used so far on Heroku is free, the Heroku Scheduler runs the job on a $25/month instance, prorated to the second. Since this script takes approximately 3 seconds to run, running it every 10 minutes should cost you only about 12 cents a month.
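
Roughly: 6 runs per hour × 24 hours × 30 days ≈ 4,320 runs a month; at about 3 seconds each, that is ≈ 13,000 seconds of compute, or about 0.5% of the 2,592,000 seconds in a month, and 0.5% of $25 is roughly $0.12.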

Ideas for improvements

I hope you liked this project and that you had fun putting it into place. In less than 30 lines of code, we did a lot, but this whole thing is far from perfect. Here are a few ideas to improve it:

  • Send yourself more information about the current streaming (game played, number of viewers ...)
  • Send yourself the duration of the last stream once the twitcher goes offline
  • Send yourself an email instead of a text
  • Monitor multiple twitchers at the same time

Do not hesitate to tell me in the comments if you have more ideas.

Conclusion

I hope that you liked this post and that you learned things reading it. I truly believe that this kind of project is one of the best ways to learn new tools and concepts. I recently launched a web scraping API and learned a lot while building it.

Please tell me in the comments if you liked this format and if you want to do more.

I have many other ideas, and I hope you will like them. Do not hesitate to share what other things you build with this snippet, possibilities are endless.

Happy Coding.

Pierre


Python Tutorial: Image processing with Python (Using OpenCV)

In this tutorial, you will learn how you can process images in Python using the OpenCV library.

OpenCV is a free open source library used in real-time image processing. It’s used to process images, videos, and even live streams, but in this tutorial, we will process images only as a first step. Before getting started, let’s install OpenCV.

Install OpenCV

To install OpenCV on your system, run the following pip command:

 pip install opencv-python

Now OpenCV is installed successfully and we are ready. Let’s have some fun with some images!
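
If you want to double-check the installation, you can print the installed OpenCV version; this is just an optional sanity check:

 import cv2
 
 print(cv2.__version__)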

Rotate an Image

First of all, import the cv2 module.

 import cv2

Now to read the image, use the imread() method of the cv2 module, specify the path to the image in the arguments and store the image in a variable as below:

 img = cv2.imread("pyimg.jpg")

The image is now stored in img as a matrix of pixel values with rows and columns.

Actually, if you check the type of the img, it will give you the following result:

>>>print(type(img))
 
<class 'numpy.ndarray'>

It’s a NumPy array! That’s why image processing with OpenCV is so easy: you are working with a NumPy array the whole time.

To display the image, you can use the imshow() method of cv2.

cv2.imshow('Original Image', img) 
 
cv2.waitKey(0)

The waitKey() function takes a time in milliseconds as an argument: the delay before the window closes. Here we set the time to zero to keep the window open until we close it manually.

To rotate this image, you need the width and the height of the image because you will use them in the rotation process as you will see later.

 height, width = img.shape[0:2]

The shape attribute returns the dimensions of the image matrix; img.shape[0:2] is a tuple of two integers, (height, width).

Okay, now we have our image matrix and we want to get the rotation matrix. To get the rotation matrix, we use the getRotationMatrix2D() method of cv2. The syntax of getRotationMatrix2D() is:

 cv2.getRotationMatrix2D(center, angle, scale)

Here center is the center point of rotation, angle is the rotation angle in degrees, and scale is a scaling factor applied to the image (here used to make it fit on the screen).

To get the rotation matrix of our image, the code will be:

 rotationMatrix = cv2.getRotationMatrix2D((width/2, height/2), 90, .5)

The next step is to rotate our image with the help of the rotation matrix.

To rotate the image, we have a cv2 method named warpAffine which takes the original image, the rotation matrix of the image and the width and height of the image as arguments.

 rotatedImage = cv2.warpAffine(img, rotationMatrix, (width, height))

The rotated image is stored in the rotatedImage matrix. To show the image, use imshow() as below:

cv2.imshow('Rotated Image', rotatedImage)
 
cv2.waitKey(0)

After running the above lines of code, you will have the following output:

Crop an Image

First, we need to import the cv2 module and read the image and extract the width and height of the image:

import cv2
 
img = cv2.imread("pyimg.jpg")
 
height, width = img.shape[0:2]

Now get the starting and ending indexes of the rows and columns. These define the size of the newly created image. For example, going from row number 10 to row number 15 gives the height of the cropped image, and going from column number 10 to column number 15 gives its width.

You can get the starting point by specifying the percentage value of the total height and the total width. Similarly, to get the ending point of the cropped image, specify the percentage values as below:

startRow = int(height*.15)
 
startCol = int(width*.15)
 
endRow = int(height*.85)
 
endCol = int(width*.85)

Now map these values to the original image. Note that you have to cast the starting and ending values to integers because when mapping, the indexes are always integers.

 croppedImage = img[startRow:endRow, startCol:endCol]

Here we specified the range from starting to ending of rows and columns.

Now display the original and cropped image in the output:

cv2.imshow('Original Image', img)
 
cv2.imshow('Cropped Image', croppedImage)
 
cv2.waitKey(0)

The result will be as follows:

Resize an Image

To resize an image, you can use the resize() method of OpenCV. In the resize method, you can either specify scaling factors for the x and y axes or the number of rows and columns, which determines the size of the output image.

Import and read the image:

import cv2
 
img = cv2.imread("pyimg.jpg")

Now using the resize method with axis values:

newImg = cv2.resize(img, (0,0), fx=0.75, fy=0.75)
 
cv2.imshow('Resized Image', newImg)
 
cv2.waitKey(0)

The result will be as follows:

Now using the row and column values to resize the image:

newImg = cv2.resize(img, (550, 350))
 
cv2.imshow('Resized Image', newImg)
 
cv2.waitKey(0)

We say we want 550 columns (the width) and 350 rows (the height).

The result will be:

Adjust Image Contrast

In the Python OpenCV module, there is no dedicated function to adjust image contrast, but the official OpenCV documentation suggests an equation that can adjust image brightness and image contrast at the same time.

 new_img = a * original_img + b

Here a is alpha which defines contrast of the image. If a is greater than 1, there will be higher contrast.

If the value of a is between 0 and 1 (smaller than 1 but greater than 0), there would be lower contrast. If a is 1, there will be no contrast effect on the image.

b stands for beta. The values of b vary from -127 to +127.

To implement this equation in Python OpenCV, you can use the addWeighted() method. We use the addWeighted() method because it keeps the output in the range of 0 to 255 for a 24-bit color image.

The syntax of addWeighted() method is as follows:

 cv2.addWeighted(source_img1, alpha1, source_img2, alpha2, beta)

This blends two images: the first source image (source_img1) with a weight of alpha1 and the second source image (source_img2) with a weight of alpha2, while beta is a scalar added to the result.

If you only want to apply contrast in one image, you can add a second image source as zeros using NumPy.

Let’s work on a simple example. Import the following modules:

import cv2
 
import numpy as np

Read the original image:

 img = cv2.imread("pyimg.jpg")

Now apply the contrast. Since there is no other image, we will use the np.zeros which will create an array of the same shape and data type as the original image but the array will be filled with zeros.

contrast_img = cv2.addWeighted(img, 2.5, np.zeros(img.shape, img.dtype), 0, 0)
 
cv2.imshow('Original Image', img)
 
cv2.imshow('Contrast Image', contrast_img)
 
cv2.waitKey(0)

In the above code, the brightness is set to 0 as we only want to apply contrast.

The comparison of the original and contrast image is as follows:

Make an image blurry

Gaussian Blur

To make an image blurry, you can use the GaussianBlur() method of OpenCV.

GaussianBlur() uses a Gaussian kernel. The height and width of the kernel should be positive, odd numbers.

Then you specify the standard deviations in the X and Y directions, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken to be the same.

Consider the following example:

import cv2
 
img = cv2.imread("pyimg.jpg")
 
blur_image = cv2.GaussianBlur(img, (7,7), 0)
 
cv2.imshow('Original Image', img)
 
cv2.imshow('Blur Image', blur_image)
 
cv2.waitKey(0)

In the above snippet, the actual image is passed to GaussianBlur() along with height and width of the kernel and the X and Y directions.

The comparison of the original and blurry image is as follows:

Median Blur

In median blurring, the median of all the pixels inside the kernel area is calculated, and the central pixel is replaced with that median value. Median blurring is used when there is salt-and-pepper noise in the image.

To apply median blurring, you can use the medianBlur() method of OpenCV.

Consider the following example where we have a salt and pepper noise in the image:

import cv2
 
img = cv2.imread("pynoise.png")
 
blur_image = cv2.medianBlur(img,5)

This applies a median blur with a kernel size of 5 to the noisy image. Now show the images:

cv2.imshow('Original Image', img)
 
cv2.imshow('Blur Image', blur_image)
 
cv2.waitKey(0)

The result will be like the following:

Another comparison of the original image and after blurring:

Detect Edges

To detect the edges in an image, you can use the Canny() method of cv2 which implements the Canny edge detector. The Canny edge detector is also known as the optimal detector.

The syntax to Canny() is as follows:

 cv2.Canny(image, minVal, maxVal)

Here minVal and maxVal are the minimum and maximum intensity gradient values respectively.

Consider the following code:

import cv2
 
img = cv2.imread("pyimg.jpg")
 
edge_img = cv2.Canny(img,100,200)
 
cv2.imshow("Detected Edges", edge_img)
 
cv2.waitKey(0)

The output will be the following:

Here is the result of the above code on another image:

Convert image to grayscale (Black & White)

The easy way to convert an image to grayscale is to load it like this:

 img = cv2.imread("pyimg.jpg", 0)

There is another method, using the COLOR_BGR2GRAY conversion code.

To convert a color image into a grayscale image, use the COLOR_BGR2GRAY attribute of the cv2 module. This is demonstrated in the example below:

Import the cv2 module:

 import cv2

Read the image:

 img = cv2.imread("pyimg.jpg")

Use the cvtColor() method of the cv2 module which takes the original image and the COLOR_BGR2GRAY attribute as an argument. Store the resultant image in a variable:

 gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

Display the original and grayscale images:

cv2.imshow("Original Image", img)
 
cv2.imshow("Gray Scale Image", gray_img)
 
cv2.waitKey(0)

The output will be as follows:

Centroid (Center of blob) detection

To find the center of an image, the first step is to convert the original image into grayscale. We can use the cvtColor() method of cv2 as we did before.

This is demonstrated in the following code:

import cv2
 
img = cv2.imread("py.jpg")
 
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

We read the image and convert it to a grayscale image. The new image is stored in gray_img.

Now we have to calculate the moments of the image. Use the moments() method of cv2. In the moments() method, the grayscale image will be passed as below:

 moment = cv2.moments(gray_img)
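
The center coordinates X and Y used in the next step are derived from these moments: the centroid of the shape is (m10/m00, m01/m00).

X = int(moment["m10"] / moment["m00"])  # assumes m00 is non-zero, i.e. the image is not completely black
 
Y = int(moment["m01"] / moment["m00"])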

Finally, we have the center of the image at (X, Y). To highlight this center position, we can use the circle() method, which draws a circle at the given coordinates with the given radius.

The circle() method takes the img, the x and y coordinates where the circle will be drawn, the radius, the color we want the circle to be, and the thickness.

 cv2.circle(img, (X, Y), 15, (205, 114, 101), 1)

The circle is created on the image.

cv2.imshow("Center of the Image", img)
 
cv2.waitKey(0)

The original image is:

After detecting the center, our image will be as follows:

Apply a mask for a colored image

Image masking means applying some other image as a mask on the original image, or changing the pixel values in the image.

To apply a mask on the image, we will use the HoughCircles() method of the OpenCV module. The HoughCircles() method detects the circles in an image. After detecting the circles, we can simply apply a mask on these circles.

The HoughCircles() method takes the original image, the Hough Gradient (which detects the gradient information in the edges of the circle), and the information from the following circle equation:

 (x - x_center)^2 + (y - y_center)^2 = r^2

In this equation, (x_center, y_center) is the center of the circle and r is its radius.

Our original image is:

After detecting circles in the image, the result will be:

Okay, so we have the circles in the image and we can apply the mask. Consider the following code:

import cv2
 
import numpy as np
 
img1 = cv2.imread('pyimg.jpg')
 
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)

Detecting the circles in the image using the HoughCircles() code from OpenCV: Hough Circle Transform:

gray_img = cv2.medianBlur(cv2.cvtColor(img1, cv2.COLOR_RGB2GRAY), 3)
 
circles = cv2.HoughCircles(gray_img, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=50, minRadius=0, maxRadius=0)
 
circles = np.uint16(np.around(circles))

To create the mask, use np.full which will return a NumPy array of given shape:

masking=np.full((img1.shape[0], img1.shape[1]),0,dtype=np.uint8)
 
for j in circles[0, :]:
 
    cv2.circle(masking, (j[0], j[1]), j[2], (255, 255, 255), -1)

The next step is to combine the image and the masking array we created using the bitwise_or operator as follows:

 final_img = cv2.bitwise_or(img1, img1, mask=masking)

Display the resultant image:
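
Following the same imshow() pattern used earlier (the window title here is arbitrary):

cv2.imshow("Masked Image", final_img)
 
cv2.waitKey(0)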

Extracting text from Image (OCR)

To extract text from an image, you can use Google Tesseract-OCR. You can download it from the Tesseract-OCR project page.

Then you should install the pytesseract module which is a Python wrapper for Tesseract-OCR.
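
It can be installed with pip:

 pip install pytesseract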

The image from which we will extract the text is as follows:

Now let’s convert the text in this image to a string of characters and display the text as a string on output:

Import the pytesseract module:

 import pytesseract

Set the path of the Tesseract-OCR executable file:

 pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files (x86)\Tesseract-OCR\tesseract'

Now use the image_to_string method to convert the image into a string:

 print(pytesseract.image_to_string('pytext.png'))

The output will be as follows:

Works like charm!

Detect and correct text skew

In this section, we will correct the text skew.

The original image is as follows:

Import the modules cv2, NumPy and read the image:

import cv2
 
import numpy as np
 
img = cv2.imread("pytext1.png")

Convert the image into a grayscale image:

 gray_img=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

Invert the grayscale image using bitwise_not:

 gray_img=cv2.bitwise_not(gray_img)

Select the x and y coordinates of the pixels greater than zero by using the column_stack method of NumPy:

 coordinates = np.column_stack(np.where(gray_img > 0))

Now we have to calculate the skew angle. We will use the minAreaRect() method of cv2 which returns an angle range from -90 to 0 degrees (where 0 is not included).

 ang=cv2.minAreaRect(coordinates)[-1]

The rotation angle of the text region is stored in the ang variable. Now we add a condition on the angle: if the text region’s angle is smaller than -45 degrees, we set it to -(90 + ang); otherwise, we simply negate it to make it positive.

if ang<-45:
 
    ang=-(90+ang)
 
else:
 
    ang=-ang

Calculate the center of the text region:

height, width = img.shape[:2]
 
center_img = (width / 2, height / 2)

Now that we have the text skew angle, we apply getRotationMatrix2D() to get the rotation matrix and then use the warpAffine() method to rotate the image (explained earlier).

rotationMatrix = cv2.getRotationMatrix2D(center_img, ang, 1.0)
 
rotated_img = cv2.warpAffine(img, rotationMatrix, (width, height), borderMode = cv2.BORDER_REFLECT)

Display the rotated image:

cv2.imshow("Rotated Image", rotated_img)
 
cv2.waitKey(0)

Color Detection

Let’s detect the green color from an image:

Import the modules cv2 for images and NumPy for image arrays:

import cv2
 
import numpy as np

Read the image and convert it into HSV using cvtColor():

img = cv2.imread("pydetect.png")
 
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

Display the image:

 cv2.imshow("HSV Image", hsv_img)

Now create a NumPy array for the lower green values and the upper green values:

lower_green = np.array([34, 177, 76])
 
upper_green = np.array([255, 255, 255])

Use the inRange() method of cv2 to check if the given image array elements lie between array values of upper and lower boundaries:

 masking = cv2.inRange(hsv_img, lower_green, upper_green)

This will detect the green color.

Finally, display the original and resultant images:

 cv2.imshow("Original Image", img)

cv2.imshow("Green Color detection", masking)
 
cv2.waitKey(0)

Reduce Noise

To reduce noise from an image, OpenCV provides the following methods:

  1. fastNlMeansDenoising(): Removes noise from a grayscale image
  2. fastNlMeansDenoisingColored(): Removes noise from a colored image
  3. fastNlMeansDenoisingMulti(): Removes noise from grayscale image frames (a grayscale video)
  4. fastNlMeansDenoisingColoredMulti(): Same as 3 but works with colored frames

Let’s use fastNlMeansDenoisingColored() in our example:

Import the cv2 module and read the image:

import cv2
 
img = cv2.imread("pyn1.png")

Apply the denoising function. Its arguments are, in order: the original image (src); the destination (we pass None since we store the result in a variable); the filter strength; the filter strength for color components, used to remove colored noise (usually equal to the filter strength, or 10); the template patch size in pixels used to compute weights, which should always be odd (recommended value 7); and the window size in pixels used to compute the average for a given pixel (recommended value 21).

 result = cv2.fastNlMeansDenoisingColored(img,None,20,10,7,21)

Display original and denoised image:

cv2.imshow("Original Image", img)
 
cv2.imshow("Denoised Image", result)
 
cv2.waitKey(0)

The output will be:

Get image contour

Contours are curves in an image joining continuous points along a boundary. Contours are used to detect objects in an image.

The original image of which we are getting the contours of is given below:

Consider the following code where we used the findContours() method to find the contours in the image:

Import cv2 module:

 import cv2

Read the image and convert it to a grayscale image:

img = cv2.imread('py1.jpg')
 
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

Find the threshold:

 retval, thresh = cv2.threshold(gray_img, 127, 255, 0)

Use findContours(), which takes the image (we passed the thresholded image here) and some attributes. See the official findContours() documentation.

 img_contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

Draw the contours on the image using drawContours() method:

  cv2.drawContours(img, img_contours, -1, (0, 255, 0))

Display the image:

cv2.imshow('Image Contours', img)
 
cv2.waitKey(0)

The result will be:

Remove Background from an image

To remove the background from an image, we will find the contours to detect edges of the main object and create a mask with np.zeros for the background and then combine the mask and the image using the bitwise_and operator.

Consider the example below:

Import the modules (NumPy and cv2):

import cv2
 
import numpy as np

Read the image and convert the image into a grayscale image:

img = cv2.imread("py.jpg")
 
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

Find the threshold:

 _, thresh = cv2.threshold(gray_img, 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

In the threshold() method, the last argument defines the style of the threshold. See Official documentation of OpenCV threshold.

Find the image contours:

 img_contours = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]

Sort the contours:

img_contours = sorted(img_contours, key=cv2.contourArea)
 
for i in img_contours:
 
    if cv2.contourArea(i) > 100:
 
        break

Generate the mask using np.zeros:

 mask = np.zeros(img.shape[:2], np.uint8)

Draw contours:

 cv2.drawContours(mask, [i],-1, 255, -1)

Apply the bitwise_and operator:

 new_img = cv2.bitwise_and(img, img, mask=mask)

Display the original image:

 cv2.imshow("Original Image", img)

Display the resultant image:

cv2.imshow("Image with background removed", new_img)
 
cv2.waitKey(0)

Image processing is fun with OpenCV, as you saw. I hope you found this tutorial useful. Keep coming back.

Thank you.