The Image Processing Tutorial from Zero to One

Originally published by Rakesh Bhatt at https://overflowjs.com

Here is the list of posts in this series:

  1. Image Processing Using Cloudinary (Part 1)
  2. Image Processing — Making Custom Filters — React.js — (Part 2)
  3. Image Processing — OpenCV and Node.js (Part 3)
  4. Image Object Detection Using TensorFlow.js (Part 4)

1 - Image Processing In React.js (Part 1)

Image processing and manipulation have been interesting fields to work in from the beginning. I have done a wide variety of work in image processing applications, from creating basic image filters to Augmented Reality apps.

We will implement basic features for image processing today.

A Brief on Pixels and Images

Every digital image is made up of pixels, and each pixel is determined by a set of values. There are different mechanisms to define and store these pixel values; a pixel value can be represented as RGB or RGBA.

Each pixel comprises color values and, in combination, represents a dot. When you connect multiple dots, the entire image is built. Many image processing systems manipulate these values to provide the cool filters you might have seen on Instagram.

Each value in a pixel can be thought of as a channel. The transparency channel, termed A (Alpha), contains the transparency details. The difference between RGB and RGBA is this transparency channel.

Colorful images use three or four channels depending on the image format.
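To make this concrete, here is a small standalone sketch (plain browser JavaScript, not part of this project) that reads the RGBA channels of a single pixel using the Canvas API; the element id "photo" is hypothetical, used only for illustration:

// Read the RGBA channels of the top-left pixel of an already-loaded <img>.
const img = document.getElementById('photo');
const canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);

// getImageData returns a flat array: [R, G, B, A, R, G, B, A, ...]
const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
const [r, g, b, a] = data.slice(0, 4); // each channel is 0-255
console.log(`R=${r} G=${g} B=${b} A=${a}`);

Image filters are, at bottom, functions over these channel values.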

Note: There are a lot of libraries for image processing; among them, OpenCV and ImageMagick provide a wide variety of algorithms for image manipulation and processing, from creating masks over an image to object detection and extraction. Motion detection and image processing on videos are done in a similar way.

As part of this evolution, JavaScript provides a wide variety of libraries to implement and play with images.

We will use Cloudinary (https://cloudinary.com) for image manipulation and React for the UI in this project.

Live App link of what we will create today - https://cryptic-sierra-27182.herokuapp.com/ , if you want to play around a bit.
Part 2 of Image processing series - https://overflowjs.com/posts/Image-Processing-Making-Custom-Filters-Reactjs-Part-2.html

Prerequisite

  1. Node version >= 9.2.1
  2. Create react app - https://facebook.github.io/create-react-app/docs/getting-started
  3. Some knowledge of Material UI, as we will use it for UI design in our React app - https://material-ui.com

Let's create a React app and install the dependencies

  • Create a React app named image_app
create-react-app image_app
  • Go to the created directory and install the cloudinary-react SDK for processing images
cd image_app 
npm install cloudinary-react --save
  • Next, install Material UI to get some quick controls for manipulating images
npm install @material-ui/core

Directory Structure

Now we have our dependencies installed and the basic project directory set up, which looks like below:

image-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   ├── favicon.ico
│   ├── index.html
│   └── manifest.json
└── src
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    └── serviceWorker.js

We will now create a Container folder inside the src folder and add an ImageOps.jsx file to it, which will contain our UI code for building the image processing filters.

image-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   ├── favicon.ico
│   ├── index.html
│   └── manifest.json
└── src
    ├── Container
    │   ├── ImageOps.jsx <------------this
    ├── App.css
    ├── App.js
    ├── App.test.js
    ├── index.css
    ├── index.js
    ├── logo.svg
    └── serviceWorker.js

And in App.js we will import and render the ImageOps component.
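The App.js snippet is not reproduced here; a minimal version consistent with the setup above (my sketch, see the repo for the actual file) would be:

import React from 'react';
import ImageOps from './Container/ImageOps';

function App() {
  return <ImageOps />;
}

export default App;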

Note: You can see the entire project on overflowjs Github repo - https://github.com/overflowjs-com/image_app_cloudinary_material_react

Now that the basic setup is done, we will create RGB settings, HSV settings, and advanced filters for the image inside the ImageOps.jsx file.

RGB Settings

An image can be represented with RGB values per pixel. Each pixel contains a set of values that determine how the image is displayed. We will provide a mechanism to change each of these channel values.

We have used Material UI's Grid to distribute the overall UI parts:

  • We will build the first row with two columns: one to display the original image and another to show the altered image.

<Grid item xs={6}>
    <Card>
        <CardContent>
            <Typography variant="body2" color="textSecondary" component="p">
                Input image
            </Typography>
            <Image publicId="leena" cloudName="rakesh111" >
            </Image>
        </CardContent>
    </Card>
</Grid>
<Grid item xs={6}>
    <Card>
        <CardContent>
            <Typography variant="body2" color="textSecondary" component="p">
                Output Image
            </Typography>
            <Image publicId="leena" cloudName="rakesh111" >
                {this.getTransformations()}
            </Image>
        </CardContent>
    </Card>
</Grid>

Each grid contains a Card and a heading (the Typography component), imported from Material UI. The Image component above is imported from the cloudinary-react library. These components are imported as shown below.

import Grid from '@material-ui/core/Grid';
import Card from '@material-ui/core/Card';
import Typography from '@material-ui/core/Typography';
import {Image, Transformation} from 'cloudinary-react';

The output image code calls the getTransformations function, which returns the transformations currently selected in the UI. We keep the selected transformations in React state.

getTransformations() {

    return this.state.transforms.map((transform) => {

        return ( <Transformation effect={`${transform.key}:${transform.value}`} gravity="center" crop="fill" />)
    })
}

On state change, we form a transformations array that contains each change made to the image. Based on each of those transformations, which is a key => value pair object, we re-render the UI.

These key => value pairs create the transformation pipeline rendered inside Cloudinary's Image component.
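For illustration (these values are hypothetical, not from the repo), a transforms state like:

[
    {key: "red", value: 40},
    {key: "hue", value: 20},
    {key: "saturation", value: -20}
]

would render three Transformation children, which Cloudinary chains into a single URL-based pipeline (e_red:40, then e_hue:20, then e_saturation:-20).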

Let's create our initial slider controls for changing RGB values.

Again, we use a two-Grid layout as we did previously.

<Grid item xs={6}>
    <Card>
        <CardContent>
            <Box color="text.primary">
                <Typography paragraph={true} variant="h5" align="left" component="h5">
                    R-G-B Controls
                </Typography>

                {this.getRBBCons().map((color) => {
                    return (
                        <SliderComponent getSliderValue={(key) => this.getSliderValue(key, "rgb")} default={0} min={-100} max={100} keyLabel={color.key} keyValue={color.value}
                            updateColorValue={(e, value, key) => this.updateColorValue(e, value, key)}  />
                    )
                })}

                <Button variant="contained" align="left" onClick={() => this.resetFilters(["red", "green", "blue"])} color="primary">
                    Reset
                </Button>

            </Box>
        </CardContent>
    </Card>
</Grid>
<Grid item xs={6}>
    <Card item xs={6}>
        <CardContent>
            <Box color="text.primary">
                <Typography paragraph={true} variant="h5" align="left" component="h5">
                    R-G-B Based Filters
                </Typography>

                <Button variant="contained" align="left" onClick={() => this.createRGBEffect("all_blue")} >
                    Fill Blue
                </Button>
                <Button variant="contained" align="left" onClick={() => this.createRGBEffect("all_red")} >
                    Fill Red
                </Button>
                <Button variant="contained" align="left" onClick={() => this.createRGBEffect("all_green")}>
                    Fill Green
                </Button>
                <Button variant="contained" align="left" onClick={() => this.resetFilters(["red", "green", "blue"])} color="primary">
                    Reset
                </Button>

            </Box>
        </CardContent>
    </Card>
</Grid>

The getRBBCons function provides the constants that drive the RGB transformations, as below:

getRBBCons() {
    return [
        {key: "Red", value: "red", default: 0},
        {key: "Green", value: "green", default: 0},
        {key: "Blue", value: "blue", default: 0}
    ]
}

It provides the slider constants to iterate over, creating a separate slider for each channel.

getSliderValue(key, type) {

   const transform = this.state.transforms.find((transform) => transform.key === key);

   if (transform) {
       return transform.value;
   }

   if (type == "rgb") {
       return this.getRBBCons().find((transform) => transform.value === key).default;
   } else if (type == "hsv") {
       return this.getHSVCons().find((transform) => transform.value === key).default;
   }

}

This function binds the slider to the current effect value for RGB and HSV. Every time a slider value changes, the corresponding channel value is updated in the current state.

updateColorValue(e, value, key) {
   const transform = {
       key,
       value
   }

   const transforms = this.getUpdatedTransform(this.state.transforms, transform);
   this.setState({transforms});

}


getUpdatedTransform(transforms, transform) {

   const newTransforms = transforms.filter(({key}) => key !== transform.key)
   
   newTransforms.push(transform);

   return newTransforms

}
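The Reset buttons call a resetFilters function that is not shown above; a minimal sketch consistent with this state shape (my assumption, check the repo for the actual version) simply drops the given keys from the transforms array:

resetFilters(keys) {
   // Keep only the transformations whose key is not being reset
   const transforms = this.state.transforms.filter(({key}) => keys.indexOf(key) === -1);
   this.setState({transforms});
}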

SliderComponent is a basic wrapper around Material UI's Slider:

class SliderComponent extends React.Component {

   valuetext(value) {
       return `${value}°C`;
   }

   render() {
       return (
           <div>
               <Typography id="discrete-slider" align="left" gutterBottom>
                   {this.props.keyLabel}
               </Typography>
               <Slider
                   defaultValue={this.props.default}
                   getAriaValueText={this.valuetext}
                   aria-labelledby="discrete-slider"
                   valueLabelDisplay="auto"
                   step={10}
                   value={this.props.getSliderValue(this.props.keyValue)}
                   marks
                   min={this.props.min}
                   max={this.props.max}
                   onChangeCommitted={(e, value) => this.props.updateColorValue(e, value, this.props.keyValue)}
               />
           </div>
       )
   }

}

With the current implementation we can create several straightforward filters, such as:

  1. All Blue with Red and Green set to minimum and Blue to maximum
  2. All Red with Blue and Green set to minimum and Red to maximum
  3. All Green with Red and Blue set to minimum and Green to maximum

In another Grid we create buttons for these filters. The function behind these buttons is below:

createRGBEffect(type) {

   const red = {key: "red", value: 0}
   const blue = {key: "blue", value: 0}
   const green = {key: "green", value: 0}

   switch(type) {
       case "all_red":
           red.value = 100;
           break;
       case "all_blue":
           blue.value = 100;
           break;
       case "all_green":
           green.value = 100;
           break;
       default:
           break;
   }

   let transforms = this.state.transforms;

   transforms = this.getUpdatedTransform(transforms, red);
   transforms = this.getUpdatedTransform(transforms, blue);
   transforms = this.getUpdatedTransform(transforms, green);

   this.setState({transforms})
}

If you want to see the final code of ImageOps.jsx - https://github.com/overflowjs-com/image_app_cloudinary_material_react/blob/master/src/Container/ImageOps.jsx

Another way to represent and manipulate an image is HSV. Let's look at it.

HSV Settings

We implemented another set of settings to incorporate HSV controls.

Note: For more info around these feature you can refer to - https://cloudinary.com/documentation/image_transformation_reference.

The HSV model provides a cylindrical (cone) representation of an image. Hue is a 360-degree wheel divided into Red (0–60), Yellow (61–120), Green (121–180), Cyan (181–240), Blue (241–300), and Magenta (301–360), where each color contributes its range of degrees.

Saturation: describes the amount of grey; the more we reduce it, the greyer the image becomes.

Value (Brightness): together with saturation, it represents the strength of the color.
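To see how the two color models relate, here is a small standalone sketch (not part of the project code) of the classic RGB-to-HSV conversion:

// r, g, b in 0-255; returns h in degrees, s and v in 0-1.
function rgbToHsv(r, g, b) {
    r /= 255; g /= 255; b /= 255;
    const max = Math.max(r, g, b), min = Math.min(r, g, b);
    const d = max - min;
    let h = 0;
    if (d !== 0) {
        if (max === r) h = 60 * (((g - b) / d) % 6);
        else if (max === g) h = 60 * ((b - r) / d + 2);
        else h = 60 * ((r - g) / d + 4);
    }
    if (h < 0) h += 360;
    const s = max === 0 ? 0 : d / max;
    return { h, s, v: max };
}

console.log(rgbToHsv(255, 0, 0)); // { h: 0, s: 1, v: 1 } -- pure red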

Let’s look at the code,

<Grid item xs={6}>
    <Card>
        <CardContent>
            <Box color="text.primary">
                <Typography paragraph={true} variant="h5" align="left" component="h5">
                    H-S-V Controls
                </Typography>

                {this.getHSVCons().map((color) => {
                    return (
                        <SliderComponent getSliderValue={(key) => this.getSliderValue(key, "hsv")} default={0} min={-100} max={100} keyLabel={color.key} keyValue={color.value}
                            updateColorValue={(e, value, key) => this.updateColorValue(e, value, key)} />
                    )
                })}

                <Button variant="contained" align="left" onClick={() => this.resetFilters(["hue", "saturation"])} color="primary">
                    Reset
                </Button>

            </Box>
        </CardContent>
    </Card>
</Grid>
<Grid item xs={6}>
    <Card item xs={6}>
        <CardContent>
            <Box color="text.primary">
                <Typography paragraph={true} variant="h5" align="left" component="h5">
                    H-S-V Based Filters
                </Typography>

                <Button variant="contained" align="left" onClick={() => this.createHSVEffect("grayscale")} >
                    Gray Scale
                </Button>
                <Button variant="contained" align="left" onClick={() => this.createHSVEffect("sepia")} >
                    Sepia
                </Button>
                <Button variant="contained" align="left" onClick={() => this.resetFilters(["hue", "saturation", "brightness"])} color="primary">
                    Reset
                </Button>

            </Box>
        </CardContent>
    </Card>
</Grid>

Again, the HSV filter implementation is done in two parts.

Most of the components on the first grid are implemented similarly; we have three sliders, for Hue, Saturation, and Value.

getHSVCons() {
   return [
       {key: "Hue", value: "hue", default: 80},
       {key: "Saturation", value: "saturation", default: 80},
       {key: "Value", value: "brightness", default: 80},
   ]
}

This function provides the HSV constants that SliderComponent iterates over.

On the second grid we have two buttons, Grayscale and Sepia, to show the image effects. The main code handling these button effects is:

createHSVEffect(type) {

   const hue = {key: "hue", value: 80}
   const saturation = {key: "saturation", value: 80}

   switch(type) {
       case "grayscale":
           saturation.value = -70;
           break;
       case "sepia":
           hue.value = 20;
           saturation.value = -20
           break;
       default:
           break;
   }

   let transforms = this.state.transforms;

   // transforms = this.getUpdatedTransform(transforms, hue);
   if(type == "grayscale") {
       transforms = this.getUpdatedTransform(transforms, saturation);
   } else if(type == "sepia") {
       transforms = this.getUpdatedTransform(transforms, hue);
       transforms = this.getUpdatedTransform(transforms, saturation);
   }
   this.setState({transforms})

}

Here we have defined some constant values that push the transformations for greyscale and sepia.

The last portion is the advanced filters provided by Cloudinary, which is the coolest part here. Let's look into it.

Below is the code for this UI:

<Grid xs={12}>
    <Card item xs={6}>
        <CardContent>
            <Box color="text.primary">
                <Typography paragraph={true} variant="h5" align="left" component="h5">
                    Advance Filters By Cloudinary
                </Typography>

                <Button variant="contained" align="left" onClick={() => this.createAdvanceEffects("cartoon")} >
                    Cartoonify
                </Button>
                <Button variant="contained" align="left" onClick={() => this.createAdvanceEffects("vignette")} >
                    Vignette
                </Button>

                <Button variant="contained" align="left" onClick={() => this.createAdvanceEffects("oil_painting")} >
                    Oil Painting
                </Button>

                <Button variant="contained" align="left" onClick={() => this.createAdvanceEffects("vibrance")} >
                    Vibrance
                </Button>

                <Button variant="contained" align="left" onClick={() => this.resetFilters(["vignette", "cartoonify", "vibrance", "oil_paint"])} color="primary">
                    Reset
                </Button>

            </Box>
        </CardContent>
    </Card>
</Grid>

For each of these filters, we have code that pushes a transformation to our state, which is then applied to the image:

createAdvanceEffects(type) {

   let transforms = this.state.transforms;

   switch(type) {
       case "cartoon":
           const transform = {
               key: "cartoonify",
               value: "20:60"
           }
           transforms = this.getUpdatedTransform(transforms, transform);
           break;
       case "vignette":
           const transform_v = {
               key: "vignette",
               value: "30"
           }
           transforms = this.getUpdatedTransform(transforms, transform_v);
           break;
       case "oil_painting":
           const transform_p = {
               key: "oil_paint",
               value: "40"
           }
           transforms = this.getUpdatedTransform(transforms, transform_p);
           break;
       case "vibrance":
           const transform_vb = {
               key: "vibrance",
               value: "70"
           }
           transforms = this.getUpdatedTransform(transforms, transform_vb);
           break;
       default:
           break;

   }

   this.setState({transforms});

}

Note: For better understanding around transformation go through documentation - https://cloudinary.com/documentation/image_transformation_reference

We are done for now, but there are many more filters provided directly by the Cloudinary library that you can try, and have fun with images. You can also create a webcam-connected camera app and try these filters on captured images. The possibilities are unlimited.

2 - Image Processing — Making Custom Filters — React.js — (Part 2)

The biggest fun in programming is doing the same thing in many ways. But everything comes with a cost: we used Cloudinary for image processing in the first part of this series, and while it is a great library to begin with, it is a paid solution.

In part 2 we will implement some of the things that matter a lot when playing with images in large projects:

  1. Smoothing filters (some filters and algorithms for them)

  2. Thresholding filters

  3. Finding contours in the image

Finally, we will use a live camera to capture some images from the webcam live stream.

Let’s start now

Libraries

This one is going to take time if you are a beginner; setting up third-party systems is something worth learning.

  • Webcam: we use react-webcam, a straightforward webcam wrapper.

  • API calling library: we use isomorphic-fetch from npm to call our API from React.

  • Frontend: React.js and, again, Material UI (see the previous article for more help) - https://material-ui.com/

  • API: Node.js/Express.js code written in ES6 for the API and server-side processing.

Here is the live app link of the UI (it may take time to load the first time) - https://peaceful-reef-69295.herokuapp.com/ and the GitHub code link - https://github.com/overflowjs-com/image_app_image_processing_opencvviawebcam_part_2. Please feel free to check it out.

Let's begin with the UI first.

Our aim is to capture an image from the webcam and then apply filters to it via our own API calls.

1. Create a new React app:

create-react-app image_app_opencvwebcam

2. Go inside the project and install the dependencies:

cd image_app_opencvwebcam
npm install @material-ui/core --save
npm install react-webcam --save
npm install --save isomorphic-fetch es6-promise

3. Let's create containers, components, and utils folders inside the src folder. containers will hold ImageOps.jsx, our main entry point; components will hold ImageFilter.jsx and WebCamCapture.jsx, our reusable components; utils will hold Api.js, our API wrapper for hitting the Node.js server.

The directory will look like below:

image_app_opencvwebcam
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│  ├── favicon.ico
│  ├── index.html
│  └── manifest.json
└── src
   ├── containers
       ├── ImageOps.jsx
   ├── components
       ├── ImageFilter.jsx
       ├── WebCamCapture.jsx
   ├── utils
       ├── Api.js
   ├── App.css
   ├── App.js
   ├── App.test.js
   ├── index.css
   ├── index.js
   ├── logo.svg
   └── serviceWorker.js

If you have read our Part 1 then you will know what our App.js code will be :)

Let's check the ImageOps.jsx rendering code:

import React from 'react';
import Container from '@material-ui/core/Container';
import Grid from '@material-ui/core/Grid';
import Card from '@material-ui/core/Card';
import CardHeader from '@material-ui/core/CardHeader';
import CardContent from '@material-ui/core/CardContent';
import Typography from '@material-ui/core/Typography';
import WebCamCapture from '../components/WebCamCapture';

export default class ImageOpsContainer extends React.Component {

    constructor(props) {
        super(props);
        this.state = {
            image_data: null
        };
    }

    saveCapturedImage(data) {
        this.setState({ image_data: data });
    }

    render() {
        return (
            <Container maxWidth="md">
                <Grid container spacing={2}>
                    <Grid item xs={12}>
                        <Card>
                            <CardContent>
                                <Typography variant="body1" color="textPrimary" component="p">
                                    Image processing part-2
                                </Typography>
                                <Typography variant="h6" color="textPrimary" component="h6">
                                    CAMERA PREVIEW
                                </Typography>
                                <WebCamCapture saveCapturedImage={(data) => this.saveCapturedImage(data)}/>
                            </CardContent>
                        </Card>
                    </Grid>
                </Grid>
            </Container>
        );
    }
}

Note: We have imported Container, Grid, Card, CardContent, Typography from @material-ui module and WebCamCapture from our own component WebCamCapture.jsx

WebCamCapture, as the name suggests, is used for capturing images from the camera. We also pass the saveCapturedImage function as a prop, which gets called when we click the capture button that we will see in the WebCamCapture component. The saveCapturedImage function just sets the container state with the image data.

saveCapturedImage(data) {
 this.setState({ image_data: data });
}

Let's now look at the WebCamCapture component to better understand how it works:

import React from 'react';
import Webcam from 'react-webcam';
import Grid from '@material-ui/core/Grid';
import Button from '@material-ui/core/Button';

export default class WebCamCaptureContainer extends React.Component {

    constructor(props) {
        super(props);
        this.state = {
            videoConstants: {
                width: 1200,
                height: 720,
                facingMode: 'user'
            }
        }
    }

    captureImage() {
        this.props.saveCapturedImage(this.refs.webcam.getScreenshot());
    }

    render() {
        return (
            <div>
                <Grid container spacing={1}>
                    <Grid item xs={12}>
                        <Webcam
                            ref="webcam"
                            audio={false}
                            // height={350}
                            screenshotFormat="image/jpeg"
                            // width={350}
                            videoConstraints={this.state.videoConstants}
                        />
                    </Grid>
                    <Grid item xs={12}>
                        <Button variant="contained" align="center" color="primary" onClick={() => this.captureImage()} >
                            Capture
                        </Button>
                    </Grid>
                </Grid>
            </div>
        );
    }
}

Here, we have added the Webcam component and a Button to capture the current image. Both are in a Grid of 12 columns, which means only one component per row.

On button click, we capture the current image from the webcam as a screenshot and pass it to the props function saveCapturedImage that we received from ImageOps.jsx.

Note: If you are having trouble understanding this Material UI code, go to - https://material-ui.com/components/

Let's run the project via npm start and see what we have built so far.

Let's move on to the filters and code their UI as well.


Smoothing Filters:

Smoothing, or blurring, an image is very useful in many image operations. The biggest one is reducing noise in the image, which helps when we create a mask, do object or face detection, or process images of any kind.

Note: For more depth go to — https://docs.opencv.org/2.4/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html
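The simplest smoothing filter is the box blur, where each output pixel is the average of its neighborhood. A tiny illustrative sketch (plain JavaScript, independent of the OpenCV code we add in Part 3):

// Naive 3x3 box blur over a single-channel image stored as a 2D array.
// Border pixels are left untouched for brevity.
function boxBlur3x3(img) {
    const h = img.length, w = img[0].length;
    const out = img.map((row) => row.slice());
    for (let y = 1; y < h - 1; y++) {
        for (let x = 1; x < w - 1; x++) {
            let sum = 0;
            for (let dy = -1; dy <= 1; dy++)
                for (let dx = -1; dx <= 1; dx++)
                    sum += img[y + dy][x + dx];
            out[y][x] = Math.round(sum / 9); // average of the 3x3 neighborhood
        }
    }
    return out;
}

Gaussian, median, and bilateral filters follow the same idea with smarter neighborhood weighting.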

Now, inside components, we will create another component, ImageFilter.jsx:

import React from 'react';
import Grid from '@material-ui/core/Grid';
import Button from '@material-ui/core/Button';
import Typography from '@material-ui/core/Typography';
import Card from '@material-ui/core/Card';
import CardContent from '@material-ui/core/CardContent';
import { Divider, CardHeader } from '@material-ui/core';
import {api} from '../utils/Api';

export default class ImageFilters extends React.Component {

    constructor(props) {
        super(props);

        this.state = {
            smoothing_effects: [
                {label: "Blur", key: "blur"},
                {label: "Gaussian Blur", key: "gaussian_blur"},
                {label: "Median Blur", key: "median_blur"},
                {label: "Bilateral Filter", key: "bilateral_filter"},
            ],
            render: {}
        }
    }

    applyEffect(effect) {
        api("apply_filter", {
            type: effect,
            data: this.props.image_data
        }).then((data) => {
            const render = this.state.render;
            render[effect] = data;
            this.setState({render});
        });
    }

    getFilterData(effect) {
        if(this.state.render[effect]) {
            return this.state.render[effect];
        }
        return this.props.image_data;
    }

    render() {
        if (!this.props.image_data) {
            return <div/>;
        }
        return (
            <Grid container>
                {this.state[this.props.type].map((effect, i) => {
                    return (
                        <Grid item md={4} key={i}>
                            <Card>
                                <CardHeader title={`${effect.label} Image`}>
                                </CardHeader>
                                <CardContent>
                                    <img src={this.getFilterData(effect.key)} alt="" height="300px" />
                                    <Button variant="contained" align="center" color="secondary" onClick={() => this.applyEffect(effect.key)} >
                                        Generate
                                    </Button>
                                </CardContent>
                            </Card>
                            <Divider />
                        </Grid>
                    )
                })}
            </Grid>
        )
    }
}

Let's understand what's going on in the code.

This component is pretty simple. We have a Grid container and iterate over the state field selected by the type prop (smoothing_effects/threshold_effects/contour_effects) to generate each effect card.

Each card is a Grid item of 4 columns on medium and larger devices, which gives three cards per row; on smaller devices a card takes up the whole 12-column space.

On render, we check whether the captured image is available; if yes, we show the filters to apply.

<Card>
   <CardHeader title={`${effect.label} Image`}>
   </CardHeader>
   <CardContent>
       <img src={this.getFilterData(effect.key)} alt="" height="300px" />
       <Button variant="contained" align="center" color="secondary" onClick={() => this.applyEffect(effect.key)} >
           Generate
       </Button>
   </CardContent>
</Card>

Now we have an image as content and a button to apply the filter to it. The image content comes from the function below.

getFilterData(effect) {
   if(this.state.render[effect]) {
       return this.state.render[effect];
   }
   return this.props.image_data;
}

As you can see, we check whether the effect is saved in the state render object; if yes, we render it, otherwise we just return the original image. On button click, we call the api helper from the Api.js file, passing the endpoint name and the required parameters. On successful execution, we set the state render object and show the image with the corresponding filter.

applyEffect(effect) {
   api("apply_filter", {
       type: effect,
       data: this.props.image_data
   }).then((data) => {
       const render = this.state.render;
       render[effect] = data;
       this.setState({render});
   });
}

Let's look into the Api.js file in utils:

import fetch from 'isomorphic-fetch';

const BASE_API_URL = "http://localhost:4000/";

// Generic POST helper used by the components above.
export function api(api_end_point, data) {
    return fetch(BASE_API_URL + api_end_point,
    {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(data)
    }).then((response) => {
        return response.json();
    });
}

Here we call our custom API using the fetch API from the isomorphic-fetch module.

Note: To read more about the module, check this out https://github.com/matthew-andrews/isomorphic-fetch

In ImageOps.jsx we have created two grids

{this.state.image_data &&
<Grid item md={12}>
    <CardHeader title="Captured Image">
    </CardHeader>
    <img src={this.state.image_data} alt="" height="300px"/>
</Grid>}
<Grid item xs={12}>
    <Card>
        <CardContent>
            <Typography variant="h6" color="textPrimary" component="h6">
                IMAGE SMOOTH FILTERS
            </Typography>
            <ImageFilters image_data={this.state.image_data} type="smoothing_effects" />
        </CardContent>
    </Card>
</Grid>

The first Grid displays the captured image, which corresponds to image_data in our state. The second Grid spans 12 columns and contains the ImageFilters component, to which we pass two props: image_data and the type of effect.

We are now done with the smoothing filters; let's code the UI for the other two.

Thresholding filters:

Thresholding is used in image segmentation, which in turn is useful for information extraction. Many processes use it to produce and analyse binary images: each pixel becomes either white or black, and the deciding factor for which pixel becomes which depends on the algorithm. We will use some of these algorithms here.

Information extraction tasks like reading numbers from an image, identifying objects, or reading text from an image all use thresholding as one step.
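At its core, simple (binary) thresholding is one comparison per pixel. A sketch on a flat single-channel pixel array (illustrative only; OpenCV will do this for us in Part 3):

// Pixels above the threshold t become white (255), the rest black (0).
function simpleThreshold(pixels, t = 127) {
    return pixels.map((p) => (p > t ? 255 : 0));
}

console.log(simpleThreshold([12, 130, 250, 90])); // [0, 255, 255, 0]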

The UI code will mostly stay the same, with some state updates.

In our ImageFilter.jsx component we will add the threshold effects to the state:

this.state = {
    smoothing_effects: [
        {label: "Blur", key: "blur"},
        {label: "Gaussian Blur", key: "gaussian_blur"},
        {label: "Median Blur", key: "median_blur"},
        {label: "Bilateral Filter", key: "bilateral_filter"},
    ],
    threshold_effects: [
        {label: "Simple Threshold", key: "simple_threshold"},
        {label: "Adaptive Threshold", key: "adaptive_threshold"},
        {label: "Otsu's Threshold", key: "otasu_threshold"},
    ],
    render: {}
}

and in ImageOps.jsx we will add a new grid for threshold filters.

<Grid item xs={12}>
    <Card>
        <CardContent>
            <Typography variant="h6" color="textPrimary" component="h6">
                THRESHOLDING FILTERS
            </Typography>
            <ImageFilters image_data={this.state.image_data} type="threshold_effects" />
        </CardContent>
    </Card>
</Grid>

Finding Contours:

A contour is a boundary drawn over a shape in the image that follows the same color or intensity. We will generate a binary threshold image, try to find contours (boundaries around objects), and then draw these boundaries over the image. Contours can be used in a variety of processes; counting objects is one of them, and there are plenty more. They help in filtering a desired object out of a set of multiple objects. So let's check some of their power.

At the UI level it's the same as for the threshold filters, i.e., first add the contour state in ImageFilter.jsx:

this.state = {
    smoothing_effects: [
        {label: "Blur", key: "blur"},
        {label: "Gaussian Blur", key: "gaussian_blur"},
        {label: "Median Blur", key: "median_blur"},
        {label: "Bilateral Filter", key: "bilateral_filter"},
    ],
    threshold_effects: [
        {label: "Simple Threshold", key: "simple_threshold"},
        {label: "Adaptive Threshold", key: "adaptive_threshold"},
        {label: "Otsu's Threshold", key: "otasu_threshold"},
    ],
    contour_effects: [
        {label: "Find all contours", key: "find_all_contours"},
        {label: "Find filtered contours", key: "find_filtered_contours"},
    ],
    render: {}
}

and then add the grid in ImageOps.jsx

<Grid item xs={12}>
    <Card>
        <CardContent>
            <Typography variant="h6" color="textPrimary" component="h6">
                CONTOUR FILTERS
            </Typography>
            <ImageFilters image_data={this.state.image_data} type="contour_effects" />
        </CardContent>
    </Card>
</Grid>

And this is how it will look.

The complete UI of the project initially looks as below.

We have now implemented the whole UI for our filters. All we need is a working API endpoint to apply the filters and see the magic.

Since this article is getting long, we will implement it in Part 3 of the Image Processing series, where we will incorporate OpenCV in Node.js and finally see how everything works end to end.

3 - Image Processing — OpenCV and Node.js (Part 3)

The final UI looks like this

Note: Before we start here is the link of Github repo of the code we will implement - https://github.com/overflowjs-com/image_app_opencv_api_part3

Let’s start,

We will create an endpoint /api/apply_filter, whose schema looks like:

Method: POST
Data-Type: application/json
Request payload: {
  data: "base64 encoded image data",
  type: "Type of filter to be applied"
}
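Once the server is running on port 4000 (configured below), the endpoint can be exercised from the command line; for example (the base64 payload here is truncated and illustrative):

curl -X POST http://localhost:4000/api/apply_filter \
  -H "Content-Type: application/json" \
  -d '{"type": "blur", "data": "data:image/jpeg;base64,/9j/4AAQ..."}'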

To jump-start the project we will use https://github.com/developit/express-es6-rest-api, a sample API starter kit with everything set up. Feel free to use any boilerplate with ES6 tooling.

1. Clone the repo:

git clone https://github.com/developit/express-es6-rest-api.git image_app_api_part_3

2. Install initial packages

cd image_app_api_part_3
npm i

3. Open config.json in the project root; change the port from 8080 to 4000 and the body limit to 10000kb.

4. Open the src > api > index.js file and replace it with the code below:

import { version } from '../../package.json';
import { Router } from 'express';
import facets from './facets';

export default ({ config, db }) => {
    let api = Router();

    // mount the facets resource
    api.use('/facets', facets({ config, db }));

    // perhaps expose some API metadata at the root
    api.get('/', (req, res) => {
        res.json({ version });
    });

    api.post('/apply_filter', (req, res) => {
        return res.json({ "msg": "hello"})
    });

    return api;
}

We only added a POST endpoint /apply_filter which, for now, returns { "msg": "hello" }.

5. Now let's install the OpenCV module opencv4nodejs. Please refer to https://www.npmjs.com/package/opencv4nodejs for the installation guide on Mac/Linux/Windows and make sure it's properly installed.

6. We will now add some smoothing filters. Go to src > api and create a folder named filters; inside it, create a new file named FilterProcessor.js:

import cv from 'opencv4nodejs';
import SmoothingFilters from './SmoothingFilters';
import ThresholdingFilters from './ThresholdingFilters';
import ContourFilters from './ContourFilters';

export default class FilterProcessor {

    constructor(data, type) {
        this.data = data;
        this.type = type;
    }

    getInputImage() {
        // Strip the image type prefix, leaving only the base64 payload
        const base64data = this.data.replace('data:image/jpeg;base64', '')
                                    .replace('data:image/png;base64', '');
        const buffer = Buffer.from(base64data, 'base64');
        const image = cv.imdecode(buffer);
        return image;
    }

    process() {
        let outputImage = null;

        if (["blur", "gaussian_blur", "median_blur", "bilateral_filter"].indexOf(this.type) > -1) {
            const filter = new SmoothingFilters(this.type, this.getInputImage());
            outputImage = filter.process();
        }

        const outBase64 = cv.imencode('.jpg', outputImage).toString('base64');
        const output = 'data:image/jpeg;base64,' + outBase64;
        return output;
    }
}

In this class, two values are passed to the constructor: data and type. Data is a base64-encoded image; type is the filter or action we need to perform on the image.

We have two functions:

1. getInputImage(), which converts the base64 image to a cv.Mat object and returns it.

2. process(), where the filter/action on the image happens and the output image is returned.

We check whether the filter type is one of ["blur", "gaussian_blur", "median_blur", "bilateral_filter"]. If so, we initiate the SmoothingFilters class, passing two values to its constructor (the filter type and the cv.Mat image), call process() on the filter, and save the output image.

7. Below is the code in SmoothingFilters.js, which applies the selected smoothing filter to the Mat object and returns the filtered image:

import cv from 'opencv4nodejs';

export default class SmoothingFilters {

    constructor(type, image) {
        this.type = type;
        this.image = image;
    }

    process() {
        let processedImage = null;

        if (this.type == "blur") {
            processedImage = cv.blur(this.image, new cv.Size(10, 10));
        } else if(this.type == "gaussian_blur") {
            processedImage = cv.gaussianBlur(this.image, new cv.Size(5, 5), 1.2, 1.2);
            // processedImage = this.image.gaussianBlur(new cv.Size(10, 10), 1.2);
        } else if(this.type == "median_blur") {
            processedImage = cv.medianBlur(this.image, 10);
        } else if(this.type == "bilateral_filter") {
            processedImage = this.image.bilateralFilter(9, 2.0, 2.0);
        }

        return processedImage;
    }
}

This smoothing filter class takes two parameters in its constructor: the type of filter and the source image Mat object.

Inside the process function, we process the image based on the filter type and return the processed image.

8. Let's now add the function to the router and test it. Go back to src > api > index.js:

import { version } from '../../package.json';
import { Router } from 'express';
import facets from './facets';
import FilterProcessor from './filters/FilterProcessor';

export default ({ config, db }) => {
    let api = Router();

    // mount the facets resource
    api.use('/facets', facets({ config, db }));

    // perhaps expose some API metadata at the root
    api.get('/', (req, res) => {
        res.json({ version });
    });

    api.post('/apply_filter', (req, res) => {

        // console.log(req.body);

        const data = req.body.data;
        const type = req.body.type;

        const processor = new FilterProcessor(data, type);

        return res.json({type: type, data: processor.process()});
    });

    return api;
}

Here we fetch the data and type from the request body (passed by our UI code), construct a FilterProcessor with them, and return the processed data and the type back as the response.

Here is how the UI looks. The effects are visible, although with some algorithms there is not much change to see. But each has responded properly.

9. Now let's add another set of filters for thresholding. We will create a new file, ThresholdingFilters.js, inside src > api > filters:

import cv from 'opencv4nodejs';

export default class ThresholdingFilters {

    constructor(type, image) {
        this.type = type;
        this.image = image;
    }

    process() {
        let processedImage = null;

        this.image = this.image.gaussianBlur(new cv.Size(5, 5), 1.2);
        this.image = this.image.cvtColor(cv.COLOR_BGR2GRAY);

        if (this.type == "simple_threshold") {
            processedImage = this.image.threshold(127, 255, cv.THRESH_BINARY);
        }

        if(this.type == "adaptive_threshold") {
            processedImage = this.image.adaptiveThreshold(255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2);
        }

        if(this.type == "otasu_threshold") {
            processedImage = this.image.threshold(0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU);
        }

        return processedImage;
    }
}

The constructor accepts the usual values: the cv.Mat object (image) and the type of filter. In process(), we have implemented various thresholding filters.

Something to note:

this.image = this.image.gaussianBlur(new cv.Size(5, 5), 1.2);
this.image = this.image.cvtColor(cv.COLOR_BGR2GRAY);

In the first of the two lines above, we blur the image using a Gaussian blur to reduce noise on the input image; in the second, we convert the image to grayscale.

Adaptive thresholding works with the 8UC1 format, which means it operates on a single channel, and a grayscale image has only that one intensity channel.

Next, based on the type, we apply the thresholding technique.

Note: For more details on parameters and thresholding techniques go to- https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html

In the FilterProcessor process function, we add another array of filter types and call the ThresholdingFilters class:

if (["simple_threshold", "adaptive_threshold", "otasu_threshold"].indexOf(this.type) > -1) {
    const filter = new ThresholdingFilters(this.type, this.getInputImage());
    outputImage = filter.process();
}

This is how the UI looks after integration.

10. The last implementation left is finding contours in the image. We will now add a new file, ContourFilters.js, inside the filters folder:

import cv from 'opencv4nodejs';

export default class ContourFilters {

    constructor(type, image) {
        this.type = type;
        this.image = image;
    }

    process() {
        this.image = this.image.gaussianBlur(new cv.Size(5, 5), 1.2);

        let grayImage = this.image.cvtColor(cv.COLOR_BGR2GRAY);
        grayImage = grayImage.adaptiveThreshold(255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2);

        if(this.type == "find_all_contours") {
            let contours = grayImage.findContours(cv.RETR_TREE, cv.CHAIN_APPROX_NONE, new cv.Point2(0, 0));
            const color = new cv.Vec3(41, 176, 218);

            contours = contours.sort((c0, c1) => c1.area - c0.area);
            const imgContours = contours.map((contour) => {
                return contour.getPoints();
            });

            this.image.drawContours(imgContours, -1, color, 2);
        }

        if(this.type == "find_filtered_contours") {
            let contours = grayImage.findContours(cv.RETR_LIST, cv.CHAIN_APPROX_NONE, new cv.Point2(0, 0));
            const color = new cv.Vec3(41, 176, 218);

            contours = contours.sort((c0, c1) => c1.area - c0.area);
            const imgContours = contours.map((contour) => {
                return contour.getPoints();
            });

            this.image.drawContours(imgContours, -1, color, 0);
        }

        return this.image;
    }
}

The implementation style remains the same as the other filters, i.e., a constructor and a process function; what varies is the process implementation. In the process function:

this.image = this.image.gaussianBlur(new cv.Size(5, 5), 1.2);
let grayImage = this.image.cvtColor(cv.COLOR_BGR2GRAY);
grayImage = grayImage.adaptiveThreshold(255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2);

We are blurring the image, converting it to grayscale and then thresholding it, to use it for contour finding.

There are two methods used in each branch, findContours() and drawContours(), which find the edges or boundaries and then draw them onto the main colored image. Note that contour finding is performed on the thresholded image, while the drawing happens on the colored image.

Note: Read about image contour below- https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours

Inside the FilterProcessor process function, we add:

if (["find_all_contours", "find_filtered_contours"].indexOf(this.type) > -1) {
    const filter = new ContourFilters(this.type, this.getInputImage());
    outputImage = filter.process();
}

The above produces the output

and we are done :)

Final words:

This series is enough to get you started on image processing and computation projects of any scale. We have used OpenCV, which is one of the most powerful image processing frameworks.

4 - Image Object Detection Using TensorFlow.js (Part 4)

In this part, we will build an image object detection system with TensorFlow.js, using a pre-trained model.

To start with, there are lots of ways to deploy TensorFlow in a webpage; one way is to include ml5.js (visit https://ml5js.org/). It's a wrapper around tf.js (TensorFlow.js) and p5.js, used for doing operations on HTML elements.

But we would like to keep the power on the backend, so that we can try running these models behind APIs, in backend processes, and so on.

Therefore, in the first half of the post we will create a UI using React.js and Material UI, and in the second half we will create an API in Node.js to power the UI.

Let’s start with building a sample React project.

FRONTEND PART:

If you have followed along with my previous articles, the React project will be fairly easy to build.

1. Open the terminal and run:

create-react-app image_classification_react_ui

This will create a React project to work with.

2. Let's install the required dependencies:

npm install @material-ui/core
npm install --save isomorphic-fetch es6-promise

Note: isomorphic-fetch is required to call the object detection API endpoint from React code.

3. Open the project in your favorite editor and let's create two folders:

  1. containers - this will contain a file, ImageOps.jsx, which has all the frontend UI code.

  2. utils - this will contain a file, Api.js, which is used to call the object detection endpoint.

└── src
   ├── containers
       ├── ImageOps.jsx
   ├── utils
       ├── Api.js
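Api.js is the same thin fetch wrapper we wrote in Part 2; for completeness, here is a sketch (assuming the Node.js API again runs on localhost:4000):

import fetch from 'isomorphic-fetch';

const BASE_API_URL = "http://localhost:4000/";

export function api(api_end_point, data) {
    return fetch(BASE_API_URL + api_end_point, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data)
    }).then((response) => response.json());
}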

Let's look into the ImageOps.jsx code and understand it.

import React from 'react';

import Container from '@material-ui/core/Container';
import Grid from '@material-ui/core/Grid';

import Card from '@material-ui/core/Card';
import CardContent from '@material-ui/core/CardContent';
import Typography from '@material-ui/core/Typography';
import Button from '@material-ui/core/Button';
import { red } from '@material-ui/core/colors';

import {api} from '../utils/Api';

import Table from '@material-ui/core/Table';
import TableBody from '@material-ui/core/TableBody';
import TableCell from '@material-ui/core/TableCell';
import TableHead from '@material-ui/core/TableHead';
import TableRow from '@material-ui/core/TableRow';
import Paper from '@material-ui/core/Paper';
import CircularProgress from '@material-ui/core/CircularProgress';


export default class ImageOps extends React.Component {

   constructor(props) {
       super(props);

       this.state = {
           image_object: null,
           image_object_details: {},
           active_type: null
       }
   }

   updateImageObject(e) {
       const file = e.target.files[0];
       const reader = new FileReader();

       reader.readAsDataURL(file);
       reader.onload = () => {
           this.setState({image_object: reader.result, image_object_details: {}, active_type: null});
       };
   }

   processImageObject(type) {

       this.setState({active_type: type}, () => {

           if(!this.state.image_object_details[this.state.active_type]) {
               api("detect_image_objects", {
                   type,
                   data: this.state.image_object
               }).then((response) => {

                   const filtered_data = response;
                   const image_details = this.state.image_object_details;

                   image_details[filtered_data.type] = filtered_data.data;

                   this.setState({image_object_details: image_details });
               });
           }
       });
   }

   render() {
       return (
           <Container maxWidth="md">
               <Grid container spacing={2}>
                   <Grid item xs={12}>
                       <CardContent>
                           <Typography variant="h4" color="textPrimary" component="h4">
                               Object Detection Tensorflow
                           </Typography>
                       </CardContent>
                   </Grid>
                   <Grid item xs={12}>
                       {this.state.image_object &&
                           <img src={this.state.image_object} alt="" height="500px"/>
                       }
                   </Grid>
                   <Grid item xs={12}>
                       <Card>
                           <CardContent>
                               <Button variant="contained"
                                   component='label' // <-- Just add me!
                                   >
                                   Upload Image
                                   <input accept="image/jpeg" onChange={(e) => this.updateImageObject(e)} type="file" style={{ display: 'none' }} />
                               </Button>
                           </CardContent>
                       </Card>
                   </Grid>
                   <Grid item xs={3}>
                       <Grid container justify="center" spacing={3}>
                           <Grid item >
                               {this.state.image_object && <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
                                   Get objects with ImageNet
                               </Button>}
                           </Grid>
                           <Grid item>
                               {this.state.image_object && <Button onClick={() => this.processImageObject("coco-ssd")} variant="contained" color="secondary">
                                   Get objects with Coco SSD
                               </Button>}
                           </Grid>
                       </Grid>
                   </Grid>
                   <Grid item xs={9}>
                       <Grid container justify="center">
                           {this.state.active_type && this.state.image_object_details[this.state.active_type] &&
                               <Grid item xs={12}>
                                   <Card>
                                       <CardContent>
                                           <Typography variant="h4" color="textPrimary" component="h4">
                                               {this.state.active_type.toUpperCase()}
                                           </Typography>
                                           <ImageDetails type={this.state.active_type} data={this.state.image_object_details[this.state.active_type]}></ImageDetails>
                                       </CardContent>
                                   </Card>
                               </Grid>
                           }
                           {this.state.active_type && !this.state.image_object_details[this.state.active_type] &&
                               <Grid item xs={12}>
                                   <CircularProgress
                                       color="secondary"
                                   />
                               </Grid>
                           }
                       </Grid>
                   </Grid>
               </Grid>
           </Container>
       )
   }
}

class ImageDetails extends React.Component {

   render() {

       console.log(this.props.data);

       return (
           <Grid item xs={12}>
               <Paper>
                   <Table>
                   <TableHead>
                       <TableRow>
                       <TableCell>Objects</TableCell>
                       <TableCell align="right">Probability</TableCell>
                       </TableRow>
                   </TableHead>
                   <TableBody>
                       {this.props.data.map((row) => {
                           if (this.props.type === "imagenet") {
                               return (
                                   <TableRow key={row.className}>
                                       <TableCell component="th" scope="row">
                                       {row.className}
                                       </TableCell>
                                       <TableCell align="right">{row.probability.toFixed(2)}</TableCell>
                                   </TableRow>
                               )
                           } else if(this.props.type === "coco-ssd") {
                               return (
                                   <TableRow key={row.class}>
                                       <TableCell component="th" scope="row">
                                       {row.class}
                                       </TableCell>
                                       <TableCell align="right">{row.score.toFixed(2)}</TableCell>
                                   </TableRow>
                               )
                           }
                       })}
                   </TableBody>
                   </Table>
               </Paper>
           </Grid>
       )
   }
}

Note: Here is the GitHub repo link for the above - https://github.com/overflowjs-com/image_object_detction_react_ui . If you find the above difficult to understand, I highly recommend reading Part 1 and Part 2.

In render, we have created a Grid of rows, with the first row containing the heading.

The second contains the image to display:

<Grid item xs={12}>
    {this.state.image_object &&
        <img src={this.state.image_object} alt="" height="500px"/>}
</Grid>

Here we display the image if one has been uploaded, i.e., if the image object is available in the state.

The next grid contains a button to upload a file and save the uploaded file to the current state.

<Grid item xs={12}>
    <Card>
        <CardContent>
            <Button variant="contained"
                component='label' // <-- Just add me!
                >
                Upload Image
                <input accept="image/jpeg" onChange={(e) => this.updateImageObject(e)} type="file" style={{ display: 'none' }} />
            </Button>
        </CardContent>
    </Card>
</Grid>

On the upload button's change event, we call the updateImageObject function to store the currently selected image in the state.

updateImageObject(e) {
      const file = e.target.files[0];
      const reader = new FileReader();
     
       reader.readAsDataURL(file);
      reader.onload = () => {
          this.setState({image_object: reader.result, image_object_details: {}, active_type: null
          });
      };
}

In the above code, we read the current file object from the file input and load its data into the current state. As a new image is uploaded, we reset image_object_details and active_type so that fresh operations can be applied to the uploaded image.

Below is the next grid, which contains the code for the two buttons, one for each model.

<Grid item xs={3}>
    <Grid container justify="center" spacing={3}>
        <Grid item >
            {this.state.image_object && <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
                Get objects with ImageNet
            </Button>}
        </Grid>
        <Grid item>
            {this.state.image_object && <Button onClick={() => this.processImageObject("coco-ssd")} variant="contained" color="secondary">
                Get objects with Coco SSD
            </Button>}
        </Grid>
    </Grid>
</Grid>
<Grid item xs={9}>
    <Grid container justify="center">
        {this.state.active_type && this.state.image_object_details[this.state.active_type] &&
            <Grid item xs={12}>
                <Card>
                    <CardContent>
                        <Typography variant="h4" color="textPrimary" component="h4">
                            {this.state.active_type.toUpperCase()}
                        </Typography>
                        <ImageDetails type={this.state.active_type} data={this.state.image_object_details[this.state.active_type]}></ImageDetails>
                    </CardContent>
                </Card>
            </Grid>
        }
        {this.state.active_type && !this.state.image_object_details[this.state.active_type] &&
            <Grid item xs={12}>
                <CircularProgress
                    color="secondary"
                />
            </Grid>
        }
    </Grid>
</Grid>

Here we divide the 12-column parent grid into two parts: a 3-column grid and a 9-column grid.

The first grid (3 columns) contains two grid items holding the two buttons:

<Grid container justify="center" spacing={3}>
   <Grid item>
       {this.state.image_object && <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
           Get objects with ImageNet
       </Button>}
   </Grid>
   <Grid item>
       {this.state.image_object && <Button onClick={() => this.processImageObject("coco-ssd")} variant="contained" color="secondary">
           Get objects with Coco SSD
       </Button>}
   </Grid>
</Grid>

We run image detection with both the ImageNet and Coco SSD models so we can compare their outputs.

Each button has an onClick event handler that calls the function processImageObject(), which takes the name of the model as a parameter.

processImageObject(type) {
    this.setState({active_type: type}, () => {
        api("detect_image_objects", {
            type,
            data: this.state.image_object
        }).then((response) => {
            const image_details = this.state.image_object_details;
            image_details[response.type] = response.data;
            this.setState({image_object_details: image_details});
        });
    });
}

We set the state key active_type to the currently selected model.

processImageObject takes the current image from the state and sends it to the API function (shown later), which calls the detect_image_objects endpoint; the response is then processed and shown in the UI.

The response from the API is stored in the state under image_object_details.

Each API response is stored under the type of the model that produced it (imagenet or coco-ssd); the two response shapes are sketched below.
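For reference, the two models return predictions in different shapes, which is why the UI branches on the model type. A rough sketch of each response's data field (field names follow the tfjs-models documentation; the values here are made up):

// imagenet (MobileNet classify) output - illustrative values only
[
    { className: "Labrador retriever", probability: 0.87 },
    { className: "golden retriever", probability: 0.09 }
]

// coco-ssd (detect) output - bbox is [x, y, width, height] in pixels
[
    { class: "person", score: 0.92, bbox: [12, 30, 140, 260] }
]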

These buttons are only visible when image_object is present in the state:

{
    this.state.image_object &&
    <Button onClick={() => this.processImageObject("imagenet")} variant="contained" color="primary">
        Get objects with ImageNet
    </Button>
}

Below is the other grid we created, which displays the results:

<Grid item xs={9}>
   <Grid container justify="center">
       {this.state.active_type && this.state.image_object_details[this.state.active_type] &&
           <Grid item xs={12}>
               <Card>
                   <CardContent>
                       <Typography variant="h4" color="textPrimary" component="h4">
                           {this.state.active_type.toUpperCase()}
                       </Typography>
                       <ImageDetails type={this.state.active_type} data={this.state.image_object_details[this.state.active_type]}></ImageDetails>
                   </CardContent>
               </Card>
           </Grid>
       }
       {this.state.active_type && !this.state.image_object_details[this.state.active_type] &&
           <Grid item xs={12}>
               <CircularProgress color="secondary" />
           </Grid>
       }
   </Grid>
</Grid>

Here we check whether a model (active_type) is currently selected; if the API has returned details for it, we show the object details, otherwise a progress spinner. For the details we created a component called ImageDetails.

Let's look at the ImageDetails component code, which is easy to understand:

class ImageDetails extends React.Component {

   render() {
       return (
          <Grid item xs={12}>
              <Paper>
                  <Table>
                  <TableHead>
                      <TableRow>
                      <TableCell>Objects</TableCell>
                      <TableCell align="right">Probability</TableCell>
                      </TableRow>
                  </TableHead>
                  <TableBody>
                      {this.props.data.map((row) => {
                          if (this.props.type === "imagenet") {
                              return (
                                  <TableRow key={row.className}>
                                      <TableCell component="th" scope="row">
                                      {row.className}
                                      </TableCell>
                                      <TableCell align="right">{row.probability.toFixed(2)}</TableCell>
                                  </TableRow>
                              )
                          } else if (this.props.type === "coco-ssd") {
                              return (
                                  <TableRow key={row.class}>
                                      <TableCell component="th" scope="row">
                                      {row.class}
                                      </TableCell>
                                      <TableCell align="right">{row.score.toFixed(2)}</TableCell>
                                  </TableRow>
                              )
                          }
                          return null; // ignore unknown model types
                          })
                      }
                  </TableBody>
                  </Table>
              </Paper>
           </Grid>
      )
  }
}

This component shows the details received from the model: the name of each detected object and its probability (or score). Based on the type of model we are working with, it renders one of two different outputs, both handled in this class.

4. The last step is to write the API.js wrapper that makes the call to the server.

import fetch from 'isomorphic-fetch';

const BASE_API_URL = "http://localhost:4000/api/";

export function api(api_end_point, data) {

    return fetch(BASE_API_URL + api_end_point,
        {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json'
            },
            body: JSON.stringify(data)
        }).then((response) => {
            return response.json();
        });
}

In this sample code, we provide a thin wrapper over the fetch API: the function takes an API endpoint name and a data payload, constructs the complete URL, and returns the parsed JSON response from the API.
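For instance, the call we make in processImageObject boils down to this (a usage sketch; imageBase64 stands in for the data URL held in the state):

api("detect_image_objects", { type: "imagenet", data: imageBase64 })
    .then((response) => console.log(response.type, response.data));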

With that, the final UI shows the heading, the image preview, the upload button, and the two model buttons with their results table.

BACKEND PART:


Now that we have our UI in place, let's create the API endpoint using TensorFlow.js. It will be served at:

http://localhost:4000/api/detect_image_objects

1. The first step is to choose a boilerplate that uses express.js and lets us just write a route and the object-detection logic. We are using https://github.com/developit/express-es6-rest-api for this tutorial. Let's clone it:

git clone https://github.com/developit/express-es6-rest-api image_detection_tensorflow_api

2. Now install all dependencies by running

cd image_detection_tensorflow_api
npm install

3. Go to config.json in the project root and change port to 4000 and bodyLimit to "10000kb", as sketched below.
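The edited config.json should look roughly like this (the other keys ship with the boilerplate; only port and bodyLimit change, and the larger body limit is needed because we post base64-encoded images):

{
    "port": 4000,
    "bodyLimit": "10000kb",
    "corsHeaders": ["Link"]
}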

Note: We will use the pre-trained imagenet and coco-ssd models. Finding multiple objects in an image is tedious work; even though ImageNet is famous for detecting a single object per image (animals / other objects), both models are trained on very large, diverse datasets. So if you don't get your object right, don't worry 😅.

4. To start with TensorFlow, we need to update the Node version if you are using an old one. Once your Node version is fine, run the command below to install https://github.com/tensorflow/tfjs-models:

npm install @tensorflow/tfjs-node

Note: You can install tfjs-node for your system (Linux/Windows/Mac) as described at https://www.npmjs.com/package/@tensorflow/tfjs-node

5.     Let’s now install both models that we are going to use, so run

npm install @tensorflow-models/mobilenet --save
npm install @tensorflow-models/coco-ssd --save

6. We also need to install the module below, as a required dependency:

npm install base64-to-uint8array --save

7. Now go to index.js under the src > api folder and create a new endpoint:

api.post('/detect_image_objects', async (req, res) => {
  const data = req.body.data;
  const type = req.body.type;
  const objectDetect = new ObjectDetectors(data, type);
  const results = await objectDetect.process();
  res.json(results);
});

Here we instantiate the ObjectDetectors class, passing the two arguments received from the UI: the base64-encoded image and the type of model. A variant with error handling is sketched below.
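The snippet above has no error handling; a variant of my own (not part of the boilerplate or the original repo) could return a 500 when decoding or inference fails:

api.post('/detect_image_objects', async (req, res) => {
    try {
        const objectDetect = new ObjectDetectors(req.body.data, req.body.type);
        const results = await objectDetect.process();
        res.json(results);
    } catch (err) {
        // e.g. malformed base64 input or an unsupported image format
        res.status(500).json({ error: err.message });
    }
});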

8. Now let's create the ObjectDetectors class. Go to the src > api folder and create an object_detector folder. Inside object_detector, create a new file ObjectDetectors.js:

const tf = require('@tensorflow/tfjs-node');

const cocossd = require('@tensorflow-models/coco-ssd');
const mobilenet = require('@tensorflow-models/mobilenet');

import toUint8Array from 'base64-to-uint8array';

export default class ObjectDetectors {

    constructor(image, type) {
        this.inputImage = image;
        this.type = type;
    }

    async loadCocoSsdModel() {
        const model = await cocossd.load({
            base: 'mobilenet_v2'
        });
        return model;
    }

    async loadMobileNetModel() {
        const model = await mobilenet.load({
            version: 1,
            alpha: 1.0 // supported values are 0.25, 0.50, 0.75 and 1.0
        });
        return model;
    }

    getTensor3dObject(numOfChannels) {
        // Strip the data-URL prefix (including the trailing comma) to get raw base64.
        const imageData = this.inputImage.replace('data:image/jpeg;base64,', '')
                            .replace('data:image/png;base64,', '');

        const imageArray = toUint8Array(imageData);

        const tensor3d = tf.node.decodeJpeg(imageArray, numOfChannels);

        return tensor3d;
    }

    async process() {
        let predictions = null;
        const tensor3D = this.getTensor3dObject(3);

        if (this.type === "imagenet") {
            const model = await this.loadMobileNetModel();
            predictions = await model.classify(tensor3D);
        } else {
            const model = await this.loadCocoSsdModel();
            predictions = await model.detect(tensor3D);
        }

        tensor3D.dispose();

        return {data: predictions, type: this.type};
    }
}

We have a constructor that takes two parameters: the base64-encoded image and the type of model to run.

The process function is then called, which in turn calls getTensor3dObject(3).

Note: Here 3 is the number of channels; in the UI we limited uploads to JPEG, which is a 3-channel format. We are not processing 4-channel images (PNG), but you could support them easily by sending the image type in the API call and adjusting the function below on the backend.
getTensor3dObject(numOfChannels) {
    const imageData = this.inputImage.replace('data:image/jpeg;base64,', '')
        .replace('data:image/png;base64,', '');
    const imageArray = toUint8Array(imageData);
    const tensor3d = tf.node.decodeJpeg(imageArray, numOfChannels);
    return tensor3d;
}

In this function, we strip the data-URL prefix from the base64 string, convert it to a byte array, and build our tensor3d from it.

Our pre-trained models consume either a tensor3d object, an <img> HTML tag, or an HTML video tag. Since we are doing this from a Node.js API, we have a base64 image, which must be converted to a tensor3d object.

Thankfully, tensorflow.js provides a function for exactly this: decodeJpeg.

TensorFlow provides other functions for the same job; you can find more details at https://js.tensorflow.org/api_node/1.2.7/#node.decodeJpeg

decodeJpeg converts the ArrayBuffer of our 3-channel image into a tensor3d object.
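If you later want to accept PNG uploads too, tfjs-node also provides tf.node.decodeImage, which detects the format from the encoded bytes. A minimal sketch of a format-agnostic getTensor3dObject (my own variation, not part of the original repo):

getTensor3dObject(numOfChannels) {
    // Strip the data-URL prefix whether the upload was a JPEG or a PNG.
    const imageData = this.inputImage.replace(/^data:image\/(jpeg|png);base64,/, '');
    const imageArray = toUint8Array(imageData);
    // decodeImage inspects the encoded bytes, so it handles JPEG and PNG alike.
    return tf.node.decodeImage(imageArray, numOfChannels);
}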

if (this.type === "imagenet") {
    const model = await this.loadMobileNetModel();
    predictions = await model.classify(tensor3D);
} else {
    const model = await this.loadCocoSsdModel();
    predictions = await model.detect(tensor3D);
}

Based on the type of model picked, we load the model inside the API call. You could instead load the models once when the API server starts, but for this blog I am loading them as the API is called, so the API may take some time to respond. One way to avoid paying that cost on every request is sketched below.
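A simple improvement, assuming the same require calls as in ObjectDetectors.js, is a module-level cache so each model is loaded at most once per process (a sketch, not part of the original repo):

// Module-level cache: maps model type to a promise of the loaded model.
const modelCache = {};

function getModel(type) {
    if (!modelCache[type]) {
        modelCache[type] = type === 'imagenet'
            ? mobilenet.load({ version: 1, alpha: 1.0 })
            : cocossd.load({ base: 'mobilenet_v2' });
    }
    // Caching the promise means concurrent requests share a single load.
    return modelCache[type];
}

process() would then call await getModel(this.type) instead of the two load helpers.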

Below are the outputs I got so far.

Output of imagenet: it provides the name of each object and its probability; three objects were identified with imagenet.

COCO-SSD MODEL OUTPUT:

If you read more about coco-ssd, you will see it can identify multiple objects even if they are similar, along with the coordinates of the rectangle where each object is located.

coco-ssd model object detection

Here you can see it has identified 6 persons, each with its position given as a rectangle. You can use these coordinates for any purpose, as they tell you both the object name and the object's location.

You can use any image library to draw these rectangles and build some cool image-effect applications around these details; see the canvas sketch below.
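For example, in the browser you could overlay the coco-ssd boxes on the image with a plain canvas. This is a rough sketch (drawBoxes is a hypothetical helper; predictions is the coco-ssd array returned by our API):

function drawBoxes(img, predictions) {
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    ctx.strokeStyle = 'red';
    ctx.fillStyle = 'red';
    ctx.font = '16px sans-serif';
    predictions.forEach(({ bbox: [x, y, width, height], class: label, score }) => {
        ctx.strokeRect(x, y, width, height);
        // Draw the label just above the box, or inside it near the top edge.
        ctx.fillText(label + ' (' + score.toFixed(2) + ')', x, y > 16 ? y - 4 : y + 16);
    });
    return canvas;
}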

You can also try my tutorials on Cloudinary and OpenCV with React.js and Node.js from the previous articles, and use that knowledge to build cool stuff.

Thanks for reading



