Node.js - Setting Up & Using Google Cloud Video Intelligence API


Do you need to annotate the content of a video automatically? Let's say you have a service that allows users to upload videos and you want to know the content of each video. It would take a lot of time and effort if the process were done manually by watching each video. Fortunately, there are services that can annotate videos and extract metadata. By doing so, it becomes possible to make the videos searchable.

If you need that kind of service, you can consider Google Cloud Video Intelligence. It works by labelling a video with multiple labels using a library of 20,000 labels. It can extract metadata for indexing your video content, so that you can easily organize and search it. Other features include shot detection to distinguish scene changes and integration with Google Cloud Storage.

Preparation
  1. Create or select a Google Cloud project

A Google Cloud project is required to use this service. Open the Google Cloud console (https://console.cloud.google.com/project), then create a new project or select an existing project.

  2. Enable billing for the project

Like other cloud platforms, Google requires you to enable billing for your project. If you haven't set up billing, open the billing page (https://console.cloud.google.com/billing).

  3. Enable Video Intelligence API

To use an API, you must enable it first. Open the API enablement page (https://console.cloud.google.com/flows/enableapi?apiid=videointelligence.googleapis.com) to enable the Video Intelligence API.

  4. Set up service account for authentication

As for authentication, you need a service account. Create a new one on the service account management page (https://console.cloud.google.com/iam-admin/serviceaccounts) and download the credentials, or use a service account you have already created.

In your .env file, you have to add a new variable because it's needed by the library we are going to use.

GOOGLE_APPLICATION_CREDENTIALS=/path/to/the/credentials

In addition, you also need to add GOOGLE_CLOUD_PROJECT_ID to your .env as well.
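After both steps, your .env file should contain something like this (the values shown are placeholders for your own credentials path and project ID):

GOOGLE_APPLICATION_CREDENTIALS=/path/to/the/credentials
GOOGLE_CLOUD_PROJECT_ID={YOUR_PROJECT_ID}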

Dependencies

This tutorial uses @google-cloud/video-intelligence and also dotenv for loading environment variables. Add the following dependencies to your package.json and run npm install:

 "@google-cloud/video-intelligence": "~1.5.0" "dotenv": "~4.0.0"

Code

Below is a code example of how to annotate a video with the Google Video Intelligence API. The video to be analyzed needs to be uploaded to Google Cloud Storage first. You can read our tutorial about how to upload files to Google Cloud Storage using Node.js.

// Loads environment variables
require('dotenv').config();

// Imports the Google Cloud Video Intelligence library
const videoIntelligence = require('@google-cloud/video-intelligence');

// Creates a client
const client = new videoIntelligence.VideoIntelligenceServiceClient({
  projectId: process.env.GOOGLE_CLOUD_PROJECT_ID,
});

// URI of the video you want to analyze
const gcsUri = 'gs://{YOUR_BUCKET_NAME}/{PATH_TO_FILE}';

// Request config
const request = {
  inputUri: gcsUri,
  features: ['LABEL_DETECTION'],
};

// Execute request
client
  .annotateVideo(request)
  .then(results => {
    console.log('Waiting for service to analyze the video. This may take a few minutes.');
    return results[0].promise();
  })
  .then(results => {
    console.log(JSON.stringify(results, null, 2));
    // Gets annotations for video
    const annotations = results[0].annotationResults[0];
    // Gets labels for video from its annotations
    const labels = annotations.segmentLabelAnnotations;
    labels.forEach(label => {
      console.log(`Label ${label.entity.description} occurs at:`);
      label.segments.forEach(segment => {
        const _segment = segment.segment;
        _segment.startTimeOffset.seconds = _segment.startTimeOffset.seconds || 0;
        _segment.startTimeOffset.nanos = _segment.startTimeOffset.nanos || 0;
        _segment.endTimeOffset.seconds = _segment.endTimeOffset.seconds || 0;
        _segment.endTimeOffset.nanos = _segment.endTimeOffset.nanos || 0;
        console.log(
          `\tStart: ${_segment.startTimeOffset.seconds}` +
            `.${(_segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
        );
        console.log(
          `\tEnd: ${_segment.endTimeOffset.seconds}.` +
            `${(_segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
        );
        console.log(`Confidence level: ${segment.confidence}`);
      });
    });
  })
  .catch(err => {
    console.error(`ERROR: ${err}`);
  });

It may take a few minutes to get the annotation results, depending on video length. annotateVideo returns a promise that resolves to an array whose first element is a long-running operation exposing a promise() method. So you need to wait until the process is done by calling results[0].promise(). Meanwhile, you can add a console.log to show that the annotation is in progress.
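If you prefer async/await, the same flow can be written like this (a minimal sketch reusing the client and request objects defined above):

// Minimal async/await sketch reusing client and request from the example above
async function annotate() {
  // The resolved array's first element is the long-running operation
  const [operation] = await client.annotateVideo(request);
  console.log('Waiting for service to analyze the video. This may take a few minutes.');
  // Resolves once the annotation has finished
  const [result] = await operation.promise();
  return result.annotationResults[0];
}

annotate()
  .then(annotations => console.log(annotations.segmentLabelAnnotations))
  .catch(console.error);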

Below is the result format. It's an array of three objects. The first object contains the annotation results - this is what we need to parse in order to understand what the video is about. The second object contains the progress percentage, the execution start time, and the time the progress was last updated.

[
  {
    "annotationResults": [
      { }
    ]
  },
  {
    "annotationProgress": [
      {
        "inputUri": "/{YOUR_BUCKET_NAME}/{PATH_TO_FILE}",
        "progressPercent": 100,
        "startTime": {
          "seconds": "1546439976",
          "nanos": 559663000
        },
        "updateTime": {
          "seconds": "1546440001",
          "nanos": 104220000
        }
      }
    ]
  },
  { }
]

annotationResults is an array of annotation objects; each label inside its segmentLabelAnnotations looks like this:

{
  "entity": {
    "entityId": "/m/01350r",
    "description": "performance art",
    "languageCode": "en-US"
  },
  "categoryEntities": [
    {
      "entityId": "/m/02jjt",
      "description": "entertainment",
      "languageCode": "en-US"
    }
  ],
  "segments": [
    {
      "segment": {
        "startTimeOffset": {},
        "endTimeOffset": {
          "seconds": "269",
          "nanos": 720000000
        }
      },
      "confidence": 0.666665256023407
    }
  ]
}

Each of these objects represents a label along with the video segments that justify why the label was given. There is also a confidence value that shows how certain the service is when assigning a label to a segment.
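For instance, before indexing, you could keep only the labels that have at least one reasonably confident segment (a minimal sketch; the 0.5 threshold is an arbitrary choice):

// Keep only labels with at least one segment above the confidence threshold
const confidentLabels = labels.filter(label =>
  label.segments.some(segment => segment.confidence >= 0.5)
);
console.log(confidentLabels.map(label => label.entity.description));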

That's how to use Google Cloud Video Intelligence in Node.js. If you want to analyze images, you can read the tutorial about how to use Google Cloud Vision in Node.js.

Learn to generate video previews by using FFmpeg and Node.js


In this Node.js tutorial, we'll learn to generate video previews and thumbnails by using FFmpeg and Node.js.

Every website that deals with video streaming in any way has a way of showing a short preview of a video without actually playing it. YouTube, for instance, plays a 3- to 4-second excerpt from a video whenever users hover over its thumbnail. Another popular way of creating a preview is to take a few frames from a video and make a slideshow.

We are going to take a closer look at how to implement both of these approaches.

How to manipulate a video with Node.js

Manipulating a video with Node.js itself would be extremely hard, so instead we are going to use the most popular video manipulation tool: FFmpeg. In the documentation, we read:

FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created. It supports the most obscure ancient formats up to the cutting edge. No matter if they were designed by some standards committee, the community or a corporation. It is also highly portable: FFmpeg compiles, runs, and passes our testing infrastructure FATE across Linux, Mac OS X, Microsoft Windows, the BSDs, Solaris, etc. under a wide variety of build environments, machine architectures, and configurations.

Boasting such an impressive resume, FFmpeg is the perfect choice for video manipulation done from inside of the program, able to run in many different environments.

FFmpeg is accessible through the CLI, but the framework can also be controlled easily through the node-fluent-ffmpeg library. The library, available on npm, generates the FFmpeg commands for us and executes them. It also implements many useful features, such as tracking the progress of a command and error handling.

Although the commands can get pretty complicated quickly, there’s very good documentation available for the tool. Also, in our examples, there won’t be anything too fancy going on.

The installation process is pretty straightforward if you are on a Mac or Linux machine. For Windows, please refer here. The fluent-ffmpeg library depends on the ffmpeg executable being either on our $PATH (so it is callable from the CLI like: ffmpeg ...) or on our providing the paths to the executables through environment variables.

An example .env file:

FFMPEG_PATH="D:/ffmpeg/bin/ffmpeg.exe"
FFPROBE_PATH="D:/ffmpeg/bin/ffprobe.exe"

Both paths have to be set if they are not already available in our $PATH.
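fluent-ffmpeg picks these variables up on its own, but you can also point it at the executables explicitly. A minimal sketch:

import ffmpeg from 'fluent-ffmpeg';

// Point fluent-ffmpeg at the executables when they are not on $PATH
if (process.env.FFMPEG_PATH) {
  ffmpeg.setFfmpegPath(process.env.FFMPEG_PATH);
}
if (process.env.FFPROBE_PATH) {
  ffmpeg.setFfprobePath(process.env.FFPROBE_PATH);
}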

Creating a preview

Now that we know what tools to use for video manipulation from within Node.js runtime, let’s create the previews in the formats mentioned above. I will be using Childish Gambino’s “This is America” video for testing purposes.

Video fragment

The video fragment preview is pretty straightforward to create; all we have to do is slice the video at the right moment. In order for the fragment to be a meaningful and representative sample of the video content, it is best if we get it from a point somewhere around 25–75 percent of the total length of the video. For this, of course, we must first get the video duration.

In order to get the duration of the video, we can use ffprobe, which comes with FFmpeg. ffprobe is a tool that lets us get the metadata of a video, among other things.

Let’s create a helper function that gets the duration for us:

import ffmpeg from 'fluent-ffmpeg';

export const getVideoInfo = (inputPath: string) => {
  return new Promise((resolve, reject) => {
    return ffmpeg.ffprobe(inputPath, (error, videoInfo) => {
      if (error) {
        return reject(error);
      }

      // The format object holds container-level metadata
      const { duration, size } = videoInfo.format;

      return resolve({
        size,
        durationInSeconds: Math.floor(duration),
      });
    });
  });
};

The ffmpeg.ffprobe method calls the provided callback with the video metadata. The videoInfo is an object containing many useful properties, but we are interested only in the format object, in which there is the duration property. The duration is provided in seconds.
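Calling the helper then looks like this (a usage sketch; the file path is a placeholder and the call must run inside an async function):

const { durationInSeconds, size } = await getVideoInfo('video.mp4');
console.log(`Duration: ${durationInSeconds}s, size: ${size} bytes`);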

Now we can create a function for creating the preview.

Before we do that, let's break down the FFmpeg command used to create the fragment:

ffmpeg -ss 146 -i video.mp4 -y -an -t 4 fragment-preview.mp4
  • -ss 146: Start video processing at the 146-second mark of the video (146 is just a placeholder here, our code will randomly generate the number of seconds)
  • -i video.mp4: The input file path
  • -y: Overwrite any existing files while generating the output
  • -an: Remove audio from the generated fragment
  • -t 4: The duration of the fragment (in seconds)
  • fragment-preview.mp4: The path of the output file

Now that we know what the command will look like, let’s take a look at the Node code that will generate it for us.

const createFragmentPreview = async (
  inputPath,
  outputPath,
  fragmentDurationInSeconds = 4,
) => {
  return new Promise(async (resolve, reject) => {
    const { durationInSeconds: videoDurationInSeconds } = await getVideoInfo(
      inputPath,
    );

    const startTimeInSeconds = getStartTimeInSeconds(
      videoDurationInSeconds,
      fragmentDurationInSeconds,
    );

    return ffmpeg()
      .input(inputPath)
      .inputOptions([`-ss ${startTimeInSeconds}`])
      .outputOptions([`-t ${fragmentDurationInSeconds}`])
      .noAudio()
      .output(outputPath)
      .on('end', resolve)
      .on('error', reject)
      .run();
  });
};

First, we use the previously created getVideoInfo function to get the duration of the video. Then we get the start time using the getStartTimeInSeconds helper function.

Let’s think about the start time (the -ss parameter) because it may be tricky to get it right. The start time has to be somewhere between 25–75 percent of the video length since that is where the most representative fragment will be.

But we also have to make sure that the randomly generated start time plus the duration of the fragment is not larger than the duration of the video (startTime + fragmentDuration <= videoDuration). If that were the case, the fragment would be cut short since there wouldn't be enough video left.

With these requirements in mind, let’s create the function:

const getStartTimeInSeconds = (
  videoDurationInSeconds,
  fragmentDurationInSeconds,
) => {
  // by subtracting the fragment duration we can be sure that the resulting
  // start time + fragment duration will be less than the video duration
  const safeVideoDurationInSeconds =
    videoDurationInSeconds - fragmentDurationInSeconds;

  // if the fragment duration is longer than the video duration
  if (safeVideoDurationInSeconds <= 0) {
    return 0;
  }

  return getRandomIntegerInRange(
    0.25 * safeVideoDurationInSeconds,
    0.75 * safeVideoDurationInSeconds,
  );
};

First, we subtract the fragment duration from the video duration. By doing so, we can be sure that the resulting start time plus the fragment duration will be smaller than the video duration.

If the result of the subtraction is less than 0, then the start time has to be 0 because the fragment duration is longer than the actual video. For example, if the video were 4 seconds long and the expected fragment were to be 6 seconds long, the fragment would be the entire video.

The function returns a random number of seconds from the range between 25–75 percent of the video length using the helper function: getRandomIntegerInRange.

export const getRandomIntegerInRange = (min, max) => {
  const minInt = Math.ceil(min);
  const maxInt = Math.floor(max);

  return Math.floor(Math.random() * (maxInt - minInt + 1) + minInt);
};

It makes use of, among other things, Math.random() to get a pseudo-random integer in the range. The helper is brilliantly explained here.

Now, coming back to the command, all that’s left to do is set the command’s parameters with the generated values and run it.

return ffmpeg()
  .input(inputPath)
  .inputOptions([`-ss ${startTimeInSeconds}`])
  .outputOptions([`-t ${fragmentDurationInSeconds}`])
  .noAudio()
  .output(outputPath)
  .on('end', resolve)
  .on('error', reject)
  .run();

The code is self-explanatory. We make use of the .noAudio() method to generate the -an parameter. We also attach the resolve and reject listeners on the end and error events, respectively. As a result, we have a function that is easy to deal with because it’s wrapped in a promise.

In a real-world setting, we would probably take in a stream and output a stream from the function, but here I decided to use promises to make the code easier to understand.

Here are a few sample results from running the function on the “This is America” video. The videos were converted to gifs to embed them more easily.

Since the users are probably going to view the previews in small viewports, we could do without an unnecessarily high resolution and thus save on the file size.
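If we wanted to do that here, fluent-ffmpeg's size() method can downscale the output. A minimal sketch of the fragment command with scaling added (the 320-pixel width is an arbitrary choice):

return ffmpeg()
  .input(inputPath)
  .inputOptions([`-ss ${startTimeInSeconds}`])
  .outputOptions([`-t ${fragmentDurationInSeconds}`])
  .noAudio()
  // '320x?' scales the width to 320px and keeps the aspect ratio
  .size('320x?')
  .output(outputPath)
  .on('end', resolve)
  .on('error', reject)
  .run();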

Frames interval

The second option is to get x frames evenly spread throughout the video. For example, if we had a video that was 100 seconds long and we wanted 5 frames out of it for the preview, we would take a frame every 20 seconds. Then we could either merge them together in a video (using ffmpeg) or load them to the website and manipulate them with JavaScript.

Let’s break down the command:

ffmpeg -i video.mp4 -y -vf fps=1/24 thumb%04d.jpg
  • -i video.mp4: The input video file
  • -y: Output overwrites any existing files
  • -vf fps=1/24: The filter that takes a frame every (in this case) 24 seconds
  • thumb%04d.jpg: The output pattern that generates files in the following fashion: thumb0001.jpg, thumb0002.jpg, etc. The %04d part specifies that the number should consist of four digits, zero-padded

With the command also being pretty straightforward, let’s implement it in Node.

export const createXFramesPreview = (
  inputPath,
  outputPattern,
  numberOfFrames,
) => {
  return new Promise(async (resolve, reject) => {
    const { durationInSeconds } = await getVideoInfo(inputPath);

    // 1/frameIntervalInSeconds = 1 frame each x seconds
    const frameIntervalInSeconds = Math.floor(
      durationInSeconds / numberOfFrames,
    );

    return ffmpeg()
      .input(inputPath)
      .outputOptions([`-vf fps=1/${frameIntervalInSeconds}`])
      .output(outputPattern)
      .on('end', resolve)
      .on('error', reject)
      .run();
  });
};

As was the case with the previous function, we must first know the length of the video in order to calculate when to extract each frame. We get it with the previously defined helper getVideoInfo.

Next, we divide the duration of the video by the number of frames (passed as an argument, numberOfFrames). We use the Math.floor() function to make sure that the interval is an integer, so that the interval multiplied by the number of frames is less than or equal to the duration of the video.

Then we generate the command with the values and execute it. Once again, we attach the resolve and reject functions to the end and error events, respectively, to wrap the output in the promise.

Here are some of the generated images (frames):

As stated above, we could now load the images in a browser and use JavaScript to make them into a slideshow or generate a slideshow with FFmpeg. Let’s create a command for the latter approach as an exercise:

ffmpeg -framerate 1/0.6 -i thumb%04d.jpg slideshow.mp4
  • -framerate 1/0.6: Each frame should be seen for 0.6 seconds
  • -i thumb%04d.jpg: The pattern for the images to be included in the slideshow
  • slideshow.mp4: The output video file name
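The same command expressed with fluent-ffmpeg might look like this (a minimal sketch mirroring the CLI flags above; createSlideshow is a hypothetical helper name):

const createSlideshow = (inputPattern, outputPath) =>
  new Promise((resolve, reject) => {
    ffmpeg()
      // e.g. 'thumb%04d.jpg', the same pattern used when extracting frames
      .input(inputPattern)
      .inputOptions(['-framerate 1/0.6'])
      .output(outputPath)
      .on('end', resolve)
      .on('error', reject)
      .run();
  });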

Here’s the slideshow video generated from 10 extracted frames. A frame was extracted every 24 seconds.


This preview shows us a very good overview of the content of the video.

Fun fact

In order to prepare the resulting videos for embedding in the article, I had to convert them to the .gif format. There are many online converters available, as well as apps that could do this for me. But while writing a post about using FFmpeg, it felt weird to not even try to use it in this situation. Sure enough, converting a video to the gif format can be done with one command:

ffmpeg -i video.mp4 -filter_complex "[0:v] split [a][b];[a] palettegen [p];[b][p] paletteuse" converted-video.gif

Here’s the blog post explaining the logic behind it.

Now, sure, this command is not that easy to understand because of the complex filter, but it goes a long way in showing how many use cases FFmpeg has and how useful it is to be familiar with this tool.
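If we wanted to run the conversion from Node as well, fluent-ffmpeg accepts the same filter graph through complexFilter (a minimal sketch; convertToGif is a hypothetical helper name):

const convertToGif = (inputPath, outputPath) =>
  new Promise((resolve, reject) => {
    ffmpeg()
      .input(inputPath)
      // The same palette-based filter graph as in the CLI command above
      .complexFilter('[0:v] split [a][b];[a] palettegen [p];[b][p] paletteuse')
      .output(outputPath)
      .on('end', resolve)
      .on('error', reject)
      .run();
  });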

Instead of using online converters, where the conversion could take some time because the tools are free and do the work server side, I executed the command and had the gif ready after only a few seconds.

Summary

It is not very likely that you will need to create previews of videos yourself, but hopefully by now you know how to use FFmpeg and its basic command syntax well enough to use it in any potential projects. Regarding the preview formats, I would probably go with the video fragment option, as more people will be familiar with it because of YouTube.

We should probably generate the previews of the video with low quality to keep the preview file sizes small since they have to be loaded on users’ browsers. The previews are usually shown in a very small viewport, so the low resolution should not be a problem.

Originally published by Maciej Cieślar at https://blog.logrocket.com

How to Create a Fake REST API Server using Node.js


How to make a REST API Server using Node.js for testing your AJAX client side.

In this post, I want to show you how to create a fake REST API server using Node.js. This will be just a server for testing AJAX on your client side. It also supports CORS and the request verbs POST, GET, DELETE, and PUT.

First of all, create a folder (RESTAPI) and create a JSON file named users.json, as shown below. I used the file mentioned in the link above for sample JSON data.

{  
   "user1" : {  
      "name" : "mahesh",  
      "password" : "password1",  
      "profession" : "teacher",  
      "id": 1  
   },  
   "user2" : {  
      "name" : "suresh",  
      "password" : "password2",  
      "profession" : "librarian",  
      "id": 2  
   },  
   "user3" : {  
      "name" : "ramesh",  
      "password" : "password3",  
      "profession" : "clerk",  
      "id": 3  
   }  
}  

From the Start menu, open the Node.js command prompt, change to the directory where our folder is located, and type the command given below.

npm install express body-parser --save

Create a JavaScript file named server.js and add the piece of code given below.

var express = require('express');  
var app = express();  
var fs = require("fs");  
var bodyParser = require('body-parser');  
  
//enable CORS for request verbs
app.use(function(req, res, next) {  
  res.header("Access-Control-Allow-Origin", "*");  
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");  
  res.header("Access-Control-Allow-Methods","POST, GET, PUT, DELETE, OPTIONS");  
  next();  
});  
  
app.use(bodyParser.urlencoded({  
    extended: true  
}));  
  
app.use(bodyParser.json());  
  
//Handle GET method for listing all users
app.get('/listUsers', function (req, res) {  
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {  
       console.log( data );  
       res.end( data );  
   });  
})  
  
//Handle GET method to get only one record
app.get('/:id', function (req, res) {  
   // First read existing users.  
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {  
       var users = JSON.parse( data );  
       console.log(req.params.id);  
       var user = users["user" + req.params.id]   
       console.log( user );  
       res.end( JSON.stringify(user));  
   });  
})  
  
//Handle POST method
app.post('/addUser', function (req, res) {  
   // First read existing users.  
       fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {  
       var obj = JSON.parse('[' + data + ']' );  
       obj.push(req.body);  
       console.log(obj);  
         
       res.end( JSON.stringify(obj)  );  
   });  
})  
  
//Handle DELETE method
app.delete('/deleteUser/:id', function (req, res) {  
  
   // First read existing users.  
   fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {  
       data = JSON.parse( data );  
         
       delete data["user" + req.params.id];  
         
       console.log( data );  
       res.end( JSON.stringify(data));  
   });  
})  
  
//Handle PUT method for updating a record
app.put('/updateUser/:id', function(req,res){  
      
    // First read existing users.  
    fs.readFile( __dirname + "/" + "users.json", 'utf8', function (err, data) {  
       //var obj = JSON.parse('[' + data + ']' );  
       data = JSON.parse( data );  
       var arr={};  
       arr=req.body;  
         
        data["user" + req.params.id]= arr[Object.keys(arr)[0]] ; //  req.body;   //obj[Object.keys(obj)[0]]  
          
        res.end( JSON.stringify( data ));  
         
    });  
} );  
  
var server = app.listen(8081, function () {  
  
  var host = server.address().address  
  var port = server.address().port  
  
  console.log("Example app listening at http://%s:%s", host, port)  
  
})  

Notice

The parameter (data) of the readFile callback contains the JSON file contents. Parsed directly, it yields a single object, which doesn't accept the normal array methods like push, so I decided to wrap it with brackets [ ] before parsing so that the result is an array.

var obj = JSON.parse('[' + data + ']' );     
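An equivalent approach that may read more clearly is to parse the object first and then wrap it in an array literal:

// Parse the file contents first, then wrap the resulting object in an array
var obj = [JSON.parse(data)];
obj.push(req.body); // array methods now work as expected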

To run this server, open the Node.js command prompt and run the command below.

$ node server.js

Now the server is running, and you have a REST API server that supports CORS for the requests.

Here, it is listening on port 8081. To test our server, you can use the sample data shown above.

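You can exercise the endpoints from another terminal, for example with curl (a sketch; the user4 payload is made-up sample data):

curl http://localhost:8081/listUsers
curl http://localhost:8081/2
curl -X POST -H "Content-Type: application/json" -d '{"user4":{"name":"naresh","password":"password4","profession":"driver","id":4}}' http://localhost:8081/addUser
curl -X DELETE http://localhost:8081/deleteUser/2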

I hope it comes in handy. Thank you for reading!

Build a REST API using Node.js, Express.js, Mongoose.js and MongoDB


Node.js, Express.js, Mongoose.js, and MongoDB make a great combination for building an easy and fast REST API. You will see how much faster this combination is than other existing frameworks, because Node.js is a packaged compilation of Google's V8 JavaScript engine and works on non-blocking, event-driven I/O. Express.js is a JavaScript web server framework with complete web development functionality, including REST APIs.


This tutorial is divided into several steps:

Step #1. Create Express.js Application and Install Required Modules
Step #2. Add Mongoose.js Module as ORM for MongoDB
Step #3. Create Product Mongoose Model
Step #4. Create Routes for the REST API endpoint
Step #5. Test REST API Endpoints

Source code here:
https://github.com/didinj/NodeRestApi...