HTML5 <video> Player Controls in Chrome: Three Dots on the Right Open a Blank Screen

Can't seem to find an answer for this anywhere.

I am using a standard HTML5 <video> tag to embed a video into the page:

<video controls>
     <source src="video/intro-video.mp4" type="video/mp4"/>
</video>

However, Chrome's default controls show three dots (an options menu) on the right, and when you click them, the player goes to a blank screen with no way to get out of it except refreshing the entire page.

How do you either make the options menu go away or prevent the blank screen?

Thank you.
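
Edit: the closest workaround I've found suggested so far is to strip the overflow menu's items with the controlsList and disablepictureinpicture attributes, since Chrome reportedly hides the three-dot button when the menu would be empty. This is a sketch of that approach, and the behavior may vary between Chrome versions:

```html
<video controls
       controlsList="nodownload noplaybackrate noremoteplayback"
       disablepictureinpicture>
     <source src="video/intro-video.mp4" type="video/mp4"/>
</video>
```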

How to Install Google Chrome Web Browser on CentOS 8?

This tutorial explains how to install the Google Chrome web browser on CentOS 8.

Chrome is the most widely used web browser in the world. It is a fast, easy to use, and secure browser built for the modern web.

Chrome is not an open-source browser, and it is not included in the official CentOS repositories.


Installing Chrome Browser on CentOS 8

Follow these steps to install Chrome Browser on your CentOS 8 system:

  1. Open your terminal and download the latest Chrome 64-bit .rpm package with wget:

    wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
    
  2. Once the download is complete, run the following command as root or a user with sudo privileges to install Chrome Browser:

    sudo dnf localinstall google-chrome-stable_current_x86_64.rpm
    

When prompted, enter your user password, and the installation will continue.

At this point, you have Chrome installed on your CentOS system.

Starting Chrome Browser

Now that Chrome Browser is installed on your CentOS system, you can launch it either from the command line by typing google-chrome & or by clicking on the Chrome icon (Activities -> Chrome Browser).

When Chrome Browser is started for the first time, it will ask whether you want to make Chrome your default browser and whether to send usage statistics and crash reports to Google.

Select the checkboxes according to your preferences, and click OK to proceed.

Chrome Browser will open, and you’ll see the default welcome page.

From here, you can sign in with your Google Account to sync your bookmarks, history, and passwords, and install Chrome apps and extensions.

Updating Chrome Browser

During the package installation, the official Google repository will be added to your system. Use the following cat command to verify that the file exists:

cat /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub

When a new version is released, you can perform an update with dnf or through your desktop's standard Software Update tool.
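
With the Google repository enabled, the dnf update is a one-liner (run as root or with sudo; this assumes the repo file shown above is in place):

```shell
sudo dnf upgrade google-chrome-stable
```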

Conclusion

In this tutorial, we’ve shown you how to install Chrome Browser on CentOS 8 desktop systems. If you’ve previously used a different browser, like Firefox or Opera, you can import your bookmarks and settings into Chrome.

If you hit a problem or have feedback, leave a comment below.

Why does Firefox produce larger WebM video files compared with Chrome?

My team and I have been struggling lately to find an explanation for why Firefox produces larger WebM/VP8 video files than Chrome when using the MediaRecorder API in our project.

In short, we record a MediaStream captured from an HTMLCanvasElement via the captureStream method. In an attempt to isolate everything in our app that might affect this, I developed a small dedicated test app which records a <canvas> and produces WebM files. I've been performing tests with the same footage, video duration, codec, A/V bit rate, and frame rate. However, Firefox still ends up creating files up to 4 times larger than Chrome's. I also tried a different MediaStream source, like the web camera, but the results were similar.
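
For reference, the test setup boils down to something like this sketch. The explicit videoBitsPerSecond option and the size helper are illustrative additions on my part, since each browser otherwise picks its own default bitrate:

```javascript
// Sketch of the canvas-recording setup (assumes a <canvas> element on the page).
// Pinning videoBitsPerSecond removes each browser's default-bitrate choice
// from the comparison.
function recordCanvas(canvas, seconds, bitsPerSecond) {
  const stream = canvas.captureStream(30); // capture at 30 fps
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm;codecs=vp8',
    videoBitsPerSecond: bitsPerSecond, // explicit target bitrate
  });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
    recorder.start();
    setTimeout(() => recorder.stop(), seconds * 1000);
  });
}

// Rough size a constant-bitrate encode should produce, for sanity checks.
function expectedSizeBytes(bitsPerSecond, seconds) {
  return Math.round((bitsPerSecond * seconds) / 8);
}
```

Comparing the resulting blob sizes from both browsers against expectedSizeBytes makes it obvious whether one of them is simply encoding at a much higher default bitrate.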

Here is a fiddle which should demonstrate what I am talking about: https://jsfiddle.net/nzwasv8k/1/ You can try recording 10-sec or 20-sec long videos on both FF and Chrome and notice the difference between the file sizes. Note that I am using only 4 relatively simple frames/images in this demo. In real-world usage, like in our app where we record a video stream of a desktop, we reached a staggering 9x difference.

I am not a video codec guru in any way, but I believe browsers should follow the same specifications when implementing a given technology, so such a tremendous difference shouldn't occur. Considering my knowledge is limited, I cannot conclude whether this is a bug or something totally expected. That is why I am addressing the question here, since my research on the topic has so far led to absolutely nothing. I'll be really glad if someone can point out the logical explanation behind it. Thanks in advance!

Node.js - Setting Up & Using Google Cloud Video Intelligence API


Do you need to annotate the content of a video automatically? Let's say you have a service that allows users to upload videos and you want to know the content of each video. It would take a lot of time and effort if the process is done manually by watching each video. Fortunately there are some services that can annotate videos and extract metadata. By doing so, it becomes possible to make the videos searchable.

If you need that kind of service, you can consider Google Cloud Video Intelligence. It works by labelling a video with multiple labels using their library of 20,000 labels. It has the capability to extract metadata for indexing your video content, so that you can easily organize and search video content. Other features include shot detection to distinguish scene changes and integration with Google Cloud Storage.

Preparation
  1. Create or select a Google Cloud project

A Google Cloud project is required to use this service. Open the Google Cloud console (https://console.cloud.google.com/project), then create a new project or select an existing one.

  2. Enable billing for the project

Like other cloud platforms, Google requires you to enable billing for your project. If you haven't set up billing, open the billing page (https://console.cloud.google.com/billing).

  3. Enable Video Intelligence API

To use an API, you must enable it first. Open the API enablement page (https://console.cloud.google.com/flows/enableapi?apiid=videointelligence.googleapis.com) to enable the Video Intelligence API.

  4. Set up service account for authentication

As for authentication, you need a service account. Create a new one on the service account management page (https://console.cloud.google.com/iam-admin/serviceaccounts) and download the credentials, or use an existing service account.

In your .env file, you have to add a new variable because it's needed by the library we are going to use.

GOOGLE_APPLICATION_CREDENTIALS=/path/to/the/credentials

In addition, you also need to add GOOGLE_CLOUD_PROJECT_ID to your .env as well.
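
Putting both together, a .env sketch looks like this (the credentials path and project ID value are placeholders):

```
GOOGLE_APPLICATION_CREDENTIALS=/path/to/the/credentials
GOOGLE_CLOUD_PROJECT_ID=your-project-id
```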

Dependencies

This tutorial uses @google-cloud/video-intelligence and also dotenv for loading environment variables. Add the following dependencies to your package.json and run npm install:

"dependencies": {
  "@google-cloud/video-intelligence": "~1.5.0",
  "dotenv": "~4.0.0"
}

Code

Below is a code example showing how to annotate a video with the Google Video Intelligence API. The video to be analyzed needs to be uploaded to Google Cloud Storage first. You can read our tutorial about how to upload a file to Google Cloud Storage using Node.js.

// Loads environment variables
require('dotenv').config();

// Imports the Google Cloud Video Intelligence library
const videoIntelligence = require('@google-cloud/video-intelligence');

// Creates a client
const client = new videoIntelligence.VideoIntelligenceServiceClient({
  projectId: process.env.GOOGLE_CLOUD_PROJECT_ID,
});

// URI of the video you want to analyze
const gcsUri = 'gs://{YOUR_BUCKET_NAME}/{PATH_TO_FILE}';

// Request config
const request = {
  inputUri: gcsUri,
  features: ['LABEL_DETECTION'],
};

// Execute request
client
  .annotateVideo(request)
  .then(results => {
    console.log('Waiting for service to analyze the video. This may take a few minutes.');
    return results[0].promise();
  })
  .then(results => {
    console.log(JSON.stringify(results, null, 2));
    // Gets annotations for video
    const annotations = results[0].annotationResults[0];
    // Gets labels for video from its annotations
    const labels = annotations.segmentLabelAnnotations;
    labels.forEach(label => {
      console.log(`Label ${label.entity.description} occurs at:`);
      label.segments.forEach(segment => {
        const _segment = segment.segment;
        _segment.startTimeOffset.seconds = _segment.startTimeOffset.seconds || 0;
        _segment.startTimeOffset.nanos = _segment.startTimeOffset.nanos || 0;
        _segment.endTimeOffset.seconds = _segment.endTimeOffset.seconds || 0;
        _segment.endTimeOffset.nanos = _segment.endTimeOffset.nanos || 0;
        console.log(
          `\tStart: ${_segment.startTimeOffset.seconds}` +
            `.${(_segment.startTimeOffset.nanos / 1e6).toFixed(0)}s`
        );
        console.log(
          `\tEnd: ${_segment.endTimeOffset.seconds}.` +
            `${(_segment.endTimeOffset.nanos / 1e6).toFixed(0)}s`
        );
        console.log(`Confidence level: ${segment.confidence}`);
      });
    });
  })
  .catch(err => {
    console.error(`ERROR: ${err}`);
  });

It may take a few minutes to get the annotation results, depending on video length. annotateVideo returns a promise that resolves to an array whose first element is a long-running operation with a promise() method. So you need to wait until the process is done by calling results[0].promise(). Meanwhile, you can add a console.log to show that the annotation is in progress.
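
The same two-step flow can also be sketched with async/await, assuming the client and request objects created in the snippet above:

```javascript
// Sketch: the two-promise pattern expressed with async/await.
// `client` is assumed to be the VideoIntelligenceServiceClient created earlier.
async function annotate(request) {
  const [operation] = await client.annotateVideo(request); // starts the long-running operation
  const [result] = await operation.promise(); // resolves when the analysis is done
  return result.annotationResults[0];
}
```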

Below is the result format. It's an array of 3 objects. The first object contains the annotation results; this is what we need to parse in order to understand what the video is about. The second object contains the progress percentage, the execution start time, and the last time the progress was updated.

[
  {
    "annotationResults": [
      { }
    ]
  },
  {
    "annotationProgress": [
      {
        "inputUri": "/{YOUR_BUCKET_NAME}/{PATH_TO_FILE}",
        "progressPercent": 100,
        "startTime": {
          "seconds": "1546439976",
          "nanos": 559663000
        },
        "updateTime": {
          "seconds": "1546440001",
          "nanos": 104220000
        }
      }
    ]
  },
  { }
]

annotationResults is an array whose elements look like this:

{
  "entity": {
    "entityId": "/m/01350r",
    "description": "performance art",
    "languageCode": "en-US"
  },
  "categoryEntities": [
    {
      "entityId": "/m/02jjt",
      "description": "entertainment",
      "languageCode": "en-US"
    }
  ],
  "segments": [
    {
      "segment": {
        "startTimeOffset": {},
        "endTimeOffset": {
          "seconds": "269",
          "nanos": 720000000
        }
      },
      "confidence": 0.666665256023407
    }
  ]
}

Each object in annotationResults represents a label, along with the video segments that support why the label was given. There is also a confidence value that shows how certain the service is in assigning a label to a segment.
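
Since every label carries per-segment confidence values, a small helper can filter the results down to labels the service is reasonably sure about. The function name and threshold below are illustrative, not part of the API:

```javascript
// Hypothetical helper: keep only labels that have at least one segment
// whose confidence meets the given threshold.
function confidentLabels(segmentLabelAnnotations, minConfidence) {
  return segmentLabelAnnotations.filter(label =>
    label.segments.some(segment => segment.confidence >= minConfidence)
  );
}
```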

That's how to use Google Cloud Video Intelligence in Node.js. If you want to analyze images, you can read the tutorial about how to use Google Cloud Vision in Node.js.