Front and Rear Camera Access with JavaScript's getUserMedia()

It seems like not so long ago every browser relied on the Flash plugin to access a device's media hardware. With the help of such plugins, developers could capture audio and video and display a live feed in the browser.

Things got easier for developers and users alike when HTML5 was introduced. HTML5 brought with it APIs that can access device hardware, and among them is MediaDevices. This API provides access to media input devices such as cameras and microphones, and it exposes the getUserMedia method we'll be working with.

What’s the getUserMedia API

The getUserMedia API uses the media input devices to produce a MediaStream containing the requested media types, whether audio or video. Using the stream returned from the API, a video feed can be displayed in the browser, which is useful for real-time communication. When used alongside the MediaRecorder API, we can also record and store the media data captured in the browser. Like the rest of the newly introduced APIs, getUserMedia only works on secure origins, though it also works on localhost and on file URLs.

Getting started

Let’s walk through the steps involved, from requesting permission to capture video to displaying the live feed from the input device in the browser. First, we have to check whether the user’s browser supports the mediaDevices API. This API lives on the navigator interface, which holds the current state and identity of the user agent. Here is how the check is performed:

if('mediaDevices' in navigator && 'getUserMedia' in navigator.mediaDevices){
  console.log("Let's get this party started")
}

First we check whether the mediaDevices API exists within navigator, then whether the getUserMedia API is available within mediaDevices. If both checks pass, we can get started.

Requesting user permission

The next step, after confirming browser support for getUserMedia, is to request permission to use the media input devices on the user agent. When a user grants permission, the returned Promise resolves to a media stream; when permission is denied, the Promise rejects instead, blocking access to the devices.

// inside an async function (await requires an async context)
if ('mediaDevices' in navigator && 'getUserMedia' in navigator.mediaDevices) {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true })
  } catch (error) {
    console.error(error) // the user denied permission, or no device matched
  }
}

The object provided as an argument to the getUserMedia method is called constraints. It determines which media input devices we are requesting permission for; for example, if the object contained audio: true, the user would be asked to grant access to the audio input device.

Configuring media constraints

The constraints object is a MediaStreamConstraints object that specifies the types of media to request and the requirements for each type. Using it, we can set requirements for the requested stream, such as its resolution and which camera to use (front or back).

A media type must be specified when making the request, either video or audio; a NotFoundError is returned if the requested media type can’t be found on the user’s device. If we intend to request a video stream with a 1280 x 720 resolution, we can update the constraints object to look like this:

{
  video: {
    width: 1280,
    height: 720,
  }
}

With this update, the browser will try to match these quality settings for the stream, but if the video device can’t deliver this resolution, the browser will return another of the available resolutions. To ensure that the browser never returns a resolution lower than the one provided, we have to make use of the min property. Update the constraints object to include it:

{
  video: {
    width: { 
      min: 1280,
    },
    height: {
      min: 720,
    }
  }
}

This ensures that the returned stream’s resolution will be at least 1280 x 720. If this minimum requirement can’t be met, the promise is rejected with an OverconstrainedError.
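One way to soften this failure mode is to retry with a looser set of constraints when the strict set is rejected. Below is a minimal sketch; the function name requestWithFallback is hypothetical, and the getUserMedia call is injected as a parameter so the retry logic stays independent of the browser:

```javascript
// Try the strict constraints first; if the browser rejects them with an
// OverconstrainedError, retry once with the looser set. `getMedia` stands
// in for navigator.mediaDevices.getUserMedia bound to mediaDevices.
async function requestWithFallback(getMedia, strict, loose) {
  try {
    return await getMedia(strict);
  } catch (err) {
    if (err.name === 'OverconstrainedError') {
      return getMedia(loose); // second attempt without the min requirement
    }
    throw err; // permission denials and other errors are not retried
  }
}
```

In the browser you would call it with the real method, e.g. `requestWithFallback(c => navigator.mediaDevices.getUserMedia(c), strictConstraints, { video: true })`.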

Sometimes you’re concerned about saving data and need the stream to stay below a set resolution; this can come in handy when the user is on a limited plan. To enable this, update the constraints object to contain a max field:

{
  video: {
    width: { 
      min: 1280,
      max: 1920,
    },
    height: {
      min: 720,
      max: 1080
    }
  }
}

With these settings, the browser ensures that the returned stream doesn’t go below 1280 x 720 and doesn’t exceed 1920 x 1080. Other keywords that can be used include exact and ideal. The ideal setting is typically used alongside the min and max properties to find the best possible match closest to the ideal values provided.

You can update the constraints to use the ideal keyword:

{
  video: {
    width: { 
      min: 1280,
      ideal: 1920,
      max: 2560,
    },
    height: {
      min: 720,
      ideal: 1080,
      max: 1440
    }
  }
}
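Because min, ideal, and max leave the browser room to negotiate, it can be useful to check what resolution it actually settled on via the track's getSettings() method. A small sketch with a hypothetical meetsMinimum helper; in the browser you would pass it `stream.getVideoTracks()[0]`:

```javascript
// Returns true when the track's actual settings satisfy a minimum
// resolution. getSettings() reports the values the browser really
// applied, not the ones we asked for in the constraints object.
function meetsMinimum(track, minWidth, minHeight) {
  const { width, height } = track.getSettings();
  return width >= minWidth && height >= minHeight;
}
```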

To tell the browser to make use of the front or back (on mobile) camera on devices, you can specify a facingMode property in the video object:

{
  video: {
    width: { 
      min: 1280,
      ideal: 1920,
      max: 2560,
    },
    height: {
      min: 720,
      ideal: 1080,
      max: 1440
    },
    facingMode: 'user'
  }
}

This setting will use the front-facing camera at all times on all devices. To use the back camera on mobile devices, we can alter the facingMode property to environment.

{
  video: {
    ...
    facingMode: { 
      exact: 'environment'
    }
  }
}
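The difference between exact and ideal matters here: with exact, the request fails outright on devices without a rear camera (a laptop, for example), while ideal lets the browser fall back to whatever camera is available. A small hypothetical helper illustrating the two shapes the constraint can take:

```javascript
// Builds the facingMode part of a video constraint. With required = true
// the browser must match the mode or reject with an OverconstrainedError;
// otherwise the mode is treated as a preference the browser may ignore.
function facingModeConstraint(mode, required = false) {
  return required ? { exact: mode } : { ideal: mode };
}
```

For example, `{ video: { facingMode: facingModeConstraint('environment') } }` prefers the rear camera but still works on a laptop.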

Using the enumerateDevices method

When called, this method returns all the media input devices available on the user’s computer.

With this method, you can offer the user a choice of which media input device to use for streaming audio or video content. It returns a Promise that resolves to a MediaDeviceInfo array containing information about each device.

An example of how to make use of this method is shown in the snippet below:

async function getDevices() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices;
}

A sample response for each of the devices would look like:

{
  deviceId: "23e77f76e308d9b56cad920fe36883f30239491b8952ae36603c650fd5d8fbgj",
  groupId: "e0be8445bd846722962662d91c9eb04ia624aa42c2ca7c8e876187d1db3a3875",
  kind: "audiooutput",
  label: "",
}
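Since the array mixes cameras, microphones, and speakers, and labels can come back empty before permission is granted, it helps to filter for video inputs and substitute a generic name where needed. A sketch of a hypothetical listCameras helper:

```javascript
// From an enumerateDevices() result, keep only the cameras and give each
// one a display name, falling back to "Camera N" when the label is empty
// (labels are hidden until the user grants device access permissions).
function listCameras(devices) {
  return devices
    .filter((d) => d.kind === 'videoinput')
    .map((d, i) => ({
      deviceId: d.deviceId,
      label: d.label || `Camera ${i + 1}`,
    }));
}
```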

Note: A label won’t be returned unless an active stream exists or the user has granted device access permissions.

Displaying the video stream on the browser

We’ve gone through the process of requesting and getting access to the media devices, configured constraints to include the required resolutions, and selected the camera we need to record video. After going through all these steps, we’ll at least want to see whether the stream is delivering based on the configured settings. To do this, we’ll use the video element to display the video stream in the browser.

As mentioned earlier in the article, the getUserMedia method returns a Promise that resolves to a stream. In modern browsers the returned stream is assigned directly to the video element’s srcObject property; the older approach of converting the stream to an object URL with createObjectURL is deprecated.
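A small feature-detecting sketch of attaching a stream to a video element (attachStream is a hypothetical name; the srcObject check covers very old browsers that only supported object URLs):

```javascript
// Attach a MediaStream to a <video> element. Modern browsers support
// srcObject directly; very old ones needed an object URL instead.
function attachStream(videoElement, stream) {
  if ('srcObject' in videoElement) {
    videoElement.srcObject = stream;
  } else {
    videoElement.src = URL.createObjectURL(stream); // legacy fallback
  }
}
```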

We’ll create a short demo where we let the user choose from their available list of video devices, using the enumerateDevices method. This is a navigator.mediaDevices method that lists the available media devices, such as microphones and cameras. It returns a Promise that resolves to an array of objects detailing each available media device.

Create an index.html file and update the contents with the code below:

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
    <link rel="stylesheet" href="style.css">
    <title>Document</title>
</head>
<body>
<div class="display-cover">
    <video autoplay></video>
    <canvas class="d-none"></canvas>
</div>
<div class="video-options">
    <select name="" id="" class="custom-select">
        <option value="">Select camera</option>
    </select>
</div>

<img class="screenshot-image d-none" alt="">

<div class="controls">
    <button class="btn btn-danger play" title="Play"><i data-feather="play-circle"></i></button>
    <button class="btn btn-info pause d-none" title="Pause"><i data-feather="pause"></i></button>
    <button class="btn btn-outline-success screenshot d-none" title="ScreenShot"><i data-feather="image"></i></button>
</div>

<script src="https://unpkg.com/feather-icons"></script>
<script src="script.js"></script>
</body>
</html>

In the snippet above, we’ve set up the elements we’ll need along with a couple of controls for the video. Also included is a button for taking screenshots of the current video feed. Now let’s style these components up a bit.

Create a style.css file and add the following styles into it. If you noticed, Bootstrap was included to reduce the amount of CSS we need to write to get the components going.

/* style.css */
.screenshot-image {
    width: 150px;
    height: 90px;
    border-radius: 4px;
    border: 2px solid whitesmoke;
    box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.1);
    position: absolute;
    bottom: 5px;
    left: 10px;
    background: white;
}

.display-cover {
    display: flex;
    justify-content: center;
    align-items: center;
    width: 70%;
    margin: 5% auto;
    position: relative;
}

video {
    width: 100%;
    background: rgba(0, 0, 0, 0.2);
}

.video-options {
    position: absolute;
    left: 20px;
    top: 30px;
}

.controls {
    position: absolute;
    right: 20px;
    top: 20px;
    display: flex;
}

.controls > button {
    width: 45px;
    height: 45px;
    text-align: center;
    border-radius: 100%;
    margin: 0 6px;
    background: transparent;
}

.controls > button:hover svg {
    color: white !important;
}

@media (min-width: 300px) and (max-width: 400px) {
    .controls {
        flex-direction: column;
    }

    .controls button {
        margin: 5px 0 !important;
    }
}

.controls > button > svg {
    height: 20px;
    width: 18px;
    text-align: center;
    margin: 0 auto;
    padding: 0;
}

.controls button:nth-child(1) {
    border: 2px solid #D2002E;
}

.controls button:nth-child(1) svg {
    color: #D2002E;
}

.controls button:nth-child(2) {
    border: 2px solid #008496;
}

.controls button:nth-child(2) svg {
    color: #008496;
}

.controls button:nth-child(3) {
    border: 2px solid #00B541;
}

.controls button:nth-child(3) svg {
    color: #00B541;
}


After styling, if you open the index.html file in your browser, you should see a dark video placeholder with the camera selector and control buttons overlaid.

The next step is to add functionality to the demo. Using the enumerateDevices method, we’ll get the available video devices and set them as the options within the select element. Create a file called script.js and update it with the following snippet:

feather.replace();

const controls = document.querySelector('.controls');
const cameraOptions = document.querySelector('.video-options>select');
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const screenshotImage = document.querySelector('img');
const buttons = [...controls.querySelectorAll('button')];
let streamStarted = false;

const [play, pause, screenshot] = buttons;

const constraints = {
  video: {
    width: {
      min: 1280,
      ideal: 1920,
      max: 2560,
    },
    height: {
      min: 720,
      ideal: 1080,
      max: 1440
    },
  }
};

const getCameraSelection = async () => {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const videoDevices = devices.filter(device => device.kind === 'videoinput');
  const options = videoDevices.map(videoDevice => {
    return `<option value="${videoDevice.deviceId}">${videoDevice.label}</option>`;
  });
  cameraOptions.innerHTML = options.join('');
};

play.onclick = () => {
  if (streamStarted) {
    video.play();
    play.classList.add('d-none');
    pause.classList.remove('d-none');
    return;
  }
  if ('mediaDevices' in navigator && navigator.mediaDevices.getUserMedia) {
    const updatedConstraints = {
      ...constraints,
      video: {
        ...constraints.video,
        deviceId: {
          exact: cameraOptions.value
        }
      }
    };
    startStream(updatedConstraints);
  }
};

const startStream = async (constraints) => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    handleStream(stream);
  } catch (error) {
    console.error(error); // permission denied or constraints not satisfiable
  }
};

const handleStream = (stream) => {
  video.srcObject = stream;
  play.classList.add('d-none');
  pause.classList.remove('d-none');
  screenshot.classList.remove('d-none');
  streamStarted = true;
};

getCameraSelection();

In the snippet above there are a couple of things going on, let’s break them down:

  1. feather.replace(): this call initializes Feather, a great icon set for web development, used here for the control buttons.
  2. The constraints variable holds the initial configuration for the stream. This will be extended to include the media device the user chooses.
  3. getCameraSelection: this function calls the enumerateDevices method, then, we filter through the array from the resolved Promise and select video input devices. From the filtered results, we create options for the select element.
  4. Calling the getUserMedia method happens within the onclick listener of the play button. Here we check if this method is supported by the user’s browser before starting the stream.
  5. Next, we call the startStream function, which takes a constraints argument and calls the getUserMedia method with it. handleStream is then called with the stream from the resolved Promise; this function sets the returned stream as the video element’s srcObject.
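One detail worth isolating from the play handler: the deviceId constraint must be nested inside the video object, not placed at the top level of the constraints, or the browser will ignore it. A hypothetical helper that performs the merge without mutating the base constraints:

```javascript
// Returns a new constraints object with the chosen camera's deviceId
// nested under `video`, leaving the base constraints untouched.
function withDeviceId(baseConstraints, deviceId) {
  return {
    ...baseConstraints,
    video: {
      ...baseConstraints.video,
      deviceId: { exact: deviceId },
    },
  };
}
```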

Next, we’ll add click listeners to the button controls on the page to pause the stream and take screenshots. We’ll also add a listener to the select element to update the stream constraints with the selected video device.

Update the script.js file with the code below:

...

const startStream = async (constraints) => {
  ...
};

const handleStream = (stream) => {
  ...
};

cameraOptions.onchange = () => {
  const updatedConstraints = {
    ...constraints,
    video: {
      ...constraints.video,
      deviceId: {
        exact: cameraOptions.value
      }
    }
  };
  startStream(updatedConstraints);
};

const pauseStream = () => {
  video.pause();
  play.classList.remove('d-none');
  pause.classList.add('d-none');
};

const doScreenshot = () => {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  screenshotImage.src = canvas.toDataURL('image/webp');
  screenshotImage.classList.remove('d-none');
};

pause.onclick = pauseStream;
screenshot.onclick = doScreenshot;

Now, when you open the index.html file on the browser, clicking the play button should start the stream.
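Note that pausing the video element only freezes playback; the camera stays active (and its indicator light stays on) because the tracks are still live. To release the hardware entirely, every track on the stream has to be stopped. A sketch of a hypothetical stopStream addition to the demo:

```javascript
// Fully release the camera: stop every track on the stream and detach it
// from the video element so the element can be reused for a new stream.
function stopStream(stream, videoElement) {
  stream.getTracks().forEach((track) => track.stop());
  if (videoElement) {
    videoElement.srcObject = null;
  }
}
```

In the demo, you could call this from a dedicated stop button, then restart with startStream when the user presses play again.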

Conclusion

This article has introduced the getUserMedia API, an interesting addition to the web platform that eases the process of capturing media in the browser. The API takes a constraints parameter that can be used to configure access to audio and video input devices, and to specify the video resolution required by your application. You can extend the demo further to let the user save the screenshots taken, as well as record and store video and audio data with the help of the MediaRecorder API. Happy hacking.
