I’ve been waiting for the MediaRecorder API for several months, and now I am very excited to let you know that its first prototype is up and running and available in Google Chrome Canary. Before delving any deeper into how to use it, let’s first define what the MediaRecorder API actually is (source: http://www.w3.org/TR/mediastream-recording/):
This API attempts to make basic recording very simple, while still allowing for more complex use cases. In the simplest case, the application instantiates the MediaRecorder object, calls record() and then calls stop() or waits for the MediaStream to be ended. The contents of the recording will be made available in the platform’s default encoding via the data available event.
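To make the flow from the spec concrete, here is a minimal sketch of instantiating a recorder, recording, and listening for the data. Two hedges: the early draft quoted above names the method record(), but implementations expose it as start(); and I’m using the promise-based getUserMedia for brevity, whereas the Canary prototype of the time used the callback form. The five-second timeout and MIME type are illustrative assumptions, not requirements.

```javascript
// Minimal recording sketch (browser code): grab a stream, record it,
// and collect the encoded chunks handed to us via dataavailable.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(function (stream) {
    var recorder = new MediaRecorder(stream);
    var chunks = [];

    // Each dataavailable event carries an already-encoded Blob chunk.
    recorder.ondataavailable = function (event) {
      chunks.push(event.data);
    };

    recorder.onstop = function () {
      // Glue the chunks into a real file; Chrome encodes to WebM.
      var blob = new Blob(chunks, { type: 'video/webm' });
      var url = URL.createObjectURL(blob);
      // This URL can now be downloaded, or the blob uploaded.
      console.log('Recording ready:', url);
    };

    recorder.start();
    // Stop after five seconds, just for the sake of the example.
    setTimeout(function () { recorder.stop(); }, 5000);
  });
```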
In a nutshell, a MediaStream instance that you get by calling getUserMedia is just raw, unencoded data. That’s fine if you want to play it in a video tag, but it is not enough to create a video file, simply because you are missing something that encodes the raw data into a video format, for example WebM.
With the MediaRecorder API we can access encoded blob chunks, which means that once the recording is finished we can construct a real file out of them, and then upload or download it. This was not possible to achieve before - at least not without some dirty hacks :) For example, without the MediaRecorder API, you would have to perform the following steps:
1. Call getUserMedia and stream the data to a video tag;
2. Draw the video tag to the canvas.

If you want to learn more about this method, check out this amazing article.
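The canvas-based workaround above can be sketched roughly like this. Note that the frame-grabbing loop is only the easy half of the hack: you would still need a JavaScript encoder library to stitch the captured frames into an actual video file. I’m using srcObject here for clarity; at the time of the Canary prototype you would instead assign URL.createObjectURL(stream) to video.src.

```javascript
// Rough sketch of the pre-MediaRecorder hack: pipe the camera into a
// video tag, then repeatedly draw that tag onto a canvas.
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  var video = document.createElement('video');
  video.srcObject = stream;
  video.play();

  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  var frames = [];

  function grabFrame() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // Each frame ends up as a data URL; an encoder library would then
    // have to turn this pile of stills into WebM on the client.
    frames.push(canvas.toDataURL('image/webp'));
    requestAnimationFrame(grabFrame);
  }
  requestAnimationFrame(grabFrame);
});
```

Compared to this, getting encoded chunks directly from MediaRecorder is a huge simplification.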
#chrome #media-recorder #api