Live Image Processing with getUserMedia() and Web Workers

[embed]http://www.youtube.com/watch?v=Z-bi_YG_ZfQ[/embed]
Demo of our internal tools using Web Workers & getUserMedia() to create image effects.

At Aviary, we are constantly exploring new technologies, including the latest (and not-yet-fully supported) HTML5 features. Over the past few months, we’ve begun using Web Workers in a number of our internal tools. Web Workers allow us to perform heavy image processing tasks as a background process, avoiding a frozen UI. We hope to roll these benefits into our product in the near future while providing a fallback for unsupported browsers.

getUserMedia() is another exciting new feature in HTML5. It allows web applications to access video and audio streams from a user’s camera, microphone, and other media devices. In this demo, I'm piping webcam video data into a canvas element. I’ve also built a UI that allows me to control Aviary’s JavaScript Image Processors in real time.

Since image processing is CPU-intensive, we are going to leverage the aforementioned Web Workers to perform the pixel manipulation as a background process, which keeps the UI responsive while each frame is processed. No more need for the setTimeout( …, 0) trick.
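The Worker — referred to as `processor` in the snippets below — has to be created up front. A minimal sketch; the file name `processor.js` is an assumption, so point it at wherever your worker script actually lives:

```javascript
// spawn the background worker that will do the pixel manipulation
// (guarded so environments without Worker support don't throw);
// 'processor.js' is a placeholder for your actual worker script
var processor = (typeof Worker !== 'undefined') ? new Worker('processor.js') : null;
```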

Here’s how to get the video stream:

#!javascript
// create video element (attach to DOM if you’d like to
// view the stream, but that's not necessary here)
var video = document.createElement('video');

// fall back to the vendor-prefixed implementations of getUserMedia
navigator.getUserMedia = navigator.getUserMedia ||
                         navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia ||
                         navigator.msGetUserMedia;

// acquire the video stream
navigator.getUserMedia({video: true}, function(stream){
    video.src = URL.createObjectURL(stream);
    video.play();

    // set up an interval to call render() every 10 milliseconds:
    setInterval(render, 10);
}, function(error){
    console.log('error', error);
});
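The render() method and the final drawing step assume a canvas context `ctx` (where video frames are drawn for sampling), a second context `ctxEffects` (where the processed output is painted), and dimensions `w` and `h`. A minimal setup sketch — the 320×240 size is an arbitrary choice, and the guard simply skips the DOM work where no document exists:

```javascript
// dimensions for both canvases (arbitrary; match your video size)
var w = 320, h = 240;
var ctx, ctxEffects;

if (typeof document !== 'undefined') {
    // source canvas: video frames are drawn here so we can call getImageData()
    var canvas = document.createElement('canvas');
    canvas.width = w;
    canvas.height = h;
    ctx = canvas.getContext('2d');

    // effects canvas: the processed pixels are painted here
    var effectsCanvas = document.createElement('canvas');
    effectsCanvas.width = w;
    effectsCanvas.height = h;
    ctxEffects = effectsCanvas.getContext('2d');
    document.body.appendChild(effectsCanvas);
}
```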

In our render() method we first draw the current video frame onto a canvas and use getImageData() to grab its pixels. Then we pass the pixels to our Web Worker for processing. (See the example code linked at the bottom for more detail.)

#!javascript
var render = function(){
    ctx.drawImage(video, 0, 0, w, h);
    var srcData = ctx.getImageData(0, 0, w, h);

    // pass the image data to the Web Worker
    processor.postMessage({ imageData: srcData });
};

Inside the Worker, we increase the red channel of each pixel, then use postMessage() to pass the resulting pixel data back.

#!javascript
// message receiver (inside the Worker)
onmessage = function(event) {
    var imageData = event.data.imageData,
        dst = imageData.data;

    /* Image Processing will go here */
    for (var i = 0; i < dst.length; i += 4) {
        dst[i] += 70; // increase red channel
    }

    postMessage({
        dstData: imageData // pass result back
    });
};
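A note on the `dst[i] += 70` line: ImageData.data is a clamped byte array, so values that would exceed 255 are capped rather than wrapping around — no manual bounds check is needed. Uint8ClampedArray shows the same behavior:

```javascript
// ImageData.data behaves like Uint8ClampedArray: out-of-range
// writes are clamped to the 0–255 range instead of wrapping
var px = new Uint8ClampedArray([200, 100, 0, 255]); // one RGBA pixel
px[0] += 70;  // 270 is clamped to 255
px[2] -= 10;  // -10 is clamped to 0
```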

Once the Worker is finished, we listen for the result and draw it onto our effects canvas:

#!javascript
processor.onmessage = function(event){
    ctxEffects.putImageData(event.data.dstData, 0, 0);
};

Check out the full example code and a working demo below.

Source: https://github.com/conorbuck/canvas-video-effects
Demo: http://conorbuck.github.com/canvas-video-effects/ (you'll need the latest Chrome or Firefox)