
Building Facial Detection Software With JavaScript

June 9, 2019

Introduction

Seems like an obvious and easy task, doesn't it? The human brain can identify objects within milliseconds because it is constantly making predictions and comparing them to previous experiences. A human does this effortlessly, but a computer needs explicit instructions and a lot of training. If we want to build software that can detect faces, we will need to program it to make predictions about whether a face is in the frame or not.

In this article, we will go over the popular Viola-Jones face detection algorithm and use it to identify faces within an HTML5 canvas. We will use the getUserMedia API to take pictures with the webcam, render them on the canvas, and then determine whether there is a face in the frame. Face detection in JavaScript sounds like a tedious and difficult task, but it really isn't: a few years back, developing a solution to a problem this complicated would have been a nightmare, but thanks to the growing number of APIs the browser exposes, it is now well within reach. The web has come so far, and it's exciting to see all of these APIs being made available to web developers!

The Viola-Jones Algorithm

The Viola-Jones face detection algorithm is an object detection framework proposed by Paul Viola and Michael Jones in 2001, aimed at providing competitive detection rates in real time. It can be broken down into three main components: Haar-like features, the integral image, and classifier cascades. Combined, these components deliver high detection rates with fast processing. The framework is not tied to detecting faces; it can be used to detect other objects as well. Check out this pdf to find out more information regarding the Viola-Jones algorithm.

Features

All humans share certain facial features that allow us to identify a face. Eyes and noses, for example, are easily detected by the human brain through years of training, but to a computer they mean nothing. Haar features help a computer identify these facial features by comparing darker and lighter areas of the face. For example, the eye region is darker than the upper cheeks, and the bridge of the nose is brighter than the eyes. A Haar feature can thus signal that a certain facial feature like a mouth or nose is present. It does this by comparing the sums of pixel values between different areas. When combined, these features form a "cascade" of classifiers which are used to reject frames that don't contain faces. The face detection cascade has over 38 layers of classifiers and over 6,060 features! The first classifier contains 2 features and rejects 50% of non-face sub-frames.
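As a toy illustration of the idea (the function names here are mine, not from the article's `face.js`), a two-rectangle Haar-like feature is just the difference between the pixel sums of two adjacent regions. On a grayscale window where the top row is brighter than the bottom row, the difference comes out positive:

```javascript
// Sum the values of a w-by-h region of a 2D grayscale grid,
// whose top-left corner is at (x, y).
function regionSum(gray, x, y, w, h) {
  var sum = 0;
  for (var j = y; j < y + h; j++) {
    for (var i = x; i < x + w; i++) {
      sum += gray[j][i];
    }
  }
  return sum;
}

// Two-rectangle feature: top half minus bottom half of the window.
// A large magnitude means a strong brightness contrast between the halves.
function twoRectFeature(gray, x, y, w, h) {
  var top = regionSum(gray, x, y, w, h / 2);
  var bottom = regionSum(gray, x, y + h / 2, w, h / 2);
  return top - bottom;
}
```

A real detector evaluates thousands of such features at many positions and scales, which is exactly why the integral image described next matters.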

Integral Image

A summed-area table, or integral image, is a data structure for quickly computing the sum of values within a rectangular area of a grid. In our case, we will use a summed-area table to quickly sum the pixel values within a given region, which lets us compare Haar features and determine whether there is a match. The integral image is built by preprocessing the frame so that the value at any location (x, y) holds the sum of all pixels above and to the left of (x, y), inclusive. This allows us to compute the sum of any rectangular area with four lookups instead of re-summing the pixels each time.
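To make the lookup concrete, here is a minimal sketch (function names are illustrative, not from the article's `face.js`) that builds a summed-area table for a small grid and reads back a rectangle sum with four corner lookups:

```javascript
// Build the summed-area table: ii[y][x] holds the sum of all
// grid values above and to the left of (x, y), inclusive.
function buildIntegral(grid) {
  var h = grid.length, w = grid[0].length;
  var ii = [];
  for (var y = 0; y < h; y++) {
    ii[y] = [];
    var rowSum = 0;
    for (var x = 0; x < w; x++) {
      rowSum += grid[y][x];
      ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
    }
  }
  return ii;
}

// Sum of the w-by-h rectangle with top-left corner (x, y),
// computed as D - B - C + A from four corner lookups.
function rectSum(ii, x, y, w, h) {
  var A = (x > 0 && y > 0) ? ii[y - 1][x - 1] : 0;
  var B = (y > 0) ? ii[y - 1][x + w - 1] : 0;
  var C = (x > 0) ? ii[y + h - 1][x - 1] : 0;
  var D = ii[y + h - 1][x + w - 1];
  return D - B - C + A;
}
```

However large the rectangle, the query costs the same four array reads, which is what makes evaluating thousands of Haar features per frame feasible.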


Classifier Cascade

A classifier cascade is made up of classifiers, each of which contains a certain number of Haar features. Simpler classifiers are used first to reject the large majority of sub-frames that contain no face-like features. More complex classifiers are applied later to achieve a lower false positive rate and accurately pin down the face location. For example, the first classifier contains 2 features and rejects 50% of non-face sub-frames; the second moves up to 10 features and rejects 80%. This pattern allows for much faster face detection and a higher detection rate as well.
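The control flow of a cascade can be sketched in a few lines; the stage contents below are hypothetical placeholders, not the article's actual cascade data:

```javascript
// A window survives only if every stage's score clears its threshold.
// Rejected windows exit early, so the costlier later stages rarely run.
function runCascade(stages, window) {
  for (var i = 0; i < stages.length; i++) {
    var s = stages[i];
    if (s.score(window) < s.threshold) {
      return false; // rejected: stop evaluating this window
    }
  }
  return true; // all stages passed: candidate face
}

// Two hypothetical stages: a cheap one, then a stricter one.
var demoStages = [
  { score: function(w) { return w.brightness; },     threshold: 1 },
  { score: function(w) { return w.brightness * 2; }, threshold: 3 }
];
```

This early-exit structure is what `evalStage` in the code below implements with real stage thresholds from the cascade data.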


Putting it Together

In order to detect faces we must first create a blank HTML5 canvas and a video element; the video supplies the frames whose pixel data we draw onto the canvas. We will also need a couple of buttons with event listeners to trigger our functions. In this example, I've set up the HTML to include two files: `cascade.js`, which holds our cascade data, and `face.js`, which contains all of the code behind detecting faces.

```html
<html>
  <head>
    <title>Face Detection</title>
    <script src="./cascade.js"></script>
    <script src="./face.js"></script>
  </head>
  <body>
      <h1>Face Detection</h1>
      <center>
        <button id="take-pic">Take Picture</button>
        <button id="detect-face">Detect</button>
      </center>
      <canvas id="image"></canvas>
      <video id="video" width="512" height="512" autoplay></video>
      <script>
        var canvas = document.getElementById("image");
        var video = document.getElementById("video");
        var ctx = canvas.getContext("2d");
        canvas.width = canvas.height = 512;

        var btn = document.getElementById("take-pic");
        var detect = document.getElementById("detect-face");

        btn.addEventListener("click", function() {
          FaceDetect.takePicture();
        }, false);

        detect.addEventListener("click", function() {
          FaceDetect.detectFace();
        }, false);
      </script>
  </body>
</html>
```

Our `face.js` file will include an object called `FaceDetect` that holds the functions we will call. The first important function, `calculateII`, first converts the canvas pixels to grayscale for better processing. It then iterates through the entire `pixels` array and calculates the summed-area table, or integral image.

```javascript
var FaceDetect = {
  integral: [],
  integralSquared: [],
  calculateII: function() {
    var el = this;
    var w = canvas.width;
    var h = canvas.height;
    var imgData = ctx.getImageData(0, 0, w, h);
    var pixels = imgData.data;
    var ii = [];
    var ii2 = [];
    for(var y = 0; y < h; y++) {
      ii[y] = [];
      ii2[y] = [];
      var rowSum = 0;
      var rowSum2 = 0;
      for(var x = 0; x < w; x++) {
        // four channels per pixel; the row stride is the width
        var i = (y * w + x) * 4;
        var g = pixels[i] * .3;
        g += pixels[i+1] * .59;
        g += pixels[i+2] * .11;
        pixels[i] = g;
        pixels[i+1] = g;
        pixels[i+2] = g;
        var idy = ((y - 1) > 0) ? y - 1 : 0;
        var iiY = (ii[idy][x]) ? ii[idy][x] : 0;
        var iiY2 = (ii2[idy][x]) ? ii2[idy][x] : 0;
        rowSum += (g * 3);
        rowSum2 += (g * 3) * (g * 3);
        ii[y][x] = rowSum + iiY;
        ii2[y][x] = rowSum2 + iiY2;
      }
    }
    this.integral = ii;
    this.integralSquared = ii2;
    ctx.putImageData(imgData, 0, 0);
    this.calculateThreshold(4, 1.25, 2);
  }
};
```

The next function, `calculateThreshold`, begins at the smallest block size at position (0, 0) and loops through the x and y positions within the canvas. At each position it evaluates all stages and exits as soon as a stage threshold is not met. There are a total of 22 stages that must all be passed for a face to be detected. If a face is detected, its block size and x/y coordinates are pushed into the `dataPoints` array. We will use this data to draw rectangles on the canvas where the face appears.
The function `calculateThreshold` calls `evalStage` at each x/y coordinate to evaluate whether all stages are met.

```javascript
calculateThreshold: function(scale, scaleFactor, stepSize) {
  var el = this;
  var data = cascade;
  var size = data.cascadeSize;
  var w = canvas.width;
  var h = canvas.height;
  var minWidth = size.width;
  var minHeight = size.height;
  var blockWidth = (scale * minWidth) | 0;
  var blockHeight = (scale * minHeight) | 0;
  var dataPoints = [];

  // continue until block sizes reach canvas
  // width and height values
  while(blockWidth < w && blockHeight < h) {
    var step = (scale * stepSize + 0.5) | 0;
    for(var y = 0; y < (h - blockHeight); y += step) {
      for(var x = 0; x < (w - blockWidth); x += step) {
        if(el.evalStage(data, x, y, blockWidth, blockHeight, scale)) {
          dataPoints.push({
            width: blockWidth,
            height: blockHeight,
            x: x,
            y: y
          });
        }
      }
    }
    scale *= scaleFactor;
    blockWidth = (scale * minWidth) | 0;
    blockHeight = (scale * minHeight) | 0;
  }
  if(dataPoints.length) {
    el.drawRectangles(dataPoints);
  }
}
```

The final step in face detection is actually iterating through our cascade data, calculating the sum of a given area, and checking whether it exceeds some threshold. The `evalStage` function iterates through all stages and returns early if a `stageSum` does not exceed its stage threshold.
In each stage, there are nodes that contain `rects`, rectangles with x/y coordinates and a width/height, which are traced within the block passed to the `evalStage` function.

```javascript
// note: the call site passes (data, x, y, ...), so the
// parameters are declared in that order here
evalStage: function(data, x, y, bW, bH, s) {
  var el = this;
  var ii = el.integral;
  var ii2 = el.integralSquared;
  var nodeCount = 0;
  var inverseArea = 1.0 / (bW * bH);
  // normalize against the block's mean and standard deviation;
  // remember, in the integral image the first index is the y
  // and the second is the x => ii[y][x]
  var mean = (ii[y+bH][x+bW] - ii[y][x+bW] - ii[y+bH][x] + ii[y][x]) * inverseArea;
  var meanSquared = (ii2[y+bH][x+bW] - ii2[y][x+bW] - ii2[y+bH][x] + ii2[y][x]) * inverseArea;
  var variance = meanSquared - (mean * mean);
  var deviation = 1;
  if(variance > 0) {
    deviation = Math.sqrt(variance);
  }
  for(var a = 0; a < data.nstages; a++) {
    var stage = data.stages[a];
    var stageSum = 0;
    var sThreshold = stage.stageThreshold;
    for(var b = 0; b < stage.nnodes; b++) {
      var node = stage.nodes[b];
      var nThreshold = node.threshold;
      var nLeft = node.left_val;
      var nRight = node.right_val;
      var rects = data.rects[nodeCount].data;
      var rectSum = 0;
      for(var c = 0; c < rects.length; c++) {
        // scale each rectangle to the current block size
        var r = rects[c];
        var rect = r.replace(".", "").split(" ");
        var rL = (x + rect[0] * s + 0.5) | 0;
        var rT = (y + rect[1] * s + 0.5) | 0;
        var rW = (rect[2] * s + 0.5) | 0;
        var rH = (rect[3] * s + 0.5) | 0;
        var rWeight = parseFloat(rect[4]);
        // corner lookups of the integral image: A is the
        // top-left, B the top-right, C the bottom-left and
        // D the bottom-right of the scaled rectangle
        var iiA = ii[rT][rL];
        var iiB = ii[rT][rL+rW];
        var iiC = ii[rT+rH][rL];
        var iiD = ii[rT+rH][rL+rW];
        // rectangle sum = D - B - C + A
        rectSum += (iiD - iiB - iiC + iiA) * rWeight;
      }
      nodeCount++;
      if(rectSum * inverseArea < nThreshold * deviation) {
        stageSum += nLeft;
      } else {
        stageSum += nRight;
      }
    }
    if(a > el.highestStage) {
      el.highestStage = a;
    }
    if(stageSum < sThreshold) {
      return false;
    }
  }
  return true;
}
```

Here is the final version of `face.js` that includes all functions and variables. Notice the additional `drawRectangles` function, which loops through all data points and draws the rectangles onto the canvas. Also note the `init` function, which grabs a stream from the user's webcam using `getUserMedia` and attaches it to the video element, since we can't obtain the pixels directly without first drawing onto the canvas. The full source code for this project is available on [github](https://github.com/chr8993/FaceDetection.git) and I recommend checking it out as it may have additional code or comments.

```javascript
var FaceDetect = {
  integral: [],
  integralSquared: [],
  stream: null,
  init: function() {
    var el = this;
    var c = {
      audio: false,
      video: {
        width: { ideal: 512 },
        height: { ideal: 512 }
      }
    };
    navigator.mediaDevices.getUserMedia(c)
      .then(function(stream) {
        el.stream = stream;
        // modern browsers attach a MediaStream via srcObject;
        // URL.createObjectURL(stream) is no longer supported
        video.srcObject = stream;
      });
  },
  takePicture: function() {
    ctx.drawImage(video, 0, 0);
  },
  calculateII: function() {
    var el = this;
    var w = canvas.width;
    var h = canvas.height;
    var imgData = ctx.getImageData(0, 0, w, h);
    var pixels = imgData.data;
    var ii = [];
    var ii2 = [];
    for(var y = 0; y < h; y++) {
      ii[y] = [];
      ii2[y] = [];
      var rowSum = 0;
      var rowSum2 = 0;
      for(var x = 0; x < w; x++) {
        // four channels per pixel; the row stride is the width
        var i = (y * w + x) * 4;
        var g = pixels[i] * .3;
        g += pixels[i+1] * .59;
        g += pixels[i+2] * .11;
        pixels[i] = g;
        pixels[i+1] = g;
        pixels[i+2] = g;
        var idy = ((y - 1) > 0) ? y - 1 : 0;
        var iiY = (ii[idy][x]) ? ii[idy][x] : 0;
        var iiY2 = (ii2[idy][x]) ? ii2[idy][x] : 0;
        rowSum += (g * 3);
        rowSum2 += (g * 3) * (g * 3);
        ii[y][x] = rowSum + iiY;
        ii2[y][x] = rowSum2 + iiY2;
      }
    }
    this.integral = ii;
    this.integralSquared = ii2;
    ctx.putImageData(imgData, 0, 0);
    this.calculateThreshold(4, 1.25, 2);
  },
  /**
   * @function calculateThreshold
   *
   * @param scale - the initial scale to multiply
   * the block width and height by
   *
   * @param scaleFactor - how much to increase the
   * scale after looping through the x, y positions
   * of the canvas with stepSize
   *
   * @param stepSize - the amount to jump between
   * x, y coordinates when looping through canvas pixels
   */
  calculateThreshold: function(scale, scaleFactor, stepSize) {
    var el = this;
    var data = cascade;
    var size = data.cascadeSize;
    var w = canvas.width;
    var h = canvas.height;
    var minWidth = size.width;
    var minHeight = size.height;
    var blockWidth = (scale * minWidth) | 0;
    var blockHeight = (scale * minHeight) | 0;
    var dataPoints = [];

    // continue until block sizes reach canvas
    // width and height values
    while(blockWidth < w && blockHeight < h) {
      var step = (scale * stepSize + 0.5) | 0;
      for(var y = 0; y < (h - blockHeight); y += step) {
        for(var x = 0; x < (w - blockWidth); x += step) {
          if(el.evalStage(data, x, y, blockWidth, blockHeight, scale)) {
            dataPoints.push({
              width: blockWidth,
              height: blockHeight,
              x: x,
              y: y
            });
          }
        }
      }
      scale *= scaleFactor;
      blockWidth = (scale * minWidth) | 0;
      blockHeight = (scale * minHeight) | 0;
    }
    if(dataPoints.length) {
      el.drawRectangles(dataPoints);
    }
  },
  highestStage: 0,
  // the call site passes (data, x, y, ...), so the
  // parameters are declared in that order here
  evalStage: function(data, x, y, bW, bH, s) {
    var el = this;
    var ii = el.integral;
    var ii2 = el.integralSquared;
    var nodeCount = 0;
    var inverseArea = 1.0 / (bW * bH);
    // normalize against the block's mean and standard deviation;
    // remember, in the integral image the first index is the y
    // and the second is the x => ii[y][x]
    var mean = (ii[y+bH][x+bW] - ii[y][x+bW] - ii[y+bH][x] + ii[y][x]) * inverseArea;
    var meanSquared = (ii2[y+bH][x+bW] - ii2[y][x+bW] - ii2[y+bH][x] + ii2[y][x]) * inverseArea;
    var variance = meanSquared - (mean * mean);
    var deviation = 1;
    if(variance > 0) {
      deviation = Math.sqrt(variance);
    }
    for(var a = 0; a < data.nstages; a++) {
      var stage = data.stages[a];
      var stageSum = 0;
      var sThreshold = stage.stageThreshold;
      for(var b = 0; b < stage.nnodes; b++) {
        var node = stage.nodes[b];
        var nThreshold = node.threshold;
        var nLeft = node.left_val;
        var nRight = node.right_val;
        var rects = data.rects[nodeCount].data;
        var rectSum = 0;
        for(var c = 0; c < rects.length; c++) {
          // scale each rectangle to the current block size
          var r = rects[c];
          var rect = r.replace(".", "").split(" ");
          var rL = (x + rect[0] * s + 0.5) | 0;
          var rT = (y + rect[1] * s + 0.5) | 0;
          var rW = (rect[2] * s + 0.5) | 0;
          var rH = (rect[3] * s + 0.5) | 0;
          var rWeight = parseFloat(rect[4]);
          var iiA = ii[rT][rL];
          var iiB = ii[rT][rL+rW];
          var iiC = ii[rT+rH][rL];
          var iiD = ii[rT+rH][rL+rW];
          // rectangle sum = D - B - C + A
          rectSum += (iiD - iiB - iiC + iiA) * rWeight;
        }
        nodeCount++;
        if(rectSum * inverseArea < nThreshold * deviation) {
          stageSum += nLeft;
        } else {
          stageSum += nRight;
        }
      }
      if(a > el.highestStage) {
        el.highestStage = a;
      }
      if(stageSum < sThreshold) {
        return false;
      }
    }
    return true;
  },
  drawRectangles: function(data) {
    // start a fresh path so repeated detections
    // don't accumulate old rectangles
    ctx.beginPath();
    for(var a = 0; a < data.length; a++) {
      var d = data[a];
      ctx.rect(d.x, d.y, d.width, d.height);
    }
    ctx.stroke();
  },
  detectFace: function() {
    this.calculateII();
  }
};
FaceDetect.init();
```