Microsoft Cognitive Services (MCS) allows you to tap into the power of Machine Learning and perform sophisticated analysis of photographs, simply by calling a web service.

The Face API in MCS returns an array of all the faces found in a photo, along with information about each face, such as the location of the eyes, nose, and mouth; the age and gender of the person; and information about eyeglasses and facial hair.

You can sign up to use MCS for free at https://www.microsoft.com/cognitive-services/. Information specific to the Face API can be found at https://www.microsoft.com/cognitive-services/en-us/face-api (Fig. 1).

FaceAPI02-SubscriptionKey 
Fig. 1

To use the Face API, click the [Get Started for Free] button. You will see a list of subscription keys. Scroll down to the "Face" section and click "Copy" next to one of the Face subscription keys to save it to your clipboard, or click "Show" to reveal the key.

FaceAPI01-APIpage 
Fig. 2

To call the Face API, send an HTTP POST request to https://api.projectoxford.ai/face/v1.0/detect

You may add optional querystring parameters to the above URL:

returnFaceId: If set to "true", the web service will return a GUID representing the face, so that you can make repeated inquiries about this face.

returnFaceLandmarks: If set to "true", the web service will return a "faceLandmarks" object containing a list of points identifying the locations of the edges of the eyes, eyebrows, nose, and mouth.

returnFaceAttributes: A comma-delimited list of face attributes the web service should return. Allowable attributes are age (an estimated age, in years), gender, smile, facialHair, headPose, and glasses.

The service will always return a rectangle identifying the outline of the face. Adding more properties to return will, of course, slow down both the computation and the download of the data.
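Rather than hand-writing the query string, you could also assemble it with jQuery's $.param helper. This is just a sketch; it produces the same parameters used in the full example later in this article:

var params = {
    returnFaceId: true,
    returnFaceLandmarks: true,
    returnFaceAttributes: "age,gender,smile,facialHair,headPose,glasses"
};
// $.param serializes the object into a URL-encoded query string
var webSvcUrl = "https://api.projectoxford.ai/face/v1.0/detect?" + $.param(params);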

You must pass the subscription key (described above) in the header of your HTTP request as in the following example:

Ocp-Apim-Subscription-Key:52b24a988a179f13a25aac4713aec800 

The photo itself will be in the body of the POST request. In the content-type header, you specify how you plan to send the photo to the server. If you plan to send the URL of a photo, set the content-type to "application/json"; if you plan to send the photo as binary data, set the content-type to "application/octet-stream".

If you selected "application/json" as the content type, send the URL in the following JSON format. (The URL must be publicly accessible, so that the service can retrieve the image.)

{ "Url": "/themes/Giard/images/logo.png"}

If successful, the web service will return (formatted as JSON) an array of "face" objects - one for each face detected in the photo. Each object will contain top, left, height, and width values defining a rectangle that outlines the face. If you requested more data (e.g., Face ID, Face Landmarks, and Face Attributes), that data will also be returned in each face object.
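For reference, here is an abridged sketch of the shape of that response. The values are placeholders, and most of the landmark and attribute properties are omitted:

[
  {
    "faceId": "00000000-0000-0000-0000-000000000000",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceLandmarks": {
      "pupilLeft": { "x": 223.6, "y": 99.3 },
      "pupilRight": { "x": 280.4, "y": 95.7 },
      ...
    },
    "faceAttributes": {
      "age": 35.0,
      "gender": "male",
      "smile": 0.9,
      "glasses": "NoGlasses",
      ...
    }
  }
]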

Below is an example of JavaScript / jQuery code to call this API.

var subscriptionKey = "Copy your Subscription key here"; 
 
var imageUrl = "/themes/Giard/images/logo.png";
 
var webSvcUrl = "https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile,facialHair,headPose,glasses";
 
var outputDiv = $("#OutputDiv");
outputDiv.text("Thinking..."); 
 
$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
    contentType: "application/json",
    data: '{ "Url": "' + imageUrl + '" }'
}).done(function (data) { 
 
    if (data.length > 0) {
        var firstFace = data[0];
        var faceId = firstFace.faceId;
        var faceRectangle = firstFace.faceRectangle;
        var faceWidth = faceRectangle.width;
        var faceHeight = faceRectangle.height;
        var faceLeft = faceRectangle.left;
        var faceTop = faceRectangle.top;

        var faceLandmarks = firstFace.faceLandmarks;
        var faceAttributes = firstFace.faceAttributes;

        var leftPupil = faceLandmarks.pupilLeft;
        var rightPupil = faceLandmarks.pupilRight;
        var nose = faceLandmarks.noseLeftAlarOutTip;
        var mouthLeft = faceLandmarks.mouthLeft;
        var mouthRight = faceLandmarks.mouthRight;
        var mouthTop = faceLandmarks.upperLipTop;
        var mouthBottom = faceLandmarks.underLipBottom;

        var leftEyeWidth = faceLandmarks.eyebrowLeftInner.x - faceLandmarks.eyebrowLeftOuter.x;
        var rightEyeWidth = faceLandmarks.eyebrowRightOuter.x - faceLandmarks.eyebrowRightInner.x;
        var mouthWidth = mouthRight.x - mouthLeft.x;
 
        var outputText = "";
        outputText += "Face ID: " + faceId + "
"
;
        outputText += "Top: " + faceTop + "
"
;
        outputText += "Left: " + faceLeft + "
"
;
        outputText += "Width: " + faceWidth + "
"
;
        outputText += "Height: " + faceHeight + "
"
;
        outputText += "Right Pupil: " + rightPupil.x + ", " + rightPupil.y + "
"
;
        outputText += "Left Pupil: " + leftPupil.x + ", " + leftPupil.y + "
"
;
        outputText += "Mouth: 
"
;
        outputText += " -Left: " + mouthLeft.x + ", " + mouthLeft.y + "
"
;
        outputText += " -Right: " + mouthRight.x + ", " + mouthRight.y + "
"
;
        outputText += " -Top: " + mouthTop.x + ", " + mouthTop.y + "
"
;
        outputText += " -Bottom: " + mouthBottom.x + ", " + mouthBottom.y + "
"
;
        outputText += "Attributes:" + "
"
;
        outputText += "age: " + faceAttributes.age + "
"
;
        outputText += "gender: " + faceAttributes.gender + "
"
;
        outputText += "smile: " + (faceAttributes.smile || "n/a") + "
"
;
        outputText += "glasses: " + faceAttributes.glasses + "
"
;
        outputDiv.html(outputText); 
 
    }
    else {
        outputDiv.text("No faces detected."); 
 
    } 
 
}).fail(function (err) {
    $("#OutputDiv").text("ERROR! " + err.responseText);
});
 
The web service call is performed by the following lines:
 
$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
    contentType: "application/json",
    data: '{ "Url": "' + imageUrl + '" }' 
 

This request is asynchronous, so the "done" function is called when the service returns successfully.

}).done(function (data) { 

The function tied to the "done" event parses through the returned JSON and displays it on the screen.

If an error occurs, we output a simple error message to the user in the "fail" function.

}).fail(function (err) {
    $("#OutputDiv").text("ERROR! " + err.responseText);
});

The rest of the code above simply grabs the first face in the JSON array and drills down into properties of that face, displaying those properties in a DIV on the page.
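If you want to handle every face rather than just the first, you can loop over the array instead. A quick sketch, using the same outputDiv variable as the example above:

var outputText = "";
$.each(data, function (i, face) {
    // each element of the array is one detected face
    var r = face.faceRectangle;
    outputText += "Face " + (i + 1) + ": " + r.width + "x" + r.height +
        " at (" + r.left + ", " + r.top + ")<br>";
});
outputDiv.html(outputText);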

For example, in the attached site.css stylesheet, I've defined two selectors - #Rectangle and .FaceLabel - that initially hide objects on the page via the display: none style. These selectors also set the position to "absolute", allowing us to position the objects exactly where we want within a container. (For this to work, the container itself must be positioned, e.g., position: relative, so that the coordinates are relative to the photo rather than the page.) The z-index is increased so that these items will appear on top of the face in the picture. The relevant CSS is shown below:

#Rectangle {
    opacity: 0.3;
    z-index: 10;
    position: absolute;
    display: none;
}

.FaceLabel {
    position: absolute;
    z-index: 20;
    display: none;
    font-size: 8px;
    margin: 0px;
    padding: 1px;
    background-color: white;
    color: black;
}

Our page contains objects with these classes and IDs to label the parts of the face identified by the API. Initially, they are invisible; they remain hidden until we determine where to place them using the information returned by the Face API.

<div id="PhotoDiv">
    <img id="ImageToAnalyze" src="images/CartoonFace.png">
    <div class="FaceLabel" id="LeftEyeDiv">LEFTdiv>
    <div class="FaceLabel" id="RightEyeDiv">RIGHTdiv>
    <div class="FaceLabel" id="NoseDiv">NOSEdiv>
    <div class="FaceLabel" id="MouthDiv">MOUTHdiv>
    <img src="images/Rectangle.png" id="Rectangle">
div>

When the call to the Face API web service returns successfully, we drill down into the returned JSON to find the outline of the face and the locations of the eyes, nose, and mouth. Then, we make these objects visible (set the display style to "block") and place them over the corresponding facial features (set the "top" and "left" styles). In the case of the Rectangle image, we also resize it to cover the detected face. The rectangle's "opacity" style is 0.3, making it translucent enough to see the face behind it. Here is the JavaScript to accomplish this:

$("#Rectangle").css("top", faceTop);
$("#Rectangle").css("left", faceLeft);
$("#Rectangle").css("height", faceHeight);
$("#Rectangle").css("width", faceHeight);
$("#Rectangle").css("display", "block");
$("#LeftEyeDiv").css("top", leftPupil.y);
$("#LeftEyeDiv").css("left", leftPupil.x);
$("#LeftEyeDiv").css("display", "block");
$("#RightEyeDiv").css("top", rightPupil.y);
$("#RightEyeDiv").css("left", rightPupil.x);
$("#RightEyeDiv").css("display", "block");
$("#NoseDiv").css("top", nose.y);
$("#NoseDiv").css("left", noseHorizontalCenter);
$("#NoseDiv").css("display", "block");
$("#MouthDiv").css("top", mouthVerticalCenter);
$("#MouthDiv").css("left", mouthTop.x);
$("#MouthDiv").css("display", "block");

Below is the output of the web page analyzing a photo of my face:

FaceApi03-WebPage
Fig. 3

As you can see, calling the Cognitive Services Face API is a simple matter of making a call to a web service and reading the JSON data returned by that service.

You can find this code in my GitHub repository.