# Sunday, 15 May 2016

In an earlier article, I described how to call the Cognitive Services API from JavaScript. The key parts of the code (using the jQuery library) are shown below:

var subscriptionKey = "cd529ca0a97f48b8a1f3bc36ecd73600";
var imageUrl = $("#imageUrlTextbox").val();
var webSvcUrl = "https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile,facialHair,headPose,glasses";
$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
    contentType: "application/json",
    data: '{ "Url": "' + imageUrl + '" }'
}).done(function (data) {
    // ...
    // Code to run when web service returns data...
    // ...
});

Do you spot the problem with this code?

Although it works very well from a functional point of view, it has a security flaw: as with all Cognitive Services APIs, the Face API requires you to pass a key in the header of your HTTP request. Because there is no easy way to hide this key, it is easy for a hacker to steal it. Using your key, they can make calls and charge them to your account - either using up your quota of free calls or (worse) charging your credit card for their calls.

One way to hide the key is to make the call to Cognitive Services from a custom web service. This web service can be a simple pass-through, but it allows you to hide the key on the server, where it is much more difficult for a hacker to find it.

We can use Web API to create this custom web service. With Web API, it is a simple matter to store the Subscription Key in a secure configuration file and retrieve it at run time. The relevant code goes in an ApiController class, as shown below.

// POST: api/Face
public IEnumerable<Face> Post([FromBody]string imageUrl)
{
    return GetFaceData(imageUrl);
}

public static IEnumerable<Face> GetFaceData(string imageUrl)
{
    var webSvcUrl = "https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile,facialHair,headPose,glasses";
    string subscriptionKey = ConfigurationManager.AppSettings["SubscriptionKey"];
    if (subscriptionKey == null)
    {
        throw new ConfigurationErrorsException("Web Service is missing Subscription Key");
    }
    WebRequest request = WebRequest.Create(webSvcUrl);
    request.Method = "POST";
    request.ContentType = "application/json";
    request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
    using (var streamWriter = new StreamWriter(request.GetRequestStream()))
    {
        string json = "{"
            + "\"url\": \"" + imageUrl + "\""
            + "}";
        streamWriter.Write(json);
    }
    var httpResponse = (HttpWebResponse)request.GetResponse();
    string result = "";
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        result = streamReader.ReadToEnd();
    }
    List<Face> faces = JsonConvert.DeserializeObject<List<Face>>(result);
    return faces;
}
Of course, we could also use the .NET library to abstract the call to the Face API, but I wanted to keep this code as close to the original JavaScript code as possible.  This method returns a List of Face objects. The Face object maps to the JSON data returned by the Face API. You can see this class below.

public class Face
{
    public string FaceId { get; set; }
    public FaceRectangle FaceRectangle { get; set; }
    public FaceLandmarks FaceLandmarks { get; set; }
    public FaceAttributes FaceAttributes { get; set; }
}

Now, we can modify our JavaScript code to call our custom API, instead of calling Face API directly, and we can eliminate the need to pass the key from the client-side JavaScript, as shown below:

var imageUrl = $("#imageUrlTextbox").val();
var webSvcUrl = "api/face";
$.ajax({
    type: "POST",
    url: webSvcUrl,
    data: JSON.stringify(imageUrl),
    contentType: "application/json;charset=utf-8"
}).done(function (data) {
    // ...
    // Code to run when web service returns data...
    // ...
});

In this listing, we have removed the Subscription-Key from the header and we have changed the web service URL. This JavaScript is calling a web service in its own domain, so it is trusted. The default cross-origin settings of most web servers will keep JavaScript outside this domain from accessing our custom web service.

This pattern works with any of the Cognitive Services APIs. In fact, it works with any API that requires the client to pass a secure key. We can abstract away the direct call to that API and hide the key on the server, making our code more secure.


  • You can find the full code of the sample described above here.
  • You can find the code for the “unprotected” sample here.
Sunday, 15 May 2016 00:50:49 (GMT Daylight Time, UTC+01:00)
# Friday, 13 May 2016

Microsoft Cognitive Services (MCS) allows you to tap into the power of Machine Learning and perform sophisticated analysis of photographs, simply by calling a web service.

The Face API in MCS returns an array of all the faces found in a photo, along with information about each face, such as the location of the eyes, nose, and mouth; the age and gender of the person, and information about eyeglasses and facial hair.

You can sign up to use MCS for free at https://www.microsoft.com/cognitive-services/. Information specific to the Face API can be found at https://www.microsoft.com/cognitive-services/en-us/face-api. (fig 1)

Fig. 1

To use the Face API, click the [Get Started for Free] button. You will see a list of subscription keys. Scroll down to the "Face" section and click "Copy" next to one of the Face subscription keys to save it to your clipboard, or click "Show" to reveal the key.

Fig. 2

To call the Face API, send an HTTP POST request to https://api.projectoxford.ai/face/v1.0/detect

You may add optional querystring parameters to the above URL:

returnFaceId: If set to "true", the web service will return a GUID representing the face, so that you can make repeated inquiries about this face.

returnFaceLandmarks: If set to "true", the web service will return a "faceLandmarks" object containing a list of points identifying the locations of the edges of the eyes, eyebrows, nose, and mouth.

returnFaceAttributes: A comma-delimited list of face attributes the web service should return. Allowable attributes are age (an age number in years), gender, smile, facialHair, headPose, and glasses.

The service will always return a rectangle identifying the outline of the face. Adding more properties to return will, of course, slow down both the computation and the download of the data.
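To keep these options straight, the querystring can be assembled with a small helper function. This is only a sketch; buildDetectUrl and the shape of its options object are my own invention, not part of any Microsoft library:

```javascript
// Build a Face API detect URL from a set of options.
// buildDetectUrl is a hypothetical helper, not part of any SDK.
function buildDetectUrl(options) {
    var baseUrl = "https://api.projectoxford.ai/face/v1.0/detect";
    var params = [];
    if (options.returnFaceId) {
        params.push("returnFaceId=true");
    }
    if (options.returnFaceLandmarks) {
        params.push("returnFaceLandmarks=true");
    }
    if (options.returnFaceAttributes && options.returnFaceAttributes.length > 0) {
        // The attribute list is comma-delimited in the querystring
        params.push("returnFaceAttributes=" + options.returnFaceAttributes.join(","));
    }
    return params.length > 0 ? baseUrl + "?" + params.join("&") : baseUrl;
}

// Reproduces the URL used in the code samples in this article:
var url = buildDetectUrl({
    returnFaceId: true,
    returnFaceLandmarks: true,
    returnFaceAttributes: ["age", "gender", "smile", "facialHair", "headPose", "glasses"]
});
```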

You must pass the subscription key (described above) in the header of your HTTP request as in the following example:


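A raw request might look something like the following sketch (the key shown is a placeholder, not a real key):

```http
POST https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true HTTP/1.1
Content-Type: application/json
Ocp-Apim-Subscription-Key: 00000000000000000000000000000000

{ "Url": "http://davidgiard.com/themes/Giard/images/logo.png" }
```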
The photo itself will be in the body of the POST request. In the content-type header parameter, you can specify how you plan to send the photo to the server. If you plan to send a link to the URL of a photo, set the content-type to "application/json"; if you plan to send the photo as binary data, set the content-type to "application/octet-stream".

The body of the request contains the image. If you selected "application/json" as the content type, send the URL in the following JSON format:

{ "Url": "http://davidgiard.com/themes/Giard/images/logo.png"}

If successful, the web service will return (formatted as JSON) an array of "face" objects - one for each face detected in the photo. Each object will contain the top, left, height, and width values to define a rectangle outlining that face. If you declared that you want more data (e.g., Face ID, Face Landmarks, and Face Attributes), that data will also be returned in each face object.
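As an illustration only (the values below are made up and most landmark points are omitted), the response for a photo containing one face might look like this:

```json
[
  {
    "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
    "faceRectangle": { "top": 131, "left": 177, "width": 162, "height": 162 },
    "faceLandmarks": {
      "pupilLeft": { "x": 223.8, "y": 172.9 },
      "pupilRight": { "x": 286.5, "y": 171.3 },
      "mouthLeft": { "x": 229.8, "y": 259.9 },
      "mouthRight": { "x": 290.6, "y": 256.6 }
    },
    "faceAttributes": {
      "age": 35.0,
      "gender": "male",
      "smile": 0.88,
      "glasses": "NoGlasses"
    }
  }
]
```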

Below is an example of JavaScript / jQuery code to call this API.

var subscriptionKey = "Copy your Subscription key here";
var imageUrl = "http://davidgiard.com/themes/Giard/images/logo.png";
var webSvcUrl = "https://api.projectoxford.ai/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile,facialHair,headPose,glasses";
var outputDiv = $("#OutputDiv");
$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
    contentType: "application/json",
    data: '{ "Url": "' + imageUrl + '" }'
}).done(function (data) {
    if (data.length > 0) {
        var firstFace = data[0];
        var faceId = firstFace.faceId;
        var faceRectangle = firstFace.faceRectangle;
        var faceWidth = faceRectangle.width;
        var faceHeight = faceRectangle.height;
        var faceLeft = faceRectangle.left;
        var faceTop = faceRectangle.top;
        var faceLandmarks = firstFace.faceLandmarks;
        var faceAttributes = firstFace.faceAttributes;
        var leftPupil = faceLandmarks.pupilLeft;
        var rightPupil = faceLandmarks.pupilRight;
        var nose = faceLandmarks.noseLeftAlarOutTip;
        var leftEyeWidth = faceLandmarks.eyebrowLeftInner.x - faceLandmarks.eyebrowLeftOuter.x;
        var rightEyeWidth = faceLandmarks.eyebrowRightOuter.x - faceLandmarks.eyebrowRightInner.x;
        var mouthLeft = faceLandmarks.mouthLeft;
        var mouthRight = faceLandmarks.mouthRight;
        var mouthTop = faceLandmarks.upperLipTop;
        var mouthBottom = faceLandmarks.underLipBottom;
        var mouthWidth = mouthRight.x - mouthLeft.x;
        var outputText = "";
        outputText += "Face ID: " + faceId + "<br>";
        outputText += "Top: " + faceTop + "<br>";
        outputText += "Left: " + faceLeft + "<br>";
        outputText += "Width: " + faceWidth + "<br>";
        outputText += "Height: " + faceHeight + "<br>";
        outputText += "Right Pupil: " + rightPupil.x + ", " + rightPupil.y + "<br>";
        outputText += "Left Pupil: " + leftPupil.x + ", " + leftPupil.y + "<br>";
        outputText += "Mouth: <br>";
        outputText += " -Left: " + mouthLeft.x + ", " + mouthLeft.y + "<br>";
        outputText += " -Right: " + mouthRight.x + ", " + mouthRight.y + "<br>";
        outputText += " -Top: " + mouthTop.x + ", " + mouthTop.y + "<br>";
        outputText += " -Bottom: " + mouthBottom.x + ", " + mouthBottom.y + "<br>";
        outputText += "Attributes:" + "<br>";
        outputText += "age: " + faceAttributes.age + "<br>";
        outputText += "gender: " + faceAttributes.gender + "<br>";
        outputText += "smile: " + (faceAttributes.smile || "n/a") + "<br>";
        outputText += "glasses: " + faceAttributes.glasses + "<br>";
        outputDiv.html(outputText);
    }
    else {
        outputDiv.text("No faces detected.");
    }
}).fail(function (err) {
    $("#OutputDiv").text("ERROR! " + err.responseText);
});
The web service call is performed by the following lines:

$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": subscriptionKey },
    contentType: "application/json",
    data: '{ "Url": "' + imageUrl + '" }'
})

This request is asynchronous, so the "done" function is called when it returns successfully.

}).done(function (data) { 

The function tied to the "done" event parses through the returned JSON and displays it on the screen.

If an error occurs, we output a simple error message to the user in the "fail" function.

}).fail(function (err) {
          $("#OutputDiv").text("ERROR!" + err.responseText);

The rest of the code above simply grabs the first face in the JSON array and drills down into properties of that face, displaying those properties in a DIV on the page.
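The same drill-down can be packaged as a small function. The sketch below (the function name and the sample face object are my own, for illustration) computes the mouth width the same way the code above does:

```javascript
// Compute the mouth width from a face object returned by the Face API.
// faceMouthWidth is a hypothetical helper, not part of any SDK.
function faceMouthWidth(face) {
    var landmarks = face.faceLandmarks;
    return landmarks.mouthRight.x - landmarks.mouthLeft.x;
}

// A minimal sample face object, shaped like a Face API response:
var sampleFace = {
    faceLandmarks: {
        mouthLeft: { x: 229.8, y: 259.9 },
        mouthRight: { x: 290.6, y: 256.6 }
    }
};

var width = faceMouthWidth(sampleFace); // about 60.8
```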

For example, in the attached site.css stylesheet, I’ve defined 2 classes - .Rectangle and .FaceLabel - that initially hide objects on the page via the display: none style. These classes also set the position to “absolute”, allowing us to position them exactly where we want within a container. The z-index is increased, so that these items will appear on top of the face in the picture. The relevant CSS is shown below:

.Rectangle {
    opacity: 0.3;
    z-index: 10;
    position: absolute;
    display: none;
}

.FaceLabel {
    position: absolute;
    z-index: 20;
    display: none;
    font-size: 8px;
    margin: 0px;
    background-color: white;
    color: black;
    padding: 1px;
}

Our page contains objects with these classes to label the parts of the face identified. Initially, they will be invisible until we determine where to place them using the information returned by the Face API.

<div id="PhotoDiv">
    <img id="ImageToAnalyze" src="images/CartoonFace.png">
    <div class="FaceLabel" id="LeftEyeDiv">LEFT</div>
    <div class="FaceLabel" id="RightEyeDiv">RIGHT</div>
    <div class="FaceLabel" id="NoseDiv">NOSE</div>
    <div class="FaceLabel" id="MouthDiv">MOUTH</div>
    <img src="images/Rectangle.png" id="Rectangle">
</div>

When the call to the Face API web service returns successfully, we drill down into the returned JSON to find out the outline of the face and the location of the eyes, nose, and mouth. Then, we make these objects visible (set the display style to “block”)  and place them above the corresponding facial feature (set the “top” and “left” styles). In the case of the Rectangle image, we also resize it to cover the face detected. The rectangle’s “opacity” style is 0.3, making it translucent enough to see the face behind it. Here is the JavaScript to accomplish this:

$("#Rectangle").css("top", faceTop);
$("#Rectangle").css("left", faceLeft);
$("#Rectangle").css("height", faceHeight);
$("#Rectangle").css("width", faceWidth);
$("#Rectangle").css("display", "block");
$("#LeftEyeDiv").css("top", leftPupil.y);
$("#LeftEyeDiv").css("left", leftPupil.x);
$("#LeftEyeDiv").css("display", "block");
$("#RightEyeDiv").css("top", rightPupil.y);
$("#RightEyeDiv").css("left", rightPupil.x);
$("#RightEyeDiv").css("display", "block");
$("#NoseDiv").css("top", nose.y);
$("#NoseDiv").css("left", noseHorizontalCenter);
$("#NoseDiv").css("display", "block");
$("#MouthDiv").css("top", mouthVerticalCenter);
$("#MouthDiv").css("left", mouthTop.x);
$("#MouthDiv").css("display", "block");

Below is the output of the web page analyzing a photo of my face:


As you can see, calling the Cognitive Services Face API is a simple matter of making a call to a web service and reading the JSON data returned by that service.

You can find this code in my GitHub repository.

Friday, 13 May 2016 10:11:00 (GMT Daylight Time, UTC+01:00)
# Thursday, 12 May 2016

Microsoft Cognitive Services is a set of APIs built on Machine Learning and exposed as REST Web Services. The Speech API offers a way to listen to speech and convert it into text.

GCast 16: Cognitive Services - Speech to Text

Thursday, 12 May 2016 13:10:38 (GMT Daylight Time, UTC+01:00)
# Monday, 09 May 2016
Monday, 09 May 2016 16:07:00 (GMT Daylight Time, UTC+01:00)

Yesterday, I posted a code sample of the simplest web server I could create using node.js. After reading my post, Tugberk Ugurlu pointed me to http-server and his blog post on http-server.

Install http-server globally

Installing http-server is simple with the following command:

npm install http-server -g

The “-g” switch installs this library globally. I tried it without that switch, but it did not work. The advantage of installing it globally is that it will work in any new folder you create on your computer.

Create Web Files to Render

You can change to a folder on your local machine and create some web pages, JavaScript files, and CSS files, such as the 2 sample pages below:


<!-- index.html -->
<html>
    <body>
        Welcome to my site!
    </body>
</html>

<!-- about.html -->
<html>
    <body>
        This is a site!
    </body>
</html>

Start Web Server

Then, start a web server with the following command:

http-server

The server will notify you of the port on which it is running (8080 by default).

Test it Out

Now you can open a browser and navigate to a page, as shown below:


Notice that you don’t need to type the “.html” or “.htm” extension. The server is smart enough to know the page you are looking for.

It is also smart enough to assume a page with a name like “index.html”, “home.htm”, or “default.htm” is a default page and render this page when none is specified, as shown below:


This is a very simple way to quickly install, configure, and start a lightweight web server on your local PC for testing or creating static pages.

Monday, 09 May 2016 03:02:55 (GMT Daylight Time, UTC+01:00)
# Saturday, 07 May 2016

Often I want to set up a simple web server to render pages that consist only of client-side code, such as HTML, CSS, and JavaScript. In the past, I've created such pages using Visual Studio, but VS contains more functionality than I need in this case and that functionality can slow me down.

I found a lightweight web server solution in node.js, which is quick to install, quick to start up, and (relatively) simple to use. As a bonus, it will run on multiple operating systems.

Here are the steps I use to create a web server.

  1. Install node
  2. Create and open a folder for your pages
  3. Run npm init to create the package.json file that node requires
  4. Create a "views" folder for your pages and add any pages to it
  5. Install express and ejs
  6. Create a main script
  7. Create HTML pages
  8. Start the web server
  9. Try it out

1. Install node

You can download node from https://nodejs.org/en/download/. Select your operating system and run the installer. It is a small download and a quick install.

2. Create and open a folder for your pages.

In Windows, I use the command prompt for this. I create my projects in either the c:\development folder (if I plan to keep them) or the c:\test folder (if they are scratch projects for learning).

So, for a scratch project, I would use the following commands to create a folder named "nodetest1":
cd c:\test

md nodetest1
cd nodetest1 

3. Run npm init to create the package.json file that node requires

Node requires a file named package.json with important information about the project. A quick way to create this is with the command "npm init". It will prompt you for the information required by package.json, even providing default values for some of these settings. Press ENTER at a prompt to accept the default in parentheses, or override the default by typing in a new setting. Below is an example of me walking through this command, accepting mostly defaults.

npm init

This utility will walk you through creating a package.json file.
It only covers the most common items, and tries to guess sensible defaults.

See `npm help json` for definitive documentation on these fields
and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and
save it as a dependency in the package.json file.  See the session below for reference.

Press ^C at any time to quit.
name: (nodetest1)
version: (1.0.0)
description: best app evar
entry point: (index.js)
test command:
git repository:
author: David
license: (ISC)
About to write to C:\test\nodetesting\nodetest1\package.json:

{
  "name": "nodetest1",
  "version": "1.0.0",
  "description": "best app evar",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David",
  "license": "ISC"
}

Is this ok? (yes)

As you can see, the file you are creating is echoed to the screen before you confirm its creation.

4. Create a "views" folder for your pages and add any pages to it

The "md" command works for this in the command prompt:

md views 

5. Install express and ejs

Express and ejs are packages required for a simple web server. You can use the npm install command to install them. Add the "--save" switch to this command to update the dependencies in your package.json file. The following 2 commands install express and ejs packages, respectively.

npm install --save express
npm install --save ejs 

Check out the package.json and verify that these packages now appear in the dependencies section, as shown in the example below:

"dependencies": {
  "ejs": "^2.4.1",
  "express": "^4.13.4"
}

6. Create a main script.

Below is the simplest script I could create that serves up any page in the "views" folder and sets a default page. In the example above, I named it "index.js". You can name it whatever you want, but you should set the "main" attribute in package.json to the name of your file. I have commented each line to explain what it does.

// This script uses the express library
var express = require('express');
var app = express();
// Tell node where to find the view pages
app.set('views', __dirname + '/views');
// Tell node to use the ejs library to handle the views
app.set('view engine', 'ejs');
app.engine('html', require('ejs').renderFile);
// Set a default page (Optional, but good to have)
app.get('/', function (req, res) {
    res.render('index.html');
});
// Start the web server
var server = app.listen(3000, function () {
    console.log("Express is running on port 3000");
});
In package.json, you specified a main JavaScript file. (In the example above, I used the default "index.js". Open package.json to see what you set.) This is the script that will run when you run node.

7. Create HTML pages

In the views folder, create some HTML pages. The script above specified "index.html", so we will need to create that one. Create any other pages you like and name them with the ".htm" or ".html" extension. Here are 2 simple examples:


<!-- index.html -->
<html>
    <body>
        This is the INDEX page
    </body>
</html>

<!-- about.html -->
<html>
    <body>
        <h1>About Us</h1>
        We think therefore we are!
    </body>
</html>


8. Start the web server

To start your web server, simply type "node" followed by the name of your main script (e.g., "node index.js"). This will launch node and tell you the server is running on port 3000.

9. Try it out

Open a browser and type "http://localhost:3000/" in the address bar. You should see the default page (index.html) displayed. Here is the sample page rendered:


Type "http://localhost:3000/about.html" to see the "about" sample page described above. If you used my example, you will see the following:

At the command prompt, press CTRL+C at any time to stop the server, making these pages unavailable.

You can view this sample code at https://github.com/DavidGiard/simplenodewebserver.

Saturday, 07 May 2016 18:55:29 (GMT Daylight Time, UTC+01:00)
# Thursday, 05 May 2016

Microsoft Cognitive Services is a set of APIs based on Machine Learning and exposed as REST web services. The Emotion API analyzes pictures of faces and determines the emotion shown in each face.

GCast 15: Cognitive Services - Emotion API

Thursday, 05 May 2016 15:35:00 (GMT Daylight Time, UTC+01:00)
# Monday, 02 May 2016
Monday, 02 May 2016 16:21:00 (GMT Daylight Time, UTC+01:00)
# Sunday, 01 May 2016

Today I am grateful for a chance to be part of the Chicago Code Camp keynote yesterday.

Today I am grateful for:
-My first experience at a FIRST Robotics competition.
-An upgraded flight home last night.

Today I am grateful for an upgraded hotel room and my first taste of Peruvian food.

Today I am grateful:
-for dinner last night with Nick, Kathy, and Rick.
-that I arrived safely in St. Louis, despite a rocky start to my trip.
-to Nick for driving me to and from the airport last night.

Today I am grateful for a hot sauna at the end of a long day.

Today I am grateful for post-meetup drinks last night with a bunch of JavaScript developers I just met.

Today I am grateful for breakfast on my balcony in my pajamas.

Today I am grateful to spend time with Nick and Adriana in Chicago this weekend.

Today I am grateful for:
-Lunch in the South Loop with Scott yesterday
-Dinner in ChinaTown with Nick and Adriana last night

Today I am grateful for the variety of people I meet every day.

Today I am grateful for this view.

Today I am grateful to attend the AWS Summit yesterday with Brian and Kevin.

Today I am grateful that I got my car back with a repaired fender.

Today I am grateful to sit out on my balcony last night sipping a cocktail and enjoying a warm breeze.

Today I am grateful for a drink and some beer cheese soup last night with Katie, Justin, and Shira.

Today I am grateful for:
-Lunch at Wildberry with a bunch of folks from the dev community
-Dinner at a Colombian steakhouse in Lakeview.

Today I am grateful to see Johnny Clegg in concert last night, after 30+ years as a fan.

Today I am grateful for lunch with Nonnie yesterday.

Today I am grateful to get together with my team yesterday, which doesn't happen often enough.

Today I am grateful that all these family photos are now hanging on the walls of my new apartment.

Today I am grateful that my taxes are filed.

Today I am grateful for:
-The opportunity to deliver the keynote presentation at the Tech In 2016 conference yesterday.
-A front-row ticket to see Marshall Crenshaw last night.

Today I am grateful I was able to track down all my missing tax forms yesterday.

Today I am grateful for my new cuff links - the first I've received in years!

Today I am grateful for the power of ibuprofen.

Today I am grateful for dinner last night with Shelly and her family.

Today I am grateful to Brian for helping me prepare my upcoming presentation on Virtualization Containers.

Today I am grateful that I've been able to make it to the gym almost every day for the past 3 months.

Sunday, 01 May 2016 17:14:12 (GMT Daylight Time, UTC+01:00)
# Thursday, 28 April 2016

Microsoft Cognitive Services uses Machine Learning to analyze pictures and exposes this functionality via a RESTful web service, including the ability to generate an intelligently-cropped thumbnail of an image.

GCast 14: Cognitive Services - Thumbnail Generation

Thursday, 28 April 2016 12:12:00 (GMT Daylight Time, UTC+01:00)