# Sunday, 31 December 2017

12/31
Today I am grateful for lunch yesterday with my cousin Bob.

12/30
Today I am grateful to see Nick's Williams College basketball team play for the first time last night in California.

12/29
Today I am grateful for:
-Lunch yesterday with my cousin Barbara in San Juan Capistrano
-Watching a Spartan victory in the Holiday Bowl from the 50-yard line with my son Tim

12/28
Today I am grateful to see a Lakers-Grizzlies game last night on my first visit to the Staples Center.

12/27
Today I am grateful to see an excellent Roy Hargrove concert last night at the Jazz Showcase in the South Loop.

12/26
Today I am grateful to spend Christmas with my family.

12/25
Today I am grateful that we still celebrate the birth of Jesus Christ after all these years.

12/24
Today I am grateful for a Christmas Eve snowfall, and for the fact that I am not driving in it.

12/23
Today I am grateful for 3 Personal Training sessions this week - the last 3 of 2017!

12/22
Today I am grateful to see Roy Ayers in concert last night on my first visit to The Promontory in Hyde Park.

12/21
Today I am grateful to everyone who helped me get to 500 episodes on #TechnologyAndFriends

12/20
Today I am grateful for my first visit to the Argonne National Laboratory to attend a reception for David Danielson - clean energy entrepreneur and former Assistant Secretary of Energy.

12/19
Today I am grateful for an unseasonably warm Chicago December.

12/18
Today I am grateful to take Nick and Tim to a Blackhawks game last night - their first visit to the United Center.

12/17
Today I am grateful to spend yesterday with my sons.

12/16
Today I am grateful to spend some time at home.

12/15
Today I am grateful for the holiday party hosted by my apartment building last night.

12/14
Today I am grateful to spend a few days in Texas and to meet with folks at the University of Texas in Austin.

12/13
Today I am grateful to attend a home University of Texas basketball game for the first time.

12/12
Today I am grateful to see an exciting Pelicans-Rockets game last night - my first time at the Toyota Center!

12/11
Today I am grateful for:
-The hospitality and generosity of Paul
-Attending a home Texans game for the first time.

12/10
Today I am grateful for:
-The Uber driver who picked me up yesterday and took me to the airport after my first Uber driver ran out of gas on the way.
-The "Lights in the Heights" festival last night in Houston.

12/09
Today I am grateful for a kind and completely unexpected email last night.

12/08
Today I am grateful to attend the Chicago User Group Holiday Party last night.

12/07
Today I am grateful for a meaningful and enjoyable offsite with my team in Atlanta this week.

12/06
Today I am grateful for an excellent dinner last night in midtown Atlanta with my team.

12/05
Today I am grateful for great seats at my second Atlanta Hawks home game in the past week.

12/04
Today I am grateful for temperatures in the 50s in Chicago in December.

Sunday, 31 December 2017 13:03:27 (GMT Standard Time, UTC+00:00)
# Saturday, 30 December 2017

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language. One of these APIs is a REST web service that can determine the words and punctuation contained in a picture - all with a single web service call.

The Cognitive Services Optical Character Recognition (OCR) service is part of the Computer Vision API. It takes as input a picture of text and returns the words found in the image.

To get started, you will need an Azure account and a Cognitive Services Vision API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure account, follow the instructions in this article to generate a Cognitive Services Computer Vision key.

To use this API, you simply have to make a POST request to the following URL:
https://[location].api.cognitive.microsoft.com/vision/v1.0/ocr

where [location] is the Azure location where you created your API key (above).

Optionally, you can add the following 2 querystring parameters to the URL:

  • language: the 2-character language code. Use “en” for English. Currently, 25 languages are supported. If omitted, the service will attempt to auto-detect the language.
  • detectOrientation: set this to “true” if you want to support upside-down or rotated images.
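
For example, a request for English text in an image that might be rotated would POST to a URL like this (the westus region below is a placeholder; use the region where you created your key):

https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=en&detectOrientation=true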

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key
This is the Cognitive Services Computer Vision key you generated above.

Content-Type

This tells the service how you will send the image. The options are:

  • application/json
  • application/octet-stream
  • multipart/form-data

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

{"url":"imageurl"}
where imageurl is a public URL pointing to the image. For example, to perform OCR on an image of an Edgar Allan Poe poem, submit the following JSON:

{"url": "http://media.tumblr.com/tumblr_lrbhs0RY2o1qaaiuh.png"}


If you plan to send the image itself to the web service, set the content type to either "application/octet-stream" or “multipart/form-data” and submit the binary image in the body of the HTTP request.
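
Here is a minimal C# sketch of that second approach; the key, region, and file path below are placeholders, so substitute your own:

// Minimal sketch: POST a local image file to the OCR service as application/octet-stream.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class OcrFromFile
{
    static async Task Main()
    {
        string key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";  // your Computer Vision key
        string uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=en";
        byte[] imageBytes = File.ReadAllBytes(@"c:\images\poem.png");  // hypothetical local file

        using (var client = new HttpClient())
        using (var content = new ByteArrayContent(imageBytes))
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            HttpResponseMessage response = await client.PostAsync(uri, content);
            Console.WriteLine(await response.Content.ReadAsStringAsync());  // raw JSON result
        }
    }
}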

The full request looks something like:  

POST https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr HTTP/1.1
Content-Type: application/json
Host: westus.api.cognitive.microsoft.com
Content-Length: 62
Ocp-Apim-Subscription-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
{ "url": "http://media.tumblr.com/tumblr_lrbhs0RY2o1qaaiuh.png" }

For example, passing a URL with the following picture:

DreamWithinADream
(found online at http://media.tumblr.com/tumblr_lrbhs0RY2o1qaaiuh.png)

returned the following data: 

{
  "textAngle": 0.0,
  "orientation": "NotDetected",
  "language": "en",
  "regions": [
    {
      "boundingBox": "31,6,435,478",
      "lines": [
        {
          "boundingBox": "114,6,352,23",
          "words": [
            { "boundingBox": "114,6,24,22", "text": "A" },
            { "boundingBox": "144,6,93,23", "text": "Dream" },
            { "boundingBox": "245,6,95,23", "text": "Within" },
            { "boundingBox": "350,12,14,16", "text": "a" },
            { "boundingBox": "373,6,93,23", "text": "Dream" }
          ]
        },
        {
          "boundingBox": "31,50,187,16",
          "words": [
            { "boundingBox": "31,50,31,12", "text": "Take" },
            { "boundingBox": "66,50,23,12", "text": "this" },
            { "boundingBox": "93,50,24,12", "text": "kiss" },
            { "boundingBox": "121,54,33,12", "text": "upon" },
            { "boundingBox": "158,50,19,12", "text": "the" },
            { "boundingBox": "181,50,37,12", "text": "brow!" }
          ]
        },
        {
          "boundingBox": "31,67,194,16",
          "words": [
            { "boundingBox": "31,67,31,15", "text": "And," },
            { "boundingBox": "67,67,12,12", "text": "in" },
            { "boundingBox": "82,67,46,16", "text": "parting" },
            { "boundingBox": "132,67,31,12", "text": "from" },
            { "boundingBox": "167,71,25,12", "text": "you" },
            { "boundingBox": "195,71,30,11", "text": "now," }
          ]
        },
        {
          "boundingBox": "31,85,159,12",
          "words": [
            { "boundingBox": "31,85,32,12", "text": "Thus" },
            { "boundingBox": "67,85,35,12", "text": "much" },
            { "boundingBox": "107,86,16,11", "text": "let" },
            { "boundingBox": "126,89,20,8", "text": "me" },
            { "boundingBox": "150,89,40,8", "text": "avow-" }
          ]
        },
        {
          "boundingBox": "31,102,193,16",
          "words": [
            { "boundingBox": "31,103,26,11", "text": "You" },
            { "boundingBox": "61,106,19,8", "text": "are" },
            { "boundingBox": "84,104,21,10", "text": "not" },
            { "boundingBox": "109,106,44,12", "text": "wrong," },
            { "boundingBox": "158,102,27,12", "text": "who" },
            { "boundingBox": "189,102,35,12", "text": "deem" }
          ]
        },
        {
          "boundingBox": "31,120,214,16",
          "words": [
            { "boundingBox": "31,120,29,12", "text": "That" },
            { "boundingBox": "64,124,21,12", "text": "my" },
            { "boundingBox": "89,121,29,15", "text": "days" },
            { "boundingBox": "122,120,30,12", "text": "have" },
            { "boundingBox": "156,121,30,11", "text": "been" },
            { "boundingBox": "191,124,7,8", "text": "a" },
            { "boundingBox": "202,121,43,14", "text": "dream;" }
          ]
        },
        {
          "boundingBox": "31,138,175,16",
          "words": [
            { "boundingBox": "31,139,22,11", "text": "Yet" },
            { "boundingBox": "57,138,11,12", "text": "if" },
            { "boundingBox": "70,138,31,16", "text": "hope" },
            { "boundingBox": "105,138,21,12", "text": "has" },
            { "boundingBox": "131,138,37,12", "text": "flown" },
            { "boundingBox": "172,142,34,12", "text": "away" }
          ]
        },
        {
          "boundingBox": "31,155,140,16",
          "words": [
            { "boundingBox": "31,156,13,11", "text": "In" },
            { "boundingBox": "48,159,8,8", "text": "a" },
            { "boundingBox": "59,155,37,16", "text": "night," },
            { "boundingBox": "100,159,14,8", "text": "or" },
            { "boundingBox": "118,155,12,12", "text": "in" },
            { "boundingBox": "134,159,7,8", "text": "a" },
            { "boundingBox": "145,155,26,16", "text": "day," }
          ]
        },
        {
          "boundingBox": "31,173,144,15",
          "words": [
            { "boundingBox": "31,174,13,11", "text": "In" },
            { "boundingBox": "48,177,8,8", "text": "a" },
            { "boundingBox": "59,173,43,15", "text": "vision," },
            { "boundingBox": "107,177,13,8", "text": "or" },
            { "boundingBox": "124,173,12,12", "text": "in" },
            { "boundingBox": "140,177,35,11", "text": "none," }
          ]
        },
        {
          "boundingBox": "31,190,180,16",
          "words": [
            { "boundingBox": "31,191,11,11", "text": "Is" },
            { "boundingBox": "47,190,8,12", "text": "it" },
            { "boundingBox": "59,190,58,12", "text": "therefore" },
            { "boundingBox": "121,190,19,12", "text": "the" },
            { "boundingBox": "145,191,23,11", "text": "less" },
            { "boundingBox": "173,191,38,15", "text": "gone?" }
          ]
        },
        {
          "boundingBox": "31,208,150,12",
          "words": [
            { "boundingBox": "31,208,20,12", "text": "All" },
            { "boundingBox": "55,208,24,12", "text": "that" },
            { "boundingBox": "83,212,19,8", "text": "we" },
            { "boundingBox": "107,212,19,8", "text": "see" },
            { "boundingBox": "131,212,13,8", "text": "or" },
            { "boundingBox": "148,212,33,8", "text": "seem" }
          ]
        },
        {
          "boundingBox": "31,226,194,12",
          "words": [
            { "boundingBox": "31,227,11,11", "text": "Is" },
            { "boundingBox": "46,226,21,12", "text": "but" },
            { "boundingBox": "71,230,7,8", "text": "a" },
            { "boundingBox": "82,226,40,12", "text": "dream" },
            { "boundingBox": "126,226,41,12", "text": "within" },
            { "boundingBox": "171,230,7,8", "text": "a" },
            { "boundingBox": "182,226,43,12", "text": "dream." }
          ]
        },
        {
          "boundingBox": "31,261,133,12",
          "words": [
            { "boundingBox": "31,262,5,11", "text": "I" },
            { "boundingBox": "41,261,33,12", "text": "stand" },
            { "boundingBox": "78,261,32,12", "text": "amid" },
            { "boundingBox": "114,261,19,12", "text": "the" },
            { "boundingBox": "137,265,27,8", "text": "roar" }
          ]
        },
        {
          "boundingBox": "31,278,169,15",
          "words": [
            { "boundingBox": "31,278,18,12", "text": "Of" },
            { "boundingBox": "52,282,7,8", "text": "a" },
            { "boundingBox": "63,278,95,12", "text": "surf-tormented" },
            { "boundingBox": "162,278,38,15", "text": "shore," }
          ]
        },
        {
          "boundingBox": "31,296,174,15",
          "words": [
            { "boundingBox": "31,296,28,12", "text": "And" },
            { "boundingBox": "63,297,4,11", "text": "I" },
            { "boundingBox": "72,296,28,12", "text": "hold" },
            { "boundingBox": "104,296,41,12", "text": "within" },
            { "boundingBox": "149,300,20,11", "text": "my" },
            { "boundingBox": "173,296,32,12", "text": "hand" }
          ]
        },
        {
          "boundingBox": "31,314,169,16",
          "words": [
            { "boundingBox": "31,314,42,12", "text": "Grains" },
            { "boundingBox": "78,314,15,12", "text": "of" },
            { "boundingBox": "95,314,19,12", "text": "the" },
            { "boundingBox": "119,315,43,15", "text": "golden" },
            { "boundingBox": "167,314,33,12", "text": "sand-" }
          ]
        },
        {
          "boundingBox": "31,331,189,16",
          "words": [
            { "boundingBox": "31,332,31,11", "text": "How" },
            { "boundingBox": "66,331,28,12", "text": "few!" },
            { "boundingBox": "99,333,20,14", "text": "yet" },
            { "boundingBox": "123,331,27,12", "text": "how" },
            { "boundingBox": "154,331,28,16", "text": "they" },
            { "boundingBox": "186,335,34,12", "text": "creep" }
          ]
        },
        {
          "boundingBox": "31,349,206,16",
          "words": [
            { "boundingBox": "31,349,55,16", "text": "Through" },
            { "boundingBox": "90,353,20,11", "text": "my" },
            { "boundingBox": "115,349,44,16", "text": "fingers" },
            { "boundingBox": "163,351,12,10", "text": "to" },
            { "boundingBox": "179,349,20,12", "text": "the" },
            { "boundingBox": "203,350,34,15", "text": "deep," }
          ]
        },
        {
          "boundingBox": "31,366,182,16",
          "words": [
            { "boundingBox": "31,366,39,12", "text": "While" },
            { "boundingBox": "74,367,5,11", "text": "I" },
            { "boundingBox": "83,370,39,12", "text": "weep-" },
            { "boundingBox": "126,366,36,12", "text": "while" },
            { "boundingBox": "166,367,5,11", "text": "I" },
            { "boundingBox": "175,367,38,15", "text": "weep!" }
          ]
        },
        {
          "boundingBox": "31,384,147,16",
          "words": [
            { "boundingBox": "31,385,11,11", "text": "O" },
            { "boundingBox": "47,384,31,12", "text": "God!" },
            { "boundingBox": "84,388,21,8", "text": "can" },
            { "boundingBox": "110,385,4,11", "text": "I" },
            { "boundingBox": "119,386,20,10", "text": "not" },
            { "boundingBox": "144,388,34,12", "text": "grasp" }
          ]
        },
        {
          "boundingBox": "31,402,170,16",
          "words": [
            { "boundingBox": "31,402,37,12", "text": "Them" },
            { "boundingBox": "72,402,29,12", "text": "with" },
            { "boundingBox": "105,406,7,8", "text": "a" },
            { "boundingBox": "116,402,42,16", "text": "tighter" },
            { "boundingBox": "162,403,39,15", "text": "clasp?" }
          ]
        },
        {
          "boundingBox": "31,419,141,12",
          "words": [
            { "boundingBox": "31,420,11,11", "text": "O" },
            { "boundingBox": "47,419,31,12", "text": "God!" },
            { "boundingBox": "84,423,21,8", "text": "can" },
            { "boundingBox": "110,420,4,11", "text": "I" },
            { "boundingBox": "119,421,20,10", "text": "not" },
            { "boundingBox": "144,423,28,8", "text": "save" }
          ]
        },
        {
          "boundingBox": "31,437,179,16",
          "words": [
            { "boundingBox": "31,438,26,11", "text": "One" },
            { "boundingBox": "62,437,31,12", "text": "from" },
            { "boundingBox": "97,437,19,12", "text": "the" },
            { "boundingBox": "120,437,45,16", "text": "pitiless" },
            { "boundingBox": "169,438,41,11", "text": "wave?" }
          ]
        },
        {
          "boundingBox": "31,454,161,12",
          "words": [
            { "boundingBox": "31,455,11,11", "text": "Is" },
            { "boundingBox": "47,454,15,12", "text": "all" },
            { "boundingBox": "66,454,25,12", "text": "that" },
            { "boundingBox": "94,458,19,8", "text": "we" },
            { "boundingBox": "118,458,19,8", "text": "see" },
            { "boundingBox": "142,458,13,8", "text": "or" },
            { "boundingBox": "159,458,33,8", "text": "seem" }
          ]
        },
        {
          "boundingBox": "31,472,185,12",
          "words": [
            { "boundingBox": "31,473,23,11", "text": "But" },
            { "boundingBox": "58,476,7,8", "text": "a" },
            { "boundingBox": "69,472,40,12", "text": "dream" },
            { "boundingBox": "113,472,41,12", "text": "within" },
            { "boundingBox": "158,476,7,8", "text": "a" },
            { "boundingBox": "169,472,47,12", "text": "dream?" }
          ]
        }
      ]
    }
  ]
}

Note that the image is split into an array of regions; each region contains an array of lines; and each line contains an array of words. Each element carries its own boundingBox, so you can locate, replace, or block out one or more specific words, lines, or regions.

Below is a jQuery code snippet making a request to this service to perform OCR on images of text. You can download the full application at https://github.com/DavidGiard/CognitiveSvcsDemos.

// url and outputDiv are defined elsewhere in the full application:
// url holds the public URL of the image; outputDiv is a jQuery-wrapped element for the results.
var language = $("#LanguageDropdown").val();
var computerVisionKey = getKey() || "Copy your Subscription key here";
var webSvcUrl = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr";
webSvcUrl = webSvcUrl + "?language=" + language;
$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": computerVisionKey },
    contentType: "application/json",
    data: '{ "Url": "' + url + '" }'
}).done(function (data) {
    outputDiv.text("");

    // The response nests regions > lines > words; walk each level
    // and rebuild the recognized text one line at a time.
    var regionsOfText = data.regions;
    for (var h = 0; h < regionsOfText.length; h++) {
        var linesOfText = data.regions[h].lines;
        for (var i = 0; i < linesOfText.length; i++) {
            var output = "";
            var thisLine = linesOfText[i];
            var words = thisLine.words;
            for (var j = 0; j < words.length; j++) {
                var thisWord = words[j];
                output += thisWord.text;
                output += " ";
            }
            var newDiv = "<div>" + output + "</div>";
            outputDiv.append(newDiv);
        }
        outputDiv.append("<hr>");
    }
}).fail(function (err) {
    $("#OutputDiv").text("ERROR!" + err.responseText);
});

You can find the full documentation (including an in-browser testing tool) for this API here.

Sending requests to the Cognitive Services OCR API makes it simple to convert a picture of text into text.  

Saturday, 30 December 2017 10:31:00 (GMT Standard Time, UTC+00:00)
# Friday, 29 December 2017

It's difficult enough for humans to recognize emotions in the faces of other humans. Can a computer accomplish this task? It can if we train it to and if we give it enough examples of different faces with different emotions.

When we supply data to a computer with the objective of training that computer to recognize patterns and predict new data, we call that Machine Learning. And Microsoft has done a lot of Machine Learning with a lot of faces and a lot of data and they are exposing the results for you to use.

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

The Cognitive Services Emotion API looks at photographs of people and determines the emotion of each person in the photo. Supported emotions are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Each emotion is assigned a score between 0 and 1 - higher numbers indicate higher confidence that this is the emotion expressed in the face. If a picture contains multiple faces, the emotion of each face is returned.

To get started, you will need an Azure account and a Cognitive Services Vision API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure account, follow the instructions in this article to generate a Cognitive Services Computer Vision key.

To use this API, you simply have to make a POST request to the following URL:
https://[location].api.cognitive.microsoft.com/emotion/v1.0/recognize

where [location] is the Azure location where you created your API key (above).

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key
This is the Cognitive Services Computer Vision key you generated above.

Content-Type

This tells the service how you will send the image. The options are:

  • application/json
  • application/octet-stream

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

{"url":"imageurl"}
where imageurl is a public URL pointing to the image. For example, to analyze the emotions in this picture of a happy face and a not-happy face,

TwoEmotions

submit the following JSON:

{"url":"http://davidgiard.com/content/binary/Open-Live-Writer/Using-the-Cognitive-Services-Emotion-API_14A56/TwoEmotions_2.jpg"}

If you plan to send the image itself to the web service, set the content type to "application/octet-stream" and submit the binary image in the body of the HTTP request.

The full request looks something like this:

POST https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize HTTP/1.1
Content-Type: application/json
Host: westus.api.cognitive.microsoft.com
Content-Length: 62
Ocp-Apim-Subscription-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
{ "url": "http://xxxx.com/xxxx.jpg" }

For example, passing a URL with the picture below of 3 attractive, smiling people

BrianAnnaDavid   

(found online at https://giard.smugmug.com/Tech-Community/SpartaHack-2016/i-4FPV9bf/0/X2/SpartaHack-068-X2.jpg)

returned the following data: 

[
  {
    "faceRectangle": {
      "height": 113,
      "left": 285,
      "top": 156,
      "width": 113
    },
    "scores": {
      "anger": 1.97831262E-09,
      "contempt": 9.096525E-05,
      "disgust": 3.86221245E-07,
      "fear": 4.26409547E-10,
      "happiness": 0.998336,
      "neutral": 0.00156954059,
      "sadness": 8.370223E-09,
      "surprise": 3.06117772E-06
    }
  },
  {
    "faceRectangle": {
      "height": 108,
      "left": 831,
      "top": 169,
      "width": 108
    },
    "scores": {
      "anger": 2.63808062E-07,
      "contempt": 5.387114E-08,
      "disgust": 1.3360991E-06,
      "fear": 1.407629E-10,
      "happiness": 0.9999967,
      "neutral": 1.63170478E-06,
      "sadness": 2.52861843E-09,
      "surprise": 1.91028926E-09
    }
  },
  {
    "faceRectangle": {
      "height": 100,
      "left": 591,
      "top": 168,
      "width": 100
    },
    "scores": {
      "anger": 3.24157673E-10,
      "contempt": 4.90155344E-06,
      "disgust": 6.54665473E-06,
      "fear": 1.73284559E-06,
      "happiness": 0.9999156,
      "neutral": 6.42121E-05,
      "sadness": 7.02297257E-06,
      "surprise": 5.53670576E-09
    }
  }
]

The high values for the 3 happiness scores and the very low values for all the other scores suggest a very high degree of confidence that each person in this photo is happy.

Here is the request in the popular HTTP analysis tool Fiddler [http://www.telerik.com/fiddler]:
Request:

Em01-Fiddler-Request

Response:
Em02-Fiddler-Response 

Below is a C# code snippet making a request to this service to analyze the emotions of the people in an online photograph. You can download the full application at https://github.com/DavidGiard/CognitiveSvcsDemos.

// Requires: using System.Net.Http; using System.Net.Http.Headers; using System.Text;
// imageUrl holds the public URL of the photo to analyze; it is set elsewhere in the full application.
string emotionApiKey = "XXXXXXXXXXXXXXXXXXXXXXX";
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", emotionApiKey);
string uri = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize";
HttpResponseMessage response;
var json = "{'url': '" + imageUrl + "'}";
byte[] byteData = Encoding.UTF8.GetBytes(json);
using (var content = new ByteArrayContent(byteData))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    response = await client.PostAsync(uri, content);
}

if (response.IsSuccessStatusCode)
{
    // data contains the JSON array of faces and emotion scores shown above
    var data = await response.Content.ReadAsStringAsync();
}
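
The data string above contains the same JSON array of faces and scores shown earlier. As a rough sketch of what you might do with it (this assumes the Newtonsoft.Json NuGet package, which is not necessarily what the full application uses), you could report the strongest emotion for each face:

// Parse the JSON array of faces and print each face's highest-scoring emotion.
// Requires: using System; using System.Linq; using Newtonsoft.Json.Linq;
var faces = JArray.Parse(data);
foreach (var face in faces)
{
    var topEmotion = ((JObject)face["scores"]).Properties()
        .OrderByDescending(p => (double)p.Value)
        .First();
    Console.WriteLine("Face at left={0}: {1} (score {2})",
        face["faceRectangle"]["left"], topEmotion.Name, topEmotion.Value);
}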

You can find the full documentation (including an in-browser testing tool) for this API here.

Sending requests to the Cognitive Services Emotion API makes it simple to analyze the emotions of people in a photograph.  

Friday, 29 December 2017 10:43:00 (GMT Standard Time, UTC+00:00)
# Thursday, 28 December 2017

Generating a thumbnail image from a larger image sounds easy – just shrink the dimensions of the original, right? But it becomes more complicated if the thumbnail image is a different shape than the original. For example, the original image may be rectangular, but we need the new image to be a square. Or we may need to generate a portrait-oriented thumbnail from a landscape-oriented original image. In these cases, we will need to crop or distort the original image when we create the thumbnail. Distorting the image tends to look very bad; and when we crop an image, we want to ensure that the primary subject of the image remains in the generated thumbnail. To do this, we need to identify the primary subject of the image. That's easy enough for a human observer, but difficult for a computer. Yet if we want to automate this process, we will have to ask the computer to do exactly that.

This is where machine learning can help. By analyzing many images, Machine Learning can figure out what parts of a picture are likely to be the main subject. Once this is known, it becomes a simpler matter to crop the picture in such a way that the main subject remains in the generated thumbnail.

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

The Cognitive Services Vision API uses Machine Learning so that you don't have to. It exposes a web service to return an intelligent thumbnail image from any picture.

You can see this in action here.

Scroll down to the section titled "Generate a thumbnail" to see the Thumbnail generator as shown in Figure 1.

Th01
Figure 1

With this live, in-browser demo, you can either select an image from the gallery and view the generated thumbnails; or provide your own image - either from your local computer or from a public URL. The page uses the Thumbnail API to create thumbnails of 6 different dimensions.
 
For your own application, you can either call the REST Web Service directly or (for a .NET application) use a custom library. The library simplifies development by abstracting away HTTP calls via strongly-typed objects.

To get started, you will need an Azure account and a Cognitive Services Vision API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure account, follow the instructions in this article to generate a Cognitive Services Computer Vision key.

To use this API, you simply have to make a POST request to the following URL:
https://[location].api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?width=ww&height=hh&smartCropping=true

where [location] is the Azure location where you created your API key (above) and ww and hh are the desired width and height of the thumbnail to generate.

The “smartCropping” parameter tells the service to determine the main subject of the photo and to try to keep it in the thumbnail while cropping.
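
For example, a request for a 300x300 thumbnail with smart cropping, using a key created in the West US region, would be posted to:

https://westus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?width=300&height=300&smartCropping=true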

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key
This is the Cognitive Services Computer Vision key you generated above.

Content-Type

This tells the service how you will send the image. The options are:   

  • application/json    
  • application/octet-stream    
  • multipart/form-data

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

{"url":"imageurl"}
where imageurl is a public URL pointing to the image. For example, to generate a thumbnail of this picture of a skier, submit the following JSON:

{"url":"http://mezzotint.de/wp-content/uploads/2014/12/2013-skier-edge-01-Kopie.jpg"}

Man skiing  alps

If you plan to send the image itself to the web service, set the content type to either "application/octet-stream" or "multipart/form-data" and submit the binary image in the body of the HTTP request.

Here is a sample console application that uses the service to generate a thumbnail from a file on disc. You can download the full source code at
https://github.com/DavidGiard/CognitiveSvcsDemos

Note: You will need to create the folder "c:\test" to store the generated thumbnail.

// TODO: Replace this value with your Computer Vision API Key
string computerVisionKey = "XXXXXXXXXXXXXXXX";

var client = new HttpClient();
var queryString = HttpUtility.ParseQueryString(string.Empty);

client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);

// Build the querystring: thumbnail dimensions and smart cropping
queryString["width"] = "300";
queryString["height"] = "300";
queryString["smartCropping"] = "true";
var uri = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?" + queryString;

HttpResponseMessage response;

string originalPicture = "http://davidgiard.com/content/Giard/_DGInAppleton.png";
var jsonBody = "{'url': '" + originalPicture + "'}";
byte[] byteData = Encoding.UTF8.GetBytes(jsonBody);

using (var content = new ByteArrayContent(byteData))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    response = await client.PostAsync(uri, content);
}
if (response.StatusCode == System.Net.HttpStatusCode.OK)
{
    // Write thumbnail to file
    var responseContent = await response.Content.ReadAsByteArrayAsync();
    string folder = @"c:\test";
    string thumbnailFullPath = string.Format("{0}\\thumbnailResult_{1:yyyyMMddhhmmss}.jpg", folder, DateTime.Now);
    using (BinaryWriter binaryWrite = new BinaryWriter(new FileStream(thumbnailFullPath, FileMode.Create, FileAccess.Write)))
    {
        binaryWrite.Write(responseContent);
    }
    // Show BEFORE and AFTER to user
    Process.Start(thumbnailFullPath);
    Process.Start(originalPicture);
    Console.WriteLine("Done! Thumbnail is at {0}!", thumbnailFullPath);
}
else
{
    Console.WriteLine("Error occurred. Thumbnail not created");
}

The result is shown in Figure 2 below.
Th02Results
Figure 2

One thing to note: the Thumbnail API is part of the Computer Vision API. As of this writing, the free version of the Computer Vision API is limited to 5,000 transactions per month. If you want more than that, you will need to upgrade to the Standard version, which charges $1.50 per 1,000 transactions.

But this should be plenty for you to learn this API for free and build and test your applications until you need to put them into production.
The code above can be found on GitHub.

You can find the full documentation (including an in-browser testing tool) for this API here.

The Cognitive Services Computer Vision API provides a simple way to generate thumbnail images from pictures.

Thursday, 28 December 2017 10:31:00 (GMT Standard Time, UTC+00:00)
# Wednesday, 27 December 2017

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

Your application uses Cognitive Services by calling one or more RESTful web services. These services require you to pass a key in the header of each HTTP call. You can generate this key from the Azure portal.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account, navigate to the Azure Portal.

CsKey01-Portal
Figure 1

Here you can create a Cognitive Services API key. Click the [New] button in the top left of the portal (Figure 2).

CsKey02-New
Figure 2

It’s worth noting that the “New” button caption sometimes changes to “Create a Resource” (Figure 2a)

CsKey02-CreateResourceButton
Figure 2a

From the flyout menu, select AI+Cognitive Services. A list of Cognitive Services displays. Select the service you want to call. For this demo, I will select Computer Vision API, as shown in Figure 3.

CsKey03-AICogServices
Figure 3

The Computer Vision API blade displays as shown in Figure 4.

CsKey04-ComputerVisionBlade
Figure 4

At the Name textbox, enter a name for this service account.

At the Subscription dropdown, select the Azure subscription to associate with this service.

At the Location dropdown, select the region in which you want to host this service. You should select a region close to those who will be consuming the service. Make note of the region you selected.

At the Pricing Tier dropdown, select the pricing tier you want to use. Currently, the choices are F0 (which is free, but limited to 20 calls per minute) and S1 (which is not free, but allows more calls). Click the View full pricing details link to see how much S1 will cost.

At the Resource Group field, select or create an Azure Resource Group. Resource Groups allow you to logically group different Azure resources, so you can manage them together.

Click the [Create] button to create the account. The creation typically takes less than a minute and a message displays when the service is created, as shown in Figure 5.

CsKey05-GoToResourceButton
Figure 5

Click the [Go to resource] button to open a blade to configure the newly-created service. Alternatively, you can select "All Resources" on the left menu and search for your service by name. Either way, the service blade displays, as shown in Figure 6.

CsKey06-ComputerVisionBlade
Figure 6

The important pieces of information in this blade are the Endpoint (on the Overview tab, Figure 7) and the Access Keys (on the Keys tab, as shown in Figure 8). Within this blade, you also have the opportunity to view log files and other tools to help troubleshoot your service. And you can set authorization and other restrictions on your service.

CsKey07-ComputerVisionOverview
Figure 7

CsKey08-ComputerVisionKeys
Figure 8

The process is almost identical when you create a key for any other Cognitive Service. The only difference is that you will select a different service set in the AI+Cognitive Services flyout.

Wednesday, 27 December 2017 10:35:00 (GMT Standard Time, UTC+00:00)
# Tuesday, 26 December 2017

Microsoft Cognitive Services is a set of APIs that take advantage of Machine Learning to provide developers with an easy way to analyze images, speech, language, and more.

If you have worked with or studied Machine Learning, you know that you can accomplish a lot, but that it requires a lot of computing power, a lot of time, and a lot of data. Since most of us have a limited amount of each of these, we can take advantage of the fact that Microsoft has data, time, and the computing power of Azure. They have used this power to analyze large data sets and expose the results via a set of web services, collectively known as Cognitive Services.

The APIs of Cognitive Services are divided into 5 broad categories: Vision, Speech, Language, Knowledge, and Search.

Vision APIs

The Vision APIs provide information about a given photograph or video. For example, several Vision APIs are capable of recognizing faces in an image. One analyzes each face and deduces that person's emotion; another can compare 2 photographs and decide whether or not they show the same person; a third guesses the age of each person in a photo.

Speech APIs

The Speech APIs can convert speech to text or text to speech. They can also recognize the voice of a given speaker (you might use this to authenticate users, for example) and infer the intent of the speaker from his words and tone. The Translator Speech API supports translations between 10 different spoken languages.

Language

The Language APIs include a variety of services. A spell checker is smart enough to recognize common proper names and homonyms. And the Translator Text API can detect the language in which a text is written and translate that text into another language. The Text Analytics API analyzes a document for the sentiment expressed, returning a score based on how positive or negative the wording and tone of the document are. The most interesting API in this group is the Language Understanding Intelligent Service (LUIS), which allows you to build custom language models so that your application can understand questions and statements from your users in a variety of formats.

Knowledge

Knowledge includes a variety of APIs - from customer recommendations to smart querying and information about the context of text. Many of these services take advantage of natural language processing. As of this writing, all of these services are in preview.

Search

The Search APIs allow you to retrieve Bing search results with a single web service call.

To get started using these APIs, you need an Azure account. You can get a free Azure trial at https://azure.microsoft.com/.

Each API offers a free option that restricts the number and/or frequency of calls, but you can break through that boundary for a charge.  Because they are hosted in Azure, the paid services can scale out to meet increased demand.

You call most of these APIs by passing and receiving JSON to a RESTful web service. Some of the more complex services offer configuration and setup beforehand.

These APIs are capable of analyzing pictures, text, and speech because each service draws on the knowledge learned from parsing countless photos, documents, etc. beforehand.
 
You can find documentation, sample code, and even a place to try out each API live in your browser at https://azure.microsoft.com/en-us/services/cognitive-services/.

A couple of fun applications of Cognitive Services are how-old.net (which guesses the ages of people in photographs) and what-dog.net (which identifies the breed of dog in a photo).

Below is a screenshot from the Azure documentation page, listing the sets of services. But keep checking back, because this list grows and each set contains one or more services.

List of Cognitive Services
 
Sign up today and start building apps. It’s fun, it's useful, and it’s free!

Tuesday, 26 December 2017 10:25:00 (GMT Standard Time, UTC+00:00)
# Monday, 25 December 2017
Monday, 25 December 2017 09:48:00 (GMT Standard Time, UTC+00:00)
# Sunday, 24 December 2017

I have been recording my online TV show - Technology and Friends - for 9 years. I recently passed episode #500.

The show has evolved over the years and so has the recording equipment I use.

Below is a description of the hardware I use to record Technology and Friends.

Camera: Canon EOS 6D

This is the second Canon SLR I’ve purchased. My EOS 30D lasted over 10 years, so I returned to a similar, but updated, model when it finally began to fail. The EOS 6D is primarily a still camera, but it can record up to 30 minutes of high-resolution video. The image quality is outstanding, particularly with the 24-105mm Canon lens I bought with it. This setup is overkill (read: "expensive") for a show that most people view in a browser, but I also use this camera for still photography and I have been happy with the results. The main downside for video is the 30-minute limit. After this time, someone needs to re-start the recording.

Audio Recorder: Zoom H6 Handy Recorder

I bought a Zoom recorder a few years ago on the recommendation of Carl Franklin, who is the co-host and the audio expert of the excellent .NET Rocks podcast. It served me well for years, so I bought the H6 when it was time to replace it. This device contains 2 built-in microphones, but I almost always plug in 2 external microphones, so I can get closer to a speaker's mouth. I can plug in up to 4 external microphones. Using these microphones eliminates most of the background noise, allowing me to record in crowded areas. Each microphone can record to a separate audio file, which is convenient if one speaker is much louder than another.

Microphones: Shure SM58

I went with Shure based on popularity and Amazon reviews. I bought these mid-level models and I have been happy with the results. I strongly recommend external microphones (either lapel or handheld) when recording audio. My show is much better since I began using them. Switching to separate microphones is probably the single technical change that produced the biggest jump in quality for my show.

Tripod: Vanguard Lite1

This is a cheap tripod, but it has lasted me for years. I have a larger tripod, but I seldom use it because the Vanguard is small enough to throw in a backpack, carry on a plane, and carry around a conference. I also like the fact that I can set it on a tabletop, which is what I usually do. It is not quite tall enough to stand on the ground and hold the camera as high as the face of a standing adult.

Sunday, 24 December 2017 17:56:17 (GMT Standard Time, UTC+00:00)
# Friday, 22 December 2017

Me and Roy
Roy Ayers is 77 years old and stutters when he talks. But not when he sings. And definitely not when he plays the vibraphone. And play he did last night in front of a packed house at The Promontory in Hyde Park.

Ayers mixed a few ballads with the jazz-funk that he helped define. Backed by a band consisting of bass, drums, keyboard, and another vocalist, Ayers played for about 90 minutes, drawing on his 99 albums with such songs as "Red, Black & Green", "Don't Stop the Feeling", and his interpretation of Sam Cooke's "You Send Me".

The keyboardist was the best of the bunch, coaxing a variety of sounds from his instrument during his many solos. I wondered why the stage setup hid so much of him from the audience's view.

And then there was Roy and his vibraphone. Ayers still sounds great when he does his thing with his vibes.

Also Me and Roy
I bought a ticket at the door and had to stand in the back with some folks who decided it was ok to engage in loud conversation at the concert. But I had a chance to shake the hand of Mr. Ayers after the show and tell him how much I enjoyed his music.

And to wish him luck on his next 99 albums.

Friday, 22 December 2017 10:33:00 (GMT Standard Time, UTC+00:00)
# Thursday, 21 December 2017

"Mirror Dance" by Lois McMaster Bujold is the sequel to "Brothers In Arms", the novel that introduced Miles Vorkosigan's clone / brother Mark.

mirror_dance
Following Miles's rescue of Mark in the previous novel, the brothers return to Miles's home planet of Barrayar, where Mark decides to launch a rescue mission to liberate clones who are intended to be used as replacement parts for their genetic donors. Miles follows and is gravely wounded in the ensuing battle. His body is cryogenically frozen and then disappears. Mark returns to Barrayar to deliver the news to their parents - Lord Aral and Lady Cordelia. Aral and Cordelia accept Mark as their son and a potential heir to the Vorkosigan line. Ultimately, Mark launches another rescue mission, this one to find and save Miles, who has been revived by scientists on an enemy planet.

This is one of Bujold's strongest novels. She not only tells a complex story, but she dives further into the emotions of her characters - particularly the clone Mark.

The definition of humanity and the rights that go with it are common themes of Bujold's books, and this one delves into them very well, if a little heavy-handedly. Interwoven with this general question is Mark’s personal struggle to define his own identity. He desperately wants to define himself as something other than just the clone of a heroic Lord. But his struggle to do so often leads to failure. Others help him with the struggle. He was raised to assassinate Miles's father, but ends up being accepted by his potential victim and his new family.

Thursday, 21 December 2017 11:14:00 (GMT Standard Time, UTC+00:00)
# Monday, 18 December 2017
Monday, 18 December 2017 17:36:00 (GMT Standard Time, UTC+00:00)
# Monday, 11 December 2017
Monday, 11 December 2017 11:48:00 (GMT Standard Time, UTC+00:00)
# Friday, 08 December 2017

BrothersInArms
For years, Miles has been leading a double life - he was born Lord Miles Vorkosigan, who became a lieutenant in the army of the Barrayaran empire; but he sometimes assumes the role of Admiral Naismith, leader of the Dendarii Free Mercenary Fleet.

One day, Miles is forced to appear on the same planet as both of his personas on the same day. Fearing his cover will be blown, he invents a story that Naismith is actually Miles's clone.

Shortly afterward, Miles discovers that he actually does have a clone and that this clone is being used by his enemies in a plot to assassinate Miles.

The book is a good adventure story. It advances the relationship between Miles and Elli (his bodyguard / lover); and it addresses a glaring plot problem - Miles disguises himself as a mercenary Admiral despite his unique physique. It also takes place on future Earth, which is a bonus for those of us who call Earth home today.

Friday, 08 December 2017 07:18:10 (GMT Standard Time, UTC+00:00)
# Monday, 04 December 2017
Monday, 04 December 2017 09:59:00 (GMT Standard Time, UTC+00:00)
# Sunday, 03 December 2017

12/3
Today I am grateful to attend DataSciConf in Atlanta for the first time.

12/2
Today I am grateful for dinner last night with Shawn and Resa.

12/1
Today I am grateful to see an exciting Hawks-Cavs game with Dave at Philips Arena last night.

11/30
Today I am grateful for:
-Arriving safely in Atlanta after some delays.
-The Data Sci conference speaker dinner last night

11/29
Today I am grateful to the Greater Chicago Food Depository for letting me help yesterday.

11/28
Today I am grateful to Lisa and Betsy for taking over the Midwest Geeks call, after my 4 years facilitating it.

11/27
Today I am grateful for the public library.

11/26
Today I am grateful for a weekend in Michigan visiting friends and family.

11/25
Today I am grateful to Desi and Ondrej for a home-cooked meal and a place to stay last night.

11/24
Today I am grateful to celebrate Thanksgiving dinner with family in Michigan.

11/23
Today I am grateful that most of my family lives within driving distance.

11/22
Today I am grateful to see Mark Colby in concert at the Jazz Showcase last night.

11/21
Today I am grateful for the cabinets I cleaned out last night and all the crap I threw away.

11/20
Today I am grateful for the man who watched my car at the airport terminal yesterday while I ran in to check a bag.

11/19
Today I am grateful to deliver the keynote at GangConf yesterday and to remain a part of #MIGANG after all these years.

11/18
Today I am grateful for:
-Attending a home Raptors game for the first time;
-My first visit to Canada in 10 years

11/17
Today I am grateful for:
-My first visit to the University of Toronto
-An exciting overtime NHL game at the Air Canada Centre
-My first time attending a Maple Leafs home game.

11/16
Today I am grateful for my first visit to the University of Waterloo.

11/15
Today I am grateful for my first visit to the Massachusetts Institute of Technology.

11/14
Today I am grateful to arrive safely in Boston on a difficult travel day.

11/13
Today I am grateful for:
-a walk around downtown Princeton, NJ yesterday afternoon
-a tour of Princeton University by Mihaela Friday afternoon

11/12
Today I am grateful to get to bed early last night.

11/11
Today I am grateful for my first visit to Princeton, NJ and Princeton University.

11/10
Today I am grateful to attend an exciting Philadelphia Flyers game last night at the Wells Fargo Center with Jeffrey.

11/9
Today I am grateful to Sarah for making me look good at UIUC yesterday.

11/8
Today I am grateful to the Uber driver who stopped by my home to drop off the lens cap I left in his car last week.

11/7
Today I am grateful to speak at PyData Chicago for the first time.

11/6
Today I am grateful that the weather here is still nice enough that I can walk to church.

Sunday, 03 December 2017 15:51:27 (GMT Standard Time, UTC+00:00)