# Saturday, 06 January 2018

2017 was a year of change. I started a new job; I traveled for the first time to South America, the Czech Republic, Iowa, Oregon, and Los Angeles; I returned to Canada for the first time in 10+ years; my son Tim graduated and began his professional career; and my son Nick accepted a new job in Massachusetts.

I'll start with me.

New Job

Microsoft went through a major reorganization last year and it greatly affected my department and my job. I moved to a new team that is focused on helping professors at top Computer Science universities teach their students about cloud computing. This role involves even more travel than my last one. For the past few months, I've been visiting schools around North America, and my calendar for the next 2 months is filled with campus visits for hackathons, guest lectures, workshops, and meetings with students and professors. I won't be home much in January and February.

Travel

I traveled more and farther in 2017 than I have in a long time (if ever). I visited 7 different countries (Romania, Sweden, the Czech Republic, Uruguay, Argentina, Canada, and the US) and 17 states. It was my first visit to the Czech Republic, Uruguay, and Argentina.

Prague has long been on my list of places to visit so I was thrilled to finally get there and I enjoyed the hospitality of Gael Fraiteur and Brit King. Gael and I drove down to Český Krumlov - a small village in southern Bohemia preserved as it was in the 18th century - where we spent a night and explored castles, museums, and restaurants.

I was happy to accept an invitation to speak at .NET Conf UY, primarily because it was my first trip to South America. After a few days in Montevideo, Uruguay, I took a ferry to Buenos Aires, where I spent an afternoon exploring the city on foot. In May, I scheduled a tour to speak at 4 different user groups in 3 days in Iowa. My friend Javier helped me plan the trip and I was excited for my first visit to the Hawkeye State.

In 2017, I got serious about my goal of seeing every home stadium and arena of the 4 major professional sports leagues. I visited 3 NFL stadiums, 4 NBA arenas, and 2 NHL arenas last year. With 55 places remaining, I will need to accelerate this process.

In January, I flew to San Francisco, where my friend Sara picked me up and together we drove 7 hours to a small town in southwest Oregon to attend the funeral of the wife of an old friend. The next day, we attended the funeral and a dinner and repeated the trip in reverse. We were fortunate to have flexible enough schedules to make this trip and I'm really glad we did. And I got to know Sara a lot better on the trip.

Live Music

I made it a point to see a lot of live music in 2017. Most of the shows (Stanley Clarke, Buddy Guy, Guy King, Ladysmith Black Mambazo, Booker T. Jones, Marcia Ball, Delbert McClinton, Kris Kristofferson, Al Stewart, Jean-Luc Ponty, Benny Golson, Paul Weller, and Roy Ayers) were at small clubs in Chicago (SPACE, Buddy Guy's Legends, Old Town School of Folk Music, City Winery, Jazz Showcase, House of Blues, and The Promontory), but I also saw Eric Church and Tim McGraw / Faith Hill at the cavernous Allstate Arena.

My Two Sons

My two sons also had some major changes in their lives.

Shortly after graduating from Indiana University with a degree in Informatics, Tim accepted a job with Enkay Tech - an IT consulting company outside of Chicago. He lived with me for a few months before renting a house in Wrigleyville. Spending time with him was one of the highlights of my summer.

After 2 years serving as Director of Basketball Operations at Southern Illinois University - Edwardsville, Nick accepted a position as an assistant coach at Williams College in Williamstown, MA. He moved in the fall and his team has been ranked as high as #5 in Division III. In December, I was able to see Williams play 2 games at a tournament in Thousand Oaks, CA.

Mostly Good, Some Bad

Although most of 2017 was good to me, not everything was awesome. My mother passed away in June. A few weeks later, I was diagnosed with skin cancer, which was successfully removed. Not long after, a relationship ended after over a year of dating. Each loss was magnified because they came in quick succession, but I've recovered from them. My family and I were somewhat prepared for our mother's passing. She was 85, and the deaths of my father and sister in the past few years forced us to consider the inevitable loss of other loved ones. I am left with fond memories of her and of the girl I lost, and this helps. And my follow-up appointment showed no sign of skin cancer.

Looking Ahead

2017 was an amazing year of growth for me personally. The changes are accelerating into 2018. My calendar is already full for the first 2 months and I am looking forward to the future with optimism.

Saturday, 06 January 2018 06:22:15 (GMT Standard Time, UTC+00:00)
# Monday, 01 January 2018
Monday, 01 January 2018 12:48:00 (GMT Standard Time, UTC+00:00)
# Sunday, 31 December 2017

12/31
Today I am grateful for lunch yesterday with my cousin Bob.

12/30
Today I am grateful to see Nick's Williams College basketball team play for the first time last night in California.

12/29
Today I am grateful for:
-Lunch yesterday with my cousin Barbara in San Juan Capistrano
-Watching a Spartan victory in the Holiday Bowl from the 50-yard line with my son Tim

12/28
Today I am grateful to see a Lakers-Grizzlies game last night on my first visit to the Staples Center.

12/27
Today I am grateful to see an excellent Roy Hargrove concert last night at the Jazz Showcase in the South Loop.

12/26
Today I am grateful to spend Christmas with my family.

12/25
Today I am grateful that we still celebrate the birth of Jesus Christ after all these years.

12/24
Today I am grateful for a Christmas Eve snowfall; and the fact that I am not driving in it.

12/23
Today I am grateful for 3 Personal Training sessions this week - the last 3 of 2017!

12/22
Today I am grateful to see Roy Ayers in concert last night on my first visit to The Promontory in Hyde Park.

12/21
Today I am grateful to everyone who helped me get to 500 episodes on #TechnologyAndFriends

12/20
Today I am grateful for my first visit to the Argonne National Laboratory to attend a reception for David Danielson - clean energy entrepreneur and former Assistant Secretary of Energy.

12/19
Today I am grateful for an unseasonably warm Chicago December.

12/18
Today I am grateful to take Nick and Tim to a Blackhawks game last night - their first visit to the United Center.

12/17
Today I am grateful to spend yesterday with my sons.

12/16
Today I am grateful to spend some time at home.

12/15
Today I am grateful for the holiday party hosted by my apartment building last night.

12/14
Today I am grateful to spend a few days in Texas and meet with folks at the University of Texas at Austin.

12/13
Today I am grateful to attend a home University of Texas basketball game for the first time.

12/12
Today I am grateful to see an exciting Pelicans-Rockets game last night - my first time at the Toyota Center!

12/11
Today I am grateful for:
-The hospitality and generosity of Paul
-Attending a home Texans game for the first time.

12/10
Today I am grateful for:
-The Uber driver who picked me up yesterday and took me to the airport after my first Uber driver ran out of gas on the way.
-The "Lights in the Heights" festival last night in Houston.

12/09
Today I am grateful for a kind and completely unexpected email last night.

12/08
Today I am grateful to attend the Chicago User Group Holiday Party last night.

12/07
Today I am grateful for a meaningful and enjoyable offsite with my team in Atlanta this week.

12/06
Today I am grateful for an excellent dinner last night in midtown Atlanta with my team.

12/05
Today I am grateful for great seats at my second Atlanta Hawks home game in the past week.

12/4
Today I am grateful for temperatures in the 50s in Chicago in December.

Sunday, 31 December 2017 13:03:27 (GMT Standard Time, UTC+00:00)
# Saturday, 30 December 2017

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language. One of these APIs can determine the words and punctuation contained in a picture. This is accomplished by a simple REST web service call.

The Cognitive Services Optical Character Recognition (OCR) service is part of the Computer Vision API. It takes as input a picture of text and returns the words found in the image.

To get started, you will need an Azure account and a Cognitive Services Vision API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account, follow the instructions in this article to generate a Cognitive Services Computer Vision key.

To use this API, you simply have to make a POST request to the following URL:
https://[location].api.cognitive.microsoft.com/vision/v1.0/ocr

where [location] is the Azure location where you created your API key (above).

Optionally, you can add the following 2 querystring parameters to the URL:

  • language: the 2-letter language code. Use “en” for English. Currently, 25 languages are supported. If omitted, the service will attempt to auto-detect the language.
  • detectOrientation: Set this to “true” if you want to support upside-down or rotated images.
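
For example, a full request URL for English text in an image that might be rotated would look like this (the westus region here is just an example; use the region where you created your key):
https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=en&detectOrientation=true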

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key
This is the Cognitive Services Computer Vision key you generated above.

Content-Type

This tells the service how you will send the image. The options are:

  • application/json
  • application/octet-stream
  • multipart/form-data

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

{"url":"imageurl"}
where imageurl is a public URL pointing to the image. For example, to perform OCR on an image of an Edgar Allan Poe poem, submit the following JSON:

{"url": "http://media.tumblr.com/tumblr_lrbhs0RY2o1qaaiuh.png"}


If you plan to send the image itself to the web service, set the content type to either "application/octet-stream" or “multipart/form-data” and submit the binary image in the body of the HTTP request.
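
Below is a minimal C# sketch of that binary upload. The file path, the westus region, and the key value here are placeholders, not values from this article - substitute your own:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class OcrFromFile
{
    static async Task Main()
    {
        // Read the image to OCR from disk (hypothetical path - use your own image)
        byte[] imageBytes = File.ReadAllBytes(@"c:\test\poem.png");

        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_COMPUTER_VISION_KEY");

        using (var content = new ByteArrayContent(imageBytes))
        {
            // application/octet-stream tells the service the body is the raw image
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            HttpResponseMessage response = await client.PostAsync(
                "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=en", content);

            // The response body is the same JSON structure shown below
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}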

A full request using the application/json option looks something like this:

POST https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr HTTP/1.1
Content-Type: application/json
Host: westus.api.cognitive.microsoft.com
Content-Length: 62
Ocp-Apim-Subscription-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
{ "url": "http://media.tumblr.com/tumblr_lrbhs0RY2o1qaaiuh.png" }

For example, passing a URL with the following picture:

[Image: DreamWithinADream]
(found online at http://media.tumblr.com/tumblr_lrbhs0RY2o1qaaiuh.png)

returned the following data: 

{
  "textAngle": 0.0,
  "orientation": "NotDetected",
  "language": "en",
  "regions": [
    {
      "boundingBox": "31,6,435,478",
      "lines": [
        {
          "boundingBox": "114,6,352,23",
          "words": [
            {
              "boundingBox": "114,6,24,22",
              "text": "A"
            },
            {
              "boundingBox": "144,6,93,23",
               "text": "Dream"
            },
            {
               "boundingBox": "245,6,95,23",
              "text": "Within"
            },
            {
              "boundingBox": "350,12,14,16",
              "text": "a"
            },
            {
              "boundingBox": "373,6,93,23",
              "text": "Dream"
            }
          ]
        },
        {
           "boundingBox": "31,50,187,16",
          "words": [
             {
              "boundingBox": "31,50,31,12",
               "text": "Take"
            },
            {
              "boundingBox": "66,50,23,12",
              "text": "this"
             },
            {
              "boundingBox": "93,50,24,12",
              "text": "kiss"
            },
            {
               "boundingBox": "121,54,33,12",
              "text": "upon"
            },
            {
              "boundingBox": "158,50,19,12",
              "text": "the"
            },
             {
              "boundingBox": "181,50,37,12",
               "text": "brow!"
            }
          ]
        },
        {
          "boundingBox": "31,67,194,16",
          "words": [
             {
              "boundingBox": "31,67,31,15",
               "text": "And,"
            },
            {
              "boundingBox": "67,67,12,12",
              "text": "in"
             },
            {
              "boundingBox": "82,67,46,16",
              "text": "parting"
            },
            {
              "boundingBox": "132,67,31,12",
              "text": "from"
            },
            {
              "boundingBox": "167,71,25,12",
              "text": "you"
            },
             {
              "boundingBox": "195,71,30,11",
               "text": "now,"
            }
          ]
        },
         {
          "boundingBox": "31,85,159,12",
          "words": [
            {
              "boundingBox": "31,85,32,12",
               "text": "Thus"
            },
            {
               "boundingBox": "67,85,35,12",
              "text": "much"
            },
            {
              "boundingBox": "107,86,16,11",
              "text": "let"
            },
             {
              "boundingBox": "126,89,20,8",
              "text": "me"
            },
            {
              "boundingBox": "150,89,40,8",
              "text": "avow-"
            }
          ]
        },
        {
          "boundingBox": "31,102,193,16",
          "words": [
            {
              "boundingBox": "31,103,26,11",
              "text": "You"
             },
            {
              "boundingBox": "61,106,19,8",
              "text": "are"
            },
            {
               "boundingBox": "84,104,21,10",
              "text": "not"
            },
            {
              "boundingBox": "109,106,44,12",
              "text": "wrong,"
            },
             {
              "boundingBox": "158,102,27,12",
               "text": "who"
            },
            {
              "boundingBox": "189,102,35,12",
              "text": "deem"
             }
          ]
        },
        {
          "boundingBox": "31,120,214,16",
          "words": [
            {
               "boundingBox": "31,120,29,12",
              "text": "That"
            },
            {
              "boundingBox": "64,124,21,12",
              "text": "my"
            },
            {
              "boundingBox": "89,121,29,15",
              "text": "days"
            },
            {
              "boundingBox": "122,120,30,12",
              "text": "have"
            },
            {
              "boundingBox": "156,121,30,11",
              "text": "been"
            },
            {
               "boundingBox": "191,124,7,8",
              "text": "a"
            },
            {
              "boundingBox": "202,121,43,14",
              "text": "dream;"
            }
           ]
        },
        {
          "boundingBox": "31,138,175,16",
          "words": [
            {
              "boundingBox": "31,139,22,11",
              "text": "Yet"
            },
             {
              "boundingBox": "57,138,11,12",
               "text": "if"
            },
            {
              "boundingBox": "70,138,31,16",
              "text": "hope"
             },
            {
              "boundingBox": "105,138,21,12",
              "text": "has"
            },
            {
               "boundingBox": "131,138,37,12",
              "text": "flown"
            },
            {
              "boundingBox": "172,142,34,12",
              "text": "away"
            }
          ]
        },
        {
          "boundingBox": "31,155,140,16",
          "words": [
            {
              "boundingBox": "31,156,13,11",
              "text": "In"
             },
            {
              "boundingBox": "48,159,8,8",
               "text": "a"
            },
            {
               "boundingBox": "59,155,37,16",
              "text": "night,"
            },
            {
              "boundingBox": "100,159,14,8",
              "text": "or"
            },
             {
              "boundingBox": "118,155,12,12",
              "text": "in"
            },
            {
              "boundingBox": "134,159,7,8",
              "text": "a"
            },
             {
              "boundingBox": "145,155,26,16",
               "text": "day,"
            }
          ]
        },
         {
          "boundingBox": "31,173,144,15",
          "words": [
            {
              "boundingBox": "31,174,13,11",
              "text": "In"
            },
            {
               "boundingBox": "48,177,8,8",
              "text": "a"
             },
            {
              "boundingBox": "59,173,43,15",
              "text": "vision,"
            },
             {
              "boundingBox": "107,177,13,8",
              "text": "or"
            },
            {
              "boundingBox": "124,173,12,12",
              "text": "in"
            },
            {
              "boundingBox": "140,177,35,11",
               "text": "none,"
            }
          ]
        },
        {
          "boundingBox": "31,190,180,16",
          "words": [
            {
              "boundingBox": "31,191,11,11",
              "text": "Is"
            },
            {
               "boundingBox": "47,190,8,12",
              "text": "it"
            },
            {
              "boundingBox": "59,190,58,12",
              "text": "therefore"
            },
             {
              "boundingBox": "121,190,19,12",
               "text": "the"
            },
            {
               "boundingBox": "145,191,23,11",
              "text": "less"
             },
            {
              "boundingBox": "173,191,38,15",
              "text": "gone?"
            }
          ]
        },
        {
          "boundingBox": "31,208,150,12",
          "words": [
            {
              "boundingBox": "31,208,20,12",
              "text": "All"
            },
             {
              "boundingBox": "55,208,24,12",
               "text": "that"
            },
            {
              "boundingBox": "83,212,19,8",
              "text": "we"
             },
            {
              "boundingBox": "107,212,19,8",
              "text": "see"
            },
            {
               "boundingBox": "131,212,13,8",
              "text": "or"
            },
            {
              "boundingBox": "148,212,33,8",
              "text": "seem"
            }
           ]
        },
        {
          "boundingBox": "31,226,194,12",
          "words": [
            {
              "boundingBox": "31,227,11,11",
              "text": "Is"
            },
             {
              "boundingBox": "46,226,21,12",
               "text": "but"
            },
            {
              "boundingBox": "71,230,7,8",
              "text": "a"
             },
            {
              "boundingBox": "82,226,40,12",
              "text": "dream"
            },
            {
               "boundingBox": "126,226,41,12",
              "text": "within"
            },
            {
              "boundingBox": "171,230,7,8",
              "text": "a"
            },
             {
              "boundingBox": "182,226,43,12",
               "text": "dream."
            }
          ]
        },
         {
          "boundingBox": "31,261,133,12",
          "words": [
            {
              "boundingBox": "31,262,5,11",
               "text": "I"
            },
            {
               "boundingBox": "41,261,33,12",
              "text": "stand"
             },
            {
              "boundingBox": "78,261,32,12",
              "text": "amid"
            },
            {
              "boundingBox": "114,261,19,12",
              "text": "the"
            },
            {
              "boundingBox": "137,265,27,8",
              "text": "roar"
            }
          ]
        },
        {
          "boundingBox": "31,278,169,15",
          "words": [
            {
              "boundingBox": "31,278,18,12",
              "text": "Of"
             },
            {
              "boundingBox": "52,282,7,8",
              "text": "a"
            },
            {
               "boundingBox": "63,278,95,12",
              "text": "surf-tormented"
            },
            {
              "boundingBox": "162,278,38,15",
              "text": "shore,"
            }
          ]
        },
        {
          "boundingBox": "31,296,174,15",
          "words": [
            {
              "boundingBox": "31,296,28,12",
              "text": "And"
             },
            {
              "boundingBox": "63,297,4,11",
              "text": "I"
            },
            {
               "boundingBox": "72,296,28,12",
              "text": "hold"
            },
            {
              "boundingBox": "104,296,41,12",
              "text": "within"
            },
             {
              "boundingBox": "149,300,20,11",
               "text": "my"
            },
            {
              "boundingBox": "173,296,32,12",
              "text": "hand"
             }
          ]
        },
        {
          "boundingBox": "31,314,169,16",
          "words": [
            {
               "boundingBox": "31,314,42,12",
              "text": "Grains"
            },
            {
              "boundingBox": "78,314,15,12",
              "text": "of"
            },
             {
              "boundingBox": "95,314,19,12",
              "text": "the"
            },
            {
              "boundingBox": "119,315,43,15",
              "text": "golden"
             },
            {
              "boundingBox": "167,314,33,12",
              "text": "sand-"
            }
          ]
         },
        {
          "boundingBox": "31,331,189,16",
           "words": [
            {
              "boundingBox": "31,332,31,11",
              "text": "How"
            },
             {
              "boundingBox": "66,331,28,12",
              "text": "few!"
            },
            {
              "boundingBox": "99,333,20,14",
              "text": "yet"
            },
            {
              "boundingBox": "123,331,27,12",
               "text": "how"
            },
            {
               "boundingBox": "154,331,28,16",
              "text": "they"
            },
            {
              "boundingBox": "186,335,34,12",
              "text": "creep"
            }
           ]
        },
        {
          "boundingBox": "31,349,206,16",
          "words": [
            {
              "boundingBox": "31,349,55,16",
              "text": "Through"
            },
            {
              "boundingBox": "90,353,20,11",
               "text": "my"
            },
            {
               "boundingBox": "115,349,44,16",
              "text": "fingers"
            },
            {
              "boundingBox": "163,351,12,10",
              "text": "to"
            },
             {
              "boundingBox": "179,349,20,12",
               "text": "the"
            },
            {
              "boundingBox": "203,350,34,15",
              "text": "deep,"
             }
          ]
        },
        {
          "boundingBox": "31,366,182,16",
          "words": [
            {
               "boundingBox": "31,366,39,12",
              "text": "While"
            },
            {
              "boundingBox": "74,367,5,11",
              "text": "I"
            },
            {
              "boundingBox": "83,370,39,12",
              "text": "weep-"
            },
            {
              "boundingBox": "126,366,36,12",
              "text": "while"
             },
            {
              "boundingBox": "166,367,5,11",
              "text": "I"
            },
            {
               "boundingBox": "175,367,38,15",
              "text": "weep!"
            }
          ]
        },
        {
          "boundingBox": "31,384,147,16",
          "words": [
            {
               "boundingBox": "31,385,11,11",
              "text": "O"
            },
            {
              "boundingBox": "47,384,31,12",
              "text": "God!"
            },
             {
              "boundingBox": "84,388,21,8",
               "text": "can"
            },
            {
              "boundingBox": "110,385,4,11",
              "text": "I"
             },
            {
              "boundingBox": "119,386,20,10",
              "text": "not"
            },
            {
               "boundingBox": "144,388,34,12",
              "text": "grasp"
            }
          ]
        },
        {
          "boundingBox": "31,402,170,16",
          "words": [
            {
              "boundingBox": "31,402,37,12",
              "text": "Them"
            },
            {
              "boundingBox": "72,402,29,12",
              "text": "with"
            },
            {
              "boundingBox": "105,406,7,8",
               "text": "a"
            },
            {
              "boundingBox": "116,402,42,16",
              "text": "tighter"
            },
            {
              "boundingBox": "162,403,39,15",
              "text": "clasp?"
            }
           ]
        },
        {
          "boundingBox": "31,419,141,12",
          "words": [
            {
              "boundingBox": "31,420,11,11",
              "text": "O"
            },
             {
              "boundingBox": "47,419,31,12",
               "text": "God!"
            },
            {
              "boundingBox": "84,423,21,8",
              "text": "can"
             },
            {
              "boundingBox": "110,420,4,11",
              "text": "I"
            },
            {
               "boundingBox": "119,421,20,10",
              "text": "not"
            },
            {
              "boundingBox": "144,423,28,8",
              "text": "save"
            }
           ]
        },
        {
          "boundingBox": "31,437,179,16",
          "words": [
            {
              "boundingBox": "31,438,26,11",
              "text": "One"
            },
            {
              "boundingBox": "62,437,31,12",
               "text": "from"
            },
            {
               "boundingBox": "97,437,19,12",
              "text": "the"
             },
            {
              "boundingBox": "120,437,45,16",
              "text": "pitiless"
            },
             {
              "boundingBox": "169,438,41,11",
               "text": "wave?"
            }
          ]
        },
        {
          "boundingBox": "31,454,161,12",
          "words": [
            {
              "boundingBox": "31,455,11,11",
               "text": "Is"
            },
            {
               "boundingBox": "47,454,15,12",
              "text": "all"
             },
            {
              "boundingBox": "66,454,25,12",
              "text": "that"
            },
            {
              "boundingBox": "94,458,19,8",
              "text": "we"
            },
            {
              "boundingBox": "118,458,19,8",
              "text": "see"
            },
             {
              "boundingBox": "142,458,13,8",
               "text": "or"
            },
            {
              "boundingBox": "159,458,33,8",
              "text": "seem"
             }
          ]
        },
        {
          "boundingBox": "31,472,185,12",
          "words": [
            {
               "boundingBox": "31,473,23,11",
              "text": "But"
             },
            {
              "boundingBox": "58,476,7,8",
              "text": "a"
            },
            {
               "boundingBox": "69,472,40,12",
              "text": "dream"
            },
            {
              "boundingBox": "113,472,41,12",
              "text": "within"
            },
            {
              "boundingBox": "158,476,7,8",
               "text": "a"
            },
            {
              "boundingBox": "169,472,47,12",
              "text": "dream?"
            }
          ]
        }
      ]
    }
  ]
}
  

Note that the image is split into an array of regions; each region contains an array of lines; and each line contains an array of words. This is done so that you can replace or block out one or more specific words, lines, or regions.

Below is a jQuery code snippet making a request to this service to perform OCR on images of text. You can download the full application at https://github.com/DavidGiard/CognitiveSvcsDemos.

var language = $("#LanguageDropdown").val();
var computerVisionKey = getKey() || "Copy your Subscription key here";
// url and outputDiv are defined elsewhere on the page: url holds the public
// image URL to analyze; outputDiv is a jQuery-wrapped <div> for the results.
var webSvcUrl = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr";
webSvcUrl = webSvcUrl + "?language=" + language;
$.ajax({
    type: "POST",
    url: webSvcUrl,
    headers: { "Ocp-Apim-Subscription-Key": computerVisionKey },
    contentType: "application/json",
    data: '{ "Url": "' + url + '" }'
}).done(function (data) {
    outputDiv.text("");

    // Walk the hierarchy: regions -> lines -> words
    var regionsOfText = data.regions;
    for (var h = 0; h < regionsOfText.length; h++) {
        var linesOfText = data.regions[h].lines;
        for (var i = 0; i < linesOfText.length; i++) {
            var output = "";
            var thisLine = linesOfText[i];
            var words = thisLine.words;
            for (var j = 0; j < words.length; j++) {
                var thisWord = words[j];
                output += thisWord.text;
                output += " ";
            }
            var newDiv = "<div>" + output + "</div>";
            outputDiv.append(newDiv);
        }
        outputDiv.append("<hr>");
    }
}).fail(function (err) {
    $("#OutputDiv").text("ERROR! " + err.responseText);
});

You can find the full documentation – including an in-browser testing tool - for this API here.

Sending requests to the Cognitive Services OCR API makes it simple to convert a picture of text into text.  

Saturday, 30 December 2017 10:31:00 (GMT Standard Time, UTC+00:00)
# Friday, 29 December 2017

It's difficult enough for humans to recognize emotions in the faces of other humans. Can a computer accomplish this task? It can if we train it to and if we give it enough examples of different faces with different emotions.

When we supply data to a computer with the objective of training that computer to recognize patterns and predict new data, we call that Machine Learning. And Microsoft has done a lot of Machine Learning with a lot of faces and a lot of data and they are exposing the results for you to use.

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

The Cognitive Services Emotion API looks at photographs of people and determines the emotion of each person in the photo. Supported emotions are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Each emotion is assigned a score between 0 and 1 - a higher number indicates higher confidence that this is the emotion expressed in the face. If a picture contains multiple faces, the emotion of each face is returned.

To get started, you will need an Azure account and a Cognitive Services Emotion API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account, follow the instructions in this article to generate a Cognitive Services Emotion API key. (The steps are the same as for the Computer Vision key described there; just select the Emotion API service instead.)

To use this API, you simply have to make a POST request to the following URL:
https://[location].api.cognitive.microsoft.com/emotion/v1.0/recognize

where [location] is the Azure location where you created your API key (above).

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key
This is the Cognitive Services Emotion API key you generated above.

Content-Type

This tells the service how you will send the image. The options are:

  • application/json
  • application/octet-stream

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

{"url":"imageurl"}
where imageurl is a public URL pointing to the image. For example, to analyze the emotions in this picture of a happy face and a not-so-happy face,

[Image: TwoEmotions]

submit the following JSON:

{"url":"http://davidgiard.com/content/binary/Open-Live-Writer/Using-the-Cognitive-Services-Emotion-API_14A56/TwoEmotions_2.jpg"}

If you plan to send the image itself to the web service, set the content type to "application/octet-stream" and submit the binary image in the body of the HTTP request.

A full request looks something like this:

POST https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize HTTP/1.1
Content-Type: application/json
Host: westus.api.cognitive.microsoft.com
Content-Length: 62
Ocp-Apim-Subscription-Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
{ "url": "http://xxxx.com/xxxx.jpg" }

For example, passing a URL with a picture below of 3 attractive, smiling people

[Image: BrianAnnaDavid]

(found online at https://giard.smugmug.com/Tech-Community/SpartaHack-2016/i-4FPV9bf/0/X2/SpartaHack-068-X2.jpg)

returned the following data: 

[
  {
    "faceRectangle": {
      "height": 113,
       "left": 285,
      "top": 156,
      "width": 113
    },
    "scores": {
      "anger": 1.97831262E-09,
      "contempt": 9.096525E-05,
      "disgust": 3.86221245E-07,
      "fear": 4.26409547E-10,
      "happiness": 0.998336,
      "neutral": 0.00156954059,
      "sadness": 8.370223E-09,
      "surprise": 3.06117772E-06
    }
  },
  {
    "faceRectangle": {
       "height": 108,
      "left": 831,
      "top": 169,
      "width": 108
    },
    "scores": {
      "anger": 2.63808062E-07,
      "contempt": 5.387114E-08,
      "disgust": 1.3360991E-06,
      "fear": 1.407629E-10,
      "happiness": 0.9999967,
      "neutral": 1.63170478E-06,
      "sadness": 2.52861843E-09,
      "surprise": 1.91028926E-09
    }
  },
  {
     "faceRectangle": {
      "height": 100,
      "left": 591,
      "top": 168,
      "width": 100
    },
    "scores": {
      "anger": 3.24157673E-10,
      "contempt": 4.90155344E-06,
      "disgust": 6.54665473E-06,
      "fear": 1.73284559E-06,
      "happiness": 0.9999156,
      "neutral": 6.42121E-05,
      "sadness": 7.02297257E-06,
      "surprise": 5.53670576E-09
    }
  }
]   

A high value for the 3 happiness scores and very low values for all the other scores suggest a very high degree of confidence that each person in this photo is happy.

Here is the request in the popular HTTP analysis tool Fiddler [http://www.telerik.com/fiddler]:
Request:

[Image: Em01-Fiddler-Request]

Response:

[Image: Em02-Fiddler-Response]

Below is a C# code snippet making a request to this service to analyze the emotions of the people in an online photograph. You can download the full application at https://github.com/DavidGiard/CognitiveSvcsDemos.

// imageUrl is assumed to be set elsewhere to a public URL pointing to the photograph
string emotionApiKey = "XXXXXXXXXXXXXXXXXXXXXXX";
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", emotionApiKey);
string uri = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize";
HttpResponseMessage response;
var json = "{'url': '" + imageUrl + "'}";
byte[] byteData = Encoding.UTF8.GetBytes(json);
using (var content = new ByteArrayContent(byteData))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    response = await client.PostAsync(uri, content);
}

if (response.IsSuccessStatusCode)
{
    // data holds the JSON array of faces and emotion scores shown above
    var data = await response.Content.ReadAsStringAsync();
}
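
Continuing from the snippet above, here is a minimal sketch that pulls the highest-scoring emotion out of each face in the data string. The Newtonsoft.Json package used here is an assumption on my part; any JSON parser will do:

using System;
using System.Linq;
using Newtonsoft.Json.Linq;

// ...

// Parse the JSON array of faces returned by the service
var faces = JArray.Parse(data);
foreach (var face in faces)
{
    // "scores" maps each emotion name to a confidence between 0 and 1
    var topEmotion = ((JObject)face["scores"])
        .Properties()
        .OrderByDescending(p => (double)p.Value)
        .First();

    Console.WriteLine("Face at left={0}: {1} ({2:P1})",
        face["faceRectangle"]["left"], topEmotion.Name, (double)topEmotion.Value);
}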

You can find the full documentation – including an in-browser testing tool - for this API here.

Sending requests to the Cognitive Services Emotion API makes it simple to analyze the emotions of people in a photograph.  

Friday, 29 December 2017 10:43:00 (GMT Standard Time, UTC+00:00)
# Thursday, 28 December 2017

Generating a thumbnail image from a larger image sounds easy – just shrink the dimensions of the original, right? But it becomes more complicated if the thumbnail image is a different shape than the original. For example, the original image may be rectangular but we need the new image to be a square. Or we may need to generate a portrait-oriented thumbnail from a landscape-oriented original image. In these cases, we will need to crop or distort the original image when we create the thumbnail. Distorting the image tends to look very bad; and when we crop an image, we want to ensure that the primary subject of the image remains in the generated thumbnail. To do this, we need to identify the primary subject of the image. That's easy enough for a human observer, but difficult for a computer. Yet if we want to automate this process, we will have to ask the computer to do exactly that.

This is where machine learning can help. By analyzing many images, Machine Learning can figure out what parts of a picture are likely to be the main subject. Once this is known, it becomes a simpler matter to crop the picture in such a way that the main subject remains in the generated thumbnail.

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

The Cognitive Services Vision API uses Machine Learning so that you don't have to. It exposes a web service to return an intelligent thumbnail image from any picture.

You can see this in action here.

Scroll down to the section titled "Generate a thumbnail" to see the Thumbnail generator, as shown in Figure 1.

[Image: Th01]
Figure 1

With this live, in-browser demo, you can either select an image from the gallery and view the generated thumbnails; or provide your own image - either from your local computer or from a public URL. The page uses the Thumbnail API to create thumbnails of 6 different dimensions.
 
For your own application, you can either call the REST Web Service directly or (for a .NET application) use a custom library. The library simplifies development by abstracting away HTTP calls via strongly-typed objects.

To get started, you will need an Azure account and a Cognitive Services Vision API key.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account, follow the instructions in this article to generate a Cognitive Services Computer Vision key.

     

To use this API, you simply have to make a POST request to the following URL:
https://[location].api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?width=ww&height=hh&smartCropping=true

where [location] is the Azure location where you created your API key (above) and ww and hh are the desired width and height of the thumbnail to generate.

The “smartCropping” parameter tells the service to determine the main subject of the photo and to try to keep it in the thumbnail while cropping.
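
For example, a request for a 300x300 pixel thumbnail with smart cropping (westus is just an example region) would be:
https://westus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?width=300&height=300&smartCropping=true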

The HTTP header of the request should include the following:

Ocp-Apim-Subscription-Key
This is the Cognitive Services Computer Vision key you generated above.

Content-Type

This tells the service how you will send the image. The options are:   

  • application/json    
  • application/octet-stream    
  • multipart/form-data

If the image is accessible via a public URL, set the Content-Type to application/json and send JSON in the body of the HTTP request in the following format:

{"url":"imageurl"}
where imageurl is a public URL pointing to the image. For example, to generate a thumbnail of this picture of a skier, submit the following JSON:

{"url":"http://mezzotint.de/wp-content/uploads/2014/12/2013-skier-edge-01-Kopie.jpg"}

[Image: man skiing in the Alps]

If you plan to send the image itself to the web service, set the content type to either "application/octet-stream" or "multipart/form-data" and submit the binary image in the body of the HTTP request.

Here is a sample console application that uses the service to generate a thumbnail from an online image and save it to disk. You can download the full source code at
https://github.com/DavidGiard/CognitiveSvcsDemos

Note: You will need to create the folder "c:\test" to store the generated thumbnail.

   

// Requires: System, System.Diagnostics, System.IO, System.Net.Http,
// System.Net.Http.Headers, System.Text, and System.Web (for HttpUtility)

// TODO: Replace this value with your Computer Vision API Key
string computerVisionKey = "XXXXXXXXXXXXXXXX";

var client = new HttpClient();
var queryString = HttpUtility.ParseQueryString(string.Empty);

client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);

queryString["width"] = "300";
queryString["height"] = "300";
queryString["smartCropping"] = "true";
var uri = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail?" + queryString;

HttpResponseMessage response;

string originalPicture = "http://davidgiard.com/content/Giard/_DGInAppleton.png";
var jsonBody = "{'url': '" + originalPicture + "'}";
byte[] byteData = Encoding.UTF8.GetBytes(jsonBody);

using (var content = new ByteArrayContent(byteData))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    response = await client.PostAsync(uri, content);
}
if (response.StatusCode == System.Net.HttpStatusCode.OK)
{
    // Write thumbnail to file
    var responseContent = await response.Content.ReadAsByteArrayAsync();
    string folder = @"c:\test";
    string thumbnailFullPath = string.Format("{0}\\thumbnailResult_{1:yyyyMMddhhmmss}.jpg", folder, DateTime.Now);
    using (BinaryWriter binaryWrite = new BinaryWriter(new FileStream(thumbnailFullPath, FileMode.Create, FileAccess.Write)))
    {
        binaryWrite.Write(responseContent);
    }
    // Show BEFORE and AFTER to user
    Process.Start(thumbnailFullPath);
    Process.Start(originalPicture);
    Console.WriteLine("Done! Thumbnail is at {0}!", thumbnailFullPath);
}
else
{
    Console.WriteLine("Error occurred. Thumbnail not created");
}

The result is shown in Figure 2 below.
[Image: Th02Results]
Figure 2

One thing to note: the Thumbnail API is part of the Computer Vision API. As of this writing, the free version of the Computer Vision API is limited to 5,000 transactions per month. If you want more than that, you will need to upgrade to the Standard version, which charges $1.50 per 1,000 transactions.

But this should be plenty for you to learn this API for free and build and test your applications until you need to put them into production.
The code above can be found on GitHub.

You can find the full documentation – including an in-browser testing tool - for this API here.

The Cognitive Services Computer Vision API provides a simple way to generate thumbnail images from pictures.

Thursday, 28 December 2017 10:31:00 (GMT Standard Time, UTC+00:00)
# Wednesday, 27 December 2017

As I discussed in a previous article, Microsoft Cognitive Services includes a set of APIs that allow your applications to take advantage of Machine Learning in order to analyze images, sound, video, and language.

Your application uses Cognitive Services by calling one or more RESTful web services. These services require you to pass a key in the header of each HTTP call. You can generate this key from the Azure portal.

If you don't have an Azure account, you can get a free one at https://azure.microsoft.com/free/.

Once you have an Azure Account, navigate to the Azure Portal.

[Image: CsKey01-Portal]
Figure 1

Here you can create a Cognitive Services API key. Click the [New] button in the top left of the portal (Figure 2).

[Image: CsKey02-New]
Figure 2

It’s worth noting that the “New” button caption sometimes changes to “Create a Resource” (Figure 2a)

[Image: CsKey02-CreateResourceButton]
Figure 2a

From the flyout menu, select AI+Cognitive Services. A list of Cognitive Services displays. Select the service you want to call. For this demo, I will select Computer Vision API, as shown in Figure 3.

[Image: CsKey03-AICogServices]
Figure 3

The Computer Vision API blade displays as shown in Figure 4.

[Image: CsKey04-ComputerVisionBlade]
Figure 4

At the Name textbox, enter a name for this service account.

At the Subscription dropdown, select the Azure subscription to associate with this service.

At the Location dropdown, select the region in which you want to host this service. You should select a region close to those who will be consuming the service. Make note of the region you selected.

At the Pricing Tier dropdown, select the pricing tier you want to use. Currently, the choices are F0 (which is free, but limited to 20 calls per minute) and S1 (which is not free, but allows more calls). Click the View full pricing details link to see how much S1 will cost.

At the Resource Group field, select or create an Azure Resource Group. Resource Groups allow you to logically group different Azure resources, so you can manage them together.

Click the [Create] button to create the account. The creation typically takes less than a minute and a message displays when the service is created, as shown in Figure 5.

[Image: CsKey05-GoToResourceButton]
Figure 5

Click the [Go to resource] button to open a blade to configure the newly-created service. Alternatively, you can select "All Resources" on the left menu and search for your service by name. Either way, the service blade displays, as shown in Figure 6.

[Image: CsKey06-ComputerVisionBlade]
Figure 6

The important pieces of information in this blade are the Endpoint (on the Overview tab, Figure 7) and the Access Keys (on the Keys tab, as shown in Figure 8). Within this blade, you also have the opportunity to view log files and other tools to help troubleshoot your service. And you can set authorization and other restrictions on your service.

[Image: CsKey07-ComputerVisionOverview]
Figure 7

[Image: CsKey08-ComputerVisionKeys]
Figure 8
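
With the Endpoint (Figure 7) and a key (Figure 8) in hand, every Cognitive Services call follows the same pattern: put the key in the Ocp-Apim-Subscription-Key header and POST to a path under your endpoint. Below is a minimal C# sketch; the westus endpoint, the analyze path, and the image URL are placeholder examples, not values from your account:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CognitiveServicesCall
{
    static async Task Main()
    {
        var client = new HttpClient();

        // Paste one of the two keys from the Keys tab (Figure 8)
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_KEY_HERE");

        // Combine the Endpoint from the Overview tab (Figure 7) with the operation's path
        var uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Description";

        var body = new StringContent("{\"url\":\"http://example.com/image.jpg\"}",
            Encoding.UTF8, "application/json");
        HttpResponseMessage response = await client.PostAsync(uri, body);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}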

The process is almost identical when you create a key for any other Cognitive Service. The only difference is that you will select a different service set in the AI+Cognitive Services flyout.

Wednesday, 27 December 2017 10:35:00 (GMT Standard Time, UTC+00:00)
# Tuesday, 26 December 2017

Microsoft Cognitive Services is a set of APIs that take advantage of Machine Learning to provide developers with an easy way to analyze images, speech, language, and more.

If you have worked with or studied Machine Learning, you know that you can accomplish a lot, but that it requires a lot of computing power, a lot of time, and a lot of data. Since most of us have a limited amount of each of these, we can take advantage of the fact that Microsoft has data, time, and the computing power of Azure. They have used this power to analyze large data sets and expose the results via a set of web services, collectively known as Cognitive Services.

The APIs of Cognitive Services are divided into 5 broad categories: Vision, Speech, Language, Knowledge, and Search.

Vision APIs

The Vision APIs provide information about a given photograph or video. For example, several Vision APIs are capable of recognizing faces in an image. One analyzes each face and deduces that person's emotion; another can compare 2 photographs and decide whether or not they are of the same person; a third guesses the age of each person in a photo.

Speech APIs

The Speech APIs can convert speech to text or text to speech. They can also recognize the voice of a given speaker (you might use this to authenticate users, for example) and infer the intent of the speaker from his words and tone. The Translator Speech API supports translations between 10 different spoken languages.

Language

The Language APIs include a variety of services. A spell checker is smart enough to recognize common proper names and homonyms. The Translator Text API can detect the language in which a text is written and translate that text into another language. The Text Analytics API analyzes a document for the sentiment expressed, returning a score based on how positive or negative the wording and tone of the document are. The most interesting API in this group is the Language Understanding Intelligence Service (LUIS), which allows you to build custom language models so that your application can understand questions and statements from your users in a variety of formats.

Knowledge

Knowledge includes a variety of APIs - from customer recommendations to smart querying and information about the context of text. Many of these services take advantage of natural language processing. As of this writing, all of these services are in preview.

Search

The Search APIs allow you to retrieve Bing search results with a single web service call.

To get started using these APIs, you need an Azure account. You can get a free Azure trial at https://azure.microsoft.com/.

Each API offers a free option that restricts the number and/or frequency of calls, but you can break through that boundary for a charge.  Because they are hosted in Azure, the paid services can scale out to meet increased demand.

You call most of these APIs by passing and receiving JSON to a RESTful web service. Some of the more complex services offer configuration and setup beforehand.

These APIs are capable of analyzing pictures, text, and speech because each service draws on the knowledge learned from parsing countless photos, documents, etc. beforehand.
 
You can find documentation, sample code, and even a place to try out each API live in your browser at https://azure.microsoft.com/en-us/services/cognitive-services/

A couple of fun applications of Cognitive Services are how-old.net (which guesses the ages of people in photographs) and what-dog.net (which identifies the breed of dog in a photo).

Below is a screenshot from the Azure documentation page, listing the sets of services. But keep checking back, because this list grows and each set contains one or more services.

[Image: List of Cognitive Services]
 
Sign up today and start building apps. It’s fun, it's useful, and it’s free!

Tuesday, 26 December 2017 10:25:00 (GMT Standard Time, UTC+00:00)
# Monday, 25 December 2017
Monday, 25 December 2017 09:48:00 (GMT Standard Time, UTC+00:00)
# Sunday, 24 December 2017

I have been recording my online TV show - Technology and Friends - for 9 years. I recently passed episode #500.

The show has evolved over the years and so has the recording equipment I use.

Below is a description of the hardware I use to record Technology and Friends.

Camera: Canon EOS 6D

This is the second Canon SLR I’ve purchased. My EOS 30D lasted over 10 years, so I returned to a similar, but updated model when it finally began to fail. The EOS 6D is primarily a still camera, but it can record up to 30 minutes of high-resolution video. The image quality is outstanding, particularly with the 24-105mm Canon lens I bought with it. This setup is overkill (read: "expensive") for a show that most people view in a browser, but I also use this camera for still photography and I have been happy with the results. The main downside for video is the 30-minute limit. After this time, someone needs to re-start the recording.

Audio Recorder: Zoom H6 Handy Recorder

I bought a Zoom recorder a few years ago on the recommendation of Carl Franklin, who is the co-host and the audio expert of the excellent .NET Rocks podcast. It served me well for years, so I bought the H6 when it was time to replace it. This device contains 2 built-in microphones, but I almost always plug in 2 external microphones, so I can get closer to a speaker's mouth. I can plug in up to 4 external microphones. Using these microphones eliminates most of the background noise, allowing me to record in crowded areas. Each microphone can record to a separate audio file, which is convenient if one speaker is much louder than another.

Microphones: Shure SM58

I went with Shure based on popularity and Amazon reviews. I bought these mid-level models and I have been happy with the results. I strongly recommend external microphones (either lapel or handheld) when recording audio. My show is much better since I began using them. Switching to separate microphones is probably the single technical change that produced the biggest jump in my show's quality.
 


Tripod: Vanguard Lite1

This is a cheap tripod, but it has lasted me for years. I have a larger tripod, but seldom use it because the Vanguard is small enough to throw in a backpack, carry on a plane, and carry around a conference. I also like the fact that I can set it on a tabletop, which is what I usually do. It is not quite tall enough to stand on the ground and hold the camera as high as the face of a standing adult.

Sunday, 24 December 2017 17:56:17 (GMT Standard Time, UTC+00:00)