# Saturday, July 27, 2019

Nothing much happens in Virginia Woolf's Mrs. Dalloway.

The entire book takes place on a single day in 1923 London. Clarissa Dalloway is preparing to host an evening dinner party; and Septimus Warren Smith is contemplating suicide, as he reflects on his experience fighting in World War I and watching his best friend Evans die shortly before the armistice. Septimus's wife Lucrezia wonders why he acts so strangely when the doctor said there is nothing wrong with him.

For every few lines of conversation, we get many paragraphs of inner monologue.

Mostly, the story consists of flashbacks: Septimus recalls his friendship with Evans and Clarissa remembers her old lover Peter (to whom she refused marriage, settling instead on the government bureaucrat Richard Dalloway) and her friend Sally (with whom she once shared a romantic kiss). The day is complicated when Peter shows up after years in India.

Septimus and Clarissa never meet in the novel, but they are tied together by their obsession with their own past and by a long-past relationship with a same-sex friend.

We also get glimpses into the lives and minds of other people - primarily through their connections with Clarissa Dalloway, even though that connection is often quite thin.

Woolf's stream-of-consciousness writing style and the lack of present-day action sometimes make the book difficult to follow.

We shift from the thoughts of one person to another, and from the present to the past. It can be disorienting; but pay close attention, and you are rewarded with commentaries on a wide range of topics:

  • The role and status of marriage in society: A woman attains higher status by marrying, but she loses much of her identity to her husband
  • Society's attitudes toward mental illness, specifically PTSD suffered by veterans
  • The effect our past choices and circumstances have on current lives
  • The inevitable movement of time and the importance of how we spend it
  • The shallow lifestyle embraced by many in high society

There is much to absorb here. And Ms. Woolf's prose is enjoyable.

Saturday, July 27, 2019 7:34:05 PM (GMT Daylight Time, UTC+01:00)
# Thursday, July 25, 2019

GCast 58:

Creating and Deploying Azure Resources with ARM Templates

Learn how to generate an ARM template and use it to create and deploy resources to Azure.

Azure | DevOps | GCast | Screencast | Video
Thursday, July 25, 2019 10:34:22 PM (GMT Daylight Time, UTC+01:00)
# Saturday, July 20, 2019

Four middle-class suburbanites decide to get away from society for a weekend. They hope to break the tedium of their daily lives and paddle through the uncharted rivers in the hills of Georgia.

But on the second day, they are attacked, rescued, and further terrorized.

With no one to help them, they take matters into their own hands, resulting in a legal and moral crisis.

This is the scenario of James Dickey's novel Deliverance.

It is a story of survival and suspense and ambiguity and self-doubt; of the power of nature; and of the brutality of man.

Dickey does a masterful job shifting between descriptions of the power and beauty of nature and building tension within the story.

I read the book in a single sitting. By the end of it, I was emotionally drained. Dickey is known mostly as a poet, but this novel takes us on a dark and deadly journey that is impossible to forget. The attack in the book became one of the most memorable scenes in movie history when John Boorman turned the novel into a film two years later.

Set aside some time to read this novel and ask yourself: What would you do in these circumstances?

Saturday, July 20, 2019 9:44:00 AM (GMT Daylight Time, UTC+01:00)
# Thursday, July 18, 2019

GCast 57:

Azure Data Factory GitHub Deployment

Learn how to set up automated deployment from a GitHub repository to an Azure Data Factory.

Azure | GCast | GitHub | Screencast | Video
Thursday, July 18, 2019 11:53:00 AM (GMT Daylight Time, UTC+01:00)
# Wednesday, July 17, 2019

In a recent article, I introduced you to the "Recognize Text" API that returns the text in an image - a process known as "Optical Character Recognition", or "OCR".

In this article, I will show how to call this API from a .NET application.

Recall that the "Recognize Text" API consists of two web service calls:

We call the "Recognize Text" web service and pass an image to begin the process.

We call the "Get Recognize Text Operation Result" web service to check the status of the processing and retrieve the resulting text when the process is complete.

## The sample .NET application

If you want to follow along, the code is available in the RecognizeTextDemo project found in this GitHub repository.

To get started, you will need to create a Computer Vision key, as described here.

Creating this service gives you a URI endpoint to call as a web service, and an API key, which must be passed in the header of web service calls.

## The App

To run the app, you will need to copy the key created above into the App.config file. Listing 1 shows a sample config file:

Listing 1:

<configuration>
   <appSettings>
     <add key="ComputerVisionKey" value="5070eab11e9430cea32254e3b50bfdd5" />
   </appSettings>
 </configuration>
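
The app reads this value at runtime. A minimal sketch of that lookup, using the standard ConfigurationManager class (the class name ConfigHelper here is illustrative; the repository may wrap this differently):

```csharp
using System.Configuration;

public static class ConfigHelper
{
    // Read the Computer Vision API key stored in App.config
    public static string GetComputerVisionKey()
    {
        return ConfigurationManager.AppSettings["ComputerVisionKey"];
    }
}
```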
  

You will also need an image with some text in it. For this demo, we will use the image shown in Fig. 1.

Fig. 1

When you run the app, you will see the screen in Fig. 2.

Fig. 2

Press the [Get File] button and select the saved image, as shown in Fig. 3.

Fig. 3

Click the [Open] button. The Open File Dialog closes, the full path of the image is displayed on the form, and the [Start OCR] button is enabled, as shown in Fig. 4.

Fig. 4

Click the [Start OCR] button to call a service that starts the OCR. If an error occurs, it is possible that you did not configure the key correctly or that you are not connected to the Internet.

When the service call returns, the URL of the "Get Text" service displays (beneath the "Location Address" label), and the [Get Text] button is enabled, as shown in Fig. 5.

Fig. 5

Click the [Get Text] button. This calls the Location Address service and displays the status. If the status is "Succeeded", it displays the text in the image, as shown in Fig. 6.

Fig. 6

## The code

Let's take a look at the code in this application. It is all written in C#. The relevant parts are the calls to the two web services: "Recognize Text" and "Get Recognize Text Operation Result". The first call kicks off the OCR job; the second checks the status of the job and, when complete, returns the text found.

The code is in the TextService static class.

This class has a constant, visionEndPoint, which is the base URL of the Computer Vision Cognitive Service you created above. The code in the repository is shown in Listing 2. You may need to modify the URL if you created your service in a different region.

Listing 2:

const string visionEndPoint = "https://westus.api.cognitive.microsoft.com/";
  

### Recognize Text

The call to the "Recognize Text" API is in Listing 3:

Listing 3:

public static async Task<string> GetRecognizeTextOperationResultsFromFile(string imageLocation, string computerVisionKey)
{
    var cogSvcUrl = visionEndPoint + "vision/v2.0/recognizeText?mode=Printed";
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
    HttpResponseMessage response;
    // Convert image to a Byte array
    byte[] byteData = null;
    using (FileStream fileStream = new FileStream(imageLocation, FileMode.Open, FileAccess.Read))
    {
        BinaryReader binaryReader = new BinaryReader(fileStream);
        byteData = binaryReader.ReadBytes((int)fileStream.Length);
    }

    // Call web service; pass image; wait for response
    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(cogSvcUrl, content);
    }

    // Read the Operation-Location response header, which contains the
    // URL to call to check the status and retrieve the results
    string locationAddress = "";
    IEnumerable<string> values;
    if (response.Headers.TryGetValues("Operation-Location", out values))
    {
        locationAddress = values.First();
    }
    return locationAddress;
}
  

The first thing we do is construct the specific URL of this service call.

Then we use the System.Net.Http library to submit an HTTP POST request to this URL, passing in the image as an array of bytes in the body of the request. For more information on passing a binary file to a web service, see this article.

When the response returns, we check the headers for "Operation-Location", which is the URL of the next web service to call. The URL contains a GUID that uniquely identifies this job. We save this for our next call.

### Get Recognize Text Operation Result

After kicking off the OCR job, we need to call a different service to check the status and get the results. The code in Listing 4 does this.

Listing 4:

public static async Task<RecognizeTextResult> GetRecognizeTextOperationResults(string locationAddress, string computerVisionKey)
{
    var client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
    var response = await client.GetAsync(locationAddress);
    RecognizeTextResult results = null;
    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadAsStringAsync();
        results = JsonConvert.DeserializeObject<RecognizeTextResult>(data);
    }
    return results;
}
  

This code is much simpler because it is an HTTP GET and we don't need to pass anything in the request body.

We simply submit an HTTP GET request and use the Newtonsoft.Json library to deserialize the JSON response into a RecognizeTextResult object.
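
The RecognizeTextResult model itself is not shown above. Based on the JSON the service returns (a "status" property and a "recognitionResult" object containing "lines", each with "boundingBox", "text", and "words"), a minimal sketch of the model classes might look like this; the class names are assumptions, but the property names must match the JSON:

```csharp
using System.Collections.Generic;

public class RecognizeTextResult
{
    public string status { get; set; }
    public RecognitionResult recognitionResult { get; set; }
}

public class RecognitionResult
{
    public List<Line> lines { get; set; }
}

public class Line
{
    public List<int> boundingBox { get; set; }
    public string text { get; set; }
    public List<Word> words { get; set; }
}

public class Word
{
    public List<int> boundingBox { get; set; }
    public string text { get; set; }
}
```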

Here is the complete code in the TextService class:

Listing 5:

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using TextLib.Models;

namespace TextLib
{

    public static class TextService
    {
        const string visionEndPoint = "https://westus.api.cognitive.microsoft.com/";

        public static async Task<string> GetRecognizeTextOperationResultsFromFile(string imageLocation, string computerVisionKey)
        {
            var cogSvcUrl = visionEndPoint + "vision/v2.0/recognizeText?mode=Printed";
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
            HttpResponseMessage response;
            // Convert image to a Byte array
            byte[] byteData = null;
            using (FileStream fileStream = new FileStream(imageLocation, FileMode.Open, FileAccess.Read))
            {
                BinaryReader binaryReader = new BinaryReader(fileStream);
                byteData = binaryReader.ReadBytes((int)fileStream.Length);
            }

            // Call web service; pass image; wait for response
            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(cogSvcUrl, content);
            }

            // Read the Operation-Location response header, which contains the
            // URL to call to check the status and retrieve the results
            string locationAddress = "";
            IEnumerable<string> values;
            if (response.Headers.TryGetValues("Operation-Location", out values))
            {
                locationAddress = values.First();
            }
            return locationAddress;
        }

        public static async Task<RecognizeTextResult> GetRecognizeTextOperationResults(string locationAddress, string computerVisionKey)
        {
            var client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
            var response = await client.GetAsync(locationAddress);
            RecognizeTextResult results = null;
            if (response.IsSuccessStatusCode)
            {
                var data = await response.Content.ReadAsStringAsync();
                results = JsonConvert.DeserializeObject<RecognizeTextResult>(data);
            }
            return results;
        }

    }
}
  

## The remaining code

There is other code in this application to do things like select the file from disk and loop through the JSON to concatenate all the text; but this code is very simple and (hopefully) self-documenting. You may choose other ways to get the file and handle the JSON in the response.
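
As one example of that "loop through the JSON" step, the lines can be concatenated roughly as follows. This is an illustrative sketch using Newtonsoft.Json's JSONPath support, not necessarily the repository's version:

```csharp
using System.Text;
using Newtonsoft.Json.Linq;

public static class TextConcatenator
{
    // Concatenate the "text" of every recognized line into one string
    public static string GetAllText(string recognizeTextJson)
    {
        var sb = new StringBuilder();
        JObject result = JObject.Parse(recognizeTextJson);
        foreach (JToken line in result.SelectTokens("recognitionResult.lines[*]"))
        {
            sb.AppendLine((string)line["text"]);
        }
        return sb.ToString();
    }
}
```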

In this article, I've focused on the code to manage the Cognitive Services calls and responses to those calls in order to get the text from a picture of text.

Wednesday, July 17, 2019 10:51:00 AM (GMT Daylight Time, UTC+01:00)
# Tuesday, July 16, 2019

Sometimes a web service requires us to pass a binary file, such as an image, in the request body.

To do this, we need to submit the request with the POST verb, because other verbs - most notably "GET" - do not contain a body.

One simple web service that accepts a binary file is the Cognitive Services Image Analysis API. This API is fully documented here.

I created a console application (the simplest .NET app I can think of) to demonstrate how to pass the binary image to the web service. This application is named "ImageAnalysisConsoleAppDemo" and is included in my Cognitive Services demos, which you can download here.

## Assumptions

Before you get started, you will need to create a Computer Vision Cognitive Service, as described here.

I have hard-coded the file name and location, along with the Cognitive Services URL, but you can change these to match what you are using. You will also need to add your API key to the App.config file.

## The code

The first thing we need to do is to read the file and convert it into an array of bytes. The code to do this is in Listing 1 below.

Listing 1:

byte[] byteData;
using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
{
    BinaryReader binaryReader = new BinaryReader(fileStream);
    byteData = binaryReader.ReadBytes((int)fileStream.Length);
}
  

Next, we call the web service, passing the byte array. The System.Net.Http client library helps us to make this call. Notice the "using" construct that converts the byte array into a ByteArrayContent object that is required by the library.

Within that "using", we make an asynchronous call to the web service and capture the results.

Listing 2 shows this code.

Listing 2:

var cogSvcUrl = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Description&language=en";
HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
HttpResponseMessage response;
using (ByteArrayContent content = new ByteArrayContent(byteData))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    response = await client.PostAsync(cogSvcUrl, content);
}
  

Finally, we convert the results to a string, as shown in Listing 3. This web service returns JSON containing either information about the image or an error message.

Listing 3:

string webServiceResponseContent = await response.Content.ReadAsStringAsync();
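
If you want more than the raw JSON string, the caption can be pulled out with Newtonsoft.Json. This is an illustrative sketch; the property path follows the "Description" feature's response shape (description, then captions, then text):

```csharp
using Newtonsoft.Json.Linq;

public static class CaptionReader
{
    // Pull the first caption out of the Image Analysis JSON response
    public static string GetFirstCaption(string webServiceResponseContent)
    {
        JObject json = JObject.Parse(webServiceResponseContent);
        return (string)json.SelectToken("description.captions[0].text");
    }
}
```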
  

Here is the full code:

Listing 4:

using System;
using System.Configuration;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

namespace ImageAnalysisConsoleAppDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            MainAsync().Wait();
            Console.ReadLine();
        }

        static async Task MainAsync()
        {
            string key = GetKey();
            string imageFilePath = @"c:\test\kittens.jpg";
            if (!File.Exists(imageFilePath))
            {
                Console.WriteLine("File {0} does not exist", imageFilePath);
                return;
            }
            string results = await GetRecognizeTextOperationResultsFromFile(imageFilePath, key);
            Console.WriteLine(results);
        }


        public static async Task<string> GetRecognizeTextOperationResultsFromFile(string imageFilePath, string computerVisionKey)
        {
            // Convert file into Byte Array
            byte[] byteData;
            using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
            {
                BinaryReader binaryReader = new BinaryReader(fileStream);
                byteData = binaryReader.ReadBytes((int)fileStream.Length);
            }

            // Make web service call. Pass byte array in body
            var cogSvcUrl = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Description&language=en";
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
            HttpResponseMessage response;
            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(cogSvcUrl, content);
            }

            // Get results
            string webServiceResponseContent = await response.Content.ReadAsStringAsync();
            return webServiceResponseContent;
        }

        public static string GetKey()
        {
            string computerVisionKey = ConfigurationManager.AppSettings["ComputerVisionKey"];
            return computerVisionKey;
        }

    }
}
  

Fig. 1 shows the output when analyzing the image displayed in Fig. 2 (saved in “c:\test\kittens.jpg”).

Fig. 1

Fig. 2

This code is not complex, but it is not intuitive (at least not to me). So, it's useful to understand how to write C# code to pass a binary file to a web service.

Tuesday, July 16, 2019 9:00:00 AM (GMT Daylight Time, UTC+01:00)
# Monday, July 15, 2019

Episode 572

Whitney Griffith on Azure Blockchain as a Service

Whitney Griffith describes Blockchain, how it is implemented on Azure, and how her team used it to solve a transportation problem.

Monday, July 15, 2019 9:09:00 AM (GMT Daylight Time, UTC+01:00)
# Sunday, July 14, 2019

Charles Smithson is a member of the upper class in Victorian England. He is wealthy to the point of being idle rich and poised to inherit a title and fortune from his bachelor uncle. He becomes engaged to wealthy Ernestina Freeman, who is pretty, but simple and self-centered.

Everyone in the small, seaside town of Lyme Regis believes that Sarah Woodruff is mad. She spends her days staring at the sea, waiting for the return of the French officer who jilted her years before.

Smithson's future seems set, until he meets Sarah and takes an interest in her tragic past.

At first glance, it seems a classic love triangle with a gentleman forced to choose between the lady to whom he is promised and a fallen woman. But Sarah is not what she seems, and everything becomes more complicated.

The reader is given clues about Sarah's motivations, but we never fully understand her actions. On the other hand, Fowles explores Charles in depth - his prejudices and his frustrations and his fears and his flaws. We follow him on his path of self-destruction, and weep at his arrogance in thinking himself benevolent toward Sarah because he (sometimes) treats her as a human being.

We are shown other characters, very few of whom are likeable. They are hypocritical, dishonest, spoiled, and almost universally self-serving.

The story does a good job of establishing the stark contrasts of Victorian society: between the traditional Charles and the rebellious Sarah; between the upper and lower classes and the often hostile relationships between them; between the gender roles of men and women; and between the perception and reality of Victorian sexual mores.

Fowles has no qualms about injecting himself into the story, reminding the reader that he controls the actions of the fictional characters. He cuts in with long asides about the differences between sexual attitudes in the 19th and 20th centuries; or the life of Thomas Hardy; or the role of a writer in a novel. It's a risk; but, for the most part, he pulls it off, thanks to his cleverness and excellent prose. This only becomes an issue when he presents multiple possible endings to the story, which struck me as a cop-out.

Despite this weakness, The French Lieutenant's Woman by John Fowles is a very good story that will make you think.

Sunday, July 14, 2019 9:29:00 AM (GMT Daylight Time, UTC+01:00)
# Saturday, July 13, 2019

Possession by A.S. Byatt is not a straightforward tale. It tells of an affair between Victorian-era poets Randolph Ash and Christabel LaMotte.

But the romance is revealed largely through letters and journals and essays and poems written by Ash and LaMotte and those around them; and it is slowly uncovered by present-day scholars Roland Michell and Maud Bailey - whose own romance is growing.

Near the beginning of the novel, I questioned the frequent context switching between the present and the mid-19th century. I wondered why we should bother with the present-day characters. But, as the book progressed, the two stories became more intertwined. I began to enjoy the historical discoveries, as Roland and Maud made them; I appreciated the ethical dilemmas Byatt presented of learning about the past, while respecting the privacy of the dead; and I was intrigued by the academic rivalries, as others heard of these discoveries and raced to uncover details more quickly.

Mostly through Roland and Maud's eyes, we watch the growth of the relationship between Randolph and Christabel and the effects of that relationship. Ash has a wife and LaMotte has a lesbian partner; both women learn of the affair and are profoundly altered by it.

I enjoyed Byatt's poetry, which she attributed to her fictional authors/lovers. Byatt does a good job of giving different voices to the writings of each character, including the styles of their poetry.

I really liked the ending, which revealed much about the lives of Randolph and Christabel, much of which was never discovered by Roland, Maud, and their contemporaries.

Possession is a good book for fans of poetry, detective stories, and historical romances.

Saturday, July 13, 2019 9:25:00 AM (GMT Daylight Time, UTC+01:00)
# Friday, July 12, 2019

From its earliest days, Microsoft Cognitive Services has had the ability to convert pictures of text into text - a process known as Optical Character Recognition. I wrote about using this service here and here.

Recently, Microsoft released a new service to perform OCR. Unlike the previous service, which only requires a single web service call, this service requires two calls: one to pass an image and start the text recognition process; and another to check the status of that process and return the transcribed text.

To get started, you will need to create a Computer Vision key, as described here.

Creating this service gives you a URI endpoint to call as a web service, and an API key, which must be passed in the header of web service calls.

## Recognize Text

The first call is to the Recognize Text API. To call this API, send an HTTP POST to the following URL:

https://lllll.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=mmmmm

where:

lllll is the location selected when you created the Computer Vision Cognitive Service in Azure; and

mmmmm is "Printed" if the image contains printed text, as from a computer or typewriter; or "Handwritten" if the image contains a picture of handwritten text.

The header of an HTTP request can include name-value pairs. In this request, include the following name-value pairs:

| Name | Value |
| --- | --- |
| Ocp-Apim-Subscription-Key | The Computer Vision API key (from the Cognitive Service created above) |
| Content-Type | "application/json", if you plan to pass a URL pointing to an image on the public web; "application/octet-stream", if you are passing the actual image in the request body |

Details about the request body are described below.

You must pass the image or the URL of the image in the request body. What you pass must be consistent with the "Content-Type" value passed in the header.

If you set the Content-Type header value to "application/json", pass the following JSON in the request body:

{"url":"http://xxxx.com/xxx.xxx"}  

where http://xxxx.com/xxx.xxx is the URL of the image you want to analyze. This image must be accessible to Cognitive Service (e.g., it cannot be behind a firewall or password-protected).

If you set the Content-Type header value to "application/octet-stream", pass the binary image in the request body.
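
In C#, the application/json variant of the call might look like the following sketch; the region ("westus"), the method name, and the image URL are illustrative placeholders:

```csharp
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static async Task<string> StartRecognizeTextFromUrl(string imageUrl, string computerVisionKey)
{
    var client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);

    // Pass the image URL as JSON in the request body
    var body = new StringContent("{\"url\":\"" + imageUrl + "\"}", Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(
        "https://westus.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=Printed", body);

    // A 202 (Accepted) response carries the Operation-Location header
    return response.Headers.GetValues("Operation-Location").First();
}
```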

You will receive an HTTP response to your POST. A response code of "202" ("Accepted") indicates that the POST was successful and that the service is analyzing the image. An "Accepted" response will include an "Operation-Location" header. The value of this header is a URL that you can use to query whether the service has finished analyzing the image. The URL will look like the following:

https://lllll.api.cognitiveservices.microsoft.com/vision/v2.0/textOperations/gggggggg-gggg-gggg-gggg-gggggggggggg

where

lllll is the location selected when you created the Computer Vision Cognitive Service in Azure; and

gggggggg-gggg-gggg-gggg-gggggggggggg is a GUID that uniquely identifies the analysis job.

## Get Recognize Text Operation Result

After you call the Recognize Text service, you can call the Get Recognize Text Operation Result service to determine if the OCR operation is complete.

To call this service, send an HTTP GET request to the "Operation-Location" URL returned in the request above.

In the header, send the following name-value pair:

| Name | Value |
| --- | --- |
| Ocp-Apim-Subscription-Key | The Computer Vision API key (from the Cognitive Service created above) |

This is the same value as in the previous request.

An HTTP GET request has no body, so there is nothing to send there.

If the request is successful, you will receive an HTTP "200" ("OK") response code. A successful response does not mean that the image has been analyzed. To know if it has been analyzed, you will need to look at the JSON object returned in the body of the response.

At the root of this JSON object is a property named "status". If the value of this property is "Succeeded", this indicates that the analysis is complete, and the text of the image will also be included in the same JSON object.

Other possible statuses are "NotStarted", "Running" and "Failed".
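
Because of the "NotStarted" and "Running" statuses, a caller will typically poll the Operation-Location URL with a short delay until the job finishes. A minimal sketch of such a loop; the one-second delay and ten-attempt limit are arbitrary choices, not part of the API:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static async Task<string> WaitForStatus(string locationAddress, string computerVisionKey)
{
    var client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", computerVisionKey);
    string status = "NotStarted";
    // Poll until the job leaves the "NotStarted"/"Running" states
    for (int attempt = 0; attempt < 10 && (status == "NotStarted" || status == "Running"); attempt++)
    {
        await Task.Delay(1000);   // wait one second between checks
        HttpResponseMessage response = await client.GetAsync(locationAddress);
        JObject json = JObject.Parse(await response.Content.ReadAsStringAsync());
        status = (string)json["status"];
    }
    return status;   // "Succeeded" when the recognized text is ready
}
```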

A successful status will include the recognized text in the JSON document.

At the root of the JSON (the same level as "status") is an object named "recognitionResult". This object contains a child array named "lines".

Each element of the "lines" array is an anonymous object containing a "boundingBox" array, a "text" string, and a "words" array. Each element represents a line of recognized text.

The "boundingBox" array contains exactly 8 integers, representing the x,y coordinates of the corners of an invisible rectangle around the line.

The "text" string contains the full text of the line.

Each element of the "words" array is an anonymous object containing a "boundingBox" array and a "text" string. Each element represents a single word in the line.

The "boundingBox" array contains exactly 8 integers, representing the x,y coordinates of the corners of an invisible rectangle around the word.

The "text" string contains the word.

Below is a sample of a partial result:

{
  "status": "Succeeded",
  "recognitionResult": {
    "lines": [
      {
        "boundingBox": [
          202, 618, 2047, 643, 2046, 840, 200, 813
        ],
        "text": "The walrus and the carpenter",
        "words": [
          {
            "boundingBox": [
              204, 627, 481, 628, 481, 830, 204, 829
            ],
            "text": "The"
          },
          {
            "boundingBox": [
              519, 628, 1057, 630, 1057, 832, 518, 830
            ],
            "text": "walrus"
          },
          ...etc...
  

In this article, I showed details of the Recognize Text API. In a future article, I will show how to call this service from code within your application.

Friday, July 12, 2019 2:00:09 PM (GMT Daylight Time, UTC+01:00)