
Top 5 tips for users of Microsoft Cognitive Services

Introduction and Background

As the title suggests, this post is a set of personal recommendations for users of Microsoft Cognitive Services, the cloud-based, subscription-based services for building artificially intelligent software applications, aimed at any team, any purpose and any scale. We all know that Microsoft is investing a great deal of manpower, promotion and commitment in Azure these days, and almost every one of their solutions revolves around it; one way or another they come back to the same conclusion: the solution can be purchased as a service from Azure. There are many names for this (Software-as-a-Service, Platform-as-a-Service and so on); pick whichever you like from the pool.

In this post, I am going to cover the most important points that your team should understand before migrating to Microsoft Cognitive Services.

Background of Microsoft Cognitive Services

For anyone who has no idea what Microsoft Cognitive Services are: they are a bundle of services provided by Microsoft to individuals, teams and organizations of any size and scale, covering scenarios that require complex machine learning or artificially intelligent responses.

Machine learning is a tough task to get right, and just one wrong input can send your entire algorithm to {add a slang here}. Microsoft provides the service so that you only supply the inputs and collect the outputs; Microsoft itself manages how the algorithms are fine-tuned and how they perform, so you don’t have to worry about that.

It is a subscription-based offering, now provided as a service in Azure. In this post, you will find out how much help Cognitive Services can be to you!

Tip #0: Ask (Convince) your boss

Microsoft Cognitive Services are tested against thousands (if not millions) of users, data records and entities, and the algorithms are rock solid! You cannot easily match the level of service Microsoft Cognitive Services provide, because Microsoft has partnered with a great many academic professors, indie developers, teams and organizations, and even casual users often show up and share data to the cloud. All of this happens under a license and with permission asked; I am not here to cover the license terms anyway.

Get the permission, so that we can continue with this post. 🙂

Tip #1: Take only what you need

Cognitive Services is a library of services; many are already in the library and more are added every month. But that doesn’t mean you should consider all of them, or even half of them. They are categorized under different sub-sections, each containing a set of related services provided by Microsoft CS:

  1. Vision
    • This set of services contains face APIs, such as recognition and tracking.
    • It also provides services that can extract features from faces, such as age and emotion.
    • It also provides Computer Vision, which allows users to perform OCR on images.
  2. Speech
    • Allows your users to trigger functions based on their vocal commands (natural language processing).
    • Speaker Recognition, a bleeding-edge technology!
    • Speech-to-text and text-to-speech services.
  3. Language
    • Allows you to perform linguistic analysis of text.
    • You can combine this with the previous services, reading text from photos using OCR and then analyzing it.
    • LUIS (Language Understanding Intelligent Service) is the new Jarvis!
  4. Knowledge
    • Recommender systems.
    • Anything that requires complex academic or research data.
  5. Search
    • The old Bing APIs are now provided here…

As you can see, these are the categories, and each category has different API sets and services that you might want to consume. It is up to you to select which ones you need.

Let me put this simply: if all you need to do is read the text from images, convert it to speech and communicate, then all you need to purchase is the “Computer Vision API” and the “Bing Speech API”. Your application won’t need the rest of the services. LUIS can be added later on to support the communication.

More services will come, and you can always add them later. But if you are no longer using a service, or your application has no use for it, there is no need to purchase a key for it.

Tip #2: Keep everything in Azure

Microsoft CS can be accessed from different places (all Microsoft properties); LUIS, for example, can be accessed through luis.ai. But you should keep the family tight and keep all of the keys and resources in Azure, so that you can manage everything from a single subscription instead of juggling several different accounts to configure and consume the applications.

Microsoft CS is exposed as a REST-based API (we will cover this in a later tip), so it is very easy to attach the key to a request and start consuming the services.

You can manage all of the keys from within Azure: just head over to the Cognitive Services blade and open the service that you want the keys for. Under the “Keys” section, you will find the keys you can use to authenticate the requests.


Figure 1: List of the Cognitive Services associated with the account.

I have 4 services active that I can access in Azure through the REST APIs. How simple is that! You can add more services, regenerate the keys, update them… all from within Azure! By the end of this post, you will realize the importance of this tip.

Tip #3: Get the most out of the REST API

Microsoft CS endpoints in Azure are provided as REST APIs that you can access through any HTTP client, even a web browser. Since the REST API works over HTTP, you can make full use of the protocol to send and receive information. Currently, Microsoft CS supports two ways of uploading content to the cloud:

  1. URL based
  2. Binary data based

These are the two ways you can deliver content to Azure for processing. Apart from that, the only required header for the request is the subscription key, passed in the “Ocp-Apim-Subscription-Key” header; it is validated first, and the rest of the request is processed based on the subscription information. A sketch of both request shapes is shown below.
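To make the difference concrete, here is a minimal sketch of both upload modes against the Computer Vision “analyze” endpoint; the subscription key, region and image locations are placeholders, not values from this post.

// A sketch of the two upload modes; the key, region and image locations are placeholders.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static class UploadModes
{
    public static async Task AnalyzeAsync()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<the-subscription-key>");
            var uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Description";

            // 1. URL based: a small JSON body pointing at a publicly reachable image.
            var urlBody = new StringContent("{\"url\":\"https://example.com/photo.jpg\"}",
                                            Encoding.UTF8, "application/json");
            var urlResponse = await client.PostAsync(uri, urlBody);
            Console.WriteLine(await urlResponse.Content.ReadAsStringAsync());

            // 2. Binary data: the raw bytes of a local file, sent as an octet-stream.
            byte[] bytes = File.ReadAllBytes("photo.jpg");
            using (var binaryBody = new ByteArrayContent(bytes))
            {
                binaryBody.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                var binaryResponse = await client.PostAsync(uri, binaryBody);
                Console.WriteLine(await binaryResponse.Content.ReadAsStringAsync());
            }
        }
    }
}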

Example

Now let me show you a little example of a WPF application that consumes the Computer Vision API to detect what an image is all about. Azure will return a complete sentence that describes the image, the objects in it and the activity taking place.

The XAML code for the WPF application is as below,

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition />
        <ColumnDefinition />
    </Grid.ColumnDefinitions>
    <Border BorderBrush="Black" BorderThickness="1" Width="211" Height="188">
        <Image Name="image" HorizontalAlignment="Left" Grid.Column="0" Height="188" MouseLeftButtonDown="Image_MouseLeftButtonDown" VerticalAlignment="Top" Width="211"/>
    </Border>
    <Button Name="btn" Grid.Column="0" Margin="0,0,24,10" Height="20" Width="70" Click="btn_Click" VerticalAlignment="Bottom" HorizontalAlignment="Right">Process</Button>
    <Button Name="slct" Grid.Column="0" Margin="24,0,0,10" Height="20" Width="70" Click="slct_Click" VerticalAlignment="Bottom" HorizontalAlignment="Left">Select</Button>
    <TextBlock Name="rslt" Margin="10" VerticalAlignment="Center" TextWrapping="Wrap" Grid.Column="1" Text="Result will be here..." />
</Grid>


Figure 2: WPF application running, with no image selected.

As for the backend, the C# code was written as follows,

// Requires: using System; using System.IO; using System.Net.Http;
// using System.Net.Http.Headers; using Microsoft.Win32; using System.Windows.Media.Imaging;

private string fileName; // path of the image selected through the dialog

private async void btn_Click(object sender, RoutedEventArgs e)
{
    if (fileName == null) { MessageBox.Show("Select a file first."); return; }

    using (var client = new HttpClient())
    {
        // Request headers
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<the-subscription-key>");

        // Request parameters
        var uri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Description";

        // Request body: the raw bytes of the selected image
        byte[] byteData = File.ReadAllBytes(fileName);

        using (var content = new ByteArrayContent(byteData))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            var response = await client.PostAsync(uri, content);
            rslt.Text = await response.Content.ReadAsStringAsync();
        }
    }
}

private void slct_Click(object sender, RoutedEventArgs e)
{
    var dialog = new OpenFileDialog();
    if (dialog.ShowDialog() == true)
    {
        // Remember the selected file and show a preview in the Image control
        fileName = dialog.FileName;
        image.Source = new BitmapImage(new Uri(fileName));
    }
}

Likewise, the output of this code, once it ran, was,


Figure 3: Image selected and response captured from the Azure.

As you can see, the result is JSON, which can be mapped to an object for storage or for further processing of the requests.
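If you prefer to work with it as an object rather than raw text, here is a minimal sketch of such a mapping. It assumes the Newtonsoft.Json package and the general shape of the v1.0 analyze response (a description with tags and captions); the class and variable names are my own, not part of the API.

// Minimal classes mirroring the "description" portion of the analyze response.
// Requires the Newtonsoft.Json package (and System.Linq for FirstOrDefault).
public class CaptionResult
{
    public string Text { get; set; }
    public double Confidence { get; set; }
}

public class DescriptionResult
{
    public string[] Tags { get; set; }
    public CaptionResult[] Captions { get; set; }
}

public class AnalysisResult
{
    public DescriptionResult Description { get; set; }
    public string RequestId { get; set; }
}

// Usage: map the raw JSON string returned by the service onto the classes above.
var result = JsonConvert.DeserializeObject<AnalysisResult>(json);
var caption = result?.Description?.Captions?.FirstOrDefault()?.Text;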

Tip #4: Timing is everything

Microsoft CS is only interesting to us if it can return results in a timely manner. For example, if we use Microsoft CS in security applications, users must get results quickly, and any lag may force us to reconsider the design.

So, I wanted to measure the time of the request as well, to demonstrate how this all works. For that, I modified the code as follows,

Stopwatch watch = new Stopwatch(); // requires using System.Diagnostics;
watch.Start();
var response = await client.PostAsync(uri, content);
watch.Stop();

rslt.Text = $"Request took {watch.ElapsedMilliseconds} ms to complete, for {byteData.Length} sized byte array.\n\n";

rslt.Text += await response.Content.ReadAsStringAsync();
The effect was that I could determine how long it takes to process the request and return the result.


Figure 4: Application showing the time as well.

Look at the top paragraph; it says, “Request took 3519 milliseconds to complete, for 33282 sized byte array.” That means it took around 3.5 seconds to process a file of roughly 33 kB. Other factors contributed to the delay, such as my internet connection. Also, a larger image file will take more time, and a smaller image will process more quickly but with more errors in the result.

There are a few things that we learn from this…

  1. Azure’s own processing time is not the big factor; the main factors are
    1. Our own internet connection
    2. The image itself
  2. The type of processing to be done is important
    1. Processing a 15-second low-quality sound track vs. a one-minute high-quality sound track will never take the same amount of time.
  3. CDNs may or may not help in this case

Finally, the requirements differ from API to API, which is why I will not talk about a recommended image size. But you can improve the performance of your applications by uploading the files directly to Azure, because with the URL-based approach Azure has to download the file first before processing it. So, why not upload it directly?

Tip #5: Security

The keys for your application are really crucial. If they are lost or become accessible to anyone else, you are responsible for what happens; in the worst scenario, someone may use your resources for their own purposes, and you will be the one getting charged!

Remember Tip #2: if you followed my advice, you can now easily regenerate a key whenever you feel someone else has access to it.


Figure 5: Keys shown for the Microsoft Cognitive Service purchased from Azure.

Otherwise, you can use other ways to hide the keys if you don’t like rotating them every month, such as storing them in a secure location like Azure Key Vault or any other place nobody can access… But what if someone does get access? 🙂

Things can go wrong in many ways, so my recommendation is to regenerate the keys every month. Note that you can use either Key 1 or Key 2, and you can regenerate each key independently of the other.
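As a small illustration of keeping the key out of the source code, here is a sketch that plugs into the earlier HttpClient example and reads the key from an environment variable; the variable name COGNITIVE_SERVICES_KEY is a placeholder of my own choosing, and an app setting or a Key Vault secret could be wired up the same way.

// Read the subscription key from the environment instead of hard-coding it.
// "COGNITIVE_SERVICES_KEY" is a placeholder name, not an official setting.
var subscriptionKey = Environment.GetEnvironmentVariable("COGNITIVE_SERVICES_KEY");
if (string.IsNullOrEmpty(subscriptionKey))
    throw new InvalidOperationException("The Cognitive Services key is not configured.");

client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);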

Reminder: just while you were reading this post, I went back and regenerated the keys… It took only 4 seconds to regenerate both. 🙂

Final Words

I have no words, seriously. I am out of words at the moment, so, I hope you enjoyed the post. 🙂 See you next time.

Continuous Delivery to Azure App Services with VSTS

Introduction and Background

It has been quite a while since I last published anything about Microsoft Azure, so here is my next post on the topic. The subject could have been the last post in the DevOps category, but it turns out to be one of the first instead. What I want to cover in this post is this: most of the time, Continuous Delivery comes in handy when you really want to automate things. However, CD is not that simple to set up; usually there is authentication required on the machines where your application is going to be delivered, and so on. Continuous Integration, on the simpler end, is just the process of triggering a build for every change in the system.

Continuous Delivery gives DevOps practitioners the hardest time. The reason is that CD requires not just the release information but also where to deploy the application, and most systems require authentication/authorization in order to deploy or update the applications. That is why I believe this is the toughest topic in DevOps.

To keep things simple, I will just create a simple ASP.NET Core application and push it to a Visual Studio Team Services repository, where it will be:

  1. Built using the ASP.NET Core build system
  2. Deployed to an Azure App Service “Deployment Slot”
  3. Swapped to the default web app slot

These are the steps that we are going to take in this post, to learn how we can solve the CD problem with VSTS for Azure and ASP.NET Core web applications.

Note: If any of the images are blurry or hard to read, right-click the image and open it in a new tab. Then remove the query string (for example, “?w=625&h=113”) and reload the image. That query string requests a smaller image; without it you get the image in full resolution.

Deploying a simple web app

So, open up a terminal and type the following commands,

$ cd your_preferred_directory
$ dotnet new -t web
$ dotnet restore

What this will do, if you don’t know, is create a new project and set up the development environment for you locally. After this step you should open up the IDE (I recommend using Visual Studio Code) and start programming in it.

$ code .

I will initially upload the default web application; later I will push the changes through the deployment slot and deploy the latest version to production.

So, let us go ahead and create a git repository inside the same directory. Later on we will publish the project to the remote repository of our VSTS project. You would need to execute the following commands for that,

$ git init
$ git add .
$ git commit -m "Committing the changes to repository"
$ git push https://git.yourserver.com/repository_path.git master

The above commands are hypothetical; fill in your own repository information. They will work exactly the same way.

I did the same, and targeted my VSTS project repository to publish the code. Once the code was published, the build system was triggered automatically.

Figure 1: Publishing the changes to VSTS repository.

That content gets uploaded to the VSTS repository,

Figure 2: Content of online repository in VSTS.

Now, since our code has been modified, the continuous integration system triggers and starts building the project.

Figure 3: Build triggered and running in VSTS.

That is the build that our change triggered and queued. The build definition used here is the ASP.NET Core one provided by Microsoft in VSTS. We could use other build definitions, or create our own based on the frameworks or languages being used. However, since ASP.NET Core was the framework here, I used that.

Figure 4: Build results for the latest changes in the repository.

After the required steps in the build system, it finally publishes the built executables and other resources to an artifact folder, from where other processes can copy the content easily.

Now, since we are using the CD system, a successful build also triggers the deployment to the location we have configured.

Using Deployment Slots

The reason you should prefer Deployment Slots is simple: they are wonderful! With many other vendors you need two separate machines and have to connect them yourself, whereas on Azure things are different.

  1. Deployment Slots are sub-machines within your own machine, or rather within your own service.
  2. They provide an identical environment and configuration for your application to run in.
  3. Deployment Slots mimic the production environment, so it is as if you were testing the application in the production environment.
  4. The swapping process incurs zero downtime! Load balancers allow Azure to switch the routing to the virtual machines or services internally.
  5. No requests are dropped during the swap.

Thus, I also used deployment slots in Azure, as they make a lot of things easy to manage. It is my own recommendation to never deploy applications directly to the production environment; there should be a testing step that confirms everything runs perfectly in a production-like environment, not just in the testing environment.

So, that is what I did. I created a deployment slot in the testing App Service for this post.

Figure 5: Deployment slot created in the App Service to stage the latest version of the application for testing and warm-up purposes.
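For completeness: the same slot can also be created without the portal. Here is a rough sketch using the AzureRM PowerShell module; the resource group and app names are placeholders, not the ones used in this post.

# Create a "deployment" slot on an existing App Service.
# The resource group and app names below are placeholders.
New-AzureRmWebAppSlot -ResourceGroupName "my-resource-group" `
                      -Name "my-app-service" `
                      -Slot "deployment"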

Once that was done, I went on to modify the release definition to make sure the release deploys the application to the deployment slot instead of the production slot.

Figure 6: Release settings for our Azure App Service.

The “Deploy to slot” option configures whether to deploy to a slot or to the production slot. Now that our release is configured, the next step in the toolchain was the deployment itself, and since our build succeeded, VSTS automatically triggered the deployment of the application to the Azure Deployment Slot.

Figure 7: Release processing the latest build and deploying the application to Azure.

Pay close attention to the picture above,

  1. It got triggered automatically (see the Description column).
  2. The build that was used to fetch the artifacts is also shown.
  3. The time and the author are also shown.

After this, the application gets deployed to the server.

Preview of the application

The first preview of the application is the following one,

Figure 8: Deployment Slot application preview.

This is the preview of the application in the deployment slot. If you look at the URL, you will notice “-deployment” appended to it, which means this is the preview of our slot, not the production application itself.

Benefits? Quite a few…

  1. We can run all sorts of tests to check whether the latest build works properly or not.
  2. All of the dependencies get loaded before any user is served; this “warms up” the slot.
  3. Users don’t notice that the website was updated; they just see what’s improved!
  4. To update the production slot, all we need to do is click a button.
  5. In case of any problem, we can roll back the latest build… even from the production slot!

These are a few of the benefits I have found in using Deployment Slots, and they are why I personally recommend deploying to a slot instead of the production slot every time. Additionally, you can provide multiple slots to different teams, and every team can work separately in their own environment, testing their own changes.

Updating the application

Now that our application is running, let us see how quickly we can update the application after a change. What I will do is update the navigation header of the site.

The change itself was trivial: I added the text “Updated!” to the navigation header markup, which in the default template lives in Views/Shared/_Layout.cshtml.

Figure 9: Committing the latest changes to the local repository.

This updates the local repository; then we can push to the remote repository to reflect the changes in our project. That step is similar to what we did previously: just publish the changes to the remote repository.

Figure 10: Pushing the changes to the server for deployment.

So, that change was pushed to the server, which then triggered our toolchain for build, release and deployment of the ASP.NET Core web application. What this did was:

  1. Updated the remote repository.
  2. Triggered the build automatically (Continuous Integration).
  3. Triggered the release automatically once the build succeeded (Continuous Deployment).
  4. Finally, published the application where it needs to go live.

The rest was similar to what we had before: the same build procedure, the same release cycle, and finally everything goes to the Deployment Slot.

Swapping: Deployment Slots to Production Slots

At this moment, our application is running properly in the deployment slot! However, we need to swap the slots so that our users can see the live updates. First, let us see how the updates changed our website; then we will move on to updating the production site itself.

Figure 11: Latest updates on the Deployment Slot.

That is the deployment slot, and now we can move on to applying the changes to the production slot. For that, we select the Swap option from the Deployment Slots blade; that option lets us choose which slot goes where.

Figure 12: Swapping the slots blade.

This order matters a lot: the Source should be where your recent updates are, and the Destination should be where they need to go! Remember, although internally it is just a traffic shift from one slot to another, if the order gets mixed up the results are not what you expect. I have played around with it myself and it gets messy quickly. So do remember, order is everything! 😀

Finally, when we go back to our application’s website, we see that the changes are now live there…

Figure 13: Preview of the production slot on Azure.

And the Deployment Slot now holds what the production slot had before: the default Azure App Service page.

Summary and Final words

So, in this post we worked a few things out. First, we saw how CD can be the toughest IT part of DevOps (IT, because there are other tasks, such as user management, where the headache comes from customer requirements and error handling instead :D). Then we moved on to configuring Visual Studio Team Services to apply automation across the steps and stages of the application life cycle.

Then, we saw how Azure Deployment Slots can help us get the most out of the testing systems and make sure that everything, I repeat, everything is working properly.

What’s next?

Here is an assignment: remember that I mentioned you can swap the Deployment Slots? There is a way to automate that as well. How? Using PowerShell! You can easily use an Azure PowerShell step in Visual Studio Team Services to automate the swapping process. That way, you can run some tests against the slot and then let the cmdlets do the rest of the job for you.

Azure PowerShell would also let you swap the website back if you feel that it is not performing well.
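To get you started on that assignment, here is a rough sketch using the AzureRM module; the resource group, app and slot names are placeholders, and the same cmdlet can be called from an Azure PowerShell step in a VSTS release definition.

# Swap the "deployment" slot with the production slot of the App Service.
# All of the names below are placeholders for your own resources.
Login-AzureRmAccount

Switch-AzureRmWebAppSlot -ResourceGroupName "my-resource-group" `
                         -Name "my-app-service" `
                         -SourceSlotName "deployment" `
                         -DestinationSlotName "production"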