Category Archives: Beginners

Common Problems in Xamarin.Android and their solutions

Introduction and Background

This is my, well, 3rd or 4th post on Xamarin, and this one might be a bit critical in its subject matter; if you have been pulling your hair out these days, this post is for you. The purpose of this post is to collect the most commonly faced issues, with their solutions attached. I personally faced quite a lot of issues with Xamarin. The problem was not that I was a beginner in Xamarin; the major problem was that it had all been working before, but right after a reset everything was giving me an error. Even worse, there were no solutions at all. Every solution was like, “Reinstall Xamarin”, “Reinstall the Android SDK”, “Remove the space in the Android SDK path”. Even the most authentic resources were not providing a working, or at least sensible, solution. So I thought I should write a complete post sharing reality-based solutions, and not just the “install this, install that” sort of stuff that does not help anyone at all.

Installation of Xamarin

Installation of Xamarin itself is a bit confusing and problematic for beginners. If you have ever worked with Xamarin before, you might know where all the plugs go, but for a beginner the experience is painful, and many give up at the very start, let alone the middle.

The initial location where Xamarin installs the Android SDK is “C:\Program Files (x86)\Android\android-sdk”. A few things to consider here:

  1. There is no problem with the space in the path. Spaces are a problem only when you are going to program using the NDK (Native Development Kit); otherwise, they do not cause any problem. I am also using a path with a space in it and it does not cause any trouble.
  2. Before anything at all, I would recommend that you run the Android SDK Manager, either:
    1. By running the “android.bat” file as Administrator.
    2. Or by going to Visual Studio and running the Android SDK Manager from the Android toolkit tray. It will request Admin access itself.
    3. Note that you need to run the SDK Manager with Admin access; otherwise, it will not work. Worse, it will give you “Unable to move directory…” errors and later delete the “android.bat” file, and you will have to download and install the Android SDK once again. Painful.
  3. It can be helpful to use the Android SDK installed by Android Studio in cases where you are lacking HDD space on your machine, but my own recommendation is not to use it. The reason is that if Xamarin messes up, Android Studio keeps running fine. Plus, you do not need everything that Android Studio installs just for Xamarin.

Figure 1: Android SDK launcher.

The installation typically installs Android SDK levels 19, 21 and 23 (the 23rd can be selected from the installation options). You should not just go ahead and install everything, even if someone asks you to; there is no point in doing that.

Figure 2: Android SDKs and tools provided.

Later in this post, I will show you which of the frameworks are required and necessary for your project to work, and which of them are not required at all.

Installation of Android SDK

Another important point to consider: in various places online, you will find people saying that you should install the SDKs from the minimum one required all the way up to the latest one. No, that is not the solution, and it is not required at all.

For example, have a look here,

Figure 3: Android target platforms.

In this case, which SDKs do I need to install? If you said all the way from 16 to 23, then you are wrong. The only SDK that is required and compulsory for you to have is the “Latest Platform”. Now, the concept of the latest platform is a bit different: you cannot expect the latest Android version released by Google to be the latest one in Xamarin too. Xamarin does not work under Google, and its APIs come a bit later than Google’s APIs.

I have installed the SDK platforms for only 25 and 23. I installed 25 because everybody was asking me to install everything, which did not help in any case. So, what you need to do is install only the Latest Platform, plus any platform that you need to test your application on, in the case of emulators etc.

One more thing: the latest platform will differ by the time you are reading this. At the time of writing this post, it was Android 6.0 Marshmallow, even though Nougat had been released quite a few months earlier; even then, the latest platform in Xamarin was 6.0, while in Android Studio I was already targeting API level 25. This means that the Android Studio API level and the Xamarin level do not always meet, and you need to check again which of them you are going to support.

Emulators in Xamarin

Xamarin, if installed with Visual Studio, comes shipped with the Visual Studio Emulator for Android. It requires you to have Hyper-V installed and active, meaning you can only use it on a Pro edition of Windows; on other editions, such as Home, you cannot run it. In that case you have to fall back on Genymotion, or on other products such as the Android emulator provided by Xamarin itself. The benefits of using these emulators are:

  1. They come pre-shipped and preconfigured.
  2. All you need is a Pro edition of Windows (in the case of Visual Studio’s emulator), or a commercial account, such as a Genymotion Unlimited account.

However, my choice is a bit different: I prefer using the same Android emulators that I used with Android Studio. There are various benefits to this:

  1. You get the latest API levels beforehand. My latest platform (as seen above) is Android 6.0; however, I get to run the application on Android 7.0 using the Android Studio emulators. Fun, eh?
  2. You can use Intel HAXM, and it works even if you do not have a Pro edition of Windows. However, a CPU with virtualization technology is required; an Intel CPU, of course.
  3. Visual Studio automatically detects the running Android device, and you can push your application to the running emulator and run it in super-fast mode.

However, if you want to run your application on one of Android Studio’s emulators, you need to make sure of only one thing: the platform level of the emulator and the platforms installed in the Xamarin.Android SDK must match. For example, if I have a device running Android 7.0, and I have not installed Android 7.0 as a platform level in the Xamarin.Android SDK, then the application will not deploy; it will build and try to deploy, but will neither fail nor succeed. To overcome that, install the same SDK level in your Xamarin.Android SDK as in the Android Studio SDK. Then you can deploy your applications.

The only purpose of having Android 7.0 installed in Xamarin was that I was testing my applications on Android Studio’s emulator, which had Android 7.0 installed; for it to accept the application, I needed to install the Android 7.0 SDK on the Xamarin side as well. Otherwise, it would not start debugging at all.

Figure 4: Android emulators shown in Android Studio AVD manager.

If you look closely at the following, you will find a lot of easter eggs: Android 6.0 as the target, yet running on Android 7.0, and so on.

Figure 5: Android Studio emulator with Android 7.0 running a Xamarin application targeting Android 6.0.

You will also see that the emulator being used is Android Studio’s emulator, not any other, and it runs and works just perfectly.

View not loading

Most of the time, you will get an error message saying that the Android SDK is outdated. To be specific, the error message is:

The installed Android SDK is too old. Version {API_LEVEL} or newer is required.

It then provides you with a link to open the Android SDK manager and install the missing pieces. The problem here is that you are trying to target your build at a version that is not installed at the moment. For example, in my case above, the target was Android 6.0 and the only SDK installed was 19 (the default one), which prevented my setup from targeting the views at the latest API level.

Most posts tell you to update the paths, install everything, move directories from one location to another, or even reinstall Xamarin.

The solution to this problem is pretty much straight-forward. All you need to do is:

  1. Go to Properties → Application.
  2. Check the “Compile using Android version:” value. Also note that “Target Android version” can be set to “Use Compile using SDK version” to keep things a bit simpler.
  3. Finally, install the SDK for your target platform level.

One thing to note here: if the SDK manager shows that the platform is installed but you still cannot run the application, recheck the location of the SDK being used.

Figure 6: Android SDK default paths in Visual Studio.

  1. For that, go to Tools → Options → Xamarin → Android Settings.
  2. Double-check the Android SDK location property here, and make sure it is the location where your SDK is installed.

These steps set up a few things in the system so that Xamarin works the way it should.

Java SDK required

In most cases, JDK 8 is recommended. By default you will be provided with JDK 7, and that works perfectly, but it is recommended to install JDK 8 and remove JDK 7. Why?

  1. JDK 7 is old. Really.
  2. JDK 7 can cause your applications to target JDK 7 even if JDK 8 is installed, because its installer overwrites the default JAVA_HOME variable. Since JAVA_HOME needs to point to the JDK 8 location, there is no need to have JDK 7; it will never be used.
  3. The latest Android tools use, and are supported by, JDK 8. Soon Xamarin will also require JDK 8, because compiling Xamarin code for Android uses the Android libraries and SDK, and they require JDK 8.

A simple way to do this is to remove JDK 7 completely (go to Control Panel for that) and then set the JAVA_HOME environment variable to point to the JDK 8 location. The exact path differs from machine to machine, based on the build or version, so check it on your own system.
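For example, on Windows you can set the variable from an elevated command prompt, roughly as follows; the JDK folder name below is only an example, so use the actual path on your machine:

> setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_121" /M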

Final words

Xamarin itself is a very powerful tool provided by Microsoft, and the benefits of Xamarin, especially Xamarin.Forms, outweigh its disadvantages. The main disadvantage is that the learning slope is really very slippery, not just steep. Many beginners give up on learning the Xamarin framework, because learning a single language such as Java, with most of the code already provided, is an easier way to get the work done; in Xamarin, you need not only to learn the tools but also to understand which plug goes into which socket.

I tried my best to provide you with a post that has the solutions to the most widely faced problems. The problems discussed here are all generic rather than specific cases, so that future readers can also get help from this post.

If you find any other error, do let me know by commenting and I will try to find a solution — a real solution — and then share it with you and the rest of the community. 🙂

Serverless computing with Azure Functions

While I was working on cloud computing and other similar technologies, I stumbled upon this new thing that I had not heard of before (until quite a few weeks ago), and I found it really interesting to learn about and to share my own understanding of with you guys. I will try my best to keep the main idea as simple as possible, and to explain everything from A to Z in such an easy way that it will look as if the concept was always in your mind.

So, let us begin with the basics of serverless computing and how it all began.

How it all began…

If you guys have been into IT and programming geekiness for the past few years, you must have witnessed how things changed: from physical machines to virtualization, to containers, and all the way to what we now call serverless computing. Whether this new tech trend came down the stream in the order I mentioned, or someone else knows better how it all began, is not the topic of concern; the main thing is that we now have another topic to cover before we actually start to design applications. You will see, in this post, how each way of deploying an application has pros and cons, and how serverless comes up to solve the problems, or to ruin everything altogether. There are some pros to serverless computing, and there are obviously a lot of cons too.

The buzzword of serverless computing didn’t take long to spread, because we now have the Internet, and one new thing from the west reaches the east in no time at all. The question to ask is, “Do we understand what they wanted us to?” And this is the question that I am going to address in this post, so that all of us really understand the purpose, need and reason of serverless computing today.

I am going to use Azure Functions to explain the usage, benefits and “should I?” of serverless computing. The reason I chose Azure is that everyone was already covering a lot of Amazon Lambda stuff, so I didn’t want to use that; also, I am working on Azure a lot, so I thought this is it.

Understanding the term, “serverless”

Our primary focus is to explain the term “serverless”; the Azure usage is just to give you an overview of how it actually works in the real world.

Figure 1: A simple yet clear difference in traditional vs serverless approach toward server management. 

The picture above was captured from this blog, and it provides a very good intuition of the difference between what we use and what serverless offers. But this does not mean the image provides a 100% true and only difference between the two worlds; sometimes the difference boils down to zero, such as in cases where you are using architectures like cloud offerings. In such scenarios, the total difference in cost is down to how you selected the subscription. As we progress in the post, I will also draw a margin line in many other aspects of this methodology. Keep reading.

Just like the term cloud computing was mistaken for various reasons and various acts, the term serverless is also being mistakenly understood as a platform where servers are no longer needed. As for cloud computing, you might want to see Cloud Computing explained by Former IT Commissioner, and consider the fact that we are in no way understanding the technology the way it is meant to be understood. The same goes for serverless: when I tried to search for it on Google, I even saw computer hardware being dumped into a dustbin, which made it clear to me that people are simply not understanding the term itself. For example, go here, Building a Serverless API and Deployment Pipeline: Part 1, just look at the first image, and please remember to come back as soon as possible; you will get my point. Finally, I mean no offence to anyone mentioned in this paragraph; if you are the target of either of the links, kindly get a good book, or contact me and I will love to teach you some computer science.

Figure 2: Old hardware put into a container. Just in case that blog post is not accessible or the author takes the image down, I wanted to show you the image here. Still, no offence please.

So what exactly is serverless computing? The basic idea is to remove the complexity of, or the time taken in, managing the servers, not the servers themselves. In this scenario, we use a framework, platform or infrastructure where everything, even the booting, execution and termination of our application, is managed by the provider itself. Our only duty is to write the code; the magic is based on the provider’s recipe. Just to give you a simple definition of serverless computing, let me state:

Serverless computing is a paradigm of computing environment in which a platform or infrastructure provider manages booting, scheduling, connection, execution, termination and response of the programs without needing the development teams to manage the control panel.

Phew, that wasn’t so tough, was it? The rest of the stuff, such as pricing models, languages or runtimes provided, continuous deployment or DevOps, is just extra topping that every provider differs in offering. That is one of the reasons I did not include any statement about pricing models or the languages to use, although in this post I will use C#. I am expecting to write another post that will cover Python or other similar interpreted languages.

Benefits of Serverless architecture

If you migrate your current procedures and communicators to the serverless paradigm, you can easily enjoy a lot of benefits, such as cost reduction, freedom from having an operations team to manage a full-fledged server, and much more. At the same time, I also want to list a few of the disadvantages of serverless computing at the end of this section.

Cost model

OK, let me talk about perhaps the most interesting question on everybody’s mind: how is this going to change the way I am charged? Well, the answer is very much relative to what you are building, on what platform you are building it, how many customers you have, and how they interact with your application. In other words, there is no way to judge up front the amount you are going to pay for this. In the following sections, I will give you an overview of another special part of serverless programming that you can use to consider the pricing model for your application.

NoOps

Do not be mad at me for adding a new term to computer science, if it has not been added already. Now let me get to the point where I can explain the concept of dissolving the teams, such as development, operations etc., and then getting to a point where you can enroll “serverless” in the environment. In modern-day computing we have, let’s say, implemented DevOps, and we have a mindset where our teams work together to bring the product to market for users. A DevOps pipeline typically has the following tasks:

  1. Planning and startup; user stories or whatever it is called
  2. Source control or version control
  3. Development; any IDE, any language
  4. Testing; there are various tests: unit testing, load testing, integration testing etc.
  5. Building; DevOps supports and encourages continuous integration
  6. Release; likewise, continuous deployment is recommended

From this, you can see that the developers only need to work in a few areas: planning, development and building. Version control systems should be managed by the IDE, with timers set to control when versions or updates are checked in each day; updated versions must be released to the market by operations teams, and so on. However, now that we are in serverless land, we do not have to “release the software” to market, and we also do not need to manage any underlying server if our application is web based. Thus we can somehow remove the operations team, or include them in the development team as an application developer team. I read a research guide a few days ago that named “DevOps minus Ops” as AppOps; however, I prefer the term NoOps, because the consideration of them being a separate operations team is removed, and they now work with the application development team, focusing entirely on the code and the performance or uptime of the application, instead of on servers or virtual machines.

So, let’s run through the list again for NoOps; the items marked “(cut)” are the ones I removed:

  1. Planning and startup (cut)
  2. Source control or version control
  3. Development
  4. Testing; I could strike this one through as well.
  5. Building (cut)
  6. Release (cut)

Clear enough, I believe. Before we get into another discussion, let me tell you why I think these are the way they are, and why I cut a few items from the list.

Planning and Startup

First of all, planning and startup do not make sense here. Please see the section below in which I talk about when to select a serverless architecture; it depicts when you should use the serverless approach over the current “modern” approaches. Once you have gone through that, you will understand that in a serverless architecture, the planning is done beforehand.

Thus, there is no need to sit around again and make the kanban board messy once more. If you are going to work on that board again, please go back and work in a DevOps environment; serverless is not for you. As mentioned below, serverless is for programs that do not run for 4-7 hours, but just for an instant, each time they are called, and they are entirely managed by the infrastructure provider.

Version control

Since this is directly targeted at development, the core portion of your application, it remains a part of serverless architecture design. Almost every serverless platform that I have seen so far uses version control services or provides DevOps best practices. Azure Functions provides you with features that you can use to update the source code of your function, Amazon Web Services’ Lambda allows you to use GitHub, and the same goes for almost all of them; have a look at Google’s microservices offering, which supports GitHub-based deployment of Node.js applications and then manages how to run them.

In the development cycle of a serverless application, this can come as the first step of every cycle after the first one.

Development

Yes, although we removed the servers and other IT stuff from the scene, we still need developers to write up the logic behind the application.

Testing

The reason I left this option active in the current scenario is that if you are using version control and then deploying the application’s code to the server, my own recommendation, and that of DevOps, would be to test the code before going to the next step, as it might break something up ahead.

In a serverless architecture we are allowed to use source control, so before forwarding the code from there to the server, why shouldn’t you run some tests? In a serverless architecture we don’t have to worry about the servers, but we certainly do need to worry about whether the code ran, or whether it broke every time.

In a serverless architecture, we do not technically build a full-fledged, full-featured application that takes care of everything; instead, we simply write an “if this then that” sort of application. If you understand the concept of the “Internet of Things”, then you can think of a serverless application as the hub that manages communication and responds to an event, message or request. In such cases, it is not required to implement every possible test; instead, we can perform simple tests to ensure that the code does not break at the arrival of a request or at the dispatch of a response. These are just a few top-of-my-head assumptions and suggestions; based on what your serverless application does, you might need other tests, such as testing against pattern matching or regular expressions.

I repeat, this is the most important part of serverless programming. I cannot stress this enough: if you are a team moving toward the serverless paradigm to lift the response rate, then you first of all need to ensure that the program will be resilient to any input provided to it and will not break at all. And even if there are some errors, how does the program respond to those errors?
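For instance, here is a minimal sketch of that idea; it assumes the xunit test framework, and GreetingLogic is a hypothetical helper that the function’s Run method would delegate to, so the logic can be exercised without the Functions runtime:

using Xunit;

// Hypothetical helper: the function's Run method would only delegate to
// this plain method, which makes the logic testable on its own.
public static class GreetingLogic
{
    public static string Respond(string name) =>
        string.IsNullOrWhiteSpace(name) ? "Hello, stranger!" : $"Hello, {name}!";
}

public class GreetingLogicTests
{
    [Fact]
    public void Respond_DoesNotBreakOnEmptyInput()
    {
        Assert.Equal("Hello, stranger!", GreetingLogic.Respond(""));
    }

    [Fact]
    public void Respond_GreetsByName()
    {
        Assert.Equal("Hello, Vincent!", GreetingLogic.Respond("Vincent"));
    }
}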

Building

I cut that step, though I could also have left it as it was. There is a reason: the infrastructure may provide support for publishing already-built programs. We are going to look into Azure Functions, and Azure supports publishing built programs that execute directly, instead of scripts that have to be interpreted each time the function executes.

But in many cases you do not need to take care of, or even worry about, the build process, since functions are small programs and don’t require much of the stuff that typical applications require. You can simply publish the code and it will execute in a moment. Serverless providers allow interpreted languages to be used as scripts too, such as batch files, Python scripts, or PHP files. Again, every provider has their own specification for this; Azure Functions supports any kind of program that you can write or build: you can upload it to the server, they will host it, and your users can connect to it the next time they send a request or interact with the application, be it mobile, web or any other IoT-based device.

Release

Submitting the code to version control is the only release we are going to worry about.

Winding up the basics

What I have covered just scratches the basics of serverless architecture; there is a lot more to it than this singular concept of what to do and what to leave out. But going deeper into the rabbit hole might be confusing and might take us off-track as far as this post’s structure goes. So I will not go down further; just to wind up the basics, let us go through a few things.

  1. Do not consider serverless architecture a replacement for your current physical servers or container-based environments. Serverless functions are just used to take events and then trigger another server or virtual machine to act on the event with the provided data. Nothing else.
  2. Before going serverless, you and your team must understand one fact: “It is your duty, to test the code, and the validity of your code, and it is the duty of your platform provider, to ensure that the code runs the way it is intended to be run on the runtime intended.”
  3. Pay a lot of attention to testing, testing, testing. I repeat: other than development, if there is one thing that needs to be done, it is testing.
  4. The payment plans, in many cases, differ from one another. Sometimes you might choose a monthly plan; sometimes, if your users are not in the thousands, you can select a plan where you only pay for the time your function is executing, only for the resources used.
  5. Microsoft Azure Functions provides full support for various languages and runtimes: you can use C#, you can just as well choose JavaScript, and there are other methods of writing the function code, such as using Python or PowerShell. You can also upload compiled code and run it. In other words, if it can execute, it can be a function.
  6. One final thing: a function is only a program or handler that runs for a short time (10 ms-1.5 s); if it takes longer than that, it will raise other errors and you will face other problems as well. Always keep the function code short, and terminate it as soon as possible by triggering other services or passing the data on to other service handlers. For example, you can trigger the function from an IoT hub and then use other services, such as SMS or SMTP services, to send notifications; close the function by merely triggering those services and passing the data along.

In many ways this architecture can help you out, but used badly, it is like shooting yourself in the foot. In my own experience, the architecture can create a lot of problems for you as well, and it might not always be as helpful as you think. So, use it wisely.

Azure Function example

I didn’t want to write a complete guide on serverless architecture, because I may have other posts coming out on this topic as well. So let us go a bit deeper, have a look at the Azure Functions feature, and see how we can write minimal serverless functions in Azure itself.

In this example, I am only going to show you enough to demonstrate how functions work. In future posts I might cover the HTTP bindings of functions, or other stuff such as DevOps practices, but for this post let me keep it really short and simple and cover the basics.

Basic function file hierarchy

At a minimum, a function requires an executable script (in any runtime) and a configuration file that specifies the input/output bindings of the function, timers, or other parameters needed for proper execution. The function.json file controls the execution of the function; it holds all the configuration settings, such as the accounts or services to communicate with. For instance, in a simple timer-based function, the following files are enough to control the function itself:

Figure 3: The files of a simple timer-based function; the script file and its function.json configuration file.

The code in the two files is as follows. First, the C# script:

using System;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}"); 
}

The JSON configuration file has the following content,

{
    "bindings": [
        {
           "name": "myTimer",
           "type": "timerTrigger",
           "direction": "in",
           "schedule": "0 */5 * * * *"
        }
    ],
    "disabled": false
}

Let me clarify their purpose a bit before moving any further. The “schedule” value is a six-field CRON expression of the form {second} {minute} {hour} {day} {month} {day-of-week}; the “0 */5 * * * *” value above makes the function fire every five minutes, at second zero.

Note: In another post, I will clarify the meaning and use of function.json file, and what attributes it holds. For now, please bear with me.

An executable script

The executable script can be C#, JavaScript, F# or any other executable that can run. You can use Python scripts as well as compiled executables.

Configuration file

The function.json file holds the settings for your function. The code provided above is very basic; complex functions would have more bindings, more parameters, connection names or authentication modules, but you get the point.

In this file, the name and direction of each binding are compulsory. Other settings depend entirely on the type of binding being used: HTTP triggers have different settings, timer triggers require different settings, and so on and so forth.
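For instance, a rough sketch of a function.json for an HTTP-triggered function could look like the following; the extra attributes here (“authLevel”, “methods”) belong to the HTTP binding, and the exact set of settings should be checked against the bindings reference of your runtime version:

{
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function",
            "methods": [ "get", "post" ]
        },
        {
            "name": "res",
            "type": "http",
            "direction": "out"
        }
    ],
    "disabled": false
}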

Executing

Azure provides a runtime for almost every executable platform, from PowerShell to Python to JavaScript to C# scripts (the code provided above is from a C# script file), and all the way to other scripts, such as batch files. The runtime also supports native executables; this is a part I have yet to explore a bit more, to explain which languages are supported in this scenario.

I will not go into the depths of this concept here; anyway, the output of this function is as follows:

2017-02-03T17:35:00.007 Function started (Id=3a3dfa76-7aad-4525-ab00-60c05b5a5404)
2017-02-03T17:35:00.007 C# Timer trigger function executed at: 2/3/2017 5:35:00 PM
2017-02-03T17:35:00.007 Function completed (Success, Id=3a3dfa76-7aad-4525-ab00-60c05b5a5404)
2017-02-03T17:40:00.021 Function started (Id=a32b986c-712e-4f30-84c9-7411e63b5356)
2017-02-03T17:40:00.021 C# Timer trigger function executed at: 2/3/2017 5:40:00 PM
2017-02-03T17:40:00.021 Function completed (Success, Id=a32b986c-712e-4f30-84c9-7411e63b5356)
2017-02-03T17:45:00.009 Function started (Id=c18c0eef-271d-4918-8055-64e3f31f953a)
2017-02-03T17:45:00.009 C# Timer trigger function executed at: 2/3/2017 5:45:00 PM
2017-02-03T17:45:00.009 Function completed (Success, Id=c18c0eef-271d-4918-8055-64e3f31f953a)
2017-02-04T10:27:54 No new trace in the past 1 min(s).

The timer trigger keeps running and keeps logging new events and process information. Note that this is the same output that Node.js or F# programs would give you; the only difference among these three (only three at the moment) is their runtimes. The binding and the input/output of the functions are managed entirely by Azure Functions itself, and developers do not need to manage or take care of anything at all.

Wrapup

Since this was an introductory post on serverless programming and how Azure Functions can be used for it, I did not go deep into the procedure of writing function applications. But the post should be enough to give you an understanding of the serverless architecture, what it means to be serverless, and how DevOps transitions to NoOps. In the following posts about serverless, I will walk you through writing serverless applications and then consuming them from client devices, via Android or native HTTP requests.

Finally, just a few things to consider:

  1. If your functions take a lot of time to execute, such as one minute or even 30 seconds, consider running the application in a virtual machine or an App Service instead. A function should be like a handshake negotiator: it should take the data and pass the data to a processor; it must not itself be involved in processing and generating the results.
  2. Your functions should be heavily tested. I really cannot stress this enough; this point needs to be taken care of. Your functions are like the welcomers who warmly greet the incoming guests of your servers. If they fail in doing so, the data may never come back (the data being your users, events, or anything similar).
  3. Functions follow functional programming concepts closely; in functional programming, functions are not stateful. They are stateless, meaning they do not process the data based on any machine state, attribute, property or the time at which they are executed. For example, a function add, when passed the input “1, 2, 3, 4, 5”, will always return “15”, since the result depends only on the input list; a minimal sketch follows this list.
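As promised, a minimal sketch of that last point; the names here are mine, purely for illustration:

using System.Collections.Generic;
using System.Linq;

public static class Calculator
{
    // Stateless: the result depends only on the input sequence, never on
    // machine state, properties, or the time of execution.
    public static int Add(IEnumerable<int> numbers) => numbers.Sum();
}

// Calculator.Add(new[] { 1, 2, 3, 4, 5 }) always returns 15.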

As we start to develop our own serverless APIs and applications, we will also look at further ways to develop the applications and to write the application code so that it does not affect the overall performance of our service.

Nonetheless, even if you never implement it in a production environment, serverless is a really interesting topic to understand and learn from a developer’s perspective, as you are the one taking care of everything and there are no cables involved. 😉

Is using ‘using’ block really helpful?

Introduction and Background

So, it all began on Facebook, where I was posting a status (via Twitter) saying that I felt very bad that the .NET team had left out the “Close” function while designing their .NET Core framework. At first I thought maybe everything was “managed” underground, until I came up against an exception telling me that the file was already being used. The code I was using was something like this:

if(!File.Exists("path")) { File.Create("path").Close(); }

However, “Close” was not defined, and I double-checked against the .NET Core reference documentation too, which suggested that it was not available in the .NET Core framework. So I had to go the other way… Long story short, it all went up like this.

I am unable to understand why was “Close” function removed from FileStream. It can be handy guys, @dotnet.

Then Vincent came up with the reply that “Close” was removed so that we would all use “using” blocks, which are much better in many cases.

You don’t need that. It’s handier if you just put any resource that eats resources within the “using block”. 🙂

Me:

How does using, “using (var obj = File.Create()) { }” make the code handier?

I was talking about this, “File.Create().Close();”

Now, instead of this, we have to flush the stream, or perform a flush etc. But I really do love the “using block” suggestion, I try to use it as much as I can. Sometimes, that doesn’t play fair. 😉

He:

Handier because I don’t have to explicitly call out the “Close()” function. Depends on the developer’s preference, but to me, I find “using (var obj = File.Create()) { }” handier and sexier to look at rather than the plain and flat “File.Create().Close();”. Also, it’s a best practice to “always” use the using block when dealing with objects that implements IDisposable to ensure that objects are properly closed and disposed when you’re done with it. 😉

As soon as you leave the using block’s scope, the stream is closed and disposed. The using block calls the Close() function under the hood, and the Close() calls the Flush(), so you should not need to call it manually.

Me:

I will go with LINQPad to see which one is better. Will let you know.

So, now I am here, and I am going to share what I found in LINQPad. The fact is that I have always had faith in code that works fast and provides better performance. He is a Microsoft ASP.NET MVP and can be forgiven, given that web developers are trained on multi-core CPUs and multi-deca-gigabytes of RAM, so they prefer the code that looks cleaner; but he missed the C# bytecode factor here. I am going to use LINQPad to look at the underlying code and find out a few things.

Special thanks to Vincent: for a few days I had been out of topics to write on; Vincent, you gave me one, and I am going to write on top of that debate that we had.

Notice: I also prefer using the “using” block in almost every case. It looks handier, but the following code block doesn’t look handy at all:

using (var obj = File.Create("path")) { }

And this is how it began…

Exploring the “using” and simple “Close” calls

LINQPad is a great tool for trying out C# (or .NET framework) code and seeing how it works natively; it lets you see the native bytecode of the .NET framework and also lets you inspect the tree diagrams of the code. The two constructs that we are interested in are the “using” statement of the .NET framework and the simple “Close” calls that are made on objects to close their streams.

I used a Stopwatch object to calculate the time taken by the program to execute each task, then I matched the results of both programs against each other to find out which one was faster. Looking at the code of them both, the thousand-feet-high view looks the same:

// The using block
using (var obj = File.Create("path")) { }

// The clock method
File.Create("path").Close();

They both do the same thing; however, their intermediate code shows something else.

// IL Code for "using" block
IL_0000: nop 
IL_0001: ldstr "F:/File.txt"
IL_0006: call System.IO.File.Create
IL_000B: stloc.0 // obj
IL_000C: nop 
IL_000D: nop 
IL_000E: leave.s IL_001B
IL_0010: ldloc.0 // obj
IL_0011: brfalse.s IL_001A
IL_0013: ldloc.0 // obj
IL_0014: callvirt System.IDisposable.Dispose
IL_0019: nop 
IL_001A: endfinally 
IL_001B: ret 

// IL Code for the close function
IL_0000: nop 
IL_0001: ldstr "F:/File.txt"
IL_0006: call System.IO.File.Create
IL_000B: callvirt System.IO.Stream.Close
IL_0010: nop 
IL_0011: ret

Oh-my-might-no! There was no way the “using” block could ever win, with all of that extra intermediate code for the .NET VM to execute before exiting. The time taken by these commands was also tested, and for that I used the native Stopwatch object to calculate the “ticks” used by each call, instead of the time in milliseconds. So my code in LINQPad looked like this:

void Main()
{
   Stopwatch watch = new Stopwatch();
   watch.Start();
   using (var obj = File.Create("F:/file.txt")) { }
   watch.Stop();
   Console.WriteLine($"Time required for 'using' was {watch.ElapsedTicks}.");
 
   watch.Reset();
   watch.Start();
   File.Create("F:/file.txt").Close();
   watch.Stop();
   Console.WriteLine($"Time required for 'close' was {watch.ElapsedTicks}.");
}

The execution of the above program always resulted in a win for the “Close” function call. Sometimes it was a close result, but the “Close” function still won over the “using” statement. The results are shown in the images below:

Figures: Tick counts for the two approaches across three separate LINQPad runs.

The same code, executed again, produced different results; there are many factors behind this, and it can be forgiven for all of them:

  1. Operating system might put a break to the program for its own processing.
  2. Program might not be ready for processing.
  3. Etc. etc. etc.

There are many reasons. For example, have a look at the last run, where the time taken was 609 ticks; that count also includes the ticks consumed by other programs, since the stopwatch keeps counting while the OS schedules other work. But in any case, the “using” statement was not the better solution from the low-level code side.
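To reduce that noise, a rough sketch of a fairer comparison (same LINQPad style as above) is to repeat each approach many times and compare the totals; the numbers will still fluctuate, but less wildly:

void Main()
{
    const int iterations = 1000;

    // Time the "using" block over many runs instead of a single one.
    var watch = Stopwatch.StartNew();
    for (int i = 0; i < iterations; i++)
    {
        using (var obj = File.Create("F:/file.txt")) { }
    }
    watch.Stop();
    Console.WriteLine($"'using' total ticks: {watch.ElapsedTicks}");

    // Time the explicit Close() call over the same number of runs.
    watch.Restart();
    for (int i = 0; i < iterations; i++)
    {
        File.Create("F:/file.txt").Close();
    }
    watch.Stop();
    Console.WriteLine($"'Close' total ticks: {watch.ElapsedTicks}");
}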

Final words

Although I generally recommend using the “using” block statement in C# programs, there are conditions where you should consider one over the other. In this particular case, we see that choosing one over the other brings no real benefit; it is just a matter of personal preference. I totally agree with Vincent that the “using” block is helpful, but in other cases; here, adding a Close call is (IMO) a cleaner way of writing the program than the alternative.

At the end, it is all in the hands of the writer… Select the one you prefer.

A quick startup using .NET Core on Linux

I know what you may be thinking… this is another rhetorical post by this guy. Yes, it is. 🙂 .NET Core is another product that I like, the first one being the .NET framework itself. It was last year that .NET Core got started, and Microsoft said they were going to release the source code as part of their open-source environment and love. By following open-source project ethics, Microsoft has been working very hard to bring global developers toward its environments, platforms and services. For example, the .NET framework works on Windows, C# is used to build Windows Store applications, C# is also the primary language of Microsoft’s web application development framework, ASP.NET, and much more. Microsoft’s online cloud service is also primarily programmed in C#. These things are interrelated. Thus, when Microsoft brings all of this out as open source, things start to get a lot better!

Everyone knows the background of .NET Core; if you don’t, I recommend that you read the blog post on Microsoft, Introducing .NET Core. The(ir) basic idea was to provide a framework that would work with any platform, any application type and any framework to be targeted.

Introduction and Background

In this quick post, I will walk you through getting started with .NET Core: installing it on a Linux machine, along with my views as to why you should install .NET Core on a Linux machine instead of a Windows machine. I will then walk you through different steps of .NET Core programming and show how you can use a terminal-based environment to perform multiple tasks. But first things first.

I am sure you have heard of .NET Core and the other cool stuff that Microsoft has been releasing these years. Of all these, the ones that I like are:

  1. C# 6
    • In my own opinion, the language looks cleaner now. These sugar-coated features make it easier to write, too. If you haven’t yet, please read my previous blog post, Experimenting with C# 6’s new features.
  2. .NET Core
    • Of course, who wouldn’t love to use .NET on other platforms?
  3. Xamarin acquisition
    • I’m going to try this one out tonight. Fingers crossed.
  4. The rest is just the “usual” stuff.

In this post I am going to talk about .NET Core on Linux because I have already talked about the C# stuff.

Figure 1: .NET Core is a cross-platform programming framework by Microsoft.

Why Linux for .NET Core?

Before I dive any deeper, as I said, I will give you a few of my considerations for using .NET Core on Linux and not on Windows (yet!), and they are as follows. You don’t have to take them seriously or always consider them; they are just my views and opinions, and you are free to have your own.

1. .NET Core is not yet complete

It will take a while before .NET Core gets released as a stable public version. Until then, using this bleeding-edge technology on your own machine isn’t a good idea, and sooner or later you will consider removing the binaries. In such cases, it is always better to use it in a virtual machine somewhere. I have set up a few (Ubuntu-based) Linux virtual machines for my development purposes, and I recommend that you do the same:

  1. Install VirtualBox (or any other virtualization software that you prefer; I like VirtualBox for its simplicity).
  2. Set up an Ubuntu environment.
    • Remember to use Ubuntu 14.04; later versions are not yet supported.
  3. Install the .NET Core on that machine.

If something goes wrong, you can easily revert to where you want to be. If the code plays dirty, you don’t have to worry about your data, or your machine at all.

2. Everything is command-line

Windows is not your OS if you like command-line interfaces. I am waiting for the Bash shell to be introduced in Windows as in Ubuntu; until then, I am not going to use anything that requires a command-line interface on my Windows environment. In Linux, however, almost everything has a command-line interface, and since .NET Core is a command-line-based toolchain from Microsoft, I have enjoyed using it on Linux more than on Windows.

Besides, from the command line, creating, building and running a .NET Core project is as simple as 1… 2… 3. You’ll see how in the sections below. 🙂

3. I don’t use Mac

Besides these points, the only point left for why I am not using macOS for .NET Core is that I don’t use a Mac. You are free to use macOS for .NET Core development too; .NET Core does support it, it’s just that I don’t support that development environment. 😀

Installation of .NET Core

Although the intention is that, soon, the command would be as simple as:

$ sudo apt-get install dotnet

A similar command will be used on OS X and operating systems other than Ubuntu and Debian derivatives, through their own package managers. But while .NET Core is still in development, that is not going to happen. Until then, there are other steps you can perform to install .NET Core on your own machine. I’d like to skip this part and let Microsoft give you the steps to install the framework:

Installation procedure of .NET Core on multiple platforms.

After this step, do remember to make sure that the platform has been installed successfully. In almost any environment, you can run the following command to get a success message:

> dotnet --help

If .NET is installed, the command will simply show the version and other help material on screen. Otherwise, you may want to make sure that the procedure did not run into any problems during the installation of the packages. Before I get started, I want to show you the help material provided with the “dotnet” command:

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet --help
.NET Command Line Tools (1.0.0-preview1-002702)
Usage: dotnet [common-options] [command] [arguments]

Arguments:
 [command]      The command to execute
 [arguments]    Arguments to pass to the command

Common Options (passed before the command):
 -v|--verbose   Enable verbose output
 --version      Display .NET CLI Version Number
 --info         Display .NET CLI Info

Common Commands:
 new           Initialize a basic .NET project
 restore       Restore dependencies specified in the .NET project
 build         Builds a .NET project
 publish       Publishes a .NET project for deployment (including the runtime)
 run           Compiles and immediately executes a .NET project
 test          Runs unit tests using the test runner specified in the project
 pack          Creates a NuGet package

So, you get the point that we are going to look deeper into the commands that dotnet provides us with.

  1. Creating a project
  2. Restoring the project
  3. Running the project
    • Build and Run both work; run executes the project, build just builds it. That was easy.
  4. Packing the project into a package

I will slice the stuff down to make it even clearer and simpler to understand. One thing that you may have noticed is that the option to compile the project natively is not provided as an explicit command in this set of options. As far as I can tell, this support has been removed until further notice; until then, you need to pass a framework type to compile and build against.

Using .NET Core on Linux

The procedure on both environments is similar, but I am going to show you the procedure on Ubuntu. I will also explain the purpose of these commands and the multiple options that .NET provides you with. I don’t want you to feel lonely here, because most of the paragraphs here would otherwise be for the Microsoft team working on the .NET project, so I will be providing a few suggestions for the team too. But don’t worry, I’ll make sure the content remains on-topic.

1. Creating a new project

I agree that the framework development is not yet anywhere near release, so I think I should pass on my own suggestions for the project too. At the moment, .NET Core supports creating a new project in the current directory and uses the name of the directory as the default name (if at all required). Beginners in .NET Core are only aware of the commands that come right after “dotnet”; however, there are parameters that accept a few more optional values, such as:

  1. Type of project.
    • Executable
      • At the moment, .NET only generates DLL files as output. Console is the default.
    • Dynamic-link library
  2. Language to be used for programming.

To create a new project, at the moment you just have to execute the following command:

$ dotnet new

In my own opinion, if you are just learning, this is enough for you. However, if you execute the following command, you will get an idea of how you can modify the way the project is created, and how it can be used to modify the project itself, including the programming language being used:

$ dotnet new --help
.NET Initializer

Usage: dotnet new [options]

Options:
 -h|--help Show help information
 -l|--lang <LANGUAGE> Language of project [C#|F#]
 -t|--type <TYPE> Type of project

Options are optional; however, you can pass those values if you want to create a project with a different programming language, such as F#. You can also change the type; currently, however, only Console applications are supported.

I already had a directory set up for my sample testing,

Figure 2: Sample directory open. 

So, I just created the project here. I didn’t mess around with anything at all; you can see that .NET itself creates the files.

Figure 3: Creating the project in the same directory where we were.

Excuse the fact that I created an F# project; I did that to show that I can pass the language to be used in the project too. I removed it, and instead created a C# program-based project. This is a minimal console application.
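For reference, the generated C# program of this preview looks roughly like the following; the namespace name may differ:

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // The template does nothing more than print a greeting.
            Console.WriteLine("Hello World!");
        }
    }
}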

In .NET Core applications, every project contains:

  1. The program’s source code.
    • If you were to create an F# project, the program would be written in the F# language; with the default settings, the program is a C# language program.
  2. A project.json file.
    • This file contains the settings for the project and the dependencies that are to be maintained for building the project; a sketch of it follows.
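
For reference, a project.json from this preview looks roughly like the following; the exact versions and options will differ on your machine:

{
    "version": "1.0.0-*",
    "buildOptions": {
        "emitEntryPoint": true
    },
    "frameworks": {
        "netcoreapp1.0": {
            "dependencies": {
                "Microsoft.NETCore.App": {
                    "type": "platform",
                    "version": "1.0.0"
                }
            },
            "imports": "dnxcore50"
        }
    }
}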

However, before you can run it, you need to build the project, and that is what we are going to do in the next steps.

2. Restoring the project

We didn’t delete anything; restoring simply means that the dependencies of the project need to be resolved before we can actually build and run it. This can be done using the following command:

$ dotnet restore

Note: Make sure you are in the working directory where the project was created.

This command does the job of restoring the packages. If you try to build the project before restoring the dependencies, you are going to get the error message of:

Project {name} does not have a lock file.

.NET uses the lock file to look up the dependencies that a project requires, and then starts the building and compilation process. In other words, this file is required before your project can be executed, to ensure “it can execute”.

After this command gets executed, you will get another file in the project directory.

Figure 4: Project.lock.json file is now available in the project directory.

And so, finally, you can continue to build the project and run it on your own machine with the default settings and setup.

3. Building the project

As I have already mentioned above, native compilation support has been removed from the toolchain, so I think Ubuntu developers may have to wait for a while; it may be supported only on the Windows environment until then. However, we can still execute the project as we would, and we can perform other operations too, such as publishing and NuGet package creation.

You can build a project using the following command,

$ dotnet build

Remember that you need to have restored the project once. The build command does the following things for you:

  1. It would build the project for you.
  2. It would create the output directories.
    • However, as I am going to talk about this later, you can change the directories where the outputs are saved.
  3. It would prompt if there are any errors while building the project.

We have seen the way the previous commands worked, so let’s slice this one into bits too. This command, too, supports manipulation; it provides you with optional flags such as:

  1. Framework to target.
  2. Directory to use for output binaries.
  3. Runtime to use.
  4. Configuration etc.

This way, you can automate the build process by passing the parameters to the dotnet command, as shown below.
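For example, something along these lines; the exact flag names may vary between CLI previews, so check dotnet build --help on your version:

$ dotnet build --framework netcoreapp1.0 --configuration Release --output ./output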

4. Deploying the project

Instead of using the term running the project, I think it would be better to say deploying the project; one way or the other, a project is also deployed on the machine before it can run. First I will show the project running; later, I will show how to create NuGet packages.

To run the project, whether or not you have built it, you can just execute the following command:

$ dotnet run

This command also builds the project if it has not yet been built. Now, if I execute that command in the directory where the project resides, the output on my Ubuntu is something like this:

Figure 5: Project output in terminal.

As seen, the project works and displays the message “Hello World!” in the terminal. This is a simple console project with a single console output command in C#; that is why the program works this way.

Creating NuGet packages

Besides this, I would like to share how you can create a NuGet package from this project using the terminal. NuGet packages have been on the scene for a very long time, and they were previously very easy to create in the Visual Studio environment. The process is even simpler in this framework: you just have to execute the following command:

$ dotnet pack

This command packs the project into a NuGet package. I would like to show you the output that it generates, so that you can understand what it is doing:

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet pack
Project Sample (.NETCoreApp,Version=v1.0) was previously compiled. 
Skipping compilation.
Producing nuget package "Sample.1.0.0" for Sample
Sample -> /home/afzaal/Projects/Sample/bin/Debug/Sample.1.0.0.nupkg
Producing nuget package "Sample.1.0.0.symbols" for Sample
Sample -> /home/afzaal/Projects/Sample/bin/Debug/Sample.1.0.0.symbols.nupkg

It builds the project first; if the project was already built, it skips that step. Once that is done, it creates a new package and generates the file that can be published to the galleries. The NuGet packaging command allows you to perform some other functions too, such as updating the version number itself or specifying the framework. For more, have a look at the help output for this command:

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet pack --help
.NET Packager

Usage: dotnet pack [arguments] [options]

Arguments:
 <PROJECT> The project to compile, defaults to the current directory. 
 Can be a path to a project.json or a project directory

Options:
 -h|--help Show help information
 -o|--output <OUTPUT_DIR> Directory in which to place outputs
 --no-build Do not build project before packing
 -b|--build-base-path <OUTPUT_DIR> Directory in which to place temporary build outputs
 -c|--configuration <CONFIGURATION> Configuration under which to build
 --version-suffix <VERSION_SUFFIX> Defines what `*` should be replaced with in version 
  field in project.json

See the final one, where it shows the version suffix. It can be used to update the version based on the build that ran, and so on. There is also a setting which allows you to modify the way the build process updates the version count. This is a widely used method for changing the version number based on the build that produced the binary outputs.
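As a sketch of how that works, assuming your project.json declares a wildcard in its version field (this sample project does not, by default), you would have:

"version": "1.0.0-*"

in project.json, and then pass the suffix on the command line:

$ dotnet pack --version-suffix build20160603

which would produce a package named something like Sample.1.0.0-build20160603.nupkg.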

The NuGet package file was saved in the default output directory.

Screenshot (905)
Figure 6: NuGet package in the default output directory.

The rest is easy; you can just upload the package from here to the NuGet galleries.

Final words

Finally, I was thinking I should publish a minimal ebook about this guide. The content was getting longer and longer and I was getting more and more tired; however, since this gave me ideas about many things, I think I can write a comparison of .NET Core on Windows and Linux, and I think I have enough time to do that.

Secondly, there are a few suggestions for end users that I want to make.

  1. Do not use .NET Core for commercial software yet. It is going to change soon.
  2. .NET Core is a bleeding-edge technology and, since there is very little documentation, you are going to waste a lot of time learning and asking questions. That is why, if you are considering learning the .NET framework, then learn the .NET framework and not .NET Core. The .NET framework has a great amount of good resources, articles, tips and tutorials.
  3. If you want cross-platform features and great support like the .NET framework, my recommendation is the Mono Project over .NET Core, because .NET Core is not yet mature.

I have a few words of feedback on the framework itself.

  1. It is going great. Period.
  2. Since this is a cross-platform framework, features must not be Windows-only, such as the “dotnet compile --native” one. They must be made available on every platform.

At last, the framework is a great one to write programs for. I enjoyed programming for .NET Core because it doesn’t require much effort. Plus, the benefit of multiple programming languages is still there. Besides, Visual Studio Code is also a great IDE to use, and the C# extension makes it even better. I will be writing a lot about these features these days since I am free from all of the academic stuff at the moment. 🙂

See you in the next post.

Highlighting the faces in uploaded image in ASP.NET web applications

Introduction and Background

Previously, I was thinking that since we can find the faces in an uploaded image, why not create a small module that automatically finds the faces and renders them when we want to load the images on our web pages. That was a pretty easy task, but I would love to share what I did and how I did it. The entire procedure may look a bit complex, but trust me, it is really very simple and straightforward. However, you may be required to know about a few frameworks beforehand, as I won’t be covering most of the in-depth stuff of that scenario — such as computer vision, which is used to perform actions like face detection.

In this post, you will learn the basics of many things that range from:

  1. Performing computer vision operations — most basic one, finding the faces in the images.
  2. Sending and receiving content from the web server based on the image data uploaded.
  3. Using the canvas HTML element to render the results.

I won’t be guiding you through each and every part of computer vision, or the processes that are required to perform facial detection in images; for that, I would like to ask you to go and read this post of mine, Facial biometric authentication on your connected devices. In this post, I’ll cover the basics of how to detect the faces in an ASP.NET web application, how to pass the characteristics of the faces in the images, and how to use those properties to render the faces on the image in the canvas.

Making the web app face-aware

There are two steps that we need to perform in order to make our web applications face-aware and to determine whether or not there is a face in the images that are uploaded. There are many uses for this, and I will list a few of them in the final words section below. The first step is to configure our web application to be able to consume the image and then render the image for processing. Our image processing toolkit would allow us to find the faces, and the locations of the faces. This part would then forward the request to the client side, where the client itself would render the face locations on the images.

In this sample, I am going to use a canvas element to draw objects, although this can also be done using multiple div containers holding span elements, rendered over the actual image with their positions set to absolute to show the face boxes. A rough sketch of that alternative is shown below.
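Something like the following markup would do it; this is just a hypothetical sketch of that div-based alternative, with made-up coordinates, while the canvas approach is what this post actually uses:

<div style="position: relative;">
    <img src="uploaded-image.jpg" alt="Uploaded image" />
    <!-- One absolutely positioned box per detected face. -->
    <div style="position: absolute; left: 120px; top: 80px;
                width: 60px; height: 60px; border: 2px solid red;"></div>
</div>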

First of all, let us program the ASP.NET web application to get the image, process the image, find the faces and generate the response to be collected on the client-side.

Programming file processing part

On the server side, we would preferably use the Emgu CV library. This library is one of the most widely used C# wrappers for the OpenCV library. I will be using this library to program the face detectors in ASP.NET. The benefits are:

  1. It is a very light-weight library.
  2. The entire processing can take less than a second or two, and the views would be generated a moment later.
  3. It is better than most other computer vision libraries, as it is based on OpenCV.

First of all, we would need to create a new controller in our web application that would handle the requests for this purpose; we would later add the POST method handler to the controller action to upload and process the image. You can create any controller; I used the name “FindFacesController” for this controller in my own application. To create a new controller, follow: Right click Controllers folder → Select Add → Select Controller…, to add a new controller. Give it any name you like and then proceed. By default, this controller is given an action, Index, and a folder with the same name is created in the Views folder. First of all, open the Views folder to add the HTML content for which we would later write the backend part. In this example project, we need to use an HTML form, where users would be able to upload the files to the server for processing.

The following HTML snippet would do this,

<form method="post" enctype="multipart/form-data" id="form">
  <input type="file" name="image" id="image" onchange="this.form.submit()" />
</form>

You can see that this HTML form is enough in itself. There is a special event handler attached to this input element, which causes the form to submit automatically once the user selects an image. That is because we only want to process one image at a time. I could have written a standalone function, but that would have made no sense, and this inline function call is a better way to do it.

Now for the ASP.NET part, I will be using the HttpMethod property of the Request to determine if the request was to upload the image or to just load the page.

if(Request.HttpMethod == "POST") {
   // Image upload code here.
}

Now, before I actually write the code, I want to show and explain what we want to do in this example. The steps to be performed are as below:

  1. We need to save the image that was uploaded in the request.
  2. We would then get the file that was uploaded, and process that image using Emgu CV.
  3. We would get the locations of the faces in the image and then serialize them to JSON string using Json.NET library.
  4. Later part would be taken care of on the client-side using JavaScript code.

Before I actually write the code, let me first show you the helper objects that I created. I needed two helper objects: one for storing the locations of the faces, and another to perform the facial detection in the images.

public class Location
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
}

// Face detector helper object.
// Requires: using System.Collections.Generic; using System.Drawing;
// using System.Web; using Emgu.CV;
public class FaceDetector
{
    public static List<Rectangle> DetectFaces(Mat image)
    {
        List<Rectangle> faces = new List<Rectangle>();
        var facesCascade = HttpContext.Current.Server.MapPath("~/haarcascade_frontalface_default.xml");
        using (CascadeClassifier face = new CascadeClassifier(facesCascade))
        {
            using (UMat ugray = new UMat())
            {
                CvInvoke.CvtColor(image, ugray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);

                //normalizes brightness and increases contrast of the image
                CvInvoke.EqualizeHist(ugray, ugray);

                //Detect the faces from the gray scale image and store the locations as rectangle
                //The first dimension is the channel
                //The second dimension is the index of the rectangle in the specific channel
                Rectangle[] facesDetected = face.DetectMultiScale(
                                                ugray,
                                                1.1,
                                                10,
                                                new Size(20, 20));

                faces.AddRange(facesDetected);
            }
        }
        return faces;
    }
}

These two objects would be used: one for the processing, and the other for the client-side code to render the boxes on the faces. The action code that I used for this is as below:

// Requires: using System.Drawing; using Emgu.CV; using Emgu.CV.Structure;
// using Newtonsoft.Json;
public ActionResult Index()
{
    if (Request.HttpMethod == "POST")
    {
         ViewBag.ImageProcessed = true;
         // Try to process the image.
         if (Request.Files.Count > 0)
         {
             // There will be just one file.
             var file = Request.Files[0];

             var fileName = Guid.NewGuid().ToString() + ".jpg";
             file.SaveAs(Server.MapPath("~/Images/" + fileName));

             // Load the saved image, for native processing using Emgu CV.
             var bitmap = new Bitmap(Server.MapPath("~/Images/" + fileName));

             var faces = FaceDetector.DetectFaces(new Image<Bgr, byte>(bitmap).Mat);

             // If faces were found.
             if (faces.Count > 0)
             {
                 ViewBag.FacesDetected = true;
                 ViewBag.FaceCount = faces.Count;

                 var positions = new List<Location>();
                 foreach (var face in faces)
                 {
                     // Add the positions.
                     positions.Add(new Location
                     {
                          X = face.X,
                          Y = face.Y,
                          Width = face.Width,
                          Height = face.Height
                     });
                 }

                 ViewBag.FacePositions = JsonConvert.SerializeObject(positions);
            }

            ViewBag.ImageUrl = fileName;
        }
    }
    return View();
}

The code above does the entire processing of the images that we upload to the server. This code is responsible for processing the images, finding and detecting the faces, and then returning the results for the views to be rendered in HTML.

Programming client-side canvas elements

You could open a modal popup to show the faces in the images; I used the canvas element on the page itself, because I just wanted to demonstrate the usage of this coding technique. As we have seen, the controller action generates a few ViewBag properties that we can later use in the HTML content to render the results based on our previous actions.

The View content is as follows:

@if (ViewBag.ImageProcessed == true)
{
    // Show the image.
    if (ViewBag.FacesDetected == true)
    {
        // Show the image here.
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement" style="display: none; height: 0; width: 0;" />

        <p><b>@ViewBag.FaceCount</b> @if (ViewBag.FaceCount == 1) { <text><b>face</b> was</text> } else { <text><b>faces</b> were</text> } detected in the following image.</p>
        <p>A <code>canvas</code> element is being used to render the image and then rectangles are being drawn on the top of that canvas to highlight the faces in the image.</p>

        <canvas id="faceCanvas"></canvas>

        <!-- HTML content has been loaded, run the script now. -->
        <script>
            // Get the canvas.
            var canvas = document.getElementById("faceCanvas");
            var img = document.getElementById("imageElement");
            canvas.height = img.height;
            canvas.width = img.width;

            var myCanvas = canvas.getContext("2d");
            myCanvas.drawImage(img, 0, 0);

            @if (ViewBag.ImageProcessed == true && ViewBag.FacesDetected == true)
            {
            <text>
            img.style.display = "none";
            var facesFound = true;
            var facePositions = JSON.parse(JSON.stringify(@Html.Raw(ViewBag.FacePositions)));
            </text>
            }

            if(facesFound) {
                // Move forward.
                for (face in facePositions) {
                    // Draw the face.
                    myCanvas.lineWidth = 2;
                    myCanvas.strokeStyle = selectColor(face);

                    console.log(selectColor(face));
                    myCanvas.strokeRect(
                                 facePositions[face]["X"],
                                 facePositions[face]["Y"],
                                 facePositions[face]["Width"],
                                 facePositions[face]["Height"]
                             );
               }
           }

           function selectColor(iteration) {
               // Use a non-zero iteration; Math.floor(Math.random()) is always 0,
               // so pick a random iteration between 1 and 10 instead.
               if (iteration == 0) { iteration = Math.floor(Math.random() * 10) + 1; }

               var step = 42.5;
               var randomNumber = Math.floor(Math.random() * 3);

               // Select the colors.
               var red = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
               var green = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
               var blue = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);

               // Change the values of rgb, randomly.
               switch (randomNumber) {
                   case 0: red = 0; break;
                   case 1: green = 0; break;
                   case 2: blue = 0; break;
               }

               // Return the string.
               var rgbString = "rgb(" + red + ", " + green + ", " + blue + ")";
               return rgbString;
           }
        </script>
    }
    else
    {
        <p>No faces were found in the following image.</p>

        // Show the image here.
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement" />
    }
}

This is the client-side code and would be executed only if an image was previously uploaded. Now let us review what our application is capable of doing at the moment.

Running the application for testing

Now that we have developed the application, it is time to actually run it to see if it works as expected. The following are the results generated for multiple images that were passed to the server.

Screenshot (427)

The above image shows the default HTML page that is shown to users when they visit the page for the first time. Then they upload an image, and the application processes the content of the image that was uploaded. The following images show the results.

Screenshot (428)

I uploaded my image; it found my face and, as shown above in bold text, “1 face was detected…”. It also renders the box around the area where the face was detected.

Screenshot (429)

The article would never have been complete without Eminem being a part of it! 🙂 Love this guy.

Screenshot (426)

Secondly, I wanted to show how this application processes multiple faces. At the top, see that it shows “5 faces were detected…”, and it renders 5 boxes around the areas where faces were detected. I also happen to like the photo, as I am a fan of Batman myself.

Screenshot (430)

This image shows what happens if the image does not contain a detectable face (there are many possibilities where a face might not be detected, such as hair covering the face, wearing glasses, etc.). In this image I just used three company logos, and the system told me there were no faces in the image. It still rendered the image, but no boxes were drawn since there were no faces in it.

Final words

This was it for this post. This method is useful in many facial detection software applications, in any area where you want users to upload a photo of their faces, and not just some photo of scenery, etc. This is an ASP.NET web application project, which means that you can use this code in your own web applications too. The library usage is also very simple and straightforward, as you have already seen in the article above.

There are other uses, such as cases where you want to perform analysis of people’s faces to detect their emotions, locations and other parameters. You can first perform this action to determine whether there are faces in the images or not.

From zero to hero in JSON with C#

Introduction and Background

I have been reading many posts and articles about JSON and C#, where every article tries to clarify the purpose of either one thing or the other: JSON, or the C# libraries. I want to cover both of these technologies in one post, so that if you have no idea of JavaScript Object Notation (JSON) and the C# libraries available for processing and parsing JSON files, you can understand these two concepts fully. I will try my best to explain each and every concept in these two frameworks, and I will also try to publish the article on C# Corner and to share an offline copy of this post as an eBook, so excuse me if you find me sliding through different writing formats, because I have to stick to both writing styles in order to qualify this post as an article and as a miniature guide.

When I started to do a bit of programming, I learned that there were many data-interchange formats and serialization and deserialization techniques, and ways that objects, including their states, can be stored on disk so that the same data can be fetched back for further processing — letting you pick up the same objects that you left behind when you stepped away. Some of these techniques are tough to handle and understand, while others are pretty much framework-oriented. Then comes JSON into the picture: JSON provides you with a very familiar and simple interface for programming the serialization and deserialization of objects at runtime, so that their states can be stored for later use. And we are going to study the JSON notation for data-interchange and how we can use this format in C# projects for our applications.

What is JSON?

I remember when I was starting to learn programming, there was this thing called Extensible Markup Language, or as some of us know it, XML. XML was used to convert the runtime data of objects, program states or configuration settings into a storable string notation. These strings were human-readable. Programmers could even modify these values when they needed to, add more values, or remove the values they didn’t want to see in the data.

Introduction of XML

For a really long time, and even today, XML has been used as the format for data-interchange. Many protocols for communication are built on top of XML, most notably the SOAP protocol. XML didn’t just power a few protocols; it even kick-started many of the most widely known and used markup languages, of which HTML and XAML are the ones most of you are already familiar with.

xml-file
Figure 1: XML file icon.

History has it that XML has been of great use to many programmers for building applications that share data across multiple machines. The world wide web started with pages written in an XML-like markup language called HTML.

The purpose of XML was entirely to design documents in plain text that are human-readable and that machines can use to define the state of applications, programs or interfaces.

  1. State of applications: You can store the object states, which can be loaded later for further processing of the data. This would allow you to maintain the states of the applications too.
  2. Program: XML can be used to define the configuration of programs when they start up. Microsoft has been using web.config and machine.config files to define how a program starts. Programmers and developers can easily modify these files as they need to. This alters the way the program starts.
  3. Interfaces: HTML is being used to define how the user-interface would look like. Windows Presentation Foundation uses XAML as the major language for designing the interfaces. Android supports XML-based markup language for defining the UI controls.
<?xml version="1.0" encoding="UTF-8"?>

An empty XML document is just the declaration line shown above; it is not meant to contain any blank object or document tree — those can be omitted.

This is not it; XML is much more than this. WCF, for example, supports SOAP communication, which means that you can download and upload data in XML format. ASP.NET Web API supports data transfer in the form of XML documents.

Usage of XML

I want to show you how XML is used, so why shouldn’t I use something that comes from a real-world example? RSS, for example, uses XML-based documents for the delivery of feeds to clients. I am using the same technology on my blog to deliver the blog posts to readers and other communities. The basic XML document for a single post would look like this:

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:media="http://search.yahoo.com/mrss/">

<channel>
    <title>Learn the basics of the Web and App Development</title>
    <atom:link href="https://basicsofwebdevelopment.wordpress.com/feed/" rel="self" type="application/rss+xml" /> 
    <link>https://basicsofwebdevelopment.wordpress.com</link>
    <description>The basics about the Web Standards will be posted on my blog. Love it? Share it! Found an error, ask me to edit the post! :)</description>
    <lastBuildDate>Fri, 03 Jun 2016 09:48:29 +0000</lastBuildDate>
    <language>en</language>
    <generator>http://wordpress.com/</generator>
    <cloud domain='basicsofwebdevelopment.wordpress.com' port='80' path='/?rsscloud=notify' registerProcedure='' protocol='http-post' />
    <image>
         <url>https://s2.wp.com/i/buttonw-com.png</url>
         <title>Learn the basics of the Web and App Development</title>
         <link>https://basicsofwebdevelopment.wordpress.com</link>
    </image>
    <atom:link rel="search" type="application/opensearchdescription+xml" href="https://basicsofwebdevelopment.wordpress.com/osd.xml" title="Learn the basics of the Web and App Development" />
    <atom:link rel='hub' href='https://basicsofwebdevelopment.wordpress.com/?pushpress=hub'/>
    <item>
        <title>Hashing passwords in .NET Core with tips</title>
        <link>https://basicsofwebdevelopment.wordpress.com/2016/06/03/hashing-passwords-in-net-core-with-tips/</link>
        <comments>https://basicsofwebdevelopment.wordpress.com/2016/06/03/hashing-passwords-in-net-core-with-tips/#respond</comments>
        <pubDate>Fri, 03 Jun 2016 08:42:49 +0000</pubDate>
        <dc:creator><![CDATA[Afzaal Ahmad Zeeshan]]></dc:creator>
        <category><![CDATA[Beginners]]></category>
        <category><![CDATA[C# (C-Sharp)]]></category>
        <guid isPermaLink="false">https://basicsofwebdevelopment.wordpress.com/?p=1313</guid>
        <description>
            <![CDATA[Previously I had written a few stuff for .NET framework and how to implement basic security concepts on your applications that are working in .NET environment. In this post I want to walk you to implement the same security concepts in your applications that are based on the .NET Core framework. As always there will be […]<img alt="" border="0" src="https://pixel.wp.com/b.gif?host=basicsofwebdevelopment.wordpress.com&blog=59372306&post=1313&subd=basicsofwebdevelopment&ref=&feed=1" width="1" height="1" />]]>
        </description>
    </item>
</channel>
</rss>

Of course I snipped out most of it, but you get the point of XML here. This data can be read by humans, and if a machine is set to parse it, it can provide us with a runtime association of the data with objects. In a much similar manner, we can share the data from one machine to another by serializing and deserializing the objects.

What is the need of JSON, then?

JSON came into the frame in the early 2000s. The purpose of XML and JSON is similar: data-interchange among multiple devices, networks and applications. However, the JSON syntax is very similar to what C, C++, Java and C# programmers have been using for their regular day-to-day “object-oriented programming“; pardon me, C programmers. The syntax is the one JavaScript uses for the notation of objects. JSON provides a much more compact document format for storing data. JSON data, as we will see in this guide, is much shorter as compared to XML, and in many ways can be (and should be…) used in network-based applications where each byte can be a bottleneck to your application’s performance and efficiency. I wanted to take this time to bash XML, but I think your mind would consider my words as personal and biased views. So, first I will talk about JSON itself and how it is structured, and then I will take the time to show the difference between JSON and XML and which one to prefer.

The JSON format and specifications were shared publicly as ECMA-404. The document guides developers in designing their APIs in such a way that they conform to the standard. API developers, programmers, and serialization/deserialization software developers can get much help from the documentation in understanding how to define their programs to parse and stringify JSON content.

Structure of JSON

JSON is an open-standards document format for human-readable and machine-understandable serialization and deserialization of data. Simply, it is used for data-interchange. The benefit of JSON is that it has a very compact size as compared to XML documents carrying the same data. JSON stores the data in the form of key/value pairs. Another benefit of JSON is that (just like XML) it is language-independent. You can work with JSON data in almost any programming language that can handle string objects. Almost every programming framework that I can think of supports JSON-based data-interchange. For example, ASP.NET Web API supports JSON format data transfer to and from the server.

The basic structure of a JSON document is very simple. Every document in JSON must have either an object or an array at its root. The object or array can be empty; there is no need for it to contain anything, but there must be an object or an array.

A sample, and simple JSON document can be the following one:

{ }

What that means is something that I am going to get into a bit later. At the moment, just understand that this is a blank JSON document for any object in the application. A JSON document can contain other data in the form of key/value pairs, whose values are of the following types:

  1. Object
  2. Array
  3. String; character data.
  4. Number; integer, fractional or exponential.
  5. Boolean; true or false.
  6. Null.

Note that JSON doesn’t support all of the JavaScript keywords; for example, you cannot use “undefined” in a valid JSON schema. Nothing is going to stop you from using it in your files, but remember that a valid JSON schema should not include such values. One more thing to consider is that JSON is not an executable file; although it shares the same object notation, it doesn’t allow you to create variables. If you try to add a few variables to it, it would generate errors. Now I would like to share a few concepts for the valid data types in JSON, what they are, and how you should use them in your own JSON data files.

JSON data types

JSON data types are the valid values that can be used in JSON files. In this section I want to clarify the values that they can hold and how you can use them in your own projects’ data-interchange formats.

JSON Object

At the root of the JSON document, there needs to be either a JSON object or a JSON array. In JavaScript, and in many other programming languages, an object is denoted by curly braces: “{ }”. JSON uses the same notation for denoting the objects in the JSON file. In JSON files, objects contain only the properties that they have. JSON can use any of the other data types (including objects) as the value of a property. How that works, I will explain later in this guide, but for now just consider that each runtime object is mapped to one JSON object in the data file.

In the case of an object, there are properties and their values enclosed inside the opening and closing curly braces. These are in the form of key/value pairs. Each key/value set is separated using a colon, “:”, and multiple sets are separated using a comma, “,”. An example of a JSON object is as follows:

{
  "name": "Afzaal Ahmad Zeeshan",
  "age": 20
}

Even the simplest of C#, Java or C++ programmers would understand what types these properties hold. It is clear that the “name” property is of “string” type and the “age” property is of number (integer) type.

There are however a few things that you may need to consider at the moment:

  1. An object can contain any number of properties in it.
  2. An object can omit the properties; being an empty object.
  3. All the property names (keys) are string values. You cannot omit the quotation marks around the key name. They are required; otherwise, JSON would claim that it was expecting a string but found undefined.
  4. Key/value sets are separated by a colon character, whereas multiple sets are separated using a comma. You cannot add trailing commas in JSON; some of the APIs may throw an error. For example, JSON.parse(‘[5, ]’); would return an error.

I will demonstrate the errors in a later section because at the moment I want to show you and teach you the basics of JSON. But, I will show you how these things work.

By now the type of the JSON object should be clear to you. One thing to keep in mind is that the JSON object is not only meant to be used at the root of the JSON document. It is also used as a value for the properties of an object, which allows the JSON document to contain the entire data in the form of relationships between objects; the only difference being that the objects are anonymous. See the sketch below.
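As a purely hypothetical example, with made-up field names, an object can nest another (anonymous) object as the value of one of its properties:

{
    "name": "Afzaal Ahmad Zeeshan",
    "father": {
        "name": "Ahmad",
        "age": 50
    }
}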

JSON Arrays

Like JavaScript, JSON also supports storing objects in a sequence, series or list; whatever you would like to call it. Arrays are not complex structures in a JSON document. They are:

  1. Simple.
  2. Start with a square bracket, and end on the same.
  3. Can contain values in a sequence.
  4. Each value is separated by a comma (not a colon).
  5. Value can be of any type.

A sample JSON array would look something like this,

[ "Value 1", "Value 2" ]

You can store any type of value in the array, unlike in other programming languages such as C# or C++. The types of the values are not required to be similar. That is something that pure C# or C++ programmers cannot fathom at first look. But, guys, this is JavaScript. I used the JSONLint service to verify this; later I will show you how it works in JavaScript.

Screenshot (1642)
Figure 2: JSONLint being used for verification of JSON validity.

In JavaScript, the types in an array don’t need to match. So, for example, you can do the following and get away with it without anything complaining about the types of the objects:

var arr = [ "Hello", 55, {} ];

So, JSON, being a JavaScript-derived document format, follows the same convention. JSON leaves the idea of type casting to the programmer, because that is performed only in the context of strongly typed programming languages such as C++, C#, etc. JavaScript, being weakly typed, doesn’t care about this at all. So, an array can take any type as long as it doesn’t violate other rules.

The upcoming data types are similar to what other programming languages have, such as strings, numbers and boolean values. So, I will not explain them in as much depth as I explained these two.

JSON Strings

String-typed values are the values which contain character data. In many programming environments, strings are called arrays of characters. In JavaScript, you can use either single quotes or double quotes to wrap string values, but per the JSON specification you should always use double quotation marks. Using single quotation marks may work in a JavaScript environment, but in cases when you have to share the data over the network with a type-safe programming environment, such as one where C++ or C# is the programming language, your single quotes may cause a runtime error when their parsers try to read a string starting as a character.

In JSON, strings are similar, and you have already been seeing strings in the previous code samples. A few common examples of string values in JSON are as follows:

// Normal string
"Afzaal Ahmad Zeeshan"

// Same as above, but do not use in JSON
'Afzaal Ahmad Zeeshan'

// Escaping characters, outputs: "\"
"\"\\\"" 

// Unicode characters
"\u0123"

In the above example, you can see that we can use many kinds of characters in our data. We can:

  1. Build normal strings such as C# or C++ strings.
    • The JSON standard forbids the use of single quotation marks, because you may be sending the data over to other platforms.
  2. Escape the characters that require it in strings. Otherwise, the string would result in undefined behavior. You can use the string escapes familiar from other languages, such as:
    • \”
    • \\
    • \/
    • \b
    • \f
    • \n
    • \r
    • \t
    • \u-4-hex-numbers
  3. Unicode characters (as specified in the last element of the above list) are also supported in JSON; you can specify the Unicode character code here.

One thing to consider is that your objects’ attributes (properties) are also keyed using a string value.

{
    "key": 1234
}

The value of a property can be anything, but the key must always be a string. Strings in JSON, JavaScript, C++, C# and Java are all alike and carry the same value for the same sentence. Unicode characters are supported and need not be escaped. From the original JSON website, I got this image:


Figure 3: JSON string structure.

Thus, the strings are similar in JSON and other frameworks (or languages).

JSON Integers

Just like strings, numbers are also literal values in the JSON document; they are of numeric type. They are not wrapped in quotation marks; if they are, then they are not of numeric type, but of string type.

A few examples of numeric values are:

// Simple one
1234

// Fractional
12.34

// Exponential
12e34

You can combine fractional and exponential forms too. You can also prepend the sign of the number (negative − or positive +) to the number itself. In JSON, these types can also be used as values. If you take a look at the example a few sections above, you can see that I used an integer value for my age property. A few things that you should consider while working with JSON data:

  1. In the exponential form, “e” and “E” are the same.
  2. A number must never end with “e”.
  3. A number cannot be NaN.
  4. Number encoding and variable size should be considered. JavaScript and other languages may differ in the variable sizes they use for numerics, which may cause an overflow.

JSON Boolean

Boolean values indicate the result of a conditional operation on an expression, such as, “let him in, if he is an adult”. JSON supports storing values in their literal boolean notation. They are similar to what other programming languages have. They must also not be wrapped in quotation marks, as they are literals. If a boolean value is wrapped in quotation marks, then it is not a boolean value; it is a string value.

// Represents a true result
true

// Represents a false result
false

These must be written exactly as they are, because JavaScript is case-sensitive and so is JSON. You can use these values in cases where other types don’t make as much sense. For example, you can specify the gender of the object.

{
    "name": "Afzaal Ahmad Zeeshan",
    "age": 20, 
    "gender": true
}

I didn’t use “male”; instead, I used “true”. On the other side, I can then translate this condition to another type, such as:

if(obj.gender) {
    // Male
} else {
    // Female
}

This way, besides strings and numeric values, we can store boolean values. These help us structure the document in a much simpler, cleaner and programmer-readable format.

JSON null

C-based programmers understand what a null is. I have been programming in languages such as C, C++, C#, Java and JavaScript, and I have used this keyword where objects do not exist. There are other programming languages, such as Haskell, where this doesn’t make much sense. But since we are talking about JavaScript and languages like it, we know what null is. Null just represents that an object doesn’t exist in memory. In JavaScript, this means the “intentional absence” of the object. So in a JavaScript environment, a null object does exist but has no value. In other languages it is the other way around; it doesn’t even exist.

It just has a single notation,

null

It is a keyword and must be typed as it is. Otherwise, a JavaScript-like error would pop up: “undefined“. This can help you in sending data which doesn’t currently exist. For example, in C# we can use this to pass values from the database that are nullable. There are nullable types in C#, which we can map to this one. For example, the following JSON document would be mapped to the following C# object:

{
    "name": "Afzaal Ahmad Zeeshan",
    "age": null
}

C# object structure (class)

class Person {
    public string Name { get; set; }
    public int? Age { get; set; }      // Nullable integer.
}

And if the field in the database is set to be nullable, we can assign it to that database record. This way, JSON can be used to denote runtime objects in a plain-text format for data-interchange. A quick sketch of this mapping is shown below.
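As a minimal sketch of that mapping, assuming the Json.NET library (which I cover later in this post) and relying on its default case-insensitive property matching, the document above deserializes straight into the class:

var json = "{ \"name\": \"Afzaal Ahmad Zeeshan\", \"age\": null }";
var person = JsonConvert.DeserializeObject<Person>(json);

// The JSON null maps onto the nullable int.
Console.WriteLine(person.Age.HasValue);   // Prints: False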

Examples of use

I don’t want to stretch this one any longer. JSON has been widely accepted by many companies, even Microsoft. Microsoft was using the web.config file in XML format for the configuration of applications. However, for a while now they have been upgrading their systems to parse and use settings provided in a much more JSON way. JSON is much simpler and easier to handle.

Microsoft started to use JSON format to handle the settings for their Visual Studio Code product. JSON allows extension of the settings, which helps users to add their own modifications and settings to the product.

I have been publishing many articles about JSON data sources, and I think, if I have started to use JSON, then it is a sign of the success of JSON over XML. 🙂

  1. Creating a “customizable” personal blog using ASP.NET
  2. Understanding ASP.NET MVC using real world example, for beginners and intermediate

There are many other uses of JSON, and my personal recommendation is to go with JSON wherever you can use it over XML. Use XML only when you are interested in it or are supposed to use it, such as when using the SOAP protocol; otherwise, consider using JSON.

Errors in JSON

No, these are not errors in JSON itself, but a few errors that you may make while writing JSON documents yourself. This is why you should always consider using a library.

Trailing commas

As already mentioned above, trailing commas in JSON objects or arrays cause a problem. The parser may think that there is another property to come, but unexpectedly hits a terminating square or curly bracket. For example, the following are invalid JSON documents:

{ "k": "v",  }    // Ending at a comma

[ 123, "", ]      // Ending at a comma

These can be overcome if you:

  1. First of all, write your own parsers.
  2. Make them intelligent enough to know that if there is no property after the comma, they can simply ignore adding one.
  3. If there is no more value in the array after the comma, it means the array ended at the previous token.

However, present parsers are not guaranteed to be intelligent enough, so my recommendation is to avoid trailing commas altogether.

Ending with an “e” or “E”

If you end your numbers in exponential form with an “e”, you are likely to get an error of undefined. You know what that means, don’t you? It is always better to end them in a proper format following the specification.

Undefined

Each property name must be a string token. In JavaScript, you can do both of the following:

var obj = { "name": "Afzaal Ahmad Zeeshan" };

// OR
var obj = { name: "Afzaal Ahmad Zeeshan" };

But in JSON you are required to follow the string-based key name method of creating and defining the object properties. In JSON, if you do the following, you will get an error:

{ 
   name: "Afzaal Ahmad Zeeshan"
}

That’s because name is undefined. A parser may consider it to be a variable somewhere, but JSON documents cannot be executed, so there is no purpose for JavaScript variables to be there. Some libraries may still render the results for your documents even if you leave your property names un-stringed. But don’t count on every parser being that forgiving; always follow the specification.

C# side of article

Up to this point, I hope that JSON is as simple as 1… 2… 3 for you! If not, please let me know so that I can make it even clearer. But in this section I want to talk about using JSON in your C# projects. I am going to talk about the libraries that are available in C#; more specifically, I am going to talk about the Json.NET library. I will use this library to explain how JSON can be used to save the state of objects, how you can create a JSON file, how you can parse it to create objects at runtime, and how serialization and deserialization work.

This will be a real-world section covering the methods and practices that you can use to work with JSON in your own applications. I will cover most of the C# programming part in this section so that you can understand and learn how to program JSON in your .NET environment.

JSON.NET library

I am specifically just going to walk you through the Json.NET library. Json.NET is a widely used JSON parser for the .NET framework. I have been using this library for a very long time, and I personally think this is one of the best APIs out there for JSON parsing.

Screenshot (1669)
Figure 4: Json.NET comparison with other popular JSON serializers.

I will use the objects provided in this library just to demonstrate how you can parse JSON files into runtime objects, and how you can create JSON strings (serialize the objects) using the objects provided to the serializer. By the end of this post you will be able to create your own JSON-oriented-and-backed applications in the .NET framework (or any framework that supports C# programming).

Creating the object for this guide

In C#, we create classes to model real-world objects. We can then use this library to convert the runtime structure of an object to a storable JSON document that can be used to:

  1. Send over the network.
  2. Save on the disk.
  3. Save for loading the object in that state later.

And many other uses. The counterpart of this process is the deserialization of objects into runtime structures from their JSON notation. First of all, I will create a sample class that we are going to convert to the JSON format and back from the JSON format.

class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
    public bool Gender { get; set; }
    public DateTime DateOfBirth { get; set; }
}

This object has the following diagram.

Screenshot (1670)
Figure 5: Person class diagram.

Frankly speaking, this object can be created in C++, it can be created in Java, and it can be created in JavaScript too. That means we can use JSON to serialize this object, or any number of such objects, and transmit them over the network to peers. Since we have stringified the data in JSON notation, we can parse the string data in those other languages too. Every other programming language has a JSON parser library.

We can redefine the structures in other programming languages and then use a JSON parser library to convert this string on those machines too. I am going to cover the C# part of the programming only.

A few things that you might want to know before we move forward: I have used the data type “DateTime” because I wanted to show what derived types are converted to when they are serialized to JSON notation. So, let’s begin the C# programming to understand the relative stuff in its C# counterparts.

Serializing the objects

In programming, serialization is the process of converting runtime objects to a plain-text string format. In other words, it converts the object structure from memory to a format that can be stored on disk. Serialization just translates the state of the objects; that is, it just translates the properties of the object into a format that can be used to store them for later use. You can think of it as a synchronization process, where you store the current state of the objects on disk so that you can load the data later and resume your work from the state where you left it, instead of starting from scratch again.

Since we are talking about JSON here, we are interested in the conversion of objects in their JSON notation for storage (serialization) and then in the upcoming section to convert the JSON document into runtime objects and data structures (deserialization). Simply, we are going to see how to convert the JSON into objects and vice versa.

To serialize an object, you need an object that holds data in itself; the object would have a state in the application. We then call the library functions to serialize the object in its JSON notation. Like this:

// Create the person
Person myself = new Person { 
                    ID = 123, 
                    Name = "Afzaal Ahmad Zeeshan", 
                    Gender = true, 
                    DateOfBirth = new DateTime(1995, 08, 29) 
                };

// Serialize it.
string serializedJson = JsonConvert.SerializeObject(myself);

// Print on the screen.
Console.WriteLine(serializedJson);

The function is called to serialize the object and (as clearly seen) returns a string object. This string holds the JSON notation for the object. The output returned was like this (yes, I formatted the JSON document for readability):

{
    "ID": 123,
    "Name": "Afzaal Ahmad Zeeshan",
    "Gender": true,
    "DateOfBirth": "1995-08-29T00:00:00"
}

We can store the data in a file, send it over the network, or do whatever we have to. I am not going to cover that part, because I think (and believe) that this is entirely relative to your application design and how it is expected to work. Now, we need to understand how it happened.

The ID, Name and Gender are the types that you have already seen. However, the last field was actually of type DateTime, which got translated to a string type in JSON. Why? The answer is simple: JSON doesn’t provide a native DateTime object to store date and time values. Instead, dates are stored in string notation. The string value is very much self-explanatory as to what the year is, what the month is and what value the date has. The time part is also clearly visible. A sketch of reading it back is shown below.
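Just as a sketch of the reverse direction, relying on Json.NET’s default handling of ISO 8601 date strings, that string comes back as a proper DateTime when we deserialize (deserialization itself is covered in the next section):

var roundTripped = JsonConvert.DeserializeObject<Person>(serializedJson);
Console.WriteLine(roundTripped.DateOfBirth.Year);   // Prints: 1995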

One thing to note here is that C# and JSON agree on using the “null” value for objects too. Thus, if the name is set to a null value, JSON supports the same value of null in its own document format. A small sketch of this is shown below. In the next section, we will study how JSON is mapped back to the objects.
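As a minimal sketch of that point (the exact output may vary with your serializer settings):

var nameless = new Person { ID = 1, Name = null };
Console.WriteLine(JsonConvert.SerializeObject(nameless));
// {"ID":1,"Name":null,"Gender":false,"DateOfBirth":"0001-01-01T00:00:00"}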

Deserializing the JSON

In data-interchange, this is the counterpart of the serialization process. In this process, we take the JSON document and parse it into the runtime object (or collection). There are many libraries available that can be used for parsing purposes; however, I am going to talk about the same one that I have been using to demonstrate the usage in the C# programming environment. The method is very simple in this case too. However, I am going to use a JSON-formatted string variable and parse it into the object.

Note: You can use values from files, networks, web APIs and other resources from where you can get the JSON formatted document. Just for the sake of simplicity I am going to use a string variable to hold the values.

The sample JSON data can be like the following,

string data = "{\"ID\": 123,\"Name\": \"Afzaal Ahmad Zeeshan\"," +
              "\"Gender\": true,\"DateOfBirth\": \"1995-08-29T00:00:00\"}";

I used the following code to deserialize the JSON document.

// Deserialize it.
Person obj = JsonConvert.DeserializeObject<Person>(data);

// Print on the screen.
Console.WriteLine(obj.ID);

Now notice one thing here. This “DeserializeObject” function has two versions:

  1. A plain non-generic version.
  2. A generic version.

If you use the non-generic version, then in C# you are going to use dynamic object creation and access the properties later. For example, like the code below:

dynamic obj = JsonConvert.DeserializeObject(data);

Console.WriteLine(obj.ID);

Using that dynamic keyword allows you to bypass the compile-time checks and access the “ID” property at a later time. This can be handy when you don’t want to create a class to deserialize against.

However, the version that I am using at the moment is different. It takes a type parameter, passed in the angle brackets as a generic parameter, and Json.NET deserializes the JSON document and fills up our object. We can also have an array of objects this way, instead of “convincing” the program to iterate over the collection: we can simply have the JSON parsed as an array of objects, and so on. This helps us shorten the code and skip manual type checks. See the sketch after the next snippet.

var listOfObjects = JsonConvert.DeserializeObject<List<Person>>(data);

Notice that we are passing a “List<>” type. The library converts the JSON into a list of objects, and we can then use the collection for iterative purposes.
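For that overload to work, the JSON document itself must be an array; here is a minimal sketch with a made-up two-person document:

string arrayData = "[ {\"ID\": 1, \"Name\": \"Person One\"}, {\"ID\": 2, \"Name\": \"Person Two\"} ]";
var people = JsonConvert.DeserializeObject<List<Person>>(arrayData);

// Iterate over the deserialized collection.
foreach (var person in people) {
    Console.WriteLine(person.Name);
}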

Some errors

There are a few errors that I would like to raise here, which might help you in understanding how JSON works. I will update this section later as I collect more data on this topic.

Type mismatch

If you try to deserialize JSON of one type (such as an object) into a type that it cannot be converted to (such as an array), the library will raise an error. So do the following:

  1. Either make sure which type you are converting to, or
  2. Do not convert to a generically passed type; instead, use the dynamic keyword to deserialize the object and then check its type later at runtime. That still doesn’t guarantee it will work.

My recommendations

Finally, I want to give you a few of my own tips from personal experience. In my experience, I started to work with data on a database system; SQL Server CE, as I remember. But later I had to create applications which required slightly complex structures and were not going to last another week. So in such conditions, using a database table was not a good idea. Instead, I had to use a sample and temporary data source. During those days, XML was a popular format as per my peers; JSON was something that most didn’t understand. So, in my own humble opinion, you should go with JSON where:

  1. Data needs to be shared cross-platform, cross-browser and cross-server. Databases may be helpful, but even they need serialization and stuff like that.
  2. You just need to test things out. You can point your Model at the JSON parsers and then, in release mode, change them to the actual data sources.

That is not all; the size of JSON is “amazingly” compact as compared to XML. For example, have a look here:

// JSON
{
 "name": "Afzaal Ahmad Zeeshan",
 "age": 20,
 "emptyObj": { }
}

// XML
<?xml version="1.0" encoding="UTF-8" ?>
<person>
    <name>Afzaal Ahmad Zeeshan</name>
    <age>20</age>
    <emptyObj />
</person>

Even in cases where JSON has nothing in it, XML is bound to have that declaration line. Then other problems come around, such as sending an array, or sending out a native-type object. In JSON you can send just this:

true

In XML, you don’t have any “true” type, so you are bound to send something extra, which can become a bottleneck in network transmission. I have been writing many articles, blogs and guides, and most of them are based on JSON if not on SQL Server.

Hashing passwords in .NET Core with tips

Previously, I had written a few posts on the .NET framework and how to implement basic security concepts in your applications that run in the .NET environment. In this post I want to walk you through implementing the same security concepts in applications that are based on the .NET Core framework. As always, there will be 2 topics that I will be covering in this post. I did this before, but since that was for .NET itself, I don’t think it carries over to .NET Core. Besides, .NET Core is different in this matter compared to the .NET framework; one of the major reasons being that there is no “SHA256Managed” (or any other *Managed type) in the framework. So the framework is different in this manner. This post covers the basic concepts and will help you understand and get started using these methodologies for security.

Security_original
Figure 1: Data security in your applications is the first step for gaining confidence in clients.

First of all, I will cover the hashing part, and I will give you a few of my tips and considerations for hashing passwords using .NET Core in your applications. Before I start writing, I remember when I was working on the Mono Project; the platform was very easy to write for. I was using Xamarin Studio as the IDE, and Mono was the runtime being used at that time. In my previous guide the focus was on Mono programming on Ubuntu, whereas in this post I will be covering the same concepts but with .NET Core. .NET Core is really beautiful; although it is not complete yet, it is very powerful. I am using the following tools at the moment, so in case you want to set up your own programming environment to match mine, you can use them:

  1. IDE: Visual Studio Code.
  2. C# extension: For C# support and debugging
  3. Terminal: Ubuntu provides a native terminal that I am using to execute the command to run the project after I have done working with my source code.

Screenshot (967)
Figure 2: Visual Studio being used for C# programming using .NET Core.

You can download and install these packages on your own system. If you are using Windows, I am unaware as to what Visual Studio Code has to offer, because since the start of Visual Studio Code I have only used it on Ubuntu, and on Windows systems my preference is always Visual Studio itself. Also, I am going to use the same project that I had created before, and I am going to start from there: A Quick Startup Using .NET Core On Linux.

So, let’s get started… 🙂

Hashing passwords

Even before starting to write it, I am considering the thunderstorm of comments that would hit me if I make a small and simple mistake in the points here, such as:

  1. Bad practices of hashing.
  2. Not using the salts.
  3. Bad functions to be used.
  4. Etc.

However, I will break the process down, since it is just a small program that does the job and there is very little exaggeration here. Instead of dwelling on that, I will walk you through many concepts of hashing, how hackers may try to get the passwords, and where hashing helps you out.

Until now I have written around 3 to 4 articles about hashing, and I can’t find much difference in any of the code that I have been writing. The main difference is that there is no extra managed-code stuff around: .NET Core removed everything redundant from the code samples. So we are left with the simple ones that we will now use.

What I did was create a simple minimal block of the SHA256 algorithm that hashes the string text that I pass to it. I used the following code:

// Requires: using System; using System.Text; using System.Security.Cryptography;

// SHA256 is disposable by inheritance.
using (var sha256 = SHA256.Create()) {
    // Send a sample text to hash.
    var hashedBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes("hello world"));
 
    // Get the hashed string.
    var hash = BitConverter.ToString(hashedBytes).Replace("-", "").ToLower();
 
    // Print the string. 
    Console.WriteLine(hash);
}

This code is a bit different from the one used on the .NET Framework, where the code starts as:

using (var sha256 = new SHA256Managed()) {
     // Crypto code here...
}

That is the only difference here; the rest of the code is almost identical. How you convert the bytes to a string is up to you: you can format them yourself as a hexadecimal string, use the BitConverter helper as shown above, or fall back to something like Base64.
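As a minimal sketch of those two options (the variable names here are mine, not from the original sample):

using System;
using System.Security.Cryptography;
using System.Text;

using (var sha256 = SHA256.Create()) {
    var hashedBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes("hello world"));

    // Option 1: a lower-case hexadecimal digest via BitConverter.
    var hex = BitConverter.ToString(hashedBytes).Replace("-", "").ToLower();

    // Option 2: a Base64 string; shorter, just store and compare consistently.
    var base64 = Convert.ToBase64String(hashedBytes);

    Console.WriteLine(hex);
    Console.WriteLine(base64);
}

Whichever format you pick, stick to it: comparing a hex digest against a Base64 one will obviously never match.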

The result of the first code block is:

Screenshot (968)
Figure 3: Result of the above shown code in C# being executed in Ubuntu terminal on .NET Core runtime. 

There is one other constraint here: "Encoding.UTF8". If you use a different character encoding, chances are your hashed string will be different. You can try out other flavors of character encodings, such as:

  1. ASCII
  2. UTF-8
  3. Unicode (.NET framework takes Unicode encoding as UTF-16 LE)
  4. Rest of the encodings of Unicode etc.

The reason is that each encoding produces a different byte sequence for the same text, and the hashing function works on the bytes of the data that are passed, as the sketch below demonstrates.
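Here is a small sketch of that (my own example, using an accented string so that even ASCII and UTF-8 disagree; for plain ASCII text those two would produce identical bytes):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

// Hash the same text under different encodings; the digests differ
// because the underlying byte sequences differ.
var encodings = new Dictionary<string, Encoding> {
    { "ASCII",     Encoding.ASCII },
    { "UTF-8",     Encoding.UTF8 },
    { "UTF-16 LE", Encoding.Unicode }
};

using (var sha256 = SHA256.Create()) {
    foreach (var pair in encodings) {
        var bytes = sha256.ComputeHash(pair.Value.GetBytes("héllo wörld"));
        var hash = BitConverter.ToString(bytes).Replace("-", "").ToLower();
        Console.WriteLine($"{pair.Key}: {hash}");
    }
}

Run this and you get three different digests for what looks like one and the same string.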

Tips and considerations

There are generally two namespaces in play here. One of them is the familiar old .NET namespace, System.Security.Cryptography, while the other is Microsoft.AspNet.Cryptography, which is part of ASP.NET Core and had not been released at the time of writing. Anyway, here are a few tips you should consider before handling passwords.

Passwords are fragile — handle with care

I can't think of any online service, offline privacy application, or API host where passwords should not be handled with care; if there is one, I would rather not know about it. Passwords must always be hashed before being saved to the database. We hash them because hashing algorithms are designed with one thing in mind: they are hard (if not impossible) to convert back to the plain-text password. That makes it much harder for hackers to recover the passwords in their real form. To demonstrate this, I turned the code into a function and printed the hashes of a few strings that differ only slightly.

private static string getHash(string text) {
    // SHA256 is disposable by inheritance.
    using (var sha256 = SHA256.Create()) {
        // Send a sample text to hash.
        var hashedBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(text));
   
        // Get the hashed string.
        return BitConverter.ToString(hashedBytes).Replace("-", "").ToLower();
    }
}

I will execute this function and get back the hashed strings for texts that differ only slightly from one another.

string[] passwords = { "PASSWORD", "P@SSW0RD", "password", "p@ssw0rd" };
 
foreach (var password in passwords) {
    Console.WriteLine($"'{password}': '{getHash(password)}'");
}

Although they look alike, have a look at the avalanche effect that such small changes trigger. Notice, too, the differences between the upper-case and lower-case variants.

Screenshot (972)
Figure 4: Password hashes being shown in the terminal. 

This helps in many ways, because it is harder to guess which plain-text input could have produced this hashed string. Remember the constraints again:

  1. The character encoding is UTF-8; other encodings would produce a different byte ordering.
  2. The hash algorithm being used is SHA256; other algorithms would produce different results.

If you don't hash the passwords, hackers may try the most common attacks on your database system to gain access privileges. A few common types of attack are:

  1. Brute force attack
  2. Dictionary attack
  3. Rainbow table attack

A rainbow table attack works in a different manner: it tries to convert the hash back to the plain text using a database of precomputed password/hash combinations. Brute-force and dictionary attacks use exhaustive guessing and lists of commonly used passwords, respectively, to gain access. You need to prevent these attacks from succeeding.

Besides, there are cases where your password hashing is useless, such as when you use the MD5 hashing algorithm. MD5 hashes can be cracked easily; lookup tables covering entire password lists are already available, and hackers can use them to crack passwords hashed with MD5. Even SHA256 and SHA512 don't save you on their own, as you are going to see in the following section. In such cases you have to add an extra layer of security.
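As a quick illustration of how hopeless MD5 is on its own (this snippet is mine, not part of the original samples), the MD5 digest of the string "password" is so well known that practically every public lookup table returns it instantly:

using System;
using System.Security.Cryptography;
using System.Text;

// MD5 digest of "password"; this value (5f4dcc3b5aa765d61d8327deb882cf99)
// sits in every reverse-lookup table, so hashing alone protects nothing.
using (var md5 = MD5.Create()) {
    var bytes = md5.ComputeHash(Encoding.UTF8.GetBytes("password"));
    Console.WriteLine(BitConverter.ToString(bytes).Replace("-", "").ToLower());
}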

Bonus: how to break it?

Before I continue further, I want to show why these passwords and their hashes are weak. There are many hacking tools available, such as reverse-lookup services. Let us take our first password and see whether it can be cracked. I used the CrackStation service to crack the password and convert it back to its original text form.

Screenshot (975)
Figure 5: SHA256 based password converted back to its original form. 

See how ineffective plain hashing turns out to be. In a later section I will show you how to salt the passwords and what effect that has. Although we hashed it using SHA256, the reverse-lookup table already contained our password. A hacker would simply take that hash and recover the real plain-text string to use for authentication purposes.

Slower algorithms

On networks, hackers will generally attack your website with a script, so you should use a hashing algorithm that is significantly (but not excessively) slow. About a third to half a second should be enough. The purpose is:

  1. It should add a delay to the attacker if they are trying to run a combination of passwords to gain access.
  2. It should not affect the UX.

There are many algorithms that run the hash for some 10,000 iterations or so. The namespace I mentioned earlier, Microsoft.AspNet.Cryptography, has objects that let you specify the iteration count, salt addition, and so on.

Remember: For online applications, do not increase the iteration count. You would indirectly cause a bad UX for the users who are waiting for a response.
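As a sketch of the iteration idea, here is an iteration count in action using the framework's built-in PBKDF2 implementation, Rfc2898DeriveBytes, which lives in System.Security.Cryptography. (The Microsoft.AspNet.Cryptography package exposes a similar Pbkdf2 helper, though its exact name has shifted between pre-release versions, so treat that as an assumption.) This is a minimal sketch, not the one true configuration:

using System;
using System.Security.Cryptography;

// Derive a 256-bit key from a password with 10,000 PBKDF2 iterations
// (HMAC-SHA1 is the default PRF for Rfc2898DeriveBytes) and a 128-bit salt.
byte[] salt = new byte[128 / 8];
using (var rng = RandomNumberGenerator.Create()) {
    rng.GetBytes(salt);
}

using (var pbkdf2 = new Rfc2898DeriveBytes("p@ssw0rd", salt, 10000)) {
    byte[] key = pbkdf2.GetBytes(256 / 8);
    Console.WriteLine(BitConverter.ToString(key).Replace("-", "").ToLower());
}

The higher the iteration count, the slower every single guess becomes for an attacker; just keep the note above in mind for online applications.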

Add salt to the recipe

I wonder who started the terminology of "salt" in cryptography; he must have had good taste in computers, I'd say. I covered most aspects of adding salts in the article listed in the references section, so please refer to that. However, I would like to share the code I used to generate a random salt for the password. Adding a salt randomizes the password itself. Suppose one user has the password "helloserver" and another user has the same password; by default their hashes would be identical, but adding a random salt to each makes the hashes differ.

In .NET Core, you can use the “RandomNumberGenerator” to create the salt that can be used for the password.

private static string getSalt() {
    // 128 bits = 16 bytes of random salt.
    byte[] bytes = new byte[128 / 8];
    using (var keyGenerator = RandomNumberGenerator.Create()) {
        // Fill the buffer with cryptographically strong random bytes.
        keyGenerator.GetBytes(bytes);

        // Return the salt as a lower-case hex string.
        return BitConverter.ToString(bytes).Replace("-", "").ToLower();
    }
}

This creates a few random bytes and returns them as a hex string to be appended to the password.

string[] passwords = { "PASSWORD", "P@SSW0RD", "password", "p@ssw0rd" };
 
foreach (var password in passwords) {
    string salt = getSalt();
    Console.WriteLine($@"{{
       'password': '{password}', 
       'salt': '{salt}',
       'hash': '{getHash(password + salt)}'
       }}"
    );
}

This shows how the passwords with “random salt” differ.

Screenshot (974)
Figure 6: Passwords with their salts being hashed. 

Have a look at the hashes now. They differ from what they were before. Also notice that the function returns a different salt every time, which produces different hashes even for identical passwords. One benefit of this is that your passwords are secure against a rainbow table attack.

Test: We saw that unsalted passwords are easy to reverse-look up. This time we salted the password, and we will test the last of our passwords to see whether there is a match.

Screenshot (976)
Figure 7: Password not found.

Great, isn't it? The password was not matched against any entry in the password dictionary. This gives us an extra layer of security, because a hacker won't be able to convert the password back to its original form using a reverse-lookup table.

Using salt: the good way

There is no single good way of using salt; there is no standard to follow when adding the salt to the password. It is just an extra "random" string to be added to your password strings before they are hashed. There are many common approaches: some append the salt, some prepend it, and some do both.

Do as you please. 🙂 There are, however, a few tips that you should keep in mind while salting passwords.

  1. Do not reuse the salts.
  2. Do not try to extract the salts from the passwords or usernames.
  3. Use a suitable salt size; 128 bits is a common choice.
  4. Use random salt.
    • You should consider using a good library for generating the salts.
  5. Store the salt together with the hashed password; you will need the same salt later to verify a login, since a random salt cannot be regenerated. See the sketch after this list.
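To tie tip 5 together, here is a minimal sketch (reusing the getHash and getSalt helpers from above) of why the salt has to be stored next to the hash:

// Registration: generate a salt and store it together with the salted hash.
string salt = getSalt();
string storedHash = getHash("p@ssw0rd" + salt);

// Login: recompute the hash with the *stored* salt and compare.
bool isValid = getHash("p@ssw0rd" + salt) == storedHash;
Console.WriteLine(isValid); // True

In production code you would also prefer a constant-time comparison over ==, but the point here is that without the stored salt you could never recompute the hash to verify the user.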

References:

  1. Avalanche effect
  2. Hashing Passwords using ASP.NET’s Crypto Class
  3. Guide for building C# apps on Ubuntu: Cryptographic helpers
  4. What are the differences between dictionary attack and brute force attack?

Final words

In this post, I demonstrated hashing techniques in .NET Core. The procedure is very much like that on the .NET Framework; the main difference is in how the objects are instantiated (factory Create() methods instead of the *Managed types), and in my opinion this API is likely to change again soon.

I gave you a good overview of password hashing, how attackers may crack the hashes, and how you can add an extra layer of security. Beyond that, you should consider adding more security measures to your application to protect it against other hacking techniques as well.