
Common Problems in Xamarin.Android and their solutions

Introduction and Background

This is my third or fourth post on Xamarin, and it covers a fairly critical subject, so if you have been pulling your hair out these days, this post is for you. The purpose of this post is to collect the most commonly faced issues, with their solutions attached. I personally faced quite a lot of issues with Xamarin. The problem was not that I was a beginner in Xamarin. The major problem was that it was all working before, but right after a reset everything started giving me errors. Even worse, there were no real solutions at all. Every solution read like, “Reinstall Xamarin”, “Reinstall Android SDK”, “Remove the space in the Android SDK path”. Even the most authoritative resources were not providing a working, or at least sensible, solution. So I thought I should write a complete post sharing reality-based solutions, and not just the “install this, install that” sort of advice that does not help anyone at all.

Installation of Xamarin

Installation of Xamarin itself is, in truth, a bit confusing and problematic for beginners. If you have ever worked with Xamarin before, then you might know where all the plugs go, but for a beginner the experience is painful, and many give up at the start, let alone the middle.

The initial location where Xamarin installs the Android SDK is “C:\Program Files (x86)\Android\android-sdk”. A few things to consider here:

  1. There is no problem with the space in the path. Spaces are a problem only when you are going to program using the NDK (Native Development Kit); otherwise they do not cause any trouble. I am also using a path with a space in it and it does not cause any problems.
  2. Before anything at all, I would recommend that you run the Android SDK Manager by:
    1. Either running the “android.bat” file as Administrator.
    2. Or going to Visual Studio and running the Android SDK Manager from the Android toolbar. It will request Admin access itself.
    3. Note that you need to run the SDK Manager with Admin access; otherwise it will not work. Worse, it will give you “Unable to move directory…” errors and later delete the “android.bat” file, forcing you to download and install the Android SDK once again. Painful.
  3. It can be tempting to reuse the Android SDK installed by Android Studio when you are short on HDD space on your machine, but my own recommendation is not to do that. Keeping the two SDKs separate means that if Xamarin messes its copy up, Android Studio keeps running fine. Plus, you do not need the plethora of packages that Android Studio installs just for Xamarin.

Figure 1: Android SDK launcher.

The installation typically installs Android SDK levels 19, 21, and 23 (the 23rd can be selected from the installation options). You should not just go ahead and start installing everything, even if someone asks you to. There is no point in doing that.

Figure 2: Android SDKs and tools provided.

Later in this post, I will show you which of the frameworks are required and necessary for your project to work, and which of them are not required at all.

Installation of Android SDK

The next important point to consider: in various places online you will find people saying that you should install every SDK from the minimum one required all the way up to the latest one. No, that is not the solution, and it is not required at all.

For example, have a look here,

Figure 3: Android target platforms.

In this case, which SDKs do I need to install? If you said all the way from 16 to 23, you are wrong. The only SDK that is required and compulsory for you to have is the “Latest Platform”. Now, the concept of the latest platform is a bit different here. You cannot expect the latest Android version released by Google to also be the latest one in Xamarin. Xamarin does not work under Google, and its APIs arrive a bit later than Google’s own APIs.

I have only installed the SDK platforms for 25 and 23. I installed 25 because everybody kept telling me to install everything, which did not help in any case. So, what you need to do is install only the Latest Platform, plus any platform that you need to test your application on, as in the case of emulators and so on.

One more thing: the latest platform will differ by the time you read this. At the time of writing, it was Android 6.0 Marshmallow; even though Nougat had been released months earlier, the latest platform in Xamarin was still 6.0, while in Android Studio I was already targeting API level 25. This means the Android Studio API level and the Xamarin level do not always meet, so you need to double-check which of them you are going to support.

Emulators in Xamarin

Xamarin, if installed with Visual Studio, comes shipped with the Visual Studio Emulator for Android. It requires you to have Hyper-V installed and active, meaning you can only use it on a Pro edition of Windows; on other editions, such as Home, you cannot run it. In that case you have to fall back on Genymotion, or on other products such as the Android emulator provided by Xamarin itself. The benefits of using these emulators are:

  1. They come pre-shipped and preconfigured.
  2. All you need is a Pro edition of Windows (in the case of Visual Studio’s emulator), or a commercial account, such as a Genymotion unlimited account.

However, my choice is a bit different. I prefer using the same Android emulators that I used with Android Studio. There are various benefits to this:

  1. You get the latest API levels beforehand. My latest platform (as seen above) is Android 6.0; however, I get to run the application on Android 7.0 using the emulators from Android Studio. Fun, eh?
  2. You can use Intel HAXM, and it works even if you do not have a Pro edition of Windows. However, a CPU with virtualization technology is required; an Intel CPU, of course.
  3. Visual Studio automatically detects the running Android device, and you can push your application to the running emulator and run it in super-fast mode.

However, if you want to run your application on one of Android Studio’s emulators, you need to make only one thing sure: the platform level of the emulator must also be installed in the Xamarin.Android SDK. For example, if I have a device running Android 7.0, and I have not installed Android 7.0 as a platform level in the Xamarin.Android SDK, the application will not deploy; it will build and attempt to deploy, but it will neither fail nor succeed. To overcome that, install the same SDK level in your Xamarin.Android SDK as in the Android Studio SDK. Then you can deploy your applications.

The one purpose of having Android 7.0 installed in Xamarin was that I was testing my applications on Android Studio’s emulator, which ran Android 7.0; to have it accept the application, I needed to install the Android 7.0 SDK on the Xamarin side as well. Otherwise, debugging would not start at all.

Figure 4: Android emulators shown in Android Studio AVD manager.

If you look closely enough at the following, you will find a lot of easter eggs: Android 6.0 as the target, yet running on Android 7.0, and so on.

Figure 5: Android Studio emulator with Android 7.0 running a Xamarin application that targets Android 6.0.

You will also see that the Android emulator being used is Android Studio’s emulator and not any other, and it runs and works just perfectly.

View not loading

Most of the time you will get an error message saying that the Android SDK is outdated. To be specific, the error message is,

The installed Android SDK is too old. Version {API_LEVEL} or newer is required.

Then it provides you with a link to open the Android SDK Manager and install it. The problem here is that you are trying to target your build at a version that is not installed at the moment. For example, in my case above the target was Android 6.0 and the only SDK installed was 19 (the default one). That is what prevented my setup from targeting the views at the latest API level.

Most posts tell you to update the paths, install everything, move the directories from one location to another, or even reinstall Xamarin.

The solution to this problem is pretty much straight-forward. All you need to do is:

  1. Go to Properties → Application.
  2. Check the “Compile using Android version:” value. Also note that “Target Android version” can be set to “Use Compile using SDK version” to make things a bit simpler.
  3. Finally, install the SDK for your target platform level.
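
For reference, the minimum and target choices made on that page end up in AndroidManifest.xml, while the compile version is stored in the project file as TargetFrameworkVersion. A rough illustration of the manifest entry (the version numbers here are examples only, not a recommendation):

<!-- minSdkVersion: lowest API level the app runs on; targetSdkVersion: the level it is tested against. -->
<uses-sdk android:minSdkVersion="16" android:targetSdkVersion="23" />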

One thing to note here: if the SDK Manager shows that you have installed the platform but you still cannot run the application, recheck the location of the SDK being used.

Figure 6: Android SDK default paths in Visual Studio.

  1. For that, go to Tools → Options → Xamarin → Android Settings.
  2. Double-check the Android SDK Location property here, and make sure it is the location where your SDK is installed.

These steps set a few things up in the system so that Xamarin works the way it should.

Java SDK required

In most cases, Java JDK 8 is recommended. By default you will be provided with JDK 7, and that works fine, but it is recommended to install JDK 8 and remove JDK 7. Why?

  1. JDK 7 is old. Really.
  2. JDK 7 will cause your applications to target JDK 7 even if JDK 8 is installed, because it overwrites the default JAVA_HOME variable. Since JAVA_HOME needs to point at the JDK 8 location, there is no need to keep JDK 7 around; it will never be used.
  3. The latest Android tools use, and are supported by, JDK 8. Soon Xamarin will also require JDK 8, because compiling Xamarin code for Android uses the Android libraries and SDK tools as well, and they require JDK 8.

A simple way to do this is to remove JDK 7 completely; go to the Control Panel for that. Next, set the JAVA_HOME environment variable to point to the JDK 8 location, for example “C:\Program Files\Java\jdk1.8.0_121” (the exact folder differs per machine, based on the build or version, so check it against your own system).

Final words

Xamarin itself is a very powerful tool provided by Microsoft, and its benefits, especially Xamarin.Forms, outweigh its disadvantages. The main disadvantage is that the learning slope is really slippery, not just steep. Many beginners give up on learning the Xamarin framework, because learning a single language such as Java, with most of the sample code already provided, is an easier way to get your work done; whereas in Xamarin you need to not just learn the tools but also understand which plug goes into which socket.

I tried my best to provide you with a post that has the solutions to the most widely faced problems. The problems discussed here are all generic, rather than specific cases, because I wanted future readers to get help from this post too.

If you find any other error, do let me know by commenting and I will try to find a solution — a real solution — and then share it with you and the rest of the community. 🙂

Learning SQLite databases in Xamarin for Android

Introduction and Background

For quite a while now I hadn’t been doing any mobile development, and I never considered myself a mobile developer either, until a few days back, when I realized I should look into a few familiar and amazing areas, such as database development, or writing database programs for Android. Working on Android, I learnt the basics of SQLite databases and how they actually work, and I really enjoyed the huge performance boost they give applications by serving data very fast while maintaining resilience. But that was not the most important thing I learnt in the previous weeks; the most important part was the use of the Xamarin APIs for SQLite programming on Android. And, speaking personally, Xamarin provides an even better interface for programming the databases than the Java APIs for the same task.

I don’t expect you to have any background knowledge of SQLite, or to have previously worked with SQLite or Android’s database system, because in this post I will start with the basics and then build on top of them. I do, however, want and expect you to have a basic understanding of Xamarin and the C# programming language, because we are going to use C# heavily to build Xamarin applications. So, I believe we should start.

Understanding database systems

In database theory, did you ever hear words like “database server”? I am sure you did… A database server is assigned the task of managing the data sources on a machine. It provides input/output channels that clients, or other processes, use to save or read data. It is the responsibility of the database server, also known as the database engine, to take care of communication, data caching, data storage and data manipulation, and it returns only the data that is required and requested; nothing more, nothing less. But “database engine” is just a general terminology here. The actual product will be something like MySQL, SQL Server, Oracle Database, MariaDB, etc. All of these engines are installed on a machine, they allow the developers (or IT experts) to create databases on it, and only then can someone access or insert data into the system. There were various reasons to use those engines:

  1. They provided an abstracted means to manage the data; in other words, the data layer was managed by them, and the rest of the layers were programmed independently of managing the internal and physical layers of storage.
  2. They provided an easier way to manage the data: you just install the server and execute commands to manipulate or read the data. The SQL language was developed for this.
  3. You can have multiple client devices, all connected to a central server, to share and fetch the data.
  4. Authentication problems were solved. Only one machine, or program, needed to know the credentials and the mechanisms for storing and retrieving the data; the other machines were agnostic of this information. However, you could always use tokens, or account systems, to allow them to perform some administrative tasks.

But as the world progressed, the way applications were developed changed. Software developers wanted to write application software that would run on the machines themselves and store the data locally, instead of having to build a large server for processing and storing the data. The problem with the older database systems was that they were installed at one location, and a network connection was required to retrieve or store the data. Even if we skip all the problems of network-based data instances, such as security vulnerabilities, we are still left with other issues: what will the application do when the network is down? What happens to all the customers if the server goes down, or is being upgraded, and so on.

In such cases, it was a good approach to install the server along with the application. But size was an issue: servers really do contain a lot of services, task managers, data handlers, connection managers, authentication managers and so on and so forth. So in those cases, the data was stored either as plain files or as structured data, and that gave birth to embedded databases. The databases were embedded; in other words, they were just files plus library code to work with them. Every embedded database works this way: there are simple files containing the data, and there are APIs, libraries, or simple program routines that are executed to fetch or write the data and update the data sources. The benefit is that you can ship this library with the application at no extra cost or dependency at all.

Thus, SQLite was brought into action

SQLite, as mentioned above, like other embedded database systems, is written in the C language as a library that manages the data sources. There are various benefits to having SQLite instead of a large database server on a remote machine:

  1. It provides familiar SQL syntax for data manipulation and data extraction.
  2. It can be used with every popular language nowadays; bindings exist for languages such as C++, Java, C#, Python and even Haskell for functional programming.
  3. There is optional support for Unicode character sets as well. You can turn it off for ASCII encoding, or map your own data.
  4. It follows the relational database model. Everything gets stored in tables.
  5. There is support for triggers as well.
  6. It has a dynamically typed column system. This means it can easily be programmed with whatever data type you have; internally it maps the values to the type the column is expecting, such as converting the string “5” to an integer.
  7. You can find it in the most widely used operating systems too:
    1. On mobile, Android is at the top.
    2. On the desktop, Windows 10 provides built-in support for it.
    3. Since it is an embedded database, you can have it anywhere.

With these benefits in mind, Android settled on SQLite as the primary database for applications. So in this article, we will work toward understanding, and then using, these databases for our own benefit: storing data and retrieving it when the user needs it in the application. Almost every Android application uses this database provider, for its speed and its more-than-enough benefits.

Understanding SQLite system in Android

SQLite plays an integral role in the Android APIs for app development. The reason is that SQLite has been part of the Android system ever since API level 1.

Figure 1: Android API and SQLite API level.

This is the default database provided and supported natively in the API, and with every update to the Android API an update to the SQLite version is also shipped, so that the latest bug fixes and performance issues are addressed with every release.

Figure 2: Android and SQLite logo.

And even if you are building an application that provides data to other installed applications, either for your own organization or for other vendors, you can use SQLite as the backend of your data layer.

Figure 3: Content providers structure as captured from Content Providers documentation on Android Developer website.

In Android, implementing a full-featured data storage API over your data models is just a matter of a few objects and their functions. There is support for some object-relational mappers out there as well, but I want to talk about the native libraries.

In the Android API, the SQLite providers live under the “android.database.sqlite” package. The most prominent types in the package are,

  1. SQLiteOpenHelper: This is the main class to inherit your classes from in order to handle database creation, opening and closing events. Even events such as creating new tables, deleting old tables, and upgrading your database to a newer version (such as upgrading the schema) are all handled in these derived classes of yours.
  2. SQLiteDatabase: This is the object that you get and use to either push the data to the database, or to read the data from the database.
  3. SQLiteCursor: This is the cursor implementation for working with the data records that are returned after “Query” commands.

Their relationship is quite simple: each depends on the next, and together they communicate as a pipeline to provide the services we require of them.

Figure 4: Structure of the system communication with SQLite database.

I hope the purpose of each of these is a bit clearer now. The way they communicate is this: your main data-manipulation class first inherits from SQLiteOpenHelper to get the event functions to handle, then holds a field of type SQLiteDatabase on which it executes the functions for writing or reading data. The final object (SQLiteCursor) is used only when you are reading data; when writing or updating, it is not required. But when you need to fetch data, it acts as the pivot point for reading from the data sources. As we progress to programming the APIs, you will understand how these work.

Wrapping up the basics

This wraps up the basics of SQLite on Android. Now we can move on to actually writing an application that lets us create a database, create tables for the data we are going to write to it, and then write the objects and the functions we will use to actually store data in the tables: the CRUD functions.

Writing the Xamarin application

Unlike my other posts, I do expect you to create the project yourself, because that is one simple task that every post about Xamarin, or any other Visual Studio based Android project, has in common, so I will not spend any time on it. For a good overview and how-to, please go through the basic “Create an Android Project” post on the Xamarin documentation website. It gives you a good overview and a step-by-step introduction to creating a new project in either Visual Studio or Xamarin Studio, whichever you are using as per your choice or need.

Background of our models and data storage

Like every data layer developer, we will first define the structure for the data to be saved in the database. These are just simple classes, with the columns represented as properties of the object, each with a type that makes sense. We will define that first, and then move onward. The purpose is to make sure that we are on the same track and share the same understanding of how our application should work and process the data.

In class form, our data structure would look like the following,

public class Person {
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Dob { get; set; }
}

The rest of the stuff is not important, and just to keep things simple, I added only 3 columns, i.e. properties of the object. We will simply read these values and then show them to the user in a Toast message in the same activity, to keep things simple for now. So next we need to create the database, the tables, and the columns inside the tables to represent our objects.

For that, my general approach is to write the code in a separate folder and name it generically; such as a “DataStore” file inside a “Services” folder, or a “DbHelper” file inside a “Model” folder, etc. These are a few good habits that help you write good code in your applications. You are free to use any of these approaches; I used the first one. The basic structure of the class is,

// Inheriting from the SQLiteOpenHelper
public class DataStore : SQLiteOpenHelper {
    // Must be const (or static): C# does not allow an instance field in a base-constructor call.
    private const string _DatabaseName = "mydatabase.db";

    /*
     * A default constructor is required, to call the base constructor.
     * The base constructor takes the context, the database name, an optional
     * cursor factory (null here), and the schema version number (1).
     */
    public DataStore (Context context) : base (context, _DatabaseName, null, 1) {
    }

    // Default function to create the database. 
    public override void OnCreate(SQLiteDatabase db)
    {
        db.ExecSQL(PersonHelper.CreateQuery);
    }

    // Default function to upgrade the database.
    public override void OnUpgrade(SQLiteDatabase db, int oldVersion, int newVersion)
    {
        db.ExecSQL(PersonHelper.DeleteQuery);
        OnCreate(db);
    }
}

This is the default skeleton for your basic database handler; it handles everything about database creation and deletion. One thing you might have noticed: I did not include the “PersonHelper” object here. There are a few things I want to say about this object before sharing the code.

  1. This is not the model itself. It is merely the helper we will use.
  2. This class is the class-based representation of the table in our database. It contains the query that will be executed to create the table, including any constraints to be set, such as PRIMARY KEY, etc.
  3. It will help us store “Person” objects and retrieve a list of “Person” objects, and even update or delete them as well.

Let us have a look at the class itself,

public class PersonHelper
{
    private const string TableName = "persontable";
    private const string ColumnID = "id";
    private const string ColumnName = "name";
    private const string ColumnDob = "dob";

    public const string CreateQuery = "CREATE TABLE " + TableName + " ( "
        + ColumnID + " INTEGER PRIMARY KEY,"
        + ColumnName + " TEXT,"
        + ColumnDob + " TEXT)";


    public const string DeleteQuery = "DROP TABLE IF EXISTS " + TableName;
 
    public PersonHelper()
    {
    }

    public static void InsertPerson(Context context, Person person)
    {
        SQLiteDatabase db = new DataStore(context).WritableDatabase;
        ContentValues contentValues = new ContentValues();
        contentValues.Put(ColumnName, person.Name);
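        // Note: ToString() below uses the current culture's date format; a round-trip
        // format such as person.Dob.ToString("o") is safer if the culture can change.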
        contentValues.Put(ColumnDob, person.Dob.ToString());

        db.Insert(TableName, null, contentValues);
        db.Close();
    }

    public static List<Person> GetPeople(Context context)
    {
        List<Person> people = new List<Person>();
        SQLiteDatabase db = new DataStore(context).ReadableDatabase;
        string[] columns = new string[] { ColumnID, ColumnName, ColumnDob };

        using (ICursor cursor = db.Query(TableName, columns, null, null, null, null, null))
        {
            while (cursor.MoveToNext())
            {
                people.Add(new Person
                {
                    Id = cursor.GetInt(cursor.GetColumnIndexOrThrow(ColumnID)),
                    Name = cursor.GetString(cursor.GetColumnIndexOrThrow(ColumnName)),
                    Dob = DateTime.Parse(cursor.GetString(cursor.GetColumnIndexOrThrow(ColumnDob)))
                });
            }
        }
        db.Close();
        return people;
    }

    public static void UpdatePerson(Context context, Person person)
    {
        SQLiteDatabase db = new DataStore(context).WritableDatabase;
        ContentValues contentValues = new ContentValues();
        contentValues.Put(ColumnName, person.Name);
        contentValues.Put(ColumnDob, person.Dob.ToString());

        db.Update(TableName, contentValues, ColumnID + "=?", new string[] { person.Id.ToString() });
        db.Close();
    }

    public static void DeletePerson(Context context, int id)
    {
        SQLiteDatabase db = new DataStore(context).WritableDatabase;
        db.Delete(TableName, ColumnID + "=?", new string[] { id.ToString() });
        db.Close();
    }
}

This is a simple, very simple, CRUD-based table-structure class for our application. 🙂 We will use this class for all of our internal purposes of mapping objects between the database and the code itself. The code is pretty simple; the most important objects used above are,

SQLiteDatabase

This is the database handle that we get from the helper object (our own DataStore object). One difference between the Java and C# code is that the C# code looks shorter (yes, a personal opinion). For example, have a look below,

// C#
SQLiteDatabase db = new DataStore(context).WritableDatabase;

// Java
SQLiteDatabase db = new DataStore(context).getWritableDatabase();

To understand the difference, you should understand encapsulation in object-oriented languages and properties in C#. To an extent it does not make any difference, but if you come from a Java background and start programming in Xamarin, you will need to understand the best of both worlds and then apply them in your own areas.
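
As a minimal, Xamarin-agnostic illustration (the PersonDto class here is purely hypothetical), a Java-style getter/setter pair collapses into a C# property, which is why the C# call site reads like field access,

public class PersonDto
{
    private string name;   // hidden backing field (encapsulation)

    // In Java this would be a getName()/setName(String) pair;
    // in C#, the same idea is a property, read and written like a field.
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}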

Heads up: Entity Framework Core can be used as well in Xamarin applications. Thanks to .NET Core.

ContentValues

In the Android APIs, this is the wrapper used to carry the column values for each insert or update of a record. The same object is provided here: you add your values to it, and the SQLite engine uses it to push the values to the database.

ICursor

The basic cursor interface, used to iterate over the result collection. In the Android API using Java, you would write the following code,

// Java
Cursor cursor = db.query(...);

And in the C# code you get a slightly different version, but that does not matter at all, as you can always implement the ICursor object and create your own handlers for the data. That helps in many ways (see the sketch after this list):

  1. You get to validate the data before generating the list itself.
  2. You can build your own data structures and load them in one go.
  3. You can use and implement other services and consume them in the same cursor loop; but in many cases, this is not required at all.
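
For instance, here is a minimal sketch of the first point; it assumes the same constants and objects as the GetPeople function above, and validates each row while iterating instead of trusting the stored text blindly,

using (ICursor cursor = db.Query(TableName, columns, null, null, null, null, null))
{
    while (cursor.MoveToNext())
    {
        string rawDob = cursor.GetString(cursor.GetColumnIndexOrThrow(ColumnDob));

        DateTime dob;
        if (!DateTime.TryParse(rawDob, out dob))
            continue; // skip rows whose date cannot be parsed, instead of crashing

        people.Add(new Person
        {
            Id = cursor.GetInt(cursor.GetColumnIndexOrThrow(ColumnID)),
            Name = cursor.GetString(cursor.GetColumnIndexOrThrow(ColumnName)),
            Dob = dob
        });
    }
}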

Create and Delete queries

If you look at the top of the helper class, you will find the constant string values. Those are used to create and delete the tables. For their usage, please see the OnCreate and OnUpgrade functions in the DataStore class above.

One more thing: in SQLite, when you create a column that is declared “INTEGER PRIMARY KEY”, that column automatically becomes an alias for the ROWID, which is similar to AUTO_INCREMENT in most database systems. That is why we don’t need to manage anything else; SQLite itself will make sure that our records are all unique by incrementing the row id. Of course, there are a few caveats with this as well, because the value is not guaranteed to always “increment”, but it is guaranteed to “be unique”.
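
A handy consequence, sketched below with the same objects as InsertPerson above, is that the Insert function returns that generated row id, so the caller can learn the new primary key immediately,

// Insert returns the ROWID assigned to the new record, or -1 on error.
long newId = db.Insert(TableName, null, contentValues);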

Building the UI

One final step in this application is to actually create the UI of the application’s main activity. What I came up with for this simple application is the following interface. I hope no one is offended. 😀

Figure 5: Interface Design in Xamarin, in Visual Studio 2015.

A few of the configuration settings in the top left corner may also help you understand and build a similar interface, if you want to have one in your own application.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
        <TextView
            android:text="Enter the details of a person."
            android:textAppearance="?android:attr/textAppearanceMedium"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/textView1"
            android:layout_marginTop="10dp" />
       <EditText
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/name"
            android:hint="Enter the name"
            android:layout_marginTop="25dp" />
       <DatePicker
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/datePicker1"
            android:layout_marginBottom="0.0dp" />
       <Button
            android:text="Save"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/button1" />
</LinearLayout>

So, now we are ready to run and test our application in the emulator.

Running the Application in Emulator

At this point, we may walk away on our own paths, because I personally like to use the native Android SDK emulators instead of Xamarin’s Android emulator or the Visual Studio Emulator for Android. There are many reasons for this choice of mine, and I will share them in a later post sometime.

But for now, you can run the application in any of your favorite emulators, or even on your own mobile device.

Figure 6: Application running in emulator.

I have already filled the UI with the required information, and now I will be writing the backend code — because I am that awesome.

protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);

    // Set our view from the "main" layout resource
    SetContentView(Resource.Layout.Main);

    // I removed the default one, and added my own.
    Button button = FindViewById<Button>(Resource.Id.button1);
    button.Click += Button_Click;
}

private void Button_Click(object sender, EventArgs e)
{
    EditText nameField = FindViewById<EditText>(Resource.Id.name);
    DatePicker picker = (DatePicker)FindViewById(Resource.Id.datePicker1);

    string nameStr = nameField.Text;
    DateTime dob = picker.DateTime;

    if (string.IsNullOrEmpty(nameStr))
    {
        Toast.MakeText(this, "Name should not be empty.", ToastLength.Short).Show();
        return;
    }

    // Just save it.
    Person person = new Person
    {
        Name = nameStr,
        Dob = dob
    };
    PersonHelper.InsertPerson(this, person);
    Toast.MakeText(this, "Person created, fetching the data back.", ToastLength.Short).Show();

    var people = PersonHelper.GetPeople(this);
    person = people[people.Count - 1];
    Toast.MakeText(this, $"{person.Name} was born on {person.Dob.ToString("MMMM dd yyyy")}. \n {people.Count} people found.", ToastLength.Short).Show();
 }

The output, after wiring this code to the UI, was the following,

Figure 7: Person created toast message. 

Figure 8: Person details shown on the screen using Toast.

And it works perfectly, just the way we expect it to. You may have noticed that we convert the DateTime object to a standard string, and later parse that same string back into a DateTime object; the display is then formatted with a standard .NET date format string (“MMMM dd yyyy”).

Final words

Finally, in this post we tackled the problem of SQLite databases on Android using the Xamarin APIs. The process itself is pretty simple and straightforward, and you don’t need to go through a lot of pain. Just remember these three steps:

  1. Inherit your main data layer class from SQLiteOpenHelper.
  2. Handle the OnCreate and OnUpgrade (and other) functions to create the tables.
  3. Write the CRUDs and you are done.

Also, remember that you should execute the Delete function on the database object rather than writing a raw SQL DELETE query, and keep a few things in mind:

  1. A DELETE query does not release the underlying storage space, and neither does the Delete function.
  2. Your new items will reuse that free space anyway, but the free space is not released back to the operating system for other data and stuff.
  3. To release the space, you need to execute the VACUUM command. It repackages the database file, taking only the space it needs (see the snippet after this list).
  4. However, the Delete function does help you overcome SQL injection problems by using parameterized queries. Food for thought: can you figure out the parameter in the queries above?
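
As a minimal sketch of running VACUUM through the same objects used above,

SQLiteDatabase db = new DataStore(context).WritableDatabase;

// VACUUM rewrites the whole database file to reclaim free pages, so run it
// sparingly (for example, after bulk deletes), not on every operation.
db.ExecSQL("VACUUM");
db.Close();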

So, that will be it for this post. 🙂

Serverless computing with Azure Functions

While I was working on cloud computing and other similar technologies, I stumbled upon this new thing that I had not heard of before (until quite a few weeks ago), and I found it really interesting to learn about and to share my own understanding of this technology with you guys. I will try my best to keep the main idea as simple as possible, but explain everything from A to Z in such an easy way that it will look as if the concept was always in your mind.

So, let us begin with the basics of serverless computing and how it all began.

How it all began…

If you guys have been into IT and programming geekiness for the past few years, you must have witnessed how things changed: from physical machines to virtualization, to containers, and all the way to what we now call serverless computing. Whether this new tech trend came down the stream as I mentioned, or whether someone else knows better how it all began, is not the topic of concern; the main thing is that we now have another topic to cover before we actually start designing applications. You will see, in this post, how each way of deploying an application has pros and cons, and how serverless comes in to solve those problems, or to ruin everything entirely. There are some pros to serverless computing, and there are obviously quite a few cons as well.

The buzzword of serverless computing did not take long to spread, because we now have the Internet, and a new thing from the west reaches the east in no time at all. The question to ask is, “Do we understand what they wanted us to?” And this is the question that I am going to address in this post, so that all of us really understand the purpose, need and reason for serverless computing today.

I am going to use Azure Functions to explain the usage, the benefits and the “should I?” of serverless computing. The reason I chose Azure is that everyone is already covering a lot of Amazon Lambda stuff, so I didn’t want to use that; also, I am working on Azure a lot, so I thought this is it.

Understanding the term, “serverless”

Our primary focus is to explain the term “serverless”; the Azure usage is just there to give you an overview of how it actually works in the real world.

Figure 1: A simple yet clear difference in traditional vs serverless approach toward server management. 

The picture above was captured from this blog, and it provides a very good intuition of the difference between what we use now and what serverless offers. But this does not mean the image provides a 100% true and universal difference between the two worlds; sometimes the difference boils down to zero, such as in cases where you are going to use architectures like cloud offerings. In such scenarios, the total difference in cost is just how you selected the subscription. As we progress through the post, I will also draw the line in many other aspects of this methodology. Keep reading.

Just like the term cloud computing was mistaken for various things, the term serverless is also mistakenly understood as a platform where servers are no longer needed. As for cloud computing, you might want to see “Cloud Computing explained by Former IT Commissioner”, and consider the fact that we are in no way understanding the technology the way it is meant to be understood. The same goes for serverless: when I tried to search for it on Google, I even saw computer hardware being dumped into a dustbin, which made it clear to me that people are simply not understanding the term itself. For example, go to “Building a Serverless API and Deployment Pipeline: Part 1”, look at just the first image, and please remember to come back as soon as possible; you will get my point. Finally, I mean no offence to anyone mentioned in this paragraph; if you are the target of either one of the links, kindly get a good book, or contact me and I will love to teach you some computer science.

Figure 2: Old hardware dumped into a container. Kept here just in case that blog post becomes inaccessible or the author takes the image down. Still, no offence please.

So what exactly is serverless computing? The basic idea is to remove the complexity, or the time taken, of managing the servers, not the servers themselves. In this scenario we use a framework, platform or infrastructure where everything, even the booting, executing and terminating of our application, is managed by the provider itself. Our only duty is to write the code; the magic is based on the provider’s recipe. Just to give you a simple definition of serverless computing, let me state,

Serverless computing is a paradigm of computing environment in which a platform or infrastructure provider manages booting, scheduling, connection, execution, termination and response of the programs without needing the development teams to manage the control panel.

Phew, that wasn’t so tough, was it? The rest of the stuff, such as pricing models, languages or runtimes provided, continuous deployment or DevOps, is just extra topping where every provider’s offerings differ. That is one of the reasons I did not include any statement about pricing models or the languages to use; although in this post I will use C#, I am expecting to write another post that will cover Python or other similar interpreted languages.

Benefits of Serverless architecture

If you migrate your current procedures and communicators to the serverless paradigm, you can easily enjoy a lot of benefits, such as cost reduction, freedom from having an operations team manage a full-fledged server, and much more. At the same time, I also want to list a few of the disadvantages of serverless computing at the end of this section.

Cost model

OK, let me talk about what is perhaps the most interesting question on everybody’s mind: how is this going to change the way I am charged? Well, the answer is very much relative to what you are building, on what platform you are building it, how many customers you have, and how they interact with your application. In other words, there is no way to predict the amount you are going to pay for this. In the following sections, I will give you an overview of another special part of serverless programming that you can use to reason about the pricing model for your application.

NoOps

Do not be mad at me for adding a new term to computer science, if it has not been added already. Let me get to the point of dissolving the separate teams, such as development, operations, etc., and reaching a state where you can enroll “serverless” into the environment. In modern-day computing we have, let’s say, implemented DevOps, and we need a mindset where our teams work together to bring the product to market for users. A DevOps pipeline typically has the following tasks:

  1. Planning and startup, user stories or whatever it is being called.
  2. Source control or version control
  3. Development; any IDE, any language
  4. Testing; there are various tests, unit testing, load testing, integration testing etc.
  5. Building; DevOps support and encourage continuous integration
  6. Release; same, continuous deployment is recommended

From this, you can see that the developers only need to work in a few areas: planning, development and building. Version control systems can be managed by the IDE, with timers set to control when versions or updates are checked in each day, while updated versions are released to the market by the operations team, and so on. However, in the serverless world we no longer have to “release the software” to the market, and we do not need to manage any underlying server if our application is web based; thus we can remove the operations team, or merge them into the development team as an application developer team. I read a research guide a few days ago naming DevOps minus Ops as “AppOps”; however, I prefer the term NoOps, because the notion of them being a plain operations team is removed, and they now work with the application development team, focusing entirely on the code and the performance or uptime of the application, instead of on servers or virtual machines.

So, let’s recount those tasks with NoOps in the picture,

  1. Planning and startup
  2. Source control or version control
  3. Development
  4. Testing; I can strikethrough this one as well.
  5. Building
  6. Release

Clear enough, I believe. Before we get into another discussion, let me tell you why I think these are the way they are, and why I cut a few items from the list.

Planning and Startup

First of all, planning and startup do not make sense here at all. Please see the section below in which I talk about the “whens” of selecting a serverless architecture; it depicts when you should use the serverless approach over the current “modern” approaches. Once you have gone through that, you will understand that in a serverless architecture, the planning is done beforehand.

Thus, there is no need to sit around again and make the kanban board messy once more. If you are going to work on that board again, please go back and work in a DevOps environment; serverless is not for you. As mentioned below, serverless is for programs that do not run for 4-7 hours, but just for an instant, each time they are called, and they are entirely managed by the infrastructure provider.

Version control

Since this is directly targeted at development, the core portion of your application, this is a part of serverless architecture design. Almost every serverless platform that I have seen so far uses version control services, or provides DevOps best practices. Azure Functions provides features you can use to update the source code of your function; Amazon Web Services’ Lambdas allow you to use GitHub; and the same goes for almost all of them. Have a look at Google’s microservices offering, which supports GitHub-based deployment of Node.js applications and then manages how to run them.

In the cycle of development of a serverless application, this can come as the first step after every first cycle.

Development

Yes, although we removed the servers and other IT stuff from the scene, we still need developers to write the logic behind the application.

Testing

The reason I left this item active in the current scenario is that if you are using version control and then deploying the application’s code to the server, my own recommendation, and DevOps’ too, would be to test the code before going to the next step, as it might break something up ahead.

In a serverless architecture we are allowed to use source control, so before forwarding the code from there to the server, why shouldn’t you run some tests? In serverless, we don’t have to worry about the servers, but we surely do need to worry about whether the code ran, or whether it just broke every time.

In a serverless architecture, we do not technically build a full-fledged, full-featured application that takes care of everything; instead we simply write an “if this, then that” sort of application. If you understand the concept of the “Internet of Things”, then you can think of a serverless application as the hub that manages the communication and responds to an event, message or request. In such cases, it is not required to implement every test possible; instead, we can perform simple tests to ensure that the code does not break on the arrival of a request or on the dispatch of a response. These are just a few top-of-my-head assumptions and suggestions; based on what your serverless application does, you might need other tests, such as testing pattern matching or regular expressions.

I repeat, this is the most important part of serverless programming. I cannot stress this more, but you get my point: if you are a team moving toward the serverless paradigm to lift the response rate, then you first of all need to ensure that the program will be resilient to any input provided to it and will not break at all. And even if there are some issues, how does the program respond to those errors?

Building

I removed this step, but I could also have left it as it was. There are reasons for this: the infrastructure may provide you with support for publishing built programs. We are going to look at Azure Functions, and Azure supports publishing compiled programs that execute directly, instead of scripts that have to be interpreted each time the function executes.

But in many cases you do not need to take care of, or even worry about, the build process, since functions are small programs and they don’t require much of the stuff that typical applications require. You can simply publish the code, and it will execute in a moment. Serverless providers allow interpreted languages to be used as scripts too, such as batch files, Python scripts, or PHP files. But again, every provider has its own specification for this. Azure Functions supports any kind of program that you can write or build: you can upload it to the server, they will host it, and your users can connect to it the next time they send a request or interact with the application, whether mobile, web or any other IoT-based device.

Release

Submitting the code to version control is the only release we are going to worry about.

Winding up the basics

What I covered just scratches the surface of serverless architecture, and there is a lot more to it than this singular concept of what to do and what to leave out. But going deeper into the rabbit hole might be confusing and might get us off track as far as this post’s structure is concerned. So I will not go deeper; just to wind up the basics, let us go through a few things.

  1. Do not consider the serverless architecture a replacement for your current physical servers or container-based environments. Serverless functions are just used to take events and then trigger another server or virtual machine to act on each event, with the provided data. Nothing else.
  2. Before going serverless, you and your team must understand one fact: “It is your duty to test the code and the validity of your code, and it is the duty of your platform provider to ensure that the code runs the way it is intended to run, on the intended runtime.”
  3. Pay a lot of attention to testing, testing, testing. I repeat: other than development, if there is one thing that needs to be done, it is testing.
  4. The payment plans, in many cases, differ from one another. Sometimes you might choose a monthly plan; sometimes, if your users are not in the thousands, you can select the plan where you only pay for the time your function is executing, i.e. only for the resources used.
  5. Microsoft Azure Functions provides full support for various languages and runtimes: you can use C#, you can just as well choose JavaScript, and there are other ways of writing the function code, such as Python and PowerShell. You can also upload compiled code and run it. In other words, if it can execute, it can be a function.
  6. One final thing: a function is only a program or handler that runs for a short time (roughly 10ms-1.5s); if it takes longer than that, it will raise other errors and you will face other problems as well. Always keep the function code short, and terminate it as soon as possible by triggering other services or passing the data on to other service handlers. For example, you can trigger the function from an IoT hub and then use other services, such as SMS or SMTP services, to send the notifications, closing the function as soon as those services have been triggered and the data handed over (a sketch of this pattern follows this list).
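
As a hedged sketch of that last point, a delegating C# script function could look like the following; the binding names iotEvent and notificationQueueItem are assumptions that would have to be declared in function.json,

public static void Run(string iotEvent, out string notificationQueueItem, TraceWriter log)
{
    log.Info($"Received event: {iotEvent}");

    // Hand the payload straight to an output queue binding and return;
    // a separate worker picks it up and sends the actual notification.
    notificationQueueItem = iotEvent;
}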

In many ways, this architecture can help you out. But used badly, it is like shooting yourself in the foot. In my own experience, the architecture can also create a lot of problems for you, and it might not always be as helpful as you think. So, use it wisely.

Azure Function example

I didn’t want to write a complete guide on serverless architecture, because I might have more posts coming out on this as well, so let us go a bit deeper and have a look at the Azure Functions feature and see how we can write minimal serverless functions in Azure itself.

In this example, I am only going to show you enough to demonstrate how functions work; in future posts I might cover the HTTP bindings of functions, or other stuff such as DevOps practices, but for this post let me keep it really short and simple and cover the basics.

Basic function file hierarchy

At a minimum, a function requires an executable script (in any runtime) and a configuration file that specifies the input/output bindings of the function, timers, or other parameters needed for proper execution. That function.json file controls the execution of the function; it takes all the configuration settings, such as the accounts or services to communicate with. So, for instance, in a simple timer-based function, the following files are enough to control the function itself,

(For this timer example, the function folder contains just two files: the run.csx script and its function.json configuration.)

The code in the two files is as follows; first, the C# script,

using System;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}"); 
}

The JSON configuration file has the following content,

{
    "bindings": [
        {
           "name": "myTimer",
           "type": "timerTrigger",
           "direction": "in",
           "schedule": "0 */5 * * * *"
        }
    ],
    "disabled": false
}

Let me clarify their purpose a bit before moving any further. The “schedule” value is a six-field CRON expression (second, minute, hour, day, month, day-of-week); “0 */5 * * * *” fires at second 0 of every fifth minute, i.e. every five minutes.

Note: In another post, I will clarify the meaning and use of function.json file, and what attributes it holds. For now, please bear with me.

An executable script

The executable script can be C#, JavaScript or F#, or any other executable that can run. You can use Python scripts, as well as a compiled executable.

Configuration file

The function.json file holds the settings for your function. The code provided above is a very basic one; complex functions will have more bindings, more parameters, connection names or authentication modules, but you get the point.

In the file, the name and direction of each binding are compulsory. The other settings depend entirely on the type of binding being used: HTTP triggers have different settings, timer triggers require different settings, and so on and so forth, as illustrated below.
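
For comparison, a rough sketch of what the bindings for an HTTP-triggered function could look like; the binding names req and res here are illustrative,

{
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function"
        },
        {
            "name": "res",
            "type": "http",
            "direction": "out"
        }
    ],
    "disabled": false
}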

Executing

Azure provides a runtime for almost every executable platform: from PowerShell to Python to JavaScript to C# scripts (the code provided above is from a C# script file), and all the way to other scripts, such as batch, etc. The runtime also supports native executables; this is a part I have yet to explore a bit more, to explain which languages are supported in that scenario.

I will not go into the depths of this concept here; anyway, the output of this function looks like this,

2017-02-03T17:35:00.007 Function started (Id=3a3dfa76-7aad-4525-ab00-60c05b5a5404)
2017-02-03T17:35:00.007 C# Timer trigger function executed at: 2/3/2017 5:35:00 PM
2017-02-03T17:35:00.007 Function completed (Success, Id=3a3dfa76-7aad-4525-ab00-60c05b5a5404)
2017-02-03T17:40:00.021 Function started (Id=a32b986c-712e-4f30-84c9-7411e63b5356)
2017-02-03T17:40:00.021 C# Timer trigger function executed at: 2/3/2017 5:40:00 PM
2017-02-03T17:40:00.021 Function completed (Success, Id=a32b986c-712e-4f30-84c9-7411e63b5356)
2017-02-03T17:45:00.009 Function started (Id=c18c0eef-271d-4918-8055-64e3f31f953a)
2017-02-03T17:45:00.009 C# Timer trigger function executed at: 2/3/2017 5:45:00 PM
2017-02-03T17:45:00.009 Function completed (Success, Id=c18c0eef-271d-4918-8055-64e3f31f953a)
2017-02-04T10:27:54 No new trace in the past 1 min(s).

The timer trigger keeps running and keeps logging new events and process information. You will also notice that this is the same output that Node.js or F# programs would give you; the only difference between these 3 (only 3 at the moment) is their runtimes. The bindings and input/output of the functions are managed entirely by Azure Functions itself, and developers do not need to manage or take care of anything at all.

Wrapup

Since this was an introductory post on serverless programming and how Azure Functions can be used in this practice, I did not go much deeper into the procedures for writing function applications. But the post should be enough to give you an understanding of serverless architecture, what it means to be serverless, and how DevOps transitions to NoOps. In the following posts about serverless, I will walk you through writing serverless applications and then consuming them from client devices, whether Android or via native HTTP requests.

Finally, just a few things to consider:

  1. If your functions take a long time to execute, such as a minute or even 30 seconds, consider running the workload in a virtual machine or an App Service instead. A function should be like a handshake negotiator: it takes the data and passes it to a processor; it should not itself be involved in processing and generating the results.
  2. Your functions should be heavily tested. I put a huge amount of emphasis on this point: your functions are like the welcomers who warmly greet the incoming guests at your servers. If they fail at that, the data may never come back (the data being your users, events, or anything similar).
  3. Functions lean toward functional programming concepts, where functions are stateless: they do not process data based on any machine state, attribute, property, or the time at which they are executed. For example, an add function, when passed the input "1, 2, 3, 4, 5", will always return "15", since the result depends only on the input list; see the small sketch after this list.
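To make the statelessness point concrete, here is a trivial sketch in C#; no field, property, or clock is read, so the same input always yields the same output:

using System.Linq;

public static class Adder
{
    // The result depends only on the input list; no machine state is involved.
    public static int Add(params int[] numbers) => numbers.Sum();
}

// Adder.Add(1, 2, 3, 4, 5) always returns 15.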

As we start to develop our own serverless APIs and applications, we will also look at further ways to develop them, and to write application code in a way that does not affect the overall performance of our service.

Nonetheless, even if you never implement it in a production environment, serverless is a really interesting topic to understand and learn from a developer's perspective, as you are the one taking care of everything and there are no cables involved. 😉

Considerations for SQL Server on Linux

Introduction and Background

SQL Server has been released for Linux environments too, and after ASP.NET we can now use SQL Server to actually run our servers on Linux using Microsoft technologies, without having to purchase licenses and pay a fortune. But that is not quite the case, at least for a while… Why? I will walk you through these critical aspects of the SQL Server and Linux continuum in this post. By reading it, you will go through many areas of SQL Server on Linux and see whether you really need to try it out at the moment, or whether you should wait a while before diving any deeper into the platform.

I personally enjoy trying new things out… There are many out there who just try things out; I dissect things to share the in-depths of what everything has, what everything says and really is, and finally, whether you should consider it or not. That is the most important part. After all, it is you who is the main focus of my attention; I want to share with you what I find. By reading this post, you will get an idea about many things. SQL Server is in its initial versions on Linux, so this post is primarily feedback and an overview of SQL Server, not a rant on the product in any way.

SQL Server 2016 available on Linux

Now starts the fun part. I believe almost all of you are aware that SQL Server 2016 is available on Linux after serving Windows only for a long time, and that .NET Core is available on Linux too. If you have been reading my past articles and blogs, you know that I am a huge fan of the .NET Core framework and of how it helps Microsoft ship their products to other platforms.

Microsoft announced SQL Server availability on Linux quite a while ago, and many have started downloading and using the product.

sql-loves-linux_2_twitter-002-640x358
Figure 1: SQL Server “heart” Linux. No pun intended. 

Of course the benefit is for Linux users and developers, because they now have more options; whether they like SQL Server or not is a different matter. I personally enjoy the tools, such as SQL Server Management Studio, which is free software for managing databases with SQL Server. The tool can be used with several database engines, though only Microsoft's own engines.

There are many posts that give a good overview and introduction of SQL Server on Linux, and I am not going to dive deeper into them… as I have a Database Systems exam tomorrow. So, help yourself with a few of these:

  1. https://www.microsoft.com/en-us/sql-server/sql-server-vnext-including-Linux
  2. https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-get-started-tutorial
  3. https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/announcing-sql-server-on-linux-public-preview-first-preview-of-next-release-of-sql-server/ (This is a must read!)

Without further delay, let us go on to the installation of SQL Server on Linux section, and see how we can get our hands on those binaries.

Installation of SQL Server engine

SQL Server, like on Windows, ships separately from its tools. You install SQL Server first, and then you install the tools required to connect to the engine itself. Note: if you can use a .NET Core application, chances are you do not even need the SQL Server tools for Linux at the moment. You can execute the commands straight from the terminal, and I will show you how… In my experience, I found this method really agile and fit to demand.

So, fire up your machines and add the keys for packages that you are going to access, download and install.

$ curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

These commands set up the repositories on your machine, which you can then use to start downloading the packages.

screenshot-7008
Figure 2: curl executing.

Finally, refresh the sources and start the installation,

$ sudo apt-get update
$ sudo apt-get install mssql-server

You will be prompted to enter "y" and press the "Enter" key to continue the installation. This process downloads and installs the binaries on your system: it fetches the files and scripts that will later set up the system for your instance. Once the setup finishes, you can execute the following script to start the "installation" of SQL Server 2016 on your machine.

screenshot-7010
Figure 3: SQL Server installed on Ubuntu. 

$ sudo /opt/mssql/bin/sqlservr-setup

Authenticate this script, as it requires elevated permissions to set up the server's engine on your machine. The default hierarchy looks like this,

screenshot-7007
Figure 4: Files in the installation directory, ready to install engine. 

You can see the setup script in the list; execute it in the terminal and it will guide you through the setup.

screenshot-7011
Figure 5: SQL Server installer requesting user password during installation. 

You accept the terms, enter a password, and the server is installed. Simple as that. But do remember the password; you need it later to connect, as there is no Trusted_Connection property available in this one.

You can also test whether the service is running properly, or running at all. First of all, you can execute the following command to check the server's status,

$ systemctl status mssql-server

screenshot-7013
Figure 6: The screenshot shows that the response has everything required to see whether service is running or not.

In most operating systems you can find the same under processes too,

screenshot-7016
Figure 7: SQL Server collects telemetry information; the last process tells this. 
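If you prefer the terminal over a task manager, a quick process check works too; "sqlservr" is the engine's process name, as visible in the screenshot above:

$ ps aux | grep sqlservr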

Once everything is working we can move onwards to download and install tools.

Installing SQL Server 2016 Tools

On Linux, the choice of tools is quite limited. You may download and install a few helpers if you would like, such as the "sqlcmd" program. Here is a short overview of the steps required to install this package and use it for some tinkering.

The procedure is similar to what we did above: add the keys, then install the tools,

$ curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

Then finally use the following command,

$ sudo apt-get update 
$ sudo apt-get install mssql-tools

The installation won't take long, and you will get the packages that you can use later; I won't be showing these off a lot.

$ sqlcmd -S localhost -U sa

Do not enter the password on the command line at all; let the program prompt for the password itself, so that the input is not shown on the screen.

screenshot-7017
Figure 8: Result of SQL query in sqlcmd program.

This is enough, and I don't want to talk any more about this tool; in fact, I do not recommend it at all. Why? See the result of the SQL query below,

> SELECT serverproperty('edition')
> GO

screenshot-7018
Figure 9: Result of an SQL query, totally unstructured. 

The results are not properly structured, so they are confusing; you also have to scroll up and down a bit to read them. So, what to do instead? I am going to teach you how to write your own programs… I already did so for the .NET framework, and I thought I should do it again for .NET Core.

Writing SQL Server connector app in .NET Core

I am going to use .NET Core for this application development part; it is indeed my favorite platform. For the .NET framework, I wrote the same article previously, and you can read it here: How to connect SQL Database to your C# program, beginner's tutorial. That tutorial is a complete guide to SQL Server, targeted at beginners in the field. In this one, however, I am not going to explain the basics; I will just show you how to write the application and get the output.

Note: You should read that article for more explanation; I am not going to cover things in depth here. I will not be explaining System.Data.SqlClient either; it is covered in the post I referred to.

You start off by creating a new project, restoring it, and finally opening it in Visual Studio Code.

$ dotnet new
$ dotnet restore
$ code .

I skipped the directory creation step, assuming you know it yourselves. If you have no idea, consider reading a few of my previous posts on this topic, such as A quick startup using .NET Core on Linux.

After that, add a new file to your project, name it SqlHelper, and add a class with the same name in it,

using System;
using System.Data.SqlClient;

namespace ConsoleApplication
{
    class SqlHelper 
    {
        private SqlConnection _conn = null;

        public SqlHelper() 
        {
            _conn = new SqlConnection("server=localhost;user id=sa;password=<password>");
            _conn.Open();
            Console.WriteLine("Connected to server.");
        }
    }
}

You still need a simple SQL connection string for your database engine; there are websites that list connection string formats for you. I told you earlier that there is no Trusted_Connection variant here, because there are no Windows accounts we can utilize. Now, call this object from your main class and you will see the result.

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            new SqlHelper();
        }
    }
}

screenshot-7019
Figure 10: Connected to the server.

The results are promising: they show that we can connect to the server. Now we can move on to create something useful. Let us try to execute the commands we ran above in our newly developed system; update the code to start accepting SQL commands.

using System;
using System.Data.SqlClient;

namespace ConsoleApplication
{
    class SqlHelper 
    {
        private SqlConnection _conn = null;

        public SqlHelper() 
        {
            _conn = new SqlConnection("server = localhost; user id = sa; password = <password>");
            _conn.Open();
            Console.WriteLine("Connected to server.");
            _execute();
        }

        private void _execute() 
        {
            if (_conn != null ) 
            {
                Console.BackgroundColor = ConsoleColor.Red;
                Console.ForegroundColor = ConsoleColor.White;
                Console.Write("Note:");
                Console.BackgroundColor = ConsoleColor.Black;
                Console.ForegroundColor = ConsoleColor.Gray;
                Console.WriteLine(" You may enter single line queries only.");

                while(true)
                {
                    Console.Write("SQL> ");
                    string query = Console.ReadLine();
                    using (var command = new SqlCommand(query, _conn)) 
                    {
                        try {
                            using (var reader = command.ExecuteReader()) 
                            {
                                while (reader.Read()) 
                                {
                                    Console.WriteLine(reader.GetString(0));
                                }
                            }
                        } catch (Exception error) {
                            Console.WriteLine(error.Message);
                        }
                    }
                }
            }
        }
    }
}

This starts executing and asks the user to enter SQL commands; you can see how it works below,

screenshot-7020
Figure 11: Connecting and executing SQL queries on SQL Server. 

So, you can see that even a simple .NET Core program for writing and managing queries structures the output far better than the sqlcmd program on Linux. You can update it as needed.
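One sketch worth keeping in mind as you extend this helper: when queries embed user-supplied values, prefer SQL parameters over string concatenation to keep injection out. The table and parameter names below are my own examples,

using (var command = new SqlCommand(
    "SELECT name FROM sys.tables WHERE name = @name", _conn))
{
    // The value travels separately from the SQL text, so it cannot
    // change the shape of the query.
    command.Parameters.AddWithValue("@name", "SomeTable");
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0));
        }
    }
}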

Tips, tricks and considerations…

In this section I will talk about a few major concepts that you must understand before diving any deeper into SQL Server on Linux; if you don't know these, chances are you don't know SQL Server well either. So, pay attention.

1. Default Directory?

There are so many articles out there already, yet no one seems interested in telling the world where the server gets installed, where to find the files, or how to locate the database logs.

On Linux, the default directory is "/var/opt/mssql/<locked out>". You need admin privileges to access and read the contents of this directory, so I elevated mine and entered the directory.

$ sudo dolphin /var/opt/mssql/

"dolphin" is the file manager in KDE. Excuse the unclear display; I selected everything so that you can see what is inside.

screenshot-7021
Figure 12: Data in the “data” directory under “mssql” directory.

You can explore the rest of the directory on your own; I just wanted to let you know where things actually reside. On your system, in the future, this might change. But until then, enjoy. 😉

2. Edition Installed

On Windows, you are typically asked which edition you want to install. On Linux that was not the case; it installed one for us. The default (and, at the moment, only possible) edition is SQL Server Developer edition. This edition is free of cost, and it has everything that the Enterprise edition has.

To confirm, just execute,

SELECT serverproperty('edition')

Or have a look above at figure 9 or 11, where the edition is shown. All of Microsoft's recent .NET Core-based products are x64 only; ARM and x86 are left for the future, for now.

3. Usage Permissions

Yes, you can feel free to download, use, and try this product out, but remember that you cannot use it in production or with production data. This is the only difference between the Enterprise and Developer editions of SQL Server 2016.

If you head over to SQL Server 2016 editions, you can see the chart clearly.

screenshot-7022
Figure 13: No production rights are available in Developer edition.

Thus, you should not use this edition with production data. Until production use is allowed, stick with the Windows platform and download the Express edition; it has more than enough capacity for small projects.

4. Should you use it?

Finally, if you are a learner like me and want to try something new, then yes, of course, go ahead and try it out. You can install Ubuntu in VirtualBox to try it, if you'd like.

If you find something new, message me and we can chat on that. 🙂

Automating deployment of ASP.NET Core to Azure App Service from Linux

Introduction and Background

On operating systems such as Microsoft Windows, and with great tools like Visual Studio, it is very easy to upload and publish your web application projects to Microsoft Azure, but what about other platforms such as Linux? Microsoft has recently released PowerShell as an open source project, and that is a great option where you have developers who understand and can use PowerShell; if you don't, you have to fall back on traditional tools, which sometimes means performing the steps manually. In this post, I am going to cover teams that use Git repositories for their projects, and show how they can automate the publishing of their projects after each successful build. On Linux, we can create small scripts that run in the background, just like PowerShell or the Windows command prompt's batch files. Git programs are required for this to run. What I am going to show is how to execute the same Git commands, but in a manner where most of the repository's information is already set up, the commit message is updated, and the content is published to the Azure App Service. The purpose of this guide is to make the entire process as easy to understand as A, B, C. I will be using a real-world application to show you how to do what and where, instead of talking generally about many things at once.

Before moving onward, a little knowledge of how Git works is required, because some things below might get a bit technical; you must know how Git controls the versioning of files.

2color-lightbg2x
Figure 1: Git logo.

Secondly, you need to know how to use the "dotnet" command to create, build, and run .NET Core applications. If you don't know .NET Core, I will reference a few of my own beginner-friendly articles that you can read to learn more.

Finally, you need an active Azure account with a working subscription. If you don't happen to have one, you can get a free account with $200 of credit to try out all of Azure! Once these requirements are met, you can continue and put the article to some use.

Creating the application — entire part

This part has been shared and taught many times, on many occasions. I wrote an article on the same concept a few days ago, and I would like to refer you to it to learn how to create a new application with the web template in .NET Core: Creating and hosting ASP.NET Core application on Linux — Nothing Third-Party. Read that article for a complete overview and walkthrough of building and running the applications in a Linux environment. I will move onward from this step, because you must already be aware of the ways of creating your own applications.

Deployment of web application to Azure

Now comes the main part: this article focuses entirely on the deployment of applications to Microsoft Azure rather than on developing one. There will be some sections where I modify the application, but that is just to show how easy it is to redeploy it using automation, by executing a single command. It is not strictly required, but a very basic introduction to automation tools will help; you can get a good tutorial on such tooling from any software engineering guide.

Using git for deployment

Microsoft Azure offers many ways to deploy applications to the cloud, such as using Visual Studio and leaving the configuration to the tool itself. However, since Linux environments don't have Visual Studio, most of these tasks fall to source control tools (Visual Studio Code also uses the same Git programs to upload code to repositories). I will show you how to create a minimal script, an automation program, that manages all of these tasks for you.

On Linux, git typically comes preinstalled in most distributions, such as Ubuntu. If it is not available, you can easily install it using

$ sudo apt-get install git

or a similar command, such as one using yum. This installs and sets up git for your environment. Once you have git installed, you need to set a few things up. Git requires a name and an email address to record who made each change. For that, execute the following commands,

$ git config --global user.name "Eminem"
$ git config --global user.email "email@domain.com"

You should pass the name and email values that you hold — I used Eminem and a random email address. To check whether your personal configuration is correct, you can execute the following command,

$ git config --list
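With the values set above, the relevant lines of the output would look like,

user.name=Eminem
user.email=email@domain.com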

These are the required configurations that you must do before you can use git for any purpose at all.

Note: You also need to create a new web application service (Azure App Service) on Microsoft Azure so that you have somewhere to deploy the application. Since I do not want to go deep into that here, I would like you to watch this video of mine giving an overview of Microsoft Azure App Service (you may have to turn the volume up a bit).

Once you have that, you can continue onwards and actually set up the repositories to deploy the application. I used the following information while creating the new service, so if I use a name later, you will know where it came from.

screenshot-6477
Figure 2: Creating a new App Service.

screenshot-6478
Figure 3: Reviewing the information of App Service in Microsoft Azure.

Once it is created, you can visit the web service in your browser using the link provided; on the first visit, before any update, the following page is shown.

screenshot-6479
Figure 4: Initial page from the App Service.

It shows that you can deploy your web applications from many sources, using services such as FTP, Git, and so on. I will show you how to do this using Git… and there are a few other things that I would like to show you here:

  1. I will show you how to do this using git.
  2. I will also show you how to test the build integrity — whether the build succeeds or not.
  3. I will also show you how to check whether git has any changes to commit and push to the server.

These are a few of the checks I have built into the automation tool that will be helpful in this case. They are not covered in the tutorials available online, which are simple, straightforward ones that only show you how to do it, not whether it succeeded or was worth doing at all.

Setting up the local repository

On the machine side, the first thing to do is set up a local repository for git deployments. Even if you have already created a project, you can still create a repository for it and use it as the source for your application's content. The following commands do the job,

# If you have already created a project, skip the next two lines.
$ dotnet new -t web
$ dotnet restore

# The following creates a local repository
$ git init

This creates the project and sets up the repository under the .git directory. On my machine, the result was the following,

screenshot-6480
Figure 5: Creating a new project using .NET Core.

Not visible at the moment, but there is a special file called .gitignore, created by default, which contains really helpful ignore rules for git. We will look into it later as we progress.

Adding remote repositories

We can now set up a few remote repositories on our local machine so that when we need to publish the application, we don't have to enter the URL of the repository every time. Instead, we can just call the alias of the repository and deploy the content there. This tells the system where it needs to push the changes. First of all, we need to configure our online repository to allow Local Git Deployments from Microsoft Azure; only then can we send code to the repository online. For that, head over to Deployment options → Deployment source → Local git repository.

screenshot-6487
Figure 6: Deployment options in Microsoft Azure.

You can see that other options are available too; select what you need. I selected the third option so that we can publish the source code from our local machines to the servers in no time, without requiring any third-party vendors. You then need to set up the credentials, so that only you are in charge of pushing changes to the server and not anyone else.

screenshot-6488
Figure 7: Setting up credentials for the deployment account.

Once these things are set up, we can configure our local git program to deploy the source code from our local environment all the way up to Microsoft Azure. Stay with me. To do so, we execute the following command,

$ git remote add repoinazure https://<user>@<servicename>.scm.azurewebsites.net:443/<servicename>.git

Note a few things in the previous command: the repository URL always contains the name of your application service, both in the host and in the repository file name. You need to update that, and you also need to update the username in the URL to the one you used in the credentials (I used "afzaal" there). Once this is done, we can push our first version of the application to see how Azure behaves.

The following commands take care of that,

$ git add .
$ git commit -m "Some random commit message here..."
$ git push repoinazure master

In the previous commands, the first one adds the files to the repository's local tracking, the second commits the changes, and finally a push is made to "repoinazure" — of course, the remote where we publish the application.

screenshot-6489
Figure 8: Terminal asking for password before deployment. 

It asks for the password and then compresses the objects and deploys them using the git protocol.

screenshot-6490
Figure 9: Terminal showing the progress. 

The following is a screenshot of the process during the first deployment. Subsequent deployments don't take this much time and are much simpler.

screenshot-6491
Figure 10: Terminal showing the processes going on at Azure during the deployment.

Later, we will also create a shell script that automates the deployment for us. Once the push is done, the terminal shows the following results, and we know that Azure is set up for startup.

screenshot-6492
Figure 11: Application deployed to Azure.

Let us navigate to the website and see if things are the way we planned.

screenshot-6493
Figure 12: Application’s home page after it has been uploaded to Azure.

Voila! We have finally deployed the application to Azure. Next we need to deploy changes, and this is where I share my recommendations, ideas, and tips.

Development and redeployments

Let us add a simple controller to this web application, and then quickly deploy it using the script we will create for the task. In this section, I will show you a simple ASP.NET Core Web API used to see how quickly an update gets published live on the server. I start by adding a new controller file to the web application's source code and modifying it to return a few objects with some data.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace WebApplication.Controllers {

    [Route("api/people")]
    public class PersonApiController : Controller {
        private List<Person> _people { get; set; } = new List<Person> { 
             new Person { ID = 1, Name = "Afzaal Ahmad Zeeshan" },
             new Person { ID = 2, Name = "Bruce Wayne" },
             new Person { ID = 3, Name = "Marshall Bruce Mathers III" } 
        };
        public List<Person> GetPeople() {
            return _people;
        }

        // Rest of the stuff here...
    }

    public class Person {
        public int ID { get; set; }
        public string Name { get; set; }
    }
}

I just added a simple HTTP GET handler to show how easy it is to make changes to your live application using git deployment from a local repository.

Now for the magic trick: the following script manages most of our tasks and takes care of them before we deploy the application.

if [ $# == 0 ] 
    then 
    echo "Usage: automate.sh <git-remote-repository>"
    exit 1
fi

# Variables
commit_message="Deployment commit on $(date "+%B %dth, %Y at %H:%M %p")."

# Run the test
dotnet build

if [ $? == 0 ]
    then
        # Successful attempt.
        # Update the git.
        git add -A

        echo "Using commit message, \"$commit_message\"."
        git commit -m "$commit_message"
 
        # Check if anything was committed, or whether there were no changes.
        if [ $? == 0 ]
            then 
                # Files need to be updated
                # Update the azure's repository.
                echo "Connecting to server for git push..."
                git push $1 master
                exit 0
        else 
            echo "No changes to be pushed to server. Terminating."
            exit 1
        fi
        exit
    else 
        # There must have been an issue in the execution.
        echo "There were errors in building process, fix them and re-try."
        exit 1
fi

This script checks whether we passed the remote repository name, builds the project, and continues only if the build succeeds. It comes in really handy when working in a terminal-based environment, especially on Linux, using the git protocol for deployment. I recommend saving it as a file in your local repository and executing it from the terminal as a program. You can get the file from my GitHub repository too.

screenshot-6533
Figure 13: “autorun.sh” file available in the files.

Once this is done, simply make the script executable and run it to deploy; it goes through each step and finally deploys the application. It asks for the password — I did not build password handling into the script.
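Assuming you saved the script as automate.sh (the name its own usage message suggests) and kept the remote alias "repoinazure" from earlier, the run looks like,

$ chmod +x automate.sh
$ ./automate.sh repoinazure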

screenshot-6536
Figure 14: My script working and deploying the application to Azure.

So, let us run it now. Once it finishes, it shows that the application has been deployed successfully and can be accessed; the browser confirms this,

screenshot-6539
Figure 15: Result of the deployment.

For more information, we can head over to Azure to see how the deployment affects our current source code.

screenshot-6538
Figure 16: Deployments shown on the Azure.

You can clearly see that our most recent deployment is now shown as the active one, with the previous deployment set to inactive. We could dig deeper into these and change their status as needed, but I won't be doing that here. Remember: next time you want to publish the application, just use the script I provided.

Final words

Microsoft Azure provides very simple ways of deploying applications; there are many ways other than Git, too. OneDrive, GitHub, and other cloud storage services can easily be used for deployment.

However, my major concern was to show how you can use git in a script-based environment to deploy applications to Azure, and how each commit is tracked as it is pushed to the server. As you can see in this post, the commits are tracked in the cloud, and you can select the commit you are interested in and set it as the active one, for instance after the accidental deletion of a file.

A quick startup using .NET Core on Linux

I know what you may be thinking… that this is another rhetorical post by this guy. Yes, it is. 🙂 .NET Core is another product that I like, the first one being the .NET framework itself. .NET Core got started last year, when Microsoft said they were going to release the source code as a part of their open source environment and love. By following open source project ethics, Microsoft has been working very hard to bring global developers toward its environments, platforms, and services. For example, the .NET framework works on Windows, C# is used to build Windows Store applications, C# is also the primary language of Microsoft's web application development framework, ASP.NET, and much more. Microsoft's online cloud service is also programmed primarily in C#. These things are interrelated, so when Microsoft brings all of this out as open source, things start to get a lot better!

Everyone knows the background of .NET Core; if you don't, I recommend reading Microsoft's blog post, Introducing .NET Core. Their basic idea was to provide a framework that would work on any platform, with any application type, and with any framework targeted.

Introduction and Background

In this quick post, I will walk you through getting started with .NET Core and installing it on a Linux machine; I will also give my views on why to install .NET Core on a Linux machine instead of a Windows one. I will then walk you through the different steps of .NET Core programming and how to use a terminal-based environment to perform multiple tasks. But first things first.

I am sure you have heard of .NET Core and the other cool stuff that Microsoft has been releasing these years. Of all of these, the ones that I like are:

  1. C# 6
    • In my own opinion, the language looks cleaner now, and the sugar-coated features make it easier to write too. If you haven't yet, please read my previous blog post, Experimenting with C# 6's new features.
  2. .NET Core
    • Of course, who wouldn’t love to use .NET on other platforms.
  3. Xamarin acquisition
    • I’m going to try this one out tonight. Fingers crossed.
  4. The rest is just the "usual" stuff around.

In this post I am going to talk about .NET Core on Linux because I have already talked about the C# stuff.

Screenshot (898)
Figure 1: .NET Core is a cross-platform programming framework by Microsoft.

Why Linux for .NET Core?

Before I dive any deeper, as I said, here are a few of my reasons for using .NET Core on Linux and not on Windows (yet!). You don't have to take them seriously or always follow them; they are just my views and opinions, and you are free to have your own.

1. .NET Core is not yet complete

It will take a while before .NET Core is released as a stable public version. Until then, using this bleeding-edge technology on your main machine is not a good idea; sooner or later you will consider removing the binaries. In such cases, it is always better to use it in a virtual machine somewhere. I have set up a few Linux (Ubuntu-based) virtual machines for my development purposes, and I recommend that you do the same:

  1. Install VirtualBox (or any other virtualization software that you prefer; I like VirtualBox for its simplicity).
  2. Set up an Ubuntu environment.
    • Remember to use Ubuntu 14.04; later versions are not supported yet.
  3. Install the .NET Core on that machine.

If something goes wrong, you can easily revert to where you want to be. If the code plays dirty, you don't have to worry about your data, or your machine, at all.

2. Everything is command-line

Windows is not your OS if you like command-line interfaces. I am waiting for Bash to be introduced on Windows as it is on Ubuntu; until then, I am not going to use anything command-line driven in my Windows environment. In Linux, however, almost everything has a command-line interface, and since the .NET Core toolchain is command-driven, I have enjoyed using it on Linux more than on Windows.

Besides, on command-line interface, creating, building and running a .NET Core project is as simple as 1… 2… 3. You’ll see how, in the sections below. 🙂

3. I don’t use Mac

Besides these points, the only reason left for not covering Mac OS for .NET Core is that I don't use it. You are free to use Mac OS for .NET Core development too; .NET Core does support it, it's just that I don't use that development environment. 😀

Installation of .NET Core

Although the intent is that, soon, the command will be as simple as:

$ sudo apt-get install dotnet

A similarly simple command would serve on Mac OS X and on operating systems other than Ubuntu and Debian derivatives. But while .NET Core is still in development, that is not going to happen; until then, there are other steps you must perform to install .NET Core on your machine. I'd like to skip this part and let Microsoft give you the steps to install the framework:

Installation procedure of .NET Core on multiple platforms.

After this step, do remember to make sure that the platform has been installed successfully. In almost any environment, you can run the following command to confirm.

> dotnet --help

If .NET Core is installed, this simply shows the version and other help material on the screen. Otherwise, go back and make sure the installation procedure did not run into any problems. Before I get started, I want to show you the help material provided with the "dotnet" command,

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet --help
.NET Command Line Tools (1.0.0-preview1-002702)
Usage: dotnet [common-options] [command] [arguments]

Arguments:
 [command]      The command to execute
 [arguments]    Arguments to pass to the command

Common Options (passed before the command):
 -v|--verbose   Enable verbose output
 --version      Display .NET CLI Version Number
 --info         Display .NET CLI Info

Common Commands:
 new           Initialize a basic .NET project
 restore       Restore dependencies specified in the .NET project
 build         Builds a .NET project
 publish       Publishes a .NET project for deployment (including the runtime)
 run           Compiles and immediately executes a .NET project
 test          Runs unit tests using the test runner specified in the project
 pack          Creates a NuGet package

So, you get the point: we are going to look deeper into the commands that dotnet provides us with.

  1. Creating a package
  2. Restoring the package
  3. Running the package
    • Build and Run both work, run would execute, build would just build the project. That was easy.
  4. Packaging the package

I will slice the material down to make it clearer and simpler to understand. One thing you may have noticed is that the option to compile the project natively is not provided as an explicit command in this set of options. As far as I can tell, this support has been removed until further notice; until then, you pass a framework type to compile and build against.

Using .NET Core on Linux

Although the procedure in both environments is alike, I am going to show it on Ubuntu. I will also explain the purpose of these commands and the various options that .NET Core provides. I don't want you to feel left out: some paragraphs here are addressed to the Microsoft team working on the .NET project, as I will be offering a few suggestions for them too. But don't worry, I'll make sure the content remains on topic.

1. Creating a new project

Since framework development is not yet near release, I think I should pass on my own suggestions for the project too. At the moment, .NET Core supports creating a new project in the current directory and uses the directory's name as the default project name (where one is required). Beginners in .NET Core are often only aware of the commands that come right after "dotnet"; however, those commands accept further parameters and optional values, such as:

  1. Type of project.
    • Executable
      • At the moment, .NET only generates DLL files as output. Console is the default.
    • Dynamic-link library
  2. Language to be used for programming.

To create a new project, at the moment you just have to execute the following command:

$ dotnet new

In my own opinion, if you are just learning, this is enough for you. However, if you execute the following command, you will get an idea of how to modify the way the project is created, including the programming language being used.

$ dotnet new --help
.NET Initializer

Usage: dotnet new [options]

Options:
 -h|--help Show help information
 -l|--lang <LANGUAGE> Language of project [C#|F#]
 -t|--type <TYPE> Type of project

The options are, as the name says, optional. However, you can pass those values if you want to create a project with a different programming language, such as F#. You can also change the type; currently, however, only console applications are supported.
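For instance, going by the help output above, creating an F# console project should be as simple as,

$ dotnet new --lang F#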

I already had a directory set up for my sample testing,

Screenshot (899)
Figure 2: Sample directory open. 

So, I just created the project there. I didn't mess around with anything at all; you can see that .NET itself creates the files.

Screenshot (900)
Figure 3: Creating the project in the same directory where we were.

Excuse the fact that I created an F# project; I did that to show that the language to be used in the project can be passed too. I then removed it and created a C# program-based project instead. This is a minimal console application.

In .NET Core applications, every project contains:

  1. A program’s source code.
    • If you were to create an F# project, the program would be written in the F# language; with the default settings, the program is a C# program.
  2. A project.json file.
    • This file contains the settings of the project and the dependencies that need to be maintained to build it.

However, before you can run, you need to restore and build the project, and that is what we are going to do in the next steps.

2. Restoring the project

Restoring does not mean we deleted anything; it simply means that the project's dependencies need to be resolved before we can actually build and run it. This can be done using the following command,

$ dotnet restore

Note: Make sure you are in the working directory where the project was created.

This command does the job of restoring the packages. If you try to build the project before restoring the dependencies, you will get the following error message:

Project {name} does not have a lock file.

.NET Core uses the lock file to look up the dependencies that a project requires and then starts the build and compilation process. In other words, this file is required before your project can be built, to ensure it can actually execute.

After this command gets executed, you will get another file in the project directory.

Screenshot (902)
Figure 4: Project.lock.json file is now available in the project directory.

And so, finally, you can continue to building the project and running it on your own machine with the default settings and setup.

3. Building the project

As I have already mentioned above, native compilation support has been removed from the toolchain; I think Ubuntu developers may have to wait a while, and it may be supported only in Windows environments until then. However, we can still execute the project as usual, and we can perform the other operations too, such as publishing and NuGet package creation.

You can build a project using the following command,

$ dotnet build

Remember that you need to have restored the project first. The build command does the following things for you:

  1. It would build the project for you.
  2. It would create the output directories.
    • However, as I am going to talk about this later, you can change the directories where the outputs are saved.
  3. It would prompt if there are any errors while building the project.

We have seen how the previous commands work, so let's slice this one into bits too. This command, too, supports manipulation; it provides you with optional flags such as:

  1. Framework to target.
  2. Directory to use for output binaries.
  3. Runtime to use.
  4. Configuration etc.

This way, you can automate the build process by passing these parameters to the dotnet command, as sketched below.
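As a sketch — the flag names here follow the preview toolchain's conventions, so check dotnet build --help on your version before relying on them:

$ dotnet build --framework netcoreapp1.0 --configuration Release --output ./bin/custom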

4. Deploying the project

Instead of saying "running the project", I think it would be better to say "deploying the project"; one way or the other, a running project is deployed on the machine before it can run. First I will show the project running; later I will show how to create NuGet packages.

To run the project, whether you have already built it or not, you can just execute the following command:

$ dotnet run

This command also builds the project if it has not been built yet. Now, if I execute that command in the directory where the project resides, the output on my Ubuntu machine is something like this,

Screenshot (904)
Figure 5: Project output in terminal.

As you can see, the project works and displays the message "Hello World!" in the terminal. This is a simple console project with a single console output statement in C#; that is why the program behaves this way.

Creating NuGet packages

Besides this, I would like to share how you can create a NuGet package from this project using the terminal. NuGet packages have been on the scene for a very long time, and they were previously very easy to create in the Visual Studio environment. The process is even simpler in this framework: you just have to execute the following command,

$ dotnet pack

This command packs the project into a NuGet package. I would like to show you the output it generates so that you can understand what it does at each step.

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet pack
Project Sample (.NETCoreApp,Version=v1.0) was previously compiled. 
Skipping compilation.
Producing nuget package "Sample.1.0.0" for Sample
Sample -> /home/afzaal/Projects/Sample/bin/Debug/Sample.1.0.0.nupkg
Producing nuget package "Sample.1.0.0.symbols" for Sample
Sample -> /home/afzaal/Projects/Sample/bin/Debug/Sample.1.0.0.symbols.nupkg

It builds the project first; if the project was already built, it skips that step. Once that is done, it creates a new package and generates the file that can be published to the galleries. The pack command allows you to perform some other functions too, such as updating the version number or specifying the configuration. For more, have a look at its help output,

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet pack --help
.NET Packager

Usage: dotnet pack [arguments] [options]

Arguments:
 <PROJECT> The project to compile, defaults to the current directory. 
 Can be a path to a project.json or a project directory

Options:
 -h|--help Show help information
 -o|--output <OUTPUT_DIR> Directory in which to place outputs
 --no-build Do not build project before packing
 -b|--build-base-path <OUTPUT_DIR> Directory in which to place temporary build outputs
 -c|--configuration <CONFIGURATION> Configuration under which to build
 --version-suffix <VERSION_SUFFIX> Defines what `*` should be replaced with in version 
  field in project.json

Note the final option, the version suffix. It can be used to update the version based on the build, and there is also a project setting that controls how the build process updates the version count. This is a widely used method for deriving the version number from the build that produced the binary outputs.

The NuGet package file was saved in the default output directory.

Screenshot (905)
Figure 6: NuGet package in the default output directory.

The rest is easy: you can just upload the package from here to the NuGet galleries.

Final words

Finally, I was thinking I should publish a minimal ebook based on this guide. The content was getting longer and longer and I was getting more and more tired; however, since this exercise gave me ideas about many things, I think I can write a comparison of .NET Core on Windows and Linux, and I have enough time to do that.

Secondly, there are a few suggestions for end users that I want to make.

  1. Do not use .NET Core for commercial software; it is going to change soon.
  2. .NET Core is a bleeding-edge technology and, since there is little documentation yet, you are going to spend a lot of time learning and asking questions. That is why, if you are considering learning the .NET framework, learn the .NET framework and not .NET Core; the .NET framework has a great amount of good resources, articles, tips, and tutorials.
  3. If you want cross-platform features and support as good as the .NET framework's, my recommendation is the Mono Project over .NET Core, since the latter is not yet mature.

I have a few words of feedback on the framework itself.

  1. It is going great. Period.
  2. Since this is a cross-platform framework, features must not be Windows-only, such as the "dotnet compile --native" one; they must be made available on every platform.

At last, the framework is a great one to write programs for. I enjoyed programming for .NET Core because it doesn't require much effort. Plus, the benefit of multiple programming languages is still there. Besides, Visual Studio Code is also a great editor to use, and the C# extension makes it even better. I will be writing a lot about these features, since I am free from all the academic stuff these days. 🙂

See you in the next post.

Highlighting the faces in uploaded image in ASP.NET web applications

Introduction and Background

Previously, I was thinking that since we can find the faces in an uploaded image, why not create a small module that automatically finds the faces and highlights them when we load the images on our web pages. That turned out to be a pretty easy task, and I would love to share what I did and how. The entire procedure may look a bit complex, but trust me, it is really simple and straightforward. You may, however, need to know a few frameworks beforehand, as I won't be covering most of the in-depth material, such as the computer vision used to perform actions like face detection.

In this post, you will learn the basics of many things that range from:

  1. Performing computer vision operations; the most basic one, finding the faces in images.
  2. Sending and receiving content from the web server based on the image data uploaded.
  3. Using the canvas HTML element to render the results.

I won't be guiding you through every detail of computer vision, or the processes required to perform facial detection in images; for that, I would ask you to go and read this post of mine, Facial biometric authentication on your connected devices. In this post, I'll cover the basics of how to detect faces in an ASP.NET web application, how to pass the characteristics of the faces found in the images to the client, and how to use those properties to render the faces on the image in the canvas.

Making the web app face-aware

There are two steps we need to perform to make our web application face-aware and to determine whether there is a face in the images being uploaded. There are many uses for this, and I will list a few of them in the final words section below. The first step is to configure our web application to consume the image and pass it on for processing; our image processing toolkit finds the faces and their locations. The results are then forwarded to the client side, where the client itself renders the face locations over the image.

In this sample, I am going to use a canvas element to draw the boxes, though the same can be done with multiple div containers holding span elements, rendered over the actual image with their positions set to absolute.

First of all, let us program the ASP.NET web application to get the image, process it, find the faces, and generate the response to be consumed on the client side.

Programming file processing part

On the server side, we will preferably use the Emgu CV library, which has been of great use among the C# wrappers for the OpenCV library. I will be using it to program the face detector in ASP.NET. The benefits are:

  1. It is a very light-weight library.
  2. The entire processing can take less than a second or two, and the views are generated moments later.
  3. It is better than most other computer vision libraries, as it is based on OpenCV.

First of all, we need to create a new controller in our web application to handle the requests for this purpose; we will later add the POST method handler to the controller action to upload and process the image. You can create any controller; I used the name "FindFacesController" in my own application. To create a new controller, follow: right-click the Controllers folder → select Add → select Controller…, then name it as you like and proceed. By default, this controller is given an Index action, and a folder with the same name is created in the Views folder. First, open the Views folder to add the HTML content, for which we will later write the backend part. In this example project, we need an HTML form where users can upload files to the server for processing.

The following HTML snippet would do this,

<form method="post" enctype="multipart/form-data" id="form">
  <input type="file" name="image" id="image" onchange="this.form.submit()" />
</form>

You can see that this HTML form is enough in itself. There is an event handler attached to the input element that causes the form to submit automatically once the user selects an image; that is because we only want to process one image at a time. I could have written a standalone function, but that would have added nothing, and the inline call is a better way to do this.

Now for the ASP.NET part, I will be using the HttpMethod property of the Request to determine if the request was to upload the image or to just load the page.

if(Request.HttpMethod == "POST") {
   // Image upload code here.
}

Now, before I actually write the code, I want to explain what we want to do in this example. The steps to be performed are as below:

  1. We save the image that was uploaded in the request.
  2. We then take the uploaded file and process the image using Emgu CV.
  3. We get the locations of the faces in the image and serialize them to a JSON string using the Json.NET library.
  4. The later part is taken care of on the client side using JavaScript code.

Before I actually write the code, let me first show you the helper objects I created. I needed two: one for storing the locations of the faces, and the other to perform the facial detection in the images.

// Namespaces these helpers rely on (assuming the Emgu CV 3.x naming; adjust to your version):
using System.Collections.Generic;
using System.Drawing;
using System.Web;
using Emgu.CV;

public class Location
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Width { get; set; }
    public double Height { get; set; }
}

// Face detector helper object
public class FaceDetector
{
    public static List<Rectangle> DetectFaces(Mat image)
    {
        List<Rectangle> faces = new List<Rectangle>();
        var facesCascade = HttpContext.Current.Server.MapPath("~/haarcascade_frontalface_default.xml");
        using (CascadeClassifier face = new CascadeClassifier(facesCascade))
        {
            using (UMat ugray = new UMat())
            {
                CvInvoke.CvtColor(image, ugray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);

                //normalizes brightness and increases contrast of the image
                CvInvoke.EqualizeHist(ugray, ugray);

                //Detect the faces in the gray scale image and store their locations as rectangles
                Rectangle[] facesDetected = face.DetectMultiScale(
                                                ugray,
                                                1.1,
                                                10,
                                                new Size(20, 20));

                faces.AddRange(facesDetected);
            }
        }
        return faces;
    }
}

These two objects are used, one during the processing and the other by the client-side code to render the boxes on the faces. The action code I used is as below:

public ActionResult Index()
{
    if (Request.HttpMethod == "POST")
    {
         ViewBag.ImageProcessed = true;
         // Try to process the image.
         if (Request.Files.Count > 0)
         {
             // There will be just one file.
             var file = Request.Files[0];

             var fileName = Guid.NewGuid().ToString() + ".jpg";
             file.SaveAs(Server.MapPath("~/Images/" + fileName));

             // Load the saved image, for native processing using Emgu CV.
             var bitmap = new Bitmap(Server.MapPath("~/Images/" + fileName));

             var faces = FaceDetector.DetectFaces(new Image<Bgr, byte>(bitmap).Mat);

             // If faces were found.
             if (faces.Count > 0)
             {
                 ViewBag.FacesDetected = true;
                 ViewBag.FaceCount = faces.Count;

                 var positions = new List<Location>();
                 foreach (var face in faces)
                 {
                     // Add the positions.
                     positions.Add(new Location
                     {
                          X = face.X,
                          Y = face.Y,
                          Width = face.Width,
                          Height = face.Height
                     });
                 }

                 ViewBag.FacePositions = JsonConvert.SerializeObject(positions);
             }

             ViewBag.ImageUrl = fileName;
         }
    }
    return View();
}

The code above does the entire server-side processing of the images we upload: it finds and detects the faces, and then returns the results for the view to render in HTML.
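
One caveat worth noting before moving on: the action saves whatever was uploaded under a ".jpg" name without inspecting it. In a real application you would validate the upload first; the following is a minimal, hypothetical guard (the helper name and checks are mine, not part of the sample project):

// Requires: using System.Linq; using System.Web;
private static bool LooksLikeImage(HttpPostedFileBase file)
{
    // Cheap sanity checks only; the content type is supplied by the
    // client and can be spoofed, so do not rely on it for security.
    var allowed = new[] { "image/jpeg", "image/png" };
    return file != null
           && file.ContentLength > 0
           && allowed.Contains(file.ContentType);
}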

Programming client-side canvas elements

You could show the detected faces in a modal popup; I drew on a canvas element on the page itself, because I just wanted to demonstrate this coding technique. As we have seen, the controller action sets a few ViewBag properties that we can use in the HTML content to render the results of the previous actions.

The View content is as follows:

@if (ViewBag.ImageProcessed == true)
{
    // Show the image.
    if (ViewBag.FacesDetected == true)
    {
        // Show the image here.
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement" style="display: none; height: 0; width: 0;" />

        <p><b>@ViewBag.FaceCount</b> @if (ViewBag.FaceCount == 1) { <text><b>face</b> was</text> } else { <text><b>faces</b> were</text> } detected in the following image.</p>
        <p>A <code>canvas</code> element is being used to render the image and then rectangles are being drawn on the top of that canvas to highlight the faces in the image.</p>

        <canvas id="faceCanvas"></canvas>

        <!-- HTML content has been loaded, run the script now. -->
        <script>
            // Get the canvas.
            var canvas = document.getElementById("faceCanvas");
            var img = document.getElementById("imageElement");
            canvas.height = img.height;
            canvas.width = img.width;

            var myCanvas = canvas.getContext("2d");
            myCanvas.drawImage(img, 0, 0);

            @if (ViewBag.ImageProcessed == true && ViewBag.FacesDetected == true)
            {
            <text>
            img.style.display = "none";
            var facesFound = true;
            var facePositions = @Html.Raw(ViewBag.FacePositions); // emitted as a JSON array literal
            </text>
            }

            if (facesFound) {
                // Move forward.
                for (var face in facePositions) {
                    // Draw the face.
                    myCanvas.lineWidth = 2;
                    myCanvas.strokeStyle = selectColor(face);

                    console.log(selectColor(face));
                    myCanvas.strokeRect(
                                 facePositions[face]["X"],
                                 facePositions[face]["Y"],
                                 facePositions[face]["Width"],
                                 facePositions[face]["Height"]
                             );
               }
           }

           function selectColor(iteration) {
               // Math.floor(Math.random()) is always 0, so for the first face
               // pick a random non-zero value instead to avoid a plain black box.
               if (iteration == 0) { iteration = Math.floor(Math.random() * 5) + 1; }

               var step = 42.5;
               var randomNumber = Math.floor(Math.random() * 3);

               // Select the colors.
               var red = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
               var green = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);
               var blue = Math.floor((step * iteration * Math.floor(Math.random() * 3)) % 255);

               // Change the values of rgb, randomly.
               switch (randomNumber) {
                   case 0: red = 0; break;
                   case 1: green = 0; break;
                   case 2: blue = 0; break;
               }

               // Return the string.
               var rgbString = "rgb(" + red + ", " + green + ", " + blue + ")";
               return rgbString;
           }
        </script>
    }
    else
    {
        <p>No faces were found in the following image.</p>

        // Show the image here.
        <img src="~/Images/@ViewBag.ImageUrl" alt="Image" id="imageElement" />
    }
}

This is the client-side code, and it executes only after an image has been uploaded. Now let us review what our application is capable of doing at the moment.

Running the application for testing

Now that we have developed the application, it is time to actually run it and see whether it works as expected. The following are the results generated from multiple images that were passed to the server.

Screenshot (427)

The above image shows the default HTML page presented to users when they first visit. They then upload an image, and the application processes its content. The following images show the results.

Screenshot (428)

I uploaded my own photo; it found my face, and as shown above in bold text, "1 face was detected…". It also renders a box around the area where the face was detected.

Screenshot (429)

The article would never have been complete without Eminem being a part of it! 🙂 Love this guy.

Screenshot (426)

Secondly, I wanted to show how the application handles multiple faces. At the top it shows "5 faces were detected…", and it renders 5 boxes around the areas where faces were found. I also happen to like the photo, as I am a fan of Batman myself.

Screenshot (430)

This image shows what happens when the image does not contain a detectable face (there are many reasons a face might be missed, such as hair covering the face, glasses, etc.). Here I used three company logos, and the system reported there were no faces in the image. It still rendered the image, but drew no boxes, since there were no faces to mark.
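
If faces are being missed on your own test images, the DetectMultiScale parameters in the helper are the first thing to experiment with. As a rough sketch (the right trade-off depends entirely on your images), a more permissive variant of the call in FaceDetector.DetectFaces would look like this, at the cost of more false positives and slower processing:

Rectangle[] facesDetected = face.DetectMultiScale(
                                ugray,
                                1.05,              // finer pyramid steps: slower but more thorough
                                5,                 // fewer required neighbors: keeps weaker detections
                                new Size(10, 10)); // allow smaller faces to be detected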

Final words

This was it for this post. This method is useful in many facial detection applications, for example anywhere you want users to upload a photo of their face rather than, say, a photo of scenery. This is an ASP.NET web application project, which means you can use this code in your own web applications too. The library usage is also simple and straightforward, as you have already seen in the article above.

There are other uses too, such as analyzing people's faces to detect their emotions, locations and other parameters. In those cases, you can run this detection first to determine whether there are any faces in the image at all.
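
Since the helper above is self-contained, that gate is only a couple of lines. A minimal sketch reusing the FaceDetector object from earlier in this post:

// Sketch: run the heavier analysis only when at least one face is present.
var faces = FaceDetector.DetectFaces(image);
if (faces.Count > 0)
{
    // Hand the detected regions to the next stage (emotion analysis, etc.).
}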