
Common Problems in Xamarin.Android and their solutions

Introduction and Background

This is my third or fourth post on Xamarin, and this one tackles a fairly critical subject, so if you have been pulling your hair out these days, this post is for you. Its purpose is to collect the most commonly faced issues and attach their solutions. I personally faced quite a lot of issues with Xamarin. The problem was not that I was a beginner in Xamarin; the major problem was that everything had been working before, and right after a reset everything was giving me an error. Even worse, there were no real solutions at all. Every solution was like, "Reinstall Xamarin", "Reinstall the Android SDK", "Remove the space in the Android SDK path". Even the most authentic resources were not providing a working, or at least sensible, solution. So I thought I should write a complete post sharing reality-based solutions, and not just the "install this, install that" sort of advice that does not help anyone at all.

Installation of Xamarin

Installation of Xamarin itself is genuinely confusing and problematic for beginners. If you have ever worked with Xamarin before, then you might know where all the plugs go, but for a beginner the experience is painful, and many give up at the start, let alone in the middle.

The default location where Xamarin installs the Android SDK is "C:\Program Files (x86)\Android\android-sdk". A few things to consider here:

  1. There is no problem with the space in the path. Spaces are a problem only when you are going to program using the NDK (Native Development Kit); otherwise they do not cause any trouble. I am also using a path with a space in it and it causes no issues.
  2. Before anything at all, I would recommend that you run the Android SDK Manager:
    1. Either by running the "android.bat" file as Administrator.
    2. Or by going to Visual Studio and running the Android SDK Manager from the Android toolbar. It will request admin access itself.
    3. Note that you need to run the SDK Manager with admin access, otherwise it will not work. Worse, it will give you "Unable to move directory…" errors and later it will delete the "android.bat" file, and you will have to download and install the Android SDK once again. Painful.
  3. It can be helpful to use the Android SDK installed by Android Studio when you are short on disk space, but my own recommendation is not to do that. The reason is that if Xamarin messes its SDK up, Android Studio keeps running fine. Plus, you do not need everything that Android Studio installs just for Xamarin.

Figure 1: Android SDK launcher.

The installation typically installs Android SDK levels 19, 21 and 23 (the 23rd can be selected from the installation options). You should not just go ahead and start installing everything, even if someone asks you to; there is no point in doing that.

Figure 2: Android SDKs and tools provided.

Later in this post, I will show you which of the frameworks are required and necessary for your project to work, and which of them are not required at all.

Installation of Android SDK

Another important point: in various places online you will find people saying that you should install every SDK from the minimum one required all the way up to the latest one. No, that is not the solution, and it is not required at all.

For example, have a look here,

Figure 3: Android target platforms.

In this case, which SDKs do I need to install? If you said "all the way from 16 to 23", you are wrong. The only SDK that is required and compulsory for you to have is the "Latest Platform". Now, the concept of the latest platform is a bit different here. You cannot expect the latest Android version released by Google to also be the latest one in Xamarin; Xamarin does not work under Google, and its APIs arrive a bit later than Google's.

I have only installed the SDK platforms for 25 and 23. I installed 25 because everybody was asking me to install everything, which did not help in any case. So, what you need to do is install only the Latest Platform, plus any platform that you need to test your application on; in the case of emulators and so on.

One more thing: the latest platform will differ by the time you read this. At the time of writing, the latest platform in Xamarin was Android 6.0 Marshmallow, even though Nougat had been released months earlier, and in Android Studio I was already targeting API level 25. This means the Android Studio API level and the Xamarin level do not always meet each other, so you need to check again which of them you are going to support.

Emulators in Xamarin

Xamarin, if installed with Visual Studio, comes shipped with the Visual Studio Emulator for Android. It requires you to have Hyper-V installed and active, meaning you can only use it on a Pro edition of Windows; on other editions, such as Home, you cannot run it. In that case you have to fall back on Genymotion, or on other products such as the Android emulator provided by Xamarin itself. The benefits of using these emulators are:

  1. They come pre-shipped and preconfigured.
  2. All you need is a Pro edition of Windows (in the case of Visual Studio's emulator), or a commercial account, such as a Genymotion unlimited account.

However, my choice is a bit different: I prefer using the same Android emulators that I used with Android Studio. There are various benefits to this,

  1. You get the latest API levels beforehand. My latest platform (as seen above) is Android 6.0; however, I get to run the application on Android 7.0 using the emulators from Android Studio. Fun, eh?
  2. You can use Intel HAXM, and it works even if you do not have a Pro edition of Windows. However, a CPU with virtualization technology is required; an Intel CPU, of course.
  3. Visual Studio automatically detects the running Android device, and you can push your application to the running emulator and run it in super-fast mode.

However, if you want to run your application on one of Android Studio's emulators, you need to make sure of only one thing: the platform level of the emulator must also be installed in the Xamarin.Android SDK. For example, if I have a device running Android 7.0 and I have not installed Android 7.0 as a platform level in the Xamarin.Android SDK, then the application will not deploy; it will build, and it will try to deploy, but the deployment will neither fail nor succeed. To overcome that, install the same SDK level in your Xamarin.Android SDK as in the Android Studio SDK. Then you can deploy your applications.

The whole purpose of having Android 7.0 installed in Xamarin was that I was testing my applications on Android Studio's emulator, which ran Android 7.0; to have it accept the application, I needed to install the Android 7.0 SDK in Xamarin as well. Otherwise, debugging would not start at all.

Figure 4: Android emulators shown in the Android Studio AVD manager.

If you look close enough at the following, you will find a lot of easter eggs: Android 6.0 as the target, yet running on Android 7.0, and so on.

Figure 5: Android Studio emulator running Android 7.0, with a Xamarin application targeting Android 6.0.

You will also see that the emulator being used is Android Studio's emulator, not any other emulator, and it runs and works just perfectly.

View not loading

Most of the time you will get an error message saying that the Android SDK is outdated. To be specific, the error message is,

The installed Android SDK is too old. Version {API_LEVEL} or newer is required.

Then it provides you with a link to open the Android SDK manager and install the missing bits. The problem here is that you are trying to target your build at a version that is not installed at the moment. For example, in my case above, the target was Android 6.0 while the only SDK installed was 19 (the default one), which stopped the views from targeting the latest API level.

In most posts, the advice is to update the paths, install everything, move directories from one location to another, or even reinstall Xamarin.

The solution to this problem is pretty much straight-forward. All you need to do is:

  1. Go to Properties → Application.
  2. Check the "Compile using Android version:" value. Also note that the "Target Android version" can be set to "Use Compile using SDK version" to make things a bit simpler.
  3. Finally, install the SDK for your target platform level; a sketch of the related manifest entry follows below.
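
For reference, the minimum and target Android versions you pick on that page end up as the uses-sdk entry in the generated AndroidManifest.xml (the compile version is a project-level setting instead). A minimal sketch of that entry, with example API levels only:

<!-- Sketch of the generated manifest entry; the API levels here are examples. -->
<uses-sdk android:minSdkVersion="16" android:targetSdkVersion="23" />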

One thing to note here: if the SDK manager shows that you have the platform installed but you still cannot run the application, recheck the location of the SDK being used.

Figure 6: Android SDK default paths in Visual Studio.

  1. For that, go to Tools → Options → Xamarin → Android Settings.
  2. Double-check the Android SDK location property there, and make sure it is the location where your SDK is installed.

These will set up a few things in the system so that Xamarin works the way it should.

Java SDK required

In most cases, Java JDK 8 is recommended. By default you will be provided with JDK 7, and that works fine, but it is recommended to install JDK 8 and remove JDK 7. Why?

  1. JDK 7 is old. Really.
  2. JDK 7 can cause your applications to target JDK 7 even when JDK 8 is installed, because it overwrites the default JAVA_HOME variable. Since JAVA_HOME needs to point at the JDK 8 location, there is no need to keep JDK 7; it will never be used.
  3. The latest Android tools use, and are supported by, JDK 8. Soon Xamarin will also require JDK 8, because compiling Xamarin code for Android uses the Android libraries and SDK as well, and they require JDK 8.

A simple way to do this is to remove JDK 7 completely (go to Control Panel for that) and then set the JAVA_HOME environment variable to point to the JDK 8 location. That location will differ between machines, based on the build or version, so check it on your own system.

Final words

Xamarin itself is a very powerful tool provided by Microsoft, and the benefits of Xamarin, especially Xamarin.Forms, outweigh its disadvantages. The main disadvantage is that the learning slope is really very slippery, not just steep. Many beginners give up on learning the Xamarin framework, because learning a simple language such as Java, with most of the sample code already provided, is an easier way to get your work done, whereas in Xamarin you need not just to learn the tools, but also to understand which plug goes into which socket.

I tried my best to provide you with a post that has the solutions to the most widely faced problems. The problems covered here were all generic rather than specific cases, because I wanted future readers to get help from this post as well.

If you face any other error, do let me know by commenting, and I will try to find a solution, a real solution, and then share it with you and the rest of the community. 🙂

Learning SQLite databases in Xamarin for Android

Introduction and Background

For quite a while now I hadn't been doing any mobile development, and I never considered myself a mobile developer either, until a few days back, when I realized that I should look into a few familiar and amazing areas, such as database development and writing database programs on Android. Working on Android, I learnt the basics of SQLite databases and how they actually work, and I really enjoyed the huge performance they provide to applications, serving the data very quickly while maintaining resilience. But that was not the most important thing I learnt in the previous weeks; the most important part was the use of the Xamarin APIs for SQLite programming on Android. And, just personally speaking, Xamarin provides an even better interface for programming these databases than the Java APIs do.

I don't expect you to have any background knowledge of SQLite, or to have previously worked with SQLite or Android's database system and its management, because in this post I will start with the basics and then build on top of them. Secondly, however, I do want and expect you to have a basic understanding of Xamarin and the C# programming language, because we are going to use C# heavily for building the Xamarin application. So, I believe we should start by now.

Understanding database systems

In database theory, did you ever hear words like "database server"? I am sure you did… A database server is assigned the task of managing the data sources on a machine. It provides input/output channels that clients, or other processes, use to save or read the data. It is the responsibility of the database server, also known as the database engine, to take care of the communication, data caching, data storage and data manipulation, and it returns only the data that is required and requested; nothing more, nothing less. But as we know, "database engine" is just a general terminology here; the actual product will be something like MySQL, SQL Server, Oracle Database, MariaDB, etc. These engines are installed on a machine, and then they allow developers (or IT experts) to create databases on that machine, and only then can someone read data from, or insert data into, the system. There were various reasons to use those engines,

  1. They provided an abstracted means to manage the data; in other words, the data layer was managed by them, and the rest of the layers could be programmed independently of the internal and physical layers of storage.
  2. They provided an easier way to manage the data. You just install the server and execute commands to manipulate or read the data; the SQL language was developed for this.
  3. You can have multiple client devices, all connected to a central server to share and fetch the data.
  4. Authentication problems were solved. Only one machine or program needed to know the authentication mechanisms, how to retrieve the data and how to store it; other machines were agnostic of this information. However, you could always use tokens, or account systems, to allow them to perform some administrative tasks.

But as the world progressed, the way applications were developed changed. Software developers wanted to build application software that would run on the machines themselves and store the data locally, instead of having a large server processing and storing the data. The problem with the older database systems was that they were installed in one location and a network connection was required to retrieve or store the data. Even if we skip all of the problems of network-based data instances, such as security vulnerabilities, we are still left with other issues: what does the application do when the network is down? What happens to all the customers if the server goes down, or is being upgraded? And many other similar issues.

In such cases, a good approach was to install the server with the application, but size was an issue: servers really contain a lot of services, task managers, data handlers, connection managers, authentication managers and so on and so forth. So in those cases, the data was stored either in plain files or as structured data, and that gave birth to embedded databases. The databases were embedded; in other words, they were just files plus script code to work with them. Every embedded database works this way: there are simple files containing the data, and there are APIs, libraries, or simple program scripts that are executed to fetch or write the data, and the script updates the data sources. The benefit is that you can ship this script with the application at no extra cost and with no dependency at all.

Thus, SQLite was brought into action

SQLite, as mentioned above, like all other embedded database systems, was written in the C language as a library that manages the data sources. There are various benefits to having SQLite instead of a large database server on a remote machine.

  1. It provides a similar SQL syntax for data manipulation and data extraction.
  2. It can be used with every popular language nowadays. There are bindings for languages such as C++, Java, C#, Python and even Haskell for functional programming.
  3. There is optional support for Unicode character sets as well. You can turn it off for ASCII encoding, or map your own data.
  4. It follows the relational database model; everything gets stored in a table.
  5. There is support for triggers as well.
  6. Its column typing system is dynamic, which means it can be easily programmed with whatever data type you have; it internally maps the value to the type the column expects, such as converting the string "5" to an integer. See the sketch after this list.
  7. You can find it in the most widely used operating systems too:
    1. On mobile, Android is at the top.
    2. On the desktop, Windows 10 provides built-in support for it.
    3. Since it is an embedded database, you can have it anywhere.
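
To illustrate the dynamic typing mentioned in point 6 above, here is a minimal sketch using the same Xamarin.Android API that appears later in this post, assuming an open SQLiteDatabase instance named db:

// SQLite coerces values according to the column's type affinity:
// the text value '5' is stored as the integer 5 in an INTEGER column.
db.ExecSQL("CREATE TABLE demo (age INTEGER)");
db.ExecSQL("INSERT INTO demo (age) VALUES ('5')"); // stored as the integer 5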

With these things and benefits in mind, Android also adopted SQLite as the primary database for applications. So in this article we will look at understanding, and then using, these databases for our own benefit: storing the data and retrieving it when the user needs it in the application. Almost every application on Android uses this database provider, for its speed and its more-than-enough benefits.

Understanding SQLite system in Android

SQLite plays an integral role in the Android APIs for app development. The reason behind this is that SQLite has been part of the Android system ever since API level 1.

Figure 1: Android API and SQLite API level.

This is the default database provided and supported natively in the API, and with every update of the Android API an update to the SQLite version is also provided, so that the latest bug fixes and performance improvements are easily delivered with every update.

Figure 2: Android and SQLite logos.

And even if you are building an application that provides data to other installed applications, either for your own organization or for other vendors, you can use SQLite as the backend of your data layer.

Figure 3: Content providers structure as captured from the Content Providers documentation on the Android Developer website.

In Android, implementing a full-featured data storage API over the models in your data is just a matter of a few objects and their functions. There may be some object-relational mappers out there with support for this, but I want to talk about the native libraries.

In the Android API sets, the providers for the SQLite library are available under the "android.database.sqlite" package. The most prominent types in the package are,

  1. SQLiteOpenHelper: This is the main class that you inherit from in order to handle the database creation, opening and closing events. Even events such as creating new tables, deleting old tables and upgrading the database to a newer version (such as upgrading the schema) are all handled in your classes derived from this class.
  2. SQLiteDatabase: This is the object that you get and use to either push data to the database or read data from it.
  3. SQLiteCursor: This is the cursor implementation for working with the records that are returned by "Query" commands.

Their connection is very simple: one depends on the other object, and they all communicate in a chain to provide us with the services that we require of them.

Figure 4: Structure of the system communication with the SQLite database.

I hope the purpose of each of these is clear by now. The way they all communicate is this: your main data-manipulation class first inherits from SQLiteOpenHelper to get the event-handling functions, and later holds a field of type SQLiteDatabase to execute the functions for writing or reading the data. The final object (SQLiteCursor) is only used when you are reading data; for writing or updating the data it is not required. But in the cases where you need to fetch data, it acts as the pivot point for reading from the data sources. As we progress to program against the APIs, you will understand how these work.

Wrapping up the basics

This wraps up the basics of SQLite with Android, and now we can move on to actually writing an application that lets us create a database, create tables for the data that we are going to write to it, and then write the objects and their functions that we will use to actually store data in the tables: the CRUD functions.

Writing the Xamarin application

Unlike my other posts, I do expect you to create the project yourself, because this is one simple task that every post about Xamarin, or any other Visual Studio based Android project, has in common, so I will not spend any time on it. For a good overview and how-to, please go through the basic "Create an Android Project" guide on the Xamarin documentation website. It gives you a good overview and a step-by-step introduction to creating a new project in either Visual Studio or Xamarin Studio, whichever you are using, as per your choice or need.

Background of our models and data storage

So, like every data layer developer, we will first define the structure for the data to be saved in the database. These are just simple classes, with each column represented as a property of the object, with a type associated with it that makes sense. We will define that first, and then move onward. The purpose is to make sure that we are both on the same track and share the same understanding of how our application should work and process the data.

In the form of a class, our data structure looks like the following,

public class Person {
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Dob { get; set; }
}

The rest of the stuff is not important, and just to keep things simple, I added only 3 columns as properties of the object. We will simply take these values and then show them to the user in a Toast message in the same activity, just to keep things simple for now. So now we need to create the database, the table, and the columns inside the table to represent our object.

For that, my general approach is to write the code in a separate folder and name it generically; such as a "DataStore" file inside a "Services" folder, or a "DbHelper" file inside a "Model" folder. These are a few good approaches that help you write clean code in your applications. You are free to use any of these approaches; I used the first one. The basic structure for the class is,

using Android.Content;
using Android.Database.Sqlite;

// Inheriting from the SQLiteOpenHelper
public class DataStore : SQLiteOpenHelper {
    // The database name must be a const (or static) field so that it can be
    // referenced in the base constructor call below; an instance field cannot be.
    private const string _DatabaseName = "mydatabase.db";

    /*
     * A default constructor is required, to call the base constructor.
     * The base constructor takes the context and the database name; the remaining
     * 2 parameters (the cursor factory and the schema version) are not as
     * important to understand right now.
     */
    public DataStore (Context context) : base (context, _DatabaseName, null, 1) {
    }

    // Default function to create the database. 
    public override void OnCreate(SQLiteDatabase db)
    {
        db.ExecSQL(PersonHelper.CreateQuery);
    }

    // Default function to upgrade the database.
    public override void OnUpgrade(SQLiteDatabase db, int oldVersion, int newVersion)
    {
        db.ExecSQL(PersonHelper.DeleteQuery);
        OnCreate(db);
    }
}

This is the default skeleton for your basic database handler, which handles everything about database creation and deletion. One thing you might have noticed: I did not include the "PersonHelper" object here. There are a few things I want to say about this object before sharing the code.

  1. This is not the model itself. It is merely the helper we will use.
  2. This class is the class-based representation of the table in our database. It contains the query that will be executed to create the table, including any constraints to be set, such as PRIMARY KEY.
  3. It will help us store "Person" objects and retrieve lists of "Person" objects, and even update or delete them as well.

Let us have a look at the class itself,

using System;
using System.Collections.Generic;
using Android.Content;
using Android.Database;
using Android.Database.Sqlite;

public class PersonHelper
{
    private const string TableName = "persontable";
    private const string ColumnID = "id";
    private const string ColumnName = "name";
    private const string ColumnDob = "dob";

    public const string CreateQuery = "CREATE TABLE " + TableName + " ( "
        + ColumnID + " INTEGER PRIMARY KEY,"
        + ColumnName + " TEXT,"
        + ColumnDob + " TEXT)";


    public const string DeleteQuery = "DROP TABLE IF EXISTS " + TableName;
 
    public PersonHelper()
    {
    }

    public static void InsertPerson(Context context, Person person)
    {
        SQLiteDatabase db = new DataStore(context).WritableDatabase;
        ContentValues contentValues = new ContentValues();
        contentValues.Put(ColumnName, person.Name);
        contentValues.Put(ColumnDob, person.Dob.ToString());

        db.Insert(TableName, null, contentValues);
        db.Close();
    }

    public static List<Person> GetPeople(Context context)
    {
        List<Person> people = new List<Person>();
        SQLiteDatabase db = new DataStore(context).ReadableDatabase;
        string[] columns = new string[] { ColumnID, ColumnName, ColumnDob };

        using (ICursor cursor = db.Query(TableName, columns, null, null, null, null, null))
        {
            while (cursor.MoveToNext())
            {
                people.Add(new Person
                {
                    Id = cursor.GetInt(cursor.GetColumnIndexOrThrow(ColumnID)),
                    Name = cursor.GetString(cursor.GetColumnIndexOrThrow(ColumnName)),
                    Dob = DateTime.Parse(cursor.GetString(cursor.GetColumnIndexOrThrow(ColumnDob)))
                });
            }
        }
        db.Close();
        return people;
    }

    public static void UpdatePerson(Context context, Person person)
    {
        SQLiteDatabase db = new DataStore(context).WritableDatabase;
        ContentValues contentValues = new ContentValues();
        contentValues.Put(ColumnName, person.Name);
        contentValues.Put(ColumnDob, person.Dob.ToString());

        db.Update(TableName, contentValues, ColumnID + "=?", new string[] { person.Id.ToString() });
        db.Close();
    }

    public static void DeletePerson(Context context, int id)
    {
        SQLiteDatabase db = new DataStore(context).WritableDatabase;
        db.Delete(TableName, ColumnID + "=?", new string[] { id.ToString() });
        db.Close();
    }
}

This is a simple, yet complete, CRUD-based table-structure class for our application. 🙂 We will be using this class for all of our internal purposes of mapping the objects from the database into the code. The code itself is pretty simple; the most important objects used in the code above are,

SQLiteDatabase

This is the database object that we get from the helper object (our own DataStore object). One difference, primarily between the Java and C# code, is that the C# code looks shorter (yes, a personal line). For example, have a look below,

// C#
SQLiteDatabase db = new DataStore(context).WritableDatabase;

// Java
SQLiteDatabase db = new DataStore(context).getWritableDatabase();

To understand the difference, you should understand encapsulation in object-oriented programming languages and properties in C#. To an extent it does not make any difference, but if you come from a Java background and start programming with Xamarin, you will need to understand the best of both worlds and then apply them in your own areas.
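
Just to illustrate that difference, here is a minimal sketch (with a hypothetical internal helper name) of how a C# property wraps what Java exposes as an explicit getter method:

// C# encapsulation: a read-only property wrapping the same getter logic
// that Java would expose as getWritableDatabase().
public SQLiteDatabase WritableDatabase
{
    get { return OpenWritableDatabaseInternal(); } // hypothetical helper
}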

Heads up: Entity Framework Core can be used in Xamarin applications as well, thanks to .NET Core.

ContentValues

In the Android APIs, this is the wrapper used to carry the column values for each insert or update of a record. The same object is provided here: you add your values to it, and the SQLite engine uses it to push the values to the database.

ICursor

The basic cursor object, used to iterate over the returned collection. In the Android API using Java, you would write the following code,

// Java
Cursor cursor = db.query(...);

In the C# code you get a slightly different version, but that does not matter at all, as you can always implement the ICursor interface and create your own handlers for the data. That helps in many ways,

  1. You get to validate the data before generating the list itself.
  2. You can build your own data structures and load them in one go.
  3. You can implement other services as well and then consume them in the same cursor object; though in many cases this is not required at all.

Create and Delete queries

If you pay attention to the beginning of the helper class, you will find constant string values. Those are used to create and delete the table. For their usage, please see the OnCreate and OnUpgrade functions in the DataStore class above.

One more thing: in SQLite, when you create a column that is declared "INTEGER PRIMARY KEY", that column automatically becomes an alias for the ROWID, which is similar to AUTO INCREMENT in most database systems. That is why we don't need to manage anything else; SQLite itself will make sure that our records are all unique by incrementing the row id. Of course, there are a few caveats with this as well, because the value is not guaranteed to always "increment", but it is guaranteed to "be unique".
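
A practical consequence is that Insert hands you the generated row id, so you never compute the next id yourself. A minimal sketch, building on the InsertPerson code above:

// db.Insert returns the ROWID that SQLite generated for the new record;
// -1 indicates that the insert failed.
long newId = db.Insert(TableName, null, contentValues);
if (newId == -1)
{
    // Handle or log the failed insert here.
}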

Building the UI

The final step in this application is to actually create the UI of the application's main activity. What I came up with for this simple application was the following interface. I hope no one is offended. 😀

Figure 5: Interface design in Xamarin, in Visual Studio 2015.

The few configuration values in the top left corner may also help you understand and build a similar interface, if you want to have one in your own application as well.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
        <TextView
            android:text="Enter the details of a person."
            android:textAppearance="?android:attr/textAppearanceMedium"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/textView1"
            android:layout_marginTop="10dp" />
       <EditText
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/name"
            android:hint="Enter the name"
            android:layout_marginTop="25dp" />
       <DatePicker
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/datePicker1"
            android:layout_marginBottom="0.0dp" />
       <Button
            android:text="Save"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:id="@+id/button1" />
</LinearLayout>

So, now we are ready to run and test our application in the emulator.

Running the Application in Emulator

At this point we may walk our own separate paths, because I personally like to use the native Android SDK emulators instead of Xamarin's Android emulator or the Visual Studio Emulator for Android. There are many reasons for this choice of mine, and I will share them in a later post.

But for now, you can run the application in any of your favorite emulators, or even on your own mobile device.

Figure 6: Application running in the emulator.

I have already filled the UI with the required information, and now I will be writing the backend code — because I am that awesome.

protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);

    // Set our view from the "main" layout resource
    SetContentView(Resource.Layout.Main);

    // I removed the default one, and added my own.
    Button button = FindViewById<Button>(Resource.Id.button1);
    button.Click += Button_Click;
}

private void Button_Click(object sender, EventArgs e)
{
    EditText nameField = FindViewById<EditText>(Resource.Id.name);
    DatePicker picker = (DatePicker)FindViewById(Resource.Id.datePicker1);

    string nameStr = nameField.Text;
    DateTime dob = picker.DateTime;

    if (string.IsNullOrEmpty(nameStr))
    {
        Toast.MakeText(this, "Name should not be empty.", ToastLength.Short).Show();
        return;
    }

    // Just save it.
    Person person = new Person
    {
        Name = nameStr,
        Dob = dob
    };
    // PersonHelper is a top-level class, so it is called directly here.
    PersonHelper.InsertPerson(this, person);
    Toast.MakeText(this, "Person created, fetching the data back.", ToastLength.Short).Show();

    var people = PersonHelper.GetPeople(this);
    person = people[people.Count - 1];
    Toast.MakeText(this, $"{person.Name} was born on {person.Dob.ToString("MMMM dd yyyy")}. \n {people.Count} people found.", ToastLength.Short).Show();
}

The output, after wiring this code to the UI, was the following,

Figure 7: Person created toast message.

Figure 8: Person details shown on the screen using a Toast.

And it works perfectly, just the way we expect it to. You may have noticed that we convert the DateTime object to a standard string, and then parse that same string back into a DateTime object; we then format it for display with the "MMMM dd yyyy" format string.
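
As a side note, here is a minimal sketch of a safer round-trip; this is my own suggestion, not part of the sample above. Storing the date in the ISO-8601 round-trip format removes any dependency on the device culture when parsing it back:

// Store using the round-trip ("o") format and parse it back exactly,
// independent of the device's regional settings.
string stored = person.Dob.ToString("o"); // e.g. "2017-02-17T00:00:00.0000000"
DateTime dob = DateTime.Parse(stored, null,
    System.Globalization.DateTimeStyles.RoundtripKind);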

Final words

Finally, in this post we tackled the problem of SQLite databases in Android using the Xamarin APIs. The process itself is pretty simple and straightforward, and you don't need to go through a lot of pain. Just remember these three steps:

  1. Inherit your main data layer class from SQLiteOpenHelper.
  2. Handle the OnCreate and OnUpgrade (and other) functions to create the tables.
  3. Write the CRUDs and you are done.

Also, remember that you should execute the Delete function on the database object rather than writing a raw SQL DELETE query, for a few reasons.

  1. A DELETE query does not release the underlying storage space; neither does the Delete function.
  2. Your new items will reuse the free space anyway, but that free space will not be released to the operating system for other data.
  3. To release the space, you need to execute the VACUUM command; it repackages the database file, taking only the space it needs. See the sketch after this list.
  4. However, the Delete function helps you avoid SQL injection problems by using parameterized queries. Food for thought: can you spot the parameter in the queries above?
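
Running VACUUM from code is a one-liner; a minimal sketch, assuming an open SQLiteDatabase instance named db:

// VACUUM rebuilds the database file and returns the unused pages to the OS.
// Note: it can be slow on large databases and needs temporary disk space.
db.ExecSQL("VACUUM");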

So, that will be it for this post. 🙂

Serverless computing with Azure Functions

While I was working on cloud computing and other similar technologies, I stumbled upon this new thing that I had not heard of before (until quite a few weeks ago), and I found it really interesting to learn about and to share my own understanding of with you guys. I will try my best to keep the main idea as simple as possible, yet explain everything from A to Z in such an easy way that it will look as if the concept was always in your mind.

So, let us begin with the basics of serverless computing and how it all began.

How it all began…

If you have been into IT and programming geekiness for the past few years, you must have witnessed how things changed: from physical machines to virtualization, to containers, and all the way to what we now call serverless computing. Whether this new tech trend came down the stream in that order, or whether someone else knows better how it all began, is not the topic of concern; the main thing is that we now have another topic to cover before we actually start designing our applications. You will see in this post how each way of deploying an application has its pros and cons, and how serverless comes along to solve some problems, or to ruin everything altogether. There are some pros to serverless computing, and there are obviously quite a few cons as well.

The buzzword of serverless computing didn't take long to spread, because we now have the Internet, and a new thing from the west reaches the east in no time at all. The question to ask is, "Do we understand it the way they wanted us to?" And that is the question I am going to address in this post, so that all of us really understand the purpose, need and reasoning behind serverless computing today.

I am going to use Azure Functions to explain the usage, the benefits and the "should I?" of serverless computing. The reason I chose Azure is that everyone was already covering a lot of Amazon Lambda stuff, so I didn't want to use that; also, I am working on Azure a lot, so I thought this was it.

Understanding the term, “serverless”

Our primary focus is to explain the term "serverless"; the Azure usage is just there to give you an overview of how it actually works in the real world.

Figure 1: A simple yet clear difference in the traditional vs serverless approach toward server management.

The picture above was captured from this blog, and it provides a very good intuition of the difference between what we use today and what serverless offers. But this does not mean the image provides a 100% true and universal difference between the two worlds; sometimes the difference boils down to zero, such as in the cases where you are already using architectures such as cloud offerings. In such scenarios, the total difference in cost comes down to how you selected the subscription. As we progress through the post, I will also draw the margin line in many other aspects of this methodology. Keep reading.

Just like the term cloud computing was misunderstood in various ways, the term serverless is also mistakenly understood as a platform where servers are no longer needed. For cloud computing, you might want to see "Cloud Computing explained by Former IT Commissioner", and consider the fact that we often do not understand a technology the way it is meant to be understood. The same goes for serverless: when I searched for it on Google, I even saw computer hardware being dumped into a dustbin, which convinced me that people really do not understand the term itself. For example, go to "Building a Serverless API and Deployment Pipeline: Part 1", look at just the first image, and please remember to come back as soon as possible; you will get my point. Finally, I mean no offence to anyone mentioned in this paragraph; if you are the target of either of those links, kindly get a good book, or contact me, and I will be happy to teach you some computer science.

Figure 2: Old hardware put into a container; shown here just in case that blog post is not accessible or the author takes the image down. Still, no offence please.

So what exactly is serverless computing? The basic idea behind it is to remove the complexity, or the time taken, of managing the servers, not the servers themselves. In this scenario we use a framework, platform or infrastructure where everything, even the booting, execution and termination of our application, is managed by the provider itself. Our only duty is to write the code; the magic is based on the provider's recipe. Just to provide you with a simple definition of serverless computing, let me state,

Serverless computing is a computing paradigm in which a platform or infrastructure provider manages the booting, scheduling, connection, execution, termination and response of the programs, without the development teams needing to manage the control panel.

Phew, that wasn't so tough, was it? The rest of the stuff, such as pricing models, languages or runtimes provided, continuous deployment or DevOps, is just extra topping on which every provider's offerings will differ. That is one of the reasons I did not include any statement about the pricing models or the languages to use; in this post I will use C#, and I am expecting to write another post that will cover Python or other similar interpreted languages.

Benefits of Serverless architecture

If you migrate your current procedures and communicators to the serverless paradigm, you can easily enjoy a lot of benefits, such as cost reduction, freedom from needing an operations team to manage a full-fledged server, and much more. At the same time, I also want to list a few of the disadvantages of serverless computing at the end of this section.

Cost model

OK, let me talk about the most interesting question on everybody's mind, perhaps: how is this going to change the way I am charged? Well, the answer is very much relative to what you are building, on what platform you are building it, how many customers you have, and how they interact with your application. In other words, there is no way to judge in advance the amount you are going to pay for this. In the following sections, I will give you an overview of another special part of serverless programming that you can use to reason about the pricing model for your application.

NoOps

Do not be mad at me for adding a new term to computer science, if it has not been added there already. Now let me get to the point where I can explain the concept of dissolving the separate teams, such as development, operations etc., and getting to a point where you can bring "serverless" into the environment. In modern-day computing we have, let's say, implemented DevOps, and we have a mindset where our teams work together to bring the product to market for the users. DevOps typically has the following tasks to be conducted,

  1. Planning and startup; user stories, or whatever it is being called
  2. Source control or version control
  3. Development; any IDE, any language
  4. Testing; there are various tests: unit testing, load testing, integration testing etc.
  5. Building; DevOps supports and encourages continuous integration
  6. Release; likewise, continuous deployment is recommended

From this, you can see that developers only need to work in a few areas: planning, development and building. Version control systems can be managed by the IDE, with timers set to control when the versions or updates are checked in each day; updated versions must be released to the market by operations teams, and so on. However, in the serverless field we do not have to "release the software" to market, and we also do not need to manage any underlying server if our application is web based; thus we can somewhat remove the operations team, or include them in the development team as an application developer team. I read a research guide a few days ago naming DevOps minus Ops as "AppOps"; however, I prefer the term NoOps, because the notion of them being a separate operations team is removed, and they now work with the application development team, focusing entirely on the code and the performance or uptime of the application, instead of on the servers or virtual machines.

So, let’s count the purpose of NoOps in this field,

  1. Planning and startup (cut)
  2. Source control or version control
  3. Development
  4. Testing; I could strike this one through as well.
  5. Building (cut)
  6. Release (cut)

Clear enough, I believe. Before we get into another discussion, let me tell you why I think these are the way they are, and why I cut a few items from the list.

Planning and Startup

First of all, planning and startup does not make sense here at all. Please see the section below in which I talk about the "whens" of selecting a serverless architecture; it explains when you should prefer the serverless approach over the current "modern" approaches. Once you have gone through that, you will understand that in a serverless architecture the planning is done beforehand.

Thus, there is no need to sit around again and make the kanban board messy once more. If you are going to work on that board again, please go back and work in a DevOps environment; serverless is not for you. As mentioned below, serverless is for programs that do not run for 4-7 hours, but only for an instant each time they are called, and they are entirely managed by the infrastructure provider.

Version control

Since this is directly tied to development, the core portion of your application, it remains a part of the serverless architecture design. Almost every serverless platform that I have seen so far uses version control services or provides DevOps best practices. Azure Functions provides features that you can use to update the source code of your functions, Amazon Web Services' Lambda allows you to use GitHub, and the same goes for almost all of them; have a look at Google's equivalent service, which supports GitHub-based deployment of Node.js applications and then manages how to run them.

In the development cycle of a serverless application, this can come as the first step after every iteration.

Development

Yes, although we removed the servers and the rest of the IT stuff from the scene, we still need developers to write the logic behind the application.

Testing

The reason I left this option active in the list is that if you are using version control and then deploying the application's code to the server, my recommendation, and that of DevOps as well, would be to test the code before going to the next step, as it might break something up ahead.

In a serverless architecture we are allowed to use source control, so before forwarding the code from there to the server, why shouldn't you run some tests? In serverless we don't have to worry about the servers, but we certainly do need to worry about whether the code ran, or whether it just broke every time.

In a serverless architecture, we do not technically build a full-fledged, full-featured application that takes care of everything; instead we simply write an "if this then that" sort of program. If you understand the concept of the "Internet of Things", then you can think of a serverless application as the hub that manages the communication and responds to an event, message or request. In such cases it is not required to implement every test possible; instead, we can perform simple tests to ensure that the code does not break on the arrival of a request or at the dispatch of a response. These are just a few top-of-my-head assumptions and suggestions; based on what your serverless application does, you might need other tests, such as testing against pattern matching or regular expressions.
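
As a minimal sketch of what I mean (the names here are hypothetical, and I am assuming the function's core logic is extracted into a plain method so it can be tested without the hosting runtime), an xUnit-style test could look like this:

using Xunit;

public class GreetingLogicTests
{
    // Hypothetical core logic, pulled out of the function body for testability.
    public static string BuildResponse(string name) =>
        string.IsNullOrWhiteSpace(name) ? "Please pass a name." : $"Hello, {name}!";

    [Fact]
    public void DoesNotBreakOnMissingInput()
    {
        // The handler must respond gracefully to bad input, not throw.
        Assert.Equal("Please pass a name.", BuildResponse(null));
    }
}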

I repeat, this is the most important part of serverless programming. I cannot stress this enough, but you get my point: if you are a team moving toward the serverless paradigm to lift your response rate, then you first of all need to ensure that the program is resilient to any input provided to it and will not break at all. And even if there are some errors, how does the program respond to them?

Building

I removed that step, but I could also have left it as it was. The reason is that the infrastructure may provide you with support for publishing built programs. We are going to look into Azure Functions, and Azure supports publishing compiled programs that execute directly, instead of scripts that have to be interpreted each time the function executes.

But in many cases you do not need to take care of, or even worry about, the build process, since functions are small programs and they don't require much of the machinery that typical applications require. You can easily publish the code, and it will execute in a moment. Serverless providers allow interpreted languages to be used as scripts too, such as batch files, Python scripts, or PHP files. But again, every provider has their own specification for this; Azure Functions supports any kind of program that you can write or build: you can upload it to the server, they will host it, and your users can connect to it the next time they send a request or interact with the application, whether from mobile, web or any other IoT-based device.

Release

Submitting the code to version control is the only release we are going to worry about.

Winding up the basics

What I have covered just scratches the basics of serverless architecture, and there is a lot more to it than this singular concept of what to do and what to leave out. But going deeper into the rabbit hole might be confusing and might take us off track from this post's structure. So I will not go further down; just to wind up the basics, let us go through a few things.

  1. Do not consider serverless architecture a replacement for your current physical servers or container-based environments. Serverless functions are just used to take events and then trigger another server or virtual machine to act on each event, with the provided data. Nothing else.
  2. Before going serverless, you and your team must understand one fact: "It is your duty to test the code and the validity of your code, and it is the duty of your platform provider to ensure that the code runs the way it is intended to run, on the intended runtime."
  3. Pay a lot of attention to testing, testing, testing. I repeat: other than development, if there is one thing that needs to be done, it is testing.
  4. The payment plans in many cases differ from one another. Sometimes you might choose a monthly plan; sometimes, if your users are not in the thousands, you can select the plan where you only pay for the time your function is executing; only for the resources used.
  5. Microsoft Azure Functions provides full support for various languages and runtimes: you can use C#, you can just as well choose JavaScript, and there are other ways of writing the function code, such as using Python or PowerShell. You can also upload compiled code and run it. In other words, if it can execute, it can be a function.
  6. One final thing: a function is only a small program or handler that runs for a short time (10ms-1.5s); if it takes longer than that, it can raise errors and you may face other problems as well. Always keep the function code short, and terminate it as soon as possible by triggering other services or passing the data on to other service handlers. For example, you can trigger the function from an IoT hub and then use other services, such as SMS or SMTP services, to send notifications; the function should close right after triggering those services and passing the data along. See the sketch after this list.
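
Here is a minimal sketch of point 6 in Azure Functions C# script form (the binding names are assumptions; the matching queue output binding would be declared in function.json). The function only forwards the payload and terminates, leaving the heavy lifting to whatever service consumes the queue:

public static void Run(string myEventHubMessage, out string outputQueueItem, TraceWriter log)
{
    // Do not process the event here; hand it over and terminate quickly.
    log.Info($"Forwarding event: {myEventHubMessage}");
    outputQueueItem = myEventHubMessage; // picked up by a queue-triggered worker
}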

This architecture can help you in many ways, but if used badly it is like shooting yourself in the foot. In my own experience, I have found that the architecture can also create a lot of problems for you, and it might not always be as helpful as you think. So, use it wisely.

Azure Function example

I didn't want to write a complete guide on serverless architecture, because I might have other posts coming out on this one as well, so let us go a bit deeper, have a look at the Azure Functions feature, and see how we can write minimal serverless functions in Azure itself.

In this example, I am only going to show you enough to demonstrate how functions work; in future posts I might cover the HTTP bindings for functions, or other stuff such as DevOps practices, but for this post let me keep it really short and simple and cover the basics.

Basic function file hierarchy

At a minimum, a function requires an executable script (in any supported runtime) and a configuration file that specifies the input/output bindings of the function, the timers, or other parameters required for proper execution. That function.json file controls the execution of the function; it holds all the configuration settings, such as the accounts or services to communicate with. So, for instance, in a simple timer-based function the following files are enough to control the function itself,

Figure: the function's script file alongside its function.json configuration file.

The contents of the two files are as follows. First, the script,

using System;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}"); 
}

The JSON configuration file has the following content,

{
    "bindings": [
        {
           "name": "myTimer",
           "type": "timerTrigger",
           "direction": "in",
           "schedule": "0 */5 * * * *"
        }
    ],
    "disabled": false
}

Before moving any further, let me clarify a bit about their purpose in this post.

Note: In another post, I will clarify the meaning and use of the function.json file, and what attributes it holds. For now, please bear with me.

An executable script

The executable script can be C#, JavaScript or F#, or any other executable that can run. You can use Python scripts as well as compiled executables.

Configuration file

The function.json file holds the settings for your function. The code provided above is a very basic example; complex functions will have more bindings in them, more parameters, and connection names or authentication modules, but you get the point.

In this file, the name and direction of each binding are compulsory. Other settings, however, depend entirely on the type of binding being used: HTTP triggers will have different settings, timer triggers will require different settings, and so on. In the timer binding above, for example, the schedule value is a six-field CRON expression; "0 */5 * * * *" fires every five minutes.
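
For instance, a function.json for an HTTP-triggered function might look like the following sketch (an illustration based on my assumptions; the exact attributes depend on the binding and runtime version):

{
    "bindings": [
        {
            "name": "req",
            "type": "httpTrigger",
            "direction": "in",
            "authLevel": "function"
        },
        {
            "name": "res",
            "type": "http",
            "direction": "out"
        }
    ],
    "disabled": false
}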

Executing

Azure provides the runtime for almost every executable platform, from PowerShell to Python to JavaScript to C# scripts (the code provided above is from a C# script file), all the way to other scripts such as batch files. The runtime also supports native executables, though this is a part I have yet to explore a bit more, to explain which languages are supported in this scenario.

I will not go into the depths of this concept, so I will leave it here; anyway, the output of this function is,

2017-02-03T17:35:00.007 Function started (Id=3a3dfa76-7aad-4525-ab00-60c05b5a5404)
2017-02-03T17:35:00.007 C# Timer trigger function executed at: 2/3/2017 5:35:00 PM
2017-02-03T17:35:00.007 Function completed (Success, Id=3a3dfa76-7aad-4525-ab00-60c05b5a5404)
2017-02-03T17:40:00.021 Function started (Id=a32b986c-712e-4f30-84c9-7411e63b5356)
2017-02-03T17:40:00.021 C# Timer trigger function executed at: 2/3/2017 5:40:00 PM
2017-02-03T17:40:00.021 Function completed (Success, Id=a32b986c-712e-4f30-84c9-7411e63b5356)
2017-02-03T17:45:00.009 Function started (Id=c18c0eef-271d-4918-8055-64e3f31f953a)
2017-02-03T17:45:00.009 C# Timer trigger function executed at: 2/3/2017 5:45:00 PM
2017-02-03T17:45:00.009 Function completed (Success, Id=c18c0eef-271d-4918-8055-64e3f31f953a)
2017-02-04T10:27:54 No new trace in the past 1 min(s).

The timer trigger keeps running and keeps logging the new events and process information. You will also notice that this is the same output that Node.js or F# programs would give you; the only difference between these 3 (only 3 at the moment) is that their runtimes are different. The bindings and the input/output of the functions are managed entirely by Azure Functions itself, and developers do not need to manage or take care of anything at all.

Wrapup

Since this was an introductory post on serverless programming and how Azure Functions can be used in this practice, I did not go much deeper into the procedures for writing function applications. But the post should be enough to give you an understanding of serverless architecture, what it means to be serverless, and how DevOps transitions to NoOps. In the following posts about serverless, I will walk you through writing serverless applications and then consuming them from client devices, via Android or native HTTP requests.

Finally, just a few things to consider:

  1. If your functions take a long time to execute, such as a minute, or even 30 seconds, then consider running that workload in a virtual machine or an App Service instead. A function should act like a handshake negotiator: it should take the data and pass it on to a processor, not perform the processing and result generation itself.
  2. Your functions should be heavily tested. I really want to stress this point. Your functions are the welcomers that warmly greet the incoming guests at your servers; if they fail at that, the data may never come back (the data being your users, events, or anything similar).
  3. Functions lean on functional programming concepts, where functions are not stateful. Being stateless means they do not process data based on any machine state, attribute, property, or the time at which they are executed. For example, an add function, when passed the input “1, 2, 3, 4, 5”, will always return “15”, since the result depends only on the input list (see the sketch after this list).
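
A minimal sketch of such a stateless function in C#; this is my own illustration, not code from the Azure Functions runtime:

using System.Linq;

public static class Calculator
{
    // A pure function: the result depends only on the input values,
    // never on machine state, properties, or the time of execution.
    public static int Add(params int[] numbers) => numbers.Sum();
}

// Calculator.Add(1, 2, 3, 4, 5) always returns 15.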

As we start to develop our own serverless APIs and applications, we will also look at further ways to structure them, and at writing application code in a way that does not hurt the overall performance of our service.

Nonetheless, even if you never implement it in a production environment, serverless is a really interesting topic to understand and learn from a developer’s perspective, as you take care of everything and there are no cables involved. 😉

Considerations for SQL Server on Linux

Introduction and Background

SQL Server has been released for Linux environments too, so after ASP.NET we can now use SQL Server to actually run our servers on Linux using Microsoft technologies, without purchasing licenses and paying a fortune. Or can we? Not quite, at least for a while… Why? I will walk you through the critical aspects of the SQL Server and Linux continuum in this post. By reading it, you will go through many areas of SQL Server on Linux and see whether you really should try it out right now, or wait a while before diving any deeper into the platform.

I personally enjoy trying new things out… Many out there just try things; I dissect them, to share in depth what everything has, what everything really is, and finally whether you should consider it or not. That last part is the most important one. After all, you are the main focus of my attention, and I want to share with you what I find. SQL Server is in its initial versions on Linux, so this post is primarily feedback and an overview of the product, not a rant on it in any way.

SQL Server 2016 available on Linux

Now starts the fun part. I believe almost all of you are aware that SQL Server 2016 is available on Linux after serving Windows only for a long time, and that .NET Core is available on Linux too. If you have been reading my past articles and blogs, you also know that I am a huge fan of the .NET Core framework and of how it helps Microsoft ship their products to other platforms.

Microsoft announced SQL Server availability on Linux quite a while ago, and many have started downloading and using the product.

sql-loves-linux_2_twitter-002-640x358
Figure 1: SQL Server “heart” Linux. No pun intended. 

Of course the benefit goes to Linux users and developers, because they now have more options; whether they like SQL Server or not is a different matter. I personally enjoy the tools, such as SQL Server Management Studio, a free piece of software for managing SQL Server databases. The tool can be used with most of the database engines, though only Microsoft engines.

There are many posts that give a good overview of and introduction to SQL Server on Linux, and I am not going to dive deeper into that here; I have a Database Systems exam tomorrow. So, help yourself with a few of these,

  1. https://www.microsoft.com/en-us/sql-server/sql-server-vnext-including-Linux
  2. https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-get-started-tutorial
  3. https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/16/announcing-sql-server-on-linux-public-preview-first-preview-of-next-release-of-sql-server/ (This is a must read!)

Without further delay, let us move on to installing SQL Server on Linux and see how we can get our hands on those binaries.

Installation of SQL Server engine

SQL Server, like on Windows, ships separately from its tools. You have to install SQL Server first, and later install the tools required to connect to the engine itself. Note: if you can write a .NET Core application, chances are that you do not even need the SQL Server tools for Linux at the moment; you can execute your SQL commands straight from a small console program, and I will show you how… In my experience, this method is really agile and fits on-demand use.

So, fire up your machines and add the keys for packages that you are going to access, download and install.

$ curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list | sudo tee /etc/apt/sources.list.d/mssql-server.list

These set up the package repositories on your machine, which you can then use to download the packages.

screenshot-7008
Figure 2: curl executing.

Finally, refresh the sources and start the installation,

$ sudo apt-get update
$ sudo apt-get install mssql-server

You will be prompted to enter “y” and press the “Enter” key to continue the installation. This process downloads and installs the binaries on your system, including the files and scripts that will later set up your instance. Once it finishes, you may execute the following script to start the actual “installation” of SQL Server 2016 on your machine.

screenshot-7010
Figure 3: SQL Server installed on Ubuntu. 

$ sudo /opt/mssql/bin/sqlservr-setup

Run this script with elevated permissions, as it needs them to set up the server’s engine on your machine. The default directory hierarchy looks like this,

screenshot-7007
Figure 4: Files in the installation directory, ready to install engine. 

You can see the setup script in the list, execute it in the terminal and it will guide you through setup.

screenshot-7011
Figure 5: SQL Server installer requesting user password during installation. 

You accept the terms, enter a password, and the server gets installed. Simple as that. But do remember the password; you need it later to connect, since there is no Trusted_Connection property available in this one.

You can also test whether the service is running properly, or running at all, by looking at the running services. First of all, you can execute the following command to check the server’s status,

$ systemctl status mssql-server

screenshot-7013
Figure 6: The screenshot shows that the response has everything required to see whether the service is running or not.

On most systems, you can find the same information under the running processes too,

screenshot-7016
Figure 7: SQL Server collects telemetry information; the last process tells this. 

Once everything is working we can move onwards to download and install tools.

Installing SQL Server 2016 Tools

On Linux, the choice of tools is very small and select. You may download and install a few helpers if you would like, such as the “sqlcmd” program. Here is a short overview of the steps required to install this package and use it for some tinkering.

The procedure is similar to what we did before: add the keys, then install the tools,

$ curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

Then, finally, use the following commands,

$ sudo apt-get update 
$ sudo apt-get install mssql-tools

The installation won’t take long, and you will get the packages that you can use later; I won’t be showing these off a lot.

$ sqlcmd -S localhost -U sa

Do not type the password into the command itself; let the program prompt for it, so the input is also not shown on the screen.

screenshot-7017
Figure 8: Result of SQL query in sqlcmd program.

That is enough about this tool, and I do not recommend it at all. Why? See the result of the SQL query below,

> SELECT serverproperty('edition')
> GO

screenshot-7018
Figure 9: Result of an SQL query, totally unstructured. 

The results are not properly structured, so they are confusing, and you have to scroll up and down to read them. So, what to do then? Instead of using this, I am going to teach you how to write your own program… I already did so once, and I thought I should do it again for .NET Core.

Writing SQL Server connector app in .NET Core

I am going to use .NET Core for the application development part; it is indeed my favorite platform. In the .NET world, I wrote the same article using the .NET framework, which you can read here: How to connect SQL Database to your C# program, beginner’s tutorial. That tutorial is a complete guide to SQL Server, targeted at beginners. Here, however, I am not going to explain the basics; I am just going to show you how to write the application and get the output.

Note: You should read that article for more explanation; I am not going to cover this in depth, and I will not be explaining System.Data.SqlClient either, as it is covered in the post I referred to.

You start off by creating a new project, restoring it, and finally opening it up in Visual Studio Code.

$ dotnet new
$ dotnet restore
$ code .

I skipped the directory creation step, assuming you know it yourselves. If you have no idea, consider reading a few of my previous posts on the topic, such as A quick startup using .NET Core on Linux.

After that, add a new file to your project, name it SqlHelper, and add a class with the same name in it,

using System;
using System.Data.SqlClient;

namespace ConsoleApplication
{
    class SqlHelper 
    {
        private SqlConnection _conn = null;

        public SqlHelper() 
        {
            // <password> is a placeholder; use the SA password you set during installation.
            _conn = new SqlConnection("server=localhost;user id=sa;password=<password>");
            _conn.Open();
            Console.WriteLine("Connected to server.");
        }
    }
}

You still require a simple SQL connection string for your database engine; there are websites you can use to look up connection string formats. As I said earlier, there is no trusted_connection variant here, because there are no Windows accounts that we can utilize. Now, call this object from your main class and you will see the result.

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            new SqlHelper();
        }
    }
}

screenshot-7019
Figure 10: Connected to the server.

The results are promising: they show we can connect to the server. Now we can move onward and create something useful. Let us try to execute the same commands we ran above in our newly developed program; update the code to start accepting SQL commands as input.

using System;
using System.Data.SqlClient;

namespace ConsoleApplication
{
    class SqlHelper 
    {
        private SqlConnection _conn = null;

        public SqlHelper() 
        {
            _conn = new SqlConnection("server = localhost; user id = sa; password = <password>");
            _conn.Open();
            Console.WriteLine("Connected to server.");
            _execute();
        }

        private void _execute() 
        {
            if (_conn != null ) 
            {
                Console.BackgroundColor = ConsoleColor.Red;
                Console.ForegroundColor = ConsoleColor.White;
                Console.Write("Note:");
                Console.BackgroundColor = ConsoleColor.Black;
                Console.ForegroundColor = ConsoleColor.Gray;
                Console.WriteLine(" You may enter single line queries only.");

                // Keep prompting until the user terminates the program (e.g. with Ctrl+C).
                while (true)
                {
                    Console.Write("SQL> ");
                    string query = Console.ReadLine();
                    using (var command = new SqlCommand(query, _conn)) 
                    {
                        try {
                            using (var reader = command.ExecuteReader()) 
                            {
                                while (reader.Read()) 
                                {
                                    // For simplicity, read only the first column and
                                    // assume it is string-typed.
                                    Console.WriteLine(reader.GetString(0));
                                }
                            }
                        } catch (Exception error) {
                            Console.WriteLine(error.Message);
                        }
                    }
                }
            }
        }
    }
}

This starts executing and asks the user to enter SQL commands; you can see how it works below,

screenshot-7020
Figure 11: Connecting and executing SQL queries on SQL Server. 

So, you have seen that even a simple .NET Core program for writing and managing queries structures its output far better than the sqlcmd program on Linux. You can update it as needed.

Tips, tricks and considerations…

In this section I will talk about a few major concepts that you must understand before diving any deeper into SQL Server on Linux; if you don’t know these, chances are you don’t know SQL Server well either. So, pay attention.

1. Default Directory?

There are so many articles out there already, yet no one seems interested in telling the world where the server actually gets installed. Where are the files? How do you locate the database logs?

On Linux, the default directory is “/var/opt/mssql/<locked out>”. You need admin privileges to access and read the contents of this directory, so I elevated my privileges and went in.

$ sudo dolphin /var/opt/mssql/

“dolphin” is the file manager in KDE. Sorry, the display was not very clear, so I selected everything to make the contents visible for you.

screenshot-7021
Figure 12: Data in the “data” directory under “mssql” directory.

You can surf the rest of the directory on your own; I just thought I should let you know where things actually reside. The location might change in the future, but until then, enjoy. 😉

2. Edition Installed

On Windows, you are typically asked which edition you want to install. On Linux that was not the case; it installed one for us. The default (and, at the moment, only possible) edition is SQL Server Developer edition. This edition is free of cost and has everything the Enterprise edition has.

To confirm, just execute,

SELECT serverproperty('edition')

Or have a look above at figure 9 or 11, where the edition is shown. Also note that Microsoft’s recent .NET Core-based products are x64 only; ARM and x86 are left for the future, for now.

3. Usage Permissions

Yes, feel free to download, use, and try this product out, but remember that you cannot use it in production or with production data. That is the only difference between the Enterprise and Developer editions of SQL Server 2016.

If you head over to SQL Server 2016 editions, you can see the chart clearly.

screenshot-7022
Figure 13: No production rights are available in Developer edition.

Thus, you should not consider using this with production data. Until that is allowed, stick with the Windows platform and download the Express edition; it has more than enough capacity for small projects.

4. Should you use it?

Finally, if you are a learner like me and want to try something new, then yes, of course, go ahead and try it out. You can install Ubuntu in VirtualBox to try it, if you’d like.

If you find something new, message me and we can chat on that. 🙂

Automating deployment of ASP.NET Core to Azure App Service from Linux

Introduction and Background

On operating systems such as Microsoft Windows, and with great tools like Visual Studio, it is very easy to upload and publish your web application projects to Microsoft Azure. But what about other platforms such as Linux? Microsoft recently released PowerShell as an open source project, and that is a great option for teams with developers who understand and can use PowerShell; if you don’t have any, you need to fall back to traditional tools, which sometimes means performing the steps manually. In this post, I am going to cover teams who use Git repositories for their projects, and show how they can automate the publishing of those projects after each successful build. On Linux, we can create small scripts that run in the background, much like PowerShell scripts or the Windows command prompt’s batch files. Git is required for this to work; what I am going to show is how to run the same familiar commands in a way where most of the repository information is already prepared, the commit message is generated, and the content is published to the Azure App Service. The purpose of this guide is to make the entire process as easy to understand as A, B, C. I will use a real-world application to show you what to do and where, instead of talking generally about many things at the same time.

Before moving onward, a little knowledge of how Git works is required, because some things below get a bit technical, and you need to know how Git controls the versioning of files.

2color-lightbg2x
Figure 1: Git logo.

Secondly, you need to know how to use the “dotnet” command to create, build, and run .NET Core applications. If you don’t know .NET Core, I will reference a few of my own beginner-friendly articles that you can read to learn more.

Finally, you need an active Azure account with a working subscription. If you don’t happen to have one, you can get a free account with $200 of credit to try out all of Azure! Once these requirements are met, you can continue and put this article to use.

Creating the application — entire part

This part has been shared and taught many times, on many occasions. I wrote an article on the same concept a few days ago, and I would like to refer you to it to learn how to create a new application from the web template in .NET Core: Creating and hosting ASP.NET Core application on Linux — Nothing Third-Party. Read that article for a complete overview and walkthrough of building and running the applications on a Linux environment. I will move onward from this step, assuming you know how to create your own applications.

Deployment of web application to Azure

Now comes the main part: this article focuses entirely on deploying applications to Microsoft Azure rather than developing one. There will be some parts where I modify the application, but that is just to show how easy it is to redeploy it by executing a single automation command. A very basic introduction to automation tools helps here, though it is not strictly required; you can find a good tutorial on such tooling in any software engineering guide.

Using git for deployment

Microsoft Azure supports many ways of deploying applications to the cloud, such as using Visual Studio and leaving the configuration to the tool itself. However, since Linux environments don’t have Visual Studio, most of these tasks fall to source control tools (Visual Studio Code also uses the same git programs to upload code to repositories). I will show you how to create a minimal script that manages all of these tasks for you: an automation program.

Git typically comes preinstalled on most Linux distributions, such as Ubuntu. If it is not available, you can easily install it using,

$ sudo apt-get install git

Or a similar command, such as one using yum. This installs and sets up git for your environment. Once git is installed, you need to configure a few things: git requires a name and an email address to record who made each change. For that, execute the following commands,

$ git config --global user.name "Eminem"
$ git config --global user.email "email@domain.com"

Pass your own name and email address (I used Eminem and a random email address). To check whether your personal configuration is done correctly, you can execute the following command,

$ git config --list

This configuration is required before you can use git for any purpose at all.

Note: You also need to create a new web app (Azure App Service) on Microsoft Azure so that you have somewhere to deploy the application. Since I do not want to go deep into that here, I would like you to watch this video of mine, which provides an overview of Microsoft Azure App Service (you may have to turn the volume up a bit).

Once you have that, you can continue and actually set up the repositories to deploy the application. I used the following information while creating the new service, so when I use a name later, you know where it came from.

screenshot-6477
Figure 2: Creating a new App Service.

screenshot-6478
Figure 3: Reviewing the information of App Service in Microsoft Azure.

Once it is created, you can visit the web service in your browser using the link provided; on the first visit, before any deployment, the following page is shown.

screenshot-6479
Figure 4: Initial page from the App Service.

It shows that you can deploy your web applications from many sources, using services such as FTP, Git, and so on. I will show you how to do this using Git… There are a few other things I would like to show you here,

  1. I will show you how to do this, using git.
  2. I will also show you, how to test the build integrity — whether build succeeds or not.
  3. I will also show you, whether git considers to commit and push the changes to the server or not.

These are a few of the checks I have built into the automation tool that will be helpful for us here. The tutorials available online do not provide these; they are simple, straightforward guides that only show you how to do it, not whether it is worth doing at all.

Setting up the local repository

On the machine side, the first thing to do is to set up a local repository for git deployments. Even if you have already created a project, you can still create a repository for it and use it as the source of your application’s content. For that, the following commands will work,

# If you have already created the project, remove these two lines.
$ dotnet new -t web
$ dotnet restore

# The following creates a local repository
$ git init

This creates the project and sets up the repository for us under the .git directory. On my machine, the result was the following,

screenshot-6480
Figure 5: Creating a new project using .NET Core.

Not visible at the moment, there is a special file called .gitignore, created by default, which contains really helpful ignore rules for git. We will look into those later as we progress; a small excerpt follows below.
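
A couple of typical entries from such a file, as an illustrative excerpt (the generated file is much longer):

# Build outputs that should never be committed
bin/
obj/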

Adding remote repositories

We can now set up a few remote repositories on our local machine so that when we need to publish the application, we don’t have to enter the repository URL every time; instead, we just call the repository’s alias and deploy the content there. This tells git where it needs to push the changes. First of all, we need to configure the App Service to allow local git deployments from Microsoft Azure; only then can we send the code to the online repository. For that, head over to Deployment options → Deployment source → Local git repository.

screenshot-6487
Figure 6: Deployment options in Microsoft Azure.

You can see that other options are available too; select whichever you require. I selected the third option so that we can publish source code from our local machines to the servers in no time, without requiring any third-party vendors. You also need to set up credentials, so that only you are in charge of pushing changes to the server and no one else.

screenshot-6488
Figure 7: Setting up credentials for the deployment account.

Once these things are set up, we can move on and configure our local git so that it can deploy the source code from our local environment all the way up to Microsoft Azure. Stay with me. To do so, we need to execute the following command,

$ git remote add repoinazure https://<user>@<servicename>.scm.azurewebsites.net:443/<servicename>.git

Note a few things in the previous command. The interesting part is that the URL always contains the name of your App Service, both in the host name and in the repository path, so you need to update that. Secondly, you need to set the username in the URL to the one you used in the credentials (I used “afzaal” there). Once this is done, we can move on and push the first version of the application to see how Azure behaves.

The following commands take care of that,

$ git add .
$ git commit -m "Some random commit message here..."
$ git push repoinazure master

In the previous commands, the first one stages the files so the repository tracks the ones that need tracking. The second command commits the changes, and finally a push is made to “repoinazure”, the remote where we publish the application.

screenshot-6489
Figure 8: Terminal asking for password before deployment. 

It asks for the password and then continues, compressing the objects and deploying them over the git protocol.

screenshot-6490
Figure 9: Terminal showing the progress. 

The following is a screenshot of the process during the first deployment. From the next time onward it doesn’t take this much time and is much simpler.

screenshot-6491
Figure 10: Terminal showing the processes going on at Azure during the deployment.

Later, we will also create a shell script that automates the deployment for us. Once the push is done, the terminal shows the following results and we know that Azure is set up and starting.

screenshot-6492
Figure 11: Application deployed to Azure.

Let us navigate to the website and see if things are the way we are planning them to be.

screenshot-6493
Figure 12: Application’s home page after it has been uploaded to Azure.

Voila! We have finally deployed the application to Azure. Next we need to deploy changes, and this is where I submit my recommendations, ideas, and tips.

Development and redeployments

Let us add a simple controller to this web application and then quickly deploy it using the script we will create. In this section, I will show you a simple ASP.NET Core Web API used to see how quickly updates get published live on the server. I start by adding a new controller file to the web application’s source code and modifying it to return a few objects with some data.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

namespace WebApplication.Controllers {

    [RouteAttribute("api/people")]
    public class PersonApiController : Controller {
        private List<Person> _people { get; set; } = new List<Person> { 
             new Person { ID = 1, Name = "Afzaal Ahmad Zeeshan" },
             new Person { ID = 2, Name = "Bruce Wayne" },
             new Person { ID = 3, Name = "Marshall Bruce Mathers III" } 
        };
        // Returns the list; reached via an HTTP GET to api/people.
        public List<Person> GetPeople() {
            return _people;
        }

        // Rest of the stuff here...
    }

    public class Person {
        public int ID { get; set; }
        public string Name { get; set; }
    }
}

I just added a simple HTTP GET handler to show how easy it is to make changes to your live application using git deployment from a local repository.
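
Once deployed, the endpoint can be exercised with a plain HTTP request, for instance using the same service-name placeholder as before:

$ curl http://<servicename>.azurewebsites.net/api/people

It should return the three Person objects above, serialized as JSON.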

Now the magic trick: the following script manages most of our tasks and takes care of them before we proceed to deploy the application.

if [ $# -eq 0 ]
    then 
    echo "Usage: automate.sh <git-remote-repository>"
    exit 1
fi

# Variables
commit_message="Deployment commit on $(date "+%B %d, %Y at %I:%M %p")."

# Run the test
dotnet build

if [ $? -eq 0 ]
    then
        # Successful attempt.
        # Update the git.
        git add -A

        echo "Using commit message, \"$commit_message\"."
        git commit -m "$commit_message"
 
        # Check if anything was committed, or whether there were no changes.
        if [ $? -eq 0 ]
            then 
                # Files need to be updated
                # Update the azure's repository.
                echo "Connecting to server for git push..."
                git push $1 master
                exit 0
        else 
            echo "No changes to be pushed to server. Terminating."
            exit 1
        fi
        exit
    else 
        # There must have been an issue in the execution.
        echo "There were errors in building process, fix them and re-try."
        exit 1
fi

This checks whether we passed the remote repository name, builds the project, and continues only if the build succeeds. Basically, it comes in really handy when working in a terminal-based environment, especially Linux, and using the git protocol for deployment. I recommend saving it as a file in your local repository and executing it from the terminal as a program. You can get the file from my GitHub repository too.

screenshot-6533
Figure 13: “autorun.sh” file available in the files.

Once this is done, simply make the file executable and then execute the script to deploy. It goes through each step and finally deploys the application; it will prompt for the password, as I did not build password handling into the script. The two commands are sketched below.
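
Assuming the script was saved as “automate.sh” (the name its own usage message suggests), that looks like:

$ chmod +x automate.sh
$ ./automate.sh repoinazure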

screenshot-6536
Figure 14: My script working and deploying the application to Azure.

So, let us run it now. Once it is done, it shows that the web application has been deployed successfully and can be accessed. The browser can confirm this,

screenshot-6539
Figure 15: Result of the deployment.

For more information, we can head over to Azure and see how the deployment affects our current source code.

screenshot-6538
Figure 16: Deployments shown on the Azure.

It is clearly seen that the most recent deployment is now shown as the active one, while the previous one is set as Inactive. We can dig deeper into these and change their status as needed, but I won’t be doing that here. Remember: next time you want to publish the application, just use the script I provided.

Final words

Microsoft Azure provides very simple ways of deploying applications, and there are many options other than Git: OneDrive, GitHub, and other cloud storage services can be used easily to deploy applications.

However, my major concern was to show how you can use git in a script-based environment to deploy applications to Azure and track each commit as it gets pushed to the server. As you can see in this post, the commits are tracked in the cloud, and you can select whichever commit you are interested in and set it as the active one, for example after the accidental deletion of a file.

Is using ‘using’ block really helpful?

Introduction and Background

So, it all began on Facebook, when I posted a status (via Twitter) saying I felt very bad that the .NET team had left out the “Close” function while designing the .NET Core framework. At first I thought maybe everything was “managed” under the hood, until I ran into an exception telling me that the file was already in use. The code I was using was something like this,

if(!File.Exists("path")) { File.Create("path").Close(); }

However, “Close” was not defined, and I double-checked against the .NET Core reference documentation too, which confirmed it was not available in the .NET Core framework. So I had to go the other way… Long story short, it all went like this,

I am unable to understand why was “Close” function removed from FileStream. It can be handy guys, @dotnet.

Then Vincent replied, saying that “Close” was removed so that we would all use “using” blocks, which are much better in many cases.

You don’t need that. It’s handier if you just put any resource that eats resources within the “using block”. 🙂

Me:

How does using, “using (var obj = File.Create()) { }” make the code handier?

I was talking about this, “File.Create().Close();”

Now, instead of this, we have to flush the stream, or perform a flush etc. But I really do love the “using block” suggestion, I try to use it as much as I can. Sometimes, that doesn’t play fair. 😉

He:

Handier because I don’t have to explicitly call out the “Close()” function. Depends on the developer’s preference, but to me, I find “using (var obj = File.Create()) { }” handier and sexier to look at rather than the plain and flat “File.Create().Close();”. Also, it’s a best practice to “always” use the using block when dealing with objects that implements IDisposable to ensure that objects are properly closed and disposed when you’re done with it. 😉

As soon as you leave the using block’s scope, the stream is closed and disposed. The using block calls the Close() function under the hood, and the Close() calls the Flush(), so you should not need to call it manually.

Me:

I will go with LINQPad to see which one is better. Will let you know.

So now I am here, and I am going to share what I found in LINQPad. The fact is that I have always had faith in code that runs fast and performs better. He is a Microsoft ASP.NET MVP and can be forgiven (web developers are trained on multi-core CPUs and tens of gigabytes of RAM, so they favor code that looks cleaner), but he missed the intermediate language factor here. I am going to use LINQPad to look at the underlying code and find out a few things.

Special thanks to Vincent: Since a few days I was out of topics to write on, Vincent you gave me one and I am going to write on top of that debate that we had.

Notice: I also prefer the “using” block in almost every case. It looks handier, but the following code block doesn’t look handy at all,

using (var obj = File.Create("path")) { }

And this is how it began…

Exploring the “using” and simple “Close” calls

LINQPad is a great piece of software for trying out C# (or other .NET framework) code and seeing how it works under the hood; it shows you the intermediate language (IL) that the .NET framework generates, and lets you inspect tree diagrams of the code. The two constructs we are interested in are the “using” statement and the plain “Close” call made on an object to close its stream.

I used a Stopwatch object to measure the time the program takes to execute each task, then compared the results to find out which one was faster. From a thousand-foot view, the code for both looks the same,

// The using block
using (var obj = File.Create("path")) { }

// The clock method
File.Create("path").Close();

They look the same; however, their intermediate code shows otherwise.

// IL Code for "using" block
IL_0000: nop 
IL_0001: ldstr "F:/File.txt"
IL_0006: call System.IO.File.Create
IL_000B: stloc.0 // obj
IL_000C: nop 
IL_000D: nop 
IL_000E: leave.s IL_001B
IL_0010: ldloc.0 // obj
IL_0011: brfalse.s IL_001A
IL_0013: ldloc.0 // obj
IL_0014: callvirt System.IDisposable.Dispose
IL_0019: nop 
IL_001A: endfinally 
IL_001B: ret 

// IL Code for the close function
IL_0000: nop 
IL_0001: ldstr "F:/File.txt"
IL_0006: call System.IO.File.Create
IL_000B: callvirt System.IO.Stream.Close
IL_0010: nop 
IL_0011: ret

Oh-my-might-no! There is no way the “using” block could ever win with all that extra intermediate code for the .NET runtime to execute before exiting. I also timed these calls, using the Stopwatch object to count the “ticks” taken rather than the milliseconds per call. My code in LINQPad looked like this,

void Main()
{
   Stopwatch watch = new Stopwatch();
   watch.Start();
   using (var obj = File.Create("F:/file.txt")) { }
   watch.Stop();
   Console.WriteLine($"Time required for 'using' was {watch.ElapsedTicks}.");
 
   watch.Reset();
   watch.Start();
   File.Create("F:/file.txt").Close();
   watch.Stop();
   Console.WriteLine($"Time required for 'close' was {watch.ElapsedTicks}.");
}

Executing the above program always results in a win for the “Close” function call. Sometimes it was a close result, but “Close” still beat the “using” statement. The results are shown in the images below,

screenshot-5120 screenshot-5121 screenshot-5122

The same code, executed again, produced different results; there are many factors behind this, and all of them can be forgiven,

  1. The operating system might pause the program for its own processing.
  2. The program might not yet be ready for processing.
  3. And so on.

There are many reasons. For example, look at the last image: the time taken was 609 ticks, which also includes time spent on other work on the machine while the stopwatch kept running. In any case, the “using” statement was not the better option from a low-level code perspective.
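
To reduce that noise, one option is to repeat each call many times and compare the totals, which also washes out one-off costs such as JIT compilation of the first call. A rough sketch of that idea (the path and the iteration count are arbitrary choices of mine):

using System;
using System.Diagnostics;
using System.IO;

class UsingVersusClose
{
    static void Main()
    {
        const int iterations = 1000;
        var watch = new Stopwatch();

        // Time the "using" block over many iterations.
        watch.Start();
        for (int i = 0; i < iterations; i++)
        {
            using (var obj = File.Create("F:/file.txt")) { }
        }
        watch.Stop();
        Console.WriteLine($"'using' total: {watch.ElapsedTicks} ticks.");

        // Time the direct Close() call over the same number of iterations.
        watch.Restart();
        for (int i = 0; i < iterations; i++)
        {
            File.Create("F:/file.txt").Close();
        }
        watch.Stop();
        Console.WriteLine($"'Close' total: {watch.ElapsedTicks} ticks.");
    }
}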

Final words

Although I also recommend the “using” block in C# programs, there are situations where you should weigh one approach against the other. In this particular case, choosing one over the other brings no real benefit; it is a matter of personal taste. I totally agree with Vincent that the “using” block is helpful, just in other cases. Here, adding a Close call is (IMO) the cleaner way of writing the program.

At the end, it is all in the hands of the writer… Select the one you prefer.

Creating and hosting ASP.NET Core application on Linux — Nothing Third-Party

Introduction and Background

I have wanted to write something about ASP.NET Core hosting for a while, and Kestrel in particular interested me. Previously I have written a bunch of posts covering the basics of ASP.NET Core programming; in this one I will guide you through hosting an ASP.NET Core application natively, using the .NET Core libraries that you download during installation. Also, it has been a while since I have written anything at all, and I might have forgotten my readers’ interests, so excuse me if I miss a few things.

image_60140ec0-ce46-4dbf-a14f-4210eab7f42c
Figure 1: Image captured from Scott Hanselman’s blog.

There is a lot of stuff to share in this post today, so stay tuned and hold your breath basically. I will be covering the following steps in this article:

  1. Installation of the latest version of .NET Core on Linux environment
    • If you already have .NET Core installed, upgrade it if your current version does not support ASP.NET web programming.
    • In previous versions, web development wasn’t supported in .NET Core on Linux operating systems. It has been added in recent versions, which is why I insist that you at least install the latest version of .NET Core before continuing.
  2. Setup an ASP.NET Web Application in your Linux environment.
    • I will guide you through many stages, such as development, editing, maintenance and using an IDE for all of these things.
    • I will walk you through many different areas (not the ASP.NET kind of Area!) of the ASP.NET web application’s file hierarchy, and tell you what is back and what is about to change (or is subject to change) in that hierarchy.
  3. Build and run the project
    • In this section, I will share a few tips that you should find helpful.
    • Before running, I will give you an overview of Kestrel, the web server for ASP.NET 5.
    • I will then move onwards to using the website, which has the same feel as it had in Windows operating system, while developing ASP.NET web applications.
    • Tip: by default, the application is hosted on port 5000, which you will need to change in many cases; I will show you how to change the port of the web application’s hosting service.
  4. Finally, I will head over to the final words that I put at the end of every post I write.

Installing or upgrading .NET Core

I wrote an article covering how to start using .NET Core on Linux environments, which you can read here. That article was written to give you a concept of the .NET Core architecture and a few commands you can use to do most of the work. The only difference between that older article and this section is that here you install a later version of .NET Core on the system.

Try the following,

sudo apt-get update && sudo apt-get dist-upgrade

If that doesn’t work, or it doesn’t show the updates, head over to the main website for .NET Core, and install the latest version from there.

Creating a new web application

Previously, .NET Core only supported creating libraries and console applications. It now supports ASP.NET web application development too, including a template for ASP.NET; at the moment just a simple ASP.NET MVC-oriented web application is created, and I don’t think there is a pressing need for other templates such as Web API, SignalR, or single-page applications. I believe what is there is complete for now.

To create a new web application, create the new project in .NET Core and pass the type parameter as web, which tells .NET Core (dotnet) to create the new application from the web template.
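
In text form (it is also visible in the screenshot below), the command is:

dotnet new -t web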

screenshot-3007
Figure 2: Creation of a new .NET Core project using web template. 

As you can see in the command above, we pass a type (the -t parameter) to create a new web-templated application. The content is as follows,

screenshot-3008
Figure 3: ASP.NET Web application files and directories in .NET Core. 

The project looks similar to what we get in a Windows environment, with one notable point: it includes a web.config file as well as a project.json file. Remember that Microsoft is moving back to MSBuild, and a few of the hosting modules are based on IIS servers, which is why the web.config file is present in the project. For now, the project uses the project.json file for project-level configuration, and (as we have been hearing these days) this will be migrated to the MSBuild way of managing the project and other related things.

Editing and updating the web application

Microsoft has been pushing the limits for a while now. Of course, Linux already had some of the best development tools, but Visual Studio Code is still a better fit here. In previous posts I said that I don’t like it enough (I still don’t like it very much; it needs work!), but it is better than many tools and IDEs available for C# or .NET-related programming. Since this post avoids anything third-party, Visual Studio Code is the tool I am going to suggest, support, and use.

You can install Visual Studio Code from its official website. Note one thing: the Debian packages take a little longer to install, so if you, like me, don’t like to wait, I recommend downloading the archive package and installing the IDE from that. No installation is required at all; you just move the extracted files to the location where you want to keep them. Finally, you create a symbolic link to the executable, which can be done like this,

sudo ln -s /home/username/Downloads/VSCode-linux-x64/code /usr/local/bin/code

This creates the link, and you can then launch the Visual Studio Code IDE by executing the “code” command from any directory in the terminal; it loads that directory as a project in the IDE, which you can use to update your application’s source code and anything else you want.

Tip: Install the C# extension from market place for a greater experience.

Once this is done, head over to the directory where you created the project and execute the following command,

code .

This triggers Visual Studio Code to load your directory as a project in the IDE. This is what my environment looks like,

screenshot-3009
Figure 4: Visual Studio Code showing the default ASP.NET directory as a project.

The rest is history — I mean, the rest is the same effect and feel that you get in a Windows environment. First of all, when you work this way and follow my lead, you will see the following errors in the IDE window.

screenshot-3010
Figure 5: Error messages in Visual Studio Code.

We will fix these in a moment, but first understand that they are expected. If you have ever programmed in .NET Core, you know that you need to restore the NuGet packages for your project before anything else, even before building. Visual Studio Code uses the same IntelliSense and tries to tell you what is going wrong; to do so it needs binaries that are not yet available, so you are shown that last error message. The top two messages are optional; the middle one is somewhere between optional and required.

Build and run the project

Up to this point we have created the project. Now we need to restore the dependencies that .NET Core uses to actually execute our code. A lock file is not generated until the packages are restored, so we need to run the restore first; after that we can build the project.

Execute the following command,

dotnet restore

After this, .NET Core automatically generates the lock file, and the build process can be triggered.

screenshot-3012
Figure 6: Project.lock.json file created after restoring the project.

We can continue to build the project. Pay attention at this point; see what happens.

screenshot-3013
Figure 7: Build and run process of ASP.NET web application on .NET Core.

Now notice a few things in the terminal window above. First of all, notice that it keeps logging everything. Secondly, notice that it has the port “5000” appended. We didn’t train it to use that at all, and it also didn’t come from the Program.cs file (you can see that file!). But before I dig any deeper into how to change that, I want you to meet Kestrel, the web server of ASP.NET 5. Kestrel is built on the libuv async I/O library and hosts ASP.NET applications in a cross-platform environment. The reference documentation for Kestrel is also available; you can start reading most of it under the Microsoft.AspNetCore.Server.Kestrel namespace on the ASP.NET Core reference documentation website.

This is the interesting part, because ASP.NET Core can run on even a minimal HTTP listener acting as a web server. Kestrel is a new web server; it isn’t a full-featured one, but it adapts as community needs evolve. It supports HTTP/1 only as of now, and other features will follow in the coming days. But remember, pushing every single feature and module into one server would defeat the major purpose of .NET Core itself: .NET Core isn’t developed to push everything onto the stack, but to use only the modules and features that are required. And yes, if you want to add a feature, you can update the code and build the server yourself; it is open sourced on GitHub.

Changing the port number of application

In many cases, you might want to change the port number your server listens on, or you may want to make this server the default server for all HTTP-based communication on the machine. In such cases, you need a specific port number assigned (80 being the HTTP default).

For that, head over to your Program.cs file, the main program file (the main function of your project), which is responsible for creating the hosting wrapper for your web application in the ASP.NET Core environment. Its current content is,

screenshot-3015
Figure 8: Source code for the Program.cs file.

In that chain of “Use” functions, we just need to add one more call. The “UseUrls(...)” function allows us to set the URLs that this web application uses, so I am going to add it here to select which URL (and port) to listen on.
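
In code, the change amounts to one extra call in that chain. Here is a sketch of how the default template's Program.cs might look with it, trimmed to the relevant calls; the wildcard URL is my own choice, and the Startup class comes from the template:

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()                                    // Kestrel as the web server
            .UseContentRoot(Directory.GetCurrentDirectory()) // serve from the project directory
            .UseUrls("http://*:80")                          // listen on the default HTTP port
            .UseStartup<Startup>()                           // the template's Startup class
            .Build();

        host.Run();
    }
}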

screenshot-3017
FIgure 9: URLs being managed.

Now, if you try to run the application, you run into the following problem on a Linux-based system.

screenshot-3019
Figure 10: Unable to bind to the URL given, error message.

That one is pretty simple: the process doesn’t have permission to bind to that port. On Linux systems, binding to ports below 1024 requires superuser privileges, so we run the command with them.

sudo dotnet run

It prompts you for your password; enter it, and this time the application runs on localhost:80. In cases where you have to upload the website to servers, Azure App Service, and so on, the application must run as the default server, so you have to handle the default TCP port for HTTP requests. Otherwise, you might have to run behind an NGINX or Apache server, but that is a different story.

screenshot-3020
Figure 11: Web server running at port 80.

Now that our server is running, we can go to a web browser to test it out. I am going to use Firefox; you can use any web browser (even a terminal-based one).

screenshot-3021
Figure 12: ASP.NET Core web application being rendered in Firefox, running in Linux using Kestrel web server.

As you can see, the web application runs smoothly. It also logs every event that takes place; the terminal keeps a record of that, and you can redirect the terminal’s output to a file to store everything. But that is a topic for a different post.

screenshot-3022
Figure 13: ASP.NET web application’s log in terminal window.

As requests come and responses are generated, this window will have more and more content.

Final words

I have wanted for a while to write my next web application, for my own blog and such, in ASP.NET Core. At the moment, the framework is “almost” ready for production, but not quite. Maybe it will be ready in the next couple of builds. There are many things they still need to look into; for example, the web server needs to be more capable, with more features provided. And that is not all: Visual Studio Code must be fully integrated with the ASP.NET Core tooling. At the moment it just lets us edit the code, and then we have to go back to the terminal for the rest; the IDE should be able to do that for us.

To Microsoft’s .NET Core team: why isn’t .NET Core published in the Linux package repositories? I am pretty sure the framework is fully functional, despite the bugs; there are bugs in the main .NET framework too, minor issues here and there. My main wish is to be able to run “sudo apt-get install dotnet”; I can’t remember the longer package names.

To readers: if you are willing to use ASP.NET Core for your production applications, wait a while. Although ASP.NET Core is a better solution than many others available, my own recommendation is to stick with ASP.NET 4.6 for now, because it is the more stable version compared to this one.