
A quick start with .NET Core on Linux

I know what you may be thinking… yes, this is another enthusiastic post by this guy. 🙂 .NET Core is another product that I like, the first one being the .NET framework itself. It was last year that .NET Core got started, when Microsoft announced that they would release the source code as part of their open-source efforts. By following open-source project practices and ethics, Microsoft has been working very hard to bring global developers toward its environments, platforms and services. For example, the .NET framework works on Windows, C# is used to build Windows Store applications, and C# is also the primary language in Microsoft's web application development framework, ASP.NET, and much more. Microsoft's online cloud services are also primarily programmed in C#. These things are all interrelated, so when Microsoft brings all of this out as open source, things start to get a lot better!

Everyone knows about the background of .NET Core; if you don't, I recommend that you read the blog post by Microsoft, Introducing .NET Core. Their basic idea was to provide a framework that works on any platform, with any application type and any target framework.

Introduction and Background

In this quick post, I will walk you through getting started with .NET Core and installing it on a Linux machine. I will also give my views on why you should install .NET Core on a Linux machine instead of a Windows machine, and then walk you through the different steps of .NET Core programming and how you can use a terminal-based environment to perform multiple tasks. But first things first.

I am sure you have heard of .NET Core and the other cool stuff that Microsoft has been releasing these years. From all of these, the ones that I like are:

  1. C# 6
    • In my own opinion, the language looks cleaner now. These sugar-coated features make the language easier to write too. If you haven't read it yet, please read my previous blog post, Experimenting with C# 6's new features.
  2. .NET Core
    • Of course, who wouldn't love to use .NET on other platforms?
  3. Xamarin acquisition
    • I’m going to try this one out tonight. Fingers crossed.
  4. The rest is just the "usual" stuff.

In this post I am going to talk about .NET Core on Linux because I have already talked about the C# stuff.

Figure 1: .NET Core is a cross-platform programming framework by Microsoft.

Why Linux for .NET Core?

Before I dive any deeper, as I said, I will give you a few of my reasons for using .NET Core on Linux and not on Windows (yet!), and they are as follows. You don't have to take them seriously or always agree with them; they are just my views and opinions, and you are free to have your own.

1. .NET Core is not yet complete

It will take a while before .NET Core gets released as a stable public version. Until then, using this bleeding-edge technology on your own machine isn't a good idea, and sooner or later you will consider removing the binaries. In such cases, it is always better to use it in a virtual machine somewhere. I have set up a few Linux (Ubuntu-based) virtual machines for my development purposes, and I recommend that you do the same:

  1. Install VirtualBox (or any other virtualization software that you prefer; I like VirtualBox for its simplicity).
  2. Set up an Ubuntu environment.
    • Remember to use Ubuntu 14.04. Later releases are not supported yet.
  3. Install the .NET Core on that machine.

If something goes wrong, you can easily revert to where you want to be. If the code plays dirty, you don't have to worry about your data, or your machine at all.

2. Everything is command-line

Windows is not your OS if you like command-line interfaces. I am waiting for the Bash shell to be introduced in Windows as it is in Ubuntu; until then, I am not going to use anything that requires a command-line interface on my Windows environment. In Linux, however, almost everything has a command-line interface, and since the .NET Core toolchain is command-line based, I have enjoyed using it on Linux more than on Windows.

Besides, on the command line, creating, building and running a .NET Core project is as simple as 1… 2… 3. You'll see how, in the sections below. 🙂

3. I don’t use Mac

Besides these points, the only reason left for not using Mac OS for .NET Core is that I don't use it. You are free to use Mac OS for .NET Core development too. .NET Core does support it; it's just that I don't use that development environment. 😀

Installation of .NET Core

Although it is intended that soon, the command would be as simple as:

$ sudo apt-get install dotnet

A similarly simple command is intended for Mac OS X and other operating systems besides Ubuntu and Debian derivatives. But while .NET Core is still in its development phase, that is not going to happen. Until then, there are other steps that you can perform to install .NET Core on your own machine. I'd like to skip this part and let Microsoft give you the steps to install the framework.

Installation procedure of .NET Core on multiple platforms.

After this step, do remember to make sure that the platform has been installed successfully. In almost any environment, you can run the following command to get the success message.

> dotnet --help

If .NET Core is installed, it will simply show the version and other help material on the screen. Otherwise, you may want to make sure that the procedure did not run into any problems during the installation of the packages. Before I get started, I want to show you the help material provided with the "dotnet" command:

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet --help
.NET Command Line Tools (1.0.0-preview1-002702)
Usage: dotnet [common-options] [command] [arguments]

Arguments:
 [command]      The command to execute
 [arguments]    Arguments to pass to the command

Common Options (passed before the command):
 -v|--verbose   Enable verbose output
 --version      Display .NET CLI Version Number
 --info         Display .NET CLI Info

Common Commands:
 new           Initialize a basic .NET project
 restore       Restore dependencies specified in the .NET project
 build         Builds a .NET project
 publish       Publishes a .NET project for deployment (including the runtime)
 run           Compiles and immediately executes a .NET project
 test          Runs unit tests using the test runner specified in the project
 pack          Creates a NuGet package

So, you get the point that we are going to look deeper into the commands that dotnet provides us with.

  1. Creating a project
  2. Restoring the project
  3. Running the project
    • Build and Run both work; run executes the project, while build only builds it. That was easy.
  4. Packing the project into a NuGet package

I will slice the stuff down to make it even clearer and simpler to understand. One thing that you may have noticed is that the option to compile the project natively is not provided as an explicit command in this set of options. As far as I can tell, this support has been removed until further notice. Until then, you need to pass a framework type to compile and build against.

Using .NET Core on Linux

Although the procedure on both environments is similar, I am going to show you the procedure on Ubuntu. Plus, I will explain the purpose of these commands and the multiple options that .NET Core provides you with. I don't want you to feel lonely here, because most of the paragraphs here are addressed to the Microsoft team working on the .NET project, so I will be providing a few suggestions for the team too. But don't worry, I'll make sure the content remains on-topic.

1. Creating a new project

I agree that the framework is not yet near release, so I think I should pass on my own suggestions for the project too. At the moment, .NET Core supports creating a new project in the current directory and uses the name of the directory as the default name (if a name is required at all). Beginners in .NET Core are only aware of the commands that come right after "dotnet". However, these commands accept a few more parameters and optional values, such as:

  1. Type of project.
    • Executable
      • At the moment, .NET only generates DLL files as output. Console is the default.
    • Dynamic-link library
  2. Language to be used for programming.

To create a new project, at the moment you just have to execute the following command:

$ dotnet new

In my own opinion, if you are just learning, this is enough for you. However, if you execute the following command you will get an idea of how you can modify the way the project is created, including the programming language being used.

$ dotnet new --help
.NET Initializer

Usage: dotnet new [options]

Options:
 -h|--help Show help information
 -l|--lang <LANGUAGE> Language of project [C#|F#]
 -t|--type <TYPE> Type of project

The options are optional. However, you can pass those values if you want to create a project with a different programming language, such as F#. You can also change the type; currently, however, only console applications are supported.

I already had a directory set up for my sample testing,

Figure 2: Sample directory open. 

So, I just created the project here. I didn't mess around with anything at all; you can see that .NET itself creates the files.

Figure 3: Creating the project in the same directory where we were.

Excuse the fact that I created an F# project. I did that so I could show that the language to be used in the project can be passed too. I removed that, and instead created a C# program-based project. This is a minimal console application.

In .NET Core applications, every project contains:

  1. The program's source code.
    • If you create an F# project, the program is written in the F# language; with the default settings, the program is a C# program. A sketch of the generated program follows this list.
  2. A project.json file.
    • This file contains the settings of the project and the dependencies that need to be maintained in order to build it.
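
For reference, here is roughly what the generated C# program looks like. This is only a sketch of the default console template (the exact namespace and formatting may differ between tooling versions):

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // The default template simply prints a greeting to the terminal.
            Console.WriteLine("Hello World!");
        }
    }
}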

However, before you can run the project you need to build it, and that is what we are going to do in the next steps.

2. Restoring the project

We have only just created the project. This simply means that the project needs to be restored and the dependencies need to be resolved before we can actually build and run it. This can be done using the following command:

$ dotnet restore

Note: Make sure you are in the working directory where the project was created.

This command does the job of restoring the packages. If you try to build the project before restoring the dependencies, you are going to get the following error message:

Project {name} does not have a lock file.

.NET Core uses the lock file to look up the dependencies that a project requires and then starts the build and compilation process. In other words, this file is required before your project can be built and executed.

After this command gets executed, you will get another file in the project directory.

Figure 4: Project.lock.json file is now available in the project directory.

And so finally, you can continue to build the project and run it on your own machine with the default settings and setup.

3. Building the project

As I have already mentioned above, native compilation support has been removed from the toolchain, and I think Ubuntu developers may have to wait for a while; it may only be supported on Windows until then. However, we can still execute the project as we would, and we can perform other operations too, such as publishing and NuGet package creation.

You can build a project using the following command,

$ dotnet build

Remember that you need to have restored the project once. The build command does the following things for you:

  1. It would build the project for you.
  2. It would create the output directories.
    • However, as I am going to talk about this later, you can change the directories where the outputs are saved.
  3. It would prompt if there are any errors while building the project.

We have seen the way the previous commands worked, so let's slice this one into bits too. This command, too, can be customized. It provides you with optional flags such as:

  1. Framework to target.
  2. Directory to use for output binaries.
  3. Runtime to use.
  4. Configuration etc.

This way, you can automate the build process by passing the parameters to the dotnet command.

4. Deploying the project

Instead of using the term running the project, I think it would be better to say deploying the project. One way or the other, a running project is also deployed on the machine before it can run. First of all, I will show the project running; later, I will show how to create NuGet packages.

To run the project, whether you have already built it or not, you can just execute the following command:

$ dotnet run

This command also builds the project if the project is not yet built. Now, if I execute that command on my directory where the project resides, the output is something like this on my Ubuntu,

Figure 5: Project output in terminal.

As seen, the project works and displays the message "Hello World!" in the terminal. This is a simple console project with a single console output statement in C#, which is why the program behaves this way.

Creating NuGet packages

Besides this, I would like to share how you can create a NuGet package from this project using the terminal. NuGet packages have been on the scene for a very long time, and they were already very easy to create in the Visual Studio environment. The process is even simpler with this toolchain. You just have to execute the following command:

$ dotnet pack

This command packs the project in a NuGet package. I would like to show you the output that it generates so that you can understand how it is doing everything.

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet pack
Project Sample (.NETCoreApp,Version=v1.0) was previously compiled. 
Skipping compilation.
Producing nuget package "Sample.1.0.0" for Sample
Sample -> /home/afzaal/Projects/Sample/bin/Debug/Sample.1.0.0.nupkg
Producing nuget package "Sample.1.0.0.symbols" for Sample
Sample -> /home/afzaal/Projects/Sample/bin/Debug/Sample.1.0.0.symbols.nupkg

It builds the project first; if the project was already built, it skips that step. Once that is done, it creates a new package and simply generates the file that can be published to the galleries. The NuGet packaging command allows you to perform some other functions too, such as updating the version number; it also allows you to specify the framework, etc. For more, have a look at the help output for this command:

afzaal@afzaal-VirtualBox:~/Projects/Sample$ dotnet pack --help
.NET Packager

Usage: dotnet pack [arguments] [options]

Arguments:
 <PROJECT> The project to compile, defaults to the current directory. 
 Can be a path to a project.json or a project directory

Options:
 -h|--help Show help information
 -o|--output <OUTPUT_DIR> Directory in which to place outputs
 --no-build Do not build project before packing
 -b|--build-base-path <OUTPUT_DIR> Directory in which to place temporary build outputs
 -c|--configuration <CONFIGURATION> Configuration under which to build
 --version-suffix <VERSION_SUFFIX> Defines what `*` should be replaced with in version 
  field in project.json

See the final one, where it shows the version suffix. It can be used to update the version based on the build version and so on. There is also a setting which allows you to modify the way the build process updates the version count. This is a widely used method for changing the version number based on the build that produced the binary outputs.

The NuGet package file was saved in the default output directory.

Figure 6: NuGet package in the default output directory.

The rest is easy: you can just upload the package from here to the NuGet gallery.

Final words

Finally, I was thinking I should publish a minimal ebook about this guide. The content was getting longer and longer and I was getting more and more tired; however, since this gave me ideas about many things, I think I can write a comparison of .NET Core on Windows and Linux, and I think I have enough time to do that.

Secondly, there are a few suggestions for end users that I want to make.

  1. Do not use .NET Core for commercial software. It is going to change soon.
  2. .NET Core is a bleeding-edge technology and, since there is no documentation yet, you are going to waste a lot of time learning and asking questions. That is why, if you are considering learning the .NET world, learn the .NET framework and not .NET Core. The .NET framework has a great amount of good resources, articles, tips and tutorials.
  3. If you want cross-platform features and great support like the .NET framework, my recommendation is the Mono Project over .NET Core, because .NET Core is not yet mature.

I have a few feedback words on the framework itself.

  1. It is going great. Period.
  2. Since this is a cross-platform framework, features such as "dotnet compile --native" must not be Windows-only. They must be made available on every platform.

At last, the framework is a great one to write programs for. I enjoyed programming for .NET Core because it doesn't require much effort. Plus, the benefit of multiple programming languages is still there. Besides, Visual Studio Code is also a great editor to use, and the C# extension makes it even better. I will be writing a lot about these features these days, since I am free from all of the academic stuff at the moment. 🙂

See you in the next post.


Why use C#, and when to prefer other languages?!

Introduction and Background

Whether it is Quora, C# Corner or CodeProject, beginners and novice users are always asking questions like, "Which one should I use, C++ or C#?", and many more similar questions. It is hard to provide an answer to such questions across multiple platforms and multiple questions. That is why I thought, why not write a blog post to cover most of the aspects of the question? In this post I am going to cover a few of the concepts that you may want to understand to choose a language from the most widely used ones:

  1. C or C++
  2. Java
  3. C# (or .NET framework itself)

Basically, my own preference in many cases is C#, but that depends on what I am going to build. I have been using C, C++, Java and C# in many applications, depending on their needs, on how they allow me to write the programs and on whether the language suits that type of project and application architecture.

The way I do that is a simple trick, which I am going to share with you in this blog post. Among these, a few of the very basic things to consider are:

  1. How simple and easy would it be to use this language in the program?
  2. What is the productivity of that language in your scenario?
  3. How difficult is the language, and how many people know it?

I will be talking about these points in this post, to make it clear where to use which programming language. But my major concern in this post is to cover the aspects of the C# programming language.

Productivity of language

The first thing to always consider about a language is the productivity that it can provide your team with. Productivity of the language depends on many factors, such as:

  1. How are programs built in that language?
  2. How fast and reliable is it to build the project regularly?
  3. Is the program's source code readable for other members of the team?
  4. Is the language itself suitable for the case that we are going to use it in?

Productivity of the language, in my opinion, is the first thing to discuss openly with your team, not something to decide in a back room. You should get your team into a conference room and then talk about a language. There may be many aspects where one language falls short and another does well. There will be many factors that most of the programmers don't know, but while talking about them one of the geeks will stand up and raise another important point that may guide your team to walk the correct path and build greater applications.

Let's look at a graph.

Figure 1: Productivity graph of most widely used programming languages.

In this chart we can see that C is the most productive of the programming languages we are interested in: C, C++, Java and C#. C++ and C# are competing with each other, whereas Java is a bit behind C#, and so on.

In the above graph, it is clear that C# is an average programming language. Average doesn't mean it is below the requirement; it means that it can be used in almost any case, and its performance won't fall as the data increases. The successful-run graph shows that C# programs succeed in many cases. That is why C# often proves to be a valid candidate: it is a general-purpose programming language.

Then the question arises, "Productive in which sense?"

That is the most important part here. Now, this topic may take a bit more time, to explain how productive a language is. For that, I have dedicated the topic, "Choosing the right language for the right task". The right programming language for the right task will always help you elevate the speed of your team's programming! But among other aspects, the most important ones are:

  1. What sort of project is this?
    • Of course you don't want to use a knife to unscrew a screw.
  2. What IDE and compilation tools are present?
    • Even a program written in C can go wrong if the compiler is buggy or inefficient.
  3. How would your team like to manage the projects?
    • Does the language support good software engineering concepts?
    • Can it be used to generate good diagrams: use-case, activity, class and other diagrams?

Thus, you should always think about the productivity of your development team while deciding which language to use.

Choosing the right language for the right task

It is always said to use the best tool for the job! Then why not use the best programming language for your projects, to get the best possible results? Before I continue any further, there is one thing I want to mention… "There is no best programming language ever built." Every time a programming language is built for a purpose, another group of programmers jumps in to create a new programming language, for the sake of fixing the errors of the previous language. What do you think motivated Bjarne to create C++ when C and Simula were already in the market?

In this section, there are many things to consider, from the performance and benefits for the clients, to the perk packages for the employees, to the efficiency of the project repositories and the tools provided for that programming language.

C# was created by Microsoft and therefore, in my opinion, has by far the most efficient tools for programming. I mean, Visual Studio alone is the beast in this game.

Figure 2: Visual Studio logo.

I am a huge fan of Visual Studio, and I would doubt someone who isn't a fan of Visual Studio. C# has better support and the benefit of using the best IDE out there. Java also has a number of IDEs, so does C++, and many C programs are written in minimal environments, like a small program to manage and compile the C programs; no offence, geeks!

Figure 3: Graph of performance to safety ratio.

Now, if you look at this graph, it's pretty clear what it is trying to tell you. Of course, as I mentioned in the starting paragraph, there is no best programming language.

In many cases, C# and Java rule over C++ (and C), and in many other cases C++ and C rule over C# and Java. There are many factors, like the programming paradigms, the performance of the generated code, just-in-time compilation, memory-management delay and so on. While C# and Java may provide the best environment to build "managed" programs in, there are many cases where C# and Java don't work well, like writing a low-level program. Java developers wanted to build a Java OS, but they had to give up because some things aren't meant to be done in Java. 🙂

Always consider searching before making the final call. There are many companies working in the same field that you are going to work in. There may be many packages and languages built for your own field that can help you get started in no time!

Figure 4: Top 10 programming languages.

But, I think these are a bit backwards. I think, C is on the top because it causes a lot of trouble to beginners, so everyone is searching for “How to do {this} in C” on Google, raising the rankings. 😉

Selecting the best framework

I don't totally agree with people when it comes to talking about frameworks like Java, Qt (which I do like in many cases, such as Ubuntu programming) and other frameworks for building applications that run regardless of the architecture of the machine. In this case, my recommendation and personal views for the .NET framework are very positive. As already mentioned, I have programmed with the Qt framework for Android, Ubuntu and Linux itself. It was a really very powerful framework to build applications on. But the downside was that it was tough to learn, their compilers were modified and their C++ was tinkered with.

When selecting the best framework for application development, my criteria are as follows:

  1. How flexible is the framework?
  2. Which languages does it support?
    • Some frameworks support multiple languages; the .NET framework, for example, supports C#, VB.NET, Visual C++ and JavaScript applications.
  3. Is it cross-platform?
  4. If not cross-platform, then does it support multiple architectures, at least?

The Java framework is cross-platform, and entirely framework-oriented. You simply have to target the framework, regardless of the operating system or architecture being used. The .NET framework, on the other hand, is a very beautiful framework to write applications on. It uses C#, VB.NET and C++ (no Java!) to write the applications, and the compiled binaries can then be executed on any machine that supports the .NET framework. This provides excellent cross-architecture support.

C#, however, does not support Mac OS X at the moment. Microsoft has started to roll out cross-platform binaries for C# programs: .NET Core has been a great success, and once it gets released as a public stable version, I am sure most companies will start to target it. But that is not going to happen in the near future, in which case Java and C++ are better than C#.

If you are interested in C# programming on multiple platforms, consider using Mono Project instead. You can read about that, on my blog here: Using C# for cross-platform development.

Figure 5: Top languages and their platforms of usage.

Java may be rated at 100% platform support in the rankings, but C# also supports multiple platforms and is growing rapidly, making the language better than the rest. With the release of C# 6, Microsoft has proved that the language is much better than the rest in the race. There are many features that I like in C# 6:

  1. String interpolation
  2. Getter-only auto-properties
  3. Improvements to lambdas and expression-bodied members

There are a few things Java still doesn't have. For example, one-statement code to write data to a file and to read it back. In Java you have to get messy with the streams, or you have to write entirely non-intuitive code with the Files object and then get the data from there… Yuk!
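
To illustrate the point with a small sketch (not from the original post; the file name is made up), the File class in C# gives you single-statement writes and reads, and string interpolation keeps the code readable:

using System;
using System.IO;

class FileDemo
{
    static void Main()
    {
        var name = "Afzaal";

        // One statement to write the data, one to read it back.
        File.WriteAllText("greeting.txt", $"Hello, {name}!");
        Console.WriteLine(File.ReadAllText("greeting.txt"));
    }
}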

Performance of the compiled code

Writing the source code may be different in many ways:

  1. Syntax of the programming language.
  2. The way their objects and procedures are imported in the source code.
  3. How clean the source code looks. Many programming languages are just nightmares.

But the major concern comes to mind when we are going to execute the programs. In many cases, or should I say in all cases, the languages which are bytecoded lag behind the languages that are compiled to native or assembly code. For example, C or C++ code is faster because it is not compiled to a bytecode; it is generated as machine code for a platform or architecture. C# or Java programs are compiled down to a bytecode, which causes JIT compilation to occur when the programs are executed. This takes time.

However, you can look at the following charts from https://attractivechaos.github.io/plb/ and see for yourself how similar the bytecoded languages are and how different the compiled languages are.

Figure 6: Mathematical calculations.
Figure 7: Pattern matching and machine learning.

This makes it pretty clear how much faster compiled languages are than bytecoded ones, and then come the interpreted ones, like Ruby.

In many cases, there are other factors too, which cause bad performance:

  1. Bad compiler.
  2. A lot of resources being allocated.
  3. Slow hardware resources; bad combination of CPU, RAM and other peripherals.
  4. Bad programmer and bad code being used.
  5. Bad practices of programming.
  6. Keeping the CPU idle for most of the time.

Many more factors are valid candidates for slowing a program down.

Finally… Final words

As I have already mentioned, there is no best programming language out there. One language is best in one case, another is best in another case, and so on and so forth. In such cases, it is always better to communicate with your development team, ask them questions and ask for their feedback. Choosing good tools for your projects is always a good approach and a good step in the process of programming.

If your software engineer or software architect is trying to find a good solution, ask them to work on the following questions:

  1. What are the teams and developers qualified for?
    • Asking a team of C++ programmers to leave C++ and join Java or C# programming teams is not a good idea!
    • Always consider the best approach.
    • Time is money — Manage it with care.
    • Recruit more programmers with skills.
  2. Is the programming language long lived, or a minor one?
  3. Is programming language capable of making your application work?
    • I have used many programming languages, and thus I can decide which language to use. It is the job of your software architect to decide which languages to select.

If you follow these rules, you will save yourself from many of the future questions that cause a lot of problems. Many times developers ask questions like, "I have an application in Java, how do I migrate it to C#?" Many web developers ask questions like, "I have a PHP application, how do I convert the PHP code to ASP.NET?" These questions have only one answer: "Re-write the application."

There are many things that you should consider before designing the applications. Facebook is stuck with PHP because it was written in PHP. Even if they wanted to migrate to ASP.NET, they wouldn't. Instead, they have found workarounds for PHP's bugs and downsides.

This is why you should always consider holding a conference or meeting to decide how to start the project, which programming language to use, what frameworks to target, who will lead the team and much more. These few days of discussion and design will save you a lot of money and time in the future.

What can Windows Runtime teach .NET developers?

Introduction and Background

The C# programming language has evolved a lot over these years, and many frameworks and platforms have been created and developed that support C# as their primary programming language. Indeed, C# was released with the .NET framework but has grown to support Windows Runtime applications, ASP.NET applications and many .NET framework platforms such as WPF, WinForms, etc. I have programmed for all of these frameworks, taught beginners in these frameworks and also written a lot of articles about them. However, the framework that appealed to me the most was Windows Runtime. Windows Runtime is the new kid on the block, with an object-oriented design and a performance factor much similar to that of C++ applications.

At a higher level it seems a very simple, easy and straightforward application development framework. At its core, it has a great amount of validations and checkpoints, and most of the time it is a pain in somewhere censored. I remember the days when I was developing applications for the .NET framework: WPF, WinForms, ASP.NET and other similar ones. I did have a few problems learning and starting, but there was one thing: the underlying framework was a very patient one. It never showed hatred for beginners or newbies. But when I started to write applications for Windows Runtime, I found out that it had no patience in it at all. It was like, do this or it's not gonna work.

Figure 1: Windows kernel services and how it branches into different frameworks. 

In this post, I am going to collectively talk about a few things that Windows Runtime may teach C# programmers, for a better overview of the applications that they are going to develop.

1. As a framework

Windows Runtime came into existence way after the .NET framework itself and the child frameworks of .NET. But one thing that I find in Windows Runtime is that it is very much based on asynchronous programming patterns. Microsoft did not want to provide a good UI and UX at the expense of performance.

One of the things that I find in Windows Runtime is that it still uses the same philosophy as the C# programming language, but adds the asynchronous pattern, a lot. Not too much in a bad way, but in a positive way. The benefit is that you get to perform many things at the same time and let the framework handle the stuff. So one of the things that it teaches is that you should always consider using tasks when there is a chance of latency:

  1. Network
  2. File I/O
  3. Long running tasks

These are the bottlenecks in the performance. Windows Runtime allows you to overcome these. .NET framework also uses the same mechanism, but developers typically leave out that part and go for the non-async functions. Bad practice!
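
As a hedged illustration of that advice (the URL and file name below are placeholders, and this is plain .NET code rather than a Windows Runtime API), awaiting network and file I/O keeps the calling thread free instead of blocking it:

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class LatencyDemo
{
    // Network and file I/O are awaited instead of blocking the calling thread.
    static async Task<int> DownloadAndSaveAsync()
    {
        using (var client = new HttpClient())
        using (var writer = new StreamWriter("page.html"))
        {
            string content = await client.GetStringAsync("https://example.com/");
            await writer.WriteAsync(content);
            return content.Length;
        }
    }

    static void Main()
    {
        // A console program can simply block on the task; UI apps would await it instead.
        Console.WriteLine($"Downloaded {DownloadAndSaveAsync().Result} characters.");
    }
}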

References

For more on asynchronous programming, please refer to:

  1. Diving deep with WinRT and await
  2. Asynchronous Programming with Async and Await

2. Modularity

C++ programmers have been using modular programming, developing small snippets of code and then using them here and there. Basically, modularity provides you with an efficient way of reusing the code in many areas of the application. Windows Runtime has a great modularity philosophy.

The foundation has been laid down on categories, and under those categories there are namespaces that contain the objects that communicate and bring an outstanding experience for the developers.

C# is an object-oriented programming language, which means that even if you force yourself to have a single source file, you are still going to have multiple objects working collectively for each purpose of that application.

Yet, it is recommended that you keep things where they belong.
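
As a tiny, hypothetical sketch of that idea (the namespace and type names are mine, not from any real API), keeping related types under their own namespaces is what makes them easy to find and reuse:

namespace MyApp.Storage
{
    // Everything related to persistence lives in this module.
    public class FileStore
    {
        public void Save(string path, string content) =>
            System.IO.File.WriteAllText(path, content);
    }
}

namespace MyApp.Networking
{
    // Everything related to the network lives in this module.
    public class Endpoint
    {
        public string Url { get; set; }
    }
}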

Figure 2: Modules can contain the functionality in them, through which user can communicate with the underlying objects and data sources.

References

  1. Windows API reference for Windows Runtime apps
  2. Modularity

3. Simplicity of the development

I don't want to lie here: Windows Runtime is simple, it just takes time to understand its simplicity. 😉

When I started to write applications for the Windows Runtime framework, I did not understand the framework or how to develop the application. That is because I was fond of straightforward small programs. Windows Runtime is a beast, a humungousaur, with its handles hanging down to the developers through the interfaces of the C# language.

Windows Runtime has everything already set up in different areas. Like, the views, source code, capabilities, properties, resources and much more.

Visual Studio brings a far simpler means of development. I mean, Visual Studio handles everything:

  1. Managing the source code.
  2. Managing the output directories and how binaries are generated.
  3. Managing the visual items; the image assets may be difficult to add, but Visual Studio makes it really very easy to add assets of different sizes.
  4. The properties and manifest of the application are also written in XML. But Visual Studio makes it really very simple to edit and update the manifest of the application.

While building applications for other platforms and frameworks, like WPF, we can use the same philosophy as Windows Runtime to build a great project and package directory. The settings and configuration files can be kept separate to make the development process simpler.

4. Keep all architectures in mind

.NET framework developers don't have to worry about the architecture that they are going to target. That made them forget how an application would be targeted at multiple devices and environments. Windows Runtime is not like that. Windows Runtime generates multiple binaries for multiple environments, architectures and devices:

  1. x86
  2. x64
  3. ARM

These are a few of the configurations for which binaries are generated, and since the code that gets generated is native, it must match the architecture; otherwise, the results are undefined.

However, implementing the multiple-architecture pattern also requires more manpower and more time for testing and implementing the patterns. Windows Runtime in Visual Studio has all of that already supported, but the thing is, you need your team ready for any new architecture to be included and tested.

5. You may want to test it again!

Before Windows Runtime, I thought that if everything works correctly, it is going to work anyway. But ever since I started to write applications for Windows Runtime and the Windows Store, I dropped that mindset and wanted to ensure that an application passed all of the tests that it must undergo. That all started when I was about to upload my application, "Note It! App", to the Windows Store. The application was passing all of the tests in Debug mode. Yet, when it was tested in Release mode, it failed many of them.

Note: I am not talking about the Windows App Certification Tests.

The thing is, the code in Debug mode has a debugger attached, which knows when something is going wrong, tells the developers about it and lets us fix it. In Release mode, it is not the same. The error checks, the memory segmentation and other stuff in Windows Runtime are not the same as in the .NET framework. The .NET framework uses just-in-time compilation and performs many checks. However, in Windows Runtime, that is not the case. Everything is pre-compiled to overcome the JIT latency.

That is exactly why you should build the application in Debug mode, but always test the application in Release mode too. If you only test the application in Debug mode, you will skip a few of the key points in your application against which it needs to be checked.

Also, at the end, the App Certification Kit is useful to check whether other things, like resources, binaries and signature packaging, are all well.

Figure 3: App Certification Kit.

References

For more about it, read these:

  1. Release IS NOT Debug: 64bit Optimizations and C# Method Inlining in Release Build Call Stacks
  2. Debugging Release Mode Problems

Points of Interest

No developer wants to publish their buggy and faulty application on the Internet. I try not to publish an application until it has passed every possible test condition. However, no software is a 100% bug-free and perfect solution.

In this post, I didn't mean to present Windows Runtime as an ideal case for solution building; instead, I just wanted to share a few of the great cards it has up its sleeve. The .NET framework is simple, easy and agile to build on. But that agility may drive you insane someday soon.

Always keep these things in mind when writing applications.

Building a custom documentation tool

Introduction and Background

A team of software developers would all be willing to write a great amount of documentation, because after all it is the documentation that provides their users with a resource where they can get some help. But if your team believes in Agile development methodologies, then you are better off left with programming and a "no documentation" face! That is not at all a problem for most programmers, but it is also not a good practice after all. Most of the libraries and tools out there require good documentation of their APIs. This can be understood from the fact that many programmers are building applications for Microsoft platforms and not for other platforms. I consider myself one of them. The reason is simple: to get any kind of help on Microsoft platforms, I just need to go to MSDN (Microsoft Developer Network) and I can get any document, with help for any framework, any object, any task and much more. Microsoft has really invested a lot of its time in building a great amount of resources on the Internet. Their investment pays off in a way that others cannot expect to be paid. The result is that they get a lot of attention from developers, and novice programmers also feel great while programming. Compare this to Linux, compare this to any Java library, and so on. I don't want to be biased, but Java APIs, even Oracle's, are very badly written. There is a list of functions, but no remarks, no explanation, no nothing. That makes it much more complex for beginners to learn and get a grasp of.

Currently I am pursuing my Master's degree in Computer Science, and I am going to have a project to build later this year. While that is still some months away, I am thinking of building something that would help me in many ways. One of them is a custom program that helps me document the API so that I can submit it to the professors when they ask me for my project.

This blog post is about the “things” I have thought about, while building the tool. I haven’t built the tool yet, but I am hoping to have it built soon. But, I can show you the blueprint and the scratch code for that.

Why document the code?

If you are an indie developer and you don't like to share the code with anyone, don't worry about this. Chances are that 2 or 4 years later you may "still" remember why you wrote that code. But if you also have to share the same code with your partner, team or the external world, then you should at least comment the code well. And in the cases where you don't want to share the internal code, you should provide well-written API documentation for that code.

That will save you from many messages, emails and pieces of feedback like, "How do I do that?". That question annoys many people because the developer of the API is already aware of the "simplicity" of the API, but others are not. They are not aware of the objects introduced, and so on. In such cases, you should always consider documenting the code well, so that others, when they face trouble, can simply read that documentation and say, "Oh, that's how I do that!"

In many cases, it also helps you when after 2 years, you lose the focus that you had back in those days when you started the project. Happens to me too. 🙂

Figure 1: Keep calm programmers! 

How to build a custom documentation tool?

I am sorry to disappoint you guys, but I am going to talk about C# only. There was a long time when I was learning other frameworks, writing applications for Linux and compiling articles and books for other operating systems. This is when I feel I need to get back to where I belong: I belong to C#. I am going to use C# to show you how you can build an application that documents the code you have written. Now, that is a somewhat complex idea and process. But for the sake of this post, I am going to keep things really very short here, and I will demonstrate the use of the very basic concepts to build such a great and vital tool for your team, so that they can focus more on the programming part and leave the documentation part to this tool itself.

I am going to use the following three basic features:

  1. C# language itself.
    • You can use any framework, Console, WPF, WinForms etc.
  2. HTML document for previewing the results.
    • You may use ASP.NET applications or web sites for previewing. I used static HTML pages.
  3. Some reflection.

Of the above parts, I think most of you will get confused when it comes to "Reflection" in C#. Well, it is not that tough a part to understand. Reflection in C# (or .NET itself) is a very simple and easy concept. Using reflection, we can simply get to know the assemblies, objects, classes and their members at run time. In the IDE, we know what the class name is and which functions are defined; but knowing that at run time is the job of reflection.

Introduction to Reflection

Before I finally end the post, I want to give you an overview of reflection in C#. If you have ever programmed in C#, you are aware of the typeof() operator or the GetType() function. Both of them are the first steps toward reflection. Basically, reflection is performed on the types of the objects. Types expose the assembly information, member information, properties, functions (methods), event information… much more! So we use the type to determine many values, like the names, assembly, namespace, versioning, etc.

I am going to use the same mechanism in the tool: I am going to extract the properties from these types and build HTML documentation for the classes. This is the same mechanism used on MSDN or in any other "good" documentation. You should never write the documentation yourself, one object after the other. Create an interface which does the underlying task, and then do the modifications and reviews.

I would end this section with this one code sample:

class Student {
   public int ID { get; set; }
   public string Name { get; set; }
}

// In the main function
public static void Main(string[] args) {
    var type = typeof(Student);

    // The above line is similar to:
    // var type = new Student().GetType();

    // But prefer the previous one; it does not need an instance.

    // We can use the variable to extract the name of the type.
    Console.WriteLine(type.Name);
}

// Output: Student

We get many other similar functions that we can use to get more information about the class we are working with; a small sketch follows.
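
For example, here is a small sketch (my own code, building on the Student class above) that lists the public properties of a type along with their types:

using System;

class Student {
    public int ID { get; set; }
    public string Name { get; set; }
}

class ReflectionDemo {
    public static void Main(string[] args) {
        var type = typeof(Student);

        // Enumerate the public properties that the type exposes.
        foreach (var property in type.GetProperties()) {
            Console.WriteLine($"{property.PropertyType.Name} {property.Name}");
        }
    }
}

// Possible output:
// Int32 ID
// String Name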

You should try them out; for further reading, please refer to MSDN, System.Reflection Namespace.

Building the HTML document

As I have already mentioned, you can use an ASP.NET web site or web application too, to present the documentation to online users. But I used static HTML pages, which are much simpler to build and don't require managing the rest of the underlying stuff of a web application.

What I did was use a C# program to render the HTML content for me, based on the type that I pass. I created a new class, "DocumentHelper", with a function which takes an object and renders the HTML document for its name. Here is what I made:

<!doctype html>
<html>
<head>
   <title></title>
   <style>
      html {
         margin: 0 auto;
         font-family: 'Segoe UI';
         font-size: 13px;
      }
 
      body {
         /* Nothing special here. */
      }

      table {
         min-width: 500px;
         border: 1px dashed #cecece;
         padding: 5px 10px;
         text-align: center;
      }

      table tr:first-child {
         width: 150px;
         font-size: 15px;
         font-weight: 600;
      }
   </style>
</head>
<body>
   <h4></h4>
   <p></p>
   <table id="properties">
 
   </table>
   <table id="methods">
 
   </table>
</body>
</html>

This is the very basic HTML document that will be used to render the details for the item. We can then use the C# program to loop over the properties and render them in our HTML document.

If you use C# 6, you also get useful features such as string interpolation, etc.

// Requires: using System.Text; 'type' is the Type we inspected earlier, e.g. typeof(Student).

// Create the instance
StringBuilder builder = new StringBuilder();

// Append the HTML head, title and style
builder.AppendLine("<!doctype html><html><head>");
builder.AppendLine($"<title>{type.Name}</title></head><body>");
builder.AppendLine($"<h4>{type.FullName}</h4>");

// Append a table row for each public property, based on an iterative loop
builder.AppendLine("<table id=\"properties\">");
foreach (var property in type.GetProperties())
{
    builder.AppendLine($"<tr><td>{property.Name}</td><td>{property.PropertyType.Name}</td></tr>");
}
builder.AppendLine("</table>");

// Append the final lines
builder.AppendLine("</body></html>");

// Get the document
var htmlDoc = builder.ToString();

You can then store the file, and at the same time, execute the call to show the file too.

System.IO.File.WriteAllText(linkToFile, htmlDoc); // save the generated markup first ('linkToFile' holds the target path)
System.Diagnostics.Process.Start(linkToFile);

It will open the default application to render the HTML file (if you saved the file with a .htm or .html extension). In my case, it was Google Chrome. I did not write the table content or anything, just the simple title and the full name of the object. Here is the web page screenshot:

Figure 2: Rendering the dynamic HTML content.

This way, we can use other classes and their types to render the documentation web pages too.

Points of Interest

In this post, I have talked about the simplest method required to build a tool that automatically documents the code in your project. You can share the HTML documents with your clients, who can easily read the documentation based on the content you have provided and the UI you use in the HTML document.

Of course this is not complete; this was just the idea that I was having, and I am hoping to complete this project as soon as possible. If I do develop it, I will share the source code publicly on GitHub.

There are many more things to do:

  1. Learn Reflection
  2. Find a good way to store the HTML documents.
    • In database
    • In static files
    • Other data source
  3. Build a crawler for your API

The crawler would simply crawl from one object to another using the relations between them and would continue to build the documentation. 🙂

Experimenting with C# 6’s new features

Introduction and Background

It has been a very long time, and trust me, I am fed up with "Cool new features of C# 6!" posts already. I mean, each post has the same things, same topics, same content and (usually!) the same code blocks to demonstrate them. However, I never wanted to write a blog post describing the new things in C#, but now I think I should explain the things that are done "under the hood" for C# 6 code. I have been going through many posts and I already know what these new "cool" features are, so I just wanted to test them out and see when they work, how they work and when they are not going to work. Well, that is definitely not something the C# architect team expects people to do; however, let's find out. 🙂

Also, it is already quite late for C# 6 to be discussed, because C# 7 is already under discussion by the team; you may want to read more about what the C# 7 team has on their list: C# 7 Work List of Features.

Anyway, none of these are final versions, final statements or "C# 7 must-haves". They are just what the team is trying to bring into C#, based on the demand and ideas provided to them on GitHub and the other networks that they have.

Enough of C# 7 already. By now you have already seen all of the features that were introduced in C# 6, and most of you have been using those features in your applications. But did you try to use them in a way they were not intended to be used? I mean, have you tried to use them in a very "unknown" or "uncertain" manner?

Let me share, what I think of those features. I would also love to hear what you think of them! 🙂

Counting the features of C# 6

Now let me show you a few things about these features, in a way that you were not expecting or never cared about! These features are definitely interesting, but a few are useful in one way and a few are not useful at all in another; it depends on your project, your perspective and your ideas. So let me start with the very famous (and my favorite) C# 6 feature: string interpolation.

String interpolation

This feature is a great one; personally speaking, it is the best feature I find on the C# 6 feature list. What I find easy while using "string interpolation" is that I don't have to use String.Format, I don't have to concatenate the strings, and I don't have to use a StringBuilder at all. I can just simply write the string and it will include the variable data in the string. Now let me show you an example of the multiple methods that can be used.

// Creating the private field
private static string Name = "Afzaal Ahmad Zeeshan";

// Then inside the Main function.

// 1. Using string concatenation.
Console.WriteLine("Hello, my name is: " + Name);

// 2. Using string format
Console.WriteLine(string.Format("Hello, my name is: {0}", Name));

// 3. Using string builders
StringBuilder builder = new StringBuilder();
builder.Append("Hello, my name is: ");
builder.Append(Name);
Console.WriteLine(builder.ToString());

Technically, their result is the same. They all show the same message, and how they do it is something we can find out using the assembly language (or the MSIL code, if you are a fan of MSIL and not assembly at debug time); however, we do know that they present the same value on screen.

Figure 1: Same string rendered and generated, as a result of many ways.

Their IL would be different (because internally they are three different approaches). That is because they themselves are different. The IL generated would be like this:

IL_0000: nop 
IL_0001: ldstr "Hello, my name is: "
IL_0006: ldsfld UserQuery.Name
IL_000B: call System.String.Concat
IL_0010: call System.Console.WriteLine
IL_0015: nop 

IL_0016: ldstr "Hello, my name is: {0}"
IL_001B: ldsfld UserQuery.Name
IL_0020: call System.String.Format
IL_0025: call System.Console.WriteLine
IL_002A: nop 

IL_002B: newobj System.Text.StringBuilder..ctor
IL_0030: stloc.0 // builder
IL_0031: ldloc.0 // builder
IL_0032: ldstr "Hello, my name is: "
IL_0037: callvirt System.Text.StringBuilder.Append
IL_003C: pop 
IL_003D: ldloc.0 // builder
IL_003E: ldsfld UserQuery.Name
IL_0043: callvirt System.Text.StringBuilder.Append
IL_0048: pop 
IL_0049: ldloc.0 // builder
IL_004A: callvirt System.Object.ToString
IL_004F: call System.Console.WriteLine
IL_0054: nop 

IL_0055: ret

You can see that (indirectly) they are all different and execute different functions. What happens is "syntactic sugar". They are typically just functions that are executed, and the compiler allows you to write all of these simply.

Now coming to the new way of writing the strings with variable content. C# 6 introduces this new way of writing the same message!

Console.WriteLine($"Hello, my name is: {Name}");

/* Output:
 * Hello, my name is: Afzaal Ahmad Zeeshan
 */

The output of this is also the same… But how C# 6 does that is a bit of a trick. To find that out, we will again go back and read the generated MSIL.

IL_0000: nop 
IL_0001: ldstr "Hello, my name is: {0}"
IL_0006: ldsfld UserQuery.Name
IL_000B: call System.String.Format
IL_0010: call System.Console.WriteLine
IL_0015: nop 
IL_0016: ret

Um, looking at this MSIL, what do you say? 🙂 If you pay attention to it, it resembles the code that we wrote using the String.Format() method. So what happens is that this is also syntactic sugar. There is no addition of anything; it is just what the compiler does for you! The compiler simply takes everything away and generates that previous version of the code, yet in a very friendly and simple way. I like it.

Remarks and References:

String interpolation has been adopted by many programming languages by now. Swift by Apple has it, and JavaScript (ECMAScript) is also moving toward template strings. I personally feel that, since string data is used so much, this is a great feature. For more, please read:

  1. String class
  2. String.Format()
  3. String interpolation

Conditional try…catch

Now, I have never used VB.NET, because I was always busy with C# itself, but I keep hearing that VB.NET had this feature way before C# implemented it. The conditional try…catch (or as the team calls them, "exception filters") just allows you to handle the exception when a condition is met. I am not going to talk about what they are; instead, I will talk about how you can use them in your code. So basically, if you execute a function that is supposed to have a side-effect, and before handling any further errors you want to find out whether the side-effect happened or the error occurred before it, you would previously be left with only one way to check that:

try {
   // Some code that breaks the execution.
} catch (MyException me) {
   // Just a function to check if value was updated
   if(field.Updated()) {
      // Log the error and then re throw the exception
   }
} catch (Exception e) {
   // Re-perform the same for other exceptions.
}

As an example of this case, have a look at the following code:

private static int field = 25;

public static void Main(string[] args)
{
    try {
        Console.WriteLine("Do you want to change the field?");
        if(Console.ReadLine() == "yes") { field = 34; }

        // Just throw the exception
        throw new Exception();
    } catch {
        if(fieldModified()) {
           // Log the error; it was thrown after modifying the field.
        }
        // Other code...
    }
}

// Returns true if the field no longer holds its initial value.
public static bool fieldModified() {
   return field != 25;
}

This try…catch lets us check whether the field was modified; if it was not, we can simply move on, since there were no side effects.

What C# 6 lets you do is attach that condition to the catch block itself, so you never even enter the block: the condition decides whether this catch applies, and if it doesn’t, the exception keeps propagating, later catch blocks try their luck, and so on to the end. Something like this:

try {
    Console.WriteLine("Do you want to change the field?");
    if(Console.ReadLine() == "yes") { field = 34; }
 
    // Just throw the exception
    throw new Exception();
} catch when (fieldModified()) {
    // Log the errors, reverse the transaction etc. 
}

So instead of writing the condition inside, we write it in front of the catch block itself. Technically both versions work; this way we just leave more of the work to the compiler and make the code more readable. Read it like “try this code… catch when field modified… continue”. The IL gets a few extra branches around the catch block so execution can jump from one location to another.

Remarks and references:

Personally, I won’t use this much, because I try to avoid transaction-like programming. But if you are doing transaction-like work, such as SQL queries or banking systems, I would recommend using these conditions on your catches. They can be of great use when you are logging errors in transactions: the filter function determines whether there were changes (or whatever else you want to check), and only if that condition is true is the error caught, logged and, if your code does so, re-thrown.
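
One more pattern worth mentioning, as a minimal sketch with a made-up Log() helper: because the filter expression runs before the stack unwinds, a filter that always returns false can log the exception while still letting it propagate to the caller with its stack intact.

using System;

class FilterLoggingDemo
{
    // Made-up helper: logs and returns false, so the catch below never actually
    // handles the exception and it keeps propagating to the caller.
    static bool Log(Exception e)
    {
        Console.WriteLine($"Logged: {e.Message}");
        return false;
    }

    static void DoWork()
    {
        try
        {
            throw new InvalidOperationException("Something broke");
        }
        catch (Exception e) when (Log(e))
        {
            // Never reached.
        }
    }

    static void Main()
    {
        try { DoWork(); }
        catch (Exception e) { Console.WriteLine($"Handled at the top level: {e.Message}"); }
    }
}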

For more:

  1. C# : How C# 6.0 Simplifies, Clarifies and Condenses Your Code

nameof operator

Before anything else, I want to confess that I have been one of those responders who said: you cannot have the name of a variable, the compiler doesn’t know variable names, they are just labels. Well, not anymore. C# 6 provides another “amazing” feature that lets you get the actual name of a variable, not just its value.

Before C# 6, there was no nameof operator. With C# 6, this new operator gives you the names of variables, not just their values. So, for example, previously we did:

Console.WriteLine($"Hello, my name is {Name}");

Now we can do this too,

Console.WriteLine($"Value of {nameof(Name)} variable is {Name}");

Screenshot (6011)
Figure 2: Use of nameof operator in real world.

However, the real use is not obvious from this snippet. The real use is wherever we need the variable names themselves. Before C# 6 we could certainly hard-code the names as strings, but after a rename or a refactoring those strings would silently go stale. Now the compiler keeps them in sync for us.

To understand the working, have a look at the MSIL generated:

IL_0000: nop 
IL_0001: ldstr "Value of {0} variable is {1}"
IL_0006: ldstr "Name"
IL_000B: ldarg.0 
IL_000C: ldfld UserQuery.Name
IL_0011: call System.String.Format
IL_0016: call System.Console.WriteLine
IL_001B: nop 
IL_001C: ret

This shows that the variable name is passed to String.Format() as an ordinary string, the string is formatted (as we discussed under string interpolation), and the result is printed. So, once again, all of this is just sugar-coating.
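
A more practical use, in my opinion, is argument validation and property-change notifications, where hard-coded name strings used to go stale after a rename. A minimal sketch (the Person class here is made up for illustration):

using System;

class Person
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            // nameof survives renames; a hard-coded "value"/"Name" string would go stale.
            if (value == null)
                throw new ArgumentNullException(nameof(value));

            name = value;
            Console.WriteLine($"{nameof(Name)} was set to {name}");
        }
    }

    static void Main()
    {
        new Person().Name = "Afzaal Ahmad Zeeshan";
    }
}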

Null-conditional operator

For this feature, too, I have always been rather narrow-minded. I never liked it, perhaps because I had my own “good way to handle null exceptions”:

if(obj != null) {
   // "Safe" code here
}

// OR
try {
   // Code here...
} catch (NullReferenceException e) {
    // Code here...
}

But things have changed. A while ago I was writing a library in which I had to build a list of objects from JSON data. If the source was empty, the deserialized list could come back as null, so to avoid the exception I had to write something like this:

// Deserialize the string
var list = JsonConvert.DeserializeObject<List<Type>>(jsonString);

// Then check if it is null before returning
return (list == null) ? new List<Type>() : list;

//  Or using null-coalescing operator
return list ?? new List<Type>();

However, C# 6 introduces something new precisely to avoid writing the null check on every line.

var item = list?[index]; // item would be null, if list is null

I am still not entirely sold on this feature, because setting something to null based on the null-ness of something else is not obviously a good idea. Is it? We can just as easily do this:

// Get the list
var list = Model.GetList();
var item = new Item(); // Just assume it

if(list != null && index < list.Count) {
    item = list[index];
} else {
    // Show the error
}

// In C# 6 we do
var list = Model.GetList();
var item = list?[index];

if(item == null) {
   // Show the error
}

Following this approach, we have “easily” removed one conditional block. But our code is still exposed to:

  1. Null reference exception: we still have to check the item itself, or keep guarding against null further down the line, and eventually tell the client, “Oooooops! There was a problem!”.
  2. Index out of range: in the previous code we also checked the index; to do that in the C# 6 version we would need a new block anyway, which largely defeats the purpose of the null-conditional operator.

So, have a look at this code:

List<int> i = null;
var item = i?.First();
 
// Item would be null if list is null. But we managed to minimize errors "yay!"

Technically, we know item will be null. But just to confirm it, let’s look at the generated MSIL:

IL_0000: nop 
IL_0001: ldnull 
IL_0002: stloc.0 // i
IL_0003: ldloc.0 // i
IL_0004: brtrue.s IL_0011
IL_0006: ldloca.s 02 
IL_0008: initobj System.Nullable<System.Int32>
IL_000E: ldloc.2 
IL_000F: br.s IL_001C
IL_0011: ldloc.0 // i
IL_0012: call System.Linq.Enumerable.First
IL_0017: newobj System.Nullable<System.Int32>..ctor
IL_001C: stloc.1 // item
IL_001D: ret

Look at the branch at IL_0004: if the list is null, IL_0008 simply initializes an empty Nullable&lt;int&gt; (and when it is not null, IL_0017 wraps the result of First() in a Nullable&lt;int&gt;). So on the null path, what we effectively have is:

int? nullable = null; // Int is struct! That is why we append "?" to the type.

The MSIL for this case is:

IL_0000: nop 
IL_0001: ldloca.s 00 // nullable
IL_0003: initobj System.Nullable<System.Int32>
IL_0009: ret

Gotcha! This is the same as what we had previously; the only difference is the syntax. Nothing new. So we can also conclude that our item is of type “Nullable&lt;int&gt;”. We already had that, didn’t we? 🙂
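
Where the operator does earn its keep, I think, is when it is chained with the null-coalescing operator, or used to raise events. A minimal sketch (the names here are made up for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

class NullConditionalDemo
{
    public event EventHandler SomethingHappened;

    public void Raise()
    {
        // Old pattern: copy the handler to a local, check for null, then invoke.
        // With ?. the null check and the call collapse into one expression.
        SomethingHappened?.Invoke(this, EventArgs.Empty);
    }

    static void Main()
    {
        List<int> list = null;

        // ?. short-circuits to null; ?? supplies a default in the same expression.
        int first = list?.FirstOrDefault() ?? -1;
        Console.WriteLine(first); // prints -1, no NullReferenceException
    }
}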

Remarks and references:

This is (yet!) another sugar coating over the C# compiler and syntax, but I like it. Still, there are a few things worth learning before digging any deeper into the code.

  1. Nullable Types in C#

Static, just got better!

Another new feature in C# 6 is the static keyword in the using directive. It gives us a way to import the members of a class, not just a namespace. What the feature does is already widely covered, so let me focus on the things I actually want to point out, starting with the first one.

First of all, when you write something like,

using static System.Console;

It doesn’t mean that every call to “WriteLine” (or any other Console member) will necessarily resolve to the Console function; there is room for ambiguity here. Take the following code, for example:

using System;
using static System.Console;

namespace CSTest
{
    class Program
    {
        static void Main(string[] args)
        { 
            WriteLine("Love for all, hatred for none.");
            Read();
        }

        public static void WriteLine(string message) 
        {
            Console.WriteLine($"'{message}' printed by member function."); 
        }
    }
}

The output of this program is:

Screenshot (6014)
Figure 3: Output of the member function.

So we can conclude that even though the Console class is imported, the compiler gives priority to the class’s own member function. That only resolves cleanly when both candidates are static; if your member function is an instance method, the compiler raises an error instead, and you then have to write the fully qualified name with the type it belongs to. Like this:

using System;
using static System.Console;

namespace CSTest
{
    class Program
    {
        static void Main(string[] args)
        {
             Console.WriteLine("Love for all, hatred for none.");
             Read();
        }

        public void WriteLine(string message) 
        {
            Console.WriteLine($"'{message}' printed by member function."); 
        }
    }
}

This time the call runs the WriteLine function from the Console class.

Screenshot (6016)
Figure 4: Message printed by Console.WriteLine function.

That just executes the function. One more thing worth discussing: static imports are not only for the built-in classes, you can use them with your own classes too. Just for the sake of example, I created a sample class whose functions can be used in the same way.

class Sample
{
    public static void StaticFunction() { }
    public void InstanceFunction() { }
}

// Adding a struct would also work. Static functions would be pulled.
struct SSample
{
    public static void StructStaticFunction() { } 
}

I don’t want to demonstrate struct versus class specifics here; the point is that you can now call those static functions in your own code as if they were defined in the calling class itself. For example, see the sketch below.
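
As a quick illustration, here is what that looks like with System.Math; the same idea applies to the Sample class above once you qualify it with its namespace in the using static directive.

using System;
using static System.Math;

class MathDemo
{
    static void Main()
    {
        // Sqrt, Pow and PI are static members of System.Math, pulled straight
        // into scope by the using static directive.
        double hypotenuse = Sqrt(Pow(3, 2) + Pow(4, 2));
        Console.WriteLine($"Hypotenuse: {hypotenuse}, PI: {PI}");
    }
}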

Remarks and references:

This works for structs as well as classes. The thing is, it only applies to functions that are:

  1. Public — private functions are not exposed (of course!).
  2. Static — instance functions require an instance, so the compiler doesn’t offer them at all.

Static classes are entirely static, so every member function of a static class becomes accessible. For non-static classes, only the static functions appear in the IntelliSense list.

For more information, please read:

  1. Static Classes and Static Class Members

Auto-properties, and their improvements

Auto-properties have been in C# for a long time now; what’s new is that they can have initializers, which set the property to a default value. A few ways to use them:

private string Name { get; set; } = "Afzaal Ahmad Zeeshan";

// Without a setter: a getter-only (read-only) auto-property
private string Name { get; } = "Afzaal Ahmad Zeeshan";

// A plain readonly field, for comparison (not a property)
private readonly string Name = "Afzaal Ahmad Zeeshan";

But remember, the last one is a field, not a property. For example, see this:

private string Property { get; set; } = "Afzaal Ahmad Zeeshan";

private string getterOnlyProperty { get; } = "Afzaal Ahmad Zeeshan";

private readonly string readonlyField = "Afzaal Ahmad Zeeshan";

This would be translated as:

IL_0000: nop 
IL_0001: ret

get_Property:
IL_0000: ldarg.0 
IL_0001: ldfld UserQuery.<Property>k__BackingField
IL_0006: ret

set_Property:
IL_0000: ldarg.0 
IL_0001: ldarg.1 
IL_0002: stfld UserQuery.<Property>k__BackingField
IL_0007: ret

// getterOnlyProperty is getter-only, so no set_ accessor is generated.
get_getterOnlyProperty:
IL_0000: ldarg.0 
IL_0001: ldfld UserQuery.<getterOnlyProperty>k__BackingField
IL_0006: ret

The code is categorized under getters and setters for the properties, and the readonly field is just a variable in the program.

Remarks and references:

What C# 6 adds is the ability to give the property a default value right at the declaration; in previous versions you had to assign it in the constructor. Now the value can go right there.
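
Related to this, and easy to miss: a getter-only auto-property can also be assigned from the constructor in C# 6, writing straight to its read-only backing field. A minimal sketch (the Author class is made up for illustration):

using System;

class Author
{
    // Initialized inline...
    public string Country { get; } = "Unknown";

    // ...or assigned in the constructor; both write to the read-only backing field.
    public string Name { get; }

    public Author(string name)
    {
        Name = name;
    }
}

class Program
{
    static void Main()
    {
        var author = new Author("Afzaal Ahmad Zeeshan");
        Console.WriteLine($"{author.Name}, {author.Country}");
    }
}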

For more please read:

  1. Accessors in C# on MSDN.

Lambdas have evolved

If you have ever read my previous posts, you may have known that I am a huge fan of lambda expressions of C#, specially when it comes to handling the events. I have always enjoyed programming, based on these lambdas.

  1. Handling events in WPF in an easy and short hand way

What you can do now is use the lambda arrow syntax not just for anonymous functions (as before), but to give ordinary methods expression bodies, and to simplify getter-only properties too.

It is hard to convey just how simple the syntax gets without a comparison. Take, for example, the following traditional code block:

public int Multiply (int a, int b) { return a * b; }

It looks simple enough, and it works fine. However, the (debug-build) MSIL is surprisingly noisy:

Multiply:
IL_0000: nop 
IL_0001: ldarg.1 
IL_0002: ldarg.2 
IL_0003: mul 
IL_0004: stloc.0 
IL_0005: br.s IL_0007
IL_0007: ldloc.0 
IL_0008: ret

That is a fairly long chunk of MSIL for a one-line multiply, although most of the extra instructions (the nops, locals and branch) come from the debug build. Compare it with this code:

public int Multiply (int a, int b) => a * b;

This expression-bodied version does the same as the previous one, but it compiles down to much leaner IL:

Multiply:
IL_0000: ldarg.1 
IL_0001: ldarg.2 
IL_0002: mul 
IL_0003: ret

Much smaller MSIL, right? Keep in mind the JIT would optimize both versions to essentially the same machine code in a release build; still, the expression-bodied form shrinks both the source and the emitted IL.

Another major use of expression bodies is for getter-only properties. Their topic has already come up above, but I would like to look at them once more…

public double PI { get; } = 3.14;

The MSIL generated for this would be (you guessed it right!) a get property, and a backing up field. Like this:

get_PI:
IL_0000: ldarg.0 
IL_0001: ldfld UserQuery.<PI>k__BackingField
IL_0006: ret

So what we have is a getter block that returns the value of the backing field every time the property is called. To tighten this up with C# 6, we can write:

public double PI => 3.14;

The MSIL for this is remarkably compact; just as we improved the function, we improve the property access too:

get_PI:
IL_0000: ldc.r8 1F 85 EB 51 B8 1E 09 40 
IL_0009: ret

Short and sweet! It simply pushes the constant as a double (ldc.r8) onto the stack and returns it; no backing field is needed at all. To make it even more readable, consider this:

public string PI => "Afzaal Ahmad Zeeshan";

With a field-backed property, the value lives in the backing field, so every property access means an extra field load before the value is returned. In this case, however, what happens is:

get_PI:
IL_0000: ldstr "Afzaal Ahmad Zeeshan"
IL_0005: ret

This simply loads the string onto the stack and returns it. It behaves much like a constant that is produced wherever the property is called.
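
Putting the two together, a small class can consist almost entirely of expression-bodied members. A minimal sketch (the Circle class is made up for illustration):

using System;

class Circle
{
    public Circle(double radius) { Radius = radius; }

    public double Radius { get; }

    // Expression-bodied property and methods: no backing field for Area,
    // the expression is evaluated on every access.
    public double Area => Math.PI * Radius * Radius;
    public double Scale(double factor) => Radius * factor;
    public override string ToString() => $"Circle with radius {Radius}";
}

class Program
{
    static void Main() => Console.WriteLine(new Circle(2).Area);
}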

Remarks and references:

Expression bodies and lambdas are definitely useful, not just for trimming syntax but for keeping the source itself small. Lambdas come from functional programming and, in the world of object-oriented programming, they have seriously marked their place.

For more please read:

  1. Functional programming
  2. Lambda expressions in C#
  3. => operator

Points of Interest

In this post, I have talked about the most “popular” and interesting features that C# 6 introduces. Until now I had mostly seen the “same song and dance”: programmers talk about the hype, but rarely about the internal implementation.

Here I have shown how the features differ, which ones are just syntactic sugar and which ones really influence your programs. From this list, my favorites are:

  1. String interpolation
  2. Lambda expressions

The others are useful, but I don’t see myself using them in my own projects. If you have anything else in mind that I haven’t covered, please share it with me. 🙂 I would love to hear it.

Earning revenue through in-app purchases on Windows Store

Introduction and Background

This is a short overview of in-app purchases on the Windows Store platform. In this post I will only give quick summaries: how to take a small module you have recently created and enable users to purchase it, supporting your development in return. Building applications is a tough task, and programmers deserve something for it; sometimes respect alone is not enough, and when the module you built was hard work, you have every right to ask for some money in return. It is your choice to set the price users have to pay before they can consume the service.

The Windows Store lets you wrap a block of code around the service, the feature or the application itself. That code handles the payment and customer details and everything else required to make sure the user pays before accessing the service you provide. The Store also keeps track of users: it knows whether a user has already made the purchase, and if so it will not ask them again, holding on to the license the user has acquired.

App monetization methods

The Windows Store offers several ways to earn money and support your development work. A few of them are:

  1. Ads display in your application.
    • The ads are of both types, static images or videos.
  2. Providing paid services in your applications.
  3. Asking for a pre-use purchase of your application. Paid apps.

You can ask your users for an upfront payment. Many application developers use this method to fund their development, letting them keep programming and ship updates to their clients. On top of that, clients receive free updates to the application later on.

But sometimes you want users to download and install the application for free, so they can try it out. Users like free stuff, especially things not currently available in their device’s marketplace. You can publish an application with a trial period that lets users try the service and, if they like it, ask them to purchase a license.

Alternatively, you can offer a separate paid service inside the app: set a price for it and let users purchase it on its own. This method is simple, helpful and handy. Your users can try out the application, and you earn revenue for the features that were difficult to build and deploy. In this post, I discuss monetization through “in-app purchases” in Windows Store applications.

In-app purchases in Windows Store apps

In-app purchases require a Windows Store developer account and a published application. In-app products are created (and published) from the Windows Store dashboard: you create a new in-app purchase in the dashboard, edit its details, and then update the application’s code to query those products and provide the users with the service.

If you do not have a Windows Store developer account, you can still build the application and test the license logic, but you won’t be able to publish it or ask your users (if any!) to purchase the services. So the first step would be to create a new developer account: register as a developer.

If you are all set up for publishing a new application, continue with the sections and by the end you will be able to include in-app purchases for earning some revenue for your development tasks!

Creating a new app

For development purposes you can test purchases without a published application or a developer account by using the CurrentAppSimulator object instead of the CurrentApp object from the Windows Store API. CurrentAppSimulator performs the same actions without calling the live Windows Store, so you can exercise the purchase code and see how users would interact with your application.
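
As a rough sketch of how the simulator slots in during development (using the same “productId” placeholder as the rest of this post, and assuming a Windows Store app project where WinRT async operations can be awaited); only the class name changes when you switch to the real store:

using System;
using System.Threading.Tasks;
using Windows.ApplicationModel.Store;

public static class PurchaseHelper
{
    // Development-time sketch: swap CurrentAppSimulator for CurrentApp before publishing.
    public static async Task<bool> TryUnlockFeatureAsync()
    {
        // The simulator reads its data from a local WindowsStoreProxy.xml file
        // instead of contacting the live Windows Store.
        var license = CurrentAppSimulator.LicenseInformation;

        if (license.ProductLicenses["productId"].IsActive)
            return true; // the user already owns the product

        var result = await CurrentAppSimulator.RequestProductPurchaseAsync("productId");
        return result.Status == ProductPurchaseStatus.Succeeded;
    }
}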

However, if you would rather not spend time on the simulator, you can head straight for the “hard core” route: publish an application, create a new in-app product and start selling it. First of all, head over to the dashboard (provided you have a developer account!), create a new application and publish it to the Store.

Note: developing an application, uploading the package and getting it published after certification takes time; my application took 16+ hours, and yours will go through the same procedure, so be patient. From here on I assume you already have an application published.

Once your application is created, continue to the next section and start creating and developing for an in-app purchase.

Creating the in-app product

The in-app product first needs to be defined in the Windows Store, and the Store team then needs to approve it. Once the product has been approved, you use its product ID in your application: the product ID is what you check the license information against to see whether the user owns the license for that service.

License Information

The application’s LicenseInformation object holds the details of the license. You can check it for:

  1. ExpirationDate
  2. IsActive
  3. IsTrial
  4. ProductLicenses

These properties describe the license of the application itself (useful if your app is a paid application); in-app products have their own licenses with the same kind of properties (active, trial and so on), which you can check in exactly the same way.

An example of this license information is something like this,

var license = CurrentApp.LicenseInformation;

if(license.IsActive) {
    // License is active
    // Allow user to use the service.
} else {
    // Allow the users to purchase the service.
}

The same applies to in-app products: you use the same approach, and the code for in-app products appears in a later section, so keep reading.

Create a product in Windows Store

You will be able to create a new in-app product online, on your application’s dashboard! Head over there and find the following option.

Screenshot (3586)
Figure 1: Windows Store showing IAPs option.

Click this option and it takes you to a new page where you can create a new IAP. Click “Create a new IAP” and enter a new product ID.

Screenshot (3587)
Figure 2: Enter a new product ID here and click on “Create IAP”.

As the page warns, be sure about the name: you won’t be able to update or delete it once it is published.

Editing the in-app product

You will then be able to edit the properties of that product in the Windows Store; this is the part that gets verified. There are three groups of settings you can edit to make your product understandable to buyers:

  1. Properties
    • Type of the purchase:
      • Consumables are services that get used up and can be re-purchased, like 500 gold coins.
      • Durables have a lifetime, like a 30-day premium service.
    • Content type
    • Keywords
    • Tag
  2. Pricing and availability
    • Base price
    • A few other settings like Markets to sell the product in.
  3. Descriptions
    • Language
    • Description and title for the purchase. This is shown in the Windows Store.

Edit the properties, then submit the product to the Store. The team reviews the information and approves it if there are no problems; chances are it gets approved within an hour. Then you can move on to writing the code that handles the product.

Remember: It takes at least one day to make the changes on the system. So, even if you upload the package along with the products, it would take one day for Windows Store to be able to start selling your products.

Writing the application to provide services

The Windows Store side is a short process you can finish in less than 30 minutes. The longer part is writing the code that checks the user’s license and, when needed, asks the user to purchase the service. The code is short, intuitive and simple; I will walk through the whole flow so it is easy for you to start earning revenue from the Windows Store applications you build.

Create a module

I wonder why this section even needs to exist: just create a module, a function or a service that does something in your application which you want to charge for. Build it as a function so you can wrap it in another block of code that checks the license. The license information then either lets the user consume the service or rejects the request and asks the user to purchase it from the Windows Store. See the sketch below.
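
As a hedged sketch of what I mean by a “module”, suppose the paid feature is just a method (ReportService and ExportToPdf are made-up names); the wrapper around it is where the license check from the next section will go:

public class ReportService
{
    // The paid feature itself: a made-up example, any real work goes here.
    private void ExportToPdf() { /* ... */ }

    // Public entry point; the license check from the next section decides which branch runs.
    public void RequestExport(bool userOwnsLicense)
    {
        if (userOwnsLicense)
            ExportToPdf();
        else
            PromptPurchase();
    }

    private void PromptPurchase() { /* ask the user to buy the in-app product */ }
}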

Add the license check-up!

I showed code above for checking the application’s license; you do the same for an in-app product, checking whether the user holds the license. The following code does exactly that:

// Remember the license object we created above?
if(license.ProductLicenses["productId"].IsActive) {
   // productId that you created while creating the in-app product

   // The user owns the license to use it.
} else {
   // Ask the user to get a licence.
}

You can use the CurrentApp.RequestProductPurchaseAsync(“productId”) function to let the user purchase the product; the Windows Store opens its own purchase window. You can then check the result to see whether the purchase went through: if it did, enable the service, otherwise simply end the request.

The common example for this would be like this:

// Get the license for the user
var license = CurrentApp.LicenseInformation;

// License for the in-app
var iapLicense = license.ProductLicenses["productId"];

// Apply a condition for the in-app product license
if(iapLicense.IsActive) {
   // User has bought the service; provide service
   provideService();
} else {
   // Allow user to purchase it
   var resultOfPurchase = await CurrentApp.RequestProductPurchaseAsync("productId"); // note: the containing method must be async
   if(resultOfPurchase.Status == ProductPurchaseStatus.Succeeded) {
      // User has bought the service now
   } else {
      // Show the error message; for try again.
   }
}

This way, the user can purchase the item from the Windows Store. The process looks roughly like this:

Screenshot (3588)
Figure 3: Windows Store purchase window is loading up.

Screenshot (3590)
Figure 4: Windows Store shows the details for the purchase the user is about to make. Also allows them to add a payment method to their account.

Screenshot (3589)
Figure 5: Windows Store shows the following methods (or any other, or less; depending on the country of users) to allow them to make the purchase. 

The user makes the payment, and the Store then updates their license so the service can be consumed from then on; in particular, the “IsActive” property of the product license becomes true.

Points of Interest

That is it for this post: you have seen how to earn revenue from your Windows Store applications. The MSDN documentation has plenty of resources on in-app purchases, both for making sure users pay for your services and for verifying that a user really has made the purchase before you unlock the service for them.

There are many other topics that you should consider reading, such as:

  1. Allowing consuming of services like tokens.
  2. Managing the services from Windows Store.
  3. Adding more in-app products.
  4. Using receipts to verify the purchase status.
  5. Validating the receipts of services.

As further reading, consider the “Enable in-app product purchases” guide on MSDN. I hope this post has helped you; I think it is the last post for 2015, so catch you next year!

Happy New Year! 🙂

Handling events in WPF in an easy and short hand way

In this post I talk about WPF events and how to handle them easily. You may already know how to handle them; XAML gives you plenty of handy hooks, but there are other good ways of doing the same thing.

Events in WPF

Events in WPF are similar to those in console and Windows Forms applications. They give us notifications and hooks to run our business logic based on the application’s state: what happens when the application is starting, what to do when it is closing, and what to do when the user interacts with it.

Interaction includes button clicks, input changes, window resizing and many similar actions that can be handled to make sure our business logic is always applied; validation and other checks on the values are easy to add.

Handling the events

Handling an event is itself just a function: we need a function to run (to be triggered) whenever a certain event is raised.

Note: I will be using Visual C# throughout this article; the VB.NET syntax and library surface differ, so follow along in Visual C# rather than VB.NET for now.

In C#, you handle an event by attaching a function to it. It is like saying, “when this happens, do that”: you tell your application to perform an action when something occurs, the same way an email client reacts when a new email arrives, or a notification fires when a download completes. Attaching a function to an event is simple; the only thing to pay attention to is the required dependencies (the parameters and objects needed to handle the event).

Look at the following code,

<Button Click="handleEvent">Click me</Button>

The code above is XAML (remember, we are in WPF); the function to handle the event looks like this:

void handleEvent(object sender, RoutedEventArgs e) {
   // 1. sender is the Button object, cast it
   // 2. e contains the related properties for the Click
   // Code here... Any code!
}

Now the handleEvent function is attached to the Click event of the Button object; remember that the event has to be raised for the function to run.

That is how easy it is to handle events in WPF: wire it up in the XAML markup, let Visual Studio generate the code-behind stub, write what you want done, and you’re good to go.

Using back-end code

As you know, anything you can do in XAML you can also do in code-behind, including attaching an event handler to a control’s event. In this section I explain how to do that.

Since we used Button for previous example, I would use the same for this one. Attaching the event is as easy as 1, 2, 3…

button.Click += handleEvent; // 1

void handleEvent(object sender, RoutedEventArgs e) { // 2
   // Handle the event
}

Step 3 is the click event that is performed by the user. 😉

Easy isn’t it? (IMO, it is not! Look in the next section to make it even better!)

Simple way…

This is the section I actually wanted to write: the simple way to handle events in WPF. It uses a lambda expression to create the delegate that runs when the event is raised. If you are unclear about lambda expressions or delegates, I suggest reading up on them from the links provided before continuing. 🙂

A very simple way (IMO) to handle the event in WPF is like this,

myButton.Click += (sender, e) => 
{
   // Handle the event
};

Notice the expression above: it is a simple lambda, resolved to a delegate with the two parameters (sender and e) that the event handler signature requires, which is why we pass them.
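
Another nice side effect: because the lambda is a closure, you can use the button variable from the enclosing scope directly and skip casting sender back to Button. A minimal sketch (myButton is assumed to be a Button defined in your XAML or code-behind):

myButton.Click += (sender, e) =>
{
    // myButton is captured from the enclosing scope, so no (Button)sender cast is needed.
    myButton.Content = "Clicked!";
    myButton.IsEnabled = false;
};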

Points of Interest

This is the shortest way to handle an event in WPF, because everything stays in the context of the object you are wiring up. You don’t have to create separately named handler methods such as “myButton_Click” or “myButton_MouseEnter”, and, since the lambda can capture the control from the enclosing scope, you can often skip the cast of sender as well. All you need is a lambda expression used as the event handler. Pretty simple, isn’t it?

That’s all for now. 🙂 I hope, I might have helped you out in making programming WPF applications easier. I would post more of such helpful tips for WPF developers soon. 🙂 Stay tuned.