jay's old blog

this blog will be deleted soon - please visit my new blog - https://thesanguinetechtrainer.com

Flags with enums

Earlier we talked about enums and how they help you list out items with meaningful values. Now, there are scenarios where plain enums can be less useful. Rather, there are ways of making them better than what they are now.  

Going back to the previous blog (where we discussed enums), we talked about students and the devices that are issued to them. To keep track of all the devices issued to a student, I would have to represent each student with an array of integer values, and the array size would be 5 (because there are 5 possible values that can be assigned to the student, to indicate which devices he has). By assigning them enum values, we are making sure that the values don't get mixed up, as discussed in the earlier post.  

However, what if I could represent all the device positions in a single value?  

That is where flags come in. When you assign the Flags attribute to an enum, it will start treating each value as a possible combination of the enum items. Yes, this sentence may not make sense yet, but it will if you follow through on our github code, where I have put comments to explain the process. 
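To make that concrete, here is a minimal sketch of the idea (the Devices enum and its member names are my own invention for this example, not the exact code from the github repo). Each member gets a distinct power of two, so a single value can record any combination of devices:

```csharp
using System;

// Hypothetical device list for the student scenario above.
// Each value is a distinct power of two so they can be combined.
[Flags]
enum Devices
{
    None = 0,
    Laptop = 1,
    Tablet = 2,
    Phone = 4,
    Projector = 8,
    Camera = 16
}

class Program
{
    static void Main()
    {
        // One value now records several devices at once.
        Devices issued = Devices.Laptop | Devices.Phone;

        // HasFlag checks whether a particular device is present.
        Console.WriteLine(issued.HasFlag(Devices.Laptop)); // True
        Console.WriteLine(issued.HasFlag(Devices.Tablet)); // False

        // Thanks to [Flags], ToString prints the combination by name.
        Console.WriteLine(issued); // Laptop, Phone
    }
}
```

Because each member occupies its own bit, combining them with | never mixes values up, and HasFlag can test for any single device later.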

Follow me on twitter, facebook and instagram for more updates. Thanks!

Making your code readable

There are several things that are frustrating and come with the territory of being a developer: updates that break your IDE, system updates that suddenly outdate a DLL, slow internet access. Some of these are outside our control and others are manageable. However, one thing that should be avoidable, yet almost always seems to be ignored, is making code readable. 

Code readability is not just one thing. It's a combination of several things. For instance, a lot of the time there are literally zero comments written by the original developer of a software project. When that developer leaves (to another project, or to another opportunity at another company), the ones who are then tasked with maintaining that code are stuck in an endless loop where fixing even trivial bugs takes ages. We are left searching for a needle in a haystack, because the developer took his magnet with him. 

That is why making code readable is important. I have seen a lot of developers (in fact, I am yet to meet anyone in person who documents anything, which makes it all the more depressing) who simply go by 'memory'. Or perhaps they like it that whenever there is even a simple issue, everybody calls them and asks them for help. Maybe it makes them feel needed, which is frankly a waste of everyone's time. 

Nevertheless, personal feelings and ego aside, not documenting is dangerous and a sign of coding immaturity.  

Meaningful names 

Meaningful names are the crux of readability. Personally, I give meaningful names to everything. When I create a new project, I give it a name like 'demo_calculator_students' instead of 'ConsoleApplication26'. The latter indicates that it is a console application and not much else. The former informs me (and other developers) what the project is about.  

I also give meaningful names to classes, methods, variables, method parameters and anything else that needs to communicate via a name. This extends to temporary variables as well as permanent variables, enums and constants.  


Every time I create a new project, I always include a simple file called 'notes.txt'. It plays no role in the actual app itself. However, in these notes I put down everything that I did while building the project. It's almost like a diary.  

I try to make notes with dates: which classes were created when, what bugs were discovered, and what online resources (MSDN, Stack Overflow) I used to fix them. I also include notes about how I solved a particular challenge, including the algorithms used and any new logic that was created.  

The idea is to make sure that when another person opens the project, he can use the notes document to find out what was done while the project was being built.  


Why don't people write comments? I am convinced that at least some percentage of my hair loss has to do with debugging code written by others without so much as a single line of comment as to what a code block does or does not do.  

Comments work hand in hand with the notes document I mentioned earlier. The difference is that the notes document is like a comment file for the entire project, while also acting like a diary with daily work entries. Comments, on the other hand, relate to a specific method or a specific conditional statement. The beauty of comments is that they tell the developer exactly why a particular comparison was done, why a method has two parameters, why an object was made null and so on.  

Without comments, even the most brilliant developer will take a while to realize why a particular statement or method call or object was written the way it has been written. 


In Visual Studio, there is this thing called regions. Regions are how you divide your code into 'zones', with each zone serving a different purpose. For instance, at the beginning of your code file there will always be a list of using statements. Sometimes these using statements can run to two pages, and if you are working on an incredibly expansive project, perhaps even more.  

Every time you open a page containing so many usings, you will be scrolling, paging up and paging down so much that it consumes a lot of time. You can put the entire thing in a 'region', and then this two page thing gets folded (and can later be unfolded) into a single line, making your code look slim.  

It's a good idea to 'regionize' related items like related methods, related function calls, variable declarations and so on. Proper usage of regions can turn a 100 page code file into a two page file, which makes code traversal that much easier. 
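Here is a small sketch of what that looks like in practice (the StudentRecords class is a made up example of mine). Everything between #region and #endregion folds into a single line in Visual Studio:

```csharp
using System;

class StudentRecords
{
    #region Fields
    private string studentName = "unknown";
    private int deviceCount = 0;
    #endregion

    #region Name related methods
    public string GetName()
    {
        return studentName;
    }
    #endregion

    #region Device related methods
    public void IssueDevice()
    {
        deviceCount = deviceCount + 1;
    }

    public int GetDeviceCount()
    {
        return deviceCount;
    }
    #endregion
}

class Program
{
    static void Main()
    {
        StudentRecords record = new StudentRecords();
        record.IssueDevice();
        Console.WriteLine(record.GetDeviceCount()); // 1
    }
}
```

The regions change nothing about how the code runs; they only let you collapse each zone while reading, which is the whole point.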


This may be a little excessive, but I prefer to draw a lot of flowcharts. When you are designing something new, you should consider drawing flowcharts. Then, once the flowcharts are done, take a photo of them, upload them to your cloud drive, and include a link to them in the notes. 


Types and Methods

Irrespective of the type you decide to use in your project (value types or reference types or little bit of both), you will have to give your types behavior.  

Behavior, that's a curious word, isn't it, to be used when thinking of code and programs? Then again, when you take a step back and remember that you are learning an object oriented language (which assumes that you are trying to mirror life itself), it all suddenly makes sense. For instance, here is one of my favorite quotes, from the very first Nolan Batman movie. 

"It's not who I am but what I do that defines me" 

Applying that here: it's not the data that is stored in your type that defines the type. It's what the type can do that defines it. That brings us to the next question: how do you make your type 'do' stuff? The answer is 'methods'. Now, I should add that if you have gone through a standard computer degree in college (or are currently pursuing one), you may have used the word functions. I want to make it clear to you that method is another name for function.  

In the usual sense of things 

  •  A method will accept some parameters. 

  • It may return a value. 

  • It will also do some stuff.  

Ideally, when you are designing your type (struct or class, value type or reference type), you should always design its behavior. That means you finalize the different methods you will be using. Then, for each method, you figure out what parameters it might take. Then you figure out how the method will consume these parameters, and finally decide if a value needs to be returned.  
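As a sketch of that design process, here is a hypothetical Student type (all the names here are mine, purely for illustration). One method accepts a parameter and returns a value; the other accepts a parameter and only does some work:

```csharp
using System;

// Hypothetical 'Student' type whose behavior is defined through methods,
// following the design steps described above.
class Student
{
    public string Name;
    public int MarksObtained;

    // Accepts a parameter, does some work, and returns a value.
    public bool HasPassed(int passMark)
    {
        return MarksObtained >= passMark;
    }

    // Accepts a parameter and does some work, but returns nothing.
    public void UpdateMarks(int newMarks)
    {
        MarksObtained = newMarks;
    }
}

class Program
{
    static void Main()
    {
        Student s = new Student();
        s.UpdateMarks(72);
        Console.WriteLine(s.HasPassed(40)); // True
    }
}
```

Notice that the data (Name, MarksObtained) only becomes useful once the methods give the type something to 'do' with it.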

As always, remember to follow the 'readability tips' given here while naming the different components of a method. 

Check code Types_Methods at our github for an actual implementation of what has been discussed above. As always, the code has detailed comments about every line of code written. 


Hello Raspberry Pi 3 and IoT

One of those things that I always wanted to do was to get into the Internet of Things, or IoT, game. Things got really exciting after Microsoft made available an IoT Core version of Windows 10, and as a guy who earns and enjoys via the .NET platform, this was all very cool. 

I believe (as so many others do) that devices will continue to get smart. Smart (not in the usual English sense of the word) means the device has an operating system and some way of communicating with the outside world (cloud or web or internet services), and hence can do stuff it did not originally come bundled with.  

A simple example would be a phone. A 'smart' phone comes with an operating system that allows it to perform functionality that wasn't there out of the box. For instance, when you bought your phone, it probably did not have a flashlight app installed in it. 

However, it does have a flash. You can easily download and install a flashlight app, and now your phone has 'learnt' to behave like a flashlight; it is smart enough to do so. Your phone can talk to outside services like social networks and chat. That makes it a smartphone.  

IoT is about everyday items behaving smart. Using the various tools, services and programming available with IoT, I can rig up my own smart fan. I could get my table fan to talk to a server I have built online, allowing me to control it via an app on my phone. I could build my own security camera system and remotely monitor my home. Perhaps I could add a sensor to my coffee mug (because why not :P) and track how many times I lift it off the desk for every mug full of coffee.  

So, that's IoT. 

There are several 'micro computers', or rather 'motherboards', that lend themselves to IoT. Of these, I think the best choice would be a Raspberry Pi 3. Version 3 comes with built in Wi-Fi and Bluetooth, which is awesome. Further, Windows 10 (the IoT Core version) is available for it.  

All said and done, I am looking forward to it. As always, I will blog my adventures so my students and others can follow and contribute in return.


Creating a Console Project

When you are learning to code in c sharp, you almost always need to start by creating an empty console project. Most folks would already know this. However, just in case you don't, here is how you would do it.  

  1. Open Visual Studio (of course) 

  2. File > New > Project 

  3. On the left side, select Visual C# 

  4. Then, in the middle window, select 'Console Application' 

  5. Give it a name (or leave the default name as it is) 

  6. Choose a directory location (or leave the default location as it is) 

  7. Click OK  

Here are some pictures to help you out. Also find the project 'Types_1' on github which will show you the basic ropes about types.  

Once you have finished typing your code, there are two steps left. 

  • Build the project. This is done by pressing Ctrl + Shift + B. Or you can be lazy (and not use shortcuts) and select the build option from the menu. Build > Build Solution. 

  • Assuming there are no errors (you can ignore some warnings or all of them), you can then run it. As always, I prefer the keyboard shortcut which is 'F5'. If you are feeling really lazy today, you can tap on the 'Start' button with the green play symbol just below the Build option on the menu.  
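If you want something to type into that freshly created project, this is about the smallest program that will build and run (the message text is obviously just a placeholder of mine):

```csharp
using System;

class Program
{
    // Kept in its own method so the message can be reused elsewhere.
    public static string GetGreeting()
    {
        return "Hello from my first console project!";
    }

    static void Main()
    {
        // The classic first line of output for a new console project.
        Console.WriteLine(GetGreeting());
    }
}
```

Build it with Ctrl + Shift + B and run it with F5, as described above. If the console window closes immediately, you can add a Console.ReadKey(); call at the end of Main to hold it open.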


Thoughts on immutability and immutable

When you are working with types, particularly value types, you are bound to run into the words 'immutable' and 'immutability'. They are related in the same sense that 'able' and 'ability' are.  

Immutable (usually in connection with an object) is a feature wherein, once an object has been assigned a value, it cannot be changed. It is sort of like buying a diamond (which also means you are filthy rich, and congratulations on being that). Using conventional tools, there is no possible way you could change the shape or size or anything else about that diamond after you have purchased it and walked out of the diamond shop. It cannot be changed. It's...immutable. 

To put it more simply, it is sort of like a constant. For instance, the value of 'pi' is the same no matter where you go and use it. The color 'green' is always green and cannot be changed. We could give endless examples like this, but I will stop at that.  

How does immutability work in a programming scenario, though? Here is an example that I have borrowed from the MSDN site.  

Value types are by design immutable, but reference types can also be immutable. For instance, string, despite being a reference type, is definitely immutable. I am going to use string objects as an example to show how immutability works. 

Here is some code for our purpose. 

  1. string b = "study"; 

  2. b = b + "nildana"; 

When you look at lines 1 and 2, you are probably thinking: 'okay, b had the word 'study' in it, and then after line 2 it changed to 'studynildana'. That means b has changed, and hence it is not immutable; it is mutable.' 

At the outset, it does feel that way, and I don't blame you or hold you responsible for thinking so. But if you remember your types, objects and references, you will recall that reference types only hold a reference, not the actual value. So, in line number 1, when we write "study", an object of type string is stored somewhere in heap memory and a reference to it is assigned to b. In line number 2, we create another string object, "nildana", and then 'concatenate' it with what b is referencing, i.e. "study".  

Thanks to the concatenation, a new string containing "studynildana" is created, and its reference is assigned to b. 

So, in those two lines, three distinct string objects were created. Once they were created (and a value assigned), they were never modified, because of the immutable nature of the string type. Rather than saying 'they were never modified', it is better to say 'the design of the string type prevents them from being modified'.  
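You can watch this happen by keeping a second reference to the original object. This is the same two line example from above, with a check added (ReferenceEquals simply asks whether two references point at the same object):

```csharp
using System;

class Program
{
    static void Main()
    {
        string b = "study";
        string before = b;   // second reference to the original object

        b = b + "nildana";   // creates a brand new string object

        // The original object still holds "study"; b now references
        // a different string. Nothing was modified in place.
        Console.WriteLine(before); // study
        Console.WriteLine(b);      // studynildana
        Console.WriteLine(object.ReferenceEquals(before, b)); // False
    }
}
```

If string were mutable, 'before' would have changed along with b. It didn't, because the concatenation built a new object instead of editing the old one.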

Now, it won't be a stretch to imagine that you are thinking, 'man, why bother with this mutable and immutable stuff?'  

The answer has two parts. For starters, immutability is one of those things that are part of the package. When you declare a value type or use something like string, they just come with immutability. It's like when you go out shopping for a new puppy. You know by default that all the puppies are cute and you will end up falling in love with all of them. 

However, as your programming expertise increases and you build even more complicated code, you will run into scenarios where you have to ensure that your object (and its values) are not tampered with during runtime. You want that absolute confidence that, come what may, an object that has a certain value will remain so throughout its object lifecycle. That is when you start thinking long and hard about mutability, and you start doing something about it.



Enums

One of the value types that we discussed earlier was enumerations, or enums for short.  

Enums are a special kind of type that you may either use way too much or not use at all. Still, on projects that lend themselves to enums, they are extremely useful and reduce a lot of workload and confusion. Enums are one easy way to have 'readable constants' that can be used across the project. 

Suppose you are building an app for your college. At the time of writing the code, your college has 10 departments. However, you have a feeling that, in the future, your college may end up having 12 departments. At the same time, you also want to make your code extremely readable. You don't want to use global variables (the worst thing a developer can do is use global variables), which could be easily modified elsewhere and cause unexpected app behavior.  

Under these circumstances, you can use an enum with a member called "college_departments" and assign it a value of 10. Now, throughout your project, you can use college_departments and the value 10 is immediately available.  

Further, you cannot assign a value to an enum member (outside of the enum definition), nor can you use increment or decrement operators on it. All in all, you get the benefit of a global constant value (which can be changed later, in one place) without the headache of a global variable. 
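A minimal sketch of that college example might look like this (the enum and member names are assumptions of mine; the real code lives in Enums_1 on github):

```csharp
using System;

// Hypothetical enum acting as a readable, project-wide constant.
enum CollegeConfig
{
    college_departments = 10
}

class Program
{
    static void Main()
    {
        // The readable name stands in for the raw number 10 everywhere.
        Console.WriteLine((int)CollegeConfig.college_departments); // 10

        // CollegeConfig.college_departments = 12;  // compile error:
        // enum members cannot be reassigned outside the definition.
    }
}
```

When the college grows to 12 departments, you change the value in exactly one place, the enum definition, and every use of college_departments picks it up.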

Check the code Enums_1 to see enums in action.



Types of Types

I love the alliteration in the title of this post. 

In the previous post, I talked about types in the simplest sense. Deep programming, though, will require more than just a basic understanding of types. In fact, a lot of conventional programming training (as provided by engineering colleges where I come from) sort of skips the different categories of types that are available. 

Knowledge of type categories may not seem all that important in the initial days of programming, but it takes on a lot of significance when you dig deeper. With that in mind, here are the three categories. 

  • Value types 

  • Reference types 

  • Pointer types 

One thing I would like to say is that pointer types are almost never used. The reason is obvious if you remember your days of doing pointer manipulation in C. A pointer contains the actual location in memory (like an address) where a particular type's instance is stored. Manipulating memory locations directly is never a good idea. It is sort of like trying to jump into that enclosure at the zoo where they are keeping a collection of animals. Further, the animals have not been fed for the day yet.  

So yeah, don’t use pointer types kids and girls and guys. 

So, when you are programming you are essentially limited to two categories of types – value types and reference types.  

Value Types 

Value types are the category of types that contain the actual value in them. This might seem strange now (what do you mean, they contain the actual value? Obviously the type instance will contain its value...what else could it contain?), but once you read through reference types, you will see what I am trying to say.  

Value types are types such as structs and enumerations (of course, we will dig into them in a future post). For instance, the int type is a value type. So are the decimal and double types. Obviously, user defined structs are also value types. However, if I know a few things about programming (and I do know a few things), you probably won't be using structs all that much. You may end up using a built in, standard struct like Point, but building your own struct is something you may not do all that much. 

So, what defines a value type? 

Value types always hold the actual value. This is where they get their name. How does this work, really? Here is a simple example. You have a wallet in your pocket. In that wallet you are keeping a thousand rupees. To be more specific, you have 10 notes, each with a face value of 100 rupees. If we were to think of your wallet as an instance of the type wallet, it is a value type because it actually contains the 1000 rupees that belong to you. 

In addition to this, they cannot be nulled (assigned a null value). That means, when you create an instance of an int, even if you want to, you cannot make it null. Obviously, if you really and very badly want to make a value type null, you do have an option, but we won't go there now. 

Value type assignment (where you copy the stored value from one instance of a value type to another) copies the actual value. Again, this may seem obvious, but you will appreciate and understand the difference once you have looked at the reference type discussion below. 
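Sticking with the wallet picture, here is a short sketch (the Wallet struct is a made up example of mine) showing that assignment copies the whole value, so the two instances lead separate lives afterwards:

```csharp
using System;

// Hypothetical value type echoing the wallet example above.
struct Wallet
{
    public int Rupees;
}

class Program
{
    static void Main()
    {
        Wallet first = new Wallet();
        first.Rupees = 1000;

        Wallet second = first; // the whole value is copied across
        second.Rupees = 0;     // emptying the copy...

        Console.WriteLine(first.Rupees);  // 1000 -- original unaffected
        Console.WriteLine(second.Rupees); // 0
    }
}
```

Compare this with the reference type discussion below, where two 'copies' would actually share the same underlying data.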

Value types don't lend themselves well to derivation of new types. That means you cannot create new types by deriving from existing value types. Creation of new types is the bedrock of object oriented programming, and this is a feature of reference types, which will be discussed below.  

Each value type comes with its own default constructor that will assign a standard, fixed value. This makes it possible for you to create an instance of the type, call upon the default constructor, and let it assign the standard default value for you.  

Reference types 

Hopefully, once you go through this, you will re-read the entire blog post to really appreciate the differences between reference and value types. 

Reference types don't hold the value they are supposed to hold (in direct contrast to what value types do). Instead of holding the actual value, they hold a 'reference' to the location in memory where the actual value is stored. Let's go back to the wallet example that I used when describing value types. In the exact same pocket, you have your wallet. The wallet contains a debit card (which belongs to your bank account). Your bank account has 1000 rupees. In both cases, as a person, you have or own a thousand rupees. Earlier (value type), your wallet actually contained the 1000 rupees. Now (reference type), your wallet contains a debit card which has a 'reference' to the 1000 rupees in your bank account.  

Before we go into the unique features of reference types, let's look at how reference types and value types differ, especially when it comes to security.  

Continuing with our discussion on wallet and holding money, suppose you lost your wallet. If you were following the value type model of keeping your money, along with your wallet, all of your money is lost. However, if you were following the reference type model of keeping your money, you lost your wallet but not your money. Your money is still in your bank account and the person who stole your wallet cannot access your money. All you have to do is collect a new debit card and your money is still yours.  

This is just one instance of why reference types are the ones you will use the most when building your application. As you learn more (and we add more blog posts) you will see why this is the case.  

With this intro, let's dig in and find out the features of reference types. 

Reference types don't hold a value at all. They hold a reference to the object that has been instantiated. For example, suppose you define a class or an interface or a delegate (you will learn more about each of these later) and use it to create an instance. Suppose this class has ten different int values. The object variable would not actually contain the ten int values. Rather, it would contain a reference to this data, or data set, so to speak.  

Along with the security implications (or benefits) of using reference types, another useful property is that multiple references can refer to the same data. So, the changes made through one reference will be visible through the other, and vice versa. Going back to our wallet in pocket example, it is possible to have two debit cards connected to the same bank account. The changes made by one card will affect the money in the account, and the same goes for the other debit card. 

While c sharp as a language provides some built in reference types (dynamic, object and string), most of the time you will be creating your own reference types, mostly in the form of classes that you define. The same goes for defining your own interfaces and delegates. Of course, it goes without saying that the string reference type will also find a lot of use in everyday programming.  

Lastly, I made a big deal about value types not getting a null value. With reference types, of course, you can have a null value. That's because a reference can point to nothing. Later, it could be assigned a reference and start pointing at something. Then, later still, it could be made to point at nothing again, or at something else entirely.  
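The debit card story and the null behavior can both be sketched in a few lines (the BankAccount class here is a hypothetical example of mine, not from the github code):

```csharp
using System;

// Hypothetical reference type, echoing the bank account example above.
class BankAccount
{
    public int Rupees;
}

class Program
{
    static void Main()
    {
        BankAccount account = new BankAccount();
        account.Rupees = 1000;

        // Two 'debit cards' referring to the very same account.
        BankAccount card1 = account;
        BankAccount card2 = account;

        card1.Rupees -= 300;             // spend with one card...
        Console.WriteLine(card2.Rupees); // 700 -- the other sees the change

        // A reference can also point at nothing.
        card2 = null;
        Console.WriteLine(card2 == null); // True
    }
}
```

Note that nulling card2 does not destroy the account; account and card1 still reference the same object, which is exactly the lost-wallet scenario from earlier.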

Heap and Stack 

Considering the differences between what value and reference types actually store, there is also a different way they are managed in memory.  

Value types are usually small data items (like an int) and hence need fast storage. That is why value types are usually stored on the stack. Reference types, on the other hand, point to objects which aren't exactly small data items, and those objects are stored on the heap. Although the discussion of stacks and heaps is for another day, the important thing to know about them is the role of the garbage collector (which is responsible for destroying objects in memory once their usefulness is over). The garbage collector constantly monitors the heap, whereas the stack has nothing to do with it.  

So that is our introduction to types. Understanding of types is extremely important. Read and re-read this blog post multiple times and only move to the next posts after you are very clear about everything.


Which type to use

After our discussion on types, the first question that will come to your mind is probably, when to use what type? 

There are some general guidelines for this which you can use (of course, these are guidelines, and they can and should be broken as per your own wisdom, but I will come back to this later). The best way to decide between value and reference types is to check whether the object can be a value type. If it can't, then it has to be a reference type.  

To decide whether an object should be a value type, you check the following (these are pretty much the same guidelines you would find in official msdn books as well as online material; I am simply restating them here). Imagine declaring an int when you are reading these requirements and they will make more sense. 

  • Object is small – this means, the object is expected to store small values.  

  • Object is logically immutable – ah, immutability. It's a complicated concept, but the gist is something like this. When you create a new object and assign it a value, that value sticks with the object from its birth to its death. It cannot be changed no matter what. Immutable objects are essential when you are working on a multithreaded application and you really want to be sure that certain objects will never be affected once they have been assigned a value. Check the post dedicated entirely to immutability for more details. 

  • There are many objects. You have lots of them. Meaning, you have many objects that contain small data.  


These guidelines make sense because, as you should remember, value types are stored on the stack, which is small and has no connection with the garbage collector. By nature, the stack is small in size but provides faster access to memory. Being commonsensical, you know that as things grow big, they get slow and hence need more management. By reverse logic, when things are small, they can be fast and need little management. So, when you know for sure that the actual data you are storing is small, you simply go for value types. Of course, when you want to make things 'thread safe' and want to take advantage of the immutability features, you again want to go with value types.  

Having said all the above, it is important that I add that, there is no such thing as 'solid' guidelines. As you grow and learn as a developer you will make up your own rules. 

For instance, at the time of this writing, I have this rule that I should not use the mouse for anything. I must use the keyboard (which means memorizing an endless number of shortcuts until they are assimilated as muscle memory) for everything! My colleagues think I am crazy, but I know that a day will come when I will be working on 4 monitor workstations, and I don't wish to be handicapped by having to grab the mouse for everything. 

Now that the guidelines are done, you know when to choose value types. If a type cannot be slotted as a value type, you simply go with a reference type. It's as simple as that.


structs and classes

Once you have gone through the essentials of types, you will go about creating new value types as well as new reference types. When you are creating new value types, you are invariably creating struct types, and when you are creating new reference types, you are creating class types. 

Due to the way value types and reference types are designed, there are some things that both struct and class can do, some that only a struct type can (or cannot) do, and some that only a class type can (or cannot) do. Understanding these distinctions can also help you figure out which kind of type you want to create for your project.  

If you are like me, it is possible that you may never have used a struct (except maybe when you were taking baby steps in programming with the Turbo C compiler and the C language, which only had structs) and have only used classes. I keep thinking about why that is the case, and perhaps the reason is the nature of programming languages such as c sharp, Java and C++. All these languages are object oriented, so most books and trainers tend to focus on the object oriented stuff, such as inheritance.  

That brings me to the primary distinction between creating your own struct type and class type. In a struct, you can do all the usual stuff (create fields/properties and define methods), but you cannot make a struct inherit from another type, nor let it be inherited. In other words, struct types stand alone. In addition to this, you cannot define your own empty (no parameters) constructor either.  
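Here is a compact sketch of that distinction (all the type names are invented for illustration). The struct happily holds data and defines a method, but only the classes can take part in inheritance:

```csharp
using System;

// A struct can hold data and define methods...
struct PointValue
{
    public int X;
    public int Y;

    public int SumOfCoordinates()
    {
        return X + Y;
    }
}

// ...but it cannot take part in inheritance. A class can.
class Shape
{
    public virtual string Describe() { return "some shape"; }
}

class Circle : Shape   // 'struct Circle : PointValue' would not compile
{
    public override string Describe() { return "a circle"; }
}

class Program
{
    static void Main()
    {
        PointValue p = new PointValue();
        p.X = 3;
        p.Y = 4;
        Console.WriteLine(p.SumOfCoordinates()); // 7

        Shape s = new Circle();
        Console.WriteLine(s.Describe()); // a circle
    }
}
```

The commented-out line is the key point: trying to derive a struct from another type is a compile error, which is exactly why inheritance-heavy designs push you towards classes.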

Following the same thread of thinking: if inheritance is not part of the picture, much of the object oriented magic (or headache, depending on how your day is going) is out of the picture too. However, there will be scenarios where you would enjoy a non object oriented type. When such a scenario strikes, you should use a struct, and you will be that much better off for it.  

That also means, if you are sure that you will be either inheriting a type or expect your created type to be inherited, then you are obviously going to create class types. 
