jay's old blog

this blog will be deleted soon - please visit my new blog - https://thesanguinetechtrainer.com

Entering the world of web API and related stuff

Ever since this blog started, I have written what I believe are useful tutorials on Android, web development and Windows development. I must have written more than 150 blog posts in total (maybe more, I don't have an exact count). One common thread across all these tutorials is that they are all geared towards application development.

That means, unless you already have an external source of data (or your application connects to a database), the tutorials above will not help you much. Of course, in most real-world work scenarios, you can be fairly sure that the posts above will help you with application development, which is what is expected of you. However, what if you want to change teams? What if you also wish to build a data service to complement your application development? Or perhaps you don't wish to change teams but to play on both: build a data service and also the application that consumes it. If you could do that, well, that would be incredibly cool.

That is what I plan to blog about going forward. I plan to build my own data service (for a project that I hope to start next year), and obviously I will blog about it. Of course, this year I have taken up novel writing, which is taking up a significant amount of my writing time. So the blog posts will reduce in quantity, at least till the end of the year, but I will blog as often as I can.

Update 1: I will be using the information available in the official docs at the asp dot net site.

Obsolescence and taxis and uber and developers and programmers and disruption culture

There is nothing more heartbreaking and devastating for an individual than to find out that they are no longer required in the grand scheme of things. This takes many forms: in personal life, in relationships, in business and, of course, in one's career. Obviously, I cannot write about the personal stuff, but from a business perspective and a career perspective, I sure can.

For starters, check out this interesting article "Court says yes to regulating cabbies, no to governing Uber drivers" on one of my favorite technology sites, Ars Technica.

In the article, there are a couple of quotes. Both are extremely insightful, but the second one I reproduce here in its entirety.

A license to operate a coffee shop doesn’t authorize the licensee to enjoin a tea shop from opening. When property consists of a license to operate in a market in a particular way, it does not carry with it a right to be free from competition in that market. A patent confers an exclusive right to make and sell the patented product, but no right to prevent a competitor from inventing a noninfringing substitute product that erodes the patentee’s profits. Indeed when new technologies, or new business methods, appear, a common result is the decline or even disappearance of the old. Were the old deemed to have a constitutional right to preclude the entry of the new into the markets of the old, economic progress might grind to a halt. Instead of taxis we might have horse and buggies; instead of the telephone, the telegraph; instead of computers, slide rules. Obsolescence would equal entitlement.

In a nutshell, the quoted paragraph implies that the passage of time (and its consequence, i.e. innovation) stops for nobody. Programmers and developers (which are two different types of people, by the way) who work in the computer science industry will have to deal with the challenge of obsolescence rather frequently. A very simple example would be the amount of RAM our computers have. When I was a student, 128 MB of RAM was a big deal. Today, my own work laptop has 8 GB of RAM and I frequently complain that it is not enough. Do you see how much has changed?

I talk about obsolescence constantly, and have been talking about it for as long as I can remember. My students are probably bored to death with it. Now, I am doing the same thing here. This time, though, I am backing my claims with what someone else is saying. If someone is reading this blog post, especially my students, I beg you: study the scenery in front of you. Feel the direction in which the technology wind is blowing. Start researching the consequences of the upcoming change. Discuss with others like you. Then, finalize a plan, one that tells you how you can react and adapt when the hypothetical change becomes reality. When it does, be ready to change and adapt. Do not let age-old customs and traditions stop you from adapting to changing realities.

Change is not easy. It hurts. It affects everybody around you, and sometimes it even breaks existing professional and personal relationships. Life is about survival. Change, I am afraid, is part of it. If you are unable to change, then all will be lost.

XML – stuff

Earlier, I blogged about web services, where I mentioned the two most popular data formats they use. One of them is XML, which is a web standard. Of course, Microsoft uses it heavily, and hence this blog. There should be a series of blogs on JSON as well, but that will come some other time.

With XML, we are looking at reading XML data and writing XML data. C sharp provides multiple ways of doing these things. First up, we are looking at a pair of classes that are meant for writing and reading XML files. Now, before we look into them, you need to understand the XML format. XML, short for Extensible Markup Language, is just like HTML. The difference here is that in HTML we use the standard HTML tags like <p>, <img> and stuff like that. In XML, we create our own tags, which allows us to 'tag' the data however we wish. That is where it gets the name 'extensible' markup language: you can extend the set of tags to suit your needs. Very clever, whoever came up with that name.
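For instance, a tiny XML document with tags of our own making (the names here are entirely my own invention) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<water_bottles>
  <bottle name="bottle 1" color="blue" />
  <bottle name="bottle 2" color="red" />
</water_bottles>
```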

Enough of my rambling; let us look at the two classes that are useful for XML – XmlWriter and XmlReader. As you may have guessed from the names, the former is for writing and the latter is for reading.

Writing object is created like this, and the reading object like this.

XmlWriter stream_object_xml_writer = XmlWriter.Create(stream_object_stream, new XmlWriterSettings() { Indent = true });

XmlReader xml_reading_object = XmlReader.Create(string_reading_object, new XmlReaderSettings() { IgnoreWhitespace = true });

Writing using these classes is a complicated affair, mostly because XML is all about tags. In addition to that, you also have to keep track of nodes, which contain further nodes, and which hold some data or attributes. You will have to manually insert attributes and manually insert values. The whole thing takes a while to code. It's like typing XML by hand, but with methods. Of course, once you have a method in place, replicating it becomes easier, but yes, it takes a while.
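As a rough sketch of that manual, method-by-method writing (the water_bottles and bottle element names are my own made-up examples, not anything from the framework):

```csharp
using System;
using System.Text;
using System.Xml;

class Program
{
    static void Main()
    {
        StringBuilder string_builder_1 = new StringBuilder();

        //the writer pushes xml text into the string builder; Indent makes it human readable
        using (XmlWriter xml_writer_1 = XmlWriter.Create(string_builder_1, new XmlWriterSettings() { Indent = true }))
        {
            xml_writer_1.WriteStartDocument();
            xml_writer_1.WriteStartElement("water_bottles");       //root node

            xml_writer_1.WriteStartElement("bottle");              //child node
            xml_writer_1.WriteAttributeString("name", "bottle 1"); //attributes, inserted manually
            xml_writer_1.WriteAttributeString("color", "blue");
            xml_writer_1.WriteEndElement();

            xml_writer_1.WriteEndElement();
            xml_writer_1.WriteEndDocument();
        }

        Console.WriteLine(string_builder_1.ToString());
    }
}
```

Every start element needs its matching end element, which is exactly the 'typing XML by hand, but with methods' feeling I described above.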

The same applies to reading: navigating from node to node, manually, picking up the attributes and values manually, and then collecting or displaying them. You can find this in full action in our accompanying bit bucket code repo.
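A matching sketch for the reading side (again, the element and attribute names are just my example data):

```csharp
using System;
using System.IO;
using System.Xml;

class Program
{
    static void Main()
    {
        string xml_string_1 = "<water_bottles><bottle name=\"bottle 1\" color=\"blue\" /></water_bottles>";

        using (XmlReader xml_reader_1 = XmlReader.Create(new StringReader(xml_string_1), new XmlReaderSettings() { IgnoreWhitespace = true }))
        {
            //navigate from node to node, manually
            while (xml_reader_1.Read())
            {
                //pick up the attributes and values, also manually
                if (xml_reader_1.NodeType == XmlNodeType.Element && xml_reader_1.Name == "bottle")
                {
                    Console.WriteLine("name - {0} color - {1}", xml_reader_1.GetAttribute("name"), xml_reader_1.GetAttribute("color"));
                }
            }
        }
    }
}
```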

In addition to XmlWriter and XmlReader, there is another class meant for XML manipulation: XmlDocument. It loads the whole document into memory at once, which makes reading and writing much more convenient, while XmlReader and XmlWriter stream through the data and are lighter on memory for large files. Then the question is, why not just use XmlDocument for everything? The answer is: it is a trade-off, and the choice is yours.
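For comparison, here is a minimal XmlDocument sketch (same made-up bottle data), where the whole document sits in memory and we can jump straight to the nodes we want:

```csharp
using System;
using System.Xml;

class Program
{
    static void Main()
    {
        //load the entire document into memory in one go
        XmlDocument document_1 = new XmlDocument();
        document_1.LoadXml("<water_bottles><bottle name=\"bottle 1\" color=\"blue\" /></water_bottles>");

        //no node-by-node navigation; an xpath query fetches the matching nodes
        foreach (XmlNode temp_node in document_1.SelectNodes("//bottle"))
        {
            Console.WriteLine("name - {0}", temp_node.Attributes["name"].Value);
        }
    }
}
```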

As always, find the attached bit bucket to see all these three in action.

Serialize and Deserialize data and why

To understand serialization and deserialization, we need to understand what happens to data when it resides inside an application and when it travels over networks. When data is part of an application, it is usually stored as part of an object instance. It is an object representation, something that cannot be transmitted over a network as-is.

The web transmits data in the form of text, and that's pretty much it. This is part of that loose coupling I keep talking about. The same happened earlier when we talked about streams, with different types of files: we convert a file from one format to another, and then reconvert it back to the original type. In the case of serialization and deserialization, we are looking at converting object data into an equivalent string format, which is serialization. Perhaps the name comes from the series of characters into which the data is converted. Then this string data is sent over the network, which is what networks do. Once the serialized data reaches the destination, the string is converted back into its object form, thereby allowing the receiving application to work with the data.

When you think about it, this is perhaps the greatest accomplishment of JSON. There are countless platforms today. You have the web. You have Windows. You have Windows Phone. You have Android. You also have iOS. Then there are so many platforms from the olden days, and so many platforms coming in the future. All in all, we have a real cluster-foxtrot here!

The idea behind JSON is that it is a simple string, or in other words, a serialized representation of the actual data. That means JSON can happily travel over the internet, establishing a useful and standard way of making things travel from the source (data server) to the destination (the client application running on any of the myriad platforms I mentioned above) and vice versa.

Our focus is clearly the dot net platform, so we have libraries that can readily convert object data into its serial format, and then back. That is what we are discussing here. Believe it or not, it is pretty straightforward. In fact, if you allow me to go off on one of my tangents, I think we live in an excellent ecosystem for developers today. Access to knowledge is easy and free. Development technology is mature and, most of the time, free. It is easy to find mentors and it is easy to switch platforms. I would say it is now possible to build large scale applications with extremely small teams. There is so much processing power available for free, or at a low price, to one person. It's just…I don't know, wonderful. If only more students spent a little more time innovating and taking risks, and less time texting and updating their chat app status and profile photo, the world would be a better place.

Okay, so…serialization and deserialization. Right, here is how it looks in c sharp.

            //the serializer needs to know the type it is converting (Water_Bottle is our sample class)
            XmlSerializer object_serializer_1 = new XmlSerializer(typeof(Water_Bottle));
            string xml_string_1;

            using (StringWriter write_string_object_1 = new StringWriter())
            {
                //creating an instance of water bottle and initializing it with some values
                Water_Bottle temp_bottle = new Water_Bottle
                {
                    name_of_bottle = "bottle 1",
                    color_of_bottle = "blue"
                };

                //now serializing it and adding it into the string.
                //on the left side, the string stream that will hold the serialized data
                //on the right side, the object that needs to be serialized
                object_serializer_1.Serialize(write_string_object_1, temp_bottle);
                xml_string_1 = write_string_object_1.ToString();
            }

            using (StringReader read_string_object_1 = new StringReader(xml_string_1))
            {
                Water_Bottle temp_bottle = (Water_Bottle)object_serializer_1.Deserialize(read_string_object_1);

                //alright, at this point, I have obtained the deserialized object
                //let me display the data from the object itself
                Console.WriteLine("Here is data from the deserialized xml and now in an object, data");
                Console.WriteLine("name of bottle - {0} color of bottle - {1} ", temp_bottle.name_of_bottle, temp_bottle.color_of_bottle);
            }

There you go, easy peasy. Of course, check our code for a complete understanding of this concept.

Directory operations

Once we have located the drives (read about it here), it's time to do some directory operations. In simple words, directory operations boil down to these essential tasks:

Creating directories.

Deleting directories.

C sharp provides two ways of doing this. There is the static Directory class, and the regular DirectoryInfo class. The merits of using either one boil down to using a static class or a non-static one. So, I will leave it to your own interpretation of static and non-static classes to decide which of the two you wish to use in your project.
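For completeness, here is what the static route looks like (the folder name is just a made-up example under the temp directory, so the sketch is safe to run):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        //a made-up folder under the temp directory
        string path_1 = Path.Combine(Path.GetTempPath(), "directory_playground", "create1");

        //no object needed - the static Directory class works straight off the path string
        Directory.CreateDirectory(path_1);
        Console.WriteLine("exists after create - {0}", Directory.Exists(path_1));

        Directory.Delete(path_1);
        Console.WriteLine("exists after delete - {0}", Directory.Exists(path_1));
    }
}
```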

I personally prefer to use the non-static class, and here is how you can create and delete directories.

            //lets create a directory object with a path
            var directory_object = new DirectoryInfo(@"C:\Users\Administrator\Documents\implementdataaccess\directory_playground\create1");

            //create it on disk, then delete it again
            directory_object.Create();
            directory_object.Delete();


Now, do remember that these are directory operations. Since access to drives and drive permissions can vary so much in a real-world application scenario, there is a high chance of running into errors, so you are probably looking at mandatory try/catch code blocks.
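A sketch of what those try/catch blocks might look like around a directory create (the path is hypothetical; a real application would use whatever path it has been given):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        //hypothetical path that the current user may or may not be allowed to touch
        var directory_object = new DirectoryInfo(@"C:\Users\Administrator\Documents\implementdataaccess\directory_playground\create1");

        try
        {
            directory_object.Create();
            Console.WriteLine("directory created");
        }
        catch (UnauthorizedAccessException)
        {
            //permissions vary wildly from machine to machine
            Console.WriteLine("no permission to create here");
        }
        catch (IOException exception_1)
        {
            //drive not ready, path trouble, and other io-level surprises
            Console.WriteLine("some other directory trouble - {0}", exception_1.Message);
        }
    }
}
```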

Of course, you can find full code at our code repository.

wrapping up the exam 483 preparations - phase 1 of 6 complete and praying

As I mentioned yesterday (read about it here), I uploaded the last set of bit bucket code related to exam 483, thereby ending a journey that started almost 5 months ago. One of the many things I realized this year is that the world is full of developers. However, how do we separate the good from the bad? That is where certifications come in. Certifications aren't perfect, but they do give us a common yardstick to measure expertise.

One such certification is the Microsoft certification. It's definitely top of the line. It's tough to break through, and it costs a lot. On top of that, it has international recognition, and the certification actually covers the stuff that I use every day in my work. So yes, Microsoft certification really makes sense for me, and by extension study nildana. Of course, as always, one should lead from the front. I wanted my students (well, at least those who believe in becoming the best in their lives and understand the sacrifices one has to make for the same) to become certified, and I believed that if I could get one of these, they would follow me.

To that effect, I bought the exam 483 book. Not only did I study, I also made it a point to code everything I learnt. I built a bit bucket repository and wrote extremely detailed comments. Then, I blogged each and every thing I learned. I did this for my own benefit, but also so that my students could have an easy time. So far, none of my students have taken the lead, but a man can be hopeful. Either way, as of today, the blog and the code are ready, and that is phase 1 complete.

There are 5 more phases to go. Here they are in a nutshell:

  1. Learn exam 483, blog what you learn and code what you learn. (COMPLETED)
  2. Revise everything. (IN PROGRESS)
  3. Take the exam. Clear it. (TO START)
  4. Get study nildana students to get certified. Also pray to god that the students won't quit midway. And offer prayers when one of them actually gets the certification. Yes, lest I forget, pray even more that they won't quit. (IN PROGRESS)
  5. Pray some more that students won't quit. (IN PROGRESS)
  6. Pray. (IN PROGRESS)

As I continue my mission to take things to the next level, I hope god in his infinite wisdom will help me out with the phases yet to be completed.

Consuming data – database and web services

Applications are all about data, that much is obvious. I have also explained why this is so in an earlier blog post (read about it here). So, when you are consuming data, where does this data come from? Files are an obvious source and destination when it comes to applications that rely heavily on the local hard drive. This applies to both desktop applications and mobile applications.

Of course, that is not always good enough. That is where we use the phrase 'consuming' data services, and that is where databases come into the picture. Databases, or database servers, or simply servers, are where data is stored for consumption by applications. Almost any application that you might think of will prefer to consume data from these servers. Something as simple as a facebook app gets all its data from the facebook data servers. The same applies to other apps, like games where multiplayer stuff is happening. Like this, any app that needs stuff stored at a remote location will have some kind of connection to a database.

Now, this much is clear – consuming data from a database. However, with applications, we have something called separation of concerns, or tight coupling versus loose coupling. You can read about it here. Since applications prefer loose coupling (except when tight coupling is preferred), we don't want applications connecting directly to databases.

Ideally, we want applications to connect to an intermediary agent in the middle, and the intermediary agent to connect to the database. This is not a new concept, of course, and there are many ways to do it. The preferred way is to give applications access to data through web APIs. This means that, in most cases, data comes to the application in the form of a JSON string (the majority of cases) or XML (the minority of cases).

There are several ways to build web API services, and obviously I am going to recommend asp dot net for building web APIs. Someday, I might get around to blogging about that as well. Meanwhile, do check out the bit bucket solution which talks about making web requests. Further, check out our entire collection of blog posts and code on asp dot net MVC to understand how web applications can connect with databases. Find them here on our blog.
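As a minimal sketch of the consuming side (the URL is a placeholder, and I am using the framework's plain WebClient class here rather than anything asp dot net specific):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        //placeholder endpoint - swap in whatever your web api actually exposes
        string url_1 = "https://example.com/api/bottles";

        using (WebClient client_1 = new WebClient())
        {
            //the response arrives as plain text, typically a JSON string
            string json_string_1 = client_1.DownloadString(url_1);
            Console.WriteLine(json_string_1);
        }
    }
}
```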

LINQ Stuff and LINQ Stuff with XML

I have already discussed some LINQ stuff in an earlier blog (find it here), but this one is related to working with XML. However, what was discussed there is still valid, because that is how cool LINQ (and by extension the dot net platform) is.

Once you read the earlier blog post on LINQ, you will realize that LINQ is a query system built into dot net. LINQ works with multiple data systems, and XML is one of them. If you have an XML file, you can query through it, just like you would query a database via the database context in asp dot net.

Here you can see LINQ in action with an XML string.

            XDocument document_xml_1 = XDocument.Parse(xml_string);


            //doing linq and getting a list of elements we want

            IEnumerable<string> list_of_person = from temp in document_xml_1.Descendants("person")

                                                 select (string)temp.Attribute("firstname") + " " + (string)temp.Attribute("lastname");

Here the important thing to note is the use of the object instance of XDocument, which gives us a list of data that can then be parsed and printed. As always, check our bit bucket code to see all this in action.
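To make the snippet above self-contained, here is the same query with a sample XML string (the person, firstname and lastname names are my own example data):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        //a sample xml string with attributes we can query
        string xml_string = "<people>" +
                            "<person firstname=\"jay\" lastname=\"sanguine\" />" +
                            "<person firstname=\"john\" lastname=\"doe\" />" +
                            "</people>";

        XDocument document_xml_1 = XDocument.Parse(xml_string);

        //doing linq and getting the list of data we want
        IEnumerable<string> list_of_person = from temp in document_xml_1.Descendants("person")
                                             select (string)temp.Attribute("firstname") + " " + (string)temp.Attribute("lastname");

        foreach (string temp_person in list_of_person)
        {
            Console.WriteLine(temp_person); //prints "jay sanguine", then "john doe"
        }
    }
}
```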

Files Directories – object or static operations

When you are working with directories and files, you are provided with at least two options. You can use the static classes, or you can use the regular classes. To recap: a static class means you cannot really instantiate objects from it; it also means you can use the methods of the static class directly.

Personally, I think using the static methods makes for faster coding. It also keeps the source small, that is, it reduces the number of lines, since you can do file operations by calling the static class directly. So, that is when you would use the static class: when you wish to do some quick file operations, like a one-time opening of a file, writing into it, and then closing it. Stuff like that.

When you use the non-static classes, you will have to instantiate an object, and then do your file operations through it. This is useful when you are playing around with multiple files, like hundreds at the same time. In this scenario, each file or directory operation has an object connected to it. That way, when you want to do some mid-operation manipulation of a directory or file, you are free to do so.

Is that explanation good enough? I would think so, but as with many things, these are developer preferences. Even for one-time operations, I prefer to use the non-static classes. You may choose the other way around. Eventually, as with life itself, these decisions are yours to make.
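Here is a small sketch of both styles side by side (the file name is a made-up example under the temp directory):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        //a made-up file under the temp directory, safe for a quick demo
        string path_1 = Path.Combine(Path.GetTempPath(), "quick_note.txt");

        //one-time operation: the static File class, no object needed
        File.WriteAllText(path_1, "hello from the static class");
        Console.WriteLine(File.ReadAllText(path_1));

        //the same file via an object - handy when juggling many files at once
        FileInfo file_object_1 = new FileInfo(path_1);
        Console.WriteLine("size in bytes - {0}", file_object_1.Length);

        file_object_1.Delete();
    }
}
```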

Working with drives

Perhaps, for developers who are only now entering the world of programming, working with drives might seem silly. Most application development courses would start by teaching you how to work with web APIs, which give you data in JSON form. Then again, computers are still relevant, and that means we should know how to work with drives (hard drives) that are installed locally or mounted over a network.

The logic here is that an application needs access to files that are stored on a drive. Every file has a path associated with it. That means, before you can access a file, you need to locate the drive on which the file is located. Once the drive is located, your application will also have to check if it has access and, if access is not available, perhaps request and obtain it. That means that anything and everything an application needs to do has to start with knowledge of drives.

For that, C sharp provides the DriveInfo class. Its usage looks something like this, for instance when we are collecting the set of drives that are available to the computer.

            //this would be a list of drives

            DriveInfo[] list_of_all_drives = DriveInfo.GetDrives();

Each item in the above drive array can be scanned, and necessary information such as the name of the drive, its properties, access, available space and all that can be obtained. In most cases, a local drive will be available, simply because, well, the application is running on it. However, network drives may be connected to the computer but not exactly ready. That is why it is always a good idea to check if the drive is ready before doing operations on it.

                if (temp_drive.IsReady == true)
                {
                     //do programming stuff here
                }

Once you have a drive object with you, you could do some stuff like this.

                    Console.WriteLine("Program.cs - lets_do_drive_stuff - volume label is - {0}", temp_drive.VolumeLabel);

                    Console.WriteLine("Program.cs - lets_do_drive_stuff - file system is - {0}", temp_drive.DriveFormat);

                    Console.WriteLine("Program.cs - lets_do_drive_stuff - available space to current user is - {0}", temp_drive.AvailableFreeSpace);

                    Console.WriteLine("Program.cs - lets_do_drive_stuff - total available space is - {0}", temp_drive.TotalFreeSpace);

                    Console.WriteLine("Program.cs - lets_do_drive_stuff - total drive size is - {0}", temp_drive.TotalSize);

So, that is all there is to it about drives. As always, find out more at our code repository.