jay's old blog

this blog will be deleted soon - please visit my new blog - https://thesanguinetechtrainer.com

Project TD - Day 59 update – Core Tech – Web API module and migrating to new blog



UPDATE 2!!!!

Alright, my new blog is now up and running. Find it here - https://thesanguinetechtrainer.com 


UPDATE!!!!

Okay. This is weird for me. For the last few days, BlogEngine.NET (the software that powers this blog) has been acting crazy. I have spent a good chunk of time troubleshooting (after all, I am myself a dot net developer and this thing was built using dot net), and I have come to the realization that there is no fixing this. No one else is facing this issue.

I don't think it is a connection issue to the web server, because the blog itself is running just fine.

I don't think it is a database issue, because the blog does not use a database.

I thought there would be an easy way to export my blog elsewhere, but nope. The BlogEngine software is not popular enough to merit that.

And since I cannot create new posts (fortunately, existing posts can still be edited and updated), I am updating an existing post. I suppose I should be happy that at least this much is working.

I am currently exploring both WordPress and Medium as my next blogging platform. I like the look of Medium, so I will probably go with that. I still don't know how I will move all the stuff from this blog to the new Medium blog, but right now, I need a place to write and publish. Or else I will lose my mind, and fast, at that.

--------------------------------------------------------------------

When discussing the API Engine, I put a flowchart in place that dictates how I would work on the different modules of the entire app ecosystem of Project TD. Then, I further broke down the API Engine into simpler modules, which I have discussed here. Here is that list, for reference.

  1. Build and Deploy an API
  2. POST and GET to the API
  3. PUSH and PULL from the API to the Data Service
  4. Google Maps API on Android, Web and iOS
  5. Facebook Login API on Android, Web and iOS

Obviously, I realized that some changes were needed, and here is the revised list of the simpler modules.

  1. Build and Deploy API (this includes the POST, GET, PUSH and PULL from the above list)
  2. Android app that works with the above API.
  3. iOS app that works with the above API.
  4. Web app that works with the above API.
  5. Google Maps API on Android, Web and iOS
  6. Facebook Login API on Android, Web and iOS

From the above list, items 1 and 4 are completed. I mean, in software, there is no such thing as completed, but they are completed to acceptable levels. This blog post talks about 1 and 4, and I will move on to items 2/3/5/6 next.

As I have already decided, much of the back-end stuff, that is the API, is being built using .NET, which means I get to use my favorite language, C#, for all of this. That much is done. I am using Web API 2 technology for this. Since I don't wish to manually create the tables for this service, I am using Entity Framework 6 to act as the agent between the API app and the database. For the database, I am using Microsoft SQL Server, and for the hosted web app where the API lives, I am using an IIS server, powered by Microsoft technologies. All the server components are hosted on Microsoft Azure, and the cost so far (at the time of this writing) is less than 500 rupees per month. Of course, this is a test server, and the load on it has been kept at extremely low levels. I am sure the actual cost when the full API ecosystem launches will be in triple or even quadruple digits.

While building this API, I ran into Knockout JS. I hadn't used that one before. I also ran into attribute routing, which is a fun feature, and I am surprised that it is a recent addition; I sort of assumed it was always there. I also learnt about action names, but I think attribute routing sort of makes them redundant. Still, it is good to know they are there. Of course, the current API only communicates in text data, which might become a problem later when I wish to deal with multimedia. However, from what I have heard, the necessary modifications should not be a problem.
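The appeal of attribute routing is that the route pattern lives right next to the handler it maps to, instead of in a central route table. Here is a rough sketch of that idea in plain Python (Web API 2 does this with [Route] attributes in C#; the route pattern and handler names below are made up for illustration):

```python
# Sketch of attribute-style routing: the route pattern is declared
# on the handler itself, not in a central routing table.
ROUTES = {}

def route(path):
    """Register the decorated handler under the given route pattern."""
    def register(handler):
        ROUTES[path] = handler
        return handler
    return register

@route("api/customers/{id}")
def get_customer(id):
    # Hypothetical handler: returns a dummy customer record.
    return {"id": id, "name": "sample customer"}

def dispatch(path, **params):
    # Look up the handler registered for the pattern and invoke it.
    return ROUTES[path](**params)

print(dispatch("api/customers/{id}", id=42))
```

The point of the sketch is only the shape: route and handler sit together, so adding an endpoint touches one place in the code.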

I am not currently satisfied with the way the web app works. It takes raw data and JSONifies it on its own before consuming it. The Web API does have components that return data in both raw format and JSON string format, so perhaps at some future time, I will build a web app that works with the JSON calls. For now, though, I will stick with the default web app built using Knockout JS.
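For reference, the raw-versus-JSON distinction above boils down to this: the API can hand back either the object itself or its JSON string form, and a client working with the JSON call parses the string before use. An illustrative Python sketch, with a made-up restaurant record:

```python
import json

# Hypothetical API component: returns the record as a raw object.
def get_restaurant_raw():
    return {"name": "Udupi Grand", "rating": 4}

# Sibling component: returns the same record as a JSON string.
def get_restaurant_json():
    return json.dumps(get_restaurant_raw())

# A web app built around the JSON call parses the string before consuming it.
payload = json.loads(get_restaurant_json())
print(payload["name"])
```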

The API is living on the cloud, and I have now completely abandoned the (soon to be retired) classic Azure management portal. The new Azure portal is sort of heavy on the system, and it has this sliding interface which makes it difficult to work with, especially as more Azure components are loaded. However, it has all the new stuff, and for good or bad, Microsoft has been pushing the new portal for at least 3 years now. I guess it is time for me to go with the change. I still don't like it, but it is the future, so I am bending over backwards for it.

As always, my tech guru, Mika, has been wonderfully helpful while building this. I want to thank him for his help, and hopefully, I will return the favor someday.

Alright, enough of the diary writing. You can find the two repos related to the above components, here and here. You can find the web app demoing the api, running at this link. Obviously, the repo has all the comments I can put and relevant links and stuff.

Alright, that is all for now.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Project TD - Day 55 update – A Word About Microsoft Azure


[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

I have been working on Project TD for about 55 days now. It has been an interesting experience because of what came before Project TD, and the incredible potential TD holds, if it should be fully realized as per its original vision.

Day 55 is also significant in many ways because, till now, most of the blog posts have been about explaining the concept, the design and the work process. All this is good, and obviously essential. If one were to compare this to a vacation, then, till now, I have been mostly planning the vacation. Deciding on the places to hit, the modes of travel, the clothes to pack, accessories being purchased and so on. It’s all happening in the drawing room. It’s like the first half of a heist movie.

Now, things are getting real. The vacation is planned. The places chosen. It’s time to step out of the comfort of the home and go outside. Time to carry out the heist, and I guess, I am stealing knowledge, to complete the metaphor.

For the last couple of months, I have spent a small fortune (to be honest, it’s a lot of money for a small time businessman such as myself) in setting up all the necessary hardware. I have got the computers, power supply, mobile devices, tablets, office space and everything else. In each case, I also had to invest in a backup system, which simply multiplied the cost by 2. All this is the ‘planning part’.

Now, all these ideas must become real, and the services must run on the cloud. The cloud is the backbone. The cloud is where Project TD will live. I have been using Microsoft Azure for a little over 5 years now. However, I have only been using it in bits and pieces. To host a website here, run a mobile data service there, set up a notification center here, host a virtual machine to test an application, and so on. None of them were related to each other, so it wasn't a complicated system. Project TD, though, is by any measure a complicated system. It will push my ability to harness the power of the cloud to levels I have never reached before. Almost every part of this system can be automated post deployment. I will be taking advantage of that too and see how far ahead automation has come.

And the monetary cost! Man, I am going to be paying huge bills for at least the next 2 years!

While the cost – time and money – is huge, I am convinced that the cloud thing is pretty much in place now. As the amount of data people consume is only going to increase, the requirement for cloud services will only increase. Project TD will help me figure out how to make multiple cloud components work with each other in a rather complicated system. By extension, this should provide me insight into what is a good fit for the cloud and what is not.

I will be the first to admit that Microsoft Azure is expensive when compared to Amazon and Google. However, I find that Azure has much better support for Visual Studio, which is a deal maker for me. Microsoft is one of the big dogs (if not the biggest dog) in the cloud business, and they are constantly adding new stuff to their cloud services.

So yeah, Dear Mr. Azure, don’t let me down man.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Project TD - Day 41 update – Core Tech For The First Batch of APIs for the API Engine


[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

Earlier, I have written extensively about the design technique that I plan to follow to build the API Engine. The API Engine itself is a collection of APIs. For the last few days, I have been analysing the APIs that will make up the first batch of contributions towards the engine. I noticed that there are some essential tech components they need to use to make it all work.

Going by the design that I put in place, this is the second bubble from the API design titled as ‘Identify all Tech’.

So, here is the full list of technologies that have been identified to make this happen. The blog post image above lists them.

  • Build and Deploy an API
  • POST and GET to the API
  • PUSH and PULL from the API to the Data Service
  • Google Maps API on Android, Web and iOS
  • Facebook Login API on Android, Web and iOS 

I will, of course, follow the design plan: repo the above stuff, then tutorial it. After that (these steps will take a few weeks to complete), I will finally have some serious unit testing to work on.

As they say in the good books (mostly, there are no good books that I refer to; it is simply me using them as a proxy :P), exciting times lie ahead!

Follow me on twitter, facebook and instagram for more updates. Thanks!

Project TD - Day 28 update – Design for the Modules of API Engine



[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

I have already blogged that the entire app ecosystem depends on what the heart pushes and collects. The heart being the API Engine. I have written multiple times that the whole system must be modular so that parts can be replaced and re-engineered if something goes wrong. Or, if there is an opportunity to make things better tomorrow, then, that option should be readily available.

To that effect, I have come up with a design plan (pictured above) that should drive all engineering efforts towards the API modules. It's a simple and plain design, but it should be enough to help me. Obviously, it is subject to further revisions if the occasion should call for it.

Pick an API

The API Engine is but a collection of API calls running on a single API server. On one side, it provides access to facilities for end users. On the other side, it has the necessary access to the underlying database. As of now, there are 12 API calls that would form the first batch of API services to be part of the API Engine.

The idea is to pick up one API service (each of them will have its own design) and then start working on it. The first step would be figuring out the design, which means illustrations and definite discussions about its usability with the stakeholders. Once the design is confirmed to work (to the best of the data available), then we move on to the technical part of it.

For instance, for the patron app, there would be one service that pushes the restaurant data to the server. This would be one API call. So, I would start off by designing the workflow. Show the workflow to as many people as I can (my mom, friends, my tech mentor, the many students who are under my charge and, of course, random strangers) and collect feedback on its feasibility. Then use this feedback data and design the final workflow.

Identify all tech

At this stage, the design and the flow of activities of each identified API has been done. The API design itself is completely independent of the underlying tech that will actually power it. For instance, the design is like I wake up in the morning and decide to have breakfast. The actual activity of eating breakfast might involve many things. I could have a pack of biscuits. Two packs of milk. A luxurious breakfast at my folks place. Or go out and pay a decent amount and generous tip at my favorite restaurant. As you can see, the decision to have breakfast and actually having it, though related, are quite independent of each other.

Similarly, the design is always independent of its implementation. It's part of the modularity and replaceability thing I keep going on and on about. Here, I will be thinking only of implementation and not questioning the design choices.

Earlier, I mentioned an example API service that pushes data from the patron app to the cloud. So, I need to figure out all the technologies that will make this happen. It would mean I need to build a web app, an Android app and an iOS app that each have a simple button. This button would collect data (that means I need to figure out how to collect data on all these platforms) and trigger a cloud push on button press. Like this, I break down the design into individual components and see how the code would actually work for these things.

Repo it

Once the tech part has been figured out, the next part is pretty straightforward. I have already promised that a lot of stuff related to the project would be open sourced. Anybody can use it for non-commercial uses. So, I will push the code of the web stuff, the cloud things, the app solution, everything, to a public repo.

Tutorial it

The repo will have the code. As is my standard practice, I write pages and pages worth of documentation within the code. However, it helps me (and my career) to have some tutorials that walk through the approach I took while converting the design into runnable code.

Unit Testing

Once the repo and tutorial are done, I need to check the individual fitness of the resulting module. This will ensure that the unit itself is independent, and it will involve writing dummy external connections.
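The "dummy external connections" idea is exactly what mocking libraries provide. A minimal sketch in Python using the standard library's unittest.mock (push_to_cloud is a hypothetical unit, not code from the project):

```python
from unittest import mock

# Hypothetical unit under test: pushes data through an external connection.
def push_to_cloud(connection, data):
    if not data:
        raise ValueError("nothing to push")
    return connection.send(data)

# The external connection is a dummy (a mock), so the unit is exercised
# in isolation: no real server, no network, no database involved.
fake_conn = mock.Mock()
fake_conn.send.return_value = "ok"

assert push_to_cloud(fake_conn, {"table": "restaurants"}) == "ok"
fake_conn.send.assert_called_once_with({"table": "restaurants"})
print("unit test passed")
```

If the module only talks to the outside world through connections handed to it like this, swapping the dummy for the real connection later is the integration-testing step.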

Integration Testing

Once the unit is confirmed to be fit to work, next comes the part about integration. I have to check if this module will work with the rest of the modules. I also need to make sure that the system won't come crashing down when the module dies, gets corrupted or simply stops working for some insane reason.

Once the integration testing is done (if it fails, I will probably have to restart at the design stage and work down from there), I put the module aside, and start over by picking up the next API service to work on.

Rinse and Repeat

This part is useful when, at a later point, if some new tech is introduced which will make things faster. Or perhaps some technology on which the app depends gets rebooted and revised. For instance, Microsoft might replace their existing mobile data service with a new one. Or, Facebook might implement a new login system. There are so many gears moving here man, anything can change.

Anticipating change and being ready to adapt is the only way I can keep this engine relevant and allow it to evolve.

Note about Sketchbook from AutoDesk

I have upgraded from hand-drawn illustrations to using the Sketchbook app by AutoDesk on the iPad. For the purpose of drawing, I am using an Amazon Basics stylus. I must confess, there are some serious advantages to using this over the hand-drawn thing. Obviously, the free app has certain limitations. For instance, I can only have 3 layers (which is alright but annoying) and I cannot select components (which is the biggest handicap of the free version). Eventually, I will have to get the pro membership, but I have to evaluate this system first. Once I am satisfied, I will kick things up a notch with the pro membership, which provides tools for desktop and Android as well. It is fairly inexpensive and should be useful.

The Amazon stylus is extremely basic, and there is a learning curve involved here as well. I feel that it is good enough for now. If this art design thing goes to the next level, I will probably spring for a Wacom stylus for a better control over the designs. For now, I am still trying to figure out the stylus.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Project TD - Day 19 update – Moving from Requirements Planning to Design



[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

Aha! It took 19 days for me to wrap up the requirements planning. I have been using the whole Island – Volcano – Ships and other stuff – Ocean as an example for my ecosystem. I think that makes sense because everything that happens in software (or any machinery, for that matter) is a reflection of real life itself. That is why it is only prudent that I (with my limited intelligence and even more limited imagination) use some real-life stuff, especially nature stuff, as an example ecosystem.

Right then, enough with my babbling. So, the requirements planning is done. A huge chunk of details related to this is documented at this link, creatively titled Requirements Planning. Of course, nothing is ever done when it comes to any planning. Yet, I am happy with the stuff documented so far. I feel confident that I have enough to move to the Design stage.

The design stage will be elaborated upon in future blog posts. May the force be with me.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Project TD - Day 19 update – Designing the Design Process



[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

Now that the requirements planning is all done (at least to an acceptable level), it's time to finalize the design process. The design process is crucial. I have always believed that being a developer (which includes being a designer) is always better than being a programmer. I have spent considerable ink on that topic while discussing Uber and self-driving cars. That means a good design is the only way to ensure that the idea becomes a usable service.

That means, I need to first design the actual design process. My design for the design process would include the following four components.

Sketches

Technologies To Be Used

Tutorials

Repositories

Everything in software development is a cyclical process. The more time you spend revising stuff, the more you find flaws in it. The fewer flaws one finds, the more confident one becomes in the design. A solid design means fewer flaws during actual development (where the actual app ecosystem is coded). I always make sure that I run my design through a lot of people. God has blessed me with access to individuals from varied backgrounds, and most of them are people who know the techie kind of person that I am. So, when I approach them with what I want, they don't get weirded out or act surprised. In other words, I can collect an endless amount of feedback, from folks who are tech wizards to those whose greatest technical achievement is forgetting their personal phone's PIN lock.

Sketches

It all starts with illustrations, drawings, doodles and anything else one might call creations. I have given myself access to a lot of drawing tools. I have got the good old-fashioned pen and paper. The premium notebook for scribbling. Of course, I have my iPad with a nice Amazon Basics stylus. In the future, I will probably invest in more expensive drawing tools, but I must balance my ambitions with the budget for this project.

One reason to start with sketches is the universality of visual things. My mother can understand it. My students can understand it. My tech mentor can understand it. Random people I meet on the road can understand it. Anybody can understand it. Over the years, I have learnt that no feedback is unimportant. Sure, some feedback might be useless, but then again, the greatest products have been built from the silliest of inspirations.

I have already concluded that the app ecosystem has the API Engine at its heart. The API Engine is but a collection of APIs, and each API would have a design connected to it.

Technologies To Be Used

Once a sketch has been finalized, the next step is to start adding real technology to reflect the corresponding actions. I mentioned earlier that each API would have a sketch attached to it. Once that is done, it would be essential to connect the different components of the sketch to the technologies that are used.

For instance, at its simplest, I would need to build an API service that would take some data and push it to the database. For this, I would need to code the API, put it on a server, design the database and then connect the two. I would need to list out all this and then code the stuff out of it.
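That simplest case, take some data and push it to the database, can be sketched in a few lines. This is illustrative Python with in-memory SQLite standing in for the real database; the table name and records are made up:

```python
import sqlite3

# Hypothetical API handler: accept a record and push it into the database.
def save_restaurant(db, name, city):
    db.execute("INSERT INTO restaurants (name, city) VALUES (?, ?)", (name, city))
    db.commit()

# SQLite stands in for SQL Server here so the sketch is self-contained.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE restaurants (name TEXT, city TEXT)")

save_restaurant(db, "Udupi Grand", "Bengaluru")
count = db.execute("SELECT COUNT(*) FROM restaurants").fetchone()[0]
print(count)
```

The listing exercise in the paragraph above is exactly this decomposition: the handler, the server it sits on, the schema, and the connection between them.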

Tutorials

Me just learning how to convert the sketches into actual technology components is not enough. I must write tutorials related to it. While I am an architect, I still make a huge chunk of money through training. I must have a collection of tutorials that will help me bolster my tutorial portfolio. Otherwise, one of the main benefits of this thing will be lost on me.

Repositories

Of course, all code must be pushed into a public repository that supports the tutorials. A tech tutorial without supporting code is not that cool. I will be the first to admit that I myself have written multiple tutorials that don't link to code on a repo. It's a fault I am trying to fix going forward.

Each design will go through the above steps, and hopefully there is enough room for all possible flaws to show up. The end result of the above process will be modules, which I would like to consider 'ready to use module templates', or RTUMTs. The actual development process (which comes after the design is over) will utilize these RTUMTs. Hopefully, since the RTUMTs have already been vetted at the design stage and the coding stage, the actual development will put a lot of focus on the integration of these modules, continuing my efforts to keep the entire design modular. This will allow me to remove and replace modules as newer and better technology becomes available, if the app ecosystem survives the first year of operation.

Of the five stages of the app ecosystem development, I imagine the design stage will be the most extensive. I have vowed not to start the actual development (which will eventually lead to the beta launch) until the entire design is completed. That means, if necessary, I am happy to spend an additional 6 months fine-tuning the design rather than jump into development with a half-baked design.

It’s going to be a long ride man.

Genius is eternal patience - Michelangelo

Follow me on twitter, facebook and instagram for more updates. Thanks!

Error and Logging



[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

A system, no matter how good it is, should be capable of giving the right communication. This applies to people too. Say there are two individuals: one of them an excellent worker with poor communication skills, and the other with average skills but excellent communication. On my best day or my worst day, I will always go with the person who has excellent communication skills (and average work skills). That's because a person with good communication is the one who will keep me informed about the good and the bad stuff, no matter how comfortable or uncomfortable the truth is.

At the outset, this all seems like common sense. However, in all the years I have been working, common sense is perhaps the rarest of commodities. People do stupid things, and then hide behind a veil of ego and false claims, losing touch with reality. That is why, as I continue to work on my app ecosystem, I realized that it is essential that it has a proper error and logging system.

From a strictly programming perspective, errors and logs both do the exact same thing. When something happens as expected, it is 'logged'. When something unexpected happens, it is also logged, but it shows up as an 'error'. For instance, when a user signs in successfully, that is a 'log'. When a user tries to sign in but it does not happen for some reason, that would be an 'error'.
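The log-versus-error distinction maps directly onto severity levels in any logging framework. A small sketch with Python's standard logging module (the sign-in function and user names are made up):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("signin")

def sign_in(user, password_ok):
    if password_ok:
        # Expected event: recorded as a plain log entry.
        log.info("user %s signed in", user)
        return True
    # Unexpected event: also recorded, but flagged as an error.
    log.error("sign-in failed for %s", user)
    return False

sign_in("alice", True)
sign_in("bob", False)
```

Both paths write to the same log stream; the severity flag is what lets the two collections be analysed separately later.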

A collection of logs is useful for making decisions related to performance optimization. For instance, if every user in the system is taking 10 seconds to sign in, and I discover that there is a new library that allows sign-ins to happen in 5 seconds, I know I need to implement that, thereby saving 5 seconds for every user on the system. Logs are more of a proactive measure for making a system better. We use the collection of collections (two collections, not a typo) of logs to find out what about the system can be improved.

A collection of errors is useful for finding out what is going wrong. This, unlike logs, is about reacting to something that is going wrong. Let's use the sign-in example. If 10% of users are having difficulty signing in, then that is a problem. A problem that needs to be fixed as soon as possible. The system should automatically trigger an alert when the error percentage rises, inform the operations folks, and pull up the established documentation, allowing the emergency response to start working.
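At its core, "trigger when the error percentage rises" is a threshold check over the error collection. A minimal sketch (the 10% threshold is an assumed number for illustration, not something fixed in the design):

```python
# Decide whether the current error rate should wake somebody up.
# The 10% default threshold is an assumed number.
def should_alert(failed, total, threshold=0.10):
    return total > 0 and failed / total >= threshold

assert should_alert(10, 100)      # 10% of sign-ins failing: raise the alert
assert not should_alert(1, 100)   # 1% failing: keep logging quietly
print("alert rules ok")
```

A real system would compute this over a sliding time window and route the alert to the operations folks; the decision itself stays this simple.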

In fact, if possible, the system should not only diagnose the issue on its own but also fix it on its own. I could be throwing darts in the dark here, but I think this is what machine learning is all about. The machine learns from its own mistakes and then starts doing things. Like, say, fixing problems with no intervention from anybody. I think this is practical and even possible. In my own experience, mistakes are like wheels. Some wheels need to be invented, while others have already been invented, and hence need not be reinvented.

Let's say the system logs an error for the first time. This is the first time, so a human is involved in fixing it. Once the issue has been fixed, programming is done so that the next time the same error is triggered, the system will attempt the same fix. That means no human needs to be involved. The human can now focus on fixing new issues, instead of reinventing a wheel that was already wheeled out last time. Machines are good at repetitive tasks, and that means, if something has been repeated, I don't quite understand why humans should be involved. The time spent by a human re-fixing what has already been fixed could be used for something else. That time could be spent fixing new issues. That time could be spent improving the system, like making it faster or better. For all I know, the human can use the same time to watch a movie or fall asleep on the couch or go for a walk. The idea is to avoid reinventing the wheel: let the machine do what has already been done and the human do something new and exciting.
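The fix-once-then-automate idea can be sketched without any machine learning at all: a registry of fixes the system has already learned. Everything here (the registry name, the error names, the fixes) is hypothetical, a toy model of the workflow rather than real self-healing code:

```python
# Registry of fixes the system has already learned (hypothetical name).
KNOWN_FIXES = {}

def handle_error(error_name, human_fix=None):
    if error_name in KNOWN_FIXES:
        # Seen before: the machine replays the recorded fix, no human needed.
        return KNOWN_FIXES[error_name]()
    if human_fix is None:
        return "escalate to a human"
    # First occurrence: a human supplies the fix, and it is remembered.
    KNOWN_FIXES[error_name] = human_fix
    return human_fix()

assert handle_error("cache_stale") == "escalate to a human"
handle_error("cache_stale", lambda: "cache cleared")
assert handle_error("cache_stale") == "cache cleared"
print("second occurrence handled by the machine")
```

The human touches each distinct error once; every repeat is the machine's job, which is exactly the division of labor the paragraph argues for.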

So yes, my app ecosystem will have a machine learning enabled error and logging system.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Project TD - Day 17 update - Requirements Planning updated



Project update.

So, I went ahead and have updated the following stuff in relevant pages.

Accessibility, Multi Language Support and Natural Interfaces
Choice of Cloud Platform
Supported Platforms – End Users
Error and Logging

Obviously, the main Requirements Planning post has also been updated, with the above links for future reference.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Choice of Cloud Platform



[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

The app ecosystem has many components but it can be broken down into two essential parts. There are the user apps (all types of users) and the API engine (which also includes the data storage system). The user apps live on the web, android and iOS. The API engine and the data storage system live on the cloud.

For almost a decade now, we have been hearing about this cloud. We hear about it so much that sometimes it's almost funny to talk about it. However, the cloud is very real, and it really does make life simpler. For a developer, cloud technology is a godsend. It reduces costs, allows new features to become immediately available, and mostly makes life better for end users as a consequence. It is only obvious that my app ecosystem runs all its processing in the cloud.

For the cloud, I can only think of two choices – Azure and Amazon Web Services. I understand that Google also provides cloud services, and there are a lot of smaller companies that provide cloud services. It is also possible to utilize cloud software to build our own cloud services.

For those who are unclear about this concept of the cloud, I will put a simple definition here. The cloud is about a service that can be scaled up and down as per requirements. Almost instantly. That is the cloud. For instance, say I am running a server on a VM and I realize that I need an additional 16 GB of RAM instantly. With a conventional VM, I would have to go through a series of steps and wait till it gets provisioned. This is just for a RAM increase. What if I only needed the processing power increase for one day (perhaps I was launching some huge sale which only lasts one day) and no more? What if I wanted a bigger hard drive for a week only? Or perhaps I don't know what I need, and I wish to increase or decrease, or perhaps I just wish to experiment and find the ideal combination. All this leads to a flexible solution that allows a reduction of cost.
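The elasticity described above boils down to letting desired capacity follow measured load instead of waiting on manual provisioning. A toy sketch (the per-instance capacity of 100 requests/second is an assumed number):

```python
import math

# How many instances do we want for the current load?
# capacity_per_instance = 100 req/s is an assumed, illustrative figure.
def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=1):
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))

assert desired_instances(950) == 10   # one-day sale: scale out, instantly
assert desired_instances(30) == 1     # quiet day: scale back to the minimum
print("autoscaling sketch ok")
```

Cloud platforms run this kind of rule for you (autoscale settings), which is what makes the one-day-sale scenario cheap: capacity goes back down the moment the load does.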

In a non-cloud scenario, there are people who have to do things manually. I am all for working with people (one cannot get anything done without people) but sometimes, they can be a hindrance rather than a help. It is in human nature to make mistakes. That is why, when possible, we use machines. The cloud removes that human factor almost entirely. Without going through an IT team (or without having an IT team at all), I can provision the stuff that is required to allow my service to work.

Now that I have discussed what the cloud is, it's time to decide the choice of cloud provider. For extremely obvious reasons, I will go with Microsoft, with the only alternative I would recommend being Amazon Web Services. Compared to Amazon, Microsoft's services are expensive, and they have shorter trial periods. For instance, Microsoft is generous enough to give a month or at most 3 months of trial services. Amazon, though, can give a year's worth of free services in some cases. Clearly, Microsoft is expensive.

However, I have been using Azure for almost 5 years now and so far, it has not disappointed me. Their service is excellent, and Azure keeps getting better, with new services being added constantly. They met my needs 5 years ago, and I am sure they will meet all my IT needs for years to come. It is possible I am a little biased here, but I am yet to come across a single reason (other than low price, which is not really a priority for me) why I must explore Amazon Web Services or some other cloud service.

So yes, the app ecosystem Project TD will be powered by Azure cloud.

Follow me on twitter, facebook and instagram for more updates. Thanks!

Accessibility, Multi Language Support and Natural Interfaces



[Ongoing series of blog posts to inform potential developers, users and (hopefully investors) about this new app ecosystem I am architecting, designing, developing and deploying. More details at this page]

For some reason, lately, I have become obsessed with the idea that folks (like my mother) who are unable to enjoy the fruits of what modern technology has to offer should at least enjoy what my ecosystem has to offer. I will admit, my interactions with my mother are the motivation for all this, and as a consequence, I have blogged my thoughts on that here, and here.

I understand that there is already a lot of work done in this domain. Accessibility features have always been included in every operating system. However, I am beginning to feel that they are mostly designed for western audiences and don't really apply to someone like my mother. The idea here is to find out what is already being done in terms of accessibility, figure out how I can improve upon it (if that is possible at all), and ensure that my mother can use the cloud services and the apps that are going to be part of this ecosystem.

One area I am particularly interested in is natural interaction systems. This term has many meanings, I am sure. To clarify, here is a simple example. Every morning, my mother takes over her empire, which is her kitchen. While she is working, she likes to listen to music. She has already turned down my offer to get her a modern device such as an iPad for music listening and continues to insist that she wants to use the old knob-based, button-based radio that has been wedged in her kitchen for years.

These knobs and buttons (physical stuff) are what I define as natural interfaces, or natural interaction systems. I have been observing the way she interacts with this radio and have been drawing some conclusions. I understand that she prefers the feedback it provides and also the simplicity of it. Despite all these years of using touch screen devices, I still find it easier to type on my physical keyboard. If someone like me prefers the convenience of physical feedback, I can imagine how it must be for someone like her.

That means I must design and probably build an interaction system that incorporates some kind of physical feedback. At this point, all I have is an idea, or more like a target, an end game. Of all the things in my app ecosystem, the UI is the one that excites me the most. If I can make this happen, and figure out a way for my mother (and by extension, mothers everywhere) to utilize the benefits of modern cloud services, that would be my greatest achievement.

Follow me on twitter, facebook and instagram for more updates. Thanks!