Thursday, September 16, 2010

Next Generation User Interfaces (From Switches to Augmented Reality)

It's interesting how the computer user interface has evolved over the decades. Here is a very brief history: first there was nothing more than switches and lights, and you had to enter and read information in binary. This evolved to punch cards and paper tape, later keyboards and printers, and eventually terminals with keyboards and monitors. For decades, the text user interface was the status quo.

In the 1960s, Doug Engelbart introduced the concepts of the mouse and the graphical user interface (GUI). In the 1970s and 1980s, Xerox took those concepts and made them a reality. In the 1980s, Apple popularized the GUI with its Lisa and Macintosh lines of computers. Then, again for decades, the graphical user interface was the status quo.

In the 1980s, Jaron Lanier introduced the concept of 'virtual reality' (VR). The first generation of this technology was crude by today's standards, but it has advanced drastically since. I remember seeing some early VR prototypes of future GUIs in the 1990s and being blown away by the concepts.

In 1993, Apple released an early natural speech recognition feature as part of its OS for the Quadra. Microsoft first included this kind of feature with its release of the Windows Vista OS.

Touch screens are passé, but when this technology is enhanced by advancements like a 'multi-touch gestures' interface and combined with other technology advancements, you get a GUI that begins to be on par with some of the user interface concepts introduced in the 2002 movie Minority Report (excerpt of the scenes I am referring to). Microsoft now includes a 'multi-touch gestures' interface feature as part of the Windows 7 OS.

In 2006, Nintendo introduced 3D motion-sensing wireless controllers with the Wii. In 2007, Apple introduced the iPhone and popularized the multi-touch gestures interface with this device. In 2009, Yelp helped popularize the augmented reality concept (computer-generated information overlaid on real-world video) by introducing an Easter egg on the iPhone called Monocle.

In 2009, Microsoft introduced Project Natal, now called 'Kinect' (proof of concept video).  Also in 2009, at TEDIndia, Pranav Mistry demoed technology concepts from a project called SixthSense, showing how computer data can be overlaid on the physical world and made interactive.  In 2010, Intel is also working hard on advancing the user interface; see the following article.

From here you begin to see other emerging technologies, like CamSpace, which is a computer vision platform.  Basically, it allows you to use a webcam and just about any real-world object as a game controller (or input device).
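To give a feel for how this class of technology works (this is not CamSpace's actual implementation, just a toy sketch of the general idea), a camera-based controller can track a brightly colored object in each video frame and map its centroid to an input coordinate, like a cursor position. The sketch below assumes a frame is already available as an RGB NumPy array; the function name `object_position` is my own invention for illustration.

```python
import numpy as np

def object_position(frame_rgb, lower, upper):
    """Return the (x, y) centroid of pixels whose RGB values fall within
    the inclusive [lower, upper] color range, or None if nothing matches.

    frame_rgb: HxWx3 uint8 array (one video frame)
    lower, upper: 3-element RGB bounds for the tracked object's color
    """
    # Boolean mask of pixels matching the color range on all three channels
    mask = np.all((frame_rgb >= lower) & (frame_rgb <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # tracked object not visible in this frame
    # Centroid of the matching pixels, in (x, y) image coordinates
    return (int(xs.mean()), int(ys.mean()))

# Example: a synthetic 100x100 frame with a red "object" at rows 20-39, cols 60-79
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[20:40, 60:80] = (255, 0, 0)
pos = object_position(frame, lower=(200, 0, 0), upper=(255, 50, 50))
```

A real system would add smoothing across frames and a more robust color space (such as HSV) so lighting changes don't break the tracking, but the core loop is just this: threshold, locate, and feed the position into the game as input.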


All the technologies I mentioned above might seem a little disjointed in how I presented them, though they all share a few things in common.  All of them are popularized concepts built around advancements in user interface technologies.

As mobile devices become more powerful and gain more features (such as the motion sensor and gyroscope already built into the iPhone 4), I personally believe that software developers will build on these features and concepts to bring about next-generation user interface enhancements that will include location-aware social networking concepts (such as Gowalla, Foursquare, and now Facebook Places), augmented reality, and object and speech recognition.

From a technological standpoint, I don't see developers limited by hardware anymore. I see the greatest limitation as the imagination needed to integrate all these technologies into a killer application that gains mass adoption.  That said, with all these advancements come greater concerns about privacy and security.  I hope these issues will be addressed before they become a problem.