Tuesday, December 25, 2012

From Type to Touch: For the Text, What’s Next?

Human civilization has seen centuries of evolution and development into an increasingly knowledge-based society on the basis of oral and written communication. The myriad languages of the human race have been vehicles of division as well as integration of societies. While oral communication has always worked through a complex interaction of the brain, tongue and mouth, written communication has seen transformational changes as the human race and its technologies developed. In India, the Vedas, slokas, epics and poems captured on palm leaves in the ancient root language Sanskrit (as well as in the other Indian languages) are compelling evidence of humanity's unceasing quest to develop, communicate and archive knowledge for the present and the future.

While oral expression has been a natural endowment, written expression has been a man-made competitive advantage. Writing materials have played a very prominent role in the development of diverse cultures. They have helped not only in preserving the history and culture of mankind but have also deeply influenced scripts, languages and man's mode of thinking. To understand ancient writing materials, therefore, is to understand ancient cultures in a better light. Societies which preserved knowledge, like the Western world, progressed, while societies which were indifferent to preservation, like India, lost several treasures of knowledge. India has seen writing forms evolve over several centuries using virtually every kind of material, from palm leaves and copper plates to stone slabs and modern paper; yet much of ancient wisdom, it is felt, was never captured at all, and of whatever was written down, a great deal has been lost to history as well.

Shift and Change

Ever since the advent of the computer, the domains of writing, publishing and dissemination have seen nothing short of a revolutionary transformation. From the initial punching of binary cards to present-day language-based typing, the computer has seen a step-function jump in processing power. From being a data-processing machine, the computer has become the backbone of new-age information technology. The development of various programming languages to converse with the computer has adapted computers to multiple uses. Software capabilities and hardware power have evolved in tandem to electronically capture, manage, store, transmit and retrieve data, information and knowledge on standalone and networked computers as well as global servers and grids. Whatever the historical lapses in writing and preservation of information in ancient India, it is a fitting irony that India is now at the forefront of writing code in all the computer languages. There are, however, two facets to the language and communication paradigm of computers: the visible and the invisible.

The visible language is user-friendly while the invisible language is programmer-challenging. The greater the user-friendliness that is targeted, the greater the programming challenge. With the growth of the Internet and of instant patching and updating, the challenge of keeping the invisible language current and contemporary has been increasing. The invisible language itself has two parts, the programming language and the machine language. While major enhancements have been happening on the programming-language front, the machine language remains binary. The relative exclusivity of popular operating systems (four for computers and four for smartphones) vis-à-vis the proliferation of devices (several hundred) indicates the complexity and challenge of developing an operating system that is truly multi-functional and robust. Underlying the complexity is the need to write millions, even billions, of lines of code to support such user functionalities.
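As a small, hedged illustration of the visible and invisible layers, the Python snippet below shows a one-line function as the programmer reads it alongside the lower-level instruction stream the machine actually works through (Python bytecode here, standing in for true machine language); the function name and example are, of course, only illustrative.

import dis

def greet(name):
    return "Hello, " + name

print(greet("world"))   # the visible, user-facing behaviour
dis.dis(greet)          # the invisible instruction stream behind it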

Touch and Write

The first improvement to user-friendliness came with the incorporation of drop-down boxes and toolbars to guide the user and let the user select from the available options. Even so, selection still had to be accompanied by typing. The breakthrough came when Apple brought touch-to-select as the next level of user experience. The touch experience is intimately tied to the scrolling capability. Despite the dominance of touch in smartphones, touch navigation could pose a challenge in full-fledged computing devices, given the significantly larger number of operations to be performed. Even if an Office suite catered to detailed navigation, touch would still remain an immediate experience rather than a type-along experience for the user. Microsoft has recently launched Windows 8 as the 'touch and navigate' platform applicable across all devices, from phones and tablets to laptops and desktops. Whether text management can now be integrated with touch management is the next frontier for Microsoft to explore and conquer.

For thousands of years, writing has been the hallmark of human civilization. Writing brings out certain faculties of hand-brain coordination, in terms of control and memory, as opposed to typing which, though developing a different type of hand-brain coordination, admittedly mechanizes the writing faculty. After the failure of the handwriting recognition applied on the original Microsoft tablet computers in the early 2000s, writing on computers has taken a distant second position. The more recent revival of pen-stylus-based devices, especially the Samsung Galaxy Note line, gives hope that handwriting would re-emerge and endure as the most intelligent and intellectual form of providing inputs. This would require a new level of integration between software and hardware technologies, with superior capabilities for handling the highly variable handwriting of different people. That, however, is certainly the way to go, as the pen stylus brings the added advantages of freehand sketching, drawing and annotation as well as tracked changes.
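One can only speculate how such personalization might work, but a minimal sketch, assuming a user first enrolls a few sample strokes per character, could look like the following; the stroke format, function names and matching scheme here are hypothetical and not drawn from any actual recognizer.

import math

def resample(stroke, n=16):
    # Reduce a handwritten stroke (a list of (x, y) points) to n roughly even points.
    step = max(1, len(stroke) // n)
    points = stroke[::step][:n]
    while len(points) < n:          # pad very short strokes
        points.append(points[-1])
    return points

def stroke_distance(a, b):
    # Sum of point-to-point distances between two resampled strokes.
    return sum(math.dist(p, q) for p, q in zip(resample(a), resample(b)))

def recognize(stroke, templates):
    # templates: {'a': [stroke, stroke, ...], 'b': [...]} enrolled by this particular user.
    return min(templates, key=lambda ch: min(stroke_distance(stroke, t) for t in templates[ch]))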

See and Speak

Recognition and cognitive technologies would continue to be developed to the extent that look and talk become the new input channels. The focusing of the eyes would soon determine which icon (in Apple's language) or tile (in Windows' language) the user desires to open. Eventually, even sub-instructions may be tracked and selected with eye contact. The first evidence of feasibility is already seen in certain devices and software solutions, from recognition of start and stop commands based on eye gaze to adjustments for ambient conditions. In devices of the future, there may be no need for a dedicated camera key; a mere blink after focusing could snap the photo. Future cameras may detect the natural power of the eyes and adjust their own aperture and focus settings accordingly.
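A hedged sketch of how such gaze-driven selection might be programmed is given below, using a simple dwell rule: if the estimated gaze point rests on one tile for long enough, that tile opens. The tile layout, sampling rate and dwell time are purely illustrative assumptions, not any real device's interface.

DWELL_SECONDS = 0.8          # how long the gaze must rest on a tile to open it

def tile_at(x, y, tiles):
    # tiles: {'Mail': (left, top, right, bottom), ...} in screen coordinates.
    for name, (left, top, right, bottom) in tiles.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

def select_by_gaze(gaze_samples, tiles, hz=60):
    # gaze_samples: a time-ordered list of (x, y) gaze estimates at hz samples per second.
    current, held = None, 0
    for x, y in gaze_samples:
        name = tile_at(x, y, tiles)
        held = held + 1 if name == current else 1
        current = name
        if current is not None and held / hz >= DWELL_SECONDS:
            return current          # the tile the user has fixated on long enough
    return None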

Cognitive technologies have seen a boost with voice commands in navigation systems and, more recently, with Apple's Siri and Samsung's S Voice on mobile devices. These, however, are limited by what can be preprogrammed. Open-ended, speech-based input technologies have so far fallen short of the potential such technologies could hold. Cognitive technologies have speech recognition as their primary platform. The potential has not been realized mainly because of the phonetic and pronunciation variations across people. Self-learning programs are the answer. Future speech recognition software would reprogram itself based on an initial cycle in which a person inputs his or her speaking patterns and profiles. Such technologies would also add an extra layer of security to the computer system by uniquely recognizing those speaking patterns.
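A simplified sketch of that "enroll once, then verify" idea follows; real speaker verification systems use far richer models, and the spectral profile, threshold and function names here are merely assumed for illustration.

import numpy as np

def voice_profile(samples):
    # samples: a list of 1-D numpy arrays of raw audio recorded during enrollment.
    spectra = [np.abs(np.fft.rfft(s, n=2048)) for s in samples]
    profile = np.mean(spectra, axis=0)
    return profile / np.linalg.norm(profile)

def verify(audio, profile, threshold=0.85):
    # Accept the audio only if its spectrum is close enough to the enrolled profile.
    spectrum = np.abs(np.fft.rfft(audio, n=2048))
    spectrum = spectrum / np.linalg.norm(spectrum)
    return float(np.dot(spectrum, profile)) >= threshold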

Think and Imagine

Probably, the next giant leap would come when devices start recognizing thought processes. As the mapping of the brain, the deciphering of brain waves and the decoding of fired-up neurons gain traction, each thought may potentially be identified with a unique fingerprint. The thought, rather than the person or the device, could then be the unique factor that is standard across any person-device combination. This would require devices to have powerful electromagnetic sensors that can recognize the unique thought waves corresponding to the commands a person desires to give to the device. As with any new endeavor, thought recognition technologies may be introduced with simple commands and later extended to the whole gamut of human thought.
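Purely as a thought experiment, such "simple commands first" decoding might be sketched as below, with a window of sensor readings reduced to a few coarse features and matched to the nearest enrolled command pattern; no real neural-sensing hardware or API is assumed, and every name here is hypothetical.

import numpy as np

def features(window):
    # window: a 1-D numpy array of sensor readings; crude band-energy features.
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array_split(spectrum, 4)          # four coarse frequency bands
    return np.array([band.mean() for band in bands])

def decode_command(window, enrolled):
    # enrolled: {'open': feature_vector, 'close': ..., 'next': ...} from a training phase.
    f = features(window)
    return min(enrolled, key=lambda command: np.linalg.norm(f - enrolled[command]))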

The ultimate frontier could be to imagine human intents and desires from the myriad overt, recognizable actions and thoughts. This development would be quite the opposite of the previous hypothesis of universally standardized thought-wave technologies. The ability to imagine would be achieved when a device is uniquely synchronized and intertwined with its owner-cum-user. The device would need enormous storage and analytical power to capture every unique thought, command and action and so develop a full portfolio of how the user behaves. As the experience and expertise of the device grow, so would its ability to simulate its user's thinking, imagining on the user's behalf and outlining a whole range of options, as if the owner himself or herself were doing the imagining.

Design beyond Device

If the foregoing were to come true, and the author of this blog post believes it would come true sooner rather than later, devices would no longer be hand-operated apparatuses. They would be vested with increasingly higher levels of human faculties. The advertisement by Samsung for its latest mobile phone, the Galaxy S III, claims that it is "designed for humans". Based on the foregoing discussion, the advertisement for future devices could well read "designed as humans", with each device being as uniquely personalized and intelligent as each human being uniquely is.

Posted by Dr CB Rao on December 25, 2012                      

 
