Communicate instead of command

Computers are pervading more and more areas of life, changing communication between humans and computers in particular. On the one hand, classical input and output media such as keyboards and monitors are declining in importance; on the other, the connectedness and sensory capabilities of devices are on the increase. So how are we as humans supposed to communicate with all these machines? What might natural interaction with applications on the Internet of Things look like?

Communication is essential for the development of new applications in two respects. For one, a common basis for discussion must be established on which to find and implement new digital ideas; for another, the user expects the most natural forms of interaction possible for sharing information within an application.

In workshops with partners we develop, experience, and evaluate new ideas. Prefabricated, easy-to-use APIs (programming interfaces), for example for voice processing or localization, serve as the basis for communication, combined with everyday objects and interactive toys ranging from Lego Mindstorms to Microsoft Kinect.

Today the focus is still on mobile usage scenarios in which the user interacts with a single terminal device. However, computers are pervading ever more areas of life and are becoming steadily less recognizable as such (ubiquitous and pervasive computing). Often the usual input and output devices such as monitors and keyboards are missing entirely; instead, the devices are networked or even connected directly via the Internet (Internet of Things).

The challenge is to find new forms of interaction that fit not only a one-on-one relationship between human and machine but also 1:n scenarios in which the user moves around amid a large number of devices. Together, these connected devices form a complex user interface consisting of a wide range of sensors (ambient intelligence) and actuators such as motors.

The user expects natural forms of interaction based, for example, on innate skills such as gestures and speech. New forms of interaction are necessary in any case, because devices such as fitness bracelets lack graphical user interfaces based on metaphors to which we have grown accustomed. Speech and gesture recognition already works very well. We now even have the technological means to identify nonverbal signals such as mood and to incorporate them into communication between human and machine.

Speech recognition (for speech input) can be improved by using grammars instead of general transcription, but the restriction to a fixed vocabulary with no room for “polite phrases” quickly turns this form of interaction into an unnatural succession of commands.
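
To make the trade-off concrete, here is a minimal sketch in Python of grammar-constrained command input. The command set and pattern are made up for illustration; real systems would use a grammar format such as JSGF or SRGS. Anything outside the grammar, including a politely phrased request, is simply rejected.

```python
import re

# Hypothetical command grammar, in the spirit of JSGF/SRGS:
#   <command> = (turn | switch) (on | off) the (light | heating | radio);
COMMAND = re.compile(
    r"^(?:turn|switch)\s+(?P<action>on|off)\s+the\s+"
    r"(?P<device>light|heating|radio)$",
    re.IGNORECASE,
)

def parse_command(utterance: str):
    """Return (action, device) if the utterance matches the grammar, else None."""
    match = COMMAND.match(utterance.strip())
    if match:
        return match.group("action").lower(), match.group("device").lower()
    return None  # out-of-grammar input is rejected outright

print(parse_command("Turn on the light"))                    # ('on', 'light')
print(parse_command("Could you please turn on the light?"))  # None
```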

Natural language understanding by machines is still very much in its infancy, however. We are still unable to communicate naturally with devices because machines have only limited context and lack knowledge of the world and intuition (commonsense knowledge). In many cases not even polite phrases are permitted, which limits communication to the recognition of keywords and thus more or less to simple commands.

As early as 1966, Joseph Weizenbaum developed the computer program ELIZA, a superficial simulation of a psychotherapist. ELIZA demonstrates impressively that machines can create the illusion of natural communication, but only to a limited extent, in selected domains, and as long as no knowledge of the world is required. Much interactive adult entertainment and some Facebook chatbots (text-based dialog systems) exploit the users’ obvious interests and thereby appear to recognize their intentions naturally.
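
The ELIZA effect is easy to reproduce: a handful of keyword rules that reflect the user’s words back, plus a generic fallback, can feel surprisingly conversational. The following Python sketch is illustrative only and is not Weizenbaum’s original script; no knowledge of the world is involved.

```python
import re

# Keyword rules: pattern -> response template (purely illustrative)
RULES = [
    (re.compile(r"\bI feel (?P<feeling>\w+)", re.IGNORECASE),
     "Why do you feel {feeling}?"),
    (re.compile(r"\bmy (?P<thing>\w+)", re.IGNORECASE),
     "Tell me more about your {thing}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(**match.groupdict())
    return "Please go on."  # generic fallback when no keyword matches

print(respond("I feel tired today"))   # Why do you feel tired?
print(respond("It rained yesterday"))  # Please go on.
```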

In spite of Siri & Co., we are still a long way from the seemingly philosophical debate with Bomb #20 in the 1974 science-fiction classic Dark Star. Yet that very bomb shows what can already be accomplished: interaction must be combined with all the information available from the context (sensors). The speech input “Light on!” alone is nowhere near enough if there are several lights, but combined with a gesture in the direction of the light in question it comes very close to natural behavior. Location data (Where am I?) and vital signs such as blood pressure (How do I feel?) can also help a device to better “understand” intentions.
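
As a rough sketch of such multimodal fusion, consider disambiguating “Light on!” with a pointing gesture: pick the lamp whose direction from the user deviates least from the tracked arm vector. All positions, lamp names, and the gesture source in this Python sketch are hypothetical.

```python
import math

# Known lamp positions in room coordinates (x, y); values are made up:
LAMPS = {"desk": (2.0, 0.5), "ceiling": (0.0, 3.0), "couch": (-2.0, 1.0)}

def angle_between(v1, v2):
    """Angle in radians between two 2D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def resolve_lamp(user_pos, pointing_dir):
    """Pick the lamp whose bearing from the user best matches the gesture."""
    def deviation(item):
        _, lamp_pos = item
        to_lamp = (lamp_pos[0] - user_pos[0], lamp_pos[1] - user_pos[1])
        return angle_between(pointing_dir, to_lamp)
    name, _ = min(LAMPS.items(), key=deviation)
    return name

# "Light on!" alone is ambiguous; an (assumed) arm vector resolves it:
print(resolve_lamp(user_pos=(0.0, 0.0), pointing_dir=(1.0, 0.3)))  # desk
```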

This brings us full circle, back to the importance of communication mentioned at the outset. The range of such forms of interaction is so wide, and our experience with them so limited, that their implementation cannot simply be delegated. Communication is essential, starting with the search for ideas.

Sascha Wolter is a professional developer and user experience enthusiast with a passion for M2M, the Internet of Things (IoT), rich applications, and mobile apps in all flavors. He also works as a consultant, trainer, software architect, and author, and contributes articles to a number of magazines.

More articles by Sascha Wolter on www.wolter.biz
