Gestural Interfaces: An Emerging Language
Technology is a fast-moving field: innovation is continuous, and areas not so long ago considered science fiction are now becoming mainstream. In a previous article I introduced the future of input devices; today we will look more deeply at gestural interfaces. Instead of manipulating keyboards and mice, we will simply make gestures at our computers. "Hand-wave" will be the new "click". Or wait, maybe "click" already sounds a bit old-fashioned?
Let's start with the basic definition: a gesture is any physical movement, especially of a hand or the head, that expresses an idea or meaning. Today many digital systems recognize human gestures and can sense and respond to them without the aid of a traditional pointing device such as a mouse or stylus. A wave, a head nod, a touch, a toe tap, and even a raised eyebrow can be a gesture. But simply because we can now build interactive gestures doesn't mean they are appropriate for every situation. As Bill Buxton notes, when it comes to technology, everything is best for something and worse for something else, and interactive gestures are no exception. There are several reasons not to have a gestural interface:
Heavy data input
Reliance on the visual
Reliance on the physical
Inappropriate for context
There are, of course, many reasons to use a gestural interface. Everything that a noninteractive gesture can be used for—communication, manipulating objects, using a tool, making music, and so on—can also be done using an interactive gesture. Gestural interfaces are particularly good for:
More natural interactions
Less cumbersome or visible hardware
The Characteristics Of Good Gestural Interfaces
The characteristics don't differ much from those of any other well-designed interactive system. Designers often use Liz Sanders' phrase "useful, usable, and desirable" to describe well-designed products, or they say that products should be "intuitive" or "innovative." All of those labels point to the same underlying qualities, and gestural interfaces should be held to them as well.
Direct versus Indirect Manipulation
The ease of use one experiences with a well-designed touchscreen comes from what University of Maryland professor Ben Shneiderman coined as direct manipulation in a seminal 1983 paper. Direct manipulation is the ability to manipulate digital objects on a screen without typed commands—for example, dragging a file to a trash can on your desktop instead of typing del into a command line. Touchscreens and gestural interfaces take direct manipulation to another level. Now, users can simply touch the item they want to manipulate right on the screen itself, moving it, making it bigger, scrolling it, and so on. This is the ultimate in direct manipulation: using the body to control the digital space around us. In the future, as an increasing variety of sensors are built into devices and environments, this may change, but for now, touchscreens are the new standard for gestural interfaces.
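To make the idea of "sensing and responding" concrete, here is a minimal sketch of how a touchscreen system might turn raw touch points into a recognized gesture. This is a hypothetical swipe classifier written for illustration; the `Point` shape, the function name, and the distance and time thresholds are all assumptions, not any real platform's gesture API.

```typescript
// A touch sample: screen position plus a timestamp in milliseconds.
// (Illustrative type — real platforms deliver richer event objects.)
type Point = { x: number; y: number; t: number };

type Swipe = "left" | "right" | "up" | "down" | null;

// Classify a sequence of touch points as a swipe, or null if the
// movement is too short or too slow to count as one. The thresholds
// (50 px, 500 ms) are arbitrary illustrative defaults.
function classifySwipe(points: Point[], minDist = 50, maxMs = 500): Swipe {
  if (points.length < 2) return null;
  const first = points[0];
  const last = points[points.length - 1];
  if (last.t - first.t > maxMs) return null; // too slow: a drag, not a swipe
  const dx = last.x - first.x;
  const dy = last.y - first.y;
  if (Math.abs(dx) >= Math.abs(dy)) {
    if (Math.abs(dx) < minDist) return null; // too short to be deliberate
    return dx > 0 ? "right" : "left";
  }
  if (Math.abs(dy) < minDist) return null;
  return dy > 0 ? "down" : "up";
}
```

Even this toy version shows why gesture design is a tuning problem: the thresholds decide whether a hesitant flick registers as a swipe or as noise, which is exactly the kind of judgment call a keyboard never demands.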
We've entered a new era of interaction design. For the past 40 years, we have been using the same human-computer interaction paradigms that were designed by the likes of Doug Engelbart, Alan Kay, Tim Mott, Larry Tesler, and others at SRI and Xerox PARC in the 1960s and 1970s. Cut and paste. Save. Windows. The desktop metaphor. And so many others that we now don't even think about when working on our digital devices. These interaction conventions will continue, of course, but they will also be supplemented by many others that take advantage of the whole human body, of sensors, of new input devices, and of increased processing power. We've entered the era of interactive gestures.