Future Now
The IFTF Blog
The invisible world
What happens today when something goes wrong with an easy-to-use device? That's when the impulse that produces "easy to use" can take a wrong turn into "hard to understand." Too often, we discover that things that look easy to use are just under-documented, and that the simplicity that was created to make a device easy to use makes it hard to repair. A short manual might suggest that "this is so simple to use, you hardly need any instructions." Which is fine until things go bad, at which point we find out that brevity really means, "we're not going to help you." Devices that are invisible when they work shouldn't turn opaque when they break.
Invisibility can also be bad when you want to do something new with a device. In their early days at Xerox PARC, anthropologists made a splash by recording a movie of two of PARC's most distinguished scientists trying, and failing, to figure out how to use a Xerox copier. That was a case of poor user interface. More recently, a scientist working in pervasive computing told me he and a roomful of colleagues spent an afternoon trying to figure out why his Bluetooth-equipped cell phone refused to connect to his computer. The problem was that neither device was designed to give users useful information about its state when things went wrong - another kind of poor interface. It takes a village to raise a child. It shouldn't take an entire computer science department to connect two devices together.
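To make that concrete, here's a toy sketch in Python of the difference between a connection that fails silently and one that explains itself. Everything in it, from the Link class to the failure reason, is invented for illustration; no real Bluetooth stack works this way.

```python
# A toy illustration of failing silently versus failing palpably.
# All names here are hypothetical, not a real Bluetooth API.

class Link:
    STEPS = ("discovery", "authentication", "service match")

    def __init__(self, failing_step=None, reason=""):
        self.failing_step = failing_step
        self.reason = reason

    def connect_silently(self):
        """How the phone in the story behaved: True, or an unexplained False."""
        return self.failing_step is None

    def connect_palpably(self):
        """Say which step failed and why, in terms a user can act on."""
        for step in self.STEPS:
            if step == self.failing_step:
                return f"Stopped at {step}: {self.reason}"
        return "Connected"

link = Link(failing_step="service match",
            reason="the computer doesn't advertise a file-transfer service")
print(link.connect_silently())   # False, and an afternoon of guessing
print(link.connect_palpably())   # the sentence that would have saved it
```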
Is this just the cost of making high-tech simple? Is it only possible to hide complexity from users, either through intermediaries or interfaces or context-aware software agents, by sealing the user off from his own mistakes, and making it hard to do new things with a device? Or is it possible that users would be better served by devices that are simple when they work well, but helpful when they fail, and informative when you want to use them in new ways?
The idea that "easy to use" doesn't have to be another way of saying "impossible to fix" is what drives a research program called palpable computing.
Under normal circumstances, it's fine for computers to keep their complexity hidden from users. When things are working well, you want people to be able to focus on their tasks, not their tools. But there are times when it's essential to be able to get under the hood, and devices should be designed to make that possible.
One of those times is when things go wrong. Another is when you want to do something new with a device. For pervasive computing to work, all kinds of devices are going to have to work together, because none of the devices you carry around with you-- or wear, or have implanted-- will be designed to have the functionalities you need in every circumstance, or in every context. In an always-connected environment, you'll need a screen and keyboard for some tasks at some times; speech recognition and voice output for others; and gestural or multimodal interfaces for yet others.
For example, on a given day, you might connect your personal server to a colleague's wall display to show a presentation in the morning; have your personal server connect to your car's voice interface to check and reply to e-mail while driving to a meeting at a cafe; review a document on the display on your table; and send a copy of the document to your friend by shaking hands, which triggers a conversation between your two personal area networks that copies your file to their personal server.
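That scenario quietly assumes a lot of machinery: every device has to advertise what it can do, and your personal server has to match each task to those capabilities on the fly. Here is a minimal sketch of that matching step in Python; the class names, capability labels, and delegate method are hypothetical illustrations, not drawn from any real pervasive-computing platform.

```python
# A minimal sketch of capability matching. All names are hypothetical.

class Device:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class PersonalServer:
    def __init__(self):
        self.nearby = []                      # devices found in this context

    def discover(self, device):
        self.nearby.append(device)

    def delegate(self, task, needs):
        """Hand a task to the first nearby device with every needed capability."""
        for device in self.nearby:
            if needs <= device.capabilities:
                return f"{task} -> {device.name}"
        return f"{task} -> nothing nearby can do this; tell the user what's missing"

# Morning at the office, the drive, then the cafe.
me = PersonalServer()
me.discover(Device("colleague's wall display", {"display", "presentation"}))
me.discover(Device("car voice interface", {"speech-in", "speech-out"}))
me.discover(Device("cafe table display", {"display", "touch"}))

print(me.delegate("show slides", {"display", "presentation"}))
print(me.delegate("read e-mail aloud", {"speech-out"}))
print(me.delegate("review document", {"display", "touch"}))
print(me.delegate("type a long reply", {"keyboard"}))   # the palpable failure case
```

The last line is the interesting one: in a palpable system, the "no" comes with enough information for the user to do something about it.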
One of the grand promises-- indeed, one of the grand premises-- of pervasive computing is the ability to use devices in a vast array of combinations, as resources and circumstances demand. Small differences in software, hardware, or standards implementation could make that clunky at best, or impossible at worst, yielding a world of fractured ubiquity.
To make pervasive computing work, companies will have to do one of two things. They can anticipate every possible circumstance under which a product will be used, every way it can be used, and every other device it might want to connect to. Or, they can make it possible for users to see how devices and connections are supposed to work, figure out how to make these connections when the devices can't do it by themselves, and learn how to deconstruct and reuse technologies.
Which is more likely to work? It's a no-brainer. Users rarely behave the way designers expect. You can't predict every context for, or potential use of, a new technology. And technologies don't just diffuse on their own; they're as dependent on humans to propagate as seeds are on animals. So successful pervasive computing - successful in both the functional and commercial senses - will depend on making it possible for users to be smart about their technologies.
So what do devices have to tell you to make you smart? Morten Kyng, one of the leading figures in palpable computing, argues that devices need to make themselves understood on three levels. Users need to be able to understand a device's logic - the rules it follows in its attempt to fulfill its goals. They need to be able to understand how it functions - the particular actions it takes to fulfill those goals. Finally, they need to understand its physical logic - the design of its hardware.
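One way to read Kyng's three levels is as a description a device could carry around and show on demand. The sketch below, in Python, is one possible shape for such a description; the dataclass and its field names are my own illustration, not a format the palpable computing project has defined.

```python
# A hypothetical self-description covering Kyng's three levels.
from dataclasses import dataclass, field

@dataclass
class DeviceDescription:
    logic: list = field(default_factory=list)      # the rules it follows
    function: list = field(default_factory=list)   # the actions it takes
    physical: list = field(default_factory=list)   # how its hardware is built

    def explain(self):
        levels = [("Logic", self.logic), ("Function", self.function),
                  ("Physical", self.physical)]
        return "\n".join(f"{name}: " + "; ".join(items) for name, items in levels)

phone = DeviceDescription(
    logic=["only pair with devices the owner has approved"],
    function=["broadcast a discovery beacon", "exchange keys, then open a channel"],
    physical=["radio shares an antenna with Wi-Fi", "radio off below 5% battery"],
)
print(phone.explain())
```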
Making it possible for users to understand how devices work has an added benefit. The same knowledge that can help you make a combination of devices work better can be used to deconstruct devices, and reorganize them in ways that better fit your intentions and your life.