Future Now
The IFTF Blog
Location awareness and smart homes
This afternoon Nicolas Nova, a graduate student at EPFL, gave a talk on the impact of different location-awareness service designs on collaboration. (It was a busy day at 124 University Ave.; Doug Engelbart was also there.) My notes from the talk are here; but in mulling over the talk, an interesting parallel has come to mind between location services design and recent smart home research.
Nicolas designed a collaborative game in which small teams had to work together to find an object on the EPFL campus. The most surprising of Nicolas' findings is that giving users real-time information about the location of other players decreased the amount of direct player-to-player communication. Players who could see each others' locations on a map tended not to message each other as much, and were less likely to remember afterwards where their team members had been (a measure of how closely teams had worked together, rather than pursuing their tasks separately). When they couldn't see each other, they messaged each other more frequently.
What also happened-- and this is even more significant-- is that the teams that couldn't see each other not only communicated more; they were also better able to adjust their strategies as the game progressed, and players were better informed about what their teammates were doing. While seeing where your fellow players are seems self-evidently valuable, in this case it had the unintended consequence of reducing other valuable forms of communication.
At a certain point in the talk, I was reminded of smart home research I've been reading about. Very briefly, the aim of smart home researchers used to be to automate home processes, and to make homes (or home appliances) smart enough to figure out what inhabitants would want-- i.e., to model user contexts and intentionality, and act accordingly. The newer school of thought argues two things: 1) machines can never fully specify or understand context; and 2) it's actually bad to try to do everything for people. In the smart home case, much of the research is aimed at facilitating "aging in place," and for the elderly, immobility is a big problem: you don't want to create an environment that lets them just sit in a chair all day, because that would be bad for them. Instead, you want to create technologies that help them stay active and allow them to maintain control over their environments, but that step in to assist with functions they can't handle so well any longer.
In both the smart home research and Nicolas' work, one of the biggest design challenges is to make the technology useful, but not so intrusive that it gets in the way of users' social interactions (in the game) or their interaction with things (in the smart home).
There's also an interesting point about context here. (This ties into a broader set of questions about how to think about context awareness, which was the subject of an expert workshop the Institute conducted in May.) One of the big challenges that CS people have worked on is making systems smart enough to infer user contexts-- what users are trying to do, and what they're going to do next. The new smart home research accepts that people will always know their contexts better than any computer does, and that processor cycles are better spent on other things than trying to infer context. In Nicolas' experiment, the display of teammates' positions hindered communication between players about context-- something unintended and undesirable. Information is useful. However, informed people are a lot more useful.