Future Now
The IFTF Blog
Ends vs. Means and Persuasive Games
A video featuring Carnegie Mellon Professor of Entertainment Technology Jesse Schell has been making the rounds this week, and it touches on a number of themes that I think are central to understanding the intersection of games, education, and persuasive technology.
Schell's talk, given at DICE 2010, outlines a brief recent history of games, explaining the shift from immersive games to casual games as being due in large part to games starting to "bust through to reality". The move from games like DOOM to ones like Farmville is a response to games becoming increasingly tied to our real-life (and external, non-game) social networks. In short, the argument stems from the premise that humans are hard-wired to compare their personal performance on things they hold as important with the performance of members of their social networks. As a result, the games that have overrun Facebook have done so in large part because they have understood, and integrated, the importance of markers of external, comparative reward.
If we accept this take on human nature, and then add a healthy dose of the internet of things (replete with sensors attached to everyday items like soda cans, clothing, and anything else you might imagine), it becomes possible (and, importantly for this discussion, desirable according to Schell) to create external, comparative rewards for any and all routine human behaviors. In his talk, Schell envisions a world in which we are given external incentives for all sorts of behavior: brushing teeth earns points from Crest, and taking public transport earns points from the government.
As Schell points out, persuasive technologies like the Ford Fusion dashboard are already being designed with game-like feedback in mind. To him, these technologies fall short because they are being engineered by people who are not game designers. If game designers were to design reward systems aimed at improving behaviors, he argues, we would have feedback mechanisms that are much more enjoyable and, as a corollary, much more effective.
Though I agree with his conclusion, that there is a clear need for people with game design expertise to design things that help people improve behaviors, I think that by focusing on technologies that aim to achieve measurable ends, Schell misses a much more important use of persuasive technologies: namely, technology that aims to influence means.
Ends vs. Means
"the tools of persuasive technology are often used exclusively in the support of ends rather than means. A website 'tunnels' a user from browsing to purchase. A heart-rate device allows a user to self-monitor and adjust behavior based on digital output. An exercise bike conditions a rider by rewarding him with a television image when a target speed is reached. A surveillance system dissuades a knowing subject from taking the wrong action, as defined by his surveyors, through implicit threat. These techniques might produce desirable ends, from the perspective of the creator or sponsor of a persuasive technology. But they do not necessarily produce desirable means." (Bogost, 16)
Over at The Ephemeral Notebook, Aaron Matthew outlines some important objections to the External Rewards approach. In particular, Matthew writes that reward structures are "only ONE of the many neural motivators for gaming, and while the most basic and exploitable with our current understanding, they do not represent the true power of games." Although Schell's future employs game mechanics for positive behavior change, such a world omits the centrality of elements like "learning and mastery" in games. As I mentioned on Tuesday, games and/as learning is an extensive and important topic, and I think Matthew's brief critique of External Rewards is on point. A person living in this future world would ostensibly face daily tasks for which "the entire ruleset is one item long and mastery is immediate." Confronted with an unending stream of binary tasks (e.g. brush teeth, take vitamins), we may well end up curing particular behavior-related afflictions (e.g. gingivitis, scurvy), but at what cost to our free will?
It is this erosion of individual agency that troubles me the most about Schell's dystopic future. When I wrote about Keiichi Matsuda's Augmented (hyper)Reality video a few weeks ago, I mentioned that the most disturbing aspect of that film was that the subject's life "seems to have few opportunities for acting on internally compelled behaviors. His life seems to consist of executing directions given by a series of reminders of what he needs to do, rather than of undertakings that he chooses to do." This seems to be exactly what Schell is envisioning, and although I am sure he is being somewhat hyperbolic for the sake of making his case more compelling, I am still left feeling that his call to action could do more harm than good.
In a paper presented at the 2008 conference on Persuasive Technology, game theorist and designer Ian Bogost points to the crux of my wariness. In what he calls a "provocation for ... future work" in laying out "a detailed theory of fine processes" (Bogost, 19), Bogost lays out his objections to persuasive technology (also known as captology). He writes that one of his concerns for the field is that:
"'captology' risk[s] becoming manipulative rather than persuasive. This issue ... should serve as a strong warning to persuasive computing projects. But it’s not the whole story. There is another reason to pay closer attention to the means by which computational persuasion takes place. A downside of an overzealous focus on outcomes is that we tend to lose all the richness and wonder of experiences in our chase for them. Encounters with computer artifacts are experiences as frequently as they are tools. And understanding experiences requires a different focus." (Bogost, 17)
By focusing on "outcomes" instead of the richness of "our chase for them," Schell has fallen into a persuasive technology trap. He concludes his talk by voicing the internal monologue of a citizen of his future, who reasons that since we are "being watched and measured and judged, maybe I should change my behavior a little bit, and be a little better." What Schell is saying is that behavior conditioned through external rewards has the potential to create cascading changes in attitudes. There is a tremendous lure in believing that external feedback can gradually influence our thinking for the better, so that we begin to transform internally negative or destructive behaviors into desired ones, but there is a gap in this logic. If external rewards incentivize certain behaviors, then so long as those incentives remain in place, it is unclear what motivation the user would have to internally decide to change his behavior. If he is playing the game to get points, then surely he is looking to maximize his points in any way possible: the specific behaviors targeted for change are incidental to the gamer's experience, since he just wants to win.
In this sense, reliance on External Rewards squanders the true potential of persuasive games: namely, that they can help people think about themselves, the world, and life in new and empowering ways. Rather than relying on persuasive games to provoke or impede certain behaviors, what if they were used to open up a world of previously inaccessible experiences? What if games got people thinking about why they act in particular ways? What if they helped people identify whether these behaviors are in line with who they believe they are and how they ought to be? What if they helped people think through the steps needed to improve themselves from within, rather than from without? I, for one, would feel much better if this were the type of world we asked game designers to help create.
Works Cited
Bogost, Ian. "Fine Processing." Persuasive Technology (PERSUASIVE 2008), H. Oinas-Kukkonen et al. (Eds.), LNCS 5033, 2008, pp. 13-22.