Future Now
The IFTF Blog
Prediction Markets at Yahoo! ConFab
I'm in California this week for a bunch of internal workshops at the Institute, and I'm having loads of fun running around the Valley. Last night I went over to Yahoo! for their first "yahoo.confab" event, which was about using prediction markets inside corporations. It was an amazing event. (There are professional-quality webcasts available at 100kbps and 300kbps.)
It was an incredibly timely opportunity. One of the projects I'm working on right now at IFTF is a new research program, called OpenEx, that focuses on new business opportunities in the open economy. As part of OpenEx, we're running a prediction market hosted by Inkling Markets (we want you to play too! Shoot me an email for an invite or to get a prospectus for the OpenEx program).
I was both buoyed and dismayed by what I learned at this event. Buoyed by how much potential there is in prediction markets for improving decision-making and forecasting. Dismayed by how shallow the body of knowledge is on how to make them work well, and how to make them work well inside large organizations. (But then again buoyed that this is a big opportunity for the Institute to help advance the craft of creating and running good prediction markets).
A Brief History of Prediction Markets
Prediction markets have been around since the late 1980s, when a couple of professors started the Iowa Electronic Market (IEM) at the University of Iowa. HP ran the earliest corporate experiments in the late 1990s (see Bernardo Huberman's presentation to Howard Rheingold's class at Stanford). As Leslie Fine from HP Research described last night, that experiment only tried to predict a dozen events and only had 10-15 active traders during the 3 years it ran (1996-1999).
The modern era for prediction markets began after the political disaster of DARPA's Policy Analysis Market (PAM) in 2003. (Robin Hanson of George Mason University showed a slide indicating that corporate use of prediction markets is growing exponentially, though the total number is still quite small.) PAM sought to create a market for political and economic forecasts, but was killed by a couple of Congressional zealots and the media, who zeroed in on "the fact that PAM would allow trading in such events as coups d'état, assassinations, and terrorist attacks." Senator Wyden said at the time, "The idea of a federal betting parlor on atrocities and terrorism is ridiculous and it's grotesque," and Senator Dorgan called it "useless, offensive and unbelievably stupid." (source: Wikipedia article)
So while PAM failed, it brought the idea of prediction markets to a lot of people's attention. As Hanson pointed out last night, he has been closely tracking media mentions of PAM over the last three years, and there is a clear trend toward the media discussing PAM in a positive light. In the end, we may look back on the PAM debacle as a watershed moment for the broader institutional use of prediction markets.
Corporate Experiences
The discussion at Yahoo! brought to light some of the many useful characteristics of prediction markets for improving forecasting and decision-making inside companies:
- They aggregate information - the larger and broader the group, the more relevant information gets reflected in prediction market prices. In an ideal market, all relevant information is reflected in the price (the group's forecast), and any erroneous information is random and cancels out.
- They are better than polls and surveys at picking winners - both in accuracy and in what they measure. For instance, election polls typically ask who you plan to vote for, not who you think is going to win.
- They allow very granular forecasts - you can ask extremely specific questions, and break larger questions into components to isolate trends and influences on broad indicators like the company's stock price.
- They encourage honest forecasts - because there is usually some sort of reward at stake in a good prediction market (money or prestige), people tend to vote with their minds, not their hearts. Also, since trades are effectively anonymous if proper precautions are taken, people can make a bet without worrying about what peers or bosses (or underlings) might think.
- They are fun - they get people interested and involved in forecasting, and can provide a platform for community inside companies.
- They update themselves - unlike polls and surveys which are a "push" forecasting mechanism, prediction markets are a "pull" mechanism. As new information emerges, traders react quickly by changing their positions and the market quickly moves to incorporate that new information.
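The information-aggregation property above can be illustrated with a toy simulation. This is only a sketch under a simplifying assumption: each trader's private estimate is the true value plus independent random noise, and the "market price" is just the average of those estimates (real markets aggregate through trading, not simple averaging). The function name `crowd_forecast` and all parameters are hypothetical, chosen for illustration.

```python
import random

random.seed(42)

def crowd_forecast(true_value, n_traders, noise_sd):
    # Each trader's private estimate is the truth plus independent error.
    estimates = [true_value + random.gauss(0, noise_sd)
                 for _ in range(n_traders)]
    # Idealized market: the price is the mean of the traders' estimates,
    # so independent errors tend to cancel as the crowd grows.
    return sum(estimates) / len(estimates)

true_value = 0.70  # e.g. the true probability an event occurs
small_crowd = crowd_forecast(true_value, 10, 0.2)
large_crowd = crowd_forecast(true_value, 10_000, 0.2)
print(f"10 traders:     error = {abs(small_crowd - true_value):.4f}")
print(f"10,000 traders: error = {abs(large_crowd - true_value):.4f}")
```

With 10,000 traders the standard error of the average shrinks to roughly noise_sd / sqrt(n) ≈ 0.002, which is the statistical intuition behind "erroneous information is random and cancels out."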
The speakers told of some experiences at their firms:
- Google (Bo Cowgill) - is moving away from monetary rewards toward reputation/social rewards. Companies have to create elaborate schemes to use monetary rewards because of the regulatory restrictions on prediction markets that use real money. Also, people didn't pay much attention to the rewards, but paid lots of attention to the lists of top traders. Google is exploring new ways to indicate performance - by team, by date of hire (newcomers vs. old-timers), etc. It is thinking about putting prediction market rankings into the company directory as a prestige measure, and about how to let traders publicize or brag about their trading decisions within the platform or corporate directories.
Best Practices
So what was the takeaway? How do companies effectively use the many software and hosted prediction market tools available? The most surprising thing to me was how little advice or information the group had to share on this - or more precisely, how little consensus there was. In theory - and James Surowiecki discusses this in his book The Wisdom of Crowds (he was the host last night) - you do not want traders to have a stake in, or be able to influence, outcomes. Yet many of the corporate examples did not comply with this important theoretical requirement for an efficient and accurate market, and they seemed to work anyway.
I think what it means is that companies need to experiment widely and watch closely what happens in their prediction markets. But as Adam Siegel, IFTF's partner at Inkling, pointed out, there are three main reasons why prediction markets fail:
- People don't understand the concept
- The interface isn't easy enough to use
- The market structure was wrong: causes include questions that ask for opinions, poor descriptions, biased questions, and timeframes that are too long
Going forward with our OpenEx prediction market, that last issue will present both the greatest challenge and the greatest opportunity. At IFTF, we generally focus on the 5-10 year time frame when we work with our clients, but this is really too long for effective prediction markets. Few markets have even been around long enough to see 3-, 5-, or 7-year questions come to fruition, though one of the speakers said that the 10-year-old Foresight Exchange has had some limited success with long-term forecasts. I really look forward to working with Inkling, our clients, and anyone else who wants to play in our market to learn how to do this well. It could be the most exciting thing I ever work on here.