Super Crunchers and the End of Intuition
It sounds like a song by They Might Be Giants (whose children's music is pretty outstanding), but it's actually "a new book by Ian Ayres, an econometrician and law professor at Yale," which describes
a powerful trend that will shape the economy for years to come: the replacement of expertise and intuition by objective, data-based decision making, made possible by a virtually inexhaustible supply of inexpensive information. Those who control and manipulate this data will be the masters of the new economic universe. Ayres calls them "Super Crunchers," which is also the title of his book, the latest attempt to siphon off a bit of the buzz that surrounds the hugely successful "Freakonomics." In fields from criminal law (where statistical projections of recidivism are taking discretion away from judges and parole boards) to oenophilia (where a formula involving temperature and rainfall is a better predictor of the quality of a vintage than the palates of the most vaunted experts), "intuitivists" are on the defensive against the Super Crunchers.
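To make the wine example concrete: the "formula involving temperature and rainfall" is, at bottom, an ordinary least-squares regression fit to historical vintages. Here's a minimal sketch of what that kind of super-crunching looks like in Python; the weather variables, quality scores, and numbers are my own toy illustration, not Ayres's (or anyone's) actual data or coefficients.

```python
# Toy version of the "weather beats the experts" vintage regression:
# fit quality as a linear function of growing-season temperature and rainfall.
# All numbers are invented for illustration; only the technique is real.
import numpy as np

# Each row: [growing-season temperature (C), harvest rainfall (mm)]
weather = np.array([
    [16.8,  80.0],
    [17.3,  60.0],
    [15.9, 120.0],
    [17.9,  38.0],
    [16.2, 100.0],
])
# Hypothetical quality scores for those vintages (e.g. an auction-price index)
quality = np.array([87.0, 91.0, 78.0, 95.0, 82.0])

# Add an intercept column and solve the least-squares problem
X = np.column_stack([np.ones(len(weather)), weather])
coeffs, *_ = np.linalg.lstsq(X, quality, rcond=None)
intercept, temp_coef, rain_coef = coeffs

# Predict the quality of a new vintage from its weather alone -- no tasting required
new_vintage = np.array([1.0, 17.1, 55.0])
print("predicted quality:", new_vintage @ coeffs)
```

The point of the exercise is how little is in it: a handful of public weather numbers and a least-squares fit, and the model outperforms the vaunted palates.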
Why should we care about this? After all, if algorithms can help you find cool books or movies, shouldn't this be embraced? Well, there's an unanticipated consequence:
Increasingly, jobs that used to call for independent judgment, especially about other people, are being routinized and dumbed down. Banks no longer care about a loan officer's assessment of whether a borrower is a good risk; everything they need to know is in the numbers. Baseball managers increasingly judge prospects by quantifiable statistics, not their "drive" or "hustle." "We are living in an age when dispersed discretion is on the wane," Ayres writes, even in such intimate settings as the doctor's office. Evidence-based medicine, the use of statistical models to guide diagnoses and treatment, is already changing how doctors practice. "Many physicians have effectively ceded a large chunk of control of treatment choice to Super Crunchers," he writes, and the trend will continue despite understandable resistance from the profession.
A couple of decades ago, nearly everyone who studied the history of science and technology had to read David Noble's Forces of Production: A Social History of Industrial Automation, which argued that numerically controlled machine tools triumphed over competing systems in the 1950s and 1960s not because they delivered better results, but because they were powerful instruments for deskilling machinists. Machinists were highly paid, very skilled, and, in the eyes of management, unacceptably powerful; numerical control offered the prospect of giving managers control over machine tools and rendering machinists themselves irrelevant. (The book was extremely controversial; I interviewed a couple of people who appeared in it, and they hated it.) But the idea that evidence-based medicine, quantitative analysis of personal performance or potential risk, and the like could serve the same purpose for white-collar workers is provocative, to say the least.
Update. Peter Levin asks (actually, he asked this back in June), "Does Information-mining displace decision-making?"
At what point, though (and this is the general question), does information mining actually replace decision-making? In sociology, C. Wright Mills made the argument a half-century ago that this leads to the vacuousness of abstracted empiricism. In business decision-making, it leads to an abdication of decision-making in favor of empirical data mining. The problem here (following James March) is that the world is not just uncertain, it is ambiguous. If the world were simply uncertain, reduction of uncertainty via the aggregation of more and better information might prove just the ticket. But what happens when a decision has to be made between qualitatively different options? When more information does not provide a clear direction to go? Or when decision-making could actually increase, decrease, or change in fundamental ways the options themselves? At this point, information-mining actually becomes harmful to the extent that it replaces rather than augments real decision-making. Worse, making decisions is a skill that needs to be flexed and used. Understanding when and how information-mining would be useful seems to me a more important ability than even knowing how to manage the information-mining itself.
I need to go back to The Black Swan and see what Nassim Nicholas Taleb says about the role of automated trading in exacerbating financial crises. His big argument is that black swans are fewer but worse in today's more precisely managed economies and societies. To build on Peter's observation, does he argue that efforts to generate certainty distract us from the continuing existence of ambiguity, and perhaps even make those ambiguities bigger and more dangerous, even while we focus on the computer model?