I’m going to recommend a lot of books here on
blog.pmarca.com, but this is one of the most important you’ll ever read: Philip Tetlock’s Expert Political Judgment: How Good Is It? How Can We Know?.
A comprehensive quantitative survey of so-called experts in the political domain — analysts, commentators, forecasters, pundits — this book will permanently change how you think about what you read, and whether you should ever again listen to anyone who sounds like they know what they’re talking about. In politics, and in every other complex domain — including (and perhaps especially) business.
Quoting from a New Yorker review:
It is the somewhat gratifying lesson of Philip Tetlock’s new book that people who make prediction their business — people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables — are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake. No one is paying you for your gratuitous opinions about other people, but the experts are being paid, and Tetlock claims that the better known and more frequently quoted they are, the less reliable their guesses about the future are likely to be. The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote. Our system of expertise is completely inside out: it rewards bad judgments over good ones.
Expert Political Judgment is not a work of media criticism. Tetlock is a psychologist — he teaches at Berkeley — and his conclusions are based on a long-term study that he began twenty years ago. He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? Would Canada disintegrate? (Many experts believed that it would, on the ground that Quebec would succeed in seceding.) And so on. By the end of the study, in 2003, the experts had made 82,361 forecasts. Tetlock also asked questions designed to determine how they reached their judgments, how they reacted when their predictions proved to be wrong, how they evaluated new information that did not support their views, and how they assessed the probability that rival theories and predictions were accurate.
Tetlock got a statistical handle on his task by putting most of the forecasting questions into a “three possible futures” form. The respondents were asked to rate the probability of three alternative outcomes: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession). And he measured his experts on two dimensions: how good they were at guessing probabilities (did all the things they said had an x per cent chance of happening happen x per cent of the time?), and how accurate they were at predicting specific outcomes. The results were unimpressive. On the first scale, the experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes—if they had given each possible future a thirty-three-per-cent chance of occurring. Human beings who spend their lives studying the state of the world, in other words, are poorer forecasters than dart-throwing monkeys, who would have distributed their picks evenly over the three choices.
Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study…
It goes on and on, as Tetlock remorselessly walks through the data and completely dismantles the whole edifice of expert forecasting and the idea that the future is predictable in any meaningful way at all.
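To make the “dart-throwing monkey” comparison from the review concrete, here is a minimal sketch in Python. It is not Tetlock’s actual scoring procedure — the forecasts and the use of the Brier score here are my own illustrative assumptions — but it shows the shape of the test: each forecast assigns probabilities to three outcomes (status quo, more of something, less of something), and a uniform one-third-each baseline is scored against the same outcomes.

```python
def brier(probs, outcome):
    """Brier score for one forecast: sum of squared errors between the
    assigned probabilities and the realized outcome (1 for what happened,
    0 for what didn't). Lower is better."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

# Hypothetical forecasts: (probabilities over the three futures,
# index of the outcome that actually occurred).
forecasts = [
    ([0.7, 0.2, 0.1], 2),  # confident expert, wrong
    ([0.6, 0.3, 0.1], 0),  # confident expert, right
    ([0.8, 0.1, 0.1], 1),  # confident expert, wrong
]

expert = sum(brier(p, o) for p, o in forecasts) / len(forecasts)
monkey = sum(brier([1/3] * 3, o) for _, o in forecasts) / len(forecasts)

print(f"expert average Brier score:  {expert:.3f}")
print(f"uniform-baseline ('monkey'): {monkey:.3f}")
```

In this toy sample the overconfident expert scores worse than the uniform baseline, which always scores 2/3 on a three-outcome question — the same pattern the review describes, where experts underperformed an equal-probability assignment.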
Sometimes the idea of retreating to a mountain cabin in Montana with no electricity or running water doesn’t seem like such a bad one.
(In fairness, I’m exaggerating to make the point. Tetlock also walks through the patterns of forecasting that do work better than chimps throwing darts at a dartboard, and makes a number of suggestions on how to improve the quality of predictions. I’m highly skeptical that any of his suggestions will be widely adopted, though. Which brings me back to that cabin idea…)