I love the Acquired podcast. Hosts Ben and David dig deep into the histories of companies like Netflix, Standard Oil, Nike, and Tesla. The stories always surprise me, and I learn a lot about business in the process.
And yet…I often zone out while I’m listening. I’ll start thinking about chicken tikka masala, or following a melody in a song that keeps looping through my head. When I come back to the story of Phil Knight a minute or two later, it always feels like I didn’t miss anything important. If you asked me to give you the gist of the conversation, I’d be confident that my summary was accurate.
But I’d be wrong. My summary might sound plausible to me, or to someone who also wasn’t listening to the podcast, but I’ve re-listened to some of these conversations, and I always find at least one major point that I did not hear at all the first time (Phil Knight stole documents from someone’s briefcase?!).
This is the phenomenon of overconfidence Daniel Kahneman describes in Thinking, Fast and Slow. When we come to a conclusion, we don’t like to doubt it. We’re holding that completion trophy I mentioned in the last post, and we don’t want to give it up.
Kahneman lists many fascinating examples of confident experts who are wrong more often than they think:
Mutual fund managers: two-thirds of them fail to beat the market. That is to say, their clients would have been better off dumping their money in an index fund and forgetting about it. And a fund's performance in one year has close to zero correlation with its performance in the next.
Radiologists: In one study, they contradicted themselves 20% of the time. (They were shown scans and asked for a diagnosis. Some of the scans were duplicates, and they gave different diagnoses for the same scan.)
Drivers: 90% of us believe we are above-average drivers.
Physicians: Their diagnoses before death were wrong 40% of the time.
Political experts: In Superforecasting, Philip Tetlock describes his study of the predictions of 284 experts. These are the pundits you see on TV and read in prominent newspapers, where they confidently predict what will happen in the stock market, or who will win an election, or whether two countries will go to war. He collected over 80,000 judgments from these experts…and found that their predictions were worse than random. In his unfortunate phrase, a “dart-throwing monkey” would have performed better.
We also witness this overconfidence when we take tests. It’s particularly noticeable on reading comprehension questions like this one:
It’s a difficult question, because the answer is actually nowhere near line 8. Much later in the passage, we learn in a parenthetical remark that Georgia’s fiancé (Joseph Tank) is a paper bag manufacturer. Good readers will infer that her friends are teasing her when they wave paper bags before her eyes. But those who struggle with this question will often decide that the answer is E: it makes sense that Georgia would be tired of being the only woman in the newsroom. In fact, it makes so much sense that when I ask these students to predict which questions they got wrong, they don’t point to this one. Like my podcast summaries, this story about Georgia is plausible – it *feels* right – even though it did not account for important information. And because this feeling is so compelling, they see no reason to question their judgment or search for more information.