
The Irony Effect

How the scientist who founded the science of mistakes ended up mistaken.




Danny Kahneman’s love affair with Amos Tversky began in the spring of 1969, when his dazzling and clever colleague, also a professor of psychology at Hebrew University in Jerusalem, came to give a talk to Kahneman’s graduate seminar. Tversky told the students about a new study being done by researchers in Michigan on how regular people tend to think about statistics. The work suggested that we all have a natural grasp of probability, Tversky said. We’re all inclined to be rational.

“Brilliant talk,” said Kahneman when Tversky finished, “but I don’t believe a word of it.” He had strong reasons to be skeptical. For years, his own research had been derailed by faulty intuitions about how to use statistics. Even though Kahneman was highly trained in research methods, he’d kept on falling victim to the same mistake: His sample sizes were too small.

Kahneman also knew that he was not the only scholar screwing up. He’d seen a recent paper showing that the problem was ubiquitous: Even studies in the leading journals of psychology, performed by scientists who specialized in quantitative research, were underpowered as a rule. And now Tversky meant to argue that everyone’s a natural statistician?

Most places that he went, Tversky was the smartest person in the room, and he wasn’t often challenged. But Kahneman let him have it. (“You won’t believe what happened to me,” Tversky told a colleague later, titillated by the disagreement.) Their argument in class led to a conversation over lunch, and then a run of private meetings. What if they could show that Kahneman was right—that when people think about statistics they’re inclined to make mistakes?

That summer, Kahneman and Tversky tested out their theory. They wrote a set of questions meant to test a person’s statistical intuitions, and Tversky passed it out to researchers at a pair of scientific conferences, including one for the Mathematical Psychology Association. Back home in Jerusalem, they looked over the responses. Many of the scientists had made the same mistake that Kahneman had, placing too much trust in studies of small samples and tending to “extract more certainty from the data than the data, in fact, contain.” This was not a form of wishful thinking, wrote Kahneman and Tversky in the first paper they put out together in 1971, called “Belief in the Law of Small Numbers.” It was a bias of cognition, a hardware error of the mind, like one of those optical illusions that cannot be unseen, no matter how many times it’s pointed out. Neither knowledge nor expertise could save you: Even savvy professionals suffered from a “consistent misperception of the world.”
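The "law of small numbers" mistake is easy to demonstrate with a quick simulation. The sketch below is a toy illustration only, not Kahneman and Tversky's actual survey; all of the numbers in it are invented. It draws many simulated studies from a population with a known true rate and shows how often a small sample lands far from the truth:

```python
import random
import statistics

# Illustrative sketch (not Kahneman and Tversky's questionnaire): simulate
# many studies estimating a known true rate, and watch how often a small
# sample wildly misstates it.
random.seed(42)

TRUE_RATE = 0.5       # the population value each simulated study estimates
N_STUDIES = 10_000    # number of simulated studies per sample size

def observed_rate(sample_size):
    """Proportion of 'successes' observed in one simulated study."""
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

results = {}
for n in (10, 100, 1000):
    estimates = [observed_rate(n) for _ in range(N_STUDIES)]
    spread = statistics.stdev(estimates)
    far_off = sum(abs(e - TRUE_RATE) > 0.1 for e in estimates) / N_STUDIES
    results[n] = (spread, far_off)
    print(f"n={n:4d}: sd of estimates = {spread:.3f}, "
          f"off by more than 0.1: {far_off:.1%}")
```

With ten subjects, roughly a third of the simulated studies miss the true rate by more than ten percentage points; a tenfold increase in sample size shrinks the spread of estimates only by about the square root of ten. That square-root scaling is precisely the intuition the surveyed researchers lacked.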

This scientific study of scientific bias would ignite a romance of the mind, one that spanned several decades and ended up transforming both psychology and economics. Kahneman and Tversky went on to show that mistakes in human judgment are not exceptions but the rule, resulting from a host of mental shortcuts and distortions that cannot be avoided. We do not behave like “rational actors,” as economists once presumed; rather, we’re predictably misguided—subject to a “bounded rationality.” Tversky went on to win a MacArthur “genius” grant on the basis of their work. Kahneman would get a Nobel Prize.

The author Michael Lewis tells the tender, probing story of their lives and work together in his new book, The Undoing Project. It’s a portrait of besotted opposites: Both Kahneman and Tversky were brilliant scientists, and atheist Israeli Jews descended from Eastern European rabbis, but in every other way they seemed to differ. Kahneman liked to smoke; Tversky hated cigarettes. Kahneman was a morning person; Tversky worked at night. Kahneman’s office was a mess, with papers piled everywhere; Tversky preferred an empty room with nothing but a pencil and a desk. Kahneman could be shy, pessimistic, and depressive. Tversky was pugnacious and outgoing. Kahneman’s insight into bias was tied up with self-doubt, deriving from the careful excavation of weakness that he found within himself. Tversky seemed to study human folly through a telescope, as if he were peering at a far-off land.

Lewis says that colleagues never understood quite how these two managed to connect, let alone to spend so many hours every day, across so many years, inside each other’s heads. “It was as if you had dropped a white mouse into a cage with a python,” he writes, “and come back later and found the mouse talking and the python curled in the corner, rapt.” In any case their peculiar synergy fueled an engine of disruption, laying groundwork for the field of behavioral economics and driving new approaches to the practice of sports management, health care, government, presidential campaigns, and education. (That’s just a random sample of domains in which their work has been influential; there are many, many more.)

Yet for all their fame and influence, Kahneman and Tversky’s own discipline, psychology, remained somehow unaffected by the first and founding insight of their partnership. Other scholars would extend their work on judgment and heuristics while seeming to ignore the implications of their original study of statistics. If it were really true that some of the keenest minds in psychology were doing research wrong, then the aggregate effect would be enormous. The literature of published findings might be clogged with inconclusive data, and that would mean that each new research project in psychology was wobbling on hollow struts. In 1971, Kahneman and Tversky had cautioned that scientists’ intuitions about randomness—their unwarranted belief that any small sample of the population could stand in for the whole—could lead to “unfortunate consequences in the course of scientific inquiry.” It was as if the field forgot, or simply overlooked, what they’d demonstrated with such clarity and humor—that even virtuosic scientists, even Danny Kahneman and Amos Tversky types, might be prone to this mistake.

Nor were Kahneman and Tversky themselves inclined to dwell on what they’d found. Their survey of fellow researchers had served to illustrate the broader point, that everyone makes mistakes in a systematic way. That would be the target of their research as their partnership progressed—not the inefficiencies of scientific practice (small potatoes, those), but the very fundaments of human thought.

Even as their work produced what seemed to be an endless stream of insight, Kahneman and Tversky’s personal relationship would be soured by mistakes that neither party could avoid. The Undoing Project chronicles their split in stinging detail. “It was worse than a divorce,” Tversky’s wife told Lewis. They seemed to suffer from the difference in their natures: one delicate, the other bruising. In 1983, Kahneman confessed that he’d been overcome by envy, since Tversky was getting most of the acclaim. “Tversky cannot control this,” he told an interviewer at the time, “though I wonder whether he does as much to control it as he should.” He began to feel rejected and ignored. “An episode such as the one we had yesterday wrecks my life for several days, (including anticipation as well as recovery), and I just don’t want those anymore,” he wrote to Tversky after a meeting in 1987.

“I realize my response style leaves a lot to be desired but you have also become much less interested in objections or criticism, mine or others’,” Tversky answered. “One of the things I admired you for most in our joint work was your relentlessness as a critic.” Tversky’s letter castigated Kahneman for having lost his skepticism and willingness to change his mind. “I do not see any of this in your attitude to many of your ideas recently,” the letter said.

After Tversky died from melanoma in 1996—the two remained in touch until the end—Kahneman gained more prominence himself. He would get his Nobel Prize in 2002 and the Presidential Medal of Freedom in 2013. But his reputation got its biggest boost five years ago, when Kahneman published Thinking, Fast and Slow, a fascinating résumé of his work with Tversky and beyond, and also of the other research in psychology that has dug into our many hidden biases, illusions, fallacies, and neglects.

Kahneman had been terrified to write the book; according to Lewis, he worried that it would destroy his reputation. (Like a good behavioral economist, he even paid his colleagues to read the manuscript and provide anonymous advice about whether to abandon it.) To his surprise, Thinking, Fast and Slow became a huge success—a critically acclaimed, best-selling, award-winning culmination of his long career.

Within a few months of its publication, though, the study of psychology went into a state of crisis. Many of the findings that Kahneman had cited in the book—and lots of others, too—suddenly appeared to be quite fragile, maybe even spurious. In a section on “The Marvels of Priming,” for example, Kahneman had described the way that “primed” ideas spread across the mind like ripples on a pond, and that “the mapping of these ripples is now one of the most exciting pursuits in psychological research.” But shortly after he finished the book, the enterprise of social-priming research fell into scandal and uncertainty. An influential scholar in the field was outed as a fraud. A bedrock finding in the field—drawn from a study that Kahneman called an “instant classic”—crumbled under scrutiny.

Other classic studies mentioned in the book started to dissolve as well, as scientists tried and failed to reproduce the original experiments. Kahneman had wowed his readers with the fact that a person is more easily amused when she holds a pen between her teeth (which forces a covert smile), for example, and that seeing images of money will make her act more selfishly, and that a glass of lemonade can help restore her willpower. But it wasn’t long before each of these effects would be questioned or upended.

Indeed, the research that supported them ran afoul of the very problem that Kahneman and Tversky identified so many years ago: The researchers were fooled by faulty intuitions about randomness. Their studies had been underpowered, which meant that false-positive results may have crept into the literature.

Yet Kahneman himself left no room to doubt those suspect findings. “Disbelief is not an option,” he wrote in the book. “The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you.” He concluded the chapter on priming with what he called “a perfect demonstration” of its argument—a study from 2006 in which posters showing a pair of eyes or a bunch of flowers were put into a university kitchen. People were more likely to pay for their tea and coffee in the presence of the poster with the eyes, that study found. Something as subtle as this meager gesture at surveillance could prompt important changes in behavior.

But then, further study of the “watching-eyes effect”—with very mixed results—made even that perfect demonstration of priming seem a little fishy.

The replication crisis in psychology does not extend to every line of inquiry, and just a portion of the work described in Thinking, Fast and Slow has been cast in shadows. Kahneman and Tversky’s own research, for example, turns out to be resilient. Large-scale efforts to recreate their classic findings have so far been successful. One bias they discovered—people’s tendency to overvalue the first piece of information that they get, in what is known as the “anchoring effect”—not only passed a replication test, but turned out to be much stronger than Kahneman and Tversky thought.

Still, entire chapters of Kahneman’s book may need to be rewritten. The psychologist Uli Schimmack has devised a statistical measure called the R-index to estimate the trustworthiness of a given body of research based on its reported sample sizes and effects. (It’s like a “doping test for science,” Schimmack says.) He recently applied this measure to the studies cited in each of 11 different chapters from Thinking, Fast and Slow, then assigned letter grades to each result. (The book has 38 chapters total.) A couple of the chapters came out looking very good, with R-index scores of 93 and 99—worthy of an A-plus grade from Schimmack for their rigor. But five other chapters, including the one on social priming, ended up with scores of less than 40—what Schimmack called an F. Taken all together, the chapters Schimmack looked at earned an average grade of C-minus.
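The arithmetic behind the R-index is simple, per Schimmack's own write-ups: take the median observed (post-hoc) power of the cited studies, then subtract the "inflation" gap between the studies' reported success rate and that median power. Below is a minimal sketch of that calculation, assuming that published definition; the sample numbers are invented for illustration and are not Schimmack's figures for the book:

```python
import statistics

# Hedged sketch of the R-index arithmetic as Schimmack describes it:
# R-index = median observed power - inflation,
# where inflation = success rate - median observed power.
# All numbers below are invented, not drawn from Schimmack's analysis.

def r_index(observed_powers, success_rate):
    """observed_powers: post-hoc power estimates (0-1), one per study.
    success_rate: fraction of those studies reporting a significant result."""
    median_power = statistics.median(observed_powers)
    inflation = success_rate - median_power  # excess success over what power predicts
    return median_power - inflation          # equals 2 * median_power - success_rate

# Invented example: every study "worked" (success_rate = 1.0) even though
# median observed power is only 0.55 -- a classic red flag.
score = r_index([0.40, 0.55, 0.70], success_rate=1.0)
print(round(100 * score))  # prints 10 on the 0-100 scale
```

The intuition: if every study in a chapter succeeded but the studies were only powered to succeed half the time, something other than the underlying effect is inflating the success rate, and the index penalizes the chapter accordingly.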

How could Kahneman, of all people—a man so brilliant at describing weakness, so canny in his doubts, so quick to cut through bullshit—have signed his name to all this suspect science? Had his thinking gotten credulous, as Tversky had suggested in that letter written in the midst of their divorce? Or had he succumbed to normal human error, like the one that he discovered in the 1960s, right around the time he and Tversky met?

No one is a natural statistician, he’d argued at the time. Even scientists make mistakes. As usual, when it came to being wrong, Kahneman was right.

Daniel Engber is a columnist for Slate.



This post originally appeared on Slate and was published December 21, 2016. This article is republished here with permission.
