In the 1960s, the American philosopher Edmund Gettier devised a thought experiment that has become known as a “Gettier case.” It shows that something’s “off” about the way we understand knowledge. The puzzle it raises is called the “Gettier problem,” and 50 years later, philosophers are still arguing about it. Jennifer Nagel, a philosopher of mind at the University of Toronto, sums up its appeal. “The resilience of the Gettier problem,” she says, “suggests that it is difficult (if not impossible) to develop any explicit reductive theory of knowledge.”
What is knowledge? Well, thinkers for thousands of years had more or less taken one definition for granted: Knowledge is “justified true belief.” The reasoning seemed solid: Just believing something that happens to be true doesn’t necessarily make it knowledge. If your friend tells you she knows what you ate last night (say it was veggie pizza), and she happens to be right after guessing, that doesn’t mean she knew. That was just a lucky guess—a mere true belief. Your friend would know, though, if she said veggie pizza because she saw you eat it—that’s the “justification” part. Your friend, in that case, would have good reason to believe you ate it.
The Gettier problem is renowned because Gettier showed, using little short stories, that this intuitive definition of knowledge was flawed. His 1963 paper, titled “Is Justified True Belief Knowledge?” resembles an undergraduate assignment. It’s just three pages long. But that’s all Gettier needed to revolutionize his field, epistemology, the study of the theory of knowledge.
The “problem” in a Gettier problem emerges in little, unassuming vignettes. Gettier had his, and philosophers have since come up with variations of their own. Try this version, from the University of Birmingham philosopher Scott Sturgeon:
Suppose I burgle your house, find two bottles of Newcastle Brown in the kitchen, drink and replace them. You remember purchasing the ale and come to believe there will be two bottles waiting for you at home. Your belief is justified and true, but you do not know what’s going on.
Does it seem odd to say that you would know that there are two Newcastles in your fridge? Sure, you’re confident they’re there. But the only reason they’re there is because this burglar evidently had a change of heart. You, though, believe two are there because you put them there. You’re right that you’ve got beer in the fridge, and you’ve got good reason to believe they’d be there once you get back—but doesn’t your true and justified belief that you have two Newcastles waiting for you seem lucky somehow? After all, your belief is true only because the burglar replaced the beer. Can lucking into a true and justified belief be considered knowledge?
Consider another case, from the philosopher John Turri:
Mary enters the house and looks into the living room. A familiar appearance greets her from her husband’s chair. She thinks, “My husband is sitting in the living room,” and then walks into the den. But Mary misidentified the man in the chair. It’s not her husband, but his brother, whom she had no reason to think was even in the country. However, her husband was seated along the opposite wall of the living room, out of Mary’s sight, dozing in a different chair.
Again, the element of luck lurks. Does Mary know that her husband’s sitting in the living room? She believes he is, has justification, and is right. Yet the temptation, as with the Newcastles, is to say no. “The Gettier problem challenges us to diagnose why Gettier subjects don’t know,” Turri says. “Many assume surmounting the challenge will lead to the correct theory of knowledge. Some denounce or reject the challenge. But few are fully immune to its allure.”
So if knowledge isn’t justified true belief, what is it? At this point, a couple of years after the 50th anniversary of the publication of Gettier’s puzzle, a number of philosophers and psychologists think trying to answer this question is silly and always has been. “It is presently fashionable to denigrate early research on the Gettier problem, either as an absurd attempt at something foolish to begin with, or (as though spilled ink were a species of spilled blood) as a tragic loss of philosophical effort,” says Allan Hazlett, a philosopher at the University of New Mexico. But Duncan Pritchard, a philosopher at the University of Edinburgh, disagrees. “Far from being a lost cause,” he says, it’s “in fact alive and kicking.”
Inspired by the Gettier problem, Pritchard has come up with his own definition of knowledge. In a 2012 paper, he explains why you don’t know that there’s beer in the fridge, even though your belief is true and justified—which is what the traditional definition, “justified true belief,” failed to do.
The trick, Pritchard says, is first to notice that there are two distinct “master intuitions” about knowledge that seem to be two “faces” of a single intuition, but are not. These are the “anti-luck intuition” (your true belief, which Pritchard calls a “cognitive success,” can’t be lucky if it is to be considered knowledge) and the “ability intuition” (your true belief has to be in some sense a product of your cognitive ability). (It’s worth noting that some have doubted whether probing intuitions, as Pritchard does, is useful. Nagel thinks it is: “Epistemic intuition is not infallible,” she wrote in a 2013 paper, published in Current Controversies in Experimental Philosophy, “but at present it looks reliable enough to continue serving its traditional function of supplying us with valuable evidence about the nature of knowledge.”)
“What does it take to ensure that one’s cognitive success is not due to luck? Well, intuitively anyway, that it is the product of one’s cognitive ability,” Pritchard says. “Conversely, insofar as one’s cognitive success is the product of one’s cognitive ability, then again, intuitively one would expect it to thereby be immune to knowledge-undermining luck.” But this, he says, is a flawed way to think about it. Consider this Gettier case, about a fellow named “Temp,” to see why:
Temp forms his beliefs about the temperature in the room by consulting a thermometer. His beliefs, so formed, are highly reliable, in that any belief he forms on this basis will always be correct. Moreover, he has no reason for thinking that there is anything amiss with this thermometer. But the thermometer is in fact broken, and is fluctuating randomly within a given range. Unbeknownst to Temp, there is an agent hidden in the room who is in control of the thermostat whose job it is to ensure that every time Temp consults the thermometer the “reading” on the thermometer corresponds to the temperature in the room.
Temp’s true belief about the current temperature isn’t lucky—he’s getting it right because someone is deliberately giving him the right temperature every time he takes a look. As Pritchard puts it, “What is wrong with Temp’s beliefs is that they exhibit the wrong direction of fit with the facts, for while his beliefs formed on this basis are guaranteed to be true, their correctness has nothing to do with Temp’s abilities and everything to do with some feature external to his cognitive agency.” In other words, as he goes on to say, “While [Temp’s] cognitive success is not the product of his cognitive ability, that’s not because it’s simply a matter of luck.”
So the way to have knowledge, Pritchard concludes, is to have your relevant cognitive abilities produce a belief that’s not only true and creditable to your agency, but also safe. By “safe,” Pritchard means that your belief couldn’t easily have been false. Temp’s belief, for instance, is safe—there’s a hidden guy guaranteeing he’ll believe the correct temperature each time he checks. (If you’re thinking to yourself, “But the hidden guy could easily decide to give Temp the wrong temperature,” just imagine it’s not a hidden guy but a hidden machine programmed to always make the thermometer show the correct temperature.) But your belief that there’s beer in the fridge, and Mary’s belief that her husband’s sitting in the living room, aren’t safe, because the burglar could easily have not replaced the beer, and Mary’s husband could easily have been in another room.
To make this easier to picture, Pritchard invites us to think of a cognitive success, like a true belief, in the same way that we think of success in, say, archery. Knowledge is an achievement just like hitting the bull’s-eye all on your own is an achievement: You did it and it wasn’t just luck. Here’s what Pritchard says:
Achievements clearly involve success, but an archer who hits a target while lacking any relevant abilities has not exhibited an achievement even despite her success. Moreover, it is also vital that the archer’s success should be because of the exercise of her relevant abilities. A skillful archer who fires at a target but who is only successful at hitting that target because of a fortuitous series of gusts of wind does not exhibit an achievement, even though she is successful and also possesses the relevant abilities (this would be a kind of Gettier-style case). What is required, then, is a success that is best explained in terms of the exercise of the agent’s abilities—i.e. a success that is because of one’s ability—and this is what is lacking in this case.
Success isn’t an achievement unless you did it on your own, in other words. The same goes for true belief—it’s not knowledge unless you yourself are responsible for getting it right. (That isn’t to say that you have to find out everything yourself firsthand; otherwise the theory would rule out the possibility of gaining knowledge from books, for example).
You might be wondering what Gettier thinks of all this. It turns out, not much—or, if he does have an opinion, he hasn’t cared enough to share it. Indeed, he’s never published any other paper besides “Is Justified True Belief Knowledge?” and he turned 89 just before this article’s publication in 2016. To the question, “Why not?” he said, “I have nothing more to say.”
Brian Gallagher is the editor of Facts So Romantic, the Nautilus blog. Follow him on Twitter @brianga11agher.