
How to Think about “Implicit Bias”

Amidst a controversy, it’s important to remember that implicit bias is real—and it matters.

Scientific American


Photo by theprint / Getty Images.

When was the last time a stereotype popped into your mind? If you are like most people, the authors included, it happens all the time. That doesn’t make you a racist, sexist, or whatever-ist. It just means your brain is working properly, noticing patterns, and making generalizations. But the same thought processes that make people smart can also make them biased. This tendency for stereotype-confirming thoughts to pass spontaneously through our minds is what psychologists call implicit bias. It sets people up to overgeneralize, sometimes leading to discrimination even when people feel they are being fair.

Studies of implicit bias have recently drawn ire from both the right and the left. For the right, talk of implicit bias is just another instance of progressives seeing injustice under every bush. For the left, implicit bias diverts attention from more damaging instances of explicit bigotry. Debates have grown heated and have leapt from scientific journals into the popular press. Along the way, some important points have been lost. We highlight two misunderstandings that anyone who wants to understand implicit bias should know about.

First, much of the controversy covered in 2017 centers on the most famous implicit bias test, the Implicit Association Test (IAT). A majority of people taking this test show evidence of implicit bias, suggesting that most people are implicitly biased even if they do not think of themselves as prejudiced. Like any measure, the test does have limitations. The stability of the test is low, meaning that if you take the same test a few weeks apart, you might score very differently. And the correlation between a person’s IAT scores and discriminatory behavior is often small.

The IAT is a measure, and it doesn’t follow from a particular measure being flawed that the phenomenon it attempts to measure is not real. To draw that conclusion is to commit the Divining Rod Fallacy: just because one rod fails to find water doesn’t mean there’s no such thing as water. A smarter move is to ask, “What does the other evidence show?”

In fact, there is lots of other evidence. There are perceptual illusions, for example, in which white subjects perceive black faces as angrier than white faces with the same expression. Race can bias people to see harmless objects as weapons when they are in the hands of black men, and to dislike abstract images that are paired with black faces. And there are dozens of variants of laboratory tasks finding that most participants are faster to identify bad words paired with black faces than white faces. None of these measures is without limitations, but they show the same pattern of reliable bias as the IAT. There is a mountain of evidence—independent of any single test—that implicit bias is real.

The second misunderstanding concerns what scientists mean when they say a measure predicts behavior. A frequent complaint is that an individual’s IAT score doesn’t tell you whether that person will discriminate on a particular occasion. This complaint commits the Palm Reading Fallacy: unlike palm readers, research psychologists aren’t usually in the business of telling you, as an individual, what your life holds in store. Most measures in psychology, from aptitude tests to personality scales, are useful for predicting how groups will respond on average, not for forecasting how particular individuals will behave.

The difference is crucial. Knowing that an employee scored high on conscientiousness won’t tell you much about whether her work will be careful or sloppy if you inspect it right now. But if a large company hires hundreds of employees who are all conscientious, this will likely pay off with a small but consistent increase in careful work on average.

Implicit bias researchers have always warned against using the tests for predicting individual outcomes, like how a particular manager will behave in job interviews—they’ve never been in the palm-reading business. What the IAT does, and does well, is predict average outcomes across larger entities like counties, cities, or states. For example, metro areas with greater average implicit bias have larger racial disparities in police shootings. And counties with greater average implicit bias have larger racial disparities in infant health problems. These correlations are important: the lives of black citizens and newborn black babies depend on them.

Field experiments demonstrate that real-world discrimination continues, and is widespread. White applicants get about 50 percent more call-backs than black applicants with the same resumes; college professors are 26 percent more likely to respond to a student’s email when it is signed by Brad rather than Lamar; and physicians recommend less pain medication for black patients than white patients with the same injury.

In 2018, managers are unlikely to announce that white job applicants should be chosen over black applicants, and physicians don’t declare that black people feel less pain than whites. Yet the widespread pattern of discrimination and disparities seen in field studies persists. It bears a much closer resemblance to the widespread stereotypical thoughts seen on implicit tests than to the survey studies in which most people present themselves as unbiased.

One reason people on both the right and the left are skeptical of implicit bias might be pretty simple: it isn’t nice to think we aren’t very nice. It would be comforting to conclude, when we don’t consciously entertain impure intentions, that all of our intentions are pure. Unfortunately, we can’t conclude that: many of us are more biased than we realize. And that is an important cause of injustice—whether you know it or not.



This post originally appeared on Scientific American on March 27, 2018, and is republished here with permission.
