In a now-classic series of experiments, researchers teased out the deep-rooted nature of human bias simply by distributing red shirts and blue shirts to groups of 3- to 5-year-olds at a day care center. In one classroom, teachers were asked to divide children into groups based on the color of their shirts. In another, teachers were instructed to overlook the shirt colors. After three weeks, children in both classrooms tended to prefer being with classmates who wore the same color as themselves—no matter what the teachers did.
This preference for people who seem to belong to our own tribe forms early and drives our choices throughout life. There appears to be no avoiding it: We are all biased. Even as we learn to sort shapes and colors and distinguish puppies from kittens, we also learn to categorize people on the basis of traits they seem to share. We might associate women who resemble our nannies, mothers, or grandmothers with nurturing and domestic labor. Or, after centuries of racism, segregation, and entrenched cultural stereotypes, we might perceive dark-skinned men as more dangerous than others.
The biases we form quickly and early in life are surprisingly immutable. Biases are “sticky,” says Kristin Pauker, a psychology researcher at the University of Hawaii, “because they rely on this very fundamental thing that we all do. We naturally categorize things, and we want to have a positivity associated with the groups we’re in.” These associations are logical shortcuts that help us make quick decisions when navigating the world. But they also form the roots of often illogical attractions and revulsions, like red shirts versus blue shirts.
Our reflexive, implicit biases wreak devastating social harm. When we stereotype individuals based on gender, ethnicity, sexual orientation, or race, our mental stereotypes begin to drive our behavior and decisions: whom to hire, whom we perceive as incompetent, delinquent, or worse. Earlier this year, for instance, an appeals court overturned a Black man’s conviction for heroin distribution, along with his 10-year prison sentence, in part because the Detroit federal judge who handed down the original sentence admitted, “This guy looks like a criminal to me.”
Correcting for the biases buried in our brains is difficult, but it is also hugely important. Because women are stereotyped as domestic, they are also generally seen as less professional. That attitude has reinforced a decades-long wage gap. Even today, women still earn only 82 cents for every dollar that men earn. Black men are perceived as more violent than white men, and thus are subjected to discriminatory policing and harsher prison sentences, as in the Detroit case. Clinicians’ implicit preferences for cisgender, heterosexual patients cause widespread inequities in health care for LGBTQ+ individuals.
“These biases are operating on huge numbers of people repetitively over time,” says Anthony Greenwald, a social psychologist at the University of Washington. “The effects of implicit biases accumulate to have great impact.”
Greenwald was one of the first researchers to recognize the scope of the problems created by our implicit biases. In the mid-1990s, he created early tests to study and understand implicit association. Along with colleagues Mahzarin Banaji, Brian Nosek, and others, he hoped that shining a light on the issue might quickly identify the tools needed to fix it. Awareness that our distorted thinking was hurting other people, they reasoned, should be enough to give us pause and push us to do better.
They were wrong. Although implicit bias training programs help people become aware of their biases, both anecdotal reports and controlled studies have shown that the programs do little to reduce the discriminatory behaviors those prejudices spur. “They fail in the most important respect,” Greenwald says. When he, Banaji, and Nosek developed the Implicit Association Test, he took it himself. He was distressed to discover that he automatically associated more positive words with the faces of white people and more negative words with the faces of Black people. “I didn’t regard myself as a prejudiced person,” Greenwald says. “But I had this association nevertheless.”
His experience is not unusual. The Implicit Association Test (IAT) measures the speed of subjects’ responses as they match descriptors of people (such as Hispanic or gay) to qualities (such as attractiveness, athleticism, or professionalism). It’s based on the idea that people react more quickly when they are matching qualities that are already strongly associated in their minds. Implicit bias exists separately from explicit opinion, so someone who honestly believes they don’t have anything against gay people, for instance, may still reveal a bias against them on the test. “A lot of people are surprised by their results,” Greenwald says. “This is very hard for people to come to grips with intuitively.”
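The core logic of the test can be captured in a few lines of code. The sketch below is a simplified illustration of the scoring idea only, not the published IAT algorithm; the block names, trial-filtering cutoffs, and example reaction times are invented for demonstration.

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT-style effect size (illustrative, not the official scoring).

    compatible_ms: reaction times (ms) when the pairings match the
        hypothesized association (e.g., in-group faces + positive words).
    incompatible_ms: reaction times for the reversed pairings.
    Returns a standardized score: positive values mean slower responses
    on incompatible pairings, i.e., a stronger implicit association.
    """
    # Discard implausibly fast or slow trials, a common cleanup step.
    clean = lambda rts: [rt for rt in rts if 300 <= rt <= 10_000]
    comp, incomp = clean(compatible_ms), clean(incompatible_ms)

    # Divide by the pooled standard deviation so the result is a
    # standardized difference rather than a raw millisecond gap.
    pooled_sd = stdev(comp + incomp)
    return (mean(incomp) - mean(comp)) / pooled_sd

# Hypothetical data: responses run ~120 ms slower on incompatible pairings.
compatible = [650, 700, 620, 680, 710, 640]
incompatible = [780, 820, 760, 800, 840, 770]
print(f"D = {iat_d_score(compatible, incompatible):.2f}")
```

The point of standardizing the difference is that a gap of a tenth of a second, too small for a test-taker to notice, still shows up reliably across many trials.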
One reason we are so often unaware of our implicit biases is that we begin to form these mental associations even before we can express a thought. Brain-imaging studies have found that six-month-old babies can identify individual monkey faces as readily as individual human faces. Just a dozen weeks later, nine-month-old babies retain the ability to identify human faces but begin to lump all the monkey faces together generically as just “monkey,” losing the ability to spot individual features. Shortly after, babies begin to group human faces by race and ethnicity. Our adult brains echo these early learning patterns: People who live in racially homogeneous environments may struggle to distinguish faces of a different race from one another.
As it became clear how deeply ingrained these biases are—and how they might be unfathomable even to ourselves—researchers began to design new types of strategies to mitigate bias and its impact in society. By 2017, companies in the United States were spending $8 billion annually on diversity training efforts, including those aimed at reducing unconscious stereotyping, according to management consulting firm McKinsey & Company. These trainings range from online educational videos to workshops lasting a few hours or days in which participants engage in activities such as word-association tests that help identify their internalized biases.
Recent data suggest that these efforts have been failing too. In 2019 researchers evaluated the effectiveness of 18 methods that aimed to reduce implicit bias, particularly pro-white and anti-Black bias. Only half the methods proved even temporarily effective, and they shared a common theme: They worked by giving study participants experiences that contradicted stereotypes. Reading a story with an evil white man and a dashing young Black hero, for example, reduced people’s association of Black men with criminality. Most of these strategies had fleeting effects that lasted only hours. The most effective ones reduced bias for only a few days at best.
Even when training reduced bias, it did little to reduce discriminatory outcomes. Beginning in early 2018, the New York City Police Department began implicit bias training for its 36,000 personnel to reduce racial inequities in policing. When researchers evaluated the program in 2020, they found that most officers were aware of the problems created by implicit bias and were keen to address these harms, but their behaviors contradicted those intentions. Data on arrests, stops, and frisks showed that officers who had completed the training were still more likely to take these actions against Black and Hispanic people. In fact, the training program had hardly any effect on the numbers.
This and similar studies have “thrown some cold water on just targeting implicit bias as a focus of intervention,” says Calvin Lai, a social psychologist at Washington University in St. Louis. Even if you are successful in changing implicit bias or making people more aware of it, “you can’t easily assume that people will be less discriminatory.”
But researchers are finding reason for hope. Although the dozens of interventions tested so far have demonstrated limited long-term effects, some still show that people can be made more aware of implicit bias and can be moved to act more equitably, at least temporarily. In 2016, Lai and his colleagues tested eight ways of reducing unconscious bias in studies with college students. One of the interventions they tested involved participants reading a vividly portrayed scenario in which a white person assaulted them and a Black person came to their rescue. The story reinforced the connection between heroism and Black identity.
Other interventions were designed to heighten similar connections. For instance, one offered examples of famous Black individuals, such as Oprah Winfrey, and contrasted them with examples of infamous white people, including Adolf Hitler. Participants’ biases were gauged using the IAT both before and after these interventions. While the experiments tamped down bias temporarily, none of them made a difference just a few days later. “People go into the lab and do an intervention and there’s that immediate effect,” Pauker says.
From such small but significant successes, an insight began to emerge: Perhaps implicit bias is so stable because we inhabit an environment that sends us the same messages again and again. Instead of trying to chip away at implicit bias merely by changing our minds, perhaps success depended on changing our environment.
The implicit associations we form—whether about classmates who wear the same color shirt or about people who look like us—are a product of our mental filing cabinets. But much of what’s in those filing cabinets is drawn from our culture and environment. Revise the cultural and social inputs, researchers like Pauker theorize, and you have a much greater chance of influencing implicit bias than you do by sending someone to a one-off class or training program.
Babies who start to blur monkey faces together do so because they learn, early on, that distinguishing human faces is more critical than telling other animals apart. Similarly, adults categorize individuals by race, gender, or disability status because these details serve as markers of something we’ve deemed important as a society. “We use certain categories because our environment says those are the ones that we should be paying attention to,” Pauker says.
Just as we are oblivious to many of the biases in our heads, we typically don’t notice the environmental cues that seed those biases. In a 2009 study, Pauker and her colleagues examined the cultural patterns depicted in 11 highly popular TV shows, including Grey’s Anatomy, Scrubs, and CSI: Miami. The researchers tracked nonverbal interactions among characters on these shows and found that even when white and Black characters were equal in status and jobs and spoke for about the same amount of time, their nonverbal interactions differed. For instance, on-screen characters were less likely to smile at Black characters, and the latter were more often portrayed as stern or unfriendly.
In a series of tests, Pauker and her colleagues found that regular viewers of such shows tended to show stronger anti-Black implicit bias on the IAT. But when the researchers asked viewers multiple-choice questions about bias in the video clips they saw, viewers’ responses about whether they’d witnessed pro-Black or pro-white bias were no better than random. They were being influenced by the bias embedded in the show, “but they were not able to explicitly detect it,” Pauker says.
Perhaps the most definitive proof that the outside world shapes our biases emerged from a recent study of attitudes toward homosexuality and race over decades. In 2019 Harvard University experimental psychologist Tessa Charlesworth and her colleagues analyzed the results of 4.4 million IATs taken by people between 2007 and 2016. The researchers found that anti-gay implicit bias had dropped about 33 percent over that period, while negative racial attitudes toward people of color had declined by about 17 percent.
The data were the first to definitively show that implicit attitudes can change in response to a shifting zeitgeist. The changes in attitudes weren’t due to any class or training program. Rather, they reflected societal changes, including marriage equality laws and protections against racial discrimination. Reducing explicit discrimination altered the implicit attitudes instilled by cultures and communities—and thus helped people rearrange their mental associations and biases.
Until societal shifts occur, however, researchers are finding alternate ways to reduce the harms caused by implicit bias. People’s beliefs may not matter as much if they can be persuaded not to act on them. According to the new way of thinking, managers wouldn’t just enter training to reduce their bias. Instead, they could be trained to remove implicit bias from hiring decisions by setting clear criteria before they begin the hiring process.
Faced with a stack of resumes that reveal people’s names, ethnicities, or genders, an employer’s brain automatically starts slotting candidates based on preconceived notions of who is more professional or worthy of the job. Then bias supersedes logic.
When we implicitly favor someone, we are more likely to regard their strengths as important. Consider, for example, a hiring manager who perceives men as more suited to a role than women. Meeting a male candidate with a low GPA but considerable work experience may lead the manager to think that real-world experience is what really matters. But if the man has a higher GPA and less experience, the manager might instead reason that the latter isn’t important because experience can be gained on the job.
To avoid this all-too-common scenario, employers could define specific criteria necessary for a role, then create a detailed list of questions needed to evaluate those criteria and use these to create a structured interview. Deciding in advance whether education or work experience matters more can reduce this problem and lead to more equitable decisions. “You essentially sever the link between the bias and the behavior,” explains Benedek Kurdi, a psychologist at the University of Illinois Urbana–Champaign. “What you’re saying is the bias can remain, but you deprive it of the opportunity to influence decision making.”
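In code terms, the idea resembles freezing a scoring rubric before any application is opened. The sketch below is a hypothetical illustration of that pre-commitment; the criteria, weights, and candidate ratings are invented, not drawn from any real hiring rubric.

```python
# Criteria and weights are decided in advance, then frozen, so a reviewer
# cannot quietly reweight "what really matters" after meeting a candidate —
# the rationalization described in the GPA-versus-experience example above.
CRITERIA = {
    "work_experience": 0.5,
    "education": 0.3,
    "structured_interview": 0.2,
}

def score_candidate(ratings: dict[str, float]) -> float:
    """Apply the pre-committed weights to per-criterion ratings (0-10).

    Every candidate is scored on the same criteria, with the same weights,
    severing the link between an evaluator's bias and the final decision.
    """
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Two hypothetical candidates with opposite strengths get comparable,
# rule-based scores instead of post-hoc justifications.
candidate_a = {"work_experience": 8, "education": 5, "structured_interview": 7}
candidate_b = {"work_experience": 5, "education": 9, "structured_interview": 7}
print(score_candidate(candidate_a), score_candidate(candidate_b))
```

The design choice matters more than the arithmetic: the weights could be anything, as long as they are fixed before anyone sees a name, a face, or a resume.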
In the long run, reducing the biases and injustices built into our environment is the only surefire path toward taming the harmful implicit biases in our heads. If we see a world with greater equity, our internal attitudes seem to adjust to interpret that as normal. There’s no magical way to make the whole world fair and equitable all at once. But it may be possible to help people envision a better world from the start so that their brains form fewer flawed associations in the first place.
To Pauker, achieving that goal means teaching children to be flexible in their thinking from an early age. Children gravitate toward same-race interactions by about the age of 10. In one study, Pauker and her colleagues found that offering children stories that nudged them to think about racial bias as flexible made them more likely to explore mixed-race friendships. In another study, Pauker and her team found that children who thought about prejudice as fixed had more uncomfortable interactions with friends of other races and eventually avoided them. But those who thought about prejudice as malleable—believing they could change their minds about people of other races—were less likely to pull away.
The key, Pauker suggests, is not to dismantle our mental categories one by one but to encourage overall mental flexibility. Her approach, which encourages children to treat social categories as fluid constructs, appears to be more effective. The data are preliminary, but they point to a powerful route to change: simply being open to updating the traits we associate with different groups of people.
Thinking of implicit bias as malleable allows us to constantly reframe our judgments about people we meet—evaluating each individual for who they are, rather than reducing them to a few preconceived traits we associate with their race, gender, or other social category. Rather than fighting our wariness toward out-groups, reconsidering our mental classifications in this way allows us to embrace the complexity of human nature and experience, making more of the world feel like our in-group.
Blurring the implicit lines in our minds might be the first step to reducing disparities in the world we make.
This story is part of a series of OpenMind essays, podcasts, and videos supported by a generous grant from the Pulitzer Center’s Truth Decay initiative.