Also known as Hidden Bias or Unconscious Bias, Implicit Bias arose conceptually as a way to explain why discrimination persists, even though polling and other research clearly show that people oppose it. Some conjectured that people sought to hide their bias from pollsters – and simply lied about their views for fear of appearing prejudiced.
In 1995, Doctors Anthony Greenwald and Mahzarin R. Banaji posited that it was possible that our social behavior was not completely under our conscious control. In Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes, Greenwald and Banaji argued that much of our social behavior is driven by learned stereotypes that operate automatically – and therefore unconsciously – when we interact with other people. Three years later, Greenwald et al. developed the Implicit Association Test (IAT), which has become the standard-bearer for measuring implicit bias (you can take the test yourself here).
In order to understand how the IAT works, it’s important to back up and take a look at how our minds store, process and think through information. Our minds work through what are called “schemas”. As UCLA law professor Jerry Kang describes it, “Schemas are simply templates of knowledge that help us organize specific examples into broad categories. A stool, sofa and office chair are all understood to be ‘chairs.’ Once our brain maps some item into that category, we know what to do with it—in this case, namely, sit on it. Schemas exist not only for objects, but also for people. Automatically, we categorize individuals by age, gender, race and role. Once an individual is mapped into that category, specific meanings associated with that category are immediately activated and influence our interaction with that individual.”
These schemas we use to categorize people are called stereotypes. Stereotypes have a bad reputation in everyday life, but in social science circles, a stereotype is simply the way our brains naturally sort the people we meet into recognizable groups. Stereotyping is different from its close cousin prejudice, which is the (generally negative) attitude or reaction towards people because they’re members of a specific group. As Jerry Kang and Mahzarin Banaji discuss in their article Fair Measures, “mechanisms of bias [are] produced by the current, ordinary workings of human brains—the mental states they create, the schemas they hold, and the behaviors they produce. Obviously, both history and societal factors play a crucial role in providing the content of those schemas, which are programmed through culture, media, and the material context.” The schema, in other words, is where our implicit bias lives. Implicit Bias tests attempt to dig into our stereotypes and find out how biased they are and how we are governed by them.
The IAT uses reaction time measurement to look at subconscious bias. To take a simple example, imagine that you are asked to associate a list of positive words (pretty, sweet, calm) with a list of flower names. Next, you are asked to associate a list of negative words (ugly, scary, freaky) with a list of insect names. So far so easy, right? Most of us like flowers and aren’t crazy about bugs.
But what if you reverse it? You are in front of a computer screen; the left half of the screen contains a picture of a spiny, poisonous caterpillar and the word “calm,” while the right half shows a picture of a tulip and the word “freaky.” When a positive word or an insect name comes up, you press the left arrow. When a negative word or a flower name comes up, you press the right arrow.
The second task turns out to be harder — we don’t generally associate insects with positive words. This difficulty leads us to do worse (react more slowly) on a test that pairs insects with “pretty,” “sweet,” and “calm” than on one that pairs insects with “ugly,” “scary,” and “freaky.” By measuring reaction times in tests like these, Greenwald argued, scientists can measure the strength of your association of positive words with flowers and negative words with insects. We call the positive association a preference and the negative association a bias.
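To make the logic concrete, here is a minimal sketch of how a reaction-time difference like this could be summarized as a single score. This is a simplified illustration with made-up data, not the IAT’s actual published scoring algorithm (the real procedure involves additional steps such as error-trial handling and per-block standardization); the function name and the millisecond values are hypothetical.

```python
import statistics

def iat_d_score(congruent_ms, incongruent_ms):
    """Simplified IAT-style effect size: how much slower responses are
    on the incongruent pairing (e.g. insect + "pretty") than on the
    congruent pairing (e.g. insect + "ugly"), scaled by the overall
    spread of reaction times so scores are comparable across people."""
    mean_diff = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return mean_diff / pooled_sd

# Hypothetical reaction times in milliseconds:
congruent = [620, 650, 600, 640, 610]    # flower+positive / insect+negative block
incongruent = [780, 820, 760, 800, 790]  # flower+negative / insect+positive block

d = iat_d_score(congruent, incongruent)
print(round(d, 2))  # a positive score means slower on the incongruent pairing
```

The larger and more consistent the slowdown on the incongruent block, the larger the score — which is the basic intuition behind treating reaction-time gaps as a measure of association strength.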
Although this seems innocuous enough, it gets less so when “flowers” and “insects” are swapped out for what’s called in-group (the group you belong to) and out-group (groups you aren’t a member of) perceptions. When similar tests are administered to people with regard to race (e.g. measuring Japanese Americans’ associations about Koreans), they frequently demonstrate bias. It turns out that it is generally harder for people to associate out-group images and names with positive words.
Real World Effects
What scientists have also discovered over the last decade is that the IAT is a very good predictor of people’s behavior. This is why implicit bias matters. While measuring hidden opinions about various groups might seem inconsequential on the surface, it becomes something else entirely when we see the impact of bias on real-world behaviors. Study after study in a wide range of fields has shown the potential real-world impact of implicit bias on people’s quality of life. Studies show, for example, that doctors are more likely to prescribe life-saving care to whites, that managers are more likely to hire and promote members of their own in-group and that referees in basketball might be more likely to subtly favor players with whom they share a racial identity.
One reason why investigating Implicit Bias is so essential is the effect it has on our country’s discussion of discrimination. We are used to thinking of discrimination as individual bigoted people acting overtly to cause some harm against someone because of their race, gender or sexuality. While there are still some cases of this happening, this mode of thinking about discrimination is obsolete, and it actually hampers our journey towards equality. As long as discrimination is framed as a moral flaw in an individual, discussing bias and discrimination is impossible, because hanging over the conversation is the idea that someone must be a hate-filled bigot. Implicit Bias, on the other hand, offers the idea that discrimination and bias are social, rather than individual, issues – and that we can thus all participate in promoting equality.
No advance in social science is without some controversy – and some researchers have challenged both the idea of implicit bias and the tools used to measure it. For a more in-depth discussion of the challenge, click here. It is important to recognize, though, that the overwhelming evidence supports the salience of implicit bias and the utility of the IAT. Our goal here at the American Values Institute is not to prove the existence of implicit bias, but rather to investigate implicit bias to see how it affects our society. As a consortium of researchers from universities across the country and social justice advocates from a wide range of groups and perspectives, we have come together to devise new ways to counter implicit bias. We seek to prevent implicit bias from undermining our national ideals, both during elections and in the creation of public policy.