- New research from Ohio State claims we cannot separate how someone looks and sounds.
- Volunteers were asked to look at photos and listen to audio, and were told to ignore their face or voice.
- "They were unable to entirely eliminate the irrelevant information," said associate professor Kathryn Campbell-Kibler.
Postmates is a way of life in Los Angeles. So when a young Black driver recently crossed paths with a woman outside her building while delivering food to another apartment, you might initially be shocked at her response. While the woman claims her reaction is not racist, her use of the term “boy” says it all. Not only does she refuse him entry; after he calls the apartment and speaks to the man on the other end of the line, she even denies that the man lives in the building.
Would she have reacted similarly if the driver were white? While no definitive answer can be given, a new study from Ohio State University suggests that not only is his race an issue, the woman likely could not have ignored it even if she wanted to.
The distance between implicit and explicit bias has been studied for years. In this research, published in the Journal of Sociolinguistics, Kathryn Campbell-Kibler, an associate professor in the Department of Linguistics at OSU, asked 1,034 volunteers to look at photos and listen to audio of people speaking to determine whether they immediately judged someone by their looks or accent.
Almost across the board, they did.
In some cases, volunteers were told to evaluate how “good-looking” the people in the photos were; in others, they were asked to judge their accents. One cohort was not given guidance; they looked at a photo and listened to a voice. Others were told to ignore the face while listening, and vice-versa. Some were even told that the voice was not from the same person they were looking at.
It didn’t matter. In most cases, volunteers expressed critical judgment of either the face or the voice. As Campbell-Kibler says,
“Even though we told them to ignore the voice, they couldn’t do it completely. Some of the information from the voice seeped into their evaluation of the face.”
Detaching face from voice is a difficult endeavor. The first time I heard Welsh actor Matthew Rhys’ true accent was while watching “The Wine Show,” which he filmed shortly after wrapping up work on “The Americans.” It took me a few minutes to rationalize what I was seeing. Now I can’t get his actual speaking voice out of my head while watching the drunken private investigator transform into the lawyer we knew Perry Mason would become.
Rhys is paid to speak English with an American accent. The stakes are low for me as a viewer. Out in the real world, where racism is as prevalent as ever, the situation is different. Implicit bias affects everyone, which means racism and xenophobia are conditions we have to work at correcting in ourselves. It won’t come naturally. Campbell-Kibler continues,
“We found that people could exercise some control over what information to favor, the voice or the face, depending on what we told them to do. But in most cases, they were unable to entirely eliminate the irrelevant information.”
She notes that even though most participants were white, they were careful not to racially stereotype. Volunteers told to ignore faces while listening to accents performed best for this reason, though some admitted they had to make a conscious effort to do so.
Volunteers had no issue judging the people in the photos as good-looking, believing looks to be subjective. Campbell-Kibler wants to follow up this research using videos instead of photographs to observe the impact of watching others on the screen.
The takeaway: we are influenced by all of the information available to us at all times. Our biases will make themselves apparent. Course-correcting is not natural, but thankfully, it is possible.