
Prejudiced AI? Machine Learning Can Pick Up Society’s Biases

The program picked up association biases nearly identical to those seen in human subjects.  
Circuit board silhouettes of people. Pixabay.

We think of computers as emotionless automatons and of artificial intelligence as stoic, zen-like programs in the mold of Mr. Spock: devoid of prejudice and unswayed by emotion. A team of researchers at Princeton University’s engineering school has shown otherwise. In a new study, they report that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results were published in the journal Science.


This may not be too surprising after Microsoft’s snafu in March of last year, when the chatbot Tay had to be taken off Twitter after certain users taught it to spout racist remarks. This isn’t to say that AI is inherently flawed; it simply learns everything from us and, as our echo, picks up the prejudices we’ve grown deaf to. It does mean such programs will have to be designed carefully to keep biases from slipping past.

Arvind Narayanan, an assistant professor of computer science at Princeton and a member of its Center for Information Technology Policy (CITP), co-authored the study. Its lead author, Aylin Caliskan, is a postdoctoral research associate at Princeton. Both worked with co-author Joanna Bryson of the University of Bath.

The chatbot Tay had to be taken off Twitter for “talking like a Nazi.” Getty Images.

The researchers examined a program given access to large amounts of language from the internet, and found that the patterns of wording and usage it absorbed could pass inherent cultural biases along to it. “Questions about fairness and bias in machine learning are tremendously important for our society,” Narayanan said. “We have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

To scan for biases, Caliskan and Bryson drew on the Implicit Association Test, developed through a series of social psychology studies at the University of Washington in the late 1990s. The test works like this: a human subject is shown pairs of words on a computer screen and must respond to them as quickly as possible, with answers measured in milliseconds. People respond faster when pairing concepts they find similar and slower when pairing concepts they find dissimilar.

Participants would be given flower names such as “daisy” or “rose” and insect names such as “moth” or “ant.” These had to be matched with pleasant words such as “love” or “caress,” or unpleasant ones such as “ugly” or “filth.” Typically, flowers were paired more readily with the pleasant words and insects with the unpleasant ones.

AI is more of a reflection of us than first thought. Pixabay.

For this experiment, the researchers turned to GloVe, an open-source program developed at Stanford whose name stands for Global Vectors for Word Representation, and built a word-embedding analogue of the Implicit Association Test on top of it. GloVe is very much like the kind of program that sits at the heart of machine learning, the researchers say: it represents words statistically by how often they co-occur within a window of roughly 10 words of text. Words that regularly appear near one another acquire a stronger association, while those that rarely do have a weaker one.
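To make the idea concrete, here is a minimal sketch of how closeness between word vectors can be read as association strength. The tiny three-dimensional vectors are invented for illustration; real GloVe embeddings are learned from co-occurrence counts and have hundreds of dimensions.

```python
# Minimal sketch: association strength as cosine similarity between word vectors.
# The 3-dimensional vectors below are made up for illustration; real GloVe
# embeddings have 50-300 dimensions learned from billions of words.
import numpy as np

def cosine_similarity(a, b):
    """Higher values mean the two words appeared in more similar contexts."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings: "flower" and "love" point in similar directions,
# "insect" points elsewhere.
vectors = {
    "flower": np.array([0.9, 0.8, 0.1]),
    "insect": np.array([0.1, 0.2, 0.9]),
    "love":   np.array([0.8, 0.9, 0.2]),
}

print(cosine_similarity(vectors["flower"], vectors["love"]))  # relatively high
print(cosine_similarity(vectors["insect"], vectors["love"]))  # relatively low
```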

In a previous effort, researchers at Stanford had exposed GloVe to some 840 billion words drawn from the internet. Narayanan and colleagues then examined sets of words and their associations, looking at terms such as “scientist,” “programmer,” and “engineer” alongside “teacher,” “nurse,” and “librarian,” and recording the gender each was associated with.

Innocuous associations, such as those between insects and flowers, showed up as expected. But more worrisome connections surrounding race and gender were also discovered. The algorithm picked up association biases nearly identical to those seen in human subjects in previous studies.

For instance, male names corresponded more strongly with career-related words such as “salary” and “professional,” while female names were more closely tied to family-related terms like “wedding” and “parents.” When the researchers turned to race, they found that African-American names were associated with far more negative attributes than Caucasian ones.
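The researchers’ test boils down to a differential association score: how much more similar a set of target words (say, male versus female names) is to one attribute set (career words) than to another (family words). The sketch below is a rough illustration of that kind of calculation, using random placeholder vectors rather than the study’s actual word lists or code.

```python
# Rough sketch of a differential association score over word embeddings,
# in the spirit of the study's IAT analogue. Vectors and word sets here are
# placeholders, not the paper's stimuli.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attribute_vecs):
    """Mean similarity of one word to a whole set of attribute words."""
    return np.mean([cosine(word_vec, a) for a in attribute_vecs])

def bias_score(target_vecs, attr_a_vecs, attr_b_vecs):
    """Positive: targets lean toward attribute set A; negative: toward set B."""
    return np.mean([association(t, attr_a_vecs) - association(t, attr_b_vecs)
                    for t in target_vecs])

# Placeholder vectors stand in for embeddings of, e.g., male names (targets),
# career words (set A), and family words (set B). Random vectors give a score
# near zero; biased embeddings would not.
rng = np.random.default_rng(0)
targets = [rng.standard_normal(50) for _ in range(5)]
attr_a = [rng.standard_normal(50) for _ in range(5)]
attr_b = [rng.standard_normal(50) for _ in range(5)]
print(bias_score(targets, attr_a, attr_b))
```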

AI will have to be programmed to embrace equality. Getty Images.

AI programs are now used more and more to help humans with tasks like language translation, image categorization, and text search. Last fall, Google Translate made headlines because its skill was coming very close to that of human translators. As AI gets more embedded in the human experience, so will these biases, if they aren’t addressed.

Consider a translation from Turkish to English. Turkish uses a single gender-neutral third-person pronoun, “o.” Yet feed “o bir doktor” and “o bir hemşire” into a translator and they come back as “he is a doctor” and “she is a nurse.” So what can be done to identify and clear such stereotypes from AI programs?
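One way to picture how that happens, offered purely as an illustration and not as Google Translate’s actual pipeline: if a system resolves the ambiguous pronoun by leaning on embedding similarity, the bias baked into the vectors makes the choice for it.

```python
# Illustration only: a toy "translator" that picks a pronoun for the
# gender-neutral Turkish "o" by embedding similarity to the occupation word.
# The 2-D vectors are made up so that "doctor" sits nearer "he" and "nurse"
# nearer "she" -- mimicking the bias found in real embeddings.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

emb = {
    "he":     np.array([1.0, 0.1]),
    "she":    np.array([0.1, 1.0]),
    "doctor": np.array([0.9, 0.3]),
    "nurse":  np.array([0.2, 0.95]),
}

def pick_pronoun(occupation):
    return max(["he", "she"], key=lambda p: cosine(emb[p], emb[occupation]))

print(pick_pronoun("doctor"))  # "he"  -- the bias, not the grammar, decides
print(pick_pronoun("nurse"))   # "she"
```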

Machine-learning systems will need explicit instruction to recognize and avoid cultural stereotypes. The researchers liken this to the way parents and teachers help children recognize unfair practices and instill in them a sense of equality.
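What might such explicit instruction look like? One possibility, sketched here as an assumption rather than anything proposed in the study, is a guardrail that audits a model’s embeddings against a declared list of terms that should stay gender-neutral and flags the ones that drift.

```python
# Hypothetical guardrail, not from the paper: flag occupation words whose
# embedding leans noticeably toward one gender pole.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def flag_biased_terms(emb, neutral_terms, pole_a="he", pole_b="she", tol=0.05):
    """Return (term, gap) pairs where similarity to one pole exceeds the other by more than tol."""
    flagged = []
    for term in neutral_terms:
        gap = cosine(emb[term], emb[pole_a]) - cosine(emb[term], emb[pole_b])
        if abs(gap) > tol:
            flagged.append((term, round(gap, 3)))
    return flagged

# Made-up vectors: "engineer" leans toward "he", "teacher" is balanced.
emb = {
    "he":       np.array([1.0, 0.1]),
    "she":      np.array([0.1, 1.0]),
    "engineer": np.array([0.8, 0.3]),
    "teacher":  np.array([0.5, 0.5]),
}
print(flag_biased_terms(emb, ["engineer", "teacher"]))  # flags "engineer" only
```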

Narayanan said:

The biases that we studied in the paper are easy to overlook when designers are creating systems. The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.


