The Imagination, Computation, and Expression Laboratory at MIT’s Computer Science and Artificial Intelligence Laboratory has released a new video game called Grayscale, which is designed to sensitize players to problems of sexism, sexual harassment, and sexual assault in the workplace. D. Fox Harrell, the lab’s director, and students in his course CMS.628 (Advanced Identity Representation) completed the initial version of the game more than a year ago, and the ICE Lab has continued to refine it ever since. Though the game predates the recent #MeToo movement, it addresses many of the themes that movement has brought to the fore. The game is built atop the ICE Lab’s Chimeria computational platform, which was designed to give computer systems a more subtle, flexible, and dynamic model of how humans categorize members of various groups. MIT News spoke to Harrell, a professor of digital media and artificial intelligence, about Grayscale (or, to give it its more formal name, Chimeria:Grayscale).
Q: How does the game work?
A: You’re playing the role of an employee of a corporation called Grayscale. It’s a kind of melancholy place: Everything is gray toned. The interface looks like a streamlined email interface. You’re a temporary human resources manager, and as you play, messages begin coming in. And the messages from other employees have embedded within them evidence of different types of sexism from the Fiske and Glick social-science model.
We chose this particular model of sexism because it addresses this notion of ambivalent sexism, which includes both hostile sexism — which is the very overt sexism that we know well and could include everything from heinous assaults to gender discrimination — and what they call “benevolent sexism.” It’s not benevolent in the sense that it’s anything good; it’s oppressive too. Fixing a woman’s computer for her under the assumption she cannot do it herself, these researchers would say, is “protective paternalism.” “Complimentary gender differentiation” involves statements like, “Oh, you must be so emotionally adept.”
Over the course of the week you have new emails coming in, new fires to put out. Some of them are more subtle. For instance, an employee complains that the office temperature is too cold. Research has shown that office temperature can be a site of inequity, because people perceive temperature differently, in part based on gender, or even on the clothing we typically associate with gender.
That’s a kind of gentle introduction to this. But some of them are more obvious, in different sorts of ways. Say a co-worker comments that wearing yoga pants in the office is (a) unprofessional and (b) distracting, and he sends that to the entire list. So do you tell everyone to look at the manual for the dress code? Or do you respond to this guy directly? Or do you tell everybody that it’s actually calling your co-worker’s attire “distracting” that’s the problem?
Other emails deal more directly with assault, like somebody who touched somebody inappropriately in an office space.
So you have to make choices about all of these different options. You might have four draft messages, as if you’d been deliberating about which one you’re going to send, and then you finally hit reply with one of your possible drafts. And on the back end, we have each of those connected with particular ways that sexism is exhibited.
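The mechanics described here suggest one simple way reply options could be annotated on the back end. The sketch below is purely illustrative, not the game’s actual implementation: the class names, reply texts, and tagging scheme are assumptions, loosely following the Glick and Fiske categories mentioned above.

```python
from dataclasses import dataclass
from enum import Enum, auto

class SexismType(Enum):
    """Categories loosely following the Glick-Fiske ambivalent-sexism model."""
    HOSTILE = auto()                               # overt hostility, discrimination, assault
    PROTECTIVE_PATERNALISM = auto()                # "benevolent": unasked-for protection or help
    COMPLIMENTARY_GENDER_DIFFERENTIATION = auto()  # "benevolent": stereotyped praise
    NONE = auto()                                  # the reply does not reinforce either form

@dataclass
class ReplyDraft:
    text: str
    tags: frozenset  # which forms of sexism the reply exhibits (or NONE if it pushes back)

# Four hypothetical drafts for the "yoga pants" email, each annotated on the back end.
drafts = [
    ReplyDraft("Everyone, please review the dress-code section of the manual.",
               frozenset({SexismType.NONE})),
    ReplyDraft("You're right, that outfit really is distracting.",
               frozenset({SexismType.HOSTILE})),
    ReplyDraft("Let's not trouble her with this; I'll handle it for her.",
               frozenset({SexismType.PROTECTIVE_PATERNALISM})),
    ReplyDraft("Calling a colleague's attire 'distracting' is the actual problem here.",
               frozenset({SexismType.NONE})),
]

def tally(chosen_replies):
    """Aggregate which forms of sexism the player's chosen replies reinforced."""
    counts = {t: 0 for t in SexismType}
    for draft in chosen_replies:
        for tag in draft.tags:
            counts[tag] += 1
    return counts
```

A structure like this would let the player’s accumulated choices feed into the end-of-week evaluation and the other characters’ responses described later in the interview.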
The thing that people find compelling about it is that there’s not always an easy answer for each of the questions. You might find tension between one answer and another. Should I send this to the entire list, or should I send it just to the person directly? Or you might think, I really hate the way this guy phrased this email, but at the same time, maybe there are standards within the manual.
Finally, you get your performance evaluation at the very end of the story. We didn’t want it to be straightforward, that if you’ve been nonsexist you get the job, and if you’ve been sexist you don’t. You end up with some kinds of tensions, because maybe you’ve been promoted, but you compromised your values. Maybe you’re kept on but not really seen as a team player, so you have to watch your step. You’re navigating those kinds of tensions between what is seen as the corporate culture, what would get you ahead, and your own personal thoughts about the sexism that’s displayed.
This also isn’t the only vector through which you get feedback. You’re also getting feedback based on what happens to the other characters as well.
Q: Whom do you envision playing this game?
A: A number of thematic indie games have come out recently. Firewatch, a game addressing issues like isolation and human connection, was pretty popular. And there are games about social issues, like Dys4ia, which deals with gender dysphoria.
There was also a lot of press recently about a game called Hair Nah. This was a game related to the fact that for a lot of African-American women, other people like to touch their hair in a way that’s as irritating as it is othering. Such games act like editorials about particular topics. They are not novels, but more like opinion pieces about an issue.
People who like this type of indie game, I think, [would like Grayscale].
We intend for it to be a compelling narrative. That means understanding the back stories of the co-workers, getting to know their personalities. So there could be a bit of humor, a bit of pathos.
Q: How does the Chimeria platform work?
A: At the core is the Chimeria engine, which models social-category membership with more nuance than a lot of other systems, building in particular on cognitive-science models of how humans categorize. We enable people to be members of multiple categories, or to have gradient degrees of membership in categories, and to have those memberships change over time. It’s a patent-pending technology I’m in the process of spinning out now through my company, Blues Identity Systems.
Most computational systems that categorize users — whether that’s your social-media profile or e-commerce account or video-game character — model category membership in almost a taxonomic way: If you have a certain number of features that are defined to be the features of that category, then you’re going to be a member of that category.
In cognitive science, researchers like George Lakoff and Eleanor Rosch have this idea that that’s actually not the way the human brain categorizes. Eleanor Rosch’s famous work argues that we categorize based on prototypes. When people categorize, say, a bird, it’s not because we’re going down a list of features: “Does it have feathers?” “Check.” “Does it have a beak?” It’s more that we have a typical bird in our mind, and we look at how the thing relates to that prototype. If you say, “Think of a bird,” the idea is that people wouldn’t think of a penguin or an ostrich. They’d think of something that is prototypical to them; in the U.S., for example, it might be a robin. And then there’s gradient membership from there.
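To make the contrast concrete, here is a minimal sketch of checklist-style versus prototype-style categorization. The feature names, prototype values, and distance-based similarity measure are illustrative assumptions, not Chimeria’s actual model.

```python
import math

# Taxonomic (checklist) categorization: every required feature present, or not a member at all.
BIRD_CHECKLIST = {"has_feathers", "has_beak", "lays_eggs"}

def taxonomic_is_bird(features):
    return BIRD_CHECKLIST <= features  # strictly yes or no

# Prototype-based categorization: graded similarity to a prototypical (robin-like) bird,
# with features assumed to be scaled into [0, 1].
BIRD_PROTOTYPE = {"small_body": 0.8, "flies": 1.0, "sings": 0.9}

def prototype_membership(features):
    dist = math.sqrt(sum((features.get(d, 0.0) - v) ** 2 for d, v in BIRD_PROTOTYPE.items()))
    return 1.0 - dist / math.sqrt(len(BIRD_PROTOTYPE))  # 1 = prototypical, 0 = maximally distant

robin = {"small_body": 0.9, "flies": 1.0, "sings": 1.0}
penguin = {"small_body": 0.1, "flies": 0.0, "sings": 0.1}

print(taxonomic_is_bird({"has_feathers", "has_beak", "lays_eggs"}))  # True for robin and penguin alike
print(prototype_membership(robin))    # close to 1: a highly prototypical bird
print(prototype_membership(penguin))  # much lower: still a bird, but a marginal example
```

Gradient membership of this kind is what lets a system treat someone as partly a member of several categories at once, and lets those degrees drift over time as behavior changes.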
So what I thought was, what if we could take out the taxonomic model currently in a lot of systems and replace it with this more nuanced model? What new kinds of possibilities emerge from there?
One of the first papers we wrote about Chimeria involved using it for authoring conversations in games. A lot of times now, it’s a branching narrative: You have four choices, say, and four more for each of those, and so on. That’s exponential growth in terms of choices.
Instead, we can look at your category. Have you been playing as a physically oriented character, like a warrior? Have you been playing aggressively? And so on. And then based upon your category membership — and how it’s been changing — we can customize conversation.
So instead of branching plot points, you might have wild cards within the text that change based upon the current category that you’re in — or the trajectory. It actually breaks bottlenecks in authoring, but it also opens up new types of expressive possibilities.
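Here is a hedged sketch of what such a wild card might look like, assuming gradient category memberships like those sketched above. The category names, threshold, and dialogue lines are invented for illustration and are not drawn from Chimeria or any ICE Lab game.

```python
from string import Template

# One authored line with a wild-card slot, instead of a separate line for every branch.
GREETING = Template("The guard eyes you and mutters, '$reaction'")

REACTIONS = {
    "warrior": "Keep that blade sheathed in here.",
    "diplomat": "Ah, the one who talked the baron down. Welcome.",
    "trickster": "I'll be counting the silverware while you're in town.",
}

def dominant_category(memberships):
    """The category the player currently belongs to most strongly."""
    return max(memberships, key=memberships.get)

def render_greeting(memberships, warrior_trend):
    """Fill the wild card from the player's current gradient memberships; the
    trajectory can shade the line further, e.g. a player drifting toward
    'warrior' gets a warier greeting."""
    category = dominant_category(memberships)
    line = REACTIONS[category]
    if category == "warrior" and warrior_trend > 0.5:
        line = "You've been picking more fights lately. I'm watching you."
    return GREETING.substitute(reaction=line)

# A player who has mostly been fighting, and has been fighting more and more:
print(render_greeting({"warrior": 0.7, "diplomat": 0.4, "trickster": 0.2}, warrior_trend=0.6))
```

Because a single authored line covers every category, and every trajectory the author cares to distinguish, the number of lines grows with the number of wild cards rather than exponentially with the depth of a branching tree.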
—
Reprinted with permission of MIT News