Why Failing the Rationality Quiz Shouldn’t Alarm You
If you took the three-question quiz I posted last week, chances are you answered some items incorrectly. Like some of my smart, accomplished friends and family members who took the challenge, you might even have scored a big fat zero. And like them, you might be troubled by your performance.
Buck up: you are far from alone. Like most people, I got them wrong, too, when I encountered similar questions years ago in a course in graduate school. The questions and answers reveal some interesting patterns in the way we think, but I don’t believe that getting them wrong is very persuasive evidence of human irrationality.
Mind Matters blogger David Berreby maintains that “irrationality is a fact, not a fad,” critiquing my claim that some recent attention to the intellectual foibles of humanity is overblown. He points to the inefficiency of markets and troubling studies showing unconscious judicial bias as evidence that irrationality can “cause a great deal of harm” in society. I don’t doubt these findings. There is indeed ample data that institutions and individuals can wreak havoc when they deviate from certain principles of logic and objectivity. There is also a good deal of dispiriting evidence about the rationality of voters that spurs questions about the effectiveness — and even the legitimacy — of democratic government. (I’ll have more to say on this matter as the fall election approaches.)
Yet much of the work of cognitive scientists — even fascinating Nobel Prize-winning research by Daniel Kahneman — leaves me edified but not alarmed. The human capacity for reason may be fragile and partial, but it is not belied by studies in which large percentages of subjects answer a few tricky questions incorrectly.
Rethinking Genevieve
I’ll focus here on my question about Genevieve, more popularly known as the Linda problem. (If you haven’t taken the quiz yet, you might want to test yourself before reading on.) Here, again, is the question:
At a dinner party this weekend, a friend introduces you to a woman named Genevieve. He tells you that Genevieve recently graduated from Bryn Mawr College with a B.A. in Philosophy, where she was active in the Occupy movement and edited a literary magazine. You’re interested in talking to Genevieve about Hegel, the subject of her senior thesis, but your friend jumps in and asks you to rank the following statements about Genevieve in order of their probability:
(1) Genevieve is a feminist.
(2) Genevieve is looking for a job as a sanitation worker.
(3) Genevieve is a feminist who is looking for a job as a sanitation worker.
Given what you know about Genevieve, rank the statements from most likely to least likely.
This question is meant to test how well you evaluate probabilities. If you botch the task by ranking (3) before (2), you commit what Kahneman and Tversky call the “conjunction fallacy”: seeing the concurrent existence of two states of affairs as more likely than the existence of only one of them. As Kahneman points out in Thinking, Fast and Slow, we are not always inclined to make this mistake. Try this question, for example (from p. 160):
Which alternative is more probable?
(a) Jane is a teacher.
(b) Jane is a teacher who walks to work.
Everyone will immediately see that (a) is more probable than (b), since Jane could commute to school by bike, car, subway or Segway. We answer this question correctly yet mishandle the logically identical Genevieve problem because — as several readers commented in my previous post — the latter primes us to develop a particular view of Genevieve. Not only is she a woman, but she is an Occupy Wall Street activist, a philosophy major and a graduate of an elite women’s college. Her fancy French name gilds the lily.
How often have you witnessed such a creature driving a sanitation truck?
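To make the underlying rule concrete, here is a minimal sketch of my own (not anything from Kahneman’s book), with rates that are purely illustrative. However strongly the bio suggests “feminist,” every feminist sanitation worker is also a sanitation worker, so the conjunction can never be the more probable statement.

```python
# A minimal illustration (mine, not Kahneman's) of the conjunction rule:
# feminist sanitation workers are a subset of sanitation workers, so the
# conjunction can never be the more probable of the two statements.
import random

random.seed(42)
N = 100_000

# Assumed rates for this simulation only, not taken from the quiz:
p_feminist = 0.95      # the bio makes "feminist" seem very likely
p_sanitation = 0.01    # "sanitation worker" seems unlikely

sanitation = 0
feminist_and_sanitation = 0
for _ in range(N):
    is_feminist = random.random() < p_feminist
    is_sanitation = random.random() < p_sanitation
    if is_sanitation:
        sanitation += 1
        if is_feminist:
            feminist_and_sanitation += 1

print(f"P(sanitation worker)              ~ {sanitation / N:.4f}")
print(f"P(feminist AND sanitation worker) ~ {feminist_and_sanitation / N:.4f}")
# The second count is built from a subset of the first, so its frequency
# (and hence its estimated probability) can never come out higher.
```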
Reader “Hypatia501” made this complaint most artfully on my post last week:
Why are you telling me all this information that is congruent with Linda being a feminist if you didn’t think it was relevant to the problem you then ask me to solve? Since the information is irrelevant to being a sanitation worker and it is a norm of conversation to treat what people say as relevant to what follows, presumably it bears on the probability that Linda is a feminist. In other words, the information that we are weighing is not only about Linda but about the fact that someone thinks it important to tell us these facts about Linda–is, in effect, inviting us to use these facts to make inferences about her.
The background information about Linda/Genevieve is designed to tempt you toward a response that is illogical. It’s bait, pure and simple. When we take the bait, we are giving in to our intuitive sense that sanitation workers are by and large male and less highly educated than Genevieve. Kahneman apparently thinks that the deceit at the heart of the question is justified because it shows how a story can dull our reasoning ability. In his words, System 2 (our deliberate, slow-thinking rational capacity) “is not impressively alert.” We jump to conclusions when the conclusion seems obvious: “The laziness of System 2 is an important fact of life,” he writes (p. 164).
But there is another interpretation of this purported error that Kahneman dismisses too quickly. Consider the way he reacts to students who dare to challenge his interpretation of the Linda problem:
Remarkably, the sinners seemed to have no shame. When I asked my large undergraduate class in some indignation, ‘Do you realize that you have violated an elementary logical rule?’ someone in the back row shouted, ‘So what?’ and a graduate student who made the same error explained herself by saying, ‘I thought you just asked for my opinion.’ (Thinking, Fast and Slow, p. 158)
Rather than explore his students’ points of view and open his experiment to critical inquiry, Kahneman dismisses their responses out of hand. A logic error is a logic error; “opinions” have no place in a discussion of probability.
But as Ralph Hertwig and Gert Gigerenzer argue in their 1999 article criticizing Tversky and Kahneman’s conclusions, this all depends on what “probability” means in the minds of the test-takers (my adaptations to the Genevieve example follow in brackets):
It is evident that most of the candidate meanings of ‘probability’ and ‘probable’ cannot be reduced to mathematical probability. For instance, if one interprets ‘probability’ and ‘probable’ in the [Genevieve] problem as ‘something which, judged by present evidence, is likely to happen,’ ‘plausible,’ or ‘a credible story,’ then one might easily judge [‘Genevieve being a feminist sanitation worker’] to be more probable than [‘Genevieve being a sanitation worker’] because the information about [Genevieve] was deliberately selected to provide no evidence for the hypothesis [that she is a sanitation worker alone]. Under these interpretations it is pointless to compare participants’ judgments with a norm from mathematical probability theory, because the inferred meanings have nothing to do with mathematical probability.
Exactly. If you ranked (3) before (2) on the quiz, you were seeking the most plausible, commonsensical answer, not trying to parse a logic game on the LSAT. Your rationality wasn’t dozing. System 2 was absorbing and responding to evidence provided by the questioner — evidence you had every reason to believe was relevant to the question.
The Selection Task and the Good Bet
Questions 2 and 3 on my quiz are meant to test deductive logic and instrumental rationality, respectively, but neither tells a complete story about how well an individual reasons in real life. Wason’s selection-task experiments show that performance improves markedly when the rule involves familiar real-world content rather than abstract letters and numbers. Since real-world social situations confront us far more often than abstract test items do, we should take our tendency to bungle the latter with a grain of salt. And since question 3 is a measure of risk tolerance whose answer depends on a number of contingencies, you can’t be condemned by the Rationality Police for passing up even an excellent bet. It may indeed be useful to be aware of an unusually strong loss aversion in your psyche so you can work on changing your behavior, but there is no evidence that individuals who gamble more freely necessarily live better lives.
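To see what separates a bet that is “excellent” on paper from a bet you are obliged to take, here is a minimal sketch with a hypothetical wager of my own devising (lose $100 on tails, win $150 on heads; the numbers are not from the quiz). The expected value is positive, yet a loss-averse weighting of the very same outcomes can reasonably come out negative.

```python
# A hedged illustration, not the quiz's actual question 3: a hypothetical
# coin-flip bet that loses $100 on tails and wins $150 on heads.
p_win, win, lose = 0.5, 150, -100

# Expected monetary value: positive, so the bet looks "excellent" on paper.
expected_value = p_win * win + (1 - p_win) * lose
print(f"Expected value: ${expected_value:.2f}")            # $25.00

# A loss-averse decision maker weighs losses more heavily than gains.
# Kahneman and Tversky's estimates of the loss-aversion coefficient fall
# roughly between 1.5 and 2.5; 2.0 is used here purely as an illustration.
loss_aversion = 2.0
subjective_value = p_win * win + (1 - p_win) * loss_aversion * lose
print(f"Loss-averse subjective value: ${subjective_value:.2f}")  # -$25.00
# Positive expected value, negative felt value: declining the bet is not
# obviously irrational, just risk-averse.
```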
So Why the Quiz?
Why did I quiz you using questions and answers I believe to be less meaningful than their originators thought them to be? For two reasons: (1) to draw your attention to the types of data that inform the new science of irrationality and, I hope, to open up some of these questions and conclusions to popular critical engagement; and (2) to conduct a non-scientific experiment of my own about individuals’ emotional investment in their sense of themselves as rational. My hypothesis is that while we love reading about humanity’s tendency toward the irrational, we take offense when light is thrown on our own individual incompetencies.
If you were upset about performing poorly on the quiz, your trauma was compounded when you read comments from other readers who reported how “obvious” the answers were and how easily they aced it. Here Kahneman has some useful advice for you: be wary of the “availability heuristic,” the tendency to generalize from an easily accessed but skewed data set.
The majority of commenters reported answering at least two of the questions correctly. This doesn’t mean that Big Think readers are necessarily more rational than the general population. The vast majority of the individuals who have read the post (at least 98.5%, if my math is right) did not comment at all, and most of that silent majority likely did just as poorly on the quiz. The comments represent only a single-digit percentage of quiz-takers. Most of your fellow readers probably performed exactly the way you did.
So relax, people. We might not all be perfect reasoners, but we’re in this together.
Follow Steven Mazie on Twitter: @stevenmazie
Photo credit: Shutterstock.com