
Is Individual Liberty Over-Rated?

One of the biggest misconceptions about post-rational behavioral research is that its effects on society are small. From the news you get the impression “behavioral economics” is all about changing the 401(k) plan from opt-in to opt-out, or informing people, via their bills, how their electricity use compares to the neighbors’. All of which is well and good (opt-out yields a much higher participation rate in the plans and the comparative-use trick gets power-hogs to dial back). This is contrary to the old “Rational Economic Man” model, but what’s the big whoop about some policy nips and tucks? To see just how myopic that view is, look no further than Cass Sunstein’s latest article in the current issue of the New York Review of Books. What prompts the 401(k) revision and other mild-mannered “choice architecture” policies, he notes, is good evidence that people are not always the best judges of their own interests. And if you concede that point, then you have to concede that one of the foundations of modern democracy—the notion that each of us is entitled to make her own choices and her own mistakes—appears to rest on … nothing.


Uh-oh. The presumption that you know how to look after yourself, the reason 21st birthdays are so special, is one of the most cherished privileges in modern society. After all, most societies restrict what children can do (and buy) because they lack the ability to make good judgments about what is in their interest. Adulthood is supposedly the period in which that handicap is gone. What’s the most common way for Americans to express outrage at violations of our precious adult autonomy? By complaining that we’re not children. That’s why “paternalism” has a bad name and supposedly no citizen wants to live in a “nanny state.” But if adults aren’t that much better than kids at certain kinds of assessment, it’s reasonable to start talking about paternalism without outrage—even “coercive paternalism,” in which the state makes damn sure you can’t make your own mistakes. This kind of tough-love nanny state is a perfectly logical consequence of modern behavioral research, argues the philosopher Sarah Conly, whose book Sunstein is reviewing in his essay. Her mild-mannered title: Against Autonomy: Justifying Coercive Paternalism.

I haven’t read the book yet, but Sunstein’s respectful review merits attention on its own, for two reasons. First, it’s a succinct account of how the end of “Rational Economic Man” assumptions necessarily opens the way to a profound re-think about how people live their lives, and think of their rights and obligations. Second, Sunstein’s interest in the topic is far from theoretical. He spent President Obama’s first term as head of the White House Office of Information and Regulatory Affairs, which reviews and modifies all proposed federal regulations before they go into effect. When he declares that behavioral research “is having a significant influence on public officials throughout the world,” he is not a writer hyping his point. He’s a practitioner, embodying it.

It’s hard to overstate the challenge that post-rational research poses for our current social contract. The notion that we are rational about ourselves—that whenever we wish we consciously reason our way to our choices—is, after all, the basis of modern civil rights. To be enlightened, Immanuel Kant explained, one must “use one’s understanding without guidance,” and this is impossible without freedom of speech and of thought. (Hence, Kant ridiculed people who lazily used the judgment of others as a guide.) “Error of opinion may be tolerated,” wrote Thomas Jefferson, “where reason is left free to combat it.” Then, too, if we can be rational about ourselves at will, it follows that each of us is both the best judge and the best guardian of his/her own well-being. After all, we have the most knowledge of the subject and the most motivation to reach the right answer. And the reason we apply to that information is just as good as anyone else’s.

This argument, so central to our modern notions of autonomy and equality, was brilliantly made in the middle of the 19th century by John Stuart Mill, in On Liberty. Given that I am the best judge of my own interests, Mill argued, there can be no legitimate reason to compel me to do something “for my own good.” Of course, Mill wrote, “this doctrine is meant to apply only to human beings in the maturity of their faculties,” not children or “barbarians” who can’t make good judgments: “Those who are still in a state to require being taken care of by others, must be protected against their own actions as well as against external injury.”

To Mill, all this was self-evident. Today, researchers in psychology and behavioral economics (and, I’d add, some other disciplines too) treat the claim as an empirical question. And, Sunstein writes, their evidence shows that Mill was simply wrong. People certainly can make good judgments about their own interests some of the time, but it appears likely that no one does this reliably all the time. In deciding how to conduct their own lives, Sunstein writes, people “make a lot of mistakes, and … those mistakes can prove extremely damaging.”

So that category of “those who must be protected against their own actions” includes pretty much everyone at some time or other. As many have said to children over the ages, too bad if you don’t like the nanny. You need one.

Before he became a shaper of government rules and regulations, Sunstein was best known as the creator, with Richard Thaler, of the principle of “libertarian paternalism”: The theory that authorities should, as the pair have written, “attempt to steer people’s choices in welfare-promoting directions without eliminating freedom of choice.” Yet, he acknowledges, the questions raised are open. His is not the only possible response to post-rational research.

As the philosopher Thomas Nagel has put it, the evidence shows that there is an unacknowledged influence on our behavior—an influence that rationalist models of the mind fail to describe. We’ve only started to address what that means for our ideas about self and society. At the least, we need to make sure that the future management of that unacknowledged influence is done transparently and democratically.

Or we could just drift along, imagining that behavioral research will inform only little tweaks to the workings of markets, courts, workplaces, schools and other important places. In which case the transition to a post-rationalist era could end badly. It could, for example, end in a world where big corporations pay lip service to “freedom of choice” even as they spend billions on tools to wield unacknowledged influence (which can’t be regulated because the official ideology of rational choice doesn’t register it). Or it could end in a heavy-handed nanny state in which “choice architecture” isn’t democratically debated but rather imposed by elite high-achievers.

Sunstein, though he admires Conly’s “careful, provocative and novel” argument, clearly does not want to go there. Despite predictable attacks on this article from the usual suspects, he is not easily turned into an anti-freedom cartoon. In fact, he identifies the problems with excessive paternalism clearly. First, there is the problem of being certain that “for your own good” is correct (as we have seen since 2008, someone may be quite right to want to avoid investing in a 401(k) plan that “experts” consider wise). Second, there is the problem of reflecting the genuine diversity of the human race, in which some people may genuinely be better off enjoying their meals than they would be living to 98.

Conly’s is, of course, a philosophy book, designed to clarify thinking, not a political manifesto. So, yes, her argument is not a realistic political threat to Big Tobacco. But philosophers who change public discourse are the harbingers of new ideas among law professors and judges and think tanks, and those eventually lead to policy change. (You could ask John Stuart Mill, if he were alive and felt like answering you of his own free will, about the eventual impact of theory on politics and society.) In 2013, “coercive paternalism” may be politically unrealistic. But the news here is that in 2013, after 150 years or so of rarely questioned respect for the principle of individual autonomy among non-religious political thinkers, the terms of the debate are moving.

Illustration: Influenced by the Pied Piper, the children of Hamelin freely choose an action that is not in their best long-term interests. Via Wikimedia.

Follow me on Twitter: @davidberreby

