For many years, Joscha Bach could not understand why humans flock so strongly to religion and ideology. He grew up in communist East Germany, and watching the people around him buy into ideological narratives that were, to him, obviously untruthful made no sense. It was only when the wall came down that he came to understand that people everywhere buy into false narratives of one kind or another; as of 2015, 34% of Americans still rejected evolution completely. The drive to believe whatever instructions come from above, Bach realized, is not a cognitive error but an evolutionary feature, as powerful as it is problematic. The ability of large groups of people to follow one set of rules, to cooperate, is how Homo sapiens established agricultural societies, and it is ultimately how we outcompeted other, now long-gone hominin groups. We are a programmable species, says Bach, and we need to belong and conform to a larger entity in order to survive. As such, Bach sees the debate surrounding free will not as a question of determinism versus indeterminism, but of social conditioning. Perhaps free will is less about physics than about decision-making: are you really free to act in a way that is true to yourself, or are you bound by a social code of responsibility that runs thousands of years deep in your genetics? Joscha Bach’s latest book is Principles of Synthetic Intelligence.
Joscha Bach: Like consciousness, free will is often misunderstood because we know it by reference, but it’s difficult to know it by content, to say what we really mean by free will. A lot of people immediately feel that free will is related to whether the universe is deterministic or probabilistic. And while physics has some ideas about that, which change every now and then, it’s not part of our experience, and I don’t think it makes a difference whether the universe forces you to do things randomly or deterministically.
The important thing, it seems to me, is that with free will you are responsible for your actions, and responsibility is a social interface. For instance, if I am told that I will go to prison if I do X, and this changes my decision about whether or not to do X, then I’m obviously responsible for my decision, because it was an appeal to my responsibility in some sense. Likewise, if I do a certain thing that causes harm to other people and they don’t want that harm to happen, that influences my decision. A decision that emerges from this kind of discourse is what I would call a free-will decision.
“Will” is the representation that my nervous system, at any level of its functioning, has raised a motive to an intention. It has committed to a particular kind of goal that gets integrated into the story of myself, this protocol that I experience as myself in this world. That is what I experience as will, as a willed decision, and this decision is free inasmuch as it can be influenced by discourse.
So to me, free will is a social notion. It means that this interface of social interaction, of discourse, of thinking about things, this interface of knowledge, language, and conceptual thought, is relevant for that decision. If you have a decision in which it doesn’t play a role, for instance because you are addicted to something and cannot stop doing it even if you want to, then that decision, I would say, is not free.
I grew up in East Germany. It was communist East Germany, and it was a very weird, ideological country, a country that believed in stories about how the world works that I, as a nerd, thought were obviously not quite true. I had difficulty believing the official stories about how the world works. It was like some weird kind of religion. And then the wall came down, and it didn’t surprise me in the least. And then we entered a new dream, a new shared model of the world that was not quite true, and I realized that most people now fall for this new model. It was very interesting for me to see this, and if you look, for instance, at the U.S., the majority of U.S. Americans do not believe in the theory of evolution despite all the evidence.
The majority of people on this planet are religious, even though there doesn’t seem to be very good evidence for a multitude of creator gods and so on, in my view at least. And if a creator god existed, it would be very hard for me to understand why this creator god would really care about whether I worship it, or about all these other things that religions attribute to creator gods. So it’s very hard for me, in some sense, to intuitively understand why humans are religious and why humans are ideological.
But over the years I have come to think that this is not a bug, it’s a feature. Humans are a programmable species. Religions and ideologies are operating systems for societies. They have been so throughout most of our history, and the idea that we can build society on rational arguments is very, very recent and very novel. And it’s not entirely clear whether it really works.
But it’s clear that we cannot really build societies on conflicting ideologies that are at war with each other. In the past this has led to situations where the ideology solved the problem by killing the unbelievers, or the religions did the same thing, and we all agree this is not what we want to have. We want an open society, a pluralistic society, a nonviolent, tolerant society, but still one where people work together and cooperate well. And this ability to wake up into a shared dream, in which people believe things because their neighbors believe them, has been a very powerful feature, and it’s probably the reason why we were able to build large-scale societies.
We have to understand that when people cooperate, they are very often in what we call the Prisoner’s Dilemma: a situation in which, in order to achieve the greatest good, you have to give up something yourself, even though that is in some sense a bit irrational, because if everybody else is not doing it, you’re going to be worse off. And for these Prisoner’s Dilemmas we have various solutions. The easiest solution is to have a reputation system. You basically keep track of who did what when, and you make sure that only the good guys get cookies in the future.
And the problem is that these reputation systems do not scale. If you have too many people in your tribe or your family or your village, you just lose track of who did what when, and you cannot really synchronize it by talking about it. So beyond a couple hundred individuals the reputation system doesn’t work very well. It also doesn’t really work if nobody is looking: if nobody is doing the surveillance, how do you make sure that nobody is defecting and stealing stuff from the fridge of your tribe, right? So what do we do? We evolved the ability to be normative: the ability, or the need, to be good. And this need to be good, this need to follow internalized norms, this need to serve sacred principles, is probably a feature that is ingrained in our genetic makeup.
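The game-theoretic structure Bach is referring to can be made concrete with a minimal sketch. The payoff numbers, player names, and helper functions below are illustrative assumptions, not anything from the talk; they only show why defection dominates in a one-shot Prisoner’s Dilemma and why a reputation system of the kind he describes works only as long as everyone’s history can actually be remembered and observed.

```python
# Minimal sketch (illustrative, not from the talk):
# (1) in a one-shot Prisoner's Dilemma, defection is individually rational
#     even though mutual cooperation gives the best joint outcome;
# (2) a toy reputation system sustains cooperation only while the group is
#     small enough that everyone's history is tracked and observed.

# Payoff table: (my payoff, other's payoff) for (my move, other's move).
# "C" = cooperate, "D" = defect. Numbers are standard textbook values.
PAYOFFS = {
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

def best_response(other_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda my_move: PAYOFFS[(my_move, other_move)][0])

# Whatever the other player does, defecting pays more for me...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual cooperation beats mutual defection for both players.
assert PAYOFFS[("C", "C")] > PAYOFFS[("D", "D")]

# Toy reputation system: cooperate only with players not known to have defected.
# Its weakness is exactly the scaling problem above: it works only while every
# player's past behavior fits in memory and was actually witnessed.
known_defectors: set[str] = set()

def move_against(player: str) -> str:
    return "D" if player in known_defectors else "C"

def record(player: str, move: str) -> None:
    if move == "D":
        known_defectors.add(player)

record("alice", "D")
print(move_against("alice"), move_against("bob"))  # -> D C
```

Internalized norms, in this framing, replace the bookkeeping entirely: instead of conditioning cooperation on a remembered history, each individual simply pays the cost of cooperating because it feels like the good thing to do.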
And of course this alone would not be good enough, because goodness is like an arbitrary vector in value space. People also have a need to synchronize what’s good, so people will try to feel what’s good in their in-group. It works by empathy; empathy is the primary mode in which we transmit norms. If you dress up somebody as an authority, as a priest, a professor, a pop star, a politician, and this person says a certain thing with conviction, and people see that others believe it, they start believing it themselves. And it’s obviously very useful to do this: there’s almost never a situation where it’s useful to have an opinion that is different from the opinion of your boss.
So this is the ability that we got, and it means that people do the same things and follow the same rules regardless of the size of the group. This makes it possible to build agricultural societies with hundreds of thousands and then millions of individuals. It makes it possible for such an agricultural society to have people who specialize in different foods and different trades and different materials and different crafts and so on, and who perform the multitude of tasks and produce the tools that we need to make an agricultural society run and able to compete with the nomadic societies.
And I think the reason why Homo sapiens is the only hominin species left is that we outcompeted all the others. We were in the same competitive niche, and we were a species that was programmable, that was able to coordinate very large groups of individuals. That was very powerful. It just turns out that this mode of tribal organization is not sufficient for the world that we live in now.