Who's in the Video
Eric Weinstein is an American mathematician and economist. He earned his Ph.D. in mathematical physics from Harvard University in 1992, is a research fellow at the Mathematical Institute of Oxford University, and is a[…]

We know it’s a myth that “humans only use 10% of their brains,” but there might be a function of your mind that you’re neglecting to use: its sandboxes. Eric Weinstein borrows this term from computer science to explain the potential of experimental thinking. A sandbox in computing is a secured place where untrusted software can run without controlling the computer or accessing its vital resources. Security specialists, for example, use sandboxes to analyze how malware behaves. Once they see and understand how it works, they can then devise a strategy to defeat it, and strengthen their own system to prevent it from getting in again.
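To make the computing metaphor concrete, here is a minimal, illustrative Python sketch (not from the article; the snippet and function names are hypothetical) of the isolation a sandbox provides: the untrusted code runs in a separate process under a CPU cap and a wall-clock timeout, so it cannot take over the host program. Real security sandboxes rely on containers, virtual machines, or system-call filtering and go much further than this.

```python
# Minimal sandboxing sketch (Unix-only, because of the resource module).
# The untrusted snippet runs in its own interpreter process with a CPU-time
# cap and a wall-clock timeout, so a runaway or hostile snippet cannot hang
# or control the host program. Illustrative only, not a real security boundary.

import resource
import subprocess
import sys

UNTRUSTED_SNIPPET = "while True: pass"  # hypothetical untrusted code


def limit_resources():
    # Cap CPU time at 1 second inside the child process.
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))


def run_sandboxed(code: str) -> subprocess.CompletedProcess:
    # Execute the snippet with a fresh interpreter, isolated from our state,
    # and kill it if it exceeds a 2-second wall-clock budget.
    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=limit_resources,
        capture_output=True,
        timeout=2,
        text=True,
    )


if __name__ == "__main__":
    try:
        result = run_sandboxed(UNTRUSTED_SNIPPET)
        print("exit code:", result.returncode)
    except subprocess.TimeoutExpired:
        print("snippet exceeded its time budget and was terminated")
```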


Your mind has sandboxes for the same purpose: a safe space to metaphorically play with dangerous or seemingly irrational ideas that are inconsistent with your worldview. Without the ability to fully understand these problems, we cannot hope to solve them or react appropriately. Weinstein uses the example of a jihadi sandbox: how can we effectively fight Islamic terror if we don’t understand the forces behind it? If an idea appalls you, do not shut it out. Build a sandbox and investigate it thoroughly by running an emulation of a jihadi’s mind (or an atheist’s, a devout Christian’s, a racist’s, a dictator’s, or a righteous liberal’s): “Can we run that mind well enough to understand it, to empathize with it and to argue and spar with it to achieve some kind of better outcome where we are actually able to turn foes into dancing partners?” asks Weinstein.

A rational, one-track mind cannot solve humanity’s biggest issues, but the ability to process seemingly irrational ideas might allow us, as individuals and as a species, to reach new intellectual and behavioral heights.

ERIC WEINSTEIN: We have to embrace the inconsistency of our own minds not as a bug but as a feature: we are in essence brought here by the forces of selection. We are the products of systems of selective pressures, and what they seem to do is create the ability to run many, many different, and often contradictory, programs within the same mind. And the question is: why have we put such an extraordinary emphasis on intellectual consistency, so that we are constantly alerted to the hypocrisy of others but seemingly blind to it in ourselves?

Our mind is constructed with an architecture that allows us to run various sandboxes where we can experiment with the ideas of others without actually becoming the other. Can we run another mind in emulation? Perhaps not as well as its original owner, but can we run that mind well enough to understand it, to empathize with it, and to argue and spar with it to achieve some kind of better outcome, where we are actually able to turn foes into dancing partners as we come to show that we’ve actually understood perspectives different from our own?

The biggest objection to this way of thinking is that it’s somehow a kind of cheat, that hypocrisy is being summoned by another name. But I think this is actually incorrect. I think that we have these sandboxes, for example, so that we can more effectively fight a foe that we feel we must defeat. So, for example, recently I talked about the importance of being able to run a jihadi sandbox in our minds if we want to understand the forces that are behind Islamic terror and its effect on what I think are relatively fragile Western sensibilities about life and death. And so if we choose not to empathize with the other, to say that so much is beyond the pale, we are probably not going to be very effective in understanding that the other does not see itself as evil. It does not see itself as an enemy that must be fought. I don’t necessarily need to agree with it, but to demonstrate that I can’t even run the program, simply for the purpose of social signaling, seems the height of folly. How do we hope to become effective if we can’t guess what the other will do next?

There are limits to this. We have to have a certain kind of consistency of mind. But consider the idea that you can run a diehard rationalist, materialist, atheist program as well as a program that says, perhaps I will open myself to transcendental states, and, if I need to anthropomorphize those as coming from a deity, perhaps that architecture is not what Richard Dawkins would suggest is a kind of mind virus. It is, in fact, a facility that we choose to deny ourselves at our peril. What if we’re trapped on a local maximum of fitness and, in fact, we need to get to higher ground? What if the traversal of the so-called adaptive valley, where we have to make things much, much worse before they get much better, cannot generally be attempted rationally, and we need a modicum of faith, of belief that we cannot reference to any sort of information set? We could end up trapped on local maxima forever. But I think it’s really important to consider that some people may be able to traverse the adaptive valley without a belief in a deity. Some may need a temporary belief in a deity. Some may be able to reference some sort of transcendental state and steel themselves in order to make the journey.

But however it's accomplished, there are times when it would appear that all hope is lost, and if we are not to end our days stuck on these local maxima of whatever we have achieved, we have to fundamentally experiment with ways of thinking, if only temporarily, to get to higher ground.

