Neutral Monism vs. Simulation

Philosopher Nick Bostrom, Oxford

Popular internet philosophy has recently focused on Nick Bostrom’s argument that we could be living in a computer simulation. A New Yorker article by Adam Gopnik suggested that a fiasco at the Oscars and other anomalies in popular culture lend credence to the idea that we might be. This inspired a video by A.J. Rocca on Bostrom’s simulation argument that went viral on Reddit, which was followed by a Vox video by Chang, Posner, and Barton noting that Elon Musk endorses the simulation argument. These recent examples are in addition to a fair amount of other internet material.

This article provides brief argumentation against Bostrom’s simulation argument. The gist of what I am arguing is that Bostrom’s simulation argument gives rise to classic problems in metaphysics that are best avoided by adopting a neutral monist position. My difference from Bostrom is epistemological: there may be other worlds upon which we have supervened, but they are not important or interesting unless one can find a theoretical way of interfacing with them.  Here, I give a brief sketch of what neutral monism is and how it relates to the idea that we are living in a computer simulation.

Neutral Monism

Neutral monism as a theory has been attributed in the literature to a number of historical philosophers, including Anaximander and Spinoza, but the father of our contemporary concept of neutral monism was the physicist, psychologist, and mathematician Ernst Mach. Einstein was introduced to Mach’s writings by his friend Michele Besso and used Mach’s ideas profitably in his thinking about physics.

Historically, Bertrand Russell adopted neutral monism as his final position in metaphysics.

Neutral monism represents a high point in thinking about metaphysics, one later followed by the sophisticated philosophical analyses of analytic thinkers in semantics, cognitive science, and so on.

How might one define neutral monism? Quite simply, it resolves the Cartesian mind-body problem and other dualisms under a monism of the object. Objects are neither entirely physical nor entirely psychological; every object is a blend of both. To adopt a neutral monist position requires that one change one’s way of thinking about the problem of metaphysics. For the neutral monist, the aim of metaphysics is to eliminate dualisms and the multiplication of realities altogether. In that sense, it is a logical and methodological position, one that provides a robust theory of metaphysics without placing itself in opposition to scientific thinking. Reducing reality to individual dualistic units applies nicely to both quantum uncertainty and relativity, although Einstein famously attempted to subvert the former.

In neutral monism, dream objects are nevertheless objects. A materialistic explanation will argue roughly that dreams exist upon a neural network and are not connected to the external world. This results in what has been called a brain paradox, which can be revealed by asking the question: If I realize in a dream that my cognitive states are the result of brain states, am I then referring to a dream brain or a real brain? This reveals that a materialistic explanation of dream states results in an uncertainty about the reference of the term “brain.” In other words, whenever one reduces phenomena to a neural network, there is an uncertainty about whether the neural network that one is referring to exists upon another neural network that one is implicitly not referring to.

Incidentally, there is a better way of describing the difference between a dream state and being awake. Waking life is causally connected. The same problems that were there when you went to sleep will be there waiting for you when you wake up. This is not the case with dream states.

The neutral monist does not argue that it is impossible that experienced reality exists upon some “other” substratum (ideas, god, matter, simulation, etc.). For the neutral monist, the substratum is described as units of meaning, and, from this, one can deduce rules about meaning. Most importantly: a sentence will be meaningless if it explicitly removes any and all knowing subjects. The neutral monist is basically saying that any underlying reality would have to be encountered in the same way that we encounter experienced reality.

The neutral monist is basically saying that “there may be other worlds upon which we have supervened, but they are not important unless one can find a theoretical way of interfacing with them.”

Contra Bostrom

Bostrom begins his argument with his largest concession: the assumption of so-called substrate-independence. This is the idea that consciousness could theoretically supervene upon another substratum besides the biological one upon which it has supervened.

Bostrom cites a number of sources to support the notion that neural networks could, in theory, be perfectly replicated with sufficient computing power.

But why exactly would consciousness arise within the simulation? However sophisticated the machine and its program, what would make consciousness arise within them?

One can imagine two scenarios here. In one case, a sophisticated simulation is operating that is as fine-grained as what I presently consider reality to be, but it does not contain consciousness. It is, instead, like a virtual television reality with no viewer. In the other case, that same reality contains actual conscious beings.

A fundamental question here is Turing’s. How do I know that another being is conscious? I can only know that it is conscious by interfacing with it. If it shows all of the signs of consciousness, I assume that it is conscious. So here we see that Bostrom’s argument implies that computers will reach a level of sophistication at which a genuine artificial intelligence will pass the Turing test. While these beings can be assumed to be conscious, their being so is not necessary.

In a recent article, Metzinger argues that our sense of self is inextricably connected to our biological bodies. If a human being were to be transferred into a computer simulation, what would be transferred? There would be nothing to transfer other than one’s physical body. A virtual reality program capable of doing so would require not only fine-grained computing power but also a fine-grained interface between the biological body and the computer.

How does one define the “sense of self” that the individual has and that the computer interface lacks? Metzinger argues that the sense of self could only arise through biological evolution, and could not arise in a simulation. One does not even have to go that far. One could argue that a sense of self requires a physical substrate that cannot be replicated in computer code.

Fundamentally, the computing power problem arises again. The computing power of all possible machines would not be capable of reproducing a human’s robust sense of self and of the world, because every conscious being has a fine-grained view of the world for itself. Machines can be made, but one could not make a machine in which conscious beings exist. It would require computing power as fine-grained as the world itself, which is impossible. In this sense, the electron microscope is the counterargument against simulation theory.

Another argument would come from information theory. If reality were a computer simulation, one would expect many, many more errors than we in fact observe.

The Subject

There is a faulty paradigm in philosophy, one that may or may not be adopted by the simulation argument but is present in many similar classic positions in metaphysics: the attempt to define the individual by some kernel of personality, an identity, a consciousness.

The neutral monist defines the individual as a relation between objects. A person that does not hold a relation to objects in the world is not a person. The world is made up of precisely these sorts of dualistic units. The individual cannot properly be said to exist, but the certainty of the individual’s existence is the same as the certainty of the existence of objects.

Conclusion

Simulation theory is an interesting and fruitful concept in metaphysics. Information theory might be capable of showing that, if simulation were the case, one would expect more errors, and that the computing power necessary to run such fine-grained simulations would have to match that of the real world, which would be impossible. In any case, neutral monism suggests that simulation theory would need to provide a theoretical interface in order to be interesting.

 
