The Universe Solved



Is Our Reality Just a Big Video Game?

by Jim Elvidge
March 16, 2008

It is now nine years since the release of the movies “The Matrix,” “eXistenZ,” and “The Thirteenth Floor,” all of which explored the idea that we might be living in a computer-generated simulation.  Although the fun and speculation about the premise of these movies have largely died down, interest in the concept has shifted from pop culture to academia.  Nick Bostrom, the Director of the Future of Humanity Institute at Oxford, wrote his oft-quoted “Are You Living In a Computer Simulation?” [1], published in 2003.  More recently, in 2008, Brian Whitworth from Massey University in New Zealand submitted the white paper “The Physical World as a Virtual Reality” [2], which created a nice buzz in the blogosphere (the Slashdot forum alone collected over 1,000 comments on it).  Somewhat tangentially, there is also the whole Transhumanism/Singularity movement, which predicts our future merger with AI but does not really address the idea that we may have already done so.


Also, at the beginning of 2008, I released the book “The Universe – Solved!”  My take on the topic is a little different from the common theme of a computer-generated simulation.  I consider that scenario to be but one of several possible ways in which our reality may be under programmed control.  The book introduces these scenarios but really focuses on presenting all of the categories of evidence that our reality may indeed be programmed.  This evidence takes us from feasibility to some level of probability.


But feasibility is where it starts, and for you skeptics out there, this article is for you.  If, after reading it, you are still convinced that the idea is not at all feasible, I respectfully acknowledge your position and we can go our separate ways.  Where do I stand on it?  Put simply, I believe that a programmed reality is not only very feasible but highly probable, although, as with every other idea in the world, I remain less than 100% convinced.


We shall begin our feasibility study with a nod to the 30th anniversary of the release of the arcade video game Space Invaders.  Running on an Intel 8080 microprocessor at 2 MHz, it featured 64-bit characters on a 224 x 240 pixel 2-color screen.  There was, of course, no mistaking anything in that game for reality.  One would never have nightmares about being abducted by a 64-bit Space Invader alien.  Fast forward 30 years, take a stroll through your local electronics superstore, and what do you see on the screen?  Is that a football game or is it “Madden NFL ’08”?  Is that an Extreme Games telecast or are we looking at a PS3 or Wii version of the latest skateboarding or snowboarding game?  Is that movie featuring real actors or are they CG?  (After watching “Beowulf,” I confess that I had to ask my son, who is much more knowledgeable about such things, which parts were CG.)

[Images: Space Invaders and “Madden NFL ’08”]

The source of our confusion is simply Moore’s Law, the general trend that technology doubles every two years or so.  Actually, to put a finer point on it, the doubling rate depends on the aspect of technology in question: transistor density doubles every two years, processor speed every 2.8 years, and screen resolution about every four years.  What remains fascinating about Moore’s Law is that this exponential growth rate has held steady for the past 40 years or so.  As a result, “Madden NFL ’08” utilizes a 1920x1080 screen resolution (at 16-bit color), at least 1 GB of memory, and runs on a PS3 rated at 2 TFLOPS.  Compared to Space Invaders, that represents an increase in screen resolution of over 500x, an increase in processing speed by a factor of 2 million, and an increase in the resolution of gaming models of well over a thousand.  And so, “Madden” looks like a real football game.
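The compounding behind these numbers is easy to check.  A minimal sketch, using the doubling periods quoted above (the printed factors are straight exponentials, not measured hardware benchmarks):

```python
def growth_factor(years: float, doubling_period: float) -> float:
    """Total multiplier after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Thirty years of compounding at the doubling rates quoted above.
print(growth_factor(30, 2.0))   # transistor density: 2**15 = 32768x
print(growth_factor(30, 2.8))   # processor speed: roughly 1,700x from the trend alone
print(growth_factor(30, 4.0))   # screen resolution: about 181x
```

Note that the 2-million-fold speed jump from the 8080 to the PS3 outruns the naive 2.8-year exponential; the difference comes from architectural gains (parallelism, specialized graphics hardware) layered on top of the raw trend.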


So, given this relentless technology trend, at what point will be able to generate a simulation so real that it will be indistinguishable from reality?  To some extent, we are already there.  From an auditory standpoint, we are already capable of generating a soundscape that matches reality.  But the visual experience is the long pole in the tent. Given the average human’s visual acuity and ability to distinguish colors, it would require generating a full speed simulation at 150 MB/screen to match reality.  Considering Moore’s Law on screen resolution, we should reach that point in 16 years.  Then, of course, there are the other senses to fool, however, as we shall see, they should not be too difficult.  So, 16 years is our timeframe to generate a virtual reality indistinguishable from “normal” reality.  Of course, we also have to experience that reality in a fully immersive environment in order for it to seem authentic.  This means doing away with VR goggles, gloves, and other clumsy haptic devices.  Yes, we are talking about a direct interface to the brain.  So what is the state of the art in that field?
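The 16-year figure follows directly from the four-year doubling period for display resolution.  A quick sketch (the ~9.4 MB/frame starting point is back-calculated from the article’s own numbers, not an independently sourced figure):

```python
import math

def years_to_reach(current: float, target: float, doubling_period: float) -> float:
    """Years of exponential growth needed to get from `current` to `target`."""
    return doubling_period * math.log2(target / current)

# A 150 MB/frame target reached in 16 years at a 4-year doubling period
# implies a starting point of 150 / 2**4 = 9.375 MB per frame today.
print(years_to_reach(9.375, 150, 4))  # 16.0
```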


In 2005, a team at MIT placed electrodes in the brains of macaque monkeys and showed them a variety of pictures.  By reading the signals from the electrodes, the team was able to determine, with 80% accuracy, which picture a particular monkey was looking at. [3]  Although certainly a nascent technology, this experiment and others like it have demonstrated that it is possible to determine someone’s sensory stimuli simply by monitoring the electrical signals in their brain.  We can leave the perfection of this technology to Moore’s Law.  What about the other direction - writing information into the brain?  Dozens of people in the US and Germany have already received retinal implants, whereby miniature cameras generate signals that stimulate nerves in the visual cortex to provide rudimentary vision in the form of grids of light.  Whereas it took 16 years to develop a 16-pixel version, it has taken only 4 years to develop a 60-pixel one. [4]  That rate of advance is even faster than Moore’s Law, because we are at the early part of a technological innovation.  Further advances are being made by stimulating regions deeper in the brain.  For example, researchers at Harvard Medical School have shown that visual signals can be generated by stimulating the lateral geniculate nucleus (LGN), an area that relays signals from the optic nerve to the visual cortex.  Perhaps stimulating the visual cortex directly will further accelerate advances in generating simulated realities.  Other senses, such as taste, smell, and touch, do not appear to require the same level of data as vision and can also be addressed via the same deep-brain stimulation methods.
Given the state of these technologies today, and the fact that about one million axons carry visual signals in parallel through the optic nerve, Moore’s Law suggests that we could achieve electrical-implant-based simulation in a little over 30 years.  However, nanotech may actually speed up the process.


Instead of implanting a device into the brain that stimulates millions of cells, or axons, why not generate millions of nanobots and instruct each one to find an available cell and stimulate it under the direction of a central computer?  In early 2008, researchers from Northwestern and Brookhaven National Laboratory equipped gold nanoparticles with DNA tentacles and demonstrated their ability to link with neighbors to create ordered structures. [5]  Crystals containing as many as a million particles were built using this technology.  In addition, scientists from the International Center for Young Scientists have developed a rudimentary nano-scale molecular machine capable of implementing the logical state machine necessary to direct and control other nano-machines. [6]  These experiments demonstrate a nascent ability to manipulate, build, and control nano-devices, which are the fundamental prerequisites for nanobot technology.  Many well-respected technologists and futurists put the likely time frames for molecular assembly and nanobots somewhere in the 2020s.  It is hard to say exactly when directed nanobot activities, such as those needed to create the reality experience described above, may be possible, because there isn’t yet a starting point from which to extrapolate Moore’s Law.  However, the current state of the art and common projections put the time frame 20 to 30 years out.

Of course, this alone would not be enough to fool us, because we would still have our memories from before the instantiation of the simulation, plus our collective set of life memories.  Or can our memories be erased and replaced?  Researchers at Harvard and McGill Universities used the drug propranolol in a study to dampen traumatic memories, and in studies on rats at New York University an amnesia drug was used to delete specific memories while leaving all others intact. [7]  To the best of our knowledge, memories in the brain are encoded in the strengths of synaptic connections, which in turn depend on the generation of neurochemicals such as glutamates.  In principle, properly programmed nanobots should have the ability to weaken or strengthen these synaptic connections, thereby removing or adding memories.  The likelihood that memories are distributed throughout the brain makes the programming or deprogramming more difficult, but not impossible in theory.


From these directions, it should be clear that the generation of a full-immersion simulation is not only feasible, but also likely some time in the next 20-30 years.  So who is to say that we aren’t already in one?  In fact, Nick Bostrom’s Simulation Argument makes a compelling case that we probably are. 


The argument goes like this…


Someday, we will have the ability to generate and experience these simulations (the era in which this occurs is called the posthuman phase).  And when we do, we will likely generate millions of them.  From a logical standpoint, Bostrom says, one of three scenarios must be true:


1. We never get to the posthuman phase because we destroy ourselves.

2. We never get to the posthuman phase because we make a conscious decision not to pursue this technology.  Personally, I throw out this scenario as unrealistic.  When faced with a technology that has inherent dangers (nuclear energy, nanotech, cloning, generating energies in particle accelerators sufficient to create a black hole), when have we ever decided not to pursue it?

3. We do achieve posthumanism.  And, since the odds that we are living in one of the millions of generated simulations are much higher than the odds that we just happen to be in the base reality musing about the possibilities 20 years hence, we are most probably living in a simulation.


Therefore, if you subscribe to his logic and have an optimistic view of where we are going as a species, you have to conclude that we are probably living in a simulation.
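The probabilistic core of scenario 3 can be made explicit.  If there is one base reality and N indistinguishable simulated ones, and you have no way to tell which kind you are in, your credence of being simulated is N/(N+1).  This is only a toy restatement of Bostrom’s argument, not his full formalism:

```python
def p_simulated(n_simulations: int) -> float:
    """Chance of being in a simulation, given one base reality
    plus n indistinguishable simulated realities."""
    return n_simulations / (n_simulations + 1)

print(p_simulated(1))          # 0.5 -- a coin flip with a single simulation
print(p_simulated(1_000_000))  # just under 1 -- effectively certain
```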


However, this only addresses the scenario in which the reality we are living in is generated by us in the future.  What if our reality is programmed, but by someone else?  Other nanotech-based technologies may also be deployed this century that create physical realities rather than simulated ones.  These ideas are explored further in my article "Nanotech and the Physical Manifestation of Reality."




1. Bostrom, Nick, “Are You Living In a Computer Simulation?”, Philosophical Quarterly, 2003, Vol. 53, No. 211, pp. 243-255.

2. Whitworth, Brian, “The Physical World as a Virtual Reality,” arXiv:0801.0337, January 2008.

3. Hung, Chou P., Gabriel Kreiman, Tomaso Poggio, and James J. DiCarlo, “Fast Readout of Object Identity from Macaque Inferior Temporal Cortex,” Science, 4 November 2005, Vol. 310, No. 5749, pp. 863-866.

4. The Guardian, February 17, 2007, p. 11 of the UK news and analysis section.

5. Farley, Peter, “Programming Advanced Materials,” Technology Review, January 31, 2008.

6. Fildes, Jonathan, “Chemical brain controls nanobots,” 11 March 2008,

7. Christensen, Bill, “New Drug Deletes Bad Memories,” 2 July 2007 post on