Research

## Newcomb’s Paradox

### A constructor-theoretic reflection on Newcomb’s paradox

Bizarre things happen when the dynamical-law-based approach is applied to entities with counterfactual properties. Usually this shows that the traditional conception of physics, which expresses everything in terms of initial conditions and laws of motion, is inadequate to handle them: either such entities are not fundamental at all, or the traditional conception is not, and must be regarded only as an approximation. There are also cases, however, where the traditional conception is merely misleading, and leads to paradoxical ways of reasoning about things.

A case in point is the well-known Newcomb’s paradox, which we will consider here in a slightly revisited form. Suppose a game is set up as follows. At time t_{0}, a predictor **P** prepares a large box with the following contents: two smaller, opaque, soundproof boxes, labelled A and B, each containing one or more puppies (to be specified below); and an automaton **T** that is programmed to open one or both boxes at a later time t_{1}, in such a way as to obtain the maximum number of puppies.

At time t_{0} the small boxes and the automaton are sealed in the large box, and the automaton begins to ponder its choice. To be clear: we have not yet fully specified how the boxes are filled; but however they are filled, given that there is at least one puppy in each, there must be more puppies in both boxes combined than in box A alone. At time t_{1} the automaton **T** must choose to open either box A alone or both boxes.

The boxes are filled by the predictor **P** as follows:

If **T** is predicted to choose to open both boxes, one puppy is placed in box A and one in box B before the larger box is sealed.

If **T** is predicted to choose, at time t_{1}, to open only box A, 3 puppies are placed in box A and again only one in box B before the larger box is sealed.

The predictor has perfect knowledge about the initial conditions of the automaton **T** and about the dynamical laws of everything in the box – which for the sake of argument are supposed to be deterministic.

Therefore, the rules of the game allow only two possible scenarios at time t_{1}:

**P** predicted that **T** will open both boxes; box A contains one puppy and box B contains one; **T** opens both boxes and gets two puppies in total.

**P** predicted that **T** will open only box A; box A contains 3 puppies and box B contains 1; **T** opens only box A and gets 3 puppies.
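The rules of the game can be sketched in a few lines of code. The function names and the string labels here are illustrative choices, not part of the original setup; the puppy counts are those stated above:

```python
# Sketch of the game's payoff rules. The predictor P fills the boxes
# at t0 according to its prediction of T's choice at t1.

def fill_boxes(predicted_choice: str) -> dict:
    """Box contents, fixed at t0 on the basis of P's prediction."""
    if predicted_choice == "both":
        return {"A": 1, "B": 1}
    if predicted_choice == "A only":
        return {"A": 3, "B": 1}
    raise ValueError("prediction must be 'both' or 'A only'")

def payoff(boxes: dict, actual_choice: str) -> int:
    """Puppies obtained by T at t1."""
    if actual_choice == "both":
        return boxes["A"] + boxes["B"]
    return boxes["A"]

# A perfect predictor means predicted choice == actual choice,
# so only the two scenarios above can occur:
assert payoff(fill_boxes("both"), "both") == 2
assert payoff(fill_boxes("A only"), "A only") == 3
```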

The paradox arises from the fact that the automaton can provide an output based on two different algorithms, derived from two different arguments, which apparently must be equivalent under the traditional conception of physics – i.e., they must lead the automaton to the same conclusion; but they seem not to.

The first algorithm leads the automaton to open only box A, so that it gets 3 puppies. This follows from the rules of the game: if **T** were to open both boxes, it would get only 2 puppies.

The other algorithm is based instead on the reasoning that there are *only two* possible initial conditions for the boxes A and B immediately after time t_{0} (just after the preparation, when the larger box outside is sealed): one where each box contains 1 puppy, and one where A contains 3 puppies and B contains 1. Under either initial condition, a larger number of puppies is obtained by opening both boxes, because the sum of the contents of both is always larger than the content of either. Therefore the automaton should conclude that it must open both boxes, irrespective of everything else.

The reason why the two algorithms should be equivalent is that the rules of the preparation seem irrelevant to the automaton’s decision: the content of the boxes, once the larger box outside is sealed, is fixed and cannot be retroactively affected by **T**’s choices; therefore the boxes’ initial condition is all that matters for the number of puppies the automaton can get.
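The second algorithm is a dominance argument, and it can be sketched explicitly. This is an illustration of the reasoning only (the function name and structure are ours), under the assumption, questioned below, that the boxes’ contents are fixed independently of **T**’s choice:

```python
# Dominance reasoning: for each possible fixed initial condition,
# compare the payoff of opening both boxes with that of opening A only.

POSSIBLE_CONTENTS = [{"A": 1, "B": 1}, {"A": 3, "B": 1}]

def dominance_choice(possible_contents) -> str:
    """Open both boxes if doing so is at least as good in every case
    and strictly better in at least one."""
    at_least_as_good = all(
        boxes["A"] + boxes["B"] >= boxes["A"] for boxes in possible_contents
    )
    strictly_better = any(
        boxes["A"] + boxes["B"] > boxes["A"] for boxes in possible_contents
    )
    return "both" if at_least_as_good and strictly_better else "A only"

# Since box B always holds at least one puppy, opening both dominates:
print(dominance_choice(POSSIBLE_CONTENTS))  # -> both
```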

**Resolution.**

The paradox arises because the account given above is incomplete. There is a crucial additional fact about the preparation: what determined the content of the boxes is the simulation of **T** that **P** ran in advance, before time t_{0}. Let us call that simulated version of the automaton **T′**. It is the simulated version’s choice that sets the content of the boxes; so the correct way of reasoning, for the automaton and for its simulation, must take this fact into account.

If a perfect simulation **T′** of **T** can exist, then the two must reason identically: they must have been given the same algorithm to make their decision, and the same inputs. Moreover, both know about the existence of the simulation and about how it is used to prepare the boxes; but neither of them can know whether it is the simulation or the actual automaton.

Now, under these circumstances, the only possible algorithm is one that can be executed by both **T′** and **T**. The crucial point is that at time t_{0}, when the simulation takes place, the boxes are not yet filled with anything; and that neither the simulation nor the actual automaton knows which one it is, for otherwise the two automata would know different things, contrary to the assumption of a perfect simulator.

Hence the simulation **T′** in particular must assume that the boxes could still be empty, to be filled according to its own choice. **T′** therefore chooses to open only one box, so as to maximise the number of puppies **T** gets, thus determining the content of the boxes. Consistently with the assumption of perfect simulation, **T** will reason in the same way and reach the same conclusion as **T′**: open one box. So there is no paradox after all.
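The resolution can be sketched as a single decision procedure run twice, once as **T′** (whose output fixes the boxes) and once as **T**. This is a minimal illustration under the stated assumptions (perfect simulation, deterministic laws); the function names are ours:

```python
def decide() -> str:
    """The one algorithm shared by T and T'. Neither instance knows
    whether it is the simulation, so each must reason as if its own
    choice sets the boxes' contents."""
    payoff_if_one_box = 3   # boxes would be filled as {"A": 3, "B": 1}
    payoff_if_both = 2      # boxes would be filled as {"A": 1, "B": 1}
    return "A only" if payoff_if_one_box > payoff_if_both else "both"

# Before t0, P runs the simulation T' and fills the boxes accordingly:
prediction = decide()
boxes = {"A": 3, "B": 1} if prediction == "A only" else {"A": 1, "B": 1}

# At t1, the real T runs the identical algorithm:
choice = decide()
assert choice == prediction  # perfect simulation: same algorithm, same output
puppies = boxes["A"] if choice == "A only" else boxes["A"] + boxes["B"]
assert puppies == 3          # T gets 3 puppies; no paradox
```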

There are two interesting side-remarks about this way of looking at the apparent paradox.

First, suppose that instead of an automaton, **T** is a person with ‘free will’, which we take to be the capacity to *create knowledge* in order to make a choice about the boxes. Its behaviour must then be *unpredictable*, which seems to contradict the statement that a perfect predictor exists. But there is no contradiction. What one means by saying that the creation of knowledge is unpredictable is that it cannot be known *before* it is created. Creating the relevant knowledge can be done *only* by running a simulation of the person in question. That simulation, in turn, will by definition be creative, and unpredictable, in the same sense: the prediction of its choice cannot be made in advance of the creation of the relevant knowledge.

In this particular case, the content of the boxes is not set by the choice that **T** will make (as is misleadingly assumed in statements of the ‘paradox’) but by the choice that the simulation **T′** made before t_{0}; and this choice is unpredictable, in the sense that the only way to predict it is to bring about a simulation of **T′**; and so on. Until **T′** has made its choice, the content is not set; once it has made it, the choices of **T** are set, on the grounds of the knowledge that was created in **T′**. What matters is precisely that the requisite knowledge about what to do with the boxes, whatever it is, could not be predicted before an instance of the person **T** was brought about, via **T′**.

The second point is that this setting lends itself to explaining why the unpredictability of the creation of knowledge is fundamentally different from the unpredictability of measurement in quantum theory.

Consider a slightly altered version of the game, where the automaton **T** is supposed to make its choice based on the outcome of a measurement of the X-component of a spin prepared in a superposition of two eigenstates of that observable. Then there cannot be any perfect predictor for the choice **T** will make: if there were one, the laws of quantum mechanics would be violated. However, this has nothing to do with the unpredictability of knowledge creation mentioned above. The knowledge created by **T′** to make its choice is represented by a sharp information attribute of the variable ‘which box to open’; in the quantum version, by contrast, the variable ‘which box to open’ is not sharp.

See http://www.scottaaronson.com/blog/?p=30 for Scott Aaronson’s related take on this.