Everything everywhere all at once? Many worlds in context

The many worlds interpretation (MWI) of quantum mechanics has not only become an equal rival to the once all-dominating Copenhagen collapse interpretation (CI), it has even breached the fences of science and become a popular theme of Hollywood blockbusters. By contrasting the two rival interpretations, this post highlights some of the surprising things I have learned during my recent research journey. In particular, I point out that the MWI might not imply that “everything everywhere all at once” happens, contrary to common perceptions in science and popular portrayals.

Required background: basic quantum mechanics.


Most professional physicists probably never think too deeply about the quantum measurement problem even if they do quantum computations in their daily life. At least this described me well until I met Joan Vaccaro in Brisbane, Australia, in 2020.[just before the pandemic] During a seemingly harmless discussion about entropy and the second law, she suddenly asked me: “And what is your stand on the quantum measurement problem?” My answer:

Aaaaahhhhhhhhhhaaaaa….. yeaaaaaaahhh…………. Coffee?

Me when asked about the quantum measurement problem in 2020

Since then, I slowly started thinking about it, but it was not until recently that I felt I had something to say about it. And this, too, happened only by chance, as my original motivation was to derive the conditions under which entropy increases monotonically in an isolated quantum system. I realized, however, that this required deriving why the system behaves classically in the first place, which started everything… And this is basically the first fact I would like to emphasize:

Fact 0: There is a lot of chance and luck involved in research. (Something that funding agencies and professors are happy to acknowledge in theory but immediately forget in practice.)

But let’s turn to the physics now…

The Copenhagen Interpretation (CI)

According to CI there are 5 basic axioms[more or less]:

  • P1. The state of a quantum system is described by a wave function living in Hilbert space and observables are described by Hermitian matrices.
  • P2. The time evolution of an isolated system is governed by Schrödinger’s equation.
  • P3. If one measures an observable, one obtains one of its eigenvalues (let’s call it λ) as a result.
  • P4. Upon obtaining outcome λ, the wave function “collapses”, i.e., it is projected on the eigenspace of λ.
  • P5. The probability for outcome λ is given by Born’s rule, i.e., it equals the squared norm of the projected component in P4.

The problems with this set of axioms are well known. First of all, P3, P4 and P5 explicitly talk about measurements, but do not define what a measurement is. Does it require a human observer? Can a cat measure a quantum system? Is it sufficient to have a record carved in stone? Do two interacting ions already measure each other?

If one tries to work around this problem by modeling a measurement using P1 and P2, more disturbing features result. To see this, imagine we model the system S to be measured (say, a spin 1/2 particle) together with the measuring apparatus A as one big quantum system. If R denotes the state of the apparatus when it is “ready” to measure the system, then the measurement interaction will transform the joint system-apparatus state as (assuming a “good” measurement, neglecting errors, etc.)

$$|{\uparrow}\rangle|R\rangle \;\to\; |{\uparrow}\rangle|\text{“up”}\rangle, \qquad |{\downarrow}\rangle|R\rangle \;\to\; |{\downarrow}\rangle|\text{“down”}\rangle.$$

Here, “up” and “down” denote two orthogonal “memory states” of the apparatus that keep the information about the spin of the system (say, in z direction to be definite). So far so good, but if the initial state of the system is a coherent superposition, the result will be:

$$\frac{1}{\sqrt{2}}\big(|{\uparrow}\rangle + |{\downarrow}\rangle\big)|R\rangle \;\to\; \frac{1}{\sqrt{2}}\big(|{\uparrow}\rangle|\text{“up”}\rangle + |{\downarrow}\rangle|\text{“down”}\rangle\big).$$

Thus, the combined system-apparatus state will end up in a superposition as a consequence of the linearity of the Schrödinger equation. But the state on the right hand side is incompatible with any collapse of the wave function, according to which the measurement apparatus should show either “up” or (exclusive or) “down” with probability 1/2. This is the quantum measurement problem, which is still unsolved.
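One can check this linearity argument numerically. Here is a minimal sketch (my own toy code, not from any source above): the apparatus is modeled as a single memory qubit and the measurement interaction as a CNOT gate, which is an assumption made purely for illustration.

```python
import numpy as np

# Toy model of a "good" measurement: the apparatus is a single memory
# qubit that starts in a "ready" state and copies the spin's z-basis
# value via a CNOT-type interaction (apparatus |0> = "up", |1> = "down").
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
ready = up.copy()  # apparatus "ready" state R

def measure(system_state):
    """Unitary measurement interaction on system (x) apparatus."""
    return CNOT @ np.kron(system_state, ready)

# A definite spin stays a product state with a definite record ...
print(measure(up))    # amplitude 1 on |up>|"up">
# ... but a 50:50 superposition becomes entangled: no collapse, just
# two branches, each with amplitude 1/sqrt(2).
print(measure((up + down) / np.sqrt(2)))
```

The linearity of the unitary evolution is all that is needed here: feeding in a superposition necessarily produces a superposition of measurement records.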

Some ontological commitment

Note that the CI as given above rather resembles a cookbook recipe for how to do quantum calculations in practice. It does not tell you how you should think about the wave function or its collapse. In that sense it is not an actual “interpretation” (and perhaps for this reason it was a very clever and successful move, historically speaking).

Since I want to focus in this post on the MWI and don’t want to turn it into a general overview of the interpretations of quantum mechanics, I will now make some heavy metaphysical commitments and read the CI from a realist perspective. This means that both the wave function and the collapse really describe something “out there”, and not just something that exists only in our minds (look up “QBism” for an alternative view). Moreover, I will also assume that the wave function gives a complete description of a quantum system (look up “hidden variable theory” for an alternative view).

Then, within this realist stance, the CI is also sometimes called a collapse interpretation (which we can luckily again abbreviate by CI). According to it, there is some physical mechanism that modifies Schrödinger’s equation and causes the collapse. While proposals for this mechanism exist, there is no direct experimental test of it. In particular, it is not known at which time, length, mass or energy scale this collapse is supposed to happen, nor is it clear whether it is something fundamental or whether it is caused by another mechanism (gravity? consciousness?).

The Many Worlds Interpretation (MWI)

The MWI, proposed by Hugh Everett in 1957 (and called the “many worlds interpretation” in 1973), is based on a simple yet drastic modification of the postulates of quantum mechanics: drop P3, P4 and P5. That is, the world is only a wave function that evolves according to Schrödinger’s equation[in the nonrelativistic case], that’s it! The promise of the MWI is thus that P1 and P2 are sufficient to make sense of quantum mechanics. To date, however, this goal has not been achieved.

But before we come to its problems, it is good to demystify the MWI a bit. First of all, we have seen that the linearity of the Schrödinger equation inevitably causes superpositions to spread and proliferate if they couple to other degrees of freedom (such as the measurement apparatus above). Furthermore, since the Schrödinger equation is assumed to apply to everything in the Universe, including its human observers, this implies that the global wave function of the Universe will naturally contain “parallel worlds”, i.e., a superposition of states where observers have seen and experienced very different things. Therefore, we can summarize:

Fact 1: The Multiverse is a prediction of the MWI.

This is the first important and underestimated fact about the MWI because it is easy to think that the MWI postulates the existence of all these parallel worlds (the Multiverse), but that is not true. In particular, the existence of all these unobserved parallel worlds has been used to argue against the MWI based on Occam’s razor: why should one introduce all these parallel worlds if a single world is sufficient to explain quantum mechanics? But from what we said above, the situation is actually reversed. Parallel worlds are not postulated but a consequence of the two postulates of the MWI, whereas the CI has five postulates. Thus, we find:

Fact 2: Occam’s razor would favor MWI over CI.

Of course, this statement assumes that we can make sense of the empirically verified postulates P3, P4 and P5 within the MWI (which is, as we said, currently not clear). However, the point of Fact 2 is that one cannot use Occam’s razor to rule out the MWI a priori.

At this point the reader might even wonder why I am still talking about “interpretations”. And you’re right, within the realist perspective taken here one should surely admit

Fact 3: The CI and the MWI really are different theories, so let’s abbreviate them by CT and MWT from now on.

Hence, this discussion is no longer about some abstract philosophical or metaphysical interpretations, it really is about different and incompatible physical theories! Unfortunately, it turns out that it is hard to design an explicit test discriminating between them in any laboratory setting that we humans can control. But the situation is not hopeless! Given the current efforts to design and control large scale quantum computers and other quantum technologies, we are actively pushing the boundaries at which a collapse could happen. In my view, this is the most exciting aspect of any possible quantum revolution.[don’t tell the funding agencies]

Moreover, instead of directly testing CT versus MWT, I believe there is much more indirect evidence available, and that is how I actually started to appreciate the MWT. Throughout my career I naturally derived all my results by starting from a big isolated quantum system that obeys Schrödinger’s equation. This is the de facto modus operandi for research in statistical mechanics, solid state physics and open quantum systems theory, among other fields. So, all those conductivities, susceptibilities, absorption/emission spectra, sound velocities, response functions, and what have you, that researchers computed from the Schrödinger equation (partly even for very big systems) provide much indirect evidence for the MWT. I mean, I would not even know how to compute these things starting from a CT with a modified Schrödinger equation, and I think nobody does. This led me to realize

Fact 4: 99.9% of physicists are already Everettians in practice.

By this I simply mean that only a few physicists would hesitate to apply Schrödinger’s equation to systems of arbitrary size.

Good and bad questions about the MWI

At this point, you might say “Fine! The MWT promises an easier account of quantum mechanics with fewer axioms, and I am also happy to write down the Schrödinger equation for systems of arbitrary size. But:”

Good question: What do you exactly mean when you talk about worlds and parallel universes?

It turns out that the definition of “worlds” or “universes” is not universally agreed on and seems to have changed over time. I prefer the following definition, which I believe is also used by most people nowadays. To this end, suppose you have split up the universal wave function into different components. Note that there are different ways to do this, for instance, as in the equation above, or using the histories formalism of quantum mechanics, which is somewhat of a generalized path integral approach and currently my preferred method. But whatever method you choose (I won’t review them here), my definition of a world is

Definition: A world is a (non-zero) component in the wave function that is decohered from all other components.

Now, if you just know basic quantum mechanics, you might not know what decoherence is. I plan to write a popular introduction to it, but for now simply think of decoherence as the loss of interference between different components of the wave function, such that they become effectively non-interacting. Thus, if different components of the wave function decohere, they behave like members of a statistical ensemble. And that is precisely what we want when we look around us: our world seems pretty stable and describable at large by classical concepts, and that is guaranteed by decoherence.

So, when does decoherence happen? Theory says that decoherence happens in large (many-body) quantum systems whenever the components of the wave function are defined with respect to the eigenprojectors of a slow and coarse observable. If you have no intuition for what a slow and coarse observable is, consider the temperature (or energy) of a cup of coffee. This observable is coarse because knowing the coffee temperature tells you very little about the microstate of the coffee molecules. Moreover, it is slow because it changes on a time scale of minutes, whereas fundamental microscopic processes inside the coffee (e.g., collisions between the molecules) happen on a much shorter femtosecond time scale. Remarkably, this insight was formulated by Nico van Kampen as early as 1954, long before people even started talking about decoherence or many worlds.[and he is still not credited for it]
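To get a feeling for how strongly many-body degrees of freedom suppress interference, here is a hedged toy sketch (my own illustration, not van Kampen’s derivation): each of N environment spins ends up in a slightly branch-dependent state, and the coherence between the two branches shrinks as the product of the single-spin overlaps.

```python
import numpy as np

# Toy sketch (my illustration): a spin in a 50:50 superposition couples
# to N environment spins. In one branch each environment spin ends in
# |e_up>, in the other in |e_down>, with per-spin overlap
# c = <e_up|e_down> < 1. The off-diagonal (coherence) element of the
# spin's reduced density matrix is then suppressed by c**N, i.e.,
# exponentially in the particle number N.
def coherence(N, theta=0.3):
    """|off-diagonal element| after coupling to N environment spins."""
    e_up = np.array([1.0, 0.0])
    e_down = np.array([np.cos(theta), np.sin(theta)])
    c = abs(e_up @ e_down)  # per-spin overlap, here cos(theta)
    return 0.5 * c**N       # prefactor 1/2 from the 50:50 superposition

for N in (1, 10, 100):
    print(N, coherence(N))  # decays exponentially with N
```

Even a modest per-spin overlap below one makes the coherence astronomically small for macroscopic N, which is why large systems look classical so robustly.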

Now, the only missing puzzle piece is to realize that our human senses naturally limit us to coarse and slow observables, i.e., observables that decohere. Therefore, our world appears classical. Note that this fact does not change by our enhanced capabilities to measure super fast quantum processes very precisely, e.g., at CERN. Because at the end of the day, when we want to communicate about them, we need to store information in stable memories (for instance, in a hard drive or as ink on paper), and those things are by definition coarse and slow.

Another good question: How many worlds are out there?

Beyond the naive answer “many”, this question should be answerable precisely with decoherence theory, but I think nobody has yet given a good answer to it: neither a rough estimate for the actual (observed) Universe nor a clear-cut answer for some toy model. So, there is much to explore here.

Unfortunately, debates about the MWT (even scientific ones) are still plagued by questions that, in my view, are either meaningless or unfair, or both. Here is an example:

Bad question: When does a world split into new worlds?

As explained above, the definition of world requires some coarse graining. In particular, since only slow observables decohere, there is also some coarse graining in time. Surely, it would be possible to define some decoherence threshold and to say “now the world has split according to this threshold”, but it would be kind of arbitrary, similar to answering the question: “At which femtosecond did it actually start to rain?”

Moreover, this question is also unfair, in the sense that CT has the very same problem: when does the collapse happen? There is no known answer to it. In fact, what is known is that an instantaneous collapse violates special relativity, and even without any relativistic considerations any realistic quantum measurement must take a finite time that one can even estimate.

Another bad question: How does it feel when the world splits? Why can I not feel the other worlds?

OK, there is much one could say here, but perhaps the shortest answer is simply: we have no physical theory of what “feelings” or “emotions” mean anyway! So, please don’t ask something that is (currently) not answerable in any physical theory or interpretation. Indeed, I think we should first take any “human” elements out of the observer to understand the MWT. Just try to make sense of the MWT within a lifeless universe, where some systems function as a memory or detector, but without any “feelings”. As we will now see, this is already challenging enough.

The UNSOLVED probability problem

So, here comes a truly worrying and unsolved problem for the MWT, where it is not even clear how to approach it qualitatively. It concerns the origin of probabilities and why they obey Born’s rule. This is a problem that one needs to solve within the MWT because, remember, MWT is based on P1 and P2 only (which do not mention probabilities) and rejects P3, P4 and P5 as fundamental (where all the probabilities appear).

Needless to say, many interesting approaches have been put forward to solve this “probability problem”. Yet, it is also fair to say that none of them has convinced its opponents, not to mention that even the proposed solutions show a remarkable diversity. Moreover, as far as I can tell, all those approaches add some additional metaphysical postulate PX, which cannot be derived from P1 and P2 alone. So they put MWT = P1 + P2 + PX, but perhaps that’s the only way to go… In any case, readers interested in all the different (counter)arguments can find a fairly balanced account in this book. If you ask me, I think most of these proposals fall short of being satisfactory because they rely, in one form or another, on concepts like agents, rationality, decisions, choices, etc. But to me the entire purpose of the MWT is to get rid of agents! There is no room for decisions and choices within a deterministically evolving wave function. The question precisely is whether a set of dull and bloodless detectors records outcomes in unison with Born’s rule or not. There is no place for rationality as far as I can see.

To be more precise about the problem, I want to end this post by talking about the theory confirmation problem. To this end, suppose you want to confirm Born’s rule from within the Multiverse. How would you do this? Well, obviously, you would perform many repeated trials, trying to carefully pay attention that different trials are independent, and check whether the outcome frequencies match Born’s rule within some statistical tolerance.

Unfortunately, within the standard reading of MWT this causes a huge problem. At each trial the Multiverse would split, producing one branch where you have seen outcome “0” and one branch where you have seen outcome “1” (I’ll restrict the discussion to a binary setting for simplicity). So, after L trials the many worlds tree will contain one branch for every possible sequence (x_L, …, x_2, x_1) with x_i ∈ {0,1}. Question: how many branches are there which have seen n “1”s after L trials? Well, it’s the binomial coefficient

$$\binom{L}{n} = \frac{L!}{n!\,(L-n)!}.$$

And now comes the bummer. In the limit of very large L, most branches will have seen approximately n ≈ L/2 many “1”s, and they would thus associate a probability of roughly 50% to seeing outcome “1”. Now, that is certainly not a problem for our example above, where we prepared a 50:50 superposition of spin up and down. But for any other superposition it is a huge problem: most branches will sample frequencies at odds with Born’s rule unless one looks at a 50:50 superposition! So, how can it be that we see Born’s rule in our lab? This problem almost freaked me out, and I guess the same happened to many other researchers working on the MWT. So, can we rescue Born’s rule and solve the theory confirmation problem?
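The branch-counting argument is easy to verify numerically. A small sketch (my own illustration): count what fraction of the 2^L equally-counted branches record a “1”-frequency within ε = 0.05 of 1/2.

```python
from math import comb

# Naive branch counting: of the 2**L branches after L binary trials,
# how many have a "1"-frequency n/L within eps of 1/2?
def fraction_near_half(L, eps=0.05):
    near = sum(comb(L, n) for n in range(L + 1)
               if abs(n / L - 0.5) <= eps)
    return near / 2**L

for L in (10, 100, 1000):
    print(L, fraction_near_half(L))  # approaches 1 as L grows
```

Note that this concentration at n ≈ L/2 is completely independent of the amplitudes in the superposition, which is exactly the problem.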

Well, all previous approaches that I know would claim that there is something wrong with simply counting the number of branches as I did above. Indeed, this “counting argument” completely neglects that each branch in the wave function has a different norm (and this squared norm gives precisely the Born probabilities). So, if you have a superposition of decohered branches such as

$$\alpha\,|\text{branch “0”}\rangle + \beta\,|\text{branch “1”}\rangle$$

you should not claim that there is one branch with outcome “0” and one branch with outcome “1”. But, then, how are you supposed to count them? Unfortunately, the following prescription does not work:

Moreover, be aware that, by writing down the total wave function, we are employing a god-like bird’s eye perspective here. The theory confirmation problem, however, asks you to confirm Born’s rule from within the Multiverse. From within the Multiverse you can only access the branch that is compatible with your results (your “relative state” according to Everett). In particular, the coefficients in front of your branch and all the other branches are completely unknown to any observer within the Multiverse. The only thing you know is that the coefficient of your branch is non-zero, that’s it!
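For contrast, one can repeat the counting exercise but weight each branch by its squared norm, its Born weight. This is a hedged numerical sketch of that standard observation (my own illustration, not a solution of the problem): with p = |β|² the Born probability of outcome “1”, the weight concentrates on branches whose frequencies match p.

```python
from math import comb

# Weight each of the comb(L, n) branches with n "1"s by its squared
# norm p**n * (1 - p)**(L - n), where p = |beta|**2.
def born_weight_near(L, p, center, eps=0.05):
    """Total Born weight of branches with "1"-frequency within eps of center."""
    return sum(comb(L, n) * p**n * (1 - p)**(L - n)
               for n in range(L + 1) if abs(n / L - center) <= eps)

L, p = 1000, 0.1
print(born_weight_near(L, p, center=p))    # near n/L = p: close to 1
print(born_weight_near(L, p, center=0.5))  # near n/L = 1/2: negligible
```

So almost all of the squared norm sits on the Born-rule branches, even though almost all of the equally-counted branches sit at n ≈ L/2. The unresolved question is why the squared norm, and not the branch count, should govern what observers see.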

The more I thought about this problem, the more it drove me up the wall, but then I realized that there is another hidden assumption behind the reasoning above. Namely, I have tacitly assumed that after L trials all the branches “exist”, i.e., are decohered and admit a classical observer in accordance with our definition of a “world” given above. Let me call this way of thinking naive branch realism: the idea that decoherence proliferates together with the branches, independently of how often the wave function splits. The emerging many worlds “tree” is illustrated below (I’m sure you have seen it before); it is the de facto standard way of discussing the MWT among both its proponents and opponents (see, for instance, again the book).

Indeed, it is somewhat known that decoherence cannot persist on all branches in the limit of large L, in some sense because the Hilbert space is no longer big enough to effectively decohere the exponentially growing number of 2^L branches. But no detailed investigation of this has ever been performed.

Now, what I found in recent work with my colleagues Teresa Reinhard and Joey Schindler is the following. First, for relatively small L things decohere as expected. In fact, what we found (and generally conjecture) is an exponential suppression of coherences as a function of the particle number of the system. Second, if L becomes very large, branches start to recohere, but, very remarkably, they do not recohere in the same way. Instead, there is a structure arising in which a few selected branches remain decoherent. Now, guess which branches remain decoherent! Exactly those that sample frequencies according to Born’s rule!

So the picture that emerges is somewhat like the one below, where the amount of decoherence is proportional to the darkness of the branch.

This, of course, is just a sketch. If you want to see how it really looks, you can check out the following little animation[Thanks for that, Joey!] on YouTube, in which again the shading is proportional to the degree of decoherence (black/white = maximally decoherent/coherent). Note that the many worlds tree does not split exponentially here because each dot keeps track only of the net number n ∈ {0, 1, …, L} of “1”s after L trials (hence, the number of worlds grows linearly here).

https://www.youtube.com/watch?v=oKxvtqql6Ik

Of course, all this is just some preliminary evidence that we obtained for a simple toy model that we could exactly solve numerically. It might be completely off from the real solution. But at least it illustrates three nice features:

  • We have all the tools to perform theoretical physics research about the MWT instead of only philosophizing about it.
  • It is certainly possible that the Multiverse has some non-trivial structure, in contrast to the structureless tree of parallel worlds as it was so far depicted.
  • To reveal this structure there is no need for agents, rationality, decisions, or other questionable arguments.

Summary: Everything everywhere all at once?

I have reviewed the MWT and hopefully convinced you that it is not as crazy as you might think. In fact, it is undeniable that one can perform rigorous and serious (and much needed) research on it.

Moreover, if you have a strong realist or materialistic stance towards physics and exclude hidden variables[which have other problems], I believe that current evidence speaks clearly in favor of the MWT compared to any explicit CTs.

Nevertheless, the probability problem is unsolved and continues to haunt the Multiverse. But even if one needs an additional postulate to solve it, the MWT would still win over the CT from the perspective of both Occam’s razor and its empirical success. However, perhaps the probability problem also teaches us that a realist-materialistic stance is not advisable in the first place?

Finally, while I believe much of what I have said above can also be found in other sources, I hopefully could make the novel point that you shouldn’t think about the Multiverse as if “everything everywhere all at once” happens. This naive branch realism might only apply to short periods of time, whereas the actual Multiverse could possess a rich and non-trivial structure.


Acknowledgements: I would like to thank my colleagues Teresa E. Reinhard and Joseph Schindler for working with me on these topics. My views and approaches on this topic have also been shaped by collaborations and/or discussions with Anthony Aguirre, Josh Deutsch, Jochen Gemmer, Kavan Modi, Michalis Skotiniotis, Jiaozi Wang and Andreas Winter.

References:

  • Wikipedia, Quantum decoherence (accessed October 2023).
  • N. Van Kampen, Quantum statistics of irreversible processes, Physica 20, 603–622 (1954).
  • H. Everett, “Relative State” Formulation of Quantum Mechanics, Rev. Mod. Phys. 29, 454–462 (1957).
  • B. S. DeWitt and N. Graham, eds., The many-worlds interpretation of quantum mechanics, Vol. 63 (Princeton University Press, Princeton, 1973).
  • J. J. Halliwell, A Review of the Decoherent Histories Approach to Quantum Mechanics, Ann. (N.Y.) Acad. Sci. 755, 726–740 (1995, arXiv).
  • J. J. Halliwell, Somewhere in the universe: Where is the information stored when histories decohere?, Phys. Rev. D 60, 105031 (1999, arXiv).
  • S. Saunders, J. Barrett, A. Kent, and D. Wallace, eds., Many Worlds? Everett, Quantum Theory, and Reality (Oxford University Press, Oxford, 2010).
  • P. Strasberg, K. Modi, and M. Skotiniotis, How long does it take to implement a projective measurement?, Eur. J. Phys. 43 035404 (2022, arXiv).
  • P. Strasberg, A. Winter, J. Gemmer, and J. Wang, Classicality, Markovianity, and local detailed balance from pure-state dynamics, Phys. Rev. A 108, 012225 (2023, arXiv).
  • P. Strasberg, T. E. Reinhard, and J. Schindler, Everything Everywhere All At Once: A First Principles Numerical Demonstration of Emergent Decoherent Histories, arXiv 2304.10258.
  • P. Strasberg and J. Schindler, Shearing Off the Tree: Emergent Branch Structure and Born’s Rule in the Multiverse, arXiv 2310.06755.
