How David Bohm and Hugh Everett changed quantum theory (jstor.org)
72 points by anarbadalov 3 months ago | 125 comments



I think interpretations fall more within the realm of philosophy. What physicists want are theories that can yield correct and precise predictions. For instance, if we have, let's say, 10 different interpretations of QM, all of them unfalsifiable, yet all of them providing the same accurate physical predictions, then in terms of theory they are equally suitable approximations of reality. What is of greater interest are innovative theories that can generate improved and more specific predictions, such as those for higher energy scales where gravity comes into play and other areas. Therefore, as a physicist, I would concentrate on developing theories that can yield more accurate and precise predictions.


I wouldn’t say it’s philosophy so much as a split in what you think the goal of physics is. Is the goal to create the most accurate models or to understand how the universe actually works?


The notion that we can fully understand how the universe works may be an overly optimistic belief. There's no inherent reason to assume the universe is structured in a way that human brains can comprehend. Even if we are fortunate enough to have a universe that is understandable by humans in principle, there is no guarantee that a falsifiable theory that explains everything exists. Some people assume that we could eventually formulate a theory of everything or have a complete understanding of QM, but there is no scientific evidence to support this belief. All we have are equations, like the Schrödinger equation, which provide accurate predictions of physical phenomena.


You are describing philosophy:

because we can’t measure the goals anywhere in the universe; the universe (in the form of humans) has to decide with its consciousness what is more valuable (this is philosophy).


The storyline that “science now accepts their ideas” is questionable editorializing imho. Neither Bohmian mechanics nor many worlds are falsifiable, they are just interpretations. They are cool but they aren’t really something you can reject. And to be honest, an unscientific sample of the physicists I know has the majority basically subscribing to some version of “shut up and calculate”


Sean Carroll claims many-worlds is “super duper falsifiable”

“ Right, so speaking of which, look, many-worlds says there is a wave function or a state vector, it evolves all the time under the Schrödinger equation. So all you need to do to falsify the many worlds-interpretation is to do an experiment where the wave function is not under the Schrödinger equation. These experiments are ongoing. Roger Penrose makes predictions that we should see them. There’s other theories of objective collapses that says we should see them. So that’s just one way. Also, you could find evidence for dynamical variables other than the wave function, ’cause those don’t exist in many-worlds. So there’s plenty of ways in which you could experimentally do these things. They’re hard experiments to do, and they may never converge on anything, but they’re there in principle. If you care about the philosophy of it rather than the practice of it, there’s zero question that many-worlds is completely 100% super-duper falsifiable.”

https://www.preposterousuniverse.com/podcast/2021/04/14/ama-...


> all you need to do to falsify the many worlds-interpretation is to do an experiment where the wave function is not under the Schrödinger equation.

Which, Carroll conveniently fails to note, happens every time we make a measurement on a system that is not in an eigenstate of the observable being measured. For example, every time an H polarization measurement is made on a V+ polarized qubit, which happens all the time in quantum computing, the result is either H+ or H-, with 50% probability of each; but before the measurement the qubit was in state V+. The time evolution from V+ to H+ or H- is not time evolution under the Schrodinger equation.
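
To spell out the arithmetic, here's a toy numpy sketch of my own (I'm reading "H±" and "V±" as two measurement bases at 45 degrees to each other, which is the standard qubit picture I have in mind):

    # Born-rule probabilities for the example above.
    # Assumption (mine): "V+" is the equal superposition of the two "H" basis states.
    import numpy as np

    H_plus  = np.array([1.0, 0.0])
    H_minus = np.array([0.0, 1.0])
    V_plus  = (H_plus + H_minus) / np.sqrt(2)

    # Born rule: probability of each outcome is |<outcome|state>|^2
    p_plus  = abs(np.vdot(H_plus,  V_plus)) ** 2   # 0.5
    p_minus = abs(np.vdot(H_minus, V_plus)) ** 2   # 0.5
    print(p_plus, p_minus)

    # The post-measurement state is H_plus or H_minus. No single unitary U can
    # send V_plus to H_plus on some runs and to H_minus on others: unitary
    # (Schrodinger) evolution of the qubit alone is deterministic, so the
    # stochastic jump is not Schrodinger evolution of the qubit by itself.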

MWI proponents explain this away by saying there is another branch of the wave function where the result is the other one (for example, H- if we observed the result H+), so that the whole time evolution is still under the Schrodinger equation. (Note that "the whole time evolution" here has to include the entire universe, not just the qubit being measured.) But MWI proponents also have to say that this other branch of the wave function is in principle unobservable. In other words, the MWI itself says it is not falsifiable.


There are objective collapse theories that lead to different physical predictions. Copenhagen is not actually a well defined theory because it does not define measurement.

If we measured objective collapse it would falsify MWI.


> There are objective collapse theories that lead to different physical predictions.

Yes, that's true. So far all of the ones whose predictions can be tested have failed the tests. But if one were to succeed, yes, that would falsify the MWI.


In a general, philosophical sense it probably is non-falsifiable. But... where does it end?

What if we took the measurement 5ns, 500ms, 5 seconds, 5 hours later, because of, you know, things. Billions of universes springing up every second does not seem plausible to me and it does sound like an escape hatch: we don't know why it happened that way, so let's just say there is another unobservable universe where it happened the other way round.

Which is nice, I don't have to think about it anymore. But... it kind of reminds me of the flying arrow or the tortoise you can never catch. If you are thinking in the same problem space you will never reach the correct conclusion. So, what we really need is a different mode of thought (I wish I knew what to suggest ;) ), not an escape hatch.


it’s not “billions of universes springing up”, it’s just simple wavefunction evolution according to the laws we already know.

“collapse” seems much more arbitrary, especially as we are able to construct larger and larger objects that are best described by wavefunction mechanics. it seems obvious we just become entangled with the experiment through decoherence. idk, it probably is non-falsifiable


> Billions of universes springing up every second does not seem plausible to me

Why? Do you have any reason for this, or is it just a gut feeling?

If it’s just a gut feeling, I’ll note that this was exactly the reaction most of the intelligentsia had to Copernicus and Galileo. Turns out the universe is a vaster place than we previously knew.


>Why? Do you have any reason for this, or is it just a gut feeling?

MWI says these universes are actual, concrete physical states, which the universe must somehow differentiate by discernible information. But you don't get this multiplicity of state/information for free. This is a theory cost that is as profligate as one can imagine. It is magical thinking to imagine that the universe gets this for free.


Every quantum physics interpretation agrees that these universe-branches multiply for any isolated quantum system until a human makes a measurement of that system. They only disagree on what happens when and after a measurement is made.

Even the Copenhagen interpretation agrees that if you do the two-slit experiment with a single photon, that one photon will traverse all trajectories to reach the target screen.


>until a human makes a measurement of that system.

No one sane thinks human observation has anything to do with measurement in QM anymore.


Then I'll take this opportunity to ask someone who presumably considers themself sane: How do you determine what kind of interaction counts as a measurement?


Let me say that my previous comment was unfair. It's unfair to say that no one sane would consider quantum measurement related to consciousness. Rather, it's an extremely fringe position at this point.

It's certainly a deep mystery of nature and I don't claim to have an answer. Quantum erasure experiments strongly suggest to me that measurement has to do with information about the system that entails constraints on the evolution of the system. Nature seems to abhor inconsistency. If the state of the system is such that one can in principle derive constraints on its evolution up to this point, then nature ensures its evolution up to this point is consistent with that information. A measurement then is just any interaction with the property of imparting a constraint on the evolution of the system up to now.


But that seems circular; what determines whether an interaction imparts a constraint, or doesn't?

Let's say that I send a photon through a beam-splitter which will deflect its momentum either right or left. QM predicts that the photon at the output of the beamsplitter will be in a superposition of both trajectories, and we can prove this by adding mirrors that recombine both possible output beams and generate an interference pattern. The interference pattern will be detected by a sensitive photodiode array, capable of detecting individual photons.

Would you say the interaction with the photodiode array counts as a measurement? And the interaction with the mirror does not? How does QM know whether a photon interaction with a bunch of atoms in a mirror coating, vs in a silicon diode, creates a constraint on the evolution of the system? Why is one interaction measurement-like in nature, but not the other? Is there some property of silicon diodes that make them immune to being put into quantum superpositions?

I would have said that quantum erasure experiments make by far the most sense if you consider measurement to never happen, and the result is merely a superposition of both outcomes. The photon is put into a superposition of both states when it reaches the beamsplitter, the detector screen is put into a superposition of both states when the superposition-photon reaches it, and so are you in turn when you observe the detector screen.
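
If it helps, here's a toy numpy version of that interferometer (my own simplification: the 50/50 beamsplitter modelled as a Hadamard, the "which-path record" as one extra qubit). Nothing but unitary evolution, and the interference disappears as soon as anything records the path:

    import numpy as np

    BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # ideal 50/50 beamsplitter (toy model)
    photon_in = np.array([1.0, 0.0])                # photon enters port 0

    # No which-path record: two beamsplitters recombine the paths coherently.
    out = BS @ BS @ photon_in
    print(abs(out) ** 2)                            # ~[1, 0] -- full interference

    # Couple a one-qubit "marker" to the path after the first beamsplitter
    # (a stand-in for any interaction that records which arm was taken).
    marker0 = np.array([1.0, 0.0])
    marker1 = np.array([0.0, 1.0])
    mid = BS @ photon_in
    state = mid[0] * np.kron([1, 0], marker0) + mid[1] * np.kron([0, 1], marker1)

    # Second beamsplitter acts on the photon only.
    out2 = np.kron(BS, np.eye(2)) @ state
    probs = abs(out2) ** 2
    print(probs.reshape(2, 2).sum(axis=1))          # [0.5, 0.5] -- interference gone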


> Why is one interaction measurement-like in nature, but not the other?

The mirror reflects all the energy of the photon whereas the photodiode absorbs the energy and its state is altered by it. How nature knows the difference is a deep question. Presumably the rules governing nature ensure this consistency. What that looks like is anyone's guess.

>I would have said that quantum erasure experiments make by far the most sense if you consider measurement to never happen

I agree, but then this interpretation comes with the exponentially growing cost of actual branching. In my view, this rationally must be the interpretation of last resort, for reasons stated in the other threads.


Counterintuitively, the best sensors actually behave just like mirrors themselves, re-emitting a reflected photon rather than absorbing any of the energy of the incoming signal. This increases the signal-to-noise ratio, because the E-field magnitude at the sensor is doubled by the constructive interference of the incoming and reflected photon.

And as mentioned elsewhere, all QM theories do have to deal with exponentially increasing branching. A cost that apparently increases if we grind up photodiodes and turn them into mirrors. If wavefunction branching is a problem, how does QM get away with it when nobody is looking?

Deep questions usually have simple answers! The simplicity is what makes the answer difficult to accept.


> the best sensors actually behave just like mirrors themselves, re-emitting a reflected photon rather than absorbing any of the energy of the incoming signal.

Presumably some state is changed somewhere in the system to indicate a detection.

>If wavefunction branching is a problem, how does QM get away with it when nobody is looking?

Do we know that branching is unbounded when we're not looking? Is there a detectable difference in a large, presumably entangled system and its classical counterpart?


> Presumably some state is changed somewhere in the system to indicate a detection.

Yes, but "state change" happens all the time! When the photon bounces off the mirror, even if 100% of its energy is reflected, it imparts momentum to the mirror. When the photon propagates through free space, it "changes state". All state changes occur exactly in accordance with the time-evolution of the Schrodinger equation.

> Do we know that branching is unbounded when we're not looking?

This is a basic principle of QM, which would have pretty measurable macroscopic impact if it were invalid. "Branches" are not even finite in number, even for a single particle in free space; its position and momentum distributions are both continuous, with a minimum possible area guaranteed by the Heisenberg uncertainty principle, and even so much as precisely localizing its momentum already creates an infinity of equally-likely possible positions.
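
For anyone who wants to see that "minimum possible area" numerically, here's a quick toy check of my own (hbar = 1, Gaussian wave packet; nothing interpretation-specific):

    import numpy as np

    x = np.linspace(-50, 50, 4096)
    dx = x[1] - x[0]
    sigma = 2.0
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt((abs(psi)**2).sum() * dx)           # normalize

    def spread(values, density, step):
        # standard deviation of a (discretized) continuous distribution
        density = density / (density.sum() * step)
        mean = (values * density).sum() * step
        return np.sqrt(((values - mean)**2 * density).sum() * step)

    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)       # momentum grid (p = hbar * k)
    dk = k[1] - k[0]
    phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)    # momentum-space amplitude

    # product of the position and momentum spreads: ~0.5, i.e. hbar/2
    print(spread(x, abs(psi)**2, dx) * spread(k, abs(phi)**2, dk))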


>Yes, but "state change" happens all the time! When the photon bounces off the mirror, even if 100% of its energy is reflected, it imparts momentum to the mirror.

It depends on what you consider the system. For a "measuring apparatus" to signal the outcome of a measurement, its internal state must change in some structured way. That is, the configuration of its atoms must change in a discernible way as to indicate one outcome over another. Perhaps the change in momentum is too diffuse or too noisy to result in an informative state change with implications for collapse. Perhaps one's context is relevant to whether a state change results in collapse (e.g. Wigner's friend). I'm more inclined to deny a universal definite reality than to accept many-worlds.

> "Branches" are not even finite in number, even for a single particle in free space

This only has problematic metaphysical implications if we assume the definite reality involves particles rather than waves or some indeterminate in-between. Branching due to "measurement" appears to be different owing to its problematic metaphysical implications. The question I'm asking is do we have experimental evidence of "very large" macroscopic entanglement?


> It depends on what you consider the system.

It certainly shouldn't! If wavefunction collapse happens for a photodiode but not for a mirror, that means that there should be some objective law of nature that says "if you arrange silicon atoms in a flat, polished plane, the wavefunction does not collapse, but if you dope them with phosphorus and boron, and apply an electric field, it does collapse". Whether the wavefunction collapses or not shouldn't depend on what we consider the system to be.

> in a discernible way.

This is passing the buck. What makes a change discernible? You know there's only one place the buck stops: The photon frees an electron in the photodiode, which creates a small flow of current; that current is wired to an amplifier, which flips a MOSFET gate, triggering the flow of charge into a capacitor, which results in a voltage that exceeds a comparator threshold, which triggers a software interrupt in a microprocessor, which increments a digital counter, which alters the light output in an LED screen, which a human observes and scribbles in their lab book.

At which point did the system change become "discernible", and change from a quantum wavefunction to a classical-but-randomly-behaved outcome? MWI provides an answer: when the observer became entangled with the system it was observing.
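
Here's a toy model of that chain (my own sketch, not anything canonical): a qubit in superposition is CNOT-copied into a series of "pointer" qubits standing in for the diode, amplifier, counter and observer. Everything evolves unitarily, yet the reduced state of the last link is a plain 50/50 mixture with no off-diagonal coherence, i.e. from the inside it looks like a definite outcome:

    import numpy as np

    def cnot(n_qubits, control, target):
        # permutation matrix that flips `target` when `control` is 1
        dim = 2 ** n_qubits
        U = np.zeros((dim, dim))
        for i in range(dim):
            bits = [(i >> (n_qubits - 1 - b)) & 1 for b in range(n_qubits)]
            if bits[control]:
                bits[target] ^= 1
            j = sum(bit << (n_qubits - 1 - b) for b, bit in enumerate(bits))
            U[j, i] = 1
        return U

    n = 4                                        # system + 3 pointer systems
    state = np.zeros(2 ** n); state[0] = 1       # |0000>
    Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.kron(Had, np.eye(2 ** (n - 1))) @ state   # system into superposition
    for q in range(1, n):
        state = cnot(n, q - 1, q) @ state                # each link copies the previous

    # Reduced density matrix of the last "observer" qubit
    rho = np.outer(state, state.conj())
    rho4 = rho.reshape(2 ** (n - 1), 2, 2 ** (n - 1), 2)
    observer = np.trace(rho4, axis1=0, axis2=2)
    print(observer.real)                         # [[0.5, 0], [0, 0.5]] -- a classical-looking mixture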

Nature doesn't even 'know' there's a photodiode there, or a human observing the screen. It's just a sea of fermions and bosons all doing their thing, with various positions and velocities. If some of them are arranged in a way that we call a "photodiode", that's only our orderly ontology imposed on the messiness of the natural world. The photodiode itself is merely a bubbling sea of quantum particles, just like its surroundings.

> The question I'm asking is do we have experimental evidence of "very large" macroscopic entanglement?

How large is "very large"? Is the Cosmic Microwave Background big enough? [1][2]

According to MWI, the classical-physics behavior we observe in macroscopic objects, and the unpredictable decay of nuclear isotopes, are exactly what we would expect from macroscopic entanglement. So all you have to do to quantum-entangle a cat is to literally put it in a box with a geiger counter and a uranium puck. But in order to prove something is quantum entangled by the standards of Copenhagen, you need to be able to create an interference pattern that an instrument can record, which necessarily requires that the instrument cannot itself be entangled with the system it's measuring. Otherwise it won't measure an interference pattern, it will measure (randomly but decidedly) one result or the other.

> I'm more inclined to deny a universal definite reality than to accept many-worlds.

What you apparently are inclined to deny is the notion that you could ever be put in a quantum superposition yourself, and that this would merely feel like observing one particular nondeterministic outcome out of a deterministic distribution of possible outcomes. Superpositions for thee, but not for me!

[1] https://arxiv.org/pdf/2106.15100v1

[2] https://www.worldscientific.com/doi/abs/10.1142/S02182718110...


>Whether the wavefunction collapses or not shouldn't depend on what we consider the system to be.

It's not about consideration in terms of choice of what constitutes a system; the systems are defined by sensitivity to informative states. It's like a quantum analog to the light cone, the information boundary defines the system. But this information boundary is much more elaborate and dependent on the dynamics of the structures involved. In this case, the mirror imparts a different information dynamic than the diode and so has different implications for the information boundaries.

What the rules are that govern the interactions that integrate the informative states of two systems isn't obvious. Funnily enough, I've thought a lot about this problem in an entirely different context. Somewhat surprising that it comes up naturally in quantum mechanics. I don't have any fully worked out details, but I do have some intuition.

The issue is related to supervenience of macro structures on micro details. The macro structures can carry information (i.e. have mutual information) about external systems; this is how two systems are informationally integrated. A measurement is when the macrostate of one system becomes correlated with the microstate of another system.

The correlation is between macro and micro because for state to be useful, it must interact in a specific way so as to result in a macrostructure correlation that can in principle be computed with. It's the information analog of free energy. Not all interactions produce state that one can compute with. Information integration requires precisely this kind of state. I suspect the exact nature of this kind of interaction can be characterized in principle.

>What you apparently are inclined to deny is the notion that you could ever be put in a quantum superposition yourself, and that this would merely feel like observing one particular nondeterministic outcome out of a deterministic distribution of possible outcomes. Superpositions for thee, but not for me!

I don't deny that I could, I just don't seem to be. And the interpretation that coheres with unobserved superposition I reject for external reasons. I don't have any qualms about the implications for personal identity from MWI (assuming that's what you're getting at).


well, it would not be billions of universes, it would basically be an infinite number of universes popping up every second. Which, while interesting, does not seem possible under any law of conservation of mass?


They're not popping up, the wave function simply branches. In Copenhagen you cut off the branches, in MWI you do not. Total energy in MWI is the weighted sum of the energy of all branches (weighted according to Born's rule). If one could dismiss MWI this easily, there wouldn't be any serious proponents of it and we wouldn't have to have this discussion.
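
If it helps, here is a toy numerical check of that weighted-sum statement (a random 6-level Hamiltonian of my own choosing, hbar = 1): the Born-weighted sum of branch energies is just <H>, and unitary evolution leaves it constant, so nothing is "created" when branches form.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    Hmat = (A + A.conj().T) / 2                    # toy Hermitian "Hamiltonian"
    E, V = np.linalg.eigh(Hmat)                    # branch energies and eigenstates

    psi = rng.normal(size=6) + 1j * rng.normal(size=6)
    psi /= np.linalg.norm(psi)

    weights = abs(V.conj().T @ psi) ** 2           # Born weights of each energy branch
    print(weights @ E)                             # weighted sum of branch energies
    print(np.vdot(psi, Hmat @ psi).real)           # <H> now: the same number

    # evolve under e^{-iHt} via the eigenbasis; <H> is unchanged
    psi_t = V @ (np.exp(-1j * E * 3.7) * (V.conj().T @ psi))
    print(np.vdot(psi_t, Hmat @ psi_t).real)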


Occam's razor?


Occam's razor cuts both ways :-)

When some people hear that MWI implies that there are many worlds and that the wave function branches, they think that implies that MWI is "adding" something, so Occam's razor should work against it.

OTOH what MWI postulates is that the Schrodinger equation is all there is, and the collapse of the wave function is observed because the observer is entangled with the system it measured.

In order to make intuitive sense of what's going on, MWI invokes the "worlds" and the "branching", but that's just a way for us to grasp how it would feel if we ourselves are part of the entangled system. But like many analogies whose purpose is to tickle the intuition, it doesn't work for everybody and may be confusing the discourse more than helping it.

But the thing is: the Everett interpretation is simpler and it requires fewer additional rules and mechanisms to explain what we observe. So Occam's razor should favor it.


Simplicity in terms of weighing theory cost isn't about rules, but about explanatory posits. MWI posits exponential growth in discernible state about which sufficient information must be carried by whatever grounds the wavefunction. But you don't get this multiplicity of state/information for free. This is a theory cost that is as profligate as one can imagine. It is magical thinking to imagine that the universe gets this for free.


I don't know where the expression comes from, but "the only numbers that need no justification are zero and infinity". Think of it this way, if you found a coordinate system within the universe that goes from 0,0 to 100,100, you'd have to ask "but why exactly 100?" whereas it would be less surprising if the system had no apparent limit and just could go to infinity on any axis.

The multiplicity of MWI is like that: just a maximally extended coordinate system.


A coordinate system is an abstraction, whether anything has some large coordinate is irrelevant to the coordinate system itself. But my understanding of MWI is that there is discernible state being described by the Schrodinger equation that is taken as "real" or "concrete" by MWI. If we considered the non-actual branches as merely abstract coordinates, then there would be no problem with profligate state and its associated theory cost. But then it wouldn't be many worlds, but rather some kind of collapse theory.


Wavefunction state is just as “real” in the Copenhagen interpretation. There cannot be any hidden variables, it is not merely a mathematical abstraction. You then must posit this additional process of collapse

Where the MWI takes on additional theory burden is where it explains how the Born rule translates to our lived experience - i.e. why do we only experience one path of the wavefunction.


I have no clue what you mean by “theory cost” but MWI is just describing existing evolution of the wave function.

I think the measurement problem is a much bigger ‘theory cost’ to Copenhagen than just assuming wavefunction mechanics continues even after measurements are made.


Theory cost just means some quantity by which we compare theories and judge one to be more parsimonious. A theoretical posit has some cost associated with it. MWI posits actual discernible states, each of which is a unique posit and thus a unique contribution to the theoretical cost of the whole. This state grows exponentially which implies an exponentially growing cost. This is as profligate as one can imagine. Copenhagen does not have an exponential cost as the non-collapse branches do not have a concrete reality under Copenhagen. The cost associated with the new collapse construct pales in comparison.


occam's razor is an abstract principle, so it is hard to really quantify what counts as additional 'theory burden'. i simply don't agree that this "actual discernible states, each of which is a unique posit and thus a unique contribution to the theoretical cost of the whole" is an actual way of judging theory burden vs. just "the rules we've already discovered continue to apply at large-scale and chaotic systems" which i would say is very low theory burden

Copenhagen isn't even clear on if the states have a concrete reality or not - there certainly are no hidden variables, at least.


There's a certain amount of intuition and taste involved when assigning theoretical costs, but that's no reason not to do it. We have general principles with some rational justification for preferring certain features of theories over others. Simplicity in terms of "not multiplying entities without need" is one such principle.

A discernible state is an entity. More discernible states require more entities or more discernible features of an entity. This is the only way to make the concept of discernible state intelligible. To deny this is just to engage in magical thinking. But each posit has a cost associated with it. We don't know what that cost is exactly (relative to other theories with different features). But we do know that an exponential growth in discernible states overshadows the theoretical cost of the whole regardless of how we quantify the cost of any individual posit. MWI is almost maximally profligate; any empirically adequate theory without this cost should be preferred.


to me, this is like saying germ theory of disease has higher theoretical burden than miasma theory of disease because it posits all of these individual viruses and bacteria around us everywhere.

"sure, we can observe bacteria and viruses in the lab individually - but when we move to the physical world disease spread is described by miasma which is simpler and thus this is the more parsimonious theory." there are fewer entities in miasma - but you still have to posit this additional thing (the miasma) on top of the already observed viruses and bacteria.

similarly, i don't agree that 'count of entities' is the correct criterion for what makes a theory more or less parsimonious/occam's razor preferred. it is about the number of rules even if there is a single rule that creates exponential entities. Copenhagen would also predict this 'exponential branching' in the system under observation - it just proposes that it collapses when measured by the apparatus: an additional rule that makes it less parsimonious than many worlds


It's hard to compare theories that don't have well-developed mechanisms. If we're naive theorists comparing miasma vs tiny organisms, but you have no sense of how those organisms operate, it's functionally just a comparison between one miasma and many miasmas, i.e. one vs. many nebulous non-mechanistic effects. In that case one miasma would be preferable. But that's not a demerit to the concept of minimizing entities as a theoretical virtue.

>i don't agree that 'count of entities' is the correct criterion for what makes a theory more or less parsimonious/occam's razor preferred.

I mean, it's literally in the original formulation of Occam's razor: "plurality should not be posited without necessity". But there are many ways to justify this. My preferred argument is that fewer entities means fewer resources with which to "bake in" the explananda into the theory. The more knobs to turn to generate your empirically adequate theory, the less likely your theory will capture reality.

>[Copenhagen] just proposes that it collapses when measured by the apparatus

Measurement isn't limited to an "apparatus". Collapse of superposition is a feature of large scale interactions. While there is branching, in practice it is bounded because interactions tend to cause collapse.


Well. They're not discernible states if they are in another world. That's the very definition of "another world".

From an outside viewer that hasn't been entangled with you, no branching happens and they see the overall state evolve according to the Schrodinger equation.

Imagine that we create very small machines that could perform some observations and record them into some internal log.

Now imagine creating a particle in some state and have that machine measure that state and record it internally. Provided that the particle+machine system is isolated from the environment (and us) you'd probably agree that the particle and machine are entangled and that the state of the "log" is not in a defined state until we measure it.

Now, that information processing machine is not a human, but I think it may be useful to describe how the external world would look from its point of view. For example, if that machine performed several experiments on that test particle or other particles, it would note in its log data that would confirm that it obeys the Born rule.

In that sense, it's "useful" to imagine how things look from the "point" of view of that machine and its possible outcomes in the many worlds, although these worlds might not really matter to us since we can describe the system that includes that machine as having been in a mixed state all along until we humans actually observe its "log".

The likelihood that people start to object to this thought experiment gets higher and higher as soon as one starts to imply that we humans ourselves are exactly like that machine.

For me, the observation that our ability to accept or reject this interpretation is so much tied to our intuitions about our own _self_ is a strong hint that actually favours the theory. We humans have been known to be trapped by our point of view


>Well. They're not discernible states if they are in another world. That's the very definition of "another world".

Discernible in principle. Either the states are distinguishable from within the system (e.g. two branches that disagree on the state of the world), from an outside observer, or a God's eye view. If none of this is true then there is no other world.

>Now imagine creating a particle in some state and have that machine measure that state and record it internally. Provided that the particle+machine system is isolated from the environment (and us) you'd probably agree that the particle and machine are entangled and that the state of the "log" is not in a defined state until we measure it.

Let's further imagine a large collection of these machines connected in series. Each machine can perform one of two experiments at any given trial, and which experiment they run depends on the most recent outcome of the prior machine in the series (e.g. the direction of a particle's spin). There are an exponential number of scenarios as the number of machines in series grows. If we imagine this system entangled until we observe it, and we expect that the definite state is consistent in terms of which particle the machine measured and the corresponding prior outcome in the series, then the entangled system just has to carry discernible state in proportion to the exponential state space of the system. If not, then the system isn't in an indefinite state until observed. The concerns about exponential theory cost remain. Unless the state space is bounded by collapse or some other means, the theoretical cost to MWI or any theory based on Schrodinger without additional posits is supreme.

>In that sense, it's "useful" to imagine how things look like from the "point" of view of that machine and it's possible outcomes in the many worlds, although these worlds might not really matter to us since we can describe the system that includes that machine as having been in a mixed state all along until we humans actually observe its "log"

There's a tension here that needs to be released somehow. On the one hand, you have an external observer determining the system is in a superposition of states. On the other hand, you have the perspective from within the system of everything being definite. You can't just carry both forward simultaneously without addressing the disagreement on the state of reality. Either there are really an unbounded number of branches, the branches are bounded by some mechanism, or there are no branches and the evolution of reality happens asynchronously.


I'm not sure what you're actually suggesting.

Are you saying that under this scenario of N such machines in series the MWI would make different predictions than just solving the Schrodinger equation?


No, I'm saying that under the assumption of no collapse, the scenario with N machines after a long period of entanglement requires that nature "stores" (has discernible state for) information proportional to e^N bits about this region of space. The prediction is the same, the question is what does it say about how nature is constructed and how does this relate to the credence we give the theory.
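
To make the bookkeeping explicit, here's a toy script (the split angles are arbitrary; only the counting matters): each machine in the series doubles the number of branches that have to be tracked, while the total Born weight stays 1.

    import numpy as np

    # Track each branch as (outcome history, amplitude). Which "experiment" a
    # machine runs depends on the previous outcome, as in the scenario above;
    # here that only changes the split angle. The point is the counting, not
    # the particular numbers.
    branches = {(): 1.0 + 0j}                 # one branch, empty history
    for machine in range(10):
        new_branches = {}
        for history, amp in branches.items():
            theta = 0.3 if (history and history[-1]) else 1.1
            new_branches[history + (0,)] = amp * np.cos(theta)
            new_branches[history + (1,)] = amp * np.sin(theta)
        branches = new_branches
        print(f"after machine {machine + 1}: {len(branches)} branches, "
              f"total weight {sum(abs(a) ** 2 for a in branches.values()):.6f}")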


I think you just highlighted the fundamental difference between a bit and a qubit


Do you speak Ithkuil?


The sibling reply is correct, but I'll give a different answer connected to the analogy I gave that you are replying to. Occam's razor prefers simpler theories not smaller universes.

The idea that the universe consists of seemingly uncountable numbers of stars, each of which is another Sun and perhaps has planets of its own, is vastly more detail than the old heliocentric model of one Sun, a fixed number of planets, and stars being pinpricks in the tapestry surrounding the solar system. Nonetheless Occam's razor suggests that we accept the view that the stars are suns and we are just one planet among many billions upon billions, because although that universe requires vastly more detail to describe, it is governed by a simpler set of rules. Unifying heaven and earth under a single, simple set of physical laws, discovered by Newton, makes for a simpler scientific model, even if it introduces the possibility of the universe being vastly larger than we originally thought.

Similar situation with many-worlds. If MWI is right, then the "universe", if interpreted to include all reachable branches of the so-called multiverse, is vastly larger than we previously believed. But it is also simpler, in a strict Occam's razor sense, because it does away with the concept of collapse. Wave function collapse is an additional physical rule in the Copenhagen interpretation, whereas it's just an illusion and not an enumerated part of physical law in MWI.


Occam's razor is not just about simple rules. The original formulation is: "plurality should not be posited without necessity". Necessity here is empirical adequacy. A theory that posits an unbounded, exponential growth in states fails the simplicity test against an empirically adequate theory that posits bounded state.


The philosophy of science has advanced significantly since the 12th century, and scientists don’t generally use the original definition anymore.

I have never, ever heard it expressed in this statistical mechanics sense, and it doesn’t line up with most advances in physics (my field) which have been accompanied by increased number of states but fewer rules.


I mean, obviously. If you let a system interact with the external world that system on its own will not follow the schrodinger equation.

Carroll is suggesting we try to observe this behavior from a sufficiently large isolated system.


> If you let a system interact with the external world that system on its own will not follow the schrodinger equation.

Not only that--even the combined system + what it is interacting with won't.

> Carroll is suggesting we try to observe this behavior from a sufficiently large isolated system.

But you can't, even with the entire universe, if measurements have single results. In the MWI, they don't; any measurement has all possible results. But we don't observe that, we observe measurements to have single results, so the MWI has to jump through hoops to explain that discrepancy away--and in doing so, makes itself unfalsifiable.


sure it’s unfalsifiable in that there is always an interpretation of copenhagen that will predict the same thing as MWI, but if we are able to construct a very large object that doesn’t show “collapse”, it seems vibe-wise that the same process is what is happening to us when we make a measurement, just more chaotically (aka decoherence)


> But we don't observe that, we observe measurements to have single results

What do you think it would feel like to observe a measurement to have multiple results simultaneously? And do you think MWI predicts any branch of the wavefunction would exist in which a human would be experiencing that sensation? It doesn't; it predicts a superposition of (human observes result A + human observes result B), with ~zero probability of (human observes both results A + B). So I'm not sure which hoops you think MWI needs to jump through to match our subjective experience.


> What do you think it would feel like to observe a measurement to have multiple results simultaneously?

I have no idea.

> do you think MWI predicts any branch of the wavefunction would exist in which a human would be experiencing that sensation?

No. As you say, the MWI claims that in each branch of the wave function, the human experiences a single result of a measurement.

However, that is completely different from the way QM normally treats individual branches of the wave function in an entangled superposition, which is what we are talking about--"measurement" in the MWI is just an interaction that entangles a "measured system" with a "measuring device" (and eventually with "the brain of a human who looks at the measuring device to read off its result") and results in an entangled superposition of all of those subsystems. Normally, in individual branches of an entangled superposition of multiple subsystems, each subsystem has no well-defined state at all. Only the full system, including all branches of the superposition, has a well-defined state.

That would imply that in the case of a "measurement", the measuring device in each individual branch, and the brain of the human that looks at it, would have no well-defined state at all. It would not imply that in each branch, the measuring device and the brain of the human that looks at it has a well-defined state that corresponds to a particular measurement result being recorded and observed.

In terms of your "feel like", the way QM normally treats individual branches of the wave function in an entangled superposition, it should not "feel like" anything in any individual branch. The human should only "feel like" something, if at all, in the full wave function, containing all the branches. And of course nobody has any idea what it would "feel like" in such a state--but it clearly would not "feel like" having observed a particular result of a measurement.

> I'm not sure which hoops you think MWI needs to jump through to match our subjective experience.

It needs to explain why, somehow, when a "measuring device" or a human observer is involved, the way QM treats individual branches of entangled superpositions drastically changes. See above.


That's not very convincing! He's only talking about falsifying it vs extensions of quantum theory. The interesting question is whether it can be falsified vs other interpretations of quantum theory.


Why is no one ever asking how to falsify Copenhagen vs other interpretations? What did Copenhagen ever do to deserve this privilege?


People do think about that. But it's rich of Carroll to claim that MWI is falsifiable, as if it is quantum theory.


Copenhagen is somehow more persuasive.

Bohm's implicate and explicate order is an interesting relevant theory.


copenhagen is popular for historical reasons including our historical obsession with the mind-body duality. many physicists i talk to secretly admit to MWI/universal wavefunction making more sense but it is metaphysics and academia is a conservative institution, especially for metaphysics.


Popularity resolves to beliefs, and beliefs are formed via persuasiveness of stories. There may be some math contained within the stories, but ultimately physics is driven by persuasiveness of stories, as with all other endeavors involving humans.

Now that we're in the age of AI, perhaps things will change.


So the question under discussion is whether many-worlds is, or should be, "generally accepted". It isn't. If it's not generally accepted and wants to be, then it has the burden of proof.

Claiming that many-worlds met its burden of proof (compared to Copenhagen) by something other than Copenhagen not being proven is completely missing the point. If you want people to believe many-worlds rather than Copenhagen, you have to disprove Copenhagen. If you don't, then people will continue to believe Copenhagen.


No, my question was: why did Copenhagen become generally accepted? You're saying: it just is, deal with it. That's unsatisfactory. Copenhagen proponents never seem to have to defend their stance, and you're telling me they don't have to because they're the majority.


Copenhagen doesn’t even seem to be well defined unlike Pilot Wave, Objective Collapse, GRW…


> Copenhagen proponents never seem to have to defend their stance, and you're telling me they don't have to because they're the majority.

That is correct. Physics inherits that from culture, which runs on top of reality, and that is how reality works.

Personally, I agree with you, but reality is consensus based. Deal with it (develop a better story, or meme - "Shut up and calculate" seems effective despite it being in no way whatsoever a proof of anything...try and come up with something catchy like that).


If I understand correctly, Copenhagen became generally accepted because it was around for several decades before many-worlds.

> You're saying: it just is, deal with it. That's unsatisfactory. Copenhagen proponents never seem to have to defend their stance, and you're telling me they don't have to because they're the majority.

I'm saying that Copenhagen is the de facto default, no matter what you think should be. If you want that to change, you have to change peoples' minds. These being scientists, you typically need some evidence to make them change their minds - they're not going to do it just because you think they should.

Without evidence (impossible in principle, if Copenhagen and many-worlds make the same predictions), you can argue that one shouldn't have a firm stance on this, since both interpretations give the same results, and there is no experimental way of discerning one from the other. (This looks a lot like "shut up and calculate".) Or you can argue that the philosophical position of many-worlds is more "parsimonious" or better in some other sense, but that's philosophy, and that doesn't persuade scientists all that well.

TL;DR: If you want people to move from where they are to where they aren't, give them a reason. If you don't, don't be surprised when they don't move.


Pretty sure they are a plurality, not the majority.


Not a physicist but my guess is that Copenhagen was a "mental trick" (not in a bad sense) that got people off of worrying too much about questions we still don't have answers to, e.g. "what is a measurement", and got people to "shut up and calculate", which yielded results. Hence its success.


Versus all others it’s more parsimonious (as has been noted elsewhere in this discussion) which is typically more desirable as a description of nature. So if it needs less and the others don’t have any more reason or evidence to be taken more seriously as a description of what is happening then shouldn’t it rank first as the best current interpretation of QM?


Indeed, there are various pros and cons to all the leading interpretations. "Parsimonious" is often used to describe Everettian interpretations, although this is somewhat dubious as the leading advocates all have their own slightly different take on MWI (i.e. which quantities are "physical" and how to make experimental predictions - see [0] for more).

MWI has always suffered from issues around probability. Why should we assign probabilities to branches, and why do those probabilities coincide with the Born rule? My belief is that it won't be widely accepted until this is resolved. There are lots of (typically complex) attempts to derive the Born rule using decision theory or the like. There's no consensus that any of these attempts are compelling, or likely to succeed in the near future.

[0] https://arxiv.org/pdf/gr-qc/9703089


How is the answer “everything that can happen does happen, just in an alternate universe that is identical except for the outcome of one single measurement” a parsimonious answer to the measurement problem? To quote Sam Harris, many-worlds seems to be the least parsimonious concept ever produced by science.


> How is the answer “everything that can happen does happen, just in an alternate universe that is identical except for the outcome of one single measurement” a parsimonious answer to the measurement problem?

Because it drops straight out of the Schroedinger equation, which is something you were already believing.

Do you consider believing in distant galaxies, rather than a novel type of astronomical object, to be more or less parsimonious? It means multiplying the size of the universe by millions; nevertheless most of us accept it.


And, I don't think that many-worlds even says that "everything that can happen does happen". It does say that a lot of branching goes on though :)

Might as well quote Sean Carroll again, he's an expert and I am not:

"The Schrödinger equation tells us that there's a wave function that evolves over time and then you can ask how I can divide that wave function into a set of decohered non-interacting worlds and those are the worlds that happen. It is nowhere close to saying everything happens. It is what is predicted by the Schrödinger equation That is what happens."

https://www.preposterousuniverse.com/podcast/2024/05/06/ama-...

Interestingly, people seem less bothered by the idea that the universe might be infinite in size even though that has a similar feature that a lot of things happen. So it just seems to me that the discomfort is people worrying about the idea that there are copies of themselves out there somewhere when they really shouldn't. At least no more than if in an infinite universe there would also be infinite repetition and infinite identical "yous". This is starting to sound like a Douglas Adams bit now :D


Infinite "yous" does not follow from an infinite universe unless one adds a constraint on the distribution of matter in that universe. E.g. one could have a finite observable universe that we see, then an infinite expanse of cheese. No extra "you", unless you're cheese. Infinite does not imply random.


But if I observe a Shroedinger's cat after opening the box every morning, I do get at least duplicated. Repeat this lots of times, and MWI says there are for sure 2^n copies of me as the lowest estimate.

Counting all daily QM-influenced events everybody experiences (a cosmic ray either passed through me or decayed before it reached me, a radon atom either decayed in the air right next to me or not when I went to my cellar etc.), I think it is fair to say there are nearly infinite copies of you in various multiverse branches (of course, some who collected lots of radioactive decay events are already dead from cancer by now).


If you observe the position of a system whose quantum wavefunction has non-zero probability over a non-empty interval, then according to MWI would you not create an uncountable infinity of duplicates of yourself?

Arguably this would be limited by the reporting accuracy of your equipment, but perhaps the thought experiment can be tweaked to get around that...


It's parsimonious because that was already the accepted answer as to the question of why you get an interference pattern in the two-slit experiment, even when you emit just one photon at a time.


IIRC the Bohmian pilot wave theory is good for single particles, but it's difficult to generalize to multiple particles. In my opinion most physicists don't think it's a good idea.

The many-worlds interpretation of Everett is quite different. I'm still not sure if it's just claiming that the "hallucinated" Hilbert spaces in a measurement are actually real copies of the "universe".

Disclaimer: I prefer the "shut up and calculate" interpretation, and I hope that "something-something-decoherence" will fix all the problems one day. Perhaps I like "many-words" without knowing.


> I'm still not sure if it's just claiming that the "hallucinated" Hilbert spaces in a measurement are actually real copies of the "universe".

I'm not sure what you mean by "'hallucinated' Hilbert spaces", but MWI is just the combination of the claims that

- The wavefunction is a (the) real physical object

- The Schrodinger equation always holds

- The Born rule is correct (though there's some hope this might be derivable from QM)

The "worlds" are then a statmech phenomenon.


The main problem with Bohmian mechanics is trying to extend it to quantum field theory


frankly i don’t see at all how decoherence can rescue copenhagen and if anything it seems highly suggestive of a MWI interpretation in practice


Technically, if you find a system that doesn't satisfy the Schrödinger equation, you have falsified many worlds. On the other hand, is the wave function collapse falsifiable? Note that MWI makes fewer assumptions than Copenhagen.


If you find a system that doesn't satisfy the Schrödinger equation, you have falsified quantum mechanics. The point is that many worlds isn't falsifiable independently of any other interpretation. It's "just" an interpretation.

And it's a very bad interpretation IMO as it fails to provide a convincing interpretation of the probabilities QM produces, which are the main predictive content of the theory. If you believe every outcome happens then one thing happening with higher probability than another loses meaning. The decision theory argument put forward by some many worlds proponents fails to solve this as it just provides a calculation you can do that produces the Born probabilities. It doesn't provide a convincing interpretation of them.

Edit: Also the claim many worlds is somehow more parsimonious in its assumptions than Copenhagen is highly dubious. It assumes an infinite multiverse that is splitting into infinite variants in every infinitesimal instant, in possibly the grossest violation of conservation of momentum and energy (and therefore the corresponding symmetries) that could be imagined.


Of course it would falsify quantum mechanics, because MWI is nothing but the most austere version of QM. But people always speak of Copenhagen as if it was the standard interpretation of QM and everything else must be measurable different from it. In another world, MWI would have been the first consensus interpretation, and people postulating unfalsifiable wave collapses would be the weirdos.

Also, MWI doesn't assume infinite worlds, it predicts them. It's a consequence, not a postulate.


Labels are important and I think progress on Everett QM (or, maybe “unitary QM”?) has been held back by the label “Many Worlds Interpretation”.

“Worlds” is inaccurate since there is still only one world (the ordinary, uncontroversial, tensor product Hilbert space of textbook qm). It's the stuff in the world that enters into superposition, among which stuff are the atoms making up “observers”.

Similarly “completion” would be a better word than “interpretation”.


The Copenhagen interpretation is the most obvious thing to compare it to, but personally I'm more interested in de Broglie-Bohm (pilot wave), which might have eclipsed it in the 20th century if people had paid attention when Grete Hermann pointed out John von Neumann's mistake.

"The most austere version of QM" is marketing bullshit and not widely accepted (or particularly meaningful). Many worlds does postulate its many worlds rather than predict them as it provides no way their existence can ever be tested for.

Equating Schrodinger's equation with many worlds is both intellectual dishonesty and begging the question. Any interpretation of conventional quantum mechanics involves Schrodinger's equation. Penrose's ideas (referenced by Sean Carroll in quotes others have posted on this thread) involve violations of Schrodinger's equation because they are actual new theories that differ from standard quantum mechanics. In other words, Penrose is doing actual physics rather than blowing smoke up people's arses. This fact doesn't convey some falsifiability on many-worlds, which is just one of several interpretations of QM that all make the same predictions.


> Many worlds does postulate its many worlds rather than predict them as it provides no way their existence can ever be tested for.

Many Worlds just postulates that if you put a cat in a superposition of two quantum states (alive + dead), there is no sensation associated with that superposition. The cat is in a superposition of having the sensation of being alive, and having the sensation of being dead, but there is no observable quality of "being in a superposition of alive and dead". Both of the superimposed pure states of the cat feel decidedly one way or the other.

Hence, when a human is in a superposition, we would not know it. Both of our superimposed states are experiencing the feeling of being in a pure state of looking at an instrument measuring a photon with polarization ↑, or looking at an instrument measuring a photon with polarization ↓. Being in an (↑ + ↓) superposition doesn't feel like looking at an instrument display and seeing a blurry reading. We feel like we saw the instrument reading ↑, and we (as an element of the quantum system) cannot interact with the branch of the wavefunction that we are superimposed with in which we saw the instrument reading ↓.

This isn't a separate postulate, it's just straightforward quantum theory applied to quantum systems large enough to have opinions about their own state.


> Many worlds does postulate its many worlds rather than predict them as it provides no way their existence can ever be tested for.

Your conclusion does not follow from your premise. MWI does postulate different stuff from Copenhagen, but the existence of many worlds is not one of them. Just because I postulate Peano arithmetic doesn’t mean I also postulate 2+2=4. The latter is but a consequence of the former.

My understanding of MWI, is that it just "believes" the equations. Specifically, if the equations say the complex amplitude is not null, then this stuff is real. And that’s about it. It’s not our fault the equations describe non-null amplitudes, that if real are many worlds.

This interpretation business could apply to macro-scale physics as well: when you send a probe so far out there that it crosses the limit of the observable universe, does it cease to exist? What if there are people inside, will they die? One would be hard pressed to argue for their discontinued existence: why would we disbelieve the equations of relativity, which work so well for everything we can observe?

Conversely, what makes you think that the equations that work so well on everything we can observe would somehow stop working (or stop applying) on the stuff we cannot observe? I know you didn’t explicitly disbelieve the equations, but since we can’t at the same time believe in their general applicability and disbelieve MWI, it’s hard to interpret your refusal to acknowledge MWI as the most probable hypothesis as anything but scepticism about the applicability of those equations.

---

To throw you a bone I personally do have a reason to be wary: physics is currently inconsistent, and when we get a credible theory of everything, it may very well use different equations that do not describe many worlds, make long range probes vanish, or both.


> when you send a probe so far out there that it crosses the limit of the observable universe, does it still exist, or does it cease to exist?

Of course they cease to exist as do the people ... just like sailing ships and sailors did when they travelled past the visible horizon.

This isn't the conundrum you're looking for.


Pilot Wave and Objective Collapse are physical theories, Copenhagen is an administrative agreement, Many Worlds is pop-sci religion.


The multiverse may be a consequence of Everett’s theory, but the reality-shattering nature of that consequence warrants scrutiny of the theory. Consider one of Zeno’s paradoxes. One may postulate that to go from A to B one must first pass through a midpoint C, and by naively applying induction (i.e. without knowing under what circumstances an infinite sum converges to a finite value), one concludes that motion itself cannot exist.

However, the absurdity of this consequence should not lead us to doubt the existence of motion; quite the opposite, it hints to us that either the postulate is false, or we are missing some essential knowledge that would allow us to determine the correct consequences of that postulate.


> If you believe every outcome happens then one thing happening with higher probability than another loses meaning. The decision theory argument put forward by some many worlds proponents fails to solve this as it just provides a calculation you can do that produces the Born probabilities. It doesn't provide a convincing interpretation of them.

What further interpretation is needed? To the extent that we have a number we can measure, we have a calculation that can predict it. More interpretation would be nice, but fundamentally every interpretation of QM has this problem, it's not a unique problem with MWI.
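
For reference, the calculation being pointed to is presumably just the Born rule (standard textbook notation, not anything specific to this thread): the probability of observing outcome i for a system in state |ψ⟩ is

    P(i) = |\langle i \mid \psi \rangle|^{2}

The dispute above is over what that number means when every branch occurs, not over how to compute it.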

> It assumes an infinite multiverse that is splitting into infinite variants in every infinitesimal instant, in possibly the grossest violation of conservation of momentum and energy (and therefore the corresponding symmetries) that could be imagined.

It assumes continuous unitary evolution that preserves momentum and energy in exactly the way you'd expect, exactly the way we already assume they work. Accepting that you can reasonably make calculations about infinitesimal changes is literally the foundation of physics, it's how Newton was able to make a theory of gravity (and people were just as unhappy about it then).


> If you find a system that doesn't satisfy the Schrödinger equation, you have falsified quantum mechanics.

No, you haven't. Standard "shut up and calculate" QM does not say time evolution always happens according to the Schrodinger equation. It only says that happens when a measurement is not being made.


However it's very unclear how some photons bouncing around between atoms constitutes a "time evolution of the system", and other photons bouncing around between atoms constitutes a "measurement of the system". Usually this involves introducing some fictional object called a "detector", but in the world we live in, every detector is itself just built out of atoms which evolve in accordance with Schrodinger's equation.


> it's very unclear how some photons bouncing around between atoms constitutes a "time evolution of the system", and other photons bouncing around between atoms constitutes a "measurement of the system".

No "photons bouncing around between atoms" would constitute a measurement. A measurement involves some kind of macroscopic, irreversible change.

> every detector is itself just built out of atoms

Yes.

> which evolve in accordance with Schrodinger's equation.

That's not what standard "shut up and calculate" QM says. The MWI is what you get if you insist that this is literally true, all the time, no exceptions. For many physicists, however, the fact that the MWI leads to claims that appear to grossly contradict observation is a reason to doubt that the Schrodinger equation really has to be literally true, all the time.


This reply just raises all sorts of questions.

Is there an objective definition of "macroscopic" within the framework of QM? How does a system decide whether it's macroscopic or not? As far as I can tell I'm just swimming in a sea of bosons and fermions doing their thing, there's no "macroscopic" label that distinguishes the atoms of iron in my vacuum chamber wall from the atoms of rubidium in my vacuum chamber's magnetic trap. They're both following the exact same laws of physics at any given moment. (Laws which, AFAIK, satisfy time-reversal symmetry.) Of course, some systems have many degrees of freedom, which makes the calculation very difficult and exhausting, but that's our problem, not the system's.

When does your calculation get to drop branches of the wavefunction, and when does it have to keep track of them? Deciding this policy seems like an important step of shut-up-and-calculate.

And most importantly: which MWI claim contradicts observation?


> This reply just raises all sorts of questions.

Yes. All of those questions are open areas of research in the foundations of QM. There are no generally accepted answers to them, although of course each individual QM interpretation has its own set of preferred answers, and physicists who prefer a particular interpretation will often talk as though their preferred answers are generally accepted--but they're not.

> which MWI claim contradicts observation?

The fact that we observe measurements to have single results. The MWI claims that every measurement has all possible results, but that this fact is in principle unobservable. That makes the MWI unfalsifiable.


All QM interpretations are unfalsifiable. That's why they're interpretations.

In a different thread you wrote that "as you say, the MWI claims that in each branch of the wave function, the human experiences a single result of a measurement." But here you're saying that MWI makes a claim that contradicts that we observe measurements to have single results. MWI, just like Copenhagen, claims that we will observe measurements to have single results (but that we won't be able to predict in advance which result we will experience observing). MWI's predictions are exactly in alignment with what we observe, and it makes no claim that in any way differs from the reality that we experience. If it did, then it would be falsifiable.


> All QM interpretations are unfalsifiable.

I agree. Others (not you) elsewhere in this discussion have said otherwise.

> here you're saying that MWI makes a claim that contradicts that we observe measurements to have single results.

Yes, overall I think the MWI contradicts itself in this regard. It has to say that measurements have all possible results, because that's what Schrodinger's equation says, but it also has to say that we observe measurements to have single results, because it's clear that that's what we actually observe. I don't think its claimed resolution of that contradiction is valid. But you are correct that that is not an issue that can be resolved by experimental tests.


what you are saying is so obviously true to me, i have trouble understanding what people even think is happening with measurement otherwise. i remain dumbfounded that copenhagen is still the preferred interpretation


> i remain dumbfounded that copenhagen is still the preferred interpretation

So you're an MWI proponent, then? If it's "obviously true" to you that everything, including measuring devices and ourselves, always evolves in time exactly according to the Schrodinger equation, no exceptions, then the MWI is your only option as an interpretation.

The fact that many people do not think the MWI is a viable interpretation is why interpretations like Copenhagen, which do not rest on the belief that the Schrodinger equation has to be literally true, everywhere, all the time, continue to exist.


I view most people who profess a belief in the Copenhagen interpretation as expressing a belief about what the role of science is: to predict experiments. Give me an experiment and, using the math of QM, I can predict the result of that experiment; everything else is just metaphysics. I can understand that perspective.

But once you start trying to really tease apart the potentially metaphysical question of "what's actually happening?" (a question many people believe is just not within the realm of falsifiability/science), traditional collapse theories have a bunch of trouble setting up clear lines of what macroscopic is, how collapse actually occurs, etc. that do not seem to have easy resolutions, especially as we can create bigger and bigger quantum systems.


> traditional collapse theories have a bunch of trouble setting up clear lines of what macroscopic is, how collapse actually occurs, etc. that do not seem to have easy resolutions

All of this is true. But none of it makes it "obvious" that the Schrodinger equation must always be exactly correct and that one should be "dumbfounded" (your word) that any interpretation (like Copenhagen) that says otherwise has any proponents at all. Yes, "collapse" is an obvious open question that hasn't been resolved. But the belief that some way of resolving it will eventually be found that doesn't require accepting the MWI is a perfectly reasonable belief to hold, and an interpretation like Copenhagen is a perfectly reasonable interpretation for people to use in the interim while we search for such a resolution.


The evolution a system undergoes when you measure it in the Copenhagen interpretation does not follow the schrodinger equation, that's his point.


> The evolution a system undergoes when you measure it in the Copenhagen interpretation does not follow the schrodinger equation

Do you have experimental evidence for that claim? The "cat in a box experiment in a box" thought experiment suggests that it does.


They're talking about what the Copenhagen interpretation predicts, not what actually happens. It's a mathematical question, not an experimental one.

Solutions to the Schrodinger equation all take the form `Psi(t) = U(t)Psi(0)` for some unitary operator `U`. "Collapse" applies a projection operator, which is nonunitary.
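
A quick numerical illustration of that distinction (my own toy sketch; the two-level Hamiltonian is an arbitrary choice):

    import numpy as np

    # Toy two-level system with H = sigma_x (hbar = 1).  Because sigma_x squared
    # is the identity, U(t) = exp(-iHt) = cos(t) I - i sin(t) sigma_x.
    X = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    t = 0.7
    U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X   # unitary time-evolution operator

    psi0 = np.array([1.0, 0.0], dtype=complex)
    psi_t = U @ psi0                                  # Psi(t) = U(t) Psi(0)
    print(abs(np.vdot(psi_t, psi_t)))                 # 1.0: unitary evolution preserves the norm

    # "Collapse" onto the first basis state is a projection, which is not unitary.
    P = np.array([[1.0, 0.0],
                  [0.0, 0.0]], dtype=complex)
    superpos = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
    collapsed = P @ superpos
    print(abs(np.vdot(collapsed, collapsed)))         # 0.5: the projection throws norm away

The projection step is exactly the non-unitary ingredient that the Copenhagen account adds on top of Schrodinger evolution.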


> Neither Bohmian mechanics nor many worlds are falsifiable, they are just interpretations.

If we accept that (I’m not sure I do[1]), then that must be true of all interpretations, including the once most popular, Copenhagen/collapse. And as valuable as the "shut up and calculate" refusal to interpret anything is, I suspect it is at least in part a way to avoid the possible demise of the collapse interpretation. A compromise of sorts.

[1]: https://www.lesswrong.com/posts/DFxoaWGEh9ndwtZhk/decoherenc...


I would argue that “shut up and calculate” is correct specifically because QM is a model. What hubris it would take to believe that we’ve discovered how the universe actually works (and there’s the issue of inconsistency with GR).

And since it’s just a model, it doesn’t matter what “interpretation” you pick, since all models are wrong anyway.


> What hubris it would take to believe that we’ve discovered how the universe actually works

Isn't that what happened with the interpretations that came before many worlds? Weren't most people believing in an actual wave function collapse back then, with basically no one telling them of their hubris?

Genuine question: how soon did the "shut up and calculate" approach, as a refusal to interpret the maths, start getting traction?


Don’t know the specifics about this particular case

However, this is a typical dichotomy in academic/research settings

The interplay between the practical (like “calculate” or “build”); and searching for meaning (like coming up with a new model for some phenomenon)

My impression is that the majority in a particular field usually focus on the practical, applying known “tried and true” models to novel things

And then, there are only a few who are successful at convincing others of using their models

There are a lot of people who are technically capable of creating new models. But it is incredibly difficult to make them go mainstream


For a layman in physics, what do you mean by “shut up and calculate”, and does it always align with existing interpretations?


All of these different interpretations use equivalent mathematics under the hood, and make identical, accurate predictions about what we will see in experiments. Where people get excited is in the stories they tell about what the math “means”. QM is definitely weird, which makes the stories fun and hard to articulate. But “shut up and calculate” says that there is no truly satisfying story, all of our analogies are flawed, we will just go in circles for ever. So just calculate. And yes, the math works perfectly. A bit sad but quite pragmatic!


> All of these different interpretations use equivalent mathematics under the hood, and make identical, accurate predictions about what we will see in experiments.

This isn’t true at all, especially the former part. The mathematics of these different interpretations is very different; there are some things you can do with some interpretations but not others, and they’re not all mutually interchangeable. For example, in Copenhagen you can’t analyse the interaction between the measurement device and the measured system while MWI can, and no one has managed to formulate a quantum field theory for Bohmian mechanics.


In Copenhagen you can analyze the interaction between the measurement device and the system; you just have to call the device part of the “system”.

You can do this arbitrarily and then pretend the collapse occurs at some later point, because “when collapse occurs” is entirely a matter of what you define as the measurement.


Sure but that's really just a hack. By defining the measurement device to be somehow outside of the entire thing you can analyse another measurement device as if it's not a measurement device, but it still leaves a physically significant element of the theory (the new, broader measurement device) that isn't itself analysed. This is usually okay, sure, but it means that ultimately my point still stands: Copenhagen can be used to analyse a specific measurement device + system, but whatever measurement device is chosen as the Copenhagen measurement device cannot be analysed by the Copenhagen interpretation in that given example. This is an inescapable problem of the Copenhagen interpretation, and MWI does not have this problem.


yeah i mean i'm very sympathetic to MWI so even my original comment is not going to be very good at defending copenhagen - but i do think the math is pretty much equivalent depending on what you define as the system. just like i could have equivalent math where i was assuming that all reality was actually just God moving around the particles with non-local hidden states only known to Him (i'm not religious though).


That’s of course because we are a storyteller species. We need narratives to understand the world.

Like with “Particles” or “waves”. These are all words borrowed from our daily experience to describe an abstract reality which isn’t easy to comprehend for our story minds.


Agreed that “science accepts their ideas” is a mischaracterization (indeed Everett and Bohm are probably mutually exclusive so couldn’t both be “accepted”). Better to say that the community now takes both seriously as contenders for a completion of quantum mechanics.

I deliberately say “completion”, not “interpretation” since, without further elaboration, Copenhagen quantum mechanics is only a piece of a model.


You just rejected them!


There are some biographical glimpses of Hugh Everett's family life in his son's autobiography, "Things the Grandchildren Should Know" https://en.wikipedia.org/wiki/Things_the_Grandchildren_Shoul...

The son, Mark Oliver Everett, is better known as E, the leader of the Eels.


I highly recommend the documentary on Hugh Everett, hosted by his son, Mark:

https://youtu.be/0LroZS97VjA

It's an odd thing to have a bio piece on what amounts to a "failed" physicist, at least while he was alive. Not all of life's stories have happy endings.


"Claims that the standard procedure for testing scientific theories is inapplicable to Everettian quantum theory, and hence that the theory is untestable, are due to misconceptions about probability and about the logic of experimental testing. Refuting those claims by correcting those misconceptions leads to an improved theory of scientific methodology (based on Popper's) and testing, which allows various simplifications, notably the elimination of everything probabilistic from the methodology (‘Bayesian’ credences) and from fundamental physics (stochastic processes)." – David Deutsch

pdf, The Logic of Experimental Tests, Particularly of Everettian Quantum Theory https://www.sciencedirect.com/science/article/pii/S135521981...


I love these sort of paradoxes in physics: multiple models, multiple interpretations, multiple meanings and realities, out of seemingly the same experiments and measurements, sometimes even the exact same mathematical formulas

Sometimes it can definitely be seen as a bit ridiculous, like if maybe a formula is taken to mean something slightly different, it could mean the whole Universe is upside down!

However, sometimes creating alternative models, even if weird when taken at face value, can actually make a difference in making better predictions and even finding new practical applications

In the end, all of our models are made up by us


For anyone interested in other theories that were not mentioned and not satisfied with the unfalsifiability of Many Worlds I can recommend reading Carlo Rovelli's book Helgoland and his Relational Quantum Mechanics[0].

[0]: https://plato.stanford.edu/entries/qm-relational/


Copenhagen isn't any more falsifiable than Many Worlds.

They both say similar things: that the wavefunction evolves according to the Schrodinger equation. One leaves the equation intact (Many Worlds), while the other involves randomly selecting certain parts of it to be "real" or "unreal" without any explanation of how or why that choice is made (Copenhagen).


https://arxiv.org/abs/2101.11052 claims to have derived an experimentally verifiable prediction from many worlds (namely that energy is only conserved in the linear progression of the wave function, and not necessarily in the "branched" or "collapsed" bit that we observe; and so in certain circumstances it might be possible to observe an interaction that fails to conserve energy).


For anyone interested in MWI in a novel, Stephenson’s Anathem is quite nice. I think I may start a reread this evening.

https://en.wikipedia.org/wiki/Anathem



