Is Quantum Theory About Reality or What We Know? (nautil.us)
127 points by pmcpinto on May 1, 2017 | 93 comments



I hope Pilot Wave Theory [0][1] gets more recognition and that future work can extend it to account for relativity; there's hope that we can actually find a deterministic approach to Quantum Mechanics.

Here's an amusing video of an analogous macroscopic experiment, droplets oscillating and interacting with each other in stable states: https://www.youtube.com/watch?v=JUI_DtzXdw4

[0]: https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory

[1]: https://en.wikipedia.org/wiki/Pilot_wave


I've been fairly interested in pilot wave theory, and the more I delve into it, the less I feel it's useful.

My understanding is that the statistical equivalent of the pilot wave theory is "when you flip a coin, there is not a 50/50 chance, but a 100% chance of getting the result, because you got the result". Hyper-determinism.

The programming equivalent is "all functions are built with lookup tables, with precomputed values". Great, but I still want to predict the values of the lookup tables to describe the function.

From the wiki description of the "guiding function" (basically, the universe's step function):

>The main fact to notice is that this velocity field depends on the actual positions of all of the N particles in the universe.

This feels totally unfalsifiable! "In the exact state of the universe you get the result you get." None of the results end up being usable because there's always the "out" of "the universe has changed". You end up going into hyper-determinism, which is almost all about negating free will.

Even if the pilot wave theory is the "real thing", the other model of quantum mechanics ends up being more useful because at least you can make some predictions on the state of the universe.

(Would love to have someone explain to me how we can apply Pilot Wave Theory in a "local-ish" fashion, people worked hard on this stuff so I imagine there's some use)


...It makes the same localized predictions as other quantum theories and has the same utility (aside from not having relativity incorporated).

One just says "we fundamentally can't know, lulz random" and the other says "we could know, if we knew everything". PWT just has the local probabilistic behavior because we can't know the little fiddly bits of the rest of the universe, even if we know the dominant local forces. Copenhagen just says that it does random shit, no reason.

I'm not sure why you think the other interpretation of QM gives predictive power PWT doesn't.

Ed: To specifically address the stats point --

PWT says that if we had total knowledge of the universe, we could determine that the coin would always be, eg, heads under specific conditions, but due to measurement limitations we can only say that it will come up 50/50, and that the discrepancy between the theoretical ability to perfectly predict and our need to statistically approximate is related to our ability to measure (and compute).

By contrast, Copenhagen says there is no coin between when it leaves your hand and when it lands, so clearly we can't do any better to predict than statistics of how often we see the coin heads on the table after it leaves our hand.

You seem to fault PWT for making the same predictions as other models, just because it doesn't declare the problem of doing better fundamentally impossible.


Isn't PWT saying "well if you saw the entire state of the universe you could know the answer" the physics equivalent of "every terminating calculation is O(1) relative to the heat death of the universe"?

Because the interpretation still leads to the "wave function collapse" interpretation in the experimental scope anyway, how does it feel less impossible?

Is the theory equivalent to other quantum theories? If so, the value of that interpretation is purely philosophical, right? Personally I'm not super satisfied. It's brought up as an interpretation that removes the non-determinism, but in reality it hides the non-determinism in the "practically invisible" by extending it out to the entire universe.

To bring back the stats thing... I could either say I have a 50/50 chance to get heads or tails on a flip, or I could say "well, if I could see through time I'd know all the results and they'd be knowable, so the results are 100% likely to be known". Sure, that's true but that's not happening.

Granted, there's no local deterministic theory... but what's the value of determinism if the non-locality extends to the universe?


> Granted, there's no local deterministic theory... but what's the value of determinism if the non-locality extends to the universe?

This implies that "value" (some sort of utility to us) has some say in how the universe is.

It's basically saying the theory could not be correct because it has less value to us.


My understanding is that the guiding function mechanism of PWT could describe just about any physical mechanism, not just quantum effects. Since the pilot wave itself is what truly encodes the results, you can mess around with this "initial data" and get whatever you want (granted, I haven't thought too much about this, so this could easily be wrong).

If the objective is to describe some fundamental truth, isn't it odd that the description can be applied to everything?

Maybe the universe is odd and meaningless, though the Copenhagen interpretation captures that feeling pretty well ;)


> If the objective is to describe some fundamental truth, isn't it odd that the description can be applied to everything?

Whether something is odd or not is irrelevant to whether it is true.

Quantum mechanics is a good example of this.


I think PWT works better with TQFTs, but it's an area of active research how TQFTs work, so we'll have to wait and see.

So far, the difference is philosophical, but it's worth pointing out that so is Copenhagen.

They originally thought they had a choice in the theory between non-locality and non-determinism, so chose to build a theory around non-determinism for philosophical reasons. However, non-locality eventually became incorporated because some phenomena are non-local in nature.

So in a lot of senses, the Copenhagen interpretation simply has an extraneous assumption of non-determinism and they don't want to rework their model because it would be a lot of work.

Legacy cruft.


> Hyper-determinism

How does "hyper-determinism" differ from "determinism"?

I find that when people use adjectives like "hyper" or "ultra" for some theoretical view (like people who talk about 'ultra-darwinism') they are trying to make something appear "extreme" without making any real argument for that position.

> You end up going into hyper-determinism, which is almost all about negating free will.

That phrasing implies that people come up with, or want, "hyper-deterministic" views because they want to negate the idea of free will. Is that what you mean? If not, what are you trying to say?


Sorry, this might be the wrong term for a higher level concept based on determinism.

If everything is deterministic, then our actions can theoretically be pre-determined. I've seen discussion around pilot wave theory that goes into details about this thought process.

This is fine and good. But if I publish "A solution to the double slit experiment" and the contents are "everything is deterministic, therefore all the photons go where they go" (hidden behind a pilot wave and initial values), that's not really the right genre of paper, is it?

It's not that I'm demeaning it, but that I don't feel like it's in the same realm of discussion as Copenhagen's rather pragmatic "stuff happens behind the curtain, but here's how to work with it" approach.


> It's not that I'm demeaning it, but that I don't feel like it's in the same realm of discussion as Copenhagen's rather pragmatic "stuff happens behind the curtain, but here's how to work with it" approach.

I don't get the distinction you're trying to make. The maths are all the same, so the interpretation has almost no bearing on how to work with it.

Furthermore, pilot waves produce a deterministic theory because of non-locality, but it sounds like you're criticizing superdeterminism. They're not the same. 't Hooft is working on a superdeterministic theory based on cellular automata.

In any case, how are pilot waves any different than any other classical theory in this regard? It's about describing the system as accurately as one can, which entails inferring the initial conditions based on how we know the system evolves, ie. this cannon ball fell here because it was launched with force X at angle Y from height Z.


>How does "hyper-determinism" differ from "determinism"?

It's the difference between a trajectory that looks like Brownian motion but consists of a fixed, preprogrammed function, versus one actually sampled from a Wiener Process.

In the former case, if you know the function, you can predict the trajectory with total certainty. In the latter case, if you use the Wiener Process as a model, you can predict the trajectory probabilistically.
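A rough numerical sketch of that distinction (a simple random-walk discretisation in Python; the function name and parameters are just illustrative):

    import numpy as np

    def wiener_path(n_steps, dt, rng):
        # Sample a Wiener process trajectory: increments are N(0, dt).
        increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
        return np.concatenate([[0.0], np.cumsum(increments)])

    # "Preprogrammed" view: the trajectory is a fixed function. Knowing the
    # seed (the exact state) reproduces the path with certainty, every time.
    path_a = wiener_path(1000, 0.01, np.random.default_rng(42))
    path_b = wiener_path(1000, 0.01, np.random.default_rng(42))
    assert np.allclose(path_a, path_b)

    # Probabilistic view: without the seed we can only describe statistics,
    # e.g. Var[W(t)] = t, and predict the trajectory distributionally.
    endpoints = [wiener_path(1000, 0.01, np.random.default_rng(i))[-1]
                 for i in range(5000)]
    print(np.var(endpoints))  # close to t = 1000 * 0.01 = 10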


I'm not familiar with those concepts (like Wiener Process), but I don't think notions of predictability should be mixed up with notions of determinism. Whether something is predictable or not does not change how deterministic it is.


I think what I have to say, then, is welcome to quantum mechanics. Or, in fact, welcome to probability theory. Mathematically, there's no difference between a process which can only be predicted probabilistically, and a process which really is random; no difference between incomplete knowledge in your head and stochasticity in the world.

An interesting question is: how could a fully deterministic ontology give rise to beings capable of having probabilistic beliefs? Worse, how can a fully deterministic ontology give rise to truly random mathematical entities like Chaitin's Omega? In fact, in such a universe, where do the bits for atomic random-number generators come from? After all, if they're not really random, there should be some deterministic model capable of predicting them, and yet, quantum mechanics tells us precisely that such a thing is impossible.

You can try to have a Bayesian epistemology with a nonstochastic ontology, but it doesn't really make sense. Just let your ontology be stochastic: then and only then you can physically account for both physical randomness and probabilistic belief.


> I think what I have to say, then, is welcome to quantum mechanics. Or, in fact, welcome to probability theory. Mathematically, there's no difference between a process which can only be predicted probabilistically, and a process which really is random; no difference between incomplete knowledge in your head and stochasticity in the world.

I'm not talking about randomness. I'm talking about determinism, and as far as I can see what you're saying says nothing about determinism or differing degrees of determinism.

> An interesting question is: how could a fully deterministic ontology give rise to beings capable of having probabilistic beliefs?

I don't see that as a problem at all. It fits completely fine with the idea that probability is a matter of lack of knowledge.

> Worse, how can a fully deterministic ontology give rise to truly random mathematical entities like Chaitin's Omega? In fact, in such a universe, where do the bits for atomic random-number generators come from? After all, if they're not really random, there should be some deterministic model capable of predicting them, and yet, quantum mechanics tells us precisely that such a thing is impossible.

You're talking like it's already a settled matter as to whether reality is fundamentally deterministic or not. Are you saying that there's definitive proof that deterministic accounts of QM are incorrect?


> I'm not talking about randomness. I'm talking about determinism

The difference being?


The person I was replying to said "Mathematically, there's no difference between a process which can only be predicted probabilistically, and a process which really is random; no difference between incomplete knowledge in your head and stochasticity in the world."

This is because randomness can be seen as a matter of incomplete knowledge. Determinism is independent of our knowledge.

Now, perhaps there is randomness that is independent of our knowledge. But I was talking about randomness/determinism in the context of the quoted statement, which was a matter of knowledge.


> This is because randomness can be seen as a matter of incomplete knowledge. Determinism is independent of our knowledge.

I'm not sure there's a meaningful difference. "Determinism" as you're describing it is an interpretation with no practical implications.

You can imagine a god controlling every "random" event in our universe by selecting the outcome from "the big book of outcomes" and claim that everything is deterministic. You can also imagine the god consulting a D20 instead, and claim that everything is non-deterministic. The former makes randomness a function of our lack of knowledge and the latter makes it truly random. These two scenarios are equivalent and neither is falsifiable, though, making the distinction meaningless. From your perspective, random events are still non-deterministic.


You're just making up scenarios that fit your point of view, without arguing that these correspond to the actual options.

Are you arguing that there cannot be any distinction between determinism and non-determinism?

[In reply to the content below:

The scenarios are ones involving a god choosing and performing actions, and the supposed randomness of their dice. What are they meant to translate to?

> I'm arguing that your "deterministic randomness" is indistinguishable from "nondeterministic randomness". You're trying to define a class of "randomness" to be "deterministic but unknown

No I'm not. Show me where I'm saying that.


> You're just making up scenarios that fit your point of view, without arguing that these correspond to the actual options.

No, I gave scenarios that demonstrate why your distinction is not meaningful. If you have a scenario that changes this, can you please explain it?

> Are you arguing that there cannot be any distinction between determinism and non-determinism?

No, I'm arguing that your "deterministic randomness" is indistinguishable from "nondeterministic randomness". You're trying to define a class of "randomness" to be "deterministic but unknown". Near as I can tell, this is a distinction without a difference.


> This feels totally infalsifiable!

This isn't too surprising, because it's a QM interpretation that's designed to be indistinguishable from the others. The MWI is unfalsifiable (you can't see the other worlds) and Copenhagen is too (you can't see wave function collapse actually happen).


Copenhagenism is falsifiable. It says that "large" things are classical and can't be put in superpositions. "Large" isn't well defined, but it includes at least humans. So, put a human in a superposition, and you've falsified it.


> Copenhagenism is falsifiable. It says that "large" things are classical and can't be put in superpositions.

Copenhagen doesn't say that. See for instance, Schrodinger's cat.


The biggest problem with Copenhagen is that it doesn't say if there is a physical collapse or that there isn't one. Instead think of it as a rule of thumb; if you apply the quantum/classical cut in the appropriate spot for your experiment, it will give the right answer. But it gives no guidance where that spot is or what physically is happening.


From the many worlds perspective, it's actually quite clear where the Copenhagen "collapse" occurs: it's when something enters a superposition with Earth.


It is true that MWI gives clear guidance on the subjective collapse, but there is nothing special about Earth.


> You end up going into hyper-determinism, which is almost all about negating free will.

You seem to think this is a bad thing. Why?


It's not that I think it's a bad thing, but that it's irrelevant in the context of trying to use quantum mechanics for "practical" purposes.

Sure, maybe we don't have free will but I still want to figure out how to build quantum computers.


Some of us want to understand what the universe is really like. If that's uninteresting to you because you're more of a utilitarian, that's ok, but that's not the only way of approaching life on this blue planet.


It's not just about being a utilitarian. Science is supposed to make predictions. If it can't make a prediction that can be tested, it's not physics, it's philosophy.


I've had many discussions with people who don't truly seem to grasp the concept of determinism. I think we should never accept a theory that proves everything is predetermined. If the theory is right it doesn't matter if people believe or act in accordance anyway, and everything anybody does doesn't matter. Time does not exist, nor actions and neither do people. Nothing is here or there if it is predetermined. If the theory is wrong but seems right for a while it will lead to less favourable outcomes for those who act in accordance while they actually had other options.

That is, if you believe in free will. I don't believe in free will (surely it's impossible) or determinism, but rather in chaos with (maybe) some reinforcement by deterministic systems (such as our brains). Our actions are powered by constant rolls of the dice: at our birth, in our brains, and on the surface of the sun, where at this very moment photons are being released that indirectly animate all human life on earth.


"Pre"-determination is a misleading term in this context. The wave function collapsed, and the result was what it was because that's what the result was.

TLDR of my opinions in this space: the cosmos is simultaneously deterministic and probabilistic. There is no free will. Nor are all things predetermined in the sense that most people use the word.


I would say: because a totally deterministic, nonstochastic universe becomes subject to paradox theorems about Laplace's Demon.[1] You need at least a little bit of stochasticity to make prediction and unpredictability work out.

[1] -- https://rjlipton.wordpress.com/2014/08/08/laplaces-demon/


Most of the disproofs here don't actually seem to disprove the demon's existence, just limit its ability to answer trick questions.

"Suppose that there is a device that can predict the future. Ask that device what you will do in the evening. Without loss of generality, consider that there are only two options: (1) watch TV or (2) listen to the radio. After the device gives a response, for example, (1) watch TV, you instead listen to the radio on purpose. The device would, therefore, be wrong. No matter what the device says, we are free to choose the other option. This implies that Laplace’s demon cannot exist."

This doesn't prove that the demon can't exist, just that it can't answer the question. If it knows what you will do in any case of its response, and it knows what its response will be, it knows what you will do, generally. It just can't tell you what you are going to do. Sort of a difference between being omniscient versus being omnipotent.

In other words, the demon knows that its response will be X and your actions will be Y; its response just can't be truthful. It's all still perfectly deterministic.

Free will is just a convenient illusion based on the fact that predicting the universe faster than realtime would almost certainly require more resources than the universe contains. At best, we can limit our scope for imperfect predictions based on imperfect knowledge and limited processing ability. The universe is likely deterministic, but there's no way to act on that in a meaningful way, so for the purposes of human existence, we may as well act as if we have free will.


Free will can exist; we just are not able to model it or reason about it with the constructs we have


Free will is an outcome of a universe which is not purely deterministic; if you have a proof that the universe is, then essentially there is no free will.

Free will goes beyond the concept of your brain deciding what to have for dinner.

People are actually analogous to some physical systems: it's much harder to predict the actions of an individual, whilst predicting group actions on a larger scale is easier since the various individual inputs are effectively canceled out.

This is like modeling, say, a glass of water: modeling each individual molecule is nearly impossible because you get to the point of not being able to measure them, especially when you're trying to measure or predict the subatomic makeup of each molecule, but modeling the entire system is easy.


Free will can exist in the same way God can exist. At least with our current understanding of the universe, we lack a falsifiable test to disprove either. However, there is also no evidence to support the existence of either with any greater certainty than the claim that there is a teapot orbiting the sun somewhere between the orbits of Earth and Mars.

You are welcome to believe in God, orbital teapots, and/or free will, of course, so long as your belief doesn't lead to actions which create a negative imposition upon others. Sadly, that is all too common, and far less welcome, in my eyes.


What is your definition of free will in that statement?


> You need at least a little bit of stochasticity to make prediction and unpredictability work out.

There is already unpredictability in all sufficiently complex formal systems. For instance, it's unpredictable whether Turing machine T(i) will halt. Determinism is all you need!


"Free will" negates itself as a useful concept whether you embrace or reject determinism (either one's will is a function of body and environment or its not in which case it is arbitrary), so I don't think it's a relevant consideration to the veracity of determinism or PWT in general.


It's no more or less useful than any other interpretation of quantum mechanics. Is a many worlds interpretation more useful? They all have the same mathematics.


Interesting, since I don't see a coin flip as random at all. There are just more "factors" than we can take into account.


Data from Star Trek likely nails it every single time.


PWT has an underlying dynamics, much like classical mechanics, such that to compute the predictions exactly, one has to have perfect knowledge of the current state. Given that knowledge and a perfect computational environment, one could then determine the future for all time, at least ignoring creation and annihilation of particles.

Clearly this is useless if that is all it provided. But so was classical mechanics. For example, to compute the gravitational force on my finger, technically, one needs in Newtonian theory to know the position and masses of all particles in the whole universe. Equally clearly, that's neither practical nor relevant.

Instead, we use the well-defined dynamics to do a statistical analysis of the theory. What happens when we do that is that we get a quantum-thermodynamics equivalent, which is what the Copenhagen interpretation is. But Copenhagen suffers from being vague and undefined (what the heck is a macroscopic measurement?). It is very imprecise and not a theory at all. And yet, when it appears as a statistical framework from an underlying well-defined theory, the measurement problem completely recedes.

And it is important to understand that it is the theory that ought to tell us what a measurement actually means.

The next point to raise is, what is the point if we just derive Copenhagen? Well, the point is that it allows us to both understand the setup and thus extend it. If all questions were known and answered, it would be largely philosophical. But we have not resolved the problems (QM + relativity) in standard quantum theory after many decades of research.

What are a few things that PWT explain?

1) Identical particles. By having configurations of particles, one can investigate what a configuration is. And, after a moment's thought, identical particles are not labelled by nature. Choosing a configuration space of unlabelled particles, we end up with the identical particle choices of symmetric or antisymmetric wave functions. It gets more complicated with spin, but it turns out spin is best represented as a value of the wave function located at a spatial point, not an intrinsic part of the particle. And then the problems just disappear from that.

2) Arrival times. There is a time of arrival naturally enough in a theory with particles actually moving about. In most theories, the collapse or splitting of the universe, etc., does not really fit in with time-of-arrival experiments. That doesn't stop physicists from coming up with answers, of course, but a theoretical framework is conceptually trivial based on this.

3) Collapse of the wave function. This collapse is central to the whole business. There is no collapse in the fundamental theory. But in suitable situations (measurements), the environment can be conditioned on to get a local wave function for the system in question. When a measurement happens with the environment registering it, the environmental conditioning changes that local wave function and that appears as a collapse.

This also helps deal with the murkiness of a position measurement, which is never some precise state to collapse into. It is all a bit wishy-washy, which is a problem for a theory such as Copenhagen in which that is all there is. It is a trivial shrug in PWT.

4) Creation and annihilation of particles. There are particles that exist and they can be born or destroyed. There are formulas that give a probabilistic form for the creation of particles. This makes the theory no longer deterministic, but that was never the point of PWT. The point was to be an actual theory, one which was well-defined. There is nothing wrong with randomness. So the creation and annihilation of particles is perfectly consistent with PWT. It requires wave functions defined on a disjoint union of configuration spaces of different numbers of particles. This is exactly what quantum field theory does.

5) Well-defined quantum field theory. It is legendary that QFT is not well-defined mathematically. Recent work, inspired by PWT, looks promising in finding a well-defined path in QFT. Essentially, the perturbation from a free evolution cannot work. Instead, one has to work with functions in which the different number of particles spaces are linked in such a way as to preserve the probability. Once that is done, divergences fade away. The mathematical work is still quite difficult and it is early days, but it has had successes in toy theories that were not workable before.

6) Relativity. This is the big one. By having a clear theory to work with, one can understand the nonlocality that is present in reality (Bell's theorem) [unless you deny the results of experiments when they happen as in MWI]. There are explorations of natural foliations of space-time which provides the kind of nonlocality needed. In terms of something that works mathematically, that exists. A theory that is philosophically satisfactory is still being pursued. Essentially, the foliation is needed but is not detectable which is kind of what is needed, but it is unsatisfactory.

PWT is not a return to determinism. It is a return to clearly defined theories, whether they are deterministic or not. PWT is a theory that one could envision a computer being fed data and then computing out without further intervention. Copenhagen is not like that. One needs to know what experiments and measurements are to be done which, of course, can't really be specified in advance since what experiments are done may rely on what happened in the past.


Veritasium video about the topic https://www.youtube.com/watch?v=WIyTZDHuarQ


Just wondering, what is the speed of the pilot wave? Is it faster than the speed of light?


Yes, it's a non-local theory.


Here is another good video on pilot wave theory

https://www.youtube.com/watch?v=RlXdsyctD50


If you're interested in the subject matter, I highly recommend Richard Feynman's 'Six Easy Pieces' series (available on Youtube here: https://www.youtube.com/playlist?list=PLBxHpsmcxyNghmwd6MJBy... ).

Feynman was a key member of the Manhattan project (Wikipedia link here: https://en.wikipedia.org/wiki/Manhattan_Project ). I trust him to know what he's talking about. :-)


I really enjoyed the article.

Every time I read about Quantum Theory it reminds me about a story I read online some time ago. I can't fully remember it:

I think it was about a guy who made a Quantum Computer to answer any question in the universe by asking versions of himself in other dimensions. I think he uses it to win the lottery, but in the end he mistrusts himself to answer the question "how to live a happy life" and ends up living an unhappy life.

I really want to re-read it. Does anyone know where I can find this story?

I've spent weeks now trying to find it ... with no luck.



YES, That is it! Thank you!



"Measuring a system will generally change its state from one in which each possible outcome has some non-zero probability to one in which only one outcome occurs. That’s a lot like what happens when, in the die game, you learn that the die does, in fact, show a six. It seems strange to think that the world would change simply because you measured something. But if it is just your knowledge that changes, things don’t seem so strange."

This is the only interpretation of quantum theory that makes sense to me. The idea that an objective reality of the world has changed, through our act of observation, leads to absurd conclusions such as Schrodinger's cat being simultaneously alive and dead. I believe this is the same point that Schrodinger tried to make with his thought experiment as well.


Sure, it makes sense. But reality is complex. The amplitudes of the wave function are real (no puns intended) and the Bell inequalities are also real.

It's not just about incomplete knowledge.


Hypothesis 1: Schroedinger's cat is either alive, or dead, but not both. We will not know what the outcome is, until we take a look inside.

Hypothesis 2: Schroedinger's cat is simultaneously both alive and dead. Our taking a look will "collapse the wave function" which could then result in the cat dying.

Is there any concrete evidence to believe in hypothesis 2 over hypothesis 1? If not, I'd rather go with the hypothesis that doesn't sound absurd.


The Bell inequalities are concrete evidence to dismiss 1.

Say we flip a coin (just for the sake of the argument - superposition only works for small things), and we're arguing whether the coin is really heads or really tails before we look at it, or whether it's in a superposition. The Bell inequalities give us a prediction on the distribution of results we should see if 1 were true or if 2 were true. Turns out the distributions we see in experiments are consistent with 2 but not with 1. The measured values cannot be true if the coin is either heads or tails.

Of course, there are subtleties involved, but that's one of the main pieces of evidence we have.


> Is there any concrete evidence to believe in hypothesis 2 over hypothesis 1?

https://en.wikipedia.org/wiki/Double-slit_experiment. That's pretty concrete.

Plus all those fancy quantum computers that people are developing depend on the fact that quantum superposition exists.


Well, for Schroedinger's cat, it's hypothesis 1. For quantum systems, hypothesis 2 has some evidence behind it.

But you can't translate the quantum stuff to the cat, because the detector makes a measurement. So because of the detector, the uncertainty is gone, and there's no uncertainty in the state of the radioactive atoms, let alone in the cat.


> Is there any concrete evidence to believe in hypothesis 2 over hypothesis 1? If not, I'd rather go with the hypothesis that doesn't sound absurd.

Almost the entire idea of quantum mechanics is that hypothesis 1 is incorrect.


Isn't the double slit experiment evidence for hypothesis 2?


The double slit experiment shows that light has properties traditionally associated with waves (as opposed to particles). Can you clarify how this is related to hypothesis 2?


> The double slit experiment shows that light has properties traditionally associated with waves (as opposed to particles).

Even for photons sent through one at a time? Nope. That's not behavior associated with any type of wave.


Well, it says that individual photons are still waves.


http://www.hitachi.com/rd/portal/highlight/quantum/index.htm...

It works when you send electrons one at a time, suggesting they are somehow interfering with themselves. In other words, the electrons are in both slits at once until we check. That might not literally be true, but it's not possible to explain the interference by saying we just don't know which slit they're going through.
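A toy sketch of why the "we just don't know which slit" story can't reproduce the fringes (idealised point slits, arbitrary units, illustrative names only):

    import numpy as np

    # Far-field amplitude from a point slit at position d, seen at
    # angle-like coordinate x (idealised, wavenumber set to 1).
    def amplitude(x, d):
        return np.exp(1j * d * x)

    x = np.linspace(-10, 10, 1001)
    psi_top = amplitude(x, +1.0)
    psi_bottom = amplitude(x, -1.0)

    # Ignorance model: each electron went through one slit, we just don't
    # know which, so probabilities add. Result: a featureless distribution.
    p_ignorance = 0.5 * np.abs(psi_top)**2 + 0.5 * np.abs(psi_bottom)**2

    # Quantum model: amplitudes add and are then squared. Result: fringes,
    # which is what is seen even when electrons arrive one at a time.
    p_quantum = 0.5 * np.abs(psi_top + psi_bottom)**2

    print(p_ignorance.min(), p_ignorance.max())  # flat: 1.0, 1.0
    print(p_quantum.min(), p_quantum.max())      # oscillates: ~0.0 to ~2.0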


So the most logical interpretation of quantum physics is that reality is subjective, correct?


No, the most logical interpretation of quantum mechanics is that you can't measure things without using particle interactions, and particle interactions change what they interact with.


I don't think that's enough. Particle interactions are unitary, but measurement looks non-unitary from the point of view of the observer.

Reality is intersubjective, but not subjectivist.


I think confusion arises when people hear 'measurement' and jump to thinking "wait, what's so special about measurement? Would the world not exist if we weren't there to measure it?" This is due to an imprecise understanding of measurement, and should be corrected by pointing out all particle interactions are measurements. Our classical world exists without superposition because there are enough particles interacting with each other that everything is constantly being measured.


I cannot agree with this more. The universe has existed without intelligent observers for billions of years. It stands to reason that the universe is constantly being shaped as particles interact.


Reality is objective, but our perception of it is subjective given our current limited understanding of it.


Anyone who thinks that QM is making stuff up, please watch this video. Here Rubidium atoms (mind you, atoms and not electrons or photons) are made to interfere with each other.

https://www.youtube.com/watch?v=a_w6AGe_fIo

Quantum mechanics is a foundation of physics, chemistry and materials science. Still, there is an ongoing debate about the emergence of the classical, macroscopic world from the well-understood microscopic world of quantum mechanics. We contribute to this discourse by demonstrating quantum superposition of massive particles at the distance (0.5 m) and time scales (2 s) of everyday life, thereby advancing the state-of-the-art of atom de Broglie wave interferometry by nearly two orders of magnitude [1]. In addition to testing a central tenet of quantum mechanics, we pave the way for new precision tests of gravity, including the possible observation of gravitational waves and tests of the equivalence principle. In related experimental work, we demonstrate that entangled clusters of approximately 1000 atoms can be used to achieve 10-fold improvement in the sensitivity of quantum sensors based on atomic transitions; the levels of performance achieved could not have been realized with any competing (non-entangled) method [2].



"Physicists know how to use quantum theory—your phone and computer give plenty of evidence of that."

What?


Semiconductors couldn't work without quantum mechanics [0]. Heck, the sun couldn't work without quantum mechanics [1].

[0] https://physics.stackexchange.com/questions/112615/why-is-it...

[1] https://www.forbes.com/sites/ethansiegel/2015/06/22/its-the-...


(This does not make your phone a quantum computer. Your phone is an ordinary computer that happens to be enabled by quantum mechanics.)


This sounds like saying our phone shows a demonstration of E=mc^2. Not a great intro / not giving a lot of confidence to the reader when it starts like that.

In fact, it'd make Einstein laugh. His late life was devoted to unifying the theories and asking quantum physicists to demonstrate quantum theory in the real world or through thought experiments. He'd probably still be upset today.

I, for one, am still waiting for the 'unified relative quantum theory' as well.


Integrated Circuits and associated lithography based manufacturing techniques rely heavily on quantum mechanics. The transistors and gates wouldn't work if quantum mechanics did not provide an accurate depiction of the physics involved.


> lithography based manufacturing techniques rely heavily on quantum mechanics.

Are you sure ?

> The field of quantum lithography is in its infancy, and although experimental proofs of principal have been carried out using the Hong–Ou–Mandel effect,[3] it is still a long way from commercial application.

From https://en.wikipedia.org/wiki/Quantum_lithography


The most direct requirement on Quantum Mechanics in existing lithography systems is the lasers used as light sources https://en.wikipedia.org/wiki/Laser#Foundations https://en.wikipedia.org/wiki/Photolithography#Light_sources


Conventional lithography also requires an understanding of quantum effects at the scales it's being used at. Not to mention modelling light/mask layer interactions.


I think that "reality vs knowledge" is a false dichotomy. After all, our knowledge is a classical-level feature of reality, not a core element of nature's physical ontology. What does nature's physical ontology require, such that once it all builds up to the classical level, we can know things?

Information as an element of physics is what I'd guess.


As a layman, it seems to me that quantum theory is similar to the expanding universe theory in that scientists substitute complexity (cause of expansion, quantum states) with a token (dark matter/energy, indeterminacy) until they better understand the underlying systems.

Can anyone tell me why I am incorrect?


The two examples you give are very different. I would argue that you are, indeed, very incorrect.

The purpose of scientific theories is to be able to make predictions about physical systems. Fundamental theories are based on some postulates (assumptions; in mathematics, we would have axioms). Quantum mechanics is based on a few postulates (which are expressed in very mathematical terms, like "Physical observables are represented by Hermitian matrices on a Hilbert Space H").

If the theory makes incorrect predictions, it is deemed to be wrong. If it makes predictions that cannot be explained using "common sense", then non-specialists come out of the woodwork stating that it is obviously incorrect and that specialists simply do not understand.

For the expanding universe ... we have a theory (Einstein's Theory of General Relativity) which has been extremely well tested in all kinds of scenarios, and has always been found to make correct predictions. Your phone's GPS would not work if it used predictions from Newton's theory rather than Einstein's.

The Theory of General Relativity can be thought of as a set of 16 equations of the form:

(aspect of geometry of space time) = (stress-energy-momentum content of the universe)
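Concretely, in standard notation this schematic is the Einstein field equation (with the cosmological-constant term included):

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

The Einstein tensor G_{\mu\nu} (plus the \Lambda g_{\mu\nu} term) carries the geometry of spacetime on the left, and the stress-energy tensor T_{\mu\nu} carries the energy/matter content on the right; the indices \mu, \nu each run over the four spacetime dimensions, giving the 16 components (10 independent, since the tensors are symmetric).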

In simplified terms, the left-hand side of these equations relates to the observed expansion of the universe. The right-hand side of these equations is what type of "energy/matter" we observe. We find that simply including baryonic (i.e. "normal") matter that we can see by looking (using telescopes) at the sky leads to inconsistencies with the observed expansion. If we include different types of energy/matter that we cannot see (i.e. dark), each obeying precise equations of state, we get predictions that are consistent with what we observe. Thus, we use General Relativity + observations about the expansion of the universe to make predictions about what the energy/matter content of the universe is.

What you see as a failure, I see as a success of G.R. as it makes predictions about what is found in the universe which we did not know before, and had no way to know simply using the tools we had.

"Fun fact": many years before modern neutrino experiments nailed down the number of "normal" light neutrino species to 3, using General Relativity and known nuclear physics, one could use the observations of the ratio of light elements (He/H, Li/H, etc.) produced in the early universe to make the prediction that the number of light neutrino species had to be equal to 3.


> "Fun fact": many years before modern neutrino experiments nailed down the number of "normal" light neutrino species to 3, using General Relativity and known nuclear physics, one could use the observations of the ratio of light elements (He/H, Li/H, etc.) produced in the early universe to make the prediction that the number of light neutrino species had to be equal to 3.

Could you give a bit more detail on how the observations, plus nuclear physics, plus general relativity, led to that conclusion?


Look at the very short section titled Big Bang nucleosynthesis on https://en.wikipedia.org/wiki/Cosmic_neutrino_background. Based only on the relative abundance of He-4 and D (H-2), one gets an estimate of 3.14 for the number of neutrino species. You can find more detail on http://darkuniverse.uni-hd.de/pub/Main/WinterSchool08Slides/.... Basically, the more light neutrino species there are, the faster the expansion occurred in the early universe, and the less time there is for making other nuclei from protons and neutrons. After a while, the matter is too diluted for fusion to occur and the ratio of various nuclei is "frozen" ... until stars are formed much later on.


Doesn't Bell's inequality address this question?


It addresses a related but slightly different question.

You flip a coin: there is a 50/50 chance of heads or tails. These probabilities are due to our incomplete knowledge: the coin knows which face is up, and it's only because we don't know that we assign probabilities.

Now if we have a particle that is 50/50 spin up or spin down, we may think that it is like a coin: the spin is already determined, we just don't know which it is. This is where Bell's Inequality comes in and shuts that idea down. Until the particle is observed, it simply does not have a spin in the way that a coin has an up face.
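A small numerical sketch of what gets ruled out, using the textbook CHSH setup and the singlet-state correlation E(a,b) = -cos(a-b) (angles and names are the standard illustrative choices):

    import numpy as np

    def E(a, b):
        # Quantum prediction for the spin correlation of a singlet pair
        # measured along directions at angles a and b.
        return -np.cos(a - b)

    # Standard CHSH angle choices (radians)
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

    # Any model where each particle secretly carries a pre-decided spin
    # (local hidden variables) obeys |S| <= 2. Quantum mechanics predicts
    # |S| = 2*sqrt(2) ~ 2.83, and experiments match the quantum value.
    print(abs(S), 2 * np.sqrt(2))  # both ~2.828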

So if a pre-observed particle does not have a spin, what does it actually have? This is the core of the dispute in the article. There are two possible answers:

1. A wavefunction (psi), aka a quantum state. This is the psi-ontic view which argues the wavefunction is a real and physical thing that evolves in time regardless of any observations. MWI and Bohm are both psi-ontic theories.

2. Nothing. This is the psi-epistemic view which holds that it makes no sense to talk about a particle's reality in-between observations. In this view, only measurements and their results are real; the wavefunction is a state of knowledge. Copenhagen, Consistent Histories, and the minimalist ensemble interpretations are psi-epistemic.


The argument about what "a real and physical wavefunction" means boils down to this question:

"Is there a science of the wavefunction beyond QM?"

Can you ask more questions about it, break it down into constituent processes, maybe re-engineer it?

Or is it opaque and closed and there is no possible way to do further science on it?

My money is on option 1, because it's hard to imagine an organising principle you can't possibly do any science on.

Wavefunction science may be weird and like nothing ever seen before - especially given non-locality and possibly non-realism. But it's a good rule of thumb in science that when nature becomes counterintuitive you're on the right track. If the answer was obvious and familiar you wouldn't need original and creative thinking to find it.

Option 2 seems like an admission of total defeat. It would be the first time in science where we hit a wall and didn't even try to understand what it was made of.

That doesn't mean it can't be correct. But it does mean we shouldn't give up without trying for quite a while longer. And if we do give up, it should only be after discovering some kind of meta-Bell theorem which proves there's no way to go further.


If I understand it all correctly, the article describes an experiment to test your preferred option, but so far the evidence is in favor of option 2?


Oh Hey, I know the author of this! In fact, I think I introduced him to that Spekkens paper. Small world.


Seems like it changes reality. The presence of a measurement changes the interference pattern for all observers, no?

If the waveform collapse only happened in my reality then if I observed a field, I would expect others to see an interference pattern, even if I only saw a clear field.

If other people can observe an interference pattern while I observe a clear field, then that's a multiverse. That means you need a whole new universe for every observation. Seems like poor compression. Although I guess if it's an append-only store then it works out ok.


The article describes an alternative interpretation.



