It’s time to admit quantum theory has reached a dead end (nautil.us)
225 points by pseudolus on March 8, 2022 | 228 comments



As a former quantum physicist, I find it a little troubling to read "quantum theory has reached a dead end" in specific reference to the interpretation of quantum mechanics. Most quantum physicists could not care less about how quantum mechanics is interpreted when it makes highly accurate quantitative predictions, and there are still plenty of interesting open problems for quantum theory (e.g., related to the practical design of algorithms and hardware for quantum computers).

This article also misses what is likely the leading interpretation of quantum mechanics by actual quantum physicists, namely that the measurement problem is solved by decoherence (the quantitative theory of how classical states emerge from quantum states):

https://en.wikipedia.org/wiki/Measurement_problem#The_role_o...

https://royalsocietypublishing.org/doi/10.1098/rsta.2011.049...


I read somewhere that quantum mechanics is the most tested of all scientific theories. And it has been shown to be right, every time.

Hawking espoused this idea he called “model dependent realism”. The idea is that every human understanding of reality is model-dependent, that is, it is not “reality” that we truly understand (we can’t) but rather in every case we have some model of reality that is useful in particular situations. For instance, we know that Newtonian physics are not “real” but they are perfectly accurate in certain situations. So they are not “wrong” when they are used in those situations, in fact, they are right.

The author of the article writes, “While Einstein won a Nobel Prize for proving that light is composed of particles that we call photons, Schrödinger’s equation characterizes light and indeed everything else as wave-like radiation. Can light and matter be both particle and wave? Or neither? We don’t know.”

In model dependent realism, we can ignore this apparent contradiction. In some situations the model of light as a particle is the most useful, and in others, the model where it is a wave is the most useful. We have to accept that it is not “really” either of these models, but that no matter what we do, any model we come up with for it will still just be a model.


>> Can light and matter be both particle and wave? Or neither? We don’t know.

But we know! The answer is neither.

Light and matter are weird things that are impossible to describe with everyday language, but they can be described very precisely in the language of math. The problem is that the equations are too complicated and difficult to use.

They have been tested thoroughly, for example in particle accelerators, but only in experiments with very few things moving around. It's very difficult to use them when the experiment gets bigger.

In some cases, you can make some approximations and get almost the same result if, instead of the full correct equations, you use the wave equation. It's just an approximation. Light and matter are never waves, but in some cases they can be approximated as waves.

In other cases, you can make some approximations and get almost the same result if, instead of the full correct equations, you use the particle equation. It's just an approximation. Light and matter are never particles, but in some cases they can be approximated as particles.

And in other weird cases, both approximations give very inaccurate predictions.
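
To make this concrete, here is a toy numpy sketch (my own illustration with made-up units, not a solution of the real equations): discrete particle-like detections, sampled one at a time, build up exactly the wave-like double-slit intensity pattern.

  import numpy as np

  rng = np.random.default_rng(0)
  x = np.linspace(-10, 10, 1001)      # position along the detection screen
  k, d, L = 50.0, 2.0, 50.0           # wavenumber, slit separation, screen distance
  r1 = np.hypot(x - d / 2, L)         # path length from slit 1
  r2 = np.hypot(x + d / 2, L)         # path length from slit 2
  psi = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2
  intensity = np.abs(psi) ** 2        # "wave" picture: smooth fringes
  # "particle" picture: individual hits whose histogram traces the fringes
  hits = rng.choice(x, size=300, p=intensity / intensity.sum())
  print(np.histogram(hits, bins=20)[0])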


This very precise description is still just a model and will in all likelihood be improved or even replaced one day as well.


Apparently Hawking coined this term in 2010. Robert Anton Wilson coined a similar term, "Model Agnosticism", all the way back in 1977 in his book Cosmic Trigger:

"The Copenhagen Interpretation is sometimes called "model agnosticism" and holds that any grid we use to organize our experience of the world is a model of the world and should not be confused with the world itself. Alfred Korzybski, the semanticist, tried to popularize this outside physics with the slogan, "The map is not the territory." Alan Watts, a talented exegete of Oriental philosophy, restated it more vividly as "The menu is not the meal."


"All models are wrong, some are useful" describes it very well for me. It still amazes me, how late I really understood this and how many intelligent people not fully understand it.


He expanded on that quite a bit in Quantum Psychology (1990), giving a fuller treatise on the software of the mind and how it maps our interpretation of reality.

Quite a good read if you liked his previous works.


We are human and prefer elegance, so enough will continue to try to unify models anyway. I don’t buy that we cannot understand nature but this model dependent realism is fine as a practical way of working until we do understand it.


> I don’t buy that we cannot understand nature

I don’t think anyone is saying that, but in pondering this issue, I remembered how in Spanish there are two verbs for “to know”, “saber” and “conocer”. That latter verb is often explained in English as “to be familiar with”. The usage makes the point best: you can “conocer” a person but cannot “saber” them. That is, you can be acquainted with someone but you cannot truly “know” them, no matter how close you are to them.

Think about it: how well do you know yourself? You live in your own head and yet you are probably surprised by some of your own reactions, or dismayed by your actions, or fearful of certain emotions. If you do not fully understand yourself then what does understanding nature even mean? I cannot inhabit the mind of my wife, let alone inhabit a photon.


> it is not “reality” that we truly understand (we can’t)

Yes, we can only describe with models what can be observed. But it is a bad excuse for ignoring contradictions in (or between) models.


>And it has been shown to be right, every time.

So has GR. Yet the two theories seem to be utterly incompatible.


They aren't "utterly incompatible", they're largely compatible. For example, lasers work here on Earth, and between the Earth and its moon. Moreover, hydrogen maser clocks and cesium et al.'s hyperfine transitions are used in clocks which are sensitive to nearby mass concentrations, and altitude above the Earth.

There are whole textbooks written on the limit in which General Relativity and Quantum Mechanics work well together, with Birrell & Davies 1984 https://books.google.co.uk/books?id=SEnaUnrqzrUC being the most widely used by graduate students (and as a reference book for researchers).

Indeed, such textbooks go into where GR & QM make incompatible predictions, and almost all of those are in the limit of strong gravity, which in turn is almost certainly always deep within an event horizon, or isolated in the very very very early universe.

Semi-classical gravity (SCG) works well as an "effective field theory", and simply marries a classical curved spacetime (General-Relativity style) with a relativistic quantum field theory (standard-model-of-particle-physics style). In particular, with minor caveats, on the cusp of strong gravity SCG is successful enough in the astrophysical study of stellar remnants that it is reasonably believed to be good everywhere outside black hole horizons and after the very early universe. https://en.wikipedia.org/wiki/Semiclassical_gravity -- one of the caveats is noted there, namely that given a sizeable mass (> kilograms) brought into a spatial superposition, it is not at all clear what SCG predicts a Cavendish apparatus or other gravimeter will register. This is a possible incompatibility of SCG's two more-fundamental theories in the weak gravitational field, low-energy matter, and low-speeds-compared-to-c limit, and is a puzzle that hopefully will be informed by clear experimental data some day.

Since we can't get information back from inside a black hole horizon; can't see anything in the very very early universe (electromagnetism hadn't "frozen out" of the GUT yet for instance); direct detectors of very early universe gravitational radiation are implausibly hard engineering tasks; and a bowling ball sized mass will be extremely hard to keep in a coherent state for reasonably long periods of time; these are really academic problems rather than practical ones.


As a former quantum physicist who has just decided to go back into quantum computing, this was my take as well: Introductory quantum physics courses may still include wave-function collapse and all that nonsense, but I have not met many physicists who use this as a mental model.

To be a bit more specific as to how _decoherence_ solves this, one way to see it is that classicality (i.e. observables having specific values) is an emergent property in the limit of near-infinite degrees of freedom in the same way that e.g. thermodynamic properties (temperature etc.) are emergent properties of classical systems in the limit of near-infinite degrees of freedom.
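
Here is a minimal toy sketch of that mechanism (my own illustration in the spirit of the references below, not taken from them): one system qubit entangles with N environment qubits, and the off-diagonal ("quantum") element of the system's reduced density matrix gets multiplied by one overlap factor per environment qubit, so it dies off roughly exponentially in N.

  import numpy as np

  rng = np.random.default_rng(0)

  def coherence(n_env):
      # System qubit a|0> + b|1>; each environment qubit k is kicked into
      # a branch-dependent state e_k(0) or e_k(1) = cos(theta_k)|0> +
      # sin(theta_k)|1>. The off-diagonal term of the reduced density
      # matrix picks up one overlap factor <e_k(0)|e_k(1)> = cos(theta_k)
      # per environment qubit (theta_k = random coupling strength).
      thetas = rng.uniform(0.0, np.pi, size=n_env)
      return np.abs(np.prod(np.cos(thetas)))

  for n in (1, 5, 10, 20, 40):
      print(n, coherence(n))   # coherence -> 0 as n grows: classicality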

To put it pointedly: claiming that quantum theory is at a dead end is like claiming statistical physics is at a dead end.

One of my personal favorites for how to formalize this is the work on "pointer states" by Wojciech H. Zurek. There is a freely available Physics Today article [0], and you can find surveys of further work, e.g., in the introduction of [1].

[0]: https://arxiv.org/abs/quant-ph/0306072 Zurek, Decoherence and the transition from quantum to classical -- REVISITED

[1]: https://arxiv.org/abs/1508.04101 Brasil, Understanding the Pointer States


Okay, cool, I’m 100% with you.

But could we then please stop teaching the collapse nonsense to first year students?

The logical inconsistencies of the collapse interpretation are an insult to their intellect.


The collapse just stands for the unknowable details of the interaction with the environment during measurement, a good quantum physics course will explain that and include experiments that make that clear. For instance the Stern-Gerlach experiment illustrates this well.


Decoherence is a consequence of interaction between particles in the coherent state; the environment plays no role there. You can do all experiments in a vacuum and they will still work the same way, obviously.


> in the limit of near-infinite degrees of freedom.

Can you explain or express this in a simpler way? Is it almost like saying macroscopic?


> namely that the measurement problem is solved by decoherence

I think "solved" is too strong. The Wallace paper you reference, for example, does not claim that decoherence solves the measurement problem. His claim is only the more modest one that understanding decoherence helps to clarify what the measurement problem actually is.


I'd love it if decoherence were now the dominant perspective, and I at least am largely convinced of it barring some huge revolution, but its dominance would surprise me very much. In terms of my acquaintances (largely the department of my university), team "I don't care, does it matter?" takes the top spot, followed by equal parts collapse and decoherence. Oh, and some Bohmian people, but I don't know them as well, as interesting as it would be if they were right.


Decoherence theory is really somewhat orthogonal to the measurement problem. Decoherence explains how the "collapse" happens gradually, if you have a non-perfectly isolated system, by seeping entanglement. But at some point you still need to invoke the stipulation that you as an observer are drawn into the entanglement, and at that point whatever's left of the wavefunction "collapses".

In essence the Copenhagen interpretation is still correct as a simplification that is OK in most cases. This is reflected by the fact that practising solid-state physicists have successfully used this 1920s style of QM for 100 years now.


How can you be a "former quantum physicist"? Did you somehow unlearn everything that made you a quantum physicist?

Maybe you meant to say "formerly paid to be a quantum physicist"? :-)


I can't speak for the parent, but I have a physics degree from 30 years ago. Since then my career has diverged from academic physics to the point where it would take some effort to re-learn even my graduate-level QM coursework, much less familiarize myself with current research topics. So I could see where the "former" status comes from.


>How can you be a "former quantum physicist"?

By disentangling himself from quantum physics. He was a quantum physicist, so I assume he knows how to do it.


I guess "quantum physicist" can mean either an academic credential or a job title. In my case, I have the former but no longer the later. I finished my PhD nine years ago and no longer work in the field.


I disagree strongly with the author's pull-quote that "We know nothing more than Bohr, Einstein, Schrödinger, and Heisenberg."

There has been roughly a century worth of experiments since the quantum theory came into being. Those experiments have scythed through myriad attempts to explain the mechanism by which quantum theory apparently describes reality.

Physicists can choose between interpretations of quantum mechanics according to "taste" precisely because there are no observable ways to differentiate between them. As soon as those ways become available to us, "taste" will give way to experimental tests.

There is still ample room for cleverness. Experimentalists are pushing forward where we can, but another observation like Bell's might be sufficient to allow, pardon the pun, a quantum leap.


For some time now we have been in an "epicyclean phase" of physics, trapped by the extraordinary predictive success of Quantum Electrodynamics and still using mathematical methods devised in the 19th century (variational calculus). This has led us to the current situation, with extremely complicated theories at the limit of human understanding that yield no new results. It will take a modern-day Copernicus to come up with a new view of physics that will result in simpler, more productive models, to take us out of the local maximum we are in.


That makes the assumption that there exists a simpler view of physics that will result in more productivity.

IMO, some of the smartest people that have ever lived live right now, mainly because we have more humans alive than ever before. The amount of brain power working to find these simpler models is breathtaking, yet we aren't seeing the elegant simple solutions fall out like we once did. I don't think that's a problem with the ingenuity; I think that's a problem with the problem.


If the brain power is focused in the "wrong" direction, or all the smart people are constantly looking under the same set of "street lamps", it doesn't matter how many there are. Unconventional thinking is hard; you can't force it solely with numbers. Arguably quite the contrary: a large agglomeration of scientists can systematically enforce a proportionally larger conformity pressure.

It is a hard balance to strike because on the one hand you want to be constantly challenged by your fellow scientists but on the other hand also just take the foolish liberty to fully develop your (most likely flawed) intuition.

So even if I'm highly sceptical of Wolfram, he gets my full respect, and so does Hugh Everett [0], who wrote a letter to Einstein as a 12-year-old, with Einstein answering: "Dear Hugh: There is no such thing like an irresistible force and immovable body. But there seems to be a very stubborn boy who has forced his way victoriously through strange difficulties created by himself for this purpose. Sincerely yours, A. Einstein." Later in life he courageously confronted Niels Bohr with the [...] idea that the universe is describable, in theory, by an objectively existing universal wave function (which does not "collapse"), i.e. the Many-Worlds Interpretation.

[0] https://en.m.wikipedia.org/wiki/Hugh_Everett_III


I think it's easy to underestimate the extent to which the greatest minds have also turned to every weird permutation you could think of. Personally I figure we'll probably find some deeper theory just since the Standard Model still has its issues and gravity isn't rolled in yet, but there's every reason to think that this problem is hard, not just untried. Much of the most interesting work in the last century has been ruling things out.

There are lots of nifty ideas that are explored until insurmountable holes are found in them. The two main nonstandard lines of thought that have had any real progress are decoherence, which I'd call a success, and string theory, which I'll avoid rating because I have string-theorist friends but am not a liar. There are plenty of others, and maybe one will bear fruit, but honestly when the next big break comes it'll probably be really obvious.


Why would you assume it would be obvious? Copernicus's heliocentric model was wrong in many ways: he assumed uniform speed, that the sun was the center of the universe, that orbits were circular, assumed the necessity of epicycles, and so on. His main victory was simply that, from our perspective of complete knowledge, we can now safely say he was less wrong than the geocentric theories of the time; there's also the David vs Goliath narrative, which is emotionally satisfying.

If one wanted to cast doubt upon, if not "debunk", his idea from the science of the times, it would not have been difficult to do so (see: Tycho Brahe). And that was on an issue that was likely some orders of magnitude less complex than the one we may be facing today. The implication of that being that the "right" answer may initially seem to have more holes than swiss cheese. I think it's very safe to say that relativity, undoubtedly the monolith to which all scientists aspire, was an exception and not the rule in the march of discovery, in its reception/clarity.


Good point. I guess all the great minds of this time are occupied with how to maximise ad clicks.


Copernicus' model had the planets orbiting the Sun in circles at uniform speed. This does not match reality, so Copernicus just chucked in some epicycles to correct the error. Copernicus' model had all the complexity of epicycles, but he hard-coded the first epicycle into the system by having the planets orbit the Sun instead of the Earth.

We wouldn't need a Copernicus to solve this problem, we would need a Copernicus, then a Kepler, (ellipses & non-uniform speed) then a Newton, (gravity causes ellipses and non-uniform speed) and then an Einstein. (gravity is warping of space-time)

If quantum field theory is as wrong as Ptolemy's geocentric model was, we're hopeless. Because QFT very well predicts the observations; our observations have no ellipses in them that invalidate circular orbits, our observations have no anomalous Mercury precession that invalidates Newtonian gravity, no speed of light being consistent in all directions to invalidate luminiferous aether. To say that we simply need a smarter theoretical physicist is simply wrong -- our current theories do not contradict the things we are able to observe.

We know that general relativity and quantum mechanics do not play nice at small scales and high local gravity. But we cannot observe this conflict. And that's nothing to go on.

We would need an observation which shows that general relativity or QFT is wrong about something before we could conceivably make foundational progress on making new or different theories. And every few months there's a new article about "Einstein is proven right again" or "LHC experiment shows all readings are nominal".


So what is dark matter?

This thing that keeps galaxies bound that we cannot see but can only observe the effects… I think the hunt for it will push us into new territory.

That being said, as you say there has been nothing yet found that violates GR/QFT.


The leading candidate for dark matter is that it's a soup of particles that interact gravitationally, but not via the electromagnetic or strong nuclear force. If we're very, very lucky, it might interact via the weak nuclear force, which means there exists a slim theoretical possibility that we might be able to observe it interacting that way. That being said, even if it does interact via the weak nuclear force, there's nothing to say it interacts in a way which lets us differentiate it from, say, neutrinos. And if it doesn't interact via the weak nuclear force, then we cannot make any characterizations of it, ever, not even with arbitrarily advanced technology.

If that's true we're hosed. There would be no insights, no way to theory ourselves out of it. Only nerds wailing futilely about dark matter on the internet. And sadness. And we wouldn't know it's because all our theories are correct, but we don't have the complete picture of the extra-Standard Model dark matter particles, or if it's because some variation of MOND is true, (although it's extremely unlikely to be a MOND variation) or if we're as wrong as epicycles vs curved fabric of spacetime.


It seems like I'm more hopeful for a quantum explanation of relativity than you are, but it's still sort of disquieting to think that there could be any number of fields that we just couldn't ever properly interact with. Even with gravity rolled in, there could still be a field that just doesn't interact at all. Hell, there could be hundreds. Creepy.


More likely, the symmetries that hide these fields would break down at energies that are way too high to ever reproduce in an accelerator or another lab experiment.


Wouldn't arbitrarily advanced technology be able to move it around with gravity or sense collections of it with some kind of LIGO but with gamma rays for even more precision or something?

Arbitrarily advanced is pretty strong in a world where some people think dyson sphere scale stuff could be real.


You could characterize it gravity-wise, but we can already characterize it gravity-wise (sort of). We really only care about a field to the extent it can interact with other fields, and as of now gravity isn't really a quantum field, but a distinct thing. I think most people hope to eventually get relativity under the quantum umbrella, but we're nowhere near that yet. If relativity is just a description of the stage the other fields dance on, then we're just kinda screwed. We could determine the mass of the particle (assuming we could ever be confident in separate particles at all), but no other properties.


> It will take a modern-day Copernicus

We might already have a modern day Copernicus. Could be string theory, could be something else. The problem is that we don't really have any experimental data that can't be explained by the Standard Model. What we really need is a modern day Galileo that can perform some sort of observations, like finding Jupiter's moons, that don't fit in with the existing conventional physics.


27% of the universe is totally unknown (dark matter) and another 68% is totally unknown (dark energy), leaving only 5% of our universe that is explained by the standard model.

Feels very epicycle-ish and ripe for a major shift.


they are both called dark for a reason

not very observable


Their effects are observable by way of their gravity.


That's very forgiving.

We have a model and we have observations. The two don't match, so there is a problem with at least one (model or observations). MOND posits the problem is with the model and proposes refined models. These are refutable and indeed, counterevidence is often found.

Dark matter posits the problem is with the observations. Its solution is to propose a tiny particle that is basically invisible. For every situation, you can then invent an amount and a distribution of these tiny, invisible, undetectable particles to match observations and thereby substantiate your theory.

It's not really surprising that so far, folks have succeeded in inventing a distribution of undetectable clouds of particles, clouds weighing in at multiples of the solar system's combined mass.


> we don't really have any experimental data that can't be explained by the Standard Model

What was the evidence Copernicus saw that epicycles didn't explain? (Honest question.)


Nothing. His model actually fit the data worse than epicycles. He just thought it was a more sensible approach that matched the Pythagorean opinion.

It wasn’t until Kepler that a geocentric model involving ellipses actually was more predictive.

Might be worth returning to the Pythagorean interpretation (which Bohr discussed): All is number. The world is made of math, not stuff.


> It wasn’t until Kepler that a geocentric model involving ellipses actually was more predictive.

You mean heliocentric?


Thanks, my bad.


Could the would-be modern-day Copernicii be being stymied by modern-day academic funding structures and grant review committees?


I also have this impression. I think the current system lacks intellectual diversity. Lots of people pursuing the same few popular lines of inquiry, because if they get away from the herd, they have a hard time joining a big research group, getting funds and getting more people to read and cite their papers.


That problem is kind of inherent in the domain. In the past, significant research could be done with the meager stipends from generous nobles. Cutting-edge physics labs and supercomputers for astrophysical simulations are significantly more expensive.


Only an undergraduate in Physics, but rest assured that there are many smart minds currently in search of this! Building bigger particle colliders is not the only solution, and there are a lot of interesting things being done with e.g. neutrino physics that may bear fruit in contradicting the standard model.


Hmm, a new kind of science.


Due to the mixed reviews, I haven't bothered to read

   https://en.wikipedia.org/wiki/A_New_Kind_of_Science
even though I assume that the target audience is more or less people who were fascinated with

   https://en.wikipedia.org/wiki/Chaos:_Making_a_New_Science


I read this humongous damn thing around 2004. It's not worth it.


Wolfram has a proposal, by that very name.

Hasn't yet made useful predictions though.


Are more modern information-theoretic and dynamical models not used for quantum mechanics? They're both relatively recent and extremely innovative mathematical models.

Granted, I'm not mathematically fluent enough to know if they represent a deviation from variational calculus!



They are, and you see them used sometimes (a friend of mine is actually using something like that in her thesis), but for the Grand Theory type stuff this blog seems to be talking about, they aren't as useful. They reduce to the same math as before, and on a fundamental enough level the shortcuts they let you take just aren't useful.


In my view the problem with interpreting physics theory precedes the quantum era. Nobody really knew what classical gravity or magnetism consisted of. The idea of the planets acting on one another instantaneously without contact in a vacuum was absurd on the face of it. Einstein was disturbed by this.

The history of QM that I learned in school a long time ago, is that a few physicists endorsed adopting a purely mathematical formalism with no preferred interpretation in order to be free of preconceptions that might prevent them from making progress. They were still free to debate interpretation, but considered it to be separate from the problem of forming an experimentally testable theory.

But in terms of finding a philosophically satisfactory interpretation, we don't know what to look for. No science has ever attempted to dig deeper than the level of analogies that are easy to grasp. And it seems reasonable that the interpretation should depend on the science. Otherwise we risk embracing an interpretation that becomes a barrier to progress, or that is overturned by new experimental evidence.

I think the biggest problem with fundamental physics today, is that it's hard. As in, hard enough that it's not yielding answers at the rate that it was 100 years ago, and we don't know when it will: 10 years, 100, 1000? The big problem -- reconciling quantum mechanics and gravity -- won't be solved until we solve it. A theory that solves this problem will require one of those two things to be "wrong" in some sense, in which case its interpretation will have to be revised.


The quotes from scientists at the end of the article nicely summarize my opinion on this article:

> Deutsch, too, seemed impatient with my dissatisfaction over our understanding of the nature of reality. “Someone might equally well say: We may know what dogs look like and how they behave, but we don’t know what a dog ‘actually is’,”


From the point of view of chemistry, QM “works”. It provides results that can be tested experimentally and predictions that can be validated. There are issues with computability but that’s more engineering than science. Determining structure, spectra and other physical properties is quite helpful.

What was the cliche? All theories are wrong, some theories are useful. It needs to be rephrased from vector codes into massively parallel codes but people can work with that.


To me, this always felt like programmers excusing abhorrent code with "but it works".

Quantum theory isn't. It's not a single, cohesive, consistent theory.[1] There is no recipe you can apply; it's just a bunch of guesswork and heuristics that happen to produce the numerically correct equations if you keep trying long enough. This isn't a secret or some sort of external criticism; you'll find it front-and-centre in the foreword of many a QM text!

Schrödinger famously arrived at his equation basically through numerical methods. He just tried things until it "fit" the desired output.

Now, there's nothing wrong with this, per se. It's a perfectly viable approach for getting going, for getting something and using it as a starting point. But it isn't the endpoint, because approaches like this often have virtually no explanatory power.

A similar example is collecting insects, categorising them, giving them Latin names, and putting them up for display in a museum. You can learn a lot, amass enormous amounts of information, but without a theory of genetics and natural selection you will always be blind to the underlying truth of it all.

QM is just like bug collecting. We're collecting numerical equations that work, but we have essentially no clear understanding why. We've built a tree of life, and nobody has had the lightbulb moment that explains why it's a tree.

[1] There was a paper published a few years back where a bunch of working quantum physicists were asked some simple multiple-choice questions about the fundamentals of the theory. There was no consensus opinion on anything! PS: You'll get similar results if you ask priests of random Christian sects about the basics of religion. Conversely, you will get nearly zero disagreement if asking chemists about the basics of their science.


> because approaches like this often have virtually no explanatory power.

The famous Feynman produced a response to this train of thought, usually from laymen, who demand an "explanatory theory". The answer is usually "don't know": why does gravity work the way it does? Don't know, but we can calculate orbits to great precision. Why do photons or electrons travel the way they do? Don't know, but we can make very precise predictions via Quantum Electrodynamics.

The layman demands an explanatory theory because there's an underlying assumption they don't even verbalize: that such an explanatory theory can be understood _without the maths_, that the principles of the laws of physics operate on basic, understandable lego pieces. What is somewhat inconceivable is that the explanation _is_ the maths. To wit, we don't even have an explanation for the idea of inertia! We just observe it, and measure it (as mass). There's no explanatory theory.


You could say that the "explanation" of inertia is that which preserves the symmetry of space translations under Noether's theorem. But that doesn't tell a physicist any more than they already knew, and it tells a layman even less.


"don't know" -- that doesn't mean we can't know, it just means we don't currently have an explanation. If you had asked similar fundamental questions about the "why" of beetle species a few centuries ago, even the most prolific beetle collector in the world would have answered "don't know".

"The laymen demands an explanatory theory" -- and so do scientists! Why do you assume scientists are somehow "above" explanations, and are content in ignorance?

"explanation _is_ the maths." -- People had good rules of thumb for evolution (such as dog breeding practices), and Gregor Mendel could have come up with his mathematical rules for selection before Darwin published his explanatory theory of natural selection. That doesn't mean that it's worthless to try to aim for an explanatory theory of QM instead of a purely numerical / observational one.


Confounding variables are a strong argument against this. Give me two sets of data that have no causal relationship, and it's very possible (if not likely) I will be able to demonstrate not only a causal relationship but also observational predictability through the existence of confounding variables. Let me use ML models and this becomes trivial.

Do school names determine how good a school will be? Of course not. However, schools have names which tend to be reused: cultural icons, scientists, etc.. And schools in low performing areas are going to have a different distribution of chosen names than schools in high performing areas. So you can create a model that will not only demonstrate this but also offer legitimate observational predictability that a school's name will, on average, determine how well it will do.

The only way you could really refute this hypothesis is by understanding the true confounding variable, or by running some huge scale multiple decades long experiment where you changed the names of new schools and measured the performance difference. That experiment is clearly not really practicable, so all you're left with is understanding.
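
A toy simulation of the school-name example (entirely made-up numbers, just to show the mechanism):

  import numpy as np

  rng = np.random.default_rng(0)
  n = 10_000
  # Hidden confounder: district wealth drives both naming fashion and scores.
  wealth = rng.normal(size=n)
  name_style = (wealth + rng.normal(size=n) > 0).astype(float)  # 0/1 name class
  score = 10 * wealth + rng.normal(scale=5.0, size=n)

  # The name "predicts" the score despite causing nothing:
  print(np.corrcoef(name_style, score)[0, 1])                 # clearly positive
  # Condition on the confounder and the association vanishes:
  print(np.corrcoef(name_style, score - 10 * wealth)[0, 1])   # ~ 0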

And this is really where the risk is. If we ever begin to build upon something that is unsound, then it risks everything from that point onward being invalid too. At best everything collapses and you realize you must have made a mistake. At worst, you simply end up adding endless epicycles until at some point everything completely stalls out, muddled in its own invalid fundamentals and irreconcilable complexity.


I'm trying to remember which textbook or professor was the one who said that "shut up and calculate" was the Feynman interpretation of QM.


Feynman must have driven his wife mad.

  "Honey did you take the garbage out?"
  "When you say garbage, have you considered if garbage can even really exist?"


In analogy to this discussion, it'd be the opposite, actually -- situational Feynman would be the person who knew he could practically gather specific unwanted matter into a bin or sack and take it out to the collection bin for most practical volumes, while situational wife would be the person asking for an explanation "but how do you know that what you have taken out is the garbage and that it has in fact been removed"?

Still, it's fun to play with this idea via figures who are a little more philosophically rarified as it were:

https://www.tiktok.com/@davecolumbo/video/706085573209017886...


Come on. We have plenty of good heuristics for why things are the way they are. Quantum theory is no more a bunch of "guessworks and heuristic" than Newtonian mechanics.

Also, physicists disagreeing on interpretational issues related to quantum foundations is not the same thing as disagreeing about the fundamentals of the theory.


The things that Schrodinger tried are still valid physics, just for different systems. He didn't know that, but now we do, this has little bearing on modern physics.

Also, chemists have the luxury of agreeing because the fundamentals of their work (not the day-to-day act of doing chemistry) get to assume the presence of physics, i.e. it's stamp collecting (depending on who you ask).


Not quite stamp collecting, although botany or anatomy head in that direction. Call chemistry “just applied physics” as engineering is “just applied science”.


It's a quote ascribed to Rutherford ("all science is physics or just stamp collecting"). I also heard it when studying protein structures in the 90s (the idea being, if you're just a crystallographer cranking out more structures, you're doing stamp collecting; the real reason to determine structures is to elucidate the general principles of protein folding and function).

Note that protein structure prediction finally fell to machine learning, and it was "mostly stamp collecting" (i.e., the accurate predictions come from subtle analysis of rich protein sequence alignments, not from understanding the fundamental principles of protein folding).


We absolutely understand protein folding. The fact there isn't a closed form simple solution to tell us the shape of a protein shouldn't be surprising at all. It's just hard to calculate. It's like the 3 body problem, but there are 90. I don't know why you think there would be some simple solution to that that doesn't involve simulation and heavy computation. You'd need 8 dimensional 100x100 matrices or something to describe something like a protein in pure math, which is way more like programming than arithmetic anyway.


We do not understand protein folding. I'm an expert in this field with publications and what you just said makes no sense at all.

The simplified protein folding problem states that proteins fold to their energy minimum, and a successful heuristic can find the "correct" fold by finding the structure argmin(energy). Although this is a real simplification that doesn't represent actual proteins, even that is not something you'd solve with matrices; it's combinatorial math.

More importantly, "protein folding" is the biophysical process that proteins experience; what DeepMind did was solve "structure prediction", which is another simpler problem, and they didn't do it by energy minimization, they did it by exploiting sequence similarity to provide structural constraints that massively reduced the search space.
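
For readers unfamiliar with the argmin framing, here is a deliberately silly stand-in (a 1-D "energy landscape" instead of conformation space; the function and schedule are arbitrary inventions of mine) showing the kind of heuristic search involved, in this case simulated annealing:

  import numpy as np

  rng = np.random.default_rng(0)

  def energy(x):
      # Rugged toy landscape with many local minima.
      return 0.1 * x**2 + np.sin(5.0 * x)

  x = rng.uniform(-10.0, 10.0)
  for T in np.geomspace(5.0, 1e-3, 20_000):         # slow cooling schedule
      x_new = x + rng.normal(scale=0.5)
      dE = energy(x_new) - energy(x)
      if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
          x = x_new
  print(x, energy(x))   # near argmin(energy), with luck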


I used matrix math as a hand wavey example of how one might purely numerically describe the angles in a structure. What don't we understand? We know the forces of all the atoms that act on each other, what is the mystery at this point other than calculating it is hard?


"We know the forces of all the atoms that act on each other, what is the mystery at this point other than calculating it is hard?"

might be true, but it's computationally intractable and even if you did find a way around it you'd just learn that your force field was inaccurate, or that proteins don't actually fold to their energy minimum.

Not just hard, but NP-hard. Actually, "the protein folding problem is NP-hard" is misleading; strictly, the "static protein structure problem" is NP-hard, and protein folding is "harder" than that (it's a superproblem of static structure prediction).

The way to describe angles for a protein is this: you use degrees or radians to describe the torsion angles of the backbone (2 torsions per amino acid, 360 possible positions per torsion, times the number of backbone atoms in the protein). Treat side chains with rotamers; that's already a solved problem. However, you can't compute self-collisions in angle space; you need to embed the protein structure in Cartesian coordinates, which is how structures are normally represented (as a graph representing the bond topology, and an N*3 array of positions).
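
To make the representation concrete, a minimal sketch (a random-walk toy chain, not a real protein; the 3.8 A step and 3 A clash cutoff are just typical-sounding numbers I picked): once you have the N*3 Cartesian embedding, the self-collision check is trivial.

  import numpy as np

  rng = np.random.default_rng(0)
  N = 100
  # Toy "backbone": a random walk of N pseudo-atoms with ~3.8 A steps,
  # stored the usual way as an (N, 3) array of Cartesian positions.
  steps = rng.normal(size=(N, 3))
  steps /= np.linalg.norm(steps, axis=1, keepdims=True)
  coords = np.cumsum(3.8 * steps, axis=0)

  # Pairwise distances between non-bonded pairs must exceed a clash
  # cutoff -- easy here, impossible to read off torsion angles directly.
  dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
  i, j = np.triu_indices(N, k=2)   # skip bonded neighbours
  print("clashes:", int((dist[i, j] < 3.0).sum()))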


Right, my point is that it's as solved as it can really be as far as human understanding is concerned. There are too many variables at play to have any sort of elegant solution like Ohm's law or calculating an orbit. The best we can do is find different/more efficient/novel methods to calculate it. Am I wrong?


The problem is ill-posed, not that we lack a solution.


Can you elaborate on why it's ill-posed?


When you say:

> it's as solved as it can really be as far as human understanding is concerned

You handwave away several interesting parts of the problem without providing real justification for why that would be necessary or acceptable.

It's like saying "politicians are made up of atoms, and since we know how atoms interact, our understanding of politics is reasonably complete".


Proteins don't fold to their energy minimum; they typically "collapse" to an intermediate state and then sample many different states (kinetically), rather than adopting the absolute thermodynamic minimum. In many proteins, the structure snapshot doesn't even correspond to a functionally active protein. "The protein folding problem is NP-hard" is an entirely inaccurate view of the biophysics of folding, and solving it would not address any useful question about folding.

What DeepMind solved was a far simpler problem: reproduce the structures that get produced by some experimental method. That's a well posed problem but ultimately a less interesting one, even if it's immediately "useful".


I prefer to call engineering "physics ruined by philistines" (sometimes at least, engineers seem to have a unique ability to take beautiful physics and notate it in the most cumbersome "safe" way possible)


If you asked chemists questions as "fundamental" as the ones posed to quantum physicists, they'd be exactly as lost because they'd be exactly the same questions. Shockingly, when you have a level of abstraction beneath you you get to have pure empirics, trusting that the foundations add up to get the right answer. If every field of knowledge that relies on empirical evidence rather than a grand unifying theory was stamp collecting, QM would be one of very few that wasn't.


To use the bug collecting analogy, I like the idea that by capturing and describing these particles perhaps we've killed them in the process.


The analogy I like to use is that high-energy particle physics is like trying to learn a game like chess or baseball by analysing the statistics of past games.

That's literally what goes on at CERN. They run effectively trillions of experiments, collect millions of them (after some filter), and then they draw histograms.

Unless you know nothing about mathematics, you know that histograms are inherently statistical beasts. It's meaningless to talk about the histograms of one experiment.

Could you learn to play, say, tennis by drawing histograms of player movements? Ball bounce locations? No? Why not!? It's tons and tons of data! Accurate, scientific data!

Quantum physicists would argue that only statistics exists. I like to point to the fact that one hydrogen atom can exist, and it can have one electron. Or zero electrons. In nice countable, integral quantities. You can place a single gold atom on a crystal lattice with an AFM or even draw cute pictures with them: https://cen.acs.org/analytical-chemistry/imaging/30-years-mo...

Just because some people think only collecting bugs is what biology is all about, doesn't mean that there isn't more to it...


Sounds like how ML/neural networks work today: the computer comes up with a super complex math equation that no human could understand, to fit the data it's fed.


I find it kinda hilarious that you all seem to be discussing some sort of cutting-edge or controversial idea called "quantum theory" or "quantum physics", when I have to assume you're talking about century-old theories of quantum mechanical systems, like the ones that continue to help design the electronics that you read this piece of shit article on.


Well yeah, there is still unresolved issue(s) with at the very least the interpretation of quantum theory, if not the math. Before decoherence things were even worse, and that was much less than a century ago. The fact that physicists still haven't reached a consensus on what the theory means is very intriguing to many people. Although it's true that people probably underestimate just how much this territory has been trodden. The current quantum computing hype likely plays a role in it seeming "cutting edge". It may just be flavour of the month.


One avenue for progress that the author overlooks is distinguishing between quantum mechanics interpretations where wave function collapse is considered part of the theory vs. where wave function collapse is considered to be a useful approximation.

I expect as we see quantum computers increase in scale it will become more apparent and accepted that there is no such thing as "collapse": that is just the approximation for a large system including the observer becoming entangled with a smaller quantum system under consideration.

I think even the disillusioned author here would have to take that as progress.


As a former amateur physicist, who read a couple of books over the years :)

To me it's quite simple. We haven't detected what is causing the waves that the particle of light is riding on. The particle of light is like a surf board, riding a wave and will always hit the shore in the interference pattern. Einstein and Bohr were both right.

What has been the wrong assumption over the years is that the light is generating the waves. It seems obvious to me that something else outside of the light (that we haven't detected yet) is generating the waves.

My amateur physicist guess is the waves are generated by the clock cycles of the computer simulation we are in. All computers require a clock to function. Why would our universe be any different?


Not saying you're wrong in the slightest, but why would our universe be like a computer? When our understanding of the universe was very primitive we thought everything was alive (animism). Then a little more technology and we thought the universe was like an orrery or a clock (mechanism)[0]. Then a little more and we think it was like a computer (turingism?) Occasionally you'll hear it's like a hologram, a simulation (what's it simulating?), a graph [1], or some other faddish concept.

We just like to make metaphors to put this thing in a box that we can't understand. But perhaps at its most fundamental level it will defy comprehension or even definition.

[0] In the philosophical sense; https://en.wikipedia.org/wiki/Mechanism_(philosophy)

[1] https://syncedreview.com/2020/04/17/stephen-wolfram-the-path...


What's the difference between a clock and a computer? And there's no difference between computer and computer simulation.

Being a hologram is something I see as a significantly different type of comment. That's about how a sphere of space is mathematically equivalent to a flat 2d shell using equivalent but warped physics. It doesn't change anything about the nature of the universe except sort of the number of dimensions. And it's orthogonal to those ideas.

I'm not sure how to categorize the graph thing but it's not widespread at all.


> why would our universe be like a computer?

Because information is fundamental.

Computers just so happen to be our best tools in the information domain.


> The particle of light is like a surf board, riding a wave and will always hit the shore in the interference pattern.

This is exactly Bohm's Pilot Wave theory that the article talks about. It has been debunked to some extent but I believe the debunking is still somewhat controversial for proponents. There are neat macro-level simulations of it called "Walking Droplets" if you search for them.

https://en.wikipedia.org/wiki/Pilot_wave_theory

https://www.pml.unc.edu/walking-drops


But then every point in space has its own clock, so you'd have gazillions of tiny clocks instead of one big global clock that ticks for the entire universe/reality.

Or perhaps if we do have one big global clock, your distance from it warps/distorts other parts of the reality at each point in space? Maybe there is a limit to how far this heartbeat travels (edge of the universe)?

You can also then ask, at what speed or clock cycle is reality "rendering" and is it the same speed everywhere? It would seem each point renders itself and there would be no big global processor doing the rendering.

Then finally if you really want to dig deep, ask why is it being rendered to begin with. Might need some psychedelics for this one instead of math.

What if there are no clocks nor any points, what if matter is just a condensate of frequencies/sound/harmonics - that is, there is a "great piano player" and the "sound" it emits is the universe, just a side effect. If it stops playing, the universe disappears/collapses into nothingness.

note: A point here referring to a point in a massive 3d grid of points.


That's why time slows down near heavy objects: there is more calculus to be done, and it doesn't want to drop frames.


>All computers require a clock to function. Why would our universe be any different?

Clocks are not required at all, even for digital computers. They only make designing computers a lot simpler.


Clockless computers don't have a global clock but they still have components that "tick".


Depending on the design, no more than an abacus has components that "tick". And I'd never say an abacus requires a clock to function.

If even a single bit gets hung up in a straightforward clockless design, everything else waits for it.

None of that is really visible to something inside the computer, though. The OP is describing calculation steps, not clock cycles.


Assuming that it is true, what is the clock/cpu speed?


The released speed, or the overclocked speed? Let's face it, if the universe isn't overclocked, then we've got some serious hacking to do.


I would assume 1 / t_P, the inverse of the Planck time: ≈ 1.8549×10^43 Hz.
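
Sanity-checking that figure from the definition t_P = sqrt(hbar * G / c^5):

  import math

  c = 299_792_458.0          # speed of light, m/s
  G = 6.674_30e-11           # gravitational constant, m^3 kg^-1 s^-2
  hbar = 1.054_571_817e-34   # reduced Planck constant, J s

  f_planck = math.sqrt(c**5 / (hbar * G))   # = 1 / t_P
  print(f"{f_planck:.4e} Hz")               # ~ 1.8549e43 Hz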


I come to HN to learn new things in fields I know little about, like tech and computers, and I always find the comments to be authoritative and interesting.

However, whenever I read the comments on any subject I happen to know something about, like physics, they are complete nonsense, from people who clearly have not even the smallest amount of experience in the area they are talking about. The worst part is that all these comments sound extremely authoritative; if I didn't know any better I would have believed they were experts.

This makes me very worried about the Gell-Mann Amnesia effect. I'm very worried that all the comments I've been reading on here about tech and computer topics are also just from larpers who are pretending to be experts but actually don't know anything.


I think the discussion of computer stuff here is much better than the physics discussion. It is a website about startups after all.

I only know a tiny bit of modern physics, and in a sort of backdoor fashion through electrical engineering, and even I can spot the occasional "this guy learned quantum physics via Star Trek" post.


The Gell-Mann Amnesia effect is real here. The difference is that, for many computer and computer-adjacent topics, the principals will often chime in. That gets you a level of authoritativeness that's hard to match. Even within computer topics, though, you see authoritative-sounding comments that are just wrong. That happens like clockwork in any thread having to do with undefined behavior, for example.

I also know a little about Covid (having a minor in molecular and cell biology, and obsessively consuming resources like TWiV for a while), and the median amateur virologist comment here is just a howler. Occasionally you'll have people who really know what they're talking about comment, but it usually gets drowned out.

The key, then, as always, is to read everything critically. Then there's some real insight to be gained here.


I feel the same when reading comments about medicine. I'm absolutely ignorant of that field, but I cringe at the authoritative sounding comments often found in HN.


From what I've seen on HN, things about tech implementations and software libraries are generally rock-solid. But if you venture into something else, such as QM, the humanities or god-forbid politics, that's when it becomes a nightmare mix of pseudo-intellectualism.

That isn't to say that there aren't voices from experts in those fields that contribute to the discussion, but they are usually drowned out by "The sample size is too small" (with an n=20000) or "they aren't letting different ideas into [insert scientific field here]", as mentioned further up in this thread.


name and shame, so those of us non-experts can spot the bad comments.


Please correct wrong information without shaming others. We're trying to avoid the online callout/shaming culture here.

https://hn.algolia.com/?sort=byDate&type=comment&dateRange=a...

https://news.ycombinator.com/newsguidelines.html


dang, you read every comment on every thread?

And you stay polite and interactive?!

Are you sure you are human?


Nope–not even close. You're just seeing random data points and then the brain fills in the in-between spaces ;)

I wish I could do more actually - especially right now because the war is putting a lot of pressure on the container here.


I think the thing that honestly puzzles me is: before physicists learned physics, did their entire desire to be in the field spring from “shut up and calculate”? Or was there a time in their lives (especially when they were young) where they thought (or felt) a desire to get into physics to understand the underlying mechanisms that drive the universe? I feel like, at least for some people, the second thing must have been true. So, given that most quantum physicists seem uninterested in interpretations, where did this desire go? Did it disappear? Did the field get them to stop caring? Did it feel silly to them at some point? Did it feel hopeless? Or something else? I genuinely want to know.


The interpretations don't tell you anything that the calculations didn't tell you already (if an interpretation of quantum mechanics ever predicts something different from what quantum mechanics predicts, then it can be ruled out empirically by performing the appropriate experiment). Interpretations which don't give you a faster way to get to the results of those calculations just waste your time and get in the way of understanding. Theoretical computer science indicates that there is no significantly faster way to get the results of many of those calculations - no matter how elegant the interpretation feels, at best you have to do the same amount of work to get the same answers about what will actually happen in reality, and at worst you end up computing a million irrelevant details about things the interpretation invented which have no real-world consequence.

Someone who is genuinely interested in understanding the way the world works ends up asking questions about what goes into the calculations - what's the Lagrangian, how many degrees of freedom (i.e. quantum fields) are there, what symmetries or constraints are satisfied by the laws of physics, how can we set up the math to avoid pathological situations where the results of our computations are infinite, and so on. Philosophically motivated interpretations of quantum mechanics shed exactly zero light on these questions.

Well.. there is a caveat. If quantum mechanics was somehow incorrect - that is, if unitary evolution of a state vector in a Hilbert space didn't actually describe reality at all - then all of the above would be wrong. But in that case, every single interpretation of quantum mechanics would be wrong as well. Quantum computers would just flat out not work.


Thanks for taking the time to respond. I can see the perspective (at least currently) of “the interpretations don’t add any meaningful information” being the reason not to be interested.


There are a broad range of motivations. In my case:

- I wanted to work on nuclear fusion energy, for the future of humanity.

- I was told that if I had a PhD in physics, I could do just about anything - it's a way of keeping options open.

- I liked tinkering in the lab & learning new mind-expanding concepts in textbooks.

- I viewed it as a test of my intelligence. (Turns out, it's more about perseverance.)

I know at least one string theorist, and they seem to be motivated primarily by liking to mess around with abstruse mathematics. Some others seem to enjoy the 'nerd cred.' I think the ones whose sole motivation is 'getting to the bottom of the universe' probably burn out early, b/c there's so little of that to be had right now. (I've heard from several people who got their PhDs in particle physics who went on to do data science/programming, saying that the field is depressing and that's why they didn't pursue it further.) As for me, I found the 'shut-up-and-calculate' attitude a major turn-off to studying quantum physics. (Plasma physics uses little or no QM, so that worked out for me.)


I never thought about all these other motivations. Thanks for responding with them! And it sounds like there may be at least some physicists like the ones I was describing who get burnt out or discouraged too.


>Another alternative is to suppose there are hidden factors that we can’t quite access. David Bohm influentially suggested that a particle is accompanied by a “pilot wave” that guides its trajectory through the double-slit experiment and creates the interference pattern. Most physicists will tell you that this kind of “hidden variable” interpretation of quantum theory has been ruled out by a combination of experimental results and mathematical proofs.

This isn't the problem with Bohmian mechanics at all. Bohmian mechanics "solves" the EPR experiment by letting hidden variables jump around across spacelike separations. This cannot be disproven by Bell tests.

The problem with Bohmian mechanics (which I have always heard called "Bohmian mechanics", and not "pilot-wave theory") is that it isn't truly relativistic; it doesn't happen in Minkowski space and nobody has to my knowledge extended it to Minkowski space in a way that is widely considered acceptable. If the author likes Bohm's approach so much, he should learn about the ongoing developments:

http://en.wikipedia.org/wiki/de_Broglie-Bohm_theory#Relativi...


>But the researchers concede that the result will make no difference to anyone who believes in Many-Worlds, Copenhagen, or Pilot-Wave Theory. It only reduces the range of options by one, at most.

This is a bit too pessimistic. There's the possibility their experiment will confirm local realist retrocausality (by finding a result different from what typical QM would suggest) - at which point we will have reduced the options considerably.


I agree; there are limits to our understanding right now.

Whether it is QM, or dark matter/energy, etc., a lot of these constructs feel strained to me at times.

But, having said that, I am guessing something similar was true back when the cutting edge was Maxwell and Lorentz - and then along came a burst of genius: Bohr, Einstein, Schrödinger, and Heisenberg (named in the article), and I would include Pauli, Gödel, and probably a bunch of others.

My prediction is that there will be another burst of insight and innovation; these things tend to come in clumps, when a lot of people start "riffing" off of each other.

There is no way to predict when, but I only hope I am still breathing to witness it!


I get your point: how long do we keep digging before we reach the center of the earth?

It is human nature to quit, most especially when there are no foreseeable benefits.

But think about it this way: science is like the human body.

The head of science are those scientists who make a new discovery by going through the research articles of their predecessors, who were just 1 mm from digging up gold.

The necks are those scientists who almost made a new discovery, but couldn't see the light at the end of the tunnel.

And the legs are the early scientists, like Galileo, who laid the groundwork.

Anyway, wherever you find yourself, it is imperative you don't lose faith in the process.

After all, everybody mocked the Wright brothers for building a plane. And most advanced scientists question why they ventured into this field called science; after all, it is a labor of love, and only the lucky few get glorified.

But I get your point through and through: it is better for scientists to pour their brains into more linear science - things we can see and compute - instead of pouring brain power into abstract concepts that have no current benefit to present society.


Does anyone think Wolfram's theory is going to end up being the right approach to go deeper down the rabbit hole? Personally I think the theory is really interesting, and it intuitively feels like the universe at some core level may work as the theory suggests. But I'm not sure if many people consider the theory to have serious promise in the long term.


It has to find the rabbit hole first. So far as I understand, Wolfram's theory hasn't yet been tested against any experimental data. So it's 100 years behind quantum theory. And in the final analysis, even if it achieves the same exquisite accuracy as quantum theory, will it do any better with the problem that really bugs people, which is the ability to tell us what we're really made of? That's the problem stated at the beginning of the article:

>>> But I’m still waiting for a straight answer as to what the structure of the atoms that make up my body is.


Maybe it is time that we review the fundamentals? Could Planck and Einstein have made a mistake?

I find this to be the simplest explanation of light: E=htf where h is in joules per cycle, t is measurement time and f is cycles per second.

Every wavelength of light has the same energy, regardless of frequency. That energy is 6.626e-34 Joules.

Every “photon” is actually 1 second's worth of light energy at some frequency f.

Once you see that the time variable was hard-coded to one second in Planck's constant of action, it starts to make sense. It's not surprising that single photons cause diffraction patterns, because photons aren't real particles.

forgottenphysics.com

https://img1.wsimg.com/blobby/go/e266e5e7-739c-437c-89c3-eed...


I think 100 years of now-17-decimal accuracy is fine.

Maybe we do need a Copernican revolution. But for how many areas of physics and science can we say that? A lot.

I think the idea of measurement as a passive activity revealing pre-existing values is gone for sure, though, no matter what happens. How long was that a paradigm?


> Copernican revolution

The guy who did the hard work was Kepler. Copernicus's model of the solar system was just idle speculation, and widely off the mark (it had epicycles, lots of them - twice as many as the Ptolemaic model [1]). Kepler sifted through the mountains of astronomical observations made by Tycho Brahe over many decades, and used the latest cutting-edge math tools available (logarithms) to come up with his 3 laws.

The true revolution in science was obviously Isaac Newton. But if he had some giants on whose shoulders to stand, they were Kepler and Brahe, not Galileo and Copernicus.

[1] http://tofspot.blogspot.com/2013/08/the-great-ptolemaic-smac...


Even on the macro level there are lots of tricky questions.

A bar magnet is surrounded by magnetic flux lines. Apparently it takes no energy to maintain that magnetic field.

If I heat the bar in boiling water for long enough, it becomes demagnetized. On the macro level there's no visible/measurable change in the bar. We need a lot of magnification to see that the 'particles are no longer aligned'.

On the other hand, if I move the bar magnet over a pile of iron filings, some of the filings will fly up to the bar, against the force of gravity. It takes energy to lift these filings. Where does that energy come from? (It never seems to be exhausted.)


> It takes energy to lift these filings. Where does that energy come from? (It never seems to be exhausted.)

That's potential energy. When you made the magnet, you created a potential between everything else and the magnet. When you unmake the magnet, you change the potential again in the other direction. Changing said potential consumes energy.

The potential can be exhausted, as only so much mass/magnetic potential can fit within the magnet's field. To use the field with more stuff, you have to pull some stuff out of the field, which takes energy.


> some of the filings will fly up to the bar, against the force of gravity. It takes energy to lift these filings. Where does that energy come from?

In this example the energy comes from whatever is holding the bar magnet itself up against gravity. If it's not doing any work (i.e. it's not adding energy; say it's a rope or a spring) then it's not just the filings that move: the bar magnet and the filings move towards each other (with the filings moving the bulk of the distance), and the end result is that some of the gravitational energy in the bar magnet is transferred to the filings.

I think this only seems like a tricky question with an informal idea of what "energy" is. It doesn't have the same interpretation problem as what the linked article is about.
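To put rough numbers on the scales involved (every value below is my own illustrative assumption, not a measurement), a quick Python sketch also shows why the magnet "never seems to be exhausted": the energy to lift one filing is minuscule compared to the energy stored in the field.

    # Toy energy bookkeeping; all numbers are assumed, illustrative values.
    import math

    g = 9.81           # m/s^2, gravitational acceleration
    m_filing = 1e-4    # kg, a 0.1 g iron filing (assumed)
    h = 0.02           # m, lifted 2 cm (assumed)

    E_lift = m_filing * g * h    # gravitational PE gained by one filing
    print(f"energy to lift one filing: {E_lift:.1e} J")   # ~2e-5 J

    # Energy density stored in a magnetic field: u = B^2 / (2 * mu0)
    mu0 = 4e-7 * math.pi    # T*m/A, vacuum permeability
    B = 0.1                 # T, a refrigerator-magnet-scale field (assumed)
    u = B**2 / (2 * mu0)
    print(f"field energy density at 0.1 T: {u:.0f} J/m^3")   # ~4000 J/m^3

Each lift costs on the order of 10^-5 J, which - as above - is paid out of the gravitational potential of the magnet-plus-filings system, not out of some reservoir inside the magnet.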


The energy comes from the incredible magnetic force spent to align the atoms when the magnet was created, and it will be slowly exhausted over time.


Why would it be exhausted over time? When things come close to the magnet it uses its potential energy to pull them.

When some force acts to remove those objects from the magnetic field, that energy is converted into potential in the magnet again.

Are you claiming that the alignment of the magnet will fade slowly over time when the field is interacted with, more quickly than it would if it were left on a proverbial shelf for millions of years?


Yeah, that was my claim, that the atoms will slowly return to a chaotic high-entropy organization over time, and more rapidly as forces are exerted against the magnet. I guess I was wrong.


> that the atoms will slowly return to a chaotic high-entropy organization over time, and more rapidly as forces are exerted against the magnet

This is true, it's just not why magnets can lift things.


Force isn't a resource which is spent.


Nobody ever asks this question about a spring supporting a mass against gravity.

There's nothing actually holding atoms together, it's all electric fields. So why is a magnet different? (it's not).

People also don't ever ask this question about gravity itself.


This is a good question to bring to /r/askphysics . It's pretty simple, and isn't the kind of problem with quantum mechanics that the article is referring to.


I've been bothering myself with the same question. This rabbit hole on Wikipedia has led me to a dead-end explanation: those iron filings are pulled by virtual photons created by the magnet, those virtual photons are completely virtual and cannot be observed, except for the fact that they somehow pull the filings despite being virtual (this must be a small incursion of the purely mathematical world into reality). I've concluded that magnetism is an unexplained phenomenon, just like gravity: we have accurate formulas, but nobody knows what those formulas describe. For intuitive understanding, I see a magnet as a well, except that this well can be moved around and has two poles. Just like a regular well, things fall into it at no expense to the well, because the well isn't really a "thing" that pulls other things by force.


> This rabbit hole on Wikipedia has led me to a dead-end explanation

Trying to teach yourself physics by reading Wikipedia is like trying to learn how to program by reading the C++ standard. It's a fine resource, but not for that. You need an introductory physics textbook, not an encyclopedia. Griffiths's Introduction to Electrodynamics is a classic choice.

> those iron filings are pulled by virtual photons created by the magnet

No. Virtual particles are part of a notational scheme for carrying out certain computations in quantum field theory. They're not particles, and they're definitely not involved in classical electromagnetism.

> I've concluded that magnetism is an unexplained phenomenon, just like gravity: we have accurate formulas, but nobody knows what those formulas describe.

The formulas, ultimately, describe the relationship between our observations. This is the only sense in which we ever explain anything, and a sense in which magnetism and gravity are both very well-explained.

> For intuitive understanding, I see a magnet as a well, except that this well can be moved around and has two poles.

You should not try to understand electromagnetism intuitively yet. You haven't built the right intuitions, and to do that you first need to tear down your wrong ones.


By that logic you can't observe anything or explain anything, though. All the force-carrying bosons (like the photon) have to interact with fermions (the electrons and quarks that make up your equipment and your body). Some of the interactions happen due to on-shell photons ("light"), but most happen due to completely off-shell photons (the EM force). They are really the same interaction.

Don't get hung up on the "virtual" in the lingo about particles. All particles are more or less "virtual" depending on how you analyze the problem.

I'm sorry there is not a more intuitive explanation of this... The underlying QFT building blocks are so far from classical physics. Even concepts like energy and momentum are in some sense emergent properties that lose most of their classical intuition "at the bottom", as reflected by the Heisenberg uncertainty relation, for example.


Huh? Iron filings fly up because of the principle of minimum potential energy. They can lose potential energy by moving closer to the magnet, so they do. No energy is gained - potential energy is converted to kinetic energy, then heat when they hit the magnet.


Your answer is basically a tautology: if we observe work being performed, there must be some potential energy being converted - that's what potential energy is, the ability to perform work. The real question rather is: why doesn't the same thing happen when I get a stick close to wood shavings? How is energy stored in a magnet in a way that is not stored in a stick?


Because sticks don't have magnetic fields, and there isn't a "stick field" that has potential energy.

And why is there a magnetic field? Basically it's a consequence of the electric field plus relativity.


Because a 'stick field' is not a thing? Well, I guess there are gravitational fields, but those are not dipoles and they're really weak. Magnetic fields of large aggregates aren't simple to model, and there are some very complex systems, but individually they are very simple and well modeled.


Maybe start connecting the structures in our brains with the structures in our math. The limited representational capacity of meat-based brains is probably constraining what can be understood. Even math may be too constrained to go beyond the current level of detail. I imagine an AI doing its own experiments with the universe will have unbounded understanding, but I doubt we'd be able to understand her when she tries to explain it to us puny humans.


I'm surprised humans have gotten this far in understanding the rules of the universe and making accurate predictions.

There's no evolutionary reason why we would have: our brains only hold like 7 ~ 11 things in working memory, we're almost always thinking about either food, entertainment, sleep, or sex, we can't conceptualize infinities, we're stuck living in linear time at 1 second per second, our long term memory is so full of holes it's practically non-existent, etc.

Either we're unexpectedly capable at making the most of our wetware or the universe is unexpectedly simple to understand.


You may be on to something. Sure, human working memory (insofar as our measurement tools can tell us) seems to be limited to single-digit datasets, and long-term (that is, persistent) memory is poorly understood, but research suggests it's a pile of more or less random bits.

It's been a subject of professional interest to me for decades, and what impresses me is how much isn't known about how our built-in "computer" actually works. Above all, neurons are extraordinarily intricate structures, and beyond that the contributions of non-neurons in the brain appear to be considerable. Given that the brain contains at least hundreds of millions of participating neurons organized in a complex hierarchical network, it's no wonder it's still not possible to adequately account for many relatively simple phenomena, let alone the emergence of towering genius.

Indeed, our wetware is collectively both more capable (and more limited) than we generally appreciate. Definitely we can't predict where genius will arise; it always seems to surprise when it happens. To say it's a miracle may be naive, but perhaps that just expresses wonder and gratitude for a rare, random and highly fortuitous event.


The most promising approach I've seen to answering the author's question: https://writings.stephenwolfram.com/2020/04/finally-we-may-h...


How is this: the electron loses its energy as it approaches the final destination; this energy (of wave nature, obviously) passes through the slits and forms the interference pattern.

I know that for every problem there is a solution that is simple, neat - and wrong. And the above proposition is most probably the same. But still, I'll be glad to read the answers.

(Disclaimer: I'm a novice)


You're falling into the very common trap of assuming that "energy" is some kind of substance (like movies/series tend to depict it as for practical reasons), when really it is a mathematical property of a physical object. Like other similar properties (such as spin, momentum and angular momentum), energy is relevant because fundamental symmetries of the universe require it to be constant. Therefore "losing energy" in one part of a physical system (such as the electron) is only possible when another part of the system simultaneously gains the same amount of energy. When you say:

> this energy (of wave nature obviously) passes through the slits

The question then is what type of object would possess this energy as it passes through the slits and hits the screen behind it. Since the screen is specifically set up to detect electrons, the simplest explanation is that it is really the electrons performing the interference.

(Footnote, just to be safe: Energy conservation can be broken on very very short timescales, but this is not relevant for the double-slit experiment.)
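As a toy illustration of that last point (my own sketch, not a real simulation of the experiment), here is how "the electrons perform the interference" in the math: each slit contributes a complex amplitude, and the detection probability is the squared magnitude of their sum.

    import numpy as np

    # Two-path interference sketch; all numbers are illustrative, arbitrary units.
    wavelength = 1.0
    d = 5.0                              # slit separation (assumed)
    k = 2 * np.pi / wavelength           # wavenumber

    theta = np.linspace(-0.3, 0.3, 7)    # a few detection angles (radians)
    phase = k * d * np.sin(theta)        # path-length phase difference
    amp = 1 + np.exp(1j * phase)         # amplitude from slit 1 + slit 2
    prob = np.abs(amp)**2                # = 4 * cos^2(phase / 2): fringes

    for t, p in zip(theta, prob):
        print(f"theta = {t:+.2f} rad   relative probability = {p:.2f}")

Note there is no separate "energy wave" anywhere in this bookkeeping: the wave-like object is the electron's own amplitude, and the fringes show up in the probabilities.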


You are led into a room. The far wall is covered in levers, buttons, dials, flashing lights, oscilloscope displays, buzzers, and so on. The person who brought you there says "we need you to describe the mechanism behind this wall. You're not allowed to look behind the wall. Good luck!" They leave the room.

You walk up to the wall and press a button. A few feet away, a light flashes green. You press the button again. The same light flashes green. "Aha!" you think. "I have figured out what this button does: it causes that light to flash green." On a table to the side of the room is a notepad and a pencil. You jot down your discovery.

You walk back to the wall, and press a different button. Once again, the same light flashes green. You begin to wonder if all the buttons affect that one light. You press the original button to confirm your suspicions. The light flashes blue.

---

Months have passed. You have covered the three previously unadorned walls with notes, with diagrams, with layer upon layer of discarded theory. But you're so close! The machine you've designed perfectly matches everything you've observed. It's so vivid in your imagination that you can practically see it through the wall. There's just a single lever you're unsure about; you can't decide if it's part of a complex mechanical linkage, or if it simply closes an electrical circuit. It would all work either way. You draw both options and place a question mark between them.

You wander down the hall to an office, pop your head in and say to the person seated at the desk: "I think I've figured it out. Could you come take a look?"

---

THEM: "So you're not sure which one of these it is?"

YOU: "It could work either way, so I drew both. I couldn't find any way to determine the answer."

THEM: "Well, that's not very satisfying. Are you sure you tried everything? What about this button, how many times did you press it?"

YOU: "That one? Tens of thousands of times, cumulatively."

THEM: "Have you tried pressing it repeatedly? Over and over again, I mean, without anything in between."

YOU: "I have the notes over here... yes, one hundred times in a row to confirm my theory was correct."

THEM: "Only a hundred? What if it does something different after a thousand presses? You should try that; it might solve your dilemma."

YOU: "I... could, but that seems awfully time-consuming. None of the other buttons did anything different after more than a dozen presses."

THEM: "Yes, but couldn't this button be different?"

YOU: "Maybe, but there's a point where I have to assume I've got it right. After all, there's no way of knowing whether it does something different at ten thousand presses, or a million presses!"

THEM: "All the same, we would really like to know what's behind that wall. It's a bit disappointing to only know what will happen but not know what's really going on back there. Could you try it the thousand presses? Perhaps twiddle some other widgets at random, see if that creates any discrepancy?"

YOU: "But that might never end! If nothing comes up after a day of effort, you could ask me to do it for a week. If nothing comes up after a week, you could ask me to do it for a year. Can't you just accept that either option works?"

---

The moral of the story: there's a limit to knowledge gained by inference. The problem is even worse than the story described, because even if you resolve the dilemma of the lever:

- It's impossible to know that the 200 sextillionth press (and only the 200 sextillionth press) of that button does something slightly different. Your design is wrong, and you'll never discover the discrepancy.

- It's impossible to know that the first time you pressed the button (and only the first time), it lit up a yellow light in the corner that you didn't see. Your design is wrong, and the information needed to resolve it is lost to you forever.

In the real world, maybe we'll get lucky and produce a theory that has only a single interpretation. Maybe there is a discrepancy that rules out all but one solution. (And maybe we'll get extra lucky and it will be easy to visualize.) But even then, we can never be sure. I feel as though the author of the article isn't clear enough about this. You can never know what's behind the wall.


The author writes that he is "still waiting for a straight answer as to what the structure of the atoms that make up my body is". The answer to this is more of a biochemistry or molecular biology question than a quantum physics question.


He didn’t mean the structures between the atoms, but the structures within the atoms.


Anyone can explain superdeterminism in a simple way? That sounds interesting.


Explanation 0: QM, unlike classical physics, says quantum effects are truly random and unpredictable and we can only know their probability distribution. (Or something like that.)

Explanation 1: The universe is not random and doesn't have cause and effect. All scientific experiments appear to work because the whole universe is actually a movie playing out where each new frame has been edited to produce the results we see, but for some reason the author decided to never violate QM.

Explanation 2: Unlike what QM predicts (true randomness) there's a finite amount of randomness added at the beginning of the universe, enough that our experiments appear to follow QM, but eventually if we looked hard enough there would be a predictable result with finite entropy. So it's actually regular determinism.

A few people claim #2 is true and doesn't imply #1. Everyone else thinks they're the same and doesn't like it for philosophical reasons.

https://www.youtube.com/watch?v=ytyjgIyegDI


relevant (but introductory level): https://www.youtube.com/watch?v=KunEYnIaGGc


Amusing to see all the pessimistic comments from programmers who have zero understanding beyond high-school maths, thinking there is somehow a magic solution to all the problems in physics, as if you could just refactor everything.

Quantum field theory is definitely not stuck in the 1920s. Modern physics has a lot more going on than 19th century mathematics. We understand quite a lot about the mathematical structure underpinning quantum field theories.

Quantisation is a nearly well understood process, it’s not simply heuristics and guesswork.

Unfortunately, it’s not like much smarter than anyone posting here haven’t considered lots of wild ideas, it’s just that a lot of them haven’t worked, and we have been funnelled into what we currently have. There are infinitely many more interesting mathematical objects and ideas possible than what is allowed.

Quantum mechanics, or say quantum field theory, has really good predictive power. Whatever comes next needs to at least be consistent with it. Unfortunately, I think things are going to get more complicated. Don't use your intuition developed from grinding out web crap to try and reason about the forefront of human knowledge.


Hey- can you please not post supercilious or sneering comments? It really poisons the ecosystem. What we want is curious conversation in which people share information. Putting others down who you think aren't as smart as you or others, and calling that "amusing", is definitely not what we're looking for—no matter how wrong they are or you feel they are. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

If you know more than others, that's great—the best thing is to share some of what you know, so the rest of us can learn. Just please do it without swipes and putdowns.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


There are many practical applications of quantum mechanics.

The one that I remember foremost is tunneling to (dis)charge the floating gates on flash memory transistors, but that just scratches the surface.

"Flash memory chips found in USB drives use quantum tunneling to erase their memory cells."

https://en.m.wikipedia.org/wiki/Applications_of_quantum_mech...

This I did not know:

"It even exists in the simple light switch. The switch would not work if electrons could not quantum tunnel through the layer of oxidation on the metal contact surfaces."


Is there something specific about oxidized metal such that electrons need quantum tunneling to move through it? Why can't we just assume normal conductivity?


Aluminum is extremely reactive, and the classic method of liberating this energy is through thermite reactions. Without the oxidation layer on aluminum, it would be an extremely hazardous substance.

"Breaking bad" covered this with an etch-a-sketch. Sometimes chemistry can be surprising.

https://en.wikipedia.org/wiki/Thermite


That's neat but I don't really follow. Why isn't the oxidation layer just a higher impedance material? Why is quantum tunneling required to explain a light switch?


But does it flow through wires?

https://www.youtube.com/watch?v=bHIhgxav9LY


It collapses through chloroplasts!

https://www.scientificamerican.com/article/when-it-comes-to-...

Ironic that plastid-based life is so much more advanced than us, but we are the ones that ended up with (super)intelligence.


While I do get what you're trying to say, it has to be said that many CS people have been exposed to Quantum Physics.

Quantum computing is an elective at every half-decent University.

To think that programmers have zero understanding beyond high school math is just ignorant.


At least for me and my classmates, we got a lot more wrong before we got right. Nothing is worse in this world than an undergrad fresh out of an intro quantum class with a neutered Schrodinger's equation and whatever nonsense pseudo-philosophy they picked up to deal with it. After taking a proper mathematical methods class you start to get the beginnings of an intuition for what's going on for quantum mechanics.


I agree. EPR states and quantum teleportation gave us a reality check during our QC course. The university I come from did, however, teach linear algebra, ODEs, PDEs, etc. as mandatory courses.


I'm not saying that computer science intuition isn't interesting, or that CS people can't contribute. There are plenty of deep connections all over the place. You can conjecture all you want. My issue is with the tone.


I'm not sure where such negative emotions come from towards people throwing out their intuition and expecting further discussion from experts. I don't think anyone is suggesting that people at the "forefront" aren't capable of coming up with such ideas themselves; it's more like framing the familiar in other contexts to kick off a discussion.


Except when they are framed as authoritative statements rather than genuine questions or even wild conjecture. If a non-expert reads such a statement and believes it comes from an expert, they can take away something completely incorrect. Yes, ask questions, but be honest.


I think there is plenty of pessimism in the OP. Essentially physicists today do not understand quantum theory better than the last generation of physicists did. I got my master in physics 20 years ago, and it was fairly obvious to me then that this would be the case.

Now compare that with AI/ML or large scale data processing over the last 20 years. I don’t think you should assume that the smartest people were those that didn’t see this coming and wanted to spend their life on interpretations of double slit experiments and string theory.


" Don’t use your intuition developed from grinding out web crap to try and reason about the forefront of human knowledge. " . This should be stickied on every HN thread


Honestly the funniest comment I have ever seen on HN.


> Quantisation is a nearly well understood process

The layman’s explanation I received for this, perhaps incorrectly, is that you fill in variables for factors that you don’t understand or can’t measure with whatever value makes the rest of the theory / equation work. Is that at all correct?


No, that sounds more like a (very distorted) description of regularization and/or renormalization.

Regularization techniques take effective field theories like the Standard Model, which are known to be inaccurate at high enough energies, and isolate their low-energy behavior.

For example: classical electrodynamics, interpreted naively, says that the total energy stored in an electron's electric field is infinite, since the field strength blows up to infinity at the position of the electron itself. As this isn't actually true, we know there must be physics going on very close to the electron that our theory doesn't account for. But even if our theory isn't the Final Truth, it's still capable of making perfectly good predictions far away from the electron, and we can't just sit around waiting for quantum mechanics to be discovered.

So we make the structure of the electron a parameter of our theory: it can't tell you what the force between two electrons is, but if you posit that the electron's charge is distributed like so, it can tell you what the force between two electrons would be.

We then use empirical data to reabsorb our new degree of freedom: this is renormalization. Instead of trying to predict the force between two electrons, we measure it, and then work backwards to figure out what the electron's charge distribution would have to look like to produce that force. In this particular case: the electron behaves as if it were a sphere about 10^-15 meters across. This is of course not the actual structure of the electron, but it reproduces the same low-energy classical physics.
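For anyone who wants to check that last number, a one-liner sketch using standard constants (the "radius" is of course only meaningful up to O(1) factors):

    import math

    # Set the electrostatic field energy of a charged sphere ~ m_e * c^2 and
    # solve for the radius: the "classical electron radius".
    e = 1.602176634e-19       # C, elementary charge
    eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
    m_e = 9.1093837015e-31    # kg, electron mass
    c = 299792458.0           # m/s, speed of light

    r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
    print(f"classical electron radius ~ {r_e:.2e} m")   # ~2.82e-15 m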


> We then use empirical data to reabsorb our new degree of freedom: this is renormalization

Notably, in a less rigorous field, this process would be called "doing science to determine physical properties of things", which most people consider an acceptable pastime for physicists. As it turns out we can be pretty confident that the "experimental evidence" is bunk and there's something going on the theory can't account for, but it's funny that so many of the Monday morning quantum physicists think all this math stuff is just confusing the issue and pure, blind empiricism is the way to go.


I hope that's a misunderstanding or miscommunication. It might be useful in some limited contexts, but it would point to gaps in the theory (at minimum).


There are 19 free parameters in the Standard Model. These include such values as the mass of various particles (electron, muon, tau, six quarks, Higgs), Higgs vacuum expectation value, the strength of several interactions, and the mixing angles and phases for certain interactions.

You can rearrange some of the parameters so that they're not free, and redefine each of these in terms of some other related parameter — your choices here are pretty arbitrary; these values being chosen as the 'free' ones does not represent some fundamental truth.

We do not have some underlying deeper theory that indicates why these numbers are the way they are. Why is the electron mass ~511 keV while the up quark is ~2.2 MeV? Why not 1.9 MeV? Why not 2.3 MeV? The Standard Model doesn't generally explain it, nor does it even really intend to explain it, just to describe it.


One theory, which I find quite plausible, is that there are many universes, all of which have variations on these constants. And the reason our universe is the way it is, is because we are here to observe it - given that most other configurations of these constants wouldn’t support life (the strong anthropic principle).


It's one that comes up, but first consider that it's not fundamentally clear to the layman that these values are actually completely "free parameters".

Imagine, if you will, benchmarking some unknown CPU, and determining that the fused multiply-add operation takes, idk, 17 times as long to execute as an increment operation. We might postulate that there are other CPUs out there where it takes a different amount of time - arbitrary amounts of time! Alternatively, we might gain knowledge of the underlying CPU architecture and understand that fused multiply-add is implemented with a certain set of transistors, and that it's fundamentally more complex, though there's room for some variability based on the specific implementation. In such a world this "free parameter" is set as it is for a very specific reason: a transistor arrangement.

We have limited visibility into what's actually happening "underneath" our laws of physics. Some of the values we see could be truly arbitrary. Some of them might actually be controlled by some other field and change over time (though we haven't seen evidence of that so it seems less likely). Some of them could be a deeper artifact of the way the universe works.

If you look at things like string theory, which do try to describe in more detail and dial down the free parameters to just one ("length of the fundamental string" more or less) we are left with something that's frustratingly nonspecific until you locate a more-specific solution within the broader string-theory solution space and call it "the laws of physics." That specific location might indeed seem quite arbitrary; the question might then become, what relation does this hundreds-of-dimensional solution space have with our concepts of physical reality? And can other areas of that landscape be probed in any way meaningful to our experience of physics?


In terms of the parameters of the standard model, in any other field setting parameters with theory rather than experimentation would be ridiculous. But somehow when it comes to quantum mechanics everyone rediscovers their reductionist hat and demands that every theory neatly explain all the details without anyone actually entering a laboratory.

I'm not saying I'm not a little miffed there isn't yet a nice, lower-level theory that predicts all of QM, but man is it frustrating when people flip flop to and from pure empiricism at the drop of a hat, based on a layman's understanding of a mature, developed field.


All correct.

Oh ... and if things are still a bit floppy, just fix the gauge and you are good.


Don’t use your intuition developed from grinding out web crap to try and reason about the forefront of human knowledge.

Have you tried front end web frameworks recently? String theory has nothing on them.


Eventually there will be so many JavaScript libraries and frameworks that they will collapse in upon themselves and create a black hole.



100%

My friend's a pretty decent physicist at Oxford, and the hardest part of his research? Makefiles.


Tell them to use qtcreator. They don’t even need to use the qt framework, the makefiles are just generated.

I wish qmake had won the makefile-generator war, the same way I wish hg had won over git.


Now I want my new job title to be string interpolation theorist.


I wish mine were "sell out simpleton"


C String > String


Yes, but go back to philosophy, not math.

All the evidence suggests the "particle" interpretation of QM is simply wrong - that light or energy really is just a self-propagating wave, which explains the double-slit experiment cleanly, and is not incompatible with energy levels or quantization if formulated correctly.

Experiments that claim to show light consists of particles need to be reassessed wearing a philosopher's hat.


No. "Particle" and "wave" are analogies. They're not even wrong, because they're not descriptions of what actually happens.

The real problem is we don't have an analogy for what actually happens, because - by everyday expectations - the traditional interpretations are beyond weird. And the more recent interpretations are even weirder than that.

It's likely - after Bell - that there's something non-local happening. And after this paper it's likely that whatever it is may not even be compatible with a stable identity for the observer.

https://www.nature.com/articles/s42005-021-00589-1

Which seems to be arguing that instead of many worlds, there are many observers - not just simultaneously, but sequentially.

In other words, it's our belief in a consistent self with stable memories that may be wrong.


Not sure how broadly you're willing to draw the borders around "non-local", but I'd argue most quantum physicists take Bell's inequality to suggest that hidden-variable theories are impossible precisely because nothing non-local is happening.


Wow I have heard some crackpot theories before, but then I came across "modern physics".


You mean we live in a simulation?


Yes, I never understood why light was considered 'particles' :-(

As far as I understand, it is because light can only be emitted by atoms with discrete energy levels. But (IMHO again) light is emitted by atoms with discrete energy levels because it is emitted when electrons jump from one energy level to another, and the possible trajectories of electrons around atoms aren't continuous. I don't see how this is related to the nature of light..

I think that with an [undulator](https://en.wikipedia.org/wiki/Undulator) you can tune the energy of the emitted light "continuously": the page says "the emitted radiation is coherent with a wavelength determined by the period length and the beam energy", but as the precise relationship between the beam energy and the wavelength produced isn't specified, I'm not 100% sure..
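For what it's worth, the standard on-axis undulator equation is lambda = (lambda_u / (2 * gamma^2)) * (1 + K^2 / 2), which does vary continuously with beam energy through gamma. A quick sketch with made-up machine parameters (all values below are assumptions):

    # On-axis undulator wavelength; beam and undulator numbers are assumed.
    m_e_c2 = 0.511e6          # eV, electron rest energy
    E_beam = 3e9              # eV, assumed 3 GeV electron beam
    gamma = E_beam / m_e_c2   # Lorentz factor, ~5871

    lambda_u = 0.02           # m, assumed 2 cm undulator period
    K = 1.0                   # assumed undulator strength parameter

    lam = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
    print(f"emitted wavelength ~ {lam:.2e} m")   # ~4.4e-10 m, i.e. X-rays

So the tunability is real, but it's a property of the free electron beam's energy, not of atomic energy levels.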


Light is considered a particle - or more accurately has particle-like behaviours in certain contexts - because you can count individual photons. The energy varies with frequency, and it arrives in discrete lumps at discrete locations. You can't have half a photon. It's all of a photon or no photon.

It's one of Einstein's key insights, one of the foundations of quantum theory.

In Quantum Field Theory you have quantum fields. There are no 'waves' and no 'particles', there's only a 3D probability density field which defines the probability of particle-like events at specific locations. It's the probability field that is 'wave-like', and particles are considered excitations of the field.

So you know you're quite likely to see an electron-like or photon-like event in one location, and not at all likely in another.

Calling this a 'particle' is just an analogy. All you can really say is that a particle-like measurement is likely and/or did happen in one region.

What this really means - whether it's an observer artefact, or an exchange of information, or the output of some kind of computational and/or causal substrate, or something else entirely - is still a mystery.
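To make the counting point concrete, a small sketch (the laser power and wavelength are assumed toy values): a dim but ordinary laser emits a staggering - yet integer - number of photons per second.

    # Photons arrive in whole lumps of E = h * f = h * c / wavelength.
    h = 6.62607015e-34     # J*s, Planck's constant
    c = 299792458.0        # m/s, speed of light
    wavelength = 532e-9    # m, green light (assumed)
    P = 1e-3               # W, assumed 1 mW laser power

    E_photon = h * c / wavelength
    rate = P / E_photon
    print(f"energy per photon: {E_photon:.2e} J")   # ~3.7e-19 J
    print(f"photons per second: {rate:.2e}")        # ~2.7e15

Attenuate that beam enough and detectors really do click one whole photon at a time - never half a click.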


While seemingly intuitive, the idea of a QFT field as a probability field over space is not correct. It usually comes as a shock to anybody entering QFT when they finally realize the actual "probability space" is infinitely larger..

So as a programmer's example you might have an idea that to simulate a QFT you could have an array of floats over space to describe your "electron field", and you draw in values to reflect a probability distribution and you'll write some update rules to describe how this evolves in time. But this won't work, because it's not how nature works.

What you need is that electron field array (and a photon field array to make any kind of non-trivial observations), but in 4D (space + time), and you need to iterate over all possible values in all array bins for all fields, calculate a magic number for each configuration, and weigh all these magic numbers together to figure out the actual probabilities for any configuration.

Even for a 16x16x16x16 array, that is 65536 bins; even with only 2 levels in each bin, that is 2^65536 combinations.

All of practical QFT (theoretical, perturbative or non-perturbative lattice methods) is about doing this calculation with (obviously radical) simplifications.

That we can make these enormous simplifications and still get usable results can be a very good sign that the underlying reality is in fact simpler than what QFT implies - but nobody has figured this out yet.
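As a toy illustration of the "magic number" procedure above (emphatically not a real QFT - just four sites and a made-up action), the brute-force sum over configurations looks like this:

    import itertools, cmath

    # Enumerate every configuration of a tiny "field": 4 sites, 2 levels each,
    # 2^4 = 16 configurations. Weight each by exp(i * S) for a made-up action S.
    SITES = 4

    def action(cfg):
        # toy nearest-neighbour action, for demonstration only
        return sum((cfg[i] - cfg[i + 1])**2 for i in range(len(cfg) - 1))

    total = sum(cmath.exp(1j * action(cfg))
                for cfg in itertools.product((0, 1), repeat=SITES))
    print(f"{2**SITES} configurations, weighted sum = {total:.3f}")

Swap in 16^4 = 65536 bins and the loop has 2^65536 iterations, which is why nobody computes this directly and all of practical QFT lives in the simplifications.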


> Light is considered a particle - or more accurately has particle-like behaviours in certain contexts - because you can count individual photon

So how do they produce an interference pattern in the double-slit experiment? I know you don't have an answer to that, but that's rather the point - neither does QM. Probability density does not explain an interference pattern. Only one thing does - that light is in fact a wave.


We have several single photon sources, including nitrogen vacancy centers and quantum dots. Nobody has ever produced a "half-photon source" though.


What would half-photons look like? I guess photons with half the energy expected given their wavelength and the Planck relation E = hc/λ?


No, that's a complete photon, just one with a different wavelength. A half-photon would have E = hc/(2λ); that is, half the energy for the same wavelength.


Could you point to some articles on this? It's way outside of my field, but it sounds very interesting.


The person you are replying to is either wrong or has stated it very badly.

The idea of there being both a wave-based and a particle-based description of photons that don't mix well is basically physics as known in the 1920s. The modern theoretical framework of light is quantum field theory (special relativity + quantum mechanics, but actually not quite that, for [insert QFT textbook] reasons...), which doesn't fit into either of those old ideals particularly well, because they are basically shadows of an underlying model projected at different angles.

Edit (further detail): The way physics is taught in British secondary schools is a tragedy - ignoring that you don't even learn the fundamentals properly (no calculus!) - you aren't given even a non-mathematical outline of these modern theories. In their eyes quantum mechanics has not really advanced beyond the photoelectric effect, which is where unhelpful weirdness like you'll see in this thread comes from.


We need to go back to the 1920s and start again. As the article explains, modern QM has hit a dead end, and it's little more than mathematical games that have not helped us understand reality.


Those mathematical games quite literally have helped us understand reality. The standard model made predictions, which were then tested.


Has it, though? Do certain predictions presuppose a particular interpretation of reality? Does QM actually predict the double-slit experiment?


> it's little more than mathematical games that have not helped us understand reality

I don't think that's true. A lot of people would assume that this "mathematical game" is like the theory of air pressure. The statistical methods used for computing heat and pressure can be "explained" as little particles of atoms moving about due to their vibrations, and the ultimate observed behaviour is just that - a lot of particles pushing and shoving, dependent on the temperature of these particles. This simple explanation can be understood without the maths - after all, the maths is only there to tackle the large numbers and statistical inferences needed. The statistical methods "aren't the theory".

However, such views restrict the mind, imho. To seek a simple, understandable explanation that is not mathematical is probably the wrong goal. A lot of people have tried to explain laws such as gravity using such simple rules - a particle that moves and hits in all directions, but bigger bodies will block certain directions (thus resulting in a net force from the opposite side - aka gravity).
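The air-pressure example is worth making quantitative, because it shows the maths doing more than bookkeeping. A kinetic-theory sketch (the molecule mass and number density are rough assumed values):

    import math

    # Pressure from particles bouncing off the walls: P = n * m * <v^2> / 3,
    # with <v^2> = 3 * k_B * T / m from equipartition.
    k_B = 1.380649e-23    # J/K, Boltzmann constant
    T = 293.0             # K, room temperature
    m = 4.8e-26           # kg, a rough average air molecule (assumed)
    n = 2.5e25            # 1/m^3, number density of air at ~1 atm (assumed)

    v_sq = 3 * k_B * T / m
    P = n * m * v_sq / 3
    print(f"rms speed ~ {math.sqrt(v_sq):.0f} m/s")   # ~500 m/s
    print(f"pressure  ~ {P:.2e} Pa")                  # ~1e5 Pa, about 1 atm

Getting ~1 atmosphere out of molecular collisions is exactly the kind of success that tempts people to demand an equally picturable story underneath QM - which, as above, may simply not exist.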


This is cool IMO (2016):

https://www.youtube.com/watch?v=nRSBaq3vAeY

Apologies if it's considered off-topic - I see the double-slit experiment and I always think of this. It blew my mind.


Of course "consciousness is special" is what you'd expect someone with a consciousness to say, so we shouldn't trust it unless we can prove the video was made by a p-zombie.


I'd be curious to hear what happens with a fake tone or indicator.

I would expect that to decouple the experimenter from the experiment, and it should result in no change to the output.


Pretty fascinating stuff, seems like it's time to implement some quantum CAPTCHAs


Crackpot vibes


Where did he go wrong in the experiment?


It is more than just interesting, it's the central unsolved conundrum in physics

https://en.m.wikipedia.org/wiki/Interpretations_of_quantum_m...


I thought GUT was at the core of modern physics studies?


aiui, professional physicists don't think particles are "real" but a phenomenon that appears as a result of attempting to localize a wavefunction through a measurement


Professional physicists are so lost in the mathematics of wave functions they have lost touch with reality.


You seem to be arguing an old version of pilot-wave theory, which has a version that's wrong (because it predicts real life incorrectly) or isn't known to be wrong but isn't falsifiable either (because it predicts real life exactly the same as QM). It also doesn't work well with special relativity, so it's less correct than QFT.


what is the reality then? when you get down to those scales, there really does seem to be uncertainty when you try to localize parameters. on one hand it's troubling but it's also lovely because we get stuff like the casimir effect, and who knows what other futuristic technology we might end up making, which we all want


There's a viable alternative to this called "superdeterminism", but it makes everyone in the world except Sabine Hossenfelder unhappy if you bring it up because it's even worse than telling them they don't have free will.


alternative to what... the uncertainty inequality?


Is this different from the conventional, orthodox view?

I feel like I've seen experts say that yes, the fields are real, and particles are approximate descriptions.

But if it means anything, it's math.


Most of the bright people who could potentially come up with a major breakthrough would rather make >= 10x the money working as quants for hedge funds, software engineers/data scientists/product managers at FAANG or other adtech, or developing copy-and-paste pump and dump shitcoins named after Elon Musk's dog.


In my experience, it is not really the money in the corporate job market that drives these people away from research:

Not in the USA, but I know some really highly talented physicists who now work as rather badly paid software developers. Lesson: outstanding skills in, and passion for, physics do not necessarily transfer into high pay in the corporate job market.

It is rather the politicization and lack of perspective in academia that make these people leave: doing your PhD/doctoral degree in physics already bears a high opportunity cost, so you really do it because you have a deep passion for physics, not because you expect high pay.


Tl;Dr: quantum physicist in the midst of existential crisis articulating his dread.


It seems weird to me that somebody could get a physics PhD without a deep enough understanding of the philosophy of science to

1) understand why science avoids these 'what, really, is X' questions in favor of 'how does X work'

2) circumvent this existential crisis


Sigh, yet another Nautil.us article. One or two meaningful paragraphs buried in a clickbait title by someone bored with their own academic work.


I'm not sure what your complaint with this particular piece is. It seemed like a coherent and well-explained argument that the current interpretations of quantum mechanics might just be a blind alley caused by starting from the wrong mathematical foundations. You needn't agree - the author themself admits that most physicists wouldn't - but it's not by any means clickbait; it delivers exactly what the title promises.



