Protons are probably smaller than long thought (uni-bonn.de)
211 points by hhs on Feb 4, 2022 | 143 comments



So to summarize, standard model theory predicted that the radius of the proton would be 0.84 femtometers. High energy electron scattering experiments from the 90s and 2000s suggested it was more like 0.88 femtometers, a large discrepancy that caused some to consider whether the standard model would have to be revised. These researchers performed a reanalysis of the data, correcting for some confounding phenomena (possibly neutron formation), and concluded that the old experimental data is consistent with the 0.84 femtometers supported by both theory and newer, lower energy scattering experiments.

I’ll leave it as an exercise for the reader to decide whether the title of the article is clickbait.


It's not clickbait.

I've been to oodles of talks on the problem. It has had special sessions at APS meetings and I've watched acquaintances and colleagues expend meaningful fractions of their careers attempting to resolve the Proton Radius Puzzle.

If an appeal to authority is required -- Nature gave it the cover. https://www.psi.ch/en/media/our-research/protons-smaller-tha...

Pohl's measurements were incontrovertible, yet they disagreed with decades of work. It has been a big problem in a quiet community for some time.

If someone has a reliable angle to resolve the problem, it is big news. Enough really good physicists have tried and failed to resolve the conundrum that one should wait to see if this new approach holds up, too.


The headline gives the impression that the researchers found a new discrepancy where protons are found to be smaller than theory would predict. In actuality, the researchers are proposing a resolution to a preexisting discrepancy.

Additionally, I’m not sure why you’re saying we should wait to see whether the results hold up, if you don’t think writing “probably” in the headline is clickbait.


I can't parse your second question: are you claiming that A) waiting to see if the results hold up and B) supporting "probably" being in the headline are incompatible? A and B seem perfectly compatible to me.


In general I think that whenever a new theory or analysis is published, “probably” indicates a far greater degree of confidence than is justified until the theory / analysis gains broad acceptance.


Your argument seems to be that the topic is relevant, active, and frequent; ergo, newsworthy. And because the content of the article is newsworthy, the headline for it is not clickbait.

My definition of clickbait is a little different than that. It’s not so much the newsworthiness of the item behind the headline (one man’s old news is another man’s new insight), but rather the form of the headline itself. The more the headline plays on my emotions to pull me in, to try and push its relevancy ahead of an otherwise objective cataloging of information, the more clickbaity it is (to me at least).

This particular headline is far from the worst, but the air of mystery it elicits seems to waft “psst, you gotta check this out.” So imho, it is slightly clickbaity.

(I was going to use an analogy, but a previous HN article today taught me it’s better to use those in non-debate contexts).


This is just like the Millikan Oil drop experiment.

It was actually off, but anybody who did experiments that disagreed with Millikan's "adjusted" things a little bit to match better.

So, it took a while for the value of the charge of the electron to "settle" to where it belonged.

Scientists are people--with all the same failings.


> to summarize, standard model theory predicted that the radius of the proton would be 0.84 femtometers. High energy electron scattering experiments from the 90s and 2000s suggested it was more like 0.88 femtometers

I read the wikipedia articles for "proton" and "proton radius puzzle" and the press release we're commenting on. I didn't notice any mention of a theoretical prediction of proton radius - only some older experiments suggesting the larger size, and some newer experiments suggesting the smaller size.

Was there ever a theoretical prediction?


Ignorant outsider to the field here: I've often heard the claim that quantum mechanics has been verified to amazing accuracy, that its predictions match with reality to the maximum degree that our instrumental precision allows, etc. A 5% difference seems big enough that at least some of our experiments should have error limits less than that. So how is it that this is only now being found, and there's still uncertainty surrounding it?

I feel like I'm missing something fundamental here, and I'd like to know what it is.


Real physicists can correct me, but here's my understanding of it: if only the electromagnetic force is involved, the numbers provided by QED are amazingly accurate (for example, calculating the magnetic moment of an electron). But when the strong force is involved, as for the radius of the proton, the calculations are much more difficult: you can't calculate what the radius should be.


Part of this is also that a proton isn't an elementary particle. And in fact it's not only the three quarks typically ascribed to it; those account for only a little over 1% of its mass. The rest is a maelstrom of virtual particles.

Defining the radius as anything more than a statistical quantity runs up against the uncertainty principle for all these constituent particles.


That's correct. It's very hard to calculate the proton radius ab initio, even with LatticeQCD approaches.


What does "very hard" mean here? e.g., the math is advanced? The calculations are onerous? You need too many potentially risky assumptions? etc.


I am a LQCD practitioner. “Very hard” means that even with a nontrivial fraction of all the available leadership-class supercomputing in the world we don’t have enough computer power.


Good question. According to Wikipedia, numerical lattice QCD calculations using Monte Carlo methods can be extremely computationally intensive.


Yeah, for form factors at small Q^2, you need large lattices and the convergence isn't very good --> very computationally intensive.


There are lots of different numbers that you can use quantum mechanics to predict. It turns out that the size of the proton is both harder to predict and harder to measure than many of those other numbers.

Meanwhile we can measure the fine structure constant to 12 decimal places (https://en.wikipedia.org/wiki/Electromagnetic_coupling_const...) and that measurement is in very close accord with the predictions of quantum mechanics.


I thought the fine structure constant was primitive: there is no theory arguing what its value should be.


The fine structure constant can be measured in two different ways.

One is to measure the g-factor of the electron in a Penning trap. This can be related to the fine structure constant using quantum electrodynamics (QED).

The other is to measure the recoil that an atom receives when it absorbs a photon in an atom interferometer. By combining the result with another well known constant, the fine structure constant can be calculated.

The results of both methods agree to 12 digits, which shows that the QED calculations of the electron g-factor are correct on that level.
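
As a rough sketch of how those two routes connect to alpha (standard textbook relations, not the exact expressions used in the cited experiments): the g-factor route measures the electron anomaly and inverts its QED series, while the recoil route combines the Rydberg constant, a mass ratio, and the measured h/M.

  a_e \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} + C_2\left(\frac{\alpha}{\pi}\right)^2 + C_3\left(\frac{\alpha}{\pi}\right)^3 + \dots

  \alpha^2 = \frac{2 R_\infty}{c} \cdot \frac{M}{m_e} \cdot \frac{h}{M}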


thanks for clarifying. That makes a lot more sense than GP's phrasing.


Yes, that is rather clearer than what I wrote!


That maximum precision had a confidence interval. The new smaller 5% value is within that confidence interval, which is now also tighter.


That 5% smaller value was between 4 and 7 sigma away from the old value (depending how you average measurements). It's absolutely not within expectation.


Quantum theory has the largest known discrepancy between prediction and observation (120 orders of magnitude). See https://en.wikipedia.org/wiki/Cosmological_constant_problem


That’s just a back-of-the-envelope calculation. Nobody knows how to do a bona fide calculation because we don’t have an accepted quantum theory of gravity.


I guess that's why they invented slide rules. I could get within 3 significant digits on a slide rule (and compute the exponent in my head), whereas I guess the margin of this envelope is too small to contain the calculation.


The size of the proton is a very poorly defined measurement.

The amazing accuracy you have read about is the anomalous magnetic moment of the electron, which is a very clear cut measurement.


Well this 5% is the instrument precision.


I wish they explained how exactly the fact that protons are 5% smaller than previously thought actually affects our current knowledge. Or it just doesn't?


(Disclaimer: I work in this field, having done one of the original measurements. I do not believe this is a settled case at all.) If the proton is indeed smaller, it changes the Rydberg constant by several sigma. The Rydberg constant is one of, if not the, best-determined constants of nature. This has implications for precision tests of QED, for example.


What is even the size of a proton? I mean it's not like it's a nice spherical ball [made of something else].

From reading the article it seems like there's a hard distance boundary beyond which it will not "collide" with the electron?


In this context, it's the root-mean-square radius of the electric charge distribution. (Or more precisely, it's related to the slope of the electric form factor at Q^2=0.) There is also at least a magnetic radius (similar to the electric one), and a gravitational radius.
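
For concreteness, the usual textbook way to write that definition (a sketch in natural units; conventions vary slightly between papers):

  \langle r_E^2 \rangle = \int d^3 r \, r^2 \rho(r), \qquad \langle r_E^2 \rangle = -6 \left. \frac{d G_E(Q^2)}{d Q^2} \right|_{Q^2 = 0}, \qquad r_p = \sqrt{\langle r_E^2 \rangle} \approx 0.84\ \mathrm{fm}.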


Does the electrical charge ever reach 0 or does it keep getting less and less and for practical purposes, we have to set a cut-off limit and count the boundary of the proton from there?


As we understand it, the extent is infinite. However, isolated charges have a hard time staying isolated. They tend to rearrange the charges around them, so that beyond a certain vicinity the net effect of their charge on other particles is effectively zero. I believe the term for this is "shielding."


In some sense, the extent is infinite -- but the radius is not the "distance to cut-off". It's a root-mean-square average, which makes it finite.


> What is even the size of a proton?

Just a layman but I think they usually define size via the halfway distance between the centers of two identical bound particles. Not entirely sure what that would be in this case though, given helium-2 is unstable and helium-3 might give a different result. (?)


As far as I know the size in these discussions is some kind of scattering cross-section. So roughly it's just how likely you are to hit it / how precise you need to aim.

I don't know the exact details of the definition though.


That's one way to determine the size. But the same thing pops up in spectroscopy.


You are right about the atomic radii often quoted in chemistry, but the sizes of subatomic particles are usually defined in ways related to their cross-sections.


But isn't the cross section dependent upon what is interacting with them? Like, my understanding is that the cross section of hafnium is massive with respect to neutron capture, but not with respect to photons.


Although the size isn't simply the square root of the cross section, you've hit on a truth that you'd get different sizes if you were asking about the distributions of different things.


Cool to know, thanks!


Protons don't bind without neutrons mixed in.


They do for a very short duration (10^-9 s.) Nuclear fusion happens by protons binding via the strong force, then one beta-decays into a neutron before the electromagnetic repulsion pushes them beyond the range of the strong force. The beta decay happens fast enough in 0.01% of proton-proton collisions.


That's why I wrote He-2 is unstable, yes.


Is there.. any reason why all protons must be the same size?


Yes. All protons are indistinguishable from each other. No matter what aspect of a proton you measure, it will be identical to any other proton you could measure instead.

This has important implications.

Consider a quantum mechanics experiment where you emit a proton at A and try to detect it at A′. You will find that there is some quantifiable chance of detecting the particle at A′ some time T. Call that probability P.

Now consider a second experiment where you emit a proton at both A and B, and try to detect them at A′ and B′. What is the probability of finding a proton at A′ at time T? You will find that you get a different number! This is because the proton at A could travel to A′ and be detected, but also the proton at B could travel to A′ and be detected too. Since you cannot distinguish the two protons, you won’t be able to distinguish between these two outcomes, and so the probability must be different from P.


Your hypothetical seems to describe a current inability to differentiate between protons, rather than convincing me that protons must be identical. Is this like the monsters on old sea maps, just a gate around an unknown?


I think that GP is trying to say the following.

In quantum mechanics, probabilities are given by the square of the absolute value of more fundamental quantities called amplitudes. When something happens in two ways that can be distinguished, you must add the probabilities. When something happens in two indistinguishable ways, you must add the amplitudes, which yields a different probability after squaring. For example, .3^2+.4^2 != (.3+.4)^2. Thus, you can verify experimentally whether particles are or are not distinguishable.


> you can verify experimentally whether particles are or are not distinguishable

Thank you for a terrific explanation. Could you please go one layer deeper? Why does whether probabilities or amplitudes are summed imply fungibility (or its absence)?


Well, I am not a physicist, but I can fake it well enough for HN :-)

Before I say anything, if you have never heard of amplitudes before, you should read the Feynman lectures on physics vol. III, which you can find here: https://www.feynmanlectures.caltech.edu/III_toc.html Specifically read chapter 1 and chapter 3. The specific situation about distinguishable/indistinguishable particles is described in section 3-4.

Your question is kind of phrased backwards. Probabilities and amplitudes are human inventions to describe Nature's behavior, so by themselves they don't imply anything about Nature. The implication is the other way around: Nature has decided that some particle pairs are indistinguishable (proton vs. proton) and some are distinguishable (proton vs. neutron). Different rules apply to the two cases.

This indistinguishability is a pure quantum phenomenon. In a classical world, all objects are distinguishable---you can in principle label all protons and know which is which. But Nature does not work like that, and there exists this peculiar notion that you cannot tell two protons apart not even in principle. There is no deeper explanation of this phenomenon AFAIK---it is what it is.

You can tell experimentally whether two particles are or are not distinguishable by running the experiment as in Feynman 3-4. If you observe a distribution consistent with the add-amplitude rule, then the particles are indistinguishable. Any attempt to distinguish them leads to the contradictions explained in Chapter 1.

There is an even more peculiar phenomenon. Despite being indistinguishable, swapping two indistinguishable particles is not a no-op because it changes the amplitudes. You must still add amplitudes, but the amplitudes are different, yet different in such a subtle way that you still cannot tell the particles apart. Feynman 3-4 tells you how the amplitudes change during the swap, with a deeper explanation in chapter 4.


Because you can construct experiments where the proton from source A ends up at location B, and the proton from source C ends up at location D, or A ends up at D and B ends up at C. (Or some other possibilities.) You find that the A→B, C→D possibility's amplitude sums with the A→D, B→C possibility: i.e., that they're the same indistinguishable final state.

If swapping two things around gives a result indistinguishable from not swapping them, that's fungibility.


Photons of different polarizations are nonidentical, but if this argument is true, it would also prove that horizontally and vertically polarized photons were identical... in an experiment insensitive to polarization. I do not believe this answers the original question.


Two coins aren't identical, but they're fungible. Is "identical" even well-defined on fundamental-ish particles?


Identical means that either bosonic or fermionic conditions are applied to the joint tensor-product'd wavefunctions of multi-particle systems. It's well defined, and although I was hoping this thread would offer a more grounded definition I am not sure if it succeeded.


I am sorry but I am lost. If they both have probabilities or amplitudes of 1 would that not lead to a joint probability of 200% in the nonidentical case and 400% in the identical one?


Sorry, I didn't mean to imply that my comment should apply literally to all cases. I am just pointing out to parent that particles can be indistinguishable in principle, and not just as a technological limitation of our measurements. Moreover, there is an experimental way to tell the difference between distinguishable and indistinguishable, roughly based on the difference between probabilities and amplitudes. To dig deeper one must look at the details, e.g. in Feynman's lectures vol. III.

The statement that protons are indistinguishable is not strictly correct either, because protons have a spin. Protons with the same spin are indistinguishable, but you can tell apart protons with different spin. The spin of protons can only assume two values, so effectively there are two classes of protons, indistinguishable within the class.

In your specific case, it is clearly false that the probability of having one particle in one place is 200%. However, my statement still holds for expectations, and you end up with an expected two particles in one place. In the indistinguishable case, you must compute expectations based on amplitudes, not probabilities.


Yea, spin adds a new level of complications. You can distinguish between two otherwise–indistinguishable protons if they have opposite spin, but the spin of a proton can also change over time (usually due to interactions with other particles, such as stray radio waves passing through your experiment).

Going back to the experiment that I described, you can imagine that the particles are released at A and B with opposite spins, and then the detector at A’ only detects the spin that corresponds to the particle at A. This causes you to measure yet another probability, distinct from the other two, because there are now more possibilities and there are still multiple ways to cause the detector to find something. It could detect the proton from A, but the proton from A could also have its spin flipped and thus not be detected. The particle from B could arrive at A’ with the wrong spin and not be counted, or it could have its spin flipped along the way and be counted. You still cannot tell which proton you detected!

Similar complications occur with polarization of photons, which someone else mentioned in one of the comments. It’s worse though because polarization is a continuous quantity, and there are more ways to change it.


In addition to what has already been said, I want to point out that there is a normalization step as well. The amplitudes are always calculated in such a way that the probabilities would never add up to more than 100%. This normalization is generally baked into the wave function of the system you are studying, but it can also be done separately.

Incidentally, amplitudes are actually complex numbers. You can think of them as little arrows, like this: →, or this: ↖. In fact, these arrows are also rotating with the passage of time; they trace out little circles. To calculate the probability, we square the absolute value of the complex number. The absolute value of a complex number is equal to the length of the arrow, and squaring a length gives you an area. Thus the probability is essentially the same as the area of the circle traced out by the rotating arrow.

Events with high probability correspond to long arrows (big amplitudes), and low probability events have short arrows (small amplitudes). Amplitudes can cancel out when added together if they point in opposite directions. Thus we observe that some sequences of events have very low probability. We sometimes say that these events interfere with each other.

Sometimes this interference seems mysterious, as in the double–slit experiment, and other times it seems very mundane. In real life we rarely bother to calculate the probability that the batter will hit the ball before the pitcher throws it, but calculating the correct answers in quantum mechanics requires taking into account many such unusual events.
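
A small numerical sketch of the rotating-arrow picture (made-up amplitudes, with Python's complex numbers standing in for the arrows):

  import cmath

  a1 = 0.5 * cmath.exp(1j * 0)          # arrow pointing "right"
  a2 = 0.5 * cmath.exp(1j * cmath.pi)   # same length, opposite direction

  # Indistinguishable alternatives: add the arrows, then square the length.
  print(abs(a1 + a2) ** 2)              # ~0.0 -> opposite arrows cancel (destructive)
  print(abs(a1 + a1) ** 2)              # 1.0  -> aligned arrows reinforce (constructive)
  print(abs(a1) ** 2 + abs(a2) ** 2)    # 0.5  -> what adding probabilities would give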


Can a horizontally polarized photon destructively interfere with a vertically polarized photon in an experiment that is insensitive to polarization?


Yes. Why do you ask?


Thank you, I should have mentioned amplitudes.


There is a lot of circular reasoning involved in quantum information and thermodynamics.

It's all totally, perfectly self consistent, but it does not derive from first principles like set theory or mathematical logic do. Physics is an experimental science; they are not required to state their axioms. Oftentimes they do (QFT for example), but the most glaring case where they don't is anything involving information.

The whole postulate that information is physical is something that was stumbled upon, and then turned out to explain a whole bunch of other weird things like heat and entropy, and some of those explanations in turn implied that information is physical.

I suspect that our current efforts to build cryptographically-relevant quantum computers are a lot like the efforts to build perpetual motion machines in the 1700s. Our current understanding of things isn't wrong, but there is some undiscovered general principle that we keep butting up against, so we'll keep trying to build these things until we figure out why nature keeps blocking us. That discovery -- rather than a computationally-useful device -- will be the most important result of all the quantum computing research going on right now.


> until we figure out why nature keeps blocking us

A quantum computer only behaves like our mathematical ideal quantum computer if it's sufficiently isolated from the rest of the universe: otherwise you don't get the “indistinguishable states” thing and the amplitudes don't sum, and the computer stops computing and starts being a regular ol' physics experiment.

It's an engineering problem, as far as I understand. Get things cold enough, get things isolated enough, so they stay entangled with each other and not with the rest of the universe.


Is it? In engineering problems you have to consider practical feasibility. Would it turn out that for the expenditure of energy to get things isolated we may as well spend it on classical computation?


On Earth? Maybe. In deep space? Probably not.


The cosmic microwave background makes even deep intergalactic space warmer (3 or 4 K) than the inside of Earth’s best fridges (the best as low as nanokelvin). 3 or 4 K is easy to achieve using liquid helium cooling, which is presumably why e.g. Google’s Sycamore chip operates in that range. But achieving those low temperatures is a regular occurrence in labs all over the world; you call AirGas or somebody and they deliver a dewar of liquid He.


And, close to a star, you have several neutrinos flying through your experiment. You have vibrations from earthquakes. You have electromagnetic coupling. etc, etc.

The temperatures are easy enough; you just have to compensate for thermal noise. The isolation isn't; without isolation, your signal is wrong.


But unless you're doing a computation without I/O (i.e. no reading back the results, no providing inputs), you need coupling into the rest of the universe, and typically into a low-entropy part of the universe like the Earth, where you have entities that care about the computation. So, I am not convinced having a deeply isolated part of the universe is the answer; in fact, the isolation vs. signal quality tradeoff makes it sound more and more like a fundamental limitation of practical concern.


> But unless you're doing a computation without I/O (i.e. no reading back the results, no providing inputs), you need coupling into the rest of the universe, and typically into a low-entropy part of the universe like the Earth, where you have entities that care about the computation.

With a quantum computer, you can only do that at the end of the computation. Not half-way through; that'll cause the computer to start doing a different (unwanted) computation instead. While the calculation is happening, you need (a high probability of) total isolation from the rest of the universe, so that the intermediate state of the computer only interferes with itself.

> So, I am not convinced having a deeply isolated part of the universe is the answer; in fact, the isolation vs. signal quality tradeoff makes it sound more and more like a fundamental limitation of practical concern.

It is a fundamental limitation of practical concern! Just like the need to keep conventional computer processors cool or they melt, or the fundamental limitations on the bandwidth that a radio frequency can give you. The people who deal with these limitations are called engineers.


> It's an engineering problem, as far as I understand.

That is far from being clear.

People in the 1700s were totally convinced that building a perpetual motion machine was just "an engineering problem".


that seems plausible, but currently very unsupported by evidence. in the past decade, quantum computers have gotten way more powerful, and progress doesn't seem to be stalling out.


Think of it in a simpler way. There is nothing we can currently measure that will be different for one proton versus another.


No. Bell’s Theorem states that if subatomic particles have any “hidden variable” which we cannot currently measure, then that “hidden variable” will either have no effect at all on the particles, or it will have to have instantaneous “nonlocal” effects on them.

If this hidden variable has no effect at all, then it is useless. Most physicists don’t care to include extra variables in a quantum mechanical theory that have no effect at all. By definition, they could not be measured, and so they would be extra baggage to carry around to no effect.

Physicists don’t much like nonlocal theories either. Any theory that requires information about the particle at A to travel instantaneously to B in order to change the state of the particle there is going to be hard to sell. You might have heard of a fellow called Einstein, who proved that nothing can go faster than the speed of light.

Thus, most physicists take the easier road, as it requires only that subatomic particles are indistinguishable. All this means is that particles are too simple to be uniquely identified. You can in principle tell the difference between two baseballs, because they have different patterns of wear and other markings on their surface. But those wear patterns and markings are formed out of the complex arrangements of trillions or quadrillions of atoms. It is easy to see how rearranging the ink molecules on the surface of a baseball could create a unique baseball, or how selectively removing molecules from the surface of the leather (by scratching it, for example) could do the same.

But subatomic particles are too simple to have that kind of internal state. Even atoms are only slightly distinguishable. A neutral carbon atom has 6 electrons, but a carbon ion might have 5 or 7 electrons. You can distinguish between the atom and the ion, but not between two atoms or two ions.


Every description of a scientific idea always carries the implied disclaimer that we are talking about the present state of our knowledge. Every "fact" is in fact a belief that's held on a tentative basis pending the introduction of better evidence. That sounds quite noncommittal, but in fact it's the best that we can hope for.

One interesting thing about protons is that you can describe the effect of exchanging any two of them in precise terms, and end up with macroscopic predictions that can be tested. So you're not limited to just trying to measure every proton in a bag to see if they're all the same. There are other ways to test the hypothesis.


How does your experiment demonstrate that all protons are indistinguishable?

Couldn't it be that your measurement for a proton being at A' is simply measuring the wrong feature of the protons?

edit: I have no idea if protons are indistinguishable from one another, but this experiment doesn't seem compelling.


Think of it this way: we can experimentally figure out what the probabilities are; they are an observed thing. The only way to make sense of these probabilities is if all protons are indistinguishable.

In my statistical mechanics course, we went through an illuminating exercise where we started by trying to take account of every atom in a gas cloud. We started taking limits and making assumptions. One of them was that all atoms are indistinguishable from each other. This decreases the possible states of the system by N! (factorial, not surprise). After making that assumption, out pops the ideal gas law.
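
For reference, the indistinguishability step is usually written as dividing the partition function by N! (a sketch of the standard argument, not the exact course notes):

  Z_N^{\text{distinguishable}} = z_1^N \quad\longrightarrow\quad Z_N^{\text{indistinguishable}} = \frac{z_1^N}{N!}, \qquad \ln N! \approx N \ln N - N.

That N! is what makes the entropy extensive (it resolves the Gibbs paradox), and thermodynamic quantities then follow from F = -k_B T \ln Z_N.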


This is convincing to me, thank you.


I was thinking the same thing. If you read the experiment described but replace the word "proton" with "cat" -- I would just assume that the scientists in question were from a society with very coarse senses and measurements, not that all cats are indistinguishable.


In some versions of object oriented programming, some aspects of objects can be queried from the outside, some cannot--they are invisible states.

For protons, they may come with a serial number, but we're not allowed to read it because of the constraints built into God's programming language.


It's a mistake to try to make analogies of quantum interactions with everyday objects like cats. It's not as useful as one might hope, and your intuition gets in the way.

Protons don't behave like cats.


Turn it the other way around: if it has a different level of energy in its quantum field, it doesn't expose the properties of what we call "a proton".


>and so the probability must be different from P.

Not if they interfere in just the right way to keep the probability the same as in P.

I see a second problem with this experiment - in the double slit experiment, there is one electron interfering with itself. If you released two protons at the same time, they'd interact with each other and change the probabilities, even classically.


Does this mean that there's possibly only one proton?


You mean like the "One-electron universe" postulate [1]? First, I don't think it's really useful beyond being a fun idea to consider. Also there's a lot more matter than antimatter, which raises some... logistical problems. Also, unlike electrons/positrons, protons are not fundamental particles, they're made up of quarks which throws a whole other wrench in the idea.

[1]: https://en.wikipedia.org/wiki/One-electron_universe


On the contrary, protons are baryons and they must each have a unique quantum state (they must be in a unique/distinct location).


Two protons P0 and P1 are the same only if they match in every single quantum parameter.


Each solution to the Schrodinger equation describes a different kind of particle. We assume that QFT is deterministic and complete so each particle will have exactly the same properties as the others of its kind aside from position and momentum (as far as we can tell).

As a fun consequence, the theory treats all particles the same -- even so-called quasi-particles! The difference between fundamental and other kinds of particles is that the fundamental particles can exist (at least for a short time before decaying) in vacuo.


There is no telling whether anything exists in vacuo or merely as excitations in a lattice we're living inside unaware. :)


Good question. I'm not sure. We treat them like they are all the same size, which I assume is a function of them all being themselves composed of pieces that we treat as fungible. On the other hand, it's not like anyone is measuring a proton's size with a pair of calipers, so them all being the same 'size' could simply be a function of them all having the same charge.


Protons are made of 3 quarks and a bunch of tiny gluons. If a big proton existed, it would be made of different or more quarks/stuff, and be called something else.

Now, is there a reason why there isn't a something else that acts like a proton?

That gets into Elementary particle physics and what combinations are stable and have matching charge. Quarks have charge +- n/3 and generally come in triples.

Could the number of gluons vary? Maybe? But they wouldn't affect the size measurements much? And variations wouldn't be stable?


layman, but I believe it's the "size" (defined by the wave function) that makes a particle a proton. at least, one of the factors that does, anyway


(Disclaimer: I don't do anything even slightly related to this field, and I somehow managed to skip taking any physics in college except quantum, which I literally slept through and dropped because it was too early in the morning three times before simply giving up. So, my question is probably super super dumb ;P.)

Isn't a sigma a lot? Like, I think that's a standard deviation? If there is even a longshot chance that we might be off on that constant by multiple standard deviations, isn't that certainly a less-determined constant than, say, the acceleration of gravity? I feel like there is no chance in hell we could one day discover that our calculations are that far off for gravity.


Yes and no. It's a lot in the sense that we are way off from what we believed our knowledge to be. But on an absolute scale, it's not a lot. The current determination of the Rydberg constant puts it at 10973731.568160 1/m with an uncertainty of 0.000021 1/m. So a relative precision of ~2*10^-12 (or maybe only 10^-11).

The standard acceleration of gravity is, btw, defined, so no uncertainty. The gravitational constant G is known only to 10^-5 or so.
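
A quick check of that relative precision, using just the numbers quoted above:

  rydberg = 10973731.568160   # 1/m, value quoted above
  uncertainty = 0.000021      # 1/m
  print(f"{uncertainty / rydberg:.1e}")   # ~1.9e-12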


Is the gravitational constant something that tells us a lot about quantum mechanics? Since it derives from mass?

What I mean is, just like the sun radiates in the EM spectrum and tells us a lot about the properties of those fields, does gravity do something similar?


The acceleration due to gravity isn’t a constant of nature; it’s dependent on the mass and mutual distance of both objects.

EDIT: distance, not size


Your correction is wrong... The size of the two objects is irrelevant. The only variables are their respective masses and the DISTANCE between them.

If it's worth correcting people, it's worth correcting them correctly... Don't you agree?

Also... The previous poster was pretty obviously talking about the constant of Gravitation (https://en.m.wikipedia.org/wiki/Gravitational_constant). His way of phrasing it is a common English shorthand that I see frequently enough that it's a well understood usage. They may not have used precise language, but their phrasing was definitely less misleading than your (incorrect) correction.


I assumed the gp was thinking about 9.8 m/s^2, because in the English-speaking non-physics-expert world, this is called the acceleration due to gravity.

You’re right that it’s distance, of course, but I was coming from a place of trying to be helpful, and thought that if they were talking about terrestrial physics, it’d be easiest to imply it depended on the size of the earth (which determines our distance from it) and your height.


I don't buy that you were coming to this from a place of trying to be helpful. I know that here on HN, we're supposed to assume good motives in other posters, but I just can't do it, here.

I think you were trying to score points, and get an ego fix by correcting someone else.

Not so much fun when someone else dunks on you, though, is it?


That’s not something that could change based on experiment though. We define standard gravity as 9.80665 m/s^2 but the actual value will vary considerably based on location on the earth.


It’s only dependent on the mass of the other object, not the mass of the object whose acceleration we’re measuring.


Objects can and do change their mass. Sometimes that mass changes over the course of the acceleration process (e.g., rockets).

The mass of the accelerating object is absolutely a variable.


I'm sure they meant the gravitational constant. Jeez.


Perhaps they meant the gravitational constant, which is not known to a very high degree of precision.


But we've got to have it down within two standard deviations, right? ...No? Do I just not know what a sigma is? :(


I think you have a misunderstanding about sigma. When describing the measurement of a particular physical constant, the "standard deviation" is something that changes as we get better at making measurements. It basically means "If all our assumptions (e.g. assumptions about how good our equipment is, uncertainties about other physical constants) are correct, then it is unlikely that we would have made the measurements we did if the true value is not within 2 standard deviations of the result we got". When a more accurate measurement is made, then "1 standard deviation" gets smaller, so we know the value better, but it's always true to say "we know the value to within a few standard deviations (given some assumptions made by experimenters)" . If it turns out the measurement was wrong by several standard deviations, then it's very likely that some assumptions were wrong.


You are absolutely right. For deviations beyond, say 4 or 5 sigma, it's much more probable that it's not a statistical fluke, but a systematic error. Assumption wrong, experiment wrong, theory wrong or something like that. We expect that measurements land within +-1 sigma of the true value with ~68% probability (and 95% for +-2sigma). Implicitly, we assume a Gaussian distribution (often the case to good approximation, at least for small deviations), and also that we can invert the sentence: the true value is within the error band around the measurement with that probability.
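
For the numbers being thrown around here, the Gaussian coverage probabilities are easy to compute with Python's standard library (assumes the Gaussian approximation mentioned above):

  from math import erf, sqrt

  def within_k_sigma(k):
      """Probability that a Gaussian result lands within +-k standard deviations."""
      return erf(k / sqrt(2))

  for k in (1, 2, 3, 5):
      print(k, round(within_k_sigma(k), 7), f"tail: {1 - within_k_sigma(k):.1e}")
  # ~68% at 1 sigma, ~95% at 2 sigma, and only a ~6e-7 tail at 5 sigma,
  # which is why a 4-7 sigma discrepancy points to a systematic error, not a fluke.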


We had two methods to measure the proton radius and they gave different results. People used to trust the old value more, but a study in 2019 implied that it was the incorrect one and the correct value is the new one. See https://en.wikipedia.org/wiki/Proton_radius_puzzle, https://www.quantamagazine.org/physicists-finally-nail-the-p....

The latter says "The new result implies that earlier attempts to measure the proton’s radius in electronic hydrogen tended to overshoot the true value. It’s unclear why this would be so" and it seems that these researchers have now shown why.


The value of the proton charge radius in itself is pretty much irrelevant, otherwise it wouldn't be so hard to measure.

As scientists we want to know if our understanding of nature is correct. To test this, we measure the same quantity, for example the proton charge radius, in different ways. If the underlying theory is correct, the results should agree within the experimental uncertainties.

Since 2010 there was a big disagreement between the proton charge radii measured by hydrogen spectroscopy and electron/proton scattering (which roughly agreed at the time), and a much more accurate measurement using muonic hydrogen spectroscopy. This has led to a lot of excitement, since discrepancies could be a hint of new physics. Since then, more accurate hydrogen spectroscopy experiments have been performed and most agree with the muonic hydrogen value. This probably indicates that the discrepancy is due to underestimated error bars in the old measurements.

In contrast to laser spectroscopy which gives relatively direct results, getting the charge radius out of electron scattering data is notoriously hard. Different groups have found different charge radii from the same data for a long time.


Most likely it indicates protons have devalued their currency slightly. Probably not a big deal for now but we should keep a close eye on it in case it happens again.


Cosmic inflation?


Is this going to mean going back and looking at a bunch of old experiments and slapping your forehead and saying, "god dammit, THAT'S why it didn't work" like I do with my code when I find a wrong sign or off by one error? Or even worse, saying "how the hell did that ever work?" That one... ooooh, that one.


> Or even worse, saying "how the hell did that ever work?"

The most unsettling feeling. I often have that feeling when doing the devops and infrastructure part of my job, e.g. encountering paradoxes in the dark dreary bowels of systemd.

The idealist and perfectionist in you wants to keep digging deeper to arrive at a proper understanding. The realist and lazy SOB in you wants to slowly back away and pretend this never happened. This dialectic hopefully guides you toward a happy compromise, somewhere down the middle.


The more critical the infrastructure you work on, the less you will be willing to take the "slowly back away" route. Because as long as you don't understand every single detail of the failure, that feeling of "what horrible thing do I not understand here" won't go away, and you dread having a major "oh... crap, so that's what I did not understand" moment when it's inevitable.

This week, we had such a failure when testing a change. The root cause was found eventually, but what two extremely good engineers plus me still could not figure out was why that problem was only surfacing now, after years of having that buggy code in there. Very important code. I did not want to risk some obviously unknown property of it be our demise later on.

Fortunately, in this case it just turned out that the seemingly unrelated changes did, after all, hold the now failing thing differently than before. As it had been used, the problem was masked entirely, and the code did work well for all those years.


Worse, you dig at it for too long and collapse the wave function, and it never works again despite you having made no changes.


Because the code you are looking at isn’t what is compiled and deployed, doh!


What would it mean if they were actually 0.88 when first measured and at 0.84 now?


Either the protons shrank or the yardstick grew.


But if protons shrank, wouldn't the yardstick also shrink seeing as it's also composed of protons? These measurements taken didn't use a physical yardstick.


OK, another dumb question from a neophyte here: doesn't the Heisenberg uncertainty principle make it impossible to know the size of a proton for sure? It seems like if you cannot know both the position and momentum of a particle, then measuring its size exactly would be impossible?

Thanks in advance to anyone willing to take the time to give me an ELI5 on this :)


How big is the smell of a baking pie? The concentration of pie molecules in the air is not a binary yes/no function of position, but at the same time the volume is definitely smaller than the county, and larger than the kitchen. Fuzzy-edged objects have some sense of size but there is some freedom in where to define their edge.

For the proton, the "size" is defined as a length-valued parameter in the function that expresses the charge distribution through space. The parameter is objective - but its association with the word "size" is imprecise. It is necessarily imprecise, not because of anything quantum or even unfamiliar, but because there's not another English word for the scale of objects without hard edges.


If I'm blindly shooting a lot of bullets at a lot of ducks, I should be hitting some ducks. If I then count the number of shots fired and the number of ducks dead on the ground I should be able to calculate the size of ducks with high precision.
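
Pushing the analogy into a toy Monte Carlo (all numbers invented; it only shows how a hit rate turns into a cross-section, and from that a radius):

  import math, random

  AREA = 100.0 * 100.0      # field sprayed with bullets, m^2 (made up)
  N_DUCKS = 50              # ducks scattered in the field (made up)
  TRUE_RADIUS = 0.15        # the duck "radius" in m; pretend we don't know it
  N_SHOTS = 200_000

  ducks = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N_DUCKS)]

  hits = 0
  for _ in range(N_SHOTS):
      x, y = random.uniform(0, 100), random.uniform(0, 100)
      if any((x - dx) ** 2 + (y - dy) ** 2 < TRUE_RADIUS ** 2 for dx, dy in ducks):
          hits += 1

  # hit fraction ~= N_DUCKS * sigma / AREA, so invert for the cross-section sigma
  sigma = hits / N_SHOTS * AREA / N_DUCKS
  print("estimated radius:", math.sqrt(sigma / math.pi))   # ~0.15, up to statistical noise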


A better analogy would be to subtract the amount of bullets embedded in the wall behind the ducks from the total bullets fired.

Assuming ducks stop the bullets.

And assuming it's not just one immortal duck flying around super fast.


The Heisenberg principle only makes it impossible to know the exact size of a single proton in a particular moment in time. But it doesn't say that you can't calculate the average of millions of protons, for example.


Thanks for all the good discussion; this actually makes sense, of course, now that I think about it. I am sure they are crashing millions of particles together and doing some analytics, so based on that we know about what the size is. Sort of like how when we do analytics online we know the approximate height of, say, all Amazon customers who purchased jeans. OK, dumb analogy but it works for me :)


I'm an ignorant layman here, but the uncertainty principle only says that if you're sure about the size/location, then you're very unsure about the momentum. It's possible that measuring its size does not need much certainty about its momentum, so they can get the accuracy they want.


Looking at the picture in the article, I'd say the proton wasn't too small, rather the ruler was too big.


What does it even mean?

We're at quantum scales and the very notion of "size" is rather ill-defined.

The position of a proton is, as a matter of fact, a probabilistic affair, so when you talk about size ... if you don't even know where the darn thing is, how are you going to measure where it starts and where it ends ...


It is a misconception that things are ill-defined just because they are at a quantum scale. The proton has an electric charge distribution. The definition of the proton size used here is the root mean square charge radius of that charge distribution.


Indeed, but it's actually a little bit hard to define what that charge distribution is (or in which frame...). The way it shows up, both in the cross section for elastic lepton scattering and in spectroscopy, is via the related quantity, the "form factor", as its slope at zero four-momentum transfer. While the form factor can be thought of as the Fourier transform of the charge distribution for heavy objects (say, iron nuclei), for the proton this becomes dicey.


I misread this as "pronouns."

I need to spend less time on Twitter.


What if the size of the proton is variable?


The difference is .04 femtometers and a femtometer is one quadrillionth of a meter.

So, I'll be sitting down for a little while.


Did you happen to know that the difference between 1 lightyear and 1.000001 light years is 9.4605284 × 10^24 femtometers? That means they're not at all similar distances on an astronomic scale!


Not sure why my og comment was downvoted. In absolute terms, the difference in size of the proton is quite large. In relative terms, the difference in size is quite small. And that blows my little sapien mind.

And that was all.


What are quarks made of?


As far as we know, quarks are not composite particles. They are fundamental (and point-like). Same as electrons, photons, etc.


Quarks are excitations in the quark fields. Thinking about them as particles is a useful simplification/model.

Analogy: there are no "red pixels", there are just red excitations in the display RGB field.


Snips, snails, and gold-pressed latinum.

Or possibly strings. If that’s not true, the best answer is the tautology: quarks are made of quarks.


I thought that someone had a measure for a thought and that a long thought was measurably shorter than a short(?) thought, by the length of a proton. This then went off into a consideration of neural message length, etc. Cosmic black hole of thought in a split second.



This is not even wrong [1].

[1] https://en.wikipedia.org/wiki/Not_even_wrong


It’s easy to drop that link. Much harder to deny the math that works perfectly. But probably it’s even harder to convince you to take serious look at it :)



