This is tremendously interesting, in particular their suggestion that one solution to the conundrum is the existence of additional very light force carriers.
Perhaps a more up-to-date particle physicist than I can chip in, but such particles must presumably be outside of the Standard Model (as, iirc from my particle physics, which is admittedly getting on for 20 years ago, there ain't any more slots left). I know there are some interesting new theories - very widely disputed, and with a lot of other problems, such as the Exceptionally Simple Theory - which predict a whole host of additional particles, but there are reasons such things are on the fringe right now.
With all the progress in the last couple of decades, and all these new results pointing to new Physics in all kinds of areas, it's a fascinating time for the field - and makes one wonder where we'd be if the entire faculty hadn't spent decades down the String theory rabbit hole.
"such particles must presumably be outside of the Standard Model"
Yes, they are. With the detection of the Higgs we have now detected experimentally all of the Standard Model particles. Any new "very light force carriers" would definitely be outside of the Standard Model.
Gravitons are not included in the Standard Model, which doesn't cover gravity at all, and is not intended to.
As far as whether gravitons could be the "very light force carriers", I don't see how that would be possible; gravity is way too weak a force to make that much of a difference in measurements on hydrogen atoms, even muonic ones.
As a fascinated layman, strings always reminded me quite a bit of epicycles and I've had an impression that string theory had been falling out of favor for some time now.
The "popular fad" aspect of string theory has certainly lost much of its luster, no question, particularly as far as the general public is concerned. (The early excitement about the theory in the 1980's was apparently rather over the top.) The fact that folks like Peter Woit and Lee Smolin were able to publish popular books about their distaste for string theory reflects that (and those books in turn did a lot to establish the "string theory is overrated" counter-meme).
But within actual physics departments, as far as I've been able to tell, string theory holds much the same position that it has held for many years: it's the direction that most people interested in pursuing a "theory of everything" seem to find most promising. (Plenty of physicists have other interests, of course, even within particle physics: the phenomenology involved in interpreting LHC data and predicting new phenomena to look for at that scale is a big deal these days, and string theory really isn't relevant to those questions.) As far as I can tell from the inside, that's primarily a genuine process of ongoing scientific judgement rather than pure groupthink.
As a string theorist, I can see to some degree why you'd say it reminds you of epicycles: I think most of us would agree that there's got to be some deeper underlying truth that would make the structures of string theory seem more unified and elegant. The difference between that and epicycles, at least to a string theorist, is that we're optimistic that string theory as we know it is a valid, well-defined approximation to the underlying theory, whatever form that may take. Epicycles, on the other hand, were (as I understand it) sort of a hack, without any intrinsic connection to the true heliocentric model.
"The phenomenology involved in interpreting LHC data and predicting new phenomena to look for at that scale is a big deal these days, and string theory really isn't relevant to those questions."
I think "string theory is overrated" meme is mainly fueled by the fact that it "really isn't relevant to those questions". I got the impression that many string theorists claimed string theory to be relevant to those questions, and only changed position once it became clear that won't be the case.
From what I understand (as a fellow fascinated layman), String Theory is still the dominant StandardModel++ theory in academia, but a few people are being more vocal in questioning its validity.
Peter Woit is a notable skeptic of the viability of String Theory and has a blog which contains links to a number of interesting physics blogs.
http://www.math.columbia.edu/~woit/wordpress/
The dominant StandardModel++ theory is the Standard Model + supersymmetry. String Theory is still in its formative years and is better described as a collection of ideas than as a proper theory.
This has been a pretty big deal for a while now -- it's a huge, unexplained effect that every theoretical physicist in a remotely related field will have poked at. (And the expected result is actually pretty easy to calculate, as far as anything in QED is.)
A lot of folks were betting on some sort of experimental error (just as ended up happening with the superluminal neutrinos), so if they've managed to replicate the result with more in-depth experiments, that's kind of important.
I find the physics fascinating, but that muonic hydrogen stuff, that has marketing gold all over it. SmartWater, now made with Muonic Hydrogen, get it before it decays!
I expect it's going to take several days to digest the paper though; it is quite dense.
Humour aside, I love these sorts of things. Even if there is no new physics involved, if it turns out to be some sort of error in the experiment, we are bound to learn something.
I'm no physics expert but I did find the article very interesting. Can someone who knows more than I do possibly explain, simply, what may be going on here and what its implications would be?
In my PhD, I measured the proton form factor and, from that, calculated the radius. So let me explain my view :)
There are essentially three ways to measure the radius:
1) Electron-proton scattering. This gives you the form factors (related to the charge distribution by the Fourier transform), and from that you can calculate the proton radius.
2) Measurements of the electron energy levels of "normal" Hydrogen
3) Measurements of the muon-proton energy levels.
Re 1)
The Mainz proton form factor experiment is to date the most precise proton radius determination from electron scattering and is compatible with the larger value. Our results are along the lines of earlier measurements using similar techniques. The first measurements of this kind were done in the '50s and '60s (but produced quite a wrong radius).
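To make the relation concrete, here's a minimal numerical sketch (not the actual Mainz analysis, which fits the measured cross sections): it uses the textbook dipole parametrization as a stand-in form factor and extracts the radius from the slope at Q^2 = 0 via <r^2> = -6 dG_E/dQ^2.

    // Sketch: proton radius from the slope of the electric form factor at Q^2 = 0,
    // using the textbook dipole parametrization as a stand-in for measured data.
    #include <cmath>
    #include <cstdio>

    // Dipole form factor; Q2 in GeV^2, standard dipole mass^2 = 0.71 GeV^2.
    double GE_dipole(double Q2) {
        return 1.0 / std::pow(1.0 + Q2 / 0.71, 2);
    }

    int main() {
        const double hbarc = 0.19733;  // GeV*fm, converts GeV^-1 to fm
        const double h = 1e-6;         // GeV^2, step for the numerical derivative

        // <r^2> = -6 * dG_E/dQ^2 at Q^2 = 0 (gives GeV^-2), then convert to fm.
        double slope = (GE_dipole(h) - GE_dipole(0.0)) / h;
        double r = std::sqrt(-6.0 * slope) * hbarc;

        std::printf("dipole radius ~ %.3f fm\n", r);  // ~0.81 fm
        return 0;
    }

The real extraction has to fit the measured form factor and extrapolate to Q^2 = 0, which is where a lot of the systematic uncertainty comes from.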
Re 2) These measurements are very hard; the proton size effect is very small. Nevertheless, the results are of similar precision to those from 1) and give a compatible radius.
Re 3) In muonic hydrogen, the muon is "much closer" to the proton and therefore the proton size has a much larger influence. Because of this, the method produces by far the best precision.
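The "much closer" statement is just the Bohr radius scaling with the reduced mass; a quick back-of-the-envelope sketch (standard textbook numbers, nothing from the paper):

    // Why the proton size matters so much more in muonic hydrogen: the Bohr
    // radius scales as 1/(reduced mass), and the finite-size level shift scales
    // with the wavefunction density at the origin, i.e. roughly as m_r^3.
    #include <cmath>
    #include <cstdio>

    double reduced(double m1, double m2) { return m1 * m2 / (m1 + m2); }

    int main() {
        const double a0  = 52917.7;   // fm, Bohr radius for an infinitely heavy nucleus
        const double me  = 1.0;       // electron mass (units of electron masses)
        const double mmu = 206.77;    // muon mass / electron mass
        const double mp  = 1836.15;   // proton mass / electron mass

        double mr_e  = reduced(me, mp);    // ~1
        double mr_mu = reduced(mmu, mp);   // ~186
        double a_e   = a0 / mr_e;          // ordinary hydrogen, ~53,000 fm
        double a_mu  = a0 / mr_mu;         // muonic hydrogen, ~285 fm

        std::printf("Bohr radius: e-p %.0f fm, mu-p %.0f fm (ratio ~%.0f)\n",
                    a_e, a_mu, a_e / a_mu);
        // Finite-size shift ~ |psi(0)|^2 ~ 1/a^3, so the proton-size effect is
        // roughly 186^3 ~ 6e6 times larger in muonic hydrogen.
        std::printf("finite-size enhancement ~ %.1e\n", std::pow(mr_mu / mr_e, 3));
        return 0;
    }
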
The 7 standard deviations are calculated with the precision of 1) and 2); the error of 3) is negligible!
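(For those wondering where a number like "7 standard deviations" comes from: it's just the difference of the two radius values divided by their combined uncertainty. A rough sketch with numbers of the size that were being quoted, not the exact published values:)

    // How a "7 sigma" discrepancy is quoted: difference over combined uncertainty.
    // The values below are illustrative, roughly the size of the quoted results.
    #include <cmath>
    #include <cstdio>

    int main() {
        double r_ep  = 0.877,  s_ep  = 0.005;    // fm, electron-based determinations
        double r_mup = 0.8409, s_mup = 0.0004;   // fm, muonic hydrogen

        double nsigma = std::fabs(r_ep - r_mup)
                      / std::sqrt(s_ep * s_ep + s_mup * s_mup);
        std::printf("discrepancy ~ %.1f standard deviations\n", nsigma);
        return 0;
    }
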
So, what can go wrong?
In my opinion, the muon experiment is very clean, so I don't believe in an experimental error. The fact that 1) and 2) agree makes it unlikely that they are wrong, as they are completely separate methods. However, it is possible.
It could be that we are missing an important part in our understanding of the radiative corrections, i.e. the theory needed to calculate the levels.
This could mean a simple error in one of the calculations, but most if not all of them have been checked by different groups. It could also mean a flaw in one of the solving techniques. Or maybe something which was overlooked.
It is also possible that there is another particle at work here. A possible candidate is a dark photon. This solution has some benefits, especially since such a new particle might also solve the muon g-2 puzzle. But it is not easy to construct a theory of such a particle without violating other experiments. A lot of fine tuning.
It could also be that the muon just behaves differently from the electron. That would shatter a rather basic and widely accepted belief.
I attended a very interesting workshop recently which focused on this puzzle. Unfortunately, we didn't find a solution. However, there are a lot of experiments in the pipeline which might clarify the situation:
- There are several experiments to measure the proton radius using electron scattering, with specialized instruments and new methods.
- Muse is an experiment which will scatter a combined electron and muon beam from protons. This will test certain aspects of the radiative corrections and allow a direct comparison of the extracted radii.
- There will be measurements of other muonic atoms.
All in all, this is a very interesting topic right now, with the added benefit that it brings together the often separate communities of nuclear and atomic physics.
Nice tidbit: Our result and the first muonic result were presented at the same conference... and neither speaker knew what the other would say.
Other nice tidbit: The first try at the muonic measurement didn't work out. When they got it working, they scanned the energy region around the suspected radius value. But they didn't find a resonance. After improving the experiment and double checking everything, they still didn't find anything, until they started looking further away, when suddenly, the resonance appeared. They just didn't scan wide enough the first time!
"Muse is an experiment which will scatter a combined electron and muon beam from protons."
I'm a bit surprised that muon-proton scattering experiments haven't already been done. Is it just that there have been other priorities up to now, or is there something particularly difficult about doing such experiments, compared to doing them with electrons?
The main difference is that electrons are readily available while muons have to be created. I think there are some experiments with high-energy muons, but for the proton radius, you want low energy. PSI creates them by accelerating protons and smashing them into nuclei. This produces pions, which then decay into muons. These beams are then rather wide and harder to handle, as muons decay quite quickly. With higher energies, it is possible to store them long enough in rings; relativistic time dilation helps there.
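To put a number on "decay quite quickly": the mean decay length is gamma * beta * c * tau, which for a low-energy beam is only tens of metres. A quick sketch (standard muon mass and lifetime; the beam momenta are picked arbitrarily):

    // Mean muon decay length L = gamma * beta * c * tau = (p/m) * c * tau
    // for a few beam momenta, to show why low-energy muon beams are awkward.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double m_mu = 0.10566;   // GeV, muon mass
        const double ctau = 658.6;     // m, c times the 2.197 microsecond lifetime
        const double momenta[] = {0.01, 0.1, 1.0, 10.0};   // GeV/c

        for (double p : momenta) {
            double decay_length = (p / m_mu) * ctau;   // gamma*beta = p/m
            std::printf("p = %5.2f GeV/c -> mean decay length ~ %8.1f m\n",
                        p, decay_length);
        }
        return 0;
    }
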
Wow. This is why HN is incredible, I ask a random question and someone with a related PhD answers! Thank you for your time, that was interesting and enlightening.
The article makes this seem like a relatively straightforward experiment in concept and implementation, with strong justification. Why have measurements of this sort not been made already?
Is it because of the difficulty of forming muonic hydrogen or making measurements before the decay?
None of it is easy. I haven't read this particular paper, but I have worked on similar experiments. First, you have to create muons and then isolate them in the particle beam, and this costs quite some money (typically by firing accelerated protons at a target and allowing the pions and other decay products to be dumped in some dumb absorber that the muons pass through). Then you have to capture the muons into orbits around protons that come from hydrogen that needs to be dissociated and ionised. Then you have to pulse the hydrogen with a laser, excite the muon into a new energy state, and accurately measure the energy of the photons that come off in the decay. You almost certainly need subnanosecond accuracy in your timing at this point. Of course, you can never be absolutely certain what you're measuring in any particular event, so you sum over millions of events and average. And you have to simulate the whole thing (typically using a C++ framework called GEANT running on massive processor farms, analysed with a horror-show of an analysis toolkit called ROOT) to see what could go wrong and determine the possible sources of error and uncertainty in your measurement.
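(The "sum over millions of events and average" bit is just the usual 1/sqrt(N) game: the error on the mean shrinks with statistics even when each individual event is poorly measured. A toy sketch with made-up numbers:)

    // Toy Monte Carlo: averaging many smeared measurements. The error on the
    // mean falls like 1/sqrt(N); all numbers here are made up.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::normal_distribution<double> smear(1000.0, 5.0);  // fake "photon energy", fake resolution

        const long N = 1000000;
        double sum = 0.0, sum2 = 0.0;
        for (long i = 0; i < N; ++i) {
            double e = smear(rng);
            sum  += e;
            sum2 += e * e;
        }
        double mean = sum / N;
        double rms  = std::sqrt(sum2 / N - mean * mean);
        std::printf("mean = %.3f +- %.3f  (single-event resolution %.1f)\n",
                    mean, rms / std::sqrt((double)N), rms);
        return 0;
    }
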
To get the money to do this, you have to apply for government grants that may take up to a year to get approved. And no government wants to fund it so you need an 'in' with the lab, because they'll be the ones that sponsor your application for it to have a chance of success, and you'll need to know people from a whole load of other countries to collaborate with and prove your bona fides. And the people writing all these grant applications and doing most of the middle management are distracted by teaching literally hundreds of students every week during term time, so they can't even work full-time on it.
There are other experimental ways of doing this (e.g. measure correlated form factors and parton distribution functions from elastic scattering and extrapolate to zero momentum transfer), but they're all harder. They'll probably be done within the next 30 years. Probably. This particular effort is likely the work of 30-50 PhDs working for five or more years (at 50% of the salary they'd earn working for commercial companies), discounting any of the commercially available technology like electronics developed by CAEN or detector components developed by the likes of ElJen or Hamamatsu. And I would be surprised if they didn't mint another 10-15 PhDs during the effort. And I haven't covered the highly trained technicians and electronic engineers that are usually needed to build a lot of the custom experimental equipment.
I dislike the fact that the library is forever expanding its feature set while a lot of the core code is badly written or buggy (see the method for calculating the angle between two T3vectors, for example). This is crazy when development is not under commercial competitive pressure. I dislike the use of global pointers to control the behaviour of the library and I find the design decisions in particular regarding histograms and the lack of separation between data and presentation bewildering (e.g. a graph is a subtype of a histogram, which should contain it so that it can be plotted. A plot frame is a type of histogram object, which makes no sense that I can understand, etc.). The library makes no use of a lot of core C++ features; it really is C-with-classes, where the use of exceptions could really improve the code base and potential usage. I feel quite strongly that a numeric calculation should not return a number if something went wrong, for example.
I dislike that it has a strong feeling of not-invented-here syndrome where it could include dependencies on well tested code, e.g. the gnu scientific library, but instead rewrites everything. And up until about 6 months ago the default plot styles were damned ugly, although this has gotten a lot better. There seem to be three different math libraries that each implement some subset of mathematical functions independently and a lot of the graphics stuff makes little sense to me (take a look at how to ensure platform-independent type face size in a plot, for example). The CINT interface almost seemed designed to ensure that when students were learning, they learned ROOT and not C or C++, which is great for productivity in the short term, and incredibly sucky in the medium to long term.
I understand that a lot of this is legacy cruft (and even made sense at the time!) and that I could contribute patches for it, but I'm busy doing my actual job. I'm impressed that a lot of my objections are being worked on too - I was heavily dependent on the framework for my PhD about 5 years ago and a lot of my frustration stems from then. It often feels like the ROOT team learn-by-doing; they want to understand something, so they make it in ROOT. Which is a fine way to learn, but not the best way to develop stable code that should be used by thousands and thousands of people. In some senses ROOT is an amazing achievement and I still find myself using it on occasion. But I've now mostly replaced what I use it for with matplotlib and the GSL and my life is easier.
breathes
I also dislike the way that it seems partially to have enabled physics to go in a direction where we pump out PhDs who don't understand code or physics, but act as worker-drones for the large collaborations. But that's not really ROOT's fault, and is a whole 'nother topic.
I second that.
The aim of ROOT is certainly good. But the implementation of that idea, especially the overall design, is absolutely awful. It's understandable: it has grown over a long time and was written by physics experts, not software design experts - people who were used to Fortran and PAW. The actual code implementation is bad but not hopelessly so. The interface is broken, and because so many people are used to it, it will be hard to replace with anything new.
For the Mainz experiment, we have our own code base. Not pretty, but because it's a lot more specialised, it's less confusing. I am working now on OLYMPUS, and we are using ROOT for that. I created a framework for the analysis based on ROOT, trying to hide the most problematic areas and making it easier to use for the other collaboration members - also trying to make them write programs, not ROOT macros. Every time I look up a new feature, I'm surprised that they managed to find a non-standard way of doing it.
My pet peeve? TH1D is a 1-d histogram class. What does TH1D.Clear do?
Wrong! It clears the histogram title and name, not the histogram itself. For that, you need Reset. It makes kind of sense if you know the class hierarchy, TNamed and all. But who remembers that? I saw this mistake in the wild a lot.
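A minimal macro showing the trap (assuming a working ROOT install; run it with root -l clear_vs_reset.C):

    // clear_vs_reset.C -- the Clear()/Reset() trap described above.
    #include "TH1D.h"
    #include <cstdio>

    void clear_vs_reset() {
        TH1D h("h", "my histogram", 10, 0.0, 1.0);
        h.Fill(0.5);
        h.Fill(0.7);

        h.Clear();   // inherited from TNamed: wipes the name and title...
        std::printf("after Clear(): name='%s', entries=%g\n",
                    h.GetName(), h.GetEntries());   // ...the contents are still there

        h.Reset();   // this is what actually empties the histogram
        std::printf("after Reset(): entries=%g\n", h.GetEntries());
    }
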
Protip: Gnuplot. While also a little bit arcane in its command language, it's for me the best tool to produce paper-ready plots. With the tikz terminal you can include it in your Latex flow, and with some Makefile trickery you can have e.g. \cites resolve correctly in plot labels, with the numbering correctly reflecting the position of the plot in the paper!
I don't think ROOT is very common in the atomic community. For experimental control, LabView is quite common, and it seems to have pitfalls too. But from what I know, the experiment itself is very straightforward; there are not so many places errors can hide, especially since they can calibrate/cross-check everything with known H2O lines.
On the electron scattering side, ROOT is more common, but the experiments which are most relevant for the radius either predate ROOT or are known to not work with ROOT. Doesn't mean the software used has no errors, quite the contrary, however since all the results from different measurements with different software agree, I think a programming error is not the culprit.
Sadly, ROOT is, generally speaking, the best part of physics code... If it were me, I would ask that all software used to derive a result be made available. It's far too easy for scientists to make stupid errors in their coding, and it cannot hurt to be open.
The first problem is to produce low-energy muons. There are only a few facilities in the world which can do that. The second problem is getting the muons slow enough to bind with the protons, and making sure the muonic atoms are at the right levels. The third is to get a strong enough signal from the setup.
But there are other problems too. For example, you need to measure the frequency of the light. Since the (quite recent) invention of frequency combs, this is possible with astounding precision. While this experiment is not the most complicated in this regard, it is still a challenge.
Also, even after you have measured the level difference, you need quite a lot of theory to get the radius out, which had to be developed first, as you need to know where to look.
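Schematically, that last step is an inversion of something like dE = A - B * r_p^2 (plus smaller terms), with A and B coming out of those theory calculations. The coefficients and the "measured" value in this sketch are placeholders, purely to show the shape of the extraction:

    // Schematic radius extraction: invert dE = A - B * r_p^2. The values of
    // A, B and dE_measured below are placeholders, NOT the published numbers.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double A = 210.0;            // meV, radius-independent part (placeholder)
        const double B = 5.2;              // meV/fm^2, coefficient of r_p^2 (placeholder)
        const double dE_measured = 206.3;  // meV, placeholder "measured" splitting

        double r_p = std::sqrt((A - dE_measured) / B);
        std::printf("extracted radius ~ %.2f fm\n", r_p);
        return 0;
    }
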
So how is the mystery of observed isotopic decay rate variations (with Earth/Moon/Sun spin/orbit period correlations) going these days?
That's now two solid results indicating that things we thought were constants are not.
What's the bet they are related?