Discovery of the Higgs Boson rumoured to be at 3.5 sigma. (columbia.edu)
76 points by jsmcgd on Dec 10, 2011 | 47 comments



This is Hacker News - do we really need "god particle" added to the headline? It's not present in the article.


Sorry prawn, on reflection I think you're probably right. It's unnecessary.


I thought you were being rude but prawn really is his name.


Well, if we take the 3.5 sigma at face value and bound it using Chebyshev's inequality [1], the probability that this is just a fluctuation is bounded by 1/3.5^2, which is about 8%. In fact, the standard inequality is two-tailed; if we're measuring from just one tail, we can use the single-sided inequality to get 7.5%. Remember, we can't always assume a distribution is normal, a mistake that even respected researchers often make.

[1] http://en.wikipedia.org/wiki/Chebyshevs_inequality
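
For anyone who wants to plug the numbers in, here is a minimal sketch (plain Python, purely illustrative, not taken from any actual analysis) comparing the distribution-free bounds with what you would get if you did assume normality:

    import math

    k = 3.5
    chebyshev_two_sided = 1 / k**2                        # ~8.2%, no assumption about the distribution
    cantelli_one_sided = 1 / (1 + k**2)                   # ~7.5%, one-tailed version of the bound
    normal_one_tail = 0.5 * math.erfc(k / math.sqrt(2))   # ~0.023%, if the distribution really is normal

    print(chebyshev_two_sided, cantelli_one_sided, normal_one_tail)

The gap between the last number and the first two is exactly the point about not assuming normality.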


Hmm, I don't actually think Chebyshev applies (in a meaningful way) in this context. The 3.5 sigma refers to the probability of the observation being significant, not to the average mass of the Higgs. Or did I misunderstand what you meant?

As an aside, there is a typo in the parent's link: http://en.wikipedia.org/wiki/Chebyshev_inequality


3.5 sigmas refers to the probability of some statistic being as severe as they're observing under a model where the Higgs does not exist. Regardless of the distribution, Chebyshev applies solely by assumption that sigma is a meaningful unit. It may be overly conservative though.


Thanks for the correction. That link worked when I checked...

Well, the Chebyshev inequality always applies. It just might not be the tightest bound possible. Unless we know the distribution better, this is the best estimate (the tightest bound that's provably correct given the single assumption about the standard deviation).


OK, now thanks to Wikipedia, I have a vague idea of what a Higgs boson is and why it is important. But could someone explain to me what those sigmas are (in non-Sheldon-Cooper terms)? Is it a measure of the degree of certainty that the particle is actually there?


To expand on what was already a good answer, the Higgs Boson itself is not long-lived enough to interact with any of the ATLAS detectors. It decays into several other particles, which also decay until you're left with a bunch of less exotic particles.

Since a whole bunch of things are happening at once, all shooting particles at your detectors, you get a whole mess of particles, and it's kind of a pain to separate which bunches of particles go together. So what they're saying is that the detectors have been seeing bunches of particles that look like they might have come from decaying Higgs Bosons, but there's still some chance that some other bunch of particles could have been decaying in just the right places to look like decaying Higgs Bosons.


Yes. See http://en.wikipedia.org/wiki/Standard_deviation . In that first graphical plot, the little σ after the number is a lowercase Greek sigma. (The uppercase sigma, Σ, may be more recognizable from its use for summation.) Per the chart about halfway down the page, a 5 sigma result means that there is a 1 in 1,744,278 probability that it has come from pure chance.
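
If you want to reproduce the numbers in that table yourself, here is a minimal sketch (plain Python; it assumes a normal distribution, just as the Wikipedia table does):

    import math

    def two_sided_p(sigma):
        # probability of a fluctuation at least this many sigmas from the mean, in either direction
        return math.erfc(sigma / math.sqrt(2))

    for s in (3, 3.5, 5):
        p = two_sided_p(s)
        print(f"{s} sigma -> p = {p:.3g} (about 1 in {1 / p:,.0f})")

The 5 sigma line comes out to roughly 1 in 1.7 million, matching the figure above.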


But by that same table, at 3 sigma there is still a 99% chance that they have found the damn thing.

Isn't a result usually accepted as statistically significant at the 95% confidence level? If so, why aren't they happy with a result 20 times better than that?


Particle physicists generate data by the petabyte. If they used 3 sigma confidence they'd confidently declare all sorts of false things. We know this, because if you pay attention to particle physics you hear about all kinds of 3 sigmas that turn out to be spurious. (About every six months to a year or so, I'd say.) 5 is just where they start to feel comfortable, they'll take more if they can get it.

http://www.nature.com/news/2011/110119/full/469282a.html


Your answer is good and sufficient. For people who aren't yet getting an intuitive feel: if 99% confidence means you're right 99% of the time (it doesn't quite mean that, but let's go with it), then you'll still be wrong 1% of the time. If 3000 studies are published each year with that cutoff for significance, you should expect about 30 spurious results accepted as true each year. Ouch.


That reasoning neglects the bias towards papers with interesting results, which leads to a much higher rate of false reports in hot journals.


In human genetics, meta-analyses will combine each of the ~million SNPs tested across all the contributing studies, rather than just picking the "candidate" SNPs that came to attention because of their association signal in any particular study. So the file-drawer problem is still relevant, but the risk is greatly mitigated.


Because 3 sigma doesn't mean 99% chance they've found it; it means that, if it doesn't exist, there's a 1% chance of obtaining the results they did. Common statistical misconception.

Consider a situation where they test 1000 hypotheses, only 100 of which are actually true. Of the 900 false hypotheses, 9 will, by chance, achieve a 3 sigma "99% chance of being right" result. And hence something like 8.3% of the hypotheses tested as "true" will be false positives -- not 1%.
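
Spelling out that arithmetic (a toy calculation that optimistically assumes every true hypothesis is also detected):

    true_hypotheses = 100
    false_hypotheses = 900
    false_positive_rate = 0.01                                  # the "99% confidence" cutoff

    false_positives = false_hypotheses * false_positive_rate    # 9
    reported_as_true = true_hypotheses + false_positives        # 109
    print(false_positives / reported_as_true)                   # ~0.083, i.e. ~8.3% of "discoveries" are false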


Particle physicists usually require 5 sigma for a discovery. They've apparently had plenty of 3 sigma results that didn't hold up, so they have high standards. 3 sigma to them now is merely the point where people start to get interested in your result.


Actually it's usually 6.


Are you satisfied with a 1 in 20 chance of complete BS for the foundations of physics?


Biology uses .95 because it's virtually impossible to get better results than that. But it also means that 1 out of every 20 biology papers is completely wrong. And then you have papers based on those....

Physics wants a much greater degree of certainty, and since they have the ability to get it they insist on it.

This http://xkcd.com/882/ sums it up pretty well.

You have to make sure your results are far, far stronger than what you'd expect by chance given the number of experiments you are running. Otherwise you are virtually guaranteed to find some result that will seem right, but isn't.

(For example, a one-in-a-million occurrence, per person per day, would happen about 7,000 times every day across the world's population.)


That "completely wrong" is a bit harsh on biologists.

Firstly, it is a lot harder to repeat experiments in biology than in particle physics, since in physics, as far as we know, all electrons are the same.

Secondly, biologists will, in general, not make bold claims. A paper reporting "a possible link between X and Y" at the 95% level, and stating that further research is needed, is not a lie; the popular press makes it a lie by changing it to "OMG: X CAUSES Y".


I know it's a lot harder to repeat experiments in biology than in particle physics - I said that in my post.

Just because someone uses weasel words ("possible") doesn't make the end result any less wrong: it's still a wrong result.

I'm not blaming them - I understand better results are not possible. But it doesn't change the fact that a tremendous number of results are wrong.

It doesn't help that they often search for very subtle results. "It helps, but only a little." It also doesn't help that everyone responds differently to things. It makes the research very hard.

Anyway, I was just explaining why p95 is not accepted anywhere except biology - biology just doesn't have any other choice. It's not that they prefer such a weak standard.


Different fields have different acceptable levels of significance. Social science and engineering fields generally accept significance at the 95% or 99% level; physics has higher standards.


Because of this: http://xkcd.com/882/


That's not how statistics works. Your statement is the converse of what was shown, not the contrapositive.


I wonder if they can combine the experimental results from the 2.5 sigma and the 3.5 sigma experiments. If two independent experiments confirmed some hypothesis at 2.5 and 3.5 sigma, the combined evidence is surely much larger than 3.5. Does anyone know what the right way of combining experimental results is, to yield a more accurate answer?


Assuming the errors are random and uncorrelated, the total significance is the square root of the sum of the squares. In this case, that's only 4.3 sigma.
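
As a quick sanity check of that figure (a sketch only; real combinations at the LHC use full likelihoods rather than this shortcut):

    import math
    print(math.hypot(2.5, 3.5))   # sqrt(2.5**2 + 3.5**2), about 4.30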


> only 4.3 sigma

I generally treat particle physics as a black box from which magic appears, but my understanding was that 3 is where things start getting interesting, and 5 is where it's considered pretty conclusive, so wouldn't going from 3.5 to 4.3 be a significant jump?


Really depends on the experimental approach, but there are certainly techniques for combining independent results. For example, in genetics, inverse-variance-weighted meta-analysis is common. Studies are essentially combined with each study's effect weighted by how tightly its results cluster around that study's mean result.

I would be interested to learn about common procedures from physics as well.


I was always curious how they measured these things. Does anyone have a good reference that doesn't involve years of particle physics study?


It's a long way down the rabbit hole. There's a series of steps for a measurement at a particle accelerator, and each one is a big area of knowledge in itself:

- Detectors
- Hardware triggers
- Software triggers
- Data storage
- Statistical analysis

I guess Wikipedia can give you an idea: http://en.wikipedia.org/wiki/ATLAS_experiment A common statistical method used in particle physics is the Monte Carlo method: http://en.wikipedia.org/wiki/Monte_Carlo_method
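
To give a flavour of the statistical side, here is a toy Monte Carlo sketch (all numbers are invented for illustration and have nothing to do with the real ATLAS analysis): if the background in some energy bin averages 100 events, how often does it fluctuate up to the ~135 events that would look like a 3.5 sigma excess?

    import math
    import random

    def poisson(mean):
        # Knuth's method; fine for a toy example with a modest mean
        limit, k, p = math.exp(-mean), 0, 1.0
        while p > limit:
            k += 1
            p *= random.random()
        return k - 1

    trials = 100_000
    background_mean, observed = 100, 135
    exceed = sum(poisson(background_mean) >= observed for _ in range(trials))
    print(exceed / trials)   # rough chance of a fake excess this big, in this single bin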


Sure, it's 3.5 sigma in this particular energy band, but is it 3.5 sigma in every energy band?

It might be like expressing amazement that someone in the world won the lottery. There's a difference between someone in particular winning the lottery, and anyone winning the lottery.

Xkcd says it better than me. (hat tip to starwed)

http://xkcd.com/882/


They don't expect it to show up in every energy band - the whole point is that they expect to see a lot of Higgs boson formation at a specific energy level, and anything higher or lower drops to a background level. The sigma value is basically a measure of how many standard deviations off from the background event count the event count for this particular energy level is.


I'm sorry but I don't see how this addresses my point? If they are checking 20 bands for results with p < 0.05, the likelihood is that one will show up, just from fluctuations from background.
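
To put a number on it (a toy calculation assuming 20 independent bands; this is what physicists call the look-elsewhere effect):

    p_single = 0.05      # threshold applied in any one band
    n_bands = 20
    print(1 - (1 - p_single) ** n_bands)   # ~0.64: better-than-even odds of a fake hit somewhere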


Yes, but there's a prior prediction from theory that the Higgs will be within this energy band.


This specific one? What about all the other predictions over the years?


Honest question: what are they gonna do once they find it?


First, a big celebration.

Then after the party, everybody has the hangover that may never end, because finding the Higgs right where we expected it is potentially the worst case scenario: http://www.sciencemag.org/content/315/5819/1657.full


Informative, but strange circular arguments in that article, especially from a scientific process perspective.

There are already a ton of questions that the Higgs cannot answer, and yet if we find the Higgs precisely where predicted, research comes to a stop?

It almost feels like there's a split between theoretical physicists and everyone else, with the theoreticians saying: if you take away our broken toy, you'd better replace it with something we can play with!

Given that Nature already trumped Einstein, his contemporaries, and everyone since, I think we can be fairly secure that we will ALWAYS find something new to learn or discover from practical science.

After all, that's how we used to learn almost exclusively, before scientists went a bit crazy with maths. These days there seem to be many more models out there than there is good science behind them - at least to a layperson like me. :)


The meta-point is more that we need to find a deviation from the current Standard Model. Further confirmations of the Standard Model are in a sense bad news; we want it to come apart so we can examine the pieces. It is true that if we can produce the Higgs we can then study it, but if it then turns out to precisely fit the parameters predicted by the SM, that's bad.

This argument should also be read along with the fact that last I knew, none of the accelerators have been able to turn up anything else particularly interesting either. Some of the supersymmetry theories predicted particles in ranges that we should be able to see (barely) and none of them have appeared. We're down to hoping that there's something else to find in the extra room the LHC will give us at full blast or we really will be up a creek.

"These days there seem to be many more models out there than there is good science behind it - at least to a layperson like me."

And in fact your observation is connected; particle physics has been starved for data and in the interim has come up with all kinds of models, trying to find anything that may have testable consequences. This would go a lot better with some data.


Surely there's still gravity left to explain away at the quantum level?


There are irreconcilable differences between general relativity and quantum mechanics. A lot of thought has gone into the problem, with little success. But even if someone did come up with something concrete, what then? Try to come up with an interesting experiment that combines gravity and quantum mechanics.

I know of only one, and whether it tests anything at all depends on which interpretation of QM you hold. Based on a Geiger counter reading, either place or don't place a heavy weight. Then try to measure its gravitational pull regardless of which you did. You only measure a pull if you actually placed the object. If you believe in the Everett interpretation, this says that gravity, at least to a first-order approximation, splits with the universe. We do not have sensitive enough instruments to measure non-linear differences from GR.

History tells us that theoretical science done in the absence of experiment is unlikely to lead to useful knowledge.


What is this experiment called? Any links to read up on?


Perhaps reading this http://www.hedweb.com/everett/everett.htm

(especially question 7) will help, even though it does not give the name for such an experiment.


I once saw a lecture by Freeman Dyson where he said that if one were to build a graviton detector with the cross-sectional area of the Earth and point it at the sun for the age of the Earth, the number of gravitons one would expect to detect:

Four.

So, whether there's science still to be done and whether we'll ever be capable of building apparatus that can actually test it are two different questions. For instance, what if the next interesting thing post-Higgs Boson happens at energies 1000 times bigger? There's a good chance we'll never build an accelerator that powerful.


We may not need a more powerful accelerator. Remember: cosmic rays contain particles that are much more energetic than anything accelerators have produced so far.

It is possible that we will discover new phenomena and new ways to test gravitational theories once we can observe gravitational waves. I expect we will detect gravitational waves in ~10 years and identify a specific source in ~20 years; sooner, if there are some powerful sources that we have not thought of yet.


As someone who does astrophysics right now, trust me, we want to wring everything we can out of accelerators before we start trying to use cosmic objects to probe the laws of physics.




