Gravity waves from Big Bang detected (scientificamerican.com)
892 points by tjaerv on March 17, 2014 | 171 comments



As an outsider (PhD student in a quantitative field, no relation to physics), the experimental physics community really strikes me as a class act. High standards for statistical significance, vigorously working to rule out mundane explanations before publishing data, outlining which statistical tests will be performed before data is collected...I'm a fan.

"In fact, the researchers were so startled to see such a blaring signal in the data that they held off on publishing it for more than a year, looking for all possible alternative explanations for the pattern they found." That's pretty amazing; as far as I can tell, such caution is less typical in e.g. the brain sciences.


I'm also a fan, and though I agree, the last part of your comment reminded me of this sketch: http://www.youtube.com/watch?v=THNPmhBl-8I


+1 for all things That Mitchell and Webb Look


The popular media really like to grab any neuroscience paper and twist the hell out of it. Talk to most neuroscientists and they are much more conservative in their leaps and jumps... in reality it's moving slowly, but the media want to portray a Ray Kurzweil reading of every finding.


> The popular media really like to grab any neuroscience paper and twist the hell out of it.

This is true, but a specialized case of the equally true:

"The popular media really like to grab any science paper and twist the hell out of it."

Which is just a specialized case of the also equally true:

"The popular media really like to grab anything and twist the hell out of it."


I can believe, however, that neuroscience gets it worse, because it has more obvious applications in the lives of readers/viewers etc.


I think it's a two-way relationship: scientists love a chance to reach out to the media too, even if they're a bit more reserved; you can see it even in their press releases. Publicity helps in getting funding in the life sciences, and for better or worse, it's part of the system.


The problem is you have multiple amplifiers: The researchers provide the loudest version they feel comfortable with in order to get published. The university PR department makes it a little louder to get a little more attention. Then the media makes it louder yet.

Of course, running a signal through that many amplifiers tends to introduce a lot of distortion.


Of course. Would you expect anything less from popular media?


Something that fascinated me about the LHC results (from my armchair) was how the physics community is seamlessly blending rigorous statistical thinking with state of the art machine learning techniques.


Sounds interesting. Do you have any links to information about how they use machine learning?


Just the slides in the two big announcement presentations. I don't have links but I imagine they're up on a CERN webpage somewhere.

ML is used pervasively. At the top level they use Monte Carlo methods to simulate the device as well as to train ML classifiers (boosted decision trees of some flavor) that select which data from the detectors can be safely ignored.
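
For anyone curious, here's a minimal sketch of that kind of pipeline in Python (scikit-learn assumed; the features, cuts, and toy Monte Carlo are entirely made up -- this is not CERN's actual code, just the general shape of the idea):

    # Minimal sketch, not CERN's pipeline: train a boosted-decision-tree
    # classifier on toy Monte Carlo "signal" vs "background" events, then use
    # its score to decide which events to keep. All features are invented.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 20_000
    background = rng.normal(0.0, 1.0, size=(n, 4))   # 4 kinematic-style features
    signal = rng.normal(0.5, 1.0, size=(n, 4))        # slightly shifted population
    X = np.vstack([background, signal])
    y = np.concatenate([np.zeros(n), np.ones(n)])     # 0 = background, 1 = signal

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Boosted decision trees of some flavor": gradient-boosted trees here.
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    bdt.fit(X_train, y_train)

    # Keep only events whose signal score clears a threshold; the rest are
    # the data that "can be safely ignored".
    scores = bdt.predict_proba(X_test)[:, 1]
    kept = X_test[scores > 0.9]
    print(f"kept {len(kept)} of {len(X_test)} test events")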


My Ph.D. thesis was titled "Distributed Machine Learning" and focused on boosted decision trees - this was 1997. OK, I am small beer and I will get lost in any discussion with the smart folk at CERN in about 2 seconds flat, but I feel it necessary to say, whenever I can, that I was astonished when I read that the LHC was using a learned model over boosted decision trees as a noise filter. A large number of trees induced on data will find coincidental outcomes; think about running trials: run 10 and the likelihood of a false positive is low, run 1000 and it's very high! Boost a decision tree and you will bring all sorts of things out of the margins; some of them will even be real. Is the bump in the Higgs signal real? There were 15 events that generated that bump, out of n billion. What are the odds of a boosted tree regularizing the noise into that bin? I would say pretty decent.

But I am an old man, who knows little and does less. I shall return to my beer and vegetables.


>outlining which statistical tests will be performed before data is collected

Not knowing how things are usually run in this field, but I am surprised that these details were actually published. Do you have a link to them?


Yes, that happens to anyone who comes from physics to neuroscience, I guess. There are no laws, and everyone swears by p < 0.05.


Same here. What also fascinates me is the close interaction between the empirical folks and the theoretical ones: there's this understanding that they work in tandem, and the theory guys are eagerly awaiting what the empirical folks have to tell them, and vice versa. It's rarer than you think.


I have similar feelings. Geophysics is not like this.


They made a nice video of the researcher surprising Prof Linde with the news: http://www.youtube.com/watch?v=ZlfIVEy_YOA

The reaction of the couple is great!


Such short news he brought them, too!

"It's 5σ at .2"

His wife, also a theoretical physicist, gives a blank stare, asks "Discovery?", and immediately melts into a hug.

What a great thing, to be able to see the human side, makes you all warm and fuzzy for human progress.


Any chance a non-physicist / non-scientist can understand in a nutshell what it means (5σ at .2)? It has been a long time since college. Is that statistical notation, or something specific to physics? Googling it (or even using symbolhound) didn't give good results for me.


Each experimental outcome has an associated probability that the outcome resulted from chance, not the thing being measured. This arises from the default scientific precept that is called the "null hypothesis", the assumption that a given measurement was produced by chance, not nature.

Scientists use a value designated by p to describe the probability that a result arose by chance rather than design. In the social sciences, a p value of 0.01 - 0.05 is common, which means a result can be explained by chance with a 1% to 5% probability. As one moves through fields of greater rigor and seriousness, the p value required to declare a discovery becomes smaller. In experimental physics, 5σ (five sigma) has become the standard.

In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation). Experimental physics uses a one-tailed 5σ value, which is quite strict -- it has a numerical value of about 3×10^-7. What this means is that an experimental result must be solid enough (and/or be repeated often enough) that the probability that it arose from chance is equal to or less than 5σ.
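
If you want to reproduce that number yourself, here's a quick sketch (Python with scipy assumed; it just evaluates the one-tailed Gaussian tail area described above):

    # Quick check of the quoted figure (assumes the one-tailed Gaussian
    # convention described above).
    from scipy.stats import norm

    for k in (1, 2, 3, 5):
        p_one_tailed = norm.sf(k)   # upper-tail area P(Z > k)
        print(f"{k} sigma -> one-tailed p = {p_one_tailed:.3g}")
    # 5 sigma -> one-tailed p = 2.87e-07, i.e. roughly 3x10^-7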

More here: http://blogs.scientificamerican.com/observations/2012/07/17/...


And here is an entertaining cartoon about p > 0.05

http://xkcd.com/882/


> In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation).

No. The Gaussian (normal distribution) is not involved. Sigma is just the symbol for standard deviation, that is, the square root of the variance. So, for a random variable X whose expectation E[X] exists and is finite (in practice a weak assumption), the standard deviation is the square root of E[(X - E[X])^2].

So, the standard deviation is just a number, a measure of how 'spread out' the distribution is, and not an "area". Then 5σ is just 5 times the standard deviation.
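
A small sketch of that point (Python with numpy assumed): the definition applies directly to decidedly non-Gaussian distributions, no bell curve required:

    # The standard deviation is just sqrt(E[(X - E[X])^2]); here it is
    # computed for two non-Gaussian distributions straight from the definition.
    import numpy as np

    rng = np.random.default_rng(1)
    samples = {
        "exponential (mean 2)": rng.exponential(scale=2.0, size=1_000_000),  # sd = 2
        "Poisson (lambda 4)":   rng.poisson(lam=4.0, size=1_000_000),        # sd = 2
    }
    for name, x in samples.items():
        sd = np.sqrt(np.mean((x - x.mean()) ** 2))
        print(f"{name}: sample sd = {sd:.3f} (theoretical sd = 2)")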


>> In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation).

> No.

Yes. The reason the various sigma values have the numerical values they have is because they represent integrals under the normal distribution, either one-tailed or two-tailed. Therefore in the present context sigma values represent definite integrals of the normal distribution.

> Then 5σ is just 5 times the standard deviation.

Yes, correct, except that the present conversation is about p-values and the meaning of sigma in this specific context. Five sigma has the value it does because it represents one minus the area to the left of five sigma under the normal distribution, i.e. the one-tailed meaning of five sigma.

Link: http://blogs.scientificamerican.com/observations/2012/07/17/...

Quote: "A graph of the normal distribution, showing 3 standard deviations on either side of the mean µ. A five-sigma observation corresponds to data even further from the mean."

Quote: "For particle physics, the sigma used is the standard deviation arising from a normal distribution of data, familiar to us as a bell curve. In a perfect bell curve, 68% of the data is within one standard deviation of the mean, 95% is within two, and so on."

Picture http://blogs.scientificamerican.com/observations/files/2012/...


> Yes. The reason the various sigma values have the numerical values they have is because they represent integrals under the normal distribution, either one-tailed or two-tailed.

No. I defined sigma, that is, standard deviation, fully precisely and correctly. The normal distribution has nothing to do with that definition. And the standard deviation is a number, just a number, just as I defined it as

σ = E[(X - E[X])^2]^(1/2)

which clearly is just a number and not an area.

Or, for random variable X with cumulative distribution F_X, that is, for real number x,

P(X <= x) = F_X(x)

we have, with notation from D. Knuth's TeX, that

σ^2 = \int (x - E[X])^2 dF_X(x)

This integral need not be in the sense of Riemann (i.e., freshman calculus) because dF_X is a measure on the real line; so, the integral is in the sense of measure theory (see any of Rudin, Real and Complex Analysis; Royden, Real Analysis; Halmos, Measure Theory; Loève, Probability Theory).

Sigma is defined for any random variable X or its distribution provided that E[X] exists and is finite. Again, a "normal" or Gaussian assumption is not necessary. So, sigma is defined for discrete distributions, the uniform distribution, the Poisson distribution, the exponential distribution, etc.

For a random variable X, if we don't know its distribution, then we can't say what the numerical value of its standard deviation is.

Moreover, if for some random variable X that has a standard deviation we want to know, say, the probability

P( -5σ <= X <= 5σ)

then that is an area, and we need the distribution of X to find the numerical value.

All that is 100%, completely, totally, absolutely true. That's what σ or standard deviation is.

In particular, the statement

"In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation)."

is flatly false. The field of statistics has no such statement or convention.

Yes, if one wants to be really sloppy, make some assumptions that are not clearly stated, and adopt some conventions for identifying some things in special ways, e.g., that sigma is an area, then one can do so. Maybe some parts of physics do this. I do remember when I was studying physics the prof handed out a little book on how errors were handled in physics. The book was a sloppy mess and one of the reasons I lost respect for accuracy and precision in physics and majored in math instead.

It is true that about 100 years ago some fields of study, especially parts of psychology and much of education, concluded that the Gaussian distribution was some universal law of data handed down by God. Well, God did no such thing. Still, some people in educational statistics believe that student test scores should have a Gaussian distribution and, if the scores do not have such a distribution, will, from many such scores, find the empirical distribution and, then, transform the scores so that the distribution is closely Gaussian.

Maybe physics drank that Kool Aid that all experimental errors of course, as given by God, have a Gaussian distribution, that is "a perfect bell curve", and, then, yes, can get

"68% of the data is within one standard deviation of the mean, 95% is within two, and so on.",

and that σ has a particular numerical value and regard standard deviation as an area. Yes, maybe this is physics but it is very sloppy thinking and not mathematics, probability, statistics, or anything from God.

Of course, even with a Gaussian assumption, standard deviation does not have a particular numerical value. Instead, for a Gaussian random variable X with E[X] = 0, each of

P( -σ <= X <= σ)

P( -2σ <= X <= 2σ)

P( -5σ <= X <= 5σ)

has a particular numerical value.

Sorry 'bout that.


> Maybe physics drank that Kool Aid that all experimental errors of course, as given by God, have a Gaussian distribution, that is "a perfect bell curve", and, then, yes, can get "68% of the data is within one standard deviation of the mean, 95% is within two, and so on.", and that σ has a particular numerical value and regard standard deviation as an area. Yes, maybe this is physics but it is very sloppy thinking

That's very interesting!

What are some practical or pragmatic advantages of your way of thinking compared to the standard (sloppy) way of thinking? I don't mean to put you on the spot; I genuinely would love to know.

For example, what are some statistical problems which the standard (sloppy) mental model would have a tough time solving, but which your mental model would be able to yield tools for?


The OP's argument may be that the normal distribution is often applied to datasets with which it has no innate connection, and that this can only mislead the applier. This is correct -- there are any number of examples where the statistical reasoning behind the normal distribution is wildly inappropriate to the data set being analyzed. The classic example is a dataset with anomalous outliers, say, a cohort in which one person died in childbirth and another lived to the age of 120. Average age: 60.

But the physicists who apply this method to their LHC data, and those who apply it to the newer finding of gravitational waves, know exactly what they're doing. The data being analyzed are entirely appropriate to this method, and the conclusions being drawn are sound.


Mary teaches calculus to 20 students, and Bob also teaches calculus to 15 students. A standardized test is given to all the students, all 35.

So, suppose we assume a null hypothesis that all 35 student scores are independent random variables with the same distribution, that is, (1) Mary and Bob are equally good as teachers and (2) their students are equally well qualified. If the students had been appropriately randomly assigned to Mary and Bob, then maybe we can believe (2) so that only (1) is in question. So, we are going to test (1), that is, that Mary and Bob are equally good as teachers.

This hypothesis, that is, that Mary and Bob are equally good is called a null hypothesis since it assumes that there is no effect between Mary's class and Bob's class (even though Mary is teaching more students than Bob).

So, here is how we do our test: We throw all 35 scores into a pot, stir the pot energetically, pull out 20 in one bowl for Mary and put the other 15 scores into a bowl for Bob. Then we average the scores in each bowl and take the difference in the averages, say, Mary's average minus Bob's average. Then we repeat this many times -- for this we might want to use a computer with a good random number generator. This process is sometimes called 'resampling'. It's also possible to argue that what is going on is a finite group of measure-preserving transformations, which can make what we are doing more intuitive. Physicists like symmetries that result in conservation laws, but here we have symmetries resulting in a hypothesis test.

So, we get the empirical distribution of the differences in the averages.

Then we look at the difference in the actual averages, that is, from the actual students of Mary and Bob. Call this difference X. Now we see where X is in the empirical distribution of differences we found. If X is out in the tails with probability, say, 1%, then either (A) Mary and Bob are equally good as teachers, that is, we accept the null hypothesis, and we have observed something that should happen 1% of the time or less or (B) Mary and Bob are not equally good as teachers and we reject the null hypothesis and conclude that there is a difference in teaching between Mary and Bob.

If the 1% is too small to believe in, then we accept (B) and pop a Champagne cork for the better teacher.

Here we made no assumptions at all about the probability distributions of the scores. So, our hypothesis test does not assume a distribution and is distribution free or, as is sometimes said, non-parametric (because we did not assume a distribution, say, Gaussian with parameters, e.g., mean and variance).

Look, Ma, no standard deviations!

The 1% is the significance level of our hypothesis test and is the probability of rejecting the null hypothesis when it is true and, thus, is the probability of Type I error.
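
Here is a small sketch of that resampling test (Python with numpy assumed; the scores for Mary's 20 students and Bob's 15 are made up for illustration):

    # Permutation ("resampling") version of the Mary-vs-Bob test described
    # above; the scores are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)
    mary = rng.normal(78, 10, size=20)   # Mary's 20 students (made-up scores)
    bob = rng.normal(72, 10, size=15)    # Bob's 15 students (made-up scores)
    observed = mary.mean() - bob.mean()

    pooled = np.concatenate([mary, bob])
    diffs = np.empty(10_000)
    for i in range(diffs.size):
        rng.shuffle(pooled)                            # stir the pot
        diffs[i] = pooled[:20].mean() - pooled[20:].mean()

    # How often does a random split of the pooled scores look at least as
    # extreme as the real split? No distributional assumption anywhere.
    p_value = np.mean(np.abs(diffs) >= abs(observed))
    print(f"observed difference = {observed:.2f}, permutation p-value = {p_value:.4f}")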

Oh, consider a large server farm or network. Suppose we identify 10,000 systems we want to monitor for health and wellness and to detect problems never seen before. Of those 10,000 systems, consider just one. Suppose from this system we receive data 100 times a second on each of 12 numerical variables.

Suppose the server farm is supposed to be fairly stable and we collect such data for, say, 3 months. Call this history data. Maybe the machine learning people would call this training data. Whatever.

Now we can construct a 12-dimensional, distribution-free hypothesis test where the null hypothesis is that the system is healthy, and also select our false alarm rate (probability of Type I error) in small steps over a wide range. So, we have a multi-dimensional, distribution-free hypothesis test. Such tests are rare, but really we now have a large class of them. Yes, we use a group of measure-preserving transformations.

As I recall, back in 1999 there was a paper on such things in Information Sciences. I wouldn't call that paper machine learning, but maybe some people would. Some of what is interesting in the paper is how the heck to know and adjust the false alarm rate, that is, the probability of Type I error.

Again, look, Ma, no standard deviations or Gaussian assumptions.


> Again, look, Ma, no standard deviations or Gaussian assumptions.

You're once again overlooking the context. In a physics context, 5σ has an oft-quoted numerical value that is acquired this way:

http://i.imgur.com/pcjr6cN.gif

Good luck acquiring the universally accepted numerical value without applying a Gaussian distribution as shown.

Remember that this thread began with someone nontechnical asking what the significance of 5σ was to the evaluation of a physics experiment, for which I provided an uncontroversial explanation in that context.


I just wanted to thank you for the detailed writeup. It's very helpful and informative.


> And the standard deviation is a number, just a number, just as I defined it as σ = E[(X - E[X])^2]^(1/2) which clearly is just a number and not an area.

Yes, and if I graph a function in terms of X and Y, where X is a function's argument and Y is the value returned by a particular function, one can argue that X and Y are just placeholders for numbers without any intrinsic meaning by themselves.

If I then say that Y represents the sine of X, surely someone will say, as you have said, "No, not at all, X is just a placeholder for a number, it's not what you say. And Y is just a placeholder for a number, it's not tied to any particular function."

In point of fact, the standard deviation is more than a particular number, it's an idea, and its statistical purpose is met when it's associated with a context in which that idea is expressed. That context is the normal distribution. In the present context, a particular sigma value refers to a specific area under a normal distribution, and in turn, to the probability that a particular result might have arisen by chance.

Without reference to a normal distribution, a standard deviation (a sigma) loses its conventional meaning. Variances are acquired by statistical tests of data sets, standard deviations (sigmas) are acquired from variances, and conclusions are drawn from sigmas only to the degree that they are applied to normal distributions.

Link: http://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule

Quote: "In statistics, the 68–95–99.7 rule, also known as the three-sigma rule or empirical rule, states that nearly all values lie within three standard deviations of the mean in a normal distribution. About 68.27% of the values lie within one standard deviation of the mean. Similarly, about 95.45% of the values lie within two standard deviations of the mean. Nearly all (99.73%) of the values lie within three standard deviations of the mean."

According to your thesis, the above claim is obvious nonsense, because in point of fact, sigma has no association with the normal distribution. But you know what? Wikipedia can be edited by anyone, and you can correct this egregious error today, if you like. Correct this widespread erroneous thinking -- fix these errors, everywhere you find them.

Let's perform a little test. Let's submit "5 sigma" to Wolfram Alpha and see whether it makes the same mistake you say I have been making, and that the above linked article is making:

------------------------------------------------------

Link: https://www.wolframalpha.com/input/?i=5+sigma

Quote:

    Input Interpretation: 5 sigma (standard deviations)
    z-score : 5
    Probabilities:
        z ≤ 5 (left-tailed p-value) | 1-2.867×10^-7
        z ≥ 5 (right-tailed p-value) | 2.867×10^-7
        abs(z) ≥ 5 (two-tailed p-value) | 5.733×10^-7
        abs(z) ≤ 5 (confidence level) | 1-5.733×10^-7
    Associated two-sided confidence level: 100-5.733×10^-5%
------------------------------------------------------

So it seems that, without prompting, Wolfram Alpha draws the same conclusion that Wikipedia does, and every other online reference does when confronted by terms such as "standard deviation" or "sigma" -- that, without providing a specific context, the default context is that sigmas refer to positions on a standard distribution and have statistical meanings associated with that default assumption.

> Of course, even with a Gaussian assumption, standard deviation does not have a particular numerical value.

When a scientist uses the term "5 sigma", she is in most cases using a context of a normalized normal distribution, one that follows the 68-95-99.7 rule described above. Therefore yes, 5 sigma does lead to a particular value, by being applied to a definite integral of a normalized normal distribution that uses it as an argument. By this reasoning, 5 sigma means:

(Sage notation) integrate(e^(-x^2/2),x,a,b) / sqrt(2 * pi)

Picture: http://i.imgur.com/YhC302b.gif

If I try to compute the value used in physics, the result for 5 sigma, I can use this:

(Sage notation) N(integrate(e^(-x^2/2),x,5,infinity) / sqrt(2 * pi))

With this result: 2.86651571870318e-7

Which is correct, and is the expected one-tailed value for 5 sigma, even though according to your argument, 5 sigma has no connection to normal distributions.

> Sorry 'bout that.

What, for your specious argument? No problem, it comes with the territory. But surely you realize I have more than answered your objection.


> That context is the normal distribution.

No. Here is an important use of standard deviation not related at all to the Gaussian distribution: the set of all real-valued random variables that have a finite standard deviation forms a Hilbert space. The crucial part of the argument is completeness. Of course we like to use Hilbert space for projections and converging sequences, so it's super nice that those random variables form a Hilbert space.


>> That context is the normal distribution.

> No. Here is an important use of standard deviation not related at all to the Gaussian distribution ...

Before you troll through the world's imaginary problems again, I am going to ask you one more time to remember how this conversation got started (the context), and why I answered as I did.

Someone nontechnical asked what the significance of 5σ was in the context of a physics experiment. I replied by saying that 5σ was mapped to a p-value this way:

http://i.imgur.com/pcjr6cN.gif

And the p-value was the point, leading to a discussion of the high level of discipline in experimental physics compared, say, to the social sciences, which accept p-values of .01-.05, values I included in my original reply for comparison.

The context is the normal distribution. Wake up and smell the Cappuccino.


Maybe this will clear it up for you: the normal distribution assumption the physicists are making is anywhere from shaky down to sloppy down to silly down to absurd. Making this assumption for experimental data, e.g., the LHC data or the gravitational wave data, where they are hanging on by their fingernails anyway, is close to treating the normal distribution as some gift from God.

And, if some field makes a normal, or mean 0, variance 1 normal, assumption, then they should say so and, hopefully, justify that assumption.

Your examples from Wolfram and Wikipedia show that a lot of people have guzzled that old Kool Aid on the normal distribution. A few years ago I had a date with a high school teacher, and she assumed that test scores are normal. My father had an MS in education and my brother was a ugrad psych major, so, when my brother's psych material got to statistics, Dad taught him about the normal distribution. It appears that around 1900 and then for 80 years or so, and still in some parts of some fields, a normal assumption, without mention or justification, was common and remains in what you call the "context". "Context" or not, common or not, popular or not, it's still an assumption, needs mention and justification, and usually is not at all well justified or justifiable.

Sure, with their normal assumption and, say, five sigma, experimental physicists get particular probabilities of Type I error, so called p values. No question here.

The issue is the normal assumption.

There are places where a normal assumption is quite solid. The most common justification is from the central limit theorem. So, take random variables X(1), X(2), ... and assume that they are independent and all have the same distribution with finite standard deviation. Then the central limit theorem says essentially that as positive integer n grows to infinity the distribution of the sum

X(1) + ... + X(n)

will converge to normal (strictly, one should center the sum at n times E[X(1)] and divide by σ times the square root of n).

So, where might we get some such

X(1) + ... + X(n)?

Sure, from Brownian motion, where there are many little bumps and the bumps come close enough to satisfying the hypotheses. So we can use this also in thermodynamics.
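
A quick numerical illustration of that argument (Python with numpy/scipy assumed): sums of many independent exponential variables, which are decidedly non-Gaussian, look close to normal once centered and scaled:

    # Sums of n iid exponential(1) variables (mean 1, variance 1), centered
    # and scaled, compared against the standard normal.
    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(7)
    n, trials = 1_000, 10_000
    sums = rng.exponential(1.0, size=(trials, n)).sum(axis=1)
    z = (sums - n) / np.sqrt(n)          # center at n*E[X], divide by sigma*sqrt(n)

    stat, p = kstest(z, "norm")          # small statistic -> close to N(0, 1)
    print(f"KS statistic vs N(0,1): {stat:.4f}")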

So, there are places where a normal assumption is justified.

But just making a normal assumption for essentially all experimental errors as needed for the accepted five sigma criterion is close to a weird religion and not flattering to a modern science.

What physics is doing with their five sigma criterion is a statistical hypothesis test. My example of a distribution-free hypothesis test via resampling is a better-justified, more conservative, and more robust way to do a hypothesis test. Physicists might consider using such tests.

Such distribution-free statistical methods form a large field.

My example of the role of Hilbert space is quite practical.

If you want to explain to the common man in the street why physics likes its five sigma criterion, then, sure, you need the normal assumption. But then you should mention that physics is making a normal assumption there.

That way you might not confuse the common man in the street with a claim that, in making this normal assumption, physics is close to drinking some swill of boiled tails of rats and bats.

But with the now-high interest in computer science in big data, machine learning, etc., I was assuming that the HN audience could and should hear the real stuff -- that in what physics is doing, there's a normal (Gaussian) assumption in there.


Are you sure? I'm pretty confident that in this case the assumption is that the distribution is normal. With no knowledge of the shape of the distribution, the 5 sigma bound is extremely weak (the one given by Chebyshev's inequality [0] is, in this case, ~4%). Prove me wrong though, as I'm not a physicist or mathematician.

[0] http://en.wikipedia.org/wiki/Chebyshev's_inequality#Sharpnes...
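
A small sketch of that comparison (Python with scipy assumed): the distribution-free Chebyshev bound at 5 sigma versus the Gaussian tail:

    # Distribution-free bound vs Gaussian tail at 5 sigma.
    from scipy.stats import norm

    k = 5
    chebyshev_bound = 1 / k**2            # P(|X - mu| >= k*sigma) <= 1/k^2 = 4%
    gaussian_two_tailed = 2 * norm.sf(k)  # same event under a normal assumption

    print(f"Chebyshev bound at {k} sigma:   {chebyshev_bound:.2%}")
    print(f"Gaussian two-tailed at {k} sigma: {gaussian_two_tailed:.3g}")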


You're correct. Here's a short article that makes the point that what's being discussed are areas of the normal distribution:

http://blogs.scientificamerican.com/observations/2012/07/17/...


No, I think you're wrong. See my comment above. Note that nowhere in the article you link does it say there's an assumption of normality. Doubtless the journalist or whoever chose the picture of the normal distribution was a bit confused also.

Edit: oh, no, actually I think you're right :). I thought it was referring to the location of the test statistic in its null distribution. But it seems it's a scale for measuring p-values. This explains it clearly:

What does "five sigma" mean? It means that the results would occur by chance alone as rarely as a value sampled from a Gaussian distribution would be five standard deviations from the mean.

http://www.graphpad.com/www/data-analysis-resource-center/bl...


All that aside, remember that the context is a nontechnical person asking what 5σ means in statistical analysis in physics. That greatly narrows the possible interpretations.

By the way, this same conversation took place after the LHC Higgs announcement -- the same five-sigma standard for discovery, and the same detailed discussion of what that means.

> But it seems it's a scale for measuring p-values.

Yes, and that was a point I made in my reply to the OP -- that the context assumed an association with p-values, which in turn assume a normal distribution.


Sorry, I'm either not understanding or I still am not convinced you are correct. p-values don't assume a normal distribution. A p-value is the probability of generating an equal or more extreme value of your test statistic under the null hypothesis. Normal distributions are nowhere in sight.

The only way that the Normal distribution comes into play is that physicists are measuring the smallness of their p-values by stating a number of standard deviations from the Normal mean that would have the same p-value.
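
That mapping is easy to make concrete (Python with scipy assumed; the helper function below is hypothetical, just inverting the Gaussian tail):

    # Map a p-value (computed however you like) onto the "n sigma" scale by
    # asking how far out in a standard normal's tail the same probability sits.
    from scipy.stats import norm

    def p_value_to_sigma(p, one_tailed=True):
        """Hypothetical helper: p-value -> equivalent number of sigmas."""
        return norm.isf(p) if one_tailed else norm.isf(p / 2)

    print(p_value_to_sigma(2.87e-7))   # ~5.0  -> "a five-sigma result"
    print(p_value_to_sigma(0.05))      # ~1.64 -> a typical social-science cutoff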

I'm finding this discussion helpful by the way. I have worked in applied statistics but not in any fields which use this "sigma" scale or would talk about "sigma values".


> p-values don't assume a normal distribution.

No, not unless a p-value is expressed in terms of sigma as in this case and similar ones. In this case, and commonly in experimental physics, there's a relationship between n-sigma (usually 3σ or 5σ in different circumstances) and how a p-value is acquired from a sigma expression. The p-value is acquired from a sigma value like this:

http://i.imgur.com/pcjr6cN.gif

My point? In experimental physics there's a connection between (a) an expression including an integer and "sigma", (b) a resulting, widely quoted numerical value, and (c) the method for converting one to the other, using a Gaussian distribution as shown.

http://physicsbuzz.physicscentral.com/2012/07/does-5-sigma-d...

Quote: "But what does a 5-sigma result mean, and why do particle physicists use this as a benchmark for discoveries?

To answer these questions, we'll have to look at one of the statistician's oldest friends and C-student's worst enemies: the normal distribution or bell curve."

Couldn't have said it better myself.

> The only way that the Normal distribution comes into play is that physicists are measuring the smallness of their p-values by stating a number of standard deviations from the Normal mean that would have the same p-value.

Hmm. Yes, that's right. That's why I replied as I did in my original post.


It's correct that a normal distribution is assumed in physics, but you are not correct that a standard deviation is defined in terms of a normal distribution.


The present conversation got started by someone non-technical asking what 5 sigma meant in a physics context. Within that context, and with the intent to avoid exotic digressions, my reply is correct.

In physics, a sigma value maps to a p-value, and that relationship is most often defined with respect to a normal distribution. Therefore, in most cases, to go from a sigma value to a p-value, one performs this integral:

http://i.imgur.com/YhC302b.gif

Specifically, the above definite integral, when performed with arguments of 5 and +oo, yields the often-quoted one-tailed p-value for "5 sigma", which we can get here as well:

https://www.wolframalpha.com/input/?i=5+sigma

It seems Wolfram Alpha makes the same default assumption I do: a normal distribution. If I weren't answering an inquiry from someone who wanted the clearest possible answer, I might have replied differently.


Nit: I don't think the physicists are assuming a Gaussian distribution. See my reply in this thread https://news.ycombinator.com/item?id=7422688


> Are you sure?

I'm absolutely correct in what I said.

But it appears that you are correct that that part of physics makes some Gaussian assumptions and, then, has some conventions based on those assumptions. In this case, apparently physics is not making its mathematical assumptions clear and explicit and is doing sloppy writing. For more detail, see my longer explanation in this thread

https://news.ycombinator.com/item?id=7421505

Maybe the situation is a little like getting from a little French restaurant the recipe for French salad dressing, sauce vinaigrette, making it at home, and concluding it tasted better in the little French restaurant. Hmm .... But the French restaurant did something not in the recipe -- took a large clove of garlic, peeled it, cut it in half, and wiped the salad bowl with the cut surface of the garlic! The recipe didn't mention that!


I don't think those physicists are making any Gaussian assumptions, right? They're computing a p-value as usual (how often would my test statistic, generated under the null, be more extreme than the observed one), and then using the Gaussian distribution as a scale to communicate how small their p-value is.


They are definitely making a Gaussian assumption, and wouldn't feel the need to clarify that as it's a widely accepted convention in the field. In your restaurant analogy, it's like you overheard two chefs talking.


He's correct. The poster he was responding to made an idiotic statement:

"In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation)."

He was just stating that the standard deviation is a general concept of spread, that any distribution has, and not just the normal one.


> The poster he was responding to made an idiotic statement:

> "In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation)."

Not only is that not idiotic, that's the default definition in a statistical context (see below). Anyone can argue that σ is just another Greek letter with no special significance, but that requires one to ignore the context in which the term is used.

Link: http://en.wikipedia.org/wiki/Standard_deviation

Quote: "In statistics and probability theory, the standard deviation (SD) (represented by the Greek letter sigma, σ) shows how much variation or dispersion from the average exists."

> He was just stating that the standard deviation is a general concept of spread, that any distribution has, and not just the normal one.

Again, this disregards context. When nontechnical people ask what the significance of 5σ is to scientific statistical analysis in physics (which is how this thread got started), there is precisely one answer.


* "In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations* that's the default definition in a statistical context

No, you're quite wrong about that. greycat is correct in his/her corrections of what you're saying. In statistics sigma is used to represent one of two things:

- a parameter of a probability distribution, typically one which influences the spread of the distribution

- a measure of dispersion in an actual data set, which may be an estimator of a parameter in a probability distribution

Neither of those things necessarily involves the Gaussian density.

In physics it seems that "sigma values" are used as a scale to measure p-values, so that instead of saying 0.0000003, they can just say 5σ. But the critical point here, which your comments seem to be missing, is that there is no distributional assumption being made; there is no implication that the Normal distribution describes any data-generating process, merely that the probability of an equal or more extreme value of a test statistic under some model is the same as the probability of observing a value more than 5 standard deviations from the mean under a Gaussian model.


>> * "In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations* that's the default definition in a statistical context

> No, you're quite wrong about that.

It is the default, actually. There are plenty of exceptions to the default, but it certainly is common in the context of experimental physics, the present context.

http://physicsbuzz.physicscentral.com/2012/07/does-5-sigma-d...

Quote: "But what does a 5-sigma result mean, and why do particle physicists use this as a benchmark for discoveries?

To answer these questions, we'll have to look at one of the statistician's oldest friends and C-student's worst enemies: the normal distribution or bell curve."

> ... your comments seem to be missing, is that there is no distributional assumption being made ...

Do read some experimental physics -- see what assumptions are made. Here is how a physicist maps a sigma value to a p-value:

http://i.imgur.com/pcjr6cN.gif

If this wasn't a discussion of the analysis of the outcome of a physics experiment, I would be more likely to accept these digressions.


> Do read some experimental physics -- see what assumptions are made.

lutusp, the way in which you are using the word "assumption" carries a very high risk that people will misunderstand you. The critical point here is that the physicists are nowhere using the Normal distribution as a modeling assumption. They are not suggesting that the Normal distribution is a reasonable model for any real data-generating process in their problem domain. They are simply using it as a scale, like Celsius or Fahrenheit. There's a crucial philosophical distinction there that, even if you get it, your readers will not.


You disproved your own point.

> In statistics, σ refers to an area under the normal distribution defined in terms of standard deviations (1σ = 1 standard deviation).

> In statistics and probability theory, the standard deviation (SD) (represented by the Greek letter sigma, σ) shows how much variation or dispersion from the average exists.

I think you're missing the difference.


If the unit normal distribution is the distribution under discussion, there's no difference. In that case they become the same. Here's how a physicist computes a p-value given a sigma value:

http://i.imgur.com/pcjr6cN.gif

Note that the unit normal distribution is the underlying context.


> The probability that it arose from chance is equal to or less than 5σ

That's not what p value means. The p value is the probability that a high-variance random effect (centered around an average behavior of "nothing interesting") would yield a result as extreme as the observation, assuming that random distribution. You need Bayes' theorem and highly subjective assumptions if you want to derive a posterior probability that an observed result was drawn by chance from a sample with a boring/interesting mean.


>> The probability that it arose from chance is equal to or less than 5σ

> That's not what p value means.

Yes, I know. Verbal shorthand and some associated risk of being misinterpreted.


http://en.wikipedia.org/wiki/Standard_deviation#cite_ref-4

"Particle physics uses a standard of "5 sigma" for the declaration of a discovery. At five-sigma there is only one chance in nearly two million that a random fluctuation would yield the result. This level of certainty prompted the announcement that a particle consistent with the Higgs boson has been discovered in two independent experiments at CERN."


r is a model parameter that affects the strength of gravity waves. r=0.2 was much higher than expected. See http://www.reddit.com/r/science/comments/20mrz4/cosmic_infla...


https://en.wikipedia.org/wiki/Uncertainty#Measurements

"When the uncertainty represents the standard error of the measurement, then about 68.2% of the time, the true value of the measured quantity falls within the stated uncertainty range. For example, it is likely that for 31.8% of the atomic mass values given on the list of elements by atomic mass, the true value lies outside of the stated range. If the width of the interval is doubled, then probably only 4.6% of the true values lie outside the doubled interval, and if the width is tripled, probably only 0.3% lie outside. These values follow from the properties of the normal distribution, and they apply only if the measurement process produces normally distributed errors. In that case, the quoted standard errors are easily converted to 68.3% ("one sigma"), 95.4% ("two sigma"), or 99.7% ("three sigma") confidence intervals."


They measured the tensor-to-scalar ratio of the inflation field, r, to be nonzero at 5 sigma significance. The measured value is 0.2.


I love the humility and skepticism of belief that Linde professes at the end. Despite a deep desire to believe in the beautiful, he is well aware of how you can be seduced into believing things because you want to, not because the universe is that way.


There's a Ben Franklin quote: "Believe none of what you hear and half of what you see." Seems like Linde subscribes to that thought, as all scientifically-minded people do.


This is what separates the scientists from ... others.


We can only wish that were true; we are all just humans in the end:

http://www.economist.com/news/briefing/21588057-scientists-t...

Also, I vaguely remember a study in 'some journal' that looked at all of its p-values[0] and found an alarming number of them parked right at the limit of acceptance (.05), more than chance alone would predict. If anyone remembers this study and can provide an actual reference (not my terrible memory), I would be very thankful.

[0]https://en.wikipedia.org/wiki/P-value


I don't think he's commenting on statistical analysis, but rather saying that a scientist should follow the clues wherever they may take them, instead of looking for the clues that will get them where they want to be. And of course they fuck up, too.


http://en.m.wikipedia.org/wiki/John_P._A._Ioannidis

"Most published findings are probably false."


I can't claim to understand much of the statistics, but according to [1], p-values are pretty sketchy as a measure of "is this likely to be true?"

[1] https://www.youtube.com/watch?v=ez4DgdurRPg


Or rather, the scientific method from ... other ways of "knowing"


I like how he thought it was a delivery guy for something he ordered when he opened the door.

Yep, it was a delivery alright, of one order of Nobel Prize :)


Linde: "Can can can can can you repeat it?"

It was very touching.


I love that he was so excited to tell them, he just blurted it out as soon as they opened the door.


This is the type of moment every person should aspire to have with a parent or mentor at least once in their life.


Well that just made my day.


That is an awesome video. I would love to see this on TV news even for a moment - just validating the long pursuit of scientific knowledge.


Can you (or someone) explain what he says right before the lady hugs him and Prof Linde asks to repeat to him?

EDIT: Nevermind, it's in the article.


The Guardian has a nice and simple article explaining gravity, gravitational waves, and this detection:

http://www.theguardian.com/science/2014/mar/17/gravitational...


Here's an excellent explanation on cosmic inflation from Sean Carroll:

http://www.preposterousuniverse.com/blog/2014/03/16/gravitat...


...which seems to be unreachable for me at the moment. I'll have to check it out later. In the meantime...

Cosmic inflation has always bothered me. I just don't get it. I'm not a physicist at all, though. I get why it's postulated. (The uniformity of the universe requires it to all be in close proximity and the time scales don't have enough time for that part, IIUC.)

But the expansion of inflation is significantly faster than the speed of light. What's up with that? I know, spacetime is expanding rather than things moving within spacetime, but still, during inflation, this particle here is watching that particle there recede at >> C. I doan geddit.

(Grump.)


The way I understand it is that the problem is with the definition of velocity. If you want to measure speed, you have to measure distance, which eventually comes down to the magnitude of the vector between two points - subtract the X's, subtract the Y's, subtract the Z's. Problem is, at cosmic scales, expansion means that those two points aren't in the same reference frame, because more space has appeared between them. You can try to measure it, but by the time you do, the axes themselves have stretched again. The two X's can't be subtracted directly anymore, because they're in different reference frames. You get a result of >> C only by assuming that the relevant bits of space are a relatively well-behaved Euclidean R^3, but that's not the case.


> this particle here is watching that particle there recede at >> C

Technically speaking, the particle can't really observe the other particle moving away from it faster than C since the light (or any other signal from the particle) will never be able to reach the observing particle. It would just see it disappear.


Current models state that the Universe is still expanding "faster than the speed of light" (that is, fitting your explanation) if you look far enough. (Which, of course, you can't do, because light from there is redshifted to zero.)


So just to clarify, this is only the measurement of an artifact most likely caused by gravity waves during the period of inflation. We still have not directly measured gravity waves in our current universe, right? I think the fact that gravity can't be measured is a subtle clue about one piece of the puzzle for a unifying theory of everything.


I find this concept of "directly measuring" things confusing. My theoretical knowledge is very weak though, maybe someone can explain.

It seems to me like we never measure things _directly_. For example: To measure the temperature of something in everyday life, we use a tiny glass cylinder filled with some kind of liquid. The liquid expands or contracts, roughly linearly, because of the temperature exchange with its surroundings. We then compare the current level of the liquid to a little ruler inscribed in the cylinder, maybe the markings form a shape similar to "100°C". The photons that bounce off this little ruler into our eyes causes impulses in some neurons and so on and our brains compare the shape "100°C" to yet another reference point, boiling water. It's hardly direct¤, in any sense of the word.

If we measure gravitational waves by observing "ripples in background radiation"¤¤, isn't that kind of the same thing? I've seen several people here mention that it's not "measured directly" - does it mean something else in this case?

¤ If you stick your hand into the boiling water to measure its temperature, it's a bit more direct but not as accurate.

¤¤ I'm just a programmer, this is kind of how I understood it. :D


While the real implementation is always murky, what physicists usually mean by "direct" observation is that the value of the quantity being observed does not depend on the type of theory (or hypothesis) used to measure the quantity, or that the theory being used is universally accepted.

For example, temperature is defined through the rate of change of entropy of a system as its energy changes (strictly, its reciprocal: 1/T = dS/dE), while entropy itself is defined by a formula dependent on the number of microstates. If someone measures this rate directly, it would be called a direct measurement. In practice, though, you would rely on some derived phenomenon, like the expansion of mercury. If you understand the expansion of mercury on some other independent theoretical grounds, then after some rigor, the use of mercury can also be taken as a direct measurement of temperature.

In the current case, what we would really like to observe is the change in space-time as a gravitational wave propagates, just as you would want to see ripples in water to confirm a water wave. There are experiments that are trying to do exactly that (LIGO, for example), but you can also measure something which is a derived phenomenon: the polarization of photons of the cosmic microwave background (CMB) radiation. There are good theories connecting why gravitational waves would produce such an effect in the CMB, so this can be a tool for indirect observation.

Again, this does boil down to usage and to what the community considers direct vs. indirect observation, but as a rule of thumb, experiments which measure a quantity from the definition of the quantity itself are considered direct; they are termed indirect otherwise.


> I've seen several people here mention that it's not "measured directly" - does it mean something else in this case?

As far as I can see from the article (couldn't find the paper detailing the experiment), they observed an artifact that could have been caused by primordial gravitational waves. This is different from using a thermometer to measure temperature because we're not sure yet whether that is the only possibility (or our instruments are not sensitive/too sensitive, it's a different mechanism of inflation etc.), it's the best explanation, but we're not extremely sure yet. (Although as I say this it occurred to me that it's probably much more certain than before because of the discovery.)

The key takeaway here is that the inflationary model is almost certainly the right one and gravitational waves are almost certainly real.

(This is similar to the first Higgs Boson announcement, where we were almost certain it was the Higgs, but not sure enough to claim discovery; because we hadn't determined all the properties of the observed particle)


There's a good philosophy of science argument to be made that there's no precise and discrete distinction between direct and indirect measurement. In our model of the universe, there are always multiple physical steps that link the phenomena under investigation to our conscious perception. Therefore, any conclusions we draw from a perception are conditional on our confidence in the entire causal chain performing reliably (e.g. a gravitational wave induces a B-mode in the CMB, which propagates as a photon to our detectors, which heats up a transition-edge sensor, which increases the resistivity of the circuit, which flips a bit in the flash memory, which is read out to a monitor, which emits photons to our eye, which change the nerves firing in our brain). "Direct" measurements, then, are just ones that rely on a small number of reliable inferences, while "indirect" measurements rely on a large number of less reliable inferences.

Nonetheless, in practice there is a rather clear distinction which declares "direct" measurements to be those that take place locally (in space) using well-characterized equipment that we can (importantly) manipulate, and which is conditional only on physical laws which are very strongly established. All other measurements are called "indirect", generally because they are observational (i.e. no manipulation of the experimental parameters), are conditional on tenuous ideas (i.e. naturalness arguments as indirect evidence for supersymmetry), and/or involve intermediary systems that are not well understood (e.g. galactic dynamics).

The classic example is dark matter detection. A detector built in your laboratory that produces clear evidence of a local interaction between the dark matter partice and the atoms composing the detector would be "direct detection". Seeing an anomalous excess of gamma rays from the center of the galaxy whose energy and distribution is consistent with some theories which predict dark matter annihilation would be "indirect detection".

Naturally, direct measurements have a much larger impact on your Bayesian credences than indirect ones. If someone says "I don't trust that indirect measurement" they mean "one or more steps in the inference chain which connects the phenomena to our perceptions is unreliable".

EDIT: Oh, it's worth replying more directly (ha!) to your comment by noting that both pulsar slow-downs

https://en.wikipedia.org/wiki/Gravitational_wave#Using_pulsa...

and the CMB measurements by BICEP2 are indisputably indirect. Gravitational wave detectors

https://en.wikipedia.org/wiki/Gravitational_wave_detector

like the LISA proposal

https://en.wikipedia.org/wiki/Laser_Interferometer_Space_Ant...

are direct.


That's correct: it's an indirect observation, and really the "observing gravity waves" aspect of this is the least interesting part. There have been other indirect but compelling observations corresponding to gravity waves before (like the precise rate that pulsars' rotation slows down, which I'm told is a perfect match to the energy expected to be carried away by gravitational waves). The big deal here is the insight that it gives into the early universe (VERY early!) and into very high energy physics.

As for the lack of direct detection of gravity waves, I wouldn't read too much into it. Gravity is very weak, and we've known that for most of the century. Figuring out what sources might produce strong enough waves to be observable is really tricky, not because we don't understand gravity but because we don't understand the complicated astrophysical processes involved at the level of precision that we'd need (e.g. merging binary stars: how does that play out in detail?).


We've indirectly measured gravitational waves before. Folks won the 1993 Nobel prize in physics for the observations. The "Hulse-Taylor" binary star system is a pair of neutron stars orbiting very close to each other (the pair of neutron stars orbiting each other would fit inside of our own Sun). Because the stars are pulsars it's possible to measure the orbit with extremely high precision over time. One prediction of the theory of gravitational waves is that the emission of gravitational waves actually removes small amounts of energy from an orbital system over time (like a drag force). But normally these amounts are so small as to be inconsequential even over extremely long periods, but with very dense, very massive, very closely orbiting objects the amount is much higher. And the result is that it causes the orbits to degrade over time, as the stars slowly spiral toward each other. This is a very consistent effect.

Here's a graph showing the prediction of the existence of gravitational waves relative to the observations of the change of period of the Hulse-Taylor binary: http://en.wikipedia.org/wiki/File:PSR_B1913%2B16_period_shif... The tiny horizontal lines on the red dots are the error bars of the observations. You can see how perfectly the observations line up with theory.

Also, it's likely that gravitational waves will be observed directly within the next several years. There have been experiments attempting to do so but none of them have been sensitive enough to detect the most likely of the strongest signals of gravitational radiation. The improvements to those experiments that will come online within the next decade or so should be able to do so however.


>We've indirectly measured gravitational waves before. //

This makes the terminology more opaque to me.

We measured quite directly the arrival times of the pulses of electromagnetic radiation. The accepted model, without the effect of gravitational waves, would be expected to produce a non-decaying measure of the period of the pulses, which the model states to be the period of periapsis. It's the attribution of that decay to the effects of gravity waves in that system that is most tentative, surely?

>And the result is that it causes the orbits to degrade over time, as the stars slowly spiral toward each other. This is a very consistent effect. //

You say it's a very consistent effect - how many different [binary neutron/pulsar] systems have been measured?


Thank you for the informative post. It led me to the announcement of the 1993 Nobel prize, which has additional information for those interested.

http://www.nobelprize.org/nobel_prizes/physics/laureates/199...


I agree. For the sake of a future outlook ;) I'd just add:

"can't be measured" with the current combination of theoretical and hardware apparatus...


Gravity can pretty clearly be measured. I'll assume you meant gravity waves in particular, but even then we have no reason to believe they can't be measured (only that we haven't done it yet).


From the horse's mouth: http://bicepkeck.org/


Nothing bugs me more than when a supposedly scientific magazine uses thumbnails of important images without actually linking to the full size. I just wanted to see the black lines in the image the article refers to.



The paper and all the figures are located here: http://bicepkeck.org/


> The finding is direct proof of the theory of inflation, the idea that the universe expanded extremely quickly in the first fraction of a second after it was born.

Small nitpick, but wouldn't the use of the words "evidence for" instead of "proof of" have been better? Not that I am in any way trying to take anything away from the discovery. Just from a science perspective, the word "proof" has always bugged me.


If you're not talking math (which includes logic), proof can never mean what you want it to mean. So why get upset about it being used differently outside of mathematical context?


It's kind of a pet peeve of mine as well, having to switch between 'proof' meaning a conclusion that necessarily follows from premises, and 'proof' meaning "really strong evidence".

But, as harshreality was getting at, if we used 'proof' that strictly, nothing outside of pure math and logic would be a 'proof'.


> But, as harshreality was getting at, if we used 'proof' that strictly, nothing outside of pure math and logic would be a 'proof'.

True that. But isn't that an important part of the scientific method (at least in the karl-popper-scientific-method sense), and part of the point of science, really? Strictly speaking, you can't prove anything using the scientific method; only 'falsify' it (hence Popper's 'falsificationism', 'science as falsification', etc.) To 'kinda-sorta-prove' something in science, you formulate a null hypothesis, and then attempt to falsify it. But strictly speaking, one is not able to 'prove' anything (only provide weak/strong evidence for/against something.)


Well, then, don't use the word "proof" outside pure math and logic. Everyone agrees that "proof" means to render something beyond (reasonable) doubt, so it's not a problem of ambiguous definition, just incorrect usage.


I'm very curious, if we now presume this to be true, whether and how that may affect the "Are we living in a simulation?" question.


> whether and how that may affect the "Are we living in a simulation?" question.

Not at all.


I think there exist potential implementations of virtualization that are not detectable from inside the virtualization.


Especially if you are willing to sacrifice speed.


Is gravity a wave then, like light? I thought the jury was still out on that.


Gravity is the structure of space-time, and that structure supports waves. The proper analogy would be that gravity is like electromagnetic fields, and the equations of electromagnetism support wave-like solutions, and we call this light.


Keeping in mind that gravity is only one of four interactions, and not even the only one working at long distances, I would have figured that it is an important component of the structure of spacetime. Have I overemphasized the other interactions in thinking this?


Well, the other interactions (electromagnetic) propagate through spacetime. Gravity waves are ripples in spacetime itself. So they aren't quite the same.


So mastering the energy of gravity could be the ultimate advanced-civilization objective. A warp drive is just the first practical (or at least the most obvious) application of such technology. <end Star Trek-like voice>


Mastering THE ENERGY OF GRAVITY! is trivial. Pick something up right now. Now, lift it over your head. You've given it GRAVITATIONAL ENERGY! Now, drop it. It has now expressed its GRAVITATIONAL ENERGY!

Do not confuse science fiction with science fact. Exactly, exactly how gravity works is a mystery, yes, but interacting with the gravitational field is very, very settled science. If you want to harness the mighty power of gravity, build more tidal harnesses.


Not to mention that gravity is kind of puny even at that scale. The rate of energy loss from the Earth's rotation due to the tides is about 3.3 TW. In 2007, the US consumed about 3.1 TW.


Another way to look at this is that human civilization consumes a scary amount of power.

Our global energy consumption in 2008 was estimated to be 474 exajoules. The total energy received by the earth from the sun during a year is about 5 million exajoules, a fraction of which reaches the surface.

So we are only a factor 10,000 away from that. At a seemingly modest 2% yearly growth rate, we could increase our energy consumption a hundredfold in two centuries, and waste heat will start to become an issue.
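
To put rough numbers on that (my own sketch; the 474 EJ and ~5 million EJ figures are the ones quoted above, and the 2% growth rate is the stated assumption):

    # Headroom between current consumption and total solar input, and how long
    # exponential growth takes to eat it up.
    import math

    consumption_ej = 474.0      # global energy use in 2008, exajoules
    solar_input_ej = 5.0e6      # energy received from the sun per year, exajoules
    growth = 0.02               # assumed yearly growth rate

    headroom = solar_input_ej / consumption_ej                   # ~10,500x
    years_to_100x = math.log(100) / math.log(1 + growth)         # ~230 years
    years_to_solar = math.log(headroom) / math.log(1 + growth)   # ~470 years

    print(round(headroom), round(years_to_100x), round(years_to_solar))

Strictly speaking, a hundredfold at 2% takes about 230 years rather than two centuries, and even the full factor of ~10,000 is only about 470 years away at that rate.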


Yep, even today waste heat is equal to something like 5%[1] of the global heat gain. Still the lesser problem today, but once we bring a petawatt of clean fusion online...

[1] it's been a while since I looked up this number. Could easily be an order of magnitude lower, which is still fairly impressive.


What is 'the global heat gain'?


The imbalance between how much energy the Earth receives (mostly from the sun, but also including terrestrial sources like radioactive decay in the Earth's core) and how much energy it radiates into space.

The number I have in my head -- it's been a while since I looked it up -- is that the imbalance is about 200 terawatts. For comparison, about 122 petawatts are absorbed from the sun, the Earth's fiery core generates about 45 terawatts, and humans currently produce around 16 terawatts. So the Earth is absorbing over a hundred petawatts, radiating nearly all of it back, and the tiny surplus of about 200 terawatts is slowly heating the Earth (mostly the oceans).

Incidentally, this heat gain could be counteracted by dumping about 12 cubic miles of ice into the ocean each day. Antarctica has about 6 million cubic miles of ice, so you'd get a good thousand years out of that strategy.
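
Those figures roughly hang together. A quick sanity check (my own sketch; the 200 TW imbalance and the ice volumes are the numbers above, the latent heat and density of ice are standard values):

    # How much ice melt would it take to soak up a ~200 TW energy imbalance?
    CUBIC_MILE_M3 = 4.168e9     # cubic metres per cubic mile
    ICE_DENSITY = 917.0         # kg per cubic metre
    LATENT_HEAT = 3.34e5        # joules to melt one kg of ice

    imbalance_w = 200e12                                # 200 TW
    joules_per_day = imbalance_w * 86_400               # ~1.7e19 J
    ice_kg_per_day = joules_per_day / LATENT_HEAT
    ice_mi3_per_day = ice_kg_per_day / ICE_DENSITY / CUBIC_MILE_M3

    antarctica_mi3 = 6e6
    years_of_ice = antarctica_mi3 / ice_mi3_per_day / 365

    print(round(ice_mi3_per_day, 1), round(years_of_ice))   # ~13.5 mi^3/day, ~1200 years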


So the thing more commonly called "Earth's energy balance".

Nothing I see quotes a tidy number like that and the temperature of the oceans would be a major factor in the radiative output of the planet.


Now imagine that you could create a gravity alternator and use gravity in a similar way to electricity... of course this doesn't make sense, but maybe there are gravity applications not yet imagined.


A reactionless drive (that just pushes on spacetime instead of trying to alter it) is probably even more practical.


Vacuum propeller. Imagine if a submarine had to carry all the water its propeller pushes on? That's rockets today. If we could make a vacuum propeller, one that acts directly upon space-time, it would radically transform the prospect of space travel.


They are doing research in this direction. http://en.wikipedia.org/wiki/Quantum_vacuum_plasma_thruster


Your comment just put a big goofy grin on my face.


Warp drive would at the very least require an equivalent finding for momentum.


If the propagation of gravitational effects is analogous to electromagnetic waves, then maybe this is an avenue for a "gravity cloaking device" which would have interesting implications (hoverboard anyone?)


Cloaking yourself from waves would not negate the effects of a constant field.


That gravity propagates as a wave is a consequence of Relativity. To date we have no direct detections of gravitational waves, but we have detected energy loss in a binary system which is consistent with theory.

http://en.wikipedia.org/wiki/PSR_B1913%2B16


As far as I understand it, gravitational waves are "ripples" in the fabric of spacetime caused by the motion of high-velocity, high-mass bodies. That's why they're called "gravitational waves", not "gravity waves"; i.e. they're not waves of gravity, but rather waves caused by gravitating bodies.


Light is both a particle and a wave


While light's behavior as both particle and wave can easily be observed, the particle carrier for gravity, the graviton, hasn't been observed in the wild.

As a result, gravity is currently only a wave, in relativistic spacetime.


Or, light is neither a particle nor a wave, but a distinct thing that seems to behave like both under various circumstances.


There are some cases in quantum mechanics where you just have to accept that a value can be two different things at once, not that it was really one thing. Like it or not, superposition is a real thing, so the assumption that something must have a single value at any one time is simply wrong.

Wave-particle duality is a lot like that as well, where it is tough to come up with some distinct thing that could possibly act like a particle, but still diffract like a wave.


> where it is tough to come up with some distinct thing that could possibly act like a particle, but still diffract like a wave.

Armchairing here. But as I understood, objects in quantum-land are known as "amplitudes in the complex plane". They behave like this all the time and under all circumstances, and should be understood on their own terms. E.g. squaring the height of pond ripples doesn't return a probability distribution. The "sometimes it's a wave, but other times it's a particle" idea is a historical artifact, like how humanity uses a base ten number system.


It's not tough at all: just stop trying to shoehorn the quantum into things like waves and particles. They aren't both; they're neither. Calling them waves and particles is just trying to cram macro-world concepts into places they don't belong because it's easier to understand that way. Quantum behavior isn't like anything we know and thus can't be related to what we know without using bad metaphors.

Watch Feynman explain the behavior of magnets; he makes pretty much the same point when someone asks if fields are like rubber bands.

The wave particle duality is just a bad metaphor for quantum behavior. Quantum behavior is its own thing.



The carrier for gravity may also have wave-particle duality, via the graviton[0].

0 - https://en.wikipedia.org/wiki/Graviton


> This pattern, basically a curling in the polarization, or orientation, of the light, can be created only by gravitational waves produced by inflation.

I call BS. "Within our current theories, this pattern can be created only by..." would be a more accurate statement. The arrogance that "with this theory, we understand it all" has been shot down over and over in the history of science.

[Edit: tarrosion noted the caution of experimenters in making sure that the data could not be caused by something else. This is appropriate, and it's good that they have it. You now have one, and only one, theoretical explanation for the data. But the statement in the article that I quoted is still a step too far. It presumes that our existing theories are the only possible ones.]


Usually publications are understood within the context of their field. It would be exhausting to list each limitation of your field in every paper, and the limitations are understood by the intended audience.


A friend-of-a-friend has the "science religion"---he seems to try to claim that what we "know" now is the closest to absolute truth that we could possibly get, and would therefore make the claim from the article with a straight face. (He seems unfazed by comments that some new discovery tomorrow might invalidate what we "know" now and seems to think that what we would learn would simply be more absolutely true.)

Me, I'm of the opposite philosophy and understand that everything I think I know now is probably wrong. It's just slightly not-as-wrong as last week.


On the whole of it, your friend is probably more correct than you are:

http://chem.tufts.edu/answersinscience/relativityofwrong.htm


As an illustration, around 2200 years ago, 200 years before Jesus, the Greek scientist Eratosthenes measured the circumference of the Earth without leaving his town(!), just by observing the shadow of the Sun and reasoning cleverly, giving the circumference as "50 times the distance between Alexandria and Syene." He didn't have the exact distance then, but plugging in that distance as we know it today, his figure for the circumference of the Earth is off by less than 0.2%.

http://todaslascosasdeanthony.com/2012/07/03/eratosthenes-ea...
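
The geometry is simple enough to sketch in a few lines (a rough reconstruction, not Eratosthenes' own units; the 7.2-degree shadow angle and the ~800 km Alexandria-Syene distance are the commonly quoted modern figures):

    # Eratosthenes' method: at noon on the solstice the Sun is directly overhead
    # at Syene but casts a shadow of ~7.2 degrees at Alexandria, so the two
    # towns are 1/50 of a full circle apart along the meridian.
    shadow_angle_deg = 7.2
    alexandria_to_syene_km = 800.0            # commonly quoted modern figure

    fraction_of_circle = shadow_angle_deg / 360.0          # = 1/50
    circumference_km = alexandria_to_syene_km / fraction_of_circle

    print(circumference_km)   # 40,000 km, vs. the true ~40,008 km around the poles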

150 years later, still before Jesus, another scientist, Posidonius, repeated the experiment.

Most of the things we have confirmed today have survived a lot of checks. As Asimov writes, it's not that there's much "wrong" in what science knows today; it's just that it is "incomplete" in the smaller details (from the perspective of common experience).


The difference to me is that "incomplete" means you are missing something: more precise measurements, better calculations, more information. But that is not necessarily the case. You may be able to measure the size of the Earth relatively precisely, but if your theory puts the Earth at the center of the universe, more measurements are going to do little but cause you headaches.

Take the ever-popular conflict between relativity and quantum mechanics. For all intensive purposes [sic], I believe they both work out to about Newtonian mechanics at scales I can easily observe and they both work very well for their different appropriate tasks. But they don't mesh well together, which is another requirement for science. "Incomplete" doesn't begin to describe that situation because I suspect that whatever is going to unify both is going to be as different from either as they are from classical physics.

[As an aside, I've seen Asimov's essay before and while I usually don't have a problem with his writing, in this essay's case I can't get past the fact that it is either very poorly written (if I'm feeling charitable) or a rather silly ad hominem (if I'm not).]


My example is totally orthogonal to the question of the Earth's position in the universe; I don't see what is gained by bringing that confusion into the discussion.

Your other argument is something scientists have been well aware of for decades, so it's a good example of science knowing its current limits, which again means we can't be that wrong if we know exactly where the limits are. The unmapped terrain spans only the first 1e-32 of the first second! Can you even imagine how small that time is? There have been some 1e49 such intervals since then! What is that if not a small "incompleteness" of our knowledge? That the people who work on it call it "a big thing to unify" doesn't change the fact that it's tiny compared to the vast span of time our present equations already cover.
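
For scale (my own arithmetic, taking the standard ~13.8-billion-year age of the universe):

    # How many 1e-32-second slices fit into the age of the universe?
    age_years = 13.8e9
    seconds_per_year = 3.156e7

    age_s = age_years * seconds_per_year    # ~4.4e17 s
    slices = age_s / 1e-32
    print(f"{slices:.1e}")                  # ~4.4e49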


It sounds to me like you agree entirely with your friend and are nitpicking over how you define the measurement of knowledge (in particular, the sign convention for incorrect-but-more-predictive theories).


Yes, but note that there are differences in how you can work in experimental/historical sciences.

Your friend^2 would probably gain by reading more about the scientific method; see Wikipedia etc. (But then, so probably would you and I, too.)


These kinds of people are terrible coders. They are made aware of one possible thing that could go wrong in a piece of code they're writing, and once they avoid that trap, they magically think that since they avoided that one pitfall they are on top of their game, and then proceed complacently to write the rest of the code. Months later we find it is full of pitfalls which could have been avoided if they had paid attention.


>I call BS. "Within our current theories, this pattern can be created only by..." would be a more accurate statement. The arrogance that "with this theory, we understand it all" has been shot down over and over in the history of science.

I echo your BS call. Arrogance is what this is. And hype.

And this is not totally unrelated to the fact that the experimental physics community is always looking for money. This is more likely just money hype. I am not saying there is something supernatural they are missing. I am just saying they don't understand the universe yet. Yet they are saying they do, and they have their hands out for more money.

Big Bang theory is a form of religion. Thus, I am suspicious of it.

This entire thread is a great example of homo sapiens groupthink.


Are you acquainted with the field and underlying science? If so, could you please elucidate your criticisms and make them concrete instead of just throwing about vague accusations of corruption? ("always looking for money", etc.)


Reminds me of the way Einstein, not being an experimental physicist himself, would conclude his famous papers with suggestions for experiments to confirm them. Awesome for this research team to have the opportunity to confirm this discovery in Linde's lifetime.


How would you folks rate the significance of this, let's say as far as scientific discoveries / realizations go in the last 100 years? Yes, of course this is completely subjective. I would say in the top three.


Assuming it's real, I think it's up there with T ~ 3 K (cosmic microwave background itself, Nobel 1978) and Λ > 0 (accelerating universe, Nobel 2011).


A little bit of caution from here: http://profmattstrassler.com/2014/03/17/bicep2-new-evidence-...

Very interesting result, potentially game-changing, but it could also turn out to be nothing. We'll have to wait for more experiments before we can say for sure.


Interesting that the BICEP2 polarization from gravitational waves, as they describe it (http://bicepkeck.org/faq.html):

"strong B-mode polarization at the much larger angular scales--2 to 4 degrees on the sky--where lensing is a tiny effect but where inflationary gravitational waves are expected to peak. "

is of about the same scale as the ~500-million-light-year period of baryon acoustic oscillations (i.e. the period of baryonic (gravitating) matter density):

http://en.wikipedia.org/wiki/Baryon_acoustic_oscillations#BA...


Is this the same as the graviton in string theory? Or could this be used to further justify the existence of the graviton?


I'm not a physicist, but I would think the answer has to be yes, since everything we know of with wave nature also has particle nature on some scale. But I can't even imagine what kind of experiment would detect one.

Wikipedia has some interesting things to say about this: http://en.wikipedia.org/wiki/Graviton#Experimental_observati...


This page allowed me to have an idea of the concepts underlying this discovery (CMB light, B-mode polarization): http://cosmology.berkeley.edu/~yuki/CMBpol/CMBpol.htm


Interesting that they succeeded in detecting gravity waves where LIGO failed?


They are using a different method. In their own words: "The presence of a water wave can be detected by feeling its up-and-down motion or by taking a picture of it. We are doing the latter." LIGO is doing the former.

http://bicepkeck.org/faq.html


LIGO has not failed. The second generation of LIGO detectors, called "advanced LIGO", is currently under construction. It will be around ten times more sensitive than the earlier configuration of LIGO, and is very much expected to yield astrophysical discoveries. More info at: http://ligo.org/

Furthermore, what LIGO seeks to do, and what the BICEP project has done, are quite different. LIGO is something like a "radio" that receives gravitational waves. We'll be able to listen to gravitational waves as they arrive at Earth. The discovery announced today is of the "fossilized" imprint of primordial gravitational waves on the cosmic microwave background radiation. Very important, but complementary to LIGO.


Is there a way from this data to calculate the frequency of the wave(s)? Bandwidth? Or otherwise characterize the signal that is causing the polarization? Is that even a meaningful question in this case?


Not really. What's being observed are remnant polarizations from gravitational waves that had their effect long ago, under very different circumstances and mass-energy densities.

What's interesting is that the present measurements can be interpreted as evidence for gravitational waves to the exclusion of other explanations to a high degree of certainty.

Until now, evidence for gravitational waves was rather indirect and circumstantial, for example orbiting pulsars (very dense collapsed stars that emit periodic radio pulses) were observed to slow their pulse repetition rate over time in a way that suggested they were losing potential energy by radiating gravitational waves. Unfortunately those waves could not be detected directly.

In principle, a gravity wave could have nearly any frequency/wavelength consistent with its source. The pulsars discussed above were thought to produce gravitational waves of relatively high frequency / short wavelength, proportional to their pulse repetition rates. A so-called "millisecond pulsar" would have a possible gravitational wave frequency of one kilohertz and a wavelength of 3 x 10^8 / f meters, or 300,000 meters (300 kilometers). That's hardly short compared to a radio broadcasting station's wavelength, but for gravitational waves, it's remarkable.
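
Plugging in the numbers above (a trivial check, just to make the wavelength concrete; the 1 kHz figure is from the comment above):

    # Wavelength of a gravitational wave, assuming it propagates at the speed of light.
    C = 3.0e8     # m/s
    f = 1.0e3     # Hz

    wavelength_m = C / f
    print(wavelength_m)   # 3e5 m, i.e. about 300 km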


Another HN post from earlier today about the same discovery: https://news.ycombinator.com/item?id=7411341


They have evidence of gravity waves, but cannot prove causation (i.e. big bang), right?


We already have overwhelming evidence for the Big Bang; this is yet more evidence. But we have only a relatively small amount of evidence for Inflation and gravitational waves, and this is evidence of both (even better, it's evidence against several theories of Inflation - including the currently preferred ones).

Correlations are evidence of causation, and quite strong evidence if you found them because a causal theory predicted them.


As someone who doesn't have a very good grasp of these things, can you explain the difference between the Big Bang and inflation to me? It seems like the Big Bang is the explosion of all matter that would be in our universe, and inflation is the rapid expansion of the universe itself. And inflation is believed to have taken place almost immediately after the Big Bang. Is that sort of accurate?


The Big Bang theory is not about the "explosion" of our universe from a point.

Firstly, the universe may have always been infinite in size. It's just that every small piece of that infinite universe has been expanding since about 14 billion years ago.

Secondly, "explosion" is a misnomer. The universe is expanding, not exploding.

As a theory, the Big Bang theory makes various predictions, such as the cosmic microwave background radiation, the relative abundance of elements in the universe and of course the expansion of the universe.

But it doesn't predict that the universe will be the same in all directions. There just isn't time for energy fluctuations to have evened themselves out due to the transfer of energy from hot spots to cold spots. That process can only happen at the speed of light (energy is transferred at the speed of light).

Because of the way space is expanding, the speed two points move away from each other depends on how far apart they are. Thus, very distant points on opposite sides of the sky are actually moving apart faster than the speed of light. What this implies is that there's no way they can have had time to reach thermal equilibrium (i.e. have reached the same temperature)!
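
As a rough illustration of that point (my own numbers, using Hubble's law v = H0 * d with H0 of roughly 70 km/s per megaparsec):

    # Distance at which Hubble-law recession reaches the speed of light.
    C_KM_S = 3.0e5            # speed of light, km/s
    H0 = 70.0                 # km/s per megaparsec (approximate)
    LY_PER_MPC = 3.26e6       # light-years per megaparsec

    hubble_radius_mpc = C_KM_S / H0                        # ~4300 Mpc
    hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9
    print(round(hubble_radius_gly, 1))                     # ~14 billion light-years

Regions separated by much more than that are receding from each other faster than light, which is why they could never have reached thermal equilibrium on their own.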

But satellite observations tell us the observable universe is very nearly the same temperature in every direction!

The problem is resolved by the Theory of Inflation. This is a time of exceedingly(!!) rapid expansion which occurred before the time described by the Big Bang theory (remember the Big Bang theory is not about the "explosion" of the universe from a point, but about the subsequent expansion of the universe after inflation).

The reason inflation solves the problem is that a very, very tiny region of space (subatomic scale) expanded exceedingly rapidly in a tiny fraction of a second, smoothing out any temperature fluctuations. What we see as our observable universe is just the temperature fluctuations in a subatomic sized piece of universe from before inflation happened.

After that tiny fraction of a second, inflation stopped, and normal Big Bang physics took over.

Moreover, inflation explains the formation of galaxies. Tiny quantum fluctuations became the seeds of galaxies, clusters, superclusters and giant strings of clusters that make up our universe today.

Note that almost everything written in the current Slashdot summary of the breakthrough is completely wrong!


Wow.. that's sweet, but really, come on, only supporting Mavericks OS? And already, if you have an iPad, the Notes app automatically synchronizes with the mail server (I use a web-based email service mostly). I didn't even set it up, or should I say, allow the iPad to do so.


Gravity waves work fine for me under ubuntu.


Wrong thread


Just 5 sigma confidence...


Congratulations!


Interesting info! Thanks for sharing!


Can someone explain the significance of this, for those who are not familiar with this area?



