Neutrino experiment affirms faster-than-light claim (nature.com)
253 points by Rickasaurus on Nov 18, 2011 | 120 comments



To be precise, this latest experiment only ruled out one possible source of error (namely, a bias in the detection of arriving neutrinos that would cause the detector to preferentially register neutrinos at the front of the pulse). This doesn't affirm that the neutrinos actually exceeded light-speed.

But it is an important step towards understanding what's happening, and it's great that the OPERA group were able to put together this followup experiment so quickly.


> This doesn't affirm that the neutrinos actually exceeded light-speed.

You might be thinking of "confirm". This certainly is an example of the OPERA group affirming their results.

> Affirm. v. 1. State as a fact; assert strongly and publicly. 2. Declare one's support for; uphold or defend.


Take a look at definition 1 in your post.


Affirm means to say it [again] (without adding any more information). It basically means "Yes I really did/do mean it".

Confirm means to check again.


The way English works, not all of the dictionary definitions for a word need to apply to any particular usage.


I think all dictionaries work in this way, not just English ones


Indeed, as stated by the internal communication note:

This test confirms the accuracy of OPERA's timing measurement, ruling out one potential source of systematic error. The new measurements do not change the initial conclusion. Nevertheless, the observed anomaly in the neutrinos' time of flight from CERN to Gran Sasso still needs further scrutiny and independent measurement before it can be refuted or confirmed.


Oops...I just posted this link on my Facebook account with the sensational heading "Foundations of physics shaken again". I will remove it.


I wrote to the lead author asking them to release the source code of the software that they used for the data analysis. I have yet to receive a reply, but it's pretty important to rule out a programming error here.

For example, there were small errors in the Met Office's climate change software that I detected. The scientific papers were correct, but the translation into code was not. This could have happened here and it would be better if they released the code.

http://blog.jgc.org/2010/02/something-odd-in-crutem3-station...


Asking for the source code of the OPERA analysis is similar in scope to asking for the source code to gmail. Unless you made a very specific request and explained how you would be qualified to review it, I doubt they are taking you very seriously.

Furthermore, I doubt anyone at OPERA is in a position to even fulfill your request. At a minimum there are three separate systems, each with its own exclusive code base, and probably more than ten.

However, physicists do make extensive use of unit tests, so that may be some consolation.


> However, physicists do make extensive use of unit tests, so that may be some consolation.

Ehm, well, some do. And those that do endlessly complain about the majority who don't.

Fact of the matter is, most physicists aren't trained in proper software engineering methods. Quite a few, though not nearly a majority, are very interested in computers and computing, so they may pick up a thing or two or actually sink their teeth into doing things the right way. And then there are the rest, who love Physics for being Physics and Science for Science, see the computer as just another tool next to their pencil and paper, just "get things done", and screwed be the next PhD student who has to pick up on their research.

At least, that's what I hear from a friend doing a PhD that involves analysing huge datasets coming from apparatus that measures high-energy cosmic particles.

Of course the LHC is a much more prestigious project, so you might expect somewhat higher quality, but you can't change the culture. Physicists are generally not Software Engineers.


I find it very strange that the source code is not routinely released in these circumstances. What possible reason is there for not doing so? The code and the ideas within have no direct commercial value, surely?

At the very least it should be as available as the papers themselves.


What possible reason is there for not doing so? The code and the ideas within have no direct commercial value, surely?

Laziness, lack of time, and the fact that most scientific code is not suitable for public consumption. For instance, my code includes error messages such as "What the fuck????" and "This never happens" which I'd need to take out in order to prepare it for publication.

In addition, there's the assumption that source code is a pretty trivial implementation detail; we publish the algorithm but not the details of the implementation, just as the experimentalist tells us what he built but doesn't tell us what brand of screwdriver he used to build it.


The thing is though, as I'm sure you're aware, the algorithm in abstract is utterly irrelevant. The algorithm as released is never actually executed, it is the implementation that gets executed.

One "trivial" detail in the implementation, one tiny little floating point rounding error, can throw the entire thing. We all know this. So why is the scientific community so unwilling to face this elephant in the room? And why is nobody else confronting them about it? Of course there's jgc, with his admirable record of confronting people about things, but that's the only example I can think of.

You would've thought the "Al Gore made up AGW to devalue my exxon shares" angst-brigade would've jumped on this, but they seem more concerned with digging through departmental email gossip.


"The algorithm as released is never actually executed, it is the implementation that gets executed."

If you use a different implementation of the algorithm, on the same data, and get a different result, you will find that there was an error. That's a lot easier than combing through someone else's code base.

A simple example: If Lab A says they took the square root of 16, you don't need their source code to know there's a problem, if their result is 5 and your result is 4. If you use their code, you might not notice that it's borked.
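
To make that concrete, here's a rough sketch of the idea (the function names and the Newton's-method re-implementation are just made up for illustration): run both implementations on the same inputs and compare within a numerical tolerance.

    import math

    def lab_a_sqrt(x):
        # Stand-in for Lab A's implementation (hypothetical).
        return math.sqrt(x)

    def lab_b_sqrt(x, iterations=30):
        # Independent re-implementation via Newton's method (hypothetical).
        guess = x / 2.0 if x > 1 else 1.0
        for _ in range(iterations):
            guess = 0.5 * (guess + x / guess)
        return guess

    for x in [16.0, 2.0, 1e6]:
        a, b = lab_a_sqrt(x), lab_b_sqrt(x)
        # A disagreement beyond numerical tolerance flags a bug in one of
        # them, without anyone reading the other lab's source.
        assert abs(a - b) < 1e-9 * max(1.0, a), (x, a, b)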


Still, this is not a good reason not to release code. We can deal with funny comments, bad jokes and dead code. Why not let people examine the code? They are not programmers who will be judged by the neatness of their code.


What if they don't have code, because they're using a commercial product to do the recording?


Well... That part has to be certified as a black box, but even if your code is a Mathematica notebook, it would be useful to have it published along with the raw data.

Again, there is no reason to be embarrassed of comments like "this should never happen". My own code has parts that raise "ThisShouldNeverHappenError" when something that should never happen disregards my opinion and happens anyway. ;-)
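
Something along these lines, as a toy sketch (not anyone's actual analysis code):

    class ThisShouldNeverHappenError(Exception):
        """Raised when an 'impossible' internal state happens anyway."""

    def bin_arrival(time_ns, bin_width_ns=10):
        # Toy example: arrival times are non-negative by construction,
        # so a negative value means something upstream is broken.
        if time_ns < 0:
            raise ThisShouldNeverHappenError("negative arrival time: %r" % time_ns)
        return time_ns // bin_width_ns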


This assumes errors by implementors are uncorrelated, which is almost certainly false. As programmers we often all make similar mistakes, particularly in exacting numerical code.


True, but the second implementation may use different, well-tested, mature code.

If the first lab used C++, and the second lab used Matlab and its libraries, and the third lab used trusted code based on NumPy, I would be surprised if their errors were strongly correlated. Brand new code? Sure. But not code they've used for several years through multiple experiments, or code that's part of a widely-used software package.

The thing is, labs working in similar areas most likely have ready, tested, trusted software toolkits for doing similar tasks, especially when it comes to data analysis.

It may seem to a programmer that the fastest, easiest thing to do would be to read the code, but in fact the fastest, easiest thing is probably for another lab in the same field to do an independent analysis of the data with their own code.


"trusted code based on NumPy" <- (Just to note, a number of NumPy's routines have seemingly known numerical instability issues; I would not trust it, and was burned by it badly a few months ago. If you've used a specific function before and it seemed to work, then that's great; but if you haven't: watch out.)


You've implicitly accepted that the first implementation is correct, which I've found is one of the mental blocks that people defending this practice seem to have with a frequency that surprises me. It isn't true. It isn't even the sort of thing you can try to debate your way out of, because it simply isn't true. Rather complicated numeric code written by non-software engineers is not something I'm willing to presume correct. I wouldn't presume it correct if it were written by software engineers, either, but at least then we'd probably have some sort of halfway reasonable testing to point at.

Further, the Kolmogorov complexity of a simulation is typically a wee bit larger than the Kolmogorov complexity of a square root implementation. That metaphor is beyond useless, it's deceptive.


"You've implicitly accepted that the first implementation is correct"

Not at all. The questions are 1) does the method described sound reasonable and 2) if I implement the same methods, with my toolset, and process the same data, do I get the same result?

If the results are different, using different implementations of the same methods, then it's possible that either party could have the error. But if the second party is reusing well-tested code of their own, or something from Matlab or whatever, then it is more likely that the unknown code of the original lab is the problem.

You might have a point about a simulation, but an awful lot of science isn't simulations, it's analyzing recorded data.


s/simulation/any real code/. It's still not a square root, which is a total straw man. If it were that simple, there'd be no code to write in the first place.


Okay, instead of 'square root' try spike sorting algorithms for recorded neuron activity. There are various ways of doing it, various software packages, free and commercial, ways of doing it in Matlab, etc.

(Ex: http://www.plexon.com/product/Offline_Sorter.html#Features)

Given a terabyte of recording data, I'd think it should be sufficient to specify the parameters they used for spike sorting. Someone else ought to be able to use a different package on the same data and obtain results similar enough to know if the first person fudged their paper.

Given a set of data, and a named algorithm, it ought to be possible to obtain the same result with any correct implementation of that algorithm.


That's begging the question. Yes, of course a correct implementation will by definition obtain the correct result. I'm questioning the casual way people assume they've got the correct algorithm implemented at all.

Is your paper nothing but off-the-shelf "spike sorting", or is that used as a component of something larger? Rolling back around to the original point, that source code release is a desirable thing: if it's the latter, all you learn is that this one library was used, probably from literally a single sentence in the paper that reads "We used spike sorting software X to obtain sorted spikes". Presumably the library has knobs whose settings you don't know, and you still don't even have their raw data to check the settings against, so you still know virtually nothing about what was actually done. Source release ought to be standard.

The degree to which people who are putatively scientists will go to bat to defend making it difficult or impossible to replicate their experiments boggles my mind.


You're still obstinately missing the point in your nerd rage — letting other people run the same source code does nothing to replicate that part of the experiment — they need an independent implementation to do that! Releasing the source code publicly actively works against that goal because anyone who reads it won't be able to do a black-box implementation.

Besides, the code is almost always worthless compared to collected data. What would actually be most valuable and practicable is for groups to be running their proprietary code on each other's public data for confirmation.


The only advantage I can see is that when the theory, formulas and algorithms are published, rather than the specific implementation itself, it forces people who want to verify your work to write their own implementation. It kind of makes sure that a bug in your own implementation doesn't pollute the results of those trying to verify your work.


Exactly. It shouldn't matter if Lab A implemented the algorithm in Matlab and Lab B implemented the algorithm in NumPy.

Take the data, apply the described algorithms, get the same result. If Lab B doesn't get the same result, and Lab B has confidence in their implementation, then there's probably something funny with Lab A's code or data.

Reusing Lab A's code on Lab A's data just reproduces Lab A's errors, if any.


This is very true.

However if I can just look at Lab A's code and spot an error in it, that will allow me to discount their results without having to redo their experiment, potentially saving years of work and millions of dollars.

Maybe that's what researchers are really afraid of?


The problem is more complex. Some errors will damage the scientific conclusions, others will show up as fractions of a percentage point inside a much larger confidence interval, and will ultimately not matter much.

In machine learning research, for example, most evaluations consist of running a new program on some data, getting results back, and from these results computing some aggregate measure of performance. A bug in the code that computes this measure of performance is _really bad_ and can invalidate all your conclusions. If that code is right, however, a bug in the code that trains your model is completely meaningless, because as long as your results are good you can argue that you actually meant to write a paper about the model actually implemented rather than the model you were supposed to implement.

I'm sure other scientific areas have similar distinctions, and a naive code reader might fail to notice that a bug is harmless (and there's also the fact that scientific code carries within it a lot of assumptions about the data which, if violated, would make the code buggy, but which real data never actually violates).
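
A toy illustration of that asymmetry, with invented labels and predictions: a bug in the model code just gives you a different (possibly worse) model, and an honest score still describes whatever model you ended up with; a bug in the scoring code makes the reported number itself wrong.

    # Toy example of why a bug in the evaluation code is far worse
    # than a bug in the model code.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # whatever the (possibly buggy) model produced

    def accuracy_correct(truth, pred):
        return sum(t == p for t, p in zip(truth, pred)) / len(truth)

    def accuracy_buggy(truth, pred):
        # Off-by-one slip: skips the first example but divides by the full count.
        hits = sum(t == p for t, p in zip(truth[1:], pred[1:]))
        return hits / len(truth)

    print(accuracy_correct(y_true, y_pred))  # 0.75  -- honestly describes this model
    print(accuracy_buggy(y_true, y_pred))    # 0.625 -- misreports the same predictions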


" discount their results without having to redo their experiment, potentially saving years of work and millions of dollars."

Or you could just analyze their data using their described methods and your own implementation.


Biology researchers don't get to reuse the other guy's mice, why should they reuse their code? As long as the paper describes the algorithms used, anyone should be able to duplicate the work using their own preferred implementation.

Also, a lot of the code may well be specific to the apparatus used, which may be unique to a lab, consisting of custom-built hardware.


Mice can't be perfectly copied. Code can.

If biologists could copy the mice from another experiment, give them their own drugs, and compare results, they would surely do that alongside using their own mice.

The more ways to compare experiments, the more sure we will be about results.

I understand cleaning up code is a lot of work, so don't do it; just release it as it is. I know it will be ugly, will include obscenities, etc. If somebody needs it, they will do the rest of the work.


"The more ways to compare experiments, the more sure we will be about results."

No. The more exactly you duplicate the original experiment, the more likely you duplicate any confounding issues. If someone observes something through a telescope, it's best to verify it through a different telescope somewhere else, to exclude the possibility of an optical quirk of the first telescope.


Some people will run the same code on the same data. They will get the same results, but so what.

Some other people will run the same code on different data (maybe from simulation, or when checking their own theory). They will get some other results, or the same.

Some people will recreate experiment from scratch, and get code B and data B.

Some people will run code A on data B. Some people will run code B on data A.

Right now we can't do that. The code is already written; it's a waste to hide it, and it's not like publishing the code prohibits other people from writing their own.

That's how you check for errors - by changing as little as possible and observing results.


>Biology researchers don't get to reuse the other guy's mice

In a sense, they actually do.

https://secure.wikimedia.org/wikipedia/en/wiki/C57BL/6

https://secure.wikimedia.org/wikipedia/en/wiki/BALB/c


They use identically-specified mice. They don't use the same mice. The mice used by one lab could, theoretically, be off-spec mutants. Or the lab could have mixed up which mice they used.

Saying you used a particular strain of mouse is like saying you used a particular algorithm on your data. Asking to use the exact individual mice used by another lab might not reveal an error such as mixed-up strains.


Sure. I wasn't posting those links to further this argument, I posted them simply because they are interesting.

Besides, they're only inbred strains, not clones.


They can also be pretty damned expensive. Like $400+ for a mouse.


Looks like I'm in the wrong business.

How does that come about? Patent shenanigans?


Probably. But really, if you're spending $50,000 on an experiment, you're probably more interested in getting mice that are really what it says on the label, than in saving a few bucks and possibly shooting your experiment and/or career in the nuts.

Pretty much anything 'lab grade' is going to be expensive. The lab I worked in ordered a lab grade laser for optogenetics experiments, and that cost something like $18k. Presumably that gets you a very precise, stable wavelength.


If I were involved in this project, I'd be sitting around saying "please don't let it be my fault, please don't let it be my fault".

Somewhere, somebody screwed up... and if the problem is in my code then I want to find it and correct it myself, rather than have it found and corrected by some random stranger who emailed me, and will then post the bug all over the internet so the world can point and laugh at the exact line of my source code that sent the world on a wild goose chase for FTL neutrinos.

So I don't blame this guy for not sending you his source code.


I don't blame him either. A random British guy writes to you in French and asks for your source code. Sure, I'd be skeptical too (although personally I err on the side of releasing all my code).

The problem is this: suppose it is a bug, and suppose the code is being held back so they can find the bug themselves. A huge amount of time and money is wasted whereas a programmer could look at the code and compare with the paper pretty quickly.

Also, in the case of high-energy physics it's not exactly easy to duplicate experiments, because of the eye-watering cost of the equipment.


"A huge amount of time and money is wasted whereas a programmer could look at the code and compare with the paper pretty quickly."

Maybe. It might be faster and easier to analyze the data using their described methods and your own code and see if the results are the same.


It's simple. For programmers, code is king. For scientists, data is king. Code can be recreated. Data is irreplaceable, so that's what is important, along with method.


Data depends on the code. You can't write neutrino arrival time to file on hard disk without some code in between the equipment and the file. That code can be wrong, too.

Some code has to be trusted, and scientists can't check every assumption every time, so I don't blame them. But why don't they allow others to check the code? It's the opposite of scientific method.


Perhaps an electrician should ask to check all the solder joints on all the cables?

How is that any different from a random programmer assuming the physicists are incompetent and presuming that what is really necessary is a code review?


If an electrician offered to check them for free, in his free time, why should they refuse?

EDIT: publishing code doesn't require dismantling equipment. It's not my fault the electrician isn't a very good analogy.


Because it would probably require dismantling vast amounts of equipment, and it would probably be detectable in the data anyway.

EDIT: True, not a great analogy.


The last thing I heard about this was that the results could be explained by the researchers overlooking the relativistic motion of the GPS clocks, and when corrected for it, the results were actually another confirmation of special relativity. See http://www.technologyreview.com/blog/arxiv/27260/

That said, a brief search turned up this paper ( https://facultystaff.richmond.edu/~ebunn/vanelburg.pdf ) which argues that that explanation is faulty (i.e., the paper made a mistake, and when corrected, the original researchers' results stand).

(Disclaimer: although I once read a book on this stuff, I shouldn't be confused with an expert, and I have no idea who's right and who's wrong. But it is exciting.)


That paper got some wide circulation for some reason, but it was an extremely simple criticism (six or seven equations, no derivations involving anything more substantial than distance = rate x time) by a computer scientist that simply misinterpreted the way the GPS synchronization was used (none of the physics was wrong in the paper, FWIW, the problem was that the author's claims about it having any connection to the OPERA experiment were completely unfounded).

IIRC someone from OPERA responded to questions about that paper by saying, basically, that it wasn't even worth debunking because they had real work to do, and publicly disproving every amateur with a theory would suck up all their time.


I seem to remember when that theory was discussed, it was explained that GPS clocks already take into account relativistic motion. Or something like that, don't quote me, I'm not a physicist.


I am a physicist and GPS indeed takes relativistic effects into account. GPS would not just be slightly wrong if it didn't: it would be entirely worthless. See for example [1].

[1] http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps....
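
Back-of-envelope version, using round numbers for the GPS orbit rather than anything from the OPERA paper: time dilation from the satellites' speed slows their clocks by roughly 7 microseconds a day, the weaker gravity at altitude speeds them up by roughly 45, and the net ~38 microseconds a day would be kilometres of ranging error if left uncorrected.

    # Rough check that GPS would be useless without relativistic corrections.
    # Round numbers, not values from any particular analysis.
    c = 3.0e8            # m/s
    GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    r_earth = 6.371e6    # m
    r_orbit = 2.656e7    # GPS orbital radius, m
    v_orbit = (GM / r_orbit) ** 0.5          # ~3.9 km/s
    day = 86400.0

    sr_shift = -v_orbit**2 / (2 * c**2)               # clock runs slow (special relativity)
    gr_shift = (GM / c**2) * (1/r_earth - 1/r_orbit)  # clock runs fast (weaker gravity)

    net_us_per_day = (sr_shift + gr_shift) * day * 1e6
    print(f"net clock drift: {net_us_per_day:.1f} microseconds/day")         # ~38.5
    print(f"position error:  {net_us_per_day * 1e-6 * c / 1e3:.1f} km/day")  # ~11.5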


Fun fact: When designing the original system there was disbelief amongst some involved that the relativistic effects were either real or relevant so they built in a switch to be able to remotely disable those parts of the calculations. They never did disable them.


Actually, that goes down as the funnest fact I've learned in at least a month, so thank you.


I don't remember for sure but I think I learned that from one of these lectures...

"Particle Physics for Non-Physicists: A Tour of the Microcosmos"

http://www.thegreatcourses.com/tgc/courses/course_detail.asp...


The GPS onboard clock is corrected for relativity due to its orbital height and speed, so its clock time is approximately correct.

But measuring the simultaneity of an event between two different points with a moving clock is a lot trickier.


That might have been an issue if they were measuring anything with the satellite clocks.

But they weren't. By my understanding, they were merely using the satellite signals to synchronize the ground clocks in the ground frame, and after that was done, the satellites were never heard from again.


Synchronizing two clocks in different positions is very difficult since relativity always comes in as you are moving a third clock between them. It's essentially impossible if you don't know the gravity field encountered by the moving clock.


This is an interesting paper on the topic: Determination of the CNGS Global Geodesy [1].

It discusses how the distance traveled by the neutrino beam was calculated. It covers GPS measurements, tidal effect considerations, the Sagnac effect [2], etc.

[1] PDF: http://operaweb.lngs.infn.it/Opera/publicnotes/note132.pdf

[2] http://en.wikipedia.org/wiki/Sagnac_effect


I seriously cannot wait for the MINOS experiment :)


If quantum entanglement "pissed off" Einstein, I can only imagine his reaction to this.


If I recall correctly, Einstein was pissed off by the so-called "spooky action at a distance", namely non-local interaction. Note however that a many-worlds interpretation gets rid of that.

Now we have neutrinos that may travel faster than light. That would throw Special Relativity out the window, sure, but not the locality principle: those neutrinos do not show infinite speed, unlike quantum entanglement under the Copenhagen interpretation.


He was also upset that it appeared to violate relativity.

To his deathbed he believed in his alternate explanation: that the particles were set opposite "at birth" and stayed that way until observation (if I understand that correctly).

So now we have at least TWO phenomena that appear to be "faster than light" - so get cracking scientists (and engineers) on a practical application for communications to probes that are light-hours or light-years away.


I think generally the current theory is that quantum entanglement doesn't allow you to transmit information faster than the speed of light as you can't choose which state your local particle collapses into (although there's some disagreement about whether communication could be achieved with groups of entangled particles) http://en.wikipedia.org/wiki/No-communication_theorem http://en.wikipedia.org/wiki/Faster-than-light (Quantum mechanics section)


Putting my tester's cap on, would it be feasible to have OPERA repeat the experiment with photons, instead of neutrinos? Presumably, if that experiment showed that photons travel faster than light (i.e., light travels faster than light), it would prove that there had to be an error in the experiment.


Will this widen the gap between relativity and quantum mechanics? or will it actually help in finally resolving the disconnect between these two greatest and well tested theories? Particles defying general relativity is well known, but defying special relativity? I can see it going both ways.


On the offchance that it turns out to be really true, it will certainly point us in the direction of some interesting new physics, so it'll be a step in the right direction.

I'm still trying to keep my expectations down for now, though.


I have very basic physics knowledge, and little time. I'm quite interested, however, to know how they synchronized their timers, since the neutrino speed is faster than light. Does anyone have a short explanation or an article for that?


Roughly, they used GPS signals to synchronize two accurate clocks, one each at the source and the destination.

For what it's worth, this part of the first paper is pretty accessible - at least, as to how they did it - it's more difficult to poke holes in how accurate it was :)


It's more likely they found a particle that travels back in time than a particle that goes faster than light.


I think that's the same thing.


Yep. Relativity says they are exactly the same thing.


AFAIK special relativity says that traveling faster than light will make you go back in time, but if a particle goes back in time, that does not mean it goes faster than light.


Fair point, I may have over-egged it a bit with "exactly."


I am hoping against hope...


So we're left with what, GPS errors? I've gone through their experimental setup and the timing looks foolproof. The only options left in my mind are

1) The distance between the labs is smaller than they think

2) Tachyonic neutrinos (not likely)


2) Tachyonic neutrinos (not likely)

Another possibility that occurred to me (which is probably way off, and has likely already been trivially disproven by some simple experiment that I'm not thinking of at the moment) is that the "speed of light" we tend to measure is actually a bit lower than the geometrical "c" that appears in the equations (I'm not sure what the state of the art is on measuring that geometrical quantity directly). The idea is that photons are somehow slowed as they move through the vacuum, similar to how we can very easily slow photons by making them move through glass, except this would have to be via interactions with the vacuum or some other form of matter. It could be that the vacuum itself has an index of refraction for photons not quite equal to 1, due to some unknown interaction.

Neutrinos interact with almost nothing, so it's conceivable that even if they have a small amount of mass and travel slower than geometrical "c", they could still end up going faster than photons because they slip through those background interactions with ease.

Again, this is wild speculation. "c" is used all over the place, and if we had the wrong value for it, that would require a lot of coincidences: this sort of vacuum index-of-refraction effect would also have to make other measurements involving the speed of light come out looking correct (energy measurements, for instance; I would tend to think a discrepancy would be way more obvious there, involving, as they do, factors of c^2), and that seems unlikely to me.

Then again, so much of what we directly measure is grounded in e/m interactions, so there's a chance (vanishingly small as it is) that a fifth-decimal-place discrepancy between photon-in-vacuum-c and geometrical-c could escape notice because of the relative difficulty of directly measuring geometrical-c, especially if whatever vacuum effects slow photon travel had other side effects that dunged up our energy measurements, too.


My knowledge of physics is only very basic but it's still fun to ponder these things. Do they need to factor in that the endpoints are rotating with the earth and the neutrinos are not?


At their latitude (~45 deg), the distance the earth travels in the 2ms it takes light to travel the distance is around 76cm (someone check my calc if you will).

They know the distance to within +-20cm, so the 76cm of Earth's rotation would be very significant (but not significant enough, again with my rough calculations, to bring these results back in line with c).

The distance is significant enough that to get an accurate measure of c over that distance you would have to take it into account, i.e., it would be obvious.
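
Rough check of that, assuming the usual ~730 km CERN-Gran Sasso baseline: light covers it in about 2.4 ms, and the ground at ~45 degrees latitude moves roughly 0.8 m in that time, so the same ballpark as the 76 cm above.

    import math

    c = 2.998e8                      # m/s
    baseline = 7.3e5                 # CERN -> Gran Sasso, roughly 730 km
    flight_time = baseline / c       # ~2.4 ms

    v_equator = 2 * math.pi * 6.378e6 / 86164           # Earth's rotation speed at equator, ~465 m/s
    v_ground = v_equator * math.cos(math.radians(45))   # ~329 m/s at ~45 deg latitude

    print(f"flight time:  {flight_time * 1e3:.2f} ms")        # ~2.4 ms
    print(f"ground moves: {v_ground * flight_time:.2f} m")    # ~0.8 m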


Since the rotation of the earth has to be factored in, does the earth's orbit around the sun need to be as well? How about the sun's orbit around the Galactic center?

I suppose my question is - all things being relative, why aren't the neutrinos bound to the same frame as the emitter and detector?


You're thinking of the wrong kind of relativity. The neutrinos aren't "bound" to any frame. They just move from A to B. The (apparent) distance between A and B changes depending on the observer's frame of reference, as does the (apparent) time it takes them to get from A to B. The faster the observer goes, the shorter the apparent distance (Lorentz contraction) and the smaller the apparent time (time dilation) become. But the RATIO of the apparent distance from A to B to the apparent time it takes light to travel from A to B remains constant. So if the neutrinos are moving faster than light, they will appear to do so as measured from any frame of reference, as long as you're consistent about using the same frame to measure time and distance. THAT is where it gets tricky, because the GPS satellites that they use to establish both time and distance are all moving in different frames. You also have to take gravity into account because the emitter and detector are non-inertial frames.
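
A quick numerical sketch of that invariance claim, with an arbitrary observer speed plugged into the standard Lorentz transformation: the transformed distance and time both shrink, but their ratio stays at c.

    c = 299792458.0          # m/s
    v = 0.6 * c              # arbitrary observer speed
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5

    # An event: something moving at speed c covers x = c*t in the lab frame.
    t = 2.4e-3               # s, roughly the CERN -> Gran Sasso flight time
    x = c * t

    # Standard Lorentz boost into the moving observer's frame.
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)

    print(x / t)               # c
    print(x_prime / t_prime)   # also c: distance and time both change, the ratio doesn't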


Thanks for taking the time to answer! You have helped me understand a few things better. However I also realise now that I phrased my question wrongly and should never have used the word "bound".

I also see now that I misread the grand-parent post. Of course these motions would not affect the measurements, and if they did, the rotation of the earth alone would be obvious.


By "frame of reference", I know you meant "inertial frame of reference", but of course the Earth moving around the sun and the solar system moving in the galaxy are not inertial. My weight is a bit stronger during the day than during the night because of the centrifugal force, isn't it?

I'm sure that the effects are tiny for the LHC, but are they accounted for at all?


> you meant "inertial frame of reference"

Actually no. The speed of light is constant in ALL frames of reference, whether inertial or not. The math just gets more complicated in non-inertial frames. (That's the difference between special and general relativity.)

UPDATE: it's actually a little more complicated than that. See http://en.wikipedia.org/wiki/Propagation_of_light_in_non-ine...

> I'm sure that the effects are tiny for the LHC

Actually, gravity is a pretty significant factor. See e.g.:

http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps....


I'm pretty sure I remember that being on the list of things that were double-checked.

The sense I've gotten is that the most likely culprit is some sort of systematic error in the software somewhere, as that stuff is the most labor-intensive part of the experimental setup to analyze for errors.


While it may have been checked it doesn't seem to be mentioned in the paper http://arxiv.org/abs/1109.4897v2


I was explaining to a friend my understanding that the emitter and receiver for these neutrino experiments basically needed to be synchronized to a single, fictional (superluminal, etc.) frame of reference. I'm positive there are many factors going into how this is done.


Neil deGrasse Tyson claims that another possibility is that of a new particle moving backwards through time.

Not that I understand that.

He said it in his recent "Ask me anything" session on reddit.


Well, that's exactly what will happen when anything moves faster than light: it will travel backwards in time. The real question is where it is getting the infinite energy to break the light barrier. Or are neutrinos the only particles with mass that will not be dragged by the Higgs field?


He said a new particle though. So something other than the neutrinos themselves, I suppose?


It was a neutrino that was discovered moving faster than light at CERN; that is what I was referring to. Tyson also mentioned that what is likely being observed is a special type of neutrino that already moves faster than light (a tachyon). What they think we are seeing is a particle that has been moving faster than light but never accelerated to reach that speed: it has been moving at that velocity all the way since before the beginning of time itself.


A particle moving backwards through time === a particle moving faster than what we have currently established to be the speed of light


My understanding is that it isn't necessarily moving faster but is cheating by moving through a dimension that we can't even draw.


Feynman had particles moving backwards through time as an explanation for something. I'm not sure if he was being totally serious, or using it as an idea to move theory forward.


Anti-particles.


3) Tiny stable wormhole or other anomaly under Alps (casualty saved, doubtful that it gives stable results)

4) High energy neutrinos spontaneously teleport themselves (casualty saved, new, very odd physics needed)

...

n) Subtle measurement problem (impossible to falsify until the result is replicated on an independent setup)


How do either of those save causality? If you've got regular neutrinos traveling FTL, causality is broken. It doesn't matter if the particles travel FTL because they really have a measurable velocity FTL or because they somehow spontaneously teleport.


I think you mean causality.


Sure, thanks.


It's usually best to leave off the "look guys, I broke physics" until human error has been well and thoroughly ruled out.


Sure, I don't think anyone has suggested otherwise. However, the OPERA collaboration has done a heroic job checking everything they could think of. Thus the continued interest.

Physics, as I'm sure you know, has undergone many revolutions in its understanding of the world. This could be the start of a new one, or it could fizzle and be a quirky measurement error. In either case, highly worth following to see what happens.


If it's not a bit of partially-digested cheese, it might have to do with the interface between relativity and QED.

But while it's not unthinkable that relativity requires revision under some circumstances, it's one of the most thoroughly and repeatedly proven tenets of modern physics. It's going to require something much more definitive and reproducible to overturn that mountain. I wouldn't expect a positive answer to this inside ten years, but a negative answer could turn up any time, since it depends on someone finding the bug in the gears.


If another lab using different equipment were to reproduce this result, that would give a positive result in less than 10 years?


That's one of the big steps which is necessary. It still leaves the questions of (a) funding, (b) how do you reproduce it and how long does that take, and (c) how do you explain it.

Many problems in science are considered to be "twenty year problems," by which we mean that they'll probably be solved eventually, but that it's unlikely to be any time soon. Many such problems go on for much longer without a solution despite significant progress (see also: fusion power plants). This isn't quite on that scope - but it could evolve into one if it actually ends up with a result which can be reproduced on command.


MINOS is already working on reproducing it (see article), and if they succeed it will be strong evidence that we're looking at something new.


I'm wondering: what do the involved scientists think might be the source of error? GPS timing (namely, the synchronization of the two remote clocks) sounds likely to me, because I was initially surprised that they could even hope to have sufficiently accurate chronometry to detect such a small duration.


You should read the paper; they go through that in great detail. I'm not an expert, but their clocks are being synchronized via two independent methods (GPS and a fiber-optic connection). They're pretty sure the timing is correct down to 1 nanosecond.


1) was my first thought, since it was such a small time difference, but at c the physical distance they would have to be off by is something like 20 meters. Off-the-shelf GPS units you can get for your car are accurate to around 3 meters, and the high-end models are accurate to under 30 centimeters. It would have taken a major screw-up for them to be off by 20 meters.
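
For scale, taking the roughly 60 ns early arrival that OPERA reported: that many nanoseconds at the speed of light is indeed on the order of 20 meters, which is why a 20 cm baseline uncertainty doesn't come close to explaining it.

    c = 2.998e8              # speed of light, m/s
    early_ns = 60.0          # OPERA's reported early arrival, roughly 60 ns
    print(early_ns * 1e-9 * c)   # ~18 m of equivalent distance error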


Doesn't the government add a small amount of variance to the GPS signal to limit its accuracy for non-military users? Or did they stop doing that?


They stopped that over ten years ago.


What is the difference between neutrinos and regular light photons? Do Neutrinos also exhibit the wave particle duality? Does this mean Albert Einstein was wrong, after all these years of assuming his "biggest blunder" was right?


Photons are spin one, and mediate the electromagnetic interaction. They have zero mass and propagate at c.

There are actually three types of neutrino (and they spontaneously change from one type to another); they are all spin-one-half, and they interact with other matter only via the weak nuclear force, which has a much smaller effective range than the electromagnetic force, so they don't interact very often. They have non-zero mass, but it is so small that we don't actually know what it is.

Everything in the universe exhibits the wave-particle duality.


>"biggest blunder"

Einstein's self-professed "biggest blunder" was the cosmological constant. That term has never referred to the assumption of Lorentz invariance.


Photons interact much more easily with matter. Photons are massless, whereas neutrinos are believed to have mass. Also, photons mediate the electromagnetic force, whereas neutrinos only interact via the weak nuclear force.


The worst kinds of bugs are the ones that only happen in production, when you have some crazy setup with a load balancer, a CDN and/or Varnish, a few app servers, high traffic, the middle of the day, and a bunch of executives hovering. I have seen a crazy bug that happened on only one of the identical app servers. It turned out SVN export hadn't performed correctly (I still don't know why), and one of the files was messed up. That was not easy to find.

I have seen Apache ignore updated PHP files and serve old code, which should never happen. That's a real bitch to debug.


I think you posted in the wrong comment thread...


Cool story, bro.


I wonder if us realizing that the speed of light isn't the cosmic limit for everything will be like when we realized, back then, that the Earth isn't flat.



