
This is very true.

However, if I can just look at Lab A's code and spot an error in it, I can discount their results without having to redo their experiment, potentially saving years of work and millions of dollars.

Maybe that's what researchers are really afraid of?




The problem is more complex. Some errors will damage the scientific conclusions; others will show up as fractions of a percentage point inside a much larger confidence interval and ultimately won't matter much.

In machine learning research, for example, most evaluations consist of running a new program on some data, getting results back, and computing some aggregate measure of performance from those results. A bug in the code that computes this measure of performance is _really bad_ and can invalidate all your conclusions. If that code is right, however, a bug in the code that trains your model is completely meaningless, because as long as your results are good you can argue that you actually meant to write a paper about the model as actually implemented rather than the model you set out to implement.
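To make that concrete, here's a hypothetical sketch (not from any real paper; the function names and toy data are made up) of how a one-line bug in the scoring code can silently inflate results:

    # Correct: every example must be scored; missing predictions are an error.
    def accuracy(preds, labels):
        assert len(preds) == len(labels), "every example must be scored"
        return sum(p == l for p, l in zip(preds, labels)) / len(labels)

    # Buggy: zip() silently truncates to the shorter sequence, and the
    # denominator counts predictions instead of labels, so any examples
    # the model silently dropped are simply ignored.
    def accuracy_buggy(preds, labels):
        return sum(p == l for p, l in zip(preds, labels)) / len(preds)

    labels = [1, 0, 1, 1, 0]
    preds = [1, 0, 1]                     # two examples silently dropped
    print(accuracy_buggy(preds, labels))  # 1.0 -- spuriously perfect
    # accuracy(preds, labels) would raise and surface the problem

Nothing about the training code matters here: the headline number is wrong either way.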

I'm sure other scientific areas have similar distinctions, and a naive code reader might fail to notice that a bug is harmless. There's also the fact that scientific code carries a lot of implicit assumptions about the data: code that would be buggy if those assumptions were violated, but real data never violates them.
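As a hypothetical example of such a latent "bug" (again a made-up sketch): scoring code that hard-codes an assumption about the labels and is only wrong for data that never actually occurs:

    # Correct only if labels are binary 0/1: for such data, abs(p - l)
    # is 1 exactly when the prediction is wrong, so this is the error
    # rate. Feed it multi-class labels (e.g. 0, 1, 2) and it silently
    # returns nonsense -- but if the real data is always binary, the
    # "bug" never fires, and flagging it wouldn't discredit the results.
    def error_rate(preds, labels):
        return sum(abs(p - l) for p, l in zip(preds, labels)) / len(labels)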


" discount their results without having to redo their experiment, potentially saving years of work and millions of dollars."

Or you could just analyze their data using their described methods and your own implementation.



