
I mean -- if nobody's surprised, then nobody has learned anything, and then what was the point of doing all this work?

Also -- if there's no surprise, then it's not science, right? This is why I describe this as something more like robot ethnography.




> I mean -- if nobody's surprised, then nobody has learned anything, and then what was the point of doing all this work?

I strongly disagree with this view on science. It's extremely valuable to scientifically validate prior assumptions.


Agree with you -- it's valuable to validate assumptions if there is some controversy about those assumptions.

On the other hand, this work isn't even framed as a generalizable assumption that needed to be validated. It seems to me to be "just another example of how AI systems can be strategically deceptive for self-preservation."


> if nobody's surprised, then nobody has learned anything

Really? You're saying that as long as you assume something is true, there's no value in finding out if it's actually true or not?


I was taught in biology that a good scientific experiment is one in which you learn something whether or not the null hypothesis is confirmed.

I am equating learning with surprise, though you could disagree on the semantics.


Yes, it is an enormous mistake to equate learning with surprise. I'd ask you to consider answering my above question directly, as I think it will resolve this issue.


I agree with you, of course, that we should test our assumptions empirically as a general point.

However, there isn't time to test out every single assumption we could generally have.

Therefore, the more worthwhile experiments are ones where we learn something interesting no matter what happens. I'm equating this with "surprise," as in, we have done some meaningful gradient descent or Bayesian update, we've changed our views, we know something that wasn't obvious before.

You could quibble with the semantics there, but hopefully we agree on the idea of more vs. less valuable experiments.
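The "Bayesian update" framing can be made concrete with a toy sketch. All the priors and likelihoods below are hypothetical numbers chosen purely for illustration; the point is just that confirming a confidently held assumption with weakly discriminating evidence barely moves your credence, while a genuinely uncertain question with discriminating evidence moves it a lot:

```python
def posterior(prior: float, p_data_given_h: float, p_data_given_not_h: float) -> float:
    """Bayes' rule: P(H | data) from a prior and two likelihoods."""
    evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / evidence

# Case 1: we were already 95% sure the hypothesis holds, and the observed
# outcome was likely under either hypothesis -- a tiny update.
weak = posterior(prior=0.95, p_data_given_h=0.9, p_data_given_not_h=0.8)

# Case 2: we were genuinely unsure, and the outcome strongly
# discriminates between the hypotheses -- a large update.
strong = posterior(prior=0.5, p_data_given_h=0.9, p_data_given_not_h=0.1)

print(f"confident prior, weak evidence:   {weak:.3f} (moved {weak - 0.95:+.3f})")
print(f"uncertain prior, strong evidence: {strong:.3f} (moved {strong - 0.5:+.3f})")
```

Under these made-up numbers the first experiment moves the posterior by under one percentage point, the second by forty; that gap is one way to formalize "more vs. less valuable experiments."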

I'm just not sure whose model of LLM dynamics was updated by this paper. Then again, I only listened to a couple minutes of their linked YouTube discussion before getting bored.



