
> No, it’s not.

What's "not"? His statement about the gravitational constant is exactly correct: we have no theory that tells us what the value of this constant should be, we have to get its value from measurements, and doing that is a process of statistical inference.

It is also true that the form of the theory itself--its equations--is not given to us magically but has to be developed by comparing predictions with experimental data, i.e., by a process of statistical inference.
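The same goes for choosing the form of the equations. A toy sketch of that selection step, again with made-up numbers: fit two candidate force laws, F = k/r and F = k/r^2, to the same distance-vs-force data and let the least-squares residuals pick between them.

    # Hypothetical (r in m, F in N) measurements, generated to roughly follow an inverse-square law.
    data = [(0.2, 3.61e-7), (0.3, 1.62e-7), (0.4, 9.0e-8), (0.5, 5.9e-8)]

    def best_fit_residual(exponent):
        # One-parameter model F = k / r**exponent; the least-squares k has a
        # closed form: k = sum(F_i * x_i) / sum(x_i**2) with x_i = r_i**(-exponent).
        xs = [r ** -exponent for r, _ in data]
        k = sum(F * x for (_, F), x in zip(data, xs)) / sum(x * x for x in xs)
        return sum((F - k * x) ** 2 for (_, F), x in zip(data, xs))

    for n in (1, 2):
        print(f"F = k/r^{n}: sum of squared residuals = {best_fit_residual(n):.3e}")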

Physicists don't usually describe these processes as "model training", but that doesn't mean such a description is wrong.




I guess one difference between the two is explainability. As we predict, test, and update, there is an ongoing narrative along the way of why we are making this change or that one. We can explain why we thought some scientific model was the correct one, even if it turns out to be wrong and needs to be updated. In model training as it's used here, I don't think there is any such thing.


That's really the divergence Norvig points to between Breiman's two cultures.

And there are plenty of cases where we have had essentially purely phenomenological models in physics (Landau theory, which I mentioned in another thread, is one example) which only _later_ were systematized, so it's a set of processes on a continuum, not two binary opposites.

People _treat_ physical laws as absolutely true, because with high confidence they are and it's intellectually convenient, but they're really just models. We build models on those models, and mental models using our _interpretation_ of those models; no-one is denying the importance of model interpretability in all of this – but it's absolutely a tower of model-building, with more-or-less convincing explanations for the model parameters at various points.

Physics, or at least some domains of physics, are tractable using what one might term the "axiomatic style" because the models we have are a) staggeringly explainable and b) extremely robustly supported by measurement. Even that statement exists on a continuum: MOND is pretty damn phenomenological, for one example, and while we have pretty good quantum chemistry methods in the small, we certainly don't have practical ways of using those ab-initio methods for even mesoscale problems in solid state physics. Does that make mesoscale physics "not science"?

A science which rejects all phenomenology and which insists on building everything up from first principles isn't a science; it's mathematics. That's totally fine! It's just a different—related, but distinct—domain of study.


> there are plenty of cases where we have had essentially purely phenomenological models in physics

Actually, there is a fairly common point of view (which often goes by the name of "effective field theory") according to which all of our physical theories, even the ones we usually refer to as "fundamental" like General Relativity and the Standard Model of particle physics, are phenomenological; they aren't the actual "bottom layer" but something that emerges as an effective theory from other layers deeper down (which we don't have a good theory of at this point).



