It's the same in ML. Good predictions do come from real ideas about how these systems work - ideas from information theory, entropy, cognitive science, statistical mechanics, and so on - formulated in the context of our existing understanding and prior results. That's science.
Feynman was a genius. It takes a genius to say something like that, and it's surprising how many "scientists" are upset by statements like that, because they are so proud of their hard-won capabilities in modelling. Their curl and divergence operators, their Maxwell's equations... they forget that the map is not the territory, that the universe is, at its root, irreducibly mysterious, and that the fact we understand anything about it at all is gobsmacking.
Most AI academics have spent their careers theorizing complex algorithms or complex explanations of intelligence.
But the engineers have built neural networks large enough to give us data points showing that intelligence emerges from relatively simple components.
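To make "relatively simple components" concrete, here's a minimal sketch (plain NumPy, my own toy illustration, not any particular lab's architecture) of the kind of building block that large models just stack thousands of times:

```python
import numpy as np

def layer(x, W, b):
    # One "component": a matrix multiply, a bias, and a simple nonlinearity (ReLU).
    return np.maximum(0.0, x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))               # a toy input vector
W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)

# "Depth" is just repeating the same simple operation with different weights.
h = layer(x, W1, b1)
y = layer(h, W2, b2)
print(y.shape)  # (1, 8)
```

Everything surprising comes from scale and training, not from any individual piece being clever.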
Unsurprisingly, the people who believed they were the smartest were the least likely to explore the possibility that human intelligence isn't general, but specialized.
Echoes of the academics defending geocentric models of the universe centuries ago.
I believe it's entirely possible that the scientific method breaks down past a certain level of system complexity, defined somehow in thermodynamic terms.
This would be due in part to the infeasibility of running proper controlled experiments to isolate the effect of a single variable when tens of thousands of variables might be changing at the micro level.
Nobody has really explained why it works.