A different biological analogy comes to mind, one I've mentioned
before in a security context. It isn't model degeneration but the
amplification of invisible nasties that don't become a problem until
way down the line.
Natural examples are prion diseases such as bovine spongiform
encephalopathy [0] or sheep scrapie. These really become a problem in
systems with a strong, fast positive feedback loop combined with some
selector. In the case of cattle, the loop was feeding rendered
meat-and-bone meal from dead cattle back to livestock. Prions survive
the high temperatures used in rendering, so they are selected for and
concentrated with every pass through the loop.
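To make the dynamic concrete, here's a toy simulation (every number and
name below is invented for illustration, not data about BSE): each
generation, a fraction of the feed is rendered from the previous herd,
and rendering wipes out ordinary pathogens but not prions.

    # Toy positive feedback loop with a selector that misses one
    # contaminant. All parameters are made up for illustration.
    recycle_rate  = 0.3    # fraction of feed rendered from the previous herd
    amplification = 4.0    # in-host growth per generation
    kill_ordinary = 0.999  # rendering destroys ordinary pathogens...
    kill_prion    = 0.0    # ...but leaves prions untouched

    ordinary = prion = 1e-6  # starting contamination levels
    for gen in range(20):
        # contamination that makes it back into the feed after rendering
        ordinary = amplification * ordinary * recycle_rate * (1 - kill_ordinary)
        prion    = amplification * prion    * recycle_rate * (1 - kill_prion)
        print(f"gen {gen:2d}  ordinary={ordinary:.2e}  prion={prion:.2e}")

Anything the selector kills decays geometrically; anything it can't
touch compounds on every cycle, which is the whole horror of it.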
To really feel the horror of this, read Ken Thompson's "Reflections on
Trusting Trust" [1] and ponder the ways that a trojan can be replicated
iteratively (like a worm) but undetectably.
It isn't loss functions we should worry about. It's gain functions.
[0] https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopat...
[1] https://tebibyte.media/blog/reflections-on-trusting-trust/