The concept you're looking for is called "The Bitter Lesson".
The short version is, trying to give human assistance to AIs is almost always less cost-effective than making them run on more computing power.
By the time your human expert has hand-calibrated your weight layers to detect orange traffic cones, your GPU cluster has trained its AI to detect traffic cones, traffic lights, trees, other cars, traffic cones in a slightly different shade of orange, etc.
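To make the contrast concrete, here's an illustrative sketch (my own toy example, not from any real system): the expert's hand-coded rule only covers the case it was written for, while even a trivial learned model picks up its thresholds from labeled data and generalizes to nearby shades.

```python
# Hand-engineered knowledge: an expert picks RGB thresholds for "cone orange".
# Brittle by construction -- it knows exactly one thing.
def is_orange_cone(pixel_rgb):
    r, g, b = pixel_rgb
    return r > 200 and 80 < g < 160 and b < 80

# Learned alternative: a toy nearest-centroid classifier stands in for the
# GPU cluster. It derives its own notion of "cone" from examples.
def train_centroids(labeled_pixels):
    """labeled_pixels: list of ((r, g, b), label) pairs."""
    sums, counts = {}, {}
    for (r, g, b), label in labeled_pixels:
        s = sums.setdefault(label, [0, 0, 0])
        s[0] += r; s[1] += g; s[2] += b
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(pixel_rgb, centroids):
    # Assign the label whose mean color is closest (squared Euclidean distance).
    return min(centroids, key=lambda lab: sum(
        (p - c) ** 2 for p, c in zip(pixel_rgb, centroids[lab])))

data = [((230, 120, 40), "cone"), ((240, 130, 50), "cone"),
        ((60, 60, 60), "road"), ((70, 65, 60), "road")]
centroids = train_centroids(data)
print(classify((235, 125, 45), centroids))  # a slightly different orange: "cone"
```

The learned version is laughably simple, but the point scales: add more data and more compute and it keeps absorbing new categories, while the hand-coded rule needs a new expert session for every one.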
"The bitter lesson of machine learning is that building knowledge into agents does not work in the long run, and breakthrough progress comes from scaling computation by search and learning.02 This applies to domains where domain knowledge is weak or hard to express mathematically. The rapid progress of ML applied to LQCD, mol.2 dyn., protein folding, and computer graphics is the result of combining domain knowledge with ML."
This passage says the advantages of scaling that kind of learning are greatest under two conditions: when domain knowledge is weak, or when it's hard to express mathematically. But the flip side of that statement is that when the knowledge is well understood, and maybe straightforward to express, these learning systems aren't as great. And that part is true.
without taking on big other topics, I think this "bitter lesson" is unspecific enough to include some self-serving utility.. just tell the other camps to give up, you lost. that sort of thing.