To me, a fundamental question is "is this a problem?"
Humans have similar problems too. Our intelligence is trained by experience and evolution to operate within certain parameters. More concerning to me is the fact that a human can conceptualize "couch" from a single example. ML algorithms need to see thousands of couches before they can classify them.
Preface: I didn't intend for this to be so long... I just got on a roll...
I'm not sure it's true that we conceptualize "couch" from a single example. We've all seen thousands of couches in many different contexts: homes, schools, doctor's offices, on TV. If you had a person who had never seen a couch or a zebra before, and all you could tell them was right or wrong, it would probably take them (a lot?) more than a single try before they could distinguish between couch and zebra without fail.
The only reason it's easy for us is that we have this giant scaffolding built around zebras as living things that look like horses and donkeys, and couches as inanimate objects that look like things people sit on and regularly have some pattern on them.
I think people tend to underestimate just how much "training" in the ML sense goes into a human brain. After all, humans spend the first... decade? of life incapable of all but the simplest tasks. That's a decade in which our brains are consuming petabytes of information and processing it constantly.
Watching my children grow up, the way they learn seems remarkably similar to the way computers learn. If you watch a baby learn to move, it is purely an exercise in going too far in one direction and then too far in the other direction, repeated for basically years until they're coordinated enough to move roughly like an adult by the time they're 3 or so. It's the same with words and concepts too, they're just guessing based on things they already know (and the guesses are often waaaay off, because they don't yet know much), but they're constantly filing things away into their frameworks until their frameworks get big enough that this too resembles the way adults learn.
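That "too far one way, then too far the other" pattern looks a lot like iterative error correction with an overly large step size. Here's a loose, purely illustrative sketch of the analogy (the function, names, and numbers are all made up for this comment, not from any real system):

```python
# Illustrative analogy only: correcting toward a target with steps that
# start out too big, so each correction overshoots in the opposite
# direction, then settles as the corrections gradually get finer.

def learn_position(target: float, steps: int = 20) -> list[float]:
    """Repeatedly correct toward `target`, overshooting at first."""
    position = 0.0
    step_size = 1.5  # deliberately too large: every early correction overshoots
    history = []
    for _ in range(steps):
        error = target - position
        position += step_size * error  # 1.5x the error blows past the target
        step_size *= 0.9               # corrections get finer over time
        history.append(position)
    return history

path = learn_position(10.0)
# early attempts swing past 10 in both directions; later ones home in
```

The oscillation damps out for the same reason a toddler's movements do: the size of each correction shrinks relative to the remaining error.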
By the time we're adults, the human brain makes ML algorithms look pathetic, but that doesn't take into account the decades-long head start that our brains got.