Even if training a model turns out to be similar to human learning, it doesn't necessarily follow that the two should be treated the same, legally or morally. There's nothing wrong with laws or moral norms that treat human behavior, such as the human way of learning, as special and distinct from machine learning.