It shouldn't matter. If your model touched X during training, its output should be treated as a derivative work. That's the reason humans use clean-room implementation techniques.
If you examined the source code of a library serving a specific purpose Y and then, shortly afterwards, implemented another library for purpose Y, there's a high probability that your code would be infringing. That's the entire premise and purpose of clean-room design (https://en.wikipedia.org/wiki/Clean_room_design).
Now factor in that machine models don't have a fallible or degrading memory, and I think the answer is quite clear.
My point was more about the possible negative effects than about the legality.
An important difference is that AIs are much cheaper than humans. Having a library reimplemented by a human usually isn't cost-effective, but having it done by an AI may become viable. That could cause a major shift in open-source dynamics, while possibly also reducing average software quality (because less code would be publicly scrutinized).
If the derivation isn't blatant, then it's difficult to prove, or even to notice.