That also means there is probably a lot of wrong information on Stack Overflow that is baked into the training too. Hopefully they accounted for this in training, but there's no way of knowing.
I have not really had a lot of accuracy issues with GPT, but then again, I am probably not savvy enough to spot them, most of the time anyway.
If it's posted on Stack Overflow, it's not new; it's merely been published. If this is the bar for LLM "learning," then they are doomed to live in a hazy bubble of the recent past.