This is an instance of the dangers of LLMs. Because you (self-admittedly) know nothing about the language or codebase, you have no idea of the semantically correct way to do things, so if GPT tells you to metaphorically jump off a cliff, you won’t know that it isn’t the right thing to do.
That certainly could be a concern. You’re right that it’s important to review code written by LLMs.
Did you look at the PR?
I reviewed it before submitting. While I would have struggled to write it myself, I was able to review it and conclude that it was sensible and unlikely to be risky.
Of course it could have bugs that I missed. But so could any code I write myself in any language.