Seems like no? I mean, they are great at producing text that looks correct but is a hallucination. An LLM can help get something roughly OK that a human can then fix. That doesn't seem to be the case here.
Translation is the thing GPT is best at. Hallucination is much less of a problem here because you're dealing with a whole language, not a ton of libraries with different APIs.