Trying to understand your use case better: if you already have a corpus of example code in JS and in Python, why do you need the model to do the transformation?
That's just an example, but the end goal is feeding arbitrary code in JS and getting code back in Python. Think of the existing corpus as training data.
Or do I have to wait for GPT-4's expanded context window to fine-tune with prompts along the lines of the sketch below?
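Purely to illustrate the shape I mean (this is a made-up sketch, not an actual prompt from the corpus): stuff translated pairs into the context and leave the last Python slot empty for the model to fill.

```python
# Hypothetical few-shot prompt: example pairs from the corpus, then the new
# JS snippet with the Python part left blank for the model to complete.
prompt = """Translate JavaScript to Python.

JavaScript:
const xs = ys.map(y => y * 2);
Python:
xs = [y * 2 for y in ys]

JavaScript:
const total = nums.reduce((a, b) => a + b, 0);
Python:
"""
```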
The embeddings on their own are insufficient for this: they only give you vectors you can compare for similarity, so you still need some kind of sequence model to actually generate the code.
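To make the distinction concrete, here's a minimal sketch (entirely hypothetical: the `embed()` function and the toy corpus are stand-ins I made up). Embeddings get you as far as retrieving the nearest existing JS/Python pair by similarity; producing the Python output still needs a separate generative model on top.

```python
import numpy as np

# Hypothetical embed(): stands in for whatever embedding model you'd use.
# It maps a code snippet to a fixed-size vector, nothing more.
def embed(code: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(code)) % (2**32))
    return rng.standard_normal(512)

# Toy corpus of existing (JS, Python) translation pairs.
corpus = [
    ("const xs = ys.map(y => y * 2);", "xs = [y * 2 for y in ys]"),
    ("const evens = ns.filter(n => n % 2 === 0);", "evens = [n for n in ns if n % 2 == 0]"),
]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_pair(js_snippet: str):
    # This is all embeddings buy you: nearest-neighbour retrieval.
    query = embed(js_snippet)
    return max(corpus, key=lambda pair: cosine(query, embed(pair[0])))

# The retrieved pair can seed a few-shot prompt, but the actual Python
# translation still has to come from a sequence model.
print(most_similar_pair("const doubled = nums.map(n => n * 2);"))
```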
It should be possible to build your own model to do this instead of GPT-4 if you're so inclined. I don't know how the quality would compare, but there are various specialized code-specific models already around (and more coming) that work quite well.
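As a rough sketch of that route (the checkpoint choice is my assumption, and out of the box CodeT5 is not a JS-to-Python translator; the point is only that the plumbing is ordinary seq2seq, with your existing pair corpus as the fine-tuning data):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# One of the specialized code models; you would fine-tune it on your
# JS -> Python pairs before expecting sensible translations.
checkpoint = "Salesforce/codet5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

js_code = "const xs = ys.filter(y => y > 0);"

# Encode the JS source and ask the (fine-tuned) model for a Python rendering.
inputs = tokenizer(js_code, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=128, num_beams=4)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```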