Thanks! The model itself is a pretty vanilla GPT based heavily on Karpathy's nanoGPT, so it shouldn't need too many bells and whistles to get running on specific architectures. That said, I have very little experience with platform-specific development, so I would love some help from the community :)
Could you add a torch.compile call to the inference and training code, maybe gated behind a flag? This should significantly help performance on AMD/NVIDIA (and probably other vendors soon). A rough sketch of what I mean is below.
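Something like this, minimally; the `--compile` flag name and the stand-in model are just placeholders, to be wired into the existing argparse/config and model construction:

```python
import argparse

import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument("--compile", action="store_true",
                    help="wrap the model in torch.compile (requires PyTorch >= 2.0)")
args = parser.parse_args()

model = nn.Linear(8, 8)  # stand-in for the actual GPT model

if args.compile:
    # torch.compile only exists in PyTorch 2.x, so guard for older installs
    if not hasattr(torch, "compile"):
        raise RuntimeError("--compile requires PyTorch >= 2.0")
    model = torch.compile(model)
```

Keeping it behind a flag means the default path is unchanged for anyone on an older PyTorch or a backend where the compiler misbehaves.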
A serious nod to Karpathy here. They could have chosen any other Transformer architecture, but chose perhaps the most reachable one, in the literal sense.