8 x 8 x 8 A100s (512 GPUs) should be able to do 100k+ tokens/s at that size.
With a dataset of 1.2 trillion tokens, that's 12 million seconds, or about 140 days.
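A minimal sketch of the same arithmetic, assuming the figures above (the ~100k tokens/s cluster throughput and the 1.2T-token dataset are the estimates, not measured numbers):

```python
# Napkin math: wall-clock time to train on 1.2T tokens at ~100k tokens/s
# (throughput assumed from the 512-A100 estimate above).

gpus = 8 * 8 * 8                 # 512 A100s
throughput_tok_per_s = 100_000   # assumed cluster-wide tokens/sec
dataset_tokens = 1.2e12          # 1.2 trillion tokens

seconds = dataset_tokens / throughput_tok_per_s   # 1.2e7 s = 12M seconds
days = seconds / 86_400                           # ~139 days

print(f"{seconds / 1e6:.0f}M seconds = {days:.0f} days on {gpus} GPUs")
# -> 12M seconds = 139 days on 512 GPUs
```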
(PS: this is why everyone is training <60B models; the cost is crazy. Even if my math estimate is off by 300%, it's still a crazy number.)