I think we'll reach architectural changes that make this moot before we reach hardware for it. The way we train these models is constantly in flux, and we just need someone to crack continuous learning so we can pass models around and train them en masse, using the collective unused compute that's literally sitting on my desk and everyone else's right now.