The GPUs that matter here are A100s/H100s, not random 30xx consumer cards or mining-specific chips.
There is a 30x performance difference between generations on AI workloads; the cards from the crypto hype cycle don't do much anymore for the kind of AI work that matters.
At the moment, anyway, because the current meta is 100B+ parameter LLMs doing everything. I would not discount chaining specialized smaller models together as a viable approach, and those can run in 24GB of VRAM or even less (rough sketch below).
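To make "chaining specialized smaller models" concrete, here's a minimal sketch using the Hugging Face `pipeline` API: one small summarization model feeds a small zero-shot classifier. The model names and the task (triaging a bug report) are just illustrative assumptions, not a recommendation; the point is that each stage fits comfortably in consumer-level VRAM.

```python
# Sketch: chain two small specialized models instead of one giant LLM.
# Model names are examples only; each fits well under 24GB of VRAM.
from transformers import pipeline

# Stage 1: a small summarization model condenses the raw input.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Stage 2: a small zero-shot classifier labels the condensed text.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def triage(document: str) -> dict:
    # Condense, then classify the condensed text against a fixed label set.
    summary = summarizer(document, max_length=60, min_length=10)[0]["summary_text"]
    labels = ["bug report", "feature request", "question"]
    result = classifier(summary, candidate_labels=labels)
    return {"summary": summary, "label": result["labels"][0]}

if __name__ == "__main__":
    print(triage("The app crashes every time I open the settings page on Android 14."))
```

Each stage can be swapped or fine-tuned independently, which is the whole appeal over one monolithic model.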