Look back at expectations at the beginning, and you'll see that nearly everybody was predicting massive recession and even hyperinflation. Balaji bet $1M that there would be hyperinflation:
Yet that didn't happen. We dodged a major bullet, and survived far better than the rest of the world. We must look back at predictions and outcomes with clear eyes, not with the narratives that are being sold in the current day.
On one hand, Pichai paid $2.7B to get one guy back. On the other hand, he laid off 200 Core devs and "relocated roles" to India and Mexico [1]. The duality of Pichai-style management.
You are claiming they are statistical parrots, which I don’t think the parent poster meant.
The “statistical parrots” argument might have been compelling with GPT-3, but not with today’s models and the results of mechanistic interpretability research, which show internal representations and rudimentary world models.
The issue here is that even with enough VRAM to load the model, prompt processing with a large context is still too slow. (For example, running LLaMA 70B against a 30k+ token prompt can take minutes before the first output token appears.)
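To see why, here's a rough back-of-envelope sketch (my own illustrative numbers, not benchmarks): prefill cost scales with roughly 2 FLOPs per parameter per prompt token, so the time depends almost entirely on the effective throughput your setup can sustain.

```python
# Back-of-envelope prefill (prompt-processing) time estimate.
# Assumptions (illustrative): dense 70B-parameter model, 30k-token prompt,
# and two hypothetical effective throughputs for different setups.

def prefill_seconds(params: float, prompt_tokens: int, effective_flops: float) -> float:
    """Rough prefill time: ~2 FLOPs per parameter per prompt token."""
    total_flops = 2 * params * prompt_tokens
    return total_flops / effective_flops

# ~5 TFLOP/s effective: e.g. heavy CPU offload when the model spills out of VRAM
slow = prefill_seconds(70e9, 30_000, 5e12)
# ~200 TFLOP/s effective: model fully resident on a fast datacenter GPU
fast = prefill_seconds(70e9, 30_000, 200e12)

print(f"offloaded: ~{slow / 60:.0f} min, in-VRAM: ~{fast:.0f} s")
```

With these assumed throughputs the offloaded case lands in the ~14-minute range while the fully-in-VRAM case is tens of seconds, which matches the "minutes to process" experience people report.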
He's also a pretty decent amateur runner; he broke 20 minutes for the 5k, which is nothing to sneeze at. I'm still not able to do it after 1.5 years of training 4-6 days a week.