Not only did it take just a few weeks, but more importantly, it was cheap.
The moat for these big models was always expected to be the capital expenditure for training, costing billions. It's why companies like OpenAI are spending massively on compute - it builds a bigger moat (or tries to, at least).
If it can be shown, and it seems it has been, that you can use smarts to make use of compute more efficiently and cheaply yet achieve similar (or even better) results, the hardware moat buoyed by capital is gone.
I'm actually glad, though. An open-sourced version of these weights should ideally spur the kind of innovation that Stable Diffusion did when its weights were released.