I'm not saying that either of your concerns is invalid. The LLM space is just the wrong place to be for investors who are worried about cash-flow positivity this early in the game. These models are crazy expensive to develop _currently_, but they are getting cheaper to train all the time. Mistral spent a fraction of what OpenAI did on GPT-3 to train their debut model, and companies started one year from now will spend a fraction of what both are spending presently to train theirs.
YUP. Plus, the points at the end of your post, about how much faster and cheaper it is getting to train new models, indicate that Mistral may have hit a real sweet spot. They are getting funding at a moment when expectations are that huge capital is needed to build these models, just as those costs are declining, so the same investment will buy them a lot more runway than it did for previous competitors...