Ah, yes, just today a friend was having lots of trouble with Mixtral returning terrible results, until he got Mixtral-instruct. Very interesting how much better the UX of instruct models is.
I think it's a matter of setting expectations appropriately. LLMs aren't chatbots by default; they're text prediction engines. Depending on the use case, the instruct-tuned models can be quite a bit more difficult to use.
For the average person, text completion mode is very unintuitive. Even my friend, who's a very experienced developer, had issues with the non-instruct model, even after I told him he needed to structure his queries as completions.
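Roughly what I mean, as a sketch (assuming llama-cpp-python with local GGUF files; the model paths and sampling settings are just placeholders): with the base model you write text the model can plausibly continue, while the instruct variant expects Mixtral's [INST] ... [/INST] template.

```python
from llama_cpp import Llama

# Base model: frame the query as text the model would naturally continue.
base = Llama(model_path="mixtral-8x7b-v0.1.Q4_K_M.gguf")  # placeholder path
completion_prompt = (
    "Q: What is the capital of France?\n"
    "A:"
)
print(base(completion_prompt, max_tokens=32, stop=["\n"])["choices"][0]["text"])

# Instruct model: wrap the request in the chat template it was fine-tuned on.
instruct = Llama(model_path="mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf")  # placeholder path
instruct_prompt = "[INST] What is the capital of France? [/INST]"
print(instruct(instruct_prompt, max_tokens=32)["choices"][0]["text"])
```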
If you pick their 70B model, it says: "I am Perplexity".