
Mistral is genuinely groundbreaking: a fast, locally hosted model without content filtering at the base layer. You can try it online here: https://labs.perplexity.ai/ (switch to Mistral)



It's very fast, but it doesn't seem very good. It doesn't take instruction well (it acknowledges corrections and then spits back the same wrong answer), and it either doesn't have much of a corpus or is dropping most of it on the floor, because it answered zero of my three basic smoke-test questions.


>doesn't seem to have much of a corpus

What do you mean by 'corpus'? The model is only 13GB, so questions that require recalling specific facts aren't going to work well with so little room for 'compression', but asking Mistral to write emails or perform style revisions works quite well for me.
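
For anyone who wants to reproduce that kind of style-revision test locally, here is roughly the setup. This is a minimal sketch, assuming the Hugging Face transformers library (plus accelerate for device_map) and the mistralai/Mistral-7B-Instruct-v0.1 checkpoint; the prompt text is just an example:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed HF checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # The instruct fine-tune expects the [INST] ... [/INST] wrapper.
    prompt = "[INST] Rewrite this in a formal tone: hey, meeting moved to 3pm, sorry! [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

At fp16 the 7B weights are roughly that 13GB figure, which is why it fits on a single consumer GPU or a laptop with enough RAM.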


Are you running mistral-7B or mistral-7B-instruct?
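
The distinction matters a lot for instruction following: the base model just continues text, while the instruct variant was trained on the [INST] ... [/INST] chat template, so instructions sent without that wrapper tend to be followed poorly. A minimal sketch of what the template looks like, assuming the transformers tokenizer (the message text is a placeholder):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
    messages = [{"role": "user", "content": "Summarize: (your text here)"}]
    # apply_chat_template emits the [INST] ... [/INST] wrapper that the
    # instruct fine-tune was trained on; the base model has no chat template.
    print(tok.apply_chat_template(messages, tokenize=False))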


Wow, I was not expecting this. It's really something else in terms of speed, and the results are not bad! I will test it more.


Are any companies/teams besides the original creators working to get this up to Copilot/ChatGPT standards?


Thanks for the link. Do you know of any other similar services that support fine-tuning?



