Hacker News

Which model did you pick? It defaults to 7B which wouldn't be expected to be the brightest of the bunch.

If you pick their 70B model it says: "I am Perplexity".




Mistral 7B is very good, definitely much better than this confusion would suggest.


The default here is pplx-7b-online, which I suspect is not instruct-tuned.

There's also pplx-7b-chat, which doesn't appear to exhibit this confusion and which I think is instruct-tuned.

Very strange default for them to choose imo.


Ah, yes, just today a friend was having lots of trouble with Mixtral returning terrible results, until he got Mixtral-instruct. Very interesting how much better the UX of instruct models is.


I think it's a matter of setting expectations appropriately. LLMs aren't chatbots by default, they're text prediction engines. Depending on the use case, the instruct-tuned models can be quite a bit more difficult to use.


For the average person, the text completion mode is very unintuitive. Even my friend, who's a very experienced developer, had issues with the non-instruct model, even after I told him he needs to structure his queries as completions.
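To illustrate the distinction the thread is drawing, here's a minimal sketch (my own example, not Perplexity's or Mistral's actual API) of how the same question has to be framed differently for a base model versus an instruct-tuned Mistral model. A base model only continues text, so the prompt must be shaped as a passage for it to complete; instruct-tuned Mistral models instead expect their `[INST]` chat template.

```python
# Sketch only: prompt shaping for base vs. instruct-tuned models.
# The template strings below are illustrative; check your model's
# documentation for its exact chat format.

def completion_prompt(question: str) -> str:
    """Frame a question as a completion for a base (non-instruct) model.

    The Q:/A: scaffold nudges the text predictor into continuing
    with an answer rather than, say, more questions.
    """
    return f"Q: {question}\nA:"


def mistral_instruct_prompt(question: str) -> str:
    """Wrap a question in Mistral's instruct-style [INST] template."""
    return f"<s>[INST] {question} [/INST]"


question = "What is the capital of France?"
print(completion_prompt(question))
print(mistral_instruct_prompt(question))
```

Sending the bare question to a base model often yields rambling continuations; the scaffolding (or an instruct-tuned model) is what makes it behave like a chatbot.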


70B seems more awake for sure





