
What are the advantages of Perplexity over, say, ChatGPT?

I use three LLMs at the moment (ChatGPT, Claude, and Grok), and have a decent sense of which is better at different tasks. For example, when asked about questionable web scraping, ChatGPT and Claude give answers focused on ethics, whereas Grok gives a more direct, technical response.

What is Perplexity better than other LLMs at?




Inline citations. I use Perplexity all the time to find Hacker News or Reddit threads on certain topics, or to conduct baseline research. It returns the standard LLM answer but with inline citations, which I can then use to verify or explore further.


Use https://hackersearch.net/ for RAG and copy-paste the response into any LLM. I wish Reddit had semantic search as well. For X I can use Grok.
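
That retrieve-then-paste workflow is easy to script. Here's a minimal sketch of the idea; the /api/search endpoint and the response fields are hypothetical stand-ins (I'm not assuming anything about hackersearch.net's actual API), so adjust them to whatever the service really returns.

    # Sketch of the "semantic search -> paste into an LLM" workflow.
    # NOTE: the endpoint, parameters, and response shape below are
    # hypothetical placeholders, not hackersearch.net's documented API.
    import requests

    query = "self-hosted alternatives to Perplexity"
    resp = requests.get(
        "https://hackersearch.net/api/search",  # hypothetical endpoint
        params={"q": query, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    hits = resp.json()  # assume a list of {"title", "url", "snippet"} dicts

    # Build a prompt carrying the retrieved context plus the question,
    # then paste it into whichever LLM you prefer.
    context = "\n\n".join(
        f"[{i + 1}] {h['title']}\n{h['url']}\n{h['snippet']}"
        for i, h in enumerate(hits)
    )
    prompt = (
        "Answer the question using only the sources below, and cite them "
        f"by number.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    print(prompt)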


Reddit Answers (https://www.reddit.com/answers) kind of serves this purpose, but AFAIK it's not available to everyone yet.


Have you used ChatGPT with "internet" access enabled (the globe icon)? It can and does cite its sources and is surprisingly accurate and useful.


That only recently became free, and in my experience it finds fewer sources than you.com or Perplexity.


Quality over quantity perhaps? Can't see why you'd want _more_ than about 3-5 sources returned each time. Anything more is information overload and defeats the purpose of being able to vet the claims in a timely manner.


Not if I'm looking for something specific like Reddit threads.

Anyway, I want more, and you.com and Perplexity deliver that, so I'll use them over ChatGPT.


Fair enough!


I can't speak to Perplexity because I avoid it. Instead, I prefer and pay for Claude, because Claude offers actual privacy protections. All the other services take in my data and use it to further train their models, and that does not sit well with me because I had a stalker. It's obvious that Perplexity is following the Google route of treating people as the product.


Just curious, what's the connection between you avoiding services that train their models on your data and your having had a stalker? What kind of stalker did you have that makes you consider this an attack vector?


That bad experience made me realize how easily the data I put out there can be used against me or for nefarious purposes. If you have daughters, you may know about people making indecent photos or videos using their images. Someone in Eastern Europe actually made an AI app to undress people. What happens if I share more personal details and the data is no longer considered mine?

Do I really need to elaborate on all the bad experiences people, women in particular, face? I thought these were well known by now.


So you pay them to abuse other people's privacy (how else did they train their AI), but you don't want them to abuse yours...


Perplexity has been the most accurate LLM product for me, and the most hallucination-free to date. It is also the fastest. It has inline citations too; other products offer those as well now, but at one time it was the only product on the market that did.

That's why I stick with Perplexity.


Perplexity is definitely not free of hallucinations, and its accuracy varies a lot depending on the type of query. For me it occupied a niche hybrid spot between full-blown LLMs and 'dumb' search engines, but that niche is increasingly being squeezed from both sides as the big LLMs add web capability with more structured results while Google et al. get smarter. It is hard to see what unique value proposition they offer going forward, and it's no surprise they're becoming more Google-like every day, in the negative sense, as they struggle to justify their sky-high valuation to all the VC money propping them up.


Well, it just depends how hard the questions you ask it are. If I'm getting help with graduate physics, it can certainly combine near-miss sources into a mishmash hallucination. Granted, the sources can be useful, but they're pretty much exclusively things I've already found via Google. I think only once has it returned a novel, useful source.


I never ask graduate level Math, Physics, or AI questions to any LLMs. I don't trust them enough.


This depends entirely on which model you choose, right? Which model are you using? I alternated between Sonar Huge and Claude 3.5 Sonnet when I used Perplexity (sometimes switching on a per-query basis), but the decision fatigue of model selection really got to me.


Not having to sign in or download the app is one big win.


I like how it cites relevant YouTube videos based on the search and shows thumbnails of the videos in its results. As far as I can tell, ChatGPT doesn't do this.


I've never figured out how to use it properly. Even with it being "connected" to the internet, it hallucinated on the simple searches I made.


The internet is full of spam and deceit; most of the time even we humans struggle. Even with good references, the models make their own share of mistakes.

Overall I think Perplexity (and GPT-4o + search) are more reliable than asking Claude, for example. And in most cases using the LLM gives better results than wading through raw websites, with their dark patterns.

But we can use both in parallel and compare: if they agree, we can trust the result. If not, then we have a better starting point and are warned to beware.
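
If you want to make that cross-check a habit, it's a few lines of code. This is only a rough sketch, assuming the official openai and anthropic Python SDKs with API keys set in the environment; the model names are placeholders, so swap in whatever you actually use.

    # Ask two models the same question in parallel and eyeball the answers.
    from concurrent.futures import ThreadPoolExecutor

    import anthropic
    from openai import OpenAI

    QUESTION = "What does Perplexity's Sonar model family run on?"

    def ask_openai(question: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    def ask_anthropic(question: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        resp = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=512,
            messages=[{"role": "user", "content": question}],
        )
        return resp.content[0].text

    with ThreadPoolExecutor() as pool:
        a, b = pool.map(lambda ask: ask(QUESTION), [ask_openai, ask_anthropic])

    # Agreement is a (weak) signal of trust; disagreement is a warning
    # to go check the underlying sources yourself.
    print("OpenAI:   ", a)
    print("Anthropic:", b)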



