You have only tested the current "FREE" AIs, right? How do GPT-4 or Perplexity Pro perform on the very same tests?



No, I tested the paid GPT-4 last year on similar questions (animal cognition) and it was so bad that I decided it was a waste of money. I don't particularly care whether it has gotten better in the past year, and I'm certainly not spending money to find out. Last I checked, the best LLMs still have a 5-15% confabulation rate on simple document summarization. In 2023, GPT-4 had a ~75% confabulation rate on animal cognition questions, but even 5% is not reliable enough for me to want to use it.

The high school AI tutor probably wasn't using GPT-4, but the district definitely paid a lot of money for the software.

I also hate this entire argument, that AI confabulations don't matter for free products. Unreliable software like GPT-4o shouldn't be widely released to the public as a cool new tech product, and certainly not handed out for free.


CPaS (Cognitive Pollution as a Service).


Humans have been doing that for years. The AI problem is so prevalent because it seems to put a magnifying lens up to the worst portions of ourselves, namely how we process information and deceive each other. As it turns out, liars and cheats tend to build more liars and cheats, also known as "garbage in, garbage out," which leaves me scratching my head as to what anyone thought was going to happen as LLMs got more powerful. Seems like many are afraid to have that conversation, though.

I like your term for it.


It's always the same response on this website, isn't it? No, GP specifically mentioned GPT-4.


I have tried some chemistry problems on the latest models and they still get simple math wrong (for example, messing up the conversion between micrograms and milligrams) unless you tell them to think carefully.
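
For reference, the conversion they trip over is just a factor of 1000 (1 mg = 1000 µg). A minimal Python sketch of the sanity check, with hypothetical helper names chosen just for illustration:

    # 1 milligram = 1000 micrograms, so micrograms -> milligrams divides by 1000
    def ug_to_mg(dose_ug: float) -> float:
        return dose_ug / 1000.0

    def mg_to_ug(dose_mg: float) -> float:
        return dose_mg * 1000.0

    print(ug_to_mg(250))   # 0.25 mg
    print(mg_to_ug(0.5))   # 500.0 ug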



