Hacker News

The problem I have with LLM-powered products is that they're not marketed as LLMs, but as magic answer machines with PhD-level pan-expertise. Lots of people in tech get frustrated and defensive when people criticize LLM-powered products, and offer a defense as if people were criticizing LLMs as a technology. It's perfectly reasonable for people to judge these products based on the way they're presented as products. Kagi seems less hyperbolic than most, but I wish the marketing material for chatbots read more like this blog post than like an overpromise.


Right, this is why I (author here) close the article mentioning that product design needs to keep the humans in the loop for these models to be useful.

If the product is designed assuming humans will turn their brain off while using it, the fundamental unreliability of LLM behavior will create problems.


Yeah, product design and marketing, for sure. As I said, I wish the marketing material were more like your blog post than what it is now. Obviously it's tough to get nuance into short-form copy, but promising the world is a big mistake that seemingly all these companies are making (…on purpose.)



