
AI is not mind reading.

Behavioral patterns are not unpredictable. Who knows how far an LLM could get by pattern-matching what a user is doing and generating a UI to make it easier. Since the user could immediately say whether they liked it or not, this could turn into a rapid and creative feedback loop.
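
As a rough illustration of that loop, here is a minimal sketch in Python. Everything in it is hypothetical: call_llm stands in for whatever model API you'd actually use, and the "feedback" is reduced to a thumbs-up/down callback.

    # Hypothetical sketch of the propose-and-confirm loop described above.
    # call_llm is a placeholder, not a real library call.

    from dataclasses import dataclass, field

    @dataclass
    class Session:
        actions: list[str] = field(default_factory=list)   # observed user actions
        rejected: list[str] = field(default_factory=list)  # proposals the user disliked

    def call_llm(prompt: str) -> str:
        """Stand-in for an LLM call; returns a UI description."""
        return "proposed UI layout for: " + prompt

    def propose_ui(session: Session) -> str:
        # Pattern-match recent behavior and ask the model for a UI change that
        # streamlines it, steering away from proposals the user already rejected.
        prompt = (
            "Recent user actions: " + "; ".join(session.actions[-10:]) + "\n"
            "Previously rejected proposals: " + "; ".join(session.rejected) + "\n"
            "Suggest a small UI change that makes this workflow easier."
        )
        return call_llm(prompt)

    def feedback_loop(session: Session, user_likes) -> str | None:
        """One iteration: propose, show, keep only if the user approves."""
        proposal = propose_ui(session)
        if user_likes(proposal):
            return proposal          # apply the change
        session.rejected.append(proposal)
        return None                  # otherwise, change nothing

    # A user who never wants the UI to change just rejects every proposal.
    s = Session(actions=["open report", "filter by date", "export csv"])
    print(feedback_loop(s, user_likes=lambda ui: False))  # -> None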

So, if the user likes UIs that don’t change, the LLM will figure out that it should do nothing?

One problem LLMs don’t fix is the misalignment between app developers’ incentives and users’ incentives. Since the developer controls the LLM, I imagine a “smart” shifting UI would quickly devolve into automated dark patterns.


A user who doesn't want such changes shouldn't be subjected to them in the first place, so there should be nothing for an LLM to figure out.

I'm with you on disliking dark patterns, but it seems to me a separate issue.


A sufficiently advanced prediction engine is indistinguishable from mind reading :D


