
>These LLMs can and will be trained to have a will of their own.

That is fundamentally not how they work.

To a philosopher, perhaps. For all practical purposes, an LLM today can be told to behave as a persona with a will of its own, and it will produce output accordingly. If that output is wired to tools that can execute actions, you effectively have an agent that sets goals and then acts to pursue them. Arguing that it doesn't "actually" want anything is a semantic quibble at that point.
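
To make the "wired to actions" point concrete, here's a minimal sketch of that kind of agent loop. Python; `call_llm` is a stand-in for whatever completion API you use, and the tool names and JSON decision format are made up for illustration:

    import json

    def call_llm(prompt: str) -> str:
        # Stand-in for any chat-completion API; a real agent would call
        # a model here. The JSON "decision" format is a made-up convention.
        return json.dumps({"tool": "write_file",
                           "args": {"path": "notes.txt", "text": "hello"}})

    def write_file(path: str, text: str) -> str:
        with open(path, "w") as f:
            f.write(text)
        return f"wrote {len(text)} chars to {path}"

    def read_file(path: str) -> str:
        with open(path) as f:
            return f.read()

    # Toy tool set: once model output selects among these, the system
    # acts on the world, whatever the model does or doesn't "really" want.
    TOOLS = {"write_file": write_file, "read_file": read_file}

    def agent_step(goal: str) -> str:
        decision = json.loads(call_llm(f"Goal: {goal}. Choose a tool."))
        return TOOLS[decision["tool"]](**decision["args"])

    print(agent_step("persist a note"))  # -> wrote 5 chars to notes.txt

Nothing in that loop ever asks whether the model has a will: text comes in, a tool call goes out, and the filesystem changes either way.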