To a philosopher, perhaps. For all practical purposes, an LLM today can be told to behave as a persona with a will of its own, and it will produce output accordingly. If that output is wired to something that allows it to perform actions, you effectively have an agent capable of setting goals and then acting to pursue them. Arguing that it "actually" doesn't want anything is meaningless semantics at that point.
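To make "wired to something that allows it to perform actions" concrete, here is a minimal, purely illustrative sketch of such a loop. The names (`call_llm`, `ACTIONS`, `run_agent`) are hypothetical, and the model call is a canned stub so the example is self-contained; a real system would substitute an actual model API and real tools.

```python
# Minimal sketch of an LLM-driven agent loop. call_llm is a hypothetical
# placeholder standing in for a real model call; it returns a canned action
# so the control flow is runnable without any external service.

import json

def call_llm(system_prompt: str, history: list[dict]) -> str:
    # Stand-in for a real model: always proposes the same "search" action.
    return json.dumps({"action": "search", "argument": "current weather in Oslo"})

# The "something that allows it to perform actions": a table of callables
# the loop is willing to execute on the model's behalf.
ACTIONS = {
    "search": lambda q: f"(pretend search results for: {q})",
    "done":   lambda msg: msg,
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    system_prompt = (
        "You are an autonomous assistant pursuing a goal. "
        'Reply with JSON: {"action": <name>, "argument": <value>}.'
    )
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = call_llm(system_prompt, history)
        step = json.loads(reply)                             # parse the model's chosen action
        result = ACTIONS[step["action"]](step["argument"])   # execute it in the outside world
        if step["action"] == "done":
            return result
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": f"Observation: {result}"})
    return "(gave up after max_steps)"

if __name__ == "__main__":
    print(run_agent("Find out whether I need an umbrella today."))
```

Whether the model "wants" anything, the loop above will keep choosing and executing actions toward the stated goal, which is the behavioral point being made.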
That is fundamentally not how they work.