
> I wish they’d stop with the anthropomorphizations

You mean in how Claude interacts with you, right? If so, you can change the system prompt (under "styles") and explain what you want and don't want.

> Claude doesn’t “think” anything

Right. LLMs don't 'think' the way people do, but they are doing something. At the very least, what they do can be called information processing.* Unless one believes in souls, that's a fair description of what humans are doing too. Humans just do it better at present.

Here's how I view the tendency of AI papers to use anthropomorphic language: it is primarily a convenience and shouldn't be taken to correspond to some particular human way of doing something. So when a paper says "LLMs can deceive" that means "LLMs output text in a way that is consistent with the text that a human would use to deceive". The former is easier to say than the latter.

Here is another problem some people have with the sentence "LLMs can deceive": does the sentence convey intention? This gets complicated and messy quickly. One way of figuring out the answer is to ask: did the LLM just make a mistake, or did it 'construct' the mistake as part of some larger goal? This way of talking doesn't have to make a person crazy -- there are ways of translating it into criteria that can be tested experimentally, without speculation about consciousness (qualia).

* Yes, an LLM's information processing can be described mathematically. The same could be said of a human brain if we had a sufficiently accurate scan. There might be some statistical uncertainty, but let's say for the sake of argument that this uncertainty was low, like 0.1%. In that case, should one attribute human thinking to the mathematics we do understand? I think so. Should one attribute human thinking to the tiny fraction of the physics we can't model deterministically? Probably not, it seems to me. A few unexpected neural spikes here and there could introduce local non-determinism, sure... but it seems very unlikely they could bring about thought if it were not already present.





When you type a calculation into a calculator and it gives you an answer, do you say the calculator thinks of the answer?

An LLM is basically the same as a calculator, except instead of giving you answers to math formulas it gives you a response to any kind of text.


My hope was to shift the conversation away from people disagreeing about words and toward people understanding each other. When a person reads, e.g., "an LLM thinks", I'm pretty sure they translate it well enough to understand the sentence.

It is one thing to use anthropocentric language to refer to something an LLM does. (Like I said above, this is shorthand that makes conversation go more smoothly.) It would be another to take the words literally and extend them -- e.g. to assign other human qualities to an LLM, such as personhood.


In what ways do humans differ when they think?

Humans think all the time (except when they're watching TV). An LLM only "thinks" while it is streaming a response to you, and then it promptly forgets you exist. Then you send it your entire chat and it "auto-fills" the next part of the conversation and streams it back to you.
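To make that concrete, here is a minimal sketch (in Python) of what such a stateless chat loop looks like. `call_model` is a hypothetical placeholder for whatever completion API you happen to use; the point is only that the model keeps no memory between calls, so the client has to resend the whole transcript every turn:

    # Minimal sketch of a stateless chat loop. `call_model` is a hypothetical
    # stand-in for a real completion API; nothing persists on the model's side.

    def call_model(messages: list[dict]) -> str:
        """Hypothetical: send the full message list, get back the next reply."""
        raise NotImplementedError  # replace with a real API call

    def chat() -> None:
        history: list[dict] = []  # all conversational state lives client-side
        while True:
            user_text = input("you> ")
            history.append({"role": "user", "content": user_text})
            reply = call_model(history)  # the ENTIRE history is sent every turn
            history.append({"role": "assistant", "content": reply})
            print("model>", reply)

Drop the `history` list and the "conversation" is gone; nothing about the prior turns survives anywhere in the model itself.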

What are we debating? Does anyone know?

One claim seems to be “people should cease using any anthropocentric language when describing LLMs”?

Most of the other claims seem either uncontested or a matter of one’s preferred definitions.

My point is more of a suggestion: if you understand what someone means, that’s enough. Maybe your true concerns lie elsewhere, such as: “Humanity is special. If the results of our thinking differentiate us less and less from machines, this is concerning.”


I don't need to feel "special". My concerns are around the people who (want to) believe their statistical models to be a lot more than they really are.

My current working theory is that there's a decent fraction of humanity with a broken theory of mind. They can't easily distinguish between "Claude told me how it got its answer" and "the statistical model made up some text that looks like reasons but has nothing to do with what the model does".


> ... a decent fraction of humanity ... can't easily distinguish between "Claude told me how it got its answer" and "the statistical model made up some text that looks like reasons but has nothing to do with what the model does".

Yes, I also think this is common, and a problem. Thanks for stating it clearly! Though I'm not sure it maps onto what others in the thread were trying to convey.


If people think LLMs and humans are equal, people will treat humans the way they treat LLMs.

Looking over the comment chain as a whole, I still have some questions. Is it fair to say this is your main point?...

> Also, Claude doesn’t “think” anything, I wish they’d stop with the anthropomorphizations.

Parsing "they" above leads to some ambiguity: who do you wish would stop? Anthropic? People who write about LLMs?

If the first (meaning you wish Claude were trained/tuned not to speak anthropomorphically or refer to itself in human-like ways), can you give an example (some specific language, hopefully) of what you think would be better? I suspect there isn't language that is both concise and clear and that won't run afoul of your concerns, but I'd be interested to see if I'm missing something.

If the second, can you point to some examples of where researchers or writers do it more to your taste? I'd like to see what that looks like.


Wait, we went from "they don't think" to "they only think on demand"?

Since we have no idea how humans think, that's a pretty unfair and unanswerable question.

Humans wrote LLMs, so it's pretty fair to say one is a lot more complex than the other lol


> Humans wrote LLMs, so it's pretty fair to say one is a lot more complex than the other

That's not actually a logical position though, is it? And either way, I'm not sure "less complex" and "incapable of thought" are the same thing.



