> Is this anthropomorphizing? Yep. But that's the best way I've found to reason about them.
I think it might be more accurate to say, "LLMs are writing a novel in which a very smart AI answers everyone's questions." If you were writing a sci-fi novel with a brilliant AI and you knew the answer to some question or other, you'd put in the right answer. But if you didn't know, you'd just make up something that sounded plausible.
Alternatively, you can think of the problem as the AI taking an exam. If you get an exam question you're a bit fuzzy on, you don't just write "I don't know." You come up with the best answer you can from the scraps of information you do know. Maybe you'll guess right, and in any case you'll get some partial credit.
The first one ("writing a novel") is useful I think in contextualizing emotions expressed by LLMs. If you're writing a novel where some character expresses an emotion, you aren't experiencing that emotion. Nor is the LLM when they express emotions: they're just trying to complete the text -- i.e., write a good novel.