Ha, good one! Claude gets it wrong too, though it apologizes and corrects itself when questioned:
"I was trying to find a clever twist that isn't actually there. The riddle appears to just be a straightforward statement - a father who is a surgeon saying he can't operate on his son"
More than being illogical, it seems that LLMs can be too hasty and too easily attracted by known patterns. People do the same.
It's amazing how well these canned apologies work at anthropomorphising LLMs. It wasn't really haste; it simply failed because the nuance fell below the noise in its training data, and your follow-up correction rectified it.
Well, first of all, it failed twice: first it spat out the canned riddle answer, then once I asked it to "double check" it said "sorry, I was wrong: the surgeon IS the boy's father, so there must be a second surgeon..."
Then the follow-up correction did have the effect of making it look harder at the question. It actually wrote:
"Let me look at EXACTLY what's given" (with the all caps).
It's not very different from a person who decides to focus harder on a problem after being fooled by it a couple of times, because it's trickier than it seems. So yes, surprisingly human, with all its flaws.
But the thing is, it wasn't trickier than it seemed. It was simply an outlier entry, like the flipped tortoise question that tripped up the android in the Blade Runner interrogation scene. It was not able to think harder without your input.
"I was trying to find a clever twist that isn't actually there. The riddle appears to just be a straightforward statement - a father who is a surgeon saying he can't operate on his son"
More than being illogical, it seems that LLMs can be too hasty and too easily attracted by known patterns. People do the same.