
This. It nicely encapsulates why AI aficionados use words like "hallucinate", which become secret clues to belief around the G part of AGI. If it's just a coding mistake, how can the machine be "alive"? But if I can re-purpose "hallucinate" as a term of art, I can also make you, dear reader, imbue the AI with more and more traits of meaning that point to "it's alive".

It's language, Jim, but more as Chomsky said. Or maybe Chimpsky.

I 100% agree with your rant. This time fly likes your arrow.




The correct term isn't "hallucinate", it's "bullshit". I mean that in the casual sense of "a bullshitter": every LLM is a moderately knowledgeable bullshitter that never stops talking. (Literally so - even the end of a response is just a magic string from the LLM that cues the containing system to stop it; if not stopped that way, it would just keep going after the "end".) The remarkable thing is that we ever get correct responses out of them.
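
For the curious, here's a minimal sketch of that stop mechanism in Python. The model interface here is hypothetical; the one real detail is that 50256 is GPT-2's <|endoftext|> token id:

    EOS_TOKEN = 50256  # e.g. GPT-2's <|endoftext|> id

    def generate(model, tokens, max_new=256):
        # The model never decides to halt; the harness watches for the
        # end-of-sequence token and cuts generation off when it appears.
        for _ in range(max_new):
            tok = model.sample_next(tokens)  # hypothetical sampling call
            if tok == EOS_TOKEN:
                break  # left alone, the model would happily keep sampling
            tokens.append(tok)
        return tokens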


You could probably s/LLM/human/ in your comment. Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs. In between there may be thoughts, like 'I've no more to say', or 'Shut your mouth before they figure you out'. The question is: how is it that humans are not a deterministic computer? And if the answer is that they actually are, then what separates LLMs from actual intelligence?


> Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs.

This metaphor of the pachinko machine (or Plinko game) is exactly how I explain LLMs/ML to laypeople. Training is the process of discovering, through trial and error, the right setting for each peg on the board, so that the ball consistently lands in the right spot-ish.
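
To make the peg-tuning concrete, here's a toy version in Python. This is nothing like real backprop; it just captures the trial-and-error flavor, and every name in it is made up for illustration:

    import math
    import random

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def drop_ball(biases):
        # The ball hits one peg per row; each peg's bias sets P(bounce right).
        pos = 0
        for b in biases:
            if random.random() < sigmoid(b):
                pos += 1
        return pos  # landing slot, 0..len(biases)

    def avg_miss(biases, target, n=20):
        # Average distance from the target slot over n drops.
        return sum(abs(drop_ball(biases) - target) for _ in range(n)) / n

    def train(rows=8, target=6, trials=2000):
        # "Training": nudge one random peg, keep the nudge if balls land
        # closer to the target slot on average.
        biases = [0.0] * rows
        for _ in range(trials):
            candidate = biases[:]
            candidate[random.randrange(rows)] += random.choice([-0.2, 0.2])
            if avg_miss(candidate, target) <= avg_miss(biases, target):
                biases = candidate
        return biases

    print(drop_ball(train()))  # usually lands in or near slot 6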


There’s something meta about your comment and my reaction. Metaphor. The pachinko metaphor seems so apt it made me pause and probably internalize it in some way. It’s now added to my brain’s dataset specifically about LLMs. It’s an interesting moment to be hyper-aware of, given the context you’re also describing (the definition of intelligence). Far out.


It’s survival in reality. Bullshit doesn’t survive (at least at the lower levels of existence; corporate and cultural BS easily does), and that’s why people are so angry at it. We hate absurdity because using absurd results yields failures and wastes time or resources that we need to stay fed and warm.

People can also waste your time or resources (first-line support, women’s shopping, etc.), and the reaction is the same.

I don’t know why there’s still no LLM with a constant reality-feedback loop. Maybe a set of technical issues prevents it. But until that happens, pretrained AI will bullshit itself and everyone else, because there’s nothing that can hit it on the head.


Well, in some ways it does: some insect species use various strategies to fool other insect species. A special type of caterpillar, for example, does this to ant colonies so it can live at their expense.



> This time fly likes your arrow.

And fruit flies like bananas. :)



