
Chain of thought can entirely self-validate. The OP is saying the LLM acts like a photon: it evaluates all possible solutions and chooses the most "right" path. Not quoting the OP here, but my initial thought is that this does seem quite wasteful.

The LLM only gets two guesses at the "end solution". The whole chain of thought is about breaking out the context and the levels of abstraction. How many "guesses" it self-generates and internally validates just comes down to compute power and time.
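
Roughly, in code (a best-of-N sketch; generate() and validate() are hypothetical placeholders for an LLM call and a self-check, not any real API):

    # Best-of-N sampling: generate N candidate answers and keep the
    # one that scores highest under the model's own validation.
    def best_of_n(prompt, generate, validate, n=8):
        candidates = [generate(prompt) for _ in range(n)]
        # More compute (larger n) buys more internally validated
        # guesses, but the final answer is still a single pick.
        return max(candidates, key=validate)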

My counterpoint to the OP here is that this is exactly how our brains work. In every given scenario, we are also evaluating all possible solutions. Our entire stack is constantly listening and either staying silent or contributing to an action potential (either excitatory or inhibitory), so the brain is always "evaluating all potential possibilities" at any given moment. We have a society of mind, each member contributing its opinion, but the ones without much support essentially get "shouted down".
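
Loosely, the "shouted down" part is just a weighted vote. Toy sketch (all names made up, and this is only an analogy, not a neuroscience model):

    # Many "society of mind" members cast weighted excitatory (+) or
    # inhibitory (-) votes; the proposal with the strongest net
    # support wins and the rest are shouted down.
    from collections import defaultdict

    def society_vote(opinions):
        # opinions: (proposal, weight) pairs; weight > 0 excites,
        # weight < 0 inhibits, absence is staying silent.
        totals = defaultdict(float)
        for proposal, weight in opinions:
            totals[proposal] += weight
        return max(totals, key=totals.get)

    # plan_a starts strong but gets inhibited; plan_b wins.
    print(society_vote([("plan_a", 0.9), ("plan_a", -0.6),
                        ("plan_b", 0.4), ("plan_b", 0.3)]))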




> How many "guesses" it self-generates and internally validates

That's completely fair game. That's just search.
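
E.g. a beam search over candidate reasoning steps. Minimal sketch, assuming you supply expand() ("propose next steps") and score() ("internally validate a chain"), both hypothetical:

    # Beam search over candidate reasoning chains: widen, score,
    # prune, repeat. Beam width and depth are the compute knobs
    # mentioned above.
    def beam_search(start, expand, score, beam_width=3, depth=4):
        beams = [[start]]
        for _ in range(depth):
            candidates = [chain + [step]
                          for chain in beams
                          for step in expand(chain)]
            # Keep only the best-scoring chains; everything else
            # is pruned away.
            candidates.sort(key=score, reverse=True)
            beams = candidates[:beam_width]
        return beams[0]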





