
I think a lot of what humans think of as "1. 2. Therefore 3." reasoning isn't different from what the LLM is doing, and not in fact any more clever than that. Plenty of people believe plenty of questionable things that they assume they have thought through but really haven't. They used the context to guess the next idea/word, often reaching the conclusions they started out with.

When you talk about ironclad conclusions, I think what happens is that we come up with those confabulations intuitively, but then we subject them to intense checking - have we defined everything clearly enough, is that leap in reasoning justified, etc.

So what I'd really like to see is a way to teach LLMs to take a vague English sentence and transform it into a form that can be run through a more formal reasoning engine.

Often, instead of asking an LLM to tell you something like how many football fields you could fit inside England, you are better off telling it to write Python code to do this, assuming get_size_football_field() and get_size_England(), both returning areas in m^2, are available.
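
For example, a minimal sketch of what that generated code might look like, assuming the two helpers above exist (their bodies here are placeholders I've filled with rough, approximate areas just for illustration):

    # Sketch of the kind of code the LLM might be asked to write.
    # The helper names come from the comment above; the values are approximate.

    def get_size_football_field() -> float:
        """Approximate area of a standard football pitch (105 m x 68 m), in m^2."""
        return 105 * 68  # roughly 7,140 m^2

    def get_size_England() -> float:
        """Approximate land area of England, in m^2 (about 130,000 km^2)."""
        return 130_000 * 1_000_000  # convert km^2 to m^2

    # The "reasoning" step is now plain arithmetic carried out by the interpreter.
    fields = int(get_size_England() / get_size_football_field())
    print(f"Roughly {fields:,} football fields fit inside England")

The point is that the model only has to translate the vague question into code; the arithmetic itself is delegated to something that can't confabulate.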




> I think a lot of what humans think of as "1. 2. Therefore 3." reasoning isn't different from what the LLM is doing, and not in fact any more clever than that. Plenty of people believe plenty of questionable things that they assume they have thought through but really haven't. They used the context to guess the next idea/word, often reaching the conclusions they started out with.

Agreed that many/most humans behave this way, but some do not. And those who do not are the ones advancing the boundaries of knowledge, and it would be very nice if we could get our LLMs to behave in the same way.


That's what I mean, though: people don't just come up with the right answer out of nowhere. They think through many possibilities generatively, based on "intuition", and most of what they come up with is rubbish - they find this out by applying strict rules of reasoning and often checking against other known (or "probably true") ideas, and winnow it down to the ideas that do in fact advance the boundaries of knowledge.

Oftentimes it's not even the individual who throws out the bad ideas - many times it'll be colleagues poking holes in their argument, removing further unsuitable generated candidates from the pool of possible answers.

If you think clever people just sit in a corner and come up with revolutionary ideas, I think you're probably wrong. Even the ancient philosophers used to hang out with some wine, hear out their peers, and poke holes in their arguments. They called it a symposium.


Sorry, yes, I should have been clearer - I think I am agreeing with you. I was saying that most people just come up with a thought and retroactively apply "logic" to it so they feel like they've reasoned themselves there. A select few people rigorously apply logic and then follow it to whatever conclusion it leads. We call these people scientists, but honestly, in my experience even many scientists can fall into the first camp.


Aha my bad!


The thing is, those were much larger ideas/arguments that could be picked apart by sturdy logical targeting. My experience is with narrow-scope prompts (that still require chain-of-thought) that are much less lofty yet still defeat models. No symposium ever entertained these prompts, because we all already know, for example, the pigeonhole principle for very basic setups. A lot of the time humans do just come up with the right answer. We just don't ask those questions much because we answer them ourselves a lot of the time. Though I only see one small angle through my work.



