
We just can’t accept that we might solve ourselves. People are understandably desperate to understand their experiences as more than an encoding of a thing that might be explained.

And all of our surprising wins and awful mistakes had explainable reasons, dammit; it wasn’t just a misfiring of trained statistical networks!



Or we just don't know how brains work very well and we shouldn't act like we do.


In addition, we shouldn't draw a false equivalence between not knowing how brains work and not knowing how LLMs work, and then conclude that the two must be similar.


Also, it's funny how the brain, throughout history, has always been compared to the latest available technology. For a long time people said "the brain must work like a clock".


Wouldn’t you say that this is how evolution and the spread of information works, in a system that has the spark of some life chemistry?

Of course… The hole in that theory is that evolution never found the wheel.

The steelman of that theory is that evolution invented the neurological and social processes that then went on to invent the wheel. And the platypus and the clap.

Edit: I forgot to bring it back around and make a point ;)

I’m saying that humans invented clocks and CPUs. We only have metaphors that have emerged from the still misunderstood ether of the informatic universe.


Well, we're observing similarities between them, but people insist they are 100% different in how they work.


there are currently two things we are aware of in the universe that can reason abstractly. i don't think this is a coincidence.


Only in a very anthropocentric sense. How do we know an ant colony doesn't reason abstractly (or a human town, for that matter)? What about slime mold or amoeba? Both can solve a maze as well as humans. What makes you think a forest ecosystem isn't capable of abstract thought?

It is only if we narrow "thought" to mean precisely human-like thought that humans and human creations become uniquely capable of something. To that extent, our models of intelligence are very much in the pre-Copernican era.


> Only in a very anthropocentric sense.

yes, that is the sense in which we are discussing intelligence in order to debate whether the human brain and LLMs operate on similar phenomena


The fact that both humans and LLMs can reason abstractly is uninteresting if we define "abstract reasoning" to be exactly what humans do, and then create models with the goal of recreating exactly that. This is then simply a statement that the model is accurate, and the word "intelligence" is there only to confuse.

This would be like finding a flower which produces a unique fragrance, then creating a perfume which approximates that fragrance, and then concluding that since these are the only two things in the universe which can produce this fragrance, there must be something special about the perfume.


i would define abstract reasoning as composing and manipulating a model of reality or other complex system in order to make predictions

> is an uninteresting fact if we define “abstract reasoning” to be exactly what humans do, and then create models with the goal of recreating exactly that

if you find this uninteresting, we have perhaps an irreconcilably differing view of things


Your definition excludes language models, as they are in and of themselves just a model which interpolates from data (i.e. makes predictions). But your definition also includes lots of other systems; most mammalian brains construct some kind of model of reality in order to make predictions. And we have no idea whether other systems (such as fungal networks or ant colonies) do that.

I'm not saying these language models (or my hypothetical perfume) aren't an amazing feat of technology; however, neither has any deep philosophical implications about shared properties beyond the ones they were constructed to share. Meaning, even if LLMs and humans are the only two things in the universe that can reason abstractly in the way humans do, that doesn't mean the two share any other deep properties.


Isn't reasoning abstractly a spectrum though? I took the parent to mean "reason abstractly to the same degree humans do".

Slime molds and amoebas might be able to reason abstractly to some degree, but they can't write code or poetry.


Like I said, only in an anthropocentric sense.

How do you know fungal networks don't write and read poetry under the forest floor? If they do (and we have no reason to doubt that they do), you wouldn't be able to read it, let alone understand it.

The earth's biosphere as a whole also writes code, just in DNA rather than in silicon transistors. Why exclude the earth's biosphere from the things capable of abstract thought?


More than 2.

For example, crows can effectively use tools and communicate abstract concepts to one another from memory. That means they can observe a situation, draw conclusions, and use those conclusions both to act and to decide how to act. That would seem to meet the bar for reasoning abstractly.


Not knowing how something works doesn't necessarily preclude replicating its function a completely different way. You don't have to understand an induction cooktop to hang a pot of water over a campfire.


Not that I disagree. It just seems like some of society’s reaction is to slow down on researching the very question.


It has to do with safety concerns and economic stability, not because people are idiots.


I find this quite similar to the issue of free will: if we live in a generally deterministic universe, where is the space for independent decisions by individuals? Was every single decision we make already predetermined before we were born? A lot of ink has been spent on this topic, and as far as I know, only a small minority of people actually deny free will. One assumption I have as to why is that denying it absolutely sucks from an emotional point of view, and it is a terrible idea to base, e.g., a justice system on.


Maybe it makes me weird, but to me the idea of being determined, yet still having to work through those emotional and cognitive tasks to reach moral conclusions, is some kind of "work" we still have to do to get good outcomes.

That’s still the hole in my theory of consciousness. I admit it ;)

But it doesn't give me as much cognitive dissonance as it gives others to believe that the process of performing moral actions still has to be "processed" by me, a processor. In some sense.


My understanding of how physics works is that due to the probabilistic nature of quantum physics, you can’t predict the future perfectly. So your decisions were not pre-ordained even though they were the result of physical processes happening in your brain.


I think people look more desperate to hype these LLM toys, insisting they are not the next blockchain or self-driving car. When they fail, it's just excuses like you're not using the latest version or not "prompting" them right.

The LLM value add for coding is less than the value add of syntax highlighting in my experience.


I think it's getting better.

In my experience it's certainly useful for unit tests, speeding things up for me pretty dramatically.

Also, when working in a new development language, it was quick to point out "how you do that" with the inline vars I needed manipulated, versus looking things up on Google and copy-pasting.


Alternatively, we just can't accept that we might not solve ourselves. People are understandably desperate to find an explanation for everything, but can't admit that it's just never going to happen.


> We just can’t accept that we might solve ourselves.

To solve ourselves is to know ourselves completely, and to know ourselves completely is to be honest about who we are and why we are what we are, simultaneously across all persons. It assumes perfect knowledge.

There is no statistical approximation nor computational power which can do this.

> People are understandably desperate to understand their experiences as more than an encoding of a thing that might be explained.

Another way to frame this is, "some people are nihilists and do not see life as more than an encoding of a thing that might be explained."


That's been solved long ago.

To know anything (an X) completely you need perfect knowledge. Hence people come up with a set of simplified ways of reasoning about X. They call this a model of X.

The model is incomplete and so primitive, so dumbed down, that we manage to play it forward/backward in our heads or our computers.

If a model checks out against the real outcomes, we proudly exclaim that we understand X.

I'm not being sarcastic, that is just a real method we use all the time.


> To know anything (an X) completely you need perfect knowledge.

Agreed.

> Hence people come up with a set of simplified ways of reasoning about X. They call this a model of X.

A "set of simplified ways of reasoning about X" in order to create a "model of X" does not imply complete understanding. Quite the contrary actually.

To wit, science often models current understanding of a phenomenon. When new evidence (understanding) is discovered, the model is updated to account for it. Sometimes this invalidates the original model, often the model is refined. Either way, progress is made with the tacit agreement that the model may change in the future.

> If a model checks out with the real outcomes we proudly exclaim that we understand X.

Again, this does not support the assertion of "That's been solved long ago." If anything, it affirms there is justification for disagreeing with the original premise to which I responded:

> We just can’t accept that we might solve ourselves.


That only works in the world of math; in the real world we have learned that entropy considerations mean there is no such thing as perfect knowledge.



