
I'd wager that intelligence is nothing but a reified concept: something which doesn't innately exist except as an abstraction we have defined to help us explain complex behavior in nature, without needing to fully understand the underlying complexity that gives rise to it. Like how cave men used the concept of "gods" as a satisfying explanation for why it rains, we use intelligence to explain why something is able to do or achieve certain things.

But fundamentally speaking, evidence suggests that the nature of reality follows simple rules - that everything obeys simple algorithmic rules - which in turn suggests humans are just as robotic as anything else, including the earliest computers and current "AI". We don't look at it that way, because we don't fully understand the underlying complexity that gives rise to our behavior (whereas for computers we do), so we conveniently call ourselves intelligent to satisfy the thirst for a satisfactory explanation of our behavior, and we are dissatisfied with calling current AI true AI.

Thus, I argue, depending on how one wants to define intelligence, either AI does not innately exist and can never exist (if everything truly obeys simple rules), or AGI existed long before humans came to exist, since it's merely a concept which we are free to define.



There's a paper about that, "The Myth of Intelligence" by Henry Schlinger. Abstract:

> Since the beginning of the 20th century, intelligence has been conceptualized as a qualitatively unique faculty (or faculties) with a relatively fixed quantity that individuals possess and that can be tested by conventional intelligence tests. Despite the logical errors of reification and circular reasoning involved in this essentialistic conceptualization, this view of intelligence has persisted until the present, with psychologists still debating how many and what types of intelligence there are. This paper argues that a concept of intelligence as anything more than a label for various behaviors in their contexts is a myth and that a truly scientific understanding of the behaviors said to reflect intelligence can come only from a functional analysis of those behaviors in the contexts in which they are observed. A functional approach can lead to more productive methods for measuring and teaching intelligent behavior.

https://www.researchgate.net/publication/266418013_The_myth_...


I have been thinking along the same lines - the fact that something as comparatively simple and barebones as an LLM can manipulate language symbols well enough to carry on a conversation suggests that language is a lot easier than was previously imagined. I used to think that language was one of the defining characteristics of intelligence, like the instruction set of a CPU, but ChatGPT seems like persuasive evidence against that.


In order to predict the next token it's doing something more like simulating the writer of the words and the context they were likely in while writing them. You cannot make accurate predictions without understanding the world that gave rise to these words.

Consider a detective story with all the clues laid out, and then at the end the detective says: “I know who it is. It is: …” Correctly predicting the next “tokens” entails incorporating all the previous details. The same goes for math questions, emotive statements, etc.
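To make that concrete, here is a toy sketch - assuming Python, the Hugging Face transformers package, and GPT-2, none of which anyone above mentioned - of what "predicting the next token" amounts to: the distribution over the next word is computed from the entire preceding context, so changing an earlier clue changes the prediction.

    # Toy sketch (assumes the `transformers` package and GPT-2; illustrative only).
    # The next-token distribution is conditioned on everything that came before.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    context = "The butler had the key, the motive, and no alibi. The culprit is the"
    ids = tok(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]        # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(repr(tok.decode(int(i))), float(p))  # most likely continuations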

I’d be careful calling it simple. They might be simulating humans, including, for example, a theory of mind, just as a side effect.


Yes, exactly. All of the dismissive ‘it’s just a fancy next word predictor’ articles can’t see the wood for the trees. The fact that the function of the model is to predict the next word tells us almost nothing about what internal representation of the world it has built in order to generalise. I don’t for a second think it’s currently of comparable complexity to the world model we all have in our brains, but I also don’t think there’s any clearly defined theoretical limit on what could be learned, beyond the need for the internal model to make better predictions and for the optimiser to find that more optimal set of weights (which might be a limit in practice as of now).


> You cannot make accurate predictions without understanding the world that gave rise to these words.

I think you must either admit that ChatGPT does exactly this, or else give up our traditional connotation of "understand". ChatGPT has never seen a sunset, felt pain, fallen in love, etc., yet it can discuss them coherently; to call that understanding the world is to implicitly say that the world can be understood solely through reading about it and keeping track of which words follow which. It's amazing that generating text from statistical relationships between tokens in a corpus, which produced nonsensical-but-grammatical sentence fragments at smaller scales, can expand to concepts and essays with more compute, but it is just a difference in scale.
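For a sense of what the small-scale end of "keeping track of which words follow which" looks like, here's a rough bigram sketch (the toy corpus and sampling scheme are invented for illustration; the large models are, loosely speaking, this idea scaled up with a vastly richer notion of context):

    # Toy bigram chain: generate text purely from "which word follows which" counts.
    # The corpus is made up; the point is the nonsensical-but-grammatical output.
    import random
    from collections import defaultdict

    corpus = ("the cat sat on the mat . the dog saw the cat . "
              "the cat saw the sunset and the dog sat .").split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)                 # record observed successors

    word, out = "the", []
    for _ in range(12):
        out.append(word)
        word = random.choice(follows.get(word, corpus))  # sample next word from counts
    print(" ".join(out))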

> I’d be careful calling it simple.

I'm not calling it simple, I'm calling us simple! I'm saying that ChatGPT is proof that natural language processing is much easier than I previously thought.


OK, but billions upon billions of “statistical relationships”... I mean, at some point the term “simple” loses its meaning. I get your point though. It is not pure magic.


Yeah, "simple" as in we didn't have to make something that can learn general concepts, and then teach it language. It feels like a hack, doesn't it? Like you were working on a problem you thought was NP-hard and then you stumble over a way to do it in O(n^2).


Yeah, I think we get sidetracked by how it “feels” to us when we learn. We forget that that is just a convenient story our mind tells itself. We are incapable, or at least severely handicapped, when it comes to raw experience and the knowing of it.

Somehow this approach to ML feels kind of natural to me, but it’s hard to articulate why.


“You cannot make accurate predictions without understanding the world that gave rise to these words.”

This depends on the definition of understanding. There are an infinite number of equations that could describe the trajectory of a ball being thrown, and none of them are exactly correct, depending on how deep down the understanding hole one travels.
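As a crude illustration (the numbers and the drag model are picked arbitrarily, not taken from anything above): two perfectly usable "understandings" of the same throw that disagree once you look closely, and you could keep adding terms (spin, wind, air density, ...) without ever being exactly right.

    # Two models of a thrown ball's height over time: an idealized parabola and a
    # linear-air-drag variant. Both are approximations; neither is "the" equation.
    import math

    g = 9.81                      # m/s^2
    v0, theta = 20.0, 0.8         # made-up launch speed (m/s) and angle (rad)
    vy = v0 * math.sin(theta)

    def height_vacuum(t):
        return vy * t - 0.5 * g * t * t

    def height_linear_drag(t, k=0.05):
        # closed-form solution of dv/dt = -g - k*v; k is an arbitrary drag constant
        return (vy + g / k) * (1 - math.exp(-k * t)) / k - g * t / k

    for t in (0.5, 1.0, 1.5, 2.0):
        print(t, round(height_vacuum(t), 2), round(height_linear_drag(t), 2))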


The point is, you need those equations. Their particular form is secondary and indeed up for debate.


Our current ‘simple rules’ for explaining nature can only account for a small percentage of the visible universe.

Assuming there are simple rules, we don’t know how, for example, an electron has the wherewithal to follow them (when does it read the rules or check on them, where are they stored, etc.). It’s mystery all the way down (unless you define it as simple using hand-wavy abstractions ;))


In the end we will find out that AI is not that intelligent, and since it comes so close to mimicking us, that will say the same thing about our own intelligence.


You mean it’s “emergent”. Yes, I agree.

But so is everything else that is not a fundamental particle/wave/string.

So, while true, it’s not that useful in and of itself.


Normally the materialistic attitude is based on a conceit that we basically already know everything worth knowing. Your position seems to be that we don't, but that when we do it will be equally empty and meaningless.


Yet you have undeniable proof that your own consciousness is as real as it gets, and that you experience life in a way that isn’t just an abstract concept. It’s absolutely there (at least, from your own point of view).

I’m not a religious person and I don’t believe in a soul or anything magical like that, but it’s just impossible for me to accept that I’m just a bunch of atoms following rules. I know that there’s something there, I see the evidence right before me, even if I can’t explain it or prove it to you.



