A transformative aspect of human verbal intelligence involves dealing with concepts and their combinations. ChatGPT does this very well. I think we can agree that ChatGPT provides intelligent completions for an astonishing range of human concepts?
It seems appropriate to describe what ChatGPT understands and what it doesn’t understand through evals or assessments (in the same way that we can use assessments to determine what a student understands or doesn’t). So if we have to call it “computational understanding”, fine — but clearly ChatGPT understands an incredible range of concepts and their combinations.
It’s terrible at math and logic, but ChatGPT is amazing at concepts—that’s why it is so powerful.
It doesn’t work programmatically—that’s why it fails at logic. But it can reason inductively very very well. Do you have an example besides logic/math where it doesn’t understand simple concepts?
> Do you have an example besides logic/math where it doesn’t understand simple concepts?
All the time. It often fails to understand simple concepts. It doesn't really seem to understand anything.
For example, try to get it to write some code for a program in a moderately obscure programming language. It's terrible: it will confidently produce stuff, but make errors all over the place.
It's unable to understand that it doesn't know the language, and it doesn't know how to ask the right questions to improve. It doesn't have a good model of what it's trying to do, or what you're trying to do. If you point out problems it'll happily try again and repeat the same errors over and over again.
What it does is intuit an answer based on the data it's already seen. It's amazingly good at identifying, matching, and combining abstractions that it's already been trained on. This is often good enough for simple tasks, because it has been trained on so much of the world's output that it can frequently map a request to learned concepts, but it's basically a glorified Markov model when it comes to genuinely new or obscure stuff.
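(To unpack the Markov comparison a bit: a Markov text model just samples the next word from the frequencies observed after the current one. A toy order-1 sketch in Python, with a made-up corpus of my own, looks something like this; GPT's conditioning is vastly richer, but the "continue from what you've already seen" shape is the point of the analogy.)

    import random
    from collections import defaultdict

    # Toy order-1 word-level Markov chain: record which words follow which,
    # then generate by repeatedly sampling an observed successor.
    # (The corpus here is a made-up illustration, not real training data.)
    corpus = "the model predicts the next word given the previous word".split()

    successors = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev].append(nxt)

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            if word not in successors:
                break
            word = random.choice(successors[word])  # duplicates make this frequency-weighted
            out.append(word)
        return " ".join(out)

    print(generate("the"))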
It's a big step forward, but I think the current approach has a ceiling.
> try to get it to write some code for a program in a moderately obscure programming language. It's terrible: it will confidently produce stuff, but make errors all over the place.
Is that really any different than asking me to attempt to program in a moderately obscure programming language without a runtime to test my code on? I wouldn't be able to figure out what I don't know without a feedback loop incorporating data.
> If you point out problems it'll happily try again and repeat the same errors over and over again.
And quite often if you incorporate the correct documentation, it will stop repeating the errors and give a correct answer.
It's not a continuous learning model either. It has a limited context window past which it begins forgetting things. So yeah, it has limits far below most humans, but far beyond any we've seen in the past.
How about this? Flip it into training mode, feed it the language manual for an obscure language, then ask it to write a program in that language? That's a test that many of us here have passed...
I think you missed my point. It's understandable that it doesn't know how to program in a moderately obscure language. But the model doesn't understand that it doesn't. The specific concepts it doesn't understand are what it is, what its limitations are, and what it's being asked to do.
It doesn't seem to have any "meta" understanding. It's subconscious thought only.
If I asked a human to program in a language they didn't understand, they'd say they couldn't, or they'd ask for further instructions, or some reference to the documentation, or they'd suggest asking someone else to do it, or they'd eventually figure out how to write in the language by experimenting on small programs and gradually writing more complex ones.
GPT4 and friends "just" generate an output that seems like it could plausibly answer the request. If it gets it wrong then it just has another go using the same generative technique as before, with whatever extra direction the human decides to give it. It doesn't think about the problem.
("just" doing a lot of work in the above sentence: what it does is seriously impressive! But it still seems to be well behind humans in capability.)
I agree it has very minimal metacognition. That’s partially addressed through prompt chaining—i.e., having it reflect critically on its own reasoning. But I agree that it lacks self-awareness.
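To make "prompt chaining" concrete, here's a minimal sketch, assuming a hypothetical ask() helper that wraps whatever chat API you're using (the two-pass critique prompt is just one possible pattern, not the only one):

    # Sketch of prompt chaining for self-critique. ask() is a hypothetical
    # stand-in for whatever chat-completion call you actually use.
    def ask(prompt: str) -> str:
        raise NotImplementedError("wrap your chat API of choice here")

    def answer_with_reflection(question: str) -> str:
        draft = ask(question)
        critique = ask(
            "Critically review the following answer for factual or logical "
            f"errors and list any you find.\n\nQuestion: {question}\n\nAnswer: {draft}"
        )
        return ask(
            f"Question: {question}\n\nDraft answer: {draft}\n\n"
            f"Critique: {critique}\n\nRewrite the answer, fixing the issues raised."
        )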
I think artifacts can easily reflect the understanding of the designer (Socrates claims an etymology of Technology from Echo-Nous [1]).
But for an artifact to understand — this is entirely dependent on how you operationalize and measure it. Same as with people—we don’t expect people to understand things unless we assess them.
And, obviously we need to assess the understanding of machines. It is vitally important to have an assessment of how well it performs on different evals of understanding in different domains.
But I have a really interesting supposition about AI understanding that involves its ability to access the Platonic world of mathematical forms.
I recently read a popular 2016 article on the philosophy of scientific progress. It defines scientific progress as increased understanding — and calls this the “noetic account.” [2] That’s a bit of theoretical support for the idea that human understanding consists of our ability to conceptualize the world in terms of the Platonic forms.
Plato ftw!
[1] see his dialogue Cratylus
[2] Dellsén, F. (2016). Scientific progress: Knowledge versus understanding. Studies in History and Philosophy of Science Part A, 56, 72-83.
No, it's amazing at words. Humans use words to encode concepts, but ChatGPT doesn't get the concepts at all - just words and their relationships.
To the extent that humans have encoded the concepts into words, and that text is in the training set, to that degree ChatGPT can work with the words in a way that is at least somewhat true to the concepts encoded in them. But it doesn't actually understand any of the concepts - just words and their relationships.
I disagree. If you play around with ChatGPT4 enough you can see it understands the concept. That is, it's able to model it and draw inferences from the model in a way that's impossible through just words and their relationships. The "Sparks of AGI" paper gives some good examples, for instance where it's asked to balance a book, 9 eggs, a laptop, a bottle and a nail. I recently asked it to design a plan for a bathroom. It wrote an SVG mockup and got most of the details right. For instance, it understood that the sink and tub require both cold and hot water lines but the toilet only requires a cold water line. These things are not possible with just words; you can see it's able to create an underlying world model.
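I don't have the actual output to hand, but to give a flavour of what "got the plumbing right" means, here's a hypothetical Python sketch of that kind of SVG mockup (fixture names, coordinates and colours are my own invention, not what ChatGPT produced):

    # Hypothetical reconstruction of the kind of SVG mockup described above:
    # fixtures as labelled rectangles, hot supply lines in red, cold in blue.
    fixtures = {
        "sink":   {"x": 20,  "supply": ["hot", "cold"]},
        "tub":    {"x": 120, "supply": ["hot", "cold"]},
        "toilet": {"x": 220, "supply": ["cold"]},   # cold line only
    }
    color = {"hot": "red", "cold": "blue"}

    parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="320" height="130">']
    for name, f in fixtures.items():
        parts.append(f'<rect x="{f["x"]}" y="20" width="60" height="40" fill="none" stroke="black"/>')
        parts.append(f'<text x="{f["x"]}" y="75" font-size="10">{name}</text>')
        for i, line in enumerate(f["supply"]):
            x = f["x"] + 15 + 30 * i
            parts.append(f'<line x1="{x}" y1="60" x2="{x}" y2="110" stroke="{color[line]}"/>')
    parts.append('</svg>')
    print("\n".join(parts))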
I don't think this is the case at all. Language is how we encode and communicate ideas/concepts/practicalities; with sufficient data, the links are extractable just from the text.
I don't see how the two examples I gave are possible with just text. They require understanding spatial relationships between objects and their physical properties.
Our own understanding of spatial reasoning is tied in many respects to our hand-eye coordination, muscle memory and other senses: we learn to conceptualize "balance" by observing and feeling falling and when things are "about to" fall.
What GPT does is not "text" - although it centers that as the interface - but "symbols". The billions of parameters express different syntaxes and how they relate to each other. That's why GPT can translate languages and explain things using different words or as different personas.
So when we ask it to solve a spatial problem, we aren't getting a result based on muscle memory and visual estimation like "oh, it's about 1/3rd of the way down the number line". Instead, GPT has devised some internal syntax that frames a spatial problem in symbolic terms. It doesn't use words as we know them to achieve the solution, but has grasped some deeper underlying symbolic pattern in how we talk about a subject like a physics problem.
And this often works! But it also accounts for why its mathematical reasoning is limited in seemingly elementary ways and it quickly deviates into an illogical solution, because it is drawing on an alien means of "intuiting" answers.
We can definitely call it intelligent in some ways, but not in the same ways we are.
This is the debate, isn’t it? I think if we create tests for understanding and deliver them to people, we will find variations in what people understand. I think we will find the same for chatGPT.
But I suspect your notion of understanding is not measurable, is it? For you, chatGPT lacks something essential such that it is incapable of understanding, no matter the test. Or do you have a way to measure this without appeal to consciousness or essentialism?
Well, consciousness is part of the question, isn't it? We know that we are conscious (even if we can't precisely define what that means). Is ChatGPT conscious? I'm pretty sure the answer is no, but how do you prove it?
Does understanding require consciousness? Maybe yes, for the kind of understanding I'm thinking of, but I'm not certain of that.
How do you measure understanding? You step a bit outside the training set, and see if whoever (or whatever) is being tested can apply what it has learned in that somewhat novel situation. That's hard when ChatGPT has been trained on the entire internet. But to the degree we can test it, ChatGPT often falls down horribly. (It even falls down on things that should be within its training set.) So we conclude that it doesn't actually understand.
Another part of all this is that ChatGPT was trained on the entire internet and still does a mediocre job even when it's doing well. That's an amazing amount of resources required to arrive at being able to do what it does, writing code or whatever, when a person typically requires a couple slices of pizza and general access to an internet which that person has read almost none of.
How come humans are so efficient when these computers are using enormous amounts of energy? When will we have a computer that only requires a few slices of pizza's worth of energy to get to the next step?
What do you mean exactly by "besides math/logic"? Because logic underpins everything we think.
For example it cannot identify musical chords, because despite (I presume) ample training material including explanations of how exactly this works, it cannot reasonably represent this as an abstract, rigorous rule, as humans do. So I ask what C E G is and it tells me C major correctly, as it presumably appears many times throughout the training set, yet I ask F Ab Db and it does not tell me Db major, because it did not understand the rules at all.
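For reference, the rule itself is completely mechanical, which is what makes the failure telling. A minimal sketch in Python, with my own simplification of collapsing enharmonic spellings like Ab and G# to the same pitch class:

    # Minimal sketch of the abstract rule: a major triad is root + major third
    # (4 semitones) + perfect fifth (7 semitones), in any inversion.
    # (Simplification: enharmonic spellings collapse to one pitch class number.)
    PITCH = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4, "F": 5,
             "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9, "A#": 10, "Bb": 10, "B": 11}
    NAME = {0: "C", 1: "Db", 2: "D", 3: "Eb", 4: "E", 5: "F",
            6: "Gb", 7: "G", 8: "Ab", 9: "A", 10: "Bb", 11: "B"}

    def major_triad(notes):
        pcs = {PITCH[n] for n in notes}
        for root in pcs:
            if pcs == {root, (root + 4) % 12, (root + 7) % 12}:
                return NAME[root] + " major"
        return None

    print(major_triad(["C", "E", "G"]))    # C major
    print(major_triad(["F", "Ab", "Db"]))  # Db major (first inversion)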
I hate to break it to you, but humans aren't thinking logically, or at least not exclusively logically. In fact I would say that humans are not using logic most of the time, and we go by intuition most of our lives (intuition being shorthand for experience, pattern matching and extrapolation). There is a reason why we teach formal logic in certain schools...
i dunno man, I asked it about that chord and it told me it was an F diminished 7th chord. I asked it what a D flat major chord was and it told me. I then asked it what the relationship between the two was. It didn't catch it immediately, but when I told it to think about inversions it got it. That's decent for a music student.
It even told me about the role of each in a chord progression and how even though they share the same notes they resolve differently
Humans clearly don't think logically anyhow. That's why we need things like an abacus to help us concretely store things; in our head everything is relative in importance to other things in the moment.
> I asked it about that chord and it told me it was an F diminished 7th chord. I asked it what a D flat major chord was and it told me. I then asked it what the relationship between the two was. It didn't catch it immediately, but when I told it to think about inversions it got it.
So it gave you a wrong answer and when you spelled out the correct answer it said "OK" x) Was that it? Or am I missing something?
it gave me a correct answer (but not the one GP expected), and then I asked it about another chord GP wanted it to say (D flat major), which can be stylistically replaced with this one (by "inverting" the top note). I asked it what the relationship between the two was and it correctly told me how they're used in songwriting (it gave information about the emotions they invoke and how they resolve to other chords), but it didn't tell me the (frankly, trivia-level) fact that they share notes if you happen to invert one in a particular way.
In music theory a set of the same notes can be one of several chords; the correct answer as to which one it is depends on the "key" of the song and the context, which wasn't provided, so the AI decided to define the root note of the chord as the bottom one, which is a good and pretty standard assumption. In this case major chords are much more common than weird diminished 7th chords, but I think you'll agree the approach to answering the question makes sense.
It's kind of like asking the AI about two equivalent mathematical functions expressed in different notation, and it saying a bunch of correct facts about the functions, like their derivative, x-intercept and so on and how they can be used, but needing a prod to explicitly say that they are interchangeable. It's the kind of trivial, barely-qualifies-as-an "oversight" I would expect actual human people who fully understand the material to make.
A diminished 7th above F is B, not B flat. Also a 7th chord is understood to be a triad plus the 7th (and therefore the diminished 5th above the F is also missing). Unless I'm missing something it did indeed produce a wrong answer.
Sure. Let’s take quantum theory. There are lots of concepts that are based in math but can be reasoned about non-mathematically.
The reason that chatGPT can write quantum computing programs in any domain (despite the lack of existing example programs!) is that it can deal with the concepts of quantum computing and the concepts in a domain (e.g., predicting housing prices) and align them.
Very little of human reasoning is based on logic and math.
> There are lots of concepts that are based in math but can be reasoned about non-mathematically.
Can you be more specific? I literally don't know what you mean. What can you say about quantum mechanics that is not mathematical or logical in nature? Barring metaphysical issues of interpretation, which I assume is not what you mean.
Rarely do I think responding with chatGPT is appropriate, but this is one of those times.
* Of course! When discussing concepts from quantum mechanics without getting into the mathematical details, we can focus on the general ideas and principles that underlie the theory. Here are some key concepts in quantum mechanics that can be explained in a non-mathematical way:
1. Superposition: In quantum mechanics, particles can exist in multiple states simultaneously, until they are measured. This is called superposition. It's like a coin spinning in the air, being both heads and tails at the same time, until it lands and shows one face.
2. Wave-particle duality: Particles like electrons, photons, and others exhibit both wave-like and particle-like properties. This means they can sometimes behave as particles, and at other times, as waves. This dual nature has been experimentally demonstrated through phenomena like the double-slit experiment.
3. Quantum entanglement: When two particles become entangled, their properties become correlated, regardless of the distance between them. If you measure one of the entangled particles, you'll immediately know the state of the other, even if they are light-years apart. This phenomenon is often referred to as "spooky action at a distance."
4. Heisenberg's uncertainty principle: This principle states that we cannot simultaneously know the exact position and momentum of a particle. The more precisely we know one of these properties, the less precisely we can know the other. This inherent uncertainty is a fundamental aspect of quantum mechanics.
5. Quantum tunneling: In quantum mechanics, particles can "tunnel" through barriers that would be insurmountable in classical physics. This is because the particle's wave function, which describes its probable location, can extend beyond the barrier, allowing the particle to appear on the other side.
6. Quantum superposition of states: Quantum systems can exist in multiple states at once, and when you measure a property of the system, it "collapses" into one of the possible states. This is a fundamental difference between quantum and classical mechanics, where systems have definite properties even before measurement.
These concepts can be discussed and reasoned about without delving into the complex mathematical equations that govern quantum mechanics. While a mathematical understanding is necessary for rigorous study and application of the theory, non-mathematical discussions can still provide valuable insights into the strange and fascinating world of quantum mechanics.*
Mate, not only is every single one of those concepts a mathematical one, but the explanations it gives are misleading or incorrect! E.g. the typical pop-sci misleading lines of "it's a wave and a particle at the same time" (it isn't both, it's neither) or "it's in two states at the same time, like a coin which is heads and tails" (no it's not, the point is precisely that it doesn't behave according to classical probabilities).
Claiming these concepts are not mathematical is like saying addition is not mathematics because you can explain it with words or diagrams to a child!
Ask a person on the street and they will either say "I don't know", "I don't have time for this", or on rare occasion you'll find some nerd who starts juggling numbers out loud, eventually reaching some rational terminus (at worst with an error in recall, but not in principle, along the way).
What evidence do we have that this is how intelligence works?