> For the first answer I got zero results on Google, so it's quite unlikely that it was part of the training data
Sure, not literally part of the training data.
Statistical AI operates in a transformed space derived from the training data; points in that space will not, in general, exist in the original.
So imagine generating 1000 circles and putting their radii on a line: 0.1, 0.2, 0.3, ...
The circles are the training data, and the "implied line" is the transformed space.
Now, AI here is capable of generating a circle with radius 0.15 and hence that circle is "not in the original dataset".
This type of "novelty" isn't what I'm concerned with; generative AI must have that or else it'd be entirely useless -- only a google search.
Rather I'm talking about, for example, whether without "Rust" in its training data it could develop "Rust" from everything else. Is there enough data on lifetimes/borrowing/etc. research in the PDFs it has scanned to somehow "find a midpoint between those PDFs and C++"?
It seems a bit mad to suppose so -- I could be wrong, such a midpoint does exist -- but I'm extremely doubtful we humans have been so helpful as to write the 1000s of academic PDFs needed for this system to find it.
The novelty I'm talking about is dimensions in the transformed space. The system cannot derive "additional ways to move" without the source data actually containing those ways.
This is, roughly, equivalent to saying that it's biased towards the on-average ways we have conceptualised our problems as represented by the on-average distribution of academic articles, github repos, webpages, etc. *that we happened to have created*.
This is a serious "intellectually conservative" bias.
For sure it can find circles it hasn't seen; but could it find spheres from circles alone? No.
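To make the "additional dimensions" point concrete, here is a minimal numpy sketch (the plane-in-3D setup and the linear PCA "latent space" are my own toy choices, not anything from the thread): the training data only ever varies along two directions, and the fitted latent space can interpolate freely within those directions but never produces a third.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": 1000 points that only ever vary along two directions (z is always 0).
X = np.column_stack([rng.normal(size=1000), rng.normal(size=1000), np.zeros(1000)])

# Fit the latent space from the data alone (PCA via SVD).
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:2]                # the only "ways to move" the data exhibits

# Generate a "novel" point by picking a latent code no training point had.
z_new = np.array([0.15, -0.3])
x_new = mean + z_new @ components

print(np.round(x_new, 3))          # its z-coordinate stays ~0: never off the training plane
print(np.round(S, 3))              # the third singular value is 0: there is no third direction to find
```

Circles it hasn't seen, yes; a sphere, never.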
I don't think this argument holds water at all. Can we imagine that the people who created Rust were able to do so only because they contributed some magical ingredient from their souls, which didn't exist anywhere in the world's collected pre-Rust printed materials? It's more economical to assume that they were able to create Rust because the necessary concepts already existed, in which case a sufficiently trained AI might do something similar.
Or working in the opposite direction: we can think of AIs as processing concepts in some dimensional space, sure. But we have no conception at all of what that space is like, so there's no reason to expect that a midpoint in that space between two objects we're familiar with would also be familiar to us. I mean, I have no idea what the midpoint between Rust and C++ is, or how I'd go about describing it. Surely an AI that thinks in tensors is more capable than we are to explore the space between known concepts, so why couldn't we expect to learn something novel from one?
Concepts are developed by animals over time. A baby develops sensory-motor concepts from day one; a child abstracts them; a teenager communicates them; an adult refines that communication.
They are not developed as a matter of averaging over all the text on the internet.
Respectfully, that sounds like hand-waving. Claiming to know where concepts do and don't come from just leads to questions like "did the natural numbers exist before we did?", which are centuries old and presumably not resolvable.
Whereas a more focused question like "can an AI produce outputs that are novel to someone familiar with all of the AI's inputs?" seems resolvable, and even if one thinks it's unlikely or not easy, it's very hard to buy the idea that it's impossible.
No, not really. People in this area are severely under-informed about animal learning and "ordinary science".
AI evangelists like to treat as "merely philosophical matters" profoundly scientific ones.
The issues here belong to ordinary science. Can a machine with access only to statistical patterns in the distribution of text tokens infer the physical structure of reality?
We can say, as certain as anything: No.
Associative statistical models are not phenomenological models (ie., specialised to observable cause-effect measures); and phenomenological models are not causal (ie., do not give the mechanism of the cause-effect relationship).
Further, we know, as surely as an athlete catches a ball, that animals develop causal models of their environments "deeply and spontaneously".
And we know, to quite a robust degree, how they do so -- by using interior causal models of their bodies to change their environments through intentional acts, which can confirm or disconfirm environmental models. This is modelled logically as abduction, causally as sensory-motor adaption, and so on.
This is not a philosophical matter. We know that "statistical learning" which is nothing more than a "correlation maximisation objective" over non-phenomenological, non-causal, non-physical data produces approximate associative models of those target domains -- that have little use beyond "replaying those associations".
ChatGPT appears to do many things. But you will see soon, after a year or two of papers published, that those things were tricks. That "replaying associations in everything ever written" is a great trick, that is very useful to people.
Today you can ask ChatGPT to rewrite Harry Potter "if Harry were evil" or some such thing. That's because there are many libraries of books on Harry Potter and "evil" -- and by statistical interpolation alone, you can answer an apparent counter-factual question which should require imagination.
But give ChatGPT an actual counter-factual whose parts are only in the question, and you'll be out-of-luck.
Eg., tell it about tables, chairs, pens, cups and ask it to arrange them using given operations so that, eg., the room is orderly. Or whatever you wish.
Specified precisely enough you can expose the trick.
>This is not a philosophical matter. We know that "statistical learning" which is nothing more than a "correlation maximisation objective" over non-phenomenological, non-causal, non-physical data produces approximate associative models of those target domains -- that have little use beyond "replaying those associations".
Why do you think the data LLMs are trained on are non-causal? Let's take causation as asymmetric correlation. That is, the presence of (A,B) in the training data does not imply the presence of (B,A). But of course human text is asymmetric in this manner and LLMs will pick up on this asymmetry. You might say that causation isn't merely about asymmetric correlation, but that of the former determining the latter. But this isn't something we observe from nature; it is an explanatory posit that humans have landed on in service of modelling the world. So causation is intrinsically explanatory, and explanation is intrinsically causal. The question is, does an LLM in the course of modelling asymmetric correlations, develop something analogous to an explanatory model. I think so, in the sense that a good statistical model will intrinsically capture explanatory relations.
Cashing out explanation and explanatory model isn't easy. But as a first pass I can say that explanatory models capture intrinsic regularity of a target system such that the model has an analogical relationship with internal mechanisms in the target system. This means that certain transformations applied to the target system have corresponding transformations in the model that identify the same outcome. If we view phenomena in terms of mechanistic levels, with the extrinsic observable properties as the top level and the internal mechanisms as lower levels, an explanatory model will model some lower mechanistic level and recover properties of the top level.
But this is in the solution space of good models of statistical regularity of an external system. To maximally predict the next token in a sequence just requires a model of the process that generates that sequence.
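As a toy illustration of "causation as asymmetric correlation" in token order (the corpus and the word pair are made up), you can literally just count how often one token precedes another:

```python
from collections import Counter

# A made-up corpus standing in for "human text".
corpus = [
    "it was raining so my shoes got wet",
    "it was raining and my shoes got wet",
    "my shoes got wet because it was raining",
]

pair_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, a in enumerate(tokens):
        for b in tokens[i + 1:]:
            pair_counts[(a, b)] += 1          # a occurs somewhere before b

print(pair_counts[("raining", "wet")])        # 2
print(pair_counts[("wet", "raining")])        # 1 -- the ordering is asymmetric
```

An LLM's training objective is sensitive to exactly this kind of asymmetry, just at vastly greater scale.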
To model the process which generates the sequence you'd need P(x[0] | x[-1], x[-2], ... -> x[0]), not just P(x[0] | x[-1], x[-2], ...) -- where `->` says we select only those cases where x[-1],... brought about x[0].
To see why this is the case, suppose we do have a god's eye-view of all of spacetime.
P(A|B)
always selects for all instances where A follows B, for whatever reason.
P(A| B -> A)
selects only those instances where A's following B was brought about by B.
Eg.,
P(ShoesWet | Raining)
is very different from
P(ShoesWet | Raining -> ShoesWet)
In the former case the two events have, in general, nothing to do with each other.
To select "Raining -> ShoesWet" even with a god's-eye-view we need more than statistics... since those events which count as "Raining -> ShoesWet" have to be selected on a non-statistical basis.
For the athlete catching a ball, or the scientist designing the experiment, we're interested only in those causal cases.
For sure P(A|B) is an (approximate, statistical) model of P(A | B -> A) -- but it's a very restricted, limited model.
The athlete needs to estimate P(ball-stops | catch -> ball-stops)
NOT P(ball-stops | catch), which is just any case of the ball stopping given any case of catching.
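A minimal simulation of the distinction (the numbers and the "sprinkler" alternative cause are my own toy choices): with a god's-eye view we can record whether the rain actually brought the wet shoes about, and the two quantities come apart.

```python
import random

random.seed(0)
days = []
for _ in range(100_000):
    raining = random.random() < 0.3
    stayed_inside = random.random() < 0.5
    stepped_in_sprinkler = random.random() < 0.2     # an unrelated way to get wet shoes
    wet_from_rain = raining and not stayed_inside
    shoes_wet = wet_from_rain or stepped_in_sprinkler
    days.append((raining, shoes_wet, wet_from_rain)) # the last field is the god's-eye "cause" column

rainy = [d for d in days if d[0]]
p_wet_given_rain = sum(d[1] for d in rainy) / len(rainy)

caused = [d for d in days if d[2]]                   # only cases where rain brought about the wetness
p_wet_given_rain_caused_wet = sum(d[1] for d in caused) / len(caused)

print(p_wet_given_rain)             # ~0.6: mixes rain-caused and sprinkler-caused wet shoes
print(p_wet_given_rain_caused_wet)  # 1.0: if rain brought the wetness about, the shoes are wet by definition
```

The catch is the `wet_from_rain` column: it is exactly the thing the joint statistics of (raining, shoes_wet) alone never hand you.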
Let me alter your example a bit: we have P(A|B), we want P(A|B,B->A). But given enough examples of the form P(A|B), a good algorithm can deduce B->A and use it going forward to predict A. How? By searching over the space of explanatory models to find the model that helps to predict P(A|B) in the right cases and not in the wrong cases. LLMs do this with self-attention, by taking every pair of symbols in the context window and testing whether each pair is useful in determining the next token. As the attention matrix converges, the model can leverage the presence of "Raining & Outside" in predicting "ShoesWet".
Of course, this is a rather poor excuse for an explanation. The fact that "outside" and "raining" are close doesn't explain why "my shoes are wet". But it does get us closer to a genuine explanation in the sense that it eliminates a class of wrong possibilities from consideration: every sentence that doesn't have outside in proximity to raining downranks the generation "my shoes are wet". The model is further improved by adding more inductive relationships of this sort. For example, the presence of an expanded umbrella downranks ShoesWet, the presence of "stepped in puddle" upranks it. Construct about a billion of these kinds of inductive relationships, and you end up with something analogous to an explanatory model. The structural relationships encoded in the many attention matrices in modern LLMs in aggregate entail the explanatory relationships needed for causal modelling.
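For concreteness, here is the mechanism being described, stripped down to a single toy attention head with random, untrained weights (the vocabulary, dimensions and matrices are all made up): every pair of context tokens gets a relevance score, and those scores re-weight what feeds the next prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["outside", "raining", "umbrella", "shoes", "wet"]
d = 8
E = rng.normal(size=(len(vocab), d))                  # made-up token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

context = ["outside", "raining", "shoes"]
X = np.stack([E[vocab.index(t)] for t in context])

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                         # a relevance score for every token pair
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)   # row-wise softmax
attended = weights @ V                                # each position becomes a weighted mix of the context

print(np.round(weights, 2))                           # the "attention matrix" over token pairs
```

In a trained model those matrices are what, on this view, end up encoding the useful-in-the-right-cases associations; whether that adds up to an explanatory model is what the rest of the thread argues about.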
> How? By searching over the space of explanatory models to find the model that helps to predict P(A|B) in the right cases and not in the wrong cases.
But the machine doesn't know which are the right cases. We aren't presuming there's a column, Z = 1 for B -> A, and Z = 0 otherwise -- right?
The machine has no mechanism to distinguish these cases.
> testing whether each pair is useful in determining the next token
This isn't causation.
> every sentence that doesn't have outside in proximity to raining downranks the generation
So long as the sequential structure of sentences corresponds to the causal structure of the world -- but that's kinda insane, right?
We haven't rigged human language so that the distribution of tokens is the causal structure of the world. The reason text generated by LLMs appears meaningful is that we understand it. The actual structure of text generated isn't "via" a model of the world.
(Consider, for example, training an LLM on a dead untranslated language -- its output is incomprehensible, and its weights are arbitrarily correlated with anything we care to choose.)
Nevertheless, given our choice of tokens, you do have a model which says something like P(ShoesWet | Raining & Outside & ...).
That's true. But we're choosing these additional conjunctions because we already know the causal model; these conjunctions are how we're eliminating confounders to get an approximation close to the actual.
(Which you'll never get: the actual value is `1`. Iff B -> A, then P(A|B->A) = 1 -- this is a deductive inference necessary for ordinary science to take place.)
In any case, P(A | B -> A) means without any confounders. To actually find the LLM's approximation of this we'd need to compute:
P(A | B & C1 & C2 & C3 & ...) for all C_i
And then find P(A | B & C') s.t. the conjunction C' makes A maximally likely given B.
But even if you find a set {C} s.t. P(A | B & C) is high, you won't have found causal conditions.
All that statistical association models here is, at best, salience -- not causal relevance.
>We haven't rigged human language so that the distribution of tokens is the causal structure of the world [...] The actual structure of text generated isn't "via" a model of the world.
This is an odd claim. I certainly say that I picked my cup off the floor rather than that I picked my cup off the ceiling, because gravity causes things to fall down rather than up. Human language isn't "rigged" to represent the causal structure of the world, but it does so nonetheless. The distribution of tokens is such that the occurrences of (A,B) and (B,A) are asymmetric, and this is precisely because features of the world influence the distribution of words we use. A sufficiently strong model should be able to recover a model of this causal structure given enough training data.
>That's true. But we're choosing these additional conjunctions because we already know the causal model; these conjunctions are how we're eliminating confounders to get an approximation close to the actual.
But these patterns are represented in the training data by the words we use to discuss raining and wet shoes. There is every reason to think a strong model will recover this regularity.
>All that statistical association models here is, at best, salience -- not causal relevance.
That's all we can ever get from sensory experience. We infer causation because it is more explanatory than accepting a huge network of asymmetric correlations as brute. YeGoblynQueenne is right that my point is basically a version of the problem of induction. We can infer causation but we are never witness to causation. We do not build causal models, we build models of asymmetric correlations and infer causation from the success of our models. What a good statistical model does is not different in kind.
The problem of induction is fatal. But we overcome it: we do witness causation.
When I act on the world, with my body, I take as a given "Body -> Action". We witness causation in our every action.
> This is an odd claim
The tokens can be given any meaning. The statistical distribution of token frequencies in our languages has an infinite number of causal semantics consistent with it.
We can find arbitrary patterns such that
P(A) < P(A|B) < P(A|B & C) < P(A|B & C & ...)
Only those we give a semantics to ("Rain" = Rain), and only those we already know to be causal, will we count. This is the trick of humans reading the output of LLMs -- this is what makes it possible. It's essentially one big Eliza effect.
No, the structure of language isn't the structure of the world.
This pattern in tokens,
P(A) < P(A|B) < P(A|B & C) < P(A|B & C & ...)
is an associative statistical model of conditional aggregate salience between token terms.
Phrase any such conditional probability you wish; it will never select for causal patterns.
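A small sketch of that claim: in data with no causal (or any other) structure at all -- twelve independent coin-flip "tokens" over made-up "documents" -- such monotone chains show up in abundance.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
docs = rng.random((2000, 12)) < 0.5        # 2000 "documents", 12 independent fake tokens

found = 0
for a, b, c in itertools.permutations(range(12), 3):
    A, B, C = docs[:, a], docs[:, b], docs[:, c]
    if A.mean() < A[B].mean() < A[B & C].mean():
        found += 1

print(found)   # typically a few hundred such chains, despite zero causal structure
```

The pattern selects something, but what it selects is noise dressed up as salience.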
This is why we experiment. It's why we act on the world to change it.
When the child burns their hand on the fireplace, they do so once. Why?
Absent this, absent being in the world with a body, you cannot determine causes.
The problem of induction phrased in modern language is this: statistics isn't informative. Or, conditional probabilities are no route to knowledge. Or, AI is dumb.
Wow, that's a nice way to put it. I haven't seen that P(A|B -> A) notation before. Where does it come from?
But I think the OP is arguing, essentially, that P(A|B -> A) is only an interpretation of P(A|B) that we have chosen, among the many possible interpretations of P(A|B).
Which I think evokes the problem of induction. How do we know P(A | B -> A) when all we can observe ourselves is P(A|B)?
Ah, I thought it might be something to do with Judea Pearl.
>> Hume was WRONG. Very wrong.
Oh boy :)
I can see what you're saying about having bodies, but bodies are very limited things and that's just making Hume's point. We can only know so much by experiencing it with our bodies. We've learned a lot more about the world, and its foundations, thanks to our ability to draw inferences without having to engage our bodies. For example, all of mathematics, including logic that studies inference, is "things we do without having to engage our bodies". And those very things have shown us the limits of our own abilities, or at least our ability to create formal systems that can describe the world in its entirety. They have shown us the limits of our ability for inductive inference (and in a very concrete manner - see Mark E. Gold's Language Identification in the Limit).
Machine learning systems are more limited than ourselves, that's right. And that's because we have created them, and we are limited beings that cannot know the entirety of the world just by looking at it, or reasoning about it.
One of the premises of Hume's sceptical metaphysics was that
P(A|B) is just P(A | B -> A)
The argument for this was that `A` and `B` are only "Ideas in the head" and don't refer to a world. And secondly, by assertion, that Ideas are "thin" pictorial phenomena that can only be sequenced.
Hume here is just wrong. Our terms refer: `A` succeeds in referring to, eg., Rain. And our experiences aren't "thin", they're "thick" -- this was Kant's point. Our experiences play a rich role in inference that cannot be played by "pictures".
To have a mental representation R of the world is to have a richly structured interpretation which does, in fact, contain and express causation.
ie., R can quite easily be a mental representation of "B -> A". This, after all, is what we are thinking when we think about the rain hitting our shoes. We do not imagine P(A|B); we imagine P(A|B->A) -- if we didn't, we couldn't reason about the future.
The question is only how we obtain such representations, and the answer is: the body with its intrinsic known causal structure.
Whenever we need to go beyond the body, we invent tools to do so -- and connect the causal properties of those tools to our body.
Hume here is wrong in every respect. And it's his extreme scepticism which undergirds all those who would say modern AI is a model of intelligence -- or is capable of modelling the world.
The world isn't a "constant conjunction of text tokens" -- even Hume wouldn't be this insane. Nevertheless, it is this lobotomised Hume we're dealing with.
There is a science now for how the mind comes to represent the world -- we do not need 18th C. crazy ideas. Insofar as they are presented as science, they're pseudoscience.
Thank you for sharing your opinion on Hume, but I don't see how e.g. Polyominoes, to take a random mathematical (ish) concept I was thinking about today, are connected to our body. I can think of many more examples. Geometry, trigonometry, algebra, calculus, the first order predicate calculus, etc. None of those seem to be connected to my body in any way.
Anyway, this all is why I'm happy I'm not a philosopher. Philosophers deal in logic, but they don't have a machine that can calculate in logic and keep them on the straight and narrow with its limited resources. A philosopher can say anything and imagine anything. A computer scientist -- well, she can, but good luck making that happen on a computer.
Well Kant (Chomsky, et al.) are probably right that we must have innate concepts -- esp. causation, geometry, linguistic primitives etc. in order to be able to perceive at all.
So in this sense a minimal set of a-priori concepts are required to be built-in, or else we couldn't learn anything at all.
You might say that this means we can separate the sensory-motor genesis of concepts from their content -- but I think this only applies to a-priori ones.
What i'm talking about is conceptualisations of our environment that provide its causal structure. One important aspect of that is how desires (via goals) change the world. Another is how the world itself works.
Both of these do require a body, or at least a solution to the problem of induction (ie., that P(A|B) is consistent with P(A|B->A), P(A|~(B->A)), P(A| B->Z, C->Z, Z->A), etc.).
>> So in this sense a minimal set of a-priori concepts are required to be built-in, or else we couldn't learn anything at all.
I don't disagree with that at all. I'm pretty convinced that, as humans, we can learn and invent all those things we have because we have strong inductive biases that guide us towards certain hypotheses and away from others.
Where those inductive biases come from is a big open question, and I'd be curious to know the answer. We can wave our hands at evolution, but that doesn't explain, say, why we have the specific inductive biases we have, and not others. Why do we speak human languages, for example? Why is our innate language ability the way it is? Intuitively, there must be some advantage in terms of efficiency that makes some inductive biases more likely than others to be acquired, but I get tired waving my hands like that.
I'm not convinced that all that absolutely requires a body, either. I think it's reasonable to assume it requires some kind of environment that can be interacted with, and some way to interact with the environment, but why can't a virtual environment, like a computer simulation of reality, provide that? And it doesn't have to be the real reality, either. A "blocks world" or a "grid world" will do, if it's got rules that can be learned by playing around in it.
I very much disagree, but have an upvote for a well-argued comment.
>> The question is, does an LLM in the course of modelling asymmetric correlations, develop something analogous to an explanatory model. I think so, in the sense that a good statistical model will intrinsically capture explanatory relations.
A statistical model may "capture" explanatory relations, but can it use them? A data scientist showing a plot to a bunch of people is explaining something using a statistical model, so obviously the statistical model has some explanatory power. But it's the data scientist that is using the model as an explanation. I think the discussion is whether a statistical model can exist that doesn't just "capture" an explanation, but can also make use of that explanation like a human would, for example as background knowledge to build new explanations. That seems very far fetched: a statistical model that doesn't just model, but also introspects and has agency.
Anyway I find it very hard to think of language models as explanatory models. They're predictive models, they are black boxes, they model language, but what do they explain? And to whom? The big debate is that (allegedly) "we don't understand language models" in the first place. We have a giant corpus of incomprehensible data; we train a giant black box model on it; now we have a giant incomprehensible model of the data. What did we explain?
>> But this is in the solution space of good models of statistical regularity of an external system. To maximally predict the next token in a sequence just requires a model of the process that generates that sequence.
Let's call that model M* for clarity, and the search space of models S. There are any number of models in S that can generate many of the same sequences as M* without being M*. The question is, and has always been, in machine learning: how do we find M* in S, without being distracted by M_1, M_2, M_3, ... that are not M*?
Given that we have a very limited way to test the capabilities of models, and that models are getting bigger and bigger (in machine learning anyway) which makes it harder and harder to get a good idea of what, exactly, they are modelling, how can we say which model we got a hold of?
>A statistical model may "capture" explanatory relations, but can it use them?
That's the beauty of autoregressive training, the model is rewarded for capturing and utilizing explanatory relations because they have an outsized effect on prediction. It's the difference between frequency counting while taking the past context as an opaque unit vs decomposing the past context and leveraging relevant tokens for generation while ignoring irrelevant ones. Self-attention does this by searching over all pairs of tokens in the context window for relevant associations. Induction heads[1] are a fully worked out example of this and help explain in-context learning in LLMs.
>Anyway I find it very hard to think of language models as explanatory models. They're predictive models, they are black boxes, they model language, but what do they explain? And to whom?
The model encodes explanatory relationships of phenomena in the world and it uses these relationships to successfully generalize its generation out-of-distribution. Basically, these models genuinely understand some things about the world. LLMs exhibit linguistic competence as they engage with subject matter to accurately respond to unseen variations in prompts about that subject matter. At least in some cases. I argue this point in some detail here[2].
>how can we say which model we got a hold of?
More sophisticated tests, ideally that can isolate exactly what was in the training data in comparison to what was generated. I think the example of the wide variety of poetry these models generate should strongly raise one's credence that they capture a sufficiently accurate model of poetry. I go into detail on this example in the link I mentioned. Aside from that, various ways of testing in-context learning can do a lot of work here[3].
>> That's the beauty of autoregressive training, the model is rewarded for capturing and utilizing explanatory relations because they have an outsized effect on prediction.
That sentence should be decorated with the word "allegedly", or perhaps "conjecture"! In practical terms, I believe you are pointing out that language models of the GPT family are trained on a context surrounding, not just preceding, a predicted token. That's right (and it gets fudged in discussions about predicting the next token in a sequence), but we could already do that with skip-gram models, and with context-sensitive grammars, and dependency grammars, many years ago, and I don't remember anyone saying those were specially capable of capturing explanatory relations [1]. Although for grammars the claim could be made, since they are generally based on explanatory models of human language (but not because of context-sensitivity).
Anyway, I thought you were arguing that explanations are arbitrary, "explanatory posits", and wouldn't that mean that an explanation doesn't improve prediction? This is not to catch you in contradiction, I'm genuinely unsure about this myself. My understanding is that explanatory hypotheses improve predictions in the long run [2], but that's not to say that a predictive model will improve given explanations, rather explanatory models eventually replace strictly predictive models.
Are you saying that including explanations in training data can improve prediction? That would make sense, but this is very hard to do when training a predictive model on text. In that case, the explanations are at best hidden variables and language models are just not the right kind of model to model hidden variables.
Sorry, writing too much today. And I got work to do. So I won't bitch about "in-context learning" (what we used to call sampling from a model back in the day, three years ago before the GPT-3 paper :).
______________
[1] My Master's thesis was a bunch of language models trained on Howard Phillips Lovecraft's complete works, and separately on a corpus of Magic: the Gathering cards. One of those models was a probabilistic Context-Free Grammar, and despite its context-freedom, and because it was a Definite Clause Grammar, I could sample from it with input strings like "The X in the darkness with the Y in the Z of the S" and it would dutifully fill in the blanks with tokens that maximised the probability of the sentence. So even my puny PCFG could represent bi-directional context, after a fashion. Yet I wouldn't ever accuse it of being explanatory. Although I would say it was quite mad, given the corpus.
[2] I mention in another comment my favourite example of the theory of epicycles compared to Kepler's laws of planetary motion.
>Anyway, I thought you were arguing that explanations are arbitrary, "explanatory posits", and wouldn't that mean that an explanation doesn't improve prediction?
I don't mean to say that explanations are arbitrary, rather that causes are not observed only inferred. But we infer causes because of the explanatory work they do. This isn't arbitrary, it is strongly constrained by predictive value as well as, I'm not sure what to call it, epistemic coherence and intelligibility maybe? Explanatory models are satisfying because they allow us to derive many phenomena from fewer assumptions. Good explanatory models are mutually reinforcing and have a high level of coherence among assumptions ("epistemic coherence"). They also require the fewest number of assumptions taken as brute without further justification ("intelligibility").
Why think explanatory models are better at prediction? Because the mutual coherence among assumptions and explanatory power of the whole (ability to predict much from few assumptions) suggests the explanatory model is getting at the productive features of the phenomena that result in the observed behavior. Essentially, the fewer the posits, the fewer ways to "bake in" the data into the model. If we were to cast this as a computational problem, i.e. find a program that reproduces the data, shorter programs are necessarily more explanatory. There's no other way to explain the coincidence of a program picked out of a small space generating data picked out of a very large space without there being an explanatory relation between the two. Further, our credence for explanation increases as the ratio of the respective spaces diverges.
This is really the problem of machine learning in a nutshell. Is the data vs parameter count over some threshold such that training is biased towards explanatory relations? Is the model biased in the right way to discover these relations faster than it can memorize the data? LLMs seem to have crossed this threshold because of the massive amount of data they are trained on, seemingly much larger than can comfortably be memorized, and the inductive biases of Transformers that search the space of models to extract explanatory relations.
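As a crude illustration of the "shorter program" intuition, using a general-purpose compressor as a stand-in for program length (my own rough proxy, not a claim about how LLMs are trained):

```python
import random
import zlib

random.seed(0)

# Data produced by a simple rule: a short description generates a lot of it.
ruled = "".join("wet" if i % 3 == 0 else "dry" for i in range(2000)).encode()

# Data with no rule behind it: nothing to capture except the data itself.
noise = "".join(random.choice(["wet", "dry"]) for _ in range(2000)).encode()

print(len(zlib.compress(ruled)))   # tiny: the regularity admits a short description
print(len(zlib.compress(noise)))   # much larger: there is little to do but memorize
```

The compressed sizes are a blunt instrument, but they make the point: regularity is what lets a small model stand in for a large dataset.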
>Are you saying that including explanations in training data can improve prediction? That would make sense, but this is very hard to do when training a predictive model on text. In that case, the explanations are at best hidden variables and language models are just not the right kind of model to model hidden variables.
I agree with this, and I think these explanatory relations are implicit in human text. I gave the example in another comment that I say things like "I picked my cup off the floor" rather than "I picked my cup off the ceiling" because causal relations in the real world influence the text we write. The relation of "things fall down" is widely explanatory. But it seems to me that LLMs are very much general modelers of hidden variables, given the wide applicability of LLMs in areas that aren't strictly related to natural language. But then again, any structured data is a language in a broad sense. And the grammar can be arbitrarily complex and so can encode deep relationships among data in any domain. Personally, I'm not so surprised that a "language model" has such wide applicability.
>> Why think explanatory models are better at prediction? Because the mutual coherence among assumptions and explanatory power of the whole (ability to predict much from few assumptions) suggests the explanatory model is getting at the productive features of the phenomena that result in the observed behavior. Essentially, the fewer the posits, the fewer ways to "bake in" the data into the model. If we were to cast this as a computational problem, i.e. find a program that reproduces the data, shorter programs are necessarily more explanatory. There's no other way to explain the coincidence of a program picked out of a small space generating data picked out of a very large space without there being an explanatory relation between the two. Further, our credence for explanation increases as the ratio of the respective spaces diverges.
Like you say, that's the problem of machine learning. There's a huge space of hypotheses many of whom can fit the data, but how do we choose one that also fits unseen data? Explanatory models are easier to trust and trust that they will generalise better, because we can "see" why they would.
But the problem with LLMs is that they remain black boxes. If those black boxes are explanatory models, then to whom is the explanation explained? Who is there to look at the explanation, and trust the predictions? This is what I can't see, and I think it turns into a "turtles all the way down" kind of situation. Unless there is a human mind, somewhere in the process, that can look at the explanatory model and use the explanation to explain some observation, then I don't see how the model can really be said to be explanatory. Explanatory -- to whom?
>> But it seems to me that LLMs are very much general modelers of hidden variables, given the wide applicability of LLMs in areas that aren't strictly related to natural language.
Well, I don't know. Maybe we'll find that's the case. For the time being I'm trying to keep an open mind, despite all the noise.
> Can a machine with access only to statistical patterns in the distribution of text tokens infer the physical structure of reality? We can say, as certain as anything: No.
Um. How do you square that claim with the well-known Othello paper?
The board state can be phrased as moves. This paper profoundly misunderstands the problem.
The issue isn't that associative statistical models of domain Z aren't also models of domain Y where Y = f(Z) -- this is obvious.
Rather there are two problems: (1) the modal properties of these models aren't right; and (2) they don't work where the target domain isn't just a rephrasing of the training domain.
Each new idea/technology/concept stands on the back of all that came before it. You couldn't just pull a LLM or a dishwasher out of a hat 1000 years ago.
Right, but techniques like chain of thought reasoning can build concepts on concepts. Even if "the thing that generated the text" isn't creating new concepts, the text itself can be, because the AI has learned general patterns like reasoning and building upon previous conclusions.
The argument is that humans interact with the world across many different modalities and do their statistical learning through this complex of interactions, while LLMs do their statistical learning just by what has been written (by humans) in certain internet sites.
I think it is a quite bold and philosophically poor statement to equate the "human training set" of complex interactions with the environment with what is written on the internet.
You’re arguing that the training set is different. You haven’t identified any different capabilities. What are the capabilities that make humans different?
The training sets are different in nature, not in the sense that 2 different LLMs' training sets are different. And that does not even touch that humans do not just learn from "training sets" but from interacting with the world. More like RL but not like ChatGPT's fine tuning; humans take _actions_ and they _experience_ their results in their totality, not just a "good/bad answer" feedback.
I am not saying that we cannot produce an AI with capabilities of that sort. But LLMs offer nothing at all to that direction. They can be useful in certain practical stuff, they are overhyped as hell, but they are not a step towards AGI.
You know, not all AI algorithms in use are derivative of statistical curve fitting.
But if you have some more general definition for "pattern recognition" than this, you should be perfectly able to notice that it's more general than what LLMs do.
> only because they contributed some magical ingredient from their souls, which didn't exist anywhere in the world's collected pre-Rust printed materials
You're focusing on the example too much. Here are more examples illustrating the question. It's doubtful that LLMs could infer solutions that lie outside their statistical models trained on existing data.
"In 1597 John Thorpe is the first recorded architect to replace multiple connected rooms with rooms along a corridor each accessed by a separate door" [1]
"Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. He could offer no theoretical explanation for his findings" [2]
"Button-like objects of stone, glass, bone, ceramic, and gold have been found at archaeological sites dating as early as 2000 b.c.e... One of the earliest extant pieces of clothing to show the use of buttons as fastenings is the pourpoint of Charles of Blois (c. 1319–1364)." [3]
What you're saying makes sense, and I think I appreciate the point behind the examples you provided.
I think it would help your argument if you could point to such an example from the last couple of years, after the cutoff point of the LLM training data. Maybe though, nothing has been invented since then that is sufficiently unique. If there is something like that, I suppose it would be possible to try and prompt the LLM to create it. That would make your argument falsifiable and I'd be really curious to know the outcome.
Sorry, I don't follow at all. We can assume that every concept humans have conceived of was at some point conceived of for the first time, there's no need to list examples. But how does that relate to the claim here, that purports to constrain what outputs AIs are and aren't capable of generating?
LLMs currently statistically regurgitate existing data. An LLM in the 1600s would tell you that a house layout is "rooms connected to each other" because that would be its pre-existing data. It remains to be seen if LLMs can come up with "oh wait? we can create a passageway, and have rooms open into that" based on statistical models of pre-existing data.
Can it come up with a corridor when it has no idea that such a concept exists? That remains to be seen.
> LLMs currently statistically regurgitate existing data.
NO! They do not.
Deep learning models are "universal approximators". Any two-layer neural network with enough parameters, data and training is a universal approximator. That means it can learn ANY relationship to arbitrary accuracy.
Going beyond two layers, with several layers, problem domain structured architectures, and recurrent connections, they become far more efficient and effective.
So yes, they learn associations, correlations, stochastic models, statistics.
But they also learn to model functional relationships. Which is why they are able to generalize relationships to new situations, and combine previously unrelated relationships in reasonable and surprising ways.
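Here is the "learns functional relationships" claim at toy scale (my own setup, obviously nothing like an LLM): a two-layer network, given only (x, y) samples, recovers the shape of a function it was never told about.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x)                                   # the hidden functional relationship

# Two-layer tanh network trained by plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(10_000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # backprop by hand
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(float(np.mean(err ** 2)))   # typically far below the ~0.5 variance of y: it has picked up sin's shape
```

It was never given the formula for sin; it recovered the relationship from samples alone. Scale and architecture change which relationships are reachable, not the basic fact that relationships (and not just frequencies) get learned.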
A large part of creativity is putting together previously unrelated concepts and then letting the obvious logic of those relationships combine to result in something new and unexpected.
Note that both combining normally unrelated things, and combining the concepts in some way more or less consistent with what those concepts normally mean, are well within the grasp of current models.
They haven't outclassed our best thinkers. Or any of our best thinking as individuals yet. They are still very limited on problems that require many steps to think through.
But they are definitely, within their limits, being creative.
And they are far, far, FAR from just being statistical parrots.
> and combine previously unrelated relationships in reasonable and surprising ways.
We've yet to see those surprising ways despite all the claims.
Note: what they do already is amazing and surprising in itself (such as "write me a song about quantum physics suitable for a 5 year old"). It's still very much shy of "hey there's this new thing previously unthought of".
> We've yet to see those surprising ways despite all the claims.
This is the one reason everyone is finding them fascinating.
Perhaps you find them boring. Rote. Or something. But the reason non-technical people, as well as technical people, are enjoying and learning by interacting with chat and other models is how often the results are interesting.
I asked ChatGPT-4 to create a Dr. Seuss story about the Cat in the Hat and my green conure parrot Teansy, that involved sewing and Italy. It produced a wonderful story of how they met in Italy, became friends, encountered a homeless child with a threadbare blanket and helped the child. Then they began helping others and ended up creating a fashion design studio.
All written in Dr. Seuss prose that made for a perfect children's book.
Pretty creative.
I then asked GPT to continue the story, but as a James Bond novel where one of Teansy's mysterious clients was actually a criminal using the fashion industry to hide his nefarious practices, and that Teansy should help James Bond solve the case.
For that I got another great story, completely consistent with James Bond tropes. It came up with a story line where the fashion industry was used to launder blood diamonds, which I thought was brilliant. A perfectly good rationale for a James Bond villain. The story was great.
Throughout, Chat threw in funny suitable mentions about Teansy's fashion focuses, including feather lined wear, etc.
And all this creativity in a first draft written as fast as I could read it.
A year ago, nothing on the planet but a whimsical human with too much time on their hands (more time than it took Chat) could do this.
--
Obviously, we are discovering Chat can perform far more complex behaviors.
Act as any agent we describe, including computer systems or the internet. Respond quickly to feedback. Form plans. Learn and summarize the grammar of small artificial languages fairly well just from examples, ...
Without interacting with these models we would never have declared these were expected behaviors.
So I don't know on what basis the emergence of these behaviors isn't surprising. Hoped for, envisioned, sure. But hardly an expression of obviously predetermined, designed-in capabilities.
> This is all interpolation between existing concepts.
Interpolating sounds like a simple task.
But whether it is depends entirely on the data. Simple data will result in a simple interpolating model.
But complex data requires complex relationships to be learned.
Calling a complex model just an interpolator is like saying human beings are just another bag of atoms doing what atoms do. Technically correct, but missing the significance of humans.
It also isn’t really clear to me that humans aren’t also interpolating between complex existing concepts when we come up with novel thoughts or ideas. Our minds are complex, our pre-existing knowledge base is complex. It’s impossible to know if our unique thoughts aren’t really some complex amalgamation of other thoughts we already have in there somewhere, perhaps a mashup of seemingly unrelated thoughts that just happen to lie closely in the multidimensional space of ideas to the thing we are thinking about. Sounds potentially similar to a complex LLM then, really.
> Can it come up with...when it has no idea that such a concept exists?
I was curious to see if it could.
===
Q. Invent a new statistic that mixes hessian and likelihood with score.
gpt: One potential new statistical concept that mixes Hessian and likelihood with the score function could be called "Hessian-score likelihood ratio".
The idea behind this concept is to use the Hessian matrix, which captures second-order information about the likelihood function, along with the score function, which measures the sensitivity of the likelihood function to changes in the model parameters, to construct a likelihood ratio test statistic.
Q. How do I compute this new statistic ?
gpt. To calculate this statistic, we first compute the Hessian matrix of the log-likelihood function evaluated at the maximum likelihood estimate of the parameters. We then calculate the score function evaluated at the maximum likelihood estimate of the parameters.
We can then use these quantities to construct a test statistic that compares the likelihood of the model with the estimated parameters to the likelihood of the model with an alternative set of parameters. Specifically, we take the difference between the log-likelihoods of the two models, and divide by the square root of the determinant of the Hessian matrix, multiplied by the score function.
Q. Of what use is this new statistic ?
gpt. The resulting test statistic can be used to test hypotheses about the parameters of the model, such as whether they are equal to a particular value or whether they differ across groups or conditions. The Hessian-score likelihood ratio offers a potentially more powerful and efficient way to test such hypotheses than existing methods, since it makes use of both second-order information about the likelihood function and the sensitivity of the likelihood to changes in the parameters.
====
I am not able to spot any obvious inaccuracies in the above.
>> Can it come up with...when it has no idea that such a concept exists?
> I was curious to see if it could.
But you told it about the concept, "a new statistic that mixes hessian and likelihood with score".
You should try a different experiment. I'm more familiar with architecture than statistics, so I'll use the floor plan example. Suppose someone in the 16th century had asked their LLM to address the pain points of the joined-room approach, and the LLM had then conceived of the novel concept of a corridor. [Look up the origin of the word.]
If that 16th century LLM spat out "the overall concept is to distinguish between transient and in repose spaces. There is already something similar in military architecture called 'corridor', which is a strip of land along the outer edge of a ditch. In these new floor plan designs, there will be corridors internal and peripheral to the building that will connect rooms, just like corridors (strips of land) connect lots or permit movement without stepping into a ditch".
-- can this happen? --
You: Invent a novel test statistic that can be used to test hypotheses about the parameters of the model, such as whether they are equal to a particular value or whether they differ across groups or conditions. You can combine existing statistical tools.
gpt: How about "Hessian-score likelihood ratio"? The idea behind this concept is to use the Hessian matrix, which captures second-order information about the likelihood function, along with the score function, which measures the sensitivity of the likelihood function to changes in the model parameters, to construct a likelihood ratio test statistic.
> LLMs currently statistically regurgitate existing data.
This is clearly not true in any meaningful sense - c.f. the Othello paper, examples from the top of this very comment thread, etc.
> Can it come up with a corridor when it has no idea that such a concept exists?
Unless I'm missing something, the person I replied to is claiming that it categorically cannot come up with a concept it hasn't been trained on. I'm disagreeing - if a model knows about rooms and doors and floorplans, there's no obvious reason why it mightn't think up an arrangement of those things that would be novel to the people who trained it. If you think the matter remains to be seen, then I'm not sure what you disagree with me about.
In my experience, it can certainly be coaxed into discussing novel concepts that transcend existing knowledge. I'm having fun getting it to explain what a hybrid of a Nelson Enfilade data structure combined with a tensegrity data structure is, and whether that system is novel and brings any benefits -- very interesting and novel afaik.
> if a model knows about rooms and doors and floorplans, there's no obvious reason why it mightn't think up an arrangement of those things that would be novel to the people who trained it.
Once again, you're missing the point.
In the 16th century people also knew about floors, and rooms, and floorplans. And yet the first architect to use a corridor did so only in 1597.
What other "corridors" are missing from LLMs' training data? And we're sure it can come up with such a missing concept?
The Othello paper and the examples (are you referring to the example of coming up with new words?) are doing the same thing: they feed the model well-defined pre-established rules that can be statistically combined. The "novel ideas" are not even nearly novel because, well, they follow the established rules.
Could the model invent reversi/othello had it not known about it beforehand? Could the model invent new words (or a new language) had it not known about how to do that beforehand (there's plenty of research on both)? Can it satisfactorily do either even now (for some definition of satisfactorily)?
People believe it can only because the training set is quite vast and the work done is beyond any shadow of a doubt brilliant. That is why the invention of new words seems amazing and novel to many people, while others with even a superficial armchair knowledge of linguistics are nonplussed. And so on.
> Could the model invent reversi/othello had it not known about it beforehand?
You've practically restated the paper's findings! :D The LLM knew nothing about othello; it wasn't shown any rules to be recombined. It was shown only sequences of 60 distinct tokens - effectively sentences in an unknown language. The LLM then inferred a model to predict the grammar of that language, and the authors demonstrated that its model functioned like an othello board.
> You've practically restated the paper's findings! :D The LLM knew nothing about othello; it wasn't shown any rules to be recombined.
Literal quote from the paper:
"As a first step, we train a language model (a GPT variant we call Othello-GPT) to extend partial game transcripts (a list of moves made by players) with legal moves."
And then:
"Nonetheless, our model is able to generate legal Othello moves with high accuracy".
So:
- it knows about the game because it was literally shown the game with only the legal moves
- it doesn't produce legal moves all the time (even though it does so with high accuracy)
That's why I say "the work done is beyond any shadow of a doubt brilliant". Because this is a definite leap forward from the status quo. However, it doesn't imply that the models can invent/predict/come up with novel ways of doing something. This is still strictly within the realm of "given existing data, give back a statistically relevant response".
Could it actually invent Reversi/Othello had it not known about it beforehand?
> it was literally shown the game with only the legal moves
It's shown token sequences only. It has no idea they represent a game, or that the game has legal and illegal moves. And more importantly, it has no idea that each token modifies the state of a gameboard, or that simulating how that gameboard changes after every token is the only way to understand the token's grammar. It invents all that.
> Could it actually invent Reversi/Othello had it not known about it beforehand?
You mean, could an LLM invent othello even if its training material made no mention of the game or its rules? Presumptively, of course - why not? Suppose you go make up an arbitrary board game right now. If you then ask ChatGPT-4 to invent a boardgame of its own, nothing excludes the possibility that it will describe a game isomorphic to yours. Obviously the odds are very low, but why imagine that it's not possible?
You're presenting an example of inference of rules from given data as a counterexample for novelty. They're not even in the same category of thing. Invention is not learning. Sometimes invention is interpolation, but sometimes it isn't: corridors are an interesting example, because they are not obviously a remix of anything.
I am not convinced by this argument. It is very misleading to think that, since GPT is trained on data from the world, it must, necessarily, always produce an average of the ideas in the world. Humans have formulated laws of physics that "minimize loss" on our predictions of the physical world and that are later experimentally determined to be accurate, and there's no reason to assume a language model trained to minimize loss on language won't be able to derive similar "laws" that simulate human behavior.
In short, GPT doesn't just estimate text by looking at frequencies. GPT works so well by learning to model the underlying processes (goal-directedness, creativity, what have you) that create the training data. In other words, as it gets better (and my claim is it has already gotten to the point where it can do the above), it will be able to harness the same capabilities that humans have to make something "not in the training set".
> Guessing the right theory of physics is equivalent to minimizing predictive loss. Any uncertainty that cannot be reduced by more observation or more thinking is irreducible stochasticity in the laws of physics themselves – or, equivalently, noise from the influence of hidden variables that are fundamentally unknowable.
> If you’ve guessed the laws of physics, you now have the ability to compute probabilistic simulations of situations that evolve according to those laws, starting from any conditions. This applies even if you’ve guessed the wrong laws; your simulation will just systematically diverge from reality.
> Models trained with the strict simulation objective are directly incentivized to reverse-engineer the (semantic) physics of the training distribution, and consequently, to propagate simulations whose dynamical evolution is indistinguishable from that of training samples. I propose this as a description of the archetype targeted by self-supervised predictive learning, again in contrast to RL’s archetype of an agent optimized to maximize free parameters (such as action-trajectories) relative to a reward function.
Even very simple and small neural networks that you can easily train and play with on your laptop readily show that this “outputs are just the average of inputs” conception is just wrong. And it’s not wrong in some tricky philosophical sense, it’s wrong in a very clear mathematical sense, as wrong as 2+2=5. One example that’s been used for something like 15+ years is using the MNIST handwritten digits dataset to recognize and then reproduce the appearances of handwritten digits. To do this, the model finds regularities and similarities in the shapes of digits and learns to express the digits as combinations of primitive shapes. The model will be able to produce 9s or 4s that don’t quite look like any other 9 or 4 in the dataset. It will also be able to find a digit that looks like a weird combination of a 9 and a 2 if you figure out how to express a value from that point in the latent space. It’s simply mathematically naive to call this new 9-2 hybrid an “average” of a 9 and a 2. If you averaged the pixels of a 9 image and a 2 image you would get an ugly nonsense image. The interpolation in the latent space is finding something like a mix between the ideas behind the shape of 9s and the shape of 2s. The model was never shown a 9-2 hybrid during training, but its 9-2 will look a lot like what you would draw if you were asked to draw a 9-2 hybrid.
A big LLM is something like 10 orders of magnitude bigger than your MNIST model and the interpolations between concepts it can make are obviously more nuanced than interpolations in latent space between 9 and 2. If you tell it to write about “hubristic trout” it will have no trouble at all putting those two concepts together, as easily as the MNIST model produced a 9-2 shape, even though it had never seen an example of a “hubristic trout.”
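For anyone who wants to poke at this without downloading MNIST, here is a rough stand-in using scikit-learn's small 8x8 digits set and a tiny MLP autoencoder (the architecture and every other choice here are mine and purely illustrative): blending two digits in latent space is a different operation from averaging their pixels.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

digits = load_digits()
X = digits.data / 16.0

# Train a small autoencoder: reproduce the input through an 8-unit bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), activation="relu",
                  max_iter=3000, random_state=0)
ae.fit(X, X)

def encode(x):
    h = np.maximum(x @ ae.coefs_[0] + ae.intercepts_[0], 0)
    return np.maximum(h @ ae.coefs_[1] + ae.intercepts_[1], 0)

def decode(z):
    h = np.maximum(z @ ae.coefs_[2] + ae.intercepts_[2], 0)
    return h @ ae.coefs_[3] + ae.intercepts_[3]

nine = X[digits.target == 9][0]
two = X[digits.target == 2][0]

pixel_average = (nine + two) / 2                          # a muddled double exposure
latent_blend = decode((encode(nine) + encode(two)) / 2)   # decoded from a latent code no training digit had

print(np.round(pixel_average.reshape(8, 8), 1))
print(np.round(latent_blend.reshape(8, 8), 1))
```

At this tiny scale the blend is not going to be pretty, but it is already not the pixel average -- the nonlinearity does its mixing in the space of shapes rather than the space of pixels.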
It is weird because all of the above is obvious if you’ve played with any NN architecture much, but seems almost impossible to grasp for a large fraction of people, who will continue to insist that the interpolation in latent space that I just described is what they mean by “averaging”. Perhaps they actually don’t understand how the nonlinearities in the model architecture give rise to the particular mathematical features that make NNs useful and “smart”. Perhaps they see something magical about cognition and don’t realize that we are only ever “interpolating”. I don’t know where the disconnect is.
I think a partial explanation is that people never move beyond parametric representations of reality. We simply must be organized into a nice, neat Gaussian distribution with easy-to-calculate means and standard deviations. The idea that the organization of data could be relational, or better handled by a decision tree or whatever, is not really presented to most people in school or university -- certainly not as frequently or as thoroughly as the idea that the average represents the middle of a distribution.
You see this across the social sciences, where every decade or so since the 1980s papers come out in a lot of fields saying that linear regression models are wrong because they don't take into account concepts such as hierarchy (e.g., students go to different schools), frailty (there are likely unmeasured reasons why some people do the things they do), latent effects (there are likely non-linear processes that are more than the sum of the observations, e.g., traffic flows like a fluid and can have turbulence), and auto-correlation/spatial correlation/etc.
In fact, I would argue that a decision-tree-based model (e.g., gradient-boosted trees) will always arrive at a better solution to a human system than any linear regression. But at this point I suppose I have digressed from the original point.
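As a rough illustration of that claim (scikit-learn assumed; the synthetic target below is entirely made up, so this shows the flavour of the argument rather than proving anything about real human systems): give both models a non-linear, interaction-heavy target and the linear regression falls far behind the gradient-boosted trees.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(2000, 3))
    # Non-linear effects plus an interaction: the kind of structure a straight line misses.
    y = np.sin(X[:, 0]) * (X[:, 1] > 0) + X[:, 2] ** 2 + rng.normal(0, 0.1, 2000)

    for model in (LinearRegression(), GradientBoostingRegressor()):
        score = cross_val_score(model, X, y, cv=5).mean()   # R^2; higher is better
        print(type(model).__name__, round(score, 3))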
I confess to the mirror image of the same issue: I cannot understand why people insist that regressing in a latent space, derived from the mere associative structure of a dataset, ought to be given some noble status.
It is not a model of our intelligence. It's a stupid thing. You can go and learn about animal intelligence -- and, next to that, merging template cases of what's gone before, as recorded by human social detritus, doesn't even bear mentioning.
The latent space of all the text tokens on the internet is not a model of the world; and finding a midpoint is just a trick. It's a merging between "stuff we find meaningful over here", and "stuff we find meaningful over there" to produce "stuff we find meaningful" -- without ever having to know what any of it meant.
The trick is that we're the audience, so we'll find the output meaningful regardless. Image generators don't "struggle with hands"; they "struggle" with everything -- it is we, the observers, who care more about the fidelity of hands. The process of generating pixels is uniformly dumb.
I don't see anything more here than "this is the thing that I know!" therefore "this is a model of intelligence!11.11!01!!" .
It's a very very bad model of intelligence. The datasets involved are egregious proxy measures of the world whose distribution has little to do with it: novels, books, pdfs, etc.
This is very far away from the toddler who learns to walk, learns to write, and writes what they are thinking about. They write about their day, say -- not because they "interpolate" between all books ever written... but because they have an interior representational life which is directly caused by their environment and can be communicated.
Patterns in our communication are not models of this process. They're a dumb light show.
I feel like our positions are probably both buried in webs of mutually-difficult-to-communicate worldview assumptions, but for what it’s worth, I care more at this point about the models being useful than being meaningful. I use GPT-4 to do complex coding and copy editing tasks. In both cases, the model understands what I’m going for. As in, I have some specific, complex, nuanced concept or idea that I want to express, either in text or in code, and it does that. This can’t be me “projecting meaning” onto the completions, because the code works and does what I said I wanted. You can call this a light show, but you can’t make it not useful.
The output of these systems can have arbitrary properties.
Consider an actor in a film: their speech has the apparent property, say, of "being abusive to their wife" -- but the actor isn't abusive, and has no wife.
Consider a young child reading from a chemistry textbook: their speech has the apparent property of "being true about chemistry".
But a professor of chemistry who tells you something about a reaction they've just performed, explains how it works, etc. -- this person might say identical words to the child, or the AI.
But the reason they say those words is radically different.
AI is a "light show" in the same way a film is: the projected image-and-sound appears to have all sorts of properties to an audience. Just as the child appears an expert in chemistry.
But these aren't actual properties of the systems themselves: the child, the machine, the actor.
This doesn't matter if all you want is an audiobook of a chemistry textbook, to watch a film, or to run some generated code.
But it does matter in a wide variety of other cases. You cannot rely on apparent properties when, for example, you need the system to be responsive to the world as-it-exists unrepresented in its training data. Responsive to your reasons, and those of other people. Responsive to the ways the world might be.
At this point the light show will keep appearing to work in some well-trodden cases, but will fail catastrophically in others -- for no reason that a fooled audience will be able to predict.
But predicting it is easy -- as you'll see over the next year or two, ChatGPT's flaws will become more widely known. There are many papers on this already.
>> I feel like our positions are probably both buried in webs of mutually-difficult-to-communicate worldview assumptions, but for what it’s worth, I care more at this point about the models being useful than being meaningful.
The question is how useful they are. With LLMs it seems they can be useful as long as you ask them to do something that a human, or another machine (like a compiler) can verify, like your example of synthesising a program that satisfies your specification and compiles.
Where LLMs will be useless is in tasks where we can't verify their output. For example, I don't hear anyone trying to get GPT-4 to decode Linear A. That would be a task of significant scientific value, and one that a human cannot perform -- unlike generating text or code, which humans can already do pretty damn well on their own.
>> Guessing the right theory of physics is equivalent to minimizing predictive loss.
A model can reduce predictive loss to almost zero while still not being "the right theory" of physics, or anything else. That is a major problem in science, and machine learning approaches don't have any answer to it. Machine learning approaches can be used to build more powerful predictive models, with lower error, but nothing tells us that one such model is, or even isn't, "the right theory".
As a very famous example, or at least the one I hold as a classic, consider the theory of epicyclical motion of the planets [1]. This was the commonly accepted model of the motion of the observable planets for thousands of years. It persisted because it had great predictive accuracy. I believe alternative models were proposed over the years, but all were shot down because they did not approach the accuracy of the theory of epicycles. Even Copernicus' model, which is considered a great advance because it put the Sun at the center of the universe, continued to use epicycles and so did not essentially change the "standard" model. Eventually Kepler came along, and then Newton, and now we know why the planets seem to "double back" on themselves. And not only that, but we can now make much better predictions than we ever could with the epicyclical model, because now we have an explanatory model, a realist model, not just an instrumentalist model, and it's a model not just of the observable motion of the planets but of how the entire world works.
As a side point, my concern with neural nets is that we get "stuck in a rut" with them, because of their predictive power, like we got stuck with the epicyclical model, and that we spend the next thousand years or so in a rut. That would be a disaster, at this point in our history. Right now we need models that can do much more than predict; we need models that are theories, that explain the world in terms of other theories. We need more science, not more modelling.
> Guessing the right theory of physics is equivalent to minimising predictive loss.
No it's not. It's minimising "predictive loss" only under extreme non-statistical conditions imposed on the data.
The world itself can be measured an infinite number of ways. There are an infinite number of irrelevant measures. There are an infinite number of low-reliability relevant measures. And so on.
Yes, you can formulate the extremely narrow task of modelling "exactly the right dataset" as loss minimization.
But you cannot model the production of that dataset this way. Data is a product of experiments.
This is just you declaring "no you can't" without supporting that in any way.
How is a theory of physics not a loss minimisation process? The history of science is literally described in these terms, e.g. the Bohr model of the atom is wrong, but also so useful that we still use it to describe NMR spectroscopy.
Why did we come up with it? Because there aren't infinite ways to measure the universe; there are in fact very limited ways, defined by our technology. Good models -- ones that minimise loss well -- generally then let us build better technology to find more data.
You're invoking infinities which don't exist as a handwave for "understanding is a unique part of humanity" to try and hide that this is all metaphysical special pleading.
Gravitation was literally about predicting future positions of the stars, and was successful because it did so much better than any geocentric model. How is that not a loss minimization activity?
And before we had it, epicycles were steadily increasing in complexity to explain every new local astronomical observation, but that model was popular because it gave a very efficient initial fit of the easiest data to obtain (i.e. the Moon actually does go around the Earth, and with only one reference point the Sun appears to go round the Earth too). But of course once you have a heliocentric theory, you can throw away all those parameters, and every new prediction lines up nearly perfectly (accounting for how much longer it would take before we had orbital measurements precise enough to need Relativity to fully model them).
When the law of gravitation was formulated, it could not in fact be used to predict orbits reliably (Kepler's ellipses are the solution to the two-body problem anyway, and for a more complex system integration was impossible to any useful precision at the time), and Kepler's theories came out long before it did.
It took more than 70 years after its formulation for the law to be conclusively tested against observations.
Also note that Copernicus' heliocentric model retained the geocentric model's epicycles on circular orbits. It really took Kepler to make a better model. And it was better because it was explanatory to boot, and not only predictive.
At some point, the metaphor of "loss minimisation" starts to break down. When we're talking about science, there's much more we want to do than minimise some loss function -- one that nobody has ever written down anyway. We want to be able to say "this is how the world works". The language of function optimisation is simply not the right language to do anything like that.
Even Vladimir Vapnik turned to poetry to try and increase the information available to statistical learners. Let me see if I can find that paper...
Sure but it was a better fit, and before that heliocentric models were definitely the only way forward that didn't keep adding terms every time someone spotted a moon.
Occam's razor - do not multiply terms without necessity - is essentially a loss function.
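To make that concrete, here's a hedged sketch (scikit-learn assumed; the data and the penalty weight are arbitrary): "do not multiply terms without necessity" written as a penalty term added to the fit error, which is what L1-regularised regression does -- coefficients that aren't needed get driven to exactly zero.

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 500)   # only 2 of the 10 terms matter

    print(LinearRegression().fit(X, y).coef_.round(2))  # small spurious weights on every term
    print(Lasso(alpha=0.1).fit(X, y).coef_.round(2))    # fit error + L1 penalty: extra terms -> 0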
You're talking about Kepler's model here, not about the gravitational equation. The gravitational equation was not a better fit than Kepler at that time, especially since it used unknown constants.
So would you care to comment on how this relates to the original contention, which is the claim that a loss function could not discover Newton's law of gravitation?
Because what you're arguing, extensively, is that due to lack of fit, Newton's Law of Gravitation wasn't settled science until observational data was of sufficient fidelity to clearly distinguish it.
Formulate the loss function -- you'll find it's just
loss(the-right-answer(perfect-x) - perfect-y)
The most important aspect of "the-right-answer" is its ability to ignore almost all the data.
The existence of planets is "predictable" from the difference between the data and the theory -- if the theory is just a model of the data, it has no capacity to do this.
If you want to "do physics" by brute force optimization you'd need to have all possible measures, all possible data, and then a way of selecting relevant causal structures in that data -- and then able to try every possible model.
loss(Model(all-data|relevant-causal-structures) - Filter(...|...)) forall Model
Of course, (1) this is trivially not computable (eqv. to computing the reals) -- (2) "all possible data with all possible measures" doesn't exist and (3) selecting relevant causal structure requires having a primitive theory not derived from this very process
Animals solve this in reverse order: (3) is provided by the body's causal structure; (2) is obtained by using the body to experiment; and (1) we imagine simulated ways-the-world-might-be to reduce the search space down to a finite size.
I.e., we DO NOT make theories out of data. We first make theories, then use the data to select between them.
This is necessary, since a model of the data (i.e., modern AI, i.e., automated statistics, etc.) doesn't decide between an infinite number of theories of how the data came to be.
> I.e., we DO NOT make theories out of data. We first make theories, then use the data to select between them.
No we don't, we make hypotheses and then test them. Hypotheses are based on data.
There are physics experiments being done right now where the exact hope is that existing theory has not predicted the result they produce, because then we'd have data from which to hypothesise something new.[1]
You are literally describing what deep learning techniques are designed to do while claiming they can't possibly do it.
I know this discussion is a bit old at this point, but I came across this essay[1] for the first time today, and it shows more of what I was trying to get across earlier in the thread. Hopefully you'll find it interesting. Essentially, they trained a GPT on predicting the next move in a game of Othello, and by analyzing the network's internals, found that it encodes an understanding of the game state. Specifically, given an input list of moves, it calculates the positions of its own pieces and those of the opponent (a tricky task for a NN, given that Othello pieces can swap sides based on moves made on the other side of the board). Doing this allowed it to minimize loss. By analogy, it formed a theory about what makes moves legal in Othello (in this case, the positions of each player's pieces), and found out how to calculate those in order to better predict the next move.
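For anyone curious what that kind of analysis looks like in practice, the usual methodology is to train probes on the model's internal activations and check whether the board state can be read off them. Below is only a hedged sketch of that idea, not the essay's actual code -- the arrays are random placeholders standing in for real activations and board labels:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    n_positions, d_model = 2000, 128
    # Placeholders: in the real experiment, `hidden` would be activations collected from the
    # Othello-playing GPT and `board` the true state (empty / mine / theirs) of each square.
    hidden = np.random.randn(n_positions, d_model)
    board = np.random.randint(0, 3, size=(n_positions, 64))

    # One linear probe per square: can the square's state be predicted from the activations?
    probes = [LogisticRegression(max_iter=1000).fit(hidden, board[:, sq]) for sq in range(64)]
    acc = np.mean([p.score(hidden, board[:, sq]) for sq, p in enumerate(probes)])
    print(acc)  # ~chance on these placeholders; high accuracy on real activations is the evidence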
Proving that any given AI architecture can't do something doesn't prove that all AI architectures forever will never be able to do it. Neural networks aren't all of AI; they're not even really a single thing called "neural networks", since the term wraps up a huge amount of architectural and design choices and algorithms.
Unless you believe in the soul, the human brain is just a very complicated learning architecture with a specific structure (which we freely admit doesn't operate like existing systems... sort of -- of course we also don't know that it isn't just a convoluted biological path to emulating them for specific subsystems).
But even your original argument is focused on just playing with words to remove meaning: calling something data doesn't meaningfully make your point, because mathematical symbols are just "data" as well.
Mathematics has no requirement to follow any laws you think it does -- 1 + 1 can mean whatever we want, and it's a topic of discussion as to why mathematics describes the physical world at all -- which is to say, it's valid to say we designed mathematics to follow observed physics.
The whole point is that Newton came up with the law before there was observational data that could prove it, which is fundamentally different from regression. The data is used to reject the theory, not to form it, here.
I get the feeling that the OP is using "loss function" in the figurative sense, and not in the sense of an actual loss function that is fit to observations. We know nobody did that in Newton's time. In Newton's time they didn't even have the least squares method, let alone fit a model to observations by optimising a loss function.
Yes, I'm also using it in the figurative sense. It's not a regression model: the models are developed first, and then the data is sought out to confirm or refute them. It's the reverse for a regression technique. The model being generated before the data that can support it is a big part of how humans come up with these models, and it's fundamentally different in many ways.
What are you talking about? If scientific models aren't developed based on data, then what are they developed based on? Divine inspiration?
No. Very obviously no. The multi-post diversion about Kepler's laws is explicitly evidence to the contrary, since Kepler's laws are a curve-fitting exercise which matches astronomical data in a specific context but doesn't properly describe the underlying process -- i.e. their predictive power vanishes once the context changes. But they do fall out of Newton's law once the underlying process is understood.
New data is sought out for models to determine whether they are correct, because a correct model has to explain existing data and predict future data. The Bohr model of the atom was developed because it explained the emission spectra of hydrogen well. It's not correct, because it doesn't work for anything but hydrogen... but it's actually correct enough that if you're doing nuclear magnetic resonance (which is very hydrogen-centric for organic molecules) it is in fact good enough to predict and understand spectra with (at least in 1D; 3D protein structure prediction is its own crazy thing).
This is the entire point of deep learning techniques. The whole idea of latent space representations is that they learn underlying structural content of the data which should include observations about reality.
That's not how the scientific process works. You use your intuition to make a theory, sometimes loosely based on data, and then you come up with an experiment to test it.
We both agree that Kepler was trying to fit curves. But that's not what Newton was trying to do. Newton was trying to explain. Newton's model did not fit the data better than Kepler's model until long after they had both died.
Newton's model, to Newton, had more loss than Kepler's model.
But it turned out 70 years later that Newton's model was better, because it was only then that there was any data for which it made better predictions.
You're similarly wrong about Bohr. If all you were interested in was the emission spectrum of hydrogen, there's absolutely no reason you'd try to come up with the Bohr model. Why? Because Rydberg had already made a formula that predicted the emission spectrum of hydrogen, 25 years earlier.
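For reference, the Rydberg formula is exactly that kind of fitted equation -- a compact description of the hydrogen lines with a fitted constant and no mechanism behind it (in LaTeX notation):

    \frac{1}{\lambda} = R_H \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right), \qquad n_1 < n_2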
The entire point of Bohr's model and of Newton's model is that they weren't empirically better at predicting the phenomena. Indeed, simple curve fitting came up with equations that are far better in practice, earlier.
But they were better at explaining the phenomena.
And that only became relevant because after we had these models, we came up with new experiments, informed by these models, which helped us understand them and eventually push them beyond the breaking point.
It's not a curve-fitting exercise. We already had better curve-fitting models long before either of those was invented. If your goal was to reduce the loss, they'd be useless and there would be no point in coming up with them.
That's the difference between the scientific method and mere regression.
(Not the OP) We don't know how the human mind works, or how "intuitions" or "inspiration" come about, but that's no reason to call them "metaphysics". Clearly, they are physical processes that somehow take place in the human brain.
The questions you ask in this comment are good questions, for which we have no good answers. That doesn't mean there's anything supernatural going on, or that anyone is assuming something supernatural is happening. We just don't know how human scientists come up with new hypotheses, that's all there is to it.
But it's not like there's some kind of principled way to do it. There are no formulae, no laws, where we can plug in some data and out pops a hypothesis ready for testing. Maybe we will find how to define such laws or formulae at some point, but for now, all we've got is some scientist waking up one day going "holy cow, that's it!". And then spending the next ten years trying to show that's what it really is.
To clarify, the OP is pointing out that it wasn't Newton's law of universal gravitation that defeated the epicyclical model of the cosmos.
It was Kepler's laws of planetary motion that did for epicycles, and that happened 70-odd years before Newton stated his laws of motion and pointed out that they basically subsume Kepler's laws of planetary motion.
>Now, AI here is capable of generating a circle with radius 0.15 and hence that circle is "not in the original dataset".
The fact that it can generate a circle with a radius of 0.15 rather than, say, some smushed transformation of an existing circle demonstrates that it properly decomposed the concept of a circle into a radius and equidistance around a central point. This is plainly an example of generating novelty by iterating over variations of its conceptualization of a circle. But this is no different from what people do. Nothing we generate is sui generis.
You explained this very well with the point that the model necessarily has that "blatant" novelty in order to be useful as more than a quote engine in the first place.
That's a good way to explain the bias too. You can see it now if you ask about Michael Levin's work, which is spreading in biology but still somewhat outweighed by older views on the formation of organs during growth and on the extent of what's possible with bioelectric/genetic engineering (e.g. two-headed animals). The models often don't even consider or accept that the research Levin reported would be possible (other times they can, but I think it serves as a good warning light that this bias can dangerously act as a permanent anchor).
You can’t find spheres from circles alone if you don’t know what a sphere is, though. You can, however, ask it to analyze a novel object (a sphere) for patterns, or to hypothesize about what properties a sphere might have.