
The experts in the field say we need a philosophical breakthrough. Isn't everyone else inexperienced in this regard?

https://aeon.co/essays/how-close-are-we-to-creating-artifici...




> David Deutsch is a physicist at the University of Oxford and a fellow of the Royal Society.

What field were you referring to?


It could be linguistics, philosophy, or knowing enough of the history of those fields to make one an expert. I think Chomsky's argument on AI, and specifically on cognition and the brain, is quite useful.

Yet you never hear Altman or Carmack talking about cognition or how computers can understand the meaning of something the way a human does. They aren't interested in such questions. But to conduct an experiment, don't you have to know what you are looking for? Does a chemist do experiments by mixing 1 million compounds at a time?


I generally have pretty low regard for philosophy, and consider Popper + the current scientific method to be SoTA. Philosophy's relationship to the nature of cognition seems pretty dubious overall.

As for linguistics, IMHO the existence and success of GPT pretty much puts Chomsky into the proven-wrong bucket, so again, not a good example. (His whole point used to be that a statistical model can't learn syntax in principle, and GPT's syntax is close to impeccable.)

Re: a chemist. Well, sort of. Technically speaking, a molecule of the same compound in a certain location and with a certain energy is different from another molecule in a different location and with a different energy. And even if you disregard that, why would you think that testing 1 million compounds at a time could not significantly move materials science forward? It's not that they don't want to do that; it's that they can't in practice at this time.


LLMs haven't "learned" syntax; that's the point. It doesn't matter if you just want to predict syntax (engineering), only if you want to understand the human language faculty (science), and nearly no one is interested in the latter.


The fact that you don't understand how GPT models language does not make it less of a model. It did learn the syntax; you are just unable to grasp the formula it represents.


> The fact that you don't understand how GPT models language does not make it less of a model. It did learn the syntax; you are just unable to grasp the formula it represents.

The whole point of science is understanding, and LLMs don't provide understanding of how human language works.


This is pseudophilosophical mumbo-jumbo. It does not really address the comment you replied to, because it does not contradict either of the following statements (from which my original point trivially follows):

1. Chomsky claimed syntax can't be modeled statistically.

2. GPT is a nearly perfect statistical model of syntax.
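To make "statistical model of syntax" concrete, here is a toy sketch (my own illustration, nothing like GPT's actual architecture): a bigram model that picks up word-order regularities purely from co-occurrence counts. GPT does the same kind of thing with a neural network instead of a count table, at vastly larger scale.

    # Toy illustration, not GPT: a bigram language model that learns
    # word-order statistics from a tiny corpus and samples continuations.
    import random
    from collections import defaultdict

    corpus = [
        "the dog chased the cat",
        "the cat chased the mouse",
        "the mouse ran into the house",
        "the dog ran into the garden",
    ]

    # Count how often each word follows each other word.
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1

    def sample_sentence():
        """Generate a sentence by sampling each next word in proportion
        to how often it followed the previous word in the corpus."""
        word, out = "<s>", []
        while True:
            next_words = list(counts[word])
            weights = [counts[word][w] for w in next_words]
            word = random.choices(next_words, weights=weights)[0]
            if word == "</s>":
                return " ".join(out)
            out.append(word)

    print(sample_sentence())  # e.g. "the dog chased the mouse"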


The point is very basic: These "models" don't tell you anything about the human language faculty. They can be useful tools but don't serve science.

Chomsky's point is that there is a lot of evidence that humans don't use a statistical process to produce language, and so these statistical "models" don't tell you anything about the human language faculty.

Whether your 1 & 2 are meaningful depends on how you define "model", which is the real issue at hand: Do you want to understand something (science) --- in which case the model should explain something --- or do you want a useful tool (engineering) --- in which case it can essentially be a black box?

I don't know why you care to argue about this, though; my impression is that you don't really care about how humans do language, so why does it matter to you?


I argue in order to arrive at a non-contradictory worldview.

Re: meaningfulness. Your scientific-vs-engineering distinction is not how "scientific model" is defined; it includes both. The existence of the model itself does explain something, namely that statistics can model language. That alone is explanatory power, so the claim that it doesn't explain anything is a lie. Therefore it is both an "engineering" model (because it can predict syntax) and a scientific one (because it demonstrates that a statistical approach to language has predictive power in the scientific sense).


Science is about understanding the natural world; if you want to redefine it to mean something else, fine, but the point still stands: LLMs do not explain anything about the natural world, and specifically nothing about the human language faculty. Again, it's clear you do not care about this! Instead you want to spend time arguing to make sure the labels you like are applied to the things you like.


Look, I answered this one already:

> the fact that you don't understand how GPT models language does not make it less of a model.

E.g. the fact that the Pythagorean theorem does not explain anything about the natural world to a slug does not make the Pythagorean theorem any less sciency.

Science is not about explanatory power, or else, by the above, the Pythagorean theorem would not be science, which is obviously nonsense.


> E.g. the fact that the Pythagorean theorem does not explain anything about the natural world to a slug does not make the Pythagorean theorem any less sciency.

In fact it does! Math is not science! There is a reason it is STEM and not just S.


> As for linguistics, IMHO the existence and success of GPT pretty much puts Chomsky into the proven-wrong bucket, so again, not a good example. (His whole point used to be that a statistical model can't learn syntax in principle, and GPT's syntax is close to impeccable.)

What do you disagree with? He appears to be correct. The software hasn’t learned anything. It mixes and matches based on training data.

https://m.youtube.com/watch?v=ndwIZPBs8Y4


According to the scientific method, on which the rest of the natural sciences are currently based, GPT is a valid model of syntax.

There are "alternatives" for the method according to some philosophers, but AFAIK none of them are useful to any degree and can be considered fringe at this point.



