
Thank you for the thoughtful answer. I also wish more people were asking such questions. Let's look at some of your points.

the difference between AGI and ML

I've seen the recent discussions on the ARC benchmark. It's not clear if native multi-modal models have been tested. I would expect 4o/Gemini models to do fairly well on these visual tests, and I expect them to do even better after finetuning (perhaps even better than humans). I tried to solve a few of the puzzles, and I'm not convinced they actually require "AGI". To me, generating text of GPT-4 quality should require more AGI-like "Abstraction and Reasoning" than these puzzles do. But, as you said, achieving "true" AGI is not really relevant in the context of this conversation.

how do you transition to post scarcity? ... UBI is by far the most common answer

I have no doubt that in 50 years, barring some global catastrophic event, we will have solved most of our basic problems (healthcare, education, having to work for a living, etc), even despite some of the new issues that you outlined. I am much more worried about the next 5-10 years. Let's explore a hypothetical scenario of what might happen if GPT-5 comes out 6 months from now, and if it is smart and reliable enough to solve some common tasks people are paid to do. I'm talking about data management, data analysis, communication (written and, looking at the GPT-4o demo, perhaps also oral). Jobs like bookkeeping, accounting, marketing, writing/journalism, administrative assistants (including medical and legal), account management, customer support, analysts, etc. These jobs won't disappear overnight, obviously, but look at self-driving cars - we have technology that works 99% of the time, today. For driving on public roads, 99% reliable is not good enough. But for some of the jobs above, perhaps it would be. Perhaps with layers of agents coordinating actions to gather and store the right information, to try different approaches or different models, and to verify results, we could do a good enough job for many managers to consider layoffs, or hiring freezes. I don't know if GPT-5 (or its rivals) will enable that, but I think we should consider the possibility.

There's also a strong possibility that progress does not stop in 6 months. We have just started to train large models on video data - there's a lot to learn about the world from the entirety of YouTube videos - in addition to learning from text. I would not be surprised if most of what GPT-6 can do two years from now comes from video data. I would not be surprised if GPT-5 helps us prepare high quality datasets and even helps us find better ways to train its successor. Significant progress might happen even without significant conceptual breakthroughs - just from further scaling up.

So, what do you think will happen if the above scenario plays out? Millions of people being laid off or not hired after school, and the situation getting worse every year, globally. Governments will try to feed them, of course, and the US is a rich enough country to support X% of the population for a few years, depending on how quickly we transition to a "post-scarcity" economy. I assume that eventually physical robots will grow food, create products, and provide services to meet basic needs, but it's not clear how long this transition will take, and what would happen in the meantime. We already have people in this country who successfully stormed the Capitol. Imagine a lot more such people, and imagine them a lot angrier. Aside from that, what would happen to our economy if X% of people stop paying taxes and become a burden? How would this scenario play out globally, with different countries transitioning in different ways?

I actually do consider the possibility that rulers might "let people die", by creating huge ghettos and then killing everyone there. It does not feel much worse to me than sending hundreds of thousands of people to die on a battlefront just because a dictator didn't like his neighbors. Or we could have something like the "Civil War" movie.

As you can tell, I'm less optimistic than you. I think that if progress in AI happens too fast, we, as a society, are in trouble. I do not think governments will be ready for powerful AI. I think the best case scenario is if we hit a plateau, with GPT-5 being only marginally better than GPT-4, and a slow transition to a post-scarcity world (10+ years) to give enough time for automation to make everything cheap. But I do worry a lot, and frequently ask myself whether I need to prepare for the worst, and if so, what I should do.



> It's not clear if native multi-modal models have been tested. I would expect 4o/Gemini models to do fairly well on these visual tests

They have been, and I do not expect them to. You can see my comment history talking about LLM failure cases.

I'd advise being careful about trying to reason your way through things when you don't have significant experience in a domain. Non-expert reasoning can lead to good guesses, but it should never be taken with high confidence. It's important to remember that nuance is often critical in these issues, and not accounting for it often leads to approximations giving you the opposite answer rather than a close-enough one.

But as Francois points out, LLMs are compression machines. That's what they mathematically are. They are not reasoning machines. A lot of people don't want to hear this because they think it undermines LLMs and that any criticism is equivalent to saying they're useless. But I still think they're quite impressive. Criticism is important, though, if we are to improve these systems. So don't get blinded by success.

> So, what do you think will happen if the above scenario plays out?

In the next 5-10 years I'm far more worried about people confusing knowledge and reasoning. It's not a thing most people have needed to differentiate in the past, because the two are generally associated with one another. But LLMs are more like Google being able to talk to you than like a parrot talking to you. If this sounds the least bit odd, I encourage you to dig more into these topics. They are not easy topics, because they are filled with nuance that is incredibly easy to miss. I keep stressing this point because it was one of my big fears: people's egos often set us back, especially when we have no trust in experts. It's crazy to think we know more than people who spend their lives on specific subjects, or that intelligence in one domain translates directly to another. So not knowing (most) things shouldn't ever be taken as a bad thing. There's not enough time to learn everything. There's not enough time to learn most things. So focus on a limited set, and for the rest learn maybe just to the point where you can see the level of complexity ahead. If things seem simple, you probably don't understand them well enough. Remember, there are thousand-page reference manuals on things as narrow as bolts because the details matter that much.

As to the problems you mentioned, I'm not sure how those would be solved with ML or even AGI. Technology can't solve everything, and a lot of these issues have significant amounts of politics and social choice tied up in them, which is where many of the problems come from (including cases where nuance dominates and then cascades, because we're talking about complex topics at a very high level and our knowledge is gained through a game of telephone rather than academically or experientially).

I think we're more than 50 years out from post-scarcity, which is to say that no reasonable prediction can be made. But it is still up to us if we want to increase the odds. I also agree with Francois that OpenAI has set us back on the path to AGI.

As for the fear, it's natural. Fear does help us. It's a great motivator. But it too can cripple us, and when it does it can give life to the very thing we fear. So care is needed when analyzing. The problem isn't that people aren't thinking. Everyone does, and everyone is doing it constantly, even our dumbest of friends and acquaintances. The problem is that people are not thinking deeply enough and have high confidence when stopping early. I'm not telling you not to have opinions on anything; it's only natural to have opinions on most things. Rather, be careful with the confidence you attribute to those opinions and to those of others. Here's the thing: if you do gain expertise in any single field, you'll see that there is this rich but complex landscape. There's a lot of beauty in the landscape, but also many pitfalls that cannot be avoided without some expertise, many of which are common to those entering a field. These are things not to get discouraged by but to be aware of, and they are why formal educations are typically beneficial. It's also worth noting that there is great beauty in this chaos ahead, even if it can be hard to see through the initial part of the journey.


I just watched the whole Dwarkesh/Chollet interview, and just like Dwarkesh was clearly not convinced by Chollet's arguments, neither am I. I still expect decent results (>50%) on ARC benchmark soon (this year) now that the AI community has noticed it. I took another look at it, and it seems the problem is not so much the complicated visual input encoding, it's more about actual spatial intelligence. I don't really see what ARC benchmark has to do with AGI, other than that AGI will require spatial intelligence - in addition to all other kinds of intelligence. To solve these puzzles we are likely to need a model that has been trained to predict the next frame in a video stream, probably something like SORA - in addition to predicting the next word. 4o/Opus/1.5 have some amount of spatial intelligence because they were trained to correlate text with a static image, but I'm guessing we need to use a lot more visual training data to gain ARC-level spatial intelligence at their scale. I think they might still get to 50% with some finetuning and other tricks, but I would not even try any lesser models. I think that if GPT-5 is being trained on videos, SORA style, it should have no problem beating humans on this test.

Regarding Chollet's discrete program search, I'm not familiar with that field, and I didn't quite get the idea of how to combine it with DL. Over the years I've heard some very smart people proposing complex approaches towards building AGI (Lecun, Bengio, Jeff Hawkins, etc), yet scaling up deep learning models is still the best one we have today. If Chollet believes in his hybrid, whatever it is, he should build some sort of a prototype/PoC. Why hasn't he? In any case, the good news is most of academic AI labs today don't have the money to scale up transformers, so they are probably trying out all these other ideas.

So you're not worried about impending mass unemployment, ok. That does make me feel a little better. I can be wrong, and I really want to be wrong.


> I still expect decent results (>50%) on ARC benchmark soon (this year)

What gives you this confidence? What is your expertise in ML? Have you trained systems? Developed architectures? Do you know why the systems currently fail?

> now that the AI community has noticed it.

Which community? The researchers or the public? The researchers have known of it for quite some time. The previous contest was famous, and so is Francois. Big labs have tried to tackle ARC for quite some time. You just don't see negative results.

> I don't really see what ARC benchmark has to do with AGI

ARC is a reasoning test, which is quite different from the LLM tests you have likely seen - those are memory tests. The problem is most people are not aware of what the models have been trained on. GI involves memory, it involves reasoning, it involves a lot of things.

> I think they might still get to 50% with some finetuning and other tricks, but I would not even try any lesser models.

And how do you have this confidence? Are you guessing? Have you tried? Because I can tell you that others have, even before the prize was announced. And I hope you realize there are a lot of models that do in fact do next-frame prediction. People have trained multimodal models on ARC.

Many people just assume it hasn't been tried. But that's a baseless assumption, with evidence to the contrary. Look into it yourself before making such claims.

> I've heard some very smart people proposing complex approaches towards building AGI (Lecun, Bengio, Jeff Hawkins, etc), yet scaling up deep learning models is still the best one we have today.

These are not in contention so I'm not sure what your argument is.

> If Chollet believes in his hybrid, whatever it is, he should build some sort of a prototype/PoC. Why hasn't he?

I'm sorry, but I'm going to say this is a dumb question. He's trying. A lot of us are. But clearly there are unsolved problems, and the conclusion doesn't follow from your question. We still don't know how to conceptually build a brain. And there are many things we conceptually know how to build but still can't: we conceptually know how to build space elevators, but we don't know how to build all the pieces to actually make one, even if we had infinite money.

And I'll ask you a similar question: if scale is all you need then why don't we have AGI now?

There may be parts to this question you don't know. We don't train LLMs for multiple epochs. LLM architecture has been rapidly changing despite maintaining the general structure of transformers (but they aren't your standard transformers, and reading the AIAYN paper won't get you there). And if scale was all you needed then shouldn't Google be leading the way? Certainly they have more data and compute than anyone else. In fact, I'd argue that this is why they do so poorly, and why LLMs are getting worse at the same time they're getting better.

> the good news is most of academic AI labs today don't have the money to scale up transformers, so they are probably trying out all these other ideas.

The unfortunate news is that when you propose some other architecture it gets lambasted in review because it does not perform at state of the art, and I've had SOTA papers get rejected due to "lack of experiments", which is equivalent to lack of compute. There's a railroad, and lots of academic funding comes from big tech, not universities or government. Go look at the affiliations of academic authors. Go to the papers and you'll see.

> So you're not worried about impending mass unemployment, ok

Oh, I'm worried. More worried about displacement. You know how things sucked when everything got outsourced? How they just cut corners, do the absolute bare minimum, and won't consider anything that makes any sense just because there are rules in place that were not correctly created but are strictly followed? Get ready for that to be much worse.


Well, that didn't take long, did it? 50% on ARC public test set [1] less than a week after the announcement of the prize. Though I have to say, the solution, at least superficially, does look like what Chollet alluded to: a hybrid of an LLM with "discrete program search/synthesis". Again, I'm not familiar with that field, so perhaps it's not at all what he had in mind, but it's intriguing. What do you think? Do you understand Chollet's idea well enough to explain whether this solution is on the right track?

if scale is all you need then why don't we have AGI now?

Well, it's my turn to use the "dumb question" card :) We don't have enough scale, obviously! I don't know if scale is all we need for AGI to emerge, but clearly we haven't reached the end of the benefits from scaling up. Until we do, it seems like the easiest and most promising approach. Considering the size of YouTube as a training corpus, we are pretty far from that end. Are there reasons to think otherwise?

LLM architecture has been rapidly changing

Aside from the mixture-of-experts architecture, which has its pros and cons vs a single large monolithic model, I'm not sure what has fundamentally changed since the original transformer proposed in 2017. Minor tweaks here and there, sure, but it's pretty much the same model, no?

if scale was all you needed then shouldn't Google be leading the way?

Oh, a lot of people have been asking how Google could drop the ball so badly, for so long. There are reasons, both well known and hidden from outsiders, but compute is not all you need to scale: you also need vision, clear direction, and effective coordination of efforts across multiple teams. Something that OpenAI has (or at least had), and which is rare at large corporations.

Re: academics - good ideas get noticed. Today, if someone discovers something good they don't even need to publish. Post a GitHub link on r/MachineLearning, together with benchmark results, and let people test it.

I'm worried. More worried about displacement

This is very interesting - I haven't even thought about it. It's very possible that in the beginning after the mass layoffs, GPT-5 will screw some things up, in subtle ways, and only GPT-6, some time later, will be able to fix them. People need to be ready for that. The period between GPT-5 and GPT-6 will be rough in more ways than I imagined.

[1] https://www.lesswrong.com/posts/Rdwui3wHxCeKb7feK/getting-50...


> Well, that didn't take long, did it? 50% on ARC public test set [1] less than a week after the announcement of the prize.

I think you also misunderstand the challenge, and the author very clearly misunderstands neurosymbolic AI, as he implements it... He has it generate programs and then search over those programs. He also tries to challenge Francois's claims (what they mean about current LLMs) while he actively performs "claim 1" and misunderstands the context of "claim 3" (model weights are frozen, so there is no online learning; this is distinct from what's going on here, since he is updating the model's priors before answering, but whatever insights the model has gained from this exercise do not persist after execution, i.e. there is no continual learning). "claim 2" is just irrelevant.
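
For concreteness, the generate-and-search loop he describes is roughly the following shape. This is only a sketch of my reading of it, not his code: sample_programs stands in for the LLM sampling call, and I'm assuming each candidate program defines a transform(grid) function.

  from typing import Callable, List

  Grid = List[List[int]]  # ARC grids are lists of rows of small ints

  def solve_task(task: dict, sample_programs: Callable[[dict, int], List[str]],
                 n_samples: int = 1000) -> List[Grid]:
      # Sample candidate Python programs from an LLM (hypothetical helper),
      # keep only those that reproduce every train pair, then apply a
      # surviving program to the test inputs.
      candidates = sample_programs(task, n_samples)
      survivors = []
      for src in candidates:
          scope: dict = {}
          try:
              exec(src, scope)  # each candidate is expected to define transform(grid)
              fn = scope["transform"]
              if all(fn(p["input"]) == p["output"] for p in task["train"]):
                  survivors.append(fn)
          except Exception:
              continue  # most sampled programs won't even run
      # Naive selection: take the first survivor; a real attempt would rank
      # or vote among survivors rather than stop at the first hit.
      return [survivors[0](t["input"]) for t in task["test"]] if survivors else []

Obviously the hard part is hidden inside sample_programs - prompt construction, grid encoding, and how many samples you can afford.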

A key part that is concerning to me is this

  > In addition to iterating on the training set, I also did a small amount of iteration on a 100 problem subset of the public test set.
The train and test sets are quite different, so if he learned anything from the test set then that invalidates it. And as far as I can tell, he does combine... https://github.com/rgreenblatt/arc_draw_more_samples_pub/blo...

Potentially the confusion is that each task file contains both a "train" and a "test" section - your demonstration pairs and then your actual input/output pair(s). So you're only supposed to train from ARC-AGI/data/training, but you cannot use ARC-AGI/data/evaluation for anything other than... evaluation.
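
For reference, this is the task file layout as I understand it (each JSON carries its own "train" demonstration pairs plus "test" pair(s); grids are lists of lists of ints; the filename below is just illustrative):

  import json

  # Example path into the repo layout mentioned above. Tasks under data/training
  # may be iterated on; tasks under data/evaluation should only be evaluated on.
  with open("ARC-AGI/data/training/0a938d79.json") as f:
      task = json.load(f)

  # "train": the demonstration input/output pairs for this one task
  for pair in task["train"]:
      print(len(pair["input"]), "rows in ->", len(pair["output"]), "rows out")

  # "test": the pair(s) whose output the solver is actually asked to produce
  print("test pairs in this task:", len(task["test"]))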

Not to mention that we don't know what data is in GPT. It would not be surprising if this was in it. Maybe they filtered out the official repo, but there are plenty of examples around the web. Did they check for all such examples? If not, then the result is entirely invalidated.

There's a lot of reason to believe information leakage exists here.

So I'll wait for an open solution before I start to take the result seriously.

> Re: academics - good ideas get noticed.

I also need to stress that ARC has been tested on LLMs for quite some time now. You can see it in both the GPT-2 and GPT-3 papers, though those are different versions than the one in the current competition. That version has ARC-e and ARC-c for easy and challenge. GPT-2 gets 68.8/51.4 with "zero-shot" (I'm not confident) and the original LLaMA gets 78.9/56.0. So really, if people aren't aware of ARC (prior to the video) then it really demonstrates that they are not doing this kind of research or even reading the papers.

And to be clear, we need to differentiate academics from normal people. I'm including anyone with a "machine learning researcher" or "machine learning engineer" title in "academics." This is where all the building is happening, and these people all should be very aware of ARC. The public not knowing, well, that's a whole different story and isn't really all that important now, is it? They're not the ones improving these systems (for the most part; there are of course always exceptions to the rule).



