Hacker News | daxfohl's comments

It's easy: we have reached AGI when there are zero jobs left. Or at least no non-manual-labor jobs. If there is a single non-physical job left, then that means that person must be doing something that AI can't, so by definition, it's not AGI.

I think it'll be a steep sigmoid function. For a long time it'll be a productivity booster, but without enough "common sense" to replace people. We'll all laugh about how silly it was to worry about AI taking our jobs. Then some AI model will finally get over that last hump, maybe 10 or 20 years from now (or 1000, or 2), and it will be only a couple months before everything collapses.


I dislike your definition. There are many problems besides economic ones. If you defined "general" to mean "things the economy cares about", then what do you call the sorts of intelligences that are capable of things that the economically relevant ones are not?

A specific key opens a subset of locks; a general key would open all locks. General intelligence, then, can solve all solvable problems. It's rather arrogant to suppose that we humans have it ourselves or that we can create something that does.


It also partitions jobs into physical and intellectual aspects alone. Lots of jobs have huge emotional/relational/empathetic components too. A teacher could get by being purely intellectual, but the really great ones have motivational/inspirational/caring aspects that an AI never could. Even if an AI says the exact same things, it doesn't have the same effect, because everyone knows it's just an algorithm.

And most people get by on those jobs by faking the emotional component, at least some of the time. AGI presumably can fake perfectly and never burn out.

> And most people get by on those jobs by faking the emotional component

If you think this is true, I would say you should leave artificial life alone until you can understand human beings better.


Have a long talk with any working teacher or therapist. If you think the regular workload is adequate for them to offer enough genuine emotional support for all the people they work with, always, every day, regardless of their personal circumstances, you're mistaken. Or the person you're talking with is incredibly lucky.

It doesn't have to be much, or intentional, or even good for that matter. My kids practice piano because they don't want to let their teacher down. (Well, one does. The other is made to practice because WE don't want to let the teacher down).

If the teacher was a robot, I don't think the piano would get practiced.

IDK how AI gains that ability. The requirement is basically "being human". And it seems like there's always going to be a need for humans in that space, no matter how smart AI gets.


Something still feels off if the formal proof can't be understood. I don't dispute its correctness, but there's a big jump from the 4-color theorem, where at least mathematicians understood the program, to this, where GPT did the whole thing. Like if GPT ceased to exist, nobody would have a clue how to recreate the formalization. Or maybe there's a step in there that's a breakthrough to some other problem, but since it was generated, we'll never notice it.

Where do you see any mention of GPT?

The computer-assisted component of the Noperthedron proof is a reasonably small sagemath program that was (as far as I know) written by humans: https://github.com/Jakob256/Rupert

Perhaps you have confused this article with a recent unrelated announcement about a vibe-coded proof of an Erdos conjecture? https://borisalexeev.com/pdf/erdos707.pdf


Oops you're right! I read these both yesterday and they blended together in my memory by the time I made this comment this morning. I knew something felt "off".

Tangentially I'll have to reconsider my position on long but lossy context LLMs.


I remember someone (G.H. Hardy?) saying something along the lines of "the only bad thing about math is that it can be useful."

I wonder if append-only will continue to be important. As agents get more powers, their actions will likely be the bottleneck, not the LLM itself. And at n^2, recomputing a whole new context might not take much longer than just computing the delta, or even save time if the new context is shorter.
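A rough back-of-envelope sketch of that intuition (toy numbers, ignoring prefill parallelism, batching, and cache reuse across turns), assuming attention work scales with the sum of context lengths processed per token:

    # Toy cost model only: attention work ~ sum of context lengths processed.
    # Ignores prefill parallelism, KV-cache reuse across turns, batching, etc.

    def extend_cost(n, k):
        """Append k tokens one-by-one onto an existing cache of length n."""
        return sum(range(n, n + k))

    def recompute_cost(m):
        """Prefill a brand-new context of length m from scratch."""
        return sum(range(m))

    n = 10_000
    print(extend_cost(n, n))       # grow 10k -> 20k incrementally: ~1.5e8
    print(recompute_cost(2 * n))   # rebuild the full 20k from scratch: ~2.0e8
    print(recompute_cost(15_000))  # rebuild a compacted 15k context: ~1.1e8 (cheaper)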

True. Generally it seems like you're visualizing things, moving stuff around, seeing vague patterns and trying to make them more clear. IDK how a transformer architecture would fit all of that in its context, or use it productively once it's there. You can't just keep appending forever, but you also can't delete stuff either, because unlike humans, a deletion is a hard delete; there's no fuzzy remembrance left to rely on, so even deleting bad ideas is dangerous because it'll forget that it was a bad idea and infinite loop. Symbol manipulation doesn't come until the end, after you have a good idea what that part will look like.

Hmm, I wonder what happens if you let them manipulate their own context symbolically, maybe something like a stack machine. Perhaps all you need is a "delete" token, or a "replace" flag. That way you don't have context full of irrelevant information.
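Purely a toy sketch of what I mean (the control tokens and the folding rule here are invented, not any real model's vocabulary): the model would emit ordinary tokens plus a couple of editing tokens, and a small interpreter folds the stream into a compacted context, stack-machine style.

    # Hypothetical "delete"/"replace" control tokens, applied stack-machine style.
    DELETE = "<del>"   # invented token: pop the most recent context entry
    REPLACE = "<rep>"  # invented token: the next entry overwrites the top of the stack

    def fold_context(stream):
        """Fold a stream of ordinary and control tokens into a compacted context."""
        context, replacing = [], False
        for tok in stream:
            if tok == DELETE:
                if context:
                    context.pop()      # hard delete: no fuzzy remembrance left behind
            elif tok == REPLACE:
                replacing = True
            elif replacing:
                if context:
                    context[-1] = tok
                else:
                    context.append(tok)
                replacing = False
            else:
                context.append(tok)
        return context

    # A retracted "bad idea" never reaches the final context:
    print(fold_context(["plan:", "use", "recursion", DELETE, "iteration"]))
    # -> ['plan:', 'use', 'iteration']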

I guess the challenge is, where would the training data come from? Data on the internet is in its final form so "next token" is never a delete.

Edit: I guess in essence, that's what reasoning LLMs already do. IIUC the thought blocks are ephemeral, and only the response is maintained for the chat. Maybe there'd be some benefit of doing this recursively? But that's also kind of what subagents are for. So, perhaps nothing new here.


The chains-of-thought here are artificially constructed, very information-dense partial sums formatted in a specific way that guides the fine tuning. A potential next step would be to look at real-world chains-of-thought and see whether some process could start with those and achieve the same result. Then you could really have a self-improving system!
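To make "information-dense partial sums" concrete, here's my own guess at the general shape (the paper's actual fine-tuning format may well differ):

    # Made-up illustration of a compact partial-sums trace for multiplication;
    # not the exact format used in the paper.
    def partial_sum_cot(a: int, b: int) -> str:
        steps, running = [], 0
        for place, digit in enumerate(reversed(str(b))):
            term = a * int(digit) * 10 ** place
            running += term
            steps.append(f"{a}*{digit}e{place}={term}, sum={running}")
        return f"{a}*{b}: " + " | ".join(steps) + f" => {a * b}"

    print(partial_sum_cot(637, 482))
    # -> 637*482: 637*2e0=1274, sum=1274 | 637*8e1=50960, sum=52234
    #    | 637*4e2=254800, sum=307034 => 307034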

Also I wonder if the LLM "knows" that it has this capability after fine-tuning. If it encounters multiplication as part of some larger chain-of-thought, will it solve that internally, or will it continue to do it step-by-step in the chain-of-thought?


But it's very hard to define "real-world CoT" -- think about humans: we learn multiplication by vertical calculation, and we learn division in a similar way -- all these learning processes require "information-dense" tools (the calculation procedure) with intrinsic math rules built in. Isn't that an adapted form of CoT?

Oh, by "real world" I meant "chains of thought generated by existing reasoning LLMs" (as opposed to injecting predefined CoT like was done in the experiment), not human thoughts.

Assuming you mean cloud platforms in general, I don't even think it's that tangential. In fact it may cut to the heart of the matter: if React-over-REST-over-SQL-plus-some-background-jobs was all we needed, cloud platform innovation would've stopped at Heroku and Rails 20 years ago, and AI could probably make a run at replacing SWE jobs entirely.

But as it's played out, there are a ton of use cases that don't fit neatly into that model, and each year the cloud platforms offer new tools that fit some of those use cases. AI will probably be able to string together existing tools as IaaS providers offer them, perhaps without even involving an engineer, but use cases that are still outside cloud platform offerings seem like things that require some ingenuity and creativity that AI wouldn't be able to grok.


To me, the key quote is the simple "If you had told me in late 2022 I’d be saying these things 3 years later, I would’ve been pretty surprised." As someone with little exposure to the design industry, seeing how quickly AI could generate images, I'd been under the assumption that the AI takeover was already well underway there, so was surprised to learn that it's not.

If anything, that gives some comfort around the future of engineering job prospects. While there's still room to worry ("yeah, but design is fundamentally human, while engineering is mostly technical and can be automated"), I'm sure that, just as design has realized, when we get to a point where AI should be taking over, we'll realize that there's a lot of non-technical things that engineers do that AI cannot replace.

Basically, if replacing a workforce is the goal, AI image generators and code generators look like replacement technologies from afar, but when you look closer you realize they're "the right solution to the wrong problem" to be a true replacement tech, and in fact don't really move the needle. And maybe AI as a whole, by definition of being artificial intelligence (as opposed to real common sense), is fundamentally an approach that "solves the wrong problem" as a replacement tech, even as AGI or even ASI gets created.


That quote stood out to me as well, but mostly because the 3 images shown by the author have nothing to do with product/interface/communications design.

I guess they’re vaguely cool looking images? If the author had used them to talk about how “concept art” in games/movies was going to get upended by AI there would be a point there, but as it stands I find it very puzzling that someone who claims to teach design would use them as key examples of why design - a human process of coming up with specific solutions to fuzzy problems with arbitrary constraints - was headed in any particular direction.


I think there's some benefit of hindsight in that perspective though. I can imagine how, at the time, you see the advancement, and it's not obvious what the barriers for AI takeover are. Similar to software now, plenty of SWEs have a nagging feeling about AI encroachment. But in all likelihood, eventually it'll become clear that most SWE work involves coordinating with other teams, planning incremental delivery and various testing and review phases, working with CS when users face issues, etc. The boundaries will be a lot clearer, and looking back, the current FUD over a better autocomplete will seem ridiculous by then. (At least I hope so!)

Or by watching rats solve mazes

I don't think we've attempted to study whether rats have internal monologues all that much yet. It wouldn't surprise me if they did, or did not. I wouldn't say it is safe to assume they don't.

About the only real animal-model research has shown that some species of monkey probably do. [0]

[0] https://www.sciencedirect.com/science/article/abs/pii/S01664...

