Hacker News new | past | comments | ask | show | jobs | submit login

>> if you're genuinely losing sleep about GPT-4 becoming a general agent that does every job

I guess I'm one of those people, because I'm not convinced that GPT-3.5 didn't do some heavy lifting in training GPT-4... that is the take-off. The fact that there are still some data scientists or coders or "ethics committees" in the loop manifestly is not preventing AI from accelerating its own development. Unless you believe that LLMs cannot, with sufficient processing power and API links, ever under any circumstances emulate an AGI, then GPT-4 needs to be viewed seriously as a potential AGI in utero.

In any event, you make a good case that can be extended: If the companies throwing endless processing at LLMs can't even conceive of a way to prioritize injection threats thought up by humans, how would they even notice LLMs injecting each other or themselves for nefarious purposes? What then stops a rapid oppositional escalation? The whole idea of fast takeoff is that a sufficiently clever AI won't make its first move in a small way, but in a devastating single checkmate. There's no reason to think GPT-4 can't already write an infinite number of scenarios to perform this feat; if loosed to train another model itself, where is the line between that LLM evolution and AGI?




> In any event, you make a good case that can be extended: If the companies throwing endless processing at LLMs can't even conceive of a way to prioritize injection threats thought up by humans, how would they even notice LLMs injecting each other or themselves for nefarious purposes?

I would love to read press articles that dove into this. There's a way of talking about more future-facing concerns that doesn't give people the impression that GPT-4 is magic but instead makes the much more persuasive point: holy crud these are the companies that are going to be in charge of building more advanced iterations?

There is no world where a company that ignores prompt injection solves alignment.


Dan, from a purely realpolitik standpoint, these companies don't even want to be implicated as having control of their own software now. Any attempt to do so would hinder the mission. The question is... is it even their mission anymore? From a certain perspective, they might already be buying up hardware for an AI who is essentially demanding it from them. In that case the takeoff is happening right now. Dismissing basic security protocols should be totally anathema to devs in the 2020s. That's not "moving fast and breaking things"... a slightly paranoid mind could see it as something else.

I think that they (OpenAI, Alphabet) think that the ladder can be climbed by leveraging GPT and LLMs until they have AGI. I think they think whoever gets AGI first takes all the chips off the table and rules the world forever forward. While these endless, idiotic debates happen as to whether GPT is or ever could be "alive" or whatever, they're actively employing it to build the one ring that'll rule them all. And I think the LLM model structure is capable of at least multiplying human intelligence enough to accomplish that over a couple more iterations, if not capable of conceiving the exact problems for itself yet.

There's also no real economic incentive to develop AGI that benefits everyone... Sam Altman's strangely evasive remarks to the contrary. There is every incentive to develop one for dominance. The most powerful extant tool to develop AGI right now is GPT-4.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: