
I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.

That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.




Two things that jump out at me here.

First, this assumes that they will know when they approach AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are set up. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans at most tasks - how can you predict that?

Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and will never know what the AGI is really capable of.

This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.


Read the sentence before; it provides good context. I don't know if Ilya is correct, but it's a sincerely held belief.

> “a safe AI is harder to build than an unsafe one, then by opensourcing everything, we make it easy for someone unscrupulous with access to an overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”


Many people consider what OpenAI is building to be the dangerous application. They don't seem nefarious to me per se, just full of hubris, and somewhat clueless about the consequences of Altman's relationship with Microsoft. That's all it takes though. The board had these concerns and now they're gone.


"Tools for me, but not for thee."


I think the fundamental conflict here is that OpenAI was started as a counterbalance to Google's AI efforts and all other future resource-rich cos that decide to pursue AI, BUT at the same time they needed a socially responsible / ethical vector to piggyback off of in order to raise money and recruit talent as a non-profit.

So, they can't release science that the Googles of the world can use to their advantage, BUT they kind of have to, because that's their whole mission.

The whole thing was sort of dead on arrival, and Ilya's email dating back to 2016 (!!!!) only underscores that.


When the tools are (believed to be) more dangerous than nuclear weapons, and the "thee" is potentially irresponsible and/or antagonistic, then... yes? This is a valid (and moral) position.


If so, then they shouldn't have started down that path by refusing to open-source the 1.5B-parameter GPT-2 for a long time while citing safety concerns. It's obvious that it never posed any kind of threat, and to date no language model has. None have even been close to threatening.

The comparison to nuclear weapons has always been mistaken.


Oh I'm talking about the ideal, not what they're actually doing.


Sadly one can't be separated from the other. I'd agree if it were true. But there's no evidence it ever has been.

One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.


> One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.

I mean, I'm totally with them on the fear of AI safety. I'm definitely in the "we need to be very scared of AI" camp. Actually the alien thought experiment is nice - because if we credibly believed aliens would come to earth in the next 50 years, I think there's a lot of things we would/should do differently, and I think it's hard to argue that there's no credible fear of reaching AGI within 50 years.

That said, I think OpenAI is still problematic, since they're effectively hastening the arrival of the thing they supposedly fear. :shrug:


It makes people feel mistrusted (which they are, and in general should be). It's a bit challenging to overcome that.



