
> Wasn't he fired for questionable reasons?

There were a number of concerns, including for-profit projects he was pursuing despite his public insistence that OpenAI was a non-profit, as well as generally deceptive behaviour on his part. The last part at least is consistent with what others have said about Altman before, including what allegedly led to his exit from YC, although those stories have been kept pretty quiet. It also seems that PG himself no longer has much trust in Altman, despite having basically made him his heir apparent and having known him for a long time now.

What's more, the driving force behind the board's move to remove him from OpenAI was reportedly Ilya Sutskever, one of their key people and one of the handful of original founders. So it wasn't just a bunch of external people who wanted Altman gone, but at least one high-level insider who has been around for the entire thing.

Altman himself was once asked by journalists whether we could trust him, and he outright answered "no". But then he pointed to the fact that the board could remove him for any reason. Clearly, he was suggesting that there were strong safeguards in place to keep him in check, but it is now apparent that those safeguards never really existed, while his answer still stands.




I don’t pay particularly close attention to AI, but I’m not seeing the knock on him for saying “no.”

What human in the world could be trusted with AI? Only delusional people could say yes to that question.


> What human in the world could be trusted with AI? Only delusional people could say yes to that question.

"AI" is too broad a topic to draw that conclusion.

Almost everyone can be trusted with Stockfish. Almost: https://github.com/RonSijm/ButtFish

Most of us can be trusted with current LLMs, despite the nefarious purposes a minority can put them to. Spammers and fraudsters are still a minority of actors, even though these tools increase their capabilities; and the models are still "only" at the student/recent-graduate level for many tasks, so using them for hazardous chemical synthesis will likely literally explode in the face of the person attempting it.

Face-recognition AI (and robot control attached to it) is already more than capable of violating the 1995 Protocol on Blinding Laser Weapons, and we're just lucky random lone-wolf malicious actors have not yet tried to exploit this: https://en.wikipedia.org/wiki/Protocol_on_Blinding_Laser_Wea...

We don't yet, it seems, have an AI capable enough that a random person wishing for world peace would get a world with the peace of the grave, nor would someone asking for "as many paperclips as possible" accidentally von-Neumann the universe into nanoscale paperclip-shaped molecules. Indeed, I would argue that if natural-language understanding continues on its current path, we won't ever get that kind of "it did what I said but not what I meant" scenario. What I currently think most likely is an analogue of the industrial revolution: we knew CO2 was a greenhouse gas around the turn of the 1900s, and we still have not stopped emitting it, because most people prefer the immediate results of the things that emit it over the distant benefits of not emitting it.

But even that kind of AI doesn't seem to be here just yet. It may turn out that GenAI video, images, text, and voice are in fact as much of a risk as CO2, and that they collapse any possibility of large-scale cooperation, but for the moment that's not clear.


A smartphone can be used to guide an ICBM.

What human in the world could be trusted with it?


A road can be used for abductions, acts of violence, and other harmful activities.

What human in the world could be trusted with it [roads]?


Most humans are frequently harming themselves (and sometimes harming others) with their smartphones, so...



