
Wasn't he fired for questionable reasons? I thought everyone wanted him back, and that's why he was able to return. It was, as I remember, just the board that wanted him out.

I imagine if he was doing something truly nefarious, opinions might have been different, but I have no idea what kind of cult of personality he has at that company, so I might be wrong here.




> I thought everyone wanted him back, and that's why he was able to return.

Everyone working at OpenAI wanted him back. Which only includes people who have a significant motivation to see OpenAI succeed financially.

Also, there are rumours he can be vindictive. For all I know, that might be a smear campaign. But if the rumours were true, and half the people at OpenAI wanted him back, the other half would have a motivation to follow suit so as not to invite whatever punishment Sam might dole out.


It sounds to me like people working in AI these days have a lot more options than being afraid of one particularly vindictive man.


Between just working on ML tech and working on ML tech with US$XXX billion of Microsoft's money (salaries aside), the latter imaginably makes for a very different value proposition than some other startup in the space: access to nuclear plants' worth of energy, compute power and cutting-edge GPUs, a legal department for defending against intellectual property claims, etc. (The self-image of working at a company allegedly "advancing humanity" likely helps, too.)


True to some extent. But if you have OpenAI on your resume, I have to imagine your next job needn't be much different (e.g. go to any FAANG).


I think (based on very shallow research) among FAANG behemoths Meta is the closest in terms of resources thrown at ML, but it’s an ad company that hasn’t got such a “we advance humanity” image. Equity vesting and that sort of stuff can additionally make moving companies a problematic prospect, even if the new place offered a competitive salary…


Yeah, Meta pays well, but they probably wouldn't match the $10M equity pile that some generic engineer or researcher who joined 1 year before ChatGPT dropped could be sitting on. Ofc, the equity->cash conversion rate for OpenAI pseudo stocks isn't clear, but those paper gains would be tough to abandon.


> Wasn't he fired for questionable reasons?

There were a number of concerns, including for-profit projects he was pursuing despite his public insistence on OpenAI being non-profit as well as generally deceptive behaviour on his part. The last part at least is consistent with what others have said about Altman previously, including what allegedly led to his exit from YC, although they have kept those stories pretty quiet. But it seems like PG himself no longer has a lot of trust in Altman after he basically made him his heir apparent, and he has known him for a while now.

What's more, the driving force behind the board's move to remove him from OpenAI was reportedly Ilya Sutskever who was one of their key people and one of the handful of original founders. So it wasn't just a bunch of external people who wanted Altman gone, but at least one high level insider who has been around for the entire thing.

Altman himself was even once asked by journalists whether we could trust him, to which he outright answered "no". But then he pointed to the fact that the board could remove him for any reason. Clearly, he was suggesting that there were strong safeguards in place to keep him in check, but it is now apparent that those safeguards never really existed, while his answer still stands.


I don't pay particularly close attention to AI, but I'm not seeing the knock on him for saying "no."

What human in the world could be trusted with AI? Only delusional people could say yes to that question.


> What human in the world could be trusted with AI? Only delusional people could say yes to that question.

"AI" is too broad a topic to draw that conclusion.

Almost everyone can be trusted with Stockfish. Almost: https://github.com/RonSijm/ButtFish

Most of us can be trusted with current LLMs, despite the nefarious purposes a minority can put them to. Spammers and fraudsters are still a minority of actors, even though these tools increase their capabilities; and the models are still "only" at the student/recent-graduate level for many tasks, so using them for hazardous chemical synthesis will likely literally blow up in the face of the person attempting it.

Face recognition AI (and robot control attached to it) is already more than capable enough to violate the 1995 Protocol on Blinding Laser Weapons (Protocol IV to the 1980 CCW), and we're just lucky random lone-wolf malicious actors have not yet tried to exploit this: https://en.wikipedia.org/wiki/Protocol_on_Blinding_Laser_Wea...

We don't yet, it seems, have an AI capable enough that a random person wishing for world peace will get a world with the peace of the grave, nor will someone asking for "as many paperclips as possible" accidentally von-Neumann the universe into nanoscale paperclip-shaped molecules. Indeed, I would argue that if natural language understanding continues on its current path, we won't ever get that kind of "it did what I said but not what I meant" scenario. What I currently think most likely is an analogy to the industrial revolution: we knew CO2 was a greenhouse gas around the turn of the 1900s, and we still have not stopped emitting it, because most people prefer the immediate results of the things that emit it over the distant benefits of not emitting it.

But even that kind of AI doesn't seem to be here just yet. It may turn out that GenAI videos, images, text, and voice are in fact as much of a risk as CO2, and that they collapse any possibility of large-scale cooperation, but for the moment that's not clear.


A smartphone can be used to guide an ICBM.

What human in the world could be trusted with one?


A road can be used for abductions, acts of violence, and other harmful activities.

What human in the world could be trusted with it [roads]?


Most humans are frequently harming themselves (and sometimes harming others) with their smartphones, so...


> I thought everyone wanted him back,

Ilya Sutskever, who was the chief scientist of the company and honestly irreplaceable in terms of AI knowledge, left after Altman returned.


Only 1 of the 6 board members is still at OAI.


He also apologized and said he was wrong.


That was only after it was apparent that a majority of employees would back Altman's return, I believe. And in all likelihood, most of that majority had spent less time with him than Ilya had.


Define everyone. I was delighted when they fired him. I don't believe he has humanity's best interest at heart.


745 of 770 employees responded on a weekend to call for his return and threatened mass resignations if the board did not resign; among the signatories was board member Sutskever, who defected from the board and publicly apologized for his participation in the board's previous actions.


>745 of 700 employees responded on a weekend to call for his return and threatened mass resignations if the board did not resign

I would think the final count doesn't really matter. Self-serving cowards, like me, would sign it once they saw which way the wind was blowing. How many signed it before Satya at Microsoft indicated support for Altman?


I followed the drama. The point I was (somewhat unsuccessfully) trying to make was that while, sure, there were groups who wanted him back (mainly the groups with vested financial interests and associated leverage), my sense was that the way it played out was not necessarily in line with wider humanity's best interest, i.e. as would have been hoped based on OpenAI's publicly stated goals.


Oh, in that case sure.

The statements in the whole popcorn drama still don't add up in my mind, so from the point of view of "humanity's best interest", I'd say it's still bad.

I thought you meant at the time, not with the benefit of hindsight.


>my sense was that the way it played out was not necessarily in line with wider humanity's best interest,

Sure. But you make the foolish assumption here that humanity even has humanity's best interests at heart. Sentiment may be negative on current generative LLM-based AI, but there are still plenty of people who either have potential vested interests or are simply missing the forest for the trees. It's pretty hard to say "everyone hates/loves AI" at this current time.


I'd do that too if I held stock, and I think the guy is a borderline-vampire.


745 of 700?


Whoops, typo for 770, edited to correct. Thanks!



At the time, was it possible for people working at OpenAI to, er, "cash out"?

I don't actually know the answer to that, but I'm suggesting that perhaps people had additional motives for the organization to preserve its current hype/investment growth strategies.


He was fired because he was leading the for-profit in a direction contrary to the mission of the non-profit parent that was supposed to be controlling it.

Those working for the for-profit got $$$ in their eyes once MSFT started throwing billions at them, and said: fuck the mission, bring back Altman, we're going full Gordon Gekko now.



