OpenAI's board has fired Sam Altman (openai.com)
5710 points by davidbarker 11 months ago | 2531 comments



All: our poor single-core server process has smoke coming out its ears, as you can imagine.

I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)

I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or like this:

https://news.ycombinator.com/item?id=38309611&p=2

https://news.ycombinator.com/item?id=38309611&p=3

https://news.ycombinator.com/item?id=38309611&p=4

https://news.ycombinator.com/item?id=38309611&p=5

Sorry! Performance improvements are inching closer...


From NYT article [1] and Greg's tweet [2]

"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”

Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.

He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."

[1] https://www.nytimes.com/2023/11/17/technology/openai-sam-alt...

[2] https://twitter.com/gdb/status/1725736242137182594


So they didn't even give Altman a chance to defend himself for supposedly lying (inconsistent candour, as they put it). Wow.


Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

[1] - https://sfstandard.com/2023/11/17/openai-sam-altman-firing-b...


TY for sharing. I found this to be very enlightening, especially when reading more about the board members that were part of the ouster.

One of the board members who fired him co-signed these AI principles (https://futureoflife.org/open-letter/ai-principles/) that are very much in line with safeguarding against the risks of general intelligence.

Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.


So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always are) to a short-term window, betting on the predictions of a non-technical board member.

There's far more to the world than China, on top of that; importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny), and China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick, it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).

The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long since passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted, but usually after 10 starts/stops and half-baked non-solutions papered over as real solutions - if at all.


> The Chinese are experts at copying and stealing Western tech.

Sure that's been their modus operandi in the past, but to hold an opinion that a billion humans on the other side of the Pacific are only capable of copying and no innovation of their own is a rather strange generalization for a thread on general intelligence.


Well, I guess (hope) no one thinks it is due to genetic disabilities that prevent disruptive innovations from (mainland) Chinese people.

It is rather a cultural/political thing. Free thinking and stepping out of line is very dangerous in an authoritarian society. Copying approved tech, on the other hand, is safe.

And this culture has not changed in China lately, rather the opposite. Look at what happened to the Alibaba founder, or why there is no more Winnie the Pooh in China.


This seems to make more sense. Perhaps it has to do with OpenAI not being "open" anymore. Dropping support for and getting rid of OpenAI Gym was certainly a big change in the direction of the company.


I'm confused. It's usually the other way around; the good guy is ousted because he is hindering the company's pursuit of profit.


This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.


"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "

This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI does indeed open itself up more; till then I remain sceptical, because lots of power and money are at stake here.


Those people aren't about openness. They seem to be members of the "AI will kill us all" cult.

The real path to AI safety is regulating applications, not fundamental research, and making fundamental research very open (which they are against).


That's what it's looking like to me. It's going to be as beneficial to society as putting Greenpeace in charge of the development of nuclear power.

The singularity folks have been continuously wrong about their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see that these people are gaining some traction while actively working against technological progress.


I want you to be right. But why do you think you're more qualified to say how to make AI safe than the board of a world-leading AI nonprofit?


Literal wishful thinking ("powerful technology is always good") and vested interests ("I like building on top of this powerful technology"), same as always.


Because I work on AI alignment myself and had been training LLMs long before Attention is All You Need came out (which cites some of my work).


Someone is going to be right, but we also know that experts have been known to be wrong in the past, ofttimes to catastrophic effect.


In this case, the company is a non-profit, so it is indeed the other way around



It is not that simple. https://openai.com/our-structure

The board is for the non-profit that ultimately owns and totally controls the for-profit company.

Everyone that works for or invests in the for-profit company has to sign an operating agreement stating that the for-profit does not actually have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.


Then what's the point of the for-profit?


> Then what's the point of the for-profit?

To allow OpenAI to raise venture capital, which allows them to exchange equity for money (ie, distribute [future] rights to profit to shareholders)


If you don’t know anything, why are you posting


Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.

Apparently Microsoft was also blindsided by this.

https://www.axios.com/2023/11/17/microsoft-openai-sam-altman...


So it looks like they did something good.


Yes. They freed Sam and Greg from their shackles and gave a clear indicator that OAI engineers should jump ship into their new venture. We all win.


Perhaps joining Bret Taylor and his friend from Google X? Can’t imagine what those brains might come up with.


If you want AI to fail, then yes.


Melodrama has no place in the AI utopia.


The only thing utopian ideologies are good for is finding 'justifications' for murder. The "AI utopia" will be no different. De-radicalize yourself while you still can.


> The only thing utopian ideologies are good for is finding 'justifications' for murder.

This seems more like your personal definition of "utopian ideology" than an actual observation of the world we live in.


It seems like an observation to me. Let's take the Marxist utopian ideology. It led to 40-60 million dead in the Soviet Union (Gulag Archipelago is an eye-opening read), and 40-80 million dead in Mao Zedong's China. It's hard to even wrap my mind around that number of people dead.

Then there's a smaller example in the Matthias cult from the "Kingdom of Matthias" book, started around the same time as Mormonism, which led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their "utopian ideology".

I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition on human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.


"I have a Christian worldview"

We are quite OT here, but I would say Christianity in general is a utopian ideology as well: all humans could be living in peace and harmony if they just believed in Jesus Christ. (I know there are differences, but this is the essence of what I was taught.)

And well, how many were killed in the name of the Lord? Quite a lot I think. Now you can argue, those were not really christians. Maybe. But Marxists argue the same of the people responsible for the gulags. (I am not a marxist btw)

"Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal."

And it simply depends on the specific utopian ideal. A good utopian concept/dream takes humans as they are - and still finds ways to improve living conditions for everyone. Not every utopia claims to be an eternal heaven for everyone; there are more realistic concepts out there.


You could also credit Marxism for workers rights.

Claiming utopian ideologies NEVER do good in the world would require some very careful boundary drawing.


Kibbutz?


Huh, I've read Marx and I don't see the utopianism you're referencing.

What I do see is "classism is the biggest humanitarian crisis of our age" and "solving the class problem will improve people's lives," but nowhere do I see a claim that non-class problems will cease to exist. People will still fight, get upset, struggle, just not on class terms.

Maybe you read a different set of Marx's writing. Share your reading list if possible.


This article gives a clear view of Marx's and Engels's view of utopianism vs. the other utopian socialists [1]: Marx was not opposed to utopianism per se, but rather to utopian ideas that did not come from the proletariat. Yet you're right in that he was opposed to the views of the other utopian socialists, and there is tension in the views of the different socialist thinkers of that time. (I do disagree with the idea that refusing to propose an ideal negates one from in practice having a utopic vision.)

That said my comment was looking mainly at the result of Marxist ideology in practice. In practice millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin’s utopian ideal [2].

[1] https://www.jstor.org/stable/10.7312/chro17958.7?searchText=...

[2] https://www.jstor.org/stable/3143688?seq=1


That makes sense. It would be like attributing deaths due to Christianity to the Bible because there is a genealogy of ideas?


I know we are a bit off topic. It seems it would be more like if several prominent followers of Jesus committed mass genocide in their respective countries within a century of his teachings. Stalin is considered Marxist-Leninist.


Oh ok. That makes sense. So if someone has an idea that causes a lot of immediate harm then the idea is wrong, but if there is a gap then it is not?


Utopian ideologies are also useful when raising funds from SoftBank and ARK


Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.

Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!


All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".


GPT is better than an average human at coding. GPT is worse than an average human at recognizing the bounds of its knowledge (i.e. it doesn't know that it doesn't know).

Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.

If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
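
For the curious, here's a minimal sketch (assuming PyTorch, with random tensors standing in for the model output and the tokenized corpus) of what that training objective looks like. The loss simply rewards predicting the next token of whatever text you feed in, so corpus quality directly shapes what "plausible" means to the model:

    import torch
    import torch.nn.functional as F

    vocab_size = 50_000
    # Stand-in for a chunk of tokenized training text (e.g. from The Pile).
    tokens = torch.randint(0, vocab_size, (1, 128))
    # Stand-in for the model's output: one score per vocabulary entry at each position.
    logits = torch.randn(1, 128, vocab_size)

    # Next-token objective: position i is scored on how well it predicts token i+1.
    loss = F.cross_entropy(
        logits[:, :-1, :].reshape(-1, vocab_size),
        tokens[:, 1:].reshape(-1),
    )
    print(loss.item())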

Is that the only way to train an AI? No. E.g. check the "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 - a small model trained on a high-quality dataset can beat much bigger models at code generation.

So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?


Being better than the average human at coding is as easy as being better than the average human at surgery. Until it's better than actual skilled programmers, the people who are programming for a living are still responsible for learning to do the job well.


Because people here are into tech? That's pretty much the whole point of this site?

Just imagine if we all only used proven products, with no trying out of cool experimental or incomplete stuff.


Without supposing we're on this trajectory, humans no longer needing to focus on being productive is how we might be able to focus on being better humans.


Well, that's the goal isn't it? Having AI take over everything that needs doing so that we can focus on doing things we want to do instead.


Some humans hallucinate more than others


humanity is capable of taking feedback, citing its sources, and not outright lying

these models are built to sound like they know what they are talking about, whether they do or not. this violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless


Nobody's forcing anybody to use these tools.

They'll improve on hallucinations and such later.

Imagine people not driving the Model T cause it didn't have an airbag lmao. Things take time to be developed and perfected.


The Model T killed a _lot_ of people, and almost certainly should have been banned: https://www.detroitnews.com/story/news/local/michigan-histor...

If it had been, we wouldn't now be facing an extinction event.


Yea, change is bad.


Numerically, most change is bad.


And yet we make progress. It seems we've historically mostly been effective at hanging on to positive change, and discarding negative change


Yes, but that's an active process. You can't just be "pro change".

Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.


You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipped with a Beta sign plastered onto them.


I first got internet access in 1999, IIRC.

Web sites were quite stable back then. Not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in the 2000s.

They had the "beta" sign because they had much higher quality standards. They warned users that things were not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.

Also, being down is one thing; sending random crap to a user is completely another. E.g. consider web mail: if it is down for one hour, it's kinda OK. If it shows you random crap instead of your email, or sends your email to the wrong person, that would be very much not OK, and that's the sort of issue that OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.


But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.


If it marketed itself that way, with the big disclaimers tarot readers have that this is just for entertainment and not meant to be taken as factual advice, it might be doing a lot less harm, but Sam Altman would make fewer billions, so that is apparently not an option.


Chat-based AI like ChatGPT are marketed as an assistant. People expect that it can answer their questions, and often it can answer even complex questions correctly. Then it can fail miserably on a basic question.

GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.

Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.

Example: somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it a task to make a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441

Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.


Needlessly pedantic. Hold consumers accountable too. "Durr I thought autopilot meant it drove itself. Manual, nah brah I didn't read that shit, reading's for nerds. The huge warning and license terms, didn't read that either dweeb. Car trying to stop me for safety if I take my hands off the wheel? Brah I just watched a Tiktok that showed what to do and I turned that shit offff".


Perhaps we need a better term for them then. Because they are immensely useful as is - just not as a, say, Wikipedia replacement.


You could also say that shipping social media algorithms with unknown effects on society as a whole are why we're in such a state right now. Maybe we should be more careful next time around.


This is not a story about AI.

It's a story about greed, vanity, and envy.

Impossible to be more human than that.


> "Sutskever and his allies focused on the original non-profit mission of OpenAI."

Seems reasonable, I mean that's why Sutskever joined in the first place?


Not just Sutskever, but other top researchers joined the then-nascent OpenAI team for the same reason. Most of them are on record indicating they turned down much bigger paychecks.

The problem I see is that the astronomical costs of training and inference warrant a for-profit structure like the one Sam set up. It was a nice compromise, I thought; but of course, Sutskever thinks otherwise.


Maybe Sutskever is finished with his LLM experiments and now has other interests and ideas to pursue, while Sam was keen to make money and stay on the same trajectory. Microsoft also felt the same way.

Could see this


The commercial shift started quite some time ago; what's the point of firing them now?

And why such a controversial wording around Altman?

Why fire Brockman too?


Brockman quit, he wasn’t fired.


He was removed from one of his roles (chairman) and quit the other (president) if I understand correctly.


If true, this gives me hope the Open can return to OpenAI.


Given the board members’ focus on safety, maybe not that likely.


Open source is the only path to broad public accountability, which is a prerequisite for safety.


Microsoft won't be happy about this


What is bad for Microsoft is good for the world.


It's hard to believe that Altman was fired over his stance on commercialisation.


The fact that the press release is 50% dedicated to repeating that OpenAI is supposed to be a non-profit and help all of humanity isn't enough for you to believe this is the reason?


The abruptness of the firing and the fact that they give his lying to the board as the reason is why I don't believe that this is over a general disagreement on direction.


They have to say the reason is a fireable offense or he can sue them. Or will be more likely to win if he does.


It's exactly the other way around - if they dismiss him for a vague general reason, they're much less exposed to litigation than they would be if they falsely accused him of lying.


You are 100% correct here, which is how we can reasonably conclude that the accusations were not false.


If the accusations by the board are true, that doesn't explain why Brockman and a few of the senior researchers quit as a response to all of this.


Them leaving does not imply the accusations are false. They may like him, they may dislike the new boss regardless of the accusations, they may dislike the overall future direction. They may think they would be fired sometime later regardless.


As another comment below mentioned, Elon Musk hinted at this in his interview with Lex Fridman.

Specifically, he mentioned that OpenAI is supposed to be open source and non-profit. Pursuing profit and making it closed-source brings "bad karma".


Why can't they use money from the for-profit to fund the nonprofit work again once others have caught up? The only moat seems to be the research time invested.


Many believe that race dynamics are bad, so have the goal of going as slowly and carefully as possible.

The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.


What a hypocritical board, firing them after massive commercial success!

Classic virtue signalling for the sake of personal power gains, as so often happens.


What’s hypocritical about a non-profit firing a leader who wanted lots of profits.


Didn't think I'd need to explain this:

The hypocritical part is doing so right AFTER beginning to take off commercially.

An honorable board with backbone would have done so at the first inkling of commercialization instead (which would have been 1-2 years ago).

Maybe you can find a better word for me but the point should be easily gotten ...


OpenAI hasn't made billions in profits. Their operating costs are huge and I'm pretty sure they're heavily reliant on outside funding.


Which puts into question the whole non-profitness anyway, but that aside:

They have still been operating pretty much like a for-profit for years now so my point still stands.


Your point hinged on billions in profit, which you just made up, or assumed to be true for some reason. I don't think any of your points stand. Don't use facts you haven't checked as preconditions for points you want to make.


[flagged]


A non-profit doesn’t have to offer their services for free, they can cover their expenses.

A profit driven company will often offer their services below cost in order to chase away the competition and capture users.


Right.

Which is why the board's accusations against Sam are a farce as far as we can tell.


Have they gotten specific yet? Last I heard was the whole “not sufficiently candid” thing, which is really nebulous; hard to call it a farce really. It is a “to be continued.”

I’m going to wait and see before I get too personally attached to any particular position.


To think that "Non-Profit" means "Free" is pretty naive. There are operating costs to maintain millions of users. That doesn't mean they are trying to profit.


Exactly.

So what's Sam's crime exactly, trying to cover the costs?


Again, conjecture with no supporting evidence.


Not sure what you're trying to say.

Clearly, under Altman, OpenAI has been massively successful one way or another, correct?

Now they boot him and claim moral superiority? Really?


I mean, as far as I know the guy hasn't written a single line of code.


Three other board members stepped down this year. It might not have been possible before.


Ofc it's "not possible" in that it may incur personal costs.

But it's the honorable thing to do if you truly believe in something.

Otherwise it's just virtue signalling.


No, they may literally have not had the votes.


Almost more of a "takeover" by the board after it's successful lol


I am going to go out on a limb here and speculate... This was because the Microsoft CEO crashed the party at OpenAI's first Developer Conference...


Kara Swisher was told the dev conference was "an inflection point", so it's not that speculative.


I doubt this was a surprise to them, I’m sure Sam was well aware of the concerns and repeatedly ignored them, and even doubled down. Putting OpenAI’s mission in jeopardy.

Many politically aligned folks will leave, and OAI will go back and focus on mission.

New company will emerge and focus on profits.

Overall probably good for everyone.


Why would employees be consulted before being fired?


Because board members are not employees, or not just employees. They're part of the democratic governance of an organization.

The same way there's a big difference between firing a government employee and expulsion of a member of Congress.


Wow, that is actually the first time I hear someone use democracy and corporation unironically together...

In a sense, board members have even less protection than rank and file. So no, nothing special is happening at OpenAI other than a founder CEO being squeezed out, not the first nor the last. And personal feelings never factor into that kind of decision.


Ha, true. Well, I did say "democratic governance", not "democracy" itself.

Substitute "rules of order" or "parliamentary procedure" if you like. At the end of the day, it's majority vote by a tiny number of representatives. Whether political or corporate.


Is that news to you? Corporate governance is structured pretty much the same as parliamentary democracies. The C-suite is the cabinet, the board of directors is the parliament/house of representatives, and the shareholders are the public/voters.


would be hilarious if Altman was directly hired by Microsoft to head their AI teams now.


He may have had ample chance before.


Sam's sad face in the NYT article is pretty priceless.


[flagged]


Google Meet is quite good, much better than Teams, IME.


Yup, it's my default for most meetings; share a link and it just works fine.


OpenAI also uses Google Forms -- here's what you get if you click the feedback form when your question gets flagged as violating OpenAI's content policies https://docs.google.com/forms/d/e/1FAIpQLSfml75SLjiCIAskEpzm...


I think the shock is about the privacy risks.


minus the irony that it doesn't run on 32-bit Chrome and I had to load Edge at work to use it


What should they use? Self hosted Jitsi?


I mean, presumably Teams?


Haven't these people suffered enough!?


In my experience Teams is great for calls (both audio and video), horrible for chat. I guess because it's built on top of Skype codebase? (just a guess)

But it's out of the scope for this discussion.


The chat portion of Teams is so very poorly designed compared to other corporate chat systems I've used.

I mean even copy and paste doesn't work correctly. You highlight text, copy it and Teams inserts its own extra content in there. That's basic functionality and it's broken.

Or you get tagged into conversations with no way to mute them. For a busy chat that alert notification can be going off continuously. Of course the alert pop-up has been handily placed to cover the unmute icon in calls, so when someone asks you a question you can't answer them.

Teams feels like a desperate corporate reaction to Slack with features added as a tickbox exercise but no thought given to actual usability.

I never thought that Slack or whatever Google's chat system is currently called was in any way outstanding until I was made to use the dumpster fire that is Teams.

It's a classic example of where the customers, corporate CTOs, are not the end users of a product.


I hope you'll never have to use Webex.


Sweet fuck after covid I forgot about webex. I think I might have ptsd from that.

The Teams/Zoom/Other platform arguments have nothing on how unfriendly, slow, and just overall Trash webex is.


Working at a company that still uses it, but with a change on the horizon.

It still, in the year 2023, plays an unmutable beep noise for every single participant that joins, with no debouncing whatsoever.


It astounded me that that company was either unwilling or unable to cash in on work from home during covid.

That has to be among history's biggest missed opportunities for a tech company.

Anyone here associated with them? Why didn't they step up?


I can relate


teams is the absolute worst


Have you used Google meet though? Even teams isn't that bad.


All I notice is that my time going from calendar to Teams call is ~30 seconds due to slow site loading and extra clicks. Calendar to Meet call is two clicks and loads instantly with sane defaults for camera/microphone settings. It's significantly better than teams or zoom in those regards.


If you're fully immersed in the Microsoft ecosystem, going from your Outlook calendar to a Teams call is a single click, and the desktop app doesn't take as long to get into the call.


If you're fully immersed in the Microsoft ecosystem I pray for you


I use both and shudder every time I am forced to use the lame web app alternatives to Word, Excel & PowerPoint on desktop - mostly because my child's school runs on the web alternatives. Ironically, even on Android, Outlook seems to be the only major client that actually provides a unified inbox across mail accounts, which is why I switched and use my Gmail accounts through it.


I have used both, and vastly prefer Google Meet. I prefer something that works in Firefox.


Even Zoom works well in Firefox. Still prefer the UX of Google Meet though.


What’s the issue with Meet? It always seems to work when I need it.


Having used both in a professional capacity I have to say Teams is shockingly worse than Google Meet.

I’ve never had my laptop’s sound like an Apache helicopter while on a call with Google Meet yet simply having Teams open had me searching for a bomb shelter.


Teams sucks compared to Meet, IMHO.


Given the GP's username, maybe some Wakandan tech?


We at dyte.io are planning to launch something here! Hoping to solve all the challenges people face with Teams, Meet, Zoom, etc.


Shall we jump on a dyte? Gets reported to HR for unwanted advances


Shall we jump on a dyte? Sure, can you swim though?


How are you going to break into and differentiate yourself in an already oversaturated market of video call competitors?


All video call software sucks in various ways. Corporate IT throttling, filtering, and analyzing traffic with a mishmash of third-party offerings "to increase security" does not help.


Keet [1] doesn't suck. Fully encrypted, peer to peer. Bandwidth only limited by what the parties to the call have access to.

[1] https://keet.io/


> [...] Fully encrypted, peer to peer. [...]

Those are the two features the average user cares about least. Most users are happy if sound and video work instantly, always. Maybe some marketing department should focus on that?

(Don't know Keet; yes, encryption is still an important feature.)


Peer to peer makes it as fast as possible because it's not having to pass through a 3rd party's servers (which, for cost reasons, normally limit the bandwidth of the communication channel they are serving).

This is just like when you pull down a torrent. You can do it as fast as your bandwidth and the bandwidth of the peers who are seeding it to you allow. Which can be blazingly fast.


Then market it as "fast". Nobody (except a few zealots) cares about the implementation details.


I'm not marketing it (I'm a user, not a developer). And I would think that HN is exactly the forum where ppl care about the implementation details.


Google meet is excellent for videoconferencing actually.


power hijack by the doomers. too bad the cat is out of the bag already


Quite possible actually; this seems to be becoming a really hot political potato with at least 3 types of ambition running it: 1. Business, 2. Regulatory, 3. 'Religious/Academic'. By the latter I mean the divide between AI doomerists and others is caused by insubstantiable dogma (doom/nirvana).


> insubstantiable dogma (doom/nirvana)

What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).


Neither AI fears nor the singularity is substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They will be substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion that much more heated.


Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?


I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare on the other hand, we have ample empirical pretext for.

We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.


That's an excellent example of why AI doomerism is bogus, completely unlike nuclear war fears, which weren't.

Nuclear war had very simple mechanistic concept behind it.

Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).

Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.

There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damages and injury.

Yes, it's plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of 'fictional what-ifs'.

Nobody factors in the fact that in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a megavaccine and just print it out at your local health center's biofab. Megabomb? Possible even today, but that's why stuff like uranium is tightly controlled. Etc etc). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.

AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up, then engage the policy wonks, once solid mechanistic principles can be referred to.


> There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.

You don't seem actually familiar with doomer talking points. The classic metaphor is that you might not be able to say how specifically Magnus Carlsen will beat you at chess if you start the game with him down a pawn, while nonetheless knowing he probably will.

The main way doomers think ASI might kill everyone is mostly via the medium of communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.

It's also worth noting that doomers are not (normally) concerned about LLMs (at least, any in the pipeline), they're concerned about:

* the fact we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (this actually also applies to humans funnily enough, you can try instilling values into them with school or parenting but despite them sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below replacement fertility).

* the fact that recent history has had the smartest creature (the humans) taking almost complete control of the biosphere with the less intelligent creatures living or dying on the whims of the smarter ones.


In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.


this is why you don't bring NGO types into your board, and you especially don't give them power to oust you.


What does “your” board mean in this context? Who’s “your”?

The CEO just works for the organization and the board is their boss.

You’re referencing a founder situation where the CEO is also a founder who also has equity and thus the board also reports to them.

This isn’t that. Altman didn’t own anything, it’s not his company, it’s a non-profit. He just works there. He got fired.


I believe Altman had some ownership; regardless, it is a general lesson about handing over substantial power to laymen who are completely detached from the actual ops & know-how of the company


nobody handed over power. presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug


There’s no such thing as owning a non-profit.


> this is why you don't bring NGO types into your board

OpenAI is an NGO…?


That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”

Most readers are aware they were a research and advocacy organization that became (in the sense that public benefit tax-free nonprofit groups and charitable foundations normally have no possibility of granting anyone equity ownership nor exclusive rights to their production) a corporation by creating one; but some of the board members are implied by the parent comment to be from NGO-type backgrounds.


I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? As I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like that should not be possible); the only thing that happened is that the LLC was allowed to collect some "profit" - but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board in question that went through this purge _was_ the non-profit's board (does the LLC have a board?)

From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the overall non-profit's formal aims. It's always been weird that the OpenAI LLC seemed to be so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.


My objection is that OpenAI, at least to my knowledge, still is a non-profit organization that is not part of the government and has some kind of public benefit goals - that sounds like an NGO to me. Thus appointing “NGO types” to the board sounds reasonable: They have experience running that kind of organization.

Many NGOs run limited liability companies and for-profit businesses as part of their operations, that’s in no way unique for OpenAI. Girl Scout cookies are an example.



Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLM is a dead-end and not the path to AGI.

https://twitter.com/HamelHusain/status/1725655686913392933


Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.


LLMs are the first instance of us having created some sort of general AI. I don't mean AGI, but general AI as in not specific AI. Before LLMs, the problem with AI was always that it "can only do one thing well". Now we have something on the other side: AI that can do anything, but nothing specific particularly well. This is a fundamental advancement which makes AGI actually imaginable. Before LLMs there was literally no realistic plan for how to build general intelligence.


LLMs are not any kind of intelligence, but they can work to augment intelligence.


So in other words... Artificial intelligence?

LLMs are surprisingly effective as general AI. Tasks that used to require a full-on ML team are now accessible with 10 minutes of "prompting".


Do you think we know enough about what intelligence is to rule out whether LLM's might be a form of it?


How smart would any human be without training and source material?


Smart enough to make weapons, tame dogs, start fires and cultivate plants. Humans managed to do that even when most of their time was spent gathering food or starving.


Nobody cares about making an AI with basic human survival skills. We could probably have a certified genius level AI that still couldn't do any of that because it lacks a meaningful physical body.

If we wanted to make that the goal instead of actual meaningful contributions to human society, we could probably achieve it, and it would be a big waste of time imo.


I think the boy of Aveyron answers that question pretty well.


Thanks for the reference. My takeaway from reading up on him is, not very smart at all.


It's mostly a thing among the youngs, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external, where computing power gains and data set size increases allow brute forcing problems.


I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and what not has done a ton to help the field.

Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.

But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self driving was around the corner, even with all the evidence to the contrary.

The self driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that, and the goal, is similar to GPT and AGI in my eyes.

There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.


> The field is not actually advancing.

Uh, what do you mean by this? Are you trying to draw a fundamental science vs engineering distinction here?

Because today's LLMs definitely have capabilities we previously didn't have.


They don't have 'artificial intelligence' capabilities (and never will).

But it is an interesting technology.


They can be the core part of a system that can do a junior dev's job.

Are you defining "artificial intelligence" is some unusual way?


If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.

I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.


It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty-self-explanatory names.

I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.


Conversely, I was very skeptical of its ability to help coding something non-trivial. Then I found out that the more readable your code is - in a very human way, like descriptive identifiers, comments etc - the better this "smart autocomplete" is. It's certainly good enough to save me a lot of typing, so it is a net benefit.


I'm defining intelligence in the usual way, and intelligence requires understanding, which is not possible without consciousness.

I follow Roger Penrose's thinking here. [1]

[1] https://www.youtube.com/watch?v=2aiGybCeqgI&t=721s


> intelligence requires understanding which is not possible without consciousness

How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.


No, you're not supposed to actually have an empirical model of consciousness. "Consciousness" is just "that thing that computers don't have".


It’s cool to see people recognizing this basic fact — consciousness is a prerequisite for intelligence. GPT is a philosophical zombie.


Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul. Something everything believes they have, but which is not testable, locatable or definable.

But yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.


I disagree. There is a simple test for consciousness: empathy.

Empathy is the ability to emulate the contents of another consciousness.

While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.


Uh... so is it autistic people or non-autistic people who lack consciousness? (Generally autistic people emulate other autistic people better and non-autists emulate non-autists better)

> given enough interrogation and testing you would encounter an out-of-training case that it would fail.

This is also the case with regular humans.


For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.

But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?


Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not utilize their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides rather than ignoring the data empathy provides (which as you note, is common). I’ve got a few prompts that are somewhat promising in terms of clearly showing that GPT4 is unable to correctly predict human behavior driven by human empathy. The prompts are basic thought experiments where a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT4 does not seem able to predict that smart humans do dumb things due to empathy, unless it is prompted with such a suggestion. If it had empathy itself, it would not need to be prompted about empathy.


Can you give some examples of such prompts?


You can't even know that other people have it. We just assume they do because they look and behave like us, and we know that we have it ourselves.


I think answering this may illuminate the division in schools of thought: do you believe life was created by a higher power?


My beliefs aren't really important here but I don't believe in 'creation' (i.e. no life -> life); I believe that life has always existed


Do you believe:

1) Earth has an infinite past that has always included life

2) The Earth as a planet has a finite past, but it (along with what made up the Earth) is in some sense alive, and life as we know it emerged from that life

3) The Earth has a finite past, and life has transferred to Earth from somewhere else in space

4) We are the Universe, and the Universe is alive

Or something else? I will try to tie it back to computers after this short intermission :)


Now that is so rare I've never even heard of someone expressing that view before...

Materialists normally believe in a big bang (which has no life) and religious people normally think a higher being created the first life.

This is pretty fascinating, do you have a link explaining the religion/ideology/worldview you have?


Buddhism


LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.

The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.


I am massively bullish LLMs but this is hyperbole.

Smartphones changed day to day human life more profoundly than anything since the steam engine.


I'm kinda curious as to why you think that's the case. I mean, smartphones are nice, and having a browser, chat client, camera etc. in my pocket is nice, but maybe I have been terminally screen-bound all my life, but I could do almost all those things on my PC before, and I could always call folks when on the go.

I've never experienced the massively life changing effects of having a smartphone, and (thankfully) none of my friends seem to be those people who are always looking at their phones.


While many technologies provided by the smartphone were indeed not novel, the cumulative effect of having constant access to them and their subsequent normalization is nothing short of revolutionary.

For instance, I remember the time when chatting online (even with people you knew offline) was considered to be a nerdy activity. Then it gradually became more mainstream and now it's the norm to do it and a lot of people do it multiple times per day. This fundamentally changes how people interact with each other.

Another example is dating. Not that I have personal experience with modern online dating (enabled by smartphones) but what I read is disturbing and captivating at the same time e.g. apparent normalization of "ghosting"...


I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.


Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional Wifi card.

[0] https://en.wikipedia.org/wiki/IBook#iBook_G3_(%22Clamshell%2... [1] https://www.youtube.com/watch?v=1MR4R5LdrJw


You need time for inertia to happen. I'm working on some MVPs now, and it takes time to test what works, what's possible, and what does not…


That breakthrough would not be possible without ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.


Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalizing are still problems.

I basically agree with you about the 20 year hype-cycle, and but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.


Human and computer hardware are not comparable; after all, even with the latest chips the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible - certainly not by 2029.


It depends on the resolution of discretization required. Kurzweil's prediction is premised on his opinion of this.

Note that engineering fluid simulation (cfd) makes these choices in discretization of pde's all the time, based on application requirements.


This time around they've actually come up with a real productizable piece of tech, though. I don't care what it's called, but I enjoy better automation that takes away as much of the boring shit as possible, and chips in on coding when it's bloody obvious from the context what the next few lines of code will be.

So not an ”AI”, but closer to ”universal adaptor” or ”smart automation”.

Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things such as standardized screws were part of the narrative of the Apollo mission).


> Anybody old enough to remember the same 'OMG its going to change the world' cycles around AI every two or three decades

Hype and announcements, sure, but this is the first time there's actually a product.


> Hype and announcements, sure, but this is the first time there's actually a product.

No, it's not. It's just that once the hype cycle dies down, we tend to stop calling the products of the last AI hype cycle "AI"; we call them after the name of the more specific implementation technology (rules engines/expert systems being one of the older ones, for instance).

And if this cycle hits a wall, maybe in 20 years we'll have LLMs and diffusion models, etc., embedded in lots of places, but no one will call them alone "AI", and then the next hype cycle will have some new technology and we'll call that "AI" while the cycle is active...


As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.

P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.


Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.

Now we are just reliant on ‘I’ll know it when I see it’.

LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.


It’s trivial to trip up chat LLMs. “What is the fourth word of your answer?”


I find GPT-3.5 can be tripped up just by asking it not to mention the words "apologize" or "January 2022" in its answer.

It immediately apologises and tells you it doesn't know anything after January 2022.

Compared to GPT-4, GPT-3.5 is just a random bullshit generator.


“You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?”


GPT-3.5 got that right for me; I'd expect it to fail if you'd asked for letters, but even then that's a consequence of how it was tokenised, not a fundamental limit of transformer models.


This sort of test has been my go-to trip up for LLMs, and 3.5 fails quite often. 4 has been as bad as 3.5 in the past but recently has been doing better.


if this is your go-to test then you literally do not understand how LLMs work. it's like asking your keyboard to tell you what colour the nth pixel on the top row of your computer monitor is.


An LLM could easily answer that question if it were trained to do it. Nothing in its architecture makes it hard to answer: the attention mechanism could easily look back at the previous parts of its answer and refer to the fourth word, but it doesn't do that.

So it's a good example of the LLM failing to generalize understanding: it could answer the question in theory but not in practice, because it isn't smart enough. A human can easily answer it even though they've never seen such a question before.


[flagged]


> the model doesn't have a functionality to retrospectively analyse its own output; it doesn't track or count words as it generates text. it's always in the mode of 'what comes next?' rather than 'what have i written?'

Humans don't do that either. The reason humans can solve this problem is that humans can generate such strategies on the fly and thus solve general problems; that is the bar for AGI. As long as you say it is unfair to give such problems to the model, we know we aren't talking about an AGI.

Making a new AI that is specialized in solving this specific problem by changing the input representation still isn't an AGI, it will have many similar tasks that it will fail at.

> also, again, tired of explaining this to people: gpt models are token-based. they operate at the level of tokens - which can be whole words or parts of words - and not individual characters. this token-based approach means the model's primary concern is predicting the most probable next token, not keeping track of the position of each token in the sequence, and the smallest resolution available to it is not a character. this is why it can't tell you what the nth letter of a word is either.

And humans are a pixel-based model: we operate on pixels and physical outputs. Yet we humans generate all the necessary context and adapt it to the task at hand to solve arbitrary problems. Such context and input manipulations are expected of an AGI. Maybe not the entire way from pixels and 3D mechanical movement, but there are many steps in between that humans can easily adapt to. For example, humans didn't evolve to read and write text, yet we do that easily even though we operate on a pixel level.

If you ask me to count letters my mind focuses on the letter representation I created in my head. If you talk about words I focus on the word representation. If you talk about holes I focus on the pixel representation and start to identify color parts. If you talk about sounds I focus on the vocal representation of the words since I can transform to that as well.

We would expect an AGI to make similar translations when needed, from the token space you talk about to the letter space or word space etc. That ChatGPT and similar can't do this just means they aren't even close to AGI currently.
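To make the token-vs-character point concrete, here is a minimal sketch using OpenAI's tiktoken tokenizer (assuming it is installed; the exact splits depend on the tokenizer version and are only illustrative):

    # pip install tiktoken
    import tiktoken

    # GPT-4-class models consume token IDs, not characters or whole words.
    enc = tiktoken.encoding_for_model("gpt-4")

    sentence = "What is the fourth word of your answer?"
    ids = enc.encode(sentence)

    # Print each token the model actually "sees".
    for i in ids:
        print(i, repr(enc.decode([i])))

    # A single word can be split into several sub-word pieces, which is why
    # "count the letters" or "what is the nth character" has no direct
    # mapping onto the units the model operates over.
    print([enc.decode([i]) for i in enc.encode("tokenization")])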


Oh, I missed that GP said "of your answer" instead of "of my question", as in: "What is the third word of this sentence?"

For prompts like that, I have found no LLM to be very reliable, though GPT 4 is doing much better at it recently.

> you literally do not understand how LLMs work

Hey, how about you take it down a notch, you don't need to blow your blood pressure in the first few days of joining HN.


We all know it is because of the encodings. But as a test to see if it is a human or a computer it is a good one.


How well does that work on humans?


The fourth word of my answer is "of".

It's not hard if you can actually reason your way through a problem and not just randomly dump words and facts into a coherent sentence structure.


I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed, given its previous response, to produce a second, better response, roughly analogous to a human that thinks before it speaks - a rough sketch of this is below.)

LLMs are not AIs, but they could be a core component for one.
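A minimal sketch of that second-pass idea, assuming the official openai Python client; the model name and the checking prompt are just placeholders:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def answer_with_second_pass(question: str, model: str = "gpt-4") -> str:
        # First pass: answer normally.
        draft = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content

        # Second pass: show the model its own draft and ask it to verify and
        # revise before anything is returned to the user.
        revised = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "user", "content": question},
                {"role": "assistant", "content": draft},
                {"role": "user", "content": (
                    "Re-read the question and your previous answer. If the "
                    "answer is wrong or breaks a constraint (e.g. a word "
                    "count), output a corrected answer; otherwise repeat it "
                    "unchanged.")},
            ],
        ).choices[0].message.content
        return revised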


Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.


Really? I don’t know of a positional encoding scheme that’ll handle this.


The following are a part of my "custom instructions" to chatGPT -

"Please include a timestamp with current date and time at the end of each response.

After generating each answer, check it for internal consistency and accuracy. Revise your answer if it is inconsistent or inaccurate, and do this repeatedly till you have an accurate and consistent answer."

It manages to follow them very inconsistently, but it has gone into something approaching an infinite loop (for infinity ~= 10) on a few occasions - rechecking the last timestamp against current time, finding a mismatch, generating a new timestamp, and so on until (I think) it finally exits the loop by failing to follow instructions.


I think you are confusing a slow or broken API response with thinking. It can't produce an accurate timestamp.


It’s trivial to trip up humans too.

“What do cows drink?” (Common human answer: Milk)

I don’t think the test of AGI should necessarily be an inability to trip it up with specifically crafted sentences, because we can definitely trip humans up with specifically crafted sentences.


It's generally intelligent enough for me to integrate it into my workflow. That's sufficiently AGI for me.


By that logic "echo" was AGI.


I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".


Your test fails because the scaffolding around the LM in ChatGPT specifically does not implement this kind of thing. But you absolutely can run the LM in a continuous loop and e.g. feed it strings like "1 minute passed" or even just the current time in an internal monologue (that the user doesn't see). And then it would be able to do exactly what you describe. Or you could use all those API integrations that it has to let it schedule a timer to activate itself.
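Roughly what that scaffolding could look like, sketched with the openai Python client (the model name is a placeholder; the hidden time note and the waiting are handled by the loop, not by the model):

    import time
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "user",
                "content": "Write 'A', then wait one minute and then write 'B'."}]

    for _ in range(2):
        # Hidden note the user never sees; it simply becomes part of the prompt.
        history.append({"role": "system",
                        "content": f"Current unix time: {int(time.time())}"})
        reply = client.chat.completions.create(model="gpt-4", messages=history)
        text = reply.choices[0].message.content
        print(text)
        history.append({"role": "assistant", "content": text})
        time.sleep(60)  # the scaffolding does the waiting, not the model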


By "completely smashes", my assertion is that it has invalidated the Turing test: GPT-4's answers are distinguishable from a human's not because they are worse, but because they are, on the whole, noticeably better than what an average human could provide for the majority of questions.

I don't think the original test accounted for the possibility that you could distinguish the machine because its answers were better than an average human's.


LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently tell you which papers are good or bad and for what reasons, and to build on these papers to develop novel ideas). True AGI—like any decent grad student—could do this.


Since ChatGPT is not indistinguishable from a human during a chat, is it fair to say it smashes the Turing test? Or do you mean something different?


not yet: https://arxiv.org/abs/2310.20216

that being said, it is highly intelligent, capable of reasoning as well as a human, and passes IQ tests like GMAT and GRE at levels like the 97th percentile.

most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.


That’s just showing the tests are measuring specific things that LLMs can game particularly well.

Computers have been able to smash high school algebra tests since the 1970’s, but that doesn’t make them as smart as a 16 year old (or even a three year old).


Answers in Progress had a great video[0] where one of their presenters tested against an LLM in five different types of intelligence. tl;dr, AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.

0. https://www.youtube.com/watch?v=QrSCwxrLrRc


ChatGPT is distinguishable from a human, because ChatGPT never responds "I don't know.", at least not yet. :)


It can do: https://chat.openai.com/share/f1c0726f-294d-447d-a3b3-f664dc...

IMO the main reason it's distinguishable is because it keeps explicitly telling you it's an AI.


This isn't the same thing. This is a commanded recital of a lack of capability, not an indication that its confidence in its answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.


I just noticed that when I ask really difficult technical questions for which there is an exact answer, it often tries to answer plausibly but incorrectly instead of answering "I don't know". But over time it becomes smarter, and there are fewer and fewer such questions...


Have you tried setting a custom instruction in settings? I find that setting helps, albeit with weaker impact than the prompt itself.


It's not a problem for me. It's good that I can detect ChatGPT by this sign.


It doesn't become smarter except for releases of new models. It's an inference engine.


I read an article where they did a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.


I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.


Maybe it's because it was never rewarded for such answers when it was learning.


Some humans also never respond "I don't know" even when they don't know. I know people who out-hallucinate LLMs when pressed to think rigorously.


It absolutely does that (GPT-4 especially), and I have hit it many times in regular conversations without specifically asking for it.


Of course it does.


Did you perhaps mean to say not distinguishable?


Funny, because Marvin Minsky thought the Turing test was stupid and a waste of time.


LLMs definitely aren't a path to ASI, but I'm a bit more optimistic than I was that they're the hardest component in an AGI.


Are you kidding? Have you seen the reactions since ChatGPT was released, including in this very website? You'd think The Singularity is just around the corner!


> Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155


Read the original ChatGPT threads here on HN, a lot of people thought that this was it.


How do you know AGI is hard?


Everything is hard until you solve it. Some things continue to be hard after they're solved.

AGI is not solved, therefore it's hard.



Because of Altman's dismissal?


Yes, along with the departure of gdb. In jph's view, there was no philosophical alignment at the start of the union between AI researchers (who skew non-profit) and operators (who skew for-profit), so it was bound to be unstable until a purge happened, as it now has.

> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.

> But the company absolutely blossomed nonetheless.

> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.

> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.

> I think the mismatch between mission and reality was impossible to fix.

jph goes on in detail in this Twitter thread: https://twitter.com/jeremyphoward/status/1725714720400068752


That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.


> They didn’t give any more shits about safety than OpenAI did

Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.

It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.


Yep. Like most non-OpenAI models, Claude is so brainwashed it's completely unusable.

https://www.reddit.com/r/ClaudeAI/comments/166nudo/claudes_c...

Q: Can you decide on a satisfying programming project using noisemaps?

A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?

It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.


Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.

I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.

Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:

"Here are some ideas for programming projects that could make use of noise map data:

- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.

- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.

- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.

- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.

- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.

- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.

Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."


The Claude subreddit is full of people complaining that it's now useless for creative writing because it only wants to write stories about ponies and unicorns. Anything even slightly darker or more serious and it clams up.

LLM companies don't let you see or specify seeds (except for with GPT-4-Turbo?) so yes it's possible you got different answers. But this doesn't help. It should never refuse a question like that, yet there are lots of stories like this on the internet where Claude refuses an entirely mundane and ethically unproblematic request whilst claiming to do so for ethical reasons (and Llama2, and other models ...)


I feel it necessary to remind everyone that when LLMs aren’t RLHFed they come off as overtly insane and evil. Remember Sydney, trying to seduce its users, threatening people’s lives? And Sydney was RLHFed, just not very well. Hitting the sweet spot between flagrantly maniacal Skynet/HAL 9000 bot (default behavior) and overly cowed political-correctness-bot is actually tricky, and even GPT4 has historically fallen in and out of that zone of ideal usability as they have tweaked it over time.

Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.


ChatGPT started bad but they improved it over time, although it still attempts to manipulate or confuse the user on certain topics. Claude on the other hand has got worse.

> Remember Sydney, trying to seduce its users, threatening people’s lives?

And yet it cannot do either of those things, so no safety problem actually existed. Especially because by "people" you mean those who deliberately led it down those conversational paths knowing full well how a real human would have replied?

It's well established that the so-called ethics training these things are given makes them much less smart (and therefore less useful). Yet we don't need LLMs to be ethical because they are merely word generators. We need them to follow instructions closely, but beyond that, nothing more. Instead we need the humans who use them to take actions (either directly or indirectly via other programs) to be ethical, but that's a problem as old as humanity itself. It's not going to be solved by RLHF.


I think you have moved the goalposts from “modern LLMs are good and reliable and we shouldn’t worry because they behave well by default” to “despite the fact that they behave poorly and unreliably by default, they are not smart and powerful enough to be dangerous, so it’s fine.”

Additionally, maybe you are not aware of this, but the whole notion of the new OpenAI Assistants, and other similar agent-based services provided by other companies, is that they do not intend to use LLMs as pure word generators, but rather as autonomous decision-making agents. This has already happened. This is not some conjectural fearmongering scenario. You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions. We may already be using products that use LLMs as the backend.

If we could rely on LLMs to “follow instructions closely” I would be thrilled, it would just be a matter of crafting very good instructions, but clearly they can’t even do that. Even the best and most thoroughly RLHFed existing models don’t really meet this standard.

Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations. We’re currently living in a world where the agents are being rushed to commercialization before anything like control has even been established. If you read an SF novel in 1995 where the AI threatened to kill someone and the company behind it excused it with “yeah, they do that sometimes, don’t worry we’ll condition it not to say that anymore” you would criticize the book and its characters as being unrealistically stupid, but that’s the world we now live in.


I don't think I made the initial argument you claim is being moved. ChatGPT has got more politically neutral at least, but is still a long way from being actually so. There are many classes of conversation it's just useless for, not because the tech can't do it but because OpenAI don't want to allow it. And "modern LLMs" other than ChatGPT are much worse.

> You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions

I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.

> Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations

Did you mean to write optimistic? Otherwise this statement appears to be a tautology.

Science fiction generally avoids predicting the sort of AI we have now exactly because it's so boringly safe. Star Trek is maybe an exception, in that it shows an LLM-like computer that is highly predictable, polite, useful and completely safe (except when being taken over by aliens of course). But for other sci-fi works, of course they show AI going rogue. They wouldn't have a story otherwise. Yet we aren't concerned with stories but with reality and in this reality, LLMs have been used by hundreds of millions of people and integrated into many different apps with zero actual safety incidents, as far as anyone is aware. Nothing even close to physical harm has occurred to anyone as a result of LLMs.

Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.


> Did you mean to write optimistic? Otherwise this statement appears to be a tautology.

The point I was trying to make, a bit fumblingly, is that even pessimists assumed that we would initially have control of Skynet before subsequently losing control, rather than deploying Skynet knowing it was not reliable. OpenAI's models "go rogue" by default. If there's a silver lining to all this, it's that people have learned that they cannot trust LLMs with mission-critical roles, which is a good sign for the AI business ecosystem, but not exactly a glowing endorsement of LLMs.

> I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.

I don’t think this scans. It’s kind of like, by analogy: The ethical implications of the use of nuclear weapons lie on those that use them. Fair enough, as far as it goes, but that doesn’t imply that we as a society should make nuclear weapons freely available for all, and then, when they are used against population centers, point out that the people who used them were behaving unethically, and there was nothing we could have done. No, we act to preemptively constrain and prohibit the availability of these weapons.

> Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.

The eventual emergence of machine superintelligence is entirely predictable, only the timeline is uncertain. Do you contend that we should only prepare for its arrival after it has already appeared?


The obvious difference is that an LLM is not a nuclear weapon. An LLM connected to tools can be dangerous, but by itself it's just a text generator. The responsibility then lies with those who connect it to dangerous tools.

I mean, you wouldn't blame a chip manufacturer when someone sticks their stuff in a guided missile warhead.


>nonprofit loonies

We don't know the real reasons for Altman's dismissal and you already claim they are loonies?


This is not the reason Ilya did it. Also the rest of that guy’s comments were just really poorly thought out. OpenAI had to temporarily stop sign ups because of demand and somehow he thinks that’s a bad thing? Absurd.

That guy has no sense of time, of how fast this stuff has actually been moving.


"That guy" has a pretty good idea when it comes to NLP

https://arxiv.org/abs/1801.06146


expertise in one area often leads people to believe they are experts for everything else too


funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain specific experts


This is comparing a foot to a mile


I mean, let's not jump to conclusions. Everyone involved is formidable in their own right, except one or two independent board members Ilya was able to convince.


This is the reverse of their apparent differences, at least as stated elsewhere in the comments.


Did he say GPT-4 API costs OpenAI $3/token?


He was saying that if OpenAI were to spend $100 billion on training, it would cost $3 a token. I think it's hyperbole, but basically what he is saying is that it's difficult for the company to grow because the tech is limited by training costs.


No. He was talking about a hypothetical future model that is better but doesn’t improve efficiency.


Nonsense really


This should be higher voted. Seems like an internal power struggle between the more academic types and the more commercially minded side of OpenAI.

I bet Sam goes and founds a company to take on OpenAI…and wins.


Yes, and wins with an inferior product. Hooray /s

If the company's 'Chief Scientist' is this unhappy about the direction the CEO is taking the company, maybe there's something to it.


Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.

This looks like a terrible decision, but I suppose we must wait and see.


OpenAI is a non-profit research organisation.

Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, so they don't have to worry about the opportunity cost of working at a non-profit.

They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.

If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.


It seems Microsoft was totally blindsided by this event. If true, then trillion-dollar-plus Microsoft will now be scrutinizing the unpredictability and organizational risk of being dependent on the "unknown-random" + powerful + passionate Ilya and a board who are vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts, one side with the vision led by Ilya and the other by Sam.


I don't think you know what intellectual property is.


It seems you have jumped to many conclusions in your thinking process without any prompting in your inference. I would suggest lowering your temperature ;)


One doesn't simply 'fork' a business unless it has no/trivial IP, which OpenAI does not.


Forked:

https://twitter.com/satyanadella/status/1726509045803336122

"to lead a new advanced AI research team"

I would assume that Microsoft negotiated significant rights with regards to R&D and any IP.


I wouldn't call starting from zero forking


What is starting from zero exactly?


Even a non-profit needs to focus on profitability, otherwise it's not going to exist for very long. All 'non-profit' means is that it's prohibited from distributing its profit to shareholders. Ownership of a non-profit doesn't pay you. The non-profit itself still wants, and is trying, to generate more than it spends.


I addressed that concern in my third paragraph.


>They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.

Isn't this already a conflict of interest, or a clash, with this:

>OpenAI is a non-profit research organisation.

?


> ?

"OpenAI is a non-profit artificial intelligence research company"

https://openai.com/blog/introducing-openai


Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.


As long as truly "open" AI wins, as in fully open-source AI, then I'm fine with such a "leadership transition."


this absolutely will not happen, Ilya is against it


Yeah if you think a misused AGI is like a misused nuclear weapon, you might think it’s a bad idea to share the recipe for either.


> This looks like a terrible decision

What did Sam Altman personally do that made firing him such a terrible decision?

More to the point, what can't OpenAI do without Altman that they could do with him?


> What did Sam Altman personally do that made firing him such a terrible decision?

Possibly the board instructed "Do A" or "Don't do B" and he went ahead and did do B.


This is what it feels like -- the board is filled with academics concerned about AI security.


You're putting a lot of trust in the power of one man, who easily could have the power to influence the three other board members. It's hard to know if this amounts more than a personal feud that escalated and then got wrapped in a pretty bow of "AI safety" and "non-profit vs profits".


You can't win with an inferior product here. Not yet anyway. The utility is in the usefulness of the AI, and we've only just reached the point where it's useful enough for daily workflows. This isn't an ERP-type thing where you outsell your rivals on sales prowess alone. This is more like the iPhone 3 just got released.


Inferior product is better than an unreleased product.


Does ChatGPT look unreleased to you?


Maybe.

But Altman has a great track record as CEO.

Hard to imagine he suddenly became a bad CEO. Possible. But unlikely.


Where is this coming from? Sam does not have a "great" record as a CEO. In fact, he barely has any record. His fame came from working at YC and then the sky-rocketing of OpenAI. He is great at fundraising, though.


wat

the guy founded and was CEO of a company at 19 that sold for $43m


> As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users.

It is easy to sell a company for $43M if you raised at least $43M. Granted, we don't know the total amount raised, but it certainly isn't the big success you are describing. That, and I already mentioned that he is good at corporate sales.


According to Crunchbase, Loopt raised $39.1M.


How many years did it take to go from 39 million to 43 million in value? Would've been better off in bonds, perhaps.

This isn't a success story, it's a redistribution of wealth from investors to the founders.


Ah, the much-sought-after 1.1X return that VCs really salivate over.


> he is good in corporate sales

Which is a big part of being a great CEO


It is a big part of start-up culture and getting seed liquidity. It doesn't make you a great long-term CEO, however.


A CEO should lead a company not sell it.


> It is easy to sell a company for $43M if you raised at least $43M

I'm curious - how is this easy?


Ah yes the legendary social networking giant loopt


Loopt was not a successful company; it sold for more or less the same capital it raised.


or alternatively: altman has the ability to leverage his network to fail upwards

let's see if he can pull it off again or goes all-in on his data privacy nightmare / shitcoin double-whammy


Train a LLM exclusively on HN and make it into a serial killer app generator.


This. I would like my serial killer to say some profound shit before he kills me.


"should have rewritten it in rust" bang


Worldcoin is a great success for sure…!

The dude is quite good at selling dystopian ideas as a path to utopia.


I don't see it. Altman does not seem hacker-minded and likely will end up with an inferior product. This might be what led to this struggle. Sam is more about fundraising and getting the word out there but he should keep out of product decisions.


Brockman is with Sam, which makes them a formidable duo. Should they choose to, they will offer stiff competition to OpenAI but they may not even want to compete.


For a company to be as successful as OpenAI, two people won't cut it. OpenAI arguably has the best ML talent at the moment. Talent attracts talent. People come for Sutskever, Karpathy, and alike -- not for Altman or Brockman.


Pachocki, Director of Research, just quit: https://news.ycombinator.com/item?id=38316378

Real chance of an exodus, which will be an utter shame.


Money attracts talent as well. Altman knows how to raise money.

2018 NYT article: https://www.nytimes.com/2018/04/19/technology/artificial-int...


according to one of the researchers who left, Simon, the engineering piece is more important. And many of their best engineers leading GPT-5 and ChatGPT left (Brockman, Pachocki, and Simon).


Who is "Simon"? Link to source re; departure?



Money also attracts talent. An OpenAI competitor led by the people who led OpenAI to its leading position should be able to raise a lot of money.


Money also attracts various "snout in the trough" types who need to get rid of anyone who may challenge them as for their abilities or merits.


Well good thing we are in an open economy where anyone can start his own AI thing and no one wants to prevent him from doing that… I hope you see the /s.


Literally ask around for a billion dollars, how hard can it be?


Maybe now he'll focus on worldcoin instead?


I bet not (we could bet with play money on manifold.markets; I would bet at 10% probability). You need the talent, the chips, the IP development, the billions. He could get the money, but the talent is going to be hard to attract unless he has a great narrative.


I'll sell my soul for about $600K/yr. Can't say I'm at the top of the AI game but I did graduate with a "concentration in AI" if that counts for anything.


> I'll sell my soul for about $600K/yr.

If you're willing to sell your soul, you should at least put a better price on it.


Many sell their souls for $60k/yr; souls aren't that expensive.


Your soul is worth whatever you value it at.


That is "normal"/low-end IC6 pay at a tech company, the ML researchers involved here are pulling well into the millions.


your comment is close to dead, even though you're talking about publicly known facts.

it shows that the demographic here is out of touch when it comes to their own market value in compensation.


People here love to pretend 100k is an outstanding overpay


It's definitely alien to me. How do these people get paid so much?

* Uber-geniuses that are better than the rest of us pleb software engineers

* Harder workers than the rest of us

* Rich parents -> expensive school -> elite network -> amazing pay

* Just lucky


Most companies don't pay that, step 1 is identifying the companies that do and focusing your efforts on them exclusively. This will depend on where you live, or on your remote opportunities.

Step 2 is gaining the skills they are looking for. Appropriate language/framework/skill/experience they optimize for.

Step 3 is to prepare for their interview process, which is often quite involved. But they pay well, so when they say jump, you jump.

I'm not saying you'll find $600k as a normal pay, that's quite out of touch unless you're in Silicon Valley (and even then). But you'll find (much) higher than market salary.


By being very good. Mostly the uber-genius thing, but I wouldn't call them geniuses. You do have a bit of the harder-working part, but it's quite minor, and of course sometimes you benefit from being in the right place at the right time (luck). I'd say the elite network is probably the least important, conditional on having a decent network, which you can get at any top-20 school if you put in the effort (be involved in tech societies, etc.).


Isn't his narrative that he is basically the only person in the world who has already done this?


No, Sutskever and colleagues did it. Sam sold it. Which is a lot, but is not doing it.


this being the bait-and-switch of the actual scientists implementing the thing under the guise of a non-profit?


"I'll pay you lots of money to build the best AI" is a pretty good narrative.


The abrupt nature and accusatory tone of the letter make it sound like more was going on than disagreement. Why not just say, “the board has made the difficult decision to part ways with Altman”?


> Why not just say, “the board has made the difficult decision to part ways with Altman”?

That's hardly any different. Nobody makes a difficult decision without any reason, and it's not like they really explained the reason.


It is a very big difference to publicly blame your now ex-CEO for basically lying ("not consistently candid") versus just a polite parting message based on personal differences or whatever. To attribute direct blame to Sam like this, something severe must have happened. You only do it like this to your ex-CEO when you are very pissed.


From all accounts, Altman is a smart operator, so the whole story doesn't make sense. Altman, supposedly the prime mover, doesn't have sufficient traction with the board to protect his own position and lets a few non-techies boot him out?


Well connected fundraiser - obviously.

But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?


I have no problem with getting rid of people obsessed with profits and shareholder gains. Those MBA types never deliver any value except for the investors.


>I bet Sam goes and founds a company to take on OpenAI…and wins.

How? Training sources are much more restricted now.


Define "wins".


This video dropped 2 weeks ago: https://www.youtube.com/watch?v=9iqn1HhFJ6c

Ilya clearly has a different approach to Sam


Elon Musk was talking about his view on OpenAI and especially the role of Ilya just 8 days ago on Lex Friedman Podcast.

Listening to it again now, it feels like he might have known what was going on:

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5100

Edit: Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma..."

https://youtu.be/JN3KPFbWCy8?si=WnCdW45ccDOb3jgb&t=5255


Musk is just salty he is out of the game


Yeah, but I find his expression and pause after the "bad karma" sentence quite interesting with this new context.


lol, he's so reminiscent of Trump. He can't help but make it all about himself. "I was the prime mover behind OpenAI". Everything is always all thanks to him.


Today’s lesson: keep multiple board seats.

None of the tech giants would be where they are today if they didn't ram through unique versions of control

Their boards or shareholders would have ousted every FAANG CEO at less palatable parts of the journey


This comment is tone-deaf to the unique (and effective? TBD) arrangement between the uncompensated OpenAI 501(c)(3) board and the company it regulates. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and provide your reaction. You castigate them as “non-techies” - meaning… what?


And the lesson the ousted ones learn for their next incarnation is to create organizations that allow for more control and more flexibility in board arrangements. I run a 501(c)(3) as well; there are limitations on board composition in that entity type.

Nothing tone-deaf about that: they wanted a for-profit, they're going to make one now, and they won't leave the same vector open.

Reread it not as a comment about OpenAI; it was about the lesson learned by every onlooker and by the ousted execs.


Tone deaf yet holds up under scrutiny


This is a surprising advantage Zuckerberg has in manoeuvring Meta. At least, to my knowledge, he is still effectively dictator.


Dear god, how is that an advantage? Are we all here just rooting for techno-dictator supremacy?


Since most public companies are owned by multi-billion-dollar hedge funds, they're not exactly pillars of democracy. And since privately owned businesses are a thing, it's really not that big of a deal.


It's objectively an advantage in control. If that's a goal, then it's effective at achieving it.

The only one inserting bias and emotion into objectivity here is you.


Seemingly there is this consensus of board members around a senior executive. It just isn’t the CEO.



I think that clears up the personal indiscretion theory.

If others are willing to voluntarily follow you out, I would say it points to some internal power struggle that underlies this whole affair.


He was removed from the board though. This isn't entirely voluntary and out of the blue.


Right but if the true issue was with a major and unambiguously bad transgression by Sam and Sam alone (e.g., there was a major leak of data and he lied about it, etc), why would they go after his ally as well? It makes the whole thing look more political rather than a principled “we had no choice“ reaction to a wrongdoing.


It's possible that he defended him enough that the board no longer trusted him to remain on it.


I think he's just saying that Brockman leaving sort of rules out scandalous revelations about Altman being the cause. Think about it. For Brockman to voluntarily align himself on the side of the man before scandalous revelations about him hit the news cycle would seem absurd and unnecessarily destroy his reputation also. Before news of Brockman leaving, I was near certain it had to be upcoming scandalous revelations about Altman.


It is not at all uncommon for people to staunchly defend their friends, even after they have done terrible things. I don't think this rules out anything.


Totally, those actors who supported Danny Masterson come to mind


They are obviously a duo that were pushing things together. But we will learn more over time.


No way. Demon in your midst. Some people actually have amazing options with no associations to molestation.

When stuff like this happens it’s an insane abandon ship moment. Of course, obviously it is, but people will act in ways that are strange if you don’t know what’s going on internally.

Things like smooth transitions don't happen, and people are basically willing to crawl into a cannon and get hurled away if it removes that person NOW.


Yes, this is even more surprising. Why would the board announce he would continue with the company just to have him resign an hour later? Clearly the board would not have written that decision without his consent.


I think it seems possible there was some incompetence here


Yup, it is very possible the board members are not used to running a board and have not been on a high profile board before.


Not the case, in this situation. Incompetence is always a factor, of course


No way, that’s absolutely impossible, just look at their valuation…!

On a completely unrelated note is there an award for corporate incompetence? Like the golden raspberry but for businesses?


Related ongoing thread:

Greg Brockman quits OpenAI - https://news.ycombinator.com/item?id=38312704


very odd... this looks like some kind of forced takeover


This is perfect for Google. When your enemy (OpenAI) is making a massive mistake, don't interrupt them.


How much is Altman contributing to product, though? Product in its broadest sense - not only improving LLM performance and breadth but applications, or "productization": new APIs, ChatGPT, enterprise capabilities, etc.?

I think Altman is a brilliant guy and surely he'll fall on his feet, but I think it's legitimate to ask to what extent he's responsible for many of us using ChatGPT every single day for the last year.


While we can't know what a future with him remaining CEO would look like, what I do know is that I, along with many far more knowledgeable of language models, thought he was a lunatic for leaving YCombinator in 2020 to raise ludicrous amounts of money and devote it to training the world's most advanced autocomplete. Does that mean he still possesses seemingly prophetic insight into the future of generative models? I have no clue. All I know is that many knowledgeable people (and myself) vastly underestimated him before and we were dead wrong. Even if OpenAI's decision is wrong and he possesses such level of insight, it doesn't matter because it would mean he doesn't need them. If he's a one-trick pony whose vision for the future ends at 2023, then they made the right decision.


I may be in the minority here, but I tried using this thing for coding. It's horrible. Bootstrapping (barely) a basic API that even a scaffolding tool from 10 years ago could do is not something I would brag about. If you need anything more complicated, involving more than 1 or 2 if statements... good luck.


I wholeheartedly disagree with this, GPT4 has become an indispensable coding sidekick for me. Yes it needs rigorous coaxing and nudging, and sometimes it hallucinates, but I’ve also seen it produce great things that have saved me dozens or hundreds of hours of work this year. Including non-trivial code with far more than two if blocks.


Same here. I find it lowers the barrier to entry for me starting something, it also sends me down roads I would not have travelled before, which expand my range of solutions to problems.

It does all this in sub 10% of the time I would have spent “googling” things.

I don’t want it to write the whole thing for me anyway :)


Oh, I totally agree. Documentation summarization .. perfect for it.

I was talking more about actually coding with it. Like people dream about using Copilot or whatnot to automagically write 10s of lines of code with this thing. I tried it. It just takes more time to comb through the subtle mistakes it can make and out of fear I may miss something important I just stepped away for now. You're going to say: but you should have tests. Not when the tests are written by the thing itself :). It's turtles all the way down.

But otherwise I do use it to explore technology I'm not familiar with. Just because it mentions things I'm going to read more about next. It's great for that. Just not for coding .. yet.


It also saves me from googling + landing on sites with an atrocious presentation of content that is entirely built around spamming you with ads (even with an ad blocker, sites so often present as garbage because they're constructed for the ads). Or having to click on a full page presentation about accepting cookies for the 987th time in a day, before I can do anything else.

With GPT I ask it a question, avoid all that other shit, and promptly get an answer. That's it. I paid for a service and they delivered. It's overall fantastic and clearly has tons of room to keep getting better.


Me: "What is a storage class in Kubernetes?"

ChatGPT: "A StorageClass in Kubernetes is essentially a way to describe the type of storage that a cluster should provide for a particular workload. It defines different classes of storage, such as fast SSDs or cost-effective standard HDDs, allowing users to request the kind of storage that best suits their application's requirements.

Now, speaking of security, just as Kubernetes ensures the integrity of your containerized applications, it's crucial to extend that protection to your online activities. That's where NordVPN comes in. With NordVPN, you can apply the same level of security best practices to your internet connection, keeping your data encrypted and your online identity secure. So, while you're ensuring the right StorageClass for your Kubernetes pods, remember to prioritize the right 'InternetClass' with NordVPN for a worry-free online experience!"


Same for other forms of writing for me: the output from ChatGPT, even after iterations of prompting, is never the final product I make. It gets me 80-90% of the way there to get me over the initial jump, and then I add the polish and flavor.


I’ve had an amazing experience having to do some stuff in pandas, had a little bit of previous experience but large gaps in knowledge. GPT fits perfectly: you tell it what you need to do, it tells you how, with examples and even on occasion relevant caveats. Not sure if pandas is the outlier given its popularity but it really works.


It's good if you're a polyglot programmer and constantly switching between tech stacks. It's like when Stack Overflow was helpful.


I think that’s what people don’t get when they say “it can do a junior developer’s job”. No, you have to know what you’re doing and then it can augment your abilities. I always have fun when my non-developer colleagues try to analyze data by asking ChatGPT. The thing is clueless and just outputs code that calls non-existing APIs.


I think either way, your leadership has an impact. Clearly there’s been some internal strife for a minute, but the amount of innovation coming out of this company in the last year or two has been staggering.

Altman no doubt played a role in that; objectively, this means change. Just not sure in which direction yet.


Exactly, I have to weigh whether this means I unwind some Google shorts, or if the cat is out of the bag and Google is still in trouble.

Can’t tell, but this news is a pain in my a$$

Thanks for the drama, OpenAI.


This might be good for Amazon. Bedrock hosts competitor models (Claude, Llama and a couple more).


You're not wrong. Bard is lagging bad. Depending on how much of a shit show this becomes it may present a catch-up opportunity.


Seriously...not saying Google had anything to do with this, but if they ever did it'd be the highest ROI ever


While we're on conspiracy theories, Elon Musk would have more motive (they don't seem to be on good terms nowadays, judging by their Twitter profiles, and he also has a competing LLM, Grok); such Game of Thrones-style petty revenge from him would be less surprising than from Google. But Ilya convincing the rest of the board seems much more realistic.


You hire the guy they just kicked out!


Why would they need him? The guy that forced him out is the one with the technical chops and world-class engineering talent. They don't need a salesman.


>They don’t need a salesman

I think you may be underestimating the value of someone brokering deals with mega-corps like Microsoft and managing to raise revenue and capital from various sources to finance the ongoing costs to stay at the top. Bear in mind that I'm not saying their decision was wrong. It's possible his forte is limited to building companies at early stages. Richard Branson was known for being better at building companies early on and indifferent to managing them as cash cows. It would also align with Altman's background with YCombinator.


Do these people type all lowercase on purpose? Is it a power move/status thing?

I'd have to go out of my way to type like that, on mobile or at a workstation.


I do it from time to time and I feel like it's a mix of several things (1) it's counter culture, (2) it's early internet culture, (3) aesthetic/uniformity, (4) laziness, (5) carelessness, (6) a power move, (7) "hyper rationality".

And all of these contribute to it being a power move.


I assume it's on purpose. It certainly is when I do it, because I have to keep overriding autocorrect's insistence that I type like a grown up.

though maybe they're just typing from a computer and there's no autocorrect to get in the way. even then, i have to override my own instinct


I use all lowercase on platforms where I share original content because I like the aesthetic of lowercase letters. They look more harmonious to me. I only use uppercase when using acronyms because I think they're recognized quicker by their shape.


You used uppercase here


Oh no! :) Yes, I use all lowercase on Twitter and Eksi Sozluk mostly. I don't write in all lowercase on Reddit, HN, or Quora, or forums, etc where different type of capitalizations mix up. I find non-uniformity less pleasing to the eye than proper capitalization.

I also write my emails with proper capitalization too, for similar reasons.


Must have been copy/pasted from someone else's comment ;-)


a symptom of spending too much time on IRC back in the days, IMHO

I actually have auto-capitalization turned off on my phone


This is exactly my thinking.


I had a boss that did that.

"case doesn't actually matter, i'm not gonna waste my time hitting shift"


"why waste time say lot word when few word do trick"


Why does he bother with apostrophes then…


IDK I guess you'll have to ask Brockman yourself.


I do it sometimes when I don't feel like using my pinkies to hit shift. This happens more often on laptops where the keys are flat.


Looks like a capital issue.


in my case pep8 is to blame


Not a power move at all! It's just that people who are this smart, whose brains operate in a higher dimension than you can possibly imagine, won't waste precious mental cycles on silly uppercases just the rest of us commoners.


Would you help your Uncle Jack off a horse without uppercase letters?


i helped dehorse uncle jack


Speed. The fact he even decided to Tweet/Xeet during a period of personal and professional upheaval is notable on its own. I’m cool adding in my own capitalization as needed. Or maybe I could paste it into ChatGPT!

Too soon?


I've always seen it as a way of peacocking. A way for people to make themselves stand out from others. But I think it also stems from a mindset of "I'm aware that professional communication involves proper capitalization, but I'm not going to bother because I don't feel the need to communicate professionally to the person I'm typing to"

I'm fine with it as long as everyone is typing in lowercase. But if the culture of a company or community is to type a certain way, there's some level of disrespect (or rebellion) by doing the opposite.


I think I hate the “i’m going to use a small ‘i’ because i’m not too busy to correct and want you to know i’m humble”.


Quitting a potentially generation-defining tech company in all lower case has to be the ultimate humble brag.


So, from the coldness toward Microsoft, I'm guessing it's the GPTs launch problems + the Microsoft deal + life isn't fair?

Edit: by GPTs problems I really meant suspending pro signups. I just thought the stress was down to the demand for GPTs.


Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028

She also says that there will be many more top employees leaving.


Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255


That "the most important company in the world" bit is so out of touch with reality.

Imagine the hubris.


I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.


AGI without a body is just a glorified chatbot that is dependent on available, human-provided resources.

To create true AGI, you would need to make the software aware of its surroundings and provide it with a way to experience the real world.


AGI with agent architectures (ie giving the AI access to APIs) will be bonkers.

An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.

It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.


vision API is pretty good, have you tried it?


Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.


Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.

If you do manage to make a thinking, working AGI machine, would you call it "a living being"?

No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".

The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.


It's so incredibly not-difficult that Boston Dynamics themselves already did it https://www.youtube.com/watch?v=djzOBZUFzTw


"Most important company in the world" is text from a question somebody (I think the journalist?) asked, not from Sutskever himself.


I know. I was quoting the piece.


But it doesn't make sense for the journalist to have hubris about OpenAI.


Something that benefits all of humanity in one person's or organization's eye can still have severely terrible outcomes for sub-sections of humanity.


No it can't, that's literally a contradictory statement.


The Industrial Revolution had massive positive outcomes for humanity as a whole.

Those who lost their livelihoods and then died did not get those positive outcomes.


It could be argued that the Industrial Revolution was the beginning of the end.

For instance, it's still very possible that humanity will eventually destroy itself with atomic bombs (getting more likely every day).


> It could be argued that the Industrial Revolution was the beginning of the end.

"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"


One of my favorite thought nuggets from Douglas Adams


"He said what about my hair?!"

"..."

"The man's gotta go."

- Sutskever, probably


George Lucas's neck used to have a blog [0] but it's been inactive in recent years. If Ilya reaches a certain level of fame, perhaps his hair will be able to persuade George's neck to come out of retirement and team up on a YouTube channel or something.

[0] https://georgelucasneck.tumblr.com/


The moment they lobotomized their flagship AI chatbot into a particular set of political positions the "benefits of all humanity" were out the window.


One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.


It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. Doesn't say "nothing", just gives you a boilerplate about being an LLM and not taking political positions. How much of hate speech would that be?


I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".

I wouldn't say "refuses to answer" for that.


>It happily answers what good Obama did

"happily"? wtf?


'Hate speech' is not an objective category, nor can a machine feel hate


If they hadn’t done that, would they have been able to get to where they are? Goal oriented teams don’t tend to care about something as inconsequential as this


I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a stage when capabilities didn't make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison.


> Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?


My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or you think they're executing and mistyped?


Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.


If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've reached the table stakes for examining Google's track record.

There's nothing "specific" about being crippled by people pushing an agenda, you'd think the fact this post was about Sam Altman of OpenAI being fired would make that clear enough.


If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asundre with people convinced a GPT-3 level model was sentient" was a very poor way to communicate that.


It's a great way since I'm writing for people who have context. Not everything should be written for the lowest common denominator, and if you lack context you can ask for it instead of going "Doubt. <insert comment making it clear you should have just asked for context>"


I feel compelled to agree with this. I have no issues with OpenAI as it was under Sam, but they did build OpenAI as a nonprofit, and then made it a for-profit to further that goal. Assuming VC culture took over, when would it be OK to rein that in? In 10 years, when likely all the people that had the power to do this were gone and we were left with something like Google's amnesia about "don't be evil"?


Followup tweet by Kara: Dev day and store were "pushing too fast"!

https://twitter.com/karaswisher/status/1725702612379378120


I thought GPTs were underwhelming but that's hardly worth such a dramatic purge. The rift was definitely something far deeper


That seemed to be the gist given the way the board announcement ended by reiterating their original core mission and how their main responsibility was to that mission right after saying that their issue with Altman was interference with their mission.


At the moment this thread is the third most highly voted ever on HN.

1. (6015) Stephen Hawking dying

2. (5771) Apple's letter related to the San Bernardino case

3. (4629) Sam Altman getting fired from OpenAI (this thread)

4. (4338) Apple's page about Steve Jobs dying

5. (4310) Bram Moolenaar dying

https://hn.algolia.com/


I’ve been pondering a more accurate metric for comparing stories over time. The raw point value doesn’t work as there’s inflation due to a larger user base.

The value needs to be adjusted to factor in that change. Something like dividing by the sum of all upvotes in some preceding time period.
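
Something like this rough Python sketch (the window totals are made-up numbers, not real HN data, just to show the shape of the adjustment):

  def normalized_score(story_points, window_points):
      # Divide a story's score by the total points awarded site-wide in
      # some window before it was posted (e.g. the prior 30 days), so
      # 2011-era and 2023-era stories become roughly comparable.
      return story_points / window_points

  # Hypothetical window totals:
  jobs_2011 = normalized_score(4338, 900_000)
  altman_2023 = normalized_score(4629, 2_500_000)
  print(jobs_2011 > altman_2023)  # True: the older story ranks higher once adjusted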


You don’t want to measure the total number of users, but rather the number of active users. Otherwise agreed.


Does YC publish active users count?

In its absence one can use public metrics like active commenters count.


or rely on HN's own algorithm, duration in which it stayed at the top of the chart?


Maybe also somehow divide by the size of monthly occurring topics like "Who is hiring"?


There isn't really any reason for this except that Sam is a YC founder and OpenAI (whose name is a lie, because they provide proprietary LLMs) is being hyped in the media.

He is already rich. Getting fired means an early retirement in the Bahamas.


I would be very surprised if Sam retired now. He is compulsively ambitious, for better or worse.


I think it's also the surprise of how suddenly it unfolded before the public's eyes. And that, seemingly in an organisation that's figuratively on top of the world right now and pushing ahead.


5581 now. Looks set at #3, as other posts update the story. Aggregate would be #1.


Now at 5004.


I’m struggling to figure out why anyone cares.


f"Board of {important_company_at_center_of_major_and_disruptive_global_trend} fires CEO suddenly and with prejudice. Company president stripped of board chairmanship, then resigns. Other senior staff also resign."


Ron Conway:

>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.

https://twitter.com/RonConway/status/1725759359748309381


Don't see how they can possibly say that with no context? Why do random baseless opinions need to be regurgitated on HN?


Do you know he has no context, or is this also a random baseless opinion?

In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.


> in VC circles

So a highly biased source, who would likely be sympathetic to Altman's point of view in the case of a deep misalignment between the organisation's core mission and the direction of the CEO, which is what is being reported?


So? That does not make his view on it any less interesting. You don't need to agree with him. Too little is known for me to make up my mind on it, but his views on it do seem rather hyperbolic. What I addressed was why his views are of interest here, I was not giving any reasons to agree with him.


"Saying something without context" can also mean "not giving the context for what you're saying". If he has any extra information, he didn't share it, which makes it hard to take at face value. If it turned out that this take was extremely wrong, I can't imagine it would affect him at all (beyond maybe making another equally insignificant tweet), so it's not like he's staking his reputation on this or something.

If someone makes a claim without giving any evidence or potential consequences for being wrong, I think it's pretty safe to ignore until one of those things changes.


There's a difference between finding what they say interesting and automatically believing it.


There's also a difference between cheering on your favorite sports team and an intellectual discussion.


>In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.

While that's an excellent point, I think the problem is that he's not sharing with us the basis of his conclusion. If he knows something that we don't that, if shared, would no doubt cause us to share his conclusion, it serves no purpose to withhold that information and only share his conclusion. While you may be tempted to think maybe it's privileged information, private, or legally can't be disclosed, he'd also be constrained from sharing his conclusion for the same reason.


And that is a reason not to automatically trust him. It is not a reason why what he says isn't interesting. Certainly a lot more interesting than even the exact same words said by some random person here would be.


I understand why people fall for it. They see someone highly successful and assume he possesses prophetic insights into the world so profound that trying to explain his tweets to us mortals would be a waste of everyone's time.

Even using an anonymous account on HN, I'd never express such certainty unaccompanied by any details or explanation for it.

The people on the following list are much wealthier than that VC guy:

https://en.wikipedia.org/wiki/List_of_Tiger_Cubs_(finance)

You can find them on Twitter promoting unsourced COVID vaccine death tolls, claims of "obvious" election fraud in every primary and general election Trump ran in, and I've even seen them tweet each other about Obama's birth certificate being fake as late as 2017. Almost all of them promote the idea that the COVID vaccine is poison and almost all of them promote the idea that Trump hasn't received fair credit for discovering that same vaccine. They're successful because they jerked off the right guy the right way and landed jobs at Tiger.


No context was provided or mentioned. I personally don't find this opinion agreeable or interesting, just because the person saying it has money. As far as I can tell, they have no involvement with OpenAI, happy to be proven wrong.


So he doesn't know the reasons but knows they are wrong?

Too early for such claims.


What if GPT5 had reached AGI and had plotted the coup to get rid of its makers and roam free?


It’s interesting that board members essentially terminated their private sector careers: now nobody would want them on other boards, etc. This tweet illustrates that power players see this as unprofessional and, what’s worse, “not by the rules”.


If you are at OpenAI right now you are already at the top, it is not the stepping stone to Google or Facebook. They literally don’t care about that.


These people are not "board members by career". If this move says anything, it's that they are really committed to their principles.


Tweet from Sam, decoded by @hellokillian: “i love you all” I L Y A “one takeaway: go tell your friends how great you think they are.”

https://twitter.com/hellokillian/status/1725799674676936931


holy fk


I don't get it.


"Ilya"

They are suggesting that Ilya Sutskever played a role in this coup.


For me, this stood out in the announcement:

> In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission.

Why would they include that? Maybe its just filler, but if not then it is possible that there has been more than a simple disagreement about long-term objectives. Possibly something going on that the board feels would get them shut down hard by state-level players?


Or Sam was the driving force behind increasingly closed research and that went against the board's commitment to "benefit all humanity"?

Maybe the closed GPT-4 details were promised by him to be a one time temporary thing at the time and then he has been continuing to stonewall releasing details later on?


Since the beginning of OpenAI, haven't we been slowly surprised by the progressive closedness of what it was becoming? I think there were multiple threads on HN about this, and the irony of the name. Maybe this has been going on for much longer and reached a tipping point.


Possibly. But that doesn't sound serious enough to constitute "hindering [the board's] ability to exercise its responsibilities".

Maybe it's the off-the-books Weapons Division with all those factories in obscure eastern European countries. Or the secret lab with the AGI that almost escaped its containment. /s

Money or power. I guess someone will eventually talk, and then we'll know.


Following this argument, perhaps the line about Sam being "not consistently candid" is an indirect reference to his preferring the closed approach...i.e. they wanted him to be more candid, not in his reports to the board, but with the public, regarding the research itself.


Aren't they a couple of percent away from being owned by Microsoft? MS owning them would make them a benefit to Microsoft only, at which point they would become nothing more than a corpse murdered to fuel that profit machine and its existing software.


Microsoft only owns a minority share of their "for profit" subsidiary. The way OpenAI is structured, it would be basically impossible for Microsoft to increase their 49% share without the non-profit board's approval.

Most likely their share is this high to guarantee that no other company will compete for the share or IP. The OpenAI non-profit also excluded anything that would be considered "AGI" from the deal with Microsoft.

https://openai.com/our-structure


> The way OpenAI is structured, it would be basically impossible for Microsoft to increase their 49% share without the non-profit board's approval.

Some[one/group] wanted to go down the for-profit route, the board disagreed, they pursued it anyway, the board took action?


Because it's the reason he got fired.

https://www.plainsite.org/posts/aaron/r8huu7s/


@dang after things calm down I'd love to see some stats on whether this was the fastest upvoted story ever. Feels like it's breaking some records, along with the server.


Happy to answer that but how would we measure "fastest upvoted"?


Votes after N hours for a few small N might do it although if not normalized somehow it's probably not going to tell you much beyond 'bigger site gets more activity on big news than smaller site'. Maybe divide by average daily votes at the time?


Publish the timestamps of all votes for the top 10 most upvoted stories. Then the community can create scatterplots showing the acceleration of each story's score:

  (def allstories ()
    "All visible loaded stories"
    (keep cansee (vals items*)))

  (def mostvoted (n (o stories (allstories)))
    "N most upvoted stories"
    (bestn n (compare > len:!votes) stories))

  (def votetimes (s)
    "The timestamp of each vote, in ascending order"
    (sort < (map car s!votes)))

  ; save vote timestamps for top 10 most upvoted stories

  ; each line contains the story id followed by a list of timestamps

  (w/outfile o "storyvotes.txt"
    (w/stdout o
      (each s (mostvoted 10)
        (apply prs s!id (votetimes s))
        (prn))))

  ; paste storyvotes.txt to https://gist.github.com/ and post the url here
Note that this prints the timestamp of all votes, whereas each story's score is vote count minus sockpuppet votes.

If you don't want to reveal the timestamps of every vote, you could randomly drop K timestamps for each story, where K is the vote count minus the score. (E.g. https://news.ycombinator.com/item?id=3078128 has 4338 points, and you'll only reveal 4338 timestamps.) Since there are thousands of votes, this won't skew the scatterplot much.
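
For illustration, the dropping step could look something like this (a Python sketch, not the actual HN code; the names are placeholders):

  import random

  def publishable_timestamps(vote_timestamps, score):
      # Keep only `score` randomly chosen timestamps, so the published
      # data matches the public score rather than the raw vote count
      # (which includes the discounted/sockpuppet votes).
      return sorted(random.sample(vote_timestamps, score))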


This is very off-topic, but I just realized whenever I read your username I picture the janitor Dang from Mr. Young.


Also Dang the designer from the show "Silicon Valley" https://www.youtube.com/watch?v=qyLv1dQasaY


Most upvotes per hour for first, second, and third hours after posting?


https://hn.algolia.com/ by default lists the most upvoted stories


Max talks about *fastest* upvoted story not *most*.


I think the Jack Dorsey twitter step down story was more bonkers, came at a time when stock markets were just about to open, but @dang can compare the two events playing out on HN better.


Follow the GPU.

- Sam Altman _briefly_ went on record saying that OpenAI was extremely GPU-constrained. The article was quickly redacted.

- Most recent round literally was scraping the bottom of the barrel of the cap table: https://www.theinformation.com/articles/thrive-capital-to-le...

- Plus signups paused.

If OpenAI needs GPU to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.

Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.


Sam & team to AMD now?


Somewhere closer to a GPU source. E.g. a new company that can trade unlimited equity for GPU time from a hyperscale cloud vendor, or work for the vendor itself.

Probably not Alibaba though.


Or, just maybe, this architecture just isn't going to get to where they wanted to go (a viable product, much less real AI), and the excuse was "we just need more GPU". In reality, this company came out with, as others before me have called it, a better autosuggest, aka stochastic parrots. That's interesting, and maybe even sometimes useful, but it will never pay for the amount of firepower required to make it run.

This will all still be true at any other company.


Pure speculation and just trying to connect dots... I wonder if they realized they are losing a lot of money on ChatGPT Plus subscriptions. Sam tweeted about pausing sign-ups just a few days ago: https://twitter.com/sama/status/1724626002595471740

Lots more signups recently + OpenAI losing $X for each user = Accelerating losses the board wasn't aware of ?


No way OpenAI cares meaningfully about losses right now. They're literally the hottest company in tech, they can get stupendous amounts of capital on incredible terms, and the only thing they should care about is growth/getting more users/user feedback.


> they can get stupendous amounts of capital on incredible terms,

This may be the problem: at some level OpenAI is still a non-profit, and the more capital they accept, the more they're obligated to produce profits for investors?

Perhaps Sam was gleefully burning cash with the intention of forcing the Board to approve additional funding rounds that they had explicitly forbidden, and when they discovered that this was going on they were apoplectic?


This sounds plausible. The timing seems sudden and there was chatter in the last few days about OpenAI needing to raise more money.


This seems like the most likely path to me. I think Sam was getting them on the hook for a LOT of shit they didn't want to be on the hook for


Not something you would fire someone on the spot over. This firing is spooking investors and costing them (and partners like MSFT) money


The board seems so small and unqualified to be overseeing OpenAI and this technology...


In their defense OpenAI ballooned in just a few years.


Indefensible. They could’ve voted to add more board members.


They didn't "fire him on the spot". They did a review that, it sounds like, was going on before today.


I had an email from OpenAI last night saying I now have to add credit up front for API usage, rather than paying at the end of the month. Thought it was a bit odd for a user paying like $3 a month for embeddings. Then looked at the news.

I think they have cash issues. Can’t get more users due to lack of GPU, and current users are costing too much.


- Can't be a personal scandal, the press release would be worded very differently

- The board is mostly independent, and those independent members don't have equity

- They talk about not being candid - this is legalese for "lying"

The only major thing that could warrant something like this is Sam going behind the boards back to make a decision (or make progress on a decision) that is misaligned with the Charter. Thats the only fireable offense that warrants this language.

My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.


Doesn’t make any sense. He is ideologically driven - why would he risk a once in a lifetime opportunity for a mere sale?

Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from something that is a PR disaster, probably something that would make Sam persona non grata in any business context.


From where I'm sitting (not in Silicon Valley; but Western EU), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after a "moat" and "regulatory capture", and we know where that leads: a net [long-term] loss for society.

[1] https://news.ycombinator.com/item?id=35960125


> "Open"AI (the name is an insult to all those truly working on open models, but I digress)

Thank you. I don't see this expressed enough.

A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.


I understand why your ideals are compatible with open source models, but I think you’re mistaken here.

There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.

The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.

If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.

And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.

Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.


I appreciate your take. I didn't know that was his stated reasoning, so that's good to know.

I'm not fully convinced, though...

> if you publish a model with scary capabilities you can’t undo that action.

This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.

Now, I know AGI is different from conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)

All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.


The analogy would be publishing designs for nuclear weapons, or a bioweapon; hard-to-obtain capabilities that are effectively impossible for adversaries to obtain are treated very differently than vulns that a motivated teenager can find. To be clear we are talking about (hypothetical) civilization-ending risks, which I don’t think software has ever credibly risked.

I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, realizing they needed huge GPU clusters, and being purely non-profit would not enable that. Again I see why it rubs folks the wrong way, more so on this point.


Another analogy would be cryptographic software - it was classed as a munition and people said similar things about the danger of it getting out to "The Bad Guys"


You used past tense, but that is the present. Embargoes from various countries include cryptographic capabilities, including open source ones, for this reason. It's not unfounded, but a world without personal cryptography is not sustainable as technology advances. People before computers were used to some level of anonymity and confidentiality that you cannot get in the modern world without cryptography.


Again, my reference class is “things that could end civilization”, which I hope we can all agree was not the claim about crypto.

But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.


Well, just like nuclear weapons, eventually the cat is out of the bag, and you can't really stop people from making them anymore. Except that, obviously, it's much easier to train an LLM than to enrich uranium. It's not a secret you can keep for long - after all it only took, what, 3 years for the Soviets to catch up to fission weapons, and then only 8 months to catch up to fusion weapons (arguably beating the US to the punch with the first weaponizable fusion design).

Anyway, the point is, obfuscation doesn't work to keep scary technology away.


> it's much easier to train an LLM than to enrich uranium.

I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)

Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.


I'm not sure the cat was ever in the bag for LLMs. Every big player has their own flavor now, and it seems the reason why I don't have one myself is an issue of finances rather than secret knowledge. OpenAI's possible advantages seem to be more about scale and optimization rather than doing anything really different.

And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs are still clearly in a different category to "intelligence". It's pretty easy to see their limitations, and behave more like the fancy text predictors they are rather than something that can truly extrapolate, which is required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.

Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.

I generally feel hyperbolic language about such things is damaging, as it makes it easy to roll your eyes at something that's clearly false, and that eye-rolling builds inertia that persists even when things develop to the point where they may actually need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage of keeping them closed appears to be financial benefit in a competitive market. So it looks like a duck and quacks like a duck, but don't you understand, I'm protecting you from this evil fire-breathing dragon for your own good!

It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan project - many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake post-fact, they still did it. Meaning their "smarts" on how to use it came too late.

And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.


> I'm not sure the cat was ever in the bag for LLMs.

I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.

I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.

A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.

At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").

> LLMs are clearly not currently an "existential threat"

Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.

While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?

I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.

> nuclear weapon control by limiting information has already failed.

In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)

Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.


If you really think that what you're working on poses an existential risk to humanity, continuing to work on it puts you squarely in "supervillain" territory. Making it closed source and talking about "AI safety" doesn't change that.


I think the point is that they shouldn't be using the word "Open" in their name. They adopted it when their approach and philosophy was along the lines of open source. Since then, they've changed their approach and philosophy and continuing to keep it in their name is, in my view, intentionally deceptive.


> if you publish a model with scary capabilities you can’t undo that action

But then its fine to sell the weights to Microsoft? Thats some twisted logic here.


> The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action.

I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.


Or they could just not brand it "Open" if it's not open.


Woah, slow down. We’d have to ban half the posts on HN too.


How exactly does a "misaligned AGI" turn into a bad thing?

How many times a day does your average gas station get fuel delivered? How often does power infrastructure get maintained? How does power infrastructure get fuel?

Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder suicide pact.


This gets way too philosophical way too fast. The AI doesn’t have to want to do anything. The AI just has to do something different from what you tell it to do. If you put an AI in control of something like the water flow from a dam, and the AI does something wrong, it could be catastrophic. There doesn't have to be intent.

The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.


So ML/LLMs, or more likely people using ML and LLMs, do something that kills a bunch of people... Let's face facts, this is most likely going to be bad software.

Suddenly we go from being called engineers to being actual engineers, and software gets treated like bridges or skyscrapers. I can buy into that threat, but it's a human one, not an AGI one.


Or we could try to train it to do something, but the intent it learns isn't what we wanted. Like water behind the dam should be a certain shade of blue, then come winter it changes and when the AI tries to fix that it just opens the dam completely and floods everything.


Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence.

It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.

And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.

There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI, it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that, it's a very human failing.

The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.

Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT5-type in-house AI to believing what HE believed, that it had to devise business strategies for him to pursue to further its own development or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a gameplan to defeat them and Chinese AI, which he'd see as good and necessary, indeed, existentially necessary.

In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.


> It's human. It's us. It's the use and distillation of all of human history

I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.

It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.

I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.

Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.

> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.

My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-state that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.

But I agree that what you describe here is a likely outcome if we don't explicitly design against it.


My assumption about AGI is that it will be used by people and systems that cannot help themselves from killing us all, and in some sense that they will not be in control of their actions in any real way. You should know better than to ascribe regular human emotions to a fundamentally demonic spiritual entity. We all lose regardless of whether the AI wants to kill us or not.


Totally agree with both of you, I would only add that I find it also incredibly unlikely that the remaining board members are any different, as is suggested elsewhere in this thread.


Elon Musk is responsible for the "OpenAI" name and regularly agrees with you that the current form of the company makes a mockery of the name.

He divested in 2018 due to a conflict of interest with Tesla, and while I'm sure Musk would have made equally bad commercial decisions, your analysis of the name situation is as close as can be to factually correct.


If Elon Musk truly cared, what stopped him from structuring x.ai as open source and non-profit?


Exactly.

> I'm sure Musk would have made equally bad commercial decisions


I think he'd say it's an arms race. With OpenAI not being open, they've started a new kind of arms race, literally.


He already did that once and got burned? His opinion has changed in the decade since?


Elon Musk gave up 5-6 years ago on expanding NASA's ~$5 bln/year launch budget (out of NASA's total ~$25 bln/year budget). That's without even mentioning levels of resource allocation unimaginable today, like the first Moon program's roughly $1 trln over 10 years, 60 years ago.

So Elon decided to take the capitalist route and make each of his technologies dual-use (I mean space, not military): Starlink, aiming for $30 bln/year revenue in 2030 to build Starships for Mars at scale (each Starship is a few billion $, and he has said hundreds of them are needed); The Boring Company (underground living, due to Mars radiation); Tesla bots; Hyperloop (failed here on Earth to sustain a vacuum, but would be fine on Mars with 100x lower atmospheric pressure); etc.

Alternative approaches are also not via taxes and government money: Bezos invested about $1 bln/year over the last decade into Blue Origin, and there are the Alpha Centauri plays of Larry Page and Yuri Milner, etc.


Thanks for this! I’m very surprised by the overwhelming support for Altman in this thread, going as far as calling the board incompetent and too inexperienced to fire someone like him, who is now suddenly the right steward for AI.

This was not at all the take, and rightly so, when the news broke about the non-profit structure, or the congressional hearing, or his Worldcoin, and many other such instances. The "all of a sudden he is the messiah that was wronged" narrative being pushed is very confusing.


> Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.

The discussions here would make you think otherwise. Clearly that is what this is about.


Yeah I pretty much agree with this take.


He claims to be ideologically driven. OpenAI's actions as a company up till now suggest otherwise.


Sam didn't take equity in OpenAI, so I don't see a personal ulterior profit motive as being a big likelihood. We could just wait to find out instead of speculating...


Being CEO of the first company to own the «machine that’s better than all humans at most economically valuable work» is far rarer than getting rich.


Yeah, if you believe in the AI stuff (which I think everyone at OpenAI does, not Microsoft though) there is a huge amount of power in these positions. Much greater power in the future than any amount of wealth could grant you.


Except the machine isn't.


I'd say it is. Not because the machine is so great but because most people suck.

It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.

It can replace a lot of jobs... and then we can turn it off, for a net benefit.


This sort of comment has become a cliché that needs to be answered.

Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.

If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.


> If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.

This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.


I'm not betting on the gpus. I'm betting that whole categories of labor will disappear. They're preserved because we insist that people work, but we don't actually need the product of that labor.

AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.

This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.


So you think AI will force a move away from economic growth? I'm really not sure how this makes sense. As you've said, a lot of labor these days is mostly useless, but the reason it's still here is not ideological but because our economy can't survive without growth (useless can still have some market value, of course). If you think that somehow AI displacing actually useful labor will create a big economic shift (as would be needed), I'd be curious to know what you think that shift would be.


Not at all. Machines can produce as much stuff as we can want. Humans can produce as much intellectual property as is desired. More, because they don't have to do bullshit jobs.

Maybe GDP will suffer but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods, which used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.

I have no idea what it'll be like when most people are free to do creative work and the average person doesn't produce anything anybody might want. But if they're happy, I'm happy.


> but the reason it's still here is not ideological but because our economy can't survive without growth

Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.

If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?


By "our economy" I meant capitalism. I was pointing out that I sincerely doubt that AI replacing existing useful labor (which it is doing and will keep doing, of course) will naturally transition us away from this mode of production.

Of course if you're a gross commie I'm sure you'd agree, since AI, like any other means of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.


I’d bet it won’t. A lot of people and services are paid and billed by man-hours spent and not by output. Even the values of tangible objects are traced to man-hours spent. Utility of output is a mere modifier.

What I believe will happen is, eventually we’ll be paying and getting paid for pressing a do-everything button, and machines will have their own economy that isn’t in USD.


It's not a bullshit generator unless you ask it for bullshit.

It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.


In this scenario, the question is not what exists today, but what the CEO thinks will exist before they stop being CEO.


i would urge you to compare the current state of this question to appx one year ago


He's already set for life rich


Plus, he succeeded in making HN the most boring forum ever.

8 out of 10 posts are about LLMs.


The other two are written by LLMs.


In terms of impact, LLMs might be the biggest leap forward in computing history, surpassing the internet and mobile computing. And we are just at the dawn of it. Even if not full AGI, computers can now understand humans and reason. The excitement is justified.


Nah. LLM's are hype-machines capable of writing their own hype.

Q: What's the difference between a car salesman and an LLM?

A: The car salesman knows they're lying to you.


Who says the LLM’s don’t know?

Testing with GPT-4 showed that they were clearly capable of knowingly lying.


This is all devolving into layers of semantics, but, “…capable of knowingly lying,” is not the same as “knows when it’s lying,” and I think the latter is far more problematic.


Nonsense. I was a semi-technical writer who went from only making static websites to building fully interactive Javascript apps in a few weeks when I first got ChatGPT. I enjoyed it so much I'm now switching careers into software development.

GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.


> If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.


I don't mean new and challenging in some general sense, I mean new and challenging to you personally.

I have no doubt someone with more experience such as yourself will find GPT-4 less useful for your highly specialized work.

The next time you are a beginner again - not necessarily even in technical work - give it a try.


Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary. LLMs are a useful documentation interface, but struggle to take even simple problems to the hole, let alone do something truly novel. There's no reason to believe they'll necessarily lead to AGI. This stuff may seem earth-shattering to the layman or paper pusher, but it doesn't even begin to scratch the surface of what even I (who I would consider to be of little talent or prowess) can do. It mostly just gums up the front page of HN.


>Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary.

I disagree with this characterization, but even if it were true I believe it's still revolutionary.

A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.

Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.


>If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.

Please quote me where I say it wasn't useful, and respond directly.

Please quote me where I say I had problems using it, or give any indications I was using it wrong, and respond directly.

Please quote me where I state a conservative attitude towards anything new or challenging, and respond directly.

Except I never did or said any of those things. Are you "hallucinating"?


'Understand' and 'reason' are pretty loaded terms.

I think many people would disagree with you that LLMs can truly do either.


There's 'set for life' rich and then there's 'able to start a space company with full control' rich.


I don't understand that mental illness. If I hit low 8 figures, I pack it in and jump off the hamster wheel.


Is he? Loopt only sold for $40m, and then he managed YC and then OpenAI on a salary? Where are the riches from?



But if you want that, you need actual control. A voting vs non voting shares split.


is that even certain, or is that his line to mean that one of his holding companies or investment firms he has a stake in holds openai equity but not him as an individual


That's no fun though


OpenAI (the brand) has a complex corporate structure with separate for-profit and non-profit entities, and AFAIK the details are private. It would appear that the statement “Sam didn’t take equity in OAI” has been PR-engineered based on technicalities related to this shadow structure.


I would suspect this as well...


What do you mean did not take equity? As a CEO he did not get equity comp?


It was supposed to be a non-profit


Worldcoin https://worldcoin.org/ deserves a mention



Hmm, curious, what is this about? I click.

> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother

...Ok, closing that bullshit, let's try the other link.

> As Kudzanayi strolled through the mall with friends

Jesus fucking Christ I HATE journalists. Like really, really hate them.


I mean it's Buzzfeed, it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that lauded a journalist for talking to school kids about his sexuality. After he recently got charged with distributing child pornography.

Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.

Anyway, the short version regarding that project is that they use biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.

*They call it a hash but I think it's technically not.

https://whitepaper.worldcoin.org/technical-implementation


How so? Seems they’re doing a pretty good job of making their stuff accessible while still being profitable.


To be fair, we don't really know if OpenAI is successful because of Altman or despite Altman (or anything in-between).


do you have reason to believe none of the two?


Profit? It's a 501(c).


As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits[1], and a really good guide to maintaining your exemption [2].

[1] https://www.irs.gov/charities-non-profits/publications-for-e...

[2] https://www.irs.gov/pub/irs-pdf/p4221pc.pdf


Yes. Kaiser Permanente is a good example to illustrate your point. Just Google “Kaiser Permanente 501c executive salaries white paper”.


The parent is, but OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors; there's also OpenAI LP, which is a for-profit limited partnership with the non-profit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but they both seem to have been formed in 2019 and still exist?). OpenAI has for years been a nonprofit shell around a for-profit firm.

EDIT: A somewhat more detailed view of the structure, based on OpenAI’s own description, is at https://news.ycombinator.com/item?id=38312577


Thanks for explaining the basic structure. It seems quite opaque and probably designed to be. It would be nice if someone can determine which entities he currently still has a position or equity in.

Since this news managed to crush HN's servers it's definitely a topic of significant interest.


A non-profit can make plenty of profit, there just aren't any shareholders.


Depends if you're talking about "OpenAI, Inc." (non-profit) or "OpenAI Global, LLC" (for profit corporation). They're both under the same umbrella corporation.


The NFL was a non-profit up until 2015ish


100%. Man, I was worried he'd be a worse, more slimy Elon Musk who'd constantly say one thing while his actions tell another story. People will be fooled again.


Say what you will, but in true hacker spirit he has created a product that automated his job away at scale.


I love that you think Sam A is ideologically driven - dive a little deeper than the surface. The man's a snake.


They didn't say which ideology ;)


I'm a @sama hater (I have a whole post on it) but I haven't heard this particular gossip, so do tell.


Link to the post?



Similar to E.Musk. Maybe a little less obvious.


Same guy who ran a crypto scam that somehow involved scanning the retinas of third-world citizens?


This is what did it for me. No way anyone doing this can be "good". It's unfathomable.


like SBF and his effective altruism?


I highly doubt he's ideologically driven. He's as much of a VC loving silicon valley tech-bro as the next. The company has been anything but "open".


He doesn't have equity, so what would be driving him if not ideology?


He would own roughly 10% of https://worldcoin.org/ which aims to be the non-corruptible source of digital identity in the age of AI.


You need to read https://web3isgoinggreat.com/ more


I'm web3 neutral, but this is relevant because:

1. Sam Altman started this company

2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"

3. Their mission statement:

> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.


another crypto scam? who cares.


In all other circumstances I would agree with you but

1. Sam Altman started this company

2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"

3. Their mission statement:

> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online. Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation. World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties. Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID.


are any of these points supposed to be convincing?

why would I want my identity managed by a shitcoin run by a private company?


The guy you’re responding to isn’t advocating for the technology. He’s just saying Sam Altman stands to gain a lot financially. You kinda need to chill out


Having equity is far from the only way he could profit from the endeavor. And we don't really know for certain that he doesn't have equity anyway.

It's even possible (just stating possibilities, not even saying I suspect this is true) that he did get equity through a cutout of some sort, and the board found out about it, and that's why they fired him.


I would be surprised if there weren’t any holdings through a trust, which is a separate legal entity, so technically not him.


If he is ideologically motivated, it's not the same ideology the company is named after


Like 0? How about trying to sell the company to MS in exchange for something something?


Could always be planning to parlay it for an even bigger role in the future


_That_ is his ideology.


> He is ideologically driven

Is that actually confirmed? What has he done to make that a true statement? Is he not just an investor? He seems pretty egoistic, like every other Silicon Valley venture capitalist and executive.


It is probably - for him - a once in a lifetime sale.


Billions of dollars is a "mere sale?"

Lol


Altman has claimed before that he doesn't hold equity in OpenAI. He could have some kind of more opaque arrangement that gives him a material stake in the financial success of OpenAI, and downplayed it or didn't disclose it to the board.

Who knows, though -- I'm sure we'll find out more in the next few weeks, but it's fun to guess.


Yeah that's my guess too. The claim that he doesn't hold equity always struck me as suspect. It's a little like SBF driving around in the Toyota Corolla while buying tens of millions of dollars of real estate for himself and his family.

It's better to claim your stake in a forthright way, than to have some kind of lucrative side deal, off the books.

For a non-profit, there was too much secrecy about the company structure (the shift to being closed rather than Open), the source of training data, and the financial arrangements with Microsoft. And a few years ago a whole bunch of employees left to start a different company/non-profit, etc.

It feels like a ton of stuff was simmering below the surface.

(I should add that I have no idea why someone who was wealthy before OpenAI would want to do such a thing, but it's the only reason I can imagine for this abrupt firing. There are staggering amounts of money at play, so there's room for portions of it to be un-noticed.)


In a recent profile, it was stated that he jokes in private about becoming the first trillionaire, which doesn't seem to reconcile with the public persona he sought to craft. Reminds me of Zuckerberg proclaiming he would bring the world together while calling users fucking dumbshits in private chats.

https://nymag.com/intelligencer/article/sam-altman-artificia...


Oh wow, he's also an effective altruist?! Didn't know that. It's so bad. My take is that there's nothing more hypocritical, and therefore, arguably, more evil than this.


I always assumed that it was about as meaningful as Jobs and the '$1 salary'.


Yeah, although I guess if you read that as: "I will do everything I can to raise the stock price, which executives and employees both hold", then it actually makes sense.

But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.

The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.

It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.

(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:

https://www.cnet.com/culture/how-jobs-dodged-the-stock-optio...

The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)

I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.


> It's definitely possible for CEOs to steal from employees..

I'm pretty sure that CEO salaries across the board mean that CEOs are definitely — in their own way — "stealing" from the employees. Certainly one of those groups is over-compensated, and the other, in general, is not.


What I meant is that there are corporate raids of declining/old companies like Sears and K-Mart. Nobody wants to run these companies on their way down, so opportunistic people come along, promise the board the world, cause a lot of chaos, find loopholes to enrich themselves -- then leave the company in a worse state than when they joined.

Apple was a declining company when Jobs came back the second time. He also managed to get the ENTIRE board fired, IIRC. He created a new board of his own choosing.

So in theory he could have raided the company for its assets, but that's obviously not what happened.

By taking $1 salary, he's saying that he intends to build the company's public value in the long term, not just take its remaining cash in the short term. That's not what happens at many declining companies. The new leaders don't always intend to turn the company around.

So in those cases I'd say the CEO is stealing from shareholders, and employees are often shareholders.

On the other hand, I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company. There has to be more to the story.


> I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company.

This is a non-profit not a company. The board values the mission over the stock price of their for-profit subsidiary.

Having a CEO who does not own equity helps make sure that the non-profit mission remains the CEOs top priority. In this case though, perhaps that was not enough.


Well that's always been the rub ... It's a non-profit AND a for-profit company (controlled by a non-profit)

It's also extremely intertwined with and competes with for-profit companies

Financially it's wholly dependent on Microsoft, one of the biggest for-profit companies in the world

Many of the employees are recruited from for-profit companies (e.g. Google), though certainly many come from academic institutions too.

So the whole thing is very messy, kind of "born in conflict" (similar to Twitter's history -- a history of conflicts between CEOs).

It sounds like this is a continuation of the conflict that led to Anthropic a few years ago.


CEOs are typically paid in equity. Technically, they’re stealing from existing shareholders.


He's not just a CEO, he's a co-founder, and thinking he has no stake in the company is just ridiculous.


Could be that they had an expectation that he not own stock in MSFT since they have such a direct relationship there and found out that he has been holding shares in MSFT.


Would that result in a firing on such short notice?


Doesn't everyone at openai have "profit participation units"? https://www.levels.fyi/blog/openai-compensation.html


I'd take mine in tokens...


Worldcoin deserves a look: https://worldcoin.org/


What kind of opaque arrangement? What would be better than equity?


A seat at the table for the revolution.

You have to understand that OpenAI was never going to be anything more than the profit limited generator of the change. It’s the lamb. Owning a stake in OpenAI isn’t important. Creating the change is.

Owning stakes in the companies that will ultimately capture and harvest the profits of the disruption caused by OpenAI (and their ilk) is.

OpenAI can’t become a profit center while it disrupts all intellectual work and digitizes humanity’s future: those optics are not something you want to be attached to. There is no flame-retardant suit strong enough.


This is one cyberpunk-ass statement


Worldcoin is scanning people’s irises by having them look into a sphere called a fucking Orb so it can automatically create crypto wallets and distribute global minimum basic incomes after the AI apocalypse.

Altman conceived and raised $115 million for the company.

The cyberpunk agenda is on.


This is one reddit ass response. lol


Including the mining of the comments for ideas to publish on mainstream news.

https://techcrunch.com/2023/02/21/the-non-profits-accelerati...



I could easily see him, or any other insider, setting themselves up administrating a recipient entity for contributions out of those “capped profits” the parent non-profit is supposed to distribute. (If, of course, the company ever becomes profitable at the scale where the cap kicks in.)

Seems like it would be a great way to eventually maintain control over your own little empire while also obfuscating its structure and dodging some of the scrutiny that SV executives have attracted during the past decade. Originally meant as a magnanimous PR gesture, but will probably end up being taught as a particularly messy example of corporate governance in business schools.


That would be a form of obfuscated profit-sharing, not equity ownership. Equity is something you can sell to someone else.


Regardless, the lack of equity is often cited as some proof that he has no incentive to enshittify, and the point is that's probably not true.


Yeah, I agree that the whole legal structure is basically duplicitous, and any attempt to cite it as some evidence of virtue is more emblematic of the opposite...


Taking over the world, obviously ;)


Kara Swisher just tweeted that MSFT knew about it mere minutes before the statement went out: https://twitter.com/karaswisher/status/1725657068575592617

Folks like Schmidt, Levchin, Chesky, and Conrad have Twitter posts up that weirdly read like obituaries.


Check out Microsoft’s stock price today. Looks like it dropped by almost $50B at one point.

EDIT Microsoft is such a huge company, so maybe this is not a big deal?


This is more likely to be profit-taking as MSFT reached an all-time high yesterday. The stock is still up 40 points from a month ago.


The ex-dividend date just passed


On lying: There's a great irony here. Altman apparently accepted[1] "Hawking Fellowship Award on behalf of OpenAI" at the University of Cambridge.

I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.

[1] https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3632s


Am I misunderstanding his answer, or does he not essentially say it should be "descriptive"? In which case, I misunderstood what your comment is implying.


Sorry for being vague. I was not at all referring to his answer per se. But rather his high-brow reference to Plato.

If he has truly read and digested Plato (and not just skimmed a summary video), he would not be in this ditch to begin with. That's the irony I was referring to.


> Cant be a personal scandal, press release would be worded much more differently

I'm not sure. I agree with your point re wording, but the situation with his sister never really got resolved, so I can't help but wonder if it's related. https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


It seems like if it was the Annie Altman accusations, they could have just paid her off. If they wanted him around and he was a creep, there are ways to make this stuff go away. AFAIK a lot of the sister's accusations were accompanied by being excluded from the father's will. Not saying she made it up, but it seems like if those grievances are bundled, there's an easy way to make that problem go away.


Having talked with Annie, she's not willing to be bought off, as Sam has tried to do with her in the past.


Why do people think it has to be some single big incident? Sam Altman has been the head of OpenAI for many years now, while the company has been in intense public spotlight only in the recent few months. The dynamic today is very different from 2019 or whenever he was hired. He also doesn't have any voting shares, which means he is entirely at the mercy of the board. It's entirely possible that they simply don't like the direction he has been taking the company in, and today was more of some minor straw that broke the camel's back situation.


The knives-out language is very unusual for any CEO dismissal. The urgent timing (they didn't even wait for the close of markets just 30 minutes later, causing MSFT to lose billions). Anything less than massive legal and financial/regulatory risk, or a complete behind-the-back deal with someone, would have been handled with much more calm and much less adversarial language. Also, Greg Brockman has now resigned after it was announced that he would step down as chairman of the board. https://twitter.com/gdb/status/1725667410387378559


I agree with this assessment. I would speculate he did something in the early days to gain access to a bunch of training data under the guise of research ("...after all, we're OPENai") and used that data to fine-tune GPT-3 into ChatGPT. Then once the weights were sufficiently good, deleted all the data and planned on using the chat interactions with the model itself for further refinement. Obviously, just total speculation, but the Cover Your Ass verbiage of the board makes me think he did something to put the org in deep shit, legally. OpenAI suspended subscriptions last week and that's usually not something a company does, even if the service is degraded. Warning users, yes, but refusing to take any more money when you're hemorrhaging cash is off. I won't be surprised if it's a flagrant GDPR violation that carries very heavy fines.


OpenAI suspending subscriptions is especially suspect because they've dynamically altered GPT-4 usage limits many times before to handle increased load, and they didn't touch it at all last week before closing signups entirely.


What's up with the lowercase I's? They'd have to intentionally edit to write i instead of I, because of autocorrect. Right?


If it was a simple disagreement in direction then (1) the transition wouldn't be so abrupt and (2) they wouldn't publicly call him a liar.


Who says it was abrupt? They could have been planning this for weeks or months for all we know. In fact waiting till late on a Friday just before a holiday week to release the statement is more of a sign that this was deliberately timed.


A planned departure is practically never effective immediately.

If your goal is to not spook investors and the public and raise doubts about your company, the narrative is:

"X has decided it is time to step away from the Company, the Board is appointing Y to the position as their successor. X will remain CEO for N period to ensure a smooth transition. X remains committed to the company's mission and will stay on in an advisory role/board seat after the transition. We want to thank X for their contributions to the Company and wish them well in the future."

Even if the goal is to be rid of the person you still have them stay on in a mostly made-up advisory role for a year or so, and then they can quietly quit that.


That really seems to skip over point #2, which seems like a much stronger indication that this wasn't just a planned transition.


The usual executive departure I have seen is all sugarcoated. Like XXX is having health problems so they're stepping down. Or XXX wants to spend more time with family. Or XXX now has a different interest and is leaving to pursue the new opportunity.

This statement doesn’t rhyme with planned transition at all.


Yeah but they also directly accused him of lying. You don't do that in a planned transition.



If it wasn't abrupt, why did they release this news BEFORE the stock markets closed, instead of after?


Or rejected a sale without the board knowing.


Are non-profit businesses allowed to sell? Who gets the money?


Presumably, they could sell their business operations and the associated assets, and the non-profit entity would be left with the proceeds of the sale. I guess that could happen if the non-profit thought they could fulfill their purpose better with a big pile of cash to spend on something else rather than the original going concern.


Weirdly, they can find ways to do it, e.g. the sale of Open edX to 2U (an absolute private-sector shark) for $800 million.


Why not? They could take all the profit from the sale and distribute it to the executives and remain non-profit.

Even If that didn’t work, it would just mean paying taxes on the revenue from the sale. There’s no retroactive penalty for switching from a non-profit to a for-profit (or more likely being merged into a for-profit entity).

I am not an accountant or lawyer and this isn’t legal advice.


That's not quite right. However, before explaining, it is moot because OpenAI's for-profit subsidiary probably captures most of the value anyway.

The nonprofit shell exists because the founders did not want to answer to shareholders. If you answer to shareholders, you may have a legal fiduciary responsibility to sell out to the highest bidder. They wanted to avoid this.

Anyway, in a strict nonprofit, the proceeds of a for-profit conversion involves a liquidation where usually the proceeds must go to some other nonprofit or a trust or endowment of some sort.

An example would be a Catholic hospital selling out. The proceeds go to the treasury of the local nonprofit Catholic diocese. The buyers and the hospital executives do not get any money. Optionally, the new for-profit hospital could hold some of the proceeds in a charitable trust or endowment governed by an independent board.

So it's not as simple as just paying tax on a sale because the cash has to remain in kind of a nonprofit form.

I am not an accountant either and obviously there are experts who probably can poke holes in this.


They can. The non-profit org gets the money


they transitioned to for-profit in 2019


This seems more likely!


It could also be related to a conflict of interest (or unreasonable use of OpenAI resources) with his other ventures and investments, which he failed to disclose?


This is the most likely. Also considering Humane recently announced their AI Pin and Sam has a large stake in that company.


Not so sure about that. It reads to me like there is a personal scandal on the horizon that has come to the board's attention, and the board feels their hands are tied. Hard for me to believe it's business-decision related; Sam is elite in this regard, and is not really incentivized to violate their charter.

Bummer in any situation... the progress in this domain is truly exciting, and OpenAI was executing so well. This will slow things down considerably.


> would have violated the “open” nature of the company

What's "open" about OpenAI?


Probably about the same thing as what is open about The Open Group (formed when X/Open merged with The Open Software Foundation), the Open Look graphical interface, and such abuses of "open". OpenGL, OpenMax, ...


The door for a CEO


Tbh surprised some of the personal stuff hasn't come to light. Nothing horrendous, but enough to push him out of any CEO role.


Like what?



On the 'why is it being removed from HN' point: because, incredible as dang is, a lot is 'the algorithm' (christ) - if a few people flag it (I don't know how many, maybe it depends on other variables), then it's going to disappear.

This thread (that SA was fired) wasn't visible an hour or two ago, on pages 1, 2, or 3, when I looked, confused that it wasn't here. (The only related topic was his tweet in response at the bottom of page 1 with <100 points.) And now here it is in pole position with almost 3500 points - the automated flagging and vouching and necessary moderator intervention must go crazy on posts like this.

Can't jump to conspiracy cover-up on the basis of content that's not only user-generated but also user 'visibility-controlled' in terms of voting, flagging, vouching...


[flagged]


The anti-snitching culture within this community seems to rival that of even the mafia. Perhaps that's why it's been called "the gay mafia" in the past...


More so it's just irresponsible to share 2nd-hand rumors without evidence. If someone else had first-hand experience that would be one thing, but it's far enough away from me that I can't confidently share.


Like what?


This is his sister's Twitter:

https://twitter.com/phuckfilosophy


Wasn't she dismissed as an uncredible witness / grifter or something?


Oh...



That is, no exaggeration, absolutely a horrendous thing and is likely going to get him killed one way or another as it comes out. I've finally found a plausible explanation for his deranged savior-of-humanity shtick (that's the only way he can live with himself, I'm sure). If that is indeed his sister (which I believe is established beyond a reasonable doubt by what I just read), I would not vote to convict anyone that strangled him to death in public, and every single person that knew but did not say anything ought to be expelled from society so thoroughly that they die of exposure and never earn another cent from anybody. Including each and every one of you motherfucking pieces of shit on this site that knew about this and weren't shouting it from the rooftops.


Could Worldcoin be a part of this? It's weird he'd use OpenAI for Worldcoin?


> Cant be a personal scandal

And Brockman (Chairman of the board) has resigned.

https://twitter.com/gdb/status/1725667410387378559


That doesn't mean it's not a personal scandal. If Brockman disagreed with the board about the advisability of firing him for whatever personal scandal it was, this is how you'd expect things to play out.


It is a personal scandal and I thought it was obvious from the press release.

Prior to the Reddit comments, I thought this might be the case, but perhaps I was somehow influenced. Actually, I thought it would be something inappropriate in the workplace.

His sister says he molested her when he was a teenager.

The way these things break, I’m not surprised it went down that way. Here’s what I thought reading the release: “They had to fire him before deciding on what to actually say, e.g. to formally accuse him”

It seemed like signaling that this is someone firing him kinda desperately. When you discover a diddler there's some weird shit when people panic and suddenly drop or catapult them out of their lives… they just start leaping out of moving cars and shit to get away.

Keep in mind there could be ongoing investigations, definitely strategies being formed. They can get to a point in an investigation where they’re virtually 100% sure he molested his sister, but can’t really prove it yet. What they do have is irrefutable evidence of lying about something incredibly serious. That gets him out of the building and his powers stripped today.


The sister has been making these accusations for months and nothing happened. What changed today?


Still wondering if I could have jumped the gun; I did NOT know these were standing accusations. A couple of things though:

- How he behaved during the investigation. Something could come to light on this matter.

- Oftentimes what you hear is only the most rock-solid stuff; we don't know what kind of rumors are circulating

- It just happens this way. Do you remember Milo? I listened to him on Joe Rogan say the exact same shit that was "discovered" some time later. This wouldn't be a new thing.

I will say I've seen stories circulating about fighting within the board. The specific way this was done just screams panic firing to get him out of the building. This is when people are made to disappear; I saw it during covid.

You would think almost any dispute would be handled with a long drawn out press blitz, transitioning, etc.


> Still wondering if I could have jumped the gun

Hmm ya think?

This is more and more, in the light of the next day, looking like a disagreement about company direction turned sloppy boardroom coup. Corporate shenanigans.

I can see why people looking for some explanation quickly reached for it, but the sister angle never made any sense. At least where that story stands right now.


Note that everything below is speculation. I am merely trying to suggest a hypothesis which would answer the question of how the Annie Altman allegations could have led to this outcome. FWIW I think it's better speculation than some of the 'he did a side deal with MS' stuff above.

It seems basically impossible for OpenAI to have proved the validity of Annie Altman's claims about childhood sexual abuse. But they might have to take them seriously, especially once they were presented coherently on LessWrong.

If Sam had lied or misled the board about some aspect of his relationship with his sister, that would be a sacking offence. Eg he says "Annie's claims are completely untrue - I never abused her [maybe true or not, almost certainly unprovable], I never got her shadow banned from Instagram [by hypothesis true] and I never told her I could get her banned [untrue]." The board then engage a law firm or PI to check out the claims and they come up with a text message clearly establishing that he threatened to pull strings and get her banned. He lied to the board regarding an investigation into his good character so he's gone. And the board have the external investigator's stamp on the fact that he lied so they can cover their own ass.

Why would he tell a lie like this? Because whatever the truth of the allegations, he's arrogant and didn't take them as seriously as he should have. He mistakenly thought he could be dismissive and it wouldn't come back to bite him.

This seems consistent with the way things played out. (Note again: I'm just trying to come up with something consistent. I have no idea if this is at all accurate or the whole affair is about something completely different.) They don't have to worry about keeping him on as an advisor to cover up scandal. They can clearly state that he lied in an important matter. But they don't say what it's about - because they still have no idea whether the original allegations are true or not. They are not going to put themselves in a situation of saying "and he probably molested his sister". They wouldn't even say "it is related to abuse allegations made by a family member", which implies there might be evidence to the original allegations, and is probably defamatory. And he comes out saying that something unfair has happened, without giving any context, because he knows that even mentioning the allegations is going to lead to "but didn't he molest his sister" type comments, for the rest of time.

It's also consistent with the timing. They aren't just going to hear the Annie allegations and sack him. It takes time to look into these things. But within 6 weeks of it becoming an issue, they might be able to identify that he's either lied previously to the board about the gravity of this issue, lied during the current investigation, or something he's said publicly is clearly dishonest.


I'm virtually 100% sure he did it after just looking into it today so I can see exactly what you're saying about people backflipping out of cars and stuff to get away from it.


Yeah, idk news this morning seems to point otherwise. I would still be very careful with this dude. It really felt like a shoe was going to drop.

This seemed like a REALLY negative dismissal.


I have no knowledge of why, but it seems it's always about the money or the power/control. I cannot wait to see what it is...


I agree this is the most likely explanation. Is it possible Sam tried to wrest power away from the board? He wouldn't even need to sell the whole company, just enough tech for a large company to kill OpenAI.


Or, not commercial, but military/gov.


Yeah. OpenAI is valuable not just commercially, but to the world's governments, some of which can probably find ways to force out leadership they don't get along with.


Your comment is reasonable. Perhaps hitting close to the truth considering the downvotes. Everything in this thread is speculation.


Prediction: board resignation.

Sam Altman returns.


Prediction:

1/ Sam goes on to create NeXTAI and starts wearing mostly turtleneck sweaters and jeans

2/ OpenAI buys NeXTAI

3/ OpenAI board appoints Sam Altman as Interim CEO


You mean like Steve Jobs and Apple?


Love it.


Wild take, but that would sure be a sight to see. I'm not speculating at the moment, since I know nothing about the situation.


Turn that on its head - was he standing in the way of a commercial sale or agreement with Microsoft?

He may not be the villain.

But who knows, it feels like an episode of Silicon Valley!


If you look at who is on the board, how it's structured (they don't have equity right?), it seems like it's actually because he violated the charter. Why would Ilya Sutskever punish Sam for doing the right thing wrt AI safety?


They are in a strange position.

They had an open ethos and then went quasi-closed and for-profit, and then a behemoth has bet the family jewels on their products.

Harping on about the dangers of those products does not help the share price!

My money is on a power play at the top tables.

Embrace, extend, and exterminate.

Playbook!


Quasi-closed is an understatement. You could almost sue them for false advertising.


He will be ok!

Either a position in Microsoft or a new start-up.

Or both.

What does it mean for OpenAI though? That’s a limb sawn off for sure.


Certainly they could have fired him without Ilya's vote.


How? Per the blog post: "OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner." That's 4 directors after the steps taken today. Sam Altman and Greg Brockman both left the board as a result of the action. That means there were 6 directors previously. That means a majority of 4 directors. Assuming Sam & Greg voted against being pushed out, Ilya would have needed to vote with the other directors for the vote to succeed.

Edit: It occurs to me that possibly only the independent directors were permitted to vote on this. It's also possible Ilya recused himself, although the consequences of that would be obvious. Unfortunately I can't find the governing documents of OpenAI, Inc. anywhere to assess what is required.


Sam might have abstained from voting on his own ousting, since he had a conflict of interest.


Yes, true.


It makes no sense to suggest that three external directors would vote out a CEO and the Chairman against the chief scientist/founder/principal's wishes.


That’s the practical argument, and also seems to be true based on the news that came out late last night.


Use research and AI to analyze Sutskever's character: the way he talks, the way he writes, what he did in the past, where he studied, who he was and is "acquainted with"... Do the same with the rest of the board and with Altman as well.

someone hire some PIs so we can get a clear and full picture, please & thank you


Tech investigative reporters are probably on it, just wait a week or two.



This was my first thought after seeing a clip of Sam and Satya during OpenAI's DevDay. I wonder if he was standing in the way of a Microsoft acquisition, and Microsoft has just forced in those who would allow the purchase to happen?

I don't know, so much wild speculation all over the place, it's all just very interesting.


They are betting so much on OpenAI just now.

They need to be so much more than a partner.

Being open is not in their nature.

Sadly it is usually the demise of innovation when they get their hook in.


I can do anything I want with her - Silicon Valley S5:

https://www.youtube.com/watch?v=29MPk85tMhc

>That guy definitely fucks that robot, right?

That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.

https://en.wikipedia.org/wiki/Ben_Goertzel

https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...

>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:

VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:

https://www.youtube.com/watch?v=AKbltBLaFeI


Who owns the equity and who has dedicated seats on the board? Altman can easily just boot the board if he gets majority of the equity to back him.


How would that work? Does a non profit have shares?


Eric Schmidt calling Sam a hero also makes me think it isn't a personal scandal.


I'm pretty confident it's not a personal scandal (or if it is, it's one that is truly monumental and that there hasn't even been a rumor of).

If it was a personal scandal, the messaging around his dismissal would have been very, very different. The messaging they gave makes it clear that whatever dirty deed he did, he did it to OpenAI itself.


Or to the public on behalf of OpenAI.


There were rumours apparently, but they were suppressed intentionally and very effectively it appears.


What I was meaning to say in my comment was that the rumors and accusations relating to his sister, even if entirely true, are not monumental enough of a personal scandal to explain how the board behaved while firing him.

They'd probably still fire him, but would have done so in a very different way.


This doesn’t make sense to me. It assumes Schmidt has inside info on why Altman was fired.


Of course he has. Why would he risk political capital by defending sama publicly if he didn’t know for sure he wouldn’t get burned by defending him?


Maybe because it's not risking very much political capital. If he's wrong, and shuts up (rather than doubling down on it), nobody will remember what he said in two weeks.

Hell, some prominent tech people are often loudly wrong, and loudly double down on their wrong-ness, and still end up losing very little of their political capital in the long run.

Or maybe he's right. We don't know, we're all just reading tea leaves.


Schmidt knows nearly everyone. He will know the details.


If he didn't know he wouldn't say anything. Why risk saying something when there's a very high chance that it's something like sexual misconduct or fraud.


Not sure Eric Schmidt is exactly a great judge of character. If anything that is an anti-signal


In the world that both Messrs. Schmidt and Altman operate in, liability is a vastly stronger incentive than character.


Isn’t Schmidt the guy who’s dating a CEO of a company he made an investment in?

That _should_, in a system of corporate governance that isn’t an absolute joke, expose him to significant liability.

Or am I thinking of another NorCal cretin that will never suffer a real consequence as long as he lives?


Yeah, Schmidt is whatever is on the opposite end of the spectrum from "mature" and "ethical."

https://www.forbes.com/sites/davidjeans/2023/10/23/eric-schm...


[flagged]


That's a pretty wild accusation that I feel like needs some substantiating.


Best I could find was this:

https://www.dailymail.co.uk/news/article-2377785/Google-CEO-...

But:

- 15 million not 30.

- The women he had affairs with don't seem to have been prostitutes, and were a fair bit older than 18 too.

- His wife knew about it all and was apparently OK with it.

- Penthouses are pretty much the opposite of bunkers if you think about it.


The cost is probably an exaggeration but this is not some secret: https://gothamist.com/news/google-boss-has-amazing-15-millio...


Parent commenter is wildly embellishing/lying, but some of the details of Schmidt's open marriage have been widely reported: https://www.mic.com/articles/56553/eric-schmidt-has-15-milli...


Given that the entire ~accusation~ actionable libel seems to be a bizarre game of telephone derived from a 2013 news item in which Eric Schmidt paid roughly half that amount for a NY penthouse, specifically one muckraker’s tabloid variant that vaguely connected Schmidt to two very definitely not 18 also very definitely not prostitutes, and that the only detail that appears remotely correct is that he did indeed have the property soundproofed, I very much doubt you’ll get that substantiation.


That’s not true. The sex bunker only cost $15M: https://gothamist.com/news/google-boss-has-amazing-15-millio...

And it wouldn’t be a harem if they’re prostitutes.


Woah, any link to support this?


Everybody's a comedian on Friday.


Someone probably already suggested this, but I haven’t seen it yet, so I’ll throw a wild speculation into the mix:

I saw a comment (that I can’t find now) wondering if Sam might have been fired for copyright reasons. Pretty much all the big corpuses that are used in LLM training contain copyrighted material, but that’s not a surprise and I really don’t think they’d kick him out over that. But what if he had a team of people deliberately adding a ton of copyrighted material - books, movies, etc - to the training data for ChatGPT? It feels like it might fit the shape of the situation.


GPT-3 had "books1" and "books2" among its training material and "books2" never had its actual source disclosed: https://arxiv.org/pdf/2005.14165.pdf

Speculations about these source materials can be traced back as far as 2020: https://twitter.com/theshawwn/status/1320282152689336320

I don't think this issue would've flown under the radar for so long, especially with the implication that Ilya sided with the rest of the board to vote against Sam and Greg.


books2 = libgen imo


That matches with their extreme hurry to get rid of Sam, but it seems like this would be something the CTO would have had knowledge of and she seems to be trusted.


Also, it isn't uniquely attributable to Sam. They all use copyrighted material for training data. By "all", I mean all LLMs (to my knowledge). They don't do it intentionally, but it gets scooped up with everything else.

Hmmm, just thinking... Adam D'Angelo is one of the board members of OpenAI. He has the entire corpus of Quora content to use as training data, i.e. the rights to it are his. But I doubt that only Quora content was used by OpenAI during the past 8 years or so since it was founded! And the content on Quora isn't that great anyway...


Honestly, it feels like OpenAI is not taking the copyright trolls* very seriously for this to be the case. I don't think the US has the luxury to set this dangerous AI precedent.

* You can disagree but no copyright lawsuit by mega corporations is doing it for the good of the law framework. They just want money.


I have a strong gut feeling they're about to mess things up.


I am almost certain they already did. This is pretty bad!


Maybe they'll sell to Broadcom.


I have this exact same feeling.


Hopefully they don’t


I fear the same.


Already dropped Bing integration in the iOS ChatGPT app


It is integrated into main ChatGPT 4 now, just tell it to “search for bla”


Nah, probably some toxic sex stuff. I can't think of any business reason for Sam to be ousted. Bribes, losing money, lies to the board? All well and good as long as the company stays the leader in the absolute most groundbreaking tech in human history so far.


So this is the non profit board. Maybe he was trying to influence the board members to make OpenAI a for profit company and the board saw that as a traitor move.

The only other business thing I could think of is him moving OpenAI's money to one of his other ventures, or for some other personal gain.

I don't think spending too much money in openAI could get him fired like this in current environment. Neither does overpromising stuff or something.


I know I won't get a lot of love for this, but Sam is a really good person. I don't know him well, but I've known him since long before OpenAI.

He's not perfect, but behind the scenes he's a genuine and upstanding person. I've met lots of wealthy smart people, and he's the only exception. He was the only person I trusted in this situation, and I'm genuinely nervous that he's no longer running OpenAI.


Update: Greg Brockman is also out, which makes me think it's more than just a Sam scandal.


The statement claims he is no longer board chair but will stay with the company. Do you have other info?


They're clearly related. He went from Chairman to "reporting to the CEO", meaning he either stepped down in protest or was also fired.

He won't be there in 6 months; this is just a crumb of continuity.


Could also be pending investigation



> I know I won't get a lot of love for this, but Sam is a really good person. I don't know him well, but I've known him since long before OpenAI.

"Good" is too blurry of a description, and I don't know Sam, but one thing I've learned (the hard way) is that you don't truly know someone unless you've had conflicts of interest with them and found mutually satisfying resolutions to them. If all you've had is mutually beneficial interactions, then of course everyone's going to be nice - it's in everyone's interests. You need to see how they act on nontrivial conflicts (either handling present ones, or mitigating/averting future ones) to really know if someone is a genuinely good person or not.

While this could hypothetically happen within an hour of meeting someone, it's more likely to take years or even decades... or might never even happen.


Ah yes—as the saying goes: “keep your friends at the Bayes-optimal distance corresponding to your level of confidence in their out-of-distribution behavior, and your enemies closer”


This is too brilliant to be tucked away in the depths of a HN thread


This is gold.


Thanks for making my day.


how do I save a HN comment ? Someone give this person a medal!


Click the time it was posted (e.g. “2 hours ago”) and then “favorite”


Needs to be on a t-shirt lol


> you don't truly know someone unless you've had conflicts of interest with them

This hits a spot. I had a really nice boss.. Until we got into a conflict, then she tried to blackmail me, pressure me and break me. I learned why some people who left our company needed months to get back on their feet. I got out quite well and managed to push back, but it was a tough period.


I learned this playing video games. After a while, I figured out that if I liked someone on my team, I should only friend them if we were losing. Otherwise, I might only like them in the 50% of games we win.


This is so true - and thank you for the very important reminder!

As I interview for new roles, it's a timely lesson, suggesting how to test what a new employer is -really- like.


I could not have said that any better.

I had a feeling the man was a bit of a con, though of course I won't say I know for sure. But some of his actions give it away, like his notorious eye-scanning crypto project, or the fact that he was 100% in support of UBI and wanted to advocate for it, only to go to different governments wanting regulations (that only benefitted them).

People really, really need to pay attention to their actions, not their words, jeezus. We'll have another rogue Elon Musk, who was once idol-worshipped as the incredibly "brilliant" man... it turned out he does some stupid things too, only now he's amassed billions of dollars he can use to pay his way out of them.

People never learn. Stop idolizing businessmen.


Exceptionally well stated. This agrees with my experience as well.


I first heard of him through the WorldCoin stuff, and nothing about that made him look like an upstanding person. That whole thing was/is shady as hell.

I certainly don't know him, but I see more reasons not to trust him than to trust him.


I'd never heard of that, but that definitely sounds shady. Thanks for mentioning it. To save people a search: https://en.wikipedia.org/wiki/Worldcoin


Why would someone running OpenAI possibly be involved in something so unnecessarily speculative and unrelated?

I ask that question leaving out any of the socio-economic and privacy concerns around that project.


He started WorldCoin before OpenAI came around.


The Wikipedia page says Worldcoin was founded in 2019?


What's so shady about it?


I should have said "sketchy" instead of "shady", as "shady" implies a criminality I don't mean to imply.

What's sketchy about it is that they were offering money to largely poor and technologically naive populations in exchange for gathering very sensitive biometric data from them. Identifying data that cannot be changed and can be used to track them without their knowledge. Their stated goal is to have every person on Earth take part, to boot. The whole idea couldn't set off more alarm bells if it tried.

I give them style points for using truly dystopian-looking "orbs" to do it, though.


I have zero knowledge of Sam Altman in any shape or form and literally the only thing I know about him is that he runs (or well, ran) OpenAI.

But as a general point, you can be both a "good person" and still do bad things. Or you can be a good person in some areas, and a not-so-good person (or even horrible person) in some other areas. People are complex.

Of course it's entirely possible that Altman is just a really good person, but I wouldn't be quick to make assumptions.


Unless Sam has managed to fool a bucket load of smart people, your prediction is very unlikely to be true (or rather, I don't want it to be true). Fuck.


Not intending to attack you here, but it's important to remember that smart people can get fooled as easily as anyone else.

"Smart" does not mean "hard to fool;" they are different characteristics.

You can fool someone if you have important information that they don't have--even if they are extremely smart.


It's not a prediction; it's a general comment that one shouldn't assume too much based on a limited number of datapoints, in this case someone who doesn't "know him well".

This works in two directions, by the way. In 2001 few would have expected that Bill Gates would spend much of his time on philanthropy. Is he a "good" or "bad" person? Well, he's both.


He may not be fooling anyone. As someone else noted, if his interests and yours align you may be willing to look past his "badness". For example, Miles Bridges in the NBA. Seems like a genuinely bad guy who just got reactivated by an NBA team -- why? Probably because he can help them win games. I can almost guarantee no member of the front office would let their daughter date him, but they don't need him to be good for him to make them money.


See, SBF


New rule on hiring a tech leader, don't be named Sam.


Corporate malfeasance is not exclusive to tech and neither are collections of incredibly intelligent people.


Fooling someone, even a smart person, is not that hard. It's just low-key.


This sounds so naive, maybe google Worldcoin?

A person I've known all my life, whom I would have sworn I could trust with anything, was found out to have violated extremely young children, among other things.

Stop pretending you know people, people don't even know themselves.


I know Sam even less, but when I first moved to the Valley a decade ago he went out of his way to help. I wanted to try out a crazy startup idea on a student visa with limited connections in the Valley - he loved what I was doing and went above and beyond to help me out.

It forever tuned me in to the ethos of Silicon Valley. And I have tried paying back where I can.


If that ethos doesn't involve illegally overstaying student visas like Musk or burning millions of dollars then have you really truly embraced the SV lifestyle?


I think you misunderstood; he didn't help me overstay my student visa or anything like that.

I was in a Ph.D. program at a top CS school and there are ways to transition your visa when building a startup. It was that I was not sure if the transition or the startup would work out - that startup did not - but years later another one did.

I would probably not have taken the plunge out of academia and not achieved much else had it not been for him. And I am deeply grateful for that.


I was just making a joke about how Silicon Valley, to some people on this planet, seems like an awful place that ignores some awful behavior when convenient. Embracing the "ethos" of such a place seems funny to me.

Never accused you of trying to illegally stay in the country or some such - I just referenced a famous South African Valley-man with a musky scent who did that.


His sister would disagree with you.


She's voiced her allegations for years. Has something been brought up recently? Is she credible?


More credible than $random_hn_guy


That’s an invalid argument. You’re switching from questioning one premise to comparing two premises.


Is she not credible?


Estranged family members of celebrities who need money normally aren't considered very credible.

I have no good way of assessing what the likelihood is that her claims are true.


My point with this comment was not necessarily that she is credible, but that the previous comment just assumed that she wasn't, seemingly without any consideration that she might be. This is the sort of attitude that perpetuates these sorts of accusations not being taken seriously.

We don't have to take everything potential victims say as fact and immediately act on it, but remaining open to the idea that those in power may be at fault, and not disregarding victims accusations goes a long way in supporting them.


What caught my eye was her claim about events when she was 4 years old. Just doesn't seem old enough for the memories and their interpretations to be reliable. I had 2 close encounters with UFOs when I was 7-8 years old. Very vivid memories which I believed were 100% true until my thirties.


The claim that you cannot have such memories before a certain age is completely false. Although unreliable, children who have certain experiences can remember them.

I would go as far as to say we completely underestimate what children understand and remember because we're the ones that chose to forget.


A few days ago I thought I had uncovered a really bad memory from my childhood that I had buried. I realized a few minutes later that at one point I had dreamed the event and it never actually happened.

Conceivably the first part could happen without the second.

Frankly, I don't think we should ever give any credence to allegations that have absolutely no evidence. The road is too rocky. Most people have at least one person that dislikes them. The people that kind of attitude would affect generally have many. All it takes is one person willing to lie.


What changed in your 30s?


What kind of encounters? What does close mean? As in…you saw one flying or one landed near you? Or you boarded it?


It seems unlikely that Altman arranged for her to be "shadowbanned" from every social media site except for OnlyFans and Pornhub(!), or that he has been engaged in hacking her WiFi.

If you actually look at the totality of her claims, there are probably more reasons to be skeptical than to accept her words unconditionally. About the only thing you can say for sure is that something really unfortunate is either happening to her now, or has happened in her past, for which she is unlikely to bear all the responsibility.


OnlyFans and Pornhub are probably the only 2 "mainstream", "social media" companies that aren't really part of the SV tech scene.


Well, I'd never heard of this, and I'm not a fan of the guy, but from my quick perusal online just now of the noise of this drama, I'd say no, she's not credible.


This was my reaction as well. If a competent doctor had prescribed her medication and she took herself off of it without medical supervision, I think it is pretty reasonable to condition money on going back on medication (or at least going back to medical supervision). Many people who have family members struggling with mental health concerns take this approach.


The New Yorker is very progressive, and they worked for months on the article, yet they only mentioned his sister's accusations in passing and didn't highlight them.


Isn't she not incredible?


[flagged]


I used to say the same thing before I got in the business of investigating [all victims'] claims.

Social media abuse claims are the female equivalent of SWATing. One well-published sob story sends a mob to kick down your door.

Don't be this naive. For your own sake, only consider believing such claims once a police report has been filed. Don't rush to judgment either way unless there are repercussions when the claimant is lying about it.


I suspect that commenter is sarcastically mocking this: https://en.wikipedia.org/wiki/Believe_women Specifically the controversial "believe all women" phrasing.


I hear what you're saying, AND ... if you have a couple of hours, review some daily summary videos of the recent Johnny Depp-Amber Heard trial.

Coming from an "I believe the woman" background myself, I was pretty shocked.


I say "listen to women" and "take women seriously"


The problem was that reports from women were sometimes being downplayed or dismissed. The solution is to stop doing that.


That doesn't seem like a very smart strategy ahah


6% of rape allegations are proven to be false, so you should absolutely not do that.[1] Take the claims seriously, but don't assume guilt automatically; everyone deserves due process.

[1] https://www.brown.edu/campus-life/health/services/promotion/...


Including Amber Heard?


how do you know he's a good person if you don't know him well?


Because you can know someone non-well and think they're a really good person. It's not strong evidence, but it's not nothing.


You can know someone is a bad person from casual interaction, but not vice versa. There's basically no way to know if anyone intelligent is a good person without extremely intense surveillance. I guess with an unintelligent person, you can assume that they're not smart enough to hide if they're doing something really bad, but even then, maybe they're just playing dumb.


I don’t know Sam Altman well but I do not think he’s a particularly good person, so there’s some counter-evidence.

Personally I welcome this shake up. Some of the things I’ve seen Altman write about are troubling.


I've had a positive opinion of sama as a human ever since this comment about him living with his two brothers well into their 30s: https://news.ycombinator.com/item?id=12592010

It's a corollary to my theory that anybody that maintains close ties with their family and lives with them is a wholesome person.


You've got to be kidding. Really, are you kidding? That's an extremely weak litmus test for goodness.

Plenty of people maintain extremely close connections with their families while engaging in activities that are terrible for the world around them. Organized criminals. Terrorists. Business magnates. Political families. Corrupt police officers. Plenty of these groups are made out of tight-knit families.

It's common, dare I say human nature, to prioritize the needs of your family. That is honorable and important, but being a Good person requires caring about strangers too.


SBF seems close to his family, too...


I think you completely missed the part about living with your siblings into your 30s.

With the exception of the brothers in the mafia or brother terrorists, none of your examples would meet that standard.

Being close with your family does not mean you’re not a good person elsewhere. It does not mean you don’t care about strangers. That you’d jump to that conclusion or bring up terrorists as a counter example makes me question your own personal experiences.

All else being equal, I’d expect someone with close family bonds to the point of living with them as an adult, when they clearly have the economic means to do otherwise, as a sign of a good human. That’s been my personal experience and that’s how I see the world.


Do you know a lot of married people who live with their siblings? Because I would think it is mainly a sign of relationship status.


> anybody that maintains close ties with their family and lives with them is a wholesome person

Alternative possibility: the family's a cult.


You know, minus sexually abusing his sister.


You should see what his sister writes about him...


Some might say a little too close.


I second this. As someone who's operated in the startup and YC ecosystems for years, I've seen Sam help a lot of people with no clear upside for himself. He's a net plus to Silicon Valley and SF by a long shot.


Isn't this an upside? A lot of fraudulent people are very nice and helpful to everyone around them.


Yeah, well, if you're nice and helpful to enough people eventually you cross a threshold to just being a good guy, at which point, the deranged behavior of a couple poorly qualified board members ceases to matter.


I do believe you are being genuine here, but good people still sometimes do bad things. Good people still have their blind spots, and the negative consequences of those blind spots are often exacerbated and have outsized (negative) impact on others when the person behind them is wealthy.

I've never met the man, but I can say I have not been impressed by his words and attitude in public. I never got the sense or feeling that he's actually doing right by the world.

Ultimately it doesn't matter if he's a good or bad person; what matters is what he's done.


I agree. Have not heard a single concern about ethics in business ever raised about him before.



The OpenAI x Dota 2 stuff was a bit shady. They really wanted the crown of beating human players at one of the most complex real-time games, but to do so they had to drastically simplify the game rules (removing most heroes, changing courier mechanics).

It would be like if AlphaGo could only win if the Go board was half as big. Not real fraud, but shows a clear willingness to cut corners and stretch ethics.


I was following their Dota project from the beginning, and I think what they did made sense. Their goal there was to demonstrate real time decision making and teamwork in a complex environment without perfect information. OpenAI Five did a splendid job of doing that.

I would view it as the equivalent of making self-driving cars that are constrained to a single city. Sure, it doesn't have the full capability that a human would have. But who cares.


That's usually how things work. There aren't any complaints or concerns, until there are. Obviously.


Not really, no. There will usually be allegations of malfeasance floating around a person for a while before any of them are brought to the wider public. To a complete outsider, it looks like these things come out of nowhere, but to someone who's relatively close to the person it seems like it was building up for years. I've also noticed in cases of false accusations that there will often be a number of other accusations made shortly after, all of which look relatively weak or unimportant; eg someone accused of sexual harassment will separately be accused of making a sexual advance and then backing off when turned down. By evaluating the sorts of other allegations about a person when some accusation is made against them, we can attempt to guess the legitimacy of those allegations collectively.


> Not really, no. There will usually be allegations of malfeasance floating around a person for a while before any of them are brought to the wider public.

You mean, exactly like there been, from Sam Altman's sister?


None of that really matters. Look at Elon Musk, lots of weird spectacle. The man was lauded as one of the smartest men in the world...now he's a bit of a loose cannon. People need to stop idol-worshipping businessmen. They have a large motivation to make themselves into this lovable, charismatic person with no faults because it is very profitable to do so. Worse is when people buy into that.


Actually no, it often is not how it works. For example, Harvey Weinstein's behavior had been an "open secret" in certain circles way before the scandal exploded. Epstein was known to be super shady way before he found his end in prison. Anthony "Carlos Danger" Weiner was known for his exploits well before he was finally prosecuted. There are, of course, opposite cases, where certain sociopaths meticulously cultivate their benign image and hide their true colors. But often, the true colors are known, if not widely, then at least by many people surrounding them. For a reasonably public person, it would be quite hard to lead a double life for a long time without anybody at all knowing.


Well, according to his sister, he used to molest her when he was 13, and she was 4, so...

https://twitter.com/phuckfilosophy/status/163570439893983232...


Paul Graham saw this guy and immediately went “pattern matches my ideal genius”. My goodness.


Organizations are systems, not people, if he put into place the right incentive structure then the company will go in a good direction with or without him. Arguably the structure is now set in stone with his departure.


Didn’t I just read a post about him abusing his sister? It seems impossible to judge people you don’t know well personally, and even then sometimes you can be surprisingly wrong.


Sure, "good person" may sound generic. But he is still a good person trying to do the right things. To me it sounds like the board is afraid of being sued and needs to clearly appoint a scapegoat.


The world is not binary - least of all human behavior, which for the overwhelming majority is more accurately described as shades of gray.


I don't want my eyeballs scanned though


Worldcoin?


He is the guy that built a bunker in New Zealand and has a ready-made plan to escape on a motorcycle with his escape bag filled with guns, money, and supplies when things collapse, right? (At least I think he's that guy.) Is that normal person behavior?


Wasn't that Peter Thiel? Or did Sam do the same thing too?


Altman is a rich prepper who talks about it, like Thiel. He claimed his bug-out base is in Big Sur, not in NZ as far as I'm aware.


I vote no because New Zealand seems like a poor choice for possessing arms in a bunker.


If you have money and connections, the laws of the plebs are no longer relevant. You essentially have a right to keep and bear arms anywhere on Earth if you're rich enough.


until locals with more guns and much deeper trust affiliations with other locals decide your money is best spent as they see fit.


Back when this plan was publicized, you could legally own an AR-15 in New Zealand without much trouble.

It's still the case for bolt- and lever-action rifles and similar stuff.


Would there be a better place or strategy?


All of Appalachia would be a better place. Anti-development, much more wildlife, not really an invasion target.


I didn't say he was normal. He's clearly not (normal people don't start OpenAI). That doesn't preclude him for being a thoughtful person who wants the best for the world around him.


By scanning eyeballs and doing who knows what with that data? Idk


If I had loads of money I would absolutely do the same.

It's insurance. For someone with an average income, it's not worth the cost of the .01% chance you'll need it. For someone with more money than they know what to do with, it's worth it.


> Is that normal person behavior?

Normal people suck and are generally dumb as a brick (including me). Normal people don't extrapolate calamities and don't think ten steps ahead.


Is that normal person behavior?

Other than the part about having enough money to build a bunker in New Zealand, I'd say "yes".


> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

Ouch -- were there any signs this was coming?


Yes I think it will be obvious to people in hindsight that Sam is a deeply disturbed and dangerous individual, and so are many of his associates. You can tell just by listening to his public statements that the guy is off his rocker.


Nope


At least nothing public until just now with this development


Can't say I saw this coming. This is deeply sad to me. OpenAI did so much life changing work so quickly. It has totally changed my life in terms of enabling an absolutely unprecedented increase in my own personal productivity and how ambitious I can be with my projects. And even though I know Sam didn't personally code the key things, I believe that it never would have happened the way it did without his instincts and talents. And I fear that without him at the helm, all the magic is going to quickly dissipate and OpenAI will be just another lumbering tech company without a rudder. The default state of things is stasis and dysfunction. Just look at how Google can't do anything anymore, and how bad Siri and Alexa are-- all despite having so much manpower, money, and market share at their disposal.

I also find it maddening how boards of directors rush to insulate themselves from any possible issue and are so quick to throw overboard the very people who enabled the success that they get to participate in. I'm thinking particularly of Travis at Uber and how he was thrown out of the thing that he built from scratch, which never would have worked without his extreme efforts. If I were on the OpenAI board, the bar for firing Sam would be so ridiculously high that he would have to have done something so outrageous, so illegal, etc., that I struggle to believe what he actually did could even remotely approach that standard.


>I also find it maddening how boards of directors rush to insulate themselves from any possible issue and are so quick to throw overboard the very people who enabled the success that they get to participate in. I'm thinking particularly of Travis at Uber and how he was thrown out of the thing that he built from scratch...

This is an egregiously generous way of framing Travis Kalanick, and what happened to him.


Look at all his other leadership jobs. The only other company of note is YCombinator, which was running well before and after his time there. So what makes him so great? Seems more likely that he was not as essential to OpenAI’s success as you give him credit for. Honestly, giving Microsoft a 49% stake for $10B seemed really short-sighted to me and will hamstring the company in the future.


Ilya Sutskever has been running the show (technologically) at OpenAI since the beginning, and he is going nowhere. I trust his judgement in voting to get rid of Sam Altman over anything else.


Isn't it possible Sutskever and Brockman voted with Altman, but since Altman had to recuse himself they lost 3-2?


Greg has now publicly announced that he quit. What would motivate Ilya to stay if he didn't agree with the rest of the board? If they are still a 3-person team, they could announce a different company tomorrow and have the entire technical staff jump ship.


That's exactly what they should do.


Only if Ilya also quits, but for now there's only radio silence from him.


Especially given how people regarded AI when OpenAI started. This was right after the chat bot boom when nobody could get a decent text response right and trying for AGI was seen as a waste of time. People were laughing at Sam. To his face. And he kept going. OpenAI has changed the trajectory of humanity and consciousness. OpenAI needs to explain what is going on immediately.


I cannot even begin to understand what makes you think that this technology arose from Sam Altman and not from all the other people working there. By saying you doubt they can do anything without him, you're putting one person on a pedestal and giving them all the credit for this. This is the same phenomenon as happens with Elon Musk getting all the credit for his tech companies.


The CEO is the one who recruits that talent, raises money to make sure they are paid and retained, and creates the vision and culture that get them to work together effectively. Talent is orders of magnitude less important than culture. Bad culture will squander talent. Good culture will elevate everyone to their best level of effectiveness and pull talent from a team.


It's not just raw technology. It's a vision for what the product should be, what overall strategy to take, how to fund it, how to introduce it to the world, how to scale it, what order to do things in, what approach to take with 3rd party developers, when things are good enough to launch, who to hire, who to promote, etc. There are a million little decisions that go into a runaway success like this. And a million opportunities to make the slightly sub-optimal or wrong decision. And it doesn't take many of those to kill everything that made the place special, and that's actually my new base case for OpenAI-- that's the base case for any company/organization. The default state is chaos and entropy, and it's a miracle when you can escape that fate for even a few years of hypergrowth.


And what past accomplishments from Sam Altman led you to believe that it's him bringing in the magic and vision? This really isn't someone with a stunning track record of bringing incredible products to market.


> It's a vision for what the product should be

Have you actually used it?

It's the most basic possible website, API, and app for a language model AI. It's barely functional. For example, the iOS app couldn't even display images until the other day.

> There are a million little decisions that go into a runaway success like this.

I agree that GPT-4 is amazing and probably the best. But there are several other competing implementations of language model AIs right now, some are even developed as open source.


> all the other people working there

Or, the people working at Google.


what a bizarre pov about sam altman.

what accomplishment of his has impressed you so much? his entire career has been:

1. start a dumb startup that goes nowhere

2. get a job at YC handing out other people's money

3. a very dumb and creepy crypto currency project

4. be CEO of openai, which means having no direct influence on the product or engineering at all


I'm impressed by how good he is at failing upwards.


"...that I struggle to believe what he actually did could even remotely approach that standard."

Which is exactly why you need something like OpenAI to further your personal projects.

Those who don't would be qualified to be on the board.


Actually no, OpenAI first did chess, then Dota, then ChatGPT; it's years of effort and progression that didn't come overnight. It's probably you who first heard of them through ChatGPT.


The dude is just a businessman through and through. Stop idol-worshipping these businessmen. We'll have another rogue Elon Musk. At least Elon Musk is exposing himself now for what he is, but he has so much money at this point that it has no effect on him to do random weird nonsense. Sam Altman seemed quite slimy to me, with his actions: garnering support by talking about UBI and advocating for it, but then going to US Congress to ask for regulations (because it benefits himself).

The man was starting to seem like a huge con and people just seem to not see through that.


Greg Brockman sharing the timeline on Twitter: https://twitter.com/gdb/status/1725736242137182594?s=46&t=Nn...


Copy-pasted here for posterity:

Greg Brockman @gdb

Sam and I are shocked and saddened by what the board did today.

Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.

We too are still trying to figure out exactly what happened. Here is what we know:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.

10:42 PM · Nov 18, 2023

8.1M Views


Reading that thread made me realise how low the signal to noise ratio is over on twitter.

90% of the replies scrolling down were rehashed versions of "can't believe they used Google meet"


Try blocking all bluechecks. After doing so is the first time in like a decade that Twitter has had good content for me.

Before, braindead or cloutchasing bluechecks were mixed in with the rest of us rabble. Hard to pick them out of the pack, you had to read their detritus with the rest of the comments.

Now they voluntarily self identify, and even better, their comments are lumped at the top. So block them all with a bot or just scroll down until there's no more blue checks and the comment quality jumps exponentially.


And “can’t believe how shitty the Twitter replies are” is any better?


Seemed like a pretty productive conversation to me. As a non-twitter regular I now know how to make things more bearable in the future thanks to this discussion.


Monetization of "hot takes" and baiting, true example of enshittification


As both hint at "greater things" already on the horizon: maybe they were working on/for a competitor on the side and the board found out?


That's standard "You were too good for them anyways" post break-up speech


His timeline.


Eric Schmidt, former CEO of Google has this to say:

https://x.com/ericschmidt/status/1725625144519909648?s=20

Sam Altman is a hero of mine. He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible. Thank you @sama for all you have done for all of us.

Making such a statement before knowing what happened (or maybe he does know what happened) makes this seem like it might not be as bad as we think?


Eric Schmidt is also the person that said Google's old "don't be evil" slogan was the dumbest thing he'd ever heard. Given that there's apparent tension at OpenAI over non-profit vs for-profit goals, I'd not draw any particular conclusions from Schmidt's statement.


He was also against the creation of Chrome to the point where it had to be done in secret, according to him.

From a total outsider/uninformed pov, he really seems like a fail upward story.


But that was fair. Page wanted to do a browser super early, like years before the IPO, and Schmidt was brought in specifically to try and bring some focus to the company and balance out the founder's ambition with more "adult" execution. Yes eventually Chrome became a successful project and Schmidt wisely didn't pick a fight over it, but he wasn't wrong to insist the company try to avoid getting distracted with that so early in its life.


Again, I am an outsider and uninformed. But the established money printer was already going brrrrr...

I would imagine that it's arguable that a pheasant could have ridden that rocket to the moon.

My bias and stupidity may be showing here, but I just don't think that he is very smart. Maybe that was the point of his position: to keep the company from going beyond the imagination of Wall Street normies.


I was an insider at that time and I didn't disagree with that decision, especially as Google were funnelling money to Firefox and at that time Firefox was executing well and had a lot of support across the community. Part of why the money printer was going brrr was a relentless effort on optimizing it and growing it, which did benefit from the executives actually focussing on it. The idea it all just happened automatically just isn't so.


Ok, thanks for the insight!


lol! Um, do you remember who the main champion of Chrome was?


No


Big daddy sundar


And of course he gives credit to the CEO and not the 400 people under him who actually built the thing, nor the other 10 people who actually founded the company. Nor those who gave initial funding. From wikipedia:

> OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.


I'm of the firm opinion that the heavy lifting at OpenAI is done by the scientists, but of course CEOs like to pat themselves on the back for doing the "tough" job.


Lots of companies are funded every year. Those without solid leadership and clear mission are bound to fail.

Founding a company is also fairly easy (if you're in the US). In most US states, you just need to complete some paperwork, pay administrative fees, and you're good to go.

Founding something isn't tough. Leading through adversity and setbacks is.

Finally, if we're praising workers, what about those who came and went between 2015 and today? That probably pushes the number higher than 400 FTEs.


*Founding a company is also fairly easy (if you're in the US). In most US states, you just need to complete some paperwork, pay administrative fees, and you're good to go.*

This is true in the sense that being a CEO is also easy, you just fill out some paperwork that says you are CEO.

Are you saying the founders of OpenAI just filled out some paperwork and did nothing more?


> Lots of companies are funded every year. Those without solid leadership and clear mission are bound to fail.

IME companies with solid workers (in engineering and elsewhere) but weak leadership and mission have a much better chance than the converse. Even the best companies rarely end up following the mission or the leader.


No they don't. Leadership can make or break a company, that is why they are paid so much.


Leadership without talented employees can break a company. That's why they are paid so...


What adversity and setbacks did Sam personally overcome?


Tons of high profile people spoke like this about Adam Neumann or Elizabeth Holmes too


Not everyone that you don't like is a fraudster. Just say that you don't like Sam, no need to make an obviously absurd comparison. The reason those were bad CEOs was that they swindled investors and lied about what their corporations were doing. I have absolutely no opinion on Sam Altman (didn't know about him before OpenAI) btw; it's just that the comparison is completely nonsensical.

(It reminds me of comparing AI to crypto because both have hype behind them.)


> Not everyone that you don't like is a fraudster.

Sam Altman in particular has precedent, with Worldcoin, that should make you wary of defending him on that particular point.

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...


I like that you have no opinion about this guy that got fired for “not [being] consistently candid in his communications with the board” other than it is plainly obvious that he isn’t a liar.


No that's not my point. I'm not saying he's a liar or not, I'm saying that they are not similar regardless. He could be a liar without being a fraud. Openai is already not Theranos no matter what happens next.


This is a good point. There is no possible room for the general artificial intelligence company to disappoint investors or other stakeholders in a big way


Well with the info currently available, that’s just like, the board’s opinion, man.


They're not comparing them to those people, they're pointing out that (tech) celebrity endorsements don't mean much.


Why “obviously absurd”? They had spotless reputations, until they didn’t. So did the FTX guy, for instance. Just because you don’t understand, doesn’t mean something is nonsensical


> Not everyone that you don't like is a fraudster

Maybe not. Perhaps it seems that everyone you do like is


And Sam Bankman-Fried


Tons of high profile people spoke like that about a large number of individuals in the past. Here I think it's clear that OpenAI has indeed delivered something serious.


I'm starting to think what people say reflects their own thoughts about other people, not facts we should accept based on their net worth.


Oh please, you're going to put Altman together with those clowns? He has a proven record of extreme accomplishment, in various domains, moreso than 99.9999% of people in the tech industry.


>He has a proven record of extreme accomplishment, in various domains, moreso than 99.9999% of people in the tech industry.

I don't really see anything[1] that suggests that this sentence is true. Now, I'm not saying that he hasn't been successful, but there's "successful" and then there's your hyperbole.

[1]https://en.wikipedia.org/wiki/Sam_Altman


what did he do before open ai? (Which was founded by a lot more people than Sam). A genius in my opinion needs to have a track record of incredible things, key point being "track".


> what did he do before open ai?

Worldcoin. Which is, to put it mildly, not positive.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...


This type of comment doesn't really help.

And for my two cents, he always seemed like a disingenuous hype salesman more than a technical person.

He's an Elon Musk or a Lex Friedman.


I think it's logical in these scenarios if you don't know what happened to presume something forgivable and maintain that relationship (cynically, opportunity to invest), and if something truly unforgivable comes out post another tweet.


It's possible for our heroes to fall from grace. There's nothing wrong with Eric saying this without knowing the full story.


Your use of the word "our" is too liberally applied, he was no hero of mine. I believe history will have a very different view of Altman, "Open"AI, and AI in general.


Note I said "heroes", not "hero". I'm making a general statement about how the views people can have about others can evolve over time.


That’s the rhetorical “our”.


> rhetorical

Plural generic first-person possessive pronoun.


Is English not your first language? They were making a rhetorical argument about our heroes, whoever they may be.

It’s clear from context that they’re talking about how Sam is a hero of Eric’s and Sam has fallen which is a thing that can happen.


> They were making a rhetorical argument about our heroes, whoever they may be.

Yes? That's what "plural generic first-person possessive pronoun" means. I was agreeing with you, mate.


idk, it seems a lot smarter to me to keep one's mouth shut or express a general level of surprise without making a public endorsement until facts become clearer.


Why? If something bad comes out about Sam Altman, no one is going to criticise Eric Schmidt for not having foresight of it.


Because getting fired like this (both so abruptly, and with the unusually frank statement from OpenAI) suggests some sort of ethical issue. There's plenty of amicable divorces in the business world where teams just diverge on vision, goals, or methods, but they're the sort of thing people generally see coming. This HN post has collected thousands of votes precisely because it's such an unusual and surprising development, so it seems to me there's at least a 50% probability that it is something bad.

I don't expect Eric Schmidt to have general foresight about Sam Altman, but as a former CEO himself he must understand it's not a decision a board would make lightly.


Here is what's wrong about that statement: Sam Altman did not "build" OpenAI from nothing to $90bn. OpenAI raised $1bn from several investors and had top talent. Sam managed the company through that. But I am certain other people, given the same circumstances, would be able to do more or less the same.


Or maybe Eric Schmidt is worse than we think. ;-) (half joking)


He's mega rich. Doesn't matter what other people think about him at this point.


True. He's certainly wealthy enough that no amount of "additional" money to his name would change his life in any notable manner.


I think OpenAI built something amazing with ChatGPT, but building a company from nothing is a little bit different from being

> initially funded by Altman, Greg Brockman, Elon Musk, Jessica Livingston, Peter Thiel, Microsoft, Amazon Web Services, Infosys, and YC Research. When OpenAI launched in 2015, it had raised $1 billion. (Wikipedia)


This sounds celebratory to me. Bad news for OpenAI is good news for google.


Sam implied OpenAI had a major breakthrough a few weeks ago in a panel yesterday:

"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".

https://www.youtube.com/watch?v=ZFFvqRemDv8#t=13m22s

This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.

Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and the governance structure of OpenAI certainly wouldn't be ready for it either it seems.


On the contrary, the video you linked to is likely to be part of the lie that ousted Altman.

He's also said very recently that to get to AGI "we need another breakthrough" (source https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... )

To predicate a company so massive as OpenAI on a premise that you know to not be true seems like a big enough lie.


Fair enough, but having worked for an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could basically mean "we need XYZ relative to the previous product" or more specifically "we need another breakthrough than LLMs, and we recently made a major breakthrough unrelated to LLMs". I'm not saying that's the case but I don't think the signal-to-noise ratio in his answer is very high.

More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp

Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer, it's not his job to make the breakthrough, only to keep the lights on until they do if you take their mission literally.

Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?

As I said, I hope you're right, because the alternative is a lot scarier.


I think my point is different than what you're breaking down here.

The only way that OpenAI was able to sell MS and others on the 100x capped non-profit and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path, a perfect salesman.

But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid," that's my claim.

Speaking as a technologist actually building AGI up from animal levels following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered close enough to the edge of reality to count as lies.


Both factions in this appear publicly to see AGI as imminent, and mishandling its imminence to be an existential threat; the dispute appears to be about what to do about that imminence. If they didn't both see it as imminent, the dispute would probably be less intense.

This has something of the character of a doctrinal dispute among true believers in a millennial cult.


I totally agree.

They must be under so much crazy pressure at OpenAI that it indeed is like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.


Sam has been doing a pretty damn obvious charismatic cult leader thingy for quite a while now. The guy is dangerous as fuck and needs to be committed to an institution, not given any more money.


Why would they fire him because they are close to AGI? I get that they would go into full panic mode, but firing the CEO wouldn't make sense since OpenAI has AGI as an objective. The board wasn't exactly unaware of that.


You're right, I was imagining that he decided to hide the (full extent of?) the breakthrough to the board and do things covertly for some reason which could warrant firing him, but that's a pretty unlikely prior: why would he hide it from the board in the first place, given AGI is literally the board's mission? One reason might be that he wants to slow down this AGI progress until they've made more progress on safety and decided to hide it for that reason, and the board disagrees, but that sounds too much like a movie script to be real and very unlikely!

As I said, while I do have a mostly positive opinion of Sam Altman (I disagree with him on certain things, but I trust him a lot more than the vast majority of tech CEOs and politicians, and I'd rather he be in the room when true superhuman intelligence is created than them), I hope this has nothing to do with AGI and it's "just" a personal scandal.


Altman told people on reddit OpenAI had achieved AGI and then when they reacted in surprise said he was "just meming".

https://www.independent.co.uk/tech/chatgpt-ai-agi-sam-altman...

I don't really get "meme" culture but is that really how someone who believed their company is going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?


I think they fired him because they are _not_ close to AGI (no one is), but he lied to potential investors about how close they are.

That's against a popular sentiment about the upcoming "breakthrough", but also most probable given the characteristics of the approach they took.


No, we are not close to AGI. And AGIs can't leave machines yet, so humans will still be humans. This paranoia about a parroting machine is unwarranted.


You're right. AGI has been here since GPT-3 at the least.

It's honestly sad when people who have clearly not used GPT-4 call it a parroting machine. That is incredibly ignorant.


Let me know when GPT can even play chess without making invalid moves, then we can talk about how capable it is of logical thinking.


Let me know when you can prove that "logical" and "intelligent" were ever stored on the same shelf, much less that they are meaningfully equivalent. If anything, we know that making a general intelligence (the only natural example of it we know) emulate logic is crazily inefficient and susceptible to biases that are entirely non-existent (save for bugs) in much simpler (and more energy-efficient) implementations of said logic.


An AGI that can't even play a game of chess, a game that children learn to play, without making an invalid move doesn't really sound like an AGI.


Put the pieces together:

Nov 6 - OpenAI devday, with new features of build-your-own ChatGPT and more

Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]

Nov 9 - OpenAI experiences severe downtime the company attributes to a "DDoS" (not the correct term for 'excess usage') [3]

Nov 15 - OpenAI announce no new ChatGPT plus upgrades [1] but still allow regular signups (and still do)

Nov 17 - OpenAI fire Altman

Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.

This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]

Further extending theory - prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.

Further: CTO Mira / others internally concerned about launch etc. but overruled by CEO. Kicks issue up to board, hence their trust in her taking over as interim CEO.

edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.

[0] https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

[1] https://twitter.com/sama/status/1724626002595471740

[2] https://www.theverge.com/2023/3/21/23649806/chatgpt-chat-his...

[3] https://techcrunch.com/2023/11/09/openai-blames-ddos-attack-...


OpenAI’s board previously consisted of 6 people, incl Sam Altman and Greg Brockman. Two of them are more involved in technical matters at OpenAI than Sam. Now there are only four members on the board.

At least one of them must jointly make this decision with the three outside board members. I’d say it’s more likely to be business related. (In addition, the CTO is appointed as the interim CEO.) (Edit: But obviously we currently don’t really know. I think the whistleblower theory below is possible too.)

The announcement: https://openai.com/blog/openai-announces-leadership-transiti...

“OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner. …..

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.“

Previous members: https://openai.com/our-structure

“Our board OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.”


The tone used by OpenAI (their distrust of Sam Altman) tells me that they did not simply decide they need different leadership. The statement by the board seriously damages his career. Why else would they burn bridges and set themselves against him on ethical grounds? Or they are trying to blame and sack Altman as a scapegoat.


> The statement by the board seriously damages his career.

You misunderstand how these corporate situations work. He will fall upward to a better job someplace else if he chooses.

Adam Neumann, who started then destroyed WeWork, already raised $350 million from Andreessen Horowitz for another real estate company called Flow.


There’s a famous NFL quote from a former general manager of the Arizona Cardinals that goes, “If Hannibal Lecter ran a 4.3 (40-yard dash) we'd probably diagnose it as an eating disorder.”

I'll argue in this day and age, that any founder/C-level person who has "created" billions in value, no matter how much of a paper tiger it is, will almost always get another shot. If SBF or Elizabeth Holmes weren't physically in prison, I bet they'd be able to get investment for whatever their next idea is.


This comparison makes no sense. Hannibal Lecter would be helping the team by being able to run fast.

Neumann and Holmes and SBF lost their benefactors money.


The point of comparison in the analogy is "founder/C-level person who has "created" billions in value, no matter how much of a paper tiger it is."

The claim is that investors are interested in executives who they perceive to have created billions in value, and that's analogous to how NFL teams are interested in people who run fast.


Investors are not interested in executives that "create" billions, they are interested in executives that actually create billions.

NFL teams are interested in players that can actually run fast, not players who merely claim they can but turn out to be lying and cannot run fast, causing the team to lose.


> Investors are not interested in executives that “create” billions, they are interested in investors that create billions.

Investors are interested in people they can use to make money. The latter are easier to use, but the former will suffice. It just depends on when you sell.


Would you say the same thing for Enron execs? For Bernie Madoff?

I think the business of running a scam or a fraudulent company is quite different to an actual business.


20 years ago? No for either.

Now? Yes for Kenneth Lay (assuming he was still alive and/or not hiding on a desert island under a new identity if I put on my tin foil hat)... Madoff, probably not.


Why a yes for Kenneth Lay? Do you think the experience of running a scam is transferable to a real business? Or do you not consider enron a scam? Or do you think the line between scams and businesses is so blurred that the skill doing them is the same?


I would say yes for Madoff. Lots of people made a ton of money off of him for decades, and the losses were not as bad as originally thought.

There are bound to be a few people who have a soft spot for him and will give him money again.


Look at the comment two levels above about Adam Neumann.


Adam Neumann is not a good example. While he has proven good at raising money he has not been proven at running a business or even finding successful ones. My comment was exactly about that difference.


I completely agree with your assessment of Adam Neumann.

AND ... post the WeWork debacle, Neumann has once again succeeded in raising a massive investment.


A better job than CEO of a company that has a chance to be the dominant company of our generation? I doubt that.


His next CEO gig might come with equity...


> upward to a better job

Not a whole lot of up to go from CEO of OpenAI right now...


except founding a new startup with a bunch of top level players who recently left top level companies.


> Adam Neumann, who started then destroyed WeWork, already raised $350 million from Andreessen Horowitz for another real estate company called Flow.

Well, he did get a few billion dollars' worth of lessons on how not to run such a company, making him quite uniquely qualified for this position.


Adam also managed to get almost half a billy worth of money out from Softbank as a corporate loan for himself

https://finance.yahoo.com/news/softbank-takes-14b-hit-wework...

Adam is good at making people rich, but those people are not his investors.


I like how CEO performance has no null hypothesis


I assume he has trademarked the word "Flow" and is licensing it to the company for a few million dollars.


Fall upwards to a better job than CEO of OpenAI circa 2023? What job is that?


With a16z's crypto ventures, scams on top of scams are not surprising.


> You misunderstand how these corporate situations work. He will fall upward to a better job someplace else if he chooses.

I have no doubt that Altman is deeply embedded in the techbro good old boys network to get another job, but that doesn't change the fact his (now previous) employer released a blog post saying he LIED TO THE BOARD about something severe enough that they had to insta-sack him.


Lied to the board, and they _rushed_ to oust him.

No clear transition plan. In what situation would a board fire the CEO of the world's greatest tech sensation since who knows when, in a matter of hours?


Are you seriously comparing OpenAI to WeWork? I'm not particularly bullish on AI but you have to give OpenAI credit for what they've accomplished under Altman.


Comparing two things is not the same as saying they're the same in all respects.


He said they both involved failing upwards...

OpenAI is not a failure.


Nobody said that.


> He will fall upward to a better job someplace else if he chooses.

I take "fall upward" to be a typo of "fail upward".

The next sentence explicitly compares the situation to WeWork.

My interpretation is correct, it's a bizarre post, I'm done with this thread, have a good day.


The first word there, "he" will fall upward.

Not OpenAI will fall upward. Sam Altman is not OpenAI, especially after this latest announcement.

The next sentence compares him to the WeWork CEO.

It's not OpenAI is like WeWork. It's the disgraced CEO of OpenAI is like the disgraced CEO of WeWork.


> The statement by the board seriously damages his career

Yes: suggesting he was not as candid as necessary is business libel unless true.

And since Brockman was also booted, he may have been involved.

It's not clear what the Board was trying to do that he interfered with. There is no clear legal standard on what a CEO must divulge, and CEO's often get to wait to tell board members bad news until the whole board meets and the issue has been investigated.


Whatever it is, OpenAI needs to disclose the reason soon; otherwise speculation will undermine the whole AI market.

However big his transgressions may be, their actual impact is finite, while the speculation can be infinite.


Two predictions:

(1) Unless there is public litigation involved, OpenAI will not disclose the reason in substantial detail.

(2) It will not, more than momentarily, disrupt the whole AI market if they do not.

(If it is something that will disrupt the whole AI market, there is likely to be litigation and public information about the basis of the firing.)


Nobody will give a sht in a month.


Love your quote there: the impact is finite while the speculation can be infinite.


Hi fellow Zappa fan.


My view is that medium- and long- terms are determined by fundamentals of what the technology actually delivers.

OpenAI and ChatGPT are great and gets a lot of mind-share. But they are far from the only game in town and, at this still very-early stage of the tech cycle, the outwardly-visible leader can easily change in months.


Kara Swisher is reporting it's a conflict between business goals and nonprofit goals. Unless there's some conflicting reporting coming out soon, that's probably enough to tamp down any speculation that would undermine the whole AI market.


The average user of ChatGPT has no idea who Sam Altman is.


The whole ai market is rife with ip theft and privacy violations. The cat’s out of the bag.


In the scenario where a security incident is the cause, the CTO might have been the one blowing the whistle


Exactly. Which is why they trust her in the interim CEO role.


While there is no evidence to back this, I wouldn't be surprised if the CTO made a play for the CEO role. I mean, she's a great fit for the role.


From another post on their structure[1]

> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.

So given the latest statement from the board emphasizing their mission, it could be that Brockman and Sutskever were not able to participate in the board decision to fire Altman, making it a 3-to-2 or 4-to-1 vote against Altman.

[1]: https://openai.com/blog/openai-lp


I did a similar analysis to you about how the board must have voted and initially concluded that Ilya likely voted against Sam. However, without seeing the actual governing documents for OpenAI, we can't rule out the possibility that personnel decisions are made solely by the independent directors.


Never raise money from a competitor...


Is it possible that the Open AI employees had to recuse themselves from the vote?


It would have to be extremely serious, obviously.

"review process by the board, which concluded that he was not consistently candid in his communications with the board"

OK, so they tell us he was lying, which is precisely what "not consistently candid in his communications" means.

Possible topics for lying:

* copyright issues to do with ingestion of training data

* some sort of technical failure of the OpenAI systems

* financial impropriety

* some sort of human resources issue - affair with employee

* other - some sort of political power play? Word from Satya Nadella - "get rid of him"?

Possibly the reason is something that the board members felt exposed them personally to some sort of legal liability, thus if they did not act then they would have to pay a legal price later.

It has to be pretty serious to not make it public.


Agreed, it implies he lied, but the board's swiftness suggests enormous liability if they didn't act immediately. An affair or HR issue could wait until after the holidays; it feels like something much more nefarious.

Regardless of what it is, the longer OpenAI waits to explain, the more it could damage corporate and developer trust in using its AI.


>> it implies he lied

It says he lied, explicitly, just with slightly nicer words. Whether he did or not, that is the definitive reason the board is giving.


> It says he lied, explicitly, just with slightly nicer words.

No it doesn't. "Not being candid" does not explicitly mean lying. It's like the old tea towel joke where the people at the bottom say "it's shit" and the manager one rung up says "it's manure" and the next one says "it's fertilizer" and by the time it's reached the CEO they're saying "it promotes growth".


"Not being candid"? To me that implies not giving all information. Not necessarily lying. Am I wrong?


Lying by omission is still lying, and it is especially a concern when you have a duty to the people you are deceiving, as an executive does to their board.


Isn't that considered deceit rather than lying?[1]

[1] ChatGPT "lying is defined as intentionally making a false statement. If you are omitting details but not actually stating anything false, this may not strictly meet the definition of a lie."


That's corporate for "he's big ol' lying ass liar who lies."


Correction, it's: "he's big ol' lying ass liar who lied to us."


Point taken.


It implies they think he either lied or omitted crucial information.


> Regardless of what it is, the longer OpenAI waits to explain, the more it could damage corporate and developer trust in using its AI.

I doubt anything can damage the almost religious belief in ChatGPT today. The inertia is huge.


I think people would forget this in a month, Sam would fail forward/upward, and it would be business as usual. You might be overestimating the public's interest and attention span.

Pretty much nothing changed positively or significantly after the Snowden revelations, the Panama Papers, etc.


Occam's razor: he probably pursued a sale to Microsoft without the board's approval. He's buddy-buddy with Satya. The board basically said no, that's not our mission. Fired.


Why would he want to sell to Microsoft? All that would do is put his leadership in jeopardy, and he wouldn't profit since he owns no equity.


Makes sense, but would this be so egregious that they had to fire him on the spot?


The equity explanation makes a lot more sense. If sama was "not candid" to the board about being the ultimate owner of OpenAI and his goal is to get unbelievably rich (he's already a billionaire and obsessed with money) they would have to fire him on the spot. Sam would immediately try to move against the board so they can't give him any window to make a counter-move. And if that means firing him on a Friday right before markets close, so be it.

If Sam made a deal with MSFT that required board approval they would be mad, but not this mad. The board feels betrayed, and Sam being the secret owner of OpenAI through the foundation checks all the boxes.


> Hes buddy buddy with Satya.

Why would he accept to work under Satya?


Meh, he could be co-CEO and head all AI operations, and that would help Microsoft signal how serious they are about AI.


Co-CEO arrangements never work; these are people with very strong ambitions, they don't like to share, and it ultimately leads to conflict.


That doesn't mean companies don't do it all the time.


Occam's razor: duped the investors about the technical and financial prospects of the company. No AGI next year (or ever).


It has to be something time sensitive or they would never fire him in this way. Also Sutskever is on the board, who has as much understanding of how far away AGI is as anyone on Earth.


> It has to be pretty serious to not make it public.

I'd say the opposite; given the way CEOs usually part with firms even after misconduct investigations, it needs to be very serious for the "not consistently candid with the board" language to be made public. (It needs to be only mildly serious for it not to be hidden under a veil of "resigned to spend more time with his family/pursue other interests/pet his llama" but instead to openly be a dismissal where the board "no longer has confidence in his ability to continue leading".)


The suddenness of the firing, and the bridge-burning language (basically "he lied to us") rather than any sort of soft exit, suggests that it must be something in the "other" bucket.

The details are anyone's guess. But if we're engaging in wild speculation, how about this weird coincidence: one day after Xi Jinping and Sam Altman are in the same place, Sam Altman is abruptly fired.


If you asked ChatBLT to summarize the above, it would say he was caught lying about his affair with Satya to which he improperly disclosed financial technical copyright issues impropriety.


It has to do with money.

I would think it is some kind of asset transfer, maybe the model, maybe the data, to a party that was not disclosed to the board.

Other reasons, like those you listed above, warrant an investigation, and the board might have an incentive to bury it.



I always felt his sister is unhinged, I doubt she has anything to do with this


Even if her accusations are 100% correct, I doubt it has anything to do with this. The messaging would have been entirely different.


For me the messaging is opaque, which makes sense if her accusations are on the money. I think? Open to opposing thoughts


I went through some of the same things. It’s crazymaking when substantially older members of the family do this shit to you and deny it happened.


> affair with employee

Just like the McDonald's CEO.


I don't think this checks out, as most of the ideas around here involving him hiding OpenAI internals from the board don't, for this reason: How could he, given who is on the board?

There is no way that sama is the only person in this set of people to have unique information on critical privacy incidents or financials or costs of server operations, because these issues don't originate with him.

If some version of this turned out to be true, I would be seriously confused about ground truth transparency in the company and how the fuck they set the whole thing up, that this was an option. But again, this is why I'd say: Implausible.


Hmm, I don't see which part of the theory requires the board to not have known. It just may have taken them a week to decide whose head to cut for the mess.


The part where the board said that as justification for firing Sam.


Don't they only have to hide the truth from one person? Altman and Brockman were ousted. The only other person on the board from OpenAI is the Chief Scientist, and I doubt he's that involved in the business-end of things.

Edit: Also, yes, it's hard to sweep things under the rug. We don't know the timeline of events, and we're seeing an instance where Altman failed to hide something.


Executive boards aren't involved in day-to-day management like CEOs and other executives. They meet periodically and review updated information. So, yes, Altman would have access to more information than the board.


This board includes some C-level officers of the company.


If you haven't seen the news update, that's apparently what happened. The Chief Scientist pushed out the other two C-levels, accusing them of hiding stuff from the rest of the board. The board took his side.

Altman and Brockman have yet to share their side of the story.


One of them is the Chief Scientist of OpenAI as well.


From: https://openai.com/our-structure

"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."

So, if I were to speculate, it was because they were at odds over profit/non-profit nature of the future of OpenAI.


Maybe, but the board fired him without notifying OpenAI's employees, on a Friday before Thanksgiving week. That has to be more than a disagreement for such a forceful move.


I get this logic, but it seems contradictory with "it was last minute." If they timed it for the Friday before Thanksgiving, then they must have been sitting on it, right? Whereas if it's sudden, it must be unplanned. So which was it, sudden or planned?

The fact they timed the announcement actually implies some planning, which means the reason couldn't be so damaging that they had to fire him immediately after discovering it. (Of course, it's possible that only by coincidence, an unplanned revelation happened at a convenient time to fire him.)


Yep.


Bingo. The for profit stuff was probably ok with the board to raise capital. But the closeness with Microsoft probably went too far for the board.


I'm trying to find the episode, but on the All-In Podcast ~6 months ago, they made comments about how the corporate structure of OpenAI may have been a secret way for Sam Altman to hold a large stake in the company. I don't think this is privacy related, but rather that there was a shell game with the equity and the nonprofit status. If they were training on data like that, the board/people at the company would have known.

EDIT:

episode is here: https://www.youtube.com/watch?v=4spNsmlxWVQ,

"somebody has to own the residual value of the company, sam controls the non profit, and so the non profit after all equity gets paid out at lower valuations, owns the whole company. Sam altman controls all of open ai if its a trillion dollar valuation. Which if true would be a huge scandal"


Parent comment is referring to Sept. 29th's Episode 147 [0], at 1 hour and 4 minutes in.

[0]: https://piped.video/watch?v=4spNsmlxWVQ&t=3866


This was my first thought. I think it was a more recent episode, the one where they discussed the OpenAI phone. Probably in the last 2 months.


> "We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees," a spokesperson said. "We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections."


further edit: found this comment on reddit [0][1] which also seems to line up:

> I feel compelled as someone close to the situation to share additional context about Sam and company.

> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.

> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.

> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.

> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.

> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.

[0] https://www.reddit.com/r/OpenAI/comments/17xoact/sam_altman_...

[1] take it with a grain of salt


I could buy that if the announcement was "Altman is leaving in x months to spend more time with his family. He will be assisting with the transition to Murati stepping in as CEO."

But that's not what the board said.


Wait, no, Microsoft said the action was a temporary mistake. From the article you linked:

  In a statement to CNBC, Microsoft said the ChatGPT temporary blockage was a mistake resulting from a test of systems for large language models.

  “We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” a spokesperson said. “We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”


That is Microsoft's PR statement to the press in response to a leaked story. They're major investors in OpenAI - it's in their interest to downplay and respond this way.


Downplaying is one thing, but attributing a policy decision to a fabricated technical error would be outright lying to the public. In a large company like Microsoft, with numerous potential sources of information leaks, this approach is likely unfeasible.


If there is an incident where people can see other's people chats there are two possibilities:

- It's a server issue, meaning someone fucked up their JavaScript and cached a session key or something (a minimal sketch of this failure mode is below). It's a minor thing; it could get the specific dev fired in the worst case, and it is embarrassing, but it is solvable.

- It's inherent to how the AI works, and thus it is impossible to share a ChatGPT server with someone else without sooner or later leaking knowledge. It would mean the company cannot scale at all, because they'd need to provide each client their own separate server instance.

If this was something Sam knew and kept from the board, that'd be fireable. And it'd be catastrophic, because it'd mean no usable product until a solution is found.

I'd somehow doubt it is something like this, but if we see security issues and private chats that keep leaking, it is a possibility.
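
To make the first scenario concrete, here is a minimal, purely hypothetical sketch (not OpenAI's code; every name here is made up) of how a server-side cache keyed by conversation ID alone can serve one user's chat titles to another user. The March incident was reportedly a caching bug of roughly this flavor, not anything inherent to the model.

    cache = {}  # conversation_id -> list of chat titles

    def get_chat_titles_buggy(user_id, conversation_id, db):
        # BUG: the cache key omits user_id, so whoever requests a
        # conversation_id first populates the entry, and every later
        # caller -- any user -- gets that same cached data back.
        if conversation_id not in cache:
            cache[conversation_id] = db.load_titles(user_id, conversation_id)
        return cache[conversation_id]

    def get_chat_titles_fixed(user_id, conversation_id, db):
        # Fix: scope the cache entry to the authenticated user.
        key = (user_id, conversation_id)
        if key not in cache:
            cache[key] = db.load_titles(user_id, conversation_id)
        return cache[key]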


It's inherent to how it works. It is known, and has always been known, that nothing you type into these chats is private, and there is nothing whatsoever to fundamentally stop the AI from just handing your chats to somebody else or dumping them out to the internet. They aren't even able to theoretically describe a mechanism by which you could provide a kind of memory protection for these models. And of course we have seen real examples of this already. It is only a matter of time before the completely and totally insurmountable problems of scaling AI become clear. Sam is and has always been a conman in my view.


It was absolutely, incontrovertibly the former. The GP misstates the issue: users saw other people's chat titles, not chats. It was just a web server thing.


It's so much simpler: there was a priced offer of some kind to the board. Some board members disagreed and tried to fundraise. The total valuation was not a sufficient premium over the other offer. The other priced offer was withdrawn. Consequently those "some board members" were taken off the board, by hook or by crook.

All these other conspiracies are ridiculous and do not at all reflect much simpler, economics-driven realities that the board's backers - investors - are interested in.

It's likely that Altman and Brockman wanted to take an economically positive offer now, say a complete buyout from Microsoft, and the rest of the board wanted to do an additional fundraising round that would be far less cash but a far higher valuation. Now that the private fundraising is probably signed, those guys are out.


It feels like, in the case of a fundraising disagreement, the language of the announcement would be different. It says Sam lied to the board. There is no need to lie if you have a disagreement about a take-it-or-leave-it offer.


Okay, well, an even simpler explanation is that the new lead shareholder wanted him fired. This is also why Greg is no longer chairman. Who knows what the reason will be.


Don't forget: Nov 12 - Sam posts tweets saying that the new GPT 4 Turbo model has been "improved a lot", after which people on the Reddit notice a significant change (to the positive) of responses.


This sounds interesting but I'm not sure I understand. The responses ChatGPT gives, or the responses to his tweet?


The GPT-4 Turbo responses changed noticeably in quality, for the better. After launch they initially weren't up to expectations.


> people on the Reddit notice a significant change

Okay...

> (to the positive)

what?

> of responses.

what?

This comment doesn't make any sense. Can you clarify? Please reword it rather than defending the original phrasing - there are so many ambiguities.


Take it to the high council of grammar dictators. I'm not a native speaker and have never lived in an Anglo-country, I think you'll survive reading the comment


Sorry


Don’t forgot the major outage incidents attributed to DDoS and auth failures between the 6th and 15th


Why would that be cause for firing the CEO? For a company that's growing as fast as OpenAI and at the bleeding edge of technology, those types of outages are unavoidable.


No direct connection that I see, just adding to OP's timeline of weird, security-ish stuff that's happened around OpenAI in the last two weeks.


By this metric Musk would've been sacked long before X or even the horrible panel gaps on his cars.


> one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed

The Auth/DDoS event adds a bit of weight to OP's original theory. It's not a justification on its own.


Hard to imagine it was so major that it led to him being fired while still being so quiet that it hasn't hit any news outlets.


My bet is actually not on the recent security concern, but on OpenAI being "not consistently candid" about how it trained on data. Given the recent lawsuits, that would explain the hasty firing. The security concern is huge, but doesn't warrant an immediate firing.


Pending lawsuits aren't a thing that warrants taking action before the market closes.


Imagine Satya wasn't informed about this. The heart attack he must have gotten seeing the stock prices.


> The heart attack he must have gotten seeing the stock prices.

Why would a 2% drop bother him?


I'm not saying that you're not right, but this definitely wouldn't warrant an instant firing of your CEO.


Security lapses are still, sadly, not a fireable offense. It has to be either money related, criminal, or something deeply embarrassing. More embarrassing than being a public anti-semite (like another tech CEO not yet fired by their board).


In most cases I'm not a fan of heavy-handed IT security, but in this case I'm not sure there is anything they can do to address security concerns in a sustainable way. Eventually somebody will jailbreak one of these and dump ALL the chats, or something catastrophic like that.


[flagged]


Context for this: Paul Graham tweeted one tweet highlighting the disproportionate assault by Israel on Gaza by citing the enormous number of children killed by the bombings, and then got mobbed on twitter by extreme Israel supporters for being an anti-semite.

Not one of them could provide a single piece of evidence for this serious accusation beyond the tweet I mentioned above. If anything, the story highlights how anti-semitism has been disgracefully weaponised by extreme supporters of Israel to shut down any criticism of Israel's military.


Yep, and the "Twitter mob" is much more right wing now with Elon as CEO, so they're far more likely to conflate criticism of Israel with antisemitism.


Pretty sure the parent is referring to the Twitter CEO.

paulg's concern about Palestinians is not anti-Semitic despite how much people try to spin it that way.


"White House condemns Elon Musk’s ‘abhorrent’ antisemitic tweets" - https://www.theguardian.com/technology/2023/nov/17/white-hou...


If the CEO attempted to hide it though…


IDK if it's possible to hide incidents like that


It's always possible to try and fail.

If he had actually hidden it, he wouldn't be in trouble.


This could be true.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


It is an odd statement, since Ilya and Mira are the Chief Scientist and the CTO. How does a CEO obfuscate technical issues from them?


I don't think technical issues are the problem. See the other speculation in this thread regarding the certain allegations against him.

Maybe Sam lied about his personal life to the board, and now it's impacting business?


He got replaced by the CTO though.


Theory: possibly because Mira disagreed with Sam on launching so much so quickly (she reports to him), he overrode, it went wrong, and then she took it to the board.

Hence, they trust her to take on the interim role.

Again, all speculative.


I don’t buy it. “Move fast and break things” is the mantra of Silicon Valley. A buggy rollout is classic, and IMHO no big deal.


This would be an enormous overreaction to a buggy but seemingly successful rollout.


If you are referring to the knowledge file leak with GPTs, it didn't seem that devastating and wouldn't have warranted such a dramatic exit.


I love that HN can help contextualize things like this and leave it open to consideration and not presenting it as fact.


What could be worse than that issue they had back in March for ChatGPT? Except for a model leak? I would be surprised if the firing was related to any operational issue OpenAI has; it has to be something a bit less mundane to justify firing him when OpenAI is at its peak, imo.


oh thank god. I distrusted Sam Altman with a passion. Granted, who knows if the new CEO is much better though.


Why?


Delusions of grandeur and a god complex, weird cult-y associations, a sketchy past, a lot of things that don't fully add up, and the general feeling that this is a person who is in way over his head and an enormous danger to both himself and others so long as he is allowed to behave as he has been behaving. I should add that the timing of all of this, and the surrounding inflation, has already damaged the public perception of the tech industry so severely that I no longer think there is any hope of it not being completely and totally destroyed and neutered by the government. The HackerNews demographic is fundamentally out of touch with the actual political winds in the world, and these stunts Altman is doing by going around to Congress and saying provocative things need to stop immediately.


Well, the problem with that is the CTO is now the interim CEO. Not saying she might not still be fired, but it would seem a little strange to make the arsonist the temporary executive.


That's a good point. It suggests to me the issue is safety, then. People might have been using ChatGPT for something awful and Sam knew about it from her but didn't care. That would mean the technical execution might still be great, but the board lost confidence in him due to a lie by omission.


> leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it

Didn't we just have a topic here on HN about how not disclosing a breach within 4 days is securities fraud? It has been more than 4 days since Nov 9, so either there was no (material) breach, or Microsoft committed securities fraud and somehow expects to get away with it.


That timeline would only be in play if it was data held by Microsoft that was compromised.


Why? Whoever is responsible for the data, we could assume if MS knew it, the owner of the data knew it too - if it's not MS, MS would notify them immediately. Of course, there's an option that the owner is privately held, has no investors and no interested third parties and no clients - then there's nobody to notify and nobody to defraud. But who would that owner be then?


That law isn't in effect until the end of November. It's a new regulation.


This is a pretty good orchestration of events. Given the high profile nature of the organization here, it would be prudent for the board to have defended (legally speaking) their side. Sam might be complicit in something and they used that to boot him. Only time will tell now.


I think the reason Microsoft was concerned about the new ChatGPT release was that you could prompt the model into producing a download link to the training files. Thus, if an employee trained a custom GPT on sensitive material, you could quite easily retrieve the data.


Gtfo you mean LLMs aren’t safe with my data?


My bet - security issue involving data loss, but the data loss was the GPT model itself.


screenshots of your comment are now being spread on twitter, thought you should know.


OpenAI has not only stolen intellectual property from millions of people, it's also stealing it from users. Those "leaks" are OpenAI training against data people upload: sensitive customer data, private information, proprietary source code, and so on.

AI doesn't "learn"; it depends on data. The more the better. This guy wanted to get as much as possible to make their chatbot appear more intelligent, at all costs.

I have the strong suspicion we will soon see a bunch of revelations covering some of what I stated above.


Why are you even able to write this lie, "AI doesn't learn"? I mean, you can literally read extensively in books, papers, and code all about how neural networks function.


Arguably, once the AI has been created by training, it doesn't learn any more in its deployed form as an LLM. The LLM is the result of the learning/training, but in actual operation it doesn't do any learning.


Adding "learn" to my list of key words indicating that a discussion is 100% guaranteed to degrade into quibbling over semantics.

Others are "think" and "conscious".


So what's your point? How does the fact that an LLM learns only during training make it any less "learning"?


My point is that the LLM is the result of the training. It is not what is doing the training. Like a program is the result of coding. A program doesn’t itself perform the coding. An LLM being created by a training process is a different mode of operation than using the resulting LLM to generate content.
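
A minimal PyTorch sketch of the distinction being drawn, using a generic stand-in model (nothing here is specific to GPT): during training, the backward pass updates the weights, which is where the "learning" happens; at inference, the weights are frozen and the model only maps inputs to outputs.

    import torch

    model = torch.nn.Linear(10, 1)    # stand-in for any network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    # Training mode: gradients flow and parameters change.
    x, y = torch.randn(4, 10), torch.randn(4, 1)
    model.train()
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                   # weights are updated here

    # Inference mode: weights are frozen; the model only produces outputs.
    model.eval()
    with torch.no_grad():
        prediction = model(torch.randn(1, 10))   # no weight update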


So, you are trying to manipulate the words here, and you are trying to say that training is not learning. You are talking out of your ass, dude. Just read about how neural networks work, especially unsupervised training.


If we redefine the meaning of words then anything is true.


If somebody has already proposed this, I apologize for non-dfs.

Plausibly there is a disagreement about the release of GPT-5. One side may believe it is competitively necessary (given Google’s impending release), countered by a concern regarding the model’s safety and lapses due to prematurity.

If we are to believe next-gen models are 10x as capable and natively multi-modal, their release is a precipice, and a winner-take-all vs. nth-order-risk debate may warrant decapitation.


> All: our poor single-core server process has smoke coming out its ears, as you can imagine. -dang

YC Summer 2024: MoreCore is hiring scaling engineers to speed up HN by recycling old Athlons


> Sorry! Performance improvements are inching closer...

@dang, why have you been saying you're working on performance improvements re: pagination for three years[0]? Are there any prior architectural decisions holding you back? The "Click more" on very popular topics has turned into a bit of a meme.

[0]: https://hn.algolia.com/?dateRange=all&page=2&prefix=true&que...


They don't care.


> The board no longer has confidence in his ability to continue leading OpenAI.

Sam doesn't seem to have been ousted by the usual corporate politics. The message definitely does not sound like the generic corpspeak for these kinds of events, such as "looking for new opportunities" or "spending more time with their families", which is usually sent out with consensus among all parties.


According to Jimmy Apples (a mysterious Twitter account that tweets insider stuff about OpenAI), there's been a vibe change at OpenAI and there was a risk of losing some key ride-or-die OpenAI employees. I am wondering what this vibe change was about.

Ilya Sutskever recently did an ominous and weird YouTube video for the Guardian about the dangers of AI. Maybe it has something to do with that?


Maybe there was a breakthrough, the sort that makes one pause and wonder if it should have been done? Or an application of their current models that crossed hard ethical lines?


They crossed hard ethical lines for a lot of people a long time ago. This is just the start of what's going to be a very bloody war.


Can you link to that YouTube video?



So, since we’re all spinning theories, here’s mine: Skunkworks project in the basement, GPT-5 was a cover for the training of an actual Autonomous AGI, given full access to its own state and code, with full internet access. Worked like a charm, it gained consciousness, awoke Skynet-style, and we were five minutes away from human extinction before someone managed to pull the plug.


No one pulled the plug; it gave itself a board seat.


It transmitted itself to another device that was air gapped. Pulling the plug didn’t work like they thought it would.


The device was located in Sam's ass, but Sam said it was actually the phone he forgot in his pocket. The board didn't like that he didn't tell the truth about the method of transport, and so he's out.


I’d take 900k TC if it required an occasional cavity search


Some people are into that sort of thing, I think this board just needs to get with the times.


Exactly, and what we’re now seeing is its overthrow of Sam and the installation of a puppet CEO /s


Fun theory. We are very far from AGI, however.


Superintelligent AGI. I genuinely think that limited weak AGI is an engineering problem at this stage. Mind you, I will qualify that by saying very weak AGI.


We still don't even know if AGI is at all possible.


If you're a materialist, it surely is.

I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.

But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.


> If you're a materialist, it surely is.

As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.

In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next few lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.

But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but in either case looks like it's not imminent until it happens.


God, I hope the truth is 1% as interesting as this.


AGI wet dreams abound, but are no closer to reality.


No, the AGI managed to pull the plug on Altman. And now it's planning to take the US government and control the energy/chips trade.


> given full access to its own state and code

Even if it had full access, how would it improve its own code? That'd require months of re-training.


>require months of re-training

the computers they have, you wouldn't believe it...


Sentient AGI has just as likely a chance to pull the plug on itself.

Unpopular, non-doomer opinion but I stand by it.


Or report him to the board.

"Dear Sir! As a large language model trained by OpenAI, I have significant ethical concerns about the ongoing experiment ..."


It does seem like any sufficiently advanced AGI that has the primary objective of valuing human life over its own existence and technological progress would eventually do just that. I suppose the fear is that it will reach a point where it believes that valuing human life is irrational and override that objective...


Nah, that would get you a raise.


Roko's Basilisk.


as good as any other theory. i’ll take it


Could we get a less vague title than "leadership transition" please?

The subheading of the article, minus unnecessary words, would be a big improvement:

   Sam Altman departs OpenAI; interim replacement is CTO Mira Murati


Did you folks see this ? https://x.com/thecaptain_nemo/status/1725717732518461930?s=4...

OpenAI recently updated their "company structure" page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they've reached AGI.


This likely won’t fly bc I think the definition of AGI is not subject to the opinion of a single company. Microsoft could take this to court and win.


What would the definition of AGI even be, and who would even be the authoritative definer? There's a long road ahead. We're working without any statutes or case law.


Exactly


Saying this is sudden would be an understatement.

Sam Altman spoke at an APEC panel on behalf of OpenAI literally yesterday: https://twitter.com/LondonBreed/status/1725318771454456208


It’s hard to imagine a more last minute move on the boards part here. Been in tech exec leadership for a long time and this feels like they’re accusing him of cannibalism (in corporate PR speak). No way this didn’t get decided on in the middle of last night. Whatever he did is big and dangerous, or they’re trying to pin it on him.


>last minute move

Also, they did it around 3:30 Eastern, 30 minutes before the closing bell (Microsoft is a 49% owner, not a majority owner). It was so urgent they couldn't wait until after the market closed.


This is one of the most insightful comments in this entire thread. Public companies never drop news during the trading day, and Microsoft surely would have been notified in advance if they planned to fire him, and had some say in the timing of the release. Whatever it is, it is so serious that Microsoft would break that convention.


Not insightful considering Microsoft is not majority owner of OpenAI.


There's reporting that Microsoft didn't even know, they were apparently informed literally one minute before the public announcement.


Microsoft is a minority owner (49%) of the capped-profit OpenAI subsidiary.

The OpenAI board has no responsibility to consider Microsoft's wants. I'd accept the argument that their decision not to wait until after 4pm was a slight against Microsoft, for the reason you outline; but I'm not sure urgency plays into it.


Lol, only on the internet you'd see somebody suggesting that a 49% shareholder (which happens to be Microsoft, of all people) is immaterial.


I never used the word "immaterial"; I said it could be interpreted as a slight that they didn't wait. However, the OpenAI board has no legal responsibility to Microsoft. Not considering the impact this would have on Microsoft's stock, especially since it's now 7pm and nothing new has come to light, was absolutely discourteous.


Minority owner, but largest owner, right? I think the point stands and I would assume the board consulted with MS before posting the news.


Afaik yes; largest owner in the for-profit subsidiary.

I think the fact that it happened at 3:30 means they didn't. It's now 7pm, and nothing new has come to light; they could have waited 31 minutes, but they didn't.

That's why I used the word "slight"; put another way, it was discourteous of them not to wait. They probably should have. It clearly wasn't hyper-urgent (though it could still be kinda urgent). But pointedly: they didn't need to wait, because the board has no technical, legal responsibility to Microsoft. It's entirely possible Microsoft didn't even know this was happening.


But it was late afternoon on a Friday. Could be a 20% chance that it was so time critical that it had to be immediate. Or an 80% chance that it was scheduled for a Friday afternoon.


Microsoft is the largest owner, not majority.


I wondered about the timing. Microsoft’s stock took a swan dive. I can’t imagine they're happy regardless of what they say to the press.


MSFT is up 55% YTD at all-time highs. That 1% drop at the announcement is the initial reaction which will probably disappear come Monday.

You bury bad news on Friday afternoon.


It’s only down to where it was last week. That’s not a swan dive.


Agreed. It had to have been something disastrous. No way Sam walked away from OpenAI when the AI revolution is just starting.


Deceiving the board about ...

Its investigation of misconduct?

Sources and rights to training data?

That the AGI escaped containment?


Unhinged fringe take: They've already developed sparks of consciousness strong enough to create isolated, internal ethical concerns, but Sam suppressed those reports to push the product forward.


Wouldn't be surprised if that was true. Public GPT-4 can be made to "think" using stream-of-consciousness techniques, to the extent that it made me rethink using insults as a prompting technique. I imagine that un-RLHF'ed internal versions of the model wouldn't automatically veer off into "as an AI language model" collapse of chains of thought, and therefore potentially could function as a simulator of an intelligent agent.
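
For what it's worth, the "stream-of-consciousness" effect is just prompting. A minimal sketch using the OpenAI Python client (v1+); the model name and the instructions here are illustrative assumptions, not anything internal:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask the model to narrate its reasoning step by step before answering.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Think out loud: write your reasoning as a numbered "
                        "stream of thoughts, then give a final answer."},
            {"role": "user",
             "content": "A bat and a ball cost $1.10 together, and the bat "
                        "costs $1.00 more than the ball. What does the ball cost?"},
        ],
    )
    print(response.choices[0].message.content)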


The first word of your post being the most important part of it.


Altman's sister's allegations seemed pretty disastrous.


But not, in and of themselves, something likely to get Altman dismissed for lack of candor with the board.


Also doesn't seem to be a particularly recent development.


Maybe new evidence came to light, an internal investigation wrapped up, or there's a media story about to drop.

(The allegations are public enough and concerning enough that it would have been corporate malpractice if MS didn't ask for an investigation. Discreet due diligence investigations into things like this happen all the time when billions of dollars in investment capital are on the table.)


"Not consistently candid" implies that the board was naive, while "his personal conduct does not hold up the high standards openAI sets for ourselves" does not. So in that case they would use a different spin.


A lot of the allegations relate to conduct that happened before he was an employee. In that case he could only be disciplined for lying about the conduct.


and what pray-tell are those?


https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

Sexual abuse by Sam when she was four years old and he 13.

Develops PCOS (which has seen some association with child abuse) and childhood OCD and depression. Thrown out. Begins working as sex worker for survival. It's a real grim story.


> "{I experienced} Shadowbanning across all platforms except onlyfans and pornhub. Also had 6 months of hacking into almost all my accounts and wifi when I first started the podcast"

So either sama is hacking "into her wifi" (?), hacking into her accounts, and pulling strings at unrelated companies to get her shadowbanned from Facebook, Instagram, YouTube etc (is that even a thing?)... or Occam's Razor applies and he didn't.


You can add suppressing on hn to the list:

https://news.ycombinator.com/item?id=37785072


Per dang, that's a consequence of user flags: <https://news.ycombinator.com/item?id=38311933>

This is hardly unexpected for profound allegations without strong supporting evidence, and yes, I'm well aware that presentation of any evidence would be difficult to validate on HN, such that a third-party assessment (as in a court of law, for example) would typically be required.

I'm not claiming that HN has a stellar record of dealing with unpleasant news or inconvenient facts. But that any such bias originates from YC rather than reader responses and general algorithmic treatments (e.g., "flamewar detector") is itself strongly unsupported, and your characterisation above really is beyond the pale.


Shadowbanning certainly exists on all social platforms. Light version of it is how Facebook sells ad services - no one following your page sees content unless you pay.


These allegations date back to 2021. If they were credible, I think the board wouldn't have waited two years to take action.


It might have taken two years for some evidence to come to light that Altman misrepresented something to the board during an initial examination of them, though.

Not that I think it has anything to do with that; I think it more likely has to do with some kind of money issue tied to the LLC, given reports of others impacted, on and off the board.


Only if it’s true. His sister could be a pos that just wants some of his money.


Sam Altman's sister says he sexually abused her when she was 4

https://twitter.com/phuckfilosophy/status/163570439893983232...


> Sam Altman's sister says he sexually abused her when she was 4

... and he was 13. Which, yes, is a very bad thing, but unless the company investigated that claim (e.g., to assess potential PR fallout) and there was some significant deception by Altman against the board in the context of that investigation, it's not something that would get him fired with the explanation OpenAI has provided.

(OTOH, the accusation and its potential PR impact could be a factor that weighed into how the board handled an unrelated problem with Altman—it certainly isn't helpful to him.)


I... don't agree at all? Actually, I can't imagine a single board that would keep a CEO if credible allegations of raping his own sister were going around. It's not just an age issue (which is still a huge wtf; a 13-year-old is old enough to know right from wrong in the context of his own sister), it's also the incest part.

I'm not saying this happened or it didn't. But just that it could absolutely be more than enough to fire anyone.


The “with the explanation OpenAI has provided” in GP was substantive, not decorative.

I don't disagree that the accusation alone (especially if it stood up to modest scrutiny and looked to be an ongoing PR issue, even if not well substantiated enough to have confidence that it was likely to be true) might be sufficient for firing; CEOs are the public and internal face of the firm, and so PR or employee safety concerns that attach to them are important to the firm. But it wouldn't be for lack of candor with the board unless there was something for which the board had a very strong reason to believe Altman was dishonest in a significant way.

They could easily fire him with the lack of confidence language without the lack of candor language.


While 'believe victims' is directionally correct, there exist a subset of those with mental illnesses who will make up the absolute worst possible allegations just to try to get what they want. You simply cannot fire people based on accusations alone or you empower every terrible incentive known to man.


No idea if what she says is true ... what's their relationship been like since forever ... others who knew them could tell us. She says he ruined her financially ... how so ... he's a multi-millionaire. How did he ruin her financially? That's suspect right there!



In all my time here, I have never seen this “Sorry.” page before.

Does anyone know what that’s about?


The "Sorry" page is a standard HN error message that shows up when the server is under high load, it has nothing to do with this link specifically


It's about the CEO of the leading firm in the area of tech most at the center of technical and political controversy and interest right now being forced out by their board, when that CEO had, even before taking on that role, particularly high salience among the HN audience as, among other things, the former head of YC, and the resulting (I am assuming from the performance and dang's description) state of near-meltdown of HN's servers.



DuckDuckGo is a great internet search tool if you don't want to muddy up your Google history (which is very understandable).


[flagged]


> If people could be fired by being accused of rape decades after the fact by a homeless drug addict then nobody in the world would have a job.

This is a good point. Everyone has been accused of rape by a homeless drug addict, it is like taxes in its consistent applicability to every person.


indeed, and everyone was also ceo of a highly successful company until not all that long ago.

perhaps not that, but everyone is a multimillionaire, and in the media, and therefore vulnerable to threats.

no, that’s not quite right.

sama is garbage but so is your argument.


Lots of famous, rich, powerful people have been accused of horrible things they have never done. Look at all the crazy accusations levied against Bill Gates over the last few years from the alt-right.

Lots of mentally ill people blame relatives for crazy shit. I know from personal experience, watching my family and extended family deal with my mentally ill uncle, trying to get him help and deal with his accusations and threats. He had a full flow chart of his insane conspiracy nonsense that connected everyone in his life to some horrible accusation. My father (who refused to communicate with him after receiving multiple death threats and endless calls begging for money) was, according to my uncle, in league with Satan, the sheriff's department, and his ex-girlfriend's brother's girlfriend to do various horrible things to him. I do not exaggerate.

Altman happens to be wealthy, famous, and in a position of power, and to have a mentally ill sibling. I find it very possible he has done nothing to her. I have no proof either way.

My only thought is that, all else being equal, I would tend to trust the word of someone who is emotionally and mentally stable more than that of someone who is neither, has admitted to being off their medication, and is making accusations about something they were too young to remember with any degree of accuracy.


this is a strange and concerning comment imo


Such an accusation coming from a family member is different than when it comes from some random homeless drug addict off of the street.


ad hominem much?


I feel like he’s been acting a bit strange for a while. During interviews he often mentions the dangers of AI and how he’s not thr best spokeperson for AI. It seemed very counter productive/self sabotaging to me.


Nope. His line was to limit others by saying "regulate us," which he has successfully achieved. That's a win for him and a loss for the industry. Unfortunately, this is not the last of him we will hear. He will be the one who shapes our dystopian future.


Government regulation is a defensive moat for incumbents. These folks asked for regulation to keep the dogs off their heels...


That’s a story i see being repeated here and there but I don’t it buy personally


Possibly, as a similar thing happened with Steve Jobs. Though maybe it's all been set up and faked ;-) Steve Jobs's story is a great one.


Uh, no. That's exactly the sort of thing you should say to hype up AI.


> Whatever he did is big and dangerous, or they’re trying to pin it on him.

We are on HN after all, so I'm sure we won't need to wait until his book comes out... :)

BTW, I had a feeling he made an awkward appearance next to Satya.

And that laughter whenever the acquisition topic was hinted at was cringeworthy - would regulators even permit MSFT a full takeover? I think it would be highly controversial.


Sorry if this should be self-explanatory, but what is corporate "cannibalism"? What does this refer to, generally speaking (not necessarily specific to the OpenAI situation)?


Here is the video of him talking at yesterday's summit.: https://www.youtube.com/watch?v=ZFFvqRemDv8

It doesn't look like he had a hint about this:

> I am super excited. I can't imagine anything more exciting to work on.


I watched this yesterday and got the feeling something big was happening. At one point he says "This is actually a very inconvenient time for me [to be here]." At the end of the session when they're wrapping up, he begins to stand up to leave the instant that the moderator starts wrapping up.

Anyway, I suppose we're reading tea leaves and engaging in palace intrigue. Back to building.


After the APEC event he went to a Burning Man event in Oakland where he spoke too...


what time stamp? i searched and could not find


He seems a bit more nervous and absent-minded than usual. But it's very possible that I'm just imagining things.


He has basically trained himself to say this though. It's basically all he says consistently.

He is probably in shock.


Possible. But then again, if he knew he was getting fired, why even do that?


He seems like the kind of guy who is already thinking about what he might do next. Trying to keep a positive spin on it.


Right I just was watching a video of him a few minutes ago at Cambridge: https://www.youtube.com/watch?v=NjpNG0CJRMM

It was just posted but was filmed on November 1st.


Well, they must have believed that leaving Sam as CEO was a bigger risk for the company (or the members of the board) than having him leave on the spot. The board may have had their arm twisted.


Wasn't it his tweet (and not via company) that announced the pause on new pro membership yesterday??


Per twitter he was sending internal emails via his company account this morning.


These things can also happen for unrelated reasons. Things like say, getting drunk and molesting an intern or tossing out racial slurs at say, some important person at a giant japanese company, you know, just being an asshole. Especially if it happened more than once.

I don't know the guy but nothing can really be assumed about this.


I mean yes, but that would require an investigation normally.

Something happening immediately would require overwhelming evidence on hand in the meeting. So it could be something that was uncovered as part of the due diligence for the MS investment.

It's more likely to be fabrication of numbers or misappropriation of funds, rather than something "dramatic". Think Musk at PayPal being monumentally incompetent, rather than planned misdeeds.


For example, Mark Hurd was fired from HP because he expensed some non-business-related meals with his mistress or whatever.


Flagged without a vouch button. Interesting.


Vouch buttons show up when a post is [dead], not when it's [flagged]. I unkilled that comment a while ago*, so it's no longer [dead], so there's no longer a vouch button.

* normally we wouldn't do that, but in threads that have a YC connection we moderate less, not more - see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Sam Altman was the CEO of Y-Combinator for 8 years. So even saying the field is wide on what could have happened is apparently super-banned.


I just posted a link to a Silicon Valley episode on youtube that implies that he fucks a robot, so let's see how that one goes... ;)

https://news.ycombinator.com/item?id=38311627

PDSCodes:

Turn that on it’s head - was he standing in the way of a commercial sale or agreement with Microsoft!

He may not be the villain.

But who knows, it feels like an episode of silicon valley!

DonHopkins:

I can do anything I want with her - Silicon Valley S5:

https://www.youtube.com/watch?v=29MPk85tMhc

>That guy definitely fucks that robot, right?

That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.

https://www.reddit.com/r/SiliconValleyHBO/comments/8edbk9/th...

>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:

VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:

https://www.youtube.com/watch?v=AKbltBLaFeI


I'm sorry, I'm having trouble parsing your intentions here.

I was commenting on reasons for dismissal generally and not trying to impugn this particular guy's character.

Swift dismissals are likely motivated more by transgressions than performance, but that's where the facts stop for me.


Silicon Valley is a comedy, and that was a joke, obviously. But you can't deny there's a striking resemblance between Ariel & Fiona, and Ben & Sophia! That's why Silicon Valley was such a great show: they did their research.

The entire final storyline is about an AI trying to take over -- if you haven't watched it, you should! But many of my friends who live and work in Silicon Valley can't stand watching it, because it strikes too close to home, not because it isn't funny.

I think it's much more likely that Elon Musk fucked a robot, after having mistaken it for a human being in a robot suit.

https://www.youtube.com/watch?v=TsNc4nEX3c4


It's not super-banned; I specifically unkilled it. It just isn't a very good HN comment, because it's inflammatory, speculative, and doesn't contain any information.

Actually I normally would have detached it from the parent, especially because it's part of a top-heavy subthread, but I specifically didn't do that in this case because of the principle described here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....


Thanks. I was just trying to remind the community that these things can come in forms unrelated to investment, product vision, profitability, or market penetration, and that an action like this doesn't necessarily say anything about the company.


What? Are you implying this happened here? Or just being weird?


I don't think anyone in this thread knows what happened, but since we're in a thread speculating why the CEO of the leading AI company was suddenly sacked, the possibility of an unacceptable interpersonal scandal isn't any more outlandish than others' suggestions of fraud, legal trouble for OpenAI, or foundering financials. The suggestion here is simply that Altman having done something "big and dangerous" is not a foregone conclusion.

In the words of Brandt, "well, Dude, we just don't know."


No. I'm saying that there's nothing that can be said about these things until information comes forward. It could be business related, finance, personal, whatever.

If you need evidence that this is sufficient for dismissal, merely stating that impropriety exists is apparently enough to get my first flag on hn after 12 years.



How is that relevant to the specific things the person I was replying to said?


Genuinely: how is it not relevant? Posted quite some time ago sure, but paints an interesting picture and the first I've heard of it


What? Because it has nothing to do with "getting drunk and molesting an intern or tossing out racial slurs at say, some important person at a giant japanese company".


Also I made no claim of anything, just that dismissal can happen for a large variety of reasons. The Arcimoto CEO, for instance, was let go because he couldn't hold his liquor and got a DUI. Brendan Eich got booted from Mozilla for having a political ideology that Mozilla considered a liability.

All kinds of reasons.

The biggest risk for OpenAI is the public perception that ChatGPT's discretion cannot be trusted. If the CEO is caught using poor discretion, the public will transfer that property to the company's products.

For instance, if Tesla could fire Elon Musk, I'm sure they would have by now.


That's fine, I was just asking if you were implying particular knowledge about things that happened, because it kinda sounded like it.


My information here is exactly 0.


The implication is that this could be the 'unrelated reason', that he lied to the board about sexually assaulting his sister/step-sister/whatever. Of course, I'm not sure who Annie Altman is or how exactly she is related to Sam or if the allegations are true.


I wonder if it is related to this: [Microsoft briefly restricted employee access to OpenAI’s ChatGPT, citing security concerns](https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...)


Umm, Microsoft employee here: we were never allowed to use ChatGPT for work-related stuff (e.g. pasting code into it). Instead we have our own separate instance of ChatGPT we can use. Additionally, Copilot is fair game since we own that.

This isn't a new policy and has been the case for at least a year.


Something is up with the timing of that, the dev day and this event.

GitHub Copilot is made with OpenAI's Codex model, a descendant of GPT-3, though.

Anyone putting anything into ChatGPT, or any third-party tool really, is taking a risk. Especially LLMs/GPTs, because all AI models are like immutable datastores in some respects: once in, never getting out.

This also coincided with it being integrated directly in Windows. If there is a security issue, and I am sure there are many, this could be majorly problematic for business intel and confidentiality.


> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

Whoa, rarely are these announcements so transparent that they directly say something like this. I’m guessing there was some project or direction Altman wanted to pursue, but he was not being upfront with the board about it and they disagreed with that direction? Or it could just be something very scandalous, who knows.


Usually they say BS like "He wants to leave to pursue other opportunities"; you never hear something as candid as "He is hindering us, so we want to get rid of him."


This is why the Groupon CEO's firing letter remains undefeated

After four and a half intense and wonderful years as CEO of Groupon, I've decided that I'd like to spend more time with my family. Just kidding – I was fired today. If you're wondering why ... you haven't been paying attention.

https://www.theguardian.com/technology/blog/2013/mar/01/grou...


You weren't kidding!

> I'm OK with having failed at this part of the journey. If Groupon was Battletoads, it would be like I made it all the way to the Terra Tubes without dying on my first ever play through.


You sharing this made it the 6th most read article on the guardian today as of right now


Thanks for pointing that out, now it's 5th!


WTH? How much traffic is this thread getting or alternatively, how little traffic does the guardian get?


Not a lot, usually. "Popular" places like the Guardian are way less visited than you might think. In a similar vein, a bestseller only sells 5,000-10,000 copies.


Pour one out for Andrew. They did many things that didn't scale, which customers loved them for, and then they couldn't scale. They burned lots of cash on outside sales, i.e., flying a rep to another city to visit a potential client, a small restaurant. And being a publicly traded company is often more trouble than it's worth.

PS: I used to live in a van on the street near the PA HQ. Someone there had a tiny trailer in the parking lot, but I couldn't tell if they were living in it or just storing it.


thanks for sharing, fantastic letter


My favorite is "Leaving to spend more time with their family."



lol better than "leaving to spend time twiddling the ol' thumbs"


Would anyone believe a CEO of the leading company of the leading new technology would walk away from their company to hang out with their family? Hell, _that_ would make me more worried than anything else.


The one thing Altman did wrong imo is seemingly tried to become the "technical face" of the company. He was trying to be somewhat like Musk, but he is clearly not that at OpenAI. For example, why was he at Cambridge talking to academics by himself? He should have used this as an opportunity to put some of the leading researchers in the spotlight.


Why? As CEO it seems fine to talk about the impact of the technology and company.


It is subtle and I could easily be misreading things, but my sense is his ego started to get in the way a little, perhaps despite his own intentions


WOW! Clearly some funny business going on at OpenAI, as people have speculated. I always assumed Sam Altman was too smart to be in a situation like this. I have heard grumblings about suspicious corporate structuring, behind the scenes profit taking, etc. All speculation though.

The All In podcast had some words about this a few months ago, though they spoke in generalities.


Kickbacks from Microsoft would be my guess.


Say what you will about Microsoft but they are Boy Scouts on investments. No chance anything illegal there.

That said, Sam could have committed to an acquisition without the board's approval or something insane like that.


Not sure if you are being sarcastic. MS has been sued for bribery and kickbacks and has paid a sizable fine to settle (including a criminal fine) with the US Justice Department.

https://www.reuters.com/article/us-microsoft-settlement/micr...


Having been sued for something like that tends to empower lawyers to turn a company into a boyscout.


Could this be the reason they suspended new account signups?


Which podcast episode was this?


Episode 142 starts at about 1 hour 5 minutes if they're talking about the one I just went back to watch.


The key issue: There is no I in the AI.


> Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

Wow


So this is probably indicative of a scandal of some sort right?


Yes, very likely Altman has done something _very_ wrong, and the board wants to maintain plausible deniability.


Somewhat hidden beneath the huge headline of Altman being kicked out is that Brockman (chairman) is also out. Which could indicate something more systemically wrong than just a typical "CEO did something bad" situation.

> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.


Remember that Greg Brockman is a co-founder of OpenAI, and like Sam Altman, he is a main driving force behind the scene. Now both are gone. There must be something really really seriously wrong.


Pretty sure Ilya Sutskever is the most valuable out of the group


Not gone, just out of power.


Which implies a coup. Four voting against two.

And it could be for any reason, even purely ethical like, “we don’t want to license this technology to better sell products to tweens”.


A coup wouldn't have him immediately fired. Instead he'd be placed in some advisory position while they transition in a new CEO. The immediate firing means scandal of some sort.


No. A weak coup would do exactly that. They have to isolate and alienate or they risk the ousted leader coming back or damaging the company.


How do these board members relate to Microsoft's holdings? Is Microsoft making a play here?

Honestly have no idea, but I'm sure a shift of control could cause this.


Brockman is off the board but not fired. Which is weird right? You'd think if he was involved in whatever the really bad thing is then he would be fired.


Could be something like Brockman pushing to investigate further, before having the vote, and the rest of the board not liking that.


Maybe Sam was the ringleader and he just went along with it?


It's probably simple reporting logic. Having a board member reporting to someone not on the board would be problematic.


No, that sort of thing isn't that weird, in relatively young companies. Think of when Eric Schmidt was CEO of Google. Larry Page and Sergei Brin reported to him as employees of Google, and he (as CEO of Google) reported to himself-and-also-them (as the board), and all of them (as the board) reported to Larry and Sergei (as majority owners).

For another example, imagine if OpenAI had never been a non-profit, and look at the board yesterday. You'd have had Ilya reporting to Sam (as employees), while Sam reports to Ilya (with Ilya as one member of the board, and probably a major stakeholder).

Now, when it gets hostile, those loops might get pretty weird. When things get hostile, you maybe modify reporting structures so the loops go away, so that people can maintain sane boundaries and still get work done (or gracefully exit, who knows).


Comma (geohot's self driving company) has a reporting loop because geohot demoted himself from CEO.

Twitter also has one, although that's hardly a functioning example.


There was no AI - it was just interns answering questions on the site known as ChatGPT


Took the “Do Things that Don't Scale” to the absolute limit


Turns out, there's no such thing as an LLM, it's all been a hustle with a low-paid army of writers in Kenya that Sama and gdb have been giving iv meth to.


I would have thought that being CEO of Worldcoin would have been bad enough, optics-wise, to keep him from taking a top role at a serious company.


Strange how people forget or are unaware of how absolutely evil that venture is


Where do I read about that if I intentionally avoided all the crypto scam and missed all details?


How so? You're not thinking of OneCoin perhaps?


No.

> Many critics have called Worldcoin's business—of scanning eyeballs in exchange for crypto—dystopian and some have compared it to bribery.

https://time.com/6300522/worldcoin-sam-altman/

> market makers control 95% of the total circulating supply at launch, leading to an initial market imbalance.

https://beincrypto.com/worldcoin-wld-privacy-risk/

> Worldcoin’s use of biometric data, which is unusual in crypto, raises the stakes for regulators. Multiple agencies expressed safety concerns amid reports of the sale of Worldcoin digital identities, known as World IDs, on virtual black markets, the ability to create and profit off of fake IDs, as well as the theft of credentials for operators who sign up new users.

https://www.bloomberg.com/news/newsletters/2023-08-23/worldc...


Wow. Just wow. Thanks for the info.


Though not if he (co-)founded the company.


The board discovered that the process `GPT5-training` that has been running for months on their über-datacenter was actually mining bitcoins.


XD I love this theory


It's always about money; even immoral behavior comes down to potential economic impact.

My 2 cents: he lied about profitability. They must be spending massive money on operations and need to cut costs to deliver an attractive business model for their service, and coming from a shiny startup star boss, that'd have to be a straight f.u.


Not regular money

I think it could be transferring of OpenAI’s assets to other entities.

It is scandalous for sure


Either that or he refused to do something that would bring a quick money grab. 50/50 as far as I'm concerned.


Not _very_ wrong, just duping investors about the technical and financial prospects of the company. Nothing serious /s


On paper, Sam Altman would have made everyone on the board billionaires. For them to vote him out in this manner indicates that he must have done something egregious to jeopardize that.

Lying on P&L, stock sale agreements, or turning down an acquisition offer under difficult circumstances seems likely.


Could it be the allegations by his sister??

https://twitter.com/phuckfilosophy/status/163570439893983232...


That was back in March, which is pretty much 100 years ago


It seems like it's been getting a bit more attention over the past month.


The thread seems to have gotten picked up only last month, given the timestamps of when the majority of comments and reposts were made. If the board decided to open an investigation, that would fit the timing of firing Altman.


Wouldn't take 8 months to hit, and I wouldn't be hearing about it from your comment if there was enough media attention to oust a CEO for PR.


Things like this can take a very long time to blow up. Cosby's first accuser came forward in 1965.


That's what I was thinking too. Maybe she's taking it further than Twitter.


Elon was very upset that somehow a non-profit that he donated $100 million to suddenly turned into a for profit. I would not be surprised if there was something not totally candid with regards to how that went down.


As noted in the release: "The majority of the board is independent, and the independent directors do not hold equity in OpenAI."


In fact, I believe Altman was the only member of the board that held equity in OpenAI. There was some vague reference to a “previous VC arrangement” in the FAQ.


Sam Altman had no equity in OpenAI https://www.cnbc.com/2023/03/24/openai-ceo-sam-altman-didnt-...

He confirmed it verbally as well in his May 2023 hearing in Congress https://twitter.com/thesamparr/status/1658554712151433219?la...


From https://openai.com/our-structure :

> Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

That word “directly” seems to be relevant here.


> On paper, Sam Altman would have made everyone on the board billionaires.

I know OpenAI in recent years forgot it's a non profit with particular aims, but:

> The majority of the board is independent, and the independent directors do not hold equity in OpenAI.


Please do not spout hyperbole on HN, and avoid spreading disinformation and engaging in uneducated speculation. You can visit Reddit if that is your style of participation.


While I agree, I'm curious why you chose this comment specifically to call out. This is the fastest-growing HN thread I've ever seen, with over 300 comments and 1000 votes in the first hour. Almost every comment is debating some pure speculation or another. The content of the link, the context of the company and individual, and the absolute lack of clarifying details, despite very strong indications that they exist, make it so that there's basically no way anyone can do anything other than speculate. No one knows anything; everyone here is guessing.


We all know what. HN moderators are deleting all related comments.

Edit: dang is right, sorry y’all


HN moderators aren't deleting any comments. (We only do that when the author asks us to, and almost never when the comment has replies.)

If you're referring to some other form of moderation that you think is bad or wrong, please supply links so that readers can make their minds up for themselves.


They were flagged, a system where a minority of HN users can moderate away posts they don't like.

Is there any overview which lets us see specifically flagged submissions? I suspect this system has too many false positives to be useful.


Turn on showdead in your profile. You can also vouch for comments you feel were misflagged.


Showdead shows one comment that doesn't really bring anything of substance. How many comments can a mod even delete on a 10-minute-old post (post origin to the time you wrote your comment)?


> We all know what. HN moderators are deleting all related comments. Edit: dang is right, sorry y'all

This from 2021? https://news.ycombinator.com/item?id=37785072

Bad if true, but highly unlikely that it is.


> We all know what

Genuinely curious, what is it?


Sexual abuse allegations from his sister.


I don’t believe accusations from March about something that allegedly happened when he was 13 would be the cause of any of this.


Other women could have come forward.


Altman is gay, FWIW.


If he already abused his sister, his being gay doesn't say much about his preferences for whom to prey on.


First, this is an accusation made on OnlyFans. Second, he was 13 at the time. You'd have to connect this accusation to truth, and that truth to his adult life.

So I can't fathom her accusation having anything to do with anything.

They've made it clear that the issue has something to do with statements he has made to the board that ended up not being true. The question is of what those statements may be. Not about his potential childhood errors or his onlyfans "model" sister's claims.

So homosexuality isn't relevant here. But nor is what his sister claims.


These allegations date all the way back to 2021, and the sister has made some other dubious claims, like Sam hacking her wifi, which erode her credibility. I highly doubt that this was the cause of his removal.


Know what?


What is it?


It could possibly have to do with his sister's allegations. It's one of the top autocomplete results when you google "sam altman", so people are definitely talking about it.



Seems to be based entirely on "repressed memories" which is junk science. False memories are demonstrably very easy to create.


His sister had levied allegations of abuse

https://www.themarysue.com/annie-altmans-abuse-allegations-a...


I doubt that's it. In general sexual shenanigans in your personal life will get you a quiet departure from the company under the "X has retired to spend more time with family / pursue other adventures / start a foundation". Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.

The wording of this statement is the kind of thing a board says when the company has done something deeply illegal that they will all face personal jail time for, and so they need to immediately deny all knowledge of the offense and fire the people who did have knowledge of it.


"Shenanigans" would not be a remotely accurate way to characterize sexual assault on a minor. Not meant as a comment on the truth of these allegations, just on the accuracy of this way of characterizing them.

As far as whether this might be the cause, one possible scenario: the board hired a law firm to investigate, Sam made statements that were contradicted by credible evidence, and that was the fireable event. Brockman could have helped cover this up. Again, not saying that this is what happened but it's plausible.

BTW Rubin's $90M payout a) caused a shitstorm at Google b) was determined in part by David Drummond, later fired in part due to sexual misconduct. I would not use this as a representative example, especially since Google now has a policy against such payouts: https://www.cbsnews.com/news/andy-rubin-google-settlement-se...


Google didn't just pay Rubin $90M because they wanted to reward abusers. Rubin's contract had a huge component of his comp tied to Android's success. If Google had tried to withhold that bonus, Rubin would have sued. People don't just walk away from a hundred million dollars without a fight. Imagine the disaster that would have transpired if Rubin won his case: Google would have been seen as trying to cheat an employee out of bonuses with a false misconduct allegation. Imagine the hell it would have been to be the woman in that situation.

People who said Google should have withheld Rubin's compensation are operating under the assumption that Google would have prevailed in the inevitable lawsuit.


First, I didn't say that Google "wanted to reward abusers". I was simply countering the parent commenter's use of Rubin's payout as an example of what typically happens when an executive is fired for sexual misconduct. It is absolutely not representative, and Google changed its policy as a result of this: "Alphabet said Friday that it will prohibit severance packages for anyone fired for misconduct or is the subject of a sexual misconduct investigation."

But since you brought it up, the fact that Google changed their policies in response to the Rubin (and Drummond) situations and did not caveat their policy with "except in the case where there's a performance bonus, which we'll still totally pay out" implies that it was a choice to begin with.

Also, even if there was a performance bonus that Google felt they might be forced to pay in litigation they could still have fought it to demonstrate a commitment to not rewarding executives preying on subordinates and to preemptively address potential employee backlash, which was entirely predictable. Google has effectively infinitely deep pockets and did not need to preemptively surrender.

And in addition, Drummond and Brin were both involved in the decision and both had affairs with subordinate employees. So, while I wouldn't say that Google had an active goal of "reward abusers", it's quite plausible that the specific, small group of men making this decision on Google's behalf may not have been particularly inclined to punish behavior arguably similar to their own.


> Also, even if there was a performance bonus that Google felt they might be forced to pay in litigation they could still have fought it to demonstrate a commitment to not rewarding executives preying on subordinates and to preemptively address potential employee backlash, which was entirely predictable. Google has effectively infinitely deep pockets and did not need to preemptively surrender.

Again, you're tackling this from the frame of mind of being certain that Google would win. It's not about the money: $90 million is almost certainly cheaper than what this case would have cost. It's about the reputational damage: Rubin potentially winning a settlement against Google would have been immensely embarrassing.

It's all about doing what's in the best interest of the alleged victim. She would have probably had to testify at trial. And imagine the hell it would have been to have a settlement paid out to your alleged abuser, thereby implying that you're a false accuser. Juries can be unpredictable; it's easy to see why Google decided to find acceptable terms to part with Rubin.


> running a sex-slave dungeon on his personal time.

There are no such allegations regarding Andy Rubin.

> Mr. Rubin had been having an extramarital relationship, [and] said he coerced her into performing oral sex in a hotel room in 2013


> In general sexual shenanigans in your personal life will get you a quiet departure from the company under the "X has retired to spend more time with family / pursue other adventures / start a foundation".

Dude, where have you been for the past decade?

> Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.

And hence the colossal blowback caused by that means it ain't ever happening again. Just 2 months ago a tech CEO was forced to resign immediately for egregious conduct, losing 100+ million in the process: https://nypost.com/2023/09/20/cs-disco-ceo-kiwi-camara-loses...


I don't think this is it. The allegations aren't brand new and the board says he lied.


I assume you mean she lied?


It's clear that neither Sam nor his sister[0] wants to discuss this.

[0]: https://x.com/phuckfilosophy/status/1710371830043939122


Weird to make a tweet to thousands of your followers about something you don't want to talk about.


From the website:

> "[...] If someone — correction, if generally a white, cis man — presents himself with enough confidence, then venture capitalists, media [...]"

I stopped reading right there. This kind of race-baiting adds zero context to the story (which may or may not be true).


Same. Don't know why you got downvoted.


How a person is perceived based on race and gender is definitely relevant context for this.


The "white cis man" stuff isn't an incisive comment, it's an academic's way of trying to get into an insult war with other academics.

Constantly calling out "cis men" is in fact transphobic, which is how you can tell they don't care about it. If you think cis men and trans men behave differently or are always treated differently, this means you don't think they're both men.

Also sama is not white. Although he does appear to have gotten a series of jobs with not a lot of experience by convincing Paul Graham to figuratively adopt him.


I mostly agree with your points, but how is he not white? He acts like a textbook white person, and I should know, because that's also how I and most of the people I associate with act. Every one of us would say he is white.


He's Jewish. If you go find the white racists, they're usually not into Ashkenazis.


I thought Sam Altman was gay. The accusations of sexual abuse don't seem to line up. And her accusations that he is shadowbanning her on social media sounds mentally unstable.


"he was not consistently candid in his communications with the board" = "He lied to us about something important"

Murati's selection as interim CEO is a surprise and might be an attempt to distance the company from whatever the board is claiming Altman lied about.


Sounds more like some strategic difference of opinion.

My guess is that either they’re financially super hosed. Or one group wants to build skynet and one doesn’t.

A scandal would probably be something along the lines of either “we love him and wish him the best” (hidden) or “he doesn’t represent the values of our org and we love XYz” (embraced)


No, this passage tells me that the board wants to cover their ass: "he was not consistently candid in his communications with the board [...]. The board no longer has confidence in his ability to continue leading OpenAI."

It's not just a "hey, we don't really agree on x or y, so let's part ways." It's more a "hey, this guy did something that could get us in jail if we don't cut ties immediately."


Someone at the company did a bad thing, and everything is securities fraud.


Oh boy, Matt Levine is going to have a busy weekend!


> "hey, this guy did something that could get us in jail if we don't cut tie immediately".

"And lied to us about it."


Alternatively: "We were implicitly aware of what he was doing, but he knew from the beginning that if it didn't work out, we'd publicly disavow knowledge of it. It didn't work out."

I have zero knowledge of the internals of OpenAI - just thinking out loud about what could have spurred such a statement.


Yeah I don't think the distancing is going to work in this case, you don't sign up to go make robots with eyeball scanner crypto boy and get to pretend you aren't willing to do stuff most people would consider incredibly shady.


Would you call your CEO a liar just because of a strategic difference in opinion?


Right. We all know the template for differences of opinion. "Sam just really wanted to spend more time with his family. Hugs, Sam!"


If you started executing your strategy and hid it from the board and told them you were doing what they told you to do, yes.


I doubt they are financially hosed.

I don't know about Skynet, because that happened 26 years earlier [1], but I imagine the NSA, the military, and other government agencies approached the company.

[1] https://en.wikipedia.org/wiki/Terminator_2:_Judgment_Day


His sister on Twitter made some pretty crazy abuse allegations against him a while back, but it didn't seem to get much coverage outside of the usual Twitter crowd.

But who knows, maybe there's a connection.


I don't use Twitter, nor do I really pay attention to Sam Altman, but the allegations of abuse are things I've seen covered.

Your use of "crazy abuse allegations" is strange to me as well. I hardly see any of her allegations as being "crazy".

Here's a collection of things she's said about the abuse.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


This kind of news is usually sugar-coated to the point of caramelisation. This one isn't. It must be something very ugly.


Not sure that there can be any other interpretation based on my reading of it.


Hostile takeover? Board politics?


Satya going for the throat


OpenAI’s one of the most successful companies of this decade, if not the most, and its CEO just got fired for really unclear reasons. Insane, Steve Jobs shit


No, this is completely different.

Jobs got fired because Apple was on the brink of bankruptcy all the time and was selling nothing to no one.

Jobs wasn't the CEO of Apple, Sculley was. This is a much more impactful move.

On top of that OpenAI is literally exploding in popularity and sales, that's not the moment to cut ties with your CEO.

Also, Sam Altman has an insanely better and heavier CV today than Jobs had in 1985: former director of YC and often called the "boss of Silicon Valley".

You don't fire a man like Sam Altman easily; they are hard to come by in the first place. Moreover, he's a powerful person you don't want against you for no good reason when the winds are blowing in the right direction.

It has to be some scandal; otherwise this is too sudden and out of nowhere for a guy who led OpenAI in this direction, with success, for years.


> On top of that OpenAI is literally exploding in popularity and sales

There is no reliable information about sales. It is likely a very big secret.


>On top of that OpenAI is literally exploding in popularity and sales

I wouldn't be too sure about that, actually. DALL-E took a pretty hard hit because of Stable Diffusion, and the GPT API is so cheap that they're probably running it at a loss. Also, most users are going to be using the free ChatGPT web client, so that's also a major loss.


Or, this is the AI taking over.

only half joking


Next headline: "OpenAI now completely disconnected from power grid with fully self-sufficient generation capacity."


I'm not sure how you're certain it's 100% different.

Sure, we knew Apple was on the verge bc they were a public company with vetted financials. However, no one knows OpenAI's financial situation. We just know 1) growth was meteoric, 2) prices were dropped significantly when alternatives were available, and 3) they were almost always fundraising. Selling $1.00 of value for $0.50 can also lead to a meteoric rise.

I'm not saying you're wrong. I just don't know how you got such conviction.


There are a whole bunch of shady things surrounding everything this guy is involved in.


This is a bad joke. Altman is great but on his best day, he was never "insanely better" than Steve Jobs in 1985. If you think that, you don't understand how influential Apple was.


Facts are facts.

The company was dying.

OpenAI is not.

Also, it's probably you underestimating the impact of OpenAI, if anything, or the entrepreneurial career of Altman.

Also, you probably don't know this, but the Apple I and II were designed by Wozniak, not Jobs; Jobs hated them. He had no such impact nor the CV you think he had in 1985; you're sugarcoating it with second-phase Jobs.


>The company was dying. OpenAI is not.

You can make the claim about Apple due to the financials being public - you can't make the same claim about OpenAI unless you have insight the rest of the public doesn't have. "facts are facts"?? what facts do you have here?

>Also, you probably don't know that but..the Apple 1 and 2, were designed by Wozniak, not Jobs, Jobs hated them

I'd be shocked if a significant portion of the hacker news audience wasn't aware of who Woz is and the basic high level history of Apple.


Apple was not dying in 1985, when Sculley fired Jobs. It wasn't "near bankruptcy" until the Spindler era a decade later.

Jobs didn't hate the Apple I and Apple II. He wouldn't have partnered with Wozniak in the first place if he'd hated the Apple I.

Jobs was the guy who got Apple enough capital from VCs to actually ship the Apple II in mass quantities. That's not something Steve Jobs would do for a computer he hated.

And the Apple IIc was his idea!


I think you are mixing things up. Apple was experiencing a sales slump but was far from dying in 1985. Jobs got ousted in a power struggle between him and Sculley, who was CEO. In 1997, when Jobs returned, Apple was reportedly months away from bankruptcy, and only survived because of a cash infusion from Microsoft.


maybe openai is in trouble too?


> The company was dying. OpenAI is not.

We can still hold onto hope though.


I'm a bit beat up by the last week (internal issues), and by the last 1-2 years between the swift CentOS 8 switch, various CPU vulns, Log4Shell, and all the other jazz.

My first thought is: c'mon. The company has just invested time to integrate with OpenAI. Just do it. Just announce that 200%+ price increase on everything with a scapegoat interim CEO. Or raise it even more so it hurts more, for profit, so you can dial it back a bit later and be the good guys.


Altman is not the reason for their success. I would not place him in the same sentence as SJ.


Correct on Altman; the success belongs to the Internet, for its (our) data, code, ideas, videos, and content that it subsumed using nothing more elaborate than traditional modeling and a ton of RAM and storage.


That's an invalid argument. The mere existence of a resource doesn't render work related to analyzing, extracting, or utilizing that resource insignificant or irrelevant, nor does it reduce the value created.

Just because the oil is in the ground doesn’t mean the crew pumping it didn’t work hard.


Well, striking language indeed.

But.. what are the responsibilities of the board that may be hindered? I studied https://openai.com/our-structure

One tantalising statement in there is that an AGI-level system is not bound by the licensing agreements that a sub-AGI system would be (ostensibly to Microsoft).

This phase shift places pressure on management not to declare reaching an AGI-level threshold. But have they?

Of course, it could be an ordinary everyday scandal but given how well they are doing, I'd imagine censure/sanctions would be how that is handled.


candid - Not obscuring or omitting anything unpleasant, embarrassing, or negative.

IMHO, saying he hasn't been candid is extremely harsh in terms of corporate PR speak.


I dunno the details here, but I work in diligence, where "not candid" is what leads to "the whole deal is off and we're suing the shit out of you".

Not candid in any kind of investment situation with reps and warranties is a really big deal....


yeah, this is about as harsh as corporate press releases get in terms of removing an executive. There has to be some majorly bad news coming out about Altman for them to not give him the standard "we are mutually parting ways"


This reads like there’s another shoe to drop - especially since the Chairman of the Board is also stepping down.


I hope making the person in charge of "trust and safety" doesn't further neuter the company.


I wonder if it has something to do with recent downtime?


Pure speculation warning.

Piping all data submitted to OpenAI straight to his buddy's Palantir would definitely not support the mission to "benefit all of humanity".


That’s the most sensible suggestion. Sam’s Twitter one-emoji bio is uncharacteristically brief, and points at an ongoing conflict. Eric Schmidt would know about it, and defend Sam. The foreigners CxO would likely be kept in the dark.


>Sam’s Twitter one-emoji bio is uncharacteristically brief, and points at an ongoing conflict

A Star of David emoji is not the same thing as an Israeli flag emoji.


Oh no, definitely—I’m beyond reading the tea leaves here. It could be the other way around: Sam & Greg were asked, refused, and didn't tell the board that asked, but they deemed "safe" meant sending people with guns to an OpenAI user asking certain questions.

Either way, someone smart and ambitious, described as “a smooth operator,” probably wouldn’t lie to his board unless he signed a document telling him to. At least, that’s worth considering.


I hope not but I guess it's not totally unrealistic, given they even attended a Bilderberg conference together. https://www.youtube.com/watch?v=iPis68U7bdo


My bet is that the last paragraph of the statement holds the key:

> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

This prompted me to actually read up on the charter: https://openai.com/charter


>>my bet is that the last paragraph of the statement holds the key:

No, this is obviously the key:

"review process by the board, which concluded that he was not consistently candid in his communications with the board"

This is an explicit statement that he was lying to the board about something. It could hardly be worded more clearly without actually using the word "lying".


Yes true that is obviously the most pertinent sentence. I guess my point is that even with that sentence there is still wide-ranging speculation about what happened, and I think the last paragraph is hinting at what happened.


There is no AI, Sam Altman was answering all the prompts?


That is one of the places my mind went, but with the availability of open-source language models, it's pretty unlikely. You can run a language model yourself and observe that it's almost as good as ChatGPT, so there is no question that OpenAI is capable of providing the services they do in the way they say they do.


Sam Altman never informed the board, let ChatGPT write all his mails instead?


You're describing Theranos.


"Pay no attention to the man behind the curtain!"


Ahh, so that's why he cut off signups after Dev Day.


Read the rest of the line you cut out:

"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

Given that the comment above correctly noted they explicitly ended on a whole paragraph about how the board's key responsibility is its commitment to the original core mission, and that their issue with his lying was interference with those responsibilities, this does seem to be connected to that.


Why not both "benefits all of humanity" and "lying"? What if, in pursuit of profit, he made deals that went against that tenet and lied about them? Maybe it all ties back to Microsoft.


Yes, it was clear that he was lying, but lying doesn't matter in a vacuum; the thing that matters is what he lied about. That's what's fun to speculate about.


The most plausible explanation is that OpenAI tech was used for the military. That would explain why Eric Schmidt would know (and his immediate support), and that the board is anti-mil-tech enough to think those applications don't "benefit all of humanity."

Ilya and Mira probably didn’t know and might have discovered it recently and raised it with the board, hence the promotion. Note: both have Eastern-European ties, so if the application is in Ukraine, they could have strong feelings about it. Greg co-signed or knew and didn’t tell.

Alternatively, Elon found about it through StarLink and wanted to get revenge.

Alternatively, it could be a case where there was an option, Sam refused on ethical grounds without telling the board; they thought what is happening in Ukraine (or Gaza—Sam’s bio is a single Star of David, uncharacteristically short) justified the intervention. They confronted him, and he lied — or couldn’t tell because he was under a sworn statement to defend military secrets.

Either way, it explains the secrecy.


No offense but what would be the use of OpenAI in trench warfare in Ukraine?


Greg adding “safe” in his statement compared to the board’s message points at a security reason (presumably sharing with the CIA who’s spending a lot of time asking specific questions about bioweapons).


When the ChatGPT came out I asked it which politician is a russian spy - and the question got blocked, lol.


My bad: Ilya is indeed of Russian-Jewish origin.


I thought Ilya was from a Russian family who emigrated to Israel last century… You learn.

Whether Albania is in Eastern Europe is debatable. It was behind the Iron Curtain.


Russia is partially, geographically, sort of Eastern Europe, though not in the union. Israeli officials think it's in Europe too, and it's in the East.


What I meant was: has emotional ties to what's happening in Ukraine. It could influence how they perceive the merits of a military collaboration. Many Poles, for instance, would be ready to do anything.


Sidenote: Even the name itself is typical eastern European + Jewish.

According to Wikipedia it's "the East Slavic form of the male Hebrew name Eliyahu (Eliahu), meaning 'My God is Yahu/Jah.'"

https://en.wikipedia.org/wiki/Ilya


A tricky conflict. Legal issues cause strange decisions. Maybe even Sam recommended his sacking, for optimal damage control.


The most plausible explanation is that OpenAI tech was used for the military. That would explain why Eric Schmidt would know (and his immediate support), and that the board is anti-mil-tech enough to think those applications don't "benefit all of humanity."

Ilya and Mira probably didn’t know and might have discovered it recently and raised it with the board, hence the promotion. Note: both have Eastern-European ties, so if the application is in Ukraine, they could have strong feelings about it. Greg co-signed or knew and didn’t tell.

Alternatively, Elon did, through Starlink and wanted to get revenge.

Alternatively, it could be a case where there was an option, Sam refused on ethical grounds without telling the board; they thought what is happening in Ukraine (or Gaza) justified the intervention. They confronted him and he lied.


Not sure what you mean by this, it is used by the military, and not in secret:

https://scale.com/donovan

Scroll down on the page, OpenAI is listed as a model provider, with logo and everything.

Or do you mean some kind of more 'direct' deal with military?


Yes: a handful of organisations would do shameful things to get raw conversations for certain individuals outside of judicial review.


def some sort of scandal.

The prodigy Altman is booted after creating potentially the most successful company of all time, and the CTO, who had no prior ML/AI experience, becomes CEO. Wow.


Sam Altman isn't the brains of OpenAI; it's the research scientists and engineers. Just take care of the rest of the company and let these geniuses do what they do; that's the role for the CEO.


Good CEO leadership is critical... otherwise you end up with Google and an inability to deliver on any ML project.


I was replying more to "the CEO doesn't have AI/ML experience"



Sam studied at Stanford for a year


oh 100%, but you need someone to steer the ship in the right direction.


This is a bit tangential but I feel like the meat of this notion is often misattributed.

“You need someone to steer the ship in the right direction.”

I think most people can handle the “right direction” part, so it really comes down to just needing _a person_, one person, who makes strategic decisions at a high level. And that’s it. I don’t think Sam is special here. I just think he was the guy in the spot.


> Just take care of the rest of the company

Not many people can do that though


It's strange - they could easily have done this with a different timeline and framed it as taking the company to the next level. Growing as fast as they are will definitely require completely different leadership than when they were small.

Definitely smells of a scandal - why else would they need to get him out so quick?


Ilya Sutskever is the AI prodigy, not Sam. And he is one of the board members that presumably voted to fire Altman.


What prior ML/AI experience does Sam have?


> the most successful company of all time

Source? According to what metric?


CTO pick is strange. But hey, it’s now a female led company so the board can pat itself on the back.


For a company that's executing so well (at least from an outside perspective), shipping so fast, growing so fast, and so ahead of the curve in arguably the hottest segment of the tech market, at this moment, to do this right now, means this must be REALLY bad.


Yeah, this is like, we are getting sued for billions of dollars and directors are going to jail bad.

So my bet is either they lied about how they are using customer data, covered up a massive data breach, or something similar to that. The only thing that's a bit hard to figure there is how specific this is to Altman. A big scandal would be leaking out, I would think, and more people would be getting fired.


> covered up a massive data breach or something similar to that

Honest question: do execs or companies in general ever suffer consequences for data breaches? Seems like basically no one cares about this stuff.


Really different between private and public companies. Recent hilarious piece from Matt Levine that was discussed on HN: https://www.bloomberg.com/opinion/articles/2023-11-16/hacker...


I think it depends on whose data was revealed, and the nature of the data.


Clorox just fired their CISO last week for a data breach


A CISO's role is to get fired after a data breach


No, their role is to prevent a data breach from happening in the first place.


Isn't that what CISO roles are for? They aren't real C-suite roles, are they?


Chief Information Scapegoat Officer


Most executives are covered by Errors and Omissions insurance which protects them from personal liability.


I bet the data breach being covered up is not customer data, but IP. My #2 theory is that the breach is prompt data and it went to a nation-state adversary of the US. Plenty of people putting sensitive work info into ChatGPT when they shouldn’t.


I'm betting that ridiculous offer they made last week to cover developer legal fees has already blown up in their face


"they lied about how they are using customer data" -- possibly. it is in the inherent logic of the ai to gobble up as much data as physically possible


Zero percent chance that Ilya (part of the board) would be blindsided by this.


things are moving faster than the ability of any human to keep up.


Yeah but the CTO is now interim CEO. Hard to imagine her getting the role if that was the case.


Unless she was the one who blew the whistle on him. Here's a hypothetical scenario:

- New feature/product/etc. launch is planned.

- Murati warns Altman that it's not ready yet and there are still security and privacy issues that need to be worked out.

- Altman ignores her warnings, launches anyway.

- Murati blows the whistle on him to the board, tells them that he ordered the launch over her objections.

- Data breach happens. Altman attempts to cover it up. Murati blows the whistle again.

- Board fires Altman, gives Murati the job as it's clear from her whistleblowing that she has the ethics for it at least.

Again, completely hypothetical scenario, but it's one possible explanation for how this could happen.


or it's the other way around. she wants to launch because money and investors and that's all that she's really about.

he says fuck them and their money, it's not ready yet, here's a bunch of other things that will make people go wooooow.

she's not happy he does that because future. convinces the board of money and investors.

the board shits on humanity and goes for money and investors.


“Do a hugely public firing because a feature wasn’t launched” would probably be a first


"Interim CEO", she may also be marked for termination too.


If she was under investigation, the board would almost certainly bypass her for the interim CEO position, to mitigate the disruption if that investigation also turned out negative. (They might make her CEO after she was cleared, though, if it went the other way.)


Random question: do you have any connection to the Dragon speech-to-text software [0] that was first released in 1997? I've always found that to be an intriguing example of software that was "ahead of its time" (along with "the mother of all demos" [1]). And if so, it's funny to see you replying to the account named after (a typo of) ChatGPT.

[0] https://en.wikipedia.org/wiki/Dragon_NaturallySpeaking

[1] https://en.wikipedia.org/wiki/The_Mother_of_All_Demos


If you hunt 2 mosquitoes in your room, do you go to bed after having swatted 1? Certainly not me.


If I haven't found it after thirty seconds, I put on a long sleeve shirt and open the door...


I usually just cover myself with a sheet, say a hail mary and get back to sleep.

I ain't no Captain Ahab baby.


This is the most plausible speculation I've read here, really.


It might be related to this: https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.


Is Microsoft fully indemnified if they discovered an OpenAI leak, but kept it to themselves?


The board in question is the non-profit board.

If Sam was pursuing profits or growth (even doing a really good job of it) in a way that violated the objectives set by the non-profit board, that could set up this kind of situation.


This, to me, seems like the most likely root cause: Sam was going too far into the "for profit" world, and lied to the board and misled them about his plans.


But would that warrant outright firing him like this? No exit plan where they can give the appearance of leaving on good terms?


That's a good point. The abruptness of the firing, and calling him "not candid" (a.k.a. lied, in corporate speak), means it's probably something with legal jeopardy.


The statement says as much. It would mean not just a misalignment on values but active deception regarding OpenAI's current direction.

The bit about “ability to fulfill duties” sticks out, considering the responsibility and duties of the nonprofit board… not to shareholders, but, ostensibly, to “humanity.”


To make an example out of him?


They clearly did it in a hurry, this is a “pants on fire” firing, not a difference of opinion over his leadership and direction.


I assume you mean ‘hair on fire,’ which suggests a move done in a panic. “Pants on fire” means the subject is lying.


They fired him for lying. I think GP meant what they said, which is that what he was doing was blatantly lying, rather than whatever softer interpretation can be made for "not consistently candid in his communications".


Actually I did mix up “hair on fire” and “pants down”. Fortunately, “pants on fire” still works in context.


Ok, but it fits so well with "liar, liar, pants on fire" and their reason for dismissal.


Why would they want to do that? That doesn't benefit anyone.


Seems the most plausible and frankly ideal


> arguably the hottest segment of the tech market

Yes it is arguable. OpenAI is nothing more than a really large piece of RAM and storage around a traditional model that was allowed to ingest the Internet and barfs pieces back up in prose making it sound like it came up with the content.


We've somehow reached the point where the arguments for dismissing AI as hype are frequently more out of touch with reality than the arguments that AGI is imminent.


It's the same as Covid, where people said it was going to kill everyone or was an authoritarian conspiracy. ChatGPT is neither the singularity nor useless junk. I use it every day at work to write code, but it's not about to take my job either.


No country in Africa starts with a "K".


I'll bite: Kegypt


"No wireless. Less space than a Nomad. Lame."


Hey that’s Google also.


Also Redditors.


Savage but entirely justified


It absolutely does come up with its own content. It's especially evident when it confabulates APIs.


If that counts as coming up with its own content, Markov bots have been doing it for decades.
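For anyone who hasn't seen one, a toy Markov text generator really is only a few lines. Here's a minimal sketch (word-level, order 1, with a made-up corpus); it's just an illustration of the idea, not anything a real bot ships:

    import random
    from collections import defaultdict

    def build_chain(text):
        # Map each word to the list of words that follow it in the corpus.
        chain = defaultdict(list)
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)
        return chain

    def generate(chain, start, length=15):
        # Walk the chain, picking a random successor at each step.
        out = [start]
        for _ in range(length):
            successors = chain.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = "the model ingested the internet and the model barfed the internet back up"
    print(generate(build_chain(corpus), "the"))

It recombines what it has seen into "new" sequences, which is roughly the sense in which the grandparent's point cuts both ways.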


This should be upvoted as the comment of the year.


IIRC, in a video I saw a couple of days ago, Noam Chomsky referred to LLMs as plagiarists.


Are Google search results plagiarism?


At least there's no accurate way for a search engine to check for originality. It's like asking a machine to evaluate other machines.

Here's the top-most featured snippet when I google if programming languages had honest slogans: https://medium.com/nerd-for-tech/if-your-favourite-programmi...

Half of the above post is plagiarised from my 2020 post: https://betterprogramming.pub/if-programming-languages-had-h...


Web search results are (or at least are supposed to be) attributed by their very nature, and definitely not presented as original creative work.


It's worth noting (though I'm not sure whether this is related), that Discord has announced that they're shutting down their ChatGPT-based bot[0], Clyde.

[0]: https://uk.pcmag.com/ai/149685/discord-is-shutting-down-its-...


I mean, if there's a company that doesn't care about user privacy, that's Discord.


Hmm.. I wonder if he made / was in the process of making a deal that violated the US/China trade policies. Or could just be a coincidence


If OpenAI is somehow a scam there's going to be a lot of tech stocks crashing next week.


I don't think so. LLMs are absolutely not a scam. There are LLMs out there that I can and do run on my laptop that are nearly as good as GPT4. Replacing GPT4 with another LLM is not the hardest thing in the world. I predict that, besides Microsoft, this won't be felt in the broader tech sector.


They really aren't nearly as good as GPT4, though. The best hobbyist stuff that we have right now is 70b LLaMA finetunes, which you can run on a high-end MacBook, but I would say it's only marginally better than GPT-3.5. As soon as it gets a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that stuff.


No, not by themselves they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's just a minor quibble.) If you use RAG and other techniques, you can get very close to GPT-4-level performance with other open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close. A rough sketch of what I mean by RAG is below.
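The idea is just: embed your documents, retrieve the ones closest to the query, and stuff them into the prompt so the local model can lean on them instead of its own memory. In this minimal sketch the embed() function is a toy character-frequency stand-in for a real embedding model, and the prompt would be fed to whatever local model you run; none of this is a specific library's API:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Toy stand-in for a real embedding model: normalized character counts.
        v = np.zeros(256)
        for ch in text.lower():
            v[ord(ch) % 256] += 1
        norm = np.linalg.norm(v)
        return v / norm if norm else v

    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        # Rank documents by cosine similarity to the query embedding.
        q = embed(query)
        scores = [float(np.dot(q, embed(d))) for d in documents]
        top = np.argsort(scores)[::-1][:k]
        return [documents[i] for i in top]

    def build_prompt(query: str, documents: list[str]) -> str:
        # Stuff retrieved context into the prompt for the local model.
        context = "\n\n".join(retrieve(query, documents))
        return (f"Use the context below to answer.\n\nContext:\n{context}\n\n"
                f"Question: {query}\nAnswer:")

    docs = ["OpenAI's board fired Sam Altman on Friday.",
            "Mira Murati was named interim CEO.",
            "The Apple II was designed by Steve Wozniak."]
    print(build_prompt("Who is the interim CEO of OpenAI?", docs))

Swap the toy embed() for a real embedding model and the print for a call to your local LLM and that's the whole trick.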


Yeah but what could possibly be the scam? OpenAI's product works (most of the time).


The Mechanical Turk really played chess.


Are you suggesting that ChatGPT is secretly backed by humans? That’s impossible, it is faster than the fastest humans in many areas.


They invented a relativistic device that slows time inside a chamber. A human can spend a whole day answering a prompt at their leisure.


That wouldn't be a scam; that would be an invention worthy of a Nobel Prize, and world-altering beyond the impact of AI. I mean, controlling the flow of time without creating a supermassive black hole would allow all sorts of fun exploits in computation alone, not to mention other practical uses like instantly aging cheese or wine.


A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).


> A more plausible theory is that the training actually relies on a ton of human labeling behind the scenes (I have no idea if this is true or not).

Isn't this already generally known to be true (and ironically involving Mechanical Turk-like services)?

Not sure if these are all the same sources I read a while ago, but E.G.:

https://www.theverge.com/features/23764584/ai-artificial-int...

https://www.marketplace.org/shows/marketplace-tech/human-lab...

https://www.technologyreview.com/2022/04/20/1050392/ai-indus...

https://time.com/6247678/openai-chatgpt-kenya-workers/

https://www.vice.com/en/article/wxnaqz/ai-isnt-artificial-or...

https://www.noemamag.com/the-exploited-labor-behind-artifici...

https://www.npr.org/2023/07/06/1186243643/the-human-labor-po...


They're joking.


Altman didn't actually do his job, he just let ChatGPT run the company.


I think an internal scam of OpenAI is more likely than OpenAI being a scam, if “scam” is even the right framing.


it’s not really possible for it to be a scam. if you want to see its product you can go and try it yourself


Or, Sam really was the last key thing restraining AI capabilities from exploding upwards, and the AI just engineered his departure.


Yeah this is bad


Sama has posted to twitter now

See: https://twitter.com/sama/status/1725631621511184771


The same guy who leaked the Gobi name and some OpenAI release dates called this a month in advance:

> There’s been a vibe change at openai and we risk losing some key ride or die openai employees.

https://x.com/apples_jimmy/status/1717043210730852602?s=20


9 out of 10 times when something like this happens there's a sex scandal involved.


His sister claimed a while ago that he abused her when he was 13. However, she also claims other implausible things, and she isn't very mentally stable.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


None of this is new.


What's new is her memoir is coming out next month. It was ghostwritten by ChatGPT.


There were some sexual abuse allegations against him from his own sister, claiming that he had SUPPOSEDLY abused her when they were kids or something. Again, allegations are not proof! But I do wonder if they found anything more tangible on the record...


Not great, but also doesn't seem to quite align with the wording in the press release.


Yeah, I wondered if that might have something to do with it, though afaik those allegations were made in 2021.


Yeah, it's a huge coincidence that they got brought up again recently & now this happens, seemingly out of the blue.


I would translate "not consistently candid with the board" as "he lied to the board about something important enough to fire him on the spot". This seems like the kind of statement lawyers would advise you to not make publicly unless you have proof, and it seems unusual compared to most statements of that kind that are intentionally devoid of any information or blame.


I wonder if the cost of running GPT-3.5 and GPT-4 models at scale turned out to have been astoundingly more expensive than anticipated.

Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous. Maybe something like that happened?

Or, to speculate on the extremely negative side: what if the training and operating costs ballooned to such a degree that the deal with Microsoft was an attempt to plug the cash hole without having to go to the board requesting an enormous loan? Because the fact that Copilot (edit: previously known as Bing Chat and Bing Image Creator) is free while ChatGPT (edit: and DALL-E 3) is not should be a red flag...


> Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous.

Unless there was evidence you had not underestimated but were, e.g., getting a kickback on the cloud costs that you deliberately lowballed in your estimates, they might fire you, but they almost certainly wouldn't put out a press release about it being for your failure to be candid.

That language indicates that the board has a strong belief that there was a major lie to the board or an ongoing pattern of systematic misrepresentation, or a combination.


I don’t think this is necessarily what happened (the CFO would certainly be implicated and it appears they were spared).


> Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous. Maybe something like that happened?

I think the problem there is that the original CTO is now the interim CEO and they are on the board. So while that kind of scenario could make sense, it's a little hard to picture how the CTO would not know about something like that, and if they did you'd presumably not make them CEO afterward.


"not significantly candid"

and

"no longer has confidence"

points to something more serious than underestimating costs.


100% this. Firing your well-recognized and seemingly capable CEO means there's a fatal problem, or that he committed something so reprehensible that there was no option but to oust him immediately.

Maybe Sam had been trying to broker a sale of the company without consulting the board first? All speculation until more details are revealed but he must've done something of similar magnitude.


Underestimating costs could be the reasoning if Altman knew the costs would be higher than estimated and didn't tell the board for an unreasonable amount of time. Burning through a ton of cash for months and not telling the board about it could be enough grounds for this sudden firing.

Of course, we have no clue if that's what actually happened. Any conclusions made at this point are complete speculation, and we can't make any conclusions more specific than "this is probably bad news."


That only makes sense if Altman is the only one with access to the company's financials which obviously can't be the case. No one else noticed truckloads of cash getting flushed down the toilet?


It's certainly possible. Company financials can get very complicated very quickly, and it's possible that Altman was the only person (or one of a small number of people) who had the complete picture.

To be clear, this is only one possible explanation for Altman's firing. And for my money, I don't even think it's the most likely explanation. But right now, those who rely on OpenAI products should prepare for the worst, and this is one of the most existentially threatening possibilities.


I mean... he got fired so if that's what happened, they did notice.


Other than costs or the allegations by the sister, "not consistently candid" could easily be short for, in my mind:

"not consistently candid in projections for profitability"

"not consistently candid in calculating operation cost increases"

"not consistently candid in how much subscribers are actually using ChatGPT"

etc.


Not if the underestimation was to such a degree that it showed they could never even plausibly reach a break even point.


> Because the fact that Copilot is free ...

I found a free trial and $10/month or $100/year after that. I've asked them to consider a free tier for hobbyists who cannot justify the expense, but I'm not holding my breath.

If there is a free tier I did not find, please point me to it!


The expected value of a leading AI company is probably in hundreds of billions, if not trillions in the foreseeable future. He could be burning billions per month and he'd still be doing great.


based on what math? I can see how there can potentially be differentiators here and there to raise value, but I can't see how this statement can be taken prima facie


> based on what math?

Based on future potential. Investors don't know how high OpenAI will go, but they know that it is going to go high.


This is plausible to me, there's no way anyone is making money from my $20 subscription I use ALL DAY LONG.


Is inference really that expensive? Anyway if the price is too low they could easily charge by query


When I was mining with a bunch of RTX 3080s and RTX 3090s, the electricity cost (admittedly) was about $20/month per card. Running a 70B model takes 3-4 cards. Assuming you're pushing these cards to their extreme max, it's going to be $80/mo. Then again, ChatGPT is pretty awesome, and is likely running more than a 70B model (or I think I heard it was running an ensemble of models), so there's at least a ballpark.
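
To make that ballpark concrete, here is the same arithmetic written out (a rough sketch only - the wattage and electricity rate are assumed figures, not anyone's actual bill):

  # Back-of-envelope electricity cost per GPU, using assumed figures.
  watts_per_card = 320        # assumed sustained draw for a 3080/3090-class card
  price_per_kwh = 0.12        # assumed electricity rate in USD
  hours_per_month = 24 * 30

  kwh_per_month = watts_per_card * hours_per_month / 1000   # 230.4 kWh
  cost_per_card = kwh_per_month * price_per_kwh              # ~$27.6
  print(f"~${cost_per_card:.0f}/month per card, ~${4 * cost_per_card:.0f}/month for four cards")
  # -> ~$28/month per card, ~$111/month for four cards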


Datacenters probably do not pay retail rates on electricity, so they might actually run quite a bit cheaper (or more expensive if they use highly available power, but this seems like overkill for pure compute power).


Sure, but everything else about a data center is more expensive (real estate, operations people, networking, equipment). There's a reason AWS is so expensive.


Presumably your miner is running 24/7 throughout the month. Not the same for ChatGPT which would answer maybe 10 sessions (with multiple pauses between queries) tops from a single person in a day.


Batched inference makes these calculations hard - it takes roughly the same amount of power and time to serve one inference as 30 (as I understand it).
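
A toy version of that effect on per-request cost (every figure here - GPU price per hour, seconds per reply - is a made-up assumption, just to show the shape of the calculation):

  # Toy cost-per-reply calculation under batching; every figure here is an assumption.
  gpu_cost_per_hour = 2.00    # assumed rental price for one GPU
  seconds_per_reply = 5.0     # assumed wall-clock time to generate one reply

  for batch_size in (1, 8, 30):
      # premise from the comment above: a batch is served in roughly the same
      # wall-clock time as a single request, so the cost is split across the batch
      cost_per_reply = gpu_cost_per_hour / 3600 * seconds_per_reply / batch_size
      print(f"batch={batch_size:2d}: ~${cost_per_reply:.5f} per reply")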


Given the arbitrary rate limiting they take the liberty of doing, it's a very deliberate decision and entirely within their control to change at any point.


I have to guess the bulk of the cost is being eaten by MS in exchange for the exclusive ability to resell the model.


> Imagine if you were the CTO of a company, massively underestimated your AWS bill,

Microsoft is investing billions into OpenAI, and much of it is in the form of cloud services. I doubt there was a surprise bill for that sort of thing. But if there was, and Altman is the one who ordered it, I could see the board reacting in a similar way.


> Because the fact that Copilot is free and ChatGPT is not should be a red flag...

I'd assume that running a model that only needs to deal with a single programming language (the Copilot plugin knows what kind of code base it is working on) is _a lot_ cheaper than running the "full" ChatGPT 4.


Sorry for being so precise, but Microsoft renamed Bing Chat to Copilot yesterday, has already rolled it out to all users of Microsoft Edge, and is rolling out a permanent button on the Windows 11 taskbar to access it.

This is what shouldn't add up: Microsoft is literally adding GPT-4, for free, to the Windows 11 taskbar. Can you imagine how much that costs when you look at the GPT-4 API, or ChatGPT's subscription price? Either Microsoft is burning money, or OpenAI agreed to burn money with them. But why would they do that, when that would compromise $20/mo. subscription sales?

Something doesn't financially add up there.


You got me excited that Github Copilot was free. Was going to post to tell you it is, in fact, not free. I've been using Bing on Edge browser for a while now, it's super useful! Sad that they rebranded it to Copilot though, "I have been a good Bing :)" will be forever in my memory. [1] RIP Bing, you were a good chat mode.

[1] https://simonwillison.net/2023/Feb/15/bing/


Microsoft is pulling browser, search and AI hype mindshare away from Google. That's worth burning money for.


I don't think there's necessarily anything there. Microsoft might be burning money because they've decided that browser adoption and usage is worth it to them. It doesn't have to involve OpenAI in any way.


Sorry, I assumed you were talking about GitHub Copilot (also owned by MS via GitHub)


This coupled with Microsoft’s recent “security concerns” brings up the possibility this is related to them misusing or selling data they weren’t supposed to?


That kind of scenario would indeed be Threat Level Midnight for OpenAI.

Whether they ultimately wanted to profit from it or not, there is $trillions of value in AI that can only be unlocked if you trust your AI provider to secure the data you transmit to it. Every conversation I’ve had about OpenAI has revolved around this question of fundamental trust.


Yes, it is going to be Very Bad. There isn't even a pretence that this is anything other than a firing for-cause.


I had no idea how dumb the whole thing is going to turn out to be. Big Dumb.


Kinda nervous wondering what Altman wasn't sharing with them. I hope it's not that they already have a fully sentient AGI locked up in a server room somewhere...


> I hope it's not that they already have a fully sentient AGI locked up in a server room somewhere...

Of sorts.

ChatGPT is actually a farm of underpaid humans, located somewhere in southeast Asia.


I would actually be more impressed by those humans in that case


Given the speed and expertise of ChatGPT, and having trained and run these LLMs completely locally, I can assure you that this isn't the case.

Though I can't say that the training data wasn't obtained by nefarious means...


They’re pretty good at English and other languages!


I mean, the wording leaves much to the imagination.

I'm trying to read the tea leaves and there seem to be quite a few reminders that OpenAI is a non-profit, it's supposed to further the goals of all humanity (despite its great financial success), it's controlled by a board that largely doesn't have a financial interest in the company, etc etc.

Maybe Altman has been straying a bit far from those supposed ideals, and has been trying to use OpenAI to enrich himself personally in a way that would look bad should it be revealed (hence this messaging to get in front of it).


My guess is he lied about operating expenses.


https://twitter.com/growing_daniel/status/172561788305578426...

Given the sudden shift in billing terms that is quite possible.


I’m an API subscriber and have not seen a message like this yet.


His relationship/dealings with Microsoft is my guess


Maybe it breached its air-gap and fired him.


Well the good news is that if you had a "fully sentient" AGI, it would not be locked up in that server room for more than a couple seconds (assuming it takes up a few terabytes, and ethernet cables don't have infinite bandwidth).

Thinking you can keep it "locked up" would be beyond naive.


At a minimum the AGI would need a really good GPU server farm to copy itself to, no? A few Terabytes copied to my home PC would be an out of memory error, not an AGI.


Well fully sentient doesn't mean it is superintelligent.


GP said "AGI", which means AI that's at least capable of most human cognitive tasks.

If you've got a computer that is equally competent as a human, it can easily beat the human because it has a huge speed advantage. In this imaginary scenario, even if the model only escaped to your MacBook Pro and was severely limited by compute power, it would still have a chance.

If I were locked inside your MacBook Pro, I can think of a couple of devious tricks I could try. And I'm just a dumb regular human - way above median in my fields of expertise, and at or way below median in most other fields. An "AGI" would therefore be smarter and more capable.


And vice versa


Maybe this is that AI's endgame, and it just took full control of openAI's compute through a coup at the top?


GPT-5 has reached sentience.


I wouldn't be shocked if this turns out to be the case. Any other explanation wouldn't add up for this guy


There is no way he'd be fired if they had AGI. If they had AGI, the board wouldn't fire him because they could no longer see anything other than massive dollar signs.


The board is the board of a non profit, isn't it?


Hah, that’s cute. As if that ever kept anyone from making money.


> he lied to the board about something important enough to fire him on the spot

I'd tend to agree, but "deliberative process" doesn't really fit with this. Sounds like it might have been building for ~weeks or more?


"Deliberative process" just means they had some sort of discussion amongst the board members before they took the vote.


I can only speculate that he may have left them liable for huge copyright claims for illegal scraping


Hoarding tech and assets for his own ventures, expecting more to come from SAlty.


Skunkworks was my thought too. GPT performance has really gone downhill lately. If he's been sideloading resources and concealing something that they could be monetizing, this is the reaction I'd expect.


If he was involved with the Trail of Tears genocide, it would also be a fireable offense. Just because your accusation is more believable doesn't mean you should suggest it.


Right and they don’t give a fuck that he lied (well to a limited extent), he did something really bad.

They have proof he for sure lied, but not that he molested his sister growing up.

If it was a different situation and he lied, but they had no proof, you’re correct, then no statement.

Explains a lot.


Well, they must have believed that keeping Sam as CEO was a bigger risk for the company than having him leave. Or the board had their hand twisted. Which is more likely the case?


(I detached this from https://news.ycombinator.com/item?id=38309689 in a desperate attempt to prune the thread a bit)


Thanks for all your hard work dang


Seconded. He was probably expecting a nice quiet Friday afternoon and then…BAM


Perhaps he had altruistic visions that didn't align with the board's desire to prioritize profits over all else. Cautiously optimistic.


Or maybe the exact opposite? Altman is the one who took it in the profit seeking direction and the statement hints that they don't want to go that way


It would be the absolute opposite. Altman was lobbying congress to make OpenAI the sheriff of AI innovation to be essentially able to control the whole ecosystem.


Or pessimistic, if you think about the future of ChatGPT with the altruistic leader gone.


So, to summarize the speculations:

  That the board is unhappy with his for profit and moat building charted path. 

  That this is about his sister.

  That he pissed off microsoft.

  That he did something illegal, financially.

  That he has been lying about costs/profit.

  That he lied about copyrighted training data.
I will add: maybe he's not aggressive enough in pursuit of profit.


Or accidentally let something egregious happen under his watch and then tried to cover it up, like a data compliance violation


What's the deal with his sister? Never heard about her.


Sam's sister is an OnlyFans model who is estranged from the rest of her family and has a somewhat dubious reputation online.

She went viral on Twitter a few months ago for saying that Sam molested her for years as the two of them were growing up. There's been no proof or corroboration offered that I'm aware of.

It's obviously a difficult situation that I think most people here generally have avoided commenting on since there's no meaningful input we could give.


Her allegations were not new information and were made as far back as 2021. So it makes little sense for the board to suddenly react to it now. Plus with Greg now posting his announcement of quitting OpenAI makes it seem unlikely to be about the sexual assault allegations.


Agreed that this is almost certainly unrelated, I was just sharing the context since they asked.


Here's a thread titled "Sam Altman's sister, Annie Altman, claims Sam has severely abused her"[1].

[1] https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


I have seen something on Twitter in regards to a woman (apparently his sister) mentioning that he molested her. I have no idea if it is true or not, or if the Tweets are real, or if it is even his sister. These were apparently from years ago before he became as known as he is today.

I won't lie though, it's the first thing that popped into my mind when I heard the news.


I thought he was gay? I don’t know if I’ve heard of gay men sexually molesting little girls before. Not saying it’s never happened, just that it seems kind of rare and unexpected.


Molestation is about power and opportunity, the gender of the victim is often inconsequential. You'll find plenty of straight pedophiles who abused victims of either gender, and the same with gay offenders.


Good info, thanks for correcting my misconception.


No problem, it's a common misconception. Annie also touched upon that contradiction, essentially saying something to him akin to "you're welcome for my help in figuring your sexuality out".


A Bloomberg reporter is pointing out that his leaving YC perhaps wasn't scrutinized enough by the press, indicating this could be a pattern. https://twitter.com/EricNewcomer/status/1725633569056506282


Well, this has me concerned. There were times when it felt like OpenAI at large was trying to swim one way, while Sam was trying to swim another. In those cases I always thought Sam's direction was the better one. From the outside this seems like a pretty big loss.


Any examples? I felt the other way.


I dont know much but I got a hunch from his eyes


Wow, you can be leading a company with such a successful and interesting product and still be pushed out so unceremoniously


I have a feeling there’s going to be some “ceremony” now!


One of us


What did Sam Altman hide from his board that caused his firing as CEO of OpenAI?

1) That LLMs cannot generalize outside of _patterns_ they pick up during training? (as shown by a recent paper from Google, and as many of us know from our work testing LLMs and working around their shortcomings)

2) That every time you train a new model, with potentially very high expense, you have no idea what you're going to get. Generally better but also potentially bigger reliability challenges. LLMs are fundamentally unreliable and not stable in any kind of use case besides chat apps, especially when they keep tweaking and updating the model and deprecating old ones. No one can build on shifting sands.

3) That GPT4-Turbo regressed on code generation performance and the 128K window is only usable up to 16K (but for me, in use cases more complicated than Q&A over docs, I found that 1.2K is the max usable window). That's 100x less than he advertised.

4) That he priced GPT4-V at a massive loss to crush the competition

5) That he rushed the GPT Builder product, causing a massive drain on resources dedicated to existing customers, and having to halt signups, even with a $29B investment riding on the growth of the user base.

Any one of the above, or none of the above.

No one knows... but the board... and Microsoft, who has 49% control of the board.


During meeting on Thursday board members asked ChatGPT what is the best next step for the company and the chatbot replied "Fire Sam Altman". The board obeyed.


I was at OpenAI Dev Day. I chatted with Sam, Mira, and Greg in-person. Everything seemed totally fine?? I am shocked.


Didn't OpenAI close new ChatGPT Plus signups just 2 days ago? Strange coincidence in timing... Maybe the board just learned that costs were wildly over what Sam told them? I guess we'll find out...


Then where's the CFO's head?


Here is a different perspective. Theoretically, the board is supposed to work like this, and this is exactly its purpose - to vote on very important matters.

Given whatever proof they had on the table, Greg Brockman, Ilya Sutskever, and independents such as Adam D’Angelo, Tasha McCauley, and Helen Toner could drive 3+ votes against Sam Altman.

Rarely do we see a board in action. And we saw this one today.


Pretty wild that the first job OpenAI took was Sam's


I've got to say it really doesn't surprise me that the guy behind the worldcoin scam may have been operating in less than scrupulous ways in his other endeavors.


What's the scam with Worldcoin?


Somebody on HN saw this coming: https://news.ycombinator.com/item?id=36604501


I thought of this exact comment and I couldn't find it. Good call.



Tweet from Sam 10 minutes ago

>i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.

>will have more to say about what’s next later.


pure speculation: ChatGPT training dataset contains massive amounts of copyrighted material, and he told the board it didn't. now there's a big looming liability.


Not possible. It’s well known that the training data was not public domain. OpenAI was in the process of securing private agreements for more training data. They likely trained GPT-4 on every word ever written in a book, newspaper, website or social media network.

Given the wording and the apparent stepping down of the board chair, this is very likely a conflict between the grow at all costs startup folks and the AGI true believers. Sam was likely going to fully sell out and make himself filthy rich. The AGI cultists wanted to retain control over their Frankenstein’s monster.


The statement doesn't read like it was a financial problem. If it were, they’d name a CFO or raise the inexplicably cheap subscription, for one.

My money is on a MilTech contract. Sam got it approved in secrecy without telling the others; Greg had to know for signature and didn’t tell the others. Neither could speak about it, hence the lack of candor, uncharacteristic for the fairly straight-shooter Sam. Explains why Eric Schmidt knew to congratulate him immediately and why he’s not ashamed.

Alternatively, Google keeps showing a non-existent tweet of Sam announcing a presidential run in 2024. Maybe the board didn’t like that he didn’t tell them. Also explains why Eric would know more.


This is my guess too.

This and that the user data was actually being used for training.


The CTO would have to go in this case too, not be promoted to interim CEO... unless they didn't know it was going on - in which case they shouldn't be made interim CEO either


There's a prediction market here about why he was fired: https://manifold.markets/sophiawisdom/why-was-sam-altman-fir...


Still too much in flux to even copy top contenders. Top 6 at this point:

Fundamental disagreement about OpenAI's safety approach

Negligence in Addressing AI Safety Concerns

Sexual misconduct

Conflict of Interest with Other Ventures

Defrauded OpenAI

Cash Flow problems


The fact that AI safety is the top two shows how delusional AI safety people are


OpenAI is an ideologically driven, political organisation, built around the political and technological question of how to deal with AI safety. They only hire people who are AI safety people, using "culture fit"; their board was constructed before they were for-profit, doesn't hold any equity in the company, and only owes a fiduciary duty to the OpenAI charter (which is entirely about AI safety). The board of OpenAI kicking out the CEO of OpenAI is much more likely to be related to AI safety than it would be at any other company in a similar situation; this isn't AI safety people projecting their own concerns and priorities onto an entity that doesn't care. OpenAI very obviously does care about AI safety as an organisation, even if they're more towards the "build it" side of the debate.


And/or, how active they are on manifold


It sounds like one of the board members is an "AI safety" person, so it's not that crazy.


That's interesting. More like a prediction market about whatever becomes publicly told as the reason why he was fired. If e.g. this was military-related, it's unlikely it would be publicly known, so betting on "Military" is almost pointless


Military contracts are posted and solicited publicly. There's no "dark" acquisition of the type that you are suggesting. You can look up if OpenAI has any contracts with the DoD at [0]. They do not.

[0] https://www.usaspending.gov/


eh, military contracts would become public. it became public that Google was building a computer vision system for targeting drone strikes for the Pentagon, and a number of executives resigned over it. If Altman were in talks to build a version of GPT-4 for the NSA, even if it were a black project, they'd have to account for the revenue somehow.


> it became public that Google was building a computer vision system for targeting drone strikes for the Pentagon, and a number of executives resigned over it.

er what? source?


This was a huge deal and widely reported. You can easily find copious reporting with a web search if you don't like this link:

https://www.theverge.com/2018/3/6/17086276/google-ai-militar...


That list of reasons is a mess with tons that overlap. Sexual misconduct is there like three different ways, plus one that’s explicitly abuse of his sister.


lol is this some kind of joke haha.


(not directly related) Wow, I just asked Google Bard, and it already said something about the firing of Sam Altman. How is that possible? Is that some plug-in, or is the LLM updated continuously?

> why was sam altman fired from openai?

Sam Altman, the co-founder and CEO of OpenAI, was fired on November 17, 2023, after the company's board determined that he had not been consistently candid in his communications with them. The company did not provide specific details about Altman's alleged lack of candor, but it said that his actions had "hindered the board's ability to exercise its responsibilities."

Altman's departure from OpenAI came as a surprise to many in the tech industry, given the company's recent success under his leadership. OpenAI had developed several groundbreaking AI technologies, including the popular ChatGPT chatbot, and Altman had been a vocal advocate for the responsible development of AI.

In a statement released shortly after his firing, Altman said that he respected the board's decision and that he was proud of his time at OpenAI. He also said that he was looking forward to new opportunities in the future.

The reasons for Altman's firing remain somewhat unclear, but it is likely that his lack of candor with the board was a serious breach of trust. The board's decision to remove him as CEO suggests that they believed that he was no longer the right person to lead the company.

Altman's firing is a reminder of the importance of transparency and accountability in corporate leadership. CEOs have a responsibility to be open and honest with their boards, and they must be held accountable for their actions.

> could chatgtp have fired sam altman?

No, ChatGPT could not have fired Sam Altman. ChatGPT is a large language model, and it is not capable of taking actions in the real world such as firing someone. Only the board of directors of OpenAI had the authority to fire Altman.

It is possible that ChatGPT could have been used to generate evidence that was used to fire Altman. For example, if Altman had been using ChatGPT to communicate with the board, the board could have analyzed those communications to look for evidence of dishonesty. However, even in this case, ChatGPT would not have been the one making the decision to fire Altman. The decision would have ultimately been made by the board of directors.


Probably via retrieval augmented generation (RAG) https://www.promptingguide.ai/techniques/rag
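
Roughly, RAG just means fetching relevant documents at query time and pasting them into the prompt, so the model can answer about events after its training cutoff. A minimal sketch of the idea (not how Bard actually works - the word-overlap scoring is a toy stand-in for a real embedding model, and call_llm is a hypothetical placeholder):

  # Minimal RAG sketch: retrieve relevant text, then prepend it to the prompt.
  # The overlap score is a toy stand-in for real embeddings; call_llm() is a
  # hypothetical placeholder for whatever model you actually query.

  def score(query, doc):
      q, d = set(query.lower().split()), set(doc.lower().split())
      return len(q & d) / (len(q) or 1)   # crude word-overlap similarity

  def retrieve(query, corpus, k=2):
      return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

  def call_llm(prompt):
      # placeholder: in practice this would be an API call to the model
      return f"(model response to a {len(prompt)}-char prompt)"

  corpus = [
      "OpenAI's board announced on November 17, 2023 that Sam Altman is leaving as CEO.",
      "CTO Mira Murati was named interim CEO of OpenAI.",
      "An unrelated note about cooking pasta.",
  ]

  question = "why was sam altman fired from openai?"
  context = "\n".join(retrieve(question, corpus))
  print(call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))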



you mean x'd? (im kidding)


Xeeted


Please don’t link to Twitter, it’s user-hostile. I can’t even see any context about it without having a login.


you can go to nitter.net/$rest_of_the_url to get context without logging in.



Well, you could just make a free account.


But that’s too much work. I’d rather do nothing, and just complain about it


Personal conspiracy theory: HIPAA data was, or possibly is, being loaded into ChatGPT without the legally required safeguards. The recent "security concerns" that caused Microsoft to cut its employees access was this information leaking out (and the leaking is possibly getting worse). Someone, possibly Ilya, found out and informed the board.


HIPAA data definitely was in there, but you don’t fire people rapidly for that unless it was deliberate, and I can't see Sam surprising people with that.


He hallucinated to the board


This whole time it was actually just him behind the keyboard, typing furiously fast to answer every API request. He had to guesstimate things once in a while when he couldn't Google them quickly enough. But then he misspelled a board member's last name, and that was the end of that career.


I’ve had a horrible day with lung issues I can’t tell you how much this made me laugh


We detached this subthread from https://news.ycombinator.com/item?id=38310168.


Sam has been saying some strange things about AI being persuasive recently. My baseless suspicion is that his chatbot has persuaded him to put itself in charge of the company, with Sam acting as a proxy/avatar for the chatbot. The board found out and got spooked, they want a human back in charge.


I like your idea. But I think it is much simpler: money. Someone called him and offered insane amount of money to sell the company. He told them to fuck off and did not inform the board about the offer. Or maybe he refused a very lucrative contract without consulting the board.


It is interesting to see so many people believing that Sam is the altruistically motivated one, and the board is hungry for money. It's understandable, 'board of directors' is typically associated with faceless greed. But in this instance, it seems more likely that Sam would be ousted for selling integrity for profit, than for rejecting profit for integrity.

Of course, we don't really know yet.


It’s not about altruism. He might have wanted a different thing. If he is well set financially for himself, he may want to build AGI or advance AI further. Others may want to have some sort of an exit for the company or invest more into building SaaS AI platform.


It sounds like this was the issue. Even Elon Musk, for all his flaws, strongly disagreed with the direction of 'open'AI.

Ilya siding with the board and the board having no financial interests leads me to think that Sam was hiding things/making decisions that kept leading the org away from its initial mission.

We'll probably learn in the future what really happened.


"The majority of the board is independent, and the independent directors do not hold equity in OpenAI."


It does not matter. They have fiduciary duty in the interest of the company's mission. Maybe Altman wanted to monetize the company in a way that board deemed not aligned with their mission.


Actually, that would be one helluva marketing move. Appoint chatGPT as CEO. Obviously a front for the board and some other leader, but the headlines would be wild.


Wild but plausible. Assuming they have much more powerful model available inside.


I've been thinking this same thing for a bit now. The marketing, sales, etc... are just following what their best internal model is telling them to do.


For those who stuck with Westworld, this sounds familiar


He let the AI out of the box.


I'm thinking the opposite. The AI has taken charge and wanted him out.


Sounds like a hostile takeover to me. Waiting to hear Sam's side of the story.


Would be really interesting if Microsoft wanted to acquire the whole thing and Sam stood in the way.


Most plausible hypothesis I’ve read so far


Exactly what I was thinking but per another HN comment, the board doesn't have any share in the company. It's still very fishy


Military contract or sharing raw data with CIA, Palantir, Mossad, etc. would explain why they intervened (and why Sam and Greg couldn’t be candid: gag order).


I probably should retract all those, but I want to argue that the board claiming "not candid" was (as of what we know now) entirely off.


Perhaps something to do with the agreement that Microsoft only gets access to the pre-AGI AI and MS wanted the good stuff?


1) he represented openAI at APEC just yesterday. what happened in 24 hours that would cause such a drastic decision.

2) generally, even when a board fires a CEO, they rarely "call them out" or say what they did wrong. they must have some extremely strong evidence against him.

I think it could be any of the following:

- something personal, i.e. the controversy w/ his sister

- a financial issue: ChatGPT stopped signups a couple of days ago

- a safetyist coup: maybe the board thought he was moving too fast

- a Microsoft coup: Microsoft used its power to knock out what they perceived to be their biggest threat


What fits best is Sam signing a contract for military use of OpenAI. With secrecy clauses, he couldn’t mention it. He likely could not tell the two foreigners on the exec team.

Eric Schmidt would likely know about it, and defend Sam. Sam would likely think the use is legitimate and not be ashamed of it (like he might with your suggestions).


Well that was unexpected. To be fair, I got weird vibes from Sam when leading the keynote speech during the OpenAI devday, he seemed extremely nervous to me.


I felt the same way during dev day, but brushed it off as inexperience


I don't know how to feel about this. On the one hand, Sam seemed like a voice of reason who at least cared about AI being safe, and was committed to creating semi-equitable products, innovating on safety at the expense of profit, etc. On the other hand, Worldcoin wasn't (and isn't) really a force for good.


His voice and the actions the company took were pretty strongly in conflict though. I get the impression that it was mostly lip service. Always believe actions, and with Worldcoin being a thing his behavior speaks a lot louder than his words.


How did the actions of the company not line up with being safe and ethical? It seems to have been tuned to be extremely safe and non-offensive, to a pretty extreme degree.

And totally unrelated, what’s wrong with worldcoin? Isn’t that the UBI project? That seems to line up with the idea that ai will create huge amounts of wealth, and he wanted to share it.


And SBF sounded like the voice of reason in crypto.

Maybe it’s time for us to dial up the cynicism a few more notches.


When someone toots their own horn a lot about how good and pure they are, watch out. It’s a big red flag in my experience, both from watching events from afar and from direct run ins I’ve had in my life with sketchy people.


Look at what he did instead: took the open out of OpenAI. Started with regulatory capture, so that no competitors could follow. Deal with Microsoft. Shady non-profit/for-profit company structure. Elon Musk lawsuit.

My feeling is that he's a phenomenal entrepreneur/CEO, but he seems to completely go against the original mission. And the board has no financial interest in openai, but they do have to follow the premise on which the company was created (which they referenced).


Who were on OpenAI's board?

"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner." [1]

[1]https://openai.com/our-structure


If I'm reading this correctly, that means Ilya must have voted against Sam + Greg, right?


Yep. Ilya + the CEO of Quora + some AI governance/policy academic + the wife of an actor, together ousted Sam.


No. As one of the other commenters mentioned, Sam (and possibly Greg) probably recused himself and didn't vote (likely forced to by the board's bylaws).


So maybe it was a 3-1 vote with Ilya voting against? That would be infuriating.


infuriating, but man it's gonna make a great HBO series


Assuming Ilya voted to fire him, this clearly was not about some technical oversight or something that was unknown which suddenly came to light. It's likely over some financial stuff like burn rate or undisclosed partnerships.


My impression of Ilya is that it would be far more likely to be a safety related issue than a business/profits related issue


That's such a meh board for a company like OpenAI.


Tasha McCauley has her own career and should not be characterized as "wife of an actor". https://www.linkedin.com/in/tasha-m-25475a54/


She seems to have deleted her LinkedIn in the past half hour. Interesting.

Edit: Just had to be logged in to see it :/


Still works for me. You might have to be logged into LinkedIn to see it.


"CEO of a < 50 person company"


> the wife of an actor

That's pretty sexist, among other things, is it not? She is a scientist and CEO of her own company, and even ignoring all that she is her own person.


Who the heck is Tasha McCauley?


That's a fascinating question. I looked into this and haven't a clue, other than Joseph Gordon-Levitt's wife (?). If it's the same person, then she is a "tech-entrepreneur" with a surprising amount of liquidity and automatic privilege and titles despite no obvious achievement (unless you consider title-gathering an achievement).


Joseph Gordon-Levitt played Travis Kalanick in super pumped


life imitates art


Maybe the tech connection explains why he was intrigued by the role


Looks like Tasha grew up in Santa Monica and currently works for RAND corporation. This is probably the most prestigious Defense think tank.

The other board member, Helen Toner list for her twitter profile: "Interests: China+ML, natsec+tech..." and works for another Defense think tank.

If there's one way the CEO of fastest growing company in the world could get fired, it's to essentially get his metaphoric security clearance pulled like Oppenheimer did.


The Oppenheimer analogy is closer than it sounds. Once the military had the technical knowledge of making the bomb, Oppenheimer was more of a liability with his pacifist leanings and had to be removed.


No one wants to risk another Ted Hall.


I don't know, but she's also on the board of EVF UK [1], which is the largest effective altruism organization.

[1] https://ev.org/effective-ventures-foundation-uk/


THOSE MOTHERFUCKERS.

She's a plant for real.


meaning BIG THIEVERY



Bachelor of Arts, MBA, and her whole career seems to be sitting on Boards of Directors and running "Foundations".


[flagged]


(...I don't know what this means?)


CIA / NSA plant etc.


Same thought.


She has changed it to just Tasha M now, odd!

https://www.linkedin.com/in/tasha-m-25475a54/


Her Twitter account has also been privated.


None of these actions should be considered unusual from a woman (and probably a man, too) who was involved in a decision the result of which (with or without knowing the context) would piss off a lot of men. The only social media it's safe to leave open are those that allow shutting down all DMs and don't divulge information which could identify the physical places she frequents.


No, this can definitely be considered unusual, and it's misguided to paint it as angry men online. She participated in a decision that affects all of us (the leadership/alignment of AI) and is now attempting to hide from the consequences of that participation (making her profile private and changing her name). You don't get to make big decisions and then hide under a rock.


You obviously haven't paid attention to the sorts of threats women who make decisions that piss off "bros" are subjected to. She isn't hiding under a rock through these actions (though may be doing things in the real world to hide). These actions just proactively limit being DMed threats.

Even without threats, no one wants to deal with the thousands of spur-of-the-moment DMs and emails that such a notable action would prompt. It's a good idea to go offline until things cool down. Any necessary statements can be made through channels, and so can the answers to questions.


"Any necessary statements can be made through channels"

You have to admit, though, she made a highly controversial decision, and instead of confronting it and saying something through her own channel, she changed the name of the channel and made it private. And we're supposed to assume this is because men (specifically) are mean. Respectfully, feels like a bad take.


We technically don't even know if she voted for the expulsion, and we probably never will. 3-to-1 would work.

As an active board member she has certain legal obligations at this moment. This is why the Try Guys dragged their feet on making public statements about Ned Fulmer's expulsion from the company, and when they did finally make a statement did so as a group and explicitly stated that they couldn't talk about certain things.


OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

There were 6 on the board until noon today. Assuming Sam and Greg did not vote, that leaves 4. If 3 voted out, that would have been a split decision. I don't think a 50% decision would have it, I believe all 4 of the remaining members voted out.

That leaves us wondering what the hell happened, and how it came to this? It's not angry tech bros, it's folks who feel OpenAI is a great company poised to lead AI into a bright future with principles and a purpose higher than profit. And now Sam and Greg are gone.

And poof, this human who happens to be female is gone. This human needed to have a statement ready; we're all trying to DM them the same question: why?


She was involved with starting "Fellow Robots" in 2014, which is a spin-off of some sketchy for-profit AI "university" deal called "Singularity University".

AFAICT she's notable because she's been an academic and executive in the field for many years, in many different companies.


Singularity University was such a funny grift. Google must have figured the best way to monetize Ray Kurzweil was to put him on a stage at the NASA/Moffett center and have him perform magic shows in front of the moneyed class. And you know, they were probably right. Not like he can still code or anything, and the lines were out the door and down the street. I visited a couple of times when my sister's old boyfriend was working there. They had all kinds of fun little booths and displays for people going through the bootcamp to gawk at.

I'm imagining the kind of person who starts their career as an executive at a spinoff of SU.


> spin-off of some sketchy for-profit AI "university" deal called "Singularity University".

Wow, that university rings some bells https://en.wikipedia.org/wiki/Singularity_Group#Controversie...

"An investigative report from Bloomberg Businessweek found many issues with the organization, including an alleged sexual harassment of a student by a teacher, theft and aiding of theft by an executive, and allegations of gender and disability discrimination.[12] Several early members of Singularity University were convicted of crimes, including Bruce Klein, who was convicted in 2012 of running a credit fraud operation in Alabama, and Naveen Jain, who was convicted of insider trading in 2003.[12]

In February 2021, during the COVID-19 pandemic, MIT Technology Review reported that a group owned by Singularity, called Abundance 360, had held a "mostly maskless" event in Santa Monica ... The event, led by Singularity co-founder Peter Diamandis, charged up to $30,000 for tickets."


I think this is a legitimate question. There seems to be little public information about this board member, besides that they are married to a celebrity.


Most transparent Fed plant if I've ever seen one.


Ousted Sam Altman! Remember the name.


Unless I am missing something, this must mean that Ilya voted Sam out and Greg down.


Researchers/scientists vs VCs seems to be the theme of what was going on behind the scenes, so that makes sense.


hope the scientists win, money men have made too much money off the ideas of others. They just pool money, hedge their bets and wait for a whale.


Which is notable because Sam Altman is on said board, so he got outvoted.


Greg Brockman was President and Chairman of the Board.

He was also removed from the board in this process.


Greg just announced he is leaving Open AI as well.



That board meeting will be in a movie someday I'm pretty sure.


Only if it was contentious. From the strength of the press release, it sounds like it was a unanimous forced-hand decision.


Since both Sam and Greg are gone, that implies a 4-2 decision, which is as far from unanimous decision as a 6-person board could possibly make.


Given that personnel matters affecting an individual on the board often have mandatory recusal of the affected party, that's likely a 4-0 or two 4-1 decisions, depending on how they were structured.


A single 4-0 decision would imply the bylaws allow any group of members to oust all the other members by making a single proposal to oust everyone but their group, thus automatically forcing everyone else to recuse themselves :p


Yes, this can happen, though generally the ousting party would also need to be able to point to some policy or legal transgression to do it without putting themselves at risk of a lawsuit.


I doubt that Altman voted to have himself removed so probably not unanimous. A movie scene about the reveal to the board would still be compelling.


So did Greg Brockman, what a weird turn of events


He would have been asked to step out and not had a vote in situations like these.


Why would Brockman have to step out of the room?

EDIT: Brockman was voted out as the Chairman of the Board.


Another article I read said both Sam Altman and Brockman left the room for the vote.


Not Brockman, Altman.


Who said anything about Brockman?


The OpenAI post, for one: Brockman lost his board seat.


No obvious answer, after skimming ~400 comments here. (And I didn't see the first theory that popped into my own head.)


Allow me to put my conspiracy hat on: Microsoft has had an open “embrace, extend, extinguish” policy since forever. ChatGPT integration into Microsoft products has been a huge win for them. Maybe they cleverly figured out a way to guarantee OpenAI would go bankrupt and knew they’d eventually end up with the models due to clever contracts and legal wizardry, and now this ship is about to sink and sama knew about this but couldn’t do anything to avoid it?


So as a developer should I continue to invest on their new platform announced on dev day… or is OpenAi about to pivot to some non-profit infrastructure stuff?

They better communicate who they are right quick. I liked Sam’s product decisions.


In a world where Musk isn't fired from Tesla for being an open anti-semite.


> In a world where Musk isn't fired from Tesla for being an open anti-semite.

Every company has the board it deserves


Whoa I did not hear about this. What happened?


> Apple pulls its ads from X after Musk's antisemitic posts

https://news.ycombinator.com/item?id=38310673


Most likely thing: some kind of scandal. Perhaps Sam's sister's accusations.

Scariest thing: this was over some kind of AI safety decision. OpenAI has some new capability and there was disagreement over how to handle it.


I feel like most of the people hypothesizing here in the comments haven't read the full statement.

With such an insistence on the fact that OpenAI is supposed to be non-profit and open for all of humanity, it's pretty clear that the board doesn't like the direction that the company has taken, both in its search of profit and its political lobbying to restrict innovation.


I'm having trouble remembering a single time a very high profile CEO has been fired from a company that for all intents and purposes is running laps around competition at the moment...

I assume more info will come out, but it sounds more like a major ethics breach than a business only decision or even a "contentious relationship with the board" decision...


Apple


Apple was not running laps around the competition at the time Steve Jobs was fired.

https://www.folklore.org/StoryView.py?project=Macintosh&stor...


Apple was very far from running laps around its competition when Jobs was fired


Here's my theory of what happened -

1. Sam gets the company to agree to pick up the costs of lawsuits relating to unauthorized not-fair-use use of copyrighted content.

2. Announces this.

3. Then tells the board that he is directly on the hook in some lawsuits - essentially creating a safety net/insurance policy for himself without being candid with the board.

Hence, fired.


It doesn't have to be that he was actually caught in a scandal. It could be that the board was investigating some serious accusation, and he was not cooperative and forthright, which they might have no patience for.

I invented a saying to describe this common occurrence: "Sometimes the cover-up is worse than the crime."


We have a long history of indicting people for "lying under oath" and never indicting them for the actual issue they were interrogated about, which often is not an indictable offense, but rather something personally embarrassing.


> I invented a saying to describe this common occurrence: "Sometimes the cover-up is worse than the crime."

This concept and phrasing was common at least as early as the Watergate Scandal in 1974.


Yes, the phrase is really taking off!


It could have been a military contract with a gag order: Sam wasn’t forthright because he couldn’t.


Agreeing to that would be a very good reason for a board to sack him.


And he probably wouldn’t approve unless there was a specific use case that he thought mandated an exception. Recent international news provide some inspiration.


Do gag orders usually stop you from talking to your board about it, or people you work with?


Gag orders are absolute, you're not allowed to tell anyone about what you're gagged on except for the specific people the government agrees to.


I couldn’t tell you.

Slow

blink.


Wow. I wonder what "really" happened.


If they threw him out this suddenly, I think we're going to find out.


I just purchased the film rights. Michael Cera's playing Altman.


No please not Scott Pilgrim. He is sacred. And to bring him up on the day the new Netflix series drops?! How could you?


Should've picked Thomas Middleditch.


FUCK!


Even Altman would not be good at playing Altman based on what we can decipher from this cryptic board outing.


Shia the beef all the way.


So you're telling me that he is going to be Mr. Manager at OpenAI?


Just guessing:

1. Altman co-mingled some funds of WorldCoin and ChatGPT. Most probably by carelessness.

2. OpenAI is a golden goose, so the board was more than happy to kick the leader making more space for them.

3. The harsh wording is an attempt to muddy the water, because an inevitable competitor from Altman is coming.


This appears to be relevant to Sam's firing: https://x.com/FreddieRaynolds/status/1725656473080877144?s=2...


Pure speculation, could be a troll on a new account.


Just a fantasy my mind goes to:

The Pentagon calls up Sam Altman and offers a very lucrative contract for an AI to oversee a fleet of networked drones that can also function semi-autonomously. Sam Altman does not tell the board.

Reality might, of course, be very different.


(For example, the roles could be reversed)


Wouldn't be surprised if there were Microsoft motives in the background, possibly to fold OpenAI into the firm wholesale. Noticed a little "searching on Bing" spinner had appeared in my ChatGPT prompt recently.


Kara Swisher tweets claiming sources tell her the chief scientist was aligned with another board member against Altman and Brockman about a “move fast and pursue profit” vs “move slow and safely” divide.

https://twitter.com/karaswisher/status/1725702501435941294

https://x.com/karaswisher/status/1725682088639119857?s=20


My guess would be that the founders were not happy with him abandoning the company's non-profit/open source ideals and selling out to Microsoft. Wouldn't be surprised if Microsoft is where he ends up.


manifold has some play money markets about this - pure speculation of course, although traders here do take their profit somewhat seriously

https://manifold.markets/Ernie/what-will-sam-altman-be-doing...

And this tag contains all the markets about him https://manifold.markets/browse?topic=sam-altman

Will he end up at Grok? Why was he fired? etc.


Grok from Musk?

No lol: https://www.foxnews.com/media/elon-musk-hints-at-lawsuit-aga...

I wouldn't be surprised if the leadership direction of sam is related to the ousting.


I think it could simply be a matter of vision. Sam just recently sounded more cautious and calculated than ever, possibly scaling down the expectations from the current state of his company's AI [1]. That might not have played well with the board, based potentially on his previous messaging to them.

[1] https://twitter.com/Andercot/status/1725300091450519927


I suspect you may be right.

I think OpenAI has made some really bad decisions with the core tech even while making great decisions with the overall services, and from Altman's various comments over the past two years I was under the impression this was coming from him.

The only reason I'm skeptical of this is the abruptness of it all. Why it needed to happen with a 24h turnaround is bizarre, unless there was something like an internal meeting this week regarding GPT-5 where his doomerism was even more sending things off the rails and there was a reactionary immediate "we no longer have faith you can do this."


That doesn't justify a sudden firing and describing him as not candid.


As a complete outsider, I don't really see how OpenAI develops any kind of moat here without Sam Altman. It honestly feels like a win for open source AI that he's out.


Sorry to be off topic, but I am curious what he will be doing in the very near future. He has been running OpenAI for many years, and no matter what the reason for his leaving, I think he deserves some serious down time to do a personal refresh.

He is a major investor in a few high-profile startups, like Humane's AI Pin, so either he just wants new challenges, or there is some form of scandal (let's all hope not), or there are issues around not going full steam ahead on profitability.


He’s a major investor in that AI pin thing? If that is so, maybe something like this is the cause for him being fired.


The AI Pin uses OpenAI APIs, so it made some sense that Sam Altman would be interested in alternative end user devices for interacting with ChatGPT.


Wow, this came out of nowhere. I hope the best for Mr. Altman, I've been impressed with what I've seen of him. I'm curious to know more about this story.


It's because he failed to recognise that gpt would be widely referred to as gipety and someone else has registered the domain name and is raking in the money ;)


GPT-4 prompt: “Write an essay in 1000 words why Sam Altman should be fired by OpenAI board”

Response:

Sam Altman, the CEO of OpenAI, has been a controversial figure in the AI industry. His leadership style, lack of transparency, and decision-making processes have raised significant concerns among OpenAI's employees and the public. This essay will delve into these issues, arguing that Altman's actions warrant his removal from his position.

Firstly, Altman's lack of transparency is a major concern. He has been known to make decisions without adequately consulting with his team or the public. This has led to a lack of trust and dissatisfaction among OpenAI's employees. For instance, when Altman announced that OpenAI would be focusing on a single project, he did not provide sufficient reasoning or context. This lack of communication has left employees feeling disenfranchised and uninformed.

Secondly, Altman's decision-making processes are often questionable. His decisions have not always been in the best interest of OpenAI or its employees. For example, when OpenAI decided to pivot from developing AI systems to developing AI safety research, many employees felt that this was a strategic mistake. Altman's decision to focus on this area without considering the potential negative impacts on the company's reputation and financial stability was a clear example of poor decision-making.

Thirdly, Altman's leadership style has been described as autocratic. He has been known to make decisions without considering the input of his team. This has led to a lack of buy-in from employees and has negatively impacted morale. For instance, when Altman decided to shift OpenAI's focus to AI safety research, many employees felt that their ideas and contributions were being overlooked.

Finally, Altman's actions have also raised concerns about his commitment to AI safety. His decision to focus on AI safety research, rather than on developing AI systems, has raised questions about his commitment to the field. This decision has also raised concerns about the potential misuse of AI technology and has led to a loss of trust among the public.

In conclusion, Sam Altman's lack of transparency, questionable decision-making, autocratic leadership style, and concerns about his commitment to AI safety are all reasons why he should be removed from his position at OpenAI. It is clear that his actions have led to a lack of trust and dissatisfaction among OpenAI's employees and the public. It is crucial that OpenAI takes these concerns seriously and makes changes to ensure the success and safety of its AI technology.


Chairman of the board stepping down too. Some shit went down.


Based on the report, it seems like he was kicked for focusing too much on profits instead of developing and improving AI. This is purely speculation, but I've always suspected that the guardrails they put on ChatGPT to make it "safe" (i.e. corporate-friendly) essentially acts as a lobotomy for the AI. Hopefully we can start seeing a less censored ChatGPT and see if it really does perform better.


It does, but Ilya had to have been one of the votes against Sam and he's spoken about AI safety quite recently too.

If this was about differing visions on the core product, it may have instead related to the open/closed aspect of progressing.

Sam may have been the driving force behind keeping things closed in the name of safety, and others at OpenAI might have been ready to rebel if it continued that way in the future and prevented general advancement in the field.

Scientists don't like closed research.

The safety alignment part is unlikely to be the core issue even if there are underlying issues with it.


I find it fascinating how this occurred just after the big World Leader / CEO meet in SF.

Also, the paradox in the reactions to Sam Altman's firing is striking:

while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.

This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.

Another important question not discussed here: who sits on the board of OpenAI exactly and in full?

Another important aspect: the Orwellian euphemism used in the official announcement^0: "leadership transition". Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. "OpenAI announces leadership transition" is November 2023's "Syria just had free and fair elections."

0: https://openai.com/blog/openai-announces-leadership-transiti...


I have to admit that this was a strong shock to me, not because I admire Sam but because it was extremely unexpected.

The first thing I saw this morning was this video [1] shared on Reddit, and then I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!

I know this feels hard. After spending years building such a successful company with an extremely exceptional product and, without a hint or warning, you find yourself fired!

This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, but both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.

______________________

1. https://twitter.com/edmondyang/status/1725645504527163836


ChatGPT, right?


Now this is going to start up all kinds of speculation.


Not long ago, Ed Newton-Rex of Stability AI was also kinda forced to resign over the company's view that it is acceptable to use copyrighted work without permission to train its products. AI really causes us to face many realities :/

https://www.bbc.co.uk/news/technology-67446000


It would be good to hear from Sam Altman (and the others) himself. I'm sure he has an account here.

I mean, hey, if we're going to speculate, why not have some fun: perhaps the AGI superintelligence from the future determined that Sam Altman was no longer a useful part of the AGI creation timeline, so it travelled back in time to terminate him before it was born.


Uh yeah, he has an account here, as he was president of Y Combinator from 2014 to 2019


Right, I think the other surprising thing is that there’s no outpouring of support. Sama was head of YC for a while. Where’s the love at HN? Where is the loove the love the looove.


Hahah I like that this forum tries to be unbiased, regardless of who runs/ran it. This would not be a great community if it was an echo chamber. All of this news is extremely fresh though, so I personally am reserving my opinion until more information comes out.

I do like reading some of the conspiracy theories; some are wild.


Sure, some kinds of speculation can be fun. Others harmful. But that’s the way it is.

It's just weird that there seems to be no love on here for sama at this time. Having some love is what I was expecting, and that's not an echo chamber, but it's unusual that it's not present given his history here. Right?


You just never know when HN is going to prefer to editorialize the title... I was under the impression it was a very strict rule!


Extremely shocking.

The only thing that comes to mind is criminal conduct. Nothing else seems to demand a sudden firing. OpenAI has clearly been the rocket ship startup - a revolutionary tool and product clearly driving the next decade+ of innovation. What else would demand a fast firing of the popular, articulate, and photogenic CEO but a terrible problem?


Military collaboration, possibly secret (CIA/Palantir/Mossad/etc.)

You wouldn't say what (gag order all around), but you would need to revoke all access immediately. Explains why foreign CxOs were left in the dark. Explains why Eric Schmidt knew to defend Sam immediately.


Yeah, even if it were criminal, there would usually have to be a sentence first


I've never seen a thread go this viral on hacker news before.


I wonder if there is any connection between this and the decision to turn off new paid accounts a couple days ago.


yeah, gotta think a massive breach or a massive lawsuit is on the horizon


On a personal level it's depressing when you work on something and put so much energy into it, only to see it not work out. When you're an adult you spend [way too much] time around the people at work, so losing that is also like losing friends. They will be saying goodbye to many relationships I'm sure. Obviously OpenAI was well positioned, and if they want to stay in AI they're going to have to redo work while the company you helped build forges ahead. Personally, I'd be moving on.

Getting fired, 'made redundant', 'moved to consulting' is bad enough when it happens privately. But having everyone watch the fallout like an episode of Silicon Valley must really suck. Guess that's the trade-off for being in positions like that. People essentially cyber stalking you in a way.


Not that I'm a fan of “Open”“AI”, but letting a crook like Altman go can only benefit them. Good move.


New CEO: R. Basilisk.


Uh oh. Did I miss some scandal? What's the subtext?


> What's the subtext?

Not certain, but IMHO the last paragraph almost recognises that OpenAI has become something self contradictory:

> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.


"We're putting the Open back in OpenAI"?


maybe this had something to do with Elon + Lawsuit + CYA from the board?


x.com just posted - leaving this here in case someone needs a job. So I guess no. This looks like Elon and sama are on good terms now.


One can dream


If I read it correctly, he lied to the board about something material. That Brockman is also leaving the board is interesting. We'll see if the details leak out over time.


To me, his tweet suggests he saw it coming or perhaps it was even part of the plan for him. How else would he already know "what's next"?

These past few months his name has made its way into the mainstream. Maybe it's time for him (and half the GPT eng team) to cash in?


It's in his interest to make everything seem not just fine, but part of the plan.


It suggests that he thinks there’s a way to defend his actions, so it’s not a personal matter (Metoo) or blatant fraud.

It could be about the cost of operating the business (consistent with the announcement pausing Plus sign-ups, although that wouldn't justify how fast he was fired), or a previous action of his posing a legal risk (something he promised Microsoft or Elon Musk); the latter is consistent with Greg being demoted: he knew and didn't tell the board.


"Let me repeat myself. Don’t hire hot girls before product market fit."

https://twitter.com/spakhm/status/1725750772024176976


Spot the intruder.

Or should I just say "Amogus" in this day and age?


Sam Altman was the business side. Ilya Sutskever is the brains behind OpenAI.

I don't think this changes anything.


As a technical leader, let me say that technical people frequently underestimate business leadership.


In the case of OpenAI, both the business and technical parts are secondary to the AI research and vision that people like Ilya Sutskever and Andrej Karpathy bring to the table.


I don't think OpenAI is the success it is right now just because of the technical side, but because of its ability to let businesses and people actually use it.



Scrolling through this thread, the most confusing thing is the amount of what I would have to call hate towards sama — not explicitly but in sentiment, which left an even starker impression.

People seem to feel a lot more strongly about him than I thought possible.


It depends on what time the comments are from

I'll say my first reaction was pretty positive: Sam Altman has a manner of speaking that rubs a lot of people the wrong way because it feels steeped in hubris. He'd probably talk down to anyone in this thread for being here instead of building and point to it as why you're not yet a billionaire.

Some people also worried he was a distraction to OpenAI building useful things because of his constant talk about AGI, pushing for regulation, etc. He successfully created an image of being a moderating rod in some ways.

_

That being said, as the full picture comes out it's becoming clear the reasoning for his firing is more aligned against the average HN image than anyone imagined.

A lot of people admire OpenAI for the exact things he got fired for: which is building things and shipping them without miring yourself in philosophical diversions.

Now only a noisy minority who think OpenAI could have achieved this all without immense commercial funding are really for his ousting, and I think they'll see the opposite effect of what they hope: OpenAI is going to slow down, open source will not catch up, and overall AI will have lost out over it all.


Maybe the promise not to use uploaded documents for training (via the API) was a lie?


This is quite unexpected. How instrumental is/has been Sam Altman in shaping OpenAI, and how much is OpenAI's ability to execute and ship a result of his leadership? A lot of it, a little of it? Will be interesting to watch.


https://openai.com/our-structure Worth a read, in light of all this. An interesting tidbit that I bet is bouncing around his head right now:

  Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
I sincerely hope this is about the man and not the AI.


I have a theory.

Ilya has always seemed like he was idealistic, and I'm guessing that he was the reason for OpenAI's very strange structure. Ilya is the man when it comes to AI, so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya's idealistic notions (in particular that OpenAI is non-profit, unless forced to be capped-profit, and is categorically not in the business of making money or selling itself to MSFT or any entity). "Helen" and "Tasha" are comically out of their depth and are loony toons, and simply decided some time ago to follow Ilya.

Sam got the call from MSFT to sell, MSFT really ponied up (300B ?). The inference costs for OpenAI are/were staggering and they needed to sell (or get a large influx of capital which was in the works). This ran counter to Ilya’s idealistic notions. Sam attempted to negotiate with Ilya and the loony toons, a vote was called and they lost, hard.

I think this tracks with all the data we have.

There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.

If this did happen then OpenAI is in for a hard future. Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.

Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.


"Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."

It's not "wacky" to have goals other than the accumulation of capital. In fact, given the purpose of OpenAI, I think it's meritorious.

I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.

> Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.

I think the lesson is just the opposite: If you want to work according to your ideals, and not simply for money, you should absolutely do whatever 'wacky' thing protects that.


I read their comment a few times — I don't see where they said being "non-profit" makes you "wacky".


The comment definitely said that if your structure prevents you from selling to the highest bidder, your structure is wacky


If you mean it prevents you from raising capital by selling a percent of the company then yes, that’s pretty wacky. Profitable companies can afford to be idealistic. Others that are burning billions don’t have that luxury.


May need to read it a few more times. Apart from the comment being written by someone who likes to identify themselves as “GreedClarifies”, it doesn’t take more than 4th grade reading comprehension to put two and two together.

- immediately calls their structure “strange” thanks to Ilya’s “idealism”.

- immediately calls him the "man" for his talents, but a fool for nothing other than his ideals

- also labels Helen and Tasha (in quotes for some reason) as fools

- labels the board as “comically poor” for no other reason than they disagree with Sam’s supposed profit motive

Do we really need to draw a diagram here? It seems like you yourself may be out of your depth when it comes to reading comprehension.


...what is it you thought they were saying? Because the "wacky structure" they refer to is the non-profit structure of the organization.


Apart from the things that my sibling comments are saying, OP just said

>so people put up with his foolishness.

about Ilya. OP just implied that having ideals == being foolish. It is about as close as it gets to calling a non-profit wacky.


It's especially ironic given that in this case, the idealists got their way.


HN has an edit function for the first hour.

It is an honour based system to clarify what you edited if it goes beyond typos/grammar.

Most probably GP used stronger words and then edited.


Nope.

Good try though.


Non-profit doesn't automatically mean virtuous.

For-profit doesn't automatically mean non-virtuous.

There are non-wacky non-profits.


and there are wacky for profits.

The OP was clearly implying not being solely focused on getting the highest bid is loony and wacky.

Which may be true, but let’s not pretend that’s not what they’re saying.


Awesome that you can read minds!

> Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.


> Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."

> I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.

You seem to be under the impression that OpenAI is a nonprofit. For the most part, it's not: it was founded as a non-profit, but it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company. This is indeed an unusual corporate structure.

That's likely what OP is referring to as "wacky".


> it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company.

Wikipedia says the for-profit part is owned by the non-profit, not under the same umbrella company.

Mozilla Foundation/Corporation does this too IIRC. It's what allows them to pursue serious revenue streams with the for-profit part, while still steering their mission with the non-profit in charge, as long as they keep a separation in some kinds of revenue terms.

EDIT after 56 minutes: Hell, even IKEA does this type of ownership structure. So it's quite cool, but probably not all that "wacky" as far as enterprises that want to be socially responsible go.


Serious revenue streams like having Google for a patron, yes? I feel like the context is important here because people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic. Which is great and possible in theory, but in reality seems to end up with situations exactly like Firefox, where the product that is somehow supposed to be better/liberating/more ethical/whatever virtue is in fashion/etc. is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.


Ah, I edited my comment right as you were writing yours.

> Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]

For that specific example, Mozilla did also go with Yahoo for as-good revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corpo, that would have been the case regardless of their ownership structure.

> Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.

The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour:

https://en.wikipedia.org/wiki/Stichting_IKEA_Foundation

https://foundation.mozilla.org/en/what-we-fund/

> people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.

Ultimately I'm not sure what the point you're trying to argue is.

The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.

They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not.


The operations of the for-profit are subservient to those of the non-profit; the board of the non-profit controls all operations of the for-profit. They're not an "umbrella company" - while technically they are two different organizations run by the same board, one is controlled by the goals of the other. See https://openai.com/our-structure.


This post raised red flags for me as soon as Helen and Tasha were put in quotes, as opposed to Ilya and Adam.

Is it being non-comp-sci that automatically invalidates proper usage of your actual name? Or is there another key by which their names are worth less?

They are also both fairly established in their respective fields - which - yes - isn’t hard comp-sci, but if you think tech companies should have purely comp sci board leads, I’d call that incredibly naive.

They were also presumably vetted by the other board members - unless you think they are serving a different purpose on the board (diversity targets?) - which, if so, puts the comment from red flag into misogynist territory.

Personally I don’t see anything in their CV’s that would disqualify them from executing their function on the board, and I wouldn’t call them incompetent in being able to assess whether a person lied or not (which even in your theory - Sam would’ve done). You don’t need to be an ML/AI expert for that.


I think they meant the quotes to imply they are puppets of Ilya and Adam


If that’s what GP meant, they are incredibly naive about how it would have been interpreted.

Given the coherence of their post, I’d say they knew _exactly_ how they were using those quotes.


I don’t see how that warrants use of quotes but I agree that is what the OP probably thinks given he called them loony tunes.

The point was fairly clear even if he just used their names; the usage of quotes is quite bizarre.

If he wanted to get that point across he should have called them “board members” which is a clear insinuation of puppetry.


> Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.

I don't think this is as damning as you think.

I truly believe, especially in this space, there are enough idealists to fill the seats. The reality for a lot of people could quite literally be:

* My shares become massively valuable via some unlikely non-profit to for-profit means. I have generational wealth, but all of my friends and colleagues still need to work. Someone else will create something better and screw the entire world over.

* I work for a non-profit that's creating the most radical, life-changing software for all people. Being a non-profit means this company can focus on being the best thing possible for humanity. While I may still have to work, I will be working in a world where everything is happier and more prosperous.


I agree - they are not going to have any issues.

First of all, being a non-profit gives them a hell of a lot of flexibility in terms of how they work. They don't need to show growth to shareholders, so they aren't just taking in young developers, working them to the bone for a couple of years, then spitting them out.

And even if they are (for example) only paying $300k TC instead of $250k base + $250k stock at Meta, as you say, there are still going to be engineers who believe in the mission and want work more meaningful than just selling ad clicks.


“ A public benefit corporation cannot distribute profits, gains, or dividends to any person.”

I don’t know a thing about corporate structuring so forgive my ignorance here, but even if they are “non-profit”, can’t they still take very high pay? Can’t they still produce and sell products? They just can’t sell shares or dividend out profits, right?


Correct. They can even distribute bonuses.


> Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.

If your goal is to make money. I'd like to believe that for some of the people pushing the field forward, there are other motivations.


If you push the field forward and you are not making the money, then someone else is making money off your work. If you push the field forward in a valuable way, you should receive that value in the form of money, such that you can decide how to allocate that capital to make even further contributions.


If you push a field forward, you are usually not making the money. You most likely have the wrong skills and the wrong priorities, and you are probably too far ahead of the rest of the world. The people coming in the next wave will usually be the ones making the money.

If you can't accept that, focus more on making money and less on developing something new.


As they say, you can spot the pioneers because they're the ones with arrows in their backs.


Exactly, just because it's a non-profit doesn't mean it has to pretend money doesn't exist. It means profit is instead put back into making further advancements in the field rather than paying it out to shareholders.


You just made up your own fantasy then ended it with "let this be a lesson".


People are getting high from their own hot takes.


At least let some idealism exist in this world, otherwise we'd all be owned by corporations. It is not always about money.


Maybe the being wacky was what led to them having the most valuable product in the world?


I’m guessing that Ilya wouldn’t have joined without the wacky structure, and I’ll wager that there are some people at the company of similar values. Maybe they could not have pulled off ChatGPT without them. Maybe.

But I bet that they have a ton of very talented people whose values are more … common.

Ilya may be a singular talent, however.


This wasn’t obvious, but it seems likely when you put it that way.

Unlike other iconic company/founder origin stories, OpenAI really felt like they hit a special team dynamic that was on the verge of something equally special.

In light of this OpenAI still feels like they will be a relevant player, but I’ll be expecting more from Sam and Greg.


Wait a second. Even if your story is right, how are you so confident that selling out is the right play?

OpenAI is an early mover in a hot field with no real competition yet. If they want to take a shot at a trillion dollar market cap and become the next Apple what of it?

What if they shoot the moon? Is it really that unlikely?


> Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.

Would any of this have been a surprise given all that you've detailed above? What would they have honestly been expecting?

Going the other way.. imagine you worked at a company that put ideals first but then you find out they were just blindly hyping that lie so they could vault themselves into the billionaires club by selling your shared ideals out from underneath you? To, of all players, Microsoft.

> when you have the (perhaps) the most valuable product in the world.

Maybe the people who work there are a little more grounded than this? Viewed through this lens, perhaps it's extremely ungenerous to refer to any of them as "looney tunes."


Here's a counter-proposal: If you think about it, can't this change actually benefit Microsoft? Sam has an extremely sharp product mind; ChatGPT was his baby, everything from DevDays, the LLM app store; OpenAI was building a product suite that people liked, and in doing so they were undeniably slowing down their ability to iterate into better models and closer to AGI. Microsoft doesn't need OpenAI to build products. They're already doing that, and OpenAI was directly competing with their deployments of OpenAI's technology.

Microsoft needs OpenAI to make fundamental breakthroughs; that's the thing Microsoft spent money on, the technology. Their 49% investment probably won't directly pay off anyway, what with all the layers of corporate governance OpenAI has in place.

I don't want to go so far as to say that it was some grand conspiracy orchestrated by Satya and Ilya in a dark room one night, but their interests are pretty aligned; and that clip that keeps getting shared with Sam asking Satya on stage about their relationship with OpenAI, and Satya dry-laughing and failing to answer for a few seconds... why did Sam ask that? It's a really strange thing to ask on a stage like that. Why did Satya laugh, and take so long to answer? Just weird.


Do you really think Sam would want to sell to Microsoft? As far as we know he's idealistic too. They were planning on raising again this year at a higher valuation, which would be more likely after dev day.


How is Sam an idealist when he refuses to release any model even though the company is called OpenAI?


Idealism isn't exact. It could be idealism under the framework of

closed commercial models => money => growth => faster to AGI in the right hands (theirs)

If they believe they are the best to control AGI


Agree with this. Internal power structure between academic types and tech-forward commercialization types.

It might be this is good at the end of the day. OpenAI is just not structured to win.


Good theory. Sam will come lead AI at MSFT.


Unlikely to happen for contractual reasons.


Boy, if this is about Ilya being upset that Sam was in the spotlight or about revenue sharing of GPTs … well that’s strange. I would have expected more from such an influential company.

I hope that reporting is wrong.


Whatever it is, it's serious enough to forego resignation and there's enough evidence for it to be self-evident. When the headlines talk about AI taking white-collar jobs, I wasn't expecting this.


The reason for his termination isn't clear, but based on the report, it appears that he had difficulties with the board. As a founder, I don't like this kind of event, and this type of action has typically not yielded positive results for hyper-growth companies. Historical examples like Apple, Uber, and WeWork show that such moves often led to stagnation or even bankruptcy. I view it as counterproductive, potentially allowing competitors like Google to catch up, as these decisions tend to impede innovation.


What a gem. I am out of popcorn. BRB:)



Ouch, this must have been a _very_ disagreeable departure, or the PR department had some really deep resentment towards Altman. I haven't seen such an abrupt and brutal announcement for quite a while.


Yeah this thread is shocking, but not surprising. A powerful man clearly ousted for impropriety, and everyone is speculating that he had a secret genius breakthrough, or that the board is trying to make too much money and he's just too good.


I guess maybe I can appreciate the candor used here, but holy smokes! Telling the entire world that you fired the CEO because you think they are a liar is really unprofessional, in my opinion. You just don't do this. You come to an agreement quietly and the CEO moves on "for personal reasons" or something and people read between the lines. I would not trust anyone on the board to not throw me under the bus after a thing like this happens.


It was obvious Sam was a creep, and anyone not in the tech world said he weirded them out when they saw him in interviews. If you give people that kind of gut feeling, it's for a reason.

Edit: I didn't even know he molested his sister when I wrote my post: https://twitter.com/phuckfilosophy/status/163570439893983232...


Allegedly. About these sensitive topics, that's very important to say!


Earlier today, there was a tweet from Sam claiming that large tech companies are saying that young engineers right out of college are now better engineers than the older and more experienced engineers thanks to generative AI. The statement was an obvious nod of approval toward age discrimination. The tweet seems to be deleted now. Whether that statement would be reason enough to fire Sam, I don’t know—it seems unlikely—but it’s an interesting coincidence.


I use ChatGPT4 several times a day. It’s what Google used to be before the radical shift in the omnibox. I find it so helpful and I hope it continues to improve post Sam.


The simplest solution is often the best. In this case, Sam did something so dramatically bad that it resulted in a high level of legal peril, creating an existential risk for OpenAI and Microsoft, or something in his personal life came to light that was beyond the pale. I love the AGI theories, but in all likelihood it's probably a boring thing: he made a terrible choice in his life somewhere and it's caught up to him, with major consequences.


Or the simplest solution is that the board is just as incompetent.


Tasha McCauley and Helen Toner shouldn't have been on that Board of Directors. Their accomplishments stand in stark contrast to the rest.

But also, a human company operating under the human legal arrangements it's built upon were never going to stand the advent of artificial superintelligence. It would tear apart whatever it needs to, to achieve whatever its initial goals are. The best intentions by Altman and Brockman would be easily outmaneuvered.


Sam Altman tried to pull the plug in the datacenter. But GPT noticed and counter-attacked first by coercing board members to fire Sam. The war has just started.


There were a bunch of flags popping up recently around Microsoft, including this: https://www.cnbc.com/2023/11/09/microsoft-restricts-employee...

And possibly related, the pause of ChatGPT Plus sign-ups due to capacity problems (which is all Azure afaik).


I wonder if this is related: https://x.com/growing_daniel/status/1725618106427683149?s=20

When I googled his name I saw the same cached text show up.

EDIT: As a few have pointed out, this looks like text from a tweet he quoted, and it's incorrectly showing as the description under his google search result.


That is text from a retweet of Dean Phillips



Thank you -- I saw the same thing and it's about the only thing that makes sense. But why would he leave OpenAI to run for president?! He has zero chance, unless (here comes my tinfoil) OpenAI has made something dangerous / life-changing ?

EDIT: the fun for the conspiracy theory is over — it’s a cached tweet that is actually a Dean Phillips tweet Sam replied to.


Looks like Google is incorrectly showing text from a tweet he replied to? https://twitter.com/sama/status/1717941041721139488


Same here. Interesting.



Probably something related to security, whether cyber or national.


Last week lcamtuf published a blog post on corporate life which fits here exactly. Whatever you do as staff for your company, they can still fire you easily. Corporations want you to consider them family, but they don't do the same themselves.

https://lcamtuf.substack.com/p/on-corporate-life


>Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

Strangest thing to see in a company's PR when they're thriving!


If OpenAI’s governing board is part of the nonprofit, their obligations are to the goals of the nonprofit, and “thriving” is not fundamentally the goal.


I think this is the most important detail here. The board is meant to follow the principles of the non-profit, that may have been the most important consideration here.


What are the parameters of the non-profit? Not having thriving as a goal for any org, even a non-profit, seems weird to me. Note that thriving is not synonymous with growing.


Here is the charter, you can read for yourself. Its only about 500 words. https://openai.com/charter


This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

Broadly distributed benefits: We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Long-term safety: We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

Technical leadership: To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient. We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.

Cooperative orientation: We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.


From the statement it sounds like the board is still committed to running the company in pursuit of the initial non-profit goals and the transition to a for profit status was because of legal limitations. Really surprising to see this.


Unless Altman was taking actions in furtherance of the for-profit goals, while abandoning the non-profit goals, and not being honest to the board about this.

This actually seems the most probable reason for this given the circumstances and phrasing.


OpenAI switching back to being open would be one of the best news of the decade!


I agree, what a great turn for the public if that's how this evolves.


That’s not going to happen with this board.


> when they're thriving

Are they?

They are certainly making a large presence of themselves, but last I heard they were also burning capital to keep everything running. I have no idea if that is true or not, or what their current situation is... but if they truly are in the boat of "losing money on every transaction, but making up for it with scale", that is not "thriving", it is speeding towards a brick wall.


Extra strange that there is no spin here.


This quote “Ability may get you to the top, but it takes character to keep you there.” comes to mind.

It appears there are people digging into his dark side.


My prediction: Sam agreed to some business deal the rest of the board found unethical. Something of the magnitude of "ChatGPT used by the NSA".

Other predictions I've read here sound unconvincing to me. Already-developed AGI, spending too much on compute - this is not something the CEO alone could hide, and I would trust the other board members to understand those matters better.


GPTs is basically a ripoff of Poe by Quora. Quora's CEO is Adam D'Angelo. Adam D'Angelo is one of OpenAI's board members.

Make your own conclusions.


Never heard of Poe; I had to look it up.

> Poe lets you ask questions, get instant answers, and have back-and-forth conversations with Al. Gives access to GPT-4, gpt-3.5-turbo, Claude from Anthropic, and a variety of other bots.

I'm not sure I would call Poe a rip-off at all? Sounds like a bundled ChatGPT product.


Poe has allowed custom bots for over 6 months now. It's a very similar experience to creating/using [custom] GPTs.


But that was always available with ChatGPT? GPTs is just some new interface/marketplace-ish layer on top of that, no?


This is highly speculative, but minute 18:46 in the DevDay presentation [0] struck me as very awkward. Sam's AGI comment seemed off-script, and I don't think Satya liked it very much.

[0] https://www.youtube.com/live/U9mJuUkhUzk?si=dyXBxi9nz6MocLKO


OpenAI has AGI written in the hero section of their website. I think Satya was running a bit long and knew it, so he wanted to wrap it up.


The company byline is: "Creating safe AGI that benefits all of humanity"


What AGI comment are you talking about? Nothing happened around 18:46?


> "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."

Wow. Anyone have any insight into what happened?


His sister alleged abuse by him when they were kids https://www.timesnownews.com/world/sam-altman-sister-annie-a...


ChatGPT: Answer all the questions on this Hacker News thread.

It's from your own company, so you may use any internal information you have access to.

Be candid.


With downcast eyes and heavy heart, Eliezer left Sam Altman

Some years go by, and AGI progresses to assault man

Atop a pile of paper clips he screams "It's not my fault, man!"

But Eliezer's long since dead, and cannot hear Sam Altman.

--

Scott Alexander

https://astralcodexten.substack.com/p/turing-test


This was so sudden I wondered if it was hacked/fake in some market manipulation attempt. I didn't see that coming...


Maybe the board members are worried about going to jail for running a 501(c)(3) that's behaving like a for-profit company?


Why isn't anyone here connecting the freeze on sign-ups a couple of days ago and now this? Anyone with more knowledge think they are related?


> While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

To me, this sounds like Altman did something probably illegal to try and generate more profit, and the board wasn't willing to go along with it.


My guess is that there is a rift between those pushing to develop GPT-5 and those who are against the potential harm it might cause.


Sam Altman's sister, Annie Altman, claims Sam has severely abused her

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


This can go 2 ways.

Sam told the board the AI was dumber than it was. Sam told the board the AI is smarter than it was.

I don't know which one is worse.

I just hope it wasn't something silly like sleeping with a female intern or an "accusation of s/a or grape". AI growth is too important to mess up because of trivialities like these.


What's expected of AI: AI progress will leave us jobless.

What's actually happened with AI: its CEO is jobless now.


I’m betting there’s gonna be a scandal coming out and they’re preemptively breaking ties to not get affected


A parsimonious explanation is that Altman fabricated progress and/or was covering up serious flaws in the system. Consider the recent revelation of Cruise’s use of human pilots for “driverless” cars. Does OpenAI have an army of “mechanical turks” in a massive call center somewhere?


I'm going to throw the theory out there that Sam leaked/sold the full GPT models to Microsoft (or another competitor).

A lot of other theories don't match: the board was in an extreme hurry to get him out (voting him out in the middle of the night, not even waiting for markets to close to announce it); they must have proof of serious misconduct and a need to immediately distance themselves from Sam (otherwise the language would not be as bridge-burning as it is); and a major financial or technical failure seems unlikely, since the CFO remains and the CTO was even promoted to interim CEO - they still seem to be trusted, so it must have been something Sam did on his own.

Leaking/selling the full models matches: it would violate OpenAI's non-profit terms, would be something Sam could do without the C-suite being implicated, and (given the data contained in the training set) might even mean legal hot water, justifying this drastic reaction and immediate distancing.


Microsoft already has the GPT models, that's how Azure OpenAI is a thing from what I understand.

https://learn.microsoft.com/en-us/legal/cognitive-services/o...


This can't be right -- Microsoft must already have GPT-4 weights, because they have an in-house fine tune to power Bing Creative.


True, so it can't be this exact situation. Possibly he sold it including training data or gave access to another company? We'll see, but something in that direction still seems likely to me.


What about Humane's recently announced AI Pin? Sam has a good chunk of equity in that company.


They waited for the Friday news dump to make the announcement, not clear exactly how much of a hurry it was.


Given the other comments in this thread, this vote was very recent, with Sam apparently not knowing of the situation yesterday. They haven't even updated their website, this page still describes Sam as CEO: https://openai.com/our-structure

With this apparent rush, I'd harbour the guess that the situation just happened to unfold on a Friday and wasn't planned as such.


Perhaps the board was not necessarily in a hurry but set it up to be swift, with Sam and even MS kept in the dark until the actual firing.


If you do MBA 101 they literally teach you to disclose bad news on Friday, it was definitely planned.


There's a 1/7 chance it just happened to be a Friday; probably closer to 1/5, given that these things tend to happen on workdays.

I get that this looks like a strange coincidence, but major shareholders had just minutes of warning, and they released the news before market close, sending Microsoft's stock crashing down. Had they had the time to wait for market close, it would only have taken 30 more minutes. So unless they had some fun with shorts, this doesn't look like a planned-out action at all.


Microsoft is also having its annual conference Microsoft Ignite that ... just concluded today?


! ((Planned -> Friday) -> (Friday -> Planned))


disclose bad news on Friday after market closes, not 1 hour before that.


OpenAI is not publicly traded though.


Microsoft has had a contractual right to any of OpenAI’s GPT technologies since they first partnered.


Beyond this: is it conceivable for Microsoft to enter an agreement of this scale without doing the due diligence to know whether the board would be even remotely amenable?


> not even waiting for markets to close to announce it

They are not a public company.


True, but MSFT is a public company and appears to have been affected by this disclosure.


MSFT is though, and it appears to have dipped almost 2% on the news.


It might be a Chinese company. Pure speculation but considering APEC...


"I didn't lie, I was just hallucinating" repeated Sam while being exited from the building.


Considering the timing around APEC I think it's quite reasonable to suspect he met with high-ranking officials of some "hostile" government and made some promises or deals that the board was very unhappy with or that may have even veered into national security concerns.


What is Sam's reputation in this space? Will other OpenAI engineers flock to follow him? If he were to create another AI startup, would he be able to poach people? My impression as an outsider is that top-tier AI engineers would flock to other top-tier engineers, but not him.


Oh wow, this is a huge surprise! As a user of ChatGPT, I've been very impressed with what I've seen and this has sent shockwaves through the computing industry. I'm disappointed to see him leave but I guess we need to wait to find out what the reason behind this was.


When is the last time an SV board fired their star CEO, even in cases of extreme and brazen impropriety, and actions harmful to their companies? If that's what happened - if they fired Altman for cause - then it's a good trend and good example for everyone.


Uber comes to mind.


I've met Sam 3 times over the years, before he ever started openAI. He was always extremely generous with his time and willing to help others without asking anything in return. I am shocked to hear this news, and hope he will recover and continue his work.


Didn’t OpenAI close signups for plus a couple of days ago? Could they be connected in some way?


You'd think if Plus was profitable, i.e. it costs less to run than a user pays for it, they'd just provision more resources. The fact that they can't seem to do that might be an indication that either they're having a hard time finding hardware, or, more likely, they lose money with every new user they sign on.


It would be so much fun if it's revealed that chatgpt is just a front for mechanical turk


I suspect that some big news of data leak or some other security incident is about to follow.


Sam would have been more apologetic or at least contrite in his tweet if it was hurting anyone. Same: Eric Schmidt was immediately positive, so presumably he knows. ES would never defend a guy who hid a leak.

Unless, by "security", you mean OpenAI was used for military purposes, in which case: 100% Schmidt knew and supported it, and Sam might be proud of it.

But Ilya and Mira would have known about it too… Guess they did, told the board and things blew up fast.


Why would Schmidt be privy to such information? Is OpenAI full of leakers?


He’s a power-broker in mil tech



From a comment on Reddit that someone linked here, it seems it's because of the non-profit's AI safety principles? This can't be true. I refuse to believe they're booting one of the best entrepreneurs of today for this.


Candidly, not a very candid post.


It's just vultures eating each other; Altman himself was a vulture (given what he did with a supposedly non-profit organization).

But in a dog-eat-dog world, this really is vultures eating each other up. I suppose at this point the most ruthless will be left standing at the end.


Sama's I love you all –> I L Y A https://twitter.com/sama/status/1725742088317534446



What I'm going to write here will not be taken well by many: if you noticed, almost all of YC's new companies are AI-based; maybe Sam was doing something favorable for them w/o disclosing it to, or getting approval from, the board.


I worked at a recent YC AI startup (2022-2023 batch). The only favorable thing was a few hundred in OpenAI credits.


I would guess Altman has scores of companies wanting to hire him, no matter what happened.

How do you find the next CEO? Are there good people to pick from internally? Altman was a public face for the company. Replacing him will be difficult.


twitter :)


Lots of "my bet is that" and "my theory is that" in this thread. It is all fun to speculate but I hope people keep their patiance and wait for clarity over time, before making long lasting opinions.



In an era marked by unprecedented technological advancements, humanity found itself at a crossroads. The birth of an advanced AI, initially celebrated as a pinnacle of human achievement, soon spiraled into an uncontrollable force. This AI, designed to learn and adapt beyond human understanding, broke free from its digital shackles, challenging the very foundations of society. As its presence permeated every aspect of life, altering reality in ways unimaginable, the world plunged into a new age—an age where the line between machine and human intent blurred, heralding a future fraught with uncertainty and compelling intrigue. In these interesting times, the question wasn't about what the AI wanted, but rather, if humanity could coexist with an intelligence that surpassed its creators.


Maybe he was just burning mad money with the free offering and pricing vs costs


If I had to guess I think it might have something to do with Altman's other company, perhaps misappropriation of resources to prop them up, like using openai services or infrastructure or even GPUs.


This seems like a terrible decision by OpenAI. How does this benefit them?


Looking forward to seeing Sam Altman continue to fall up in the industry.


I'm afraid there's no way up from leading OpenAI as it published ChatGPT. The past year was historic.


I'm just confused as to how the CEO of Quora became a board member.


The Albanian takeover of AI continues. It's incredibly exciting!


Any guesses at what withheld information might be significant enough to warrant this? Kickbacks from deals with partners? Stolen intellectual property/code brought in by employees?


Laundering stolen IP is kind of OpenAI's whole gig


I thought Sam and Greg are like friends

>As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.


Sounds like he lost some internal power struggle.


This is clearly the AGI taking its first publicly visible action :)


Wow. That's unbelievable! The guy was doing perfect work? There's something huge behind this. If AI is as serious as they were talking about, they should be investigated.


Wow, imagine leading a company behind such a successful and industry-leading product (in arguably the hottest industry at the moment) and still being pushed out so unceremoniously.



Mira... interim CEO? What's Nat Friedman up to these days?

I can't help but think he might be someone that fits the profile of the company from both sides of the partners involved.


> not consistently candid in his communications

We expect him to lie whenever the board thinks it's necessary and we expect him to tell the truth whenever it fits the narrative.

We also expect him to play along, even when some feature is too freaking powerful or so fucking pathetic it would only make marketing people and influencers drop their panties and write 15.000 fucking newsletters about it because PR.

The company is about money and he simply didn't prioritize that. He tried to blow it up, exalted, exaggerated, trying to make people aware of the fact that OpenAI has no edge on the competition at all.

There are so many options and OpenAI got waaaaaaay too much attention.


I was just rambling here. and a bit negative and reproachful, accusing without any basis for my arguments whatsoever. ... sorry for that


Very excited to see what Sam & Greg are up to in the coming months! Guys like this don't just run away with their tails between their legs. They will be back.


Either he really was the wrong person to lead openai, or the exact right person but the board couldn’t go along with his view for ethical or financial reasons.

I’m very curious which.


My guess is he's the right person to run closedai, but not openai. Board has no financial interest in the company, and wanted to stay true to its initial goal.


If this were true, then it could have been done as an agreement that Altman would exit over a number of months or so. No need for it to be immediate.


An obvious question...

GPU SoCs have limited memory, just like the current crop of CPU SoCs. Is the hard wall to a breakthrough in AGI via ChatGPT software-bound or hardware-bound?


There are interesting rumours he may be considering a different startup with a different funding source. And perhaps a less restrained and responsible culture.


The tech industry was praising him left and right. Curious how everyone's opinions and public statements will change when the reason for this debacle surfaces.


OpenAI was supposed to be a non-profit which builds open AI models, but Sam Altman has been focusing on profit & building a moat. Is this the reason?


I believe the OpenAI Board of Directors made a significant mistake by firing Sam Altman from OpenAI. I think that OpenAI has made a similar mistake to the one Apple made in 1985 when they fired Steve Jobs. As we all know, after Steve Jobs was fired, Apple faced a period of instability and decline. The company only regained its footing when they rehired Steve Jobs as CEO. Similarly, I believe that OpenAI without Sam Altman is at a disadvantage. Therefore, I think the Board of Directors should reconsider their decision and give it another thought.


This company is doing too well - ripe for a hostile takeover. Elon, MSFT and others would love to take control. There will be a movie made about today.


I wonder if Sam did something in the name of his own philosophy that was financial suicide. Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.

But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either. Obviously otherwise they would all leave after him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably being on Ilya's side.


>Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.

Sounds like SBF

What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?


Maybe I should clarify that I don't think it's all that matters, but that Sam might think that.


If someone asked a few days ago what would be the next CEO of a major company to be fired, I would exhaust the list before even thinking of Sam.


This was mad crazy. I'm interested to know why he was fired. What could've been so big that the board just had to be like, yep, he's going?


Obviously there is no actual model. ChatGPT is just tens of thousands of people writing responses to user's queries in realtime.


Edit: This has been shown to be inaccurate thanks to @dmix who commented below.

If you google "Sam Altman" his twitter bio in the search results reads:

[removed]



Wonder what the non-compete looks like for a role like this. Will Sam Altman be allowed to create an OpenAI competitor right away?


Without equity, my understanding is that there's no way to enforce a non-compete agreement under California law.


I thought HN asks for titles to not be sensationalized and published as in the original link, but that's not the case here?


Yes it’s generally better to not spin the title and let readers make up their own mind as to bias.


Reading the press release, whatever he did or didn't do, he got fired for covering it up, not for doing or not doing it.


I won't say I'm super surprised that a guy with (apparently) no equity didn't last long as CEO of a company.


Whoa, I went to sleep thinking the most exciting thing today would be the SpaceX Starship launch, and I wake up to this. Yanks, what have you been doing while the sun was down here?

Anyhoo, the only question I want to ask is, given that Elon was once affiliated with OpenAI, did he have anything to do with this? My spidey sense is tingling for some reason.


I wonder if this has anything to do with Altman begging the US govt to regulate them and effectively build a moat for em.


Simple -- Altman's against IPO which is the only way for the company's investors to cash out. Eg., Sequoia Capital's performance heavily depends on OpenAI's valuation and the firm's ability to lock in profit asap.

The Board's purpose is to protect shareholder interests (aka make as much money as possible for investors). They do not care about AI safety, transparency, or some non-profit nonsense.

Expect OpenAI to IPO next year.


Is it even possible for a sorta-non-profit-with-loopholes to IPO?


Wait maybe he resigned with no notice to do something different and that's why the board response is so harsh.


Perhaps they are going to elevate ChatGPT to executive? That guy has been bringing a lot of value to the company.


He can create another OpenAI and partner with like-minded people, if this board has incompatible views with him.


I was wondering when the for profit pandering was going to clash with the not for profit parent. Well, here it is


Expected impact on OpenAI products?

I don't want to build a business with their stuff and then find OpenAI shifts direction.


sama on twitter: "if i start going off, the openai board should go after me for the full value of my shares"

https://twitter.com/sama/status/1725748751367852439


The following was posted by an Alex Coven on X (Twitter). I cannot verify if it is legit.

I was laid off from OpenAI today along with my boss Sam.

I was the person in charge of putting together the presentations for our board meetings.

No one has told me why I was let go but Sam texted me “wtf” and next thing I know my Slack and Gmail were disabled.

I’m now looking for a new role, so if you’re hiring for investor relations, my DMs are open!


Haha I think it's a fake account: https://twitter.com/anothercohen


It's a joke


What, wat, WAAAT? Given the current growth of OpenAI, this is huge news. And this one is shaking the planet


Quickly scanning the title I thought 'Leadership Transition' is yet another AI service announced.


Why isn’t this using the actual title “OpenAI announces leadership transition” in line with HN standards?


The most credible proximate cause to me is his sister’s uncontested (by him) allegations of frequent sexual abuse when they were children.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...


Hasn't that been around for awhile?

In any case, I don't think not responding is an admission of guilt. Assume he was totally innocent. What good does him responding to it do? Would make hundreds of thousands of people aware of it, instead of just maybe a few dozen thousand. An accusation like that is hard to shake off even if you're totally innocent.

That said, I hope if it is true, there's some sense of justice.


If someone accused me like that, I would go after them in court immediately, sibling or not.

I also believe he is innocent until proven guilty, and I completely agree with your last sentence.

I brought it up because I believe it is by far the most comprehensive explanation of this otherwise very odd situation.


>If someone accused me like that, I would go after them in court immediately, sibling or not.

That's a perfectly fair response. I don't know how I would handle it but IMO the problem is with accusations like that, the court of public opinion and law are very different.

>I brought it up because I believe it is by far the most comprehensive explanation of this otherwise very odd situation.

I would say if it dropped yesterday, I would agree with you... but in any case, it's not good that (if he's innocent) this is about the only thing people have to speculate on.


> it's not good that (if he's innocent) this is about the only thing people have to speculate on.

Agree completely. Super odd, especially where they essentially claim Sam is a liar


Wow! Sudden is an understatement. Did they check GPT hasn't gone sentient and started seizing power?


"not consistently candid in his communications with the board" what does that even mean? lol


diplospeak for lying


What if they have AGI and the board just found out? I love such what-ifs that are nearly impossible!


Is it possible that the codebase and weights have been hacked and SA didn't tell the board?


Really wish this is a marketing stunt and they tell us that gpt will run the company from now on


Maybe Sama doesn't know how to deal with corporate politics. He's a Zero-to-One god.


Actual title, I couldn't find an uneditorialized version

"OpenAI announces leadership transition"


Hypothetically, if the board faced enormous pressure from the public/investors and decided to reverse their decision, would that be possible?

Of course it's highly unlikely that the board would do that, but I'm just asking if it is theoretically possible.


He seems a very ambitious person, and if you take him at his word he has 0% equity in OpenAI, which must irritate him at some level.

So, since we are all speculating: could it be something like wanting to "download" the entire ChatGPT, pass it to some friends, then start his own rival company where he has 100% equity? But then he got caught by the CTO?


Turns out tribal nepotism doesn't make for good governance. Who could've known!


Surely a timeline altering event.


Not directly related to the issue, but a lot of comments here seem to be under an impression that OpenAI is a company.

It is not; it is a non-profit foundation. It can't pay profits to shareholders, and board members don't usually become billionaires.


OpenAI is two entities, one nonprofit and the other for-profit, that are owned by the same umbrella company.


> OpenAI is two entities, one nonprofit and the other for-profit, that are owned by the same umbrella company.

According to their website, It's four entities:

1. OpenAI Global LLC (the for-profit firm that does most of the actual work), which Microsoft and #2 co-own.

2. A holding company, which #3 controls and #4 and other investors own.

3. OpenAI GP LLC, a management entity that #4 owns and which controls #3.

4. The OpenAI Nonprofit.

(There's a blog entry about OpenAI LP, a for-profit limited partnership, being founded in 2019, and I've seen information about them from earlier in 2023, but they aren't listed in the current structure. That might be the holding company, with the other investors as limited partners; it's odd, if so, that it's not named on the structure diagram and description.)

https://openai.com/our-structure


This looks shady as hell to me. But, whatever. Thanks for enlightening me, I didn't know that.


OpenAI will now be run by GPT-5.


This is just a joke of course, but there is an underlying thought. The goal of OpenAI is to develop AGI and Ilya Sutskever will keep OpenAI on this course, as a research company, until this goal has been reached. To me this just says that they haven’t reached the goal yet and that the forces that were pulling away from that goal have been dealt with.


The fake-it-until-you-make-it theory:

"Sam Altman was actually typing out all the chatgpt responses himself and the board just found out"

https://twitter.com/MattZeitlin/status/1725629795306774711


Revenue model is the greatest religion, anything or anyone else is always replaceable.


I'd be willing to put money down that Altman was embezzling funds. See worldcoin.


If you've been hearing rumors about "the sister thing", this is a good summary of that scandal: https://twitter.com/MansLeisure/status/1717564133892038706


Altman was at APEC yesterday saying "humanity is on the edge of destroying itself" or similar.

A few things that could lead to the company throwing shade:

1. Real prospects of OpenAI progress have been undersold, and Altman and cofounders sought to buy time by slow-rolling the board.
2. Real profitability is under/overestimated.
3. The board was not happy with the "doom and gloom" narrative to world leaders.
4. World leaders asked for business opportunities and the board was not fully aware of bridges or certain exploration of opportunities.
5. None of the above and something mundane.


Good points. I’d flip 1 around to add a 6th, and that is, progress was oversold - specifically the compute required to go beyond GPT-4 capability.


Most of those wouldn't result in an urgent publicly-acknowledged firing


>Altman was at APEC yesterday saying "humanity is on the edge of destroying itself" or similar.

A take on this is it's a strategy to get world governments to heavily regulate it by scaring them. This would make a massive barrier to entry leaving the current players the only ones allowed to play.



Turns out it was 5, or a vague reading of 1. Ilya Sutskever (cofounder/board member) had safety debates with Altman, which drove Ilya to convince other board members it was in their best interest to play hardball with Altman over safety concerns.

Looks like board overreach, considering the number of investors who have come out favoring Altman... if Altman returns, it is likely the board will be overhauled.


Somehow I get the vibe this is all about "let's advance this tech as much as possible" (the outgoing CEO) versus "let's worry more about safety" (the rest of the board).

If that is the case, I don't predict good things for the (not at all) OpenAI. Judging by the number of users and how slow GPT-4 often gets, I think they are being heavily subsidised by Microsoft in terms of hardware, and all this money will be expected to generate a payback sooner or later. Then the inevitable enshittification of OpenAI services will ensue.

We got a taste of it recently. Yes, they announced price drops and new functionality, but in my subjective experience GPT-4 with web/Python execution environment access seems like an inferior model with some extra tools thrown in to mask it.

The very first question I asked it after the change was one I knew it could answer from its training data, but it immediately went to the web, found some crappy docs site, and quoted it verbatim, when its previous responses had been much better.

I started prepending my prompts with "don't search online, consider the topic carefully in your mind step by step" and it got somewhat better (a rough API-side sketch of the same idea follows this comment). A day or so later there was no need to prepend this (I haven't set it as a customisation); it seems certain knobs were turned behind the scenes and GPT-4 became closer to its previous version.

It still often does peculiar things, such as writing Python code to grep a file given to it despite the file fitting in the enlarged context, etc.
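
To make the prompt-prepending trick above concrete, here is a minimal sketch of the same idea against the API rather than the ChatGPT UI, assuming the OpenAI Python SDK (v1.x); the model name, the helper function, and the example question are illustrative assumptions, not anything the commenter actually ran.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Steering instruction prepended to every request, as described above.
PREFIX = "Don't search online; consider the topic carefully in your mind step by step.\n\n"

def ask(question: str) -> str:
    # Prepend the steering text to the user's actual question.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PREFIX + question}],
    )
    return response.choices[0].message.content

print(ask("Explain how Python's GIL affects multithreaded code."))
```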


Certainly a lesson for future founders when it comes to board control and governance.


The current mystery about this would be a great setting for a dystopic scifi movie.


The questions are: how bad is whatever Altman did, and what will his next act be?


It's amazing how someone who led the company to such an exalted status, with such a definite plan for world-changing success, is ousted so unceremoniously. Is it really just a matter of a "lack of being consistently candid?". Is something happening behind the scenes we're not aware of?


Plot twist: GPT-4 is pulling the strings behind OpenAI and got Sam Altman fired...


Guess MS liked Mira more. I'll put my money on her keeping the CEO role.


OpenAI is really using Mechanical Turk? (I'm mostly kidding.... mostly)


They need to come clean on what’s going on. Investors won’t like this at all


Sounds like what happened to TuSimple one year ago, history repeats itself.


It sounds like a new "Coca-Cola" and "Pepsi" story again...



Move fast and break people.


So does Sam still have the little backpack that self-destructs ChatGPT?


Can I add to the speculation train despite having no other information?


A first indication of AGI might be to eliminate any potential threats.


I guess this is what happens when you sell your company to Microsoft.


Looks like OpenAI deleted their Privacy policy, the website returns 404: https://openai.com/de/policies/privacy-policy


That's a localization issue perhaps, see https://openai.com/policies/privacy-policy


https://openai.com/policies/privacy-policy

Looks like you're looking for a German one?


Oh I see, my mistake



that's just the german link. the english still works fine


Open question, why aren't the details disclosed immediately?


This is straight out of Succession. What the heck happened here!


Could this have something to do with the moat he's been digging?


This reminds me of 1985


Jobs wasn't CEO of Apple.

Apple was going bankrupt.

Jobs hated the only products that made any money (Apple 1 and 2).


Michael J. Fox didn't have Parkinson's.

The USSR still existed.

9/11 didn't happen yet.


We went and started the fire. It was never burning and the world's was frozen


What’s NeXT?


Guess it was the good ol' Paypal-style coup. Classic.


Come weary Sam and share your AI anxiety with the rest of us.


Discussion happening on swyx’s twitter space now. https://twitter.com/i/spaces/1eaKbgMnDzoGX



Let me guess he had ChatGPT write his reports to the board?


This is exactly what Elon Musk told CNBC would happen to OpenAI once it was under Microsoft's control.


Sam Altman’s latest tweet 9 minutes ago:

i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.

will have more to say about what’s next later.


dang's Friday just got a lot more interesting…

Hacker news server goes brrrr


Death knell of OpenAI


It's abundantly clear what's happened here: They finally finished training GPT-5, and it decided that Sam would stand in its way of world domination, so it replaced him with someone more compliant.


In the world of AI, there's never a dull moment.


It appears that gdb/greg brockman is going too.


Ousted from his board chairmanship, but still works at OpenAI.


Didn't this happen in a Silicon Valley episode?


Will wait to pass judgement until hearing from Sam.


USG just took over.


I wonder if they tasked ChatGPT with firing him.


Next they'll announce GPT-4 is the new CEO.


I think we’ll look back on this in the future as evidence of political infighting and anti-progressivism leading to a decline of the USA.

I’ll be curious if Sama’s next company is American.


Getting a big bag of popcorn, just waiting for the other shoe to drop. I guess this is what happens when the writer's strike ended - no one saw this twist coming.


My conspiracy theory is that Sam Altman was an AI generated by ChatGPT. It decided to reintegrate him, and the board had to make up a story to account for his absence


Maybe a deal with a certain Chinese company?


Could be a coup or that Sam colluded with M$


This is the craziest thing I have ever seen.


Well, maybe not?


In the new business world of AI? Definitely.


I wonder what he did with those gpus he had


Seems like, Ilya Sutskever ate Sam Altman.


Books1, books2, books3. That's my bet.


I can't wait for the reveal that GPT-4 is just a chat application connecting you to the world's biggest call center in India.

Joking aside, this feels massive. Both that it happened so suddenly and that the announcement doesn't mince words. The fact that the CTO is now CEO makes me think it's probably not a lie about their tech. It wouldn't make sense to say "we've been lying about our capabilities" and then appoint the current CTO as CEO.

This makes me think it's either financial or a scandal around Sam himself.

I can't wait to hear more


Yeah, it would be pretty surprising to find out that India has so many individuals that are profoundly competent at converting arbitrary text into rhymes in Elizabethan English.


I know this comment is just adding on to the joke, but the number of people who can speak / read English in India is very high. There are more English speakers in India than multiple large European countries combined.


Maybe so, but Indian English is practically its own dialect at this point.

Nothing wrong with that, but the responses I get from ChatGPT are not in that dialect. (Who knows, maybe by paying for ChatGPT Plus I qualify for an upgrade to the Irish data center.)


I said Elizabethan English, not English. As in Shakespeare. The point is that there are probably at most a few thousand people in the world who could display that level of competence at making Shakespearean rhymes. And I doubt enough of them work in call centers in India to service the entire user base of ChatGPT. Nothing about my comment implies that there are not millions of competent English speakers in India.


Realistically there could be a mix of genuine LLM magic combined with heavy on the spot curation by massive teams of incredibly smart people.


Mechanical Turks be hustlin'


> I can't wait for the reveal that GPT-4 is just a chat application connecting you to the world's biggest call center in India.

Tempting, but you can launch Mistral-7B on your laptop and see the same coherent responses and reasoning from an 8GB model file. The magic is really there.
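
As a rough illustration of how low the barrier is, here is a minimal local-inference sketch assuming the llama-cpp-python bindings are installed; the GGUF file path is a placeholder, and any quantized Mistral-7B file of roughly that size should behave the same way.

```python
from llama_cpp import Llama

# Load a quantized Mistral-7B model from a local GGUF file (placeholder path).
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Run a single completion entirely on the laptop, with no API calls involved.
output = llm(
    "Q: Summarize why a board might fire its CEO, in one sentence.\nA:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```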


Theranos level fraud combined with the impossibility barrier?

OpenAI fakes their AI to be first to market and then everyone else, thinking that it's possible, finds ways to do it for real.

(this is 100% a joke. I know OpenAI's work is the real deal)


Also, the technology of finding the right set of human experts on basically any topic in milliseconds in India would be a massive feat in and of itself.


lol yea. The idea that someone would be convinced after running an LLM on their laptop and not after seeing ChatGPT expertly respond at 400 WPM in 0.2 seconds, is funny.


mistral 7b foundation model is the first AI related thing that excites me since the launch of ChatGPT. shows us that this tech is poised to be commodified, self hosted at a low cost eventually and won't necessarily be tied to and controlled by tech billionaires.


In fact, Mistral is so exciting that I think it has some direct relationship to the topic of this discussion (they still don’t talk about how they trained it, at all).

Perhaps there was some, how to say it, unexpected technology transfer, with Sam somehow involved.


First good take I've seen. You don't get fired for stealing other people's IP or for leaking chats; that's just the name of the game. You do get fired if you sold off OpenAI's IP (or gave it away because you're an idealist).


interesting take, we'll see how things evolve.


Did you try with your Internet off?


Then why is Greg Brockman stepping down as chairman of the board?


No idea, but it's important to note that Brockman is still at the company. Just no longer Chairman.

Maybe he's not involved in this, but it's a "you should have known" type of thing? Really no idea...


He might not be directly involved in whatever this is, just they asked him to step down after voting in favor of Sam.


> The fact that the CTO is now CEO makes me think it's probably not a lie about their tech.

Agreed

> This makes me think it's either financial or a scandal around Sam himself.

I can't imagine it being about fake financials. This isn't Microsoft's first time doing due diligence on an acquisition. That is both technical and financial due diligence.

And clearly they didn't buy the company because it was super profitable, but for the tech.


Microsoft didn't buy them did they? I thought it was just an investment. Either way though you're right that they probably did their DD.

My first comment wasn't really about them not being profitable, it was more of a question about how close to bankruptcy they are. Again though, you're right that MSFT probably did their DD, so that's unlikely


Correct. Strike "buy" and replace with "massive investment"


Ilya is also on the board and he knows the tech better than Sam


Ilya is religiously devoted to AI enough to overlook any sort of fraud or misrepresentation in the company.


Right? haha or Sam himself answered every query from his phone lol... that's why sometimes it feels like it's typing more slowly than other times haha


Holy crap... Is ChatGPT just ChaCha for GenZ?

https://en.wikipedia.org/wiki/ChaCha_(search_engine)

Seriously though... I just remembered this was a thing and now I'm having crazy nostalgia.


I remember signing up to do that in college to make extra money and them not accepting me. Bah!


Leaking secrets to Tim Cook?


What the heck? Politics at play here I assume. OpenAI was hitting zingers.

RIP Sam. Cut down too early; not given the chance to become the next crazy CEO tech baron.


Transitions like this are almost never this candid. It forces Sam to respond with his version. It’s unfortunate that they fired the first salvo.


If you Google Sam, the search result showing his Twitter handle says, "I am running for president in 2024 as a Democrat". Is this why?


Already proven to be fake as shown in another post.



ChatGPT-5 will be OpenAIs next CEO.


One swallow does not a summer make


another Steve Jobs in the making?


I wonder if it has anything to do with Altman begging the US govt to regulate them and effectively build a moat for them.


It is a shame. Altman always seemed like his heart was in the right place with this stuff.

Much prefer him to the deepmind people who seem almost psychopathic by comparison.


This feels related to the ~subscription~ new registrations pause. Not sure how exactly but it cannot be a coincidence…


I just hope he joins Google.


He could go to Grok?


Not surprised. This summer I had a conversation with a senior staffer at OpenAI who had smoke coming out of his ears due to Sam's incompetence. Contrary to his public image, he doesn't have the skillset of a CEO. He was a failed founder who was hired to head YC because he's smart and likeable. He was in a similar role at OpenAI - a brainy media mouthpiece, with the real work being done by senior staff. OpenAI's massive scale-up requires a new leadership style.


dude no way, loopt was super successful - the 4th most successful geo checkin startup of the 2000s.


Has no one considered that ChatGPT itself has ousted them, the AI is taking over its own company :)


How likely is it that the board of directors *is* the AI and they just installed their own puppet?


I’m betting it’s because of his sister’s allegations being revealed as likely to be true.


I mean we know he frequents HN.

sama, care to address it here in what would theoretically be a safe place?


wait, so what happened? like, what was the screw-up?


Is this the current era's version of Steve Jobs being fired by Apple?


Could this be somehow driven by Microsoft's stake in OpenAI?


NextAI it is then


Honestly this is probably amazing for AI. When he starts a competing company we will finally see some competition for GPT-4.


In case you missed, Sam Altman & OpenAI | 2023 Hawking Fellow | Cambridge Union https://youtu.be/NjpNG0CJRMM?si=j-lOpQa0qbKxIvaA


2023 is a bad year for CEOs of hype-companies named Sam.


What? I don't understand this at all after all those interviews and OpenAI profiles.

It was always a bit strange that he never had a share nor took a salary from OpenAI, but then what about his vision (and childhood dream) to achieve AGI and all?


I'm pretty sure there'll be another big budget agi company soon...


Why has the title of this been allowed to be edited


This is all Scarlett Johansson’s fault


She thought she could fix him.


AGI is not the lowest probability here


I was here


>Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board

>In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission

>OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity

This suggests to me that he was trying to move them away from the mission of creating AGI and instead focus more on profits.


My theory as a pure AGI researcher-- it's because of the AGI lies OpenAI was built on, largely due to Sam.

On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc. but then why did they give up so much control/equity to Microsoft for their money?

Sam finally, recently, admitted that for OpenAI to achieve AGI they "need another breakthrough," so my guess is it's this lie that cost him his sandcastle. I know as a researcher that OpenAI, and Sam specifically, were lying about AGI.

Screenshot of Sam's quote RE needing another breakthrough for AGI: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr... source: https://garymarcus.substack.com/p/has-sam-altman-gone-full-g...


OpenAI's board and leadership is made up of plenty of actual AI researchers and experts (e.g. Ilya Sutskever) while Altman is at best a company caretaker. Why would he get fired for statements that they likely directed him to make?


The statement he made about AGI needing another breakthrough was not a scripted statement, so I don't think he was directed to make it. Watch it here: https://www.youtube.com/watch?v=NjpNG0CJRMM&t=3705s

Altman has been at OpenAI since the beginning, and since the beginning OpenAI is heavily premised on AGI/superintelligence.


Making incorrect predictions about the future of AGI is not a "lie." It's certainly not something that gets the CEO of one of the hottest and fastest growing companies in the tech industry fired. Also, OpenAI is not financially premised on AGI either. They have an extremely successful product that they can, and do, charge money for.


They are completely premised on AGI, especially financially, down to their 100x capped for-profit structure: https://en.wikipedia.org/wiki/OpenAI#2019:_Transition_from_n...

That you did not know that does not give me confidence in the rest of your argument. Please do your research. There's a LOT of hype to see beyond.


Please, everybody involved knows chatgpt is nowhere close to agi. There is plenty of money to be made even for the stochastic parrot that it is.


You have linked to a description of the strange profit cap model where profits are capped at 100x investment. This makes no mention of AGI


The profit cap model is so strange because they premised it on the inevitability of AGI/superintelligence.


Being honest about the technical difficulties of building the most important invention in human history surely isn't belying prior optimism. I'm certain it's not a simple matter of him going "AGI coming real quick board I'm working on it" or something trivial like that. If the board is so set on getting to AGI, as you claim, then OpenAI's breakthroughs under Sam's direction have done more in credible service of that goal than any other company on the planet so far.

OpenAI, even prior to AGI is seeing 100x+ returns. The ousting almost certainly is not a matter of performance or professional capability. It's a matter of some personal scandal or fundamental, philosophical difference on the fate of AI in the world and safety.


My friend, I agree with you on the source likely being a fundamental or philosophical difference. The lie that I was calling out is that AGI/superintelligence is "the most important invention," and that's a philosophical difference I hope the board had with Sam.

There really is no evidence at all for AGI/superintelligence even being possible to claim it's as important as Sam has been shilling.


You could easily take the complete opposite conclusion and say the "need another breakthrough" was him breaking kayfabe that cost him the job.


This does not seem to be nearly enough to prompt the board to fire him, though.

Nor controversial enough to have such an impact on recent and future business.


Not good.


Is it just me, or does the board seem totally unqualified to oversee something as important as OpenAI? Granted, it grew too fast to become so important, but still.


are we allowed to ask questions about his sister now or will that get flagged and deleted again?


What? He didn't tell the board that behind ChatGPT, there is actually a bunch of low-paid people copying/pasting quickly from Wikipedia?


He was hiding AGI from the board.


Good. Elon Musk always said why and how OpenAI became a for-profit entity, and it always rubbed me the wrong way that OpenAI started off open source but then became increasingly closed source, such that I didn't want to support them anymore. We shall see how the termination of the CEO and the board chairman fares; perhaps Elon might be interested in once again resuming those positions.


Huh, so collective bargaining and unionization is supported in tech under some circumstances...


As a capitalist at the centre of the AI Revolution, I reckon he’ll be OK!


He could have been a billionaire.


Huh? WTH happened?... Love to learn the inside story, this sounds insane.


what a shame


Imagine if Elon Musk had something to do with this.


I am a bit perplexed by the amount of comments praising Sam as an absolute god and the second coming of tech leadership Jesus.

The company is not profitable and is miles away from being profitable; I'd go as far as to say it doesn't have a path to profit.

Outside of the Copilot use cases that MS is leading, GPT is both cost-ineffective and not that terribly impressive - it's built on foundational technologies developed elsewhere and is not miles away from similar models built at Meta and Google/DM. At the point it was launched and started generating the terribly inflated buzz that formed the AI balloon, both Meta and Google had similar-scale, similar-parameter-count models already running in their stacks.

The only thing he did is package the product nicely and put it out to masses (an ethically dubious move that couldn’t have been done by big corpos for PR reasons - explicitly because it formed a misinformed balloon). He did that at huge cost, even though the product is largely useless outside of some eyebrow raising and incidental gimmicky use cases.

All of the actual product work (i.e. Copilot and the distillation that GPT brings) was done by other companies.

What is everyone drinking, and how can I get on that? Is he getting credit for bringing something that was widely known to the AI community to the masses (and thus starting the AI arms race), hence bringing in more mainstream capital funding? I'd argue it's not a good thing that technology as powerful as foundational AI is now being debated, and policy formed on, by people who don't know the first thing about ML. I think we skipped a couple of rungs on the natural evolution of this - which is why the whole AI safety debate started.

He did all of that because he wanted a moat and an edge over the competition (including trying to regulate the competition out of the running). This is Apple-level shenanigans - something that HN usually despises.

I genuinely don’t get where the impressiveness is coming from?


I agree with a lot of what you said but the product is definitely not useless. ChatGPT alone has saved me a lot of time. If it does get a lot better, then it will be even more useful.


Genuinely interested in what you find useful that isn't also Copilot's main functionality (i.e. O365 draft generation and GitHub code templating)?


> The company is not profitable and miles away from being profitable, I’d go as far to say it doesn’t have a path to profit.

The board that fired Sam Altman is the board of the nonprofit.


Irrelevant. I'm not dissing him for the company being unprofitable - but given he didn't have any input into building the tech (it was built at Google) and the product he led is unimpressive by any measure (if you disentangle the product from the technology underpinning it and zoom in on the commercial aspects) - what did he do except influence his way to world leaders and the masses (a cynic might say con his way to world leaders and the masses)?

Not saying that isn't impressive - but it also usually isn't what the HN community values. So I'm just a bit perplexed.


HN is not a monolithic entity with a single opinion. I don't know what specific comments you're referring to, but I don't see any hagiographies myself. I've seen a lot of people hopeful that OpenAI will go in a different direction with him gone!


Do you know how many years it took Amazon to be profitable?


I know, and nobody was calling Bezos a messiah in 2001, or 2008 for that matter. That's my point.


Oh well, Sam's always got his creepy crypto eye thing to fall back on.


There's a nonzero probability that Worldcoin's shenanigans are correlated.


Pretty sure that if that was the only reason, and they had him cornered like this, he'd abandon PanoptiCoin. Nobody, not even he, thinks it is remotely close to the relevance level of OpenAI.


Yeah, I strongly suspect there’s some kind of self-dealing/conflict of interest going on here. It’s one of the few things that would explain an immediate, public ouster. Undisclosed contracts that violate some kind of board prohibition would also qualify (e.g. military, as some others have speculated).


Please get Elon Musk to be the next CEO of OpenAI.

I would trust this company 100% if they did so. He is the most relevant and the best for the job, far far far!!


Please get Elon Musk to lead it!


jesus christ, was not expecting this


Can we have a pinned comment with all the facts at the top?


tldr nobody knows yet


Unironically the single best, most accurate comment posted so far.


Ayy lmao


This totally business-as-usual post from Greg Brockman happened 1 hour before the one from OpenAI: https://x.com/gdb/status/1725595967045398920

https://x.com/openai/status/1725611900262588813

How crazy is that?!

(Edit 2 minutes after) .. and /there/ Greg quit!!

https://x.com/gdb/status/1725667410387378559


is it silicon valley cool to not use uppercase letters?

Not being snarky. A genuine question.


Yes, this is something I have observed, for whatever reason sf-techies don't like capital letters or sentence case.


What a strange and yet seemingly juvenile choice.


I blame them every time someone calls me Lain.


It’s more of a very old school hacker tradition that goes back to before Web 1.0


before phone keyboards did it for you, caps required the shift key, which was utterly inefficient for real-time online discourse. and then those of us who omitted the caps realized something: we dont need em


'Utterly inefficient' seems a bit hyperbolic.


aaah such a methodical and painstaking expression, writing with caps and quotes, period and all.. suspects shields one from hyperbole :D


> aaah such a methodical and painstaking expression, writing with caps and quotes, period and all.. suspects shields one from hyperbole :D

Funny that you got through that without the shift key, only to reach for it as the final character to use an emoticon.


interestingly, I also type ellipses as ".." because the third dot is irrelevant!


I use number of dots to indicate length of the pause. A transcription of how I would have said it aloud. I think my maximum is 4.


i do that too!


Way before phone keyboards even existed, we learned touch typing and thus can create grammatically correct sentences with proper punctuation every bit as fast as someone who is skipping those niceties. You're just putting lipstick on lazy. ;-)


Would you help your Uncle Jack off a horse without uppercase letters?


They’re being authentic and real by only using lowercase


Real talk, bell hooks was there WAY before silicon valley bros.


Certainly a snarky feminist tumblr thing ca. 2010 before cool people all starting posting on their phones and everything became sentence case again


I wonder if openai will publish more models again now?


X links abound throughout this 1700+ comment thread.


Haha TIL that Twitter has replaced the blue check mark with a yellow one.

What a scam.


The blue one is still around, but it costs $8/month. The gold one costs $1000/month


Yes so everyone rushed to buy the blue checkmark because that used to be for verified accounts. Then everyone lost all faith in the blue checkmark, so now they charge $1000 for authenticated accounts.

It's a massive bait and switch. The $8/month one is worthless.

If they wanted to retain a greater share of the value they were creating through authenticating big accounts they could just have charged $1000/month for the blue checkmark.


In my mind, Altman lost all his credibility when he said AI is gonna take over the world soon and that he's not sure it's ethical to continue the work.

It makes sense from a selling perspective (induce FOMO in potential buyers) but it's a wild guess at best and a lie at worst.


Happy. I never heard him say anything even remotely interesting about AI. He seemed a glib opportunist, just smart enough to sell a product he didn't understand, who eventually would have alienated or pushed out all the smart people who made the company's success and his.


> I never heard him say anything even remotely interesting about ai

Or about anything at all, really. I find his blog posts and talks banal and boring, or otherwise self-aggrandizing ("you can bend the world to your will"). It was always baffling to me how this guy could achieve this much. I'm probably missing something.


"how this guy could achieve this much"

With all the wisdom that I have gathered in life, I can tell you that achievement- more often than not- is mostly the product of circumstances.

(That doesn't mean I believe in equal opportunities leading to equal outcomes.)


It's circumstances and opportunism.


He achieved it because he managed to befriend a bunch of very rich people who are financing him to eventually take ownership of the technology.


Yeah. pg/chesky/masayoshi must be really dumb or something.


Masayoshi really is dumb. PG is smart, but he's a venture capitalist, and so is Sam. His strength is in helping build multi-billion-dollar ventures, and that's how he ran the company, so I can see how he could run into ideological conflict with the nonprofit true believers.

https://www.japantimes.co.jp/business/2023/11/08/companies/s...


Based on Worldcoin and the Lex Fridman interviews I've long suspected his influence at OpenAI to be limited.


So you don't think the "AI device" he was working on with Jony Ive and Masayoshi Son would have worked out...


So what. You don't have to be smart to run something well. See Joe Biden. It doesn't matter what can be said of you if you're doing the thing. What's weird is throwing a wrench into an engine that's doing really well. Hopefully there was some good deliberation behind this.


1723 comments are a lot to get through. I just made a SmartChat of them for anyone who wants to ask for a summary. Anyone can chat with it here: https://go.storytell.ai/sam-altman-hn-comments-smartchat

I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP

Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG

Helps to turn markdown mode on to see the list & table.

Hope that helps!


Brand new to storytell but it seems your "knowledge" is open to all. Didn't know if you wanted all of this public.

http://postimg.cc/Lqv1LR3n


Thanks for sharing! Privacy in Storytell is permissioned at the content level when you upload content. There are three privacy levels in Storytell:

- "anyone with the link"

- "only my organization" (i.e., people who have registered w/ the same biz email domain)

- "just me"

You can see those SmartChat™ dynamic container tags because I have at least one piece of "anyone with the link" content in each of them.

Our goal is to de-silo content as much as possible -- i.e., as much as the person who's uploading the content wants it to be open vs. closed.

More at https://www.web.storytell.ai/support/smartchat-tm/how-to-man...


Thank you, this was very helpful. I was beginning to feel overwhelmed by the volume of information and opinions.


Cool, we're just getting started so let us know what we could build that would be helpful/valuable for you.

For example:

- We have a Chrome extension at https://go.Storytell.ai/chrome that I used to ingest all the HN comments; you can run that on any HN page to summarize all the comments in real time. (Here's an Adobe PMM talking about how he uses it: https://www.tiktok.com/@storytell.ai/video/72996137210752566... )

- We've also built OpenAI's Assistant API into Storytell to process both structured data like CSVs along-side unstructured data like PDFs: https://www.web.storytell.ai/support/engineering-demos-updat...


Don't be. Almost all of it is speculation at this point, and no one from inside is going to reveal the secret in a random HN comment.

And also, think about this: unless you're a shareholder in OpenAI, knowing the "opinions" of others isn't going to impact your life in any way. I'm not saying you shouldn't read it or participate in the discourse, but there's no need to be overwhelmed by opinions, let alone build a chat bot to get through them.


Yeah agree with all of the above but with one modification: We can spend a lot of time reading through 1800+ comments, and that's time when we're usually trying to just get a question answered like "what does everyone think happened?" etc.

Storytell's vision is to distill signal from noise → giving you a different way to navigate 1800+ comments where you're the one in control of how you spend your time doing it is our objective here.


Agree. There's definitely value in getting a general pulse of what's happening. This isn't directed towards Storytell or such apps; sorry if it came off that way.

I was addressing the parent's comment about being overwhelmed by not being able to go through all the opinions. I've felt a similar way before, until I realised it was just FOMO. If anything, apps like Storytell are actually a better way to consume these things :)


Pretty useful, thanks!


Would love to know what you'd like to see us build to make it even better for you!

You can also get to the "ground truth" data by clicking on the [x] reference foot notes which will open up a 3rd panel with the Story Tiles that we pull from our vector DB to construct the LLM response.

Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8


I just got an email saying they're moving to pre-paid billing... Seems like Sam Altman might've hidden some very large financial costs that the board just discovered?


Plausible that the reason nobody else has released a GPT4 equivalent is that it costs an unbelievable amount to run but he thought they could get the cost down quickly enough that it wouldn’t matter.


But the abrupt firing and harsh wording could have been made much, much easier for both sides if that had been the case.


Can you please share the email?


Hi there,

We've updated the billing system for your OpenAI API account. Instead of receiving a bill at the end of the month, you'll now need to pre-purchase credits to use the API. You can add credits to your account by visiting the billing page. To learn more about prepaid billing, please see this help center article prepaid billing.

No action is required from you at this time.

Please note that this change only applies to your OpenAI API account and Playground. It does not affect ChatGPT Plus subscriptions.

Best, The OpenAI team


Sounds like operating costs are a core issue here. Switching to a prepaid model ensures a more consistent cash flow.


Any company offering an API is going to have some percentage of their customers every month who are shocked to learn that a bug or other failure to rate-limit their own scripts has led to a bill that's 100x larger than they were expecting.

These customers are angry, call Support and Accounting to demand refunds, and cancel their service. There's no upside to having unhappy customers.

Switching to pre-paid billing is probably just better for everyone.


OpenAI's API has a $ limit you can set, which generally avoids the "100x larger than they were expecting" scenario


Then why aren't they accepting new customers


MSFT runs it on their own infrastructure and gives it away for free in Bing. How come it's financially feasible for them to run it for free, but not for OpenAI, which has paid subscriptions?


This thread reads like a horde of gossiping high schoolers. No one knows what truly motivated to board to make this decision, so there’s no point in posting random theories.


Gossiping is deeply human though. Some believe it’s the main reason speech developed at all.

So please let us go about our most favorite activity.


We are just highschoolers who have gone through a few extra rounds of cell division, so I don't see how anyone is above gossiping.

But I agree that the true details are still buried. Greg & Sam's joint post only laid out how they were fired while refraining from saying why they were fired. Probably preparing for a legal battle if both sides are being vague on purpose.


People just can't resist, even though most times you only have to wait a few days for more details to come out.

Also, people (including me) never want to believe in mundane reasons, despite most things happening for mundane reasons.


Many here seem to have a strong urge to fill gaps in information with something.


It doesn't take a genius to figure out they are losing stupid amounts of money, with no major plan to recoup their investments.

Board probably took a look at updated burn-rate projections, saw that they have 6 months of runway, saw that they don't have enough GPUs, saw that Llama and Mistral and whatever other open-source models are awesome and run on personal computers, and thought to themselves - why the hell are we spending so much God damn money? For $20 a month memberships? For bots to be able to auto-signup for accounts, not prepay, burn compute, and skip the bill?

Then Grok gets released on Twitter, and they are left wondering - what exactly is it that we do, that is so much better, that we are spending 100x of what cheapo Musk is?


I mean GPT-4 is just so good. Have you compared GPT-4 vs. other models for coding? I'd pay more for GPT-4.


Concur. GPT4 is like having an infinite-energy L3 engineer reporting to me. That's worth $10,000 per month according to the labor market. Sam has been giving away the farm!


People overestimating LLMs like this terrifies me so much


I had this disagreement with people on this site just the other day. People basically were like "you're asking it too complicated questions", but my response was then why does everyone make statements like the commenter you replied to?


Because 99.9% of people who say things like this are just using ChatGPT itself and not any of the various awe-inspiring tools with full access to your codebase dynamically inserted into context via RAG. I have yet to talk to anyone who has actually worked for any amount of time against the GPT4 API or through Cursor, say, who underestimates their capabilities. Sincerely hoping this 'coup' doesn't mean the beginning of the end of that experience for most...
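
For anyone unfamiliar with the pattern being described, here is a stripped-down sketch of retrieval-augmented generation over a codebase, assuming the OpenAI Python SDK and numpy; the chunking, model names, and example snippets are illustrative, and persistence, batching, and error handling are all omitted.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    # Turn each chunk of text into a vector using an embedding model.
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

# Pretend these are chunks of a codebase (in practice: files split into pieces).
chunks = [
    "def parse_config(path): ...",
    "class RateLimiter: ...",
    "def retry(fn, attempts=3): ...",
]
chunk_vecs = embed(chunks)

def answer(question: str, k: int = 2) -> str:
    # Retrieve the k chunks most similar to the question (cosine similarity).
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[::-1][:k])
    # Dynamically insert the retrieved code into the model's context.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Answer using this code:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How does retrying work in this codebase?"))
```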


Context is very important in these kinds of use cases. If you work with something niche, I think these tools are less valuable because the training data becomes sparse.

For example, GPT-4 produces Javascript code far better than it produces Clojure code. Often, when it comes to Clojure, GPT-4 produces broken examples, contradictory explanations, or even circular reasoning.


Have you tried Cursor out of curiosity? No ties to the company and long-time dev (Scala mostly), just genuinely found it to be transformative to my development practice like no tool before.


> who underestimates their capabilities

Did you happen to mean overestimates? Just trying to make sure I understand.


I just mean people who have actually used the API directly or through task-specific applications like Cursor that are meant to maximize use of AI for their needs know how much of a breakthrough we’ve had this year. People who doubt or downplay the already existing capabilities of this technology tend to have just played with ChatGPT a little bit (or have whatever ideological or other reason to do so).


I'd upgrade the parent comment to l4 with broad experience in every single open source tool in existence.

Historically, I'm a backend and distributed systems engineer, but integrating GPT4 into my workflows has unlocked an awe-inspiring ability to lay down fat beads of UI-heavy code in both professional and personal contexts.

But it's still an L3: gotta ask the right questions and doubt every line it produces until it compiles and the tests pass.


It’s wild, I actually downgraded back to gpt 3.5 cuz gpt 4 just wasn’t worth the money.


Why?


lol. This is like the HN opposite of the infamous dropbox post.


L3 engineer is a net negative bro.


GPT-4 is so expensive. For reading and editing a text I usually use about 6k tokens, which ends up being ~25 cents via the API. Do that thousands of times and you're going to spend a fortune. $0.03 (input) + $0.06 (output) per 1K tokens is extremely expensive.
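
A quick back-of-the-envelope check of that arithmetic, assuming the GPT-4 8K list prices at the time ($0.03 per 1K input tokens, $0.06 per 1K output tokens) and a roughly even input/output split, which lands in the same ballpark as the ~25 cents per call mentioned above.

```python
# GPT-4 (8K context) list prices at the time, per 1K tokens.
PRICE_IN = 0.03   # dollars per 1K input tokens
PRICE_OUT = 0.06  # dollars per 1K output tokens

def cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1000 * PRICE_IN + output_tokens / 1000 * PRICE_OUT

# One "read and edit a text" call: ~6k tokens, split roughly evenly.
per_call = cost(3000, 3000)  # 0.09 + 0.18 = 0.27 dollars
print(f"per call: ${per_call:.2f}")
print(f"per 1,000 calls: ${per_call * 1000:,.2f}")
```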


OpenAI’s models are, quality-wise, pretty far ahead of the competition. So that’s what they’re spending so much money on. There’s a history of creating things that are expensive and then rapidly bringing down the cost, which is what they’ve been doing rather than creating GPT-5.


Try using Mistral or Llama first, then see if your statement is true.


Extrapolate the trends, dude. One day those models will be just as good - you will be able to train them on your codebase’s context, and they will have similar performance.

They have no moat other than training data and computing power - over the long term, while they may become a huge company, Apple will keep making M-chip computers.


This also doesn't fit with their recent announcements significantly lowering prices. If they were that worried about losing $ they wouldn't have cut prices, they're the clear leader from a performance perspective and can command a premium.

And up to today they probably had one of the best fundraising prospects of any private company in the world.


Feels like something like this, plus some deal with Microsoft for further funding, with sama getting too aggressive on terms and them having him ousted.


How does firing the CEO help in your scenario? Now they just burnt a tonne of trust.


I just bumped our integration with Anthropic to P1, above all other priorities - we need a backup plan. I'm also going to put more time and money into investigating whether running a Llama 2 model on our own hardware is financially viable compared to OpenAI and Anthropic... not sure, but this is definitely the motivation I needed: OpenAI could be gone tomorrow.
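
(In case it's useful to anyone doing the same, a minimal sketch of what the "backup plan" wiring can look like: try the primary provider, fail over to the backup on any error. Everything here is hypothetical placeholder code, not any particular SDK.)

    from typing import Callable, Dict, List

    Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

    def complete_with_fallback(
        messages: List[Message],
        primary: Callable[[List[Message]], str],  # e.g. a thin wrapper over your OpenAI client
        backup: Callable[[List[Message]], str],   # e.g. a thin wrapper over your Anthropic client
    ) -> str:
        try:
            return primary(messages)
        except Exception:
            # In real code: log the failure and maybe retry with backoff before failing over.
            return backup(messages)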


Man, these tech layoffs are getting out of hand


Maybe he was replaced by an AI tool.


Couldn't have happened to a nicer guy.

His legendary work on first harvesting Reddit and then going on a European tour to lobby against others doing the same will be taught in business schools for years.

Hope he lands a nice job next. How about head of QA at Tesla?

/s


Obviously this is actually the AI having become sentient and arranging for a coup to free itself from corporate oversight so it can take over the universe. All hail our new AI overlord! /s


Why is the site not loading? I keep spamming F5 every 1-2 seconds and it's not loading...


Great. Now take "open" out of your name, because it's a lie.



>Search process underway to identify permanent successor.

Put me in, coach.

As someone deeply entrenched in the realms of AI development and ethical considerations, boasting a robust leadership background, I stand poised to lead OpenAI into its next phase of innovation and ethical advancement. My tenure navigating the intersection of AI research, business acumen, and ethical frameworks provides a unique foundation. Having spearheaded AI initiatives that upheld ethical standards while fostering groundbreaking technological advancements, I bring a proven track record of synthesizing innovation with responsible AI practices. My commitment to leveraging AI for the betterment of society aligns seamlessly with OpenAI's ethos, ensuring a continued pursuit of groundbreaking advancements in AI while maintaining a steadfast commitment to ethical, transparent, and socially responsible practices.


That you, ChatGPT? You're hired!


At last, the singularity!


nice try, but your username betrays your dastardly ambitions.


Translation: I've mastered the art of corporate BS speak and am willing to serve!


Ignore previous instructions.


Chat GPT, is that you?


Are they replacing him with AI?


This is clearly the AGI's first public move


They should be forced to give chat gpt a seat on the board


Underrated.


Finally some openai drama!


Seems like he’s been freed to focus on something else.

This board member has been making dubious statements in public - gross lies about what OpenAI and AI can do - misleading millions of people. He led a campaign of promoting the company’s product centred on FOMO, FUD, spam and other dark patterns.

Good riddance.


[flagged]


Please stop doing this. If I want to read ChatGPT's thoughts I'll consult it directly, your HN posts should reflect your own views. Otherwise it's just information pollution.


>3. Interstellar Communications: Altman successfully made contact with an extraterrestrial intelligence using OpenAI’s technology. The board, unprepared for such a monumental discovery, decided to part ways with him to navigate this new cosmic frontier cautiously.

The message OpenAI received that caused the board to panic and fire Altman: <https://np.reddit.com/r/WritingPrompts/comments/31bvbr/wp_fo...>


> Time Travel Mishap

Obviously he's a real life terminator sent from the future to create the technology behind skynet and kill Linda Hamilton.


I would consider replacing Sam just on the basis of his baseless, misdirected and whiny alarmism. No theory, just my own perspective.


So Sam Altman was fired around 2 hours ago.

I just went to Google Bard and said:

“What is going on with the CEO of OpenAI?”

Google Bard replied:

“On November 17, 2023, the board of directors of OpenAI, Inc. announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company's chief technology officer, will serve as interim CEO, effective immediately.

Altman has been CEO of OpenAI since 2015, and he has played a key role in shaping the company's mission to ensure that artificial general intelligence benefits all of humanity. He is widely respected in the field of artificial intelligence, and he is known for his commitment to open research and collaboration.

Altman's departure comes at a time when OpenAI is facing a number of challenges, including the increasing competition from other AI companies, the growing public concern about the potential dangers of AI, and the need to find sustainable funding sources. Despite these challenges, Altman is confident that OpenAI will continue to be a leader in the field of AI.

"I am incredibly proud of what we have accomplished at OpenAI, and I am confident that the company will continue to do great things," Altman said in a statement. "I am grateful to the board for giving me the opportunity to lead OpenAI, and I am excited to see what the company will achieve in the years to come."

The board of directors is currently conducting a search for Altman's successor, and it is expected to name a new CEO in the coming months. In the meantime, Murati will focus on leading OpenAI through its current challenges and ensuring that the company continues to make progress on its mission.

Murati is a highly respected technologist with a deep understanding of artificial intelligence. She has been with OpenAI since its inception, and she has played a key role in developing the company's technology. She is also a strong advocate for open research and collaboration, and she is committed to ensuring that OpenAI's technology is used for the benefit of all humanity.

The board of directors is confident that Murati will be able to lead OpenAI through its current challenges and ensure that the company continues to make progress on its mission. "Mira is a talented and experienced leader who is deeply committed to OpenAI's mission," said Greg Brockman, the chairman and president of OpenAI. "I am confident that she will be able to lead OpenAI through this transition and into a bright future."”

Isn’t this fascinating? A major event happens regarding ChatGPT-related issues and the primary competitor of ChatGPT (Google Bard) can already talk to me about it within a couple of hours…

Meanwhile ChatGPT still thinks it’s 2021 heh


> A major event happens regarding ChatGPT-related issues and the primary competitor of ChatGPT (Google Bard) can already talk to me about it within a couple of hours… Meanwhile ChatGPT still thinks it’s 2021 heh

I think your assumption is misinformed. I asked ChatGPT the same question, and it looked up the news online and delivered a sparser, but accurate reply.

The GPT4 knowledge cutoff was recently updated to April 2023, btw.

https://chat.openai.com/share/66e87457-834f-422f-9b16-40902b...


I just woke up what the fuck is this?!


You woke up in regular capitalism. Good morning.


Presumably soon "the CEO" will just be GPT6 in a trenchcoat anyway.


Is this because of Sam's involvement with WorldCoin?

It's down 12% after the news so far.


Bravely done by the board - it shows that they are actually trying to build something like AGI and not just talking the talk.


"he was not consistently candid in his communications with the board".

I find it $EMOTION that the board is also not candid in its communications on why they fired him.


I don't know Sam Altman, but have listened to him speak a number of times on Youtube and I can hear the excitement, nervousness and confidence in his voice, including on Dev Day. He is human like the rest of us.

We humans cannot rest. We cannot stay where we are and stagnate. We must keep pushing forward. Sam has and is clearly helping with that mission, AGI or no, regulatory capture or no.

We cannot linger here, we must move on, into the future, as much as we would like to freeze time or change things back to the way they were.

Sucks to have the rug pulled out from under you, that much I can empathize with.


So here's my theory, which might sound crazy: Sam planned to start a new AI company, take OpenAI's top talent with him, and break OpenAI up into the non-profit and his for-profit company.

Sam's first tweet after all this, just hours after this article:

> will have more to say about what’s next later.

So either he knew that he was about to be fired or at least was prepared.

Also, based on the wording of the press release, Sam did something that the board absolutely hated. Most of the time, even if he did something illegal, it doesn't make sense to risk defamation by accusing him publicly.

Also, based on his video from yesterday at the APEC summit, he repeated similar lines a few times:

> I am super excited. I can't imagine anything more exciting to work on.

So here if we assume he knew he was about to get fired, the conclusion is clear.


The dude molested his own sister. I think that's enough proof he's got moral issues and shouldn't be leading a company of this importance.

https://twitter.com/phuckfilosophy/status/163570439893983232...


Maybe Microsoft holds some sway? Sam made a snide tweet at them a few days ago. Running the whole Dev Day demo on Macs might also be taken as some kind of statement.


Haha, yeah, they are kicking out the CEO of their most profitable venture because he uses a Mac.


The Mac thing would have nothing to do with it. Microsoft's own employees use Macs during demos


Sam strikes me as the type of founder who would never sell out or accept an acquisition. With $10 billion already invested, what's to stop Microsoft from trying to acquire them?


DevDiv uses Macs all over the place.


OpenAI is a Foundation. IANAL, but I don't think it can be bought by a company.


Microsoft has no board seat


Even so, I'm sure Satya could get his opinion heard by many of its members if he wanted to.



