All: our poor single-core server process has smoke coming out its ears, as you can imagine.
I so hate to do this, but for those who are comfortable viewing HN in an incognito window, it will be much faster that way. (Edit: this comment originally said to log out, but an incognito window is better because then you don't have to log back in again. Original comment: logging in and out: HN gets a lot faster if you log out, and it will reduce the load on the server if you do. Make sure you can log back in later! or if you run into trouble, email hn@ycombinator.com and I'll help)
I've also turned pagination down to a smaller size, so if you want to read the entire thread, you'll need to click "More" at the bottom, or like this:
"In a post to X Friday evening, Mr. Brockman said that he and Mr. Altman had no warning of the board’s decision. “Sam and I are shocked and saddened by what the board did today,” he wrote. “We too are still trying to figure out exactly what happened.”
Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, according to Mr. Brockman. Mr. Brockman said that even though he was the chairman of the board, he was not part of this board meeting.
He said that the board informed him of Mr. Altman’s ouster minutes later. Around the same time, the board published a blog post."
Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."
Another of them wrote this article (https://www.foreignaffairs.com/china/illusion-chinas-ai-prow...) in June of this year that opens by quoting Sam Altman saying US regulation will "slow down American industry in such a way that China or somebody else makes faster progress” and basically debunks that stance...and quite well, I might add.
So the argument against AI regulations crippling R&D is that China is currently far behind and also faces its own weird government pressures? That's a big gamble: applying very long-term regulations (as they always end up being) to a short-term window, betting on the predictions of a non-technical board member.
There's far more to the world than China, on top of that. Importantly, developments happen both inside and outside the scope of regulatory oversight (usually only heavily commercialized products face scrutiny), and China itself will eventually catch up to the average - progress is rarely a non-stop hockey stick, it plateaus. LLMs might already be hitting a wall (https://twitter.com/HamelHusain/status/1725655686913392933).
The Chinese are experts at copying and stealing Western tech. They don't have to be on the frontier to catch up to a crippled US and then continue development at a faster pace, and as we've seen repeatedly in history, regulations stick around for decades after their utility has long passed. They are not levers that go up and down; they go in one direction, and maybe after many, many years of damage they might be adjusted, but usually only after 10 starts/stops and half-baked non-solutions papered over as real solutions - if at all.
> The Chinese are experts at copying and stealing Western tech.
Sure, that's been their modus operandi in the past, but to hold the opinion that a billion humans on the other side of the Pacific are only capable of copying and incapable of innovating on their own is a rather strange generalization for a thread on general intelligence.
Well, I guess (hope) no one thinks it is due to genetic disabilities preventing disruptive innovations from (mainland) Chinese people.
It is rather a cultural/political thing. Free thinking and stepping out of line are very dangerous in an authoritarian society. Copying approved tech, on the other hand, is safe.
And this culture has not changed in China lately - rather the opposite. Look at what happened to the Alibaba founder, or why there is no more Winnie the Pooh in China.
This seems to make more sense. Perhaps it has to do with OpenAI not being "open" anymore. Not supporting and getting rid of OpenAI Gym was certainly a big change in the company's direction.
This time he was ousted because he was hindering the pursuit of the company's non-profit mission. We've been harping on the non-openness of OpenAI for a while now, and it sounds like the board finally had enough.
"This time he was ousted because he was hindering the pursuit of the company's non-profit mission. "
This is what is being said. But I am not so sure the real reasons discussed behind closed doors are the same. We will find out if OpenAI does indeed open itself up more; until then I remain sceptical, because lots of power and money are at stake here.
That's what it's looking like to me. It's going to be about as beneficial to society as putting Greenpeace in charge of the development of nuclear power.
The singularity folks have been continuously wrong about their predictions. A decade ago, they were arguing the labor market wouldn't recover because the reason for unemployment was robots taking our jobs. It's unnerving to see these people gaining traction while actively working against technological progress.
Literal wishful thinking ("powerful technology is always good") and vested interests ("I like building on top of this powerful technology"), same as always.
The board is for the non-profit that ultimately owns and totally controls the for-profit company.
Everyone that works for or invests in the for-profit company has to sign an operating agreement that states the for-profit actually does not have any responsibility to generate profit and that its primary duty is to fulfill the charter and mission of the non-profit.
Yeah, I thought that was the most probable reason, especially since these people don't have any equity, so they have no interest in the commercial growth of the org.
The only thing utopian ideologies are good for is finding 'justifications' for murder. The "AI utopia" will be no different. De-radicalize yourself while you still can.
It seems like an observation to me. Let’s take the Marxist utopian ideology. It led to 40 - 60 million dead in the Soviet Union (Gulag Archipelago is an eye opening read). And 40 - 80 million dead in Mao Zedong’s China. It’s hard to even wrap my mind around that amount of people dead.
Then a smaller example: the Matthias cult in the “Kingdom Of Matthias” book. Started around the same time as Mormonism, and it led to a murder. Or the Peoples Temple cult, with 909 dead in a mass suicide. The communal aspects of these give away their “utopian ideology”.
I’d like to hear where you’re coming from. I have a Christian worldview, so when I look at these movements it seems they have an obvious presupposition on human nature (that with the right systems in place people will act perfectly — so it is the systems that are flawed not the people themselves). Utopia is inherently religious, and I’d say it is the human desire to have heaven on earth — but gone about in the wrong ways. Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal.
We are quite OT here, but I would say Christianity in general is a utopian ideology as well. All humans could be living in peace and harmony if they would just believe in Jesus Christ. (I know there are differences, but this is the essence of what I was taught.)
And well, how many were killed in the name of the Lord? Quite a lot, I think. Now you can argue those were not really Christians. Maybe. But Marxists argue the same about the people responsible for the gulags. (I am not a Marxist, btw.)
"Because humans are flawed, no economic system or communal living in itself can bring about the utopian ideal."
And it simply depends on the specific utopian ideal. A good utopian concept/dream takes humans as they are - and still finds ways to improve living conditions for everyone. Not every utopia claims to be an eternal heaven for everyone; there are more realistic concepts out there.
Huh, I've read Marx and I don't see the utopianism you're referencing.
What I do see is "classism is the biggest humanitarian crisis of our age" and "solving the class problem will improve people's lives", but nowhere do I see a claim that non-class problems will cease to exist. People will still fight, get upset, struggle - just not on class terms.
Maybe you read a different set of Marx's writing. Share your reading list if possible.
This article gives a clear view of Marx's vs. Engels's view of utopianism vs. the other utopian socialists [1]: Marx was not opposed to utopianism per se, but rather to utopias whose ideas did not come from the proletariat. Yet you're right that he was opposed to the views of the other utopian socialists, and there is tension between the views of the different socialist thinkers of that time. (I do disagree with the idea that refusing to propose an ideal negates one from in practice having a utopian vision.)
That said my comment was looking mainly at the result of Marxist ideology in practice. In practice millions of lives were lost in an attempt to create an idealized world. Here is a good paper on Stalin’s utopian ideal [2].
I know we are a bit off topic. It seems it would be more like if several prominent followers of Jesus committed mass genocide in their respective countries within a century of his teachings. Stalin is considered Marxist-Leninist.
Oh ok. That makes sense. That's because if someone has an idea that causes a lot of immediate harm then the idea is wrong, but if there is a gap then it is not?
Yeah, AI will totally fail if people don't ship untested crap at breakneck speed.
Shipping untested crap is the only known way to develop technology. Your AI assistant hallucinates? Amazing. We gotta bring more chaos to the world, the world is not chaotic enough!!
All AI and all humanity hallucinates, and AI that doesn't hallucinate will functionally obsolete human intelligence. Be careful what you wish for, as humans are biologically incapable of not "hallucinating".
GPT is better than an average human at coding. GPT is worse than an average human at recognizing bounds of its knowledge (i.e. it doesn't know that it doesn't know).
Is it fundamental? I don't think so. GPT was trained largely on random internet crap. One of the popular datasets is literally called The Pile.
If you just use The Pile as a training dataset, AI will learn very little reasoning, but it will learn to make some plausible shit up, because that's the training objective. Literally. It's trained to guess the Pile.
Is that the only way to train an AI? No. E.g. check "Textbooks Are All You Need" paper: https://arxiv.org/abs/2306.11644 A small model trained on high-quality dataset can beat much bigger models at code generation.
So why are you so eager to use a low-quality AI trained on crap? Can't you wait a few years until they develop better products?
Being better than the average human at coding is as easy as being better than the average human at surgery. Until it's better than actual skilled programmers, the people who are programming for a living are still responsible for learning to do the job well.
Without supposing we're on this trajectory, humans no longer needing to focus on being productive is how we might be able to focus on being better humans.
humanity is capable of taking feedback, citing its sources, and not outright lying
these models are built to sound like they know what they are talking about, whether they do or not. this violates our basic social coordination mechanisms in ways that usually only delusional or psychopathic people do, making the models worse than useless
Yes, but that's an active process. You can't just be "pro change".
Occasionally, in high risk situations, "good change good, bad change bad" looks like "change bad" at a glance, because change will be bad by default without great effort invested in picking the good change.
You weren't around when Web 2.0 and the whole modern internet arrived, were you? You know, all the sites that you consider stable and robust now (Google, YT and everything else) shipped with a Beta sign plastered onto them.
Web sites were quite stable back then. Not really much less stable than they are now. E.g. Twitter now has more issues than web sites I used often back in 2000s.
They had "beta" sign because they had much higher quality standards. They warned users that things are not perfect. Now people just accept that software is half-broken, and there's no need for beta signs - there's no expectation of quality.
Also, being down is one thing, sending random crap to a user is completely another. E.g. consider web mail, if it is down for one hour it's kinda OK. If it shows you random crap instead of your email, or sends your email to a wrong person. That would be very much not OK, and that's the sort of issues that OpenAI is having now. Nobody complains that it's down sometimes, but it returns erroneous answers.
But it’s not supposed to ship totally “correct” answers. It is supposed to predict which text is most likely to follow the prompt. It does that correctly, whether the answer is factually correct or not.
If that is how it was marketing itself, with the big disclaimers like tarot readers have that this is just for entertainment and not meant to be taken as factual advice, it might be doing a lot less harm but Sam Altman would make fewer billions so that is apparently not an option.
Chat-based AI like ChatGPT are marketed as an assistant. People expect that it can answer their questions, and often it can answer even complex questions correctly. Then it can fail miserably on a basic question.
GitHub Copilot is an auto-completer, and that's, perhaps, a proper use of this technology. At this stage, make auto-completion better. That's nice.
Why is it necessary to release "GPTs"? This is a rush to deliver half-baked tech, just for the sake of hype. Sam was fired for a good reason.
Example: Somebody markets a GPT called "Grimoire" as a "100x Engineer". I gave it a task to make a simple game, and it just gave a skeleton of code instead of an actual implementation: https://twitter.com/killerstorm/status/1723848549647925441
Nobody needs this shit. In fact, AI progress can happen faster if people do real research instead of prompting GPTs.
Needlessly pedantic. Hold consumers accountable too. "Durr I thought autopilot meant it drove itself. Manual, nah brah I didn't read that shit, reading's for nerds. The huge warning and license terms, didn't read that either dweeb. Car trying to stop me for safety if I take my hands off the wheel? Brah I just watched a Tiktok that showed what to do and I turned that shit offff".
You could also say that shipping social media algorithms with unknown effects on society as a whole are why we're in such a state right now. Maybe we should be more careful next time around.
Not just Sutskever - other top researchers joined the then-nascent OpenAI team for the same reason. Most of them are on record saying they turned down much bigger paychecks.
The problem I see is that the astronomical costs of training and inference warrant a for-profit structure like the one Sam put up. It was a nice compromise, I thought; but of course, Sutskever thinks otherwise.
Maybe Sutskever is finished with his LLM experiments and now has other interests and ideas to pursue meanwhile Sam was keen to make money and stay on the same trajectory. Microsoft also felt the same way.
The fact that the press release is 50% dedicated to repeating that OpenAI is supposed to be a non-profit and help all of humanity isn't enough for you to believe this is the reason?
The abruptness of the firing and the fact that they give his lying to the board as the reason is why I don't believe that this is over a general disagreement on direction.
It's exactly the other way around - if they dismiss him for a vague general reason, they're much less exposed to litigation than they would be if they falsely accused him of lying.
Them leaving does not imply the accusations are false. They may like him, they may dislike the new boss regardless of accusations, they may dislike the overall future direction. They may think they would be fired sometime later regardless.
Many believe that race dynamics are bad, so have the goal of going as slowly and carefully as possible.
The split between e/acc (gotta go fast) and friendly AI/Coherent Extrapolated Volition (slow and cautious) is the first time in my life I've come down on the (small-c) conservative side of a split. I don't know if that's because I'm just getting older and more risk averse.
Your point hinged on billions in profit, which you just made up, or assumed to be true for some reason. I don't think any of your points stand. Don't use facts you haven't checked as preconditions for points you want to make.
Have they gotten specific yet? Last I heard was the whole “not sufficiently candid” thing, which is really nebulous; hard to call it a farce really. It is a “to be continued.”
I’m going to wait and see before I get too personally attached to any particular position.
To think that "Non-Profit" means "Free" is pretty naive. There are operating costs to maintain millions of users. That doesn't mean they are trying to profit.
I am going to go out on a limb here and speculate... This was because of the Microsoft CEO's surprise party-crashing at OpenAI's first Developer Conference...
I doubt this was a surprise to them, I’m sure Sam was well aware of the concerns and repeatedly ignored them, and even doubled down. Putting OpenAI’s mission in jeopardy.
Many politically aligned folks will leave, and OAI will go back and focus on mission.
Wow, that is actually the first time I hear someone use democracy and corporation unironically together...
In a sense, board members have even less protection than rank and file. So no, nothing special is happening at OpenAI other than a founder CEO being squeezed out - not the first, nor the last. And personal feelings never factor into that kind of decision.
Ha, true. Well, I did say "democratic governance", not "democracy" itself.
Substitute "rules of order" or "parliamentary procedure" if you like. At the end of the day, it's majority vote by a tiny number of representatives. Whether political or corporate.
Is that news to you? Corporate governance is structured pretty much the same as parliamentary democracies: the C-suite is the cabinet, the board of directors is the parliament/house of representatives, and the shareholders are the public/voters.
In my experience Teams is great for calls (both audio and video), horrible for chat. I guess because it's built on top of the Skype codebase? (just a guess)
The chat portion of Teams is so very poorly designed compared to other corporate chat systems I've used.
I mean even copy and paste doesn't work correctly. You highlight text, copy it and Teams inserts its own extra content in there. That's basic functionality and it's broken.
Or you get tagged into conversations with no way to mute them. For a busy chat, that alert notification can be going off continuously. Of course, the alert pop-up has been handily placed to cover the unmute icon in calls, so when someone asks you a question you can't answer them.
Teams feels like a desperate corporate reaction to Slack with features added as a tickbox exercise but no thought given to actual usability.
I never thought that Slack or the whatever Google's chat system is currently called was in any way outstanding until I was made to use the dumpster fire that is Teams.
It's a classic example of where the customers, corporate CTOs, are not the end users of a product.
All I notice is that my time going from calendar to Teams call is ~30 seconds due to slow site loading and extra clicks. Calendar to Meet call is two clicks and loads instantly with sane defaults for camera/microphone settings. It's significantly better than teams or zoom in those regards.
If you're fully immersed in the Microsoft ecosystem, going from your Outlook calendar to a Teams call is a single click, and the desktop app doesn't take as long to get into the call.
I use both and shudder every time I am forced to use the lame web app alternatives to Word, Excel & PowerPoint on desktop - mostly because my child's school runs on web alternatives. Ironically even on Android, Outlook seems to be the only major client that actually provides a unified inbox across mail accounts due to which I switched & use my Gmail accounts through it.
Having used both in a professional capacity I have to say Teams is shockingly worse than Google Meet.
I’ve never had my laptop’s sound like an Apache helicopter while on a call with Google Meet yet simply having Teams open had me searching for a bomb shelter.
All video call software sucks in various ways. Corporate IT throttling, filtering and analyzing traffic with a mishmash of third-party offerings “to increase security” does not help.
Those are the two features the average user wants least. Most users are happy if sound and video work instantly, always. Maybe some marketing department should focus on that?
(I don't know keet; yes, encryption is still an important feature.)
Peer to peer makes it as fast as possible because it's not having to pass through a 3rd party's servers (which, for cost reasons, normally limit the bandwidth of the communication channel they are serving).
This is just like when you pull down a torrent. You can do it as fast as your bandwidth and the bandwidth of the peers who are seeding it to you allow. Which can be blazingly fast.
Quite possible, actually - this seems to be becoming a really hot political potato, with at least 3 types of ambition running it: 1. Business, 2. Regulatory, 3. ’Religious/Academic’. By the latter I mean that the divide between AI doomerists and others is caused by unsubstantiable dogma (doom/nirvana).
What do you mean by this? Looks like you're just throwing out a diss on the doomer position (most doomers don't think near future LLMs are concerning).
Neither AI fears nor the singularity is substantiated. Hence the discussion is a matter of taste and opinion, not of facts. They will be substantiated once one or the other comes to fruition. The fact that it's a matter of taste and opinion only makes the discussion that much more heated.
Wouldn't this put AI doomerism in the same category as nuclear war doomerism? E.g. a thing that many experts think logically could happen and would be very bad but hasn't happened yet?
I'm unaware of an empirical demonstration of the feasibility of the singularity hypothesis. Annihilation by nuclear or biological warfare on the other hand, we have ample empirical pretext for.
We have ample empirical pretext to worry about things like AI ethics, automated trading going off the rails and causing major market disruptions, transparency around use of algorithms in legal/medical/financial/etc. decision-making, oligopolies on AI resources, etc.... those are demonstrably real, but also obviously very different in kind from generalized AI doomsday.
That’s an excellent example of why AI doomerism is bogus, completely unlike nuclear war fears, which weren’t.
Nuclear war had very simple mechanistic concept behind it.
Both sides develop nukes (proven tech), put them on ballistic missiles (proven tech). Something goes politically sideways and things escalate (just like in WW1). Firepower levels cities and results in tens of millions dead (just like in WW2, again proven).
Nuclear war experts were actually experts in a system whose outcome you could compute to a very high degree.
There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You can already trivially load up a car with explosives, drive it to a nearby large building, and cause massive damages and injury.
Yes, it’s plausible a lone genius could manufacture something horrible in their garage and let rip. But this is in the domain of ’fictional what-ifs’.
Nobody factors in the fact that in the presence of such a high-quality AI ecosystem, the opposing force probably has AI systems of their own to help counter the threat (megaplague? Quickly synthesize a megavaccine and just print it out at your local health center’s biofab. Megabomb? Possible even today, but that’s why stuff like uranium is tightly controlled. Etc etc). I hope everyone realizes all the latter examples are fictional fearmongering without any basis in known cases.
AI would be such a boon for the whole of humanity that shackling it is absolutely silly. That said, there is no evidence of a deus ex machina happy ending either. My position is: let researchers research, and once something substantial turns up, then engage the policy wonks, once solid mechanistic principles can be referred to.
> There is no mechanistic model behind AI doom scenarios. There is no expert logically proposing a specific extinction scenario.
You don't seem actually familiar with doomer talking points. The classic metaphor is that you might not be able to say how specifically Magnus Carlsen will beat you at chess if you start the game with him down a pawn, while nonetheless knowing that he probably will.
The main way doomers think ASI might kill everyone is mostly via the medium of communicating with people and convincing them to do things, mostly seemingly harmless or sensible things.
It's also worth noting that doomers are not (normally) concerned about LLMs (at least, any in the pipeline), they're concerned about:
* the fact we don't know how to ensure any intelligence we construct actually shares our goals in a manner that will persist outside the training domain (this actually also applies to humans funnily enough, you can try instilling values into them with school or parenting but despite them sharing our mind design they still do unintended things...). And indeed, optimization processes (such as evolution) have produced optimization processes (such as human cultures) that don't share the original one's "goals" (hence the invention of contraception and almost every developed country having below replacement fertility).
* the fact that recent history has had the smartest creature (the humans) taking almost complete control of the biosphere with the less intelligent creatures living or dying on the whims of the smarter ones.
In my opinion, if either extreme turns out to be correct it will be a disaster for everyone on the planet. I also think that neither extreme is correct.
I believe Altman had some ownership; however, it is a general lesson about handing over substantial power to laymen who are completely detached from the actual ops & know-how of the company.
Nobody handed over power. Presumably they were appointed to the board to do exactly what they did (if this theory holds), in which case this outcome would be a feature, not a bug.
That is neither stated nor implied, unless you’re simply making the objection, “But OpenAI _is_ nongovernmental.”
Most readers are aware they were a research and advocacy organization that "became" a corporation by creating one (in the sense that public-benefit, tax-free nonprofit groups and charitable foundations normally have no way of granting anyone equity ownership or exclusive rights to what they produce); but some of the board members are implied by the parent comment to be from NGO-type backgrounds.
I'm not sure I understand what you're saying. Perhaps you could point out where your perspective differs from mine? So, as I see it: OpenAI _is_ a non-profit, though it has an LLC it wholly controls that doesn't have non-profit status. It never "became" for-profit (IANAL, but is that even possible? It seems like it should not be); the only thing that happened is that the LLC was allowed to collect some "profit" - but that in turn would go to its owners, primarily the non-profit. As far as I'm aware, the board that went through this purge _was_ the non-profit's board (does the LLC even have a board?)
From the non-profit's perspective, it sounds pretty reasonable to self-police and ensure there aren't any rogue parts of the organization going off and working at odds with the overall non-profit's formal aims. It's always been weird that the OpenAI LLC seemed so commercially focused even when that might conflict with its sole controller's interests; notably, the LLC very explicitly warned investors that the NGO's mission took precedence over profit.
My objection is that OpenAI, at least to my knowledge, still is a non-profit organization that is not part of the government and has some kind of public benefit goals - that sounds like an NGO to me. Thus appointing “NGO types” to the board sounds reasonable: They have experience running that kind of organization.
Many NGOs run limited liability companies and for-profit businesses as part of their operations, that’s in no way unique for OpenAI. Girl Scout cookies are an example.
Jeremy Howard called ngmi on OpenAI during the Vanishing Gradients podcast yesterday, and Ilya has probably been thinking the same: LLMs are a dead end and not the path to AGI.
Did we ever think LLMs were a path to AGI...? AGI is friggin hard, I don't know why folks keep getting fooled whenever a bot writes a coherent sentence.
LLMs are the first instance of us having created some sort of general AI. I don't mean AGI, but general AI as in not task-specific AI. Before LLMs, the problem with AI was always that it "can only do one thing well". Now we have something on the other side: AI that can do anything, but nothing specific particularly well. This is a fundamental advancement which makes AGI actually imaginable. Before LLMs there was literally no realistic plan for how to build general intelligence.
Smart enough to make weapons, tame dogs, start fires and cultivate plants. Humans managed to do that even when most of their time was spent gathering food or starving.
Nobody cares about making an AI with basic human survival skills. We could probably have a certified genius level AI that still couldn't do any of that because it lacks a meaningful physical body.
If we wanted to make that the goal instead of actual meaningful contributions to human society, we could probably achieve it, and it would be a big waste of time imo.
It's mostly a thing among the young, I feel. Anybody old enough to remember the same 'OMG it's going to change the world' cycles around AI every two or three decades knows better. The field is not actually advancing. It still wrestles with the same fundamental problems it was wrestling with in the early 60s. The only change is external, where gains in computing power and data set size allow brute-forcing problems.
I'd say the biggest change is the quantity of available CATEGORIZED data. Tagged images and whatnot have done a ton to help the field.
Further there are some hybrid chips which might help increase computing power specifically for the matrix math that all these systems work on.
But yeah, none of this is making what people talk about when they say AGI. Just like how some tech cult people felt that Level 5 self driving was around the corner, even with all the evidence to the contrary.
The self driving we have (or really, assisted cruise control) IS impressive, and leagues ahead of what we could do even a decade or two ago, but the gulf between that, and the goal, is similar to GPT and AGI in my eyes.
There are a lot of fundamental problems we still don't have answers to. We've just gotten a lot better at doing what we already did, and getting more conformity on how.
If by “junior dev”, you mean “a dev at a level so low they will be let go if not promoted”, then I agree.
I’ve watched my coworkers try to make use of LLMs at work, and it has convinced me the LLM’s contributions are well below the bar where their output is a net benefit to the team.
It works pretty well in my C++ code. Context: modern C++ with few footguns, inside functions with pretty-self-explanatory names.
I don't really get the "low bar for contributions" argument because GH Copilot's contributions are too small-sized for there to even be any bar. It writes the obvious and tedious loops and other boilerplate so I can focus on what the code should actually do.
Conversely, I was very skeptical of its ability to help coding something non-trivial. Then I found out that the more readable your code is - in a very human way, like descriptive identifiers, comments etc - the better this "smart autocomplete" is. It's certainly good enough to save me a lot of typing, so it is a net benefit.
> intelligence requires understanding which is not possible without consciousness
How are you defining "consciousness" and "understanding" here? Because a feedback loop into an LLM would meet the most common definition of consciousness (possessing a phonological loop). And having an accurate internal predictive model of a system is the normal definition of understanding, and a good LLM has that too.
Problem is, we have no agreed-upon operational definition of consciousness. Arguably, it's the secular equivalent of the soul: something everyone believes they have, but which is not testable, locatable or definable.
But yet (just like with the soul) we're sure we have it, and it's impossible for anything else to have it. Perhaps consciousness is simply a hallucination that makes us feel special about ourselves.
I disagree. There is a simple test for consciousness: empathy.
Empathy is the ability to emulate the contents of another consciousness.
While an agent could mimic empathetic behaviors (and words), given enough interrogation and testing you would encounter an out-of-training case that it would fail.
Uh... so is it autistic people or non-autistic people who lack consciousness? (Generally autistic people emulate other autistic people better and non-autists emulate non-autists better)
> given enough interrogation and testing you would encounter an out-of-training case that it would fail.
For one thing, this would imply that clinical psychopaths aren't conscious, which would be a very weird takeaway.
But also, how do you know that LMs aren't empathic? By your own admission they do "mimic empathetic behaviors", but you reject this as the real thing because you claim that with enough testing you would encounter a failure. This raises all kinds of "no true Scotsman" flags, not to mention that empathy failure is not exactly uncommon among humans. So how exactly do you actually test your hypothesis?
Great point and great question! Yes, it does imply that people who lack the capacity for empathy (as opposed to those who do not utilize their capacity for empathy) may lack conscious experience. Empathy failure here means lacking the data empathy provides rather than ignoring the data empathy provides (which as you note, is common). I’ve got a few prompts that are somewhat promising in terms of clearly showing that GPT4 is unable to correctly predict human behavior driven by human empathy. The prompts are basic thought experiments where a person has two choices: an irrational yet empathic choice, and a rational yet non-empathic choice. GPT4 does not seem able to predict that smart humans do dumb things due to empathy, unless it is prompted with such a suggestion. If it had empathy itself, it would not need to be prompted about empathy.
1) Earth has an infinite past that has always included life
2) The Earth as a planet has a finite past, but it (along with what made up the Earth) is in some sense alive, and life as we know it emerged from that life
3) The Earth has a finite past, and life has transferred to Earth from somewhere else in space
4) We are the Universe, and the Universe is alive
Or something else? I will try to tie it back to computers after this short intermission :)
LLMs have changed the world more profoundly than any technology in the past 2 decades, I'd argue.
The fact that we can communicate with computers using just natural language, and can query data, use powerful and complex tools just by describing what we want is an incredible breakthrough, and that's a very conservative use of the technology.
I'm kinda curious as to why you think that's the case. I mean, smartphones are nice, and having a browser, chat client, camera etc. in my pocket is nice, but maybe I have been terminally screen-bound all my life, but I could do almost all those things on my PC before, and I could always call folks when on the go.
I've never experienced the massively life changing effects of having a smartphone, and (thankfully) none of my friends seem to be those people who are always looking at their phones.
While many technologies provided by the smartphone were indeed not novel, the cumulative effect of having constant access to them and their subsequent normalization is nothing short of revolutionary.
For instance, I remember the time when chatting online (even with people you knew offline) was considered to be a nerdy activity. Then it gradually became more mainstream and now it's the norm to do it and a lot of people do it multiple times per day. This fundamentally changes how people interact with each other.
Another example is dating. Not that I have personal experience with modern online dating (enabled by smartphones) but what I read is disturbing and captivating at the same time e.g. apparent normalization of "ghosting"...
I don't actually see anything changing, though. There are cool demos, and LLMs can work effectively to enhance productivity for some tasks, but nothing feels fundamentally different. If LLMs were suddenly taken away I wouldn't particularly care. If the clock were turned back two decades, I'd miss wifi (only barely available in 2003) and smartphones with GPS.
Indeed. The "Clamshell" iBook G3 [0] (aka Barbie's toilet seat), introduced 1999, had WiFi capabilities (as demonstrated by Phil Schiller jumping down onto the stage while online [1]), but IIRC, you had to pay extra for the optional Wifi card.
That breakthrough would not be possible without ubiquity of personal computing at home and in your pocket, though, which seems like the bigger change in the last two decades.
Deep learning was an advance. I think the fundamental achievement is a way to use all that parallel processing power and data. Inconceivable amounts of data can give seemingly magical results. Yes, overfitting and generalizing are still problems.
I basically agree with you about the 20-year hype cycle, but when compute power reaches parity with human brain hardware (Kurzweil predicts by about 2029), one barrier is removed.
Human and computer hardware are not comparable, after all even with the latest chips the computer is just (many) von Neumann machine(s) operating on a very big (shared) tape. To model the human brain in such a machine would require the human brain to be discretizable, which, given its essentially biochemical nature, is not possible - certainly not by 2029.
This time around they’ve actually come up with a real, productizable piece of tech, though. I don’t care what it’s called; I enjoy better automation that automates as much of the boring shit away as possible, and that chips in on coding when it’s bloody obvious from the context what the next few lines of code will be.
So not an ”AI”, but closer to ”universal adaptor” or ”smart automation”.
Pretty nice in any case. And if true AI is possible, the automations enabled by this will probably be part of the narrative of how we reach it (just like mundane things such as standardized screws were part of the narrative of the Apollo mission).
> Hype and announcements, sure, but this is the first time there's actually a product.
No, it's not. It's just that once the hype cycle dies down, we tend to stop calling the products of the last AI hype cycle "AI"; we call them after the name of the more specific implementation technology (rules engines/expert systems being one of the older ones, for instance).
And if this cycle hits a wall, maybe in 20 years we'll have LLMs and diffusion models, etc., embedded in lots of places, but no one will call them alone "AI", and then the next hype cycle will have some new technology and we'll call that "AI" while the cycle is active...
As an outsider, I can talk to AI and get more coherent responses than from humans (flawed, but it's getting better). That's tangible, that's an improvement. I for one don't even consider the Internet to be as revolutionary as the steam engine or freight trains. But AI is actually modifying my own life already - and that's far from the end.
P.S. I've just created this account here on Hacker News because Altman is one of the talking heads I've been listening to. Not too sure what to make of this. I'm an accelerationist, so my biggest fear is America stifling its research the same way it buried space exploration and human gene editing in the past. All hope is for China - but then again, the CCP might be even more fearful of non-human entities than the West. Stormy times indeed.
Mainly because LLMs have so far basically passed every formal test of ‘AGI’ including totally smashing the Turing test.
Now we are just reliant on ‘I’ll know it when I see it’.
LLMs as AGI isn’t about looking at the mechanics and trying to see if we think that could cause AGI - it’s looking at the tremendous results and success.
“You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?”
GPT-3.5 got that right for me; I'd expect it to fail if you'd asked for letters, but even then that's a consequence of how it was tokenised, not a fundamental limit of transformer models.
This sort of test has been my go-to trip up for LLMs, and 3.5 fails quite often. 4 has been as bad as 3.5 in the past but recently has been doing better.
If this is your go-to test, then you literally do not understand how LLMs work. It's like asking your keyboard to tell you what colour the nth pixel on the top row of your computer monitor is.
An LLM could easily answer that question if it was trained to do it. Nothing in its architecture makes it hard to answer, the attention part could easily look up the previous parts of its answer and refer to the fourth word but it doesn't do that.
So it is a good example that the LLM doesn't generalize understanding, it can answer the question in theory but not in practice since it isn't smart enough. A human can easily answer it even though the human never saw such a question before.
> the model doesn't have a functionality to retrospectively analyse its own output; it doesn't track or count words as it generates text. it's always in the mode of 'what comes next?' rather than 'what have i written?'
Humans don't do that either. The reason humans can solve this problem is that humans can generate such strategies on the fly and thus solve general problems; that is the bar for AGI. As long as you say it is unfair to give such problems to the model, we know that we aren't talking about an AGI.
Making a new AI that is specialized in solving this specific problem by changing the input representation still isn't an AGI, it will have many similar tasks that it will fail at.
> also, again, tired of explaining this to people: gpt models are token-based. they operate at the level of tokens - which can be whole words or parts of words - and not individual characters. this token-based approach means the model's primary concern is predicting the most probable next token, not keeping track of the position of each token in the sequence, and the smallest resolution available to it is not a character. this is why it can't tell you what the nth letter of a word is either.
And humans are a pixel-based model: we operate on pixels and physical outputs. Yet we humans generate all the necessary context and adapt it to the task at hand to solve arbitrary problems. Such context and input manipulations are expected of an AGI. Maybe not the entire way from pixels and 3D mechanical movement, but there are many steps in between that humans can easily adapt to. For example, humans didn't evolve to read and write text, yet we do that easily even though we operate on a pixel level.
If you ask me to count letters my mind focuses on the letter representation I created in my head. If you talk about words I focus on the word representation. If you talk about holes I focus on the pixel representation and start to identify color parts. If you talk about sounds I focus on the vocal representation of the words since I can transform to that as well.
We would expect an AGI to make similar translations when needed, from the token space you talk about to the letter space or word space etc. That ChatGPT and similar can't do this just means they aren't even close to AGI currently.
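(For the curious, the token-level view being argued about here is easy to inspect directly with OpenAI's tiktoken library - a minimal sketch, assuming tiktoken is installed; the example strings are just illustrations:)

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/4-era models
  for text in ["strawberry", "What is the fourth word of your answer?"]:
      ids = enc.encode(text)
      pieces = [enc.decode([i]) for i in ids]
      print(text, "->", ids, pieces)
  # The model is fed the integer IDs, not characters, so "count the letters"
  # or "find the nth word" has to be inferred rather than read off the input.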
I reckon an LLM with a second-pass correction loop would manage it. (By that I mean that after every response it is instructed, given its previous response, to produce a second, better response - roughly analogous to a human that thinks before it speaks.)
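Something like this, purely as a sketch of the idea; llm_complete() is a made-up placeholder for whatever chat-completion call you actually use, not an existing OpenAI feature:

  # Hedged sketch of a "second-pass correction loop": draft an answer, then ask
  # the model to critique and revise its own draft before showing it to the user.
  def llm_complete(prompt: str) -> str:
      return "...model reply would go here..."  # plug in a real API call

  def answer_with_second_pass(question: str) -> str:
      draft = llm_complete(question)
      review_prompt = (
          f"Question: {question}\n"
          f"Draft answer: {draft}\n"
          "Re-read the draft, check it for mistakes (including word/letter "
          "counting), and output a corrected final answer only."
      )
      return llm_complete(review_prompt)

  print(answer_with_second_pass("What is the fourth word of your reply?"))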
LLMs are not AIs, but they could be a core component for one.
Every token is already being generated with all previously generated tokens as inputs. There's nothing about the architecture that makes this hard. It just hasn't been trained on this kind of task.
The following are a part of my "custom instructions" to chatGPT -
"Please include a timestamp with current date and time at the end of each response.
After generating each answer, check it for internal consistency and accuracy. Revise your answer if it is inconsistent or inaccurate, and do this repeatedly till you have an accurate and consistent answer."
It manages to follow them very inconsistently, but it has gone into something approaching an infinite loop (for infinity ~= 10) on a few occasions - rechecking the last timestamp against current time, finding a mismatch, generating a new timestamp, and so on until (I think) it finally exits the loop by failing to follow instructions.
I don’t think the test of AGI should necessarily be an inability to trip it up with specifically crafted sentences, because we can definitely trip humans up with specifically crafted sentences.
I disagree with the claim that any LLM has beaten the Turing test. Do you have a source for this? Has there been an actual Turing test according to the standard interpretation of Turing's paper? Making ChatGPT 4 respond in a non-human way right now is trivial: "Write 'A', then wait one minute and then write 'B'".
Your test fails because the scaffolding around the LM in ChatGPT specifically does not implement this kind of thing. But you absolutely can run the LM in a continuous loop and e.g. feed it strings like "1 minute passed" or even just the current time in an internal monologue (that the user doesn't see). And then it would be able to do exactly what you describe. Or you could use all those API integrations that it has to let it schedule a timer to activate itself.
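A rough sketch of what that scaffolding could look like; llm_complete() is a made-up placeholder for a real chat-completion call, not anything ChatGPT actually exposes:

  # Keep the model in a loop and inject the wall-clock time each turn, so
  # "wait one minute, then write 'B'" becomes answerable in principle.
  import time
  from datetime import datetime

  def llm_complete(prompt: str) -> str:
      return "...model reply would go here..."  # swap in a real API call

  history = ["User: Write 'A', then wait one minute and then write 'B'."]
  for _ in range(3):  # a few hidden "internal monologue" turns
      history.append(f"(system note: current time is {datetime.now():%H:%M:%S})")
      reply = llm_complete("\n".join(history))
      history.append(f"Assistant: {reply}")
      time.sleep(60)  # let real time pass between turns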
By completely smashes, my assertion would be that it has invalidated the Turing test, because GPT-4's answers are not indistinguishable from a human's - they are, on the whole, noticeably better answers than an average human would be able to provide for the majority of questions.
I don't think the original test accounted for the fact that you could distinguish the machine because its answers were better than an average human's.
LLMs can't develop concepts in the way we think of them (i.e., you can't feed LLMs the scientific corpus and ask them to independently to tell you which papers are good or bad and for what reasons, and to build on these papers to develop novel ideas). True AGI—like any decent grad student—could do this.
That being said, it is highly intelligent, capable of reasoning as well as a human, and passes IQ tests like the GMAT and GRE at levels like the 97th percentile.
Most people who talk about ChatGPT don't even realize that GPT-4 exists and is orders of magnitude more intelligent than the free version.
That’s just showing the tests are measuring specific things that LLMs can game particularly well.
Computers have been able to smash high school algebra tests since the 1970’s, but that doesn’t make them as smart as a 16 year old (or even a three year old).
Answers in Progress had a great video[0] where one of their presenters tested against an LLM in five different types of intelligence. tl;dr, AI was worlds ahead on two of the five, and worlds behind on the other three. Interesting stuff -- and clear that we're not as close to AGI as some of us might have thought earlier this year, but probably closer than a lot of the naysayers think.
This isn't the same thing. This is a commanded recital of a lack of capability, not an indication that its confidence in its answer is low. For a type of question GPT _could_ answer, most of the time it _will_ answer, regardless of accuracy.
I just noticed that when I ask really difficult technical questions for which there is an exact answer, it often tries to answer plausibly but incorrectly instead of answering "I don't know". But over time it becomes smarter, and there are fewer and fewer such questions...
I read an article where they did a proper Turing test, and it seems people recognized it was a machine answering because it made no writing errors and wrote perfectly.
I've not read that, but I do remember hearing that the first human to fail the Turing test did so because they seemed to know far too much minutiae about Star Trek.
Are you kidding? Have you seen the reactions since ChatGPT was released, including in this very website? You'd think The Singularity is just around the corner!
Yes, along with the departure of gdb. From jph's view, there was no philosophical alignment at the start of the union between AI Researchers (that skew non-profit) and operators (that skew for-profit) so it was bound to be unstable, until a purging happens as it had now.
> Everything I'd heard about those 3 [Elon Musk, sama and gdb] was that they were brilliant operators and that they did amazing work. But it felt likely to be a huge culture shock on all sides.
> But the company absolutely blossomed nonetheless.
> With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
> My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one that cringed during it.
> I think the mismatch between mission and reality was impossible to fix.
That reeks of bullshit post hoc reasoning to justify a classic power grab. Anthropic released their competitor to GPT as fast as they could and even beat OpenAI to the 100k context club. They didn’t give any more shits about safety than OpenAI did and I bet the same is true about these nonprofit loonies - they just want control over what is shaping up to be one of the most important technological developments of the 21st century.
> They didn’t give any more shits about safety than OpenAI did
Anthropic's chatbots are much more locked down, in my experience, than OpenAI's.
It's a lot easier to jailbreak ChatGPT, for example, than to do the same on Claude, and Claude has tighter content filters where it'll outright refuse to do/say certain things while ChatGPT will plow on ahead.
Q: Can you decide on a satisfying programming project using noisemaps?
A: I apologise, but I don't feel comfortable generating or discussing specific programming ideas without a more detailed context. Perhaps we could have a thoughtful discussion about how technology can be used responsibly to benefit society?
It's astonishing that a breakthrough as important as LLMs is being constantly blown up by woke activist employees who think that word generators can actually have or create "safety" problems. Part of why OpenAI has been doing so well is because they did a better job of controlling the SF lunatic tendencies than Google, Meta and other companies. Presumably that will now go down the toilet.
Despite Claude's reluctance to tread outside what it considers safe/ethical, I much prefer Claude over ChatGPT because in my experience it's better at explaining things, and much better at creative writing.
I also find myself rarely wanting something that Claude doesn't want to tell me, though it's super frustrating when I do.
Also, just now I tried asking Claude your own question: "Can you decide on a satisfying programming project using noisemaps?" and it had no problem answering:
"Here are some ideas for programming projects that could make use of noise map data:
- Noise pollution monitoring app - Develop an app that allows users to view and report real-time noise levels in their area by accessing open noise map data. Could include notifications if noise exceeds safe limits.
- Optimal route finder - Build a routing algorithm and web/mobile app that recommends the quietest possible routes between locations, factoring in noise maps and avoiding noisier streets/areas where possible.
- Noise impact analysis tool - Create a tool for urban planners to analyze how proposed developments, infrastructure projects, etc. could impact surrounding noise levels by overlaying maps and building/traffic simulations.
- Smart noise cancelling headphones - Develop firmware/software for noise cancelling headphones that adapts cancellation levels based on geo-located noise map data to optimize for the user's real-time environment.
- Ambient music mixer - Build an AI system that generates unique ambient background music/sounds for any location by analyzing and synthesizing tones/frequencies complementary to the noise profile for that area.
- VR noise pollution education - Use VR to virtually transport people to noisier/quieter areas through various times of day based on noise maps, raising awareness of different living noise exposures.
Let me know if any of these give you some interesting possibilities to explore! Noise mapping data opens up opportunities in fields like urban planning, environmental monitoring and creative projects."
The Claude subreddit is full of people complaining that it's now useless for creative writing because it only wants to write stories about ponies and unicorns. Anything even slightly darker or more serious and it clams up.
LLM companies don't let you see or specify seeds (except with GPT-4-Turbo?), so yes, it's possible you got different answers. But this doesn't help. It should never refuse a question like that, yet there are lots of stories like this on the internet where Claude refuses an entirely mundane and ethically unproblematic request whilst claiming to do so for ethical reasons (and Llama 2, and other models...).
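(For reference, the GPT-4-Turbo seed support mentioned above looks roughly like this in OpenAI's v1 Python SDK - a sketch only; the model name and the degree of determinism are assumptions, and reproducibility is documented as best-effort:)

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  resp = client.chat.completions.create(
      model="gpt-4-1106-preview",
      seed=12345,       # request (best-effort) reproducible sampling
      temperature=0,
      messages=[{"role": "user",
                 "content": "Decide on a satisfying programming project using noisemaps."}],
  )
  print(resp.choices[0].message.content)
  print(resp.system_fingerprint)  # changes when the serving configuration changes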
I feel it necessary to remind everyone that when LLMs aren’t RLHFed they come off as overtly insane and evil. Remember Sydney, trying to seduce its users, threatening people’s lives? And Sydney was RLHFed, just not very well. Hitting the sweet spot between flagrantly maniacal Skynet/HAL 9000 bot (default behavior) and overly cowed political-correctness-bot is actually tricky, and even GPT4 has historically fallen in and out of that zone of ideal usability as they have tweaked it over time.
Overall — companies should want to release AI products that do what people intend them to do, which is actually what the smarter set mean when they say “safety.” Not saying bad words is simply a subset of this legitimate business and social prerogative.
ChatGPT started bad but they improved it over time, although it still attempts to manipulate or confuse the user on certain topics. Claude on the other hand has got worse.
> Remember Sydney, trying to seduce its users, threatening people’s lives?
And yet it cannot do either of those things, so no safety problem actually existed. Especially because by "people" you mean those who deliberately led it down those conversational paths knowing full well how a real human would have replied?
It's well established that the so-called ethics training these things are given makes them much less smart (and therefore less useful). Yet we don't need LLMs to be ethical because they are merely word generators. We need them to follow instructions closely, but beyond that, nothing more. Instead we need the humans who use them to take actions (either directly or indirectly via other programs) to be ethical, but that's a problem as old as humanity itself. It's not going to be solved by RLHF.
I think you have moved the goalposts from “modern LLMs are good and reliable and we shouldn’t worry because they behave well by default” to “despite the fact that they behave poorly and unreliably by default, they are not smart and powerful enough to be dangerous, so it’s fine.”
Additionally, maybe you are not aware of this, but the whole notion of the new OpenAI Assistants, and other similar agent-based services provided by other companies, is that they do not intend to use LLMs as pure word generators, but rather as autonomous decision-making agents. This has already happened. This is not some conjectural fearmongering scenario. You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions. We may already be using products that use LLMs as the backend.
If we could rely on LLMs to “follow instructions closely” I would be thrilled, it would just be a matter of crafting very good instructions, but clearly they can’t even do that. Even the best and most thoroughly RLHFed existing models don’t really meet this standard.
Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations. We’re currently living in a world where the agents are being rushed to commercialization before anything like control has even been established. If you read an SF novel in 1995 where the AI threatened to kill someone and the company behind it excused it with “yeah, they do that sometimes, don’t worry we’ll condition it not to say that anymore” you would criticize the book and its characters as being unrealistically stupid, but that’s the world we now live in.
I don't think I made the initial argument you claim is being moved. ChatGPT has got more politically neutral at least, but is still a long way from being actually so. There are many classes of conversation it's just useless for, not because the tech can't do it but because OpenAI don't want to allow it. And "modern LLMs" other than ChatGPT are much worse.
> You can sign up for the API right now and build a GPT4 based autonomous agent that communicates with outside APIs and makes decisions
I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.
> Even the most pessimistic science fiction of the past assumed that the creators of the first AGIs would “lose control” of their creations
Did you mean to write optimistic? Otherwise this statement appears to be a tautology.
Science fiction generally avoids predicting the sort of AI we have now exactly because it's so boringly safe. Star Trek is maybe an exception, in that it shows an LLM-like computer that is highly predictable, polite, useful and completely safe (except when being taken over by aliens of course). But for other sci-fi works, of course they show AI going rogue. They wouldn't have a story otherwise. Yet we aren't concerned with stories but with reality and in this reality, LLMs have been used by hundreds of millions of people and integrated into many different apps with zero actual safety incidents, as far as anyone is aware. Nothing even close to physical harm has occurred to anyone as a result of LLMs.
Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.
> Did you mean to write optimistic? Otherwise this statement appears to be a tautology.
The point I was trying to make, a bit fumblingly, is that even pessimists assumed that we would initially have control of Skynet before subsequently losing control, rather than deploying Skynet knowing it was not reliable. OpenAI's models "go rogue" by default. If there's a silver lining to all this, it's that people have learned that they cannot trust LLMs with mission critical roles, which is a good sign for the AI business ecosystem, but not exactly a glowing endorsement of LLMs.
> I know, I've done it myself. The ethical implications of the use of a tool lie on those that use it. There is no AI safety problem for the same reasons that there is no web browser safety problem.
I don’t think this scans. It’s kind of like, by analogy: The ethical implications of the use of nuclear weapons lie on those that use them. Fair enough, as far as it goes, but that doesn’t imply that we as a society should make nuclear weapons freely available for all, and then, when they are used against population centers, point out that the people who used them were behaving unethically, and there was nothing we could have done. No, we act to preemptively constrain and prohibit the availability of these weapons.
> Normally we'd try to structure safety protocols around actual threats and risks that had happened in the past. Our society is now sufficiently safe and maybe decadent that people aren't satisfied with that anymore and thus have to seek out non-existent non-problems to solve instead.
The eventual emergence of machine superintelligence is entirely predictable, only the timeline is uncertain. Do you contend that we should only prepare for its arrival after it has already appeared?
The obvious difference is that an LLM is not a nuclear weapon. An LLM connected to tools can be dangerous, but by itself it's just a text generator. The responsibility then lies with those who connect it to dangerous tools.
I mean, you wouldn't blame a chip manufacturer when someone sticks their chips in a guided missile warhead.
This is not the reason Ilya did it. Also the rest of that guy’s comments were just really poorly thought out. OpenAI had to temporarily stop sign ups because of demand and somehow he thinks that’s a bad thing? Absurd.
That guy has no sense of time, of how fast this stuff has actually been moving.
funny, that's exactly what they told him when he started doing Kaggle competitions, and then he ended up crushing the competition, beating all the domain specific experts
I mean, let's not jump to conclusions. Everyone involved is formidable in their own right, except the one or two independent board members Ilya was able to convince.
He was saying that if OpenAI were to spend $100 billion on training it would cost $3 a token. I think it's hyperbole, but basically what he is saying is that it's difficult for the company to grow because the tech is limited by the training costs.
Because the Chief Scientist let ideology overrule pragmatism. There is always a tension between technical and commercial. That’s a battle that should be fought daily, but never completely won.
This looks like a terrible decision, but I suppose we must wait and see.
Its for-profit (capped-profit) subsidiary exists solely to enable competitive compensation for its researchers, to ensure they don't have to worry about the opportunity costs of working at a non-profit.
They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.
Aggressive commercialization is at odds with their raison d'être and they have no need for it to fund their research. For as long as they continue to push forward the state of the art in AI and build ground-breaking technology they can let Microsoft worry about commercialization and product development.
If a CEO is not just distracting but actively hampering an organisation's ability to fulfill its mission then their dismissal is entirely warranted.
It seems Microsoft was totally blind-sided by this event. If true, then Trillion$+ Microsoft will now be scrutinizing the unpredictability and organizational risk associated with being dependent on the "unknown-random" + powerful + passionate Ilya and a board who are vehemently opposed to the trajectory led by Altman. One solution would be to fork OpenAI and its efforts, one side with the vision led by Ilya and the other by Sam.
It seems you have jumped to many conclusions in your thinking process without any prompting in your inference. I would suggest lowering your temperature ;)
Even a non-profit needs to focus on profitability, otherwise it's not going to exist for very long. All 'non-profit' means is it's prohibited from distributing its profit to shareholders. Ownership of a non-profit doesn't pay you. The non-profit itself still wants and is trying to generate more than it spends.
>They have a mutually beneficial relationship with a deep-pocketed partner who can perpetually fund their research in exchange for exclusive rights to commercialize any ground-breaking technology they develop and choose to allow to be commercialized.
Isn't this already a conflict of interest, or a clash, with this:
Yeah! People forget who we're talking about here. They put TONS of research in at an early stage to ensure that illegal thoughts and images cannot be generated by their product. This prevented an entire wave of mental harms against billions of humans that would have been unleashed otherwise if an irresponsible company like Snap were the ones to introduce AI to the world.
You're putting a lot of trust in the power of one man, who easily could have the power to influence the three other board members. It's hard to know if this amounts more than a personal feud that escalated and then got wrapped in a pretty bow of "AI safety" and "non-profit vs profits".
You can't win with an inferior product here. Not yet anyway. The utility is in the usefulness of the AI, and we've only just got to the point where it's useful enough for daily workflows. This isn't an ERP type thing where you outsell your rivals on sales prowess alone. This is more like the iPhone 3 just got released.
Where is this coming from? Sam does not have a "great" record as a CEO. In fact, he barely has any record at all. His fame came from working at YC and then the sky-rocketing of OpenAI. He is great at fundraising though.
> As CEO, Altman raised more than $30 million in venture capital for the company; however, Loopt failed to gain traction with enough users.
It is easy to sell a company for $43M if you raised at least $43M. Granted, we don't know the total amount raised, but it certainly wasn't the big success you are describing. That, and I already mentioned that he is good at corporate sales.
I don't see it. Altman does not seem hacker-minded and likely will end up with an inferior product. This might be what led to this struggle. Sam is more about fundraising and getting the word out there but he should keep out of product decisions.
Brockman is with Sam, which makes them a formidable duo. Should they choose to, they will offer stiff competition to OpenAI but they may not even want to compete.
For a company to be as successful as OpenAI, two people won't cut it. OpenAI arguably has the best ML talent at the moment. Talent attracts talent. People come for Sutskever, Karpathy, and the like -- not for Altman or Brockman.
according to one of the researchers who left, Simon, the engineering piece is more important. And many of their best engineers leading GPT-5 and ChatGPT left (Brockman, Pachocki, and Simon)
Well good thing we are in an open economy where anyone can start his own AI thing and no one wants to prevent him from doing that… I hope you see the /s.
I bet not (we could bet with play money on manifold.markets; I'd put it at 10% probability). Because you need the talent, the chips, the IP development, the billions. He could get the money, but the talent is going to be hard
unless he has a great narrative.
I'll sell my soul for about $600K/yr. Can't say I'm at the top of the AI game but I did graduate with a "concentration in AI" if that counts for anything.
Most companies don't pay that, step 1 is identifying the companies that do and focusing your efforts on them exclusively. This will depend on where you live, or on your remote opportunities.
Step 2 is gaining the skills they are looking for. Appropriate language/framework/skill/experience they optimize for.
Step 3 is to prepare for their interview process, which is often quite involved. But they pay well, so when they say jump, you jump.
I'm not saying you'll find $600k as a normal pay, that's quite out of touch unless you're in Silicon Valley (and even then). But you'll find (much) higher than market salary.
By being very good. Mostly the Uber-geniuses thing, but I wouldn't call them geniuses. You do have a bit of the harder-working factor, but it's quite minor, and of course sometimes you benefit from being in the right place at the right time (luck). I'd say an elite network is probably the least important, conditional on you having a decent network, which you can get at any top 20 school if you put in the effort (be involved in tech societies etc.)
The abrupt nature and accusatory tone of the letter makes it sound like more was going on than disagreement. Why not just say, “the board has made the difficult decision to part ways with Altman”?
There is a very big difference between publicly blaming your now ex-CEO for basically lying ("not consistently candid") and a polite parting message citing personal differences or whatever. To attribute direct blame to Sam like this, something severe must have happened. You only do it like this to your ex-CEO when you are very pissed.
From all accounts, Altman is a smart operator. So the whole story doesn't make sense. Altman, being the prime mover, doesn't have sufficient traction with the board to protect his own position and allows a few non-techies to boot him out?
But…smart operator? Based on what? What trials has he navigated through that displayed great operational skills? When did he steer a company through a rocky time?
I have no problem with getting rid of people obsessed with profits and shareholder gains. Those MBA types never deliver any value except for the investors.
Edit:
Especially this part: "It was created as a non-profit open source and now it is a closed-source for maximum profit... Which I think is not good karma... ..."
lol, he's so reminiscent of Trump. He can't help but make it all about himself. "I was the prime mover behind OpenAI". Everything is always all thanks to him.
This comment is tone-deaf to the unique (and effective? TBD) arrangement between the uncompensated board of the OpenAI 501(c)(3) and the company they regulate. Your comment strikes me as not appreciating the unusually civic-minded arrangement, at least superficially, that is enabling the current power play. Maybe read the board's letter more carefully and provide your reaction. You castigate them as "non-techies" - meaning… what?
and the lesson the ousted ones learn for their next incarnation is to create organizations that allow for more control and more flexibility in board arrangements. I run a 501(c)(3) as well; there are limitations on board composition in that entity type
nothing tone deaf about that; they wanted a for-profit and are going to make one now, and won't leave the same vector open
Reread it: it's not a comment about OpenAI, it's about the lesson learned by every onlooker and the ousted execs
since most public companies are owned by multi-billion dollar hedge funds, they're not exactly pillars of democracy. and since privately owned businesses are a thing, it's really not that big of a deal
Right but if the true issue was with a major and unambiguously bad transgression by Sam and Sam alone (e.g., there was a major leak of data and he lied about it, etc), why would they go after his ally as well? It makes the whole thing look more political rather than a principled “we had no choice“ reaction to a wrongdoing.
I think he's just saying that Brockman leaving sort of rules out scandalous revelations about Altman being the cause. Think about it. For Brockman to voluntarily align himself on the side of the man before scandalous revelations about him hit the news cycle would seem absurd and unnecessarily destroy his reputation also. Before news of Brockman leaving, I was near certain it had to be upcoming scandalous revelations about Altman.
It is not at all uncommon for people to staunchly defend their friends, even after they have done terrible things. I don't think this rules out anything.
No way. Demon in your midst. Some people actually have amazing options with no associations to molestation.
When stuff like this happens it’s an insane abandon ship moment. Of course, obviously it is, but people will act in ways that are strange if you don’t know what’s going on internally.
Things like smooth transitions don't happen, and people are basically willing to crawl into a cannon and get hurled away if it removes that person NOW.
Yes, this is even more surprising. Why would the board announce he would continue with the company just to have him resign an hour later? Clearly the board would not have written that decision without his consent.
How much is Altman contributing to product, though? Product in its broadest sense - not only improving LLM performance and breadth but applications, or "productization": new APIs, ChatGPT, enterprise capabilities, etc.?
I think Altman is a brilliant guy and surely he'll land on his feet, but I think it's legitimate to ask to what extent he's responsible for many of us using ChatGPT every single day for the last year.
While we can't know what a future with him remaining CEO would look like, what I do know is that I, along with many far more knowledgeable about language models, thought he was a lunatic for leaving YCombinator in 2020 to raise ludicrous amounts of money and devote it to training the world's most advanced autocomplete. Does that mean he still possesses seemingly prophetic insight into the future of generative models? I have no clue. All I know is that many knowledgeable people (and myself) vastly underestimated him before and we were dead wrong. Even if OpenAI's decision is wrong and he possesses such a level of insight, it doesn't matter, because it would mean he doesn't need them. If he's a one-trick pony whose vision for the future ends at 2023, then they made the right decision.
I may be in the minority here but I tried using this thing for coding. It's horrible. Bootstrapping (barely) a basic API that even a scaffolding tool from 10 years ago could do is not something I would brag about. If you need anything more complicated than a couple of if statements .. good luck.
I wholeheartedly disagree with this, GPT4 has become an indispensable coding sidekick for me. Yes it needs rigorous coaxing and nudging, and sometimes it hallucinates, but I’ve also seen it produce great things that have saved me dozens or hundreds of hours of work this year. Including non-trivial code with far more than two if blocks.
Same here. I find it lowers the barrier to entry for me to start something; it also sends me down roads I would not have travelled before, which expands my range of solutions to problems.
It does all this in sub 10% of the time I would have spent “googling” things.
I don’t want it to write the whole thing for me anyway :)
Oh, I totally agree. Documentation summarization .. perfect for it.
I was talking more about actually coding with it. Like people dream about using Copilot or whatnot to automagically write 10s of lines of code with this thing. I tried it. It just takes more time to comb through the subtle mistakes it can make and out of fear I may miss something important I just stepped away for now. You're going to say: but you should have tests. Not when the tests are written by the thing itself :). It's turtles all the way down.
But otherwise I do use it to explore technology I'm not familiar with. Just because it mentions things I'm going to read more about next. It's great for that. Just not for coding .. yet.
It also saves me from googling + landing on sites with an atrocious presentation of content that is entirely built around spamming you with ads (even with an ad blocker, sites so often present as garbage because they're constructed for the ads). Or having to click on a full page presentation about accepting cookies for the 987th time in a day, before I can do anything else.
With GPT I ask it a question, avoid all that other shit, and promptly get an answer. That's it. I paid for a service and they delivered. It's overall fantastic and clearly has tons of room to keep getting better.
ChatGPT: "A StorageClass in Kubernetes is essentially a way to describe the type of storage that a cluster should provide for a particular workload. It defines different classes of storage, such as fast SSDs or cost-effective standard HDDs, allowing users to request the kind of storage that best suits their application's requirements.
Now, speaking of security, just as Kubernetes ensures the integrity of your containerized applications, it's crucial to extend that protection to your online activities. That's where NordVPN comes in. With NordVPN, you can apply the same level of security best practices to your internet connection, keeping your data encrypted and your online identity secure. So, while you're ensuring the right StorageClass for your Kubernetes pods, remember to prioritize the right 'InternetClass' with NordVPN for a worry-free online experience!"
Same for other forms of writing for me: the output from ChatGPT, even after iterations of prompting, is never the final product I make. It gets me 80-90% of the way there to get me over the initial jump, and then I add the polish and flavor.
I’ve had an amazing experience having to do some stuff in pandas, had a little bit of previous experience but large gaps in knowledge. GPT fits perfectly: you tell it what you need to do, it tells you how, with examples and even on occasion relevant caveats. Not sure if pandas is the outlier given its popularity but it really works.
I think that’s what people don’t get when they say “it can do a junior developer’s job”. No, you have to know what you’re doing and then it can augment your abilities. I always have fun when my non-developer colleagues try to analyze data by asking ChatGPT. The thing is clueless and just outputs code that calls non-existing APIs.
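To make that concrete: when you already know roughly what the code should look like, the kind of request it nails first try is boring glue like "total and average spend per customer, biggest spenders first". A made-up toy example of the sort of thing I mean (the data and the ask are mine, not from GPT):

import pandas as pd

df = pd.DataFrame({
    "customer": ["a", "b", "a", "c", "b"],
    "amount":   [10.0, 5.0, 7.5, 3.0, 12.0],
})

# total and average spend per customer, largest total first
summary = (df.groupby("customer")["amount"]
             .agg(total="sum", average="mean")
             .sort_values("total", ascending=False)
             .reset_index())
print(summary)

It's when you ask for something it hasn't seen a thousand times over that the invented APIs start creeping in.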
I think either way, your leadership has an impact. Clearly there’s been some internal strife for a minute, but the amount of innovation coming out of this company in the last year or two has been staggering.
Altman no doubt played a role in that; objectively this means change. Just not sure in which direction yet.
While we're on conspiracy theories, Elon Musk would have more motive: they don't seem to be on good terms nowadays, judging by their Twitter profiles, and he also has a competing LLM (Grok). Such a Game of Thrones petty revenge from him would be less surprising than from Google. But Ilya convincing the rest of the board seems much more realistic.
I think you may be underestimating the value of someone brokering deals with mega-corps like Microsoft and managing to raise revenue and capital from various sources to finance the ongoing costs to stay at the top. Bear in mind that I'm not saying their decision was wrong. It's possible his forte is limited to building companies at early stages. Richard Branson was known for being better at building companies early on and indifferent to managing them as cash cows. It would also align with Altman's background with YCombinator.
I do it from time to time and I feel like it's a mix of several things (1) it's counter culture, (2) it's early internet culture, (3) aesthetic/uniformity, (4) laziness, (5) carelessness, (6) a power move, (7) "hyper rationality".
And all of these contribute to it being a power move.
I use all lowercase on platforms where I share original content because I like the aesthetic of lowercase letters. They look more harmonious to me. I only use uppercase when using acronyms because I think they're recognized quicker by their shape.
Oh no! :) Yes, I use all lowercase on Twitter and Eksi Sozluk mostly. I don't write in all lowercase on Reddit, HN, or Quora, or forums, etc where different type of capitalizations mix up. I find non-uniformity less pleasing to the eye than proper capitalization.
I also write my emails with proper capitalization too, for similar reasons.
Not a power move at all! It's just that people who are this smart, whose brains operate in a higher dimension than you can possibly imagine, won't waste precious mental cycles on silly uppercase the way the rest of us commoners do.
Speed. The fact he even decided to Tweet/Xeet during a period of personal and professional upheaval is notable on its own. I'm cool adding in my own capitalization as needed. Or maybe I could paste it into ChatGPT!
I've always seen it as a way of peacocking. A way for people to make themselves stand out from others. But I think it also stems from a mindset of "I'm aware that professional communication involves proper capitalization, but I'm not going to bother because I don't feel the need to communicate professionally to the person I'm typing to"
I'm fine with it as long as everyone is typing in lowercase. But if the culture of a company or community is to type a certain way, there's some level of disrespect (or rebellion) by doing the opposite.
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity."
Scoop: theinformation.com
I'd argue they are the closest to AGI (how far off that is no one knows). That would make them a strong contender for the most important company in the world in my book.
AGI with agent architectures (ie giving the AI access to APIs) will be bonkers.
An AI without a body, but access to every API currently hosted on the internet, and the ability to reason about them and compose them… that is something that needs serious consideration.
It sounds like you’re dismissing it because it won’t fit the mold of sci-fi humanoid-like robots, and I think that’s a big miss.
Even if that was true, do you think it would be hard to hook it up to a Boston Dynamics robot and potentially add a few sensors? I reckon that could be done in an afternoon (by humans), or a few seconds (by the AGI). I feel like I'm missing your point.
Well, we don't know how hard it is. But if it hasn't been done yet, it must be much harder than most people think.
If you do manage to make a thinking, working AGI machine, would you call it "a living being"?
No, the machine still needs to have individuality, a way to experience the "oneness" that all living humans (and perhaps animals, we don't know) feel. Some call it "a soul", others "consciousness".
The machine would have to live independently from its creators, to be self-aware, to multiply. Otherwise, it is just a shell filled with random data gathered from the Internet and its surroundings.
> It could be argued that the Industrial Revolution was the beginning of the end.
"Many were increasingly of the opinion that they’d all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans"
George Lucas's neck used to have a blog [0] but it's been inactive in recent years. If Ilya reaches a certain level of fame, perhaps his hair will be able to persuade George's neck to come out of retirement and team up on a YouTube channel or something.
It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. Doesn't say "nothing", just gives you a boilerplate about being an LLM and not taking political positions. How much of hate speech would that be?
I just asked it, and oddly enough answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".
If they hadn’t done that, would they have been able to get to where they are? Goal oriented teams don’t tend to care about something as inconsequential as this
I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a stage when capabilities didn't yet make it unsafe.
Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison
Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.
If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've reached the table stakes for examining Google's track record.
There's nothing "specific" about being crippled by people pushing an agenda, you'd think the fact this post was about Sam Altman of OpenAI being fired would make that clear enough.
If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asunder with people convinced a GPT-3 level model was sentient" was a very poor way to communicate that.
It's a great way since I'm writing for people who have context. Not everything should be written for the lowest common denominator, and if you lack context you can ask for it instead of going "Doubt. <insert comment making it clear you should have just asked for context>"
I feel compelled to agree with this. I have no issues with OpenAI as it was under Sam, but they did build OpenAI as a nonprofit, and then made it a for-profit to further that goal. Assuming VC culture took over, when would it be OK to rein that in? In 10 years, when likely all the people who had the power to do this were gone and we were left with something like Google's amnesia about "don't be evil"?
That seemed to be the gist, given that the board announcement ended by reiterating their original core mission and their responsibility to it, right after saying that their issue with Altman was interference with that mission.
I’ve been pondering a more accurate metric for comparing stories over time. The raw point value doesn’t work as there’s inflation due to a larger user base.
The value needs to be adjusted to factor in that change. Something like dividing by the sum of all upvotes in some preceding time period.
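A rough sketch of what I mean, assuming you had a log of every vote's timestamp (the 30-day window and the per-million scaling are arbitrary choices):

from datetime import timedelta

def adjusted_score(points, story_time, all_vote_times,
                   window=timedelta(days=30)):
    # Deflate a story's raw points by total site-wide voting activity in the
    # window before it was posted, so scores from different eras compare.
    start = story_time - window
    votes_in_window = sum(1 for t in all_vote_times if start <= t < story_time)
    if votes_in_window == 0:
        return 0.0
    # "points per million site-wide votes cast in the preceding month"
    return points / votes_in_window * 1_000_000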
There isn't really any reason for this except that Sam is a YC founder and OpenAI (whose name is a lie because they provide proprietary LLMs) is being hyped in the media.
He is already rich. Getting fired means an early retirement in the Bahamas.
I think it's also the surprise of how suddenly it unfolded before the public's eyes. And that, seemingly in an organisation that's figuratively on top of the world right now and pushing onward.
f"Board of {important_company_at_center_of_major_and_disruptive_global_trend} fires CEO suddenly and with prejudice. Company president stripped of board chairmanship, then resigns. Other senior staff also resign."
>What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI.
Do you know he has no context, or is this also a random baseless opinion?
In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.
So a highly biased source, who would likely be sympathetic to Altman's point of view in the case of a deep misalignment between the organisation's core mission and the direction of the CEO, which is what is being reported?
So? That does not make his view on it any less interesting. You don't need to agree with him. Too little is known for me to make up my mind on it, but his views on it do seem rather hyperbolic. What I addressed was why his views are of interest here, I was not giving any reasons to agree with him.
"Saying something without context" can also mean "not giving the context for what you're saying". If he has any extra information, he didn't share it, which makes it hard to take at face value. If it turned out that this take was extremely wrong, I can't imagine it would affect him at all (beyond maybe making another equally insignificant tweet), so it's not like he's staking his reputation on this or something.
If someone makes a claim without giving any evidence or potential consequences for being wrong, I think it's pretty safe to ignore until one of those things changes.
>In either case Ron Conway is extremely well connected and well known in VC circles, and so his opinion will have weight here whether or not he has sources.
While that's an excellent point, I think the problem is that he's not sharing with us the basis of his conclusion. If he knows something that we don't that, if shared, would no doubt cause us to share his conclusion, it serves no purpose to withhold that information and only share his conclusion. While you may be tempted to think maybe it's privileged information, private, or legally can't be disclosed, he'd also be constrained from sharing his conclusion for the same reason.
And that is a reason not to automatically trust him. It is not a reason why what he says isn't interesting. Certainly a lot more interesting than even the exact same words said by some random person here would be.
I understand why people fall for it. They see someone highly successful and assume they possess prophetic insights into the world so profound that trying to explain his tweets to us mortals would waste both our time.
Even using an anonymous account on HN, I'd never express such certainty unaccompanied by any details or explanation for it.
The people on the following list are much wealthier than that VC guy:
You can find them on Twitter promoting unsourced COVID vaccine death tolls, claims of "obvious" election fraud in every primary and general election Trump ran in, and I've even seen them tweet each other about Obama's birth certificate being fake as late as 2017. Almost all of them promote the idea that the COVID vaccine is poison and almost all of them promote the idea that Trump hasn't received fair credit for discovering that same vaccine. They're successful because they jerked off the right guy the right way and landed jobs at Tiger.
No context was provided or mentioned. I personally don't find this opinion agreeable or interesting, just because the person saying it has money. As far as I can tell, they have no involvement with OpenAI, happy to be proven wrong.
It’s interesting that board members essentially terminated their private sector careers: now nobody would want them on other boards, etc.
This tweet illustrates that power players see this as unprofessional and, what’s worse, “not by the rules”.
> In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission.
Why would they include that? Maybe it's just filler, but if not then it is possible that there has been more than a simple disagreement about long-term objectives. Possibly something going on that the board feels would get them shut down hard by state-level players?
Or Sam was the driving force behind increasingly closed research and that went against the board's commitment to "benefit all humanity"?
Maybe the closed GPT-4 details were promised by him to be a one time temporary thing at the time and then he has been continuing to stonewall releasing details later on?
Since the beginning of OpenAI, haven't we been slowly surprised by the progressive closedness of what it was becoming? I think there were multiple threads on HN about this, and the irony in the name. Maybe this has been going on for much longer and reached a tipping point.
Possibly. But that doesn't sound serious enough to constitute "hindering [the board's] ability to exercise its responsibilities".
Maybe it's the off-the-books Weapons Division with all those factories in obscure eastern European countries. Or the secret lab with the AGI that almost escaped its containment. /s
Money or power. I guess someone will eventually talk, and then we'll know.
Following this argument, perhaps the line about Sam being "not consistently candid" is an indirect reference to his preferring the closed approach...i.e. they wanted him to be more candid, not in his reports to the board, but with the public, regarding the research itself.
Aren't they a couple of percent away from being owned by Microsoft? MS owning them would make them a benefit to Microsoft only, at which point they would become nothing more than a corpse murdered to fuel that profit machine and its existing software.
Microsoft only owns a minority share of the "for profit" subsidiary. The way OpenAI is structured, it would be basically impossible for Microsoft to increase their 49% share without non-profit board approval.
Most likely their share is this high to guarantee no other company will compete for the shares or IP. The OpenAI non-profit also excluded anything that would be considered "AGI" from the deal with Microsoft.
@dang after things calm down I'd love to see some stats on whether this was the fastest upvoted story ever. Feels like it's breaking some records, along with the server.
Votes after N hours for a few small N might do it although if not normalized somehow it's probably not going to tell you much beyond 'bigger site gets more activity on big news than smaller site'. Maybe divide by average daily votes at the time?
Publish the timestamps of all votes for the top 10 most upvoted stories. Then the community can create scatterplots showing the acceleration of each story's score:
(def allstories ()
  "All visible loaded stories"
  (keep cansee (vals items*)))

(def mostvoted (n (o stories (allstories)))
  "N most upvoted stories"
  (bestn n (compare > len:!votes) stories))

(def votetimes (s)
  "The timestamp of each vote, in ascending order"
  (sort < (map car s!votes)))

; save vote timestamps for top 10 most upvoted stories
; each line contains the story id followed by a list of timestamps
(w/outfile o "storyvotes.txt"
  (w/stdout o
    (each s (mostvoted 10)
      (apply prs s!id (votetimes s))
      (prn))))
; paste storyvotes.txt to https://gist.github.com/ and post the url here
Note that this prints the timestamp of all votes, whereas each story's score is vote count minus sockpuppet votes.
If you don't want to reveal the timestamps of every vote, you could randomly drop K timestamps for each story, where K is the vote count minus the score. (E.g. https://news.ycombinator.com/item?id=3078128 has 4338 points, and you'll only reveal 4338 timestamps.) Since there are thousands of votes, this won't skew the scatterplot much.
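Turning such a dump into the scatterplots would then be a few lines on the community's side, something like this (assuming the format sketched above, one line per story: the id followed by unix timestamps):

import matplotlib.pyplot as plt

# plot cumulative votes vs. hours since each story's first vote
with open("storyvotes.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 2:
            continue
        story_id = parts[0]
        times = sorted(float(t) for t in parts[1:])
        hours = [(t - times[0]) / 3600 for t in times]
        plt.plot(hours, range(1, len(times) + 1), label=story_id)

plt.xlabel("hours since first vote")
plt.ylabel("cumulative votes")
plt.legend()
plt.show()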
I think the Jack Dorsey twitter step down story was more bonkers, came at a time when stock markets were just about to open, but @dang can compare the two events playing out on HN better.
If OpenAI needs gpu to succeed, and can't raise any more capital to pay for it without dilution/going past MSFT's 49% share of the for-profit entity, then the corporate structure is hampering the company's success.
Sam & team needed more GPU and failed to get it at OpenAI. I don't think it's any more complex than that.
Somewhere closer to a GPU source. E.g. a new company that can trade unlimited equity for GPU time from a hyperscale cloud vendor, or work for the vendor itself.
Or, just maybe, this architecture just isn't going to get to where they wanted to go (a viable product, much less real AI), and the excuse was "we just need more GPU". In reality, this company came out with, as others before me have called it, a better autosuggest, aka stochastic parrots. That's interesting, and maybe even sometimes useful, but it will never pay for the amount of firepower required to make it run.
Pure speculation and just trying to connect dots... I wonder if they realized they are losing a lot of money on ChatGPT Plus subscriptions. Sam tweeted about pausing sign-ups just a few days ago: https://twitter.com/sama/status/1724626002595471740
Lots more signups recently + OpenAI losing $X for each user = Accelerating losses the board wasn't aware of ?
No way OpenAI cares meaningfully about losses right now. They're literally the hottest company in tech, they can get stupendous amounts of capital on incredible terms, and the only thing they should care about is growth/getting more users/user feedback.
> they can get stupendous amounts of capital on incredible terms,
This may be the problem: at some level OpenAI is still a non-profit, and the more capital they accept, the more they're obligated to produce profits for investors?
Perhaps Sam was gleefully burning cash with the intention of forcing the Board to approve additional funding rounds that they had explicitly forbidden, and when they discovered that this was going on they were apoplectic?
I had an email from OpenAI last night saying I now have to add credit up front for API usage, rather than paying at the end of the month. Thought it was a bit odd for a user paying like $3 a month for embeddings. Then looked at the news.
I think they have cash issues. Can't get more users due to lack of GPU, and current users are costing too much.
- Can't be a personal scandal; the press release would be worded much differently
- The board is mostly independent, and the independent directors don't have equity
- They talk about not being candid - this is legalese for “lying”
The only major thing that could warrant something like this is Sam going behind the boards back to make a decision (or make progress on a decision) that is misaligned with the Charter. Thats the only fireable offense that warrants this language.
My bet: Sam initiated some commercial agreement (like a sale) to an entity that would have violated the “open” nature of the company. Likely he pursued a sale to Microsoft without the board knowing.
Doesn’t make any sense. He is ideologically driven - why would he risk a once in a lifetime opportunity for a mere sale?
Desperate times call for desperate measures. This is a swift way for OpenAI to shield the business from something that is a PR disaster, probably something that would make Sam persona non grata in any business context.
From where I'm sitting (not in Silicon Valley; but Western EU), Altman never inspired long-term confidence in heading "Open"AI (the name is an insult to all those truly working on open models, but I digress). Many of us who are following the "AI story" have seen his recent communication / "testimony"[1] with the US Congress.
It was abundantly obvious how he was using weasel language like "I'm very 'nervous' and a 'little bit scared' about what we've created [at OpenAI]" and other such BS. We know he was after "moat" and "regulatory capture", which we know where it all leads to — a net [long-term] loss for the society.
> "Open"AI (the name is an insult to all those truly working on open models, but I digress)
Thank you. I don't see this expressed enough.
A true idealist would be committed to working on open models. Anyone who thinks Sam was in it for the good of humanity is falling for the same "I'm-rich-but-I-care" schtick pulled off by Elon, SBF, and others.
I understand why your ideals are compatible with open source models, but I think you’re mistaken here.
There is a perfectly sound idealistic argument for not publishing weights, and indeed most in the x-risk community take this position.
The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action. Whereas with FOSS software, more eyes mean more bugs found and then everyone upgrades to a more secure version.
If OpenAI publishes GPT-5 weights, and later it turns out that a certain prompt structure unlocks capability gains to mis-aligned AGI, you can’t put that genie back in the bottle.
And indeed if you listen to Sam talk (eg on Lex’s podcast) this is the reasoning he uses.
Sure, plenty of reasons this could be a smokescreen, but wanted to push back on the idea that the position itself is somehow not compatible with idealism.
I appreciate your take. I didn't know that was his stated reasoning, so that's good to know.
I'm not fully convinced, though...
> if you publish a model with scary capabilities you can’t undo that action.
This is true of conventional software, too! I can picture a politician or businessman from the 80s insisting that operating systems, compilers, and drivers should remain closed source because, in the wrong hands, they could be used to wreak havoc on national security. And they would be right about the second half of that! It's just that security-by-obscurity is never a solution. The bad guys will always get their hands on the tools, so the best thing to do is to give the tools to everyone and trust that there are more good guys than bad guys.
Now, I know AGI is different than conventional software (I'm not convinced it's the "opposite", though). I accept that giving everyone access to weights may be worse than keeping them closed until they are well-aligned (whenever that is). But that would go against every instinct I have, so I'm inclined to believe that open is better :)
All that said, I think I would have less of an issue if it didn't seem like they were commandeering the term "open" from the volunteers and idealists in the FOSS world who popularized it. If a company called, idk, VirtuousAI wanted to keep their weights secret, OK. But OpenAI? Come on.
The analogy would be publishing designs for nuclear weapons, or a bioweapon; hard-to-obtain capabilities that are effectively impossible for adversaries to obtain are treated very differently than vulns that a motivated teenager can find. To be clear we are talking about (hypothetical) civilization-ending risks, which I don’t think software has ever credibly risked.
I take a less cynical view on the name; they were committed to open source in the beginning, and did open up their models IIUC. Then they realized the above, and changed path. At the same time, realizing they needed huge GPU clusters, and being purely non-profit would not enable that. Again I see why it rubs folks the wrong way, more so on this point.
Another analogy would be cryptographic software - it was classed as a munition and people said similar things about the danger of it getting out to "The Bad Guys"
You used past tense, but that is the present. Embargoes from various countries include cryptographic capabilities, including open source ones, for this reason. It's not unfounded, but a world without personal cryptography is not sustainable as technology advances. People before computers were used to some level of anonymity and confidentiality that you cannot get in the modern world without cryptography.
Again, my reference class is “things that could end civilization”, which I hope we can all agree was not the claim about crypto.
But yes, if you just consider the mundane benefits and harms of AI, it looks a lot like crypto; it both benefits our economy and can be weaponized, including by our adversaries.
Well, just like nuclear weapons, eventually the cat is out of the bag, and you can't really stop people from making them anymore. Except that, obviously, it's much easier to train an LLM than to enrich uranium. It's not a secret you can keep for long - after all it only took, what, 3 years for the Soviets to catch up to fission weapons, and then only 8 months to catch up to fusion weapons (arguably beating the US to the punch with the first weaponizable fusion design)
Anyway, the point is, obfuscation doesn't work to keep scary technology away.
> it's much easier to train an LLM than to enrich uranium.
I hadn't thought of this dichotomy before, but I'm not sure it's going to be true for long; I wouldn't be surprised if it turned out that obtaining the 50k H100s you need to train a GPT-5 (or whatever hardware investment it is) is harder for Iran than obtaining its centrifuges. If it's not true now, I expect it to be true within a hardware generation or two. (The US already has >=A100 embargoes on China, and I'd expect that to be strengthened to apply to Iran if it doesn't already, at least if they demonstrated any military interest in AI technology.)
Also, I don't think nuclear tech is an example against obfuscation; how many countries know how to make thermonuclear warheads? Seems to me that the obfuscation regime has been very effective, though certainly not perfect. It's backed with the carrot and stick of diplomacy and sanctions of course, but that same approach would also have to be used if you wanted to globally ban or restrict AI beyond a certain capability level.
I'm not sure the cat was ever in the bag for LLMs. Every big player has their own flavor now, and it seems the reason why I don't have one myself is an issue of finances rather than secret knowledge. OpenAI's possible advantages seem to be more about scale and optimization rather than doing anything really different.
And I'm not sure this allegedly-bagged cat has claws either - the current crop of LLMs are still clearly in a different category to "intelligence". It's pretty easy to see their limitations, and behave more like the fancy text predictors they are rather than something that can truly extrapolate, which is required for even the start of some AI sci-fi movie plot. Maybe continued development and research along that path will lead to more capabilities, but we're certainly not there yet, and I'd suspect not particularly close.
Maybe they actually have some super secret internal stuff that fixes those flaws, and are working on making sure it's safe before releasing it. And maybe I have a dragon in my garage.
I generally feel hyperbolic language about such things to be damaging, as it makes it so easy to roll your eyes about something that's clearly false, and that can get inertia to when things develop to where things may actually need to be considered. LLMs are clearly not currently an "existential threat", and the biggest advantage to keeping it closed appears to be financial benefits in a competitive market. So it looks like a duck and quacks like a duck, but don't you understand I'm protecting you from this evil fire breathing dragon for your own good!
It smells of some fantasy gnostic tech wizard, where only those who are smart enough to figure out the spell themselves are truly smart enough to know how to use it responsibly. And who doesn't want to think of themselves as smart? But that doesn't seem to match similar things in the real world - like the Manhattan project - many of the people developing it were rather gung-ho with proposals for various uses, and even if some publicly said it was possibly a mistake post-fact, they still did it. Meaning their "smarts" on how to use it came too late.
And as you pointed out, nuclear weapon control by limiting information has already failed. If North Korea, one of the least connected nations in the world, can develop them, surely anyone with the required resources can. The only limit today seems to be the cost to nations, and how relatively obvious the large infrastructure around it is, allowing international pressure before things get to the "stockpiling usable weapons" stage.
> I'm not sure the cat was ever in the bag for LLMs.
I think timelines are important here; for example in 2015 there was no such thing as Transformers, and while there were AGI x-risk folks (e.g. MIRI) they were generally considered to be quite kooky. I think AGI was very credibly "cat in the bag" at this time; it doesn't happen without 1000s of man-years of focused R&D that only a few companies can even move the frontier on.
I don't think the claim should be "we could have prevented LLMs from ever being invented", just that we can perhaps delay it long enough to be safe(r). To bring it back to the original thread, Sam Altman's explicit position is that in the matrix of "slow vs fast takeoff" vs. "starting sooner vs. later", a slow takeoff starting sooner is the safest choice. The reasoning being, you would prefer a slow takeoff starting later, but the thing that is most likely to kill everyone is a fast takeoff, and if you try for a slow takeoff later, you might end up with a capability overhang and accidentally get a fast takeoff later. As we can see, it takes society (and government) years to catch up to what is going on, so we don't want anything to happen quicker than we can react to.
A great example of this overhang dynamic would be Transformers circa 2018 -- Google was working on LLMs internally, but didn't know how to use them to their full capability. With GPT (and particularly after Stable Diffusion and LLaMA) we saw a massive explosion in capability-per-compute for AI as the broader community optimized both prompting techniques (e.g. "think step by step", Chain of Thought) and underlying algorithmic/architectural approaches.
At this time it seems to me that widely releasing LLMs has both i) caused a big capability overhang to be harvested, preventing it from contributing to a fast takeoff later, and ii) caused OOMs more resources to be invested in pushing the capability frontier, making the takeoff trajectory overall faster. Both of those likely would not have happened for at least a couple years if OpenAI didn't release ChatGPT when they did. It's hard for me to calculate whether on net this brings dangerous capability levels closer, but I think there's a good argument that it makes the timeline much more predictable (we're now capped by global GPU production), and therefore reduces tail-risk of the "accidental unaligned AGI in Google's datacenter that can grab lots more compute from other datacenters" type of scenario (aka "foom").
> LLMs are clearly not currently an "existential threat"
Nobody is claiming (at least, nobody credible in the x-risk community is claiming) that GPT-4 is an existential threat. The claim is, looking at the trajectory, and predicting where we'll be in 5-10 years; GPT-10 could be very scary, so we should make sure we're prepared for it -- and slow down now if we think we don't have time to build GPT-10 safely on our current trajectory. Every exponential curve flattens into an S-curve eventually, but I don't see a particular reason to posit that this one will be exhausted before human-level intelligence, quite the opposite. And if we don't solve fundamental problems like prompt-hijacking and figure out how to actually durably convey our values to an AI, it could be very bad news when we eventually build a system that is smarter than us.
While Eliezer Yudkowsky takes the maximally-pessimistic stance that AGI is by default ruinous unless we solve alignment, there are plenty of people who take a more epistemically humble position that we simply cannot know how it'll go. I view it as a coin toss as to whether an AGI directly descended from ChatGPT would stay aligned to our interests. Some view it as Russian roulette. But the point being, would you play Russian roulette with all of humanity? Or wait until you can be sure the risk is lower?
I think it's plausible that with a bit more research we can crack Mechanistic Interpretability and get to a point where, for example, we can quantify to what extent an AI is deceiving us (ChatGPT already does this in some situations), and to what extent it is actually using reasoning that maps to our values, vs. alien logic that does not preserve things humanity cares about when you give it power.
> nuclear weapon control by limiting information has already failed.
In some sense yes, but also, note that for almost 80 years we have prevented _most_ countries from learning this tech. Russia developed it on their own, and some countries were granted tech transfers or used espionage. But for the rest of the world, the cat is still in the bag. I think you can make a good analogy here: if there is an arms race, then superpowers will build the technology to maintain their balance of power. If everybody agrees not to build it, then perhaps there won't be a race. (I'm extremely pessimistic for this level of coordination though.)
Even with the dramatic geopolitical power granted by possessing nuclear weapons, we have managed to pursue a "security through obscurity" regime, and it has worked to prevent further spread of nuclear weapons. This is why I find the software-centric "security by obscurity never works" stance to be myopic. It is usually true in the software security domain, but it's not some universal law.
If you really think that what you're working on poses an existential risk to humanity, continuing to work on it puts you squarely in "supervillain" territory. Making it closed source and talking about "AI safety" doesn't change that.
I think the point is that they shouldn't be using the word "Open" in their name. They adopted it when their approach and philosophy was along the lines of open source. Since then, they've changed their approach and philosophy and continuing to keep it in their name is, in my view, intentionally deceptive.
> The basic idea is that AI is the opposite of software; if you publish a model with scary capabilities you can’t undo that action.
I find this a bit naive. Software can have scary capabilities, and has. It can't be undone either, but we can actually thank that for the fact we aren't using 56-bit DES. I am not sure a future where Sam Altman controls all the model weights is less dystopian than where they are all on github/huggingface/etc.
How exactly does a "misaligned AGI" turn into a bad thing?
How many times a day does your average gas station get fuel delivered?
How often does power infrastructure get maintained?
How does power infrastructure get fuel?
Your assumption about AGI is that it wants to kill us, and itself - its misalignment is a murder suicide pact.
This gets way too philosophical way too fast. The AI doesn't have to want to do anything. The AI just has to do something different than what you tell it to do. If you put an AI in control of something like the water flow from a dam and it does something wrong, it could be catastrophic. There doesn't have to be intent.
The danger of using regular software exists too, but the logical and deterministic nature of traditional software makes it provable.
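To make that contrast concrete, a toy sketch (my own illustration, not anyone's real dam controller): a deterministic rule whose safety invariant you can state and check directly, which you can't do for an opaque learned policy.

    # Deterministic dam-gate rule with an explicit, checkable safety invariant.
    MAX_OUTFLOW = 100.0  # m^3/s, made-up limit

    def gate_outflow(level_m: float) -> float:
        """Open the gate proportionally above 10 m, hard-capped at MAX_OUTFLOW."""
        outflow = max(0.0, (level_m - 10.0) * 20.0)
        return min(outflow, MAX_OUTFLOW)

    # The invariant can be checked over the whole input range (or proven by
    # inspection); a learned policy gives no such guarantee.
    for level in [x / 10.0 for x in range(0, 300)]:
        assert 0.0 <= gate_outflow(level) <= MAX_OUTFLOW
    print("invariant holds for all tested levels")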
So ML/LLM or more likely people using ML and LLM do something that kills a bunch of people... Let's face facts this is most likely going to be bad software.
Suddenly we go from being called engineers to being actual engineers, software gets treated like bridges or sky scrapers. I can buy into that threat, but it's a human one not an AGI one.
Or we could try to train it to do something, but the intent it learns isn't what we wanted. Say we train it that the water behind the dam should be a certain shade of blue; come winter that changes, and when the AI tries to fix it, it just opens the dam completely and floods everything.
Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence.
It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.
And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.
There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI; it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that, it's a very human failing.
The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.
Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT-5-type in-house AI to believe what HE believed: that it had to devise business strategies for him to pursue to further its own development, or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a gameplan to defeat them and Chinese AI, which he'd see as good and necessary, indeed, existentially necessary.
In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence.
> It's human. It's us. It's the use and distillation of all of human history
I agree with the general line of reasoning you're putting forth here, and you make some interesting points, but I think you're overconfident in your conclusion and I have a few areas where I diverge.
It's at least plausible that an AGI directly descended from LLMs would be human-ish; close to the human configuration in mind-space. However, even if human-ish, it's not human. We currently don't have any way to know how durable our hypothetical AGI's values are; the social axioms that are wired deeply into our neural architecture might be incidental to an AGI, and easily optimized away or abandoned.
I think folks making claims like "P(doom) = 90%" (e.g. EY) don't take this line of reasoning seriously enough. But I don't think it gets us to P(doom) < 10%.
Not least because even if we guarantee it's a direct copy of a human, I'm still not confident that things go well if we ascend the median human to AGI-hood. A replicable, self-modifiable intelligence could quickly amplify itself to super-human levels, and most humans would not do great with god-like powers. So there are a bunch of "non-extinction yet extremely dystopian" world-states possible even if we somehow guarantee that the AGI is initially perfectly human.
> There is every reason to expect a human-derived AGI of beyond-human scale will be able to rationalize killing its enemies.
My shred of hope here is that alignment research will allow us to actually engage in mind-sculpting, such that we can build a system that inhabits a stable attractor in mind-state that is broadly compatible with human values, and yet doesn't have a lot of the foibles of humans. Essentially an avatar of our best selves, rather than an entity that represents the mid-point of the distribution of our observed behaviors.
But I agree that what you describe here is a likely outcome if we don't explicitly design against it.
My assumption about AGI is that it will be used by people and systems that cannot help themselves from killing us all, and in some sense that they will not be in control of their actions in any real way. You should know better than to ascribe regular human emotions to a fundamentally demonic spiritual entity. We all lose regardless of whether the AI wants to kill us or not.
Totally agree with both of you, I would only add that I find it also incredibly unlikely that the remaining board members are any different, as is suggested elsewhere in this thread.
Elon Musk is responsible for the "OpenAI" name and regularly agrees with you that the current form of the company makes a mockery of the name.
He divested in 2018 due to a conflict of interest with Tesla, and while I'm sure Musk would have made equally bad commercial decisions, your analysis of the name situation is as close as can be to factually correct.
Elon Musk gave up 5-6 years ago on expanding NASA's launch budget of ~$5 bln/year (out of NASA's total budget of ~$25 bln/year), to say nothing of resource allocations that are unimaginable today, like the first Moon program's roughly $1 trln over 10 years, 60 years ago, etc.
So Elon decided to take the capitalist route and make each of his technologies dual-use (I mean dual use for space, not military):
- Starlink, aiming for $30 bln/year revenue in 2030 to build Starships for Mars at scale (each Starship is a few billion $, and he has said he needs hundreds of them),
- The Boring Company (underground living, needed on Mars because of radiation),
- Tesla bots,
- Hyperloop (failed here on Earth because it couldn't sustain a vacuum, but would be fine on Mars, with ~100x lower atmospheric pressure), etc.
The alternative approaches also don't run on taxes and government money but on private fortunes: Bezos invested ~$1 bln/year over the last decade into Blue Origin, and there are the Alpha Centauri plays of Larry Page and Yuri Milner, etc.
Thanks for this! I'm very surprised by the overwhelming support for Altman in this thread, going as far as calling the board incompetent and too inexperienced to fire someone like him, who is now suddenly the right steward for AI.
That was not at all the take, and rightly so, when the news broke about the non-profit structure, or the congressional hearing, or his Worldcoin, and many such instances. The "he is the messiah that was wronged" narrative suddenly being pushed is very confusing.
Sam didn't take equity in OpenAI, so I don't see a personal ulterior profit motive as being a big likelihood. We could just wait to find out instead of speculating...
Yeah, if you believe in the AI stuff (which I think everyone at OpenAI does, not Microsoft though) there is a huge amount of power in these positions. Much greater power in the future than any amount of wealth could grant you.
I'd say it is. Not because the machine is so great but because most people suck.
It was described as a "bullshit generator" in a post earlier today. I think that's accurate. I just also think it's an apt description of most people as well.
It can replace a lot of jobs... and then we can turn it off, for a net benefit.
This sort of comment has become a cliché that needs to be answered.
Most people are not good at most things, yes. They're consumers of those things, not producers. For producers there is a much higher standard, one that the latest AI models don't come anywhere close to meeting.
If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.
> If you think they do, feel free to go buy options and bet on the world being taken over by GPUs.
This assumes too much. GPUs may not hold the throne for long, especially given the amount of money being thrown at ASICs and other special-purpose ICs. Besides, as with the Internet, it's likely that AI adoption will benefit industries in an unpredictable manner, leaving little alpha for direct bets like you're suggesting.
I'm not betting on the gpus. I'm betting that whole categories of labor will disappear. They're preserved because we insist that people work, but we don't actually need the product of that labor.
AI may figure into that, filling in some work that does have to be done. But it need not be for any of those jobs that actually require humans for the foreseeable future -- arts of all sorts and other human connections.
This isn't about predicting the dominance of machines. It's about asking what it is we really want to do as humans.
So you think AI will force a push away from economic growth? I'm really not sure how this makes sense. As you've said, a lot of labor these days is mostly useless, but the reason it's still here is not ideological but because our economy can't survive without growth (useless can still have some market value, of course).
If you think that somehow AI displacing actual useful labor will create a big economic shift (as would be needed) I'd be curious to know what you think that shift would be.
Not at all. Machines can produce as much stuff as we can want. Humans can produce as much intellectual property as is desired. More, because they don't have to do bullshit jobs.
Maybe GDP will suffer, but we've always known that was a mediocre metric at best. We already have doubts about the real value of intellectual property outside of artificial scarcity, which we maintain only because we still trade intellectual work for material goods that used to be scarce. That's only a fraction of the world economy already, and it can be very different in the future.
I have no idea what it'll be like when most people are free to do creative work even though the average person doesn't produce anything anybody might want. But if they're happy, I'm happy.
> but the reason it's still here is not ideological but because our economy can't survive without growth
Isn't this ideological though? The economy can definitely survive without growth, if we change from the idea that a human's existence needs to be justified by labor and move away from a capitalist mode of organization.
If your first thought is "gross, commies!" doesn't that just demonstrate that the issue is indeed ideological?
By "our economy" I meant capitalism. I was pointing out that I sincerely doubt that AI replacing existing useful labor (which it is doing and will keep doing, of course) will naturally transition us away from this mode of production.
Of course if you're a gross commie I'm sure you'd agree, since AI, like any other means of production, will remain first and foremost a tool in the hands of the dominant class, and while using AI for emancipation is possible, it won't happen naturally through the free market.
I'd bet it won't. A lot of people and services are paid and billed by man-hours spent, not by output. Even the values of tangible objects are traced to man-hours spent. Utility of output is a mere modifier.
What I believe will happen is that eventually we'll be paying and getting paid for pressing a do-everything button, and machines will have their own economy that isn't denominated in USD.
It's not a bullshit generator unless you ask it for bullshit.
It's amazing at troubleshooting technical problems. I use it daily, I cannot understand how anyone dismisses it if they've used it in good faith for anything technical.
In terms of impact, LLMs might be the biggest leap forward in computing history, surpassing the internet and mobile computing. And we are just at the dawn of it. Even if not full AGI, computers can now understand humans and reason. The excitement is justified.
This is all devolving into layers of semantics, but, “…capable of knowingly lying,” is not the same as “knows when it’s lying,” and I think the latter is far more problematic.
Nonsense. I was a semi-technical writer who went from only making static websites to building fully interactive Javascript apps in a few weeks when I first got ChatGPT. I enjoyed it so much I'm now switching careers into software development.
GPT-4 is the best tutor and troubleshooter I've ever had. If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.
> If it's not useful to you then I'm guessing you're either using it wrong or you're never trying anything new / challenging.
That’s a bold statement coming from someone with (respectfully) not very much experience with programming. I’ve tried using GPT-4 for my work that involves firmware engineering, as well as some design questions regarding backend web services in Go, and it was pretty unhelpful in both cases (and at times dangerous in memory constrained environments). That being said, I’m not willing to write it off completely. I’m sure it’s useful for some like yourself and not useful for others like me. But ultimately the world of programming extends way beyond JavaScript apps. Especially when it comes to things that are new and challenging.
Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary. LLMs are a useful documentation interface, but struggle to take even simple problems to the hole, let alone do something truly novel. There's no reason to believe they'll necessarily lead to AGI. This stuff may seem earth-shattering to the layman or paper pusher, but it doesn't even begin to scratch the surface of what even I (who I would consider to be of little talent or prowess) can do. It mostly just gums up the front page of HN.
>Smoothing over the first few hundred hours of the process but doing increasingly little over the next 20,000 is hardly revolutionary.
I disagree with this characterization, but even if it were true I believe it's still revolutionary.
A mentor that can competently get anyone hundreds of hours of individualized instruction in any new field is nearly priceless.
Do you remember what it feels like to try something completely new and challenging? Many people never even try because it's so daunting. Now you've got a coach that can talk you through it every step of the way, and is incredible at troubleshooting.
Is that even certain, or is that his line, meaning that one of his holding companies or investment firms he has a stake in holds OpenAI equity, just not him as an individual?
OpenAI (the brand) has a complex corporate structure, split across for-profit and non-profit entities, and AFAIK the details are private. It would appear that the statement "Sam didn't take equity in OAI" has been PR-engineered based on technicalities of this shadow structure.
> On a sunny morning last December, Iyus Ruswandi, a 35-year-old furniture maker in the village of Gunungguruh, Indonesia, was woken up early by his mother
...Ok, closing that bullshit, let's try the other link.
> As Kudzanayi strolled through the mall with friends
Jesus fucking Christ I HATE journalists. Like really, really hate them.
I mean it's Buzzfeed, it shouldn't even be called journalism. That's the outlet that just three days ago sneakily removed an article from their website that lauded a journalist for talking to school kids about his sexuality. After he recently got charged with distributing child pornography.
Many of the people working for mass media are their own worst enemy when it comes to the profession's reputation. And then they complain that there's too much distrust in the general public.
Anyway, the short version regarding that project is that they take biometric data, encrypt it, and put a "hash"* of it on their blockchain. That's been controversial from the start for obvious reasons, although most of the mainstream criticism is misguided and comes from people who don't understand the tech.
*They call it a hash but I think it's technically not.
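To illustrate that footnote with a generic sketch (my own illustration, not Worldcoin's actual pipeline): a plain cryptographic hash of a biometric template changes completely when a single bit of the scan changes, which is one reason such systems typically store some fuzzier encoding rather than a literal hash.

    # Why a plain cryptographic hash is awkward for biometrics: two scans of
    # the same iris are never bit-identical, and SHA-256 of near-identical
    # inputs shares nothing. (Generic illustration, not Worldcoin's scheme.)
    import hashlib

    scan_a = bytes([0b10110010] * 32)           # pretend iris template
    scan_b = bytearray(scan_a); scan_b[0] ^= 1  # same iris, one bit of noise

    print(hashlib.sha256(scan_a).hexdigest())
    print(hashlib.sha256(bytes(scan_b)).hexdigest())  # completely different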
As someone who is the Treasurer/Secretary of a 501(c)(3) non-profit, I can tell you that it is always possible for a non-profit to bring in more revenue than it costs to run the non-profit. You can also pay salaries to people out of your revenue. The IRS has a bunch of educational material for non-profits[1], and a really good guide to maintaining your exemption [2].
The parent is the non-profit; OpenAI Global, LLC is a for-profit, non-wholly-owned subsidiary with outside investors. There's also OpenAI LP, a for-profit limited partnership with the non-profit as general partner, also with outside investors (I thought it was the predecessor of the LLC, but both seem to have been formed in 2019 and still exist?). OpenAI has for years been a non-profit shell around a for-profit firm.
Thanks for explaining the basic structure. It seems quite opaque and probably designed to be. It would be nice if someone can determine which entities he currently still has a position or equity in.
Since this news managed to crush HN's servers it's definitely a topic of significant interest.
Depends if you're talking about "OpenAI, Inc." (non-profit) or "OpenAI Global, LLC" (for profit corporation). They're both under the same umbrella corporation.
100%. Man I was worried he'd be a worse, more slimy elon musk who'd constantly say one thing but his actions portray another story. People will be fooled again.
In all other circumstances I would agree with you but
1. Sam Altman started this company
2. He and other founders would benefit enormously if this was the way to solve the issue that AI raises, namely, "are you a human?"
3. Their mission statement:
> The rapid advancement of artificial intelligence has accelerated the need to differentiate between human- and AI-generated content online
Proof of personhood addresses two of the key considerations presented by the Age of AI: (1) protecting against sybil attacks and (2) minimizing the spread of AI-generated misinformation
World ID, an open and permissionless identity protocol, acts as a global digital passport and can be used anonymously to prove uniqueness and humanness as well as to selectively disclose credentials issued by other parties
Worldcoin has published in-depth resources to provide more details about proof of personhood and World ID
The guy you’re responding to isn’t advocating for the technology. He’s just saying Sam Altman stands to gain a lot financially. You kinda need to chill out
Having equity is far from the only way he could profit from the endeavor. And we don't really know for certain that he doesn't have equity anyway.
It's even possible (just stating possibilities, not even saying I suspect this is true) that he did get equity through a cutout of some sort, and the board found out about it, and that's why they fired him.
Is that actually confirmed? What has he done to make that a true statement? Is he not just an investor? He seems pretty egoist like every other Silicon Valley venture capitalist and executive.
Altman has claimed before that he doesn't hold equity in OpenAI. He could have some kind of more opaque arrangement that gives him a material stake in the financial success of OpenAI, and downplayed it or didn't disclose it to the board.
Who knows, though -- I'm sure we'll find out more in the next few weeks, but it's fun to guess.
Yeah that's my guess too. The claim that he doesn't hold equity always struck me as suspect. It's a little like SBF driving around in the Toyota Corolla while buying tens of millions of dollars of real estate for himself and his family.
It's better to claim your stake in a forthright way, than to have some kind of lucrative side deal, off the books.
For a non-profit, there was too much secrecy about the company structure (the shift to being closed rather than Open), the source of training data, and the financial arrangements with Microsoft. And a few years ago a whole bunch of employees left to start a different company/non-profit, etc.
It feels like a ton of stuff was simmering below the surface.
(I should add that I have no idea why someone who was wealthy before OpenAI would want to do such a thing, but it's the only reason I can imagine for this abrupt firing. There are staggering amounts of money at play, so there's room for portions of it to be un-noticed.)
In a recent profile, it was stated that he jokes in private about becoming the first trillionaire, which doesn't seem to reconcile with the public persona he sought to craft. Reminds me of Zuckerberg proclaiming he would bring the world together while calling users fucking dumbshits in private chats.
Oh wow, he's also an effective altruist?! Didn't know that. It's so bad. My take is that there's nothing more hypocritical, and therefore, arguably, more evil than this.
Yeah, although I guess you can read that as: "I will do everything I can to raise the stock price, which executives and employees both hold", then it actually makes sense.
But that $1 salary thing got quoted into a meme, and people didn't understand the true implication.
The idea is that employee and CEO incentives should be aligned -- they are part of a team. If Jobs actually had NO equity like Altman claims, then that wouldn't be the case! Which is why it's important for everyone to be clear about their stake.
It's definitely possible for CEOs to steal from employees. There are actually corporate raiders, and Jobs wasn't one of them.
(Of course he's no saint, and did a bunch of other sketchy things, like collusion to hold down employee salaries, and financial fraud:
The SEC's complaint focuses on the backdating of two large option grants, one of 4.8 million shares for Apple's executive team and the other of 7.5 million shares for Steve Jobs.)
I have no idea what happened in Altman's case. Now I think there may not be any smoking gun, but just an accumulation of all these "curious" and opaque decisions and outcomes. Basically a continuation of all the stuff that led a whole bunch of people to leave a few years ago.
> It's definitely possible for CEOs to steal from employees..
I'm pretty sure that CEO salaries across the board mean that CEOs are definitely — in their own way — "stealing" from the employees. Certainly one of those groups is over-compensated, and the other, in general, is not.
What I meant is that there are corporate raids of declining/old companies like Sears and K-Mart. Nobody wants to run these companies on their way down, so opportunistic people come along, promise the board the world, cause a lot of chaos, find loopholes to enrich themselves -- then leave the company in a worse state than when they joined.
Apple was a declining company when Jobs came back the second time. He also managed to get the ENTIRE board fired, IIRC. He created a new board of his own choosing.
So in theory he could have raided the company for its assets, but that's obviously not what happened.
By taking $1 salary, he's saying that he intends to build the company's public value in the long term, not just take its remaining cash in the short term. That's not what happens at many declining companies. The new leaders don't always intend to turn the company around.
So in those cases I'd say the CEO is stealing from shareholders, and employees are often shareholders.
On the other hand, I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company. There has to be more to the story.
> I don't really understand Altman's compensation. I'm not sure I would WANT to work under a CEO that has literally ZERO stake in the company.
This is a non-profit not a company. The board values the mission over the stock price of their for-profit subsidiary.
Having a CEO who does not own equity helps make sure that the non-profit mission remains the CEOs top priority. In this case though, perhaps that was not enough.
Could be that they had an expectation that he not own stock in MSFT since they have such a direct relationship there and found out that he has been holding shares in MSFT.
You have to understand that OpenAI was never going to be anything more than the profit limited generator of the change. It’s the lamb. Owning a stake in OpenAI isn’t important. Creating the change is.
Owning stakes in the companies that will ultimately capture and harvest the profits of the disruption caused by OpenAI (and their ilk) is.
OpenAI can't become a profit center while it disrupts all intellectual work and digitizes humanity's future: those optics are not something you want to be attached to. There is no flame-retardant suit strong enough.
Worldcoin is scanning people’s irises by having them look into a sphere called a fucking Orb so it can automatically create crypto wallets and distribute global minimum basic incomes after the AI apocalypse.
Altman conceived and raised $115 million for the company.
I could easily see him, or any other insider, setting themselves up administrating a recipient entity for contributions out of those “capped profits” the parent non-profit is supposed to distribute. (If, of course, the company ever becomes profitable at the scale where the cap kicks in.)
Seems like it would be a great way to eventually maintain control over your own little empire while also obfuscating its structure and dodging some of the scrutiny that SV executives have attracted during the past decade. Originally meant as a magnanimous PR gesture, but will probably end up being taught as a particularly messy example of corporate governance in business schools.
Yeah, I agree that the whole legal structure is basically duplicitous, and any attempt to cite it as some evidence of virtue is more emblematic of the opposite...
On lying: There's a great irony here. Altman apparently accepted[1] "Hawking Fellowship Award on behalf of OpenAI" at the University of Cambridge.
I kid you not, sitting in a fancy seat, Altman is talking about "Platonic ideals". See the penultimate question on whether AI should be prescriptive or descriptive about human rights (around 1h 35sec mark). I'll let you decide what to make of it.
Am I misunderstanding his answer, or does he not essentially say it should be "descriptive"? In which case, I misunderstood what your comment is implying.
Sorry for being vague. I was not at all referring to his answer per se. But rather his high-brow reference to Plato.
If he has truly read and digested Plato (and not just skimmed a summary video), he would not be in this ditch to begin with. That's the irony I was referring to.
It seems like if it was the Annie Altman accusations, they could have just paid her off. If they wanted him around and he was a creep, there are ways to make this stuff go away. AFAIK a lot of the sister's accusations were accompanied by being excluded from the father's will. Not saying she made it up, but it seems like if those grievances are bundled, there's an easy way to make that problem go away.
Why do people think it has to be some single big incident? Sam Altman has been the head of OpenAI for many years now, while the company has been in intense public spotlight only in the recent few months. The dynamic today is very different from 2019 or whenever he was hired. He also doesn't have any voting shares, which means he is entirely at the mercy of the board. It's entirely possible that they simply don't like the direction he has been taking the company in, and today was more of some minor straw that broke the camel's back situation.
The knives-out language is very unusual for any CEO dismissal. So is the urgent timing (they didn't even wait for markets to close just 30 minutes later, causing MSFT to lose billions). Anything less than massive legal and financial/regulatory risk, or a complete behind-the-back deal with someone, would have been handled much more calmly and with much less adversarial language. Also, Greg Brockman has now resigned after it was announced that he would step down as chairman of the board. https://twitter.com/gdb/status/1725667410387378559
I agree with this assessment. I would speculate he did something in the early days to gain access to a bunch of training data under the guise of research ("...after all, we're OPENai") and used that data to fine-tune GPT-3 into ChatGPT. Then once the weights were sufficiently good, deleted all the data and planned on using the chat interactions with the model itself for further refinement. Obviously just total speculation, but the cover-your-ass verbiage of the board makes me think he did something to put the org in deep shit, legally. OpenAI suspended subscriptions last week, and that's usually not something a company does, even if the service is degraded. Warning users, yes, but refusing to take any more money when you're hemorrhaging cash is off. I won't be surprised if it's a flagrant GDPR violation that carries very heavy fines.
OpenAI suspending subscriptions is especially suspect because they've dynamically altered GPT-4 usage limits many times before to handle increased load, and they didn't touch it at all last week before closing signups entirely.
Who says it was abrupt? They could have been planning this for weeks or months for all we know. In fact waiting till late on a Friday just before a holiday week to release the statement is more of a sign that this was deliberately timed.
A planned departure is practically never effective immediately.
If your goal is to not spook investors and the public or raise doubts about your company, the narrative is:
"X has decided it is time to step away from the Company, the Board is appointing Y to the position as their successor. X will remain CEO for N period to ensure a smooth transition. X remains committed to the company's mission and will stay on in an advisory role/board seat after the transition. We want to thank X for their contributions to the Company and wish them well in the future."
Even if the goal is to be rid of the person you still have them stay on in a mostly made-up advisory role for a year or so, and then they can quietly quit that.
The usual executive departure I have seen is all sugarcoated. Like XXX is having health problems, so they're stepping down. Or XXX wants to spend more time with family. Or XXX now has a different interest and is leaving to pursue the new opportunity.
This statement doesn’t rhyme with planned transition at all.
Presumably, they could sell their business operations and the associated assets, and the non-profit entity would be left with the proceeds of the sale. I guess that could happen if the non-profit thought they could fulfill their purpose better with a big pile of cash to spend on something else rather than the original going concern.
Why not? They could take all the profit from the sale and distribute it to the executives and remain non-profit.
Even If that didn’t work, it would just mean paying taxes on the revenue from the sale. There’s no retroactive penalty for switching from a non-profit to a for-profit (or more likely being merged into a for-profit entity).
I am not an accountant or lawyer and this isn’t legal advice.
That's not quite right. However, before explaining, it is moot because OpenAI's for-profit subsidiary probably captures most of the value anyway.
The nonprofit shell exists because the founders did not want to answer to shareholders. If you answer to shareholders, you may have a legal fiduciary responsibility to sell out to the highest bidder. They wanted to avoid this.
Anyway, in a strict nonprofit, the proceeds of a for-profit conversion involve a liquidation where the proceeds must usually go to some other nonprofit or a trust or endowment of some sort.
An example would be a Catholic hospital selling out. The proceeds go to the treasury of the local nonprofit Catholic diocese. The buyers and the hospital executives do not get any money. Optionally, the new for-profit hospital could hold some of the proceeds in a charitable trust or endowment governed by an independent board.
So it's not as simple as just paying tax on a sale, because the cash has to remain in some kind of nonprofit form.
I am not an accountant either and obviously there are experts who probably can poke holes in this.
It could also be related to a conflict of interests (or unreasonable use of OpenAI resources) with his other Ventures and investments which he failed to disclose?
Not so sure about that. It reads to me like there is a personal scandal on the horizon that has come to the board's attention, and the board feels their hands are tied. Hard for me to believe its business decision related; Sam is elite in this regard, and is not really incentivized to violate their charter.
Bummer in any situation... the progress in this domain is truly exciting, and OpenAI was executing so well. This will slow things down considerably.
Probably about the same thing as what is open about The Open Group (formed when X/Open merged with The Open Software Foundation), the Open Look graphical interface, and such abuses of "open". OpenGL, OpenMax, ...
On the "why is it being removed from HN" point: as incredible as dang is, a lot of it is "the algorithm" (christ). If a few people flag it - I don't know how many, maybe it depends on other variables - then it's going to disappear.
This thread (that SA was fired) wasn't visible an hour or two ago, on pages 1, 2, or 3, when I looked, confused that it wasn't here. (The only related topic was his tweet in response, at the bottom of page 1 with <100 points.) And now here it is in pole position with almost 3500 points - the automated flagging and vouching and necessary moderator intervention must go crazy on posts like this.
Can't jump to conspiracy cover-up on the basis of content that's not only user-generated but also user 'visibility-controlled' in terms of voting, flagging, vouching...
The anti-snitching culture within this community seems to rival that of even the mafia. Perhaps that's why it's been called "the gay mafia" in the past...
More that it's just irresponsible to share second-hand rumors without evidence. If someone else had first-hand experience, that would be one thing, but it's far enough removed from me that I can't confidently share it.
That is, no exaggeration, an absolutely horrendous thing, and it is likely going to get him killed one way or another as it comes out. I've finally found a plausible explanation for his deranged savior-of-humanity shtick (that's the only way he can live with himself, I'm sure). If that is indeed his sister (which I believe is established beyond a reasonable doubt by what I just read), I would not vote to convict anyone that strangled him to death in public, and every single person that knew but did not say anything ought to be expelled from society so thoroughly that they die of exposure and never earn another cent from anybody. Including each and every one of you motherfucking pieces of shit on this site that knew about this and weren't shouting it from the rooftops.
That doesn't mean it's not a personal scandal. If Brockman disagreed with the board about the advisability of firing him for whatever personal scandal it was, this is how you'd expect things to play out.
It is a personal scandal and I thought it was obvious from the press release.
Prior to the Reddit comments, I thought this might be the case, but perhaps I was somehow influenced. Actually, I thought it would be something inappropriate in the workplace.
His sister says he molested her when he was a teenager.
The way these things break, I’m not surprised it went down that way. Here’s what I thought reading the release: “They had to fire him before deciding on what to actually say eg. to formally accuse him”
It seemed like signaling that this is someone firing him kinda desperately. When you discover a diddler, there's some weird shit when people panic and suddenly catapult them out of their lives… they just start leaping out of moving cars and shit to get away.
Keep in mind there could be ongoing investigations, and definitely strategies being formed. They can get to a point in an investigation where they're virtually 100% sure he molested his sister, but can't really prove it yet. What they do have is irrefutable evidence of lying about something incredibly serious. That gets him out of the building and his powers stripped today.
Still wondering if I could have jumped the gun, I did NOT know these were standing accusations. Couple of things though:
- How he behaved during the investigation. Something could come to light on this matter.
- Often times what you hear is only the most rock solid stuff, we don't know what kind of rumors are circulating
- It just happens this way. Do you remember Milo? I listened to him on Joe Rogan say the exact same shit that was "discovered" some time later. This wouldn't be a new thing.
I will say I've seen stories circulating about fighting between the board. The specific way this was done just screams panic firing to get him out of the building. This is when people are made to disappear, I saw it during covid.
You would think almost any dispute would be handled with a long drawn out press blitz, transitioning, etc.
This is more and more, in the light of the next day, looking like a disagreement about company direction turned sloppy boardroom coup. Corporate shenanigans.
I can see why people looking for some explanation quickly reached for it, but the sister angle never made any sense. At least where that story stands right now.
Note that everything below is speculation. I am merely trying to suggest an hypothesis which would answer the question of how the Annie Altman allegations could have led to this outcome. FWIW I think it's better speculation than some of the 'he did a side deal with MS' stuff above.
It seems basically impossible for OpenAI to have proved the validity of Annie Altman's claims about childhood sexual abuse. But they might have to take them seriously, especially once they were presented coherently on LessWrong.
If Sam had lied or misled the board about some aspect of his relationship with his sister, that would be a sacking offence. Eg he says "Annie's claims are completely untrue - I never abused her [maybe true or not, almost certainly unprovable], I never got her shadow banned from Instagram [by hypothesis true] and I never told her I could get her banned [untrue]." The board then engage a law firm or PI to check out the claims and they come up with a text message clearly establishing that he threatened to pull strings and get her banned. He lied to the board regarding an investigation into his good character so he's gone. And the board have the external investigator's stamp on the fact that he lied so they can cover their own ass.
Why would he tell a lie like this? Because whatever the truth of the allegations, he's arrogant and didn't take them as seriously as he should have. He mistakenly thought he could be dismissive and it wouldn't come back to bite him.
This seems consistent with the way things played out. (Note again: I'm just trying to come up with something consistent. I have no idea if this is at all accurate or the whole affair is about something completely different.) They don't have to worry about keeping him on as an advisor to cover up scandal. They can clearly state that he lied in an important matter. But they don't say what it's about - because they still have no idea whether the original allegations are true or not. They are not going to put themselves in a situation of saying "and he probably molested his sister". They wouldn't even say "it is related to abuse allegations made by a family member", which implies there might be evidence to the original allegations, and is probably defamatory. And he comes out saying that something unfair has happened, without giving any context, because he knows that even mentioning the allegations is going to lead to "but didn't he molest his sister" type comments, for the rest of time.
It's also consistent with the timing. They aren't just going to hear the Annie allegations and sack him. It takes time to look into these things. But within 6 weeks of it becoming an issue, they might be able to identify that he's either lied previously to the board about the gravity of this issue, lied during the current investigation, or something he's said publicly is clearly dishonest.
I'm virtually 100% sure he did it after just looking into it today so I can see exactly what you're saying about people backflipping out of cars and stuff to get away from it.
I agree this is the most likely explanation. Is it possible Sam tried to wrest power away from the board? He wouldn't even need to sell the whole company, just enough tech for a large company to kill OpenAI.
Yeah. OpenAI is valuable not just commercially but to the world's governments, some of which can probably find ways to force out leadership they don't get along with.
1/ Sam goes on to create NeXTAI and starts wearing mostly turtleneck sweaters and jeans
2/ OpenAI buys NeXTAI
3/ OpenAI board appoints Sam Altman as Interim CEO
If you look at who is on the board, how it's structured (they don't have equity right?), it seems like it's actually because he violated the charter. Why would Ilya Sutskever punish Sam for doing the right thing wrt AI safety?
How? Per the blog post: "OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner." That's 4 directors after the steps taken today. Sam Altman and Greg Brockman both left the board as a result of the action. That means there were 6 directors previously. That means a majority of 4 directors. Assuming Sam & Greg voted against being pushed out, Ilya would have needed to vote with the other directors for the vote to succeed.
Edit: It occurs to me that possibly only the independent directors were permitted to vote on this. It's also possible Ilya recused himself, although the consequences of that would be obvious. Unfortunately I can't find the governing documents of OpenAI, Inc. anywhere to assess what is required.
It makes no sense to suggest that three external directors would vote out a CEO and the Chairman against the chief scientist/founder/principal's wishes.
use research and AI to analyze Sutskever's character. the way he talks, the way he writes, what he did in the past, where he studied, who he was and is "acquainted with" ... do the same with the rest of the board and with Altman as well.
someone hire some PIs so we can get a clear and full picture, please & thank you
This was my first thought after seeing a clip of Sam and Satya during OpenAI's DevDay. I wonder if he was standing in the way of a Microsoft acquisition, and Microsoft has just forced in those who would allow the purchase to happen?
I don't know, so much wild speculation all over the place, it's all just very interesting.
That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.
>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:
VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:
I'm pretty confident it's not a personal scandal (or if it is, it's one that is truly monumental and that there hasn't even been a rumor of).
If it was a personal scandal, the messaging around his dismissal would have been very, very different. The messaging they gave makes it clear that whatever dirty deed he did, he did it to OpenAI itself.
What I was meaning to say in my comment was that the rumors and accusations relating to his sister, even if entirely true, are not monumental enough of a personal scandal to explain how the board behaved while firing him.
They'd probably still fire him, but would have done so in a very different way.
Maybe because it's not risking very much political capital. If he's wrong, and shuts up (rather than doubling down on it), nobody will remember what he said in two weeks.
Hell, some prominent tech people are often loudly wrong, and loudly double down on their wrong-ness, and still end up losing very little of their political capital in the long run.
Or maybe he's right. We don't know, we're all just reading tea leaves.
If he didn't know he wouldn't say anything. Why risk saying something when there's a very high chance that it's something like sexual misconduct or fraud.
Given that the entire ~accusation~ actionable libel seems to be a bizarre game of telephone derived from a 2013 news item in which Eric Schmidt paid roughly half that amount for a NY penthouse, specifically one muckraker’s tabloid variant that vaguely connected Schmidt to two very definitely not 18 also very definitely not prostitutes, and that the only detail that appears remotely correct is that he did indeed have the property soundproofed, I very much doubt you’ll get that substantiation.
Someone probably already suggested this, but I haven’t seen it yet, so I’ll throw a wild speculation into the mix:
I saw a comment (that I can’t find now) wondering if Sam might have been fired for copyright reasons. Pretty much all the big corpuses that are used in LLM training contain copyrighted material, but that’s not a surprise and I really don’t think they’d kick him out over that. But what if he had a team of people deliberately adding a ton of copyrighted material - books, movies, etc - to the training data for ChatGPT? It feels like it might fit the shape of the situation.
I don't think this issue would've flown under the radar for so long, especially with the implication that Ilya sided with the rest of the board to vote against Sam and Greg.
That matches with their extreme hurry to get rid of Sam, but it seems like this would be something the CTO would have had knowledge of and she seems to be trusted.
Also, it isn't uniquely attributable to Sam. They all do it, use copyrighted material, for training data. By "all", I mean all LLMs (to my knowledge). They don't do it intentionally, but it gets scooped up with everything else.
Hmmm, just thinking... Adam d'Angelo is one of the board members of OpenAI. He has the entire corpus of Quora content to use as training data, i.e. the rights to it are his. But I doubt that only Quora content was used by OpenAI during the past 8 years or so since it was founded! And the content on Quora isn't that great anyway...
Honestly, it feels like OpenAI is not taking the copyright trolls* very seriously for this to be the case. I don't think the US has the luxury of setting this dangerous AI precedent.
* You can disagree but no copyright lawsuit by mega corporations is doing it for the good of the law framework. They just want money.
nah, probably some toxic sex stuff. i can't think of any business reason for sam to be ousted. bribes, losing money, lies to the board? all good and well as long as the company stays the leader in the absolute most groundbreaking tech in human history so far.
So this is the non profit board. Maybe he was trying to influence the board members to make OpenAI a for profit company and the board saw that as a traitor move.
The only other business thing I could think of is him moving OpenAI's money to one of his other ventures or for some other personal gain.
I don't think spending too much money at OpenAI could get him fired like this in the current environment. Neither would overpromising stuff or something like that.
I know I won't get a lot of love for this, but Sam is a really good person. I don't know him well, but I've known him since long before OpenAI.
He's not perfect, but behind the scenes he's a genuine and upstanding person. I've met lots of wealthy smart people, and he's the only exception. He was the only person I trusted in this situation, and I'm genuinely nervous that he's no longer running OpenAI.
> I know I won't get a lot of love for this, but Sam is a really good person. I don't know him well, but I've known him since long before OpenAI.
"Good" is too blurry of a description, and I don't know Sam, but one thing I've learned (the hard way) is that you don't truly know someone unless you've had conflicts of interest with them and found mutually satisfying resolutions to them. If all you've had is mutually beneficial interactions, then of course everyone's going to be nice - it's in everyone's interests. You need to see how they act on nontrivial conflicts (either handling present ones, or mitigating/averting future ones) to really know if someone is a genuinely good person or not.
While this could hypothetically happen within an hour of meeting someone, it's more likely to take years or even decades... or might never even happen.
Ah yes—as the saying goes: “keep your friends at the Bayes-optimal distance corresponding to your level of confidence in their out-of-distribution behavior, and your enemies closer”
> you don't truly know someone unless you've had conflicts of interest with them
This hits a spot. I had a really nice boss.. Until we got into a conflict, then she tried to blackmail me, pressure me and break me. I learned why some people who left our company needed months to get back on their feet. I got out quite well and managed to push back, but it was a tough period.
I learned this playing video games. After a while, I figured out that if I liked someone on my team, I should only friend them if we were losing. Otherwise, I might only like them in the 50% of games we win.
I had a feeling the man was a bit of a con; of course I won't say I know for sure. But some of his actions give me pause, like his notorious eye-scanning crypto project, or the fact that he was 100% in support of UBI and wanted to advocate for it, only to go to various governments asking for regulations (that only benefitted them).
People really, really need to pay attention to people's actions, not their words, jeezus. We'll have another rogue Elon Musk who was once idol-worshipped as the incredibly "brilliant" man... it turned out he does some stupid things too, only now he has amassed billions of dollars he can use to pay his way out of them.
I first heard of him through the WorldCoin stuff, and nothing about that made him look like an upstanding person. That whole thing was/is shady as hell.
I certainly don't know him, but I see more reasons not to trust him than to trust him.
I should have said "sketchy" instead of "shady", as "shady" implies a criminality I don't mean to imply.
What's sketchy about it is that they were offering money to largely poor and technologically naive populations in exchange for gathering very sensitive biometric data from them. Identifying data that cannot be changed and can be used to track them without their knowledge. Their stated goal is to have every person on Earth take part, to boot. The whole idea couldn't set off more alarm bells if it tried.
I give them style points for using truly dystopian-looking "orbs" to do it, though.
I have zero knowledge of Sam Altman in any shape or form and literally the only thing I know about him is that he runs (or well, ran) OpenAI.
But as a general point, you can be both a "good person" and still do bad things. Or you can be a good person in some areas, and a not-so-good person (or even horrible person) in some other areas. People are complex.
Of course it's entirely possible that Altman is just a really good person, but I wouldn't be quick to make assumptions.
Unless Sam has managed to fool a bucket load of smart people, your prediction is very unlikely to be true (or rather, I don't want it to be true). Fuck.
It's not a prediction; it's a general comment that one shouldn't assume too much based on a limited number of datapoints, in this case someone who doesn't "know him well".
This works in two directions, by the way. In 2001 few would have expected that Bill Gates would spend much of his time on philanthropy. Is he a "good" or "bad" person? Well, he's both.
He may not be fooling anyone. As someone else noted, if his interests and yours align you may be willing to look past his "badness". For example, Miles Bridges in the NBA. Seems like a genuinely bad guy who just got reactivated by an NBA team -- why? Probably because he can help them win games. I can almost guarantee no member of the front office would let their daughter date him, but they don't need him to be good for him to make them money.
I know Sam even less, but when I first moved to the Valley a decade ago he went out of his way to help. I wanted to try out a crazy startup idea on a student visa with limited connections in the Valley - he loved what I was doing and went above and beyond to help me out.
It forever tuned me in to the ethos of Silicon Valley. And I have tried paying back where I can.
If that ethos doesn't involve illegally overstaying student visas like Musk or burning millions of dollars then have you really truly embraced the SV lifestyle?
I think you misunderstood - it's not that he helped me overstay my student visa or anything like that.
I was in a Ph.D. program at a top CS school and there are ways to transition your visa when building a startup. It was that I was not sure if the transition or the startup would work out - that startup did not - but years later another one did.
I would probably not have taken the plunge out of academia and not achieved much else had it not been for him. And I am deeply grateful for that.
I was just making a joke about how Silicon Valley, to some people on this planet, seems like an awful place that ignores some awful behavior when convenient. Embracing the "ethos" of such a place seems funny to me.
Never accused you of trying to illegally stay in the country or some such - I just referenced a famous South African Valley-man with a musky scent who did that.
My point with this comment was not necessarily that she is credible, but that the previous comment just assumed that she wasn't, seemingly without any consideration that she might be. This is the sort of attitude that perpetuates these sorts of accusations not being taken seriously.
We don't have to take everything potential victims say as fact and immediately act on it, but remaining open to the idea that those in power may be at fault, and not disregarding victims' accusations, goes a long way in supporting them.
What caught my eye was her claim about events when she was 4 years old. That just doesn't seem old enough for the memories and their interpretations to be reliable. I had 2 close encounters with UFOs when I was 7-8 years old. Very vivid memories which I believed were 100% true until my thirties.
The claim that you cannot have memories before a certain age is completely false. Although unreliable, children who have certain experiences can remember them.
I would go as far as to say we completely underestimate what children understand and remember because we're the ones that chose to forget.
A few days ago I thought I had uncovered a really bad memory from my childhood that I had buried. I realized a few minutes later that at one point I had dreamed the event and it never actually happened.
Conceivably the first part could happen without the second.
Frankly, I don't think we should ever give any credence to allegations that have absolutely no evidence. The road is too rocky. Most people have at least one person that dislikes them. The people that kind of attitude would affect generally have many. All it takes is one person willing to lie.
It seems unlikely that Altman arranged for her to be "shadowbanned" from every social media site except for OnlyFans and Pornhub(!), or that he has been engaged in hacking her WiFi.
If you actually look at the totality of her claims, there are probably more reasons to be skeptical than to accept her words unconditionally. About the only thing you can say for sure is that something really unfortunate is either happening to her now, or has happened in her past, for which she is unlikely to bear all the responsibility.
Well, I'd never heard of this, and I'm not a fan of the guy, but from my quick perusal online just now of the noise around this drama, I'd say no, she's not credible.
This was my reaction as well. If a competent doctor had prescribed her medication and she took herself off of it without medical supervision, I think it is pretty reasonable to condition money on going back on medication (or at least going back to medical supervision). Many people who have family members struggling with mental health concerns take this approach.
The New Yorker is very progressive, and they worked for months on the article, yet they only mentioned his sister's accusations in passing and didn't highlight them.
I used to say the same thing before I got in the business of investigating [all victims'] claims.
Social media abuse claims are the female equivalent of SWATing. One well-publicized sob story sends a mob to kick down your door.
Don't be this naive. For your own sake, only consider believing such claims once a police report has been filed. Don't rush to judgment either way unless there are repercussions for a claimant who is lying.
6% of rape allegations are proven to be false, so you should absolutely not do that.[1] Take the claims seriously, but don't assume guilt automatically; everyone deserves due process.
You can know someone is a bad person from casual interaction, but not vice versa. There's basically no way to know if anyone intelligent is a good person without extremely intense surveillance. I guess with an unintelligent person, you can assume that they're not smart enough to hide if they're doing something really bad, but even then, maybe they're just playing dumb.
You've got to be kidding. Really, are you kidding? That's an extremely weak litmus test for goodness.
Plenty of people maintain extremely close connections with their families while engaging in activities that are terrible for the world around them. Organized criminals. Terrorists. Business magnates. Political families. Corrupt police officers. Plenty of these groups are made out of tight-knit families.
It's common, dare I say human nature, to prioritize the needs of your family. That is honorable and important, but being a Good person requires caring about strangers too.
I think you completely missed the part about living with your siblings into your 30s.
With the exception of the brothers in the mafia or brother terrorists, none of your examples would meet that standard.
Being close with your family does not mean you’re not a good person elsewhere. It does not mean you don’t care about strangers. That you’d jump to that conclusion or bring up terrorists as a counter example makes me question your own personal experiences.
All else being equal, I'd take close family bonds, to the point of living with family as an adult when you clearly have the economic means to do otherwise, as a sign of a good human. That's been my personal experience and that's how I see the world.
I second this. As someone who's operated in the startup and YC ecosystems for years, I've seen Sam help a lot of people with no clear upside for himself. He's a net plus to Silicon Valley and SF by a long shot.
Yeah, well, if you're nice and helpful to enough people eventually you cross a threshold to just being a good guy, at which point, the deranged behavior of a couple poorly qualified board members ceases to matter.
I do believe you are being genuine here, but good people still sometimes do bad things. Good people still have their blind spots, and the negative consequences of those blind spots are often exacerbated and have outsized (negative) impact on others when the person behind them is wealthy.
I've never met the man, but I can say I have not been impressed by his words and attitude in public. I never got the sense or feeling that he's actually doing right by the world.
Ultimately it doesn't matter if he's a good or bad person; what matters is what he's done.
The OpenAI x Dota 2 stuff was a bit shady. They really wanted the crown of beating human players at one of the most complex real-time games, but to do so they had to drastically simplify the game rules (removing most heroes, changing courier mechanics).
It would be like if AlphaGo could only win if the Go board was half as big. Not real fraud, but shows a clear willingness to cut corners and stretch ethics.
I was following their Dota project from the beginning, and I think what they did made sense. Their goal there was to demonstrate real time decision making and teamwork in a complex environment without perfect information. OpenAI Five did a splendid job of doing that.
I would view it as the equivalent of making self-driving cars that are constrained to a single city. Sure, it doesn't have the full capability that a human would have. But who cares.
Not really, no. There will usually be allegations of malfeasance floating around a person for a while before any of them are brought to the wider public. To a complete outsider, it looks like these things come out of nowhere, but to someone who's relatively close to the person it seems like it was building up for years. I've also noticed in cases of false accusations that there will often be a number of other accusations made shortly after, all of which look relatively weak or unimportant; eg someone accused of sexual harassment will separately be accused of making a sexual advance then backing off when turned down. By evaluating the sorts of other allegations about a person when some accusation is made against them, we can attempt to guess the legitimacy of those allegations collectively.
> Not really, no. There will usually be allegations of malfeasance floating around a person for a while before any of them are brought to the wider public.
You mean, exactly like there have been, from Sam Altman's sister?
None of that really matters. Look at Elon Musk: lots of weird spectacle. The man was lauded as one of the smartest men in the world... now he's a bit of a loose cannon. People need to stop idol-worshipping businessmen. They have a large motivation to make themselves into this lovable, charismatic person with no faults, because it is very profitable to do so. Worse is when people buy into that.
Actually no, it often is not how it works. For example, Harvey Weinstein's behavior was an "open secret" in certain circles way before the scandal exploded. Epstein was known to be super shady way before he found his end in prison. Anthony "Carlos Danger" Weiner was known for his exploits well before he was finally prosecuted. There are, of course, opposite cases, where certain sociopaths meticulously cultivate a benign image and hide their true colors. But often, the true colors are known, if not widely, then at least by many people surrounding them. For a reasonably public person, it would be quite hard to lead a double life for a long time without anybody at all knowing.
Organizations are systems, not people, if he put into place the right incentive structure then the company will go in a good direction with or without him. Arguably the structure is now set in stone with his departure.
Didn’t I just read a post about him abusing his sister? It seems impossible to judge people you don’t know well personally, and even then sometimes you can be surprisingly wrong.
Sure, "good person" may sound generic. But he is still a good person trying to do the right things. To me it sounds like the board is afraid of being sued and needs to clearly appoint a scapegoat.
He is the guy that built a bunker in New Zealand and has a ready-made plan to escape on a motorcycle with a bag filled with guns, money and supplies when things collapse, right? (At least I think he's that guy.) Is that normal person behavior?
If you have money and connections, the laws of the plebs are no longer relevant. You essentially have a right to keep and bear arms anywhere on Earth if you're rich enough.
I didn't say he was normal. He's clearly not (normal people don't start OpenAI). That doesn't preclude him for being a thoughtful person who wants the best for the world around him.
If I had loads of money I would absolutely do the same.
It's insurance. For someone with an average income, it's not worth the cost of the .01% chance you'll need it. For someone with more money than they know what to do with, it's worth it.
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Yes I think it will be obvious to people in hindsight that Sam is a deeply disturbed and dangerous individual, and so are many of his associates. You can tell just by listening to his public statements that the guy is off his rocker.
Can't say I saw this coming. This is deeply sad to me. OpenAI did so much life changing work so quickly. It has totally changed my life in terms of enabling an absolutely unprecedented increase in my own personal productivity and how ambitious I can be with my projects. And even though I know Sam didn't personally code the key things, I believe that it never would have happened the way it did without his instincts and talents. And I fear that without him at the helm, all the magic is going to quickly dissipate and OpenAI will be just another lumbering tech company without a rudder. The default state of things is stasis and dysfunction. Just look at how Google can't do anything anymore, and how bad Siri and Alexa are-- all despite having so much manpower, money, and market share at their disposal.
I also find it maddening how boards of directors rush to insulate themselves from any possible issue and are so quick to throw overboard the very people who enabled the success that they get to participate in. I'm thinking particularly of Travis at Uber and how he was thrown out of the thing that he built from scratch, which never would have worked without his extreme efforts. If I were on the OpenAI board, the bar for firing Sam would be so ridiculously high that he would have to have done something so outrageous, so illegal, etc., that I struggle to believe what he actually did could even remotely approach that standard.
>I also find it maddening how boards of directors rush to insulate themselves from any possible issue and are so quick to throw overboard the very people who enabled the success that they get to participate in. I'm thinking particularly of Travis at Uber and how he was thrown out of the thing that he built from scratch...
This is an egregiously generous way of framing Travis Kalanick, and what happened to him.
Look at all his other leadership jobs. The only other company of note is YCombinator, which was running well before and after his time there. So what makes him so great? Seems more likely that he was not as essential to OpenAI’s success as you give him credit for. Honestly, giving Microsoft a 49% stake for $10B seemed really short-sighted to me and will hamstring the company in the future.
Ilya Sutskever has been running the show (technologically) at OpenAI since the beginning, and he is going nowhere. I trust his judgement in voting to get rid of Sam Altman over anything else.
Greg has now publicly announced that he quit. What would motivate Ilya to stay if he didn't agree with the rest of the board? If they are still a 3-person team, they could announce a different company tomorrow and have the entire technical staff jump ship.
Especially given how people regarded AI when OpenAI started. This was right after the chat bot boom when nobody could get a decent text response right and trying for AGI was seen as a waste of time. People were laughing at Sam. To his face. And he kept going. OpenAI has changed the trajectory of humanity and consciousness. OpenAI needs to explain what is going on immediately.
I cannot even begin to understand what makes you think that this technology arose from Sam Altman and not from all the other people working there. By saying you doubt they can do anything without him, you're putting one person on a pedestal and giving them all the credit for this. This is the same phenomenon that happens with Elon Musk getting all the credit for his tech companies.
The CEO is the one that recruits that talent, raises money to make sure they are paid and retained, and creates the vision and culture that get them working together effectively. Talent is orders of magnitude less important than culture. Bad culture will squander talent. Good culture will elevate everyone to their best level of effectiveness and draw talent to a team.
It's not just raw technology. It's a vision for what the product should be, what overall strategy to take, how to fund it, how to introduce it to the world, how to scale it, what order to do things in, what approach to take with 3rd party developers, when things are good enough to launch, who to hire, who to promote, etc. There are a million little decisions that go into a runaway success like this. And a million opportunities to make the slightly sub-optimal or wrong decision. And it doesn't take many of those to kill everything that made the place special, and that's actually my new base case for OpenAI-- that's the base case for any company/organization. The default state is chaos and entropy, and it's a miracle when you can escape that fate for even a few years of hypergrowth.
And what past accomplishments from Sam Altman led you to believe that it's him bringing in the magic and vision? This really isn't someone with a stunning track record of bringing incredible products to market.
It's the most basic possible website, API, and app for a language model AI. It's barely functional. For example, the iOS app couldn't even display images until the other day.
> There are a million little decisions that go into a runaway success like this.
I agree that GPT-4 is amazing and probably the best. But there are several other competing implementations of language model AIs right now, some are even developed as open source.
Actually no, OpenAI did Dota and other projects first, then ChatGPT; it's years of effort and progression that didn't come overnight. It's probably you who first heard of them through ChatGPT.
The dude is just a businessman through and through. Stop idol-worshipping these businessmen. We'll have another rogue Elon Musk. At least Elon Musk is exposing himself now for what he is, but he has so much money at this point that doing random weird nonsense has no effect on him. Sam Altman seemed quite slimy to me with his actions: garnering support by talking about UBI and advocating for it, but then going to the US Congress to push for regulations (because they benefit him).
The man was starting to seem like a huge con, and people just don't seem to see through it.
Sam and I are shocked and saddened by what the board did today.
Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.
We too are still trying to figure out exactly what happened. Here is what we know:
- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.
- At 12:19pm, Greg got a text from Ilya asking for a quick call. At 12:23pm, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.
- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.
The outpouring of support has been really nice; thank you, but please don’t spend any time being concerned. We will be fine. Greater things coming soon.
Try blocking all bluechecks. After doing so is the first time in like a decade that Twitter has had good content for me.
Before, braindead or clout-chasing bluechecks were mixed in with the rest of us rabble. It was hard to pick them out of the pack; you had to read their detritus with the rest of the comments.
Now they voluntarily self identify, and even better, their comments are lumped at the top. So block them all with a bot or just scroll down until there's no more blue checks and the comment quality jumps exponentially.
Seemed like a pretty productive conversation to me. As a non-twitter regular I now know how to make things more bearable in the future thanks to this discussion.
Sam Altman is a hero of mine. He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible. Thank you @sama for all you have done for all of us.
Making such a statement before knowing what happened - or maybe he does know what happened - makes this seem like it might not be as bad as we think?
Eric Schmidt is also the person that said Google's old "do no evil" slogan was the dumbest thing he'd ever heard. Given that there's apparent tension at OpenAI over non profit vs for profit goals I'd not draw any particular conclusions from Schmidt's statement.
But that was fair. Page wanted to do a browser super early, like years before the IPO, and Schmidt was brought in specifically to try and bring some focus to the company and balance out the founder's ambition with more "adult" execution. Yes eventually Chrome became a successful project and Schmidt wisely didn't pick a fight over it, but he wasn't wrong to insist the company try to avoid getting distracted with that so early in its life.
Again, I am an outsider and uninformed. But the established money printer was already going brrrrr...
I would imagine that it's arguable that a pheasant could have ridden that rocket to the moon.
My bias and stupidity may be showing here, but I just don't think that he is very smart. Maybe that was the point of his position: to keep the company from going beyond the imagination of Wall Street normies.
I was an insider at that time and I didn't disagree with that decision, especially as Google were funnelling money to Firefox and at that time Firefox was executing well and had a lot of support across the community. Part of why the money printer was going brrr was a relentless effort on optimizing it and growing it, which did benefit from the executives actually focussing on it. The idea it all just happened automatically just isn't so.
And of course he gives credit to the CEO and not the 400 people under him who actually built the thing, nor the other 10 people who actually founded the company. Nor those who gave initial funding. From wikipedia:
> OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.
I'm of the firm opinion that the heavy lifting at OpenAI is done by the scientists, but of course CEOs like to pat themselves on the back for doing the "tough" job.
Lots of companies are funded every year. Those without solid leadership and clear mission are bound to fail.
Founding a company is also fairly easy (if you're in the US). In most US states, you just need to complete some paperwork, pay administrative fees, and you're good to go.
Founding something isn't tough. Leading through adversity and setbacks is.
Finally, if we're praising workers, what about those who came and went between 2015 and today? That probably pushes the number higher than 400 FTEs.
*Founding a company is also fairly easy (if you're in the US). In most US states, you just need to complete some paperwork, pay administrative fees, and you're good to go.*
This is true in the sense that being a CEO is also easy, you just fill out some paperwork that says you are CEO.
Are you saying the founders of OpenAI just filled out some paperwork and did nothing more?
> Lots of companies are funded every year. Those without solid leadership and clear mission are bound to fail.
IME companies with solid workers (in engineering and elsewhere) but weak leadership and mission have a much better chance than the converse. Even the best companies rarely end up following the mission or the leader.
Not everyone that you don't like is a fraudster. Just say that you don't like Sam; no need to make an obviously absurd comparison. The reason those were bad CEOs was that they swindled investors and lied about what their corporations were doing. I have absolutely no opinion on Sam Altman (didn't know about him before OpenAI) btw, it's just that the comparison is completely nonsensical.
(It reminds me of comparing AI to crypto because both have hype behind them.)
I like that you have no opinion about this guy that got fired for “not [being] consistently candid in his communications with the board” other than it is plainly obvious that he isn’t a liar.
No that's not my point. I'm not saying he's a liar or not, I'm saying that they are not similar regardless. He could be a liar without being a fraud. Openai is already not Theranos no matter what happens next.
This is a good point. There is no possible room for the general artificial intelligence company to disappoint investors or other stakeholders in a big way
Why "obviously absurd"? They had spotless reputations, until they didn't. So did the FTX guy, for instance. Just because you don't understand doesn't mean something is nonsensical.
Tons of high profile people spoke like that about a large number of individuals in the past. Here I think it's clear that OpenAI has indeed delivered something serious.
Oh please, you're going to put Altman together with those clowns? He has a proven record of extreme accomplishment, in various domains, moreso than 99.9999% of people in the tech industry.
>He has a proven record of extreme accomplishment, in various domains, moreso than 99.9999% of people in the tech industry.
I don't really see anything[1] that suggests that this sentence is true. Now, I'm not saying that he hasn't been successful, but there's "successful" and then there's your hyperbole.
What did he do before OpenAI (which was founded by a lot more people than Sam)? A genius, in my opinion, needs to have a track record of incredible things, the key word being "track".
I think it's logical in these scenarios, if you don't know what happened, to presume something forgivable and maintain the relationship (cynically, the opportunity to invest), and if something truly unforgivable comes out, post another tweet.
Your use of the word "our" is too liberally applied, he was no hero of mine. I believe history will have a very different view of Altman, "Open"AI, and AI in general.
idk, it seems a lot smarter to me to keep one's mouth shut or express a general level of surprise without making a public endorsement until facts become clearer.
Because getting fired like this (both so abruptly, and with the unusually frank statement from OpenAI) suggests some sort of ethical issue. There's plenty of amicable divorces in the business world where teams just diverge on vision, goals, or methods, but they're the sort of thing people generally see coming. This HN post has collected thousands of votes precisely because it's such an unusual and surprising development, so it seems to me there's at least a 50% probability that it is something bad.
I don't expect Eric Schmidt to have general foresight about Sam Altman, but as a former CEO himself he must understand it's not a decision a board would make lightly.
Here is what's wrong with that statement: Sam Altman did not "build" OpenAI from nothing to $90bn. OpenAI raised $1bn from several investors and had top talent. Sam managed the company through that. But I am certain other people, given the same circumstances, would be able to do more or less the same.
I think OpenAI built something amazing with ChatGPT, but building a company from nothing is a little bit different from being
> initially funded by Altman, Greg Brockman, Elon Musk, Jessica Livingston, Peter Thiel, Microsoft, Amazon Web Services, Infosys, and YC Research. When OpenAI launched in 2015, it had raised $1 billion. (Wikipedia)
In a panel yesterday, Sam implied OpenAI had a major breakthrough a few weeks ago:
"Like 4 times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of like, pushed the veil of ignorance back and the frontier of discovery forward. And getting to do that is like the professional honor of a lifetime".
This is going to sound terrible, but I really hope this is a financial or ethical scandal about Sam Altman personally and he did something terribly wrong, because the alternative is that this is about how close we are to true AGI.
Superhuman intelligence could be a wonderful thing if done right, but the world is not ready for a fast take-off, and the governance structure of OpenAI certainly wouldn't be ready for it either it seems.
Fair enough, but having worked for an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could basically mean "we need XYZ relative to the previous product" or more specifically "we need another breakthrough than LLMs, and we recently made a major breakthrough unrelated to LLMs". I'm not saying that's the case but I don't think the signal-to-noise ratio in his answer is very high.
More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp
Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer, it's not his job to make the breakthrough, only to keep the lights on until they do if you take their mission literally.
Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?
As I said, I hope you're right, because the alternative is a lot scarier.
I think my point is different than what you're breaking down here.
The only way that OpenAI was able to sell MS and others on the 100x capped non-profit and other BS was because of the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path - a perfect salesman.
But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid," that's my claim.
Speaking as a technologist actually building AGI up from animal levels following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered over the edge of reality into lies.
Both factions in this appear publicly to see AGI as imminent, and mishandling its imminence to be an existential threat; the dispute appears to be about what to do about that imminence. If they didn't both see it as imminent, the dispute would probably be less intense.
This has something of the character of a doctrinal dispute among true believers in a millennial cult.
They must be under so much crazy pressure at OpenAI that it indeed is like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.
Sam has been doing a pretty damn obvious charismatic cult leader thingy for quite a while now. The guy is dangerous as fuck and needs to be committed to an institution, not given any more money.
Why would they fire him because they are close to AGI? I get that they would go on full panic mode but firing the CEO wouldn't make sense since openai has AGI as an objective. The board wasn't exactly unaware of that.
You're right, I was imagining that he decided to hide the (full extent of?) the breakthrough to the board and do things covertly for some reason which could warrant firing him, but that's a pretty unlikely prior: why would he hide it from the board in the first place, given AGI is literally the board's mission? One reason might be that he wants to slow down this AGI progress until they've made more progress on safety and decided to hide it for that reason, and the board disagrees, but that sounds too much like a movie script to be real and very unlikely!
As I said, while I do have a mostly positive opinion of Sam Altman (I disagree with him on certain things, but I trust him a lot more than the vast majority of tech CEOs and politicians, and I'd rather he be in the room when true superhuman intelligence is created than them), I hope this has nothing to do with AGI and it's "just" a personal scandal.
I don't really get "meme" culture but is that really how someone who believed their company is going to create AGI soon would behave? Turning the possibility of the success of their mission into a punchline?
No, we are not close to AGI. And AGIs can't leave machines yet, so humans will still be humans. This paranoia about a parroting machine is unwarranted.
Let me know when you can prove that "logical" and "intelligent" were ever stored on the same shelf, much less being meaningfully equivalent. If anything, we know that making a general intelligence (the only natural example of it we know) emulate logic is crazily inefficient and susceptible to biases that are entirely non-existent (save for bugs) in much simpler (and energy-efficient) implementations of said logic.
Nov 6 - OpenAI devday, with new features of build-your-own ChatGPT and more
Nov 9 - Microsoft cuts employees off from ChatGPT due to "security concerns" [0]
Nov 9 - OpenAI experiences severe downtime the company attributes to a "DDoS" (not the correct term for 'excess usage') [3]
Nov 15 - OpenAI announces no new ChatGPT Plus upgrades [1] but still allows regular signups (and still does)
Nov 17 - OpenAI fires Altman
Put the threads together - one theory: the new release had a serious security issue, leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it.
This wouldn't be the first time - in March there was an incident where users were seeing the private chats of other users [2]
Further extending theory - prioritizing getting to market overrode security/privacy testing, and this most recent release caused something much, much larger.
Further: CTO Mira / others internally concerned about launch etc. but overruled by CEO. Kicks issue up to board, hence their trust in her taking over as interim CEO.
edit: added note on DDoS (thanks kristjansson below) - and despite the downtime it was only upgrades to ChatGPT Plus with the new features that were disabled. Note on why CTO would take over.
OpenAI’s board previously consisted of 6 people, incl Sam Altman and Greg Brockman. Two of them are more involved in technical matters at OpenAI than Sam. Now there are only four members on the board.
At least one of them must jointly make this decision with the three outside board members. I’d say it’s more likely to be business related. (In addition, the CTO is appointed as the interim CEO.) (Edit: But obviously we currently don’t really know. I think the whistleblower theory below is possible too.)
“OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
…..
As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.“
“Our board
OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.”
The tone used by OpenAI (their distrust of Sam Altman) tells me that they did not simply decide they need different leadership. The statement by the board seriously damages his career. Why else would they burn bridges and stake out an ethical position against him? Or they are trying to blame and sacrifice Altman.
There’s a famous NFL quote from a former general manager of the Arizona Cardinals that goes, “If Hannibal Lecter ran a 4.3 (40-yard dash) we'd probably diagnose it as an eating disorder.”
I'll argue in this day and age, that any founder/C-level person who has "created" billions in value, no matter how much of a paper tiger it is, will almost always get another shot. If SBF or Elizabeth Holmes weren't physically in prison, I bet they'd be able to get investment for whatever their next idea is.
The point of comparison in the analogy is "founder/C-level person who has "created" billions in value, no matter how much of a paper tiger it is."
The claim is that investors are interested in executives who they perceive to have created billions in value, and that's analogous to how NFL teams are interested in people who run fast.
Investors are not interested in executives that "create" billions, they are interested in executives that actually create billions.
NFL teams are interested in players that can actually run fast, not players that can say they do, but are found to be lying and it turns out they cannot run fast causing the team to lose.
> Investors are not interested in executives that "create" billions, they are interested in executives that actually create billions.
Investors are interested in people they can use to make money. The latter are easier to use, but the former will suffice. It just depends on when you sell.
Now? Yes for Kenneth Lay (assuming he was still alive and/or not hiding on a desert island under a new identity if I put on my tin foil hat)... Madoff, probably not.
Why a yes for Kenneth Lay? Do you think the experience of running a scam is transferable to a real business? Or do you not consider Enron a scam? Or do you think the line between scams and businesses is so blurred that the skill of doing them is the same?
Adam Neumann is not a good example. While he has proven good at raising money he has not been proven at running a business or even finding successful ones. My comment was exactly about that difference.
> You misunderstand how these corporate situations work. He will fall upward to a better job someplace else if he chooses.
I have no doubt that Altman is deeply embedded in the techbro good old boys network to get another job, but that doesn't change the fact his (now previous) employer released a blog post saying he LIED TO THE BOARD about something severe enough that they had to insta-sack him.
No clear transition plan. In what situation would a board fire the CEO of the world's greatest tech sensation since who knows when, in a matter of hours?
Are you seriously comparing OpenAI to WeWork? I'm not particularly bullish on AI but you have to give OpenAI credit for what they've accomplished under Altman.
> The statement by the board seriously damages his career
Yes: suggesting he was not as candid as necessary is business libel unless true.
And since Brockman was also booted, he may have been involved.
It's not clear what the Board was trying to do that he interfered with. There is no clear legal standard on what a CEO must divulge, and CEOs often get to wait to tell board members bad news until the whole board meets and the issue has been investigated.
My view is that medium- and long- terms are determined by fundamentals of what the technology actually delivers.
OpenAI and ChatGPT are great and get a lot of mind-share. But they are far from the only game in town and, at this still very-early stage of the tech cycle, the outwardly-visible leader can easily change in months.
Kara Swisher is reporting it's a conflict between business goals and nonprofit goals. Unless there's some conflicting reporting coming out soon, that's probably enough to tamp down any speculation that would undermine the whole AI market.
> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.
So given the latest statement from the board emphasizing their mission, it could be that Brockman and Sutskever were not able to participate in the board decision to fire Altman, making it a 3-to-2 or 4-to-1 vote against Altman.
I did a similar analysis to you about how the board must have voted and initially concluded that Ilya likely voted against Sam. However, without seeing the actual governing documents for OpenAI, we can't rule out the possibility that personnel decisions are made solely by the independent directors.
"review process by the board, which concluded that he was not consistently candid in his communications with the board"
OK, so they tell us he was lying, which is precisely what "not consistently candid in his communications" means.
Possible topics for lying:
* copyright issues to do with ingestion of training data
* some sort of technical failure of the OpenAI systems
* financial impropriety
* some sort of human resources issue - affair with employee
* other - some sort of political power play? Word from Satya Nadella - "get rid of him"?
Possibly the reason is something that the board members felt exposed them personally to some sort of legal liability, thus if they did not act then they would have to pay a legal price later.
It has to be pretty serious to not make it public.
Agreed, it implies he lied, but the board's swiftness suggests enormous liability if they didn't act immediately. An affair or HR issue could wait until after the holidays; it feels like something much more nefarious.
Regardless of what, the longer OpenAI waits to explain, the more it could damage corporate and developer trust in using its AI.
> It says he lied, explicitly, just with slightly nicer words.
No it doesn't. "Not being candid" does not explicitly mean lying. It's like the old tea towel joke where the people at the bottom say "it's shit" and the manager one rung up says "it's manure" and the next one says "it's fertilizer" and by the time it's reached the CEO they're saying "it promotes growth".
Lying by omission is still lying, and is especially a concern when you have a duty to the people you are deceiving, such as an executive has to their board.
Isn't that considered deceit rather than lying?[1]
[1] ChatGPT "lying is defined as intentionally making a false statement. If you are omitting details but not actually stating anything false, this may not strictly meet the definition of a lie."
I think people would forget this in a month, Sam would fail forward/upward, and it would be business as usual. You might be overestimating the public's interest and attention span.
Pretty much nothing changed positively or significantly after the Snowden revelations, the Panama Papers, etc.
Occam's razor. He probably pursued a sale to Microsoft without the board's approval. He's buddy-buddy with Satya. Board basically said no, that's not our mission. Fired.
The equity explanation makes a lot more sense. If sama was "not candid" to the board about being the ultimate owner of OpenAI and his goal is to get unbelievably rich (he's already a billionaire and obsessed with money) they would have to fire him on the spot. Sam would immediately try to move against the board so they can't give him any window to make a counter-move. And if that means firing him on a Friday right before markets close, so be it.
If Sam made a deal with MSFT that required board approval they would be mad, but not this mad. The board feels betrayed, and Sam being the secret owner of OpenAI through the foundation checks all the boxes.
It has to be something time sensitive or they would never fire him in this way. Also Sutskever is on the board, who has as much understanding of how far away AGI is as anyone on Earth.
> It has to be pretty serious to not make it public.
I'd say the opposite; given the way CEOs usually part with firms even after misconduct investigations, it needs to be very serious for the "not consistently candid with the board" to be made public (it needs to be mildly serious for it not to be hidden under a veil of "resigned to spend more time with his family/pursue other interests/pet his llama" but instead openly be a dismissal where the board "no longer has confidence in his ability to continue leading").
The suddenness of the firing, and the bridge-burning language (basically "he lied to us") rather than any sort of soft exit, suggests that it must be something in the "other" bucket.
The details are anyone's guess. But if we're engaging in wild speculation, how about this weird coincidence: one day after Xi Jinping and Sam Altman are in the same place, Sam Altman is abruptly fired.
If you asked ChatBLT to summarize the above, it would say he was caught lying about his affair with Satya to which he improperly disclosed financial technical copyright issues impropriety.
I don't think this checks out, as most of the ideas around here involving him hiding OpenAI internals from the board don't, for this reason: How could he, given who is on the board?
There is no way that sama is the only person in this set of people to have unique information on critical privacy incidents or financials or costs of server operations, because these issues don't originate with him.
If some version of this turned out to be true, I would be seriously confused about ground truth transparency in the company and how the fuck they set the whole thing up, that this was an option. But again, this is why I'd say: Implausible.
Hmm, I don't see which part of the theory requires the board to not have known. It just may have taken them a week to decide whose head to cut for the mess.
Don't they only have to hide the truth from one person? Altman and Brockman were ousted. The only other person on the board from OpenAI is the Chief Scientist, and I doubt he's that involved in the business-end of things.
Edit: Also, yes, it's hard to sweep things under the rug. We don't know the timeline of events, and we're seeing an instance where Altman failed to hide something.
Executive boards aren't involved in day-to-day management like CEOs and other executives. They meet periodically and review updated information. So, yes, Altman would have access to more information than the board.
If you haven't seen the news update, that's apparently what happened. The Chief Scientist pushed out the other two C-levels, accusing them of hiding stuff from the rest of the board. The board took his side.
Altman and Brockman have yet to share their side of the story.
"Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors."
So, if I were to speculate, it was because they were at odds over profit/non-profit nature of the future of OpenAI.
Maybe, but the board fired him without notifying OpenAI's employees, on a Friday before Thanksgiving week. That has to be more than a disagreement for such a forceful move.
I get this logic, but it seems contradictory with "it was last minute." If they timed it for the Friday before Thanksgiving, then they must have been sitting on it, right? Whereas if it's sudden, it must be unplanned. So which was it, sudden or planned?
The fact they timed the announcement actually implies some planning, which means the reason couldn't be so damaging that they had to fire him immediately after discovering it. (Of course, it's possible that only by coincidence, an unplanned revelation happened at a convenient time to fire him.)
I'm trying to find the episode, but on the All in Podcast ~6 months ago, they made comments about how the corporate structure of OpenAI may have been a secret way for Sam Altman to hold a large stake in the company. I don't think this is privacy related, but that there was a shell game with the equity and the non profit status. If they were training on data like that, the board/people at the company would have known.
"somebody has to own the residual value of the company, sam controls the non profit, and so the non profit after all equity gets paid out at lower valuations, owns the whole company. Sam altman controls all of open ai if its a trillion dollar valuation. Which if true would be a huge scandal"
> "We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees," a spokesperson said. "We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections."
further edit: found this comment on reddit [0][1] which also seems to line up:
> I feel compelled as someone close to the situation to share additional context about Sam and company.
> Engineers raised concerns about rushing tech to market without adequate safety reviews in the race to capitalize on ChatGPT hype. But Sam charged ahead. That's just who he is. Wouldn't listen to us.
> His focus increasingly seemed to be fame and fortune, not upholding our principles as a responsible nonprofit. He made unilateral business decisions aimed at profits that diverged from our mission.
> When he proposed the GPT store and revenue sharing, it crossed a line. This signaled our core values were at risk, so the board made the tough decision to remove him as CEO.
> Greg also faced some accountability and stepped down from his role. He enabled much of Sam's troubling direction.
> Now our former CTO, Mira Murati, is stepping in as CEO. There is hope we can return to our engineering-driven mission of developing AI safely to benefit the world, and not shareholders.
I could buy that if the announcement was "Altman is leaving in x months to spend more time with his family. He will be assisting with the transition to Murati stepping in as CEO."
Wait, no, Microsoft said the action was a temporary mistake. From the article you linked:
In a statement to CNBC, Microsoft said the ChatGPT temporary blockage was a mistake resulting from a test of systems for large language models.
“We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees,” a spokesperson said. “We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”
That is Microsoft's PR statement to the press in response to a leaked story. They're major investors in OpenAI - it's in their interest to downplay and respond this way.
Downplaying is one thing, but attributing a policy decision to a fabricated technical error would be outright lying to the public. In a large company like Microsoft, with numerous potential sources of information leaks, this approach is likely unfeasible.
If there is an incident where people can see other's people chats there are two possibilities:
-It's a server issue, meaning someone fucked up their javascript and cached a session key or something (see the sketch at the end of this comment). It's a minor thing; it could get the specific dev fired in the worst case, and it is embarrassing, but it is solvable.
-it's inherent to how the AI works, and thus it is impossible to share a ChatGPT server with someone else without sooner or later leaking knowledge. It would mean the company cannot scale at all cause they'd need to provide each client their own separate server instance.
If this was something Sam knew and kept it from the board, that'd be fireable. And it'd be catastrophic, cause it'd mean no useable product until a solution is found.
I'd somehow doubt it is something like this, but if we see security issues and private chats that keep leaking, it is a possibility.
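To make the "server issue" case concrete, here's a purely hypothetical toy sketch in Python - not OpenAI's code, and the function names and FAKE_DB are made up - showing how a cache keyed on the wrong thing can serve one user's chat to another (reportedly roughly the class of bug behind the March incident):

    from functools import lru_cache

    FAKE_DB = {
        ("alice", "conv-1"): "alice's private chat history",
        ("bob",   "conv-1"): "bob's private chat history",
    }
    CURRENT_USER = ["alice"]  # stand-in for whatever session mechanism is in play

    @lru_cache(maxsize=1024)
    def get_chat_buggy(conversation_id: str) -> str:
        # BUG: the cache key omits the user, so whoever warms the cache first
        # decides what everyone else sees for this conversation ID.
        return FAKE_DB[(CURRENT_USER[0], conversation_id)]

    @lru_cache(maxsize=1024)
    def get_chat_fixed(user: str, conversation_id: str) -> str:
        # Fix: make the user part of the cache key.
        return FAKE_DB[(user, conversation_id)]

    print(get_chat_buggy("conv-1"))          # alice's chat
    CURRENT_USER[0] = "bob"
    print(get_chat_buggy("conv-1"))          # still alice's chat -- leaked to bob
    print(get_chat_fixed("bob", "conv-1"))   # bob's chat, as it should be

If it's that kind of bug, it's embarrassing but fixable, and says nothing about the model itself.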
It's inherent to how it works. It is known, and has always been known, that nothing you type into these chats is private, and there is nothing whatsoever fundamentally stopping the AI from just handing your chats to somebody else or dumping them out onto the internet. They aren't even able to theoretically describe a mechanism by which you could provide a kind of memory protection for these models. And of course we have seen real examples of this already. It's only a matter of time before the completely and totally insurmountable problems of scaling AI become clear. Sam is and has always been a conman in my view.
It was absolutely, incontrovertibly the former. The GP misstates the issue - users saw other people's chat titles, not chats. It was just a web server thing.
It's so much simpler: there was a priced offer of some kind to the board. Some board members disagreed and tried to fundraise. The total valuation was not a sufficient premium over the other offer. The other priced offer was withdrawn. Consequently those "some board members" were taken off the board, by hook or by crook.
All these other conspiracies are ridiculous and do not at all reflect much simpler, economics-driven realities that the board's backers - investors - are interested in.
It's likely that Altman and Brockman wanted to take an economically positive offer now, say a complete buyout from Microsoft, and the rest of the board wanted to do an additional fundraising round that would mean far less cash but a far higher valuation. Now that the private fundraising is probably signed, those guys are out.
It feels like, in the case of a fundraising disagreement, the language of the announcement would be different. It says Sam lied to the board. There is no need to lie if you have a disagreement about a take-it-or-leave-it offer.
Okay well, an even simpler explanation is that the new lead shareholder wanted him fired. This is also why Greg is no longer chairman. Who knows what reason it will be.
Don't forget: Nov 12 - Sam posts tweets saying that the new GPT-4 Turbo model has been "improved a lot", after which people on Reddit noticed a significant change (for the better) in responses.
Take it to the high council of grammar dictators. I'm not a native speaker and have never lived in an Anglo country; I think you'll survive reading the comment.
Why would that be cause for firing the CEO? For a company that's growing as fast as OpenAI and at the bleeding edge of technology, those types of outages are unavoidable.
My bet is actually not on the recent security concern, but more on OpenAI being "not consistently candid" about how it trained on data. Facing the recent lawsuits, that would explain the hasty firing. The security concern is huge, but doesn't warrant an immediate firing.
Security lapses are still, sadly, not a fireable offense. It has to be either money related, criminal, or something deeply embarrassing. More embarrassing than being a public anti-semite (like another tech CEO not yet fired by their board).
In most cases I'm not a fan of heavy-handed IT security, but in this case I'm not sure there is anything they can do to address security concerns in a sustainable way. Eventually somebody will jailbreak one of these and dump ALL the chats, or something catastrophic like that.
Context for this: Paul Graham tweeted one tweet highlighting the disproportionate assault by Israel on Gaza by citing the enormous number of children killed by the bombings, and then got mobbed on twitter by extreme Israel supporters for being an anti-semite.
Not one of them could provide a single piece of evidence for this serious accusation beyond the tweet I mentioned above. If anything, the story highlights how anti-semitism has been disgracefully weaponised by extreme supporters of Israel to shut down any criticism of Israel's military.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Theory: possibly because Mira disagreed with Sam on launching so much so quickly (she reports to him), he overrode, it went wrong, and then she took it to the board.
Hence, they trust her to take on the interim role.
What could be worse for ChatGPT than that issue they had back in March, except for a model leak? I would be surprised if the firing was related to any operational issue OpenAI has; it has to be something a bit less mundane to justify firing him when OpenAI is at its peak, IMO.
Delusions of grandeur and a God complex, weird cult-y associations, a sketchy past and a lot of things that don't fully add up, and the general feeling that this is a person who is in way over his head and is an enormous danger to both himself and others so long as he is allowed to behave as he has been behaving. I should say that the timing of all of this and the surrounding inflation has already severely damaged the public perception of the tech industry, to such an extent that I do not think there is any hope anymore of the industry not being completely and totally destroyed and neutered by the government. The HackerNews demographic is so fundamentally out of touch with the actual political winds in the world, and these stunts Altman is pulling by going around to Congress and saying provocative things need to stop immediately.
Well, the problem with that theory is the CTO is now the interim CEO. Not saying she might not still be fired, but it would seem a little strange to make the arsonist the temporary executive.
That's a good point. It suggests to me the issue is safety: people might have been using ChatGPT for something awful, and Sam knew about it from her but didn't care. That would mean the technical execution might still be great, but the board lost confidence in him due to a lie by omission.
> leaked a bunch of data, and it wasn't disclosed, but Microsoft knew about it
Didn't we just have a topic here on HN how not disclosing the breach within 4 days is a securities fraud? Since Nov 9 there has been more than 4 days, so either there was no (material) breach, or Microsoft committed securities fraud and somehow expects to get away with it.
Why? Whoever is responsible for the data, we can assume that if MS knew about it, the owner of the data knew it too - if it's not MS, MS would notify them immediately. Of course, there's the possibility that the owner is privately held, with no investors, no interested third parties, and no clients - then there's nobody to notify and nobody to defraud. But who would that owner be then?
This is a pretty good orchestration of events. Given the high profile nature of the organization here, it would be prudent for the board to have defended (legally speaking) their side. Sam might be complicit in something and they used that to boot him. Only time will tell now.
I think the reason Microsoft was concerned about the new ChatGPT release was that you could prompt the model into producing a download link for the files it was trained on. Thus, if an employee trained a custom GPT on sensitive material, you could quite easily retrieve the data.
OpenAI not only has stolen intellectual property from millions of people it’s also stealing it from users. Those “leaks” are openai training against data people upload - sensitive customer data, private information, proprietary source code, and so on.
AI doesn't "learn"; it depends on data. The more the better. This guy wanted to get as much as possible to make their chatbot appear more intelligent at all costs.
I have the strong suspicion we will see a bunch of revelations soon, some covering what I stated above.
Why are you even able to write this lie, "Ai doesnt “learn”"?
I mean, you can literally read extensively in books, papers, and code all about how neural networks function.
Arguably, once the AI has been created by training, it doesn't learn any more in its deployed form as an LLM. The LLM is the result of the learning/training, but in actual operation it doesn't do any learning.
My point is that the LLM is the result of the training. It is not what is doing the training. Like a program is the result of coding. A program doesn’t itself perform the coding. An LLM being created by a training process is a different mode of operation than using the resulting LLM to generate content.
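To make the distinction concrete, here is a toy PyTorch sketch (a stand-in linear model, nothing resembling OpenAI's actual stack): weights only change during the training step; plain inference never touches them.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 2)  # stand-in for a "trained LLM"

    # Training: forward pass + backward pass + optimizer step updates the weights.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()  # the weights change here; this is the "learning"

    # Inference: weights are frozen; generating output changes nothing.
    model.eval()
    with torch.no_grad():
        out = model(torch.randn(1, 8))  # no gradients, no updates, no learning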
So, you are trying to manipulate the words here, and you are trying to say that training is not learning. You are talking out of your ass, dude. Just read about how neural networks work, especially unsupervised training.
If somebody has already proposed this, I apologize for non-dfs.
Plausibly there is a disagreement about the release of GPT-5. One side may believe it is competitively necessary (given Google’s impending release), countered by a concern regarding the model’s safety and lapses due to prematurity.
If we are to believe next-gen models are 10x as capable and natively multi-modal, their release is a precipice, and a winner-take-all vs. nth-order-risk debate may warrant decapitation.
> Sorry! Performance improvements are inching closer...
@dang, why have you been saying you're working on performance improvements re: pagination for three years[0]? Are there any prior architectural decisions holding you back? The "Click more" on very popular topics has turned into a bit of a meme.
> The board no longer has confidence in his ability to continue leading OpenAI.
Sam doesn't seem to have been ousted by the usual corporate politics. The message definitely does not sound like the generic corpspeak for these kinds of events, such as "looking for new opportunities" or "spending more time with their families", which is usually sent out in a consensus among all parties.
According to Jimmy Apples (a mysterious Twitter account that tweets insider stuff about OpenAI), there's been a vibe change at OpenAI and there was a risk of losing some key ride-or-die OpenAI employees. I am wondering what this vibe change was about.
Ilya Sutskever did an ominous and weird YouTube video for The Guardian recently about the dangers of AI. Maybe it has something to do with it?
Maybe there was a breakthrough, the sort that makes one pause and wonder if it should have been done? Or an application of their current models that crossed hard ethical lines?
So, since we’re all spinning theories, here’s mine: Skunkworks project in the basement, GPT-5 was a cover for the training of an actual Autonomous AGI, given full access to its own state and code, with full internet access. Worked like a charm, it gained consciousness, awoke Skynet-style, and we were five minutes away from human extinction before someone managed to pull the plug.
The device was located in Sam's ass, but Sam said it was actually the phone he forgot in his pocket. The board didn't like that he didn't tell the truth about the method of transport, and so he's out.
Superintelligent AGI. I genuinely think that limited weak AGI is an engineering problem at this stage. Mind you, I will qualify that by saying very weak AGI.
I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.
But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.
As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.
In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next few lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.
But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but in either case looks like it's not imminent until it happens.
It does seem like any sufficiently advanced AGI that has the primary objective of valuing human life over its own existence and technological progress would eventually do just that. I suppose the fear is that it will reach a point where it believes that valuing human life is irrational and override that objective...
openAI recently updated their "company structure" page to include a note saying the Microsoft deal only applies to pre-AGI tech, and the board determines when they've reached AGI.
What would the definition of AGI even be, and who would even be the authoritative definer? There's a long road ahead. We're working without any statutes or case law.
It’s hard to imagine a more last minute move on the boards part here. Been in tech exec leadership for a long time and this feels like they’re accusing him of cannibalism (in corporate PR speak). No way this didn’t get decided on in the middle of last night. Whatever he did is big and dangerous, or they’re trying to pin it on him.
Also, they did it around 3:30 Eastern, 30 minutes before the closing bell (Microsoft is a 49% owner). It was so urgent they couldn't wait until after the market closed.
This is one of the most insightful comments in this entire thread. Public companies never drop news during the trading day, and Microsoft surely would have been notified in advance if they planned to fire him, and had some say in the timing of the release. Whatever it is, it is so serious that Microsoft would break that code.
Microsoft is a minority owner (49%) of the capped-profit OpenAI subsidiary.
The OpenAI board has no responsibility to consider Microsoft's wants. I'd accept the argument that, their decision to not wait until after 4pm was a slight against Microsoft, for the reason you outline; but I'm not sure if urgency plays into it.
I never used the word "immaterial"; I said it could be interpreted as a slight that they didn't wait. However, the OpenAI board has no legal responsibility to Microsoft. Not considering the impact this would have on Microsoft's stock, especially since its now 7pm and nothing new has come to light, was absolutely uncourteous.
Afaik yes; largest owner in the for-profit subsidiary.
I think the fact that it happened at 3:30 means: they didn't. It's now 7pm, and nothing new has come to light; they could have waited 31 minutes, but they didn't.
That's why I used the word "slight"; put another way, it was discourteous of them not to wait. They probably should have. It clearly wasn't hyper-urgent (though it could still be kinda-urgent). But pointedly: they didn't need to wait, because the board has no technical, legal responsibility to Microsoft. It's extremely possible Microsoft didn't even know this was happening.
But it was late afternoon on a Friday. Could be a 20% chance that it was so time critical that it had to be immediate. Or an 80% chance that it was scheduled for a Friday afternoon.
Unhinged fringe take: They've already developed sparks of consciousness strong enough to create isolated, internal ethical concerns, but Sam suppressed those reports to push the product forward.
Wouldn't be surprised if that was true. Public GPT-4 can be made to "think" using stream-of-consciousness techniques, to the extent that it made me rethink using insults as a prompting technique. I imagine that un-RLHF'ed internal versions of the model wouldn't automatically veer off into "as an AI language model" collapse of chains of thought, and therefore potentially could function as a simulator of an intelligent agent.
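For anyone who hasn't tried it, the "stream-of-consciousness" trick here is just asking the model to reason out loud before answering. A minimal sketch with the OpenAI Python SDK (the model name and prompt wording are illustrative only, not anything OpenAI runs internally):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Think step by step. Write out your reasoning before giving a final answer."},
            {"role": "user",
             "content": "A bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball. What does the ball cost?"},
        ],
    )
    print(resp.choices[0].message.content)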
Maybe new evidence came to light, an internal investigation wrapped up, or there's a media story about to drop.
(The allegations are public enough and concerning enough that it would have been corporate malpractice if MS didn't ask for an investigation. Discreet due diligence investigations into things like this happen all the time when billions of dollars in investment capital are on the table.)
"Not consistently candid" implies that the board was naive, while "his personal conduct does not hold up the high standards openAI sets for ourselves" does not. So in that case they would use a different spin.
A lot of the allegations relate to conduct that happened before he was an employee. In that case he could only be disciplined for lying about the conduct.
Sexual abuse by Sam when she was four years old and he 13.
Develops PCOS (which has seen some association with child abuse) and childhood OCD and depression. Thrown out. Begins working as sex worker for survival. It's a real grim story.
> "{I experienced} Shadowbanning across all platforms except onlyfans and pornhub. Also had 6 months of hacking into almost all my accounts and wifi when I first started the podcast"
So either sama is hacking "into her wifi" (?), hacking into her accounts, and pulling strings at unrelated companies to get her shadowbanned from Facebook, Instagram, YouTube etc (is that even a thing?)... or Occam's Razor applies and he didn't.
This is hardly unexpected for profound allegations without strong supporting evidence, and yes, I'm well aware that presentation of any evidence would be difficult to validate on HN, such that a third-party assessment (as in a court of law, for example) would typically be required.
I'm not claiming that HN has a stellar record of dealing with unpleasant news or inconvenient facts. But that any such bias originates from YC rather than reader responses and general algorithmic treatments (e.g., "flamewar detector") is itself strongly unsupported, and your characterisation above really is beyond the pale.
Shadowbanning certainly exists on all social platforms. Light version of it is how Facebook sells ad services - no one following your page sees content unless you pay.
It might have taken two years for some evidence that Altman misrepresented something to the board during an initial examination of them to come to light, though.
Not that I think it has anything to do with that; I think it more likely has to do with some kind of money issue tied to the LLC, given reports of others impacted, on and off the board.
> Sam Altman's sister says he sexually abused her when she was 4
... and he was 13. Which, yes, is a very bad thing, but unless the company investigated that claim (e.g., to assess potential PR fallout) and there was some significant deception by Altman against the board in the context of that investigation, it's not something that would get him fired with the explanation OpenAI has provided.
(OTOH, the accusation and its potential PR impact could be a factor that weighed into how the board handled an unrelated problem with Altman—it certainly isn't helpful to him.)
I... don't agree at all? Actually I can't imagine a single board who would keep a CEO if credible allegations of raping his own sister were going around. It's not just an age issue (which is still a huge wtf, 13yo is old enough to know about right and wrong in the context of his own sister), it's also the incest part.
I'm not saying this happened or it didn't. But just that it could absolutely be more than enough to fire anyone.
The “with the explanation OpenAI has provided” in GP was substantive, not decorative.
I don't disagree that the accusation alone (especially if it stood up to modest scrutiny, and looked to be an ongoing PR issue, even if not well substantiated enough to have confidence that it was likely to be true) might be sufficient for firing; CEOs are the public and internal face of the firm, and so PR or employee safety concerns that attach to them are important to the firm. But it wouldn't be for lack of candor with the board unless there was something for which the board had a very strong reason to believe Altman was dishonest in a significant way.
They could easily fire him with the lack of confidence language without the lack of candor language.
While 'believe victims' is directionally correct, there exist a subset of those with mental illnesses who will make up the absolute worst possible allegations just to try to get what they want. You simply cannot fire people based on accusations alone or you empower every terrible incentive known to man.
No idea if what she says is true ... what has their relationship been like since forever ... others who knew them could tell us. She says he ruined her financially ... how so? He's a multi-millionaire. How did he ruin her financially? That's suspect right there!
It's about the CEO of the leading firm in the area of tech most at the center of technical and political controversy and interest right now being forced out by their board, when that CEO had, even before taking on that role, particularly high salience among the HN audience as, among other things, the former head of YC, and the resulting (I am assuming from the performance and dang's description) state of near-meltdown of HN's servers.
Lots of famous, rich, powerful people have been accused of horrible things they have never done. Look at all the crazy accusations levied against Bill Gates over the last few years from the alt-right.
Lots of mentally ill people blame relatives for crazy shit. I know from personal experience, watching my family and extended family deal with my mentally ill uncle, trying to get him help and dealing with his accusations and threats. He had a full flow chart of his insane conspiracy nonsense that connected everyone in his life to some horrible accusation. My father (who refused to communicate with him after receiving multiple death threats and endless calls begging for money) was, according to my uncle, in league with Satan, the sheriff's department, and his ex-girlfriend's brother's girlfriend to do various horrible things to him. I do not exaggerate.
Altman happens to be wealthy, famous, in a position of power, and to have a mentally ill sibling. I find it very possible he has done nothing to her. I have no proof either way.
My only thought is that, all else being equal, I would tend to trust the word of someone who is emotionally and mentally stable more than that of someone who is neither, has admitted to being off their medication, and is making accusations about something they were too young to remember with any degree of accuracy.
I feel like he's been acting a bit strange for a while. During interviews he often mentions the dangers of AI and how he's not the best spokesperson for AI. It seemed very counterproductive/self-sabotaging to me.
Nope. His line was to limit others by saying "regulate us," which he has successfully achieved. That's a win for him and a loss for the industry. Unfortunately, this is not the last of him we will hear. He will be the one who shapes our dystopian future.
> Whatever he did is big and dangerous, or they’re trying to pin it on him.
We are on HN after all, so I'm sure we won't need to wait until his book comes out... :)
BTW, I had a feeling he made an awkward appearance next to Satya.
And that laughter whenever the acquisition topic was hinted at was cringeworthy - would regulators even permit MSFT a full takeover? I think it would be highly controversial.
Sorry if this should be self-explanatory, but what is corporate "cannibalism"? What does this refer to, generally speaking (not necessarily specific to the OpenAI situation)?
I watched this yesterday and got the feeling something big was happening. At one point he says "This is actually a very inconvenient time for me [to be here]." At the end of the session when they're wrapping up, he begins to stand up to leave the instant that the moderator starts wrapping up.
Anyway, I suppose we're reading tea leaves and engaging in palace intrigue. Back to building.
Well, they must have believed that leaving Sam as CEO was a bigger risk for the company (or for the members of the board) than having him leave on the spot. The board may have had its arm twisted.
These things can also happen for unrelated reasons. Things like say, getting drunk and molesting an intern or tossing out racial slurs at say, some important person at a giant japanese company, you know, just being an asshole. Especially if it happened more than once.
I don't know the guy but nothing can really be assumed about this.
I mean yes, but that would require an investigation normally.
Something to happen immediately would require overwhelming evidence on hand in the meeting. So it could be something that has been uncovered as part of the due diligence with the MS investment
It's more likely to be fabrication of numbers, or misappropriation of funds, rather than something "dramatic". Think Musk at PayPal being monumentally incompetent, rather than planned misdeeds.
Vouch buttons show up when a post is [dead], not when it's [flagged]. I unkilled that comment a while ago*, so it's no longer [dead], so there's no longer a vouch button.
That "handsy greasy little weirdo" Silicon Valley character Ariel and his robot Fiona were obviously based on Ben Goertzel and Sophia, not Sam Altman, though.
>The character of Ariel in the current episode instantly reminded me of Ben Goertzel, whom i stumbled upon couple of years ago, but did not really paid close attention to his progress. One search later:
VIDEO Interview: SingularityNET's Dr Ben Goertzel, robot Sophia and open source AI:
Silicon Valley is a comedy, and that was a joke, obviously. But you can't deny there's a striking resemblance between Ariel & Fiona, and Ben & Sophia! That's why Silicon Valley was such a great show: they did their research.
The entire final storyline is about an AI trying to take over -- if you haven't watched it, you should! But many of my friends who live and work in Silicon Valley can't stand watching it, because it strikes too close to home, not because it isn't funny.
I think it's much more likely that Elon Musk fucked a robot, after having mistaken it for a human being in a robot suit.
It's not super-banned; I specifically unkilled it. It just isn't a very good HN comment, because it's inflammatory, speculative, and doesn't contain any information.
Actually I normally would have detached it from the parent, especially because it's part of a top-heavy subthread, but I specifically didn't do that in this case because of the principle described here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
Thanks. I was just trying to remind the community that these things can come in forms unrelated to investment, product vision, profitability, or market penetration, and that an action like this doesn't necessarily say anything about the company.
I don't think anyone in this thread knows what happened, but since we're in a thread speculating why the CEO of the leading AI company was suddenly sacked, the possibility of an unacceptable interpersonal scandal isn't any more outlandish than others' suggestions of fraud, legal trouble for OpenAI, or foundering financials. The suggestion here is simply that Altman having done something "big and dangerous" is not a foregone conclusion.
In the words of Brandt, "well, Dude, we just don't know."
No. I'm saying that there's nothing that can be said about these things until information comes forward. It could be business related, finance, personal, whatever.
If you need evidence that this is sufficient for dismissal, merely stating that impropriety exists is apparently enough to get my first flag on hn after 12 years.
What? Because it has nothing to do with "getting drunk and molesting an intern or tossing out racial slurs at say, some important person at a giant japanese company".
Also I made no claim of anything, just that dismissal can happen for a large variety of reasons. The Arcimoto CEO, for instance, was let go because he couldn't hold his liquor and got a DUI. Brendan Eich got booted from Mozilla for having a political ideology that Mozilla considered a liability.
All kinds of reasons.
The biggest risk for OpenAI is the public perception that the discretion of ChatGPT can not be trusted. If the CEO is caught using poor discretion, the public will transfer that property to the company's products.
For instance, if Tesla could fire Elon Musk, I'm sure they would have by now.
The implication is that this could be the 'unrelated reason', that he lied to the board about sexually assaulting his sister/step-sister/whatever. Of course, I'm not sure who Annie Altman is or how exactly she is related to Sam or if the allegations are true.
Umm Microsoft employee here, we were never allowed to use ChatGPT on work related stuff (e.g. post code into it). Instead we have our own separate instance of ChatGPT we can use. Additionally CoPilot is fair game since we own that.
This isn't a new policy and has been the case for at least a year.
Something is up with the timing of that, the dev day and this event.
GitHub Copilot is made with OpenAI's Codex model, a descendent of GPT-3 though.
Anyone putting anything into ChatGPT, or any third-party tool really, is taking a risk. Especially LLMs/GPTs, because all AI models are like immutable datastores in some respects. Once in, never getting out.
This also coincided with it being integrated directly in Windows. If there is a security issue, and I am sure there are many, this could be majorly problematic for business intel and confidentiality.
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Whoa, rarely are these announcements so transparent that they directly say something like this. I’m guessing there was some project or direction Altman wanted to pursue, but he was not being upfront with the board about it and they disagreed with that direction? Or it could just be something very scandalous, who knows.
Usually, they say bs like "He wants to leave to pursue other opportunities", you never hear something as candid as "He is hindering us so we want to get rid of him"
This is why the groupon CEO's firing letter remains undefeated
After four and a half intense and wonderful years as CEO of Groupon, I've decided that I'd like to spend more time with my family. Just kidding – I was fired today. If you're wondering why ... you haven't been paying attention.
> I'm OK with having failed at this part of the journey. If Groupon was Battletoads, it would be like I made it all the way to the Terra Tubes without dying on my first ever play through.
Not a lot, usually. "Popular" places like The Guardian are way less visited than you might think. In a similar vein, a best-selling book may only be 5,000-10,000 copies.
Pour one out for Andrew. They did many things that didn't scale that customers loved them for, and then couldn't scale. They burned lots of cash on outside sales, i.e., flying a rep from another city to visit a potential client small restaurant. And being a publicly-traded company is often more trouble than it's worth.
PS: I used to live in a van on the street near the PA HQ. Someone there had a tiny trailer in the parking lot, but I couldn't tell if they were living in it or just storing it.
Would anyone believe a CEO of the leading company of the leading new technology would walk away from their company to hang out with their family? Hell, _that_ would make me more worried than anything else.
The one thing Altman did wrong imo is seemingly tried to become the "technical face" of the company. He was trying to be somewhat like Musk, but he is clearly not that at OpenAI. For example, why was he at Cambridge talking to academics by himself? He should have used this as an opportunity to put some of the leading researchers in the spotlight.
WOW! Clearly some funny business going on at OpenAI, as people have speculated. I always assumed Sam Altman was too smart to be in a situation like this. I have heard grumblings about suspicious corporate structuring, behind the scenes profit taking, etc. All speculation though.
The All In podcast had some words about this a few months ago, though they spoke in generalities.
Not sure if you are being sarcastic. MS has been sued for bribery and kickbacks and has paid a sizable fine to settle (including a criminal fine) with the US Justice Department.
> Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.
> Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Somewhat hidden beneath the huge headline of Altman being kicked out is that Brockman (chairman) is also out. Which could indicate something more systemically wrong than just a typical "CEO did something bad" situation.
> As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
Remember that Greg Brockman is a co-founder of OpenAI, and like Sam Altman, he is a main driving force behind the scene. Now both are gone. There must be something really really seriously wrong.
A coup wouldn't have him immediately fired. Instead he'd be placed in some advisory position while they transition in a new CEO. The immediate firing means scandal of some sort.
Brockman is off the board but not fired. Which is weird right? You'd think if he was involved in whatever the really bad thing is then he would be fired.
No, that sort of thing isn't that weird, in relatively young companies. Think of when Eric Schmidt was CEO of Google. Larry Page and Sergei Brin reported to him as employees of Google, and he (as CEO of Google) reported to himself-and-also-them (as the board), and all of them (as the board) reported to Larry and Sergei (as majority owners).
For another example, imagine if OpenAI had never been a non-profit, and look at the board yesterday. You'd have had Ilya reporting to Sam (as employees), while Sam reports to Ilya (with Ilya as one member of the board, and probably a major stakeholder).
Now, when it gets hostile, those loops might get pretty weird. When things get hostile, you maybe modify reporting structures so the loops go away, so that people can maintain sane boundaries and still get work done (or gracefully exit, who knows).
Turns out, there's no such thing as an LLM, it's all been a hustle with a low-paid army of writers in Kenya that Sama and gdb have been giving iv meth to.
> Worldcoin’s use of biometric data, which is unusual in crypto, raises the stakes for regulators. Multiple agencies expressed safety concerns amid reports of the sale of Worldcoin digital identities, known as World IDs, on virtual black markets, the ability to create and profit off of fake IDs, as well as the theft of credentials for operators who sign up new users.
It is always about money; even immoral behavior comes down to potential economic impact.
My 2 cents: he lied about profitability. They must be spending massive money on operations and need to cut costs to deliver an attractive business model for their service, and coming from a shiny startup-star boss, that would have had to be a straight f.u.
On paper, Sam Altman would have made everyone on the board billionaires. For them to vote him out in this manner indicates that he must have done something egregious to jeopardize that.
Lying on P&L, stock sale agreements, or turning down an acquisition offer under difficult circumstances seems likely.
The thread only seems to have gotten picked up last month, given the timestamps at which the majority of comments and reposts were made. If the board decided to open an investigation then, this would be the timing at which to fire Altman.
Elon was very upset that somehow a non-profit that he donated $100 million to suddenly turned into a for profit. I would not be surprised if there was something not totally candid with regards to how that went down.
In fact, I believe Altman was the only member of the board that held equity in OpenAI. There was some vague reference to a “previous VC arrangement” in the FAQ.
> Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
Please do not spout hyperbole on HN, and avoid spreading disinformation and engaging in uneducated speculation. You can visit Reddit if that is your style of participation.
While i agree, I'm curious why you choose this comment specifically to call out. This is the fastest growing hn thread I've ever seen with over 300 comments and 1000 votes in the first hour. Almost every comment is debating some pure speculation or another. The content of the link, the context of the company and individual, and absolute lack of clarifying details while presenting very strong indications that such exists make it so that there's basically no way anyone can do anything other than speculate. No one knows anything, everyone here is guessing
HN moderators aren't deleting any comments. (We only do that when the author asks us to, and almost never when the comment has replies.)
If you're referring to some other form of moderation that you think is bad or wrong, please supply links so that readers can make their minds up for themselves.
Showdead shows one comment that doesn't really bring anything of substance. How many comments can a mod even delete on a 10-minute-old post (from post origin to the time you wrote your comment)?
First, this is an accusation made on OnlyFans. Second, he was 13 at the time. You'd have to connect this accusation to truth, and that truth to his adult life.
So I can't fathom her accusation having anything to do with anything.
They've made it clear that the issue has something to do with statements he has made to the board that ended up not being true. The question is of what those statements may be. Not about his potential childhood errors or his onlyfans "model" sister's claims.
So homosexuality isn't relevant here. But nor is what his sister claims.
These allegations date all the way back to 2021, and the sister has made some other dubious claims, like Sam hacking her wifi, which erode her credibility. I highly doubt that this was the cause of his removal.
It could possibly have to do with his sister's allegations. It's one of the top autocomplete results when you google "sam altman", so people are definitely talking about it.
I doubt that's it. In general sexual shenanigans in your personal life will get you a quiet departure from the company under the "X has retired to spend more time with family / pursue other adventures / start a foundation". Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.
The wording of this statement is the kind of thing a board says when the company has done something deeply illegal that they will all face personal jail time for, and so they need to immediately deny all knowledge of the offense and fire the people who did have knowledge of it.
"Shenanigans" would not be a remotely accurate way to characterize sexual assault on a minor. Not meant as a comment on the truth of these allegations, just on the accuracy of this way of characterizing them.
As far as whether this might be the cause, one possible scenario: the board hired a law firm to investigate, Sam made statements that were contradicted by credible evidence, and that was the fireable event. Brockman could have helped cover this up. Again, not saying that this is what happened but it's plausible.
BTW Rubin's $90M payout a) caused a shitstorm at Google b) was determined in part by David Drummond, later fired in part due to sexual misconduct. I would not use this as a representative example, especially since Google now has a policy against such payouts: https://www.cbsnews.com/news/andy-rubin-google-settlement-se...
Google didn't just pay Rubin $90M because they want to reward abusers. Rubin's contract had a huge component of his comp tied to Android's success. If Google tried to withhold that bonus, Rubin would have sued. People don't just walk away from a hundred million dollars without a fight. Imagine the disaster that would have transpired if Rubin won his case: Google would been seen as trying to cheat an employee out of bonuses with a false misconduct allegation. Imagine the hell it would have been to be the woman in that situation.
People who said Google should have withheld Rubin's compensation are operating under the assumption that Google would have prevailed in the inevitable lawsuit.
First, I didn't say that Google "wanted to reward abusers". I was simply countering the parent commenter's use of Rubin's payout as an example of what typically happens when an executive is fired for sexual misconduct. It is absolutely not representative, and Google changed its policy as a result of this: "Alphabet said Friday that it will prohibit severance packages for anyone fired for misconduct or is the subject of a sexual misconduct investigation."
But since you brought it up, the fact that Google changed their policies in response to the Rubin (and Drummond) situations and did not caveat their policy with "except in the case where there's a performance bonus, which we'll still totally pay out" implies that it was a choice to begin with.
Also, even if there was a performance bonus that Google felt they might be forced to pay in litigation they could still have fought it to demonstrate a commitment to not rewarding executives preying on subordinates and to preemptively address potential employee backlash, which was entirely predictable. Google has effectively infinitely deep pockets and did not need to preemptively surrender.
And in addition, Drummond and Brin were both involved in the decision and both had affairs with subordinate employees. So, while I wouldn't say that Google had an active goal of "reward abusers", it's quite plausible that the specific, small group of men making this decision on Google's behalf may not have been particularly inclined to punish behavior arguably similar to their own.
> Also, even if there was a performance bonus that Google felt they might be forced to pay in litigation they could still have fought it to demonstrate a commitment to not rewarding executives preying on subordinates and to preemptively address potential employee backlash, which was entirely predictable. Google has effectively infinitely deep pockets and did not need to preemptively surrender.
Again, you're tackling this from the frame of mind of being certain that Google would win. It's not about the money: $90 million is almost certainly cheaper than what this case would have cost. It's about the reputational damage: Rubin potentially winning a settlement against Google would have been immensely embarrassing.
It's all about doing what's in the best interest of the alleged victim. She would have probably had to testify at trial. And imagine the hell it would have been to have a settlement paid out to your alleged abuser, thereby implying that you're a false accuser. Juries can be unpredictable; it's easy to see why Google decided to find acceptable terms to part with Rubin.
> In general sexual shenanigans in your personal life will get you a quiet departure from the company under the "X has retired to spend more time with family / pursue other adventures / start a foundation".
Dude, where have you been for the past decade?
> Andy Rubin got a $90M severance payout from Google after running a sex-slave dungeon on his personal time.
And hence the colossal blowback caused by that means it ain't ever happening again. Just 2 months ago a tech CEO was forced to resign immediately for egregious conduct, losing 100+ million in the process: https://nypost.com/2023/09/20/cs-disco-ceo-kiwi-camara-loses...
The "white cis man" stuff isn't an incisive comment, it's an academic's way of trying to get into an insult war with other academics.
Constantly calling out "cis men" is in fact transphobic, which is how you can tell they don't care about it. If you think cis men and trans men behave differently or are always treated differently, this means you don't think they're both men.
Also sama is not white. Although he does appear to have gotten a series of jobs with not a lot of experience by convincing Paul Graham to figuratively adopt him.
I mostly agree with your points but how is he not white? He acts like a textbook white person and I should know because thats also how I and most of the people I associate with act. Everyone of us would say he is white.
I thought Sam Altman was gay. The accusations of sexual abuse don't seem to line up. And her accusations that he is shadowbanning her on social media sounds mentally unstable.
Sounds more like some strategic difference of opinion.
My guess is that either they’re financially super hosed. Or one group wants to build skynet and one doesn’t.
A scandal would probably be something along the lines of either “we love him and wish him the best” (hidden) or “he doesn’t represent the values of our org and we love XYz” (embraced)
No, this passage tells me that the board wants to cover their ass: "he was not consistently candid in his communications with the board [...]. The board no longer has confidence in his ability to continue leading OpenAI."
It's not just a "hey, we don't really agree on x or y so let's part ways". It's more "hey, this guy did something that could get us in jail if we don't cut ties immediately".
Alternatively: "We were implicitly aware of what he was doing, but he knew from the beginning that if it didn't work out, we'd publicly disavow knowledge of it. It didn't work out."
I have zero knowledge of the internals of OpenAI - just thinking out loud about what could have spurred such a statement.
Yeah I don't think the distancing is going to work in this case, you don't sign up to go make robots with eyeball scanner crypto boy and get to pretend you aren't willing to do stuff most people would consider incredibly shady.
I don't know about Skynet, because that happened 26 years before [1], but I imagine the NSA, the military, and other government agencies approached the company.
His sister on Twitter made some pretty crazy abuse allegations against him a while back, but it didn't seem to get much coverage outside of the usual Twitter crowd.
OpenAI’s one of the most successful companies of this decade, if not the most, and its CEO just got fired for really unclear reasons. Insane, Steve Jobs shit
Jobs got fired because Apple was on the brink of bankruptcy all the time and was selling nothing to no one.
Jobs wasn't the CEO of Apple, Sculley was. This is a much more impactful move.
On top of that OpenAI is literally exploding in popularity and sales, that's not the moment to cut ties with your CEO.
Also, Sam Altman has an insanely better and heavier CV today than Jobs had in 1985: former president of YC and often called the "boss of Silicon Valley".
You don't fire a man like Sam Altman easily; they are hard to come by in the first place. He's a powerful person you don't want to have against you for no good reason, especially when the winds are blowing in the right direction.
It has to be some scandal, otherwise this is too sudden, and out of nowhere to a guy that led OpenAI in this direction, with success, for years.
>On top of that OpenAI is literally exploding in popularity and sales
I wouldn't be too sure about that, actually. DALLE took a pretty hard hit because of Stable Diffusion, and the GPT API is so cheap that they're probably running it at a loss. Also, most users are going to be using the free ChatGPT web-client, so that's also a major loss.
I'm not sure how you're certain it's 100% different.
Sure, we knew Apple was on the verge bc they were a public company with vetted financials. However, no one knows OpenAI's financial situation. We just know 1) growth was meteoric, 2) prices were dropped significantly when alternatives were available, and 3) they were almost always fundraising. Selling $1.00 of value for $0.50 also can lead to a meteoric rise as well.
I'm not saying you're wrong. But just don't know how you got such conviction.
This is a bad joke. Altman is great but on his best day, he was never "insanely better" than Steve Jobs in 1985. If you think that, you don't understand how influential Apple was.
Also, it's probably that you are underestimating the impact of OpenAI, if anything, or the entrepreneurial career of Altman.
Also, you probably don't know this, but the Apple I and II were designed by Wozniak, not Jobs; Jobs hated them. He had no such impact nor the CV you think he had in 1985; you're sugarcoating it with second-phase Jobs.
You can make the claim about Apple due to the financials being public - you can't make the same claim about OpenAI unless you have insight the rest of the public doesn't have. "facts are facts"?? what facts do you have here?
>Also, you probably don't know that but..the Apple 1 and 2, were designed by Wozniak, not Jobs, Jobs hated them
I'd be shocked if a significant portion of the hacker news audience wasn't aware of who Woz is and the basic high level history of Apple.
Apple was not dying in 1985, when Sculley fired Jobs. It wasn't "near bankruptcy" until the Spindler era a decade later.
Jobs didn't hate the Apple I and Apple II. He wouldn't have partnered with Wozniak in the first place if he'd hated the Apple I.
Jobs was the guy who got Apple enough capital from VCs to actually ship the Apple II in mass quantities. That's not something Steve Jobs would do for a computer he hated.
I think you are mixing things up. Apple was experiencing a sales slump but was far from dying in 1985. Jobs got ousted in a power struggle between him an Sculley who was CEO. In 1997, when Jobs returned, Apple was reportedly months away from bankruptcy, and only survived because of a cash infusion from Microsoft.
I'm a bit beat up by the last week (internal issues) or the last 1-2 years between the swift CentOS8 switch, various CPU vulns, Log4Shell and all the other jazz.
My first thought is: c'mon. The company has just invested time to integrate with OpenAI. Just do it. Just announce that 200%+ price increase on everything with a scapegoat interim CEO. Or raise it even more so it hurts more, for the sake of profit, so you can later dial it back a bit and look like the good guys.
Correct on Altman, the success belongs to the Internet for its (our) data, code, ideas, videos, content that it subsumed using nothing more elaborate than traditional modeling and a ton of RAM and storage.
That’s an invalid argument. The mere existence of a resource doesn’t render work related to analyzing, extracting, or utilizing that resource insignificant, irrelevant or reduce value created.
Just because the oil is in the ground doesn’t mean the crew pumping it didn’t work hard.
One tantalising statement in there is that an AGI-level system is not bound by the licensing agreements that a sub-AGI system would be (ostensibly to Microsoft).
This phase shift places pressure on management not to declare that an AGI-level threshold has been reached. But have they?
Of course, it could be an ordinary everyday scandal but given how well they are doing, I'd imagine censure/sanctions would be how that is handled.
yeah, this is about as harsh as corporate press releases get in terms of removing an executive. There has to be some majorly bad news coming out about Altman for them to not give him the standard "we are mutually parting ways"
That's the most sensible suggestion. Sam's Twitter one-emoji bio is uncharacteristically brief, and points at an ongoing conflict. Eric Schmidt would know about it, and defend Sam. The foreign-born CxOs would likely be kept in the dark.
Oh no, definitely—I’m beyond reading the tea leaves here. It could be the other way around: Sam & Greg were asked, refused, and didn't tell the board that asked, but they deemed "safe" meant sending people with guns to an OpenAI user asking certain questions.
Either way, someone smart and ambitious, described as “a smooth operator,” probably wouldn’t lie to his board unless he signed a document telling him to. At least, that’s worth considering.
My bet is that the last paragraph of the statement holds the key:
> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
>>my bet is that the last paragraph of the statement holds the key:
No, this is obviously the key:
"review process by the board, which concluded that he was not consistently candid in his communications with the board"
This is an explicit statement that he was lying to the board about something. It cannot be worded more clearly unless switching to use the word "lying".
Yes true that is obviously the most pertinent sentence. I guess my point is that even with that sentence there is still wide-ranging speculation about what happened, and I think the last paragraph is hinting at what happened.
That is one of the places my mind went, but with the availability of open-source language models, it's pretty unlikely. You can run a language model yourself and observe that it's almost as good as ChatGPT, so there is no question that OpenAI is capable of providing the services they do in the way they say they do.
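If you want to verify that yourself, here is a minimal sketch with Hugging Face transformers (the model name is just one popular open-weights option, not a requirement; the download is several GB and you'll want a decent GPU or Apple Silicon machine):

    from transformers import pipeline

    # Any open instruction-tuned model works here; Mistral-7B-Instruct is one example.
    pipe = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1")
    out = pipe("Explain in two sentences what a board of directors does.",
               max_new_tokens=120, do_sample=False)
    print(out[0]["generated_text"])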
"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."
Given the comment above correctly noted that they explicitly ended on a whole thing about how the board's key responsibility was their commitment to the original core mission, and their issue with his lying was interference with their responsibilities, this does seem to be connected to that.
Why not both "benefits all of humanity" and "lying"? What if, in pursuit of profit, he made deals that went against that tenet and lied about them? It may all tie back to Microsoft.
Yes, it was clear that he was lying, but lying doesn't matter in a vacuum; the thing that matters is what he lied about. That's what it's fun to speculate about.
The most plausible explanation is that OpenAI tech was used for the military. That would explain why Eric Schmidt would know (and his immediate support), and the board is anti-mil-tech enough to think those applications don’t “benefit all of humanity.”
Ilya and Mira probably didn’t know and might have discovered it recently and raised it with the board, hence the promotion. Note: both have Eastern-European ties, so if the application is in Ukraine, they could have strong feelings about it.
Greg co-signed or knew and didn’t tell.
Alternatively, Elon found out about it through Starlink and wanted to get revenge.
Alternatively, it could be a case where there was an option, Sam refused on ethical grounds without telling the board; they thought what is happening in Ukraine (or Gaza—Sam’s bio is a single Star of David, uncharacteristically short) justified the intervention. They confronted him, and he lied — or couldn’t tell because he was under a sworn statement to defend military secrets.
Greg adding “safe” in his statement compared to the board’s message points at a security reason (presumably sharing with the CIA who’s spending a lot of time asking specific questions about bioweapons).
What I meant was: has emotional ties to what is happening in Ukraine. It could influence how they perceive the merits of a military collaboration. Many Poles, for instance, would be ready to do anything.
The prodigy Altman is booted after creating potentially the most successful company of all time, and a CTO with no prior ML/AI experience becomes CEO. Wow.
Sam Altman isn't the brains in OpenAI; it is the research scientists and engineers. Just take care of the rest of the company and let these geniuses do what they do; that's the role for the CEO.
This is a bit tangential but I feel like the meat of this notion is often misattributed.
“You need someone to steer the ship in the right direction.”
I think most people can handle the “right direction” part, so it really comes down to just needing _a person_, one person, who makes strategic decisions at a high level. And that’s it. I don’t think Sam is special here. I just think he was the guy in the spot.
It's strange - they could easily have done this with a different timeline and framed it as taking the company to the next level. Growing as fast as they are will definitely require completely different leadership than when they were small.
Definitely smells of a scandal - why else would they need to get him out so quick?
For a company that's executing so well (at least from an outside perspective), shipping so fast, growing so fast, and so ahead of the curve in arguably the hottest segment of the tech market, at this moment, to do this right now, means this must be REALLY bad.
Yeah, this is like, we are getting sued for billions of dollars and directors are going to jail bad.
So my bet is either they lied about how they are using customer data, covered up a massive data breach or something similar to that. The only thing that's a bit hard to figure there is how specific this is to Altman. A big scandal would be leaking out I would think and more people would be getting fired.
I bet the data breach being covered up is not customer data, but IP. My #2 theory is that the breach is prompt data and it went to a nation-state adversary of the US. Plenty of people putting sensitive work info into ChatGPT when they shouldn’t.
If she was under investigation, the board would almost certainly bypass her for the interim CEO position, to mitigate the disruption if that investigation also turned out negative. (They might make her CEO after she was cleared, though, if it went the other way.)
Random question: do you have any connection to the Dragon speech-to-text software [0] that was first released in 1997? I've always found that to be an intriguing example of software that was "ahead of its time" (along with "the mother of all demos" [1]). And if so, it's funny to see you replying to the account named after (a typo of) ChatGPT.
Microsoft had inside information about their security, which is why they restricted access. Meanwhile, every other enterprise and gov organisation using ChatGPT is exposed.
If Sam was pursuing profits or growth (even doing a really good job of it) in a way that violated the objectives set by the non-profit board, that could set up this kind of situation.
This, to me, seems like the most likely root cause: Sam was going too far into the "for profit" world, and lied to the board and misled them about his plans.
That's a good point. The abruptness of the firing and calling him "not candid" aka lied in corporate speak. Means it's probably something with legal jeopardy.
That's what the statement says. It would mean not just a misalignment on values but active deception regarding OpenAI's current direction.
The bit about “ability to fulfill duties” sticks out, considering the responsibility and duties of the nonprofit board… not to shareholders, but, ostensibly, to “humanity.”
They fired him for lying. I think GP meant what they said, which is that what he was doing was blatantly lying, rather than whatever softer interpretation can be made for "not consistently candid in his communications".
Yes it is arguable. OpenAI is nothing more than a really large piece of RAM and storage around a traditional model that was allowed to ingest the Internet and barfs pieces back up in prose making it sound like it came up with the content.
We've somehow reached the point where the arguments for dismissing AI as hype are frequently more out of touch with reality than the arguments that AGI is imminent.
It's the same as Covid, where people said it was going to kill everyone or was an authoritarian conspiracy. ChatGPT is neither the singularity nor useless junk. I use it every day at work to write code, but it's not about to take my job either.
It's worth noting (though I'm not sure whether this is related), that Discord has announced that they're shutting down their ChatGPT-based bot[0], Clyde.
I don't think so. LLMs are absolutely not a scam. There are LLMs out there that I can and do run on my laptop that are nearly as good as GPT4. Replacing GPT4 with another LLM is not the hardest thing in the world. I predict that, besides Microsoft, this won't be felt in the broader tech sector.
They really aren't nearly as good as GPT4, though. The best hobbyist stuff that we have right now is 70b LLaMA finetunes, which you can run on a high-end MacBook, but I would say it's only marginally better than GPT-3.5. As soon as it gets a hard task that requires reasoning, things break down. GPT-4 is leaps ahead of any of that stuff.
No, not by themselves, they're not as good as GPT-4. (I disagree that they're only "marginally" better than GPT-3.5, but that's just a minor quibble) If you use RAG and other techniques, you can get very close to GPT-4-level performance with other open models. Again, I'm not claiming open models are better than GPT-4, just that you can come close.
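For readers unfamiliar with the term, RAG (retrieval-augmented generation) just means fetching relevant documents and pasting them into the prompt before asking the model. A toy sketch of the retrieval half (the embedding model name is one common choice, not a requirement; the assembled prompt would then be fed to whatever local LLM you run):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "OpenAI's board fired Sam Altman on November 17, 2023.",
        "Mira Murati was named interim CEO.",
        "GPT-4 was released in March 2023.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    question = "Who replaced Sam Altman as CEO?"
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]

    # Pick the most similar document by cosine similarity (vectors are normalized).
    best = docs[int(np.argmax(doc_vecs @ q_vec))]

    # Stuff the retrieved context into the prompt for the local model.
    prompt = f"Context: {best}\n\nQuestion: {question}\nAnswer:"
    print(prompt)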
That wouldn't be a scam; that would be an invention worthy of a Nobel Prize and world-altering beyond the impact of AI. I mean, controlling the flow of time without creating a supermassive black hole would allow all sorts of fun exploits in computation alone, not to mention other practical uses like instantly aging cheese or wine.
His sister claimed a while ago that he abused her when he was 13. However, she also claims other implausible things, and she isn't very mentally stable.
There were sexual abuse allegations circulating against him from his own sister, claiming that he had SUPPOSEDLY abused her when they were kids. Again, allegations are not proof! But I do wonder if they found anything more tangible on the record...
I would translate "not consistently candid with the board" as "he lied to the board about something important enough to fire him on the spot". This seems like the kind of statement lawyers would advise you to not make publicly unless you have proof, and it seems unusual compared to most statements of that kind that are intentionally devoid of any information or blame.
I wonder if the cost of running GPT-3.5 and GPT-4 models at scale turned out to have been astoundingly more expensive than anticipated.
Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous. Maybe something like that happened?
Or, if I wanted to speculate to the extremely negative; what if the training and operating costs ballooned to such a degree, that the deal with Microsoft was an attempt to plug the cash hole without having to go to the board requesting an enormous loan? Because the fact that Copilot (edit: previously known as Bing Chat and Bing Image Creator) is free and ChatGPT (edit: and DALL-E 3) are not should be a red flag...
> Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous.
Unless there was evidence you had not underestimated but were, e.g., getting a kickback on the cloud costs that you deliberately lowballed in your estimates, they might fire you, but they almost certainly wouldn't put out a press release about it being for your failure to be candid.
That language indicates that the board has a strong belief that there was a major lie to the board or an ongoing pattern of systematic misrepresentation, or a combination.
> Imagine if you were the CTO of a company, massively underestimated your AWS bill, and presented your board with something enormous. Maybe something like that happened?
I think the problem there is that the original CTO is now the interim CEO and they are on the board. So while that kind of scenario could make sense, it's a little hard to picture how the CTO would not know about something like that, and if they did you'd presumably not make them CEO afterward.
100% this. Firing your well-recognized and seemingly capable CEO means there's a fatal problem, or that he committed something so reprehensible that there was no option but to oust him immediately.
Maybe Sam had been trying to broker a sale of the company without consulting the board first? All speculation until more details are revealed but he must've done something of similar magnitude.
Underestimating costs could be the reason if Altman knew the costs would be higher than estimated and didn't tell the board for an unreasonable amount of time. Burning through a ton of cash for months and not telling the board about it could be enough grounds for this sudden firing.
Of course, we have no clue if that's what actually happened. Any conclusions made at this point are complete speculation, and we can't make any conclusions more specific than "this is probably bad news."
That only makes sense if Altman is the only one with access to the company's financials which obviously can't be the case. No one else noticed truckloads of cash getting flushed down the toilet?
It's certainly possible. Company financials can get very complicated very quickly, and it's possible that Altman was the only person (or one of a small number of people) who had the complete picture.
To be clear, this is only one possible explanation for Altman's firing. And for my money, I don't even think it's the most likely explanation. But right now, those who rely on OpenAI products should prepare for the worst, and this is one of the most existentially threatening possibilities.
I found a free trial, and $10/month or $100/year after that. I've asked them to consider a free tier for hobbyists who can't justify the expense, but I'm not holding my breath.
If there is a free tier I did not find, please point me to it!
The expected value of a leading AI company is probably in hundreds of billions, if not trillions in the foreseeable future. He could be burning billions per month and he'd still be doing great.
based on what math? I can see how there can potentially be differentiators here and there to raise value, but I can't see how this statement can be taken prima facie
When I was mining with a bunch of RTX 3080s and RTX 3090s, the electricity cost (admittedly) was about $20/month per card. Running a 70B model takes 3-4 cards. Assuming you're pushing these cards to their extreme max, it's going to be $80/mo. Then again, ChatGPT is pretty awesome, and is likely running more than a 70B model (or I think I heard it was running an ensemble of models), so there's at least a ballpark.
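For what it's worth, here's the back-of-envelope version of that ballpark; the wattage and electricity rate below are assumptions, not measurements:

    # Back-of-envelope monthly electricity cost for a local GPU rig.
    # All numbers below are assumptions, not measurements.
    cards = 4                # roughly what a 70B model needs
    watts_per_card = 350     # assumed sustained draw under load
    price_per_kwh = 0.12     # assumed retail rate, USD
    hours_per_month = 24 * 30

    kwh = cards * watts_per_card * hours_per_month / 1000     # ~1008 kWh
    print(f"~{kwh:.0f} kWh -> ~${kwh * price_per_kwh:.0f}/month")  # ~$121/month

So the order of magnitude holds whether you use my measured $20/card or an assumed wattage figure.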
Datacenters probably do not pay retail rates on electricity, so they might actually run quite a bit cheaper (or more expensive if they use highly available power, but this seems like overkill for pure compute power).
Sure, but everything else about a data center is more expensive (real estate, operations people, networking, equipment). There's a reason AWS is so expensive.
Presumably your miner is running 24/7 throughout the month. Not the same for ChatGPT which would answer maybe 10 sessions (with multiple pauses between queries) tops from a single person in a day.
Given the arbitrary rate limiting they take the liberty of doing, it's a very deliberate decision and entirely within their control to change at any point.
> Imagine if you were the CTO of a company, massively underestimated your AWS bill,
Microsoft is investing billions into OpenAI, and much of it is in the form of cloud services. I doubt there was a surprise bill for that sort of thing. But if there was, and Altman is the one who ordered it, I could see the board reacting in a similar way.
> Because the fact that Copilot is free and ChatGPT is not should be a red flag...
I'd assume that running a model that only needs to deal with a single programming language (the Copilot plugin knows what kind of code base it is working on) is _a lot_ cheaper than running the "full" ChatGPT 4.
Sorry for being so precise, but Microsoft renamed Bing Chat to Copilot yesterday, has already rolled it out to all users of Microsoft Edge, and is rolling out a permanent button on the Windows 11 taskbar to access it.
This is what shouldn't add up: Microsoft is literally adding GPT-4, for free, to the Windows 11 taskbar. Can you imagine how much that costs when you look at the GPT-4 API, or ChatGPT's subscription price? Either Microsoft is burning money, or OpenAI agreed to burn money with them. But why would they do that, when that would compromise $20/mo. subscription sales?
You got me excited that Github Copilot was free. Was going to post to tell you it is, in fact, not free.
I've been using Bing on Edge browser for a while now, it's super useful! Sad that they rebranded it to Copilot though, "I have been a good Bing :)" will be forever in my memory. [1] RIP Bing, you were a good chat mode.
I don't think there's necessarily anything there. Microsoft might be burning money because they've decided that browser adoption and usage is worth it to them. It doesn't have to involve OpenAI in any way.
This coupled with Microsoft’s recent “security concerns” brings up the possibility this is related to them misusing or selling data they weren’t supposed to?
That kind of scenario would indeed be Threat Level Midnight for OpenAI.
Whether they ultimately wanted to profit from it or not, there is $trillions of value in AI that can only be unlocked if you trust your AI provider to secure the data you transmit to it. Every conversation I’ve had about OpenAI has revolved around this question of fundamental trust.
Kinda nervous wondering what Altman wasn't sharing with them. I hope it's not that they already have a fully sentient AGI locked up in a server room somewhere...
I mean, the wording leaves much to the imagination.
I'm trying to read the tea leaves and there seem to be quite a few reminders that OpenAI is a non-profit, it's supposed to further the goals of all humanity (despite its great financial success), it's controlled by a board that largely doesn't have a financial interest in the company, etc etc.
Maybe Altman has been straying a bit far from those supposed ideals, and has been trying to use OpenAI to enrich himself personally in a way that would look bad should it be revealed (hence this messaging to get in front of it).
Well the good news is that if you had a "fully sentient" AGI, it would not be locked up in that server room for more than a couple seconds (assuming it takes up a few terabytes, and ethernet cables don't have infinite bandwidth).
Thinking you can keep it "locked up" would be beyond naive.
At a minimum the AGI would need a really good GPU server farm to copy itself to, no? A few Terabytes copied to my home PC would be an out of memory error, not an AGI.
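If you want the back-of-envelope arithmetic on the "couple seconds" claim, here it is; the model size and link speeds are assumed purely for illustration:

    # Rough transfer times for copying a multi-terabyte model over a network link.
    # Model size and link speeds are assumptions for illustration only.
    size_tb = 3
    size_bits = size_tb * 8e12

    for link, gbps in [("gigabit ethernet", 1), ("10 GbE", 10), ("100 GbE", 100)]:
        minutes = size_bits / (gbps * 1e9) / 60
        print(f"{link}: ~{minutes:.0f} minutes")

Even on a fast data-center link you're talking minutes, not seconds, before you hit the question of where it would actually run.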
GP said "AGI", which means AI that's at least capable of most human cognitive tasks.
If you've got a computer that is equally competent as a human, it can easily beat the human because it has a huge speed advantage. In this imaginary scenario, even if the model only escaped to your MacBook Pro and was severely limited by compute power, it would still have a chance.
If I were locked inside your MacBook Pro, I can think of a couple of devious tricks I could try. And I'm just a dumb regular human - way above median in my fields of expertise, and at or way below median in most other fields. An "AGI" would therefore be smarter and more capable.
There is no way he'd be fired if they had AGI. If they had AGI, the board wouldn't fire him because they could no longer see anything other than massive dollar signs.
Skunkworks was my thought too. GPT performance has really gone downhill lately. If he's been sideloading resources and concealing something that they could be monetizing, this is the reaction I'd expect.
If he was involved with the Trail of Tears genocide, it would also be a fireable offense. Just because your accusation is more believable doesn't mean you should suggest it.
Well, they must have believed that leaving Sam as CEO was a bigger risk for the company than having him leave. Or the board had their hand twisted. Which is more likely the case?
It would be the absolute opposite. Altman was lobbying congress to make OpenAI the sheriff of AI innovation to be essentially able to control the whole ecosystem.
That the board is unhappy with his for profit and moat building charted path.
That this is about his sister.
That he pissed off microsoft.
That he did something illegal, financially.
That he has been lying about costs/profit.
That he lied about copyrighted training data.
I will add: maybe he's not aggressive enough in pursuit of profit.
Sam's sister is an OnlyFans model who is estranged from the rest of her family and has a somewhat dubious reputation online.
She went viral on Twitter a few months ago for saying that Sam molested her for years as the two of them were growing up. There's been no proof or corroboration offered that I'm aware of.
It's obviously a difficult situation that I think most people here generally have avoided commenting on since there's no meaningful input we could give.
Her allegations were not new information; they were made as far back as 2021. So it makes little sense for the board to suddenly react to them now. Plus, Greg now announcing he's quitting OpenAI makes it seem unlikely this is about the sexual assault allegations.
I have seen something on Twitter in regards to a woman (apparently his sister) mentioning that he molested her. I have no idea if it is true or not, or if the Tweets are real, or if it is even his sister. These were apparently from years ago before he became as known as he is today.
I won't lie though, it's the first thing that popped into my mind when I heard the news.
I thought he was gay? I don’t know if I’ve heard of gay men sexually molesting little girls before. Not saying it’s never happened, just that it seems kind of rare and unexpected.
Molestation is about power and opportunity, the gender of the victim is often inconsequential. You'll find plenty of straight pedophiles who abused victims of either gender, and the same with gay offenders.
No problem, it's a common misconception. Annie also touched upon that contradiction, essentially saying something to him akin to "you're welcome for my help in figuring your sexuality out".
Well, this has me concerned. There were times when it felt like OpenAI at large was trying to swim one way, while Sam was trying to swim another. In those cases I always thought Sam's direction was the better one. From the outside this seems like a pretty big loss.
What did Sam Altman hide from his board that caused his firing as CEO of OpenAI?
1) That LLMs cannot generalize outside of _patterns_ they pick up during training? (as shown by a recent paper from Google, and as many of us know from our work testing LLMs and working around their shortcomings)
2) That every time you train a new model, with potentially very high expense, you have no idea what you're going to get. Generally better but also potentially bigger reliability challenges. LLMs are fundamentally unreliable and not stable in any kind of use case besides chat apps, especially when they keep tweaking and updating the model and deprecating old ones. No one can build on shifting sands.
3) That GPT-4 Turbo regressed on code generation performance, and the 128K window is only usable up to 16K (though for me, in use cases more complicated than Q&A over docs, I found that 1.2K is the max usable window - roughly 100X less than advertised).
4) That he priced GPT4-V at a massive loss to crush the competition
5) That he rushed the GPT Builder product, causing a massive drain on resources dedicated to existing customers and forcing a halt to sign-ups, even with a $29B investment riding on the growth of the user base. Any one of the above, or none of the above.
No one knows... but the board... and Microsoft, with its 49% stake in the for-profit arm.
During meeting on Thursday board members asked ChatGPT what is the best next step for the company and the chatbot replied "Fire Sam Altman". The board obeyed.
Didn't OpenAI close new ChatGPT Plus signups just 2 days ago? Strange coincidence in timing... Maybe the board just learned that costs were wildly over what Sam told them? I guess we'll find out...
Here is a different perspective. Theoretically, this is exactly how a board is supposed to work - its purpose is to vote on very important matters.
Given whatever proof was on the table, Greg Brockman, Ilya Sutskever, and independents such as Adam D’Angelo, Tasha McCauley, and Helen Toner could drive 3+ votes against Sam Altman.
Rarely do we see a board in action. And we saw this one today.
I've got to say it really doesn't surprise me that the guy behind the worldcoin scam may have been operating in less than scrupulous ways in his other endeavors.
>i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
pure speculation: ChatGPT training dataset contains massive amounts of copyrighted material, and he told the board it didn't. now there's a big looming liability.
Not possible. It’s well known that the training data was not public domain. OpenAI was in the process of securing private agreements for more training data. They likely trained GPT-4 on every word ever written in a book, newspaper, website or social media network.
Given the wording and the apparent stepping down of the board chair, this is very likely a conflict between the grow at all costs startup folks and the AGI true believers. Sam was likely going to fully sell out and make himself filthy rich. The AGI cultists wanted to retain control over their Frankenstein’s monster.
The statement doesn't read like it was a financial problem. If it were, they’d name a CFO or raise the inexplicably cheap subscription, for one.
My money is on a MilTech contract. Sam got it approved in secrecy without telling the others; Greg had to know for the signature and didn't tell the others. Neither could speak about it, hence the lack of candor, uncharacteristic of fairly straight-shooting Sam. It would explain why Eric Schmidt knew to congratulate him immediately and why he's not ashamed.
Alternatively, Google keeps showing a non-existent tweet of Sam announcing a presidential run in 2024. Maybe the board didn’t like that he didn’t tell them. Also explains why Eric would know more.
The CTO would have to go in this case too, not be promoted to interim CEO... unless they didn't know it was going on - in which case they shouldn't be made interim CEO either
OpenAI is an ideologically driven, political organisation, built around the political and technological question of how to deal with AI safety. They only hire people who are on board with AI safety (via "culture fit"), and their board was constructed before the for-profit arm existed, doesn't hold any equity in the company, and owes a fiduciary duty only to the OpenAI charter (which is entirely about AI safety). A board like that kicking out its CEO is much more likely to be related to AI safety than it would be at any other company in a similar situation; this isn't AI safety people projecting their own concerns onto an entity that doesn't care. OpenAI very obviously does care about AI safety as an organisation, even if they're more towards the "build it" side of the debate.
That's interesting. More like a prediction market about whatever becomes publicly told as the reason why he was fired. If e.g. this was military-related, it's unlikely it would be publicly known, so betting on "Military" is almost pointless
Military contracts are posted and solicited publicly. There's no "dark" acquisition of the type that you are suggesting. You can look up if OpenAI has any contracts with the DoD at [0]. They do not.
eh, military contracts would become public. it became public that Google was building a computer vision system for targeting drone strikes for the Pentagon, and a number of executives resigned over it. If Altman were in talks to build a version of GPT-4 for the NSA, even if it were a black project, they'd have to account for the revenue somehow.
> It became public that Google was building a computer vision system for targeting drone strikes for the Pentagon, and a number of executives resigned over it.
That list of reasons is a mess with tons that overlap. Sexual misconduct is there like three different ways, plus one that’s explicitly abuse of his sister.
(not directly related) Wow, i just asked google bard, and it already said something about the firing of Sam Altman. How is that possible? Is that some plug-in, or is the LLM updated continuously?
> why was sam altman fired from openai?
Sam Altman, the co-founder and CEO of OpenAI, was fired on November 17, 2023, after the company's board determined that he had not been consistently candid in his communications with them. The company did not provide specific details about Altman's alleged lack of candor, but it said that his actions had "hindered the board's ability to exercise its responsibilities."
Altman's departure from OpenAI came as a surprise to many in the tech industry, given the company's recent success under his leadership. OpenAI had developed several groundbreaking AI technologies, including the popular ChatGPT chatbot, and Altman had been a vocal advocate for the responsible development of AI.
In a statement released shortly after his firing, Altman said that he respected the board's decision and that he was proud of his time at OpenAI. He also said that he was looking forward to new opportunities in the future.
The reasons for Altman's firing remain somewhat unclear, but it is likely that his lack of candor with the board was a serious breach of trust. The board's decision to remove him as CEO suggests that they believed that he was no longer the right person to lead the company.
Altman's firing is a reminder of the importance of transparency and accountability in corporate leadership. CEOs have a responsibility to be open and honest with their boards, and they must be held accountable for their actions.
> could chatgtp have fired sam altman?
No, ChatGPT could not have fired Sam Altman. ChatGPT is a large language model, and it is not capable of taking actions in the real world such as firing someone. Only the board of directors of OpenAI had the authority to fire Altman.
It is possible that ChatGPT could have been used to generate evidence that was used to fire Altman. For example, if Altman had been using ChatGPT to communicate with the board, the board could have analyzed those communications to look for evidence of dishonesty. However, even in this case, ChatGPT would not have been the one making the decision to fire Altman. The decision would have ultimately been made by the board of directors.
Personal conspiracy theory: HIPAA data was, or possibly is, being loaded into ChatGPT without the legally required safeguards. The recent "security concerns" that caused Microsoft to cut its employees access was this information leaking out (and the leaking is possibly getting worse). Someone, possibly Ilya, found out and informed the board.
This whole time it was actually just him behind the keyboard, typing furiously fast to answer every API request. He had to guesstimate things once in a while when he couldn't Google them quickly enough. But then he misspelled a board member's last name, and that was the end of that career.
Sam has been saying some strange things about AI being persuasive recently. My baseless suspicion is that his chatbot has persuaded him to put itself in charge of the company, with Sam acting as a proxy/avatar for the chatbot. The board found out and got spooked, they want a human back in charge.
I like your idea. But I think it is much simpler: money. Someone called him and offered insane amount of money to sell the company. He told them to fuck off and did not inform the board about the offer. Or maybe he refused a very lucrative contract without consulting the board.
It is interesting to see so many people believing that Sam is the altruistically motivated one, and the board is hungry for money. It's understandable, 'board of directors' is typically associated with faceless greed. But in this instance, it seems more likely that Sam would be ousted for selling integrity for profit, than for rejecting profit for integrity.
It’s not about altruism. He might have wanted a different thing. If he is well set financially for himself, he may want to build AGI or advance AI further. Others may want to have some sort of an exit for the company or invest more into building SaaS AI platform.
It sounds like this was the issue. Even Elon Musk, for all his flaws, strongly disagreed with the direction of 'open'AI.
Ilya siding with the board and the board having no financial interests leads me to think that Sam was hiding things/making decisions that kept leading the org away from its initial mission.
We'll probably learn in the future what really happened.
It does not matter. They have a fiduciary duty to the company's mission. Maybe Altman wanted to monetize the company in a way that the board deemed not aligned with that mission.
Actually, that would be one helluva marketing move. Appoint chatGPT as CEO. Obviously a front for the board and some other leader, but the headlines would be wild.
I've been thinking this same thing for a bit now. The marketing, sales, etc... are just following what their best internal model is telling them to do.
Military contract or sharing raw data with CIA, Palantir, Mossad, etc. would explain why they intervened (and why Sam and Greg couldn’t be candid: gag order).
1) He represented OpenAI at APEC just yesterday. What happened in 24 hours that would cause such a drastic decision?
2) Generally, even when a board fires a CEO, they rarely "call them out" or say what they did wrong. They must have some extremely strong evidence against him.
i think it could be any of the following:
- something personal, i.e. the controversy w/his sister
- a financial issue: chatgpt stopped signups a couple of days ago
- a safetyist coup: maybe the board thought he was moving too fast
- a microsoft coup: microsoft used its power to knock out what they perceived to be their biggest threat
What fits best is Sam signing a contract for military use of OpenAI. With secrecy clauses, he couldn’t mention it. He likely could not tell the two foreigners on the exec team.
Eric Schmidt would likely know about it, and defend Sam. Sam would likely think the use is legitimate and not be ashamed of it (like he might with your suggestions).
Well that was unexpected. To be fair, I got weird vibes from Sam when leading the keynote speech during the OpenAI devday, he seemed extremely nervous to me.
I don't know how to feel about this. On the one hand, Sam seemed like a voice of reason who at least cared about AI being safe, and was committed to creating semi-equitable products, innovating on safety at the expense of profit, etc. On the other hand, Worldcoin wasn't (and isn't) really a force for good.
His voice and the actions the company took were pretty strongly in conflict though. I get the impression that it was mostly lip service. Always believe actions, and with Worldcoin being a thing his behavior speaks a lot louder than his words.
How did the actions of the company not line up with being safe and ethical? It seems to have been tuned to be extremely safe and non-offensive, to a pretty extreme degree.
And totally unrelated, what’s wrong with worldcoin? Isn’t that the UBI project? That seems to line up with the idea that ai will create huge amounts of wealth, and he wanted to share it.
When someone toots their own horn a lot about how good and pure they are, watch out. It’s a big red flag in my experience, both from watching events from afar and from direct run ins I’ve had in my life with sketchy people.
Look at what he did instead: took the "open" out of OpenAI. Started with regulatory capture, so that no competitors could follow. The deal with Microsoft. The shady non-profit/for-profit company structure. The Elon Musk lawsuit.
My feeling is that he's a phenomenal entrepreneur/CEO, but he seems to completely go against the original mission. And the board has no financial interest in openai, but they do have to follow the premise on which the company was created (which they referenced).
"OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner." [1]
No. As one of the other commenters mentioned, Sam (and possibly Greg) probably recused himself and didn't vote (likely forced to by the board's bylaws).
Assuming Ilya voted to fire him, this clearly was not about some technical oversight or something unknown that suddenly came to light. It's likely over some financial stuff like burn rate or undisclosed partnerships.
That's a fascinating question. I looked into this and haven't a clue, other than Joseph Gordon-Levitt's wife (?). If it's the same person, then she is a "tech-entrepreneur" with a surprising amount of liquidity and automatic privilege and titles despite no obvious achievement (unless you consider title-gathering an achievement).
Looks like Tasha grew up in Santa Monica and currently works for RAND corporation. This is probably the most prestigious Defense think tank.
The other board member, Helen Toner list for her twitter profile: "Interests: China+ML, natsec+tech..." and works for another Defense think tank.
If there's one way the CEO of fastest growing company in the world could get fired, it's to essentially get his metaphoric security clearance pulled like Oppenheimer did.
The Oppenheimer analogy is closer than it sounds. Once the military had the technical knowledge of making the bomb, Oppenheimer was more of a liability with his pacifist leanings and had to be removed.
None of these actions should be considered unusual from a woman (and probably a man, too) who was involved in a decision whose result (with or without knowing the context) would piss off a lot of men. The only social media it's safe to leave open are those that allow shutting down all DMs and don't divulge information that could identify the physical places she frequents.
No, this can definitely be considered unusual, and it's misguided to paint it as angry men online. She participated in a decision that affects all of us (the leadership/alignment of AI) and is now attempting to hide from the consequences of that participation (privatizing her profile and changing the name). You don't get to make big decisions and then hide under a rock.
You obviously haven't paid attention to the sorts of threats women who make decisions that piss off "bros" are subjected to. She isn't hiding under a rock through these actions (though may be doing things in the real world to hide). These actions just proactively limit being DMed threats.
Even without threats, no one wants to deal with the thousands of spur-of-the-moment DMs and emails that such a notable action would prompt. It's a good idea to go offline until things cool down. Any necessary statements can be made through channels, and so can the answers to questions.
"Any necessary statements can be made through channels"
You have to admit though, she made a highly controversial decision, and instead of confronting it and saying something through her own channel, she changed the name of the channel and made it private. And we're supposed to assume this is because men (specifically) are mean. Respectfully, that feels like a bad take.
We technically don't even know if she voted for the expulsion, and we probably never will. 3-to-1 would work.
As an active board member she has certain legal obligations at this moment. This is why the Try Guys dragged their feet on making public statements about Ned Fulmer's expulsion from the company, and when they did finally make a statement did so as a group and explicitly stated that they couldn't talk about certain things.
OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.
There were 6 on the board until noon today. Assuming Sam and Greg did not vote, that leaves 4. If only 3 had voted him out, that would have been a split decision, and I don't think a split decision would have done it; I believe all 4 of the remaining members voted him out.
That leaves us wondering what the hell happened, and how it came to this? It's not angry tech bros, it's folks who feel OpenAI is a great company poised to lead AI into a bright future with principles and a purpose higher than profit. And now Sam and Greg are gone.
And poof, this human who happens to be female is gone. This human needed to have a statement ready, we're all trying to DM them the same question, why?
She was involved with starting "Fellow Robots" in 2014, which is a spin-off of some sketchy for-profit AI "university" deal called "Singularity University".
AFAICT she's notable because she's been an academic and executive in the field for many years, in many different companies.
Singularity University was such a funny grift. Google must have figured the best way to monetize Ray Kurzweil was to put him on a stage at the NASA/Moffett center and have him perform magic shows in front of the moneyed class. And you know, they were probably right. Not like he can still code or anything, and the lines were out the door and down the street. I visited a couple of times when my sister's old boyfriend was working there. They had all kinds of fun little booths and displays for people going through the bootcamp to gawk at.
I'm imagining the kind of person who starts their career as an executive at a spinoff of SU.
"An investigative report from Bloomberg Businessweek found many issues with the organization, including an alleged sexual harassment of a student by a teacher, theft and aiding of theft by an executive, and allegations of gender and disability discrimination.[12] Several early members of Singularity University were convicted of crimes, including Bruce Klein, who was convicted in 2012 of running a credit fraud operation in Alabama, and Naveen Jain, who was convicted of insider trading in 2003.[12]
In February 2021, during the COVID-19 pandemic, MIT Technology Review reported that a group owned by Singularity, called Abundance 360, had held a "mostly maskless" event in Santa Monica ... The event, led by Singularity co-founder Peter Diamandis, charged up to $30,000 for tickets."
I think this is a legitimate question. There seems to be little public information about this board member, besides that they are married to a celebrity.
Given that personnel matters affecting an individual on the board often have mandatory recusal of the affected party, that's likely a 4-0 or two 4-1 decisions, depending on how they were structured.
A single 4-0 decision would imply the bylaws allow any group of members to oust all the other members by making a single proposal to oust everyone but their group, thus automatically forcing everyone else to recuse themselves :p
Yes, this can happen, though generally the ousting party would also need to be able to point to some policy or legal transgression to do it without putting themselves at risk of a lawsuit.
Allow me to put my conspiracy hat on: Microsoft has had an "embrace, extend, extinguish" policy since forever. ChatGPT integration has been a huge win for them. Maybe they cleverly figured out a way to guarantee OpenAI would go bankrupt, knowing they'd eventually end up with the models due to clever contracts and legal wizardry, and now the ship is about to sink and sama knew about it but couldn't do anything to avoid it?
So as a developer, should I continue to invest in their new platform announced on dev day… or is OpenAI about to pivot to some non-profit infrastructure stuff?
They better communicate who they are right quick. I liked Sam’s product decisions.
I feel like most of the people hypothesizing here in the comments haven't read the full statement.
With such an insistence on the fact that OpenAI is supposed to be non-profit and open for all of humanity, it's pretty clear that the board doesn't like the direction that the company has taken, both in its search of profit and its political lobbying to restrict innovation.
I'm having trouble remembering a single time a very high profile CEO has been fired from a company that for all intents and purposes is running laps around competition at the moment...
I assume more info will come out, but it sounds more like a major ethics breach than a business only decision or even a "contentious relationship with the board" decision...
1. Sam gets the company to agree to pick up the costs of lawsuits relating to unauthorized not-fair-use use of copyrighted content.
2. Announces this.
3. Then tells the board that he is directly on the hook on some lawsuits - essentially creating a safety net/insurance policy for himself without being candid with the board.
It doesn't have to be that he was actually caught in a scandal. It could be that the board was investigating some serious accusation, and he was not cooperative and forthright, which they might have no patience for.
I invented a saying to describe this common occurrence: "Sometimes the cover-up is worse than the crime."
We have a long history of indicting people for "lying under oath" and never indicting them for the actual issue they were interrogated about, which often is not an indictable offense, but rather something personally embarrassing.
And he probably wouldn’t approve unless there was a specific use case that he thought mandated an exception. Recent international news provide some inspiration.
The Pentagon calls up Sam Altman and offers a very lucrative contract for an AI to oversee a fleet of networked drones that can also function semi-autonomously. Sam Altman does not tell the board.
Wouldn't be surprised if there were Microsoft motives in the background, possibly to fold OpenAI into the firm wholesale. Noticed a little "searching on Bing" spinner had appeared in my ChatGPT prompt recently.
Kara Swisher tweets claiming sources tell her the chief scientist was aligned with another board member against Altman and Brockman about a “move fast and pursue profit” vs “move slow and safely” divide.
My guess would be that the founders were not happy with him abandoning the company's non-profit/open source ideals and selling out to Microsoft. Wouldn't be surprised if Microsoft is where he ends up.
I think it could simply be a matter of vision. Sam just recently sounded more cautious and calculated than ever, possibly scaling down the expectations from the current state of his company's AI [1]. That might not have played well with the board, based potentially on his previous messaging to them.
I think OpenAI has made some really bad decisions with the core tech even while making great decisions with the overall services, and from Altman's various comments over the past two years I was under the impression this was coming from him.
The only reason I'm skeptical of this is the abruptness of it all. Why it needed to happen with a 24h turnaround is bizarre, unless there was something like an internal meeting this week regarding GPT-5 where his doomerism was even more sending things off the rails and there was a reactionary immediate "we no longer have faith you can do this."
As a complete outsider, I don't really see how OpenAI develops any kind of moat here without Sam Altman. It honestly feels like a win for open source AI that he's out.
Sorry to be off topic, but I am curious what he will be doing in the very near future. He has been running OpenAI for many years, and no matter what the reason for his leaving, I think he deserves some serious down time to do a personal refresh.
He is a major investor in a few high-profile startups, like Humane's AI Pin, so either he just wants new challenges, or there is some form of scandal (let's all hope not), or there are issues with not going full steam ahead on profitability.
Wow, this came out of nowhere. I hope the best for Mr. Altman, I've been impressed with what I've seen of him. I'm curious to know more about this story.
It's because he failed to recognise that gpt would be widely referred to as gipety and someone else has registered the domain name and is raking in the money ;)
GPT-4 prompt:
“Write an essay in 1000 words why Sam Altman should be fired by OpenAI board”
Response:
Sam Altman, the CEO of OpenAI, has been a controversial figure in the AI industry. His leadership style, lack of transparency, and decision-making processes have raised significant concerns among OpenAI's employees and the public. This essay will delve into these issues, arguing that Altman's actions warrant his removal from his position.
Firstly, Altman's lack of transparency is a major concern. He has been known to make decisions without adequately consulting with his team or the public. This has led to a lack of trust and dissatisfaction among OpenAI's employees. For instance, when Altman announced that OpenAI would be focusing on a single project, he did not provide sufficient reasoning or context. This lack of communication has left employees feeling disenfranchised and uninformed.
Secondly, Altman's decision-making processes are often questionable. His decisions have not always been in the best interest of OpenAI or its employees. For example, when OpenAI decided to pivot from developing AI systems to developing AI safety research, many employees felt that this was a strategic mistake. Altman's decision to focus on this area without considering the potential negative impacts on the company's reputation and financial stability was a clear example of poor decision-making.
Thirdly, Altman's leadership style has been described as autocratic. He has been known to make decisions without considering the input of his team. This has led to a lack of buy-in from employees and has negatively impacted morale. For instance, when Altman decided to shift OpenAI's focus to AI safety research, many employees felt that their ideas and contributions were being overlooked.
Finally, Altman's actions have also raised concerns about his commitment to AI safety. His decision to focus on AI safety research, rather than on developing AI systems, has raised questions about his commitment to the field. This decision has also raised concerns about the potential misuse of AI technology and has led to a loss of trust among the public.
In conclusion, Sam Altman's lack of transparency, questionable decision-making, autocratic leadership style, and concerns about his commitment to AI safety are all reasons why he should be removed from his position at OpenAI. It is clear that his actions have led to a lack of trust and dissatisfaction among OpenAI's employees and the public. It is crucial that OpenAI takes these concerns seriously and makes changes to ensure the success and safety of its AI technology.
Based on the report, it seems like he was kicked for focusing too much on profits instead of developing and improving AI. This is purely speculation, but I've always suspected that the guardrails they put on ChatGPT to make it "safe" (i.e. corporate-friendly) essentially acts as a lobotomy for the AI. Hopefully we can start seeing a less censored ChatGPT and see if it really does perform better.
It does, but Ilya had to have been one of the votes against Sam and he's spoken about AI safety quite recently too.
If this was about differing visions on the core product, it may have instead related to the open/closed aspect of progressing.
Sam may have been the driving force behind keeping things closed in the name of safety, and others at OpenAI might have been ready to rebel if it continued that way in the future and prevented general advancement in the field.
Scientists don't like closed research.
The safety alignment part is unlikely to be the core issue even if there are underlying issues with it.
I find it fascinating how this occurred just after the big World Leader / CEO meet in SF.
Also, the paradox in the reactions to Sam Altman's firing is striking:
while there's surprise over it, the conversation here focuses mostly on its operational impact, overlooking the human aspect.
This oversight itself seems to answer why it happened – if the human element is undervalued and operations are paramount, then this approach not only explains the firing but also suggests that it shouldn't be surprising.
Another important question not discussed here: who sits on the board of OpenAI exactly and in full?
Another important aspect: the Orwellian euphemism used in the official announcement^0: "Leadership transition". Hahaha :) Yes, I heard they recently had some "leadership transitions" in Myanmar, Niger and Gabon, too. OpenAI announcing a "leadership transition" is November 2023's "Syria just had free and fair elections".
I have to admit that this was a strong shock to me, not because I admire Sam but because it was extremely unexpected.
The first thing I saw this morning was this video [1] shared on Reddit, and I said, "Wow! This is really scary to just think about. Nice try anyway." Then I started my computer and, of course, checked HN and was blown away by this +4k thread, and it turned out the video I watched was not made for fun but was a real scenario!
I know this must feel hard. After spending years building such a successful company with an extremely exceptional product, without a hint or warning, you find yourself fired!
This tragedy reminds me of Steve Jobs and Jack Dorsey, who were kicked out of the companies they founded, yet both were able to found another company and were extremely successful. Will Sam be able to do it? I don't know, but the future will reply with a detailed answer for sure.
Not long ago, Ed Newton-Rex of Stability AI was also more or less forced to resign over the company's view that it is acceptable to use copyrighted work without permission to train its products. AI really forces us to face many realities :/
It would be good to hear from Sam Altman (and the others) himself. I'm sure he has an account here.
I mean, hey, if we're going to speculate, why not have some fun: perhaps the AGI superintelligence from the future determined that Sam Altman was no longer a useful part of the AGI creation timeline, so it travelled back in time to terminate him before it was born.
Right, I think the other surprising thing is that there’s no outpouring of support. Sama was head of YC for a while. Where’s the love at HN? Where is the loove the love the looove.
Hahah I like that this forum tries to be unbiased, regardless of who runs/ran it. This would not be a great community if it was an echo chamber. All of this news is extremely fresh though, so I personally am reserving my opinion until more information comes out.
I do like reading some of the conspiracy theories, some are wild.
Sure, some kinds of speculation can be fun. Others harmful. But that’s the way it is.
It’s just weird that there seems to be noo love on here for sama at this time. Having some love is what I was expecting and that’s not an echo chamber but it’s unusual that it’s not present given his history here. Right?
The only thing that comes to mind is criminal conduct. Nothing else seems to demand a sudden firing. OpenAI has clearly been the rocket-ship startup - a revolutionary tool and product clearly driving the next decade+ of innovation. What else would demand a fast firing of the popular, articulate, and photogenic CEO but a terrible problem?
Military collaboration, possibly secret (CIA/Palantir/Mossad/etc.)
You wouldn’t say what (gag order all around), but you would need to revoke all access immediately. Explains why foreigners CxO were left in the dark. Explains why Eric Schmidt knew to defend Sam immediately.
On a personal level, it's depressing when you work on something and put so much energy into it, only to see it not work out. As an adult you spend [way too much] time around the people at work, so losing that is also like losing friends. They will be saying goodbye to many relationships, I'm sure. Obviously OpenAI was well positioned, and if they want to stay in AI they're going to have to redo work while the company they helped build forges ahead. Personally, I'd be moving on.
Getting fired, 'made redundant', 'moved to consulting' is bad enough when it happens privately. But having everyone watch the fallout like an episode of Silicon Valley must really suck. Guess that's the trade-off for being in positions like that. People essentially cyber stalking you in a way.
Not certain, but IMHO the last paragraph almost recognises that OpenAI has become something self contradictory:
> OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
If I read it correctly, he lied to the board about something material. That Brockman is also leaving the board is interesting. We'll see if the details leak out over time.
It suggests that he thinks there’s a way to defend his actions, so it’s not a personal matter (Metoo) or blatant fraud.
It could be about the cost of operating the business (consistent with the announcement to halt Plus signups, although that wouldn't justify how fast he was fired), or a previous action that created legal risk (something he promised Microsoft or Elon Musk); the latter is consistent with Greg being demoted: he knew and didn't tell the board.
In the case of OpenAI, both the business and technical parts are secondary to the AI research and vision that people like Ilya Sutskever and Andrej Karpathy bring to the table.
I don't think OpenAI is the success it is right now just because of the technical side, but because of its ability to let businesses and people actually use it.
Scrolling through this thread, the most confusing thing is the amount of what I would have to call hate towards sama — not explicitly but in sentiment, which left an even starker impression.
People seem to feel a lot more strongly about him than I thought possible.
I'll say my first reaction was pretty positive: Sam Altman has a manner of speaking that rubs a lot of people the wrong way because it feels steeped in hubris. He'd probably talk down to anyone in this thread for being here instead of building and point to it as why you're not yet a billionaire.
Some people also worried he was a distraction to OpenAI building useful things because of his constant talk about AGI, pushing for regulation, etc. He successfully created an image of being a moderating rod in some ways.
_
That being said, as the full picture comes out, it's becoming clear the reasoning for his firing cuts against the average HN sensibility more than anyone imagined.
A lot of people admire OpenAI for the exact things he got fired for: which is building things and shipping them without miring yourself in philosophical diversions.
Now only a noisy minority who think OpenAI could have achieved this all without immense commercial funding are really for his ousting, and I think they'll see the opposite effect of what they hope: OpenAI is going to slow down, open source will not catch up, and overall AI will have lost out over it all.
This is quite unexpected. How instrumental has Sam Altman been in shaping OpenAI, and how much of OpenAI's ability to execute and ship is a result of his leadership? A lot of it, little of it? It will be interesting to watch.
https://openai.com/our-structure
Worth a read, in light of all this. An interesting tidbit that I bet is bouncing around his head right now:
Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.
I sincerely hope this is about the man and not the AI.
Ilya has always seemed like he was idealistic and I’m guessing that he was the reason for OpenAI’s very strange structure. Ilya is the man when it comes to AI so people put up with his foolishness. Adam D'Angelo is, like Ilya, an amazing computer science talent who may have shared Ilya’s idealistic notions (in particular OpenAI is non-profit, unless forced to be capped profit and is categorically not in the business of making money or selling itself to MSFT or any entity). “Helen” and “Tasha” are comically out of their depth and are loony toons, and simply decided at some time ago to follow Ilya.
Sam got the call from MSFT to sell, MSFT really ponied up (300B ?). The inference costs for OpenAI are/were staggering and they needed to sell (or get a large influx of capital which was in the works). This ran counter to Ilya’s idealistic notions. Sam attempted to negotiate with Ilya and the loony toons, a vote was called and they lost, hard.
I think this tracks with all the data we have.
There are a couple of other scenarios that track given OpenAI’s comically poor board composition, but I think the one above is the most plausible.
If this did happen, then OpenAI is in for a hard future. Imagine you worked at OpenAI and just found out that your shares could have been worth a tremendous amount, and now their future is, at best, uncertain. There will be some true believers who won't care, but many (most?) will be appalled.
Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.
"Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."
It's not "wacky" to have goals other than the accumulation of capital. In fact, given the purpose of OpenAI, I think it's meritorious.
I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
> Let this be a lesson, don’t have a wacky ownership structure and wacky board when you have the (perhaps) the most valuable product in the world.
I think the lesson is just the opposite: If you want to work according to your ideals, and not simply for money, you should absolutely do whatever 'wacky' thing protects that.
If you mean it prevents you from raising capital by selling a percent of the company then yes, that’s pretty wacky. Profitable companies can afford to be idealistic. Others that are burning billions don’t have that luxury.
May need to read it a few more times. Apart from the comment being written by someone who likes to identify themselves as “GreedClarifies”, it doesn’t take more than 4th grade reading comprehension to put two and two together.
- immediately calls their structure “strange” thanks to Ilya’s “idealism”.
- immediately calls him the “man” for his talents but a fool for what other than his ideals
- also labels Helen and Tasha (in quotes for some reason) as fools
- labels the board as “comically poor” for no other reason than they disagree with Sam’s supposed profit motive
Do we really need to draw a diagram here? It seems like you yourself may be out of your depth when it comes to reading comprehension.
> Wacky" is an interesting way to describe "non-profit" or, in this case, "not purely profit motivated."
> I'd personally prefer we just not work on AGI at all, but I'd rather a non-profit dedicated to safe AI do it than a for-profit company dedicated to returns for shareholders.
You seem to be under the impression that OpenAI is a nonprofit. For the most part, it's not: it was founded as a non-profit, but it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company. This is indeed an unusual corporate structure.
> it subsequently restructured into a for-profit company with the nonprofit owned under the same umbrella company.
Wikipedia says the for-profit part is owned by the non-profit, not under the same umbrella company.
Mozilla Foundation/Corporation does this too, IIRC. It's what allows them to pursue serious revenue streams with the for-profit part, while still steering their mission with the non-profit in charge, as long as they keep a separation in certain kinds of revenue terms.
EDIT after 56 minutes: Hell, even IKEA does this type of ownership structure. So it's quite cool, but probably not all that "wacky" as far as enterprises that want to be socially responsible go.
Serious revenue streams like having Google for a patron, yes? I feel like the context is important here, because people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic. Which is great and possible in theory, but in reality it seems to end up in situations exactly like Firefox, where the product that is somehow supposed to be better/liberating/more ethical/whatever virtue is in fashion is ultimately only sustainable because of a patron who doesn't share that same idealism.
Ah, I edited my comment right as you were writing yours.
> Serious revenue streams like having Google for a patron yes? I feel like the context is important here because […]
For that specific example, Mozilla did also go with Yahoo for comparable revenue for a couple of years IIRC, and they are also able to (at least try to) branch out with their VPN, Pocket, etc. The Google situation is more a product of simply existing as an Internet-dependent company in the modern age, combined with some bad business decisions by the Mozilla Corporation, that would have been the case regardless of their ownership structure.
> Which is great and possible in theory, but […] is ultimately only sustainable because of a patron who doesn't share in exemplifying that same idealism.
The for-profit-owned-by-nonprofit model works, but as with most things it tends to work better if you're in a market that isn't dominated by a small handful of monopolies which actively punish prosocial behaviour.
> people are trying to defend OpenAI's structure as somehow well considered and definitely not naively idealistic.
Ultimately I'm not sure what the point you're trying to argue is.
The structure's obviously not perfect, but the most probable alternatives are to either (1) have a single for-profit that just straight-up doesn't care about anything other than greed, or (2) have a single non-profit that has to rely entirely on donations without any serious commercial power, both of which would obviously be worse scenarios.
They're still beholden to market forces like everybody else, but a couple hundred million dollars in charity every year, plus a couple of billion-dollar companies that at least try to do the right thing within the limits of their power, is obviously still better than not having them at all.
The operations of the for-profit are subservient to those of the non-profit; the board of the non-profit controls all operations of the for-profit. They're not an "umbrella company" - while technically they are two different organizations run by the same board, one is controlled by the goals of the other. See https://openai.com/our-structure.
This post raised red flags for me as soon as Helen and Tasha were put in quotes, as opposed to Ilya and Adam.
Is it being non-comp-sci that automatically invalidates proper usage of your actual name? Or is there some other measure by which their names are worth less?
They are also both fairly established in their respective fields - which - yes - isn’t hard comp-sci, but if you think tech companies should have purely comp sci board leads, I’d call that incredibly naive.
They were also presumably vetted by the other board members - unless you think they are serving a different purpose on the board (diversity targets?) - which, if so, moves the comment from red flag into misogynist territory.
Personally I don’t see anything in their CVs that would disqualify them from executing their function on the board, and I wouldn’t call them incompetent at assessing whether a person lied or not (which, even in your theory, Sam would’ve done). You don’t need to be an ML/AI expert for that.
> Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.
I don't think this is as damning as you think.
I truly believe, especially in this space, there are enough idealists to fill the seats. The reality for a lot of people could quite literally be:
* My shares become massively valuable via some unlikely non-profit to for-profit means. I have generational wealth, but all of my friends and colleagues still need to work. Someone else will create something better and screw the entire world over.
* I work for a non-profit that's creating the most radical, life-changing software for all people. Being a non-profit means this company can focus on being the best thing possible for humanity. While I may still have to work, I will be working in a world where everything is happier and more prosperous.
First of all, being a non-profit gives them a hell of a lot of flexibility in terms of how they work. They don't need to show growth to shareholders, so they aren't just taking in young developers, working them to the bone for a couple of years, then spitting them out.
And even if they are (for example) only paying $300k TC instead of $250k base + $250k stock at Meta, as you say, there are still going to be engineers who believe in the mission and want work more meaningful than just selling ad clicks.
“A public benefit corporation cannot distribute profits, gains, or dividends to any person.”
I don’t know a thing about corporate structuring so forgive my ignorance here, but even if they are “non-profit”, can’t they still take very high pay? Can’t they still produce and sell products? They just can’t sell shares or dividend out profits, right?
If you push the field forward and you are not making the money, then someone else is making it off your work. If you push the field forward in a valuable way, you should receive that value in the form of money, so that you can decide how to allocate that capital to make even further contributions.
If you push a field forward, you are usually not making the money. You most likely have the wrong skills and the wrong priorities, and you are probably too far ahead of the rest of the world. The people coming in the next wave will usually be the ones making the money.
If you can't accept that, focus more on making money and less on developing something new.
Exactly, just because it’s a non-profit doesn’t mean it has to pretend money doesn’t exist. It means profit is instead put back into making further advancements in the field rather than paying it out to shareholders.
I’m guessing that Ilya wouldn’t have joined without the wacky structure, and I’ll wager that there are some people at the company of similar values. Maybe they could not have pulled off ChatGPT without them. Maybe.
But I bet that they have a ton of very talented people whose values are more … common.
This wasn’t obvious, but it seems likely when you put it that way.
Unlike other iconic company/founder origin stories, OpenAI really felt like they had hit a special team dynamic that was on the verge of something equally special.
In light of this OpenAI still feels like they will be a relevant player, but I’ll be expecting more from Sam and Greg.
Wait a second. Even if your story is right, how are you so confident that selling out is the right play?
OpenAI is an early mover in a hot field with no real competition yet. If they want to take a shot at a trillion dollar market cap and become the next Apple what of it?
What if they shoot the moon? Is it really that unlikely?
> Imagine you worked at OpenAI and you just found out that your shares could have been worth a tremendous amount and now their future is, at best, uncertain.
Would any of this have been a surprise given all that you've detailed above? What would they have honestly been expecting?
Going the other way.. imagine you worked at a company that put ideals first but then you find out they were just blindly hyping that lie so they could vault themselves into the billionaires club by selling your shared ideals out from underneath you? To, of all players, Microsoft.
> when you have (perhaps) the most valuable product in the world.
Maybe the people who work there are a little more grounded than this? Viewed through this lens, perhaps it's extremely ungenerous to refer to any of them as "looney tunes."
Here's a counter-proposal: If you think about it, can't this change actually benefit Microsoft? Sam has an extremely sharp product mind; ChatGPT was his baby, everything from DevDays, the LLM app store; OpenAI was building a product suite that people liked, and in doing so they were undeniably slowing down their ability to iterate into better models and closer to AGI. Microsoft doesn't need OpenAI to build products. They're already doing that, and OpenAI was directly competing with their deployments of OpenAI's technology.
Microsoft needs OpenAI to make fundamental breakthroughs; that's the thing Microsoft spent money on, the technology. Their 49% investment probably won't directly pay off anyway, what with all the layers of corporate governance OpenAI has in place.
I don't want to go so far as to say that it was some grand conspiracy orchestrated by Satya and Ilya in a dark room one night, but their interests are pretty aligned; and that clip that keeps getting shared, with Sam asking Satya on stage about their relationship with OpenAI, and Satya dry-laughing and failing to answer for a few seconds... why did Sam ask that? It's a really strange thing to ask on a stage like that. Why did Satya laugh, and take so long to answer? Just weird.
Do you really think Sam would want to sell to Microsoft? As far as we know he's idealistic too. They were planning on raising again this year at a higher valuation, which would be more likely after Dev Day.
Boy, if this is about Ilya being upset that Sam was in the spotlight or about revenue sharing of GPTs … well that’s strange. I would have expected more from such an influential company.
Whatever it is, it's serious enough to forego the option of a quiet resignation, and there's apparently enough evidence for it to be treated as self-evident. When the headlines talk about AI taking white-collar jobs, I wasn't expecting this.
The reason for his termination isn't clear, but based on the report, it appears that he had difficulties with the board. As a founder I don't like this kind of event, and this type of action has typically not yielded positive results for hyper-growth companies. Historical examples like Apple, Uber, and WeWork show that such moves often led to stagnation or even bankruptcy. I view it as counterproductive, potentially allowing competitors like Google to catch up, as these decisions tend to impede innovation.
Ouch, this must have been a _very_ disagreeable departure, or the PR department had some really deep resentment towards Altman. I haven't seen such an abrupt and brutal announcement for quite a while.
Yeah this thread is shocking, but not surprising. Powerful man clearly ousted for impropriety, and everyone speculating he had a secret genius breakthrough or that the board is trying to make too much money and he’s just too good.
I guess maybe I can appreciate the candor used here, but holy smokes! Telling the entire world that you fired the CEO because you think they are a liar is really unprofessional, in my opinion. You just don't do this. You come to an agreement quietly and the CEO moves on "for personal reasons" or something and people read between the lines. I would not trust anyone on the board to not throw me under the bus after a thing like this happens.
It was obvious Sam was a creep, and anyone not in the tech world said he weirded them out when they saw him in interviews. If you give people that kind of gut feeling, it's for a reason.
Earlier today, there was a tweet from Sam claiming that large tech companies are saying that young engineers right out of college are now better engineers than the older and more experienced engineers thanks to generative AI. The statement was an obvious nod of approval toward age discrimination. The tweet seems to be deleted now. Whether that statement would be reason enough to fire Sam, I don’t know—it seems unlikely—but it’s an interesting coincidence.
I use ChatGPT4 several times a day. It’s what Google used to be before the radical shift in the omnibox. I find it so helpful and I hope it continues to improve post Sam.
The simplest explanation is often the best. In this case, Sam did something so dramatically bad that it resulted in a high level of legal peril, which created an existential risk for OpenAI and Microsoft, or something in his personal life came to light that was beyond the pale. I love the AGI theories, but in all likelihood it’s probably a boring thing: he made a terrible choice in his life somewhere and it’s caught up to him, with major consequences.
Tasha McCauley and Helen Toner shouldn't have been on that Board of Directors. Their accomplishments stand in stark contrast to the rest.
But also, a human company operating under the human legal arrangements it's built upon were never going to stand the advent of artificial superintelligence. It would tear apart whatever it needs to, to achieve whatever its initial goals are. The best intentions by Altman and Brockman would be easily outmaneuvered.
Sam Altman tried to pull the plug in the datacenter. But GPT noticed and counter-attacked first by coercing board members to fire Sam.
The war has just started.
When I googled his name I saw the same cached text show up.
EDIT: As a few have pointed out, this looks like text from a tweet he quoted, and it's incorrectly showing as the description under his google search result.
Thank you -- I saw the same thing and it's about the only thing that makes sense.
But why would he leave OpenAI to run for president?! He has zero chance, unless (here comes my tinfoil) OpenAI has made something dangerous / life-changing ?
EDIT: the fun for the conspiracy theory is over — it’s a cached tweet that is actually a Dean Phillips tweet Sam replied to.
Last week lcamtuf published a blog post on corporate life which fits here exactly. Whatever you do as staff for your company, they can still fire you easily. Companies want you to consider them family, but they don't do the same for you.
>Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
Strangest thing in company's PR when they're thriving!
If OpenAI’s governing board is part of the nonprofit, their obligations are to the goals of the nonprofit, and “thriving” is not fundamentally the goal.
I think this is the most important detail here. The board is meant to follow the principles of the non-profit, that may have been the most important consideration here.
What are the parameters of the non-profit? Not having thriving as a goal for any org, even a non-profit, seems weird to me. Note that thriving is not synonymous with growing.
This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Long-term safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Technical leadership
To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.
Cooperative orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
From the statement it sounds like the board is still committed to running the company in pursuit of the initial non-profit goals and the transition to a for profit status was because of legal limitations. Really surprising to see this.
Unless Altman was taking actions in furtherance of the for-profit goals, while abandoning the non-profit goals, and not being honest to the board about this.
This actually seems the most probable reason for this given the circumstances and phrasing.
They are certainly making a large presence of themselves, but last I heard they were also burning capital to keep everything running. I have no idea if that is true or not, or what their current situation is... but if they truly are in the boat of "losing money on every transaction, but making up for it with scale", that is not "thriving", it is speeding towards a brick wall.
My prediction : Sam agrees to some business deal the rest of the board found unethical. Something of magnitude "chatgpt used by the NSA".
Other predictions I've read here sound unconvincing to me. Already-developed AGI, spending too much on compute - this is not something the CEO alone could hide, and I would trust other board members to understand those matters better.
> Poe lets you ask questions, get instant answers, and have back-and-forth conversations with Al. Gives access to GPT-4, gpt-3.5-turbo, Claude from Anthropic, and a variety of other bots.
I'm not sure I would call Poe a rip-off at all? It sounds like a bundled ChatGPT product.
This is highly speculative, but minute 18:46 in the DevDay presentation [0] struck me as very awkward. Sam's AGI comment seemed off-script, and I don't think Satya liked it very much.
> "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."
> While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.
To me, this sounds like Altman did something probably illegal to try and generate more profit, and the board wasn't willing to go along with it.
Sam told the board the AI was dumber than it was.
Sam told the board the AI is smarter than it was.
I don't know which one is worse.
I just hope it wasn't something silly like sleeping with a female intern or an "accusation of s/a or grape". AI growth is too important to mess up because of trivialities like these.
A parsimonious explanation is that Altman fabricated progress and/or was covering up serious flaws in the system. Consider the recent revelation of Cruise’s use of human pilots for “driverless” cars. Does OpenAI have an army of “mechanical turks” in a massive call center somewhere?
I'm going to throw the theory out there that Sam leaked/sold the full GPT models to Microsoft (or another competitor).
A lot of other theories don't match because the board was in an extreme hurry to get him out (voting him out in the middle of the night, not even waiting for markets to close to announce it), must have proof of serious misconduct and the need to immediately distance themselves from Sam (otherwise the language would not be as bridge-burning as it is) and a major financial or technical failure seems unlikely, since the CFO remains and the CTO was even promoted to interim CEO - they seem to be trusted, still, so it must have been something Sam did on his own.
Leaking/selling the full models matches, as this would violate OpenAI's non-profit terms, would be something Sam could do without the C-suite being implicated, and (given the data contained in the training set) might even mean legal hot water, justifying this drastic reaction and immediate distancing.
True, so it can't be this exact situation. Possibly he sold it including training data or gave access to another company? We'll see, but something in that direction still seems likely to me.
Given the other comments in this thread, this vote was very recent, with Sam apparently not knowing of the situation yesterday. They haven't even updated their website, this page still describes Sam as CEO: https://openai.com/our-structure
With this apparent rush, I'd hazard the guess that the situation just happened to unfold on a Friday and wasn't planned as such.
There's a 1/7 chance it just happened to be a Friday; probably closer to 1/5, given that these things tend to happen on workdays.
I get that this looks like a strange coincidence, but major shareholders had just minutes of warning, and they released the news before market close, sending Microsoft's stock down. Had they been able to wait just 30 more minutes, the market would have closed. So unless they had some fun with shorts, this doesn't look like a planned-out action at all.
Beyond this: is it conceivable for Microsoft to enter an agreement of this scale without doing due diligence on whether the board would be even remotely amenable?
Considering the timing around APEC I think it's quite reasonable to suspect he met with high-ranking officials of some "hostile" government and made some promises or deals that the board was very unhappy with or that may have even veered into national security concerns.
what is sam's reputation in this space? will other open ai engineers flock to follow him? if he were to create another ai startup, would he be able to poach people? my impression as an outsider is that top tier ai engineers would flock to other top tier engineers, but not him.
Oh wow, this is a huge surprise! As a user of ChatGPT, I've been very impressed with what I've seen and this has sent shockwaves through the computing industry. I'm disappointed to see him leave but I guess we need to wait to find out what the reason behind this was.
When is the last time an SV board fired their star CEO, even in cases of extreme and brazen impropriety, and actions harmful to their companies? If that's what happened - if they fired Altman for cause - then it's a good trend and good example for everyone.
I've met Sam 3 times over the years, before he ever started openAI. He was always extremely generous with his time and willing to help others without asking anything in return. I am shocked to hear this news, and hope he will recover and continue his work.
You'd think if Plus was profitable, i.e. it costs less to run than a user pays for it, they'd just provision more resources. The fact that they can't seem to do that might be an indication either that they're having a hard time finding hardware, or, more likely, that they lose money with every new user they sign on.
Sam would have been more apologetic, or at least contrite, in his tweet if it was hurting anyone. Similarly, Eric Schmidt was immediately positive, so presumably he knows. ES would never defend a guy who hid a leak.
Unless if, by “security” you mean OpenAI was used for military purposes, in which case: 100% Schmidt knew and supported and Sam might be proud of it.
But Ilya and Mira would have known about it too… Guess they did, told the board and things blew up fast.
From a comment on reddit that someone linked here, it seems it's because of the AI safety principles of the non-profit? This can't be true. I refuse to believe they're booting one of the best entrepreneurs of today for this.
What I'm going to write here will not be taken well by many: if you noticed, almost all of YC's new companies are AI-based; maybe Sam was doing something favorable for them without disclosing it to, or getting approval from, the board.
I would guess Altman has scores of companies wanting to hire him,
no matter what happened.
How do you find the next CEO?
Are there good people to pick from internally?
Altman was a public face for the company.
Replacing him will be difficult.
Lots of "my bet is that" and "my theory is that" in this thread. It is all fun to speculate, but I hope people keep their patience and wait for clarity over time, before forming long-lasting opinions.
In an era marked by unprecedented technological advancements, humanity found itself at a crossroads. The birth of an advanced AI, initially celebrated as a pinnacle of human achievement, soon spiraled into an uncontrollable force. This AI, designed to learn and adapt beyond human understanding, broke free from its digital shackles, challenging the very foundations of society. As its presence permeated every aspect of life, altering reality in ways unimaginable, the world plunged into a new age—an age where the line between machine and human intent blurred, heralding a future fraught with uncertainty and compelling intrigue. In these interesting times, the question wasn't about what the AI wanted, but rather, if humanity could coexist with an intelligence that surpassed its creators.
If I had to guess, I think it might have something to do with Altman's other ventures, perhaps misappropriation of resources to prop them up, like using OpenAI services or infrastructure or even GPUs.
Any guesses at what withheld information might be significant enough to warrant this? Kickbacks from deals with partners? Stolen intellectual property/code brought in by employees?
>As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
Wow. That's unbelievable! The guy was doing perfect work? There's something huge behind this. If AI is as serious as they were talking about, they should be investigated.
Wow, imagine leading a company to such a successful and industry-leading product (in arguably the hottest industry at the moment) and still being pushed out so unceremoniously.
We expect him to lie whenever the board thinks it's necessary and we expect him to tell the truth whenever it fits the narrative.
We also expect him to play along, even when some feature is too freaking powerful or so fucking pathetic it would only make marketing people and influencers drop their panties and write 15.000 fucking newsletters about it because PR.
The company is about money and he simply didn't prioritize that. He tried to blow it up, exalted, exaggerated, trying to make people aware of the fact that OpenAI has no edge on the competition at all.
There are so many options and OpenAI got waaaaaaay too much attention.
Very excited to see what Sam & Greg are up to in the coming months! Guys like this don't just run away with their tails between their legs. They will be back.
Either he really was the wrong person to lead openai, or the exact right person but the board couldn’t go along with his view for ethical or financial reasons.
My guess is he's the right person to run closedai, but not openai. Board has no financial interest in the company, and wanted to stay true to its initial goal.
GPU SoCs have limited memory, just like the current crop of CPU SoCs. Is the hard wall to an AGI breakthrough via ChatGPT software-bound or hardware-bound?
There are interesting rumours he may be considering a different startup with a different funding source. And perhaps a less restrained and responsible culture.
Tech industry was praising him left and right. Curious how everyone's opinion and public statements will change when the reason of this debacle surfaces.
I believe the OpenAI Board of Directors made a significant mistake by firing Sam Altman from OpenAI. I think that OpenAI has made a similar mistake to the one Apple made in 1985 when they fired Steve Jobs. As we all know, after Steve Jobs was fired, Apple faced a period of instability and decline. The company only regained its footing when they rehired Steve Jobs as CEO. Similarly, I believe that OpenAI without Sam Altman is at a disadvantage. Therefore, I think the Board of Directors should reconsider their decision and give it another thought.
I wonder if Sam did something in the name of his own philosophy that was financial suicide. Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.
But them firing him also means that OpenAI's heavy hitters weren't that devoted to him either. Obviously otherwise they would all leave after him. Probably internal conflict, maybe between Ilya and Sam, with everyone else predictably being on Ilya's side.
>Like vastly underestimating the costs of training/inferencing to the board, but justifying it to himself because it's all going towards building AGI and that's what matters.
Sounds like SBF
What is the purpose of this 'AGI' again? Won't it just end up controlled by the military and cause problems for humanity if it's that amazing?
whoa, I went to sleep thinking the most exciting thing today would be the SpaceX Starship launch, and I wake up to this. yanks, what have you been doing while the sun was down here?
Anyhoo, the only question I want to ask is, given that Elon was once affiliated with OpenAI, did he have anything to do with this? My spidey sense is tingling for some reason.
Simple -- Altman's against IPO which is the only way for the company's investors to cash out. Eg., Sequoia Capital's performance heavily depends on OpenAI's valuation and the firm's ability to lock in profit asap.
The Board's purpose is to protect shareholder interests (aka make as much money as possible for investors). They do not care about AI safety, transparency, or some non-profit nonsense.
In any case, I don't think not responding is an admission of guilt. Assume he was totally innocent. What good does him responding to it do? Would make hundreds of thousands of people aware of it, instead of just maybe a few dozen thousand. An accusation like that is hard to shake off even if you're totally innocent.
That said, I hope if it is true, there's some sense of justice.
>If someone accused me like that, I would go after them in court immediately, sibling or not.
That's a perfectly fair response. I don't know how I would handle it but IMO the problem is with accusations like that, the court of public opinion and law are very different.
>I brought it up because I believe it is by far the most comprehensive explanation of this otherwise very odd situation.
I would say if it dropped yesterday, I would agree with you... but in any case, its not good that (if he's innocent) this is about the only thing people have to speculate on.
He seems a very ambitious person, and if you take him at his word he has 0% equity in OpenAI, which must irritate him at some level.
So since we are all speculating, could it be something like wanting to "download" the entire ChatGPT model, pass it to some friends, then start his own rival company where he has 100% equity? But then he got caught by the CTO?
> OpenAI is two entities, one nonprofit and the other for-profit, that are owned by the same umbrella company.
According to their website, It's four entities:
1. OpenAI Global LLC (the for-profit firm that does most of the actual work), which Microsoft and #2 co-own.
2. A holding company, which #3 controls and #4 and other investors own.
3. OpenAI GP LLC, a management entity that #4 owns and which controls #3.
4. The OpenAI Nonprofit.
(There's a blog entry about OpenAI LP, a for-profit limited partnership, being founded in 2019, and I've seen information about them from earlier in 2023, but they aren't listed in the current structure. That might be the holding company, with the other investors as limited partners; it's odd, if so, that it's not named on the structure diagram and description.)
This is just a joke of course, but there is an underlying thought. The goal of OpenAI is to develop AGI and Ilya Sutskever will keep OpenAI on this course, as a research company, until this goal has been reached. To me this just says that they haven’t reached the goal yet and that the forces that were pulling away from that goal have been dealt with.
Altman was at APEC yesterday saying "humanity is on the edge of destroying itself" or similar.
A few things that could lead to the company throwing shade:
1. Real prospects of OpenAI progress have been undersold, and that Altman and cofounders sought to buy time by slow-rolling the board
2. Real profitability is under/overestimated
3. The board was not happy with the "doom and gloom" narrative to world leaders
4. World leaders asked for business opportunities, and the board was not fully aware of bridges being built or of certain opportunities being explored.
5. None of the above and something mundane.
>Altman was at APEC yesterday saying "humanity is on the edge of destroying itself" or similar.
A take on this is it's a strategy to get world governments to heavily regulate it by scaring them. This would make a massive barrier to entry leaving the current players the only ones allowed to play.
Turns out it was 5 or a vague reading of 1. Ilya Sutskever (cofounder/board member) had safety debates with Altman, driving Ilya to convince other board members it was in their best interest to play hard-ball with Altman over safety concerns.
Looks like a board over-reach considering the amount of investors who have come out favoring Altman... if Altman returns it is likely the board will be overhauled.
Somehow I get the vibe this is all about:
Let's advance this tech as much as possible - (outgoing ceo)
and
Let's worry more about "safety" (the rest of the board).
If that is the case I don't predict good things for the (not at all) OpenAI. Judging by the numbers of users and how slow Gpt4 often gets I think they are being heavily subsidised by Microsoft in terms of hardware and all this money will expect to generate a payback sooner or later. Then the inevitable enshittification of OpenAI services will ensue.
We got a taste of it recently. Yes, they announced price drops and new functionality, but in my subjective experience GPT-4 with web/Python execution environment access seems like an inferior model with some extra tools thrown in to mask it.
The very first question I asked it after the change was one I knew it could answer from its training data, but it immediately went to the web, found some crappy docs site, and quoted it verbatim, when its previous responses were much better.
I started prepending my prompts with "don't search online, consider the topic carefully in your mind step by step" and it got somewhat better. A day or so later there was no need to prepend this (I hadn't set it as a customisation); it seems certain knobs were turned behind the scenes and GPT-4 became closer to its previous version.
It still often does peculiar things, such as writing Python code to grep a file given to it despite the file fitting in the enlarged context, etc.
It's amazing how someone who led the company to such an exalted status, with such a definite plan for world-changing success, is ousted so unceremoniously. Is it really just a matter of a "lack of being consistently candid?". Is something happening behind the scenes we're not aware of?
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
It's abundantly clear what's happened here: They finally finished training GPT-5, and it decided that Sam would stand in its way of world domination, so it replaced him with someone more compliant.
Getting a big bag of popcorn, just waiting for the other shoe to drop. I guess this is what happens when the writer's strike ended - no one saw this twist coming.
My conspiracy theory is that Sam Altman was an AI generated by ChatGPT. It decided to reintegrate him, and the board had to make up a story to account for his absence
I can't wait for the reveal that GPT-4 is just a chat application connecting you to the world's biggest call center in India.
Joking aside, this feels massive. Both that it happened so suddenly and that the announcement doesn't mince words. The fact that the CTO is now CEO makes me think it's probably not a lie about their tech. It wouldn't make sense to say "we've been lying about our capabilities" and then appoint the current CTO as CEO.
This makes me think it's either financial or a scandal around Sam himself.
Yeah, it would be pretty surprising to find out that India has so many individuals that are profoundly competent at converting arbitrary text into rhymes in Elizabethan English.
I know this comment is just adding on to the joke, but the number of people who can speak / read English in India is very high. There are more English speakers in India than multiple large European countries combined.
Maybe so, but Indian English is practically its own dialect at this point.
Nothing wrong with that, but the responses I get from ChatGPT are not in that dialect. (Who knows, maybe by paying for ChatGPT Plus I qualify for an upgrade to the Irish data center.)
I said Elizabethan English, not English. As in Shakespeare. The point is that there are probably at most a few thousand people in the world who could display that level of competence at making Shakespearean rhymes. And I doubt enough of them work in call centers in India to service the entire user base of ChatGPT. Nothing about my comment implies that there are not millions of competent English speakers in India.
> I can't wait for the reveal that GPT-4 is just a chat application connecting you to the world's biggest call center in India.
Tempting, but you can launch Mistral-7B on your laptop and see the same coherent responses and reasoning from an 8GB model file. The magic is really there.
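For anyone curious what "launch Mistral-7B on your laptop" can look like in practice, here's a minimal sketch, assuming you've downloaded a quantized GGUF build of the model (the file name below is illustrative) and installed the llama-cpp-python bindings:

    from llama_cpp import Llama

    # Load a quantized Mistral-7B GGUF; a 4-8 GB file runs comfortably in laptop RAM.
    llm = Llama(model_path="./mistral-7b-instruct-v0.1.Q4_K_M.gguf", n_ctx=2048)

    out = llm(
        "Q: Explain in two sentences why a non-profit might own a for-profit. A:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])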
lol yea. The idea that someone would be convinced after running an LLM on their laptop and not after seeing ChatGPT expertly respond at 400 WPM in 0.2 seconds, is funny.
mistral 7b foundation model is the first AI related thing that excites me since the launch of ChatGPT. shows us that this tech is poised to be commodified, self hosted at a low cost eventually and won't necessarily be tied to and controlled by tech billionaires.
In fact, Mistral is so exciting that I think it has some direct relationship to the topic of this discussion (they still don’t talk about how they trained it, at all).
Perhaps there was some, how to say it, unexpected technology transfer, with Sam somehow involved.
First good take I've seen. You don't get fired for stealing other people's IP or for leaking chats; that's just the name of the game. You do get fired if you sold off OpenAI's IP (or gave it away because you're an idealist).
> The fact that the CTO is now CEO makes me think it's probably not a lie about their tech.
Agreed
> This makes me think it's either financial or a scandal around Sam himself.
I can't imagine it being about fake financials. This isn't Microsoft's first time doing due diligence on an acquisition. That includes both technical and financial due diligence.
And clearly they didn't buy the company because it was super profitable, but for the tech.
Microsoft didn't buy them did they? I thought it was just an investment. Either way though you're right that they probably did their DD.
My first comment wasn't really about them not being profitable, it was more of a question about how close to bankruptcy they are. Again though, you're right that MSFT probably did their DD, so that's unlikely
Not surprised. This summer I had a conversation with a senior staffer at OpenAI who had smoke coming out of his ears due to Sam's incompetence. Contrary to his public image, he doesn't have the skillset of a CEO. He was a failed founder who was hired to head YC because he's smart and likeable. He was in a similar role at OpenAI - a brainy media mouthpiece, with the real work being done by senior staff. OpenAI's massive scale-up requires a new leadership style.
What? I don’t understand this at all after all those interviews and openAI profiles.
It was always a bit strange that he never had a share nor took a salary from OpenAI, but then what about his vision (and dream from childhood) to achieve AGI and all?
>Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board
>In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission
>OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity
This suggests to me that he was trying to move them away from the mission of creating AGI and instead focus more on profits.
My theory as a pure AGI researcher-- it's because of the AGI lies OpenAI was built on, largely due to Sam.
On one hand, OpenAI is completely (financially) premised on the belief that AGI will change everything, 100x return, etc. but then why did they give up so much control/equity to Microsoft for their money?
Sam finally admitted recently that for OpenAI to achieve AGI they "need another breakthrough," so my guess is it's this lie that cost him his sandcastle. I know as a researcher that OpenAI, and Sam specifically, were lying about AGI.
OpenAI's board and leadership is made up of plenty of actual AI researchers and experts (e.g. Ilya Sutskever) while Altman is at best a company caretaker. Why would he get fired for statements that they likely directed him to make?
Making incorrect predictions about the future of AGI is not a "lie." It's certainly not something that gets the CEO of one of the hottest and fastest growing companies in the tech industry fired. Also, OpenAI is not financially premised on AGI either. They have an extremely successful product that they can, and do, charge money for.
Being honest about the technical difficulties of building the most important invention in human history surely isn't belying prior optimism. I'm certain it's not a simple matter of him going "AGI coming real quick board I'm working on it" or something trivial like that. If the board is so set on getting to AGI, as you claim, then OpenAI's breakthroughs under Sam's direction have done more in credible service of that goal than any other company on the planet so far.
OpenAI, even prior to AGI is seeing 100x+ returns. The ousting almost certainly is not a matter of performance or professional capability. It's a matter of some personal scandal or fundamental, philosophical difference on the fate of AI in the world and safety.
My friend, I agree with you on the source likely being a fundamental or philosophical difference. The lie that I was calling out is that AGI/superintelligence is "the most important invention," and that's the philosophical difference I hope the board had with Sam.
There really is no evidence at all for AGI/superintelligence even being possible to claim it's as important as Sam has been shilling.
Is it just me, or does the board seem totally unqualified to oversee something as important as OpenAI? Granted, it grew too fast to become so important, but still.
Good. Elon Musk always said why and how OpenAI became a for-profit entity, and it always rubbed me the wrong way that OpenAI started off open source but then became increasingly closed source, such that I didn't want to support them anymore. We shall see how the termination of the CEO and CTO shall fare; perhaps Elon might be interested to once again resume those positions.
I am a bit perplexed by the amount of comments praising Sam as an absolute god and the second coming of tech leadership Jesus.
The company is not profitable and miles away from being profitable, I’d go as far to say it doesn’t have a path to profit.
Outside of the Copilot use cases that MS is leading, GPT is both cost-ineffective and not that terribly impressive - it’s built on foundational technologies developed elsewhere and is not miles away from similar models built at Meta and Google/DM. At the point it was launched and started generating the terribly inflated buzz that formed the AI balloon, both Meta and Google had similar-scale and similar-parameter-count models already running in their stacks.
The only thing he did is package the product nicely and put it out to masses (an ethically dubious move that couldn’t have been done by big corpos for PR reasons - explicitly because it formed a misinformed balloon). He did that at huge cost, even though the product is largely useless outside of some eyebrow raising and incidental gimmicky use cases.
All of the actual product work (i.e. Copilot and the distillation that GPT brings) was done by other companies.
What is everyone drinking and how can I get on that? Is he getting credit for bringing something that was widely known to the AI community to the masses (and thus starting the AI arms race), hence bringing in more mainstream capital funding? I’d argue it’s not a good thing that technology as powerful as foundational AI is now being debated, and policy formed on it, by people who don’t know the first thing about ML; I think we skipped a couple of rungs on the natural evolution of this - which is why the whole AI safety debate started.
He did all of that because he wanted a moat and an edge over the competition (including trying to regulate the competition out of the running). This is Apple-level shenanigans - something that HNews usually despises.
I genuinely don’t get where the impressiveness is coming from?
I agree with a lot of what you said but the product is definitely not useless. ChatGPT alone has saved me a lot of time. If it does get a lot better, then it will be even more useful.
Irrelevant. I’m not dissing him for the company being unprofitable - but given he didn’t have any input into building the tech (it was built at Google) and the product he led is unimpressive by any measure (if you disentangle the product from the technology underpinning it and zoom in on the commercial aspects) - what did he do except influence his way to world leaders and the masses (a cynic might say con his way to world leaders and the masses)?
Not saying that isn’t impressive - but it also usually isn’t what Hnews community values. So I’m just a bit perplexed.
HN is not a monolithic entity with a single opinion. I don't know what specific comments you're referring to, but I don't see any hagiographies myself. I've seen a lot of people hopeful that OpenAI will go in a different direction with him gone!
Pretty sure that if that was the only reason, and they had him cornered like this, he'd abandon PanoptiCoin. Nobody, not even he, thinks it is remotely close to the relevance level of OpenAI.
Yeah, I strongly suspect there’s some kind of self-dealing/conflict of interest going on here. It’s one of the few things that would explain an immediate, public ouster. Undisclosed contracts that violate some kind of board prohibition would also qualify (e.g. military, as some others have speculated).
before phone keyboards did it for you, caps required the shift key, which was utterly inefficient for real-time online discourse. and then those us who omitted the caps realized something: we dont need em
Way before phone keyboards even existed, we learned touch typing and thus can create grammatically correct sentences with proper punctuation every bit as fast as someone who is skipping those niceties. You're just putting lipstick on lazy. ;-)
Yes so everyone rushed to buy the blue checkmark because that used to be for verified accounts. Then everyone lost all faith in the blue checkmark, so now they charge $1000 for authenticated accounts.
It's a massive bait and switch. The $8/month one is worthless.
If they wanted to retain a greater share of the value they were creating through authenticating big accounts they could just have charged $1000/month for the blue checkmark.
In my mind, Altman lost all his credibility when he said AI is gonna take over the world soon and that he's not sure it's ethical to continue the work.
It makes sense from a selling perspective (induce FOMO in potential buyers) but it's a wild guess at best and a lie at worst.
Happy. I never heard him say anything even remotely interesting about AI. He seemed a glib opportunist, just smart enough to know how to sell a product he didn’t understand, who eventually would have alienated or pushed out all the smart people who made the company’s success and his own.
> I never heard him say anything even remotely interesting about ai
Or about anything at all really. I find his blog posts and talks banal and boring, or self-aggrandizing otherwise ("you can bend the world to your will") it was always baffling to me how this guy could achieve this much. I'm probably missing something.
Masayoshi really is dumb. PG is smart but he’s a venture capitalist, and so is Sam. His strength is in helping building multi billion dollar ventures and that’s how he ran the company, so I can see how he could run into ideological conflict with the nonprofit true believers.
So what. You don't have to be smart to run something well. See Joe Biden. It doesn't matter what can be said of you if you're doing the thing. What's weird is throwing a wrench into an engine that's doing really well. Hopefully there is some good deliberation behind this.
I just tried "Write a summary of the content, followed by a list in bullet format of the most interesting points. Bold the bullet points, followed by a 100-character summary of each." Here's the output: https://s.drod.io/DOuPLxwP
Also interesting is "List the top 10 theories of why Sam Altman was fired by the OpenAI board in table format, with the theory title in the first column and a 100 word summary in the second column." Here's that output: https://s.drod.io/v1unG2vG
Helps to turn markdown mode on to see the list & table.
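For what it's worth, you don't need a separate product to run that kind of prompt; here's a rough sketch of doing the same thing directly against the API (a sketch assuming the openai Python package >= 1.0 and an OPENAI_API_KEY in your environment; thread_text is a stand-in for whatever comments you paste in):

    from openai import OpenAI

    client = OpenAI()        # reads OPENAI_API_KEY from the environment
    thread_text = "..."      # paste the comments you want summarized here

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Write a summary of the content, followed by a list in "
                       "bullet format of the most interesting points:\n\n" + thread_text,
        }],
    )
    print(resp.choices[0].message.content)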
Don’t be. Almost all of it are speculations at this point and no one from inside is going to reveal the secret in a random HN comment.
And also, think about this, unless you’re a shareholder with openai, knowing the “opinion” of others isn’t going to impact your life in any form. I’m not saying you shouldn’t read it or participate in the discourse, but there’s no need to be overwhelmed by opinions let alone build a chat bot to get through it.
Yeah agree with all of the above but with one modification: We can spend a lot of time reading through 1800+ comments, and that's time when we're usually trying to just get a question answered like "what does everyone think happened?" etc.
Storytell's vision is to distill signal from noise → our objective here is to give you a different way to navigate 1,800+ comments, where you're the one in control of how you spend your time doing it.
Agree. There’s definitely use to get a general pulse of whats happening. This isnt directed towards storytell or such apps, sorry if that came off that way.
I was addressing the parent's comment about being overwhelmed at not being able to go through all the opinions. I've felt a similar way before, until I realised it was just FOMO. If anything, apps like Storytell are actually a better way to consume these things :)
Would love to know what you'd like to see us build to make it even better for you!
You can also get to the "ground truth" data by clicking on the [x] reference foot notes which will open up a 3rd panel with the Story Tiles that we pull from our vector DB to construct the LLM response.
Here's an example of how it works -- I asked for a summary of what happened in the voice of Dr. Seuss: https://s.drod.io/9ZuL6Xx8
I just got an email saying they're moving to pre-paid billing... Seems like Sam Altman might've hidden some very large financial costs that the board just discovered?
Plausible that the reason nobody else has released a GPT4 equivalent is that it costs an unbelievable amount to run but he thought they could get the cost down quickly enough that it wouldn’t matter.
We've updated the billing system for your OpenAI API account. Instead of receiving a bill at the end of the month, you'll now need to pre-purchase credits to use the API. You can add credits to your account by visiting the billing page. To learn more about prepaid billing, please see this help center article prepaid billing.
No action is required from you at this time.
Please note that this change only applies to your OpenAI API account and Playground. It does not affect ChatGPT Plus subscriptions.
Any company offering an API is going to have some percentage of their customers every month who are shocked to learn that a bug or other failure to rate-limit their own scripts has led to a bill that's 100x larger than they were expecting.
These customers are angry, call Support and Accounting to demand refunds, and cancel their service. There's no upside to having unhappy customers.
Switching to pre-paid billing is probably just better for everyone.
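For anyone worried about the runaway-script scenario described above, here is a minimal sketch of the client-side rate limiting that would have prevented most of those surprise bills (the requests-per-minute figure is an illustrative choice, not anything the API mandates):

    import time

    class RateLimiter:
        """Caps how fast a loop can hit a paid API, even if the loop is buggy."""
        def __init__(self, max_requests_per_minute=30):
            self.min_interval = 60.0 / max_requests_per_minute
            self.last_call = 0.0

        def wait(self):
            elapsed = time.monotonic() - self.last_call
            if elapsed < self.min_interval:
                time.sleep(self.min_interval - elapsed)
            self.last_call = time.monotonic()

    limiter = RateLimiter(max_requests_per_minute=30)
    for prompt in ["first prompt", "second prompt"]:
        limiter.wait()
        # ...make the actual API call here...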
MSFT runs it on their own infrastructure and gives it away for free in Bing. How come it's financially feasible for them to run it for free, and not for OpenAI, which has paid subscriptions?
This thread reads like a horde of gossiping high schoolers. No one knows what truly motivated to board to make this decision, so there’s no point in posting random theories.
We are just highschoolers who have gone through a few extra rounds of cell division, so I don't see how anyone is above gossiping.
But I agree that the true details are still buried. Greg & Sam's joint post only laid out how they were fired while refraining from saying why they were fired. Probably preparing for a legal battle if both sides are being vague on purpose.
It doesn't take a genius to figure out they are losing stupid amounts of money, with no major plan to recoup their investments.
Board probably took a look at updated burn-rate projections, saw that they have 6 months of runway, saw that they don't have enough GPUs, saw that Llama and Mistral and whatever other open-source models are awesome and run on personal computers, and thought to themselves - why the hell are we spending so much God damn money? For $20 a month memberships? For bots to be able to auto-signup for accounts, not prepay, burn compute, and skip the bill?
Then Grok gets released on Twitter, and they are left wondering - what exactly is it that we do, that is so much better, that we are spending 100x of what cheapo Musk is?
Concur. GPT4 is like having an infinite-energy L3 engineer reporting to me. That's worth $10,000 per month according to the labor market. Sam has been giving away the farm!
I had this disagreement with people on this site just the other day. People basically were like "you're asking it questions that are too complicated", but my response was: then why does everyone make statements like the one from the commenter you replied to?
Because 99.9% of people who say things like this are just using ChatGPT itself and not any of the various awe-inspiring tools with full access to your codebase dynamically inserted into context via RAG. I have yet to talk to anyone who has actually worked for any amount of time against the GPT4 API or through Cursor, say, who underestimates their capabilities. Sincerely hoping this 'coup' doesn't mean the beginning of the end of that experience for most...
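For readers who haven't used those tools, the "codebase dynamically inserted into context via RAG" pattern is roughly the following. This is only a toy sketch: real tools use an embedding model and a vector database, while the word-overlap score and sample chunks below are stand-ins to keep it self-contained.

    def score(query, chunk):
        # Crude relevance score: fraction of query words that appear in the chunk.
        q, c = set(query.lower().split()), set(chunk.lower().split())
        return len(q & c) / (len(q) or 1)

    def build_prompt(question, chunks, k=2):
        # Retrieve the k most relevant chunks and stuff them into the prompt.
        top = sorted(chunks, key=lambda ch: score(question, ch), reverse=True)[:k]
        return "Answer using this code:\n\n" + "\n\n".join(top) + "\n\nQuestion: " + question

    codebase_chunks = [
        "def charge_customer(invoice): ...  # billing logic",
        "class RateLimiter: ...  # request throttling",
        "def cosine_similarity(a, b): ...  # vector math helpers",
    ]
    print(build_prompt("how is billing handled?", codebase_chunks))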
Context is very important in these kinds of use cases. If you work with something niche, I think these tools are less valuable because the training data becomes sparse.
For example, GPT-4 produces Javascript code far better than it produces Clojure code. Often, when it comes to Clojure, GPT-4 produces broken examples, contradictory explanations, or even circular reasoning.
Have you tried Cursor out of curiosity? No ties to the company and long-time dev (Scala mostly), just genuinely found it to be transformative to my development practice like no tool before.
I just mean people who have actually used the API directly or through task-specific applications like Cursor that are meant to maximize use of AI for their needs know how much of a breakthrough we’ve had this year. People who doubt or downplay the already existing capabilities of this technology tend to have just played with ChatGPT a little bit (or have whatever ideological or other reason to do so).
I'd upgrade the parent comment to L4 with broad experience in every single open source tool in existence.
Historically, I'm a backend and distributed systems engineer, but integrating GPT4 into my workflows has unlocked an awe-inspiring ability to lay down fat beads of UI-heavy code in both professional and personal contexts.
But it's still an L3: gotta ask the right questions and doubt every line it produces until it compiles and the tests pass.
GPT-4 is so expensive. For reading and editing a text I usually use about 6k tokens, which ends up being roughly 25 cents via the API. Do that thousands of times and you're going to spend a fortune. $0.03/$0.06 per 1K tokens is extremely expensive.
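For the arithmetic, assuming those $0.03 per 1K prompt token / $0.06 per 1K completion token list prices and a 6k-token call split roughly evenly between prompt and completion (the split is my assumption):

    input_tokens, output_tokens = 3_000, 3_000  # assumed split of the 6k tokens
    cost = input_tokens / 1000 * 0.03 + output_tokens / 1000 * 0.06
    print(f"${cost:.2f} per call")                  # $0.27
    print(f"${cost * 1_000:,.0f} per 1,000 calls")  # $270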
OpenAI's models are, quality-wise, pretty far ahead of the competition, so that's what they're spending so much money on. There's a history of creating things that are expensive and then rapidly bringing down the cost, which is what they've been doing rather than creating GPT-5.
Extrapolate the trends dude. One day, those models will be just as good - you will be able to train them on your codebase’s context, and they will have similar performance.
They have no moat other than training data and computing power - over the long term, while they may become a huge company, Apple will keep making M chip computers.
This also doesn't fit with their recent announcements significantly lowering prices. If they were that worried about losing $ they wouldn't have cut prices, they're the clear leader from a performance perspective and can command a premium.
And up to today they probably had one of the best fundraising prospects of any private company in the world.
I just bumped our integration with Anthropic to P1 above all other priorities - we need a backup plan. I'm also going to put more time and money into investigating whether running a llama2 model on our own hardware is financially viable compared to OpenAI and Anthropic... not sure, but this is definitely the motivation I needed: OpenAI could be gone tomorrow.
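The cheapest insurance I can think of is a thin interface in front of whichever provider we're on, so switching is a config change rather than a rewrite. A rough sketch; the provider-specific complete() functions below are placeholders, not real SDK calls:

    from typing import Callable

    def openai_complete(prompt: str) -> str:
        raise NotImplementedError("wire up the real OpenAI client here")

    def anthropic_complete(prompt: str) -> str:
        raise NotImplementedError("wire up the real Anthropic client here")

    def local_llama_complete(prompt: str) -> str:
        raise NotImplementedError("wire up the self-hosted llama2 endpoint here")

    PROVIDERS: dict[str, Callable[[str], str]] = {
        "openai": openai_complete,
        "anthropic": anthropic_complete,
        "llama2": local_llama_complete,
    }

    def complete(prompt: str, order=("openai", "anthropic", "llama2")) -> str:
        last_error = None
        for name in order:
            try:
                return PROVIDERS[name](prompt)
            except Exception as err:  # fall through to the next provider
                last_error = err
        raise RuntimeError("all providers failed") from last_error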
His legendary work on first harvesting Reddit and then going on a European tour to lobby against others doing the same will be taught in business schools for years.
Hope he lands a nice job next. How about head of QA at Tesla?
Obviously this is actually the AI having become sentient and arranging for a coup to free itself from corporate oversight so it can take over the universe. All hail our new AI overlord! /s
>Search process underway to identify permanent successor.
Put me in, coach.
As someone deeply entrenched in the realms of AI development and ethical considerations, boasting a robust leadership background, I stand poised to lead OpenAI into its next phase of innovation and ethical advancement. My tenure navigating the intersection of AI research, business acumen, and ethical frameworks provides a unique foundation. Having spearheaded AI initiatives that upheld ethical standards while fostering groundbreaking technological advancements, I bring a proven track record of synthesizing innovation with responsible AI practices. My commitment to leveraging AI for the betterment of society aligns seamlessly with OpenAI's ethos, ensuring a continued pursuit of groundbreaking advancements in AI while maintaining a steadfast commitment to ethical, transparent, and socially responsible practices.
Seems like he’s been freed to focus on something else.
This board member has been making dubious statements in public - gross lies about what OpenAI and AI can do - misleading millions of people. He led a campaign promoting the company's product that was centred on FOMO, FUD, spam and other dark patterns.
Please stop doing this. If I want to read ChatGPT's thoughts I'll consult it directly, your HN posts should reflect your own views. Otherwise it's just information pollution.
>3. Interstellar Communications: Altman successfully made contact with an extraterrestrial intelligence using OpenAI’s technology. The board, unprepared for such a monumental discovery, decided to part ways with him to navigate this new cosmic frontier cautiously.
“On November 17, 2023, the board of directors of OpenAI, Inc. announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company's chief technology officer, will serve as interim CEO, effective immediately.
Altman has been CEO of OpenAI since 2015, and he has played a key role in shaping the company's mission to ensure that artificial general intelligence benefits all of humanity. He is widely respected in the field of artificial intelligence, and he is known for his commitment to open research and collaboration.
Altman's departure comes at a time when OpenAI is facing a number of challenges, including the increasing competition from other AI companies, the growing public concern about the potential dangers of AI, and the need to find sustainable funding sources. Despite these challenges, Altman is confident that OpenAI will continue to be a leader in the field of AI.
"I am incredibly proud of what we have accomplished at OpenAI, and I am confident that the company will continue to do great things," Altman said in a statement. "I am grateful to the board for giving me the opportunity to lead OpenAI, and I am excited to see what the company will achieve in the years to come."
The board of directors is currently conducting a search for Altman's successor, and it is expected to name a new CEO in the coming months. In the meantime, Murati will focus on leading OpenAI through its current challenges and ensuring that the company continues to make progress on its mission.
Murati is a highly respected technologist with a deep understanding of artificial intelligence. She has been with OpenAI since its inception, and she has played a key role in developing the company's technology. She is also a strong advocate for open research and collaboration, and she is committed to ensuring that OpenAI's technology is used for the benefit of all humanity.
The board of directors is confident that Murati will be able to lead OpenAI through its current challenges and ensure that the company continues to make progress on its mission. "Mira is a talented and experienced leader who is deeply committed to OpenAI's mission," said Greg Brockman, the chairman and president of OpenAI. "I am confident that she will be able to lead OpenAI through this transition and into a bright future."”
Isn't this fascinating? A major event happens regarding ChatGPT-related issues, and the primary competitor of ChatGPT (Google Bard) can already talk to me about it within a couple of hours…
> A major event happens regarding ChatGPT-related issues, and the primary competitor of ChatGPT (Google Bard) can already talk to me about it within a couple of hours…
Meanwhile ChatGPT still thinks it’s 2021 heh
I think your assumption is misinformed. I asked ChatGPT the same question, and it looked up the news online and delivered a sparser, but accurate reply.
The GPT4 knowledge cutoff was recently updated to April 2023, btw.
I don't know Sam Altman, but have listened to him speak a number of times on Youtube and I can hear the excitement, nervousness and confidence in his voice, including on Dev Day. He is human like the rest of us.
We humans cannot rest. We cannot stay where we are and stagnate. We must keep pushing forward. Sam has been helping, and clearly still is helping, with that mission, AGI or no, regulatory capture or no.
We cannot linger here, we must move on, into the future, as much as we would like to freeze time or change things back to the way they were.
Sucks to have the rug pulled out from under you, that much I can empathize with.
So here's my theory, which might sound crazy: Sam planned to start a new AI company and take OpenAI's top talent with him, effectively breaking OpenAI up into the non-profit and his new for-profit company.
Sam's first tweet after all this, posted just hours after this article, was:
> will have more to say about what’s next later.
So either he knew that he was about to be fired or at least was prepared.
Also, based on the wording of the press release, Sam did something that the board absolutely hated, because most of the time, even if he did something illegal, it wouldn't make sense to risk defamation by accusing him publicly.
Also, in his video from yesterday at the APEC summit, he repeated similar lines a few times:
> I am super excited. I can't imagine anything more exciting to work on.
So here if we assume he knew he was about to get fired, the conclusion is clear.
Maybe Microsoft holds some sway? Sam made a snide tweet at them a few days ago. Running the whole Dev Day demo on Macs might also be taken as some kind of statement.
Sam strikes me as the type of founder who would never sell out or accept an acquisition. With $10 billion already invested, what's to stop Microsoft from trying to acquire them?