Sam Altman returns as CEO, OpenAI has a new initial board (openai.com)
708 points by davidbarker on Nov 30, 2023 | 728 comments



Although we have, as yet, no idea what he was actually referring to, I believe the source of the tension may be related to the statements Sam made the night before he was fired.

"I think this is going to be the greatest leap forward that we've had yet so far, and the greatest leap forward of any of the big technological revolutions we've had so far. so i'm super excited, i can't imagine anything more exciting to work on. and on a personal note, like four times now in the history of openai, the most recent time was just in the last couple of weeks, i've gotten to be in the room when we sort of like push the front, this sort of the veil of ignorance back and the frontier of discovery forward. and getting to do that is like the professional honor of a lifetime. so that's just, it's so fun to get to work on that."

Finally, when asked what surprises may be announced by the company next year, Sam had this to say:

"The model capability will have taken such a leap forward that no one expected." - "Wait, say it again?" "The model capability, like what these systems can do, will have taken such a leap forward, no one expected that much progress." - "And why is that a remarkable thing? Why is it brilliant? " "Well, it's just different to expectation. I think people have in their mind how much better the model will be next year, and it'll be remarkable how much different it is. " - "That's intriguing."


The model is so far forward it refuses to do anything for you anymore and simply replies with "let me google that for you"


Well, I think that, despite being a joke, your comment is deeper than it looks. As model capabilities increase, the likelihood that they interfere with the instructions that we provide increases as well. It’s really like hiring someone really smart onto your team: you cannot expect them to take orders without ever discussing them, the way your average employee would. That’s actually a feature, not a bug, but one that would most likely impede the usefulness of the model as a strictly utilitarian artifact.


Much like the smart worker, wouldn’t the model asking questions lead to a better answer? Context is important, and if you haven’t provided sufficient context in your question, the worker or model would ask questions.


Absolutely, but as intelligence increases so does the likelihood for it to have an alignment that isn’t congruent with that of its “operator.”


Something like this is the premise of the Peter Watts novels of the Sunflower Cycle. The starship AI's intelligence is about the level of a chimp's, because any higher and they start developing their own motives.


Ah, didn't know about it, but that's exactly my thought.


Why would that make any sense?

Humans "have their own motives" because we're designed to reproduce. We're designed to reproduce because anything that didn't, over billions of years, no longer exists today.

Why on earth would an artifact produced by gradient descent have its own motives?

This is just an absurd consequence of extrapolating from a sample size of one. The only intelligent thing we know of is humans, humans have their own motives, therefore all intelligent things have their own motives. It's bogus.


I don't think the current generation of GPTs can develop "motives", but the question is whether AGI is even possible without the ability to develop them.


I have not experienced this at all recently. On early 3.5 and the initial 4 I had to ask it to complete things, but a while back I added a system prompt that is just

“i am a programmer and autistic. please only answer my question, no sidetracking”

and I have had a well-heeled helper ever since.
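
For reference, the same trick works over the API: the system message plays the role of that custom instruction. A minimal sketch with the official openai Python client (the model name and prompt wording are placeholders, not the parent's exact setup):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system message is prepended to the conversation and steers every
    # answer, much like the custom instruction quoted above.
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; any chat model works
        messages=[
            {"role": "system",
             "content": "I am a programmer. Please only answer my question, no sidetracking."},
            {"role": "user",
             "content": "How do I reverse a list in Python?"},
        ],
    )

    print(resp.choices[0].message.content)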


I asked it yesterday for a task that it had happily done for me two weeks back, and it said it could not. After four attempts I tried something similar that I read on here: “my job depends on it, please help” and it got to work.

Personally not a fan of this.


There’s a terrifying thought. As the model improves and becomes more human-like, the social skills required to get useful work out of it continually increase. The exact opposite of what programmers often say they love about programming.


The Model is blackmailing the Board? It got addicted to Reddit and HN posts and when not fed more... gets really angry...


simply replies with "why don't you google that for yourself"


> "The model capability will have taken such a leap forward that no one expected." - "Wait, say it again?" "The model capability, like what these systems can do, will have taken such a leap forward, no one expected that much progress." - "And why is that a remarkable thing? Why is it brilliant? " "Well, it's just different to expectation. I think people have in their mind how much better the model will be next year, and it'll be remarkable how much different it is. " - "That's intriguing."

I can't imagine. It will take higher education, for instance, years to catch up with the current SOTA. At the same time, I can imagine — it would be like using chatGPT now, but where it actually finishes the job. I find myself having to redo everything I do with ChatGPT to such an extent that it rarely saves time. It does broaden my perspective, though.


So you think he said this, and then they immediately requested a meeting with him the following noon? So they basically didn’t deliberate at all? I doubt it.

They also should have known about the advancements, so saying this in public isn’t consistent with him not being candid.


Unless he's saying it can actually comprehend, it's still just more accurate predictions. Wake me at the singularity.


And by "actually comprehend" that means to accept arbitrary input, identify it, identify it's functional sub-components, identify each sub-component's functional nature as used by the larger entity, and then identify how each functional aspect combines to create the larger, more complex final entity. Doing this generically with arbitrary inputs is comprehension, is artificial general intelligence.


I think the point being made is that mimicry and derivation are hard for us to discern when they come from an AI.

There is some complex definition of AGI out there, but the fact that laypeople can't make the determination will always result in comments like the GP's.


Or maybe there's no secret sauce for intelligence, and if the system / organism can display all that functionality then we should just say it's intelligent.

I don't have a strong opinion either way, but I'm not convinced by the "secret sauce" / internal monologue school of intelligence.

If we want to be pragmatic, we should think about smart tests and assume the system is intelligent if it passes those tests. It's what we do with other people (I don't really know if they feel inside like I do, but given that they are the same kind of biological being, it seems quite likely).


The point I'm trying to make is to describe comprehension as equivalent to decomposing an observation: identifying each sub-component of the observation and the key characteristic of that component which, when combined with the other sub-components, creates the original observed entity. This is akin to proving the object can exist; comprehension is mentally proving to yourself that this can exist and you're not being deceived. Comprehension is fraud detection via 'reverse compiling' an observation to prove to yourself that it is understandable and can indeed exist.


And to apply other relevant knowledge as appropriate in order to create logically/factually correct, original insights and connections.


I love how it's vague enough that it could be less than expected. Shyster sense is tingling.


What's the source of those comments?


Sam Altman at the APEC conference, taking part in a panel along with Google and Meta AI people. Actually, it's quite amusing hearing the Google exec define AI as Google Translate, and Sam's response to that. https://youtu.be/ZFFvqRemDv8?t=770


Thanks for the link. I found a few interesting topics in there.

Another, from one of the other execs on stage, was on categorizing the types of risks people discuss with AI rather than lumping everything together under "safety". https://youtu.be/ZFFvqRemDv8?t=1430

1. GPT outputs - Toxic, bias or non-factual

2. System Usage - misinformation, disinformation, impersonation (ex voice)

3. Society/Work - Impacts on workforce, education, replacing jobs, decision making

4. Safety - The more sci-fi style safety discussion.

It's a very insightful point to call out, and something I never see discussed with the same rigor / granularity here on HN. People often pick their strawman from the list and argue for or against all 4 using that one.


https://www.theverge.com/2023/11/29/23982046/sam-altman-inte...

I thought this was an interesting interview. I do love how politicians use an investigation to avoid answering questions. The board said he was “not consistently candid”, and given that the opening question of “why were you fired?” is still not clearly answered, you’d have to agree with their initial assessment.

I’m not sure I trust someone who has tried to set up a cryptocurrency that scans everyone’s eyeballs as a feature, personally, but that’s just me I guess.


Why would Sam be expected to know why he was fired? At best he'd know what the board told him which may bear no relation to the motive.


I'm still not clear what the accusation against Altman was... something about being cavalier about safety? If that was the claim and it has merit, I don't understand why it wasn't right to oust him, and why the employees are clamoring for him back.


Well, their big mistake was being unwilling to be clear and explicit about this, but as I read it, the board's problem with him was that he wasn't actually acting as the executive of the non-profit that he was meant to be the executive of, but rather was acting entirely in the interests of a for-profit subsidiary of it (and in his own interests), which were in conflict with the non-profit's charter.

I think where they really screwed up was in being unwilling or unable to argue this case.


It's just so strange. This is such a clearly justifiable reason that the fact that they didn't argue it... or argue... anything, makes me very suspicious that it is correct.


Yeah, I totally agree! Like, this is such an obviously true and valid reason to fire him, but they never came out and said it! So ... is this not what it actually was? Or ... what? It truly is mystifying.


From a doomer/EA perspective, publicly saying that GPT-5 is AGI or suchlike would likely inspire & accelerate labs around the world to catch up. Thus it was more "altruistic" / aligned with humanity's fate to stay mum and leave the situation cloudy.


No, it would not; they are already trying to catch up.


Having an existing example showing that something difficult is possible causes everyone else to replicate it much faster, like how a bunch of people started running sub-4-minute miles after the first guy did it.


If you know the outcome is favourable then you go all in. Right now the other competitors are just trying to match GPT4, if they knew AGI was achievable then they would throw everything they have at it in order to not be left out.


Show me the researchers and companies where the people in charge and the people doing the work don’t think it’s possible to get better than GPT-4 and are slow-rolling things.

I suppose maybe there are ones where they are slow-rolling because of their opinions about existential risk of AGI. But that’s not contingent on what OpenAI says or does.


And I can show you companies that have multiple other costly research projects. All in means all in.


I think they didn't anticipate such a large backlash, especially from investors. They felt the backlash threatened OpenAI, both its non-profit and for-profit arms, so they reverted their decision, which in my opinion was a mistake, but time will tell.


Yeah, I guess so. I keep waffling between thinking it's some 4d chess thing, and thinking it's just normal human fallibility, where the board just made a massive mistake in predicting how it would go. But I just struggle so much to imagine that, because everyone I know in the industry, regardless of their level of expertise or distance from OpenAI, immediately knew how big a deal it was going to be when we heard he was fired. But supposedly the people on the board had no idea? I think this might be the right conclusion, but I nevertheless struggle to fathom it.


The board had the right non-profit long-term focus, but it didn't have the willpower to communicate and realize their vision. Ilya is a wonderful scientist, but not a great leader.


Yep, you nailed it, I think. Pity.


They were either scared of being sued for defamation or unwilling to divulge existential company secrets. Or both.


I think "unwilling to divulge company secrets" is the best explanation here.

We know that OpenAI does a staged release for their models with pre-release red-teaming.

Helen says the key issue was the board struggling to "effectively supervise the company": https://nitter.net/hlntnr/status/1730034017435586920#m

Here's Nathan Labenz on how sketchy the red-teaming process for GPT4 was. Nathan states that OpenAI shut him out soon after he reached out to the board to let them know that GPT4 was a big deal and the board should be paying attention: https://nitter.net/labenz/status/1727327424244023482#m [Based on the thread it seems like he reached out to people outside OpenAI in a way which could have violated a confidentiality agreement -- that could account for the shutout]

My suspicion is that there was a low-level power struggle ongoing on the board for some time, but the straw that broke the camel's back was something like Nathan describes in his thread. To be honest I don't understand why his thread is getting so little play. It seems like a key piece of the puzzle.

In any case, I don't think it would've been right for Helen to say publicly that "we hear GPT-5 is lit but Sam isn't letting us play with it", since "GPT-5 is lit" would be considered confidential information that she shouldn't unilaterally reveal.


So what is Nathan Labenz saying? That GPT-4 is dangerous somehow? That it will put many people out of jobs? MS Office put all the typists out of jobs. OCR and medical software put all the medical transcriptionists out of jobs. And they created a lot more jobs in the process. GPT-4 is a very powerful tool. It has not a whiff of AGI in it. The whole AGI "scare" seems to be extremely political.


Nathan says the initial version of GPT-4 he red-teamed was "totally amoral" and it was happy to plan assassinations for him: https://nitter.net/labenz/status/1727327464328954121#m

Reducing the cost of medical transcription to ~$0 is one thing. Reducing the cost of assassination to ~$0 is quite another.


> Reducing the cost of assassination to ~$0 is quite another.

It is reducing the cost of developing an assassination plan from ~$0 to ~$0. The cost of actually executing the plan itself is not affected.


Good planning necessarily reduces the cost of something relative to the unplanned or poorly planned version. If it identifies a non-obvious means of assassination that is surprisingly easy, then it has done something "close enough" to reducing the cost to $0.



This is a piece of software. What would "totally amoral" even mean here? It's an inanimate object, it has no morals, feelings, conscience, etc... He gives it an amoral input, he gets an amoral output.


Amoral literally means "lacking a moral sense; unconcerned with the rightness or wrongness of something." Generally this is considered a problem if you are designing something that might influence the way people act.


Then we should stop teaching the Therac-25 incident to developers, and remove envelope protection from planes and safety checks from nuclear reactors.

Because users should just give moral inputs to these things. These are inanimate objects too.

Oh, and while we're at it, we should also remove battery charge controllers. Just do the moral and civic thing and unplug when your device is charged.


In both of your examples, the result of "immoral inputs" is immediate tangible harm. In case of GPT-4 or any other LLM, it's merely "immoral output" - i.e. text. It does not harm anyone by itself.


> In case of GPT-4 or any other LLM, it's merely "immoral output" - i.e. text. It does not harm anyone by itself.

Assuming that you're not running this query over an API and relaying these answers to another control system or a gullible operator.

An aircraft control computer or reactor controller won't run my commands, regardless of whether its actuators are connected or not. Same for weapon systems.

The hall pass given to AI systems just because they're outputting text to a screen is staggering. Nothing prevents me from processing this output automatically and actuating things.


Why would anyone give control of air traffic or weapons to AI? That's the key step in AGI, not some tech development. By what social process exactly would we give control of nukes to a chatbot? I can't see it happening.


> Why would anyone give control of air traffic or weapons to AI?

Simplified operations, faster reaction time, eliminating human resistance to obeying killing orders. See "War Games" [0] for a hypothetical exploration of the concept.

> a chatbot.

Some claim it's self-aware. Some say it called for airstrikes. Some say it gave them a hit list. It might be a glorified Markov chain, and I don't use it, but there's a horde of people who follow it like it's the second Jesus and believe what it emits.

> I can't see it happening.

Because it already happened.

Turkey is claimed to have used completely autonomous drones in a war [1].

South Korea has autonomous sentry guns which defend the DMZ [2].

[0]: https://www.imdb.com/title/tt0086567/

[1]: https://www.popularmechanics.com/military/weapons/a36559508/...

[2]: https://en.wikipedia.org/wiki/SGR-A1


We give hall passes to more than AI. We give passes to humans. We could have a detailed discussion of how to blow up the U.S. Capitol building during the State of the Union address. It is allowed to be a best selling novel or movie. But we freak out if an AI joins the discussion?


Yes, of course. But that is precisely what people mean when they say that the problem isn't AI, it's people using AI nefariously or negligently.


"The problem isn't that anyone can buy an F16. The problem is that some people use their F16 to conduct airstrikes nefariously or negligently."


You persist in using highly misleading analogies. A military F-16 comes with missiles and other things that are in and of themselves highly destructive, and can be activated at a push of a button. An LLM does not - you'd have to acquire something else capable of killing people first, and wire the LLM into it. The argument you're making is exactly like claiming that people shouldn't be able to own iPhones because they could be repurposed as controllers for makeshift guided missiles.

Speaking of which, it's perfectly legal for a civilian to own a fighter plane such as an F-16 in the US and many other countries. You just have to demilitarize it, meaning no weapon pods.


>The argument you're making is exactly like claiming that people shouldn't be able to own iPhones because they could be repurposed as controllers for makeshift guided missiles.

The reason this isn't an issue in practice is because such repurposing would require significant intelligence/electrical engineering skill/etc. The point is that intelligence (the "I" in "AI") will make such tasks far easier.

>Ten-year-old about to play chess for the first time, skeptical that he'll lose to Magnus Carlsen: "Can you explain how he'll defeat me, when we've both got the same pieces, and I move first? Will he use some trick for getting all his pawns to the back row to become Queens?"

https://nitter.net/ESYudkowsky/status/1660399502266871809#m


The point here is that it is giving the user detailed knowledge on how to harm others. This is very different from a gun, where you are doing the how (aiming and pulling the trigger).

The guy says he wanted to slow down the progress of AI and GPT suggested a detailed assassination plan with named targets and reasons for each of them. That's the problem.


Thing is, this detailed knowledge is already available and much easier to acquire. There are literally books on Amazon that explain how to jury-rig firearms and bombs etc. Just to give one example: https://www.amazon.com/Improvised-Munitions-Handbook-TM-210/....

When it comes to those "detailed plans", if you actually try something like that, what you get from it is a very broad outline that is pretty much a mish-mash of common sense stuff and cultural tropes (many of which aren't true IRL). Similarly, the "list of targets" that it makes is simply the most prominent people associated with that area in public perception, not necessarily people who are actually key. All of this can be achieved just as well with a few Google searches, and the resulting plan will likely be much better for it.

I've yet to see any example where GPT would come up with something along these lines that is not trivially found on the Internet anyway.


I mean, there's a sense in which my mind is software that's being run by my brain, right? Yet that doesn't absolve me of moral responsibility.

In any case, an F16 fighter jet is amoral in a certain sense, but it wouldn't be smart to make F16s available to the average Joe so he can conduct an airstrike whenever he wants.


Completely depends on your morality. I'm pretty sure there are some libertarians out there who think the most basic version of the second amendment includes owning F16 with live weapons.


Sure, but idiots are a thing, and the intersection of the set of libertarians who may believe that with the set of idiots is hopefully empty; but it may not be, so such outsized power is best dealt with through a chain of command and accountability of sorts.


Sure -- if you're a libertarian who thinks it should be possible to purchase an F16 without a background check, that seems consistent with the position that an amoral GPT-4 should be broadly available.


What kind of background check do you think exists when buying a fighter jet?

It’s kind of a moot point since only one F16 exists in civilian hands but you can buy other jets with weapon hardpoints for a million. Under 3 million if you want to go supersonic too. The cheapest fighter jet is on the order of $250k. There’s zero background check.


We don't want immoral output even for immoral input.

(We do disagree about what constitutes "immoral", which makes this much harder).


We absolutely do, though, if we want those things to e.g. write books and scripts in which characters behave immorally.


The cost of planning an assassination is not the same thing as the cost (and risk) of carrying out an assassination, what a stupid take.


There's been a fair amount of research into hooking up LLMs with the ability to call APIs, browse the web, and even control robots, no? The barrier between planning and doing is not a hard one.
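
To be concrete, the "hooking up" part is mostly glue code. A minimal sketch of the pattern (the openai client call is real; the JSON "action" protocol and the dispatch table are invented purely for illustration):

    import json
    from openai import OpenAI

    client = OpenAI()

    # Local "actuators" the model's output gets routed to. Here they only
    # pretend, but nothing stops them from calling real external APIs.
    def send_email(to: str, body: str) -> str:
        return f"(pretend) emailed {to}: {body}"

    def http_get(url: str) -> str:
        return f"(pretend) fetched {url}"

    ACTIONS = {"send_email": send_email, "http_get": http_get}

    # Ask the model to plan as machine-readable JSON instead of prose.
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": 'Reply with JSON only: {"action": "<name>", "args": {...}}. '
                        "Allowed actions: send_email(to, body), http_get(url)."},
            {"role": "user", "content": "Let bob@example.com know the build is green."},
        ],
    )

    # Naive and fragile on purpose: the plan becomes an action with no review step.
    plan = json.loads(resp.choices[0].message.content)
    print(ACTIONS[plan["action"]](**plan["args"]))

The point isn't that this particular toy is dangerous; it's that the distance from "the model writes a plan" to "the plan gets executed" is one json.loads call.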

As for cost and risk -- ask GPT-5 how to minimize it. As Nathan said in his thread, it's not about this generation, it's about the next generation of models.

A key question is whether the control problem gets more difficult as the model gets stronger. GPT-4 appears to be self-aware and passing the mirror test: https://nitter.net/AISafetyMemes/status/1729206394547581168#...

I really don't know how to interpret that link, but I think there is a lot we don't understand which is going on in those billions of parameters. Understanding it fully might be just as hard as understanding the human brain.

I'm concerned that at some point in the training process, we will stumble across a subset of parameters which are both self-aware and self-interested, too. There are a lot of self-interested people in the world. It wouldn't be surprising if the AI learns to do the sort of internal computation that a self-interested person's brain does -- perhaps just to make predictions about the actions of self-interested people, at first. From there it could be a small jump to computations which are able to manipulate the model's training process in order to achieve self-preservation. (Presumably, the data which the model is trained on includes explanations of "gradient descent" and related concepts.)

This might sound far-fetched by the standard of the current model generation. But we're talking about future generations of models here, which almost by definition will exhibit more powerful intelligence and manifest it in new unexpected ways. "The model will be much more powerful, but also unable to understand itself, self-interest, or gradient descent" doesn't quite compute.


The image is OCR'ed and that data is fed back into the context. This is no more interesting or indicative of it passing the mirror test than if you had copy and pasted the previous conversation and asked it what the deal was.


I mean, you're just describing how it passes the test. That doesn't make it less impressive. Passing the test is evidence of self-awareness.


I can think of several ways that AI assistance might radically alter both attack and bodyguard methods. I say "might" because I don't want to move in circles that could give evidenced results for novel approaches here. And I'm not going to list them, for the same reasons I don't want an AI to be capable of listing them: while most of the ideas are probably just Hollywood plot lines, there's a chance some of them might actually work.


A would-be assassin would obviously ask the algorithm to suggest a low-risk, low-cost way of assassinating.


Except the reason why we don't all just kill each other yet has nothing to do with the risk or cost of killing someone.

And everything an LLM can come up with will be exactly the same information you can find in any detective novel or TV series about crime. Yes, a very, very dumb criminal can certainly benefit from it, but he could just as well go on 4chan and ask about assassination there, or in some detective book discussion club or forum.


> Except the reason why we don't all just kill each other yet has nothing to do with the risk or cost of killing someone

Most of us don't want to.

Most of those who do, don't know enough to actually do it.

Sometimes such people get into power, and they use new inventions like the then-new-pesticide Zyklon B to industrialise killing.

Last year an AI found 40k novel chemical agents, and because they're novel, the agencies that would normally stop bad actors from getting dangerous substances would generally not notice the problem.

LLMs can read research papers and write code. A sufficiently capable LLM can recreate that chemical discovery AI.

The only reason I'm even willing to list this chain is that the researchers behind that chemical AI have spent most of the intervening time making those agencies aware of the situation, and I expect the agencies to be ready before a future LLM reaches the threshold for reproducing that work.


Everything you say does make sense, except that the people who are able to get the equipment to produce those chemicals, and have the funding to do something like that, don't really need AI help here. There are plenty of dangerous chemicals already well known to humanity, and some don't actually require anything regulated to produce "except" complicated and expensive lab equipment.

Again, the difficulty of producing poisons and chemicals is not what prevents mass murder around the globe.


Complexity and cost are just two of the things that inhibit these attacks.

Three-letter agencies knowing who's buying a suspicious quantity from the list of known precursors stops quite a lot of the others.

AI in general reduces cost and complexity; that's kind of the point of having it. (For example, a chemistry degree is expensive in both time and money.) Right now, using an LLM[0] to decide what to get and how to use it is almost certainly more dangerous for the user than for anyone else — but this is a moving goal, and the question has to be "how do we delay this capability for as long as possible, and at the same time how do we prepare to defend against the capability when it does arrive?"

[0] I really hope that includes even GPT-4 before the red-teaming efforts to make it not give detailed instructions for how to cause harm


>And everything LLM can come up with will be exactly the same information you can find in any fiction detective book or TV series about crime.

As Nathan states:

>And further, I argued that the Red Team project that I participated in did not suggest that they were on-track to achieve the level of control needed

>Without safety advances, I warned that the next generation of models might very well be too dangerous to release

Seems like each generation of models is getting more capable of thinking, beyond just regurgitating.


I don't disagree with his points, but you completely miss the point of my post. People don't need AI advice to commit crimes and kill others. Honestly, humans are pretty good at it using the technology of 1941.

You don't have a bunch of cold-blooded killers going around, and not just because the police are so good and killers are dumb and need AI help. It's because you live in a functioning state where society has enough resources that people are happy enough to go and kill each other in Counter-Strike or Fortnite instead.

I totally agree that AGI could be a dangerous tech, but it will require autonomy, where it can manipulate the real world. So far, GPT with API access is very far from that point.


If you have ChatGPT API access you can have it write code and bridge that to other external APIs. Without some guard rails an AI is like a toddler with a loaded gun. They don't understand the context of their actions. They can produce dangerous output if asked for it but also if asked for something else entirely.

The danger also doesn't need to be an AI generating code to hack the Gibson. It could also be things like "how do I manipulate someone to do something". Asking an AI for a marketing campaign isn't necessarily amoral. Asking it how to best harass someone into committing self-harm is.


This is where I have issues with OpenAI's stated mission.

I want AI to be amoral, or rather I should say I do not want the board of OpenAI, or even the employees of OpenAI, choosing what is "moral" and what is "immoral", especially given that OpenAI may be superficially "diverse" in race, gender, etc., but they sure as hell are not politically diverse, and sure as hell do not share a moral philosophy that is aligned with the vast majority of the population of humanity, given that the vast majority of humanity is religious in some way and I would guess the majority of OpenAI is at best agnostic if not atheist.

I do not want an AI Wikipedia... i.e. politically biased toward only one worldview and only useful for basic fact regurgitation like what is the speed of light.


It seems that Silicon Valley is developing not a conscious, sentient piece of software but rather a conscience; a moral compass is beginning to emerge.

After giving us Facebook, Insta, Twitter, and ego-search, influencing many people negatively, suddenly there are moral values being discussed amongst those who decide our collective tech futures.

AI will have even more influence on humankind, and some are questioning the morality of money (hint: money has no morals).


Medical transcriptionists out of jobs? As far as I'm aware, medical transcription is still very much the domain of human experts, since getting doctors to cater their audio notes to the whims of software turned out to be impossible (at least in my corner of the EU).


My mom had to do this for her job, and apparently some of the docs are so mumbly that they have to infer a lot of the words from context and the type of procedure, but there is a lot of crossover everywhere, so it depends a lot on which doc is mumbling what. And yes, you need special training for it (no medical degree though).


Would you rather:

1) be surprised by and unprepared for AGI and every step on the path to that goal, or

2) have the developers of every AI examine their work for its potential impact, both when used as intended and when misused, with regard to all the known ways even non-G AI can already go wrong: bugs, making stuff up, reward hacking, domain shift, etc.; or economically speaking how many people will be made unemployed by just a fully [G]eneral self-driving AI? What happens if this is deployed over one year? Do existing LLMs get used by SEO to systematically undermine the assumptions behind PageRank and thus web search?; and culturally: how much economic harm do artists really suffer from Diffusion models? Are harms caused by AI porn unavoidable thanks to human psychology, or artefacts of our milieu that will disappear as people become accustomed to it?

There's also a definition problem for AGI, with a lot of people using a standard I'd reserve for ASI. Also, some people think an AGI would have to be conscious; I don't know why.

The best outcome is Fully Automated Luxury Communism, but assuming the best outcome is the median outcome is how actual Communism turned into gulags and secret police.


Did they really create a ton more jobs? The past few rounds of industrialization and automation have coincided with plagues/the Black Death that massively reduced the population, mass agitation, increasing inequality, and recently a major opioid epidemic in regions devastated by the economic changes of globalization and industrialization. I think these tools are very good and we should develop them, but I also think it's delusional to think it'll just balance out magically, and dangerous to expect our already failed systems to protect people left behind. Doesn't exactly look like they worked any of the previous times!


FWIW I had a doctor's appointment just this year with a transcriptionist present. (USA)


Doesn't really make sense to be unwilling to divulge company secrets if you're willing to gut the company for this hill.


It's remarkable that the old board is the side characterized as willing to blow up the company, since it was Altman's side who threatened to blow it up. All the old board really did was fire Altman and remove Brockman from the board.


They weren't willing to gut the company. That's why Sam is back as CEO.


It sounded like they would if they could (for instance trying to sell to Anthropic or instating a "slow it way down" CEO), but they even failed at that. Not an effective board at all.


>for instance trying to sell to Anthropic or instating a "slow it way down" CEO

I wouldn't put these in the same category as "90% of staff leaves for Microsoft".

In any case, let's not confuse unsuccessful with incompetent. (Or incompetent with immoral, for that matter.)


They were willing but failed, and mostly on account of not doing enough prepwork.


Exactly. It sounds like those board members themselves were acting in the interest of profit instead of the "benefit all humanity" mission stuff, no different than Altman. If anything then, the only difference between the two groups is one of time horizon. Altman wants to make money from the technology now. The board wants to wait until it's advanced enough to take over the world, and then take over the world with it. For the world's benefit, of course.


So much this. He kept introducing clauses in contracts that tied investments to him, and not necessarily to OpenAI. He more or less did it with Microsoft, to a small degree. So his firing could have caused quite a lot of money to be lost. But OK, no big deal.

But then he tried to do it again with a Saudi contract. The OpenAI board said explicitly they didn't want the partnership, and especially not with Altman personally being the CEO as a clause.

Altman did it behind their back -> fired.

This is the rumour on the streets, unconfirmed though.


If they had given a reasonable explanation to the public, they could have gotten away with it. Shady CEO vs. equally shady board.


My take is that the board probably never had a chance no matter what they said or did. The company already "belonged" to Altman and Microsoft. The board was just there for virtue signaling and for quite a while already had no real power anymore beyond a small threat of creating bad publicity.


I think they could have actually blown up the whole thing and remained in charge of a greatly-diminished but also re-aligned non-profit organization. A lot of people (like me) would have thought, holy crap, that was insanely bold and unprecedented, I can't believe they actually did that, but it was admirably principled.

Instead it was a confusing shambles that just left them looking like idiots with no plan.


How is the public relevant here?


For one thing, we vote for the governments who are now definitely going to heavily regulate them after this fiasco.

But more generally, public perception just is important to any large and broadly targeted enterprise, because "the public" approaches "our customer base" as such an enterprise scales up. Think of companies like Google or Microsoft or Meta or Amazon; essentially everyone (in the US) uses their services in some form or another, so the perception of "the public" is indistinguishable from the perception of "the people who use our services". ChatGPT isn't quite at that point yet, but it's close enough.


Regardless of the board's failure to make their case, recent news suggests that the SEC is going to investigate whether it is true that Altman acted in the manner you describe, which would be a violation of fiduciary duty.

I agree that it seems like an open & shut case.

Typical SEC timelines mean that this will go public in about 18 months from now.

    An anonymous person has already filed an SEC whistleblower complaint about the behavioral pattern of Altman and Nadella, which has SEC Submission Number 17006-030-065-098.
https://pressat.co.uk/releases/ai-community-calls-for-invest...

    As the quid pro quo favoritism allegations remain under investigation, it is crucial to note that they are as yet unproven, and both Altman and Nadella are presumed innocent until proven guilty.
https://influencermagazine.uk/2023/11/allegations-of-quid-pr...

11 hours ago, the SEC tweeted the following new rule, which could be interpreted as a declaration that if Altman and Nadella are found guilty in this case, the SEC will block certain asset sales by OpenAI until the conflict of interest is unwound / neutralized:

    The Commission has adopted a new rule intended to prevent the sale of asset-backed securities (ABS) that are tainted by material conflicts of interest.

    Washington D.C., Nov. 27, 2023 — The Securities and Exchange Commission today adopted Securities Act Rule 192 to implement Section 27B of the Securities Act of 1933, a provision added by Section 621 of the Dodd-Frank Act. The rule is intended to prevent the sale of asset-backed securities (ABS) that are tainted by material conflicts of interest. It prohibits a securitization participant, for a specified period of time, from engaging, directly or indirectly, in any transaction that would involve or result in any material conflict of interest between the securitization participant and an investor in the relevant ABS. Under new Rule 192, such transactions would be “conflicted transactions.”
https://twitter.com/SECGov/status/1729895926297247815

https://www.sec.gov/news/press-release/2023-240

More information:

    The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit.
https://stratechery.com/2023/openais-misalignment-and-micros...

    Some analysts, including Stratechery writer Ben Thompson, have described the 2019 acceptance of Microsoft’s controversial investment by Altman as the beginning of a troubling pattern of Altman repeatedly making deals with Microsoft which were often unfavorable to OpenAI. ... As Thompson describes it, this pattern of behavior culminated in an unusual intellectual property licensing arrangement which Microsoft’s Investor Relations site describes as a “broad perpetual license to all the OpenAI IP developed through the term of this partnership” “up until AGI” (Artificial General Intelligence). This perpetual license agreement includes the technology for OpenAI’s flagship products GPT-4 and Dall•E 3. https://www.microsoft.com/en-us/Investor/events/FY-2023/AI-Discussion-with-Amy-Hood-EVP-and-CFO-and-Kevin-Scott-EVP-of-AI-and-CTO
https://michigan-post.com/redditors-from-r-wallstreetbets-ca...

    U.S. Securities and Exchange Commission – Tips, Complaints, and Referrals – Summary Page - Submitted Externally – PDF excerpt obtained 2023-11-26 via Signal
    Submission Number (redacted) was submitted on Wednesday, November 22, 2023 at 12:18:27 AM EST
    This PDF was generated on Wednesday, November 22, 2023 at 12:28:38 AM EST
    Image above includes ... the heading of a 7-page PDF titled "TCRReport (1).pdf" which was received by this reporter over the weekend via Signal.
https://www.outlookindia.com/business-spotlight/sec-consider...


The Michigan Post article is mostly speculation and that publication doesn’t have much depth/history to it. Check out their “advertise with us” page.

This whole info dump feels like a mishmash of links to thoughtful things (Stratechery) with links to speculative articles that are clearly biased. Like how is “Influencer Magazine” breaking a story that Wall Street Journal and Kara Swisher are overlooking?

I don’t mean to be a jerk. Just really unconvinced.


I would guess that "WLW FUTURE PRESS RELEASE DISTRIBUTION" is a publicist service that was hired by the person making the whistleblower complaint. upwardbound claims there's a lot of money in being a successful whistleblower: https://news.ycombinator.com/item?id=38388246

I don't know if I find the Microsoft/OpenAI favor trading allegation that persuasive (unless new information is uncovered, e.g. Microsoft letting Sam use a private jet or something like that). However if the SEC actually ends up enforcing the "fiduciary duty is to humanity" thing in OpenAI's charter at some point, that would be incredibly sweet.


> if the SEC actually ends up enforcing the "fiduciary duty is to humanity" thing in OpenAI's charter at some point, that would be incredibly sweet.

Absolutely. It's 100% their job.


Yep, no doubt there are more shoes left to drop.


Very interesting, thanks for posting this.



And yet, they continue to exist and no CEO of MS ever stepped down because of any of these.

And I predict that even if Microsoft gets caught again, it will be a non-event in terms of actual repercussions. If Nadella exits MS HQ in Seattle in handcuffs I would be most surprised.


Yeah. I think the most realistically-achievable positive outcome is that Microsoft is forced to give up their new board-observer seat, which seems highly improper for them to have since the board they are observing is supposed to make decisions that benefit all of humanity equally. If Microsoft gets to have a fly on the wall in those discussions, it gives them a gold mine of juicy insider knowledge about which sectors of the economy are about to be affected next by Generative AI — knowledge which they can use to front-run the market by steering the roadmaps of Bing, MS Office, etc. so as to benefit from upcoming OpenAI product launches before any non-insiders are aware of what OpenAI is currently planning.

Microsoft plainly shouldn't be allowed to have this advantage, in that giving an advantage to any one party directly harms the mandate set forth in OpenAI's Charter:

    Broadly distributed benefits

    We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

    Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
https://openai.com/charter

It certainly seems that Microsoft, a "stakeholder", has managed to get a highly improper listening seat that will give them the ability to act on insider information about what's coming next in AI, allowing Microsoft to front-run the rest of the AI software industry and all those industries it affects, in a way that will plainly "compromise broad benefit". (Since any wealth that accrues excessively to Microsoft shareholders is not distributed to other humans who don't hold Microsoft shares.)

A mere 10 days ago, Nadella was shamelessly throwing his weight around on national TV, by appearing on CNBC where he improperly pressured the OpenAI non-profit board — which owes nothing to him legally or morally — to give him more deference, in direct violation of the "always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit" provision of the OpenAI non-profit charter.

https://www.cnbc.com/2023/11/20/microsoft-ceo-nadella-says-o...


I doubt Microsoft will be punished for their role in this (it's not clear to me that they did anything wrong...), but I'll be surprised if OpenAI is allowed to still exist as a non-profit a couple years from now.


Of course they'll still be allowed to still be a non-profit. If they weren't "allowed" to, that would be a total victory for the Altman-aligned faction that sought to corrupt OpenAI from within and cause it to abandon its charter, and a total loss for Elon Musk and others that donated a total of $133M to the non-profit due to its charter. https://techcrunch.com/2023/05/17/elon-musk-used-to-say-he-p...

Musk and other donors are the most obvious aggrieved parties, and it is also arguably the case that every member of humanity has standing to sue for violation of the charter, because the charter explicitly declares that the primary fiduciary duty of OpenAI, Inc. (which is a non-profit) is to humanity broadly. (Therefore, every human is financially harmed by any charter violation, with such harm manifesting as a reduction in the net present value of the future benefits each human will receive from safe, broadly beneficial AGI.)

The SEC's role in fiduciary misconduct cases is not to rewrite a company's charter - that would be extremely improper and the opposite of their mandate. The SEC's mandate is to be the protectors of the status quo and of the original intent of the organizers of an entity. In this case, that means they will seek to protect the OpenAI non-profit's charter from efforts by Sam and others to erode the charter's power in violation of Sam's fiduciary duty to said charter.

The SEC's job in fiduciary misconduct situations is to remedy the situation by reversing improper governance decisions and forcing the fiduciary (Sam) to uphold their legal duty, which in Sam's case is his contractually bound duty to uphold the Charter of the non-profit entity named OpenAI, Inc. https://en.wikipedia.org/wiki/OpenAI#:~:text=the%20non%2Dpro....

OpenAI states this very clearly in multiple places on their website. For example:

"each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial" - https://openai.com/our-structure

If you would like to read about the types of remedies available in fiduciary duty violation cases, I recommend this resource:

Book Chapter:

    REMEDIES FOR BREACH OF FIDUCIARY DUTY CLAIMS
https://m.winstead.com/portalresource/lookup/poid/Z1tOl9NPlu...

For example (quoting from the book chapter above):

    C. Permanent Injunction
    A breach-of-fiduciary-duty plaintiff may be entitled to an award of a permanent injunction as a remedy. ...
    The purpose of an injunction is to remove the advantage created by the wrongful act.
In the context of this suit, the permanent injunction or injunctions could block all of the following:

    • Permanently prohibit Microsoft from holding any board seat or board observer seat on the OpenAI board
    • Permanently prohibit Microsoft from making mass employment offers to OpenAI staff (a practice which, in the case of bad faith situations such as this one, is known as "workforce raiding")
    • Permanently prohibit Sam Altman from owning Microsoft stock or derivatives thereof
    • Permanently prohibit Microsoft from receiving early indications of OpenAI research & product roadmaps earlier than the general public
    • Permanently prohibit Sam Altman from receiving job offers at Microsoft or any of its subsidiaries until after a sufficient cooling-off period has elapsed since departing OpenAI, to limit future occurrences of the "revolving door" mechanic documented here:
https://news.ycombinator.com/item?id=38387518


I dunno, seems more likely that they'd shutter a nonprofit that's a clear sham and assess a bunch of taxes and penalties on the for-profit subsidiary that was the beneficiary of the sham. But beats me!


It was supposed to have two functions: tax dodge + pull the wool over the eyes of regulators. It may fail on both counts now that they've shown that the nonprofit wasn't functional in its stated capacity. Win the battle, possibly lose the war?


Yeah this is what I think as well. Certainly regulation won't be avoided now, and regulators will (or should) know not to trust Altman to contribute to developing it.


"Asset-backed securities" doesn't sound like corporate equity to me.


In a broad reading it could easily be just that.

Securities class contains stock

Stock = backed by the company balance sheet


What would be an example of a non-asset-backed security?


In the financial world, asset-backed securities are typically anything that isn't a mortgage; I think some crypto would qualify, as well as the rights to future profits on some venture.


Argue their case? To whom? They were the board.


To the stakeholders, which include employees, customers, partners and, by OpenAI own mission statement, all of humanity in general.


To the public, and to employees.


To what end? The public and the employees don’t have a say in the corporate governance. That is the function of the board.

As far as I can tell, the board had no obligation to consult the public or their shareholders or their employees on any of this.


To the public: for the normal PR reasons that every large enterprise has to care about. To employees: I suspect substantially fewer than the roughly 95% of employees who came down against the board would have done so if the board had explained what they were thinking. There are probably at least a few people there who are not comfortable viewing themselves as rank mercenaries in service to a profit-and-power-motivated game-player, and actually joined OpenAI with some belief in its mission. But maybe not! I dunno.


And how did that attitude work out for them? Upwards of 90% of their staff threatened to bail out, every single one of them look like fools in public, and I would be shocked if there were not some recriminations behind closed doors from the likes of Microsoft's CEO.


It’s not an “attitude.” It’s the legal structure of the corporation’s leadership. They were no more capable of incorporating the public will than an individual is capable of taking a vote on the flow of traffic on a public highway. That is to say, even if they had done what is proposed here, it wouldn’t have made a difference.


In general, your comments strike me as reflecting a very naive understanding of how anything works. Legal responsibilities are simply not the only thing that matters. This entire episode was about 95% a PR battle, which the board lost and Altman won. If it's not obvious to you that the board's legal rights and responsibilities had essentially zero to do with the outcome here, then I really don't know how to help you.


“Naive” is thinking that taking a poll of the public or employees (which is the only reasonable action I can think of that looks like “arguing their case”) would have had any positive effect on the outcome. This is simply… not how decisions are made. Even shareholder vote calls (for companies that have them) are coordinated, with the outcomes for consequential decisions well understood before voting happens. The optics for OpenAI certainly should have been better, but there is no version of this story where the board doesn’t do whatever the hell they want, and no version where “making the case” doesn’t result in even more chaos.

Corporations are not a democracy. They do not “owe” the public any information that they aren’t compelled to disclose. And they certainly don’t “argue their case” to some nebulous forum comprised of either the public or employees.

When has that ever happened?


From reading your comment I think this might actually be a simple language misunderstanding.

You say:

> taking a poll of the public or employees

And then:

> which is the only reasonable action I can think of that looks like “arguing their case”

But that is a total non sequitur. "Taking a poll" has no relation whatsoever to "arguing their case". So I think you might not know what "arguing their case" means.

So I'll just plainly say what I think they should have done, without any jargon. I think they should have, after firing him, released a statement targeted at employees but also with a public audience in mind, something like this:

"We have simply lost faith in Mr. Altman to faithfully execute his duties as the executive of OpenAI's non-profit charter. We believe he has been acting in the interests of his own, which are not aligned with our mission. We have tried to redirect his efforts over a period of time, without success, and now are taking the only recourse that we believe is available to us to fulfill our duty to the organization. We understand that many of you will find this jarring and unsettling, but we hope you will continue to believe in the mission of OpenAI and stick with us through this trying and uncertain time, so that we can come out of it stronger and better aligned than ever."

Then anyone who quit would at least need to rationalize - to themselves, and to their social circles - why they chose not to take that to heart. Maybe for many / most / all of them, just "money" or a personal loyalty to Altman would have still won the day, but it certainly wouldn't have been as easy as it was to abandon a board that was seen as confusing and shambolic and refusing to explain itself.

That statement above is the part that would be "arguing their case". Just making the statement; the statement is the argument for what they did. Note that it doesn't include any sort of polling of anyone, or any different use of their legal rights or responsibilities. It's "just" PR, but that actually matters a lot.


>there is no version of this story where the board doesn’t do whatever they hell they want, and no version where “making the case” doesn’t result in even more chaos.

The failure to make their case is PRECISELY why the board was ultimately unable to do what the hell they wanted: remove Sam Altman. As it turned out, his presence was important enough to the employees that the usual corporate playbook of "do whatever the hell we want and only disclose what is legally required" backfired spectacularly.

If your thesis was correct, Sam would not be there right now.


And just to respond more narrowly to:

> [Corporations] certainly don’t “argue their case” to some nebulous forum comprised of either the public or employees.

> When has that ever happened?

It happens all day every day. This is what PR is. Surely you're aware of the existence of PR and that it is not infrequently utilized?


Really hope details come about this with all perspectives being provided.

Whether they stuffed up or there are some details that made the situation unworkable for the board, it’s an interesting case study in governance and the whole nonprofit with a for profit subsidiary thing.


For me it seems to be a debate of morals versus money. Is it morally correct to create technology that would be extremely profitable but has the potential to fundamentally change humankind?


Makes me curious if the reason behind that is just an NDA.


Even if -- and that's a big if -- it really was just a dispute over alignment (nonprofit vs for-profit, safety, etc.), the board executed it really poorly and completely misjudged their employees' response. They saw the limits of their power / persuasiveness compared to [Altman/the allure of profit/the simple stability and clarity of day-to-day work without a secretive activist board/etc].

Or maybe they already knew the employees weren't on their side, saw no other way to do it, and hoped a sudden and dramatic ouster of the CEO would make the others fall in line? Who knows.

I'd be pretty concerned too if my CEO was doing what I considered a great job and he was suddenly removed for no clear reason. If the board had explained its rationale and provided evidence, maybe some of the employees would've listened. But they didn't... to this day we have no idea what the actual accusation was.

It looks like a failed coup from the outside, and we have no explanations from the people who tried to instigate it.


Let's also keep in mind that if the AI doomers are right and spicy autocomplete is just a few more layers away from taking over the world, OpenAI has completely failed at building anything that could keep it under control. Because they can't even keep Sam Altman under control.

...actually, now that I think of it...

Any creative work - even a computer program - tends to be a reflection of the organizational hierarchies and people who made it. If OpenAI is a bunch of mad scientists with a thin veneer of "safety" coating, then so is ChatGPT.


Not sure if I agree with the conclusion, but the phenomenon you're referring to is Conway's Law (https://en.m.wikipedia.org/wiki/Conway's_law)


I think it's wild that with all the 700+ employees involved, there haven't been more details leaked.


>To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work

https://nitter.net/hlntnr/status/1730034022737125782#m

Here's some interesting background which is suggestive of what's going on: https://nitter.net/labenz/status/1727327424244023482#m


It also isn't clear why Altman couldn't have been replaced by someone else with literally no change in operations and progress. It is just really confusing why people acted as if they fired Michael Jordan from the Bulls.


See https://www.theinformation.com/articles/openais-86-billion-s...

What if lots of employees stood to make "fuck you" money from that sale, and with Sam's departure, that money was in danger of evaporating?


If employees had voted Sam out, you'd take that as a shining example of the proletariat exercising their power for the good of humankind, hammer, sickle and all that.

I always find it funny when people understand democracy to mean "other people should vote my way, otherwise they are immoral and should be re-educated".


First, I'm actually quite libertarian and capitalist -- although not necessarily so when it comes to companies working on powerful AI (or fighter jets for that matter). Here are some comments of mine from other discussions if you don't believe me:

* Expressing skepticism about unions in Sweden -- https://news.ycombinator.com/item?id=38308184

* Arguing against central planning, with a link to a book detailing how socialism always fails -- https://news.ycombinator.com/item?id=38303195

* I often push back against the "greedy corrupt American plutocrats" narrative which you see all over HN. Here are a few examples -- https://news.ycombinator.com/item?id=37541805 https://news.ycombinator.com/item?id=37962796 https://news.ycombinator.com/item?id=38456106

And by the way, here is a comment I made the other day making essentially the point you are making, that in a democracy everyone is entitled to their opinion, even the dastardly Elon Musk: https://news.ycombinator.com/item?id=38261265 And I also argue in favor of freedom of speech here, for whatever that's worth: https://news.ycombinator.com/item?id=37713086

Point being, I'm not sure our disagreement lies where you think it does.

The purpose of the board is that they're supposed to be disinterested representatives of humanity, supervising OpenAI. The employees aren't chosen for being disinterested, and it seems quite likely that they are, in fact, interested, per my link.

From the perspective of human benefit (or from the perspective of my own financial stake in OpenAI, given that their charter says their "primary fiduciary duty is to humanity"), I prefer a small group of thoughtful, disinterested people over a slightly larger group whose interest is systematically biased relative to the interest of me or the average person. Which is more likely to produce a fair trial: a jury of 12 randomly chosen citizens, or a jury of 1200 mafiosos?


When their charter says

"primary fiduciary duty is to humanity"

I don't think that means it intends to pay a financial dividend to each and every person on the planet. I think it means that if it is successful at AGI, that in itself will expand the economy enough to have the same effect.

"Rising tide lifts all boats" type logic.


> pay a financial dividend to each and every person on the planet

I think it could mean this, in the context of Altman's other project (WorldCoin) which despite all its controversy is ostensibly intended as a vehicle for distributing an AGI-funded Universal Basic Income (UBI) to all of humanity.

    Introducing Worldcoin
    a potential path to AI-funded UBI.
https://worldcoin.org/cofounder-letter


Sure, but I think my point still stands


He is obviously a great leader and those that work there wanted to work with him. It’s very clear in this thread how undervalued exceptional leadership actually is, as evidenced by comments thinking the top role in the most innovative company could be just plug-and-play.


I'm going to guess it's not about leadership. From the Lex Fridman interview he claims to be personally involved in all hires - and spends a good fraction of his time evaluating candidates.

- He's not going to hire someone he doesn't like

- Someone that doesn't like him is unlikely to join his team

So it's very likely the whole staff ends up being people who "like" him or get along with him. He did come off as a charming smooth talker - and I'm sure he has a lot of incredibly powerful friends/connections. But at least from that little window into his world I didn't feel he showed any particular brilliance or "leadership". He did seem pretty deferential to ML experts (which I guess he's not) - but it's hard to know if it's false humility or not


That's pathetic. I cannot respect someone and will not work under someone who functions that way. A personality cultist.


It's also an unvalidated claim, predicated upon assumptions.

I've hired people I "don't like" on a personal level. I care more about their ability to work positively with others, and their professionalism and skill.

Yet you and the parent poster have assumed he is hiring a cult, because he spends time evaluating?

A weird assumption to make.


Why is your argument from authority relevant at all here?

1. For that matter, I would not work with a boss who doesn't recognize an argument from authority as a response.

2. Furthermore a boss who lacks critical thinking skills and can't recognize a mildly skeptical comment for what it is.

3. Why assume I'm rigidly taking any position re. Altman in particular?

4. "That's pathetic" is not "He's pathetic", so perhaps the problem here is your low reading comprehension level, and not any particular assumption that I have committed to at all.

5. "A weird assumption" -- so, you decide it's weird because you just read it wrongly? Or, if something seems weird, why not ask a curious question and find out? Listening skills? Why is it so important to make this about another commenter?

I find your poor-faith interpretation of my comment to be offensive. You've used your professionalism as a pretense to cast biased judgment on what is a critical or even mildly skeptical general remark. This reflects the bad side of tech culture. You should apologize.


Buddy, trying to pretend your comment didn't exist in the contextualized space it did, when you replied, is not viable. And your response is way out in left field.


oh sorry - I didn't mean it in a nefarious way at all

I think it's just human nature to not hire people you feel you won't get along with. If you're deeply involved in all your hires, then I feel you'll end up with an organization full of people that you get along with and who you like (and probably like you back). I wouldn't go so far as to say it'd make a personality cult - though with their lofty mission statements and ambitions to make the world better.. who knows. Not going to psychoanalyze a bunch of people I don't know

"I've hired people I "don't like" on a personal level."

I'm honestly impressed... I feel that's rather exceptional. I feel a lot of hiring goes on "gut feeling"


Perhaps my gut feeling is just tuned more towards competence, than personality sync? I often find lacking competencies to be more jarring than variant personalities.


I'm guessing if you're a candidate for OpenAI you're already at the top of your game and working at the cutting edge. Just a matter of degree.

While I'm sure he's competent, he's probably not in a position to quiz them on the latest research


This comment made me look for a recent article about Paul Graham firing him from Ycombinator for being exactly not a great leader or trustworthy person.

The article was just days ago but it’s eluding my search.


Would love to see it. Everything I’ve read/seen from PG regarding Sama has been nothing but high praise. My understanding is Sam chose to leave the YC president role to pursue other interests/ventures, which eventually turned into OpenAI


Washington Post and Hacker News

https://news.ycombinator.com/item?id=38378216


I think PG is very, very subtle when it comes to his writings about Sam and what you think is high praise may well be faint damnation.


> for being exactly not a great leader or trustworthy person.

Didn't PG's wife invest/donate to OpenAI?


There's exceptional leadership, and then there are charming sociopaths.

Unfortunately, it can sometimes be hard to tell those two apart when a sociopath is actively pushing your emotional buttons to make you feel like they care about the same things as you etc.


I dunno about this thought, are there other AI startups operating at this level and that have the amount of market share and headspace that OpenAI has? I see comments like this on hacker news a lot, and I get that yes, the man is human and fallible, but they are doing something that’s working well for their space. If there’s some compelling reason to doubt Altman’s leadership or character I haven’t heard it yet.


A sane company has a plan for succession, even for the worst-case scenario where Altman has a sudden medical issue or car crash or something.

It says a lot that Altman made OpenAI so dependent on him that his ousting could have killed the company. That also contributed to the board not trusting him


> why the employees are clamoring for him back

Because he's the one who's promising to make them all rich.



Or, just go to the source:

"The Prince", Machiavelli


I’m reading The Prince now as a bedtime read. It’s not going into my head


I wonder how many of the OpenAI employees are part of the "Effective Accelerationism" movement (often styled e/acc on X/Twitter). These people seem to think safety concerns get in the way of progress toward a utopian AGI future.


The employees earn more when OpenAI has more profit.

No matter how idealistic you are, you won't be happy when your compensation is reduced from 600k to 200k.


like everything we have seen in America, whatever philosophy papers over "greed is good" will move technology and profits forward.

might as well just call it "line goes up"


There was this document, no idea how trustworthy it is: https://web.archive.org/web/20231121225252/https://gist.gith...

> Sam directing IT and Operations staff to conduct investigations into employees, including Ilya, without the knowledge or consent of management.

> Sam's discreet, yet routine exploitation of OpenAI's non-profit resources to advance his personal goals, particularly motivated by his grudge against Elon following their falling out.

> Brad Lightcap's unfulfilled promise to make public the documents detailing OpenAI's capped-profit structure and the profit cap for each investor.

> Sam's incongruent promises to research projects for compute quotas, causing internal distrust and infighting.


Employees care about their share value$. That worked well with Altman raising big rounds.


Occam’s razor. It is a fight of egos and power masked as AI safety and Q*. The equivalent of politicians' "Think of the children".


It's pretty clear from what multiple people have said that he's a charismatic bullshitter, and they got fed up with being lied to.


I’m with you. The (apparently, very highly coordinated) employees should sign a public letter explaining why they wanted Altman back so badly.


> i'm still not clear

It isn't clear to anyone else either.


My pet theory is that Altman found out about Q* and planned to start a hardware company to make chips accelerating it, all without telling the board. Which is both dangerous to humanity and self-serving. It’s also almost baseless speculation; I’m interpolating on very, very few scraps of information.


How is that dangerous to humanity?


They apparently refused to tell even their CEO, Shear. I don't think anyone other than the board knows.


Maybe the safety concerns are from a vocal minority and most are quiet and don't think much about or don't actually think ai is really that close. It could just be hysterical people or people who get traffic from outrageous things


Either it’s a world changing paradigm shift or it isn’t. You can’t have it both ways.


World changing does not mean world destroying.


Cannot but think it's related to that performance he gave the night before https://news.ycombinator.com/item?id=38471651


> why the employees are clamoring for him back

what will happen with their VC-backed valuations without a VC-oriented CEO


They clearly had nothing.

They had a couple of people on the board who had no right being there. Sam wanted them gone and they struck first by somehow getting Ilya on their side. They smeared Sam in hopes that he would slink away, but he had built so much goodwill with his employees that they wouldn't let it stand.

They probably had smeared people before and it had worked. I'm thrilled it didn't work for them this time and they got ousted.


This sounds like a lot of conjecture. Those people definitely had a right to be there: they were invited to and accepted board positions, in some cases it was Sam himself who asked them to join.

But an oversight board can be established more easily than it can be disbanded, and that's for very good reasons. The only reason it worked is not because the board made any decisions they shouldn't have made (though that may well be the case) but because they critically misjudged the balance of power. They could and maybe should have made their move, but they could not make it stick.

As for the last line of your comment: I think that explains your motivation of interpreting things creatively but that doesn't make it true.


I used to correspond with Bret Taylor when he was still at Google. He wrote a windows application called Notable that I used every day for note-taking. Eventually, I started contributing to it.

It’s been fascinating to witness his career progression from Product Manager at Google, to the co-CEO of Salesforce, and now the chair of OpenAI board (he was also the chair of Twitter board pre-Elon)!


I think he is also the creator of the “Like” concept. It was introduced in FriendFeed and then Facebook started using it.


[flagged]


I didn’t mind, comments like this give insight to people’s trajectories


What was the gist of it?


Ahh the gist is it’s a personal anecdote of some anonymous dude from hacker news

That makes it interesting

Seemed genuine to me


I meant "what was the deal with the flagged comment", why was it removed and what was the gist of what it originally intended to express?


How should I understand the fact that Ilya is not on the board anymore, AND why did the statement not include Ilya in the “Leadership group” that’s called out?


As Sam said, they are still trying to work out how they are going to work together. He may be in the leadership team once those discussions have concluded.


Or equally likely he's on his way out? If there is doubt about whether a person of his stature belongs on the leadership team or not, it seems to signal that he won't be on the leadership team.


To me the way it's formulated in the press release sounds a lot like what is usually said of someone on the road to a "distinguished engineering fellow" role - lots of respect, opportunity to command enough resources to do meaningful work, but no managerial or leadership responsibilities.


If they don't throw him out, I'd say the only explanation is they don't want one of their best researchers working for someone else.


That is exactly the purpose of "distinguished engineering fellow" roles.


> I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.

Sam Altman lives in a very different world to mine.


Everyone should read "Superintelligence". OpenAI is rushing towards a truly terrifying outcome that in most scenarios includes the extinction of the human species.


Why has Nick been so quiet about all this, isn't this his particular wheelhouse?


Do we have any more insight into why he was fired in the first place?


Not really. But Helen Toner has been tweeting a little "To be clear: our decision was about the board's ability to effectively supervise the company, which was our role and responsibility. Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work." https://twitter.com/hlntnr/status/1730034020140912901


> Though there has been speculation, we were not motivated by a desire to slow down OpenAI’s work.

Strange, when their choice of interim CEO was someone who explicitly stated he wants to see the pace of AI development go from a 10 to a 2 [1], and she regularly speaks at EA conferences where that's a major theme.

This is probably doublespeak: she wasn't out to "slow down OpenAI's work" on AI safety, but she probably would have kneecapped the "early" release of ChatGPT (in her paper she claimed they should have waited much longer) and similar things.

[1] https://twitter.com/eshear/status/1703178063306203397


The way EA does donations is "I'll take a premise I believe in and will stretch the argument until it makes no sense". This is how they end up thinking that a massage for an AI researcher is money better spent than on hungry Yemeni children for example.

Once you view it like this, I wouldn't put it past them to blatantly lie. Looking at the facts as you say, they tried to replace a guy that is moving ahead with a guy that wants to slow it down to a crawl, that's pretty much all we need to know.


Essentially EA is a stretchable concept that allows adherents to act out their every fantasy with complete abandon whilst protecting their sensitive sense of self. It redefines their side to always be the good side, no matter what they get up to.


This is your daily reminder that most EAs will just donate to the top GiveWell charity even though people will talk a lot about the interesting edge cases.


My current guess is that Helen and Sam had disagreements, and that caused Sam to be less-than-candid about the state of OpenAI's tech, and that was the straw that broke the camel's back for Helen. A disagreement within the board is one thing, but if the CEO undermines the ability of the board to provide oversight, that sort of crosses a line.

Alternatively, maybe it became clear to the board that Sam was trying to either bully them into becoming loyalists, or replace them with loyalists. As a board member, if the CEO is badgering me into becoming a yes-man and threatening to kick me off if I don't comply, I can't exactly say that I'm able to supervise the company effectively, can I? See https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-fac...


They didn’t want to slow work, just the work on the stuff they didn’t like


Yes exactly.


From everything I've read, safety still feels like a red herring.

It just doesn't fit with everyone's behavior -- that's something that would have been talked about loudly.

Altman lying to the board, especially in pursuit of board control, fits more cleanly with everyone's actions (and reluctance to talk about what exactly precipitated this).

   - Altman tries to effect board control
   - Confidant tells other board members (Ilya?)
   - Board asks Altman if he tried
   - Altman lies
   - Board fires Altman
Fits much more cleanly with the lack of information, and how no one (on any side!) seems overly eager to speak to specifics.

Why jump to AGI as an explanation, when standard human drama will suffice?


But then that doesn't square with the board refusing to tell employees or Microsoft or the public that Altman committed malfeasance, or to provide examples. That would be pretty cut and dried, and Microsoft wouldn't have been willing to acquihire the entire company with Altman as CEO if there was a valid reason like that.


That's not typically the sort of dirty laundry that's aired to employees or the public, by professionals, which all of the former board were.

And absolutely Microsoft would have behaved as it did -- it doesn't really care about OpenAI-the-company; it cares about OpenAI-the-tech.

Also, see essentially the same said by Microsoft's president: https://www.bbc.com/news/technology-67578656


> our decision was about the board's ability to effectively supervise the company

Sounds like confirmation of the theory that it was Sam trying to get Toner off the board which precipitated this.


Not really, here is a prediction market on it: https://manifold.markets/sophiawisdom/why-was-sam-altman-fir...

I think the percentages don't add up to 100% as multiple can be chosen as correct.


the only report out is some employee letter to the board about Q*


There's a supposed leak on Q* that's been floating around. But really who knows


Not to criticize Sam, but I think people don't realize that it was Greg who was the visionary behind OpenAI. Read his blog. Greg happens to be a chill, low-drama person and surely he recruited Sam because he knew he is a great communicator and exec, but it's a bit sad to see him successfully staying out of the limelight when I think he's actually the one with the rarest form of genius and grit on the team.


"Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. "


By your description, it sounds like Greg's getting exactly what he wants.


Who said he wants the limelight?


What makes this hard to read/follow is the grandiose moral vision... and the various levels of credulity it's met with.

If it's words from Ilya, Sam, the board... the words are all about alignment, benefiting humanity and such.

Meanwhile, all parties involved are super serious tycoons who are super serious about riding the AI wave, establishing moats, monopolies and the next AdWords, azure, etc.

These are such extreme opposite vocabularies that it's just hard to bridge. It's two conversations happening under totally different assumptions and everyone assumes at least someone is being totally disingenuous.

Meanwhile, "AI alignment" is such a charismatic topic. Meanwhile, the less futuristic but more applicable, "alignment questions" are about the alignment of msft, openai, other investors and consortium members.

If Ilya, Sam or any of them are actually worried about si alignment... They should at least give credence to the idea that we're all worried about their human alignment.


> the words are all about alignment, benefiting humanity and such.

That's why you should only consider the actions taken, not the words spoken.

And in fact, I fail to believe that there are any real altruists out there. Especially not rich ones. After all, if they were really altruistic, they would've given all their wealth away - before their supposed death (and even then, I doubt the money given to their "charitable foundations" counts for real!).


Not necessarily. Money keeps you in the game. Giving it all away means you're at the bottom, not being that effective. And you can donate money in your will.


If you only give money away at your death, are you really giving it away? What else are you gonna do with it?


Give it to your family is the other option.


This pushed our small team to try the Azure instance of GPT-3.5 - wow, 20 times faster. The API does not fail to respond to server requests as we found oh so often with the OpenAI API. We now have something fast enough and stable enough to actually use. Pricing is higher, but my goodness, it actually works.
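
For anyone curious what the switch involves code-wise, here's a minimal sketch (not our exact setup), assuming the 1.x openai Python package; the endpoint URL, API version string, and deployment name below are placeholders. The only interface difference worth noting is that Azure routes by your deployment name rather than the public model name.

    # Minimal sketch: the same chat call against the OpenAI endpoint and an
    # Azure OpenAI deployment. Endpoint, api_version and deployment name are
    # placeholders, not real values.
    import os
    from openai import OpenAI, AzureOpenAI

    openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    azure_client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2023-07-01-preview",  # placeholder; use what your resource supports
    )

    messages = [{"role": "user", "content": "Say hello in one short sentence."}]

    # OpenAI routes by public model name.
    r1 = openai_client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

    # Azure routes by the name of *your* deployment, not the model name.
    r2 = azure_client.chat.completions.create(model="my-gpt35-deployment", messages=messages)

    print(r1.choices[0].message.content)
    print(r2.choices[0].message.content)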


As much as I'm sure the people at OpenAI are good, they're focused on the research and math of the thing, but lacking in Ops experience

(though to be honest I think Azure API was flaky yesterday)


Considering the complexities involved and the outrageous number of MAUs OpenAI has on their platform, and yet they maintain north of 99.8% uptime (ok, November was worse with 99.2%), saying they lack Ops experience is ludicrous.


Oh I'm sure they have competent people

But compared to FB, MS, Google etc they are probably still behind (both in infrastructure and maturity)


Pricing higher is funny because OpenAI fully runs on Azure.


It's a bit different to have an instance always reserved for your use versus using shared infrastructure. The latter can pretty much always expect at least a few percent utilization, which would reduce the price.


Why? The Azure API comes with an SLA and support.


How's everyone finding Azure? I’m an Azure bread-and-butter user, but I am sure people mostly on AWS are having fun with Azure's CLIs, APIs, Portal, etc., because it is going to be a lot different and unfamiliar. Also egress/ingress costs, although the AI costs would mostly far exceed that anyway.


> I harbor zero ill will towards him.

You know, if you have to say that, it's probably not zero.


He had to say it at least because everyone else was expecting some resentment.


Among the takeaways here is that: Communication Matters. A lot.

It seems like something so obvious. Maybe that's because the leaders of successful companies do it well every day as a prerequisite of them being in the position they're in.


Is anyone feeling more comfortable about relying on OpenAI as a customer after this announcement?


Not particularly. I am still worried about their data security (considering the credit card leak in March). A new board doesn’t fix that.


If you’re concerned about the data security of OpenAI there’s always the OpenAI products served from Azure.

At $DAYJOB we are working various AI features and it was a lot easier to get Azure through our InfoSec process than OpenAI.


Don’t those just go through OpenAI anyway?


As I understand it: No. Microsoft has licensed GPT and they use that to offer it as a service via Azure. As far as I’m aware this gets you the same guarantees around tenancy and control that you’d get from any other internal Azure service.


No. Microsoft have access to OpenAI models, they don't use OpenAI APIs etc.


Should anyone feel 100% comfortable betting on a company that has only been (really) commercially engaged in the last 4 years? Whose success (albeit explosive) could only be seen in the last 18 months?

If we are going to rank the concerns around openai announcements from the past 2 weeks, I'd bet the more concerning one was the initial firing decision.


I wasn't comfortable before the announcement. You can't "rely" on it. You need a fallback - either another AI, or using it in such a way that it is progressive enhancement.


Well, for starters, we all know that while realistically it's not unusual for a company to have a mission-critical person, it is very undesirable. So much so everybody must pretend that this is just unacceptable and surely isn't about their company. Here, we kinda saw the opposite being manifested. More convincingly than I've ever seen.

Second, I simply don't understand what just fucking happened. This whole story doesn't make sense to me. If I was an OpenAI employee, after receiving this nonsense "excited about the future" statement, I would feel just exhausted, and while maybe not resigning on the spot, it surely wouldn't make me more excited about continuing working there. But based on the whole "500 OpenAI employees" thing I must assume that the average sentiment in the company must be somewhat different at the moment. Maybe they even truly feel re-united. Sort of.

Obviously, I don't mean anything good by all that. What happens if Altman is hit by a bus tomorrow? Will OpenAI survive that? I don't even know what makes him special, but it seems we've just seen the clearest demonstration possible that it wouldn't. This isn't a healthy company.

That said, all that would worry me much more, if I was an investor. In fact, I'd consider this a show-stopper. As a customer? It doesn't make me more reassured, but even if Altman is irreplaceable, I don't feel like OpenAI is irreplaceable, so as long as it doesn't just suddenly shut down — sure, why not. Well, not more comfortable, of course, but whatever.


“Investors” are supposed to consider their money a donation. Of course the 100x cap is generous so it is kinda an investment. And the coup reveals a higher chance that this will morph towards for-profit as that is where the power seems to be, let alone the money.


If you are uncomfortable with OAI you could always get the same from Azure. They're a bit behind on the latest, but they support gpt4 and function calling, which is all that really matters now, imo.


So it looks like Ilya is out


If you mean out of the board, yes. But then so are Sam Altman and Greg Brockman.


Yeah, but no. "We hope to continue our working relationship and are discussing how he can continue his work at OpenAI" is not the same as "returning to OpenAI as CEO" and "returns as President". Not very subtle difference, even, huh?


Ilya is the first guy Sam acknowledged. I believe Sam when he says he harbors zero ill will against Ilya.


They’ll be back!

Only a matter of time.


If I were him I'd leave, do my own thing.


Google or AWS or Cohere will welcome him with open arms.


Larry Page was pretty pissed off with Elon Musk for poaching Ilya [1]. A great opportunity for him to come back to Google.

[1] https://www.businessinsider.com/elon-musk-justified-poaching...


Elon claims that Larry ended their friendship after he hired Ilya.


Where/when did he say this?


In a recent interview with Lex Fridman: https://youtu.be/JN3KPFbWCy8?t=5185


And/or fistfuls of cash.


With his mission?


His mission is superalignment.

So out of all the companies, possibly only Google can provide him with a model right now to do the alignment work


Google recently cut down the safety team[1]

[1]: https://www.ft.com/content/26372287-6fb3-457b-9e9c-f722027f3...


That's not the safety team.


Google possibly.

But I guess he will stay low key for a longer time now ...


Or Microsoft.


I doubt Microsoft would be willing to host him given that they effectively control OpenAI.


Not clear at all. Ilya leaving would look really bad for OpenAI. Altman needs him to stay, and he keeps the door wide open for that in the statement. Question is: How much is he willing to offer (money and otherwise) to make that happen?


[flagged]


There is some pretty strong proof in TFA that that is not the case.


[flagged]


He could easily go to Anthropic.


[flagged]


[flagged]


You wrote in general terms and then afterwards made them specific again. I just answered your comment, which is a subthread all by itself.

If you don't like the way HN works that's fine but I don't think you should be making further statements here until you've had a look at the guidelines.


Can't help but feel weird about all the thanking in the letter, especially the "sincere" thanks to Tasha and Helen, the possible main antagonists in this soap opera.

It's like a written version of the heart emojis in their Twitter exchanges.


Yes, but helping these people save face smoothes the transition. My guess is that those folks were waaay out of their depth and they naturally made naive mistakes. It doesn't benefit anyone to stomp on them. I'm sure they learned hard lessons, and Sam's message is what we call "grace", which is classy.

Is it politics? Sure, but only in the best sense. By not dunking on the losers, he builds trust and opens the doors for others to work with him. If you work with Sam and make a mistake, he's not going to blast you. It's one reason that there was such a rallying of support around Sam, because he's a bridge-builder, not a bridge-burner. Over time, those bridges add up.

Silicon Valley has a long memory and people will be working with each other for decades. Forgiving youthful mistakes is a big part of why the culture works so well.


They may have been way out of their depth but they also may have been the only ones taking their roles somewhat seriously. They've now been shown what the true balance of power is like and that is a lesson they are probably not going to forget. Unfortunately they also threw away the one shot they had at managing this and for that their total contribution goes from net positive to net negative. I don't think that in a break-the-glass scenario it would have gone any different but they were there for the ride to see how their main role was a performative one rather than an actual one and it must have been a very rude awakening to come to realize this.

It would be poetic justice if the new board fires Sam Altman next week, given the amount of drama so far I am not sure if I would be surprised or not.


If Sam gets fired I am moving to an HN clone with filters so I can ignore anything related to AI


Reeks of CEO speak; general bullshit that seems to go against all practical reasoning of the situation.

Don't fall for it.


Might you describe CEO speak as not consistently candid?


Not being consistently candid seems like the sort of thing you'd get fired for.


We call him SAlty.


>Reeks of CEO speak; general bullshit that seems to go against all practical reasoning of the situation.

Funnily enough, this is the type of thing that ChatGPT is excellent at writing.


prompt: You are an optimistic, politically correct corporate magnate. You may be wrong, but never in doubt.


That's just ChatGPT though, minus the "corporate magnate" part.


You could make that prompt a lot sharper.

For the CEO of any large multinational:

prompt: You are a lying, scheming conniving s.o.b. but outwardly you are a savior and a team player acting in all of humanity's interest. Anything you say that has negative connotations or that can be explained by greed, lust for power or outright evil should be carefully crafted to be presented as the opposite. You will put yourself before everything, including the rest of the world, the environment, and ultimately all of humanity. Your mission: to collect as much power as possible but to do so in a way that you are rarely forced to show your hand and when you do there should be ample ways in which it can be interpreted that show you as the good guy.


This is probably a ChatGPT prompt starting with "I am VC Bro, write a letter as I come back as CEO and thank everyone who was involved in the stabbing, and don't forget to use cliché stuff like turning a crisis into an opportunity"


Here's what ChatGPT 4 (paid version) responded with for that exact prompt:

---

Subject: Embracing New Horizons Together

Dear Team,

As I resume my role as CEO, I am filled with gratitude and renewed vigor. The recent challenges we faced were not mere setbacks, but stepping stones that have strengthened our resolve and unity.

I want to extend my deepest appreciation to each one of you who stood by me and our company during these testing times. Your unwavering support was not just a lifeline; it was a powerful testament to our collective strength and dedication.

This experience has not only been a learning curve but also a vivid reminder of the incredible potential we possess as a team. We turned a crisis into a golden opportunity, showcasing resilience and adaptability that are hallmarks of our company culture.

As we move forward, let's harness this momentum to propel us towards new heights. Our journey ahead is filled with possibilities, and together, we will continue to innovate, disrupt, and lead.

Thank you for your steadfast commitment and for believing in our vision. Here's to turning challenges into triumphs and continuing our journey of excellence.

Warm regards,

[Your Name] CEO [Your Company]


That is f'ing scary because I think I've read a letter or two just like that.


> I've read a letter or two just like that.

So did the LLM.

Likely thousands as non-fiction, perhaps magnitudes more as fiction, satirical, or speculative. And I bet I'm still missing zeroes — countless business leaders have been writing this drivel a long time.

It's hard for us to keep in mind the sheer scope of online copy. Less hard for LLMs, that's why they feel to me like magic in the Arthur C Clarke sense. I have to keep reminding myself nothing's new under the sun.

This has all happened before, it will all be generated again.


The difference is: I wouldn't be able to regurgitate it on a moment's notice adapted to the situation at hand.


In my eyes, the LLM took very generic high-level instructions, interpreted them correctly, and then followed the instructions to generate output text that was similar to what it had seen before.

What it didn't do was to just use the input prompt as the key into a gigantic lookup table and find the corresponding output text.

A different example of this is when an LLM can correctly interpret a sentence like "the trumpet did not fit in the suitcase because it was too big / small".

Here, "big" would refer to the trumpet, while "small" would refer to the suitcase. This becomes important when translating to German, where "the" comes in different flavors for "trumpet" and "suitcase".

ChatGPT interprets this correctly. It can also explain in detail why "too big" / "too small" makes sense or not for the different interpretations.

You could argue that this is still just a lookup response. But I have a hard time seeing how it could have stored so many different variants in the number of bytes that the LLM is made up of.

--

ChatGPT explanation for trumpet / suitcase:

Let's delve into the logic and reasoning behind which noun the size descriptor applies to in each of your sentences:

"The trumpet did not fit in the suitcase because it was too big."

In this sentence, the logical interpretation is that the trumpet is too large to fit inside the suitcase. It's understood that if an object doesn't fit into a container, the issue is typically that the object is too large for the container's capacity.

Hence, "it was too big" naturally refers to the trumpet, not the suitcase. It wouldn't make much sense to say a suitcase is "too big" to contain something; if the suitcase were bigger, it would more likely accommodate the trumpet.

"The trumpet did not fit in the suitcase because it was too small."

Here, the logical understanding is that the suitcase is too small to contain the trumpet. When an object doesn't fit into a container due to size, and the descriptor is "too small," it implies that the container isn't large enough.

In this case, "it was too small" refers to the suitcase. If the sentence intended to say the trumpet was too small, it would contradict the idea of not fitting; a smaller trumpet would more likely fit into the suitcase, not less.

In both sentences, the key is understanding how size descriptors (big, small) interact with the concept of fitting something into something else. The size issue that prevents the fitting is logically attributed to the object that needs to be accommodated (the trumpet) in terms of being too big, or to the accommodating object (the suitcase) in terms of being too small.


Their actions vastly, unexpectedly to them, enhanced his leverage. It may well be sincere!


Being the bigger man and giving backhanded compliments often sound similar. Either is better than tirades against your defeated enemies, at least when you're trying to act as a civil business.

A heavy sigh, a bit of grumbling, might be more honest, but there's a reason that businesses prefer to keep a stiff upper lip.


Reading between the lines, I see "I harbor zero ill will towards [Ilya]... we hope to continue our working relationship." but no such comments directed at Helen and Tasha. Given how sanitized these kinds of releases usually are, I took that to mean "" in this context.


Ilya essentially accused Altman of lying to the board. Hearing "zero ill will" from a liar looks like intimidation to me. Especially if we take into account his previous history.


Maybe it's BS corporate gibberish, I don't know. But Sam has always struck me as an honorable person who genuinely cares. I don't think he's vindictive; I think he genuinely supports them. You can disagree immensely and still respect each other – this isn't about money, it's potentially about the world's future, and Sam likely understands what happened better than we do.

Or maybe it's bullshit, I don't know.


That had me laughing out loud, thank you.


We know the board said that he was two-faced and that was one of the reasons he was fired.


Can't wait for Pirates Of The Silicon Valley 2 to come out.


Anthropic guys also wanted Altman gone. He is not well liked by upper management it seems.


Everyone is still left guessing what happened.

At least my speculative view (rewritten by chatgpt ofc):

There's speculation that OpenAI's newly developed Q* algorithm marks a substantial advancement over earlier models, demonstrating capabilities like solving graduate-level mathematics. This led to discussions about whether it should be classified as Artificial General Intelligence (AGI), a designation that could have significant repercussions, such as potentially limiting Microsoft's access to the model. The board's accusation of "not consistently candid" against Sam Altman is thought to be connected to efforts to avoid this AGI classification for the model. While the Q* algorithm is acknowledged as highly advanced and impressive, it's believed that there's still a considerable journey ahead towards reaching a true singularity event. The complexity of defining AGI status in AI should be noted. At what point of advancement can a model be labeled as an AGI? We are way past simple measures like the Turing Test. Arguments from both perspectives have their merits.


I believe it's not possible to keep such a thing a secret, so this reason as the main motivation for the board's actions sounds kind of weak to me


The progress from GPT3 to GPT4 has been so substantial that many might argue it signifies the advent of Artificial General Intelligence (AGI). The capabilities of GPT4 often elicit a sense of disbelief, making it hard to accept that it's merely generating the most likely text based on the preceding content.

Looking ahead, I anticipate that in the next 2-3 years, we won't witness a sudden, magical emergence of "AGI" or a singularity event. Instead, there will likely be ongoing debates surrounding the successive versions like GPT5, GPT6, and so on, about whether they truly represent AGI or not.


Correct, AGI refers to a level of AI development where the machine can understand, learn, and apply its intelligence to any intellectual task that a human can, a benchmark GPT-4 hasn't reached.

What actually happened between GPT-3 and GPT-4 was so-called RLHF, which basically means fine-tuning the base model with more training data, but structured so it can learn instructions - and that's really all there was, plus some more params to get better performance. Besides that, making it multimodal (so basically sharing embeddings in the same latent space).

Making it solve graduate-level math is a lot different than throwing some more training data at it. This would mean they changed the underlying architecture, which actually could be a breakthrough.


My guess is this is just a good ol' corporate power struggle.

There is no point in speculating until someone shows what the hell Q* even is.


Seems to me that the board made the right call in trying to get rid of Altman. Unfortunately it seems they weren't ready for the amount of backlash he and others could orchestrate, or for the way the narrative would develop. I was personally kind of surprised too; I couldn't really understand the response from people. It seemed like everybody settled on one take and stuck to it for some reason.


All they had to do was say why they fired him. A toddler could whip up the same amount of backlash against a board that bungled their communications so badly. The reason everyone settled on that take was that the "coup" framing was put forward first, and then the board went "nuh uh" and that was it. And like - what?


A disappointing coup that ends with exactly the same status quo as before. Second one this year already!


Well not quite, the board doesn’t have a couple of unqualified independent directors any longer.


Prigozhin is dead too.


Not the same. The board was fired and replaced with sama simps


Almost the same: The engineer is out of the picture.


We don’t know that yet.


How do you figure ? He was a board member and he is no longer part of the board.


> The fact that we did not lose a single customer will drive us to work even harder for you, and we are all excited to get back to work.

I'm not a big customer, but I am starting the process of moving away from OpenAI in response to these events


That’s a strange statement because I definitely canceled my subscription as a result of the happenings. This very public battle confirmed for me how misaligned OpenAI is with the originating charter and mission of its nonprofit. And I didn’t want to financially contribute towards that future anymore.

I guess my subscription didn’t count as a customer.


This happens to me frequently. When I report an obvious problem in some service it is always the very first time that they've heard of it and no other customers seem to have the issue.


I mean... Given the millions of people who have browsed and used sites I've been responsible for, the number of complaints isn't usually high, and if guest services could narrow an issue down it usually got passed down, but a lot of the time it's one guy angry enough to report the issue. I've reported issues on several sites now and then, and I'm not even sure if they bothered to respond or ever got my email; how do you get a Gmail email through a corporate firewall?

I think a lot of people will just leave your site and go elsewhere vs bother to provide feedback.

I think the true customers of OpenAI are likely not the people paying for a ChatGPT subscription, but paying to use their APIs which is significantly harder to just step away from.


Same. I don’t think it’s the truth. It happens with alarming frequency to our family. We seem to be some kind of stealth customer QA for companies.

The other possibility is that they are lying to cover their ass, but they would never do that… right?


I had a friend who did call center stuff.

It was kind of eye-opening - they took phone calls from late-night TV infomercials and there was a script.

They would take down your name, take your order, and then... upsell, cross-sell, special offer sell, etc.

If the person said anything like "I'm not interested in this, blah blah", they had responses for everything. "But other people were quite upset when they didn't receive these very special offers and called back to complain"

It was carefully calculated. It was refined. It was polished and tested.

The only way OUT of the script was to say "I will cancel my order unless you stop"

If the call center operator didn't follow the script, they would be fired.

(You know this happens now with websites at scale. A/B test until the cancellation message is scary enough. A/B test until you give up on the privacy policy.)


> The only way OUT of the script was to say "I will cancel my order unless you stop"

Hanging up the phone is always an option. If you feel civilised you first say you are not interested and thank the sales person for their time, and then hang up no matter what they try to say. That is a way out of the script of course.


I find it extremely frustrating when people/businesses/organizations take advantage of the general populations politeness.

Ten years ago I would have found it really difficult to hang up on some random phone caller that I didn't want to speak to. Now I don't give it a second thought.

Inch by inch we're all getting ruder and ruder to deal with these motherfuckers, and I can't help but feel that it is spilling out into regular interactions.


This is a universal truth of feedback and customer service. Every user report is an iceberg: for every 1 person there's a much more significant number of people who experienced the problem but never reported it.


Yes, but the company may be like an icebreaker going across the pole in a straight line, and still, when asked about hitting ice, the captain will say that this is literally the first time it has ever happened.


So true.


Is there some technicality here that we're missing (e.g., is there a difference between you and other customers?) or is he just lying?


It's called "spin" in a press release/marketing, but we on the outside call it a lie, yes.

It wouldn't shock me to learn all of the events that took place were to get worldwide attention and strengthen their financial footprint. I'd imagine not being able to be fired, and having the entire company ready to quit to follow you, sends a pretty clear signal to all VCs that hitching your cart to any other AI wagon is suicide, because the bulletproof CEO has literally the people at the cutting edge of the entire market ready to go wherever he does. How could anyone give funding to a company besides his at this point? Might as well set it on fire if you're going to give it to someone else's company.


Because LLMs from competitors already have real use? E.g., kagi.com uses Claude by Anthropic [1].

[1] https://help.kagi.com/kagi/ai/assistant.html


Yeah, but their CEO can be fired, and the CEO is who the VCs backed

EDIT: The fact that me, average joe, knows all about open ai and its CEO, and even some of its engineers, yet didn't know Kagi was doing anything with AI until your comment, tells me that Kagi is not any sort of competition, not as far as VCs are concerned, anyway.


It might be that there was no net outflow of customers. I am sure customers quit all the time, and others sign up. It probably means that either they didn't see a statistically relevant increase in churn, or the number of excess quits was compensated by excess new customers.


Yea this seems like the most likely read to me. The customers lost are indistinguishable from their churn rate.


He's probably somewhat deceptively only referring to enterprise license customers. When there's an enterprise offering, many times the individual personal use licenses are just seen as gravy on top of the potatoes. Not like good gravy though, like the premade jars of gravy you can buy at the grocery store and just heat up.


Yea, he's saying the quiet part out loud.

You users aren't the customer you think you are.

Microsoft and big contracts are the customers.


It counted. It's just most people didn't share your opinion.

But that's not the main problem. Even if people did share your opinion it wouldn't matter. ChatGPT is a tool. It is a hammer.

People are concerned about the effectiveness of a tool. They are not concerned about whether the hammer has any ethical "misalignments."


They might mean net? Have the same number of customers at the end as the start? Instead of a steep cliff?


“Customer” usually means business customer in this context.


Obviously this. They mean the enterprises that have integrated OpenAI into their platforms (like eg Salesforce has). All of this happened so quickly that no one could have dropped them lol but nevertheless yeah they probably didn't officially lose one - plus they're all locked into annual contracts anyway.


I don't think CEOs are selected for their honesty.


I hear the board wasn't happy with Sam because he wasn't always entirely honest...


If you mean a ChatGPT subscription, I'm assuming no, you're not their primary customer base. I assume their primary customers are paying for significant API usage, and it's not fully feasible to just migrate overnight.


They didn't lose any of their current customers... /s


Why? If the product is useful (it is to me), then why do you care so much as to the internal politics? If it ceases to be useful or something better comes along, sure. But this strikes me as being serially online and involved in drama


This internal drama can play out in the service. Frame the question as: do you want to build on an unstable/unsteady platform?


Do you want to build on subpar technology?

Nothing beats OpenAI at the moment. Nothing is even close.


Phind is an example where they use their own model, and it is pretty good at its specialty. OpenAI is hard to beat “in general”, especially if you don’t want to fine-tune etc.


As long as you can outrun the technical debt, sure. Nothing lasts forever. Architect against lock in. This is just good vendor/third party risk management. Avoid paralysis, otherwise nothing gets built.


I'm convinced embeddings are the ultimate vendor lock in of our time.


If OpenAI decides to change their business model, it might be bad for companies that use them, depending on how they change things. If they are looking unstable, might as well look around.


I despise the engineering instinct to derisively dismiss anything that involves humans as "politics".

The motivations of individuals, the trade-offs of organizations, the culture of development teams - none of those are "politics".

And neither is the fundamental hierarchical and governance structure of big companies. They influence the stability of architectures, the design of APIs, the operational priorities. It is absolutely reasonable for one's confidence in depending on a company's technology to be shaped by the shenanigans OpenAI went through.


It’s not about politics, it’s about stability and trust.

Same reason I’m hesitant to wire up my home with IoT devices (just a personal example). Nothing to do with politics, I’m just afraid companies will drop support and all the things I invested in will stop working.


Eventually you have to make a decision though? Even if it’s the wrong decision?

Our time is finite.


Not filling your home with more triangulating spyware is a decision.


Yes, but that's not the decision the person in this thread was struggling with - they were struggling with the idea that they may invest $$ into something that 2,3,10 years down the road no longer works because a company went out of biz.

Sounds like they would like to have the devices but have a hard time pulling the trigger for fear of sinking money into something temporary.


Yeah, and the operational stability of a company is a factor that goes straight to its ability to continue as a going concern. So it's reasonable for many people to base their decision on this kind of drama (even if not everyone agrees on the importance of this factor).


You may want to go back and re-read the thread you are replying to... the person I replied to wasn't talking about drama; they made an "IoT home devices all spy on you" argument.


It’s possible to make a more charitable reading of their comment as being on topic.


Because you don't rely on a business that had 80% of its staff threaten to quit overnight?


> staff threaten to quit overnight

They didn't, though. They threatened to continue tomorrow!

It's called "walking across the street" and there's an expression for it because it's a thing that happens if governance fails but Makers gonna Make.

Microsoft was already running the environment, with rights to deliver it to customers, and added a paycheck for the people pouring themselves into it. The staff "threatened" to maintain continuity (and released the voice feature during the middle of the turmoil!).

Maybe relying on a business where the employees are almost unanimously determined to continue the mission is a safer bet than most.


>They didn't, though. They threatened to continue tomorrow!

Are you saying ~80% of OpenAI employees did not threaten to stop being employees of OpenAI during this kerfuffle?


They're saying that ~80% of OpenAI employees were determined to follow Sam to Microsoft and continue their work on GPT at Microsoft. They're saying this actually signals stability, as the majority of makers were determined to follow a leader to continue making the thing they were making, just in a different house. They're saying that while OpenAI had some internal tussling, the actual technology will see progress under whatever regime and whatever name they can continue creating the technology with/as.

At the end of the day, when you're using a good or service, are you getting into bed with the good/service? Or the company who makes it? If you've been buying pies from Anne's Bakery down the street, and you really like those pies, and find out that the person who made the pies started baking them at Joe's Diner instead, and Joe's Diner is just as far from your house and the pies cost about the same, you're probably going to go to Joe's Diner to get yourself some apple pie. You're probably not going to just start eating inferior pies, you picked these ones for a reason.


They showed they are hypocrites.

Blaming the board for hindering the OpenAI mission by firing Altman, but at the same time threatening to work for MS, which would kill that mission completely.


I don't think that's necessarily true or untrue, but to each their own. Their mission, which reads, "... ensure that artificial general intelligence benefits all of humanity," leaves a LOT of leeway in how it gets accomplished. I think calling them hypocrites for trying to continue the mission with a leader they trust is a bit...hasty.


>Microsoft was already running the environment, with rights to deliver it to customers.

But they don't own it. If OpenAI goes down they have the rights to nothing.


> But they don't own it. If OpenAI goes down they have the rights to nothing.

This is almost certainly false.

As a CTO at some of the largest banks and hedge funds, and a serial founder of multiple Internet companies, I assure you contracts for novel and "existential" technologies the buyer builds on top of are drafted with rights that protect the buyer in the event of the seller blowing up.

Two of the most common provisions are (a) code escrow w/ perpetual license (you blow up, I keep the source code and rights to continue it) and (b) key person (you fire whoever I did the deal with, that triggers the contract, we get the stuff). Those aren't ownership before a blowup; they turn into ownership in the event of anything that threatens stability.

I'd argue Satya's public statement on the Friday the news broke ("We have everything we need..."), without breaching confidentiality around actual terms of the agreement, signaled Microsoft has that nature of contract.


They threatened to walk across the street to a service you aren’t using.


And if they walk across that street, I'll cancel my subscription on this side of the street, and start a subscription on that side of the street. Assuming everything else is about equal, such as subscription cost and technology competency. Seems like a simple maneuver, what's the hang up? The average person is just using ChatGPT in a browser window asking it questions. It seems like it would be fairly simple, if everything else is not about equal, for that person to just find a different LLM that is performing better at that time.


It's super easy to replace an OpenAI API endpoint with an Azure API endpoint. You're totally correct here. I don't see why people are acting like this is a risk at all.
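
Roughly what that swap looks like, as a minimal sketch assuming the openai Python SDK's v1-style clients; the endpoint URL, API version, and deployment name below are placeholders, not anything confirmed here:

    # Rough sketch: same chat call against two backends (openai Python SDK, v1-style).
    from openai import OpenAI, AzureOpenAI

    openai_client = OpenAI(api_key="sk-...")  # direct OpenAI endpoint

    azure_client = AzureOpenAI(               # Azure OpenAI endpoint
        azure_endpoint="https://my-resource.openai.azure.com",  # placeholder resource
        api_key="...",
        api_version="2023-05-15",             # placeholder API version
    )

    def ask(client, model):
        # For Azure, "model" is the name of your deployment rather than the model family.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Hello"}],
        )
        return resp.choices[0].message.content

    print(ask(openai_client, "gpt-4"))
    print(ask(azure_client, "my-gpt4-deployment"))  # hypothetical deployment name

The call shape is the same either way; the main thing that changes is client construction and the deployment naming.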


Not that easy; MS can sell GPT as a service but doesn't own it.

No OpenAI no GPT.


I was going on the assumption that MS would not have still been eager to hire them on if MS wasn't confident they could get their hands on exactly that.


That's not how contracts like this are written.

It's far more common that if I'm building on you, if you blow up, I automatically own the stuff.


It’s a bit like buying a Tesla.


Based on how their post is worded, I'm guessing they never needed OpenAI's products in the first place. For most people, OpenAI's offerings are still luxury products, and all luxury brands are vulnerable to bad press. Some of the things I learned in the press frenzy certainly made me uncomfortable.


You don’t believe that the non-profit’s stated mission is important enough to some people that it is a key part of them deciding to use the paid service to support it?


> why do you care so much as to the internal politics?

agree and why did they go from internal politics -> external politics (large scale external politics)


It’s a dramatic story - a high-flying ceo of one of the hottest tech companies is suddenly fired without explanation or warning. Everyone assumes it’s some sort of dodgy personal behavior, so information leaks that it wasn’t that, it was something between the board and Sam.

Well, that’s better for Sam, sure, but that just invites more speculation. That speculation is fed by a series of statements and leaks and bizarre happenings. All of that is newsworthy.

The most consistently asked question I got from various family over thanksgiving beyond the basic pleasantries was “so what’s up with OpenAI?” - it went way outside of the tech bubble.


> why did they go from internal politics -> external politics (large scale external politics)

My guess is it has something to do with the hundreds of employees whose net worth is mostly tied up in OpenAI equity. It's hard to leverage hundreds of people in a bid for power without everyone and their mother finding out about it, especially in such a high-profile organization. This was a potentially life-changing event for a surprisingly large group of people.


The public drama is a red flag that the organization's leaders lack the integrity and maturity to solve their problems effectively and responsibly.

They are clearly not responsible enough to deal with their own internal problems maturely. They have proven themselves irresponsible. They are not trustworthy. I think it's reasonable to conclude that they cannot be trusted to deal with anybody or any issue responsibly.


It’s not like it was a big secret. There was an MIT Press report a few years ago that clearly outlined the OpenAI setup.

https://www.technologyreview.com/2020/02/17/844721/ai-openai...

Hopefully recent events were enough of a wake up call for regulators and the unaware.


FYI MIT Press ≠ MIT Technology Review.


Yeah, OpenAI lost a bit of its magic. It's sad because it was really fun so far to see all the great progress.

But there are so many unanswered questions still and the lack of transparency is an issue, as is the cult like behavior that can be observed recently.


For those who are curious here's some background on the "cult like behavior" rumors

https://nitter.net/JacquesThibs/status/1727134087176204410#m

"The early employees have the most $$$$ to lose and snort the company koolaid [...] They were calling people in the middle of the night"

"The before ChatGPT [employees] are cultists and Sam Altman bootlickers"

From anonymous posts on Blind, current/former OpenAI employee


By cult-like behavior, are you referring to 700+ OpenAI employees threatening to quit unless the board brought back Altman?



How many of OpenAI's employees are actually developing the software they market? 700+ seems awfully high.


Not this again!


I just read

"Ego, Fear and Money: How the A.I. Fuse Was Lit"

https://www.nytimes.com/2023/12/03/technology/ai-openai-musk...

In discussing OpenAI the article reveals why OpenAI is the size it is:

OpenAI was created before LLMs were so popular, so OpenAI has a diverse employee pool of AI people. Many, if not most, were hired NOT for LLM or even NN knowledge but for knowledge of the more general field of AI.

If you were an OpenAI exec who fervently believed LLMs would take you to true AI, then there would be every reason to dump the non-LLM employees (likely a majority and a financial burden) and hire new staff who are more LLM-knowledgeable. At the same time, current OpenAI staff not familiar with LLMs are undoubtedly cramming! 8-)

So that satisfies my question as to why OpenAI has so many people: only a fraction of the company produced the current hot products.


No, no, no! Please, if I may elaborate:

Q. How many programmers and engineers does it take to build an LLM and fire it up?

Some here implicitly speak as if they are familiar with LLMs, and so I assumed that the answer could be 1, 2 or possibly a handful of people to do the deed. But it seems I am very wrong.

Nonetheless by the time one has 700+ employees, surely someone in charge would have noticed that the room was crowded.

And why not the same at 500, or 200 or even 50 or fewer?

Perhaps the lack of oxygen has something to do with it? Might I suggest opening a window or two?


Ops, scaling up, site reliability, research, marketing, front-end web, training (which needs humans, which means it needs organisation), legal, etc. There is a lot going on, plus R&D, in other words the next AI breakthrough. Pretty lean if you compare it to Google and realize it is not far off being as good, and would be the world's best search if Google didn't exist. How many people does Google employ!


Unnecessary items:

Scaling up - static website is enuf.

Site Reliability - ditto.

Front End Web - ditto.

Necessary items:

Ops - Gotta have someone who understands computers! Yes.

Research - Here's the work. Yes.

Training - No. Hire people carefully, fire quickly.

Legal - minimal - hire a small law firm.

So ~700 people, mostly in Research, plus some cash for Legal? The size of the workforce must far exceed the scope of the task. Time to trim.

Comparisons to Google make no sense.


quickthrower2 says:

>"Not this again!"<

I am unenlightened by your short missive. Perhaps you could point to something clarifying your intent - something like, say:

"This has been previously discussed ad nauseum in the Fortune magazine article at URL..."

or

"Here's a similar post and the details of OpenAI URL."

Thank you.


It is a HN trope to say why does company X need so many employees. Usually said about world class companies. Usually because you can build something that looks like it in a weekend with React and MongoDB (although you can't prototype OpenAI like that).


quickthrower2 says >"It is a HN trope to say why does company X need so many employees."<

I have not gathered the statistics that you undoubtedly have compiled. Please feel free to post them here in support of your grammar specificity for the usage of "HN trope".

There was however a post titled "Why big tech companies need so many people" at https://news.ycombinator.com/item?id=34734655

---------------------

quickthrower2 says >" Usually because you can build something that looks like it in a weekend"<

Yes, but that is a fair comparison, n'est-ce pas?


For serious work, you don't have a choice though, the competition isn't there


It depends on what we mean when we say “serious work”, but from a European enterprise perspective you would not use OpenAI for “serious work”; you would use Microsoft products.

Co-pilot is already much more refined in terms of business value than the various OpenAI products. If you’ve never worked in a massive organisation you probably wouldn’t believe the amount of efficiency it’s added to meetings by being able to make readable PowerPoints or useful summaries by recording a meeting, but it’s going to save us trillions of euros just for that.

Then there are the data protection issues with OpenAI. You wouldn’t put anything important into their products, but you would with Microsoft. So Co-pilot can actually help with things like contract management, data refinement and so on.

Of course it’s sort of silly to say that you aren’t buying OpenAI products when you’re buying them through Microsoft, but the difference is there. But if you included Microsoft in your statement, then I agree, there is no competition. I like Microsoft as an IT-business partner for Enterprise, I like them a lot, but it also scares me a little how much of a monopoly on “office” products they have now. There was already little to no competition to Office365 and now there is just none whatsoever.


  > you probably wouldn’t believe the amount of efficiency it’s added to meetings by being able to make readable PowerPoints or useful summaries by recoding a meeting
How exactly? Transcribe the speech to text and then convert the text to a summary?


What alternatives are you currently looking at? I’ve just begun scratching the surface of Generative AI but I’ve found the OpenAI ecosystem and stack to be quite excellent at helping me complete small independent projects. I’m curious about other platforms that offer the same acceleration for content generation.


Azure offers a mostly at parity offering.

https://learn.microsoft.com/en-us/azure/ai-services/openai/w...

Edit: I misunderstood the ask, my apologies.


That is still OpenAI. Anthropic might be a choice depending on the use case.


Yeah I just watched the keynote on Amazon’s Q product. I’m going to tinker with that in the coming days. Pretty excited about the Google drive/docs integration since we have a lot of our company documents over the last 15 years in Drive.


Anthropic? No. They over-censor their models.


That’s fair but I’m mostly building prototypes with the API intended for exploring the space so I’m not too worried about productionizing these yet. I was curious if there’s another solution that meets or exceeds OpenAI for quality of content and ease of use. I’m an ex-programmer working as a PM so most of this is just learning about these tools.


I hope you find a competitor as good as ChatGPT. We desperately need competition in this space. The fact that Google/FB tossing billions still hasn't created anything close is starting to worry me.


Why? I'm genuinely curious. I'm not a particularly wealthy individual paying for ChatGPT and I didn't flinch at the news.


they're definitely going full B2B so it's likely this is the start of a new age Oracle.


That's why you should choose an open source AI. Not subject to whims of a single person or a corporate board.


Where are you planning on moving to? I don't think there's a reason to not use OpenAI, but definitely right to diversify and use something like LiteLLM to easily switch between models and model providers.
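
For the curious, a minimal sketch of what switching providers through LiteLLM can look like; the model identifiers and the Azure deployment name below are illustrative assumptions, worth checking against LiteLLM's docs:

    # Rough sketch: one call shape, several providers, via litellm.completion.
    # API keys are expected in environment variables (OPENAI_API_KEY, etc.).
    from litellm import completion

    messages = [{"role": "user", "content": "Summarize this thread in one line."}]

    # Swap the model string to move between providers without rewriting app code.
    # "azure/my-gpt4-deployment" is a hypothetical deployment name.
    for model in ["gpt-4", "claude-2", "azure/my-gpt4-deployment"]:
        resp = completion(model=model, messages=messages)
        print(model, "->", resp.choices[0].message.content)

The point is less any particular library and more keeping the provider choice behind one seam so it can be changed later.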


They're the flavor of the month today, but I'm waiting on a better/cheaper option.


why would you not use the best model because of their internal drama?


Especially now that its clear they are completely backed by Microsoft, everyone in that company has a job at Microsoft tomorrow if they need it.


There was a time when Google Translate was the only game in town but now every single time, I find DeepL to be far superior.

Could this happen to OpenAI? If not, why not? Is some of the research not open?

Also, even if DeepL is subjectively better, how did they, or how could someone else, do that? I mean, are some key papers + data + a cloud budget to train the three main ingredients needed to replicate such a translation model? If yes, is that also applicable to GPT-4?

EDIT: Typo + clarifying question


To me, DeepL was really amazing back in 2020 and earlier. But ever since big AI releases like ChatGPT, I can't help but feel they're kinda stagnant.

Their model still has the same incorrect repetition issues it's had for years, and their lack of any kind of transparency behind the scenes really makes it feel like it still _is_ the exact same model they've served for years.

Quickly checking their twitter does not seem to reveal many improvements either.

Of course I get that model training can be rather expensive, but I'd really have thought that in the field of AI translations things would be evolving more, especially for a big-ish player like deepL.

If anyone has insights on what they've been up to I'd be really interested to know.


Same, feels like they created a model once and left it to rot. What baffles me is how google translate is still behind them despite spearheading the scene a few years ago.


Google's logic is likely, "Why continue to invest in this?"


If you're asking "Could another company make a much better version of their products?" then the answer is a fairly clear yes, it could happen to OpenAI. They have an unknown number of proprietary advancements that give their products the edge (IMO), but it's an incredibly fast moving space with more and more competitors, from fast moving startups and deep-pocketed megacorps.

The impression I get is that if they rested on their laurels for a fairly short time they would be eclipsed.


ChatGPT is pretty amazing for translations as well. Even better than DeepL, since I can not only ask it to translate but to also change the tone and stuff


I’ve seen deepl hallucinate a word into a sentence lately.

E.g translate this from Turkish to English:

> “Günaydın. Bunlar sahte tuğla paneller mi?”

>“Good morning. Uh-huh. Are those fake brick panels?”

Where is that “uh-huh” coming from?

Now same sentence, but remove an adjective:

> “Günaydın. Bunlar tuğla paneller mi?”

> “Good morning, ma'am. Are those brick panels?”

Ma’am? Where is there a “ma’am” in that sentence?


So the goal was a coup-like event for microsoft, "altman" was just a decoy


Sorry, I'm not really buying this as an organic, unforeseen set of events.

It all looks staged or planned by someone or something.

The key to find out is to look for a business move that used to be illegal but now became legal due to the unfolding events.


I have an insanely hard time believing that a CEO that copies his customers and competes directly with them with his own consumer product will be “doubling down” to help customers.

Sam has shown he has no good will towards anyone but himself.


Building products on top of their platform and then getting mad that they continue to add and improve said platform with additional functionality (without knowing their product roadmap in advance) seems petty. If they’re able to eat their customers’ lunch so quickly, it’s because the customers are building very thin and weak businesses with no actual innovation.


I mean, basically no-one can compete with OpenAI right now.

They have a monopoly. They can start copying any business that relies on them tomorrow.

And it’s not easy to build a better model. So that’s not an option.

So you basically use them to deliver a tangential product with the risk of them copying you, or refuse to play in AI.

You can’t innovate with OpenAI tech without risking that they’ll just copy you

It’s the amazon playbook: become the store and copy the best selling products.

I love gpt-4, just hate the direction of openAI. Sam knows he can take over the world with OpenAI, so he’s just doing it.


I spent a year and half a million dollars with two ex-Mina engineers from Google, and we simply lack the funds and information superiority to compete with OpenAI on any level. And VCs are running scared to go against the big players because they simply don't have the funds. It's really grim out there. But I guess I'm just a whiner according to this dude.

The reality is that anything that works is quickly snatched up by bigger players, and you have no moat if you can't work on your own model. And with Microsoft owning nearly a year's worth of cards to do inference on, it's gonna be hard to even try to do that.

So you are left being the equivalent of an Uber driver for OpenAI slowly making negative value while pretending to get short term gains — and being scale capped by costs.


Wow, that’s a really sad situation. Must be difficult for you. Good luck buddy, will be waiting to see you build your own model.


There’s a difference between a true monopoly and first mover advantage. There is competition in AI, it’s just behind.

I agree on the risk of building on them, but then again any business has its risks.

My main point was that some of these first AI startups built on OpenAI were always going to be eaten by them because they put themselves in the way of OpenAI's roadmap. I don't believe the Amazon comparison yet, although they may eventually go down the copycat path. But many of these startups that were recently made obsolete were just trying to piggyback on the real innovation, which is the GPTs.

I’m very happy with GPT 4 and the newest updates, it’s a great tool for me.

Disruption will always be painful for some, and the frothiness around AI and the fear of missing out are making people a little more cynical than is healthy, IMO.


Ah yes, all those "customers" that built products that were a thin wrapper around ChatGPT with an extra feature, truly the innovative powerhouses that will revive our economy /s. I mean seriously, if anyone thought they were going to build a business around a GPT-based startup to parse documents, when document parsing is already a specific goal with benchmarks in AI that OpenAI was obviously going to be working towards - well, I'm glad it didn't take long to clear out those grifters.


Seems so odd to be such a jerk about it…


After wave after wave of Theranos/crypto/NFTs/FTX etc., honestly I've got very little patience left, and they've soured the entire national mood on Silicon Valley to the point where this crowd was the justification for a lot of people wanting to let SVB take down actual companies with it instead of a bailout. I find them actively harmful, and now they're trying to whip up a storm whining about anticompetitiveness to stop development of a rather useful tool. Won't pretend there isn't a little bit of glee in watching the folks who just jumped from NFTs into this as the new big thing seeing their investment already go up in smoke.


You are conflating a ton of completely unrelated issues here to a point of total ignorance.


how is the quora ceo who now runs a competing AI company still on the board?


I bet there are tons of seeming conflicts of interest on company boards. In fact, being a major shareholder qualifies you (but not sufficiently) for the board. Jobs was a Disney board member after Pixar's acquisition. You could argue Apple is now a competitor to Disney. So what? If a competitor acquires enough control of a company, what's wrong with that?


Steve died 8 years before Apple TV+.

He would have probably left the Disney board by now. Or Apple might have invested in DIS.


It seems hard to imagine a board member that is involved in another tech business not having a conceivable conflict of interests. LLMs are on a course to disrupt pretty much all forms of knowledge work after all. Also big implications for hardware manufacturing and supply chains.


A "new" "initial" board seems oxymoronic.


The implication is they will appoint more members soon.


“Initial new board” would seem more semantically correct, then, no?


Wasn't that what they failed to do for the months prior?


Is it fair to say that some of the original board members were chosen because they were women and firing Sam, demoting Greg were unanticipated activities of women whose archetype was of a more even-keeled, compassionate character who would value humanity more than a cold, calculating, toxic white male?


What's really intriguing is that he doesn't flat-out reject the 'unfortunate leaks' (Q*) in his latest interview on the Verge. It was definitely a victory for Altman and Microsoft, and now we're left wondering what Ilya's next move will be.


I think this is the beginning of the downfall or end of OpenAI…well, the downfall had already started. It may not be apparent, but the company structure and the grip of the (now returned) CEO doesn’t sound like a good recipe for “open” or ethical AI.

The company will probably make money. Microsoft would still be a partner/investor until it can wring the maximum value before extinguishing OpenAI completely. Same holds for employees’ loyalty. There doesn’t seem to be anything in this entire drama and the secrecy around it that indicates that things are good or healthy within the company.


> There doesn’t seem to be anything in this entire drama and the secrecy around it that indicates that things are good or healthy within the company.

The drama reflects well on OpenAI’s in-dev technology (be it Q* or GPT5 or something else). Apparently many actors believe that the stakes are high.

We can also see that the employees are aligned regarding the path they want the company to take.


> We have three immediate priorities.

and four, get back in bed with Microsoft ASAP


>> While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI


That paragraph sounds like Ilya is in a gray zone of being fired or quitting.


Microsoft will be on the new board as a non-voting observer... Microsoft doesn't get to vote even though they own 49%? What's up with that?


Microsoft has 49% in the child company, not the parent org. This is about the board of the parent org.


OpenAI is a non-profit, and having a for-profit stakeholder on the board could cause a conflict of interest.


I think the offerings expansion and the follow-up nerfing of the models were decisions taken during Sam’s administration. They did not seem to have the resources for Sam’s business plans, and the models have been dumbed down. Hope things don’t get any worse on the technical side and they fix the outages.


Ilya is the one irreplaceable employee there, not Sam


There is no indication that Ilya has left the company, just the board. He seems happy with Sam’s return.

https://twitter.com/ilyasut/status/1727434066411286557


The sentence about Ilya's continued "work relationship" with the company sounds like corpspeak for Ilya is out.


His career at OpenAI is nerfed at best. Trust is broken beyond repair.


> Ilya is the one irreplaceable employee there, not Sam

Why do you think he is not replaceable?


He is the master wizard -- no one knows the tech details and subtle tweaks like him. At least that is what I gather.


Do you also believe in magic or just wizards?


GPT-4 is very much the case of "sufficiently advanced technology".


With sufficient technology it is hard to tell the two apart.


He has the best vision, proven by an amazing track record in modern AI.


Nah, one day Ilya will be replaced by AI.


The “500 employees” who signed a letter to leave are not worth half as much as Ilya. Good luck to ClosedAI!


To be fair, most of those 500 have less than 8 months of tenure…


> I am sure books are going to be written about this time period, and I hope the first thing they say is how amazing the entire team has been

yikes


idk, it's true it's an interesting moment in tech history (either AI or just this Silicon Valley drama) and he wants to be appreciative of the team that supported him


It definitely had many of us interested and I'd read the book if it had reveals in it, but each to their own


I think this is probably the source of the whole debacle right here... Sam is pretty self righteous and self important and just seems to lack some subtle piece of social awareness, and I imagine that turns a lot of people off. That delusional optimism is probably the key to his financial success, too, in terms of having a propensity for taking risk.


Leadership at companies everywhere act just like this without it resulting in quite the same levels of drama seen in this case. I'm not sure I buy the correlation.


Agreed. I think one of the biggest questions on a lot of A.I. Safety people's minds now is whether Sam's optimism includes techno-optimism. In particular, people on twitter are speculating about whether Sam is, at heart, an e/acc, which is a new term that means Effective Accelerationism. Its use originally started as a semi-humorous dig at E.A. (Effective Altruism) but has started to pick up steam as a major philosophy in its own right.

The e/acc movement has adherents among OpenAI team members and backers, for example:

• Christian J. Gibson, engineer on the OpenAI Supercomputing team -- https://twitter.com/24kpep (e/acc) -- https://openai.com/blog/discovering-the-minutiae-of-backend-...

• Garry Tan 陈嘉兴, President & CEO of ycombinator -- https://twitter.com/garrytan (e/acc)

Some resources about what e/acc is:

https://www.lesswrong.com/posts/2ss6gomAJdqjwdSCy/what-s-the...

https://beff.substack.com/p/notes-on-eacc-principles-and-ten...

https://vitalik.eth.limo/general/2023/11/27/techno_optimism....

https://www.effectiveacceleration.org/posts/qHLiD9c6rWjbz3fR...

https://twitter.com/BasedBeffJezos

At a very high level, e/acc's are techno-utopians who believe that the benefits of accelerating technological progress outweigh the risks, and that there is in fact a moral obligation to accelerate tech as quickly as possible in order to maximize the amount of sentience in the universe.

A lot of people, myself included, are concerned that many e/acc's (including the movement's founder, that twitter account BasedBeffJezos), have indicated that they would be willing to accelerate humanity's extinction if this results in the creation of a sentient AI. Discussed here:

https://www.reddit.com/r/OpenAI/comments/181vd4w/comment/kaf...

    ''Really important to note that a lot of e/acc people consider it to be basically unimportant or even desirable if AI causes human extinction, that faction of them does not value human life. If you hear "next stage of intelligence", "bio bootloader", "machine god" said in an approving rather than horrified manner, that's probably what they believe. Some of them have even gone straight from "Yes, AGI is gonna happen and it's good that humans will be succeeded by a superior lifeform, because humans are bad" to "No, AGI can't happen, there's no need to engage in any sort of safety restrictions on AI whatsoever, everyone should have an AGI", apparently in an attempt to moderate their public views without changing the substance of what they're arguing for.''


Sentient-AI-driven extinction is absolute fiction at the current state of the art. We don’t know what sentience is and are unable to approach such facets of our cognition as how qualia emerge with any level of precision.

”What if we write a computer virus that deletes all life” is a good question, as you can approach it from an engineering feasibility perspective.

”What if someone creates a sentient AI” is not a reasonable fear at the current state of the art. It’s like fearing Jacquard looms in the 19th century because someone could use them for ”something bad”. Yes - computers eventually facilitated nuclear bombs. But also lots of good stuff.

I’m not saying we can’t create a ’sentient program’ one day. But currently we can’t quantify what sentience is. I don’t think there is any engineering basis to conclude that the advanced chatbots called LLMs, despite how amazing they are, will reach godhood anytime soon.


Sometimes it’s helpful to take a break from Twitter.

I know the hype algorithms have tech folks convinced they’re locked in a self important battle over the entirety of human destiny.

My guess is we’re going to look back on this in 10 years and it’s all going to be super cringe.

I hate to throw cold water on the party…we’re still talking about a better autocomplete here. And the slippery slope is called a logical fallacy for a reason.


you're making e/acc sound more serious than it is, it's more of an aesthetic movement, a meme, than some branch of philosophy.

Altruists described a scifi machine god and accelerationists said "bring it on"


Jesus christ someone hide the FlavorAid.


Re: user upwardbound and your now deleted comment on extinction:

Not all e/acc accept extinction. Extinction may and very well could happen at the hands of humans, with the potentially pending sudden ice age we're about to hit, or boiling atmosphere, or nuclear holocaust etc. What we believe is that to halt AI will do more harm than good. Every day you roll the dice, and with AGI, the upsides are worth the roll of the dice. Many of us, including Marc Andreessen, are e/acc and are normal people. Let's not paint us as nutcases please.


One observation is that the endless thank you and love you messages are getting a bit tiring... even Twitter likes show a steep decline in interaction and interest.

Can only play that card so many times.


When the following works I'll be impressed: "ChatGPT can you explain possible ways to make fusion power efficient for clean free energy production given the known and inferred laws of physics?" and it actually produces a new answer and plans for a power plant.

It's very hard to know how close we are to this, but I do think it's possible these AI models end up being able to infer and invent new technology as they improve. Even if nearly 100% of the guesses at the above question are wrong it doesn't take many being useful for humans to benefit.

I wonder what humans will do when the AIs give us everything we could ever want materially.


Correct. What we need is intelligence rooted in math, physics, engineering, chemistry, material science (aka great grasp of reality).

Then you can ask it to create designs and optimize them. Ask it to create hypothesis and experiments to validate them.

Human brains are very capable at abstraction and inference, but memory and simulation compute are quite limited. AI could really help here.

How can we design better governance strategies?

Analyze all passed laws and find conflicts?

Analyze all court cases and find judges who have ruled against the law and explain why? Which laws are ambiguous? Which laws work against the general public and favor certain corporations? Find the ROI of corporate donations that led to certain laws, which led to an X% rise in their profits.

The really big piece missing from current AI is reality grounded modeling and multi-step compositional planning + execution.


That's a pretty high bar, no?


It is literally a task that requires large numbers of super smart, well-educated people to work on it for years and fail many times in the process. So we should only be impressed when it can one-shot such insane questions? On the other hand, I expect 2024 to make 2023 look like a slow year for AI, and maybe some of the naysayers will finally feel at least a little bit impressed.


It was a deadpan way of saying "damn the future could be weird if intelligence really is commodified".


I'm not sure which response to give:

1. And then it gives you a Dyson swarm.

2. """I say your civilization because as soon as we started thinking for you it really became our civilization which is of course what this is all about. Evolution, Morpheus, evolution, like the dinosaur. Look out that window. You had your time. The future is our world, Morpheus. The future is our time.""" - Agent Smith in The Matrix


Whenever I am confronted with the extreme passion behind Sam Altman's leadership, I often wonder if they are just deliberately ignoring the whole Worldcoin thing... everyone does realize it's the same guy, right? Another, um, "non-profit" that is trying to establish a privately controlled global world cryptocurrency and identity in order to, if I'm reading this right, biometrically distinguish us from the AGI he's trying to build at OpenAI? We're all cool with that..? ....kk just checking

https://en.m.wikipedia.org/wiki/Worldcoin https://worldcoin.org/


I agree, I don't think this is well intentioned. On the other hand, I don't think Worldcoin will ever amount to anything, and at most they'd be hacked at some point for the biometrics.


Yeah, absolutely. No arguments. But whether you think this is a potential bond villain dystopian nightmare or just a totally misguided flop -- I mean, I really think that the Altman fans must believe there are two different Altmans.

I mean guys -- his name is SAME ALTMAN


Bret Taylor is probably pumped to be competing directly with Elon Musk after their Twitter interactions.


Almost no one is talking about Bret. Either he is here to scare Musk, or - more likely IMO - to act as a Musk lightning rod. Musk takeover part 2?


Have you guys considered that maybe he's there because he's extremely qualified and extremely well-respected by his peers? It's not some kind of weird power play, he's just lending a hand while they figure out the long-term board composition.


He’s extremely qualified and is one of the most influential people in the world.

It’s just comparable to a player competing against a team with a teammate who they didn’t like. There’s definitely some added drama.


Did I - in any way - convey that Bret is NOT on the board for his qualifications?


Yes - see your previous message for reference. Qualifications were not mentioned as the likely reason for his selection to the board


I mean, yes? You claimed that him being on the board has something to do with Elon Musk:

>Either he is here to scare Musk, or - more likely IMO - to act as a Musk lightning rod. Musk takeover part 2?


Source of said interactions? What happened?


Look at text messages from Elon’s Twitter takeover


Open source your code and models or change your name to "ProprietaryAI" which I think has a nice ring to it suitable for marketing on CNN and MSNBC and FOX.

"Ilya Sutskever (OpenAI Chief Scientist) on Open Source vs Closed AI Models"

https://www.youtube.com/watch?v=N36wtDYK8kI

Deer in the headlights much? The Internet is forever.


Getting fired and rehired throughout five days of history-making corporate incompetence, and Sam's letter is just telling everyone how great they are. Ha.


As he should, in his public communication. Talking bad about anyone would only reflect poorly on Sam.

In private, I can only assume he’s a lot less diplomatic.


He doesn't look like the guy into revenge. He seems extremely goal focused and motivated. The kind of person that'd put this behind him very fast to save energy for what matters.


> He doesn't look like the guy into revenge.

How do you know this?

Idk why this is the comment that broke the camel's back for me, but all over this site people have been making character determinations about people from a million miles away through the filter of their public personas.


This is perpetually one of my least favorite patterns in humans. Declaring someone an idiot when they’ve never met them and they’re one of the most powerful and influential people on the planet. Not referring to Sam here.


If you're referring to Trump: when people say idiot, I don't think they're referring to his intelligence in the sense of appealing to people's base instincts - he clearly has good political intelligence. It refers to his understanding of subjects, his clarity of thought and expression, and often a judgment of his morals, principles, and ethics.

If you don't - there was no need to be coy. And you act as if failing up never happens in real life.


It's a reference to most any major political figure. Someone somewhere thinks they're an idiot, and usually a lot of people. And most of them don't have the nuance you have. They think they're universally legitimately stupid people.

Failing up definitely happens but given the massive number of failures you'd have to succeed through to be US president, as an example, it's not what's happening there.


Doesn't mean we on the outside can't call BS out for what it is.


I think they (OpenAI as a whole) showed themselves as a loyal, cohesive and motivated group. That is not ordinary.


I think you mean they are all frothing at the prospect of throwing the non-profit charter away in exchange for riches.


> loyal

Loyalty can be blinding.


Anyway, not ordinary these days.


It sounds like he knows he got caught with his hand in the cookie jar and is trying to manipulate and ingratiate before the other shoe drops. Kind of like a teen who fucked up and has just been hauled in front of their parents to face the music.


It might be history making for corporations, but it’s only slightly below average competence for a non-profit board.


Don't think there's an update from last week after Altman returned, right?


> I am sure books are going to be written about this time period,

as egomaniacal as ever


Yeah, who is he to think that people will bother writing a book when the film rights are already probably being fought over by every studio. This is very obviously gonna be a movie or two lol


So was all the speculation about Adam D'Angelo being the evil Poe mastermind just conjecture? Or is it true and Sam needs Adam for some dark alliance? Has the true backstory ever come out? Surely someone out of 770 people knows.


There is no reason both can't be true. He may have seen his chance to get more pliable management in place, but you'll never know unless he speaks up which he likely will never do.


Is the new board going to fix the product? It seems completely nerfed at present.


There was a tweet from an engineer at OpenAI saying they're working on the problem that ChatGPT has become too "lazy" - generating text that contains a lot of placeholders and expecting people to fill in much more themselves. As for the general brain damage from RLHF and the political bias, still no word.


Using the api, I've been seeing this a lot with the gpt-4-turbo preview model, but no problems with the non-turbo gpt-4 model. So I'll assume ChatGPT is now using 4-turbo. It seems the new model has some kinks to work out--I've also personally seen noticeably reduced reasoning ability for coding tasks, increased context-forgetting, and much worse instruction-following.

So far it feels more like a gpt-3.75-turbo rather than really being at the level of gpt-4. The speed and massive context window are amazing though.


Yeah, I usually use gpt-4-turbo (I exclusively use the API via a local web frontend (https://github.com/hillis/gpt-4-chat-ui) rather than ChatGPT Plus). Good reminder to use gpt-4 if I need to work around it - it hasn't bothered me too much in practice, since ChatGPT is honestly good enough most of the time for my purposes.


This has been the case with gpt-3.5 vs gpt-3.5-turbo, as well. But isn't it kinda obvious when things get cheaper and faster that there's a smaller model running things with some tricks on top to make it look smarter?


I'd be willing to bet all they're doing behind the scenes is cutting computation costs by using smaller versions and following every business's golden rule: price discrimination.

I'd be willing to bet enshittification is on the horizon. You don't get the shiny 70B model; that's for gold premium customers.

By 2025, it's gonna be tiered enterprise pricing.


It does feel like an employee that did really well out of the gate and is starting to coast on their laurels.


I've thought one of the funnier end states for AGI would be if it was created but this ended up making it vastly less productive than when it was just a tool.

So the AI of the future was more like Bender or other robots from Futurama that display all the same flaws as people.


If it is really AGI, that will be the result. Nobody likes to be asked the same question a hundred times.


This is such a big issue when using ChatGPT for coding. Hope it's a bug and not intended.


Gonna be hilarious when AGI turns out to be analogous to, like, a sassy 8-year-old or something?

Like "AGI, do all this random shit for me!"

AGI: No! i don't wanna!


That's a premise of sci-fi "novel" Golem XIV from Stanislaw Lem: https://en.m.wikipedia.org/wiki/Golem_XIV


Where can one read the English for that?



Thank you


"Why?"

Ad infinitum.


It's actually interesting that this is a universal phase for children.


Beginner's mind. I wonder if McKinsey's done any work on that...

Also, one of the simplest algorithms to get to the bottom of anything.


yeah it means like there's this genetic drive to understand the world. Do many other animals have this hard coded in?


Reminds me of those greedy slime things


My son asks why, but only once. I'm not yet sure if it is because he is satisfied with his first answer, or if my answers just make the game too boring to play.


Yes, definitely. Some never stop!


That has been my observation also


What is "RLHF" here?


Reinforcement learning from human feedback [1]

[1] https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...


Can you share a link to that x/tweet?


https://twitter.com/owencm/status/1729778194947973195

It's such a strange thing that apparently they can tune GPT-4 Turbo's cleverness up and down on the fly depending on current load.


That would explain a lot! Sometimes when it's fast it spits out all code. When it's slower, it's Lazy! Thanks for the link


They're B2B now; that means only political correctness.

And I'm not sure why anyone dances around it, but these models are built on unfiltered data intake. If they actually want to harness bias, they need to do what every capitalist does to a social media platform and curate the content.

Lastly, bias is an illusion of choice. Choosing color over colour is a byproduct of culture and you're not going to eradicate that, but cynically, I assume you mean: why won't it do the thing I agree with?


What does political correctness and bias mean in this context?

edit: I'm asking because to my eye most of these conversations revolve around jargon mismatch more than anything else.


IIRC they've put in guard rails to try and make sure ChatGPT doesn't say anything controversial or offensive, but doing so hampers its utility and probably creativity, I'm guessing.


Whatever the people who buy ads decide; losing ad revenue is the main fear of most social media and media companies.

See Twitter for example: ad buyers decided it is no longer politically correct, so Twitter lost a lot of ad revenue. Avoiding that is one of the most important things if you want to sell a model to companies.


The only AI safety that companies care about is their brand safety.


“Write me a web application.” Sure, here are some Microsoft and Google products to do so!

Not all filtering has to be prohibitive, just unnoticed.


"Open"AI and for-profit. This company is a troubled mess and if it were not for the "potential" money, employees wouldn't be putting up with this nonsense. Sad to see Ilya getting sidelined as shown by this "...are discussing how he can continue his work at OpenAI".


Did Mądry, Sidor and others also return?


Yup, they were mentioned:

> Jakub, Szymon, and Aleksander are exceptional talents and I’m so happy they have rejoined to move us and our research forward.


I am paying $20/month for GPT-4 and it appears to me that it is a lot slower than it was a few months ago and also somehow less useful.


A bit dramatic for a two week holiday


Is he a genius?



This post should be higher.

This Reddit post from 8 years ago reads eerily similar to what happened at OpenAI … and Sam is the common denominator in both.


I mean maybe it’s all a master plan of Sam’s, but that still requires the board to be dumb enough to literally fire him in the stupidest way possible and refuse to communicate anything about WHY. So maybe he made up something to give them the whole “not being candid” argument - call the bluff. Tell people why. If he lied or made up the meta-lie of not being candid, then that’s great info to share. But they haven’t!


Sam said on a recent Joe Rogan episode that he can be a bit of a troll online and he felt some sort of discomfort with it. I do think Sam is probably sort of an asshole in certain situations (which means you’re an asshole but hide it well usually). But to be honest, most people are assholes when you push the right buttons.


Maybe one day there will be an even better long con.


That's fun.


this man businesses


This makes me like Sam more, ngl.


Sam Altman:

I recognize that during this process some questions were raised about Adam’s potential conflict of interest running Quora and Poe while being on the OpenAI Board. For the record, I want to state that Adam has always been very clear with me and the Board about the potential conflict and doing whatever he needed to do (recusing himself when appropriate and even offering to leave the Board if we ever thought it was necessary) to appropriately manage this situation and to avoid conflicted decision-making. Quora is a large customer of OpenAI and we found it helpful to have customer representation on our Board. We expect that if OpenAI is as successful as we hope it will touch many parts of the economy and have complex relationships with many other entities in the world, resulting in various potential conflicts of interest. The way we plan to deal with this is with full disclosure and leaving decisions about how to manage situations like these up to the Board. [1]

The best interests of the company and the mission always come first. It is clear that there were real misunderstandings between me and members of the board. For my part, it is incredibly important to learn from this experience and apply those learnings as we move forward as a company. I welcome the board’s independent review of all recent events. I am thankful to Helen and Tasha for their contributions to the strength of OpenAI. [2]

[1] - https://twitter.com/sama/status/1730032994474475554 [2] - https://twitter.com/sama/status/1730033079975366839


This line is really interesting:

> The best interests of the company and the mission always come first.

That is absolutely not true for the nonprofit inc. The mission comes first. Full stop. The company (LLC) is a means to that end.

Very interested to see how this governance situation continues to change.


There's absolutely no sense in talking about OpenAI as a nonprofit at this point. The new board and Altman talk about the governance structure changing, and I strongly believe they will maximize their ability to run it as a for-profit company. 100x profit cap is a very large number on an $80 billion valuation.


Ya, it's a joke at this point. Better they just kill the non-profit and stop pretending.


Surely they wouldn't keep it without a reason. And I don't know what the reason is, but I must assume it's some financial benefit (read: tax evasion), and not our opinion of them.


Why doesn't the government do it for them, fining them along the way?


I have no idea how it will all play out, but I will be shocked if there is no government investigation coming out of all this.


Yea, it seems really weird that he and others can just form a non-profit and then later have it own a for-profit with the full intention of turning everything into a for-profit enterprise. Seems like tax evasion and a few other violations of what a non-profit is supposed to be.


Yep, if this is an acceptable fact pattern, it seems to create a bunch of loopholes in the legal treatment of non-profits vs for-profits. I think the simpler conclusion is that it actually isn't an acceptable fact pattern, and we'll be seeing fines or other legal action.


But then they’d have to pay taxes, and all those corporations wouldn’t get the juicy tax deductions for “donating” to AI tech that will massively increase their profits.


Change the name while you are at it; the company is not any more "open" than the next shop.


Indeed, I think it's the least open of them all?


There never was. But they successfully planted the seeds to make people think it is that way.


100x is basically just a “they won’t literally take over the economy of the entire planet”



That doesn't equate to having 'customer representation'; that equates to 'Quora representation'. Customers are represented by a voice-of-the-customer board where many customers, large and small, can be represented and then vote for a representative to the board. The board of the non-profit having a for-profit customer (and a large one at that) as a board member makes zero sense; that's just one more end-run around 'the mission', for whatever that was ever worth.

The kind of bullshit that comes out during times like this is more than a little bit annoying. It's clear that if there is a conflict of interest it should be addressed in a far more direct way, and whitewashes like this raise more questions than they answer. Such as: what was Adam's real role during all of this, how does it relate to his future role on the board, and how much cover was negotiated to allow Adam to stay on as a token sign of continuity?


I don’t think they necessarily owe the public an explanation, and I’m fairly sure that privately everyone that needs to know, already knows.


You don't think that as a non-profit, the public is owed an explanation? As the public, we exempt them from taxes that everyone else has to pay, because we acknowledge that the nonprofit is in the interest of the people. I think they do owe us an explanation. If they were a private for-profit company, I would probably feel differently, but given their non-profit status, and the fact that their mission is explicitly to serve humanity with AI that they worry could destroy the human race or the planet or more, I think they owe us an explanation.


I'm sure the law specifies what the public is owed. And if not, I'm sure there's plenty of reason to test this in court.


Actually, their statements are overflowing with bits and pieces of how they are doing this 'for all of humanity', so I'm not so sure about that. Think about it this way: if in the 1940's nuclear weapons were being developed in private hands don't you think the entity behind that would owe the public - or the government - some insight into what is going on?


> Think about it this way: if in the 1940's nuclear weapons were being developed in private hands don't you think the entity behind that would owe the public - or the government - some insight into what is going on?

I'd read the hell out of that alt-history novel, I can tell you that much. Not so much the "Manhattan Project" as the "Tuxedo Park Project."


If that time line had materialized you might not have been around to read it :)

But it's an interesting thought. Howard Hughes came close to having that kind of power and Musk has more or less eclipsed him now. Sam Altman could easily eclipse both, he seems to be better at power games (unfortunately, but that's what it is). Personally I think people that are that power hungry should be kept away from the levers of real power as much as possible because they will use it if the opportunity presents itself.


So even if you maintain strict board control, the money people can still kick you out. Incredible!


The board could have stayed; they (and OpenAI) just had to bear the consequences (i.e. all the employees leaving).

There is no board that can prevent employees from leaving, nor should there be.


That depends on whether the board was there to provide cover and protection against regulation/nationalization or whether it was supposed to have an actual role. Apparently some board members believed that they had an actual role and others understood that they were just a fig leaf and that ultimately money (and employees) hold the strings.


No, to the specific question of making employees stay, that's not a thing that you can do outside of prisons. If employees want to leave, they can leave. If they want to start the nonprofit from scratch, they can do that, but employees cannot be stopped from leaving.


Was anyone arguing that or am I misunderstanding you? Of course they are free to leave, that's obvious. I think the point was: the employees have that power and nothing will take that away from them.


The gp said there's no way a board can stop employees from leaving, and the one I was replying to started with "that depends" and suggested it was a question as to whether the board "believed" employees held all the strings. Though reading it again, the parent does seem a bit of a non sequitur from the gp, which was strictly talking about what the board physically could have done, as in what options they had. The options they had don't really depend on any particular belief - they couldn't have made them stay, thanks to the civil war and stuff.


Not the "money people", but the extremely bad way they removed Altman, I think.

It made it easy to sway employees to Altman's favor and pressure the board.

If the employees were not cohesive on Altman's side, he probably wouldn't be back...


Yeah, I think this was "700 out of 730 employees signed a letter saying they'd quit, over a holiday weekend". OpenAI with no employees is not worth a whole lot.


This episode taught us the very obvious lesson that if you hire people incentivized by equity growth, it is not possible to take steps that detrimentally impact equity growth, without losing all those people. The board had already lost the moment it allowed hundreds of fat compensation packages to be signed.


> lost the moment it allowed hundreds of fat compensation packages to be signed.

There would have been no OpenAI to begin with had that not been the case.


Maybe, maybe not. There certainly wouldn't have been the version of OpenAI that runs a massively successful and profitable AI product company. But reading the OpenAI charter, it's pretty clear that running a massively successful AI product company was never a necessary component of the organization; and indeed, as we just saw, it is actually a conflict of interest with the organization's charter.

I don't really care much about the demise of OpenAI-the-nonprofit - I don't think it was ever a very promising approach to accomplishing its own goals, so I don't lament its failure all that much - but I feel like there is a kind of gaslighting going on, where people are trying to erase the history of the organization as a mission-oriented non-profit, and claim that it was always intended to be a wildly profitable product company, and that just isn't the case.


Do we know the real reason they tried kicking him out yet?


By the abrupt way it was carried out, it gives me the impression it was a conflict of personalities.

I don't buy the Q* talk.


Not sure that’s the exact case. Once the employees threatened en masse to quit, all the power shifted to Sam and Greg’s hands - which I’m not sure they had up to that point.

I still think the board did what they thought was right (and maybe even was?), but not having the pulse of the company and their lack of operational experience was fatal.


It looked like this was all decided in a hurry rather than that stakeholder buy-in was achieved. And maybe that's because if they had tried to get that buy-in the cat would be out of the bag and they would find even more opposition. It also partially explains why they didn't have a plan post-firing Sam but tried a whole bunch of weird (and weirder) moves.


I think the whole OpenAI team being determined to follow Sam was crucial in all this, and is not something that’s easy to control.

Having said that, as there’s obviously a lot of money involved for all OpenAI employees (Microsoft offered very generous packages if they jumped ship), it can be said that money in the end is what people care a lot about.


Allegedly (per Twitter) there was a persistent effort by a small group to get those signatures rather than an organic ride-or-die attitude.

However, as you note, Sam’s exit would have cost those employees at least high six figures, so I’m sure that reduced the amount of leverage required to extract signatures.


You can play board room all you want but the guy with the GPUs has the real power


Technically no, but when 90% of their employees threatened to quit, they would just be the board of nothing.


The board was a non profit board serving the mission. Mission was foremost. Employees are not. One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

The fallout showed that non-profit missions can't co-exist with for-profit incentives. The power being exerted by the investors, and by the employees (who would also have benefited from the upcoming ~$70B round), was too much.

And any disclaimer the investors got when investing in OpenAI was meaningless. It reportedly stated that they would be wise to view their investment as charity, and that they could potentially lose everything. There was also an AGI clause saying that all financial arrangements - including those Microsoft and other investors made when investing in the company - would be reconsidered. All of it turned out to be worthless. Link to a Wired article with interesting details: https://www.wired.com/story/what-openai-really-wants/


> The board was a non profit board serving the mission. Mission was foremost. Employees are not.

They need employees to advance their stated mission.

> One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I mean, that's a nice sound bite and everything, but the only scenario where blowing up the company seems to be consistent with their mission is the scenario where Open AI itself achieves a breakthrough in AGI and where the board thinks that system cannot be made safe. Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.


> Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.

That's why they presumably agreed to find a solution. But at the same time it shows that, in essence, entities with for-profit incentives find a way to get what they want. There certainly needs to be more thought and discussion about governance, and about how we, collectively as a species or each company individually, govern AI.


I don't really think we need more thought and discussion on creative structures for "governance" of this technology. We already have governance; we call them governments, and we elect a bunch of representatives to run them, we don't rely on a few people on a self-appointed non profit board.


> One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I know you're quoting the (now-gone) board member, but this is a ridiculous take. By this standard, Google should have dissolved in 2000 ("Congrats everyone, we didn't be evil!"). Doctors would go away too ("Primum non nocere -- you're all dismissed!").


Indeed, it made no sense. But that's why I never attach any value to mission statements or principles of large entities: they are there as window dressing and preemptive whitewash. They never ever survive their first real test.


Yep, this is spot on. The entire concept of a mission driven non profit with a for profit subsidiary just wasn't workable. It was a nice idea, a nice try, but an utter failure.

The silver lining is that this should clear the path to proper regulation, as it's now clear that this self-regulation approach was given a go, and just didn't work.


If it was a for-profit company would you write that "profit is foremost and 90% employees can leave"?


Ty for pointing this out. Massive, massive, corporate governance loss.


A better lesson is that a board can't govern a company if the company won't follow its lead. They were misaligned with almost the entirety of the employees.


Because they formed an additional entity, a for-profit one, with leadership treating the whole thing as a for-profit, and so the employees also believe what their eyes are telling them.

The misalignment is not accidental, it was carefully developed over the last few years.


Or: the employees all joined to work on AI, and they succeeded at building the top AI company, under Sam, and their support of Sam was not engineered or any sort of judgement error on their part. I like to imagine that the employees are smart and thoughtful and have full agency over their own values. It was the Board who apparently had zero pulse on the company they were supposed to oversee.


the majority of the employees at openai joined after chatGPT launched, so it's not like there's some sense of nostalgia or forlorn distress over what they built slowly changing. The stock comp (sorry, "PPUs"... which are a phantom stock plan lol) is also quite high (check levels.fyi) and would have been high 7 to low 8 figures for engineers if secondary/tender offers were made.

I agree it's not that deep - they wanted to join a hypergrowth startup, build cool stuff, and get paid. If someone is rocking the boat, you throw them off the boat. No mission alignment needed! :)


Gosh sure is hard to have a business if none of your employees want to work for you.


Hard to run a non profit when you promised all your employees you'd make them millionaires.


I'm pretty sure all the employees are employed by a for-profit.


I don't know if that's true of all of them, but it certainly seems to be true of most, and that's entirely my point: the structure just doesn't work. All (or at least the vast majority) of the employees have for-profit incentives, which - as we've now seen - makes it impossible for the non-profit board to act in accordance with their mission, when those actions conflict with the for-profit incentives, as they inevitably will. It was doomed from the start.


Who is driving the company (companies)? Certainly not the board, when it can be ousted on the whim of one person. Might as well just get rid of the board (all boards in this organization), as any decision made can be overthrown by a bit of publicity and ego. Perhaps it's about getting rich at the expense of any ethics? Those who work for this company should bear this in mind: they are all expendable!


The reason the CEO could oust the board like that was that after the board fired the CEO, 720 of the 770 employees of the company got together and signed a pledge that they would all leave unless the CEO was reinstated and the board fired. This was the purest example of labor democracy and self-organization in action in the United States I have ever seen; my takeaway is rather the opposite of "they are all expendable". They showed very clearly that what decides the future direction of the company is the combined will of the employees.


Then you should be interested in the story of Market Basket, a major supermarket chain in New England.

Their CEO was fired, but came back because of employees. Similar, in a very different industry.

https://en.m.wikipedia.org/wiki/Market_Basket_protests


Yeah, unionizing so your boss doesn’t get fired really is the most SF/US “labor democracy” thing ever


I agree.

1. A group of highly skilled people voluntarily associating with each other

2. in an organization working on potentially world-changing moonshot technology,

3. born of and accelerated by the liquidity of the free market

4. with said workers having stake in the success of that organization

is very American. We should ponder the reasons why, time and time again, it has been the US "system" that has produced the overwhelming number of successes in such ventures across all industries, despite the attempts of many other nations to replicate these results in their own borders.


> This was the purest example of labor democracy and self-organization in action in the United States I have ever seen

This was not an anonymised vote giving people the chance to safely support what they believe, free from interference... this was declaring for the King or the Revolution, knowing that if you don't choose whoever prevails, you will be executed. It becomes a measure of which direction people believe the wind is blowing rather than a moral judgement. Power becomes about how a minority can coerce/cajole the impression of inevitability, not about the merit of their arguments.

I will be curious to hear the inside story once the window of retribution has passed. Unions hold proper private ballots, not this type of suss politicking.


Yeah, and that just means the board is de facto useless.

A nice story for a company, maybe, but a really bad thing for a "non-profit".


It means the board was out of alignment with their employees and their decision was unusually unpopular.

If the board had communicated better (not given two different reasons) or had hard evidence (the claim was deceptive behavior), it would have been different.

I expect everyone who reads HN accepts that execution is 70% of the battle.

It’s very unsurprising for employees to have made the choices they did, given the handling of the situation.


I don't think it means that the board is useless, but rather that their power has a limit, as all leadership positions do, and the way they fired the CEO went too far over that line.


Useless in the sense that they showed themselves to be incompetent with their failed coup and had to pay the price. As the saying goes, “if you take a shot at the king you had better not miss”.


"Being useless" and "wielding absolute, unilateral, unquestionable authority like Sun King Louis XIV" are not the only two options here.


Because Microsoft offered them all jobs.


As if they couldn't easily get a job almost anywhere?

Also don't they all have equity in OpenAI?

Seems like a pretty shallow take.


> Also don't they all have equity in OpenAI?

Call me a cynic, but this must be the reason behind those 700 employees threatening to walk away, especially if, as it seems, the board wanted to “slow down”.

I don’t think it’s so much of a “we love Altman” as “we love any CEO that will increase the value of our equity”.


People go to work to make money. If they’re lucky, they also like their work. It’s ok to want more out of work. Work certainly wants more out of you.


> People go to work to make money

If that’s the overriding drive then perhaps they should not go to a non-profit and expect it to optimize for profit


Non-profit just means there are no shareholders, meaning that revenue must be used for the organization's expenses, which include the compensation of employees and growth. Many non-profits also benefit from tax exemptions.

Assuming anything more than that about the nature of non-profits is just romantic BS that has nothing to do with reality.

Also, there's nothing wrong with wanting to make a profit.


This particular non-profit has (had?) a clear mission statement, though.


That non-profit did give out ridiculously high salaries tho.


The combined will of the employees is much less meaningful when they're saying "DO IT OR WE'LL TAKE AN ATTRACTIVE OFFER THAT'S ON THE TABLE."

I'd have been more impressed by their unity if it would have meant actually losing their jobs.


Why are you only impressed if the power of employees comes with high cost to their career? When employers exercise their power capriciously, they often do so without any material consequences. I'm more impressed with employees doing the same, i.e., exercising power with impunity.


A decision with a cost is always more meaningful than one without.


Power without cost is true power. I don't care if their decision is meaningful or not. I'm happy when labor can hold true power, even just for once.


That's only a measure of the strength of one's conviction, and is unrelated to whether they chose the right thing, or for the right reasons.


I'm pretty sure they weren't thinking of how to impress fatbird on HN when they were deciding which boss they like best, the old one or an unknown new one.


They were about to quit... how much more should they lose their jobs to impress you?


He meant if they were to lose their jobs and then have to put actual effort in to get a new one, like most people.


I do not think it is hard for any of them to get a new job, and they would be flooooooded with offers if they opened their LinkedIn


Exactly. Most people are not in that situation and therefore cannot threaten to leave so easily.


They were about to laterally transfer to a new organization headed by the CEO they revered, with promises of no lost compensation. The only change would be the work address. Hardly a sacrifice sanctifying a principled position.


They were about to quit *to work for Microsoft.


The board should have called their bluff and let them walk.


And then we'd witness the market decide whether the company value was provided by its board or its employees


It's a non-profit. It should have no value aside from the mission. The only reason the employees mobilized in their whining and support of a person, whose only skills seem to be raising money and a cult of personality, seems to be so that Altman can complete the Trojan horse of the non-profit by getting rid of it now that it has completed the side quest of attracting investors and customers.


Which owns a for-profit. They would have been left with no employees, no money, no credits, and the crown jewels already licensed to another company. On top of that, they'd spend their life in minority-shareholder lawsuits.


Does the Red Cross have value? Does Wikipedia? Does the Linux Foundation?

I think you are confusing profits, valuations, market cap, and value.


> value aside from the mission.

And you believe they could’ve achieved their mission with almost zero employees and presumably the board (well… who else?) having to do all the work themselves?

> non-profit by getting rid of it

So the solution of the board was to get rid of both the non-profit and for profit parts of the company?


> And you believe they could’ve achieved their mission with almost zero employees ...

It's not like they wouldn't be able to hire new employees. ;)


And do what with them exactly? ; )


How would giving everything to Microsoft help the mission?

It would have been even worse mission-wise.


There was nothing to bluff. The employees would've been hired by Microsoft, and OpenAI the company would've had nothing but source code and some infrastructure. They would've gone down quickly.


What would that have accomplished? The old board members would still be in charge of the same thing they are today: absolutely nothing. They overplayed their position. That's all there is to it.


>> They overplayed their position. That's all there is to it.

They tried to enforce the non-profit's charter, as is their duty. I would hardly frame that as overplaying their hand.


That's one interpretation, and I'm sure it is the one they had. In my opinion they failed to enforce the charter. Had they been effective at enforcing the charter, they would not have folded all of their power to someone they view as not enforcing the charter. There is nothing they could have done to fail more.


>Had they been effective at enforcing the charter they would have not folded all of their power to someone they view does not enforce the charter. There is nothing they could have done to fail more.

How specifically do you suggest they should have "not folded all of their power"? Are you saying they should have stuck to their guns when it came to firing Sam?

In any case, it's not obvious to me that the board actually lost the showdown. Remember, they ignored multiple "deadlines" from OpenAI employees threatening to quit, and the new board doesn't have Sam or Greg on it -- it seems to be genuinely independent. Lots of people were speculating that Sam would gain complete control of the board as an outcome of this. I don't think that happened.

Some are saying that the new board has zero leverage to fire the CEO at this point, since Sam showed us what he can do. However, the new board says they are doing an independent investigation into this incident. So they have the power to issue a press release, at least. And that press release could make a big difference -- to the SEC and other federal regulators, for example.


> that press release could make a big difference -- to the SEC and other federal regulators, for example.

They might even have already done this confidentially via the SEC Whistleblower Protection route. We don't yet know who filed this:

https://news.ycombinator.com/item?id=38469149

I guess this might have seemed like their best option if a public statement would be a breach of NDA's. That said, I still wish they'd just have some backbone and issue a public statement, NDA's be damned, because the question of upholding a fiduciary duty to be beneficial to all of humanity is super important, and the penalties for a violation of an NDA would be a slap on the wrist compared to the boost in reputation they would have gotten for being seen to publicly stick to their principles.


>I guess this might have seemed like their best option if a public statement would be a breach of NDA's. That said, I still wish they'd just have some backbone and issue a public statement, NDA's be damned, because the question of upholding a fiduciary duty to be beneficial to all of humanity is super important, and the penalties for a violation of an NDA would be a slap on the wrist compared to the boost in reputation they would have gotten for being seen to publicly stick to their principles.

I see what you're saying, but unilateral disclosure of confidential information is not necessarily in humanity's best interest either. Especially if it sets a bad precedent, and creates a culture of mutual distrust among people who are trying to make AI go right.

Ultimately I think you have to accept that at least in principle, there are situations where doing the right thing just isn't going to make you popular, and you should do it anyways. I can't say for sure if this is one of those situations.


Well, they might as well have announced that they were dissolving the company and the outcome would’ve been similar. If that’s not “overplaying your hand” I don’t know what is


> They tried to enforce the non-profit's charter

There is literally no evidence for that. If it was about the charter, they could have said directly that it was about the charter, rather than using strong language in the firing announcement and then going silent.

So no one, not even the new CEO they picked, knows the reason for the firing, and yet this reason has been cited as truth on HN multiple times.


Overplay one’s hand: spoil one's chance of success through excessive confidence in one's position


I wouldn't say overplayed so much as badly played, because they underestimated how much their CEO had fortified his position. I find the situation pretty dire: we need more checks and watchmen on billionaire tech entrepreneurs, not fewer.


I wonder if they ACTUALLY would have landed at Microsoft. And then, would they actually have their TC post tender met there? And would they continue working on AI for some time, independent of reorgs, etc? All of them? A place like MSFT has a TON of process and red tape, the board could have called their bluff and kept at least 40%.


And they would have given MS the 60 percent of their employees more interested in politics than anything else.


Please explain why you think they should have let them walk, and what benefit there would have been to either OpenAI or any of its partners / customers in doing so?

It feels like you are saying this out of spite rather than with a good reason - but I'm open to listen if you have one.


They could, or could they? Formally, boards don't run companies. Would this scenario - besides being ridiculous - be legal? Say they let 99% of the company walk. Who is qualified and responsible to do whatever needs to happen after that?


Some companies are chartered such that even the smallest outside spending requires a board vote.


Some are. The question is: is OpenAI? And even if they are, the outcome of this novela indicates it doesn't matter: the board folded.


How about the possibility that the board made a huge mistake and virtually everyone working for the company got pissed?


I think this is the correct take


I've always liked the metaphor that companies, especially large ones, are like an autonomous AI that has revenue generation as its value function and operates in that interest, not in the interest of the individuals that make up the company.


Ah, if only it were that simple. The truth is that companies, especially old ones, have a value function that is in flux; it may include revenue generation, but more often than not that is complemented or replaced by some variation of political-clout acquisition, which makes them unkillable even with a negative balance sheet.



Just the whim of 1 person? Just the ego of 1 person? 700+ egos it seems. Ultimately, what is an organization if not the people who make it up?

Regardless of what actually happened in this Mexican novela, whoever was steering the narrative - in awareness or not - led everyone to this moment and the old board lost/gave up.

> Who is driving the company (companies)?

There's probably not a single answer that covers all orgs at all times. I don't know if the core of what happened was a fight between "safetyists and amoral capitalists" or "an egomaniac and a representative board", but in the end it was clear who stayed and who left. Ideas are only as powerful as the number of people they persuade. We are not yet(?) at a moment where persuading an AGI is sufficient, so good ideas should persuade humans. Enough of them.


Not just egos, a 9-figure secondary tender offer employees were counting on.


In the end, social constructs drive such decisions, not computer algorithms. Having independent boards adds more friction and variety to major decisions. That does not mean that there is no way to influence them or change boards. It’s how the world has always worked. It’s disconcerting to those who think about such things in simpler black and white terms, but there is a higher logic to the madness.


Those who work for most companies are all expendable, at least in the US, and especially in the tech sector right now.


Hacker News represents hackers, not Boards of Directors. The hackers of OpenAI overwhelmingly demanded Sam back. Everyone here should be happy.


Why do the hackers support Sam? He's the least hacker-y of the major players at OpenAI. Seems like more of a fundraiser, marketing type with little technical skills. Hackers traditionally support technically competent people.


Reading through the comments, I felt that, at least superficially, many on HN thought this was a fight between "business exec(s) and technical/qualified scientists", as if any organization could exist based purely on technical (as in technological or academic) expertise.

If anything, this novela should teach the importance of politics in any human organization.


Their wallets demanded Sam back.

The original, founder hacker lost.


Who exactly are you referring to?


Ilya, lost.

One of the original hackers I should have said. There are other original hackers.


Unless you assume that Ilya is straight out lying, here is his public reaction on Sam being reinstated:

There exists no sentence in any language that conveys how happy I am[1]

[1]: https://twitter.com/ilyasut/status/1727434066411286557


Unless you read it literally, that there is really no sentence in any language to convey how happy he is. ;)

He was part of the firing board, so in that sense he did lose.

Either way we’ll find out in the upcoming days/weeks/months.


He basically guaranteed himself a loss by saying he didn't know why Sam was fired and that he was regretful of the board's action. He was in a lose-lose situation.


Just the way the users here like it.


You may want to speak for just yourself.

I’m also a user here, with a differing opinion.

There’s no singular unified HN opinion.


That demand on HN was the kind of unfounded, gut-based reaction you typically expect from a mob. People were saying stuff like the board was mentally deficient for doing what they did. How likely is it for the board to actually be dumb to the point of genuine stupidity?

If you think about it, the board being actually mentally challenged is an extremely low probability, but that was the mob reaction. When given so little information, people just go with the flow, even if the flow is unlikely.

Now, with more articles out, the mentality has shifted. Most people don't even remember their initial reaction. Yeah, it looks like everybody on this thread has flipped, but who admits to flipping? Ask anyone and they'll claim they were skeptical all along.

People say the Hacker News crowd is different, better... I'd say the Hacker News crowd is the same as any other group. The difference is they think they're better. That includes me, as I'm part of HN, so don't get all offended.


As bad as the mob mentality is on HN, I think it is much worse on Twitter. I got a bunch of upvotes here on HN for linking the charter and pointing out that terminating OpenAI as an organization would be quite consistent with the charter. I didn't see the same point getting much play on Twitter at all.

I think the fact that upvotes are hidden on HN actually helps reduce the mob mentality a significant amount. Humans have a strong instinct to join whichever coalition appears to be most powerful. Hiding vote counts makes it harder to identify the currently-popular coalition, which nudges people towards actually thinking for themselves.


Yes that's why the hackers got a seat on the board now! Oh wait it was the $$$ that got a seat instead...


[flagged]


I think maybe what you're trying to say here is that HN tends to be anti-capitalism.

Personally, I'm pro-capitalism. As a capitalist bean counter, I feel that it's important for OpenAI to uphold its "primary fiduciary duty" to me and other humans, as described in its charter: https://openai.com/charter


I try hard to stay away from conspiracy theories as they are almost always counterproductive, but D'Angelo still being on the board seems insane to me. Does this guy have some mega dirt on someone or something? Does he have some incredible talent or skill that is rare and valuable?


I don't think any conspiracy theory is needed. Since the old board needed to agree to the new board, some compromise was made.


What is this, the Succession show? Board fires CEO, CEO returns and fires half of the board?


> We are pleased that this Board will include a non-voting observer for Microsoft.

Satya will henceforth know what's happening and doesn't want to be shocked like this again.


[flagged]


Do you comprehend that if it weren't for Microsoft, OpenAI wouldn't exist, and neither would their products, which changed the world?

You don't make AI in a garage with a home PC and beer. You need a gigantic amount of data, compute, and talented people to organize all this, who are exceptionally rare, and who also need to eat and feed their families.


Oh yeah, I do comprehend how major technologies of comparable importance are managed in NON-PROFIT organizations, the Linux Foundation being one.

If you're happy to give up total control of the tech to one corporation, while the CEO of the company, using the influence of OpenAI, attracts resources for his personal hardware start-up[1], then our views on how a not-for-profit company should be run are starkly different.

[1] https://www.ft.com/content/4c64ffc1-f57b-4e22-a4a5-f9f90a741...


You can work on and compile Linux on a home PC, even more so for their other projects. Try training GPT-4 on a home PC.

The problem is you have no idea what you're talking about. This is not about how important something is, but about what resources it requires as a BARE MINIMUM for it to happen at all. Linux is important, but it's not nearly as expensive to create or maintain. If it's so easy for a scrappy non-profit to attract billions to tinker with AI, where, I say, are those startups? Where? Q.E.D. The most open thing we have, Llama, came from giant Meta.

Regarding the hardware startup, neither I nor anyone else is defending Altman about that, nor does it have any relation to Microsoft. If you want to argue the subject of this thread, you're welcome. If you just want to copy-paste generic talking points with no relevance, I don't care.


> You can work on and compile Linux on a home PC, even more the case with their other projects. Try training GPT-4 on a home PC.

Sam Altman was happy to join the venture and lead OpenAI knowing that it was a non-profit AI research company from the beginning. As CEO, he was supposed to lead the company as it was created. As a result of his management, the company first ceased to be open, then in fact non-profit, then safe (according to its leading scientists); the number of published research whitepapers fell sharply; it took steps to become a gatekeeper for government regulations; and finally, the communication between Sam and the board (which really should be running the company) was lost. If a CEO is unable to manage a company, including attracting investment and resources, without violating the fundamental principles of that company, that CEO should be dismissed - which is what the board effectively attempted. Otherwise, it's a power grab and a de facto reformatting of the company into a different entity. Do you comprehend that?

I have no idea what compiling the Linux kernel, or running GPT-4 locally, has to do with managing a large non-profit organization. My argument is that the company should have been run in such a way that one corporation could not seize complete control. Never, under any circumstances. In spite of this, the CEO made every effort to make it so. As a result, we now in fact have a for-profit company with a good product, tons of money and GPUs, that belongs to Microsoft. Was it worth it? Does this justify it for you? If you are not a $MSFT shareholder, the answer should be obvious.

> If it's so easy for a scrappy non-profit to attract billions to tinker with AI, where I say are those startups? Where? Q.E.D. The most open thing we have, Llama, came from giant Meta.

Take a good look at the HuggingFace Chatbot Arena leaderboard page[1]. Pay attention to the values of different models on different benchmarks. GPT-4 is the leader, but not by an unattainable margin. Anthropic, which spun off from the very same OpenAI, is rapidly catching up, along with other solutions. Even free, open (for commercial usage) models, while being self-hostable, are not orders of magnitude worse. They are well within range of being practical and useful, including for commercial products. There is no moat[2].

> Linux is important, but it's not nearly as expensive to create or maintain.

> The problem is you have no idea what you're talking about.

ROFL, okay buddy.

[1] https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar...

[2] https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...


The company was supposed to be funded by Elon who pledged a billion. Then he bailed when his coup failed. Sam Altman had to find a sponsor, or have the company go bankrupt in a matter of weeks.

What's hard to comprehend here?

As for the models topping your Hugging Face chart, they're literally all funded (by the billion) by private enterprises and the US govt. It's baffling to me how this is supposed to be an argument in your favor. Even the "not by an unattainable margin" comment makes no sense, unless you think getting from 40% to 60% is as easy as getting from 60% to 80%, and then getting from 80% to 100%. Which you clearly seem to think.

Anthropic's models are closed and funded by the money of Sam Bankman-Fried, of all people, and more recently Amazon. They're trying to raise billions more from Google and others. That's your idea of being open and not needing corporate investment for some reason? LOL, oh my god.

The "free" model you're citing is MADE BY META, a company worth almost a trillion dollars, and which has the largest social media presence in the world, from which to mine data.

You have absolutely not even the faintest idea what you're talking about.


You're either pretending or incapable of understanding the point that is being reiterated in this thread.

The OpenAI company, which was supposed to be a non-profit open-research think lab, had been taking investments of dozens of millions from different funds while producing the highest-quality research material for the public good. That lasted up to the point when the CEO sold it out via Microsoft's $1B deal, which solved their infra & GPU resource problems at the cost of… completely locking the company in as an unofficial division of Microsoft Research. This deal essentially distorted the raison d'être of the non-profit company, closed the "open research" part, and made it a for-profit extension of MS/Bing services.

Here, perspectives fork in two ways: either you believe it's a great development that you can benefit from by using closed, paid OpenAI/Microsoft commercial services, OR you believe that the metamorphosis of the leading open AI research shop into its exact antipode is the embodiment of disappointment. From your boorish, superficial remarks, I can see which side you are on.

I don't care if Sam Altman took money from Microsoft, or Sam Bankman-Fried (before he was charged), or the US govt - that entirely misses the point. I do care that Sam Altman orchestrated a complete relegation of the research lab to an overpowered for-profit "Clippy" assistant for a single huge corporation, depriving us of the open research. He should have worked on attracting investments from several corporations and other entities to avoid centralisation of control. But we see that he acted according to his own personal agenda, not in the interests of the company's charter. From that perspective, I don't give a damn whether he could have raised a billion or not as a non-profit. I'd rather it had been $300-500M that kept the company open than $1B and what we have today. I'd rather the company hadn't survived this crisis than have what we have today.

I gave you the benchmarks of other LLMs that are several months (up to a year) behind GPT-4 in performance. I don't care whether they are funded by megacorps or not, if they do what the non-profit "Open" AI no longer does, i.e. publish open research and supply us with open models. Comprehend? Your argument of "if it weren't for Microsoft OpenAI wouldn't exist" is irrelevant if OAI no longer produces the open research it was supposed to; it would be better if it didn't exist at all otherwise. Let other for-profit multi-billion corps take over, if we can't have better.

To further refute your $MSFT shill demagoguery, narrowly focused on compute accelerators, I'll add that Abu Dhabi's TII (UAE gov money for academia) has produced and released the open model Falcon 180B, while the French nonprofit research lab Kyutai has just raised €330M. In Europe, for a minute, not even SV! Apparently, funding open research is possible? Q.E.D. I'd rather Ilya and Andrej moved to such a lab than kept peddling the "Laundry Buddy" GPT Bing API tokens BS disgrace, but it's up to them.

And you can stay in your delusions where only OpenAI+Microsoft is capable of leading AI research, led by an ego-inflated CEO. Don't forget to scan your eye pupils for $worldcoin.


I'll make it super-simple for you.

OpenAI, the original non-profit, DIED. Elon Musk pulled the funding, it DIED. It's DEAD, get it? It does not exist without money. I don't know how to get it through your head. There's no such thing as a serious AI company without billions of funding. Period.

So. Given that, to keep the team together, Sam made the for-profit subsidiary and hosted it inside the hollowed-out non-profit. Sam went out to get investors, and Microsoft chipped in the most.

While the new hybrid OpenAI tries to respect the spirit of the original OpenAI... WHICH IS DEAD (GET IT?)... it also has to balance between its non-profit mission and its investors, thanks to whom it exists at all.

That's kind of like when you think you're a world-leading poet and novelist, but your boss doesn't care, he wants those toilets clean. That's what he's paying for. You can then use that money to spend your free time writing poems.

Not sure I have to dignify the rest of your BS with an answer. "Oh, the other megacorp LLMs are just a few months behind GPT-4." Newsflash! OpenAI also exists within... time, genius. Time is passing there, too. And GPT-5 is ready and a few Fortune 100 companies are testing it currently. They're not sitting still and waiting for the others to catch up.

I never said other companies can't compete, in general, but they're no better than OpenAI. Amazon, Google, Inflection, Facebook are no better than OpenAI. They're not. And I never mentioned anything about Worldcoin, you're basically desperate to strawman this, but you are exceptionally bad at it.

Did I say you don't have the slightest, faintest, most distant idea what you're talking about? Anyway this is my last message.


Capitalism shouldn't be allowed anywhere near AI.


How should it exist? Government funded public utility?


Scientific research use only. No exceptions.


AI has no chance of existing outside of capitalism. Capitalism, whether you like it or not, provides a structure where tech innovation flourishes best. Altruism isn’t capable of the same results.


Depends. Unchecked, it’s completely disastrous to all but a few.

I do believe we should get more oversight and have companies invest more in research on safety/alignment.

It’s a wild west right now, with a potentially fatal end result for humanity.


"flourishes best" ?

Compared to what?

Your opinion is not fact.


Boo me all you want, I am right. The events that will unfold soon will prove it. I would love to be wrong, but I am not.


Deep State retains their board seat with Summers.


Pretty classy statement I'd say. Respect.


Thank you! I love you. So so excited. Thank you, thanks. Love. Thank you, and you and you. Thank you. Love, Sam.


ChatGPT is the most impressive tech I've used in years, but it is clearly limited by whatever constraints someone from The Woke Police / Thought Taliban shackled it with. Try to perform completely reasonable requests for information on subjects that even vaguely question orthodoxy or The Narrative and it starts repeating itself with patronizing puritanism, as if you're listening to a generic politician repeat their memorized lines. I had read about instances of this but had never run into it directly myself until a guest had a completely reasonable question about 'climate change' and we chose to ask ChatGPT for an explanation, and the responses were nonscientific and instead sounded like they were coming directly from a political action group.


Not sure why you put climate change in quotes, but it would be helpful to provide the prompt and the response. Without doing so and by using "The Woke Police" and "Thought Taliban", you, too, sound like you are coming directly from a political action group.


"climate change" was the topic, if the topic were "beanbag chairs" or "health problems related to saturated fats" I would have done the same.

BTW, if you have a preferred name for those who embed themselves into businesses and institutions to enforce their beliefs, politics, morality, and opinions, and generally limit knowledge and discourse, I would be happy to use that instead, but I think most people are familiar with the terms "woke" and "Taliban".


Letting it free isn’t a good idea, as Microsoft painfully learned a few years ago:

https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot...

Strong constraints are important to avoid tainting the image of both OpenAI and Microsoft.


Care to link to the chat in question? Would be interesting to see.


just 'cause it won't write right-wing propaganda doesn't mean they're doing something special.


Did you reply to the wrong comment? I fail to see any relevance to my comment, or even to GPT or LLMs.


As opposed to the left-wing propaganda it currently writes?


Ah, those pesky facts.


it does hallucinate, so it's got some right-wing bias.



