
Because you don't rely on a business that had 80% of its staff threaten to quit overnight?



> staff threaten to quit overnight

They didn't, though. They threatened to continue tomorrow!

It's called "walking across the street" and there's an expression for it because it's a thing that happens if governance fails but Makers gonna Make.

Microsoft was already running the environment, with rights to deliver it to customers, and added a paycheck for the people pouring themselves into it. The staff "threatened" to maintain continuity (and released the voice feature during the middle of the turmoil!).

Maybe relying on a business where the employees are almost unanimously determined to continue the mission is a safer bet than most.


>They didn't, though. They threatened to continue tomorrow!

Are you saying ~80% of OpenAI employees did not threaten to stop being employees of OpenAI during this kerfuffle?


They're saying that ~80% of OpenAI employees were determined to follow Sam to Microsoft and continue their work on GPT at Microsoft. They're saying this actually signals stability, as the majority of makers were determined to follow a leader to continue making the thing they were making, just in a different house. They're saying that while OpenAI had some internal tussling, the actual technology will see progress under whatever regime and whatever name they can continue creating the technology with/as.

At the end of the day, when you're using a good or service, are you getting into bed with the good/service? Or the company who makes it? If you've been buying pies from Anne's Bakery down the street, and you really like those pies, and find out that the person who made the pies started baking them at Joe's Diner instead, and Joe's Diner is just as far from your house and the pies cost about the same, you're probably going to go to Joe's Diner to get yourself some apple pie. You're probably not going to just start eating inferior pies; you picked these ones for a reason.


They showed they are hypocrites.

Blaming the board for hindering the OpenAI mission by firing Altman, while at the same time threatening to work for MS, which would kill that mission completely.


I don't think that's necessarily true or untrue, but to each their own. Their mission, which reads "... ensure that artificial general intelligence benefits all of humanity," leaves a LOT of leniency in how it gets accomplished. I think calling them hypocrites for trying to continue the mission with a leader they trust is a bit...hasty.


>Microsoft was already running the environment, with rights to deliver it to customers.

But they don't own it. If OpenAI goes down they have rights to nothing.


> But they don't own it. If OpenAI goes down they have rights to nothing.

This is almost certainly false.

As a CTO at some of the largest banks and hedge funds, and a serial founder of multiple Internet companies, I assure you contracts for novel and "existential" technologies the buyer builds on top of are drafted with rights that protect the buyer in the event of the seller blowing up.

Two of the most common provisions are (a) code escrow w/ perpetual license (you blow up, I keep the source code and rights to continue it) and (b) key person (you fire whoever I did the deal with, that triggers the contract, we get the stuff). Those aren't ownership before a blowup; they turn into ownership in the event of anything that undermines stability.

I'd argue Satya's public statement on the Friday the news broke ("We have everything we need..."), made without breaching confidentiality around the actual terms of the agreement, signaled that Microsoft has that kind of contract.


They threatened to walk across the street to a service you aren’t using.


And if they walk across that street, I'll cancel my subscription on this side of the street, and start a subscription on that side of the street. Assuming everything else is about equal, such as subscription cost and technology competency. Seems like a simple maneuver, what's the hang up? The average person is just using ChatGPT in a browser window asking it questions. It seems like it would be fairly simple, if everything else is not about equal, for that person to just find a different LLM that is performing better at that time.


It's super easy to replace an OpenAI API endpoint with an Azure API endpoint. You're totally correct here. I don't see why people are acting like this is a risk at all.
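
For what it's worth, a rough sketch of the swap with the openai Python SDK (the resource URL, api_version, and deployment name below are placeholders, not anything either company has published for this scenario):

  from openai import OpenAI, AzureOpenAI

  # OpenAI-hosted endpoint
  client = OpenAI(api_key="sk-...")

  # Azure OpenAI endpoint: same chat API, different client construction.
  # Resource URL, api_version, and deployment name here are illustrative.
  client = AzureOpenAI(
      azure_endpoint="https://my-resource.openai.azure.com",
      api_key="...",
      api_version="2023-05-15",
  )

  # The call shape is identical either way; on Azure, "model" is the name
  # of the deployment you created, which maps to a GPT model underneath.
  resp = client.chat.completions.create(
      model="gpt-4",
      messages=[{"role": "user", "content": "Hello"}],
  )
  print(resp.choices[0].message.content)

In practice the differences come down to auth and the deployment name, which is about as close to a drop-in swap as it gets.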


Not that easy; MS can sell GPT as a service but doesn't own it.

No OpenAI, no GPT.


I was going on the assumption that MS would not have still been eager to hire them on if MS wasn't confident they could get their hands on exactly that.


That's not how contracts like this are written.

It's far more common that if I'm building on you and you blow up, I automatically own the stuff.



