That’s a strange statement because I definitely canceled my subscription as a result of the happenings. This very public battle confirmed for me how misaligned OpenAI is with the originating charter and mission of its nonprofit. And I didn’t want to financially contribute towards that future anymore.
I guess my subscription didn’t count as a customer.
This happens to me frequently. When I report an obvious problem in some service it is always the very first time that they've heard of it and no other customers seem to have the issue.
I mean... given the millions of people who have browsed and used sites I've been responsible for, the number of complaints is never high, and if guest services can narrow a problem down it usually gets passed along - but a lot of the time, it's one guy angry enough to report the issue. I've reported issues on several sites now and then, and I'm not even sure they bothered to respond or ever got my email. How do you get a Gmail message through a corporate firewall?
I think a lot of people will just leave your site and go elsewhere vs bother to provide feedback.
I think the true customers of OpenAI are likely not the people paying for a ChatGPT subscription, but paying to use their APIs which is significantly harder to just step away from.
It was kind of eye-opening - they took phone calls from late-night TV infomercials, and there was a script.
They would take down your name, take your order, and then... upsell, cross-sell, special offer sell, etc.
If the person said anything like "I'm not interested in this, blah blah", they had responses for everything: "But other people were quite upset when they didn't receive these very special offers and called back to complain."
It was carefully calculated. It was refined. It was polished and tested.
The only way OUT of the script was to say "I will cancel my order unless you stop"
If the call center operator didn't follow the script, they would be fired.
(You know this happens now with websites at scale. A/B test until the cancellation message is scary enough. A/B test until you give up on the privacy policy.)
> The only way OUT of the script was to say "I will cancel my order unless you stop"
Hanging up the phone is always an option. If you feel civilised you first say you are not interested and thank the sales person for their time, and then hang up no matter what they try to say. That is a way out of the script of course.
I find it extremely frustrating when people/businesses/organizations take advantage of the general population's politeness.
Ten years ago I would have found it really difficult to hang up on some random phone caller that I didn't want to speak to. Now I don't give it a second thought.
Inch by inch we're all getting ruder and ruder to deal with these motherfuckers, and I can't help but feel that it is spilling out into regular interactions.
This is a universal truth of feedback and customer service. Every user report is an iceberg: for every 1 person there's a much more significant number of people who experienced the problem but never reported it.
Yes, but the company may be like an icebreaker going across the pole in a straight line, and still, when asked about hitting ice, the captain will say that this is literally the first time it has ever happened.
It's called "spin" in a press release/marketing, but we on the outside call it a lie, yes.
It wouldn't shock me to learn that all of the events that took place were meant to get worldwide attention and strengthen their financial position. I'd imagine not being able to be fired, and having the entire company ready to quit to follow you, sends a pretty clear signal to all VCs that hitching your wagon to any other AI company is suicide, because the bulletproof CEO has the people at the cutting edge of the entire market ready to go wherever he does. How could anyone give funding to a company besides his at this point? Might as well set the money on fire if you're going to give it to someone else's company.
Yeah, but their CEO can be fired, and the CEO is who the VCs backed.
EDIT: The fact that I, an average Joe, know all about OpenAI and its CEO, and even some of its engineers, yet didn't know Kagi was doing anything with AI until your comment, tells me that Kagi is not any sort of competition - not as far as VCs are concerned, anyway.
It might be that there was no net outflow of customers. I am sure customers quit all the time, and others sign up. It probably means that they either didn't see a statistically significant increase in churn, or that the excess quits were compensated for by excess new customers.
He's probably somewhat deceptively only referring to enterprise license customers. When there's an enterprise offering, many times the individual personal use licenses are just seen as gravy on top of the potatoes. Not like good gravy though, like the premade jars of gravy you can buy at the grocery store and just heat up.
Obviously this. They mean the enterprises that have integrated OpenAI into their platforms (as Salesforce has, for example). All of this happened so quickly that no one could have dropped them, lol - but nevertheless, yeah, they probably didn't officially lose one. Plus they're all locked into annual contracts anyway.
If you mean a ChatGPT subscription, I'm assuming no, you're not their primary customer base. I assume their primary customers are paying for significant API usage, and it's not fully feasible for them to just migrate overnight.
Why? If the product is useful (it is to me), then why do you care so much about the internal politics? If it ceases to be useful or something better comes along, sure. But this strikes me as being chronically online and involved in drama.
Phind is an example where they use their own model, and it is pretty good at its specialty. OpenAI is hard to beat "in general", especially if you don't want to fine-tune, etc.
As long as you can outrun the technical debt, sure. Nothing lasts forever. Architect against lock in. This is just good vendor/third party risk management. Avoid paralysis, otherwise nothing gets built.
If OpenAI decides to change their business model, it might be bad for companies that use them, depending on how they change things. If they are looking unstable, might as well look around.
I despise the engineering instinct to derisively dismiss anything that involves humans as "politics".
The motivations of individuals, the trade-offs of organizations, the culture of development teams - none of those are "politics".
And neither is the fundamental hierarchical and governance structure of big companies. These things influence the stability of architectures, the design of APIs, and operational priorities. It is absolutely reasonable for one's confidence in depending on a company's technology to be shaken by the shenanigans OpenAI went through.
It’s not about politics, it’s about stability and trust.
Same reason I’m hesitant to wire up my home with IoT devices (just a personal example). Nothing to do with politics, I’m just afraid companies will drop support and all the things I invested in will stop working.
Yes, but that's not the decision the person in this thread was struggling with - they were struggling with the idea that they may invest $$ into something that 2,3,10 years down the road no longer works because a company went out of biz.
Sounds like they would like to have the devices but have a hard time pulling the trigger for a fear of sinking money into to something temporary.
Yeah, and the operational stability of a company is a factor that goes straight to its ability to continue as a going concern. So it's reasonable for many people to base their decision on this kind of drama (even if not everyone agrees on the importance of this factor).
You may want to go back and re-read the thread you are replying to... the person I replied to wasn't talking about drama; they made an "IoT home devices all spy on you" argument.
They didn't, though. They threatened to continue tomorrow!
It's called "walking across the street" and there's an expression for it because it's a thing that happens if governance fails but Makers gonna Make.
Microsoft was already running the environment, with rights to deliver it to customers, and added a paycheck for the people pouring themselves into it. The staff "threatened" to maintain continuity (and released the voice feature during the middle of the turmoil!).
Maybe relying on a business where the employees are almost unanimously determined to continue the mission is a safer bet than most.
They're saying that ~80% of OpenAI employees were determined to follow Sam to Microsoft and continue their work on GPT at Microsoft. They're saying this actually signals stability, as the majority of makers were determined to follow a leader to continue making the thing they were making, just in a different house. They're saying that while OpenAI had some internal tussling, the actual technology will see progress under whatever regime and whatever name they can continue creating the technology with/as.
At the end of the day, when you're using a good or service, are you getting into bed with the good/service? Or the company who makes it? If you've been buying pies from Anne's Bakery down the street, and you really like those pies, and find out that the person who made the pies started baking them at Joe's Diner instead, and Joe's Diner is just as far from your house and the pies cost about the same, you're probably going to go to Joe's Diner to get yourself some apple pie. You're probably not going to just start eating inferior pies, you picked these ones for a reason.
I don't think that's necessarily true or untrue, but to each their own. Their mission, which reads "... ensure that artificial general intelligence benefits all of humanity," leaves a LOT of leeway in how it gets accomplished. I think calling them hypocrites for trying to continue the mission with a leader they trust is a bit... hasty.
> But they don't own it. If OpenAI goes down they have the rights of nothing.
This is almost certainly false.
As a CTO at some of the largest banks and hedge funds, and a serial founder of multiple Internet companies, I assure you that contracts for novel, "existential" technologies the buyer builds on top of are drafted with rights that protect the buyer in the event of the seller blowing up.
Two of the most common provisions are (a) code escrow with a perpetual license (you blow up, I keep the source code and the rights to continue it) and (b) key person (you fire whoever I did the deal with, that triggers the contract, we get the stuff). Those aren't ownership before a blowup; they turn into ownership in the event of anything that threatens stability.
I'd argue Satya's public statement on the Friday the news broke ("We have everything we need..."), without breaching confidentiality around actual terms of the agreement, signaled Microsoft has that nature of contract.
And if they walk across that street, I'll cancel my subscription on this side of the street, and start a subscription on that side of the street. Assuming everything else is about equal, such as subscription cost and technology competency. Seems like a simple maneuver, what's the hang up? The average person is just using ChatGPT in a browser window asking it questions. It seems like it would be fairly simple, if everything else is not about equal, for that person to just find a different LLM that is performing better at that time.
It's super easy to replace an OpenAI API endpoint with an Azure API endpoint. You're totally correct here. I don't see why people are acting like this is a risk at all.
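It really is mostly an endpoint-and-auth swap. Here's a minimal sketch of the difference; the resource name, deployment name, and API version below are hypothetical placeholders, while the URL shapes and header names follow the two providers' documented REST conventions:

```python
# Sketch: the same chat-completions call targets different URLs and auth
# headers depending on provider. Names like "my-resource" and "my-gpt4"
# are placeholder assumptions, not real deployments.

def chat_request_config(provider: str, api_key: str) -> dict:
    """Return the URL and headers for a chat-completions request."""
    if provider == "openai":
        return {
            "url": "https://api.openai.com/v1/chat/completions",
            # OpenAI uses a standard bearer token.
            "headers": {"Authorization": f"Bearer {api_key}"},
        }
    if provider == "azure":
        # Azure routes by deployment name in the URL, requires an
        # api-version query parameter, and takes the key in an
        # "api-key" header instead of "Authorization".
        resource, deployment, api_version = "my-resource", "my-gpt4", "2023-05-15"
        return {
            "url": (f"https://{resource}.openai.azure.com/openai/deployments/"
                    f"{deployment}/chat/completions?api-version={api_version}"),
            "headers": {"api-key": api_key},
        }
    raise ValueError(f"unknown provider: {provider}")
```

The JSON body (messages, temperature, etc.) is essentially the same either way; the main gotcha is that Azure selects the model via the deployment name in the URL rather than a `model` field in the request.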
I was going on the assumption that MS would not have still been eager to hire them on if MS wasn't confident they could get their hands on exactly that.
Based on how their post is worded, I'm guessing they never needed OpenAI's products in the first place. For most people, OpenAI's offerings are still luxury products, and all luxury brands are vulnerable to bad press. Some of the things I learned in the press frenzy certainly made me uncomfortable.
You don’t believe that the non-profit’s stated mission is important enough to some people that it is a key part of them deciding to use the paid service to support it?
It’s a dramatic story - the high-flying CEO of one of the hottest tech companies is suddenly fired without explanation or warning. Everyone assumed it was some sort of dodgy personal behavior, so information leaked that it wasn’t that; it was something between the board and Sam.
Well, that’s better for Sam, sure, but that just invites more speculation. That speculation is fed by a series of statements and leaks and bizarre happenings. All of that is newsworthy.
The most consistently asked question I got from various family over thanksgiving beyond the basic pleasantries was “so what’s up with OpenAI?” - it went way outside of the tech bubble.
> why did they go from internal politics -> external politics (large scale external politics)
My guess is it has something to do with the hundreds of employees whose net worth is mostly tied up in OpenAI equity. It's hard to leverage hundreds of people in a bid for power without everyone and their mother finding out about it, especially in such a high-profile organization. This was a potentially life-changing event for a surprisingly large group of people.
The public drama is a red flag that the organization's leaders lack the integrity and maturity to solve their problems effectively and responsibly.
They are clearly not responsible enough to deal with their own internal problems maturely. They have proven themselves irresponsible. They are not trustworthy.
I think it's reasonable to conclude that they cannot be trusted to deal with anybody or any issue responsibly.
In discussing OpenAI the article reveals why OpenAI is the size it is:
OpenAI was created before LLMs were so popular, so OpenAI has a diverse employee pool of AI people. Many, if not most, were hired NOT for LLM or even NN knowledge but for knowledge of the more general field of AI.
If you were an OpenAI exec who fervently believed LLMs would take you to true AI, there would be every reason to dump the non-LLM employees (likely a majority, and a financial burden) and hire new staff who are more LLM-knowledgeable. In the meantime, current OpenAI staff not familiar with LLMs are undoubtedly cramming! 8-)
So that satisfies my question as to why OpenAI has so many people: only a fraction of the company produced the current hot products.
Q. How many programmers and engineers does it take to build an LLM and fire it up?

Some here implicitly speak as if they are familiar with LLMs, and so I assumed that the answer could be 1, 2, or possibly a handful of people to do the deed. But it seems I am very wrong.
Nonetheless by the time one has 700+ employees, surely someone in charge would have noticed that the room was crowded.
And why not the same at 500, or 200 or even 50 or fewer?
Perhaps the lack of oxygen has something to do with it? Might I suggest opening a window or two?
Ops, scaling up, site reliability, research, marketing, front-end web, training (needs humans, which means it needs organisation), legal, etc. There is a lot going on, plus R&D - in other words, the next AI breakthrough. Pretty lean if you compare it to Google and realize it is not far off being as good, and would be the world's best search if Google didn't exist. How many people does Google employ!
It is an HN trope to ask why company X needs so many employees. Usually said about world-class companies. Usually because you can build something that looks like it in a weekend with React and MongoDB (although you can't prototype OpenAI like that).
quickthrower2 says >"It is a HN trope to say why does company X need so many employees."<
I have not gathered the statistics that you undoubtedly have compiled. Please feel free to post them here in support of your use of "HN trope".
It depends on what we mean when we say “serious work”, but from a European enterprise perspective you would not use OpenAI for “serious work”; you would use Microsoft products.
Copilot is already much more refined in terms of business value than the various OpenAI products. If you’ve never worked in a massive organisation, you probably wouldn’t believe the amount of efficiency it’s added to meetings by being able to make readable PowerPoints or useful summaries by recording a meeting, but it’s going to save us trillions of euros just for that.
Then there are the data-protection issues with OpenAI. You wouldn’t put anything important into their products, but you would with Microsoft. So Copilot can actually help with things like contract management, data refinement, and so on.
Of course it’s sort of silly to say that you aren’t buying OpenAI products when you’re buying them through Microsoft, but the difference is there. And if you included Microsoft in your statement, then I agree, there is no competition. I like Microsoft as an IT business partner for Enterprise, I like them a lot, but it also scares me a little how much of a monopoly on “office” products they have now. There was already little to no competition to Office365, and now there is just none whatsoever.
> you probably wouldn’t believe the amount of efficiency it’s added to meetings by being able to make readable PowerPoints or useful summaries by recording a meeting

How exactly - transcribe the speech to text and then convert the transcript into a summary?
What alternatives are you currently looking at? I’ve just begun scratching the surface of Generative AI but I’ve found the OpenAI ecosystem and stack to be quite excellent at helping me complete small independent projects. I’m curious about other platforms that offer the same acceleration for content generation.
Yeah I just watched the keynote on Amazon’s Q product. I’m going to tinker with that in the coming days. Pretty excited about the Google drive/docs integration since we have a lot of our company documents over the last 15 years in Drive.
That’s fair but I’m mostly building prototypes with the API intended for exploring the space so I’m not too worried about productionizing these yet. I was curious if there’s another solution that meets or exceeds OpenAI for quality of content and ease of use. I’m an ex-programmer working as a PM so most of this is just learning about these tools.
I hope you find a competitor as good as ChatGPT. We desperately need competition in this space. That Google/FB tossing billions at this still hasn't created anything close is starting to worry me.
Where are you planning on moving to? I don't think there's a reason to not use OpenAI, but definitely right to diversify and use something like LiteLLM to easily switch between models and model providers.
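For anyone unfamiliar with LiteLLM: it exposes one `completion(model=..., messages=...)` call and picks the provider from a prefix in the model string, so switching vendors becomes a config change rather than a code change. A rough sketch of just that routing idea (this is illustrative, not LiteLLM's actual implementation):

```python
# Sketch of prefix-based provider routing: "azure/my-gpt4" targets Azure,
# a bare model name like "gpt-4" defaults to OpenAI. The provider names
# here mirror LiteLLM's convention; the deployment names are hypothetical.

def parse_model(model: str) -> tuple[str, str]:
    """Split 'provider/model' into (provider, model); default to openai."""
    provider, sep, name = model.partition("/")
    if not sep:
        # No prefix present: treat the whole string as an OpenAI model.
        return "openai", model
    return provider, name
```

With the real library, `litellm.completion(model="gpt-3.5-turbo", messages=...)` goes to OpenAI while `model="azure/<your-deployment>"` goes to Azure, with the same message format in and out, which is what makes the switch low-friction.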
I'm not a big customer, but I am starting the process of moving away from OpenAI in response to these events