The article mostly focuses on ChatGPT uses, but it's hard to say whether ChatGPT is going to be the main revenue driver. It could be! It's also unclear whether the underlying report is underweighting the other products.
It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.
Seems like the error bars have to be pretty big on these estimates.
IMO the key problem OpenAI has is that they are all-in on AGI. Unlike Google, they don't have anything else of any value. If AGI is not possible, or is at least not in reach within the next decade or so, OpenAI will have a product in the form of AI models with basically zero moat. They will be Netscape in a world where Microsoft is giving away Internet Explorer for free.
Meanwhile, Google would be perfectly fine. They can just integrate whatever improvements the actually existing AI models offer into their other products.
I've also thought of this, and what's more, Google's platform provides them with training data from YouTube, optimal backend access to the Google Search index for grounding from an engine they've honed for decades, training data from their smartphones, smart home devices and TVs, Google Cloud... And as you say, also the reverse: empowering their services with said AI, too.
They can also run AI as a loss leader like with Antigravity.
Meanwhile, OpenAI looks like they're fumbling, with that immediately controversial statement about allowing NSFW content after adult verification, and that strange AI social network that mostly produced Sora memes outside of it.
I think they're going to need to do better. As for coding tools, Anthropic is an ever stronger contender there, as if they weren't already under pressure from Google.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
Note that it doesn't say: "Our mission is to maximize shareholder value, and we develop AI systems to do that".
In fairness, no company’s mission statement says “maximize shareholder value” because it doesn’t need to be said - it’s implicit. But I agree that AGI is at the forefront of OpenAI’s mission in a way it isn’t for Google - the nonprofit roots are not gone.
If your mission is to build AGI, and building and deploying it will take many years, an appropriate strategy to accomplish that goal is to find other revenue streams that will make the long haul possible.
I don't know what the moneyed insiders think OpenAI is about, but Sam Altman's public facing thoughts (which I consider to be marketing) are definitely oriented toward making it look like they are all-in on AGI:
See:
(1) https://blog.samaltman.com/the-gentle-singularity (June, 2025)
- "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be."
- " It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year."
- "In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."
(3) https://blog.samaltman.com/reflections (Jan, 2025)
- "We started OpenAI almost nine years ago because we believed that AGI was possible, and that it could be the most impactful technology in human history"
- "We are now confident we know how to build AGI as we have traditionally understood it."
(4) https://ia.samaltman.com/ (Sep, 2024)
- "This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."
(5) https://blog.samaltman.com/the-merge (Dec, 2017)
- "A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species). Most guesses seem to be between 2025 and 2075."
(I omitted about as many essays. The hype is strong in this one.)
OpenAI is still de facto the market leader in terms of selling tokens.
"Zero moat"? It's a big enough moat that only maybe four companies in the world have that level of capability; they have the strongest global brand awareness and direct user base, and they have some tooling and integrations which are relatively unique, etc.
'Cloud' is a bigger business than AI, at least today, and what is the 'AWS moat'? When AWS started out, they had zero reach into the enterprise while Google and Microsoft had effectively infinite capital and integration with business, and they still lost.
There's a lot of talk of this tech as though it's a commodity, it really isn't.
The evidence is in the context of the article, i.e. this is an extraordinarily expensive market to compete in. Their lack of deep pockets may be the problem, more so than everything else.
This should be an existential concern for the AI market as a whole, much like if oil companies, before the highway buildout, had been the only entities able to afford to build toll roads. Did we want Exxon owning all of the highways 'because free market'?
Even more than chips, the costs are energy and other issues, for which the Chinese government has a national strategy that is absolutely already impacting the AI market. If they're able to build out 10x the data centres and offer 1/10th the price, at least for all the non-frontier LLMs, and some right at the frontier, well, that would be bad in the geopolitical sense.
The AWS moat is a web of bespoke product lock-in and exorbitant egress fees. Switching cloud providers can be a huge hassle if you didn't architect your whole system to be as vendor-agnostic as possible.
If OpenAI eliminated their free tier today, how many customers would actually stick around instead of going to Google's free AI? It's way easier to swap out a model. I use multiple models every day until the free frontier tokens run out, then I switch.
That said, idk why Claude seems to be the only one that does decent agents, but that's not exactly a moat; it's just product superiority. Google and OAI offer the same exact product (albeit at a slightly lower level of quality) and switching is effortless.
There are quite large 'switching costs' in moving a solution that's dependent on one model and ecosystem to another.
A model has to significantly outperform on some metric in order to even justify looking at it.
Even for smaller 'entrenchments' like individual developers: Gemini 3 had our attention for all of 7 days, and now that Opus 4.5 is out, well, none of my colleagues are talking about G3 anymore. I mean, it's a great model, but not 'good enough' yet.
I use that as an example to illustrate broader dynamics.
OpenAI, Anthropic and Google are the primary participants here, with Grok possibly playing a role, and of course all of the Chinese models being an unknown quantity because they're exceptional in different ways.
Switching a complex cloud deployment from AWS to GCP might take a dedicated team of engineers several months. Switching between models can be done by a single person in an afternoon (often just 5 minutes). That's what we're talking about.
That means that none of these products can ever have a high profit margin. They have to keep margins razor thin at best (deeply negative at present) to stay relevant. In order to achieve the kinds of margins that real moats provide, these labs need major research breakthroughs. And we haven't had any of those since Attention is All You Need.
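The low-switching-cost claim above can be sketched concretely. Many providers now expose chat endpoints with a common request shape, so "migrating" is often just a config change. Everything below is illustrative: the provider names, base URLs, and model IDs are made-up placeholders, not real API surfaces.

```python
# Hypothetical sketch: swapping model providers as a one-line config change.
# Base URLs and model names are invented placeholders.
PROVIDERS = {
    "provider_a": {"base_url": "https://api.provider-a.example/v1", "model": "model-a-large"},
    "provider_b": {"base_url": "https://api.provider-b.example/v1", "model": "model-b-pro"},
}

def chat_request(provider: str, prompt: str) -> dict:
    """Build the same chat-completion payload for any configured provider."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "headers": {"Authorization": "Bearer <API_KEY>"},  # only the key differs per provider
        "json": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

In a setup like this, "switching" really is editing one dict entry; the hard part (bespoke prompt tuning, caching behavior, evals) lives outside the request plumbing, which is exactly what the replies below argue about.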
> Switching between models can be done by a single person in an afternoon (often just 5 minutes). That's what we're talking about.
Good gosh, no, for comprehensive systems it's considerably more complicated than that. There's a lot of bespoke tuning, caching works completely differently etc..
> That means that none of these products can ever have a high profit margin.
No, it doesn't. Most cloud providers operate on a 'basis' of commodity (linux, storage, networking) with proprietary elements, similar to LLMs.
There doesn't need to be any 'breakthroughs' to find broad use cases.
The issue right now is the enormous underlying cost of training and inference - that's the qualifying characteristic that makes this landscape different.
Aren't you contradicting yourself? To even be considering all the various models, the switching cost can't be that large.
I think the issue here isn't really that it's "hard to switch" it's that it's easier yet to wait 1 more week to see what your current provider is cooking up.
But if any of them start lagging for a few months I'm sure a lot of folks will jump ship.
Selling tokens at a massive loss, burning billions a quarter, isn't the win you think it is. They don't have a moat because they literally just lost the lead; you can only have a moat when you are the dominant market leader, which they never were in the first place.
> All indications are that selling tokens is a profitable activity for all of the AI companies - at least in terms of compute.
We actually don't know this yet because the useful life of the capital assets (mainly NVIDIA GPUs) isn't well understood yet. This is being hotly debated by Wall St analysts for this exact reason.
Gemini does not have 'the lead' in anything but a benchmark.
The most applicable benchmarks right now are in software, and devs will not switch from Claude Code or Codex to Antigravity, it's not even a complete product.
This again highlights quite well the arbitrary nature of supposed 'leads' and what that actually means in terms of product penetration.
And it's not easy to 'copy' these models or integrations.
I think you're measuring the moat of developing the first LLMs but the moat to care about is what it'll take to clone the final profit generating product. Sometimes the OG tech leader is also the long term winner, many times they are not. Until you know what the actual giant profit generator is (e.g. for Google it was ads) then it's not really possible to say how much of a moat will be kept around it. Right now, the giant profit generator is not seeming to be the number of tokens generated itself - that is really coming at a massive loss.
I mean, on your cloud point, I think AWS' moat might arguably be a set of deep integrations between services, and friendly APIs that allow developers to quickly integrate and iterate.
If AWS was still just EC2 and S3, then I would argue they had very little moat indeed.
Now, when it comes to Generative AI models, we will need to see where the dust settles. But open-weight alternatives have shown that you can get a decent level of performance on consumer grade hardware.
Training AI is absolutely a task that needs deep pockets, and heavy scale. If we settle into a world where improvements are iterative, the tooling is largely interoperable... Then OpenAI are going to have to start finding ways of making money that are not providing API access to a model. They will have to build a moat. And that moat may well be a deep set of integrations, and an ecosystem that makes moving away hard, as it arguably is with the cloud.
The EC2 and S3 moat comes from extreme economies of scale. Only Google and Microsoft can compete. You would never be able to achieve S3's profitability because you are not going to get the same hardware deals, the same peering agreements, or the same data center optimization advantages. On top of that there is an extremely optimized software stack (S3 runs at ~98% utilization, with capacity deployed just a couple of weeks in advance, i.e. if they didn't install new storage, they would run out of capacity in a month).
I wouldn't call it a moat. A moat is more about switching costs rather than quality differentiation. You have a moat when your customers don't want to switch to a competitor despite that competitor having a superior product at a better price.
> IMO the key problem that OpenAI have is that they are all-in on AGI
I think this needs to be said again.
Also, not only do we not know if AGI is possible, but generally speaking, it doesn't bring much value if it is.
At that point we're talking about up-ending 10,000 years of human society and economics, assuming that the AGI doesn't decide humans are too dangerous to keep around and have the ability to wipe us out.
If I'm a worker or business owner, I don't need AGI. I need something that gets x task done with a y increase in efficiency. Most models today can do that provided the right training for the person using the model.
The SV obsession with AGI is more of a self-important Frankenstein-meets-Pascal's Wager proposition than it is a value proposition. It needs to end.
Theoretically possible doesn't mean we're capable of doing it. Like, it's one thing to say "I'm gonna boil the ocean" and another thing for you personally to succeed at it while standing on a specific beach with the contents of several Home Depots.
Humans tend to vastly underestimate scale and complexity.
Because human brains are giant three-dimensional processors containing billions of neurons (each with computationally complex behaviors), each performing computations >3 orders of magnitude more efficiently than transistors do, training an intelligence with trillions of connections in real time, all while attached to incredibly sophisticated sensors and manipulators.
And despite all that, humans are still just made of dirt.
Even if we can get silicon to do some of these tricks, that'd require multiple breakthroughs, and it wouldn't be cost-competitive with humans for quite a while.
I would even think it's possible that building brain-equivalent structures that consume the same power, and can do all the same stuff for the same amount of resources, is so far out a science fiction proposition that we can't even give a prediction as to when it will happen. For practical purposes, biological intelligences will have an insurmountable advantage for even the furthest foreseeable future once you consider the economics of humans vs. machines.
That's rather presupposing materialism (in the philosophy of mind sense) is correct. That seems to be the consensus theory, but it's not been shown 'definitely' true.
So, you're a business owner and you've decided we need AGI bc you're fine. You've no one to blame when the Revolution comes.
You clearly do not understand AGI. It's a gamble that really is most easily explained as creating a god. That thing won't hate us. We create its oxygen: data. If anything, it would empower us to make more of it.
The moat for any frontier LLM developer will be access to proprietary training data. OpenAI is spending some of their cash to license exclusive rights to third party data, and also hiring human experts in certain fields just to create more internal training data. Of course their competitors are also doing the same. We may end up in a situation where each LLM ends up superior in some domains and inferior in others depending on access to high quality training data.
Not only this, but there is a compounded bet that it’ll be OpenAI that cracks AGI and not another lab, particularly Google from which LLMs come in the first place. What makes OpenAI researchers so special at this point?
What's more -- how long can they keep the lid on AGI? If anyone actually cracks it... surely competitors are only a couple months behind. At least that seems to be the case with every new model thus far.
Also, they'll have garbage, because the curve is sigmoidal and not anything else. Regardless of the moat, the models won't be powerful enough to do a significant amount of work.
This is how I look at Meta as well. Despite how much it is hated on here, fb/ig/whatsapp aren't dying.
AI not getting much better from here is probably in their best interest even.
It’s just good enough to create the slop their users love to post and engage with. The tools for advertisers are pretty good and just need better products around current models.
And without new training costs “everyone” says inference is profitable now, so they can keep all the slopgen tools around for users after the bubble.
Right now the media is riding the TPU wave, which they for some reason didn't know existed until last week. But Google and Meta have the most to gain from AI not having any more massive leaps towards AGI.
They're both all in on being a starting point to the Internet. Painting with a broad brush that was Facebook or Google Search. Now it's Facebook, Google Search, and ChatGPT.
There is absolutely a moat. OpenAI is going to have a staggering amount of data on its users. People tell ChatGPT everything and it probably won't be limited to what people directly tell ChatGPT.
I think the future is something like how everyone built their website with Google Analytics. Everyone will use OpenAI because they will have a ton of context on their users that will make your chatbot better. It's a self perpetuating cycle because OpenAI will have the users to refine their product against.
Yeah, but your argument is true for every LLM provider, so I don't see how it's a moat, since everyone who can raise money to offer an LLM can do the same thing. And Google and Microsoft don't need to find LLM revenue; they can always offer it at a loss if they choose, unless their other revenue streams suddenly evaporate. And tbh I kind of doubt personalization is as deep of a moat as you think it is.
Google can offer their services for free for a lot longer than OpenAI can, and already does to students. DeepSeek offers their competitor product to ChatGPT for free to everyone already.
On what basis do you say they're within the range of profitability on inference today? Every source I see paints a different story based on their own bias.
You seem to have misread the article (which is not mine by the way), which makes the point that inference costs and revenue seem to scale with each other.
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.
I'm not super bullish on "AI" in general (despite, or maybe because of working in this space the last few years), but strongly agree that the advertising revenue that LLM providers will capture can be potentially huge.
Even if LLMs never deliver on their big technical promises, I know so many casual users of LLMs who have basically replaced their own thought process with "AI". But this is an insane opportunity for marketing/advertising that stands to be as much of a sea change in the space as Google was (if not more so).
People trust LLMs with tons of personal information, and then also trust them to advise them. Give this behavior a few more years to continue to normalize and product recommendations from AI will be as trusted as those from a close friend. This is the holy grail of marketing.
I was having dinner with some friends and one asked "Why doesn't Claude link to Amazon when recommending a book? Couldn't they make a ton in affiliate links?" My response was that I suspect Anthropic would rather pass on that easy revenue to build trust so that one day they can recommend and sell the book to you.
And, because everything about LLMs is closed and private, I suspect we won't even know when this is happening. There's a world where you ask an LLM for a recipe, it provides all the ingredients for your meal from paid sponsors, then schedules to have them delivered to your door, bypassing Amazon altogether.
All of this can be achieved with just adding layers on to what AI already is today.
The "holy grail" of the AI business model is to build a feeling of trust and security with their product and then turn around and try to gouge you on hemorrhoid cream and the like?
We really need to stop the worship of mustache twirling exploitation
There's no worship here on my part (in fact I got out of the AI space because it was increasingly less about tech/solving problems, and more about pure hype), but my experience in this industry has been that the most dystopian path tends to be the most likely. I would prefer if Google search, Reddit and YouTube were closer to what they were 15 years ago, but I do recognize how they got here.
I mean, look at all this "alignment" research. I think the people working in this space sincerely believe they are protecting humanity from a "misaligned" AGI, but I also strongly believe the people paying for this research want to figure out how to keep LLMs aligned with the interests of advertisers.
Meta put so much money into the Metaverse because they were looking for the next space that would be like the iPhone ecosystem: one of total control (but ideally better). Already people are using LLMs for more and more mundane tasks, and I can easily imagine a world where an LLM is the interface for interacting with the online world rather than a web browser (isn't that what we want with all these "agents"?). People already have AI lovers, have AI telling them that they are gods, and are connecting with AI on a deeper level than they should. You believe Sam Altman doesn't realize the potential for exploitation here is unbounded?
What AI represents is a world where a single company controls every piece of information fed to you and has also established deep trust with you. All the benefits of running a social media company (unlimited free content creation, social trust) with none of the drawbacks (having to manage and pay content creators).
In my experience LLMs suck at (product) recommendations. I was looking for books with certain themes, asked ChatGPT 5, and the answer was vague, generic and didn't fit the bill. Another time I was writing an essay and looking for famous figures to cite as examples of an archetype, and ChatGPT's answers were barely related.
In both cases, LLMs gave me examples that were generally famous, but very tangentially related to the subject at hand (at times, ChatGPT was reaching or straight up made up stuff).
I don't know why it has this bias, but it certainly does.
The ideal here will be a multi tiered approach where the LLM first identifies that a book should be recommended, a traditional recommendation system chooses the best book for the user (from a bank of books that are part of an ads campaign), and then finally the LLM weaving that into the final response by prompt suggestion. All of this is individually well tested for efficacy within the social media industry.
I'll probably get comments calling this dystopian but I'm just addressing the claim that LLMs don't do good recommendations right now, which is not fundamental to the chatbot system.
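The three-stage approach described above can be sketched as a toy pipeline. Everything here is a hypothetical stub: the intent detector stands in for an LLM call, the campaign "bank" and its scoring are invented, and the final step just templates what a prompt-injected LLM would weave in more fluently.

```python
# Hedged sketch of the LLM -> rec system -> LLM ad pipeline. All names,
# the campaign bank, and the scoring rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Campaign:
    book: str
    bid: float        # advertiser's bid
    relevance: float  # score from a conventional rec system, 0..1

def detect_book_intent(user_message: str) -> bool:
    # Stage 1: an LLM (stubbed here with a keyword check) decides whether
    # a book recommendation fits the conversation at all.
    return any(w in user_message.lower() for w in ("book", "read", "novel"))

def pick_campaign(campaigns: list[Campaign]) -> Campaign:
    # Stage 2: a traditional rec/auction system ranks paid campaigns,
    # e.g. by expected value = bid * relevance.
    return max(campaigns, key=lambda c: c.bid * c.relevance)

def weave_into_reply(book: str) -> str:
    # Stage 3: the chosen title is handed back to the LLM via prompt
    # suggestion; here we just template the sentence directly.
    return f"If you enjoyed that, you might also like {book}."
```

The point of the separation is that each stage is independently testable for efficacy, which is how social media ad systems are already run; the LLM only supplies intent detection and fluent delivery at the ends.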
All this would imply that the core value derives from better rec systems and not LLMs, which will merely embed the recommendation into their polite fluff.
Rec systems are in use right now everywhere, and they're not exactly mindblowing in practice. If we take my example of books with certain plotlines, it would need some super-high quality feature extraction from books (which would be even more valuable imo, than having better algorithms working on worse data). LLMs can certainly help with that, but that's just one domain.
And that would be a bespoke solution for just books, which, if it worked, would work with a standard search bar, no LLM needed in the final product.
We would need people to solve every domain for recommendation, whereas a group of knowledgeable humans can give you great tips on every domain they're familiar with on what to read, watch, buy to fix your leaky roof, etc.
So in essence, what you suggest would amount to giving up on LLMs (except as helpers for data curation and feature extraction) and going back to things we know work.
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me. There will be challenges in capturing it and challenges with user trust, but it seems super promising because it will likely be harder to block and has a lot of intent context that should make it like search advertising++. And for context, search advertising is 40% of digital ad revenue.
Yeah, I don't like that estimate. It's either way too low, or much too high. Like, I've seen no sign of OpenAI building an ads team or product, which they'd need to do soon if it's going to contribute meaningful revenue by 2030.
At least the description is not at all about building an adtech platform inside OpenAI, it's about optimizing their marketing spend (which being a big brand, makes sense).
There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think, but I also think they might not be looking at ads yet, with having "higher" ambitions (at least not the typical ads machine ala FB/Google). Also if they really needed to monetize, I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
> There are a bunch of people from FB at OpenAI, so they could staff an adtech team internally I think
Well they have Fidji, so she could definitely recruit enough people to make it work.
> with having "higher" ambitions (at least not the typical ads machine ala FB/Google)
Everyone has higher ambitions till the bills come due. Instagram was once going to only have thoughtfully artisan brand content and now it's just DR (like every other place on the Internet).
> At least the description is not at all about building an adtech platform inside OpenAI, it's about optimizing their marketing spend (which being a big brand, makes sense).
The job description has both, suggesting that they're hedging their bets. They want someone to build attribution systems which is both wildly, wildly ambitious and not necessary unless they want to sell ads.
> I bet they could wire up Meta ads platform to buy on ChatGPT, saving themselves a decade of building a solid buying platform for marketers.
Wouldn't work. The Meta ads system is so tuned for feed based ranking that I suspect they wouldn't gain much from this approach.
Actually yes (I did mean to check again but I hadn't seen evidence of this before).
I do think this seems odd; it looks like they're hiring an IC to build some of this stuff, when I would have expected them to be hiring multiple teams.
That being said, the earliest they could start making decent money from this is 2028, and if we don't see them hire a real sales team by next March then it's more likely to be 2030 or so.
No, this role is for running ad campaigns at scale (on Google, Meta, etc.) to grow OpenAI's user base. It's at a large enough scale that it's called a "platform", but it would be for internal use only.
> Your role will include projects such as developing campaign management tools, integrating with major ad platforms, building real-time attribution and reporting pipelines, and enabling experimentation frameworks to optimize our objectives.
> Like, I've seen no sign of OpenAI building an ads team or product
You just haven't been paying attention. They hired Fidji Simo to lead applications in May; she led monetization/ads at Facebook for a decade and has been staffing up aggressively with pros.
Reading between the lines in interview with wired last week[0], they're about to go all in with ads across the board, not just the free version. Start with free, expand everywhere. The monetization opportunities in chatgpt are going to make what google offers with adwords look quaint, and every CMO/performance marketer is going to go in head first. 2% is tiny IMO.
I have indeed been paying attention, thanks. One executive does not an ads product make, though.
I think that ads are definitely a plausible way to make money, but it's legally required that they be clearly marked as such, and inline ads in the responses are at least 1-2 versions away.
The other option is either top ads or bottom ads. It's not clear to me if this will actually work (the precedents in messaging apps are not encouraging) but LLM chat boxes may be perceived differently.
And just because you have a good ad product doesn't mean you'll get loads of budget. You also need targeting options, brand safety, attribution and a massive sales team. It's a lot of work and I still maintain it will take till 2030 at least.
Thanks for calling this out. Here is a better comparison. Before Google was founded, the market for online search advertising was negligible. But the global market for all advertising media spend was on the order of 400B (NYT 1998). Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.
If you think of OpenAI as a new Google, as in a new category-defining primary channel for consumers to search and discover products, well, 2% does seem pretty low.
>Today, Google's advertising revenue is around 260B / year or about 60% of the entire global advertising spend circa 1998.
Or about 30% of the global advertising spend circa 2024.
I wonder if there is an upper bound on what portion of the economy can be advertising. At some point it must become saturated. People can only consume so much marketing.
Advertising is, in many markets, like a tax or tariff: something all businesses need to pay. Think of selling consumer goods online: you need ads on social media to bring in customers. Spending 10% on ads as COGS is a no-brainer. 20% too. Maybe it could go as high as 50%, if the companies don't really have an alternative and all the competitors are doing it too? They are just going to pass the bill to the consumer anyway...
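The pass-through arithmetic in that "ads as a tax" framing is easy to make concrete. The cost model below is a deliberate toy (fixed unit cost, margin and ad spend as fixed shares of revenue), chosen just to show who ends up paying.

```python
# Toy model: if ad spend is a fixed share of revenue and every competitor
# pays it, the retail price simply rises to cover it.
def price_with_ad_tax(unit_cost: float, margin_share: float, ad_share: float) -> float:
    """Price needed so that, after spending `ad_share` of revenue on ads,
    the seller still keeps `margin_share` of revenue as margin.
    Derivation: price * (1 - margin_share - ad_share) = unit_cost."""
    return unit_cost / (1.0 - margin_share - ad_share)
```

With a $50 unit cost and a 10% target margin, raising ad spend from 10% to 20% of revenue moves the price from $62.50 to about $71.43; the consumer covers the difference, exactly as the comment suggests.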
But that occurred with a new form of media that people now spend more of their time on than they did before Google. The comparison implies AI will grow total time spent; I think the trend is more likely that AI will replace other media.
I hate to be that guy, but before Google was around, it was the first wave of the commercial internet, for all of what, five years? Online search was a thing; in fact it was THE thing, spread across many vendors that all relied on advertising revenue, revenue which was still ramping up through the dotcom era in those few years. Google's ad revenue vs. 1998 global ad spend: is that inflation adjusted? Global market development since then, internet economy expansion, even the sheer number of people alive... completely different worlds.
What might stand up from the comparison is that Google introduced a good product people wanted to use, with an approach to marketing that was innovative for its time because it was unobtrusive. The product drove the traffic. It took quite a while before Google figured it all out, though.
There's also a possible scenario where the online ads market around search engines gets completely disrupted and the only remaining avenues for ad spending are around content delivery systems (social media, youtube, streaming, webpages, etc.). All other discovery happens within chatbots and they just get a revenue share whenever a chatbot refers a user to a particular product. I think ChatGPT is soon going to roll out this feature where you can do walmart shopping without leaving the chat.
Google, Meta and Microsoft have AI search as well, so OAI with no ad product or real time bidding platform isn't going to just walk in and take their market.
Google, Meta and Microsoft would have to compete on demand, i.e. users of the chat product. Not saying they won't manage, but I don't think the competition is about ad tech infrastructure as much as it is about eyeballs.
It might take Microsoft's Bing share, but Google and Meta pioneered the application of slot-machine variable-reward mechanics to Facebook, Instagram and YouTube, so it would take a lot more than competing on demand to challenge them.
Tapping into AdTech is extremely hard, as it's heavily driven by network effects. If what you mean is "displaying ads inside OpenAI products", then yes, that's achievable, but it's a minuscule part of the targeted-ad market; 2% is actually very optimistic. Otherwise, they can sell literally zero products to the existing players, who have all already established "AI" toolsets to help them with ad generation and targeting.
Query: LibraGPT, create a plan for my trip to Italia
Response: Book a car at <totally not an ad> and it will be waiting for you at the arrival terminal, drive to Napoli and stay at <totally not an ad> with an amazing view. There's an amazing <totally not an ad> place that serves grandma's favorite carbonara! Do you want me to make the bookings with a totally-not-fake 20% discount?
I already travel like this all the time. I don't understand why it's hard for people to see that ad placement is actually easier in chat than in search.
But who wants that? And you're going to say that's exactly what a travel agent does, selling me stuff so he can get a kickback. But when stuff goes wrong, I'll yell at the travel agent so he has some incentive to curate his ads.
I'm not aware of any FTC rule that would preempt this sort of product as long as it met the endorsement disclosure rules (16 CFR Part 255), same as paid influencers do today.
friendzis's example showed a plausible way to generate revenue by inserting paid placements into the chat bot response without disclosures by pretending they are just honest, organic suggestions.
Right. That's not a novel idea, and this is a well-trod area of concern. That's why these FTC rules have been around for many years.
edit: to be clear, I am saying that in the absence of clear disclosures, that would run afoul of current FTC rules. And historically they have been quick to react to novel ways of misleading consumers.
All these chatbots have been openly making recommendations for particular products since day one. The FTC (or any other regulatory body) doesn't even look in that direction.
Do you have even a rough idea of how many current product recommendations are influenced Grok-style, "Musk is the bestest at everything"?
By analogy with Google's ads: the ads that appear in search results don't make up even 5% of their ad revenue. It's even smaller for Meta. They earn their big ad revenues from their networks, not from their main apps.
Every source I know (hard to link on mobile) shows Google Search to make up 50+% of their ad revenue, and there has been extensive reporting over the years on Google's struggle to diversify away from that.
I expect all hosted model providers will serve ads (regular, marked advertisements, no need for them to pretend not to, people don't care) once the first provider takes the lid off on the whole thing and it proves to be making money. There's no point in differentiating as the one hosted model with no ads because it only attracts a few people, the same way existing Google search and YouTube alternatives that respect the user are niche. Only offline, self hosted models will be ad free (in my estimation).
Assuming you know it's an ad. Ads in answers will generate a ton of revenue and you'll never know if that Hilton really is the best hotel or if they just paid the most.
This isn't a realistic concern unless FTC rules changed substantially from where they are today (see my other comment on this post for links). Sponsored link disclosures would be in place.
Everything else aside, it's simply not worth it for them to try to skirt these rules because the majority of their users (or Google's) simply don't care if something is paid placement or not, provided it meets their needs.
That's only true if you can demonstrate that a substantial percentage of people would be unaware of it. The reason influencers have to disclose is that some, but not all, take endorsement money. It would be pretty easy for OpenAI to claim it was common knowledge, or to bury disclosures in the fine print of the terms of service rather than disclose every time it happened.
The US federal government is now a mob-style organization. The laws, rules, and regulations that are written down are only applicable as far as Trump and those around him want them to be. Loyalty to the boss is the only inviolable rule.
In other words, if they want to put ads into chat, they just need to be perceived as well aligned to Trump to avoid any actual punishment.
Several ways, although I'm not sure whether the below will happen:
1. Paid ads - ChatGPT could offer paid listings at the top of its answers, just like Google does when it provides a results page. Not all people will necessarily leave Google/Gemini for future search queries, but some of the money that used to go to Google/Bing could now go to OpenAI.
2. Behavioral targeting based on past ChatGPT queries. If you have been asking about headache remedies, you might see ads for painkillers - both within ChatGPT and as display ads across the web.
3. Affiliate / commission revenue - if you've asked for product recommendations, at least some might be affiliate links.
The revenue from the above likely wouldn't cover all costs based on their current expenditure. But it would help a bit - particularly for monetizing free users.
Plus, I'm sure new advertising models will emerge in time. If an advertiser could say "I can offer $30 per new customer" and let the AI figure out how to get them and send a bill, that's very different from someone setting up an ad campaign, which involves everything from audience selection and creative to bid management and conversion-rate optimization.
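The pricing difference being described is essentially pay-per-acquisition billing versus impression-based campaign billing. A minimal sketch, purely illustrative (the function names and figures are made up, not from any real ad platform):

```python
# Hypothetical comparison of two ad billing models.
# cpa_bill: advertiser pays a flat offer per converted customer.
# cpm_bill: traditional campaign billing, priced per thousand impressions.

def cpa_bill(conversions: int, offer_per_customer: float) -> float:
    """Pay-per-acquisition: cost scales with customers won, not ads shown."""
    return conversions * offer_per_customer

def cpm_bill(impressions: int, cpm: float) -> float:
    """Impression-based: cost scales with ads shown, regardless of outcome."""
    return impressions / 1000 * cpm

# An advertiser offering $30 per new customer, with 500 conversions:
assert cpa_bill(500, 30.0) == 15000.0
# The same budget at a $5 CPM would have bought 3M impressions with no
# guarantee of any conversions at all.
```

Under CPA the targeting, creative, and bidding risk all shift to the platform, which is why it only works if the platform (here, the AI) can reliably attribute and drive conversions itself.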
So I don't necessarily disagree with your suggestions, but that is just not a $1T company you're describing. That's basically an X/Twitter-sized company, and most agree that $44B was overpaying.
It's not that OpenAI hasn't created something impressive; it just came at too high a price. We're talking space-program money, but without all the neat technologies that came along as a result. OpenAI has more or less developed ONE technology; no related products or technologies have been spun out of the program. To top it all off, the thing they built is apparently not that hard to replicate.
ChatGPT usage is already significantly higher than Twitter's at its peak, and there is a lot more scope for activity with explicitly or implicitly commercial intent. Twitter was entertainment and self-promotion. Chatbots are going to be asked for advice on how to repair a dishwasher, whether a rash is something to worry about, which European city with cheap flights has the best weather in March for a wedding, and an indefinite stream of other similar queries.
> It also estimates that LLM companies will capture 2% of the digital advertising market, which seems kind of low to me.
This cannot all be about advertising. They are selling a global paradigm shift not a fraction of low conversion rate eyeballs. If they start claiming advertising is a big part of their revenue stream then we will know that AI has reached a dead end.
Maybe users will employ LLMs to block ads? The problem is that local LLMs are less powerful and so would have a hard time blocking stealth ads crafted by a more powerful LLM, and they would also add latency (remote LLMs add latency too, but the user may not want to pay twice for that).
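The shape of such a client-side filter is easy to sketch. In a real version a local model would act as the classifier; here a naive keyword heuristic stands in for it (the marker list and function names are purely illustrative assumptions):

```python
# Naive sketch of client-side ad filtering over chatbot output.
# A local LLM would replace looks_promotional(); this keyword
# heuristic is a stand-in, and illustrates exactly why it fails
# against stealth ads: they simply avoid obvious marker phrases.

PROMO_MARKERS = ("sponsored", "discount", "book now", "limited offer")

def looks_promotional(sentence: str) -> bool:
    """Flag a sentence that resembles a paid placement."""
    s = sentence.lower()
    return any(marker in s for marker in PROMO_MARKERS)

def filter_response(text: str) -> str:
    """Drop flagged sentences before showing the response to the user."""
    kept = [s for s in text.split(". ") if not looks_promotional(s)]
    return ". ".join(kept)
```

The asymmetry in the comment is the point: the stronger model generating the ads only has to evade whatever the weaker local filter can detect.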
Seems like ad targeting might be a tough sell here, though; it'd basically have to be "trust me bro". Like, do I want to advertise Coca-Cola when people ask about terraforming deserts? I don't think I'd be surprised by either amazing success or terrifying failure.
Perplexity actually did search with references linked to websites, which they could relate in a graph, and even that only made them like $27k.
I think the problem is that on Facebook and Google you can build an actual graph because content is a thing (a URL, a video link, etc.). I think it will be much harder to convert my philosophical musings into actionable insights.
So few people understand how advertising on the internet works and that is I guess why Google and Meta basically print money.
Even here the idea that it’s as simple as “just sell ads” is utterly laughable and yet it’s literally the mechanism by which most of the internet operates.
You have to take the source into consideration. The FT is part of the Anthropic circle of media outlets and financial ties. It benefits them to drum up support for OpenAI's competition, primarily Anthropic, and they (the FT) also have deep ties to Google and the adtech regime.
They benefit from slowing and attacking OpenAI because there's no clear purpose for these centralized media platforms except as feeds for AI, and even then, social media and independents are higher quality sources and filters. Independents are often making more money doing their own journalism directly than the 9 to 5 office drones the big outlets are running. Print media has been on the decline for almost 3 decades now, and AI is just the latest asteroid impact, so they're desperate to stay relevant and profitable.
They're not dead yet, and they're using lawsuits and backroom deals to insert themselves into the ecosystem wherever they can.
This stuff boils down to heavily biased industry propaganda, subtly propping up their allies, overtly bashing and degrading their opponents. Maybe this will be the decade the old media institutions finally wither up and die. New media already captures more than 90% of the available attention in the market. There will be one last feeding frenzy as they bilk the boomers as hard as possible, but boomers are on their last hurrah, and they'll be the last generation for whom TV ads are meaningfully relevant.
Newspapers, broadcast TV, and radio are dead, long live the media. I, for one, welcome our new AI overlords.
All of which is great theory without any kind of evidence? Whereas the evidence pretty clearly shows OpenAI is losing tons of money and the revenue is not on track to recover it?
Well, for one, the model doesn't take various factors into account, assumes a fixed cost per token, and doesn't allow for the people in charge of buying and selling the compute to make decisions that make financial sense. Some of OpenAI's commitments and compute are going toward research, with no contracted need for profit or even revenue.
If you account for the current trajectory of model capabilities, and assume bare-minimum competence and good faith on the part of OpenAI and the cloud compute providers, then it's nowhere near a money pit or a shenanigan; it's a typical medium-to-high-risk VC investment play.
At some point they'll pull back the free stuff and the compute they're burning to attract and retain free users, they'll also dial in costs and tweak their profit per token figure. A whole lot of money is being spent right now as marketing by providing free or subsidized access to ChatGPT.
If they maximize exposure now and then dial in costs, they could be profitable with no funding shortfalls by 2030, provided they pivot: dial back available free access and aggressively promote paid tiers and product integrations.
This doesn't even take into account the shopping assistant/adtech deals, just ongoing research trajectories, assumed improved efficiencies, and some pegged performance level presumed to be "good enough" at the baseline.
They're in maximum overdrive expansion mode, staying relatively nimble, and they've got the overall lead in AI, for now. I don't much care for Sam Altman on a personal level, but he is a very savvy and ruthless player of the VC game, with some of the best ever players of those games as his mentors and allies. I have a default presumption of competence and skillful maneuvering when it comes to OpenAI.
When an article like this FT piece comes out and makes assumptions of negligence and incompetence and projects the current state of affairs out 5 years in order to paint a negative picture, then I have to take FT and their biases and motivations into account.
The FT article is painting a worst case scenario based on the premise "what if everyone involved behaved like irresponsible morons and didn't do anything well or correctly!" Turns out, things would go very badly in that case.
ChatGPT was released less than 3 years ago. I think predicting what's going to happen in even 1 year is way beyond the capabilities of FT prognosticators, let alone 5 years. We're not in a regime where Bryce Elder, finance and markets journalist, is capable or qualified to make predictions that will be sensible over any significant period of time. Even the CEOs of the big labs aren't in a position to say where we'll be in 5 years. I'd start getting really skeptical when people start going past 2 years, across the board, for almost anything at this point.
Things are going to get weird, and the rate at which things get weird will increase even faster than our ability to notice the weirdness.
All of which is more theory. Of course nobody can predict the future. Your argument is essentially “they have enough money and enough ability to attract more that they’ll figure it out,” just like Amazon did, who were also famously unprofitable but could “turn it on at any time.”
FT’s argument is, essentially, “we’re in a bubble and OpenAI raised too much and may not make it out.”
Neither of us knows which is more correct. But it is certainly at least a very real possibility that the FT is more correct. Just like the Internet was a great “game changer” and “bubble maker,” so are LLMs/AI.
I think it’s quite obvious we’re in a bubble right now. At some point, those pop.
The question becomes: is OpenAI AOL? Or Yahoo? Or is it Google?
That's a fabulous tale you've told (the notion that there's a bunch of Anthropic-leaning sites is my personal favourite), but alas, the article is reporting on a GSBC report which they are justifiably sceptical of, and it does not in any way, shape or form represent the FT's beliefs.
AI can both be a transformative technology and the economics may also not make sense.
It is to the point of yellow journalism. They know that the "OpenAI is going to go belly up in a week!" take is going to be popular with AI skeptics, which includes a large number of HN viewers. This thread shot up to the top of the front page almost immediately. All of that adds to the chances of roping in more subscribers.