Big Tech says AI is booming. Wall Street is starting to see a bubble (washingtonpost.com)
79 points by jameslk 6 months ago | 94 comments




I strongly suspect we're in an AI bubble, but that doesn't mean AI is without value. It's just that investors and entrepreneurs and CEOs get the zoomies about stuff like this. They go all in because they're scared someone is going to get ahead of them, or that they'll leave money on the table, and this often leads to a feedback loop as everyone tries to be more hyped than the last guy. But, obviously GenAI has already created a lot of value, and will create more. It's cool.

I hesitate to make comparisons, but I think of it like VR. For a hot minute, VR was supposed to be the future. We were all going to live our lives in VR. Zuckerberg goes all in. Turns out that's not happening anytime soon, but does that mean VR is a sham? No, it's actually really cool, and it'll keep getting better, but that doesn't mean we're going to spend 8-16 hours a day with a helmet on. That was just a bad prediction, and doesn't reflect on the technology or its value. Same with AI.


Bubbles are inevitable. Imagine a single company investing precisely the right amount into a new technology, not a penny more. Now imagine every single company and person doing the same thing, perfectly. It’s impossible.

The stock market has boom and bust cycles even with companies using 100-year-old technologies! With an embryonic but all-promising technology like AI, it will pop and then reinflate many times over. And this is fine.


The analogy I like best is that the economy functions like a slime mold. When it senses a chemical gradient that indicates food in some direction, it grows out in a very wide search pattern, expending a lot of resources to move that way. When it finds the actual food, most of the growth withers away and you're left with just a path to the food.


Would ants be more applicable, with the Fed setting and changing rates being equivalent to the queen putting out different pheromones to increase harvesting of food, expansion, etc., and investors being either individual ants or the more abstracted flow of capital?


I love this analogy; it also tells you why index investors usually do better than speculators: even if you invest pretty broadly across the space, there's a good chance you missed the one path to the food.


Curious about the numbers - do index investors really do better than speculators? At what rate? Is it also true in nascent industries, like the AI one in this example?


See the Buffett hedge fund bet: index fund investment beat the hedge funds over an extended period of time.

You might see some speculators do better than the index fund investors, but you also might be ignoring all the ones that did terribly.


And you have to.

Sometimes (most of the time) you bet wrong or just don't win on some new technology you think is important.

But, while fast followers can sometimes get there, you mostly have to place a meaningful bet even if the outcome is hardly pre-ordained.


A bubble is when expectations are severely detached from reality. Simple over-investing is not a bubble.


So do you think AI expectations are currently detached from reality? Can you quantify it?


> Can you quantify it?

It's on the person saying it's a bubble to quantify it.


I don’t think it’s a bubble yet. I think it’s very real and very early. Yes, most AI startups will fail and have little to no revenue, just as most software startups did prior to the AI boom. That hasn’t changed.

But we are so early in the LLM era. Hell, Slack hasn’t even added an LLM for chat histories, which is something I desperately want. AI agents are just starting. The scaling laws haven’t stopped.

There are a ton of use cases where I think LLMs are extremely helpful but we are bottlenecked by inference speeds and context size. Both of which are rapidly improving. There will come a break point where models are cheap, capable, and fast. We are not there yet. It’s really freaking early.

Expecting AI companies to generate massive amounts of revenue now is silly. Most of them will fail. But some of them will be absolutely gigantic.


The bubble is in the very concept of an "AI company". Even OpenAI is not obviously on a path to profit [0]. I don't see any hope for the many companies whose business plan is "wrap OpenAI and use it to solve all the problems for sector Y!"

Where we will see returns on LLMs, and where we will see useful applications, isn't in AI companies; it will be in established players integrating these tools into their existing platforms. This isn't like past tech advances that led to widespread disruption as established players failed to keep up. For one thing, the established players are quite obviously determined to keep up this time and are, if anything, jumping on LLMs faster than is warranted.

For another, it's not obvious to me that AI by itself can be a product. AI solutions need data to be useful, and the only people who have the data needed to solve the problems that exist in sector Y are the established players.

So yes, I think there's a bubble, and it's going to pop. But that doesn't mean we won't see advances in the tools.

[0] https://news.ycombinator.com/item?id=41058616


IMO that’s kind of like saying cloud servers aren’t a product. Tons of companies need servers, but nobody wants to manage that. AI is similar in my mind: many companies will want (insert some use case for LLMs) but won’t want to manage it.


I'm not saying every random company is going to manage its own LLMs. I'm saying that the existing software solutions for any given industry are going to adapt and implement LLMs in their existing solutions.

Github with copilot is a good example of what I'm talking about—new companies trying to be the software development LLM company don't really stand a chance of beating out GitHub. The same dynamic is playing out in every industry right now, and companies that are founded to be the AI solution for a given industry will not unseat the established players.


It could be both. There will be existing companies using AI to enhance UX (like Github), and there will be new companies whose UX you can't even imagine now.


One or two of those, maybe, but those are the Amazons who survived the bubble. They don't prove it wasn't a bubble.


Unlike cloud servers, AI isn't embedded into the cost of doing business (pretty much anywhere). Until it's essential it's not going to earn the kind of ROI that's needed to continue the current investment rate.


Slack AI (to ask questions of your chat history) is very much a thing; you just have to talk to your Salesforce rep and buy a license for your whole org.

We bought it, were all very excited about it, but in reality it's complete garbage. Their LLM is slow, hallucinates stuff, and never seems to find the answers you're looking for. It's like they take the top two search results and pipe them with no context into GPT-3.

It's a shame, because my home-brew email Q&A bot is extremely useful. I could integrate Slack, but unfortunately you can't access DMs through the API, which hampers it a bit.
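For anyone curious, pulling public-channel history into a bot like this is simple enough. A rough sketch with slack_sdk, where the token, channel ID, and indexing step are placeholders (and, per the limitation above, DMs stay out of reach for a typical bot token):

    # Feed public-channel history into the Q&A bot's index.
    from slack_sdk import WebClient

    client = WebClient(token="xoxb-...")  # bot token with channels:history scope

    def fetch_channel_text(channel_id: str, limit: int = 200) -> list[str]:
        resp = client.conversations_history(channel=channel_id, limit=limit)
        return [m.get("text", "") for m in resp["messages"]]

    corpus = fetch_channel_text("C0123456789")  # placeholder channel ID
    # ...hand `corpus` to the same indexing/Q&A pipeline the email bot uses.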


> Their LLM is slow, hallucinates stuff, and never seems to find the answers you're looking for.

In other words, it's an LLM.


I didn’t know that. I expect it to suck. No one knows how to build great RAG-based apps yet. I’m guessing that Slack AI was more of a prototype and that it will rapidly improve.

Small models are really good now. Mistral’s 123-billion-parameter Large 2 model (announced as “Large Enough”) is competitive with GPT-4, which is rumored to be 1.7 trillion parameters.

Context sizes are also rapidly improving, as is inference hardware.

There will be a convergence of model intelligence, context size, inference hardware, and developer experience at some point.

One of these days, Slack AI will be good.


The dot-com bubble was still very much a bubble, even though the value-creation potential of the internet was incomprehensibly immense.


Sure, but the Nasdaq is still more than 4x higher now than at the dotcom peak.

But we’re not at peak AI yet, in my opinion. Companies aren’t IPOing at dizzying valuations with just an idea and a few HTML developers.

The vast majority of the AI boom is centered around companies that were successful before the boom. Apple. Nvidia. Microsoft. TSMC. Google.


Don’t necessarily take IPOs as a signal; the early IPO is not as attractive as it once was, for a wide variety of reasons. Companies with no product taking hundreds of millions in private funding should arguably be seen as the modern equivalent.


If those companies fail, will it matter? The Nasdaq, by contrast, had a huge impact when it crashed.


But retail investors generally don't get hurt by those companies - only VCs do.


Disagree.

There is a question of what even counts as a bubble. John Cochrane did a study showing that the value of Amazon alone justified the tech stock index's valuation even in 1999.

What even is a "bubble"? For me it would be a self-driven cycle of upward valuation in which fundamental value never justifies the market cap. An industry whose price rises just a few years later to meet and then far exceed the previous valuation need not have been a bubble. A bubble is not just a high valuation that decreases in price sometime in the future.


I think one of the core features of a bubble is that the median market participant loses money. I think that definitely occurred in 1999. I don't think it has occurred in 2024.

I'm not convinced it will happen either. Venture capital operates on a business model that is really hard for human intuition to deal with. A 1/1000 success rate requires so few breakout successes that, yes - VCs can fund an entire sector in which any given company will, with extremely high probability, fail, and still turn a profit.
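A toy expected-value calculation makes the intuition concrete (every number here is invented for illustration, not real fund data):

    # Toy portfolio math: 1000 checks, roughly one breakout expected.
    n_bets = 1000
    check_m = 1.0            # $1M per company
    p_hit = 1 / 1000         # breakout probability per bet
    hit_multiple = 5000      # the one mega-hit returns 5000x its check

    invested = n_bets * check_m
    expected = n_bets * p_hit * hit_multiple * check_m
    print(f"invested ${invested:,.0f}M, expected back ${expected:,.0f}M")
    # -> invested $1,000M, expected back $5,000M: a 5x portfolio even
    #    though ~99.9% of the individual companies go to zero.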


There have been plenty of these retroactive reviews of the 1990s tech boom questioning whether it was a bubble, usually by changing the definitions everyone else uses for valuation ratios and bubbles. I've heard it argued that the dotcom bubble wasn't a bubble because the valuations were justified in the moment by future expectations, until they weren't - which just sounds like the definition many others use for a bubble.

As for Cochrane, even if the value of Amazon is argued to have justified the tech sector valuation, the fact is that the tech sector valuation was not all, or even mostly, concentrated in Amazon, which is why it was a bubble.


That is a definition of a bubble, one that I have never heard anyone use ever.

For each Amazon that was undervalued there were 100s of companies with stupid ideas and no revenue.

You're free to redefine words, but then a lot of people will disagree with you.


> I don’t think it’s a bubble.

> I think we are nowhere close to peak bubble.

That said, you're in agreement with the analysts:

“Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful,”

Anyway, the issue (as I see it) is that the value added by these features is marginal and does not make up for the huge amount of investment. Please, as a thought experiment: how exactly would adding LLMs to the chat history make up for Slack's investments in LLMs? Would it enable Slack to gain so much more market share that investors would get a good return? What if models become so cheap and capable that everyone has LLMs for their chat history? What then? Where will the return on investment come from?


There's no moat given that open source models are almost as good already. And it's yet to be proven that LLMs in particular can be reliable enough to actually provide significant return on investment.

Then there's the legal issue. People are not going to accept the idea of "you can't sue me, the AI decided", so an imperfect model is not going to be deployed in any field of consequence: the idea that you can offload liability onto models is not going to fly, and is ethically reprehensible in my view and in the view of most serious people working in regulated industries. It actually introduces new legal issues; a jury is far less likely to side with an AI than with a person who simply made a mistake. There are numerous social reasons why the liability concerns here are legitimate, concerning, and genuinely different for companies than if a human had been making the decisions.

Therefore, the models have to be perfect, or they have to be checked by hand by experts at every step (in which case, there is no return on investment, you haven't really automated anything).

That's the classic conundrum of "I may as well have done it myself given I had to check that code line by line anyway" I see in my own work, and the same is going to apply for non-perfect models in any field of sufficient consequence that there is a hope for profitability (and usually, these fields being consequential enough for potential profitability also implies a potential for liability).

Anybody that works as a programmer in finance/energy/healthcare/government or other parts of the "real economy" could tell you this years ago. The perception that AI could be used to automate significant work was dead pretty shortly after arrival. The only industry that continues on with the charade is the industry with a vested interest in selling and making dubious promises about AI products.

That doesn't mean I don't think LLMs are cool, or that they are totally useless. For endeavours that were limited in their profitability anyway, they may very well have some use cases. I just don't see a viable business model for any large company except Nvidia, AMD, Broadcom, and others making money selling data center equipment. Maybe ads and agitprop, but that seems to be a sort of parasitic relationship with social media and the internet, and one wonders how long it can continue before cannibalizing itself.


There used to be a ton of search engines; there still are. Eventually, one rose to the top: Google. I don’t see why the LLM race won’t repeat that.


Exactly. People who claim LLMs will be commoditized don't understand the underlying forces at work.


The added value has not been marginal in my life. I'm paying for a few subscriptions to AI services. They make me significantly more productive in various ways. It is possible to quantify.


Your bio shows you have skin in the game, so I wouldn't be surprised to see you try propping up the hype.

But we're talking about different things here. I'm referring to the business value of a company investing in AI features and their return on investment. The end-user is a different story, but anecdotally, I also pay for a few subscriptions to AI services and so far I'm not seeing significant productivity gains after the novelty wore off. For me it's at the point of being just a better search that I need to cross-reference to make sure it didn't go off the rails. I will stop paying soon.


> They make me significantly more productive in various ways. It is possible to quantify.

So quantify it: how many dollars richer did you get thanks to AI? Did you get a massive raise? Did you create a new product that made you tons of money? Since you say it is possible to quantify, please tell us!

Edit: Ah, you are selling AI products, that doesn't count. That is saying a shovel is great value since you can sell it.


You're making assumptions. I'm not working with the shovel I use to increase my productivity.

Why would someone selling fridges be motivated to exaggerate the value of stoves?

I'm not selling anything as Pollinations.AI is open source and makes no profit.

Does the fact that I'm working with open-source AI mean I cannot have an opinion on how other AI tools have improved my productivity?


Ideally, a good working model can be used for tech support, meaning revenue per license might increase 10x as the value increases. The problem, I would say, is not that current models are incapable of delivering value as a feature, but that they change, and not necessarily for the better. I liked ClickUp AI when it was in beta. Now it's shit. I have no control over it, and the fact that I could initially almost copy-paste doc-based replies for reuse in support was really promising.


I agree with most of what you say, but I think it might be a bubble. Compare it to the internet in 1999. Everyone could see that it wasn't going away and had immense potential. Money was pouring into ideas that had no paying customers and would not really be workable at scale for another decade or more. Most of it was lost because the vision was too far ahead of the technical and practical reality at the time.

We're in dial-up days of AI but investors are ignoring the reality that it's still way too early to know what it will ultimately look like and what customers will want to pay for. But FOMO rules.


In my opinion, the AI bubble is at the 1996 stage of the dotcom bubble, not 1999.

When it pops, it will still be bigger than it was in 2024.


What's the relative energy cost of operating LLMs compared with dialup? On a finite planet, given some level of biological diversity is important to our survival, I find it increasingly difficult to justify development of these novel luxuries.


I don't know but the Sun, HP, and Compaq servers that powered the internet in 1999 were not power-efficient at all compared to today's machines.


Jevons paradox in action - improvements in efficiency have caused our usage of these resources to soar.

If you think AI uses a lot of energy now, just wait until there’s commercially successful use-cases.

On the other hand it’s also no different from any other production process in our economy. Just newer. Why improve steel production if that’s just going to lead to more steel consumption?

If you didn’t want to maximize paperclips maybe it was/is a mistake to build our economy around paperclip maximizers.


Humans in general don't think in collective terms beyond a handful of people at most. Expecting that to change overnight is a foolish cause.


But is this pattern of small, cheap, and fast models profitable for the companies making them? Open-source models capable enough to solve user needs and small enough to run on-device are great for users, but they mean there's no one company to take the profit. Everybody and every product becomes more valuable and more capable, which is hard to extract differential profit from.


Yes I do. I expect chip makers such as TSMC and Nvidia to massively profit from them because people will need a lot more chips now and in the future.

I expect companies who integrate well with them to increase their value. For example, companies with proprietary data to integrate an LLM with. Another example could be a company that develops very capable agents.

Wall Street cares about stocks. If models reach a point where they're cheap, capable, and fast, why wouldn't the S&P 500 take off because of the huge boost to productivity? And some companies will take off more. That's normal.


It may be a ‘sell shovels’ time, where the hardware providers profit.


Sure, Cisco benefited a lot during the boom - as did ISPs who laid down internet infrastructure. Those were the shovel makers. Eventually, the people who bought the shovels significantly outclassed the shovel makers. Google and Amazon are both 10x larger than Cisco in 2024. Meta is 10x. Microsoft is 15x. Hell, Uber was bigger than Cisco for most of the last decade.

These shovel maker comments get tossed around a lot in any AI "bubble" talk. Yet, the shovel makers did not even come close to being the most profitable from the dotcom boom.


Picking the right shovel-user is much more difficult, though, without a time machine.


Demis Hassabis of DeepMind has already spun off a drug discovery company based on AlphaFold 3, which is proprietary.

Others have developed deep-learning-based weather forecast models which are much cheaper (orders of magnitude less compute), faster, and more accurate than conventional physics-simulation-based models, which cost billions of dollars to run.

There may already exist companies with enough private, unique, and valuable data that models developed on it could bring huge benefits.


The problem is: none of the “nice” features you mentioned are worth the ~$10T at which this bubble is now being valued.


When models are cheap, where is the money? I don’t want some third party having access to my data; I want to self-host (on-prem or in the cloud).


That’s like asking, when electricity is so cheap, where is the money?

The money is in the applications and inventions that build on top of models.


And I have yet to see a useful application that I can use and that is not a glorified JavaScript code completion plugin.


No it’s not. If electricity is cheap, margins are lower. We’ve had expensive electricity lately, and power companies are earning a lot (compared to normal). Also, you missed my main point: if it’s cheap, I can self-host and build my own thing, rather than relying on big/overvalued AI.


You are one of a tiny minority in that regard.


Yeah most of my friends think that AI = chatbots.

IMO the best uses of AI will be smart RAG-ish stuff integrated into existing products.

Like your Slack example: imagine the fact-checking possibilities! Imagine a snarky message from a manager saying “you said this would be finished by Friday”. All you have to do is ask Slack “is this message true?” and it’ll go and find the 15 messages where you made it very clear the feature would not be finished by Friday.

I’m also really excited for more AI docs. Prisma (a TypeScript ORM) has an incredibly useful AI on their docs page. You can ask it anything about Prisma, and it’ll provide a pretty thorough, hallucination-free answer, with links to all of the relevant doc pages and GitHub issues.

Imagine if Google’s most black-box APIs could be explained and navigated for you by an AI. A dream come true.
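A minimal sketch of that “is this message true?” flow: naive keyword retrieval over chat history, then asking a model to check the claim against only the retrieved messages. The message store, scoring function, and model name are all illustrative assumptions:

    from openai import OpenAI

    history = [
        {"ts": "Mon", "user": "me", "text": "Realistically this won't be done by Friday."},
        {"ts": "Tue", "user": "me", "text": "Still blocked on review; Friday is off."},
        {"ts": "Thu", "user": "boss", "text": "You said this would be finished by Friday."},
    ]

    def retrieve(claim, k=5):
        # Toy relevance score: shared-word count. A real system would use
        # embeddings or the platform's own search index.
        words = set(claim.lower().split())
        return sorted(history,
                      key=lambda m: -len(words & set(m["text"].lower().split())))[:k]

    def fact_check(claim):
        context = "\n".join(f"[{m['ts']}] {m['user']}: {m['text']}"
                            for m in retrieve(claim))
        prompt = (f"Using ONLY these messages:\n{context}\n\n"
                  f"Is this claim supported? Cite the messages.\nClaim: {claim}")
        client = OpenAI()  # assumes OPENAI_API_KEY is set
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}])
        return resp.choices[0].message.content

    print(fact_check("You said this would be finished by Friday."))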


> “There was a dot-com bubble, according to Goldman Sachs, because prices went up and prices went down. According to me, internet traffic didn’t go down at all.”

An idea from macro investing [1] that has stuck with me: don't overindex on the last crisis (or watershed moment more generally). All throughout the 2010s, a lot of investors were implicitly assuming that if another crisis hit, it would look like 2008. The 2020 crisis proved to unfold in very different ways from 2008, at least in terms of where you were best off putting your money.

Which brings me back to the Khosla quote from the start of this comment. A lot of people seem to be overindexing on the dot-com boom, assuming that this AI summer will pan out the same way.

I am not making a directional call here. All I'm suggesting is to stay humble about this trite yet profound truth: the future often unfolds in unexpected ways.

[1] Shout out to The Macro Tourist


Khosla: "The rush into AI might cause a financial bubble where investors lose money, but that doesn’t mean the underlying technology won’t continue to grow and become more important. There was a dot-com bubble, according to Goldman Sachs, because prices went up and prices went down. According to me, internet traffic didn’t go down at all.”

That's a good way to look at it. Many of the companies throwing money at AI are going to lose money. For two reasons: 1) it doesn't work well enough yet, and 2) it's getting cheaper, so more people can do it. Most infrastructure stuff ends up as a low-margin business.

Look at AI-guided autonomous vehicles. First demo, 1980s. First major successes, around 2005. First successful commercial use, around 2023. Profitability, ?. Probably half a century from demo to profitability.


One interesting thing about autonomous vehicles is that companies that have spent billions of dollars and decades in R&D (e.g. Tesla) are only slightly ahead of competitors. Now it's seeming like multimodal LLMs will be the way forward with self-driving vehicles, and if I just fine-tune Phi 3 I can do better than state of the art DL models from a couple years ago.

Looking at the bigger picture, then, it seems like most of the efforts that companies are pouring into AI research generally won't put them that far ahead.


Waymo is way ahead of competitors. They have a working autonomous taxi system deployed. The driving assistance systems are way behind that. Consumer Reports says the current best system is Ford's Blue Cruise, which is the only one on the market where you can take your hands off the wheel for long periods.

Does Phi 3 even do vision?


Isn't Tesla the bottom of the pack on self-driving?


World leader on finger slicing and patina though.


AI, blockchain, 5G, IoT, ML, self-driving - those are a few trends I’ve seen in the last 10-15 years or so. I was too young to witness things like the dotcom bubble, but the theme remains the same.

Every time something becomes popular there is always the ‘you got a hammer, so now everything is a nail’ problem. Eventually the trend filters out the nonsense applications and only the really important and impactful applications stick around.

Humans never change. Neither in stock markets nor otherwise. Everyone falls for the same things again and again.


The tech is super interesting and useful, but it’s also becoming very commoditized. Those selling compute win but most of these AI startups don’t have a viable business plan. I wouldn’t call the tech “a bubble” but the hype around AI startups and their valuation? Massive bubble ready to pop hard.


In the beginning, there were many search engines. Then there was Google.

Would you have said in 1999 that search engines were easily commoditized and had no money in them?


It's definitely a bubble. Sam Altman wants to build $7tn worth of data centers; if that's not an indication of how ridiculous this has gotten, I don't know what is.

LLMs have been vastly oversold to the general public as "AI" when the technology is nowhere near that. We haven't invented Turing-complete robots that independently identify problems, learn the solutions, and respond. LLMs as a technology might never be able to do that, by the nature of how they work. We have only created chatbots that reply to prompts, with a higher-than-acceptable inaccuracy rate. And yet this justifies $7tn.

But Silicon Valley figured out that saying "AI, AI" on repeat works for funding, and then other companies started pretending they were the same for the instant stock gain. Rising interest rates and this wave led everyone to pull out of other companies and dump into anything vaguely related to AI. They rode the price up, and now that interest rates are falling (making other companies more attractive), they are rotating out.

This probably didn't become a full-on bubble like crypto did because interest rates were high. It's still a bubble, but it seems to be deflating of its own accord rather than becoming a gigantic, systemic problem.

That said, when rates fall again, we might see a second boom there. Or maybe another fad will strike silicon valley, to continue the trend.

VR - Crypto - Metaverse - LLM?


Part of the problem in AI is the lack of useful benchmarks that show progress is being made in skill areas where AI will truly be useful and transformational. I mean, who cares if an LLM can pass a bar exam or real estate license test?


> Part of the problem in AI is the lack of useful benchmarks that show progress is being made in skill areas where AI will truly be useful and transformational.

There also needs to be discussion of whether the transformations AI can make possible are actually for the common good. They'll certainly be good for a small minority (e.g. certain billionaires), but its hype-men seem to be lazily gesturing at utopian sci-fi and oversimplified economic thinking to justify it [1].

But it's probably hopeless, since SV is a technopoly and has too much influence.

[1] Like assuming there will always be work for all people in the face of automation, that people will be better off if goods get cheaper even as their economic prospects dim, etc.


I would hazard a guess that AI technology is already close to break-even from an economic perspective; nearly everyone I know (especially those not in tech) uses it on a daily basis to assist with their work, especially for dealing with pointless bureaucracy, writing emails, and ideation.

From a cashflow perspective, however, it is nowhere near capturing its economic benefit, and to be fair to the bubble supporters, I agree that I can't see how it can capture this in the short term. In the long term it's pretty clear that there will be some major winners here who will reap big rewards.


Everyone is relying on the costs of computation to come down by an order of magnitude. Whether that will actually happen – let's see.


It has already come down by orders of magnitude. Just check API costs for the same level of model intelligence over the last 1.5 years.



What is concerning is that it seemed like money had dried up and people were being laid off, yet at the same time there seems to be so much cash available for AI. If this is a bubble, it will be interesting to see CEOs come back again, in tune, and say "I'm truly sorry I have to lay 30% of you off, layoffs are never easy, this is on me, we could not have foreseen this... best of luck".


It probably is a bubble, in the same way the Internet created a bubble in the ’90s. A lot of hype based on the promise of the technology, and when it didn’t expand and pay out quickly enough, the bubble “burst”. But over the following few decades, the technology did end up living up to the hype, and then some.

It’ll be the same with AI: the productivity gains probably won’t return quickly enough for the current rounds of investment, but on a long enough timeline the hype will absolutely turn out to be well placed.



They keep building the same thing, but bigger. It keeps exhibiting the exact same issues. It is a bubble because it is very clear that the tech is hitting a dead end.


AI's current trajectory is actually the best one for workers: it increases worker productivity but does not replace them, which will increase demand.


Hum... So far I have yet to see "improved productivity"; at most, some low-skilled workers can find ways to complete some simple tasks quickly in certain domains, but that's essentially all.

IF an LLM could ingest news and summarize headlines every day, some jobs might get real help, but so far next to nothing is there. Some have tried it for stocks and it did not go anywhere; others have tried to replace call center operators, with horrific results; and so on.

Actually, I fail to see much else except producing quick graphics for publishers and porn deepfakes for teenagers. At a certain point in time things might mature enough for something else, but so far...


I am always amazed, when interacting with some people, at how much it "helps" them to ask a (couple of) questions of an LLM rather than (just) using Google. I (and many IT friends) never need that. But there are many "low-skilled workers" (which can include people not comfortable with technology) for whom an LLM will improve productivity.

To the point that I think I am the exception, and that's why LLMs will have a larger impact than I would otherwise estimate.


We have witnessed episodes like https://arstechnica.com/?p=1942936 which pose a big and so far unsolvable problem. Personally, I tried khoj on my org-mode notes just out of curiosity; it was able to produce some meaningful results, but it fails to find much where a mere rg (ripgrep) succeeds without special tricks. Similarly, a classic Google search tends to produce meaningful results quickly and with much less computation than ChatGPT needs.

If that's the current bar, well, LLMs cost way too much for the results they produce. Of course things change, so at some point in the future we might get much better results, and achieving that will require research - data, experiments, and so on. But between "research, released early and often" and the "Artificial Intelligence is here" PR line, there is a big gap.


It does not help me either. The C++ answers are flat-out wrong, and you develop bad habits. If I wanted to plagiarize and had no morals, I could take code from GitHub directly.

And, as you say, Google provides links to better answers in 2 seconds.

I suppose some people like the interaction with the machine. I'm much more exhausted after an LLM session and my brain is literally fried until the next morning. I stopped using LLMs and am much happier.

Googling for answers and reading them strangely enough energizes me.


Problem is, with every advance in worker productivity, workers either work the same or more. They may become more efficient, but the real benefit hardly goes to the worker - it goes to the organization. Whether that’s good, bad, or somewhere in between is another conversation and highly dependent on where you measure success in increased productivity.


Reference? This tells quite a different story: https://ourworldindata.org/working-hours

And talking about benefits: would you claim that life 50 years ago was the same for the average citizen as today? Because if not, it seems to me that people work less (see chart) for more (see what you have).


The labor market was way different 50 years ago (and especially a little further back), because you generally had one breadwinner working e.g. 1930 hours to support an entire family. In 1960 only 7% of households were single-person; now it's 38% [1]. So now you have people working 1760 hours to support, in many cases, only themselves.

This site [2] has a plethora of data about the inflection in basically everything that happened in 1971. 1971 was when the US defaulted on its agreements under Bretton Woods, enabling it to begin printing money at its own discretion, which is essentially when our current economic era began. Vast amounts of wealth have been generated, but they have come with costs.

[1] - https://www.statista.com/statistics/242022/number-of-single-...

[2] - https://wtfhappenedin1971.com/


It increases short-term productivity, yes; long-term, not so much.

What I started to notice, and I think some people on HN noticed before me, is that producing lines of code is faster. Much faster, even, if examples are already in the knowledge base. This means that "rolling your own" has never been easier. So we end up with the same solution replicated many times in the code.

So now programmers are producing code faster than reviewers can review it. Even the authors of this GPT-assisted pile of code may not read it or fully understand it.

I think the critical thing missing is code duplication detection. I know some tools are available, but I have yet to see one that does its job satisfactorily. If that does not get built soon, companies will start to see that AI causes people to produce lots of unmaintainable, duplicated content, and will start to turn away.
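To illustrate what even a crude detector could look like, here's a toy line-shingling sketch (the window size and search path are arbitrary choices, and real tools such as PMD's CPD normalize tokens rather than raw lines):

    # Hash sliding windows of stripped lines; report any window that
    # shows up in more than one file.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    WINDOW = 6  # lines per shingle

    def shingles(path):
        lines = [l.strip() for l in path.read_text(errors="ignore").splitlines()
                 if l.strip()]
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW])
            yield hashlib.sha1(chunk.encode()).hexdigest(), i + 1

    index = defaultdict(list)
    for path in Path("src").rglob("*.py"):
        for digest, lineno in shingles(path):
            index[digest].append((str(path), lineno))

    for digest, sites in index.items():
        if len({p for p, _ in sites}) > 1:  # same shingle in 2+ files
            print("possible duplication:", sites)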


NOT rolling your own has also never been easier. I've had Sonnet recommend great libraries and tools that I probably wouldn't have discovered on my own, and the plus side is I don't need to spend as much time learning those libraries, because I can pass in the documentation and have it interface my code with them.


Can you expand on that? I can’t see how the productivity gains will not replace at least some of them in the long run and shift the workload onto the remaining ones, basically making fewer people handle a larger workload for a bit more pay, or the same pay. And that has more or less been the case with tech so far, though at a slower pace, it seems.

Or are you saying that AI will create a lot of demand for workforce? And how so?


The amount of work to be done isn’t constant. If you make it cheaper to do a unit of work (less labor per unit) then the market will demand more of it.

Look up the unintentional impact of the cotton gin for an example.


There is an infinite amount of work in the future. Some jobs may become redundant because of AI, but I don't see anyone becoming less busy.


We’ll just have more stuff. Product backlogs at most companies have no end, so a worker going from creating 2 widgets a month to 10 means they’re creating more bang for the buck.

It will only hurt industries with limited demand or a fixed amount of work (think bookkeepers, customer support) that are cost centers.


At my job, it’s assisting with new out-of-the-box tools that require configuration. A good example of a productivity increase that doesn't replace work but instead creates new work.



