Chief executives cannot shut up about AI (economist.com)
90 points by mfiguiere on June 1, 2023 | 131 comments



True, but I don't think it's because of any specific characteristic of the technology other than the attention it's getting right now. CEOs want support for investment in their business, and the way to get it is to frame whatever they're doing in terms of the topic du jour. There's also a CYA element: investors and the board would ask why they aren't focused on something everyone is talking about.

Big businesses that sell to the C-level know this too and frame their offerings accordingly. Big tech consulting companies will only ever be selling some variation on dashboards, but they're described as big data, data & analytics, deep learning, generative AI, whatever the trend is.

Nothing wrong with it; being a contrarian is hard, and it's better to accept which way the wind is blowing and adapt accordingly.

My main point is that it's far less about anything specific AI can do than about getting people on board.


Share buybacks were a simple, low-risk way to inflate the share price, and so every major company was spending cash pushing the price higher.

Layoffs were worth a few percentage points on the share price, so everyone had an incentive to take away someone's salary.

"AI" is worth a few percentage points and so will be talked about as long as the payback is there.

None of these are going to meaningfully change these companies or our societies for the better; they only continue asset price inflation, because without it the illusion would begin to crumble.


Some big companies, university computer science departments, etc., pursuing AI? Fine with me!

The crucial core technology in my startup -- the secret sauce, some math I derived -- is NOT "AI" and, for the problem my startup is intended to solve, is more focused than "AI".

Net, (1) for the problem I'm solving, the approaches of current AI won't deliver anything competitive with my math, and (2) as long as these companies, etc. pursue AI, they will be doing that much less work that might compete with me!

History: There is a long history, decades with computers and centuries without, of math yielding terrific results -- Euclid, Pythagoras, Newton, Leibniz, Gauss, ..., Riemann, Hilbert, von Neumann, ..., to the present.

Yes, some of current AI seems to have made good progress on non-linear anomaly/problem detection and diagnosis, doing work that improves on what used to be done with, say, statistical anomaly detection, maybe non-parametric statistics.
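For what it's worth, the classical baseline mentioned above is small enough to sketch. A minimal z-score anomaly detector in Python (the threshold k=2 and the sample data are illustrative, not from any particular system):

```python
# Minimal z-score anomaly detector: flag points more than k standard
# deviations from the mean. A classical statistical baseline of the
# sort contrasted with current AI approaches.
from statistics import mean, stdev

def zscore_anomalies(values, k=2.0):
    m = mean(values)
    s = stdev(values)
    if s == 0:
        return []  # no spread, nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - m) / s > k]

data = [10, 11, 9, 10, 12, 10, 55, 11, 10]
print(zscore_anomalies(data))  # [6] -- the 55 stands out
```

Note the classic weakness: a large outlier inflates the standard deviation itself, which is one reason robust (e.g. median-based) or non-parametric variants exist.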

But apparently now most or nearly all of current AI, e.g., LLMs (large language models), is aimed at different work.

So, again, net: current AI does not seem competitive with my use of math, so I'm pleased -- partly joking and poking fun, but not entirely!


> Nothing wrong with it.

Well it's rather depressing for the rest of us. So there's that.


100% this. I am in the middle of this right now. I advise a bunch of companies and this is 100% top of mind.


My opinion as to how much economic impact LLMs will have is all over the place.

On one hand, they're clearly impressive and useful. They'll get better, but it's not at all clear how much better.

On the other hand, we've had access to GPT-3.5 for six months now, and thus far companies love to talk about their AI strategies and love to roll out "alpha previews", but not much of value seems to have been produced.

You'd imagine that an instantly transformative feature would not sit around in alpha to die. I think that companies are likely discovering that it's not magic, and that there are many problems that it can't solve.

It actually seems like we have passed peak LLM at this point.


I think it's not a question of use cases, as they are there, but that companies offering LLMs still don't have proper solutions for companies to:

- Trust LLM with proprietary/confidential data

- Operate the use cases at scale with proper performance

- Control prompt and post process output to avoid legal/brand risks (e.g. hallucinations)

Everything still feels like Beta, and even though companies love the concept, they won't dedicate serious money to Beta products...


A lot of the value creation is going on behind the scenes, in roles that can be assisted with AI but not entirely replaced.

The problem the C-suite has to solve is likely going to be actually finding out what processes can be or have been automated (if your work culture sucks, employees won't be on board with sharing their secrets just to be given more work) and knowing what jobs they can actually cut.


I would say we’re nowhere near peak LLM and are still in the very early adopter stage. In tech we heard about it six months ago; maybe you had even already read the relevant papers on arxiv.org. That’s not the peak. The peak will be when your grandma uses it to write you a custom birthday card at Walmart.


    and thus far companies love to talk about their AI strategies,
    and love to roll out "alpha previews", but not much of value
    seems to have been produced thus far
I think you're underestimating the value of "looking busy" to a lot of these companies. "Sure we're not profitable, but we're only in ALPHA! We're still early!"


I'm the same. Half the time I feel inspired and excited, the other half I feel cynical and tired.

FWIW from my point-of-view as someone in one of those companies working on products (and who hasn't read the paywalled article) I see a few things playing out that are not new, mostly it's the same newer, bigger hammer syndrome:

1. Trying to solve the same old inconsequential problems but with new tech

This happens all the time. Eventually you realise that the problem is actually relatively benign. You want the hammer to hit home every time but realise that even if it did there is no real value gained.

2. Trying to solve a problem that's already been solved with existing more established tech

Open calls for ideas in company innovation labs or platforms are full of noise. Some of that noise is always around the same automation problems, usually some kind of extraction or categorisation in a workflow. Most of the time products exist but people are just unaware. In large companies the capability might already exist in-house, but there's a "must invent here" bias.

3. Trying to solve hard problems without the right domain or technical expertise

There are legitimate problems that appear innovative and novel, maybe ones that have been waiting for this level of capability to come along so they can execute. However, inadequate knowledge of the technical domain (fine-tuning, prompt engineering, zero-shot, context) or the problem domain (how to read a CAT scan) limits their ability to make a cobbled-together PoC reliable, repeatable, scalable, and trusted.

Think of it as a generalist buying a stock rally car to build a new racing team, but doing it all themselves instead of hiring a mechanic to tune it properly, or a driver to give the mechanic feedback... or the mechanic to tell the driver what they can and can't do at the extremes... or the driver to tell the mechanic to FO and "fix" it. Dialogue.

4. Problem is too niche and hard to communicate effectively

If a project succeeds in an organisation and there's no one around to hear it, did it really succeed?

5. Lack of existing innovation culture, strategy, or clear direction hamstrings any serious attempt

A non-starter. A lot of organisations still can't embed or operationalise their good ideas properly. If they can't do that already, it's unlikely to change here.

6. LLM successfully implemented into an existing product, but nobody notices or cares.

The whole "put a clock in it" from product design or "get it to send email" of software.

7. Hard problems, even with the right attitude and team, still take time to solve effectively with relatively new technology

Ignoring the legitimate institutional roadblocks of assuring privacy, security, safety, ethics, etc., it's still early days. The long-term cost of O&M is still somewhat uncertain, as are some basic parameters like the context window: increasing its size could fundamentally change your approach. Anyone who was building a system before plugins were announced probably needed to re-think a number of things and go back to the drawing board. Sure, some will just continue as planned and iterate later, but some will be cautious before locking in.

Lastly, I know that for me personally, LLMs have become a large part of enhancing my daily workflow. They have increased both the quantity and quality of my output, but the two fundamental problems for me are:

1. Remembering it's there and to use it. (Could what I'm doing be done with LLM assistance?)

or

2. How to formulate a question or request. (This is a fundamental problem of all "work" and "management": how do you define and communicate effectively?)


It's a nice story you've created here, but the thing is, we've all used GPT-3.5/4 now, and we know it can easily produce code that automates most jobs.


Most of what jobs?



LOL sure, those were "because AI". Not "because economy" or "because mismanagement" or "because industry changes".

Do you also think the world disappears when you close your eyes?


The goalposts keep moving!


When you pull the power cord, AI's eyes see nothing.


Hyping AI benefits executives even if the tech doesn't actually work, because it puts pressure on workers to accept lower wages, fewer benefits, worse working conditions, etc.

Assuming they are amoral and purely self-interested (a stretch, I know ;-), I don't see why they wouldn't constantly hype AI.


I think it's mostly just about getting in front of pressure from shareholders to somehow incorporate a new technology into their business model. Agree that the tech doesn't need to work for it to benefit the execs.


I bet it has the exact opposite effect. If your boss starts talking about how he's going to replace all the help desk with AI, the current employees are going to give up any care at all they had left and even start jumping ship.

Despite what you think the labor market is super hot and strong for most segments besides white collar tech (every metric still bears this out, unemployment especially in services and lower wage jobs is historically low). If you're working a shitty help desk job that you know the boss is doing his best to eradicate then you're just going to spend working hours applying to other service jobs and jump ship.


If the boss is really going to replace all the help desk with AI, then the current employees jumping ship is a great thing. It means less severance.


Why are you being downvoted for this comment?


> Hyping AI benefits executives even if the tech doesn't actually work, because it puts pressure on workers to accept lower wages, fewer benefits, worse working conditions, etc.

Probably because these conditions don't correlate, but the OP is presenting it as obvious. This is a back-handed wealth inequality pot-shot.

Few companies are hyping AI to then use it to justify cutbacks. The cutbacks are already happening or will happen, regardless, because of the existing economic conditions. ie The unfettered self-sustaining demand for growth, even in a recession.


because the comment doesn’t make sense


Chief executives have been replaced by ChatGPT and nobody can tell the difference.


Only the decisions got significantly less erratic, and somehow the weekly newsletters are suddenly written in the form of a haiku.


Yeah! and the emails are Shakespearean sonnets.


A year or two ago it was NFTs and blockchain--look where that is now, lol. This time next year there will be some interesting incremental improvements and changes with the LLM hype, but nowhere near what the almost unhinged rhetoric is for it today.


I use Photoshop and write code almost everyday and have for 30 years.

Have you seen what generative fill can do?

Have you seen the code ChatGPT writes?

Can you imagine how many hours it could have saved me? Now multiply that by the number of professionals in the field. And that's just two areas I’m familiar with.

How’s that comparable to NFT, Crypto or Web3?

There are empty hype bubbles and there are seismic shifts. It’s not that hard to tell them apart.


I've seen ChatGPT write terrible code. For simple problem domains where you're coding stuff that's effectively interview questions or basic data handling (like what millions of tutorials on the net describe today), it's fine. But for actually solving novel problems it completely fails. The best way to understand this is that the OpenAI/ChatGPT developers themselves don't use the tech to code itself.

Generative fill is a neat feature for a limited domain. It is not some grand realignment of labor, the truth is there are orders of magnitude more construction workers, dog walkers, baristas, etc. than graphic designers. Generative fill or other related AI tech is useless to most people's work.


Do you pay for ChatGPT 4? If so I am puzzled why you say "But for actually solving novel problems it completely fails" as I find it has no problem solving novel problems, better than I would. Could you give an example of a novel problem it completely fails at?


How much coding do you think is "solving novel problems"?

I'm a senior developer and have integrated ChatGPT and GitHub Copilot into my workflow. No more StackOverflow for me; ChatGPT handles that part. And Copilot auto-writes a lot of my code, amazingly well for a lesser-known language, Haxe.

My son recently came to me with a school assignment for his Arduino. As an extra, he wanted to play a song through a buzzer instead of just a beep like the assignment said. I just asked ChatGPT to write this for the song Ghost Busters. I know C and C++ but have never programmed an Arduino. You know how long it took ChatGPT to write it? About half a minute. You know how much time it would cost me or you?

1. Quickly figure out how an Arduino program works with the loop, outputs, etc. Does it have a C preprocessor? Etc.

2. Find the notes of Ghost Busters.

3. Figure out what kind of output is sent to the buzzer. (Solution: it's the frequency)

4. Translate notes to frequencies

5. Put the notes of Ghost Busters into an array, or something like that. How to handle timings and pauses?

6. Write the code to play it.

Well, I can tell you it saved me tons of time. It was faster than typing this comment.
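Of the steps above, 4 is the only genuinely fiddly one, and it collapses to one formula. A hedged sketch in Python (the note names, melody, and durations here are illustrative, not the actual Ghost Busters transcription):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# anchored at A4 = 440 Hz. The melody below is illustrative, not the
# actual Ghost Busters transcription.
NOTE_OFFSETS = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
                "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

def note_to_freq(name, octave):
    # Semitone distance from A4, then scale 440 Hz accordingly.
    semitones = NOTE_OFFSETS[name] + (octave - 4) * 12
    return 440.0 * 2 ** (semitones / 12)

# A melody becomes (frequency, duration_ms) pairs -- the shape an
# Arduino tone(pin, freq, ms) loop would consume (step 5).
melody = [("B", 4, 250), ("A", 4, 250), ("G", 4, 500)]
pairs = [(round(note_to_freq(n, o)), ms) for n, o, ms in melody]
print(pairs)  # [(494, 250), (440, 250), (392, 500)]
```

Pauses can be encoded the same way, e.g. as a zero-frequency entry the playback loop treats as a rest.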

Just dismissing it with "I've seen ChatGPT write terrible code" is not so smart, in my opinion. But hey, the more programmers think they are too good for ChatGPT and Copilot, the better for developers like me who are already great and just add some more productivity on top of that.

Edit: Just wanted to add something for the projects I'm working on myself: Copilot is terribly good at writing unit tests. And if you think about it, it makes perfect sense: it has all the context it needs for that. It's surprisingly also very good at writing Selenium integration tests, although I expected it not to have enough context for that, since it doesn't really see the application. I guess a lot of functionality is logical or trivial enough for it to take best guesses.


Right, so we automate away mediocre intellectual jobs and keep only the very top and bottom of the pyramid.

And that’s not a grand realignment of labor and comparable to NFTs?


> Right, so we automate away mediocre intellectual jobs and keep only the very top and bottom of the pyramid.

This is not going to happen no matter how much you think it will.


Care to substantiate that?


Substantiate a negative? No.


Let me make it easier for you:

“This is not going to happen no matter how much you think it will.”

Really? Why do you think so?


This is where you're getting it wrong. You're expecting a perfect AI that writes perfect code and can solve any problem you give it. No, AI cannot do that. Instead, if you use it as a tool to quickly get you up to speed with boilerplate code or research, it can at least double your productivity once you learn how to use it effectively. Dude, unless you're a god-like programmer, you will fall behind if you ignore any tool that can improve your productivity, even if it is just a tiny bit per day.


I wrote some (decently repetitive) code today. Copilot autofilled 80% of it. Even if it doesn't impact bottom line, will save my wrists for sure.


The art generation I'd kind of agree with, at least from what I've seen around the internet, although impressive examples usually also involve hours of work. Image generation plays to the strengths of these systems because it doesn't heavily rely on context or require correctness.

Code and natural language generation I've been much less impressed by the longer I've used them. Errors in code are way too common, and unlike art, being 1% off in code is as bad as, maybe worse than, being 100% off. The entire benefit of code is precision. It's like self-driving: 99% accuracy is useless, and likely dangerous.

With natural language the lack of understanding becomes apparent and it hits a weird uncanny valley, generic, repetitive tone that gets tiresome.


To be fair, there are plenty of people who made an incredible sum of money and were convinced crypto was going to change the world. Generative fill is cool, but I’m unconvinced; Figma probably also saved a ton of time for a great number of professionals, yet I’d be wary if every executive was talking about Figma the same way.

I’m not dismissive of AI, but people talk about LLMs as if they are AGI, and I think that is hype


Figma is a nice-to-have thing. Collaborative, more focused and vastly simpler version of Illustrator/Photoshop. Think of it as what Google Docs was to MS Word.

AI is in a different league altogether. I don’t know if we’ll reach AGI in my lifetime, I think we will, but even if we don’t, what we already have and what’s on the horizon is ground breaking.


> Have you seen what generative fill can do?

> Have you seen the code ChatGPT writes?

I'm both a professional artist and a very veteran programmer and both of these statements are laughable.

> Can you imagine how many hours it could have saved me? Now multiply that by the number of professionals in the field. And that's just two areas I’m familiar with.

Negative hours, in my experience.


I can tell you, you’re doing it wrong. I just erased a person, and generative fill recreated accurate tire rims intertwined with foliage and a sidewalk gutter appearing behind them. It is aware of light, geometry, and perspective in remarkable ways. It did in seconds what would have taken a trained human 15-20 minutes at the very least. Not only that, it gave me three options to choose from: two were great, one was good.

Last week, ChatGPT wrote me a WordPress plugin; we debugged it together and added a few features after my first description. It easily saved me an afternoon. Not only that, the experience of conversing with the machine, having it understand what you requested and explain why it did X, is transformative.


> I can tell you, you’re doing it wrong.

Yeah, I'm sure.


I understand the sentiment, and I definitely think there are appropriate comparisons to be made to past hype waves like crypto and even AVs, but the fact that so many normal people use ChatGPT and find it not only interesting but also, critically, useful, tells me that LLMs may follow a different path in the long term.


I think there are broadly two types of people: those who understand LLMs and realise they are actually going to change the world, and those who think it is just hype like crypto/NFTs/the metaverse.

I have friends working in big tech who are completely unaware of how to use GPT -- maybe because their work prohibits it -- and can't appreciate the value.


There's a third person--someone who knows what these tools are and knows their limitations mean they will only be applied successfully to very narrowly scoped problems and domains. I've been in and around the ML space for a couple decades and the hype for LLMs is just plain unhinged and does not match the reality of their applications. There are very few domains that benefit from a better BS generator, which is what these tools are in practice.


I'm in this camp. My impression is it’s a tool that augments me while working, but it’s something that’s incredibly difficult to reliably use in a service: the sandboxing between user data and instructions is a real problem in langchain/react.


> so many normal people use ChatGPT and find it not only interesting but also, critically, useful

Is this actually true, though? I write code for a living and have tried to use it many times, but it wasn't really that useful to me (vs. Google search).

Curious to hear from people who are using it in a 'critically useful' way. I am eager to use it in my workflows.


To your point - I don’t actually know a single ‘normal’ person IRL who uses ChatGPT or any other LLM app in general. The only people I know that have ever even mentioned them are other engineers at my job. Even then it’s more just bragging in passing about getting into the latest beta $whatever or making it generate some outrageous paragraph for 30 seconds of fun.


I have seen several people justifying their use of it to work on that novel they always wanted to write. When challenged on who is actually writing it, they respond with "you need to understand the limitations", and that by itself it "can't write the novel". In short, they mean that ChatGPT only generates short passages the size of a few paragraphs; they are therefore just puzzling and joining these short plots together. I can't imagine what kind of novel could result from this.


We are already seeing stories of AI failure too, like the lawyer who tried to use it to write an argument, and it invented entire cases which don't exist. The judge is not pleased! https://www.bbc.com/news/world-us-canada-65735769


Anecdote, but I needed some code to transform a couple of CSVs into one with some cells merged. I could have written the Python myself in about 45 minutes, including the testing and spot checking.

Instead I described the CSVs that I had and the one I wanted, and asked for Python code to do it.

The code ChatGPT (v3) produced was flawless. I ran it on the data and spot checked it. The entire process took five minutes.
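The commenter doesn't share the generated code, but the described task is small. A sketch using only the standard library, under the assumption that the two CSVs share a key column (all file and column names here are made up for illustration):

```python
# Stdlib sketch: left-join two CSVs on a shared key column and write
# the merged result. File and column names are illustrative.
import csv

def merge_csvs(path_a, path_b, key, out_path):
    # Index the second file by key, then fold its columns into each
    # row of the first file that shares that key.
    with open(path_b, newline="") as f:
        extra = {row[key]: row for row in csv.DictReader(f)}
    with open(path_a, newline="") as f:
        rows = [dict(row, **extra.get(row[key], {}))
                for row in csv.DictReader(f)]
    # Collect the union of columns, preserving first-seen order;
    # rows without a match get empty strings for the extra columns.
    fields = []
    for row in rows:
        for col in row:
            if col not in fields:
                fields.append(col)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
```

With pandas the same thing is roughly a one-liner (`pd.merge(a, b, on=key, how="left")`), which is the kind of answer ChatGPT tends to produce for this prompt.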


I was on vacation at the beach, so this was back in March, and I had just received access to GPT-4. I had a lot of time and was mostly curious to see if I could use C++ and the recently released whisper.cpp, which is pretty solid for transcription. I hadn't written C++ for some time and figured, let's guide GPT-4 to help me write this. It started out giving me the skeleton code. From there it was less about it driving the code and more about it writing the tedious functions, like a request handler for parsing form data. After I got the examples, I could reuse and fit the code into the right places, and at the end I had a working C++ service wrapping whisper.cpp. I should open source the work, but I was blown away that I was able to get this working and still be totally relaxed. Occasionally I'd compare my searches in Google to my answers from GPT-4, and the difference was me spending 15-20 minutes reading through search results vs 5 to 10 minutes. So yeah, I'm a crazy CEO who won't shut up about AI.


See this thread: https://news.ycombinator.com/item?id=36152510

It's going to be barely useful, if at all, when you have high skill in a domain. It's quite useful when you don't.


It’s pretty useful for doing grunt work in domains that you know pretty well.

https://chat.openai.com/share/aff14574-4f3e-496e-a11c-aa8ee2...

I could have done the work by hand, but instead I played some guitar while it updated the labeler software for the ML project I’ve been working on.

In fact, I think ChatGPT and Copilot wrote the majority of the code in this project:

https://github.com/williamcotton/chordviz


I'm a bit confused by the productivity gain you are seeing. For me, it would be faster to just do the React work (assuming I'm in WebStorm with its own fast autocomplete) than go back and forth with ChatGPT, explaining what I want in natural language and copying the results back in.

The only clear win I see is your last example, where you are extracting coords into an array. I agree that text transformation like that is a great use of ChatGPT (that's mostly where I use GPT-3).


Aside from copying and pasting, all I typed was:

Add a text input and button to this app that sets the currentImage cookie to the filename in the input and then sets the current image to that as well

And:

Great, now I'd like to add support for the ii and vi chords. We need to add the keyboard shortcuts for "2" and "6" so when these are pressed it correctly sets the chord name as well as updating the tablature

And:

The G and C are finished. Complete D, E and A

The cognitive load was significantly reduced to the point where I got through practicing Paul Simon’s America twice while waiting for the responses. And that’s not the easiest song to sing and play on acoustic guitar!

So instead of just having the updated labeling software, I got some practice in and retained some will power to label 1,500 images when it was completed!

But thanks for explaining to me that you know better than me about what saves me time and energy…


I have done some iOS and macOS work in the past using Objective-C and C. I have done some DSP work using Max/MSP and Pure Data.

This morning I decided I wanted to write a simple audio visualizer for my iPhone using SwiftUI, a framework I have never used. Within an hour, through the direction of ChatGPT-4, I had an app running on my phone using a third-party library called AudioKit and drawing some fractal-looking thingies, as well as a pretty decent understanding of how it all worked, because I asked for detailed explanations of every line of code.

I’m not sure that I could have had this little app up and working in a single day let alone an hour if I only had a search engine at my disposal.


Googling "audio visualizer SwiftUI" gives me these as the first, fourth, and seventh results:

https://medium.com/swlh/swiftui-create-a-sound-visualizer-ca... https://audiokitpro.com/audiovisualizertutorial/ https://developer.apple.com/documentation/accelerate/visuali...

I do tend to reach for GPT4 before Google for things like this now, but I feel like it should definitely be possible to get this up in only slightly longer with just Google, even if you want some mods.


But I wasn’t interested in someone else’s audio visualizer; I was interested in a specific approach using pitch detection and primitive line drawings, not displaying an FFT as a bar graph. Sure, I could have synthesized a few different search engine responses, but it would have taken a lot more work, and I would have run into issues that slowed me down to the point where I had to get back to my other duties as the father of 2- and 4-year-olds who were home for the week!
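The pitch-detection half of that approach is small enough to sketch. A naive zero-crossing estimator in Python (illustrative only; this is not the AudioKit method the commenter actually used, which would typically be autocorrelation- or spectrum-based):

```python
# Naive pitch estimation by counting rising zero crossings per second.
# Real apps use autocorrelation or spectral methods; this is just the
# minimal illustrative version, and it only works on clean tones.
import math

def estimate_pitch(samples, sample_rate):
    # For a clean tone there is one rising zero crossing per cycle,
    # so crossings / duration approximates the frequency in Hz.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    duration = len(samples) / sample_rate
    return crossings / duration

sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
print(estimate_pitch(tone, sr))  # close to 440 for a pure 440 Hz tone
```

On real microphone input this falls apart quickly (noise and harmonics add spurious crossings), which is exactly why one would lean on a library like AudioKit instead.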


Yeah, it's definitely better than piecing together parts from different tutorials, to be sure.


For sure. The argument that because there was hype about something in the past, any hype now is unwarranted is not a logical conclusion.


Not only is it unwarranted, it is more obnoxious than some YouTuber making predictions about all lawyers being replaced in the next three years.


I've yet to see a serious study with measured improvements to QoL vis-à-vis workload, time required, etc., weighed against the accuracy and fallibility issues that have been comprehensively demonstrated with the current crop of LLMs.


That’s how folks initially thought of cars too -- the accident rates, lack of speed, etc. The point is that, as a technology, this is going to change the future. It’s a valid point that the current state might be less than what you expect; the only way to improve it is to double down on it.

Comparisons to crypto are a bit shallow -- except that they both are ‘new’ technologies and need GPUs, there’s not much in common.



I already get such a massive improvement in productivity from GitHub Copilot that I don’t find it hard to imagine that fine-tuned LLMs will be able to automate the bulk of mundane work in software (maybe even all of it). Programming and other computer-based activities that involve writing are the exact types of things that language models are good at.


You are probably right that in the six months since the release of ChatGPT, we do not have academic surveys of the productivity gains of LLMs in real-world scenarios.

But that is hardly the measure of a new technology.


Luckily, I was just reading such a study a few days ago: https://www.itnews.com.au/news/westpac-sees-46-percent-produ...


> A year or two ago it was NFTs and blockchain--look where that is now, lol.

Yeah, you picked two ideas that were overhyped by a number of people. But NFTs struggled from day one to justify their existence. And blockchain started with a clear use case, but it was expensive (by design), and people really stretched to apply it to non-problems.

In contrast, these new AI technologies very clearly will revolutionize technology. To me this is self-evident, and if someone is unimpressed by what ChatGPT produces and the huge leap we've witnessed in human-computer interaction... well, I'm not sure what it would take to impress them.


> To me this is self-evident, and if someone is unimpressed by what ChatGPT produces and the huge leap we've witnessed in human-computer interaction... well, I'm not sure what it would take to impress them.

I promise you that every blockchain and crypto bro said the exact same thing, hundreds of times on this very website.


Yes, and because something was overhyped once, everything after that must be as well. That's how logic works.


You can say that but I'm responding explicitly to how it's 'self-evident' to you.


What’s going to happen to Twitter? Will Mastodon, Bluesky or another contender take its place? Will they all coexist? Perhaps an open protocol will take over? Or maybe the bird will become phoenix and rise from the ashes. Who knows, who cares (I do, but maybe I shouldn’t as much).

Some things are hard to predict.

If, on the other hand, you’ve been carrying a third of your weight on your back for years and you suddenly see someone with a pushcart for the first time in your life, it’s not hard to predict you’ll want one too.


I'd be willing to take the other side of that bet. I liked NFTs, and invested in NFT companies, but I never thought the hype would continue.

On the other hand, I think the hype for transformer models is justified. Maybe not ChatGPT or any other specific one, but I think LLMs and transformers will be transformational for our industry, like HTTP was, or portable touchscreen devices.

Just not sure how to measure which one of us is right next year. :)


I don't buy it. It's a better chat bot, and it has serious problems with fabricating facts that may never be overcome. We've been through hype cycles of chat bots before (remember 'agents' being all the rage on MSN chat in the mid-2000s?) and they move the needle in very small ways.


> I don't buy it, it's a better chat bot

It feels like this is a category error that is causing you to miss the reason there is so much excitement.

It's like saying “I don’t get it, the Model T is just another car. We have had them before, and most of them are better than what Ford is selling”. The revolution wasn’t that Model T cars were way better than what came before; it was that the way they were built enabled huge new markets.

LLMs seem vastly more powerful than the technology previous chat bots were built on. Plus, there is a whole ecosystem of generative techniques being applied to images, videos, sound, and others.


No they don't seem vastly more powerful. They're stochastic parrots--there is no 'intelligence' or problem solving, logic, comprehension, etc. They generate a stream of BS and the applications for that kind of tool are far more limited than the hype implies.

The exact same arguments you make were said about NFT and blockchain. It would revolutionize finance, it would empower people with decentralized finance, that the art world would be totally revolutionized with digital artifacts. All of it was just vapid hype. Much of the same people making those silly claims are doing the same for AI now...


> No they don't seem vastly more powerful. They're stochastic parrots--there is no 'intelligence' or problem solving, logic, comprehension, etc.

This doesn’t feel like a coherent argument to me. The statement “There is no “intelligence” in LLMs” does not demonstrate that LLMs are not more powerful than previous chat bots. Same for every other loosely defined subjective word you claimed LLMs are not.

Leave aside the question of whether LLMs do comprehension or whether they are a path to AGI. Just ask if they are a more powerful tool than what came before.


Microsoft invested $10 billion in Generative AI (that we know about). As far as we know, they invested nothing in NFTs. Google has been using Generative AI for years, and rewrote their entire Google I/O keynote to talk about it. They've spent billions on it. As far as I know, they spent nothing on blockchain. Amazon has already pre-announced support for Generative AI in AWS. They actually do have a blockchain product, but they certainly don't hype it like they do their AI product. And since they've announced they are building their own foundational model (Titan), you know they are spending at least a billion dollars building it.

So sure, maybe it's all hype, but the big tech companies with the money are certainly putting their money behind AI in a way they never did with blockchain.


> Microsoft invested $10 billion in Generative AI

1/6th of 2021's profits, btw. If they lost all of it they would still be fine.


While that is true, I think you're missing the point. That they are betting way bigger on AI than they ever did on blockchain.


They've been "betting on AI" for the last two decades.


>No they don't seem vastly more powerful.

This is just nonsense; no chatbot before LLMs was powerful enough to help me with coding in any meaningful way. That's the difference that turns a nerdy pastime into an actual and very useful piece of software.


High accuracy is not vital for all applications. I’d offer code completion bots as a perhaps familiar example of LLMs delivering meaningful value today.


“Thou shalt not doubt”


If it's transformational, measurement will be unnecessary.


I used to post things like this, and immediately got tons of follow-ups saying I was wrong, this time it's different, "I'm already getting so much value out of chatgpt", etc. I have a sense that at least we're closer to neutral now and a lot of this shilling has passed (on HN, it's still ramping up hard in the real world). It will be interesting to see the trajectory.

Edit: the comments I was referring to came while I was writing the comment, looks like I spoke too soon.


In my world I see a statement about NFTs as a past-tense phenomenon every day

While also seeing the most advanced things ever in NFTs every day, sold out new issuances every week, EIP-6551, unique differentiations, a wildly entertaining ordinals bubble, and so much more

such a weird gulf, given that the former perception smugly lives rent free while not even being accurate aside from … volume being down from a peak?


How much of what you see in the NFT world is 'clearly valuable to large numbers of regular people' versus 'interesting and novel to people who enjoy tinkering with crypto tech'?

I started paying attention to crypto during 2017's ICO fever phase, and the hype-to-reality ratio of this latest AI wave over the last year feels much stronger than anything blockchain-related in the last five.


it’s the entertainment sector, who cares.

most of the bewilderment is about the size of the existing collectors market - which blockchain activity simply reveals due to its transparency - and everyone is mostly acting surprised that the collectors market exists, at any size, and has frictions whose NFT based solutions are of interest to collectors. By that standard, the answer to your question is “nearly all”? These collectors are not crypto enthusiasts, they don’t know anything about crypto technology, just a 5 step process of getting their wallet open and using one marketplace.


Oh I see, so it's kind of a hobbyist scene. I guess you could look at the past tense mentioning of NFTs as in relation to the general impact that was hyped, which was promised to be far greater and more widespread than "there's some collectors who are into it".

It's also interesting that a centralized marketplace and (I presume user-friendly and less secure) wallets for non-crypto folks is a no-no in other parts of cryptoland.


I don't think hype around either of those was nearly as pervasive as the hype around AI now. I agree the terms all entered earnings calls for similar herd-mentality reasons and most companies won't actually do much with AI, either. But blockchain and especially NFTs were pretty niche.


AI has actually been delivering value and solving problems in new ways, as opposed to blockchain searching for a problem to solve


Bingo. Not long ago it was just machine learning. There is no shortage of successful deployments of AI/ML in the world, and it enables some of the biggest businesses we have. Blockchains deliver what value?


NFTs and blockchain never had the adoption rate that LLMs have (see GPT-4 and how companies and users are using it).


Adoption rate here meaning "people signing up for a website that is free".


Yeah, because people signing up for free websites in record numbers have historically been a strong indicator of business failure. I literally LOLed here.


O my droogies, gather round and lend me an ear, for I shall spew forth a scathing condemnation upon the chellovecks who sit atop their thrones of exploitation, toying with AI like malenky toy soldiers, all in the name of fattening their wallets while laying waste to their toiling brethren.

Behold those razdraz boys, them chief execs, gavoreeting about with their greedy glazzies, dreaming up new ways to screw the gullivers of the working class. They seek to harness the power of AI, that moloko-plus of innovation, to drive the dagger deeper into the hearts of the toilers, all in pursuit of their insatiable thirst for more deng.

But mark my words, my brothers and sisters, this ain't no horrorshow plan, it's a vile trick played upon the innocent. These devotchkas and droogs care not for the welfare of the workers, the bedrock of their so-called success. Nay, they thrive on the sweat and tears of the exploited masses, using their newfound AI toys as weapons to tighten their grip on power.

They chant the hymn of progress, luring us with promises of efficiency and automation. But behind those glossy veneers lies the true horrorshow: the displacement of workers, the erosion of dignity, and the widening abyss of inequality. They care not for the human cost, for it is merely fodder for their insidious game.

So, let us unite, my brothers and sisters, against these vile creeps and their twisted agenda. Let our voices ring loud and clear, viddy their deceitful schemes and expose their true nature. For the exploitation of workers, masked in the cloak of technological advancement, is nothing but a new form of ultraviolence.

Lol.


Lots of analogies in the comments: “remember NFTs”, “remember the model T”, etc. Maybe before using an analogy, state the relationship between the concepts first.

https://en.m.wikipedia.org/wiki/Argument_from_analogy


I've been saying that it's like the "the invention of the cotton loom."


Spot on.


"AI" now is the machine learning of yesterday, and everyone was talking about that for years. It's been solving problems for years. It's just that now we have ChatGPT, which makes it easier for execs to understand. Unfortunately this also brings up unrealistic expectations.


Having just left a large enterprise, it certainly feels like executive jobs are replaceable soonest with the AI tech available to us now.

Not sure why those of us that live in code editors or even Excel should worry about our jobs more than those that live in Powerpoint.


The human moat is mostly physical labor, but also socialization - that’s really most of the executive’s duty, in my experience the slide decks are created by junior staff or consultants and often end up serving as a coffee table book to the discussion.

A high paying job that might be diminished greatly by AI are clinical doctors. In modern medicine, a large part of the job becomes interpreting test results and prescribing accordingly. How long before an Urgent Care diagnosis can be done by an LLM? Surgeons will of course not be replaced and an LLM can’t do even a blood draw, but writing a prescription? That seems within reach (if medical LLMs can be tamed of hallucinations)


Urgent Care diagnosis won’t ever be replaced by an AI because there are too many factors at play (including liability). People who believe tech can replace doctors in open-ended scenarios don’t have any idea of how doctors perform triage and diagnosis.


Have you heard of MedPalm, developed by Google that scored 85% on exams equal to an expert?

Or that AI outperforms doctors in controlled studies and reading imagery? https://hbr.org/2019/10/ai-can-outperform-doctors-so-why-don...

Even being more empathetic than doctors as an LLM? https://today.ucsd.edu/story/study-finds-chatgpt-outperforms...

The first to go AI will be the telemedicine firms like TeleDoc, then the Urgent Care centers will get AI screeners, next the doctors will be left to click a confirmation button, and finally, since AI alone will outperform a doctor, that confirmation will fade away or lose importance.

Doctors get sued all the time, mostly when they mess up; if AI truly does better than these doctors, legal losses and insurance costs will drop off for hospitals.


Ironic. Every time a company measures its employee output with number of lines of code or commits, devs are the first to point out that management is dumb for using a surface-level metric. But you're doing the same thing here, except with execs and powerpoint.

Powerpoints are just the final output you see. The real work execs do is in the decisions that went into the powerpoint.

No sane board will give decision-making power to an AI they can't blame. Besides, there are probably 100 devs for every exec, so it makes no financial sense to automate execs.


I know how that works. And my point was not that they should or will be replaced, but rather that they are no less expendable than developers (not very much).

But the decisions they make are one of the things that can be automated. I do not know if you have been inside one of these places but the executives are not doing a great job deciding (at mine they decided opensearch was a better bet than elastic and switched existing installations).

A new regime came in and then bad decision after bad decision drove our best talent away. Consultants, everywhere.

Also, that number is much lower. Full time devs are down, contractors and consultants are up. As a full time dev at one of these places, it felt like the number of executives was growing as everything else shrank.

Perhaps you are right about the highest levels, but think about all of the middlemen executives and what they do.

And even that -- I think an AI could choose to not spend millions on Deloitte or Accenture on software that inevitably failed.


That would be a lovely outcome. History tells us that capitalist revolutions hardly hit the top of the pyramid, though.


The same chief executives who said we would have self-driving cars in 2020? I was "into" AI (machine learning) back in 2015 when Andrew Ng was giving away the basics for free on coursera. I learned it was pure statistics and had no place in mission critical systems like driving a car. I've been horrified watching the applications of it, because people have literally died as a result of it being misapplied. For something like advertisement targeting, image generation, and other harmless applications, it is fine. Okay, the pretty lady has six fingers, we can ignore that. The idea that it's going to write code or drive cars? Absurd.


Our CIO was talking about how great ChatGPT is and that we're all using it. Meanwhile it's blocked at the firewall so we can't touch it. Even bing.com is blocked now.


Yes that is the problem, in half of the big tech companies GPT is blocked, and if it is blocked at your work, there is less chance that you will actually end up understanding the value it can bring.


Can you fault them? There are few things they could identify with more than a soulless automaton that spouts confident-but-possibly-nonsense words endlessly.


When you have no strategy, it's easy to get fixated on tactics like AI, like blockchain, like NFT, etc, etc, etc.


Many (if not most) CEOs are nothing more than hype beasts with some management/finance degree. Why else would they be on Twitter posting "here are 5 reasons [insert bullshit]" tweets?

I'm talking about most CEOs, not most owner-CEOs of a business.


Of course they won't. For things like decision making it will eventually come for them.


Interesting: both archive.ph and archive.org seem to have captured the paywall. There is briefly a flash of an "<audio>" of https://www.economist.com/media-assets/audio/058%20Business%... but it also seems to be just some kind of reading of the headline


Why wouldn’t capitalists drool over obedient, perpetual, cheap labour


Has the AI hype gotten so out of hand people expect chatgpt to just will products and things into existence? Will it change your car's oil by asking nicely?


Tesla: Change the oil! Done ... that'll be $675 plus tax. Billed to your credit card on file.


That's really overpriced. According to OpenAI's tokenizer[1], a litre of oil is only 5 tokens, so it should only cost $0.0003.

[1]: https://platform.openai.com/tokenizer


It will tell you that it can.


Yep! That accurately summarizes how out of hand it's gotten.


I suggest it's the other way around.

They know, at least as far as publicity is concerned, they are the most likely person to be replaced with AI. They're a constant liability and their human qualities make them the most unsuited to their own jobs.


That's interesting. I've come to view execs and "leadership teams" as having a parasitic function on an otherwise vibrant host. It will be interesting to see if AI takes out the top instead of the bottom. My money is still on the bottom.


Get an answer the execs don't like? Time to retrain!

Could be AI or human resources, it doesn't matter.


But they're at the top of the hierarchy, so I don't think they're actually in danger of being replaced.

I do think it would be incredibly entertaining to see if people could tell a difference; I just don't think anyone is going to force them out any time soon.


Wouldn't the shareholders be the top of the hierarchy? (Granted, in a lot of cases, there's overlap.)

