> It is a sales pitch, one in which the problems of today are brushed aside or softened as issues of now, which surely, leaders in the field insist, will be solved as the technology gets better. What we see today is merely a shadow of what is coming. We just have to trust them.
I'm a believer that AI is currently over-hyped, but this argument feels applicable to a lot of things that went on to be revolutionary. For the smart-phone too, one of the "issues of now" was slow cellular connectivity, which made the premise of using your phone for web browsing feel like a chore. With cellular speeds "solved", we now live in a world where the smart-phone is the device most people use to access the internet: https://explodingtopics.com/blog/mobile-internet-traffic
> For the smart-phone too, one of the "issues of now" was slow cellular connectivity
Nobody said "you have to buy an iPhone NOW, because in 3 years the connectivity will be good". When Steve Jobs got on that stage to show it off, he didn't ask you to trust his vision. He didn't promise you what it would do tomorrow. The sales pitch was simple, It's an iPod, a Phone, and a web browser, and importantly it did all those things. He sold it in the NOW. all that other stuff that phones became, that came later. The iPhone was first and foremost a good product in the now.
And the first iPhone worked really, really well over Wi-Fi.
It was good enough in the NOW, even though it didn't have an app store!
Edit: I should also add that I was a "late" adopter of the first iPhone. I didn't rush out and buy it the day it came out. I tried it in a store and made sure the web browser was useful. A few months later, after hearing (almost) no complaints, when my own phone had issues, I went out and bought one.
> Nobody said "you have to buy an iPhone NOW, because in 3 years the connectivity will be good". When Steve Jobs got on that stage to show it off, he didn't ask you to trust his vision.
I don't think anyone is telling consumers that they need to use AI now, either, unless it actually benefits them (and judging by the usage of ChatGPT, apparently many users do find it useful).
The people promising the future of AI are usually startups pitching to investors, or companies putting resources into AI, etc.
> I don't think anyone is telling consumers that they need to use AI now
I don't understand how you could say that. The very article we are discussing here is a follow-up to a marketing op-ed about how AI "is uniquely capable" of improving your health, yet the best answer they can provide to serious questions about privacy is "maybe the laws should change".
They are directly asking consumers to vote to change laws in a way that benefits their product, before they even know if it will be an app.
> It's an iPod, a phone, and a web browser, and, importantly, it did all those things. He sold it in the NOW.
I think this is really important because it shows that the vendor knows they're meeting a real demand NOW. They're not selling hype, early access, and woo-woo future-talk in hopes of maybe eventually finding a problem for their solution.
No doubt! I err on the side of "most things are over-hyped" (see: room temp superconductor hype that lasted a week), but I try to remind myself there's a yin and yang thing going on.
Yin: "this tech will change the future"
Yang: "this is over-hyped and the promises are hot air"
It's easy to default to "yang" because it is true most of the time (especially with tech). But you gotta acknowledge that "yin" is actually right sometimes too.
I often like to use an analogy involving a local volcano.
The odds are incredibly strong that it will not explode today, but the granularity/time-periods matter, and there's a fundamental asymmetry in how we value the false-positive rate versus the false-negative rate. :P
___
"Look, I've made daily predictions for 30 years now, and they're all perfectly 100% accurate, go on your hike, it'll be fine."
<Volcano suddenly erupts in background>
"Did I say 100%? OK, fine, after today it's 99.99%, which is still awesome."
Right now, as stated, the "Yang" side as applied to AI is clearly true. Even if the tech will "change the future" it will be no less correct for us to say that current AI products are overhyped/vaporware and that AI salesmen and researchers are passing off sci-fi stories and business strategies as wise prognostication. Even if what they're saying turns out to be true, it's completely correct to say that they're just (sometimes unbeknownst to themselves) wildly guessing.
I don't really intend to bicker, but I'm a little curious about the thinking here...
Maybe it's getting too philosophical, but if you're correct because of "wildly guessing"... you're still correct. Maybe you've only been correct 2% of the time with your predictions, but that doesn't change being right or wrong in any given instance.
If someone says "it will do A" and you say "no it won't, you're passing off sci-fi as prognostication", and then it does end up doing "A", you were wrong, no? If someone's AI tech does end up "changing the future" then how would you not be incorrect if you had previously said it was vaporware?
> If someone's AI tech does end up "changing the future" then how would you not be incorrect if you had previously said it was vaporware?
"This product is vaporware" doesn't mean "This product is impossible and can never come to fruition." Vaporware is "software or hardware that has been advertised but is not yet available to buy, either because it is only a concept or because it is still being written or designed."
It doesn't matter even slightly if Altman and Huffington's app will materialize and change the universe; it's still vaporware. It's just what the word means.
Don't forget the time crystal: It was overhyped in the past, as well. But there are endless details to how things could turn out, few of them expected in advance.
"Technological progress is easy to underestimate because it’s so counterintuitive to see how, for example, the philosophies of a guy who invented Polaroid film would go on to inspire the iPhone. Or how an 18th-century physicist would write a notebook that would set the foundations for a modern electrical system"
Perhaps improvements in networking have been revolutionary. Not sure why "smart phones" are revolutionary. Maybe they are revolutionary in a negative way: destroying privacy, accelerating the decline of mental health in young people, etc.
I would rather have a pocket-sized computer that I can fully control and a pocket-sized cellular phone, not a combination of the two that is a Silicon Valley-orchestrated Trojan Horse into the personal lives of every person who uses one. Sometimes I need to carry a computer. Sometimes I need to carry a cellular phone. Sometimes I might need to carry both. The "smart phone" destroys this basic distinction.
In the time I have been using computers, early 1980s-present, improvements in networking have never felt overhyped. But they have been revolutionary. IMHO.
They literally solved protein folding with AI and won the Lasker prize
Similarly with AlphaGo etc…
AI is the most frustrating discipline to be in (and also the most inspiring, imo) because it's never enough, and anything that works ceases to be AI
AI has always been a technology of faith if you recognize that the lived definition is “any technology that has been conceived but not widely implemented”
Ha! That's the hype right there. They made predictions, like any other model out there. Their models are better, but not close to what we get out of X-ray crystallography, which is a painstaking process.
Protein folding is nowhere near a solved problem. For a third of proteins, the predictions still aren't accurate enough.
I can also make a prediction for every single known protein, it's trivial to do. My own predictions would be uniformly wrong. The question is how accurate AlphaFold's predictions are - of course, this question was almost totally avoided by the news reports and DeepMind press releases. It is accurate enough (and accurate often enough) to be a useful tool but by no means accurate enough to say they "literally solved protein folding."
I was a skeptic after all the hype around Blockchain, Virtual Reality, and other technologies over the last few years. However, I have to say I find AI impressive. Compared to the other things I mentioned, I see cool new products and solutions popping up every week that add real value to my personal and professional life. ChatGPT alone saves me at least 10 hours of work every week, with more savings all the time.
I think this is true, but the real question is where to position yourself on emphasizing the true current value and future potential of AI vs. tamping down on excess hype. My peer group tends to pooh-pooh AI (I think they email me every single Ed Zitron post), so I end up aggressively pro-AI in most social interactions, despite having no specific expectations around AGI or even continued near-linear improvement. Even if we are six months from hitting the asymptote on LLMs, they will be powering incredible innovation and value for the next decade.
Machine learning is really the only way to deal with non-formal, unstructured data; there is nothing to "believe" here. The technology has been battle-tested by billions of people: just think of BERT in Google Search, or all the AI models used for moderation, translation, or detection, among many other cases. But I guess all these use cases are just too boring to talk about today. AI has already helped millions of people, including me: since I'm not a native English speaker, I learned a lot just by using DeepL and YouTube's auto-generated captions (which are much better today than they were in the past).
The future's bright, if we get there. Each new technology, be it search (with its surveillance compute load), crypto, or now AI, places enormous load on the grid, a grid that also serves consumers and something as unglamorous as air conditioning. AI or A/C?
“Oh, but it’s different this time.”
Shiny objects competing with survival. Tell that to non-1337 working stiffs and retirees on a fixed income saddled with enormous PG&E bills.
Please innovate in more useful and less shiny ways. The planet is literally burning up, not just my little piece of it.
AI can't solve everything because, although most things can be described and explained, many things have to be experienced. And even though something sounds nice in text, only once it's experienced can we say whether it's good or not.
For example, an AI can create a recipe that sounds good, but it will never be able to ground those choices in experience, only in other people's combined experiences, which might not add up to something great in the end. And it won't ever make the dish itself and adjust the recipe based on its own experience.
The same goes for a lot of other things that AI is meant to "replace".
I agree with most of your post, except the last bit. AI replacing human workers is not black and white. AI will not replace all human artists, for example, but with AI, artists are needed less often. Whether this is good or bad, I don’t know.
It's true, and it happened, for example, with tailors. Before, a lot of people wore suits and there were a lot of tailors. Now, with industrialization and cheap clothes, tailors are rare but not gone. There are few tailors compared to before, but the skill is now a luxury, and those suits are really expensive compared to mass-produced ones.
It's probably going to be the same cycle: the quality will go down, and the number of workers with it. But the ones remaining might have the opportunity to move up-market and become a luxury. It happened in most domains touched by industrialization: a luxury tier developed (shoes, bags, clothes, food, etc.).
Whether AI workers will level up the quality or dilute it at the low end, we don't know yet.
Edit: As regards art, I stumbled across this ad:
https://youtu.be/GuieFpIauN0 which is impressive given the final "cost" to produce it. I wonder how much it would have cost a human to do it from scratch. The quality is not amazing, but the message is there. Art is affected as well: why would I learn how to do this if no one will pay humans to create it?
Yesterday I needed to make a simple GUI app for a specific purpose. I haven't done it in a while so I figured it would take some time, but just to try, I drew a picture of the interface I had in mind and told 4o to generate a PyQt5 app for me, and it ... did it. Still took a bit of back and forth for some details but damn if it didn't get me from A to B way faster than if I'd been sitting there figuring out VBoxLayout etc again from scratch after several years. I was impressed.
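To give a flavor, here's a minimal sketch of the kind of QVBoxLayout scaffold it produced (illustrative only; the widget names are mine, not the generated app's):

    # Minimal PyQt5 sketch: a vertical stack of input, button, and output label.
    import sys
    from PyQt5.QtWidgets import (
        QApplication, QWidget, QVBoxLayout, QLineEdit, QPushButton, QLabel
    )

    class SketchApp(QWidget):
        def __init__(self):
            super().__init__()
            self.setWindowTitle("Sketch")
            layout = QVBoxLayout(self)           # stack widgets top to bottom
            self.field = QLineEdit()
            self.result = QLabel("Result goes here")
            button = QPushButton("Run")
            button.clicked.connect(self.on_run)  # wire the button to a handler
            for w in (self.field, button, self.result):
                layout.addWidget(w)

        def on_run(self):
            self.result.setText(f"You entered: {self.field.text()}")

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        window = SketchApp()
        window.show()
        sys.exit(app.exec_())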
>I find a lot of those who are skeptical haven't used it very much.
Any tips on how I can use it more? Right now, my AI experience has been limited to asking ChatGPT questions, which it makes up incorrect answers to. And making some funky rap songs. And trying to avoid seemingly AI created videos on Youtube. Is there a setup where I can describe a motion that I want, and it will synthesize a mechanical linkage that will accomplish the task?
AI makes things up, so it's not appropriate for deterministic solutions. I'm sure something is coming for your mechanical linkage problem.
As for using it more, think of it as a co-worker and not a chat bot. There was a switch that had to happen in my brain before I started realizing I could just ask AI for the answer.
Starting with: why did you prefer to ask me for this answer rather than an AI? This is what AI would have said:
> Great question! It sounds like you've had some interesting experiences with AI so far. There are definitely more practical and impactful ways to utilize AI beyond chatbots and entertainment.
> Problem-Specific AI Tools: There are specialized AI tools designed for various fields. For instance, in mechanical engineering, there are AI-driven CAD tools that can help design and optimize mechanical linkages. You might want to explore software like Autodesk Fusion 360, which incorporates AI for generative design. These tools can take your specifications and generate a variety of design options.
> Learning Resources: To get more comfortable with AI, you might find it useful to check out online courses on platforms like Coursera or edX. They offer courses on machine learning, AI, and specific applications in different fields.
> Experiment with AI APIs: Platforms like OpenAI, Google Cloud AI, and IBM Watson offer APIs that you can integrate into your projects. For instance, you could use them to analyze data, generate content, or even control hardware.
> Join AI Communities: Engaging with communities like AI Stack Exchange, Reddit's r/MachineLearning, or even specialized LinkedIn groups can provide you with tips, resources, and examples of how others are using AI in innovative ways.
> Hands-on Projects: Try building a small project that leverages AI. For example, you could create a simple AI-powered application using Python and libraries like TensorFlow or PyTorch. This could be anything from a predictive model to a small automated system.
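And if you wanted to try the "AI APIs" route it suggests, here's a minimal sketch of what that looks like (assuming the official openai Python package and an OPENAI_API_KEY in your environment; the model name and the linkage prompt are just placeholders):

    # Hedged sketch: assumes the openai Python client (v1+) is installed and
    # OPENAI_API_KEY is set; model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{
            "role": "user",
            "content": "Suggest a four-bar linkage that converts rotary motion "
                       "into approximate straight-line motion.",
        }],
    )
    print(response.choices[0].message.content)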
What kinds of questions are you asking that it is consistently getting wrong? Do you write any code? That’s something AI is very consistently helpful for, even without being 100% perfect.
And also the metaverse (remember when one of the world's largest companies changed its name because of how revolutionary the metaverse was going to be). The ratio of hits to misses with tech hype is not good recently. I don't blame anyone for being skeptical.
Techno-utopianism requires a new & different Awesome New Technology every few years - once people start to realize that the previous ANT ain't gonna take us to the Promised Land after all. Just like the previous ANT didn't, and the one before that didn't, and the one before...
On the bright side - at least they aren't trying to sell us on the idea of nuclear-powered cars, or commuting via rocket-belt anymore.
> the bulk of the conversation about AI’s greatest capabilities is premised on a vision of a theoretical future
Yeah. 99% of the positive opinions I read about AI are "in X years it will do Y, Z, and that will change everything". As if the future is easily predictable.
Where are the flying cars? Where is the moon base? Why isn't everyone moving around on Segways? You never know what will happen. Some techs do grow exponentially, but not all, and it's impossible to know for sure which one will, ahead of time.
Alas. I doubt anyone will change their opinion. "This one is different." Ah well.
I think this movement around 'debunking AI' is entirely the fault of marketing and company CEOs inflating the potential around generative AI to such ridiculous amounts. Of course after CEOs toured the country shouting about the dangers and virtues of AI as if it will destroy the world or remake it into something better, and that it will do it any minute now, everyone is 'annoyed' now that neither thing has happened already. How long ago were people screaming and crying that AI development must be slowed down by world governments or else?
I say 'annoyed' with quotes, because as far as I can tell it's largely journalists and the media machine that seem to be taking a gleeful lap telling everyone how AI hasn't measured up after promoting the idea that AI will steal all their jobs and ruin the internet.
I think that's the fun part of some of these perspectives, that inherent conflict: journalists want to convince us that AI is very dangerous technology, that it's stealing all their work, and that it's going to put everyone out of a job... but also that AI is not living up to expectations, that it's a nothingburger, and that all these companies are a joke selling lies to people. It's really hard to square these conflicting storylines being served to us by the press (who are obviously biased against the technology that they think will destroy their livelihoods).
I hate to sound like one of those "you can't trust journalism!" cranks, because I feel that way about nothing else, but in this case the vitriol coming from journalists about AI-related technologies sometimes seems a bit much.
> I think this movement around 'debunking AI' is entirely the fault of marketing and company CEOs inflating the potential around generative AI to such ridiculous amounts.
I don't think you should let the AI research community off the hook so easily. Some of the most obnoxious and influential voices are researchers who are taken as credible authorities. I'm thinking for instance of the disgraceful "Sparks of AGI" paper from Sebastien Bubeck's research team at Microsoft Research, also certain "godfathers", etc.
It can be both a big deal fundamentally and a major pillar of a weird, scary religion. It is currently both.
Of course machine learning is a big deal, it was a big deal a decade ago and more. We know it’s a big deal.
This Calvinism meets growth hacking, this Scientology for Atherton thing is bizarre and kind of terrifying.
It is in no way surprising that a philosophy that justifies short-run amoral wealth maximization with tired arguments about long-run utilitarianism would turn out this way.
What’s surprising is that people who talk about imminent digital gods are put in positions of vast power instead of therapy.
"There is no reason for any individual to have a computer in his home."
- Ken Olsen
I was there. I remember people saying it, but this f*cking site wants to downvote me into negative for remembering it, probably because they are thinking of business computers. Everyone was on board with that. I'm talking specifically of home computers.
I wouldn't worry, I think what you're saying is pretty well known. We've all heard it before from every tech evangelist out there. Everyone knows that sometimes tech naysayers turn out to be wrong; the point is that it's pretty useless information.
That the AI naysayers will be similarly remembered was the point of my "useless information" post. I was surprised to be downvoted, probably because those doing it have an axe to grind and poor reading comprehension skills. Surprising for Hacker News, but I've noticed every site is getting dumber at an alarming rate, so why should Hacker News be the exception? If you're going to downvote me for disagreeing, at least say why. It's in the HN guidelines.
I can vouch for this. There was a lot of rhetoric about how a normal family wouldn't really need a computer in their home. This point was usually raised to support an argument that the personal computer market would be quite small.