
I'm tired of LLMs.

Enough billions of dollars have been spent on LLMs that a reasonably good picture of what they can and can't do has emerged. They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. That last limits their usefulness. They can't safely be in charge of anything important.

If someone doesn't soon figure out how to get a confidence metric out of an LLM, we're headed for another "AI winter", although at a much higher level than last time. It will still be a billion-dollar industry, but not a trillion-dollar one.

At some point, the market for LLM-generated blithering should be saturated. Somebody has to read the stuff. Although you can task another system to summarize and rank it. How much of "AI" is generating content to be read by Google's search engine? This may be a bigger energy drain than Bitcoin mining.
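
For what it's worth, the closest thing exposed today is token log-probabilities, which measure the model's fluency on its own output, not factual confidence. A minimal sketch of that crude proxy, assuming the OpenAI Python SDK (model name and prompt are illustrative):

    import math
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "What is the capital of Australia?"}],
        logprobs=True,
    )
    token_logprobs = [t.logprob for t in resp.choices[0].logprobs.content]
    # Mean per-token probability: this measures fluency, not truth, which is
    # exactly why a real confidence metric remains an unsolved problem.
    print(math.exp(sum(token_logprobs) / len(token_logprobs)))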

It’s probably generally irrelevant what they can do today, or what you’ve seen so far.

This is essentially Moore's law, but with a doubling time of about 5.5 months. That's the only thing that matters at this stage.
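
Taken at face value, that claimed doubling time implies (back-of-the-envelope, using only the 5.5-month figure above):

    months_per_doubling = 5.5  # the figure claimed above
    per_year = 2 ** (12 / months_per_doubling)
    print(f"{per_year:.1f}x per year, ~{per_year ** 3:.0f}x over three years")
    # -> 4.5x per year, ~93x over three years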

I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful. Is this supposed to be the revolution? It uses too much power. It won’t scale. The technology is a dead end.

The general pattern of technological improvement has been radically to the upside, at an accelerating pace, for decades, and nothing indicates that this is a break in the pattern. In fact it's setting up to be an order of magnitude greater in impact than the Internet was. At a minimum, I don't expect it to be smaller.

Looking at early telegraphs doesn’t predict the iPhone, etc.

Optimism is warranted here until it isn’t.


>> Looking at early telegraphs doesn’t predict the iPhone, etc.

The problem with this line of argument is that LLMs are not new technology, rather they are the latest evolution of statistical language modelling, a technology that we've had at least since Shannon's time [1]. We are way, way past the telegraph era, and well into the age of large telephony switches handling millions of calls a second.

Does that mean we've reached the end of the curve? Personally, I have no idea, but if you're going to argue we're at the beginning of things, that's just not right.

________________

[1] In "A Mathematical Theory of Communication", where he introduces what we today know as information theory, Shannon gives as an example of an application a process that generates a string of words in natural English according to the probability of the next letter in a word, or the next word in a sentence. See Section 3 "The Series of Approximations to English":

https://people.math.harvard.edu/~ctm/home/text/others/shanno...

Note: Published 1948.
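
For the curious, Shannon's word-level approximations are a few lines of code today. A toy sketch of his second-order word approximation, i.e. each word drawn conditioned on the previous one (the seed text below echoes Shannon's published second-order sample, not his actual corpus):

    import random
    from collections import defaultdict

    corpus = ("the head and in frontal attack on an english writer that the "
              "character of this point is therefore another method").split()

    # Map each word to the words observed to follow it.
    table = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        table[prev].append(nxt)

    word = random.choice(corpus)
    out = [word]
    for _ in range(12):
        word = random.choice(table.get(word) or corpus)  # fall back on dead ends
        out.append(word)
    print(" ".join(out))

Scale the table up from a toy corpus to the internet and the same idea, plus a lot of machinery, is the statistical lineage the parent comment is pointing at.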


Thumbs up for this comment. As for me, I'm tired of people talking about AI who start their argument with incorrect information.

I think we can pretty safely say bitcoin was a dead end other than for buying drugs, enabling ransomware payments, or financial speculation.

Show me an average person who has bought something real with bitcoin (who couldn't have bought it for less complexity/transaction cost using a bank) and I'll change my mind.


I came from the other end: I had years to get on the boat and missed it.

I couldn’t fathom what use Bitcoin could possibly have, but completely overlooked the bad actors that would benefit.


I know nothing on the topic but why would one use bitcoin to buy drugs? Aren't all bitcoin transactions public and immutable?

Yes, but they're also anonymous. You don't have your name attached to the account, and there's no paperwork/bank keeping track of any large/irregular financial transactions.

So, exactly like cash?

I heard this as one of the early sales pitches for Bitcoin. “Digital cash.”

That all seemed to go out the window when companies developed wallets to simplify the process for the average user, and when the prices surged, some started requiring account verification to tie it to a real identity. At that point, it’s just a bank with a currency that isn’t broadly accepted. The idea of digital cash was effectively dead, at least for the masses who aren’t going to take the time to figure out how to use Bitcoin without a 3rd party involved. Cash is simple.


Yeah, exactly, but with the energy requirements of a small country to run the network.

No, not exactly. If you know someone used cash at one place can you track every cash transaction they've ever made? If you know one bitcoin transaction from a wallet you can track everything that key pair has done from genesis to present. So, if anything, it's worse.

Not quite because there are logistical challenges to moving large quantities of physical money.

They're pseudonymous. Not anonymous. Big difference. Monero transactions are anonymous.

Bitcoin failed because of bad monetary policy turning it into something like a ponzi scheme where only early adopters win. The monetary policy isn't as hard to fix as people make it out to be.

Speaking of the iPhone, I just upgraded to the 16 Pro because I want to try out the new Apple Intelligence features.

As soon as I saw integrated voice+text LLM demos, my first thought was that this was precisely the technology needed to make assistants like Siri not total garbage.

Sure, Apple's version 1.0 will have a lot of rough edges, but they'll be smoothed out.

In a few versions it'll be like something out of Star Trek.

"Computer, schedule an appointment with my Doctor. No, not that one, the other one... yeah... for the foot thing. Any time tomorrow. Oh thanks, I forgot about that, make that for 2pm."

Try that with Siri now.

In a few years, this will be how you talk to your phone.

Or... maybe next month. We're about to find out.


The issue with appointments is the provider needs to be integrated into the system. Apple can't do that on their own. It would have to be more like the rollout of CarPlay. A couple of partners at launch, a lot of nothing for several years, and eventually it's in a lot of places, but still not universal.

I could see something like Uber or Uber Eats trying to be early on something like this, since they already standardized the ordering for all the restaurants in their app. Scheduling systems are all over the place.


I meant appointment in the "calendar entry category" sense, where creating an appointment is entirely on-device and doesn't involve a third party.

Granted, any third-party integrations would be a significant step up from my simple scenario of "voice and text comprehension" and local device state manipulation.
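
To make the calendar case concrete: the hard part is the language comprehension; turning the resulting structured intent into a local calendar entry is trivial. A sketch with a hypothetical intent schema (nothing here is Apple's actual API):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CalendarEvent:
        title: str
        start: datetime

    def create_event(intent: dict) -> CalendarEvent:
        # 'intent' is assumed to be the model's parse of the voice
        # transcript; creating the entry itself never leaves the device.
        return CalendarEvent(intent["title"], datetime.fromisoformat(intent["start"]))

    print(create_event({"title": "Doctor (the foot thing)",
                        "start": "2025-01-15T14:00:00"}))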


Many situations, prefer text to voice. Text: Easier record keeping, manipulation, search, editing, ....

With some irony, the Hacker News user interface is essentially all just simple text.

A theme in current computer design seems to be: Assume the user doesn't use a text editor and, instead, needs an 'app' for every computer interaction. Like cars for people who can't drive, and a car app for each use of the car -- find a new BBQ restaurant, need a new car app.

Sorry, Silicon Valley, with text anyone who used a typewriter or pocket calculator can do more and have fewer apps and more flexibility, versatility, generality.


I am generally on your side of this debate, but Bitcoin is a reference that is in favor of the opposite position. Crypto is/was all hype. It's a speculative investment, that's all atm.

Bitcoin is the only really useful crypto that fundamentally has no reason to die, because of basic economics. It is fundamentally the only hard currency we have ever created, and that's why it is revolutionary.

I find it hard to accept the statement that "[bitcoin] is fundamentally the only hard currency we have ever created". Is it saying that gold-backed currencies were not created by us, or that gold isn't hard enough?

Additionally, there's a good reason we moved off deflationary hard currencies and onto inflationary fiat currencies. Bitcoin acts more like a commodity than a medium of exchange. People tend to buy it, hold it, and then eventually cash out. If I am given a bunch of bitcoin, the incentive is for me not to spend it, but rather keep it close and wait for it to appreciate — what good is a currency that people don't spend?

Also I find it weird when I read that due to its mathematically proven finite supply it is basic economics that gives it value. Value in modern economics is defined as what people are willing to give up to obtain that thing. Right now, people are willing to give up a lot for bitcoin, but mainly because other people are also willing to give up a lot for bitcoin, which gives it value.

It's a remarkable piece of engineering that has enabled this (solving the double-spending problem especially), but it doesn't have inherent value in and of itself. There are many finite things in the world that are not valued as highly as bitcoin is. There's a finite number of beanie babies, a finite number of cassette tapes, a finite number of Blockbuster coupons...

Gold is similar — should we all agree tomorrow that gold sucks and should never be regarded as a precious metal, then it won't lose its value completely (there's only a finite amount of it, and some people will still want it, e.g. for making connectors). But its current valuation is far higher than it would be for its scarcity alone — people mainly want gold, because other people want gold.


> I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin.

You’re conveniently forgetting all the things that followed the same trajectory as LLMs and then died out.


>died out

Btc is currently like 60k... What does died out mean to you?


Hello everyone! With immense joy in my heart, I want to take a moment to express my heartfelt gratitude to an incredible lottery spell psychic, Priest Ray. For years, I played the lottery daily, hoping for a rewarding outcome, but despite my efforts and the various tips I tried, success seemed elusive. Then, everything changed when I discovered Priest Ray. After requesting a lottery spell, he cast it for me and provided me with the lucky winning numbers. I can't believe it—I am now a proud lottery winner of $3,000,000! I felt compelled to share my experience with all of you who have been tirelessly trying to win the lottery. There truly is a simpler and more effective way to achieve your dreams. If you've ever been searching for a way to increase your chances of winning, I encourage you to reach out via email: psychicspellshrine@gmail.com

People discussing AI certainly is the perfect place to advertise something about the "incredible lottery spell psychic". Honestly, I can't even tell if it's satire or spam, and I love it.

> ...about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful.

Well, they're not wrong. They're toys that are not that useful.

(Yes, the "Web" included.)


"They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time."

I agree 100% with this sentiment, but, it also is a decent description of individual humans.

This is what processes and control systems/controls are for. These are evolving at a slower pace than the LLMs themselves at the moment so we're looking to the LLM to be its own control. I don't think it will be any better than the average human is at being their own control, but by no means does that mean it's not a solvable problem.


> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

But you can understand individual humans and learn which are trustworthy for what. If I want a specific piece of information, I have people in my life that I know I can consult to get an answer that will most likely be correct, and that person will be able to give me an accurate assessment of their certainty, and they know how to accurately confirm their knowledge, and they'll let me know later if it turns out they were wrong or the information changed.

None of that is true with LLMs. I never know if I can trust the output, unless I’m already an expert on the subject. Which kind of defeats the purpose. Which isn’t to say they’re never helpful, but in my experience they waste my time more often than they save it, and at an environmental/energy cost I don’t personally find acceptable.


It defeats the purpose of LLM as personal expert on arbitrary topics. But the ability to do even a mediocre job with easy unstructured-data tasks at scale is incredibly valuable. Businesses like my employer pay hundreds of professionals to run business process outsourcing sites where thousands of contractors repeatedly answer questions like "does this support contact contain a complaint about X issue?" And there are months-long lead times to develop training about new types of questions, or to hire and allocate headcount for new workloads. We frequently conclude it's not worth it.
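
A sketch of that kind of workload, assuming an OpenAI-style client (the model name, prompt wording, and helper are made up for illustration):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def has_complaint(contact_text: str, issue: str) -> bool:
        # One cheap yes/no classification per contact, at whatever
        # scale the backlog requires.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{
                "role": "user",
                "content": (f"Does this support contact contain a complaint "
                            f"about {issue}? Answer yes or no.\n\n{contact_text}"),
            }],
        )
        return resp.choices[0].message.content.strip().lower().startswith("yes")

Mediocre per-item accuracy, but no months-long lead time to stand up a new question.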

Actually humans are much worse in this regard. The top performer on my team had a divorce and his productivity dropped by like a factor of 3 and quality fell off a cliff.

Another example from just yesterday is I needed to solve a complex recurrence relation. A friend of mine who is good at math (math PhD) helped me for about 30 minutes still without a solution and a couple of false starts. Then he said try ChatGPT and we got the answer in 30s and we spent about 2 minutes verifying it.
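
That division of labor makes sense: checking a proposed closed form is mechanical even when deriving it is hard. A toy example (this recurrence is hypothetical, not the one from the story):

    # Recurrence: a(n) = a(n-1) + 2*a(n-2), with a(0)=0, a(1)=1.
    # Proposed closed form: a(n) = (2^n - (-1)^n) / 3.
    def a(n: int) -> int:
        vals = [0, 1]
        for _ in range(2, n + 1):
            vals.append(vals[-1] + 2 * vals[-2])
        return vals[n]

    assert all(a(n) == (2 ** n - (-1) ** n) // 3 for n in range(50))
    print("closed form matches the first 50 terms")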


I call absolute bullshit on that last one. There's no way ChatGPT solves a maths problem that a maths PhD cannot solve, unless the solution is also googleable in 30s.

> unless the solution is also googleable in 30s.

Is anything googleable in 30s? It feels like finding the right combination of keywords that bypasses the personalization and poor quality content takes more than one attempt these days.


Right, AI is really just what I use to replace google searches I would have used to find highly relevant examples 10 years back. We are coming out of a 5 year search winter.

Duck-duck-goable then :)

>Actually humans are much worse in this regard. The top performer on my team had a divorce and his productivity dropped by like a factor of 3 and quality fell off a cliff.

Wow. Nice of you to see a coworker go through a traumatic life event, and the best you can dredge up is to bitch about lost productivity and a decrease in selfless quality output for someone else's benefit, at a time when they're trying to stitch their life back together.

SMH. Goddamn.

Hope your recurrence relation was low bloody stakes. If you spent only two minutes verifying something coming out of a bullshit machine, I'd hazard you didn't do much in the way of boundary condition verification.


> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

But humans can be held accountable, LLMs cannot.

If I pay a human expert to compile a report on something and they decide to randomly make up facts, that's malpractice and there could be serious consequences for them.

If I pay OpenAI to do the same thing and the model hallucinates nonsense, OpenAI can just shrug it off and say "oh that's just a limitation of current LLMs".


>also is a decent description of individual humans

A friend of mine was moving from software development into managing devs. He told me: "They often don't do things the way or to the quality I'd like, but 10 of them just get so much more done than I could on my own." This was him coming to terms with letting go of some control, and switching to "guiding the results" rather than direct control.

The LLMs are a lot like this.


Your friend got lucky, I've seen (and worked with) people with negative productivity - they make the effort and sometimes they commit code, but it inevitably ends up being broken, and I realize that it would take less of my time for me to write the code myself, rather than spend all the time explaining and then fixing bugs.

The LLMs are a lot like this.


>> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

Why would that be a good thing? The big thing with computers is that they are reliable in ways that humans simply can't ever be. Why is it suddenly a success to make them just as unreliable as humans?


I thought the big thing with computers is that they are much cheaper than humans.

If we are evaluating LLM suitability for tasks typically performed by humans, we should judge them by the same standards we judge humans. That means it's OK to make mistakes sometimes.


You missed quoting the next sentence, about providing a confidence metric.

Humans may be wrong a lot, but at least the vast majority will have the decency to say "I don't know", "I'm not sure", "give me some time to think", or "my best guess is". In contrast, most LLMs today just spew out more hallucinations in full confidence.


I'll keep buying (and paying a premium for) dumber things. Cars are a prime example: I want them dumb as fuck, offline, letting me decide what to do. At least for the next 2 decades, and that's achievable. After that I couldn't care less; I'll probably be a bad driver at that point anyway, so a switch may make sense. I want a dumb, beautiful mechanical wristwatch.

I am not an OCD-riddled, insecure man trying to subconsciously imitate the crowd in any form of fashion. If that makes me an outlier, so be it; a happier one.

I suspect a new branch of artisanal, human-mind-made trademark is just around the corner; maybe niche, but it will find its audience. Beautiful imperfections, clear clunky biases and all that.


What dumb cars are you looking at?

1963 Jaguar E-Type

I want a dumb car with airbags

LLMs have been improving exponentially for a few years. Let's at least wait until the exponential improvements slow down to make a judgement about their potential.

They have been improving a lot, but that improvement is already plateauing and all the fundamental problems have not disappeared. AI needs another architectural breakthrough to keep up the pace of advancement.

>but that improvement is already plateauing

Based on what? The gap between the releases of GPT-3 and 4 is still much bigger than the time that has elapsed since 4 was released, so really: based on what?


There aren't many reliable benchmarks that would measure what the gap really is. I think the corps currently compete in who can leak benchmarks into their training data the most; hence o1 is a world programming medalist, yet makes stupid mistakes.

Yes. Anything on the horizon?

I'm not as up-to-speed on the literature as I used to be (it's gotten a lot harder to keep up), but I certainly haven't heard of any breakthroughs. They tend to be pretty hard to predict and plan for.

I don't think we can continue simply tweaking the transformer architecture to achieve meaningful gains. We will need new architectures, hopefully ones that more closely align with biological intelligence.

In theory, the simplest way to real superhuman AGI would be to start by modeling a real human brain as a physical system at the neural level: a real neural network. What the AI community calls "neural networks" are only very loose approximations of biological neural networks. Real neurons are subject to complex interactions between many different neurotransmitters and neuromodulators, and they grow and shift in ways that look nothing like backpropagation. There already exist decently accurate physical models for single neurons, but accurately modeling even C. elegans (as part of the OpenWorm project) is still a ways off. Modeling a full human brain may not be possible within our lifetime, but I also wouldn't rule that out.

And once we can accurately model a real human brain, we can speed it up and make it bigger and apply evolutionary processes to it much faster than natural evolution. To me, that's still the only plausible path to real AGI, and we're really not even close.
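
To give a flavor of what a "physical model" means here, compared to an ML neuron: even the crudest spiking model carries continuous state through time. A leaky integrate-and-fire sketch (parameters are illustrative; real conductance-based models like Hodgkin-Huxley are far richer):

    # Leaky integrate-and-fire: voltage decays toward rest, integrates
    # input current, and a threshold crossing emits a spike and resets.
    dt, tau = 0.1, 10.0                              # ms
    v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # mV
    v, spikes, i_in = v_rest, [], 2.0                # constant input (arbitrary units)
    for step in range(1000):                         # 100 ms of simulated time
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    print(f"{len(spikes)} spikes in 100 ms")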


I was holding out hope for Q*, which OAI talked about in hushed tones to make it seem revolutionary and maybe even dangerous, but that ended up being o1. o1 is neat, but it's far from a breakthrough. It's just recycling the same engine behind GPT-4 and making it talk to itself before spitting out its response to your prompt. I'm quite sure they've hit a ceiling and are now using smoke-and-mirrors techniques to keep the hype and perceived pace-of-progress up.

OpenAI's Orion (GPT 5/Next) is partially trained on synthetic data generated with a large version of o1. Which means, if that works, the data-scarcity issue is more or less solved.

If they were plateauing, it would mean OpenAI had lost its head start wrt the competition, which I believe is not the case.

OpenAI has the biggest appetite for large models. GPT-4 is generally a bit better than Gemini, for example, but that's not because Google can't compete with it. Gemini is orders of magnitude smaller than GPT-4 because if Google were to run a GPT-4-sized model every time somebody searches on Google, they would literally cease to be a profitable company. That's how expensive inference on these ultra-large models is. OpenAI still doesn't really care about burning through hundreds of billions of dollars, but that cannot last forever.

This, I think, is the crux of it. OpenAI is burning money at a furious rate. Perhaps this is due to a classic tech industry hypergrowth strategy, but the challenge with hypergrowth strategies is that they tend to involve skipping over the step where you figure out if the market will tolerate pricing your product appropriately instead of selling it at a loss.

At least for the use cases I've been directly exposed to, I don't think that is the case. They need to keep being priced about where they are right now. It wouldn't take very much of a rate hike for their end users to largely decide that not using the product makes more financial sense.


They have. Anthropic's Claude Sonnet 3.5 is superior to GPT-4o in every way; it's even better than their new o1 model at most things (coding, writing, etc.).

OpenAI went from GPT-4, which was mind-blowing, to 4o, which was okay, to o1, which is basically built-in chain-of-thought.

No new Whisper models (granted, advanced voice chat is pretty cool). No new Dalle models. And nobody is sure what happened to Sora.


OpenAI had a noticeable head start with GPT-2 in 2019. They capitalized on that head start with ChatGPT in late 2022, and relatively speaking they plateaued from that point onwards. They lost that head start 2.5 months later with the announcement of Google Bard, and since then they've been only slightly ahead of the curve.

It's pretty undeniable that OpenAI's lead has diminished greatly since the GPT-3 days. Back then, they could rely on marketing their coherency and the "true power" of larger models. But today we're starting to see 1B models that are indistinguishable from OpenAI's most advanced chain-of-thought models. From a Turing test perspective, I don't think the average person could distinguish between an OpenAI and a Llama 3.2 response.

In some domains (math and code), progress is still very fast. In others it has slowed or arguably stopped.

We see little progress in "soft" skills like creative writing. EQBench is a benchmark that tests LLM ability to write stories, narratives, and poems. The winning models are mostly tiny Gemma finetunes with single-digit-billion parameter counts. Huge foundation models with hundreds of billions of parameters (Claude 3 Opus, Llama 3.1 405B, GPT4) are nowhere near the top. (Yes, I know Gemma is a pruned Gemini). Fine-tuning > model size, which implies we don't have a path to "superhuman" creative writing (if that even exists). Unlike model size, fine-tuning can't be scaled indefinitely: once you've squeezed all the juice out of a model, what then?

OpenAI's new o1 model exhibits amazing progress in reasoning, math, and coding. Yet its writing is worse than GPT4-o's (as backed by EQBench and OpenAI's own research).

I'd also mention political persuasion (since people seem concerned about LLM-generated propaganda). In June, some researchers tested LLM ability to change the minds of human subjects on issues like privatization and assisted suicide. Tiny models are unpersuasive, as expected. But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops. All large models are about equally persuasive. No runaway scaling laws are evident here.

This picture is uncertain due to instruction tuning. We don't really know what abilities LLMs "truly" possess, because they've been crippled to act as harmless, helpful chatbots. But we now have an open-source GPT-4-sized pretrained model to play with (Llama-3.1 405B base). People are doing interesting things with it, but it's not setting the world on fire.


It feels ironic if the only thing the current wave of AI enables (other than novelty cases) is a cutdown of software/coding jobs. I don't see it replacing math professionals too soon, for a variety of reasons. From an outsider's perspective on the software industry, it is as if its practitioners voted to make themselves redundant; that seems to be the main takeaway of AI for the normal, non-tech people I've chatted with.

Anecdotally, many people, when I tell them what I do for a living, have told me that any other profession would have the common sense/street smarts not to make its scarce skill redundant. It goes further than that: many professions have license requirements, unions, professional bodies, etc. to enforce this scarcity on behalf of their members. After all, a scarce career in most economies is one not just of wealth but of higher social standing.

If all it does is allow us to churn out more high-level software, which let's be honest is demand-inelastic due to the mostly large margins on software products (i.e. they would have paid a person anyway due to ROI), it doesn't seem it will add much to society other than shifting profit in tech from labor to capital/owners. May replace call centre jobs too, I guess, and some low-level writing/marketing jobs. I haven't seen any real new use cases that positively change my life yet, other than the odd picture/AI app, fake social posts, annoying AI assistants in apps, maybe some teaching resources that would have been made or been easy to acquire anyway by other means, etc. I could easily live without these things.

If this is all AI will do, or mostly do, it seems like a bit of a disappointment. Especially for the massive amount of money going into it.


> many professions have license requirements, unions, professional bodies, etc. to enforce this scarcity on behalf of their members. After all, a scarce career in most economies is one not just of wealth but of higher social standing.

Well, that's good for them, but bad for humanity in general.

If we had a choice between a system where doctors get high salary and lot of social status, or a system where everyone can get perfect health by using a cheap device, and someone would choose the former, it would make perfect sense to me to call such person evil. The financial needs of doctors should not outweigh the health needs of humanity.

On a smarter planet we would have a nice system to compensate people for losing their privilege, so that they won't oppose progress. For example, every doctor would get a generous unconditional basic income for the rest of their life, and then they would be all replaced by cheap devices that would give us perfect health. Everyone would benefit, no reason to complain.


That's a moral argument, one with a certain ideology that isn't shared by most people, rightly or wrongly. Especially if AI only replaces certain industries, which looks to be the more likely option. Even if it is, I don't think it is shared by the people investing in AI unless someone else (i.e. taxpayers) will pay for it. Socialise the losses (loss of income), privatise the profits (efficiency gains). Makes me think the AI proponents are a little hypocritical. Taxpayers may not be able to afford that in many countries; that's reality. For software workers, note that it's mostly only the US that has been paid well; many more software workers worldwide don't have the luxury/pay to afford that altruism. I don't think it's wrong for people who had to skill up to want some compensation for that; there are other moral imperatives that require making a living.

On a nicer planet, sure, we would have a system like that. But most of the planet is not like that; the great advantage of the status quo is that even people who are naturally not altruistic somewhat co-operate with each other due to mutual need. Besides, there are ways to mitigate that and still provide the required services, especially if they are commonly required. The doctors example: certain countries have worked it out without resorting to AI risks. Ironically, I'm not against AI in this case either; there is a massive shortage of doctors' services that can absorb the increased abundance, IMV. Most people don't put software in the same category. There are bad sides to humanity with regards to losing our mutual dependence on each other as well (community, valuing the life of others, etc). I think, sadly, AI allows for many more negatives than simply withholding skills for money if not managed right; even that doesn't happen everywhere today, and it's an easier problem to solve. The loss of safe, intelligent jobs for climbing and evening out social mobility via mutual dependence on skills (even the rich can't learn everything and so need to outsource) is one of them.


> If all it does is allow us to churn out more high-level software, which let's be honest is demand-inelastic due to the mostly large margins on software products (i.e. they would have paid a person anyway due to ROI), it doesn't seem it will add much to society other than shifting profit in tech from labor to capital/owners.

If creating software becomes cheaper, then that means I can transform all the ideas I've had into software cheaply. Currently I simply don't have enough hours in the day; a couple of hours per weekend is not enough to roll out a tech startup.

Imagine all the open source projects that don’t have enough people to work on them. With LLM code generation we could have a huge jump in the quality of our software.


With abundance comes diminishing relative value in the product. In the end that skill and product would be seen as worth less by the market. The value of doing those ideas would drop long term to the point where it still isn't worth doing most of them, at least not for profit.

Fair point. I think it would still be a net gain for society, at the very least open source would get a boost.

It may seem this way from an outsiders perspective, but I think the intersection between people who work on the development of state-of-the-art LLMs and people who get replaced is practically zero. Nobody is making themselves redundant, just some people make others redundant (assuming LLMs are even good enough for that, not that I know if they are) for their own gain.

Somewhat true, but again, from an outsider's perspective that just shows your industry is divided and therefore will be conquered. I.e., if AI gets good enough to do software and math, I don't even see AI engineers, for example, as anything special.

Many tech people are making themselves redundant, so far mostly not because LLMs are putting them out of jobs, but because everyone decided to jump on the same bandwagon. When yet another AI YC startup surveys their peers about the most pressing AI-related problem to solve, it screams "we have no idea what to do, just want to ride this hype wave somehow".

>But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops. All large models are about equally persuasive. No runaway scaling laws are evident here.

Isn't that kind of obvious? Even human speakers and writers have problems changing people's minds, let alone reliably.


The ceiling may be low, but there are definitely human writers that are an order of magnitude more effective than the average can-write-coherent-sentences human.

The only people who changed minds reliably were Age of Empires priests. Wololo, wololo!

> Tiny models are unpersuasive, as expected. But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops.

People are persuaded to change their opinions based on social proof, so this isn’t surprising.


I can't think of any exponential improvements that have happened recently.

I don't think you should expect exponential growth towards greater correctness past "good enough" for any given domain of knowledge it is able to mirror. It is reliant on human-generated material, and so rate-limited by the number of humans able to generate the quality increase you need, which decreases in availability as you expect higher quality. I also don't believe greater correctness for any given thing is an open-ended question that allows for experientially exponential improvements.

Though maybe you're just using "exponential" figuratively, to mean rapid and significant development and investment.


Do you know what exponential means? They might be getting better, but it hardly seems exponential at this stage.

Funnily enough, bitcoin mining still uses at least about 3x more power than AI at the moment, while providing less value imo. AI power use is also dwarfed by other industries, even within computing. We should still consider whether it's worth it, but most corporate research and development on LLMs right now seems to be focused on making them more efficient, and therefore both cheaper and less power-intensive to run. There's also stuff like Apple Intelligence that is moving it out to edge devices with much more efficient chips.

I'm still a big critic of AI generally but they're definitely not as bad as crypto which is shocking.


Do you have a nice reference for this? I could really use something like this, this topic comes up a lot in my social circle.


How do you measure the value of bitcoin, if not by its market cap? Do you interview everyone and ask them how much they're willing to pay for a service that allows them to transfer money digitally without institutional oversight/bureaucracy?

The amount of power being used to support a system that can do 5 transactions a second is disgusting.

As opposed to what other system that can do a single transaction in any time frame without individual or organizational interdiction?

As opposed to financial systems that actually work and are used by millions of people.

> They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. (…) They can't safely be in charge of anything important.

Agreed. If everyone understood that and operated under that assumption, it wouldn’t be that much of an issue. Alas, these guessing machines are marketed as all-knowing oracles that can already solve half of humanity’s problems and a significant number of people treat them as being right every time, even in instances where they’re provably wrong.


Totally agree on the confidence metric. The way chatbots spew complete falsities in such a confident tone is really disheartening. I want to use AI more, but I don't feel I can trust it at all. If I can't trust it and have to search for other resources to verify its claims, the value is really diminished.

Is it even possible in principle for an LLM to produce a confidence interval given that in a lot of cases the input is essentially untrusted?

What comes to mind is - I consider myself an intelligent being capable of recognising my limits - but if you put my brain in a vat and taught me a new field of science, I could quite easily make claims about it that were completely incorrect if your teaching was incorrect because I have no actual real world experience to match it up to.


Right, and that's why "years of experience" matters in humans. You will be giving incorrect answers, but as long as you get feedback, you will improve, or at least calibrate your confidence meter.

This is not the case with current models - they are forever stuck at junior level, and they won't improve no matter how much you correct them.

I know humans like that too. I don't ask them questions that I need good answers to.


Just wait until they get saturated with subtle (and not so subtle) advertising. Then, you'll really hate them.

LLMs are to AI what BTC is to blockchain, let me explain.

Blockchain and no-trust decentralization have so much promise, but grifters all go for whatever got built first and can have money squeezed out of it. The same is happening with LLMs, as a lot of current AI work started with text first.

They might still lowkey be necessary evils, because without them there would not have been so much money or attention flowing in this way.


> blockchain and no-trust decentralization has so much promise

I've been hearing this for the past 5 years, yet nothing of practical use based on blockchains has materialized yet.


You don't think an open finance network that's accessible to anyone with an internet connection is useful?

Your Western-ness is showing.

Go ask SA or Africa how useful it is that they aren't restricted by insane dictatorial capital controls anymore.


Indeed. Decentralised currency is at least a technology that can empower the individual at times, rather than, say, governments, big corps, etc., especially in certain countries. Yes, it didn't change as much as was marketed, but I don't see that as a bad thing. It's still a "tool" that people can use, in some cases enabling things they couldn't do or didn't have the freedom to do before.

AI, given its requirements for large computation and money, and its ability to make intelligence easily available to certain groups, IMO has a real potential to do the exact opposite - take away power from individuals, especially if they are middle class or below. In the wrong hands it can definitely destroy openness and freedom.

Even if it is "Open" AI, for most of society the ability to offer labor and intelligence/brain power is the only thing they have to gain wealth and sustenance; making it a commodity tilts the power scales. If it changes even a slice of what it is marketed at, there are real risks for current society. Even if it increases production of certain goods, it won't increase production of the goods the ultra-wealthy tend to hold (physical capital, land, etc), making them proportionally even wealthier. This is especially true if AI doesn't end up working in the physical realm quickly enough. The benefits seem more like novelties that most individuals could do without, whereas for large corps and ultra-wealthy individuals the benefits IMO are much more obvious (e.g. we finally don't need workers). Surveillance, control, persuasion, propaganda, mass uselessness of most of the population, medical advances for the ultra-wealthy, weapons, etc. can now be done at almost infinite scale and in great detail. If it ever gets to the point of obsoleting human intelligence, that would be a very interesting adjustment period for humanity.

The flaw isn't the technology; it's the likely use of it given human nature. Not saying LLMs are there yet, or even that they are the architecture to do this, but agentic behaviour and running corporations (which OpenAI makes its goal on its presentation slides) seem to be a way to rid many of the need for other people in general (to help produce, manage, invent and control). That could be a good or bad thing, depending on how we manage it, but one thing it wouldn't be is simple.


I love how people are like "there's no use case" when there are already products on shelves. I see AI art everywhere, AI writing, customer support - it already happened. You guys are naysaying something that has already happened: people have already replaced jobs with LLMs and already profit due to AI. There are already startups with users where you provide an OPENAI_API_KEY, or customers where you provide theirs.

If you can't see how this tech is useful, idk what to tell you; you have no imagination AND aren't looking around you at the products, marketing, etc. that already exist. These takes remind me of the luddites of ~2012 who were still doubting the Internet in general.


> I see AI art everywhere, AI writing, customer support - already happened.

Is any of it adding value though? I can see that AI has made it easier to do SEO spam and make an excuse for your lack of customer support, just like IVR systems before it. But I don't believe those added any real value (they may have generated profits for their makers, but I think that was a zero- or negative-sum trade). Put it this way: is AI being used to generate anything that people are actually happy to receive?


> But I don't believe those added any real value (they may have generated profits for their makers, but I think that was a zero- or negative-sum trade).

Okay, so some people are making money with it, but no true value was added, eh?


Do new scams create value? No, even though they make money for some people. The same with speculative ventures that don't pan out. You can only say something's added value when it's been positive sum overall, not just allowed some people to take a profit at the expense of others.

There is a difference between “being useful” and living up to galactic-scale hype.

[flagged]


You've continued to break the site guidelines, not just with this account but with others like https://news.ycombinator.com/item?id=41681416, and ignored our requests to stop.

Between that and the personally abusive emails you've been sending, it's clear that you don't want to use HN as intended, so I've banned the accounts.


[flagged]


They assume you'll get the hint, eventually.

The utility of LLMs clearly exists (I'm building a product on this premise, so I'm not uninterested!)

But hype also exists. How closely they are matched is not yet clear.

But your comment seems to indicate that the "pro-tech" position is automatically the best. This is _not_ true, as cryptocurrency has already demonstrated.


Funny thing is you are actually the pro [corporate] tech one, not on the side of freedom. Furthermore nobody said anything about crypto - you are seriously grasping at straws. You have said nothing about the products on shelves (billions of dollars in the industry already) argument, only presented crypto as an argument which has nothing to do with the conversation.

> This is _not_ true, as cryptocurrency has already demonstrated.

His whole argument against AI is basically the anti-tech stance: "Well crypto failed that means AI will too" It's coming from a place of disdain for technology. That's your typical Hacker News commenter. This site is like Fox News in 2008 - some of the dumbest people alive


Not at all! I am broadly speaking very pro-tech.

What I am against is the “if it’s tech it’s good” mindset that seems to have infected far too many. I mention crypto because it’s the largest example of tech that is not good for the world.


You're certainly able to suss out everything about my worldview, knowledge level, and intentions from my one-sentence comment.

The only thing that LLMs are at risk of subverting is the livelihood of millions of people. AI is a capital intensifier, so the rich will get richer as it sees more uptake.

About copyright - yeah, I’m quite concerned for my friends who are writers and artists.

> You'll get left behind with these takes, it's not smart. If you don't care about advancing technology or society then have fun being a luddite, but you're on the wrong side of history.

FWIW I work in applied research on LLMs.



