They are revolutionary for use cases where hallucinated / wrong / unreliable output is easy and cheap to detect & fix, and where there's enough training data. That's why it fits programming so well - if you get bad code, you just throw it away or modify it until it works. That's why it works for generic stock images too - if you get a bad image, you tweak the prompt, generate another one, and see if it's better.

But many jobs are not like that. Imagine an AI nurse giving bad health advice on the phone. Somebody might die. Or an AI salesman making promises that violate company policy? The company is likely to be held legally liable and may lose significant money.

Due to legal reasons, my company couldn't enable full LLM generative capabilities on the chatbot we use, because we would be legally responsible for anything it generates. Instead, the LLM is used only to determine which of the pre-determined answers best fits the query, which it does well where more traditional techniques fail. But that's not revolutionary, just an improvement. I suspect there are many barriers like that, which hinder its usage in many fields, even if it could work most of the time.
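One common way to implement that kind of "pick the closest pre-approved answer" routing is embedding similarity. A minimal sketch, assuming a sentence-transformers model; the model name, threshold, and canned answers are all illustrative assumptions, not what my company actually runs:

    from sentence_transformers import SentenceTransformer, util

    # Pre-approved answers vetted by legal; the model never generates free text.
    CANNED_ANSWERS = [
        "You can return items within 30 days with a receipt.",
        "Our support line is open 9am-5pm on weekdays.",
        "Please reset your password from the account settings page.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
    answer_embeddings = model.encode(CANNED_ANSWERS, convert_to_tensor=True)

    def route(query, threshold=0.4):
        """Return the best-matching pre-approved answer, or escalate."""
        query_embedding = model.encode(query, convert_to_tensor=True)
        scores = util.cos_sim(query_embedding, answer_embeddings)[0]
        best = scores.argmax().item()
        if scores[best] < threshold:
            return "Let me connect you with a human agent."  # fall back, never improvise
        return CANNED_ANSWERS[best]

    print(route("How do I send something back?"))

The property the legal team cares about is that every string a customer can see was written and approved in advance; the model only selects, it never composes.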

So, nearly all use cases I can think of right now will still require a human in the loop, simply because of the unreliability. In that role it can be a productivity booster, but not a replacement.




Human medical errors have been one of the leading causes of death[0] since we started tracking them (at least decades).

The healthcare system has always killed plenty of people because humans are notoriously unreliable, fallible, etc.

It is such a stubborn, critical, and well-known issue in healthcare that I welcome AI being deployed slowly and responsibly to see what happens, because the situation hasn't significantly improved with everything else we've thrown at it.

[0] - https://www.ncbi.nlm.nih.gov/books/NBK225187/


> But many jobs are not like that. Imagine an AI nurse giving bad health advice on the phone. Somebody might die.

This problem is not unique to AI, and you see it with human medical professionals too. People are regularly misdiagnosed or not diagnosed at all. At least with AI you could compare the results of different models almost instantly and get confirmation. An AI doctor also wouldn't miss information on a chart the way a human can.
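As a toy sketch of that cross-checking idea (ask here is a hypothetical wrapper around whichever model APIs you use, not a real client library):

    # Hypothetical: send the same question to several models and only
    # accept an answer when they all agree; otherwise escalate to a human.
    def cross_check(question, ask, models):
        answers = {m: ask(m, question) for m in models}
        unique = set(answers.values())
        if len(unique) == 1:
            return unique.pop()   # all models agree
        return None               # disagreement -> human review

    # cross_check("Is this rash urgent?", ask, ["model-a", "model-b", "model-c"])

Exact string equality is a crude stand-in; in practice you'd compare structured outputs or diagnoses rather than raw text.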

> So, nearly all use cases I can think of right now will still require a human in the loop, simply because of the unreliability. In that role it can be a productivity booster, but not a replacement.

This is exactly what your parent said, yet you replied as if disagreeing. AI tools are here to stay, and they do increase productivity, be it coding, writing papers, or strategizing. Those who continue to think of AI as not useful will be left behind.


To me those use cases are already revolutionary. And a human in the loop doesn't mean it isn't revolutionary. I see it multiplying human productivity rather than serving as an immediate replacement. And it can take some time before it is properly iterated on and integrated everywhere in a seamless manner.


A product doesn’t have to be useful for everything to still be useful.


If you adjust your standard to the level of human performance in most roles, including nursing, you'll find that AI is reasonably similar to most people: it makes errors, sometimes convincing ones, and recovering from those errors is something all social/org systems must do & don't always get right.

A human in the loop can add reliability, but the most common use cases I'm seeing with AI are helping people see the errors they are making, or their lack of sufficient effort in solving the problem.



