Ask HN: Who'll take ownership of AI's mistakes?
14 points by wg0 3 months ago | 15 comments
A lot of companies are being formed around AI automation of the enterprise space, and there's a lot of optimism around chained-together AI agents working autonomously without much human intervention.

Genuine question - if traditional software makes a mistake, it's usually deterministic, debuggable, fixable, and blame can be assigned.

What's the deal with these autonomous AI agents? Let's say one is analysing customs paperwork to schedule shipments from overseas, and it fails to let a shipment in because it misclassified the paperwork, or worse, lets it in, but once the goods are on the shores, certain conditions lead to heavy financial penalties?

Who's responsible? The AI prompt automation engineer? Or the underlying platform? Or the company providing the model?

If the answer is that each outcome of such a model should be double-checked by a human going through all that paperwork, then what's the point of having that automation in the first place?

EDIT: typos




> If the answer is that each outcome of such a model should be double-checked by a human going through all that paperwork, then what's the point of having that automation in the first place?

That is the underlying issue here. The liability for an erroneous AI output is the same as for errors committed by any other agent of the company: the company is on the hook, though it may have recourse against any third party who trained or operates the model. For domains such as law, where the penalty for hallucinations is extremely severe and, conversely, the penalty for missing things is likely to be losing the litigation, this means treating AI like any other LPO: a duly qualified attorney takes responsibility for the work, which means a bunch of associates doing varying levels of review.

Is there still a place for AI in all this? Absolutely. But as long as hallucinations are a basic feature of the architecture, you're going to spend as much human time on anything of critical importance as you otherwise would, plus a lot of effort on deterministic business rules to constrain the results of everything else.
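
To make the "deterministic business rules" part concrete, here is a minimal sketch in Python; every name and threshold in it is invented for illustration, not taken from any real system. The model only proposes a customs classification, and plain, auditable code decides whether that proposal can be acted on automatically or has to go to a person:

    from dataclasses import dataclass

    # All codes and limits below are made-up examples, not real customs rules.
    KNOWN_HS_CODES = {"8471.30", "6109.10"}   # classifications the rules engine recognizes
    AUTO_APPROVE_LIMIT = 10_000               # declared value above this always goes to a human

    @dataclass
    class Proposal:
        hs_code: str          # classification proposed by the model
        declared_value: float
        confidence: float     # model-reported confidence, 0..1

    def route(p: Proposal) -> str:
        """Deterministic gate: auto-accept only when every hard rule passes."""
        if p.hs_code not in KNOWN_HS_CODES:
            return "human_review"
        if p.declared_value > AUTO_APPROVE_LIMIT:
            return "human_review"
        if p.confidence < 0.95:
            return "human_review"
        return "auto_accept"

The part that carries the liability stays deterministic and auditable, and the model never gets the last word on anything expensive.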


> Let's say one is analysing customs paperwork to schedule shipments from overseas, and it fails to let a shipment in because it misclassified the paperwork, or worse, lets it in, but once the goods are on the shores, certain conditions lead to heavy financial penalties?

The company that decided to let ML models make critical decisions of that sort. Not whoever built the base model (if it's another party), or the engineers.


Not a new question and the answer is simple: it’s in the contract and/or license text.

E.g. see the all-caps section here: https://opensource.org/license/mit

For commercial products, read the EULA.

If they all claim no responsibility, then it’s on you.


Is that even enforceable?

What's stopping me from selling an appliance that burns down your house?

As long as I claim no responsibility in the fine print.


Electr[on]ic device regulations, probably. There's always some form of contract, depending on the nature of the product/service. Enforceability is on the parties to research.

I just sort of don't get the question here. Who's responsible? Well, there's no law of nature for that. You and your provider decide together, for practical reasons.

Who's stopping you? Probably some guys who got tired of fighting electrical fires in their state/city. Maybe they'll consider regulating AI-related products as well, but if you're concerned, then maybe you should think that through yourself and negotiate the risks before using/selling off-the-shelf AI.


The terms of use for most AI models say that they are unreliable and can make mistakes. Everyone who uses an AI model for, say, financial planning or engineering is supposed to pass on those details. It's literally written right under the text box and button where you put the input.

If you're allergic to peanuts, and there's a clear sign saying the food contains peanuts, then you are responsible if you eat it anyway.

But when these middleman agents try to obscure the source, they leave out the part about unreliability or simply don't read it. In those cases, they're most likely the ones responsible.

In some cases, I see it like informed consent, where the thing could kill you but you were going to die anyway and the odds are better with the dangerous thing.


I will, for a living wage plus a bit. I cannot guarantee that I can help prevent AI mistakes or come up with effective solutions to mitigate them, so my complete effort will go into 'owning' your AI's mistake. The package must include a written evaluation of my owning of the mistake that I can provide to future employers should you decide to publicly terminate me for it. And a good life insurance policy if lynch mobs are a possibility.

As a 60 year old administrator whose greatest strength is with Perl, I believe this is my best option for the future.


Based, but what if the AI starts holding you personally accountable?

Being a scapegoat for a Synthetic God for eternity, not muh cup a tea.


I like to think of these neural networks as search engines for "solution" queries. When I used to do an old-fashioned web search, I would check the results to see if they were useful. I'm really suspicious of modes of using neural networks that don't follow this pattern; it seems to be asking for trouble.


Whoever got paid is responsible. And the chain of responsibility continues until you hit someone you don't want to sue. Simple.


OpenAI gets paid, but isn't responsible for the integrations people build with the output of its APIs.


That's the transitive part. The small companies that are using OpenAI are initially responsible, and it is their job to sue OpenAI. However, they are too small and too afraid, so they don't.


Go ask one of Tesla's lawyers.


FSD cases - I'm not aware of their arguments.


Like the sun in the sky that gives way to rain on my disc golf day, there will be no blame. After all, it was the act not of negligence, nor hubris, but godliness that spoiled the afternoon.

It won't be much different when a bridge falls. Yet it won't fall: with the power of 800 engineering teams and 0 engineering managers, we'll see bridges that stand until the sun itself implodes.

Meanwhile we need not worry about the natural disasters of the world. After all, a game of discus in the sun can be funner than the clearest, windless afternoon; its balance delicate as fresh nougat.

Takes me back. I sip lemonade. All is ripe.



