Bear in mind that what we have here is an example of the cutting edge of AI in real-world use. It doesn't exactly inspire faith in AI if some of the best of it is this thick when it comes to making decisions that would be incredibly basic for even a human of below-average intelligence.



Frankly the whole thing stinks. On the whim of an unaccountable AI basically anything can happen. We need laws around this sooner rather than later. AIs are getting more powerful but less responsible.

Just don't have an argument about a book, or you could end up on death row. Gordon Dickson wrote a short story about this: https://en.wikipedia.org/wiki/Computers_Don%27t_Argue


Why do we need laws around this? We need to hold accountable the people who put this into production and/or produced it. People should be able to build bad AI, but if that causes poor real-world outcomes, then the people who okay'd its use need to take responsibility.

"An AI erroneously banned the channel," shouldn't get any more leeway than "A support staff member banned the channel."


> Why do we need laws around this? We need to hold the people accountable [...] [T]he people that okay'd its use need to take responsibility.

Genuinely asking: what do you think profitable companies would do to their ML engineers over a product issue like this if there was no law under which they would be sued?

I think accountability without an enforcement mechanism is just a wishful suggestion.


>what do you think profitable companies would do to their ML engineers over a product issue like this if there was no law under which they would be sued?

Nothing. Maybe a reprimand at best, because the consequences of this are fairly minor. If a store erroneously bans you and then unbans you and says that there was a mix-up with their policy, then what do you think should be the consequences for the store? If it's effectively a monopoly then I can see there being problems, but if it's just a random store then that doesn't really matter.


> "An AI erroneously banned the channel," shouldn't get any more leeway than "A support staff member banned the channel.

What would curtail the leeway then? Other than losing public trust in the product for this happening repeatedly anyway?


Because the inner biases that humans have, which might lead to similar exclusions, are implicitly hidden and somehow excused, while the machines are held to a higher standard, for some reason.


Apple uses 100% human labor for their app reviews. They get about as much accountability as Google does.

Personally, I think the AI is a total red herring.


YouTube is between a rock and a hard place, as are all user-generated content sites.

Either they automate removing content and end up removing some of the good along with the mountains of bad, or they do it manually, which costs a fortune and is too slow.

The local news is still ridiculing Facebook because they left the livestream of the NZ shooting up for hours, and their stated reason was "No one watching the video had reported it." Well, what do we want Facebook to do? Hire thousands of people to watch every single livestream live?

IMO we should all move to some kind of self-hosting where there is no central megacorp that takes responsibility for the content. You host your own content and you stand behind it legally. As long as it's legal content, you have no risk of being shut down.


> Either they automate removing content and end up removing some of the good along with the mountains of bad, or they do it manually, which costs a fortune and is too slow.

I think this is a false dichotomy; they could automate flagging for human review instead. This would address the cost though not the speed.

Content awaiting review could be disabled, or it could be disabled only if it is deemed sufficiently egregious.
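
A minimal sketch of that routing idea, in Python; the thresholds, score, and names here are hypothetical illustrations, not anything YouTube or Facebook actually uses:

    from dataclasses import dataclass

    # Hypothetical cutoffs: above REVIEW_THRESHOLD the item goes to a human
    # queue; above DISABLE_THRESHOLD it is also hidden while it waits.
    REVIEW_THRESHOLD = 0.6
    DISABLE_THRESHOLD = 0.95

    @dataclass
    class Triage:
        queue_for_review: bool
        disabled_pending_review: bool

    def triage(abuse_score: float) -> Triage:
        """Route content based on an automated abuse score in [0, 1]."""
        if abuse_score >= DISABLE_THRESHOLD:
            return Triage(queue_for_review=True, disabled_pending_review=True)
        if abuse_score >= REVIEW_THRESHOLD:
            return Triage(queue_for_review=True, disabled_pending_review=False)
        return Triage(queue_for_review=False, disabled_pending_review=False)

    # e.g. triage(0.7) -> flagged for a human, but the content stays up meanwhile.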


>they could automate flagging for human review instead.

I'm not sure this would save them in many cases. Most of the abusive content on Facebook is not easily identifiable. I don't think the current AI systems are up to the task. They may be able to identify that a human and a dog are in the video, but not tell the difference between petting the dog and punching it.

Similarly, they likely cannot tell the difference between someone at a shooting range and someone committing a mass shooting.

Just like it is too much to expect police to stop all crime before it happens, I think it is too much to expect platforms to remove violating content before it is reported.


Are you sure a human, never mind one of below average intelligence, gets this? We have people right now in Silicon Valley calling for renaming CS words because they sound racist to them.


Of course a human might screw up, but OP's general point is right. The AI doesn't even know what 'racism' is. It doesn't know what 'chess' is. It can't do context and it doesn't understand meaning; it just correlates audio signals.

That this often looks like intelligence doesn't change the fact that it's incredibly far away from the real thing.



