
The central example of AI ethics is a self-driving car deciding whom to run over. That example could perfectly well occur even with today's technology. I don't know what this continuation of playing-chess-doesn't-take-intelligence griping is supposed to accomplish, except as posturing by people who conflate perpetual contrarianism with insight.



See, that exact example is why I look askance at a lot of the field of 'AI ethics'.

I mean, human drivers' education doesn't cover choosing who to kill in unavoidable crashes. Isn't that because we believe crashes where the driver can't avoid the crash, but can choose who to kill, are so rare as to be negligible?

IMHO much more realistic and pressing AI ethics questions surround e.g. neural networks for setting insurance prices, and whether they can be shown not to discriminate against protected groups.
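
For concreteness, here's a rough sketch of the kind of check I have in mind, in Python with pandas. The column names, the quoted prices, and the 20% threshold are all invented for illustration; a real audit would use whatever fairness criterion the regulator or the insurer actually commits to.

    # Sketch of a disparate-impact check on an insurance pricing model's output.
    # Data and threshold are hypothetical.
    import pandas as pd

    def price_disparity(df: pd.DataFrame, price_col: str, group_col: str) -> pd.Series:
        """Each group's mean quoted price relative to the overall mean."""
        overall = df[price_col].mean()
        return df.groupby(group_col)[price_col].mean() / overall

    quotes = pd.DataFrame({
        "quoted_price": [780, 1210, 760, 1230, 770, 1220],
        "group":        ["A", "B",  "A", "B",  "A", "B"],
    })

    ratios = price_disparity(quotes, "quoted_price", "group")
    print(ratios)
    # Flag groups whose average quote deviates from the overall mean by more than 20%.
    print(ratios[(ratios - 1).abs() > 0.20])

The point isn't the particular metric, it's that this kind of question is checkable at all, unlike the trolley-problem stuff.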


> See, that exact example is why I look askance at a lot of the field of 'AI ethics'.

The main focus of "AI ethics" needs to be on model bias and how to counter it through transparency and governance. More and more decisions, from mortgage applications to job applications, are being automated based on the output of some machine learning model. The person being "scored" has no insight into how they were scored, often has no recourse to appeal the decision, and in many cases isn't even aware that they were scored by a model in the first place. THIS is what AI Ethics needs to focus on, not navel gazing about who self-driving cars should choose to kill or how to implement kill switches for runaway robots.
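
As a toy illustration of what "insight into how they were scored" could even look like, here's a sketch for the easy case of a linear model, where per-feature contributions fall straight out of the coefficients. The features, the training data, and the applicant are all hypothetical; real systems are rarely this simple, which is part of the transparency problem.

    # Sketch: per-feature contributions to an applicant's score under a linear model.
    # Feature names, data, and model are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "years_employed"]
    X = np.array([[55_000, 0.42, 3],
                  [82_000, 0.18, 7],
                  [39_000, 0.55, 1],
                  [61_000, 0.30, 4]], dtype=float)
    y = np.array([0, 1, 0, 1])  # 1 = approved in this toy history

    # Standardize so coefficients are comparable across features.
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    model = LogisticRegression().fit((X - mu) / sigma, y)

    # How much each feature pushed this applicant's score up or down,
    # relative to the average applicant in the training data.
    applicant = np.array([48_000, 0.47, 2], dtype=float)
    contrib = model.coef_[0] * (applicant - mu) / sigma
    for name, c in sorted(zip(features, contrib), key=lambda t: t[1]):
        print(f"{name:>15}: {c:+.3f}")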


I don't know about you, but my driving instructor talked to me about when not to perform emergency braking/evasion manoeuvres when I was learning. And about how to choose between emergency responses.


The reason we don't teach humans is that they are unlikely to have the capacity to make and execute such decisions in a split second. Computers do.


> I mean, human drivers' education doesn't cover choosing who to kill in unavoidable crashes. Isn't that because we believe crashes where the driver can't avoid the crash, but can choose who to kill, are so rare as to be negligible?

I'd look at a few other reasons:

- We don't have "driving ethics" classes at all. Human driving education covers how to drive. "AI ethics" might cover many things, but I don't think "how to drive" is on that list. That topic falls under "AI", not "AI ethics".

- The usual example you hear about is an AI driver choosing whether to kill a pedestrian or the driver. There is no point in having a "driving ethics for humans" class which teaches that it is your moral duty to kill yourself in order to avoid killing a pedestrian. No one would pay attention to that, and people would rightly attack the class itself as being a moral abomination.

This example actually makes me more sympathetic to the traditional view that (e.g. for Catholics) suicide is a mortal sin, or (for legalists) suicide should be illegal. This has perverse consequences, like assuring the grieving family that at least their loved one is burning in hell, or subjecting failed suicides to legal penalties. But as a cultural practice, it immunizes everyone against those who would tell them to kill themselves.


https://twitter.com/JoshTheJuggles/status/105455194210439987...

To anyone who is an expert, this is a profoundly uninteresting question. Literally no modern system is programmed this way, and many people would argue that telling a system who to hit is, itself, unethical.

A more interesting question might be whether our models will hit certain groups of people more often, without anyone having explicitly asked them to.
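
And that question is measurable rather than philosophical. A rough sketch, with invented groups and detection outcomes just to show the shape of the check, would be comparing a perception model's miss rate across groups of pedestrians:

    # Sketch: compare a detector's miss rate across groups.
    # Groups and outcomes are invented for illustration.
    from collections import defaultdict

    detections = [
        # (group, was_detected)
        ("group_1", True), ("group_1", True), ("group_1", False),
        ("group_2", True), ("group_2", False), ("group_2", False),
    ]

    totals, misses = defaultdict(int), defaultdict(int)
    for group, detected in detections:
        totals[group] += 1
        if not detected:
            misses[group] += 1

    for group in totals:
        print(f"{group}: miss rate {misses[group] / totals[group]:.0%}")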



