
The big problem is that these people don't understand what current tech really is. The media and Musk have hyped this up as "AI", but it's not even remotely AI. These people go around presenting slides as if the sky is falling and induce massive FUD in the general public. There is no real AI as far as anyone technical is concerned. So the whole "AI ethics" deal is a great way for non-technical people to get on the AI train and collect massive salaries. There are a few areas where policy is needed, like surveillance and detecting model bias, but they are few, and they require awareness of actual capabilities and an understanding of the tech.



Why should only tech people decide our lives? They don't have any ethics, as we have seen with American companies like Google and FB. They only care about money. These companies definitely needed some people from outside their tech bubble a long time ago.

Tech people can design the tech. But here we are talking about its impact, and those people should not be the ones deciding everything. They have been doing that for over a decade, and the results are disappointing.


> Why should only tech people decide our lives? They don't have any ethics, as we have seen with American companies like Google and FB. They only care about money.

Tech companies are not made up only of tech people. The companies you named employ many engineers but are led, like any other company, by finance, sales and management. This whole story is actually about a conflict between the rank-and-file engineers and management (similar to the earlier conflict about military projects).

You should totally trust the tech people, as they are the ones who are not in it for the money (or at least less likely to be in it just for the money).

Actually, the whole story of Google not being trustworthy any longer might just be the story of engineering being slowly overruled by finance/management there.

I would totally trust Google AI researchers, like I would totally trust Einstein with fusion theory. That employees were still able to overthrow a committee formed by upper management will achieve more than whatever little benefit this committee would have achieved.


The central example of AI ethics is a self-driving car deciding whom to run over. That example is perfectly possible even with today's technology. I don't know what this continued playing-chess-doesn't-take-intelligence griping is supposed to accomplish, except as posturing by people conflating perpetual contrarianism with insight.


See, that exact example is why I look askance at a lot of the field of 'AI ethics'.

I mean, human drivers' education doesn't cover choosing who to kill in unavoidable crashes. Isn't that because we believe crashes where the driver can't avoid the crash, but can choose who to kill, are so rare as to be negligible?

IMHO much more realistic and pressing AI ethics questions surround e.g. neural networks for setting insurance prices, and whether they can be shown not to discriminate against protected groups.
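
To make that concrete, here is a minimal sketch (mine, not from this thread) of the kind of check being described: comparing a pricing model's quotes across a protected attribute. The data, the "group" attribute, and the 0.8 cutoff (the common "four-fifths rule") are illustrative assumptions, not anyone's actual audit procedure.

    # Hypothetical disparate-impact style check on quoted insurance premiums.
    # All data below is synthetic; a real audit would use real model outputs.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Stand-in for a pricing model's outputs, with a protected attribute attached.
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=1_000),
        "quoted_premium": rng.normal(500.0, 50.0, size=1_000),
    })

    # Compare average quoted premiums per group.
    means = df.groupby("group")["quoted_premium"].mean()
    ratio = means.min() / means.max()

    print(means)
    print(f"min/max premium ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("large gap between groups -- worth a closer look")

A real audit would of course also control for legitimate risk factors; this only shows the shape of the question.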


> See, that exact example is why I look askance at a lot of the field of 'AI ethics'.

The main focus of "AI ethics" needs to be on model bias and how to counter it through transparency and governance. More and more decisions, from mortgage applications to job applications, are being automated based on the output of some machine learning model. The person being "scored" has no insight into how they were scored, often has no recourse to appeal the decision, and in many cases isn't even aware that they were scored by a model in the first place. THIS is what AI ethics needs to focus on, not navel-gazing about whom self-driving cars should choose to kill or how to implement kill switches for runaway robots.


I don't know about you, but my driving instructor talked to me about when not to perform emergency braking/evasion manoeuvres when I was learning. And about how to choose between emergency responses.


The reason we don't teach humans this is that they are unlikely to have the capacity to make and execute such decisions in a split second. Computers do.


> I mean, human drivers' education doesn't cover choosing who to kill in unavoidable crashes. Isn't that because we believe crashes where the driver can't avoid the crash, but can choose who to kill, are so rare as to be negligible?

I'd look at a few other reasons:

- We don't have "driving ethics" classes at all. Human driving education covers how to drive. "AI ethics" might cover many things, but I don't think "how to drive" is on that list. That topic falls under "AI", not "AI ethics".

- The usual example you hear about is an AI driver choosing whether to kill a pedestrian or the driver. There is no point in having a "driving ethics for humans" class which teaches that it is your moral duty to kill yourself in order to avoid killing a pedestrian. No one would pay attention to that, and people would rightly attack the class itself as being a moral abomination.

This example actually makes me more sympathetic to the traditional view that (e.g. for Catholics) suicide is a mortal sin, or (for legalists) suicide should be illegal. This has perverse consequences, like assuring the grieving family that at least their loved one is burning in hell, or subjecting failed suicides to legal penalties. But as a cultural practice, it immunizes everyone against those who would tell them to kill themselves.


https://twitter.com/JoshTheJuggles/status/105455194210439987...

To anyone who is an expert, this is a profoundly uninteresting question. Literally no modern system is programmed this way, and many people would argue that telling a system who to hit is, itself, unethical.

A more interesting question might be if our models will hit certain groups of people more often, without anyone having explicitly asked them to.
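
For illustration, a minimal sketch (mine, not from the thread) of how that question could be measured: per-group error rates for a hypothetical detection model. All data is synthetic and the group labels are placeholders.

    # Do a model's misses fall disproportionately on some groups, even though
    # nothing in the code ever mentions those groups? Synthetic illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    y_true = rng.integers(0, 2, size=2_000)                          # ground truth: person present?
    y_pred = np.where(rng.random(2_000) < 0.9, y_true, 1 - y_true)   # imperfect model output
    group = rng.choice(["group_1", "group_2"], size=2_000)           # protected attribute

    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        fnr = np.mean(y_pred[mask] == 0)  # missed detections for this group
        print(f"{g}: false negative rate = {fnr:.2%}")

In a real system any disparity would come from skewed training data rather than a coin flip, but the measurement itself looks the same.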


Well, that's the reason why such a board shouldn't consist of non-tech people only. But if the tech people can't explain to the philosophers etc. what the current and future state of the tech is, so that they can form an educated opinion on it, then it's a pretty bad situation. I don't want to let the tech bubble decide ethical questions. They failed spectacularly at that over the last decade already, and with ML tools becoming more and more popular, the range of potential misuse is growing too.


Where is this "these people" narrative coming from in this thread? While I get the controversy about that particular board member, the others seemed to be accomplished researchers and included people with pretty technical backgrounds.

Archive link to the short bios, since Google seems to have taken the document down: https://web.archive.org/web/20190331195013/https://ai.google...


I mean, we have super fast and accurate image processing now. It doesn't take a genius to realize that it's possible to use that technology to aim and manipulate weapons.




