
Interestingly, everyone who hasn't done anything technical but wants to ride the AI train chooses to become an "AI Ethics" person these days. You can look up a vast number of these AI ethics "experts" giving talks on the subject without ever having trained a model for anything. So apparently the bar for this "field" is zero experience in anything technical, plus a great ability to hold a microphone and induce FUD in public.



I would trust an AI ethicist with a PhD in philosophy or ethics if their ideas were coherent. Tech backgrounds are not necessarily required.


>I would trust an AI ethicist with a PhD in philosophy or ethics if their ideas were coherent. Tech backgrounds are not necessarily required.

I agree; it doesn't require technical AI skills, but I would expect a person in this position to have SOMETHING relevant in their background that they could point to as a qualifier.

Honestly, I read the article and it left me feeling quite sympathetic to this lady. It seemed like another clear-cut case of political correctness run amok. Then I looked up her bio online and found she has a Bachelor's degree in Education from the Hampton Institute, and her entire career has consisted of a series of political appointments, the highlights of which include Director of the Office of Personnel Management (despite not having any HR-related background) and Virginia Secretary of Health and Human Services (despite not having any public health background). She has neither training nor experience in either AI or Ethics. Politics aside, she was not even remotely qualified to be on an AI Ethics Board for Google.


I'd actually push this thought one step further: most tech degrees require between zero and one course of ethics over a full college program. The people we trust to make the technical decisions are under-trained in making those decisions with an eye towards ethical questions, and we're in a time where ethics should be ahead of the technology.

Many successful teams in history included members who were not technically skilled but had an implementable vision. The gap in knowledge is bridgeable, and the alternative is more "Facebook is involuntarily committing users to an experiment in what makes people sad" ethical errors.


The big problem is that these people don't understand what current tech really is. The media and Musk have hyped this up as "AI", but it's not even remotely AI. These people go around presenting slides as if the sky is falling and induce massive FUD in the general public. There is no real AI as far as anyone technical is concerned, so the whole "AI Ethics" thing is a great way for non-technical people to get on the AI train and command massive salaries. There are a few areas where policy is needed, like surveillance and detecting model bias - but those areas are few, and they require awareness of actual capabilities and an understanding of the tech.


Why should only tech people decide our lives? They don't have any ethics, as we have seen in American companies like Google and Facebook. They only care about money. These companies definitely needed some people from outside their tech bubble a long time ago.

Tech people can design the tech. But here we are talking about the impact, and those people should not be the ones deciding everything. They have been doing that for over a decade, and the results are disappointing.


> Why should only tech people decide our lives? They don't have any ethics, as we have seen in American companies like Google and Facebook. They only care about money.

Tech companies are not made up only of tech people. The companies you named employ many engineers but are led, like any other company, by finance, sales, and management. This whole story is actually about a conflict between the rank-and-file engineers and management (similar to the earlier conflict about military projects).

You should totally trust the tech people, as they are the ones who are not in it for the money (or at least less likely to be in it just for the money).

Actually, the whole story of Google not being trustworthy any longer might just be the story of engineering being slowly overruled by finance/management there.

I would totally trust Google AI researchers, like I would totally trust Einstein with fusion theory. That employees were still able to overthrow a committee formed by upper management will achieve more than whatever little benefit this committee would have achieved.


The central example of AI ethics is a self-driving car deciding whom to run over. That example is entirely possible even with today's technology. I don't know what this continuation of playing-chess-doesn't-take-intelligence griping is supposed to accomplish, except as posturing by people conflating perpetual contrarianism with insight.


See, that exact example is why I look askance at a lot of the field of 'AI ethics'.

I mean, human drivers' education doesn't cover choosing who to kill in unavoidable crashes. Isn't that because we believe crashes where the driver can't avoid the crash, but can choose who to kill, are so rare as to be negligible?

IMHO much more realistic and pressing AI ethics questions surround e.g. neural networks for setting insurance prices, and whether they can be shown not to discriminate against protected groups.


> See, that exact example is why I look askance at a lot of the field of 'AI ethics'.

The main focus of "AI ethics" needs to be on model bias and how to counter it through transparency and governance. More and more decisions, from mortgage applications to job applications, are being automated based on the output of some machine learning model. The person being "scored" has no insight into how they were scored, often has no recourse to appeal the decision, and in many cases isn't even aware that they were scored by a model in the first place. THIS is what AI Ethics needs to focus on, not navel-gazing about whom self-driving cars should choose to kill or how to implement kill switches for runaway robots.
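The kind of check this implies is easy to sketch. Here is a minimal, hypothetical example in Python (the decision and group lists are made-up toy data, and "approval" stands in for any automated yes/no decision): it compares the model's positive-decision rate across protected groups, the basic demographic-parity test.

    # Toy data: hypothetical model decisions (1 = approved) and the
    # protected-group label of each scored applicant.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

    def approval_rate_by_group(decisions, groups):
        # Demographic-parity check: positive-decision rate per group.
        totals = {}
        for d, g in zip(decisions, groups):
            approved, seen = totals.get(g, (0, 0))
            totals[g] = (approved + d, seen + 1)
        return {g: approved / seen for g, (approved, seen) in totals.items()}

    print(approval_rate_by_group(decisions, groups))
    # {'a': 0.75, 'b': 0.25} -- a gap this size is where an audit would start

A real audit would go further (error rates per group, calibration, proxies for protected attributes), but even this much transparency is more than most scored applicants get today.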


I don't know about you, but my driving instructor talked to me about when not to perform emergency braking/evasion manoeuvres when I was learning. And about how to choose between emergency responses.


The reason we don't teach humans this is that they are unlikely to have the capacity to make and execute such decisions in a split second. Computers do.


> I mean, human drivers' education doesn't cover choosing who to kill in unavoidable crashes. Isn't that because we believe crashes where the driver can't avoid the crash, but can choose who to kill, are so rare as to be negligible?

I'd look at a few other reasons:

- We don't have "driving ethics" classes at all. Human driving education covers how to drive. "AI ethics" might cover many things, but I don't think "how to drive" is on that list. That topic falls under "AI", not "AI ethics".

- The usual example you hear about is an AI driver choosing whether to kill a pedestrian or the driver. There is no point in having a "driving ethics for humans" class which teaches that it is your moral duty to kill yourself in order to avoid killing a pedestrian. No one would pay attention to that, and people would rightly attack the class itself as being a moral abomination.

This example actually makes me more sympathetic to the traditional view that (e.g. for Catholics) suicide is a mortal sin, or (for legalists) suicide should be illegal. This has perverse consequences, like assuring the grieving family that at least their loved one is burning in hell, or subjecting failed suicides to legal penalties. But as a cultural practice, it immunizes everyone against those who would tell them to kill themselves.


https://twitter.com/JoshTheJuggles/status/105455194210439987...

To anyone who is an expert, this is a profoundly uninteresting question. Literally no modern system is programmed this way, and many people would argue that telling a system who to hit is, itself, unethical.

A more interesting question might be if our models will hit certain groups of people more often, without anyone having explicitly asked them to.


Well, that's the reason why such a board shouldn't consist only of non-tech people. But if the tech people can't explain to the philosophers etc. what the current and future state of the tech is, so that they can form an educated opinion on it, then it's a pretty bad situation. I don't want the tech bubble to decide ethical questions on its own. They failed spectacularly at it in the last decade already, and with ML tools becoming more and more popular, the range of misuse is growing too.


Where is this "these people" narrative coming from in this thread? While I get the controversy about that particular board member, the others seemed to be accomplished researchers and include people with pretty technical backgrounds.

Archive link to the short bios, since Google seems to have taken the document down: https://web.archive.org/web/20190331195013/https://ai.google...


I mean, we have super fast and accurate image processing now. It doesn't take a genius to realize that it's possible to use that technology to aim and manipulate weapons.


Do you need to be a nuclear physicist to have opinions on ethics of nuclear weapons?


Anyone can have an opinion. When a person's title at a company is "Nuclear Weapon Ethics" and they go around giving talks, I would 100% expect them to have some sort of practical experience and knowledge in that domain.

Just because you know a thing or two about ethics doesn't mean you are equipped to discuss a particular domain.

How can you possibly begin to discuss AI Ethics if you have no idea how it works or what's -realistically- possible?


> How can you possibly begin to discuss AI Ethics if you have no idea how it works or what's -realistically- possible?

What matters for ethics is the effects things have on people. You generally don't have to know how something works in order to understand what effects it has--someone who does understand how it works can figure that out and tell you.

For example, if you had to decide on the ethics of using a nuclear weapon versus conventional weapons to destroy some legitimate military target, you wouldn't need to know anything about the physics of nuclear weapons.

All you'd need to know about the nuke is how powerful it is, the long-term illnesses it can cause, how it can affect people far away from the blast, how it can make the area unsafe for humans for a long time, and so on. To decide the ethical issues, you are concerned with what happens, not how it happens.

If we ever get to general AI, and are dealing with ethical questions like whether it is murder to restore a robot to factory settings, whether it is slavery to own a robot, or whether a robot can own property then we will probably need ethicists who are also AI experts.


And then you’re dependent on experts to tell you that information.

In this case, “using a nuclear weapon” is easier for a non-expert to reason about. What about “using nuclear technology for renewable energy”? If the person doesn’t really understand the pros and cons by virtue of being a domain expert, they’re just relying on whatever information they may have (incorrectly) learned or been indoctrinated with.

Otherwise smart ethics people may make stupid decisions because they think they understand what they’re talking about, but actually do not.

Just take existing domain experts and train them in ethics.


So only insiders can be on ethics boards?

How about pushing this to Congress? Hardly any of them know anything about anything. They delegate a lot of their thinking.

So, two problems:

1. People who are practitioners are more likely to be for the technology than against it. Tristan Harris is a good example of what you're looking for.

2. Going to the logical extreme on this doesn't work.

P.S. Should we apply this to journalism? Because seriously, journalists these days don't even make the phone calls it would take to pretend to fact-check.


Yes, only people who know how the tech works should be discussing it. It’s far easier to train a skilled person in ethics than to take an ethics person and train them in that domain.

I don’t buy your argument that all experts are necessarily proponents. Even within a domain there are disagreements.

The government issue is real and also slightly tangential. We need to make working in the public sector more attractive.

And yes it should apply to journalism, but discerning that falls onto individual people. It’s a bit of a different issue.


> Yes, only people who know how the tech works should be discussing it.

Well then we disagree. Being an engineer or a technician does not make you a good ethicist. And that's what we need.

Training an ethicist who is impartial or thoughtful about the technology from the beginning may also be easier than the opposite. Or the two may be comparable...

But training an engineer in ethics I think is a good step. Some fields, like medicine, have it somewhat built in. We can debate how effective or serious that actually is.

Being a technician or engineer does predispose you to thinking that what you are working on or with is ethical. I did list Tristan Harris as a good counter-example, and someone who can certainly speak to the ethics of the issue. But his case is also a good example of why engineers/technicians are not good candidates for impartiality: he has to be a type of activist.

> I don’t buy your argument that all experts are necessarily proponents. Even within a domain there are disagreements.

I said likely, not exclusively.


People can definitely say what should and shouldn't be allowed without currently being able to achieve that result themselves.


>How can you possibly begin to discuss AI Ethics if you have no idea how it works or what's -realistically- possible?

That's what other members of the board are there for. Let's flip your question: How do you expect a tech person to comment reasonably about AI ethics when all they've taken is an undergrad course in philosophy (and even that may be a stretch).

The notion that a person on the board must be an expert in every aspect involved is ridiculous.


It’s not ridiculous to expect people to understand the topic that they are discussing. You can take domain experts and train them in ethics. It’s impossible to discuss something–and the ethics around it–if you don’t fully understand how it works.

I wouldn’t want John Stuart Mill on that board because he wouldn’t know what he was talking about, and therefore would be unable to properly evaluate things.


To what degree? There's a difference between knowledge of what's practical, and experience in implementation.

I'm getting the sense that the goalposts are ready to slide here, but I wouldn't expect an expert on "when to not do a new Holocaust" to be able to model interactions between uranium atoms.

Not having any familiarity with the subject is obviously its own disqualification as an expert in a given area of ethics. But a nuclear physicist could lack understanding of ethics, international politics, history, military matters, who's even armed... That would be far more disqualifying.


A more realistic area where experts would be needed might be “should we be using nuclear energy sources”?

In this case I don’t care how trained an ethics person is in ethics or history. They literally are not equipped to discuss this topic without being fed information from somewhere, which leads to its own issues.

I would much rather take domain experts and train them in philosophy, ethics, and history to an extent. That is far easier and better than the other way around.


Nope, and it seems like nuclear physicists back in the day could have used some stronger opinions on the ethics of making nuclear weapons.


During WWII, Germany was also trying to build the bomb. All things considered, it's probably better that the Manhattan Project beat them to it.


Sure, but it didn't stop there.


You can have an opinion on anything without knowing anything; that's what forums like this are about :). But if you want to give an informed opinion, you do need to understand the tech. For example, how do you decide what restrictions to put on the export of nuclear power tech if you have no clue how it can be re-purposed to make a bomb?


International efforts against proliferation have been somewhat successful, even though they ultimately rely on politicians, not nuclear physicists. And in any case, a lack of knowledge regarding dual-use technology has never been the problem.


How can we let only tech people decide ethics when they don't care about ethics, as we have seen at Facebook?


https://www.oii.ox.ac.uk/people/luciano-floridi/

Publication records help to separate the wheat from the chaff.



