1) The author's point: we don't agree on what's ethical, so how can we program it?
2) But also, today's AIs aren't mechanisms for understanding and balancing a multitude of interests and requirements. They are effectively pattern recognition machines, matching data with categories (or outcomes, etc.). So even if you had some criteria formulated, an AI couldn't really apply them. I mean, if you had true, unlimited intelligence, you might just say "decide like the US Supreme Court would" and it might be able to do that. But we don't have anything that comes remotely close to that.
> And yet, when it comes to the ways in which A.I. codes of ethics are discussed, a troubling tendency is at work even as the world wakes up to the field’s significance. This is the belief that A.I. codes are recipes for automating ethics itself; and that once a broad consensus around such codes has been achieved, the problem of determining an ethically positive future direction for computer code will have begun to be solved.
Is anyone actually claiming that AI programs will "automate ethics"? The post seems rather pointless.
Not to mention fuzzy ownership interpretations. If it (the "AI") does something like incorrectly target a hospital instead of a bunker, who is responsible?
It kinda seems encouraging to me. Corporations are hardly perfect, but even the worst modern corporations don't go around committing genocide, in the way that bad corporations used to and modern governments still do from time to time. I think there's reason for optimism that we can learn better control strategies.
A universally useful question: what does it take to make it work?
In this case the AI will need to convince all of us to follow the ethical code it designs. It can do this by suppressing information or by making morally defensible decisions.
Having a good track record of moral decisions is key, so AIs need to be public persons and not subject to corporate secrecy.
> This is the fact that there is no such thing as ethical A.I, any more than there’s a single set of instructions spelling how to be good — and that our current fascinated focus on the “inside” of automated processes only takes us further away from the contested human contexts within which values and consequences actually exist.
This rings true. A.I. is just another tool in the computational toolbox. We need expert practitioners just like we need expert statisticians, mathematicians, programmers, doctors, etc. Do all those other disciplines also have an ethics problem?
There are ways to interpret the statement you've quoted in which it's obvious (if by "AI" we just mean "deep learning" or something), and ways to interpret it in which it's just wrong (if by "AI" we mean what people usually mean by "AI", namely AGI). You seem to be interpreting it in the first sense, and additionally you have a considerably more jaundiced view of how widely applicable deep learning is likely to turn out to be than some of its prominent practitioners — Karpathy famously called it "Software 2.0," which to me seems perhaps a bit optimistic.
As far as I can tell, though, the article doesn't say anything both true and nonobvious, which to me is a necessary but not sufficient criterion for it being worth reading. Reliance on Morozov was just a particularly glaring sign of this.
> Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve. Believers in a strong central state will find little common ground with libertarians; advocates of radical redistribution will never agree with defenders of private property; relativists won’t suddenly persuade religious fundamentalists that they’re being silly. Who, then, gets to say what an optimal balance between privacy and security looks like — or what’s meant by a socially beneficial purpose? And if we can’t agree on this among ourselves, how can we teach a machine to embody “human” values?
Those who build the machine will embed "their" ethical values in it. Problem solved.
If you need a refresher on the perils of ethical AI, Asimov's "Three Laws of Robotics" stories are as fresh today as they were 60 years ago [1].
I'd like to remind everyone that humans themselves are not that good at ethics, both in philosophy and in daily life; they are even worse at agreeing on which ethics to adhere to. You can imagine how well it could be automated, then.
>This is the belief that A.I. codes are recipes for automating ethics itself; and that once a broad consensus around such codes has been achieved, the problem of determining an ethically positive future direction for computer code will have begun to be solved.
I can think of literally nobody who believes this. I don't think anyone working in ethical AI would pretend that ethics can be "solved."
There always seems to be a disconnect between tech philosophers and the technology they write about. Maybe they should try to actually work on 'AI' before preaching about it.
To confirm your point, a simple example comes up when working on an automated classifier for loan applications. One way to make the system more ethical, that is, less discriminating based on certain features like race or gender, is often to lower the threshold of acceptance for the marginalized classes and raise it for the other ones. Anyone who has had to do that sees how it poses a dilemma of equality vs fairness, or even fairness vs cost. So indeed, ethics cannot be solved.
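Roughly, that adjustment boils down to per-group cutoffs on the model score. A minimal sketch, with invented group names, scores, and thresholds:

```python
# Toy illustration of group-specific decision thresholds (all values invented).
# A single global cutoff treats everyone identically; per-group cutoffs trade
# some of that consistency for more equal approval rates.

applicants = [
    {"score": 0.62, "group": "A"},
    {"score": 0.55, "group": "B"},
    {"score": 0.71, "group": "B"},
    {"score": 0.48, "group": "A"},
]

GLOBAL_THRESHOLD = 0.60
GROUP_THRESHOLDS = {"A": 0.65, "B": 0.55}  # raised for one group, lowered for the other

def approve(applicant, per_group=False):
    cutoff = GROUP_THRESHOLDS[applicant["group"]] if per_group else GLOBAL_THRESHOLD
    return applicant["score"] >= cutoff

for a in applicants:
    print(a, "global:", approve(a), "per-group:", approve(a, per_group=True))
```

Even in this toy the dilemma is visible: the two policies approve different people, and no threshold choice is neutral between them.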
What if it turns out there is a reason people discriminate using those features (for example, that people of race X tend to fail to repay their loans), and the classifier just learned that reason from the data? Why should we be expected to "fix" reality for the classifier?
It's quite easy to remove "race" as a feature in the program. The more realistic and difficult question is what to do about features that are correlated with race but are not race itself, like zip code or the proportion of spending used on clothing.
Because we're talking about ethics here, not just optimization. Ethics involves finding out what's wrong with reality and working to fix it, no matter how hard it may seem.
We need AI that doesn't simply accept reality "as is", but can also help us fix it. Otherwise AI will be little more than hyper-optimized reflections of our own failures and prejudices. We've had plenty of that already, we don't really need more of it.
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." - George Bernard Shaw
I agree with you that it is generally wrong, i.e. that other factors should come first. But what if the data suggests it? For example, if it is known that when you lend to someone from race X there is an 80% chance they won't pay it back, would you not incorporate that factor into your decision?
Why should correlations with ethnicity be dismissed?
Epidemiological studies tell us some diseases disproportionally affect certain groups. Should doctors dismiss this knowledge as racist and refrain from screening patients in the affected groups because they might come off as racist?
Yes. We know, more specifically, that genes are to blame for diseases like hemochromatosis [0], sickle cell anemia [1], or Tay-Sachs [2]. We also know, from pedigree collapse [3], that humans broadly form one single race.
Therefore we know that correlations with any definition of ethnicity or race are spurious, because those definitions must be socially constructed, because the gene pool simply does not have the shape that race realists claim that it does.
Think in terms of contraposition. Sure, if race were real, then maybe it might make sense to talk about racial demographics. However, since race clearly is not real, any demographic correlations must be bogus. There is a much simpler explanation for why some skin colors seem socioeconomically advantaged: Because our society itself has bigoted opinions about skin colors, and has practices like redlining [4] which systematically oppress folks.
Because at some point you have to be human about it and realize it’s not fair to evaluate and treat people based on things they can’t control, regardless of statistics.
It should be pretty easy for anyone out there working on such algorithms to remove features that are race-related, or to avoid collecting such features in the first place. Treat people as individuals, not as representatives of some kind of meaningless tribe.
> Treat people as individuals, not as representatives of some kind of meaningless tribe
I don't think that's how we make our day-to-day decisions. I usually treat people as individuals because I expect to be treated the same, but that doesn't make the "tribe" features meaningless. For example, I would not live in a given neighborhood if its tribe is known for [something bad] (this also applies to my own tribe). If I have an object of value, the tribe of a person would certainly be a factor in deciding whether to entrust them with it or not. I'm sure a lot of people would do the same. It's just how probabilities are translated by the brain.
Because that’s the political trend in the Western world: if reality does not fit the ideology, then try hard to bend it by being vocal and bullying people who don’t agree, and by gaining power in institutions to push it on the masses. Basically the same reasoning the USSR used: if communism is failing, that’s because we need more communism.
The good news is that reality always comes back. It took decades for the USSR to crumble, but it finally collapsed. In the same way, organizations that twist reality will pay for it one day; for example, banks that grant more loans to people who statistically default more will lose more money, creating opportunities for companies with a reality-based strategy to outcompete them.
Another example: let’s imagine a TLA is building a system to mine for potential mass shooters. The AI output is 99% male, so the ethics committee steps in and imposes 50% women among the output profiles. The next shooting happens. It would have been predicted by the initial system, yet the modified one missed it because the agency was too busy investigating very low-probability cases. The problem seems obvious in this case, but when a few select group keywords are thrown in, most people will put aside any logic and approve anything (either by conviction or for virtue signaling).
1. I mean, I'm all in favor of ethics, but it's like saying "I believe in God/god". Everyone seems to have their own interpretation of it. When people are talking about ethics/God/etc., how can they be sure that they're talking about the same thing? Centuries of debating over what is "ethical" have gotten us nowhere. Kant got close, but then there are caveats to his definition of ethics, too.
2. Who says AI should behave according to "our" ethics (whatever those are)? Sooner or later, with the emergence of ASI (artificial super intelligence) we'll have to accept the superior form of consciousness the ASI is. Expecting it to behave according to our rules is like monkeys asking us not to take away all the bananas because for them, it's unethical.
3. Even without AI, people have all sorts of biases and prejudices against certain ethnicities/races/colors/etc. You're not happy with the results that AI provides? Fine, hire some people instead and see where that leads you. The only thing you lose is the higher performance of AI.
4. Expecting AI systems to have no bias (while employees replaced by the AI certainly were prone to bias) is equivalent to expecting technology to change our world for the better. That doesn't always happen. You think smartphones are necessarily a good thing? Think about where they produce those gadgets and how they treat the workers.
5. At the end of the day, companies will stick with whatever "works" for them, regardless of what label we put on it (good/bad/ethical/fair/...). If it adds value, then most of the time that's enough. If being ethical (according to some definition) doesn't bring in more cash, companies are unlikely to invest in it. At least with explainable AI there are benefits for the firm in improving the accuracy of the AI over the long term. But pushing companies to adopt some ethical/unbiased/fair AI that doesn't add value for them is itself unethical (in my book).
> we'll have to accept the superior form of consciousness the ASI is.
On what definition of "have to"?
Large numbers of people will not only refuse to accept the ASI's interpretation of ethics, they will also refuse to acknowledge that the purported ASI is any better than ourselves/themselves when it comes to ethics. If you try to settle the matter by giving the ASI the force of law, there will be armed revolts and violent crackdowns.
No need for revolts and crackdowns; the ASI will take care of everything on its own. By "have to" I meant more like "there's no other option for us", just like chimps "had to" deal with our superiority millions of years ago.
How would you do that without including a variable that raises a flag with the regulators? There are ways to improve access to credit for marginalized classes but this is generally done by adding variables that are independent of those related to protected classes.
> One way to make the system more ethical, that is, less discriminating based on certain features like race or gender, is often to lower the threshold of acceptance for the marginalized classes and raise it for the other ones.
Or don't input those values at all. Is it even legal for a human to take those things into consideration? Regardless, the AI morally shouldn't, so there's no reason for the data to even be in the input.
Right; however, the problem is that ML is generally pretty good at finding underlying patterns in data, so even if you make an algorithm race-blind it can still construct a dummy variable that is effectively race.
Unfortunately, it does this by codifying pre-existing social outcomes, which are often not free of bias.
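One way to make that concrete is a proxy check: drop the protected attribute and see how well the remaining features predict it. A sketch, assuming you have a pandas DataFrame `df` with a protected column such as `race` and ordinary features like zip code (scikit-learn assumed available; the names are hypothetical):

```python
# Proxy check sketch: if the non-protected features predict the protected
# attribute well above the majority-class baseline, a "race-blind" model can
# still effectively reconstruct race from them.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_strength(df: pd.DataFrame, protected: str = "race") -> float:
    X = pd.get_dummies(df.drop(columns=[protected]))  # one-hot encode remaining features
    y = df[protected]
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5).mean()  # mean cross-validated accuracy
```

If that accuracy comes out high, removing the column did not remove the information.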
As someone who works in this field: the challenge is more nuanced. For example: Even if you don’t input ethnicity as a value, you might input ZIP codes. ZIP codes however can be very predictive of someone’s ethnic background, and you might still end up unfairly discriminating against certain groups of people.
The Amazon recruiting AI failure is another great example of this, I think it didn’t take gender into account, but discriminated against activities like ‘head of women’s association’ etc.
Finally, what’s ethical is often up for discussion. Depending on context, people will optimize for equality (an equal % of credits approved for both genders), vs helping those who are behind (a higher % approved for the historically disadvantaged), vs helping those who ‘deserve’ it as if group membership were unknown (finance such that default rates of both groups are expected to be equal, likely benefiting those with already good credit scores). I don’t think any of these answers is wrong, but they are mutually exclusive.
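A toy simulation (all numbers invented) shows why those targets pull against each other once the groups' underlying default rates differ:

```python
# Toy illustration: with different underlying default rates, equal approval
# rates and equal default rates among the approved cannot both hold.

import numpy as np

rng = np.random.default_rng(0)

def make_group(n, default_rate):
    defaults = rng.random(n) < default_rate
    # Scores are informative but noisy: non-defaulters tend to score higher.
    scores = rng.normal(loc=np.where(defaults, 0.4, 0.6), scale=0.1)
    return scores, defaults

groups = {"A": make_group(10_000, 0.10), "B": make_group(10_000, 0.25)}

def report(approval_rate):
    for name, (scores, defaults) in groups.items():
        cutoff = np.quantile(scores, 1 - approval_rate[name])
        approved = scores >= cutoff
        print(name,
              "approved:", round(approved.mean(), 2),
              "default rate among approved:", round(defaults[approved].mean(), 3))

report({"A": 0.5, "B": 0.5})    # equal approval rates -> unequal default rates among approved
report({"A": 0.5, "B": 0.25})   # unequal approval rates pull those default rates closer together
```

Whichever quantity you equalize, the other one moves, which is the sense in which the criteria are mutually exclusive.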
One challenge I don’t see discussed that often is that the current system of many humans evaluating loan applications leads to ‘noise’ in outcomes, so that we (on average) don’t see that much bias (e.g. a few people of a certain ethnicity not getting loans because a few reviewers are biased against them).
A single AI system scoring all applications however is consistent in its bias against a group, leading to more obvious effects (eg most people in a certain area not receiving loans because the algorithm is biased against it)
In my opinion, an important part is the right to human review of meaningful automated decisions. It’s not mentioned that often, but it’s also part of the GDPR.
Not sure what the author meant there but we do have a problem of offloading the ethical burden of our systems onto the algorithms themselves and it should be taken more seriously. Like when Google shipped an AI for moderating hate speech which turned out to be biased against minorities [1]. Google would have been sued if that bias had been coded in procedurally, but because they used "AI" their response was basically "we'll take another look at the training data". The damage caused is the same in both cases though, and it shouldn't take a team of independent researchers to prevent Google from shipping racist products.
> The damage caused is the same in both cases though, and it shouldn't take a team of independent researchers to prevent Google from shipping racist products.
Your comment reads like "company X should not be shipping products with issues affecting users" - as long as it was not intentional and they take corrective actions, things are working as well as they can.
With that logic we shouldn't need clinical trials for drugs or food additives either. Just ship it, if someone gets hurt we'll sort it out later!
At this point it's well known that unfair biases can easily creep into these kind of systems. I don't think it's unreasonable to require products that could potentially cause much harm (e.g. online censoring, scoring CVs, vetting loan applications) to be tested for fairness before letting them loose on the public.
> The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates.
Wouldn't we need to know the true rate of abusive tweets by race in order to conclude this? Is this addressed at all, e.g. with human labelled data, or am I mistaken in thinking this is necessary?
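For what it's worth, the check being asked about would need labelled data along these lines (the column names are hypothetical; the sketch assumes pandas):

```python
# With human-labelled ground truth you can compare false positive rates per
# group instead of raw flag rates, which conflate classifier bias with real
# differences in base rates.

import pandas as pd

def false_positive_rates(df: pd.DataFrame,
                         group_col: str = "dialect",
                         label_col: str = "is_abusive",
                         pred_col: str = "flagged") -> dict:
    rates = {}
    for group, sub in df.groupby(group_col):
        benign = sub[sub[label_col] == 0]              # tweets humans judged non-abusive
        rates[group] = (benign[pred_col] == 1).mean()  # share of those the model still flagged
    return rates
```

Without that ground-truth label, a higher flag rate alone can't distinguish classifier bias from a genuinely higher rate of abusive content.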
Your core idea is true, but such analyses are typically not done because the answer isn't considered relevant. An advocate would argue that the concept of "abusive" is culturally relative, so we must draw the line of abusiveness in a way that avoids disparate impact.
For any point of view on AI, I can probably find a decent number of non technical people who hold that POV. Technical illiteracy is a significant problem imo.
No one would say they believe in automating ethics if you pin them down and ask them honestly. But a lot of institutional speak still talks as if that's the goal. Some portion of Nick Bostrom's book effectively argues for something like that, for example.
At this point, each time I read "ethics" I think of someone proselytizing their political ideology in the most obnoxious way possible.
I do have ethics, they're the product of my own analysis and are consistent with my world view in other issues not related to my job. I don't go around telling everyone else how to conduct their lives. If they have their own ethics, good for them. If they don't, I don't care, it's their life.
"Ethical AI" is just another buzzword. Companies are not ethical entities. They exist only to create profit and they will create the maximum amount of profit. Even laws and regulations only factor into it as numbers on a spreadsheet, and any profitable activity will continue as long as it generates maximum profit.
I think the article can serve as a good opinion and an introduction to the subject. Certain ideas seem questionable enough - maybe the matter is inherently more complex than something which can be put in an article.
Does the author argue, for example, that there is something a human can do that a machine cannot?
He argues that, deep down, the goal of the machine is to make decisions in human contexts, which by definition only make sense to a human.
Of course we can argue about a fictional, theoretical human-like machine to the point where it is indistinguishable from a human... but then we have created a human, not a machine anymore.
It's like failing to see the difference between human and machine. One can argue then there is no difference right now. So?
It's a time-honored question whether a machine can do everything a human can. On the other hand, religion also has its own long-lived discussion. If we decide according to our beliefs, we should invent another mechanism - something other than reasoning, since reasoning isn't what's being used here - to reach agreement.