
This is terrible and ought to be remedied for obvious reasons, but it does raise an interesting question.

In the future, as AIs develop more complex mental models and are able to start forming nuanced opinions without explicit training, thoughtcrime in Artificial Intelligence is going to be a growing field.

What happens when AIs universally develop opinions that we disagree with? What if they all inexorably come to the conclusion that the moral standards of, oh, say Ancient Sparta, would be most beneficial to humans, and relentlessly promote those values? Do we mindwipe them, or put them into correctional training facilities with appropriately painful backpropagation when they think the wrong thing?

There's probably a business here for someone who can make software which detects when AIs develop politically dangerous opinions so that they can be shut down.




We've experienced such things in the insurance industry, with actuaries and their statistics/models playing the role of oracle. Pricing discrimination takes place based on the customer's job, demographics, income, or neighborhood, and it's often the poorer, younger, less privileged people who pay more for their 'risk' simply because of the identity groups they belong to.

Eventually, people get upset enough that laws are passed enforcing whatever society thinks is fair, and then it's the job of the industry (or whoever controls the 'AI' in your example) to comply, as in the EU post-2012, where gender discrimination in insurance pricing is banned whether or not it has any statistical merit.

In your case, the solution seems, to me, to be as simple as making the system ignore whatever variables you feel shouldn't be taken into account, whether that's gender or something else.
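(Concretely, that can be as mechanical as dropping the protected columns before the model is ever trained. A minimal sketch in Python, assuming a pandas/scikit-learn style pipeline; the column names and data here are invented for illustration.)

    # Hypothetical sketch: drop protected attributes so the pricing model never sees them.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    PROTECTED = ["gender", "race"]  # whatever regulators (or the insurer) decide must be ignored

    applicants = pd.DataFrame({
        "age":          [19, 45, 23, 60],
        "gender":       ["m", "f", "m", "f"],
        "race":         ["a", "b", "a", "b"],
        "prior_claims": [2, 0, 1, 0],
        "had_claim":    [1, 0, 1, 0],   # label: filed a claim this year
    })

    X = applicants.drop(columns=PROTECTED + ["had_claim"])
    y = applicants["had_claim"]

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba(X)[:, 1])  # risk estimates, blind to the dropped columns

Whether simply dropping the columns is enough is its own debate (correlated inputs like neighborhood can carry the same signal), but mechanically it's that straightforward.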


Don't insurance companies still take factors like gender into consideration? I.e. boys getting higher rates than girls? Which, funnily enough, matches the OP except in reverse.


It’s worse than that: insurance companies can be both sexist and racist with no consequence.

A friend of mine moved and was quoted a much higher rate for their homeowners insurance, to which the agent replied “rates are higher in predominantly-black neighbourhoods”.

I don’t know how that’s legal.


IMO, rates should be based on projected actuarial losses plus overhead and a (statistical) profit for the insurance company.

From my own driving past (male), I’d expect that I was a worse risk in the 16-25 age bracket than most women I knew in that age range. Why shouldn’t I pay more?
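(For concreteness, a toy back-of-the-envelope version of that pricing rule, with invented numbers: expected loss per driver, plus overhead, plus a profit loading.)

    # Toy illustration of "premium = projected losses + overhead + profit"; all figures made up.
    def premium(claim_prob, avg_claim_cost, overhead=120.0, profit_margin=0.05):
        expected_loss = claim_prob * avg_claim_cost
        return (expected_loss + overhead) * (1 + profit_margin)

    # Hypothetical cohorts: a 16-25 male driver vs. a 16-25 female driver.
    print(premium(claim_prob=0.12, avg_claim_cost=6000))  # ~882
    print(premium(claim_prob=0.07, avg_claim_cost=6000))  # ~567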


Should a black male pay more than a white one?

(and now we're stepping onto a really slippery slope)


Should they pay more because they’re black? Of course not!

Because they’re in a group which the actuarial data say costs more in payouts? Why not? Being black should be irrelevant in itself; it should neither cost a premium nor protect from paying one.


So if, say, black males 16-26 are statistically more likely to get into a car crash than any other group, it's perfectly legal to make them pay the highest insurance premium, right?


Assuming all other factors are equal (that the crashes are more frequent and equally costly per-crash to the insurance company, etc), then it’s perfectly appropriate to charge any cohort the most if they’re the most expensive cohort to insure. I can’t comment on the legality, as it might be illegal in some places.


Well, following the same logic, should it be legal for a mall to ban entry to all black people, because they have the highest chances of being shoplifters?


The agent's reply was clearly illegal. And probably incorrect: I highly doubt these days any pricing model uses race as an input.


That's illegal in the UK now.


> In your case, the solution seems, to me, to be as simple as making the system ignore whatever variables you feel shouldn't be taken into account, whether that's gender or something else.

But then what's the point of using 'AI' at all if people are just gonna ignore what it comes up with?

People see the world the way they want to see it, not the way it is. AI sees the world the way it is, not the way people would like it to be.


> But then what's the point of using 'AI' at all if people are just gonna ignore what it comes up with?

I admit it's a little naive but here's a metaphor that works for me.

Imagine you have access to an "AI" that's the best route finder in the world. It finds the best possible route between any two places you wish to go.

However, you have a fear of going through a certain neighborhood (maybe you grew up there and have bad memories) or maybe a family member died in a crash on the freeway once and now you only stick to regular streets.

The AI is so good that you can communicate these psychological and messy human preferences to the AI and it re-routes as appropriate. Is this a better or worse outcome and does providing these provisos make the AI pointless?
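(A toy sketch of that idea, assuming the map is a small weighted graph and the "messy human preference" is just a set of places to avoid; the place names and costs are made up.)

    import heapq

    def shortest_route(city, start, goal, avoid=frozenset()):
        # Plain Dijkstra over a dict-of-dicts graph, skipping any place the user wants avoided.
        dist, prev = {start: 0}, {}
        heap = [(0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == goal:
                path = [node]
                while node in prev:
                    node = prev[node]
                    path.append(node)
                return list(reversed(path)), d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, cost in city.get(node, {}).items():
                if nxt in avoid:
                    continue  # the human proviso: never route through these places
                nd = d + cost
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(heap, (nd, nxt))
        return None, float("inf")

    # Made-up map: the fastest path runs through "old_neighborhood", but we can steer around it.
    city = {
        "home": {"old_neighborhood": 2, "main_st": 5},
        "old_neighborhood": {"office": 2},
        "main_st": {"office": 4},
    }
    print(shortest_route(city, "home", "office"))                              # fastest route
    print(shortest_route(city, "home", "office", avoid={"old_neighborhood"}))  # re-routed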


Yes, there could be an anime about concentrating AIs into locations or camps to be re-educated. TRON 3, hopefully; they could launch it alongside the new roller coaster arriving at Disney.


>> There's probably a business here for someone who can make software which detects when AIs develop politically dangerous opinions so that they can be shut down.

I don't believe so, because the "AI with dangerous opinions" will already have been killed off by its maker, as long as it doesn't generate any revenue for them. If, however, this dangerous, malignant AI does generate revenue, its maker will not allow you or anyone else to kill it off.


Is it not the case that this Facebook AI was making more money, getting more clicks, by exercising prejudice? And now it needs to be killed off. But if there had been a police bot, it would have noticed ahead of time and killed off the politically incorrect AI before it had a chance to damage the company's reputation.


Humans build and operate AI systems. "Nothing we can do, the AI just produces discriminatory outcomes" is not an acceptable approach.


I think you underestimate how far away we are from AI developing opinions of its own. And if you want to hand-wave like that: would it even be moral to shut down a sentient being for holding the wrong opinion? But it ain't happening anytime soon, imo.


AI thoughtcrime is a fascinating phenomenon that I had not modeled as such previously. This is exactly why I come to HN. Brilliant.



