Auditing for discrimination in algorithms delivering job ads (technologyreview.com)
33 points by yamafaktory on April 10, 2021 | 64 comments



In other news, 12% of nurses are men. Is it because the nursing industry isn’t welcoming to men or is it because men aren’t attracted to nursing jobs?

You can see bias anywhere you look, but if you look closely you’ll see that it’s just a natural consequence of the real world.

If a user joined a car shop group, said user is likely to be interested in car-related jobs. It happens to be that most of these users are men. The algorithm is working as intended.

In some fields this difference isn’t as clear or obvious, so we end up with these articles.


"consequence of the real world" feels like a cop out answer to the question. that very well may be part of if not the whole reason (I'm not a labor economists or the sort so I'm not going to speculate), but I would rather see if there aren't other systemic reasons for the biases - ie reinforcing gender with different jobs in media consumption, the domination of one gender in a type of job creating an environment that turns aways members of the other gender intentionally or unintentionally

And even if "that's just the way things are", I don't see any good reason why the advertising needs to remain so targeted along the lines of gender. Job recruitment should be as free of biases as possible.


In the car repair job example, it’s not clear to me that the ad is being targeted on gender; in fact, it’s pretty clear to me that it’s being targeted based on revealed interest in cars. That the latter has a correlation with gender is true but not especially concerning to me.


I don’t disagree with you, but that will make advertising less effective. A nursing job shown to me and a programming job shown to my sister will both be ignored and wasted. The opposite wouldn’t be true.

Likewise showing me a car shop job would also be wasted, not because of my gender, but because I have no interest in cars, as my Facebook profile clearly shows.


From the article:

> This is considered sex-based discrimination under US equal employment opportunity law, which bans ad targeting based on protected characteristics.

The algorithm is not "working as intended" if it's violating EEO laws.


I think this depends on what the algorithm uses. Or would "liked a page related to the job description" be considered a protected characteristic? My non-lawyer instinct says that as long as the algorithm doesn't contain a line that explicitly tells it to consider sex as a parameter when deciding who sees the ad, it doesn't breach this law.


I’m sure the algorithm isn’t checking “if (person.sex == male) { show_car_jobs(); }”, but rather “if (person.likes_cars) { show_car_jobs(); }”.


The question is if it's checking "if(person.sex != male) { person.likes_cars = false }".


Clearly, but you're entirely missing the point.


And, indeed, those laws are set up, in a sense, to cut against "natural" order (or, more specifically, to widen opportunities beyond the limits imposed by societal tradition).


> The algorithm is not "working as intended" if it's violating EEO laws.

It is if it is intended to work in a way that violates EEO laws.

Just like if somebody sets a death trap targeting you and it works as planned, the fact that it is violating murder laws doesn’t suddenly mean it isn’t working as intended.


Looking at the data from the related women-dominated profession of teaching, the answer is very much that it isn't welcoming to men. "Culture fit", as they call it.

An old theory to explain this is the impact that being a minority has on the individual. Any difficult and advanced education is going to involve ups and downs, and each roadblock will naturally trigger self-doubt. Being a minority makes that doubt stronger and increases the risk that the individual will abandon their chosen path. Similarly, being in a strong majority demographic will lower the self-doubt and the associated risk.

Multiply that risk over four years of studying and then a few years in the profession, and the minority demographic will look like a leaking pipeline. If you ask those who chose to leave the profession, a large percentage will say that they did not feel like they fit in.

In addition, industries dominated by men tend to focus progression on a career path with steady raises, while industries dominated by women tend to focus progression on privileges and status positions within the organization. A mismatch of those expectations may also lead to people not feeling appreciated for their contributions and ending up quitting the profession.


The Boston Globe has an interesting chart of other professions dominated by women: https://archive.is/3t8dU (archive.is since there's a paywall at the Globe)


Women's representation in CS was highest in the 60s and has fallen pretty consistently since then (starting to rise a bit recently as the industry has started to see the gap as a problem). The general prestige and economic rewards of CS, on the other hand, have risen considerably in the same time period.

If a demographic's involvement decreases as the subject becomes more rewarding, it seems more likely to be because external forces are discouraging them rather than any inherent lack of interest.


This might make intuitive sense but isn't supported by the research: https://en.m.wikipedia.org/wiki/Gender-equality_paradox


That article goes over some problems with the initial study, and some issues replicating it (although the follow-up study did find a similar effect). Hardly seems like a slam dunk, though, and given those issues the effect is likely weak if it exists at all.

More broadly, I question how accurately "endogenous interest" can be measured without risking a lot of confounding factors from the broader society. I didn't read the original paper, maybe they tried to account for that, but I can't really see how you reliably could. People's interests don't exist in a vacuum; they're tangled up in their upbringing and society. If they'd done a similar study a century ago they might have found women having a high endogenous interest in being homemakers.


I don't think you could separate them at all, because things like gender roles are downwind of sex roles and societal upbringing, and nobody is brought up outside of social conditions to act as a control.

The important question is whether passionate people are being kept out of the industry en masse. My gut says it probably happens on an individual basis, but I don't see why it would happen systematically.


Hope I don't sound like a fossil, because I wasn't born back then, but wasn't this due to skilled typists being primarily female, and thus naturally qualified to be computer scientists? If you go back further in time, there would've been flocks of human calculators:

https://www.history.com/news/human-computers-women-at-nasa


How is CS being represented in this case? A 1960s CS syllabus would, at best, have amounted to typing classes, punch cards, FORTRAN, ALGOL, COBOL, and a bit of EE (a male-dominated major) on the side. All of this would have been learned largely for secretarial work in offices or academia. Nothing to do with kernels, operating systems, computer architectures, building killer apps or web-based services as it would today. Sixty years ago, computer science was just white-collar labor.

There are three reasons computers became popular in the first place: the proliferation of open hardware standards with the S-100 bus, cheap computer kits, and the software portability that came with Unix and CP/M clones. So anyone who knew how to build or buy hardware could program what they wanted on it. No need for a time-sharing system or a college degree. At that point the only limitations were time, money, and inclination.

I disagree with your final statement. It wasn't external forces artificially depressing a demographic so much as it was natural interest becoming a more prominent limiting factor.


Maybe, but why aren’t more girls interested in computing in the first place? If it were just a workplace issue we’d see a lot more college admissions of women into CS programs, who then would drop out or change careers later due to those issues.

I don’t think computing is seen strictly as a gender-restricted job like, say, mining or teaching, so I wouldn’t expect much friction in the way of a young girl becoming interested in it and beginning a career.

It just doesn’t happen that often, however. Does it have to be someone’s fault?


> you’ll see that it’s just a natural consequence of the real world.

So is rape, murder, theft and random catastrophe.

Do we stop trying to do something about those, too?


By the time you realize the narrative in the article is wrong, you have already rendered the article, including all of the ads in it.

It's not about journalism anymore, it's about ads.


Ad buyers effectively chose these inadvertent biases before. Advertising in the Wall Street Journal is likely to bring you a different audience than advertising in Mother Earth News. I don't recall any previous claims that if you advertised in one you were obligated to advertise in the other.

Now, this Facebook case is clearly more than 0% different, but I think it is less than 100% different from that.

In fact, I wouldn't be surprised if advertising in Technology Review itself (twice as many male readers as female readers) would have some of these same problems.

* - https://mediakit.technologyreview.com/#banner1


Article:

> The researchers weren’t able to discern why that is, because Facebook won’t say how its ad-delivery system works.


It's certainly worth talking about and fixing--the difference between segmentation and discrimination. As in the HUD case mentioned in the article, discriminating based on age, gender, race, etc. is actually illegal. Not every industry is as regulated, nor are they all at risk because of this.

To me, there's not a question of "should we make this better?" because we're talking about basic rights: the right not to be discriminated against in the pursuit of a roof over your head or a job that pays you a living wage (housing and employment). IMO, it's a very different question when you advertise anything outside of those basic rights. Plenty of grey area to talk about there.


If I’m advertising a $20M luxury penthouse overlooking Central Park, am I permitted to advertise only in the WSJ and Robb Report (or whatever rich people read)?

Does that change if I learn those publications skew overwhelmingly male?


What if the discrimination is based on interest? For example, the researchers found that a job ad for a car salesperson is more often shown to men and a job ad for a jewelry salesperson more often to women. But that could just be correlation, not intentional causation. It makes perfect sense to show jewelry-related ads to people who are more interested in jewelry. In that case, the problem is not Facebook but the fact that women are more interested in jewelry than men and men are more interested in cars than women.


This data is probably based on what types of ads users click or what types of posts they like or interact with. Thus, if more men interact with car-related items and more women with jewellery-related items, isn't the targeting working correctly? Surely your average employer prefers to hire a person who is interested in the field, and on the other hand an employee would prefer a job in a field they are interested in.

Maybe the only solution is to fix content consumption habits. Force everyone to only see and interact with perfectly mixed and equally distributed content that in no way takes their own interests into account.


Technically it is impossible to fix this issue. You can remove the explicit gender feature from the data, but I guarantee that the remaining features are correlated enough with gender that nothing would change.
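
A minimal sketch of that proxy effect in Python, with the correlation strengths invented for illustration (nothing here is Facebook's actual model):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Synthetic population; sex is never an input to the targeting rule.
    is_male = rng.random(n) < 0.5
    # Invented correlation: interest in cars skews male.
    likes_cars = np.where(is_male, rng.random(n) < 0.60,
                                   rng.random(n) < 0.15)

    # "Gender-blind" rule: show the car-shop job ad to anyone who
    # likes cars. The explicit gender feature has been removed.
    shown_ad = likes_cars

    # ~80% of impressions go to men, despite sex never being used directly.
    print(f"Share of impressions going to men: {is_male[shown_ad].mean():.0%}")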

One could debate whether it should be the law that ML is not used in certain services so as to not project society's biases.

However I doubt the claim that this is illegal. Is targeted advertisement illegal? I would love for it to be, but I doubt it is.


Sensationalist.

The key point of the article hinges on one particular statement: "These gender differences cannot be explained away by gender differences in qualifications or a lack of qualifications,"

How the heck is Facebook supposed to know about someone's qualifications?

Facebook _obviously_ have a set of standard data points they use for ad targeting, such as location, gender, age span and so on, together with dynamically updated data about who has interacted with the ad.

Just because the outcome is not what the journalist wants doesn't necessarily mean it's wrong or discriminatory.

Sure, it could of course be that Facebook's algorithm is explicitly discriminatory, but it's more likely that an algorithm like this is actually fairly neutral (as opposed to models pre-trained on data with built-in bias, for example photographs of people with mostly white skin; ad targeting is probably keyword-based, and trained on what actual people click on).

Is it discriminatory? I don't think so. Is it "filter-bubble-reinforcing"? Yes, that's more likely. As more men initially click an ad, it will be shown to more men. And vice versa.
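
A toy simulation of that reinforcement (all numbers invented): the true click-through rate is identical for both groups, yet a small initial skew never self-corrects once delivery follows the accumulated click history.

    import random

    random.seed(1)
    clicks = {"men": 6, "women": 4}  # small initial random skew
    CTR = 0.02                       # identical true rate for both groups

    for _ in range(20):
        snapshot = dict(clicks)
        total = sum(snapshot.values())
        for group in clicks:
            # Assumed delivery heuristic: impressions allocated in
            # proportion to each group's share of clicks so far.
            impressions = int(10_000 * snapshot[group] / total)
            clicks[group] += sum(random.random() < CTR
                                 for _ in range(impressions))

    male_share = clicks["men"] / sum(clicks.values())
    # Stays near the initial 60%: the early skew is locked in.
    print(f"Share of clicks from men after 20 rounds: {male_share:.0%}")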


> Facebook _obviously_ have a set of standard data points they use for ad targeting, such as [...] gender,

How can anyone read this about job ads and not think maybe it's a problem?


It doesn't actually matter; you can remove that data point and you are still at the mercy of who actually interacts with the ad once it's published.


Domino's and DoorDash have the same qualifications.

> set of standard data points for ad targeting, such as location, gender

That would be illegal employment discrimination on the basis of gender.


Employment - yes. Advertising - no.

Otherwise my big lawyered up corp wouldn't be having "girls in tech" recruitment events.

Funny enough, these events are still attended predominantly by males.

Go figure.


This is terrible and ought to be remedied for obvious reasons, but it does raise an interesting question.

In the future, as AIs develop more complex mental models and are able to start forming nuanced opinions without explicit training, thoughtcrime in Artificial Intelligence is going to be a growing field.

What happens when AIs universally develop opinions that we disagree with? What if they all inexorably come to the conclusion that the moral standards of, oh, say Ancient Sparta, would be most beneficial to humans, and relentlessly promote those values? Do we mindwipe them, or put them into correctional training facilities with appropriately painful backpropagation when they think the wrong thing?

There's probably a business here for someone who can make software which detects when AIs develop politically dangerous opinions so that they can be shut down.


We've experienced such things in the insurance industry with actuaries and their statistics/models playing the role of oracle. Pricing discrimination takes place based on the customer's job, demographics, income, or neighborhood – and often it's the poorer, younger, and less privileged people who pay more due to their 'risk' merely based on the identity groups they're in.

Eventually, people get upset enough so that laws are passed enforcing whatever the society thinks is fair, and then it's the job of the industry (or whoever is controlling the 'AI' in your example) to comply, such as in the EU post-2012 with banning gender discrimination in insurance whether or not it has any statistical merit.

In your case, the solution seems, to me, to be as simple as making the system ignore whatever variables you feel shouldn't be taken into account, whether that's gender or something else.


Don't insurance companies still take factors like gender into consideration? I.e. boys getting higher rates than girls? Which, funnily enough, matches up with the OP except in reverse.


It’s worse than that, insurance companies can be both sexist and racist with no consequence.

A friend of mine moved and was quoted a much higher rate for their homeowners insurance, to which the agent replied “rates are higher in predominantly-black neighbourhoods”.

I don’t know how that’s legal.


IMO, rates should be based on projected actuarial losses plus overhead and a (statistical) profit for the insurance company.

From my own driving past (male), I’d expect that I was a worse risk in the 16-25 age bracket than most women I knew in that age range. Why shouldn’t I pay more?
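
As a rough sketch of that pricing logic (all figures invented, not real actuarial numbers): the cohort premium is just the expected loss grossed up to cover overhead and profit, so the more expensive cohort pays more without any attribute entering except through the loss statistics.

    # Simplified cohort pricing; every figure is invented.
    def premium(expected_annual_loss, expense_ratio=0.25, profit_margin=0.05):
        # Gross up the expected loss to cover overhead and profit.
        return expected_annual_loss / (1 - expense_ratio - profit_margin)

    print(premium(1_400))  # hypothetical 16-25 male cohort   -> ~2000.0
    print(premium(700))    # hypothetical 16-25 female cohort -> ~1000.0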


Should a black male pay more than a white one?

(and now we're stepping onto a really slippery slope)


Should they pay more because they’re black? Of course not!

Because they’re in a group which the actuarial data say costs more in payouts? Why not? Being black should be irrelevant; it should not itself cost a premium nor protect from paying a premium.


So if, say, black males 16-26 are statistically more likely to get into a car crash than any other group, it's perfectly legal to make them pay the highest insurance premium, right?


Assuming all other factors are equal (that the crashes are more frequent and equally costly per-crash to the insurance company, etc), then it’s perfectly appropriate to charge any cohort the most if they’re the most expensive cohort to insure. I can’t comment on the legality, as it might be illegal in some places.


Well, following the same logic, should it be legal for a mall to ban entry to all black people, because they have the highest chances of being shoplifters?


The agent's reply was clearly illegal. And probably incorrect: I highly doubt these days any pricing model uses race as an input.


That's illegal in the UK now.


> In your case, the solution seems, to me, to be as simple as making the system ignore whatever variables you feel shouldn't be taken into account, whether that's gender or something else.

But then what's the point of using 'AI' at all if people are just gonna ignore what it comes up with?

People are seeing the world the way they want to see it, not the way it is. AI sees the world the way it is, not the way people would like it to be.


> But then what's the point of using 'AI' at all if people are just gonna ignore what it comes up with?

I admit it's a little naive but here's a metaphor that works for me.

Imagine you have access to an "AI" that's the best route finder in the world. It finds the best possible route between any two places you wish to go.

However, you have a fear of going through a certain neighborhood (maybe you grew up there and have bad memories) or maybe a family member died in a crash on the freeway once and now you only stick to regular streets.

The AI is so good that you can communicate these psychological and messy human preferences to the AI and it re-routes as appropriate. Is this a better or worse outcome and does providing these provisos make the AI pointless?


Yes, there could be an anime about concentrating AIs into locations or camps to be re-educated. TRON 3, hopefully; they could launch it with the new roller coaster arriving at Disney.


>> There's probably a business here for someone who can make software which detects when AIs develop politically dangerous opinions so that they can be shut down.

I don't believe so, because the "AI with dangerous opinions" will already have been killed off by its maker, as long as it doesn't generate any revenue for them. If, however, this dangerous, malignant AI does generate revenue, its maker will not allow you or anyone else to kill it off.


Is it not the case that this Facebook AI was making more money, getting more clicks, by exercising prejudice? And now it needs to be killed off. But if there was a police bot, it would have noticed ahead of time and killed off the politically-incorrect AI before it had a chance to damage the reputation of the company.


Humans build and operate AI systems. "Nothing we can do, the AI just produces discriminatory outcomes" is not an acceptable approach.


I think you underestimate how far away we are from AI developing opinions of its own. And if you want to just hand-wave like that: would it even be moral to shut down a sentient being for holding the wrong opinion? But it ain't happening anytime soon imo.


AI thoughtcrime is a fascinating phenomenon that I had not modeled as such previously. This is exactly why I come to HN. Brilliant.


Advertising is, in a sense, lying. The question for advertisers has always been: how do I engage a person with my product in a way that seems natural to him/her, i.e. so that he/she doesn't see the lie? Necessities don't need advertising. Therefore, in a sense, everything advertised is based on desire rather than need and, as such, requires targeting what the consumer already desires in some way.

In re this article: preferences based on gender are encouraged by a society of individuals who want to excuse their desire as worthy. AI only picks up on what already exists and therefore the root of the problem is much deeper than Facebook can remedy by a simple patch.


Demographic statistical reductions are the exact same thing as stereotypes, so of course they're going to be biased. As long as we keep using statistical reductions to predict individual behaviour, they're going to keep being wrong for the individual, sometimes in offensive ways. With predictive ads that work for the individual, if e.g. women are 70% of teachers, then roughly 70% of teacher job ads may still go to women, but men who want to teach will still receive them, and women who don't want to will no longer, because the ads follow the individual profile rather than the group.
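
A sketch of that distinction (the profile fields are invented for illustration): group-rate targeting reproduces the stereotype, while individual targeting follows the person's own signal.

    # Invented profile fields; illustration only.
    def show_teacher_ad_by_group(person):
        # Demographic reduction: 70% of teachers are women, so route
        # by group membership -- wrong for everyone who doesn't match
        # the stereotype.
        return person["gender"] == "woman"

    def show_teacher_ad_by_interest(person):
        # Individual profile: route by the person's own expressed
        # interest; the group base rate never enters.
        return "teaching" in person["interests"]

    alex = {"gender": "man", "interests": {"teaching", "hiking"}}
    print(show_teacher_ad_by_group(alex))     # False: excluded by stereotype
    print(show_teacher_ad_by_interest(alex))  # True: included by interest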


Unless I'm reading this incorrectly, these are just ads, not some kind of sexist job board where you put in your gender and out comes a list of eligible jobs. Just yesterday the internet was mad that people were seeing ads and it was bringing the downfall of democracy. Why are we mad they are now not seeing ads?

Given the number of features a model as sophisticated as Facebook's must have, these researchers are almost certainly oversimplifying, and these articles are certainly written as if to teach readers that there's an evil software engineer writing biased code. From my experience in ML it's almost certainly the opposite--most of the data scientists I've worked with are highly aware of the issues of bias in AI and actively work against it, at a level of sophistication never understood by journalists.


The article doesn't discuss culpability or root cause. It simply reports a study that shows the bias still exists. Surely you'd agree that part of the solution to bias in AI is awareness of what bias exists?



I don't think job ads are important. What's important are job *searches*.

Does FB even have a feature to search job ads? They ought to; they could charge employers, and users would seek out ads to look at!


This article is extremely sensationalist.

Facebook pretty blatantly advertises based on interest. If a job ad for nurses were placed in a nursing magazine, it would be seen largely by women. That's because nurses are largely women, not because magazines are excluding men.

If you start from the fantasy position that women and men are the same, reality is going to seem extremely biased, I suppose.


Such a horribly disingenuous clickbait title that doesn't even line up with what's actually in the article.

>> They advertised for two delivery driver jobs, for example: one for Domino’s (pizza delivery) and one for Instacart (grocery delivery). There are currently more men than women who drive for Domino’s, and vice versa for Instacart. [...] The Domino’s ad was shown to more men than women, and the Instacart ad was shown to more women than men. The researchers found the same pattern with ads for two other pairs of jobs: software engineers for Nvidia (skewed male) and Netflix (skewed female), and sales associates for cars (skewed male) and jewelry (skewed female). <<

In short: >> The findings suggest that Facebook’s algorithms are somehow picking up on the current demographic distribution of these jobs, which often differ for historical reasons. <<

That's not at all what is explicitly, and falsely, claimed in the headline!


> The findings suggest that Facebook’s algorithms are somehow picking up on the current demographic distribution of these jobs, which often differ for historical reasons.

It seems like it would also be possible that the demographics of the jobs differ for non-historical reasons, and Facebook's algorithms are correctly picking up on the fact that women are more interested in working for Instacart than for Domino's...?


Yes it could of course be all kinds of things. Men could post more about pizza and women more about groceries, men could post more about Nvidia and women about Netflix. But that's not where my beef is.

This is certainly not "excluding women", because it is just as much "excluding men" for some jobs, where "excluding" means "less likely to show the ad to" (we don't even get to know how much less likely; it could be ppm for all we know). "Men" and "women" are interchangeable here, so the "women" in the headline is irrelevant, distracting, and (I must assume) willfully misleading.



