This has nothing to do with machine learning. It is a simple correlational situation.
If African Americans have, on average, poorer credit ratings, then correlational models will begin to equate race with poor credit ratings. That impacts their ability to get credit, which in turn feeds back into the same mechanism.
...of course RACE isn't allowed to be factored into financial applications, so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.
The problem isn't with the models - it's with reality.
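To make the proxy effect concrete, here's a rough sketch with synthetic data (the feature names, correlation strengths, and default rates are all made up for illustration): a model that never sees race, only a zip-code group correlated with race, still ends up scoring the two groups very differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- never shown to the model.
race = rng.integers(0, 2, size=n)

# Zip-code group correlates strongly with race (think residential segregation).
zip_group = np.where(rng.random(n) < 0.8, race, 1 - race)

# Historical default rates differ by zip group (reality is already skewed).
default = (rng.random(n) < np.where(zip_group == 1, 0.30, 0.10)).astype(int)

# The model only ever sees the zip group; race is "not factored in".
model = LogisticRegression().fit(zip_group.reshape(-1, 1), default)
risk = model.predict_proba(zip_group.reshape(-1, 1))[:, 1]

# Yet predicted risk still differs sharply by race, via the proxy.
print("mean predicted risk, race=0:", risk[race == 0].mean())
print("mean predicted risk, race=1:", risk[race == 1].mean())
```

Swap in a fancier model and more features and you get the same thing: the math is just faithfully recovering the correlations it was handed.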
The author famously said "Math is Racist". It's hard to get over such stupidity.
It seems like they are making the case that if reality is biased then models of reality will retain that bias.
If so they seem to make the point well.
If the only information you have about a loan applicant is where they live, your decision will be 'biased' if where someone lives is correlated with other factors (as opposed to, say, declining the loan because they live on a flood plain).
In this context, saying "Math is Racist" is like saying "Physics hates Fat People" because gravity disproportionately affects heavier people. Accurately reporting what is happening is not biased, making decisions without considering [edit: or not making a decision because you didn't consider] the context is biased.
Maths is a tool (well, a collection of tools), and the onus is first on the tool user to use it in a fair way. Yes, it is important for educators and tool creators to be mindful of how these tools will be used in practice, but there is a big jump from that idea to "Math is Racist".
Isn’t this similar to an argument that could be made against race- and gender-based affirmative action? I don’t understand how organizations like the ACLU are critical of face recognition tech because it reinforces implicit bias that engineers have, but then turn around and support race- and gender-based affirmative action that similarly reinforces implicit bias, where PoC (but not Asians, for some reason) and non-males are presumed to be disadvantaged purely due to their identity.
I'm not sure what argument you are referring to here (if it was one above).
I think these organisations are criticising the tool builders for creating tools that are easily misused (or are created with unreasonable limitations, like only being valid for university students at one university, but are sold as widely applicable).
Supporting affirmative action initiatives like those you list is an attempt to address the biases that exist in reality. I think this is often a bit backward (not addressing the root cause), but it can be expensive (in time, effort, money, politics) to address the actual root cause, so these programs aim to address the bias at the place it manifests.

This is a similar (dare I say pragmatic?) argument to "it would be cheaper and more effective to just give everyone a no-strings-attached payment each month than to provide means-tested payments to those who need help".

Determining whether these arguments are correct is a different thing altogether, and I have no idea if these programs are cheaper and more effective than dealing with the root problem, or if it's even possible to define and address the root problem in the first place!
The two things you contrast above are fundamentally different - one is criticising tools and tool builders, the other trying to address perceived biases in the world.
When you say "[non-white/non-males] are presumed to be disadvantaged", have you talked to or listened to black or female academics? I follow ~4 black academics on twitter, and each of them has contributed to the #BlackintheIvory topic. Their identity plays a huge role in how others treat them.
> but not Asians for some reason
Asian people are distinct because so many of them have immigrated recently, and immigration requirements favor educated and well-off folks. That masks many issues because they should have better than average outcomes due to better than average education and skills.
> so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.
You realize this too is illegal, right? The law doesn't say "you can't use race" - instead it says (paraphrased by the Brookings Institute): "Are people within a protected class being clearly treated differently than those of nonprotected classes, even after accounting for credit risk factors?"[1]
O'Neil points out that math is often used to obfuscate this (whether deliberately or not). This is a valid point, and one that people who think of math as a value-neutral tool should consider.
I didn't love the book, but it's difficult to make the argument that she is stupid.