I was generalizing the bug, but my basis is an assumption: the programmer didn't make some conscious or unconscious racist decision, the model just did something basic like "match shapes and colors," and the training data happened to have a bunch of gorillas in it for one reason or another.
I think this gets fixed by better training data and more pictures of really dark-skinned people: with more images of dark-skinned people supervised-labeled as people, the matching stops thinking those people are closer to gorillas.
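To make that concrete, here's a toy sketch of the failure mode I'm assuming (a nearest-centroid matcher over made-up 2-D "embeddings", nothing to do with Google's actual pipeline): when one group of people is barely covered by labeled examples, a query from that group can land closer to the wrong class, and adding supervised labels that cover the gap flips the match back.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid(points):
    return points.mean(axis=0)

def classify(query, centroids):
    # pick the label whose centroid is closest in embedding space
    return min(centroids, key=lambda label: np.linalg.norm(query - centroids[label]))

# fake embeddings: "person" examples clustered around (0, 0), "gorilla" around (5, 5)
people   = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(200, 2))
gorillas = rng.normal(loc=(5.0, 5.0), scale=0.5, size=(200, 2))

# a person whose features fall in a region the labeled "person" data doesn't cover
query = np.array([3.2, 3.2])

centroids = {"person": centroid(people), "gorilla": centroid(gorillas)}
print(classify(query, centroids))  # -> "gorilla": misclassified

# add supervised "person" labels that cover the under-represented region
more_people = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(200, 2))
centroids["person"] = centroid(np.vstack([people, more_people]))
print(classify(query, centroids))  # -> "person": the match flips once the data covers them
```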
Comically/sadly, we'll know we're getting closer to training sets that are more inclusive when Google starts labeling gorillas as people.
I think there are some systemic reasons why there aren't more diverse populations in training data, and those are more society issues than AI issues (i.e., rich people are more represented, rich people are disproportionately certain races, therefore those races are over-represented).
And finally, I've worked in software where people just test what they themselves are and know, so I've seen plenty of test plans that are too simple and only exercise the programmer's own date of birth and address. That doesn't mean the software is racist because all the programmers are Asian males; it just means the quality review wasn't thorough enough to include proper test conditions.
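As a made-up illustration of what I mean by a wider test plan (the function `format_mailing_label` and all the cases are hypothetical, not from any real codebase): instead of one test with your own DOB and address, parameterize it over inputs that don't look like you.

```python
import datetime
import pytest

def format_mailing_label(name: str, dob: datetime.date, address: str) -> str:
    # toy function under test: just joins the fields into a label
    return f"{name} ({dob.isoformat()})\n{address}"

@pytest.mark.parametrize(
    "name, dob, address",
    [
        # the "easy" case a developer might write because it matches their own data
        ("Wei Zhang", datetime.date(1990, 1, 1), "1 Main St, Springfield"),
        # cases a thorough review would add: long names, non-Latin scripts,
        # old and recent birth dates, addresses in other formats
        ("María-José de la Cruz Fernández", datetime.date(1942, 2, 28), "Calle 5 #12-34, Bogotá"),
        ("Ngũgĩ wa Thiong'o", datetime.date(2004, 12, 31), "PO Box 90, Limuru"),
        ("李小龙", datetime.date(1975, 7, 20), "銅鑼灣, 香港"),
    ],
)
def test_label_contains_all_fields(name, dob, address):
    label = format_mailing_label(name, dob, address)
    assert name in label
    assert dob.isoformat() in label
    assert address in label
```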
I might be inappropriately conflating software bugs from different areas, but this is what makes me think "stupidity or weakness is more likely than racism."