You're very much missing the point. The machine obviously isn't intentionally sexualizing anyone, but it's producing a bad result, and not only is the result bad, it can be perceived as sexualization (regardless of whether there's bias or not). The machine lacks understanding, so it produces a bad result, and that result is Extra Bad for some people.
Let's say I started a service and designed a machine to produce nice textile patterns for my customers based on their perceived preferences. If the machine started producing some ugly textiles with patterns that could be perceived as swastikas, the answer is not to say "well, there are many cultures where the swastika is not a harmful symbol, and we never trained the machine on nazi data". The answer is to look at why the machine went in that direction in the first place and change it to not make ugly patterns, and maybe teach it "there are some people who don't like swastikas, maybe avoid making those". It's a machine built to serve humans, and if it's not serving the humans in a way that the humans say is good, it should be changed. There's no business loss to having a no-swastika policy, just as there's no business loss to a rule that says "don't zoom in on boobs for photos where the boobs aren't the point of the photo".
This problem has _nothing_ to do with sensitivities; it's about teaching the machine to crop images in an intelligent way. Even if you weren't offended by the result of a machine cropping an image in a sexualized way, most folks would agree that cropping the image down to the text on a jersey is not the right output of that model. Being offensive to women with American sensibilities (a huge portion of Twitter's users, I might add[0]) is a side effect of the machine doing a crappy job in the first place.

[0] https://www.statista.com/statistics/242606/number-of-active-...
“Badness” is not a property of the object; it is created by the perceiving subject. What AI does is an attempt at scaling the prevention of a particular notion of “badness” that suits its masters. In other words, Twitter is just pushing another value judgement onto the entire world.
Even the value of “no one should get offended” is subjective, and in my opinion makes a dull, stupid world. Ultimately it is a cultural power play, which is what it is; just don’t try to dress it up in ethics.
Badness is indeed a property of the output of this algorithm. A good image crop frames the subject of the photo so that it fits nicely in the provided space. A bad image crop zooms in on boobs for no obvious reason, or always prefers showing white faces over Black faces.
You're attempting to suggest that the quality of an image crop cannot be objectively measured. If the cropping algorithm changes the focus or purpose of a photo entirely, it has objectively failed to do its job. It's as simple as that: the algorithm needs to fit a photo into a rectangle, and in doing so it must not change the perceived purpose of the photo. Changing a photo from "picture of woman on sports field" to "boobs" is an obvious failure. Changing a photo from "two politicians" to "one white politician" is an obvious failure. The existence of a gray area doesn't mean there is no "correct" or "incorrect".
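To make "objectively measured" concrete, here's a minimal sketch in Python of one way you could score a crop: check how much of the photo's subject actually survives it. This is entirely hypothetical (I have no idea what Twitter's pipeline really does); the Box type, the min_coverage threshold, and the assumption that some upstream detector hands you a subject box are all invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Box:
        left: int
        top: int
        right: int
        bottom: int

        def area(self) -> int:
            return max(0, self.right - self.left) * max(0, self.bottom - self.top)

    def intersect(a: Box, b: Box) -> Box:
        # Overlapping region of two boxes; degenerate (zero area) if they don't overlap.
        return Box(max(a.left, b.left), max(a.top, b.top),
                   min(a.right, b.right), min(a.bottom, b.bottom))

    def subject_coverage(subject: Box, crop: Box) -> float:
        # Fraction of the subject's area that survives the crop (0.0 to 1.0).
        if subject.area() == 0:
            return 0.0
        return intersect(subject, crop).area() / subject.area()

    def crop_is_acceptable(subject: Box, crop: Box, min_coverage: float = 0.9) -> bool:
        # A crop that keeps less than min_coverage of the subject has changed
        # what the photo is "about", so score it as a failure.
        return subject_coverage(subject, crop) >= min_coverage

    # Example: the detected person spans most of the frame, but the crop keeps
    # only the torso region -- the check flags it as a bad crop.
    person = Box(100, 50, 300, 600)
    torso_crop = Box(100, 250, 300, 400)
    print(round(subject_coverage(person, torso_crop), 2))  # 0.27
    print(crop_is_acceptable(person, torso_crop))          # False

The numbers and the threshold are arbitrary, but that's the point: "did the crop keep the subject" is a checkable property, not a matter of taste.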
> Even the value of “no one should get offended” is subjective, and in my opinion makes a dull, stupid world.
You'd agree with the statement "I don't care if my code does something that is by definition racist"?
> If the cropping algorithm changes the focus or purpose of a photo entirely, it has objectively failed to do its job.
You just shifted the problem to needing an objective definition of “purpose” and a tolerable delta of deviation from it. That’s just kicking the can down the road.