> In your case, the solution seems, to me, to be as simple as making the system ignore whatever variables you feel shouldn't be taken into account, whether that's gender or something else.
But then what's the point of using 'AI' at all if people are just gonna ignore what it comes up with?
People are seeing the world the way they want to see it, not the way it is. AI sees the world the way it is, not the way people would like it to be.
> But then what's the point of using 'AI' at all if people are just gonna ignore what it comes up with?
I admit it's a little naive but here's a metaphor that works for me.
Imagine you have access to an "AI" that's the best route finder in the world. It finds the best possible route between any two places you wish to go.
However, you have a fear of going through a certain neighborhood (maybe you grew up there and have bad memories), or maybe a family member once died in a crash on the freeway and now you stick to surface streets.
The AI is so good that you can communicate these psychological and messy human preferences to the AI and it re-routes as appropriate. Is this a better or worse outcome and does providing these provisos make the AI pointless?
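To make the metaphor concrete: honoring a "don't route me through there" preference isn't ignoring the route finder, it's just adding a constraint to the search. Here's a minimal sketch (my own toy example, not any real routing system's API) using Dijkstra's algorithm over a made-up city graph, where the user's avoided places are simply excluded from consideration:

```python
import heapq

def shortest_path(graph, start, goal, avoid=frozenset()):
    """Dijkstra's algorithm over a weighted graph, skipping any node
    the user has asked to avoid (the 'bad neighborhood' constraint)."""
    # graph: {node: [(neighbor, cost), ...]}
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1], d
        for nbr, cost in graph.get(node, []):
            if nbr in avoid or nbr in visited:
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return None, float("inf")  # no route satisfies the constraints

# Hypothetical map: the freeway is fastest, old town is a slower surface route.
city = {
    "home":    [("freeway", 1), ("oldtown", 3)],
    "freeway": [("work", 1)],
    "oldtown": [("work", 3)],
}

print(shortest_path(city, "home", "work"))                     # via freeway
print(shortest_path(city, "home", "work", avoid={"freeway"}))  # re-routed
```

The second call still returns the *optimal* route given what the user actually cares about; the constraint changes the problem, it doesn't make the solver pointless.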