In fact, it does. It shows that human bias (facial recognition) and the inability to anticipate all possible outcomes (Uber; halting problem) are built into the software we write.
While another human may be able to overcome their racial bias, and another driver might not have run over that cyclist, the fact remains that the software did, which shows that it inherits human frailties and limitations. It doesn't surpass them.
Bias is an inevitable problem, but a fixable one. There are also tradeoffs in how bias works in computers vs. humans. If you can successfully point out bias to a computer, it will happily adjust accordingly, whereas human bias is notoriously stubborn, and some humans will even deliberately or covertly embrace their biases despite being aware of them.
> the fact remains that the software did, which shows that it inherits human frailties and limitations. It doesn't surpass them.
Well sure, some of them, but it's not as if there aren't other areas where the computer is already superior: no intoxicated driving, tired driving, road rage, racing or otherwise driving at dangerous speeds, high-speed or risky lane merging, driving while eating or applying makeup, texting, watching YouTube, fiddling with a GPS, or having your kid cry out in the back seat, etc. So there are a lot of improvements there. Of course, I don't think those systems are anywhere near human-like intelligence, but the current flaws are not a demonstration that these problems cannot be overcome.