I completely agree with you on this point. AI practitioners have an ethical responsibility to communicate these shortcomings to clients. I know my firm regularly talks to clients about how humans should intervene in the AI systems we build. It's a necessary conversation to ensure clients know we aren't building them something flawless.
I loved your .001% example; I'm going to steal that when I talk to folks. We often describe how systems fail at scale and how being wrong 1/1,000,000 times can wildly backfire at large numbers.
All that being said, I just don't want people around this forum thinking facial recognition is still some fringe, low-accuracy modeling exercise like it used to be. The models are actually incredibly impressive these days.