It's hard to know what to be more amazed about: the fact that with one boneheaded filter Apple now seems creepier than Google in terms of respecting your privacy (one of the few things they could boast about), or the fact that it still seems to be amateur hour over at iCloud. Think about it for a second: they are literally pushing code into production that amounts to if (contents.indexOf(bad_phrase) != -1) delete_email();. How is the takeaway anything other than "Of course Siri and Maps are a disaster; they can't even filter email in a more sophisticated fashion than 1993."
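To spell out what that one-liner implies, here is a minimal sketch of the kind of naive blocklist filter being described; the phrase list, the function names, and the silent-delete behavior are assumptions for illustration, not Apple's actual code.

    // Hypothetical sketch of a naive substring blocklist filter,
    // NOT Apple's actual implementation.
    const blockedPhrases: string[] = ["barely legal teens"]; // assumed blocklist

    interface Email {
      subject: string;
      body: string;
    }

    // Returns true if the message should be silently dropped.
    // The obvious flaw: any occurrence of the phrase, even in a quoted
    // article or an academic discussion, triggers deletion with no score,
    // no context, and no notice to either party.
    function shouldSilentlyDelete(mail: Email): boolean {
      const contents = (mail.subject + "\n" + mail.body).toLowerCase();
      return blockedPhrases.some((phrase) => contents.indexOf(phrase) !== -1);
    }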
I agree that it's boneheaded, but I'm not convinced this is a privacy issue, assuming the email only gets dropped. Shouldn't something need to reach the eyes of a human in order to be a privacy issue?
If the email is deleted, that's an extension of the underlying issue: your email is being scanned in the first place, whether by machine or by human.
Deletion on top of that makes it worse. Call it a privacy issue or a personal-data-control issue; the label doesn't matter.
Apple will likely correct this anyway. Two academics could be chatting over email about the potential social harm of "barely legal teens" categories in mainstream porn. They might argue that the slogan is a provocative, predatory gesture towards all young women. The category often strives for "as young looking as possible while legal", which is in poor taste and creepy, yet it sits alongside "brunette" as just another category. They might be emailing about exactly that, in which case Apple is wrong to delete the email.
The heuristic is a bit more complex than an indexOf. I just tried it myself: an email containing nothing but the phrase "barely legal teens" is given a "spamscore" of 3 in the header by Apple's iCloud IMAP servers and is delivered, but marked as likely spam.
I would argue that this kind of filtering is fine. Maybe the deletion was a glitch in the server, or maybe other signals in that mail pushed its spamscore higher. Whether iCloud's mail servers should silently decline to deliver mail at all is a whole different argument.
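To make the distinction concrete, here is a rough sketch of how a score-based filter differs from outright deletion on a single phrase match; the weights, thresholds, and the sender-reputation signal are invented for illustration and are not iCloud's actual values.

    // Hypothetical score-based spam heuristic, loosely modelled on the
    // "spamscore" behavior described above. All weights and thresholds
    // are made up for illustration.
    interface Signal {
      description: string;
      weight: number;
    }

    function scoreMessage(body: string, senderReputation: number): number {
      const signals: Signal[] = [];
      if (body.toLowerCase().includes("barely legal teens")) {
        signals.push({ description: "blocklisted phrase", weight: 3 });
      }
      if (senderReputation < 0) {
        signals.push({ description: "poor sender reputation", weight: 2 });
      }
      return signals.reduce((sum, s) => sum + s.weight, 0);
    }

    // A single signal only flags the mail; several independent signals
    // are needed before it is rejected outright.
    function disposition(score: number): "deliver" | "mark-as-spam" | "reject" {
      if (score >= 7) return "reject";        // assumed rejection threshold
      if (score >= 3) return "mark-as-spam";  // deliver, but flag via header
      return "deliver";
    }

In this sketch a lone blocklisted phrase scores 3 and lands the mail in the spam folder, matching the test above, while rejection would only kick in once several independent signals stack up.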