
Cherry-picked examples of an incredibly complex system failing (when it succeeds 99% of the time) aren't particularly compelling arguments against the system.



I found it odd that the first thing I searched for returned dozens of different ads that were all flagged as political when they clearly weren't.


The flag is "Related to politics or issues of importance", meaning it's a broad classifier for sensitive issues, not just political ones.


“This ad ran without a disclaimer. After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down.”


>issues of national importance

Basically anything that hits this list and is targeted within those nations:

https://www.facebook.com/business/help/214754279118974?helpr...


Seems like a pretty arbitrary rule set. I'd love to see some examples of ads that don't have anything related to any of those topics; it seems like it would be pretty easy to argue that any ad is related to at least one of those bullet points.


I'm curious, are they notified when their ads are taken down? Or does facebook do "shadowbanning" themselves? I'd expect them to do it when nation-state interests are involved, but I guess nothing stops them from taking in the money from the regular folk and lying about ad impressions when an ad goes against their interests.


Hmm.. when I search YOGA I get this >>

"Search results display ads with text that matched your keyword search term. Only ads related to politics or issues of importance are included."

Not sure why it only shows ads related to politics (none of the ads shown are).

And, after searching 2x I was temp banned from using the tool.
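
For what it's worth, the Ad Library search tool also has a programmatic counterpart (the ads_archive Graph API endpoint). Below is a minimal sketch of what an equivalent query might look like; the version number and parameter names are assumptions based on the public docs rather than a verified call, and a real access token requires identity confirmation with Facebook:

    import requests  # third-party HTTP client

    ACCESS_TOKEN = "YOUR_TOKEN_HERE"  # placeholder; the real API requires a verified account

    resp = requests.get(
        "https://graph.facebook.com/v12.0/ads_archive",  # version number is illustrative
        params={
            "search_terms": "yoga",
            # The archive only holds ads classified as political/issue ads,
            # which would explain why a generic term like "yoga" still
            # returns only flagged ads.
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": "US",
            "limit": 25,
            "access_token": ACCESS_TOKEN,
        },
    )
    resp.raise_for_status()
    for ad in resp.json().get("data", []):
        print(ad)

The Graph API is rate-limited too, so heavy querying gets throttled much like the web tool does.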


At Facebook scale, 1% would potentially mean about 15 million daily users being shown misclassified ads.
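
Back-of-envelope sketch of that number, assuming roughly 1.5 billion daily active users (a ballpark figure, not an official one) and that each of them is shown at least one ad a day:

    daily_active_users = 1_500_000_000  # rough assumption: ~1.5B DAU
    error_rate = 0.01                   # the flip side of the "99% success" figure upthread
    affected_per_day = daily_active_users * error_rate
    print(f"{affected_per_day:,.0f}")   # -> 15,000,000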


Does it succeed 99% of the time?


It's close to it. I can't remember ever seeing an ad in my newsfeed that was incorrectly labeled with a political affiliation disclaimer. Have you?


From what my results indicate, they stop the ads from running. So it's not that you'd see ads that were incorrectly labeled; instead, you just wouldn't see the ads that should be running but were incorrectly flagged as needing a political affiliation label.

And from the looks of the Yoga ads, there are hundreds that have been flagged/paused as needing to disclose they are political (when they actually aren't).


Your anecdotal experience is completely irrelevant to the statement that you made.


Thank you, captain obvious, but it's actually not irrelevant. If the system were doing poorly, we'd be able to observe it directly.

Do you have evidence that their political ad detection does poorly?


I don't have any evidence, because I don't use Facebook. Furthermore, I never made any claims to provide evidence for.

Do you have evidence or source for the claim you made? Or are you just here to call people silly names and pass anecdotes off as fact?


Anecdotally someone may have evidence of poor performance. Assigning a specific number to its statistical probability based on that anecdotal evidence is where the problem lies.


That's not how statistics, or debate, works.


>when it succeeds 99% of the time

source?


Where did you get your two nines number from?



