Nothing. But then, nothing prevents a user from manually reporting an image of the Statue of Liberty as a violation of such a policy either. At most, this could help a user automate such reports. That kind of abuse (users reporting benign images) is far from the most difficult abuse problem facing these platforms, though it does have the effect of making fighting "real" abuse harder.
I like the comparison of the program to hitting “report image” on a platform. The advantages of StopNCII are that you can report proactively, and that the report goes to all the participating platforms at the same time.
Misuse of the StopNCII platform is something that we (Meta/Facebook), the UK Revenge Porn Helpline and the full StopNCII team discussed a lot. Once you have confirmed that a piece of content violates your platform’s policy, it’s easier to find other instances and filter out false positives, because now you can use your platform-specific ML or in-house photo-detection algorithms.
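To make that concrete, here is a rough sketch of the kind of perceptual-hash matching that can surface other instances of a confirmed image. It uses the open-source imagehash library and an illustrative distance threshold, not Meta's in-house systems, so treat it as an illustration of the idea rather than how any platform actually does it.

```python
# Illustrative hash-based image matching, not Meta's in-house system.
# Uses the open-source `imagehash` library (pip install imagehash pillow);
# the distance threshold below is made up for the example.
from PIL import Image
import imagehash


def find_likely_matches(confirmed_path, candidate_paths, max_distance=8):
    """Return candidate images whose perceptual hash is close to the confirmed image's."""
    confirmed_hash = imagehash.phash(Image.open(confirmed_path))
    matches = []
    for path in candidate_paths:
        candidate_hash = imagehash.phash(Image.open(path))
        # Subtracting two hashes gives the Hamming distance (0 means identical hashes).
        if confirmed_hash - candidate_hash <= max_distance:
            matches.append(path)
    return matches
```

The point being: a loose threshold catches near-duplicates but also unrelated images, so having one human-confirmed violation lets reviewers tighten scrutiny around that cluster instead of acting on hash matches alone.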
> "Misuse of the StopNCII platform is something that we (Meta/Facebook) discussed a lot. Once you have confirmed that a piece of content violates your platform’s policy, it’s easier to find other instances and filter out false-po"
Okay, and what was the conclusion then? Can I use the service to take down arbitrary content online that has nothing to do with the subject at hand? Can I just implicate other devices, users and accounts that store innocent data and get it deleted without anyone's consent?