It's an interesting paper, for sure. I think this only works in practice if 1) there's a clear reason the content was removed that a user could take steps to avoid in the future, and 2) you're willing to admit that reason to the user.
With so much content moderation these days coming from machine learning (violates #1), personal vendettas from human moderators (violates #1 and #2), and quasi-legal threats from third parties (violates #1 and #2), there's not much room left for user education.