One of the reasons that agencies have had trouble preventing these events is that they have the data but haven't connected enough of it. That's hard to do when you're collecting masses of irrelevant data alongside the useful data.
If you are having trouble finding a needle in the haystack, adding more hay isn't going to help; it's just going to make your problem worse.
Sure, but it doesn't seem realistic to suggest that they stop collecting it.
The best idea I've come up with (and it has serious flaws of its own) is a social network where deletion is impossible, but where for any given submission you can choose between anonymity and authority. Anonymous utterances might be true, but they'd need a very high truth-value indeed to overcome the skepticism that would attach to them. Authoritative statements would be backed by a person's reputation, which in turn could look good or bad to different people depending on the historical quality of their contributions. Trolls and spammers would thus be marginalized by the low quality of their output, which might come to be regarded as even worse than anonymous whispers.
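To make the idea concrete, here's a minimal Python sketch of how such a system might work under my assumptions: an append-only log with no delete operation, posts that are either anonymous or attributed, and a running reputation score per attributed author. All of the names here (Submission, Network, rate, credibility) and the scoring rules are hypothetical, chosen only to illustrate the mechanism, not a spec for an actual product.

```python
from dataclasses import dataclass, field
from typing import Optional
import time


@dataclass(frozen=True)
class Submission:
    """An immutable post: once stored, it can never be deleted or edited."""
    body: str
    author: Optional[str]  # None = anonymous; a username = authoritative
    timestamp: float = field(default_factory=time.time)


class Network:
    def __init__(self) -> None:
        self._log: list[Submission] = []          # append-only; no removal API exists
        self._reputation: dict[str, float] = {}   # per-author running reputation

    def post(self, body: str, author: Optional[str] = None) -> Submission:
        sub = Submission(body=body, author=author)
        self._log.append(sub)
        return sub

    def rate(self, sub: Submission, quality: float) -> None:
        """Readers score a submission's quality in [-1.0, 1.0].

        Anonymous posts carry no reputation, so ratings on them go nowhere;
        attributed posts accumulate onto their author's history."""
        if sub.author is not None:
            self._reputation[sub.author] = self._reputation.get(sub.author, 0.0) + quality

    def credibility(self, sub: Submission) -> float:
        """Weight a reader might attach to a submission.

        Anonymous posts start from a deep discount; attributed posts inherit
        whatever reputation their author has earned or squandered."""
        if sub.author is None:
            return -1.0  # heavy default skepticism toward anonymity
        return self._reputation.get(sub.author, 0.0)


# Illustrative use: an anonymous tip versus an attributed claim.
net = Network()
tip = net.post("Something is going to happen downtown.")            # anonymous
claim = net.post("I saw the same thing.", author="local_reporter")  # attributed
net.rate(claim, 0.8)
print(net.credibility(tip), net.credibility(claim))  # -1.0 vs. 0.8
```

The point of the sketch is that the only lever is reputation: nothing is ever removed, and the system simply makes low-quality or anonymous contributions cheap to discount rather than trying to censor them.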
I don't have a theoretical foundation for this, and while I've thought it out in rather more depth than I'm offering here, it's the product of intuition rather than analysis and iteration. Web annotations seem like the best path to explore for experimentation, but I'm not equipped or inclined to develop this into an actual product.