Wow, I guess I'm #3. But my idea quickly expanded into a full-fledged "credibility index" - because why not also keep track of present-day claims they make about things already known to be false? It seems like one and the same issue to me.
And wouldn't it make sense to quantify the vagueness or concreteness of the statements made? It's easy for a pundit to say something is "dangerous for America" without naming a specific consequence, and people who make specific predictions that are borne out deserve more credit than people who make vague claims that can be neither confirmed nor falsified.
And from there, why not keep track of how many times a commentator uses equivocating devices like "borders on", "all but certain", "just short of"?
And then, what about commentators who keep predicting a specific thing will happen but ascribing different causes to it?
You can also measure how "substantive" a commentator tends to be by looking at how often concrete facts or examples are cited to support their claims, vs. how often vague subjective judgments are invoked, vs. how often nothing is brought forward to support a claim at all.
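A crude first pass at this could even be automated. Here's a naive Python sketch; the cue lists below are made-up placeholders standing in for a real claim/evidence classifier, not anything authoritative:

    import re

    # Hypothetical cue lists; a real system would need an actual
    # evidence classifier, not keyword matching.
    CONCRETE_CUES = [r"\b\d+(\.\d+)?%?\b", r"\baccording to\b", r"\bstudy\b", r"\breport(ed)?\b"]
    VAGUE_CUES = [r"\bdangerous\b", r"\bdisaster\b", r"\bun-?american\b", r"\beveryone knows\b"]

    def substantiveness_tally(sentences):
        """Crudely bucket a pundit's sentences into concrete / vague / unsupported."""
        concrete = vague = unsupported = 0
        for s in sentences:
            if any(re.search(p, s, re.I) for p in CONCRETE_CUES):
                concrete += 1
            elif any(re.search(p, s, re.I) for p in VAGUE_CUES):
                vague += 1
            else:
                unsupported += 1
        return {"concrete": concrete, "vague": vague, "unsupported": unsupported}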
You can measure how frequently a commentator tends to repeat the same news event or same analysis.
You can take transcripts of their shows and run them through a program that determines what reading level (in terms of school grades, like 7th grade or 9th grade) the discussion is at.
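The standard tool for this is a readability formula like Flesch-Kincaid, which maps text to a US school-grade level. A minimal Python sketch, with a rough syllable estimator (real implementations use pronunciation dictionaries):

    import re

    def syllable_estimate(word):
        """Very rough: count runs of vowels, drop a silent trailing 'e'."""
        count = len(re.findall(r"[aeiouy]+", word.lower()))
        if word.lower().endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def fk_grade(text):
        """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59."""
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(syllable_estimate(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59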
And there are probably tons of others.
Granted, systems like this, if they caught on, could turn around and bite us, like those stories about wish-granting genies with a sense of irony: surely the stats would generate a whole host of unanticipated structural biases that change the character and quality of the discussion, and surely pundits would discover new ways to misinform and adapt to the structure accordingly.
But that has already happened. Pundits already speak in a language designed to make as many claims as possible without being on the hook in any way that risks their credibility. So while it would create new biases, I think it would overturn a whole bunch of pre-existing biases that are much worse (or at least shed light on them).
This is a long-winded way of saying I really want something like this to happen, and to work.
Not sure what you mean. I wasn't being critical; I was actually encouraging the development of such sites. And I fully believe in the feasibility of eventually using all the different metrics I described. It's doable, and I want to see it.
Hah - I had a similar idea, except it wasn't pundits that you measured, it was regular people. You could make predictions, for the future, and people could vote up the "truthfulness" of them as time went by. You'd need some sort of penalty to discourage people from predicting everything under the sun though.
Yeah, I just wanted to create a site where people can submit articles, blog posts, or any prediction about the future, along with a maturity date. At the maturity date, you can mark it correct or incorrect.
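The core record for a site like that is small. A sketch of what I have in mind (the names are just illustrative):

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Prediction:
        author: str
        claim: str                      # quoted article, blog post, etc.
        maturity: date                  # when the claim becomes judgeable
        outcome: Optional[bool] = None  # stays None until maturity passes

        def resolve(self, correct: bool, today: Optional[date] = None):
            today = today or date.today()
            if today < self.maturity:
                raise ValueError("can't resolve before the maturity date")
            self.outcome = correct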
Only then can you sort out intelligent predictions from noise.
See Derren Brown's "The System". You wouldn't sort out intelligent predictions from noise, you'd sort out (intelligent and/or lucky) predictions from noise. Which still wouldn't give you any idea which future predictions to trust.
Discount the value of a correct prediction by how many predictions that person made in total. You'd need some way to prevent people from creating a lot of sock-puppet accounts, but that's probably doable.
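One way to implement the discount is a smoothed hit rate, where pseudo-counts pull small samples toward 50% (a sketch; the prior weights are arbitrary):

    def credibility(correct, total, prior_hits=1, prior_misses=1):
        """Smoothed hit rate: a correct prediction is worth less the more
        predictions its author has sprayed out, and a tiny sample can't
        outrank a long, honest track record."""
        return (correct + prior_hits) / (total + prior_hits + prior_misses)

So credibility(1, 1) is about 0.67, while credibility(80, 100) is about 0.79: one lucky hit doesn't beat a long accurate record.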
"But discussions between the companies are confirmed."
Strictly speaking, there is no prediction in the article. This is a claim about the present. Granted, he has already had to amend the original claim ("late stage" vs. "early stage" talks), so take him to task over that rather than imply a prediction is being made.
That's true, but I think that's nitpicking. There is a claim here: "Google in talks to acquire Twitter" - which has a questionable truth value. This is exactly the type of statement a predictions site could test: is it true or not?
These scores are not comparable, because they are missing a critical piece of the puzzle: the likelihoods of your predictions.
I could predict every day for the rest of the year that the Dow Jones will be above 2 and below 4 million, and every one would come true.
The right measure is to take the difference between the market's opinion and the pundit's (like a real market, except that unlike company shares these contracts always settle at 1 or 0 on a certain date - like a covered option) and average that.
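A minimal sketch of that averaging, assuming each prediction is a binary contract settling at 1 or 0 and that some market probability is available to trade against:

    def pundit_edge(predictions):
        """Average the pundit's advantage over the market across settled
        contracts. Each item is (pundit_prob, market_prob, outcome), with
        outcome 1 or 0 at the settlement date."""
        total = 0.0
        for pundit_p, market_p, outcome in predictions:
            # Buy at the market price if the pundit thinks the event is
            # underpriced (payoff: outcome - price); sell short if
            # overpriced (payoff: price - outcome).
            if pundit_p > market_p:
                total += outcome - market_p
            else:
                total += market_p - outcome
        return total / len(predictions)

On the Dow example above, the market price would already sit at ~1, so that trivially true prediction earns roughly nothing.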
This is definitely true. Plus, I think these sites are open to abuse. If the submissions are coming from a self-selected user base, surely it's possible to self-servingly interpret certain statements as "predictions" that aren't, or a certain set of events as "proving" a prediction one way or the other.
There is only so much fudging that can occur, of course. But this kind of on-the-margins behavior is a new challenge that this relatively new breed of sites will have to deal with.
But still. How will that affect him at all? There are no consequences for him being wrong. He won't lose advertisers. He won't lose RSS readers. He only gains pageviews, mindshare, subscribers, and Google hits by running this stuff and then updating it later to say it's not true.
I don't have a point here, this just strikes me as a pretty weird situation.
It probably has to do with the ultimate irrelevance of the information being presented. I read the headline and cared for all of 2 seconds. If you were a respected pundit and you said "Global warming will kill us in a year if we don't all buy electric cars tomorrow", and we all went and bought electric cars, and you were wrong, the consequences would likely be far more severe. Arrington posts unfounded rumors. About the Internet. Eh.
Arrington adds in a comment:
> its well sourced, but who knows. Usually simply posting the rumor shakes a lot more information out of the tree. We’ll be updating.
Yeah. Sure.