Well, it's probably better to say that the 10% already accounted for all known sources of error. Given the skew between final polls and actual results seen over history, Silver's model figured there was a 1 in 10 chance that some "unknown" factor would pollute the numbers enough to produce a Romney victory. In this he was actually much more conservative than Sam Wang, who AFAICT looked only at poll sampling error and arrived at a much higher (~99%) chance of victory.
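To make the difference concrete, here's the arithmetic with made-up numbers (neither model's actual figures): if the only error is sampling error, even a modest lead implies a near-certain win, but once you add a systematic poll-vs-result skew term the variances add and the probability drops back toward Silver's territory.

    from math import erf, sqrt

    def win_prob(margin_pct, sigma_pct):
        # P(true margin > 0), assuming normally distributed error
        return 0.5 * (1 + erf(margin_pct / (sigma_pct * sqrt(2))))

    margin = 2.5  # hypothetical poll lead, in points

    # Sampling error only (Wang-style): a big poll average has a
    # small standard error.
    print(win_prob(margin, 1.0))                    # ~0.99

    # Sampling error plus a systematic skew term (Silver-style):
    # the variances add.
    print(win_prob(margin, sqrt(1.0**2 + 1.5**2)))  # ~0.92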
And actually, I think you can make the case that Silver was too conservative. Looking at the results, the poll averages were well within sampling error in all battleground states. And sorting the states by margin, Romney would have had to pick up FL, OH, VA and CO to win. Obama won Colorado by 4.7% (not too far from the predicted ~3.5%, I believe), which is comparatively huge and absolutely not explainable by polling error (a Romney victory there looks like about three sigma to me).
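The three-sigma eyeball checks out. With an assumed ~3.5-point poll margin in CO and an aggregate standard error of around 1.2 points (my numbers, not either model's):

    from math import erf, sqrt

    margin, sigma = 3.5, 1.2   # assumed CO poll margin and std error
    z = margin / sigma         # ~2.9 sigma for the margin to cross zero
    p_romney = 0.5 * (1 - erf(z / sqrt(2)))
    print(z, p_romney)         # ~2.9, ~0.002 (a ~0.2% chance)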
In hindsight, given the known state of the race on Tuesday morning, the election was never winnable for Romney. Wang was right (even though his averages came down on the wrong side in FL), Silver was too timid.
> And actually, I think you can make the case that Silver was too conservative.
Based on what I know about polling methodology, Silver's approach is better. If your possible errors are all independent, then Wang's approach is correct. But once you introduce methodological dependencies between polls, it is going to be far more confident than it should be.
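A toy Monte Carlo shows how much the correlation assumption matters. With invented leads in the four must-win states and a shared "national bias" term that hits every state at once (the sigma values are guesses, not either model's):

    import random

    # Assumed Obama poll leads in Romney's four must-win states
    leads = {"FL": 0.5, "VA": 1.5, "OH": 2.5, "CO": 3.0}
    sigma_state = 2.0   # per-state error (sampling, house effects)

    def romney_sweep_prob(sigma_bias, trials=200_000):
        wins = 0
        for _ in range(trials):
            bias = random.gauss(0, sigma_bias)  # same draw hits every state
            if all(lead + bias + random.gauss(0, sigma_state) < 0
                   for lead in leads.values()):
                wins += 1
        return wins / trials

    print(romney_sweep_prob(0.0))  # independent errors: a sweep is very rare
    print(romney_sweep_prob(2.0))  # correlated errors: far more likely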
Ironically, if Silver is right, then he'd also predict that with some high probability (at a guess, somewhere in the 70-80% range) Wang's numbers will look better than his. But in the remaining outliers the results are shocking.
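You can see that asymmetry with a proper scoring rule. A one-event toy with assumed probabilities (the true chance set to Silver's 0.9, Wang at 0.99): Wang scores better whenever Obama wins, but his expected score is worse because the rare miss is catastrophic.

    from math import log

    p_true, p_silver, p_wang = 0.9, 0.9, 0.99  # assumed, for illustration

    # If Obama wins (probability 0.9), Wang's sharper forecast scores better:
    print(log(p_wang), log(p_silver))        # -0.010 vs -0.105

    # If Obama loses (probability 0.1), Wang's score is catastrophic:
    print(log(1 - p_wang), log(1 - p_silver))  # -4.605 vs -2.303

    # Expected log scores: Silver wins on average despite usually "losing".
    for name, p in [("silver", p_silver), ("wang", p_wang)]:
        print(name, p_true * log(p) + (1 - p_true) * log(1 - p))
    # silver ~ -0.325, wang ~ -0.470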
For the record, a methodological error fundamentally similar to Wang's (treating correlated risks as if they were independent) helped Wall Street think that bonds backed by subprime mortgages were safe.
Exactly, there's some epistemology at work here. There are no first-principles models of polling bias that fit the data. Silver took the history of poll-vs-election skew as a proxy, while Wang ignored the issue and looked only at sampling error (I think -- honestly I don't know for sure).
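The "history as proxy" move is simple enough to sketch: take the final-poll-minus-result skews from past cycles (the numbers below are invented) and use their spread as the systematic-error term.

    import statistics

    # Hypothetical final-poll-minus-result skews from past cycles, in points
    historical_skew = [1.8, -0.9, 2.4, -1.2, 0.3, -2.1, 1.1]
    sigma_bias = statistics.stdev(historical_skew)
    print(sigma_bias)  # feed in as extra variance on top of sampling error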
As it happens, in this election (and across hundreds of polls) there was no poll bias. The polls were right.
Now, is that because we're lucky or because polls have gotten better, or both? I don't know that this is answerable. My intuition (but that's all it is) agrees with you: I wouldn't have put down a bet for Obama given Wang's 100:1 odds, though I might have at Silver's 10:1. Arbitrage vs. Intrade's 2:1 looks a lot like a sure thing in hindsight...
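For what it's worth, the arbitrage arithmetic, taking the odds as quoted in the thread (Intrade's ~2:1 implies P(Obama) ~ 2/3):

    # Cost of a contract paying 1 if Obama wins, at Intrade's implied odds
    price = 2 / 3

    for name, p in [("silver", 0.9), ("wang", 0.99)]:
        ev = p * 1.0 - price               # expected profit per contract
        kelly = (p - price) / (1 - price)  # Kelly fraction of bankroll
        print(name, round(ev, 3), round(kelly, 2))
    # silver: EV ~ +0.233 per contract (~35% return), Kelly ~ 0.7
    # wang:   EV ~ +0.323, Kelly ~ 0.97 -- "sure thing" territory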