Which Poll Was The Most Accurate

Immediately following the election, Fordham University published a ranking of the pollsters based on their accuracy in predicting the 2012 Presidential election.  Their results:

1. PPP (D)*
1. Daily Kos/SEIU/PPP*
3. YouGov*
4. Ipsos/Reuters*
5. Purple Strategies
6. NBC/WSJ
6. CBS/NYT
6. YouGov/Economist
9. UPI/CVOTER
10. IBD/TIPP
11. Angus-Reid*
12. ABC/WP*
13. Pew Research*
13. Hartford Courant/UConn*
15. CNN/ORC
15. Monmouth/SurveyUSA
15. Politico/GWU/Battleground
15. FOX News
15. Washington Times/JZ Analytics
15. Newsmax/JZ Analytics
15. American Research Group
15. Gravis Marketing
23. Democracy Corps (D)*
24. Rasmussen
24. Gallup
26. NPR
27. National Journal*
28. AP/GfK

At the top of the list is PPP, the polling outfit here in North Carolina.  At the bottom sat AP/GfK.

The real talk of the town is the ranking of two prominent pollsters: Gallup and Rasmussen.  These are the big guys, the heavy hitters; Rasmussen especially, given its Republican lean.

However, with my newfound respect for all things 538, I couldn’t help but notice Nate Silver’s analysis:

The guys at the bottom are kinda the same; we see Rasmussen and Gallup.  But where Fordham had them tied, Silver has Rasmussen significantly more accurate than Gallup.  And at the top?  Where Fordham had PPP, Nate has them at a more pedestrian 15th, a mere 5 slots ahead of Rasmussen.

What does this mean?

I don’t know.  Maybe it means that we’re all subject to the whims of political gamesmanship.  That it’s more important for my side to be right than it is for my ideas to be better.  Maybe the lesson is that there’s a market for such crap.

Or maybe it’s that we don’t know.  And that’s why we play the game.

8 responses to “Which Poll Was The Most Accurate”

  1. The Fordham study was probably premature in that the popular vote margin has grown since they did their study. (Which means that each day that goes by, Rasmussen and Gallup’s results look even worse, not better).

    I think another reason for the discrepancy between Silver and Fordham is that Fordham only looked at national polls while Silver looked at both state and national polling.

    • I think another reason for the discrepancy between Silver and Fordham is that Fordham only looked at national polls while Silver looked at both state and national polling.

Unless and until Silver gives me a reason to think otherwise, anyone who talks about polls is just talking. Silver’s got it goin’ on!

      • Wait, meant to say

Unless and until Silver gives me a reason to think otherwise, anyone ELSE who talks about polls is just talking. Silver’s got it goin’ on!

        • What’s interesting to me is that his system works best when there are a range of results, some dem leaning and some republican leaning. If every pollster used the exact same metrics/methodology then we might have considerably less certainty in the results because they might all be making the same mistake.

          • What’s interesting to me is that his system works best when there are a range of results, some dem leaning and some republican leaning. If every pollster used the exact same metrics/methodology then we might have considerably less certainty in the results because they might all be making the same mistake.

            Hybrid Vigor.
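
The “hybrid vigor” point above can be sketched with a toy simulation (all numbers are made up for illustration): if pollsters’ house effects lean in different directions, averaging tends to cancel them out, but if everyone shares the same methodology and thus the same bias, no amount of averaging fixes it.

```python
import random

random.seed(1)
true_margin = 3.9  # Obama's actual 2012 popular-vote margin, in points

def simulate(house_effects, n_trials=10_000):
    """Average error of the poll-of-polls across many simulated elections.

    Each poll = true margin + that pollster's house effect + random noise.
    """
    total = 0.0
    for _ in range(n_trials):
        polls = [true_margin + h + random.gauss(0, 2.0) for h in house_effects]
        avg = sum(polls) / len(polls)
        total += abs(avg - true_margin)
    return total / n_trials

# Diverse house effects (some D-leaning, some R-leaning) roughly cancel...
diverse = [-2, -1, 0, 1, 2]
# ...while identical methodology means a shared bias the average inherits.
same = [2, 2, 2, 2, 2]

print(round(simulate(diverse), 2))  # small average error
print(round(simulate(same), 2))     # stuck near the shared 2-point bias
```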

  2. Silver averaged the last three weeks of polls, which I think is smart. Interesting that Mellman, a Democratic pollster I tended to ignore because he is partisan, turned out to be so accurate. Both RAND and Ipsos/Reuters were left out of a lot of aggregates (such as RCP) because their methods were seen as suspect. But they turned out to do very well. Silver’s definitely showing the power of statistics, data, and modeling!

    • Silver’s definitely showing the power of statistics, data, and modeling!

      Two things have drawn me to Nate:

      1. He has a baseball background. I love me some Sabermetrics.
      2. In my job I cannot afford to argue about who is right. It’s more important to me that the “thing” is fixed than me proving I’m right.

      And Nate is right.

  3. Piggybacking on what Scott was saying, Nate Silver’s rankings were based on a large sample of polls before the election, while Fordham’s study looked only at the very last poll each pollster conducted. That’s why the accuracy rankings differ.
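
That scoring difference can be sketched in a few lines (the pollster names and margins below are hypothetical; only the 3.9-point actual margin is real): a pollster that nails its final poll can still look worse when scored on its average over the closing weeks, and vice versa.

```python
actual_margin = 3.9  # final 2012 popular-vote margin (Obama +3.9)

# Hypothetical closing-weeks margins for two made-up pollsters:
polls = {
    "Pollster A": [1.0, 2.0, 3.9],  # noisy early, nails the last poll
    "Pollster B": [3.5, 3.6, 3.0],  # steady throughout, slightly off at the end
}

def last_poll_error(series):
    """Fordham-style score: how far off was the single final poll?"""
    return abs(series[-1] - actual_margin)

def average_error(series):
    """Silver-style score (simplified): how far off was the average?"""
    return abs(sum(series) / len(series) - actual_margin)

for name, series in polls.items():
    print(name,
          "last-poll error:", round(last_poll_error(series), 2),
          "average error:", round(average_error(series), 2))
```

Pollster A wins under the last-poll score; Pollster B wins under the average. Same data, different ranking.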
