Next week, you can expect to see a piece reviewing the performance of the polling community in the 2014 cycle. It is the third time I have taken on this particular task—you can see the efforts from 2012 and 2010 by clicking on the appropriate links.
You might note that I changed the formula for the rankings between 2010 and 2012. That’s because in 2010, the focus of the study was narrower: the question of whether there was a left-leaning or right-leaning “bias” among the more prolific pollsters. In 2012, we aimed for a somewhat more comprehensive rating.
The plan, for 2014, was to try to generate some continuity by employing the same formula.
That is still the plan. But … whoo boy. Not to give away the ending, but the 2012 formula, applied to this cycle's data, put some pollsters at the front of the pack who are generally acknowledged to be cruddy, and the results were nearly a complete reversal of the 2012 ratings. What’s more, a quick look back at the 2012 criteria points to a deeper problem: each of those parameters is open to legitimate critique.
When all is said and done, the deeper I dive into the matter, the more firmly I arrive at a single conclusion: there is no “one best way” to measure accuracy in polling. Follow me across the fold as I explain why.
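To give a quick flavor of the problem before we dig in (with entirely invented numbers, not drawn from any of the ratings discussed here): two perfectly defensible accuracy metrics can order the same two hypothetical pollsters in opposite ways. One pollster misses small but always in the same direction; the other misses bigger but the misses cancel out.

```python
# Hypothetical illustration only: all margins are invented.
# Each pollster maps to (polled_margin, actual_margin) pairs, in points.
polls = {
    "Pollster A": [(3, 1), (4, 2), (2, 0)],    # always 2 pts off, same direction
    "Pollster B": [(5, 1), (-3, 1), (1, 1)],   # bigger misses that cancel out
}

def mean_absolute_error(pairs):
    """Average size of the miss, ignoring direction."""
    return sum(abs(p - a) for p, a in pairs) / len(pairs)

def mean_signed_error(pairs):
    """Average of the signed misses: a measure of directional 'bias'."""
    return sum(p - a for p, a in pairs) / len(pairs)

for name, pairs in polls.items():
    print(name,
          round(mean_absolute_error(pairs), 2),   # A: 2.0, B: 2.67
          round(mean_signed_error(pairs), 2))     # A: 2.0, B: 0.0
```

By average miss, Pollster A looks better (2.0 vs. 2.67); by directional bias, Pollster B looks flawless (0.0 vs. 2.0). Neither metric is wrong, which is exactly the trouble.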