The polls got the mid-terms (mostly) right
The polling companies have had a dreadful post-midterms press, and rightly so if judged by their, on the whole and in many cases admittedly, dreadful results.
But perhaps the mass excoriation is unwarranted if their prognostications as a group are judged not by how accurately they predicted how much a particular candidate would win by (at which they were, frankly, terrible, as I will show) but simply by what, in the end, really matters: who won.
To use a horse race betting analogy: if you put your money down on a horse that the form book suggests will win by three lengths and it wins by a nose, you'd keep consulting the form book as you counted your winnings and looked for your next pick.
So, how did the polling firms do as judged by their ability to pick the winners? If the Real Clear Politics list of the nine Senate races judged to be in the "toss up" or "leans GOP" categories is examined, the answer is: pretty good.
I have not included the one "leans Dem" race, as the Warner race in Virginia was considered so safe (a false reality, as it turned out) that polling ceased in mid-October, so the late movement to the Republican Gillespie was not accounted for. The "Truman/Dewey" factor.
Here are the polling results based only on "did the pollsters pick the winner in their final poll":
Alaska: 5 out of 6 picked Sullivan to win
Colorado: 6 out of 6 picked Gardner to win
Georgia: 5 out of 5 picked Perdue to win
Iowa: 5 out of 6 picked Ernst to win
Kansas: 4 out of 5 picked Roberts to win
Arkansas: 5 out of 5 picked Cotton to win
Kentucky: 5 out of 5 picked McConnell to win
That's an impressive 35 correct winner picks out of 38 predictions by the various pollsters.
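For anyone who wants to check my sums, the tally above can be verified with a quick sketch (the state names and counts are simply those from the list):

```python
# Final-poll results per state: (correct winner picks, total final polls)
results = {
    "Alaska": (5, 6),
    "Colorado": (6, 6),
    "Georgia": (5, 5),
    "Iowa": (5, 6),
    "Kansas": (4, 5),
    "Arkansas": (5, 5),
    "Kentucky": (5, 5),
}

correct = sum(c for c, _ in results.values())
total = sum(t for _, t in results.values())
print(correct, total)            # 35 38
print(f"{correct / total:.0%}")  # 92%
```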
Not so impressive were the results for New Hampshire and North Carolina, where the waters were muddied by a number of firms predicting a tie. In New Hampshire, of the six final polls, three picked the eventual winner Shaheen, two picked Brown, and one had it a tie -- but still, among those who picked a candidate, more were right than wrong.
North Carolina is where the pollsters came unstuck in every way possible, getting the winner wrong and, in general, putting the predicted margins outside the margin of error. Of the eight pollsters, only one picked Tillis as the winner, four picked Hagan, and three had it as a tie. The presence of a third candidate, who received nearly 4% of the vote, may have contributed to this race being so badly evaluated.
But, in the final analysis, the polling companies picked the eventual winner in their last polls in eight out of nine headline Senate races. Perhaps that, rather than the predicted margins of victory, is what polling companies and the public should look to in future.
The pollsters were faulted for weighting their 2012 polls on the basis of the 2010 election, then lampooned for basing their 2014 polls on the 2012 result, and it appears they are in a bind as to how to work their psephological crystal balls in 2016.
Some firms, PPP Polling for example, have presented a post-midterms analysis of how well they did in the governors' races, and a rebuttal to the charge that they 'follow the herd' in their final polls as election day draws near. But so far they have offered no analysis (unlike Quinnipiac, which presented an in-depth review) of why their numbers were so far off in a number of Senate races.
If that were how they approached the 2016 campaign, they -- the entire polling industry, as represented by the RCP final results -- would be spared this awful end result:
Fifty-two final poll predictions; number of polling firms that predicted the winner's margin: zero.