I have compared the final polls in the 2006 election to the actual results in some detail – both the national and the regional numbers.
Most attention has focused on the quite accurate closing SES poll. What I found most interesting about this poll was its unique ballot question:
1. If a FEDERAL election were held today, could you please rank your top two current local voting preferences? (First ranked reported)
2. Are you currently leaning towards any particular FEDERAL party, and if you are, which party would that be?
I do not know if asking voters to rank preferences had anything to do with SES’s accuracy or not – it could have been partly a fluke – but it does intrigue me, partly because it is not entirely obvious how SES translated responses into its numbers.
One reason SES was accurate was that it polled until the last minute, and there was a late surge of Liberal support that earlier closing polls such as Ekos and Ipsos-Reid Online missed. The second most accurate national poll, however, came from Léger Marketing, and it was completed well ahead of election day. That suggests to me that its accuracy was indeed partly a matter of chance.
Once again all the polls were generally accurate. The largest polling “errors” were associated with the late Liberal surge, and with the longstanding difficulty of accurately measuring public opinion in Quebec. I believe this latter phenomenon is related to the deep federalist/nationalist divisions in Quebec society.
These performances reinforce my conviction that polling is, generally speaking, accurate, especially when it consists of simple questions about behaviour. The aggregate error (that is, taking the absolute error for each party and adding them all up) was only 2.7% for SES and reached a high of 9.6% for Ipsos-Reid Online.
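The aggregate-error measure described above is simple to compute. The sketch below illustrates it with made-up figures; the party shares shown are hypothetical placeholders, not the actual 2006 poll or election numbers.

```python
# Aggregate error as described in the text: the sum of the absolute
# per-party gaps between a poll's final estimates and the actual result.
# All figures below are HYPOTHETICAL, purely for illustration.

def aggregate_error(poll: dict, actual: dict) -> float:
    """Sum of |poll estimate - actual result| across parties, in points."""
    return sum(abs(poll[p] - actual[p]) for p in actual)

# Hypothetical illustration:
actual = {"CPC": 36.3, "LPC": 30.2, "NDP": 17.5, "BQ": 10.5, "GPC": 4.5}
poll   = {"CPC": 36.0, "LPC": 31.0, "NDP": 17.0, "BQ": 10.8, "GPC": 4.2}
print(aggregate_error(poll, actual))  # sum of the five absolute gaps, ~2.2 here
```

A poll scoring 2.7 on this measure, as SES did, was on average within about half a point per party.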
Most of the closing numbers in the national polls, with one notable exception, were within the margin of error. The Liberal Party estimates of Ekos, Ipsos-Reid Online and Strategic Counsel were just outside the margin of error, but in the case of Ekos this could be attributed to completing its polling early. There was something more at work in the case of the Ipsos-Reid Online survey that I want to discuss.
This was the first election where online polling really began to take hold. The Ipsos-Reid poll was not the most accurate for its national numbers, but was second when it came to the accuracy of its regional data. The big advantage of polling via the internet is that it significantly reduces data-gathering costs and therefore permits much larger sample sizes. However, we find, using the Ipsos poll as an example, that its error frequently exceeded the statistical margin of error. Margin of error depends only on sample size, and Ipsos had much larger samples (its overall national sample was 9,648), so its errors ought to have been smaller. The fact that its errors frequently exceeded the statistical margin of error says to me that this methodology still has some way to go before it can displace more traditional methods. However, its overall accuracy on the regional numbers compared to the actual results tells us that it is worthwhile for the pollsters to pursue their experimentation with online research.
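To see why a sample of 9,648 should produce very small errors, the conventional 95% margin of error for a proportion can be computed directly. The sketch below uses the standard textbook formula at the most conservative assumption of p = 0.5; note that, strictly, this formula assumes a random sample, which an online panel does not guarantee.

```python
import math

# Conventional 95% margin of error for an estimated proportion,
# at the most conservative value p = 0.5. With a national sample
# of 9,648, the margin works out to roughly +/- 1 point.

def margin_of_error(n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion (p = 0.5)."""
    return z * math.sqrt(0.25 / n) * 100

print(round(margin_of_error(9648), 1))  # roughly 1.0 point
```

Errors that repeatedly exceed a margin this tight point to something other than sampling variability, which is the heart of the concern about the online methodology.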
There is a discussion of the accuracy of the campaign polls in the March issue of Policy Options. It is journalistic, does not include systematic research, and confuses seat projections (a subject tcnorris understands) with polling. However, this is a subject the daily media barely discuss, even though they devote reams of space during a campaign to the polls themselves, so I do think Policy Options deserves credit for commissioning an article on the subject. Speaking of seat projections, I will have more to say in upcoming posts.