Sunday, May 19, 2013

The B.C. election and polling fiasco

In August 1972 a hamburger poll at the Pacific National Exhibition accurately predicted that Dave Barrett and the NDP would win the B.C. election held August 30th of that year. TC has long had great faith in the capacity of polls, even hamburger polls, to capture public sentiment in election campaigns in a way that correctly, if not precisely, anticipates the election result.

The polls on the whole have delivered. Using Pollster.com's tool for creating one's own prediction map, and based on its aggregation of polls, TC was able to accurately predict the outcome in 49 of 50 states in last year's U.S. presidential election. The fiftieth, Florida, TC had as too close to call. Most polls got the 2012 U.S. election right.

However, in B.C. in 2013 the polls were wrong and the errors were extreme (something similar happened in Alberta last year, and there were significant regional errors in the 2011 Canadian federal election). For the two leading parties (the minor parties should be ignored for the purposes of this analysis), only three of the final seven polls had a result inside the margin of error for the NDP, and all were outside the margin for the Liberals.
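For readers unfamiliar with how the margin of error is derived, a rough sketch may help. For a simple random sample, the conventional 95% margin of error on a party's reported share follows from the standard error of a proportion; the sample size and share below are hypothetical, chosen only to resemble a typical provincial poll:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 800 respondents with a party at 45% support:
moe = margin_of_error(0.45, 800)
print(f"{moe * 100:.1f} percentage points")  # roughly 3.4
```

With margins of about three to four points on samples of this size, a final poll missing the Liberal result by six or seven points is far outside sampling error alone, which points to methodological problems rather than bad luck.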



If we rank the pollsters by methodology we see that while all the polling was weak, the online polls were generally weaker than the phone polls (IVR, or interactive voice response, polls are still phone polls, conducted using automated computer equipment rather than human operators). In the table below you can see that on the whole the phone was a bit more accurate.
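The accuracy comparison behind a ranking like this can be sketched simply: score each poll by its mean absolute error against the actual two-party result (roughly Liberals 44.1%, NDP 39.7% in 2013). The two polls below are illustrative inventions, not real firms' numbers:

```python
# Actual 2013 B.C. result for the two leading parties (approximate shares).
actual = {"LIB": 44.1, "NDP": 39.7}

# Hypothetical final polls, invented to illustrate the scoring method.
polls = {
    "phone poll (example)":  {"LIB": 41.0, "NDP": 43.0},
    "online poll (example)": {"LIB": 37.0, "NDP": 45.0},
}

def mean_abs_error(poll: dict) -> float:
    """Average absolute miss, in percentage points, across the two parties."""
    return sum(abs(poll[p] - actual[p]) for p in actual) / len(actual)

for name, poll in sorted(polls.items(), key=lambda kv: mean_abs_error(kv[1])):
    print(name, round(mean_abs_error(poll), 1))
```

Ranking the real final polls by this kind of score is what produces the phone-versus-online comparison described above.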


Telephone polls do have two problems today that were not present in the not-too-distant past. First, many people either don't answer the phone or use call display to screen out unwanted telemarketing or polling calls; partly because of this, response rates have fallen dramatically. Second, to be valid, phone polls need to call cell phones, which have become so widespread, especially among the young, that you can't capture an adequately representative sample without them, yet not all polls call them. The low response rates mean the potential for capturing an accurate representation of the population must be declining.

Online polls are surveys conducted among panels of people recruited by the company to answer surveys on a variety of topics, mostly market research (TC was formerly part of an online panel and has answered many such surveys).

The most important point about this methodology is that there are no established standards or methods for what is a recently developed polling technique. Some good results have been obtained (there have been some bad results too), but considerable uncertainty about the method remains.

But here is something the public doesn't know, and something journalists should be demanding from the online companies. While companies such as Ipsos say their national panels are large, as large as 200,000, we don't know the exact size of the B.C. portion of the panel (is it about 26,000, roughly B.C.'s share of Canada's population?), nor do we know fully what methods were employed. Almost by definition, members of online panels are more interested in participating in survey research than is typical of the population. Is this not likely to be an important factor in determining outcomes?

The UK firm YouGov has a decent reputation (Nate Silver, who generally likes phone polls, said in evaluating polls in 2010, "A firm that conducts surveys by Internet, YouGov, also performed relatively well."). YouGov provides a detailed description of its methods on its web site, noting, for example, that "restrictions are put in place to ensure that only the people contacted are allowed to participate". Online polls are going to be part of our future, but the industry should establish some minimum standards for how such polls are to be conducted, and should be much more transparent about its methods than it has been to date.

B.C. may be a more difficult place to poll than other provinces. Reflecting its diverse geography, B.C. is a province of micro-climates, both meteorological and political. Pollsters should have a tougher time capturing its variety, and likely do. Polls also have a hard time with turnout, which was low in this election. For example, the weighting of results demographically by age has a critical impact (older voters are much more likely to vote than younger voters). No doubt some of the error was rooted here.
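The age-weighting point can be made concrete with a minimal sketch of post-stratification. All of the shares and turnout rates below are invented for illustration; real pollsters use census and past-election data, but the mechanics are the same: weight each respondent so the sample matches the expected voting population rather than the raw sample:

```python
# Illustrative (not real) figures for three age groups.
sample_share     = {"18-34": 0.35, "35-54": 0.35, "55+": 0.30}  # raw respondents
population_share = {"18-34": 0.27, "35-54": 0.35, "55+": 0.38}  # census-style shares
turnout_rate     = {"18-34": 0.40, "35-54": 0.55, "55+": 0.75}  # assumed likelihood of voting

# Share of actual voters each group is expected to represent.
voting_share = {g: population_share[g] * turnout_rate[g] for g in turnout_rate}
total = sum(voting_share.values())

# Weight applied to each respondent in group g.
weights = {g: (voting_share[g] / total) / sample_share[g] for g in voting_share}

for g, w in weights.items():
    print(g, round(w, 2))
```

Under these assumptions an older respondent counts for roughly three times as much as a younger one, so a pollster's turnout model, as much as the raw responses, drives the published number. Get the turnout assumption wrong and the headline figure moves by several points.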

Paradoxically, at the same time that publicly available media polls are getting weaker, we are entering an era where, in private, well-funded, sophisticated campaigns such as the 2012 Obama campaign spend massive sums of money to give themselves an extensive, almost census-like knowledge of political preferences. This quote describes the Obama campaign's activity in battleground states:
For each battleground state every week, the campaign’s call centers conducted 5,000 to 10,000 so-called short-form interviews that quickly gauged a voter’s preferences, and 1,000 interviews in a long-form version that was more like a traditional poll. To derive individual-level predictions, algorithms trawled for patterns between these opinions and the data points the campaign had assembled for every voter—as many as one thousand variables each, drawn from voter registration records, consumer data warehouses, and past campaign contacts.
The failures of polling matter. Our knowledge of politics depends on polls in a way that was not the case many years ago. The polling failure clearly affected perceptions of the campaign from beginning to end on the part of the parties, the media and the public. Watching election night coverage, it was clear that the Liberals were just as surprised as the NDP. I am distrustful of all claims by those who now say they had foreknowledge.

In hindsight it is clear that the NDP lost in no small part because it was receiving misleading signals from the polls. It ran an unfocused, weak campaign that took victory for granted (somewhat like that of Lyn McLeod and the Ontario Liberals in 1995, when a big lead at the outset became a Mike Harris government on election day) when it should have been looking over its shoulder.

The whole story is complex but let me discuss two factors:

Adrian Dix made a disastrous decision to forgo attack ads and insisted on a positive campaign. It seems pundits and politicians confuse negative politics with unfair, dirty politics. Negative political discourse, whether it is question period, a TV debate or an ad critical of an opponent, represents the normal content of political debate. There is a key distinction to be made, however, between negative campaigning and dirty politics. It is the latter that should be denounced at every opportunity, especially by journalists and political scientists. Those who have difficulty with this should read Kathleen Hall Jamieson's classic work, Dirty Politics: Deception, Distraction, and Democracy. The BC NDP could have run ads critical of the many twists and turns in Christy Clark's style of governing and the Campbell government's handling of the HST fiasco, ads that could have been tough but fair without being the equivalent of the Willie Horton ad (for context see analysis here).

Whether deserved or not, the NDP suffers from a longer term reputation as a party that is weak on economic growth and fiscal management. The Clark campaign ads (they strike TC as quite effective) took advantage of the fact that Adrian Dix was associated with an NDP government in the 1990s that has been portrayed as playing into this NDP stereotype. His mid-campaign shift on the pipeline issue fed into this. This was always going to be a potential weakness the NDP needed to address not just through the proposals it made as a party, but through tough attacks on the record of its opponents as well. It is not something that can be ignored.

There is always a next election. This year's Liberal campaign talked about "getting rid of BC's debt".  Perhaps the BC NDP should make careful note of that pledge.