I have now compared my seat forecasting model's predicted winner to the actual winner in each constituency (on the basis of the preliminary results).
I used the actual provincial vote shares as the input to the model. There is considerable confusion about seat prediction versus polling: it is common to confuse polling error with seat-prediction error. I used the actual election-day vote shares because, if the polls were perfectly accurate, that is what they would have reported. Even a perfect poll cannot predict the outcome in every riding, so any forecast model will have some error, but one should get the best results from the most accurate polling data available, which is, by definition, the actual vote share.
Many still wonder what use this is, since it comes after the fact, but almost everyone who reads the polls during an election makes an inference about the outcome, so I think seat projection models have a role to play. Just as we compare poll numbers with the actual vote shares in an election, so we ought to compare the model's prediction in each riding with the actual outcome. When the final validated results are available in a few months, I will compare my predicted percentage with the actual result for each candidate in each constituency.
In making predictions I do not calculate a predicted outcome for the three territorial ridings. In reporting results I simply assume that the incumbents there will be re-elected and report a total of 308 seats.
For the remaining 305 constituencies, using the actual provincial vote shares, the model correctly predicts the outcome in 273 and errs in 32, for a success rate of 89.5%. The model correctly predicted the outcome in every seat in Alberta, PEI and Newfoundland. There were 2 errors in Nova Scotia, 1 in New Brunswick, 2 in Quebec, 16 in Ontario, 1 in Manitoba, 4 in Saskatchewan (by far the weakest performance relative to its seat count), and 3 in B.C.
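The riding-by-riding scoring described here is straightforward to sketch in code. The data structures and riding names below are purely hypothetical illustrations (the actual model and its inputs are not shown in this post); the snippet only demonstrates the comparison of predicted winners to actual winners and the resulting success rate.

```python
def score_predictions(predicted, actual):
    """Compare predicted vs actual winners riding by riding.

    predicted and actual map riding name -> winning party.
    Returns (number correct, list of ridings where the model erred).
    """
    errors = [riding for riding in predicted if predicted[riding] != actual[riding]]
    correct = len(predicted) - len(errors)
    return correct, errors

# Toy example with three hypothetical ridings:
predicted = {"Riding A": "Lib", "Riding B": "Con", "Riding C": "NDP"}
actual    = {"Riding A": "Lib", "Riding B": "Con", "Riding C": "BQ"}

correct, errors = score_predictions(predicted, actual)
print(f"{correct}/{len(predicted)} correct "
      f"({100 * correct / len(predicted):.1f}%)")
# prints: 2/3 correct (66.7%)
```

With the full data this is the same arithmetic reported above: 273 correct out of 305 is 273/305 = 89.5%.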
The national total from all this was: Lib - 137, Cons - 95, NDP - 22, BQ - 54, Other - 0, close to the actual results.
I have been doing this modelling for a while, and this was one of my better nights; the model is normally better than 80% accurate.
The model numbers from last week's polls were nowhere near the final seat outcome, but that is principally because a reported average 4% Liberal lead in Ontario became an actual 13% Liberal lead on election day. Not surprisingly, a shift of this scale has a huge impact on the results the forecast model generates.