British Prime Minister Theresa May called snap elections just two years into the parliamentary term with the expectation that her Conservative Party would increase its parliamentary majority. Instead, the British electorate handed her the opposite. The Conservatives lost seats and their majority.
U.K. elections have been tough to forecast in recent years. The 2016 Brexit vote was a shock to many, although the polls suggested it would be close. The last parliamentary election, in May 2015, was also a surprise: Public opinion polls and academic forecasts predicted a hung parliament, but the Conservatives won an outright majority.
How did the forecasts do this time?
For this assessment, we have compiled Conservative and Labour Party seat forecasts from Murr, Stegmaier, and Lewis-Beck, Lebo and Norpoth, Fisher and Goldenberg, Janta-Lipinski, Forecast UK, PME Politics, Kantar, Principalfish, Election Polling, and others first compiled by Simon Hix. Here they are, along with the election outcome:
Here, the YouGov forecast stands out. Its projection that the Conservatives would win just 302 seats garnered much media attention because it implied a hung parliament. It is the only seat forecast we have found that projected the party would lose seats and its majority.
The other forecasts estimated that the Conservatives would increase the size of their majority. These ranged from Election Polling’s prediction of 335 seats to Election Data’s forecast of 387 seats.
The Labour Party increased its seats in parliament from 229 to 262. Again, the YouGov forecast did well. Not only did it predict the increase in seats, but it missed the actual number by only seven seats.
Only three other forecasts correctly expected Labour to gain seats in this election. Our forecast predicted the party would obtain 236 seats, and the forecasts by Election Polling and Kantar predicted Labour would get 232 seats. All the other forecasts suggested losses for Labour ranging from a few seats to more than 40 seats.
Why were so many forecasts wrong?
At this stage, there are many potential culprits, including May’s lackluster campaign, uncertainty over Brexit, the terrorist attacks and the high turnout among younger people.
We can say more about our citizen forecasting model, which was based on voter expectations of who would win. It gave the Conservatives a 77 percent chance of winning a majority and a 20 percent chance of a hung parliament. Thus a hung parliament was a possibility, though not a likely one. Ours was one of the few forecasts to predict gains for Labour, but our forecast for the Conservatives was in the middle of the pack. Based on the performance of the model since 1987, this year’s forecast error was one of the largest.
One possible reason for the error is that the poll question that measured voter expectations about who would win differed somewhat from the one we have used in the past.
Another possibility is that voter expectations shifted at the last minute, especially after the May 31 headline in the Times of “shock losses” for the Conservatives. Our forecast used the YouGov/The Times voter expectations question from May 30-31. At that time, 69 percent of the British public thought that the Conservatives would win, while 12 percent thought Labour would win.
But a later survey, conducted May 31-June 2 by the Independent/Sunday Mirror/ComRes, showed that citizen expectations had changed: 57 percent thought that the Conservatives would win and 19 percent thought that Labour would win.
What did we learn about forecasting U.K. elections this time?
The YouGov model was the only forecast to correctly predict both that May’s party would lose seats and that the Labour Party would gain seats. Its forecast relied not only on voter intention polls but also on statistical modeling of results in 650 individual constituencies. Its success may indicate that a combination of polls and statistical modeling at the constituency level will prove consistently more accurate.
But challenges clearly remain for traditional polling. It will take time to figure out whether a late shift occurred, or whether expectations of a Conservative landslide simply tainted media coverage, polls and voter expectations. Or perhaps the errors derived from the enduring challenge of executing probability sampling, which has bedeviled election polling in the U.K. and elsewhere.
Regardless of the cause, last week’s U.K. election reminds us that forecasting elections is complex and challenging.
Andreas Murr is an assistant professor of quantitative political science in the Department of Politics and International Studies at the University of Warwick. His research focuses on election forecasting, the voting behavior of immigrants and the selection of party leaders.
Mary Stegmaier is an assistant professor in the Truman School of Public Affairs at the University of Missouri. Her research focuses on voting behavior, elections, forecasting, and political representation in the U.S. and abroad.
Michael S. Lewis-Beck is F. Wendell Miller Distinguished Professor of Political Science at the University of Iowa. He has authored or co-authored over 270 articles and books, including “Economics and Elections,” “The American Voter Revisited,” “French Presidential Elections,” “Forecasting Elections,” “The Austrian Voter,” and “Applied Regression.”
By Andreas Murr, Mary Stegmaier and Michael S. Lewis-Beck. Originally posted on The Washington Post’s Monkey Cage blog: http://www.washingtonpost.com/blogs/monkey-cage/wp/2017/06/12/how-did-the-u-k-election-forecasts-do/