Tag Archives: Election

General Election: 8 June 2017

After the problems of 2015, and the need to deal with the issues raised by the subsequent Sturgis inquiry, the pollsters were inevitably going to be under considerable scrutiny in the 2017 election.

The polls proved very good at predicting the Conservative vote, with an average of the final polls being only 0.2% higher than the actual result. (The actual result shares are based on all seats except Kensington).

They did, however, prove much less good at predicting the Labour vote, with the pollsters’ average being 5.2% below Labour’s actual share. This is only the second election since 1987 at which the pollsters have underestimated the Labour share.

The average poll figure for the Liberal Democrats was very close, being only 0.3% different from the result.

While the final polls were not ideal, the BPC does not feel there is a need for another formal inquiry; the detailed findings of the Sturgis review are available here and provide a reference point for understanding the issues and challenges. Instead all BPC members who produced final polls will produce a “lessons learned” report, and these will be presented at a conference before the end of this year.

  CON LAB LD UKIP Green Other Method Sample Size Fieldwork
Opinium 43 36 8 5 2 6 Online 3002 June 4
Survation 41 40 8 2 2 6 Telephone 2798 June 6-7
Ipsos MORI 44 36 7 4 2 6 Telephone 1291 June 6-7
ICM 46 34 7 5 2 6 Online 1532 June 6-7
ComRes 44 34 9 5 2 5 Online 2051 June 5-7
YouGov 42 35 10 5 2 6 Online 2130 June 5-7
Panelbase 44 36 7 5 2 6 Online 3018 June 2-7
Kantar Public 43 38 7 4 2 6 Online 2159 June 1-7
BMG 46 33 8 5 3 6 Telephone / online 1199 June 6-7
Average 43.7 35.8 7.9 4.4 2.1 5.9      
Result 43.5 41 7.6 1.9 1.6 4.4      
Difference 0.2 -5.2 0.3 2.5 0.5 1.5      

How Have The Polls Changed Since 2015?

The performance of the polls in the 2015 general election left something to be desired. On average the final polls (that is, those whose fieldwork concluded no more than two days before polling day) put the Conservatives and Labour equal on 34%, when, in practice, the Conservatives proved to be on 38% and Labour on 31%. Indeed, throughout the election campaign the polls mostly suggested that the two largest parties were more or less neck and neck. Although the polls largely anticipated the performance of all of the other parties, including the SNP in Scotland, reasonably well, their misestimate of the performance of the two largest parties unsurprisingly gave rise to the widespread perception that the polls ‘had got it wrong’.

In the wake of this error, the British Polling Council and the Market Research Society appointed an Independent Inquiry under the Chairmanship of Prof. Patrick Sturgis to investigate why the inaccuracy arose and to make recommendations about how polls should be conducted and published in future. The Inquiry made the following recommendations about the future conduct of polls in its final report:

We recommend that BPC members should:

  1. include questions during the short campaign to determine whether respondents have already voted by post. Where respondents have already voted by post they should not be asked the likelihood to vote question.
  2. review existing methods for determining turnout probabilities. Too much reliance is currently placed on self-report questions which require respondents to rate how likely they are to vote, with no clear rationale for allocating a turnout probability to the answer choices.
  3. review current allocation methods for respondents who say they don’t know, or refuse to disclose which party they intend to vote for. Existing procedures are ad hoc and lack a coherent theoretical rationale. Model-based imputation procedures merit consideration as an alternative to current approaches.

The report also recommended that:

  1. Pollsters should take measures to obtain more representative samples within the weighting cells they employ.

and:

  1. Pollsters should investigate new quota and weighting variables which are correlated with propensity to be observed in the poll sample and vote intention.

In responding to the report the British Polling Council, inter alia, agreed the following motion:

The Council recognises the widespread public interest in how polls are conducted, and in ensuring that they are as accurate as possible. It also recognises the need to promote public understanding and informed discussion of how polls are conducted, especially at the time of a UK general election.

The Council thus resolves to commission, for publication and launch in autumn 2019, a report that describes the methods being used by its members in their polls of UK election voting intentions and how these methods have changed since 2015. This report will be launched at a seminar at which individual member companies will present details of their current methodology.

This motion was, of course, written in the expectation that, given the terms of the Fixed-term Parliaments Act, the next general election would be held in May 2020. The decision of the Prime Minister, endorsed by the House of Commons, that a snap election should be held on 8 June 2017 has meant that the BPC has been unable to implement its motion by holding an event and publishing a report as it originally intended.

Even so, many people are inevitably still asking whether the polls have learnt from what happened in 2015 and thus can be relied upon to provide a more accurate estimate of Conservative and Labour support in the 2017 election. This report, published on the occasion of a seminar being held by the BPC in collaboration with the National Centre for Research Methods on how polls are being conducted in the 2017 election, is intended to help the reader answer that question for themselves. It provides, for each BPC member that has so far been regularly conducting polls during the election campaign, a summary of the principal changes they have made since 2015 to the way in which they conduct polls of voting intention.

It will be apparent from the reports that each company has responded to the lessons of the 2015 election in its own particular way, and that no two companies are conducting their polls in exactly the same way. However, two general themes emerge from the individual reports.

First, most companies have implemented one or more changes designed to improve their estimates of which voters are likely to make it to the polls and which not. In some instances, this has consisted of making greater effort to ensure that more potential non-voters are interviewed in the first place. A lot more effort is certainly being made to weight the interviews that are obtained so that known key patterns and differences in turnout and political engagement are reflected in the data upon which vote intentions are estimated. In many cases this endeavour has been facilitated by using the findings of the high-quality face to face random probability survey conducted after the 2015 election on behalf of the British Election Study, a step that was encouraged by the Sturgis Report.

Second, in some instances changes have also been made with a view to ensuring that the political balance amongst those who will make it to the polls is estimated accurately. Greater use is being made by some companies of weighting the data by how people voted in the 2015 election, while the opportunity is also being taken to ensure that the data are also reflective of how people voted in the EU referendum in 2016. At the same time, one of the lessons of that referendum, that sometimes people’s educational background matters to how they vote, means that some companies are now also weighting their data by that characteristic too.
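Weighting to past vote and referendum vote of this kind is commonly implemented by iterative proportional fitting ("raking"). The sketch below illustrates the idea with a toy sample and invented targets; it does not reproduce any member company's actual weighting scheme.

```python
# Iterative proportional fitting ("raking"): adjust respondent weights so the
# weighted distribution of each weighting variable matches its target.
# The sample and targets below are invented for illustration only.

def rake(respondents, targets, iterations=50):
    """respondents: list of dicts with a 'weight' key plus categorical variables.
    targets: {variable: {category: target_share}}, shares summing to 1."""
    for _ in range(iterations):
        for var, shares in targets.items():
            # Current weighted total for each category of this variable.
            totals = {}
            for r in respondents:
                totals[r[var]] = totals.get(r[var], 0.0) + r["weight"]
            grand = sum(totals.values())
            # Rescale each respondent so this variable hits its target share.
            for r in respondents:
                current = totals[r[var]] / grand
                r["weight"] *= shares[r[var]] / current
    return respondents

sample = [
    {"weight": 1.0, "vote2015": "Con", "eu2016": "Leave"},
    {"weight": 1.0, "vote2015": "Con", "eu2016": "Remain"},
    {"weight": 1.0, "vote2015": "Con", "eu2016": "Leave"},
    {"weight": 1.0, "vote2015": "Lab", "eu2016": "Remain"},
]
targets = {
    "vote2015": {"Con": 0.55, "Lab": 0.45},     # illustrative, not real shares
    "eu2016": {"Leave": 0.52, "Remain": 0.48},
}
```

After raking, the weighted sample matches both marginal targets simultaneously, which is why the same machinery can absorb past general election vote, referendum vote and demographics together.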

The BPC is not responsible for how opinion polls are conducted; that is the responsibility of each of its members individually. The BPC’s role is to promote transparency about how polls are conducted so that journalists, commentators and the general public can make their own informed decisions about whether to believe the polls or not. We hope that readers find this report useful in forming their own judgement as to whether the polls have learnt the lessons of the 2015 election.

John Curtice
President, British Polling Council

ComRes

ComRes has continued to strive to improve our methodology and accuracy since the 2015 general election. We conducted an internal review following the election and indeed paused the publication of Voting Intention polls for a period in order to enable us to review our approach properly.

Immediately after the 2015 election we introduced a new model of turnout in order to map voting intention more accurately. The Voter Turnout Model was developed through a multiple regression analysis of ward level demographic and turnout data. Although General Election results do not provide a count of who exactly turns out to vote, this analysis allows us to develop a robust estimate of that phenomenon.

Through a series of experiments testing the relationships between different demographics and turnout, ComRes consistently found that the two most important drivers of turnout are age and social grade. Using regression analysis, we developed a weighting matrix which predicts turnout in different demographic groups more reliably than self-claimed likelihood to vote and the resulting factor weights are now used in weighting our data. The value of the model rests on the fact that the variation in turnout by demographic group is relatively consistent from one General Election to the next.
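The shape of such a weighting matrix can be sketched as below. The turnout probabilities by age band and social grade are invented for illustration; ComRes's actual factor weights are not published here.

```python
# Illustrative sketch of turnout weighting by demographic group. The cell
# probabilities are invented and do NOT reproduce the ComRes Voter Turnout Model.
TURNOUT_BY_GROUP = {
    # (age_band, social_grade): modelled turnout probability (illustrative)
    ("18-34", "ABC1"): 0.55,
    ("18-34", "C2DE"): 0.40,
    ("35-54", "ABC1"): 0.70,
    ("35-54", "C2DE"): 0.55,
    ("55+",   "ABC1"): 0.85,
    ("55+",   "C2DE"): 0.75,
}

def vote_shares(respondents):
    """Weight each respondent's stated preference by the modelled turnout
    probability of their demographic cell, then normalise to shares."""
    tally = {}
    for r in respondents:
        w = TURNOUT_BY_GROUP[(r["age"], r["grade"])]
        tally[r["party"]] = tally.get(r["party"], 0.0) + w
    total = sum(tally.values())
    return {party: t / total for party, t in tally.items()}
```

Because older, higher-grade respondents carry larger turnout probabilities, a raw sample that is evenly split can still produce an uneven weighted vote share, which is the effect the model is designed to capture.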

We have subsequently revisited the polls we conducted during the 2015 election campaign to measure the impact of applying this model. If we had used this method in 2015 and made no other adjustments to our methodology, our final poll would have had a 5 point Conservative lead over Labour. This pattern is repeated in each poll for which a comparative analysis has been conducted.

We are confident that this is the most important change we can and need to make to our methodology for estimating vote intention. Although we, along with the rest of the industry, overstated Labour and underestimated Conservative support in 2015, we got the other party vote shares almost spot-on. Detailed investigations, including of the relative merits of our polls conducted by phone and those conducted online (we did both in 2015), lead us to conclude that estimating the pattern of turnout correctly rather than sampling is the critical factor required to improve the accuracy of our polls. At this election, all of our polls are being conducted online.

ICM Unlimited

ICM’s polls are being conducted in 2017 in a number of different ways from how they were conducted in 2015. All polling is now conducted via an online panel, whereas in 2015 it was conducted via telephone using random digit dialling. At the same time a variety of other changes to the way in which the sample is obtained and the resultant data are weighted and adjusted have been made. These are as follows:

  1. Sample Size. All polls now typically contain at least 2,000 respondents rather than 1,000.
  2. Quotas. In 2015 quotas (i.e. targets for who should be interviewed) were set for a range of demographic factors such as age, gender, and social grade. Now they also include for which party people say they voted at the last general election.
  3. Turnout. In 2015 respondents were asked how likely they were to vote on a scale from 1 to 10. In estimating vote shares for each party each respondent was weighted according to their answer to this question. Thus, for example, respondents who said they are 5/10 likely to vote were given a weight of 0.5. In addition, those who said they did not vote at the previous general election were further downweighted by a half. In 2017, the data are weighted so that the anticipated level of turnout for those in different age groups and social grades matches the reported level of turnout for each combination of age and social grade in 2015 as measured by a variety of sources.
  4. Weighting by Interest in Politics. In 2017 the interview data are being weighted so that the distribution of reported interest in politics matched that obtained by the face to face random probability survey conducted on behalf of the British Election Study after the 2015 general election. No such weighting was applied in 2015.
  5. Weighting by Past Vote. In 2015 the interview data were weighted to a target for each party that was 80% based on the actual result of the last election and 20% to the distribution of past vote (prior to weighting) in the last 20 polls conducted by ICM. The latter part of this formula was intended to reflect the fact that some respondents may have misreported how they voted at the last election. In 2017, however, the data are weighted so that the distribution of reported vote at the last election fully matches the actual result.
  6. Imputing Vote Intentions of those who do not indicate how they will vote. In 2015 those who failed to state a vote intention but indicated for which party they voted in 2010 were assumed to be most likely to vote for the same party again and were added to that party’s tally at a value of 0.5. In 2017 the same procedure is being applied again, except that those who said they voted Conservative or Labour at the last election are being added at a value of 0.75. In addition, the result of this adjustment is now also applied to those who not only fail to state a vote intention for the current election but also do not state how they voted at the last election, except that this group is estimated to be 20% more likely to vote Conservative and 20% less likely to vote Labour.
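The reallocation rule in item 6 can be sketched as follows. The 0.75 and 0.5 values come from the description above; the function and field names are illustrative, not ICM's own code.

```python
# Sketch of the 2017 reallocation of respondents who give no current vote
# intention but did report a 2015 vote (item 6 above). Those who voted Con or
# Lab last time are re-added at 0.75 of a vote; other parties at 0.5.
REALLOCATION_WEIGHT = {"Con": 0.75, "Lab": 0.75}  # all other parties: 0.5

def reallocate(tally, non_disclosers_past_vote):
    """tally: running weighted vote counts per party.
    non_disclosers_past_vote: 2015 vote of each respondent who gave no
    current vote intention."""
    for past_party in non_disclosers_past_vote:
        w = REALLOCATION_WEIGHT.get(past_party, 0.5)
        tally[past_party] = tally.get(past_party, 0.0) + w
    return tally
```

The further adjustment described for respondents who disclose neither a current nor a past vote (shifting the imputed group 20% towards the Conservatives and away from Labour) would be applied on top of this step.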

Ipsos MORI

Over the past two years Ipsos MORI has carefully reviewed its approach on the analysis and reporting of its opinion polls. In the 2015 General Election we believe we sampled too many engaged, higher educated voters, which helps to explain our over-estimate of the Labour vote share by 3.8 percentage points. We did not under-represent the total proportion of Conservative voters, but rather did not interview enough non-voters. Compared with our results in previous elections, we also found a disproportionate number of Labour-leaning voters saying they would vote, some of whom we believe instead chose to abstain.

In light of this we have made three methodological changes since that election: 1) weighting by newspaper readership, 2) setting quotas and weights for educational attainment, and 3) making changes to our turnout filter. We outline these modifications below.

In 2015, after the General Election, we began weighting our samples by newspaper readership, which went some way towards offsetting our initial sample bias towards those who are more politically engaged. (In particular, these weights bring down the proportion of broadsheet readers and increase the proportion of tabloid readers.) We believe this change would have made us more accurate in the 2015 General Election; however, it did not completely overcome the issue of disproportionately interviewing too many graduates and not enough individuals without qualifications.

To compensate, in 2016 we added educational attainment to both our interview quotas and weighting scheme. Adding these elements to our polling approach we believe has made our sample more representative, and indirectly accounts for level of political engagement as both educational attainment and newspaper readership are strongly correlated with interest in and knowledge of politics.

After the 2015 General Election we also introduced a new turnout filter to our published monthly Political Monitor polls, to help mitigate the issue of interviewing individuals who say they will vote but in fact do not show up to the polling station – something which was more of a challenge for us in 2015 than in previous elections. Previously, we have relied simply upon (i) excluding unregistered voters and (ii) using a 1-to-10 likelihood of voting question, taking those responding ’10’ (absolutely certain to vote) as voters and everybody else as non-voters.

Our new turnout filter includes the use of an additional question measuring general election voting frequency (Which of the following best describes how often you vote in General elections? I never vote/I rarely vote/I sometimes vote/I usually vote/I always vote/It depends/Don’t Know). Respondents are now excluded from the headline figure if (a) they say they are not registered to vote in the election or don’t know; or (b) they rate their likelihood of voting at below 9, or don’t know; or (c) they say they “never”, “rarely” or “sometimes” vote in general elections, or don’t know. This filter would also have made us more accurate in the 2015 General Election.
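The three-part filter described above amounts to a simple conjunction of rules. The function below is an illustrative sketch, not Ipsos MORI's production code.

```python
# Ipsos MORI-style turnout filter as described above: a respondent counts
# towards headline voting intention only if all three tests pass.
def passes_turnout_filter(registered, likelihood, frequency):
    """registered: True / False / None (None = don't know).
    likelihood: 0-10 self-rated likelihood of voting, or None for don't know.
    frequency: answer to the voting-frequency question, or None."""
    if registered is not True:
        return False  # (a) not registered, or doesn't know
    if likelihood is None or likelihood < 9:
        return False  # (b) likelihood below 9, or doesn't know
    if frequency not in ("I usually vote", "I always vote", "It depends"):
        return False  # (c) never/rarely/sometimes votes, or doesn't know
    return True
```

Note that "It depends" survives criterion (c); only the explicit "never", "rarely" and "sometimes" answers (and don't-knows) are screened out.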

Kantar Public UK

Note: Kantar Public polls were previously published under the name TNS UK. We have made the following changes to our polling methods since the 2015 election.

Likelihood of Voting

  • We have developed a new LTV (likelihood to vote) model. The likelihood that each respondent will vote in the General Election is estimated based on respondents’ stated intention to vote, their age and whether they voted in the last general election. This likelihood is used to weight the contribution that each respondent makes to the tally of estimated vote intentions for each party. The model was developed using data from respondents who participated in TNS polls immediately prior to the 2015 general election and immediately afterwards. In addition, a further small adjustment has been made to correct for over-claiming to have voted, based on analysis of the face to face random probability survey conducted after the 2015 election for the British Election Study.
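A model of this general form can be sketched as a logistic regression on the three inputs named above. The coefficients below are invented for illustration and are not Kantar's fitted values.

```python
import math

# Illustrative logistic LTV model in the spirit of the description above.
# The coefficients are invented for the sketch, not fitted from poll data.
COEF = {"intercept": -3.0, "stated_intention": 0.45, "age": 0.02, "voted_2015": 1.2}

def p_vote(stated_intention, age, voted_2015):
    """Estimated probability of voting, from stated intention to vote (0-10),
    age in years, and whether the respondent voted at the last election."""
    z = (COEF["intercept"]
         + COEF["stated_intention"] * stated_intention
         + COEF["age"] * age
         + COEF["voted_2015"] * (1 if voted_2015 else 0))
    return 1 / (1 + math.exp(-z))
```

Each respondent's party preference would then be counted at weight `p_vote(...)` in the tally, rather than being filtered in or out wholesale.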

Weighting

  • We have modified the way in which we weight our data by age to ensure that the right proportion of the weighted data is aged 70+.
  • Education is now included in our weighting targets to ensure that we do not have too many respondents with a degree.
  • To ensure that our sample is not too politically engaged the data are now also weighted so that the overall proportion who are expected to vote (estimated via our LTV model) matches our estimate of the likely turnout. This estimated turnout is based on the relationship between the likely turnout in our 2015 data and the actual turnout in the 2015 general election.

Questionnaire

  • In the short campaign, we are now asking whether or not people have voted by post, and if they have already voted we do not ask the likelihood to vote question.

Treatment of “Don’t know” and “Refused” at the voting intention question

  • We now ask a ‘squeeze’ (or follow-up) question of those who say they will not vote, don’t know how they will vote, or refuse to say who they will vote for. This question asks respondents which party they prefer, and those naming a party in response to this question are added to the relevant tally.

    In all of the polls we have published so far those that do not provide a response to the initial question on voting intention or to the squeeze question are excluded from our final voting intention figures. However, we think it is unlikely that such voters will distribute their support in the same way as those who do declare a vote intention, and we therefore plan to develop an imputation approach to use in our final polls.

Opinium

The polls conducted by Opinium at this election differ from those undertaken at the time of the 2015 election as follows:

  1. Our estimates of party support are now based on those who say they are 10/10 certain to vote when they are asked to indicate how likely they are to vote on a scale from 0 to 10. In 2015 likelihood of voting was ascertained by asking respondents whether they definitely/probably would/would not vote and our estimates of party support were based on those who said they would definitely vote. The new procedure is somewhat more restrictive in terms of the proportion of the sample that is regarded as likely to vote.

  2. We have changed the weighting that is applied to ensure that our sample is politically representative.

    Prior to our 18 April 2017 poll, our results were weighted by EU referendum vote and party propensity as well as by standard demographic characteristics.

    Our party propensity model was based on asking respondents their likelihood of ever voting for each party on a 1-10 scale and then assigning them to a group (e.g. “Conservative – lean left” or “Labour – loyalist”). Targets for the proportion of the sample that should belong to each group were based on a rolling average that incorporated (i) the proportions found in each group in polling that was conducted immediately after the most recent general election (including the EU referendum) after the data had been weighted to the actual result, and (ii) the proportions found in each group in subsequent polls when only demographic weighting had been applied.

    Party propensity was therefore designed to include a measure of weighting by past general election vote but with the weight that this carried in the generation of the propensity group targets diminishing the further we were removed from the last election.

    However, it had become increasingly clear that this procedure was no longer sufficient on its own. For example, in our poll dated 11 April 2017 it had the effect of ensuring that the poll contained equal proportions of respondents who said they voted Labour and Conservative in 2015, with both groups comprising 28% of the sample. We have therefore implemented a stricter 2015 past vote weight that ensures that the proportion who say they voted for each party in 2015 matches the actual outcome of that election. However, our weighting does not attempt to bring the proportion of respondents who say that they voted in line with the actual turnout last time around. Our data are still also weighted by EU referendum vote.

  3. In calculating the distribution of vote intention we are now also downweighting by a half those who say they will vote for a party but also say that they disapprove of that party’s leader.
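Taken together, points 1 and 3 above imply a tally of the following form. The function and field names are illustrative, not Opinium's own implementation.

```python
# Sketch of the Opinium adjustments described above: only respondents who are
# 10/10 certain to vote are counted, and a respondent who disapproves of their
# chosen party's leader contributes half a vote.
def opinium_tally(respondents):
    tally = {}
    for r in respondents:
        if r["certainty"] != 10:              # point 1: 10/10 certain only
            continue
        w = 0.5 if r["disapproves_of_leader"] else 1.0   # point 3: halve
        tally[r["party"]] = tally.get(r["party"], 0.0) + w
    return tally
```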

ORB International

ORB International did not publish polls of voting intention during the 2015 general election, but is doing so during the 2017 election. An outline of our methodology in 2017 is given below:

Panel

Our polls are being conducted via an online panel of potential respondents. This uses a stratified sampling technique to obtain a sample that best represents the population being studied. For any particular poll, the email addresses of panel members are selected at random using interlocking quotas of age, gender, social grade and region. The size of each quota category takes into account the predicted response rate.

Weights

The interview data are weighted to be demographically representative on the following variables according to targets taken from the National Readership Survey:

  • Gender
  • Socioeconomic grade
  • Tenure
  • Work Status
  • Cars in household
  • Foreign holidays
  • Geographic region

In addition, weights for age and education level are applied so that the size of each age/education group reflects the proportion of those who actually voted in 2015 who belonged to that group. The targets for these weights are derived from the face to face random probability survey that was conducted after the 2015 election on behalf of the British Election Study.

Finally, in calculating vote intention a weight for the likelihood that a respondent will vote is applied according to their answers to the following question.

And once again thinking to the next general election on June 8th, on a scale of 0-10 where 0 is certain not to vote and 10 is certain to vote, how likely is it that you would go out and vote?

Thus, for example, a respondent who gives an answer of 5 counts as 0.5 in calculating the tally of vote intentions.

(Answer scale: 0 “Certain not to vote” to 10 “Certain to vote”; respondents may also answer “Don’t know”.)
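The contribution of each respondent to the reported tally, combining the demographic weight with the likelihood-to-vote weight described above, can be sketched as follows (field names are illustrative):

```python
# Sketch of an ORB-style vote share calculation: each respondent contributes
# (demographic weight) x (likelihood-to-vote score / 10) to their named
# party's tally; shares are then expressed as percentages of the total.
def orb_vote_shares(respondents):
    tally = {}
    for r in respondents:
        if r["party"] is None:            # no party named: excluded
            continue
        w = r["weight"] * r["ltv"] / 10.0
        tally[r["party"]] = tally.get(r["party"], 0.0) + w
    total = sum(tally.values())
    return {party: 100.0 * t / total for party, t in tally.items()}
```

So a respondent who gives a likelihood of 5 contributes half as much as an otherwise identical respondent who is certain to vote, exactly as the worked example above describes.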

Reported Voting Intention

Our reported voting intention takes the weighted data and includes all those who name a party to provide a final indication of voting intention.

Panelbase

We are weighting our polls not only by how people voted in the 2015 general election but also by how people voted in the EU referendum in 2016. Otherwise, to date, we have not made any radical change to the online polling methodology used in 2015. However, for our final polls, we are considering using data from the face to face random probability survey conducted after the last election for the British Election Study in order to estimate likelihood of voting.

Survation

Survation are conducting polls both by phone (for Good Morning Britain) and online (for the Mail on Sunday).

Our phone polls are being conducted using the same approach as that used in the final poll that we conducted before the 2015 election which put the Conservatives on 37% and Labour on 31%, but which was not published in advance of the election count. Data that links phone numbers to lifestyle information about the households in question is used to generate a stratified random sample of respondents.

Our internet polls are also being conducted using the same methodology as in 2015.

YouGov

We have made the following changes to our methodology since the 2015 general election.

Overview

YouGov conducted an in-house review of our election polls. This came to similar conclusions to the BPC/MRS inquiry – that the 2015 polling error was primarily down to sampling issues. Specifically, we felt that the sample was skewed towards people who were too interested in politics, particularly among young people.

Our changes since then have been primarily aimed at addressing this issue through improvements to the composition of our panel of potential respondents and changes to quotas and weights.

Panel Recruitment

Since 2015 we have focused our efforts at recruiting people into our panel of potential respondents more specifically upon those who belong to groups that were under-represented in our samples during the 2015 election, particularly those with a low level of interest in politics, younger panellists with low educational qualifications, and those who did not vote at the previous general election.

Sample quotas and weighting targets

We have changed the upper age band that we use both in determining who should be asked to participate in a poll (quota) and in subsequently weighting the data we obtain, so that it includes only those aged 65 and over rather than those aged 60 and over. This has increased the proportion of those aged 65 plus who are represented in our samples and corrects an over-representation of those aged between 60 and 64.

We have dropped our previous quotas and weights by newspaper readership which, with falling print readership, we found was no longer effective for enough of the sample.

We have added quotas and weights by highest educational qualification, calculated separately for each age group, in order to correct an over-representation of graduates, particularly among younger respondents.

We have replaced political weighting and quotas by party identification with recalled past vote, calculated separately for each region. This has given us greater control over the number of people in the sample who did not vote in 2015, while weighting separately for each region has improved the political representativeness of our samples in Scotland and London.

We now also set quotas and weight our data by the amount of attention the respondent pays to politics. The targets for these quotas and weights are based upon the face to face random probability survey conducted after the last election on behalf of the British Election Study.

Since 2016 we have also weighted by reported EU referendum vote, collected from our panel members shortly after the referendum.

Likelihood to vote

Finally, we have changed our likelihood to vote model. Respondents are still asked to say on a scale from zero to ten how likely they are to vote in the general election, and the contribution they make to the estimate of vote intentions reflects their answer to this question. Thus, for example, respondents who say they are 5/10 likely to vote are given a weight of 0.5. However, this contribution is now reduced by a half if the respondent says they did not vote at the 2015 general election.
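This likelihood-to-vote weight can be written as a one-line rule (the function name is illustrative):

```python
# YouGov-style likelihood-to-vote weight as described above: the 0-10 score
# divided by 10, halved again for respondents who say they did not vote in 2015.
def ltv_weight(score_0_to_10, voted_2015):
    w = score_0_to_10 / 10.0
    if not voted_2015:
        w *= 0.5
    return w
```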

Performance of the Opinion Polls in the 2016 Local and Devolved Elections

The following tables compare the results of opinion polls of voting intentions for the elections that were held on May 5th in London, Scotland and Wales with the final outcome. A poll is included if its fieldwork was conducted wholly or mostly in the seven days before polling day (May 5th), and it was the final poll published by the pollster concerned. (This is a slightly longer period than is normally used in BPC reports on the performance of the polls in general elections — the period has been lengthened because there were very few polls conducted in the last few days.)

It should be noted that all of these polls were conducted online, with the exception of the Survation poll in Scotland, which was conducted by telephone.

London Mayor 1st Preference Vote

Company / Fieldwork Dates Con Lab LibDem UKIP Green Other
Opinium 26.4–1.5.16 35 48 4 5 5 3
TNS 26.4–3.5.16 33 45 7 5 4 5
ComRes 28.4–3.5.16 36 45 6 4 6 3
YouGov 2–4.5.16 32 43 6 7 7 5
RESULT 35 44 5 4 6 7

London Mayor After Redistribution of 2nd Preferences

Company / Fieldwork Dates Con Lab
Opinium 26.4–1.5.16 43 57
TNS 26.4–3.5.16 43 57
ComRes 28.4–3.5.16 44 56
YouGov 2–4.5.16 43 57
RESULT 43 57

London Assembly — Constituency Vote

Company / Fieldwork Dates Con Lab LibDem UKIP Green Other
YouGov 2–4.5.16 30 44 7 11 7 1
RESULT 31 42 8 8 9 3

London Assembly — List Vote

Company / Fieldwork Dates Con Lab LibDem UKIP Green Other
YouGov 2–4.5.16 29 39 8 11 9 5
RESULT 29 40 6 8 8 3

Scotland — Constituency Vote

Company / Fieldwork Dates Con Lab LibDem SNP Other
Survation 1–2.5.16 19 21 7 49 5
YouGov 2–4.5.16 19 22 7 48 4
RESULT 22 23 8 47 1

Scotland — List Vote

Company / Fieldwork Dates Con Lab LibDem SNP Green Other
Survation 1–2.5.16 20 19 8 43 7 4
YouGov 2–4.5.16 20 19 6 41 9 5
RESULT 23 19 5 42 7 4

Wales — Constituency Vote

Company / Fieldwork Dates Con Lab LibDem PC UKIP Other
YouGov 2–4.5.16 21 33 8 19 16 4
RESULT 21 35 8 21 12 3

Wales — List Vote

Company / Fieldwork Dates Con Lab LibDem PC UKIP Other
YouGov 5.5.16 20 31 6 20 16 8
RESULT 19 31 6 21 13 3

YouGov also conducted on polling day an exercise in which it recontacted a sample of those whom it had interviewed previously in order to ascertain how they had voted or intended to vote. This produced the same result for the Constituency vote, and very similar figures for the List vote: Con 19%, Lab 30%, LibDem 6%, PC 21%, UKIP 16% and other 8%.

British Polling Council opposes Bill on Regulating Opinion Polls

The British Polling Council (BPC) urges the House of Lords to reject the private members bill on the regulation of opinion polls that is being presented today by Lord Foulkes.

The Bill proposes that an authority be established that would regulate polls of voting intentions for all elections and referendums in the United Kingdom. The authority would be empowered to specify approved ways of selecting who should be interviewed and how poll questions should be worded, and to ban the publication of voting intention polls during an election campaign.

Who is interviewed by a poll and how the questions it asks are phrased are important issues. How any poll has addressed them should always be clearly stated, as the rules of the BPC require. But they are not issues that are susceptible to straightforwardly ‘right’ or ‘wrong’ answers. Professional researchers can and do disagree about how polls should be conducted and how they should be worded. They regularly experiment and test alternative and new ways of doing polls and asking questions in order to improve their methods. Imposing regulatory standards would put at risk the experimentation and competition that are essential to improving the ways in which polls are conducted.

Banning the publication of polls during an election campaign would not mean that polls were not conducted. It would simply mean that access to their results would be confined to those who could afford to pay for polls, such as the banks and the political parties, or who knew where to find the results on an overseas website. Only the ordinary voter, who is meant to be central to the democratic process, would be left out of the loop. It could potentially open the way for politicians to claim that their private polling showed them ahead, regardless of what their polling actually showed, or indeed whether it existed at all.

Doubtless many people feel that in underestimating the Conservative vote and overestimating Labour’s, the polls provided unhelpful misinformation during the recent general election campaign. That is why the BPC has established an independent inquiry into why the polls were wrong and how their conduct might be improved in future. Indeed, the first meeting of that inquiry, at which BPC members will be presenting – in public – their initial findings as to what went wrong, is being held today.

Professor John Curtice, President of the British Polling Council, said, ‘What is needed now is a critical and open appraisal of where the polls went wrong, not the heavy hand of regulation that in attempting to impose common standards would make it more likely that the polls all get it wrong again in future. As any economic forecaster knows too well, forecasting how people will behave is always a difficult enterprise. No-one has yet suggested that, despite their many errors, economic forecasting should be regulated, and it is not clear why attempting to anticipate how people will vote should be treated any differently.’

Notes for Editors:

  • The British Polling Council (BPC) is an association of polling organisations that publish polls. The objectives of the Council are to ensure standards of disclosure that provide consumers of survey results that enter the public domain with an adequate basis for judging the reliability and validity of the results. Website: www.britishpollingcouncil.org. Twitter: @BritPollingCncl
  • The first meeting of the inquiry is at The Royal Statistical Society, 12 Erroll St, London EC1Y 8LX at 1.30 pm on 19 June. Anyone who wishes to attend should register at BPC/MRS Polling Inquiry meetings
  • Members of the inquiry are Dr. Nick Baker, Group CEO, Quadrangle Research Group Ltd; Dr. Mario Callegaro, Senior Survey Research Scientist, Google UK ; Dr. Stephen Fisher, Associate Professor of Political Sociology, University of Oxford, who runs the Electionsetc website; Dr. Jouni Kuha, Associate Professor of Statistics, London School of Economics and lead statistician for the BBC/ITV/Sky exit poll; Prof. Jane Green, Professor of Political Science, University of Manchester and Co-Director of the 2015 British Election Study; Prof. Will Jennings, Professor of Political Science and Public Policy, University of Southampton, and a member of the Polling Observatory team; Dr Ben Lauderdale, Associate Professor in Research Methodology, London School of Economics and one of the team behind the electionforecast.co.uk website; Dr. Patten Smith, Research Director, Research Methods Centre, Ipsos MORI and Chair of the Social Research Association.
  • Details of Lord Foulkes’ bill can be found at: Regulation of Political Opinion Polling Bill [HL] 2015-16

For further information, please contact a member of the BPC Management Committee:

Simon Atkinson: 07791 680770
Nick Moon: 07770 564664
John Curtice 07710 348755

Details of Opinion Poll Inquiry Announced

The British Polling Council (BPC) publishes today further details of the Inquiry into the performance of the opinion polls that it has established in collaboration with the Market Research Society (MRS).

Under the chairmanship of Prof. Patrick Sturgis, Director of the National Centre for Research Methods at the University of Southampton, the Inquiry is charged with the task of establishing the degree of inaccuracy in the polls, the reasons for the inaccuracies it identifies, and whether the findings and conduct of the polls were adequately communicated to the general public. Due to report by 1 March next year, the Inquiry will seek and welcome submissions from all interested parties, and is empowered both to make recommendations about the future practice of polling and, where appropriate, for changes in the rules of the BPC. The BPC and MRS are committed to publishing the Inquiry’s report in full.

Eight people with professional expertise and experience in conducting and analysing survey and polling data have agreed to serve (unpaid) as members of the Inquiry. None of them were directly involved in conducting published polls during the election campaign. They are as follows:

  • Dr. Nick Baker, Managing Director, Quadrangle Research
  • Dr. Mario Callegaro, Senior Survey Research Scientist, Google UK
  • Dr. Stephen Fisher, Associate Professor of Political Sociology, University of Oxford, who runs the Electionsetc website
  • Dr. Jouni Kuha, Associate Professor of Statistics, London School of Economics and lead statistician for the BBC/ITV/Sky exit poll
  • Prof. Jane Green, Professor of Political Science, University of Manchester and Co-Director of the 2015 British Election Study
  • Prof. Will Jennings, Professor of Political Science and Public Policy, University of Southampton, and a member of the Polling Observatory team.
  • Dr Ben Lauderdale, Associate Professor in Research Methodology, London School of Economics and one of the team behind the election forecast website.
  • Dr. Patten Smith, Research Director, Research Methods Centre, Ipsos MORI and Chair of the Social Research Association.

Information about the work of the Inquiry will be available via a website launched today at National Centre for Research Methods (NCRM). As a first step the Inquiry is inviting written submissions, which can be uploaded via the website. A public meeting will be held during the afternoon of 19 June at the Royal Statistical Society, London, where there will be an opportunity to discuss the work of the Inquiry. The event will be free to attend but registration will be required. Registration will open, via the NCRM website, on Tuesday 26 May. Further information and updates about the conduct of the inquiry will be made available on the website thereafter.

Prof. John Curtice, President of the British Polling Council, said, ‘The polls clearly gave the public a misleading impression of the likely outcome of the 2015 election and this shaped the reporting of the campaign. The Council is committed to ensuring that there should be a thorough and transparent investigation into what apparently went wrong, and how both the conduct and the reporting of the polls might be improved in future. We are deeply grateful to Prof. Sturgis and the members of the Inquiry, all of whom have substantial professional expertise in the methodology and analysis of surveys, for agreeing to conduct this Inquiry.’

Jane Frost CBE, MRS’ Chief Executive, said, “As the world’s leading research association, we are actively supporting the British Polling Council in its investigation. We continue to support all of our accredited members in ensuring standards are met. Market research is a UK success story: the UK is a world leader in this sector, contributing over £3.6bn to the UK economy. We continue to learn, adapt and innovate.”

For further information:

Notes to Editors

  1. The full details of the Terms of Reference of the Inquiry are appended to this release.
  2. The British Polling Council (BPC) is an association of polling organisations that publish polls. The Council promotes standards of disclosure that are designed to provide consumers of survey results that enter the public domain with an adequate basis for judging the reliability and validity of the results. Most of the companies that conducted polls of voting intention at the 2015 UK general election are members. Further details can be found at http://www.britishpollingcouncil.org/.
  3. The Market Research Society (MRS) is the world’s leading professional research association, training and regulating the research sector in the UK. The research sector is a major UK industry worth a conservative £3.6bn (GVA) per annum.
  4. The original announcement of the establishment of the Inquiry can be found at General Election: 7 May 2015

BPC/MRS Inquiry into the Performance of the Opinion Polls at the 2015 General Election.

Terms of Reference

  1. To assess the accuracy of the published opinion polls (both national and sub-national) at the 2015 general election.
  2. To evaluate whether any inaccuracies identified might be part of a pattern evident at previous elections.
  3. To investigate the causes of any inaccuracies that are identified. Potential causes to be considered will include (but not necessarily be limited to): the possible impact of late changes in vote preferences, sampling methods, interview mode, weighting and filtering, population coverage, item refusal, differential availability and willingness to participate, question order and wording.
  4. To assess whether the analysis or reporting of polls was influenced by a reluctance to be out of line with the published figures of other polls.
  5. To consult and seek relevant evidence from all appropriate stakeholders, including but not exclusively, polling organisations that are members of the BPC.
  6. To assess whether adequate information was provided and communicated to interested commentators and the public about how polls were conducted and what their results meant.
  7. To make, as it sees fit, recommendations for improving how opinion polls are conducted and published in future.
  8. To make recommendations, if necessary, for changing the rules and obligations of BPC membership.
  9. To submit a report to the BPC and MRS by 1 March 2016, with a view to its publication by BPC and MRS as soon as possible thereafter.

General Election: 7 May 2015

The final opinion polls before the election were clearly not as accurate as we would like, and the fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation.

The British Polling Council, supported by the Market Research Society, is therefore setting up an independent enquiry to look into the possible causes of this apparent bias, and to make recommendations for future polling.

We are pleased to announce that Professor Patrick Sturgis, who is Professor of Research Methodology and Director of the ESRC National Centre for Research Methods, has agreed to chair the enquiry, and will take the lead in setting its terms of reference. The membership of the enquiry will be announced in due course.

The headline results for the final opinion polls are set out below:

Con Lab LibDem UKIP Green Other Method Sample Size Fieldwork
% % % % % % n
Opinium 35 34 8 12 6 5 online 2960 May 4-5
Survation 31 31 10 16 5 7 online 4088 May 4-6
Ipsos MORI 36 35 8 11 5 5 telephone 1186 May 5-6
ICM 34 35 9 11 4 7 telephone 2023 May 3-6
ComRes 35 34 9 12 4 6 telephone 2015 May 3-5
Populus 33 33 10 14 5 6 online 3917 May 5-6
YouGov 34 34 10 12 4 6 online 10307 May 4-6
Panelbase 31 33 8 16 5 7 online 3019 May 4-6
Average 33.6 33.6 9.0 13.0 4.8 6.1
Result 37.8 31.2 8.1 12.9 3.8 6.3
Difference -4.2 2.4 0.9 0.1 1.0 -0.2
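
The Average and Difference rows above are simple arithmetic and can be reproduced with a short script. This is a minimal sketch: the figures are transcribed from the table, and rounding at the final decimal place may differ by 0.1 from the published row.

```python
# Final 2015 polls, transcribed from the table above.
# Shares are (Con, Lab, LibDem, UKIP, Green, Other).
polls = {
    "Opinium":    [35, 34,  8, 12, 6, 5],
    "Survation":  [31, 31, 10, 16, 5, 7],
    "Ipsos MORI": [36, 35,  8, 11, 5, 5],
    "ICM":        [34, 35,  9, 11, 4, 7],
    "ComRes":     [35, 34,  9, 12, 4, 6],
    "Populus":    [33, 33, 10, 14, 5, 6],
    "YouGov":     [34, 34, 10, 12, 4, 6],
    "Panelbase":  [31, 33,  8, 16, 5, 7],
}
result = [37.8, 31.2, 8.1, 12.9, 3.8, 6.3]  # actual vote shares

# Unweighted mean of the final polls, party by party.
average = [round(sum(shares) / len(polls), 1) for shares in zip(*polls.values())]

# A negative difference means the polls underestimated that party's share.
difference = [round(a - r, 1) for a, r in zip(average, result)]

for party, a, d in zip(["Con", "Lab", "LibDem", "UKIP", "Green", "Other"],
                       average, difference):
    print(f"{party}: poll average {a}, error {d:+}")
```

Treating the eight final polls as equally informative is itself a methodological choice; weighting by sample size or by recency of fieldwork would give slightly different averages.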

Reading The Polls: Election 2015 and The British Polling Council

This will almost certainly prove to be the most polled election campaign ever. After all, YouGov in particular have been polling almost every day throughout the course of the last five years, and they are not suddenly going to stop doing so now. Nine other companies are also polling on a regular basis. Meanwhile the apparent closeness of the election race will encourage newspapers to spend as much money as they can on their own exclusive polls in the hope of being the news organisation that first breaks the news that the deadlock has finally been broken (if it ever is).

But polling is far from being a straightforward enterprise. Those who undertake polls are attempting to provide an accurate measure of the nation’s political pulse at a time when people have busier lives than ever, when many are increasingly reluctant to answer any kind of survey, and when no fewer than three insurgent political parties are enjoying unprecedented levels of support. There are evidently plenty of potential pitfalls to avoid.

At the same time, it is clear that the polls have influence. In recent weeks there has been much discussion of who might be willing to do a deal with whom in the event of a hung parliament, all of it predicated on the evidence of the polls that Conservative and Labour are neck and neck and that the SNP might displace the Liberal Democrats as the third largest party in the Commons. Without this evidence the subject matter and tone of the election campaign could well have been very different indeed.

In these circumstances it is clearly important that polls are subject to critical scrutiny. We should be able to satisfy ourselves that numbers that prove to be so influential but which are collected in what would seem quite difficult circumstances are indeed as robust and reliable as can reasonably be expected.

Making this possible is the key objective of the British Polling Council (BPC). Nearly all of the companies and organisations that conduct political opinion polls in the UK are members of the Council. In joining the body they have agreed to abide by a set of rules that demand a high level of transparency about how they go about their business.

Each member is expected to post on its website a description of how it conducts its polls and how it weights or otherwise adjusts or models the raw data it collects in order to arrive at its estimates of the balance of voting intentions. At the same time, the details of each poll, including the full wording of the questions asked and detailed tabulations of how the answers given vary by people’s demographic and political characteristics, should be posted within three days of the poll being published – and during an election campaign ideally within 18 hours of publication. In practice most polling companies publish these details very shortly after initial publication.

Not that this means that all polls have to be published. Anybody has the right to commission a poll from a BPC member and keep the results to themselves. But if they do not want the details of their polling to be published, they do have to keep the results to themselves. If, for example, a commissioner starts to leak results to one or more journalists (perhaps selectively), then the BPC member becomes obliged to publish full details of the polling that has been leaked. If the results of a ‘private’ poll have been put into the public domain then they should be capable of being scrutinised in exactly the same way as a poll that was originally intended for public consumption.

However, conducting a poll is a multi-stage operation. At its most basic it requires the capacity to contact and interview successfully a representative body of voters (these days typically either by telephone or via the internet), to collate the results and to weight the data to a standard demographic scheme so that it has, for example, the correct proportion of men and women, younger and older people, etc. But it also requires an ability to identify a suitable sample design, to craft suitable questions and to undertake more complex weighting and filtering of the data than simply making sure it has the right proportion of men and women.
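
As an illustration of the most basic of these steps, demographic weighting, here is a minimal sketch of simple cell weighting. The attribute, the sample and the population targets are all invented for the example; real polls weight several demographic and political variables at once, and each BPC member’s actual scheme is described on its own website.

```python
from collections import Counter

# Hypothetical achieved sample: 6 men and 4 women.
respondents = ([{"sex": "male"} for _ in range(6)]
               + [{"sex": "female"} for _ in range(4)])

# Illustrative population targets (e.g. from census figures).
targets = {"male": 0.49, "female": 0.51}

# Cell weighting: weight = target share of the group / sample share of the
# group, so over-represented groups are weighted down and under-represented
# groups are weighted up.
counts = Counter(r["sex"] for r in respondents)
n = len(respondents)
for r in respondents:
    r["weight"] = targets[r["sex"]] / (counts[r["sex"]] / n)

# The weighted composition of the sample now matches the targets.
weighted_male = sum(r["weight"] for r in respondents if r["sex"] == "male") / n
```

The same mechanism extends naturally to the “more complex weighting and filtering” mentioned above: some companies, for example, further down-weight respondents by their self-reported likelihood of voting before computing vote shares.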

Not all of these stages are necessarily conducted by the same organisation. In particular, a polling company (or indeed another organisation such as a university or a government department) may not have the capacity to undertake the fieldwork for a poll and thus opt to sub-contract it to a polling company that does. The job of the sub-contractor is simply to conduct the interviews and tabulate the results according to the specification of the contractor. In these circumstances the BPC decided some time ago that the body that should be regarded as responsible for the poll is the company or organisation that designed and commissioned the fieldwork, not the firm that did the interviewing.

This issue of who is regarded as responsible for a poll has arisen on a couple of occasions recently. One of the most active pollsters in recent years has, of course, been Lord Ashcroft, operating under the banner ‘Lord Ashcroft Polls’. Lord Ashcroft Polls does not have the ability to conduct its own fieldwork and thus sub-contracts this part of its polling to a number of companies, many of them BPC members. However, Lord Ashcroft Polls is responsible for the design, weighting and question wording of its polls, and thus it is the body that is ultimately responsible for its results. As it happens, Lord Ashcroft Polls is not a member of the BPC (and, as an organisation that does not do work for multiple clients, is not eligible to be a member), but it publishes full details of its polls in much the same way as a BPC member would be expected to do.

At the same time the Liberal Democrat Party has been undertaking quite a lot of polling in constituencies that it currently holds, seemingly with a view to establishing in which ones it might have a chance of winning again. Here too the party has been responsible for the design and wording of the polling and for the weighting of the data, but has sub-contracted the fieldwork to a BPC member, in this case Survation. In recent weeks the Liberal Democrats have given journalists sight of some of their data, and in so doing apparently gave the impression that the polling was Survation’s responsibility. That, however, was not the case, as Survation subsequently made clear in a statement on its own website. To date the Liberal Democrats, who are not BPC members, have published full details of one of their constituency polls, though not as yet the remainder.

BPC members will be making full details of their published polls available as quickly as possible throughout the election campaign so that everyone can come to their own view as to whether they believe the results are robust and reliable or not. But inevitably members can only do so for those polls for which they are themselves responsible. If someone claims their poll was conducted by a BPC member, do please check the claim out. It may not be true.

John Curtice is President of the British Polling Council and Professor of Politics, Strathclyde University