British Polling Council

FAQs By Members Of The Public

 
Was the poll conducted by an organisation with polling experience?
Opinion polling poses different problems for researchers from those encountered in other kinds of survey research, and the more experienced the polling organisation, the more likely it is to have encountered these problems and developed means of dealing with them. New organisations do of course come onto the polling scene and prove to be every bit as reliable as more established pollsters. However, if a poll produced by a new organisation shows results very different from the general trend of recent polls, it is advisable to wait and see what other polls say before placing too much reliance on it.
Can you find out more information about the poll?
One of the key beliefs of the British Polling Council is that transparency is the best guard against polls being biased, whether deliberately or accidentally. It is impossible to make a good assessment of the likely reliability of a poll without knowing essential details such as when it was conducted, the sample size, the sampling method, any weighting employed and so on. All members of the BPC will provide this information on request to journalists and others.
Again, an organisation that does not do this is not necessarily wrong, but it is a lot easier to place your trust in a poll conducted by an organisation that is completely open about how its polls are conducted, leaving you to make a sensible judgement, on the information provided, as to whether the poll is likely to be accurate or not.
Who paid for the poll?
Most published polls are commissioned by newspapers and television programmes, and while they may be commissioned as much for marketing and publicity purposes as to provide a news service for their readers or viewers, media-commissioned polls are unlikely to be pursuing any hidden agendas.
Greater care must be taken with polls commissioned by pressure groups. These polls are likely to be conducted solely to "prove" that the view put forward by the pressure group is the majority view, and it is in the pressure group's interest to obtain a particular result. If the poll is conducted by a reputable organisation, it is likely that any significant bias in the sample or questionnaire design will have been prevented. However, even then the poll may touch on only one aspect of the issue, and is thus at best partial, if not actually biased.
Similarly, private polls conducted for political parties are usually conducted for quite different reasons from opinion polls for media publication. A party trying to establish its own strengths and weaknesses, or to work out what the potential support is for a particular issue, may well ask questions which in other circumstances would be considered leading, merely to test such a hypothesis. This is fine as long as the results are used purely internally within the party, but once a decision is taken to place these results in the public domain they should be looked at with some suspicion.
How many people were interviewed?
While it is true that, in a scientifically conducted survey, the more interviews there are the more accurate the results are likely to be, it does not follow that a big survey is necessarily better than a small one. If someone mails out a million questionnaires and gets ten thousand back (a response rate of only 1%), this poll will almost certainly be less reliable than a more scientifically conducted survey with a sample of a mere thousand.
There is no "minimum" acceptable sample size for a poll, but around one thousand has become the established norm for a nationwide opinion poll in Great Britain.
One should also be very suspicious of attempts to over-interpret sub-group analysis in polls. One might see a national poll of one thousand reported with the comment that support for a particular party is much higher among respondents of Asian origin than among white or black respondents. Since a nationally representative sample of one thousand will contain only around forty respondents of Asian origin, and fewer black respondents, any results based on such small sample sizes are not really worthy of comment.
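To see how fragile such sub-group figures are, one can apply the standard textbook approximation for the 95% margin of error of a simple random sample, 1.96 * sqrt(p * (1 - p) / n), to the full sample and to the sub-group. The Python sketch below is purely illustrative; the assumption of simple random sampling (and the worst-case p = 0.5) is an assumption for illustration, not a description of any member's methodology.

    import math

    def margin_of_error(p: float, n: int) -> float:
        """Approximate 95% margin of error for a sample proportion,
        assuming simple random sampling."""
        return 1.96 * math.sqrt(p * (1 - p) / n)

    # Full national sample of 1,000: roughly +/- 3 points.
    print(f"n=1000: +/-{margin_of_error(0.5, 1000):.1%}")

    # Sub-group of around 40 respondents: roughly +/- 15 points, far
    # too wide to support confident claims about that sub-group.
    print(f"n=40:   +/-{margin_of_error(0.5, 40):.1%}")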
Finally, it should be noted that the reliability of a sample is a function of the absolute sample size rather than the proportion it represents of the population from which it was drawn. Thus a sample of one thousand representing the whole of Great Britain is no worse than a sample of one thousand representing the population of a single constituency, even though the two sampling fractions are very different.
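The same point can be checked with the textbook finite population correction; the population figures below are illustrative round numbers, not official counts. Even after correcting for population size, a sample of one thousand drawn from the whole of Great Britain and one drawn from a single constituency have virtually identical standard errors.

    import math

    def standard_error(p: float, n: int, population: int) -> float:
        """Standard error of a sample proportion under simple random
        sampling, with the finite population correction applied."""
        fpc = math.sqrt((population - n) / (population - 1))
        return math.sqrt(p * (1 - p) / n) * fpc

    # Sampling fractions of roughly 0.002% and 1.3% respectively, yet
    # the standard errors are virtually identical: reliability depends
    # on the absolute sample size n, not the sampling fraction n / N.
    print(standard_error(0.5, 1000, population=50_000_000))  # Great Britain
    print(standard_error(0.5, 1000, population=75_000))      # one constituency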
How were respondents chosen?
There has been much argument over the years as to the relative merits of random versus quota sampling. Whilst there is a much stronger theoretical basis for random sampling, the history of opinion polling in the latter part of the 20th century suggests that quota surveys can perform every bit as reliably as random ones.
What is more important than exactly how the respondents were chosen is the fact that they were chosen at all, rather than selecting themselves for the survey. Thus any survey in which the survey organisation chooses who takes part is likely to be more reliable than the kind of newspaper readership poll, phone-in poll or open-access Internet poll where anybody can take part and there is no attempt to ensure that the demographic profile of respondents matches the demographic profile of the country as a whole.
What geographical areas are covered by the survey?
Almost all opinion polls published in the UK do not involve interviews in Northern Ireland, mainly because of the different political system in operation there. They should thus be described as representing the views of Great Britain, not of the United Kingdom. There is nothing wrong with a poll that excludes Northern Ireland, as long as everyone is clear that this is what is happening.
Similarly, there is nothing wrong with conducting a poll that only interviews people south of Manchester, so long as it is absolutely apparent that the poll excludes the north of the country and no attempt is made to suggest that it is nationally representative.
When was the poll conducted?
It is not unusual for two polls to be published at a similar time, even on the same day, when one was conducted in the previous two or three days whilst the other was conducted over a week earlier. There may have been no general shift in political opinion over this period, in which case the gap makes no difference; but at a time when opinion is shifting, then all other things being equal, more weight should be placed on the poll conducted more recently.
Fieldwork dates should always be included in the publication of any poll. If they are not you should ask the polling organisation to supply them.
What is the sampling error on a typical opinion poll?
It is quite common to see poll results accompanied by a statement to the effect that the results "are subject to a sampling error of plus or minus 3%". Whilst this may be a reasonable indication of the accuracy of the poll, it is a gross oversimplification to suggest that this is exactly what the sampling error will be, or indeed that sampling error is the only source of error.
Technically it is only possible to calculate sampling error for random samples, although historically quota samples appear to have had similar sampling errors.
What a sampling error of, say, +/- 3% means is that if 20 polls were conducted separately, all at the same time and using exactly the same methodology, and the real level of support for the Conservatives across the entire country were 30%, one would expect 19 of the 20 polls to produce a result between 27% and 33%. However, there is always the possibility of a "rogue poll": the one in 20 that can be expected to fall outside this range of sampling error.
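This "19 polls in 20" interpretation can be checked with a quick simulation. The sketch below assumes simple random sampling, which real polls only approximate, so it illustrates the principle rather than any actual polling methodology.

    import random

    TRUE_SUPPORT = 0.30   # assumed "real" level of Conservative support
    N = 1000              # interviews per poll
    POLLS = 2000          # number of simulated polls

    within = 0
    for _ in range(POLLS):
        hits = sum(random.random() < TRUE_SUPPORT for _ in range(N))
        if 0.27 <= hits / N <= 0.33:
            within += 1

    # Roughly 95% of the simulated polls land between 27% and 33%; the
    # remainder are the occasional "rogue poll" that sampling error
    # alone produces.
    print(f"{within / POLLS:.1%} of polls within 3 points of the truth")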
What all this means in practice is that, in a poll of a thousand, if two parties are within only 2% or 3% of each other, there is a realistic chance that they are actually tied, or even that the party shown in second place is actually in first place. A result this close is not a "statistical dead heat", but it is "too close to call", with the parties "neck and neck".
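For readers who want the arithmetic behind "too close to call": when two shares come from the same poll, the uncertainty on the gap between them is larger than the uncertainty on either share alone. The sketch below uses the usual multinomial formula for the standard error of a difference of shares; the party figures are invented for illustration.

    import math

    def lead_standard_error(p1: float, p2: float, n: int) -> float:
        """Standard error of the lead (p1 - p2) when both shares are
        estimated from the same simple random sample of size n."""
        return math.sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n)

    p1, p2, n = 0.32, 0.30, 1000
    margin = 1.96 * lead_standard_error(p1, p2, n)
    # A 2-point lead against a 95% margin of roughly +/- 5 points on
    # the lead itself: the trailing party could genuinely be ahead.
    print(f"lead = {p1 - p2:.1%}, 95% margin on the lead = +/-{margin:.1%}")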
Is sampling error the whole story?
All polls are subject to sampling error, and this should certainly be taken into account in judging the likely accuracy of a poll finding, but much else can affect polls besides sampling error. And whereas sampling error means that the truth is just as likely to be above the poll finding as below it, many of the other problems that can afflict polls introduce bias rather than error.
The key difference between bias and error is that whereas error is as likely to understate a party's support as to overstate it, bias always operates in one direction. Thus if there is something about a poll design which, say, significantly undersamples the richest 10%, and this richest 10% is far more likely to vote Tory, then every single poll conducted with this methodology is likely to underestimate the Tory vote.
Across a number of polls, sampling error is self-cancelling: there will be as many polls that understate the Conservatives through sampling error as overstate them. If a series of polls exhibits bias, however, every single poll in that series will err in the same direction.
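The contrast can be made concrete with a small simulation. The 2-point bias below is an invented stand-in for a design flaw such as undersampling a group that leans towards one party; it is not drawn from any real poll.

    import random

    TRUE_SUPPORT, N, POLLS = 0.30, 1000, 500

    def mean_estimate(shift: float) -> float:
        """Mean estimate across POLLS simulated polls of size N, where
        `shift` is a systematic bias applied to every interview."""
        total = 0.0
        for _ in range(POLLS):
            hits = sum(random.random() < TRUE_SUPPORT + shift for _ in range(N))
            total += hits / N
        return total / POLLS

    # Unbiased polls scatter around 30% and their errors cancel out on
    # average; biased polls scatter around 28%, so every poll in the
    # series tends to understate the party in the same direction.
    print(f"unbiased series mean: {mean_estimate(0.0):.1%}")
    print(f"biased series mean:   {mean_estimate(-0.02):.1%}")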
What can go wrong apart from the sample?
It is also possible to introduce bias into a survey by means of question wording. For the most part, differences in question wording will not have a significant effect on the answers obtained, and it is unlikely that a poll conducted by a reputable organisation would contain any serious bias in its questionnaire, but it is certainly possible to influence the results by the choice of one form of question wording rather than another.
If two polls on the subject of, say, abortion are carried out at the same time with the same sample size and design but produce different results, you should examine the questions carefully to see whether one wording is more likely to produce a pro-abortion viewpoint and the other an anti-abortion viewpoint.