Political Polling and Market Research

Lessons from the 2020 Election

December 4, 2020

By Saide Ashaboglu, Director, Research and Neal Flieger, Managing Director, D.C.

Once again, public political polling is taking a beating. Predictions for the 2020 election, based on countless polls conducted nationwide and in key states, showed much more support for Vice President Biden and some Senate candidates than actually materialized when votes were counted.

At Golin, surveys are an important tool, and one of the ways we help our clients build smart, strategic programs. It is important to call out that the topics and nature of the surveys we conduct through GolINTEL (Golin North America’s primary research team, located in Washington, D.C.) are inherently different from political polls. However, it is always a good exercise to dig into the potential issues with the political polls of 2020 so we can understand – as well as we can – what happened, in order to refine and improve our own survey methodologies.

We looked closely at the polling, evaluated where some of the problems were, and thought hard about how some of those failings apply to the work we do for our clients, and where they don’t. This memo outlines that analysis and concludes with a list of changes and additions we’re going to implement that will make our surveys more transparent and help to flag where issues may exist.

First, let’s look at how polls and surveys are conducted. Most of the issues raised about the 2020 polls concern how the pools of respondents were assembled and how respondents engaged with the questions in the survey. It’s useful to understand how survey respondents are identified and assembled. All polls, whether they are conducted online, over the phone, or in person, begin by accessing an established panel or pool of respondents.

During the 2020 campaign season, polls were conducted among adults (18+) who were registered voters and/or likely voters, depending on the market research firm running the poll.

Examining the 2020 Polls and Outcomes

Several factors are now being examined by people in the market research industry as possible reasons why the 2016 and 2020 polls were so far off from the final vote counts. Based on the ongoing conversations around polling and our own points of view on where the gaps might lie, Golin has identified five key reasons for the situation pollsters find themselves in.

Missed Populations – One thought is that the panels that were built failed to include certain populations at high enough levels. Most traditional political polls are built using historical voting data, so any election – like 2016 and 2020 – with unusually high turnout among people who don’t typically vote presents a challenge.

Reluctance to be Candid – Political polls by their very nature examine choices that are highly charged and controversial. Many people who might have no problem responding to a question about their consumer choices or personal views around lifestyle or personal care might not be as candid when answering questions about political preference. When asked about an action like voting, which is seen as a civic duty, a respondent might not want to admit to views they consider unpopular. Put simply, some people may not tell the truth in a survey. Because the topics of our industry research are rarely as controversial as political elections, this is less likely to be an issue in GolINTEL surveys.

Refusals – This is an issue worth paying attention to because you don’t get the whole picture when an entire subset of people is missing from your data. Increasing numbers of people don’t answer their phones if the call comes from an unknown number, and many simply refuse to participate in a survey if they do answer. It is important to note that political polls have relied heavily on phone surveys, which have been seen as the “gold standard” for decades. While not at the same levels, we are seeing lower engagement rates with online surveys as well. A survey with a response target of 1,000 people from a particular population could require several thousand contacts before the quota or final sample size is met. Typically, researchers don’t spend time analyzing refusals as part of their analysis of a survey. However, many in the industry now believe refusals played an important part in the inaccuracy of 2020 polling, since the type of person who agrees to participate in a survey is inherently different from the type who refuses.
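
To make that contact arithmetic concrete, here is a minimal sketch in Python of how contact-level outcomes translate into a response rate, and how many contacts a future quota might require. The categories and counts are illustrative assumptions, not figures from any actual field report.

```python
# Minimal sketch: contact-level accounting for a survey quota.
# All categories and counts are illustrative, not from a real field report.

contacts = {
    "completed": 1000,   # finished the full survey
    "refused": 2400,     # answered but declined to participate
    "partial": 350,      # started the survey but abandoned it
    "no_answer": 1250,   # never reached (voicemail, bounced email, etc.)
}

total_contacts = sum(contacts.values())            # 5000
response_rate = contacts["completed"] / total_contacts

print(f"Contacts attempted: {total_contacts}")
print(f"Completed surveys:  {contacts['completed']}")
print(f"Response rate:      {response_rate:.1%}")  # 20.0%

# At the same rate, a future quota implies this many contacts:
target_completes = 1500
print(f"Contacts needed for {target_completes} completes: "
      f"~{target_completes / response_rate:.0f}")  # ~7500
```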

Distrust in Media – A segment of the population mistrusts polls and the news media that run and report on them. When the people who feel this distrust tend to favor one candidate over another, the risk increases that support for that candidate will be under-represented in polling. One strong indicator that this might have been a factor in 2020 is that while the polling was generally wrong, it was more wrong about Trump’s support than Biden’s. Most polls showed support for Biden at about 50%, or maybe one or two points more. However, most showed support for President Trump in the mid-to-low 40s. What was missed, therefore, was a meaningful percentage of Trump’s support, which could be explained if a greater percentage of the people who refused to participate in surveys were Trump supporters than Biden supporters. In general, greater transparency on who refuses to participate in a survey, particularly surveys among specific population groups, is informative.
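
A small worked example helps show the mechanism. The numbers below are invented for illustration – they are not actual 2020 data – but they show how a modest difference in refusal rates between two candidates’ supporters can noticeably skew the observed topline.

```python
# Hypothetical illustration of differential refusal (invented numbers).
# True support: candidate A 50%, candidate B 48%, other 2%.
# B's supporters are assumed to agree to participate at a lower rate.

true_support = {"A": 0.50, "B": 0.48, "other": 0.02}
participation = {"A": 0.22, "B": 0.18, "other": 0.20}  # share who respond

# Each group's share of the respondent pool:
responding = {g: true_support[g] * participation[g] for g in true_support}
total = sum(responding.values())
observed = {g: share / total for g, share in responding.items()}

for g in ("A", "B"):
    print(f"Candidate {g}: true support {true_support[g]:.0%}, "
          f"observed in poll {observed[g]:.1%}")

# A 2-point gap in participation turns a 4-point real lead into a roughly
# 12-point observed lead (~55% vs ~43%), because B's supporters are
# under-represented among respondents.
```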

Discarding Partial Responses – Usually, only fully completed surveys are analyzed within a study. And while refusals create a problem in meeting quotas, a different problem is created by people who start a poll and then drop out before completing it. The headache for pollsters is that they have to start over with a new respondent and discard the data from the partially completed poll. But that approach could be a mistake. There could be meaningful data in the partial responses, particularly in terms of which question was the last one a respondent completed and what “type” of person was more likely to drop off, and when. In some cases, people drop off because the poll is taking too long, but in other cases it could be because a particular question was uncomfortable or hard to follow. Valuable insights could be derived from studying the pattern of drop-offs.
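
As a minimal sketch of that kind of drop-off analysis – using invented respondent records, not real survey data – tallying the last question each partial respondent answered can flag where an instrument is losing people:

```python
from collections import Counter

# Minimal sketch: tally where partial respondents abandoned a survey.
# Respondent records below are invented for illustration.

partials = [
    {"respondent": 101, "last_question": "Q4"},
    {"respondent": 102, "last_question": "Q4"},
    {"respondent": 103, "last_question": "Q9"},
    {"respondent": 104, "last_question": "Q4"},
    {"respondent": 105, "last_question": "Q12"},
]

drop_offs = Counter(r["last_question"] for r in partials)

# A question with an outsized share of drop-offs may be uncomfortable,
# confusing, or simply placed too deep in a long instrument.
for question, count in drop_offs.most_common():
    print(f"{question}: {count} of {len(partials)} drop-offs "
          f"({count / len(partials):.0%})")
```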

There are other factors specific to polls run to predict the result of an election – and particularly the 2020 election – that might hold answers but don’t apply as broadly to non-political market research. Presidential elections are highly divisive, and people have very strong pre-existing feelings about the candidates.

It is interesting that while polls for the 2016 and 2020 elections were not predictive, the mid-term election of 2018 was one of the most accurately polled election cycles in recent decades. One factor worth considering is that support for one particular candidate – in this case President Trump – was responsible for many of the inaccuracies. The Senate and House races in 2020 where polling failed to accurately predict the eventual result tended to occur in states where polls also significantly misjudged President Trump’s support (Texas, North Carolina, Iowa, etc.). Therefore, it could be that as a non-traditional political figure, President Trump’s support is particularly difficult to capture using traditional survey methods.

Implications and Actions

It is important to note that political polling isn’t consumer market research. The surveys we conduct through GolINTEL are inherently different from political polls, and many of the factors detailed above that made predicting the outcome of the 2020 election difficult do not necessarily apply directly to the work we do. Nevertheless, there are always things we can do to make our work better, more effective, and more impactful for our clients. Going forward, GolINTEL will focus on three ways to analyze survey results to ensure that clients fully understand the logistics of data collection and how it contributes to the insights provided.

  1. Full Panel Size – We will include information in our data and methodology about the full size of the panel from which the sample was created. If it took contacting 4,000 respondents to get 1,000 completed surveys, we will note that in the methodology. Where there were specific quotas for sub-groups within the target sample size, we will note the total number of contacts required to fill those quotas. (A sketch of how these metrics can be reported follows this list.)
  2. Drop-Off Rate – We will note how many respondents began the surveys but did not complete them and, where relevant, which particular question saw significant drop-off. Again, the conclusions that could be drawn from when and where people decided to abandon a survey may not be decisive, but they could be informative in understanding people’s appetite for answering questions about a particular issue. They may also be useful in the future design of questions.
  3. Ideological Identification – In issue-driven campaigns, surveys frequently ask respondents to select the party they identify with most (i.e., Republican, Democrat, Independent, or no preference). While the answers are useful for showing the political strength of an issue among a certain partisan group, party labels are becoming less and less useful as an indicator of where people actually stand. Knowing this, we can develop an indirect way to assess ideology through a more affinity-based set of questions that help identify the strength of an issue among different groups in society. For example, rather than relying only on self-reported ideology, we will ask which issues are most important to respondents and/or which cable news outlets they watch, since these have recently become useful indirect indicators of political ideology.
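
Pulling the first two commitments together, here is a minimal sketch of the kind of data-collection metrics a methodology section could report. The function name, inputs, and numbers are illustrative assumptions, not an actual GolINTEL reporting format.

```python
# Minimal sketch of the data-collection metrics described in items 1 and 2.
# Function name, inputs, and numbers are illustrative assumptions,
# not an actual GolINTEL reporting format.

def methodology_summary(contacts_attempted: int, completes: int, partials: int) -> dict:
    """Summarize survey logistics for a methodology section."""
    started = completes + partials
    return {
        "contacts_attempted": contacts_attempted,
        "completed_surveys": completes,
        "response_rate": completes / contacts_attempted,
        "drop_off_rate": partials / started if started else 0.0,
    }

summary = methodology_summary(contacts_attempted=4000, completes=1000, partials=300)
for metric, value in summary.items():
    formatted = f"{value:.1%}" if "rate" in metric else f"{value}"
    print(f"{metric}: {formatted}")
```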

Surveying is used to try to understand what people believe, what motivates or challenges those beliefs, how people will react or respond to different stimuli – ideas, messaging, actions, etc. – and how those stimuli can potentially shape new beliefs and/or behaviors. But in an environment where respondents give their consent to be asked questions, results are limited to only those who give that consent. As we are now coming to understand in the post-2020 election period, the specifics of who DOES NOT give consent can be informative, and sometimes very significant. Because of that value, GolINTEL can now provide clients with vital information about how their research was gathered, and where issues arose, to better understand the issue being researched – and ultimately deliver stronger results.