Don’t Shoot the Messenger: The Methods and Power of Pollsters - WhoWhatWhy

Photo credit: adapted by WhoWhatWhy from OpenClipart-Vectors / Pixabay

In this WhoWhatWhy podcast we talk with Scott Keeter, senior survey advisor at Pew Research Center, where he helps guide the center’s survey research and polling. An expert on American public opinion and political behavior, he is a co-author of four books on the subject.

Keeter discusses the complexities of modern polling amid a climate of skepticism and change. As the 2024 elections approach, Keeter confronts the polling industry’s pressing issues: dwindling response rates and the public’s eroding trust in institutions. 

He sheds light on the delicate balance between methodology and the organic evolution of societal trends. Keeter offers a rare glimpse into the intricate mechanics behind capturing the public’s voice, the resilience of the polling profession, and its critical role in the tapestry of American democracy. Keeter gives a nuanced examination of the intersection where data, behavior, and politics meet, providing a compelling narrative for anyone invested in the future of our political discourse.



Full Text Transcript:

(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)

Jeff Schechtman: Welcome to the WhoWhatWhy podcast. I’m your host Jeff Schechtman. As long as we’ve wondered what others think and how we might predict the outcomes of the strange workings of democracy, we have been fascinated by polling and trying to measure the pulse of public opinion. It’s a fascination that rivals our anticipation of the final tally on election night. Polls are a crystal ball, glimpses into the collective psyche, and we hang on their every percentage point. When polls mirror our worldview, they’re lauded as harbingers of truth, but let them dare to challenge our preconceptions, and suddenly they’re diminished as flawed oracles of the public will.

Consider the world of baseball where batting .300 makes you a legend, or Wall Street where it’s the rule that yesterday’s wins offer no guarantee of tomorrow’s fortunes. The pundits on our screens are rarely held to account for their errant forecasts, their past predictions simply fading into the ether. Yet the polling industry is often castigated for every perceived misstep, bearing the brunt of blame for swaying the very tides it seeks to measure. It brings to mind Heisenberg’s uncertainty principle, that the act of measurement inevitably alters the state of what’s being measured.

As we edge closer to the 2024 horizon, the alchemy of polling is once again at the forefront of our national discourse. Just last week we witnessed how mere numbers could jolt the politically educated and the erudite into a spiral of doubt, challenging the very bedrock of their convictions. That’s the canvas of our conversation today with my guest, Scott Keeter, a long-time expert in the realm of survey science.

Scott serves as a senior research advisor at Pew Research Center, where his expertise shapes the methodology behind their influential studies. His scholarly contributions include co-authoring seminal works such as What Americans Know About Politics and Why It Matters. His career spans teaching at prestigious institutions and directing survey research centers. He brings a wealth of knowledge on the intricate dance of polling and public sentiment. It is my pleasure to welcome Scott Keeter here to the WhoWhatWhy podcast. Scott, thanks so very much for joining us.

Scott Keeter: Thank you very much, Jeff. It’s great to be with you.

Jeff: Well, it is great to have you. There is this debate that always seems to go on with respect to polling, about the degree to which it’s either an art or a science. But one thing that’s pretty clear in all the research and all the work that gets done is that it is always evolving, that it’s always changing, that the methodology that might have been used in 2016 or 2020 is different from the methodology today, that this is very much a living process.

Scott: Well, you really hit the nail on the head. And I think one of the great things about the profession is that as the society has changed, as people’s lifestyles have evolved, the polling community has had to evolve with them. We used to knock on people’s front doors to get our samples and our interviews. And that worked well for many decades. But then it began to not work because people were much less welcoming of people coming up and knocking unannounced.

But about the same time, telephones were pretty much universal throughout the country. And so we adapted a methodology to the telephone. And now people don’t want to answer their telephones. And so we’ve had to evolve into some other approaches. So if you look over even just the past 20 years of polling, you see a huge change in the modes by which people are contacted and interviewed, and the way in which our samples are drawn. But that’s a sign I think of resilience and health in the profession, that we’re trying to find ways to continue to accurately portray the voice of the people.

Jeff: Does it take, though, mistakes, errors, elections that don’t go well, 2016 perhaps being the ultimate example? Does it take that in order for the industry to move on to make these changes, or are some of them made organically as the process moves forward?

Scott: It’s a bit of both. The kinds of changes that we have seen over the past 20 years, particularly the rise of online surveys, web surveys, where most interviewing for public opinion polls and for market research is now actually taking place, began even before we were seeing problems with the polls in the last two presidential elections. And it was mainly a function of cost: it was much cheaper to have people take the surveys themselves, that is, self-administration, as opposed to having an interviewer whom one has to pay to call people up and conduct an interview.

And so the field was already making the kinds of changes that would be necessary. But some of those changes actually probably hurt us in terms of accuracy. And it did take the failed polling in the 2016 election and in the 2020 election to, I think, convince some people in the industry that they needed to address those problems and find ways to make the methodologies that they were currently using more robust and able to deal with the realities of what public opinion has become.

Jeff: How monolithic have these problems been across the board within the industry? Are there a certain set of problems that everybody’s dealing with, or have individual groups found different sets of problems that other companies and other groups then have had to sit up and take notice about?

Scott: The phenomena that are challenging us affect everybody who’s trying to conduct polls now. And the main one is what’s called non-response. It’s just the decline in the share of people who are ready and willing to be interviewed. This has a multitude of causes. A lot of it is just changes in lifestyle. But some of it is related to things that don’t have anything to do with politics or polling, such as people’s suspicions about identity theft.

And so when someone that they don’t know contacts them and wants to ask them questions, much more so today than 20 or 40 years ago, people are suspicious. Why are you asking me that? What do you want from me? And we hear enough about data breaches to legitimately wonder, is this something I want to do? Do I want to share my candid views or my demographics with this person I don’t know who’s just reached out to me over the telephone or in an email or in some other fashion?

And so the evolving relationship of the public to institutions, the decline in trust in institutions, those are all felt downstream in the polling world’s response rates — the percentage of the people we try to contact and interview who cooperate with us and give us an interview. And so the long answer to your short question is that this is widespread throughout the whole polling sector, and it’s largely driven by the problems of non-response and non-cooperation, which themselves have a lot of different causes, many of which we don’t fully understand ourselves.

Jeff: And it seems that that carries within it something that the polling industry has to adjust for: an internal bias toward people who are trusting enough or willing enough to participate, whether the survey is self-administered or conducted by an interviewer.

Scott: It’s not a good problem to have, but it is certainly something that we have to be aware of, and I’ll give you a couple of ways in which we know we have a problem and a couple of ways in which maybe we don’t have as much of a problem as we thought. So first of all, one thing that happens when cooperation levels go down is that we see our samples becoming more and more made up of people who are well educated, who are older, and who are more politically and civically engaged. People who are willing to volunteer their time to help others, who donate money, who give blood, and who vote.

And so our samples don’t accurately reflect the public on some of those kinds of characteristics, because those characteristics are associated with people’s willingness to take part in the survey. This is something that we know. We can measure this. We can compare it with government surveys that have very high response rates and are accurate. We can look at aggregate voter turnout statistics and know that we have too many voters [with certain characteristics] in our samples, and we can fix those things using statistical weighting, because we know what those numbers really should be.

But the one that you put your finger on, which is trust, is a more difficult thing to get at because there aren’t any really good national gold-standard numbers of what share of people trust institutions or other people.

It makes sense that people who are less trusting are less likely to participate in the polls; and indeed, that’s one of the main theories for why polling underestimated Donald Trump’s support, because he was making an explicit appeal to people who don’t trust institutions.

But I think it’s also good to step back and ask ourselves, how big of a problem is this? Did we underestimate Trump by 20 points, or by 3 or 4 points? And the answer is, it was the latter.

So we may have a problem with reaching people who are less trusting, but it can’t be a gigantic problem. Our errors are serious in a society in which the public is divided pretty evenly between the parties, because even small errors have big consequences for the accuracy of your forecast. But we’re not missing the boat on the trends in people’s trust in institutions. It isn’t that we have our samples filled with people who are guileless and love institutions. That’s certainly not the case, as we can tell from just the questions that we ask people.
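[For readers who want to see what the statistical weighting Keeter describes looks like in practice, here is a minimal sketch of post-stratification weighting in Python. The education benchmark, the sample composition, and the support numbers are invented for illustration; real polling organizations, including Pew, typically weight on many variables at once using iterative techniques such as raking. The sketch only shows the core idea: respondents from under-represented groups count for more, and respondents from over-represented groups count for less.]

```python
# Toy post-stratification example: the made-up sample over-represents college
# graduates relative to an assumed population benchmark, so each respondent
# gets a weight of (population share / sample share) for their education group.
import pandas as pd

# Hypothetical sample of 1,000 respondents: 60% college graduates.
sample = pd.DataFrame({
    "education": ["college"] * 600 + ["no_college"] * 400,
    "supports_candidate": [1] * 310 + [0] * 290 + [1] * 230 + [0] * 170,
})

# Assumed population benchmark (illustrative, not a real Census figure).
population_share = {"college": 0.35, "no_college": 0.65}

# Weight each respondent by how under- or over-represented their group is.
sample_share = sample["education"].value_counts(normalize=True)
sample["weight"] = sample["education"].map(
    lambda group: population_share[group] / sample_share[group]
)

unweighted = sample["supports_candidate"].mean()
weighted = (
    (sample["supports_candidate"] * sample["weight"]).sum()
    / sample["weight"].sum()
)
print(f"Unweighted estimate: {unweighted:.1%}")  # skewed toward college grads
print(f"Weighted estimate:   {weighted:.1%}")    # adjusted to the benchmark
```

[In this invented example, weighting moves the estimate by roughly a point and a half. Corrections of that size are routine; the harder problem Keeter describes is when the people a poll cannot reach differ in ways, such as trust, that no published benchmark captures.]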

Jeff: And to what extent is it different when you do polling around specific issues versus polling about individual candidates, for example?

Scott: Polling has long enjoyed the fact that it has been pretty accurate in elections. When we were still doing so-called horse race polling, that is, trying to forecast what was going to happen in the election, we had almost perfect results in 2004. We had almost perfect results in 2008, and again in 2012. And we were very quick to say, “Hey, we got the election right. Therefore, you can trust us when we tell you what share of people support the legalization of marijuana or oppose the repeal of Roe v. Wade.”

Now that we’re having a little trouble with elections, people rightfully would want to know, “Well, can I believe your issue polls as well?” And again, I would point to this question of, “Well, how big are the polling errors that we’re seeing in elections, and does that actually translate into one-to-one errors in our issue polling?”

And we’ve done some analyses of this, and for one thing, the errors in the election polling are not that large. Even three or four points is a big enough error to make your poll wrong in predicting the election, but it’s not going to make a big difference in whether you think the public is very supportive of more immigration or very opposed to it, or is supportive of legalizing marijuana or opposed to it.

But the fact is, there’s also not a one-to-one relationship between support for a candidate and how people feel on issues. I noticed exit polls in [this month’s] Ohio elections showed that maybe 20 percent of the people who were Donald Trump supporters in the last election supported the codification of abortion rights in the Ohio Constitution. So those things aren’t exactly the same. So even the errors that we see in election polling don’t necessarily translate into issue errors as well.

Jeff: Because of all of this, has the polling process become more complex in trying to find those balances you’re talking about and not as simple anymore as making phone calls or knocking on doors? Has it therefore become prohibitively expensive in a way that somehow impacts on the polling process?

Scott: The answer is yes. But it also points to one of the issues that we in the polling profession are dealing with. Polling has become much more difficult and more expensive, again largely because non-response is such a difficult problem.

It just takes a lot more effort to get a good sample. But polling has also, paradoxically, become much cheaper, and the barriers to entry to the profession have fallen very substantially, because people can conduct polls online using what are called opt-in non-probability samples that are incredibly cheap.

But they also have very large errors associated with them. And they’re really not particularly good for the kind of work that we do here, which is to try to represent the public’s views with a high degree of precision or to forecast the outcome of elections.

They may be perfectly okay for doing certain kinds of market research or for testing the feasibility of ideas, but what’s happened is that probably half of the polls that you read about in the run up to an election are coming from those opt-in online non-probability samples that are cheap and plentiful. Those polls are getting stuck in people’s minds as representative of what the polling profession is doing.

Whereas over on the other side, there are major news organizations and places like Pew Research Center that devote tens or even hundreds of thousands of dollars per poll to overcoming the limitations that we’re facing in terms of reaching people and persuading them, and then doing the complicated statistical work to make our samples as faithful a model of the population as we can.

And so, I think both of those things are happening at the same time. Doing good polling is becoming prohibitively expensive for a lot of people and organizations. And doing bad polling is easier than it’s ever been, and it adds to potential public confusion about whether polling is dead or dying or worthless or is in fact still surviving.

One would say, look at some of the good polls that were conducted in the 2023 elections. Those accurately forecast that the Kentucky governor, the Democratic governor, would be reelected; that the Republicans probably would not get unified control in Virginia; that the abortion referendum, the constitutional referendum in Ohio, would pass.

The polling did a very good job of pointing us in that direction, but those were mostly polls done at high expense and with good samples.

Jeff: Is polling easier to do outside of an election per se? Is it easier to get people to respond and to get samples outside of the crucible of an election campaign — if you’re doing, for example, issue polling in off-election time?

Scott: I think there’s something to that, in part because if you live in a battleground state in a presidential election year and you have a telephone, especially a landline, you just get inundated with unsolicited calls, many of which are not polls but are efforts to persuade you, [get you] to give money, or something like that. And so, if you’re trying to do a poll through a method like the telephone, you’re competing with that cacophony of voices and contacts.

But that’s also balanced a little bit by the fact that during elections people are a little more politically engaged and, when they are, they may be more likely to cooperate with your polls. And so, I think there’s no solid evidence that it’s easier to do polling outside of election times, but there’s a way in which, particularly for people who live in places that are having very active campaigns, it’s probably true.

Jeff: You mentioned telephones a minute ago. How much more difficult is it now that the landline is almost extinct and it is basically all cell phones? How much more difficult does that make the phone part of the process?

Scott: We were a telephone shop for a long time, really since the 1990s, and we lived through the transformation to cell phones. By the time we stopped doing telephone surveys, 80 or 90 percent of our sample was cell phones. But we did stop a couple of years ago. We tapered off, and now we basically don’t do them at all.

And not very many organizations in the country do phone surveys anymore or do phone surveys alone. They may do a combination of phone and online, or something like that. Or they may use phone in conjunction with a different sampling method, but it’s just about impossible to get good samples with phones unless you’re willing to put in a tremendous amount of effort. There are organizations still doing it. The New York Times does it with their polling with Siena College, and they have a very good track record. So it’s feasible, but it’s extremely difficult.

Most organizations are now using other methods. We use what’s called a probability-based panel, where we have recruited people using mail surveys to join our panel and then take regular surveys with us. Other organizations are doing online surveys only, either with probability samples or with opt-in samples, where people join through things like frequent-flyer clubs or get recruited in other ways. But there’s just not that much phone work being done anymore.

Jeff: What are the biggest challenges as we look to 2024 in terms of accuracy? What are the things that your organization, Pew, and, as far as you know, other polling organizations are most concerned about or afraid of going into this next cycle?

Scott: There was a hope after 2016 that pollsters had figured out what the problem was. In a nutshell, we know that there’s been a steady kind of realignment underneath the surface of the American party system, with more and more less-educated, working-class people affiliating with the Republican Party. And the argument was made that some of the errors in the polls in 2016 were a function of the polling not accurately reflecting the educational composition of the electorate and, in so doing, underestimating Trump’s support. So those problems were fixed in 2020 by most pollsters. But the polls again had that problem: they underestimated the level of support for Trump and for other Republican candidates.

Post-mortems from the 2020 election found that there appears to be less willingness on the part of Trump supporters and Republicans to take part in polls now than in the past. This is a concern that I have heard as long as I’ve been in the business, 40 years. And it’s not really been true in the past. It’s something of a myth, but it does appear to be true today. And we’re talking here again about not a huge difference, maybe a couple or three percentage points, but that can make a difference in the accuracy of your outcome.

And so the question is, if Republicans, whether it’s because of trust or lifestyle circumstances or whatever, are less willing or less likely to be contacted to take part in polls, then it raises the danger of a systematic bias.

I’m not willing to throw in the towel on this yet. It’s definitely something that we will monitor and everybody who’s doing public polling and private polling for sure in 2024 will be monitoring. But the polls in 2022 were actually pretty good.

There were still some errors. But by and large, if you had to point to an overall error in the 2022 polling, it was probably the anticipation of a red wave that never developed. And the same was true for the 2023 off-year elections, even though we’re talking about a very small number of states and places. So we’re concerned about this and about the possibility of underrepresenting Republicans and Trump supporters, but I don’t think anybody is in a panic about it.

Jeff: And finally, talk about the impact of the polls themselves, and how the industry looks at that, because a poll really can have an impact on the very electorate that you’re polling.

Scott: This has been a great subject of debate in the political science world for decades. And I think there’s definitely something to the fact that polling can have an impact on people in terms of their sense of who’s viable in a campaign and what’s likely to happen. But there’s not any good evidence for what we would call bandwagon effects, where people see who’s leading and then decide, well, I’m going to vote for that person.

Where I think you do see some concrete evidence is during the primaries: If a candidate begins to lose support in the polls, they will probably find that their fundraising dries up. And that’s a way in which a bad outcome in the polls can really have a concrete impact on the viability of a candidate.

The impact of polls on the public, I think, is just much more vague. And it’s harder to say that it’s something that interferes with the democratic process. What I like to say is this, that polling is one manifestation of what the public thinks. Their votes are another manifestation; protests are another; letters that they write to members of Congress are another.

And polls are one way in which the public can find out what it thinks. And it’s a good one in that it is, at least aspirationally, a way to offset the biases that are associated with a lot of the other ways that public opinion gets heard, which favor people who are more vocal or more able to voice their opinions in letters to the editor, communications with elected officials, and so forth. We attempt to give everybody the opportunity to participate.

But anytime you or I or anybody wants to say, well, what does the public really think, they should look at polls as one source of evidence. A good one, we think, but only one among many.

I think if you consider polling from that perspective, then its role in elections is just like any other kind of information that you might happen to be exposed to in the course of a campaign. It’s just something you can factor in, or not, in your own thinking.

Jeff: Scott Keeter, I thank you so very much for spending time with us here on WhoWhatWhy podcast.

Scott: It was great talking with you, Jeff. Thanks for having me.

Jeff: Thank you. And thank you for listening and joining us here on WhoWhatWhy podcast. I hope you join us next week for another Radio WhoWhatWhy podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.


Author

  • Jeff Schechtman

    Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and, more recently, the world of podcasts. To date, he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March 2015, he has produced almost 500 podcasts for WhoWhatWhy.

