As the 2020 election approaches, voters will see a variety of polls. Many of them will be misleading.
Over time, political science has developed models of public opinion that reveal which types of questions are informative and which are not. But many of the questions that polling organizations ask simply are not informative.
I am a political scientist who studies polarization and the gaps between the public and their representatives on political matters.
Election polls often fail to heed the lessons that have been hard-won by decades of survey research. Pollsters build their surveys around the idea that voters begin with firm beliefs, evaluate candidates on the basis of those beliefs and will explain their reasoning when prompted.
In reality, voters often just respond to party signals, and can rarely explain their reasoning to pollsters.
The birth of survey research
In 1960, political scientists published their first major study based on the American National Election Studies survey in a book called “The American Voter.”
It was the first study to use such a survey to put a person’s political party at the forefront of the question of why people vote the way they do.
Prior to that, the most significant survey-based study of voting behavior was a book published by a trio of Columbia University sociologists in 1954. They argued that people’s political choices reflected their social backgrounds and connections.
For example, if people’s family and social contexts are groups that happen to lean toward the Democratic candidate, they will vote for the Democratic candidate — but not out of loyalty to the Democratic Party.
While the American electorate has changed in many ways over the last half-century, political scientists have replicated the core findings of “The American Voter,” including in two book-length updates. In studies of political behavior, party identification is nearly always the 800-pound gorilla in the room.
Social backgrounds and associated traits may influence beliefs and vote choice, but they do so mainly by informing a person’s party identification, which remains central. In the 2016 American National Election Studies survey, about 79% of respondents who reported having voted were either self-identified Democrats who voted for Hillary Clinton or self-identified Republicans who voted for Donald Trump.
A related study, published in 1964, showed that asking people their opinions on a topic often elicits responses that don’t reflect real beliefs. People don’t want to admit that they haven’t thought much about a question. Instead, they respond with “nonattitudes”: stated opinions that don’t correspond to any real belief.
In a famous, if ludicrous, demonstration of the principle, Public Policy Polling asked respondents in 2015 whether they supported bombing Agrabah, the fictional kingdom from Disney’s “Aladdin.” Agrabah does not exist, yet about half of Democrats and Republicans expressed opinions, mostly consistent with Democratic opposition to military action and Republican support for it.
Lessons for 2020
I believe it is important for Americans to think critically about survey questions. Here are three lessons that an informed news consumer can use to understand surveys and weigh their importance.
First, beware of poll questions that prompt responses that aren’t real attitudes.
People hold real beliefs about issues that are easy to understand, such as abortion. The more technical an issue, however, the less likely people are to hold firm beliefs about it.
For example, while budget deficits sound intuitively bad to most people, they are a technical issue. Most people cannot explain how the balance between taxes and spending affects interest rates, or how interest rates in turn affect economic growth. According to the Economic Literacy Survey, many, if not most, Americans lack an understanding of economic concepts even more basic than that.
So when pollsters ask about technical issues like the deficit or international relations, readers should be wary of the responses.
Second, beware of questions in which people are just parroting a party line.
Consider changes in relations with Russia. Gallup has run a series of polls about how people perceive Russia’s relations with the U.S. Prior to Trump, Democrats and Republicans had similar beliefs, and those beliefs changed over time in similar ways. However, after Trump began sending positive signals to Republicans about Russia, Republican expressions of support for Russia increased. Democratic support did not move.
This change among Republican respondents did not reflect assessments of diplomatic developments between the two countries. No new treaties or economic agreements were signed. A Republican president simply spoke more positively about Russia, and that was reflected in Republicans’ survey responses.
On any question that requires a great deal of information to understand, party identification becomes central to how people respond. When pollsters ask people about deficits, international relations or other complex topics, people may respond with what leaders from their party tell them. The public mostly responds to elite signals.
Third, the public should be wary of questions that ask people how various policy positions will affect their votes, a lesson that follows from the first two.
After all, if people’s stated beliefs aren’t real attitudes, and they are simply responding to party signaling much of the time, then when pollsters ask them if a candidate’s position on an issue will influence their vote choice, their answers won’t be informative.
Nonacademic surveys often ask voters if they would be more or less likely to vote for candidates given certain policy positions. But voters may not know how they would respond to policy positions, and they may not admit the truth. Voters rarely admit that party is why they vote the way they do, after all.
Research shows that party identification has more predictive power than anything else. That was the central lesson of “The American Voter” and of most political science research that followed from it. I think the public should interpret polls with that in mind.
Justin Buchler, Associate Professor of Political Science, Case Western Reserve University
This article is republished from The Conversation under a Creative Commons license. Read the original article.