The Scientific Approach to Question Wording Differences

Senior Scientist

A recent column by USA Today writer Walter Shapiro called attention -- again -- to the fact that differences in the wording of questions on a given topic can affect how people respond.

In his Aug. 19 column, Shapiro highlighted two different polls dealing with civil liberties and government efforts to fight terrorism. He noted that Attorney General John Ashcroft was touting a late July Fox News/Opinion Dynamics poll that found general support for the Patriot Act, but pointed out that a CBS News poll in May found relatively high levels of concern about "losing your civil liberties as a result of recent measures enacted by the Bush administration to fight terrorism."

In other words, Shapiro maintained that it was disingenuous of Ashcroft to cite just the one poll, and cautioned: "That is the danger inherent in using polls selectively to quash policy disputes. Small variations in the wording of a question can produce diametrically different answers."

Shapiro is certainly correct in the most general sense. There's little question that the subject matter of polling -- attitudes, opinions, and projections of future behavior -- is inherently mushy. That mushiness means that responses to questions about a given topic can vary depending on a variety of factors in the interviewing situation.

While it might seem that asking a question in a survey is a simple matter, it most assuredly is not. In fact, it turns out that the process of eliciting responses from those being surveyed is the most complex and daunting challenge pollsters face. But it is also the most fascinating challenge, and in many ways, the most rewarding.

The idea that a survey question simply taps into a hard-coded attitude on file in a human's cognitive filing cabinet is far from accurate. What a survey question obtains about a given topic is better viewed as a more generalized range of reaction to a concept. That range can vary based on a number of factors: a) the exact words used in the question; b) the context of the question -- what was asked before it; c) the respondent's perception of the person asking the questions, because most respondents react to the interviewer as well as the question; and d) the setting or environment in which the question is asked (answering in one's living room differs from answering on the phone, which in turn differs from filling out a paper-and-pencil survey).

In other words, survey responses to attitude questions can vary depending on the circumstances under which they are measured, including -- not surprisingly -- the exact wording of the question.

Critics -- to some degree touching on the same points Shapiro makes in his USA Today article -- argue that this type of variability of response can invalidate the entire survey process. If the types of answers can vary based on such things as question wording, question order, or response order, then can we or should we pay attention to any survey findings? As critic and Fox News host Eric Burns noted in his 1999 article, "Is Democracy Just a Numbers Game?", published in Reader's Digest: "Poll questions can be phrased to make the answers pointless, irrelevant or deceptive."

Indeed, researcher concern over these issues -- the impact of the way in which questions are asked -- has existed for many years. George Gallup, the founder of The Gallup Organization, routinely included split-sample experiments in his Gallup Polls to measure how different question wordings could affect responses. As Norbert Schwarz, a longtime investigator of these issues, declared: "Psychologists and social scientists have long been aware that collecting data by asking questions is an exercise that may yield many surprises." Summarizing the large body of literature that has developed on this topic, Schwarz concluded: "Self-reports of behaviors and attitudes are strongly influenced by features of the research instrument, including question wording, format, and context." ("Self-Reports: How the Questions Shape the Answers," American Psychologist, February 1999)
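A split-sample experiment of the kind George Gallup pioneered is, at bottom, a simple randomized comparison: half of the respondents hear one wording, half hear another, and the gap in response rates is tested against what sampling error alone could produce. As a rough illustration only -- the Python sketch below uses entirely hypothetical counts and wording labels, not data from any actual poll -- the comparison can be reduced to a two-proportion z-test:

```python
# A minimal sketch of analyzing a split-sample question-wording
# experiment. All numbers are hypothetical, invented for illustration.
from math import sqrt, erf

# Hypothetical results: each half of a randomly split sample heard a
# different wording of a question on the same underlying topic.
wording_a = {"n": 500, "favor": 310}   # e.g., a "good for America" framing
wording_b = {"n": 500, "favor": 240}   # e.g., a "civil liberties" framing

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                  # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, z, p_value

p1, p2, z, p = two_proportion_ztest(
    wording_a["favor"], wording_a["n"],
    wording_b["favor"], wording_b["n"],
)
print(f"Wording A: {p1:.1%} favor; Wording B: {p2:.1%} favor")
print(f"z = {z:.2f}, p = {p:.4f}")
# A small p-value suggests the wording difference, not sampling noise,
# is the likely source of the gap -- exactly the kind of variation this
# article argues should be studied rather than dismissed.
```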

Some critics view the variability of responses to questions on a given topic as reflecting the fact that there is "no there there" -- or in other words, that there is nothing substantial being measured in the first place. Others argue that responses may be manipulated by unscrupulous pollsters who are using the polls to prove a point or arrive at a pre-specified conclusion.

The first point is nonsense. What we measure in polls is real and important. People have quite stable orientations toward issues that manifest themselves in actual behaviors. Attitudes can be defined as the tendencies to behave in certain ways toward specific stimuli, behaviors that vary within certain ranges based on the circumstances in which they occur. These attitude tendencies are definable and valuable. A person who has an anti-abortion attitude, for example, may express it in different ways depending on the questions asked. However, the general tendency for that person to respond negatively to questions asking about support for abortion rights is highly likely to be stable across time and across different questions. The variations that do occur in responses to abortion-related questions are valuable tools social scientists can use to refine their understanding of how the public relates to the issue.

The point I like to stress is this: variations based on question wording differences do not undermine the value of survey results as scientific data; they enhance it. Most scientific advances are built on the study of just this type of variation. The fact that humans give different verbal or written responses under different conditions is itself important evidence about how they approach the subject matter under scrutiny, and it serves as the basis for a more thorough and compelling understanding of their views on important matters. The inherent variability of expressed human attitudes presents no more difficult a problem than the "squishy" subject matter scientists routinely confront in other branches of science. The science of polling rests on the assumption that this variability is either controlled (through standardized methodological procedures) or rigorously studied and incorporated into the analysis of survey results. When brought into the analysis, the variability itself constitutes important scientific information.

Our quest as pollsters becomes one of figuring out why responses vary when question wording changes. What does the variation reveal about the attitudes and potential behavioral patterns of the respondents? No variation is meaningless or occurs in a vacuum. Humans' different responses -- instead of indicating a flaw in the process -- give excellent insights into the public's thinking on these issues. The role of science is not to curse the variability, but rather to use it as a keystone to understanding.

Indeed, pollsters need to focus more on careful, in-depth analysis and interpretation of variability in research findings when the results are reported. The lack of this type of analysis and interpretation, in fact, is one of the critical weaknesses polling faces today.

This approach, as noted above, de-emphasizes an orientation that argues that there is such a thing as a fixed "attitude" inside respondents' heads, and that the task of the pollster or survey researcher is to find one and only one way to measure this specific attitude. I'm emphasizing instead that humans have tendencies to respond in certain ways to specific stimuli. The exact pattern of responses will vary depending on environmental elements at the time the question is asked. The task of the researcher is to describe and then understand the implications of these differential response patterns, and to use them to derive the most complete picture of the human's relationship to the stimuli.

So, back to the two examples Shapiro used in his column. The Fox News/Opinion Dynamics poll asked whether the Patriot Act was a good thing for America; the positive responses show that the concept of such legislation sounds appealing to the public. The CBS News poll (along with other polling on the same topic) shows, however, that Americans pull back from domestic anti-terrorism programs when those programs potentially involve giving up civil liberties. So we learn that the American public's opinion on the boundaries of the war on terrorism is fairly nuanced: there is support for domestic efforts to fight terrorism, but concern about such efforts going too far. Additional well-designed polling can help determine more precisely what trade-offs the average American is willing to make. If elected representatives and government officials study these results carefully and systematically, they can obtain a much better sense of what the public is and is not willing to support in the fight against terrorism.


Gallup https://news.gallup.com/poll/9193/scientific-approach-question-wording-differences.aspx