From the Field


Best Foot Forward: Social Desirability in Telephone vs. Online Surveys
By Humphrey Taylor, David Krane, and Randall K. Thomas

Many years ago, one of the authors began his career in marketing and opinion research as an in-person interviewer in Scotland. After he completed his very first interview, the respondent, an elderly Scottish woman, looked at him with a twinkle in her eye and asked, "Och, you don't believe all the things the folks tell you, do you?" All too often, of course, we do. This credulity, this tendency to believe the answers respondents give us, may be one of the most serious weaknesses in survey research. And not worrying about it makes our lives easier.

When asked by skeptics whether respondents might lie to us, we point to the generally accurate record of opinion polls as "proof" that most people don't. And yet we have always known that when there is a socially desirable response (such as not beating your wife), the corresponding undesirable behavior or belief is sometimes underreported.

Over the past few years the authors have been involved in hundreds of surveys, both online and by telephone, and we began to notice sizable differences between the two methods in the replies to certain questions, differences that did not appear to be sampling error. One dramatic example occurred with a question about sexual orientation. When we asked this near the end of fifteen- or twenty-minute telephone surveys with randomly selected samples (not a panel), we consistently found that 2 percent of all adults identified themselves, in their verbal replies to human interviewers, as "lesbian, gay, bisexual, or transsexual." When we asked the identical question near the end of fifteen-minute online surveys of samples of adults drawn from our multimillion-member Harris Poll Online Panel (whose members had opted in to do surveys), we consistently found 6 percent (or occasionally 5 or 7 percent) who self-identified as lesbian, gay, bisexual, or transsexual (LGBT). This difference was found in parallel surveys where, even after weighting to compensate for sample differences, the responses to most other questions were almost identical.
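To make concrete what "after weighting" means in these comparisons, here is a minimal sketch (illustrative only, not the authors' code) of how a weighted percentage is computed: each respondent carries a post-stratification weight, and the estimate is the weighted share of "yes" answers.

    def weighted_pct(answers, weights):
        """answers: 1/0 per respondent; weights: post-stratification weights."""
        total = sum(weights)
        return 100.0 * sum(a * w for a, w in zip(answers, weights)) / total

    # e.g. weighted_pct([1, 0, 0, 1], [0.8, 1.2, 1.0, 1.0]) returns 45.0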

One question we considered when we compared these responses (6 percent versus 2 percent LGBT) was whether the samples were different; that is, even after weighting, could there be a sampling difference rather than a method effect? Somehow it seemed implausible that people who were lesbian, gay, bisexual, or transsexual were three times more likely to volunteer to join our panel than those who were not. We concluded that this was probably a classic case of social desirability bias. But if so, we would expect to find similar differences in the replies to other items on the survey. This prompted several obvious questions:

  • Was the difference a result of many people's reluctance to "admit" to their sexual orientation to a live human interviewer, and their much greater willingness to do so in online surveys with a survey organization they trusted enough to voluntarily join its panel of willing respondents?
  • Were these differences produced by a method effect (as opposed to being sampling differences)? A rough significance check on the sampling-error possibility appears in the sketch after this list.
  • Did these differences affect a much broader set of questions than just sexual orientation, and if so, what types of questions?
  • Do our telephone surveys have a larger social desirability bias than our online surveys? (We say "our" because the differences may not apply to all telephone or all online surveys.)
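On the sampling-error question, a simple two-proportion z-test gives a sense of scale. The sketch below is ours, not part of the original study, and borrows illustrative sample sizes from the omnibus surveys described next (1,017 telephone respondents; 526 online respondents asked the behavior battery):

    from math import sqrt

    def two_proportion_z(p1, n1, p2, n2):
        """Z statistic for the difference between two independent proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion under the null
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
        return (p1 - p2) / se

    # 6 percent of 526 online vs. 2 percent of 1,017 telephone respondents
    print(round(two_proportion_z(0.06, 526, 0.02, 1017), 2))  # about 4.1

A z value of roughly 4 is far beyond the conventional 1.96 cutoff, so a gap of this size is very unlikely to be sampling error alone; whether it reflects a method effect or a panel selection difference is what the study described below was designed to untangle.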

To address these questions, Harris Interactive added a set of identical questions to two nationwide omnibus surveys conducted in the United States in October 2003. One was a telephone survey of 1,017 adults conducted between October 14 and 19, 2003. The other was an online survey of 2,056 adults conducted between October 21 and 27. Of the 2,056 respondents in the online sample, 526 were randomly assigned to a battery of yes-or-no questions on whether they engaged in certain behaviors or held certain beliefs; all respondents in the telephone survey were asked the same battery. Another 486 in the online sample were instead asked to rate the behaviors and beliefs according to the degree to which they considered them good or bad, in order to establish the level of social desirability for each item. Both surveys were weighted by the same demographic variables (sex, age, education, race/ethnicity, number of adults in household). The online sample was also weighted using "propensity score adjustment" to compensate for other biases in our online surveys.
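The article does not spell out the propensity score adjustment, and Harris Interactive's exact procedure is its own; the sketch below shows only the general idea under a common formulation, in which a model estimates each case's probability of falling in the online sample rather than a telephone reference sample, and online cases are weighted by the inverse odds. All column names here are hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def propensity_weights(df, covariates):
        """Inverse-propensity weights for online cases in a stacked data set.
        df['online'] is 1 for online respondents, 0 for the phone reference."""
        model = LogisticRegression(max_iter=1000)
        model.fit(df[covariates], df["online"])            # estimate P(online | covariates)
        p = model.predict_proba(df[covariates])[:, 1]
        w = np.where(df["online"] == 1, (1 - p) / p, 1.0)  # shift online cases toward the phone profile
        w = pd.Series(w, index=df.index)
        online = df["online"] == 1
        w[online] *= online.sum() / w[online].sum()        # rescale so online weights sum to n_online
        return w

    # Hypothetical usage:
    # df["weight"] = propensity_weights(df, ["age", "education", "web_hours"])

In practice such a model would lean on attitudinal and behavioral covariates measured in both surveys; weighting on demographics alone would simply duplicate the demographic weighting already described.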

 
