How to protect the integrity of survey research

Science requires data, and survey research is one important means of gathering it. Surveys provide a scientific way of acquiring information that is used to inform policy decisions, guide political campaigns, clarify the needs of stakeholders, enhance customer service, help society understand itself, and improve the quality of life in the U.S.

In recent years, concerns have been raised about growing refusal rates among prospective survey respondents, inaccurate forecasts in high-profile elections, polls with contradictory findings, declining trust in the government and media institutions that fund such research, and skepticism fueled by political polarization. “Although polling is not irredeemably broken,” the authors of a new article write, “changes in technology and society create challenges that, if not addressed well, can threaten the quality of election polls and other important surveys on topics such as the economy.”

In this article, published in the journal PNAS Nexus, 20 experts from diverse fields—including academia, science, government, nonprofits, and the private sector—offer a dozen recommendations to improve the accuracy and trustworthiness of surveys.

The authors’ recommendations aim to better align the practices of survey research with three scientific norms: transparency, clarity, and correcting the record. Among the recommendations: surveyors and survey researchers should be transparent about their methods, including sampling design and modeling and weighting assumptions. Researchers should also disclose how respondents were recruited, how exposure to other surveys may affect the responses of panel members, and the known or expected consequences of attrition in panel surveys.

Surveyors and researchers should also be more precise in their use of terms such as “representative sample,” spelling out what the term means in each case.
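
As a concrete illustration of the kind of methodological detail the authors call for, the sketch below (not from the PNAS Nexus article) computes simple post-stratification weights that rebalance a hypothetical sample toward assumed population shares by age group; the age_group categories and population_share targets are invented for the example.

```python
import pandas as pd

# Hypothetical respondents; the age_group categories are invented for this sketch.
sample = pd.DataFrame({
    "respondent_id": range(1, 9),
    "age_group": ["18-34", "18-34", "35-54", "35-54", "35-54", "55+", "55+", "55+"],
})

# Assumed population shares (a real survey report would cite the source of these targets).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Weight = population share / sample share, so each group counts in proportion
# to its assumed share of the population rather than its share of the sample.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(lambda g: population_share[g] / sample_share[g])

print(sample)
print("Sum of weights:", round(sample["weight"].sum(), 2))  # equals the sample size, 8
```

Disclosing the weighting variables and the population targets behind them, as the authors recommend, lets readers judge how much a poll’s reported results depend on those assumptions.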

“If survey panelists’ responses have been potentially biased by responses to earlier surveys or questionnaires, researchers and readers need to know that,” says co-author Kathleen Hall Jamieson, director of the Annenberg Public Policy Center and a professor of communication at the Annenberg School for Communication.

Read more at Annenberg Public Policy Center.