Marketing researchers face two important challenges to data quality. The first is representativeness: with response rates plummeting, we need to make surveys shorter, more engaging, and easier for respondents to complete. The second is accuracy: we must make sure that survey questions measure what we think they measure.
Making surveys more accurate and representative comes down to survey design. Think carefully about how quality is expressed and impacted throughout every phase of the research project. When you get in the habit of thinking about quality at all phases of a study, from design to implementation to analysis of results and feedback, the payoff will be clear.
First Steps First
It sounds obvious, but the first quality check is to take your survey. Clients, researchers, analysts—everybody on board should complete the survey. Be sure to ask some people who are not familiar with the project to complete it as well. How does it feel to be on the other side? Talk through the questionnaire as a group. Look for areas that need more focus or clarification. Seek recommendations to improve the survey. Encourage the group to advocate for changes and explain why the changes are important. And be sure to use a variety of devices and operating systems to understand how the survey performs in different situations.
Conduct a pretest to get feedback from respondents. You don’t have to complete many pretest surveys, but you should have at least a few “real,” qualified respondents complete the survey. They should also answer questions about the survey’s design, ease of use, and any other issues they encountered. By all means, use survey engagement tools when feasible, but don’t fall into the trap of letting the droids rule the analysis. You need the human touch from real respondents, as well. (Don’t forget to remove the pretest responses, or filter them out, before analyzing the final results.)
Use technology to test for data quality. A computer application excels at computing metrics, scoring responses, and summarizing results. It can measure survey engagement by tracking rates of abandonment and speeding, and measure experience quality via respondent ratings. The average length of time to complete the survey is also a key metric. Use technology as a predictive tool before launching the survey to evaluate engagement levels and suggest improvements.
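To make those engagement metrics concrete, here is a minimal sketch of how abandonment rate, speeding rate, and average completion time might be computed. The response records and the speeding threshold are assumptions for illustration, not part of any particular survey platform:

```python
from statistics import mean

# Hypothetical response records: (completed, seconds_to_finish)
responses = [
    (True, 310), (True, 95), (False, 40),
    (True, 280), (False, 120), (True, 330),
]

# Abandonment rate: share of respondents who started but did not finish.
completed = [secs for done, secs in responses if done]
abandonment_rate = 1 - len(completed) / len(responses)

# Speeding rate: share of completes far faster than the average respondent.
SPEED_CUTOFF = 0.4  # fraction of average completion time; an assumed threshold
avg_time = mean(completed)
speeders = [secs for secs in completed if secs < SPEED_CUTOFF * avg_time]
speeding_rate = len(speeders) / len(completed)

print(f"Abandonment rate: {abandonment_rate:.0%}")      # 33%
print(f"Average completion time: {avg_time:.0f}s")      # 254s
print(f"Speeding rate: {speeding_rate:.0%}")            # 25%
```

In practice the cutoff for speeding should be calibrated per survey; a flat fraction of the average is only a starting point.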
As you get in the habit of performing quality checks, be on the lookout for these common issues that will lead you to improve your survey design:
Is the survey user-friendly?
- Beware of “survey fatigue.” Split long surveys into many short pages.
- Make survey language more consumer-friendly and conversational and less “research-y.”
- Does the language used on buttons and error messages match the survey language?
- Validate questions for the correct data type, and tie validation to relevant error messaging that explains how to fix the response.
- Use a progress indicator to show how far the respondent is from completion.
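The validation point above, pairing a data-type check with an error message that tells the respondent how to fix the answer, can be sketched as follows. The question types, bounds, and message wording are hypothetical examples, not a real survey platform's API:

```python
import re

# Each validator returns a corrective error message, or None if the
# answer is valid, so the message can be shown next to the question.
def validate_age(answer: str):
    if not answer.strip().isdigit():
        return "Please enter your age as a whole number, e.g. 34."
    if not 18 <= int(answer) <= 120:
        return "Please enter an age between 18 and 120."
    return None

def validate_email(answer: str):
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", answer.strip()):
        return "Please enter a valid email address, e.g. name@example.com."
    return None

print(validate_age("thirty"))  # a message telling how to fix the response
print(validate_age("34"))      # None: the answer passes validation
```

The key design choice is that every failed check returns actionable wording ("enter a whole number"), not a bare "invalid input".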
Does the survey flow?
- Improve the logical flow of the questions and watch out for redundancies.
- Make sure the question type matches what you are looking for. Close-ended questions are ideal for analysis and filtering purposes.
- Test your logical paths. When designing page skips, you don’t want unexpected branching to happen.
- Use required questions for “must get” answers, so the respondent can’t move on without completing them. Be careful about making too many questions required, however, as respondents can become frustrated and break off before completing the survey.
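Testing logical paths, as recommended above, is easy to automate once skip logic is written down as data. This is a minimal sketch with hypothetical question IDs and branching rules; the idea is to walk every branch and confirm each one reaches the final question with no unexpected detours:

```python
# Hypothetical skip logic: maps (question, answer) to the next question.
# A None answer key means "any answer goes to this question."
SKIP_LOGIC = {
    ("owns_car", "yes"): "car_brand",
    ("owns_car", "no"): "commute_mode",
    ("car_brand", None): "satisfaction",
    ("commute_mode", None): "satisfaction",
}

def next_question(question: str, answer) -> str:
    return SKIP_LOGIC.get((question, answer)) or SKIP_LOGIC[(question, None)]

def walk(answers: dict) -> list:
    """Follow the path a respondent with these answers would see."""
    path, q = [], "owns_car"
    while q != "satisfaction":
        path.append(q)
        q = next_question(q, answers.get(q))
    return path + ["satisfaction"]

# Exercise both branches to confirm each ends at the final question.
assert walk({"owns_car": "yes"}) == ["owns_car", "car_brand", "satisfaction"]
assert walk({"owns_car": "no"}) == ["owns_car", "commute_mode", "satisfaction"]
```

Running a walk for every combination of branching answers before launch catches dead ends and accidental question skips that are hard to spot by eyeballing the questionnaire.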
Is your survey design mobile-capable? While 40% of all survey responses are completed on a mobile device, a recent study reported that half of surveys are not mobile-capable, much less mobile-optimized. Design your survey to work on mobile devices:
- Make sure the questions on each short page fit on the screen.
- Minimize scrolling whenever possible.
- Check for comment box sizing problems and row-width for matrix question labels.
Remember, quality control should never be an afterthought; establish a formal quality control process for surveys. The process should specify the review responsibilities of each survey reviewer, and one or more team members should be responsible for evaluating respondent-level data. It should also cover the survey design end-to-end, focusing on maximizing both technological efficiency and respondent experience for optimal data quality.