Quality Starts with Survey Design: Tips for Better Surveys

Marketing researchers face two important challenges to data quality. First is representativeness: with response rates plummeting, we need to make surveys shorter, more engaging, and easier for respondents to complete. Second is accuracy: we must make sure that survey questions measure what we think they measure.

Making surveys more accurate and representative comes down to survey design. Think carefully about how quality is expressed and affected throughout every phase of the research project. When you get in the habit of thinking about quality at every phase of a study, from design through implementation to analysis of results and feedback, the payoff will be clear.

First Steps First

It sounds obvious, but the first quality check is to take your survey. Clients, researchers, analysts—everybody on board should complete the survey. Be sure to ask some people who are not familiar with the project to complete it as well. How does it feel to be on the other side? Talk through the questionnaire as a group. Look for areas that need more focus or clarification. Seek recommendations to improve the survey. Encourage the group to advocate for changes and explain why the changes are important. And be sure to use a variety of devices and operating systems to understand how the survey performs in different situations.

Conduct a pretest to get feedback from respondents. You don’t need many pretest completes, but you should have at least a few “real,” qualified respondents take the survey. Ask them about the survey’s design, ease of use, and any other issues they encountered. By all means, use survey engagement tools when feasible, but don’t fall into the trap of letting the droids rule the analysis; you need the human touch from real respondents as well. (Don’t forget to remove the pretest responses, or filter them out of the final results, before you fully launch the survey.)

Use technology to test for data quality. Software excels at computing metrics, scoring, and summarizing responses. It can measure survey engagement by tracking abandonment and speeding rates, and measure experience quality via respondent ratings. The average time to complete the survey is also a key metric. Use technology as a predictive tool before launching the survey to evaluate engagement levels and suggest improvements.
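The engagement metrics described above are straightforward to compute once you have per-respondent timing data. Below is a minimal sketch, assuming a hypothetical record format (`completed` flag plus `duration_sec`) and an illustrative speeding threshold; your survey platform’s export fields and sensible thresholds will differ.

```python
from statistics import median

def engagement_metrics(responses, speed_threshold_sec=120):
    """Summarize basic engagement signals from per-respondent records.

    Each record is a dict with (hypothetical field names):
      - 'completed': bool, whether the respondent finished the survey
      - 'duration_sec': total time spent, in seconds
    """
    total = len(responses)
    completed = [r for r in responses if r["completed"]]
    # Abandonment rate: share of respondents who started but did not finish.
    abandonment_rate = 1 - len(completed) / total if total else 0.0
    # "Speeders" finish implausibly fast, a common sign of low-quality data.
    speeders = [r for r in completed if r["duration_sec"] < speed_threshold_sec]
    speeding_rate = len(speeders) / len(completed) if completed else 0.0
    median_duration = median(r["duration_sec"] for r in completed) if completed else None
    return {
        "abandonment_rate": abandonment_rate,
        "speeding_rate": speeding_rate,
        "median_duration_sec": median_duration,
    }

# Example: four respondents, one abandons, one speeds through.
sample = [
    {"completed": True, "duration_sec": 480},
    {"completed": True, "duration_sec": 95},
    {"completed": True, "duration_sec": 510},
    {"completed": False, "duration_sec": 60},
]
print(engagement_metrics(sample))
```

Running this on the sample flags a 25% abandonment rate and one speeder out of three completes, the kind of summary worth reviewing before full launch.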

Frequent Challenges

As you get in the habit of performing quality checks, be on the lookout for these common issues; fixing them will improve your survey design:

Is the survey user-friendly?

  • Beware of “survey fatigue.” Split long surveys into many short pages.
  • Make survey language more consumer-friendly and conversational and less “research-y.”
  • Does the language used on buttons and error messages match the survey language?
  • Validate questions for the correct data type and tie validation to relevant error messaging that tells how to fix the response.
  • Use a progress indicator to show how far the respondent is from completion.
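The validation point above pairs a data-type check with an error message that tells the respondent how to fix the answer. Here is a minimal sketch of that idea, with hypothetical field names and hint text; a real survey platform would hook this into its form-rendering and error-display logic.

```python
def validate_answer(raw, expected_type, field_name):
    """Validate one response and return (value, error_message).

    The error message tells the respondent *how* to fix the answer,
    rather than just flagging it as invalid.
    """
    try:
        if expected_type == "int":
            return int(raw), None
        if expected_type == "email":
            # Minimal illustrative check, not a full email validator.
            if "@" in raw and "." in raw.split("@")[-1]:
                return raw.strip(), None
            raise ValueError
        return raw.strip(), None
    except (ValueError, TypeError):
        hints = {
            "int": "Please enter a whole number, e.g. 35.",
            "email": "Please enter a valid email address, e.g. name@example.com.",
        }
        return None, f"{field_name}: {hints.get(expected_type, 'Please check your answer.')}"

print(validate_answer("thirty-five", "int", "Age"))
```

The invalid input yields an actionable message ("Age: Please enter a whole number, e.g. 35.") instead of a bare "invalid input" error.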

Does the survey flow?

  • Improve the logical flow of the questions and watch out for redundancies.
  • Make sure the question type matches what you are looking for. Closed-ended questions are ideal for analysis and filtering purposes.
  • Test your logical paths. When designing page skips, you don’t want unexpected branching to happen.
  • Use required questions for “must get” answers, so the respondent can’t move on without completing them. Be careful about making too many questions required, however, as respondents can become frustrated and break off before completing the survey.

Is your survey design mobile-capable? Although 40% of all survey responses are completed on a mobile device, a recent study reported that half of surveys are not mobile-capable, much less mobile-optimized. Design your survey to work on mobile devices:

  • Make sure that short page questions fit on the screen.
  • Minimize scrolling whenever possible.
  • Check for comment box sizing problems and row-width for matrix question labels.

Remember, quality control should never be an afterthought: establish a quality control process for surveys that specifies each reviewer’s responsibilities. One or more team members should be responsible for evaluating respondent-level data. The process should review the survey design end to end, maximizing both technological efficiency and respondent experience for optimal data quality.

Reap the Rewards: Finding the Right Incentive Mix for Your Panelists

Pretty much everyone in the survey business understands the value of a satisfied panel. We want our surveys to be well-received and satisfying. We want our panelists to be engaged, and when we invite them again, we want them to participate eagerly.

To achieve these goals, you must work to build loyalty among your panelists. What does loyalty mean in this context? A panelist should think of your panel as their panel. They belong there, and it’s a place they will want to revisit.

One tried-and-true method for building loyalty is offering incentives, also known as rewards. An incentive reinforces positive behaviors and reminds panelists who your brand is and why it’s worthy of their loyalty.

A panelist who is kept happy will, in large measure, be a loyal one. Here again, incentives can play a major role in building goodwill. When you reward respondents, you not only offer them something of value; you let them know that you value them.

In the abstract, an incentive program should contribute to the growth of:

  • Acquisition
  • Participation frequency
  • Retention of participants

When we reward panelists for good behavior, the happy (thus loyal) panelists are much more likely to share their positive experience with their friends. In this way retention (satisfied panelists) can feed back into acquisition (new participants).

Let’s briefly examine how incentives can be structured to address these aims.

A reward that isn’t worthwhile to the participant isn’t worth much.

The value of the reward should be paired with two factors: time invested by the participant and the level of complexity of the tasks you ask them to complete.

Beware of offering too lavish a reward.

This can trigger fraudulent actions, as in “I’ll say or do anything to get the prize.” An incentive program should NEVER compromise the integrity of the research.

Watch out for the redundancy problem.

Offering the same reward again and again can have a negative impact: participant boredom leading to lack of engagement.

Weigh the benefits of adding diverse incentives.

Are there ways to tailor or customize the panel experience? Is your panel management system able to accommodate changes to the incentive package over time, as needs change?

You might, for instance, design a tiered system for qualifying and non-qualifying participants. Why should non-qualifiers be rewarded with a token gift too? Because today’s non-qualifier could be tomorrow’s qualifying participant. Retention is the name of the game. With typical conversion rates in the low range of 10% to 15%, rewarding the non-qualifier helps discourage gaming of the system and incentivizes honest repeat participation in the next survey.

Be flexible.

A good incentive program will have some flexibility built in, such as tiered rewards that trigger at different levels depending on specified factors. The levels could consist of gift cards, merchandise, PayPal payments, charitable donations, games, and other exclusive benefits. The key is to match the reward with the panelist. One size does not fit all.
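The tier-trigger logic described above can be sketched in a few lines. The thresholds, point values, and reward names below are purely illustrative assumptions, not a recommendation for any particular reward mix.

```python
def pick_reward(points, tiers):
    """Return the highest reward tier the panelist's points qualify for.

    `tiers` maps a minimum point threshold to a reward description.
    Returns None if no threshold has been reached yet.
    """
    earned = None
    # Walk thresholds in ascending order, keeping the highest one met.
    for threshold in sorted(tiers):
        if points >= threshold:
            earned = tiers[threshold]
    return earned

# Illustrative tier table; real programs would tune these values.
TIERS = {
    100: "$5 gift card",
    500: "$25 gift card or charitable donation",
    1000: "Exclusive merchandise + PayPal payout option",
}

print(pick_reward(650, TIERS))  # → "$25 gift card or charitable donation"
```

Keeping the tier table as data (rather than hard-coded conditions) makes it easy to adjust the incentive package over time as needs change.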

Consider delivering digital rewards by email.

Digital rewards have a couple of advantages: (1) the recipient gets immediate satisfaction (they can redeem it right away) and (2) you reduce overhead for inventory and fulfillment management.

Weigh the costs and benefits.

Tiered rewards can add cost but they really help to cement the bond to your most loyal panelists. Points-based rewards are a popular approach that can be cheaper than cash rewards.

Give them a choice.

Using the idea of “reverse preference,” you offer the panelist a choice of reward type other than the default option; you might also use this as a motivational factor when targeting a particular demographic.

Can your technology handle what you need to do? 

You want the system to accommodate multiple projects and programs across different demographics at the same time, each with its own custom incentive approach. An integrated application programming interface (API) can deliver rewards automatically. Fast incentive fulfillment not only increases efficiency; it keeps panelists happier. Make sure your panel management system is robust enough to handle the granularity of analytics you need, and adaptable enough as those needs change.
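Automated delivery through an API typically means assembling a structured payload and posting it to the incentive platform. The sketch below builds such a payload; the endpoint, field names, and reward code are all hypothetical stand-ins for whatever your platform’s actual API defines.

```python
import json

def build_reward_request(panelist_id, reward_code, value_usd):
    """Assemble a reward-delivery payload for a hypothetical incentive API.

    Field names and the endpoint in the comment are illustrative only;
    substitute your platform's real API. In production you would POST
    this payload with an auth token, e.g.:
    requests.post("https://api.example.com/v1/rewards", json=payload, ...)
    """
    payload = {
        "panelist_id": panelist_id,
        "reward_code": reward_code,  # e.g. a digital gift-card SKU
        "value_usd": value_usd,
        "delivery": "email",         # digital delivery = near-instant redemption
    }
    return json.dumps(payload)

print(build_reward_request("P-1042", "GIFTCARD_5", 5.00))
```

Because the payload is plain JSON, the same structure can serve multiple concurrent programs, each passing its own reward codes and values.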

Measuring the Results

The key to improving an incentive program is to test and adjust.

You should always be tracking and measuring respondent satisfaction, which can be gauged via satisfaction surveys, social media feedback, and helpdesk interactions.

Doing this will show panelists that you are there for them, are interested in their feedback, and are willing to act to improve their experience with each iteration.

Measurement is necessary for another reason. To gain approval for an incentive program, you will need to demonstrate to management that you have the metrics to show a clear return on investment. Plan to show them the positive feedback loops between completion rates and satisfaction metrics.

With these considerations in mind, you can expect an improved rewards system that boosts acquisition rates, leads to greater participation, and secures higher retention rates.