Smart Survey Design: 3 Forgotten Pain Points to Avoid

“Smart Survey Design” is a loose term (bordering on a catch-all) that you’ve probably heard pitched. Maybe you have even used it yourself when piecing together a study.

It’s not a hollow term, by any means. Smart design has advantages for both designers and respondents. Designing “smart” simply means maintaining data integrity: capturing statistically relevant data while reducing the amount of bad data caused by low-quality survey takers (straight-liners, short open-end responders, speeders, cheaters, etc.).

That’s the basic idea, but one factor often gets forgotten or ignored in a “smart” design: the respondent’s experience. You want your respondents to have a positive user experience, a survey with a human touch. They should feel good about taking it.

I’m not just talking about survey length or incentive, though those are certainly key tools in addressing the problem. What I am referring to is the very way we talk to the respondent: the questions we ask, and how many times we ask them.

It is easy for us as researchers to become so lost in our need for quality data that we forget the source of it—human beings. People are rational and emotional creatures. How do they feel about their participation?  It’s an important consideration, all too often ignored.

Identifying and avoiding potential pain points may not only help to reduce the number of scrubs and drop-outs, but also deliver better, more reliable data.

Pain Point #1: Being too repetitive

Have you ever been on a conference call where the speaker repeats the same point 5 times?  Did you like it?  Did you continue to pay attention or did you look at your phone or check your email?  Now imagine that same conference call. The speaker drones on with 4 more points that are roughly one hair’s width different from the original ones. Frustrating!

Plenty of studies out there get too repetitive in hopes of garnering nominal, ordinal, interval, and ratio data just to present the client with four different charts. But ask yourself: how reliable are the opinions of a respondent you have just bored and/or annoyed?

Some repetition may be unavoidable, especially when you want to determine which of a group of stimuli is most attractive to your target, but you should not bludgeon the people who are meant to be helping you.

Pain Point #2: Being too clever

“If you could be a tree, what tree would you be and why?”

This may be a good opener for your therapist, probing the workings and motivations of your mind, but some respondents may find such questions intrusive or, worse, “hogwash.” They signed up to take part in survey research, but they’re not lab rats!

We come back to the reliability question: how reliable is the data you are gathering if your respondent has been made uncomfortable and just wants to finish the ordeal and get out?

The prospect of getting “deeper data” out of your survey may be very alluring, but consider how appropriate those questions are for your audience.  Does a panelist really need to imagine their favorite restaurant as a spirit animal in order to tell you what their favorite sandwich is?

Pain Point #3: Being too “research-y”

While gathering data, or even when trying to trim the length of the interview out of consideration for respondents, we can end up presenting questions impersonally or curtly. These rapid-fire “cold” questions, though focused, clear, and concise, run the risk of lulling a respondent into unintentional mental lethargy.

Quality-control questions can help you eliminate respondents who have lost interest from your data set, but wouldn’t it be more beneficial to prevent that disengagement in the first place? You don’t have to write a narrative or tell a knock-knock joke to keep them engaged with the process. Panelists are people; just remember to “speak” to them conversationally, instead of clinically prompting and probing for responses.

By being more aware of the respondent’s pain points and making a few tweaks to your surveys, you can improve completion rates, the quality of open-ended responses, and data integrity. Better yet, it all comes at no additional cost.

How I Learned to Love AAPOR’s ResearchHack 3.0

It was my first year attending the American Association for Public Opinion Research (AAPOR) Annual Conference, and I was feeling a little nervous. AAPOR is one of the most influential conferences in the survey industry. My goal was to actively participate in events and networking opportunities on the conference list. ResearchHack 3.0 was one of them.

ResearchHack is AAPOR’s version of a “hackathon,” in which teams of participants (a.k.a. “hackers”) are asked to devise a plan for a mobile app that would inform various uses of the Census Planning Database.

I looked at the blank ResearchHack 3.0 registration form and hesitated. To be honest, I’m a statistician whose focus has been on survey research methodology. Except for the statistical programming language R, which I’ve used for my projects, I know very little about coding or making an app. Me, a hacker? A coder? I don’t think so! I didn’t know whether I could make any meaningful contribution. I was a little scared, but I knew that it would be a great chance to learn, to work with great people, to get out of my comfort zone, and to truly challenge myself. I signed up. “ResearchHack 3.0…bring it on!”

I was paired with three professionals: a health researcher, a health policy program research director, and a director of an institute for survey research. Our team decided to work on a mobile app built on the Census Planning Database to help survey firms and researchers design sampling and operational plans for hard-to-survey populations.

Surveying a hard-to-survey population usually results in a very low response rate. The main idea of our app proposal was to use the Low Response Score in the Census Planning Database to identify areas with a potentially low response rate for the targeted population. We would then “customize” sampling and operational plans for areas with different degrees of predicted response, with the assistance of big-data analysis or experiences shared by other researchers.
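The core logic can be sketched in a few lines. This is a hypothetical illustration only: the tract IDs, scores, threshold, and `plan_for` function are made up for the example, and the scores simply mirror the spirit of the Planning Database’s Low Response Score (the predicted percentage of households that will not self-respond).

```python
# Hypothetical sketch: pick a data-collection strategy per census tract
# based on a Low Response Score (LRS). Higher LRS = harder to survey.
# Tract IDs, scores, and the threshold below are invented for illustration.

tracts = {
    "42101000100": 31.5,
    "42101000200": 12.8,
    "42101000300": 24.9,
}

THRESHOLD = 20.0  # assumed cutoff separating "hard" from "easier" tracts

def plan_for(lrs, threshold=THRESHOLD):
    """Return a customized operational plan for a tract's LRS."""
    if lrs >= threshold:
        # Predicted low response: oversample and add in-person follow-up.
        return {"oversample_factor": 1.5, "mode": "mail + field follow-up"}
    # Predicted adequate response: standard mail/web protocol.
    return {"oversample_factor": 1.0, "mode": "mail + web"}

plans = {tract: plan_for(score) for tract, score in tracts.items()}
for tract, plan in plans.items():
    print(tract, plan)
```

In the real proposal, the thresholds and strategies would come from big-data analysis and other researchers’ shared experience rather than a single hard-coded cutoff.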

Actually, we had no problem creating heat maps to identify areas with a potentially low response rate, but when we had to create an app prototype to demonstrate how the app could help survey researchers “customize” their research plans, we ran into a problem. None of us knew if our proposed ideas were even applicable in an app! We didn’t know what adjustments we should make to implement those ideas at the app level. None of us had the related experience needed to make those calls. It’s like the feeling you get when you have an awesome idea for decorating a cake, but you don’t know the ingredients you need. I have to admit, it was a frustrating realization, and I believe my teammates felt the same.

The clock was ticking. We had to present our ideas to the public only 24 hours after our first meeting. The pressure was huge, but no one gave up. We sacrificed sleep to work on our slides and outputs. We wanted to be sure that our “main proposal idea” would be clearly explained.

Next, we adopted a role-playing strategy in our presentation to show the audience the difficulties a researcher might face when trying to survey a hard-to-survey population, and how “customized” research plans could help once the app’s technical pieces were in place.

Although our ideas didn’t wow the judges (totally understandable, given our technical shortcomings at the app level), we did win the “audience pick” award. We were grateful that the audience appreciated the effort we put in to relieve the pressure on all the hardworking survey researchers who have to collect responses from hard-to-survey populations.

ResearchHack 3.0 was certainly tough, but very rewarding, too. You couldn’t ask for more from this crazy and unforgettable experience!

After the conference when I got back to the office, I shared my ResearchHack experience with the programmers in the Geo-Dem group. We had some great discussions. They gave me creative ideas that I had never thought of before. This is one of the great benefits of going to conferences like AAPOR. You share new knowledge and insights with your colleagues, which sparks more creative innovation. One day we will continue in the spirit of ResearchHack 3.0 and make great products for survey researchers, together. When that day comes, our blog readers will know the news. Stay tuned!

Kelly Lin | Survey Sample Statistician | Marketing Systems Group