Quality Starts with Survey Design: Tips for Better Surveys

Marketing researchers face two important challenges to data quality. First is the question of representativeness: with response rates plummeting, we need to make surveys shorter, more engaging, and easier for respondents to complete. Second is the issue of data accuracy: we must make sure that survey questions measure what we think they measure.

If you want to make surveys more accurate and representative, it all comes down to survey design. You need to think carefully about how quality is expressed and affected throughout all phases of the research project. When you get in the habit of thinking about quality at all phases of a study—from design to implementation to analysis of results and feedback—the payoff will be clear.

First Steps First

It sounds obvious, but the first quality check is to take your survey. Clients, researchers, analysts—everybody on board should complete the survey. Be sure to ask some people who are not familiar with the project to complete it as well. How does it feel to be on the other side? Talk through the questionnaire as a group. Look for areas that need more focus or clarification. Seek recommendations to improve the survey. Encourage the group to advocate for changes and explain why the changes are important. And be sure to use a variety of devices and operating systems to understand how the survey performs in different situations.

Conduct a pretest to get feedback from respondents. You don’t need many pretest completes, but you should have at least a few “real,” qualified respondents complete the survey. Ask them about the survey’s design, ease of use, and any other issues they had with it. By all means, use survey engagement tools when feasible, but don’t fall into the trap of letting the droids rule the analysis; you need the human touch from real respondents as well. (Don’t forget to remove the pretest respondents before you fully launch the survey, or else clear their responses or filter them out of the final results.)

Use technology to test for data quality. A computer application is great at computing metrics, scoring responses, and summarizing results. It can measure survey engagement by tracking abandonment and speeding rates, and it can measure experience quality via respondent ratings. The average length of time to complete the survey is also a key metric. Use technology as a predictive tool before launching the survey to evaluate engagement levels and suggest improvements.
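
As a rough illustration, here is a minimal Python sketch of computing such engagement metrics from respondent-level paradata. The field names, sample values, and the 180-second speeding threshold are illustrative assumptions, not part of any particular survey platform.

```python
from statistics import median

# Hypothetical respondent-level paradata: one record per survey session.
# Field names and values are illustrative assumptions.
sessions = [
    {"id": 1, "completed": True,  "duration_secs": 420},
    {"id": 2, "completed": False, "duration_secs": 95},   # abandoned mid-survey
    {"id": 3, "completed": True,  "duration_secs": 130},  # suspiciously fast
    {"id": 4, "completed": True,  "duration_secs": 515},
]

def engagement_summary(sessions, speed_floor_secs=180):
    """Summarize abandonment, speeding, and median completion time."""
    completes = [s for s in sessions if s["completed"]]
    abandonment_rate = 1 - len(completes) / len(sessions)
    speeders = [s for s in completes if s["duration_secs"] < speed_floor_secs]
    return {
        "abandonment_rate": round(abandonment_rate, 2),
        "speeder_rate": round(len(speeders) / len(completes), 2),
        "median_minutes": round(median(s["duration_secs"] for s in completes) / 60, 1),
    }

print(engagement_summary(sessions))
# {'abandonment_rate': 0.25, 'speeder_rate': 0.33, 'median_minutes': 7.0}
```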

Frequent Challenges

As you get in the habit of performing quality checks, be on the lookout for these common issues that will lead you to improve your survey design:

Is the survey user-friendly?

  • Beware of “survey fatigue.” Split long surveys into many short pages.
  • Make survey language more consumer-friendly and conversational and less “research-y.”
  • Does the language used on buttons and error messages match the survey language?
  • Validate questions for the correct data type and tie validation to relevant error messaging that tells the respondent how to fix the answer (see the sketch after this list).
  • Use a progress indicator to show how far the respondent is from completion.
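
To make the validation point above concrete, here is a minimal Python sketch of tying a data-type check to an error message that tells the respondent how to fix the response; the question, the acceptable range, and the wording are hypothetical.

```python
def validate_age(raw_value, minimum=18, maximum=99):
    """Validate an age answer; return (is_valid, message telling how to fix it)."""
    try:
        age = int(raw_value)
    except (TypeError, ValueError):
        return False, "Please enter your age as a whole number, for example 42."
    if not minimum <= age <= maximum:
        return False, f"Please enter an age between {minimum} and {maximum}."
    return True, ""

print(validate_age("forty-two"))  # (False, 'Please enter your age as a whole number, for example 42.')
print(validate_age("42"))         # (True, '')
```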

Does the survey flow?

  • Improve the logical flow of the questions and watch out for redundancies.
  • Make sure the question type matches what you are looking for. Closed-ended questions are ideal for analysis and filtering purposes.
  • Test your logical paths. When designing page skips, you don’t want unexpected branching to happen.
  • Use required questions for “must get” answers, so the respondent can’t move on without completing them. Be careful about making too many questions required, however, as respondents can become frustrated and break off before completing the survey.

Is your survey design mobile-capable? While 40% of all survey responses are completed on a mobile device, a recent study reported that half of surveys are not mobile-capable, much less mobile-optimized. Design your survey to work on mobile devices:

  • Make sure that short page questions fit on the screen.
  • Minimize scrolling whenever possible.
  • Check for comment box sizing problems and row-width for matrix question labels.


Remember, quality control should never be an afterthought; you must have an established quality control process for surveys. This process must specify the quality review responsibilities of each survey reviewer. One or more team members should be responsible for evaluating respondent-level data. The quality control process should review the survey design end-to-end to focus on maximizing both technological efficiency and respondent experience for optimal data quality.

Smart Survey Design: 3 Forgotten Pain Points to Avoid

“Smart Survey Design” is a loose term (bordering on a catch-all) that you’ve probably heard pitched to you.  Maybe you have used it yourself when piecing together a study.

It’s not a hollow term, by any means. Smart design has advantages for both designers and respondents. Designing “smart” simply means maintaining data integrity, both in terms of capturing statistically relevant data and in terms of reducing the amount of bad data caused by poor survey takers (straight-liners, short open-end responders, speeders, cheaters, etc.).

That’s the basic idea, but there is one factor that often gets forgotten or ignored in a “Smart” design: the respondent’s experience. You want your respondents to have a positive user experience: surveys with a human touch. They should feel good about taking the survey.

I’m not just talking about survey length or incentive, though those are certainly key tools in addressing the problem. What I am referring to is the very way we talk to the respondent, the questions we ask, and how many times we ask them.

It is easy for us as researchers to become so lost in our need for quality data that we forget the source of it—human beings. People are rational and emotional creatures. How do they feel about their participation?  It’s an important consideration, all too often ignored.

Identifying and avoiding potential pain points may not only help to reduce the number of scrubs and drop-outs, but also deliver better, more reliable data.

Pain Point #1: Being too repetitive

Have you ever been on a conference call where the speaker repeats the same point 5 times? Did you like it? Did you continue to pay attention, or did you look at your phone or check your email? Now imagine that same conference call, except the speaker drones on with 4 more points that are roughly one hair’s width different from the original ones. Frustrating!

Plenty of studies out there get too repetitive in hopes of garnering nominal, ordinal, interval, and ratio data just to present the client with 4 different charts. But you should ask yourself: how reliable are the opinions offered by a respondent you have just bored and/or annoyed?

Some repetition may be unavoidable, especially when you want to determine which of a group of stimuli is most attractive to your target, but you should not bludgeon the people who are meant to be helping you.

Pain Point #2: Being too clever

“If you could be a tree, what tree would you be and why?”

This may be a good opener for your therapist to crawl around the workings and motivations of your mind, but some respondents may find such questions intrusive, or something worse: “hogwash.” They have signed up to take part in survey research, but they’re not lab rats!

We come back to the reliability question: how reliable is the data you are gathering if your respondent has been made uncomfortable and just wants to finish the ordeal and get out?

The prospect of getting “deeper data” out of your survey may be very alluring, but consider how appropriate those questions are for your audience.  Does a panelist really need to imagine their favorite restaurant as a spirit animal in order to tell you what their favorite sandwich is?

Pain Point #3: Being too “research-y”

While gathering data, or even when trying to cut down the length of interview out of consideration for respondents, questions might be presented impersonally or curtly. These rapid-fire “cold” questions, though absolutely focused, clear, and concise, run the risk of boring a respondent into unintentional mental lethargy.

Quality-check questions can eliminate respondents who have lost interest from your data set, but wouldn’t it be more beneficial to prevent the need for them in the first place? You don’t have to write a narrative or tell a knock-knock joke to keep people engaged with the process. Panelists are people. You should just remember to “speak” to them conversationally, instead of clinically prompting and probing for responses.

By being more aware of the respondent’s pain points and making a few tweaks to your surveys, you can improve completion rates, quality of open-ended responses and data integrity.  Better yet, it does all this without incurring any additional costs.

How I Learned to Love AAPOR’s ResearchHack 3.0

It was my first year attending the American Association for Public Opinion Research (AAPOR) Annual Conference, and I was feeling a little nervous. AAPOR is one of the most influential conferences in the survey industry. My goal was to actively participate in events and networking opportunities on the conference list. ResearchHack 3.0 was one of them.

ResearchHack is AAPOR’s version of a “hackathon.” For ResearchHack 3.0, teams of participants (a.k.a. “hackers”) were asked to devise a plan for a mobile app that would inform various uses of the Census Planning Database.

I looked at the blank ResearchHack 3.0 registration form and hesitated. To be honest, I’m a statistician whose focus has been on survey research methodology. Except for the statistical programming language R, which I’ve used for my projects, I know very little about coding or making an app. Me, a hacker? A coder? I don’t think so! I didn’t know whether I could make any meaningful contribution. I was a little scared, but I knew that it would be a great chance to learn, to work with great people, to get out of my comfort zone, and to truly challenge myself. I signed up. “ResearchHack 3.0…bring it on!”

I was paired with three professionals: a health researcher, a health policy program research director, and a director of an institute for survey research. Our team decided to work on a Census Planning Database-based mobile app to help any survey firm or researcher trying to design a sampling and operational plan for a hard-to-survey population.

Surveying a hard-to-survey population usually results in a very low response rate. The “main idea” of our app proposal was to use the Low Response Score in the Census Planning Database to help identify areas with a likely low response rate for the targeted population. Then we would “customize” sampling and operational plans for areas with different degrees of predicted response rate, with the assistance of big data analysis results or shared experiences from other researchers.
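
As a rough sketch of that main idea (not the actual app), the Python snippet below flags likely low-response tracts using the Planning Database’s Low Response Score; the file name, column names, and the top-quartile cutoff are assumptions for illustration and should be checked against the real database layout.

```python
import pandas as pd

# Hypothetical file and column names; verify against the actual
# Census Planning Database release before relying on them.
pdb = pd.read_csv("census_planning_database_tract.csv")

# Flag tracts with the highest predicted non-response, here the top quartile
# of the Low Response Score (a higher score means a lower expected response).
cutoff = pdb["Low_Response_Score"].quantile(0.75)
hard_to_survey = pdb[pdb["Low_Response_Score"] >= cutoff]

# These flagged areas would then get a "customized" plan: larger sample
# allocations, extra contact attempts, tailored materials, and so on.
print(hard_to_survey[["GIDTR", "Low_Response_Score"]].head())
```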

Actually, we had no problem creating heat maps to identify areas with a likely low response rate, but when we had to create an app prototype to demonstrate how the app could help survey researchers “customize” their research plans, we ran into a problem. None of us knew whether our proposed ideas were even feasible in an app! We didn’t know what adjustments we should make to implement those ideas at the app level. None of us had the experience needed to make those calls. It’s like that feeling you get when you have an awesome idea for decorating a cake, but you don’t know the needed ingredients. I have to admit, it was a frustrating realization, and I believe my team members felt the same.

The clock was ticking. We had to present our ideas to the public only 24 hours after our first meeting. The pressure was huge, but no one gave up. We sacrificed sleep to work on our slides and outputs. We wanted to be sure that our “main proposal idea” would be clearly explained.

Next, we adopted a role-playing strategy in our presentation to show the audience the kinds of difficulties any researcher might face when trying to survey a hard-to-survey population, and how “customized” research plans could help if the needed technical assistance for the app were available.

Although our ideas didn’t wow the judges (totally understandable due to our app-level technical shortcomings), we did win the “audience pick” award. We were grateful to them for appreciating the effort we put in to help relieve the pressure on all the hardworking survey researchers who have to collect responses from hard-to-survey populations.

ResearchHack 3.0 was certainly tough, but very rewarding, too. You couldn’t ask for more from this crazy and unforgettable experience!

After the conference when I got back to the office, I shared my ResearchHack experience with the programmers in the Geo-Dem group. We had some great discussions. They gave me creative ideas that I had never thought of before. This is one of the great benefits of going to conferences like AAPOR. You share new knowledge and insights with your colleagues, which sparks more creative innovation. One day we will continue in the spirit of ResearchHack 3.0 and make great products for survey researchers, together. When that day comes, our blog readers will know the news. Stay tuned!

Kelly Lin | Survey Sample Statistician | Marketing Systems Group

Taking Aim with Consumer Cellular Sample

How Consumer Cellular Sample Can Give You a More Accurate Geographic Fit of your Target Population and Improve Coverage

Geo-targeting. We all know what it means, but for the sake of this article, let’s get at the essence of the concept. Geo-targeting is a way to pinpoint an audience based on location. Accuracy is everything. Geography is the fundamental basis for every sample frame – be it individual streets, Census geography or Postal geography.

Certain sample frames such as Cellular RDD tend to be difficult to target geographically due to inward and outward migration of individuals who retain their cell phone numbers.  It’s important to be aware of these limitations when using a Cellular RDD sample, especially when targeting small geographies.

Here’s how you can miss the target: a cellular RDD sample will include people who have moved outside your target geography (outward migration). Conversely, other respondents live in the area of interest but never have an opportunity to be included in the RDD frame, because their cell numbers correspond to an entirely different geography (inward migration). These people wouldn’t be included in a traditional cellular RDD frame. The result? Under-coverage due to inward migration and increased data collection costs due to outward migration.

So how can we account for the under-coverage and take better aim? One option is to supplement from a relatively new convenience frame called Consumer Cell. This frame is based on a multi-source model comprised of consumer databases linked to publicly available cellular telephone numbers. It is updated monthly.

The Consumer Cell database is built from literally hundreds of sources, including:

  • Public records
  • Census data
  • Consumer surveys
  • Telephone directories
  • Real estate information (deed & tax assessor)
  • Voter registration
  • Magazine subscriptions
  • Survey responses
  • E-commerce
  • Proprietary sources

Geographic targeting using Consumer Cell can be highly accurate, zooming in on small geographies such as census blocks or even the household level. Further stratification can be done on numerous person- and household-level demographic variables.

One limitation to the database is that it is a convenience frame (non-probability compilation of households). It does not offer the same coverage as an RDD frame. It is probably best utilized as a supplement to sample respondents who live in a targeted geography. One of the benefits is that you now include respondents who otherwise would not have been sampled.

If your area of interest is at the state or local level, consider whether a Consumer Cell supplement can address the under-coverage issues of an RDD cell sample.

DIY Web Scraping: Fetching and Extracting Data

Given the sheer multitude of data accessible on the world wide web, the “web scraping” phenomenon has caught on like wildfire. Web scraping is a method for extracting data from websites. The scraping can be done manually, but it is preferably done programmatically.

Many free programs are out there to assist you with your forays into web scraping. For a recent project we used iMacros to automate the fetching and extraction of needed data from the Residential Construction Branch of the U.S. Census Bureau. This website provides data on the number of new housing units authorized by building permits. Data are available monthly, year-to-date, and annually at the national and state levels and for most counties and county subdivisions. Prior to January 9, 2017, all building permit data at the county level or below was available only as individual text files. This meant downloading 3,142 individual text files in order to obtain the data for all the counties in the U.S. It was a tedious task, to say the least.

Such a manual process would have been too labor-intensive to take on without automating it via web scraping. Automating the entire process with iMacros was straightforward and simple. Here’s an outline of the steps, followed by a rough sketch of the download loop:

  • Install the iMacros extension for the Firefox web browser.
  • Test the iMacros recording function by going through the process of selecting and downloading the first file.
  • View the recorded code and wrap it in a loop that increments the file index by 1, so the macro repeats itself and downloads each text file.
  • Save the files to the same folder location to make merging the data files into a single file much easier.
  • Extract data easily for every county, with the ability to roll up by state, region, and nationally.
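
If you prefer scripting to a browser macro, the same download loop can be expressed in a few lines of Python; the URL template and county FIPS codes below are placeholders (not the Census Bureau’s actual file paths), so substitute the real ones before running.

```python
import time
from pathlib import Path

import requests

# Placeholder URL template and county FIPS codes -- replace with the real
# file paths from the Building Permits site (or its FTP directory).
URL_TEMPLATE = "https://example.gov/permits/county_{fips}.txt"
county_fips_codes = ["01001", "01003", "01005"]  # ...continue through all 3,142 counties

out_dir = Path("permits")
out_dir.mkdir(exist_ok=True)  # save everything to one folder to ease the later merge

for fips in county_fips_codes:
    response = requests.get(URL_TEMPLATE.format(fips=fips), timeout=30)
    response.raise_for_status()
    (out_dir / f"county_{fips}.txt").write_bytes(response.content)
    time.sleep(1)  # be polite to the server between requests
```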

Like many data sites, the Building Permits website now provides access to an FTP directory where you can navigate and download all 3,142 text files without having to enter specific parameters for each file. However, if you come across websites that do not, we recommend that you get familiar with the site to determine what format the data is in (e.g., tables, individual pages, etc.). If you need to scrape numerous websites, take the time to get familiar with each one, because any change in formatting from site to site can cause havoc if you are not aware of the potential for downloading misaligned or incorrect data. Never forget the rule: garbage in, garbage out. Test before you scrape!

Why Volunteer?

The research industry needs volunteers. Here’s why you should consider playing a part.

Many of us here at MSG serve as active volunteer members of market and survey research industry organizations. It’s part of our company culture to get involved and make a difference. Recently, I attended back-to-back chapter events, and I began to reflect on the benefits of volunteering. Was it really worthwhile to devote my time to a local chapter organization?

It’s true, the amount of time you need to devote to volunteering can feel like a second job, and it is crucial that you be able to balance your primary and secondary activities. It’s definitely a juggling act, and it isn’t always easy.

That being said, there are loads of good reasons to become a volunteer. Here’s what influenced me to get involved:

Networking. Serving as an industry volunteer will get you talking to people and is a wonderful means of creating and maintaining relationships. I want to meet people I can work with, but I also want to build a network of long-lasting professional relationships. Through my roles as a volunteer for a local chapter organization and as a committee member, I have encountered industry pros whom I never would have met otherwise.

Learning best practices. Education doesn’t end with a degree, a certification, or on-the-job training. It should be seen as a lifelong habit of mind. By attending events and seminars outside the orbit of your day-to-day business, you will be exposed to new ideas and pick up on new trends within your industry and related industries.

Organic growth. A natural goal we all have is to grow our business. When you volunteer, the cultivation of business growth can tend to happen more organically, as a function of developing relationships within the membership environment. As you discover ways to collaborate and partner with others, those seeds will sprout.

I firmly believe that volunteers are the lifeblood of an association. They keep our communities engaged and informed. Volunteering can take up a lot of spare time, but when I reflect and ask myself whether I should have volunteered, the answer is always a resounding YES!

“Hope for the best, prepare for the worst”: salvaging the client list

You’ve probably heard the story before. It begins, “The study started with a client list….”

I can’t tell you how many times I had a client call and tell me that. The stories follow a pattern. The client says it’s a great list and you should be able to easily complete the study with it. Sounds great, right?

Here comes the plot twist. They forgot to tell you the list is 4 years old and hasn’t been touched since. Oh, and by the way, only 30% of the records have a phone or email address. Suddenly, easy street is filled with potholes.

This isn’t the end of the story, and it can have a happy ending. A sub-standard client list can be rescued with these investigative approaches and performance enhancements:

• Flag any cell phone numbers so they can be separated out and dialed manually, which also ensures TCPA compliance.

• Ask yourself: what is most important on the list? What is the key sampling element? Is it the individual (contact name)? If so, the file can be run against the National Change of Address (NCOA) database to see if the person has moved. If the person has moved, a search can be run for the new address. The next step is to identify the landline and/or cellular telephone numbers associated with that individual at the new address.

• If location/address is the key element, check for the most up-to-date telephone numbers (either landline or cellular) and name associated with that address.

• Send the call list to a sample provider for verification. Does the information in your list match the sample provider’s database?

• If the information doesn’t match, can you append a new phone number or email address?

• Do you still have open quotas? See if you can append demographics to target for open quotas.

• When you’ve exhausted all options on the client list and the study still isn’t complete, order an additional custom sample that meets the ultimate client’s specifications (or at least comes close). Then dedupe the custom sample orders against the client list, as in the sketch after this list.
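
As a minimal sketch of that dedupe step, assuming both lists share a phone-number column, the pandas snippet below drops custom-sample records whose phone number already appears on the client list; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical files and column name; match on whatever key the two lists
# share (phone, email, or a standardized address).
client_list = pd.read_csv("client_list.csv", dtype={"phone": str})
custom_sample = pd.read_csv("custom_sample_order.csv", dtype={"phone": str})

# Keep only custom-sample records whose phone number is NOT on the client list,
# so no one is contacted twice.
deduped = custom_sample[~custom_sample["phone"].isin(client_list["phone"])]
deduped.to_csv("custom_sample_deduped.csv", index=False)

print(f"Removed {len(custom_sample) - len(deduped)} duplicate records.")
```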

With the help of a good sample provider, even a subpar client list can be salvaged and the study brought to completion on time.

4 Surefire Ways to Increase ABS Response Rates Without Breaking the Bank

So you’ve found the perfect sampling source: nearly 100% coverage and the ability to reach cell-phone-only homes with address-based sample. You can expect to get the completes you need, but realistically, what response rate will you achieve? How can you boost it?

Depending on the steps taken, the response rate can vary greatly. You may only realize 10-15% without a big-name company endorsement to go with your survey and/or a pre-notification postcard, but such an endorsement can kill the budget before the study even begins!

Here are 4 surefire tips to increase response rates…

Tip #1: Append phone numbers and names to the addresses using commercial databases to personalize the mail pieces and allow for reminder calls. Even where a name is appended, also include “or current resident” to reduce the return rate.

Tip #2: Add a creatively designed piece with a web link to drive respondents to participate online. This allows the respondent to take the survey anytime and on a device of their choice. Offering a multi-mode approach can increase participation and representation.

Tip #3: Repeat the message. Contacting potential respondents multiple times via mail, phone, media, or social networking sites will increase awareness and help entice them to participate. Messages are more effective when repeated!

Tip #4: Offer an incentive to motivate your respondents. Be sure the value balances the effort and time spent on the survey, while staying within budget, of course.

With some large, heavily endorsed studies that used long field times, reminder calls, multiple postcards, and refusal conversions, we have seen response rates of up to 50%. Use the tips that your study and budget will allow, and you can experience a higher response rate too!

Recent Conference Experience: MRA Joint Conference

Earlier this year I had the pleasure of co-chairing the joint MRA Philly/Greater NY conference. The conference was held in April in Center City Philadelphia, and planning started back in October/November. This being my first time planning a conference of this magnitude, my initial reaction was: how will we get this done in time? However, I soon found that my fellow co-chair, Bob Granito (of Interactive Media and a member of the Greater NY chapter), along with all the wonderful and dedicated volunteers, was equally committed. From the get-go the entire committee made the planning process engaging and seamless. In the end it was fulfilling to see the planning and hard work of all come to fruition, as many attendees mentioned they enjoyed the conference from beginning to end. The conference itself was a full-day event with 7 engaging speakers delivering 5 thoughtful presentations:

  • Steve Levine (Zeldis) and Jerry Valentine (AstraZeneca) discussed current trends in Disruptive Behavior.
  • Michelle Murphy Niedziela, Ph.D., of HCD Research discussed the 5 phases of Neuroscience.
  • David Dutwin, Ph.D., of SSRS gave the keynote, discussing the future of survey research.
  • Nina Hoe, Ph.D of Temple University presented on building a city-wide panel.
  • John Hartman and John Shiela of Phoenix Marketing shared their research on wearable technology trends.

As I reflect on the planning stages, I am glad to have the experience under my belt as I transition to my new role as President of the Philly MRA. I cannot thank enough all the board members from both chapters and the volunteers who helped make the event a success.

-Rajesh Bhai

AAPOR’s Task Force on Address-Based Sampling

In January of 2016, AAPOR’s Task Force on Address-Based Sampling published its findings for the AAPOR Standards Committee. MSG’s Trent Buskirk and David Malarek played a pivotal role in the formation of the ABS standards. Below is the abstract of the report. The full report can be found here:

http://www.aapor.org/AAPOR_Main/media/MainSiteFiles/AAPOR_Report_1_7_16_CLEAN-COPY-FINAL.pdf

Arguably, address lists updated via the United States Postal Service (USPS) Computerized Delivery Sequence (CDS) file are the best possible frames for today’s household surveys in the United States. National coverage estimates vary, but are very high overall and nearly 100% in many areas, and coverage continues to improve. In addition, many address lists are regularly updated with changes from the USPS CDS file, reducing the need for expensive field work by survey organizations. Historically, field-generated frames were the only option for in-person surveys, but the high cost was prohibitive for many important national surveys, not to mention other valuable research surveys at the state, region, or community level.

For many years, telephone surveys have been the low-cost alternative to in-person surveys with field-generated frames. However, the nature of telephony has shifted dramatically toward cellular technology (Blumberg and Luke 2014; Keeter et al. 2007). With more households switching from landline to mobile telephones, the coverage of landline-based random digit dialing (RDD) frames has dwindled (Blumberg and Luke 2014). Furthermore, because of legislation regarding how survey researchers may dial cell phones, and because of generally lower response rates for cell phone numbers, the cost of telephone surveys that seek coverage of cell-only households is increasing (AAPOR Cell Phone Task Force 2010).

Address-based sampling (ABS) offers attractive solutions to these coverage and cost problems in the United States (Link et al. 2008). The accessibility of address frames has reduced the cost of in-person surveys and brought about a resurgence of relatively inexpensive mail surveys. ABS is often used in multimode studies, where different modes may be used for contact versus response in data collection or to follow up with nonrespondents (Alexander and Wetrogan 2000; de Leeuw 2005). Alternatively, advance mailings can be used to direct selected households to web surveys, with the hope that doing so may dramatically reduce costs. Furthermore, the ability to append geocodes, phone numbers, demographics, and other data to the address frame, although imperfect, can provide deep stratification and aid in designing more cost-efficient studies.

Society is changing through the way people communicate. Letters and telephone calls are largely being replaced by texts, tweets, e-mails, and other electronic communications, although mail is still used for some formal and official communications. Surveys that push selected individuals to respond to surveys electronically (e.g., via the web) take advantage of today’s prevalent modes of communication. Without general frames of electronic addresses, mail addresses provide excellent coverage of households. At the same time, initial contact by mail ensures that virtually every selected household can be reached, regardless of electronic capabilities. Creative use of ABS provides many options for reaching busy households and gaining cooperation.

The purpose of this report is to describe the nature of ABS and its uses for conducting surveys. Multiple specific goals of the report are presented in Section 1.3. The report discusses in detail technical aspects of constructing ABS frames and samples, and the technical aspects reveal both its strengths and limitations. These aspects are important for effective use of ABS in survey design and implementation, as described in the report.