How I Learned to Love AAPOR’s ResearchHack 3.0

It was my first year attending the American Association for Public Opinion Research (AAPOR) Annual Conference, and I was feeling a little nervous. AAPOR's conference is one of the most influential in the survey industry. My goal was to participate actively in the events and networking opportunities on the conference program, and ResearchHack 3.0 was one of them.

ResearchHack is AAPOR's version of a "hackathon," in which teams of participants (a.k.a. "hackers") were asked to devise a plan for a mobile app that would inform various uses of the Census Planning Database.

I looked at the blank ResearchHack 3.0 registration form and hesitated. To be honest, I'm a statistician whose focus has been on survey research methodology. Aside from the statistical programming language R, which I've used for my projects, I know very little about coding or building an app. Me, a hacker? A coder? I don't think so! I didn't know whether I could make any meaningful contribution. I was a little scared, but I knew it would be a great chance to learn, to work with great people, to get out of my comfort zone, and to truly challenge myself. I signed up. "ResearchHack 3.0…bring it on!"

I was paired with three professionals: a health researcher, a health policy program research director, and the director of an institute for survey research. Our team decided to work on a mobile app based on the Census Planning Database to help survey firms and researchers design sampling and operational plans for hard-to-survey populations.

Surveying a hard-to-survey population usually results in a very low response rate. The "main idea" of our app proposal was to use the Low Response Score in the Census Planning Database to identify areas where response rates for the target population were likely to be low. We would then "customize" sampling and operational plans for areas with different levels of predicted response, with the help of big-data analyses and experiences shared by other researchers.
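To make that first step concrete, here is a minimal R sketch of how one might flag likely low-response areas from the public tract-level Planning Database file. The file name is hypothetical, and the variable names (GIDTR, State_name, County_name, Low_Response_Score) follow the Census Bureau's PDB documentation; this is only an illustration of the idea, not code from our actual app proposal.

```r
library(readr)
library(dplyr)

# Tract-level Census Planning Database, downloaded from census.gov
# (the file name here is hypothetical).
pdb <- read_csv("pdb_tract_us.csv")

# Treat tracts in the top quartile of the Low Response Score as
# candidates for a "customized" sampling/operational plan.
cutoff <- quantile(pdb$Low_Response_Score, 0.75, na.rm = TRUE)

hard_to_survey <- pdb %>%
  filter(Low_Response_Score >= cutoff) %>%
  select(GIDTR, State_name, County_name, Low_Response_Score) %>%
  arrange(desc(Low_Response_Score))

head(hard_to_survey)
```

In the app we imagined, a list like this would feed the maps and the "customized" plan recommendations described above.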

We actually had no problem creating heat maps to identify areas with possible low response rates, but when we had to create an app prototype to demonstrate how the app could help survey researchers "customize" their research plans, we ran into a problem. None of us knew whether our proposed ideas were even feasible in an app! We didn't know what adjustments we would need to make to implement those ideas at the app level, and none of us had the experience to make those calls. It's like having an awesome idea for decorating a cake but not knowing what ingredients you'd need. I have to admit it was a frustrating realization, and I believe my teammates felt the same.

The clock was ticking. We had to present our ideas to the public only 24 hours after our first meeting. The pressure was huge, but no one gave up. We sacrificed sleep to work on our slides and outputs. We wanted to be sure that our “main proposal idea” would be clearly explained.

We then adopted a role-playing strategy for our presentation to show the audience the kinds of difficulties a researcher might face when trying to survey a hard-to-survey population, and how "customized" research plans could help, provided the technical assistance needed to build the app was available.

Although our ideas didn't wow the judges (totally understandable, given our technical shortcomings at the app level), we did win the "audience pick" award. We were grateful to the audience for appreciating the effort we put into helping relieve the pressure on all the hardworking survey researchers who have to collect responses from hard-to-survey populations.

ResearchHack 3.0 was certainly tough, but very rewarding, too. You couldn’t ask for more from this crazy and unforgettable experience!

After the conference, when I got back to the office, I shared my ResearchHack experience with the programmers in the Geo-Dem group. We had some great discussions, and they gave me creative ideas I had never thought of before. This is one of the great benefits of attending conferences like AAPOR: you share new knowledge and insights with your colleagues, which sparks further innovation. One day we will continue in the spirit of ResearchHack 3.0 and build great products for survey researchers, together. When that day comes, our blog readers will know the news. Stay tuned!

Kelly Lin | Survey Sample Statistician | Marketing Systems Group

AAPOR's Task Force on Address Based Sampling

In January of 2016, AAPOR's Task Force on Address Based Sampling published its findings for the AAPOR Standards Committee. MSG's Trent Buskirk and David Malarek played a pivotal role in the formation of the ABS Standards. Below is the abstract of the report. The full report can be found here:

http://www.aapor.org/AAPOR_Main/media/MainSiteFiles/AAPOR_Report_1_7_16_CLEAN-COPY-FINAL.pdf

Arguably, address lists updated via the United States Postal Service (USPS) Computerized Delivery Sequence (CDS) file are the best possible frames for today's household surveys in the United States. National coverage estimates vary, but are very high overall and nearly 100% in many areas, and coverage continues to improve. In addition, many address lists are regularly updated with changes from the USPS CDS file, reducing the need for expensive field work by survey organizations. Historically, field-generated frames were the only option for in-person surveys, but the high cost was prohibitive for many important national surveys, not to mention other valuable research surveys at the state, region, or community level. For many years, telephone surveys have been the low-cost alternative to in-person surveys with field-generated frames. However, the nature of telephony has shifted dramatically toward cellular technology (Blumberg and Luke 2014; Keeter et al. 2007). With more households switching from landline to mobile telephones, the coverage of landline-based random digit dialing (RDD) frames has dwindled (Blumberg and Luke 2014). Furthermore, because of legislation regarding how survey researchers may dial cell phones, and because of generally lower response rates for cell phone numbers, the cost of telephone surveys that seek coverage of cell-only households is increasing (AAPOR Cell Phone Task Force 2010).

Address-based sampling (ABS) offers attractive solutions to these coverage and cost problems in the United States (Link et al. 2008). The accessibility of address frames has reduced the cost of in-person surveys and brought about a resurgence of relatively inexpensive mail surveys. ABS is often used in multimode studies, where different modes may be used for contact versus response in data collection or to follow up with nonrespondents (Alexander and Wetrogan 2000; de Leeuw 2005). Alternatively, advance mailings can be used to direct selected households to web surveys, with the hope that doing so may dramatically reduce costs. Furthermore, the ability to append geocodes, phone numbers, demographics, and other data to the address frame, although imperfect, can provide deep stratification and aid in designing more cost-efficient studies.

Society is changing through the way people communicate. Letters and telephone calls are largely being replaced by texts, tweets, e-mails, and other electronic communications, although mail is still used for some formal and official communications. Surveys that push selected individuals to respond to surveys electronically (e.g., via the web) take advantage of today's prevalent modes of communication. Without general frames of electronic addresses, mail addresses provide excellent coverage of households. At the same time, initial contact by mail ensures that virtually every selected household can be reached, regardless of electronic capabilities. Creative use of ABS provides many options for reaching busy households and gaining cooperation.

The purpose of this report is to describe the nature of ABS and its uses for conducting surveys. Multiple specific goals of the report are presented in Section 1.3. The report discusses in detail technical aspects of constructing ABS frames and samples, and the technical aspects reveal both its strengths and limitations. These aspects are important for effective use of ABS in survey design and implementation, as described in the report.