The Power of Conversation

Observers of the market research industry have been noticing a trend of late: researchers are acknowledging the limitations of large-scale surveys and are rediscovering the value of qualitative research, namely, real conversations with real people. Why?

That’s precisely the question, and also the answer: “why.” Quantitative research often has difficulty answering the “why” questions. While it is true that much insight can be gained by analyzing big data, why not go directly to the source and talk to people? By interviewing them and hearing their stories and insights, you can understand your data better. Why do products sell? Why is growth not taking off? Why do preferences emerge for one brand and not another? Some answers are more readily gained by simply talking to people and then interpreting what they say.

Why Location Continues to be a Difference Maker

Last Christmas I wanted to buy a turntable for my daughter. Thanks to an online message forum, I discovered that Target was selling a new brand of turntable at an affordable price point with features typically seen on higher-end models. It was early in the Christmas buying season, and I had a hunch that a product like this might sell out quickly. So I researched the Target.com website, checked its inventory, and used the store locator to find the nearest Target with the turntable in stock. At this point many would click the “buy now” button and have the product shipped. Instead, I hopped in the car and drove to the store. Why, you ask? I wanted to see the product for myself before buying it. Once inside the store, my smartphone told me which aisle to go to. With a little help from my friend the store clerk, I located the turntable, looked it over, bought it, and wrapped it up for Christmas. What this very short story teaches us is that while technology has become a key component in the way we consume, we aren’t quite willing to let go of location-based purchasing decisions.

In his best-selling 2005 book The World Is Flat, Thomas L. Friedman popularized the idea that 21st-century global economies, fueled by technology, were leveling the playing field and creating a world where geography didn’t matter. Transnational corporations, Internet technology, and streamlined global supply chains had combined to make it easier for consumers to access the things they wanted – anywhere, all the time.

The virtual, “flattened” landscape is here to stay, and it has drastically altered the way we do business and live our lives. It’s hard to remember what life was like in “the old days,” before Google was a verb, before social media was pervasive in our daily routines. That being said, has the world really become as flat as we thought it would? Maybe not.

Anywhere/all-the-time access to goods and services is wonderful, but does this really mean it doesn’t matter where you live? Even as far back as 2005, many geographers weren’t buying Friedman’s thesis that the world was flat. They insisted that place continues to influence how and what we buy.

Richard Jerome, in a recent column for greenbookblog.org, elaborates on this concept. The fact is, consumer preferences and behaviors are conditioned, at least in some measure, by geography. To put it another way, how you behave in virtual space is determined by who you are, which frequently correlates with where you live. Even when buying patterns have been so disrupted that consumers have completely changed how they access goods and services—going online to buy, where location seems to have been flattened out of existence—location still underpins how we buy online and how we use technology to gain virtual access to retail.

Market researchers must never forget the deep and abiding persistence of geography. As the old saying goes, “all politics is local,” and there is truth to the claim that all consumption is local, too. Geographical context, and the behaviors surrounding a given location, are connected to buying patterns, brand identification, and receptivity to new markets.

The real world wields a sort of gravitational force on consumer habits. This phenomenon is described in Wharton School professor David R. Bell’s book Location is (Still) Everything: The Surprising Influence of the Real World on How We Search, Shop, and Sell in the Virtual One (2014). Bell studied online commerce and developed a GRAVITY framework to understand how the geographical and virtual worlds intersect. The Internet obliterated two key barriers in consumers’ never-ending quest to obtain the products and services they want. The first was information access: it used to be a royal pain to find information about products and their availability, and search engines plus instant access to store inventories solved that problem. The second barrier was something Bell calls “geographical friction,” the tendency to buy only what you could get via local markets. The Internet and services such as Amazon’s two-day shipping have broken that barrier, too. But that is not the end of the story. When Bell studied Internet sellers, he found that sales evolved in some interesting patterns based on location. Initial demand was focused on a few locations, but then demand spread out contiguously—neighbor to neighbor, block to block, and so on. Consumers see what their peers are buying and tend to imitate those buying patterns. In another scenario, because people with similar consumer preferences tend to live in the same neighborhood, they adopt similar buying patterns.

Once you accept the fact that location still plays a major role in consumer preferences and activities, you need to keep factoring geography into your strategic thinking. Some key factors to consider:

  • How is geography influencing buying patterns and tendencies? What other regional markets compare to my own? Are there parallels, and should we be reaching this consumer segment too?
  • Is online retail really meant to displace offline retail? Or should the virtual and physical spaces coexist in a harmonic, fluid way? Maybe the relationship between the two is more symbiotic than parasitic.
  • How does the fact that consumers are doing more of their buying on smartphones impact your marketing strategy? Smartphones’ built-in location services can ease access to goods and services. Brands should be tracking where their customers are when they buy, and whether their purchases are isolated or contiguous to a geo-demographic segment.

MSG’s Geo-Demographic services can help you understand your customer base. By mapping demographics onto geography, you can better understand territories, neighborhoods, and block-level consumption patterns. http://www.m-s-g.com/Pages/genesys/geo_dem_services

Sources used for this post:

https://greenbookblog.org/2019/01/31/why-location-still-matters/

https://en.wikipedia.org/wiki/The_World_Is_Flat

https://www.cioinsight.com/it-management/expert-voices/the-importance-of-location-for-digital-cios.html

New Census Data Available for Computer Ownership and Internet Subscription

The United States Census has long been a treasure trove of data for market researchers, and the trove just got richer: it now offers data on computer usage and internet access.

On December 6, 2018, the United States Census Bureau released its Summary File for the 2013-2017 American Community Survey (ACS) Five Year Estimates. For the first time, this data product contains tables for computer ownership and internet subscription. The ACS helps government, community leaders, and companies understand how their communities are changing, and it contains a wealth of information on U.S. population and housing. The new Five Year Estimates for computer use are further broken down by household characteristics such as income, age, educational attainment, and labor force status.

Background

To fully appreciate the significance of this release, you have to go back 10 years. In 2008 Congress enacted the Broadband Data Improvement Act, with the goal of identifying geographical areas of the country that did not have broadband service. Legislators hoped to promote deployment of services within underserved areas and to bring affordable service to all areas of the country.

In 2013 the Census Bureau started asking questions about computer and Internet use in its ongoing American Community Survey (ACS). The ACS randomly samples approximately 3.5 million addresses annually, and the information from that survey is released each year in two distinct datasets:

  • One Year Summary File (SF)
  • Five Year Summary File (SF)

The key difference between the datasets is that the Five Year SF is backed by five years of respondents and thus includes estimates down to a very detailed Census Block Group geography. Block Groups are statistical divisions of census tracts that are defined to contain a minimum of 600 people or 240 housing units and a maximum of 3,000 people or 1,200 housing units. The One Year SF, created from one year of respondents, only includes estimates for large geographies with a population greater than 65,000. Examples of large geographies are census regions and census divisions, individual states, and metropolitan areas (which are groups of cities and surrounding counties). Incidentally, there are 501 metro areas with populations greater than 65,000.

One Year Summary File Drilldown

Below is an example of the level of detail that can be produced from the One Year SF, looking at Presence of Internet Subscriptions in US Households.

2017 American Community Survey One Year Estimates – Presence of Internet Subscriptions in Households, United States

  Category                                     Estimate      Percentage
  Total U.S. households                        120,062,818   100.00%
  With an Internet subscription                100,662,676    83.84%
  Internet access without a subscription         3,395,581     2.83%
  No Internet access                             16,004,561    13.33%
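
If you want to reproduce national figures like these programmatically, a minimal sketch against the public Census Bureau API is shown below. The B28002 variable codes are my assumption of the relevant ACS table and should be verified against the published table shells.

```python
# Minimal sketch: pull 2017 ACS 1-Year internet-subscription estimates for the U.S.
# The B28002_* variable codes are assumptions; verify them against the ACS table shells.
import requests

URL = "https://api.census.gov/data/2017/acs/acs1"
VARIABLES = {
    "B28002_001E": "Total households",
    "B28002_002E": "With an Internet subscription",
    "B28002_012E": "Internet access without a subscription",  # assumed code
    "B28002_013E": "No Internet access",                       # assumed code
}

params = {"get": ",".join(VARIABLES), "for": "us:1"}
header, row = requests.get(URL, params=params, timeout=30).json()

total = int(row[header.index("B28002_001E")])
for code, label in VARIABLES.items():
    value = int(row[header.index(code)])
    print(f"{label:<40} {value:>12,} {value / total:7.2%}")
```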

We see that 19,400,142 households in the U.S. (16.16%) have no Internet subscription: they either access the Internet without a subscription or have no Internet access at all. Mapping these households shows that the highest percentages of no-subscription households tend to be concentrated in Appalachia and the Deep South.

The Five Year Survey

With the current 2013-2017 Five Year release, the ACS has surveyed more than 17.5 million addresses, which is enough to provide accurate estimates down to a detailed level of geography.

However, there is a catch. Just prior to this release, the Census Bureau announced the removal of the Block Group estimates, and it is unclear at this point whether they will be available before the next release, which is scheduled for December 2019. This means that the lowest level of geography currently available is the census tract. A census tract is an area roughly equal to a neighborhood; census tracts are smaller than a city but larger than a block group, and they generally have a population between 1,200 and 8,000 people. There are 73,056 census tracts across the 50 states and the District of Columbia.

Census-tract-level geography helps pinpoint particular target households. This kind of analysis is valuable because it allows users to target specific, detailed areas. In addition, clusters of neighboring areas that have similar characteristics can then be used to define a study area.
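
As a rough illustration of the mechanics, the sketch below pulls tract-level internet-subscription estimates for a single county from the ACS Five Year API and flags low-subscription tracts that might be grouped into a study area. The table and variable codes, the 70% threshold, and the example state/county FIPS codes are illustrative assumptions rather than a prescription.

```python
# Minimal sketch: flag census tracts in one county with low home internet
# subscription rates as candidates for a study area. The B28002 variable codes,
# the 70% threshold, and the FIPS codes (state 42, county 101 = Philadelphia
# County, PA) are illustrative assumptions; verify before use.
import requests

URL = "https://api.census.gov/data/2017/acs/acs5"
params = {
    "get": "NAME,B28002_001E,B28002_002E",   # total households, with a subscription
    "for": "tract:*",
    "in": "state:42 county:101",
}
header, *rows = requests.get(URL, params=params, timeout=30).json()
idx = {name: i for i, name in enumerate(header)}

study_area = []
for row in rows:
    total = int(row[idx["B28002_001E"]])
    with_sub = int(row[idx["B28002_002E"]])
    if total > 0 and with_sub / total < 0.70:
        study_area.append((row[idx["NAME"]], with_sub / total))

for name, rate in sorted(study_area, key=lambda item: item[1]):
    print(f"{rate:6.1%}  {name}")
```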

New Data Available at Finer Geographic Levels

For the first time, the Five Year summary file offers new tables and categories that are available at detailed geographic levels. For more information, see: https://www.census.gov/programs-surveys/acs/technical-documentation/table-and-geography-changes/2017/5-year.html

One key website containing census and demographic data is American FactFinder: https://factfinder.census.gov. This site contains an abundance of information but can be tricky to navigate, manipulate, and comprehend. With over 35 years of combined experience, MSG’s Geo-Demographic team members are experts at working with this data and are here for your project needs.

Visit the Resource Center on the MSG website for National estimates of the new data categories.

http://www.m-s-g.com/Pages/downloads/demographics/2013-2017_American_Community_Survey_5-Year_Estimates_Computer_Ownership.pdf

Explore our geo-demographic capabilities at http://www.m-s-g.com/Pages/genesys/geo_dem_services

Resources and links used in this article:

The ACS 5-Year Estimates. https://www.census.gov/programs-surveys/acs/technical-documentation/table-and-geography-changes/2017/5-year.html
Broadband Data Improvement Act. https://www.fcc.gov/general/broadband-data-improvement-act
American Community Survey (ACS). https://www.census.gov/programs-surveys/acs/
Computer and Internet Use in the United States: 2013. https://www.census.gov/library/publications/2014/acs/acs-28.html
ACS Webinar. https://www.census.gov/programs-surveys/acs/guidance/training-presentations/acs-5-year.html
American FactFinder. https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml

County Level Cell Phone Only Estimates

Probability-based telephone surveys must utilize a dual-frame approach in order to capture the ever-increasing cell-phone-only population. Until the day comes when a single frame of cellular numbers is sufficient, researchers need to ensure they get the appropriate blend of cell-only vs. dual-phone users in their sampling allocations.

Marketing Systems Group (MSG) produces quarterly estimates of Cell-Phone Only (CPO) rates for every county in the country[1]. These rates are based on telephone households (as opposed to occupied housing units) and lag one quarter. A cell-phone-only household is defined as one in which there is no operational landline telephone and cell phones are used exclusively to make and receive calls. The CPO estimates are derived by combining survey-based information with large commercial and administrative data sources. This triangulation approach allows us to create CPO estimates at the state and county levels. Moreover, because the primary sources behind these estimates are updated on a continuous basis, we can refresh the CPO estimates each quarter.

MSG is currently the only available source of county-level CPO estimates. Having this granular level of information can benefit survey planning and weight computation/calibration at the sub-state level. We analyzed our CPO estimates at the county level and observed that in many states the CPO rate differs substantially from county to county (see Figure 1). This can have a significant impact on studies that infer a given county’s CPO rate from the state-level estimate.

Figure 1:  CPO variations among counties (minimum vs. maximum) within state compared to the state level estimates.

One limitation of using administrative data in our triangulation methodology is that it does not capture the behavior patterns of dual-phone households, those that have both landline and cellular numbers but make and take calls only on their cell phones. That said, these estimates are the closest approximations available at the county level.

Whether for statistical or productivity reasons, rely on MSG’s CPO estimates as a criterion for obtaining the appropriate mix of landline vs. cellular numbers for sub-state dual-frame RDD surveys and for weighting calculations.
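
As a simplified illustration of how a county-level CPO rate can feed that allocation decision (this is not MSG’s methodology, just a sketch), the example below sets the cell-frame share of completes to at least the county CPO rate plus an arbitrary buffer for cell-mostly households:

```python
# Simplified sketch: split target completes between cell and landline RDD frames
# using a county-level cell-phone-only (CPO) rate. The 10-point buffer for
# cell-mostly dual-phone households is an illustrative assumption, not a rule.
def dual_frame_allocation(total_completes: int,
                          county_cpo_rate: float,
                          cell_mostly_buffer: float = 0.10) -> dict:
    # The cell frame must cover at least the cell-only share of households.
    cell_share = min(1.0, county_cpo_rate + cell_mostly_buffer)
    cell_frame = round(total_completes * cell_share)
    return {"cell_frame": cell_frame, "landline_frame": total_completes - cell_frame}

# Example: 800 completes in a county with an estimated 62% CPO rate.
print(dual_frame_allocation(800, 0.62))  # {'cell_frame': 576, 'landline_frame': 224}
```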

[1] A total of 32 counties were combined with other counties or removed due to lack of information.

Quality Starts with Survey Design: Tips for Better Surveys

Marketing researchers all face two important challenges to data quality. First is the question of representativeness: with response rates plummeting, we need to make surveys shorter, more engaging, and easier for respondents to complete. Second is the issue of data accuracy: we must make sure that survey questions measure what we think they measure.

If you want to make surveys more accurate and representative, it all comes down to survey design. You need to think carefully about how quality is built in and affected throughout all phases of the research project. When you get in the habit of thinking about quality at every phase of a study—from design to implementation to analysis of results and feedback—the payoff will be clear.

First Steps First

It sounds obvious, but the first quality check is to take your survey. Clients, researchers, analysts—everybody on board should complete the survey. Be sure to ask some people who are not familiar with the project to complete it as well. How does it feel to be on the other side? Talk through the questionnaire as a group. Look for areas that need more focus or clarification. Seek recommendations to improve the survey. Encourage the group to advocate for changes and explain why the changes are important. And be sure to use a variety of devices and operating systems to understand how the survey performs in different situations.

Conduct a pretest to get feedback from respondents. You don’t have to complete many pretest surveys, but you should have at least a few “real,” qualified respondents complete the survey. Ask them to answer questions about the survey’s design, ease of use, and any other issues they ran into. By all means, use survey engagement tools when feasible, but don’t fall into the trap of letting the droids rule the analysis; you need the human touch from real respondents as well. (Don’t forget to remove the pretest responses before you fully launch the survey, or filter them out of the final results.)

Use technology to test for data quality. A computer application is great at computing metrics, scoring, and summarizing responses. It can measure survey engagement by tracking rates of abandonment and speeding, and it can measure experience quality via respondent ratings. The average length of time to complete the survey is also a key metric. Use technology as a predictive tool before launching the survey to evaluate engagement levels and suggest improvements.
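
As one small example of what such an application might compute, the sketch below flags speeders and summarizes abandonment from a response export; the column names and the “under 40% of the median duration” cutoff are illustrative assumptions, not a standard.

```python
# Illustrative sketch: basic engagement metrics from a survey response export.
# The file name, column names ("status", "duration_seconds"), and the speeder
# cutoff (40% of the median completion time) are assumptions; adapt to your data.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")          # hypothetical export

completes = responses[responses["status"] == "complete"]
median_duration = completes["duration_seconds"].median()

abandonment_rate = (responses["status"] == "abandoned").mean()
speeders = completes[completes["duration_seconds"] < 0.4 * median_duration]

print(f"Median completion time: {median_duration / 60:.1f} minutes")
print(f"Abandonment rate:       {abandonment_rate:.1%}")
print(f"Speeders flagged:       {len(speeders)} of {len(completes)} completes")
```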

Frequent Challenges

As you get in the habit of performing quality checks, be on the lookout for these common issues that will lead you to improve your survey design:

Is the survey user-friendly?

  • Beware of “survey fatigue.” Split long surveys into many short pages.
  • Make survey language more consumer-friendly and conversational and less “research-y.”
  • Make sure the language used on buttons and in error messages matches the survey language.
  • Validate questions for the correct data type and tie validation to relevant error messaging that tells the respondent how to fix the response (see the sketch after this list).
  • Use a progress indicator to show how far the respondent is from completion.
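
Here is a minimal sketch of the validation-plus-messaging pairing mentioned above; the question (age) and the 18-120 bounds are assumptions chosen for the example.

```python
# Illustrative sketch: validate an answer for the correct data type and return an
# error message that tells the respondent how to fix it. The question ("age") and
# the 18-120 bounds are assumptions chosen for the example.
def validate_age(raw_answer: str):
    """Return (is_valid, error_message)."""
    answer = raw_answer.strip()
    if not answer.isdigit():
        return False, "Please enter your age as a whole number, for example 42."
    age = int(answer)
    if not 18 <= age <= 120:
        return False, "Please enter an age between 18 and 120."
    return True, ""

print(validate_age("42"))      # (True, '')
print(validate_age("forty"))   # (False, 'Please enter your age as a whole number, for example 42.')
```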

Does the survey flow?

  • Improve the logical flow of the questions and watch out for redundancies.
  • Make sure the question type matches what you are looking for. Closed-ended questions are ideal for analysis and filtering purposes.
  • Test your logical paths. When designing page skips, you don’t want unexpected branching to happen.
  • Use required questions for “must get” answers, so the respondent can’t move on without completing them. Be careful about making too many questions required, however, as respondents can become frustrated and break off before completing the survey.

Is your survey design mobile-capable? While 40% of all survey responses are completed on a mobile device, a recent study reported that half of surveys are not mobile-capable, much less mobile-optimized. Design your survey to work on mobile devices:

  • Keep pages short so that questions fit on the screen.
  • Minimize scrolling whenever possible.
  • Check for comment box sizing problems and row-width for matrix question labels.

Remember, quality control should never be an afterthought; you must have an established quality control process for surveys. That process should specify the review responsibilities of each survey reviewer, and one or more team members should be responsible for evaluating respondent-level data. It should also cover the survey design end-to-end, focusing on both technological efficiency and respondent experience for optimal data quality.

Smart Survey Design: 3 Forgotten Pain Points to Avoid

“Smart Survey Design” is a loose term (bordering on a catch-all) that you’ve probably heard pitched to you.  Maybe you have used it yourself when piecing together a study.

It’s not a hollow term, by any means. Smart design has advantages for both designers and respondents. Designing “smart” simply means maintaining data integrity, both by capturing statistically relevant data and by reducing the amount of bad data caused by poor survey takers (straight-liners, short responders on open ends, speeders, cheaters, etc.).
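
Two of those checks are easy to operationalize. The sketch below flags straight-liners on a grid question and very short open-ended responses; the file, column names, and cutoffs are illustrative assumptions.

```python
# Illustrative sketch: flag straight-liners (identical answers across a grid) and
# very short open-ended responses. The file name, column names, and the
# 5-character cutoff are assumptions; adjust them to your own study.
import pandas as pd

df = pd.read_csv("survey_responses.csv")          # hypothetical export
grid_cols = ["q5_a", "q5_b", "q5_c", "q5_d"]      # one matrix/grid question

df["straight_liner"] = df[grid_cols].nunique(axis=1) == 1
df["short_open_end"] = df["q10_open_end"].fillna("").str.strip().str.len() < 5

flagged = df[df["straight_liner"] | df["short_open_end"]]
print(f"Flagged {len(flagged)} of {len(df)} respondents for review")
```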

That’s the basic idea, but there is one factor that often gets forgotten or ignored in a “smart” design: the respondent’s experience. You want your respondents to have a positive user experience, to take surveys with a human touch. They should feel good about taking the survey.

I’m not just talking about survey length or incentive, though those are certainly key tools in addressing the problem. What I am referring to is the very way we talk to the respondent: the questions we ask and how many times we ask them.

It is easy for us as researchers to become so lost in our need for quality data that we forget the source of it—human beings. People are rational and emotional creatures. How do they feel about their participation?  It’s an important consideration, all too often ignored.

Identifying and avoiding potential pain points may not only help to reduce the number of scrubs and drop-outs, but also deliver better, more reliable data.

Pain Point #1: Being too repetitive

Have you ever been on a conference call where the speaker repeats the same point five times? Did you like it? Did you continue to pay attention, or did you look at your phone or check your email? Now imagine that same conference call, except the speaker drones on with four more points that are roughly one hair’s width different from the original ones. Frustrating!

Plenty of studies out there get too repetitive in hopes of garnering nominal, ordinal, interval, and ratio data just to present the client with four different charts. But you should ask yourself: how reliable are the opinions offered by a respondent you have just bored and/or annoyed?

Some repetition may be unavoidable, especially when you want to determine which of a group of stimuli is most attractive to your target, but you should not bludgeon the people who are meant to be helping you.

Pain Point #2: Being too clever

“If you could be a tree, what tree would you be and why?”

This may be a good opener for your therapist to crawl around the workings and motivations of your mind, but some respondents may find such questions to be intrusive or something worse: “hogwash.”  They have signed up to take part in survey research, but they’re not lab rats!

We come back to the reliability question: how reliable is the data you are gathering if your respondent has been made uncomfortable and just wants to finish the ordeal and get out?

The prospect of getting “deeper data” out of your survey may be very alluring, but consider how appropriate those questions are for your audience.  Does a panelist really need to imagine their favorite restaurant as a spirit animal in order to tell you what their favorite sandwich is?

Pain Point #3: Being too “research-y”

When gathering data, or even when trying to cut interview length out of consideration for respondents, it is easy to present questions impersonally or curtly. These rapid-fire “cold” questions, though focused, clear, and concise, run the risk of boring a respondent into unintentional mental lethargy.

Quality-check questions can eliminate respondents who have lost interest from your data set, but wouldn’t it be more beneficial to prevent the need for them in the first place? You don’t have to write a narrative or tell a knock-knock joke to keep people engaged with the process. Panelists are people. You should just remember to “speak” to them conversationally, instead of clinically prompting and probing for responses.

By being more aware of the respondent’s pain points and making a few tweaks to your surveys, you can improve completion rates, the quality of open-ended responses, and data integrity. Better yet, you can do all of this without incurring any additional costs.

How I Learned to Love AAPOR’s ResearchHack 3.0

It was my first year attending the American Association for Public Opinion Research (AAPOR) Annual Conference, and I was feeling a little nervous. AAPOR is one of the most influential conferences in the survey industry. My goal was to actively participate in events and networking opportunities on the conference list. ResearchHack 3.0 was one of them.

ResearchHack is AAPOR’s version of a “hackathon,” in which teams of participants (a.k.a. “hackers”) were asked to devise a plan for a mobile app that would inform various uses of the Census Planning Database.

I looked at the blank ResearchHack 3.0 registration form and hesitated. To be honest, I’m a statistician whose focus has been on survey research methodology. Except for the statistical programming language R, which I’ve used for my projects, I know very little about coding or making an app. Me, a hacker? A coder? I don’t think so! I didn’t know whether I could make any meaningful contribution. I was a little scared, but I knew that it would be a great chance to learn, to work with great people, to get out of my comfort zone, and to truly challenge myself. I signed up. “ResearchHack 3.0…bring it on!”

I was paired with three professionals: a health researcher, a health policy program research director, and the director of an institute for survey research. Our team decided to work on a Census Planning Database-based mobile app to help survey firms and researchers who are trying to design a sampling and operational plan for a hard-to-survey population.

Surveying a hard-to-survey population usually results in a very low response rate. The “main idea” of our app proposal was to use the Low Response Score in the Census Planning Database to help identify areas with a likely low response rate for the targeted population. Then we would “customize” sampling and operational plans for areas with different levels of predicted response, with the assistance of big data analysis results or shared experiences from other researchers.
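
A minimal sketch of that first step, assuming a tract-level Planning Database extract with a Low Response Score column (the file and column names should be checked against the PDB documentation), might look like this:

```python
# Minimal sketch: rank census tracts by the Planning Database's predicted
# low-response score and keep the worst 10% for tailored plans. The file name
# and the "Low_Response_Score" column name are assumptions; verify against the
# Census Planning Database documentation.
import pandas as pd

pdb = pd.read_csv("census_planning_database_tract.csv", dtype={"GIDTR": str})

# Higher score = lower predicted self-response; flag the top decile.
cutoff = pdb["Low_Response_Score"].quantile(0.90)
hard_to_survey = pdb[pdb["Low_Response_Score"] >= cutoff]

print(f"{len(hard_to_survey)} tracts flagged for customized sampling and operational plans")
```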

Actually, we had no problem creating heat maps to identify areas with a likely low response rate, but when we had to create an app prototype to demonstrate how the app could help survey researchers “customize” their research plans, we ran into a problem. None of us knew whether our proposed ideas were even feasible in an app! We didn’t know what adjustments we would need to make to implement those ideas at the app level, and none of us had the experience needed to make those calls. It’s like that feeling you get when you have an awesome idea for decorating a cake, but you don’t know the ingredients you need. I have to admit, it was a frustrating realization, and I believe my team members felt the same.

The clock was ticking. We had to present our ideas to the public only 24 hours after our first meeting. The pressure was huge, but no one gave up. We sacrificed sleep to work on our slides and outputs. We wanted to be sure that our “main proposal idea” would be clearly explained.

Next, we adopted a role-playing strategy in our presentation to show the audience the kinds of difficulties a researcher might face when trying to survey a hard-to-survey population, and how “customized” research plans could help if the needed technical assistance for the app were provided.

Although our ideas didn’t wow the judges (totally understandable due to our app-level technical shortcomings), we did win the “audience pick” award. We were grateful to them for appreciating the effort we put in to help relieve the pressure on all the hardworking survey researchers who have to collect responses from hard-to-survey populations.

ResearchHack 3.0 was certainly tough, but very rewarding, too. You couldn’t ask for more from this crazy and unforgettable experience!

After the conference when I got back to the office, I shared my ResearchHack experience with the programmers in the Geo-Dem group. We had some great discussions. They gave me creative ideas that I had never thought of before. This is one of the great benefits of going to conferences like AAPOR. You share new knowledge and insights with your colleagues, which sparks more creative innovation. One day we will continue in the spirit of ResearchHack 3.0 and make great products for survey researchers, together. When that day comes, our blog readers will know the news. Stay tuned!

Kelly Lin | Survey Sample Statistician | Marketing Systems Group

Taking Aim with Consumer Cellular Sample

How Consumer Cellular Sample Can Give You a More Accurate Geographic Fit of Your Target Population and Improve Coverage

Geo-targeting. We all know what it means, but for the sake of this article, let’s get at the essence of the concept. Geo-targeting is a way to pinpoint an audience based on location. Accuracy is everything. Geography is the fundamental basis for every sample frame – be it individual streets, Census geography or Postal geography.

Certain sample frames such as Cellular RDD tend to be difficult to target geographically due to inward and outward migration of individuals who retain their cell phone numbers.  It’s important to be aware of these limitations when using a Cellular RDD sample, especially when targeting small geographies.

Here’s how you can miss the target: a cellular RDD sample will include people who have moved outside your target geography (outward migration). Meanwhile, other respondents live in the area of interest but never have an opportunity to be included in the RDD frame because their cell numbers correspond to an entirely different geography (inward migration); these people wouldn’t be included in a traditional cellular RDD frame. The result? Under-coverage due to inward migration and increased data collection costs due to outward migration.

So how can we account for the under-coverage and take better aim? One option is to supplement from a relatively new convenience frame called Consumer Cell. This frame is based on a multi-source model composed of consumer databases linked to publicly available cellular telephone numbers. It is updated monthly.

The Consumer Cell database is built from literally hundreds of sources, including:

  • Public records
  • Census data
  • Consumer surveys
  • Telephone directories
  • Real estate information (deed & tax assessor)
  • Voter registration
  • Magazine subscriptions
  • Survey responses
  • E-commerce
  • Proprietary sources

Geographic targeting using Consumer Cell can be highly accurate, zooming in on small geographies such as census blocks or even the household level. Further stratification can be done on numerous person- and household-level demographic variables.

One limitation of the database is that it is a convenience frame (a non-probability compilation of households), so it does not offer the same coverage as an RDD frame. It is probably best utilized as a supplement to sample respondents who live in a targeted geography. One of the benefits is that you now include respondents who otherwise would not have been sampled.

If your area of interest is at the state or local level, consider whether a Consumer Cell supplement can address the under-coverage issues of an RDD cell sample.

Why Volunteer?

The research industry needs volunteers. Here’s why you should consider playing a part.

Many of us here at MSG serve as active volunteer members of market and survey research industry organizations. It’s part of our company culture to get involved and make a difference. Recently, I attended back-to-back chapter events, and I began to reflect on the benefits of volunteering. Was it really worthwhile to devote my time to a local chapter organization?

It’s true, the amount of time you need to devote to volunteering can feel like a second job, and it is crucial that you be able to balance your primary and secondary activities. It’s definitely a juggling act, and it isn’t always easy.

That being said, there are loads of good reasons to become a volunteer. Here’s what influenced me to get involved:

Networking. Serving as an industry volunteer will get you talking to people and is a wonderful means of creating and maintaining relationships. I want to meet people I can work with, but I also want to build a network of long-lasting professional relationships. In my roles as a local chapter volunteer and committee member, I have encountered industry pros whom I never would have met otherwise.

Learning best practices. Education doesn’t end with a degree, a certification, or on-the-job training. It should be seen as a lifelong habit of mind. By attending events and seminars outside the orbit of your day-to-day business, you will be exposed to new ideas and pick up on new trends within your industry and related industries.

Organic growth. A natural goal we all have is to grow our business. When you volunteer, the cultivation of business growth can tend to happen more organically, as a function of developing relationships within the membership environment. As you discover ways to collaborate and partner with others, those seeds will sprout.

I firmly believe that volunteers are the lifeblood of an association. They keep our communities engaged and informed. Volunteering can take up a lot of spare time, but when I reflect and ask myself whether I should have volunteered, the answer is always a resounding YES!

“Hope for the best, prepare for the worst”: Salvaging the Client List

You’ve probably heard the story before. It begins, “The study started with a client list….”

I can’t tell you how many times I’ve had a client call and tell me that. The stories follow a pattern. The client says it’s a great list and you should be able to complete the study easily with it. Sounds great, right?

Here comes the plot twist. They forgot to tell you the list is 4 years old and hasn’t been touched since. Oh, and by the way, only 30% of the records have a phone or email address. Suddenly, easy street is filled with potholes.

This isn’t the end of the story, and it can have a happy ending. A substandard client list can be rescued with these investigative approaches and performance enhancements:

• Flag any cell phone numbers so they can be separated out and dialed manually, which also ensures TCPA compliance.

• Ask yourself: what is most important on their list? What is the key sampling element? Is it the individual (contact name)? If so, the file can be run against the National Change of Address (NCOA) database to see if the person has moved. If the person has moved, a search can be run for the new address. The next step is to identify the landline and (or) cellular telephone numbers associated with that individual at the new address.

• If location/address is the key element, check for the most up-to-date telephone numbers (either landline or cellular) and name associated with that address.

• Send the call list to a sample provider for verification. Does the information in your list match the sample provider’s database?

• If the information doesn’t match, can you append a new phone number or email address?

• Do you still have open quotas? See if you can append demographics to target them.

• When you’ve exhausted all options on the client list and the study still isn’t completed, order an additional custom sample that meets the ultimate client’s specifications (or at least comes close). Then dedupe the client list from any custom sample orders, as sketched below.
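
A minimal sketch of that dedupe step, assuming both files carry phone and email columns that have already been standardized (the file and column names are illustrative):

```python
# Minimal sketch: drop custom-sample records that duplicate the client list.
# Assumes both files carry "phone" and "email" columns already standardized
# (ten-digit phone strings, lowercase email addresses); names are illustrative.
import pandas as pd

client_list = pd.read_csv("client_list.csv", dtype=str)
custom_sample = pd.read_csv("custom_sample.csv", dtype=str)

known_phones = set(client_list["phone"].dropna())
known_emails = set(client_list["email"].dropna().str.lower())

deduped = custom_sample[
    ~custom_sample["phone"].isin(known_phones)
    & ~custom_sample["email"].str.lower().isin(known_emails)
]
deduped.to_csv("custom_sample_deduped.csv", index=False)
print(f"Removed {len(custom_sample) - len(deduped)} duplicate records")
```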

With the help of a good sample provider, even a subpar client list can be salvaged and the study brought to completion on time.