by Brad Franz
Do you really know what you are buying when it comes to online sample? The sample landscape continues to evolve, growing in complexity as many online sample providers look to technological innovations to help streamline business practices and maintain financial viability of panels and other channels.
Just as all modes of data collection have their own inherent biases and shortcomings, the same is true for online data collection techniques. As with any process, not all techniques are a universal fit for all research objectives. Because of this, it can be challenging for buyers of online data collection to understand the impact of selecting one approach or technology over another. The method by which sample is delivered will directly affect your results, especially when tracking data over time.
It is easy to be attracted to discounted prices and reduced field time resulting from technological enhancements developed to drive efficiency, but, depending on the research objective, these can actually result in a compromised data set and an inability to replicate results in future projects. For example, a 30-minute discrete choice survey may require a very different data collection design than a 5-minute brand awareness study due to the complexity of the task and questionnaire design. Each project deserves an individual sampling assessment with an eye toward customization.
There are some key topics to consider that can help safeguard against fluctuations in your sample caused by either technique or technology. Below are a few essential questions to ask a sample provider about how respondents are sourced and how the sample is managed. The goal of this process is to reduce the operational impact on study results.
How are new members recruited to join the partner’s panel?
Panelists are recruited for panels in several ways. Some common approaches are through strategic partnerships, website alliances, and media recruitment (social media, TV, radio, and magazines). Different sources contain inherent bias. For example, panelists from social media sources can be very responsive but tend to have lower completion rates and lower quality scores on medium-to-long surveys or task-heavy research projects. Your research objective should dictate the type of respondents being used in your data collection efforts.
Are the respondents proprietary to the partner or will third parties be used to complete my research study?
Be aware that some providers will claim a large number of panelists when in fact they only have access to those counts via partnerships with other providers. The more providers involved, the more room there is for fluctuation. While there are significant benefits to multi-sourcing approaches, these need to be closely monitored and managed to maintain quality and consistency. This is especially true of tracking projects. The distribution of demographics by source should be closely monitored: demographic fulfillment on trackers can shift between waves, and demographic groups' responses can vary from source to source. You want to make sure controls are in place and that you are able to track and trend from wave to wave.
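Monitoring demographic distribution by source from wave to wave can be done with a simple share comparison. The sketch below is a hypothetical illustration, not any provider's actual tooling: the source names, demographic groups, counts, and the 5-point shift threshold are all illustrative assumptions.

```python
# Hypothetical sketch: flag demographic shifts between tracker waves, per sample source.
# All source names, groups, counts, and the threshold are illustrative.

wave1 = {"PanelA": {"18-34": 120, "35-54": 150, "55+": 130},
         "PanelB": {"18-34": 90,  "35-54": 60,  "55+": 50}}
wave2 = {"PanelA": {"18-34": 95,  "35-54": 160, "55+": 145},
         "PanelB": {"18-34": 140, "35-54": 55,  "55+": 45}}

def shares(wave):
    """Convert raw completes to each demo group's share of a source's total."""
    out = {}
    for source, demos in wave.items():
        total = sum(demos.values())
        out[source] = {demo: count / total for demo, count in demos.items()}
    return out

def flag_shifts(w1, w2, threshold=0.05):
    """Return (source, demo, delta) wherever a share moved more than `threshold`."""
    s1, s2 = shares(w1), shares(w2)
    flags = []
    for source in s1:
        for demo in s1[source]:
            delta = s2[source][demo] - s1[source][demo]
            if abs(delta) > threshold:
                flags.append((source, demo, round(delta, 3)))
    return flags

print(flag_shifts(wave1, wave2))
```

Running this flags every source/group combination whose share of completes moved more than five points between waves, which is the kind of control that makes wave-to-wave trending defensible.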
What quality measures are in place to help ensure a high-quality respondent base?
It’s beneficial to understand how respondents are utilized by the partners. This holds especially true for hard-to-reach demographics. What “frequency limiters” are in place to protect respondents from over-contact or overuse? What happens to a respondent who consistently fails quality checks? Is that respondent removed or restricted from taking part in future research projects? It is important to select a partner who actively manages their respondent base to ensure the highest-quality online data collection solution.
How are respondents recruited to participate in research projects?
Respondents can be directed and/or invited to participate in your research project in a variety of ways. Common approaches include direct invitation of panelists, link availability via respondent dashboards/portals, and dynamic sourcing, to name a few. The appropriate recruitment technique for your project depends on understanding how the different approaches will react to the study specifications and which approaches could lead to unintended data fluctuations.
How are the panelists incentivized for their participation?
Common incentive types include sweepstakes, points, custom currency, and cash. The more complex the task, the higher the incentive should be. Respondents are the lifeblood of online research, and making sure they are compensated adequately by sample providers is paramount to the long-term success of online quantitative research. Price is obviously a key factor driving purchase decisions, so it is all the more important to understand how the cost per complete trickles back to the actual respondent taking part in your research project.
What respondent sources will be used on my research study?
Common respondent sourcing techniques include traditional panels, dynamic sourcing, open recruitment, and aggregated multi-sourced sampling to name a few. Transparency is key. It’s important to know and understand where the sample is coming from, and the partners you work with should be forthright and honest about the sourcing. It should also be transparent if pre-screening questions, which can affect the incidence, are being asked before respondents are directed to your research study.
How will survey specifications affect sample feasibility and access to respondents?
Survey specs such as quotas, incidence, and length of interview need to be carefully considered when designing a sample plan with your online partners. In addition to these basic metrics, platform access such as mobile accessibility needs to be discussed, since inclusion or exclusion could affect feasibility overall and by demographic group. Data collection field time should also be a conversation point, as biases can occur depending on the time of day or day of week when respondents answer.
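The interplay of these specs can be seen in a back-of-envelope feasibility calculation. The figures below are illustrative assumptions, not benchmarks from any provider; the point is how response rate, incidence, and length-driven completion rates multiply together.

```python
# Hypothetical back-of-envelope feasibility check; all rates are illustrative assumptions.
def feasible_completes(invites, response_rate, incidence, completion_rate):
    """Estimate completes from a given number of invitations.

    invites          -- panel invitations sent
    response_rate    -- share of invitees who click through to the survey
    incidence        -- share of entrants who qualify after screening
    completion_rate  -- share of qualifiers who finish (drops as surveys get longer)
    """
    return round(invites * response_rate * incidence * completion_rate)

# Same sample plan, but a long survey (lower completion) vs. a short one:
print(feasible_completes(20000, 0.15, 0.40, 0.60))  # -> 720
print(feasible_completes(20000, 0.15, 0.40, 0.90))  # -> 1080
```

Because the rates multiply, a change in any one spec, such as a longer questionnaire depressing the completion rate, can materially shrink the number of achievable completes and should be revisited with the sample provider before fielding.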
By asking these essential questions of your online sample providers, you will become more informed about your sample choices and how those choices can affect your research goals. A solid data collection strategy will lead to a data set that is reliable, sustainable, and most importantly, consistent.
As a Senior Consultant in Burke’s Sampling Department, Brad Franz works directly with Burke’s end clients and a network of online panel providers to help create and maintain best practices for online data collection.