by Zack Hancock
Cybercrime and fraud activities impact every industry — and, unfortunately, marketing research isn’t immune. Cybercrime is projected to cost the global economy $8 trillion in 2023, growing to a staggering $10.5 trillion by 2025.*
Online survey research can be a lucrative target for bad actors looking to make quick money from survey incentives. Confronting fraud and validating respondents is now more important than ever. However, respondent fraud is only one piece of the data-collection quality puzzle; how a real respondent interacts with surveys can be just as impactful to your survey’s data quality.
As researchers, how do we ensure that we are receiving reliable, quality data from respondents? Understanding the difference between these two behaviors (fraudulence versus disengagement), and recognizing how each can impact your research results, are the first steps toward unlocking better data quality.
Let’s look at fraudulence first. Survey fraud can take many forms — from bots leveraging scripting tools and AI inputs to access survey opportunities en masse, to click farms working together to exploit vulnerabilities, to disingenuous individuals looking to gather incentives. Combating these challenges requires a three-pronged approach: a well-designed survey, trusted and transparent sourcing, and a Fraud/Dupe detection tool to assess a respondent’s digital fingerprint. Ideally, fraud detection should happen at panel recruitment, as well as prior to a respondent entering a survey.
Disengaged respondents pose just as large a threat to final data sets as fraud. Inattentive behaviors such as speeding, straightlining, and failing to fully read questions reduce the quality of survey data. While fraud and disengagement both lead to poor data, they must be viewed as distinct respondent behaviors. A disengaged respondent is oftentimes not a “bad actor” — their responses could simply be impacted by a misalignment of motivation and task (think incentive vs. survey length) and/or a challenging survey experience.
Understanding the impact of how a respondent interacts with a survey is often overlooked but can be considered a critical step towards unlocking better data quality. Creating a detection strategy and implementing the considerations outlined below can help aid in the identification and removal of fraud and disengagement during data collection.
During the survey screener, disengaged respondents can start to be identified with appropriate techniques, right alongside fraudsters. There are a handful of ways to recognize these individuals: you can validate stated demographic data against profiled demographic data, check for outliers in responses to screening questions, or identify nonsensical or extremely unlikely response data. These checks aim to remove disengaged and fraudulent respondents in real time, before they enter the survey proper. When developing a screener, it’s critical to craft your questions in a way that not only qualifies the right respondents but also keeps out the wrong ones.
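The screener checks above can be sketched as a simple validation routine. The field names and tolerances below are hypothetical, and real screeners would compare many more attributes; this only shows the shape of the logic.

```python
def screen_respondent(stated, profiled, age_tolerance=2):
    """Return a list of screener flags; an empty list means the respondent passes.

    `stated` holds the respondent's screener answers; `profiled` holds the
    panel's stored demographic data. Both are illustrative dicts.
    """
    flags = []

    # 1. Validate stated demographics against profiled panel data.
    if abs(stated["age"] - profiled["age"]) > age_tolerance:
        flags.append("age_mismatch")
    if stated["gender"] != profiled["gender"]:
        flags.append("gender_mismatch")

    # 2. Check for outlier / implausible screening answers.
    if stated["household_size"] > 15:  # extremely unlikely value
        flags.append("implausible_household_size")

    # 3. Catch nonsensical combinations, e.g. claiming use of every
    #    single brand shown in a brand-usage question.
    if len(stated["brands_used"]) == stated["brands_shown"]:
        flags.append("selected_all_brands")

    return flags

resp = {"age": 34, "gender": "F", "household_size": 3,
        "brands_used": ["A", "B"], "brands_shown": 8}
panel = {"age": 35, "gender": "F"}
print(screen_respondent(resp, panel))  # [] -> respondent qualifies
```

Because the checks run in real time, a respondent who trips one of these flags can be terminated before entering the survey proper.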
Once a respondent has qualified and passed the screener, a different — yet just as impactful — set of quality measures and detection tools can be used to maintain the integrity of the survey data. Quality measures such as red herring questions, straightlining and pattern-anomaly checks, speeding monitors, and evaluation of inconsistent responses and outliers are great ways to gauge attentiveness. Disengagement often produces inconsistent and unreliable data, and creating flags to remove these inattentive respondents is key to preserving data quality. It is recommended to use a combination of detection measures, flagging respondents who fail multiple in-survey checks before removing them from the data set.
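The in-survey checks above can be combined into a flagging routine like the following sketch. The record fields and thresholds (e.g., speeding defined as finishing in under a third of the median interview length) are illustrative assumptions, and the removal rule follows the guidance of dropping only respondents who fail multiple checks.

```python
def attention_flags(record, median_seconds):
    """Collect in-survey quality flags for one respondent record."""
    flags = []

    # Speeding: completed in under a third of the median interview length
    # (an assumed threshold; tune to your own study).
    if record["seconds"] < median_seconds / 3:
        flags.append("speeding")

    # Straightlining: identical answers across an entire grid question.
    for grid in record["grids"]:
        if len(set(grid)) == 1:
            flags.append("straightlining")
            break

    # Red herring: an instructed-response item, e.g. "select 'Strongly agree'".
    if record["red_herring_answer"] != record["red_herring_expected"]:
        flags.append("failed_red_herring")

    return flags

def keep_respondent(record, median_seconds, max_flags=1):
    """Remove only respondents who fail multiple checks, per the guidance above."""
    return len(attention_flags(record, median_seconds)) <= max_flags

r = {"seconds": 95, "grids": [[3, 3, 3, 3, 3]],
     "red_herring_answer": 5, "red_herring_expected": 5}
print(attention_flags(r, 600))   # ['speeding', 'straightlining']
print(keep_respondent(r, 600))   # False -> fails two checks, remove
```

Requiring multiple failed checks before removal keeps a single borderline behavior, such as one fast page, from unfairly disqualifying an otherwise attentive respondent.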
Beyond in-survey quality checks, you can help prevent disengagement by designing surveys from a human perspective. Is the survey you’re writing one you would gladly participate in? To keep respondent engagement a priority in your research, consider the following at a project’s inception:
How long would you be willing to spend completing a survey?
Surveys with a longer length-of-interview typically increase disengagement, resulting in higher drop-off rates.
Would you be interested in filling out multiple open ends?
More than two to three open ends can be tiresome for respondents, which leads to shorter, less genuine responses.
Are questions easy to complete on any device?
If a question is overly wordy or cumbersome, like an excessive attribute battery, many respondents may lose interest.
Ultimately, leveraging best-practice sampling techniques while understanding how respondents engage is critical to assessing quality at a holistic level. Viewing fraud and disengagement as two distinct respondent behaviors can give you a better perspective for designing and implementing more effective research. Respondent experience is often overlooked when determining the level of quality and reliability in research, but the bottom line is that respondent experience matters. By combining fraud detection tools, engagement measurements, and more thoughtful questionnaires, higher-quality and more reliable data for your business is within reach.
Zack is a Senior Project Manager on Burke’s Data Collection team. His expertise in online panel sampling and relationships with top sample providers ensure clients have access to sustainable, high-quality data sources.
* Brooks, Chuck. “Cybersecurity Trends & Statistics For 2023; What You Need To Know.” Forbes, 5 Mar. 2023, www.forbes.com/sites/chuckbrooks/2023/03/05/cybersecurity-trends–statistics-for-2023-more-treachery-and-risk-ahead-as-attack-surface-and-hacker-capabilities-grow/?sh=8389e6f19dba. Accessed 6 Jul. 2023.