The research industry can’t stop talking about the “data quality crisis.” Fraud, bots, disengagement, straightlining. It’s the laundry list we all know by heart.
But let’s be honest: a big part of this mess is self-inflicted. We created it by treating people like products.
They became “inventory” and “completes.” These human beings (busy professionals, consumers, decision-makers) are reduced to quotas in a spreadsheet. And when you treat people like widgets, you shouldn’t be surprised when the data ends up feeling like it came off an assembly line: uniform, stripped of nuance, and unreliable.
Here’s what a “typical” project sample request often looks like:
Targeting:
• Geography: USA
• Target: Contractors, General Contractors, Roofing Installers
• IR: 5%, 10%, 15%
• LOI: 15 minutes
• N: Max feasibility
On paper, that looks like a complete brief. It isn’t. Quoting on vague specs doesn’t help the client; it sets them up to fail. Those numbers inevitably collapse in field, and then everyone points fingers.
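To see why the numbers collapse, it helps to run the arithmetic. The sketch below is a rough back-of-envelope feasibility check; the panel size, contact rate, and response rate are hypothetical assumptions chosen for illustration, not real benchmarks.

    # Back-of-envelope feasibility check for the spec above.
    # Every input below is a hypothetical assumption, not real panel data.

    def feasible_completes(panel_size, contact_rate, response_rate, incidence):
        """Rough estimate of the completes one study could yield from a panel."""
        return round(panel_size * contact_rate * response_rate * incidence)

    panel_size = 40_000    # assumed reachable contractors on a hypothetical panel
    contact_rate = 0.60    # share we can realistically invite in the field window
    response_rate = 0.10   # share of invitees who finish a 15-minute survey

    for incidence in (0.05, 0.10, 0.15):  # the "IR: 5%, 10%, 15%" range from the bid
        estimate = feasible_completes(panel_size, contact_rate, response_rate, incidence)
        print(f"IR {incidence:.0%}: ~{estimate} completes")

Under those assumed inputs, the same bid yields anywhere from roughly 120 to 360 completes depending on which incidence rate turns out to be true, a threefold swing no one can build a timeline around. That spread is the cost of leaving the target and the incidence undefined.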
Pushing back on specs like these isn’t about being difficult or slowing down bids. It’s about responsibility. If our shared goal is to deliver insights that matter, then we owe it to our clients (and to respondents) to demand clarity. Providers need to stop pretending feasibility can be conjured out of thin air. Clients need to share more context, even if it feels messy or incomplete. Researchers need to frame sample not as a commodity line item, but as a living, human part of the design.

We’re supposed to be working with each other, not against each other. Asking sharper questions, challenging vague requests, and refusing to reduce people to checkboxes isn’t obstruction; it’s collaboration. The more transparent we are up front, the smoother projects run, the stronger the data, and the more valuable the insights.
Vague requests like that one happen because we’ve convinced ourselves respondents are interchangeable. As if a “contractor” is just a checkbox you can fill at will. As if incidence is a knob you can set at 5%, 10%, or 15% without any connection to the actual dynamics of the workforce.
Providers get pushed to promise the impossible just to win projects. Clients get sold numbers, not clarity. Researchers are left to explain why timelines blew up and quotas couldn’t be met.
We throw around terms like “quality controls” and “fraud detection” as if tech can save us. And while those tools definitely matter, they don’t address the root problem: respondents aren’t products, and treating them like products is the surest way to destroy data quality.
Fixing this starts with a shift in mindset. Respondents are people, and people don’t fit neatly into incidence rates without careful definition. If we want better data, we need to stop acting like order-takers and start acting like partners. That means pushing back on vague bids, asking sharper questions, and guiding clients to articulate exactly who they want to understand.
Of course this won’t solve every data quality issue, but it will help. Because respect isn’t a “nice to have.” It should be the baseline: relevant recruitment, fair incentives, and surveys that don’t waste people’s time. The more we treat respondents as human beings, not headcount, the stronger our insights become.
The fix isn’t more patchwork. It’s remembering the obvious: behind every complete is a human being. And until we start treating them that way, no amount of tech, tracking, or fraud detection can save us.
Contact: Ariane Claire, Research Director, myCLEARopinion Insights Hub
A1: Because people aren’t interchangeable. Reducing participants to quotas and incidence rates strips away the nuance needed for meaningful insights.
A2: They need to move from order-taking to collaboration. The goal isn’t speed at all costs—it’s clarity from the start.
A3: By recognizing that every “complete” is a human being who deserves fairness and relevance.