Guilty Until Proven Innocent

When clarity becomes suspicious, the industry starts mistaking expertise for fraud.

In the rush to fight bad data, we’ve learned to distrust the very respondents we need most.

by Ariane Claire, myCLEARopinion Insights Hub
Dec 15, 2025

Some days it feels like the entire research industry has quietly adopted a new philosophy: every respondent is guilty until proven innocent. And it’s wild, because for years everyone complained that panels were full of fraud, bots, nonsense open ends, inattentive clicking, endless VPN triangles, and whatever else we’ve all had to deal with.

We wanted better data. We begged for it. But now that good, clean, highly qualified respondents are actually showing up? Teams are suddenly suspicious of them. It’s like we’ve been burned for so long that we’ve lost the ability to recognize what “good” even looks like.

When ‘Too Good’ Gets You Removed

This is the part that blows my mind. People are removing respondents simply because the data looks too polished. Too consistent. Too coherent. They’re questioning fast, confident industry experts because “real humans wouldn’t answer like this.” Yes, they would. And they do. Every single day. Especially in B2B. But because we’ve normalized years of garbage, now clarity looks fake and messiness looks “authentic.” You almost can’t script the irony.

The Body-Count Approach to Quality

And it’s not just over-suspicion. It’s this desperate hunt for reasons to remove people, almost as if teams feel they need a body count to prove they’re doing their jobs. I’m seeing removals for things so ridiculous it’s hard not to laugh. Duplicate IPs, for example. In B2B. As if hundreds of professionals don’t sit behind the same corporate firewall every single day. Of course they share IPs. That’s how businesses work. But some teams treat a duplicate IP like a smoking gun instead of what it almost always is: two (or ten) people answering a survey from the exact same company we’re targeting. Duplicate IPs aren’t fraud. They are literally how corporate networks work.
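If a team insists on keeping an IP check, it can at least be context-aware. Here’s a minimal sketch of what that might look like, with hypothetical field names and a made-up time window, where a shared IP only escalates when it co-occurs with stronger evidence that one actor submitted both responses:

```python
from datetime import timedelta

def shared_ip_needs_review(resp_a, resp_b, window_minutes=5):
    """Treat a shared IP as a weak signal, never a verdict.

    resp_a / resp_b are dicts with hypothetical keys:
    'ip', 'device_fingerprint', 'submitted_at' (a datetime).
    """
    if resp_a["ip"] != resp_b["ip"]:
        return False  # different networks, nothing to look at

    # Colleagues behind one corporate firewall share an IP by design,
    # so the IP match on its own never triggers anything.
    same_device = resp_a["device_fingerprint"] == resp_b["device_fingerprint"]
    close_in_time = (
        abs(resp_a["submitted_at"] - resp_b["submitted_at"])
        <= timedelta(minutes=window_minutes)
    )

    # Escalate to human review only with corroborating evidence,
    # and even then: review, don't auto-remove.
    return same_device and close_in_time
```

The point isn’t this particular check. The point is that a shared IP should only narrow where you look, while removal still requires actual evidence.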

Screeners as Trap Doors

And then there are screeners. Instead of being straightforward and designed to qualify the right audience, some are now built like trap doors, repeating the same question three different ways just to trip people up, turning job titles into riddles, or adding unnecessary layers just so someone can say, “Aha! Inconsistent! Remove them.” Screeners shouldn’t feel like an exam designed by someone who wants you to fail. Yet here we are, with people proudly “catching” respondents in contradictions that they themselves engineered. It’s gross, honestly. And lazy. A good screener qualifies; it shouldn’t hunt.
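To make the distinction concrete, here’s a minimal sketch of “qualify, don’t hunt” in code. The criteria and field names are hypothetical; the point is that eligibility is a transparent checklist asked once, with no re-worded repeats engineered to manufacture a contradiction:

```python
# Hypothetical criteria for an illustrative B2B study.
TARGET_INDUSTRIES = {"manufacturing", "construction", "engineering"}
QUALIFYING_ROLES = {"decision maker", "specifier", "influencer"}

def qualifies(answers):
    """answers: dict of screener responses, each question asked exactly once."""
    return (
        answers.get("industry", "").lower() in TARGET_INDUSTRIES
        and answers.get("purchase_role", "").lower() in QUALIFYING_ROLES
        and answers.get("years_experience", 0) >= 2
    )
```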

We Built This Mess

The industry created this mess. We spent years tolerating 20–30 minute surveys, impossible incidence rates, CPI models built for volume instead of verification, sample marketplaces designed to capture anything with a pulse (or script), and open ends that practically invite gibberish. So naturally, fraud skyrocketed. In response, we built these massive quality engines whose performance seems to be measured by how many people they can remove. And now those tools, and the people running them, are turning on the very respondents we claim we want: smart, experienced, consistent, concise professionals who actually know what they’re talking about. We are rewarding chaos and punishing expertise.

Punishing the Pros

I’m watching legitimate panelists get flagged for typing too fast (heaven forbid they actually know the answer), understanding industry terminology (because they work in that industry), writing open ends that read the way real professionals actually write (and getting accused of being AI for it), or completing at a pace consistent with knowledge and expertise (and being called speeders). Meanwhile, word salad gets a pass because it “feels human.” We have completely inverted our standards. Bad answers = believable. Good answers = suspect.
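For what it’s worth, a speed check doesn’t have to punish people for knowing the answers. A minimal sketch, with an illustrative threshold that is an assumption rather than a recommended standard: flag only the extreme low tail relative to that survey’s own median, and flag for review rather than removal:

```python
import statistics

def completes_to_review(completion_seconds, floor_fraction=0.3):
    """Return (index, seconds) pairs worth a second look.

    completion_seconds: completion times for one survey, in seconds.
    floor_fraction: fraction of the median below which a complete gets
    reviewed (0.3 is an illustrative assumption, not a standard).
    """
    median_time = statistics.median(completion_seconds)
    floor = median_time * floor_fraction
    return [(i, t) for i, t in enumerate(completion_seconds) if t < floor]
```

An expert finishing in 60% of the median time sails through; someone clicking through in a tenth of it gets a look, not an automatic removal.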

The Real Cost

The cost of this mindset is enormous. We’re shrinking niche samples, losing specialists clients desperately need, breaking trend lines, introducing bias, and frustrating real respondents who actually want to participate. And we’re doing all this while telling clients they’re getting high-quality insights. How high-quality can anything be if we’re discarding the strongest people simply because they don’t resemble the chaos we’ve become accustomed to?

Relearning Trust

Good panels still exist. Real humans still exist. Not everyone is a bot or ChatGPT-in-a-trench-coat. But we can’t keep treating every respondent like they’re lying to us just because the industry has trauma from years of bad data. Quality control should be evidence-based. Identity-based. Behavior-based. Context-aware. Not fear-based. Not “gotcha”-based. Not built on arbitrary traps and outdated assumptions like duplicate IPs in corporate environments. Fraud forced us to evolve. That part was necessary. But fraud shouldn’t get to define the industry forever. At some point we have to relearn how to trust good data when we see it and stop punishing qualified professionals for being exactly what we claim we’re always trying to find.
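One way to make “evidence-based, not gotcha-based” concrete: score respondents across several weak signals and only escalate when they accumulate, so no single heuristic can remove anyone on its own. The signal names, weights, and threshold below are hypothetical; the structure is the point:

```python
# Hypothetical weights; a shared corporate IP and a fast finish are
# deliberately too weak to matter on their own.
SIGNAL_WEIGHTS = {
    "duplicate_device_fingerprint": 3,
    "failed_identity_verification": 3,
    "copy_pasted_open_end": 2,
    "straightlined_grids": 1,
    "fast_completion": 1,
    "shared_corporate_ip": 0,
}
REVIEW_THRESHOLD = 4  # escalate to human review, never auto-remove

def needs_review(signals):
    """signals: set of signal names observed for one respondent."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals) >= REVIEW_THRESHOLD
```

Under a scheme like that, the fast, fluent expert behind a corporate firewall never crosses the line, while a duplicated device that also fails identity verification does.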

Contact: Ariane Claire, Research Director, myCLEARopinion Insights Hub

Frequently Asked Questions

Q1: Why are high-quality respondents being flagged more often today?

A1: Because years of bad data have trained teams to distrust anything that looks competent.


Q2: Why do things like duplicate IPs and fast completion times get treated as red flags?

A2: Because outdated assumptions are being applied without context.


Q3: What’s the real risk of this “guilty until proven innocent” mindset?

A3: It systematically removes the exact respondents clients actually want.
