Happy MR Podcast Series

Ep. 512 – HMRP Monday Edition: How to Avoid Fraudulent Respondents with Dr. Leib Litman, Chief Research Officer at CloudResearch

Our guest today is Dr. Leib Litman, Chief Research Officer at CloudResearch. 

Founded in 2015, CloudResearch is headquartered in NY and is a participant-sourcing platform for online research and surveys. 

Prior to joining CloudResearch, Leib served as an Associate Professor of Psychology at Lander College. 

Find Leib Online:

Find Jamin Online:

Find Us Online: 


This Episode is Sponsored by:

This episode is brought to you by Michigan State’s Marketing Research program. Are you looking for higher pay, to expand your professional network, and to achieve your full potential in the world of market research?

Today, the program has tracks for both full-time students and working professionals.

They also provide career support assisting students to win today’s most sought-after jobs. In fact, over 80% of Michigan State’s Marketing Research students have accepted job offers 6 months prior to graduating.

The program has three formats:

  • The first is a Full-Time 100% Online program taught over 12-months starting in January 2022
  • The second is a Part-Time 100% Online program that is 20-months. This one starts in May 2022 and is specifically designed for working professionals,
  • And of course, they offer a Full-Time 12-month in-person experience that starts in September 2022

All programs include real-world experience and full-time job placement support.

If you are looking to achieve your full potential, check out MSMU’s programs at:


It costs nothing to get more details. Take the time, invest in yourself. You are worth it and your future self will thank you. Class sizes are limited, so please, check it out today. 

This episode is brought to you by HubUX, a research operations platform for private panel management, qualitative automation including video audition questions, and surveys. 

For a limited time, user seats are free. If you’d like to learn more or create your own account, visit hubux.com.


Sponsor MSMU: https://broad.msu.edu/marketing

Sponsor HubUX: https://hubux.com



Jamin Brazil: Today is March 7th, 2022, happy Monday. You’re listening to the Happy Market Research Podcast. I’m Jamin Brazil, your host. Support for the Happy Market Research Podcast and the following message comes from Michigan State’s marketing research program and HubUX. I’ve done hundreds of interviews with today’s top minds in market research, and many of them trace their roots to Michigan State’s marketing research program. Are you looking for a higher-paying job, to expand your professional network, and to achieve your full potential in the world of market research? Today the program has tracks for both full-time students and working professionals. They also provide career support, assisting students to win today’s most sought-after jobs; in fact, over 80% of Michigan State’s marketing research students have accepted job offers six months prior to graduating. If you are looking to achieve your full potential, check out MSMU’s programs at broad.msu.edu/marketing. HubUX is a research operations platform for private panel management, qualitative automation, including video audition questions, and surveys. For a limited time, user seats are free; if you’d like to learn more or create your own account, visit hubux.com. This is episode 512 of the Happy Market Research Podcast, and according to iTunes, Doja Cat’s “Get Into It (Yuh)” is the number one song across the globe. Now, if you have kids in earshot, you may want to skip ahead 20 seconds; there are some explicit lyrics which I am a little bit embarrassed to even have on the show, and that’s coming from somebody that’s Gen X, right? When we would go trick-or-treating, our biggest fear was that they would put razor blades in our apples, there was just a lot – we had lawn darts. Our generation’s pretty hardcore, and yet this song makes me blush. Make sure that you put earmuffs on those kids.




Jamin Brazil: If you’re listening to this episode on the first Monday of March, I am likely on an airplane headed to Washington, DC, where I’ll be chairing a two-day event – I’ll be chairing the second day of Qual 360. If you’re in the area, I would love to grab a drink with you. DM me on social. It is going to be a ton of fun seeing some market research professionals in person. All right, so our topic is data quality, and for this I have brought in an industry expert. I will tell you this: data quality is at an all-time low, and our conversation is a little dark. If you need to find your happy place, don’t fret, we do have some happier topics coming down the line, but you can always find a smile, at least I can today, with Doja Cat and “Get Into It (Yuh)”. Our guest is Dr. Leib Litman, chief research officer at CloudResearch. Founded in 2015, CloudResearch is headquartered in New York and is a participant-sourcing platform for online research and surveys. Prior to joining CloudResearch, Leib served as an associate professor of psychology at Lander College. Sir, welcome to the podcast.


Leib Litman: Thank you, thank you for having me.


Jamin Brazil: I could not be more excited about diving into the topic of data quality. It is the oil that – or maybe it’s the sun, if we’re thinking about renewable energy. It is the power behind all of the insights that companies are using to make critical business decisions. At the center, we’ve seen a rise of automation which has created this hotbed for fraud, right? Because no longer are you able to disclose any sort of PII to a research data collection platform or to practitioners. Additionally, if you’re purchasing from one of the marketplaces, you really have no idea where the sample is coming from, unless you’re one of the maybe 300 people in the world that pays attention to that. What’s been really interesting to me is that the research I’ve done has netted about a 38% fraud rate in the marketplaces, and 18% fraud when I recruit from social media – and by the way, these are ads created on social asking people for their opinion, as opposed to sourcing against a known population of research participants. My question to you is, do we have a systemic issue?


Leib Litman: Yes, the issue is definitely systemic, and that 38% that you mentioned is very close to what many others have seen also, and pretty much exactly on the mark in terms of what we’re seeing very consistently. There was a talk given by Case as part of the Insights Association. Case is this group that really cares deeply about data quality, and they did a fairly large study looking at fraud, and their number was exactly 40%, so very similar to what you just mentioned. So it’s definitely systemic and definitely extremely prevalent.


Jamin Brazil: Yes, the issue for me became super visible in 2018. I was at MRMW and I heard a talk by the chief research officer, I think – I might have her title mistaken – scientist, excuse me, Tina Marr [ph] of Procter & Gamble. She had done a root cause analysis of some serious misses, from what the insights – what the outcome of the research said was going to happen – to what actually happened in the marketplace. And when she did her root cause analysis, she found that there was a huge amount of fraud going on inside of their studies, and at that point, they had pegged it at 35%. What I find really interesting is, I think of it from an economic point of view: it used to be the case, even 10 years ago, that we were paying participants five dollars to participate in a research project. Now our average CPIs have dropped, in some cases to less than a dollar if you’re general pop. How much do you think the usual participant is getting paid?


Leib Litman: You can really just ask them that question, and we have – you can run a survey and ask people, what are you getting paid for this? And people in the marketplace generally give a whole range of answers that are somewhere around 50 cents, but it really depends, because some people don’t actually get paid in dollars; they get paid in all kinds of rewards points, which is probably a very common way of paying participants. Some people get entered into some kind of a lottery – that’s a little bit less common, but it happens quite often. But usually it’s rewards points and 50 cents or less. Sometimes people will say 10 cents, 20 cents; it really ranges, but it’s not very much.


Jamin Brazil: It really does kind of beg the question to me, in terms of what the motivation is of people actually participating in research, if it’s not based on economic returns for them. Do you have any insight into why people do?


Leib Litman: For people to make an extra 50 cents here and there, or to get a little bit of rewards for participating in a survey, it kind of makes sense for a lot of people. Some people enjoy doing that sort of thing, and that is all I can say about that; I honestly don’t know anything beyond that. I know that there are some platforms, some panels, where people do get paid more, and usually you could expect to get better data quality there, but there’s also a flip side of that coin, which is that when the rewards are high, there’s actually a potential for more fraud as well, because that attracts fraud. It’s not clear what the correlation is between payment and fraud.


Jamin Brazil: I did a study where we sourced on social, and it paid 25 dollars for a complete. At first it started out very slowly – it’s a classic story, right? Five, 10 completes a day, and then all of a sudden, I was getting hundreds of completes in a day, and obviously, all of them were fraud. Right? Getting to your point, paying more doesn’t necessarily solve the problem.


Leib Litman: It really doesn’t, it doesn’t solve the problem. It does solve problems.


Jamin Brazil: Right.


Leib Litman: And the kind of problems that it solves are, you really can’t get too many people to be very engaged for a very long period of time, even if they are real participants. If you’re paying somebody 50 cents, you’re not going to get too many high quality open ends.


Jamin Brazil: Right.


Leib Litman: You’re not going to get someone to really engage for more than a couple of minutes or 10 minutes, if your survey is longer, if it’s 25, 30 minutes, it’s just not going to happen. So at some point, you have to pay people more if you want to do qualitative research for example, and if you want to get any kind of quality out of that. There, the issue of paying more makes a huge difference.


Jamin Brazil: The last five years have been the rise of the survey, right? You can’t go get a haircut without getting a satisfaction survey directly afterwards. Surveys are built into the CRMs; it’s basically the fabric of business, even at an SMB level, which is amazing. Meanwhile, we’ve got this global pandemic which has forced us into this digital framework almost exclusively, so it just kind of begets more surveys, right? Because you actually have these digital fingerprints that are expansive across the consumer journey, whereas before, so much of that journey was happening outside of a digital context. About how many surveys do you think are being completed in the US now?


Leib Litman: I would put it in the billions, and I would say that in addition to typical market research surveys, there have been other developments in our culture that have been transformative as well, that add to the number of online surveys being done. For example, in the last 10 years, there’s been a revolution in academia where now academic research is being done online at a level that couldn’t be imagined just 10 years ago. 10 years ago, pretty much nobody in academia was doing research online – and by academia, I’m talking about psychology research, sociology research and that kind of behavioral research that –


Jamin Brazil: Right.


Leib Litman: Used to happen through undergraduate subject pools; that’s where they used to get their respondents. And then at a certain point, academics figured out, hey, we can just recruit people online. And that’s been transformative. There’s also a tremendous amount of polling that’s done online now, which hadn’t been the case in the past. There’s a tremendous amount of public health research, medical research, and of course, there’s market research. And that really speaks to your point: our society is becoming so dependent on online research, more and more, and just thinking about the scope of it, and how much we depend on the data as a society, as a culture, is really astounding. And when you look across all of these different domains – the academic, right, the polling, the market research, the medical research, right, pharmaceutical research, all that stuff – it’s certainly in the billions. I was just talking to a client yesterday, a polling firm, and they said, yes, we collect about 50,000 responses a day, right? That’s a single polling firm, right? Just one, so –


Jamin Brazil: Yes, that’s right, and we’re entering into – 2022 is an election year in the US, which usually means we’ll see a volume increase of between 10 and 20 percent in completes, predominantly centered around political concerns. So thinking about the need expanding, right – the need for consumer insights, for participants to take surveys – where’s the supply going to come from?


Leib Litman: Yes, that’s a great question. In terms of polling around election time, everyone is having the same problems. It’s not just online surveys, it’s telephone surveys as well, particularly in very specific geographical regions in the United States. Right around the time of the last election, non-response from –


Jamin Brazil: Right.


Leib Litman: Telephone surveys was through the roof, sometimes 99%. You just need respondents and you can’t find them, and that introduces a whole bunch of bias. I would say in online research there’s the same problem: there’s more demand now than there is supply, and that’s going to get stretched in the coming months as the midterm elections draw nearer. And exactly what the answer to that is is unclear. I do think that offering better incentives can certainly go a long way to attract more people, right? To participate. But exactly who’s willing to do that and how that’s going to be implemented is a huge set of question marks. The other thing in relation to this question that also relates to fraud is that what people don’t realize is that much of their sample isn’t actually real sample. Much of it is just noise, and part of the reason that people are reluctant to deal with this problem is because the demand is so high. It’s kind of scary, but that’s actually partly what’s going on.


Jamin Brazil: Yes, research happens – it’s like music, it happens in time, and there’s a business decision which is really kind of the end of the fuse, right? And so there’s this tremendous amount of pressure that is created during the fielding stage of a project, when you’re getting your participants in, to make sure that you’re getting all of those quotas filled, that it’s ‘representative’, et cetera, et cetera. And when you think mathematically, if in fact 40% are fraud, then you know for a fact there’s a proportion of your 60% – your qualified, your non-fraud – that actually are fraud, you’re just simply not catching them, right? Unfortunately, data’s a lot like a bucket of water, more than a really clear, siloed individual person. And so subsequently, there is this ability for bad actors to get into the pool unbeknownst to us. And what concerns me is, to your point, that’s not going to go away. The pressure’s not going to go away – we’ve got to get completes done in time – but we’ve seen this decline in the amount of money that companies are willing to pay for quality participants. Additionally, we’re seeing this influx of private equity and venture capital funds into the market research space, and that’s creating a lot of pressure for survey platforms and operators to improve their gross margin. One of the best ways to improve your gross margin is simply by marking up a complete: let’s say you’re going to get three dollars for that complete, but you’re actually only paying a dollar for it – that’s a really good return, right, on that complete. And so the incentive is really for the operator to go with the lowest cost and then kind of turn a blind eye to the quality considerations.


Leib Litman: Yes, and – exactly, and that’s happening more and more, and at a certain point, the end client has to realize that what they’re getting is just not what they need, right?


Jamin Brazil: Right.


Leib Litman: Pretty often what they’re getting is just noise. It goes back to what you said before: they’re looking for certain insights, they’re looking for certain actionable ideas, and the conclusions that they’re drawing from their research can be completely false. And it is so easy to actually demonstrate this; it’s just that very often either the clients aren’t looking for it, they don’t know exactly how to look for it, or they kind of just trust what they’re getting. But there’s an easy way to actually check. For example, anyone can just conduct a survey, right? And ask a simple question in that survey, right? Are you a petroleum engineer, OK? Yes or no, just simple, yes or no, right? Now maybe one out of 10,000 people is a petroleum engineer in the real world, but what everybody will find in their survey is going to be at least 10%, and that’s the best case scenario.


Jamin Brazil: Right.


Leib Litman: Sometimes you’ll find that it’s 20%, and remember that if you’re getting 10% saying yes, usually it’s coming from just noise and randomness, right? If somebody doesn’t read the question, they have a 50% chance of saying yes or no, so if you get 10% yeses, there’s another 10% who said no just by random chance. And so you’re getting 20% complete, right, noise –


Jamin Brazil: Noise.


Leib Litman: From 20% of your sample, and then if you look at the actual outcome variables of interest in that sample from these people, you will see that it looks like complete nonsense.
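As an aside for readers, the back-of-the-envelope arithmetic in this exchange (a rare-screener question with a near-zero true base rate, and random responders splitting roughly 50/50 between yes and no) can be sketched in a few lines of Python. This is a hypothetical illustration, not a CloudResearch tool:

```python
import random

def estimate_noise_fraction(yes_rate, true_base_rate=0.0):
    """Estimate the fraction of random responders from a rare-screener
    question (e.g. "Are you a petroleum engineer?"). Random responders
    say yes with probability 0.5, so the observed yes rate is roughly
    true_base_rate + 0.5 * noise; the noise fraction is therefore about
    twice the excess yes rate."""
    excess = max(yes_rate - true_base_rate, 0.0)
    return min(2 * excess, 1.0)

# The 10% "yes" rate from the episode implies ~20% of the sample is noise:
# the 10% who said yes at random plus a matching 10% who said no at random.
print(estimate_noise_fraction(0.10))  # 0.2

# Sanity check by simulation: mix 20% random responders into a sample
# whose true base rate is zero, then recover the noise fraction.
random.seed(1)
n = 200_000
observed_yes = sum(random.random() < 0.5 for _ in range(int(n * 0.2)))
print(round(estimate_noise_fraction(observed_yes / n), 2))
```

The same excess-rate logic works for any screener whose true incidence is known to be tiny, which is what makes the petroleum-engineer question such a cheap diagnostic.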


Jamin Brazil: Right.


Leib Litman: And it’s mixing into your other data, and it can either dilute the signal that you’re looking for, or it can completely invalidate your conclusions. It really depends on what the research question is, and I’ve seen so many examples. In fact, we just published a paper very recently where the CDC, right? Did a study right at the height of COVID where they did something similar. This was in May of 2020, right as COVID was starting, and there was a really –


Jamin Brazil: Right.


Leib Litman: A tremendous amount of concern – people were very scared – and the CDC wanted to know, what kind of actions are people taking to protect themselves, behaviorally, against infection? They did a survey where they asked people, are you wearing a mask? Are you touching your face? Are you keeping distance? And they wanted to know, are some people going overboard? Are there maybe some things that people are doing that they shouldn’t be doing, because they’re being overprotective, right?


Jamin Brazil: Right.


Leib Litman: So they asked some questions like, are you washing your food with bleach? And are you actually drinking bleach so that –


Jamin Brazil: Right.


Leib Litman: Because that’s going to protect you, or are you gargling bleach or are you misting your clothes with bleach? And so on and so forth.


Jamin Brazil: Right.


Leib Litman: And they found this astonishing result, right? They found that 40% of their respondents said that they’re doing one of these completely –


Jamin Brazil: Insane.


Leib Litman: Crazy things, and about 10% of people said that they’re actually drinking bleach and gargling bleach, right? So this is astonishing, and what is much more bizarre – much more interesting from a societal perspective, and a social commentary perspective – was that they published this paper in the flagship journal of the Centers for Disease Control, and over two days the paper got picked up by over 150 news outlets around the world.


Jamin Brazil: Of course.


Leib Litman: Right? And basically it was everywhere – CNN, Fox News, MSNBC – everybody was talking about it, and people outside of the United States were laughing at how people in the United States are drinking bleach and so on and so forth. I came across this study, and as soon as I saw the data, I said to myself, OK, I don’t even need to read the results section, I know exactly what happened, right? I knew that they probably sourced that data from certain standard kind of places where people get their data, and they didn’t use any kind of protection.


Jamin Brazil: Right.


Leib Litman: And so on and so forth. And so at that point, I looked at the method section – yes, exactly, that’s exactly what happened – and then that day, I re-ran that study with our research team at CloudResearch, using exactly the same methodology, asking the same exact questions, but using basic, standard protection. We ran the study with that protection and without, and we found that without protection, you get exactly what they found: 10% of people say that they were drinking bleach. With protection, not a single person actually claimed to do any of these things. Literally everything that they reported was just completely false, and this ends up in the CDC, it ends up in scientific journals, it ends up as part of the news cycle, it ends up part of our culture, it ends up really being part of the national conversation – the things that we call “facts,” right? So much of it comes from market research data [CROSSTALK] 


Jamin Brazil: Done badly.


Leib Litman: And – exactly, and so this is really the challenge.


Jamin Brazil: Great, I really appreciate the specificity around this, this is a great point you’re making. So last question, what steps can researchers put in place in order to ensure that they don’t have participants that are drinking bleach?


Leib Litman: Yes, that’s a great question, and there are a number of things that researchers need to be doing. At CloudResearch, we’ve actually developed a tool for protecting surveys against bad data. It’s called Sentry, and what Sentry does is act as a pre-survey instrument: participants have to answer a couple of questions, and based on their answers, some of their mouse movements and behavior, and some other things that Sentry looks for, it identifies who’s likely to be a good respondent and who’s likely not to be. Sentry does a really great job of getting rid of the vast majority of bad data. But if somebody wants to know what they need to do as a researcher to really get clean data: you need to have, within your survey, various kinds of questions – we call them validated instruments – that will identify who’s a good respondent and who’s a bad respondent. And it’s really about using something that is validated. It’s got to be accurate, it needs to have a low false positive rate and a high hit rate, and it needs to be structured so that it’s embedded within different parts of the survey, in order for it to be maximally effective. Those are the kinds of things that really need to be within every single survey, and in addition, there are various kinds of products out there that will help the researcher create a solution that’s maximally effective. But people need to see this as a partnership between themselves and whatever tools they use. You really need both, because there is no tool out there that can completely clean your data, so you can’t just outsource it to someone – it’s not going to work. It’s got to be part of something the researcher does, and you’ve got to use the right tools.
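To make the idea of embedded, validated checks concrete for readers, here is a minimal sketch in Python. The check items, expected answers, and failure threshold are all hypothetical illustrations of the general approach described here, not Sentry’s actual logic, which isn’t public:

```python
# Hypothetical low-false-positive checks, meant to be scattered across
# different parts of the survey rather than grouped together. Each maps
# a survey item to the answer an attentive, truthful respondent gives.
CHECKS = {
    "instructed_item": "strongly_agree",   # "Please select Strongly Agree"
    "petroleum_engineer": "no",            # rare-profession screener
    "bogus_brand": "no",                   # brand that does not exist
}

def is_suspect(answers, max_failures=1):
    """Flag a respondent who fails more checks than the allowed threshold.
    A single miss is tolerated to keep the false positive rate low."""
    failures = sum(1 for item, expected in CHECKS.items()
                   if answers.get(item) != expected)
    return failures > max_failures

attentive = {"instructed_item": "strongly_agree",
             "petroleum_engineer": "no",
             "bogus_brand": "no"}
random_clicker = {"instructed_item": "agree",
                  "petroleum_engineer": "yes",
                  "bogus_brand": "yes"}
print(is_suspect(attentive), is_suspect(random_clicker))  # False True
```

Tolerating one missed check reflects the point about false positives: a real respondent can mis-click once, so flagging on a single failure would throw away good data.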


Jamin Brazil: Yes, I had a friend of mine who you might know, Vignesh [ph] from Research Defender.


Leib Litman: Sure.


Jamin Brazil: He – yes, he told me fraud has to be managed, it’ll never be solved, and I thought that’s exactly right from a framework perspective. And to your point, with CloudResearch sitting in front of the survey, by screening out a lot of the fraudulent participants early on, you have a higher probability of having clean data on the backend.


Leib Litman: That’s exactly right. You need a really good instrument that will be as effective as possible, and then as a researcher, you need to also provide some of your own input and manage some of your own data. But I do think that it is possible to solve this problem; I don’t think it’s an unsolvable problem. It is something that needs to be managed, but we’re getting much better at solving it. For example, Sentry now cleans out, for most surveys, around 70% of the bad data, which is a lot better than getting no protection.


Jamin Brazil: Right, right, yes. [CROSSTALK] 


Leib Litman: When you’re just starting out.


Jamin Brazil: Yes, exactly. The survey questions that you put in there are going to be part of the solution, but you really need to start at the top of the funnel, vetting those participants early, so that you can have a higher probability of success as they go through your survey. I really appreciate you taking time with us today. Thank you for being on the Happy Market Research Podcast.


Leib Litman: Thank you for having me, I really enjoyed talking to you.


Jamin Brazil: Everyone else, our guest today has been Dr. Leib Litman, chief research officer at CloudResearch. You can find his contact information and more information about CloudResearch in the show notes. I hope you have a great rest of your day.