The signs are there long before the damage is apparent to the naked eye.
Cracks appear and we don’t see them for the impending disaster they are because we’re caught up in the here and now, accepting each new productivity device and cost-saving service as a great leap forward. But each has a cost, and the costs mount up.
In our rush to save time and increase our productivity, we’ve mentally atrophied to the point where we’re awash with half-knowledge. The basic decision making skills that were once taught to us in graduate school – or on the job – have been replaced with faster, do-it-yourself short-cuts that do something far worse than give us a “lite” version of results: they give us the wrong results. But we can’t tell the difference.
“All marketing research is wrong.”
Let me unfairly pick on blogger Faris Yakob and his post of the same title. Faris presents a video from the 2007 Hatch Awards showing focus group participants dismissing Apple’s “1984” spot. The purpose of the video is to prove that research could never appropriately guide or judge outstanding creative.
There’s a big problem and a small problem here, which we’ll tackle one at a time. First, the small one.
You don’t make decisions based on focus groups.
The Hatch Awards video suggests that focus groups are appropriate vehicles to test advertising creative. They aren’t. Ever. Testing creative in a focus group – making decisions based on something gleaned from eight people in a room who are searching for an opinion usually supplied by that one confident guy in the far left-hand corner away from the board – is a waste of $8,000.
Focus groups are qualitative research vehicles – they are wonderful for getting input and raising questions. They are lousy, dangerous and irresponsible for answering them, though.
Bad research and “non-sampling error.”
The bigger problem is what my research guru, Howie Lipstein, would call “non-sampling error.”
“Back in my early days at Wharton, my mentors described good research as a product of sampling error and non-sampling error. The sampling error is simply a function of sample size. Non-sampling error is procedural bias that creeps into your work, intentional or otherwise, that skews your results and isn’t corrected by sample size. So few people talk about where the data is coming from or what the quality of the data is.”
Ask yourself: are you hearing from people who are different from the real people in the real market?
You may be hearing from 10,000 people – a great sample size – but if these 10,000 all come from a slice of the market that only represents 3% of your total universe, your data is bad. This is the “we tested it on Facebook” bias.
Yes, you have a big sample size, but it’s a toxic environment of bias.
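The point is easy to see in a toy simulation (all numbers here are hypothetical, invented only for illustration): if the vocal 3% slice loves your concept at a far higher rate than the other 97%, a huge sample drawn only from that slice will be confidently, precisely wrong, while a much smaller random sample of the whole market lands near the truth.

```python
import random

random.seed(0)

# Hypothetical market: 97,000 "quiet" customers who approve at 20%,
# plus a 3,000-person vocal slice (the "Facebook 3%") approving at 90%.
quiet = [1 if random.random() < 0.20 else 0 for _ in range(97_000)]
vocal = [1 if random.random() < 0.90 else 0 for _ in range(3_000)]
market = quiet + vocal

true_rate = sum(market) / len(market)  # roughly 22% in this toy setup

# Big but biased: 10,000 respondents drawn only from the vocal slice.
biased = random.choices(vocal, k=10_000)
biased_rate = sum(biased) / len(biased)

# Small but random: 400 respondents drawn from the whole market.
unbiased = random.sample(market, k=400)
unbiased_rate = sum(unbiased) / len(unbiased)

print(f"true approval:    {true_rate:.1%}")
print(f"biased n=10,000:  {biased_rate:.1%}")  # near 90% -- way off
print(f"random n=400:     {unbiased_rate:.1%}")  # near the truth
```

No amount of extra sample size fixes the first estimate; the non-sampling error is baked into where the respondents came from.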
Why this is such bad news.
I have a hypothesis and I hope I’m wrong. This started when I wrote the first of several posts on the Old Spice campaign, particularly when the lackluster results started drifting in. People were furious that I could describe Isaiah and the production crew as anything but a watershed moment for marketing. The problem was that it wasn’t selling any body wash and therefore wasn’t a successful campaign.
I don’t think many marketers understand how to make evidence-based decisions anymore. Decision making is becoming a dying art form, replaced by the elevation of “gut” glamorized by those very few start-ups that turned their backs on fact-based decision making and frankly got lucky. In addition, we now have do-it-yourself tools that have become so easy to use that we’ve lost the skill to use them correctly.
From the likes of SurveyMonkey.com we can now pump out badly constructed questionnaires to completely biased lists of the wrong respondents from the comfort of our desktops. Who needs an expert when we can do it badly ourselves?
Social media resources like Twitter and Facebook have become magnets for hand-raisers – which is good – and thus have become irresistible to marketers looking for quick research – which is bad. You don’t “test” ideas on Facebook. Gap’s logo comes to mind here. Imagine collecting a thousand of those guys in the focus group room – the loud guy with all the opinions in the corner – and only asking this motley collection what they thought. You’d get results that wouldn’t be projectable to your total universe. But it’s fast and easy.
Content creation is so cheap it’s almost free now. What used to cost a six-figure amount to produce now takes a Flip camcorder and a YouTube upload. It’s so fast and easy that we don’t care whether it’s right or wrong anymore. If it’s good, they’ll pass it along. If not, they’ll delete it. And this mentality pervades the rest of marketing where the stakes are higher.
Most of the agency people I’ve spoken to in my career have reacted to testing creative the same way a cat reacts to being held over a barrel of water. They hate this idea because it’s their product and they don’t like being told that it isn’t “good.” Testing was always the sole preserve of the client side, the one responsible for the budget and the results.
Today, I’m not sure this skill set is still common enough in the modern enterprise to rely on anymore.
But if we’re tasked with making good decisions, we need this discipline.
Who still has it? Anyone?
Do you think “knowing stuff” might be the newest killer trend in the agency world?
Regards.
I have to say I never put much thought into this but it makes sense for sure.
The big, flashy “free” sign these social media instruments carry encourages people to use them for all sorts of (wrong) purposes, including potentially very dangerous practices like the ones you describe.
This will surely give me something to think about for a while, thanks.
Love the post, very timely. What I see, though, is another wrinkle re: social media and market research. If you’re not listening in social media, and someone talks about you, it’s your loss. Businesses have an opportunity to interact with and shape opinion, drive awareness, and grow sales in social. But everything you say about market research is true. I looked at Crimson Hexagon last week – it’s the latest and greatest in winnowing through the massive amounts of data to find relevant info for use in making marketing decisions. And it only really works if you have A LOT of people talking about what it is you want to know about, and EVEN THEN, it’s only good for knowing what the social folk think, not necessarily applicable to the target population. Thus, no substitute for good ol’ regular market research…
There are some excellent points raised here – especially about the “non-sampling bias”! I am a moderator with over 2000 focus groups done in my career, and I refuse to do groups that have the purpose of “evaluating creative”. It just does not work for anyone, and I’ve seen focus groups reduce great advertising to mediocre in just a few seconds.
I would frame a couple of things a little differently – focus groups when done well are for the purpose of developing insight – which can take the form of questions – or answers to questions. While not quantitative, they can help with the development of perspective.
And I had to laugh at your characterization of the “loud-mouthed guy.” Good moderators can deal with this problem before it becomes one, and have a variety of tricks to neutralize his boorish tendencies. Most can pick them out before the dominators even take their seats. Good post!
Frank: Hey, wait a minute, we can’t have actual experts weighing in on things like this! You’ve got… what’s that word again… experience and stuff. Not fair!
Thanks for weighing in on this point, which has always been a painful twitch in my years in business. No, “focus group” is not a synonym for “research.” However, focus groups are wonderful for raising questions and, as you so clearly point out, developing perspective and insight. I love focus groups. Especially the M&M’s.
Thank you sir! Appreciate your comment!
Chris: thank you for your note – you’re on target here, in my experience. Truly, social media and the ability to listen and instantly react is a new and wonderful tool in the hands of the right marketer, and nothing in the “non-sampling bias” discussion dispels this notion, so long as we don’t (as you’ve pointed out) mistake social media dialogue for actual projectable research.
Appreciate it!
Gabriele: thanks for your comment! Glad this resonated with you.
Using social media as a monitoring device is smart business. But much like using a hammer where a screwdriver is called for, we just need to remember what it’s for, what its limitations are, and when we need to actually roll up our sleeves and get an unbiased, randomly selected, non-social-media-ooh-ooh-pick-me-I-have-a-not-terribly-well-thought-out-idea-that-I-need-to-say-to-absolutely-everyone sample to quantitatively test our well-constructed ideas.
Thanks!
I see this every day in the rather large medical-device company where I have the misfortune of being a market researcher. Every MBA thinks they can do research because they took a class or two and breezed through it.
I remember a conversation I had with a CAO (Chief Analytics Officer) at a research supplier early in my career. When I asked why we had to change the question from my novice attempt to her redlined suggestion, she replied, “Because I have a PhD, you don’t, and I say so.” I hate to put it that simply, but it did bring me down to earth. I now count her as one of my greatest mentors because she put it eloquently: people who have devoted their careers to market research should be the ones doing the research.