The signs are there long before the damage is apparent to the naked eye.

Cracks appear, and we don’t see them for the impending disaster they are because we’re caught up in the here and now, accepting each new productivity device and cost-saving service as a great leap forward. But each has a cost, and the costs mount up.

In our rush to save time and increase our productivity, we’ve mentally atrophied to the point where we’re awash in half-knowledge. The basic decision-making skills that were once taught to us in graduate school – or on the job – have been replaced with faster, do-it-yourself shortcuts that do something far worse than give us a “lite” version of results: they give us the wrong results. But we can’t tell the difference.

“All marketing research is wrong.”

Let me unfairly pick on blogger Faris Yakob and his post of the same title. Faris presents a video from the 2007 Hatch Awards showing focus group participants dismissing Apple’s “1984” spot. The purpose of the video is to prove that research could never appropriately guide or judge outstanding creative.

There’s a big problem and a small problem here, which we’ll tackle one at a time. First, the small one.

You don’t make decisions based on focus groups.

The Hatch Awards video suggests that focus groups are appropriate vehicles for testing advertising creative. They aren’t. Ever. Testing creative in a focus group – making decisions based on something gleaned from eight people in a room who are searching for an opinion usually supplied by that one confident guy in the far left-hand corner, away from the board – is a waste of $8,000.

Focus groups are qualitative research vehicles – they are wonderful for getting input and raising questions. They are lousy, dangerous, and irresponsible vehicles for answering them, though.

Bad research and “non-sampling error.”

The bigger problem is what my research guru, Howie Lipstein, would call “non-sampling error.”

“Back in my early days at Wharton, my mentors described good research as a product of sampling error and non-sampling error. The sampling error is simply a function of sample size. Non-sampling error is procedural bias that creeps into your work, intentional or otherwise, that skews your results and isn’t corrected by sample size. So few people talk about where the data is coming from or what the quality of the data is.”

Ask yourself: are you hearing from people who are different from the real people in the real market?

You may be hearing from 10,000 people – a great sample size – but if those 10,000 all come from a slice of the market that represents only 3% of your total universe, your data is bad. This is the “we tested it on Facebook” bias.

Yes, you have a big sample, but it’s drawn from a toxic environment of bias.
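Howie’s distinction is easy to see with a little arithmetic. Here’s a minimal simulation sketch in Python – the rates and sample sizes are invented purely for illustration – showing that the margin of error (the sampling error) shrinks roughly as 1/√n, while a skewed sampling frame leaves a gap that no sample size will close:

```python
import random

random.seed(42)

# Invented numbers, purely for illustration:
TRUE_RATE = 0.20   # share of the whole market that likes the idea
FAN_RATE = 0.70    # share of the 3% "fan" slice (the Facebook crowd) that likes it

def survey(rate, n):
    """Ask n simulated respondents a yes/no question; return the observed share of yeses."""
    return sum(random.random() < rate for _ in range(n)) / n

for n in (100, 1_000, 10_000, 100_000):
    unbiased = survey(TRUE_RATE, n)   # a true random sample of the full universe
    biased = survey(FAN_RATE, n)      # same sample size, drawn only from the fan slice
    moe = 1 / n ** 0.5                # rough 95% margin of error for a proportion
    print(f"n={n:>7,}  unbiased={unbiased:.3f} ±{moe:.3f}  biased={biased:.3f} ±{moe:.3f}")
```

Run it and the margin of error dutifully shrinks from about ±0.10 at n=100 to about ±0.003 at n=100,000 – but the biased column converges, with exactly the same false precision, on 0.70 instead of 0.20. More respondents from the wrong slice just make you more confident in the wrong number.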

Why this is such bad news.

I have a hypothesis, and I hope I’m wrong. It started when I wrote the first of several posts on the Old Spice campaign, particularly when the lackluster results started drifting in. People were furious that I could describe Isaiah Mustafa and the production crew’s work as anything but a watershed moment for marketing. The problem was that the campaign wasn’t selling any body wash, and therefore wasn’t a successful campaign.

I don’t think many marketers understand how to make evidence-based decisions anymore. Decision making is becoming a dying art, replaced by the elevation of “gut” glamorized by those very few start-ups that turned their backs on fact-based decision making and, frankly, got lucky. In addition, we now have do-it-yourself tools so easy to use that we’ve lost the skill to use them correctly.

With tools like SurveyMonkey.com, we can now pump out badly constructed questionnaires to completely biased lists of the wrong respondents, all from the comfort of our desktops. Who needs an expert when we can do it badly ourselves?

Social media platforms like Twitter and Facebook have become magnets for hand-raisers – which is good – and thus irresistible to marketers looking for quick research – which is bad. You don’t “test” ideas on Facebook; Gap’s logo fiasco comes to mind here. Imagine collecting a thousand copies of that guy from the focus group room – the loud one with all the opinions in the corner – and asking only this motley collection what they thought. You’d get results that wouldn’t be projectable to your total universe. But it’s fast and easy.

Content creation is so cheap it’s almost free now. What used to cost six figures to produce now takes a Flip camcorder and a YouTube upload. It’s so fast and easy that we no longer care whether it’s right or wrong. If it’s good, they’ll pass it along; if not, they’ll delete it. And this mentality pervades the rest of marketing, where the stakes are higher.

Most of the agency people I’ve spoken to in my career have reacted to testing creative the way a cat reacts to being held over a barrel of water. They hate the idea because it’s their product, and they don’t like being told that it isn’t “good.” Testing was always the sole preserve of the client side – the side responsible for the budget and the results.

Today, I’m not sure this skill set is common enough in the modern enterprise to rely on.

But if we’re tasked with making good decisions, we need this discipline.

Who still has it? Anyone?

Do you think “knowing stuff” might be the newest killer trend in the agency world?

Regards.