
Is it validated…?

When it comes to creative evaluation research, marketers have every right to ask their research agency partners whether the data and insights being delivered have robust theoretical underpinnings and can ultimately be trusted. After all, what’s the point of spending considerable time and money (not to mention emotional energy and effort) speaking to consumers if the insights can’t be relied upon to inform critical business decisions?

But equally, when the conversation shifts to research agencies claiming that their methodologies are “validated”, what exactly does this mean, and can these claims be trusted? When making such claims, research agencies are typically referring to one of three things:

#1 Pre/Post Sales Analysis

This method has historically been touted as the gold standard of validation by one of the world’s biggest advertising pre-testing companies. To properly understand these claims it’s necessary to wind the clock all the way back to the early 1990s, when the idea was first conceived. At that time the hero 30-second TVC was the centerpiece of every large advertiser’s media plan. (And if you had really deep pockets you might’ve also produced a 15-second cut-down!) Given that advertisers also had only a handful of channels in which to distribute this content, it’s somewhat plausible that you could take a clean period before and after media activity and accurately attribute sales effects to the creative execution (after factoring in the weight of media expenditure).
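To make the logic concrete, here is a minimal sketch of how such a pre/post analysis might be computed. The figures and variable names are illustrative only, not any vendor’s actual model:

```python
# Illustrative sketch of a naive pre/post sales analysis (hypothetical
# numbers; not any vendor's actual model). The method compares average
# weekly sales before and after a campaign, then scales the uplift by
# media weight (e.g. GRPs) to "attribute" the effect to the creative.

pre_weekly_sales = [1020, 980, 1005, 995]     # 4 clean weeks before airing
post_weekly_sales = [1110, 1090, 1130, 1070]  # 4 weeks during/after airing
grps_delivered = 400                          # media weight for the burst

pre_avg = sum(pre_weekly_sales) / len(pre_weekly_sales)
post_avg = sum(post_weekly_sales) / len(post_weekly_sales)

uplift_pct = (post_avg - pre_avg) / pre_avg * 100
uplift_per_100_grps = uplift_pct / (grps_delivered / 100)

print(f"Sales uplift: {uplift_pct:.1f}%")               # 10.0%
print(f"Uplift per 100 GRPs: {uplift_per_100_grps:.2f}%")  # 2.50%
# The leap of faith: crediting this residual uplift to the creative,
# which only holds if nothing else changed between the two periods.
```

Everything here hinges on the pre period being a valid counterfactual for the post period, which is precisely where the logic breaks down.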

However, if we put aside what the media environment looks like now versus then, the problem with this approach is that it runs contrary to all the evidence that has since been established about how advertising really works. Much of this comes courtesy of Byron Sharp and the Ehrenberg-Bass Institute’s How Brands Grow, a seminal publication that uses empirical evidence to debunk many long-held marketing myths. One of those learnings is that the majority of advertising effects (90%+) are realized in the long term. Why? Because most of the category buyers advertising reaches aren’t in-market. Not this week, not next week, and likely not next month, either!

In most consumer packaged goods categories only 1-2% of category buyers will be in-market over the next week, and no more than 8-10% over the next month. This is neither surprising nor a concern, given the primary role of advertising is to build and refresh memory structures. By making the brand more ‘mentally available’, advertising ensures that when people are eventually ready to buy, the memories it has created will be accessed, bringing the brand to mind more easily and making it more likely to be chosen. Next week’s and next month’s sales are largely the pay-off from years of investment in building mental and physical availability, not from the brand’s slick new campaign that’s just been launched!
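To see how little short-term sales movement even a well-received campaign can produce, consider a back-of-the-envelope calculation. All figures below are hypothetical, chosen only to match the in-market rates above:

```python
# Back-of-the-envelope arithmetic (hypothetical figures) showing how
# little of an ad's audience can possibly transact in the short term.

buyers_reached = 1_000_000   # category buyers exposed to the campaign
in_market_week = 0.015       # ~1-2% in-market over the next week
in_market_month = 0.09       # ~8-10% in-market over the next month
choice_shift = 0.03          # generous 3-pt lift in brand choice among them

extra_sales_week = buyers_reached * in_market_week * choice_shift
extra_sales_month = buyers_reached * in_market_month * choice_shift

print(f"Extra sales next week:  ~{extra_sales_week:,.0f}")   # ~450
print(f"Extra sales next month: ~{extra_sales_month:,.0f}")  # ~2,700
# Against weekly category volumes in the millions, a blip this small
# is easily swamped by promotions, distribution and seasonality.
```

A few hundred incremental sales in a week is noise, not a validation signal.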

#2 Marketing Mix Modeling (MMM)

Say you’re driving to the grocery store and see a billboard for a food product on the way. If you happen to be in the market for that product at that very moment, it’s plausible the ad will influence which brand you buy. These are the kinds of short-term effects that a properly constructed MMM (along with a variety of other experiments) can reliably measure. With the goal of understanding the relationship between advertising spend and purchase uplift (over baseline sales), MMM helps quantify the additional short-term sales achieved as a direct result of marketing activity.
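For intuition, here is a stripped-down sketch of the kind of regression an MMM fits: a single media channel with adstock decay, estimated against synthetic sales data. All names and numbers are illustrative; real models add seasonality, price, promotions, distribution and more:

```python
import numpy as np

# Stripped-down MMM sketch (illustrative only): regress weekly sales on
# adstocked media spend to estimate short-term uplift over baseline.

def adstock(spend, decay=0.5):
    """Carry over a share of last week's media pressure into this week."""
    carried = np.zeros_like(spend, dtype=float)
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

rng = np.random.default_rng(42)
weeks = 104
spend = rng.uniform(0, 100, weeks)   # weekly media spend ($k)
baseline = 500.0                     # sales with zero advertising
true_effect = 1.2                    # sales units per adstocked $k
sales = baseline + true_effect * adstock(spend) + rng.normal(0, 20, weeks)

# Ordinary least squares: sales ~ intercept + adstocked spend
X = np.column_stack([np.ones(weeks), adstock(spend)])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"Estimated baseline: {coefs[0]:.1f}, media effect: {coefs[1]:.2f}")
```

Even in this idealized setup, the model only recovers the short-term coefficient; the long-term memory effects described below never enter the equation.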

However, as established earlier, the vast majority of advertising effects aren’t realized in the short term, meaning it’s not possible to get a complete picture of advertising’s impact through MMM. Because advertising primarily works by building mental availability, the brand comes to mind more readily when people are eventually in the market to buy (in 6 or 12 months’ time, and sometimes even longer), and is thus more likely to be chosen.

When you attempt to use MMM to analyze sales effects past the ~3 month mark, it becomes exceedingly difficult to separate the signal from the noise and confidently estimate what impact an entire channel or campaign is having, let alone an individual creative execution. So, unless you’re one of the small handful of brands that focuses exclusively on direct-response communications (where immediate sales uplift is the only thing of consequence), MMM likely won’t provide reliable insights into the true impact of your advertising investment. For these reasons MMM is best suited to evaluating in-store activations, paid display activity, price promotions, and the like.

#3 Advertising Awards Databases

Perhaps the most heinous of all validation claims is the attempt to link creative testing results to the winners of advertising awards. Put simply, if your research agency claims that their methodology is validated by comparing test results to an advertising awards database (i.e., submissions made by a select group of top-tier advertising agency execs eagerly pursuing shiny trophies), then immediately turn your B.S. Detector Dial up to 11. In academic circles you’d be laughed out of the room for suggesting such a tenuous dataset could be used as proof of anything.

This type of analysis has about as much credibility as a television industry lobby group commissioning its own research that (surprise, surprise!) proves TV is more effective than any other medium. It might even be true, but when confronted with these types of claims, ask yourself who commissioned the analysis and what they stand to gain.

In short, take these claims with a grain of salt.

Okay, so if we can’t trust those claiming “validation”, should we even be undertaking creative evaluation research in the first place?

This piece isn’t intended to dissuade market research companies from continually pushing the boundaries and exploring every available opportunity to advance the profession. To the contrary, it’s our collective responsibility to bring greater rigor to measuring the impact of consumer insights and the role they play in building brands that endure over the long term.

While we’re all for more evidence that proves the tremendous value of putting the customer at the heart of all decisions, what we do have a problem with is unscrupulous vendors attempting to pull the wool over marketers’ eyes with grossly exaggerated — and often outright false — claims, all for the purpose of justifying their exorbitant fees. These vendors give the entire industry a bad name.

Put simply, we recommend a 3-step process: