Seeing the Forest of Success Despite the KPI Trees: Errors in Measuring Market Tests

I spent years pursuing what retailers now call “OmniChannel” as they break down divisions between retail, web, and phone. After all, should you really care which channel the sale comes through? Consumers don’t.

Throughout this time I relied heavily on direct response television (DRTV) as an advertising medium – but never as a sole sales mechanism.

For DRTV to have remained strong, it needed to embrace the omnichannel reality by dramatically changing its testing and test metrics. Even more, DRTV agencies needed to learn and embrace new sophistication in the analysis of test results.

So in 2013 I wrote the following, looking at this critical challenge for DRTV: testing in an omnichannel world.

The key lesson from this experience remains highly applicable today: the metrics you choose for your test may not tell you whether your product or campaign is successful.

Any company or entrepreneur who gets this wrong has made a colossal mistake. And this lesson is as true online or in traditional media as it was with DRTV.

Choosing to advertise through social media, for example, requires accepting severe limits on the kinds of products that can succeed by the available metrics. Most companies and entrepreneurs, though, do not understand this, so they quite often decide products are “failures” when they may only be “not good products for social media”.

The following general approach to the discussion first appeared in the December 2013 issue of Response Magazine. I trust the reader can take these lessons and draw conclusions for whichever media they are using.

DRTV’s Phone Obsession Kills Campaigns

Practitioners of direct response television (and even traditional brands hoping to leverage the medium) far too often decide whether a product will live or die based on the ratio of phone sales to media spend – sometimes including any measurable web sales. The truth is that phone sales are merely one piece of the puzzle, typically accounting for only 3% to 7% of a campaign’s impact. Together with web sales, they may account for only 8% to 15% of sales. The rest happen at retail.

So if these sales don’t go well, it doesn’t mean you have a failure or are even close to a failure. All you know is that these sales didn’t go as well as you hoped.
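To make that arithmetic concrete, here is a back-of-the-envelope sketch (in Python) that inverts the rough shares cited above – phone at 3% to 7% of impact, phone plus web at 8% to 15% – to ask what total a given week of direct sales implies. The function and the order counts are purely illustrative, and the shares vary by product and campaign.

```python
# Back-of-the-envelope extrapolation: given observed phone and web orders,
# estimate the plausible range of total campaign impact from the rough
# attribution shares cited above (phone 3-7%; phone + web 8-15%).
# These shares are ballpark figures, not measured constants.

def implied_total_range(phone_units, web_units,
                        phone_share=(0.03, 0.07),
                        direct_share=(0.08, 0.15)):
    """Return (low, high) estimates of total units moved by the campaign."""
    direct_units = phone_units + web_units
    # If phone is 3-7% of the total, then total = phone / share.
    from_phone = (phone_units / phone_share[1], phone_units / phone_share[0])
    # If phone + web is 8-15% of the total, then total = direct / share.
    from_direct = (direct_units / direct_share[1], direct_units / direct_share[0])
    # Use the overlap of the two ranges as the working estimate.
    return max(from_phone[0], from_direct[0]), min(from_phone[1], from_direct[1])

# Hypothetical week: 1,200 phone orders and 1,800 tracked web orders.
low, high = implied_total_range(1200, 1800)
print(f"Implied total campaign impact: {low:,.0f} to {high:,.0f} units")
```

Even the low end of that range dwarfs what the phones alone report – which is exactly why judging a campaign on phone counts misleads.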

That said, phone results are very useful and help you make the most of your campaign. They can guide media buyers to more cost-effective time slots or help them exploit superbly cost-effective networks where good audience numbers simply aren’t available.

Phones also offer a marketing goldmine you’ll never get from traditional advertising or even most online ads:  a chance to listen to consumers who saw your advertising and acted by calling a phone number.

Smart Testing for the OmniChannel Reality

Here’s the thing: in an omnichannel world, we need to consider impact across all sources – stores, phone, and web. None of these channels should be ignored, because there is powerful learning to extract from each.

Yet there are big-picture realities we must respect. Testing that detects impact across omnichannel options takes longer. While phone sales are conveniently instant, web sales can take a week to appear. And the bulk of retail impact from one week’s airings may take more than a month to trickle in – with a portion following the long tail of TV advertising.
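As a rough illustration of why patience matters, the sketch below models how much of a campaign’s total impact is visible N days after airing. The channel shares and lag lengths are hypothetical assumptions chosen only to mirror the timing just described; real curves must be estimated per campaign.

```python
# Illustrative visibility model: what fraction of a week's true campaign
# impact can be measured N days after airing? The shares and lags below
# are assumptions for the sketch, not measured values.

CHANNEL_SHARE = {"phone": 0.05, "web": 0.07, "retail": 0.88}  # assumed impact shares
DAYS_TO_FULL = {"phone": 1, "web": 7, "retail": 45}           # assumed days until fully visible

def visible_share(day):
    """Fraction of total impact measurable by `day`, assuming linear arrival."""
    return sum(share * min(1.0, day / DAYS_TO_FULL[channel])
               for channel, share in CHANNEL_SHARE.items())

for day in (1, 7, 30, 45):
    print(f"Day {day:>2}: {visible_share(day):.0%} of total impact visible")
```

Under these assumptions, a read taken one week in sees only about a quarter of the campaign’s eventual impact – a decision made that early is a decision made mostly blind.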

Even worse, while phone noise is limited, the retail and web worlds are incredibly noisy – filled with multiple sources of messages that drive action. Separating out advertising impact requires analytic sophistication. It also requires that we assume up front that there are no absolute measures – only pointers from which we can estimate impact.
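One minimal way to begin separating that signal from noise is to compare test-period retail sales against a pre-campaign baseline. The sketch below flags weeks that rise more than two standard deviations above baseline; the weekly figures are invented, and the threshold is one illustrative convention, not a definitive method.

```python
# Minimal lift estimate for a noisy channel: compare weekly retail unit
# sales during the test against a pre-campaign baseline. All numbers are
# hypothetical; the 2-standard-deviation threshold is a common rule of
# thumb, not an absolute measure.
from statistics import mean, stdev

baseline_weeks = [4100, 3950, 4300, 4050, 4200, 3900]  # pre-campaign weekly units
test_weeks = [4400, 5100, 5600, 6200]                  # weekly units after airings begin

base_mean = mean(baseline_weeks)
noise_band = 2 * stdev(baseline_weeks)  # variation we expect without advertising

for week, units in enumerate(test_weeks, start=1):
    lift = units - base_mean
    verdict = ("likely advertising impact" if units > base_mean + noise_band
               else "within normal noise")
    print(f"Week {week}: {units} units, lift {lift:+,.0f} ({verdict})")
```

Real retail data carries promotions, seasonality, and competitor activity on top of this – which is why the pointers-not-absolutes mindset matters.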

Testing Discovers Profitable Truths

When testing is done right, success will emerge where less sophisticated testing would see only failure.

In one case, my former agency ran a series of campaigns for a specific client that returned a high level of both direct and retail sales.

Then, for the same client, we launched a campaign for a different type of product. Unfortunately, phone sales came in at about a tenth of what prior experience would have suggested.

Did we worry? Of course we did (we’re human). But we also advised patience while waiting for the total picture to emerge. What emerged was entirely different from what the phones suggested: the campaign continued to drive poor phone results but extraordinary retail volume – volume so high the campaign had to be pulled because stores sold out, at around 1.5 million units in three months.

An impatient, KPI-obsessed client and/or agency could easily have pressed the abort button on the campaign and incurred a massive loss – both in overstocked goods and in lost opportunity. Because we approached the test with more sophistication, our client recorded an outstanding success.

All marketers and agencies need to learn this lesson – how to test to see the forest without getting lost in the KPI trees.

©2013, 2019 Doug Garnett — All Rights Reserved
