Test With Your Brain
I've been harping recently on brands that have tested their way down to a white envelope. But I thought it might be worth explaining this a little better, since not everyone is familiar with the basics of testing in response advertising. Forgive me if this gets a little basic at first; I've learned over time that I can't assume everyone knows how this stuff works. But it gets better. I promise.
How We Test
With any response ad we have at least one Control and at least one Test. The Control can be an ad or package, or it can be a group of people held aside who receive no marketing. But essentially they are the group that provides contrast. Their level of response or purchase represents the number that we need to beat with a Test program.
Now the Test group represents something new. It could be a new offer. It might be a new headline. It may be a different background color. But it introduces a new variable of some sort. And in a perfect world, the change introduced is isolated. Meaning, we don't test a new message, a new subject line and a new headline all in one email. After all, how do we know from a test of so many variables why the customer is actually responding better? (Or, how do we know what is suppressing response?) That's why we split the receiving audience and try to isolate their ad exposures to distinct variables, so we can add incremental improvements to the Control.
Now in evaluating the success of a change, we can look at all kinds of data, but at the core we are interested in the response number, the cost number and the sales number. Or in plain English, we want to know which effort is drawing the most measurable response and how much net profit the effort is generating after production, mailing and offer costs. Balancing these three figures tells us which effort offers the best return on our investment. That's why you don't see a lot of really cool box mailings with lots of goodies enclosed. We know that high production values increase response astronomically, but when balanced against production costs and sales, we see that the return on investment is usually far below that of the simpler effort.
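To make that balance concrete, here's a minimal sketch in code. All the figures (mailing sizes, response rates, costs) are hypothetical, invented purely to illustrate why a cheaper effort can out-earn a flashier one even at a third of the response rate:

```python
# A minimal sketch of the response/cost/sales balance described above.
# Every number here is hypothetical, for illustration only.

def net_profit(pieces_mailed, response_rate, revenue_per_response,
               cost_per_piece, offer_cost_per_response):
    """Net profit after production/mailing/offer costs."""
    responses = pieces_mailed * response_rate
    revenue = responses * revenue_per_response
    costs = (pieces_mailed * cost_per_piece
             + responses * offer_cost_per_response)
    return revenue - costs

# Simple effort: plain white envelope, low cost, modest response.
simple = net_profit(100_000, 0.02, 120.0, 0.60, 10.0)

# High-production box mailing: response triples, but so do the costs.
fancy = net_profit(100_000, 0.06, 120.0, 5.00, 25.0)

print(f"simple: ${simple:,.0f}")  # $160,000
print(f"fancy:  ${fancy:,.0f}")   # $70,000
```

Even with triple the response, the box mailing's per-piece and per-offer costs eat the gain; the white envelope wins on net profit.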
So that's how it works in a nutshell. We introduce small changes. We find what generates the most profit. We make the winning effort the new control. Rinse. Repeat.
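That loop can be sketched in a few lines of code. The effort names and profit figures below are hypothetical stand-ins; the point is simply that each challenger varies one isolated element, and the most profitable effort becomes the new control:

```python
# A toy sketch of the control/test loop described above.
# Each challenger changes one isolated variable; the most
# profitable effort becomes the new control for the next round.

def run_test_cycle(control, challengers, measure_profit):
    """Return the winning effort after one round of testing."""
    best = control
    best_profit = measure_profit(control)
    for challenger in challengers:
        profit = measure_profit(challenger)
        if profit > best_profit:
            best, best_profit = challenger, profit
    return best  # rinse, repeat with fresh challengers

# Hypothetical net profits per effort, for illustration.
profits = {"control": 50_000, "new headline": 54_000, "new offer": 48_000}
winner = run_test_cycle("control", ["new headline", "new offer"], profits.get)
print(winner)  # prints "new headline" -- the new control
```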
Here's my problem with the system: it's so numbers-based that it can completely check common sense at the door.
Brilliantly Stupid
On the surface it's brilliant. Whether we do simple A/B testing, like I describe above, or something more complicated, we get quantifiable marketing-by-the-numbers. You put money in. You take more money out. You can prove what works and what doesn't. Awesome!
But like any system that reduces marketing to the purely quantifiable, people push it for all it's worth. And that can lead to decisions that make perfect sense for profitability but overlook long-term consequences that can't be measured at the program level. Which brings me back to my harping on the white envelope.
Again and again we see it. We test and test and test and test, slowly eliminating variables and improving the ROI until we end up at the perfect balance of message and offer...in the cheapest form possible. We find that sweet spot where production costs are as close to zero as we can get them, the offer is as inexpensive as it can be, and the response rate is acceptable enough to deliver maximum profitability. And what we communicate is: "We're the cheapest bastards you can find for the product/service you need."
When I say response is actively working against the brand, this is what I'm talking about. There's a level of success in marketing where we are actually diminishing our long-term value. And this is where it happens. Because we succeed here at the expense of our overall identity. To achieve the perfect ROI balance by-the-numbers, we almost always commoditize who we are to the customer.
Looking Beyond The Quantifiable
Over time I've come to look past the very appealing rhetoric of response marketing purists to see that the success of our program always needs to be viewed in the context of our brand value.
Look at your most successful marketing program and ask yourself, "What does this say about me and my brand?" And really think about this. Because every touch point with a customer is building on an overarching mountain of impressions. How does this program fit into the mix?
When we strip away identity in an effort to bolster ROI, it becomes the classic case of borrowing against our future. We are trading away the lasting impact of advocacy for a cascading series of consequences. Now our CRM programs have no building blocks to work with and have to start loyalty efforts based purely on the fact that we were cheapest. Now our customer's experience with our brand in many cases works directly against the brand image we're trying to create, since one is showcasing a relationship and the other is saying only that we were 1/2% less expensive. Now our future direct solicitations have no affinity to build on to increase the likelihood of acceptance, since the customer may not even remember our name.
We need to start taking a more holistic look at our marketing. Just because we reduce ROI by spending a little more on identity elements in the piece doesn't mean it's money down the drain. We can't simply evaluate the effectiveness of a campaign on the numbers alone. We need to start using our heads and think about what the bigger trends are saying as well. Falling response doesn't always come down to variable elements in the mailer. Sometimes our response is falling because we look like our competition. We've lost what makes us unique. And our uniqueness is every bit as important as adding another .5 basis points of interest reduction to the offer.