Friday, July 31, 2009

Testing, testing....

I came across a stat that 54% of email marketers do A/B testing, which means that roughly half of email marketers don't. The question is: why?

Email is perhaps the single best platform for marketing testing ever invented. You can design a test and see the results in minutes or days, versus the weeks and months necessary to evaluate a traditional direct mail test. It also costs almost nothing to test, versus the hundreds of thousands of dollars necessary to test using over-the-air media. If you code the test up right, you can evaluate the results not only in an instant, but over the course of many campaigns.

So why is it that so many marketers don't take advantage of such a powerful tool? There are probably as many reasons as there are marketers, but here are a few I've seen.

(1) People running email marketing programs are not direct marketers - For someone moving from traditional "offline" direct marketing to email marketing, it was like being a kid in a candy store - the possibilities for testing were endless. That's a mentality many email people don't have, as they tend to be technologists first, marketers second. Luckily, the technologists are easy to spot - they talk way too much about deliverability, honeypots and browser compatibility.

(2) Tests are too far afield from today - Rather than look at incremental changes that can boost the effectiveness of an existing campaign, people will design tests that are completely different from the existing "champion". It's not a bad idea to do this, but if you do, you'll run into the brand police - and they're even less schooled in direct marketing than the technologists. You can run a boatload of tests on things like subject line (yes, you can see incremental value from testing subject lines), number and position of buttons, background color, image size, and integration with landing page creative. Once you have those buttoned up (and can prove out a methodology for testing), you can move on to more far-flung efforts.
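To make the mechanics concrete, here's a minimal sketch in Python of a champion/challenger split for a subject line test. Everything in it is hypothetical - the addresses, the subject lines, the 10% test fraction - and your email platform will have its own split tooling, but hashing each address into a stable cell is what lets you track the same people over a course of campaigns.

```python
import hashlib

# Hypothetical subscriber list - in practice this comes from your ESP.
subscribers = ["alice@example.com", "bob@example.com", "carol@example.com"]

SUBJECT_CHAMPION = "Your July statement is ready"  # the current winner
SUBJECT_CHALLENGER = "3 things to check in your July statement"
TEST_FRACTION = 0.10  # keep the challenger cell small early on

def assign_cell(email: str) -> str:
    """Hash the address into a bucket so the same subscriber lands in
    the same cell every campaign, keeping results comparable over time."""
    bucket = int(hashlib.md5(email.lower().encode()).hexdigest(), 16) % 100
    return "challenger" if bucket < TEST_FRACTION * 100 else "champion"

for email in subscribers:
    cell = assign_cell(email)
    subject = SUBJECT_CHALLENGER if cell == "challenger" else SUBJECT_CHAMPION
    print(f"{email}: {cell} -> {subject!r}")
```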

(3) Know-it-all executives - Sometimes when you design a test, a person higher up in management will say either (a) "I don't like it" or (b) "I know our customers - they won't like it" (which is the same thing as (a), just couched in a way that doesn't seem quite so egotistical). This is a tough hurdle, as you're basically telling someone they should come down off their knowledge-filled ego cloud and live in the land of proof. The fact is, it should not matter what you think - it matters what you can prove. The fun part of testing is when you challenge your own assumptions, then let the data tell you what actually worked. For example, I once used a segmentation pattern based upon consumer behavior. Our field group insisted that "we know the consumers in XYZ market better than some smarty-pants jerk in corporate" (or words to that effect). So we said fine, we'll test it. The fact was, our behavior-based segmentation significantly outperformed the "field-tested" knowledge. While it didn't close the case (there's always someone who knows better...), it made for a very effective shield.

(4) Tests are designed badly - Complicated tests are complicated to interpret. They're also complicated to execute. Keep your tests understandable and learn to walk before you run. You also have to be clear about what you're trying to affect. For example, focusing on sales of creative A versus creative B can be a testing death trap - your job as an email marketer is not to sell, but to bring people to a place where a sale can happen. Back your test up to look at things like click-through and you'll have a much higher degree of significance in your tests. Not that CTs are the bee's knees of measurement - the point is that you need to realize what you can and can't affect via email. The sale happens on the web site. The interest happens in the email.
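As a back-of-the-envelope illustration, the standard way to decide whether one cell's click-through rate really beat the other's is a two-proportion z-test. This is textbook statistics, not anything specific to an email platform, and the counts below are made up:

```python
from math import sqrt, erf

def ctr_significance(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test on click-through rates; returns the z
    statistic and a two-sided p-value."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

# Illustrative counts, not real campaign data:
z, p = ctr_significance(clicks_a=480, sends_a=12000, clicks_b=560, sends_b=12000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p ~ 0.01, so the lift looks real
```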

(5) Tests are too big - I recently talked to someone who designed a test, then sent the existing piece to 1/3 of the list and each of the two test cells to another 1/3. So 2/3 of the mailing went to something other than the champion. This is a recipe for disaster - early stage tests should encompass no more than 10-20% of your email effort, unless the test cells are very small incremental changes from the champion. If you're designing a "big" test cell in order to gain statistical significance, then raise your aim to a measurement that can be significant with a smaller group (like CTs!)
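To see why measuring click-throughs lets you get away with smaller cells, here's the standard two-proportion sample size approximation at roughly 95% confidence and 80% power. The 4% click rate and 0.4% sale rate are illustrative assumptions, not benchmarks:

```python
from math import ceil, sqrt

def cell_size(base_rate, rel_lift, z_alpha=1.96, z_power=0.84):
    """Approximate per-cell sample size to detect a relative lift in a
    rate (standard two-proportion formula, ~95% confidence / 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Detecting a 10% relative lift:
print(cell_size(0.040, 0.10))  # on a 4% click-through rate -> ~40k per cell
print(cell_size(0.004, 0.10))  # on a 0.4% sale rate -> roughly 10x bigger
```

Because the sale rate is an order of magnitude lower than the click rate, the cell has to be roughly ten times bigger to detect the same relative lift - which is exactly why backing the measurement up to CTs keeps test cells small.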

(6) When a test is successful, you push the "GO" button too fast - One of the least appreciated facets of email marketing is the macroeconomic impact of time. That is, while a test is successful today, will it be successful next week? Lots of things can happen in a week that push the results in one direction or another. If a test is "successful", the next step should be to continue the test for several more weeks, then look at the trend of behavior over time. I ran a test where - for three weeks - the challenger beat the champion by 11%. By week 6, the results were even - there was no value in implementing the challenger. I don't know why, but the test was good for a little while, then basically fell flat. Time matters.
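One simple discipline here is to score the lift week by week rather than once. A sketch, with made-up weekly numbers that mimic the kind of fade described above:

```python
# Weekly (clicks, sends) per cell - made-up numbers showing a fading lift.
weeks = [
    {"champion": (400, 10000), "challenger": (444, 10000)},
    {"champion": (410, 10000), "challenger": (452, 10000)},
    {"champion": (405, 10000), "challenger": (448, 10000)},
    {"champion": (402, 10000), "challenger": (420, 10000)},
    {"champion": (398, 10000), "challenger": (404, 10000)},
    {"champion": (401, 10000), "challenger": (402, 10000)},
]

for i, week in enumerate(weeks, start=1):
    champ = week["champion"][0] / week["champion"][1]
    chall = week["challenger"][0] / week["challenger"][1]
    print(f"week {i}: lift = {(chall - champ) / champ:+.1%}")
```

If the printout starts around +11% and drifts toward zero by week 6, you have the "good for a little while" pattern - and no case for rolling out the challenger.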

(7) Email software doesn't make it easy to measure tests - Pretty much every email program I've seen allows for split tests. Some are easier than others to build and execute. The real problem comes with measuring those results, then comparing them to the champion. It can be a colossal pain to pull the analysis together in any sort of meaningful fashion. If the marketer is not fully invested in testing, this "lack of feature" will be used as an excuse way more often than it should be...so while it's a reason, it's not a very good one. Unless we're talking about true multivariate email testing...then it's a good excuse, as most email vendors make it impossible to execute true multivariate testing (but more about that in a future post...)

All in all, if you're an email marketer who isn't actively testing - or hasn't recently completed a series of email tests - you're wasting an opportunity to significantly improve your emarketing results.
