When they first start A/B split testing, many new email marketers make a number of common errors that cast serious doubt on the statistical results they obtain. Here are the five worst mistakes, which will not only negate the benefits of A/B split testing but likely yield information so inaccurate that you'd have been better off skipping the test in the first place!

1. Activate the Time Machine

One of the most common A/B split testing mistakes is assuming that you can run a conversion test on Sunday and another on Monday and get comparable results. Most people have a completely different schedule and mindset on a Sunday than they do on a Monday, so the results you obtain will be so skewed as to be unusable. The same goes for any temporal difference. Would your A/B split test run at noon have the same result as one run at midnight? How about one done on the first Wednesday of December versus one conducted on Christmas Day? Yeah, thought so… The only fair comparison is to send both variants at the same time, splitting the same audience at random.
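Here's a minimal sketch of that random split, assuming nothing more than a plain list of recipient addresses; the function name, seed, and variant labels are hypothetical, not a reference to any particular email platform.

```python
import random

def assign_variants(recipients, seed=42):
    """Randomly split one send between variants A and B so both
    go out at the same time, to the same kind of audience."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    assignments = {}
    for email in recipients:
        assignments[email] = "A" if rng.random() < 0.5 else "B"
    return assignments

# Hypothetical recipient list -- in practice this comes from your email platform
recipients = ["ann@example.com", "bob@example.com", "cho@example.com"]
print(assign_variants(recipients))
```

Because both variants land in inboxes during the same hour of the same day, any difference you measure comes from the variants themselves, not from Sunday-versus-Monday moods.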

2. One element means one element

Many novice email marketers won't think twice about changing more than one thing at a time, then complain that A/B split testing is nonsense and doesn't provide usable figures. Let's say you're testing an image of a bedspread on a bed vs. the same bedspread folded flat. Sounds like no problem, right? It isn't, unless you're also changing the size or position of the element. If the bed photo is bigger than the flat photo, then you're really testing four variants:

  • Large photo on bed
  • Large photo folded
  • Small photo on bed
  • Small photo folded

Now you're no longer in the domain of A/B split testing; you're in the universe of multivariate testing (see the sketch below). So change only one element at a time and keep everything else absolutely identical to obtain valid results.
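To see how quickly this gets out of hand, here's a quick sketch: two elements with two options each already give you four variants to compare, and every additional element doubles the count. The element names are just the bedspread example from above, used for illustration.

```python
from itertools import product

# Two elements, each with two options -- illustrative names only
photo_size = ["large", "small"]
photo_style = ["bedspread on bed", "bedspread folded flat"]

variants = list(product(photo_size, photo_style))
for i, (size, style) in enumerate(variants, start=1):
    print(f"Variant {i}: {size} photo, {style}")

print(f"{len(variants)} variants -- already multivariate, not A/B")
```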

3. Excessive micronanotesting

A/B split testing addicts, like addicts to just about anything else, tend to go overboard and lose perspective on anything but getting the next and stronger fix. In the world of A/B split testing, even some huge corporations that you'd think would know better fall into the never-never land of over-testing variants of elements that don't make a bit of difference. Google tested 41 different shades of blue for the link color in its AdSense text ads. It's debatable whether the average person can even tell 41 shades of blue apart unless they're shown side by side. Trying to determine whether this outrageous number of near-identical color variants makes any difference to click rate is ludicrous, tantamount to applying CERN technology to whittling a stick. This is what happens when the pursuit of extra decimal places of accuracy trumps common sense, so don't make the same mistake.
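One way to see why chasing microscopic differences is a waste of effort is to estimate how many recipients you'd need before such a difference becomes detectable at all. The sketch below uses the standard two-proportion sample-size formula at 5% significance and 80% power; the baseline click rates and effect sizes are assumptions made purely for illustration.

```python
from math import sqrt
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate recipients needed per variant to detect a change
    in click rate from p1 to p2 (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical numbers: a barely-there shift from a 2.0% to a 2.02% click rate
print(round(sample_size_per_variant(0.020, 0.0202)))  # millions of recipients per variant
# versus a change actually worth testing: 2.0% vs. 2.5%
print(round(sample_size_per_variant(0.020, 0.025)))   # a realistically attainable sample
```

If the difference you're hunting requires a list the size of a small country to detect, it isn't a difference worth hunting.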

4. Overriding results

Some A/B split testing results will suggest aesthetic choices so incoherent that your designer may feel nauseated at first sight. Instead of catering to some abstract notion of what looks good and what doesn't, send those objections back to the art room and believe your figures… they don't lie!

5. Conversions Uber Alles

Ask any email marketer what they're most likely to A/B split test and they'll almost always answer conversions! After all, conversions are the reason we show up at the office in the morning; they're the lifeblood of any online business. However, treating conversions as the only significant metric is a big and all-too-common mistake. Let's say you're in a business where the average new subscriber takes six months to place a first order. If you run a campaign to get new subscribers, you might see a 10% bump in your total list size, but if you measure conversion rate during that first half-year it will actually fall: the denominator (subscribers) grows immediately while the numerator (orders) doesn't budge for months. Take new subscribers, shifts in cart abandonment, and everything else into consideration for an accurate picture.
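Here's a quick sketch of that dilution effect, with purely hypothetical figures: the list grows 10%, orders stay flat for the first six months, and the measured conversion rate drops even though the business is actually healthier.

```python
# Hypothetical figures for illustration only
existing_subscribers = 10_000
monthly_orders = 500                      # 5.0% conversion rate before the campaign

new_subscribers = 1_000                   # a 10% bump in list size
# New subscribers take ~6 months to place a first order, so orders don't
# move yet while the denominator (list size) grows right away.
orders_during_first_half_year = monthly_orders

before = monthly_orders / existing_subscribers
after = orders_during_first_half_year / (existing_subscribers + new_subscribers)

print(f"Conversion rate before: {before:.2%}")   # 5.00%
print(f"Conversion rate after:  {after:.2%}")    # ~4.55% -- looks worse, isn't
```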

Don’t commit these five critical mistakes in your A/B split testing and you’ll be rewarded with figures that actually make sense!