Factors That Impact A/B Testing

Split testing is one of the simplest ways to test a wide range of marketing assets, including landing pages, web page designs and email subject lines. There’s no doubt that A/B testing is powerful, but its results can be skewed by several outside factors. Successful split testing begins with concentrating on one aspect of your marketing at a time, which gives you a clear view of how each change is actually performing.

The Size of Your List

Who receives which version of your marketing, and how many people receive each one, can make all the difference to the success of your split testing. Take your full list and split it in half completely at random. If you plan to test campaign effectiveness within a particular group, however, you will need to split that group into more than two segments.
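As a rough illustration, here is a minimal Python sketch of a random 50/50 split. The function name split_list and the sample email addresses are hypothetical, assuming your list can be loaded as a simple array of addresses.

```python
import random

def split_list(subscribers, seed=None):
    """Randomly split a subscriber list into two roughly equal halves (A and B)."""
    rng = random.Random(seed)      # a fixed seed makes the split reproducible
    shuffled = subscribers[:]      # copy so the original list order is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Example: assign each half of a hypothetical email list to a version
emails = ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]
group_a, group_b = split_list(emails, seed=42)
print("Version A:", group_a)
print("Version B:", group_b)
```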

History

Beware of something called the ‘history effect’: outside events such as holidays and seasonal changes that can interfere with the validity of your testing. For example, clothing worn by the Duchess of Cambridge resulted in an explosive increase in sales of similar items. However welcome that may have been for the clothing company’s bottom line, it also represents an abnormal occurrence that can muddy test results. If you notice an anomaly like this in your results, check your calendar or the local headlines to see what else might be causing it.

Code

Even if you’re doing everything else correctly, your data could still be compromised by bugs in your code. These bugs can cause the wrong version of a page to be displayed to visitors. This broken-code effect need not be the end of your split testing, however: it can be caught by running compatibility tests across browsers and devices. Once you find an issue, stop the test and fix the code before restarting.
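For a quick automated sanity check, the sketch below fetches each variant with a couple of user-agent strings and flags anything that fails to load or is missing an expected marker. The variant URLs and marker strings are hypothetical, and it relies on the third-party requests library. This is only a server-side smoke check; rendering problems still need to be verified in real browsers or with browser-automation tools.

```python
import requests

# Hypothetical variant URLs and a few user-agent strings to spot-check.
VARIANT_URLS = {
    "A": "https://example.com/landing-a",
    "B": "https://example.com/landing-b",
}
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",               # desktop
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",  # mobile
]

def smoke_check(url, marker):
    """Fetch a variant with several user agents; report non-200s or a missing page marker."""
    problems = []
    for ua in USER_AGENTS:
        resp = requests.get(url, headers={"User-Agent": ua}, timeout=10)
        if resp.status_code != 200:
            problems.append(f"{ua}: HTTP {resp.status_code}")
        elif marker not in resp.text:
            problems.append(f"{ua}: expected marker not found")
    return problems

for name, url in VARIANT_URLS.items():
    issues = smoke_check(url, marker=f"variant-{name.lower()}")
    print(name, "OK" if not issues else issues)
```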

Instrumentation

The instrumentation effect shows up as incorrect or missing data in your analytics. This common issue is usually caused by bugs in the analytics platform you’re using. The good news is that it is easily fixed: stop your testing, apply any bug fixes supplied by the platform, and then restart your test.

Selection

Just as the history effect can boost results while skewing them, so can the selection effect. It occurs when your site receives a surge of traffic from a successful earlier promotion, and that surge is misinterpreted as a winning test. For example, if you are split testing two versions of a landing page and one of them receives a boost in traffic from a promotion, you may be tempted to declare it the winner. Once the promotion fades, however, you may find that the page doesn’t convert nearly as well as it did during the test. The fix is to route promotion traffic to a page that isn’t being tested.
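One way to implement that re-routing, sketched below with the Flask framework and hypothetical route names and utm_source values, is to check the campaign parameter on incoming requests and send promotion traffic to a stable page outside the test.

```python
from flask import Flask, redirect, request, render_template_string

app = Flask(__name__)

# Hypothetical promotion sources whose traffic should bypass the test.
PROMO_SOURCES = {"spring_sale", "newsletter_blast"}

@app.route("/landing")
def landing():
    # Send promotion traffic to a stable, untested page so it can't skew results.
    if request.args.get("utm_source") in PROMO_SOURCES:
        return redirect("/landing-promo")
    # Everyone else continues into the A/B test as usual (assignment logic omitted).
    return render_template_string("<h1>Test landing page</h1>")

@app.route("/landing-promo")
def landing_promo():
    return render_template_string("<h1>Promotion landing page (not part of the test)</h1>")

if __name__ == "__main__":
    app.run()
```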

Whatever problems pop up when conducting A/B testing, the most powerful cure is thoroughness. Knowing when errors are occurring allows you to deal with them before they undermine your results.