Massive growth often doesn’t come from one or two big winning tests, but from many smaller wins stacked together, and consistent testing means the next win is always right around the corner.
Any time you aren’t A/B testing, you’re missing an opportunity to increase revenue and gain valuable insights about your customers. However, testing software, consultant fees, and development costs add up quickly, so it’s important to make the most of your testing investment by testing consistently. In this article, we’ll outline five keys to running a high-velocity, consistent testing program.
1. Make sure all testing slots are filled at all times
You can often break the sections of your website into a number of “slots” where a test can be run. For example, most ecommerce sites have the following “slots” in the main user journey:

- Homepage
- Category and product listing pages
- Product detail pages
- Cart
- Checkout
There are a number of other “slots” you may be able to create on your site, such as blog posts, purpose-built landing pages, and other informational pages. If your website is responsive or has a different experience on mobile, I recommend running separate tests on desktop and mobile, effectively doubling the number of slots you can use and accounting for what are often very different user experiences.
In order to maintain a high testing velocity, a good rule of thumb is to make sure that all of your possible testing slots are occupied at all times. If you look at your testing roadmap and see an open slot, that represents a wasted opportunity for insights and growth.
Be careful about running tests in parallel that could conflict with each other or cross-contaminate results. Credible test data is paramount, so avoid simultaneously running tests with similarly themed changes (messaging promoting free shipping, for example) on different pages.
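To make the audit concrete, here is a minimal sketch in Python of both checks: flagging open slots and flagging parallel tests that share a theme. The slot names, test names, and themes are hypothetical placeholders, not a prescription for how to tag your roadmap.

```python
from dataclasses import dataclass, field

@dataclass
class Test:
    name: str            # hypothetical experiment name
    slot: str            # page/section the test occupies
    themes: set = field(default_factory=set)

# Hypothetical roadmap: the slots your site offers and the tests running now.
SLOTS = {"homepage", "category page", "product page", "cart", "checkout"}

running = [
    Test("PDP urgency badge", "product page", {"urgency"}),
    Test("Cart shipping banner", "cart", {"free shipping"}),
    Test("Checkout shipping reminder", "checkout", {"free shipping"}),
]

# 1. Flag open slots: each one is a missed opportunity for insights and growth.
for slot in sorted(SLOTS - {t.slot for t in running}):
    print(f"Open slot: {slot}")

# 2. Flag parallel tests whose themes overlap and could cross-contaminate.
for i, a in enumerate(running):
    for b in running[i + 1:]:
        if shared := a.themes & b.themes:
            print(f"Possible conflict: {a.name!r} and {b.name!r} share {shared}")
```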
2. Prioritize ideas with a low effort-to-impact ratio so you always know what to do next
When executing an A/B testing program, there should never be a question of which test should run next in each slot. To decide, we recommend using a prioritization model such as PIE (Potential, Importance, Ease). Ideally, you should prioritize tests with a favorable effort-to-impact ratio: low effort to build and high potential impact on your chosen KPIs (Key Performance Indicators).
| Experiment name | Business impact (1-5; 5 = highest ROI) | Technical effort (1-5; 5 = least challenging) | Prioritization index (sum) | Priority (high, medium, low) |
| --- | --- | --- | --- | --- |
|  | 5 | 3 | 8 |  |
|  | 5 | 1 | 6 |  |
|  | 5 | 5 | 10 |  |
One method of test prioritization, the PIE model.
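As a quick illustration, here is a minimal sketch of the scoring logic behind the table above: both factors are scored on a 1-5 scale, the prioritization index is their sum, and the backlog is sorted so the next test to build is always at the top. The experiment names are hypothetical.

```python
# Scoring sketch for the prioritization table above. Both factors use a
# 1-5 scale (5 = highest ROI / least technical effort); the index is the sum.
# Experiment names are hypothetical placeholders.
backlog = [
    {"name": "Sticky add-to-cart button", "impact": 5, "effort": 3},
    {"name": "Rebuilt checkout flow",     "impact": 5, "effort": 1},
    {"name": "Free-shipping banner",      "impact": 5, "effort": 5},
]

for test in backlog:
    test["index"] = test["impact"] + test["effort"]

# Highest index first: the next test to build is always backlog[0].
backlog.sort(key=lambda t: t["index"], reverse=True)
for t in backlog:
    print(f'{t["index"]:>2}  {t["name"]}')
```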
3. Start building the next test before the current one finishes
In order to keep every testing slot filled at all times, it’s important to have the next test for a given slot built before the current test finishes. This means being proactive about checking your roadmap and planning around your developers’ capacity so that tests in development are ready to launch as soon as a slot opens for them (the sketch at the end of this section shows one way to check this).
If developer capacity is a concern, detailed briefing becomes even more important. Make it a habit to write clear, detailed test briefs so that developers don’t burn their limited capacity going back and forth with test designers to clarify requirements.
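One way to make the “built before the current test finishes” rule operational is a simple date check against your roadmap. This is a minimal sketch with hypothetical dates, assuming you track an expected end date for each running test and an estimated build-completion date for the next one.

```python
from datetime import date, timedelta

# Hypothetical dates: when the current test in a slot should finish,
# and when the next test's build is expected to be done.
current_test_ends = date.today() + timedelta(days=10)
next_build_done = date.today() + timedelta(days=14)

gap = (next_build_done - current_test_ends).days
if gap > 0:
    print(f"Warning: slot may sit empty for {gap} days; start the build earlier.")
else:
    print("Next test will be ready before the slot opens.")
```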
4. Stop underperforming tests quickly
If your goal is high velocity, stop underperforming tests quickly and move on. Once the data shows a variation is very unlikely to win, the right course of action is to stop the test, record the insights the loss provides, and immediately fill the empty slot with the next test in line.
Even if a test has only been running for a short time, Bayesian statistics can give you a reasonable degree of confidence that a variation is headed for a loss. Be sure to take the time to understand what your results really mean, and avoid the common misconceptions about statistical significance.
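For illustration, here is a minimal sketch of the Bayesian framing, assuming a simple conversion-rate test with Beta(1, 1) priors and a heuristic 5% stopping floor; your testing tool’s exact model may differ, and the traffic and conversion numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=200_000):
    """Monte Carlo estimate of P(variant B's true rate > control A's),
    using Beta(1, 1) priors updated with the observed data."""
    a = rng.beta(1 + conv_a, 1 + n_a - conv_a, samples)
    b = rng.beta(1 + conv_b, 1 + n_b - conv_b, samples)
    return (b > a).mean()

# Hypothetical early data: 2,000 visitors per arm.
p = prob_b_beats_a(conv_a=110, n_a=2000, conv_b=85, n_b=2000)
print(f"P(variant beats control) = {p:.1%}")

# Heuristic stopping rule: if the variant's win probability has fallen
# below a floor (here 5%), stop the test and free up the slot.
if p < 0.05:
    print("Variant is very unlikely to win; stop and run the next test.")
```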
5. Establish a standardized quality assurance process for test development
Since consistent A/B testing means constantly introducing new changes and elements on your website, there are many opportunities for bugs to occur and break a page, or worse. If you don’t catch bugs before a test goes live, valuable time is wasted when the bug needs to be fixed down the line. Any data collected while a bug is live can’t be considered valid, as the bug may have affected the user experience.
We’ve heard too many stories over the years of people having to scrap weeks of test data because they missed a bug during development. The best way to combat this problem is to establish a standardized quality assurance (QA) process for test development. At AB Genies, we’ve been refining our rigorous QA process for over a decade so we can catch any bugs before a test goes live.
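As one small piece of such a QA process, you can automate a pre-launch smoke check. The sketch below uses Playwright to load a page with the variant forced on and fail if any uncaught JavaScript errors fire; the URL and the force_variant query parameter are hypothetical assumptions, so substitute whatever preview mechanism your testing tool provides.

```python
# Pre-launch smoke check using Playwright (pip install playwright,
# then: playwright install chromium). The URL and the force_variant
# query parameter are hypothetical; use your tool's preview mechanism.
from playwright.sync_api import sync_playwright

URL = "https://www.example.com/product?force_variant=1"

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    errors = []
    page.on("pageerror", lambda err: errors.append(str(err)))  # uncaught JS errors

    page.goto(URL)
    browser.close()

assert not errors, f"Variant threw JavaScript errors: {errors}"
print("Variant loaded with no uncaught JavaScript errors.")
```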