A/B Testing in UX Design: When and Why It’s Worth It
Updated: Jan 18
A/B testing (split testing) is a quantitative method of finding the best-performing version of a CTA, copy, image, or any other variable. To start A/B testing, prepare two or more versions of a single element, randomly split your users between them, and see which version performs better. Great tools for A/B testing include Unbounce, VWO, and Optimizely.
Designing a digital product brings about numerous dilemmas: which font reads best? What call-to-action copy converts more? The multitude of options to choose from can give designers a headache. Sure, following best practices and gut feeling is a good place to start, but it won’t take you far in a business setting, and bad design choices can negatively impact your revenue stream. So, what should you do? Base all your UX decisions on solid data. Where do you get that data? Use A/B testing. Continue reading to learn all about it.
What is A/B Testing?
An A/B test – also called split testing – is a simple experiment where users are shown two variants of a design (e.g. background image on a webpage, font size or CTA copy on a homepage, etc.) at random to find out which one performs better. The variant that makes the most users take the desired action (e.g. click the CTA button) is the winner and should be implemented, while the alternative should be discarded.
What can be tested using this method? Well, pretty much everything – from text or phrasing on buttons, through different sizes, colors, or shapes of buttons, to button or CTA placement on the page.
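The random split mentioned above is usually done deterministically, so each user always sees the same version for the duration of the test. Below is a minimal sketch of hash-based bucketing; the function and parameter names (`assign_variant`, `experiment`) are illustrative, not from any particular testing tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a test variant.

    Hashing the user ID together with the experiment name gives every
    user a stable assignment (they always see the same version) while
    splitting traffic roughly evenly across the variants.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment:
assert assign_variant("user-42", "cta-copy") == assign_variant("user-42", "cta-copy")
```

Keying the hash on the experiment name as well as the user ID means the same user can fall into different buckets in different experiments, which keeps tests independent of each other.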
Why is A/B Testing Important in UX?
As mentioned, A/B testing allows you to base your product design decisions on data rather than on opinion or guesswork. It both democratizes design and allows your users to participate in your decision-making. A/B testing can help you learn how small changes influence user behavior, decide which design approach to implement, and confirm that a new design is going in the right direction. Using A/B testing for different elements of your digital product will also improve the overall user experience over time, as well as increase your conversion rate.
Importantly, a good UX will make users stay on a website or in the app – or will make them visit it again – while bad UX will do the opposite. So, running A/B tests is a great way of conducting UX research while your product is live, and deciding what works and what doesn’t work for your target users. It’s also an efficient approach, as it saves the time and resources you would otherwise spend on expensive pre-launch testing conducted before bringing a product to market.
How to Conduct A/B Testing Just Right
You need to base your A/B test on an educated guess – try to figure out what your target users’ pain point could be, i.e. what could be preventing them from taking a desired action. To conduct an A/B test you need to define a goal (e.g. I want my “Request a demo” page to generate more leads), formulate a solid hypothesis (e.g. I think that changing the CTA copy from “Contact us” to “Book demo” will engage our website visitors more and increase the number of leads), and prepare two versions of a variable (e.g. “Book demo” and “Contact us”). The new version is called the variant (version B), while the control (version A) is the existing version you compare the variant against.
Create the two versions of a single variable and make your prototype ready to share for testing. Then monitor it to make sure the test is running correctly. For high-traffic websites, test the smallest change possible; for low-traffic websites you can go bigger and test e.g. two completely different versions of a web design.
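The traffic advice above follows from sample-size math: the smaller the difference you want to detect, the more users you need per variant. Below is a rough per-variant estimate using the standard normal approximation at ~95% confidence and ~80% power; the 5% → 6.5% conversion figures are illustrative assumptions, not numbers from this article:

```python
from math import ceil

def sample_size_per_variant(p_base: float, p_target: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per variant to detect a lift from
    p_base to p_target (z_alpha ~ 95% confidence, z_beta ~ 80% power)."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_target - p_base) ** 2)

# Hypothetical example: detecting a lift from 5% to 6.5% conversion
n = sample_size_per_variant(0.05, 0.065)
```

Running the numbers shows why high-traffic sites can afford to test tiny changes while low-traffic sites should test bolder ones: detecting a small lift requires thousands of users per variant.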
Your test should run long enough to provide you with meaningful, statistically significant results. The bigger the sample size and the more information you collect, the more reliable your test results will be. Remember to only analyze the results of a completed A/B test and only implement a clear winner into your digital product. And what should you do with a “no difference” result? Well, be glad about it, as it means you can implement whichever design you prefer without risk!
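“Statistically significant” can be checked with a standard two-proportion z-test once the test has finished. Here is a self-contained sketch using only the standard library; the conversion counts in the example are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a / n_a is the control's conversions over visitors,
    conv_b / n_b the variant's. Returns the z statistic and p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, built on math.erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: control converts 200/4000, variant 260/4000
z, p = two_proportion_z_test(200, 4000, 260, 4000)
significant = p < 0.05
```

A p-value below your chosen threshold (commonly 0.05) means the difference is unlikely to be due to chance, so the winning variant can be implemented with reasonable confidence.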
Remember: don’t be afraid to formulate different hypotheses and test them. In A/B testing there are no stupid questions! Just make sure you prioritize your tests according to what you know from your customer research.
A/B Testing Tools
There are a lot of tools dedicated to A/B testing out there. Among the most popular are:
Unbounce – a drag-and-drop landing page builder that allows you to create and publish landing pages without any coding. It is an easy-to-use, fast tool for getting more conversions from your traffic.
VWO – the world’s leading web testing and conversion optimization platform. It allows you to conduct qualitative and quantitative user research, build an experimentation roadmap and run continuous experiments on your digital products.
Optimizely – an experimentation platform that helps build and run A/B tests on websites. The service allows you to create and run a variety of experiments for making design choices that will increase your conversion rates.