
Your Field Guide to A/B Testing

If you’re still waiting for campaign post-mortem reports to figure out what worked, or only trying new strategies once something has already failed, you need to rethink your approach. A/B testing is a more efficient way to work. Here’s how to integrate A/B testing into your CXM strategy so you’re always learning something new about your audience.

    What is A/B testing?

    A/B testing is a process that compares two or more variations of something to see which version performs better. You might also hear it called split or bucket testing, and its close cousin, multivariate testing, varies several elements at once. We’ll stick to “A/B testing” to keep it simple. 

    Webpages, emails, mobile apps, ads, forms, entire workflows – anything with variables can be A/B tested. You might toy with:

    • Content: Text, images, videos, headlines, fonts
    • Navigation: Page structure, user journey, cues 
    • Styling: The overall look and feel of the experience
    • CTAs: Button styling or text, CTA placement, CTA wording

    The goal is to figure out what your audience responds to, ideally ending up with a hyper-optimised version that achieves your goal metrics, like conversions, revenue or engagement. 

    But that’s rarely how A/B testing works in reality.

    Optimisation is endless, but also endlessly valuable. In other words, don’t stress yourself out chasing perfection. Instead, look at every A/B test as a chance to learn something new about your audience. Aim to make A/B testing a part of every process, not a one-off exercise. 

    The value of A/B testing

    • Data-driven decision-making: Instead of relying on guesswork, you can use data to determine which elements resonate best with your audience.
    • Finding flaws: A/B testing can highlight usability issues and underperforming content.
    • Improved conversions: By optimising your website or app for better performance, you can increase conversions and achieve marketing goals.
    • Enhanced user experience: By testing different designs and functionalities, you can create a more user-friendly and engaging experience for your customers.
    • Better targeting: The more you know about users, the more you can design experiences that appeal to niche segments.

    How to run A/B testing

    Always start with a goal that’s clearly defined, important for the business and meaningful for customers. Without it, you’re just tinkering. This isn’t a ‘step’ in A/B testing because it should be guiding your CXM strategy.

    Step 1: Hypothesis

    Remember, you’re a data scientist now – you need to be legit. Every experiment needs a hypothesis. What do you think will get you closer to your goal? Simpler landing page layouts? More CTAs? A bolder colour scheme? Removing a step in the journey between first click and conversion? A clear hypothesis stops you from going off course and collecting endless data. That means you should choose which metrics to measure based on your hypothesis and stick with them.
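
    To keep yourself honest, it can help to write the hypothesis down in a structured way before any traffic is split. Here’s a minimal sketch in Python; the field names and values are illustrative, not taken from any particular testing tool.

    ```python
    # Hypothetical structure for pinning down a hypothesis before the test starts.
    from dataclasses import dataclass, field

    @dataclass
    class Hypothesis:
        statement: str                      # what you believe will happen, and why
        primary_metric: str                 # the single metric that decides the test
        guardrail_metrics: list[str] = field(default_factory=list)  # must not get worse
        minimum_detectable_effect: float = 0.10   # smallest relative lift worth acting on

    cta_test = Hypothesis(
        statement="Moving the CTA above the fold will increase sign-ups",
        primary_metric="signup_conversion_rate",
        guardrail_metrics=["bounce_rate"],
    )
    ```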

    Step 2: Structure

    A/B tests are handled in two main ways:

      • Using a single URL is better for single-variable changes like content, CTA buttons or page layout.
      • Using a URL redirect where each experience has its own dedicated URL, e.g. Experience A uses homepage.com and Experience B uses homepage.com/version2.

    URL redirects are more common when evaluating differing purchase funnels or comparing creative directions. If customers take substantially different steps between selecting a product or service and completing the purchase, or you’re developing two very different versions of a page, use a URL redirect. In both cases, experience A is usually the control version and experience B is used to test different variables.
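
    As a rough illustration of the URL-redirect approach, here’s a hedged sketch using Flask (an assumption – any web framework or testing platform can do the same job). Visitors are assigned once and kept in their bucket with a cookie so they don’t flip between experiences.

    ```python
    # Illustrative URL-redirect split; Flask and the route names are assumptions.
    import random
    from flask import Flask, redirect, request, make_response

    app = Flask(__name__)

    def render_experience_a():
        # Placeholder for the control page – in practice, your normal template.
        return "<h1>Experience A</h1>"

    @app.route("/version2")
    def homepage_b():
        return "<h1>Experience B</h1>"   # placeholder for the variant page

    @app.route("/")
    def homepage():
        bucket = request.cookies.get("ab_bucket") or random.choice(["A", "B"])  # 50/50 split
        if bucket == "B":
            response = make_response(redirect("/version2"))  # Experience B has its own URL
        else:
            response = make_response(render_experience_a())
        response.set_cookie("ab_bucket", bucket, max_age=30 * 24 * 3600)  # stay in one bucket
        return response
    ```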

    Step 3: Variables

    Now that you know what you’re aiming to achieve, you can start experimenting with variables that move the needle. For example:

      • Placing the CTA above/below a page’s key insights 
      • Making the button red/blue
      • Using a hero image featuring people/animals/illustrations
      • Including a video/infographic 
      • Cutting/adding a step in the journey 

    The variables you choose should directly correlate to the metrics you’re trying to measure.
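
    One lightweight way to keep that link explicit is to write the test plan down with each variable tied to the metric it’s supposed to move. A purely hypothetical sketch:

    ```python
    # Illustrative test plan only – names are hypothetical, not from any specific tool.
    test_plan = {
        "cta_position":  {"variants": ["above_insights", "below_insights"], "metric": "click_through_rate"},
        "button_colour": {"variants": ["red", "blue"],                       "metric": "click_through_rate"},
        "hero_image":    {"variants": ["people", "animals", "illustration"], "metric": "time_on_page"},
        "journey_steps": {"variants": ["3_steps", "4_steps"],                "metric": "conversion_rate"},
    }

    # One variable per test keeps results readable (see 'Over-testing' below).
    for variable, plan in test_plan.items():
        print(f"Test '{variable}': {plan['variants']} -> measure {plan['metric']}")
    ```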

    Step 4: Audience split

    Rudimentary A/B testing splits the audience in half randomly. That can be good for basic experiments like landing page layouts or topline creative directions. Once you’re further into the process, or if you’re looking for more specific results, consider whether running targeted tests might yield more useful insight.

      • Browser-specific: Google Chrome, Firefox, Safari, Edge, etc.
      • Operating system-specific: macOS, Windows, etc.
      • Device-specific: tablet, mobile or desktop
      • Referring source and medium: Organic search, email, direct, social – and which platform

    A/B testing tools often provide the option to specify audiences. Tests can be set to run for all visitors or only run when the visitor is identified as belonging to a specified audience.
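
    Under the hood, most tools do something like the sketch below: assign each visitor deterministically (so the same person always sees the same experience) and only include visitors who match the target audience. The visitor fields here are assumptions about what your analytics layer provides.

    ```python
    # Sticky bucketing plus a simple audience filter; field names are illustrative.
    import hashlib

    def assign_bucket(visitor_id: str, buckets=("A", "B")) -> str:
        """Hash the visitor ID so the same visitor always lands in the same bucket."""
        digest = hashlib.sha256(visitor_id.encode()).hexdigest()
        return buckets[int(digest, 16) % len(buckets)]

    def in_audience(visitor: dict, device: str = "mobile") -> bool:
        """Only run the test for a targeted segment, e.g. mobile visitors."""
        return visitor.get("device") == device

    visitor = {"id": "user-123", "device": "mobile", "source": "email"}
    if in_audience(visitor):
        print(assign_bucket(visitor["id"]))   # deterministic: always the same answer for user-123
    ```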

    Step 5: Monitor and optimise

    Monitor your chosen metrics to see which variation performs best. Once you have statistically significant results, analyse the data and implement the winning variation. If you have the resources, restart the process. There’s always something to improve, even if you’ve smashed your original goals.
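
    “Statistically significant” usually means comparing conversion rates with something like a two-proportion z-test. Here’s a hedged, standard-library-only sketch; the counts are made up for illustration, and many teams simply rely on their testing platform’s built-in significance calculation instead.

    ```python
    # Two-proportion z-test on conversion counts (illustrative numbers).
    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for conversion rate A vs B."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided, via the normal CDF
        return z, p_value

    z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
    print(f"z = {z:.2f}, p = {p:.4f}")   # p below 0.05 is a common (not universal) threshold
    ```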

    The best organisations embed A/B testing into everything they do. Whether it’s a short-lived social ad or a full website redesign, gathering data and continuously improving gives smart teams an advantage.

    Directing traffic in A/B testing

    The default method is to send 50% of visitors to experience A and 50% to experience B. After reviewing the results, you can decide whether to:

      • Direct all traffic to the better-performing experience (and end the A/B testing cycle) or
      • Adjust experience B to learn a little more (and continue the test) 

    Some platforms will automatically allocate traffic to the best-performing experience: the test starts by splitting traffic equally, and after a period of analysis everyone is directed to the winner (a simplified sketch of this idea follows the list below). This option is common when there is:

      • A revenue-based goal like an e-commerce order or online donation
      • Limited time and lots of users
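
    The exact mechanics differ by platform (many use Bayesian methods), but the underlying idea is bandit-style allocation: keep exploring a little while sending most traffic to the current leader. A simplified, purely illustrative sketch:

    ```python
    # Epsilon-greedy allocation – a simplification of what automated platforms do.
    import random

    stats = {"A": {"visitors": 0, "conversions": 0},
             "B": {"visitors": 0, "conversions": 0}}

    def choose_experience(epsilon=0.1):
        """Mostly send traffic to the current best performer, but keep exploring a little."""
        if random.random() < epsilon or any(s["visitors"] == 0 for s in stats.values()):
            return random.choice(list(stats))                                          # explore
        return max(stats, key=lambda k: stats[k]["conversions"] / stats[k]["visitors"])  # exploit

    def record(experience, converted):
        stats[experience]["visitors"] += 1
        stats[experience]["conversions"] += int(converted)

    # Simulated traffic where experience B converts slightly better (made-up rates).
    for _ in range(10_000):
        exp = choose_experience()
        record(exp, converted=random.random() < (0.05 if exp == "A" else 0.06))
    print({k: v["visitors"] for k, v in stats.items()})   # traffic drifts towards B over time
    ```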

    Either way, reviewing the outcomes of A/B tests is always good. You’ll learn something new, even if it’s just confirming your hypothesis.

    How to avoid common A/B testing slip-ups

    Cutting the test short

    A/B tests need to run their course. We know you’re eager to move on, but impatience is the enemy of good experience design. Set a deadline for the test. By all means, monitor results closely, but let the thing run its course.

    This is also an issue with algorithmic A/B testing. Machines lack the nuanced understanding that humans have; an algorithm might declare a winning variation before the test reaches its natural end.
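
    One practical way to set that deadline is to calculate the required sample size before launch and commit to it. The sketch below uses the standard two-proportion sample-size formula; the baseline conversion rate and minimum detectable lift are illustrative inputs you’d replace with your own.

    ```python
    # Pre-test sample-size estimate for a conversion-rate A/B test (illustrative inputs).
    from math import ceil, sqrt
    from statistics import NormalDist

    def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
        """Approximate visitors needed per variant to detect the given relative lift."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 5% two-sided significance
        z_power = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
        p1, p2 = baseline, baseline * (1 + relative_lift)
        pooled = (p1 + p2) / 2
        n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
              + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
        return ceil(n)

    n = sample_size_per_variant(baseline=0.05, relative_lift=0.10)   # 5% baseline, 10% lift
    print(f"Roughly {n:,} visitors per variant before calling the test")
    ```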

    Over-measuring 

    Simple is best in A/B testing. Focus on the metrics you decided on in step 1 and ignore the rest.

    Your dashboard might look like a spaceship’s control panel with gauges and dials everywhere. Most of that stuff is just decoration in A/B testing. It’s not valuable. In fact, it can be detrimental if it leads you to draw conclusions or infer correlations that don’t exist.

    Over-testing

    In the same vein, don’t try to stretch the limits of “multivariate” testing. We recommend limiting each test to two or three experiences, one variable per experience. This is the best way to gain confidence in your results. Too many different variables will muddy the waters, and too many test conditions will erode the test quality.

    Remember that A/B testing is a never-ending cycle of continuous improvement, not a race to crown a single winner.

    Assuming you’re one-and-done

    It’s tempting to run with the first result. Don’t. Retest at least once every time you get a statistically significant result. If subsequent tests tell you the same thing, then you can move on. But when we’re dealing with human behaviour, there’s a decent probability of a false positive. Not only is this unhelpful for your test but it can cause contradictory results between teams.
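
    A quick back-of-the-envelope calculation shows why: at a 5% significance threshold, the chance of at least one false positive climbs quickly as you run more tests. The figures below are purely illustrative.

    ```python
    # How repeated tests inflate the chance of at least one false positive (alpha = 0.05).
    alpha = 0.05
    for tests in (1, 5, 10, 20):
        p_at_least_one = 1 - (1 - alpha) ** tests
        print(f"{tests:>2} tests -> {p_at_least_one:.0%} chance of at least one false positive")
    ```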

    A/B testing with Tap CXM

    Hopefully this quick guide answered your questions about A/B testing. But we only skimmed the surface here – if you’re stuck with A/B testing, just reach out and we’ll see how we can help.
