Introducing New A/B Split Testing in Workflows

The ability to perform A/B testing in a workflow has been a long-running request, and we've had a basic version in private beta for some time. Thank you to everyone who voted for this feature and to all the beta users who offered us feedback. We've now made the feature available to everyone.

An A/B split test can have up to 5 variants, and each variant creates a new branch that can contain any number of messages, delays or logic nodes. You can decide the percentage of customers that should receive each variation, set up a notification, and pick a winner when you finish the test. We think this feature is a simple way to run a quick test and gauge how parts of a workflow are performing, but we know it's just the start of what's possible with A/B testing in Workflows.

  • Getting Started with Workflow A/B Split Tests
  • Less Guessing, More Goals
  • Content
  • Channels
  • Timing
  • Frequency

Getting Started with Workflow A/B Split Tests

Wondering how many times you should reach out to prospects, which headline works best, what offer converts or how long to wait between messages?

When it comes to customer messaging, it's not always obvious what's going to cut through and connect. Rather than basing your marketing decisions on intuition, you can use A/B tests, also known as split tests, as a simple way to find what actually works.

Think of A/B tests as mini-experiments. Starting with a hypothesis, you split your audience, build versions of a campaign or workflow, and then use the data to see whether version A or version B performed better. A/B split tests take the guesswork out of your messaging by trying different messages, methods and content to see which makes the most impact.
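If you're curious what splitting an audience by percentage can look like under the hood, here's a minimal sketch of one common approach: hashing each customer into a stable bucket so they always land in the same branch. This is purely illustrative, not Vero's actual implementation; the test name, variant names and weights are hypothetical.

```python
import hashlib

# Hypothetical variant weights; percentages must sum to 100.
VARIANTS = {"A": 50, "B": 30, "C": 20}

def assign_variant(customer_id: str, test_id: str, variants: dict[str, int]) -> str:
    """Deterministically assign a customer to a variant.

    Hashing (test_id, customer_id) maps every customer to a stable
    bucket in [0, 100), so repeat runs assign them the same branch.
    """
    if sum(variants.values()) != 100:
        raise ValueError("variant weights must sum to 100")
    digest = hashlib.sha256(f"{test_id}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for name, weight in variants.items():
        cumulative += weight
        if bucket < cumulative:
            return name

print(assign_variant("customer-42", "welcome-series", VARIANTS))
```

Hashing rather than random assignment matters here: a customer who re-enters the workflow should see the same variation, or your results will be muddied.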

Less Guessing, More Goals

A/B testing with a conversion goal takes your test to the next level. Beyond comparing open and click numbers, you can link your efforts directly to real business outcomes and see which variation performed best.

To run an effective A/B test, split your audience into two segments and build two separate versions of a message or workflow: version A and version B. Then define a goal, so that after the test has run its course you've got a clear measure of success. For example, if your aim is to market a new product, your metric might be revenue.
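Once the test wraps up, you'll want a way to tell whether the difference between variations is real or just noise. As an illustration (not a built-in Vero feature), here's a minimal two-proportion z-test in Python; the conversion counts are made-up numbers standing in for whatever your goal metric records.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value

# Hypothetical results: 120/2000 conversions for A vs 158/2000 for B.
p_a, p_b, p = two_proportion_z_test(120, 2000, 158, 2000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, p-value: {p:.3f}")
```

In this made-up example the p-value comes out below 0.05, so you'd have reasonable grounds to call B the winner; a larger p-value would mean the test hasn't separated the variants yet.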

Generally speaking, there's no one-goal-fits-all approach to A/B testing. Goals can be anything from email opens and click-throughs to button clicks. These outcomes are an excellent starting point:

  • Drive more visitors to your site
  • Optimise email engagement rates like open and click-through
  • Increase revenue
  • Decrease cart abandonment

A/B tests should be quick, snappy investigations. When testing your campaigns, keep it simple and minimise the number of variables you're testing in a single split test. Sticking to a few variables means you can attribute success to a specific one. Here are some of the variables you can test using Vero's A/B workflows:

Content

Test alternative subject lines, CTAs, email copy, layouts and graphics to evaluate what resonates.

Channels

Experiment with channels like email, push notifications or SMS to determine where your audience is most likely to engage.

Timing

Try delivering messages at different times or on different days to see when your audience is most receptive.

Frequency

Too much? Not enough? Experiment with send cadence to find the sweet spot.

To get the most out of a test, you need a large enough sample: enough people in each variation to draw meaningful conclusions from the results.
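As a rough guide to what "enough" means, here's a back-of-the-envelope sample size calculation for a conversion-rate test. It's the standard two-proportion approximation, not a Vero tool, and the baseline rate and lift used below are assumptions you'd replace with your own numbers.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p_base: float, lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate customers needed per variant for a two-proportion test.

    p_base: baseline conversion rate (e.g. 0.05 for 5%)
    lift:   absolute improvement you want to detect (e.g. 0.01)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a lift from 5% to 6% at 95% confidence and 80% power:
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000+ per variant
```

The takeaway: the smaller the effect you're trying to detect, the more customers each variation needs, which is another good reason to keep each split test focused on one variable.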