A/B Email Testing 101
There’s a famous quote about marketing attributed to John Wanamaker (1838–1922), one of the founders of modern advertising: “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.”
Do marketing and advertising still rely heavily on guesswork? Yes and no. Guesswork is often where we start. But at least now we have ways to quickly measure the success of that guesswork, so you can adjust your marketing practices and get closer to knowing exactly what works. A/B testing, also called split testing, is one great way to reduce guesswork in email marketing.
In this post, you’ll learn the important basics of A/B testing, so you can start reducing the guesswork in your email marketing.
Why Should You Run an A/B Email Test?
When executed correctly, an A/B test is a great way to boost email performance.
The first type of tests that most marketers run is for optimizing open rates. This includes testing the following elements:
- Subject line
- Sender name
- Preview text
- Send times
You’ve already got people opening your emails? Awesome! Now you can focus on improving your click-through rate. (The click-through rate is the ratio of “clickers” to all email openers.) To do this, test the following:
- Copy text, including greetings
- Paragraphing and text length
- Number of links
- Link placement in running text or in a button
- Button placement, size, shape and color
- Linked images
Once you get past these two hurdles, you’re ready to answer the big question that most marketers wrestle with. How do you optimize your conversion rate? What can you change in your email marketing strategy to encourage more conversions? For conversion rate, test the following:
- Product image choice, description, placement. If you work in e-commerce or any sector where you are embedding buy links in your emails, you can test conversion rates by switching out how you urge your contacts to buy.
- Landing pages. If you aren’t embedding your products directly in your email, you can still test conversion rates. One great way is to send your mailing recipients to two different landing pages. Which one causes people to convert?
Remember, though, the golden rule for starting out in A/B testing: Test only one element at a time. Testing multiple elements at once is known as multivariate testing. For now, you’ll want to leave this to the pros. If you test multiple items (e.g., button size and placement and color while also testing out three different email subject lines), you won’t necessarily understand which element (or element interaction) made your test successful.
Be scientific about your approach to data and keep it simple. Have a hypothesis and test it rigorously.
How Does A/B Testing Work in Email Marketing?
An A/B test compares two different versions of one element in your mailing. It creates an “A version,” or control mailing, and a “B version” that changes one thing. Each of these two versions is sent to a small number of contacts.
Most email marketing software can do this for you automatically. Your email marketing tool will then monitor how recipients interact with the two versions (using KPIs like open rates, click-through rates and conversions) and choose a “winner” version of the same mailing. At a later point, this winner mailing then usually gets sent to all remaining contacts.
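The mechanics your email tool automates here are simple enough to sketch. Below is a minimal, illustrative Python example (all names are our own, not any particular product’s API) of splitting a contact list into two test groups plus a remainder, then picking a winner by open rate:

```python
import random

def run_ab_test(contacts, test_fraction=0.2, seed=42):
    """Split a test sample 50/50 between versions A and B;
    everyone else waits for the winner mailing."""
    rng = random.Random(seed)          # fixed seed for a reproducible sketch
    shuffled = contacts[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    test_group = shuffled[:n_test]
    remainder = shuffled[n_test:]
    half = len(test_group) // 2
    return test_group[:half], test_group[half:half * 2], remainder

def pick_winner(opens_a, sent_a, opens_b, sent_b):
    """Choose the version with the higher open rate."""
    rate_a = opens_a / sent_a
    rate_b = opens_b / sent_b
    return "A" if rate_a >= rate_b else "B"
```

In practice your software also tracks clicks and conversions as winning criteria, but the shape of the process (random split, measure, compare) is the same.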
The idea sounds cool, right? But, wait! Before you run off and start doing tons of A/B testing, review these three steps.
Choose what you are testing and why.
This is the first and most important step when running a split test: You have to start with a hypothesis or a question, and it can be as granular or as broad as you would like.
The goal behind asking this question is to understand recipient behavior. What do people respond to better? That is the question you will return to over and over in different ways. Once you have this information, you can start to speculate on why recipients respond better to certain elements than others.
Here’s a detail-oriented example. You want to know how your target group responds to capitalization. Will more people open your emails if you write your subject lines in sentence case or in title case, with all first letters capitalized?
With this example, you may want to run an A/B test (or multiple tests over time) and look for a trend.
Do the subject lines written in sentence case (Subject A above) consistently win, or do those in title case (Subject B above)? Based on your final conclusion, you can then extrapolate a style guide rule for your future email marketing.
You might choose to test something more theoretical. For example, you want to know whether more people will open your emails if you use subject lines with vague questions designed to provoke interest or with boldly stated facts that give away the most important element of your email. We recently ran exactly that test with our own email list.
In sum, the first step of constructing an A/B test is to formulate your question and come up with a hypothesis. Before you even start, make sure you understand what the aim of your test is.
Create your mailing and its variants.
Most email marketing tools allow you to run A/B tests without having to create separate mailings.
Once you’ve created your A version, create a B version that changes the element you’d like to test. Remember, we recommend testing only one element when starting out with A/B testing. This allows you to draw clearer conclusions at the end of your test.
You can test various “envelope details” such as subject line, preview text or even sender email name. Or you can make changes to the body of the email itself. Personalizations are a great element to test, as well. How does your target group react to them?
Select recipient groups – is the test going to all or a select few?
There are several ways you can proceed. One way is to send the A/B test to your whole contact list, with Version A going to 50% and Version B going to the other 50%.
This has an advantage if you have a small number of contacts and you’re testing something minor that you believe is repeatable. Sending the test to a larger volume of recipients makes it more likely that your results will be statistically significant. With the capitalization example above, you might want to run the test several times on your whole contact list. Then, once you spot a trend, incorporate it into your company’s email marketing best practices.
In contrast, if you want most of your contact list to receive the winner email from the two A/B versions, it’s better to send the A/B mailing to a small percentage of your contact list.
In our software, you have the chance to choose which percentage of your contact list will receive the test email and which percentage will receive the winner mailing. It’s good to send the winner email to a large percentage when the email is important and you want to ensure most of your contacts receive the most convincing message.
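The arithmetic behind those percentages is straightforward. Here’s a hypothetical helper (not taken from any specific product) showing how the test groups and the winner group divide up:

```python
def ab_group_sizes(total_contacts, test_percent):
    """Split `test_percent` of the list evenly between versions
    A and B; everyone else later receives the winner mailing."""
    test_size = int(total_contacts * test_percent / 100)
    per_version = test_size // 2
    winner_size = total_contacts - per_version * 2
    return per_version, per_version, winner_size

# e.g., 10,000 contacts with a 20% test group:
# 1,000 get version A, 1,000 get version B, 8,000 get the winner
```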
Analyze the results.
What conclusions can you draw from your contacts’ behavior?
Can you make generalizations that will guide how you design future campaigns?
Usually it’s good to repeat the results several times before you infer broader rules from them. Also, a lot of marketing experts (us included!) like to give advice about email. Test that advice on your own. For example, emojis can boost open rates with some cultures and target groups, but not across the board. Use A/B testing to reach your own conclusions about what your audience wants.
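One common way to judge whether a difference between versions is real rather than noise is a two-proportion z-test. This sketch uses only the Python standard library; the 0.05 threshold is a widely used convention, not a hard rule:

```python
from math import sqrt, erf

def open_rate_significance(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test on open rates.
    Returns a two-sided p-value; smaller means the difference
    is less likely to be random noise."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    if se == 0:
        return 1.0
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

For example, 300 opens out of 1,000 versus 250 out of 1,000 yields a p-value below 0.05, so you could treat version A’s lead as meaningful; 300 versus 295 would not.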
Five Tips for Starting Out Right With A/B Email Testing
Plan an email A/B testing strategy using these five tips.
- Careful planning is the key to success. Which target groups are you trying to reach? How do you think you might reach them more effectively? What do you suspect might not be working in your current email marketing?
- Come up with a hypothesis and/or question to test before you start.
- Test only one element at a time. When starting out, choose one element. You might even want to test the same hypothesis over a series of multiple A/B tests. Only once you get repeatable results can you be sure that you have a reliable answer to your question.
- Leave yourself time for analysis. Just as it’s important to plan your A/B campaigns, it’s equally important to analyze them. What worked better? Why do you think it worked better? Can you draw conclusions from this result that will guide you to further marketing success?
- Be wary of taking marketing advice as gospel. Marketing experts love to tell you what to do. Ourselves included! You might have heard advice like “If you use emojis, you’ll get more opens” or “If you use a subject line of five words or fewer, you’ll get more opens.” Depending on its makeup (age, nationality, sector, cultural background), your target group might behave differently from the norm. So, when it comes to email marketing, take expert advice as a guideline, and then see how your target group actually behaves. Using A/B testing, you can build a set of best practices that you know works for your target group.
When You’re Ready to Level Up: Multivariate Email Testing
In addition to A/B testing, which examines one variable, you also have the option to run multivariate tests. As the name suggests, these test multiple variables at once. They’re also known as A/B/n or A/B/z testing.
This could mean, for example, testing three or more versions of a subject line, or testing both the subject line and the preview text. With our newsletter software, you can test up to nine versions of a single mailing. The biggest advantage of this type of testing is that it’s more efficient than single-variable testing. For statistically significant results, it’s best to do multivariate email testing only with a high recipient volume, at least several thousand contacts.
Just like with straightforward A/B testing, make sure you’re clear on what you’re testing.
It won’t tell you much, for example, to test two wildly different types of products against each other. It’s great, though, if you want to study element interactions, i.e., how independent elements affect each other. Think of this like listening to a whole orchestra playing together. Whereas A/B testing is similar to comparing two different musicians, A/B/z testing allows you to contrast multiple orchestras full of different musicians.