A Beginner’s Guide to A/B Testing: Email Campaigns That Convert

Email campaigns and newsletters can be a great way to get repeat business, as well as new customers. You’re already working with a somewhat pre-qualified base: these people have said they want to receive information from you, and many of them have likely already done business with you. And we all know it’s easier and cheaper to retain customers than it is to acquire new ones.

This is why it’s vital to run A/B tests when trying out new techniques or formats for your email campaigns. Improving conversion rates here can make a bigger difference in your bottom line than many other marketing efforts, especially those of similar cost.

Here’s the third installment in our A Beginner’s Guide to A/B Testing series. Be sure to check out our previous posts: A Beginner’s Guide To A/B Testing: An Introduction and A Beginner’s Guide To A/B Testing: Exceptional Web Copy, and stay tuned for our next installments on testing pay-per-click and SEO landing pages.

Decide What You’ll Test

The first step in setting up an effective A/B test is to decide what you’ll test. While you may want to test more than one thing, it’s important to only test one thing at a time to get accurate results. Things you might consider testing include:

  • Call to action (Example: “Buy Now!” vs. “See Plans & Pricing”)
  • Subject line (Example: “Product XYZ on Sale” vs. “Discounts on Product XYZ”)
  • Testimonials to include (or whether to include them at all)
  • The layout of the message (Example: single column vs. two column, or different placement for different elements)
  • Personalization (Example: “Mr. Smith” vs. “Joe”)
  • Body text
  • Headline
  • Closing text
  • Images
  • The specific offer (Example: “Save 20%” vs. “Get free shipping”)

Each of those things is likely to affect a different part of the conversion process. For example, your call to action is obviously going to have a direct effect on how many people buy your product or click through to your landing page. Your subject line, on the other hand, will directly affect how many people open your email in the first place.

Think about this when you’re deciding what to test first. If not many people are opening your emails, start with your subject line. In general, test the most important parts first: your headline and call to action will likely have a greater impact on conversions than the images you use or your body text. Test those first, then work through the others in order of greatest to least importance.

Test Your Whole List, Or Just Part?

In the vast majority of cases, you’ll want to test your entire list. You want to get an accurate picture of how your email opt-in list responds to your new email campaign, and the best way to do that is to test all of them. There are a few instances, though, where you might not want to test your entire list:

  • If you have a very large list, and the service you’re using to A/B test charges by the email address. In this case, test the largest sample you can afford, and make sure that the names you select are chosen randomly for accurate results.
  • If you’re trying something really extreme, you might want to limit how many people potentially see it, just in case it goes over terribly. In this case, it’s still a good idea to make sure that at least a few hundred people are seeing each version you’re testing. If you can test a few thousand people, even better.
  • If you’re running a limited-time offer, and want to get as many conversions as possible, you might want to run a small (a few hundred recipients) test batch first, and then send out the winner to your entire list.

The larger your test sample, the more accurate your results will be. Make sure that the split is done randomly, too. Hand-picking recipients (or even using two lists from different sources) is a great way to skew your results. The goal here is to gather empirical data to figure out which version of your A/B test material really works best.
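If your software doesn’t handle the split for you, a random shuffle is all it takes. Here’s a minimal sketch in Python, assuming your addresses are already in a plain list (the names here are just illustrative):

```python
import random

# Hypothetical list of subscriber addresses -- swap in your real export.
subscribers = ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]

# Shuffle a copy so every address has an equal chance of landing in either group.
shuffled = subscribers[:]
random.shuffle(shuffled)

# Split down the middle: the first half gets version A, the second half version B.
midpoint = len(shuffled) // 2
group_a = shuffled[:midpoint]
group_b = shuffled[midpoint:]
```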

What Does Success Mean?

Before you send out your email versions, it’s important to decide what you’ll be testing for and what you consider success. First, look at your previous results. If you’ve been using the same email campaign style for months or years, you’ll have a good pool of data to pull from. If your historical conversion rate is 10%, you might aim to raise that to 15% as a starting goal.

Of course, maybe your goal with the initial A/B test is just to get more people to open the email. In that case, look at your historical open rate, and then decide how much improvement you want to see. If you don’t see that improvement with the first set of A/B tests, you might want to run another test, with two more versions.
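One caveat when you’re judging improvement: a small difference between two versions can be plain noise, especially on a smaller list. A quick sanity check is a two-proportion z-test (not something your email tool requires, just a standard statistical technique worth borrowing). Here’s a minimal sketch using only Python’s standard library; the counts are made up for illustration:

```python
import math

def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled proportion under the null hypothesis of no real difference.
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 500 recipients per version, 50 vs. 75 opens.
z, p = two_proportion_z_test(50, 500, 75, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```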

Tools For Testing

Most email campaign software has built-in tools for A/B testing: Campaign Monitor, MailChimp, and ActiveCampaign all offer them.

If your email campaign software doesn’t have specific support for A/B campaigns, you can set one up manually. Simply split your current list into two separate lists, then send one version of your email campaign to each. You’ll then need to compare results manually, though exporting your data to a spreadsheet can help with this.
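Once the results are in, the comparison itself is simple arithmetic. Here’s a minimal sketch, assuming you’ve exported each list’s results to a CSV with “opened” and “clicked” columns; the file names and column names are hypothetical, so adjust them to whatever your software actually exports:

```python
import csv

def summarize(path):
    """Compute open and click-through rates from an exported results CSV."""
    sent = opened = clicked = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            sent += 1
            opened += row["opened"] == "yes"
            clicked += row["clicked"] == "yes"
    return sent, opened / sent, clicked / sent

for label, path in [("Version A", "list_a_results.csv"), ("Version B", "list_b_results.csv")]:
    sent, open_rate, ctr = summarize(path)
    print(f"{label}: {sent} sent, {open_rate:.1%} opened, {ctr:.1%} clicked")
```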

Analyze the Results

Once you’ve run your email campaign with the two different email versions, it’s time to take a look at the results. There are a few different categories of results you’ll want to look at:

  • The open rate
  • The click-through rate
  • The conversion rate once they’re on your website

The reasons behind tracking the first two are pretty obvious. But a lot of people might wonder why we’d want to track the conversion rate outside of the email. Wouldn’t that be beyond the control of the email itself?

Yes and no. Ideally, the email you send shouldn’t have much to do with the conversion rate once a visitor is on your website. If one email leads to 10% of readers clicking through to your website and another one leads to 15%, then the second email should result in 50% more conversions than the first one. But that doesn’t always happen.

It’s important that the message in your email is consistent with the message on your website. If you’re promising visitors a special deal, and that deal isn’t immediately apparent on your website, you’re going to lose customers. The same can happen if your email doesn’t echo the look and feel of your website: visitors might get confused and wonder whether they’ve landed on the correct page.

Make sure you track the conversion rate from each email version to ensure that you aren’t losing sales. The end goal here is conversions, not just click-throughs. You may find that one email gets more click-throughs than the other, but doesn’t result in as many conversions. In that case, you’ll probably want to do more testing to see if you can get an email that produces not only more click-throughs but also more conversions.
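To make that concrete, here’s a small sketch comparing the full funnel for two hypothetical versions. The numbers are invented, but they show how the version with the higher click-through rate can still lose on conversions per recipient:

```python
# Hypothetical funnel counts for each version: (sent, clicks, conversions).
versions = {
    "A": (1000, 100, 30),  # 10% click-through, 30% of clickers convert
    "B": (1000, 150, 24),  # 15% click-through, but only 16% of clickers convert
}

for name, (sent, clicks, conversions) in versions.items():
    ctr = clicks / sent
    on_site = conversions / clicks      # conversion rate once they're on the site
    per_recipient = conversions / sent  # the number that actually pays the bills
    print(f"Version {name}: CTR {ctr:.0%}, site conversion {on_site:.0%}, "
          f"conversions per recipient {per_recipient:.1%}")
```

In this example, version B wins on click-throughs, but version A still produces more sales per email sent.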

Best Practices

Here are a few best practices to keep in mind when running an email A/B test:

  • Always test simultaneously to reduce the chance your results will be skewed by time-based factors.
  • Test as large a sample as you can for more accurate results.
  • Listen to the empirical data you’ve collected, not your gut instinct.
  • Use the tools available to you for quicker and easier A/B testing.
  • Test early and test often for the best results.
  • Only test one variable at a time for best results. (If you want to test more than one, look into multivariate testing instead of A/B testing.)

Be sure to check out the other posts in this series, too: A Beginner’s Guide to A/B Testing: An Introduction and A Beginner’s Guide to A/B Testing: Exceptional Web Copy. And we have two more posts in this series coming up, covering pay-per-click ad testing and SEO landing page testing.

About the Author: Cameron Chapman is a freelance designer, blogger, and the author of Internet Famous: A Practical Guide to Becoming an Online Celebrity.
