
August 23, 2013

5 Tips to Help You Get Started A/B Testing Your Marketing Emails

If you’ve ever tossed a Frisbee, played a game of darts, or even had a game of catch, then you’ve tried to improve your results based on what you previously attempted. Whether consciously or unconsciously, you made adjustments between one throw and the next. You altered how you held the dart/Frisbee/ball/whatever. You changed your stance and foot positioning, your follow-through, where you focused your eyes, and a dozen other factors. In short, you compared current performance to past results and tried to figure out which changes made the difference. You kept the good changes, discarded the bad ones, and the Frisbee/ball/dart got closer to where you wanted it.

That’s all A/B testing really is. You change one thing, one little thing, and compare the results. If you didn’t improve, or your results were worse, go back to your original, baseline version and try again.

If you did get a better result, keep your change. Then test that variable some more. You improved one thing a little; how do you know that tweaking it again won’t improve it even more? You don’t. That’s why you keep testing. And that is the basic idea behind A/B testing.

How do you get started?

Follow these guidelines and you’ll be well on your way toward improvement through A/B testing.

  • Decide what you want to improve.

    Is your open rate low? Test something that affects whether or not someone opens your email in the first place, like your subject line. Try changing something simple like the length (a 5-8 word subject line vs. a longer 10-12 word one). Or try changing the phrasing. For instance, does a vague or a specific subject line work better for your audience? (“Enjoy Big Savings on Back-to-School Supplies” vs. “Enjoy 15% off on Back-to-School Supplies”.)

    If your open rate is okay but your conversion or click-through rate is below par, try changing your offer (“Free Shipping” vs. “25% off when you Buy Two or More”). Or change your CTA (call-to-action): for instance, “Learn More” vs. “Free Demo”. Or consider testing the location or frequency of your CTA, trying it as a graphic button vs. a text link, or even testing the size or color of your CTA button.

  • Test one thing at a time.

    If you make more than one change, you won’t know which factor is responsible for any improvement or fall-off in performance. For example, are you testing for click-through improvements? Test for a change in the location of your CTA button or its size, but not both during the same test.

  • Control for time, date & duration.

    Unless you are testing to learn the best time of day or day of the week to send your emails, schedule all of your campaigns to go out at the same time and on the same day of the week. If you don’t, send timing becomes another factor that can affect your results. And before drawing conclusions, be sure to let your tests run long enough. This ensures adequate input from all time zones and makes allowances for people who check their email at different times of the day.

    We recommend letting a test run for a full 24 hours, if possible. Studies demonstrate that about 80% of all eventual engagement from an emailing will occur in the first 24 hours—unless you are sending on a Friday or Saturday. In that case, you can expect upwards of 20% of subscriber engagement will occur on the following Monday.

  • Test against a control group.

    Create your control email (the original, or version “A”), and then create your test email (version “B”) by changing the one factor of the control that you are testing. Send each to an equal, random portion of your mailing list.

    You’ll see a lot of suggestions to use 10% of your list, but that doesn’t take your list size into account. To get a statistically reliable result, we recommend at least 1,000 subscribers opening each message. So, if you know your open rate, calculate how many subscribers you would need to send to in order to get 1,000 unique opens for each version. For example, if you expect an open rate of 20%, you’ll need to send version “A” to 5,000 subscribers and version “B” to 5,000 subscribers in order to get 1,000 unique opens for each. (The arithmetic and the random split are sketched in the code after this list.)

    This should give you a statistically significant result from which you can confidently draw conclusions. And once you’ve reviewed the results, send the winning version to the remaining contacts in your list.

  • Keep records.

    When you decide to A/B test, you’ve got to keep records so you have good data when you analyze long-term trends, or when you want to see what worked, or didn’t, the last time you ran a similar campaign. With a provider that offers A/B testing options and deep analytics built into its platform, record keeping takes care of itself.

    But if you are starting out and running your email campaigns without an email solutions provider, your records don’t have to be complicated. You may even be able to get by with a basic table in Excel to track things like your changes, send list size, and results (a simple logging sketch follows this list).

    Either way, the longer you test, the larger your supply of data will be. Keep at it for over a year and you may see seasonal changes affecting your measurable results. This long history of records should allow you to anticipate and improve your results throughout the life of your business or organization.
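
To make the control-group arithmetic concrete, here is a minimal sketch in Python. The function names, subscriber addresses, and 25,000-contact list are hypothetical; the 20% open rate and 1,000-open target come from the example above. It calculates how many sends each version needs, then randomly splits the list into version “A”, version “B”, and the holdout that will later receive the winner.

    import math
    import random

    def sends_needed(target_opens, expected_open_rate):
        # How many subscribers each version must reach to expect
        # the target number of unique opens (e.g. 1,000 per version).
        return math.ceil(target_opens / expected_open_rate)

    def ab_split(subscribers, per_version):
        # Randomly assign two equal, non-overlapping groups for versions
        # A and B; everyone else is held back for the winning version.
        shuffled = subscribers[:]
        random.shuffle(shuffled)
        group_a = shuffled[:per_version]
        group_b = shuffled[per_version:2 * per_version]
        holdout = shuffled[2 * per_version:]
        return group_a, group_b, holdout

    # A 20% expected open rate means 5,000 sends per version
    # to expect 1,000 unique opens for each.
    per_version = sends_needed(target_opens=1000, expected_open_rate=0.20)

    subscribers = ["subscriber%d@example.com" % i for i in range(25000)]  # hypothetical list
    group_a, group_b, holdout = ab_split(subscribers, per_version)
    print(per_version, len(group_a), len(group_b), len(holdout))  # 5000 5000 5000 15000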
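And if you are keeping your own records, a flat file is enough. Below is a minimal sketch, assuming a hypothetical CSV file name and column headings, that appends one row per test so you can open the log in Excel later and look for long-term trends. The numbers in the example entry are illustrative.

    import csv
    import os
    from datetime import date

    LOG_FILE = "ab_test_log.csv"  # hypothetical file name

    def log_test(row):
        # Append one test's details; write the header row only when
        # the log file is created for the first time.
        new_file = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(row))
            if new_file:
                writer.writeheader()
            writer.writerow(row)

    # Example entry for a subject-line test (values are illustrative).
    log_test({
        "date": date.today().isoformat(),
        "campaign": "Back-to-School",
        "variable_tested": "subject line",
        "sends_per_version": 5000,
        "opens_a": 980,
        "opens_b": 1150,
        "winner": "B",
    })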

Is this a lot of work that never ends? Not really. Well, it shouldn’t ever end, but it’s not really a lot of work. Follow these tips and you should be able to easily incorporate testing into your everyday practices without noticeably burning through your time or resources.

And since you’ve gone to the trouble of setting up a spreadsheet and crunching the numbers, remember to act on what you’ve learned. If you don’t, what was the point? You may as well put on a blindfold and start tossing darts.

