3 A/B Testing Pitfalls to Avoid

by Christina Gillick | 07/31/2014

When you read about A/B testing, it gets scary fast…

A lot of words are thrown around… things like “qualitative data,” “confidence level,” and “statistically significant,” just to name a few.

If you’re a small business owner—or tight on resources—maybe you’re avoiding A/B testing because you think it’s just too complicated…

[Image: A/B testing pitfalls. Source: Placeit.net]

In reality, online A/B testing is easier than ever.

In fact, many website applications—such as shopping carts, email service providers, and even WordPress plugins—have A/B testing functionality built in.

This is exciting news for those of us who have long raved about the benefits of A/B testing.

But, if you aren’t yet convinced, here are a few times A/B testing is essential to your success…

3 Times A/B Testing is Vital to Your Success

1. Website redesign.

Many companies scrap their entire design, move to a new one, and then wonder why things aren’t working as they expected.

The reason?

Likely, they made a lot of assumptions about what their audience wanted.

Instead, they could have—and should have—tested the design as they rolled it out.

Then, they would have real data on whether certain changes were effective or not.

2. You suspect your conversions (or other stats) could be better.

If you’re researching your industry and finding that your competitors are getting better conversion rates than you, it’s time to run some tests.

Likewise, if you’re looking at your site thinking, “I bet so-and-so would perform better,” why not run a test?

Don’t make changes just because you think you know what’s going on. Instead, be sure with an A/B split test.

3. You read or hear that you’re doing something “wrong” or that something else works better.

As you’ve probably heard, what works in one market or industry won’t always work in other markets/industries.

That means you should NOT change all your buttons to red just because you see a test where a red button outperformed a green one.

Instead, you should create two versions of your button and try the test for yourself. Your results could be wildly different from the example you based your test on.

A/B Split Testing Explained

Of all the definitions I’ve found, I think Neil Patel’s is the easiest to understand:

In an A/B test, you run version A (the control) against version B (the challenger) to see which one has a higher conversion rate. You then send an equal amount of traffic to each to see which one improves conversions. That’s the basic definition of an A/B conversion test.
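If you’re curious what that traffic split looks like mechanically, here’s a minimal Python sketch (the visitor IDs and function name are hypothetical, not taken from any particular testing tool). Each visitor is deterministically assigned to the control or the challenger, so traffic ends up roughly 50/50 and a returning visitor always sees the same version:

    # Minimal sketch of a 50/50 traffic split; all names here are hypothetical.
    import hashlib

    def assign_variant(visitor_id: str) -> str:
        """Assign a visitor to 'A' (control) or 'B' (challenger).

        Hashing the visitor ID means roughly half of all visitors land in
        each group, and the same visitor always gets the same version.
        """
        digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    for visitor in ["visitor-101", "visitor-102", "visitor-103"]:
        print(visitor, "->", assign_variant(visitor))

In practice, your A/B testing tool handles this assignment for you; the point is simply that both versions see comparable traffic.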

For instance, let’s say you want to increase the number of visitors who sign up for your newsletter.

A relevant test might be to change the color of your opt-in button…

Version A (your control) would be your original button. The other version (Version B) is identical except for the color. Here’s an example.

Version A:

[Image: AB-Testing-Example-1]

Version B:

[Image: AB-Testing-Example-2]

In this test, from WhichTestWon, they changed ONE element, the color of the “Add to Cart” button. Everything else stayed the same.

So which button—blue or green—increased sales conversions by 14%?

The answer is the green button.

Why?

Well, that’s where many people go wrong with A/B testing…

3 Common Pitfalls of A/B Testing

When you hear about someone’s results with A/B testing, you might be eager to jump right in. But, doing so—without first knowing what to avoid—can cost you time, money, and your current conversion rate.

That’s because common and easily made mistakes can skew your results…

Pitfall #1 – Making A/B Split Test Assumptions

As I mentioned above, many people assume that because something works in one market, it must work in another… and often, they’re wrong.

Let’s just look at a few assumptions you may have made with the above example:

  1. Assuming blue would win because color psychology tells us it’s a trustworthy color
  2. Assuming green would win because it’s the “color for go”
  3. Assuming blue would win because it matches the site better
  4. Assuming green would win because it stands out
  5. Assuming either of these colors would work better on YOUR site because of this example

As you can see, our assumptions can have us bouncing between the options all day.

The only real way to determine the winner is to test them.

By sending actual visitors to two different versions, we can see which button color actually works to convert more buyers.

Of course, button color is just one of the many things you could test. They could have also tested button wording, location, styling, alignment, and more. They could even test the elements around the button.

That leads us to our next pitfall…

Pitfall #2 – Testing more than one change at a time.

It seems the Internet is divided over this issue: What is the proper way to A/B split test?

  • A. Change only one element at a time. Get the results. Implement the winner. Create a new variation.
  • B. Make two completely different versions of a page, changing any and everything that you want. Then, send equal traffic to each page.

Even WhichTestWon joins the debate:

[Image: AB-Testing-Example-7]

I always recommend option A (changing one element at a time).

As WhichTestWon says, changing just one element will give you more accurate data. If you make the mistake of changing multiple elements, your results will be skewed and you’ll need additional tests.

Or, as Neil Patel points out:

If you modify too many variables at once, without testing them all, you also won’t know what variables are helping and which ones are hurting. For that reason, you should only try to make one change at a time.

Here’s a graphic to demonstrate:

[Image: AB-Testing-Example-6]

In this graphic, multiple things about the button are different: the color, the text, the color of the text, the outline, and even the corner radius.

This is NOT the correct way to split test. There are just too many variables.

Let’s say, for instance, that the new variation wins. Do you know why?

It could be the color, but it could also be the wording, or even the color of the words.

Instead of making multiple changes for one test, you should spread your changes over several tests, like this:

You start with your original and one change:

[Image: AB-Testing-Example-3]

In this example, we changed the color from “red” to “green.” Nothing else.

Once you have a winner (which we’ll talk more about in a minute), you test that winner against a new variation. Let’s say the winner was Version B. It becomes our new control. Like this:

[Image: AB-Testing-Example-4]

In this second test we’re testing the words on the button. The previous winner said, “Go!” Our variation (or B) says, “Click here!” Everything else is the same.

Once you have a winner between “Go!” and “Click here!” you can create another test. Let’s say the winner is still the button with the word “Go!”

Next, you might decide to test the color of the word…

[Image: AB-Testing-Example-5]

Once this test is complete, you can take the winner and create a new variation.
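Putting the whole sequence together, here’s a rough Python sketch of that champion-and-challenger bookkeeping. The run_ab_test function is a hypothetical stand-in for whatever testing tool you use, and the button attributes only loosely follow the example above:

    # Rough sketch of the "one change per test" workflow; run_ab_test is a
    # hypothetical placeholder for your actual testing tool.

    def run_ab_test(control: dict, challenger: dict) -> dict:
        """Placeholder: run the test to the required sample size and return
        whichever version won. Here we simply return the challenger so the
        loop below has something to do."""
        return challenger

    champion = {"color": "red", "text": "Go!", "text_color": "white"}

    # Each challenger changes exactly ONE element of the current champion.
    planned_changes = [
        {"color": "green"},       # Test 1: button color
        {"text": "Click here!"},  # Test 2: wording on the button
        {"text_color": "black"},  # Test 3: color of the words
    ]

    for change in planned_changes:
        challenger = {**champion, **change}  # copy the champion, change one thing
        champion = run_ab_test(champion, challenger)
        print("Current champion:", champion)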

Keep in mind that this is just an example. For real-world tests, you’ll often see more dramatic conversion lifts by testing bigger changes.

For instance, testing your original text against a video with the same content could result in a much bigger win than changing the words on your button.

Pitfall #3 – Using sample sizes that are too small.

Your “sample size” is how many people you will run through your test before making a decision on a winner.

For example, if your sample size is a thousand, 500 people would see your control (Version A) and 500 would see the variation (Version B).

There’s a lot of advice about how big your sample size should be…

  • Some people recommend running your test for at least a few weeks (so you have data across multiple days and times).
  • Some say you should aim for at least 1,000 impressions.

However, because every test/website/audience is different, I rely on a sample size calculator to tell me how big the sample size should be.
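If you’re curious what a sample size calculator is doing behind the scenes, here’s a hedged Python sketch using the standard two-proportion formula (your calculator of choice may use a slightly different one). The baseline rate and minimum detectable lift below are made-up example numbers:

    # Sketch of a sample size calculation for comparing two conversion rates.
    from statistics import NormalDist

    def sample_size_per_variation(baseline_rate, min_detectable_lift,
                                  alpha=0.05, power=0.80):
        """Visitors needed in EACH version to detect an absolute lift of
        `min_detectable_lift` over `baseline_rate` at the given significance
        level (alpha) and statistical power."""
        p1 = baseline_rate
        p2 = baseline_rate + min_detectable_lift
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
        z_power = NormalDist().inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        n = (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2
        return int(n) + 1

    # Example: 5% baseline conversion rate, hoping to detect a 1-point lift
    print(sample_size_per_variation(0.05, 0.01))  # roughly 8,000 per version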

Additionally, once I reach the recommended sample size, I make sure my results are accurate with a split test calculator.

A split test calculator will help you interpret your results and tell you if you have a clear winner.
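For the curious, here’s a minimal sketch of the kind of math a split test calculator runs: a two-proportion z-test on the raw visitor and conversion counts. The numbers in the example are invented:

    # Sketch of a significance check for two variations of a page.
    from statistics import NormalDist

    def is_significant(visitors_a, conversions_a, visitors_b, conversions_b,
                       alpha=0.05):
        """Return (p_value, significant) for the difference between A and B."""
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        # Pooled rate under the assumption that both versions perform the same
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        std_err = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
        z = (rate_b - rate_a) / std_err
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return p_value, p_value < alpha

    # Example: 5,000 visitors per version, 250 vs. 300 conversions
    print(is_significant(5000, 250, 5000, 300))  # p is about 0.03 here

If the p-value comes back above your threshold, keep the test running (or accept that there is no real difference) rather than declaring a winner early.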

For more on using these tools, be sure to check out this article.

So, what happens if your sample size is too small?

A small sample size can lead to a false positive. Or, in other words, your test will be inaccurate.

I repeat: if your sample size is too small (or you stop your test early), it won’t be accurate.

Why?

Well, there are many reasons…

It could be as simple as the weather having an effect on people’s behavior, a holiday weekend, or other things beyond our control or explanation.

That’s why it’s important not to stop a test as soon as you think you see winning results. As exciting as it is to see a 40% increase in your conversion rate, it may only be fleeting.

Instead, use a sample size calculator and then wait until you reach the correct sample size. You might find that, in the long run, your original version actually came from behind and beat out the variation that started out so strong.

Note: If you have a large amount of traffic coming to your site, run your tests for at least one solid week. This way you’ll get data over several different days. (Yes, the day of the week can make a dramatic difference in your results.)

Once your test has run at least a week and is complete (use a split test calculator to make sure you have a clear winner), are you done? No.

Of course you’ll want to implement the winner, but after that…

Keep Testing

Many experts agree: You should always be testing.

Whether you’re trying to find the best headline for your site, most effective button color, or something else entirely, running tests will help you improve your site.

Plus, as you gather data over time, you’ll be able to better understand your website visitors and come up with more effective tests.

So, the next time you’re considering a website redesign, speculating about changing what may (or may not) be effective, or yearning to try that new trick you read about, test it first.

A/B testing tools like the ones reviewed here will help you get up and running fast.

So, did I leave anything out? What pitfalls do you aim to avoid in A/B testing?

Read other Crazy Egg articles by Christina Gillick.

About Christina Gillick

Christina Gillick is a direct-response copywriter. She helps her clients create loyal customers and raving fans through relationship building copy and marketing. She is also an entrepreneur and founder of ComfyEarrings – The Most Comfortable Earrings on Earth.


2 COMMENTS

Mike Stickney

Great article, with some spot on advice.

The most common problem I find with clients is that they are impatient. Once they see a win, they want to implement and move on… but, to your point, calling a test early can have a negative effect in the long run. You need to be patient to get results.

For this reason, while I agree that at its core A/B testing involves changing one element at a time, for bigger results I’m a fan of “Innovation” followed by “Iteration”. Start with a completely new design or multiple changes (based on assumptions, best practices, etc.), and perform a test. Once you have a winner, then do the small “iterative” tests. The “innovative” test can help get people excited about testing, since they see big differences from a single test in possibly a shorter amount of time. A simple button color test, or button text change, can take a long time to hit significance, or have very little impact overall. That’s where people can get discouraged that things are not moving forward.

The other key thing is that, win or lose, every test should tell you something (either something you should do or something you shouldn’t do). That’s why I don’t like to use the term “losers” for challengers that get beat; I always use the term “learners”.

Just some of my thoughts. Again very nice article.

July 31, 2014

    Neil Patel

    Mike,
    Thanks for sharing. Looking forward to hearing much more from you :)

    July 31, 2014

