September 7, 2018 Edgar Guillen

How to Rock Split Testing on Facebook Ads


It’s no secret that marketers need specific strategies and tools to better understand their ideal audience and how it responds. When advertising on Facebook, you want to be confident that your ads are reaching the right people, in the right place, at the right time; most importantly, you’ll want to feel certain your advertising budget isn’t being wasted.

A/B testing is one of the most fundamental principles of online advertising. It allows you to simultaneously test different versions of your ads so you can see what works best, and thereby understand how different strategies will impact your ad performance. Also, by using the split testing feature on Facebook, you can avoid audience overlap, since the potential reach is randomized and split among ad sets to ensure an accurate test.


So, how does it work?

Step 1:

Navigate to Ads Manager, click “Create” to start a new campaign, and choose your marketing objective: traffic, engagement, app installs, video views, or lead generation.


Step 2:

Select “Create Split Test” and choose a name for your campaign. Select “Continue.”


Step 3:

Inside the Ad Set, go to the “Variable” section and select the variable you would like to test.


Facebook split testing allows you to choose between four variables:

  • Audience
  • Delivery optimization
  • Placement
  • Creative


Step 4:

  • To test an audience as your variable, go to the audience section where you will be able to create a new audience for each Ad Set or select a saved audience.
  • To test delivery optimization as your variable, go to the delivery optimization section and choose your delivery and bid strategies for each ad set.
  • To test different placements as your variable, go to the placements section and determine whether you would like automatic placements or choose your placements to customize where ads are shown.
  • To test creatives as your variable, make your selections for audience, placements, delivery, budget, and schedule in the Ad Set first, then set up the different versions of your ad in the Ad section.

You can test up to five Ad Sets or Ads for each variable, though it isn’t recommended to test more than three per campaign.

Step 5:

After deciding which variable you’d like to test, navigate to the “Split Test Budget & Schedule” section and choose a budget. You will then have two options: “Even Split” divides the budget equally among your ad sets, while “Weighted Split” lets you set the percentage of the budget each ad set receives.
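As a rough sketch of what the two options do with your money (the function names and dollar figures here are illustrative, not part of Facebook’s product or API), this is how an even and a weighted split divide a $30 daily budget across three ad sets:

```python
# Sketch: how "Even Split" vs "Weighted Split" divides a daily budget
# across ad sets. All names and numbers are illustrative.

def even_split(total_budget, n_ad_sets):
    """Divide the budget equally across ad sets."""
    return [round(total_budget / n_ad_sets, 2)] * n_ad_sets

def weighted_split(total_budget, weights):
    """Divide the budget by percentage weights (must sum to 100)."""
    assert sum(weights) == 100, "weights must sum to 100%"
    return [round(total_budget * w / 100, 2) for w in weights]

print(even_split(30, 3))                 # [10.0, 10.0, 10.0]
print(weighted_split(30, [50, 30, 20]))  # [15.0, 9.0, 6.0]
```

A weighted split is useful when one variant is riskier: you can route most of the budget to the safer ad set while still gathering data on the experiment.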


In the same section, select the schedule of the campaign. Then, click “Continue.”

Step 6:

When the different Ad Sets are ready, it’s time to set up your creatives.


It’s as simple as that!

Facebook has found that split testing produces strategies with a median CPA (cost per action) improvement of 14%. With Split Testing, you are able to test different variables without audience overlap, getting the most out of your budget.
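CPA itself is simply spend divided by the number of actions, so an improvement like that is easy to measure yourself. A quick sketch with made-up numbers:

```python
# Sketch: computing CPA (cost per action) and the percentage
# improvement between two tests. The figures are made up.

def cpa(spend, actions):
    return spend / actions

before = cpa(200.0, 40)   # $5.00 per action
after = cpa(200.0, 46)    # ~$4.35 per action

improvement = (before - after) / before * 100
print(f"CPA improved by {improvement:.1f}%")  # CPA improved by 13.0%
```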



Now that you’ve learned how Split Testing works, review some best practices recommended by Facebook that will give you more precise and conclusive results.


Best practice #1: Start with a measurable hypothesis

Once you’ve decided which variable you’re going to test, it’s essential to form a hypothesis before you start running the experiment, so you’ll be able to answer the question you have and optimize future campaigns.

“If you can’t state your reason for running a test, then you probably need to examine why and what you are testing.” – Brian Schmitt, Conversion Optimization Consultant, CROmetrics

According to Optimizely, a complete hypothesis has three parts: the variable, the desired result, and the rationale, so the statement takes the form “If ____, then ____, because ____.”

Image from Optimizely

An example of this process is: “If the placement is only mobile, then we will have more sign-ups because our main target spends more time on mobile than on desktop.”

Best practice #2: Test one specific factor at a time

Your Ad Sets and Ads must be identical except for the one variable you’re testing. Specifically, when you’re testing different creatives, it’s important to change only one specific factor at a time, while keeping everything else the same.

For example, your ads could share the same headline, text, and CTA but use a different image; here, the image is your “experimental condition.” If your Ads have different headlines, different images, different CTAs, and so on, you won’t be able to tell which factor made the winning Ad win.
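If you were organizing your creatives in code, this rule amounts to copying a base creative and changing exactly one field per variant. The creative fields and values below are hypothetical:

```python
# Sketch: generating creative variants that differ in exactly one
# factor. The base creative and variant values are hypothetical.

base_creative = {
    "headline": "Grow your business",
    "text": "Reach the right people at the right time.",
    "cta": "Sign Up",
    "image": "team.jpg",
}

def make_variants(base, factor, values):
    """Return copies of the base creative, changing only `factor`."""
    return [{**base, factor: v} for v in values]

# Test three images while headline, text, and CTA stay identical.
variants = make_variants(
    base_creative, "image", ["team.jpg", "product.jpg", "chart.jpg"]
)
for v in variants:
    print(v["image"], "|", v["headline"])
```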

In the following image, you’ll notice there are three different images, with three separate image copies; it’s impossible to know if the winning factor was actually the image or the text.


Best practice #3: Use an ideal audience

To get accurate results, use an audience large enough for the test. The fewer days your test runs, the larger the audience you’ll need in order to collect enough data for reliable results.

It’s also important not to use the same audience in any other ongoing campaigns, to avoid audience overlap.
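To get a rough sense of what “large enough” means, you can use the standard two-proportion sample-size formula (here with 95% confidence and 80% power). The baseline and target conversion rates below are assumptions for illustration, not Facebook recommendations:

```python
import math

# Sketch: rough per-ad-set sample size needed to detect a lift in
# conversion rate, via the standard two-proportion formula.
# z-values are hardcoded for 95% confidence (1.96) and 80% power (0.84).

def sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """People needed per ad set to detect a change from p1 to p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a lift from a 2% to a 3% conversion rate:
print(sample_size(0.02, 0.03))
```

The takeaway: small expected lifts require audiences in the thousands per ad set, which is why tiny audiences rarely produce conclusive split tests.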

Best practice #4: Use an ideal time frame

Tests should run for at least three days and no more than 14 days, though Facebook actually recommends four-day tests for better results.

This time frame also depends on the objective of your campaign. For example, tests with conversions as their objective should run for more than seven days since, as Facebook explains, people often take longer than seven days to convert.


Best practice #5: Set an ideal budget

It’s important to set a budget large enough to produce the results needed to determine a winning Ad Set or Ad. We recommend more than $5 daily per Ad Set or Ad; if you’re not sure about the ideal budget, you can use the suggested budget Facebook provides.

Now it’s your turn!

Use these best practices to uncover the best-performing audience, delivery optimization, placement, and creative. Have you tried it out yet?

