e-idea file

Testing Primer

Why Test?

You test to identify the approaches, content and techniques that produce the most response and profit for your particular product or service. Testing can confirm your intuition; identify what works, what doesn't and what is irrelevant; and provide you with the information you need to optimize your marketing.

  • What price point generates the highest profit (margin times response rate)?
  • What type of offer provides the greatest number of inquiries?
  • What topics are the readers of my newsletter most interested in?
  • What appeal provides our non-profit with the most donor responses? The highest average donation?

These are just a few of the questions you might use testing to answer.

Testing has traditionally been used to increase an organization's insight over time into which approaches work best for its particular products and services. However, with today's faster turnarounds and lower production costs, some forward-thinking marketers are building tests into individual mailings by staggering drop dates. Consider the example of one association whose goal was to maximize registration for a trade show conference.

The customer planned an overall mailing of 60,000 pieces, but randomly selected 10,000 contacts for a preliminary test. With a fairly simple test design, the customer tested a control (their best guess at what worked) against two additional offers, two additional formats, and two additional headlines/teasers. The results showed that an optimized combination could produce a 42% higher response than the control; that combination was then used for the final piece mailed to the 50,000 remaining addresses.

Let's take a look at the various approaches to testing and analysis to see if this makes sense for you.

Test Design

A/B [Split-Run] Test
Test One Variable at a Time

Pros:
  • Minimum sample size is smaller than other options
  • The math is easy

Cons:
  • Takes longer to gain insight into what works best among the multitude of factors that may impact response
  • Doesn't measure the interaction between variables across multiple tests

Uses:
  • Small samples/overall list sizes
  • Quickly testing big-picture approaches (discount vs. free gift; postcard vs. envelope/mailer, etc.)
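The "math is easy" claim is worth seeing in practice. Below is a minimal sketch, in Python using only the standard library, of how you might check whether a split-run result is statistically meaningful via a standard two-proportion z-test. The mailing counts and response figures are hypothetical, chosen purely for illustration:

```python
import math

def ab_significance(resp_a, n_a, resp_b, n_b):
    """Two-proportion z-test for an A/B (split-run) result.

    Returns (z, p_value); a small two-sided p-value (e.g. < 0.05)
    suggests the difference in response rates is not just chance.
    """
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)            # pooled response rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard-normal CDF via erf; doubled for a two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split: test pulls 65/5,000 (1.3%), control pulls 50/5,000 (1.0%)
z, p = ab_significance(65, 5000, 50, 5000)
```

Here p comes out around 0.16, so a 1.0% vs. 1.3% difference on 5,000 names per cell could easily be noise, which is exactly why the sample-size question discussed later in this primer matters.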

Grid Test
Test Multiple A/B Variables at Once

Pros:
  • Faster insight into what works best
  • The math is easy

Cons:
  • You essentially need to double your sample size for every new variable you want to test concurrently
  • Doesn't measure the interaction between variables

Uses:
  • Quickly identifying the best option for each of several variables when you plan to eventually mail to a reasonably large list

Scientific Test (Multivariate, Taguchi, etc.)
Advanced Statistical Analysis to Concurrently Test Any Number of Variables Without Having to Scale the Sample Size

Pros:
  • Most powerful approach to testing
  • Can provide insight into the optimal value for each variable as well as the optimal combination of values (interactions)

Cons:
  • The design and analysis of the test require specialized, advanced math
  • Not realistic for most of us to do without consultants
  • More expensive due to consulting, creative and production costs related to developing the items to test
  • Requires a sample size two to three times larger than a simple A/B test of one variable

Uses:
  • Large-scale marketers running multiple print and/or broadcast email campaigns where the sample is large enough and the impact on ROI justifies the expense

In one case study presented on the Web, Dell used scientific testing to increase its direct mail performance by 352%!


Direct Mail Testing Priorities

In selecting the most appropriate test design, you should:

  1. Create a testing plan to screen and refine over a series of campaigns.
  2. Let your marketing objective drive the testing plan.
  3. Simple is better: Avoid unnecessary complexity.
  4. Start with the big picture before digging into the details.
  5. Make it a standard part of your overall process.

Begin by evaluating whether there are tests you have already completed without realizing it; put another way, do you have underlying information about the lists you have previously mailed that you can correlate with response results?

One example is the question of gender bias in your mail results: if you compare the ratio of men to women on your mailing list to the ratio of men to women among your responders, are they the same? If not, your product and/or marketing appeals more to one than the other. This is fairly easy to identify based just on the first names available in the original list and the response list. To evaluate your lists, try our gender identification list tool at willow.kickshout.com/xgenderfinder.asp. You may also have mailed using a variety of list sources, a variety of formats, and so on. While mailings done at different times are not as reliable as a controlled test, the analysis may still provide key insight into the trends and approaches that work best for you.

A Note on Sample Size

Generally, the higher your typical response rate, the smaller the sample size you will require. For example, a credit card marketer that averages a 1% response rate and wants to test ideas for a 10% lift (to 1.1%) would need a sample size of over 135,000 to determine whether a single A/B test is significant, at a 75% confidence level that the effect will not be missed (in statistical terms, 75% power). On the other hand, an email marketer testing subject lines with a typical open rate of 35% and looking for a minimum 3% increase in response would need a sample size of only 3,500 to determine whether one subject line would outperform the other.

We've added a Sample-Size Calculator to our list tools to help you identify the size required to design a valid test. Just visit willow.kickshout.com/xsample.asp to take advantage of this easy-to-use tool.
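For a rough feel of how such a calculator works, here is a sketch of the standard two-proportion sample-size formula in Python (stdlib only). The confidence and power defaults below are common textbook assumptions, not the article's, and the example reads the "3% increase" on a 35% open rate as three percentage points (35% to 38%), so the numbers will differ from the figures quoted above:

```python
import math

def z_quantile(p):
    """Inverse standard-normal CDF via bisection on erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sample_size_per_cell(p1, p2, confidence=0.95, power=0.80):
    """Approximate names needed in EACH cell of an A/B test to
    detect a lift in response rate from p1 to p2."""
    z_a = z_quantile(1 - (1 - confidence) / 2)   # two-sided significance
    z_b = z_quantile(power)                      # chance the lift isn't missed
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# Email example: 35% open rate, looking for a lift to 38%
n = sample_size_per_cell(0.35, 0.38)
```

With these defaults the email example needs roughly 4,000 names per cell, while detecting a 1.0% to 1.1% lift needs well over 150,000 per cell; the general lesson, small lifts on low base rates demand very large samples, holds regardless of the exact confidence and power you choose.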

Avoid Testing Gotchas

Use a head-to-head competition with a control to determine the winner. Head-to-head, concurrent testing provides insight that is least likely to be influenced by unknown factors. The more time that passes between mailings, the more likely it is that factors you are not accounting for will affect results, giving you less confidence in the overall predictions.

With A/B tests, test everything or just one thing. You can gain valuable insight either by testing multiple items as a single group or by testing just one item. Bear in mind, though, that when you group sets of elements, all you will be able to determine is which group performs best, not which elements within the group were responsible for the difference.

First test areas likely to give you the biggest response/profit boost. Experience shows that the most important areas to test are mailing list, offer, copy, format and seasonality, with mailing lists and offers being the two most significant. Don't start testing font options unless you thoroughly understand these two.

Make sure you can identify responses. If you don't code your tests and capture the code during the response/purchase process, there's no sense in testing. You must be able to track and analyze results or you can't possibly know what works.
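As a minimal sketch of the idea in Python, with hypothetical key codes and made-up response data:

```python
from collections import Counter

# Hypothetical key codes printed on each reply device (or embedded in a URL):
# "A" = control package, "B" = test package
mailed = {"A": 5000, "B": 5000}

# Codes captured during order entry -- illustrative data only
responses = ["A", "B", "A", "A", "B", "B", "B", "A", "B"]

counts = Counter(responses)
rates = {code: counts[code] / qty for code, qty in mailed.items()}
```

In practice the codes come from your reply devices, promo codes or tracking URLs; the point is simply that every response must map back to a test cell, or you cannot analyze the result.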

Make sure your sample is large enough. To determine sample size, we use the test list calculator, at willow.kickshout.com/xsample.asp.

Use the 80-20 rule. Because they have a sense of what a control package will deliver based on prior experience, many mailers choose not to test: they don't want to give up known profits on the portion of the list that receives a test package, since it may not perform as well. The truth is, though, that by not testing they may be missing out on even higher returns and profits. There's a solution to this conflict. Instead of testing on a 50-50 basis, test on an 80-20 basis, sizing the test cell at the minimum number our test list calculator determines to be statistically valid. That way you minimize the risk to profit while still gaining insight into the optimal approach for your list.
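Carving off the test cell is straightforward; the sketch below shows one way to do it in Python. The list contents and cell sizes are placeholders, and the random split is what makes the comparison fair:

```python
import random

def split_for_test(mailing_list, test_size, seed=42):
    """Randomly carve a test cell off the list; everyone else gets
    the proven control package. A fixed seed keeps the split repeatable."""
    rng = random.Random(seed)
    shuffled = list(mailing_list)
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]   # (control, test)

names = [f"name{i}" for i in range(10000)]
control, test = split_for_test(names, 2000)   # 20% test cell, 80% control
```

The shuffle matters: slicing an unshuffled list risks a test cell that differs systematically from the control (e.g. sorted by ZIP or sign-up date), which would confound the result.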

Best Response Rate vs. Highest Profit?

In the simple example below, testing a higher price reveals that even though the marketer can expect a slightly lower response rate, the overall profit is much higher with the higher price.

Widget Price   Profit Margin   Responses   Total Profit
$10            $1              150         $150
$12            $3              120         $360
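The table's arithmetic (total profit equals per-unit margin times number of responses) is simple enough to verify in a couple of lines of Python:

```python
def total_profit(margin_per_unit, responses):
    """Total profit = per-unit margin x number of responses."""
    return margin_per_unit * responses

profit_at_10 = total_profit(1, 150)   # $10 widget: $1 margin x 150 orders
profit_at_12 = total_profit(3, 120)   # $12 widget: $3 margin x 120 orders
```

Despite 30 fewer orders, the $12 price nets $360 against $150, which is why price tests should always be judged on total profit rather than response rate alone.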

What to Test

We recommend you use the following order when creating a hierarchy of variables to test:

1. Mailing lists. Test the list source, but also evaluate based on criteria you may have already established (segment, previous purchases, etc.). Consider having us source rented lists for you to test (be careful with email as list rental generally falls into the "spam" category). In the absence of scientific testing, each package sent to each list (or list segment) should be identical.

2. Envelopes/packages or subject lines (broadcast email). Create an entirely different package from your control. Change the copy, theme, format. Then take your most responsive lists and use them to test this new package against your original control. The winner of this test becomes your control. You can combine these first two tests using grid/segment testing.

3. Price. If you have some flexibility with price, take your control package and test prices. Note that this is really a margin test, since you are weighing the profit margin per item at a given price level against overall response as measured by the number of purchases. In our widget example above, a $2 price increase results in a 20% reduction in order volume, yet still nets more than twice the overall profit.

4. Offer. You might test a free bonus vs. no bonus, two bonuses vs. one, a soft vs. hard offer, half-price vs. 50% off vs. two-for-one, etc. The winner becomes your new control.

5. Copy and/or format style. On format tests, focus on major changes such as envelope vs. self-mailer, 9x12 package vs. #10, personalization, process colour vs. black, etc. In copy tests, it's usually best to change the primary appeals, the entire thrust of the copy. You can also test the copy of individual components such as the envelope tickler, sales letter, brochure or order form. Unless you are doing multivariate testing, do not concurrently test offers or lists as part of format tests.

 

Test Elements

List

Segment
Prior action
Purchase history
Gender
Title/job function
External list sources

Offer & Price

Price Points
Phrasing of value
Free with...
Buy... Get..

Appeal

Emotional
Case study/example
Features
ROI

Format & Design

Copy length
Colour
Imagery
Font
Mailer type

Copy Elements

Tickler/subject line
Headline
Letter length

Style & Tone

Voice
Affiliation
Values, beliefs

           

