We are all about marketing, data, analysis, innovation and technology

Saturday, June 23, 2012

How Many Impressions Do I Need in Order to Read My Banner Ad Test Results?

"I am getting ready to run a new marketing test against my control and was wondering how many names I should be testing.  Can you tell me?"

I get asked this question all the time from clients and students alike.  My response is always the same.

"Not enough information."

In order to properly answer this question, and it is a very important question, you must be able to tell me two things:
  1. What is your current banner ad, direct mail format, or email yielding in terms of a response rate or click-through rate?  In other words, what is your baseline right now?
  2. And, most importantly, what would the new test need to yield, at a minimum, in terms of a response rate to be considered a success? 
Notice on point 2, I am not asking what you want the test response rate to come in at.  I am sure you would want it to be triple where you are now, right?  ;-)  What I am asking is, at a minimum, what would you need that test response rate to be in order to roll out with the new test?  That is what I need from you.  And most likely that number will be the break-even response rate for the test.

So, in order to answer this question you will need to conduct a break-even analysis.

Let's pretend you are going to test a new, more expensive direct mail format where the following assumptions hold:
  • The current control format costs you $0.75 per piece fully loaded. 
  • The profit per order not taking into consideration the cost of the promotion is $20.
  • The current format yields a known 1% response rate.  

Let's also assume the new format you are considering will run $0.82 per piece.  Using "The Plan-alyzer" break-even calculator that I created and provide for free on my website www.confidenttest.com, we end up with a break-even response rate of 1.35% (see Figure 1 below).

 Figure 1:  Break-even calculation using www.confidenttest.com
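The arithmetic behind that break-even figure is straightforward: the new format has to recover its extra cost per piece out of the $20 profit per order.  Here is a minimal sketch in Python (the function and variable names are mine, not from "The Plan-alyzer"):

```python
# Break-even response rate for a more expensive test format.
# The test must earn the same profit per piece as the control, so solve
#   profit * r_test - cost_test = profit * r_control - cost_control
# for r_test, giving r_control + (cost_test - cost_control) / profit.

def breakeven_response_rate(control_rate, control_cost, test_cost, profit_per_order):
    """Response rate the test format needs just to match the control's profit."""
    return control_rate + (test_cost - control_cost) / profit_per_order

rate = breakeven_response_rate(
    control_rate=0.01,      # control's known 1% response rate
    control_cost=0.75,      # $0.75 per piece, fully loaded
    test_cost=0.82,         # $0.82 per piece for the new format
    profit_per_order=20.0,  # profit per order, before promotion cost
)
print(f"{rate:.2%}")  # 1.35%
```

The extra $0.07 per piece divided by the $20 profit per order is 0.35%, which added to the 1% control rate gives the 1.35% break-even rate shown in Figure 1.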

With this data, I now have everything I need to help you determine testing quantities.  We will set the test quantities so that if the control comes in at a 1% response rate and the test comes in at 1.35% (your worst acceptable case), you will be able to read the difference with statistical significance.  I know you want to do better than 1.35%, but I am simply helping you set up the test so you can read it at break even. 

I simply want to ensure you test enough names so that if the test just breaks even, you will be able to say yes, it is a winner.  If you do better than that, great.  You are covered there also.

So with that said, let's go back to "The Plan-alyzer" located at www.confidenttest.com and use the sample size calculator to determine how many names we need to test in order to read the results with significance (see Figure 2 below).

Figure 2:  Determining the sample sizes for a test and control when concerned
with accurately measuring the difference between response rates using www.confidenttest.com

Based on this calculator, we need to test 7,281 names for the test and 7,281 names for the control.  Doing so will ensure that if you get a 1% response rate for the control and a 1.35% response rate for the test (knowing you want to do better) you will be able to read that difference with statistical significance.

Let's talk through a couple of things here.

First of all, you will notice that I chose 95% as my level of confidence for this analysis.  That is what I strongly suggest you use as the stake in the ground; to go below 90% is just too risky.  For more on the steps to selecting your confidence level, see my YouTube video embedded below.  Very important.

Secondly, what if you come back and say you cannot afford to test a total of 14,562 names and can only afford about 5,000 per panel?  What do you do?

Well, if that is the case, I suggest you go back to the sample size calculator and play with the detectable difference until the resulting sample sizes are 5,000 each (I call this a "what if" analysis).  Doing so for this example, I honed in on needing a response rate difference of 0.43%, or 0.0043, before you can detect significance (see Figure 3 below).

Figure 3: What-if analysis using the sample size calculator using www.confidenttest.com
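The same formula can be turned around for the what-if analysis: fix n at 5,000 per panel and solve for the smallest detectable difference.  Because the test rate p₂ depends on the difference itself, a few fixed-point iterations settle the answer.  Again, this is my own sketch of the calculation, not the calculator's internals:

```python
def detectable_difference(p1, n, confidence_z=1.96, iterations=20):
    """Smallest response-rate lift over p1 readable with n names per panel,
    found by iterating d = z * sqrt((p1*q1 + p2*q2) / n) with p2 = p1 + d."""
    d = 0.0
    for _ in range(iterations):
        p2 = p1 + d
        d = confidence_z * ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5
    return d

d = detectable_difference(p1=0.01, n=5000)
print(f"{d:.2%}")  # 0.43% -- so the test must reach about 1.43% to win
```

This is exactly the 0.43% difference Figure 3 shows: with only 5,000 names per panel, anything smaller than that lift cannot be declared significant.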

So now the question I pose to you is: "Can you tolerate this much error?"  Because if you cannot, then there is no need to conduct the test.

This is important, so let me rephrase.

If you can only test 5,000 per panel and you get a 1% for the control and a 1.35% for the test (which is break even), you will not be able to tell your boss that the test won.  You will need to see a response rate of 1.43% for the test before you could declare the test a winner.  So the question is: do you think the test is capable of yielding a response rate that high?
  • If you answer no, then I suggest you pass on the test, because why test something if you cannot afford enough names to read the results with significance?
  • If you answer yes, then I will ask why you did not tell me that before, because testing 7,281 names per panel would have been more than you really needed to read this test.

Test design and analysis is my passion.  If you find yourself in need of help establishing or evaluating your marketing campaigns or tests, do not hesitate to call on Drake Direct.

But first, check out all my other testing videos on my Test Design and Analysis YouTube Video Channel.


Tuesday, June 5, 2012

Why Universities Must Incorporate Data Analytics in Their Curriculums

I was recently invited by Adobe to conduct an educational webinar on why it is imperative for universities to bring data analytics to the forefront of their marketing curriculums.

When they asked me back in January of this year if I would be interested in talking about this topic to various university faculty and administrators, I jumped at the chance.  After all, I had insight into what was happening out there in the job market and how NYU has become a leader in preparing students for the digital data revolution.

So with that said, I pulled together some slides and went into sales mode.  I created a compelling case for why we must teach our students to embrace data and all it can bring to bear on our marketing decisions.

In this presentation, I revealed (and quite passionately, I might add):
  • What companies have been doing over the past two years to gain a 360 degree view of their customers
  • What software they are using to do so (SAS, SPSS, Sitecatalyst, Radian6, etc.)
  • How and why roles are becoming much less siloed than in the past and what that means for new hires
  • How analytic tools are becoming much more user friendly for marketers, making it easier for them to embrace the data
  • An IBM study of more than 1,700 CMOs discussing their concerns with the lack of properly trained marketers in the use of data
  • Other studies showing the same
  • The trends on indeed.com showing how the use of the word "analytics" in any form is increasing in frequency.

I then went on to discuss some of the major challenges we face today as marketers, along with yet-to-be-determined solutions, such as:
  • Issues of inappropriate campaign attribution (first touch vs last touch)
  • Siloed data and the problems caused by it
  • How we are still not able to properly measure the ROI of our social programs
  • How to deploy proper A/B and multivariate testing
  • The challenges of defining appropriate KPIs

In summary, let me just say that universities must incorporate these new digital data analytics topics into their various marketing and MBA programs or risk becoming irrelevant, and quickly so.  

To get more detail about all of these challenges mentioned above, view my presentations in pdf file form: http://www.drakedirect.com/Adobe-Educational-Series-Data-Revolution.pdf

To view my live presentation click here:  https://seminars.adobeconnect.com/_a227210/p55rklmh93y/?launcher=false&fcsContent=true&pbMode=normal