Re: [webanalytics] What makes "iterative testing" iterative?
Jun 1, 2011

Hi,
I agree with Matthew - you need your head examined if you want to do
multiple A/B tests at different time periods.
Let's take an example - you have an 'Orange' homepage and a 'Blue' homepage.
In July, Orange wins by 10% and in August, Blue wins by 5%. Which result
should you believe?
The issue here is that traffic patterns are ALWAYS changing on your site -
by spreading A/B splits over time, you're (potentially) going to get rubbish
data. Why rubbish? Unless you are segmenting your data capture (to
compare conversion by segment) and also monitoring your incoming traffic mix
(paid, organic, direct, etc.), you have no way of knowing whether the
difference is down to your creative or an external event.
The answer is to run the A/B/C/D versions simultaneously or use a
multi-variate test. I've discovered this issue to my cost and now advise
people to do multivariate for this reason - there are too many companies
doing time dislocated A/B tests and scratching their heads over the
'strange' results they get.
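To make the "same traffic" point concrete, here's a minimal sketch (not from the original post - the variant names and visitor IDs are illustrative) of how concurrent tests are usually split: hash each visitor's ID into a bucket so every version runs against the same traffic mix at the same time, and the same visitor always sees the same version.

```python
import hashlib

VARIANTS = ["A", "B", "C", "D"]

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into one concurrent variant.

    Hashing the visitor ID gives a stable, roughly even split across
    all variants simultaneously, so no version gets a different week,
    season, or traffic mix than the others.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]
```

Because the split runs at the same moment for everyone, a holiday or news spike hits all four versions equally, instead of polluting whichever one happened to be live that week.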
What I call iterative testing is running different A/B/C/D or multivariate
tests in wave after wave. Each wave learns from the last (this worked, that
failed, this needs tweaking), with each wave aiming to improve the result,
based on analysis and new ideas fed into each test. The important thing -
always test with the same traffic.
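Before feeding a wave's "winner" into the next wave, it's worth checking the gap isn't just noise. One common check (my addition, not something from the thread - the conversion counts below are made up) is a two-proportion z-test on the variants' conversion rates:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b are conversion counts, n_a/n_b are visitor counts.
    Returns (z, p_value); a small p-value suggests the difference is
    unlikely to be chance, which is the bar a wave's winner should
    clear before it shapes the next wave.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 120 conversions from 1,000 visitors versus 100 from 1,000 looks like a 20% lift, but the test shows it could easily be chance - exactly the kind of 'strange' result that tempts people into chasing noise between waves.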
On Wed, Jun 1, 2011 at 12:32 AM, Matthew Sundquist <matt.sundquist@...> wrote:
> Hi Dave,
> Thanks for sending these out. I enjoy your posts, and find them thoughtful
> and very well-written.
> If I read this one right (and I may not have), it seems you're advocating
> testing each version once, for a week, over a five-week period. I drew this
> idea from this section:
> "Suppose I've tested the five following page headlines, and achieved the
> following points scores (per day), running each one for a week, so that the
> total test lasted five weeks."
> I wonder if it might be more productive to run them all at once for five
> weeks or run a multivariate test. This gives you live results, and means
> you can eliminate the versions with lower gains as you go forward. Then you
> can gradually pit the best two or three against one another to maximize
> conversions during testing. This might help with calculating statistical
> significance, running corrections on your data, gathering guiding
> principles, and avoiding the sampling bias associated with a weekend,
> holiday, seasonal dip, sporting event (I imagine the Champions League or
> Finals might spike traffic on certain sites and products), news event, etc.
> Perhaps this is what you meant, and if so, please forgive me. It's clear
> you know a good deal more about these matters than I do, and I am eager to
> hear your view so I can understand this more.
> Thanks for sending these out, and I'll look forward to your next post.
> All the best,
> On Tue, May 31, 2011 at 12:02 PM, Dave <tregowandave@...> wrote:
> > Hello again group,
> > Thanks to all for your comments on my previous posts - I'm back again
> > looking at something that came up during the recent Omniture EMEA Summit,
> > namely iterative testing. What makes it iterative, what's the point, and
> > what's different from normal testing?
> > http://bit.ly/j2yZof
> > As ever, comments sought and welcomed.
> > Thanks
> > David
Not sent from my blackberry <grin>