Re: [webanalytics] Re: Why does A/B testing work?
Nov 6, 2010

My book list on this stuff:
* Neuro web design - Susan M. Weinschenk
* Web form design: Filling in the blanks - Luke Wroblewski
* Influence - Robert Cialdini
* Always be testing - Bryan Eisenberg
* Forms that work - Jarrett & Gaffney
* What every body is saying - Joe Navarro
* Why we buy - Paco Underhill
* The design of everyday things - Donald Norman
* Buyology - Martin Lindstrom
* Emotionomics - Dan Hill
* Neuromarketing - Renvoise & Morin
* Landing page optimization - Tim Ash
* Web analytics - an hour a day - Avinash Kaushik
* Don't make me think - Steve Krug
* Search Analytics - Hurol Inan
All these have helped in some way as inputs to our testing.
On Sat, Nov 6, 2010 at 2:01 PM, Craig Sullivan <sullivac@...> wrote:
> Some great points there.
> I don't agree with Ophir on one thing though - it isn't impossible to know
> for sure. You may not know the winning combination of elements in a test,
> but you can bias the results towards positive in a seriously big way.
> What I'm saying is that the *directionality* of your testing can be
> influenced by:
> * Usability testing
> * Eye tracking
> * Previous test results
> * Persuasive copy techniques
> * Books
> * Surveys
> * Exit (funnel) surveys
> * Analytics data
> * Simply listening lots to customers
> * And last of all, something you learn to focus on for mobile - removing ...
> A good example of previous tests feeding new ones (we do 3M a month) is when
> we introduced 'goal-oriented' words into the 'call to action' button wording.
> We quickly discovered that wording focused on the end result (on our site -
> 'Fix my Glass') had a huge (for a single piece of wording) effect on
> conversion on a page. So, the next tests included more of these (but still
> keeping lots of new stuff & wildcards). We also found out what sort of
> people images to use, where they should look, what body language they should
> have, what they should hold in their hands etc. I can tell, for example,
> that a small benefit is gained by having the person look right at you or
> slightly towards the call to action button. This does not mean that I then
> assume these work all the time, forever - just that there is a huge freaking
> hint there for me!
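> Here's a rough sketch of the arithmetic behind reading a result like that
> (Python, a two-proportion z-test - one standard way to check a split; the
> counts and names below are made up, not our real numbers):
>
>     from math import sqrt, erf
>
>     def z_test(conv_a, n_a, conv_b, n_b):
>         # Lift and two-sided p-value for variant B's conversion rate vs A's.
>         p_a, p_b = conv_a / n_a, conv_b / n_b
>         pooled = (conv_a + conv_b) / (n_a + n_b)
>         se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
>         z = (p_b - p_a) / se
>         p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
>         return p_b - p_a, p_value
>
>     # e.g. new 'goal-oriented' wording vs the old control button:
>     lift, p = z_test(conv_a=480, n_a=12000, conv_b=564, n_b=12000)
>     print("lift: %.4f, p-value: %.4f" % (lift, p))
>
> A small p-value says the wording change almost certainly did something; it
> still doesn't tell you *why*, which is where all the inputs above come in.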
> Usability testing is extremely important too - this helped uncover many
> things that either needed to be fixed or were ideal candidates for testing
> and yes, this directionality helps too.
> As human beings, we constantly go through the process of learning patterns
> of interaction ourselves. When we do testing, we also begin to learn design
> patterns that work. We then find we can load the tests with new stuff and
> left-field suggestions but, most importantly, variants on things that actually
> worked before. From this work, you begin to build a library of things (or
> approaches) that are likely to succeed in tests.
> I may not know the final combination but when working on variables, we
> *can* stack the deck of cards in our favour.
> Based on some ad-hoc tests I've done, usability people are far better at
> guessing than executives who often make the decisions on test or page
> design. Jakob Nielsen has observed this effect in a study (
> http://www.useit.com/alertbox/guesses-data.html) and I see it in my work
> with people who do testing and optimisation - those that keep the user focus
> laser sharp often design better tests.
> What is starting to come together for me is the combination of user
> experience techniques, testing and web analytics - a winning combo. Combine
> this with data segmentation and we arrive at a scenario where you type
> 'windshield repair' into google and get a page previously optimised for that
> segment (from tests).
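> To make that concrete, the routing side can be as simple as a lookup from
> the incoming search term to whichever page variant previously won for that
> segment. A minimal sketch (Python; the keywords, paths and fallback here
> are invented for illustration):
>
>     WINNING_VARIANT = {
>         "windshield repair": "/landing/windshield-repair-v3",
>         "chip repair": "/landing/chip-repair-v2",
>     }
>     DEFAULT_PAGE = "/landing/generic"
>
>     def landing_page_for(keyword):
>         # Fall back to the generic page for segments we haven't tested yet.
>         return WINNING_VARIANT.get(keyword.lower().strip(), DEFAULT_PAGE)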
> Even after all of that, I still guess wrong about 20% of the time on
> whichtestwon.com, so what the heck do I know?
> On Mon, Nov 1, 2010 at 5:11 PM, fredeilam <prusak@...> wrote:
>> Hi David,
>> Great question!
>> Just to make sure we're on the same page, I'm going to rephrase your
>> question as follows:
>> When doing A/B split testing, we often see that version X converts better
>> than version Y.
>> The question is: Why does version X convert better than version Y?
>> The short answer is that it's impossible to know for sure. There are
>> dozens of factors that will influence a person's online behavior.
>> The long answer is that most of these factors have been studied and
>> written about. Here are a few books that cover the factors in online
>> behavior:
>> - Influence: The Psychology of Persuasion
>> - Type & Layout: Are You Communicating or Just Making Pretty Shapes
>> - Don't Make Me Think: A Common Sense Approach to Web Usability, 2nd
>> Edition
>> - Landing Page Optimization: The Definitive Guide to Testing and Tuning
>> for Conversions
>> - Web Design for ROI: Turning Browsers into Buyers & Prospects into Leads
>> - Neuro Web Design: What Makes Them Click?
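>> One more note on mechanics: the reason we can trust "version X converts
>> better than version Y" at all is random assignment - each visitor is
>> bucketed independently of who they are, so those dozens of factors average
>> out across both versions. A minimal sketch (Python; hash-based bucketing is
>> one common approach, and the names here are made up):
>>
>>     import hashlib
>>
>>     def assign_variant(visitor_id, experiment="cta-test"):
>>         # Deterministic 50/50 split on a stable visitor id, so a
>>         # returning visitor always sees the same version.
>>         key = (experiment + ":" + visitor_id).encode()
>>         digest = hashlib.md5(key).hexdigest()
>>         return "A" if int(digest, 16) % 2 == 0 else "B"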
>> --- In webanalytics@yahoogroups.com,
>> "Dave" <tregowandave@...> wrote:
>> > Hello everyone,
>> > Why does A/B testing work?
>> > I mean this question to be perhaps more philosophical than technical.
>> I've run a [small] number of A/B tests, and yes, I've shown that creative A
>> generates a higher click-through rate than creative B. But why?
>> > I frequently face the argument that "If people are looking for blue
>> widgets, they'll find them anyway" when I recommend a link be moved above
>> the fold, or made more prominent in some other way. And to some extent, the
>> argument holds - blue widgets are more popular than green widgets, no matter
>> if you make the green widget link bigger, bolder or more obvious. This can
>> be especially frustrating if we're targeted on the sale of green widgets.
>> > So - why does A/B testing work? "Buy one green widget, get one free -
>> click here" may perform better than "Green widgets half price - find out
>> more" but why is that? If someone wants to buy green widgets, won't they
>> click either link with equal enthusiasm? Or are we hoping to persuade the
>> undecided? Do we suspect or believe that there's a core contingent of
>> green-widget fans who want a green widget, and that A/B testing will work
>> better on visitors who want a cheap widget, or a better widget, or are just ...
>> > Have I answered my own question? This is entirely up for debate, and I'm
>> very interested in people's opinions.
>> > David
> Craig Sullivan
> Not sent from my blackberry <grin>
Not sent from my blackberry <grin>