
Re: [webanalytics] Re: web split-testing vs. personalization

  • Matt Belkin - Message 1 of 15, Oct 8, 2004

      Hi Xavier,
       
      I completely agree that companies need to acquire a culture of experimentation (as Jim notes), and with your observation that there are some tangible obstacles to getting there.  In fact, every company I have worked at has encountered a rather similar set of challenges.
       
      However, I do not necessarily think those challenges are entirely as you describe below.  Certainly cost is an issue.  Time to market is another.  Resources are another.  Skills and expertise are another.  And to your observation, I think "imitation" and "gut feelings" are also obstacles - but I would *strongly* disagree that "gut feeling" works "because good and creative marketing people are better at figuring out what's good or bad for a site than an A|B test, in general".
       
      The harsh reality is that good and creative marketing people are NOT better at figuring out what's good or bad for a site.  In fact, I would argue that exact mentality is what gets most websites into trouble, and is where most marketing $$'s are wasted.
       
      Consider first that every website attracts different customer segments.  Even websites that attract similar segments attract them in differing compositions.  So if you agree with those statements, it's safe to say no two websites are the same from a customer standpoint.  As such, something that works for Amazon's customers may not work for your customers.  Hence, disciplined testing/research is the ONLY way to figure out what is good or bad for your customers.  Sure, you can have a Junior Analyst pilfer ideas from a number of websites (e.g. 1-click shopping, cross-sell, upsell, configurators, drop-in windows, exit coupons, etc.), but how do you KNOW these are working on your own site?  For your own customers?  And once you implement them, how do you improve the experience?
       
      It's easy and oh so tempting to get caught up in the constantly changing world of Web fads.  Designers and developers love it because it's what's hot at the moment, executives love it because it's sexy and next-generation, managers love it because the executives are happy - but do customers love it?  Unless you analyze their behavior (i.e. run an A/B test) or ask them outright (survey), you simply do not know.  And you have no basis for improvement.
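
      To make the mechanics concrete, here is a minimal sketch in Python of the bucketing that underlies such an A/B test - the test name and visitor id are hypothetical placeholders, not anything from this thread.  Each visitor is deterministically assigned to a variant, so the experience stays consistent across visits while the traffic splits roughly evenly:

      import hashlib

      VARIANTS = ["A", "B"]  # e.g. control page vs. challenger page

      def assign_variant(visitor_id, test_name="homepage-test"):
          """Deterministically bucket a visitor into a test variant.

          Hashing (test_name, visitor_id) keeps the assignment stable
          across visits and independent across different tests."""
          digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
          return VARIANTS[int(digest, 16) % len(VARIANTS)]

      # The same visitor always lands in the same bucket on every call.
      print(assign_variant("visitor-12345"))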
       

      Xavier Casanova <xavier_casanova@...> wrote:
      Hey Jim, you're hitting on a pretty important topic - i.e. companies needing to acquire a culture of experimentation. I think we all want to agree with you, because when companies experiment they need tools to measure, i.e. they need us (Web analytics) ;)
       
      Let me play devil's advocate however and do a reality check. We cannot ignore the fact that imitation, gut-feeling and cost are forces playing against this culture of experimentation which you describe.
      - 'Imitation' because it's pretty easy to go out and copy what others are doing. Navigation, product presentation, shopping cart, checkout - it's all out there. A methodical Junior Analyst can probably come up with a handful of great recommendations for *any* site just by spending a few hours online.
      - 'Gut feeling' because good and creative marketing people are better at figuring out what's good or bad for a site than an A|B test, in general
      - 'Cost' because A|B tests are resource intensive, and to be really effective, you've got to run them live (i.e. play with real customers)
       
      In other words, is this wishful thinking? Could split and multi-variable testing be the next disappointment after personalization?
       
      OR, are we witnessing a real change in how people run their online business?
       
      I could argue both ways.
       
      Xavier
      ----- Original Message -----
      From: jimmacintyreiv
      Sent: Sunday, October 03, 2004 7:32 AM
      Subject: [webanalytics] Re: web split-testing vs. personalization



      After spending the better part of the 1994-2000 period building capabilities to do your capabilities 1-4 and other related "personalization," "targeting" or "automated optimization" functions, and then applying that technology to sites (many retail, but many for other types of companies/sites as well), I have to admit to some biases about what is worth discussing in regard to such functionality.

      What many sites that implemented such capabilities ended up with was undue complexity - complexity that most organizations either couldn't manage or didn't/couldn't take advantage of on an ongoing basis, at least not enough to produce a positive ROI.

      Large sites like Amazon and others have done a good job of implementing some of these personalization capabilities, but I think if a survey were done of the broader market, it would find that many companies bought all manner of personalization infrastructure tools and consulting to help enable these capabilities, and found that, due to a wide range of practical issues, they were difficult to derive business advantage and ROI from.

      As an example, take a look at broadvision.com and see that the one-time leader in personalization and one-to-one marketing (with a multi-billion dollar market cap at the time; now worth ~$100MM) has changed its focus to "enterprise business portals."

      My feeling is that you can safely infer from this and many other related facts that many, many sites that wanted to do "personalization" or "automated optimization" gave up many of their ambitions after spending a great deal of money.

      IMHO there are a number of valuable lessons to be taken from this, and many of them can be applied to this thread and to experimentation in general.  They are cliches or platitudes, but worthwhile nonetheless, I think.

      1.  Simplicity is valuable.  The KISS principle is one that can't be forgotten in regard to a site.  Testing can be very valuable to a site/business, but to be so it needs to be easy to do: easy to plan, easy to implement and easy to evaluate.  If one gets too deep into the weeds with it, it becomes impractical for all but the most advanced organizations.

      2.  Experimentation is a way of thinking about and doing business.  It's a culture that needs to be built in many companies.  One of my financial services clients has as their primary business mantra that they are a "hypothesis and experimentation driven company."  This is a very valuable approach to building a company and can be applied in many ways.  The important point is that almost everything you might come up with as a new business idea, approach, functionality, etc. can be reduced to a hypothesis and tested in some fashion.

      So rather than get wrapped around the axle about which methods of personalization or automated optimization are most effective in general (the answer varies by company type, state of development and many other factors), my encouragement to companies and professionals here is to build a culture in your company that generates testable hypotheses, implements experiments to prove or disprove those hypotheses, and learns from these experiments on an ongoing, iterative basis.

      The web analytics team for a site should be a primary driver in building this culture.  The web analytics team can help to generate and quantify hypotheses, determine the right way to test them, and quantify the results and learnings from them for the rest of the site team.

      It is my observation that it almost doesn't matter what hypothesis you start with, as long as it is a valid hypothesis that can be tested.  It is, to use another platitude, more important to just do it.  Once a team has as its culture to hypothesize concretely, test effectively and learn from the results, the team and the business results evolve more rapidly.  Better and better hypotheses and more effective experiments are the result.

      What should be tested depends completely upon your specific business dynamics, state of capability and forward objectives.  The basic requirement is that your team comes up with hypotheses that are testable, then runs the experiments, and then spends the time required to learn from the results.  So pick as your place to start experimenting something that your whole team will understand, like your home page, a campaign landing page, a new cart flow, etc.  The most important thing is to get started.

      The key learnings needed by a team that has an experimentation culture are:

      1.  What is a valid hypothesis to test?  What components does the hypothesis need?
      2.  How does an experiment need to be designed to produce valid results when implemented?
      3.  What are the tools required to implement your experiments?
      4.  How and for how long will the experiments be implemented?
      5.  When and how will the results be evaluated?
      6.  How much confidence do you require in the results for what type of decision making?  (A rough sizing sketch follows this list.)
      7.  How should the results be interpreted and acted on?
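
      Questions 4 and 6 are quantitative, and a rough answer can be computed before a test even starts.  A minimal sketch in Python of the standard two-proportion sample-size approximation - the 2% baseline rate, 20% target lift, 95% confidence and 80% power are made-up illustrative numbers, not figures from this thread:

      import math

      def required_visitors_per_variant(baseline_rate, lift,
                                        z_alpha=1.96,   # two-sided 95% confidence
                                        z_beta=0.84):   # 80% power
          """Approximate visitors needed per variant to detect a relative
          conversion-rate lift (standard two-proportion formula)."""
          p1 = baseline_rate
          p2 = baseline_rate * (1 + lift)
          p_bar = (p1 + p2) / 2
          numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                       + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
          return math.ceil(numerator / (p2 - p1) ** 2)

      # Hypothetical: 2% baseline conversion, detect a 20% relative lift.
      print(required_visitors_per_variant(0.02, 0.20))  # ~21,000 per variant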

      There is a lot of talk about a lot of different testing methodologies here in this thread.  In my mind they confuse the real matter at hand to a degree.  Basic controlled experimentation (A|B|C) is the place to start.  Doing this correctly is hard enough for teams that are new to it; using more statistically advanced methods simply adds complexity.  Such methods might be helpful for small test sets, shortening test runs on small test sets and the like, but these are edge cases that can be left for later exploration, after a site's team has mastered basic controlled experimentation and has developed a culture of experimentation that has led them to these edge cases.
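
      Once a basic controlled test has run, evaluating it is just as mechanical.  A minimal sketch in Python of the standard two-proportion z-test under the normal approximation - the conversion counts are invented for illustration:

      import math

      def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
          """Two-sided z-test for a difference in conversion rates,
          using the pooled-rate normal approximation."""
          p_a, p_b = conv_a / n_a, conv_b / n_b
          p_pool = (conv_a + conv_b) / (n_a + n_b)
          se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
          z = (p_b - p_a) / se
          p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
          return z, p_value

      # Invented counts: 400/20,000 conversions on A vs. 480/20,000 on B.
      z, p = two_proportion_z_test(400, 20000, 480, 20000)
      print(f"z = {z:.2f}, p = {p:.4f}")  # p here is well under 0.05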

      It could be very helpful to a number of people in this group if a lot of the questions about experimentation ran something like this:

      "I have a hypothesis that if I alter this landing page like X, it will produce a higher campaign conversion rate.  What is the best way to test that hypothesis if I expect Y visitors to the landing page in Z days?"

      "I have a hypothesis that my new site capability to provide cross-
      sell recommendations will increase my order sizes without decreasing
      my conversion rates by at least $X.XX, if I change the cross-selling
      rules or groupings, what is the best way to test that my rules and
      groupings are producing a positive result in regard to order size
      without reducing my conversion rates?"
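
      For a hypothesis like this second one, the outcome metric is a dollar amount rather than a rate, so a difference-of-means test fits.  A minimal sketch using Welch's t-test from SciPy - the order amounts are simulated stand-ins, not real data:

      import random
      from scipy.stats import ttest_ind

      random.seed(42)

      # Simulated order sizes: control vs. visitors shown cross-sell offers.
      control_orders = [random.gauss(55.0, 20.0) for _ in range(2000)]
      cross_sell_orders = [random.gauss(57.5, 22.0) for _ in range(2000)]

      # Welch's t-test does not assume equal variances between the groups.
      t_stat, p_value = ttest_ind(cross_sell_orders, control_orders,
                                  equal_var=False)

      lift = (sum(cross_sell_orders) / len(cross_sell_orders)
              - sum(control_orders) / len(control_orders))
      print(f"order-size lift = ${lift:.2f}, p = {p_value:.4f}")
      # A small p-value says the lift is unlikely to be noise; the
      # conversion rate would be checked separately with a proportion test.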

      Since there is such a great amount of expertise here in regard to testing, it might be most useful for folks to throw out some real hypotheses like these and have the help of the group in thinking through how to make the testing of them more concrete and effective given the circumstances at hand.  It would be great to see some actual results come back to the group, so that all here could benefit from the whole cycle of test design, experiment implementation and evaluation.

      Best regards,



      --- In webanalytics@yahoogroups.com, "Xavier Casanova" <xavier_casanova@y...> wrote:
      > That makes sense - we're all on the same page. I like your classification:
      > 1 - Automated XSells
      > 2 - Product recommendations
      > 3 - Product embellishment
      > 4 - Configurators and Calculators
      >
      > Question - From your experience, assuming I have a retail site which hasn't done any of these "optimizations", where should I start? Which ones provide the highest immediate ROI?
      > (I would also be interested in hearing what retailers have to say...)
      >   ----- Original Message -----
      >   From: matthewjncroche
      >   To: webanalytics@yahoogroups.com
      >   Sent: Thursday, September 30, 2004 12:45 PM
      >   Subject: [webanalytics] Re: web split-testing vs. personalization
      >
      >
      >   Great clarifying points.
      >
      >   Personalization is really just overloaded - it can mean MyYahoo (rearrange elements on a "personal page"), Collaborative Filtering, or even just your own login.  With this range, it is really just too hard to make any general observations.
      >
      >   For conversation's sake, let's just talk about those optimizations which relate to product suggestion or automated merchandising.  These could include:
      >   1. Automated cross-sell (up-sell, bundling) supported by custom systems, packaged software, and ASPs
      >   2. Product recommendation (best-sellers, people like you, staff picks, automatic suggestions)
      >   3. Product embellishment (image zoom and pan, 3d, dressing rooms, fabric/color changers)
      >   4. Configurators, calculators
      >
      >   With these, it would be impossible to A|B an individual recommendation or presentation.  What you would be doing, as Matt Belkin and others have pointed out, is testing the aggregate effect of the feature or algorithm.  To state it another way: not A|B testing that an individual recommendation was effective, but that the overall mechanism had a positive effect on the segment to which it was shown, as measured by average order size, conversion, revenue per visit, leads, etc.
      >
      >   A better term would be a present/not-present test for the particular optimization.
      >
      >   Implicit in any good optimization, of course, would be a feedback mechanism for the target metric.  We approach the problem by setting up a listener to measure purchases or clicks so that the algorithm for finding the best product has a way of refining itself.
      >
      >   Matthew Roche
      >   http://www.offermatica.com
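
      The "listener" described just above reduces to keeping per-item outcome counts that the selection algorithm consults as it refines itself.  A minimal sketch in Python - the product names and the epsilon-greedy selection rule are illustrative assumptions, not Offermatica's actual method:

      import random
      from collections import defaultdict

      class ClickListener:
          """Track impressions and conversions per recommended product."""

          def __init__(self):
              self.shown = defaultdict(int)
              self.converted = defaultdict(int)

          def record(self, product, converted):
              self.shown[product] += 1
              if converted:
                  self.converted[product] += 1

          def best_product(self, candidates, epsilon=0.1):
              """Usually pick the best observed converter; explore a
              random candidate epsilon of the time."""
              if random.random() < epsilon:
                  return random.choice(candidates)
              return max(candidates, key=lambda p: (
                  self.converted[p] / self.shown[p] if self.shown[p] else 0.0))

      # Hypothetical usage: the listener refines which cross-sell to show.
      listener = ClickListener()
      listener.record("gift-wrap", converted=True)
      listener.record("batteries", converted=False)
      print(listener.best_product(["gift-wrap", "batteries"]))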
      >
      >
      >   --- In webanalytics@yahoogroups.com, "Xavier Casanova" <xavier_casanova@y...> wrote:
      >   > My point was not about A|B testing as a tactic to measure improvements in conversion rates due to personalization. My point was in response to Eric Hansen's comment:
      >   >
      >   > "Seems to me that split-testing is ideal for improving web conversion for factors such as a product description and UI elements (colors/fonts/images), whereas personalization (as defined by vendors such as ATG, e.piphany, etc.) is ideal for testing (and automatically optimizing) product mix, upsell/cross-sell, etc."
      >   >
      >   > Again, from my perspective, in the context of Eric's post, these techniques are incompatible. A|B testing is not ideal for determining for a specific user the ideal product mix, upsell/cross-sell, etc. because the test sample is too small. Collaborative filtering (to the best of my knowledge) is the core technology behind personalization.
      >   >
      >   > On the other hand, I need clarification about what you guys are saying - and what you call personalization.
      >   >
      >   > a/ If what you call personalization is simple touches to a page like "Hello John" or "Your Nissan Altima is due for maintenance in 3 days", then yes, you might be right: A|B testing is an effective way of measuring a lift in conversion rates. These are broad/general features after all.
      >   >
      >   > b/ However, if what you call personalization is dynamically building pages that contain the right product mix and the right upsell products (like Amazon), then I don't think A|B testing can give you a reliable answer about the effectiveness of that kind of personalization. Isn't the sample of users too small, and aren't the variables constantly changing? Can that test ever converge? Depending on the quality of your personalization results you may do awesome one day, and terrible the next day.
      >   >
      >   >
      >   >   ----- Original Message -----
      >   >   From: Jim MacIntyre
      >   >   To: webanalytics@yahoogroups.com
      >   >   Sent: Tuesday, September 28, 2004 3:14 AM
      >   >   Subject: RE: [webanalytics] web split-testing vs. personalization
      >   >
      >   >
      >   >   The first time I used A|B testing back in the 90s was to test the value of a personalization/mass customization system I was implementing at the time, much as you describe.  Likewise, A|B testing can be used to test a very wide range of such "value add" functionality to see if it actually adds value.  It continues to amaze me that sites implement personalization and other features with the intention of increasing conversion rates without insisting on any tests of the results, such as requiring the personalization vendor to prove through an A|B test that their personalization capability can improve conversion rates.
      >   >
      >   >
      >   >
      >   > ------------------------------------------------------------------------
      >   >     From: Matt Belkin [mailto:mbelkin@m...]
      >   >     Sent: Tuesday, September 28, 2004 2:14 AM
      >   >     To: webanalytics@yahoogroups.com
      >   >     Subject: RE: [webanalytics] web split-testing vs. personalization
      >   >
      >   >
      >   >     Actually, to clarify, AB testing is really quite compatible with personalization.  To restate what I think most people here already know, AB testing is just the comparative test of one approach vs. another in generating a desired result.  For instance, does web page A perform better than web page B at converting sales leads?  The beauty of AB testing, when done correctly, is that you may constantly achieve gains through continual improvement (and hence, generate ROI from your Analytics investment).
      >   >
      >   >
      >   >
      >   >     Personalization, on the other hand, is much more about 1:1 (or 1:many) customer communication.  This includes collaborative filtering, but certainly isn't limited to it.
      >   >
      >   >
      >   >
      >   >     So to directly address Xavier's comments, you could potentially use AB testing to experiment with different types of personalization.  For instance, if you choose to provide customer segment A with a personalized experience (e.g. a recommendation engine) and not provide customer segment B with this same functionality, you could compare the productivity of each segment to determine if this personalization capability adds value.  Of course, this assumes no other factors change (ceteris paribus).
      >   >
      >   >
      >   >
      >   >     Hope that helps, Matt.
      >   >
      >   >
      >   >
      >   >
      >   > ------------------------------------------------------------------------
      >   >
      >   >     From: Xavier Casanova [mailto:xavier_casanova@y...]
      >   >     Sent: Monday, September 27, 2004 7:48 PM
      >   >     To: webanalytics@yahoogroups.com
      >   >     Subject: Re: [webanalytics] web split-testing vs. personalization
      >   >
      >   >
      >   >
      >   >     I have limited knowledge on the topic, but it seems to me that A|B testing and personalization are incompatible techniques for improving your conversion rates, in general.
      >   >
      >   >     - A|B testing aims at improving broad features of the site, making them appeal to the masses
      >   >
      >   >     - Personalization, on the other hand, is about customizing the user experience on an individual basis
      >   >
      >   >
      >   >
      >   >     My understanding is that personalization applications extensively use collaborative filtering techniques. Collaborative filtering looks at past behavior to predict future behavior for a particular user segment ("People who bought this book also bought this other book"). To get good results you need well-defined user segments (with similar characteristics) - and a large sample of users and data per segment. There might be some overlap if you are using A|B testing techniques to test some broad recommendations, but I'm not sure about the effectiveness of it. Are there any E.piphany or Blue Martini people on the board to comment?
      >   >
      >   >
      >   >
      >   >     And since we are close to election day, here's an analogy:  A|B testing is to personalization what the federal government is to a local assembly. How about that?
      >   >
      >   >
      >   >
      >   >     Xavier
      >   >
      >   >
      >   >
      >   >
      >   >     
      >   >
      >   >       ----- Original Message -----
      >   >
      >   >       From: ehansen42
      >   >
      >   >       To: webanalytics@yahoogroups.com
      >   >
      >   >       Sent: Monday, September 27, 2004 12:12 PM
      >   >
      >   >       Subject: [webanalytics] web split-testing vs. personalization
      >   >
      >   >
      >   >
      >   >       Hi folks, I've just joined this list...  Having caught up on the archives, I see a lot of interest in web content testing.  I'd like to pose a related question for discussion:
      >   >
      >   >       Where do you see the overlap between split-testing (A/B, etc.) and web personalization technologies?  What are the unique advantages of each?  In what cases might you use one vs. the other (or both)?
      >   >
      >   >       Seems to me that split-testing is ideal for improving web conversion for factors such as a product description and UI elements (colors/fonts/images), whereas personalization (as defined by vendors such as ATG, e.piphany, etc.) is ideal for testing (and automatically optimizing) product mix, upsell/cross-sell, etc.
      >   >
      >   >       But really, there is some overlap between split-testing and personalization, no?  The personalization vendors tout things like being "adaptive" and self-learning, meaning that even though they are personalizing the web experience on a visitor-by-visitor basis, they are collecting conversion metrics and generalizing them to broader visitor segments.
      >   >
      >   >       For example, you may be a first-time visitor to a web site, but when you click on a product link, your personalized page is computed from historical conversion data of past visitors.  So there's some inherent testing going on.  Doesn't this sound a bit like automated split-testing where the target audience is "per arbitrary segment" rather than "the entire population"?
      >   >
      >   >       Sorry if the topic is on the fringe of being too academic...  ;)
      >   >
      >   >       cheers
      >   >       Eric












    • matthewjncroche - Message 2 of 15, Oct 12, 2004

        The last two tests we ran (both ended within the last few weeks) paid for themselves on an ROI basis before the tests even finished.

        (I know that the stats-monsters on this list will point out the logical inconsistency, but go with me on this one...)

        There are two truths you have to believe in to support a long-term future in testing and optimization:
        1. People are not getting better at "picking winners"
        2. On-line customer acquisition is not getting cheaper

        After four years of investment in various forms of customer acquisition (PPC, CPA, CPM...), the time has come to reform the site and improve conversion. Either you do it by testing your own ideas, or by using testing to evaluate third-party tools. Either way, you will be testing one of these days.

        Matthew Roche
        http://www.offermatica.com
      • matthewjncroche
        To be more terse, gut feeling does not hold up. Tests show that even the most experienced marketers pick the best-performing treatment less than 50% of
        Message 3 of 15 , Oct 12, 2004
        • 0 Attachment
          To be more terse, "gut feeling" does not hold up. Test show that
          even the most experienced marketers can pick the best performing
          treatment less than 50% of the time!

          To be clear, great marketing ideas come from marketers 100% of the
          time. Testing just helps to separate great ideas that work from
          great ideas that don't.

          Matthew Roche

          --- In webanalytics@yahoogroups.com, "Xavier Casanova"
          <xavier_casanova@y...> wrote:
          > [...]
          > - 'Gut feeling' because good and creative marketing people are
          > better at figuring out what's good or bad for a site than an A|B
          > test, in general
          > - 'Cost' because A|B tests are resource intensive, and to be
          > really effective, you've got to run them live (i.e. play with
          > real customers)
          >
          > In other words, is this wishful thinking? Could split & multi-
          > variable testing be the next disappointment after
          > personalization?
          >
          > OR, are we witnessing a real change in how people run their
          > online business?
          >
          > I could argue both ways.
          >
          > Xavier
          >
          > ----- Original Message -----
          > From: jimmacintyreiv
          > To: webanalytics@yahoogroups.com
          > Sent: Sunday, October 03, 2004 7:32 AM
          > Subject: [webanalytics] Re: web split-testing vs. personalization
          >
          >
          >
          >
          > After spending the better part of the 1994-2000 period building
          > capabilities to do your capabilities 1-4 and other related
          > "personalization," "targeting" or "automated optimization"
          > functions, and then applying that technology to sites (many
          > retail, but many other types of companies/sites as well), I have
          > to admit to some biases in regard to what is worth discussing
          > about such functionality.
          >
          > What many sites that implemented such capabilities ended up with
          > was undue complexity. Complexity that most organizations either
          > couldn't manage or didn't/couldn't take advantage of on an
          > ongoing basis, at least not enough to produce a positive ROI.
          >
          > Large sites like Amazon and others have done a good job of
          > implementing some of these personalization capabilities, but I
          > think a survey of the broader market would find that many
          > companies bought all manner of personalization infrastructure,
          > tools and consulting to enable these capabilities, and found
          > that, due to a wide range of practical issues, they were
          > difficult to derive business advantage and ROI from.
          >
          > As an example, take a look at broadvision.com and see that the
          > one-time leader in personalization and one-to-one marketing
          > (a multi-billion dollar market cap at the time; now worth
          > ~100MM) has changed their focus to "enterprise business portals."
          >
          > My feeling is that you can safely infer from this and many other
          > related facts that many, many sites that wanted to do
          > "personalization" or "automated optimization" gave up many of
          > their ambitions after spending a great deal of money.
          >
          > IMHO there are a number of valuable lessons to be taken from
          > this, and many of them can be applied to this thread and to
          > experimentation in general. They are cliche or platitude, but
          > worthwhile nonetheless, I think.
          >
          > 1. Simplicity is valuable. The KISS principle is one that can't
          > be forgotten in regard to a site. Testing can be very valuable
          > to a site/business, but to be so it needs to be easy to do. Easy
          > to plan, easy to implement and easy to evaluate. If one gets too
          > deep into the weeds with it, it becomes impracticable for all
          > but the most advanced organizations.
          >
          > 2. Experimentation is a way of thinking about and doing business.
          > It's a culture that needs to be built in many companies. One of
          > my financial services clients has as their primary business
          > mantra that they are a "hypothesis and experimentation driven
          > company." This is a very valuable approach to building a company
          > and can be applied in many ways. The important point is that
          > almost everything you might come up with as a new business idea,
          > approach, functionality, etc. can be reduced to a hypothesis and
          > tested in some fashion.
          >
          > So rather than get wrapped around the axle about which methods
          > of personalization or automated optimization are most effective
          > in general (the answer varies by company type, state of
          > development and many other factors), my encouragement to
          > companies and professionals here is to build a culture in your
          > company that generates testable hypotheses, implements
          > experiments to prove or disprove those hypotheses, and learns
          > from these experiments on an ongoing, iterative basis.
          >
          > The web analytics team for a site should be a primary driver in
          > building this culture. The web analytics team can help to
          > generate and quantify hypotheses, determine the right way to
          > test them, and quantify the results and learnings from them for
          > the rest of the site team.
          >
          > It is my observation that it almost doesn't matter what
          > hypothesis you start with, as long as it is a valid hypothesis
          > that can be tested. It is, to use another platitude, more
          > important to just do it. Once a team has as its culture to
          > hypothesize concretely, test effectively and learn from the
          > results, the team and the business results evolve more rapidly.
          > Better and better hypotheses and more effective experiments are
          > the result.
          >
          > What should be tested depends entirely upon your specific
          > business dynamics, state of capability and forward objectives.
          > The basic requirement is that your team comes up with hypotheses
          > that are testable, then runs the experiments, and then spends
          > the time required to learn from the results. So pick as your
          > place to start experimenting something that your whole team will
          > understand, like your home page, a campaign landing page, a new
          > cart flow, etc. The most important thing is to get started.
          >
          > The key learnings needed by a team that has an experimentation
          > culture are:
          >
          > 1. What is a valid hypothesis to test? What components does the
          > hypothesis need?
          > 2. How does an experiment need to be designed to produce valid
          > results when implemented?
          > 3. What are the tools required to implement your experiments?
          > 4. How and for how long will the experiments be implemented?
          > 5. When and how will the results be evaluated?
          > 6. How much confidence do you require in the results for what
          > type of decision making?
          > 7. How should the results be interpreted and acted on?
          >
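          For concreteness on points 5 and 6, here is a minimal sketch in
          Python of how a finished A|B test might be evaluated with a
          standard two-proportion z-test. The visitor and conversion counts
          are purely illustrative, not results from any test in this thread:

          from math import sqrt, erf

          def z_test(conv_a, n_a, conv_b, n_b):
              # Two-proportion z-test: returns (relative lift, z, two-sided p).
              p_a, p_b = conv_a / n_a, conv_b / n_b
              pooled = (conv_a + conv_b) / (n_a + n_b)
              se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
              z = (p_b - p_a) / se
              p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
              return p_b / p_a - 1, z, p_value

          # Hypothetical counts: 180/9000 conversions on A, 228/9100 on B.
          lift, z, p = z_test(180, 9000, 228, 9100)
          print(f"lift={lift:+.1%}  z={z:.2f}  p={p:.3f}")

          How much confidence is "enough" (question 6) is a business call;
          p below 0.05, i.e. 95% confidence, is a common default for
          go/no-go decisions.
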
          > There is a lot of talk about a lot of different testing
          > methodologies here in this thread. In my mind they confuse the
          > real matter at hand to a degree. Basic controlled
          > experimentation (A|B|C) is the place to start. Doing this
          > correctly is hard enough for teams that are new to it; using
          > more statistically advanced methods simply adds complexity.
          > Such methods might be helpful for small test sets, shortening
          > test runs on small test sets and the like, but these are edge
          > cases that can be left for later exploration after a site's team
          > has mastered basic controlled experimentation and has developed
          > a culture of experimentation that has led them to these edge
          > cases.
          >
          > It could be very helpful to a number of people in this group if
          > a lot of the questions about experimentation ran something like
          > this:
          >
          > "I have a hypothesis that if I alter this landing page like X,
          > it will produce a higher campaign conversion rate. What is the
          > best way to test that hypothesis if I expect Y visitors to the
          > landing page in Z days?"
          >
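          The "Y visitors in Z days" part of that question can be answered
          up front with a rough sample-size estimate. Another sketch with
          invented numbers; 1.96 and 0.84 are the usual normal quantiles for
          95% confidence and 80% power:

          def visitors_per_variation(p_base, relative_lift,
                                     z_alpha=1.96, z_beta=0.84):
              # Rough per-arm sample size to detect the given relative lift.
              p_new = p_base * (1 + relative_lift)
              variance = p_base * (1 - p_base) + p_new * (1 - p_new)
              return int(variance * (z_alpha + z_beta) ** 2
                         / (p_new - p_base) ** 2)

          # Hypothetical: 2% baseline conversion, detect a 15% relative lift.
          n = visitors_per_variation(0.02, 0.15)
          print(n, "visitors per variation")                 # roughly 37,000
          print(2 * n / 5000, "days at 5,000 visitors/day")  # about 15
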
          > "I have a hypothesis that my new site capability to provide cross-
          > sell recommendations will increase my order sizes without
          decreasing
          > my conversion rates by at least $X.XX, if I change the cross-
          selling
          > rules or groupings, what is the best way to test that my rules
          and
          > groupings are producing a positive result in regard to order size
          > without reducing my conversion rates?"
          >
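          That second hypothesis has two success criteria at once (order
          size up, conversion not down), so both metrics have to be read
          side by side. A toy sketch with invented order logs:

          from statistics import mean

          def summarize(order_values, visitors):
              # One arm's conversion rate and average order value.
              return {"conversion": len(order_values) / visitors,
                      "aov": mean(order_values) if order_values else 0.0}

          # Invented logs: order values per arm, plus each arm's traffic.
          control = summarize([42.0, 18.5, 77.0], visitors=150)  # no cross-sell
          treated = summarize([55.0, 61.0, 39.5], visitors=148)  # cross-sell on
          print("control:", control)
          print("treated:", treated)
          # Ship the rules only if treated AOV beats control by the target
          # $X.XX *and* treated conversion is not significantly lower.
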
          > Since there is such a great amount of expertise here in regard
          > to testing, it might be most useful for folks to throw out some
          > real hypotheses like these and have the help of the group in
          > thinking through how to make the testing of them more concrete
          > and effective given the circumstances at hand. It would be great
          > to see some actual results come back to the group so that all
          > here could benefit from the whole cycle of test design,
          > experiment implementation and evaluation.
          >
          > Best regards,
          >
          >
          >
          > --- In webanalytics@yahoogroups.com, "Xavier Casanova"
          > <xavier_casanova@y...> wrote:
          > > That makes sense - we're all on the same page. I like your
          > > classification:
          > > 1 - Automated XSells
          > > 2 - Product recommendations
          > > 3 - Product embellishment
          > > 4 - Configurators and Calculators
          > >
          > > Question - From your experience, assuming I have a retail site
          > > which hasn't done any of these "optimizations", where should I
          > > start? Which ones provide the highest immediate ROI?
          > > (I would also be interested in hearing what retailers have to
          > > say...)
          > >
          > >
          > > ----- Original Message -----
          > > From: matthewjncroche
          > > To: webanalytics@yahoogroups.com
          > > Sent: Thursday, September 30, 2004 12:45 PM
          > > Subject: [webanalytics] Re: web split-testing vs. personalization
          > >
          > >
          > > Great clarifying points.
          > >
          > > Personalization is really just overloaded - it can mean MyYahoo
          > > (rearrange elements on a "personal page"), Collaborative
          > > Filtering, or even just your own login. With this range, it is
          > > really just too hard to make any general observations.
          > >
          > > For conversation's sake, let's just talk about those
          > > optimizations which relate to product suggestion or automated
          > > merchandising. These could include:
          > > 1. Automated cross-sell (up-sell, bundling) supported by custom
          > > systems, packaged software, and ASPs
          > > 2. Product recommendation (best-sellers, people like you, staff
          > > picks, automatic suggestions)
          > > 3. Product embellishment (image zoom and pan, 3D, dressing
          > > rooms, fabric/color changers)
          > > 4. Configurators, calculators
          > >
          > > With these, it would be impossible to A|B an individual
          > > recommendation or presentation. What you would be doing, as
          > > Matt Belkin and others have pointed out, is testing the
          > > aggregate effect of the feature or algorithm. To state it
          > > another way: not A|B testing that an individual recommendation
          > > was effective, but that the overall mechanism had a positive
          > > effect on the segment to which it was shown, as measured by
          > > average order size, conversion, revenue per visit, leads, etc.
          > >
          > > A better term would be a present/not-present test for the
          > > particular optimization.
          > >
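          A present/not-present split only gives a clean read if each
          visitor stays in the same arm across visits. One common way to get
          that is to bucket on a hash of the visitor ID; a minimal sketch in
          Python, assuming a generic hash-bucketing scheme rather than any
          particular vendor's method:

          import hashlib

          def feature_on(visitor_id, test_name, percent_on=50):
              # Deterministic bucketing: same visitor, same arm, every visit.
              digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
              return int(digest, 16) % 100 < percent_on

          print(feature_on("visitor-1234", "xsell-present"))  # stable True/False
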
          > > Implicit in any good optimization, of course, would be a
          > > feedback mechanism for the target metric. We approach the
          > > problem by setting up a listener to measure purchases or clicks
          > > so that the algorithm for finding the best product has a way of
          > > refining itself.
          > >
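          That listener-plus-refinement loop can be as simple as an
          epsilon-greedy choice over candidate products. A bare-bones
          sketch; the product names and buy rates below are invented, and a
          real system would persist the counts rather than keep them in
          memory:

          import random

          stats = {"case": [0, 0], "strap": [0, 0], "charger": [0, 0]}  # [shows, buys]

          def pick(epsilon=0.1):
              if random.random() < epsilon:
                  return random.choice(list(stats))  # explore occasionally
              # otherwise exploit the best observed buy rate so far
              return max(stats, key=lambda p: stats[p][1] / stats[p][0]
                                              if stats[p][0] else 0.0)

          def listener(product, bought):
              stats[product][0] += 1                 # feedback from the listener
              stats[product][1] += int(bought)

          true_rates = {"case": 0.05, "strap": 0.02, "charger": 0.08}
          for _ in range(1000):                      # simulated visits
              product = pick()
              listener(product, random.random() < true_rates[product])
          print(stats)                               # "charger" should pull ahead
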
          > >
          > > Matthew Roche
          > > http://www.offermatica.com
          > >
          > >
          > > --- In webanalytics@yahoogroups.com, "Xavier Casanova"
          > > <xavier_casanova@y...> wrote:
          > > > My point was not about A|B testing as a tactic to measure
          > > > improvements in conversion rates due to personalization. My
          > > > point was in response to Eric Hansen's comment:
          > > >
          > > > "Seems to me that split-testing is ideal for improving web
          > > > conversion for factors such as a product description and UI
          > > > elements (colors/fonts/images), whereas personalization (as
          > > > defined by vendors such as ATG, E.piphany, etc.) is ideal for
          > > > testing (and automatically optimizing) product mix,
          > > > upsell/cross-sell, etc."
          > > >
          > > > Again, from my perspective, in the context of Eric's post,
          > > > these techniques are incompatible. A|B testing is not ideal
          > > > for determining for a specific user the ideal product mix,
          > > > upsell/cross-sell, etc. because the test sample is too small.
          > > > Collaborative filtering (to the best of my knowledge) is the
          > > > core technology behind personalization.
          > > >
          > > > On the other hand, I need clarification about what you guys
          > > > are saying - and what you call personalization.
          > > >
          > > > a/ If what you call personalization are simple touches to a
          > > > page like "Hello John" or "Your Nissan Altima is due for
          > > > maintenance in 3 days", then yes, you might be right: A|B
          > > > testing is an effective way of measuring a lift in conversion
          > > > rates. These are broad/general features after all.
          > > >
          > > > b/ However, if what you call personalization is dynamically
          > > > building pages that contain the right product mix, the right
          > > > upsell products (like Amazon), then I don't think A|B testing
          > > > can give you a reliable answer about the effectiveness of
          > > > that kind of personalization. Isn't the sample of users too
          > > > small, and aren't the variables constantly changing? Can that
          > > > test ever converge? Depending on the quality of your
          > > > personalization results you may do awesome one day, and
          > > > terrible the next day.
          > > >
          > > >
          > > > ----- Original Message -----
          > > > From: Jim MacIntyre
          > > > To: webanalytics@yahoogroups.com
          > > > Sent: Tuesday, September 28, 2004 3:14 AM
          > > > Subject: RE: [webanalytics] web split-testing vs. personalization
          > > >
          > > >
          > > > The first time I used A|B testing back in the 90s was to test
          > > > the value of a personalization/mass customization system I
          > > > was implementing at the time, much as you describe. Likewise,
          > > > A|B testing can be used to test a very wide range of such
          > > > "value add" functionality to see if it actually adds value.
          > > > It continues to amaze me that sites implement personalization
          > > > and other features that have the intention of increasing
          > > > conversion rates without insisting on any results tests, such
          > > > as requiring the personalization vendor to prove through an
          > > > A|B test that their personalization capability can improve
          > > > conversion rates.
          > > >
          > > >
          > > >
          > > > --------------------------------------------------------------
          > > > From: Matt Belkin [mailto:mbelkin@m...]
          > > > Sent: Tuesday, September 28, 2004 2:14 AM
          > > > To: webanalytics@yahoogroups.com
          > > > Subject: RE: [webanalytics] web split-testing vs. personalization
          > > >
          > > >
          > > > Actually, to clarify, AB testing is really quite compatible
          > > > with personalization. To restate what I think most people
          > > > here already know, AB testing is just the comparative test of
          > > > one approach vs. another in generating a desired result. For
          > > > instance, does web page A perform better than web page B at
          > > > converting sales leads? The beauty of AB testing, when done
          > > > correctly, is that you may constantly achieve gains thru
          > > > continual improvement (and hence, generate ROI from your
          > > > Analytics investment).
          > > >
          > > >
          > > >
          > > > Personalization, on the other hand, is much more about 1:1
          > > > (or 1:many) customer communication. This includes
          > > > collaborative filtering, but certainly isn't limited to it.
          > > >
          > > >
          > > >
          > > > So to directly address Xavier's comments, you could
          > > > potentially use AB testing to experiment with different types
          > > > of personalization. For instance, if you choose to provide
          > > > customer segment A with a personalized experience (i.e. a
          > > > recommendation engine) and not provide customer segment B
          > > > with this same functionality, you could compare the
          > > > productivity of each segment to determine if this
          > > > personalization capability adds value. Of course, this
          > > > assumes no other factors change (ceteris paribus).
          > > >
          > > >
          > > >
          > > > Hope that helps, Matt.
          > > >
          > > >
          > > >
          > > >
          > > > --------------------------------------------------------------
          > > >
          > > > From: Xavier Casanova [mailto:xavier_casanova@y...]
          > > > Sent: Monday, September 27, 2004 7:48 PM
          > > > To: webanalytics@yahoogroups.com
          > > > Subject: Re: [webanalytics] web split-testing vs. personalization
          > > >
          > > >
          > > >
          > > > I have limited knowledge on the topic, but it seems to me
          > > > that A|B testing and personalization are, in general,
          > > > incompatible techniques for improving your conversion rates.
          > > >
          > > > - A|B testing aims at improving broad features of the site,
          > > > and making them appeal to the masses
          > > >
          > > > - Personalization, on the other hand, is about customizing
          > > > the user experience on an individual basis
          > > >
          > > >
          > > >
          > > > My understanding is that personalization applications
          > > > extensively use collaborative filtering techniques.
          > > > Collaborative filtering looks at past behavior to predict
          > > > future behavior for a particular user segment ("People who
          > > > bought this book also bought this other book"). To get good
          > > > results you need well-defined user segments (with similar
          > > > characteristics) and a large sample of users & data per
          > > > segment. There might be some overlap if you are using A|B
          > > > testing techniques to test some broad recommendations, but
          > > > I'm not sure about the effectiveness of it. Are there any
          > > > E.piphany or Blue Martini
          > > people on the board to comment?
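          The "people who bought this also bought that" idea can be
          approximated with nothing fancier than co-purchase counts; a toy
          sketch over invented orders:

          from collections import Counter
          from itertools import permutations

          orders = [{"book-a", "book-b"}, {"book-a", "book-c"},
                    {"book-a", "book-b", "book-d"}, {"book-b", "book-d"}]

          also_bought = Counter()
          for order in orders:
              for x, y in permutations(order, 2):  # ordered co-purchase pairs
                  also_bought[(x, y)] += 1

          def recommend(item, k=2):
              pairs = [(y, n) for (x, y), n in also_bought.items() if x == item]
              return [y for y, _ in sorted(pairs, key=lambda t: -t[1])[:k]]

          print(recommend("book-a"))  # e.g. ['book-b', 'book-c']
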
          > > >
          > > >
          > > >
          > > > And since we are close to election day, here's an analogy:
          > > > A|B testing is to personalization what the federal government
          > > > is to a local assembly. How about that?
          > > >
          > > >
          > > >
          > > > Xavier
          > > >
          > > >
          > > >
          > > >
          > > >
          > > >
          > > > ----- Original Message -----
          > > >
          > > > From: ehansen42
          > > >
          > > > To: webanalytics@yahoogroups.com
          > > >
          > > > Sent: Monday, September 27, 2004 12:12 PM
          > > >
          > > > Subject: [webanalytics] web split-testing vs. personalization
          > > >
          > > >
          > > >
          > > > Hi folks, I've just joined this list... having caught up on
          > > > the archives, I see a lot of interest in web content testing.
          > > > I'd like to pose a related question for discussion:
          > > >
          > > > Where do you see the overlap between split-testing (A/B,
          > > > etc.) and web personalization technologies? What are the
          > > > unique advantages of each? In what cases might you use one
          > > > vs. the other (or both)?
          > > >
          > > > Seems to me that split-testing is ideal for improving web
          > > > conversion for factors such as a product description and UI
          > > > elements (colors/fonts/images), whereas personalization (as
          > > > defined by vendors such as ATG, E.piphany, etc.) is ideal for
          > > > testing (and automatically optimizing) product mix,
          > > > upsell/cross-sell, etc.
          > > >
          > > > But really, there is some overlap between split-testing and
          > > > personalization, no? The personalization vendors tout things
          > > > like being "adaptive" and self-learning, meaning that even
          > > > though they are personalizing the web experience on a
          > > > visitor-by-visitor basis, they are collecting conversion
          > > > metrics and generalizing them to broader visitor segments.
          > > >
          > > > For example, you may be a first-time visitor to a web site,
          > > > but when you click on a product link, your personalized page
          > > > is computed from historical conversion data of past visitors.
          > > > So there's some inherent testing going on. Doesn't this sound
          > > > a bit like automated split-testing where the target audience
          > > > is "per arbitrary segment" rather than "the entire
          > > > population"?
          > > >
          > > > Sorry if the topic is on the fringe of being too academic... ;)
          > > >
          > > > cheers
          > > > Eric
        • mayerice
          Hi guys, I am a newcomer to this forum and very happy to be here. I hope to learn/share whatever I can with you folks. I ve been involved in web analytics
          Message 4 of 15 , Oct 14, 2004
          • 0 Attachment
            Hi guys,

            I am a newcomer to this forum and very happy to be here. I hope
            to learn/share whatever I can with you folks. I've been involved
            in web analytics since '99 and I currently use Visual Sciences
            as my tool (after much research and lots of convincing
            in-house). In the near future I will post some of the split
            testing I've done in the past, which for me has been a very
            effective tool in increasing revenue and conversion rate for
            several types of site functionality, and with VS it is so simple
            to read.

            Mayer Gniwisch
            Ice.com

            --- In webanalytics@yahoogroups.com, "matthewjncroche"
            <matthewjncroche@y...> wrote:
            >
            > The last two tests we ran (both ended within the last few
            > weeks) paid for themselves on an ROI basis before the tests
            > even finished.
            >
            > (I know that the stats-monsters on this list will point out the
            > logical inconsistency, but go with me on this one...)
            >
            > There are two truths you have to believe in to support a
            > long-term future in testing and optimization:
            > 1. People are not getting better at "picking winners"
            > 2. On-line customer acquisition is not getting cheaper
            >
            > After four years of investment in various forms of customer
            > acquisition (PPC, CPA, CPM...), the time has come to reform the
            > site and improve conversion. Either you test your own ideas, or
            > you use testing to evaluate third-party tools. Either way, you
            > will be testing one of these days.
            >
            > Matthew Roche
            > http://www.offermatica.com
            > --- In webanalytics@yahoogroups.com, "Xavier Casanova"
            > <xavier_casanova@y...> wrote:
            > > Hey Jim, you're hitting on a pretty important topic - i.e.
            > companies needing to acquire a culture of experimentation. I think
            we
            > all want to agree with you, because when companies experiment they
            > need tools to measure, i.e. they need us (Web analytics) ;)
            > >
            > > Let me play devil's advocate however and do a reality check. We
            > cannot ignore the fact that imitation, gut-feeling and cost are
            > forces playing against this culture of experimentation which you
            > describe.
            > > - 'Imitation' because it's pretty easy to go out and copy what
            > others are doing. Navigation, product presentation, shopping cart,
            > checkout, it's all out there. A methodical Junior Analyst can
            > probably come up with a handful of great recommendations for *any*
            > site based just by spending a few hours online.
            > > - 'Gut feeling' because good and creative marketing people are
            > better at figuring out what's good or bad for a site than an A|B
            > test, in general
            > > - 'Cost' because A|B tests are resource intensive, and to be
            really
            > effective, you've got to run them live (i.e. play with real
            customers)
            > >
            > > In other words, is this wishful thinking? Could split&multi-
            > variable testing be the next disappointment after personalization?
            > >
            > > OR, Are we witnessing a real change in how people run their
            online
            > business?
            > >
            > > I could argue both ways.
            > >
            > > Xavier
            > >
            > > ----- Original Message -----
            > > From: jimmacintyreiv
            > > To: webanalytics@yahoogroups.com
            > > Sent: Sunday, October 03, 2004 7:32 AM
            > > Subject: [webanalytics] Re: web split-testing vs.
            personalization
            > >
            > >
            > >
            > >
            > > After spending the better part of the 1994-2000 period building
            > > capabilities to do your capabilties 1-4 and other
            > > related "personalization," "targeting" or "automated
            > optimization"
            > > functions and then applying that technology to sites, many
            > retail,
            > > but many for other types of companies/sites as well, I have to
            > admit
            > > to some biases in regard to what is worth discussing in regard
            to
            > > such functionality.
            > >
            > > What many site that implemented such capabilities in the market
            > ended
            > > up with was undue complexity. Complexity that most
            organizations
            > > either couldn't manage or didn't/couldn't take advantage of on
            an
            > > ongoing basis, at least enough so to produce a positive ROI on
            > such.
            > >
            > > Large sites like Amazon and others have done a good job of
            > > implementing some of these personalization capabilities, but I
            > think
            > > if a survey was done of the broader market it would be found
            that
            > > many companies bought all manner of personalization
            > infrastructure
            > > tools and consulting to help enable these capabilities and
            found
            > that
            > > due to a wide range of practical issues they were difficult to
            > derive
            > > business advantage and ROI from.
            > >
            > > As an example of this take a look at broadvision.com and see
            that
            > the
            > > one time leader in personalization and one-to-one marketing
            (with
            > a
            > > multi-billion dollar market cap at the time is now worth
            ~100MM)
            > has
            > > changed their focus to "enterprise business portals."
            > >
            > > My feeling is that you can safely infer from this and many
            other
            > > related facts that many many sites that wanted to
            > > do "personalization" or "automated optimization" gave up many
            of
            > > their ambitions after spending a great deal of money.
            > >
            > > INHO there are a number of valuable lessons to be taken from
            this
            > and
            > > many of them can be applied to this thread and experimentation
            in
            > > general. They are cliche or platitude, but worthwhile I think
            > none
            > > the less.
            > >
            > > 1. Simplicity is valuable. The KISS principle is one that
            can't
            > be
            > > forgotten in regard to a site. Testing can be very valuable to
            a
            > > site/business, but to be so it needs to be easy to do. Easy to
            > plan,
            > > easy to implement and easy to evaluate. If one gets too deep
            > into
            > > the weeds with it, it becomes impractical/impracticable but for
            > the
            > > most advanced organizations.
            > >
            > > 2. Experimentation is a way of thinking about and doing
            > business.
            > > Its a culture that needs to be built in many companies. One of
            > my
            > > financial services clients has as their primary business mantra
            > that
            > > they are a "hypothesis and experimentation driven company."
            This
            > is
            > > a very valuable approach to building a company and can be
            applied
            > in
            > > many ways. The important point is that almost everything that
            > you
            > > might come up with as a new business idea, approach,
            > functionality,
            > > etc. can be reduced to a hypothesis and be tested in some
            fashion.
            > >
            > > So rather than get wrapped around the axle about which methods
            of
            > > personalization or automated optimization are most effective in
            > > general (as the answer varies by company type, state of
            > development
            > > and many other factors), my encouragement to companies and
            > > professionals here is to build a culture in your company that
            > > generates testable hypotheses, implements experiments to prove
            or
            > > disprove these hypotheses and learns from these experiments on
            an
            > > ongonig interative basis.
            > >
            > > The web analytics team for a site should be a primary driver
            > around
            > > building this culture. The web analytics team can help to
            > generate
            > > and and quantify hypotheses, determine the right way to test
            then
            > and
            > > quantify the results and learnings from them for the rest of
            the
            > site
            > > team.
            > >
            > > It is my observation that it almost doesn't matter what the
            > > hypothesis is that you start with as long as it is a valid
            > hypothesis
            > > that can be tested. It is, to use another platitude more
            > important
            > > to just do it. Once a team has as their culture to hypothesize
            > > concretely, test effectively and learn from such, the team and
            > the
            > > business results evolve more rapidly. Better and better
            > hypotheses
            > > and more effective experiments are the result.
            > >
            > > What should be tested completely depends upon your specific
            > business
            > > dynamics, state of capability and forward objectives. The
            basic
            > > requirement is that your team needs to come up with hypotheses
            > that
            > > are testable, then run the experiments and then spend the time
            > > required to learn from the results. So pick as your place to
            > start
            > > experimenting something that your whole team will understand,
            > like
            > > your home page, a campaign landing page, a new cart flow, etc.
            > The
            > > most important thing is to get started.
            > >
            > > The key learnings needed by a team that has an experimentation
            > > culture are:
            > >
            > > 1. What is a valid hypothesis to test? What components does
            the
            > > hypothesis need?
            > > 2. How does an experiment need to be designed to produce valid
            > > results when implemented?
            > > 3. What are the tools required to implement your experiments?
            > > 4. How and for how long will the experiments be implemented?
            > > 5. When and how will the results be evaluated?
            > > 6. How much confidence do you require in the results for what
            > type
            > > of decision making?
            > > 7. How should the results be interpreted and acted on?
            > >
            > > There is a lot of talk about a lot of different testing
            > methodologies
            > > here in this thread. In my mind they confuse the real matter
            at
            > hand
            > > to a degree. Basic controlled experimentation (A|B|C) is the
            > place
            > > to start. Doing this correctly is hard enough for teams that
            are
            > new
            > > to it, using more statistically advanced methods simply adds
            > > complexity. Such methods might be helpful for small test sets,
            > > shortening test runs on small test sets and the like, but these
            > are
            > > edge cases that can be left for later exploration after a
            site's
            > team
            > > has mastered basic controlled experimentation and has developed
            a
            > > culture of experimentation that has led them to these edge
            cases.
            > >
            > > It could be very helpful to a number of people in this group if
            a
            > lot
            > > of the questions about experimentation ran something like this:
            > >
            > > "I have a hypothesis that if I alter this landing page like X
            > that it
            > > will produce a higher campaign conversion rate. What is the
            best
            > way
            > > to test that hypothesis if I expect Y visitors to the landing
            > page in
            > > Z days?"
            > >
            > > "I have a hypothesis that my new site capability to provide
            cross-
            > > sell recommendations will increase my order sizes without
            > decreasing
            > > my conversion rates by at least $X.XX, if I change the cross-
            > selling
            > > rules or groupings, what is the best way to test that my rules
            > and
            > > groupings are producing a positive result in regard to order
            size
            > > without reducing my conversion rates?"
            > >
            > > Since there is such a great amount of expertise here in regard
            to
            > > testing it might be most useful for folks to throw out some
            real
            > > hypotheses like these and have the help of the group in
            thinking
            > > through how to make the testing of them more concrete and
            > effective
            > > given the circumstances at hand. It would be great to see some
            > > actual results come back in to the group so that all here could
            > > benefit from the whole cycle of test design, experiment
            > > implementation and evaluation.
            > >
            > > Best regards,
            > >
            > >
            > >
            > > --- In webanalytics@yahoogroups.com, "Xavier Casanova"
            > > <xavier_casanova@y...> wrote:
            > > > That makes sense - we're all on the same page. I like your
            > > classification
            > > > 1 - Automated XSells
            > > > 2 - Product recommendations
            > > > 3 - Product embellishment
            > > > 4 - Configurators and Calculators.
            > > >
            > > > Question - From you experience, assuming I have a retail site
            > which
            > > hasn't done any of these "optimizations", where should I start?
            > Which
            > > ones provide the highest immediate ROI?
            > > > (I would also be interested in hearing what retailers have to
            > > say...)
            > > >
            > > >
            > > > ----- Original Message -----
            > > > From: matthewjncroche
            > > > To: webanalytics@yahoogroups.com
            > > > Sent: Thursday, September 30, 2004 12:45 PM
            > > > Subject: [webanalytics] Re: web split-testing vs.
            > personalization
            > > >
            > > >
            > > > Great clarifying points.
            > > >
            > > > Personalization is really just overloaded - it can mean
            > MyYahoo
            > > > (rearrange elements on a "personal page"), Collaborative
            > > Filtering,
            > > > or even just your own login. With this range, it is really
            > just
            > > too
            > > > hard to make any general observations.
            > > >
            > > > For conversations sake, lets just talk about those
            > optimizations
            > > > which relate to product suggestion or automated
            > merchandising.
            > > These
            > > > could include:
            > > > 1. Automated cross-sell (up-sell, bundling) supported by
            > custom
            > > > systems, packaged software, and ASPs
            > > > 2. Product recommendation (best-sellers, people like you,
            > staff
            > > > picks, automatic suggestions)
            > > > 3. Product embellishment (image zoom and pan, 3d, dressing
            > rooms,
            > > > fabric/color changers)
            > > > 4. Configurators, calculators
            > > >
            > > > With these, it would be impossible to A|B an individual
            > > > recommendation or presentation. What you would be doing, as
            > Matt
            > > > Belkin and others have pointed out, is testing the
            aggregate
            > > effect
            > > > of the feature or algorithm. To state another way, not A|B
            > > testing
            > > > that an individual recommendation was effective, but that
            the
            > > overall
            > > > mechanism resulted in positive effect on the segment to
            which
            > it
            > > was
            > > > shown as measured by average order size, conversion,
            revenue
            > per
            > > > visit, leads, etc.
            > > >
            > > > A better term would be a present/not present test for the
            > > particular
            > > > optimization.
            > > >
            > > > Implicit in any good optimization, of course, would be a
            > feedback
            > > > mechanism for the target metric. We approach the problem
            by
            > > setting
            > > > up a listener to measure purchases or clicks so that the
            > > algorithm
            > > > for finding the best product has a way of refining itself.
            > > >
            > > >
            > > > Matthew Roche
            > > > http://www.offermatica.com
            > > >
            > > >
            > > > --- In webanalytics@yahoogroups.com, "Xavier Casanova"
            > > > <xavier_casanova@y...> wrote:
            > > > > My point was not about A|B testing as a tactic to measure
            > > > improvements in conversion rates due to personalization. My
            > point
            > > was
            > > > in response to Eric Hansen's comment:
            > > > >
            > > > > "Seems to me that split-testing is ideal for improving
            web
            > > > conversion for factors such as a product description and UI
            > > elements
            > > > (colors/fonts/images), whereas personization (as defined by
            > > vendors
            > > > such as ATG, e.piphany, etc.) is ideal for testing (and
            > > automatically
            > > > optimizating) product mix, upsell/cross-sell, etc."
            > > > >
            > > > > Again, from my perspective, in the context of Eric's
            post,
            > > these
            > > > techniques are incompatible. A|B testing is not ideal for
            > > determining
            > > > for a specific user the ideal product mix, upsell/cross-
            sell,
            > etc
            > > > because the test sample is too small. Collaborative
            filtering
            > (to
            > > the
            > > > best of my knowledge) is the core technology behind
            > > personalization.
            > > > >
            > > > > On the other hand, I need clarification about what you
            guys
            > are
            > > > saying - and what you call personalization.
            > > > >
            > > > > a/ If what you call personalization are simple touches to
            a
            > > page
            > > > like "Hello John" or "Your Nissan Altima is due for
            > maintenance
            > > in 3
            > > > days" then yes, you might be right, A|B testing is an
            > effective
            > > way
            > > > of measuring a lift in conversion rates. These are
            > broad/general
            > > > features after all.
            > > > >
            > > > > b/However if what you call personalization is dynamically
            > > building
            > > > pages that contain the right product mix, the right upsell
            > > products,
            > > > (like Amazon) then I don't think A|B testing can give you a
            > > reliable
            > > > answer about the effectiveness of that kind of
            > personalization.
            > > Isn't
            > > > the sample of users too small, and aren't the variables
            > > constantly
            > > > changing? Can that test ever converge? Depending on the
            > quality
            > > of
            > > > your personalization results you may do awesome or day, and
            > > terrible
            > > > the next day.
            > > > >
            > > > >
            > > > > ----- Original Message -----
            > > > > From: Jim MacIntyre
            > > > > To: webanalytics@yahoogroups.com
            > > > > Sent: Tuesday, September 28, 2004 3:14 AM
            > > > > Subject: RE: [webanalytics] web split-testing vs.
            > > personalization
            > > > >
            > > > >
            > > > > The first time I used A|B testing back in the 90s was
            to
            > test
            > > the
            > > > value of a personalization/mass customization system I was
            at
            > the
            > > > time implementing, much as you describe. Likewise A|B
            > testing
            > > can be
            > > > used to test a very wide range of such "value add"
            > functionality
            > > to
            > > > see if it actually does so. It continues to amaze me that
            > sites
            > > > implement personalization and other features that have the
            > > intention
            > > > of increasing conversion rates without insisting on any
            > results
            > > > tests, such as requiring the personalization vendor to
            prove
            > > through
            > > > A|B test that their personalization capability can improve
            > > conversion
            > > > rates.
            > > > >
            > > > >
            > > > >
            > > > > ----------------------------------------------------------
            --
            > ----
            > > ----
            > > > --------
            > > > > From: Matt Belkin [mailto:mbelkin@m...]
            > > > > Sent: Tuesday, September 28, 2004 2:14 AM
            > > > > To: webanalytics@yahoogroups.com
            > > > > Subject: RE: [webanalytics] web split-testing vs.
            > > > personalization
            > > > >
            > > > >
            > > > > Actually, to clarify, AB testing is really quite
            > compatible
            > > > with personalization. To restate what I think most people
            > here
            > > > already know, AB testing is just the comparative test of
            one
            > > approach
            > > > vs. another in generating a desired result. For instance,
            > does
            > > web
            > > > page A perform better than web page B at converting sales
            > leads.
            > > The
            > > > beauty of AB testing, when done correctly, is that you may
            > > constantly
            > > > achieve gains thru continual improvement (and hence,
            generate
            > ROI
            > > > from your Analytics investment).
            > > > >
            > > > >
            > > > >
            > > > > Personalization, on the other hand, is much more
            about
            > 1:1
            > > (or
            > > > 1:many) customer communication. This includes
            collaborative
            > > > filtering, but certainly isn't limited to it.
            > > > >
            > > > >
            > > > >
> > > > So to directly address Xavier's comments, you could
> > > > potentially use AB testing to experiment with different types of
> > > > personalization. For instance, if you choose to provide customer
> > > > segment A with a personalized experience (i.e. a recommendation
> > > > engine) and not provide customer segment B with this same
> > > > functionality, you could compare the productivity of each
> > > > segment to determine if this personalization capability adds
> > > > value. Of course, this assumes no other factors change (ceteris
> > > > paribus).
> > > >
> > > > Hope that helps, Matt.
> > > >
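That segment A / segment B setup is essentially a holdback test. The
sketch below shows one way it might be wired up; the visitor IDs, the
50/50 split, and the function names are all hypothetical, and hashing
the visitor ID is just one common way to keep each visitor in the same
arm across visits.

    # Stable 50/50 assignment of visitors to a personalization holdback.
    import hashlib

    def in_treatment(visitor_id: str, salt: str = "rec-engine-test") -> bool:
        """Hash the visitor ID so the same visitor always lands in the
        same arm, with no stored assignment table."""
        digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < 50        # 50% get the treatment

    def render_product_page(visitor_id: str) -> str:
        if in_treatment(visitor_id):
            return "page WITH recommendation engine"    # segment A
        return "page WITHOUT recommendation engine"     # segment B (control)

    print(render_product_page("visitor-1234"))

Conversion in the two arms can then be compared with the same kind of
z-test shown above, which is exactly the "prove the feature adds
value" check Jim describes asking of personalization vendors.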
> > > > ----------------------------------------------------------------
> > > >
> > > > From: Xavier Casanova [mailto:xavier_casanova@y...]
> > > > Sent: Monday, September 27, 2004 7:48 PM
> > > > To: webanalytics@yahoogroups.com
> > > > Subject: Re: [webanalytics] web split-testing vs. personalization
> > > >
> > > > I have limited knowledge on the topic, but it seems to me
> > > > that A|B testing and personalization are, in general,
> > > > incompatible techniques for improving your conversion rates.
> > > >
> > > > - A|B testing aims at improving broad features of the site,
> > > > and making them appeal to the masses
> > > >
> > > > - Personalization, on the other hand, is about customizing
> > > > the user experience on an individual basis
> > > >
> > > > My understanding is that personalization applications
> > > > extensively use collaborative filtering techniques.
> > > > Collaborative filtering looks at past behavior to predict
> > > > future behavior for a particular user segment ("People who
> > > > bought this book also bought this other book"). To get good
> > > > results you need well-defined user segments (with similar
> > > > characteristics) and a large sample of users and data per
> > > > segment. There might be some overlap if you are using A|B
> > > > testing techniques to test some broad recommendations, but I'm
> > > > not sure about the effectiveness of it. Are there any E.piphany
> > > > or Blue Martini people on the board to comment?
> > > >
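To see the mechanics behind "people who bought this book also bought
this other book", here is a toy item-to-item co-occurrence sketch. The
purchase baskets are invented, and this shows only the basic idea, not
how E.piphany, Blue Martini, or Amazon actually score recommendations.

    # Toy item-to-item collaborative filtering via co-occurrence counts.
    from collections import Counter
    from itertools import permutations

    baskets = [
        {"book_a", "book_b"},
        {"book_a", "book_b", "book_c"},
        {"book_a", "book_c"},
        {"book_b", "book_d"},
    ]

    # Count how often each pair of items shows up in the same basket.
    co_counts = {}
    for basket in baskets:
        for x, y in permutations(basket, 2):
            co_counts.setdefault(x, Counter())[y] += 1

    def recommend(item, k=2):
        """Items most often bought together with `item`."""
        return [other for other, _ in
                co_counts.get(item, Counter()).most_common(k)]

    print(recommend("book_a"))   # e.g. ['book_b', 'book_c']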
> > > > And since we are close to election day, here's an analogy:
> > > > A|B testing is to personalization what the federal government
> > > > is to a local assembly. How about that?
> > > >
> > > > Xavier
> > > >
> > > > ----- Original Message -----
> > > > From: ehansen42
> > > > To: webanalytics@yahoogroups.com
> > > > Sent: Monday, September 27, 2004 12:12 PM
> > > > Subject: [webanalytics] web split-testing vs. personalization
> > > >
> > > > Hi folks, I've just joined this list... having caught up on
> > > > the archives, I see a lot of interest in web content testing.
> > > > I'd like to pose a related question for discussion:
> > > >
> > > > Where do you see the overlap between split-testing (A/B,
> > > > etc.) and web personalization technologies? What are the unique
> > > > advantages of each? In what cases might you use one vs. the
> > > > other (or both)?
> > > >
> > > > Seems to me that split-testing is ideal for improving web
> > > > conversion for factors such as product descriptions and UI
> > > > elements (colors/fonts/images), whereas personalization (as
> > > > defined by vendors such as ATG, E.piphany, etc.) is ideal for
> > > > testing (and automatically optimizing) product mix,
> > > > upsell/cross-sell, etc.
> > > >
> > > > But really, there is some overlap between split-testing and
> > > > personalization, no? The personalization vendors tout things
> > > > like being "adaptive" and self-learning, meaning that even
> > > > though they are personalizing the web experience on a
> > > > visitor-by-visitor basis, they are collecting conversion
> > > > metrics and generalizing them to broader visitor segments.
> > > >
> > > > For example, you may be a first-time visitor to a web site,
> > > > but when you click on a product link, your personalized page is
> > > > computed from historical conversion data of past visitors. So
> > > > there's some inherent testing going on. Doesn't this sound a
> > > > bit like automated split-testing where the target audience is
> > > > "per arbitrary segment" rather than "the entire population"?
> > > >
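Eric's "automated split-testing" intuition can be made concrete with
an explore/exploit allocation scheme: serve whichever variant has
converted best so far to most visitors, but keep sampling the others.
A minimal epsilon-greedy sketch follows; the variant names and
conversion rates are invented for illustration.

    # Epsilon-greedy selection between page variants: mostly serve the
    # best observed performer, but keep exploring 10% of the time.
    import random

    stats = {"layout_a": [0, 0], "layout_b": [0, 0]}  # [conversions, views]

    def pick_variant(epsilon=0.1):
        if random.random() < epsilon:                 # explore
            return random.choice(list(stats))
        return max(stats,                             # exploit
                   key=lambda v: stats[v][0] / max(stats[v][1], 1))

    def record(variant, converted):
        stats[variant][1] += 1
        if converted:
            stats[variant][0] += 1

    # Simulated traffic: layout_b "truly" converts at 3%, layout_a at 2%.
    true_rate = {"layout_a": 0.02, "layout_b": 0.03}
    for _ in range(10000):
        v = pick_variant()
        record(v, random.random() < true_rate[v])

    print({v: f"{c}/{n} conversions" for v, (c, n) in stats.items()})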
> > > > Sorry if the topic is on the fringe of being too
> > > > academic... ;)
> > > >
> > > > cheers
> > > > Eric
          • Jim Sterne
            After weeks of being weeks behind in reading posts to this *stellar* discussion group, I was roundly berated by our Host with the Most (the learned, Mr.
            Message 5 of 15 , Oct 17, 2004
            • 0 Attachment
              After weeks of being weeks behind in reading posts to this
              *stellar* discussion group, I was roundly berated by our Host
              with the Most (the learned, Mr. Peterson) over lunch at Shop.org
              for not participating.

              I told him how impressed I was, I told him how encouraged
              I was, I told him how knowledgeable everybody on this list
              is, and he *still* made me pay for the lunch.

              So - taking advantage of one of those rare rainy days here
              in Santa Barbara, I have finally caught up! And with no
              further preamble, let me jump right in and comment on this
              recent post:

              At 04:43 PM 10/12/2004, matthewjncroche wrote:

              To be more terse, "gut feeling" does not hold up.  Tests
              show that even the most experienced marketers can pick the
              best-performing treatment less than 50% of the time!

              At the last Emetrics Summit, Ronny Kohavi, the Director of
              Data Mining and Personalization at Amazon, had a slide at
              the end of his presentation that said: "Shameless plug:
              we're hiring!" One of the bullet points was "Data trumps
              intuition and there is a lot of data."

              Matthew Roche continued:
              To be clear, great marketing ideas come from marketers 100% of the
              time.  Testing just helps to separate great ideas that work from
              great ideas that don't.

              I love this distinction. I've been in many a meeting with
              many a great idea, where nobody seemed to care which might
              work best outside of whether the client/boss/CEO would
              like it most. Marketing is so focused on creativity that
              it shrinks from accountability. It is my goal in life to
              change that.

              -------------------------------------------------------------
              Jim Sterne                      Target Marketing of Santa Barbara
              805-965-3184                              http://www.targeting.com
              Consultant, Author, Speaker on Measuring Website Success
              -------------------------------------------------------------
              Subscribe to "Sterne Measures" at  http://www.emetrics.org