
Re: [SearchCoP] Re: Methodology for assessing results from multiple search engines?

  • Lee Romero
    Aug 19, 2008
      On Tue, Aug 19, 2008 at 2:34 PM, Crystal Knapp
      <crystal.knapp@...> wrote:
      > Lee,
      >
      > Thanks for the detailed information. This definitely helps! It's good to
      > know that we're mostly on the right track.
      >

      [LR] Well, that assumes I'm on the right track :-) Hopefully I am, though.

      >
      > We were thinking of incorporating several of the scoring criteria you
      > suggested and giving different weights for each criterion, but we have been
      > struggling with how to assign scores for the interface and relevancy. I
      > like your suggestion of allowing users to assign scores for these, following
      > controlled testing.

      [LR] I think that general approach works well. I would have preferred
      to do my own assessment while also guarding against bias, but I was
      balancing the effort and cost of setting up the assessment against
      the level of confidence one can place in the results. With more
      effort, I think I could have increased the confidence (i.e., reduced
      the "error rate") by, for example, presenting results from every
      engine in exactly the same format (with no visible reference to the
      underlying engine) and by increasing the number of people involved.
      Even so, I still have confidence in the outcome, at least enough to
      support a sound decision (one could still say, "This one isn't rated
      high enough in this area" or "That one is too low here.")
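The weighted multi-criterion scoring described above can be sketched in a few lines. This is a hypothetical illustration, not Lee's actual method: the criterion names, weights, engine names, and scores are all invented for the example; only the idea of combining per-criterion user scores with agreed weights comes from the discussion.

```python
# Hypothetical sketch of weighted multi-criterion engine scoring.
# Criteria, weights, engines, and scores below are illustrative only.

# Weights sum to 1.0 so the overall score stays on the same 0-10 scale
# as the individual criterion scores.
WEIGHTS = {
    "relevancy": 0.4,
    "interface": 0.2,
    "administration": 0.2,
    "integration_effort": 0.2,
}

# Average scores (0-10) gathered from users during blinded testing,
# where results carry no visible reference to the underlying engine.
scores = {
    "Engine A": {"relevancy": 8.0, "interface": 6.5,
                 "administration": 9.0, "integration_effort": 5.0},
    "Engine B": {"relevancy": 9.0, "interface": 7.0,
                 "administration": 6.0, "integration_effort": 7.5},
}

def weighted_score(engine_scores, weights):
    """Weighted sum of an engine's per-criterion scores."""
    return sum(weights[c] * engine_scores[c] for c in weights)

for engine, s in scores.items():
    print(f"{engine}: {weighted_score(s, WEIGHTS):.2f}")
```

One useful property of writing it down this way is that disagreements become concrete: if someone feels an engine is "too low here," you can see exactly which criterion score or weight would have to change to alter the ranking.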

      > We're worried that they might be biased toward a
      > specific search engine, but that bias will still be there even after the
      > purchase. You're confirming my hunch that using user feedback to score
      > relevancy is valid.

      [LR] Yes, that's very possible (likely?). Biases that participants
      have will definitely influence their perception if they know which
      engine is producing which results.

      >
      > I'm also not surprised to hear that you preferred the administration of one
      > vendor but the results of another. I'm worried we might run into the same
      > issue.

      [LR] I would not be surprised if that happened. Even if it does, you
      can still ask yourselves: "Is an X% better search experience worth
      the Y amount of work it will take to get this engine running in our
      infrastructure?" Depending on X and Y, your answer will change, but
      at least you can have the discussion.

      Good Luck!
      Lee

      >
      > Thanks so much,
      >
      > Crystal