
[Computational Complexity] Competitive Analysis

When you cannot achieve the optimum solution of a problem, how do you measure the performance of an algorithm? If you knew the distribution of instances, you could see how well the algorithm performs on average. But most theoretical computer scientists prefer a worst-case analysis that tries to minimize the ratio between the algorithm's solution and the optimal solution. Many algorithms, though, can only guarantee ratios that seem too large to be practical.
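
To make the ratio concrete, here is a minimal sketch (an illustration of the general idea, not an algorithm discussed in this post) of the textbook factor-2 approximation for minimum vertex cover: take both endpoints of every edge in a maximal matching. The matched edges are disjoint and any cover must hit each of them, which is where the factor of 2 comes from.

    # Textbook factor-2 approximation for minimum vertex cover:
    # greedily build a maximal matching and take both endpoints of
    # every matched edge.  The matched edges are disjoint and the
    # optimum cover needs at least one endpoint of each, so the
    # returned cover is at most twice the optimum.
    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:   # edge still uncovered
                cover.add(u)
                cover.add(v)
        return cover

    # On a 5-cycle the optimum cover has 3 vertices; this returns 4,
    # well within the guaranteed factor of 2.
    cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(sorted(vertex_cover_2approx(cycle)))      # [0, 1, 2, 3]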

Vijay Vazirani defends this competitive analysis of algorithms in the preface of his 2001 book Approximation Algorithms.

      With practitioners looking for high performance algorithms having error within 2% or 5% of the optimal, what good are algorithms that come within a factor of 2, or even worse, O(log n) of the optimal? Further, by this token, what is the usefulness of improving the approximation guarantee from, say, factor 2 to 3/2?

      Let us address both issues and point out some fallacies in these assertions. The approximation guarantee only reflects the performance of the algorithm on the most pathological instances. Perhaps it is more appropriate to view the approximation guarantee as a measure that forces us to explore deeper into the combinatorial structure of the problem and discover more powerful tools for exploiting this structure. It has been observed that the difficulty of constructing tight examples increases considerably as one obtains algorithms with better guarantees. Indeed, for some recent algorithms, obtaining a tight example has been a paper in itself. Experiments have confirmed that these and other sophisticated algorithms do have error bounds of the desired magnitude, 2% to 5%, on typical instances, even though their worst case error bounds are much higher. Additionally, the theoretically proven algorithm should be viewed as a core algorithmic idea that needs to be fine tuned to the types of instances arising in specific applications.

But still, why should a practitioner prefer such an algorithm to a heuristic that does just as well on "typical" instances but comes with no worst-case bound? Our usual argument is that we don't really know what "typical" means, and a worst-case guarantee is a promise we can keep no matter what happens.
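
As a toy illustration of that argument (a back-of-the-envelope sketch, not the experiments referred to above; the instance sizes and densities below are arbitrary choices), one can compare greedy set cover, whose worst-case guarantee is roughly ln n, against a brute-force optimum on small random instances and record the worst ratio observed.

    # Toy experiment: greedy set cover (worst case about ln n times
    # optimal) versus a brute-force optimum on small random instances.
    import random
    from itertools import combinations

    def greedy_cover(universe, sets):
        # Repeatedly pick the set covering the most uncovered elements.
        uncovered, chosen = set(universe), 0
        while uncovered:
            best = max(sets, key=lambda s: len(s & uncovered))
            uncovered -= best
            chosen += 1
        return chosen

    def optimal_cover(universe, sets):
        # Brute force: smallest number of sets whose union is the universe.
        for k in range(1, len(sets) + 1):
            for combo in combinations(sets, k):
                if set().union(*combo) >= set(universe):
                    return k

    random.seed(0)
    worst_ratio = 1.0
    for _ in range(200):
        universe = range(12)
        sets = [frozenset(random.sample(universe, random.randint(2, 5)))
                for _ in range(8)]
        if not set().union(*sets) >= set(universe):
            continue                     # skip infeasible instances
        worst_ratio = max(worst_ratio,
                          greedy_cover(universe, sets) / optimal_cover(universe, sets))
    print(f"worst greedy/optimal ratio seen: {worst_ratio:.2f}")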

Besides approximation algorithms, theoreticians have taken competitive analysis into other arenas, such as comparing on-line algorithms for job requests against the off-line optimum, and auctions that achieve a constant factor of the optimal revenue, where even a competitive factor of 2 can mean a serious loss of income.
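
For the on-line setting, a classic example (again just an illustrative sketch, not a specific result from this post) is Graham's list scheduling: assign each arriving job to the currently least-loaded of m identical machines, and the resulting makespan is within a factor of 2 - 1/m of the off-line optimum.

    # Graham's list scheduling: assign each arriving job to the currently
    # least-loaded of m identical machines.  The makespan is within a
    # factor of (2 - 1/m) of the optimal off-line schedule, even though
    # each job is placed with no knowledge of future arrivals.
    import heapq

    def list_schedule(jobs, m):
        loads = [0.0] * m                # machine loads kept in a min-heap
        for p in jobs:                   # jobs arrive one at a time
            heapq.heappush(loads, heapq.heappop(loads) + p)
        return max(loads)                # makespan of the on-line schedule

    jobs = [3, 1, 4, 1, 5, 9, 2, 6]
    m = 3
    makespan = list_schedule(jobs, m)
    # Two easy lower bounds on the off-line optimum: the average load
    # and the largest single job.
    opt_lower_bound = max(sum(jobs) / m, max(jobs))
    print(makespan, opt_lower_bound, 2 - 1 / m)   # on-line makespan, bound on OPT, guarantee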

Sometimes these algorithms will allow themselves to do poorly when the optimum is bad in order to achieve the best ratio. If we truly want to sell these algorithms to practitioners, should we focus on doing well in the situations they care about and only secondarily worry about worst-case performance?

Posted by Lance to Computational Complexity at 11/01/2005 08:43:00 PM
