[Computational Complexity] FOCS Wrap Up

      (Guest post from Rahul Santhanam)

      As promised, I did do a bit of experimenting this FOCS in terms of attending talks from other areas. Here's an encapsulation of the results:

      Cryptography - Yevgeny Vahlis spoke about his negative result with Boneh, Papakonstantinou, Rackoff and Waters on black-box constructions of identity-based cryptosystems (cryptosystems in which an arbitrary string can be used as a public key) from trapdoor permutations. ID-based cryptosystems have been extensively studied recently in the work of Dan Boneh and others. The security of the known constructions is proved in the random oracle model, and it would be interesting to base security on the same assumptions as those used in standard public-key cryptography. This result indicates that this might be too much to hope for, at least using standard techniques.
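
      For readers who have not seen the primitive before, here is a minimal sketch of the two interfaces involved; the class and method names are my own illustration, not taken from the paper. The black-box question is whether the four IBE operations can be realized while using a trapdoor permutation only as an opaque subroutine:

```python
# Illustrative interfaces only; names are hypothetical, not from the paper.
from abc import ABC, abstractmethod

class TrapdoorPermutation(ABC):
    """Standard public-key primitive: easy to evaluate, hard to invert
    without the trapdoor."""
    @abstractmethod
    def evaluate(self, public_key, x): ...
    @abstractmethod
    def invert(self, trapdoor, y): ...

class IdentityBasedEncryption(ABC):
    """In an IBE scheme, any string (an email address, say) acts as a public key."""
    @abstractmethod
    def setup(self):
        """Return (master_public_params, master_secret_key)."""
    @abstractmethod
    def extract(self, master_secret_key, identity: str):
        """Derive the decryption key for an arbitrary identity string."""
    @abstractmethod
    def encrypt(self, master_public_params, identity: str, message: bytes):
        """Encrypt to an identity; no per-user public key is needed."""
    @abstractmethod
    def decrypt(self, user_secret_key, ciphertext):
        """Decrypt with the key extracted for that identity."""
```

      The negative result says, roughly, that no construction implementing the second interface can be proved secure if it uses a trapdoor permutation only as a black box.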

      Learning - Adam Klivans gave an excellent talk on "Learning Geometric Concepts via Gaussian Surface Area" (with O'Donnell and Servedio). There's been a lot of interesting work recently on learning in the presence of random or adversarial noise. It's known that functions with low noise sensitivity, or even with low Gaussian noise sensitivity, can be learned efficiently in this setting, but these quantities are hard to bound in general. The current work resolves some open problems about learnability of geometric concept classes by using some deep mathematical results relating Gaussian noise sensitivity to Gaussian surface area, and working with the latter more tractable quantity.
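
      As a rough reminder of the quantities involved (recalled from memory, so the normalizations may differ from the paper): the Gaussian noise sensitivity of a concept measures how likely its label is to flip under a small Gaussian perturbation of the input, and the Gaussian surface area of the corresponding set bounds it from above. A sketch of the definitions and of the inequality being exploited:

```latex
% Gaussian noise sensitivity of f : R^n -> {-1,+1} at noise rate \epsilon:
% x ~ N(0, I_n) and y is a (1-\epsilon)-correlated Gaussian copy of x.
\[
  \mathrm{GNS}_{\epsilon}(f) \;=\; \Pr_{x,y}\bigl[\, f(x) \neq f(y) \,\bigr],
  \qquad y = (1-\epsilon)\,x + \sqrt{2\epsilon - \epsilon^{2}}\, z, \quad z \sim N(0, I_n).
\]
% Gaussian surface area of A = f^{-1}(+1), via Gaussian Minkowski content
% (A_\delta is the \delta-neighborhood of A, \gamma_n the standard Gaussian measure):
\[
  \Gamma(A) \;=\; \liminf_{\delta \to 0^{+}} \frac{\gamma_n(A_\delta \setminus A)}{\delta}.
\]
% The bound used in this line of work is, up to constant factors,
\[
  \mathrm{GNS}_{\epsilon}(f) \;\le\; O\!\left(\sqrt{\epsilon}\right) \cdot \Gamma(A),
\]
% so a class with small Gaussian surface area automatically has low Gaussian
% noise sensitivity, and the known noise-sensitivity-based learners apply to it.
```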

      Games with Entangled Provers - I went to a couple of talks on another currently hot topic: the power of two-prover games with entangled provers. A semi-definite programming formulation due to Tsirelson can be used to show that the optimal value can be computed efficiently for a very simple form of 2-prover games, known as XOR games. The first talk that I attended was given by Thomas Vidick on "Entangled Games are Hard to Approximate" (with Kempe, Kobayashi, Matsumoto and Toner) - he showed a reduction from the problem of approximating the value of 2-prover games for classical provers (known to be NP-hard) to the problem of approximating the value of 3-prover games for entangled provers. A complementary talk by Ben Toner on "Unique Games with Entangled Provers are Easy" (with Kempe and Regev) showed that for unique games, the optimal value can actually be approximated efficiently using a "quantum rounding" of a semi-definite programming solution. The most interesting open problem in this area seems to be to derive some sort of upper bound on the complexity of approximating the value for general games.
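
      For context, here is Tsirelson's characterization for XOR games, recalled from memory (normalizations may vary). In an XOR game the verifier samples a question pair (s,t) from a distribution pi, each prover replies with a single bit, and the payoff depends only on the XOR of the two answers, encoded below as a sign c_{st}:

```latex
% XOR game: questions (s,t) ~ \pi, one-bit answers, payoff sign c_{st} \in \{-1,+1\}.
% Tsirelson's theorem: the entangled value equals the optimum of the vector program
\[
  \omega^{*}(G) \;=\; \frac{1}{2} \;+\; \frac{1}{2}\,
  \max_{\|u_s\| = \|v_t\| = 1} \;\sum_{s,t} \pi(s,t)\, c_{st}\, \langle u_s, v_t \rangle ,
\]
% where the maximum is over real unit vectors u_s (one per question to the first
% prover) and v_t (one per question to the second prover). The objective is linear
% in the Gram matrix of these vectors, so the optimum can be computed to any desired
% accuracy in polynomial time by semidefinite programming.
```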

      Algorithms - I heard Dimitris Achlioptas speak on "Algorithmic Barriers from Phase Transitions" (with my colleague Amin Coja-Oghlan). This paper tries to explain why simple algorithms for constraint satisfaction problems on random instances fail in the region around the threshold, by showing that this "region of failure" coincides with the region where the structure of the solution space changes from being connected to being (essentially) an error-correcting code. It is rather intriguing that a computational fact, the success or failure of standard algorithms, is so closely tied to a combinatorial property of the solution space. Finally, I went to Grant Schoenebeck's talk on proving lower bounds for the Lasserre hierarchy of semi-definite programming relaxations of CSPs, which is interesting because a number of different algorithmic techniques, including the Arora-Rao-Vazirani formulation of Sparsest-Cut, can be implemented in the lower levels of the hierarchy. Grant's result uses a connection to the width complexity of resolution, which I found very surprising, but then my understanding of this area is rather naive...
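
      Returning to the Achlioptas-Coja-Oghlan result: to get a feel for the phenomenon that simple algorithms stop working well below the satisfiability threshold, here is a toy experiment one could run. It is only a sketch under my own assumptions (a bare-bones random-walk heuristic, small instances), not anything from the paper; the exact density at which a given heuristic gives out depends on the heuristic and the instance size, but the qualitative picture is that naive methods fail well before the 3-SAT threshold near clause density 4.27.

```python
# Toy experiment (illustrative only): success rate of a naive random-walk
# heuristic on random 3-SAT as the clause-to-variable ratio alpha increases.
import random

def random_3sat(n, alpha, rng):
    """Generate int(alpha*n) clauses, each on 3 distinct variables with random signs."""
    return [tuple(rng.choice([v + 1, -(v + 1)]) for v in rng.sample(range(n), 3))
            for _ in range(int(alpha * n))]

def satisfied(clause, assignment):
    """A clause is satisfied if some literal agrees with the assignment."""
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

def random_walk_solver(clauses, n, rng, max_flips=5000):
    """Start from a random assignment; while some clause is violated, pick a
    violated clause at random and flip a random variable occurring in it."""
    assignment = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assignment)]
        if not unsat:
            return True                      # found a satisfying assignment
        lit = rng.choice(rng.choice(unsat))  # random literal of a random unsat clause
        assignment[abs(lit) - 1] = not assignment[abs(lit) - 1]
    return False                             # gave up

if __name__ == "__main__":
    rng = random.Random(0)
    n, trials = 100, 10
    for alpha in [2.0, 2.5, 3.0, 3.5, 4.0, 4.2]:
        solved = sum(random_walk_solver(random_3sat(n, alpha, rng), n, rng)
                     for _ in range(trials))
        print(f"alpha = {alpha:.1f}: solved {solved}/{trials} instances")
```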

      All interesting stuff, but I'd actually prefer FOCS to be even broader, and representative of all areas of theory. If that requires multiple sessions, so be it. Currently, FOCS seems to be an algorithms, complexity and combinatorics conference, with papers from other areas filtering in unpredictably depending on the composition of the committee. It's very creditable that the standard of papers has remained consistently high (or even improved) over the years. But with several major sub-areas (semantics and verification, concurrency theory, database theory, theory of distributed computing, computational geometry, computational biology, etc.) represented barely or not at all, it's quite hard to still make the case that FOCS and STOC are absolutely central to theoretical computer science as a whole. I guess the question is: how much do we value FOCS and STOC being core conferences?

      --
      Posted By GASARCH to Computational Complexity at 10/31/2008 10:50:00 AM