[Computational Complexity] The Revenge of Parallelism
- Now that I sit in an engineering school, I see more applied recruiting talks. Many of them have a variation of this picture.
This picture represents the future of Moore's Law. The number of transistors in our computers continues to grow exponentially, but the clock speed is levelling off. What do we use these new transistors for? To put multiple CPUs on a single integrated circuit, known as a multicore machine. New chips from Intel have 2 or 4 cores, and the number of cores is expected to double every couple of years.
Multicores present interesting challenges for computer science, for example compiler researchers are trying to make the best use of multiple CPUs without having the user explicitly use parallelism in their code.
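To make the compiler challenge concrete, here is a minimal sketch (my illustration, not from the talks) of the kind of loop an auto-parallelizing compiler targets: the iterations are independent, so they can be split across cores without the programmer writing any parallel code. The split is done explicitly here with Python's standard `concurrent.futures`, standing in for what the compiler would arrange behind the scenes.

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # An independent, side-effect-free loop body: each iteration
    # touches only its own input, so iterations can run in any order.
    return x * x

def sequential(data):
    # What the programmer writes: an ordinary loop.
    return [work(x) for x in data]

def parallel(data, workers=4):
    # What an auto-parallelizing compiler would generate: the same
    # iterations, distributed across worker threads. (For CPU-bound
    # Python this is only illustrative; a compiler would use real
    # hardware threads on separate cores.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, data))

assert sequential(range(10)) == parallel(range(10))
```

The key property the compiler must verify is that no iteration reads a value another iteration writes; when that dependence analysis succeeds, the two versions compute identical results.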
Our theory community hasn't really responded to this new computing model (nothing much in STOC and FOCS, though SPAA 2008 has a special track on the topic). The theory isn't that interesting when you have only two or four cores, but what happens when we have millions on a chip? Do our old parallel models like the PRAM apply to multicore machines? There were hints of this in the comments to my PRAM post three years ago. Or perhaps we need new models.
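For readers who haven't seen the PRAM style of algorithm, here is a toy sketch (mine, not from the post) of a classic example: summing n numbers in O(log n) parallel rounds. Each pass of the loop stands for one synchronous PRAM step in which all active processors combine pairs simultaneously; the simulation below is sequential, but the round structure is the point.

```python
def pram_sum(values):
    # Simulate a PRAM computing the sum of n values in O(log n) rounds.
    cells = list(values)
    rounds = 0
    while len(cells) > 1:
        # One parallel round: processor i adds cells[2i] and cells[2i+1].
        nxt = [cells[i] + cells[i + 1] for i in range(0, len(cells) - 1, 2)]
        if len(cells) % 2:
            nxt.append(cells[-1])  # a leftover cell passes through unchanged
        cells = nxt
        rounds += 1
    return cells[0], rounds

total, rounds = pram_sum(range(1, 9))
assert total == 36 and rounds == 3  # 8 values -> log2(8) = 3 rounds
```

Whether anything like this synchronous, shared-memory idealization matches a chip with millions of cores (with caches, memory bandwidth limits, and no global clock) is exactly the modeling question the post raises.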
We study computational complexity in computer science rather than mathematics because, at least at some level, our models reflect real-world computing paradigms. As those paradigms change, complexity quickly adapts (randomized and quantum computation, for instance). Should multicore machines be another one of these paradigm changes that drives our theory?
Posted By Lance to Computational Complexity at 4/16/2008 05:46:00 AM