After a talk on derandomization at Midwest Theory Day, someone asked if those techniques could also be used in quantum computing.
In classical derandomization, under reasonable hardness assumptions, we can create a set S of poly(m) strings of length m such that every circuit of size m accepts with roughly the same probability whether we draw its input uniformly from S or from all strings of length m. Can we have an analog for quantum?
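Here is a toy sketch of what such a set S buys you. Everything here beyond the definition is an illustrative assumption: the `circuit` function is a stand-in predicate playing the role of an arbitrary size-m circuit, and S is just a random sample, whereas a real derandomization would construct S deterministically from a hard function.

```python
import itertools
import random

m = 12  # input length; the real result handles every circuit of size m

def circuit(x):
    # Stand-in for a size-m circuit: accepts iff the parity of the
    # first half of x equals the last bit. Any small predicate works.
    return (sum(x[: m // 2]) % 2) == x[-1]

# Hypothetical set S of poly(m) strings. Here it's a random sample,
# which only illustrates the property S must have; a hardness-based
# generator would produce it deterministically.
S = [tuple(random.randint(0, 1) for _ in range(m)) for _ in range(m ** 2)]

full = list(itertools.product([0, 1], repeat=m))
p_full = sum(circuit(x) for x in full) / len(full)
p_S = sum(circuit(x) for x in S) / len(S)

print(f"Pr[accept] over all 2^m strings: {p_full:.3f}")
print(f"Pr[accept] over S (|S| = m^2):   {p_S:.3f}")  # roughly equal
```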
There are many good parallels between probabilistic computing and quantum computing. Both seem to allow an exponential number of computations in parallel, combined in a limited way. Probabilistic computation is essentially multiplying exponentially large stochastic matrices, and quantum computation is multiplying exponentially large unitary (or just orthogonal) matrices.
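To make the matrix analogy concrete, here's a minimal one-bit/one-qubit sketch (assuming numpy): a stochastic matrix pushes forward a probability distribution, while a unitary pushes forward amplitudes whose squared magnitudes give the probabilities.

```python
import numpy as np

# One step of a probabilistic computation: a stochastic matrix
# (columns summing to 1) acting on a probability distribution.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
dist = np.array([1.0, 0.0])         # start in state 0 with certainty
print(P @ dist)                      # [0.5, 0.5]

# One step of a quantum computation: a unitary matrix acting on
# amplitudes; measurement probabilities are squared magnitudes.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)  # the Hadamard gate
amp = np.array([1.0, 0.0])            # start in |0>
print(np.abs(H @ amp) ** 2)           # [0.5, 0.5]
```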
If a quantum algorithm uses classical randomness, one can replace that randomness with measurements of appropriate qubits. So classical randomness does not give any additional power to quantum computation.
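A minimal sketch of that replacement, simulated with numpy: prepare a qubit, apply a Hadamard, and measure to get a fair coin. In this classical simulation the outcome necessarily comes from a classical RNG, which is exactly the point; on a real quantum machine the qubit itself would supply the randomness.

```python
import numpy as np

rng = np.random.default_rng()
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def quantum_coin():
    # Prepare |0>, apply a Hadamard, then measure: a fair coin flip
    # drawn from a qubit instead of a classical random source.
    # (Here the measurement is simulated with a classical RNG.)
    amp = H @ np.array([1.0, 0.0])
    probs = np.abs(amp) ** 2
    return rng.choice(2, p=probs)

flips = [quantum_coin() for _ in range(10000)]
print(sum(flips) / len(flips))  # close to 0.5
```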
But could one use some hardness assumptions to remove the quantumness from a quantum computation, leaving a deterministic algorithm? It's not clear we can pull out the quantumness, and even if we could, quantumness and randomness just work differently. Quantum algorithms get their power from interference: positive and negative amplitudes can add and cancel, something very difficult to simulate probabilistically or with any kind of pseudorandom generator.
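The standard two-Hadamard example shows the cancellation, again sketched with numpy: one Hadamard looks like a fair coin, but a second makes the two paths into |1> carry amplitudes +1/2 and -1/2 and cancel exactly. No stochastic matrix can do that; applying a fair-coin stochastic step twice leaves you just as mixed.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.array([1.0, 0.0])   # |0>

after_one = H @ state
after_two = H @ after_one
print(np.abs(after_one) ** 2)  # [0.5, 0.5] -- looks like a fair coin
print(np.abs(after_two) ** 2)  # [1.0, 0.0] -- back to |0> with certainty

# The probabilistic analogue never un-mixes: coin-flipping twice
# is still a fair coin.
P = np.array([[0.5, 0.5], [0.5, 0.5]])
print(P @ (P @ np.array([1.0, 0.0])))  # [0.5, 0.5]
```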
In short, no hope for dequantification. You'll just have to build that quantum computer if you want to run those quantum algorithms.