Q3. What calculation is Google planning to do, or has it already done, that’s believed to be classically hard?
So, I can tell you, but I’ll feel slightly sheepish doing so. The calculation is: a “challenger” generates a random quantum circuit C (i.e., a random sequence of 1-qubit and nearest-neighbor 2-qubit gates, of depth perhaps 20, acting on a 2D grid of n = 50 to 60 qubits). The challenger then sends C to the quantum computer, and asks it to apply C to the all-0 initial state, measure the result in the {0,1} basis, send back whatever n-bit string was observed, and repeat some thousands or millions of times. Finally, using its knowledge of C, the classical challenger applies a statistical test to check whether the outputs are consistent with the QC having done this.
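To make the protocol concrete, here is a toy sketch in Python/NumPy, with a tiny statevector simulator standing in for the quantum computer. The particular gate choices (Haar-random 1-qubit unitaries plus CZ entanglers), the qubit count, and the layer structure are illustrative assumptions, not Google’s actual circuit family:

```python
# Toy sketch of the challenge protocol: generate a random circuit,
# apply it to the all-0 state, and sample bitstrings by measurement.
import numpy as np

rng = np.random.default_rng(0)
n = 5  # tiny for illustration; the real experiment targets ~50-60 qubits

def random_1q_gate():
    # Haar-random 2x2 unitary via QR decomposition with phase fix
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    d = np.diag(r)
    return q * (d / np.abs(d))

CZ = np.diag([1, 1, 1, -1]).astype(complex)  # nearest-neighbor entangler

def apply_1q(state, gate, qubit):
    # Expose the target qubit as a tensor axis, contract with the gate
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_2q(state, gate, q1, q2):
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate.reshape(2, 2, 2, 2), psi,
                       axes=([2, 3], [q1, q2]))
    return np.moveaxis(psi, [0, 1], [q1, q2]).reshape(-1)

def run_circuit(depth):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0  # the all-0 initial state
    for _ in range(depth):
        for q in range(n):
            state = apply_1q(state, random_1q_gate(), q)
        for q in range(0, n - 1, 2):  # one layer of neighboring pairs
            state = apply_2q(state, CZ, q, q + 1)
    return state

state = run_circuit(depth=20)
probs = np.abs(state) ** 2   # Born-rule output distribution D_C
probs /= probs.sum()         # guard against floating-point drift
samples = rng.choice(2 ** n, size=1000, p=probs)  # "measure, repeat"
```

On a classical machine this brute-force simulation only scales to roughly 30-40 qubits, since the statevector has 2^n entries; that blowup is exactly what the experiment is designed to exploit.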
So, this is not a problem like factoring with a single right answer. The circuit C gives rise to some probability distribution, call it D_C, over n-bit strings, and the problem is to output samples from that distribution. In fact, there will typically be 2^n strings in the support of D_C—so many that, if the QC is working as expected, the same output will never be observed twice. A crucial point, though, is that the distribution D_C is not uniform. Some strings enjoy constructive interference of amplitudes and therefore have larger probabilities, while others suffer destructive interference and have smaller probabilities. And even though we’ll only see a number of samples that’s tiny compared to 2^n, we can check whether the samples preferentially cluster among the strings that are predicted to be likelier, and thereby build up our confidence that something classically intractable is being done.
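The check itself can be sketched too. One natural statistic, in the spirit of the “linear cross-entropy” benchmark used in this line of experiments, is the average ideal probability of the observed strings, rescaled so that a uniform sampler scores near 0 and a faithful sampler near 1. The Porter–Thomas stand-in for D_C below is an assumption for illustration; the real challenger would compute these probabilities from C by classical simulation:

```python
# Sketch of the challenger's statistical test: do the returned samples
# preferentially land on the strings D_C says are likelier?
import numpy as np

rng = np.random.default_rng(1)
n = 10
N = 2 ** n

# Stand-in for the ideal probabilities |<x|C|0...0>|^2: deep random
# circuits typically produce a Porter-Thomas (exponential) shape.
probs = rng.exponential(size=N)
probs /= probs.sum()

def xeb_score(samples, probs, n):
    # Rescaled mean ideal probability of the observed strings:
    # ~0 for uniform guessing, ~1 for sampling exactly from D_C.
    return 2 ** n * np.mean(probs[samples]) - 1

faithful = rng.choice(N, size=20000, p=probs)  # samples from D_C
uniform = rng.integers(0, N, size=20000)       # a "broken" sampler
```

A useful property of a statistic like this is that it degrades gracefully with noise, so a real-but-imperfect quantum computer can still pass at a score well below 1.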
So, tl;dr, the quantum computer is simply asked to apply a random (but known) sequence of quantum operations—not because we intrinsically care about the result, but because we’re trying to prove that it can beat a classical computer at some well-defined task.