comnetxr | 6 years ago
A source of this confusion is that we need to discuss space and time complexity simultaneously. In the algorithms for quantum simulation (and many other algorithms), there is a trade-off between space complexity and time complexity. ELI5: you don't have to store intermediate results if you can recompute them when you need them, but you may end up recomputing them a huge number of times.
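To make the ELI5 concrete, here is a toy Python sketch (nothing quantum about it, just an illustrative recurrence) contrasting the two extremes: cache every intermediate result, or recompute on demand.

    import functools

    def f_recompute(n):
        # Max-recomputation extreme: O(1) extra storage, but the
        # same subproblems get recomputed exponentially many times.
        if n < 2:
            return n
        return f_recompute(n - 1) + f_recompute(n - 2)

    @functools.lru_cache(maxsize=None)
    def f_memoized(n):
        # Max-memory extreme: every intermediate result is stored,
        # so each subproblem is computed exactly once.
        if n < 2:
            return n
        return f_memoized(n - 1) + f_memoized(n - 2)

Same answer either way; f_recompute runs in time exponential in n with constant memory, f_memoized runs in linear time but uses linear memory.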
For the quantum circuit, the standard method of computation gives memory complexity exponential in the number of qubits (store 2^N amplitudes for an N-qubit wavefunction) and time complexity D·2^N, i.e. linear in circuit depth and exponential in the number of qubits. For example, under the IBM calculation, 53 qubits at depth 30 use 64 PB of storage and a few days of calculation time, while 54 qubits use 128 PB and a week of calculation time. Adding a qubit doubles the storage requirements AND the time requirements.
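A back-of-the-envelope check on where those petabyte figures come from, assuming 8 bytes per amplitude (single-precision complex), which is what matches the numbers above:

    def statevector_bytes(n_qubits, bytes_per_amplitude=8):
        # 2^N complex amplitudes, each taking bytes_per_amplitude bytes.
        return 2 ** n_qubits * bytes_per_amplitude

    for n in (53, 54):
        print(n, "qubits:", statevector_bytes(n) // 2 ** 50, "PB")
    # 53 qubits: 64 PB
    # 54 qubits: 128 PB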
Under Google's estimate of the run time, they were using a space-time tradeoff. There is a continuous range of space-time tradeoffs: USE MAX MEMORY, as IBM does; MAX RECOMPUTATION (store almost no intermediate results, just add each contribution to the final answer and recompute everything); and a range of in-between strategies. While I don't know the precise complexities, the time-heavy strategies will have time complexity exponential in both N and D (roughly 2^(N·D)) and constant space complexity. That's why Google's estimate of the time complexity is so drastically different from IBM's.
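Here is a minimal sketch of the max-recomputation extreme, in the spirit of a Feynman path sum: it computes a single output amplitude by recursing back through the circuit, storing nothing but the recursion stack. (This is my own toy illustration of the idea, not Google's actual estimator, which used a more sophisticated hybrid.)

    import numpy as np

    def amplitude(bits, d, circuit, init):
        # <bits| U_d ... U_1 |init>, by recursing over the last gate:
        # constant memory, but time exponential in circuit depth.
        if d == 0:
            return 1.0 if bits == init else 0.0
        gate, qubits = circuit[d - 1]  # gate: 2^k x 2^k matrix on k qubits
        out = sum(bits[q] << i for i, q in enumerate(qubits))
        total = 0.0
        for inp in range(2 ** len(qubits)):  # sum over predecessor states
            prev = list(bits)
            for i, q in enumerate(qubits):
                prev[q] = (inp >> i) & 1
            total += gate[out, inp] * amplitude(tuple(prev), d - 1, circuit, init)
        return total

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    circuit = [(H, (0,)), (H, (0,))]          # H·H = identity
    print(amplitude((0,), 2, circuit, (0,)))  # ~1.0

Each gate fans the recursion out into 2^k branches, so a single amplitude costs roughly 2^(k·D) work while the memory stays tiny; that's the source of the "exponential in both N and D" time.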
Side note: IBM also uses a non-standard evaluation order for the quantum circuit, which exploits a trade-off between depth and number of qubits. In the regime of a large number of qubits but relatively small depth, you can again classically simulate using an algorithm that scales as N·2^D rather than D·2^N, using a method that turns the quantum circuit on its side and contracts the tensors that way. In the regime of N comparable to D, the optimal tensor contraction corresponds to neither running the circuit the usual way nor sideways, but something in between. None of these tricks fundamentally changes the ultimate exponential scaling, however.
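The regime argument falls straight out of comparing the two cost formulas; a quick sketch (these are just the asymptotic scalings from above, with constants dropped):

    def forward_cost(n, d):
        # Schrodinger-style: store 2^N amplitudes, apply D layers.
        return d * 2 ** n

    def sideways_cost(n, d):
        # Contract across the time axis instead: roles of N and D swap.
        return n * 2 ** d

    for n, d in [(53, 30), (30, 53)]:
        better = "sideways" if sideways_cost(n, d) < forward_cost(n, d) else "forward"
        print(f"N={n}, D={d}: {better} is cheaper")
    # For N comparable to D neither direction wins, and the optimal
    # contraction order is something in between.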
As an extra step, you could also run compression techniques on the tensors (e.g. SVD, throwing away small singular values), turning the space-time complexity tradeoff into a three-way space-time-accuracy tradeoff. You wouldn't expect too much gain from compression, and your accuracy would quickly go to zero if you tried to do more and more qubits or deeper circuits with constant space and time requirements. However, the _real_ quantum computer (as Google has it now, with no error correction) also has accuracy that goes to zero with larger depth and number of qubits. Thus, one can imagine that the next steps in this battle are as follows: if we say that Google's computer has not at this moment beaten classical computers with 128 PB of memory to work with, then Google will respond with a bigger and more accurate machine that will again claim to beat all classical computers. Then IBM will add in compression for the accuracy tradeoff and perhaps will again beat the quantum machine.
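The basic move behind that compression is a truncated SVD across a bipartition of the qubits. A minimal numpy sketch (note: a random state is nearly incompressible, so the truncation error here is large, which is exactly the accuracy cost being traded):

    import numpy as np

    def compress(matrix, keep):
        # Truncated SVD: keep only the 'keep' largest singular values.
        u, s, vt = np.linalg.svd(matrix, full_matrices=False)
        return (u[:, :keep] * s[:keep]) @ vt[:keep]

    rng = np.random.default_rng(0)
    psi = rng.normal(size=(2 ** 5, 2 ** 5))  # 10-qubit state, split 5|5
    psi /= np.linalg.norm(psi)
    approx = compress(psi, keep=8)           # keep 8 of 32 singular values
    print("truncation error:", np.linalg.norm(psi - approx))

In a real simulator this truncation would be applied inside a tensor-network representation (e.g. a matrix product state), layer by layer, rather than to the full wavefunction at once.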
So this back and forth can continue for a while - but the classical computers are ultimately fighting a losing battle, and the quantum machine will triumph in the end, as exponentials are a bitch.