Following the quantum supremacy controversy is very frustrating. It is constantly framed as a decisive step that changes everything, which is wrong. But there is also an opposite, equally wrong reaction: that any challenge to the claim proves quantum computing doesn't work at all.
The reality is that quantum computing technology has been improving at the same steady pace for decades, and quantum supremacy was recently invented as a concrete but not very useful goal that people could realistically aim for in the near term. That is a totally standard and legitimate way of guiding technological progress.
The problem is that quantum supremacy is vaguely defined (what does "infeasible on any classical computer" mean? which computers can you use, and for how long?) but it is presented to the public as a sharp boundary, which means every company in the field has a huge incentive to claim to be first. This IBM claim is not a fundamental objection: even if everything IBM says is correct, Google could still achieve quantum supremacy by IBM's own standards through incremental improvement, slapping on a few more qubits. So why bother disputing Google's claim at all? Because it will set Google back a few years, and that's enough time for IBM to make a claim of its own. That's why academics, who have no stake in which company gets the hype, will consider this whole episode rather distasteful.
Quantum supremacy is a big deal. It is goddamn experimental evidence against the Extended Church-Turing Thesis (ECT). If you never believed the ECT (all physicists, for example, seem to think it is obviously false), it may not matter to you, but it is still a serious claim.
Yes, Google probably can achieve quantum supremacy by slapping on a few more qubits. But those few more qubits have, in fact, not been slapped on yet. So quantum supremacy is very much not achieved, and Google should not claim that it is.
Quantum supremacy is the potential ability of quantum computing devices to solve problems that classical computers practically cannot. The weaker term "quantum advantage" refers to the potential to solve problems merely faster. In computational-complexity-theoretic terms, this generally means providing a superpolynomial speedup over the best known or best possible classical algorithm. The term was popularized by John Preskill, but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's and Richard Feynman's proposals of quantum computing.
The main takeaway here is figure 1 in this article: they show that for increasing circuit depth, computation time on a classical computer scales linearly. Google claims, on the other hand, that the classical calculation would scale exponentially. That claim is the basis for the graph in the Google blog [1], which seems to suggest that the quantum computer can easily reach points (such as qubits=50, cycles=25) that a classical computer could never reach. If IBM is right, that is not true. I would say their linearly scaling graph shows they are not cherry-picking their inputs.

[1] https://ai.googleblog.com/2019/10/quantum-supremacy-using-pr...
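The linear-in-depth point is easy to see in a toy Schrödinger-style simulator: once the full 2^n-amplitude state vector fits in storage, each additional cycle is just one more sweep over that vector, so classical cost grows linearly with depth (the exponential cost is in the qubit count, not the depth). A minimal illustrative sketch, not IBM's actual code:

```python
from math import sqrt

def apply_gate(state, gate, q):
    """Apply a 2x2 gate to qubit q of the state vector, in place."""
    mask = 1 << q
    for z in range(len(state)):
        if not z & mask:              # visit each amplitude pair once
            a0, a1 = state[z], state[z | mask]
            state[z]        = gate[0][0] * a0 + gate[0][1] * a1
            state[z | mask] = gate[1][0] * a0 + gate[1][1] * a1

def run_cycles(n, depth):
    """Run `depth` cycles of Hadamards on every qubit. Work per cycle is
    fixed at O(n * 2^n), so total time is linear in depth and
    exponential only in n."""
    h = [[1 / sqrt(2), 1 / sqrt(2)],
         [1 / sqrt(2), -1 / sqrt(2)]]
    state = [0j] * (2 ** n)
    state[0] = 1.0 + 0j               # start in |00...0>
    for _ in range(depth):
        for q in range(n):
            apply_gate(state, h, q)
    return state
```

For 53 qubits the 2^53-entry vector is what forces the spill to disk; adding cycles only adds sweeps, while adding qubits doubles the storage.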
There are many problems where the best known quantum algorithm is asymptotically better than the best known classical algorithm, but as far as I've heard, nobody has ever found a case where the best quantum algorithm can be proved better than the best possible classical algorithm. So no matter what problem you attack, there is always a danger of a better classical algorithm turning up.
Such a demonstration would prove nothing of importance, partly because the issue is not how fast a particular run takes, but how running time grows with the size of the problem. More significantly, the issue is whether the paper's argument for its original claim is sound, and that would be best addressed by showing where it goes wrong. For a brief and clear explanation, read this excellent post:
The 2.5 day run is for Summit, the world's #1 supercomputer, owned by Oak Ridge National Laboratory. Getting use of the entire machine for 2.5 days may take some doing.
IBM's head of research had a pretty confident statement the other day [1]:

"I’m convinced there are more quantum computers working here than the rest of the world combined, in this building," Dr. Gil said.

[1] https://www.nytimes.com/2019/10/21/science/quantum-computer-...
This has been a consistent problem with quantum computers: someone builds a quantum computer that can basically do one thing and compares it against general-purpose software not tuned for that problem. Once the software is tuned for the exact problem, it comes much closer to the quantum computer's performance. The same thing arises with D-Wave.
Also of note is Gil Kalai's updated take on the experiment [1]. The heart of the complaint is that the quantum computer's solution to this problem relies on a calibration process, which is done on a classical computer and requires orders of magnitude more resources than the quantum portion of the computation:
> The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classic computer that runs 1000 hours can compute something that requires a classic computer 10 hours.
EDIT: Just as a note, lest you dismiss Kalai as an outsider/crackpot, you can look at the researchers who show up in the comments on the blog post.

[1] https://gilkalai.wordpress.com/2019/10/13/the-story-of-poinc...
Crucial question: is the calibration a one-time effort, where the same calibration can be used to calculate any number of results within the calibrated space? In other words, can you amortize the calibration cost across 1,000 runs and come out an order of magnitude ahead?
If I understand his point correctly, it's not the calibration per se that is problematic, but whether the calibration means the device only performs for a small subspace of inputs, which raises the question of how large that subspace is.
Even if the calibration process "cheats", why should we care? It's not like being practical is the goal. After the calibration is done, you still have a device that can do something believed intractable on classical computers. If it is possible for a classical computer to get comparable results just from a long calibration process, then that means the cross entropy test is flawed/limited, which is a separate argument.
The linked paper shows the "classical system" is the Summit supercomputer [1], using 128 petabytes of disk. Still, it's remarkable that IBM's choice of a different simulation algorithm, one that trades memory for CPU usage, cuts the time from 10,000 years to only 2.5 days.

[1] https://www.olcf.ornl.gov/olcf-resources/compute-systems/sum...
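The 128 PB figure lines up with a back-of-the-envelope check: storing the full state vector of a 53-qubit circuit at 16 bytes per complex amplitude takes exactly 2^57 bytes, i.e. 128 pebibytes, which is why this approach leans on Summit's disk rather than RAM. A quick sanity check (assuming complex128 amplitudes):

```python
n = 53                          # qubits in Google's Sycamore circuit
amplitudes = 2 ** n             # entries in the full state vector
bytes_needed = amplitudes * 16  # 16 bytes per complex128 amplitude
print(bytes_needed == 2 ** 57)  # True: 2^53 * 2^4 bytes
print(bytes_needed / 2 ** 50)   # 128.0 pebibytes of storage
```

Every additional qubit doubles this figure, which is the sense in which "a few more qubits" would put the circuit back out of classical reach for this particular algorithm.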
Summary:
- Google overhyped their results against a weak baseline.
This seems to be commonplace in academic publishing, especially new-ish fields where benchmarks aren't well-established. There was a similar backlash against OpenAI's robot hand, where they used simulation for the physical robotic movements and used a well-known algorithm for the actual Rubik's Cube solving. I still think it's an impressive step forward for the field.
I'm a layman on this, but reading IBM's rebuttal, it seems they're saying "you can't claim a conventional supercomputer would take 10,000 years, because you haven't accounted for spilling to disk and other optimisations".
I understand that some of these optimisations might be on the processing logic, but doesn't this end up being like comparing apples to oranges?
I understand that there's also what seems to be technicalities around what is "quantum supremacy", but I assume that on a 'processor against processor' basis, this is still significant. Am I oversimplifying things?
No, the point is that spilling to disk lets the classical algorithm's running time scale linearly with circuit depth, so there is no exponential speedup. I elaborated on this in detail here: https://news.ycombinator.com/item?id=21334025
>We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.
and therefore
>Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.
The high order bit is that the IBM claims do not cast much of a shadow over the claims of quantum supremacy and still show the same exponential difference.
Seeing this published in Science is interesting in itself.
It's as if IBM and Google are, at a high level, working on different problems. IBM is focused on research and public discussion of the problem space; Google is focused on marketing, using fancy terms like "supremacy" that are easily misunderstood by the public.
Quote 1: "The Google group reiterates its claim that, in 200 seconds...."
Quote 2: " ...by tweaking the way Summit approaches the task, it can do it far faster: in 2.5 days."
Well, it's still 200 seconds vs 2.5 days, so I'd say the claim kinda stands. I know I would definitely buy the computer with 200 seconds vs the one with 2.5 days.
Not sure it's that clear cut, as per the blog post[0] linked elsewhere in this thread[1]:
> Recall that a quantum supremacy demonstration would be an experiment where a quantum computer can compute in 100 seconds something that requires a classical computer 10 hours. (Say.)
> The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classic computer that runs 1000 hours can compute something that requires a classic computer 10 hours.

[0] https://gilkalai.wordpress.com/2019/10/13/the-story-of-poinc...
[1] https://news.ycombinator.com/item?id=21334403
Nobody is denying that the quantum system performs the task faster. The question is whether it performs a task that can't feasibly be performed by a classical system. That is what the researchers are specifically referring to when they say "quantum supremacy."
2.5 days is feasible. 10,000 years is not feasible.
Quantum supremacy is more about quantum computers being able to do something that classical computers could never do (1). I think that's the heart of the controversy. I'm quite sure quantum has already proved faster for certain use cases (2).

(1) https://twitter.com/NatureNews/status/1186969282632134656?re...
(2) https://arxiv.org/abs/1904.05803
Part of this controversy is reminiscent of the Higgs boson "God particle" nonsense, where an unscientific catchphrase coined by scientists got picked up by the media and a misinterpretation overshadowed the real scientific issue.
I’m with them. My two problems when reading the original argument:
They rigged the benchmark to be something quantum-specific before declaring that classical computers couldn't do it. I get why. I'd just rather it be something both could do, with classical having great algorithms and the quantum one outperforming them many-fold. An old example is factoring. I'm sure there are other examples out there. Maybe try to simplify them for low-qubit circuits.
The biggest gripe: basing the argument on what classical computers can't do. "Can't" is an assumption that has been disproven about so many things so many times. Who knows what new algorithms people will come up with down the line, doing what we previously couldn't. I want an argument based on what can and has been done versus the new thing being done. I similarly preferred constructivist logic to classical, since the latter assumes things don't exist. The universe surprises me too often for that stance.
Another thing about these is the comparison to supercomputers. Me personally, I don’t care if the quantum computer outperforms supercomputers. I’m looking at practical QC like anything else: does it do the job more cost-effectively than alternative solutions? That cost includes hardware, any leasing/licensing, maintenance, specialists, etc. If it does, then they’ve achieved something for me. If it doesn’t, then QC was an implementation detail for a system that didn’t make the cut.
FPGA’s are already an example of how this might play out. They’re friggin amazing with all kinds of potential. I’d love to have a desktop loaded with them with libraries using them to accelerate my apps. Them being invented and improved wasn’t enough for that, though. The duopoly, esp supported by patents, has kept prices of high-performance FPGA’s way too high.
It would be better if they were priced more like flash memory with the synthesis software being more like mass-market A/V software. We’d see an explosion of use. Instead, the legal system allowed the inventors to keep competition and innovation at low levels to maximize profit. I expect the same with practical QC for at least 20 years. I’ll still be using a mix of NUMA, GPU’s and FPGA’s while early adopters mess around with quantum computers.
Your biggest gripe can't be solved. P vs PSPACE is a great unsolved problem in computer science (almost as big as P vs NP), and since a PSPACE machine can simulate any quantum computer (BQP ⊆ PSPACE), any resolution of your gripe would immediately resolve P vs PSPACE.
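The containment behind that claim (BQP ⊆ PSPACE) comes from the Feynman path sum: any single output amplitude can be computed by recursing over intermediate basis states, using memory polynomial in qubits and depth at the price of exponential time. A toy sketch for circuits of single-qubit gates, illustrative rather than an efficient simulator:

```python
from math import sqrt

def amplitude(circuit, x, y):
    """Compute <y| G_m ... G_1 |x> by summing over paths of basis states.
    `circuit` is a list of (2x2 gate, target qubit) pairs. Space is only
    O(depth) for the recursion stack; time is exponential in depth."""
    def go(j, z):
        if j == len(circuit):
            return 1.0 if z == y else 0.0
        gate, q = circuit[j]
        bit = (z >> q) & 1
        total = 0j
        for new_bit in (0, 1):          # branch over the qubit's next value
            amp = gate[new_bit][bit]    # <new_bit| G |bit>
            if amp:
                total += amp * go(j + 1, (z & ~(1 << q)) | (new_bit << q))
        return total
    return go(0, x)

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]
```

For example, two Hadamards on the same qubit cancel, so `amplitude([(H, 0), (H, 0)], 0, 0)` comes out as 1. The point is only that polynomial space always suffices classically, so proving a quantum device does something no polynomial-time classical machine can would separate P from PSPACE.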
IBM's algorithm scales approximately linearly in the number of qubits. So, you'd need more than a few more qubits...
If IBM can cast doubt by describing a 2.5 day classical run, why not just do that instead, and blow the claim out of the water?
There were a couple of responses (thanks) but I still don't understand - it looks like it's not as easy as they imply.
https://news.ycombinator.com/item?id=21334025
If you don't follow what is being said there, reposting your question will not help.
Google implied supremacy over all of the world's computers combined.
Google's post https://news.ycombinator.com/item?id=21332768
Scott Aaronson's post https://news.ycombinator.com/item?id=21335907
1. Actually perform the experiment per their approach, compare the results, and provide tangible proof in the coming days.
or
2. Retract their claims.
In either case, exciting times ahead.
[Edit: Yes, I read their published paper [1], which only claims the result is achievable and does not perform the same experiment.]

[1] https://arxiv.org/pdf/1910.09534.pdf