top | item 42386638


GilKalai | 1 year ago

Hi everybody, my post summarizes an ongoing five-year research project and four papers about the 2019 Google experiment. The timing of the post was indeed related to Google's Willow announcement and its fantastic septillion assertion. It is not clear why Google added a hyped, undocumented, fantastic claim to the announcement of nice published results about quantum error correction. I think that our work on Google's 2019 experiment provides useful information for evaluating Google's scientific conduct.


amirhirsch|1 year ago

Welcome to Hacker News, Gil! I’m a big fan of your work in complexity theory and have thought long and hard about the entropy/influence conjecture, revisiting it recently after Hao Huang’s marvelous proof of the sensitivity conjecture.

To answer your question about why the hyped, fantastic claim: as you must know, the people who fund quantum computing research almost certainly do not understand the research they are funding, and they need a steady stream of fantastic “breakthroughs” as feedback to justify writing the checks.

This has made QC research ripe for applied physicists who are skilled in the art of bullshitting about Hilbert spaces. While I don’t doubt the integrity of a plurality of the scientists involved, I can say with certainty that approximately all of the people working on quantum computing research would not take me up on my bet of $2048 that RSA-2048 will not be factored by 2048, yet would happily accept $204,800,000 to make arrays of quantum-related artifacts. Investors require breakthroughs, or the physicists will lose their budget for liquid gases, which certainly exceeds $2048.

While there might be interesting science discovered along the way, I think of QC a little like alchemy: the promise of unlimited gold attracted both bullshitters and serious physicists (Newton included) for centuries, but it eventually emerged from physical law that turning lead into gold does not scale. Similarly, it would be useful to determine the scaling laws for quantum computers. How large an RSA key is needed before factoring it in reasonable time would require a QC with more particles than the universe contains? Is 2048 bits good enough that we can shelve all the peripheral number-theory research in post-quantum cryptography? Let’s not forget the mathematicians too!
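The key-size question above can be put in rough numbers. A back-of-the-envelope sketch, using the textbook ~2n+3 logical-qubit estimate for Shor's algorithm, an O(n^3) gate count, and the GNFS asymptotic cost for classical factoring (all figures illustrative; real error-corrected physical-qubit counts are orders of magnitude higher):

```python
import math

# Rough resource comparison for factoring an n-bit RSA modulus.
# Quantum side: ~2n+3 logical qubits and ~n^3 gates (textbook estimates,
# ignoring error-correction overhead). Classical side: GNFS runs in
# exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)) operations.
def shor_logical_qubits(n_bits):
    return 2 * n_bits + 3

def shor_gates(n_bits):
    return n_bits ** 3

def gnfs_cost(n_bits):
    ln_n = n_bits * math.log(2)  # ln N for an n-bit modulus
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

for n in (1024, 2048, 4096, 8192):
    print(n, shor_logical_qubits(n), f"{shor_gates(n):.1e}", f"{gnfs_cost(n):.1e}")
```

The point of the sketch: the quantum cost grows only polynomially in key size, so if a scalable QC ever exists, growing keys buys little, and the "more particles than the universe" regime is never reached at practical key sizes; the classical cost, by contrast, grows subexponentially and explodes.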

sgt101|1 year ago

I think Shor's algorithm scales linearly, if it's possible to do fine-grained control of the most significant bits. Some people don't think that's a problem, but if it is, then growing keys will be an effective defense.

WhitneyLand|1 year ago

Would be interested to hear your response to Scott Aaronson’s comment:

“Gil’s problem is that the 2019 experiment was long ago superseded anyway: besides the new and more inarguable Google result, IBM, Quantinuum, QuEra, and USTC have now all also reported Random Circuit Sampling experiments with good results.”

GilKalai|1 year ago

I think I responded on Scott's blog, but I can respond again, perhaps from a different angle. I think that it is important to scrutinize one (major) experiment at a time.

We studied the Google 2019 claims, and along the way we developed tools that can be applied in further work and identified methodological problems that could be relevant in other cases (or, better, avoided in newer experiments). Of course, other researchers can study other papers.

I don't see in what sense the new results by Google, Quantinuum, QuEra, and USTC are more inarguable, and I don't know which IBM experiment Scott refers to. I also don't see why it matters for our study.

Actually, in our fourth paper there is a section about quantum circuit experiments that deserve to be scrutinized (a list that can now be supplemented with a few more), and I think we address all the examples Scott gives (except IBM) and more. (Correction: we do mention IBM's 127-qubit experiment; I forgot.)

supernewton|1 year ago

You know he's been responding directly on Scott Aaronson's blog, right?

EvgeniyZh|1 year ago

In 2019 you asserted [1] that attempts to create a distance-5 surface code would fail. Do you think you were wrong? If so, what was your mistake, and why do you think you made it? If not, what is the problem with Google's results? Have your estimates of the feasibility of quantum computers changed in light of this publication?

[1] https://arxiv.org/abs/1908.02499

noqc|1 year ago

I have a silly question, and I'm going to shamelessly use HN to ask it.

In Kitaev's construction of the high-purity approximation to a magic state, he starts from the assumption that we have a state which can be represented as the tensor product of n mixed states that are "close enough". I don't understand where this separability property comes from. My (very) naive assumption would be that there is some big joint state of which you have a piece, and the information I have about this piece is n of its partial traces, which are indeed n copies of the "poor man's" magic state.

Can I know more than that? There's lots of stuff in the preimage of these partial traces. Why am I allowed to assert that I have the nicest one?

Strilanc|1 year ago

Distillation will still work if the inputs are slightly entangled with each other or with other qubits.

I recommend just simulating the specific case you're worried about. It's only a 15-qubit circuit, so it's not at all expensive to check. You'll either see it working and stop worrying, or have an amazing concrete counterexample to publish.
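For reference, the headline error suppression the 15-to-1 protocol is designed to achieve (for product-state inputs) is easy to sketch numerically; whether slight entanglement between inputs spoils it is exactly what the full 15-qubit simulation suggested above would check. A minimal sketch using the textbook leading-order formula p_out ≈ 35·p_in³ (which ignores Clifford noise; the numbers are illustrative):

```python
# Leading-order behavior of 15-to-1 magic state distillation:
# each round maps the input error rate p to roughly 35 * p^3,
# so below threshold the error shrinks triply exponentially in
# the number of rounds.
def distill(p, rounds=1):
    for _ in range(rounds):
        p = 35 * p ** 3
    return p

# starting from a 1% faulty magic state:
for r in range(4):
    print(r, distill(0.01, r))
```

One round already takes a 1% error down to the 10^-5 range, and a second round to the 10^-12 range, which is why a few rounds of distillation suffice in most fault-tolerance schemes.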

ziofill|1 year ago

Can it be that he assumes you have some device that produces somewhat bad magic states and then you distill them into a better one? That would be the typical situation in practice.

sampo|1 year ago

> It is not clear why Google added [...] a hyped undocumented fantastic claim.

I think it's clear.

vitus|1 year ago

Hi! I am under the impression that you're one of the better-known skeptics of the practicality of QEC. And to my untrained eye, the recent QEC claim is the more interesting one of the two.

(I am inclined to ignore the claims about quantum supremacy, especially when they're based on random circuit sampling, where, as you pointed out, the claimed classical runtimes were orders of magnitude off because nobody cares about this problem classically, so there has not been much research effort into finding better classical algorithms. And of course, there's the problem of efficient verification, as Aaronson mentions in his recent post.)

I've seen a few comments of yours where you mentioned that this is indeed a nice result (predicated on the assumption that it's true) [0, 1]. I worry a bit that you're moving the goalposts with this blog post, even as I can't fault any of your skepticism.

I work at Google, but nowhere close to quantum computing, and I don't know any of the authors or anyone who works on this. But I'm in a space where I feel the impacts of the push for post-quantum crypto (e.g., bloat in TLS handshakes), and I have historically pooh-poohed the "store now, decrypt later" threat model that Google has adopted -- I have assumed that any realistic attacks are at a minimum decades away (if they ever come to fruition), and that very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.

If I take the Willow announcement at face value (in particular, the QEC claims), should I update my priors? In particular, how much further progress would need to be made for you to abandon your previously stated skepticism about the general ability of QEC to continue to scale exponentially? I see a mention of one-in-a-thousand error rates on distance-7 codes, which seems tantalizingly close to what's claimed by Willow, but I would like to hear your take.

[0] https://gilkalai.wordpress.com/2024/08/21/five-perspectives-...

[1] https://quantumcomputing.stackexchange.com/questions/30197/#...
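The exponential-suppression claim at issue here can be sketched in a few lines. This assumes the standard surface-code scaling, where the logical error rate drops by a constant factor Λ for every two steps of code distance; Λ ≈ 2 is roughly the per-step suppression Google reported for Willow, while the distance-3 anchor value below is purely illustrative:

```python
# Surface-code scaling sketch: eps_L(d) ~ A * (p / p_th)^((d+1)/2),
# i.e. each increase of the code distance d by 2 divides the logical
# error rate by a factor Lambda = p_th / p.
def logical_error(d, lam=2.0, eps_d3=3e-3):
    # anchor at an assumed distance-3 logical error rate, then divide
    # by lam once per step of 2 in distance
    steps = (d - 3) // 2
    return eps_d3 / lam ** steps

for d in (3, 5, 7, 9, 11):
    print(d, logical_error(d))
```

With these (illustrative) numbers, distance 7 lands near the one-in-a-thousand figure mentioned above; the open question the skeptics raise is whether Λ stays constant (or grows) as d keeps increasing, rather than degrading from correlated noise.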

pera|1 year ago

> If I take the Willow announcement at face value (in particular, the QEC claims) [...]

Considering that Google's 2019 claim of quantum supremacy was, at the very least, severely overestimated (https://doi.org/10.48550/arXiv.1910.09534) I would wait a little bit before making any decisions based on the Willow announcement.
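The fidelity estimator at the center of these RCS claims, linear cross-entropy benchmarking (XEB), is simple to sketch. Here is a toy model: a Haar-random unitary stands in for a random circuit, and noise is modeled as emitting a uniformly random bitstring with probability 1−F (all illustrative, not Google's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4            # qubits
dim = 2 ** n

def random_unitary(d, rng):
    # Haar-random unitary via QR decomposition of a Ginibre matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    ph = np.diag(r)
    return q * (ph / np.abs(ph))

U = random_unitary(dim, rng)
ideal_probs = np.abs(U[:, 0]) ** 2   # output distribution from |0...0>

def sample(F, shots):
    # noisy device: ideal sample with probability F, uniform otherwise
    ideal = rng.choice(dim, size=shots, p=ideal_probs)
    uniform = rng.integers(dim, size=shots)
    return np.where(rng.random(shots) < F, ideal, uniform)

def xeb(samples):
    # linear XEB: 2^n * mean(p_ideal(x_i)) - 1; roughly proportional
    # to F (for small n the ideal value sits a bit below 1)
    return dim * ideal_probs[samples].mean() - 1

for F in (1.0, 0.5, 0.1):
    print(F, round(xeb(sample(F, 200_000)), 3))
```

Note what the estimator requires: computing `ideal_probs[x]` for each observed bitstring, i.e. a classical simulation of the circuit. That is why verification stops being efficient exactly in the regime where supremacy is claimed, which is the verification problem referenced in the comments above.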

marcinzm|1 year ago

> very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.

30-year-old skeletons in people’s closets can be great blackmail material for gaining leverage.

edit: As I understand it, this is a popular way for state actors to "flip" people: threaten them with blackmail unless they provide confidential information or take some action.

giancarlostoro|1 year ago

> very little (if any) of the user data we process today will be relevant to a nation-state actor in, say, 30 years.

The NSA is most likely interested in all data, let's be honest. At a bare minimum, in all foreign-actor data.

sgt101|1 year ago

Thank you for your work and perspective - it's important that science is carefully reviewed and that doing the review is well regarded.

nickpsecurity|1 year ago

I appreciate you doing peer review of their claims. I have a few questions about the supremacy claim.

Are there good write-ups on the random circuit sampling problem that would help someone implement it or get started?

What are the top classical algorithms, especially working implementations, for this problem?

Have the classical implementations been similarly peer reviewed to assess their performance?