
Google claims breakthrough in quantum computer error correction

128 points | aarghh | 3 years ago | ft.com

47 comments

[+] texaslonghorn5|3 years ago|reply
https://www.nature.com/articles/s41586-022-05434-1

https://www.nature.com/articles/d41586-022-04532-4

The FT article is a bit fluffy. Here are links to the paper and briefing in Nature.

[+] cubefox|3 years ago|reply
Relevant quote from the briefing, which indicates they have not yet reached error correction takeoff velocity:

> It is known in the field that, when the physical error rate of qubits is high, the probability of logical error increases with increasing system size, whereas when physical error rates are low, increasing the system size leads to the desired exponential suppression of logical error. We feel that we are currently in a ‘crossover’ regime between these scenarios, in which increasing system size initially suppresses the logical error per cycle, but would, with increasing size, later increase error rates. Therefore, it is imperative that we continue to improve both qubit performance and system scale.

So this result is not yet the major breakthrough that would be required to build a scalable quantum computer.

[+] sebzim4500|3 years ago|reply
Like all claimed QC breakthroughs, best to wait for Scott Aaronson's response on his blog.
[+] hiddencost|3 years ago|reply
Publishing in Nature is a pretty strong signal the results are real. It's entirely possible Scott was one of the anonymous submission reviewers.

I look forward to hearing Scott contextualize it.

[+] abdullahkhalids|3 years ago|reply
Let me break this down.

* In an error correction code, you encode a logical bit/qubit into a set of physical bits/qubits.

* Error-correcting codes come in families, parameterized by an integer distance d. Incrementing d leads to a code with more physical bits/qubits, n, but also the ability to correct errors on a larger number of bits/qubits, j.

* If the error probability on each qubit is p, then on a code of size n, there will be on average n*p errors. It should be immediately clear that if p is small, then n*p<j and the code can correct errors that occur, but if p is large then n*p>j and there will be errors that the code can't correct.

* If the code corrects any physical errors that do occur, then there won't be a logical error (value of logical bit/qubit unchanged), otherwise there will be a logical error. In summary, given a p, you have to pick the right sized code from your family so that n*p<j, and you don't incur any logical errors.

* Another way of saying the same thing is that if p in your hardware becomes small enough, then as you increase your distance d, your logical error rate will go down.

These guys are claiming that their p is small enough that the distance-5 code has a smaller logical error rate than the distance-3 code, which is indeed a breakthrough (if correct). No one has done something like this before, to my knowledge.
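The threshold behavior described above can be sketched with a classical analogue. This is a simple distance-d repetition code with majority-vote decoding, not the surface code used in the paper, but it shows the same qualitative effect: below the threshold error rate, increasing d suppresses the logical error rate exponentially; above it, increasing d makes things worse.

```python
from math import comb

def logical_error_rate(d, p):
    """Logical error rate of a distance-d repetition code under
    majority-vote decoding: the code fails when more than
    (d-1)//2 of the d physical bits flip (each independently
    with probability p)."""
    t = (d - 1) // 2  # number of correctable errors
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(t + 1, d + 1))

# Below threshold (p = 0.01): growing d suppresses logical errors.
for d in (3, 5, 7):
    print(f"d={d}, p=0.01: {logical_error_rate(d, 0.01):.3e}")

# Above threshold (p = 0.6): growing d increases the logical error rate.
for d in (3, 5, 7):
    print(f"d={d}, p=0.6:  {logical_error_rate(d, 0.6):.3f}")
```

The "crossover regime" quoted from the Nature briefing is the region around the threshold, where the d=5 code only barely beats the d=3 code.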

# Criticism

* The results are limited to storage errors. All they are doing is initializing the logical qubit in some initial state and repeatedly doing error-correction on it, to simulate a qubit at rest while the computation is happening elsewhere on some hypothetical other logical qubits. They have not attempted to do any experiments with applying gates to the qubits. Those will likely yield a much larger error rate. In particular, they are only testing a single logical qubit here, but the interesting gates would be two-qubit gates between two logical qubits, which are necessary to do any non-trivial computation.

* The experiment is limited to 25 cycles of error detection. This means that their experiment shows that their device could hypothetically implement a depth-25 circuit. As you might realize, useful circuits have depth many orders of magnitude larger, so this continues to be a toy device.

The above is what immediately springs to mind, but I am sure the actual experts will soon chime in. My subjective opinion is that the technical achievements of just running the experiment are very impressive. It is a long journey to useful QCs, but this is a nice milestone along the way.

[+] Strilanc|3 years ago|reply
In the surface code the gates are all variations on idling. First you idle one way, then you idle another way for a bit, and the result is a gate. (The technical term is lattice surgery.) Because of that, it would be extremely surprising if the gates had notably different error rates from storage. Idling is already a very busy state of affairs.
[+] throw_pm23|3 years ago|reply
In 15 years as a TCS researcher I have followed one heuristic: if something has the word "quantum" in it, I ignore (articles, surveys, conference talks, projects, etc.).

This is more from personal ignorance/laziness and convenience than strong conviction: you cannot follow all areas, you have to make some choices how you spend your time, and this is one particular area that is easy to delimit. (EDIT: and if it does turn out to be a dead-end, I can be glad I made the right call.)

At times this policy has been quite hard to follow, and I may reconsider it sometime, but so far it has served me well.

[+] wilsynet|3 years ago|reply
It would be one thing to say yeah, you don't think the field is interesting and that it's likely to be a dead end.

But that’s not what you’re saying. Instead you are saying that you don’t follow it because well, you can’t follow everything. And I agree with that. But in that case, you could go to every single HN topic and post “I don’t follow <insert topic here>, I’m just posting to tell you that I can’t follow everything. So far it has served me well to not follow everything.”

Which doesn’t seem particularly useful or contributory in any way.

[+] dekhn|3 years ago|reply
Within the field of CS, this is not a bad approach.

However, if you work in: biology, physics, or chemistry: quantum is a frequently used word. It covers far more than QC, entanglement, coherence, tunnelling, or any other crazy bits of quantum. It forms the basis for our atomic theory of matter and has led to extraordinary engineering and science projects.

I used to be excited by DNA computing but it became clear quickly that regardless of any stated advantages of DNA computing, they were tiny compared to the modern working digital computer and the global infrastructure dedicated to improving it year after year (even after Moore's law putters out).

[+] osigurdson|3 years ago|reply
Interesting that the top-rated comment on a post is about how not reading the post (or any posts in the category) is optimal from a time-management perspective.

Ironic that writing this comment as well as reading it is considered a worthwhile time expenditure.

[+] meghan_rain|3 years ago|reply
> In 15 years as a TCS researcher I have followed one heuristic: if something has the word "quantum" in it, I ignore (articles, surveys, conference talks, projects, etc.).

Same for "crypto" here :-)

[+] DebtDeflation|3 years ago|reply
Can it factor 21 without resorting to tricks like precompilation?
[+] Strilanc|3 years ago|reply
No, that's years in the future at least. Factoring 21 without any compilation tricks requires doing a modular exponentiation under superposition. The best known way to do that requires two registers of workspace (10 qubits), plus a teensy bit of breathing room (2 qubits), so call it a dozen logical qubits. If all compilation tricks are banned, even the ones that are reasonable for huge numbers but work a bit too well for small numbers such as using small lookup tables to fuse some of the multiplications together, the overall computation takes on the order of 10000 gates. If you require it to work in one or two shots (otherwise even random coin flipping will work), then those gates need to have error rates below one in a hundred thousand and your storage needs error rates per round below one in ten million.

The experiment being announced here is testing different ways of storing one error-corrected qubit, to show that making it bigger can make it better. On an absolute scale, that logical qubit is still not good enough. It needs to be made even bigger. And there needs to be a dozen of them instead of one. And it's barely breaking even; you want strong gains in quality from adding quantity, not just minor gains. This means the underlying physical qubits still need more improvement. There's a lot to do!

Disclaimer: am on google quantum team, opinions are my own.
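The error budget in the parent comment can be checked with back-of-envelope arithmetic. Assuming independent gate failures, a circuit of on the order of 10,000 gates (the figure given above) succeeds only if every gate executes without error, which is why per-gate error rates near one in a hundred thousand are needed for a one-or-two-shot run:

```python
def circuit_success_probability(gate_error, n_gates):
    """Probability that every gate in the circuit executes
    without error, assuming independent gate failures."""
    return (1 - gate_error) ** n_gates

N_GATES = 10_000  # order-of-magnitude figure from the comment above

for p in (1e-4, 1e-5, 1e-6):
    print(f"gate error {p:g}: success ~ {circuit_success_probability(p, N_GATES):.3f}")
```

At a gate error of 1e-4 the whole-circuit success probability drops to roughly 1/e, while at 1e-5 it stays above 90%, consistent with the "below one in a hundred thousand" requirement.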

[+] f0e4c2f7|3 years ago|reply
A topic I've been interested in lately is quantum machine learning[0]. Qubits as neurons make sense as a natural architecture to me. Though, as I understand it, we are still somewhat early in terms of having a useful number of qubits.

While reading about the actual advantages of quantum machine learning over classical machine learning something that came up was a type of error correction you can do in quantum computing that would make backpropagation faster.

Does anyone who understands this better know if this breakthrough might theoretically apply for that application (in the future, with more qubits of course)?

[0] https://en.m.wikipedia.org/wiki/Quantum_machine_learning

[+] wwarner|3 years ago|reply
Maybe I missed it, but the FT article claimed the improved error rate was due to improved cooling and better components rather than through better error correction. That's not what the title says…
[+] fdgsdfogijq|3 years ago|reply

[deleted]

[+] hiddencost|3 years ago|reply
Nature is the premier science journal on the planet. The purpose of publishing in a journal like Nature is to share progress and methods with your peers, and to be able to get credit for an idea or an observation.

The audience is scientists in the field, not consumers, not the public, and not investors.

Articles in Nature tend to get hype for two reasons: 1) it's the most prestigious journal in the world 2) they tend to only accept papers about important, novel progress in a field.

If you can't contextualize the information in Nature, I suggest you wait for the popular press to digest it for you.

(There are admittedly some critiques to be made of Nature, e.g. that it biases towards flashy results, but no one in their right mind would prefer to be published by PNAS.)

[+] version_five|3 years ago|reply
There are lots of things to criticize Google for; doing fundamental research isn't one of them. I hope the recent chatbot hype and market pressure aren't going to force them to change strategy and drop more R&D in favor of products, though that seems to be what happens historically.
[+] alphabetting|3 years ago|reply
They're easily a decade away from releasing a quantum computing product. How would you like them to release something here?
[+] mikece|3 years ago|reply
Claim != Proof
[+] Strilanc|3 years ago|reply
This result is notably more open than most papers. The circuits executed and measurement data collected are available on Zenodo: https://zenodo.org/record/6804040 . You can do your own analysis of the claim.
[+] hiddencost|3 years ago|reply
Publishing in the most prestigious journal on the planet implies it received some of the highest scrutiny peer review around.