top | item 42375123

crishoj | 1 year ago

Take the announcement with a grain of salt. From German physicist Sabine Hossenfelder:

> The particular calculation in question is to produce a random distribution. The result of this calculation has no practical use.
>
> They use this particular problem because it has been formally proven (with some technical caveats) that the calculation is difficult to do on a conventional computer (because it uses a lot of entanglement). That also allows them to say things like "this would have taken a septillion years on a conventional computer" etc.
>
> It's exactly the same calculation that they did in 2019 on a ca 50 qubit chip. In case you didn't follow that, Google's 2019 quantum supremacy claim was questioned by IBM pretty much as soon as the claim was made, and a few years later a group said they did it on a conventional computer in a similar time.

https://x.com/skdh/status/1866352680899104960

l33tman|1 year ago

TBH you need to take the YouTube influencer Sabine Hossenfelder with an even bigger grain of salt. She has shifted to mainly posting clickbait YouTube content over the past few years (unfortunately, since she was interesting to listen to earlier).

RCS (random circuit sampling) is a common benchmark with no practical value, as the blog announcement itself states several times. It's used because if a quantum computer can't do RCS, it can't do any other calculation either.

The main contribution here seems to be what they indeed put first: the error correction scaling.

Closi|1 year ago

I think reducing her to 'youtube influencer' is unfair - she holds a doctorate in theoretical physics, specializing in quantum gravity, and produces science content for YouTube. She knows the field well enough to comment.

She doesn't even say that this isn't a big leap. She says it's very impressive - just not the sort of leap that means there are now practical applications for quantum computers, and that the comparison to a conventional computer deserves a pinch of salt, given the 2019 paper with a similar benchmark.

pixelsort|1 year ago

As a counterpoint, in one of her recent videos she reviewed a paper and completely tore it to shreds, as the math was apparently full of absolute nonsense.

This was a fascinating watch, and not the kind of content that is easy to find. Besides videos like that one, I enjoy her videos as a fun way to absorb critical takes on interesting science news.

Maybe she is controversial for being active and opinionated on social media, but we need more science influencers and educators like her, who offer context and interpretation instead of just repeating the news.

falleng0d|1 year ago

I have noticed the shift toward aggressive (clickbaity) thumbnails, but I don't think the quality of the content has changed.

And I can't blame her for adopting this trend; in many cases it's the difference between surviving on YouTube or not these days.

zipy124|1 year ago

Just because she is a YouTuber doesn't diminish her other credentials. Just as she is incentivised to do clickbait, so are actual scientific communication outlets such as Nature, and the more clickable they are, the more downloads and citations they will acquire. Incentives change content but don't directly detract from someone's expertise. See: the fact that most universities now publish some lectures on YouTube; it doesn't make the content any less true.

cowl|1 year ago

She was right the first time, when they announced this in 2019, and this time even they admit it in their own press release:

> Of course, as happened after we announced the first beyond-classical computation in 2019, we expect classical computers to keep improving on this benchmark

As IBM showed, their estimates of classical computer time are pulled out of their a**es.

perching_aix|1 year ago

They explicitly cover all of these caveats in the announcement.

Problems that benefit from quantum computing have, as far as I'm aware, their own formal complexity class, so it's also not like you have to consider Sabine's or anyone else's thoughts and feelings on the subject - it is formally demonstrated that such problems exist.

Whether real-world applications arrive or not is something you can speculate about for yourself. You really don't need to borrow the equally unsubstantiated opinion of someone else.

eigenket|1 year ago

The formal class is called BQP, in analogy with the classical complexity class BPP. BQP contains BPP, but there is no proof that it is strictly bigger (such a proof would imply P != PSPACE). There are problems in BQP we expect are not in BPP, but it's not clear if there are any useful problems in BQP and not in BPP, other than essentially Shor's algorithm.
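For reference, the known containments between these classes (a standard textbook result; whether any of the inclusions is strict remains open) can be written as:

```latex
\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{PSPACE}
```

This chain is why separating BPP from BQP would force P != PSPACE: if P = PSPACE, everything in between collapses to the same class.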

On the other hand, it's actually not necessary to have a superpolynomial quantum advantage in order to have some quantum advantage. A quantum computer running in quadratic time is still (probably) more useful than a classical computer running in O(n^100) time, even though both are technically polynomial. An example of this is classical algorithms for simulating quantum circuits with bounded error, whose runtime is like n^(1/eps) where eps is the error. If you pick eps = 0.01 you've got a technically polynomial classical algorithm, but its runtime is going to be n^100, which is likely very large.
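To make the gap concrete, here is a back-of-the-envelope sketch (function names and the quadratic runtime are illustrative assumptions, not any specific algorithm) comparing the nominal step counts at a small input size:

```python
# Compare a quadratic-time quantum algorithm against a "technically
# polynomial" classical simulation with runtime n^(1/eps), eps = 0.01,
# i.e. n^100.

def quantum_steps(n: int) -> int:
    # hypothetical quadratic-time quantum algorithm
    return n ** 2

def classical_steps(n: int, eps: float = 0.01) -> int:
    # classical simulation with runtime ~ n^(1/eps)
    return n ** round(1 / eps)

n = 10
print(quantum_steps(n))    # 100 steps
print(classical_steps(n))  # 10^100 steps: polynomial on paper, hopeless in practice
```

Both counts are polynomial in n, but one is already far beyond the number of atoms in the observable universe at n = 10.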

vctrnk|1 year ago

Not to defend Google, but they end up saying much the same:

> The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.

chvid|1 year ago

It would be interesting to see what "standard benchmark computation" was used and what its implementation would look like in a traditional programming language.

Does anyone know?
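For what it's worth, a toy classical version of random circuit sampling (the RCS benchmark mentioned above) can be sketched with a NumPy statevector simulator. This is a hypothetical illustration only; the gate set, layout, and depth are assumptions, not Google's actual circuit. The point it makes is structural: the statevector needs 2^n complex amplitudes, which is exactly why the classical simulation blows up as qubit counts grow.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    # Haar-ish random 2x2 unitary via QR decomposition of a Gaussian matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases so the result is uniform

def apply_1q(state, gate, qubit, n):
    # apply a single-qubit gate to one axis of the 2 x 2 x ... x 2 tensor
    state = state.reshape([2] * n)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    # controlled-Z: flip the sign of amplitudes where both qubits are 1
    state = state.reshape([2] * n)
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

def rcs(n_qubits=4, depth=8, shots=5):
    # start in |00...0>, alternate random 1-qubit layers with CZ entanglers,
    # then sample bitstrings from the final distribution
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0
    for _ in range(depth):
        for q in range(n_qubits):
            state = apply_1q(state, random_su2(), q, n_qubits)
        for q in range(0, n_qubits - 1, 2):
            state = apply_cz(state, q, q + 1, n_qubits)
    probs = np.abs(state) ** 2
    return rng.choice(2 ** n_qubits, size=shots, p=probs)

print(rcs())
```

The experiment then compares the sampled bitstring distribution against the ideal one (via the cross-entropy benchmarking fidelity); at ~100 qubits the 2^n-sized statevector above no longer fits in any classical memory, which is the basis for the "septillion years" style claims.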