top | item 42378407

The Google Willow Thing

763 points | Bootvis | 1 year ago | scottaaronson.blog | reply

458 comments

[+] nsxwolf|1 year ago|reply
Man, reading this makes me feel so small. Being a "software engineer" consuming APIs and updating database rows seems laughably childish compared to whatever the hell it is I just read. I can't even imagine why I should bother trying to understand it. It's completely inaccessible. Only an elite few get to touch these machines.
[+] brailsafe|1 year ago|reply
> I can't even imagine why I should bother trying to understand it.

Well, maybe you should just try for the hell of it and see how far you get? Becoming fit seems impossible to a morbidly obese 45-year-old, and it is if that person's expectations are unreasonable; but if they make them more reasonable and break the goal down into manageable routines, they can get somewhere eventually.

Find some papers, fill many gaps, dedicate a few years in your spare time, in 6 months you'll be 6 months closer than you were.

Whether there's a reason or not, idk, it's something to do, be curious. Don't forget that by dedicating their life to something, they're naturally not dedicating their life to other things, things that you might be able to do, like climbing mountains, making pizza, or coming up with witty banter in social situations.

[+] Cthulhu_|1 year ago|reply
Same, but kind of; I'm so far removed from higher up engineering stuff like quantum stuff, nuclear fusion stuff, LHC stuff, astronomy stuff, AI stuff that I just scan it, grab a coffee, raise an eyebrow and go "Interesting", then go about my day and wonder what the fuck I'm supposed to be doing at work again. Oh right, implement a component, same thing I've been doing for the past decade or so.

Thing is, I don't know how to get out without on the one side giving up my comfort zone (well paid, doable work), and on the other side gaining responsibility / being looked at as an expert in any field (that's where impostor syndrome and responsibility aversion come in). I really need a holiday lol.

[+] hellojebus|1 year ago|reply
This was me yesterday after reading the official Willow release.

Spent yesterday afternoon and this morning learning what I could. I'm now superficially familiar with quantum coherence, superposition, and phase relationships.

In other words, you got this. Now I gotta learn linear algebra. brb.

[+] simpaticoder|1 year ago|reply
We are all small in every domain in which we are not an expert, which is approximately all of them. The "computer" domain has expanded wildly over the last 50 years to include so many specialties that even within it one cannot possibly acquire expertise in everything. And of course "computers" do not include (although they do impact!) the vast majority of domains.

If you want to go deeper into quantum computing, I can highly recommend Scott Aaronson's own book, "Quantum Computing since Democritus"[0]. Although I have a background in physics and math, I found his style lively and engaging, with truly unique and compact recapitulations of things I already knew (two come to mind: his description of Cantor's diagonalization argument, and the assertion that QM is the natural consequence of "negative probabilities" being real. The latter is a marvelous insight that I've personally gotten a lot of use out of).

It's also useful to understand the boundary of what quantum computing is. At the end of the day, what we'll see are "QaaS" APIs that give us the ability to, for example, factor products of large primes. You won't need to know Shor's algorithm or the implementation details; you'll just get your answer exponentially faster than the classical method. I would not expect desktop quantum computers, languages designed for them, or general user software designed to run on them. (Of course, eventually someone will make Doom run on one, but that's decades in the future.)

[0] https://www.alibris.com/booksearch?mtype=B&keyword=quantum+c...

[+] benreesman|1 year ago|reply
I’d urge you to not feel small.

First of all, the formalism/practice gap is real: taking API calls and updating a database correctly has a mountain of formalism around it, and it is not easy to get right! Communicating sequential processes, distributed systems theory, and a bunch of other topics carry a huge formalism. It is also the case that many (most?) working software engineers have internalized much of that formalism: they “play by ear” rather than read sheet music, but what matters is whether it sounds good.

Second, whether it’s quantum computing or frontier machine learning or any other formalism-heavy topic? It’s eminently possible to learn this stuff. There’s a certain lingering credentialism around “you need a PhD” or whatever, I call BS: this stuff is learnable.

Keep hacking, keep pushing yourself on topics you’re passionate about, but don’t consign yourself to some inferior caste. You’re just as likely as the next person to be the next self-taught superstar.

[+] rhubarbtree|1 year ago|reply
The theory of quantum computation is accessible with a good understanding of linear algebra. Anyone working in machine learning or computer graphics should feel encouraged by that, and take a look at the theory.

Quantum error correction is one of those “wow” moments, like Euler's identity; it is worth making the effort to get there.
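To make that concrete, here is a minimal sketch of the linear-algebra view (my own illustration in NumPy, nothing more): a qubit is a unit vector and a gate is a unitary matrix.

```python
import numpy as np

# A qubit state is a unit vector in C^2; gates are unitary matrices.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0
probs = np.abs(psi) ** 2  # Born rule: measurement probabilities
print(probs)
```

If you can follow that matrix multiply, you've paid most of the entry fee; the rest of the theory is tensor products of the same idea.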

[+] vishnugupta|1 year ago|reply
A few years ago I made peace with the fact that my space of ignorance is humongous and will only get exponentially bigger. Even in the domain of my work which is software engineering. It liberated me from the pressure or burden of having to learn or know everything and enabled me to focus on things that I truly like to pursue.

I’ve made a list of 4-5 things I want to be extremely good at in 10 years compared to where I’m today. Now I just spend time on those. I occasionally wander into something new just for the sake of diversion.

[+] genewitch|1 year ago|reply
They buried the lede: Google doesn't have 2 qubits to rub together, 100%. 105 "qubits" make a "single" qubit after coalescing or whatever. I'm really annoyed because I've kinda followed this since the mid-90s, and this is the first time I am hearing that "it'll probably take millions of physical qubits to crack 256 bit".

To me the whole endeavor smells like a bait and switch or something. I remember about 10 years ago Canada or someone had at least a few hundred qubits, if not close to 1000 of them, but these were physical qubits and don't represent anything, really. Google's 105 finally makes a "fast enough" single qubit, or at best half of a pair.

[+] random3|1 year ago|reply
As hard as it may seem to you to tackle that, it's harder to convince others like you that tackling it can be like child's play. Not just QM/QC (which, btw, is being successfully taught to high schoolers) but any "advanced" topic. I hope we'll be able to look back and laugh at how backwards education was "back in the day", and how dumb people were to think that some were an "elite few", while the reality is that the "elite few" were the "lucky few" who were not deprived of learning to think, either by having the right people around them or the right context to find it by themselves.
[+] UltraSane|1 year ago|reply
When Corridor Digital was analyzing the really impressive time-spaghetti effect in Loki season 2, they said "there are two kinds of CGI artists: the kind that use the buttons and the kind that program the buttons."
[+] kmarc|1 year ago|reply
You captured very well my sentiment. Also same feelings for AI.

I'm wondering if it's time for me to switch professions and give up compsci / software altogether.

[+] numpad0|1 year ago|reply
It's also way harder to make money doing "not laughably childish" stuff. The more accessible and human-connected the work gets, the more likely people are to recognize it and pay you. People criticize yes-men getting rewarded, but you have to be nearly clinically insane to recognize the value of a no-machine like a partial prototype quantum supercomputer.
[+] weatherlite|1 year ago|reply
> I can't even imagine why I should bother trying to understand it

Why should you? I agree with your sentiment: super advanced quantum physics is probably out of reach for 99% of the population (I'm estimating here, but I think it's reasonable given the capacity of the physics PhDs who can actually understand this stuff at a deep level). You can probably make the effort to understand something about what's going on there, but it will be very superficial. Doing advanced quantum physics takes a huge amount of effort and an incredible capacity for learning complex things. And even the advanced physics guys don't and can't understand a bunch of very elementary things about reality, so it's not as if the feeling of not understanding stuff ever goes away.

[+] TacticalCoder|1 year ago|reply
> Being a "software engineer" consuming APIs and updating database rows ...

> Only an elite few get to touch these machines.

But lately many can run quite a lot of AI models at home. Doesn't require too crazy of a setup.

Why not build something software fun at home that doesn't involve a DB? Maybe using some free AI model?

I did an experiment lately: automatically "screenshot" a browser, then ask an AI to find the URL and say whether the URL and site looked like a phishing attempt or not. Fun stuff (and it works).

I tried installing one of these "photo gallery" apps in a Docker container (where you can put all your family/travel pics and let anyone on your LAN [or on the net] browse them). I saw some of these have "similarity search" features. I also saw that SAM / SAM2 (Meta's Segment Anything Model) was plenty quick: some people are using these to analyze video frames in real time. So I was thinking about sending all my family pictures through SAM2 (or a similar model: I saw some modified SAM2 to make it even faster) and then augmenting the "similarity search" by using the results of SAM2. For example, finding all the pictures featuring a "pool", etc.

And why limit myself to pictures? I could do family vids too: "Find all the vids where that item can be seen".

Possibilities at the moment seem endless: times are exciting if you ask me.
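Roughly what I have in mind for the augmented search, as a hypothetical sketch (the vectors would come from SAM2 or whatever model; here they're hand-made toy counts of detected objects):

```python
import numpy as np

# Hypothetical sketch: represent each photo by a vector of detected-object
# counts (toy class order: [pool, person, dog, car]), then rank photos by
# cosine similarity against a query vector.
def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

photos = {
    "pool_party.jpg": np.array([2.0, 1.0, 0.0, 0.0]),
    "beach.jpg":      np.array([1.0, 2.0, 0.0, 0.0]),
    "parking.jpg":    np.array([0.0, 0.0, 0.0, 3.0]),
}

query = np.array([1.0, 0.0, 0.0, 0.0])  # "show me pool pictures"
ranked = sorted(photos, key=lambda k: cosine_sim(photos[k], query), reverse=True)
print(ranked)
```

The same ranking trick would work for video, with one vector per frame or per clip.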

[+] compumetrika|1 year ago|reply
I haven't tried this yet myself, but have you tried plugging it into GPT or Claude or Perplexity and asking Qs? I've made some progress on things this way, much faster than I would have the usual way. Apply the usual precautions about hallucinations etc. (and maybe do the "ask multiple AIs the same thing" thing). We don't yet have a perfect tutor in these machines, but on balance I've gained from talking to them about deep topics.
[+] ericmcer|1 year ago|reply
Some people say higher education is a privilege, but a few times during college while grinding out difficult classes it felt more like a huge burden.

Being part of a highly educated elite group like that has some huge benefits, but they have also shackled themselves to an insanely specialized and highly difficult niche. I can't imagine the stress they are under and the potential for despair when dedicating years to something with so much uncertainty.

[+] gerdesj|1 year ago|reply
Start with the handy precis that Mr A leaves at the top in yellow. You can drop that into conversation right now with some confidence! Mr A has quite some clout hereabouts let alone elsewhere, so that seems reasonable.

You can't know everything but knowing what you don't know and when to rely on someone else to know what you don't know and to confidently quote or use what they know that you don't know, is a skill too.

"Critical thinking", and I suspect you do know how to do that. Whilst this blog post might be somewhat impenetrable, it still might be useful to you, even as just general knowledge. Use your skills to determine - for you and you alone - whether it is true, false, or somewhere in between.

Besides, a mental work out is good for you!

[+] matthewdgreen|1 year ago|reply
For what it’s worth, Scott spent the last few years working for OpenAI because he wanted to do something much more applied than quantum information theory. As an applied scientist I’m aghast: he’s one of the few people who actually gets to “touch the firmament of the Universe,” why would you ever give that up for mere applied science (even science as interesting as whatever goes on inside OpenAI) :) But as a human being I understand it. The grass is always greener somewhere else, and doing tangible things is sometimes more fun.

TL;DR: whatever you’re doing, there’s probably someone who wishes they were doing it instead.

[+] sourcepluck|1 year ago|reply
People feel like this about domains they don't know all the time. Don't allow your ignorance of an area to trick you into thinking you can't learn that area. It's just new to you!

Source: teaching beginners piano for years. Of course sheet music looks like gobbledygook, until you've spent a bit of time learning the basic rules!

[+] bradleyjg|1 year ago|reply
The problem it solved would take a septillion years to do on a conventional computer but no one other than a quantum researcher cares about that problem.

How about solving a problem someone that's not a quantum researcher would care about? Give me traveling salesman with n=10. Or factor a 10 digit number. Something.

Until then quantum computers are in the same category as commercial fusion. Long on “breakthroughs”, zero on results.

Look at cancer researchers for a nice contrast. The annual number of "breakthrough that could cure cancer!1!" announcements has dropped to near zero, while steady, real progress is being made all the time.

[+] munchler|1 year ago|reply
The argument in favor of the Everettian multiverse (“where else could the computation have happened, if it wasn’t being farmed out to parallel universes?”) seems illogical to me. Aren't these parallel universes running the same computation at the same time, and thus also "farming out" part of their computations to us? If so, it's a zero-sum game, so how could there be an overall performance gain for all the universes?
[+] bambax|1 year ago|reply
> Having said that, the biggest caveat to the “10^25 years” result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

I don't understand that part, can someone explain? There should be plenty of problems that take a long time to solve, but are trivial to verify? Like for example factoring extremely large numbers that are the product of a few very large primes? Maybe not on the order of 10^25 years, but still?
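For factoring specifically, the asymmetry really is as trivial as it gets (toy sketch, my own illustration): however long finding the factors takes, checking a claimed factorization is a single multiplication. My reading of the quoted passage is that random circuit sampling simply isn't such a problem: no comparably cheap classical check is known for it.

```python
# Toy illustration of solve-vs-verify asymmetry: checking N = p * q is one
# multiplication, no matter how hard finding p and q was. (Tiny numbers here;
# real RSA-style moduli are hundreds of digits.)
def verify_factorization(n, factors):
    prod = 1
    for f in factors:
        prod *= f
    return prod == n and all(f > 1 for f in factors)

N = 2021
print(verify_factorization(N, (43, 47)))   # 43 * 47 == 2021
print(verify_factorization(N, (3, 673)))   # 3 * 673 == 2019, not 2021
```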

[+] joak|1 year ago|reply
The hardware is progressing, but we have an issue: we don't have algorithms to run on quantum computers. Besides Shor's algorithm, useful for breaking RSA, we have nothing.

Just vague ideas like: it could be useful for quantum simulations or optimisation or maybe ...

If tomorrow we had a fully working quantum computer, what would we run on it? We are in a vacuum.

The only hope is a breakthrough in quantum algorithms. Nothing in sight, not much progress on this side.

Oh yes, Zapata Computing, the best-funded company in quantum algorithms, just went under this year.
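For context on the one algorithm we do have: the quantum part of Shor is only the period finding; everything around it is classical number theory. A toy sketch of that skeleton (the period is brute-forced here, which is exactly the exponentially slow step a quantum computer would replace):

```python
from math import gcd

# Classical skeleton of Shor's algorithm on a toy case (N = 15, a = 7).
N, a = 15, 7
r = 1
while pow(a, r, N) != 1:
    r += 1                      # quantum period finding would go here

# With r even and a^(r/2) != -1 (mod N), these gcds give nontrivial factors.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)  # 4 3 5
```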

[+] imranq|1 year ago|reply
Summary: it's a real result. The cool part is that with more qubits, the logical qubit seems to live longer rather than shorter; the bad part is that the results are not explicitly verifiable, only through extrapolation.
[+] r33b33|1 year ago|reply
Let's talk about things that actually matter - where to invest in post-quantum world?

I'll keep this short.

- Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

- Hypothesis: Accelerating advancements in tech and AI could lead to quantum supremacy arriving sooner than the 2030s, contrary to expert predictions.

- Legacy banking systems, being centralized, could transition faster to post-quantum-safe encryption by freezing transfers, re-checking processes, and migrating to new protocols in a controlled manner.

- Decentralized cryptocurrencies face bigger challenges: hard forks are difficult to coordinate across a decentralized network.

- Transitioning to quantum-safe algorithms could lead to longer transaction signatures and significantly higher fees, eroding trust in the system.

- If quantum computers compromise current cryptography, tangible assets (e.g., real estate, stock indices) may retain more value compared to digital assets like crypto.

Thoughts?

[+] almostgotcaught|1 year ago|reply
Literally none of this is correct.

> - Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

billions of years you say? Just what kinds of "computing tasks" we talkin about here?

[+] NooneAtAll3|1 year ago|reply
> Let's talk about things that actually matter

I'm all ears

> where to invest

and like that, you lost me

[+] machina_ex_deus|1 year ago|reply
Before invoking parallel universes, how about comparing the system to nature's mind-boggling number of particles in the macroscopic world? A single gram contains ~10^23 ≈ 2^76 particles. Google's random circuit sampling experiment used only 67 qubits, which is still nine qubits (a factor of ~500 in state space) short of 76. I wonder why, since the chip had 105 qubits and the error correction experiment used 101 qubits.

Did Google's experiment encounter problems when trying to run RCS on the full 105 qubits device?

Before saying that the computation invoked parallel universes, first I'd like to see that the computation couldn't be explained by the state being encoded classically by the state of the particles in the system.

[+] zh3|1 year ago|reply
Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible - yet the universe "computes" the correct result in real time. It feels like there is a huge gap between what actually happens and what can be done with a computer (even a quantum one).
[+] NooneAtAll3|1 year ago|reply
> I wonder why, the chip had 105 qubits and the error correction experiment used 101 qubits.

I wonder why, byte has 8 bits and the Hamming error correction code uses 7 bits.

oh right - that's because *the scheme* requires 3-7-15-... bits [0] and 7 is the largest that fits

Same with surface error correction - it's just the largest number in a list. No need for conspiracies. And no connection to manufacturing capabilities, which determine qubits on a single chip

[0] https://en.wikipedia.org/wiki/Hamming_code
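To make the Hamming side of the analogy concrete, here's a quick sketch of Hamming(7,4) (classical, and only for the analogy; the surface code differs in detail):

```python
import numpy as np

# Hamming(7,4): encode 4 data bits into 7, correct any single flipped bit.
# Codeword layout (1-indexed positions): p1 p2 d1 p4 d2 d3 d4.
G = np.array([
    [1, 1, 0, 1],   # p1 = d1 + d2 + d4
    [1, 0, 1, 1],   # p2 = d1 + d3 + d4
    [1, 0, 0, 0],   # d1
    [0, 1, 1, 1],   # p4 = d2 + d3 + d4
    [0, 1, 0, 0],   # d2
    [0, 0, 1, 0],   # d3
    [0, 0, 0, 1],   # d4
])
H = np.array([      # parity checks; column i is i written in binary
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def encode(data4):
    return (G @ data4) % 2

def correct(word7):
    s = (H @ word7) % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])  # 1-indexed position of the error
    fixed = word7.copy()
    if pos:
        fixed[pos - 1] ^= 1
    return fixed

code = encode(np.array([1, 0, 1, 1]))
corrupted = code.copy()
corrupted[2] ^= 1              # flip one bit "in transit"
corrected = correct(corrupted)
print(corrected.tolist() == code.tolist())
```

The 3-7-15-... sizes fall out of that column structure: a code with r parity checks covers exactly 2^r - 1 positions.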

[+] aeternum|1 year ago|reply
>it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

This claim makes little sense. There are many problems that are much easier to verify than to solve. Why isn't that approach ever used to validate these quantum computing claims?

[+] urbandw311er|1 year ago|reply
> No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits

I laughed at this. If I understood more than literally 2 words of that, then yes - no doubt I would ask about that.

[+] devit|1 year ago|reply
Where's the performance on common useful tasks?

What's the largest number it can factor using Shor's algorithm? What's the largest hash it can compute a pre-image for using Grover's algorithm?

[+] dataflow|1 year ago|reply
Dumb question: can someone explain the following?

Imagine a ball falling on the ground.

Simulating the O(10^23) atoms in each one with a classical computer would take (say) 10^23 times the amount of work of simulating a single atom. Depending on the level of detail, that could easily take, you know, many, many years...

We don't call the ball a supercomputer or a quantum computer just because it's so much more efficient than a classical computer here.

I presume that's because it can't do arbitrary computation this quickly, right?

So in what way are these quantum computers different? Can they do arbitrary computations?

[+] JKCalhoun|1 year ago|reply
What does quantum computing need to move forward? Will just throwing a lot of money at the thing allow it to scale? Or are there fundamental problems blocking it that require new physics or new material sciences?
[+] nuancebydefault|1 year ago|reply
>> But for anyone who wonders why I’ve been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We’re now deeply into the unverifiable regime that I warned about.

Can anybody explain to me why it is hard to find a problem that can be solved only by a basic quantum computer within a short timespan and can be easily verified by a normal computer? I thought there were so many algos out there for which one direction is fast and the reverse takes ages.

[+] de6u99er|1 year ago|reply
IMHO, we're still a long way from anything truly useful. The problem Google used to demonstrate quantum supremacy feels more like a glorified random number generator. Even if quantum computers can generate results faster, processing and storing the data still takes a lot of time. It’s hard not to draw a parallel with claims about "instantaneous quantum communication," where entangled particles appear to defy the speed of light — it seems impressive at first, but the practical value remains unclear.
[+] LikeBeans|1 year ago|reply
In simple terms, if I understand quantum computing (and please correct me if I'm wrong), the big benefit is parallel computing at a massive scale, whereas classical computing is serial in nature. If yes, likely both methods are useful. But a very useful use case for quantum computing is AI training to create the models. Currently that consumes a lot of GPUs, but QC has a nice chance to impact such a use case. Did I get it right?
[+] victor22|1 year ago|reply
Will I be attacked for thinking this is at least fishy? Or are they just being ultra secretive?

They never talk about what this computer is actually doing.

[+] r33b33|1 year ago|reply
Can someone just give it to me straight: should I sell my crypto positions and move to stock indices and real estate? Yes or no?

No nuance, just yes or no.

[+] LittleTimothy|1 year ago|reply
No.

This is progress towards quantum computing, but nowhere near progress towards a real practical quantum computer that could break Bitcoin's algorithms. If progress continues it could be an issue in the future; check back in next time Google publishes a paper.

[+] kimchidude|1 year ago|reply
Asking the real questions here.

I got that ‘Google has been talking about Willow for ages, this isn’t new’ blah blah blah. The problem is the public only started talking about it yesterday.

[+] Mistletoe|1 year ago|reply
No, won’t all your stocks and financial information be gone also?

Also, Willow can’t even factor 15 = 5×3; you are good for a very long time.

[+] sgt101|1 year ago|reply
no

not for the next 5 years for sure.