top | item 47317739

The “JVG algorithm” only wins on tiny numbers

83 points | jhalderm | 21 days ago | scottaaronson.blog

48 comments

[+] MathMonkeyMan|21 days ago|reply
The title of this post changed as I was reading it. "It looks like the 'JVG algorithm' only wins on tiny numbers" is a charitable description. The article is Scott Aaronson lambasting the paper and shaming its authors as intellectual hooligans.
[+] Strilanc|21 days ago|reply
Agree. Scott is exactly correct when he just straight calls it crap.

It's inaccurate to say it wins on small numbers because on small numbers you would use classical computers. By the time you get to numbers that take more than a minute to factor classically, and start dreaming of quantum computers, you're well beyond the size where you could tractably do the proposed state preparation.

[+] measurablefunc|21 days ago|reply
Scott Aaronson is the guy who keeps claiming quantum supremacy is here every year so he's like the proverbial pot calling the kettle black.
[+] kittikitti|21 days ago|reply
While the claim that one can "precompute the x^r mod N's on a classical computer" sounds impractical, there is a subset of problems where this might be valid. According to computational complexity theory, there's a complexity class called BQP (bounded-error quantum polynomial time).

Shor's algorithm is part of BQP. Is the JVG algorithm part of BQP, even though it utilizes classical components? I think so.

I believe that the precomputation step is the leading term in the algorithm's time complexity, so it isn't technically of lower complexity than Shor's. If I had to speculate, there will be another class in quantum computational complexity theory that accommodates precomputation on classical computers.

I welcome the work, and after a quick scroll through the original paper, I think there is a great amount of additional research that could be done in this computational complexity class.

[+] amluto|21 days ago|reply
There is a genuinely interesting complexity class called BQP/poly, which is pronounced something like “bounded-error quantum polynomial time with classical advice” (add some more syllables for a complete pronunciation).

The JVG algorithm is not a high quality example of this or really anything else. If you think of it as “classical advice”, then it fails, because the advice depends on the input and not just the size of the input. If you think of it as precomputation, it’s useless, because the precomputation involved already fully solves the discrete log problem. And the JVG paper doesn’t even explain how to run their circuit at respectable sizes without the sheer size of the circuit making the algorithm fail.

It’s a bit like saying that one could optimize Stockfish to run 1000x faster by giving it an endgame table covering all 16-or-fewer-piece positions. Sure, maybe you could, but you also already solved chess by the time you finish making that table.
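A toy sketch of the first point above (my own illustration, not a construction from the paper): if the "advice" is allowed to depend on the input itself rather than only its length, the advice can simply be a lookup table of precomputed answers, and the "algorithm" does no work at all.

```python
# Illustrative sketch: why BQP/poly requires advice to depend only on the
# input *length*. Advice keyed by the input itself trivializes everything,
# because the advice can just be the answer.

def solve_with_input_dependent_advice(n: int, advice: dict[int, int]) -> int:
    # The "advice" table already did all the work (here: factoring every n
    # in advance); this function is a glorified dictionary lookup.
    return advice[n]

# Building this table means solving the whole problem classically up front:
advice = {15: 3, 21: 3, 35: 5}
print(solve_with_input_dependent_advice(35, advice))  # 5
```

The same objection applies to the JVG precomputation: generating the table is at least as hard as the problem the quantum circuit was supposed to solve.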

[+] adgjlsfhk1|21 days ago|reply
JVG isn't in BQP. It's exponential time (i.e., worse than factoring without a quantum computer at all). It takes the only step of Shor's algorithm that is faster to run on a quantum computer and moves it to a classical computer.
[+] kmeisthax|21 days ago|reply
I mean, considering that no quantum computer has ever actually factored a number, a speedup on tiny numbers is still impressive :P
[+] dehrmann|21 days ago|reply
I didn't get the quantum hype last year. At least with AI, you can see it do some impressive things with caveats, and there are bull and bear cases that are both reasonable. The quantum hype train is promising the world, but compared to AI, it's at the linear regression stage.
[+] adgjlsfhk1|21 days ago|reply
The problem is that it's an exponential slowdown on large numbers.
[+] Tyr42|21 days ago|reply
Hey hey, 15 = 3*5 is factoring.
[+] guy4261|21 days ago|reply
> (yes, the authors named it after themselves) The same way the AVL tree is named after its inventors, Georgy Adelson-Velsky and Evgenii Landis... Nothing peculiar about this imho.
[+] apnorton|21 days ago|reply
This might not be entirely obvious to people outside of academia, but the vast majority (I only weaken a claim of "totality" to guard against unknown instances) of entities in the sciences that bear the names of humans do so because other people decided to call them by those names.

From another view, Adelson-Velsky and Landis called their tree algorithm "an algorithm for the organization of information" (or, rather, they did so in Russian --- that's the English translation). RSA was called "a method" by Rivest, Shamir, and Adleman. Methods/algorithms/numbers/theorems/etc. generally are not given overly specific names in research papers, in part for practical reasons: researchers will develop many algorithms or theorems, but a very small proportion of these are actually relevant or interesting. Naming all of them would be a waste of time, so the names tend to be attached well after publication.

To name something after oneself requires a degree of hubris that is looked down upon in the general academic community; the reason for this is that there is at least a facade (if not an actual belief) that one's involvement in the sciences should be for the pursuit of truth, not for the pursuit of fame. Naming something after yourself is, intrinsically, an action taken in the seeking of fame.

[+] johncarlosbaez|21 days ago|reply
Georgy Adelson-Velsky and Evgenii Landis were not the ones who named their tree the "AVL tree".

In my "crackpot index", item 20 says:

20 points for naming something after yourself. (E.g., talking about the "The Evans Field Equation" when your name happens to be Evans.)

[+] abound|21 days ago|reply
Same with RSA and other things. I think the author's point is that slapping your name on an algorithm is a pretty big move (practically, you can only do it a few times in your life before it gets too confusing), so it's a gaudy thing to do, especially for something illegitimate.
[+] croes|21 days ago|reply
Named after != named by