
Mathematicians clear hurdle in quest to decode primes

251 points | akeck | 4 years ago | quantamagazine.org

73 comments

gavagai691 | 4 years ago
Two inaccuracies in the article. For purposes of simplicity, the author writes "But the Lindelöf hypothesis says that as the inputs get larger, the size of the output is actually always bounded at 1% as many digits as the input." But this is a case where simplification goes too far. What Lindelöf says is that the size of the output is always bounded by ε% as many digits as the input, for ANY (arbitrarily minuscule) ε > 0.

Second, the subtitle "Paul Nelson has solved the subconvexity problem..." is strange. The subconvexity problem, for a given L-function, is to lower the percentage described above from 25% to "any positive number"; in other words, to bridge the gap between the convexity bound (which is "trivial") and Lindelöf (a consequence of GRH). The only way that the statement "Paul Nelson has solved THE subconvexity problem..." could maybe be accurate is if Nelson proved the Lindelöf hypothesis for "all" L-functions. Which is far from the case. (What makes subconvexity so interesting, as Nelson says in the article, is that it is a problem where you can make partial progress towards a particularly important consequence of GRH. And Nelson's result is exactly that: partial progress.) More accurate would be "Paul Nelson has made significant progress on the general subconvexity problem."
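In standard notation (for the Riemann zeta function itself, as one concrete example of an L-function), the three bounds being discussed are usually written as follows; the exponents 1/4, 1/4 − δ, and ε correspond to the 25%, "less than 25%", and "ε%" digit counts above:

```latex
% Convexity (the "trivial" bound -- the 25% in the article's digit analogy):
\zeta\left(\tfrac{1}{2} + it\right) \ll_{\varepsilon} (1 + |t|)^{1/4 + \varepsilon}

% Subconvexity: any fixed improvement in the exponent:
\zeta\left(\tfrac{1}{2} + it\right) \ll (1 + |t|)^{1/4 - \delta}
\quad \text{for some fixed } \delta > 0

% Lindelöf hypothesis: the exponent can be made arbitrarily small:
\zeta\left(\tfrac{1}{2} + it\right) \ll_{\varepsilon} (1 + |t|)^{\varepsilon}
\quad \text{for every } \varepsilon > 0
```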

civilized | 4 years ago
These slips from Quanta are puzzling. Quanta normally seems to be more meticulous about accuracy, and the author is a veteran.

The attraction of Quanta has always been that its writers seemed to have an intrinsic passion for math, treating it as more than just a gee-whiz topic to gloss over in between the latest panicked missives about politics and how cellphones are destroying our children. One of its most delightful qualities has been the consistent willingness to take a few extra sentences to explain even advanced concepts in a manner that is basically technically correct.

Certainly the article remains far better than the average mass media STEM writing, but Quanta should take greater care to keep their quality pristine. They have been utterly unique and have set the example for everyone else.

kevinventullo | 4 years ago
I’m confused by your second paragraph. The article describes the subconvexity problem for a given L-function as lowering the bound from 25% to x% for some x < 25. That is, 25% is the “convexity bound” and the existence of such an x is a “subconvexity bound.”

My understanding is that Nelson has indeed done this for every L-function, although the exact x varies with the L-function. How is that not solving the subconvexity problem?

ascar | 4 years ago
> What Lindelof says is that the size of the output is always bounded by ε% as many digits as the input, for ANY (arbitrarily miniscule) ε > 0.

Is there a reason for, or benefit to, stating it like that rather than saying the ratio of output to input digit size asymptotically approaches zero? Or did I misunderstand the explanation?

easytiger | 4 years ago
I remember as a teen when my parents got me a PC (which they could ill afford) as an upgrade from the, by then ten-year-old, hand-me-down ZX Spectrum.

I was in awe at how many primes I could find on my Cyrix 333 MHz with my own noddy code.

Left it running all day when at school.

Never found anything of value, but I learned how to build a Beowulf cluster and learned a lot about arbitrary-precision integer implementations.

Simpler times

zw123456 | 4 years ago
When I was a sophomore in high school, in 1972, our school started a "computer class", which consisted of a teletype in the corner of a former storage room. My best friend and I were the only ones to sign up for it. The teacher was the math teacher; he tossed us a manual from the local community college and basically turned us loose on it. We logged into their Univac. The first thing we did was find the square root of a number using Newton's approximation method. The second thing we did was find the first 100 primes, which took forever :)
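The two exercises above can be sketched in a few lines of modern Python (a hypothetical reconstruction, obviously not the original teletype program):

```python
def newton_sqrt(x, iterations=20):
    """Approximate sqrt(x) with Newton's method: guess -> (guess + x/guess) / 2."""
    guess = max(x, 1.0)
    for _ in range(iterations):
        guess = (guess + x / guess) / 2
    return guess

def first_primes(count):
    """Collect the first `count` primes by trial division against earlier primes."""
    primes = []
    n = 2
    while len(primes) < count:
        # n is prime if no earlier prime p with p*p <= n divides it
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
        n += 1
    return primes
```

On any modern machine both finish instantly; on a 1972 timesharing Univac, "forever" sounds about right.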

Awesome your parents got you a PC, we would have died for one if it existed then!

earleybird | 4 years ago
For me it was learning Fortran IV (after Focal, then Basic, on a PDP-8i), because my dad let me use his account on the uni CDC 6400. I basically built a basic bigint math library, which I then put to use to determine whether 1000! + 1 was prime :-)
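A bigint primality check along these lines is easy to sketch in Python today (a hypothetical reconstruction using the probabilistic Miller-Rabin test, not the original Fortran library):

```python
import math
import random

def is_probable_prime(n, rounds=10):
    """Miller-Rabin probabilistic primality test; works on arbitrarily large ints."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

# math.factorial(1000) + 1 has over 2,500 digits; Python's bigints handle it,
# though each Miller-Rabin round on a number that size takes a moment.
# e.g. is_probable_prime(math.factorial(11) + 1) is True: 11! + 1 = 39916801
# is a known factorial prime.
```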
tapvt | 4 years ago
Hah! Reminds me of myself as a youth, but I was trying to find primes with an utterly horrible Perl script. I wish I still had my early code from my salad days.
ackbar03 | 4 years ago
Is finding primes of all things really the first thing you did with a computer as a teen? You could have invented Facebook or Napster
Victerius | 4 years ago
Young enough for a Fields? There is a ceremony later this year, but it's probably too soon. So he could be awarded a Fields in 2026 if he's not yet 40. He received his PhD in 2011.

Also, according to MathGenealogy, his thesis advisor eight academic generations back was none other than Poisson himself. Poisson's advisors were Lagrange and Laplace. Lagrange's was Euler. Euler's was Bernoulli. Bernoulli's was another Bernoulli.

You can go back to the late 15th century to find mathematicians with "unknown" advisors.

JadeNB | 4 years ago
> There is a ceremony later this year, but it's probably too soon.

Maybe this is what you meant by "it's probably too soon", but almost certainly the decisions about who gets it have already been made.

gavagai691 | 4 years ago
My bets are on another analytic number theorist for the Fields this cycle, namely James Maynard.
optimalsolver | 4 years ago
Is there a reason we're obsessed with primes beyond aesthetics? Why does this set of numbers garner all the headlines as opposed to some other arbitrary integer sequence like the Recamán numbers [0] ?

If tomorrow someone discovered a closed-form equation for the nth prime, how would mathematics/the world change?

[0] https://en.wikipedia.org/wiki/Recamán%27s_sequence

iNic | 4 years ago
Yes. Just within mathematics, prime numbers have a tendency to show up in many fields not obviously related to number theory. As an example: in the study of symmetries (group theory [0]), which is important to number theory, geometry, and even many parts of physics, prime numbers help us decompose a complicated symmetry into simpler ones (using the Sylow theorems [1]).

Prime numbers are also used for cryptography due to their computational properties.

Also, I think we have shown, using a variant of Galois theory, that there cannot be a closed-form solution for the nth prime.

But there are more headlines about the primes relative to their prevalence in research. I would guess that this is because:

- they can easily be explained to people

- they have been studied for thousands of years

- open problems still persist

In other parts of research you might need at least an undergraduate degree just to understand the definitions, which makes headlines less sexy.

[0] https://en.wikipedia.org/wiki/Finite_group

[1] https://en.wikipedia.org/wiki/Sylow_theorems

btmiller | 4 years ago
What impact would this have on elliptic curve cryptography?
echelon | 4 years ago
I am not a mathematician or a cryptographer, but here's my understanding (please, please correct me if I'm wrong):

Much of modern cryptography relies on the hardness of integer factorization. Things you want to hide are intractable to calculate on even the most powerful machines, because the underlying math isn't workable in polynomial time (or similarly low-degree time). It's age-of-the-universe hard to brute force with our machines and known algorithms.

It has to do with the complex structure of the problem in our intuitive dimension. We haven't thought of any possible way to speed it up classically, and it's not apparent if there are such techniques.
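The hardness claim can be felt concretely: the naive way to factor n is trial division, whose cost grows like √n, i.e. exponentially in the number of digits (a toy sketch of the brute-force baseline, not how real RSA moduli are attacked):

```python
def trial_division(n):
    """Factor n into primes by trying divisors up to sqrt(n).

    Roughly sqrt(n) ~ 10**(digits / 2) candidate divisors in the worst case,
    so each extra pair of digits multiplies the work by ~10.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors
```

For a 20-digit semiprime this is already around 10^10 divisions; for the 600-plus-digit moduli used in RSA it is hopeless, which is the whole point.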

The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick.

If the hypothesis is true, it's quite possible that NP hard problems are the same. That in some unintuitive higher dimension math we can solve the hardest problems in polynomial time.

NP-complete problems are also proven to be of equivalent complexity: if you figure out one, then the rest are also solvable. If we figure out the trick behind it all (if such a thing exists), we might break everything. Or maybe we'll prove that there is no such free lunch.

I'm one of the ones hoping for computing to be easy. It would unlock so many possibilities. Fast math, in silico experiments, protein folding, etc. It unfortunately would also break all of the underpinnings of our industry, though. SSL, privacy, crypto, infosec, everything about the Internet...

Most people are betting that primes are hard, and it certainly does look that way.

mikewarot | 4 years ago
I suspect somewhere in the Riemann zeta function is a reciprocal that cancels out all the imaginary terms once and only once, so that any number with more than one factor wouldn't cancel out.

I haven't the math skills to find that reciprocal.
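For what it's worth, the classical identities that sound closest to this idea are the Euler product and the Möbius series for the reciprocal of ζ, in which exactly the squarefree integers survive with alternating signs:

```latex
\zeta(s) = \prod_{p \ \text{prime}} \frac{1}{1 - p^{-s}},
\qquad
\frac{1}{\zeta(s)} = \sum_{n=1}^{\infty} \frac{\mu(n)}{n^{s}},
\qquad
\mu(n) =
\begin{cases}
(-1)^{k} & n \ \text{squarefree with } k \ \text{prime factors},\\[2pt]
0 & n \ \text{divisible by a square} > 1.
\end{cases}
```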

zwkrt | 4 years ago
I love how every quote by the mathematician clearly states that this does not bring us appreciably closer to the Riemann hypothesis, or that it is very unlikely, while the bulk of the article seems to imply otherwise.
gavagai691 | 4 years ago
While I had a couple of gripes with the article, I don't get the sense that the article exaggerates the significance of this work. The (Generalized) Riemann Hypothesis implies the (Generalized) Lindelöf Hypothesis. The latter is a particularly important consequence of GRH. Not only is it worthwhile, if you are interested in GRH, to verify its predictions (if you proved one of the predictions false, then you would disprove GRH, and if you proved it true, this would in some sense provide "theoretical support" for GRH), but Lindelöf is useful in its own right. In particular, it can sometimes serve as a substitute for GRH, and results that can substitute for GRH (such as Bombieri-Vinogradov, zero-density estimates, and subconvexity estimates) are very valuable. Contexts where subconvexity estimates can serve in place of GRH include Quantum Unique Ergodicity and representations of integers by ternary quadratic forms.

For this reason, this paper is a much bigger deal (in the analytic number theory world / in my opinion) than the paper a couple years back that made big headlines about Jensen polynomials (and Nelson's paper is surely one of the biggest results in the last couple of years in analytic number theory). While the latter was more of a "curiosity" / cute paper on an obscure way of thinking about RH, Nelson's paper is big progress on a well-studied weaker form of GRH with significant utility for applications. (I mean, of course, applications in mathematics.)

pelasaco | 4 years ago
Unlike the author of the article, the mathematician probably wasn't interested in generating clicks.
charlieyu1 | 4 years ago
Science journalism: buzzwords, speculation that doesn't sound plausible to the scientific community, etc.
saimiam | 4 years ago
This fascination with primes has, of late, seemed a little misplaced to me. Primes emerge as a property of the number system that you have.

The number system itself is an approximation of the natural world where actions like addition, subtraction, multiplication, and division exist. The natural world in question is, as far as we know today, what we humans observe.

Now, imagine a second world so excellent at approximations that its entire Standard Model depended on differentiation and integration. Would its inhabitants care as much about primes and try to fit everything into "beautiful" math? I don't know the answer to this, but I've found this thought experiment useful for putting our maths in its place.

misja111 | 4 years ago
I don't see how you could model the world without addition or subtraction. Even if you started out with differentiation, wouldn't addition and subtraction be implicitly defined by it? And once you have those, you have multiplication and division as well, and primes.
kzrdude | 4 years ago
I don't think we have even managed to imagine a world where the natural numbers wouldn't be a useful building block.

That means the natural numbers are a fundamental logical building block of any formal system like mathematics, doesn't matter what the laws of physics are.

orena | 4 years ago
It might be that the Riemann Hypothesis will be the first of Math's "Hard Problems" that will be solved by AI before humans...
fulvioterzapi | 4 years ago
It might. However, since there's no record of AI solving big, open conjectures such as the Riemann Hypothesis, I find this quite unlikely.
hwers | 4 years ago
I'm starting to think Quanta Magazine writes headlines this vague as a deliberate clickbait strategy, because they pull the biggest audience.