
Xcelerate | 4 months ago

From the abstract:

> This aligns with number theory conjectures suggesting that at higher orders of magnitude we should see diminishing noise in prime number distributions, with averages (density, AP equidistribution) coming to dominate, while local randomness regularises after scaling by log x. Taken together, these findings point toward an interesting possibility: that machine learning can serve as a new experimental instrument for number theory.

n·log(n) spacing with "local randomness" seems like such a common occurrence that perhaps it should be abstracted into its own term (or maybe it already is?). I believe the description lengths of the minimal programs computing BB(n) (via a Turing machine encoding) follow this pattern as well.
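For context on the spacing claim: the prime number theorem gives p_n ≈ n·log(n), so the average gap between primes near x is about log(x), with the "local randomness" being the fluctuation around that average. A quick empirical sketch (plain sieve, nothing from the paper itself):

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(1_000_000)

# Average gap among the last 1000 primes below 10^6; the prime number
# theorem predicts this is close to log(x) ~ 13.8 near x = 10^6.
tail = ps[-1000:]
avg_gap = (tail[-1] - tail[0]) / (len(tail) - 1)
print(f"average gap: {avg_gap:.2f}, log(x): {log(tail[-1]):.2f}")
```

Dividing each individual gap by log(x) is the "scaling by log x" normalization the abstract mentions: after that rescaling the gaps have mean ≈ 1 and the residual fluctuations are what the local-randomness models describe.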
