top | item 30058017

fdej | 4 years ago

> But how long will we need to look through these sequences of digits before we find the disagreeing digit? It feels intuitively like we should be able to establish some kind of bound on this. Like, maybe we should be able to say “if you add two lists of n numbers, each of which has d digits, then they can’t disagree for more than k * n * d digits” for some k. But no-one’s been able to prove anything like this.

You can write down a completely explicit bound here using the Mahler-Mignotte root separation bound. More generally, for any algebraic expression involving algebraic numbers, you can bound a priori the number of digits you need to check to determine its sign.

When you involve transcendental numbers, things do get much harder though.

abetusk | 4 years ago

So, to state it explicitly: given a list of positive integers a_i and coefficients d_i \in {-1, 1}, test whether \sum_i d_i sqrt(a_i) <= 0. Now construct the polynomial P(z) = \prod_i (z^2 - a_i); this gives a (univariate) polynomial to which a Mahler-Mignotte-like bound can be applied.
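As a rough illustration of the precision-doubling idea (this sketches the evaluation side only, not the Mahler-Mignotte bound itself), here's a Python sketch that evaluates \sum_i d_i sqrt(a_i) at increasing precision until the sign is certain. Without an a priori separation bound it can't terminate on an exact zero, so the `max_prec` cutoff is an assumption of the sketch:

```python
from decimal import Decimal, getcontext

def sign_of_sqrt_sum(terms, start_prec=30, max_prec=10000):
    """Sign of sum(d * sqrt(a)) for terms = [(d, a), ...], d in {-1, +1}.

    Doubles the working precision until |sum| exceeds a crude rounding
    error estimate. With no a priori separation bound this loop cannot
    prove the sum is exactly zero -- it just gives up at max_prec.
    """
    prec = start_prec
    while prec <= max_prec:
        getcontext().prec = prec
        total = sum(Decimal(d) * Decimal(a).sqrt() for d, a in terms)
        # each sqrt is accurate to ~1 unit in the last (prec-th) digit,
        # so the total carries at most this much accumulated error
        err = sum(Decimal(a).sqrt() for _, a in terms) * Decimal(10) ** (1 - prec)
        if abs(total) > err:
            return 1 if total > 0 else -1
        prec *= 2
    raise ValueError("sign not determined within max_prec digits")
```

For example, sqrt(10) + sqrt(11) - sqrt(5) - sqrt(18) is positive but only by about 2e-4, and the loop above still resolves it at its starting precision.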

I guess there are different levels of bounds you can use (Mahler, Mahler-Mignotte, Davenport-Mahler-Mignotte [0]), but they all involve the discriminant, the degree raised to the degree power (n^n), and maybe some other factors, which puts the required precision neatly within a polynomial number of bits. One bound puts the separation in the 2^{-2s^2} range, for bit size s [1].

Why does this not solve it? The problem as stated on cstheory.stackexchange explicitly says the square roots are square roots of integers [2]. What am I missing?

[0] https://arxiv.org/pdf/2005.07843.pdf

[1] http://160592857366.free.fr/joe/ebooks/ShareData/Fundamental... (pg 165, Lecture VI, Section 7, Root separation (pdf pg. 197))

[2] https://cstheory.stackexchange.com/questions/79/problems-bet...

EDIT: I forgot to include the sqrt in the sum equation

abetusk | 4 years ago

I'm wrong. The Mahler-Mignotte bound only works for pairs of roots and says nothing about the absolute value of the sum, at least not in the way I was thinking about it. There may be a way to "fix it up", but not one that I see, and I suspect the folks who've studied this in earnest are aware of Mahler-Mignotte and understand why it can't be used to solve this problem.

Thanks to @devit [0] who has understood why the Mahler-Mignotte tactic doesn't work. Just because you can bound the bit complexity of pairs of roots doesn't mean you can bound the complexity of all the 2^N possible {-1,1} combinations of them. At least, I don't see how it can be done simply.

[0] https://news.ycombinator.com/item?id=30059545

devit | 4 years ago

Does that actually work?

It seems that the degree of the minimal polynomial having the sum of N square roots as a root might be as large as 2^N, and if you then apply the bound at https://en.wikipedia.org/wiki/Geometrical_properties_of_poly... (with n = 2^N) you get a bound on the order of at least 2^N digits (more precisely 2^N (N + D)).

So it doesn't seem to lead to a proof of a subexponential number of equal digits, unless the degree of the minimal polynomial is actually subexponential.
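For a concrete small case: with N = 2, the sum sqrt(2) + sqrt(3) already satisfies a degree-4 = 2^2 polynomial, x^4 - 10x^2 + 1 (squaring twice eliminates both radicals). A quick numerical sanity check, not a proof of minimality:

```python
import math

# x = sqrt(2) + sqrt(3): then x^2 = 5 + 2*sqrt(6), and squaring again
# gives x^4 - 10x^2 + 1 = 0, a degree-4 = 2^2 polynomial.
x = math.sqrt(2) + math.sqrt(3)
residual = x**4 - 10 * x**2 + 1
print(abs(residual) < 1e-9)  # True
```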

abetusk | 4 years ago

Why do you need the bounds for every combination of the N square roots? Isn't it enough to get the minimum distance between the two nearest elements in that list?

If so, why not consider the degree-2N polynomial P(z) = \prod_i (z^2 - a_i)? This polynomial has degree only 2N and gives you the bound you actually care about: the number of bits needed to sum two numbers in the list. Since you're summing 2N of them instead of just one, you might need on the order of lg(N) more bits in your representation (so 2N + lg(N) bits, say), but this is still well within "polynomial" bits.

fdej | 4 years ago

Yes, the worst-case complexity is exponential in N, but the wording in the article could lead you to believe that no explicit exponential bound is known, which is false.

pfortuny | 4 years ago

Exactly: algebraic numbers, despite not having periodic expansions, are in general "reasonably far from each other", and especially from rationals.

I guess the problem can be solved using what you say, certainly.

It is only transcendentals that can be "too near" each other, and near rationals (this is Liouville's result, which was later improved; a specific case is the one you mention).

syki | 4 years ago

Rational numbers are algebraic so how are algebraic numbers reasonably far from each other? Algebraic numbers are dense in the real number line.

inasio | 4 years ago

I remember doing side-by-side plots of conservative Hamiltonian trajectories using a standard Euler method (maybe even RK45) vs. a symplectic method (which maintains energy conservation). The RK45 implementation had a very nice symmetric pattern, but one completely different from the pattern in the (correct) symplectic implementation. This was a useful eye-opener for me not to just blindly rely on Matlab's ODE45 or other default solvers...
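A minimal toy version of that energy-drift difference (a Python harmonic oscillator, not the original Matlab setup): forward Euler steadily gains energy, while semi-implicit (symplectic) Euler keeps it bounded near the true value.

```python
def explicit_euler(q, p, dt, steps):
    # forward Euler for the oscillator H = (p^2 + q^2) / 2
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    # semi-implicit Euler: update p first, then q using the new p
    for _ in range(steps):
        p = p - dt * q
        q = q + dt * p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

qe, pe = explicit_euler(1.0, 0.0, 0.01, 10_000)
qs, ps = symplectic_euler(1.0, 0.0, 0.01, 10_000)
print(energy(qe, pe))  # drifts well above the true energy 0.5
print(energy(qs, ps))  # stays close to 0.5
```

The only change between the two integrators is whether q is advanced with the old or the new momentum, which is exactly what makes the second map symplectic.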

woopwoop | 4 years ago

I would imagine (but note that I'm totally ignorant here) that this bound depends pretty badly on the degree of the polynomial defining the expression (and pretty reasonably on the coefficients). Then when you sum two algebraic numbers, the degree of the polynomial defining the sum gets much worse (in general, as bad as the product of the degrees). I would imagine this is the issue.