Logs will appear no matter what at some point in the line of reasoning; in this approach they're just held back until you differentiate. I think the expression for the derivative of log(n)/n was much nicer to grapple with.
Seeing as we're in the integer world, I'm not sure logs need to show up at all; there's probably a proof path via unique prime factorisation. E.g. it's more or less immediately obvious that n and m would need to share the same prime factors, in the same ratios.
The key thing you need is that for n >= 3 the sequence n^(1/n) is decreasing. You can get that pretty easily without logs.
Look at two consecutive terms: n^(1/n) and (n+1)^(1/(n+1)). The ratio of the first to the second -- we want to prove that this is > 1 -- is n^(1/n) / (n+1)^(1/(n+1)). This is bigger than 1 iff its n(n+1)th power is; that is, iff n^(n+1) / (n+1)^n > 1. We can write the left-hand side as n * (n/(n+1))^n; so what we need is that (n/(n+1))^n > 1/n.
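Since the condition n^(n+1) > (n+1)^n involves only integers, it can be sanity-checked exactly with arbitrary-precision arithmetic (a quick illustrative script, not part of the original comment):

```python
# Exact integer check of n^(n+1) > (n+1)^n, which is equivalent to
# n^(1/n) > (n+1)^(1/(n+1)), over a range of n >= 3.
for n in range(3, 200):
    assert n ** (n + 1) > (n + 1) ** n, f"fails at n={n}"

# The inequality genuinely needs n >= 3: at n = 2 it flips.
assert 2 ** 3 < 3 ** 2  # 8 < 9, i.e. 2^(1/2) < 3^(1/3)
print("inequality holds for 3 <= n < 200")
```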
(Handwavily: we know that the LHS is about 1/e, so for n>=3 this should be good. But we want a proof and we're trying to do it without nontrivial analytic machinery.)
Actually, I prefer to write this as ((n+1)/n)^n < n, i.e. (1+1/n)^n < n. Expand the left-hand side with the binomial theorem: we get the sum over 0 <= k <= n of (n choose k) n^(-k). And we have (n choose k) = n(n-1)...(n-k+1)/k! <= n^k/k!, with strict inequality for k >= 2, so the expansion is strictly less than the sum over 0 <= k <= n of 1/k!. And for k >= 1 we easily have k! >= 2^(k-1), so this sum is no bigger than 1 + 1 + 1/2 + 1/4 + 1/8 + ... = 3. Hence (1+1/n)^n < 3 <= n for n >= 3.
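Both auxiliary bounds here can also be checked exactly in integer arithmetic; the inequality (1+1/n)^n < 3 is tested below in the equivalent form (n+1)^n < 3*n^n (again a hypothetical one-off script, not from the thread):

```python
from math import factorial

# k! >= 2^(k-1) for k >= 1.
for k in range(1, 50):
    assert factorial(k) >= 2 ** (k - 1)

# (1 + 1/n)^n < 3, tested exactly as (n+1)^n < 3 * n^n.
for n in range(1, 200):
    assert (n + 1) ** n < 3 * n ** n
print("both bounds hold")
```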
So, the function on integers n -> n^(1/n) is decreasing for n >= 3. Now the proof goes through as before.
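With monotonicity for n >= 3 in hand, the equation n^m = m^n should have 2^4 = 4^2 as its only solution with n < m; a brute-force search over a small range is consistent with that (an illustrative script of mine, not from the thread):

```python
# All pairs n < m with n^m == m^n, searched exhaustively up to 60.
solutions = [(n, m)
             for n in range(1, 60)
             for m in range(n + 1, 60)
             if n ** m == m ** n]
print(solutions)  # -> [(2, 4)]
```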
(Maybe there's a more thoroughly number-theory-ish way to do it by looking at prime factorizations, but when I try it that way it always seems to end up rather a mess.)
[EDITED to add:] But elsewhere in the discussion users bustermellotron and diffeomorphism give (very similar) neat number-theory-ish proofs, either of which is definitely a better proof than the one using the calculations above.
This back-and-forth makes me wonder: is it possible to write proofs about proofs? E.g., "you cannot prove this result without logarithms", or "there must exist a proof that involves prime factors"?
I suppose at least some proofs vaguely like that are possible because Gödel's incompleteness theorem is one, although I suspect that that same theorem puts some constraints on these "metaproofs."
Agreed! I'm realizing that my comment came across as negative, but I did appreciate seeing a second path to the same place. I also agree that the expression x^(1/x) feels like a more natural place to start.
You often see this, I think, in "pretty" proofs compared with the more direct approach: a clever early step or some bit of startling insight.
justinpombrio|2 years ago
n^m = m^n, so n^m and m^n share the same prime factorization. Call it p1^x1 * ... * pk^xk.
n = (n^m)^(1/m) = p1^(x1/m) * ... * pk^(xk/m)
m = (m^n)^(1/n) = p1^(x1/n) * ... * pk^(xk/n)
Therefore each xi is divisible by both m and n (since xi/m and xi/n are integer exponents), so it's divisible by lcm(n, m). Call lcm(n, m) = d. Now define:
z = p1^(x1/d) * ... * pk^(xk/d)
Both n and m are powers of z!
n = z^(d/m)
m = z^(d/n)
Call a=d/m and b=d/n. Then:
n^m = (z^a)^(z^b) = z^(az^b)
m^n = (z^b)^(z^a) = z^(bz^a)
n^m = m^n -> z^(az^b) = z^(bz^a) -> az^b = bz^a
EDIT: continuing proof.
That's as far as I've got. Can someone run with it?
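The construction up to this point can at least be exercised mechanically. The sketch below is my own illustrative code (the names common_root and prime_factorization are hypothetical, and the factoriser is naive trial division); it builds z, a, b from a valid pair and checks n = z^a, m = z^b:

```python
from math import lcm

def prime_factorization(x):
    """Naive trial-division factorisation: returns {prime: exponent}."""
    f = {}
    p = 2
    while p * p <= x:
        while x % p == 0:
            f[p] = f.get(p, 0) + 1
            x //= p
        p += 1
    if x > 1:
        f[x] = f.get(x, 0) + 1
    return f

def common_root(n, m):
    """Given n^m == m^n, build z with n = z^(d/m), m = z^(d/n), d = lcm(n, m)."""
    assert n ** m == m ** n
    d = lcm(n, m)
    # Exponents x_i of n^m are m times the exponents of n.
    factors = {p: e * m for p, e in prime_factorization(n).items()}
    z = 1
    for p, x in factors.items():
        assert x % d == 0  # each x_i is divisible by lcm(n, m)
        z *= p ** (x // d)
    a, b = d // m, d // n
    assert n == z ** a and m == z ** b
    return z, a, b

print(common_root(2, 4))  # 2^4 == 4^2: z = 2, n = z^1, m = z^2
print(common_root(3, 3))  # trivial case: z = 3, a = b = 1
```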