traes | 1 year ago
Something that seems to be frequently lacking in discussions of convergence in introductory texts on Taylor series is the possibility that the series DOES converge, but NOT to the approximated function. It's not sufficient to conclude that the derived Taylor series must converge to cos(x) because it always converges, since any of the infinitely many functions that match cosine's derivatives at x = 0 will have the same Taylor expansion. How do you know cos(x) is the one it will converge to?
owalt | 1 year ago
As you say, there's no guarantee that even a convergent Taylor series[0] converges to the correct value in any interval around the point of expansion. Though the series is of course trivially[1] convergent at the point of expansion itself, since only the constant term doesn't vanish.
The typical example is f(x) = exp(-1/x²) for x ≠ 0; f(0) = 0. The derivatives are mildly annoying to compute, but they must look like f⁽ⁿ⁾(x) = exp(-1/x²)pₙ(1/x) for some polynomials pₙ. Since the exponential decay of exp(-1/x²) as x → 0 dominates the polynomial growth of pₙ(1/x), it must be the case that f(0) = f'(0) = f"(0) = ··· = 0. In other words, the Taylor series is 0 everywhere, but clearly f(x) ≠ 0 for x ≠ 0. So the series converges to f only at x = 0. At all other points it predicts the wrong value for f.
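A quick numerical sketch (my own illustration, not part of the argument above): f(x) shrinks faster than any power of x as x → 0, which is exactly why the difference quotients defining f⁽ⁿ⁾(0) all vanish; and yet f is visibly nonzero away from 0, unlike its identically-zero Taylor series.

```python
import math

# f(x) = exp(-1/x^2) for x != 0, with f(0) = 0.
def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(x) / x^n -> 0 as x -> 0 for every n, which is the mechanism behind
# f(0) = f'(0) = f''(0) = ... = 0.  Check at x = 0.1:
for n in (5, 10, 20):
    print(f"n={n:2d}: f(0.1)/0.1^{n} = {f(0.1) / 0.1**n:.3e}")

# Yet f is clearly nonzero away from 0, so the (identically zero)
# Taylor series predicts the wrong value there:
print(f(0.5))  # exp(-4), nowhere near 0 on the scale above
```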
The straightforward real-analytic approach to resolving this issue goes through the full formulation of Taylor's theorem with an explicit remainder term[2]:
f(x) = Σⁿf⁽ᵏ⁾(a)(x-a)ᵏ/k! + Rₙ(x),
where Rₙ is the remainder term. To clarify, this is a _truncated_ Taylor expansion containing terms k=0,...,n.
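In code, the truncated expansion is just a finite sum over the derivatives at a. Here's a minimal sketch (the helper name and the log(1+x) example are my own, not from the comment), where f⁽ᵏ⁾(0) = (-1)ᵏ⁺¹(k-1)! for log(1+x):

```python
from math import factorial

def taylor_partial(derivs, a, x):
    """Truncated Taylor expansion sum_{k=0}^{n} f^(k)(a)(x-a)^k/k!,
    given derivs = [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d * (x - a) ** k / factorial(k) for k, d in enumerate(derivs))

# Example: f(x) = log(1 + x) around a = 0, where f^(k)(0) = (-1)^(k+1)(k-1)!
derivs = [0.0] + [(-1) ** (k + 1) * factorial(k - 1) for k in range(1, 11)]
print(taylor_partial(derivs, 0.0, 0.5))  # close to log(1.5)
```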
There are several explicit expressions for the remainder term, but one that's useful is
Rₙ(x) = f⁽ⁿ⁺¹⁾(ξ)(x-a)ⁿ⁺¹/(n+1)!,
where ξ is not known a priori, but is guaranteed to exist in [min(a,x), max(a,x)]. (I.e. the closed interval between a and x.)
Let's consider f(x) = cos(x) as an easy example. All derivatives look like ±sin(x) or ±cos(x). This lets us conclude that |f⁽ⁿ⁺¹⁾(ξ)| ≤ 1 for all ξ∈(-∞, ∞). So |Rₙ(x)| ≤ |x-a|ⁿ⁺¹/(n+1)! for all n. Since factorial growth dominates exponential growth, it follows that |Rₙ(x)| → 0 as n → ∞ regardless of which value of a we choose. In other words, we've proved that f(x) - Σⁿf⁽ᵏ⁾(a)(x-a)ᵏ/k! = Rₙ(x) → 0 as n → ∞ for all choices of a. So this is a proof that the value of the Taylor series around any point is in fact cos(x).
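You can watch the bound do its work numerically. A sketch (my own, for a = 0): the actual error of the truncated series never exceeds |x|ⁿ⁺¹/(n+1)!, and both race to zero as n grows.

```python
import math

def cos_partial(x, n):
    # k-th derivative of cos at 0 cycles through 1, 0, -1, 0, ...
    return sum((1, 0, -1, 0)[k % 4] * x**k / math.factorial(k)
               for k in range(n + 1))

x = 2.0
for n in (4, 8, 12):
    error = abs(math.cos(x) - cos_partial(x, n))
    bound = abs(x) ** (n + 1) / math.factorial(n + 1)
    print(f"n={n:2d}: |R_n| = {error:.2e} <= bound {bound:.2e}")
```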
Similar proofs for sin(x), exp(x), etc. are not much more difficult, and it's not hard to turn this into more general arguments for "good" cases. Trying to use the same machinery on the known counterexample exp(-1/x²) is obviously hopeless, as we already know the Taylor series converges to the wrong value here, but it can be illustrative to try (it is an exercise in frustration).
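For instance, the exp(x) case only changes the derivative bound: every derivative of exp is exp itself, so for a = 0 and x > 0 we get |f⁽ⁿ⁺¹⁾(ξ)| ≤ eˣ, and the remainder is bounded by eˣ·xⁿ⁺¹/(n+1)! → 0. A sketch of that bound (my own, not from the comment):

```python
import math

def exp_partial(x, n):
    # Every derivative of exp at 0 equals 1.
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 3.0
for n in (5, 10, 15):
    error = abs(math.exp(x) - exp_partial(x, n))
    # |f^(n+1)(xi)| <= e^x on [0, x], giving this Lagrange bound:
    bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)
    print(f"n={n:2d}: error {error:.2e} <= bound {bound:.2e}")
```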
A nicer, more intuitive setting for analysis of power series is complex analysis, which provides an easier and more general theory for when a function equals its Taylor series. This nicer setting is probably the reason the topic is mostly glossed over in introductory calculus/real analysis courses. However, it doesn't necessarily give detailed insight into real-analytic oddities like exp(-1/x²) [3].
[0]: For reference, the Taylor series of a function f around a is: Σf⁽ᵏ⁾(a)(x-a)ᵏ/k!. (I use lack of upper index to indicate an infinite series as opposed to a sum with finitely many terms.)
[1]: At x = a, the Taylor series evaluates to Σf⁽ᵏ⁾(a)(a-a)ᵏ/k! = f(a) + f'(a)·0 + f"(a)·0²/2! + ··· = f(a). All the terms containing (x-a) vanish.
[2]: https://en.wikipedia.org/wiki/Taylor%27s_theorem#Taylor's_th...
[3]: Something very funky goes on with this function as x → 0 in the complex plane, but this is "masked" in the real case. In the complex case, this function is said to have an essential singularity at x = 0.