Taylor Series

Introduction

We want to estimate a function, f(x), with a power series such that

f(x) \approx c_0 + c_1(x-a) + c_2(x-a)^2 + c_3(x-a)^3 + \dots + c_n(x-a)^n \qquad (1)

and

f(x) = \displaystyle\sum_{n=0}^\infty c_n(x-a)^n (2)

Of course, this will be true only if the series converges, which can be written mathematically as

\lvert x-a \rvert < R    

where R is the so-called radius of convergence. This simply means that the series takes on a finite value (as opposed to diverging, which means the value adds to infinity).
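As a concrete illustration of convergence versus divergence (the geometric series used here is our own example, not one from the text): the series \sum_{n=0}^\infty x^n converges to 1/(1-x) with radius of convergence R=1 about a=0. A minimal Python sketch:

```python
# Illustrative sketch (assumption: the geometric series sum_{n>=0} x**n,
# which converges to 1/(1-x) with radius of convergence R = 1 about a = 0).

def geometric_partial_sum(x, terms):
    """Partial sum of x**0 + x**1 + ... + x**(terms-1)."""
    return sum(x**n for n in range(terms))

# Inside the radius of convergence (|x| < 1) the partial sums settle
# toward the finite value 1/(1 - x):
inside = geometric_partial_sum(0.5, 50)   # approaches 1/(1 - 0.5) = 2
# Outside the radius (|x| > 1) they grow without bound:
outside = geometric_partial_sum(2.0, 50)

print(inside, 1 / (1 - 0.5), outside)
```

Inside the radius the partial sums approach a finite value; outside, they blow up, which is exactly what "the series takes on a finite value (as opposed to diverging)" means in practice.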

Derivation of Taylor Series

As a first approximation, let’s find the value of f(a) and see how it compares with our estimate, which is the right side of equation 1 (we’ll call it P(x)). Notice that if we plug x=a into the equation, all of the terms that contain (x-a) will be zero (i.e., a-a = 0). Therefore,

f(a) = c_0

Next let’s check the derivative of f(x):

f^\prime(x) = 0+c_1+2c_2(x-a)+3c_3(x-a)^2+4c_4(x-a)^3+\dots

If we evaluate f^\prime at a, all of the terms that contain (x-a) become zero. Thus, we have

f^\prime(a) = c_1

Let’s keep going with the derivatives:

f^{\prime\prime}(x) = 2\cdot1c_2+3\cdot2c_3(x-a)+4\cdot3c_4(x-a)^2+\dots

Evaluate f^{\prime\prime}(x) at x=a. When we do, all of the terms that contain (x-a) disappear, and we have:

f^{\prime\prime}(a) = 2\cdot1c_2

f^{\prime\prime\prime}(x) = 3\cdot2\cdot1c_3+4\cdot3\cdot2c_4(x-a)+\dots

Similar to the above calculations, we evaluate f^{\prime\prime\prime}(x) at a. As previously, the term with (x-a) becomes zero. We are left with

f^{\prime\prime\prime}(a) = 3\cdot2\cdot1c_3

f^{\prime\prime\prime\prime}(x) = 4\cdot3\cdot2\cdot1c_4 + \dots

f^{\prime\prime\prime\prime}(a) = 4\cdot3\cdot2\cdot1c_4

We could keep going, but hopefully, the pattern is becoming evident:

  • f(a) = c_0
  • f^\prime(a) = c_1
  • f^{\prime\prime}(a) = 2\cdot1c_2
  • f^{\prime\prime\prime}(a) = 3\cdot2\cdot1c_3
  • f^{\prime\prime\prime\prime}(a) = 4\cdot3\cdot2\cdot1c_4
  • \vdots

The numbers to the left of the coefficients, which are multiplied together and decrease stepwise by 1, are factorials of the subscripts of the coefficients with which they are associated. (Note that the 1 that multiplies c_1 is 1! and 0! also equals 1.) Thus,

  • f(a) = 0!c_0
  • f^\prime(a) = 1!c_1
  • f^{\prime\prime}(a) = 2!c_2
  • f^{\prime\prime\prime}(a) = 3!c_3
  • f^{\prime\prime\prime\prime}(a) = 4!c_4
  • \vdots

and

  • \frac{f(a)}{0!} = c_0
  • \frac{f^\prime(a)}{1!} = c_1
  • \frac{f^{\prime\prime}(a)}{2!} = c_2
  • \frac{f^{\prime\prime\prime}(a)}{3!} = c_3
  • \frac{f^{\prime\prime\prime\prime}(a)}{4!} = c_4
  • \vdots
  • \frac{f^{(n)}(a)}{n!} = c_n

That means that

\begin{array}{rcl}  f(x) &\approx&  \frac{f(a)}{0!} + \frac{f^\prime(a)}{1!}(x-a) + \frac{f^{\prime\prime}(a)}{2!}(x-a)^2 +\frac{f^{\prime\prime\prime}(a)}{3!}(x-a)^3 + \\ \, &\,& \frac{f^{\prime\prime\prime\prime}(a)}{4!}(x-a)^4 + \dots +\frac{f^{(n)}(a)}{n!}(x-a)^n   \end{array}   (3)
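Equation 3 can be checked numerically. Below is a minimal Python sketch (the helper name taylor_polynomial is ours). We use f(x)=e^x, whose every derivative at a is e^a, so the coefficients c_n = e^a/n! are known exactly:

```python
import math

# Sketch of equation (3): P(x) = sum_{n=0}^{N} f^(n)(a)/n! * (x-a)^n.
# Assumption: f(x) = e^x, so every derivative evaluated at a is e^a.

def taylor_polynomial(derivs_at_a, a, x):
    """Evaluate eq. (3) given the list [f(a), f'(a), f''(a), ...]."""
    return sum(d / math.factorial(n) * (x - a)**n
               for n, d in enumerate(derivs_at_a))

a, N = 1.0, 12
derivs = [math.exp(a)] * (N + 1)   # all derivatives of e^x at a equal e^a
approx = taylor_polynomial(derivs, a, x=1.5)
print(approx, math.exp(1.5))       # the two values agree closely
```

Even a modest N gives excellent agreement near the center a, which is the point of the approximation.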

Proof that f(x) = P(x) as N → ∞

What we want to do now is to see if

P(x) = \displaystyle\sum_{n=0}^N \frac{f^{(n)}(a)}{n!}(x-a)^n converges to f(x) as N goes to infinity.

Remainder function

Let’s start by defining a function, R_{N,a}, called the remainder (some authors call it the error function). It is the difference between our function, f(x), and our Taylor polynomial, P(x), where the Taylor polynomial is built from the first N derivatives of f and is centered at the point x=a. Alternatively, one might say that it is the error between our function, f(x), and the estimate provided by our Taylor polynomial, P(x), at some point x. In equation form,

R_{N,a}(x) = \left| f(x) - P(x) \right| = \left| f(x) - \displaystyle\sum_{n=0}^N \frac{f^{(n)}(a)}{n!}(x-a)^n \right|

What we want to know is: what is the remainder at some point b remote from a (a, again, being the point at which our Taylor polynomial is centered)?

What we already know is that P(x) estimates f(x) perfectly at x=a. Therefore,

R_{N,a}(a) = f(a) - P(a) = 0

Next, let’s take the derivative of the function above and evaluate it at x=a:

R_{N,a}^\prime(a) = f^\prime(a) - P^\prime(a) = 0

The difference in the derivatives is 0 because, as we saw when we derived the expression for the Taylor series, the derivatives of f(a) are equal to the derivatives of P(a) for all levels of derivative up to N. That is,

\begin{array}{l}P(a)=f(a)\\P^\prime(a)=f^\prime(a)\\P^{\prime\prime}(a)=f^{\prime\prime}(a)\\ \vdots\\ P^{(N)}(a)=f^{(N)}(a)\end{array}

Because of this,

\begin{array}{l}  R_{N,a}(a) = f(a) - P(a) = 0\\ R_{N,a}^\prime(a) = f^\prime(a) - P^\prime(a) = 0\\ R_{N,a}^{\prime\prime}(a) = f^{\prime\prime}(a) - P^{\prime\prime}(a) = 0\\ \vdots\\ R_{N,a}^{(N)}(a) = f^{(N)}(a) - P^{(N)}(a) = 0  \end{array}

Now let’s take the N+1^{\text{th}} derivative of the above equation:

R_{N,a}^{(N+1)}(x) = f^{(N+1)}(x) - P_{N,a}^{(N+1)}(x)

To evaluate this expression, we focus specifically on the term P_{N,a}^{(N+1)}(x). Note that a subscript, N,a, has been added to this term to emphasize that P_{N,a}(x) is an N^{\text{th}} degree polynomial centered at a. What is the value of its N+1^{\text{th}} derivative? Zero. Why? Because the N+1^{\text{th}} derivative of an N^{\text{th}} degree polynomial is zero. (For example, if f(x) = x^2, then f^\prime(x) = 2x, f^{\prime\prime}(x) = 2, and f^{\prime\prime\prime}(x) = 0.) Due to this,

\begin{array}{rcl}  R_{N,a}^{(N+1)}(x) &=& f^{(N+1)}(x) - P_{N,a}^{(N+1)}(x)\\ &=& f^{(N+1)}(x) - 0\\ &=& f^{(N+1)}(x)  \end{array}
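The claim that the N+1^{\text{th}} derivative of an N^{\text{th}} degree polynomial is zero is easy to check mechanically. Here is a small Python sketch (the coefficient-list representation and helper name are ours) that differentiates a degree-3 polynomial four times:

```python
# Sketch: differentiating a degree-N polynomial N+1 times yields zero.
# A polynomial is stored as a coefficient list [c0, c1, ..., cN],
# meaning c0 + c1*x + ... + cN*x**N.

def differentiate(coeffs):
    """Return the coefficient list of the derivative."""
    # d/dx of c_k * x**k is k * c_k * x**(k-1); the constant term drops out.
    return [k * c for k, c in enumerate(coeffs)][1:]

poly = [5.0, -3.0, 2.0, 7.0]  # degree N = 3: 5 - 3x + 2x^2 + 7x^3
for _ in range(4):            # differentiate N + 1 = 4 times
    poly = differentiate(poly)

print(poly)  # [] -- the zero polynomial
```

Each differentiation lowers the degree by one, so after N+1 steps nothing is left, which is exactly why P_{N,a}^{(N+1)}(x) vanishes.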

The next thing we need to do is try to find an upper bound to R_{N,a} (we’ll call it M). In math terms,

\left| R_{N,a}^{(N+1)}(x) \right| = \left| f^{(N+1)}(x) \right| \leq M

We take the absolute values of R_{N,a}^{(N+1)}(x) and f^{(N+1)}(x) because our error could be positive or negative but what we care about is the magnitude of the error.

Now we know that, because f^{(N+1)}(x), like all the functions we’ve been dealing with in this article, is continuous, it must have a maximum value on the interval from x=a to x=b. And we already said that we are going to call that maximum value (or upper bound) M. That means that

\left| f^{(N+1)}(x) \right| \leq M

And because \left| R_{N,a}^{(N+1)}(x) \right| = \left| f^{(N+1)}(x) \right|,

\left| R_{N,a}^{(N+1)}(x) \right| \leq M

What we want to prove, though, is that,

\left| R_{N,a}(b) \right| \leq M

So somehow, we’ve got to get from the N+1^{\text{th}} derivative of R_{N,a}(x) to the “zeroth” derivative of R_{N,a}(x) (the zeroth derivative being the function, R_{N,a}(x), itself). How do we do this? By successive integrations. Let’s start with

\int \left| R_{N,a}^{(N+1)}(x) \right|\,dx \leq \int M\,dx

Before we proceed with this undertaking, let us consider an important aside: namely that

\left| \int f(x)\,dx \right| \leq \int \left| f(x) \right|\,dx

If the function is entirely positive or entirely negative over the interval of integration, the two sides are equal. However, if part of the area is positive and part is negative, then, in \left| \int f(x)\,dx \right|, the negative part cancels out part of the positive area. On the other hand, with the absolute value sign inside the integral, the negative portion of the area adds positively to the positive portion rather than being subtracted from it. Thus, \int \left| f(x) \right|\,dx is potentially larger than \left| \int f(x)\,dx \right|.

With that aside behind us, let’s move on to the successive integration process. To begin

By the aside above, \left| R_{N,a}^{(N)}(x) \right| = \left| \int R_{N,a}^{(N+1)}(x)\,dx \right| \leq \int \left| R_{N,a}^{(N+1)}(x) \right|\,dx, and \int M\,dx = Mx + C, where C is a constant made up from contributions of the indefinite integrals on both sides of the equation. Therefore,

\left| R_{N,a}^{(N)}(x) \right| \leq Mx + C

Since we want to find the lowest upper bound for our error function, we want to minimize C. We’ve already established that R_{N,a}^{(N)}(a)=0. That means that 0 = Ma + C, so C = -Ma. Thus (restricting ourselves to x \geq a, so that x-a \geq 0), we have

\left| R_{N,a}^{(N)}(x) \right| \leq Mx - Ma = M(x-a) = M\left| x-a \right|

We integrate both sides of this equation:

\left| \int R_{N,a}^{(N)}(x)\,dx \right| \leq \int \left| R_{N,a}^{(N)}(x)\right| \,dx \leq \int M \left| (x-a)\right|\,dx

We get

\left| R_{N,a}^{(N-1)}(x) \right| \leq \frac{M}{2}\left| x-a \right|^2 + C

To minimize the constant C, we use the same trick we used above; we take the value of both sides of this equation at x=a. So,

0=\frac{M(a-a)^2}{2}+C\quad\Rightarrow\quad C=0. Because the right side of the equation will contain an (x-a) term in all of our subsequent integration steps, the constant C will be 0 in all of these steps. Thus, we can ignore the constant from here on out.

We continue our successive integration process as follows:

\begin{array}{rcl}
\left| \int R_{N,a}^{(N-1)}(x)\,dx \right| &\leq& \int \frac{M}{2}\left| x-a \right|^2\,dx\\
\, &\,& \,\\
\left| R_{N,a}^{(N-2)}(x) \right| &\leq& \frac{M}{3\cdot2\cdot1}\left| x-a \right|^3\\
\, &\,& \,\\
\left| \int R_{N,a}^{(N-2)}(x)\,dx \right| &\leq& \int \frac{M}{3\cdot2\cdot1}\left| x-a \right|^3\,dx\\
\, &\,& \,\\
\left| R_{N,a}^{(N-3)}(x) \right| &\leq& \frac{M}{4\cdot3\cdot2\cdot1}\left| x-a \right|^4\\
\, &\,& \,\\
\, &\vdots& \,\\
\, &\,& \,\\
\left| R_{N,a}^{(0)}(x) \right| = \left| R_{N,a}(x) \right| &\leq& \frac{M}{(N+1)!}\left| x-a \right|^{N+1}
\end{array}

From this, we can come up with an expression for \left| R_{N,a}(x) \right| at b:

\left| R_{N,a}(b) \right| \leq \frac{M}{(N+1)!}\left| b-a \right|^{N+1}

We can use this expression to prove that P(x) converges to f(x) over some interval of x. Note that this interval can be infinite in some cases.
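As a sanity check of this bound, here is a short Python sketch (the helper names are ours). We take f = \sin, a = 0, and M = 1, since every derivative of \sin x is \pm\sin x or \pm\cos x and hence bounded by 1:

```python
import math

# Numerical check of the bound |R_{N,a}(b)| <= M |b-a|**(N+1) / (N+1)!.
# Assumption: f = sin and a = 0, so every derivative is bounded by M = 1.

def maclaurin_sin(x, N):
    """Maclaurin polynomial of sin(x) up through degree N."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range((N + 1) // 2))

a, b, N, M = 0.0, 2.0, 7, 1.0
remainder = abs(math.sin(b) - maclaurin_sin(b, N))
bound = M * abs(b - a)**(N + 1) / math.factorial(N + 1)
print(remainder, bound)  # the actual remainder stays below the bound
```

The measured error sits comfortably under the theoretical bound, as the derivation predicts.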

Example: e^x

The exponential function, e^x, is ubiquitously useful in science and mathematics. Therefore, it will be useful to examine it further. The first order of business is to derive its Taylor series. Toward this end, we’ll construct a table:

\begin{array}{ll}  \text{Derivative}&\text{Function}\\ f^{(0)}(x)&e^x\\ f^{\prime}(x)&e^x\\ f^{\prime\prime}(x)&e^x\\ f^{\prime\prime\prime}(x)&e^x\\ \vdots&\vdots\\ f^{(n)}(x)&e^x\\ f^{(n+1)}(x)&e^x  \end{array}

This table illustrates that the exponential function and all of its derivatives are equal to e^x.

The general formula for the Taylor series is:

f(x) \approx \displaystyle\sum_{n=0}^N \frac{f^{(n)}(a)}{n!}(x-a)^n

For this proof, we’re going to use the specialized form of the Taylor series called the Maclaurin series in which the series is centered at zero (i.e., a=0). The formula is:

f(x) \approx \displaystyle\sum_{n=0}^N \frac{f^{(n)}(0)}{n!}(x-0)^n=\displaystyle\sum_{n=0}^N \frac{f^{(n)}(0)}{n!}x^n

We know that e^x and all its derivatives are equal to e^x. We also know that e^0=1. So now let’s plug this information into the formula for the Maclaurin series:

\begin{array}{rcl}  f(x) &\approx& e^0 + \frac{e^0}{1!}x^1 + \frac{e^0}{2!}x^2 +\frac{e^0}{3!}x^3 + \dots + \frac{e^0}{N!}x^N \\  \, &\,& \,\\  &\approx& 1 + \frac{1}{1!}x^1 + \frac{1}{2!}x^2 +\frac{1}{3!}x^3 + \dots + \frac{1}{N!}x^N \\  \, &\,& \,\\  &\approx& 1 + x + \frac{x^2}{2!} +\frac{x^3}{3!} + \dots + \frac{x^N}{N!}  \end{array}
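We can watch these partial sums converge numerically. A minimal Python sketch (the function name is ours):

```python
import math

# The Maclaurin series above, 1 + x + x^2/2! + ... + x^N/N!, in code.

def exp_maclaurin(x, N):
    """Partial sum of the Maclaurin series of e^x through degree N."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

x = 3.0
for N in (2, 5, 10, 20):
    print(N, exp_maclaurin(x, N))   # approaches math.exp(3.0)
```

Each added term shrinks the gap between the partial sum and e^x, foreshadowing the convergence proof below.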

Now we want to show that the above Maclaurin series converges to e^x on the interval -b \leq x \leq +b. To do this, we use Taylor’s inequality and the remainder function that we previously derived.

We know that the N+1^{\text{th}} derivative of e^x, like all of its derivatives, is e^x. Because e^x is increasing, the maximum value of f^{(N+1)}(x) on the interval -b \leq x \leq +b is e^b. e^b, then, is the value of M in Taylor’s inequality (recall that, in this case, a=0). Thus, for Taylor’s inequality, we have

\left| R_{N,0}(x) \right| \leq \frac{M}{(N+1)!}\left| x \right|^{N+1} = e^b \frac{\left| x \right|^{N+1}}{(N+1)!}

Finally, we take the limit as N\to\infty of both sides. That gives us

\lim_{N\to\infty} \left| R_{N,0}(x) \right| \leq \lim_{N\to\infty} e^b \frac{\left| x \right|^{N+1}}{(N+1)!} = 0

because (N+1)! goes to infinity much faster than \left| x \right|^{N+1}. To see this, plug in some values of N+1 at x=2:

\begin{array}{rcl}  N+1 & \left| x \right|^{N+1} & (N+1)! \\ 4 & 2^4=16 & 4!=24 \\ 5 & 2^5=32 & 5!=120 \\ 6 & 2^6=64 & 6!=720  \end{array}
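The same comparison can be extended in a few lines of Python (the variable names are ours); the ratio \left| x \right|^{N+1}/(N+1)! shrinks toward zero:

```python
import math

# Reproducing and extending the table at x = 2: the factorial (N+1)!
# eventually dwarfs the exponential |x|**(N+1).
x = 2
ratios = []
for n_plus_1 in (4, 5, 6, 10, 15):
    ratio = x**n_plus_1 / math.factorial(n_plus_1)
    ratios.append(ratio)
    print(n_plus_1, x**n_plus_1, math.factorial(n_plus_1), ratio)
```

The printed ratios fall monotonically, which is the concrete content of the limit statement above.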

Since R_{N,0}, the error between f(x) and P(x), goes to 0 as N\to\infty, f(x)=P(x) in the limit. And because we may take M=e^b for any b we choose, e^x equals its Taylor series at all values of x.

Example: sin x

Next let’s calculate the Maclaurin series for \sin x, P(x), and prove that \sin x=P(x) as n\to\infty.

\begin{array}{lr}  \text{Derivative}&\text{Function}\\ f^{(0)}(x)&\sin x\\ f^{\prime}(x)&\cos x\\ f^{\prime\prime}(x)&-\sin x\\ f^{\prime\prime\prime}(x)&-\cos x\\ f^{\prime\prime\prime\prime}(x)&\sin x\\ \vdots&\vdots\\  \end{array}

We use this data to calculate the Maclaurin series of \sin x:

\begin{array}{rcl}
f(x) &\approx& \frac{f^{(0)}(0)}{0!}(x-0)^0 + \frac{f^{(1)}(0)}{1!}(x-0)^1 + \frac{f^{(2)}(0)}{2!}(x-0)^2 + \frac{f^{(3)}(0)}{3!}(x-0)^3 +\\
\, &\,& \frac{f^{(4)}(0)}{4!}(x-0)^4 + \dots + \frac{f^{(n)}(0)}{n!}(x-0)^n\\
\, &\,& \, \\
\sin x &\approx& \frac{\sin 0}{0!}x^0 + \frac{\cos 0}{1!}x^1 + \frac{-\sin 0}{2!}x^2 + \frac{-\cos 0}{3!}x^3 + \frac{\sin 0}{4!}x^4 +\\
\, &\,& \frac{\cos 0}{5!}x^5 + \frac{-\sin 0}{6!}x^6 + \frac{-\cos 0}{7!}x^7 + \dots\\
\, &\,& \, \\
&\approx& \frac{0}{0!}x^0 + \frac{1}{1!}x^1 + \frac{0}{2!}x^2 + \frac{-1}{3!}x^3 + \frac{0}{4!}x^4 + \frac{1}{5!}x^5 + \frac{0}{6!}x^6 + \frac{-1}{7!}x^7 + \dots\\
\, &\,& \, \\
&\approx& 0 + x + 0 - \frac{x^3}{3!} + 0 + \frac{x^5}{5!} + 0 - \frac{x^7}{7!} + \dots\\
\, &\,& \, \\
&\approx& x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots
\end{array}
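A quick numerical check of this series (the helper name is ours):

```python
import math

# The series x - x^3/3! + x^5/5! - x^7/7! + ... evaluated numerically.

def sin_maclaurin(x, terms):
    """Sum the first `terms` nonzero terms of the sine series."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

for x in (-math.pi, -1.0, 0.5, math.pi):
    print(x, sin_maclaurin(x, 15), math.sin(x))  # the columns agree
```

With only a handful of terms the partial sums already match math.sin across the whole interval [-\pi, \pi].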

Now we prove that the Maclaurin series for \sin x converges to \sin x as n\to\infty. We’ll set the interval of convergence to -\pi \leq x \leq +\pi (i.e., one complete cycle). Because \sin x is a periodic function, in so doing, whatever we prove will hold for all x.

Note that the derivatives of \sin x consist of \pm \sin x and \pm \cos x, which vary between -1 and +1. Thus, \left| R_{n,0}^{(n+1)}(x) \right| = \left| f^{(n+1)}(x) \right| \leq M = 1. So we have

\lim_{n\to\infty} \left| R_{n,0}(x) \right| \leq \lim_{n\to\infty} \frac{\left| x \right|^{n+1}}{(n+1)!} = 0

Therefore, the Maclaurin series of \sin x converges to \sin x for all values of x.

Example: cos x

In this section, we’ll calculate the Maclaurin series for \cos x, P(x), and prove that \cos x=P(x) as n\to\infty.

\begin{array}{lr}  \text{Derivative}&\text{Function}\\ f^{(0)}(x)&\cos x\\ f^{\prime}(x)&-\sin x\\ f^{\prime\prime}(x)&-\cos x\\ f^{\prime\prime\prime}(x)&\sin x\\ f^{\prime\prime\prime\prime}(x)&\cos x\\ \vdots&\vdots\\  \end{array}

We use this data to calculate the Maclaurin series of \cos x:

\begin{array}{rcl}
f(x) &\approx& \frac{f^{(0)}(0)}{0!}(x-0)^0 + \frac{f^{(1)}(0)}{1!}(x-0)^1 + \frac{f^{(2)}(0)}{2!}(x-0)^2 + \frac{f^{(3)}(0)}{3!}(x-0)^3 +\\
\, &\,& \frac{f^{(4)}(0)}{4!}(x-0)^4 + \dots + \frac{f^{(n)}(0)}{n!}(x-0)^n\\
\, &\,& \, \\
\cos x &\approx& \frac{\cos 0}{0!}x^0 + \frac{-\sin 0}{1!}x^1 + \frac{-\cos 0}{2!}x^2 + \frac{\sin 0}{3!}x^3 + \frac{\cos 0}{4!}x^4 +\\
\, &\,& \frac{-\sin 0}{5!}x^5 + \frac{-\cos 0}{6!}x^6 + \frac{\sin 0}{7!}x^7 + \dots\\
\, &\,& \, \\
&\approx& \frac{1}{0!}x^0 + \frac{0}{1!}x^1 + \frac{-1}{2!}x^2 + \frac{0}{3!}x^3 + \frac{1}{4!}x^4 + \frac{0}{5!}x^5 + \frac{-1}{6!}x^6 + \frac{0}{7!}x^7 + \dots\\
\, &\,& \, \\
&\approx& 1 + 0 - \frac{x^2}{2!} + 0 + \frac{x^4}{4!} + 0 - \frac{x^6}{6!} + \dots\\
\, &\,& \, \\
&\approx& 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \dots
\end{array}
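And the corresponding check for cosine (again, the helper name is ours):

```python
import math

# The series 1 - x^2/2! + x^4/4! - x^6/6! + ... evaluated numerically.

def cos_maclaurin(x, terms):
    """Sum the first `terms` nonzero terms of the cosine series."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k)
               for k in range(terms))

for x in (-math.pi, -1.0, 0.5, math.pi):
    print(x, cos_maclaurin(x, 15), math.cos(x))  # the columns agree
```

As with sine, the partial sums track math.cos closely over [-\pi, \pi].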

Now we prove that the Maclaurin series for \cos x converges to \cos x as n\to\infty. We’ll set the interval of convergence to -\pi \leq x \leq +\pi (i.e., one complete cycle). Because \cos x is a periodic function, in so doing, whatever we prove will hold for all x.

Note that the derivatives of \cos x consist of \pm \sin x and \pm \cos x, which vary between -1 and +1. Thus, \left| R_{n,0}^{(n+1)}(x) \right| = \left| f^{(n+1)}(x) \right| \leq M = 1. So we have

\lim_{n\to\infty} \left| R_{n,0}(x) \right| \leq \lim_{n\to\infty} \frac{\left| x \right|^{n+1}}{(n+1)!} = 0

Therefore, the Maclaurin series of \cos x converges to \cos x for all values of x.
