\documentclass[a4paper,12pt]{book}
\usepackage{latexsym}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{bm}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{fancybox}
\pagestyle{empty}
\begin{document}
Calculus Refresher, version 2008.4
\copyright\ {\it 1997--2008, Paul Garrett, garrett@math.umn.edu}

{\it http://www.math.umn.edu/$\sim$garrett/}
Contents
(1) Introduction
(2) Inequalities
(3) Domain of functions
(4) Lines (and other items in Analytic Geometry)
(5) Elementary limits
(6) Limits with cancellation
(7) Limits at infinity
(8) Limits of exponential functions at infinity
(9) The idea of the derivative of a function
(10) Derivatives of polynomials
(11) More general power functions
(12) Quotient rule
(13) Product Rule
(14) Chain rule
(15) Tangent and Normal Lines
(16) Critical points, monotone increase and decrease
(17) Minimization and Maximization
(18) Local minima and maxima (First Derivative Test)
(19) An algebra trick
(20) Linear approximations: approximation by differentials
(21) Implicit differentiation
(22) Related rates
(23) Intermediate Value Theorem, location of roots
(24) Newton's method
(25) Derivatives of transcendental functions
(26) L'Hospital's rule
(27) Exponential growth and decay: a differential equation
(28) The second and higher derivatives
(29) Inflection points, concavity upward and downward
(30) Another differential equation: projectile motion
(31) Graphing rational functions, asymptotes
(32) Basic integration formulas
(33) The simplest substitutions
(34) Substitutions
(35) Area and definite integrals
(36) Lengths of Curves
(37) Numerical integration
(38) Averages and Weighted Averages
(39) Centers of Mass (Centroids)
(40) Volumes by Cross Sections
(41) Solids of Revolution
(42) Surfaces of Revolution
(43) Integration by parts
(44) Partial Fractions
(45) Trigonometric Integrals
(46) Trigonometric Substitutions
(47) Historical and theoretical comments: Mean Value Theorem
(48) Taylor polynomials: formulas
(49) Classic examples of Taylor polynomials
(50) Computational tricks regarding Taylor polynomials
(51) Prototypes: More serious questions about Taylor polynomials
(52) Determining Tolerance/Error
(53) How large an interval with given tolerance?
(54) Achieving desired tolerance on desired interval
(55) Integrating Taylor polynomials: first example
(56) Integrating the error term: example
1. {\it Introduction}
The usual trouble that people have with `calculus' (not counting general math phobias) is with algebra, not to mention {\it arithmetic} and other more elementary things.
Calculus itself just involves two new processes, {\it differentiation} and {\it integration}, and {\it applications} of these new things to solution of problems that would have been impossible otherwise.
Some things which were very important when calculators and computers didn't exist are not so important now. Some things are just as important. Some things are more important. Some things are important but with a different emphasis.
At the same time, the essential ideas of much of calculus can be very well illustrated without using calculators at all! (Some not, too).
Likewise, many essential ideas of calculus can be very well illustrated without getting embroiled in awful algebra or arithmetic, not to mention trigonometry.
At the same time, study of calculus makes clear how important it is to be able to do the necessary algebra and arithmetic, whether by calculator or by hand.
2. {\it Inequalities}
It is worth reviewing some elementary but important points:
First, a person must remember that the {\it only} way for a product of numbers to be {\it zero} is that one or more of the individual numbers be zero. As silly as this may seem, it is indispensable.
Next, there is the collection of slogans:
$\bullet$ positive times positive is positive
$\bullet$ negative times negative is positive
$\bullet$ negative times positive is negative
$\bullet$ positive times negative is negative
Or, more cutely: the product of two numbers {\it of the same sign} is {\it positive}, while the product of two numbers {\it of opposite signs} is {\it negative}.
Extending this just a little: for a {\it product} of real numbers to be {\it positive}, the number of {\it negative} ones must be {\it even}. If the number of negative ones is {\it odd} then the product is {\it negative}. And, of course, if there are any zeros, then the product is zero.
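These slogans are easy to mechanize. The following little Python sketch (the function name {\tt product\_sign} is just an illustrative choice) determines the sign of a product from its factors using only the parity rule, without multiplying anything out.

```python
# Illustrative sketch: the sign of a product is determined by whether
# any factor is zero, and otherwise by the parity (even/odd) of the
# number of negative factors.
def product_sign(factors):
    if any(f == 0 for f in factors):
        return 0
    negatives = sum(1 for f in factors if f < 0)
    return 1 if negatives % 2 == 0 else -1

print(product_sign([3, -2, -5]))   # two negatives (even): 1, i.e. positive
print(product_sign([-1, -1, -1]))  # three negatives (odd): -1, i.e. negative
print(product_sign([4, 0, -7]))    # a zero factor: 0
```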
{\it Solving inequalities}: This can be very hard in greatest generality, but there are some kinds of problems that are very `{\it do-able}'. One important class contains problems like {\it Solve}:
$$
5\ (x-1)(x+4)(x-2)(x+3)<0
$$
That is, we are asking where a {\it polynomial} is negative (or we could ask where it's positive, too). One important point is that the polynomial is {\it already factored}: to solve this problem we need to have the polynomial factored, and if it isn't already factored this can be a lot of additional work. There are many ways to {\it format} the solution to such a problem, and we just choose {\it one}, which does have the merit of being more efficient than many.
We put the roots of the polynomial
$$
P(x)=5(x-1)(x+4)(x-2)(x+3)=5(x-1)(x-(-4))(x-2)(x-(-3))
$$
in order: in this case, the roots are $1, -4,2, -3$, which we put in order (from left to right)
$$
-4<-3<1<2
$$
The roots of the polynomial $P$ break the number line into the intervals
$$
(-\infty,\ -4),\ (-4,\ -3),\ (-3,1),\ (1,2),\ (2,\ +\infty)
$$
On each of these intervals the polynomial is either positive all the time, or negative all the time, since if it were positive at one point and negative at another then it would have to be zero at some intermediate point!
For input $x$ to the right (larger than) all the roots, all the factors $x+4, x+3, x-1, x-2$ are positive, and the number 5 in front also happens to be positive. Therefore, on the interval $(2,\ +\infty)$ the polynomial $P(x)$ is {\it positive}.
Next, moving {\it across} the root 2 to the interval $(1,\ 2)$ , we see that the factor $x-2$ changes sign from positive to negative, while all the other factors $x-1, x+3$, and $x+4$ do {\it not} change sign. (After all, if they would have done so, then they would have had to be $0$ at some intermediate point, but they {\it weren't}, since we know where they {\it are} zero). Of course the 5 in front stays the same sign. Therefore, since the function was {\it positive} on $(2,\ +\infty)$ and just one factor changed sign in crossing over the point 2, the function is {\it negative} on $(1,\ 2)$ .
Similarly, moving {\it across} the root 1 to the interval $(-3,1)$ , we see that the factor $x-1$ changes sign from positive to negative, while all the other factors $x-2, x+3$, and $x+4$ do {\it not} change sign. (After all, if they would have done so, then they would have had to be $0$ at some intermediate point). The 5 in front
stays the same sign. Therefore, since the function was {\it negative} on $(1,\ 2)$ and just one factor changed sign in crossing over the point 1, the function is {\it positive} on $(-3,1)$ .
Similarly, moving {\it across} the root $-3$ to the interval $(-4,\ -3)$ , we see that the factor $x+3=x-(-3)$ changes sign from positive to negative, while all the other factors $x-2, x-1$, and $x+4$ do {\it not} change sign. (If they would have done so, then they would have had to be $0$ at some intermediate point). The 5 in front stays the same sign. Therefore, since the function was {\it positive} on $(-3,1)$ and just one factor changed sign in crossing over the point $-3$, the function is {\it negative} on $(-4,\ -3)$ .
Last, moving {\it across} the root $-4$ to the interval $(-\infty,\ -4)$ , we see that the factor $x+4=x-(-4)$ changes sign from positive to negative, while all the other factors $x-2, x-1$, and $x+3$ do {\it not} change sign. (If they would have done so, then they would have had to be $0$ at some intermediate point). The 5 in front
stays the same sign. Therefore, since the function was {\it negative} on $(-4,\ -3)$ and just one factor changed
sign in crossing over the point $-4$, the function is {\it positive} on $(-\infty,\ -4)$ .
In summary, we have
$P(x)=5(x-1)(x+4)(x-2)(x+3)>0$ on $(2,\ +\infty)$
$P(x)=5(x-1)(x+4)(x-2)(x+3)<0$ on $(1,\ 2)$
$P(x)=5(x-1)(x+4)(x-2)(x+3)>0$ on $(-3,1)$
$P(x)=5(x-1)(x+4)(x-2)(x+3)<0$ on $(-4,\ -3)$
$P(x)=5(x-1)(x+4)(x-2)(x+3)>0$ on $(-\infty,\ -4)$
In particular, $P(x)<0$ on the {\it union}
$$
(1,\ 2)\cup(-4,\ -3)
$$
of the intervals $(1,\ 2)$ and $(-4,\ -3)$ . That's it.
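If a calculator or computer is at hand, the sign chart just derived can be double-checked by evaluating the polynomial once inside each interval; this Python sketch does exactly that (the sample points are arbitrary choices inside each interval).

```python
# Double-check of the sign chart for P(x) = 5(x-1)(x+4)(x-2)(x+3)
# by evaluating P at one sample point inside each interval.
def P(x):
    return 5 * (x - 1) * (x + 4) * (x - 2) * (x + 3)

tests = [('(-oo,-4)', -5), ('(-4,-3)', -3.5), ('(-3,1)', 0),
         ('(1,2)', 1.5), ('(2,+oo)', 3)]
for name, t in tests:
    print(name, '+' if P(t) > 0 else '-')  # prints +, -, +, -, + in order
```

The signs agree with the conclusion above: $P$ is negative exactly on $(1,\ 2)\cup(-4,\ -3)$.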
As another example, let's see on which intervals
$$
P(x)=-3(1+x^{2})(x^{2}-4)(x^{2}-2x+1)
$$
is positive and on which it's negative. We have to factor it a bit more: recall that we have the nice facts
$$
x^{2}-a^{2}=(x-a)(x+a)=(x-a)(x-(-a))
$$
$$
x^{2}-2ax+a^{2}=(x-a)(x-a)
$$
so that we get
$$
P(x)=-3(1+x^{2})(x-2)(x+2)(x-1)(x-1)
$$
It is important to note that the equation $x^{2}+1=0$ has no {\it real} roots, since the square of any real number is non-negative. Thus, we can't factor any further than this over the real numbers. That is, the roots of $P$, in order, are
$-2<1$ (twice!) $<2$
These numbers break the real line up into the intervals
$$
(-\infty,\ -2),\ (-2,1),\ (1,2),\ (2,\ +\infty)
$$
For $x$ larger than all the roots (meaning $x>2$) all the factors $x+2, x-1, x-1, x-2$ are {\it positive}, while the factor of $-3$ in front is {\it negative}. Thus, on the interval $(2,\ +\infty)$ the polynomial $P(x)$ is {\it negative}.
Next, moving {\it across} the root 2 to the interval $(1,\ 2)$ , we see that the factor $x-2$ changes sign from positive to negative, while all the other factors $1+x^{2}, (x-1)^{2}$, and $x+2$ do {\it not} change sign. (After all, if they would have done so, then they would have had to be $0$ at some intermediate point, but they {\it aren't}). The $-3$ in front stays the same sign. Therefore, since the function was {\it negative} on $(2,\ +\infty)$ and just one factor changed sign in crossing over the point 2, the function is {\it positive} on $(1,\ 2)$ .
A {\it new feature} in this example is that the root 1 occurs {\it twice} in the factorization, so that crossing over the root 1 from the interval $(1,\ 2)$ to the interval $(-2,1)$ really means crossing over {\it two} roots. That is, {\it two}
changes of sign means $no$ changes of sign, in effect. And the other factors $(1+x^{2}), x+2, x-2$ do not change sign, and the $-3$ does not change sign, so since $P(x)$ was {\it positive} on $(1,\ 2)$ it is {\it still} positive on
$(-2,1)$ . (The rest of this example is the same as the first example).
Again, the point is that each time a root of the polynomial is {\it crossed over}, the polynomial changes sign. So if {\it two} are crossed at once (if there is a double root) then there is really $no$ change in sign. If {\it three} roots are crossed at once, then the effect is to {\it change} sign.
Generally, if an {\it even} number of roots are crossed-over, then there is $no$ change in sign, while if an {\it odd} number of roots are crossed-over then there $is$ a change in sign.
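The effect of a double root can also be seen numerically. In this sketch we evaluate the second example polynomial just to the left and right of its roots: crossing the double root at 1 leaves the sign alone, while crossing the simple root at 2 flips it.

```python
# Crossing a root of multiplicity m flips the sign only when m is odd.
# Here P(x) = -3(1+x^2)(x-2)(x+2)(x-1)^2 has a double root at 1.
def P(x):
    return -3 * (1 + x**2) * (x - 2) * (x + 2) * (x - 1)**2

# Just to the left and right of the double root at x = 1:
print(P(0.9) > 0, P(1.1) > 0)  # True True -- no sign change
# Across the simple root at x = 2 the sign does change:
print(P(1.9) > 0, P(2.1) > 0)  # True False -- sign change
```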
\# 2.1 Find the intervals on which $f(x)=x(x-1)(x+1)$ is positive, and the intervals on which it is negative.
\# 2.2 Find the intervals on which $f(x)=(3x-2)(x-1)(x+1)$ is positive, and the intervals on which it is negative.
\# 2.3 Find the intervals on which $f(x)=(3x-2)(3-x)(x+1)$ is positive, and the intervals on which it is negative.
3. {\it Domain of functions}
A function $f$ is a {\it procedure} or {\it process} which converts {\it input} to {\it output} in some way. A traditional mathematics name for the input is {\it argument}, but this certainly is confusing when compared with ordinary English usage.
The collection of all `legal', `reasonable', or `sensible' inputs is called the domain of the function. The collection of all possible outputs is the range. (Contrary to the impression some books might give, it can be very difficult to figure out all possible outputs!)
The question `What's the domain of this function?' is usually not what it appears to be. For one thing, if we are being formal, then a function hasn't even been {\it described} if its {\it domain} hasn't been described!
What is really meant, usually, is something far less mysterious. The question usually {\it really} is `What numbers can be used as inputs to this function without anything bad happening?'.
For our purposes, `something bad happening' just refers to one of
$\bullet$ trying to take the square root of a negative number
$\bullet$ trying to take a logarithm of a negative number
$\bullet$ trying to divide by zero
$\bullet$ trying to find {\it arc-cosine} or {\it arc-sine} of a number bigger than 1 or less than $-1$
Of course, dividing by zero is the worst of these, but as long as we insist that everything be {\it real} numbers (rather than {\it complex} numbers) we can't do the other things either.
For example, what is the domain of the function
\begin{center}
$f(x)=\sqrt{x^{2}-1}$?
\end{center}
Well, what could go wrong here? No division is indicated at all, so there is no risk of dividing by $0$. But we are taking a square root, so we must insist that $x^{2}-1\geq 0$ to avoid having complex numbers come up. That is, a preliminary description of the `domain' of this function is that it is the set of real numbers $x$ so that $x^{2}-1\geq 0.$
But we can be clearer than this: we know how to solve such inequalities. Often it's simplest to see what to {\it exclude} rather than {\it include}: here we want to {\it exclude} from the domain any numbers $x$ so that $x^{2}-1<0.$
We recognize that we can factor
$$
x^{2}-1=(x-1)(x+1)=(x-1)(x-(-1))
$$
This is negative exactly on the interval $(-1,1)$ , so this is the interval we must prohibit in order to have just the domain of the function. That is, the domain is the union of two intervals:
$$
(-\infty,\ -1]\cup[1,\ +\infty)
$$
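As a quick numerical sanity check, we can test the condition $x^{2}-1\geq 0$ at a few sample points; this Python sketch (the helper name {\tt in\_domain} is just illustrative) confirms that points in $(-1,\ 1)$ are excluded while the rest are allowed.

```python
# The domain of f(x) = sqrt(x^2 - 1) is where x^2 - 1 >= 0,
# i.e. the union (-oo, -1] U [1, +oo).
def in_domain(x):
    return x**2 - 1 >= 0

print([x for x in (-2, -1, 0, 1, 2) if in_domain(x)])  # [-2, -1, 1, 2]
```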
\# 3.4 Find the domain of the function
$$
f(x)=\frac{x-2}{x^{2}+x-2}
$$
That is, find the largest subset of the real line on which this formula can be evaluated meaningfully.
\# 3.5 Find the domain of the function
$$
f(x)=\frac{x-2}{\sqrt{x^{2}+x-2}}
$$
\# 3.6 Find the domain of the function
$$
f(x)=\sqrt{x(x-1)(x+1)}
$$
4. {\it Lines (and other items in Analytic Geometry)}
Let's review some basic analytic geometry: {\it this is description of geometric objects by numbers and by algebra}.
The first thing is that we have to pick a {\it special point}, the origin, from which we'll measure everything else. Then, implicitly, we need to choose a unit of measure for distances, but this is indeed usually only {\it implicit}, so we don't worry about it.
The second step is that points are described by {\it ordered pairs} of numbers: the first of the two numbers
tells how far to the {\it right} horizontally the point is from the {\it origin} (and {\it negative} means go left instead
of right), and the second of the two numbers tells how far $up$ from the origin the point is (and {\it negative}
means go down instead of up). The first number is the horizontal coordinate and the second is the vertical coordinate. The old-fashioned names {\it abscissa} and {\it ordinate} also are used sometimes.
Often the horizontal coordinate is called the $x$-{\it coordinate}, and often the vertical coordinate is called the $y$-{\it coordinate}, but the letters $x, y$ can be used for many other purposes as well, so {\it don't rely on this labelling}.
The next idea is that {\it an equation can describe a curve}. It is important to be a little careful with use of
language here: for example, a correct assertion is
{\it The set of points} $(x,\ y)$ {\it so that} $x^{2}+y^{2}=1$ {\it is a circle}.
It is {\it not strictly correct} to say that $x^{2}+y^{2}=1$ {\it is} a circle, mostly because {\it an equation is not a circle}, even though it may {\it describe} a circle. And conceivably the $x, y$ might be being used for something other than horizontal and vertical coordinates. Still, very often the language is shortened so that the phrase `{\it The set of points} $(x,\ y)$ {\it so that}' is omitted. Just be careful.
The simplest curves are lines. The main things to remember are:
$\bullet$ Slope of a line is {\it rise over run}, meaning {\it vertical change divided by horizontal change} (moving from left to right in the usual coordinate system).
$\bullet$ The equation of a line passing through a point $(x_{0},\ y_{0})$ and having slope $m$ can be written (in so-called point-slope form)

$y=m(x-x_{0})+y_{0}$ or $y-y_{0}=m(x-x_{0})$
$\bullet$ The equation of the line passing through two points $(x_{1},\ y_{1}), (x_{2},\ y_{2})$ can be written (in so-called two-point form) as
$y=\displaystyle \frac{y_{1}-y_{2}}{x_{1}-x_{2}}(x-x_{1})+y_{1}$
$\bullet$ \ldots unless $x_{1}=x_{2}$, in which case the two points are aligned vertically, and the line can't be written that way. Instead, the description of a vertical line through a point with horizontal coordinate $x_{1}$ is just
$$
x=x_{1}
$$
Of course, the two-point form can be derived from the point-slope form, since the slope $m$ of a line through two points $(x_{1},\ y_{1}), (x_{2},\ y_{2})$ is that possibly irritating expression which occurs above:
$$
m=\frac{y_{1}-y_{2}}{x_{1}-x_{2}}
$$
And now is maybe a good time to point out that there is nothing sacred about the horizontal coordinate being called `{\it x}' and the vertical coordinate `{\it y}'. Very {\it often} these $do$ happen to be the names, but it can be otherwise, so just pay attention.
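The two-point form is easy to mechanize. This Python sketch (the function name {\tt line\_through} is just an illustrative choice) builds the line through two given points, assuming $x_{1}\neq x_{2}$; for concreteness it uses the points $(1,\ 2)$ and $(3,\ 8)$ from the exercises below.

```python
# Two-point form: the line through (x1, y1) and (x2, y2), assuming
# x1 != x2 (a vertical line cannot be written this way).
def line_through(x1, y1, x2, y2):
    m = (y1 - y2) / (x1 - x2)           # slope: rise over run
    return lambda x: m * (x - x1) + y1  # point-slope form

f = line_through(1, 2, 3, 8)  # slope is (2 - 8)/(1 - 3) = 3
print(f(1), f(3))             # recovers the two given points: 2.0 8.0
```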
\# 4.7 Write the equation for the line passing through the two points $(1,\ 2)$ and $(3,\ 8)$ .
\# 4.8 Write the equation for the line passing through the two points $(-1,\ 2)$ and $(3,\ 8)$ .
\# 4.9 Write the equation for the line passing through the point $(1,\ 2)$ with slope 3.
\# 4.10 Write the equation for the line passing through the point $(11,\ -5)$ with slope $-1.$
5. {\it Elementary limits}
The idea of limit is intended to be merely a slight extension of our {\it intuition}. The so-called $\epsilon,\ \delta$-definition was invented after people had been doing calculus for hundreds of years, in response to certain relatively pathological technical difficulties. For quite a while, we will be entirely concerned with situations in which we can either `directly' see the value of a limit {\it by plugging the limit value in}, or where we {\it transform} the expression into one where we {\it can} just plug in.
So long as we are dealing with functions no more complicated than polynomials, most {\it limits} are easy to understand: for example,
$$
\lim_{x\rightarrow 3}4x^{2}+3x-7=4\cdot(3)^{2}+3\cdot(3)-7=38
$$
$$
\lim_{x\rightarrow 3}\frac{4x^{2}+3x-7}{2-x^{2}}=\frac{4\cdot(3)^{2}+3\cdot(3)-7}{2-(3)^{2}}=\frac{38}{-7}
$$
The point is that we just substituted the `3' in and {\it nothing bad happened}. This is the way people evaluated easy limits for hundreds of years, and should always be the first thing a person does, just to see what happens.
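Substitution is something a calculator or computer does directly; this Python sketch just evaluates the two expressions above at $x=3$.

```python
# For limits of polynomials, and of rational functions whose denominator
# is not zero at the limit point, substitution gives the answer directly.
def p(x):
    return 4 * x**2 + 3 * x - 7

def q(x):
    return (4 * x**2 + 3 * x - 7) / (2 - x**2)

print(p(3))  # 38
print(q(3))  # 38/(-7)
```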
\# 5.11 Find $\displaystyle \lim_{x\rightarrow 5}2x^{2}-3x+4.$
\# 5.12 Find $\displaystyle \lim_{x\rightarrow 2}\frac{x+1}{x^{2}+3}.$
\# 5.13 Find $\displaystyle \lim_{x\rightarrow 1}\sqrt{x+1}.$
6. {\it Limits with cancellation}
But sometimes things `blow up' when the limit number is substituted:
\begin{center}
$\displaystyle \lim_{x\rightarrow 3}\frac{x^{2}-9}{x-3}=\frac{0}{0}\ ??$
\end{center}
Ick. This is not good. However, in this example, as in {\it many} examples, doing a bit of simplifying algebra first gets rid of the factors in the numerator and denominator which cause them to vanish:
$$
\lim_{x\rightarrow 3}\frac{x^{2}-9}{x-3}=\lim_{x\rightarrow 3}\frac{(x-3)(x+3)}{x-3}=\lim_{x\rightarrow 3}\frac{(x+3)}{1}=\frac{(3+3)}{1}=6
$$
Here at the very end we {\it did} just plug in, after all.
The lesson here is that some of those darn algebra tricks (`identities') are helpful, after all. If you have a `bad' limit, {\it always} look for some {\it cancellation} of factors in the numerator and denominator.
In fact, for hundreds of years people {\it only} evaluated limits in this style! After all, human beings can't really execute infinite limiting processes, and so on.
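We can also watch the limit happen numerically: evaluating the original quotient at inputs closer and closer to 3 (but never {\it at} 3, where it is undefined) shows the values closing in on 6, just as the cancellation predicted.

```python
# Numerical check that (x^2 - 9)/(x - 3) approaches 6 as x -> 3:
# after cancelling the common factor x - 3, the expression is x + 3.
def g(x):
    return (x**2 - 9) / (x - 3)

for h in (0.1, 0.01, 0.001):
    print(g(3 + h), g(3 - h))  # both columns approach 6
```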
\# 6.14 Find $\displaystyle \lim_{x\rightarrow 2}\frac{x-2}{x^{2}-4}$
\# 6.15 Find $\displaystyle \lim_{x\rightarrow 3}\frac{x^{2}-9}{x-3}$
\# 6.16 Find $\displaystyle \lim_{x\rightarrow 3}\frac{x^{2}}{x-3}$
7. {\it Limits at infinity}
Next, let's consider
$$
\lim_{x\rightarrow\infty}\frac{2x+3}{5-x}
$$
The hazard here is that $\infty$ is {\it not} a number that we can do arithmetic with in the normal way. Don't even try it. So we {\it can}'{\it t} really just `plug in' $\infty$ to the expression to see what we get.
On the other hand, what we really mean anyway is {\it not} that $x$ `becomes infinite' in some {\it mystical} sense, but rather that it just `gets larger and larger'. In this context, the crucial observation is that, as $x$ gets larger and larger, $1/x$ gets smaller and smaller (going to $0$). Thus, just based on what we want this all to mean,
$$
\lim_{x\rightarrow\infty}\frac{1}{x}=0
$$
$$
\lim_{x\rightarrow\infty}\frac{1}{x^{2}}=0
$$
$$
\lim_{x\rightarrow\infty}\frac{1}{x^{3}}=0
$$
and so on.
This is the essential idea for evaluating simple kinds of limits as $ x\rightarrow\infty$: rearrange the whole thing so that everything is expressed in terms of $1/x$ instead of $x$, and then realize that
$\displaystyle \lim_{x\rightarrow\infty}$ is the same as $\displaystyle \lim_{\frac{1}{x}\rightarrow 0}$
So, in the example above, divide numerator and denominator both by {\it the largest power of} $x$ {\it appearing}
{\it anywhere}:
$$
\lim_{x\rightarrow\infty}\frac{2x+3}{5-x}=\lim_{x\rightarrow\infty}\frac{2+\frac{3}{x}}{\frac{5}{x}-1}=\lim_{y\rightarrow 0}\frac{2+3y}{5y-1}=\frac{2+3\cdot 0}{5\cdot 0-1}=-2
$$
The point is that we called $1/x$ by a new name, `{\it y}', and rewrote the original limit as $ x\rightarrow\infty$ as a limit as $y\rightarrow 0$. Since $0$ {\it is} a genuine number that we can do arithmetic with, this brought us back to ordinary everyday arithmetic. Of course, it was necessary to rewrite the thing we were taking the limit of in terms of $1/x$ (renamed `{\it y}').
Notice that this is an example of a situation where we used the letter `{\it y}' for something other than the name or value of the vertical coordinate.
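Again, the conclusion can be watched numerically: plugging in larger and larger $x$ shows the quotient settling down toward $-2$.

```python
# Numerical check that (2x + 3)/(5 - x) approaches -2 as x grows:
# dividing top and bottom by x gives (2 + 3/x)/(5/x - 1) -> 2/(-1) = -2.
def f(x):
    return (2 * x + 3) / (5 - x)

for x in (10.0, 1000.0, 1000000.0):
    print(x, f(x))  # the values approach -2
```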
\# 7.17 Find $\displaystyle \lim_{x\rightarrow\infty}\frac{x+1}{x^{2}+3}.$
\# 7.18 Find $\displaystyle \lim_{x\rightarrow\infty}\frac{x^{2}+3}{x+1}.$
\# 7.19 Find $\displaystyle \lim_{x\rightarrow\infty}\frac{x^{2}+3}{3x^{2}+x+1}.$
\# 7.20 Find $\displaystyle \lim_{x\rightarrow\infty}\frac{1-x^{2}}{5x^{2}+x+1}.$
8. {\it Limits of exponential functions at infinity}
It is important to appreciate the behavior of exponential functions as the input to them becomes a large positive number, or a large negative number. This behavior is different from the behavior of polynomials or rational functions, which behave similarly for large inputs regardless of whether the input is large {\it positive} or large {\it negative}. By contrast, for exponential functions, the behavior is radically different for large {\it positive} or large {\it negative}.
As a reminder and an explanation, let's remember that exponential notation started out simply as an abbreviation: for positive integer $n,$
$2^{n}=2\times 2\times 2\times\ldots\times 2$ ( $n$ factors)
$10^{n}=10\times 10\times 10\times\ldots\times 10$ ( $n$ factors)
$(\displaystyle \frac{1}{2})^{n}=(\frac{1}{2})\times(\frac{1}{2})\times(\frac{1}{2})\times\ldots\times(\frac{1}{2})$ ( $n$ factors)
From this idea it's not hard to understand the fundamental properties of exponents (they're not {\it laws} at all):
$$
a^{m+n}=\underbrace{a\times a\times\ldots\times a}_{m+n\ \mathrm{factors}}=\underbrace{a\times\ldots\times a}_{m\ \mathrm{factors}}\times\underbrace{a\times\ldots\times a}_{n\ \mathrm{factors}}=a^{m}\times a^{n}
$$
and also
$$
(a^{m})^{n}=\underbrace{a^{m}\times a^{m}\times\ldots\times a^{m}}_{n\ \mathrm{factors}}=a^{m\times n}
$$
at least for positive integers $m, n$. Even though we can only easily see that these properties are true when the exponents are positive integers, the {\it extended} notation is guaranteed (by its {\it meaning}, not by {\it law}) to follow the same rules.
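For positive integer exponents these properties amount to counting factors, and a quick spot-check on a calculator or computer is reassuring:

```python
# Spot-check of the fundamental properties of exponents for positive
# integer exponents, where they follow from counting factors.
a, m, n = 2, 5, 3
assert a**(m + n) == a**m * a**n   # a^(m+n) = a^m * a^n
assert (a**m)**n == a**(m * n)     # (a^m)^n = a^(mn)
print(a**(m + n), a**m * a**n)     # both are 256
```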
Use of {\it other} numbers in the exponent is something that came later, and is also just an {\it abbreviation}, which happily was {\it arranged} to match the more intuitive simpler version. For example,
$$
a^{-1}=\frac{1}{a}
$$
and (as consequences)
$$
a^{-n}=a^{n\times(-1)}=(a^{n})^{-1}=\frac{1}{a^{n}}
$$
(whether $n$ is positive or not). Just to check one example of consistency with the properties above, notice that
$$
a=a^{1}=a^{(-1)\times(-1)}=\frac{1}{a^{-1}}=\frac{1}{1/a}=a
$$
This is not supposed to be surprising, but rather reassuring that we won't reach false conclusions by such manipulations.
Also, fractional exponents fit into this scheme. For example
$$
a^{1/2}=\sqrt{a}\ \ \ \ \ a^{1/3}=\sqrt[3]{a}
$$
$$
a^{1/4}=\sqrt[4]{a}\ \ \ \ \ a^{1/5}=\sqrt[5]{a}
$$
This is {\it consistent} with earlier notation: the fundamental property of the $n^{\mathrm{t}\mathrm{h}}$ root of a number is that its $n^{\mathrm{t}\mathrm{h}}$ power is the original number. We can check:
$$
(a^{1/n})^{n}=a^{\frac{1}{n}\times n}=a^{1}=a
$$
Again, this is not supposed to be a surprise, but rather a consistency check.
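The same consistency check can be done numerically (up to the small round-off error inherent in floating-point arithmetic):

```python
# Consistency check for fractional exponents: a^(1/n) is the n-th root,
# so raising it back to the n-th power recovers a (up to round-off).
a = 7.0
for n in (2, 3, 4, 5):
    root = a ** (1.0 / n)
    print(n, root, root ** n)  # the last column stays very close to 7
```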
Then for arbitrary {\it rational} exponents $m/n$ we can maintain the same properties: first, the definition is
just
$$
a^{m/n}=(\sqrt[n]{a})^{m}
$$
One hazard is that, if we want to have only real numbers (as opposed to complex numbers) come up, then we should not try to take square roots, $4^{\mathrm{t}\mathrm{h}}$ roots, $6^{\mathrm{t}\mathrm{h}}$ roots, or any {\it even} order root of negative numbers.
For general {\it real} exponents $x$ we likewise should {\it not} try to understand $a^{x}$ except for $a>0$ or we'll have to use complex numbers (which wouldn't be so terrible). But the value of $a^{x}$ can only be defined as a {\it limit}: let $r_{1}, r_{2}, \ldots$ be a sequence of {\it rational} numbers approaching $x$, and define
$$
a^{x}=\lim_{i}a^{r_{i}}
$$
We would have to check that this definition does not accidentally depend upon the sequence approaching $x$ (it doesn't), and that the same properties still work (they do).
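The limit definition can be made concrete with a small numerical experiment: to approximate $2^{\sqrt{2}}$, use the rational numbers obtained by truncating the decimal expansion of $\sqrt{2}=1.41421356\ldots$ (each truncation is a rational number).

```python
# Sketch of the limit definition of a^x for irrational x: approximate
# 2^sqrt(2) by 2^r for rational r obtained by truncating sqrt(2).
truncations = [1.4, 1.41, 1.414, 1.4142, 1.41421]
for r in truncations:
    print(r, 2.0 ** r)  # the values settle down toward 2^sqrt(2)
```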
The number $e$ is not something that would come up in really elementary mathematics, because its reason for existence is not really elementary. Anyway, it's approximately
$$
e\approx 2.71828182845905
$$
but if this ever really mattered you'd have a calculator at your side, hopefully.
With the definitions in mind it is easier to make sense of questions about limits of exponential functions. The two companion issues are to evaluate
$$
\lim_{x\rightarrow+\infty}a^{x}
$$
$$
\lim_{x\rightarrow-\infty}a^{x}
$$
Since we are allowing the exponent $x$ to be {\it real}, we'd better demand that $a$ be a {\it positive real} number (if we want to avoid complex numbers, anyway). Then
$\displaystyle \lim_{x\rightarrow+\infty}a^{x}=\left\{\begin{array}{ll}
+\infty & \mathrm{if}\ a>1\\
1 & \mathrm{if}\ a=1\\
0 & \mathrm{if}\ 0<a<1
\end{array}\right.$

$\displaystyle \lim_{x\rightarrow-\infty}a^{x}=\left\{\begin{array}{ll}
0 & \mathrm{if}\ a>1\\
1 & \mathrm{if}\ a=1\\
+\infty & \mathrm{if}\ 0<a<1
\end{array}\right.$

To remember which case is which, it may help to keep in mind a representative example from each range of bases: $a=2$ for $a>1$ and $a=\displaystyle \frac{1}{2}$ for $0<a<1$.

16. {\it Critical points, monotone increase and decrease}

$\bullet$ If $f'(t_{i+1})>0$, then $f$ is {\it increasing} on $(x_{i},\ x_{i+1})$ , while if $f'(t_{i+1})<0$, then $f$ is {\it decreasing} on that interval.
$\bullet$ Conclusion: on the `outside' interval $(-\infty,\ x_{0})$ , the function $f$ is {\it increasing} if $f'(t_{0})>0$ and is {\it decreasing} if $f'(t_{0})<0$. Similarly, on $(x_{n},\ \infty)$ , the function $f$ is {\it increasing} if $f'(t_{n})>0$ and is {\it decreasing} if $f'(t_{n})<0.$

It is certainly true that there are many possible shortcuts to this procedure, especially for polynomials of low degree or other rather special functions. However, if you are able to quickly compute values of (derivatives of!) functions on your calculator, you may as well use this procedure as any other.
Exactly which {\it auxiliary points} we choose does not matter, as long as they fall in the correct intervals,
since we just need a single sample on each interval to find out whether $f'$ is positive or negative there.
Usually we pick integers or some other kind of number to make computation of the derivative there as
easy as possible.
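As a sketch of the whole procedure on a computer, here it is carried out for $f(x)=x^{3}-12x+3$, one of the examples below: sample the derivative once in each interval determined by the critical points.

```python
# The auxiliary-point procedure for f(x) = x^3 - 12x + 3:
# sample f' once in each interval determined by the critical points.
def fprime(x):
    return 3 * x**2 - 12  # derivative of x^3 - 12x + 3

critical_points = [-2, 2]      # from solving f'(x) = 0
auxiliary_points = [-3, 0, 3]  # one per interval; integers for easy arithmetic
for t in auxiliary_points:
    print(t, 'increasing' if fprime(t) > 0 else 'decreasing')
```

Only the {\it sign} of each sampled value matters, which is the point made just below.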
It's important to realize that even if a question does not directly ask for {\it critical points}, and maybe does not ask about {\it intervals} either, still it is {\it implicit} that we have to find the critical points and see whether the function is increasing or decreasing on the {\it intervals between critical points}. Examples:
Find the critical points and intervals on which $f(x)=x^{2}+2x+9$ is increasing and decreasing: Compute $f'(x)=2x+2$. Solve $2x+2=0$ to find only one critical point $-1$. To the left of $-1$ let's use the {\it auxiliary point} $t_{0}=-2$ and to the right use $t_{1}=0$. Then $f'(-2)=-2<0$, so $f$ is {\it decreasing} on the interval $(-\infty,\ -1)$ . And $f'(0)=2>0$, so $f$ is {\it increasing} on the interval $(-1,\ \infty)$ .
Find the critical points and intervals on which $f(x)=x^{3}-12x+3$ is increasing, decreasing. Compute $f'(x)=3x^{2}-12$. Solve $3x^{2}-12=0$: this simplifies to $x^{2}-4=0$, so the {\it critical points} are $\pm 2$. To the left of $-2$ choose {\it auxiliary point} $t_{0}=-3$, between $-2$ and $+2$ choose auxiliary point $t_{1}=0$, and to the right of $+2$ choose $t_{2}=3$. Plugging in the auxiliary points to the derivative, we find that $f'(-3)=27-12>0,$ so $f$ is {\it increasing} on $(-\infty,\ -2)$ . Since $f'(0)=-12<0, f$ is {\it decreasing} on $(-2,\ +2)$ , and since $f'(3)=27-12>0, f$ is {\it increasing} on $(2,\ \infty)$ .
Notice too that we don't really need to know the exact value of the derivative at the auxiliary points: all we care about is whether the derivative is positive or negative. The point is that sometimes some tedious computation can be avoided by stopping as soon as it becomes clear whether the derivative is positive or negative.
\# 16.44 Find the critical points and intervals on which $f(x)=x^{2}+2x+9$ is increasing, decreasing.
\# 16.45 Find the critical points and intervals on which $f(x)=3x^{2}-6x+7$ is increasing, decreasing.
\# 16.46 Find the critical points and intervals on which $f(x)=x^{3}-12x+3$ is increasing, decreasing.
17. {\it Minimization and Maximization}
The fundamental idea which makes calculus useful in understanding problems of maximizing and minimizing things is that at a {\it peak} of the graph of a function, or at the bottom of a {\it trough}, the tangent is {\it horizontal}. That is, {\it the derivative} $f'(x_{0})$ {\it is} $0$ {\it at points} $x_{0}$ {\it at which} $f(x_{0})$ {\it is a maximum or a minimum}.
Well, a little sharpening of this is necessary: sometimes for either natural or artificial reasons the variable $x$ is restricted to some interval $[a,\ b]$. In that case, we can say that {\it the maximum and minimum values of} $f$ {\it on the interval} $[a,\ b]$ {\it occur among the list of critical points and endpoints of the interval}.
And, if there are points where $f$ is not differentiable, or is discontinuous, then these have to be added in, too. But let's stick with the basic idea, and just ignore some of these complications.
Let's describe a systematic procedure to find the minimum and maximum values of a function $f$ on an
interval $[a,\ b].$
$\bullet$ Solve $f'(x)=0$ to find the list of critical points of $f.$
$\bullet$ Exclude any critical points not inside the interval $[a,\ b].$
$\bullet$ Add to the list the {\it endpoints} $a, b$ of the interval (and any points of discontinuity or non-differentiability!)
$\bullet$ At each point on the list, evaluate the function $f$: the biggest number that occurs is the maximum, and the littlest number that occurs is the minimum.
Find the minima and maxima of the function $f(x)=x^{4}-8x^{2}+5$ on the interval [-1, 3]. First, take the derivative and set it equal to zero to solve for critical points: this is
$$
4x^{3}-16x=0
$$
or, more simply, dividing by 4, it is $x^{3}-4x=0$. Luckily, we can see how to factor this: it is
$$
x(x-2)(x+2)
$$
So the critical points are $-2,0, +2$. Since the interval does not include $-2$, we drop it from our list.
And we {\it add} to the list the endpoints $-1,3$. So the list of numbers to consider as potential spots for
minima and maxima are $-1,0,2,3$. Plugging these numbers into the function, we get (in that order) $-2,5, -11,14$. Therefore, the maximum is 14, which occurs at $x=3$, and the minimum is $-11$, which occurs at $x=2.$
Notice that in the previous example the maximum did not occur at a critical point, but by coincidence did occur at an endpoint.
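The procedure just executed is mechanical enough to sketch in code: evaluate $f$ at the surviving critical points together with the endpoints, and take the largest and smallest values. A Python illustration of this example (our own code, not part of the text):

```python
# Max/min of f(x) = x^4 - 8x^2 + 5 on [-1, 3]: evaluate f at the critical
# points inside the interval (0 and 2) together with the endpoints (-1 and 3).
def f(x):
    return x**4 - 8 * x**2 + 5

candidates = [-1, 0, 2, 3]
values = {x: f(x) for x in candidates}
print(values)                         # {-1: -2, 0: 5, 2: -11, 3: 14}
print(max(values, key=values.get))    # 3  (maximum value 14)
print(min(values, key=values.get))    # 2  (minimum value -11)
```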
You have 200 feet of fencing with which you wish to enclose the largest possible rectangular garden. What is the largest garden you can have?
Let $x$ be the length of the garden, and $y$ the width. Then the area is simply $xy$. Since the perimeter is
200, we know that $2x+2y=200$, which we can solve to express $y$ as a function of $x$: we find that $y= 100-x$. Now we can rewrite the area as a function of $x$ alone, which sets us up to execute our procedure:
$$
area=xy=x(100-x)
$$
The derivative of this function with respect to $x$ is $100-2x$. Setting this equal to $0$ gives the equation
$$
100-2x=0
$$
to solve for critical points: we find just {\it one}, namely $x=50.$
Now what about endpoints? What is the interval? In this example we must look at `physical' considerations to figure out what interval $x$ is restricted to. Certainly a {\it width} must be a positive number, so $x>0$ and $y>0$. Since $y=100-x$, the inequality on $y$ gives another inequality on $x$, namely that $x<100$. So $x$ is in $[0,100].$
When we plug the values $0,50,100$ into the function $x(100-x)$ , we get $0$, 2500, $0$, in that order. Thus, the corresponding value of $y$ is $100-50=50$, and the maximal possible area is $50\cdot 50=2500.$
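Checking the garden computation numerically is immediate; a tiny sketch (the name `area` is ours):

```python
# Area x(100 - x) at the critical point 50 and the endpoints 0, 100.
area = lambda x: x * (100 - x)

for x in (0, 50, 100):
    print(x, area(x))    # 0 -> 0, 50 -> 2500, 100 -> 0
```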
\# 17.47 Olivia has 200 feet of fencing with which she wishes to enclose the largest possible rectangular garden. What is the largest garden she can have?
\# 17.48 Find the minima and maxima of the function $f(x)=3x^{4}-4x^{3}+5$ on the interval [-2, 3].
\# 17.49 The cost per hour of fuel to run a locomotive is $v^{2}/25$ dollars, where $v$ is speed, and other costs are {\$} 100 per hour regardless of speed. What is the speed that minimizes cost {\it per mile}?
\# 17.50 The product of two numbers $x, y$ is 16. We know $x\geq 1$ and $y\geq 1$. What is the greatest possible sum of the two numbers?
\# 17.51 Find both the minimum and the maximum of the function $f(x)=x^{3}+3x+1$ on the interval [-2, 2].
18. {\it Local minima and maxima} ({\it First Derivative Test})
A function $f$ has a local maximum or relative maximum at a point $x_{o}$ if the values $f(x)$ of $f$ for $x$ `near' $x_{o}$ are all less than $f(x_{o})$ . Thus, the graph of $f$ near $x_{o}$ has a {\it peak} at $x_{o}$. A function $f$ has a local minimum or relative minimum at a point $x_{o}$ if the values $f(x)$ of $f$ for $x$ `near' $x_{o}$ are all greater
than $f(x_{o})$ . Thus, the graph of $f$ near $x_{o}$ has a {\it trough} at $x_{o}$. (To make the distinction clear, sometimes the `plain' maximum and minimum are called absolute maximum and minimum.)
Yes, in both these definitions we are tolerating ambiguity about what `near' would mean, although the peak/trough requirement on the graph could be translated into a less ambiguous definition. But in any case we'll be able to execute the procedure given below to {\it find} local maxima and minima without worrying over a formal definition.
{\it This procedure is just a variant of things we}'{\it ve already done to analyze the intervals of increase and decrease of a function, or to find absolute maxima and minima}. This procedure starts out the same way as does the analysis of intervals of increase/decrease, and also the procedure for finding (`absolute') maxima and minima of functions.
To find the local maxima and minima of a function $f$ on an interval $[a,\ b]$:
$\bullet$ Solve $f'(x)=0$ to find {\it critical points} of $f.$
$\bullet$ Drop from the list any critical points that aren't in the interval $[a,\ b].$
$\bullet$ Add to the list the endpoints (and any points of discontinuity or non-differentiability): we have an {\it ordered} list of special points in the interval:
$$
a=x_{o}<x_{1}<\ldots<x_{n}=b
$$
Between each pair of consecutive points on the list choose an {\it auxiliary point}; for a critical point $x_{i}$, write $t_{i}$ for the auxiliary point just to its left and $t_{i+1}$ for the one just to its right, and evaluate the derivative $f'$ at these auxiliary points. Then:
$\bullet$ if $f'(t_{i})>0$ and $f'(t_{i+1})<0$ (so $f$ is {\it increasing} to the left of $x_{i}$ and {\it decreasing} to the right of $x_{i}$), then $f$ has a {\it local maximum} at $x_{i}.$
$\bullet$ if $f'(t_{i})<0$ and $f'(t_{i+1})>0$ (so $f$ is {\it decreasing} to the left of $x_{i}$ and {\it increasing} to the right of $x_{i}$), then $f$ has a {\it local minimum} at $x_{i}.$
$\bullet$ if $f'(t_{i})<0$ and $f'(t_{i+1})<0$ (so $f$ is {\it decreasing} to the left of $x_{i}$ and {\it also decreasing} to the right of $x_{i}$), then $f$ has {\it neither} a local maximum nor a local minimum at $x_{i}.$
$\bullet$ if $f'(t_{i})>0$ and $f'(t_{i+1})>0$ (so $f$ is {\it increasing} to the left of $x_{i}$ and {\it also increasing} to the right of $x_{i}$), then $f$ has {\it neither} a local maximum nor a local minimum at $x_{i}.$
The endpoints require separate treatment: There is the auxiliary point $t_{o}$ just to the {\it right} of the left endpoint $a$, and the auxiliary point $t_{n}$ just to the {\it left} of the right endpoint $b$:
$\bullet$ At the {\it left} endpoint $a$, if $f'(t_{o})<0$ (so $f$ is {\it decreasing} to the right of $a$) then $a$ is a {\it local maximum}.
$\bullet$ At the {\it left} endpoint $a$, if $f'(t_{o})>0$ (so $f$ is {\it increasing} to the right of $a$) then $a$ is a {\it local minimum}.
$\bullet$ At the {\it right} endpoint $b$, if $f'(t_{n})<0$ (so $f$ is {\it decreasing} as $b$ is approached from the left) then $b$ is a {\it local minimum}.
$\bullet$ At the {\it right} endpoint $b$, if $f'(t_{n})>0$ (so $f$ is {\it increasing} as $b$ is approached from the left) then $b$ is a {\it local maximum}.
The possibly bewildering list of possibilities really shouldn't be bewildering after you get used to them. We are already acquainted with evaluation of $f'$ at auxiliary points between critical points in order to see whether the function is increasing or decreasing, and now we're just applying that information to see whether the graph {\it peaks, troughs, or does neither} around each critical point and endpoint. That is, {\it the geometric meaning of the derivative}'{\it s being positive or negative is easily translated into conclusions about local maxima or minima}.
Find all the local ($=$relative) minima and maxima of the function $f(x)=2x^{3}-9x^{2}+1$ on the interval [-2, 2]: To find critical points, solve $f'(x)=0$: this is $6x^{2}-18x=0$ or $x(x-3)=0$, so there are two critical points, $0$ and 3. Since 3 is not in the interval we care about, we drop it from our list. Adding the endpoints to the list, we have
$$
-2<0<2
$$
as our ordered list of special points. Let's use auxiliary points $-1,1$. At $-1$ the derivative is $f'(-1)= 24>0$, so the function is increasing there. At $+1$ the derivative is $f'(1)=-12<0$, so the function is decreasing. Thus, since it is increasing to the left and decreasing to the right of $0$, it must be that $0$ is a {\it local maximum}. Since $f$ is increasing to the right of the left endpoint $-2$, that left endpoint must give a {\it local minimum}. Since it is decreasing to the left of the right endpoint $+2$, the right endpoint must be a {\it local minimum}.
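The sign checks in this example can be sketched in a couple of lines of Python (our own illustration; the auxiliary points $-1$ and $1$ are the ones used above):

```python
# First Derivative Test for f(x) = 2x^3 - 9x^2 + 1 on [-2, 2]:
# check the sign of f'(x) = 6x^2 - 18x at the auxiliary points.
fprime = lambda x: 6 * x**2 - 18 * x

print(fprime(-1))   # 24 > 0: f increasing to the left of the critical point 0
print(fprime(1))    # -12 < 0: f decreasing to the right, so 0 is a local max
```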
Notice that although the processes of finding {\it absolute} maxima and minima and {\it local} maxima and minima have a lot in common, they have essential differences. In particular, the only relations between them are that {\it critical points} and {\it endpoints} (and points of discontinuity, etc.) play a big role in both, and that the {\it absolute} maximum is certainly a {\it local} maximum, and likewise the {\it absolute} minimum is certainly a {\it local} minimum.
For example, just plugging critical points into the function does not reliably indicate which points are {\it local} maxima and minima. And, on the other hand, knowing which of the critical points are {\it local} maxima and minima generally is only a small step toward figuring out which are {\it absolute}: values still have to be plugged into the function! {\it So don}'{\it t confuse the two procedures}.
(By the way: while it's fairly easy to make up story-problems where the issue is to find the maximum or minimum value of some function on some interval, it's harder to think of a simple application of {\it local} maxima or minima).
\# 18.52 Find all the local $(=$relative) minima and maxima of the function $f(x)=(x+1)^{3}-3(x+1)$ on the interval [-2, 1].
\# 18.53 Find the local $(=$relative) minima and maxima on the interval [-3, 2] of the function $f(x)=(x+ 1)^{3}-3(x+1)$ .
\# 18.54 Find the local (relative) minima and maxima of the function $f(x)=1-12x+x^{3}$ on the interval [-3, 3].
\# 18.55 Find the local (relative) minima and maxima of the function $f(x)=3x^{4}-8x^{3}+6x^{2}+17$ on the interval [-3, 3].
19. {\it An algebra trick}
The algebra trick here goes back at least 350 years. This is worth looking at if only as an additional review of algebra, but is actually of considerable value in a variety of hand computations as well.
The algebraic identity we use here starts with a product of factors each of which may occur with a {\it fractional or negative exponent}. For example, with 3 such factors:
$$
f(x)=(x-a)^{k}(x-b)^{\ell}(x-c)^{m}
$$
The derivative can be computed by using the product rule twice:
$$
f'(x)=
$$
$$
=k(x-a)^{k-1}(x-b)^{\ell}(x-c)^{m}+(x-a)^{k}\ell(x-b)^{\ell-1}(x-c)^{m}+(x-a)^{k}(x-b)^{\ell}m(x-c)^{m-1}
$$
Now all three summands here have a common factor of
$$
(x-a)^{k-1}(x-b)^{\ell-1}(x-c)^{m-1}
$$
which we can take out, using the distributive law in reverse: we have
$$
f'(x)=
$$
$$
=(x-a)^{k-1}(x-b)^{\ell-1}(x-c)^{m-1}[k(x-b)(x-c)+\ell(x-a)(x-c)+m(x-a)(x-b)]
$$
The minor miracle is that the big expression inside the square brackets is a mere quadratic polynomial in $x.$
Then to determine {\it critical points} we have to figure out the roots of the equation $f'(x)=0$: If $k-1>0$ then $x=a$ is a critical point, if $k-1\leq 0$ it isn't. If $\ell-1>0$ then $x=b$ is a critical point, if $\ell-1\leq 0$ it isn't. If $m-1>0$ then $x=c$ is a critical point, if $m-1\leq 0$ it isn't. And, last but not least, {\it the two roots of the quadratic equation}
$$
k(x-b)(x-c)+\ell(x-a)(x-c)+m(x-a)(x-b)=0
$$
are critical points.
{\it There is also another issue here, about not wanting to take square roots} ({\it and so on}) {\it of negative numbers. We would exclude from the domain of the function any values of} $x$ {\it which would make us try to take a square root of a negative number. But this might also force us to give up some critical points}. Still, this is not the main point here, so we will do examples which avoid this additional worry.
A very simple {\it numerical} example: suppose we are to find the {\it critical points} of the function
$$
f(x)=x^{5/2}(x-1)^{4/3}
$$
To find the critical points, we compute the derivative by using the product rule, the power function rule, and a tiny bit of chain rule:
$$
f'(x)=\frac{5}{2}x^{3/2}(x-1)^{4/3}+x^{5/2}\frac{4}{3}(x-1)^{1/3}
$$
And now {\it solve} this for $x$? It's not at all a polynomial, and it is a little ugly.
But our algebra trick transforms this issue into something as simple as {\it solving a linear equation}: first figure out the largest power of $x$ that occurs in {\it all} the terms: it is $x^{3/2}$, since $x^{5/2}$ occurs in the first term
and $x^{3/2}$ in the second. The largest power of $x-1$ that occurs in {\it all} the terms is $(x-1)^{1/3}$, since $(x-1)^{4/3}$ occurs in the first, and $(x-1)^{1/3}$ in the second. {\it Taking these common factors out} (using the distributive law `backward'), we rearrange to
$$
f'(x)=\frac{5}{2}x^{3/2}(x-1)^{4/3}+x^{5/2}\frac{4}{3}(x-1)^{1/3}
$$
$$
=x^{3/2}(x-1)^{1/3}(\frac{5}{2}(x-1)+\frac{4}{3}x)
$$
$$
=x^{3/2}(x-1)^{1/3}(\frac{5}{2}x-\frac{5}{2}+\frac{4}{3}x)
$$
$$
=x^{3/2}(x-1)^{1/3}(\frac{23}{6}x-\frac{5}{2})
$$
{\it Now} to see when this is $0$ is not so hard: first, since the power of $x$ appearing in front is {\it positive}, $x=0$ makes this expression $0$. Second, since the power of $x-1$ appearing in front is {\it positive}, if $x-1=0$ then the whole expression is $0$. Third, and perhaps {\it unexpectedly}, from the simplified form of the complicated factor, if $\displaystyle \frac{23}{6}x-\frac{5}{2}=0$ then the whole expression is $0$, as well. So, altogether, the {\it critical points} would appear to be
$$
x=0,\ \frac{15}{23},1
$$
{\it Many people would overlook the critical point} $\displaystyle \frac{15}{23}$, {\it which is visible only after the algebra we did}.
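One can confirm numerically that $x=15/23$ really is a critical point, by evaluating the (unfactored) derivative there. A Python sketch (our own check; a hand-rolled real cube root is used because Python's `**` does not take real fractional powers of negative numbers):

```python
# Numerical check of the critical point x = 15/23 found by the algebra trick,
# for f(x) = x^(5/2) (x-1)^(4/3).

def cbrt(u):
    """Real cube root, valid for negative u as well."""
    return u / abs(u) ** (2 / 3) if u != 0 else 0.0

def fprime(x):
    # f'(x) = (5/2) x^(3/2) (x-1)^(4/3) + (4/3) x^(5/2) (x-1)^(1/3)
    return 2.5 * x**1.5 * cbrt(x - 1)**4 + (4 / 3) * x**2.5 * cbrt(x - 1)

print(fprime(15 / 23))   # essentially 0 (up to rounding)
print(fprime(0.5))       # clearly nonzero at a non-critical point
```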
\# 19.56 Find the critical points and intervals of increase and decrease of $f(x)=x^{10}(x-1)^{12}.$
\# 19.57 Find the critical points and intervals of increase and decrease of $f(x)=x^{10}(x-2)^{11}(x+2)^{3}.$
\# 19.58 Find the critical points and intervals of increase and decrease of $f(x)=x^{5/3}(x+1)^{6/5}.$
\# 19.59 Find the critical points and intervals of increase and decrease of $f(x)=x^{1/2}(x+1)^{4/3}(x-1)^{-11/3}.$
20. {\it Linear approximations: approximation by differentials}
The idea here in `geometric' terms is that in some vague sense a curved line can be approximated by a
straight line tangent to it. Of course, this approximation is only good at all `near' the point of tangency, and so on. So the only formula here is secretly the formula for the tangent line to the graph of a function. There is some hassle due to the fact that there are so many different choices of symbols to {\it write} it.
We can write some formulas: Let $f$ be a function, and fix a point $x_{o}$. The idea is that {\it for} $x$ `{\it near}' $x_{o}$ {\it we have an} `{\it approximate}' {\it equality}
$$
f(x)\approx f(x_{o})+f'(x_{o})(x-x_{o})
$$
We do {\it not} attempt to clarify what {\it either} `near' or `approximate' mean in this context. What is really true here is that for a given value $x$, the quantity
$$
f(x_{o})+f'(x_{o})(x-x_{o})
$$
is {\it exactly} the $y$-coordinate of the line {\it tangent} to the graph at $x_{o}$.
The approximation statement has many paraphrases in varying choices of symbols, and a person needs to be able to recognize all of them. For example, one of the more traditional paraphrases, which introduces some slightly silly but oh-so-traditional notation, is the following one. We might also say that $y$ is a function of $x$ given by $y=f(x)$ . Let
$\triangle x=$ small change in $x$
$\triangle y=$ corresponding change in $y=f(x+\triangle x)-f(x)$
Then the assertion is that
$$
\triangle y\approx f'(x)\triangle x
$$
Sometimes some texts introduce the following questionable (but traditionally popular!) notation:
$dy=f'(x)dx=$ approximation to change in $y$
$$
dx=\triangle x
$$
and call the $dx$ and $dy$ `{\it differentials}'. And then this whole procedure is `approximation by differentials'. A not particularly enlightening paraphrase, using the previous notation, is
$$
dy\approx\triangle y
$$
Even though you may see people writing this, don't do it.
More paraphrases, with varying symbols:
$$
f(x+\triangle x)\approx f(x)+f'(x)\triangle x
$$
$$
f(x+\delta)\approx f(x)+f'(x)\delta
$$
$$
f(x+h)\approx f(x)+f'(x)h
$$
$$
f(x+\triangle x)-f(x)\approx f'(x)\triangle x
$$
$$
y+\triangle y\approx f(x)+f'(x)\triangle x
$$
$$
\triangle y\approx f'(x)\triangle x
$$
{\it A little history}: Until just 20 or 30 years ago, calculators were not widely available, and especially not typically able to evaluate trigonometric, exponential, and logarithm functions. In that context, the kind of vague and unreliable `approximation' furnished by `differentials' was certainly worthwhile in many situations.
By contrast, now that pretty sophisticated calculators are widely available, some things that once seemed sensible are no longer. For example, a very traditional type of question is to `approximate $\sqrt{10}$ by differentials'. A reasonable contemporary response would be to simply punch in `1', `0', `$\sqrt{\phantom{x}}$' on your calculator and get the answer immediately to 10 decimal places. But this was possible only relatively recently.
{\it Example}: Let's approximate $\sqrt{17}$ by differentials. For this problem to make sense at all {\it imagine that you have no calculator}. We take $f(x)=\sqrt{x}=x^{1/2}$. {\it The idea here is that we can easily evaluate `by hand' both} $f$ {\it and} $f'$ {\it at the point} $x=16$ {\it which is `near'} $17$. (Here $f'(x)=\displaystyle \frac{1}{2}x^{-1/2}$.) Thus, here
$$
\triangle x=17-16=1
$$
and
$$
\sqrt{17}=f(17)\approx f(16)+f'(16)\triangle x=\sqrt{16}+\frac{1}{2}\frac{1}{\sqrt{16}}\cdot 1=4+\frac{1}{8}
$$
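The same computation in Python, compared against the true value (a check of our own, not in the text):

```python
# Differential approximation: sqrt(17) ~ f(16) + f'(16)*dx with f = sqrt.
import math

x0, dx = 16, 1
approx = math.sqrt(x0) + (1 / (2 * math.sqrt(x0))) * dx
print(approx)            # 4.125, i.e. 4 + 1/8
print(math.sqrt(17))     # about 4.1231, so the approximation is quite close
```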
{\it Example}: Similarly, if we wanted to approximate $\sqrt{18}$ `by differentials', we'd again take $f(x)=\sqrt{x}=x^{1/2}.$ Still we imagine that we are doing this `by hand', and then of course we can `easily evaluate' the function $f$ and its derivative $f'$ at the point $x=16$ which is `near' 18. Thus, here
$$
\triangle x=18-16=2
$$
and
$$
\sqrt{18}=f(18)\approx f(16)+f'(16)\triangle x=\sqrt{16}+\frac{1}{2}\frac{1}{\sqrt{16}}\cdot 2=4+\frac{1}{4}
$$
Why not use the `good' point 25 as the `nearby' point to find $\sqrt{18}$? Well, in broad terms, the further away your `good' point is, the worse the approximation will be. Yes, it is true that we have little idea how good or bad the approximation is {\it anyway}.
It is somewhat more sensible to {\it not} use this idea for numerical work, but rather to say things like
$$
\sqrt{x+1}\approx\sqrt{x}+\frac{1}{2}\frac{1}{\sqrt{x}}
$$
and
$$
\sqrt{x+h}\approx\sqrt{x}+\frac{1}{2}\frac{1}{\sqrt{x}}\cdot h
$$
This kind of assertion is more than any particular numerical example would give, because it gives a {\it relationship}, telling how much the {\it output} changes for given change in {\it input}, depending what {\it regime} ($=$interval) the input is generally in. In this example, we can make the {\it qualitative} observation that {\it as} $x$ {\it increases the difference} $\sqrt{x+1}-\sqrt{x}$ {\it decreases}.
{\it Example}: Another numerical example: Approximate $\sin 31^{o}$ `by differentials'. Again, the point is {\it not} to hit `3', `1', `$\sin$' on your calculator (after switching to degrees), but rather to {\it imagine that you have no calculator}. And we are supposed to remember from pre-calculator days the `special angles' and the values of
trig functions at them: $\displaystyle \sin 30^{o}=\frac{1}{2}$ and $\displaystyle \cos 30^{o}=\frac{\sqrt{3}}{2}$. So we'd use the function $f(x)=\sin x$, and we'd imagine that we can evaluate $f$ and $f'$ easily by hand at $30^{o}$. Then
$\displaystyle \triangle x=31^{o}-30^{o}=1^{o}=1^{o}\cdot\frac{2\pi\ \mathrm{radians}}{360^{o}}=\frac{2\pi}{360}$ radians
We have to rewrite things in radians since we really only can compute derivatives of trig functions in radians. Yes, this is a complication in our supposed `computation by hand'. Anyway, we have
$$
\sin 31^{o}=f(31^{o})\approx f(30^{o})+f'(30^{o})\triangle x=\sin 30^{o}+\cos 30^{o}\cdot\frac{2\pi}{360}
$$
$$
=\frac{1}{2}+\frac{\sqrt{3}}{2}\frac{2\pi}{360}
$$
Evidently we are to {\it also} imagine that we {\it know} or can easily {\it find} $\sqrt{3}$ (by differentials?) as well as a value of $\pi$. {\it Yes}, this is a lot of trouble in comparison to just punching the buttons, and from a contemporary perspective may seem senseless.
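As a check of our own, the same computation in Python, against the calculator value it was meant to replace:

```python
# sin(31 deg) ~ sin(30 deg) + cos(30 deg) * (1 deg in radians)
import math

deg = math.pi / 180
approx = math.sin(30 * deg) + math.cos(30 * deg) * deg
print(approx)                # about 0.515115
print(math.sin(31 * deg))    # about 0.515038, agreeing to four decimal places
```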
{\it Example}: Approximate $\ln(x+2)$ `by differentials', in terms of $\ln x$ and $x$: This {\it non-numerical} question is somewhat more sensible. Take $f(x)=\ln x$, so that $f'(x)=\displaystyle \frac{1}{x}$. Then
$$
\triangle x=(x+2)-x=2
$$
and by the formulas above
$$
\ln(x+2)=f(x+2)\approx f(x)+f'(x)\cdot 2=\ln x+\frac{2}{x}
$$
{\it Example}: Approximate $\ln(e+2)$ in terms of differentials: Use $f(x)=\ln x$ again, so $f'(x)=\displaystyle \frac{1}{x}$. We probably have to imagine that we can `easily evaluate' both $\ln x$ and $\displaystyle \frac{1}{x}$ at $x=e$. (Do we know a numerical approximation to $e?$). Now
$$
\triangle x=(e+2)-e=2
$$
so we have
$$
\ln(e+2)=f(e+2)\approx f(e)+f'(e)\cdot 2=\ln e+\frac{2}{e}=1+\frac{2}{e}
$$
since $\ln e=1.$
\# 20.60 Approximate $\sqrt{101}$ `by differentials' in terms of $\sqrt{100}=10.$
\# 20.61 Approximate $\sqrt{x+1}$ `by differentials', in terms of $\sqrt{x}.$
\# 20.62 Granting that $\displaystyle \frac{d}{dx}\ln x=\frac{1}{x}$, approximate $\ln(x+1)$ `by differentials', in terms of $\ln x$ and $x.$
\# 20.63 Granting that $\displaystyle \frac{d}{dx}e^{x}=e^{x}$, approximate $e^{x+1}$ in terms of $e^{x}.$
\# 20.64 Granting that $\displaystyle \frac{d}{dx}\cos x=-\sin x$, approximate $\cos(x+1)$ in terms of $\cos x$ and $\sin x.$
21. {\it Implicit differentiation}
There is nothing `implicit' about the differentiation we do here, it is quite `explicit'. The difference from earlier situations is that we have a {\it function defined} `{\it implicitly}'. What this means is that, instead of a clear-cut (if complicated) formula for the value of the function in terms of the input value, we only have a {\it relation} between the two. This is best illustrated by examples.
For example, suppose that $y$ is a function of $x$ and
$$
y^{5}-xy+x^{5}=1
$$
and we are to find some useful expression for $dy/dx$. Notice that it is not likely that we'd be able to {\it solve} this equation for $y$ as a function of $x$ (nor vice-versa, either), so our previous methods do not obviously do anything here! But both sides of that equality are functions of $x$, and are {\it equal}, so their derivatives are
equal, surely. That is,
$$
5y^{4}\frac{dy}{dx}-1\cdot y-x\frac{dy}{dx}+5x^{4}=0
$$
Here the trick is that we can `take the derivative' without knowing exactly what $y$ is as a function of $x,$
but just using the rules for differentiation.
Specifically, to take the derivative of the term $y^{5}$, we view this as a {\it composite} function, obtained by applying the take-the-fifth-power function after applying the (not clearly known!) function $y$. Then use the chain rule!
Likewise, to differentiate the term $xy$, we use the product rule
$$
\frac{d}{dx}(x\cdot y)=\frac{dx}{dx}\cdot y+x\cdot\frac{dy}{dx}=y+x\cdot\frac{dy}{dx}
$$
since, after all,
$$
\frac{dx}{dx}=1
$$
And the term $x^{5}$ is easy to differentiate, obtaining the $5x^{4}$. The other side of the equation, the function `1', is {\it constant}, so its derivative is $0$. (The fact that this means that the left-hand side is also constant
should not be mis-used: we need to use the very non-trivial looking expression we have for that constant function, there on the left-hand side of that equation!).
Now the amazing part is that this equation can be {\it solved for} $y'$, if we tolerate a formula involving not only $x$, but also $y$: first, regroup terms depending on whether they have a $y'$ or not:
$$
y'(5y^{4}-x)+(-y+5x^{4})=0
$$
Then move the non-$y'$ terms to the other side:
$$
y'(5y^{4}-x)=y-5x^{4}
$$
and divide by the `coefficient' of the $y'$:
$$
y'=\frac{y-5x^{4}}{5y^{4}-x}
$$
Yes, this is {\it not} as good as if there were a formula for $y'$ {\it not} needing the $y$. But, on the other hand, the
initial situation we had did not present us with a formula for $y$ in terms of $x$, so it was necessary to lower our expectations.
Yes, if we are given a value of $x$ and told to find the corresponding $y'$, it would be impossible without luck or some additional information. For example, in the case we just looked at, if we were asked to find $y'$
when $x=1$ and $y=1$, it's easy: just plug these values into the formula for $y'$ in terms of {\it both} $x$ and $y$: when $x=1$ and $y=1$, the corresponding value of $y'$ is
$$
y'=\frac{1-5\cdot 1^{4}}{5\cdot 1^{4}-1}=-4/4=-1
$$
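As a sanity check of our own, we can trace the solution branch through $(1,1)$ numerically (using Newton's method, discussed later in this text) and compare the measured slope with the formula:

```python
# Numerical check of y' = (y - 5x^4)/(5y^4 - x) at (x, y) = (1, 1)
# on the curve y^5 - x*y + x^5 = 1.  We follow the branch through y = 1
# with a few Newton steps, then estimate dy/dx by a central difference.

def solve_y(x, y0=1.0):
    y = y0
    for _ in range(50):                    # Newton iteration for y at this x
        g = y**5 - x * y + x**5 - 1
        y -= g / (5 * y**4 - x)
    return y

h = 1e-5
slope_numeric = (solve_y(1 + h) - solve_y(1 - h)) / (2 * h)
slope_formula = (1 - 5 * 1**4) / (5 * 1**4 - 1)
print(slope_numeric, slope_formula)        # both essentially -1
```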
If, instead, we were asked to find $y$ and $y'$ when $x=1$, not knowing in advance that $y=1$ fits into the equation when $x=1$, we'd have to hope for some luck. First, we'd have to try to solve the original equation for $y$ with $x$ replaced by its value 1: solve
$$
y^{5}-y+1=1
$$
By luck indeed, there is some cancellation, and the equation becomes
$$
y^{5}-y=0
$$
By further luck, we can factor this `by hand': it is
$$
0=y(y^{4}-1)=y(y^{2}-1)(y^{2}+1)=y(y-1)(y+1)(y^{2}+1)
$$
So there are actually {\it three} real numbers which work as $y$ for $x=1$: the values $-1,0, +1$. There is no clear way to see which is `best'. But in any case, any one of these three values could be used as $y$ in substituting into the formula
$$
y'=\frac{y-5x^{4}}{5y^{4}-x}
$$
we obtained above.
Yes, there are really {\it three solutions}, three functions, etc.
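To illustrate (a check of our own), each of the three $y$-values does satisfy the relation at $x=1$, and each gives its own slope:

```python
# The three y-values over x = 1, each checked against the relation, with the
# corresponding slopes from the formula y' = (y - 5x^4)/(5y^4 - x).
x = 1
slopes = {}
for y in (-1, 0, 1):
    assert y**5 - x * y + x**5 == 1     # each satisfies the original relation
    slopes[y] = (y - 5 * x**4) / (5 * y**4 - x)
print(slopes)    # {-1: -1.5, 0: 5.0, 1: -1.0}
```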
Note that we {\it could} have used the Intermediate Value Theorem and/or Newton's Method to {\it numerically} solve the equation, even without too much luck. In `real life' a person should be prepared to do such
things.
\# 21.65 Suppose that $y$ is a function of $x$ and
$$
y^{5}-xy+x^{5}=1
$$
Find $dy/dx$ at the point $x=1, y=0.$
\# 21.66 Suppose that $y$ is a function of $x$ and
$$
y^{3}-xy^{2}+x^{2}y+x^{5}=7
$$
Find $dy/dx$ at the point $x=1, y=2$. Find $\displaystyle \frac{d^{2}y}{dx^{2}}$ at that point.
22. {\it Related rates}
In this section, most functions will be functions of a parameter $t$ which we will think of as {\it time}. There is a convention coming from physics to write the derivative of any function $y$ of $t$ as $\dot{y}=dy/dt$, that is, with just a {\it dot} over the functions, rather than a {\it prime}.
The issues here are variants and continuations of the previous section's idea about {\it implicit differentiation}. Traditionally, there are other (non-calculus!) issues introduced at this point, involving both story-problem stuff as well as the requirement to be able to deal with {\it similar triangles}, the {\it Pythagorean Theorem}, and to recall formulas for {\it volumes} of cones and such.
Continuing with the idea of describing a function by a relation, we could have {\it two} unknown functions $x$ and $y$ of $t$, {\it related} by some formula such as
$$
x^{2}+y^{2}=25
$$
A typical question of this genre is `What is $\dot{y}$ when $x=4$ and $\dot{x}=6?$'
The fundamental rule of thumb in this kind of situation is {\it differentiate the relation with respect to} $t$: so we differentiate the relation $x^{2}+y^{2}=25$ with respect to $t$, even though we don't know any details about those two functions $x$ and $y$ of $t$:
$$
2x\dot{x}+2y\dot{y}=0
$$
using the chain rule. We can solve this for $\dot{y}$ :
$$
\dot{y}=-\frac{x\dot{x}}{y}
$$
So {\it at any particular moment}, if we knew the values of $x,\dot{x}, y$ then we could find $\dot{y}$ {\it at that moment}.
Here it's easy to solve the original relation to find $y$ when $x=4$: we get $y=\pm 3$. Substituting, we get
$$
\dot{y}=-\frac{4\cdot 6}{\pm 3}=\pm 8
$$
(The $\pm$ notation means that we take $+$ if we take $y=-3$ and $-$ if we take $y=+3$.)
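On the branch $y=+3$ (that is, $y=\sqrt{25-x^{2}}$), the formula can be checked numerically; a Python sketch (names ours):

```python
# Checking ydot = -x*xdot/y on the circle x^2 + y^2 = 25 at x = 4, xdot = 6,
# by moving a point along the upper branch and differencing numerically.
import math

xdot = 6.0

def y_of_x(x):                 # upper branch: y = +3 when x = 4
    return math.sqrt(25 - x**2)

h = 1e-6
x = 4.0
ydot_numeric = (y_of_x(x + xdot * h) - y_of_x(x - xdot * h)) / (2 * h)
ydot_formula = -x * xdot / y_of_x(x)
print(ydot_numeric, ydot_formula)    # both about -8 on this branch
```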
\# 22.67 Suppose that $x, y$ are both functions of $t$, and that $x^{2}+y^{2}=25$. Express $\displaystyle \frac{dx}{dt}$ in terms of $x, y$, and $\displaystyle \frac{dy}{dt}$. When $x=3$ and $y=4$ and $\displaystyle \frac{dy}{dt}=6$, what is $\displaystyle \frac{dx}{dt}$ ?
\# 22.68 A 2-foot tall dog is walking away from a streetlight which is on a 10-foot pole. At a certain moment, the tip of the dog's shadow is moving away from the streetlight at 5 feet per second. How fast is the dog walking at that moment?
\# 22.69 A ladder 13 feet long leans against a house, but is sliding down. How fast is the top of the ladder moving at a moment when the base of the ladder is 12 feet from the house and moving outward at 10 feet per second?
23. {\it Intermediate Value Theorem, location of roots}
The assertion of the Intermediate Value Theorem is something which is probably `intuitively obvious', and is also {\it provably true}: if a function $f$ is {\it continuous} on an interval $[a,\ b]$ and if $f(a)<0$ and $f(b)>0$ (or vice-versa), then there is some third point $c$ with $a<c<b$ so that $f(c)=0$. This result can be used to give a crude but simple way to locate roots of functions numerically, by repeated {\it bisection}.
For example, let $f(x)=x^{3}-x+1$. Then $f(2)=7>0$ and $f(-2)=-5<0$, so we know that there is a root in the interval [-2, 2]. We'd like to cut down the size of the interval, so we look at what happens at the {\it midpoint}, bisecting the interval [-2, 2]: we have $f(0)=1>0$. Therefore, since $f(-2)=-5<0,$ we can conclude that there is a root in $[$-2, $0]$. Since both $f(0)>0$ and $f(2)>0$, we can't say anything at this point about whether or not there are roots in $[0,2]$. Again {\it bisecting} the interval $[$-2, $0]$ where we
know there is a root, we compute $f(-1)=1>0$. Thus, since $f(-2)<0$, we know that there is a root in $[-2,\ -1]$ (and have no information about $[-1,0]$).
If we continue with this method, we can obtain as good an approximation as we want! But there are faster ways to get a really good approximation, as we'll see.
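The repeated bisection described above is easy to code. A Python sketch (ours), where the cubic $f(x)=x^{3}-x+1$ is inferred from the values computed in the text ($f(-2)=-5$, $f(-1)=1$, $f(0)=1$):

```python
# Bisection: repeatedly halve an interval on which the function changes sign.

def f(x):
    return x**3 - x + 1

def bisect(f, lo, hi, steps=40):
    assert f(lo) < 0 < f(hi) or f(hi) < 0 < f(lo)
    for _ in range(steps):
        mid = (lo + hi) / 2
        # keep the half-interval on which f still changes sign
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = bisect(f, -2, -1)
print(root)       # about -1.3247
```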
Unless a person has an amazing intuition for polynomials (or whatever), there is really no way to anticipate what guess is better than any other in getting started.
Invoke the Intermediate Value Theorem to find an interval of length 1 or less in which there is a root of
$x^{3}+x+3=0$: Let $f(x)=x^{3}+x+3$. Just guessing, we compute $f(0)=3>0$. Realizing that the $x^{3}$ term probably `dominates' $f$ when $x$ is large positive or large negative, and since we want to find a point where $f$ is negative, our next guess will be a `large' negative number: how about $-1$? Well, $f(-1)=1>0,$
so evidently $-1$ is not negative enough. How about $-2$? Well, $f(-2)=-7<0$, so we have succeeded.
Further, the failed guess $-1$ actually was worthwhile, since now we know that $f(-2)<0$ and $f(-1)>0.$ Then, invoking the Intermediate Value Theorem, there is a root in the interval $[-2,\ -1].$
Of course, typically polynomials have several roots, but {\it the number of roots of a polynomial is never more than its degree}. We can use the Intermediate Value Theorem to get an idea where {\it all} of them are.
Invoke the Intermediate Value Theorem to find {\it three different intervals} of length 1 or less in each of which there is a root of $x^{3}-4x+1=0$: first, just starting anywhere, $f(0)=1>0$. Next, $f(1)=-2<0$. So, since $f(0)>0$ and $f(1)<0$, there is at least one root in $[0,1]$, by the Intermediate Value Theorem. Next, $f(2)=1>0$. So, with some luck here, since $f(1)<0$ and $f(2)>0$, by the Intermediate Value Theorem there is a root in [1, 2]. Now if we somehow imagine that there is a {\it negative root} as well, then we try $-1$: $f(-1)=4>0$. So we know {\it nothing} about roots in $[$-1, $0]$. But continue: $f(-2)=1>0$, and still no new conclusion. Continue: $f(-3)=-14<0$. Aha! So since $f(-3)<0$ and $f(-2)>0$, by the Intermediate Value Theorem there is a {\it third} root in the interval $[-3,\ -2].$
Notice how even the `bad' guesses were not entirely wasted.
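For readers who like to check this kind of bookkeeping by machine, here is a small Python sketch (not part of the original notes) that scans integer endpoints for sign changes of $f(x)=x^{3}-4x+1$, exactly mirroring the hand computation above; the Intermediate Value Theorem then guarantees a root in each interval where the sign changes, since $f$ is a polynomial and hence continuous.

```python
# Scan length-1 intervals [a, a+1] for sign changes of a continuous f.
# A sign change on [a, a+1] guarantees a root there, by the
# Intermediate Value Theorem.

def f(x):
    return x**3 - 4*x + 1

def sign_change_intervals(f, lo, hi):
    """Return the intervals [a, a+1] on which f changes sign."""
    intervals = []
    for a in range(lo, hi):
        if f(a) * f(a + 1) < 0:
            intervals.append((a, a + 1))
    return intervals

print(sign_change_intervals(f, -4, 4))   # [(-3, -2), (0, 1), (1, 2)]
```

The three intervals found agree with the three found by hand above.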
24. {\it Newton's method}
This is a method which, once you get started, quickly gives a very good approximation to a root of polynomial (and other) equations. The idea is that, if $x_{o}$ is {\it not} a root of a polynomial equation, but is pretty close to a root, then {\it sliding down the tangent line at} $x_{o}$ {\it to the graph of} $f$ {\it gives a good approximation to the actual root}. The point is that this process can be repeated as much as necessary to give as good an approximation as you want.
Let's derive the relevant formula: if our blind guess for a root of $f$ is $x_{o}$, then the tangent line is
$$
y-f(x_{o})=f'(x_{o})(x-x_{o})
$$
`Sliding down' the tangent line to hit the $x$-axis means to find the intersection of this line with the $x$-axis: this is where $y=0$. Thus, we solve for $x$ in
$$
0-f(x_{o})=f'(x_{o})(x-x_{o})
$$
to find
$$
x=x_{o}-\frac{f(x_{o})}{f'(x_{o})}
$$
Well, let's call this {\it first serious guess} $x_{1}$. Then, repeating this process, the {\it second serious guess} would be
$$
x_{2}=x_{1}-\frac{f(x_{1})}{f'(x_{1})}
$$
and generally, if we have the $n$th guess $x_{n}$ then the $(n+1)$th guess $x_{n+1}$ is
$$
x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}
$$
OK, that's the formula for {\it improving our guesses}. How do we decide when to quit? Well, it depends upon how many decimal places we want our approximation to be good to. Basically, if we want, for example, 3 decimal places accuracy, then as soon as $x_{n}$ and $x_{n+1}$ {\it agree} to three decimal places, we can presume that those are the {\it true} decimals of the true root of the equation. This will be illustrated in the examples below. It is important to realize that there is some uncertainty in Newton's method, both because it alone cannot assure that we have a root, and also because the idea just described for approximating roots to a given
accuracy is not foolproof. But to worry about what could go wrong here is counter-productive.
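The iteration and the stopping rule just described can be sketched in a few lines of Python (a sketch, not part of the original notes): quit when successive guesses agree to the requested number of decimal places. The functions below are for the worked example $x^{3}-x+1=0$.

```python
# Newton's method: repeat x <- x - f(x)/f'(x) until successive guesses
# agree to the requested number of decimal places.

def newton(f, fprime, x, decimals=5, max_steps=100):
    tol = 10 ** (-decimals)
    for _ in range(max_steps):
        x_next = x - f(x) / fprime(x)
        if abs(x_next - x) < tol:    # agreement to `decimals` places
            return x_next
        x = x_next
    raise RuntimeError("did not settle down; try a better starting guess")

# the example below: f(x) = x^3 - x + 1, starting from -1.5
root = newton(lambda x: x**3 - x + 1, lambda x: 3*x**2 - 1, -1.5)
print(root)   # agrees with the value found in the example below
```

Starting from $-1.5$ this settles down in a handful of steps, just as in the worked example.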
Approximate a root of $x^{3}-x+1=0$ using the intermediate value theorem to get started, and then Newton's method:
First let's see what happens if we are a little foolish here, in terms of the `blind' guess we start with. If we ignore the advice about using the intermediate value theorem to {\it guarantee} a root in some known interval, we'll waste time. Let's see: The general formula
$$
x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}
$$
becomes
$$
x_{n+1}=x_{n}-\frac{x_{n}^{3}-x_{n}+1}{3x_{n}^{2}-1}
$$
If we take $x_{1}=1$ as our `blind' guess, then plugging into the formula gives
$$
x_{2}=0.5
$$
$$
x_{3}=3
$$
$$
x_{4}=2.0384615384615383249
$$
This is discouraging, since the numbers are jumping around somewhat. But if we are stubborn and can compute quickly with a calculator (not by hand!), we'd see what happens:
\begin{center}
$x_{5} =$ 1.3902821472167361527

$x_{6} =$ 0.9116118977179270555

$x_{7} =$ 0.34502849674816926662

$x_{8} =$ 1.4277507040272707783

$x_{9} =$ 0.94241791250948314662

$x_{10} =$ 0.40494935719938018881

$x_{11} =$ 1.7069046451828553401

$x_{12} =$ 1.1557563610748160521

$x_{13} =$ 0.69419181332954971175

$x_{14} =$ $-0.74249429872066285974$

$x_{15} =$ $-2.7812959406781381233$

$x_{16} =$ $-1.9827252470441485421$

$x_{17} =$ $-1.5369273797584126484$

$x_{18} =$ $-1.3572624831877750928$

$x_{19} =$ $-1.3256630944288703144$

$x_{20} =$ $-1.324718788615257159$

$x_{21} =$ $-1.3247179572453899876$
\end{center}
Well, after quite a few iterations of `sliding down the tangent', the last two numbers we got, $x_{20}$ and $x_{21},$ agree to 5 decimal places. This would make us think that the {\it true} root is {\it approximated to five decimal places} by $-1.32471.$
The stupid aspect of this little scenario was that our initial `blind' guess was {\it too far from an actual root}, so that there was some wacky jumping around of the numbers before things settled down. If we had been computing by hand this would have been hopeless.
Let's try this example again using the Intermediate Value Theorem to pin down a root with some degree of accuracy: First, $f(1)=1>0$. Then $f(0)=+1>0$ also, so we might {\it doubt} that there is a root in $[0,1].$ Continue: $f(-1)=1>0$ again, so we might {\it doubt} that there is a root in $[-1,0]$, either. Continue: at last $f(-2)=-5<0$, so since $f(-1)>0$, by the Intermediate Value Theorem we do indeed know that there is a root between $-2$ and $-1$. Now to start using {\it Newton's Method}, we would reasonably guess
$$
x_{o}=-1.5
$$
since this is the midpoint of the interval on which we know there is a root. Then computing by Newton's method gives:
\begin{center}
$x_{1} =$ $-1.3478260869565217295$

$x_{2} =$ $-1.3252003989509069104$

$x_{3} =$ $-1.324718173999053672$

$x_{4} =$ $-1.3247179572447898011$
\end{center}
so right away we have what appears to be 5 decimal places accuracy, in 4 steps rather than 21. Getting off to a good start is important.
Approximate {\it all three} roots of $x^{3}-3x+1=0$ using the intermediate value theorem to get started, and then Newton's method. Here you have to take a little care in choice of beginning `guess' for Newton's method: In this case, since we are {\it told} that there are three roots, then we should certainly be wary about where
we start: presumably we have to start in different places in order to successfully use Newton's method to find the different roots. So, start thinking in terms of the intermediate value theorem: letting $f(x)=x^{3}-3x+1$, we have $f(2)=3>0$. Next, $f(1)=-1<0$, so by the Intermediate Value Theorem
we know there is a root in [1, 2]. Let's try to approximate it pretty well before looking for the other roots: The general formula for Newton's method becomes
$$
x_{n+1}=x_{n}-\frac{x_{n}^{3}-3x_{n}+1}{3x_{n}^{2}-3}
$$
Our initial `blind' guess might reasonably be the midpoint of the interval in which we know there is a root: take
$$
x_{o}=1.5
$$
Then we can compute
\begin{center}
$x_{1} =$ 1.533333333333333437
$x_{2} =$ 1.5320906432748537807
$x_{3} =$ 1.5320888862414665521
$x_{4} =$ 1.5320888862379560269
$x_{5} =$ 1.5320888862379560269
$x_{6} =$ 1.5320888862379560269
\end{center}
So it appears that we have quickly approximated a root in that interval! To what looks like 19 decimal places!
Continuing with this example: $f(0)=1>0$, so since $f(1)=-1<0$ we know by the intermediate value theorem that there is a root in $[0,1]$. So as our blind guess let's use the midpoint of this
interval to start Newton's Method: that is, now take $x_{o}=0.5$:
\begin{center}
$x_{1} =$ 0.33333333333333337034
$x_{2} =$ 0.3472222222222222654
$x_{3} =$ 0.34729635316386797683
$x_{4} =$ 0.34729635533386071788
$x_{5} =$ 0.34729635533386060686
$x_{6} =$ 0.34729635533386071788
$x_{7} =$ 0.34729635533386060686
$x_{8} =$ 0.34729635533386071788
\end{center}
so we have a root evidently approximated to 3 decimal places after just 2 applications of Newton's method. After 8 applications, we have apparently 15 correct decimal places.
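All three roots can be captured at once by running the iteration from the midpoint of each bracketing interval. In this Python sketch (an illustration, not part of the original notes), the intervals $[0,1]$ and $[1,2]$ come from the text; bracketing the negative root the same way gives $f(-2)=-1<0$ and $f(-1)=3>0$, so the third root lies in $[-2,-1]$.

```python
# Find all three roots of x^3 - 3x + 1 = 0 by starting Newton's method
# at the midpoint of each sign-change interval.

def f(x):
    return x**3 - 3*x + 1

def fprime(x):
    return 3*x**2 - 3

def newton(x, steps=20):
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# midpoints of the bracketing intervals [-2,-1], [0,1], [1,2]
roots = sorted(newton(x0) for x0 in (-1.5, 0.5, 1.5))
print([round(r, 6) for r in roots])
```

The middle and largest values agree with the approximations $0.34729\ldots$ and $1.53208\ldots$ computed above.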
\# 24.70 Approximate a root of $x^{3}-x+1=0$ using the intermediate value theorem to get started, and then Newton's method.
\# 24.71 Approximate a root of $3x^{4}-16x^{3}+18x^{2}+1=0$ using the intermediate value theorem to get started, and then Newton's method. You might have to be sure to get sufficiently close to a root to start so that things don't `blow up'.
\# 24.72 Approximate {\it all three} roots of $x^{3}-3x+1=0$ using the intermediate value theorem to get started, and then Newton's method. Here you have to take a little care in choice of beginning `guess' for Newton's method.
\# 24.73 Approximate the unique positive root of $\cos x=x.$
\# 24.74 Approximate a root of $e^{x}=2x.$
\# 24.75 Approximate a root of $\sin x=\ln x$. Watch out.
25. {\it Derivatives of transcendental functions}
The new material here is just a list of formulas for taking derivatives of exponential, logarithm, trigonometric, and inverse trigonometric functions. Then any function made by composing these with polynomials or with each other can be differentiated by using the chain rule, product rule, etc. (These new formulas are not easy to derive, but we don't have to worry about that).
The first two are the essentials for exponential and logarithms:
$$
\frac{d}{dx}e^{x}\ =\ e^{x}
$$
$$
\frac{d}{dx}\ln x\ =\ \frac{1}{x}
$$
The next three are essential for trig functions:
$$
\frac{d}{dx}\sin x\ =\ \cos x
$$
$$
\frac{d}{dx}\cos x\ =\ -\sin x
$$
$$
\frac{d}{dx}\tan x\ =\ \sec^{2}x
$$
The next three are essential for inverse trig functions:
$$
\frac{d}{dx}\arctan x\ =\ \frac{1}{1+x^{2}}
$$
$$
\frac{d}{dx}\arcsin x\ =\ \frac{1}{\sqrt{1-x^{2}}}
$$
$$
\frac{d}{dx}\,\mathrm{arcsec}\,x\ =\ \frac{1}{x\sqrt{x^{2}-1}}
$$
The previous formulas are the indispensable ones in practice, and are the only ones that I personally remember (if I'm lucky). Other formulas one {\it might} like to have seen are (with $a>0$ in the first two):
$$
\frac{d}{dx}a^{x}\ =\ \ln a\cdot a^{x}
$$
$$
\frac{d}{dx}\log_{a}x\ =\ \frac{1}{\ln a\cdot x}
$$
$$
\frac{d}{dx}\sec x\ =\ \tan x\sec x
$$
$$
\frac{d}{dx}\csc x\ =\ -\cot x\csc x
$$
$$
\frac{d}{dx}\cot x\ =\ -\csc^{2}x
$$
$$
\frac{d}{dx}\,\mathrm{arccot}\,x\ =\ \frac{-1}{1+x^{2}}
$$
$$
\frac{d}{dx}\arccos x\ =\ \frac{-1}{\sqrt{1-x^{2}}}
$$
$$
\frac{d}{dx}\,\mathrm{arccsc}\,x\ =\ \frac{-1}{x\sqrt{x^{2}-1}}
$$
({\it There are always some difficulties in figuring out which of the infinitely-many possibilities to take for the values of the inverse trig functions, and this is especially bad with} arccsc, {\it for example. But we won}'{\it t have time to worry about such things}).
To be able to use the above formulas it is {\it not} necessary to know very many {\it other} properties of these functions. For example, {\it it is not necessary to be able to graph these functions to take their derivatives}.
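As a quick sanity check on the table of formulas above, one can compare each claimed derivative against a symmetric difference quotient $\frac{f(x+h)-f(x-h)}{2h}$, which approximates $f'(x)$ for small $h$. This Python sketch (not part of the original notes) spot-checks several of the formulas at a few points:

```python
# Numerically verify derivative formulas using a symmetric
# difference quotient, which approximates f'(x) with error O(h^2).

import math

def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

checks = [
    (math.exp,  1.0, math.exp(1.0)),            # d/dx e^x    = e^x
    (math.log,  2.0, 1 / 2.0),                  # d/dx ln x   = 1/x
    (math.sin,  1.0, math.cos(1.0)),            # d/dx sin x  = cos x
    (math.tan,  0.5, 1 / math.cos(0.5)**2),     # d/dx tan x  = sec^2 x
    (math.atan, 2.0, 1 / (1 + 2.0**2)),         # d/dx arctan x = 1/(1+x^2)
    (math.asin, 0.5, 1 / math.sqrt(1 - 0.25)),  # d/dx arcsin x = 1/sqrt(1-x^2)
]

for fn, x, exact in checks:
    assert abs(numeric_derivative(fn, x) - exact) < 1e-6
print("all derivative formulas check out numerically")
```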
\# 25.76 Find $\displaystyle \frac{d}{dx}(e^{\cos x})$
\# 25.77 Find $\displaystyle \frac{d}{dx}(\arctan(2-e^{x}))$
\# 25.78 Find $\displaystyle \frac{d}{dx}(\sqrt{\ln(x-1)})$
\# 25.79 Find $\displaystyle \frac{d}{dx}(e^{2\cos x+5})$
\# 25.80 Find $\displaystyle \frac{d}{dx}(\arctan(1+\sin 2x))$
\# 25.81 Find $\displaystyle \frac{d}{dx}\cos(e^{x}-x^{2})$
\# 25.82 Find $\displaystyle \frac{d}{dx}\sqrt[3]{1-\ln 2x}$
\# 25.83 Find $\displaystyle \frac{d}{dx}\frac{e^{x}-1}{e^{x}+1}$
\# 25.84 Find $\displaystyle \frac{d}{dx}\sqrt{\ln(\frac{1}{x})}$
26. {\it L'Hospital's rule}
L'Hospital's rule is the definitive way to simplify evaluation of limits. It does not directly evaluate limits, but only {\it simplifies evaluation if used appropriately}.
In effect, this rule is the ultimate version of `cancellation tricks', applicable in situations where a more
down-to-earth genuine algebraic cancellation may be hidden or invisible.
Suppose we want to evaluate
$$
\lim_{x\rightarrow a}\frac{f(x)}{g(x)}
$$
where the limit $a$ could also be $+\infty$ or $-\infty$ in addition to `ordinary' numbers. Suppose that {\it either}
$\displaystyle \lim_{x\rightarrow a}f(x)=0$ and $\displaystyle \lim_{x\rightarrow a}g(x)=0$
$or$
$\displaystyle \lim_{x\rightarrow a}f(x)=\pm\infty$ and $\displaystyle \lim_{x\rightarrow a}g(x)=\pm\infty$
(The $\pm$'s don't have to be the same sign.) Then we cannot just `plug in' to evaluate the limit, and these are traditionally called indeterminate forms. The unexpected trick that works often is that (amazingly) we are entitled to {\it take the derivative of both numerator and denominator}:
$$
\lim_{x\rightarrow a}\frac{f(x)}{g(x)}=\lim_{x\rightarrow a}\frac{f'(x)}{g'(x)}
$$
No, this is {\it not the quotient rule}. No, it is not so clear why this would help, either, but we'll see in examples.
{\it Example}: Find $\displaystyle \lim_{x\rightarrow 0}(\sin x)/x$: both numerator and denominator have limit $0$, so we are entitled to apply L'Hospital's rule:
$$
\lim_{x\rightarrow 0}\frac{\sin x}{x}=\lim_{x\rightarrow 0}\frac{\cos x}{1}
$$
In the new expression, {\it neither} numerator nor denominator is $0$ at $x=0$, and we can just plug in to see that the limit is 1.
{\it Example}: Find $\displaystyle \lim_{x\rightarrow 0}x/(e^{2x}-1)$ : both numerator and denominator go to $0$, so we are entitled to use
L'Hospital's rule:
$$
\lim_{x\rightarrow 0}\frac{x}{e^{2x}-1}=\lim_{x\rightarrow 0}\frac{1}{2e^{2x}}
$$
In the new expression, the numerator and denominator are both non-zero when $x=0$, so we just plug in $0$ to get
$$
\lim_{x\rightarrow 0}\frac{x}{e^{2x}-1}=\lim_{x\rightarrow 0}\frac{1}{2e^{2x}}=\frac{1}{2e^{0}}=\frac{1}{2}
$$
{\it Example}: Find $\displaystyle \lim_{x\rightarrow 0+}x\ln x$: The $0^{+}$ means that we approach $0$ from the positive side, since otherwise we won't have a real-valued logarithm. This problem illustrates the {\it possibility} as well as {\it necessity} of {\it rearranging} a limit to make it be a {\it ratio} of things, in order to legitimately apply L'Hospital's rule. Here, we rearrange to
$$
\lim_{x\rightarrow 0+}x\ln x=\lim_{x\rightarrow 0+}\frac{\ln x}{1/x}
$$
In the new expressions the top goes to $-\infty$ and the bottom goes to $+\infty$ as $x$ goes to $0$ (from the right). Thus, we are entitled to apply L'Hospital's rule, obtaining
$$
\lim_{x\rightarrow 0+}x\ln x=\lim_{x\rightarrow 0+}\frac{\ln x}{1/x}=\lim_{x\rightarrow 0+}\frac{1/x}{-1/x^{2}}
$$
Now it is very necessary to rearrange the expression inside the last limit: we have
$$
\lim_{x\rightarrow 0+}\frac{1/x}{-1/x^{2}}=\lim_{x\rightarrow 0+}-x
$$
The new expression is very easy to evaluate: the limit is $0.$
It is often necessary to apply L'Hospital's rule repeatedly: Let's find $\displaystyle \lim_{x\rightarrow+\infty}x^{2}/e^{x}$: both numerator and denominator go to $\infty$ as $ x\rightarrow+\infty$, so we are entitled to apply L'Hospital's rule, to turn this into
$$
\lim_{x\rightarrow+\infty}\frac{2x}{e^{x}}
$$
But still both numerator and denominator go to $\infty$, so apply L'Hospital's rule again: the limit is
$$
\lim_{x\rightarrow+\infty}\frac{2}{e^{x}}=0
$$
since now the numerator is fixed while the denominator goes to $+\infty.$
{\it Example}: Now let's illustrate more ways that things can be rewritten as ratios, thereby possibly making L'Hospital's rule applicable. Let's evaluate
$$
\lim_{x\rightarrow 0}x^{x}
$$
It is less obvious now, but we can't just plug in $0$ for $x$: on one hand, we are taught to think that $x^{0}=1,$ but also that $0^{x}=0$; but then surely $0^{0}$ can't be both at once. And this exponential expression is not a ratio.
The trick here is to {\it take the logarithm}:
$$
\ln(\lim_{x\rightarrow 0+}x^{x})=\lim_{x\rightarrow 0+}\ln(x^{x})
$$
The reason that we are entitled to {\it interchange} the logarithm and the limit is that {\it logarithm is a continuous function} (on its domain). Now we use the fact that $\ln(a^{b})=b\ln a$, so the $\log$ of the limit is
$$
\lim_{x\rightarrow 0+}x\ln x
$$
Aha! The question has been turned into one we already did! But ignoring that, and repeating ourselves, we'd first rewrite this as a ratio
$$
\lim_{x\rightarrow 0+}x\ln x=\lim_{x\rightarrow 0+}\frac{\ln x}{1/x}
$$
and then apply L'Hospital's rule to obtain
$$
\lim_{x\rightarrow 0+}\frac{1/x}{-1/x^{2}}=\lim_{x\rightarrow 0+}-x=0
$$
But we have to remember that we've computed the {\it logarithm} of the limit, not the limit itself. Therefore, the actual limit is
$$
\lim_{x\rightarrow 0+}x^{x}=e^{\text{(log of the limit)}}=e^{0}=1
$$
{\it This trick of taking a logarithm is important to remember}.
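One can watch this limit happen numerically: as $x$ shrinks toward $0$ from the right, $x\ln x$ heads to $0$ and $x^{x}$ heads to $1$, consistent with the logarithm trick. A small Python sketch (not part of the original notes):

```python
# Numerically watch x*ln(x) -> 0 and x^x -> 1 as x -> 0+.

import math

def x_to_the_x(x):
    return x ** x

for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, x * math.log(x), x_to_the_x(x))
```

The printed values of $x\ln x$ shrink toward $0$ and the values of $x^{x}$ creep up toward $1$.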
{\it Example}: Here is another issue of rearranging to fit into accessible form: Find
$$
\lim_{x\rightarrow+\infty}\sqrt{x^{2}+x+1}-\sqrt{x^{2}+1}
$$
This is not a ratio, but certainly is `indeterminate', since it is the difference of two expressions both of
which go to $+\infty$. To make it into a ratio, we take out the largest reasonable power of $x$:
$$
\lim_{x\rightarrow+\infty}\sqrt{x^{2}+x+1}-\sqrt{x^{2}+1}=\lim_{x\rightarrow+\infty}x\cdot(\sqrt{1+\frac{1}{x}+\frac{1}{x^{2}}}-\sqrt{1+\frac{1}{x^{2}}})
$$
$$
=\lim_{x\rightarrow+\infty}\frac{\sqrt{1+\frac{1}{x}+\frac{1}{x^{2}}}-\sqrt{1+\frac{1}{x^{2}}}}{1/x}
$$
The last expression here fits the requirements of the L'Hospital rule, since both numerator and denominator go to $0$. Thus, by invoking L'Hospital's rule, it becomes
$$
=\lim_{x\rightarrow+\infty}\frac{1}{2}\frac{\frac{-\frac{1}{x^{2}}-\frac{2}{x^{3}}}{\sqrt{1+\frac{1}{x}+\frac{1}{x^{2}}}}-\frac{\frac{-2}{x^{3}}}{\sqrt{1+\frac{1}{x^{2}}}}}{-1/x^{2}}
$$
This is a large but actually tractable expression: multiply top and bottom by $x^{2}$, so that it becomes
$$
=\lim_{x\rightarrow+\infty}\frac{\frac{1}{2}+\frac{1}{x}}{\sqrt{1+\frac{1}{x}+\frac{1}{x^{2}}}}-\frac{\frac{1}{x}}{\sqrt{1+\frac{1}{x^{2}}}}
$$
At this point, we {\it can} replace every $\displaystyle \frac{1}{x}$ by $0$, finding that the limit is equal to
$$
\frac{\frac{1}{2}+0}{\sqrt{1+0+0}}-\frac{0}{\sqrt{1+0}}=\frac{1}{2}
$$
It is important to recognize that in addition to the actual application of L'Hospital's rule, it may be necessary to {\it experiment} a little to get things to settle out the way you want. {\it Trial-and-error is not only} $ok$, {\it it is necessary}.
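A numerical check of the last example is reassuring: evaluating $\sqrt{x^{2}+x+1}-\sqrt{x^{2}+1}$ at large $x$ should creep toward $\frac{1}{2}$. A Python sketch (not part of the original notes):

```python
# Numerically confirm that sqrt(x^2+x+1) - sqrt(x^2+1) -> 1/2
# as x -> +infinity.

import math

def g(x):
    return math.sqrt(x**2 + x + 1) - math.sqrt(x**2 + 1)

for x in (10.0, 100.0, 1000.0):
    print(x, g(x))
```

The printed values approach $0.5$, matching the limit computed by L'Hospital's rule above.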
\# 26.85 Find $\displaystyle \lim_{x\rightarrow 0}(\sin x)/x$
\# 26.86 Find $\displaystyle \lim_{x\rightarrow 0}(\sin 5x)/x$
\# 26.87 Find $\displaystyle \lim_{x\rightarrow 0}(\sin(x^{2}))/x^{2}$
\# 26.88 Find $\displaystyle \lim_{x\rightarrow 0}x/(e^{2x}-1)$
\# 26.89 Find $\displaystyle \lim_{x\rightarrow 0}x\ln x$
\# 26.90 Find
$$
\lim_{x\rightarrow 0+}(e^{x}-1)\ln x
$$
\# 26.91 Find
$$
\lim_{x\rightarrow 1}\frac{\ln x}{x-1}
$$
\# 26.92 Find
$$
\lim_{x\rightarrow+\infty}\frac{\ln x}{x}
$$
\# 26.93 Find
$$
\lim_{x\rightarrow+\infty}\frac{\ln x}{x^{2}}
$$
\# 26.94 Find $\displaystyle \lim_{x\rightarrow 0}(\sin x)^{x}$
27. {\it Exponential growth and decay: a differential equation}
This little section is a tiny introduction to a very important subject and bunch of ideas: {\it solving differential equations}. We'll just look at the simplest possible example of this.
The general idea is that, instead of solving equations to find unknown {\it numbers}, we might solve equations to find unknown {\it functions}. There are many possibilities for what this might mean, but one is that we
have an unknown function $y$ of $x$ and are given that $y$ and its derivative $y'$ (with respect to {\it x}) satisfy a relation
$$
y'=ky
$$
where $k$ is some constant. Such a relation between an unknown function and its derivative (or {\it derivatives}) is what is called a differential equation. Many basic `physical principles' can be written in such terms, using `time' $t$ as the independent variable.
Having been taking derivatives of exponential functions, a person might remember that the function $f(t)=e^{kt}$ has exactly this property:
$$
\frac{d}{dt}e^{kt}=k\cdot e^{kt}
$$
For that matter, any {\it constant multiple} of this function has the same property:
$$
\frac{d}{dt}(c\cdot e^{kt})=k\cdot c\cdot e^{kt}
$$
And it turns out that these really are {\it all} the possible solutions to this differential equation.
There is a certain buzz-phrase which is supposed to alert a person to the occurrence of this little story: if
a function $f$ has exponential growth or exponential decay then that is taken to mean that $f$ can be written in the form
$$
f(t)=c\cdot e^{kt}
$$
If the constant $k$ is {\it positive} it has exponential {\it growth} and if $k$ is {\it negative} then it has exponential {\it decay}.
Since we've described all the solutions to this equation, what questions remain to ask about this kind of thing? Well, the usual scenario is that some {\it story problem} will give you information in a way that requires you to take some trouble in order to {\it determine the constants} $c, k$. And, in case you were wondering where you get to take a derivative here, the answer is that you don't really: all the `calculus work' was done at the point where we granted ourselves that all solutions to that differential equation are given in the form $f(t)=ce^{kt}.$
First to look at some general ideas about determining the constants before getting embroiled in story problems: One simple observation is that
$$
c=f(0)
$$
that is, that the constant $c$ is the value of the function at time $t=0$. This is true simply because
$$
f(0)=ce^{k\cdot 0}=ce^{0}=c\cdot 1=c
$$
from properties of the exponential function.
More generally, suppose we know the values of the function at two different times:
$$
y_{1}=ce^{kt_{1}}
$$
$$
y_{2}=ce^{kt_{2}}
$$
Even though we certainly do have `two equations and two unknowns', these equations involve the unknown constants in a manner we may not be used to. But it's still not so hard to solve for $c, k$: dividing the first equation by the second and using properties of the exponential function, the $c$ on the right side cancels, and we get
$$
\frac{y_{1}}{y_{2}}=e^{k(t_{1}-t_{2})}
$$
Taking a logarithm (base $e$, of course) we get
$$
\ln y_{1}-\ln y_{2}=k(t_{1}-t_{2})
$$
Dividing by $t_{1}-t_{2}$, this is
$$
k=\frac{\ln y_{1}-\ln y_{2}}{t_{1}-t_{2}}
$$
Substituting back in order to find $c$, we first have
$$
y_{1}=ce^{\frac{\ln y_{1}-\ln y_{2}}{t_{1}-t_{2}}t_{1}}
$$
Taking the logarithm, we have
$$
\ln y_{1}=\ln c+\frac{\ln y_{1}-\ln y_{2}}{t_{1}-t_{2}}t_{1}
$$
Rearranging, this is
$$
\ln c=\ln y_{1}-\frac{\ln y_{1}-\ln y_{2}}{t_{1}-t_{2}}t_{1}=\frac{t_{1}\ln y_{2}-t_{2}\ln y_{1}}{t_{1}-t_{2}}
$$
Therefore, in summary, the two equations
$$
y_{1}=ce^{kt_{1}}
$$
$$
y_{2}=ce^{kt_{2}}
$$
allow us to solve for $c, k$, giving
$$
k=\frac{\ln y_{1}-\ln y_{2}}{t_{1}-t_{2}}
$$
$$
c=e^{\frac{t_{1}\ln y_{2}-t_{2}\ln y_{1}}{t_{1}-t_{2}}}
$$
A person might manage to remember such formulas, or it might be wiser to remember the way of {\it deriving} them.
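One way to remember the derivation is to let a machine carry it out: given two observed values $(t_{1},y_{1})$ and $(t_{2},y_{2})$, the formulas above produce $c$ and $k$ directly. A Python sketch (not part of the original notes), checked against the llama example below:

```python
# Recover c and k in f(t) = c * e^{k t} from two observed values,
# using the formulas derived above.

import math

def fit_exponential(t1, y1, t2, y2):
    k = (math.log(y1) - math.log(y2)) / (t1 - t2)
    c = math.exp((t1 * math.log(y2) - t2 * math.log(y1)) / (t1 - t2))
    return c, k

# data from the llama example below: f(0) = 1000, f(4) = 2000
c, k = fit_exponential(0, 1000, 4, 2000)
print(c, k)   # c is 1000 and k is ln(2)/4, up to rounding
```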
{\it Example}: A herd of llamas has 1000 llamas in it, and the population is growing exponentially. At time $t=4$ it has 2000 llamas. Write a formula for the number of llamas at {\it arbitrary} time $t.$
Here there is no direct mention of differential equations, but use of the buzz-phrase `{\it growing exponentially}' must be taken as indicator that we are talking about the situation
$$
f(t)=ce^{kt}
$$
where here $f(t)$ is the number of llamas at time $t$ and $c, k$ are constants to be determined from the information given in the problem. And the use of language should probably be taken to mean that at time $t=0$ there are 1000 llamas, and at time $t=4$ there are 2000. Then, either repeating the method above or plugging into the formula derived by the method, we find
$c=$ value of $f$ at $t=0=1000$
$$
k=\frac{\ln f(t_{1})-\ln f(t_{2})}{t_{1}-t_{2}}=\frac{\ln 1000-\ln 2000}{0-4}
$$
$$
=\frac{\ln\frac{1000}{2000}}{-4}=\frac{\ln\frac{1}{2}}{-4}=\frac{\ln 2}{4}
$$
Therefore,
$$
f(t)=1000e^{\frac{\ln 2}{4}t}=1000\cdot 2^{t/4}
$$
This is the desired formula for the number of llamas at arbitrary time $t.$
{\it Example}: A colony of bacteria is growing exponentially. At time $t=0$ it has 10 bacteria in it, and at time $t=4$ it has 2000. At what time will it have 100,000 bacteria?
Even though it is not explicitly demanded, we need to find the general formula for the number $f(t)$ of bacteria at time $t$, set this expression equal to 100,000, and solve for $t$. Again, we can take a {\it little} shortcut here since we know that $c=f(0)$ and we are given that $f(0)=10$. (This is easier than using the bulkier more general formula for finding $c$.) And use the formula for $k$:
$$
k=\frac{\ln f(t_{1})-\ln f(t_{2})}{t_{1}-t_{2}}=\frac{\ln 10-\ln 2000}{0-4}=\frac{\ln\frac{10}{2000}}{-4}=\frac{\ln 200}{4}
$$
Therefore, we have
$$
f(t)=10\cdot e^{\frac{\ln 200}{4}t}=10\cdot 200^{t/4}
$$
as the general formula. Now we try to solve
$$
100,000=10\cdot e^{\frac{\ln 200}{4}t}
$$
for $t$: divide both sides by the 10 and take logarithms, to get
$$
\ln 10,000=\frac{\ln 200}{4}t
$$
Thus,
$$
t=4\frac{\ln 10,000}{\ln 200}\approx 6.953407835
$$
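The arithmetic in this example can be redone in a couple of lines of Python (a check, not part of the original notes):

```python
# Solve 100000 = 10 * e^{(ln 200 / 4) t} for t, as in the example above.

import math

k = math.log(200) / 4           # growth constant from f(0)=10, f(4)=2000
t = math.log(100000 / 10) / k   # i.e. t = 4 ln(10000) / ln(200)
print(t)                        # about 6.9534, matching the value above
```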
\# 27.95 A herd of llamas is growing exponentially. At time $t=0$ it has 1000 llamas in it, and at time $t=4$ it has 2000 llamas. Write a formula for the number of llamas at {\it arbitrary} time $t.$
\# 27.96 A herd of elephants is growing exponentially. At time $t=2$ it has 1000 elephants in it, and at time $t=4$ it has 2000 elephants. Write a formula for the number of elephants at {\it arbitrary} time $t.$
\# 27.97 A colony of bacteria is growing exponentially. At time $t=0$ it has 10 bacteria in it, and at time $t=4$ it has 2000. At what time will it have 100,000 bacteria?
\# 27.98 A colony of bacteria is growing exponentially. At time $t=2$ it has 10 bacteria in it, and at time $t=4$ it has 2000. At what time will it have 100,000 bacteria?
28. {\it The second and higher derivatives}
The second derivative of a function is simply {\it the derivative of the derivative}. The third derivative of a function is the derivative of the second derivative. And so on.
The second derivative of a function $y=f(x)$ is written as
$$
y''=f''(x)=\frac{d^{2}}{dx^{2}}f=\frac{d^{2}f}{dx^{2}}=\frac{d^{2}y}{dx^{2}}
$$
The third derivative is
$$
y'''=f'''(x)=\frac{d^{3}}{dx^{3}}f=\frac{d^{3}f}{dx^{3}}=\frac{d^{3}y}{dx^{3}}
$$
And, generally, we can put on a `prime' for each derivative taken. Or write
$$
\frac{d^{n}}{dx^{n}}f=\frac{d^{n}f}{dx^{n}}=\frac{d^{n}y}{dx^{n}}
$$
for the nth derivative. There is yet another notation for high order derivatives where the number of `primes' would become unwieldy:
$$
\frac{d^{n}f}{dx^{n}}=f^{(n)}(x)
$$
as well.
The geometric interpretation of the higher derivatives is subtler than that of the first derivative, and we won't do much in this direction, except for the next little section.
\# 28.99 Find $f''(x)$ for $f(x)=x^{3}-5x+1.$
\# 28.100 Find $f''(x)$ for $f(x)=x^{5}-5x^{2}+x-1.$
\# 28.101 Find $f''(x)$ for $f(x)=\sqrt{x^{2}-x+1}.$
\# 28.102 Find $f''(x)$ for $f(x)=\sqrt{x}.$
29. {\it Inflection points, concavity upward and downward}
A point of inflection of the graph of a function $f$ is a point where the {\it second} derivative $f''$ is $0$. We have to wait a minute to clarify the geometric meaning of this.
A piece of the graph of $f$ is concave upward if the curve `bends' upward. For example, the popular parabola $y=x^{2}$ is concave upward in its entirety.
A piece of the graph of $f$ is concave downward if the curve `bends' downward. For example, a `flipped' version $y=-x^{2}$ of the popular parabola is concave downward in its entirety.
The relation of {\it points of inflection} to {\it intervals where the curve is concave up or down} is exactly the same as the relation of {\it critical points} to {\it intervals where the function is increasing or decreasing}. That is, the points of inflection mark the boundaries of the two different sorts of behavior. Further, only one sample value of $f''$ need be taken between each pair of consecutive inflection points in order to see whether the curve bends up or down along that interval.
Expressing this as a systematic procedure: {\it to find the intervals along which} $f$ {\it is concave upward and concave downward}:
$\bullet$ Compute the {\it second} derivative $f''$ of $f$, and {\it solve} the equation $f''(x)=0$ for $x$ to find all the inflection points, which we list in order as $x_{1}<x_{2}<\ldots<x_{n}.$
$\bullet$ Drop in auxiliary points $t_{o}<x_{1}<t_{1}<x_{2}<t_{2}<\ldots<t_{n-1}<x_{n}<t_{n}.$
$\bullet$ Evaluate the second derivative $f''$ at all the auxiliary points $t_{i}.$
$\bullet$ Conclusion: if $f''(t_{i+1})>0$, then $f$ is {\it concave upward} on $(x_{i},\ x_{i+1})$, while if $f''(t_{i+1})<0$, then $f$ is {\it concave downward} on that interval.
$\bullet$ Conclusion: on the `outside' interval $(-\infty,\ x_{1})$, the function $f$ is {\it concave upward} if $f''(t_{o})>0$ and is {\it concave downward} if $f''(t_{o})<0$. Similarly, on $(x_{n},\ \infty)$, the function $f$ is {\it concave upward} if $f''(t_{n})>0$ and is {\it concave downward} if $f''(t_{n})<0.$
Find the inflection points and intervals of concavity up and down of
$$
f(x)=3x^{2}-9x+6
$$
First, the second derivative is just $f''(x)=6$. Since this is never zero, there are {\it no} points of inflection. And the value of $f''$ is always $6>0$, so the curve is entirely {\it concave upward}.
Find the inflection points and intervals of concavity up and down of
$$
f(x)=2x^{3}-12x^{2}+4x-27
$$
First, the second derivative is $f''(x)=12x-24$. Thus, solving $12x-24=0$, there is just the one inflection point, 2. Choose auxiliary points $t_{o}=0$ to the left of the inflection point and $t_{1}=3$ to the right of the inflection point. Then $f''(0)=-24<0$, so on $(-\infty,\ 2)$ the curve is concave {\it downward}. And $f''(3)=12>0$, so on $(2,\ \infty)$ the curve is concave {\it upward}.
Find the inflection points and intervals of concavity up and down of
$$
f(x)=x^{4}-24x^{2}+11
$$
First, the second derivative is $f''(x)=12x^{2}-48$. Solving the equation $12x^{2}-48=0$, we find inflection points $\pm 2$. Choosing auxiliary points $-3,0,3$ placed between and to the left and right of the inflection points, we evaluate the second derivative: First, $f''(-3)=12\cdot 9-48>0$, so the curve is concave {\it upward} on $(-\infty,\ -2)$. Second, $f''(0)=-48<0$, so the curve is concave {\it downward} on $(-2,2)$. Third, $f''(3)=12\cdot 9-48>0$, so the curve is concave {\it upward} on $(2,\ \infty)$.
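The bullet-point procedure above is mechanical enough to write in a few lines of Python; this sketch (not part of the original notes) labels each interval for the worked example $f(x)=x^{4}-24x^{2}+11$, whose second derivative is $f''(x)=12x^{2}-48$ with inflection points at $\pm 2$:

```python
# Label concavity on each interval by sampling the sign of f'' at one
# auxiliary point per interval, as in the procedure above.

def f2(x):                      # second derivative of x^4 - 24x^2 + 11
    return 12 * x**2 - 48

inflection = [-2, 2]            # solutions of f''(x) = 0
aux = [-3, 0, 3]                # one sample point per interval
labels = ["up" if f2(t) > 0 else "down" for t in aux]
print(list(zip(aux, labels)))   # [(-3, 'up'), (0, 'down'), (3, 'up')]
```

The labels reproduce the hand computation: concave up, then down, then up.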
\# 29.103 Find the inflection points and intervals of concavity up and down of $f(x)=3x^{2}-9x+6.$
\# 29.104 Find the inflection points and intervals of concavity up and down of $f(x)=2x^{3}-12x^{2}+4x-27.$
\# 29.105 Find the inflection points and intervals of concavity up and down of $f(x)=x^{4}-2x^{2}+11.$
30. {\it Another differential equation: projectile motion}
Here we encounter the fundamental idea that {\it if} $s=s(t)$ {\it is position, then} $\dot{s}$ {\it is velocity, and} $\ddot{s}$ {\it is acceleration}. This idea occurs in all basic physical science and engineering.
In particular, for a projectile near the earth's surface travelling straight up and down, ignoring air resistance, acted upon by no other forces but {\it gravity}, we have
acceleration due to gravity $=-32$ feet/sec$^{2}$
Thus, letting $s(t)$ be position at time $t$, we have
$$
\ddot{s}(t)=-32
$$
We take this (approximate) {\it physical fact} as our starting point.
From $\ddot{s}=-32$ we {\it integrate} (or {\it anti-differentiate}) once to undo one of the derivatives, getting back to {\it ve}- {\it locity}:
$$
v(t)=\dot{s}(t)=-32t+v_{o}
$$
where we are calling the {\it constant of integration} $v_{o}$. (No matter which constant $v_{o}$ we might take, the derivative of $-32t+v_{o}$ with respect to $t$ is $-32$.)
Specifically, when $t=0$, we have
$$
v(0)=v_{o}
$$
Thus, the constant of integration $v_{o}$ is initial velocity. And we have this formula for the velocity at {\it any} time in terms of {\it initial} velocity.
We integrate once more to undo the last derivative, getting back to the {\it position} function itself:
$$
s=s(t)=-16t^{2}+v_{o}t+s_{o}
$$
where we are calling the constant of integration $s_{o}$. Specifically, when $t=0$, we have
$$
s(0)=s_{o}
$$
so $s_{o}$ is initial position. Thus, we have a formula for position at {\it any} time in terms of {\it initial position} and {\it initial velocity}.
Of course, in many problems the data we are given is {\it not} just the initial position and initial velocity, but something else, so we have to determine these constants indirectly.
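The two formulas just derived translate directly into code; this Python sketch (not part of the original notes, with units of feet and seconds so that the acceleration is $-32$ ft/sec$^2$) checks that the constants of integration really are the initial data:

```python
# Velocity and position of a projectile under gravity alone,
# from the formulas derived above (feet and seconds).

def velocity(t, v0):
    return -32 * t + v0

def position(t, v0, s0):
    return -16 * t**2 + v0 * t + s0

# the constants of integration are exactly the initial data:
assert velocity(0, v0=10) == 10          # v(0) = v0
assert position(0, v0=10, s0=100) == 100 # s(0) = s0
print("initial-condition checks pass")
```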
\# 30.106 You drop a rock down a deep well, and it takes 10 seconds to hit the bottom. How deep is it?
\# 30.107 You drop a rock down a well, and the rock is going 32 feet per second when it hits bottom. How deep is the well?
\# 30.108 If I throw a ball straight up and it takes 12 seconds for it to go up and come down, how high did it go?
31. {\it Graphing rational functions, asymptotes}
This section shows another kind of function whose graphs we can understand effectively by our methods. There is one new item here, the idea of {\it asymptote} of the graph of a function.
A vertical asymptote of the graph of a function $f$ most commonly occurs when $f$ is defined as a {\it ratio} $f(x)=g(x)/h(x)$ of functions $g, h$ continuous at a point $x_{o}$, but with the denominator going to zero at
that point while the numerator doesn't. That is, $h(x_{o})=0$ but $g(x_{o})\neq 0$. Then we say that $f$ {\it blows up} at
$x_{o}$, and that the line $x=x_{o}$ is a vertical asymptote of the graph of $f.$
And as we take $x$ closer and closer to $x_{o}$, the graph of $f$ zooms off (either up or down or both), getting closer and closer to the line $x=x_{o}.$
A very simple example of this is $f(x)=1/(x-1)$, whose denominator is $0$ at $x=1$, causing a {\it blow-up} at that point, so that $x=1$ is a {\it vertical asymptote}. As $x$ approaches 1 from the right, the values of the function zoom {\it up} to $+\infty$. When $x$ approaches 1 from the {\it left}, the values zoom {\it down} to $-\infty.$
A horizontal asymptote of the graph of a function $f$ occurs if either limit
$$
\lim_{x\rightarrow+\infty}f(x)
$$
or
$$
\lim_{x\rightarrow-\infty}\ f(x)
$$
exists. If $R=\displaystyle \lim_{x\rightarrow+\infty}f(x)$ exists, then $y=R$ is a horizontal asymptote of the function, and if $L=\displaystyle \lim_{x\rightarrow-\infty}f(x)$ exists, then $y=L$ is a horizontal asymptote.
As $x$ goes off to $+\infty$ the graph of the function gets closer and closer to the horizontal line $y=R$, if {\it that} limit exists. As $x$ goes off to $-\infty$ the graph of the function gets closer and closer to the horizontal line $y=L$, if {\it that} limit exists.
So in rough terms {\it asymptotes} of a function are {\it straight lines} which the graph of the function approaches {\it at infinity}. In the case of {\it vertical asymptotes}, it is the $y$-coordinate that goes off to infinity, and in the case of {\it horizontal asymptotes} it is the $x$-coordinate which goes off to infinity.
Find asymptotes, critical points, intervals of increase and decrease, inflection points, and intervals of concavity up and down of $f(x)=\displaystyle \frac{x+3}{2x-6}$: First, let's find the asymptotes. The denominator is $0$ for $x=3$ (and this is {\it not} cancelled by the numerator) so the line $x=3$ is a {\it vertical asymptote}. And as $x$ goes to $\pm\infty$, the function values go to 1/2, so the line $y=1/2$ is a horizontal asymptote.
The derivative is
$$
f'(x)=\frac{1\cdot(2x-6)-(x+3)\cdot 2}{(2x-6)^{2}}=\frac{-12}{(2x-6)^{2}}
$$
Since a ratio of polynomials can be zero only if the numerator is zero, this $f'(x)$ can {\it never} be zero, so
there are {\it no critical points}. There is, however, the discontinuity at $x=3$ which we must take into account. Choose auxiliary points $0$ and $4$ to the left and right of the discontinuity. Plugging in to the derivative, we have $f'(0)=-12/(-6)^{2}<0$, so the function is {\it decreasing} on the interval $(-\infty,\ 3)$. To the right, $f'(4)=-12/(8-6)^{2}<0$, so the function is also decreasing on $(3,\ +\infty)$.
The second derivative is $f''(x)=48/(2x-6)^{3}$. This is never zero, so there are {\it no inflection points}. There is the discontinuity at $x=3$, however. Again choosing auxiliary points $0,4$ to the left and right of the
discontinuity, we see $f''(0)=48/(-6)^{3}<0$ so the curve is {\it concave downward} on the interval $(-\infty,\ 3)$ . And $f''(4)=48/(8-6)^{3}>0$, so the curve is concave {\it upward} on $(3,\ +\infty)$ .
Plugging just two or so values into the function is then enough to make a fairly good qualitative sketch of its graph.
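Since this analysis is purely qualitative, it is easy to spot-check numerically. A small Python sketch (our own, not part of the text) confirming the horizontal asymptote and the sign of $f'$ at the auxiliary points:

```python
# Spot-checks of the analysis of f(x) = (x+3)/(2x-6):
# the horizontal asymptote y = 1/2, and the sign of f' on both sides of x = 3.

def f(x):
    return (x + 3) / (2 * x - 6)

def fprime(x):
    # the derivative computed above: -12/(2x-6)^2
    return -12 / (2 * x - 6) ** 2

# f approaches 1/2 for large |x| (horizontal asymptote)
print(abs(f(10**6) - 0.5) < 1e-5)    # True
print(abs(f(-10**6) - 0.5) < 1e-5)   # True

# f' is negative at the auxiliary points 0 and 4: decreasing on both sides
print(fprime(0) < 0, fprime(4) < 0)  # True True
```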
\# 31.109 Find all asymptotes of $f(x)=\displaystyle \frac{x-1}{x+2}.$
\# 31.110 Find all asymptotes of $f(x)=\displaystyle \frac{x+2}{x-1}.$
\# 31.111 Find all asymptotes of $f(x)=\displaystyle \frac{x^{2}-1}{x^{2}-4}.$
\# 31.112 Find all asymptotes of $f(x)=\displaystyle \frac{x^{2}-1}{x^{2}+1}.$
32. {\it Basic integration formulas}
The fundamental {\it use} of {\it integration} is as a {\it continuous version of summing}. But, paradoxically, often integrals are {\it computed} by viewing integration as essentially an {\it inverse operation to differentiation}. (That fact is the so-called {\it Fundamental Theorem of Calculus}.)
The notation, which we're stuck with for historical reasons, is as peculiar as the notation for derivatives:
the integral of a function $f(x)$ with respect to $x$ is written as
$$
\int f(x)dx
$$
The remark that integration is (almost) an inverse to the operation of differentiation means that if
$$
\frac{d}{dx}f(x)=g(x)
$$
then
$$
\int g(x)dx=f(x)+C
$$
The extra $C$, called the constant of integration, is really necessary, since after all differentiation kills off constants, which is why integration and differentiation are not {\it exactly} inverse operations of each other.
Since integration is {\it almost} the inverse operation of differentiation, recollection of formulas and processes for {\it differentiation} already tells the most important formulas for {\it integration}:
$\displaystyle \int x^{n}dx = \displaystyle \frac{1}{n+1}x^{n+1}+C$ unless $n=-1$
$$
\int e^{x}dx\ =\ e^{x}+C
$$
$$
\int\frac{1}{x}dx\ =\ \ln x+C
$$
$$
\int\sin xdx\ =\ -\cos x+C
$$
$$
\int\cos xdx\ =\ \sin x+C
$$
$$
\int\sec^{2}xdx\ =\ \tan x+C
$$
$$
\int\frac{1}{1+x^{2}}dx\ =\ \arctan x+C
$$
And since the derivative of a sum is the sum of the derivatives, the {\it integral of a sum is the sum of the integrals}:
$$
\int f(x)+g(x)dx=\int f(x)dx+\int g(x)dx
$$
And, likewise, constants `go through' the integral sign:
$$
\int c\cdot f(x)dx=c\cdot\int f(x)dx
$$
For example, it is easy to integrate polynomials, even including terms like $\sqrt{x}$ and more general power functions. The only thing to watch out for is terms $x^{-1}=\displaystyle \frac{1}{x}$, since these integrate to $\ln x$ instead of a power of $x$. So
$$
\int 4x^{5}-3x+11-17\sqrt{x}+\frac{3}{x}dx=\frac{4x^{6}}{6}-\frac{3x^{2}}{2}+11x-\frac{17x^{3/2}}{3/2}+3\ln x+C
$$
Notice that we need to include just one `constant of integration'.
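An antiderivative can always be checked by differentiating it. Here is a quick numerical check in Python of the example above, using a central difference quotient (a sketch of our own, not part of the text):

```python
import math

# Check the worked antiderivative by differentiating it numerically:
# d/dx [ 4x^6/6 - 3x^2/2 + 11x - 17 x^(3/2)/(3/2) + 3 ln x ]
# should recover 4x^5 - 3x + 11 - 17 sqrt(x) + 3/x (for x > 0).

def integrand(x):
    return 4*x**5 - 3*x + 11 - 17*math.sqrt(x) + 3/x

def antiderivative(x):
    return 4*x**6/6 - 3*x**2/2 + 11*x - 17*x**1.5/(3/2) + 3*math.log(x)

x, h = 2.0, 1e-6
numerical_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2*h)
print(abs(numerical_derivative - integrand(x)) < 1e-4)  # True
```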
Other basic formulas obtained by reversing differentiation formulas:
$$
\int a^{x}dx\ =\ \frac{a^{x}}{\ln a}+C
$$
$$
\int\frac{1}{x\ln a}dx\ =\ \log_{a}x+C
$$
$$
\int\frac{1}{\sqrt{1-x^{2}}}dx\ =\ \arcsin x+C
$$
$\displaystyle \int\frac{1}{x\sqrt{x^{2}-1}}dx =$ arcsec $x+C$
Sums of constant multiples of all these functions are easy to integrate: for example,
$\displaystyle \int 5\cdot 2^{x}-\frac{23}{x\sqrt{x^{2}-1}}+5x^{2}dx=\frac{5\cdot 2^{x}}{\ln 2}-23$ arcsec $x+\displaystyle \frac{5x^{3}}{3}+C$
\# 32.113 $\displaystyle \int 4x^{3}-3\cos x+\frac{7}{x}+2dx=$?
\# 32.114 $\displaystyle \int 3x^{2}+e^{2x}-11+\cos xdx=$?
\# 32.115 $\displaystyle \int\sec^{2}xdx=$?
\# 32.116 $\displaystyle \int\frac{7}{1+x^{2}}dx=$?
\# 32.117 $\displaystyle \int 16x^{7}-\sqrt{x}+\frac{3}{\sqrt{x}}dx=$?
\# 32.118 $\displaystyle \int 23\sin x-\frac{2}{\sqrt{1-x^{2}}}dx=$?
33. {\it The simplest substitutions}
The simplest kind of chain rule application
$$
\frac{d}{dx}f(ax+b)=a\cdot f'(ax+b)
$$
(for constants $a,\ b$) can easily be run backwards to obtain the corresponding integral formulas: some important and illustrative examples are
$$
\int\cos(ax+b)dx\ =\ \frac{1}{a}\cdot\sin(ax+b)+C
$$
$$
\int e^{ax+b}dx\ =\ \frac{1}{a}\cdot e^{ax+b}+C
$$
$$
\int\sqrt{ax+b}dx\ =\ \frac{1}{a}\cdot\frac{(ax+b)^{3/2}}{3/2}+C
$$
$$
\int\frac{1}{ax+b}dx\ =\ \frac{1}{a}\cdot\ln(ax+b)+C
$$
Putting numbers in instead of letters, we have examples like
$$
\int\cos(3x+2)dx\ =\ \frac{1}{3}\cdot\sin(3x+2)+C
$$
$$
\int e^{4x+3}dx\ =\ \frac{1}{4}\cdot e^{4x+3}+C
$$
$$
\int\sqrt{-5x+1}dx\ =\ \frac{1}{-5}\cdot\frac{(-5x+1)^{3/2}}{3/2}+C
$$
$$
\int\frac{1}{7x-2}dx\ =\ \frac{1}{7}\cdot\ln(7x-2)+C
$$
This kind of substitution is pretty undramatic, and a person should be able to do such things {\it by reflex} rather than having to think about it very much.
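Each of these formulas can be verified by differentiating the right-hand side. A one-line numerical check in Python for the first numerical example (a sketch of our own):

```python
import math

# Spot-check one of the formulas above by differentiation:
# d/dx [ (1/3) sin(3x+2) ] should equal cos(3x+2).

def F(x):
    return math.sin(3*x + 2) / 3

x, h = 0.7, 1e-6
numeric = (F(x + h) - F(x - h)) / (2*h)   # central difference quotient
print(abs(numeric - math.cos(3*x + 2)) < 1e-8)  # True
```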
\# 33.119 $\displaystyle \int e^{3x+2}dx=$?
\# 33.120 $\displaystyle \int\cos(2-5x)dx=$?
\# 33.121 $\displaystyle \int\sqrt{3x-7}dx=$?
\# 33.122 $\displaystyle \int\sec^{2}(2x+1)dx=$?
\# 33.123 $\displaystyle \int(5x^{7}+e^{6-2x}+23+\frac{2}{x})dx=$?
\# 33.124 $\displaystyle \int\cos(7-11x)dx=$?
34. {\it Substitutions}
The {\it chain rule} can also be `run backward', and is called change of variables or substitution or some- times $\mathrm{u}$-substitution. Some examples of what happens are straightforward, but others are less obvious. It is at this point that the capacity to {\it recognize derivatives} from past experience becomes very helpful.
{\it Example} Since (by the chain rule)
$$
\frac{d}{dx}e^{\sin x}=\cos xe^{\sin x}
$$
then we can anticipate that
$$
\int\cos xe^{\sin x}dx=e^{\sin x}+C
$$
{\it Example} Since (by the chain rule)
$$
\frac{d}{dx}\sqrt{x^{5}+3x}=\frac{1}{2}(x^{5}+3x)^{-1/2}\cdot(5x^{4}+3)
$$
then we can anticipate that
$$
\int\frac{1}{2}(5x^{4}+3)(x^{5}+3x)^{-1/2}dx=\sqrt{x^{5}+3x}+C
$$
Very often it happens that things are {\it off by a constant}. This should not deter a person from recognizing the possibilities. For example: since, by the chain rule,
$$
\frac{d}{dx}\sqrt{5+e^{x}}=\frac{1}{2}(5+e^{x})^{-1/2}\cdot e^{x}
$$
then
$$
\int e^{x}(5+e^{x})^{-1/2}dx=2\int\frac{1}{2}e^{x}(5+e^{x})^{-1/2}dx=2\sqrt{5+e^{x}}+C
$$
Notice how for `bookkeeping purposes' we put the $\displaystyle \frac{1}{2}$ into the integral (to make the constants right there) and put a compensating 2 outside.
{\it Example}: Since (by the chain rule)
$$
\frac{d}{dx}\sin^{7}(3x+1)=7\cdot\sin^{6}(3x+1)\cdot\cos(3x+1)\cdot 3
$$
then we have
$$
\int\cos(3x+1)\sin^{6}(3x+1)dx
$$
$$
=\frac{1}{21}\int 7\cdot 3\cdot\cos(3x+1)\sin^{6}(3x+1)dx=\frac{1}{21}\sin^{7}(3x+1)+C
$$
\# 34.125 $\displaystyle \int\cos x\sin xdx=$?
\# 34.126 $\displaystyle \int 2xe^{x^{2}}dx=$?
\# 34.127 $\displaystyle \int 6x^{5}e^{x^{6}}dx=$?
\# 34.128 $\displaystyle \int\frac{\cos x}{\sin x}dx=$?
\# 34.129 $\displaystyle \int\cos xe^{\sin x}dx=$?
\# 34.130 $\displaystyle \int\frac{1}{2\sqrt{x}}e^{\sqrt{x}}dx=$?
\# 34.131 $\displaystyle \int\cos x\sin^{5}xdx=$?
\# 34.132 $\displaystyle \int\sec^{2}x\tan^{7}xdx=$?
\# 34.133 $\displaystyle \int(3\cos x+x)e^{6\sin x+x^{2}}dx=$?
\# 34.134 $\displaystyle \int e^{x}\sqrt{e^{x}+1}dx=$?
35. {\it Area and definite integrals}
The actual {\it definition} of `integral' is as a limit of sums, which might easily be viewed as having to do with {\it area}. One of the original issues integrals were intended to address was computation of area.
First we need more notation. Suppose that we have a function $f$ whose integral is another function $F$:
$$
\int f(x)dx=F(x)+C
$$
Let $a, b$ be two numbers. Then the definite integral of $f$ with limits $a, b$ is
$$
\int_{a}^{b}f(x)dx=F(b)-F(a)
$$
The left-hand side of this equality is just {\it notation} for the definite integral. The use of the word `limit' here has little to do with our earlier use of the word, and means something more like `boundary', just like it
does in more ordinary English.
A similar notation is to write
$$
[g(x)]_{a}^{b}=g(b)-g(a)
$$
for any function $g$. So we could also write
$$
\int_{a}^{b}f(x)dx=[F(x)]_{a}^{b}
$$
For example,
$$
\int_{0}^{5}x^{2}dx=[\frac{x^{3}}{3}]_{0}^{5}=\frac{5^{3}-0^{3}}{3}=\frac{125}{3}
$$
As another example,
$$
\int_{2}^{3}3x+1dx=\ [\frac{3x^{2}}{2}+x]_{2}^{3}=\ (\frac{3\cdot 3^{2}}{2}+3)-(\frac{3\cdot 2^{2}}{2}+2)=\frac{21}{2}
$$
All the other integrals we had done previously would be called indefinite integrals since they didn't have `limits' $a, b$. So a {\it definite} integral is just the difference of two values of the function given by an {\it indefinite} integral. That is, there is almost nothing new here except the idea of evaluating the function that we get by integrating.
But now we {\it can} do something new: compute {\it areas}:
For example, if a function $f$ is {\it positive} on an interval $[a,\ b]$, then
$\displaystyle \int_{a}^{b}f(x)dx=$ area between graph and $x$-axis, between $x=a$ and $x=b$
It is important that the function be {\it positive}, or the result is false.
For example, since $y=x^{2}$ is certainly always positive (or at least non-negative, which is really enough), the area `under the curve' (and, implicitly, above the $x$-axis) between $x=0$ and $x=1$ is just
$$
\int_{0}^{1}x^{2}dx=[\frac{x^{3}}{3}]_{0}^{1}=\frac{1^{3}-0^{3}}{3}=\frac{1}{3}
$$
More generally, {\it the area below} $y=f(x)$ , {\it above} $y=g(x)$ , {\it and between} $x=a$ {\it and} $x=b$ {\it is}
area $=\displaystyle \int_{a}^{b}f(x)-g(x)dx$
$=\displaystyle \int_{\mathrm{left\ limit}}^{\mathrm{right\ limit}}$ ({\it upper curve} $-$ {\it lower curve}) $dx$
It is important that $f(x)\geq g(x)$ throughout the interval $[a,\ b].$
For example, the area below $y=e^{x}$ and above $y=x$, and between $x=0$ and $x=2$ is
$$
\int_{0}^{2}e^{x}-xdx=[e^{x}-\frac{x^{2}}{2}]_{0}^{2}=(e^{2}-2)-(e^{0}-0)=e^{2}-3
$$
since it really is true that $e^{x}\geq x$ on the interval $[0,2].$
As a person might be wondering, in general it may not be so easy to tell whether the graph of one curve is above or below another. The procedure to examine the situation is as follows: given two functions $f, g$, to find the intervals where $f(x)\leq g(x)$ and vice-versa:
$\bullet$ Find where the graphs cross by solving $f(x)=g(x)$ for $x$ to find the $x$-coordinates of the points of intersection.
$\bullet$ Between any two solutions $x_{1}, x_{2}$ of $f(x)=g(x)$ (and also to the left and right of the left-most and right-most solutions!), plug in {\it one} auxiliary point of your choosing to see which function is larger.
Of course, this procedure works for a similar reason that the {\it first derivative test} for local minima and maxima worked: we implicitly assume that $f$ and $g$ are {\it continuous}, so if the graph of one is above the graph of the other, then the situation can't {\it reverse} itself without the graphs actually {\it crossing}.
As an example, and as an example of a certain delicacy of wording, consider the problem to {\it find the area between} $y=x$ {\it and} $y=x^{2}$ {\it with} $0\leq x\leq 2$. To find where $y=x$ and $y=x^{2}$ {\it cross}, solve $x=x^{2}$: we find solutions $x=0,1$. In the present problem we don't care what is happening to the left of $0$. Plugging in the value 1/2 as auxiliary point between $0$ and 1, we get $\displaystyle \frac{1}{2}\geq (\displaystyle \frac{1}{2})^{2}$, so we see that in $[0,1]$ the curve $y=x$ is the higher. To the right of 1 we plug in the auxiliary point 2, obtaining $2^{2}\geq 2$, so the curve $y=x^{2}$ is higher there.
Therefore, the area between the two curves has to be broken into two parts:
area $=\displaystyle \int_{0}^{1}(x-x^{2})dx+\int_{1}^{2}(x^{2}-x)dx$
since we must always be integrating in the form
$\displaystyle \int_{\mathrm{left}}^{\mathrm{right}}$ (higher $-$ lower) $dx$
In some cases the `side' boundaries are redundant or only {\it implied}. For example, the question might be to {\it find the area between the curves} $y=2-x$ {\it and} $y=x^{2}$. What is implied here is that these two curves themselves enclose one or more {\it finite} pieces of area, without the need of any `side' boundaries of the form $x=a$. First, we need to see where the two curves intersect, by solving $2-x=x^{2}$: the solutions are $x=-2,1$. So we {\it infer} that we are supposed to find the area from $x=-2$ to $x=1$, and that the two curves {\it close up} around this chunk of area without any need of assistance from vertical lines $x=a$. We need to find which curve is higher: plugging in the point $0$ between $-2$ and 1, we see that $y=2-x$ is higher. Thus, the desired integral is
area $=\displaystyle \int_{-2}^{1}(2-x)-x^{2}dx$
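By the antiderivative $[2x-\frac{x^{2}}{2}-\frac{x^{3}}{3}]_{-2}^{1}$ this area comes out to $9/2$, and that value is easy to check numerically. A Python sketch (using a midpoint sum of our own devising):

```python
# Evaluating the last integral numerically as a sanity check: the area
# between y = 2-x and y = x^2 from x = -2 to x = 1 should be 9/2.

def midpoint_sum(f, a, b, n):
    """Midpoint-rule approximation to the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

area = midpoint_sum(lambda x: (2 - x) - x**2, -2, 1, 10000)
print(abs(area - 4.5) < 1e-6)  # True
```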
\# 35.135 Find the area between the curves $y=x^{2}$ and $y=2x+3.$
\# 35.136 Find the area of the region bounded vertically by $y=x^{2}$ and $y=x+2$ and bounded horizontally by $x=-1$ and $x=3.$
\# 35.137 Find the area between the curves $y=x^{2}$ and $y=8+6x-x^{2}.$
\# 35.138 Find the area between the curves $y=x^{2}+5$ and $y=x+7.$
36. {\it Lengths of Curves}
The basic point here is {\it a formula obtained by using the ideas of calculus}: the length of the graph of $y= f(x)$ from $x=a$ to $x=b$ is
arc length $=\displaystyle \int_{a}^{b}\sqrt{1+(\frac{dy}{dx})^{2}}dx$
Or, if the curve is {\it parametrized} in the form
$$
x=f(t)\ y=g(t)
$$
with the parameter $t$ going from $a$ to $b$, then
arc length $=\displaystyle \int_{a}^{b}\sqrt{(\frac{dx}{dt})^{2}+(\frac{dy}{dt})^{2}}dt$
This formula comes from approximating the curve by straight lines connecting successive points on the
curve, using the Pythagorean Theorem to compute the lengths of these segments in terms of the change in $x$ and the change in $y$. In one way of writing, which also provides a good {\it heuristic} for remembering the formula, if a small change in $x$ is $dx$ and a small change in $y$ is $dy$, then the length of the hypotenuse of the right triangle with base $dx$ and altitude $dy$ is (by the Pythagorean theorem)
hypotenuse $=\sqrt{dx^{2}+dy^{2}}=\sqrt{1+(\frac{dy}{dx})^{2}}dx$
Unfortunately, by the nature of this formula, most of the integrals which come up are {\it difficult} or {\it impossible} to `do'. But if one of these really mattered, we could still estimate it by {\it numerical integration}.
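The polyline approximation that motivates the formula can itself be computed and compared with the integral. A Python sketch (function names are ours) for $y=x^{2}$ on $[0,1]$:

```python
import math

# The arc-length formula checked against its own derivation: approximate
# the graph of y = x^2 on [0,1] by straight chords (the Pythagorean
# picture), and compare with the integral of sqrt(1 + (dy/dx)^2).

def polyline_length(f, a, b, n):
    """Total length of n chords connecting points on the graph of f."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * dx, a + (i + 1) * dx
        total += math.hypot(x1 - x0, f(x1) - f(x0))
    return total

def midpoint_integral(g, a, b, n):
    """Midpoint-rule approximation to the integral of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x**2
integrand = lambda x: math.sqrt(1 + (2*x)**2)   # dy/dx = 2x
chords = polyline_length(f, 0, 1, 10000)
formula = midpoint_integral(integrand, 0, 1, 10000)
print(abs(chords - formula) < 1e-6)  # True
```

Both approaches converge to the same length (about 1.4789), as the derivation promises.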
\# 36.139 Find the length of the curve $y=\sqrt{1-x^{2}}$ from $x=0$ to $x=1.$
\# 36.140 Find the length of the curve $y=\displaystyle \frac{1}{4}(e^{2x}+e^{-2x})$ from $x=0$ to $x=1.$
\# 36.141 Set up (but do not evaluate) the integral to find the length of the piece of the parabola $y=x^{2}$ from $x=3$ to $x=4.$
37. {\it Numerical integration}
As we start to see that integration `by formulas' is a much more difficult thing than differentiation, and sometimes is impossible to do in elementary terms, it becomes reasonable to ask for {\it numerical approximations to definite integrals}. Since a {\it definite} integral is just a {\it number}, this is possible. By contrast, {\it indefinite} integrals, being {\it functions} rather than just numbers, are not easily described by `numerical approximations'.
There are several related approaches, all of which use the idea that a definite integral is related to {\it area}. Thus, each of these approaches is really essentially a way of approximating area under a curve. Of course, this isn't exactly right, because integrals are not exactly areas, but thinking of area is a reasonable heuristic.
Of course, an approximation is not very valuable unless there is an {\it estimate for the error}, in other words, an idea of the {\it tolerance}.
Each of the approaches starts the same way: To approximate $\displaystyle \int_{a}^{b}f(x)dx$, break the interval $[a,\ b]$ into
smaller subintervals
$$
[x_{0},\ x_{1}],\ [x_{1},\ x_{2}],\text{ . . . , }[x_{n-2},\ x_{n-1}],\ [x_{n-1},\ x_{n}]
$$
each of the same length
$$
\triangle x=\frac{b-a}{n}
$$
and where $x_{0}=a$ and $x_{n}=b.$
Trapezoidal rule: This rule says that
$$
\int_{a}^{b}f(x)dx\approx\frac{\triangle x}{2}[f(x_{0})+2f(x_{1})+2f(x_{2})+\ldots+2f(x_{n-2})+2f(x_{n-1})+f(x_{n})]
$$
Yes, all the values have a factor of `2' except the first and the last. (This method approximates the area under the curve by {\it trapezoids} inscribed under the curve in each subinterval).
Midpoint rule: Let $\displaystyle \overline{x}_{i}=\frac{1}{2}(x_{i}+x_{i-1})$ be the midpoint of the subinterval $[x_{i-1},\ x_{i}]$. Then the midpoint rule says that
$$
\int_{a}^{b}f(x)dx\approx\triangle x[f(\overline{x}_{1})+\ldots+f(\overline{x}_{n})]
$$
(This method approximates the area under the curve by rectangles whose height is the value of the function at the midpoint of each subinterval).
Simpson's rule: This rule says that
$$
\int_{a}^{b}f(x)dx\approx
$$
$$
\approx\frac{\triangle x}{3}[f(x_{0})+4f(x_{1})+2f(x_{2})+4f(x_{3})+\ldots+2f(x_{n-2})+4f(x_{n-1})+f(x_{n})]
$$
Yes, the first and last coefficients are `1', while the `inner' coefficients alternate `4' and `2'. And $n$ has to be an {\it even} integer for this to make sense. (This method approximates the curve by pieces of parabolas).

In general, the smaller the $\triangle x$ is, the better these approximations are. We can be more precise: the error estimates for the trapezoidal and midpoint rules depend upon the {\it second derivative}: suppose that $|f''(x)|\leq M$ for some constant $M$, for all $a\leq x\leq b$. Then
error in trapezoidal rule $\displaystyle \leq\frac{M(b-a)^{3}}{12n^{2}}$
error in midpoint rule $\displaystyle \leq\frac{M(b-a)^{3}}{24n^{2}}$
The error estimate for Simpson's rule depends on the {\it fourth} derivative: suppose that $|f^{(4)}(x)|\leq N$ for some constant $N$, for all $a\leq x\leq b$. Then
error in Simpson's rule $\displaystyle \leq\frac{N(b-a)^{5}}{180n^{4}}$
From these formulas estimating the error, it looks like the midpoint rule is always better than the trapezoidal rule. And for high accuracy, using a large number $n$ of subintervals, it looks like Simpson's rule is the best.
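All three rules are easy to implement directly from the formulas above. A Python sketch (ours), applied to $\int_{0}^{1}x^{2}dx=1/3$:

```python
# The three rules, as stated above, applied to the integral of x^2
# over [0,1], whose exact value is 1/3. n must be even for Simpson's rule.

def trapezoidal(f, a, b, n):
    dx = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * dx) for i in range(1, n))
    return s * dx / 2

def midpoint(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + (i - 0.5) * dx) for i in range(1, n + 1))

def simpson(f, a, b, n):
    assert n % 2 == 0, "Simpson's rule needs an even number of subintervals"
    dx = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * dx) for i in range(1, n, 2))  # odd indices
    s += 2 * sum(f(a + i * dx) for i in range(2, n, 2))  # even indices
    return s * dx / 3

f = lambda x: x**2
print(abs(trapezoidal(f, 0, 1, 10) - 1/3) < 1e-2)  # True
print(abs(midpoint(f, 0, 1, 10) - 1/3) < 1e-2)     # True
print(abs(simpson(f, 0, 1, 10) - 1/3) < 1e-12)     # True
```

Note that Simpson's rule nails this example (up to roundoff): its error depends on the fourth derivative, which is identically zero for $x^{2}$.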
38. {\it Averages and Weighted Averages}
The usual notion of {\it average} of a list of $n$ numbers $x_{1}, \ldots, x_{n}$ is
average of $x_{1}, x_{2}, \ldots, x_{n}=\displaystyle \frac{x_{1}+x_{2}+\ldots+x_{n}}{n}$
A {\it continuous} analogue of this can be obtained as an integral, using a notation which matches better: let $f$ be a function on an interval $[a,\ b]$. Then
average value of $f$ on the interval $[a,\displaystyle \ b]=\frac{\int_{a}^{b}f(x)dx}{b-a}$
For example the {\it average} value of the function $y=x^{2}$ over the interval [2, 3] is
average value of $x^{2}$ on the interval $[2,\displaystyle \ 3]=\frac{\int_{2}^{3}x^{2}dx}{3-2}=\frac{[x^{3}/3]_{2}^{3}}{3-2}=\frac{3^{3}-2^{3}}{3\cdot(3-2)}=19/3$
A weighted average is an average in which some of the items to be averaged are `{\it more important}' or `{\it less important}' than some of the others. The {\it weights} are (non-negative) numbers which measure the relative importance.
For example, the {\it weighted average} of a list of numbers $x_{1}, \ldots, x_{n}$ with corresponding weights $w_{1}, \ldots, w_{n}$ is
$$
\frac{w_{1}\cdot x_{1}+w_{2}\cdot x_{2}+\ldots+w_{n}\cdot x_{n}}{w_{1}+w_{2}+\ldots+w_{n}}
$$
Note that if the weights are all just 1, then the weighted average is just a plain average.
The {\it continuous analogue} of a weighted average can be obtained as an integral, using a notation which matches better: let $f$ be a function on an interval $[a,\ b]$, with {\it weight} $w(x)$ , a non-negative function on $[a,\ b]$. Then
weighted average value of $f$ on the interval $[a,\ b]$ with weight $w=\displaystyle \frac{\int_{a}^{b}w(x)\cdot f(x)dx}{\int_{a}^{b}w(x)dx}$
Notice that in the special case that the weight is just 1 all the time, then the weighted average is just a plain average.
For example the {\it average} value of the function $y=x^{2}$ over the interval [2, 3] with weight $w(x)=x$ is
average value of $f$ on the interval $[a,\ b]$ with weight $x$
$=\displaystyle \frac{\int_{2}^{3}x\cdot x^{2}dx}{\int_{2}^{3}xdx}=\frac{[x^{4}/4]_{2}^{3}}{[x^{2}/2]_{2}^{3}}=\frac{\frac{1}{4}(3^{4}-2^{4})}{\frac{1}{2}(3^{2}-2^{2})}$
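Carrying out the arithmetic, this weighted average is $(65/4)/(5/2)=13/2$. A numerical check in Python (the helper midpoint\_sum is ours):

```python
# Finishing the computation above numerically: the weighted average of
# x^2 on [2,3] with weight w(x) = x should be (65/4)/(5/2) = 13/2.

def midpoint_sum(g, a, b, n):
    """Midpoint-rule approximation to the integral of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

num = midpoint_sum(lambda x: x * x**2, 2, 3, 10000)   # integral of w(x) f(x)
den = midpoint_sum(lambda x: x, 2, 3, 10000)          # integral of w(x)
print(abs(num / den - 13/2) < 1e-6)  # True
```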
39. {\it Centers of Mass} ({\it Centroids})
For many (but certainly not all!) purposes in physics and mechanics, it is necessary or useful to be able to consider a physical object as being a mass concentrated at a single point, its {\it geometric center}, also called its centroid. {\it The centroid is essentially the `average' of all the points in the object}. For simplicity, we will just consider the two-dimensional version of this, looking only at regions in the plane.
The simplest case is that of a rectangle: it is pretty clear that the centroid is the `center' of the rectangle. That is, if the corners are $(0,0), (u,\ 0), (0,\ v)$ and $(u,\ v)$ , then the centroid is
$$
\left(\frac{u}{2},\ \frac{v}{2}\right)
$$
The formulas below are obtained by `integrating up' this simple idea:
For the center of mass (centroid) of the plane region described by $f(x)\leq y\leq g(x)$ and $a\leq x\leq b$, we have
$x$-coordinate of the centroid $=$ average $x$-coordinate
$$
=\frac{\int_{a}^{b}x[g(x)-f(x)]dx}{\int_{a}^{b}[g(x)-f(x)]dx}
$$
$$
=\frac{\int_{\mathrm{left}}^{\mathrm{right}}x[\mathrm{upper}-\mathrm{lower}]dx}{\int_{\mathrm{left}}^{\mathrm{right}}[\mathrm{upper}-\mathrm{lower}]dx}=\frac{\int_{\mathrm{left}}^{\mathrm{right}}x[\mathrm{upper}-\mathrm{lower}]dx}{\mathrm{area\ of\ the\ region}}
$$
And also
$y$-coordinate of the centroid $=$ average $y$-coordinate
$$
=\frac{\int_{a}^{b}\frac{1}{2}[g(x)^{2}-f(x)^{2}]dx}{\int_{a}^{b}[g(x)-f(x)]dx}
$$
$$
=\frac{\int_{\mathrm{left}}^{\mathrm{right}}\frac{1}{2}[\mathrm{upper}^{2}-\mathrm{lower}^{2}]dx}{\int_{\mathrm{left}}^{\mathrm{right}}[\mathrm{upper}-\mathrm{lower}]dx}=\frac{\int_{\mathrm{left}}^{\mathrm{right}}\frac{1}{2}[\mathrm{upper}^{2}-\mathrm{lower}^{2}]dx}{\mathrm{area\ of\ the\ region}}
$$
{\it Heuristic}: For the $x$-coordinate: there is an amount $(g(x)-f(x))\ dx$ of the region at distance $x$ from the $y$-axis. This is integrated, and then {\it averaged} by dividing by the {\it total}, that is, dividing by the {\it area} of the entire region.
For the $y$-coordinate: in each vertical band of width $dx$ there is an amount $dx\ dy$ of the region at distance $y$ from the $x$-axis. This is integrated up and then averaged by dividing by the total area.
For example, let's find the centroid of the region bounded by $x=0, x=1, y=x^{2}$, and $y=0.$
$x$-coordinate of the centroid $=\displaystyle \frac{\int_{0}^{1}x[x^{2}-0]dx}{\int_{0}^{1}[x^{2}-0]dx}$
$$
=\frac{[x^{4}/4]_{0}^{1}}{[x^{3}/3]_{0}^{1}}=\frac{1/4-0}{1/3-0}=\frac{3}{4}
$$
And
$y$-coordinate of the centroid $=\displaystyle \frac{\int_{0}^{1}\frac{1}{2}[(x^{2})^{2}-0]dx}{\int_{0}^{1}[x^{2}-0]dx}$
$$
=\frac{\frac{1}{2}[x^{5}/5]_{0}^{1}}{[x^{3}/3]_{0}^{1}}=\frac{\frac{1}{2}(1/5-0)}{1/3-0}=\frac{3}{10}
$$
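A numerical check in Python of the centroid $(3/4,\ 3/10)$ just computed (the helper midpoint\_sum is ours):

```python
# Verifying the centroid (3/4, 3/10) of the region under y = x^2 on [0,1]
# by computing the three integrals numerically.

def midpoint_sum(g, a, b, n):
    """Midpoint-rule approximation to the integral of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

area = midpoint_sum(lambda x: x**2, 0, 1, 10000)
xbar = midpoint_sum(lambda x: x * x**2, 0, 1, 10000) / area
ybar = midpoint_sum(lambda x: 0.5 * (x**2)**2, 0, 1, 10000) / area
print(abs(xbar - 3/4) < 1e-6, abs(ybar - 3/10) < 1e-6)  # True True
```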
\# 39.142 Find the center of mass (centroid) of the region $0\leq x\leq 1$ and $0\leq y\leq x^{2}.$
\# 39.143 Find the center of mass (centroid) of the region defined by $0\leq x\leq 1,0\leq y\leq 1$ and $x+y\leq 1.$
\# 39.144 Find the center of mass (centroid) of a homogeneous plate in the shape of an equilateral triangle.
40. {\it Volumes by Cross Sections}
Next to computing areas of regions in the plane, the easiest {\it concept} of application of the ideas of calculus is to computing volumes of solids where somehow we know a formula for the {\it areas of slices}, that is, {\it areas of cross sections}. Of course, in any particular example, the actual issue of getting the formula for the cross section, and figuring out the appropriate limits of integration, can be difficult.
The idea is to just `add them up':
volume $=\displaystyle \int_{\mathrm{left\ limit}}^{\mathrm{right\ limit}}$ (area of cross section at {\it x}) $dx$
where in whatever manner we describe the solid it extends from $x=left$ {\it limit} to $x=right$ {\it limit}. We must suppose that we have some reasonable {\it formula} for the area of the cross section.
For example, let's find the volume of a solid ball of radius 1. (In effect, we'll be deriving the formula for this). We can suppose that the ball is centered at the origin. Since the radius is 1, the range of $x$ coordinates is from $-1$ to $+1$, so $x$ will be integrated from $-1$ to $+1$. At a particular value of $x$, what does the cross section look like? A disk, whose radius we'll have to determine. To determine this radius, look at how the solid ball intersects the $x, y$-plane: it intersects in the disk $x^{2}+y^{2}\leq 1$. For a particular value of $x$, the values of $y$ are between $\pm\sqrt{1-x^{2}}$. This line segment, having $x$ fixed and $y$ in this range, is the intersection of the cross section disk with the $x, y$-plane, and in fact is a {\it diameter} of that cross section disk. Therefore, the radius of the cross section disk at $x$ is $\sqrt{1-x^{2}}$. Use the formula that the area of a disk of radius $r$ is $\pi r^{2}$: the area of the cross section is
area of cross section at $x\ =\ \pi(\sqrt{1-x^{2}})^{2}=\pi(1-x^{2})$
Then integrate this from $-1$ to $+1$ to get the volume:
volume $=\displaystyle \int_{\mathrm{left}}^{\mathrm{right}}$ area of cross-section $dx$
$$
=\int_{-1}^{+1}\pi(1-x^{2})dx=\pi[x-\frac{x^{3}}{3}]_{-1}^{+1}=\pi[(1-\frac{1}{3})-(-1-\frac{(-1)^{3}}{3})]=\frac{2}{3}+\frac{2}{3}=\frac{4}{3}
$$
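The same slicing can be done numerically: summing the disk areas over thin slices should approach $4\pi/3$. A Python sketch (ours):

```python
import math

# Volume by cross sections, numerically: summing the slice areas
# pi * (1 - x^2) over thin slices from -1 to 1 should approach 4*pi/3.

def cross_section_area(x):
    return math.pi * (1 - x**2)

n, a, b = 10000, -1.0, 1.0
dx = (b - a) / n
# evaluate each slice at its midpoint and multiply by its thickness
volume = sum(cross_section_area(a + (i + 0.5) * dx) for i in range(n)) * dx
print(abs(volume - 4 * math.pi / 3) < 1e-6)  # True
```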
\# 40.145 Find the volume of a circular cone of radius 10 and height 12 (not by a formula, but by cross sections).
\# 40.146 Find the volume of a cone whose base is a {\it square} of side 5 and whose height is 6, by cross-sections.
\# 40.147 A hole 3 units in radius is drilled out along a diameter of a solid sphere of radius 5 units. What is the volume of the remaining solid?
\# 40.148 A solid whose base is a disc of radius 3 has vertical cross sections which are {\it squares}. What is the volume?
41. {\it Solids of Revolution}
Another way of computing volumes of some special types of solid figures applies to solids obtained by {\it rotating plane regions} about some axis.
If we rotate the plane region described by $f(x)\leq y\leq g(x)$ and $a\leq x\leq b$ around the $x$-axis, the volume of the resulting solid is
volume $=\displaystyle \int_{a}^{b}\pi(g(x)^{2}-f(x)^{2})dx$
$=\displaystyle \int_{\mathrm{left\ limit}}^{\mathrm{right\ limit}}\pi(\mathrm{upper\ curve}^{2}-\mathrm{lower\ curve}^{2})\ dx$

It is necessary to suppose that $f(x)\geq 0$ for this to be right.
This formula comes from viewing the whole thing as sliced up into slices of thickness $dx$, so that each slice is a {\it disk} of radius $g(x)$ with a smaller disk of radius $f(x)$ removed from it. Then we use the formula
area of disk $=\pi\cdot\mathrm{radius}^{2}$
and `add them all up'. The hypothesis that $f(x)\geq 0$ is necessary to keep different pieces of the solid from accidentally `overlapping' each other, which would count the same chunk of volume {\it twice}.
If we rotate the plane region described by $f(x)\leq y\leq g(x)$ and $a\leq x\leq b$ around the $y$-axis (instead of the $x$-axis), the volume of the resulting solid is
volume $=\displaystyle \int_{a}^{b}2\pi x(g(x)-f(x))dx$
$=\displaystyle \int_{\mathrm{left}}^{\mathrm{right}}2\pi x$ (upper $-$ lower) $dx$
This second formula comes from viewing the whole thing as sliced up into thin cylindrical shells of thickness $dx$ encircling the $y$-axis, of radius $x$ and of height $g(x)-f(x)$. The volume of each one is
(area of cylinder of height $g(x)-f(x)$ and radius {\it x}) $\cdot\ dx=2\pi x(g(x)-f(x))dx$
and the integral `adds them all up'.
As an example, let's consider the region $0\leq x\leq 1$ and $x^{2}\leq y\leq x$. Note that for $0\leq x\leq 1$ it really is the case that $x^{2}\leq y\leq x$, so $y=x$ is the {\it upper} curve of the two, and $y=x^{2}$ is the {\it lower} curve of the two. Invoking the formula above, the volume of the solid obtained by rotating this plane region around the $x$-{\it axis} is
volume $=\displaystyle \int_{\mathrm{left}}^{\mathrm{right}}\pi(\mathrm{upper}^{2}-\mathrm{lower}^{2})dx$
$$
=\int_{0}^{1}\pi((x)^{2}-(x^{2})^{2})dx=\pi[x^{3}/3-x^{5}/5]_{0}^{1}=\pi(1/3-1/5)
$$
On the other hand, if we rotate this around the $y$-axis instead, then
volume $=\displaystyle \int_{\text{left}}^{\text{right}}2\pi x\,(\text{upper}-\text{lower})\,dx$
$$
=\int_{0}^{1}2\pi x(x-x^{2})dx=2\pi\int_{0}^{1}x^{2}-x^{3}\,dx=2\pi[\frac{x^{3}}{3}-\frac{x^{4}}{4}]_{0}^{1}=2\pi(\frac{1}{3}-\frac{1}{4})=\frac{\pi}{6}
$$
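(An editorial aside, not part of the text: a few lines of Python can sanity-check both of these volumes by a crude midpoint-rule sum. The function name `integrate` and the step count are our own choices.)

```python
from math import pi

# Midpoint-rule approximation of a definite integral (a sketch; 100000
# subintervals is plenty for these smooth integrands).
def integrate(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Rotation about the x-axis: washers of outer radius x, inner radius x^2.
v_x = integrate(lambda x: pi * (x**2 - (x**2)**2), 0.0, 1.0)
# Rotation about the y-axis: cylindrical shells of radius x, height x - x^2.
v_y = integrate(lambda x: 2 * pi * x * (x - x**2), 0.0, 1.0)

print(v_x, pi * (1/3 - 1/5))   # both near 0.4189
print(v_y, pi / 6)             # both near 0.5236
```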
\# 41.149 Find the volume of the solid obtained by rotating the region $0\leq x\leq 1,0\leq y\leq x$ around the $y$-axis.
\# 41.150 Find the volume of the solid obtained by rotating the region $0\leq x\leq 1,0\leq y\leq x$ around the $x$-axis.
\# 41.151 Set up the integral which expresses the volume of the doughnut obtained by rotating the region $(x-2)^{2}+y^{2}\leq 1$ around the $y$-axis.
42. {\it Surfaces of Revolution}
Here is another {\it formula obtained by using the ideas of calculus}: the area of the surface obtained by rotating the curve $y=f(x)$ with $a\leq x\leq b$ around the $x$-axis is
area $=\displaystyle \int_{a}^{b}2\pi f(x)\sqrt{1+(\frac{dy}{dx})^{2}}\,dx$
This formula comes from extending the ideas of the previous section: the length of a little piece of the curve is
$$
\sqrt{dx^{2}+dy^{2}}
$$
This gets rotated around the perimeter of a circle of radius $y=f(x)$, giving approximately a band of width $\sqrt{dx^{2}+dy^{2}}$ and length $2\pi f(x)$, which has area
$$
2\pi f(x)\sqrt{dx^{2}+dy^{2}}=2\pi f(x)\sqrt{1+(\frac{dy}{dx})^{2}}dx
$$
Integrating this (as if it were a sum!) gives the formula.
As with the formula for arc length, it is very easy to obtain integrals which are difficult or impossible to evaluate except numerically.
Similarly, we might rotate the curve $y=f(x)$ around the $y$-axis instead. The same general ideas apply to compute the area of the resulting surface. The width of each little band is still $\sqrt{dx^{2}+dy^{2}}$, but now the length is $2\pi x$ instead. So the band has area
width $\times$ length $=2\pi x\sqrt{dx^{2}+dy^{2}}$
Therefore, in this case the surface area is obtained by integrating this, yielding the formula
area $=\displaystyle \int_{a}^{b}2\pi x\sqrt{1+(\frac{dy}{dx})^{2}}dx$
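(Another editorial check, on a case whose answer we know by other means: rotating the semicircle $y=\sqrt{1-x^{2}}$ about the $x$-axis gives a sphere of radius $1$, whose surface area should be $4\pi$. The names below are our own.)

```python
from math import pi, sqrt

# Midpoint-rule approximation of a definite integral.
def integrate(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f  = lambda x: sqrt(1 - x**2)
df = lambda x: -x / sqrt(1 - x**2)   # dy/dx, computed by hand

# The surface-area formula from the text, applied to the semicircle.
area = integrate(lambda x: 2 * pi * f(x) * sqrt(1 + df(x)**2), -1.0, 1.0)
print(area, 4 * pi)   # both near 12.566
```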
\# 42.152 Find the area of the surface obtained by rotating the curve $y=\displaystyle \frac{1}{4}(e^{2x}+e^{-2x})$ with $0\leq x\leq 1$ around the $x$-axis.
\# 42.153 Just set up the integral for the surface obtained by rotating the curve $y=\displaystyle \frac{1}{4}(e^{2x}+e^{-2x})$ with $0\leq x\leq 1$ around the $y$-axis.
\# 42.154 Set up the integral for the area of the surface obtained by rotating the curve $y=x^{2}$ with $ 0\leq x\leq 1$ around the $x$-axis.
\# 42.155 Set up the integral for the area of the surface obtained by rotating the curve $y=x^{2}$ with $ 0\leq x\leq 1$ around the $y$-axis.
43. {\it Integration by Parts}
Strangely, the subtlest standard method is just the {\it product rule} run backwards. This is called integration by parts. (This might seem strange because people often find the chain rule for differentiation harder to get a grip on than the product rule, yet the chain rule run backwards, substitution, is the easier integration method.) One way of writing the integration by parts rule is
$$
\int f(x)\cdot g'(x)dx=f(x)g(x)-\int f'(x)\cdot g(x)dx
$$
Sometimes this is written another way: if we use the notation that for a function $u$ of $x,$
$$
du=\frac{du}{dx}dx
$$
then for two functions $u, v$ of $x$ the rule is
$$
\int udv=uv-\int vdu
$$
Yes, it is hard to see how this might be helpful, but it is. The first theme we'll see in examples is where we could do the integral except that there is a power of $x$ `in the way':
The simplest example is
$$
\int xe^{x}dx=\int xd(e^{x})=xe^{x}-\int e^{x}dx=xe^{x}-e^{x}+C
$$
Here we have taken $u=x$ and $v=e^{x}$. It is important to be able to see the $e^{x}$ as being the derivative of itself.
A similar example is
$$
\int x\cos xdx=\int xd(\sin x)=x\sin x-\int\sin xdx=x\sin x+\cos x+C
$$
Here we have taken $u=x$ and $v=\sin x$. It is important to be able to see the $\cos x$ as being the derivative of $\sin x.$
Yet another example, illustrating also the idea of {\it repeating} the integration by parts:
$$
\int x^{2}e^{x}dx=\int x^{2}d(e^{x})=x^{2}e^{x}-\int e^{x}d(x^{2})
$$
$$
=x^{2}e^{x}-2\int xe^{x}dx=x^{2}e^{x}-2xe^{x}+2\int e^{x}dx
$$
$$
=x^{2}e^{x}-2xe^{x}+2e^{x}+C
$$
Here we integrate by parts twice. After the first integration by parts, the integral we come up with is $\displaystyle \int xe^{x}dx$, which we had dealt with in the first example.
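(An editorial check, not part of the text: differentiating the claimed antiderivative numerically, by a central difference, should recover the integrand $x^{2}e^{x}$. The step size is our own choice.)

```python
from math import exp

# The antiderivative from the double integration by parts, and the integrand.
F = lambda x: x**2 * exp(x) - 2 * x * exp(x) + 2 * exp(x)
f = lambda x: x**2 * exp(x)

# Central-difference derivative of F at a few sample points, compared to f.
h = 1e-5
err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
          for x in (-1.0, 0.0, 0.7, 1.5))
print(err)   # tiny: the derivative of F really is x^2 e^x
```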
Or sometimes the theme is that it is easier to integrate the {\it derivative} of something than to integrate the thing:
$$
\int\ln xdx=\int\ln xd(x)=x\ln x-\int xd(\ln x)
$$
$$
=x\ln x-\int x\frac{1}{x}dx=x\ln x-\int 1dx=x\ln x-x+C
$$
We took $u=\ln x$ and $v=x.$
Again in this example it is easier to integrate the derivative than the thing itself:
$\displaystyle \int\arctan xdx=\int\arctan xd(x)=x\arctan x-\int x$ d(arctan $x$)
$$
=x\arctan x-\int\frac{x}{1+x^{2}}dx=x\arctan x-\frac{1}{2}\int\frac{2x}{1+x^{2}}dx
$$
$$
=x\arctan x-\frac{1}{2}\ln(1+x^{2})+C
$$
since we should recognize the
$$
\frac{2x}{1+x^{2}}
$$
as being the derivative (via the chain rule) of $\ln(1+x^{2})$ .
\# 43.156 $\displaystyle \int\ln xdx=$?
\# 43.157 $\displaystyle \int xe^{x}dx=$?
\# 43.158 $\displaystyle \int(\ln x)^{2}dx=$?
\# 43.159 $\displaystyle \int xe^{2x}dx=$?
\# 43.160 $\displaystyle \int\arctan 3xdx=$?
\# 43.161 $\displaystyle \int x^{3}\ln xdx=$?
\# 43.162 $\displaystyle \int\ln 3xdx=$?
\# 43.163 $\displaystyle \int x\ln xdx=$?
44. {\it Partial Fractions}
Now we return to a more special but still important technique of doing indefinite integrals. This depends on a good trick from algebra to transform complicated {\it rational functions} into simpler ones. Rather than try to formally describe the general fact, we'll do the two simplest families of examples.
Consider the integral
$$
\int\frac{1}{x(x-1)}dx
$$
As it stands, we do not recognize this as the derivative of anything. However, we have
$$
\frac{1}{x-1}-\frac{1}{x}=\frac{x-(x-1)}{x(x-1)}=\frac{1}{x(x-1)}
$$
Therefore,
$$
\int\frac{1}{x(x-1)}dx=\int\frac{1}{x-1}-\frac{1}{x}dx=\ln(x-1)-\ln x+C
$$
That is, by separating the fraction $1/x(x-1)$ into the `partial' fractions $1/x$ and $1/(x-1)$ we were able to do the integrals immediately by using the logarithm. How to see such identities?
Well, let's look at a situation
$$
\frac{cx+d}{(x-a)(x-b)}=\frac{A}{x-a}+\frac{B}{x-b}
$$
where $a, b$ are given numbers (not equal) and we are to {\it find} $A, B$ which make this true. If we can find the $A, B$ then we can integrate $(cx+d)/(x-a)(x-b)$ simply by using logarithms:
$$
\int\frac{cx+d}{(x-a)(x-b)}dx=\int\frac{A}{x-a}+\frac{B}{x-b}dx=A\ln(x-a)+B\ln(x-b)+C
$$
To find the $A, B$, multiply through by $(x-a)(x-b)$ to get
$$
cx+d=A(x-b)+B(x-a)
$$
When $x=a$ the $x-a$ factor is $0$, so this equation becomes
$$
c\cdot a+d=A(a-b)
$$
Likewise, when $x=b$ the $x-b$ factor is $0$, so we also have
$$
c\cdot b+d=B(b-a)
$$
That is,
$$
A=\frac{c\cdot a+d}{a-b}\ B=\frac{c\cdot b+d}{b-a}
$$
So, yes, we can find the constants to break the fraction $(cx+d)/(x-a)(x-b)$ down into simpler `partial' fractions.
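(As a quick editorial check of these formulas for $A$ and $B$, we can try concrete numbers $c,d,a,b=4,1,0,1$, i.e., the fraction $(4x+1)/(x(x-1))$, and verify the identity at a point away from the poles. The numbers are our own choice.)

```python
# The formulas A = (ca+d)/(a-b), B = (cb+d)/(b-a) on a concrete case.
c, d, a, b = 4.0, 1.0, 0.0, 1.0
A = (c * a + d) / (a - b)
B = (c * b + d) / (b - a)
print(A, B)   # -1.0 and 5.0

# Spot-check the partial-fraction identity at x = 2.5:
x = 2.5
lhs = (c * x + d) / ((x - a) * (x - b))
rhs = A / (x - a) + B / (x - b)
print(lhs, rhs)   # equal up to rounding
```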
{\it Further}, if the numerator is of {\it bigger degree} than 1, then before executing the previous algebra trick we must first {\it divide the numerator by the denominator to get a remainder of smaller degree}. A simple example is
\begin{center}
$\displaystyle \frac{x^{3}+4x^{2}-x+1}{x(x-1)}=$?
\end{center}
{\it We must recall how to divide polynomials by polynomials and get a remainder of lower degree than the divisor}. Here we would divide the $x^{3}+4x^{2}-x+1$ by $x(x-1)=x^{2}-x$ to get a remainder of degree less than 2 (the degree of $x^{2}-x$). We would obtain
$$
\frac{x^{3}+4x^{2}-x+1}{x(x-1)}=x+5+\frac{4x+1}{x(x-1)}
$$
since the quotient is $x+5$ and the remainder is $4x+1$. Thus, in this situation
$$
\int\frac{x^{3}+4x^{2}-x+1}{x(x-1)}dx=\int x+5+\frac{4x+1}{x(x-1)}dx
$$
Now we are ready to continue with the {\it first} algebra trick.
In this case, the first trick is applied to
$$
\frac{4x+1}{x(x-1)}
$$
We want constants $A, B$ so that
$$
\frac{4x+1}{x(x-1)}=\frac{A}{x}+\frac{B}{x-1}
$$
As above, multiply through by $x(x-1)$ to get
$$
4x+1=A(x-1)+Bx
$$
and plug in the two values $0,1$ to get
$$
4\cdot 0+1=-A \qquad 4\cdot 1+1=B
$$
That is, $A=-1$ and $B=5.$
Putting this together, we have
$$
\frac{x^{3}+4x^{2}-x+1}{x(x-1)}=x+5+\frac{-1}{x}+\frac{5}{x-1}
$$
Thus,
$$
\int\frac{x^{3}+4x^{2}-x+1}{x(x-1)}dx=\int x+5+\frac{-1}{x}+\frac{5}{x-1}dx
$$
$$
=\frac{x^{2}}{2}+5x-\ln x+5\ln(x-1)+C
$$
In a slightly different direction: we can do any integral of the form
$$
\int\frac{ax+b}{1+x^{2}}dx
$$
because we know two different sorts of integrals with that same denominator:
$$
\int\frac{1}{1+x^{2}}dx=\arctan x+C\ \int\frac{2x}{1+x^{2}}dx=\ln(1+x^{2})+C
$$
where in the second one we use a substitution. Thus, we have to break the given integral into two parts to do it:
$$
\int\frac{ax+b}{1+x^{2}}dx=\frac{a}{2}\int\frac{2x}{1+x^{2}}dx+b\int\frac{1}{1+x^{2}}dx
$$
$$
=\frac{a}{2}\ln(1+x^{2})+b\arctan x+C
$$
And, as in the first example, if we are given a numerator of degree 2 or larger, then we {\it divide} first, to get a remainder of lower degree. For example, in the case of
$$
\int\frac{x^{4}+2x^{3}+x^{2}+3x+1}{1+x^{2}}dx
$$
we divide the numerator by the denominator, to allow us to write
$$
\frac{x^{4}+2x^{3}+x^{2}+3x+1}{1+x^{2}}=x^{2}+2x+\frac{x+1}{1+x^{2}}
$$
since the quotient is $x^{2}+2x$ and the remainder is $x+1$. Then
$$
\int\frac{x^{4}+2x^{3}+x^{2}+3x+1}{1+x^{2}}dx=\int x^{2}+2x+\frac{x+1}{1+x^{2}}\,dx
$$
$$
=\frac{x^{3}}{3}+x^{2}+\frac{1}{2}\ln(1+x^{2})+\arctan x+C
$$
These two examples are just the simplest, but illustrate the idea of using algebra to simplify rational functions.
\# 44.164 $\displaystyle \int\frac{1}{x(x-1)}dx=$?
\# 44.165 $\displaystyle \int\frac{1+x}{1+x^{2}}dx=$?
\# 44.166 $\displaystyle \int\frac{2x^{3}+4}{x(x+1)}dx=$?
\# 44.167 $\displaystyle \int\frac{2+2x+x^{2}}{1+x^{2}}dx=$?
\# 44.168 $\displaystyle \int\frac{2x^{3}+4}{x^{2}-1}dx=$?
\# 44.169 $\displaystyle \int\frac{2+3x}{1+x^{2}}dx=$?
\# 44.170 $\displaystyle \int\frac{x^{3}+1}{(x-1)(x-2)}dx=$?
\# 44.171 $\displaystyle \int\frac{x^{3}+1}{x^{2}+1}dx=$?
45. {\it Trigonometric Integrals}
Here we'll just have a {\it sample} of how to use trig identities to do some more complicated integrals involving trigonometric functions. This is `just the tip of the iceberg'. We don't do more for at least two reasons: first, hardly anyone remembers all these tricks anyway, and, second, in real life you can look these things up in tables of integrals. Perhaps even more important, in `real life' there are more sophisticated viewpoints which make the whole issue a little silly, somewhat like evaluating $\sqrt{26}$ `by differentials' without your calculator seems silly.
The only identities we'll need in our examples are
$\cos^{2}x+\sin^{2}x=1$ Pythagorean identity
$\sin x=\sqrt{\frac{1-\cos 2x}{2}}$ half-angle formula
$\cos x=\sqrt{\frac{1+\cos 2x}{2}}$ half-angle formula
The first example is
$$
\int\sin^{3}xdx
$$
If we ignore all trig identities, there is no easy way to do this integral. But if we use the Pythagorean identity to rewrite it, then things improve:
$$
\int\sin^{3}xdx=\int(1-\cos^{2}x)\sin xdx=-\int(1-\cos^{2}x)(-\sin x)dx
$$
In the latter expression, we can view the $-\sin x$ as the derivative of $\cos x$, so with the substitution $u= \cos x$ this integral is
$$
-\int(1-u^{2})du=-u+\frac{u^{3}}{3}+C=-\cos x+\frac{\cos^{3}x}{3}+C
$$
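(An editorial check: the derivative of $-\cos x+\frac{\cos^{3}x}{3}$ should come back to $\sin^{3}x$. A central difference at a few points confirms it; step size and sample points are our own choices.)

```python
from math import sin, cos

# The antiderivative just obtained by the substitution u = cos x.
F = lambda x: -cos(x) + cos(x)**3 / 3

# Numerical derivative of F, compared with the integrand sin^3 x.
h = 1e-5
err = max(abs((F(x + h) - F(x - h)) / (2 * h) - sin(x)**3)
          for x in (0.3, 1.0, 2.2))
print(err)   # tiny
```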
This idea can be applied, more generally, to integrals
$$
\int\sin^{m}x\cos^{n}xdx
$$
where {\it at least one of} $m, n$ {\it is odd}. For example, if $n$ is odd, then use
$$
\cos^{n}x=\cos^{n-1}x\cos x=(1-\sin^{2}x)^{\frac{n-1}{2}}\cos x
$$
to write the whole thing as
$$
\int\sin^{m}x\cos^{n}xdx=\int\sin^{m}x(1-\sin^{2}x)^{\frac{n-1}{2}}\cos xdx
$$
The point is that we have obtained something of the form
$\displaystyle \int$ (polynomial in $\sin x$) $\cos xdx$
Letting $u=\sin x$, we have $\cos xdx=du$, and the integral becomes
$\displaystyle \int$ (polynomial in {\it u}) $du$
which we can do.
But this Pythagorean identity trick does not help us on the relatively simple-looking integral
$$
\int\sin^{2}xdx
$$
since there is no odd exponent anywhere. In effect, we `divide the exponent by two', thereby getting an odd exponent, by using the {\it half-angle formula}:
$$
\int\sin^{2}xdx=\int\frac{1-\cos 2x}{2}dx=\frac{x}{2}-\frac{\sin 2x}{4}+C
$$
A bigger version of this application of the half-angle formula is
$$
\int\sin^{6}xdx=\int(\frac{1-\cos 2x}{2})^{3}dx=\int\frac{1}{8}-\frac{3}{8}\cos 2x+\frac{3}{8}\cos^{2}2x-\frac{1}{8}\cos^{3}2xdx
$$
Of the four terms in the integrand in the last expression, we can do the first two directly:
$$
\int\frac{1}{8}dx=\frac{x}{8}+C\ \int-\frac{3}{8}\cos 2xdx=\frac{-3}{16}\sin 2x+C
$$
But the last two terms require further work: using a half-angle formula {\it again}, we have
$$
\int\frac{3}{8}\cos^{2}2xdx=\int\frac{3}{16}(1+\cos 4x)dx=\frac{3x}{16}+\frac{3}{64}\sin 4x+C
$$
And the $\cos^{3}2x$ needs the Pythagorean identity trick, with the substitution $u=\sin 2x$, $du=2\cos 2x\,dx$ contributing a factor of $\frac{1}{2}$:
$$
\int\frac{1}{8}\cos^{3}2xdx=\frac{1}{8}\int(1-\sin^{2}2x)\cos 2xdx=\frac{1}{16}[\sin 2x-\frac{\sin^{3}2x}{3}]+C
$$
Putting it all together (noting that the $\cos^{3}2x$ term enters with a {\it minus} sign, from the expansion of $(\frac{1-\cos 2x}{2})^{3}$ above), we have
$$
\int\sin^{6}xdx=\frac{x}{8}+\frac{-3}{16}\sin 2x+\frac{3x}{16}+\frac{3}{64}\sin 4x-\frac{1}{16}[\sin 2x-\frac{\sin^{3}2x}{3}]+C
$$
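(With this many moving parts, a numerical check is reassuring. Note the $\frac{1}{16}$ on the last bracket, from $du=2\cos 2x\,dx$ in the substitution $u=\sin 2x$, and its minus sign, since the integrand contains $-\frac{1}{8}\cos^{3}2x$. The check also compares against the known value $\int_{0}^{\pi}\sin^{6}x\,dx=\frac{5\pi}{16}$.)

```python
from math import sin, pi

# The assembled antiderivative of sin^6 x.
def F(x):
    return (x/8 - (3/16)*sin(2*x) + 3*x/16 + (3/64)*sin(4*x)
            - (1/16)*(sin(2*x) - sin(2*x)**3 / 3))

# Central-difference derivative of F, compared with sin^6 x.
h = 1e-5
err = max(abs((F(x + h) - F(x - h)) / (2 * h) - sin(x)**6)
          for x in (0.4, 1.1, 2.0))
print(err)                        # tiny

# A known definite integral as a second check.
print(F(pi) - F(0), 5 * pi / 16)  # equal
```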
This last example is typical of the kind of repeated application of all the tricks necessary in order to treat all the possibilities.
In a slightly different vein, there is the horrible
$$
\int\sec xdx
$$
There is no decent way to do this at all from a first-year calculus viewpoint. A sort of rationalized-in-hindsight way of explaining the answer is:
$$
\int\sec xdx=\int\frac{\sec x(\sec x+\tan x)}{\sec x+\tan x}dx
$$
All we did was multiply and divide by $\sec x+\tan x$. Of course, we don't pretend to answer the question of how a person would get the idea to do this. But then (another miracle?) we `notice' that the numerator is the derivative of the denominator, so
$$
\int\sec xdx=\ln(\sec x+\tan x)+C
$$
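(For the skeptical, an editorial check that this strange answer is right: differentiate $\ln(\sec x+\tan x)$ numerically at a few points inside $(-\pi/2,\pi/2)$ and compare with $\sec x$.)

```python
from math import cos, tan, log

sec = lambda x: 1 / cos(x)
F = lambda x: log(sec(x) + tan(x))   # the claimed antiderivative

# Central-difference derivative of F versus sec x.
h = 1e-6
err = max(abs((F(x + h) - F(x - h)) / (2 * h) - sec(x))
          for x in (-0.8, 0.1, 0.7, 1.2))
print(err)   # tiny
```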
There is something distasteful about this rationalization, but at this level of technique we're stuck with it. Maybe this is enough of a sample. There are several other tricks that one would have to know in order to claim to be an `expert' at this, but it's not really sensible to {\it want} to be `expert' at these games, {\it because}
{\it there are smarter alternatives}.
\# 45.172 $\displaystyle \int\cos^{2}xdx=$?
\# 45.173 $\displaystyle \int\cos x\sin^{2}xdx=$?
\# 45.174 $\displaystyle \int\cos^{3}xdx=$?
\# 45.175 $\displaystyle \int\sin^{2}5xdx=$?
\# 45.176 $\displaystyle \int\sec(3x+7)dx$
\# 45.177 $\displaystyle \int\sin^{2}(2x+1)dx=$?
\# 45.178 $\displaystyle \int\sin^{3}(1-x)dx=$?
46. {\it Trigonometric Substitutions}
This section continues development of relatively special tricks to do special kinds of integrals. Even
though the application of such things is limited, it's nice to be {\it aware} of the possibilities, at least a little bit.
The key idea here is to use trig functions to be able to `take the square root' in certain integrals. There are just three prototypes for the kind of thing we can deal with:
$$
\sqrt{1-x^{2}}\ \sqrt{1+x^{2}}\ \sqrt{x^{2}-1}
$$
Examples will illustrate the point.
In rough terms, the idea is that in an integral where the `worst' part is $\sqrt{1-x^{2}}$, replacing $x$ by $\sin u$ (and, correspondingly, $dx$ by $\cos udu$), {\it we will be able to take the square root}, and then obtain an integral in
the variable $u$ which is one of the {\it trigonometric integrals} which in principle we now know how to do. The point is that then
$$
\sqrt{1-x^{2}}=\sqrt{1-\sin^{2}u}=\sqrt{\cos^{2}u}=\cos u
$$
We have `taken the square root'.
For example, in
$$
\int\sqrt{1-x^{2}}dx
$$
we replace $x$ by $\sin u$ and $dx$ by $\cos udu$ to obtain
$$
\int\sqrt{1-x^{2}}dx=\int\sqrt{1-\sin^{2}u}\cos udu=\int\sqrt{\cos^{2}u}\cos udu=
$$
$$
=\int\cos u\cos udu=\int\cos^{2}udu
$$
Now we have an integral we know how to integrate: using the half-angle formula, this is
$$
\int\cos^{2}udu=\int\frac{1+\cos 2u}{2}du=\frac{u}{2}+\frac{\sin 2u}{4}+C
$$
And there still remains the issue of {\it substituting back} to obtain an expression in terms of $x$ rather than $u.$ Since $x=\sin u$, it's just the definition of {\it inverse function} that
$$
u=\arcsin x
$$
To express $\sin 2u$ in terms of $x$ is more aggravating. We use the {\it double-angle formula}
$$
\sin 2u=2\sin u\cos u
$$
Then
$$
\frac{1}{4}\sin 2u=\frac{1}{4}\cdot 2\sin u\cos u=\frac{1}{2}x\cdot\sqrt{1-x^{2}}
$$
where `of course' we used the Pythagorean identity to give us
$$
\cos u=\sqrt{1-\sin^{2}u}=\sqrt{1-x^{2}}
$$
Whew.
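(After all that back-substitution, an editorial check is welcome: the antiderivative we ended with is $\frac{\arcsin x}{2}+\frac{1}{2}x\sqrt{1-x^{2}}$, and differentiating it numerically should give back $\sqrt{1-x^{2}}$.)

```python
from math import asin, sqrt

# arcsin(x)/2 + x*sqrt(1-x^2)/2, i.e., u/2 + (sin 2u)/4 with u = arcsin x.
F = lambda x: asin(x) / 2 + x * sqrt(1 - x**2) / 2

# Central-difference derivative of F versus sqrt(1 - x^2).
h = 1e-6
err = max(abs((F(x + h) - F(x - h)) / (2 * h) - sqrt(1 - x**2))
          for x in (-0.5, 0.0, 0.5, 0.9))
print(err)   # tiny
```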
The next type of integral we can `improve' is one containing an expression
$$
\sqrt{1+x^{2}}
$$
In this case, we use another Pythagorean identity
$$
1+\tan^{2}u=\sec^{2}u
$$
(which we can get from the usual one $\cos^{2}u+\sin^{2}u=1$ by dividing by $\cos^{2}u$). So we'd let
$$
x=\tan u\ dx=\sec^{2}udu
$$
(mustn't forget the $dx$ and $du$ business!).
For example, in
$$
\int\frac{\sqrt{1+x^{2}}}{x}dx
$$
we use
$$
x=\tan u\ dx=\sec^{2}udu
$$
and turn the integral into
$$
\int\frac{\sqrt{1+x^{2}}}{x}dx=\int\frac{\sqrt{1+\tan^{2}u}}{\tan u}\sec^{2}udu=
$$
$$
=\int\frac{\sqrt{\sec^{2}u}}{\tan u}\sec^{2}udu=\int\frac{\sec u}{\tan u}\sec^{2}udu=\int\frac{1}{\sin u\cos^{2}u}du
$$
by rewriting everything in terms of $\cos u$ and $\sin u.$
For integrals containing $\sqrt{x^{2}-1}$, use $x=\sec u$ in order to invoke the Pythagorean identity
$$
\sec^{2}u-1=\tan^{2}u
$$
so as to be able to `take the square root'. Let's not execute any examples of this, since nothing new really happens.
{\it Rather}, let's examine some {\it purely algebraic variants} of these trigonometric substitutions, where we can
get some mileage out of {\it completing the square}. For example, consider
$$
\int\sqrt{-2x-x^{2}}dx
$$
The quadratic polynomial inside the square-root is {\it not} one of the three simple types we've looked at. But, by completing the square, we'll be able to rewrite it in essentially such forms:
$$
-2x-x^{2}=-(2x+x^{2})=-(-1+1+2x+x^{2})=-(-1+(1+x)^{2})=1-(1+x)^{2}
$$
Note that always when completing the square we `take out' the coefficient in front of $x^{2}$ in order to see what's going on, and then put it back at the end.
So, in this case, we'd let
$$
\sin u=1+x\ \cos udu=dx
$$
In another example, we might have
$$
\int\sqrt{8x-4x^{2}}dx
$$
Completing the square again, we have
$$
8x-4x^{2}=-4(x^{2}-2x)=-4(-1+1-2x+x^{2})=-4(-1+(x-1)^{2})
$$
Rather than put the whole `$-4$' back, we only keep track of the $\pm$, and take the `$+4$' outside the square root entirely (as a `$2$'):
$$
\int\sqrt{8x-4x^{2}}dx=\int\sqrt{-4(-1+(x-1)^{2})}dx
$$
$$
=2\int\sqrt{-(-1+(x-1)^{2})}dx=2\int\sqrt{1-(x-1)^{2}}dx
$$
Then we're back to a familiar situation.
\# 46.179 Tell what trig substitution to use for $\displaystyle \int x^{8}\sqrt{x^{2}-1}dx$
\# 46.180 Tell what trig substitution to use for $\displaystyle \int\sqrt{25+16x^{2}}dx$
\# 46.181 Tell what trig substitution to use for $\displaystyle \int\sqrt{1-x^{2}}dx$
\# 46.182 Tell what trig substitution to use for $\displaystyle \int\sqrt{9+4x^{2}}dx$
\# 46.183 Tell what trig substitution to use for $\displaystyle \int x^{9}\sqrt{x^{2}+1}dx$
\# 46.184 Tell what trig substitution to use for $\displaystyle \int x^{8}\sqrt{x^{2}-1}dx$
47. {\it Historical and theoretical comments: Mean Value Theorem}
For several reasons, the traditional way that {\it Taylor polynomials} are taught gives the impression that the ideas are inextricably linked with issues about {\it infinite series}. This is not so, but every calculus book I
know takes that approach. The reasons for this systematic mistake are complicated. Anyway, we will {\it not} make that mistake here, although we may talk about infinite series later.
Instead of following the tradition, we will immediately talk about Taylor polynomials, {\it without} first tiring ourselves over infinite series, and {\it without} fooling anyone into thinking that Taylor polynomials have the infinite series stuff as prerequisite!
The theoretical underpinning for these facts about Taylor polynomials is {\it The Mean Value Theorem}, which itself depends upon some fairly subtle properties of the real numbers. It asserts that, {\it for a function $f$ differentiable on an interval $[a, b]$, there is a point $c$ in the interior $(a, b)$ of this interval so that}
$$
f'(c)=\frac{f(b)-f(a)}{b-a}
$$
Note that the latter expression is the formula for the slope of the `chord' or `secant' line connecting the two points $(a,\ f(a))$ and $(b,\ f(b))$ on the graph of $f$. And the $f'(c)$ can be interpreted as the slope of the {\it tangent} line to the curve at the point $(c,\ f(c))$ .
In many traditional scenarios a person is expected to commit the statement of the Mean Value Theorem to memory, and to be able to respond to issues like `Find a point $c$ in the interval $[0,1]$ satisfying the conclusion of the Mean Value Theorem for the function $f(x)=x^{2}.$' This is pointless and we won't do it.
48. {\it Taylor polynomials: formulas}
Before attempting to illustrate what these funny formulas can be used for, we just write them out. First, some reminders:
The notation $f^{(k)}$ means the $k\mathrm{t}\mathrm{h}$ derivative of $f$. The notation $k!$ means $k$-{\it factorial} which by definition is
$$
k!=1\cdot 2\cdot 3\cdot 4\cdot\ldots\cdot(k-1)\cdot k
$$
Taylor's Formula with Remainder Term {\it first somewhat verbal version}: Let $f$ be a reasonable function, and fix a positive integer $n$. Then we have
$$
f(\text{input})=f(\text{basepoint})+\frac{f'(\text{basepoint})}{1!}(\text{input}-\text{basepoint})
$$
$$
+\frac{f''(\text{basepoint})}{2!}(\text{input}-\text{basepoint})^{2}+\frac{f'''(\text{basepoint})}{3!}(\text{input}-\text{basepoint})^{3}
$$
$$
+\ldots+\frac{f^{(n)}(\text{basepoint})}{n!}(\text{input}-\text{basepoint})^{n}+\frac{f^{(n+1)}(c)}{(n+1)!}(\text{input}-\text{basepoint})^{n+1}
$$
for some $c$ between {\it basepoint} and {\it input}.
That is, the value of the function $f$ for some {\it input} presumably `near' the {\it basepoint} is expressible in terms of the values of $f$ and its derivatives {\it evaluated at the basepoint}, with the only mystery being the precise nature of that $c$ between {\it input} and {\it basepoint}.
Taylor's Formula with Remainder Term {\it second somewhat verbal version}: Let $f$ be a reasonable function, and fix a positive integer $n.$
$$
f(\text{basepoint}+\text{increment})=f(\text{basepoint})+\frac{f'(\text{basepoint})}{1!}(\text{increment})
$$
$$
+\frac{f''(\text{basepoint})}{2!}(\text{increment})^{2}+\frac{f'''(\text{basepoint})}{3!}(\text{increment})^{3}
$$
$$
+\ldots+\frac{f^{(n)}(\text{basepoint})}{n!}(\text{increment})^{n}+\frac{f^{(n+1)}(c)}{(n+1)!}(\text{increment})^{n+1}
$$
for some $c$ between {\it basepoint} and {\it basepoint} $+${\it increment}.
This version is really the same as the previous, but with a different emphasis: here we still have a {\it basepoint}, but are thinking in terms of moving a little bit away from it, by the amount {\it increment}.
And to get a more compact formula, we can be more symbolic, restating the same fact:
Taylor's Formula with Remainder Term: Let $f$ be a reasonable function, fix an input value $x_{o}$, and fix a positive integer $n$. Then for input $x$ we have
$$
f(x)=f(x_{o})+\frac{f'(x_{o})}{1!}(x-x_{o})+\frac{f''(x_{o})}{2!}(x-x_{o})^{2}+\frac{f'''(x_{o})}{3!}(x-x_{o})^{3}+\ldots
$$
$$
\ldots+\frac{f^{(n)}(x_{o})}{n!}(x-x_{o})^{n}+\frac{f^{(n+1)}(c)}{(n+1)!}(x-x_{o})^{n+1}
$$
for some $c$ between $x_{o}$ and $x.$
Note that in every version, in the very last term where all the indices are $n+1$, the input into $f^{(n+1)}$ is {\it not} the basepoint $x_{o}$ but is, instead, that mysterious $c$ about which we truly know nothing but that it lies
between $x_{o}$ and $x$. The part of this formula {\it without} the error term is the degree-$n$ Taylor polynomial for $f$ at $x_{o}$, and that last term is the error term or remainder term. The Taylor series is said to be expanded at or expanded about or centered at or simply at the basepoint $x_{o}.$
There are many other possible forms for the error/remainder term. The one here was chosen partly because it resembles the other terms in the main part of the expansion.
{\it Linear} Taylor's Polynomial with Remainder Term: Let $f$ be a reasonable function, fix an input
value $x_{o}$. For any (reasonable) input value $x$ we have
$$
f(x)=f(x_{o})+\frac{f'(x_{o})}{1!}(x-x_{o})+\frac{f''(c)}{2!}(x-x_{o})^{2}
$$
for some $c$ between $x_{o}$ and $x.$
The previous formula is of course a very special case of the first, more general, formula. The reason to include the `linear' case is that {\it without} the error term it is the old {\it approximation by differentials} formula, which had the fundamental flaw of having no way to estimate the error. Now we {\it have} the error estimate.
The general idea here is to approximate `fancy' functions by polynomials, especially if we restrict ourselves to a fairly small interval around some given point. (That `approximation by differentials' circus was a very crude version of this idea).
It is at this point that it becomes relatively easy to `beat' a calculator, in the sense that the methods here can be used to give whatever precision is desired. So at the very least this methodology is not as silly and obsolete as some earlier traditional examples.
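(An editorial illustration of `beating a calculator': the degree-$n$ Taylor polynomial of $e^{x}$ at $0$, together with the remainder bound $|\frac{e^{c}}{(n+1)!}x^{n+1}|\leq \frac{3\,x^{n+1}}{(n+1)!}$ for $0\leq x\leq 1$, since $e^{c}<3$ there. The function name and the choices $x=0.5$, $n=8$ are our own.)

```python
from math import exp, factorial

# Degree-n Taylor polynomial of e^x expanded at 0.
def taylor_exp(x, n):
    return sum(x**k / factorial(k) for k in range(n + 1))

x, n = 0.5, 8
approx = taylor_exp(x, n)
bound = 3 * x**(n + 1) / factorial(n + 1)   # guaranteed error bound
print(approx, exp(x), bound)   # the true error is below the bound
```

The point is the guarantee: without ever consulting the library value, the remainder term promises the answer is within `bound` of the truth.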
But even so, there is more to this than getting numbers out: it ought to be of some intrinsic interest that pretty arbitrary functions can be approximated as well as desired by polynomials, which are so readily
computable (by hand $or$ by machine)!
One element under our control is choice of {\it how high degree polynomial to use}. Typically, the higher the degree (meaning more terms), the better the approximation will be. (There is nothing comparable to this in the `approximation by differentials'.)
Of course, for all this to really be worth anything either in theory or in practice, we do need a tangible {\it error estimate}, so that we can be sure that we are within whatever tolerance/error is required. (There is nothing comparable to this in the `approximation by differentials', either).
And at this point it is not at all clear what exactly can be done with such formulas. For one thing, there are choices.
\# 48.185 Write the first three terms of the Taylor series {\it at} $0$ of $f(x)=1/(1+x)$.
\# 48.186 Write the first three terms of the Taylor series {\it at} $2$ of $f(x)=1/(1-x)$.
\# 48.187 Write the first three terms of the Taylor series {\it at} $0$ of $f(x)=e^{\cos x}.$
49. {\it Classic examples of Taylor polynomials}
Some of the most famous (and important) examples are the expansions of $\displaystyle \frac{1}{1-x}$, $e^{x}$, $\cos x$, $\sin x$, and $\log(1+x)$ at $0$: right from the formula, although simplifying a little, we get
$$
\frac{1}{1-x}=1+x+x^{2}+x^{3}+x^{4}+x^{5}+x^{6}+\ldots
$$
$$
e^{x}=1+\frac{x}{1!}+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\frac{x^{4}}{4!}+\ldots
$$
$$
\cos x=1-\frac{x^{2}}{2!}+\frac{x^{4}}{4!}-\frac{x^{6}}{6!}+\frac{x^{8}}{8!}\ldots
$$
$$
\sin x=\frac{x}{1!}-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\frac{x^{7}}{7!}+\ldots
$$
$$
\log(1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\frac{x^{5}}{5}-\frac{x^{6}}{6}+\ldots
$$
where here the {\it dots} mean to {\it continue to whatever term you want, then stop, and stick on the appropriate remainder term}.
It is entirely reasonable if you can't really see that these are what you'd get, but in any case you should do the computations to verify that these are right. It's not so hard.
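(A complementary, purely numerical check, not a substitute for doing the derivatives yourself: partial sums of three of these expansions, compared with library values at $x=0.1$, where few terms already do very well. The cut-off points are our own whims.)

```python
from math import cos, log, factorial

x = 0.1
# 1/(1-x): terms up through x^6.
geom = sum(x**k for k in range(7))
# cos x: terms up through x^8/8!.
cosx = sum((-1)**k * x**(2*k) / factorial(2*k) for k in range(5))
# log(1+x): terms up through x^6/6.
logx = sum((-1)**(k+1) * x**k / k for k in range(1, 7))

print(geom, 1 / (1 - x))
print(cosx, cos(x))
print(logx, log(1 + x))
```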
Note that the expansion for cosine has no {\it odd} powers of $x$ (meaning that the coefficients are {\it zero}), while the expansion for sine has no {\it even} powers of $x$ (meaning that the coefficients are {\it zero}).
At this point it is worth repeating that we are {\it not} talking about {\it infinite} sums (series) at all here, although we do allow arbitrarily large {\it finite} sums. Rather than worry over an infinite sum that we can never truly evaluate, we use the {\it error} or {\it remainder} term instead. Thus, while in other contexts the dots {\it would} mean `infinite sum', that's not our concern here.
The first of these formulas you might recognize as being a {\it geometric series}, or at least a part of one. The other four patterns might be new to you. A person would want to learn to recognize these on sight, as if by reflex!
50. {\it Computational tricks regarding Taylor polynomials}
The obvious question to ask about Taylor polynomials is `What are the first so-many terms in the Taylor polynomial of some function expanded at some point?'.
The most straightforward way to deal with this is just to do what is indicated by the formula: take however high order derivatives you need and plug in. However, very often this is not at all the most efficient.
Especially in a situation where we are interested in a composite function of the form $f(x^{n})$ or
$f$ (polynomial in {\it x}) with a `familiar' function $f$, there are alternatives.
For example, looking at $f(x)=e^{x^{3}}$, if we start taking derivatives to expand this at $0$, there will be a big mess pretty fast. On the other hand, we might start with the `familiar' expansion for $e^{x}$
$$
e^{x}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\frac{e^{c}}{4!}x^{4}
$$
with some $c$ between $0$ and $x$, where our choice to cut it off after that many terms was simply a whim. But then replacing $x$ by $x^{3}$ gives
$$
e^{x^{3}}=1+x^{3}+\frac{x^{6}}{2!}+\frac{x^{9}}{3!}+\frac{e^{c}}{4!}x^{12}
$$
with some $c$ between $0$ and $x^{3}.$ Yes, we need to keep track of $c$ in relation to the {\it new} $x.$
So we get a polynomial plus that funny term with the `c' in it, for the remainder. Yes, this gives us a different-looking error term, but that's fine.
So we obtain, with relative ease, the expansion of degree {\it eleven} of this function, which would have
been horrible to obtain by repeated differentiation and direct application of the general formula. Why
`eleven'?: well, the error term has the $x^{12}$ in it, which means that the polynomial itself stopped with a
$x^{11}$ term. Why didn't we see that term? Well, evidently the coefficients of $x^{11}$, and of $x^{10}$ (not to mention $x, x^{2}, x^{4}, x^{5}, x^{7}, x^{8}!)$ are {\it zero}.
As another example, let's get the degree-eight expansion of $\cos x^{2}$ at $0$. Of course, it makes sense to use
$$
\cos x=1-\frac{x^{2}}{2!}+\frac{x^{4}}{4!}+\frac{-\sin c}{5!}x^{5}
$$
with $c$ between $0$ and $x$, where we note that $-\sin x$ is the fifth derivative of $\cos x$. Replacing $x$ by $x^{2}$, this becomes
$$
\cos x^{2}=1-\frac{x^{4}}{2!}+\frac{x^{8}}{4!}+\frac{-\sin c}{5!}x^{10}
$$
where now we say that $c$ is between $0$ and $x^{2}.$
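(An editorial check that the substitution shortcut really works: compare the polynomial part $1-\frac{x^{4}}{2!}+\frac{x^{8}}{4!}$ with the library $\cos(x^{2})$ at a few points. As the remainder term with its $x^{10}$ predicts, the agreement degrades as $x$ grows.)

```python
from math import cos

# The degree-eight expansion of cos(x^2) obtained by substitution.
p = lambda x: 1 - x**4 / 2 + x**8 / 24

err = max(abs(p(x) - cos(x**2)) for x in (0.2, 0.5, 0.8))
print(err)   # small, dominated by the sample at x = 0.8
```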
\# 50.188 Use a shortcut to compute the Taylor expansion at $0$ of $\cos(x^{5})$.
\# 50.189 Use a shortcut to compute the Taylor expansion at $0$ of $e^{(x^{2}+x)}.$
\# 50.190 Use a shortcut to compute the Taylor expansion at $0$ of $\displaystyle \log(\frac{1}{1-x})$.
51. {\it More serious questions about Taylor polynomials}
Beyond just writing out Taylor expansions, we could actually use them to approximate things in a more serious way. There are roughly three different sorts of {\it serious} questions that one can ask in this context. They all use similar words, so a careful reading of such questions is necessary to be sure of answering the question asked.
(The word `tolerance' is a synonym for `error estimate', meaning that we know that the error is {\it no worse} than such-and-such.)
$\bullet$ Given a Taylor polynomial approximation to a function, expanded at some given point, and given a required tolerance, {\it on how large an interval} around the given point does the Taylor polynomial achieve that tolerance?
$\bullet$ Given a Taylor polynomial approximation to a function, expanded at some given point, and given an interval around that given point, {\it within what tolerance} does the Taylor polynomial approximate the function on that interval?
$\bullet$ Given a function, given a fixed point, given an interval around that fixed point, and given a required tolerance, find {\it how many terms} must be used in the Taylor expansion to approximate the function to within the required tolerance on the given interval.
As a special case of the last question, we can consider the question of {\it approximating} $f(x)$ {\it to within a given tolerance/error in terms of} $f(x_{o}), f'(x_{o}), f''(x_{o})$ {\it and higher derivatives of} $f$ {\it evaluated at a given point} $x_{o}.$ In `real life' this last question is not really so important as the third of the questions listed above, since evaluation at just one point can often be achieved more simply by some other means. Having a polynomial approximation that works {\it all along an interval} is a much more substantive thing than evaluation at a single point.
It must be noted that there are also {\it other} ways to approach the issue of {\it best approximation by a polynomial on an interval}. And beyond worry over approximating the {\it values} of the function, we might also want the values of one or more of the {\it derivatives} to be close, as well. The theory of splines is one approach to approximation which is very important in practical applications.
52. {\it Determining Tolerance/Error}
This section treats a simple example of the second kind of question mentioned above: `Given a Taylor polynomial approximation to a function, expanded at some given point, and given an interval around that given point, {\it within what tolerance} does the Taylor polynomial approximate the function on that interval?'

Let's look at the approximation $1-\displaystyle \frac{x^{2}}{2}+\frac{x^{4}}{4!}$ to $f(x)=\cos x$ on the interval $[-\displaystyle \frac{1}{2},\ \frac{1}{2}]$. We might ask: {\it within what tolerance does this polynomial approximate} $\cos x$ {\it on that interval?}
To answer this, we first recall that the error term we have after those first (oh-so-familiar) terms of the expansion of cosine is
$$
\frac{-\sin c}{5!}x^{5}
$$
For $x$ in the indicated interval, we want to know the {\it worst-case scenario} for the size of this thing. A sloppy but good and simple {\it estimate} on $\sin c$ is that $|\sin c|\leq 1$, regardless of what $c$ is. This is a very happy kind of estimate because it's not so bad and because it doesn't depend at all upon $x$. And the biggest that $|x^{5}|$ can be is $(\displaystyle \frac{1}{2})^{5}\approx 0.03$. Then the {\it error is estimated as}
$$
|\frac{-\sin c}{5!}x^{5}|\leq\frac{1}{2^{5}\cdot 5!}\leq 0.0003
$$
This is not so bad at all!
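As a numerical aside (not part of the original text), we can sample the interval and confirm that the worst observed error really does stay under this estimate:

```python
import math

# Sample [-1/2, 1/2] and confirm the worst error of 1 - x^2/2 + x^4/4!
# against cos x stays under the estimate 1/(2^5 * 5!).
bound = 1 / (2**5 * math.factorial(5))   # = 1/3840, roughly 0.00026
worst = max(abs(math.cos(t) - (1 - t**2 / 2 + t**4 / 24))
            for t in (i / 1000 for i in range(-500, 501)))
assert worst <= bound
```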
We could have been a little clever here, taking advantage of the fact that a lot of the terms in the Taylor expansion of cosine at $0$ are already zero. In particular, we could {\it choose} to view the original polynomial $1-\displaystyle \frac{x^{2}}{2}+\frac{x^{4}}{4!}$ as {\it including} the {\it fifth-degree} term of the Taylor expansion as well, which simply happens to be zero, so is invisible. Thus, instead of using the remainder term with the `5' in it, we are actually entitled to use the remainder term with a `6'. This typically will give a better outcome.
That is, instead of the remainder we had just above, we would have an error term
$$
\frac{-\cos c}{6!}x^{6}
$$
Again, in the {\it worst-case scenario} $|-\cos c|\leq 1$. And still $|x|\displaystyle \leq\frac{1}{2}$, so we have the {\it error estimate}
$$
|\frac{-\cos c}{6!}x^{6}|\leq\frac{1}{2^{6}\cdot 6!}\leq 0.000022
$$
This is less than a tenth as much as in the first version.
But what happened here? Are there two different answers to the question of how well that polynomial approximates the cosine function on that interval? Of course not. Rather, there were two {\it approaches} taken by us to {\it estimate} how well it approximates cosine. In fact, we still do not know the {\it exact} error!
The point is that the second estimate (being a little wiser) is {\it closer} to the truth than the first. The first estimate is {\it true}, but is a {\it weaker} assertion than we are able to make if we try a little harder.
This already illustrates the point that `in real life' there is often no single `right' or `best' estimate of an error, in the sense that the estimates that we can obtain by practical procedures may not be perfect, but represent a trade-off between time, effort, cost, and other priorities.
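To see this trade-off concretely (a numerical aside, not in the original), we can compare both estimates against the sampled true worst error: both bounds hold, and the degree-six remainder simply gives the sharper one.

```python
import math

# Compare both estimates against the (sampled) true worst error of
# 1 - x^2/2 + x^4/4! on [-1/2, 1/2].
xs = [i / 1000 for i in range(-500, 501)]
true_worst = max(abs(math.cos(x) - (1 - x**2 / 2 + x**4 / 24)) for x in xs)
bound5 = 1 / (2**5 * math.factorial(5))   # first estimate, about 0.00026
bound6 = 1 / (2**6 * math.factorial(6))   # second estimate, about 0.000022
assert true_worst <= bound6 <= bound5
```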
\# 52.191 How well (meaning `within what tolerance') does $1-x^{2}/2+x^{4}/24-x^{6}/720$ approximate $\cos x$ on the interval $[-0.1,0.1]$ ?
\# 52.192 How well (meaning `within what tolerance') does $1-x^{2}/2+x^{4}/24-x^{6}/720$ approximate $\cos x$ on the interval [-1, 1]?
\# 52.193 How well (meaning `within what tolerance') does $1-x^{2}/2+x^{4}/24-x^{6}/720$ approximate $\cos x$ on the interval $[\displaystyle \frac{-\pi}{2},\ \frac{\pi}{2}]$?
53. {\it How large an interval with given tolerance?}
This section treats a simple example of the first kind of question mentioned above: `Given a Taylor polynomial approximation to a function, expanded at some given point, and given a required tolerance, {\it on how large an interval} around the given point does the Taylor polynomial achieve that tolerance?'
The specific example we'll get to here is: {\it For what range of} $x\geq 25$ {\it does} $5+\displaystyle \frac{1}{10}(x-25)$ {\it approximate} $\sqrt{x}$ {\it to within} $.001$?
Again, with the degree-one Taylor polynomial and corresponding remainder term, for reasonable functions $f$ we have
$$
f(x)=f(x_{o})+f'(x_{o})(x-x_{o})+\frac{f''(c)}{2!}(x-x_{o})^{2}
$$
for some $c$ between $x_{o}$ and $x$. The remainder term is
remainder term $=\displaystyle \frac{f''(c)}{2!}(x-x_{o})^{2}$
The notation $2!$ means `2-factorial', which is just $2$, but which we write to be `forward compatible' with other things later.
{\it Again: no, we do not know what} $c$ {\it is, except that it is between} $x_{o}$ {\it and} $x$. But this is entirely reasonable, since if we really knew it exactly then we'd be able to evaluate $f(x)$ exactly, and we are evidently presuming that this isn't possible (or we wouldn't be doing all this!). That is, we have {\it limited information} about what $c$ is, which we could view as the limitation on how precisely we can know the value $f(x)$.
To give an example of how to use this limited information, consider $f(x)=\sqrt{x}$ (yet again!). Taking $x_{o}= 25$, we have
$$
\sqrt{x}=f(x)=f(x_{o})+f'(x_{o})(x-x_{o})+\frac{f''(c)}{2!}(x-x_{o})^{2}=
$$
$$
=\sqrt{25}+\frac{1}{2}\frac{1}{\sqrt{25}}(x-25)-\frac{1}{2!}\frac{1}{4}\frac{1}{(c)^{3/2}}(x-25)^{2}=
$$
$$
=5+\frac{1}{10}(x-25)-\frac{1}{8}\frac{1}{c^{3/2}}(x-25)^{2}
$$
where all we know about $c$ is that it is between 25 and $x$. What can we expect to get from this?
Well, we have to make a choice or two to get started: let's suppose that $x\geq 25$ (rather than smaller). Then we can write
$$
25\leq c\leq x
$$
From this, because the three-halves-power function is {\it increasing}, we have
$$
25^{3/2}\leq c^{3/2}\leq x^{3/2}
$$
Taking inverses (with positive numbers) reverses the inequalities: we have
$$
25^{-3/2}\geq c^{-3/2}\geq x^{-3/2}
$$
So, {\it in the worst-case scenario}, the value of $c^{-3/2}$ is at most $25^{-3/2}=1/125$.
And we can rearrange the equation:
$$
\sqrt{x}-[5+\frac{1}{10}(x-25)]=-\frac{1}{8}\frac{1}{c^{3/2}}(x-25)^{2}
$$
Taking absolute values {\it in order to talk about error}, this is
$$
|\sqrt{x}-[5+\frac{1}{10}(x-25)]|=|\frac{1}{8}\frac{1}{c^{3/2}}(x-25)^{2}|
$$
Now let's use our estimate $|\displaystyle \frac{1}{c^{3/2}}|\leq$ 1/125 to write
$$
|\sqrt{x}-[5+\frac{1}{10}(x-25)]|\leq|\frac{1}{8}\frac{1}{125}(x-25)^{2}|
$$
OK, having done this simplification, {\it now} we can answer questions like: {\it For what range of} $x\geq 25$ {\it does} $5+\displaystyle \frac{1}{10}(x-25)$ {\it approximate} $\sqrt{x}$ {\it to within} $.001$? We cannot hope to tell {\it exactly}, but only to give a range of values of $x$ for which we {\it can} be sure {\it based upon our estimate}. So the question becomes: solve the inequality
$$
|\frac{1}{8}\frac{1}{125}(x-25)^{2}|\leq.001
$$
(with $x\geq 25$). Multiplying out by the denominator of $8\cdot 125$ gives (by coincidence?)
$$
|x-25|^{2}\leq 1
$$
so the solution is $25\leq x\leq 26.$
So we can conclude that $\sqrt{x}$ is approximated to within $.001$ for all $x$ in the range $25\leq x\leq 26$. This is a worthwhile kind of thing to be able to find out.
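A quick numerical confirmation (an aside, not in the original text): on $[25,26]$ the tangent-line approximation really does stay within the tolerance.

```python
import math

# On [25, 26] the tangent-line approximation 5 + (x - 25)/10
# stays within .001 of sqrt(x).
for i in range(101):
    x = 25 + i / 100
    assert abs(math.sqrt(x) - (5 + (x - 25) / 10)) <= 0.001
```

The worst case occurs at the right endpoint $x=26$, where the error is about $0.00098$, just under the tolerance.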
\# 53.194 For what range of values of $x$ is $x-\displaystyle \frac{x^{3}}{6}$ within 0.01 of $\sin x$?
\# 53.195 Only consider $-1\leq x\leq 1$. For what range of values of $x$ {\it inside this interval} is the polynomial $1+x+x^{2}/2$ within $.01$ of $e^{x}$?
\# 53.196 On how large an interval around $0$ is $1-x$ within 0.01 of $1/(1+x)$ ?
\# 53.197 On how large an interval around 100 is $10+\displaystyle \frac{x-100}{20}$ within 0.01 of $\sqrt{x}$?
54. {\it Achieving desired tolerance on desired interval}
This third question is usually the most difficult, since it requires both {\it estimates} and adjustment of the {\it number of terms} in the Taylor expansion: {\it Given a function, given a fixed point, given an interval around that fixed point, and given a required tolerance, find how many terms must be used in the Taylor expansion to approximate the function to within the required tolerance on the given interval}.
For example, let's get a Taylor polynomial approximation to $e^{x}$ which is within 0.001 on the interval $[-\displaystyle \ \frac{1}{2},\ +\frac{1}{2}]$. We use
$$
e^{x}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\ldots+\frac{x^{n}}{n!}+\frac{e^{c}}{(n+1)!}x^{n+1}
$$
for some $c$ between $0$ and $x$, and where we do not yet know what we want $n$ to be. It is very convenient here that the $n$-th derivative of $e^{x}$ is still just $e^{x}$! We want to {\it choose} $n$ {\it large enough to guarantee that}
$$
|\frac{e^{c}}{(n+1)!}x^{n+1}|\leq 0.001
$$
for all $x$ in that interval (without knowing anything too detailed about what the corresponding $c$'s are!).
The error term is estimated as follows, by thinking about the {\it worst-case scenario} for the sizes of the parts of that term: we know that the exponential function is increasing along the whole real line, so in any
event $c$ lies in $[-\displaystyle \ \frac{1}{2},\ +\frac{1}{2}]$ and
$$
|e^{c}|\leq e^{1/2}\leq 2
$$
(where we've not been too fussy about being accurate about how big the square root of $e$ is!). And for $x$ in that interval we know that
$$
|x^{n+1}|\leq(\frac{1}{2})^{n+1}
$$
So we want to {\it choose} $n$ {\it large enough to guarantee that}
$$
|\frac{e^{c}}{(n+1)!}(\frac{1}{2})^{n+1}|\leq 0.001
$$
Since
$$
|\frac{e^{c}}{(n+1)!}(\frac{1}{2})^{n+1}|\leq\frac{2}{(n+1)!}(\frac{1}{2})^{n+1}
$$
we can be confident of the desired inequality if we can be sure that
$$
\frac{2}{(n+1)!}(\frac{1}{2})^{n+1}\leq 0.001
$$
That is, we want to `solve' for $n$ in the inequality
$$
\frac{2}{(n+1)!}(\frac{1}{2})^{n+1}\leq 0.001
$$
There is no genuine formulaic way to `solve' for $n$ to accomplish this. Rather, we just evaluate the left-hand side of the desired inequality for larger and larger values of $n$ until (hopefully!) we get something smaller than $0.001$. So, trying $n=3$, the expression is
$$
\frac{2}{(3+1)!}(\frac{1}{2})^{3+1}=\frac{1}{12\cdot 16}
$$
which is more like 0.01 than 0.001. So just try $n=4$:
$$
\frac{2}{(4+1)!}(\frac{1}{2})^{4+1}=\frac{1}{60\cdot 32}\leq 0.00052
$$
which is better than we need.
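The trial-and-error search just performed can be written as a loop (a modern aside, not part of the original text): find the smallest $n$ with $\frac{2}{(n+1)!}(\frac{1}{2})^{n+1}\leq 0.001$.

```python
import math

# Find the smallest n with 2/(n+1)! * (1/2)^(n+1) <= 0.001,
# exactly as the trial-and-error in the text does.
n = 0
while 2 / math.factorial(n + 1) * (1 / 2)**(n + 1) > 0.001:
    n += 1
assert n == 4   # degree four suffices, matching the hand computation
```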
The conclusion is that we needed to take the Taylor polynomial of degree $n=4$ to achieve the desired tolerance along the whole interval indicated. Thus, the polynomial
$$
1+x+\frac{x^{2}}{2}+\frac{x^{3}}{6}+\frac{x^{4}}{24}
$$
approximates $e^{x}$ to within 0.00052 for $x$ in the interval $[-\displaystyle \ \frac{1}{2},\ \frac{1}{2}].$
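As a check (not in the original text), we can verify numerically that this degree-four polynomial does track $e^{x}$ to within $0.00052$ on the whole interval:

```python
import math

# The degree-four Taylor polynomial 1 + x + x^2/2! + x^3/3! + x^4/4!
# tracks e^x to within 0.00052 everywhere on [-1/2, 1/2].
def p4(x):
    return sum(x**k / math.factorial(k) for k in range(5))

worst = max(abs(math.exp(i / 1000) - p4(i / 1000)) for i in range(-500, 501))
assert worst <= 0.00052
```

The worst sampled error is about $0.00028$, at the right endpoint, comfortably inside the guaranteed bound.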
Yes, such questions can easily become very difficult. And, as a reminder, there is no real or genuine claim that this kind of approach to polynomial approximation is `the best'.
\# 54.198 Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $e^{x}$ to within 0.001 on the interval $[-1,\ +1].$
\# 54.199 Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $\cos x$ to within 0.001 on the interval $[-1,\ +1].$
\# 54.200 Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $\cos x$ to within 0.001 on the interval $[\displaystyle \frac{-\pi}{2},\ \frac{\pi}{2}].$
\# 54.201 Determine how many terms are needed in order to have the corresponding Taylor polynomial approximate $\cos x$ to within 0.001 on the interval $[-0.1,\ +0.1].$
\# 54.202 Approximate $e^{1/2}=\sqrt{e}$ to within $.01$ by using a Taylor polynomial with remainder term, expanded at $0$. ({\it Do NOT add up the finite sum you get!})
\# 54.203 Approximate $\sqrt{101}=(101)^{1/2}$ to within $10^{-15}$ using a Taylor polynomial with remainder term. ({\it Do NOT add up the finite sum you get! One point here is that most hand calculators do not easily give 15 decimal places. Ha!})
55. {\it Integrating Taylor polynomials: first example}
Thinking simultaneously about the difficulty (or impossibility) of `direct' symbolic integration of complicated expressions, by contrast to the ease of integration of {\it polynomials}, we might hope to get some mileage out of {\it integrating Taylor polynomials}.
As a promising example: on one hand, it's not too hard to compute that
$$
\int_{0}^{T}\frac{dx}{1-x}=[-\log(1-x)]_{0}^{T}=-\log(1-T)
$$
On the other hand, if we write out
$$
\frac{1}{1-x}=1+x+x^{2}+x^{3}+x^{4}+\ldots
$$
then we could obtain
$$
\int_{0}^{T}(1+x+x^{2}+x^{3}+x^{4}+\ldots)dx=[x+\frac{x^{2}}{2}+\frac{x^{3}}{3}+\ldots]_{0}^{T}=
$$
$$
=T+\frac{T^{2}}{2}+\frac{T^{3}}{3}+\frac{T^{4}}{4}+\ldots
$$
Putting these two together (and changing the variable back to `{\it x}') gives
$$
-\log(1-x)=x+\frac{x^{2}}{2}+\frac{x^{3}}{3}+\frac{x^{4}}{4}+\ldots
$$
(For the moment let's not worry about what happens to the error term for the Taylor polynomial).
This little computation has several useful interpretations. First, we obtained a Taylor polynomial for
$-\log(1-T)$ from that of a geometric series, without going to the trouble of recomputing derivatives. Second, from a different perspective, we have an expression for the integral
$$
\int_{0}^{T}\frac{dx}{1-x}
$$
without necessarily mentioning the logarithm: that is, with some suitable interpretation of the trailing dots,
$$
\int_{0}^{T}\frac{dx}{1-x}=T+\frac{T^{2}}{2}+\frac{T^{3}}{3}+\frac{T^{4}}{4}+\ldots
$$
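As a numerical look at this conclusion (an aside, not in the original text): the partial sums $T+\frac{T^{2}}{2}+\cdots+\frac{T^{n}}{n}$ do approach $-\log(1-T)$ for $0\leq T<1$, with the tail crudely bounded by the geometric tail $\sum_{k>n}T^{k}=T^{n+1}/(1-T)$.

```python
import math

# Partial sums T + T^2/2 + ... + T^n/n approach -log(1 - T) for 0 <= T < 1.
def partial(T, n):
    return sum(T**k / k for k in range(1, n + 1))

T = 0.5
for n in [5, 10, 20]:
    # crude tail bound: sum of T^k for k > n is T^(n+1)/(1 - T)
    assert abs(-math.log(1 - T) - partial(T, n)) <= T**(n + 1) / (1 - T)
```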
56. {\it Integrating the error term: example}
Being a little more careful, let's keep track of the error term in the example we've been doing: we have
$$
\frac{1}{1-x}=1+x+x^{2}+\ldots+x^{n}+\frac{1}{(n+1)}\frac{1}{(1-c)^{n+1}}x^{n+1}
$$
for some $c$ between $0$ and $x$, and also depending upon $x$ and $n$. One way to avoid having the $\displaystyle \frac{1}{(1-c)^{n+1}}$ `blow up' on us is to keep $x$ itself in the range $[0,1)$, so that $c$ is in the range $[0,x)$, which is inside $[0,1)$, keeping $c$ away from $1$. To do this we might demand that $0\leq T<1.$
For simplicity, and to illustrate the point, let's just take $0\displaystyle \leq T\leq\frac{1}{2}$. Then in the {\it worst-case scenario}
$$
|\frac{1}{(1-c)^{n+1}}|\leq\frac{1}{(1-\frac{1}{2})^{n+1}}=2^{n+1}
$$
Thus, {\it integrating the error term}, we have
$$
|\int_{0}^{T}\frac{1}{n+1}\frac{1}{(1-c)^{n+1}}x^{n+1}dx|\leq\int_{0}^{T}\frac{1}{n+1}2^{n+1}x^{n+1}dx=\frac{2^{n+1}}{n+1}\int_{0}^{T}x^{n+1}dx
$$
$$
=\frac{2^{n+1}}{n+1}[\frac{x^{n+2}}{n+2}]_{0}^{T}=\frac{2^{n+1}T^{n+2}}{(n+1)(n+2)}
$$
Since we have cleverly required $0\displaystyle \leq T\leq\frac{1}{2}$, we actually have
$$
|\int_{0}^{T}\frac{1}{n+1}\frac{1}{(1-c)^{n+1}}x^{n+1}dx|\leq\frac{2^{n+1}T^{n+2}}{(n+1)(n+2)}\leq
$$
$$
\leq\frac{2^{n+1}(\frac{1}{2})^{n+2}}{(n+1)(n+2)}=\frac{1}{2(n+1)(n+2)}
$$
That is, we have
$$
|-\log(1-T)-[T+\frac{T^{2}}{2}+\ldots+\frac{T^{n+1}}{n+1}]|\leq\frac{1}{2(n+1)(n+2)}
$$
for all $T$ in the interval $[0,\displaystyle \ \frac{1}{2}]$. Actually, we had obtained
$$
|-\log(1-T)-[T+\frac{T^{2}}{2}+\ldots+\frac{T^{n+1}}{n+1}]|\leq\frac{2^{n+1}T^{n+2}}{(n+1)(n+2)}
$$
and the latter expression shrinks rapidly as $T$ approaches $0.$
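A spot-check of this estimate (an aside, not part of the original text): for $0\leq T\leq\frac{1}{2}$ the integral of the degree-$n$ geometric polynomial, $T+\frac{T^{2}}{2}+\cdots+\frac{T^{n+1}}{n+1}$, differs from $-\log(1-T)$ by at most $\frac{2^{n+1}T^{n+2}}{(n+1)(n+2)}$, the bound derived just above.

```python
import math

# Integral of 1 + x + ... + x^n over [0, T] gives T + T^2/2 + ... + T^(n+1)/(n+1).
def poly(T, n):
    return sum(T**k / k for k in range(1, n + 2))

for n in range(1, 8):
    for i in range(51):
        T = i / 100
        bound = 2**(n + 1) * T**(n + 2) / ((n + 1) * (n + 2))
        # tiny slack absorbs floating-point rounding when both sides are near 0
        assert abs(-math.log(1 - T) - poly(T, n)) <= bound + 1e-14
```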
\end{document}