MATH/CSC 381

Numerical Methods - Chapter 2
Dr. John Travis

Solutions Of Equations In One Variable











Remark: Polynomial root-finding is not a stable problem: small changes in a polynomial's coefficients can produce large changes in its roots.
 

Remark: Many algorithms start from some given value and refine it, using previously computed values to obtain a more accurate one. Each step is called an iteration, and such methods are called iterative methods. To find solutions (or roots) of equations, we start with an initial guess p_0 for the root, or an interval [a, b] in which we know a root exists, and then generate successive guesses which (hopefully) converge to the exact root. Continuing the iteration longer reduces the convergence error.

Problem: Given a continuous function f(x), determine a value p such that f(p) = 0 (approximately). Use the Intermediate Value Theorem (IVT)...

BISECTION METHOD

Assume you have real numbers a and b, with a < b, such that f(a) * f(b) < 0. Then the IVT implies a root of f(x) exists in the interval [a, b]. Guess this root to be the midpoint of the interval, c = (a+b)/2. Notice this "bisects" the original interval into two intervals, each half the size of the first. Reapply the procedure on the half which is guaranteed (by the IVT again) to contain a root.


In general, consider the sequences a_n, p_n, and b_n given by a_0 = a, b_0 = b, p_{n+1} = (a_n + b_n)/2, with a_{n+1} and b_{n+1} chosen to be the endpoints of the half of the interval that is kept (i.e., either a_{n+1} = a_n and b_{n+1} = p_{n+1}, OR a_{n+1} = p_{n+1} and b_{n+1} = b_n).

Stopping Criterion for Bisection:  How do you determine when this recursive process should end? How can you determine whether f(p_{n+1}) = 0 in the presence of roundoff?
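
One common stopping test is to quit once the current interval is shorter than a prescribed tolerance. A minimal Python sketch of the method, assuming that test (the function name, tolerance, and sample polynomial below are my own choices, not from the text):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Approximate a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = (a + b) / 2
        if f(p) == 0 or (b - a) / 2 < tol:   # stop when the interval is small enough
            return p
        if f(a) * f(p) < 0:                  # root lies in the left half
            b = p
        else:                                # root lies in the right half
            a = p
    return (a + b) / 2

# Illustration: a root of x^3 + 4x^2 - 10 on [1, 2] (approximately 1.3652)
print(bisect(lambda x: x**3 + 4*x**2 - 10, 1, 2))
```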

Bisection is "robust" in that it will always converge to a root, but it does so rather slowly: the enclosing interval is halved at each step, so |p - p_n| <= (b - a)/2^n.

HOMEWORK: page 29, #8

Review: Taylor's Theorem

NEWTON'S METHOD - pg 35

Suppose p is the unknown actual root of f(x) = 0 and p_0 is an approximation to p.  From Taylor's Theorem, f(p) = f(p_0) + (p - p_0) f'(p_0) + (p - p_0)^2 f''(c)/2!, where c is unknown but lies between p and p_0.
If p_0 is close to p, then p - p_0 is "small" and (p - p_0)^2 is even "smaller".
Then 0 = f(p) ≈ f(p_0) + (p - p_0) f'(p_0), or, solving for p, we obtain p ≈ p_0 - f(p_0)/f'(p_0).
Denote by p_1 the value computed by using the (exact) equation p_1 = p_0 - f(p_0)/f'(p_0).
In general, p_{n+1} = p_n - f(p_n)/f'(p_n), where p_0 is known.
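
A minimal sketch of this iteration in Python (the function name, the explicit derivative argument, the tolerance-based stopping test, and the cos(x) = x example are choices made for this sketch, not from the text):

```python
import math

def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """Newton's method: iterate p = p - f(p)/f'(p) until successive guesses agree."""
    p = p0
    for _ in range(max_iter):
        fp = fprime(p)
        if fp == 0:
            raise ZeroDivisionError("f'(p) vanished; Newton's method cannot continue")
        p_next = p - f(p) / fp
        if abs(p_next - p) < tol:   # successive guesses are very close
            return p_next
        p = p_next
    return p

# Illustration: solve cos(x) = x, i.e. f(x) = cos(x) - x, starting from p0 = 1
print(newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, 1.0))
```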

Comments on Newton's method:

Stopping Criterion for Newton's Method:
From Taylor's Theorem above, we have
0 = f(p) = f(p_k) + (p - p_k) f'(p_k) + (p - p_k)^2 f''(c)/2!, or
(p - p_k) + f(p_k)/f'(p_k) = -(p - p_k)^2 f''(c) / {2 f'(p_k)}, or
p - p_{k+1} = -(p - p_k)^2 [ f''(c) / {2 f'(p_k)} ].
Hence, if | f''(c) / {2 f'(p_k)} | <= M is bounded near where f(p) = 0, then we have
| p - p_{k+1} | <= M | p - p_k |^2,
which says that once p_k is close to p, the error is roughly squared at each step, so the iteration converges rapidly as long as the bound holds.
Therefore, for a stopping criterion, continue the iteration until successive guesses p_k and p_{k+1} are very close to each other.
Special Case of Newton's Method for finding square roots:  To approximate a^(1/2), consider f(x) = x^2 - a.  Using f'(x) = 2x in Newton's method yields the iteration x_0 = a and x_{n+1} = x_n - (x_n^2 - a)/(2 x_n) = (x_n + a/x_n)/2.
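
As a quick check of this special case, a short Python sketch (the choice a = 2 and the fixed number of steps are just for illustration):

```python
def newton_sqrt(a, n=6):
    """Approximate sqrt(a) with x_{n+1} = (x_n + a/x_n)/2, starting from x_0 = a."""
    x = a
    for _ in range(n):
        x = (x + a / x) / 2
    return x

print(newton_sqrt(2))   # converges rapidly toward 1.41421356...
```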

SECANT METHOD - pg 30
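
The text develops the method on page 30; as a sketch here, the standard secant iteration replaces f'(p_n) in Newton's formula with the difference quotient (f(p_n) - f(p_{n-1}))/(p_n - p_{n-1}), so two starting guesses are needed (function name and tolerance below are my own choices):

```python
def secant(f, p0, p1, tol=1e-12, max_iter=50):
    """Secant method: Newton's iteration with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        denom = f(p1) - f(p0)
        if denom == 0:
            raise ZeroDivisionError("f(p1) == f(p0); secant step undefined")
        p2 = p1 - f(p1) * (p1 - p0) / denom
        if abs(p2 - p1) < tol:   # successive guesses are very close
            return p2
        p0, p1 = p1, p2
    return p1
```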

METHOD OF FALSE POSITION - pg 33

Mixes the ideas of the Secant Method and the Bisection Method.

Convergence of this method can be hard to analyze and can even be slower than Bisection.
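
A sketch of how the two ideas combine: take a secant-style guess through the endpoints, but, as in Bisection, always keep an interval whose endpoints have opposite signs (names and tolerance are assumptions of this sketch):

```python
def false_position(f, a, b, tol=1e-10, max_iter=100):
    """Method of False Position: secant-style guess, bisection-style bracketing."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = a
    for _ in range(max_iter):
        p_old = p
        p = b - f(b) * (b - a) / (f(b) - f(a))   # secant line through (a, f(a)), (b, f(b))
        if abs(p - p_old) < tol:
            return p
        if f(a) * f(p) < 0:   # keep the half that still brackets the root
            b = p
        else:
            a = p
    return p
```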
 

RATES OF CONVERGENCE: If p is the exact value being successively approximated by the values p_n, then the error at step n is denoted by e_n = | p - p_n |.  The rate of convergence is the constant r such that e_{n+1} <= M e_n^r for some constant M, or for which the ratio e_{n+1} / e_n^r is approximately a constant.
Special Cases:

Bisection exhibits a rate of convergence of r = 1 (linear), since the error bound is halved at each step.
Newton's Method exhibits r = 2 (quadratic), as the error estimate above shows, once p_k is close to a simple root.
The Secant Method exhibits a rate of convergence of about 1.6.
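
One way to check a rate numerically (a sketch; the helper name estimate_order and the use of a known exact root p are assumptions for illustration): since e_{n+1} ≈ M e_n^r, we have r ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}).

```python
import math

def estimate_order(ps, p):
    """Estimate the rate of convergence r from a list of iterates ps and the exact
    root p, using r ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e = [abs(p - pn) for pn in ps]
    rates = []
    for n in range(1, len(e) - 1):
        if e[n - 1] > 0 and e[n] > 0 and e[n + 1] > 0 and e[n] != e[n - 1]:
            rates.append(math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1]))
    return rates
```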
 

HORNER'S ALGORITHM: Each arithmetic operation introduces additional roundoff error and takes computer time. To minimize both when evaluating a polynomial at a given x value, rearrange the computation using a judicious nesting of parentheses. (Equivalent to synthetic substitution.)
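
A minimal sketch of the nested evaluation in Python (the coefficient ordering, highest power first, is a convention chosen for this sketch):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by nesting:
    a_n x^n + ... + a_0  =  (...(a_n x + a_{n-1}) x + ...) x + a_0."""
    result = 0
    for c in coeffs:          # coefficients from highest power down to the constant term
        result = result * x + c
    return result

# Illustration: 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3))   # 5
```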
 

MUELLER'S METHOD: Takes 3 initial guesses p_0, p_1, and p_2, passes a quadratic through them, and determines the zero closest to p_2. The quadratic: p(x) = a(x - p_2)^2 + b(x - p_2) + c. Plug in p_0, p_1, and p_2 to get a, b, and c using the formulas on page 46. Use the roundoff-resistant form of the quadratic formula, choosing the sign that makes the denominator as large as possible in magnitude.
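
A sketch of one way to carry this out in Python; the expressions below for a, b, and c are the standard divided-difference versions and are assumed to match the page 46 formulas, and cmath.sqrt is used so that complex roots can be located as well:

```python
import cmath

def muller(f, p0, p1, p2, tol=1e-10, max_iter=50):
    """Mueller's Method (a sketch): fit p(x) = a(x - p2)^2 + b(x - p2) + c through
    (p0, f(p0)), (p1, f(p1)), (p2, f(p2)) and step to the root of p(x) nearest p2."""
    for _ in range(max_iter):
        c = f(p2)                               # p(p2) = c
        d1 = (f(p1) - f(p0)) / (p1 - p0)        # divided differences
        d2 = (f(p2) - f(p1)) / (p2 - p1)
        a = (d2 - d1) / (p2 - p0)
        b = d2 + a * (p2 - p1)
        # Roundoff-resistant quadratic formula: choose the sign making |b ± sqrt| largest
        disc = cmath.sqrt(b * b - 4 * a * c)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        p3 = p2 - 2 * c / denom
        if abs(p3 - p2) < tol:
            return p3
        p0, p1, p2 = p1, p2, p3                 # shift and repeat
    return p2
```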