Math 541 - Introduction to Numerical Analysis and Computing
Kris Stewart, Fall 1993

Text: Numerical Methods for Mathematics, Science and Engineering,
      Second Edition, John Mathews, Prentice-Hall

Office Hours: MW 3:30-5. Please send me email anytime you have problems,
though: mail stewart (or mail stewart@cs.sdsu.edu if you are off campus).

Grade: 3 Midterms - 40% (no makeup exams, but you can drop the lowest score)
       Final - 20%
       4 Computational Experiments - 40% (these will be written documents
         exploring computations done using Matlab on ucssun1 or the Student
         Edition of Matlab, which you may purchase at the bookstore for your
         home Mac or PC)

I'd like to begin by placing this subject in its proper historical
perspective. Numerical analysis has been around as a subject since long
before computers, which only came along in the 1940's. There has always
been a need to approximate mathematical concepts via some sort of discrete,
computational algorithm. When you took calculus, you learned or memorized
many different "formulas" and "rules" that allowed you to solve certain
problems. Unfortunately, the majority of the problems that actually occur
in the real world cannot be solved in this manner.

1) Consider projectile motion for a body of constant mass m, where g is
   the acceleration due to gravity:

      m x''(t) = 0
      m y''(t) = -mg

   where we would be given

      x(0) = 0;  y(0) = 0     (launch position)
      theta(0) = theta0       (launch angle)
      v(0) = v0               (launch velocity)

   Dividing by m and integrating twice, the solution is given by

      x(t) = (v0 cos(theta0)) t
      y(t) = -(g/2) t^2 + (v0 sin(theta0)) t

   which describes projectile motion (p. 20, Golub/Ortega). But what if
   the mass m is not constant? Then there won't be a nice, simple formula.

2) How do you get the value of sin(x)? When you press the button on the
   calculator, you are given a result immediately. How is this done?
   There are a lot of constraints. First of all, there is no closed-form
   formula that computes the sine exactly; it must be approximated by a
   numerical algorithm. And the code that approximates it must be short
   and fast to fit in the limited ROM of a pocket calculator.

   Remember Taylor series?

      sin(x) = x - x^3/3! + x^5/5! - x^7/7!  +  (x^9/9!) cos(eta)
               |------ use to compute ------|    |-- the error --|

   This gives a computable value using the first four terms, as well as an
   error term. We know that the cosine is never larger than 1 in magnitude,
   so that portion of the error term won't be large. But the x^9 part will
   be, for large values of x.

   Here's where all those trig formulas you memorized can help out. Since
   the sine is periodic, we can shift its evaluation to the interval
   [0, 2pi] with no loss of accuracy by applying

      sin(x) = sin(x + 2pi) = sin(x - 2pi)   and   sin(-x) = -sin(x)

   repeatedly, so our approximation is only needed on one period.

   [Sketch: one period of sin(x) on [0, 2pi], rising to 1 at pi/2,
   crossing zero at pi, and falling to -1 at 3pi/2.]

   An additional reduction, sin(x + pi) = -sin(x), means we only need to
   worry about [0, pi]. Another interval reduction comes from
   sin(x + pi/2) = cos(x): we'd need an approximation to the cosine anyway,
   and now we only need to worry about [0, pi/2]. (A short Matlab sketch of
   this scheme follows below.)

These are the sorts of concerns numerical analysts deal with. There is also
the relatively new addition of the computer to the field. (Yeah, you don't
think computers are new, but in the history of the world they have only
been around for 40 years or so, and that's not long.)
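To make the sine example concrete, here is a minimal Matlab sketch of the
whole scheme (my own illustration, not the algorithm any actual calculator
uses; the function name is mine, and for the final reduction I use the
identity sin(pi - x) = sin(x) instead of the cosine identity above, so the
sketch does not also need a cosine routine):

   % taylorsin.m -- approximate sin(x) by the four Taylor terms above,
   % after first reducing the argument to [0, pi/2].
   function s = taylorsin(x)
     x = mod(x, 2*pi);        % periodicity; also maps negative x into [0, 2*pi)
     if x > pi                % sin(x) = -sin(x - pi): reduce to [0, pi]
       s = -taylorsin(x - pi);
       return
     end
     if x > pi/2              % sin(x) = sin(pi - x): reduce to [0, pi/2]
       x = pi - x;
     end
     s = x - x^3/6 + x^5/120 - x^7/5040;   % x - x^3/3! + x^5/5! - x^7/7!

Comparing taylorsin(100) with the built-in sin(100) shows the point: once
the argument is in [0, pi/2], the error term is at most (pi/2)^9/9!, about
1.6e-4, while the raw four-term polynomial at x = 100 would be
astronomically wrong.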
The original definition of "computer" described a human being who
calculated all the tables of values, say those in the "Handbook of
Mathematical Functions with Formulas, Graphs and Mathematical Tables", an
amazing compendium produced by the National Bureau of Standards (now NIST)
in 1965. This is how things had to be done before pocket calculators and
Crays.

The world of the numerical analyst has become very exciting lately with
the amazing increase in computational power available. It has also become
much more complicated due to the variety of high performance architectures
being used to solve the problems of Computational Science. The algorithm
that works best on a scalar computer, like the Sun or the Macintosh or the
PC, is not likely to be the best algorithm for a vector computer like the
Cray Y-MP. Additionally, the parallel computing realm is still being
developed, and both the possibilities for computational effectiveness and
the available architectures are in a state of change. It's very exciting,
but confusing.

Still, there are some fundamental concepts we will focus on in this course
that apply to all computing platforms. The most fundamental is the effect
of finite precision floating point arithmetic. The word size of a computer
contains a finite number of bits. These bits can be used to store:

   characters     - typically each character is stored in a byte = 8 bits

   integers       - either 1's or 2's complement representation is used to
                    enable fast integer arithmetic, but there will be a
                    limit on the largest and smallest integer that can be
                    stored

   floating point - the word is split into two parts: one represents the
                    fraction (or mantissa), which determines the precision
                    (or correctness) of the value; the other represents the
                    exponent (or size) of the number

You may be familiar with floating point as "scientific notation". It is
needed in science because of the vast scale of the values that must be
worked with. In astronomy, the distances to the far galaxies are enormous.
In molecular mechanics, the spacings between atoms in a molecule are
extremely small.

One of the most revealing properties of floating point numbers is the unit
roundoff threshold of whatever machine you are using: roughly, the largest
relative error committed when a real number is rounded to the nearest
floating point number.
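One classic way to see this threshold on whatever machine you sit down at
is the short Matlab loop below (a standard illustration, not from the
text): it finds the gap between 1 and the next larger floating point
number by repeated halving. Matlab also supplies this gap as the built-in
constant eps; the unit roundoff is conventionally half of it.

   % Halve u until adding u/2 to 1 no longer changes the stored value;
   % the final u is the spacing between 1 and the next larger floating
   % point number.
   u = 1;
   while 1 + u/2 > 1
     u = u/2;
   end
   u     % in IEEE double precision this is 2^(-52), about 2.22e-16

Matlab on ucssun1 computes in double precision, so you should see a value
near 2.2e-16: every stored number may be off by a relative amount of about
this size, no matter how good the algorithm.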
=======================================================================
Calendar:

Aug. 30 - Introduction
          What to look at in Chapter 1:
            Thm 1.6  Mean Value Theorem
            Thm 1.13 Taylor's Theorem
            Thm 1.14 Horner for Polys
            Thm 1.17 Taylor in O(h^n)
          1.3 Error Analysis:
            Def. 1.8 of error and relative error
            Truncation error versus round-off error
            Loss of significance
            O(h^n) approximations

Sept. 1 - Root Finding Techniques (Chapter 2)
          2.1 (only a little) - fixed point iteration is of interest to
            math majors, but there is too much detail for us
          2.2 Bracketing Methods: Bisection and Regula Falsi (actually a
            variant of the secant method)

Sept. 8 - Error of Bisection (a minimal Matlab sketch appears after this
            calendar)
          2.3 Convergence (a little), especially Figs. 2.12 and 2.13
          Estimating errors (computational examples)

Sept. 13 - 2.4 Newton's Method
           Secant Method: properties and computational behavior
           Skip 2.5, 2.6, 2.7

Sept. 15 - Examples and practice; review for the 1st Midterm

Sept. 20 - Midterm 1: Root Finding

Sept. 22 - Chapter 3: Solving Ax = b
           3.1 is a detailed review of Math 254 (skip unless you need a
             refresher); 3.2 has a nice example in Ex. 3.4
           Review Math 254 and Gaussian elimination (we will link in the
             practical numerical algorithm later)

Sept. 27 - 3.3 Triangular Systems: back substitution
           Computational examples
           Computational Experiment exploring Root Finding due

Sept. 29 - 3.4 Gaussian Elimination and Pivoting

Oct. 4 - Ill conditioning, residuals, error (class handout)
         Skip 3.5

Oct. 6 - 3.6 Triangular Factorization (show this is the same as Gaussian
           elimination)

Oct. 11 - Operation Counts and Effective Computation

Oct. 13 - Review for Midterm 2
          Computational Experiment exploring Linear Systems due

Oct. 18 - Midterm 2: Linear Systems

Oct. 20 - Chapter 4: Interpolation
          Figures 4.1, 4.3, 4.4, and 4.5 in Section 4.1 are good
          4.2 Interpolation
          Skip 4.3 Lagrange Approximation

Oct. 25 - 4.4 Newton Polynomials
          Divided differences and estimating error; handout

Oct. 27 - 4.5 Chebyshev Polynomials (just a little)
          Computational examples

Nov. 1 - Chapter 5: Curve Fitting
         5.1 Least Squares Line (not an interpolant)
         5.2 Curve Fitting, AND WHY LINEARIZATION IS NOT A GOOD IDEA!
           (I don't agree with the author in Table 5.6)

Nov. 8 - Handout on piecewise polynomials
         5.3 Cubic Splines (a spline is an interpolant)
         Computational examples

Nov. 10 - Discuss 5.4 Trigonometric Polynomials; these won't be on the
            midterm, but they are important approximants
          Computational Experiment on Curve Fitting and Interpolation due

Nov. 15 - Review for Midterm 3

Nov. 17 - Midterm 3: Interpolation and Curve Fitting

Nov. 22 - 6.1 Numerical Differentiation

Nov. 24 - 7.1 Quadrature

Nov. 29 - 7.2 Composite Trapezoidal and Simpson's Rules

Dec. 1 - Estimating error; computational examples
         7.3 Recursive Rules and Romberg Integration (a little)

Dec. 6 - 7.4 Adaptive Quadrature

Dec. 8 - Computational Experiment on Quadrature due
         Last class / review for the Final

Dec. 15 - Final Exam, 1-3
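As promised under Sept. 8, here is a minimal Matlab sketch of bisection
together with its guaranteed error bound (my own illustration of the
bracketing method in Section 2.2 of the text; the function name, argument
list, and stopping rule are my choices, not the book's):

   % bisect.m -- bisection for a root of f in [a, b].  Assumes f(a) and
   % f(b) have opposite signs, so [a, b] brackets a root.  The bracket is
   % halved each pass, so after k steps the midpoint is within
   % (b - a)/2^(k+1) of a root.
   function [c, errbound] = bisect(f, a, b, tol)
     fa = feval(f, a);
     while (b - a)/2 > tol
       c = (a + b)/2;
       fc = feval(f, c);
       if fc == 0
         break                % landed exactly on a root
       elseif fa*fc < 0
         b = c;               % sign change in [a, c]
       else
         a = c;  fa = fc;     % sign change in [c, b]
       end
     end
     c = (a + b)/2;           % midpoint of the final bracket
     errbound = (b - a)/2;    % |root - c| <= errbound

For example, [c, e] = bisect('cos', 1, 2, 1e-10) should return c near
pi/2 = 1.5707963..., with e certifying the accuracy.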