Part 3, Chapter 9
Gauss Elimination
PowerPoints organized by Dr. Michael R. Gustafson II, Duke University
All images copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Chapter Objectives
• Knowing how to solve small sets of linear equations with the graphical method and Cramer's rule.
• Understanding how to implement forward elimination and back substitution as in Gauss elimination.
• Understanding how to count flops to evaluate the efficiency of an algorithm.
• Understanding the concepts of singularity and ill-conditioning.
• Understanding how partial pivoting is implemented and how it differs from complete pivoting.
• Recognizing how the banded structure of a tridiagonal system can be exploited to obtain extremely efficient solutions.
Graphical Method
For small sets of simultaneous equations, graphing them and determining the location of the intersection provides a solution.
Graphical Method (cont)
• Graphing the equations can also reveal systems where:
  a) No solution exists
  b) Infinitely many solutions exist
  c) The system is ill-conditioned
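The three cases above can also be detected algebraically for a 2 x 2 system: a nonzero determinant means the two lines cross at one point, while a zero determinant means they are parallel, either disjoint (no solution) or coincident (infinitely many solutions). A minimal Python sketch (the function name and tolerance are illustrative, not from the slides):

```python
def classify_2x2(A, b, tol=1e-12):
    # determinant of the 2x2 coefficient matrix
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    if abs(det) > tol:
        return "unique"          # the two lines intersect at a single point
    # det == 0: parallel lines; cross-multiply against b to test whether
    # the two equations describe the same line (avoids dividing by a
    # possibly-zero coefficient)
    consistent = (abs(A[0][0]*b[1] - A[1][0]*b[0]) < tol and
                  abs(A[0][1]*b[1] - A[1][1]*b[0]) < tol)
    return "infinite" if consistent else "none"
```

For example, x1 + x2 = 1 together with 2x1 + 2x2 = 3 is classified "none", while pairing it with 2x1 + 2x2 = 2 gives "infinite".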
Overview
• A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]).
• An individual entry of a matrix is an element (example: a23).
Representing Linear Equations
• Matrices provide a concise notation for representing and solving simultaneous linear equations:

    a11x1 + a12x2 + a13x3 = b1
    a21x1 + a22x2 + a23x3 = b2
    a31x1 + a32x2 + a33x3 = b3

    [ a11 a12 a13 ] { x1 }   { b1 }
    [ a21 a22 a23 ] { x2 } = { b2 }
    [ a31 a32 a33 ] { x3 }   { b3 }

    [A]{x} = {b}
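The matrix form is just shorthand for the three equations: row i of [A] multiplied into {x} reproduces the left-hand side of equation i. A small Python sketch (function name illustrative):

```python
def matvec(A, x):
    # row i of [A]{x} is a_i1*x_1 + a_i2*x_2 + ..., i.e. the left-hand
    # side of equation i, so [A]{x} = {b} packs all equations at once
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# the 2x2 system 3x1 + 2x2 = 18, -x1 + 2x2 = 2 has solution x = (4, 3):
print(matvec([[3, 2], [-1, 2]], [4, 3]))  # [18, 2]
```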
Matrix Multiplication
• The elements in the matrix [C] that results from multiplying matrices [A] and [B] are calculated using:

    cij = Σ (k = 1 to n) aik bkj
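The sum over k runs along row i of [A] and down column j of [B], which translates directly into a triple loop. A Python sketch of that definition (illustrative, not an efficient implementation):

```python
def matmul(A, B):
    # c_ij = sum over k of a_ik * b_kj; the inner dimension must match
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A)
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C
```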
Special Matrices
• Matrices where m = n are called square matrices.
• Special forms of square matrices:

  Symmetric:         [ 5 1 2 ]
                     [ 1 3 7 ]
                     [ 2 7 8 ]

  Diagonal:          [ a11         ]
                     [     a22     ]
                     [         a33 ]

  Identity:          [ 1     ]
                     [   1   ]
                     [     1 ]

  Upper Triangular:  [ a11 a12 a13 ]
                     [     a22 a23 ]
                     [         a33 ]

  Lower Triangular:  [ a11         ]
                     [ a21 a22     ]
                     [ a31 a32 a33 ]

  Banded:            [ a11 a12         ]
                     [ a21 a22 a23     ]
                     [     a32 a33 a34 ]
                     [         a43 a44 ]
Matrix Inverse and Transpose
• The inverse of a square, nonsingular matrix [A] is that matrix which, when multiplied by [A], yields the identity matrix:

    [A][A]^-1 = [A]^-1[A] = [I]

• The transpose of a matrix involves transforming its rows into columns and its columns into rows: if [A] has elements aij, then [A]^T has elements aji.
Determinants
• The determinant D = |A| of a matrix is formed from the coefficients of [A].
• Determinants for small matrices are:

  1 x 1:  |a11| = a11

  2 x 2:  | a11 a12 | = a11a22 - a12a21
          | a21 a22 |

  3 x 3:  | a11 a12 a13 |       | a22 a23 |       | a21 a23 |       | a21 a22 |
          | a21 a22 a23 | = a11 | a32 a33 | - a12 | a31 a33 | + a13 | a31 a32 |
          | a31 a32 a33 |

• Determinants for matrices larger than 3 x 3 can be very tedious to compute by hand.
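The same cofactor pattern applies recursively for any size, which is exactly why hand computation becomes tedious: the operation count grows factorially. A Python sketch of the expansion along the first row (illustrative, only sensible for small matrices):

```python
def det(A):
    # cofactor expansion along the first row: alternate signs times the
    # determinant of each minor (A with row 0 and column j removed)
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

Applied to the symmetric example above, det([[5,1,2],[1,3,7],[2,7,8]]) gives -117.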
Cramer’s Rule
• Cramer’s Rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, …, bn.
Cramer’s Rule Example
Find x2 in the following system of equations:

    0.3x1 + 0.52x2 +    x3 = -0.01
    0.5x1 +     x2 + 1.9x3 =  0.67
    0.1x1 + 0.3x2 + 0.5x3 = -0.44

• Find the determinant D:

    D = | 0.3 0.52 1   |
        | 0.5 1    1.9 |
        | 0.1 0.3  0.5 |

      = 0.3 | 1   1.9 | - 0.52 | 0.5 1.9 | + 1 | 0.5 1   |
            | 0.3 0.5 |        | 0.1 0.5 |     | 0.1 0.3 |

      = -0.0022

• Find determinant D2 by replacing D's second column with b:

    D2 = | 0.3 -0.01 1   |
         | 0.5  0.67 1.9 |
         | 0.1 -0.44 0.5 |

       = 0.3 |  0.67 1.9 | - (-0.01) | 0.5 1.9 | + 1 | 0.5  0.67 |
             | -0.44 0.5 |           | 0.1 0.5 |     | 0.1 -0.44 |

       = 0.0649

• Divide:

    x2 = D2/D = 0.0649/(-0.0022) = -29.5
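The arithmetic above is easy to double-check numerically. A Python sketch (det3 is an illustrative helper, not from the slides):

```python
def det3(M):
    # 3x3 determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

A = [[0.3, 0.52, 1.0],
     [0.5, 1.0,  1.9],
     [0.1, 0.3,  0.5]]
bvec = [-0.01, 0.67, -0.44]

D = det3(A)                                              # ~ -0.0022
A2 = [[row[0], bi, row[2]] for row, bi in zip(A, bvec)]  # swap column 2 for b
D2 = det3(A2)                                            # ~ 0.0649
x2 = D2 / D                                              # ~ -29.5
```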
Solving With MATLAB
• MATLAB provides two direct ways to solve systems of linear algebraic equations [A]{x}={b}:
  - Left-division: x = A\b
  - Matrix inversion: x = inv(A)*b
• The matrix inverse is less efficient than left-division and also only works for square, non-singular systems.
Naïve Gauss Elimination
• For larger systems, Cramer’s Rule can become unwieldy.
• Instead, a sequential process of removing unknowns from equations using forward elimination followed by back substitution may be used - this is Gauss elimination.
• “Naïve” Gauss elimination simply means the process does not check for potential problems resulting from division by zero.
Naïve Gauss Elimination (cont)
• Forward elimination
  - Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond.
  - Continue this process with the second row to remove the second coefficient from the third row and beyond.
  - Stop when an upper triangular matrix remains.
• Back substitution
  - Starting with the last row, solve for the unknown, then substitute that value into the next-highest row.
  - Because of the upper-triangular nature of the matrix, each row will contain only one more unknown.
Naïve Gauss Elimination Program

function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination
%   x = GaussNaive(A,b): Gauss elimination without pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
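As a cross-check, here is a line-for-line Python sketch of the GaussNaive logic above (the test system and its solution (3, -2.5, 7) are a standard worked example, included here as an assumption):

```python
def gauss_naive(A, b):
    # naive Gauss elimination: no pivoting, so a zero on the diagonal fails
    n = len(A)
    # build the augmented matrix [A b] with fresh row copies
    aug = [list(row) + [bi] for row, bi in zip(A, b)]
    # forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= factor * aug[k][j]
    # back substitution: each row now has only one new unknown
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x
```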
Gauss Program Efficiency
The execution time of Gauss elimination depends on the number of floating-point operations (flops) performed. The flop count for an n x n system is:

    Forward Elimination:  2n^3/3 + O(n^2)
    Back Substitution:    n^2 + O(n)
    Total:                2n^3/3 + O(n^2)

Conclusions:
- As the system gets larger, the computation time increases greatly.
- Most of the effort is incurred in the elimination step.
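The table can be checked empirically by instrumenting the elimination loops. The sketch below (illustrative) counts only the multiply/divide flops of the program exactly as written above (updating columns k through nb); that count alone grows like n^3/3, and the comparable number of add/subtract flops brings the total toward 2n^3/3:

```python
def forward_elim_flops(n):
    # count multiply/divide flops performed by the elimination loops,
    # mirroring the MATLAB loop bounds (nb = n+1, vector update k:nb)
    flops = 0
    for k in range(1, n):               # k = 1 .. n-1
        for i in range(k + 1, n + 1):   # i = k+1 .. n
            flops += 1                  # factor = Aug(i,k)/Aug(k,k)
            flops += (n + 1) - k + 1    # factor*Aug(k,k:nb): nb-k+1 products
    return flops
```

For n = 100 this gives 343,200 multiply/divide flops versus n^3/3 = 333,333, i.e. the cubic term dominates with only a few percent of lower-order overhead.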
QUESTION:
I think there is a mistake in the book on page 240. It says that for k=1, i=2, the number of +/- flops is (n-1)*(n). The program says:

    for i = k+1:n
      factor = Aug(i,k)/Aug(k,k);
      Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
    end

How many subtractions are there for k=1? Since nb = n+1, if k=1, i=2, and (say) n=4, then nb=5, and the vector update performs:

    Aug(2,1) = Aug(2,1) - factor*Aug(1,1)
    Aug(2,2) = Aug(2,2) - factor*Aug(1,2)
    Aug(2,3) = Aug(2,3) - factor*Aug(1,3)
    Aug(2,4) = Aug(2,4) - factor*Aug(1,4)
    Aug(2,5) = Aug(2,5) - factor*Aug(1,5)

The number of subtractions per row is clearly 5, or (n+1), and the interior loop runs from 2 to n, so the number of subtraction flops for k=1 is (n+1)*(n-1). The book says (n-1)*n in the table on page 240.
ANSWER:
Yes, you are right; the formulas in the table on page 240 do not correspond precisely to the program in Figure 9.4. All second factors in the add/sub column should have 1 added, and therefore so should the corresponding factors in the mult/div column; the (n+1) should be (n+2) for k=1. This modifies equations 9.16 through 9.19.

How can the book be wrong? Perhaps it was based on a slightly different program that does one less operation per row. Since the first updated element, Aug(i,k), will become zero anyway, because factor = Aug(i,k)/Aug(k,k), and substituting,

    Aug(i,k) - factor*Aug(k,k) = Aug(i,k) - (Aug(i,k)/Aug(k,k))*Aug(k,k) = 0

we do not need to start the update at column k but at column k+1. Therefore, if you change the line

    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);

to

    Aug(i,k+1:nb) = Aug(i,k+1:nb) - factor*Aug(k,k+1:nb);

you will have one less operation per row, and all formulas in the book are then correct, but only with this change to the program. The results (solutions) will not change, since we are simply not calculating the zero elements that are cancelled by the forward elimination. (The skipped entries below the diagonal are then left holding stale values instead of zeros, which is harmless: back substitution never reads below the diagonal.)
Pivoting
• Problems arise with naïve Gauss elimination if a coefficient along the diagonal is 0 (problem: division by 0) or close to 0 (problem: round-off error).
• One way to combat these issues is to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that the largest element is the pivot element. This is called partial pivoting.
• If the columns to the right of the pivot element are also searched for the largest element, and both rows and columns are switched, this is called complete pivoting.
Partial Pivoting Program

function x = GaussPivot(A,b)
% GaussPivot: Gauss elimination pivoting
%   x = GaussPivot(A,b): Gauss elimination with partial pivoting.
% input:
%   A = coefficient matrix
%   b = right hand side vector
% output:
%   x = solution vector
[m,n] = size(A);
if m~=n, error('Matrix A must be square'); end
nb = n+1;
Aug = [A b];
% forward elimination
for k = 1:n-1
  % partial pivoting
  [big,i] = max(abs(Aug(k:n,k)));
  ipr = i+k-1;
  if ipr~=k
    Aug([k,ipr],:) = Aug([ipr,k],:);
  end
  for i = k+1:n
    factor = Aug(i,k)/Aug(k,k);
    Aug(i,k:nb) = Aug(i,k:nb) - factor*Aug(k,k:nb);
  end
end
% back substitution
x = zeros(n,1);
x(n) = Aug(n,nb)/Aug(n,n);
for i = n-1:-1:1
  x(i) = (Aug(i,nb) - Aug(i,i+1:n)*x(i+1:n))/Aug(i,i);
end
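A Python sketch of the same idea (illustrative): before eliminating column k, the row with the largest |entry| in that column is swapped up to the pivot position, so a zero pivot like a11 = 0 no longer breaks the elimination.

```python
def gauss_pivot(A, b):
    # Gauss elimination with partial pivoting
    n = len(A)
    aug = [list(row) + [bi] for row, bi in zip(A, b)]
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| in column k (at or
        # below the diagonal) up to row k
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        if p != k:
            aug[k], aug[p] = aug[p], aug[k]
        for i in range(k + 1, n):
            factor = aug[i][k] / aug[k][k]
            for j in range(k, n + 1):
                aug[i][j] -= factor * aug[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - s) / aug[i][i]
    return x
```

For example, the system 0*x1 + 2x2 = 4, x1 + x2 = 3 would make naive elimination divide by zero, but the pivoted version solves it.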
Tridiagonal Systems
• A tridiagonal system is a banded system with a bandwidth of 3:

    [ f1   g1                  ] { x1   }   { r1   }
    [ e2   f2   g2             ] { x2   }   { r2   }
    [      e3   f3   g3        ] { x3   } = { r3   }
    [           ...            ] { ...  }   { ...  }
    [      en-1 fn-1 gn-1      ] { xn-1 }   { rn-1 }
    [           en   fn        ] { xn   }   { rn   }

• Tridiagonal systems can be solved using the same method as Gauss elimination, but with much less effort because most of the matrix elements are already 0.
Tridiagonal System Solver

function x = Tridiag(e,f,g,r)
% Tridiag: tridiagonal equation solver for a banded system
%   x = Tridiag(e,f,g,r): Tridiagonal system solver.
% input:
%   e = subdiagonal vector
%   f = diagonal vector
%   g = superdiagonal vector
%   r = right hand side vector
% output:
%   x = solution vector
n = length(f);
% forward elimination
for k = 2:n
  factor = e(k)/f(k-1);
  f(k) = f(k) - factor*g(k-1);
  r(k) = r(k) - factor*r(k-1);
end
% back substitution
x(n) = r(n)/f(n);
for k = n-1:-1:1
  x(k) = (r(k) - g(k)*x(k+1))/f(k);
end
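The same logic translates directly to Python (a sketch mirroring the MATLAB vectors: e is the subdiagonal with its first entry unused, f the diagonal, g the superdiagonal, r the right-hand side). Because each row has at most three nonzeros, both sweeps are O(n) instead of O(n^3):

```python
def tridiag(e, f, g, r):
    # Tridiagonal solver: forward elimination touches only the diagonal
    # and right-hand side, then back substitution reads one superdiagonal
    # entry per row. e[0] is unused, matching the MATLAB indexing.
    n = len(f)
    f, r = list(f), list(r)          # work on copies, keep inputs intact
    for k in range(1, n):            # forward elimination
        factor = e[k] / f[k - 1]
        f[k] -= factor * g[k - 1]
        r[k] -= factor * r[k - 1]
    x = [0.0] * n                    # back substitution
    x[n - 1] = r[n - 1] / f[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x
```

For example, the system 2x1 - x2 = 1, -x1 + 2x2 - x3 = 0, -x2 + 2x3 = 1 has the solution x1 = x2 = x3 = 1.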