
Page 1: Chapter 3c: System of Linear Equations

Dr. Asaf Varol
asvarol@mail.wvu.edu

Page 2: Pivoting

Some disadvantages of Gaussian elimination are as follows. Since each result depends on the previous step, for large systems the round-off (or chop-off) errors lead to a loss of significant figures and hence to inaccurate results. An error committed in any one step propagates to the final step and is amplified along the way. This is especially true for ill-conditioned systems. In addition, if any of the diagonal elements is zero, the method will not work unless the system is rearranged so as to avoid zero elements on the diagonal. The practice of interchanging rows so that the diagonal elements are the dominant elements is called partial pivoting. The goal here is to put the largest possible coefficient on the diagonal by manipulating the order of the rows. It is also possible to change the order of the variables, i.e., instead of letting the unknown vector be

{X}T = (x, y, z), we may let it be {X}T = (y, z, x). When this is done in addition to partial pivoting, the practice is called full pivoting. In this case only the ordering (meaning) of each variable changes; the system of equations remains the same.
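As an illustration of partial pivoting in practice, here is a minimal Python sketch (not part of the original notes; the function name and structure are my own choices) of Gaussian elimination with row interchanges, applied to the system of Example E3.4.2 on the next slide:

import numpy as np

def gauss_elim_partial_pivot(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A minimal sketch for illustration; a production code would also
    detect singular systems and scale the rows before pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: move the row with the largest |A[i, k]| up to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The system of Example E3.4.2 below:
A = np.array([[0.0003, 3.0], [1.0, 1.0]])
b = np.array([2.0001, 1.0])
print(gauss_elim_partial_pivot(A, b))   # approximately [0.3333, 0.6667]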

Page 3: Example E3.4.2

Consider the following set of equations:

0.0003 x + 3.0000 y = 2.0001
1.0000 x + 1.0000 y = 1.0000

The exact solution is x = 1/3 and y = 2/3. Solving this system by Gaussian elimination (without pivoting) with a three-significant-figure mantissa yields

x = -3.33, y = 0.667

while a four-significant-figure mantissa yields

x = 0.0000 and y = 0.6667

Page 4: Example E3.4.2 (continued)

Partial pivoting (switch the rows so that the diagonal elements are largest)

 

1.0000 x + 1.0000 y = 1.0000

0.0003 x + 3.0000 y = 2.0001

 

The solution of this set of equations using Gaussian elimination gives

 

y = 0.667 and x = 0.333 with three significant figure arithmetic

 

y = 0.6667 and x = 0.3333 with four significant figure arithmetic

Page 5: Example E3.4.3

Problem: Apply full pivoting to the following system to achieve a well-conditioned matrix.

3x + 5y - 5z = 3
2x - 4y - z = -3
6x - 5y + z = 2

or, in matrix form,

| 3   5  -5 | | x |   |  3 |
| 2  -4  -1 | | y | = | -3 |
| 6  -5   1 | | z |   |  2 |

Page 6: Example E3.4.3 (continued)

Solution: First switch the first column with the second column, then switch the first column with the third column to obtain

-5z + 3x + 5y = 3

- z + 2x - 4y = -3

z + 6x - 5y = 2

Then switch second row with the third row

-5z + 3x + 5y = 3

z + 6x - 5y = 2

-z + 2x - 4y = -3

yielding finally a well-conditioned system given by

| -5   3   5 | | z |   |  3 |
|  1   6  -5 | | x | = |  2 |
| -1   2  -4 | | y |   | -3 |

Page 7: Gauss-Jordan Elimination

This method is very similar to the Gaussian elimination method. The only difference is that the elimination procedure is extended to the elements above the diagonal as well, so that a backward substitution is no longer necessary. The elimination process begins with the augmented matrix and is continued until the original matrix turns into an identity matrix, with the corresponding modifications applied to the right-hand side. In short, our goal is to start with the general augmented system shown on the left below and, after appropriate algebraic manipulations, arrive at the form shown on the right.

Start                                         Arrive

| a11  a12  a13 | c1 |                 | 1  0  0 | c1* |
| a21  a22  a23 | c2 |     ===>        | 0  1  0 | c2* |
| a31  a32  a33 | c3 |                 | 0  0  1 | c3* |

The solution can be written at once as

x1 = c1*;  x2 = c2*;  x3 = c3*

Page 8: Pseudo Code for Gauss-Jordan Elimination

do for k = 1 to n                      ! Important note: a(i,n+1) represents
    do for j = k+1 to n+1              ! the right-hand side
        akj = akj/akk
    end do
    do for i = 1 to n, i not equal to k
        do for j = k+1 to n+1
            aij = aij - (aik)(akj)
        end do
    end do
end do
cc----- The solution vector is saved in a(i,n+1), i = 1 to n
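A direct Python transcription of this pseudocode (my own sketch, with zero-based indexing and no pivoting, matching the pseudocode above):

import numpy as np

def gauss_jordan(A, c):
    """Gauss-Jordan elimination on the augmented matrix [A | c].

    Returns the solution vector; assumes no zero pivots are encountered
    (i.e., pivoting has already been applied if needed)."""
    n = len(c)
    aug = np.hstack([np.array(A, dtype=float), np.reshape(c, (n, 1)).astype(float)])
    for k in range(n):
        aug[k, k:] /= aug[k, k]                            # normalize the pivot row
        for i in range(n):
            if i != k:
                aug[i, k:] -= aug[i, k] * aug[k, k:]       # eliminate above and below the pivot
    return aug[:, n]                                       # the solution sits in the last column

# Example E3.4.4:
A = np.array([[4.0, 1.0, 1.0], [-1.0, -5.0, 6.0], [2.0, -4.0, 1.0]])
c = np.array([6.0, 0.0, -1.0])
print(gauss_jordan(A, c))                                  # approximately [1. 1. 1.]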

Page 9: Example E3.4.4

Problem: Solve the problem of Equations (3.4.3) using Gauss-Jordan elimination.

4 x1 + x2 + x3 = 6
-x1 - 5 x2 + 6 x3 = 0
2 x1 - 4 x2 + x3 = -1

Solution: We start with the augmented coefficient matrix

[Aa](0) =
|  4   1  1 |  6 |
| -1  -5  6 |  0 |
|  2  -4  1 | -1 |

(i) First multiply the first row by 1/4; then multiply the scaled first row by -1, subtract the result from the second row, and replace the second row with the result of the subtraction. Similarly, multiply the scaled first row by 2, subtract the result from the third row, and replace the third row with the result of the subtraction. These operations lead to

[Aa](1) =
| 1.0   0.25   0.25 |  1.5 |
| 0    -4.75   6.25 |  1.5 |
| 0    -4.50   0.50 | -4.0 |

Page 10: Example E3.4.4 (continued)

(ii) Multiply the second row by -1/4.75; then multiply the scaled second row by 0.25, subtract the result from the first row, and replace the first row with the result. Similarly, multiply the scaled second row by -4.5, subtract the result from the third row, and replace the third row with the result, to obtain

[Aa](2) =
| 1  0   0.5789 |  1.5789 |
| 0  1  -1.3158 | -0.3158 |
| 0  0  -5.4211 | -5.4211 |

(iii) Multiply the third row by -1/5.4211; then multiply the scaled third row by 0.5789, subtract the result from the first row, and replace the first row with the result. Similarly, multiply the scaled third row by -1.3158, subtract the result from the second row, and replace the second row with the result, to finally arrive at

[Aa](3) =
| 1  0  0 | 1.0000 |
| 0  1  0 | 1.0000 |
| 0  0  1 | 1.0000 |

Hence we find the expected solution x1 = 1.0, x2 = 1.0, and x3 = 1.0. Note, however, that the solution being all ones is particular to this system; of course, each system has its own solution, not always equal to one!

Page 11: Finding the Inverse Using Gauss-Jordan Elimination

The inverse, [B], of a matrix [A] is defined such that

[A][B] = [B][A] = [I]

where [I] is the identity matrix. In other words, to find the inverse of a matrix we need to find the elements of [B] such that when [A] is multiplied by [B] the result equals the identity matrix. Recalling how matrix multiplication works, this amounts to saying that [A] multiplied by the first column of [B] must equal the first column of [I]; [A] multiplied by the second column of [B] must equal the second column of [I]; and so on.

In summary, applying Gauss-Jordan elimination to the augmented matrix [A | I] (for the matrix of Example E3.4.4),

|  4   1  1 | 1 0 0 |
| -1  -5  6 | 0 1 0 |
|  2  -4  1 | 0 0 1 |

we obtain [I | inverse]:

| 1 0 0 | 0.1845  -0.04854   0.1068 |
| 0 1 0 | 0.1262   0.01942  -0.2427 |
| 0 0 1 | 0.1359   0.17476  -0.1845 |
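The column-by-column interpretation can be checked directly; a small sketch (my own, using NumPy) builds the inverse one column at a time by solving [A]{bj} = {ej}:

import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [-1.0, -5.0, 6.0],
              [2.0, -4.0, 1.0]])

# Each column of the inverse solves [A]{b_j} = {e_j}, where {e_j} is the
# j-th column of the identity matrix.
B = np.column_stack([np.linalg.solve(A, e) for e in np.eye(3)])
print(B)        # matches the inverse shown above
print(A @ B)    # approximately the identity matrix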

Page 12: LU Decomposition

Lower-upper (LU) triangular decomposition is one of the most widely used techniques for the solution of linear systems of equations, owing to its generality and efficiency. This method calls for first decomposing a given matrix into the product of a lower and an upper triangular matrix such that

[A] = [L][U]

Given that

[A]{X} = {R}

and [A] = [L][U], we let

[U]{X} = {Q}

for an additional unknown vector {Q}. Substituting this into [L][U]{X} = {R} shows that {Q} can be found from

[L]{Q} = {R}

The solution procedure (i.e., the algorithm) can now be summarized as follows (a short code sketch is given after the three steps):

(i) Determine [L] and [U] for a given system of equations [A]{X} = {R}

(ii) Calculate {Q} from [L]{Q} = {R} by forward substitution

(iii) Calculate {X} from [U]{X} = {Q} by backward substitution
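As a sketch of these three steps in Python, using SciPy's LU routines scipy.linalg.lu_factor and lu_solve (which also apply partial pivoting internally); the matrix and right-hand side are those of Example E3.4.5 on the next slide:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, -5.0, 1.0],
              [-1.0, 3.0, -1.0],
              [3.0, -4.0, 2.0]])
R = np.array([12.0, -8.0, 16.0])

lu, piv = lu_factor(A)       # step (i): [A] = [L][U] (with row pivoting)
X = lu_solve((lu, piv), R)   # steps (ii) and (iii): forward then backward substitution
print(X)                     # expected: [ 2. -1.  3.]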

Page 13: Example E3.4.5

[A] =
|  2  -5   1 |
| -1   3  -1 |       ;   {R} = {12, -8, 16}
|  3  -4   2 |

[L] =
|  2    0    0 |            | 1  -5/2  1/2 |
| -1   1/2   0 |   ; [U] =  | 0    1   -1  |
|  3   7/2   4 |            | 0    0    1  |

To find the solution we start with [L]{Q} = {R}; that is,

2 q1 + 0 + 0 = 12
-q1 + q2/2 + 0 = -8
3 q1 + 7 q2/2 + 4 q3 = 16

We compute {Q} by forward substitution:

q1 = 12/2 = 6
q2 = (-8 + q1)/(1/2) = -4
q3 = (16 - 3 q1 - 7 q2/2)/4 = 3

Hence {Q}T = {6, -4, 3}.

Then we solve [U]{X} = {Q} by backward substitution:

x1 - (5/2) x2 + (1/2) x3 = 6
0 + x2 - x3 = -4
0 + 0 + x3 = 3

x3 = 3
x2 = -4 + x3 = -1
x1 = 6 + (5/2) x2 - (1/2) x3 = 2

Hence the final solution is {X}T = {2, -1, 3}.

Page 14: Crout Decomposition

We illustrate Crout decomposition in detail for a general 3x3 matrix:

[L][U] = [A]

| l11   0    0  |   | 1  u12  u13 |   | a11  a12  a13 |
| l21  l22   0  | x | 0   1   u23 | = | a21  a22  a23 |
| l31  l32  l33 |   | 0   0    1  |   | a31  a32  a33 |

Multiplying [L] by [U] and equating the corresponding elements of the product with those of the matrix [A], we get the following equations:

l11 = a11;  l21 = a21;  l31 = a31

in index notation: li1 = ai1, i = 1, 2, ..., n   (j = 1, first column)
(the first column of [L] is equal to the first column of [A])

equation                       solution
l11 u12 = a12        ====>     u12 = a12/l11
l11 u13 = a13        ====>     u13 = a13/l11

in index notation: u1j = a1j/l11, j = 2, 3, ..., n   (i = 1, first row)

l21 u12 + l22 = a22  ====>     l22 = a22 - l21 u12
l31 u12 + l32 = a32  ====>     l32 = a32 - l31 u12

in index notation: li2 = ai2 - li1 u12;  i = 2, 3, ..., n   (j = 2, second column)

Page 15: Pseudo Code for LU-Decomposition

cc--- Simple coding
do for i = 1 to n
    l(i,1) = a(i,1)
end do
do for j = 2 to n
    u(1,j) = a(1,j)/l(1,1)
end do
do for j = 2 to n-1
    l(j,j) = a(j,j) - sum(k = 1 to j-1) l(j,k)*u(k,j)
    do for i = j+1 to n
        l(i,j) = a(i,j) - sum(k = 1 to j-1) l(i,k)*u(k,j)
        u(j,i) = ( a(j,i) - sum(k = 1 to j-1) l(j,k)*u(k,i) ) / l(j,j)
    end do
end do
l(n,n) = a(n,n) - sum(k = 1 to n-1) l(n,k)*u(k,n)

c--- Forward substitution
q(1) = r(1)/l(1,1)
do for i = 2 to n
    q(i) = ( r(i) - sum(j = 1 to i-1) l(i,j)*q(j) ) / l(i,i)
end do

c--- Back substitution
x(n) = q(n)
do for i = n-1 to 1
    x(i) = q(i) - sum(j = i+1 to n) u(i,j)*x(j)
end do
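A compact Python version of this pseudocode (my own sketch; it assumes all pivots l(j,j) turn out nonzero, i.e., no pivoting is needed):

import numpy as np

def crout_solve(A, r):
    """Solve A x = r via Crout decomposition ([L] carries the diagonal, [U] has 1's)."""
    n = len(r)
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]              # column j of [L]
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]  # row j of [U]
    q = np.zeros(n)
    for i in range(n):                                           # forward substitution
        q[i] = (r[i] - L[i, :i] @ q[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                               # back substitution
        x[i] = q[i] - U[i, i + 1:] @ x[i + 1:]
    return L, U, x

# Example E3.4.5:
A = np.array([[2.0, -5.0, 1.0], [-1.0, 3.0, -1.0], [3.0, -4.0, 2.0]])
r = np.array([12.0, -8.0, 16.0])
L, U, x = crout_solve(A, r)
print(x)        # approximately [ 2. -1.  3.]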

Page 16: Cholesky Decomposition for Symmetric Matrices

For symmetric matrices the LU decomposition can be further simplified to take advantage of the symmetry property

[A] = [A]T,  or  aij = aji                    (3.4.9)

Then it follows that

[A] = [L][U] = [A]T = ([L][U])T = [U]T [L]T

that is,

[L] = [U]T;  [U] = [L]T                       (3.4.10)

or, in index notation,

uij = lji                                     (3.4.11)

It should be noted that the diagonal elements of [U] are no longer necessarily equal to one. Comment: this algorithm should be programmed using complex arithmetic, because the square roots that appear on the diagonal can become imaginary when [A] is not positive definite.
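A minimal sketch of the Cholesky algorithm in this form, writing [A] = [L][L]T so that [U] = [L]T; cmath.sqrt is used so the comment about complex arithmetic is honored when [A] is not positive definite. The example matrix is my own choice, not from the notes:

import cmath

def cholesky(a):
    """Return the lower-triangular factor L with a = L L^T (so U = L^T)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = cmath.sqrt(a[i][i] - s)   # may be complex if a is not positive definite
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# A small symmetric, positive definite example:
A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
for row in cholesky(A):
    print([complex(v) for v in row])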

Page 17: Tridiagonal Matrix Algorithm (TDMA), also known as the Thomas Algorithm

[A] =
| b1  c1   0   0 |
| a2  b2  c2   0 |
|  0  a3  b3  c3 |
|  0   0  a4  b4 |

If we apply LU decomposition to this matrix, taking into account its special structure, and let

[L] =                          [U] =
| f1   0   0   0 |             | 1  g1   0   0 |
| e2  f2   0   0 |             | 0   1  g2   0 |
|  0  e3  f3   0 |             | 0   0   1  g3 |
|  0   0  e4  f4 |             | 0   0   0   1 |

then, by multiplying [L] and [U] and equating the corresponding elements of the product to those of the original matrix [A], it can be shown that

f1 = b1
g1 = c1/f1
e2 = a2

and further, for k = 2, 3, ..., n-1:

ek = ak
fk = bk - ek gk-1
gk = ck/fk

with, for the last row,

en = an
fn = bn - en gn-1

Page 18: TDMA Algorithm

c----- Decomposition
c1 = c1/b1
do for k = 2 to n-1
    bk = bk - ak*ck-1
    ck = ck/bk
end do
bn = bn - an*cn-1

c----- Forward substitution (note: rk, k = 1, 2, ..., n, is the right-hand side of the equations)
r1 = r1/b1
do for k = 2 to n
    rk = (rk - ak*rk-1)/bk
end do

c----- Back substitution
xn = rn
do for k = n-1 to 1
    xk = rk - ck*xk+1
end do
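The same algorithm in Python (a sketch that overwrites working copies of its inputs exactly as the pseudocode does; a, b, c are the sub-, main-, and super-diagonals, with a[0] and c[n-1] unused, and the right-hand side {r} = {4, 9, 3} below is my own arbitrary choice):

import numpy as np

def tdma(a, b, c, r):
    """Thomas algorithm for a tridiagonal system; a, b, c, r are 1-D arrays of length n."""
    n = len(b)
    b = np.array(b, dtype=float)
    c = np.array(c, dtype=float)
    r = np.array(r, dtype=float)
    # Decomposition
    c[0] = c[0] / b[0]
    for k in range(1, n - 1):
        b[k] = b[k] - a[k] * c[k - 1]
        c[k] = c[k] / b[k]
    b[n - 1] = b[n - 1] - a[n - 1] * c[n - 2]
    # Forward substitution
    r[0] = r[0] / b[0]
    for k in range(1, n):
        r[k] = (r[k] - a[k] * r[k - 1]) / b[k]
    # Back substitution
    x = np.zeros(n)
    x[n - 1] = r[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = r[k] - c[k] * x[k + 1]
    return x

# The matrix of Example E3.4.6 with right-hand side {4, 9, 3}, whose solution is (1, 1, 1):
a = np.array([0.0, 1.0, 1.0])    # sub-diagonal (a[0] unused)
b = np.array([2.0, 4.0, 2.0])    # main diagonal
c = np.array([2.0, 4.0, 0.0])    # super-diagonal (c[2] unused)
print(tdma(a, b, c, np.array([4.0, 9.0, 3.0])))   # approximately [ 1.  1.  1.]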

Page 19: Example E3.4.6

Problem: Determine the LU decomposition of the following matrix using TDMA.

[A] =
| 2  2  0 |
| 1  4  4 |
| 0  1  2 |

Solution: n = 3

c1 = c1/b1 = 2/2 = 1

k = 2:
b2 = b2 - a2 c1 = 4 - (1)(1) = 3
c2 = c2/b2 = 4/3

k = n = 3:
b3 = b3 - a3 c2 = 2 - (1)(4/3) = 2/3

The other elements remain the same; hence

[L] =                    [U] =
| 2  0   0  |            | 1  1   0  |
| 1  3   0  |            | 0  1  4/3 |
| 0  1  2/3 |            | 0  0   1  |

It can be verified that the product [L][U] is indeed equal to [A].

Page 20: Iterative Methods for Solving Linear Systems

Iterative methods are those where an initial guess is made for the solution vector and then corrected sequentially until a certain error tolerance is reached. The concept of iterative computation was introduced in Chapter 2 for nonlinear equations. The idea is the same here, except that we deal with linear systems of equations. In this regard, the fixed-point iteration method presented in Chapter 2 for two (nonlinear) equations in two unknowns is repeated here, because that is precisely what Jacobi iteration is about.

Page 21: Jacobi Iteration

Two linear equations, in general, can be written as

a11 x1 + a12 x2 = c1

a21 x1 + a22 x2 = c2

We now rearrange these equations so that each unknown is written as a function of the others, i.e.,

x1 = ( c1 - a12 x2 )/ a11 = f(x1,x2)

x2 = ( c2 - a21 x1 )/ a22 = g(x1,x2)

which are in the same form as those used for fixed-point iteration (Chapter 2). The functions f(x1,x2) and g(x1,x2) are introduced for generality. It is implicitly assumed that the diagonal elements of the coefficient matrix are not zero. If there is a zero diagonal element, it should be avoided by changing the order of the equations, i.e., by pivoting. Jacobi iteration calls for starting with a guess for the unknowns, x1 and x2, and then finding new values using these equations iteratively. If certain conditions are satisfied, this iteration procedure converges to the right answer, as it did in the case of the fixed-point iteration method.
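A minimal Jacobi-iteration sketch in Python (my own illustration; it updates all unknowns from the values of the previous iteration and stops on the error norm used in Example E3.5.1 below):

import numpy as np

def jacobi(A, c, x0=None, tol=1e-6, max_iter=100):
    """Jacobi iteration for A x = c; every component is updated from the old iterate."""
    n = len(c)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]     # sum of the off-diagonal terms
            x_new[i] = (c[i] - s) / A[i, i]
        err = np.sum(np.abs(x_new - x))          # error norm E = sum |xi_new - xi_old|
        x = x_new
        if err < tol:
            break
    return x, it

# Example E3.5.1:
A = np.array([[10.0, 2.0, 3.0], [2.0, -10.0, 3.0], [-1.0, -1.0, 5.0]])
c = np.array([23.0, -9.0, 12.0])
print(jacobi(A, c))     # converges toward x = (1, 2, 3)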

Page 22: Example E3.5.1

Problem: Solve the following system of equations using Jacobi iteration.

10 x1 + 2 x2 + 3 x3 = 23
2 x1 - 10 x2 + 3 x3 = -9
-x1 - x2 + 5 x3 = 12

Solution: Rearranging,

x1 = (23 - 2 x2 - 3 x3)/10
x2 = (-9 - 2 x1 - 3 x3)/(-10)
x3 = (12 + x1 + x2)/5

Starting with a somewhat arbitrary guess, x1 = 0, x2 = 0, and x3 = 0, iterating yields the values shown in the table below.

ITER   X1          X2          X3          Error Norm, E = sum(i=1 to n) |xi_new - xi_old|
0      0           0           0           ---
1      2.300000    0.900000    2.400000    5.600000
2      1.400000    2.080000    3.040000    2.720000
3      0.972000    2.092000    3.096000    4.960001E-01
4      0.952800    2.023200    3.012800    1.712000E-01
5      0.991520    1.994400    2.995200    8.512014E-02
6      1.002560    1.996864    2.997184    1.548803E-02
7      1.001472    1.999667    2.999885    6.592035E-03
8      1.000101    2.000260    3.000228    2.306700E-03
9      0.9998797   2.000089    3.000072    5.483031E-04
10     0.9999606   1.999998    2.999994    2.506971E-04

Page 23: Convergence Criteria for Jacobi Iteration

|∂f/∂x1| + |∂f/∂x2| = |a12/a11| < 1

|∂g/∂x1| + |∂g/∂x2| = |a21/a22| < 1

In general, a sufficient (but not necessary) condition for convergence of Jacobi iteration is

sum(j = 1 to n, j ≠ i) |aij| < |aii|,   i = 1, 2, ..., n

In other words, the sum of the absolute values of the off-diagonal elements in each row must be less than the absolute value of the diagonal element of the coefficient matrix. Such matrices are called strictly diagonally dominant matrices. The reader can easily show that the matrix of Example E3.5.1 does satisfy the criterion given here.

Page 24: Gauss-Seidel Iteration

This method is essentially the same as the Jacobi iteration method, except that the new values of the variables (i.e., the most recently updated values) are used in subsequent calculations without waiting for the completion of one full iteration over all variables. When the convergence criteria are satisfied, Gauss-Seidel iteration takes fewer iterations than Jacobi iteration to achieve the same error bound. To illustrate how this procedure works, we present the first two iterations of the Gauss-Seidel method applied to the example problem:

x1 = (23 - 2 x2 - 3 x3)/10
x2 = (-9 - 2 x1 - 3 x3)/(-10)
x3 = (12 + x1 + x2)/5

ITER=0, x1=0, x2=0, x3=0 (initial guess)

ITER=1,

x1 = ( 23 - 0 -0 )/10 = 2.3

Page 25: Gauss-Seidel Iteration (II)

Now this new value of x1 is used in the second equation, as opposed to using x1 = 0 (the previous value of x1) as in Jacobi iteration; that is,

x2 = [-9 - (2)(2.3) - 0]/(-10) = 1.36

Similarly, in the third equation the most recent values of x1 and x2 are used. Hence,

x3 = (12 + 2.3 + 1.36)/5 = 3.132

And so on for the rest of the iterations:

ITER = 2

x1 = [23 - (2)(1.36) - (3)(3.132)]/10 = 1.0884
x2 = [-9 - (2)(1.0884) - (3)(3.132)]/(-10) = 2.05728
x3 = (12 + 1.0884 + 2.05728)/5 = 3.029136
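A Gauss-Seidel sketch in Python (my own illustration; identical to the Jacobi sketch above except that each newly computed component is used immediately):

import numpy as np

def gauss_seidel(A, c, x0=None, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration for A x = c; new values are used as soon as they are available."""
    n = len(c)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]   # uses the latest values already stored in x
            x[i] = (c[i] - s) / A[i, i]
        if np.sum(np.abs(x - x_old)) < tol:
            break
    return x, it

# Example E3.5.2 (same system as E3.5.1):
A = np.array([[10.0, 2.0, 3.0], [2.0, -10.0, 3.0], [-1.0, -1.0, 5.0]])
c = np.array([23.0, -9.0, 12.0])
print(gauss_seidel(A, c))   # reaches x = (1, 2, 3) in fewer iterations than Jacobi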

Page 26: Example E3.5.2

Problem: Solve the following system of equations (from Example E3.5.1) using Gauss-Seidel iteration.

10 x1 + 2 x2 + 3 x3 = 23
2 x1 - 10 x2 + 3 x3 = -9
-x1 - x2 + 5 x3 = 12

Table E3.5.1 Convergence trend of the Gauss-Seidel method for Example E3.5.1

ITER   X1             X2             X3             Error Norm, E
0      0              0              0              ---
1      2.300000E+00   1.360000E+00   3.132000E+00   6.792000E+00
2      1.088400E+00   2.057280E+00   3.029136E+00   2.011744E+00
3      9.798032E-01   2.004701E+00   2.996901E+00   1.934104E-01
4      9.999894E-01   1.999068E+00   2.999811E+00   2.872980E-02
5      1.000243E+00   1.999992E+00   3.000047E+00   1.412988E-03
6      9.999875E-01   2.000012E+00   3.000000E+00   3.223419E-04
7      9.999977E-01   1.999999E+00   3.000000E+00   2.276897E-05
8      1.000000E+00   2.000000E+00   3.000000E+00   3.457069E-06
9      1.000000E+00   2.000000E+00   3.000000E+00   3.576279E-07
10     1.000000E+00   2.000000E+00   3.000000E+00   0.000000E+00

Page 27: Convergence Criteria for Gauss-Seidel Iteration

The convergence criteria for the Gauss-Seidel method are more relaxed than the condition given above for Jacobi iteration. Here a sufficient condition is

sum(j = 1 to n, j ≠ i) |aij| / |aii| <= 1   for all equations, and
sum(j = 1 to n, j ≠ i) |aij| / |aii| <  1   for at least one equation.

This condition is also known as the Scarborough criterion. We also introduce the following theorem concerning the convergence of the Gauss-Seidel iteration method.

Theorem: Let [A] be a real symmetric matrix with positive diagonal elements. Then the Gauss-Seidel method for solving [A]{x} = {c} will converge, for any choice of initial guess, if and only if [A] is a positive definite matrix.

Definition: If {x}T [A] {x} > 0 for all nonzero vectors {x}, where [A] is a real symmetric matrix, then [A] is said to be a positive definite matrix. (Note also that [A] is positive definite if and only if all of its eigenvalues are real and positive.)

Page 28: Relaxation Concept

Iterative methods such as Jacobi and Gauss-Seidel iteration are successive prediction-correction methods, in that an initial guess is corrected to compute a new approximate solution, the new solution is then corrected, and so on. The Gauss-Seidel method, for example, can be formulated as

xi(k+1) = xi(k) + Δxi(k)

where k denotes the number of iterations and Δxi(k) is the correction to the k-th solution, which is obtained from

Δxi(k) = xi_e - xi(k)

where xi_e is the newly computed estimate. To interpret and understand this equation better, we let

xi_e = xi_new,   xi(k) = xi_old

and, introducing a relaxation factor ω, write the update as

xi_new = ω xi_new + (1 - ω) xi_old
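A sketch of Gauss-Seidel with relaxation in Python, obtained from the Gauss-Seidel sketch above by blending each new value with the old one through the relaxation factor ω (the argument name omega and the default tolerance are my own choices):

import numpy as np

def gauss_seidel_relaxed(A, c, omega=1.0, x0=None, tol=0.5e-5, max_iter=1000):
    """Gauss-Seidel iteration with relaxation factor omega (omega = 1 is plain Gauss-Seidel)."""
    n = len(c)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]
            x_gs = (c[i] - s) / A[i, i]                   # plain Gauss-Seidel value
            x[i] = omega * x_gs + (1.0 - omega) * x[i]    # relaxed update
        if np.sum(np.abs(x - x_old)) < tol:
            break
    return x, it

# The system of Example E3.5.4 below:
A = np.array([[1.0, 2.0], [1.0, -3.0]])
c = np.array([-5.0, 20.0])
for omega in (0.8, 1.0):
    print(omega, gauss_seidel_relaxed(A, c, omega))   # under-relaxation takes fewer iterations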

Page 29: Example E3.5.4

Problem: Show that the Gauss-Seidel iteration method converges in 50 iterations when applied to the following system without relaxation (i.e., ω = 1):

x1 + 2 x2 = -5
x1 - 3 x2 = 20

Find an optimum relaxation factor that will minimize the number of iterations needed to achieve a solution with the error bound

||Ei|| < 0.5 x 10^-5,   where ||Ei|| = sum(i = 1 to 2) |xi_new - xi_old|

Solution: A few iterations are shown here with ω = 0.8.

Let x1_old = 0 and x2_old = 0 (initial guess).

x1_new = (-5 - 2 x2_old) = -5

After a few iterations we reach the under-relaxed update

x2_new = (0.8)(-4.853) + (0.2)(-6.4) = -5.162

x2_old = -5.162   (switch old and new)

Page 30: Example E3.5.4 (continued)

Table E3.5.4b Number of Iterations versus Relaxation Factor for Example E3.5.4

Relaxation factor ω    Number of iterations
0.20                   62
0.40                   30
0.60                   17
0.70                   14
0.80                   11
0.90                   15
1.00                   50
1.05                   84
1.10                   >1000

It is seen that the minimum number of iterations is obtained with a relaxation factor of ω = 0.80.

Page 31: Case Study - The Problem of Falling Objects

(Figure: three falling objects of mass m1, m2, and m3 connected by cords with tensions T1 and T2; g denotes the acceleration of gravity.)

-m1 a = c1 v2 - m1 g - T1

-m2 a = c2 v2 - m2 g - T1 + T2

-m3 a = c3 v2 - m3 g + T2

Page 32: Case Study - The Problem of Falling Objects (continued)

These three equations can be used to solve for any three of the unknown parameters: m1, m2, m3, c1, c2, c3, v, a, T1, and T2. As an example, here we set up a problem by specifying the masses, the drag coefficients, and the acceleration as

m1 = 10 kg, m2 = 6 kg, m3 = 14 kg, c1 = 1.0 kg/m, c2 = 1.5 kg/m, c3 = 2.0 kg/m, and a = 0.5 g,

and find the remaining three unknowns: v, T1, and T2. The acceleration of gravity is g = 9.81 m/sec2.

Substituting these values into the given set of equations and rearranging yields

v2 - T1 = 49.05
1.5 v2 + T1 - T2 = 29.43
2.0 v2 + T2 = 68.67

v2 = 32.7 (v = 5.72 m/s), T1 = -16.35 N, T2 = 3.27 N   (N = newton = kg.m/sec2)
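Since the system is linear in the unknowns v2, T1, and T2, it can be solved with any of the methods above; a quick NumPy verification (my own sketch):

import numpy as np

# Unknowns ordered as (v^2, T1, T2)
A = np.array([[1.0, -1.0,  0.0],
              [1.5,  1.0, -1.0],
              [2.0,  0.0,  1.0]])
b = np.array([49.05, 29.43, 68.67])

v2, T1, T2 = np.linalg.solve(A, b)
print(v2, np.sqrt(v2), T1, T2)   # approximately 32.7, 5.72, -16.35, 3.27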
