Linear Systems: Iterative Solutions
CSE 541
Roger Crawfis
Sparse Linear Systems
Computational Science deals with the simulation of natural phenomena, such as weather, blood flow, impact collisions, and earthquake response.
To simulate these, issues such as heat transfer, electromagnetic radiation, fluid flow, and shock-wave propagation need to be taken into account.
Combining initial conditions with general laws of physics (conservation of energy and mass), a model of these phenomena typically involves a Partial Differential Equation (PDE).
Example PDEs
The Wave Equation:
1D: $\partial^2 \phi / \partial t^2 = c^2 \, \partial^2 \phi / \partial x^2$
3D: $\partial^2 \phi / \partial t^2 = c^2 \nabla^2 \phi$
Note: $\nabla^2 \phi = \partial^2 \phi / \partial x^2 + \partial^2 \phi / \partial y^2 + \partial^2 \phi / \partial z^2$
$\phi(x, y, z, t)$ is some continuous function of space and time (e.g., temperature).
Example PDE’s
No changes over time (steady state): Laplace's Equation:
$\nabla^2 \phi = 0$
This can be solved analytically for very simple geometric configurations and boundary conditions.
In general, we need to use the computer to solve it.
Example PDEs
Second derivatives (central differences with grid spacing $h$):
$\dfrac{\partial^2 \phi}{\partial x^2} \approx \dfrac{\phi(x+h,y) - 2\,\phi(x,y) + \phi(x-h,y)}{h^2}$   (right, middle, left)
$\dfrac{\partial^2 \phi}{\partial y^2} \approx \dfrac{\phi(x,y+h) - 2\,\phi(x,y) + \phi(x,y-h)}{h^2}$   (up, middle, down)
Summing the two gives the discrete Laplacian:
$\nabla^2 \phi \approx \dfrac{\phi(x+h,y) + \phi(x-h,y) + \phi(x,y+h) + \phi(x,y-h) - 4\,\phi(x,y)}{h^2} = 0$
$\nabla^2 \phi$ = ( Left + Right + Up + Down - 4·Middle ) / h² = 0
Finite Differences
Fundamentally we are approximating derivatives, using a grid spacing or step size of h. The Finite-Difference method uses a regular grid.
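As a concrete sketch (not from the original slides), here is the five-point stencil at an interior grid point in C#, assuming phi is a 2D array of samples with spacing h:

    // Discrete Laplacian at interior grid point (i, j) via central differences.
    static double Laplacian(double[,] phi, int i, int j, double h)
    {
        return (phi[i - 1, j] + phi[i + 1, j]    // left + right
              + phi[i, j - 1] + phi[i, j + 1]    // down + up
              - 4.0 * phi[i, j])                 // - 4 * middle
              / (h * h);
    }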
Finite Differences
A very simple problem: find the electrostatic potential inside a box whose sides are held at a given potential.
Set up an n × n grid on which the potential is defined and satisfies Laplace's Equation:
$\dfrac{\partial^2 \phi}{\partial x^2} + \dfrac{\partial^2 \phi}{\partial y^2} = 0$
Linear System
Applying the stencil at every interior grid point gives one equation per point, so we obtain a linear system with N = n² unknowns:
$4\,\phi_{i,j} - \phi_{i-1,j} - \phi_{i+1,j} - \phi_{i,j-1} - \phi_{i,j+1} = 0$
Each row of the matrix has a 4 on the diagonal and a -1 in the column of each of its (up to four) neighbors, at offsets ±1 and ±n. The matrix is n² × n² and banded, with bandwidth n.
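To make the structure concrete, here is a minimal sketch (my helper, not the slides') that assembles one matrix row as (column, value) pairs, assuming grid point (i, j) maps to unknown number i·n + j; boundary neighbors are simply omitted:

    using System.Collections.Generic;

    // Row for unknown (i, j) of the n^2-by-n^2 2D Laplacian system:
    // 4 on the diagonal, -1 for each neighbor that exists.
    static List<(int Col, double Val)> BuildRow(int i, int j, int n)
    {
        var row = new List<(int Col, double Val)> { (i * n + j, 4.0) };
        if (i > 0)     row.Add(((i - 1) * n + j, -1.0));  // offset -n
        if (i < n - 1) row.Add(((i + 1) * n + j, -1.0));  // offset +n
        if (j > 0)     row.Add((i * n + j - 1, -1.0));    // offset -1
        if (j < n - 1) row.Add((i * n + j + 1, -1.0));    // offset +1
        return row;
    }

Storing only these few entries per row is what keeps the memory cost near O(n²) instead of O(n⁴).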
3D Simulations
In 3D each grid point has six neighbors, so each row has a 6 on the diagonal and a -1 at offsets ±1, ±n, and ±n². The matrix is n³ × n³ and banded, with bandwidth n².
Gaussian Elimination
What happens to these banded matrices when Gaussian Elimination is applied?
The matrix has only about 7n³ non-zero elements (3D case), but its full size is N² elements, where N = n² in 2D or n³ in 3D; for the 3D case that is n⁶ elements.
Gaussian Elimination on these matrices suffers from fill: the forward-elimination phase produces about n² non-zero elements per row, or n⁵ non-zero elements in total.
Memory Costs
Example: n = 300. Memory cost of the sparse matrix:
7n³ = 189,000,000 = 189 × 10⁶ elements. Floats ⇒ 756 MB; doubles ⇒ 1.4 GB.
The full matrix would have n⁶ = 7.29 × 10¹⁴ elements!
Gaussian Elimination fill: about n⁵ = 2.43 × 10¹² non-zeros, which as doubles is roughly 1.9 × 10¹³ bytes.
With n = 300, simulating weather for the state of Ohio would place samples more than 1 km apart.
Remember, this spacing is the h in central differences.
Solutions for Sparse Matrices
We need to keep memory (and computation) low.
These types of problems motivate the Iterative Solutions for Linear Systems: split the matrix as A = M - N, so that Ax = b becomes Mx = Nx + b, and iterate until convergence:
$M x^{(k)} = N x^{(k-1)} + b$
Jacobi Iteration
One of the easiest splittings of the matrix A is A = D - M, where D is the diagonal of A:
$Ax = b \;\Rightarrow\; Dx - Mx = b \;\Rightarrow\; Dx = Mx + b$
$x^{(k)} = D^{-1} M x^{(k-1)} + D^{-1} b$
It is trivial to compute $D^{-1}$.
Jacobi Iteration
Another way to understand this is to treat each equation separately: given the i-th equation, solve for $x_i$, assuming you know the other variables.
Use the current guess for the other variables.
The i-th equation is
$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n = b_i$
Solving it for $x_i$:
$x_i = \dfrac{1}{a_{ii}} \Big( b_i - \sum_{j=1,\, j \neq i}^{n} a_{ij} x_j \Big)$
so each Jacobi update is
$x_i^{(k)} = \dfrac{1}{a_{ii}} \Big( b_i - \sum_{j \neq i} a_{ij} x_j^{(k-1)} \Big)$
Jacobi Iteration
Written out for the full system:
$a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1$
$a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2$
$\vdots$
$a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n$
Starting from an initial guess $x^{(0)} = (x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})$, update every component:
$x_1^{(1)} = \frac{1}{a_{11}} \big( b_1 - a_{12} x_2^{(0)} - \cdots - a_{1n} x_n^{(0)} \big)$
$x_2^{(1)} = \frac{1}{a_{22}} \big( b_2 - a_{21} x_1^{(0)} - a_{23} x_3^{(0)} - \cdots - a_{2n} x_n^{(0)} \big)$
$\vdots$
$x_n^{(1)} = \frac{1}{a_{nn}} \big( b_n - a_{n1} x_1^{(0)} - \cdots - a_{n,n-1} x_{n-1}^{(0)} \big)$
Jacobi Iteration
Cute, but will it work? Algorithms, even mathematical ones, need a mathematical framework or analysis.
Let's first look at a simple example.
Jacobi Iteration - Example
Example system:
$8x + y + 3z = 7$
$3x + 8y + 2z = 12$
$2x + 3y + 8z = 19$
Initial guess: $x^{(0)} = y^{(0)} = z^{(0)} = 0$
Algorithm:
$x^{(n+1)} = \tfrac{1}{8} \big( 7 - y^{(n)} - 3 z^{(n)} \big)$
$y^{(n+1)} = \tfrac{1}{8} \big( 12 - 3 x^{(n)} - 2 z^{(n)} \big)$
$z^{(n+1)} = \tfrac{1}{8} \big( 19 - 2 x^{(n)} - 3 y^{(n)} \big)$
Jacobi Iteration - Example
1st iteration:
$x^{(1)} = \tfrac{1}{8} \big( 7 - 0 - 3 \cdot 0 \big) = \tfrac{7}{8}$
$y^{(1)} = \tfrac{1}{8} \big( 12 - 3 \cdot 0 - 2 \cdot 0 \big) = \tfrac{12}{8} = \tfrac{3}{2}$
$z^{(1)} = \tfrac{1}{8} \big( 19 - 2 \cdot 0 - 3 \cdot 0 \big) = \tfrac{19}{8}$
2nd iteration:
$x^{(2)} = \tfrac{1}{8} \big( 7 - \tfrac{3}{2} - 3 \cdot \tfrac{19}{8} \big) = -\tfrac{13}{64}$
$y^{(2)} = \tfrac{1}{8} \big( 12 - 3 \cdot \tfrac{7}{8} - 2 \cdot \tfrac{19}{8} \big) = \tfrac{37}{64}$
$z^{(2)} = \tfrac{1}{8} \big( 19 - 2 \cdot \tfrac{7}{8} - 3 \cdot \tfrac{3}{2} \big) = \tfrac{51}{32}$
Jacobi Iteration - Example
x(3) = 0.205078125, y(3) = 1.177734375, z(3) = 2.208984375
x(4) = -0.101, y(4) = 0.871, z(4) = 1.882
Actual solution: x = 0, y = 1, z = 2
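To make the example reproducible, here is a small self-contained C# program (mine, not from the slides) that runs these Jacobi sweeps:

    using System;

    class JacobiExample
    {
        static void Main()
        {
            double[,] A = { { 8, 1, 3 }, { 3, 8, 2 }, { 2, 3, 8 } };
            double[] b = { 7, 12, 19 };
            double[] x = { 0, 0, 0 };                 // initial guess

            for (int k = 1; k <= 4; k++)
            {
                var next = new double[3];
                for (int i = 0; i < 3; i++)
                {
                    double sum = b[i];
                    for (int j = 0; j < 3; j++)
                        if (i != j) sum -= A[i, j] * x[j];
                    next[i] = sum / A[i, i];          // uses only the previous iterate
                }
                x = next;
                Console.WriteLine($"k={k}: x={x[0]:F9} y={x[1]:F9} z={x[2]:F9}");
            }
            // The iterates oscillate in toward the true solution (0, 1, 2).
        }
    }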
Jacobi Iteration
Questions:
1. How many iterations do we need?
2. What is our stopping criterion?
3. Is it faster than applying Gaussian Elimination?
4. Are there round-off errors or other precision and robustness issues?
Jacobi Method - Implementation
while (!converged) {
    for (int i = 0; i < N; i++) {          // For each equation
        double sum = b[i];
        for (int j = 0; j < N; j++) {      // Compute new xi
            if (i != j)
                sum -= A[i, j] * x[j];
        }
        temp[i] = sum / A[i, i];
    }
    // Test for convergence...
    (x, temp) = (temp, x);                 // adopt the new iterate, reuse old storage
}
Complexity:
Each iteration: O(N²)
Total: O(MN²) for M iterations
Jacobi Method - Complexity
while (!converged) {
    for (int i = 0; i < N; i++) {          // For each equation
        double sum = b[i];
        foreach (var element in nonZeroElements[i]) {  // Compute new xi; entries assumed to carry (Column, Value)
            if (element.Column != i)
                sum -= element.Value * x[element.Column];
        }
        temp[i] = sum / A[i, i];
    }
    // Test for convergence...
    (x, temp) = (temp, x);                 // adopt the new iterate
}
Complexity:
Each iteration: O(pN)
Total: O(MpN), where p = # non-zero elements per row
For our 2D Laplacian equation, p = 4. N = n², so with n = 300, N = 90,000: a few hundred thousand operations per iteration instead of N² ≈ 8.1 × 10⁹.
Jacobi Iteration
Cute, but does it work for all matrices? Does it work for all initial guesses?
Algorithms, even mathematical ones, need a mathematical framework or analysis. We still do not have this.
Gauss-Seidel Iteration
Split the matrix A into three parts, A = D + L + U, where D holds the diagonal elements of A, L the lower-triangular part, and U the upper-triangular part:
$Ax = b \;\Rightarrow\; Dx + Lx + Ux = b \;\Rightarrow\; (D + L)\,x = b - Ux$
$(D + L)\, x^{(k)} = b - U x^{(k-1)}$
Gauss-Seidel Iteration
Another way to understand this is to again treat each equation separately: given the i-th equation, solve for $x_i$, assuming you know the other variables.
Use the most current guess for the other variables:
$x_i = \dfrac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij} x_j \Big)$
Gauss-Seidel Iteration
Looking at it more simply:
$x_i^{(k)} = \dfrac{1}{a_{ii}} \Big( b_i - \underbrace{\sum_{j=1}^{i-1} a_{ij} x_j^{(k)}}_{\text{this iteration}} - \underbrace{\sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)}}_{\text{last iteration}} \Big)$
Gauss-Seidel Iteration
Questions:
1. How many iterations do we need?
2. What is our stopping criterion?
3. Is it faster than applying Gaussian Elimination?
4. Are there round-off errors or other precision and robustness issues?
Gauss-Seidel - Implementation
while (!converged) {
    for (int i = 0; i < N; i++) {          // For each equation
        double sum = b[i];
        foreach (var element in nonZeroElements[i]) {  // entries carry (Column, Value)
            if (element.Column != i)
                sum -= element.Value * x[element.Column];
        }
        x[i] = sum / A[i, i];              // update x in place: no temp array
    }
    // Test for convergence...
}
Complexity:
Each iteration: O(pN)
Total: O(MpN), p = # non-zero elements per row
Differences from Jacobi
No temp array: each xᵢ is overwritten as soon as it is computed, so new values are used within the same sweep.
Convergence
Jacobi Iteration can be shown to converge from any initial guess if A is strictly diagonally dominant.
Diagonally dominant:
$|a_{ii}| \ge \sum_{j \ne i} |a_{ij}| \quad \forall i$
Strictly diagonally dominant:
$|a_{ii}| > \sum_{j \ne i} |a_{ij}| \quad \forall i$
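Checking the strict form is mechanical; a small C# sketch (not from the slides):

    using System;

    // True if |a_ii| > sum over j != i of |a_ij| for every row i.
    static bool IsStrictlyDiagonallyDominant(double[,] A)
    {
        int n = A.GetLength(0);
        for (int i = 0; i < n; i++)
        {
            double offDiag = 0.0;
            for (int j = 0; j < n; j++)
                if (j != i) offDiag += Math.Abs(A[i, j]);
            if (Math.Abs(A[i, i]) <= offDiag)
                return false;
        }
        return true;
    }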
Convergence
Gauss-Seidel can be shown to converge if A is symmetric positive definite:
$x^T A x > 0 \quad \forall x \neq 0$
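One practical way to test this (a sketch of mine, not part of the slides) is to attempt a Cholesky factorization A = L·Lᵀ, which succeeds exactly when the symmetric matrix A is positive definite:

    using System;

    // Attempts A = L * L^T; the factorization exists with a positive
    // diagonal if and only if the symmetric matrix A is positive definite.
    static bool IsPositiveDefinite(double[,] A)
    {
        int n = A.GetLength(0);
        var L = new double[n, n];
        for (int i = 0; i < n; i++)
        {
            for (int j = 0; j <= i; j++)
            {
                double sum = A[i, j];
                for (int k = 0; k < j; k++)
                    sum -= L[i, k] * L[j, k];
                if (i == j)
                {
                    if (sum <= 0.0) return false;   // not positive definite
                    L[i, i] = Math.Sqrt(sum);
                }
                else
                    L[i, j] = sum / L[j, j];
            }
        }
        return true;
    }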
Convergence - Jacobi
Consider the convergence graphically for a 2D system:
[Figure: from the initial guess, Jacobi alternately solves Equation 1 for x and Equation 2 for y, stair-stepping in toward the intersection of the two lines.]
Convergence - Jacobi
What if we swap the order of the equations?
[Figure: the same construction with Equation 1 and Equation 2 swapped; the steps now move away from the intersection and diverge.]
Not diagonally dominant. Same set of equations!
Diagonally Dominant
What does diagonally dominant mean for a 2D system?
10x + y = 12 ⇒ high slope (more vertical)
x + 10y = 21 ⇒ low slope (more horizontal)
The identity matrix (or any diagonal matrix) would give the intersection of a vertical and a horizontal line. The b vector controls the location of the lines.
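A quick C# experiment (mine, not the slides') makes the point: Jacobi on the diagonally dominant ordering converges to the solution (1, 2), while the swapped ordering diverges:

    using System;

    class OrderingDemo
    {
        static void Run(double[,] A, double[] b, string label)
        {
            double x = 0, y = 0;                        // initial guess
            for (int k = 0; k < 10; k++)
            {
                double xNew = (b[0] - A[0, 1] * y) / A[0, 0];
                double yNew = (b[1] - A[1, 0] * x) / A[1, 1];
                x = xNew; y = yNew;
            }
            Console.WriteLine($"{label}: x={x:F4} y={y:F4}");
        }

        static void Main()
        {
            // Diagonally dominant ordering: converges toward (1, 2).
            Run(new double[,] { { 10, 1 }, { 1, 10 } }, new double[] { 12, 21 }, "dominant");
            // Same equations, swapped order: Jacobi diverges.
            Run(new double[,] { { 1, 10 }, { 10, 1 } }, new double[] { 21, 12 }, "swapped");
        }
    }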
Convergence – Gauss-Seidel
[Figure: Gauss-Seidel on the same system; solving Equation 1 for x and immediately using that new x when solving Equation 2 for y, the iterate slides along the lines and reaches the intersection in fewer steps.]
Convergence - SOR
Successive Over-Relaxation (SOR) just adds an extrapolation step: ω = 1.3 means "go an extra 30%" past the computed update.
[Figure: from the initial guess, each step toward Equation 1 (x = ...) or Equation 2 (y = ...) is followed by an extrapolation beyond the computed point.]
This version applies the extrapolation at the very end of the sweep (a mix of Jacobi and Gauss-Seidel).
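In formulas (a sketch consistent with the description above, not taken from the slides), SOR blends the old value with the Gauss-Seidel update, x_i ← (1 - ω)·x_i + ω·(Gauss-Seidel value); in C#:

    // One SOR sweep: the Gauss-Seidel update relaxed by the factor omega.
    // omega = 1 reduces to Gauss-Seidel; omega = 1.3 extrapolates 30% further.
    static void SorSweep(double[,] A, double[] b, double[] x, double omega)
    {
        int n = b.Length;
        for (int i = 0; i < n; i++)
        {
            double sum = b[i];
            for (int j = 0; j < n; j++)
                if (j != i) sum -= A[i, j] * x[j];    // x already holds this sweep's new values for j < i
            double gaussSeidel = sum / A[i, i];
            x[i] = (1 - omega) * x[i] + omega * gaussSeidel;
        }
    }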
Convergence - SOR
SOR with Gauss-Seidel
[Figure: SOR built on Gauss-Seidel; each Gauss-Seidel update along Equation 1 (x = ...) or Equation 2 (y = ...) is followed by an extrapolation step, shown in bold.]