NM_02 Linear Eq.

Upload: liza-oktavia

Posted on 05-Apr-2018


TRANSCRIPT

  • Slide 1/36

    CHAPTER 02

    SOLUTION OF LINEAR EQUATIONS

  • Slide 2/36

    Solving Systems of Equations

    A linear equation in n variables:

    a1x1 + a2x2 + ... + anxn = b

    For small systems (n <= 3), linear algebra provides several tools to solve such
    systems of linear equations:

    Graphical method
    Cramer's rule
    Method of elimination

    Nowadays, easy access to computers makes the solution of very large
    sets of linear algebraic equations possible.
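    A minimal MATLAB sketch (not part of the slide) of letting the computer do the
    work, using the 3-equation example worked out later on Slide 6/36:

        A = [1 -1 1; 3 4 2; 2 1 1];   % coefficient matrix
        b = [6; 9; 7];                % right-hand side
        x = A\b                       % backslash solves A*x = b; expected [3; -1; 2]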

  • Slide 3/36

    Determinants and Cramer's Rule

    [A] : coefficient matrix, so the system [A]{x} = {b} is

    [ a11  a12  a13 ] [ x1 ]   [ b1 ]
    [ a21  a22  a23 ] [ x2 ] = [ b2 ]
    [ a31  a32  a33 ] [ x3 ]   [ b3 ]

    Cramer's rule gives each unknown as a ratio of determinants, where the
    right-hand side replaces the corresponding column of [A]:

         | b1  a12  a13 |            | a11  b1  a13 |            | a11  a12  b1 |
    x1 = | b2  a22  a23 | / D   x2 = | a21  b2  a23 | / D   x3 = | a21  a22  b2 | / D
         | b3  a32  a33 |            | a31  b3  a33 |            | a31  a32  b3 |

    D : Determinant of the A matrix

  • Slide 4/36

    Computing the Determinant

    For [A]{x} = {b} with

          [ a11  a12  a13 ]
    [A] = [ a21  a22  a23 ]
          [ a31  a32  a33 ]

    the determinant is expanded along the first row:

    D = Determinant of A
      = a11 | a22  a23 |  -  a12 | a21  a23 |  +  a13 | a21  a22 |
            | a32  a33 |         | a31  a33 |         | a31  a32 |

    with the 2x2 minors

    D11 = | a22  a23 | = a22*a33 - a32*a23
          | a32  a33 |

    D12 = | a21  a23 | = a21*a33 - a31*a23
          | a31  a33 |

    D13 = | a21  a22 | = a21*a32 - a31*a22
          | a31  a32 |
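    A short MATLAB sketch of Cramer's rule (my illustration, using the built-in
    det(); the test system is the worked example from Slide 6/36):

        A = [1 -1 1; 3 4 2; 2 1 1];   b = [6; 9; 7];
        D = det(A);                    % determinant of the coefficient matrix
        x = zeros(3, 1);
        for i = 1:3
            Ai = A;  Ai(:, i) = b;     % replace column i with the right-hand side
            x(i) = det(Ai) / D;        % Cramer's rule: xi = Di / D
        end
        x                              % expected: [3; -1; 2]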

  • Slide 5/36

    Gauss Elimination

    Solve Ax = b

    Consists of two phases:
    Forward elimination
    Back substitution

    Forward elimination reduces Ax = b to an upper triangular system Tx = b'.
    Back substitution can then solve Tx = b' for x.

    [ a11  a12  a13 | b1 ]    Forward        [ a11  a12   a13   | b1   ]
    [ a21  a22  a23 | b2 ]    Elimination    [ 0    a'22  a'23  | b'2  ]
    [ a31  a32  a33 | b3 ]    ---------->    [ 0    0     a''33 | b''3 ]

    Back substitution then works from the bottom up:

    x3 = b''3 / a''33
    x2 = (b'2 - a'23*x3) / a'22
    x1 = (b1 - a12*x2 - a13*x3) / a11

  • Slide 6/36

    Gaussian Elimination: Forward Elimination

     x1 -  x2 +  x3 = 6
    3x1 + 4x2 + 2x3 = 9        multiplier: -(3/1)
    2x1 +  x2 +  x3 = 7        multiplier: -(2/1)

    x1 - x2 + x3 = 6
    0 + 7x2 - x3 = -9
    0 + 3x2 - x3 = -5          multiplier: -(3/7)

    x1 - x2 + x3 = 6
    0 + 7x2 - x3 = -9
    0 + 0 - (4/7)x3 = -(8/7)

    Solve using BACK SUBSTITUTION: x3 = 2, x2 = -1, x1 = 3

  • Slide 7/36

    Back Substitution

    1x0 + 1x1 - 1x2 + 4x3 = 8
        - 2x1 - 3x2 + 1x3 = 5
                2x2 - 3x3 = 0
                      2x3 = 4        =>  x3 = 2

  • Slide 8/36

    Back Substitution

    1x0 + 1x1 - 1x2 = 0
        - 2x1 - 3x2 = 3
                2x2 = 6              =>  x2 = 3

  • Slide 9/36

    Back Substitution

    1x0 + 1x1 = 3
        - 2x1 = 12                   =>  x1 = -6

  • Slide 10/36

    Back Substitution

    1x0 = 9                          =>  x0 = 9

  • Slide 11/36

    Back Substitution (Pseudocode)

    for i <- n down to 1 do
        /* calculate xi */
        x[i] <- b[i] / a[i,i]
        /* substitute in the equations above */
        for j <- 1 to i-1 do
            b[j] <- b[j] - x[i] * a[j,i]
        endfor
    endfor

    Time Complexity?  O(n^2)
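    A minimal MATLAB translation of this pseudocode (a sketch, not the lecture's
    own code; it assumes a is already upper triangular):

        function x = backsub(a, b)
        % Back substitution for an upper triangular system a*x = b
        n = length(b);
        x = zeros(n, 1);
        for i = n:-1:1
            x(i) = b(i) / a(i,i);            % calculate x(i)
            for j = 1:i-1
                b(j) = b(j) - x(i)*a(j,i);   % substitute into the equations above
            end
        end
        end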

  • Slide 12/36

    Gaussian Elimination: Forward Elimination

    To zero the entry below the pivot a_ii, every row j below it is reduced by a
    multiple of the pivot row, using the multiplier a_ji/a_ii:

    a_jk <- a_jk - (a_ji/a_ii) * a_ik     (for the remaining columns k)
    b_j  <- b_j  - (a_ji/a_ii) * b_i

    so that the new a_ji = a_ji - (a_ji/a_ii) * a_ii = 0.
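    The same update written as a MATLAB sketch (naive, no pivoting; my
    illustration, not code from the slides). Together with backsub above it solves
    Ax = b:

        function [a, b] = forwardelim(a, b)
        % Naive forward elimination: reduce a to upper triangular form (no pivoting)
        n = length(b);
        for i = 1:n-1                          % pivot row
            for j = i+1:n                      % rows below the pivot
                m = a(j,i) / a(i,i);           % multiplier
                a(j,i:n) = a(j,i:n) - m*a(i,i:n);
                b(j) = b(j) - m*b(i);
            end
        end
        end

    Usage: [U, c] = forwardelim(A, b); x = backsub(U, c);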

  • Slide 13/36

    Forward Elimination

     4x0 +  6x1 + 2x2 - 2x3 =  8
     2x0 +  0x1 + 5x2 - 2x3 =  4      multiplier: -(2/4)
    -4x0 -  3x1 - 5x2 + 4x3 =  1      multiplier: -(-4/4)
     8x0 + 18x1 - 2x2 + 3x3 = 40      multiplier: -(8/4)

  • Slide 14/36

    Forward Elimination

    4x0 + 6x1 + 2x2 - 2x3 = 8
         -3x1 + 4x2 - 1x3 = 0
          3x1 - 3x2 + 2x3 = 9         multiplier: -(3/-3)
          6x1 - 6x2 + 7x3 = 24        multiplier: -(6/-3)

  • Slide 15/36

    Forward Elimination

    4x0 + 6x1 + 2x2 - 2x3 = 8
         -3x1 + 4x2 - 1x3 = 0
                1x2 + 1x3 = 9
                2x2 + 5x3 = 24        multiplier: ??

  • Slide 16/36

    Forward Elimination

    4x0 + 6x1 + 2x2 - 2x3 = 8
         -3x1 + 4x2 - 1x3 = 0
                1x2 + 1x3 = 9
                      3x3 = 6

  • Slide 17/36

    Gaussian Elimination: Operation Count in Forward Elimination

    (Diagram: the augmented matrix [A | b], with the entries below the diagonal
    zeroed out column by column at a cost of roughly 2n operations per row.)

    Eliminating the 1st column costs about 2n(n-1), roughly 2n^2, operations; each
    later stage repeats the work on a smaller submatrix.

    TOTAL # of operations for FORWARD ELIMINATION:
    2n^2 + 2(n-1)^2 + 2(n-2)^2 + ... + 2(2)^2
        ~= 2 * n(n+1)(2n+1)/6
        ~= (2/3) n^3  =  O(n^3)

  • Slide 18/36

    Pitfalls of Elimination Methods

    Division by zero: it is possible that during both the elimination and the
    back-substitution phases a division by zero can occur.

    For example:

          2x2 + 3x3 =  8              [ 0  2  3 ]
    4x1 + 6x2 + 7x3 = -3         A =  [ 4  6  7 ]
    2x1 +  x2 + 6x3 =  5              [ 2  1  6 ]

    Here the first pivot a11 = 0, so the very first normalization divides by zero.

    Solution: pivoting (to be discussed later)

  • Slide 19/36

    Pitfalls (cont.)

    Round-off errors

    Because computers carry only a limited number of significant figures, round-off
    errors will occur, and they will propagate from one iteration to the next.

    This problem is especially important when large numbers of equations (100 or
    more) are to be solved.

    Always use double-precision arithmetic. It is slower, but needed for
    correctness!

    It is also a good idea to substitute your results back into the original
    equations and check whether a substantial error has occurred.

  • Slide 20/36

    Pitfalls (cont.): ill-conditioned systems

    Ill-conditioned systems: small changes in coefficients result in large changes
    in the solution. Alternatively, a wide range of answers can approximately
    satisfy the equations.

    (Well-conditioned systems: small changes in coefficients result in small
    changes in the solution.)

    Problem: since round-off errors can induce small changes in the coefficients,
    these changes can lead to large solution errors in ill-conditioned systems.

    Example (x1 computed with Cramer's rule, x1 = (b1*a22 - a12*b2)/D):

    x1 + 2x2 = 10                      D  = 1(2) - 2(1.1) = -0.2
    1.1x1 + 2x2 = 10.4                 x1 = [10(2) - 2(10.4)] / (-0.2) = 4,  x2 = 3

    x1 + 2x2 = 10                      D  = 1(2) - 2(1.05) = -0.1
    1.05x1 + 2x2 = 10.4                x1 = [10(2) - 2(10.4)] / (-0.1) = 8,  x2 = 1

    A change of only 0.05 in one coefficient moves the solution from (4, 3) to (8, 1).

  • Slide 21/36

    Ill-conditioned systems (cont.)

    Surprisingly, substituting the erroneous values, x1 = 8 and x2 = 1, into the
    original equations will not reveal their incorrect nature clearly:

    x1 + 2x2 = 10          8 + 2(1) = 10          (the same!)
    1.1x1 + 2x2 = 10.4     1.1(8) + 2(1) = 10.8   (close!)

    IMPORTANT OBSERVATION:

    An ill-conditioned system is one with a determinant close to zero.

    If the determinant D = 0, then there are infinitely many solutions: a singular
    system.

    Scaling (multiplying the coefficients by the same value) does not change the
    equations, but it changes the value of the determinant in a significant way.
    However, it does not change the ill-conditioned state of the equations!
    DANGER! It may hide the fact that the system is ill-conditioned!!

    How can we find out whether a system is ill-conditioned or not?
    Not easy! Luckily, most engineering systems yield well-conditioned results!
    One way to find out: change the coefficients slightly, then recompute and
    compare.
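    A small MATLAB sketch of those checks (my illustration; det and cond are
    built-ins, and cond is not mentioned on the slide):

        A  = [1 2; 1.1 2];    b = [10; 10.4];
        Ap = [1 2; 1.05 2];           % same system with one coefficient changed slightly
        x  = A\b                      % [4; 3]
        xp = Ap\b                     % [8; 1], a large change from a small perturbation
        det(A)                        % -0.2, close to zero
        cond(A)                       % a large condition number also flags ill-conditioning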

  • Slide 22/36

    COMPUTING THE DETERMINANT OF A MATRIX USING GAUSSIAN ELIMINATION

    The determinant of a matrix can be found using Gaussian elimination.
    Here are the rules that apply:

    Interchanging two rows changes the sign of the determinant.
    Multiplying a row by a scalar multiplies the determinant by that scalar.
    Replacing any row by the sum of that row and any other row does NOT change the
    determinant.

    The determinant of a triangular matrix (upper or lower triangular) is the
    product of the diagonal elements, i.e. D = t11*t22*t33:

        | t11  t12  t13 |
    D = | 0    t22  t23 | = t11 * t22 * t33
        | 0    0    t33 |
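    A sketch applying these rules in MATLAB (my illustration, not the lecture's
    code): eliminate to triangular form with row interchanges, track the sign, and
    multiply the diagonal:

        function d = detbyelim(a)
        % Determinant via forward elimination with partial pivoting
        n = size(a, 1);
        d = 1;
        for i = 1:n-1
            [~, p] = max(abs(a(i:n, i)));  p = p + i - 1;    % largest pivot candidate
            if p ~= i
                a([i p], :) = a([p i], :);   % row interchange changes the sign
                d = -d;
            end
            if a(i,i) == 0, d = 0; return; end               % singular matrix
            for j = i+1:n
                a(j,:) = a(j,:) - a(j,i)/a(i,i)*a(i,:);      % does not change the determinant
            end
        end
        d = d * prod(diag(a));    % determinant of the triangular result
        end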

  • Slide 23/36

    Techniques for Improving Solutions

    Use of more significant figures: double-precision arithmetic.

    Pivoting
    If a pivot element is zero, the normalization step leads to division by zero.
    The same problem may arise when the pivot element is close to zero. The problem
    can be avoided by:

    Partial pivoting
    Switching the rows below so that the largest element is the pivot element
    (see the sketch after this list).
    Go over the solution in: CHAP9e-Problem-11.doc

    Complete pivoting
    Searching for the largest element in all rows and columns and then switching.
    This is rarely used because switching columns changes the order of the x's and
    adds significant complexity and overhead: costly.

    Scaling
    Used to reduce round-off errors and improve accuracy.
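    A sketch of forward elimination with partial pivoting (my illustration of the
    idea, not code from the slides); compare with the naive forwardelim given
    earlier:

        function [a, b] = forwardelim_pp(a, b)
        % Forward elimination with partial pivoting: before eliminating column i,
        % switch in the row (at or below i) with the largest entry in column i.
        n = length(b);
        for i = 1:n-1
            [~, p] = max(abs(a(i:n, i)));  p = p + i - 1;
            a([i p], :) = a([p i], :);     % switch rows of the matrix
            b([i p])    = b([p i]);        % ... and of the right-hand side
            for j = i+1:n
                m = a(j,i) / a(i,i);
                a(j,i:n) = a(j,i:n) - m*a(i,i:n);
                b(j) = b(j) - m*b(i);
            end
        end
        end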

  • Slide 24/36

    (Worked example: a set of three linear equations whose coefficients involve
    logarithmic terms such as ln(D2/D1) and ln(D3/D2), conductivities k, film
    coefficients h, and temperatures T; the equations themselves did not survive
    the transcript.)

  • Slide 25/36

    Gauss-Jordan Elimination

    (Diagram: Gauss-Jordan elimination zeroes the entries both below and above each
    pivot, so the coefficient matrix is reduced all the way to a diagonal matrix
    and each unknown is read directly from the right-hand-side column; no back
    substitution is needed.)

  • Slide 26/36

    Gauss-Jordan Elimination: Example

    [  1   1   2 ] [ x1 ]   [  8 ]
    [ -1  -2   3 ] [ x2 ] = [  1 ]
    [  3   7   4 ] [ x3 ]   [ 10 ]

    Augmented Matrix:
    [  1   1   2 |   8 ]
    [ -1  -2   3 |   1 ]
    [  3   7   4 |  10 ]

    R2 <- R2 - (-1)R1
    R3 <- R3 - ( 3)R1
    [  1   1   2 |   8 ]
    [  0  -1   5 |   9 ]
    [  0   4  -2 | -14 ]

    Scaling R2:
    R2 <- R2/(-1)
    [  1   1   2 |   8 ]
    [  0   1  -5 |  -9 ]
    [  0   4  -2 | -14 ]

    R1 <- R1 - (1)R2
    R3 <- R3 - (4)R2
    [  1   0   7 |  17 ]
    [  0   1  -5 |  -9 ]
    [  0   0  18 |  22 ]

    Scaling R3:
    R3 <- R3/(18)
    [  1   0   7 |  17    ]
    [  0   1  -5 |  -9    ]
    [  0   0   1 |  1.222 ]

    R1 <- R1 - ( 7)R3
    R2 <- R2 - (-5)R3
    [  1   0   0 |  8.444 ]
    [  0   1   0 | -2.888 ]
    [  0   0   1 |  1.222 ]

    RESULT:
    x1 = 8.45, x2 = -2.89, x3 = 1.23

    Time Complexity?  O(n^3)
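    A quick MATLAB check of this example (a sketch; rref is the same built-in used
    by the lecture's own program on the last slide):

        Aug = [1 1 2 8; -1 -2 3 1; 3 7 4 10];
        rref(Aug)      % last column gives x = [8.444; -2.889; 1.222]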

  • Slide 27/36

    Example

    Solve the following system of equations:

    x1 + x2 + x3 + x4 + x5 = 2
    x1 + 2x3   x4   x5 = -1
    2x1   x2   x3 + x4   x5 = 7
    x2   x3 + 2x5 = -9
    x1 + 2x2 + x3   x4 = -2

  • Slide 28/36

  • Slide 29/36

    Assignment 04 (Tugas 04): Gauss-Jordan Elimination

    Solve the system of equations from the previous example using the Gauss-Jordan
    method.

  • Slide 30/36

    Case study

    A liquid-phase reaction converting reactant A into B is run in a batch reactor.

    The data obtained:

    T, K      k, min^-1
    300       0.017
    340       0.17
    360       0.425
    380       0.955
    400       2.1
    440       7.81
    460       13.2

    Find the reaction rate constant as a function of temperature T, assuming that
    the rate constant follows the equation:

    k = Ar * T^n * exp(-Er/T)

    Then compute the average relative error between the empirical equation and the
    laboratory data.

  • Slide 31/36

    Case study: Linearization

    Taking the logarithm of the rate equation linearizes it:

    ln k = ln Ar + n ln T - Er/T

    The linearized equation above can be treated as

    y = a + b*x1 + c*x2

    with y = ln k, x1 = ln T, x2 = 1/T.

    Fitting this form to the data gives 3 equations in 3 unknowns (a, b, c).

    The constants are then recovered from:
    a = ln Ar   =>  Ar = e^a
    b = n       =>  n  = b
    c = -Er     =>  Er = -c
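    A brief sketch of where those 3 equations come from (standard least squares;
    this matches the matrix the MATLAB program assembles on the last slide): choose
    a, b, c to minimize the sum of squared residuals
    SUM_i (y_i - a - b*x1_i - c*x2_i)^2. Setting the partial derivatives with
    respect to a, b and c to zero gives the normal equations:

    [ N        SUM(x1)     SUM(x2)    ] [ a ]   [ SUM(y)    ]
    [ SUM(x1)  SUM(x1^2)   SUM(x1*x2) ] [ b ] = [ SUM(x1*y) ]
    [ SUM(x2)  SUM(x1*x2)  SUM(x2^2)  ] [ c ]   [ SUM(x2*y) ]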

  • Slide 32/36

    Case study (cont.)

  • Slide 33/36

    Case study (cont.)

    Arranged as a matrix, with elements given by the sums over the data (the
    normal-equation matrix shown above, which the program builds as the augmented
    matrix A).

  • Slide 34/36

    Case study (cont.)

    Performing Gauss-Jordan elimination on that matrix yields a, b and c, and then:

    a = ln Ar   =>  Ar = e^a
    b = n       =>  n  = b
    c = -Er     =>  Er = -c

  • Slide 35/36

    Case study (cont.)

    Steps of the program:

    1. Input the data into matrices
    2. Fill in and assemble the initial (augmented) matrix
    3. Gauss-Jordan elimination
    4. Compute the constants Ar, n and Er
    5. Compute the relative error

    function latihan  % Gauss-Jordan elimination

  • Slide 36/36

    clear; clc;
    T = [300; 340; 360; 380; 400; 440; 460];
    kdata = [0.017; 0.17; 0.425; 0.955; 2.1; 7.81; 13.2];
    Ndata = 7;

    % Rate equation: k = Ar * T^n * exp(-Er/T)
    % Linearized as y = a + b.x1 + c.x2, i.e.  ln k = ln Ar + n.ln T - Er/T
    % where y = ln k, x1 = ln T, x2 = 1/T, and a = ln Ar, b = n, c = -Er
    y  = log(kdata);
    x1 = log(T);
    x2 = 1./T;

    % Assemble the augmented matrix of the normal equations
    A = [Ndata   sum(x1)     sum(x2)     sum(y)
         sum(x1) sum(x1.^2)  sum(x1.*x2) sum(x1.*y)
         sum(x2) sum(x1.*x2) sum(x2.^2)  sum(x2.*y)];

    GJ = rref(A)

    % Constants
    a = GJ(1,4); b = GJ(2,4); c = GJ(3,4);
    Er = -c
    Ar = exp(a)
    n = b

    % Relative error
    kcal = Ar*(T.^n).*exp(-Er./T)
    % equivalent element-by-element loop:
    % for i = 1:length(kdata)
    %     kcal(i,1) = Ar*(T(i)^n)*exp(-Er/T(i));
    % end
    error_total   = sum(abs((kdata-kcal)./kdata)*100);
    error_average = error_total/Ndata
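    As a cross-check (my addition, not on the slide), the same constants can be
    obtained without rref by solving the 3x3 normal equations with the backslash
    operator:

        abc = A(:,1:3) \ A(:,4);                     % [a; b; c]
        Ar = exp(abc(1)), n = abc(2), Er = -abc(3)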