Solution for Linear Systems

Upload: shantan02

Post on 30-May-2018


  • 8/14/2019 Solution for Linear Systems

    1/47

    UNIT-I

    SOLUTION FOR LINEAR SYSTEMS

Elementary row (and column) transformations

Rank of a matrix - Echelon form - Normal form

Solution of linear systems - Direct methods

LU decomposition

LU decomposition from Gauss elimination

Solution of tridiagonal systems

    Summary


Summary

1. Rank of a matrix: The rank of a matrix is the order r of the largest non-vanishing minor of the matrix.

2. Elementary transformations of a matrix:

a) Row transformations:
i) Interchange of the ith and jth rows: Rij
ii) Multiplication of the ith row by a non-zero scalar l: Ri(l)
iii) Addition of l times the elements of the jth row to the corresponding elements of the ith row: Rij(l)

b) Column transformations are similar to (a), replacing R by C.

3. Computation of the rank of a matrix: Method I: Echelon form: Transform the given matrix to echelon form using elementary transformations. The rank of the matrix is equal to the number of non-zero rows of the echelon form.

Method II: Canonical form or normal form: Reduce the given matrix A to one of the normal forms

[Ir 0; 0 0],  [Ir; 0],  [Ir 0],  or  Ir,

using elementary transformations. Then rank of A = r.
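Method I can be carried out mechanically. A minimal Python sketch (the function name, tolerance, and pivoting details are illustrative, not part of the notes):

```python
def matrix_rank(A, tol=1e-9):
    """Rank of A via reduction to (row) echelon form with partial pivoting."""
    M = [row[:] for row in A]          # work on a copy
    rows, cols = len(M), len(M[0])
    rank, pivot_row = 0, 0
    for col in range(cols):
        # choose the largest entry in this column as pivot (partial pivoting)
        pivot = max(range(pivot_row, rows), key=lambda r: abs(M[r][col]), default=None)
        if pivot is None or abs(M[pivot][col]) < tol:
            continue                    # no pivot in this column
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        # eliminate entries below the pivot
        for r in range(pivot_row + 1, rows):
            f = M[r][col] / M[pivot_row][col]
            for c in range(col, cols):
                M[r][c] -= f * M[pivot_row][c]
        pivot_row += 1
        rank += 1                       # one more non-zero row in echelon form
        if pivot_row == rows:
            break
    return rank

print(matrix_rank([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # prints 2
```

In the example, row 2 is twice row 1, so only two non-zero rows survive in echelon form and the rank is 2.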

4. Simultaneous linear equations - methods of solution.

1. A system of m linear equations in n unknowns can be written in matrix form as AX = B, where

A = [a11 a12 ... a1n]    X = [x1]    B = [b1]
    [a21 a22 ... a2n]        [x2]        [b2]
    [ .   .  ...  . ]        [ .]        [ .]
    [am1 am2 ... amn]        [xn]        [bm]

If all bi = 0, i.e. B = 0, the system is homogeneous; otherwise it is non-homogeneous.

2. Condition for consistency: A system of linear equations AX = B is consistent iff the rank of A is equal to the rank of the augmented matrix [A|B].


3. Solution of AX = B - working rule:

i) Find r(A) and r(A|B) by applying elementary transformations.

ii) If r(A) = r(A|B) = n (n being the number of unknowns), the system is consistent and has a unique solution (the case |A| ≠ 0).

iii) If r(A) = r(A|B) = r < n, the system is consistent and has infinitely many solutions involving n - r arbitrary constants; if r(A) ≠ r(A|B), the system is inconsistent.


7. Tridiagonal matrix:

Matrices of the type

[a11 a12  0   0 ]
[a21 a22 a23  0 ]
[ 0  a32 a33 a34]
[ 0   0  a43 a44]

are tridiagonal matrices.

8. Solution of a tridiagonal system: the procedure is like the above (direct) method.
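One standard direct method for a tridiagonal system is the Thomas algorithm, a specialization of Gauss elimination to the three-band structure; a minimal sketch with illustrative names:

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a = sub-diagonal (length n-1),
    b = main diagonal (length n), c = super-diagonal (length n-1)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0] if n > 1 else 0.0
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]      # modified pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the system with diagonals (1), (2,2,2), (1) and RHS (3,4,3)
# has the exact solution x = (1, 1, 1).
x = solve_tridiagonal([1, 1], [2, 2, 2], [1, 1], [3, 4, 3])
```

This runs in O(n) operations instead of the O(n^3) of full Gauss elimination.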

9. Homogeneous linear equations: AX = 0, where A = (aij)m×n and X = [x1, x2, ..., xn]^T. The system AX = 0 can be solved by elementary transformations. Conclusions:

i) The system AX = 0 is always consistent, since the trivial solution x1 = x2 = x3 = ... = xn = 0 always exists.

ii) If rank of (A|B) = rank of A = n (|A| ≠ 0), then the trivial solution is the only solution.

iii) If rank of (A|B) = rank of A = r < n (|A| = 0), the system has an infinite number of non-trivial solutions involving (n - r) arbitrary constants.


    UNIT-II

EIGENVALUES & EIGENVECTORS & THEIR APPLICATIONS

Eigenvalues, eigenvectors - properties

Cayley-Hamilton theorem - inverse & powers of a matrix by the Cayley-Hamilton theorem

Diagonalization of a matrix

Calculation of powers of a matrix - modal & spectral matrices

    Summary


    Summary

1. Eigenvalues & eigenvectors: Let A = (aij)n×n.

(a) The characteristic equation of A is given by |A - λI| = 0.
(b) The roots of this equation, λ1, λ2, λ3, ..., λn, are called the eigenvalues of A.

(c) A non-zero vector X = [x1, x2, x3, ..., xn]^T which satisfies the relation [A - λI]X = 0 (or AX = λX) is called the eigenvector of A corresponding to λ. Thus each eigenvalue has an eigenvector.

2. Properties of eigenvalues & eigenvectors:

1. The sum of the eigenvalues of a square matrix A is its trace, and their product is |A|.
2. The eigenvalues of A and A^T are equal.
3. If A is a non-singular matrix and λ is an eigenvalue of A, then 1/λ is an eigenvalue of A^-1.
4. If λ is an eigenvalue of A, then kλ is an eigenvalue of kA, where k is a non-zero scalar.
5. If λ is an eigenvalue of A, then λ^m is an eigenvalue of A^m, m being any positive integer.
6. The eigenvalues of a diagonal matrix are its diagonal elements.
7. If B is a non-singular matrix, and A, B are matrices of the same order, then A and B^-1AB have the same eigenvalues.
8. λ is a characteristic root of a square matrix A iff there exists a non-zero vector X such that AX = λX.
9. If X is an eigenvector of A corresponding to the eigenvalue λ, then cX is also an eigenvector of A corresponding to λ, c being a non-zero scalar.
10. If X is an eigenvector of a square matrix A, then X cannot correspond to more than one eigenvalue of A.
11. Zero is an eigenvalue of a matrix iff it is singular.
12. If λ is an eigenvalue of a non-singular matrix A, then |A|/λ is an eigenvalue of Adj A.

3. Cayley-Hamilton theorem: Every square matrix satisfies its own characteristic equation.


4. To find the inverse of a square matrix A using the Cayley-Hamilton theorem: Let A be a square matrix and let

λ^n + a1 λ^(n-1) + a2 λ^(n-2) + ... + an = 0    ...(1)

be the characteristic equation of A (the ai, i = 1 to n, are constants). Then the Cayley-Hamilton theorem gives

A^n + a1 A^(n-1) + ... + an I = 0    ...(2)

Multiplying (2) by A^-1: A^(n-1) + a1 A^(n-2) + ... + a(n-1) I + an A^-1 = 0, so that

A^-1 = (-1/an)[A^(n-1) + a1 A^(n-2) + ... + a(n-1) I]
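For a 2×2 matrix the characteristic equation is λ^2 - tr(A)λ + |A| = 0, so the formula above reduces to A^-1 = (tr(A)·I - A)/|A|. A small sketch (the function name is illustrative):

```python
def inverse_2x2_cayley_hamilton(A):
    """A^-1 from the Cayley-Hamilton theorem for a 2x2 matrix:
    A^2 - tr(A) A + det(A) I = 0  =>  A^-1 = (tr(A) I - A) / det(A)."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    if det == 0:
        raise ValueError("singular matrix has no inverse")
    # (tr(A) I - A) / det(A), written entrywise
    return [[(tr - a) / det, -b / det],
            [-c / det, (tr - d) / det]]

Ainv = inverse_2x2_cayley_hamilton([[4, 7], [2, 6]])
# det = 10, so Ainv = [[0.6, -0.7], [-0.2, 0.4]]
```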

5. To find positive integral powers of A using the Cayley-Hamilton theorem: Let m ≥ n be a positive integer. Multiplying (2) by A^(m-n) gives A^m + a1 A^(m-1) + ... + an A^(m-n) = 0, from which we can find A^m in terms of lower-order powers of A.

6. Diagonalization of a square matrix: Let A be a square matrix of order n having n linearly independent eigenvectors. Then there exists a non-singular matrix P such that P^-1AP = D is a diagonal matrix, with D = Diag[λ1, λ2, ..., λn].

7. Working rule to diagonalize A = (aij)n×n:

Step 1: Find the eigenvalues λi (i = 1, 2, ..., n) of A.
Step 2: Find eigenvectors Xi corresponding to the λi (i = 1, 2, ..., n, assumed distinct).
Step 3: Form the matrix P = [X1 X2 X3 ... Xn], whose column vectors Xi are the eigenvectors of the λi. (The matrix P is known as the modal matrix of A.)
Step 4: Find D = P^-1AP = Diag[λ1, λ2, ..., λn]. This is the diagonalization of A. The matrix D is known as the spectral matrix of A.

Computation of positive powers of A: If m is a positive integer, then

A^m = (PDP^-1)^m = P D^m P^-1
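The rule A^m = P D^m P^-1 can be checked concretely. In the sketch below, A = [[2, 0], [1, 3]] is an illustrative example whose eigenvalues 2 and 3 and eigenvectors (1, -1) and (0, 1) were worked out by hand; only the 2×2 case is handled:

```python
def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 0], [1, 3]]
P = [[1, 0], [-1, 1]]         # modal matrix (eigenvectors as columns)
P_inv = [[1, 0], [1, 1]]      # P^-1, computed by hand
m = 5
D_m = [[2 ** m, 0], [0, 3 ** m]]        # D^m = Diag[lambda_i^m]
A_m = mat_mul(mat_mul(P, D_m), P_inv)   # A^m = P D^m P^-1
```

Multiplying A by itself four more times gives the same matrix, confirming A^5 = P D^5 P^-1.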


    UNIT-III

    LINEAR TRANSFORMATIONS

Real Matrices - Symmetric, Skew-Symmetric, Orthogonal

Linear Transformations - Orthogonal Transformation

Complex Matrices - Hermitian, Skew-Hermitian and Unitary

Eigenvalues and Eigenvectors of Complex Matrices and Their Properties

Quadratic Forms - Reduction to Canonical Form

Rank - Positive, Negative Definite; Semi-definite; Index, Signature; Sylvester's Law

Summary


Summary

1. Definitions and properties of some real and complex matrices are given in the following pages.

2. Properties of eigenvalues of real and complex matrices:

1. If λ is a characteristic root of an orthogonal matrix, then 1/λ is also a characteristic root.
2. The eigenvalues of an orthogonal matrix are of unit modulus.
3. The eigenvalues of a Hermitian matrix are all real.
4. The eigenvalues of a real symmetric matrix are all real.
5. The eigenvalues of a skew-Hermitian matrix are either purely imaginary or zero.
6. The eigenvalues of a real skew-symmetric matrix are purely imaginary or zero.
7. The eigenvalues of a unitary matrix are of unit modulus.
8. If A is a nilpotent matrix, then 0 is the only eigenvalue of A.
9. If A is an involutory matrix, its possible eigenvalues are 1 and -1.
10. If A is an idempotent matrix, its possible eigenvalues are 0 and 1.

3. Transformations:

(a) The transformation X = AY, where A = (aij)n×n, X = [x1 x2 ... xn]^T, and Y = [y1 y2 ... yn]^T, transforms the vector Y to the vector X over the matrix A. The transformation is linear.

(b) Non-singular transformation:
(i) If A is non-singular (|A| ≠ 0), then Y = AX is a non-singular transformation.
(ii) Then X = A^-1Y is the inverse transformation of Y = AX.

(c) Orthogonal transformation: If A is an orthogonal matrix, then Y = AX is an orthogonal transformation. A orthogonal means A^T = A^-1, so Y^T Y = X^T X, i.e. Y = AX transforms (x1^2 + x2^2 + ... + xn^2) to (y1^2 + y2^2 + ... + yn^2).
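The norm-preserving property of an orthogonal transformation is easy to verify numerically; a minimal sketch using a rotation matrix (an illustrative orthogonal A):

```python
import math

def rotation(theta):
    """2x2 rotation matrix - a classic orthogonal matrix (A^T = A^-1)."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def apply(A, x):
    """Y = AX for a square matrix A and vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = rotation(0.7)
x = [3.0, 4.0]
y = apply(A, x)
# Y = AX leaves x1^2 + x2^2 unchanged: both sums equal 25
norm_x = sum(v * v for v in x)
norm_y = sum(v * v for v in y)
```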


4. Quadratic forms: A homogeneous polynomial of 2nd degree in n variables x1, x2, ..., xn is called a quadratic form.

Thus q = Σ aij xi xj (i, j from 1 to n), or

q = a11 x1^2 + a22 x2^2 + ... + ann xn^2 + (a12 + a21) x1 x2 + (a13 + a31) x1 x3 + ...,

is a quadratic form in n variables x1, x2, ..., xn.

5. Matrix of a quadratic form q: If A is a symmetric matrix, q = X^T A X is the matrix representation of q, and A is the matrix of q, where (aij + aji) = 2aij is the coefficient of xi xj.

[i.e. aij = aji = 1/2 × coefficient of xi xj]

Then q = X^T A X = [x1 x2 ... xn] A [x1 x2 ... xn]^T.

6. Rank of a quadratic form: If q = X^T A X, then the rank of A is the rank of the quadratic form q.

(a) If rank of A = r = n, q is a non-singular form.
(b) If r < n, q is singular.

7. Canonical form or normal form of q: A real quadratic form q in which the product terms are missing (i.e. all terms are square terms only) is called the canonical form of q, i.e. q = a1 x1^2 + a2 x2^2 + ... + an xn^2 is a canonical form.

8. Reduction to canonical form: If D = Diag[d1, d2, ..., dr] is the diagonalization of A, then q1 = d1 x1^2 + d2 x2^2 + ... + dr xr^2 (where r = rank of A) is the canonical form of q = X^T A X.

9. Nature of a quadratic form:

1. If q = X^T A X is the given quadratic form (in n variables) of rank r, then q1 = d1 x1^2 + d2 x2^2 + ... + dr xr^2 is the canonical form of q (each di is +ve, -ve, or zero).

(a) Index: The number of +ve terms in q1 is called the index s of the quadratic form q.
(b) The number of non-+ve terms = r - s.
(c) Signature = s - (r - s) = 2s - r.


2. The quadratic form q is said to be

(a) +ve definite if r = n and s = n
(b) -ve definite if r = n and s = 0
(c) +ve semi-definite if r < n and s = r
(d) -ve semi-definite if r < n and s = 0


10. Methods of reduction of a quadratic form to the canonical form:

(a) Lagrange's method: A quadratic form can be reduced by this method to a canonical form by completion of squares.

(b) Diagonalization method: Write A = I3 A I3 (if A = (aij)3×3). Apply elementary row transformations on the L.H.S. and on the prefactor of the R.H.S.; apply the corresponding column transformations on the L.H.S. as well as the post-factor of the R.H.S. Continue this process till the equation is reduced to the form D = P^T A P, where D is a diagonal matrix:

D = [d1  0  0]
    [ 0 d2  0]
    [ 0  0 d3]

Then the canonical form is q1 = Y^T (P^T A P) Y = Diag(d1, d2, d3), where Y = [y1 y2 y3]^T; i.e., if q = X^T A X with X = [x1 x2 x3]^T, then q1 = d1 y1^2 + d2 y2^2 + d3 y3^2.

Here X = PY is the corresponding transformation.

(c) Orthogonal reduction of q = X^T A X:

(i) Find the eigenvalues λi and corresponding eigenvectors Xi (i = 1, 2, ..., n) of A.
(ii) Form the modal matrix B = [X1 X2 ... Xn].
(iii) Normalize each column vector Xi of B by dividing it by its magnitude, and write the normalized modal matrix P, which is orthogonal (i.e. P^T = P^-1).
(iv) Then X = PY reduces q to q1, where

q1 = λ1 y1^2 + λ2 y2^2 + ... + λn yn^2 = Y^T (P^T A P) Y

(X = PY is known as an orthogonal transformation.)

11. Sylvester's law of inertia: The signature of a real quadratic form is invariant under all normal reductions.


Symmetric matrix

In linear algebra, a symmetric matrix is a square matrix, A, that is equal to its transpose: A = A^T.

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to bottom right). So if the entries are written as A = (aij), then aij = aji for all indices i and j. For example, the following 3×3 matrix is symmetric:

[1 7 3]
[7 4 5]
[3 5 6]

A matrix is called skew-symmetric or antisymmetric if its transpose is the same as its negative.

Skew-symmetric matrix

In linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix A whose transpose is also its negative; that is, it satisfies the equation A^T = -A, or in component form, aij = -aji for all i and j. For example, the following matrix is skew-symmetric:

[ 0  2 -1]
[-2  0 -4]
[ 1  4  0]

Compare this with a symmetric matrix, whose transpose is the same as the matrix (A^T = A), or an orthogonal matrix, the transpose of which is equal to its inverse (A^T = A^-1).


The following matrix is neither symmetric nor skew-symmetric:

[1 2]
[3 4]

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

Orthogonal matrix

In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns (and rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns are unit vectors in addition to being orthogonal, some people use the term orthonormal to describe such matrices.

Equivalently, a matrix Q is orthogonal if its transpose is equal to its inverse: Q^T = Q^-1; alternatively, Q^T Q = Q Q^T = I.

(OR)

Definition: An n×n matrix A is called an orthogonal matrix whenever A^T A = I.

EXAMPLES (the identity, a reflection, and a rotation are all orthogonal):

[1 0]    [1  0]    [cos θ  -sin θ]
[0 1]    [0 -1]    [sin θ   cos θ]


    Conjugate transpose

    "Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint matrix".

In mathematics, the conjugate transpose, Hermitian transpose, or adjoint matrix of an m-by-n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking the transpose and then taking the complex conjugate of each entry (i.e. negating their imaginary parts but not their real parts). The conjugate transpose is formally defined by

(A*)ij = conj(Aji)

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and conj denotes scalar complex conjugation. (The complex conjugate of a + bi, where a and b are reals, is a - bi.)

This definition can also be written as

A* = (conj(A))^T = conj(A^T)

where ^T denotes the transpose and conj(A) denotes the matrix with complex conjugated entries.

Other names for the conjugate transpose of a matrix are Hermitian conjugate or transjugate. The conjugate transpose of a matrix A can be denoted by any of these symbols:

A* or A^H, commonly used in linear algebra

A† (sometimes pronounced "A dagger"), universally used in quantum mechanics

A^+, although this symbol is more commonly used for the Moore-Penrose pseudoinverse

In some contexts, conj(A) denotes the matrix with only its entries complex conjugated, and the conjugate transpose is then denoted by conj(A)^T or conj(A^T).

EXAMPLE: If

A = [1+i  2-i]
    [ 3    i ]

then

A* = [1-i   3]
     [2+i  -i]


    Hermitian matrix

A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is equal to its own conjugate transpose; that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

aij = conj(aji)

If the conjugate transpose of a matrix A is denoted by A*, then the Hermitian property can be written concisely as

A = A*

Hermitian matrices can be understood as the complex extension of a real symmetric matrix. For example,

[ 3   2+i]
[2-i   1 ]

is a Hermitian matrix.

    Skew-Hermitian matrix

In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is equal to its negative. That is, the matrix A is skew-Hermitian if it satisfies the relation

A* = -A

where A* denotes the conjugate transpose of the matrix. In component form, this means

conj(aij) = -aji

for all i and j, where aij is the i,j-th entry of A and conj denotes complex conjugation.

Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers.

Unitary matrix

In mathematics, a unitary matrix is an n-by-n complex matrix U satisfying the condition

U* U = U U* = I

where I is the identity matrix in n dimensions and U* is the conjugate transpose (also called the Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an inverse which is equal to its conjugate transpose.

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix G preserves the (real) inner product of two real vectors, ⟨Gx, Gy⟩ = ⟨x, y⟩, so also a unitary matrix U satisfies

⟨Ux, Uy⟩ = ⟨x, y⟩

for all complex vectors x and y, where ⟨·,·⟩ now stands for the standard inner product on C^n.


    UNIT-IV

    SOLUTIONS OF NON-LINEAR SYSTEMS

Solution of Algebraic and Transcendental Equations

1. Bisection Method
2. Method of False Position
3. The Iteration Method
4. Newton-Raphson Method

    Interpolation

- Finite Differences
- Forward Differences
- Backward Differences
- Central Differences

Newton's Forward Interpolation Formula

Newton's Backward Interpolation Formula

Gauss Forward Interpolation Formula

Gauss Backward Interpolation Formula

Lagrange's Interpolation Formula

    Spline Interpolation and Cubic Splines

    Summary


Summary

Solution of algebraic and transcendental equations

1. Numerical methods to find the roots of f(x) = 0:

(i) Bisection method: If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite sign, then there exists at least one root between a and b. The approximate value of the root between them is x0 = (a + b)/2.

If f(x0) = 0, then x0 is the correct root of f(x) = 0. If f(x0) ≠ 0, then the root lies either in [a, (a+b)/2] or in [(a+b)/2, b], depending on whether f(x0) is negative or positive. Again bisect the interval and repeat the same method until the root is obtained to the desired accuracy.
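The bisection procedure can be sketched as follows (names and tolerances are illustrative):

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: f continuous on [a, b] with f(a), f(b) of opposite sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    for _ in range(max_iter):
        x0 = (a + b) / 2
        fx = f(x0)
        if fx == 0 or (b - a) / 2 < tol:
            return x0
        if fa * fx < 0:
            b = x0             # root lies in [a, x0]
        else:
            a, fa = x0, fx     # root lies in [x0, b]
    return (a + b) / 2

root = bisection(lambda t: t * t - 2, 1.0, 2.0)   # approximates sqrt(2)
```

Each step halves the interval, so about 34 steps already reach a tolerance of 1e-10.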

(ii) Method of false position (regula falsi method):

This is another method to find the root of f(x) = 0. In this method, we choose two points a and b and take the point of intersection of the chord joining (a, f(a)) and (b, f(b)) with the x-axis as an approximate root (using y = 0 on the x-axis):

x1 = [a f(b) - b f(a)] / [f(b) - f(a)]

Repeat the same process till the root is obtained to the desired accuracy.
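A sketch of the false-position iteration (illustrative names, stopping on successive approximations):

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: replace the bisection midpoint by the chord's x-intercept."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    x1 = a
    for _ in range(max_iter):
        x_old = x1
        x1 = (a * fb - b * fa) / (fb - fa)   # chord meets the x-axis here
        fx = f(x1)
        if fx == 0 or abs(x1 - x_old) < tol:
            return x1
        if fa * fx < 0:
            b, fb = x1, fx     # root between a and x1
        else:
            a, fa = x1, fx     # root between x1 and b
    return x1

root = false_position(lambda t: t ** 3 - t - 2, 1.0, 2.0)
```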

(iii) Iteration method: If a function f(x) is continuous between a and b, and f(a) and f(b) are of opposite sign, then there exists at least one root between a and b, with first approximation x0 = (a + b)/2. We can use this method if f(x) = 0 can be expressed as x = φ(x) such that |φ'(x)| < 1 near x0. The successive approximate roots are then given by

xn = φ(x(n-1)),  n = 1, 2, ...
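A sketch of the iteration method, assuming f(x) = 0 has already been rewritten as x = φ(x) with |φ'(x)| < 1 near the root:

```python
import math

def fixed_point(phi, x0, tol=1e-10, max_iter=200):
    """Successive approximation: x_n = phi(x_{n-1})."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: x = cos(x); here |phi'(x)| = |sin(x)| < 1 near the root.
root = fixed_point(math.cos, 0.5)
```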


(iv) Newton-Raphson method: The successive approximate roots are given by

x(n+1) = xn - f(xn)/f'(xn),  n = 0, 1, 2, ...

provided that the initial approximate root x0 is chosen sufficiently close to a root of f(x) = 0.
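A sketch of the Newton-Raphson iteration (the derivative f' must be supplied; names are illustrative):

```python
def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=100):
    """x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / fprime(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2), starting near the root.
root = newton_raphson(lambda t: t * t - 2, lambda t: 2 * t, 1.5)
```

Convergence is quadratic once x0 is close enough, so only a handful of iterations are needed here.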

2. Interpolation

(i) Newton's forward interpolation formula: Let y = f(x) be the function which takes the values y0, y1, y2, ..., yn corresponding to the equally spaced values x0, x1, x2, ..., xn of x, with h as the interval length between two consecutive points.

Newton's forward interpolation formula is

f(x0 + ph) = yp = y0 + pΔy0 + [p(p-1)/2!]Δ^2 y0 + [p(p-1)(p-2)/3!]Δ^3 y0 + ... + [p(p-1)(p-2)...(p-n+1)/n!]Δ^n y0

where x = x0 + ph, i.e. p = (x - x0)/h. This is also called the Newton-Gregory forward interpolation formula.
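The formula can be evaluated by first building the forward-difference table; a minimal sketch for equally spaced data:

```python
from math import factorial

def newton_forward(xs, ys, x):
    """Newton-Gregory forward interpolation for equally spaced xs."""
    n = len(ys)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    # leading entries of the forward-difference table: diffs[k] = Delta^k y0
    table = list(ys)
    diffs = [table[0]]
    for _ in range(1, n):
        table = [table[i + 1] - table[i] for i in range(len(table) - 1)]
        diffs.append(table[0])
    # accumulate y0 + p*Dy0 + p(p-1)/2! * D^2 y0 + ...
    result, coeff = 0.0, 1.0
    for k in range(n):
        result += coeff * diffs[k] / factorial(k)
        coeff *= (p - k)
    return result

# Data sampled from y = x^2, so interpolation is exact.
value = newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5)   # 2.25
```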

(ii) Newton's backward interpolation formula:

yp = yn + p∇yn + [p(p+1)/2!]∇^2 yn + [p(p+1)(p+2)/3!]∇^3 yn + ...

where p = (x - xn)/h.

(iii) Gauss forward interpolation formula: Using central differences, with δ as the operator, the Gauss forward interpolation formula is

yp = y0 + p δy(1/2) + [p(p-1)/2!]δ^2 y0 + [(p+1)p(p-1)/3!]δ^3 y(1/2) + [(p+1)p(p-1)(p-2)/4!]δ^4 y0 + ...

where p = (x - x0)/h.

(iv) Gauss backward interpolation formula:

yp = y0 + pΔy-1 + [(p+1)p/2!]Δ^2 y-1 + [(p+1)p(p-1)/3!]Δ^3 y-2 + [(p+2)(p+1)p(p-1)/4!]Δ^4 y-2 + ...


(v) Lagrange's interpolation formula:

Let y0, y1, y2, ..., yn be the values of y = f(x) corresponding to x0, x1, x2, ..., xn (not necessarily equispaced).

Lagrange's interpolation formula is

y = f(x) = [(x-x1)(x-x2)...(x-xn)] / [(x0-x1)(x0-x2)...(x0-xn)] · y0
         + [(x-x0)(x-x2)(x-x3)...(x-xn)] / [(x1-x0)(x1-x2)(x1-x3)...(x1-xn)] · y1
         + ...
         + [(x-x0)(x-x1)...(x-x(n-1))] / [(xn-x0)(xn-x1)...(xn-x(n-1))] · yn

3. Spline interpolation and cubic splines

Let the given interval [a, b] be subdivided into n subintervals [x0, x1], [x1, x2], ..., [x(n-1), xn], where a = x0 < x1 < ... < xn = b.


    OR

1) Bisection or Bolzano's method:
i) If f(a) = +ve and f(b) = -ve, then a root c ∈ (a, b).
ii) Let x0 = (a+b)/2. If f(x0) = 0, x0 is the root.
If f(x0) = +ve, the root lies between a and x0.
If f(x0) = -ve, the root lies between x0 and b.
iii) If f(x0) = +ve, the second approximation is x1 = (a+x0)/2; if f(x0) = -ve, then x1 = (x0+b)/2. Continue till we get repeated end values.

2) Regula falsi method or false position method:
i) If f(a) = +ve and f(b) = -ve, then a root c ∈ (a, b).
ii) Let x1 = [a f(b) - b f(a)] / [f(b) - f(a)].
a) If f(x1) and f(a) are of opposite sign, then x2 = [a f(x1) - x1 f(a)] / [f(x1) - f(a)].
b) If f(x1) and f(a) are of the same sign, then x2 = [x1 f(b) - b f(x1)] / [f(b) - f(x1)].
Repeat up to the accurate root, i.e. repeated end values.

3) Iteration or successive approximation method:
i) If f(a) = +ve and f(b) = -ve, then a root c ∈ (a, b).
ii) Write f(x) = 0 as x = φ(x) with |φ'(x)| < 1. Let x0 = (a+b)/2; then
x1 = φ(x0), x2 = φ(x1), x3 = φ(x2),
and so on up to the accurate root, i.e. repeated end values.

4) Newton-Raphson method or tangent method:
i) If f(a) = +ve and f(b) = -ve, then a root c ∈ (a, b).
ii) Let x0 = (a+b)/2; then x1 = x0 - f(x0)/f'(x0)


x2 = x1 - f(x1)/f'(x1)

and so on up to the accurate root, i.e. repeated end values.

FINITE DIFFERENCES:

1. Forward difference operator: Δf(x) = f(x+h) - f(x); Δy0 = y1 - y0

2. Backward difference operator: ∇f(x) = f(x) - f(x-h); ∇y1 = y1 - y0

3. Central difference operator: δf(x) = f(x + h/2) - f(x - h/2); δy(1/2) = y1 - y0

4. Shift operator: E f(x) = f(x+h); E yx = y(x+h)

5. Inverse shift operator: E^-1 f(x) = f(x-h)

6. Averaging/mean operator: μf(x) = [f(x + h/2) + f(x - h/2)]/2; μyx = 1/2 [y(x+h/2) + y(x-h/2)]

7. E = 1 + Δ

8. μ = 1/2 [E^(1/2) + E^(-1/2)]

9. δ = E^(1/2) - E^(-1/2)

10. Δ = E∇ = ∇E = δE^(1/2)


11. δ^2 = Δ∇ = ∇Δ

12. 1 = (1 + Δ)(1 - ∇)

    (INTERPOLATION WITH EQUAL & UNEQUAL INTERVALS)

I. 1. Newton-Gregory forward interpolation formula:

y = f(x) = y0 + pΔy0 + [p(p-1)/2!]Δ^2 y0 + [p(p-1)(p-2)/3!]Δ^3 y0 + ...,  where p = (x - x0)/h

2. Newton-Gregory backward interpolation formula:

y = f(x) = yn + p∇yn + [p(p+1)/2!]∇^2 yn + [p(p+1)(p+2)/3!]∇^3 yn + ...,  where p = (x - xn)/h

II. Central difference interpolation formulas:

1. Gauss forward:

yp = y0 + pΔy0 + [p(p-1)/2!]Δ^2 y-1 + [(p+1)p(p-1)/3!]Δ^3 y-1 + ...,  where p = (x - x0)/h

(The formula follows the forward zig-zag path of the difference table, using Δy0, Δ^2 y-1, Δ^3 y-1, Δ^4 y-2, ...)

2. Gauss backward:

yp = y0 + pΔy-1 + [(p+1)p/2!]Δ^2 y-1 + [(p+1)p(p-1)/3!]Δ^3 y-2 + [(p+2)(p+1)p(p-1)/4!]Δ^4 y-2 + ...,  where p = (x - x0)/h

(This formula follows the backward zig-zag path, using Δy-1, Δ^2 y-1, Δ^3 y-2, Δ^4 y-2, ...)

3. Stirling's:

yp = y0 + p[Δy0 + Δy-1]/2 + [p^2/2!]Δ^2 y-1 + [p(p^2-1)/3!]·[Δ^3 y-1 + Δ^3 y-2]/2 + [p^2(p^2-1)/4!]Δ^4 y-2 + ...



4. Lagrange's interpolation (unequal intervals):

y = f(x) = [(x-x1)(x-x2)...(x-xn)] / [(x0-x1)(x0-x2)...(x0-xn)] f(x0)
         + [(x-x0)(x-x2)...(x-xn)] / [(x1-x0)(x1-x2)...(x1-xn)] f(x1)
         + [(x-x0)(x-x1)(x-x3)...(x-xn)] / [(x2-x0)(x2-x1)...(x2-xn)] f(x2)
         + ... + [(x-x0)(x-x1)...(x-x(n-1))] / [(xn-x0)(xn-x1)...(xn-x(n-1))] f(xn)
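Lagrange's formula translates directly into code; a minimal sketch for arbitrarily spaced points:

```python
def lagrange(xs, ys, x):
    """Lagrange interpolation at x for arbitrarily spaced data (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
        total += term
    return total

# Data sampled from y = 2x^2 + 1 at unequally spaced points;
# a quadratic through 3 points is reproduced exactly.
value = lagrange([0, 1, 3], [1, 3, 19], 2.0)   # 9.0
```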

    UNIT-V

    Curve Fitting & Numerical Integration

    Curve Fitting

    1. Fitting a straight line

    2. Fitting Quadratic Polynomial or parabola

Numerical Differentiation

Numerical Integration

Trapezoidal Rule

Simpson's 1/3 Rule and Simpson's 3/8 Rule

    Gaussian Integration

    Summary


    Summary

    1. Curve Fitting

(i) Fitting a straight line

Let y(x) = ax + b be the straight-line approximation for the data.

The normal equations are

a Σxi^2 + b Σxi = Σxi yi
a Σxi + nb = Σyi        (sums for i = 1 to n)

Solving the above equations, we get a and b.
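The two normal equations form a 2×2 linear system with a closed-form solution; a minimal sketch (Cramer's rule on the normal equations, names illustrative):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b via the normal equations:
       a*Sum(x^2) + b*Sum(x) = Sum(x*y)
       a*Sum(x)   + b*n      = Sum(y)"""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = sxx * n - sx * sx            # determinant of the 2x2 system
    a = (sxy * n - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

# Data lying exactly on y = 2x + 1 recovers a = 2, b = 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```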

(ii) Fitting a quadratic polynomial or parabola

Let y(x) = ax^2 + bx + c be the quadratic polynomial.

The normal equations are

a Σxi^4 + b Σxi^3 + c Σxi^2 = Σxi^2 yi

a Σxi^3 + b Σxi^2 + c Σxi = Σxi yi


a Σxi^2 + b Σxi + nc = Σyi        (sums for i = 1 to n)

Solving the above equations, we get the values of a, b, c.

2. Numerical differentiation

Derivatives using Newton's forward difference interpolation formula:

(i) [dy/dx] at x = x0: (1/h)[Δy0 - (1/2)Δ^2 y0 + (1/3)Δ^3 y0 - (1/4)Δ^4 y0 + ...]

(ii) [d^2y/dx^2] at x = x0: (1/h^2)[Δ^2 y0 - Δ^3 y0 + (11/12)Δ^4 y0 + ...]

3. Derivatives using Newton's backward interpolation formula:

(i) [dy/dx] at x = xn: (1/h)[∇yn + (1/2)∇^2 yn + (1/3)∇^3 yn + ...]

(ii) [d^2y/dx^2] at x = xn: (1/h^2)[∇^2 yn + ∇^3 yn + (11/12)∇^4 yn + (5/6)∇^5 yn + ...]

4. Trapezoidal rule

The integral I = ∫ f(x) dx (limits a to b) is approximated by

I = (h/2)[(y0 + yn) + 2(y1 + y2 + ... + y(n-1))]

where y0, y1, ..., yn, i.e. yi = f(xi), are the values corresponding to the arguments x0 = a, x1 = x0 + h, ..., xn = x0 + nh = b.
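A sketch of the composite trapezoidal rule (names illustrative):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of length h = (b-a)/n."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    # (h/2) * [(y0 + yn) + 2*(y1 + ... + y_{n-1})]
    return (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))

# Exact for straight lines; for curved integrands the error falls as h^2.
area = trapezoidal(lambda t: t, 0.0, 1.0, 4)   # 0.5
```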

5. Simpson's 1/3 rule

The integral I = ∫ f(x) dx = ∫ y dx (limits a to b) is approximated by


I = (h/3)[(y0 + yn) + 4(y1 + y3 + ... + y(n-1)) + 2(y2 + y4 + ... + y(n-2))]

This rule can be applied when the given interval (a, b) is divided into an even number of subintervals of length h.
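A sketch of Simpson's 1/3 rule, which requires an even number of subintervals:

```python
def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n (number of subintervals) must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    # (h/3) * [(y0 + yn) + 4*(odd-index terms) + 2*(even-index interior terms)]
    return (h / 3) * (ys[0] + ys[-1]
                      + 4 * sum(ys[1:-1:2])
                      + 2 * sum(ys[2:-1:2]))

# Simpson's rule is exact for polynomials up to degree 3.
area = simpson_13(lambda t: t ** 3, 0.0, 1.0, 2)   # 0.25
```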

6. Gaussian Integration

The definite integral I = ∫ f(x) dx from a to b is expressed as

I = w1 f(x1) + w2 f(x2) + … + wn f(xn) = Σ wi f(xi), i from 1 to n

which is called the Gaussian integration formula, where the wi are called weights and the xi are called abscissae. The weights and abscissae are symmetric with respect to the midpoint of the interval.
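As a concrete instance, the two-point Gauss-Legendre rule uses abscissae ±1/√3 with weights 1 on (−1, 1), mapped to (a, b); this is a sketch, with an integrand of our choosing (the rule is exact for polynomials up to degree 3):

```python
import math

def gauss2(f, a, b):
    # Two-point Gauss-Legendre: nodes ±1/√3, weights 1 on (-1, 1),
    # shifted and scaled to (a, b).
    mid, half = (a + b) / 2, (b - a) / 2
    t = 1 / math.sqrt(3)
    return half * (f(mid - half * t) + f(mid + half * t))

# ∫ (x³ + x) dx from 0 to 1 = 3/4, reproduced exactly by two evaluations.
approx = gauss2(lambda x: x ** 3 + x, 0.0, 1.0)
```

Note the symmetry of nodes and weights about the midpoint, as stated above.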

    OR

    (PART-A) (CURVE FITTING)

1. Fitting of a Straight Line (y = a + bx):

Σy = na + b Σx

Σxy = a Σx + b Σx²

2. i) Parabola (y = a + bx + cx²)

Σy = na + b Σx + c Σx²


Σxy = a Σx + b Σx² + c Σx³

Σx²y = a Σx² + b Σx³ + c Σx⁴

ii) Parabola (y = a + bx²)

Σy = na + b Σx²

Σx²y = a Σx² + b Σx⁴

4. y = a·e^(bx): taking logarithms (base 10), log y = log a + bx log e, i.e. Y = A + Bx

where Y = log y, A = log a, B = b log e

5. y = a·b^x: log y = log a + x log b, i.e. Y = A + Bx

where Y = log y, A = log a, B = log b

6. y = a·x^b: log y = log a + b log x, i.e. Y = A + bX

where X = log x, Y = log y, A = log a

Weighted least-squares approximation:

1. Straight line (y = a0 + a1x):

ΣWy = a0 ΣW + a1 ΣWx

ΣWxy = a0 ΣWx + a1 ΣWx²

2. Parabola (y = a0 + a1x + a2x²):

ΣWy = a0 ΣW + a1 ΣWx + a2 ΣWx²

ΣWxy = a0 ΣWx + a1 ΣWx² + a2 ΣWx³

ΣWx²y = a0 ΣWx² + a1 ΣWx³ + a2 ΣWx⁴
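The weighted straight-line normal equations can be sketched the same way as the unweighted case (function name and data are ours; with all weights equal it reduces to ordinary least squares):

```python
def weighted_fit_line(xs, ys, ws):
    # Weighted normal equations for y = a0 + a1*x:
    #   a0*ΣW  + a1*ΣWx  = ΣWy
    #   a0*ΣWx + a1*ΣWx² = ΣWxy
    sw = sum(ws)
    swx = sum(w * x for w, x in zip(ws, xs))
    swxx = sum(w * x * x for w, x in zip(ws, xs))
    swy = sum(w * y for w, y in zip(ws, ys))
    swxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = sw * swxx - swx * swx
    a0 = (swy * swxx - swxy * swx) / det
    a1 = (sw * swxy - swx * swy) / det
    return a0, a1

# Equal weights on data from y = 1 + 2x recover a0 = 1, a1 = 2.
a0, a1 = weighted_fit_line([0, 1, 2], [1, 3, 5], [1, 1, 1])
```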

(NUMERICAL DIFFERENTIATION)

1. Newton forward: derivatives are obtained from the forward-difference formulae given in the summary above.

(NUMERICAL INTEGRATION)

1. Trapezoidal Rule (n=1):

∫ y dx = (h/2)[(y0 + yn) + 2(y1 + y2 + … + yn−1)]
Note: the number of subintervals may be odd or even.

2. Simpson's 1/3 Rule (n=2):

∫ y dx = (h/3)[(y0 + yn) + 4(y1 + y3 + y5 + …) + 2(y2 + y4 + y6 + …)]
Note: the number of subintervals should be even.

3. Simpson's 3/8 Rule (n=3):

∫ y dx = (3h/8)[(y0 + yn) + 3(y1 + y2 + y4 + y5 + y7 + y8 + …) + 2(y3 + y6 + y9 + …)]
Note: the number of subintervals should be a multiple of 3.

4. Boole's Rule (n=4):

∫ y dx = (2h/45)[7y0 + 32y1 + 12y2 + 32y3 + 14y4 + 32y5 + 12y6 + …]
Note: the number of subintervals should be a multiple of 4.

5. Weddle's Rule (n=6):

∫ y dx = (3h/10)[(y0 + yn) + (y2 + y4 + y8 + y10 + y14 + … + yn−4 + yn−2) + 5(y1 + y5 + y7 + y11 + … + yn−5 + yn−1) + 6(y3 + y9 + y15 + … + yn−3) + 2(y6 + y12 + … + yn−6)]

Note: the number of subintervals should be a multiple of 6.
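A single panel of Boole's rule (n = 4) has end weight 7 at both ends; the 14y4 in the composite formula above comes from the 7 + 7 at a panel junction. A sketch with an integrand of our choosing (the rule is exact for polynomials up to degree 5):

```python
def boole(f, a, b):
    # One panel of Boole's rule over [a, b] with 4 subintervals:
    # I ≈ (2h/45)[7y0 + 32y1 + 12y2 + 32y3 + 7y4]
    h = (b - a) / 4
    ys = [f(a + i * h) for i in range(5)]
    return (2 * h / 45) * (7 * ys[0] + 32 * ys[1] + 12 * ys[2]
                           + 32 * ys[3] + 7 * ys[4])

# ∫ x⁴ dx from 0 to 1 = 1/5, reproduced exactly.
approx = boole(lambda x: x ** 4, 0.0, 1.0)
```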


    UNIT-VI

Numerical Solutions of Initial Value Problems in Ordinary Differential Equations

    Numerical Solution of Ordinary Differential equations

Taylor's series method

Picard's method

Euler's method

Modified Euler's method

Runge-Kutta method


Predictor-corrector method

Adams-Bashforth method

    Summary

    Summary

The most important methods of solving ordinary differential equations numerically are

1. Taylor's series method

2. Picard's method

3. Euler's modified method

4. Runge-Kutta method

5. Predictor-corrector method

1. Taylor's series method: The numerical solution of the differential equation

dy/dx = f(x, y) with the given initial condition y(x0) = y0 is

yn+1 = yn + (h/1!)yn′ + (h²/2!)yn″ + (h³/3!)yn‴ + …
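The successive derivatives in the Taylor step are problem-specific; as an illustration (the IVP y′ = x + y, y(0) = 1 is our choice, for which y″ = 1 + y′ and all higher derivatives repeat):

```python
from math import factorial

def taylor_step(x0, y0, h, terms=6):
    # Taylor-series step for the specific IVP y' = x + y:
    # y'' = 1 + y', and y^(k) = y^(k-1) for k >= 3.
    d = [x0 + y0]            # y'
    d.append(1 + d[0])       # y''
    for _ in range(terms - 2):
        d.append(d[-1])      # y''', y'''', ... all equal y''
    return y0 + sum(h ** (k + 1) / factorial(k + 1) * dk
                    for k, dk in enumerate(d))

# The exact solution is y = 2e^x - x - 1, so y(0.1) = 2e^0.1 - 1.1.
y_approx = taylor_step(0.0, 1.0, 0.1)
```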


2. Picard's method

To solve the differential equation dy/dx = f(x, y), y(x0) = y0, Picard's method of successive approximations uses

y = y0 + ∫ f(x, y) dx, taken from x0 to x, which is called an integral equation.

It is solved by a process of successive approximations y⁽¹⁾(x), y⁽²⁾(x), …

The first approximation: y⁽¹⁾(x) = y0 + ∫ f(x, y0) dx from x0 to x

The second approximation: y⁽²⁾(x) = y0 + ∫ f(x, y⁽¹⁾) dx from x0 to x

    OR

Consider dy/dx = f(x, y) with the initial condition y(x0) = y0.

1. Taylor's: y(x) = y0 + (x − x0)y0′ + ((x − x0)²/2!)y0″ + ((x − x0)³/3!)y0‴ + …

2. Picard's: y1 = y0 + ∫ f(x, y0) dx
y2 = y0 + ∫ f(x, y1) dx
y3 = y0 + ∫ f(x, y2) dx

Similarly yn = y0 + ∫ f(x, yn−1) dx

3. Euler's: y1 = y0 + h f(x0, y0)

y2 = y1 + h f(x1, y1)

y3 = y2 + h f(x2, y2)

Similarly yn+1 = yn + h f(xn, yn)
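The Euler recursion is one line per step; a sketch (the test ODE dy/dx = y is our choice):

```python
def euler(f, x0, y0, h, steps):
    # Repeated Euler step: y_{n+1} = y_n + h*f(x_n, y_n).
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# dy/dx = y, y(0) = 1: ten steps of h = 0.1 give (1.1)^10 ≈ 2.5937,
# a crude approximation to e ≈ 2.7183 at x = 1.
y1 = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

The large error here is characteristic of Euler's first-order accuracy and motivates the higher-order methods below.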

4. Runge-Kutta, order 4:

y1 = y0 + (1/6)[k1 + 2k2 + 2k3 + k4]


where k1 = h f(x0, y0), k2 = h f(x0 + h/2, y0 + k1/2)

k3 = h f(x0 + h/2, y0 + k2/2), k4 = h f(x0 + h, y0 + k3)

In general, yn+1 = yn + (1/6)[k1 + 2k2 + 2k3 + k4]

where k1 = h f(xn, yn), k2 = h f(xn + h/2, yn + k1/2)

k3 = h f(xn + h/2, yn + k2/2), k4 = h f(xn + h, yn + k3), n = 0, 1, 2, …
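The classical fourth-order step, transcribed directly (test ODE is ours):

```python
def rk4_step(f, x, y, h):
    # k1 = h f(xn, yn), k2 = h f(xn + h/2, yn + k1/2),
    # k3 = h f(xn + h/2, yn + k2/2), k4 = h f(xn + h, yn + k3).
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# dy/dx = y, y(0) = 1: ten steps of h = 0.1 give y(1) close to e,
# far more accurate than Euler with the same step size.
y = 1.0
for n in range(10):
    y = rk4_step(lambda x, y: y, n * 0.1, y, 0.1)
```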

5. Milne's predictor-corrector:

Predictor: y4 = y0 + (4h/3)[2y1′ − y2′ + 2y3′], where yk′ = f(xk, yk), k = 0, 1, 2, 3

Corrector: y4 = y2 + (h/3)[y2′ + 4y3′ + y4′], where yk′ = f(xk, yk)

6. Adams-Moulton predictor-corrector:

Predictor: y4 = y3 + (h/24)[55y3′ − 59y2′ + 37y1′ − 9y0′]

where yk′ = f(xk, yk), k = 0, 1, 2, 3

Corrector: y4 = y3 + (h/24)[9y4′ + 19y3′ − 5y2′ + y1′]

where yk′ = f(xk, yk)
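One Milne step, assuming four accurate starting values (here taken from the exact solution of dy/dx = y, our illustrative choice):

```python
import math

def milne_step(f, xs, ys, h):
    # Predictor: y4 = y0 + (4h/3)[2y1' - y2' + 2y3']
    # Corrector: y4 = y2 + (h/3)[y2' + 4y3' + y4']
    d = [f(x, y) for x, y in zip(xs, ys)]
    x4 = xs[3] + h
    y4_pred = ys[0] + (4 * h / 3) * (2 * d[1] - d[2] + 2 * d[3])
    d4 = f(x4, y4_pred)            # derivative at the predicted point
    return ys[2] + (h / 3) * (d[2] + 4 * d[3] + d4)

# Starting values e^0, e^0.1, e^0.2, e^0.3; the true answer is e^0.4.
xs = [0.0, 0.1, 0.2, 0.3]
ys = [math.exp(x) for x in xs]
y4 = milne_step(lambda x, y: y, xs, ys, 0.1)
```

In practice the four starting values come from a single-step method such as Runge-Kutta.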


    UNIT-VII

    FOURIER SERIES

    Periodic Functions

    Even and odd Function

    Fourier Series

Euler's Formulae

Fourier Series in an Arbitrary Interval (Change of Interval)

    Fourier Series of Even and odd Functions

    Half-Range Fourier Sine and Cosine Series


    Summary

1. Periodic functions

Definition: A function f: R → R is said to be periodic if there exists a positive number T such that f(x + T) = f(x) for all x ∈ R; T is called the period of f(x).

2. Even and odd functions

(i) A function f(x) is said to be even if f(−x) = f(x)
(ii) A function f(x) is said to be odd if f(−x) = −f(x)

3. Definition: The Fourier series for f(x) in the interval (C, C + 2π) is

f(x) = a0/2 + Σ [an cos nx + bn sin nx]     (n from 1 to ∞)

where a0 = (1/π) ∫ f(x) dx, limits from C to C + 2π

an = (1/π) ∫ f(x) cos nx dx, from C to C + 2π

bn = (1/π) ∫ f(x) sin nx dx, from C to C + 2π, where C is a constant.

a0, an, bn are called the Fourier coefficients (Fourier constants); these formulae are called Euler's formulae.
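Euler's formulae can be evaluated numerically, e.g. with the trapezoidal rule; a sketch on (−π, π) checked against the known series of f(x) = x² (function name, sample function, and grid size are ours):

```python
import math

def fourier_coeffs(f, n, m=2000):
    # an = (1/π)∫ f(x)cos(nx) dx,  bn = (1/π)∫ f(x)sin(nx) dx
    # over (-π, π), via the composite trapezoidal rule with m subintervals.
    h = 2 * math.pi / m
    xs = [-math.pi + i * h for i in range(m + 1)]
    def trap(g):
        ys = [g(x) for x in xs]
        return (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1])) / math.pi
    return (trap(lambda x: f(x) * math.cos(n * x)),
            trap(lambda x: f(x) * math.sin(n * x)))

# f(x) = x² has the series π²/3 + Σ 4(-1)ⁿ/n² cos nx,
# so a1 should come out near -4 and b1 near 0.
a1, b1 = fourier_coeffs(lambda x: x * x, 1)
```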


Note:

(i) If C = 0, then the interval becomes (0, 2π). The Fourier coefficients are

a0 = (1/π) ∫ f(x) dx, limits from 0 to 2π

an = (1/π) ∫ f(x) cos nx dx, from 0 to 2π

bn = (1/π) ∫ f(x) sin nx dx, from 0 to 2π

(ii) If C = −π, then the interval becomes (−π, π). The Fourier coefficients are

a0 = (1/π) ∫ f(x) dx, limits from −π to π

an = (1/π) ∫ f(x) cos nx dx, from −π to π

bn = (1/π) ∫ f(x) sin nx dx, from −π to π

4. Dirichlet's conditions:

A function f(x) defined in the interval a1 ≤ x ≤ a2 can be represented as a Fourier series if f(x) satisfies the following conditions in the interval:

(i) f(x) and its integrals are finite and single-valued
(ii) f(x) has a finite number of discontinuities
(iii) f(x) has a finite number of maxima and minima.

Then the Fourier series converges to f(x) at all points where f(x) is continuous. At each point of discontinuity, the series converges to the average of the left limit and the right limit of f(x).

5. Change of interval (arbitrary interval): If a function f(x) is defined in (C, C + 2l), the Fourier expansion of


f(x) is

f(x) = a0/2 + Σ [an cos(nπx/l) + bn sin(nπx/l)]     (n from 1 to ∞)

a0 = (1/l) ∫ f(x) dx, limits from C to C + 2l

an = (1/l) ∫ f(x) cos(nπx/l) dx, from C to C + 2l

bn = (1/l) ∫ f(x) sin(nπx/l) dx, from C to C + 2l

Note: If C = 0, then the interval becomes (0, 2l).
If C = −l, then the interval becomes (−l, l).

6. Fourier series for even and odd functions.

(i) If f(x) is an even function in (0, 2π) or (−π, π), the Fourier series of f(x) is

f(x) = a0/2 + Σ an cos nx     (n from 1 to ∞)

where a0 = (2/π) ∫ f(x) dx, limits from 0 to π

an = (2/π) ∫ f(x) cos nx dx, from 0 to π

Here, since f(x) is an even function, the Fourier coefficient bn = 0.

(ii) If f(x) is an odd function in (0, 2π) or (−π, π), the Fourier series of f(x) is

f(x) = Σ bn sin nx

where bn = (2/π) ∫ f(x) sin nx dx, limits from 0 to π.
Here the coefficients a0 = 0, an = 0.

Similarly in the case of the intervals (0, 2l) or (−l, l).

7. Half-range Fourier series:


(i) Half-range Fourier sine series for f(x) in (0, π):

f(x) = Σ bn sin nx

where bn = (2/π) ∫ f(x) sin nx dx, limits from 0 to π

(ii) Half-range Fourier sine series for f(x) in (0, l):

f(x) = Σ bn sin(nπx/l)

where bn = (2/l) ∫ f(x) sin(nπx/l) dx, limits from 0 to l

(iii) Half-range Fourier cosine series in (0, π):

f(x) = a0/2 + Σ an cos nx     (n from 1 to ∞)

where a0 = (2/π) ∫ f(x) dx, limits from 0 to π

an = (2/π) ∫ f(x) cos nx dx, from 0 to π

(iv) Half-range Fourier cosine series in (0, l):

f(x) = a0/2 + Σ an cos(nπx/l)     (n from 1 to ∞)

a0 = (2/l) ∫ f(x) dx, limits from 0 to l

an = (2/l) ∫ f(x) cos(nπx/l) dx, from 0 to l
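The half-range sine coefficients can be checked numerically; a sketch using the trapezoidal rule (function name, grid size, and the sample f(x) = x are ours):

```python
import math

def half_range_sine_coeff(f, n, l=math.pi, m=2000):
    # bn = (2/l) ∫ f(x) sin(nπx/l) dx over (0, l), trapezoidal rule.
    h = l / m
    ys = [f(i * h) * math.sin(n * math.pi * i * h / l)
          for i in range(m + 1)]
    return (2 / l) * (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))

# For f(x) = x on (0, π) the exact coefficients are bn = 2(-1)^(n+1)/n,
# so b1 ≈ 2 and b2 ≈ -1.
b1 = half_range_sine_coeff(lambda x: x, 1)
b2 = half_range_sine_coeff(lambda x: x, 2)
```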


    UNIT-VIII

    PARTIAL DIFFERENTIAL EQUATIONS

Formation of Partial Differential Equations by eliminating arbitrary constants and arbitrary functions

First Order Linear (Lagrange's) Equations

    Non-Linear (Standard Types) Equations

    Method of Separation of variables for second order

    One Dimensional Wave equation

    One Dimensional Heat equation

Laplace's equation

    Two Dimensional Wave equation


    Summary

Summary

1. Formation of partial differential equations by the elimination of arbitrary constants and arbitrary functions.

(a) Elimination of arbitrary constants:

Let f(x, y, z, a, b) = 0 …(1) be the equation, where a, b are arbitrary constants. Differentiating partially w.r.t. x and y (with p = ∂z/∂x, q = ∂z/∂y):

∂f/∂x + (∂f/∂z)(∂z/∂x) = 0, i.e. ∂f/∂x + p ∂f/∂z = 0 …(2)

∂f/∂y + (∂f/∂z)(∂z/∂y) = 0, i.e. ∂f/∂y + q ∂f/∂z = 0 …(3)

Elimination of the two constants a, b from (1), (2), (3) gives an equation of the form φ(x, y, z, p, q) = 0, which is a first-order P.D.E.

If the number of constants is more than the number of independent variables, then eliminating the constants gives rise to a P.D.E. of order higher than the first.

    (b) Elimination of arbitrary functions:


Let φ(u, v) = 0 …(1) be the equation, where u, v are functions of x, y, z and φ is an arbitrary function. Differentiating (1) partially with respect to x and y:

(∂φ/∂u)(∂u/∂x + ∂u/∂z · ∂z/∂x) + (∂φ/∂v)(∂v/∂x + ∂v/∂z · ∂z/∂x) = 0

and

(∂φ/∂u)(∂u/∂y + ∂u/∂z · ∂z/∂y) + (∂φ/∂v)(∂v/∂y + ∂v/∂z · ∂z/∂y) = 0

Eliminating ∂φ/∂u and ∂φ/∂v from these two equations, and writing p = ∂z/∂x, q = ∂z/∂y,

Pp + Qq = R is the required P.D.E.

where P = (∂u/∂y)(∂v/∂z) − (∂u/∂z)(∂v/∂y)

Q = (∂u/∂z)(∂v/∂x) − (∂u/∂x)(∂v/∂z)

R = (∂u/∂x)(∂v/∂y) − (∂u/∂y)(∂v/∂x)

2. Lagrange's linear P.D.E.

The P.D.E. Pp + Qq = R …(1) is called Lagrange's first-order partial differential equation.


Here P, Q, R are functions of x, y, z.

To solve (1), first write Lagrange's auxiliary equations (subsidiary equations)

dx/P = dy/Q = dz/R …(2)

The auxiliary equations give two independent solutions u = c1 and v = c2, where u, v are functions of x, y, z. From these two solutions, the general solution is φ(u, v) = 0.

3. Non-linear partial differential equations of order one

(i) Complete integral: If F(x, y, z, p, q) = 0 …(1) is a non-linear partial differential equation of first order, then an equation

φ(x, y, z, a, b) = 0 …(2)

which contains as many arbitrary constants as there are independent variables is called the complete integral.

(ii) Particular integral: A particular integral of (1) is obtained by giving particular values to the arbitrary constants a, b in the complete integral.

(iii) Singular integral: Differentiate the complete integral φ(x, y, z, a, b) = 0 …(2) partially w.r.t. a and b, and then equate to zero:

∂φ/∂a = 0 …(3)


∂φ/∂b = 0 …(4)

Elimination of a and b from (2), (3), (4) gives an equation of the form f(x, y, z) = 0, called the singular integral.

There are four standard forms of non-linear first-order partial differential equations.

(i) Standard Form I: An equation of the form f(p, q) = 0 (i.e., an equation in terms of p and q only) is called standard type I.

The solution is z = ax + by + c …(1), with

p = ∂z/∂x = a, q = ∂z/∂y = b.

Replacing p by a and q by b in the given P.D.E.:

f(a, b) = 0, so b = φ(a).

Substituting b = φ(a) in (1), z = ax + φ(a)y + c is called the complete integral.

(ii) Standard Form II: An equation of the form f(x, y, p, q) = 0 …(1) is called standard type II.

Arrange (1) in the form f1(x, p) = f2(y, q) = a (a constant). From these two equations we get p = φ1(x, a) and q = φ2(y, a).

Substituting in dz = p dx + q dy and integrating:

z = ∫ φ1(x, a) dx + ∫ φ2(y, a) dy + c is the complete integral.


(iii) Standard Form III: An equation of the form f(z, p, q) = 0 …(1).

Substituting q = ap …(2) in (1), we get

p = φ(z) …(3), and from (2), (3), q = a φ(z) …(4).

From (3), (4), substituting the values of p, q in dz = p dx + q dy:

dz = φ(z) dx + a φ(z) dy

dz/φ(z) = dx + a dy

Integrating, F(z) = x + ay + c is the complete integral.

(iv) Standard Form IV (Clairaut's equation):

A P.D.E. of the form z = px + qy + f(p, q) …(1) is called Clairaut's equation.

The complete integral of (1) is

z = ax + by + f(a, b) …(2)

To find the singular integral, differentiate (2) partially with respect to a and b:

0 = x + ∂f/∂a …(3)

0 = y + ∂f/∂b …(4)

Eliminating a, b from (3), (4) gives the singular integral.

4. Applications of P.D.E.s (method of separation of variables)


(1) One-dimensional wave equation: ∂²u/∂t² = c² ∂²u/∂x²

(2) Two-dimensional wave equation: ∂²u/∂x² + ∂²u/∂y² = (1/c²) ∂²u/∂t²

(3) One-dimensional heat equation: ∂u/∂t = c² ∂²u/∂x²

(4) Laplace's equation: ∂²u/∂x² + ∂²u/∂y² = 0

Problems which satisfy certain initial and boundary conditions are called boundary value problems. A suitable method to solve such problems is the method of separation of variables, also known as the product method.
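For the one-dimensional heat equation with u(0, t) = u(l, t) = 0, separation of variables gives the series u(x, t) = Σ bn sin(nπx/l) exp(−c²n²π²t/l²), where the bn are the half-range sine coefficients of the initial profile. A sketch of evaluating this series (function name and the sin x initial profile are ours):

```python
import math

def heat_solution(bn, c, l, x, t, terms=50):
    # u(x,t) = Σ b_n sin(nπx/l) exp(-c² n² π² t / l²), n = 1..terms,
    # the separation-of-variables series for u_t = c² u_xx on (0, l)
    # with u(0,t) = u(l,t) = 0.
    return sum(bn(n) * math.sin(n * math.pi * x / l)
               * math.exp(-(c * n * math.pi / l) ** 2 * t)
               for n in range(1, terms + 1))

# Initial profile f(x) = sin x on (0, π) has b1 = 1 and bn = 0 otherwise,
# so the exact solution is u(x, t) = e^(-t) sin x.
u = heat_solution(lambda n: 1.0 if n == 1 else 0.0,
                  c=1.0, l=math.pi, x=1.0, t=0.5)
```

For a general initial profile the bn would be computed from the half-range sine formula of the previous unit.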