numerical methods

Post on 19-May-2015

Dynamics Course

Prof. A. Meher Prasad

Department of Civil Engineering, Indian Institute of Technology Madras

email: prasadam@iitm.ac.in

Direct Integration of the Equations of Motion

• Provides the response of the system at discrete intervals of time (which are usually equally spaced).

• A process of marching along the time dimension, in which the response parameters (i.e., the acceleration, velocity and displacement at a given time point) are evaluated from their known historic values.

• For a SDOF system, this requires three equations to determine three unknowns.

(a) Two of these equations are usually derived from assumptions regarding the manner in which the response parameters vary during a time step.

(b) The third equation is the equation of motion written at a selected time point.

• When the selected point represents the current time (n), the method of integration is referred to as an explicit method (e.g. the central difference method).

• When the equation of motion is written at the next time point in the future (n+1), the method is said to be an implicit method (e.g. Newmark's β method, the Wilson-θ method).

Direct Integration of the Equations of Motion…

(figure: load history P(t) sampled at equally spaced time stations 0, 1, …, tn, tn+1, spacing ∆t, with ordinates Pn, Pn+1)

For a SDOF system:  m ẍ + c ẋ + k x = P(t)

Let ∆t = time interval, tn = n ∆t

xn, ẋn, ẍn = displacement, velocity and acceleration at time station ‘n’

Pn is the applied force at time tn

Direct Integration of the Equations of Motion…

General expression for the time integration methods:

x(n+1) = Σ (l = n−k to n) A_l x_l + Σ (l = n−k to n+1) B_l ẋ_l + Σ (l = n−k to n+1) C_l ẍ_l + R   (1)

R is a remainder term representing the error, given by

R = (E/m!) x^(m)(ξ),  (n−k)∆t ≤ ξ ≤ (n+1)∆t

x^(m) is the value of the mth differential of x at t = ξ

A_l, B_l and C_l are constants (some of which may be equal to zero)

• The equation is employed to represent exactly a polynomial of order p−1, p being smaller than m.

• Then (m−p) constants become available, which can be assigned arbitrarily chosen values so as to improve the stability or convergence characteristics of the resulting formula.

• Formulas of type eq(1) for time integration can also be obtained from physical considerations, such as, for example, an assumed variation of the acceleration, or from finite difference approximations of the differentials.

• Eq(1) relates xn+1, ẋn+1, ẍn+1 at tn+1 to their values at the previous time stations n−k, n−k+1, …, n.

• Eq(1) has m = 5+3k undetermined constants A, B and C.

Newmark’s β Method

• In 1959, Newmark devised a series of numerical integration formulas collectively known as Newmark’s β methods.

• The velocity expression is of the form

ẋn+1 = a1 ẋn + a2 ẍn + a3 ẍn+1   (1)

• The displacement expression is of the form

xn+1 = b1 xn + b2 ẋn + b3 ẍn + b4 ẍn+1   (2)

• To determine the constants, make equations (1) & (2) exact for x = 1, x = t, x = t²; we get

a1 = 1, 2∆t = 2a2 + 2a3

b1 = 1, b2 = ∆t, 2b3 + 2b4 = (∆t)²

Say a3 = γ∆t and b4 = β(∆t)².

Then equations (1) & (2) reduce to

ẋn+1 = ẋn + ∆t(1−γ)ẍn + ∆tγẍn+1 + R   (3)

xn+1 = xn + ∆t ẋn + (∆t)²(1/2 − β)ẍn + (∆t)²βẍn+1 + R   (4)

Third relationship: the equation of motion at tn+1:

m ẍn+1 + c ẋn+1 + k xn+1 = Pn+1   (5)

Substituting eqns. (3) and (4) in eqn. (5), we get an expression for ẍn+1.

To begin the time integration, we need to know the values of x0, ẋ0 and ẍ0 at time t = 0.

Newmark’s β Method…

γ = 0, β = 0: Constant acceleration

(figure: acceleration vs. time, ẍ held constant at ẍn over each step ∆t)

ẍ(t) = ẍn ;  ∆t ≤ 0.636 T (stable)

γ = 1/2, β = 1/4: Average acceleration

(figure: acceleration vs. time, ẍ constant at the average of ẍn and ẍn+1 over the step)

ẍ(t) = (ẍn + ẍn+1)/2  (unconditionally stable)

γ = 1/2, β = 1/6: Linear acceleration

(figure: acceleration vs. time, ẍ varying linearly from ẍn to ẍn+1 over the step)

ẍ(t) = ẍn + (ẍn+1 − ẍn)(t − tn)/∆t ;  ∆t ≤ 0.55 T (stable)

Algorithm

Enter k, m, c, β, γ and P(t)

ẍ0 = [P(t0) − c ẋ0 − k x0] / m

Select ∆t

k̂ = k + γc/(β∆t) + m/(β(∆t)²)

a = m/(β∆t) + (γ/β)c ;  b = m/(2β) + ∆t(γ/(2β) − 1)c

For i = 0, 1, 2, … :

∆p̂i = ∆pi + a ẋi + b ẍi

∆xi = ∆p̂i / k̂

∆ẋi = (γ/(β∆t))∆xi − (γ/β)ẋi + ∆t(1 − γ/(2β))ẍi

∆ẍi = ∆xi/(β(∆t)²) − ẋi/(β∆t) − ẍi/(2β)

xi+1 = xi + ∆xi ;  ẋi+1 = ẋi + ∆ẋi ;  ẍi+1 = ẍi + ∆ẍi

i = i + 1; repeat.
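The incremental algorithm can be sketched in Python (a minimal SDOF sketch; the function name, array layout, and the equilibrium-based acceleration update are my own choices, not the slide's):

```python
import numpy as np

def newmark_sdof(m, c, k, p, dt, x0=0.0, v0=0.0, gamma=0.5, beta=0.25):
    """Incremental Newmark-beta integration of m*x'' + c*x' + k*x = p(t).

    p holds load ordinates at equally spaced times t_i = i*dt.
    gamma=1/2, beta=1/4 is average acceleration; beta=1/6 is linear.
    """
    n = len(p)
    x, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    x[0], v[0] = x0, v0
    a[0] = (p[0] - c * v0 - k * x0) / m              # equilibrium at t = 0
    kh = k + gamma * c / (beta * dt) + m / (beta * dt**2)  # effective stiffness
    A = m / (beta * dt) + gamma * c / beta
    B = m / (2 * beta) + dt * (gamma / (2 * beta) - 1.0) * c
    for i in range(n - 1):
        dph = (p[i + 1] - p[i]) + A * v[i] + B * a[i]      # effective load increment
        dx = dph / kh
        dv = (gamma / (beta * dt)) * dx - (gamma / beta) * v[i] \
             + dt * (1.0 - gamma / (2 * beta)) * a[i]
        x[i + 1] = x[i] + dx
        v[i + 1] = v[i] + dv
        # acceleration from equilibrium at t_{i+1} (avoids accumulation of drift)
        a[i + 1] = (p[i + 1] - c * v[i + 1] - k * x[i + 1]) / m
    return x, v, a
```

For an undamped oscillator with T = 1 s and ∆t = 0.01 s, the computed free vibration stays on the unit-amplitude cosine to within the scheme's small period elongation.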

Elastoplastic System

Enter k, m, c, Rt, Rc and P(t)

Set x0 = 0, ẋ0 = 0; select ∆t

ẍ0 = P(t0)/m ;  xt = Rt/k ;  xc = Rc/k

Define key = 0 (elastic), key = 1 (plastic behaviour in tension), key = −1 (plastic behaviour in compression)

At each step (i = 0, then i = i + 1), use Newmark's β method to calculate xi and ẋi, then update the state and the restoring force R:

• key = 0 (elastic): R = Rt − (xt − xi)k. If xi > xt, set key = 1, R = Rt; if xi < xc, set key = −1, R = Rc.

• key = 1 (plastic in tension): R = Rt while ẋi > 0. When ẋi < 0, unload: key = 0, xt = xi, xc = xi − (Rt − Rc)/k.

• key = −1 (plastic in compression): R = Rc while ẋi < 0. When ẋi > 0, unload: key = 0, xc = xi, xt = xi + (Rt − Rc)/k.

ẍi+1 = [P(ti+1) − c ẋi+1 − R] / m
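The branch logic of the flow chart can be sketched as a restoring-force update (a sketch only; the state dictionary and function name are mine, and the caller is assumed to invoke it once per time step with the current displacement and velocity):

```python
def elastoplastic_force(x, v, state):
    """Restoring force of an elastoplastic spring.

    state holds k, Rt, Rc, the current yield displacements xt, xc,
    and key (0 elastic, 1 plastic tension, -1 plastic compression).
    The dict is updated in place on yielding or unloading.
    """
    k, Rt, Rc = state['k'], state['Rt'], state['Rc']
    if state['key'] == 1:                    # yielded in tension
        if v < 0:                            # velocity reversal -> unload
            state['key'] = 0
            state['xt'] = x
            state['xc'] = x - (Rt - Rc) / k
        else:
            return Rt
    elif state['key'] == -1:                 # yielded in compression
        if v > 0:
            state['key'] = 0
            state['xc'] = x
            state['xt'] = x + (Rt - Rc) / k
        else:
            return Rc
    # elastic branch (including states that just unloaded)
    if x > state['xt']:
        state['key'] = 1
        return Rt
    if x < state['xc']:
        state['key'] = -1
        return Rc
    return Rt - (state['xt'] - x) * k
```

Pushing past xt yields at Rt; reversing the velocity re-centres the elastic range of width (Rt − Rc)/k about the reversal point, which reproduces the parallelogram-shaped hysteresis loop.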

Central Difference Method

The method is based on finite difference approximations of the time derivatives of displacement (velocity and acceleration) at selected time intervals

(figure: displacement vs. time, showing xn−1, xn, xn+1 at stations (n−1)∆t, n∆t, (n+1)∆t)

ẋn = (xn+1 − xn−1) / (2∆t) ;  ẍn = (xn+1 − 2xn + xn−1) / (∆t)²

Algorithm

Enter k, m, c and P(t)

ẍ0 = [P(t0) − c ẋ0 − k x0] / m

x−1 = x0 − ∆t ẋ0 + 0.5 (∆t)² ẍ0

k̂ = m/(∆t)² + c/(2∆t) ;  a = m/(∆t)² − c/(2∆t) ;  b = k − 2m/(∆t)²

For i = 0, 1, 2, … :

p̂i = pi − a xi−1 − b xi

xi+1 = p̂i / k̂

ẋi = (xi+1 − xi−1)/(2∆t) ;  ẍi = (xi+1 − 2xi + xi−1)/(∆t)²

i = i + 1; repeat.
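A minimal sketch of this explicit algorithm (names are mine; note the fictitious point x at −∆t used to start the recursion):

```python
import numpy as np

def central_difference_sdof(m, c, k, p, dt, x0=0.0, v0=0.0):
    """Explicit central-difference integration of m*x'' + c*x' + k*x = p(t).

    Conditionally stable: requires dt < T/pi for the undamped period T.
    """
    n = len(p)
    x = np.zeros(n)
    x[0] = x0
    a0 = (p[0] - c * v0 - k * x0) / m
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious x_{-1}
    kh = m / dt**2 + c / (2 * dt)              # effective stiffness
    a = m / dt**2 - c / (2 * dt)
    b = k - 2 * m / dt**2
    for i in range(n - 1):
        ph = p[i] - a * x_prev - b * x[i]      # effective load
        x_next = ph / kh
        x_prev, x[i + 1] = x[i], x_next
    return x
```

Because no equation involving the unknown state at i+1 has to be solved, each step is a single division; the price is the conditional stability noted in the docstring.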

Wilson-θ Method

(figure: acceleration vs. time, varying linearly from ẍn to ẍn+θ over the extended interval θ∆t, with ẍn+1 at ∆t)

• This method is similar to the linear acceleration method and is based on the assumption that the acceleration varies linearly over an extended interval θ∆t.

• θ, which is always greater than 1, is selected to give the desired characteristics of accuracy and stability.

Algorithm

Enter k, m, c, θ, ∆t and P(t)

Specify initial conditions

a1 = 6/(θ∆t)² ;  a2 = θ∆t/2 ;  a3 = 3/(θ∆t) ;  a4 = 6/(θ∆t)

k̂ = a1 m + a3 c + k

For n = 0, 1, 2, … :

pn+θ = pn (1 − θ) + pn+1 θ

a5 = a1 xn + a4 ẋn + 2ẍn ;  a6 = a3 xn + 2ẋn + a2 ẍn

xn+θ = (pn+θ + m a5 + c a6) / k̂

ẍn+θ = a1{xn+θ − xn − (θ∆t)ẋn} − 2ẍn

ẍn+1 = ẍn + (ẍn+θ − ẍn)/θ

ẋn+1 = ẋn + (ẍn + ẍn+1)∆t/2

xn+1 = xn + ∆t ẋn + (∆t)²(ẍn/3 + ẍn+1/6)

n = n + 1; repeat.
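The Wilson-θ step can be sketched directly from these formulas (a sketch; variable names a1–a6 follow the slide, the rest are mine):

```python
import numpy as np

def wilson_theta_sdof(m, c, k, p, dt, theta=1.4, x0=0.0, v0=0.0):
    """Wilson-theta integration of m*x'' + c*x' + k*x = p(t).

    theta >= 1.37 gives unconditional stability; theta = 1 recovers
    the linear acceleration method.
    """
    n = len(p)
    x, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    x[0], v[0] = x0, v0
    a[0] = (p[0] - c * v0 - k * x0) / m
    tau = theta * dt                      # extended interval
    a1 = 6.0 / tau**2; a2 = tau / 2.0; a3 = 3.0 / tau; a4 = 6.0 / tau
    kh = a1 * m + a3 * c + k              # effective stiffness
    for i in range(n - 1):
        p_tau = p[i] * (1.0 - theta) + p[i + 1] * theta   # load at t + theta*dt
        a5 = a1 * x[i] + a4 * v[i] + 2.0 * a[i]
        a6 = a3 * x[i] + 2.0 * v[i] + a2 * a[i]
        x_tau = (p_tau + m * a5 + c * a6) / kh
        acc_tau = a1 * (x_tau - x[i] - tau * v[i]) - 2.0 * a[i]
        a[i + 1] = a[i] + (acc_tau - a[i]) / theta        # interpolate back to dt
        v[i + 1] = v[i] + 0.5 * dt * (a[i] + a[i + 1])
        x[i + 1] = x[i] + dt * v[i] + dt**2 * (a[i] / 3.0 + a[i + 1] / 6.0)
    return x, v, a
```

The acceleration is advanced over θ∆t and then interpolated back to ∆t, which is where the method's algorithmic damping of the higher modes comes from.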

Errors involved in the Numerical Integration

• Round off errors

Introduced by repeated computation using a small step size.

• Random in nature

• To reduce them, use higher precision arithmetic

• Truncation errors

Involved in representing xn+1 and ẋn+1 by a finite number of terms in the Taylor series expansion.

• Represented by R in the previous slides

• Accumulated locally at each step.

• If the integration method is stable, then the truncation error indicates the accuracy.

• Propagated error

Introduced by replacing the differential equation by a finite difference equivalent.


Stability of the Integration method

• Effect of the error introduced at one step on the computations at the next step determines the stability.

• If error grows, the solution becomes unbounded and meaningless.

• Spectral radius of a matrix: ρ(A) = maximum magnitude of the eigenvalues of A

• [A] is the ‘amplification matrix’ relating the state at step n+1 to the state at step n

The recurrence of any of the above methods can be written as

{ xn+1, ∆t ẋn+1, (∆t)² ẍn+1 }ᵀ = [A] { xn, ∆t ẋn, (∆t)² ẍn }ᵀ

ρ(A) ≤ 1: stable ;  ρ(A) > 1: unstable

If θ ≥ 1.37, Wilson-θ is unconditionally stable.
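The criterion ρ(A) ≤ 1 is easy to check numerically. A sketch for the undamped Newmark family, building [A] column by column by stepping unit states through the recurrence (the scaled state (x, ∆t·ẋ, ∆t²·ẍ) and function names are my own):

```python
import numpy as np

def amplification_matrix(omega, dt, gamma=0.5, beta=0.25):
    """Amplification matrix of undamped Newmark for the scaled state
    (x, dt*v, dt**2*a), with natural frequency omega."""
    O2 = (omega * dt)**2
    A = np.zeros((3, 3))
    for j, s in enumerate(np.eye(3)):
        x, w, z = s                                   # x_n, dt*v_n, dt^2*a_n
        # displacement update with a_{n+1} eliminated via a = -omega^2 x
        xn = (x + w + (0.5 - beta) * z) / (1.0 + beta * O2)
        zn = -O2 * xn                                 # equilibrium at n+1
        wn = w + (1.0 - gamma) * z + gamma * zn       # velocity update
        A[:, j] = (xn, wn, zn)
    return A

def spectral_radius(A):
    return max(abs(np.linalg.eig(A)[0]))
```

Average acceleration gives ρ(A) = 1 for any ∆t (unconditional stability), while linear acceleration (β = 1/6) exceeds 1 once ∆t/T passes about 0.551.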

Attributes required for good Direct Integration method

1. Unconditional stability when applied to linear problems

2. Not more than one set of implicit equations to be solved at each step

3. Second order accuracy

4. Controllable algorithmic dissipation in the higher modes

5. Self-starting – Wilson-θ is reasonably good

* For MDOF systems, the scalar equations of the SDOF system become matrix equations.

(figure: spectral radii for α-methods, optimal collocation schemes and the Houbolt, Newmark, Park and Wilson methods)

Selection of a numerical integration method

(figures: period elongation vs. ∆t/T; amplitude decay vs. ∆t/T)

* For the numerical integration of SDOF systems, the linear acceleration method, which gives no amplitude decay and the lowest period elongation, is the most suitable of the methods presented.

Selection of time step ∆t

∆t must be small enough to give good accuracy, and large enough to be computationally efficient.

• p∆t < 1 i.e., ∆t/T ≤ 0.16 (arrived at from truncation errors for a free vibration case)

• Typically ∆t/T ≈ 0.1 is acceptable.

• The exciting function is sampled at intervals equal to the selected ∆t, so ∆t must also be checked against an inspection of the forcing function.

Mass Condensation or Guyan Reduction

• Extensively used to reduce the number of d.o.f for eigenvalue extraction.

• Unless properly used, it is detrimental to accuracy.

• This method is never used when optimal damping is used for the mass matrix.

Let 'm' represent those to be restrained

Let 's' represent those to be condensed

. .

. .

0

0mm ms mm ms m

sm ss sm ss s

m

s

K M u

K K M M u

Master d

K K M M u

u

u

o f

Slave d o f

• Assumption: Slave d.o.f do not have masses – only elastic forces are important

Gauss Elimination Scheme

With no inertia forces on the slave d.o.f, the second row of the partitioned equations gives

[Ksm]{um} + [Kss]{us} = {0}  ⟹  {us} = −[Kss]⁻¹[Ksm]{um}

so that

{u} = { um ; us } = [T]{um},  [T] = [ I ; −[Kss]⁻¹[Ksm] ]

and the reduced matrices are

[Kr] = [T]ᵀ[K][T] ;  [Mr] = [T]ᵀ[M][T]

• Choice of slave d.o.f

– All rotational d.o.f

– Find the ratio Kii/Mii and condense the d.o.f having large values of this ratio

– If [Mss] = 0 and diagonal, [Kr] is the same as in static condensation, and there is no loss of accuracy

Reduced eigenproblem (in the master d.o.f):

[Kr]{um} = λ[Mr]{um}

For each eigenpair (λi, {um}i), the slave d.o.f are recovered from

{us}i = −([Kss] − λi[Mss])⁻¹ ([Ksm] − λi[Msm]) {um}i
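The condensation can be sketched with dense matrices (function name and the `masters` index convention are mine; when the slave d.o.f truly carry no mass, the reduced eigenvalues are exact):

```python
import numpy as np
from scipy.linalg import eigh

def guyan_reduce(K, M, masters):
    """Guyan (static) condensation onto the master d.o.f.

    Returns Kr, Mr and the transformation T with u = T @ u_m.
    """
    n = K.shape[0]
    masters = list(masters)
    slaves = [i for i in range(n) if i not in masters]
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    T = np.zeros((n, len(masters)))
    T[masters, :] = np.eye(len(masters))
    T[slaves, :] = -np.linalg.solve(Kss, Ksm)   # u_s = -Kss^-1 Ksm u_m
    return T.T @ K @ T, T.T @ M @ T, T

# 4-d.o.f spring chain with massless d.o.f 1 and 3: condensation is exact
K = np.array([[2., -1., 0., 0.],
              [-1., 2., -1., 0.],
              [0., -1., 2., -1.],
              [0., 0., -1., 1.]])
M = np.diag([0., 2., 0., 1.])
Kr, Mr, T = guyan_reduce(K, M, [1, 3])
vals = eigh(Kr, Mr, eigvals_only=True)
```

Because [Mss] = 0 here, `vals` equals the two finite eigenvalues of the full pencil, (2 − √2)/4 and (2 + √2)/4.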

Subspace Iteration Method

• The most powerful method for obtaining the first few eigenvalues/eigenvectors.

• Minimum storage is necessary, as the subroutine can be implemented as an out-of-core solver.

• Basic steps:

– Establish p starting vectors, where p is the number of eigenvalues/vectors required, p << n.

– Use simultaneous inverse iteration on the ‘p’ vectors and Ritz analysis to extract the best eigenvalues/vectors.

– After the iteration converges, use a Sturm sequence check to verify that no eigenvalues have been missed.

• The static solves use the factorization [K] = [L][D][L]ᵀ.

• The method is called “subspace” iteration because it is equivalent to iterating on the whole ‘p’-dimensional subspace (rather than n), not simultaneous iteration of ‘p’ individual vectors.

• Starting vectors

• Sturm sequence property

For better convergence of the initial lower eigenvalues, it is better to increase the subspace to q > p such that

q = min(2p, p + 8)

The smallest eigenvalue is better approximated than the largest value in the subspace q.

Starting Vectors

(1) When some masses are zero: use a unit entry at each nonzero-mass d.o.f. For example,

M = diag(0, 2, 0, 1)  ⟹  X = [ {0,1,0,0}ᵀ  {0,0,0,1}ᵀ ]

(2) Take the ratio kii/mii. The d.o.f with the minimum ratios get a 1, the rest zero, in the starting vectors. For example,

k = diag(3, 2, 4, 8), m = diag(2, 0, 4, 1)  ⟹  kii/mii = { 3/2, ∞, 1, 8 }  ⟹  X = [ {0,0,1,0}ᵀ  {1,0,0,0}ᵀ ]

• Starting vectors can also be generated by the Lanczos algorithm – converges fast.

• In dynamic optimisation, where the structure is modified, the previous vectors can be good starting values.

Eigenvalue problem

[K][Φ] = [M][Φ][Λ]   (1)

[Φ]ᵀ[K][Φ] = [Λ]   (2)

[Φ]ᵀ[M][Φ] = [I]   (3)

The values in eqn. (2) are not true eigenvalues unless p = n.

If [Φ] satisfies (2) and (3), it cannot be said that they are true eigenvectors. If [Φ] satisfies (1), then they are true eigenvectors.

Since we have reduced the space from n to p, it is only necessary that the subspace of ‘p’ as a whole converges, not the individual vectors.

Algorithm:

Pick starting vectors [X]1 of size n × q

Factorize [K] = [L][D][L]ᵀ

For k = 1, 2, … :

[K][X̄]k+1 = [M][X]k   (static solve for improved vectors)

[K̄]k+1 = [X̄]ᵀk+1[K][X̄]k+1 ;  [M̄]k+1 = [X̄]ᵀk+1[M][X̄]k+1   (projected q × q matrices)

[K̄]k+1[Q]k+1 = [M̄]k+1[Q]k+1[Λ]k+1   (smaller q × q eigenproblem, solved by Jacobi)

[X]k+1 = [X̄]k+1[Q]k+1

As k → ∞, [Λ]k+1 → [Λ] and [X]k+1 → [Φ].

In practice the products are arranged through [Y]k = [M][X]k, so that [K̄]k+1 = [X̄]ᵀk+1[Y]k and, with [Y]k+1 = [M][X̄]k+1, [M̄]k+1 = [X̄]ᵀk+1[Y]k+1.
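The loop above can be sketched with dense matrices (a sketch: plain solves stand in for the factorized [K] = [L][D][L]ᵀ, crude identity columns stand in for proper starting vectors, and no Sturm check is performed):

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration(K, M, p, n_iter=20):
    """Basic subspace iteration for the p lowest eigenpairs of K x = lam M x."""
    n = K.shape[0]
    q = min(2 * p, p + 8, n)             # enlarged subspace for convergence
    X = np.eye(n)[:, :q]                 # crude starting vectors
    for _ in range(n_iter):
        Xb = np.linalg.solve(K, M @ X)   # inverse iteration on the block
        Kb = Xb.T @ K @ Xb               # projected q x q stiffness
        Mb = Xb.T @ M @ Xb               # projected q x q mass
        lam, Q = eigh(Kb, Mb)            # small eigenproblem (Jacobi in practice)
        X = Xb @ Q                       # improved, M-orthonormal iterates
    return lam[:p], X[:, :p]
```

On a 6-d.o.f fixed-fixed spring chain the two lowest Ritz values converge to the exact 2 − 2cos(kπ/7) in a handful of iterations, since each sweep reduces the error of mode i by roughly λi/λq+1.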

Sturm sequence check

After convergence, factorize the shifted matrix at a shift μ just beyond the largest converged eigenvalue:

[K] − μ[M] = [L][D][L]ᵀ

The number of negative elements of [D] equals the number of eigenvalues of [K]{φ} = λ[M]{φ} smaller than μ; this verifies that no eigenvalues have been missed.
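The count can be sketched through the inertia of the shifted matrix, which (for positive definite [M]) equals the count of negative entries of [D] in the L·D·Lᵀ factorization; the function name is mine:

```python
import numpy as np

def sturm_count(K, M, mu):
    """Number of eigenvalues of K x = lam M x below the shift mu.

    Uses the inertia of K - mu*M (Sturm sequence property); valid
    when M is positive definite, since congruence preserves inertia.
    """
    return int(np.sum(np.linalg.eigvalsh(K - mu * M) < 0.0))
```

Sweeping μ across the spectrum of a small test matrix reproduces the cumulative eigenvalue count exactly.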

Operation counts (n = order, m = half-bandwidth, q = subspace size):

• Factorization [K] = [L][D][L]ᵀ : (1/2)nm² + (3/2)nm

• Solution of [K][X̄] = [Y] : nq(2m + 1)

• Projection [K̄] : (nq/2)(q + 1)

• Projection [M̄] : (nq/2)(q + 1)

• Remaining products and the subspace rotation add n(m + 1), 4nm + 5n and nq² per iteration.

Total for the p lowest vectors, at about 10 iterations with q = min(2p, p + 8): nm² + nm(4 + 4p) + 5np + 20np(2m + q + 3/2)

This count grows as the number of iterations increases.

Example: n = 70000, b = 1000, p = 100, q = 108 → time ≈ 17 hours.

Example

Use subspace iteration to calculate the eigenpairs (λ1, φ1) and (λ2, φ2) of the problem [K][Φ] = [M][Φ][Λ], where

[K] = [ 2 −1 0 0 ; −1 2 −1 0 ; 0 −1 2 −1 ; 0 0 −1 1 ] ;  [M] = diag(0, 2, 0, 1)

Starting vectors (unit entries at the d.o.f carrying mass):

[X]1 = [ {0,1,0,0}ᵀ  {0,0,0,1}ᵀ ]

Solving [K][X̄]2 = [M][X]1:

[X̄]2 = [ 2 1 ; 4 2 ; 4 3 ; 4 4 ]

Projected matrices:

[K̄]2 = [X̄]ᵀ2[K][X̄]2 = [ 8 4 ; 4 4 ] ;  [M̄]2 = [X̄]ᵀ2[M][X̄]2 = [ 48 32 ; 32 24 ]

Solving the 2 × 2 eigenproblem [K̄]2[Q]2 = [M̄]2[Q]2[Λ]2 gives

λ1 = (2 − √2)/4 ≈ 0.1464 ;  λ2 = (2 + √2)/4 ≈ 0.8536

Since [M] has rank 2 and the subspace dimension is 2, these are already the exact finite eigenvalues of the full problem.
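The example can be checked numerically (a sketch assuming starting vectors with unit entries at the two d.o.f carrying mass):

```python
import numpy as np

# stiffness and mass from the example
K = np.array([[2., -1., 0., 0.],
              [-1., 2., -1., 0.],
              [0., -1., 2., -1.],
              [0., 0., -1., 1.]])
M = np.diag([0., 2., 0., 1.])

# starting vectors: unit entries at d.o.f 2 and 4
X1 = np.zeros((4, 2))
X1[1, 0] = 1.0
X1[3, 1] = 1.0

Xb = np.linalg.solve(K, M @ X1)      # K Xb = M X1
Kb = Xb.T @ (M @ X1)                 # projected stiffness (= Xb^T K Xb)
Mb = Xb.T @ M @ Xb                   # projected mass

# eigenvalues of the 2 x 2 projected pencil
lam = np.sort(np.linalg.eigvals(np.linalg.solve(Mb, Kb)).real)
```

One pass reproduces the projected matrices and the exact eigenvalues (2 ∓ √2)/4, confirming that the iteration converges in a single step for this rank-2 mass matrix.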
