
STATISTICAL ERROR ON DAMPING RATIOS IN DETERMINISTIC SUBSPACE IDENTIFICATION

    Michel Raffy and Camille Gontier 

Laboratoire de Mécanique et de Rhéologie
Ecole d'Ingénieurs du Val de Loire
Rue de la Chocolaterie, 41000 Blois – France

    ABSTRACT

Today, in the field of structural modal analysis, both frequency domain and time domain methods are available. They allow engineers to determine the modal characteristics of a mechanical structure in testing conditions, i.e. when the excitation can be measured. Once the modal parameters are determined, the question of data reliability arises, especially for the damping ratios, which are often identified with unsatisfactory accuracy. Starting from a deterministic subspace identification algorithm in state space such as N4SID (Numerical algorithm for Subspace State Space System Identification), the present paper develops a new formulation which allows the statistical error on the identified damping ratios to be evaluated. After the basic formula is derived, the method is validated and discussed, first on the example of a spring-damper-mass system, and second on an experimental structure representing a two-floor building.

    NOMENCLATURE

$M, C, K$ : mass, damping and stiffness matrices.
$f, F$ : respectively force and force vector.
$A_c, B_c$ : continuous time matrices.
$A, B, C, D$ : discrete time matrices in displacement form state equations.
$A', B', C', D'$ : discrete time matrices in acceleration form state equations.
$x, x', X, X'$ : displacement/velocity, velocity/acceleration and associated state vectors.
$y, y', u$ : observed displacements, velocities and excitations in state equations.
$l, m$ : number of outputs and inputs.
$w, v, w', v'$ : process and measurement noises in displacement and acceleration form state equations.
$i, n, r$ : identification order, system order, truncation rank of the identification.
$j$ : sample size.
$\Gamma_i, \mathrm{H}_i, \Delta_i, \Psi_i, \mathrm{E}_i$ : observability, excitation and stochastic matrices.
$U, S, V, U^{\bullet}, S^{\bullet}, V^{\bullet}$ : full and truncated SVD (the superscript $\bullet$ takes the values $s$ and $n$ for the signal and noise parts).
$\Phi$ : eigenvector matrix of $A_c$ and $A$.
$\lambda_k, \mu_k$ : eigenvalues of the matrices $A_c$ and $A$.
$A_d, A_c^{d}$ : diagonal realisation matrices of $A$ and $A_c$.
$\Gamma_i^{d}$ : diagonal realisation matrix of $\Gamma_i$.
$(\bullet)_{k\bullet}$ : $k$th row of the matrix.
$(\bullet)_{\bullet k}$ : $k$th column of the matrix.
$T$ : similarity transformation matrix.
$\delta T$ : sampling period.
$\tilde{\bullet}$ : error on the variable $\bullet$.
$\hat{\bullet}$ : estimate of the variable $\bullet$.
$\bullet^{*}$ : complex conjugate transpose of the variable $\bullet$.
$\bullet^{T}$ : transpose of the variable $\bullet$.
$\bullet^{c}$ : conjugate of the variable $\bullet$.
$\bullet^{\dagger}$ : pseudo-inverse of the matrix $\bullet$.
$E$ : expectation operator, $\bar{E}$ being equal to $\frac{1}{j}E$.
$\delta_{pq}$ : Kronecker delta operator.
$\Pi_B$ : orthogonal projection operator onto the row space of $B$, $\Pi_B = B^{T}\big(BB^{T}\big)^{-1}B$.
$\Pi_B^{\perp}$ : orthogonal complement projection operator, $\Pi_B^{\perp} = I - \Pi_B$.
$A/_{B}\,C$ : oblique projection operator, $A/_{B}\,C = \big(A\,\Pi_B^{\perp}\big)\big(C\,\Pi_B^{\perp}\big)^{\dagger}C$, the projection of the row space of $A$ onto the row space of $C$ along the row space of $B$.
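As a side illustration (not part of the original paper), these projection operators translate directly into a few lines of NumPy; the function names below are ours, and the oblique projection follows the convention used later in equation (3.6).

```python
import numpy as np

def pi(B):
    """Orthogonal projection onto the row space of B: Pi_B = B^T (B B^T)^{-1} B."""
    return B.T @ np.linalg.pinv(B @ B.T) @ B

def pi_perp(B):
    """Projection onto the orthogonal complement of the row space of B."""
    return np.eye(B.shape[1]) - pi(B)

def oblique(A, B, C):
    """Oblique projection of the row space of A onto the row space of C
    along the row space of B:  A /_B C = (A Pi_B^perp)(C Pi_B^perp)^+ C."""
    Pb = pi_perp(B)
    return (A @ Pb) @ np.linalg.pinv(C @ Pb) @ C

# small consistency check: projecting B onto its own orthogonal complement gives ~0
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 20))
assert np.allclose(B @ pi_perp(B), 0.0, atol=1e-10)
```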

Generic procedure (N): from any discrete vector signal $s(t)$, with size $i$, sequence length $j$ and index $k$ given, an extended signal vector $s_k$, also labelled $s_{i,k}$, is defined as

$$s_k = s_{i,k} = \begin{bmatrix} s(k)^T & s(k+1)^T & \cdots & s(k+i-1)^T \end{bmatrix}^T \qquad\text{(N1)}$$

from which a Hankel « signal matrix » $S$ is derived as follows:

$$S = S_{i,k,j} = \begin{bmatrix} s_k & s_{k+1} & \cdots & s_{k+j-1} \end{bmatrix} \qquad\text{(N2)}$$


Two indexes $\alpha$, $\beta$ being given, a « past signal » matrix $S_p$ and a « future signal » matrix $S_f$ are defined from (N1) and (N2) by

$$S_p = S_{\alpha,k,j} = \begin{bmatrix} s_{\alpha,k} & s_{\alpha,k+1} & \cdots & s_{\alpha,k+j-1} \end{bmatrix} \qquad\text{(N3)}$$

$$S_f = S_{\beta,k+\alpha,j} = \begin{bmatrix} s_{\beta,k+\alpha} & s_{\beta,k+\alpha+1} & \cdots & s_{\beta,k+\alpha+j-1} \end{bmatrix} \qquad\text{(N4)}$$

Furthermore, a « past signal + » matrix $S_p^{+}$ and a « future signal − » matrix $S_f^{-}$ are defined:

$$S_p^{+} = S_{\alpha+1,k,j} \qquad\text{(N5)}$$

$$S_f^{-} = S_{\beta-1,k+\alpha+1,j} \qquad\text{(N6)}$$

Specific procedure (P): from the state vector signal $x(t)$, with sequence length $j$ and index $k$ given, the state vector sequence $x_k$ is defined as

$$x_k = x_{k,j} = \begin{bmatrix} x(k) & x(k+1) & \cdots & x(k+j-1) \end{bmatrix} \qquad\text{(P1)}$$

A « past state » sequence $X_p$ and a « future state » sequence $X_f$ are defined from (P1) by

$$X_p = x_{k,j} \qquad\text{(P2)}$$

$$X_f = x_{k+\alpha,j} \qquad\text{(P3)}$$

$$X_f^{+} = x_{k+\alpha+1,j} \qquad\text{(P4)}$$
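In NumPy terms, procedures (N) and (P) amount to building block-Hankel data matrices. The following sketch is only an illustration (zero-based indexing; the function names are ours), not the authors' code:

```python
import numpy as np

def extended_vector(s, i, k):
    """(N1): stack s(k), s(k+1), ..., s(k+i-1) into one column; s has shape (nsamples, size)."""
    return np.concatenate([s[k + q] for q in range(i)])

def hankel_matrix(s, i, k, j):
    """(N2): columns are the extended vectors s_{i,k}, s_{i,k+1}, ..., s_{i,k+j-1}."""
    return np.column_stack([extended_vector(s, i, k + q) for q in range(j)])

def past_future(s, alpha, beta, k, j):
    """Past / future splits (N3)-(N6) for given block sizes alpha and beta."""
    S_p  = hankel_matrix(s, alpha, k, j)                  # (N3)
    S_f  = hankel_matrix(s, beta,  k + alpha, j)          # (N4)
    S_pp = hankel_matrix(s, alpha + 1, k, j)              # (N5)  "past +"
    S_fm = hankel_matrix(s, beta - 1, k + alpha + 1, j)   # (N6)  "future -"
    return S_p, S_f, S_pp, S_fm

# example: a 2-output signal with 100 samples
s = np.random.default_rng(1).standard_normal((100, 2))
S_p, S_f, S_pp, S_fm = past_future(s, alpha=5, beta=5, k=0, j=60)
```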

    1 INTRODUCTION

Among the numerous methods available today in the field of modal analysis, time domain methods [1] are becoming more and more important as an alternative to the classical frequency domain ones [2]. Among them, subspace methods are especially attractive since they require no prior manipulation of the data. A comprehensive survey on these methods can be found in the literature [3, 4]. These methods are currently applied in three typical situations: i) deterministic identification, ii) pure stochastic identification, iii) mixed deterministic-stochastic identification. The present paper is developed within the framework of the third situation, but it could obviously be degenerated to the second one.

In such situations, the identification process is of a stochastic nature, so the question of the statistical validity of the identified parameters arises, whatever the identification algorithm may be. Although modal frequencies are generally identified with accuracy, this is seldom true for the damping ratios. For these reasons, providing a confidence range associated with any estimated parameter is the only way to ensure valuable information.

The issue of the statistical analysis of the parameters identified by subspace methods was tackled by several authors in the last decade. From the early 90's, a series of papers by Viberg [4, 5, 6], Jansson [7], Bauer [8, 9, 10], Deistler [11], Peternell [12] and Knudsen [13] thoroughly investigated the consistency conditions of the subspace methods, providing at the same time the basic elements for a statistical analysis of the parameters.

In the present paper a method proposed by Viberg [6] is adapted to the identification of the damping ratios, rewritten in the frameworks of the N4SID algorithm [3] and the IVM algorithm [4], then applied to several cases in simulation, and finally to an experimental case.

    2 STATE EQUATIONS OF STRUCTURES

The dynamic equation of a structure, under the linear time-invariant hypothesis, is usually expressed in the form

$$M\ddot{x} + C\dot{x} + Kx = f \qquad\text{(2.1)}$$

where the matrices $M$, $C$, $K$ stand respectively for the mass, damping and stiffness matrices, and $x$ and $f$ for the displacement and external force vectors. The following vectors are usually defined:

$$X = \begin{bmatrix} x^T & \dot{x}^T \end{bmatrix}^T \quad\text{and}\quad F = \begin{bmatrix} f^T & 0 \end{bmatrix}^T \qquad\text{(2.2 a, b)}$$

standing for the state variable and the associated force vector.

While writing

$$A_c = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix} \qquad\text{(2.3 a)}$$

and

$$B_c = \begin{bmatrix} 0 & M^{-1} \\ M^{-1} & -M^{-1}CM^{-1} \end{bmatrix} \qquad\text{(2.3 b)}$$

the above notations allow the classical differential form

$$\dot{X} = A_c X + B_c F \qquad\text{(2.4)}$$

Generally, only a small number of the structure degrees of freedom are observed, so that a vector $Y$ of observations must be defined:

$$Y = C_c X \qquad\text{(2.5)}$$

$C_c$ being the « observation matrix ».

The continuous dynamic equation defined in this way is then discretized using a classical procedure. The resulting discrete system of state equations becomes

$$\begin{cases} x(k+1) = A\,x(k) + B\,u(k) + w(k) \\ y(k) = C\,x(k) + D\,u(k) + v(k) \end{cases} \qquad\text{(2.6)}$$

In the present paper, the following formulation in terms of accelerations [14] will be considered:

$$\begin{cases} x(k+1) = A'\,x(k) + B'\,\delta u(k) + w(k) \\ y(k) = C'\,x(k) + D'\,\delta u(k) + v(k) \end{cases} \qquad\text{(2.7)}$$

with $A' = A = e^{A_c\,\delta T}$, $B' = A_c B$, $C' = C = C_c$, $D' = D$ and $\delta u(k) = u(k+1) - u(k)$.
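For reference, the following minimal NumPy/SciPy sketch (ours, not the authors' code) assembles $A_c$ of (2.3 a) from given $M$, $C$, $K$ matrices and forms the discrete transition matrix $A = e^{A_c\,\delta T}$ shared by (2.6) and (2.7); the input, output and feedthrough matrices are omitted, since their exact form depends on the sensor layout and on the acceleration formulation of [14].

```python
import numpy as np
from scipy.linalg import expm

def continuous_state_matrix(M, C, K):
    """A_c of (2.3 a) for the state X = [x; x_dot]."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-Minv @ K,        -Minv @ C]])

def discrete_transition(Ac, dT):
    """A = exp(Ac * dT), the discrete transition matrix of (2.6) and (2.7)."""
    return expm(Ac * dT)

# toy 2-DOF example with arbitrary values, for illustration only
M = np.diag([2.0, 1.5])
K = np.array([[3.0e5, -1.0e5], [-1.0e5, 1.0e5]])
C = 1e-4 * K                         # light proportional damping, assumed
Ac = continuous_state_matrix(M, C, K)
A = discrete_transition(Ac, dT=1.0e-3)
poles = np.linalg.eigvals(Ac)        # continuous poles lambda_k
print(np.abs(poles.imag) / (2 * np.pi))   # damped natural frequencies, in Hz
```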

Some general assumptions are used:

a) the process and measurement noises are assumed to be stationary, zero-mean, ergodic, white Gaussian random processes with covariance

$$E\left[\begin{bmatrix} w_p \\ v_p \end{bmatrix}\begin{bmatrix} w_q^T & v_q^T \end{bmatrix}\right] = \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix}\delta_{pq} \qquad\text{(2.8)}$$

b) the input sequence $u(k)$ is assumed to be an arbitrary quasi-stationary deterministic sequence, uncorrelated with the process and measurement noises [1]. Furthermore, the input sequence $u(k)$ is persistently exciting of order $2i$ [1, 3].

c) the system is assumed to be observable, which implies that the extended observability matrix $\Gamma_i$ has full rank, equal to the system order $n$.
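Assumption b) can be checked numerically on a recorded input sequence: the input is persistently exciting of order $2i$ when its Hankel matrix with $2i$ block rows has full row rank. A small sketch of such a check, assuming a single scalar input for simplicity (our illustration):

```python
import numpy as np

def persistently_exciting(u, order, j=None):
    """True if the scalar input sequence u is persistently exciting of the given
    order, i.e. its Hankel matrix with `order` block rows has full row rank."""
    u = np.asarray(u, dtype=float).ravel()
    if j is None:
        j = len(u) - order + 1
    H = np.column_stack([u[q:q + order] for q in range(j)])
    return np.linalg.matrix_rank(H) == order

rng = np.random.default_rng(2)
print(persistently_exciting(rng.standard_normal(1000), order=2 * 11))      # random input: True
print(persistently_exciting(np.sin(0.3 * np.arange(1000)), order=2 * 11))  # single sine: False
```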


    3 N4SID: MODEL AND FORMULATION

From the system of equations (2.7), the following relations are easily proved through a recursive process, using the procedure (N1):

$$y_{i,k} = \Gamma_i\,x(k) + \mathrm{H}_i\,\delta u_{i,k} + \Psi_i\,w_{i,k} + v_{i,k} \qquad\text{(3.1)}$$

$$x(k+i) = A^{i}x(k) + \Delta_i\,\delta u_{i,k} + \mathrm{E}_i\,w_{i,k} \qquad\text{(3.2)}$$

where $\Gamma_i$ is called the « extended observability matrix », and $\mathrm{H}_i$, $\Delta_i$, $\Psi_i$, $\mathrm{E}_i$ are respectively attached to the excitation vector (the first two) and to the stochastic effects (the last two). These matrices have the following structure:

     

$$\Gamma_i = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{i-1} \end{bmatrix} \qquad
\mathrm{H}_i = \begin{bmatrix} D & 0 & \cdots & 0 \\ CB' & D & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ CA^{i-2}B' & \cdots & CB' & D \end{bmatrix} \qquad\text{(3.3 a, b)}$$

$$\Psi_i = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ C & 0 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ CA^{i-2} & \cdots & C & 0 \end{bmatrix} \qquad\text{(3.3 c)}$$

$$\Delta_i = \begin{bmatrix} A^{i-1}B' & \cdots & AB' & B' \end{bmatrix} \qquad\text{(3.3 d)}$$

$$\mathrm{E}_i = \begin{bmatrix} A^{i-1} & \cdots & A & I \end{bmatrix} \qquad\text{(3.3 e)}$$
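For illustration, the deterministic blocks (3.3 a, b) can be assembled as follows once $A$, $B'$, $C$, $D$ are known; the stochastic blocks $\Psi_i$ and $\mathrm{E}_i$ follow the same pattern. This is our sketch, not the authors' implementation:

```python
import numpy as np

def observability(A, C, i):
    """Gamma_i = [C; CA; ...; CA^{i-1}]   (3.3 a)."""
    blocks, P = [], np.eye(A.shape[0])
    for _ in range(i):
        blocks.append(C @ P)
        P = P @ A
    return np.vstack(blocks)

def input_toeplitz(A, B, C, D, i):
    """H_i, lower block-triangular Toeplitz of Markov parameters   (3.3 b)."""
    l, m = D.shape
    markov = [D] + [C @ np.linalg.matrix_power(A, q) @ B for q in range(i - 1)]
    H = np.zeros((i * l, i * m))
    for r in range(i):
        for c in range(r + 1):
            H[r*l:(r+1)*l, c*m:(c+1)*m] = markov[r - c]
    return H

# toy dimensions, arbitrary values
n, l, m, i = 4, 2, 1, 5
rng = np.random.default_rng(3)
A, B, C, D = rng.standard_normal((n, n)) * 0.3, rng.standard_normal((n, m)), \
             rng.standard_normal((l, n)), rng.standard_normal((l, m))
Gamma = observability(A, C, i)         # shape (i*l, n)
H     = input_toeplitz(A, B, C, D, i)  # shape (i*l, i*m)
```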

Using the generic procedures (N1, N3, N4), a « past observation » matrix $Y_p$ and a « future observation » matrix $Y_f$ are defined. Similarly, the matrices $\Delta U_p$, $\Delta U_f$, $W_p$, $W_f$, $V_p$, $V_f$ are constructed. From equations (3.1), (3.2) and using the specific procedures (P2, P3), the following relations are obtained:

$$\begin{cases} Y_p = \Gamma_\alpha X_p + \mathrm{H}_\alpha\,\Delta U_p + \Psi_\alpha W_p + V_p \\ Y_f = \Gamma_\beta X_f + \mathrm{H}_\beta\,\Delta U_f + \Psi_\beta W_f + V_f \\ X_f = A^{\alpha}X_p + \Delta_\alpha\,\Delta U_p + \mathrm{E}_\alpha W_p \end{cases} \qquad\text{(3.4 a, b, c)}$$

As part of this formulation in terms of accelerations, the instrumental variable, by analogy with the one generally adopted in the literature [3, 15, 16], is

$$P_p = \begin{bmatrix} \Delta U_p \\ Y_p \end{bmatrix} \qquad\text{(3.5)}$$

The N4SID algorithm based on the state sequences uses the oblique projection of the row space of the matrix $Y_f$ onto the row space of the instrumental variable matrix $P_p$, parallel to the row space of the matrix $\Delta U_f$. This quantity $O_\beta$ is equal [3, 14] to the product of the estimate of the observability matrix $\Gamma_\beta$ by the « future states » $X_f$, plus the oblique projection of $\Psi_\beta W_f + V_f$. When the sequence length $j$ tends to infinity, the oblique projection of the noise vanishes.

$$O_\beta = Y_f/_{\Delta U_f}\,P_p \qquad\text{(3.6)}$$

$$O_\beta = \hat\Gamma_\beta \hat X_f + \big(\Psi_\beta W_f + V_f\big)/_{\Delta U_f}\,P_p \;\xrightarrow[j\to\infty]{}\; \hat\Gamma_\beta \hat X_f \qquad\text{(3.7)}$$

The SVD of $O_\beta$ gives

$$O_\beta = USV^{T} \qquad\text{(3.8)}$$

and, after inspecting the singular values of $S$, the matrices are partitioned:

$$O_\beta = USV^{T} = \begin{bmatrix} U_s & U_n \end{bmatrix}\begin{bmatrix} S_s & 0 \\ 0 & S_n \end{bmatrix}\begin{bmatrix} V_s^T \\ V_n^T \end{bmatrix} \qquad\text{(3.9)}$$

with $S_s$ containing the $r$ most significant singular values of $S$, $r$ being the cutting rank.

The estimates of $\Gamma_\beta$ and $X_f$ are then defined, somewhat arbitrarily, by

$$\hat\Gamma_\beta = U_s \qquad\text{(3.10)}$$

$$\hat X_f = S_s V_s^{T} \qquad\text{(3.11)}$$

and are obtained to within a similarity transformation.
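The chain (3.6)-(3.11) therefore reduces to an oblique projection, an SVD and a rank-$r$ truncation. A minimal sketch of that step (ours; the data matrices below are random placeholders standing in for $Y_f$, $\Delta U_f$ and $P_p$):

```python
import numpy as np

def oblique(A, B, C):
    """A /_B C: oblique projection of the row space of A onto the row space of C along B."""
    Pb = np.eye(B.shape[1]) - B.T @ np.linalg.pinv(B @ B.T) @ B
    return (A @ Pb) @ np.linalg.pinv(C @ Pb) @ C

def n4sid_subspace(Yf, dUf, Pp, r):
    """Eqs (3.6)-(3.11): O_beta = Yf /_dUf Pp, SVD, truncation at cutting rank r,
    then Gamma_beta_hat = U_s and Xf_hat = S_s V_s^T (up to a similarity transformation)."""
    O = oblique(Yf, dUf, Pp)                              # (3.6)
    U, s, Vt = np.linalg.svd(O, full_matrices=False)      # (3.8)
    Us, Ss, Vts = U[:, :r], np.diag(s[:r]), Vt[:r, :]     # (3.9)
    Gamma_hat = Us                                        # (3.10)
    Xf_hat = Ss @ Vts                                     # (3.11)
    return Gamma_hat, Xf_hat

rng = np.random.default_rng(4)
Yf  = rng.standard_normal((10, 200))
dUf = rng.standard_normal((5, 200))
Pp  = rng.standard_normal((15, 200))
Gamma_hat, Xf_hat = n4sid_subspace(Yf, dUf, Pp, r=4)
```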

The estimates of the matrices $A$ and $C$ can be extracted by using the « shift property » of $\Gamma_\beta$. After that, the following oblique projection must be defined to obtain the estimate of the state vector $X_f^{+}$:

$$O_\beta^{-} = Y_f^{-}/_{\Delta U_f^{-}}\,P_p^{+} \qquad\text{(3.12)}$$

$$\hat O_\beta^{-} \;\xrightarrow[j\to\infty]{}\; \hat\Gamma_{\beta-1}\,\hat X_f^{+} \qquad\text{(3.13)}$$

Then the estimates of the matrices $B$ and $D$ are obtained by solving the following equations in the least square sense:

$$\begin{bmatrix} \hat X_f^{+} \\ Y_{1,k+\alpha,j} \end{bmatrix} = \begin{bmatrix} \hat A \\ \hat C \end{bmatrix}\hat X_f + \begin{bmatrix} \hat B \\ \hat D \end{bmatrix}\Delta U_{1,k+\alpha,j} \qquad\text{(3.14)}$$

    4 POLE ESTIMATE

Using the shift-structure property of $\Gamma$, the estimate of the system matrix $A$ can be obtained. Two selection matrices must be introduced [6]:

$$J_1 = \begin{bmatrix} I_{(\beta-1)l} & 0_{(\beta-1)l\times l} \end{bmatrix} \qquad\text{(4.1)}$$

$$J_2 = \begin{bmatrix} 0_{(\beta-1)l\times l} & I_{(\beta-1)l} \end{bmatrix} \qquad\text{(4.2)}$$

The error in the estimate of $A$ may be written

$$\tilde A = \hat A - A = \big(J_1\hat\Gamma_\beta\big)^{\dagger}\big(J_2\hat\Gamma_\beta - J_1\hat\Gamma_\beta A\big) \qquad\text{(4.3)}$$

Considering that, at first-order approximation,

$$\hat\Gamma_\beta \approx \Gamma_\beta \qquad\text{(4.4)}$$

equation (4.3) becomes

$$\tilde A \approx \big(J_1\Gamma_\beta\big)^{\dagger}\big(J_2\hat\Gamma_\beta - J_1\hat\Gamma_\beta A\big) \qquad\text{(4.5)}$$

Assuming that the matrix $A$ has distinct eigenvalues $\mu_k$, it can be diagonalized:

$$A = \Phi A_d \Phi^{-1} \qquad\text{(4.6)}$$
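The « shift property » invoked here states that $J_1\Gamma_\beta A = J_2\Gamma_\beta$, so that $A$ can be recovered from the estimated observability matrix by a pseudo-inverse. A short numerical sketch of this extraction and of the diagonalization (4.6), on an exact noise-free $\Gamma_\beta$ (our illustration):

```python
import numpy as np

def estimate_A_from_gamma(Gamma_hat, l):
    """Shift property: J1 Gamma A = J2 Gamma, hence A_hat = (J1 Gamma)^+ (J2 Gamma);
    J1 drops the last block row of size l, J2 drops the first one (eqs 4.1-4.2)."""
    G1 = Gamma_hat[:-l, :]      # J1 Gamma_hat
    G2 = Gamma_hat[l:, :]       # J2 Gamma_hat
    return np.linalg.pinv(G1) @ G2

# check on an exact observability matrix (no noise): recovered poles match eig(A)
rng = np.random.default_rng(5)
n, l, beta = 4, 2, 6
A = rng.standard_normal((n, n)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
C = rng.standard_normal((l, n))
Gamma = np.vstack([C @ np.linalg.matrix_power(A, q) for q in range(beta)])
A_hat = estimate_A_from_gamma(Gamma, l)
mu, Phi = np.linalg.eig(A_hat)               # (4.6): A = Phi A_d Phi^{-1}
print(np.sort_complex(mu), np.sort_complex(np.linalg.eigvals(A)))
```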

So, at first-order approximation, the errors on the eigenvalues of $A$ can be written

$$\tilde\mu_k \approx \big(\Phi^{-1}\big)_{k\bullet}\,\tilde A\,\Phi_{\bullet k} \qquad\text{(4.7)}$$

Inserting (4.5) into (4.7) gives

$$\tilde\mu_k \approx \big(\Phi^{-1}\big)_{k\bullet}\big(J_1\Gamma_\beta\big)^{\dagger}\big(J_2\hat\Gamma_\beta - J_1\hat\Gamma_\beta A\big)\Phi_{\bullet k} \qquad\text{(4.8)}$$

As

$$A\,\Phi_{\bullet k} = \mu_k\,\Phi_{\bullet k} \qquad\text{(4.9)}$$

then

$$\tilde\mu_k \approx \big(\Phi^{-1}\big)_{k\bullet}\big(J_1\Gamma_\beta\big)^{\dagger}\big(J_2 - \mu_k J_1\big)\hat\Gamma_\beta\,\Phi_{\bullet k} \qquad\text{(4.10)}$$


Lemma [17]: let two matrices $A$ ($m\times r$) and $B$ ($r\times n$) both be of rank $r$; then

$$\big(AB\big)^{\dagger} = B^{\dagger}A^{\dagger} \qquad\text{(4.11)}$$

Considering, on the one hand, the extended observability matrix in the diagonal realization,

$$\Gamma_\beta^{d} = \Gamma_\beta\,\Phi \qquad\text{(4.12)}$$

and, on the other hand, the lemma (4.11),

$$\Phi^{-1}\big(J_1\Gamma_\beta\big)^{\dagger} = \big(J_1\Gamma_\beta\Phi\big)^{\dagger} = \big(J_1\Gamma_\beta^{d}\big)^{\dagger} \qquad\text{(4.13)}$$

Using (3.10) and (4.12), the following relation can be written

$$\Phi = U_s^{T}\,\Gamma_\beta^{d} \qquad\text{(4.14)}$$

The error on the pole $\mu_k$ can then be obtained by inserting (3.10), (4.13) and (4.14) into (4.10):

$$\tilde\mu_k = f_k^{*}\,\tilde U_s\,U_s^{T}\big(\hat\Gamma_\beta^{d}\big)_{\bullet k} \qquad\text{(4.15)}$$

with

$$f_k^{*} = \Big[\big(J_1\Gamma_\beta^{d}\big)^{\dagger}\Big]_{k\bullet}\big(J_2 - \mu_k J_1\big) \qquad\text{(4.16)}$$
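Assuming the block structure of $J_1$ and $J_2$ given in (4.1)-(4.2), the sensitivity row vector $f_k^{*}$ of (4.16) can be formed directly, and its orthogonality to $\Gamma_\beta^{d}$ (used below) can be verified numerically; a sketch of ours:

```python
import numpy as np

def sensitivity_row(Gamma_d, mu_k, k, l):
    """f_k^* = [ (J1 Gamma_beta^d)^+ ]_{k,:} (J2 - mu_k J1)   (eq. 4.16),
    with J1 / J2 realised by dropping the last / first block row of size l."""
    rows = Gamma_d.shape[0]
    J1 = np.hstack([np.eye(rows - l), np.zeros((rows - l, l))])   # (4.1)
    J2 = np.hstack([np.zeros((rows - l, l)), np.eye(rows - l)])   # (4.2)
    pinv_row_k = np.linalg.pinv(J1 @ Gamma_d)[k, :]
    return pinv_row_k @ (J2 - mu_k * J1)

# toy usage: Gamma_beta^d built from a diagonal A_d and C_d = C Phi (arbitrary values)
rng = np.random.default_rng(6)
n, l, beta = 4, 2, 6
mu = 0.9 * np.exp(1j * rng.uniform(0, np.pi, n)); mu[2:] = mu[:2].conj()
Cd = rng.standard_normal((l, n)) + 1j * rng.standard_normal((l, n))
Gamma_d = np.vstack([Cd * mu**q for q in range(beta)])
f2 = sensitivity_row(Gamma_d, mu[1], k=1, l=l)
print(np.allclose(f2 @ Gamma_d, 0.0, atol=1e-9))   # f_k^* is orthogonal to Gamma_beta^d
```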

Let us define the matrix

$$\hat G = \tfrac{1}{j}\,O_\beta \qquad\text{(4.17)}$$

The SVD of $\hat G$ can be expressed as

$$\hat G = \hat U_s\hat S_{js}\hat V_s^{T} + \hat U_n\hat S_{jn}\hat V_n^{T} \qquad\text{(4.18)}$$

with

$$\hat S_{js} = \tfrac{1}{j}\,\hat S_s \quad\text{and}\quad \hat S_{jn} = \tfrac{1}{j}\,\hat S_n \qquad\text{(4.19 a, b)}$$

From equations (3.6) and (4.18), multiplying both sides by $\hat V_s$, the following equation is obtained

$$\tfrac{1}{j}\,Y_f\,\Pi_{\Delta U_f}^{\perp}\,P_p^{T}\,\hat W_c\,\hat V_s = \hat U_s\,\hat S_{js} \qquad\text{(4.20 a)}$$

with

$$\hat W_c = \Big(\tfrac{1}{j}\,P_p\,\Pi_{\Delta U_f}^{\perp}\,P_p^{T}\Big)^{-1}\tfrac{1}{j}\,P_p \qquad\text{(4.20 b)}$$

Theorem: let the deterministic vectors $\xi$ and $\varsigma$ satisfy

$$\xi^{*}\Gamma_\beta = \varsigma^{*}\Gamma_\beta = 0 \qquad\text{(4.21)}$$

Pre-multiplying equation (4.20 a) by $\xi^{*}$ and post-multiplying it by $\hat S_{js}^{-1}$ gives

$$\tfrac{1}{j}\,\xi^{*}N_f\,\Pi_{\Delta U_f}^{\perp}\,P_p^{T}\,\hat W_c\,\hat V_s\,\hat S_{js}^{-1} = \xi^{*}\hat U_s \qquad\text{(4.22 a)}$$

with

$$N_f = \Psi_\beta W_f + V_f \qquad\text{(4.22 b)}$$

Using the definition of the projector $\Pi_{\Delta U_f}^{\perp}$, equation (4.22 a) can be written

$$\xi^{*}\hat U_s = \Big(\tfrac{1}{j}\,\xi^{*}N_f\,P_p^{T} - \tfrac{1}{j}\,\xi^{*}N_f\,\Delta U_f^{T}\,R_{ff}^{-1}R_{fp}\Big)\hat W_c\,\hat V_s\,\hat S_{js}^{-1} \qquad\text{(4.25 a)}$$

with

$$R_{ff} = \Delta U_f\,\Delta U_f^{T} \qquad\text{(4.25 b)}$$

$$R_{fp} = \Delta U_f\,P_p^{T} \qquad\text{(4.25 c)}$$

From assumptions a) and b), the central limit theorem [9] shows that the elements

$$\tfrac{1}{j}\,N_f\,P_p^{T} \quad\text{and}\quad \tfrac{1}{j}\,N_f\,\Delta U_f^{T} \qquad\text{(4.26 a, b)}$$

are $O_p(1)$ and have a limiting zero-mean Gaussian distribution. Hence, the asymptotic distribution of $\sqrt{j}\,\xi^{*}\hat U_s\,\hat S_s^{-1}$ is the same as that of the quantity

$$\sqrt{j}\,\xi^{*}\hat U_s = \sqrt{j}\,\Big(\tfrac{1}{j}\,\xi^{*}N_f\,Z_f^{T}\Big)H \qquad\text{(4.27 a)}$$

with

$$Z_f = \begin{bmatrix} \zeta_{p,k} & \zeta_{p,k+1} & \cdots & \zeta_{p,k+j-1} \end{bmatrix} \qquad\text{(4.27 b)}$$

using the procedure (N2), and

$$H = \begin{bmatrix} I \\ -R_{ff}^{-1}R_{fp} \end{bmatrix}\hat W_c\,\hat V_s\,\hat S_{js}^{-1} \qquad\text{(4.27 c)}$$

Replacing $\xi^{*}$ by $f_k^{*}$ and $\varsigma^{*}$ by $f_l^{*}$ in equation (4.27 a), and post-multiplying both sides by $U_s^{T}\big(\hat\Gamma_\beta^{d}\big)_{\bullet k}$, yields

$$\sqrt{j}\,f_k^{*}\,\tilde U_s\,U_s^{T}\big(\hat\Gamma_\beta^{d}\big)_{\bullet k} = \sqrt{j}\,f_k^{*}\Big(\tfrac{1}{j}\,N_f\,Z_f^{T}\Big)b_k \qquad\text{(4.28 a)}$$

$$\sqrt{j}\,f_l^{*}\,\tilde U_s\,U_s^{T}\big(\hat\Gamma_\beta^{d}\big)_{\bullet l} = \sqrt{j}\,f_l^{*}\Big(\tfrac{1}{j}\,N_f\,Z_f^{T}\Big)b_l \qquad\text{(4.28 b)}$$

with

$$b_h = H\,U_s^{T}\big(\hat\Gamma_\beta^{d}\big)_{\bullet h}, \qquad h\in\{k,l\} \qquad\text{(4.28 c)}$$

It is easy to prove [7] that $f_k^{*}$ is orthogonal to $\Gamma_\beta^{d}$, $\Gamma_\beta$ and $U_s$. M. Viberg et al. [5, 6, 16] showed that $\sqrt{j}\,f_k^{*}\,\hat W_r^{-1/2}\,\tilde U_s$ (in the N4SID case $\hat W_r = I$) has a limiting zero-mean Gaussian distribution, with covariance

$$\lim_{j\to\infty} E\Big[\big(\sqrt{j}\,f_k^{*}\,\hat W_r^{-1/2}\,\tilde U_s\big)^{T}\big(\sqrt{j}\,f_l^{*}\,\hat W_r^{-1/2}\,\tilde U_s\big)\Big] = \big(H^{T}R_{\zeta\zeta}\,H\big)\sum_{|\tau|<\beta} f_k(\tau)\,R_{nn}(\tau)\,f_l^{cT}(\tau) \qquad\text{(4.29 a)}$$

     

$$\lim_{j\to\infty} E\Big[\big(\sqrt{j}\,f_k^{*}\,\hat W_r^{-1/2}\,\tilde U_s\big)^{*}\big(\sqrt{j}\,f_l^{*}\,\hat W_r^{-1/2}\,\tilde U_s\big)\Big] = \big(H^{*}R_{\zeta\zeta}\,H\big)\sum_{|\tau|<\beta} f_k(\tau)\,R_{nn}(\tau)\,f_l^{*}(\tau) \qquad\text{(4.29 b)}$$

A third relation can be added to the first two:

$$\lim_{j\to\infty} E\Big[\big(\sqrt{j}\,f_k^{*}\,\hat W_r^{-1/2}\,\tilde U_s\big)^{*}\big(\sqrt{j}\,f_l^{*}\,\hat W_r^{-1/2}\,\tilde U_s\big)^{c}\Big] = \big(H^{*}R_{\zeta\zeta}\,H^{c}\big)\sum_{|\tau|<\beta} f_k^{c}(\tau)\,R_{nn}(\tau)\,f_l^{*T}(\tau) \qquad\text{(4.29 c)}$$

In the left-hand side of equations (4.28 a, b), the product $f_k^{*}\,\tilde U_s\,U_s^{T}\big(\hat\Gamma_\beta^{d}\big)_{\bullet k}$ is equal to $\tilde\mu_k$ (4.15). Inserting (4.29 a, b, c) into equation (4.15), the asymptotic normal distributions of the poles $\mu_k$, for $k\in\{1,\dots,n\}$, are obtained:

$$\lim_{j\to\infty} j\,E\big[\tilde\mu_k\,\tilde\mu_k\big] = \big(b_k^{T}R_{\zeta\zeta}\,b_k\big)\sum_{|\tau|<\beta} f_k(\tau)\,R_{nn}(\tau)\,f_k^{cT}(\tau) \qquad\text{(4.30 a)}$$


$$\lim_{j\to\infty} j\,E\big[\tilde\mu_k\,\tilde\mu_k^{c}\big] = \big(b_k^{T}R_{\zeta\zeta}\,b_k^{c}\big)\sum_{|\tau|<\beta} f_k(\tau)\,R_{nn}(\tau)\,f_k^{*}(\tau) \qquad\text{(4.30 b)}$$

$$\lim_{j\to\infty} j\,E\big[\tilde\mu_k^{c}\,\tilde\mu_k^{c}\big] = \big(b_k^{cT}R_{\zeta\zeta}\,b_k^{c}\big)\sum_{|\tau|<\beta} f_k^{c}(\tau)\,R_{nn}(\tau)\,f_k^{*T}(\tau) \qquad\text{(4.30 c)}$$

    5 STATISTICAL ERROR ON DAMPING RATIOS

Let $\tilde\lambda_k$ be the error on the pole $\lambda_k$; the sum of $\tilde\lambda_k$ and its conjugate gives

$$\tilde\lambda_k + \tilde\lambda_k^{c} = -2\big(\omega_k\eta_k - \hat\omega_k\hat\eta_k\big) \qquad\text{(5.1)}$$

At first-order approximation, the estimate of $\lambda_k$ being very close to the true value, the following relation can be written

$$\tilde\lambda_k + \tilde\lambda_k^{c} \approx -2\,\hat\omega_k\big(\eta_k - \hat\eta_k\big) = 2\,\hat\omega_k\,\tilde\eta_k \qquad\text{(5.2)}$$

The following variable $\tilde\rho_k$ is defined as

$$\tilde\rho_k = -\tfrac{1}{2}\big(\tilde\lambda_k + \tilde\lambda_k^{c}\big) \qquad\text{(5.3)}$$

The covariance of $\tilde\rho_k$ gives

$$E\big[\tilde\rho_k\tilde\rho_k\big] = \tfrac{1}{4}\Big(E\big[\tilde\lambda_k\tilde\lambda_k\big] + 2E\big[\tilde\lambda_k\tilde\lambda_k^{c}\big] + E\big[\tilde\lambda_k^{c}\tilde\lambda_k^{c}\big]\Big) = E\big[\big(\hat\omega_k\tilde\eta_k\big)^{2}\big] \qquad\text{(5.4)}$$

Since the estimate of $\omega_k$ can be considered exact, equation (5.4) can be written

$$E\big[\tilde\rho_k\tilde\rho_k\big] = \hat\omega_k^{2}\,E\big[\tilde\eta_k^{2}\big] \qquad\text{(5.5)}$$

Between the discrete and continuous time matrices, the following relation holds:

$$A_d = e^{A_c^{d}\,\delta T} \qquad\text{(5.6)}$$

which gives

$$\forall k,\quad \mu_k = e^{\lambda_k\,\delta T} \qquad\text{(5.7)}$$

After differentiation of (5.7),

$$\delta\mu_k = \delta T\,e^{\lambda_k\,\delta T}\,\delta\lambda_k \qquad\text{(5.8)}$$

which can be expressed as

$$\tilde\lambda_k = \frac{1}{\delta T}\,\frac{\tilde\mu_k}{\mu_k} \qquad\text{(5.9)}$$

Inserting (5.9) and its conjugate into (5.4), and using (5.5), yields

$$E\big[\tilde\eta_k^{2}\big] = \frac{1}{4\,\hat\omega_k^{2}\,\delta T^{2}}\left(\frac{E\big[\tilde\mu_k\tilde\mu_k\big]}{\mu_k^{2}} + 2\,\frac{E\big[\tilde\mu_k\tilde\mu_k^{c}\big]}{\mu_k\,\mu_k^{c}} + \frac{E\big[\tilde\mu_k^{c}\tilde\mu_k^{c}\big]}{\big(\mu_k^{c}\big)^{2}}\right) \qquad\text{(5.10)}$$

This equation provides the relation between the covariance of $\eta_k$ and the covariances of the pole $\mu_k$ and of its conjugate.
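As a practical summary of (5.7), (5.9) and (5.10), the sketch below converts hypothetical second-order moments of a discrete pole error into a standard deviation on the damping ratio; the numerical values are placeholders, not results from the paper:

```python
import numpy as np

def damping_ratio_std(mu, dT, E_mumu, E_mumuc, E_mucmuc):
    """Eq. (5.10): variance of the damping-ratio error from the second-order
    moments of the discrete pole error E[mu~ mu~], E[mu~ mu~^c], E[mu~^c mu~^c]."""
    lam = np.log(mu) / dT                 # continuous pole, from eq. (5.7)
    omega = abs(lam)                      # modal pulsation estimate
    # the damping ratio estimate itself would be eta = -lam.real / omega
    var_eta = (E_mumu / mu**2 + 2.0 * E_mumuc / (mu * np.conj(mu))
               + E_mucmuc / np.conj(mu)**2) / (4.0 * omega**2 * dT**2)
    return float(np.sqrt(var_eta.real))

# hypothetical numbers, for illustration only
mu = np.exp((-2.0 + 1j * 209.0) * 1.0e-3)     # pole of a ~33 Hz, lightly damped mode
print(damping_ratio_std(mu, dT=1.0e-3,
                        E_mumu=1e-10 + 0j, E_mumuc=2e-10, E_mucmuc=1e-10 - 0j))
```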

    6 EXPERIMENTAL VALIDATION

The above theory was tested, on the one hand, on a simulated spring-damper-mass system and, on the other hand, on an experimental structure representing a two-floor building.

    6.1 Simulated spring-damper-mass system

This simulated system has five degrees of freedom; it is made up of five masses (2, 2.5, 2, 1.8 and 4 kg), six springs (180, 100, 200, 100, 85 and 120 kN/m) and five dampers (1, 2.5, 0.5, 3 and 0.1 N·s/m). Using the state equations in terms of accelerations, this system was simulated in the following conditions. Two random excitation forces are applied to nodes two and four, and two acceleration signals are collected. A white noise signal is added to these output signals; it corresponds to a noise-to-signal ratio of about 0.5 percent.
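A sketch of how such a spring-damper-mass chain can be assembled is given below; it is only our illustration, since the exact connectivity of the simulated model (in particular the attachment of the sixth spring and the placement of the dampers) is not specified in the text, so a chain grounded at both ends is assumed:

```python
import numpy as np

masses  = [2.0, 2.5, 2.0, 1.8, 4.0]                    # kg
springs = [180e3, 100e3, 200e3, 100e3, 85e3, 120e3]    # N/m, ground-m1-...-m5-ground assumed
dampers = [1.0, 2.5, 0.5, 3.0, 0.1]                    # N.s/m, placement between masses assumed

def chain_matrix(coeffs, n):
    """Stiffness-like matrix of a chain of n masses with n+1 elements, both ends grounded."""
    Kmat = np.zeros((n, n))
    for e, c in enumerate(coeffs):
        if e > 0:      Kmat[e-1, e-1] += c
        if e < n:      Kmat[e, e] += c
        if 0 < e < n:  Kmat[e-1, e] -= c; Kmat[e, e-1] -= c
    return Kmat

n = len(masses)
M = np.diag(masses)
K = chain_matrix(springs, n)               # 6 springs: chain closed at both ends
C = chain_matrix(dampers + [0.0], n)       # 5 dampers: same topology, last link absent (assumption)
frequencies = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real)) / (2 * np.pi)
print(frequencies)    # undamped natural frequencies of the assumed layout, in Hz
```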

    Figure 1: Schema of the simulated system

Two hundred and fifty simulated data files of 4,000 points each have been analysed to obtain the distribution diagrams presented below. Figures 2 and 3 show the distributions of the frequency and of the damping ratio of the second mode obtained after the identification process, with the identification order $i$ equal to 11 and the truncation rank $r$ equal to 10. The distributions have a Gaussian shape centred around a central value. These central values are respectively 33.2768 Hz and 7.358×10⁻⁴.

    Figure 2: Distribution of the frequency

    Figure 3: Distribution of the damping ratio

Figure 4 shows the distribution of the statistical error estimate on the damping ratio. Figure 5 shows the distribution of the ratio between the statistical error estimate and the real error on the damping ratio: approximately 95% of the results are bounded by the values –2 and +2, which complies with a Gaussian distribution. Similar diagrams are obtained for the other modes.



    Figure 4: Distribution of the statistical error estimate on damping ratio

Figure 5: Distribution of the ratio statistical error estimate / real error

    6.2 Experimental structure

The experimental structure roughly represents the steel framework of a two-floor building (figure 6). Observed in its axial direction, this structure exhibits four « main modes », associated with the following mode shapes: i) in-phase floor translation, ii) opposite-phase floor translation, iii) in-phase floor rotation, iv) opposite-phase floor rotation. Some local modes also exist in the structure, such as the floor plate modes and the column beam modes, but they are only weakly excited.

    Figure 6: Experimental structure

At a top node, the experimental framework was submitted to a pointwise random excitation by means of a 10 newton exciter, and it was observed at two other nodes by means of two accelerometers.

A frequency domain analysis was carefully performed with the Brüel & Kjær « Pulse » system, which provides very accurate results, at least for the main modes of the experimental structure. The damping coefficients were obtained by zooming around each modal frequency and by averaging 400 sample sequences of 4,096 points on one channel, with a 75% overlap. The damping ratios of the four main modes obtained with the « Pulse » system are presented in the third column of table 1; the corresponding frequencies are shown in the second column.

N°   F (Hz)   η (%), « Pulse »   η (%), r=10   r=26     r=42     r=60
1    35.008   0.44               0.601         0.416    0.447    0.444
2    60.484   0.086              0.0932        0.0948   0.0948   0.0948
3    127.22   0.071              0.0812        0.0769   0.0766   0.0766
4    207.32   0.146              0.1417        0.1415   0.1427   0.1424

Table 1: Experimental and identified modes (the last four columns give the identified damping ratios for several cutting ranks r)

In this practical validation, 250 experimental records of 4,096 points per channel have been analysed, but two main problems were encountered.

The first one concerns the stability of the main modes while recording the data files: the frequency and damping ratio values of these modes, obtained after the identification process, evolved between the first and the last record. Figures 7 and 8 present the evolution of the frequency and of the damping ratio of the first mode as a function of the analysed file number.

    Figure 7: First mode frequency evolution

    Figure 8: First mode damping ratio evolution



Because of this variation, neither the « Pulse » damping ratio values nor the experimental means resulting from the identification process could be used as a reference for a statistical validation, and another method had to be used. To solve the problem, a local mean computed over 20 values around the observed value was used to estimate the « real error ». Figure 9 presents, for the first mode, the distribution of the ratio between the statistical error estimate and the estimated « real error ». This distribution still complies with Gaussian specifications.

Figure 9: First mode – distribution of the ratio statistical error estimate / estimated « real error »

Figure 10 presents the same parameter for the third mode. The distribution keeps a Gaussian aspect, but the limits of the 95% confidence interval are reduced.

Figure 10: Third mode – distribution of the ratio statistical error estimate / estimated « real error »

The second problem comes from the identification order $\beta$ and the cutting rank $r$. In a real or experimental structure, the exact number of modes is unknown, and the classical difficulty lies in the choice of these parameters. In this study case, an identification order of 60 and cutting ranks ranging from 10 to 60 have been chosen. Table 1 presents the damping ratio median values of the four main modes when $r$ equals 10, 26, 42 and 60. These median values agree with the « Pulse » ones, except for the second mode.

    7 CONCLUSION

The study above shows that, in the case of a simulated structure with white noise, the statistical error estimate analysis on damping ratios provides reliable results. In this case, the identification process raises no particular problem because the number of modes is known. Moreover, there are no weakly excited modes, because the number of excitation and output points is sufficient.

In a classical subspace identification procedure, the quality of the identification depends on parameters such as the number of exciters $m$, the number of output sensors $l$, the identification order $i$, the cutting rank $r$ and the sample size $s$.

For practical reasons, the number of simultaneous exciters generally reduces to one, sometimes two for large structures. The question then arises of the best choice for the other parameters. This choice clearly depends on the experimental case, for instance on the measurement noises, which can be coloured, on the modal densities, and on the possible presence of small non-linearities. Practical experimentation provides an empirical optimal choice for these parameters, although no real theory is able to confirm this choice today.

In the present practical case, the number of exciters and the number of output sensors are respectively equal to 1 and 2. It is obvious that some modes are weakly excited, and that some modes provide the output sensors with a weak signal because of the sensor distance. As the exact number of modes present is not known, the identification order was arbitrarily fixed to 60 for computation time reasons. This choice gives better results than lower values, but the statistical error estimate remains overvalued for some modes, for instance for the third one (figure 10).

For all the identification order values $\beta$ and cutting rank values $r$ tested, the statistical error estimate never underestimates the real error. In this case, the statistical error estimate may therefore be used to evaluate the errors made on the damping ratios during the identification process.

    REFERENCES

[1] Ljung, L., System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.

[2] Ewins, D., Modal Testing: Theory and Practice, Research Studies Press Ltd, Somerset, England, 1995.
[3] Van Overschee, P., Subspace Identification: Theory – Implementation – Application, PhD Thesis, Katholieke Universiteit Leuven, Belgium, February 1995.
[4] Viberg, M., Subspace-based methods for the identification of linear time-invariant systems, Automatica, Vol. 31, No. 12, pp. 1835-1851, Elsevier Science Ltd, 1995.
[5] Viberg, M., Ottersten, B., Wahlberg, B., Ljung, L., A statistical perspective on state-space modeling using subspace methods, Proceedings 30th IEEE Conference on Decision and Control, pp. 1337-1342, Brighton, England, 1991.
[6] Viberg, M., Wahlberg, B., Ottersten, B., Analysis of state space system identification methods based on instrumental variables and subspace fitting, Automatica, Vol. 33, No. 9, pp. 1603-1616, Elsevier Science Ltd, 1997.
[7] Jansson, M., Wahlberg, B., A linear regression approach to state-space subspace system identification, Signal Processing, Vol. 52, pp. 103-129, 1996.
[8] Bauer, D., Deistler, M., Scherrer, W., The analysis of the asymptotic variance of subspace algorithms, IFAC System Identification, pp. 1037-1041, Fukuoka, Japan, 1997.
[9] Bauer, D., Deistler, M., Scherrer, W., Consistency and asymptotic normality of some subspace algorithms for systems without observed inputs, Automatica, Vol. 35, pp. 1243-1254, Elsevier Science Ltd, 1999.



[10] Bauer, D., Jansson, M., Analysis of the asymptotic properties of the MOESP type of subspace algorithms, Automatica, Vol. 36, pp. 497-509, Elsevier Science Ltd, 2000.
[11] Deistler, M., Peternell, K., Scherrer, W., Consistency and relative efficiency of subspace methods, Automatica, Vol. 31, No. 12, pp. 1865-1875, Elsevier Science Ltd, 1995.

[12] Peternell, K., Scherrer, W., Deistler, M., Statistical analysis of novel subspace identification methods, Signal Processing, Vol. 52, pp. 161-177, Elsevier Science Ltd, 1996.
[13] Knudsen, T., Consistency analysis of subspace identification methods based on a linear regression approach, Automatica, Vol. 37, pp. 81-89, Elsevier Science Ltd, 2001.
[14] Raffy, M., Gontier, C., A subspace method of modal analysis using acceleration signals, Proceedings IMAC-XX, pp. 824-830, Los Angeles, U.S.A., 2001.
[15] Ottersten, B., Viberg, M., A subspace-based instrumental variable method for state-space system identification, 10th IFAC Symposium on System Identification, Copenhagen, Denmark, 1994.

[16] Viberg, M., Ottersten, B., Wahlberg, B., Ljung, L., Performance of subspace-based system identification methods, Proceedings IFAC 93, Vol. 7, pp. 369-372, Sydney, Australia, 1993.
[17] Albert, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, 1972.