

    Stochastics and Statistics

The moments and central moments of a compound distribution

Robert W. Grubbström, Ou Tang

Department of Production Economics, Linköping Institute of Technology, SE-581 83 Linköping, Sweden

Received 4 September 2002; accepted 4 June 2004. Available online 25 August 2004.

    Abstract

The compound distribution is of interest for the study of production/inventory problems, since it provides a flexible description of the stochastic properties of the system. However, due to the difficulties involved in obtaining analytical results for the compound distribution, studies are usually limited to searching for a good approximation by replacing a more complex model with a simpler one applying only the first few moments as parameters.

This paper presents general closed-form formulae for the moments and central moments of any order of a compound distribution made up of non-negative stochastic variables. The Laplace and z-transform methods play an important role in this study. The importance of taking into consideration higher-order moments, when computing a safety factor for inventory control, is illustrated in a numerical example.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Stochastic processes; Inventory; Moments; Central moments; Compound distribution

    1. Introduction

Compound distributions are instances of mixtures. Mixtures are obtained when the density $f$ of a stochastic variable $W$ depends on a second stochastic variable $X$ having a density $g(x)$, which we write $f(w\,|\,X)$. The mixture will then have the density $h(w)=\int_{-\infty}^{\infty} f(w\,|\,x)\,g(x)\,\mathrm{d}x$. When a stochastic number $M$ of stochastic variables $Y_k$, $k = 1, 2, \ldots, M$, are added together, the mixture becomes a compound distribution. With $W=\sum_{k=1}^{M} Y_k$, we have $h(w)=\sum_{i=0}^{\infty} f(w\,|\,i)\,g_i$, where $f(w\,|\,i)$ is the density for the sum $\sum_{k=1}^{i} Y_k$ conditional on $M=i$, and $g_i$ the probability that $M=i$.


An example of a compound distribution is the demand from a store, where the number of customers and the demand from each of them both are independent random variables. A second example is the total demand during the lead time for an acquisition, when the lead time and the demand rate both are stochastic.

As to general formulae for compound distributions, they appear to be scarce in the literature. Sir Maurice Kendall and Alan Stuart [1, p. 157] state an expression corresponding to $\varphi_W(t)=\varphi_M\!\left(\ln\varphi_Y(t)/\sqrt{-1}\right)$ for the characteristic function of a compound distribution made up of a sum of $M$ independent identically distributed variables $Y_k$, each having the characteristic function $\varphi_Y(t)$, and where $M$ is random having the characteristic function $\varphi_M(t)$. A comparison with our approach is given in Section 4 below. A similar expression, instead using the related generating functions, is given in Feller [2, p. 269].

The compound distribution is of interest for solving production/inventory problems because it can be used to describe in more detail various stochastic properties of the system under study compared to simpler approaches. Bagchi et al. [3] summarised analytical models of compound distributions for modelling production/inventory systems. However, the few analytical models that exist do not easily provide tractable results, and a further analysis of the production/inventory system becomes very complex.

The alternative way to deal with a compound distribution is to use approximations, such as a normal distribution. However, such a simplification may not fit the actual distribution sufficiently well, and the consequent analysis may therefore contain substantial errors [4]. In order to avoid such disadvantages of approximations, Lau [4] and other researchers [5-7] adopted other types of distributions for approximation purposes, such as the Pearson family of distributions, with the aim of obtaining a better fit to real data. In this case, the moments of the distribution had to be calculated up to the fourth order to determine the parameters of the Pearson curve. It is claimed that inventory control parameters, such as the optimal reorder point, are easily obtained using this approach.

In the study of stochastic models, moment-generating functions play an important role. For our purposes, we will use the Laplace and z-transforms as moment-generating functions. The moments of non-negative continuous and discrete stochastic variables have a close association with properties of the Laplace and z-transforms of the corresponding density functions.

In this paper, we first discuss some basics of transforms and their use as moment-generating functions, including the connection between moments and central moments, and vice versa. We then develop general closed-form formulae for calculating moments of any order of a compound distribution. It is shown that the Laplace transform of the compound distribution can be written as a combination of the z-transform and the Laplace transform of the processes involved. Although transforms have been used for moment-generating purposes for a very long time (since the days of Pierre Simon Laplace, 1749-1827), this combination appears to be novel, although, as a starting point, there is some resemblance with the formula provided by Kendall and Stuart [1]. As examples illustrating the use of our closed-form expressions, calculations of the first five moments and central moments of a general compound distribution are provided. The importance of using higher-order moments in inventory control is illustrated by a numerical example in Section 6.

As always, it might be questioned what the use of a general formula, beyond the fourth moment and heading possibly into infinity, might be. Why think about four dimensions, or more, when we almost always are satisfied with two? For all domestic purposes the Earth is essentially flat. The answer, of course, is that we, or someone in the future, might be interested in the higher-order results, and that here there is a procedure providing an answer in closed form. If this answer were not provided here, we, and they, might have to wait some additional time to find out. Obviously, some have been interested in moments up to the fourth order, but why did they happen to stop just there? In this paper, the authors did not.


    2. Notation

$\tilde f(s) = \mathcal{L}\{f(x)\} = \int_0^{\infty} f(x)\,e^{-sx}\,\mathrm{d}x$: Laplace transform of any function $f(x)$ of a continuous variable $x$, $x \geq 0$, where $s$ is the complex Laplace frequency.

$\hat g(z) = \mathcal{Z}\{g_m\} = \sum_{m=0}^{\infty} g_m z^m$: z-transform of a function $g_m$ of a discrete variable $m$, $m = 0, 1, \ldots$. A more conventional definition is $\mathcal{Z}\{g_m\} = \sum_{m=0}^{\infty} g_m z^{-m}$; however, this makes no difference in the developments to follow, and the present definition provides more compact expressions.

$\mu_k(X) = E[X^k] = \int_{-\infty}^{\infty} x^k f(x)\,\mathrm{d}x$: $k$th moment of a random variable $X$ having the probability density function $f(x)$.

$\mu(X) = E[X] = \int_{-\infty}^{\infty} x\, f(x)\,\mathrm{d}x$: first moment of a random variable $X$, also identified as the expectation or the mean of $X$.

$\mu'_k(X) = E[(X-\mu(X))^k] = \int_{-\infty}^{\infty} (x-\mu(X))^k f(x)\,\mathrm{d}x$: $k$th central moment of a random variable $X$ having the probability density function $f(x)$, $k = 1, 2, \ldots$.

$f^{(k)}(x) = \mathrm{d}^k f(x)/\mathrm{d}x^k$: short-hand notation for the $k$th derivative of a function $f(x)$.

    3. Transforms and basics of moments

According to Kendall and Stuart [1, p. 57], or Feller [2, p. 213], the moment of order $r$ about a point $a$ is defined as
$$\int_{-\infty}^{\infty} (x-a)^r \,\mathrm{d}F = \int_{-\infty}^{\infty} (x-a)^r f(x)\,\mathrm{d}x = E[(X-a)^r]. \qquad (1)$$

The transformation between moments about $a$ and moments about $b$ is easily found to obey

$$\sum_{j=0}^{m} \binom{m}{j} (a-b)^{m-j} E[(X-a)^j] = E\left[\sum_{j=0}^{m} \binom{m}{j} (a-b)^{m-j} (X-a)^j\right] = E[(X-b)^m]. \qquad (2)$$

In particular, when $a = 0$ or when $b = 0$, we obtain

$$E[(X-b)^m] = \sum_{j=0}^{m} \binom{m}{j} (-b)^{m-j} E[X^j] = \sum_{j=0}^{m} \binom{m}{j} (-b)^{m-j} \mu_j(X), \qquad (3)$$
$$\mu_m(X) = E[X^m] = \sum_{j=0}^{m} \binom{m}{j} a^{m-j} E[(X-a)^j]. \qquad (4)$$

The moments about zero are simply called moments, $\mu_m(X) = E[X^m]$, and moments about the mean $\mu(X) = E[X]$ are called central moments, $\mu'_m(X) = E[(X-\mu(X))^m]$. The first central moment is always zero, $\mu'_1(X) = 0$. Thus we have


$$\mu'_m(X) = \sum_{j=0}^{m} \binom{m}{j} (-\mu(X))^{m-j} \mu_j(X), \qquad (5)$$
$$\mu_m(X) = \sum_{j=0}^{m} \binom{m}{j} \mu(X)^{m-j} \mu'_j(X). \qquad (6)$$

It is well known that the Laplace transform can be used as a moment-generating function for non-negative stochastic variables $X$:
$$\tilde f^{(n)}(0) = \lim_{s\to 0} \int_0^{\infty} \frac{\mathrm{d}^n}{\mathrm{d}s^n} e^{-sx} f(x)\,\mathrm{d}x = (-1)^n \lim_{s\to 0} \int_0^{\infty} x^n e^{-sx} f(x)\,\mathrm{d}x = (-1)^n \int_0^{\infty} x^n f(x)\,\mathrm{d}x = (-1)^n \mu_n(X). \qquad (7)$$
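As a quick illustration of (7) (ours, not the paper's): for an exponential density $f(x)=\lambda e^{-\lambda x}$ the Laplace transform is $\tilde f(s)=\lambda/(s+\lambda)$, and differentiating $n$ times at $s=0$ should return $(-1)^n n!/\lambda^n$, i.e. $(-1)^n\mu_n(X)$. A small check, assuming SymPy is available:

```python
import sympy as sp

s, lam = sp.symbols('s lambda', positive=True)
f_tilde = lam / (s + lam)            # Laplace transform of the exponential density

for n in range(1, 6):
    deriv_at_zero = sp.diff(f_tilde, s, n).subs(s, 0)           # \tilde f^{(n)}(0)
    moment = (-1) ** n * deriv_at_zero                          # Eq. (7)
    print(n, sp.simplify(moment - sp.factorial(n) / lam ** n))  # prints 0 for every n
```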

This method applies to continuous as well as discrete (and mixed) distributions.

Similarly, for the discrete probability $g_m$ of a stochastic variable $M$ we may use the z-transform to generate its moments. Interpreting $g_m$, $m = 0, 1, 2, \ldots$, in continuous time as a sequence of coefficients of Dirac impulses $\delta(0), \delta(t-T), \delta(t-2T), \ldots$, distanced by the constant interval $T$, the Laplace transform of $\sum_{m=0}^{\infty} g_m\,\delta(t-mT)$ will be $\mathcal{L}\{g_m\} = \sum_{m=0}^{\infty} g_m e^{-msT}$, and its moments using the substitution $z = e^{-sT}$ may be calculated according to
$$\mu_n(M) = (-1)^n \lim_{s\to 0}\sum_{m=0}^{\infty}\frac{\mathrm{d}^n}{\mathrm{d}s^n}e^{-smT}g_m = (-1)^n \lim_{s\to 0}\frac{\mathrm{d}^n}{\mathrm{d}s^n}\left(\sum_{m=0}^{\infty}z^m g_m\right) = (-1)^n \lim_{s\to 0}\frac{\mathrm{d}^n \hat g(z(s))}{\mathrm{d}s^n} = T^n \lim_{z\to 1} z\frac{\mathrm{d}}{\mathrm{d}z}\left(z\frac{\mathrm{d}}{\mathrm{d}z}\left(\cdots z\frac{\mathrm{d}\hat g}{\mathrm{d}z}\right)\right) = T^n \lim_{z\to 1}\frac{\mathrm{d}^n \hat g}{\mathrm{d}(\ln z)^n}, \qquad (8)$$
since we have $\frac{\mathrm{d}}{\mathrm{d}s} = \frac{\mathrm{d}z}{\mathrm{d}s}\frac{\mathrm{d}}{\mathrm{d}z} = -Tz\frac{\mathrm{d}}{\mathrm{d}z} = -T\frac{\mathrm{d}}{\mathrm{d}\ln z}$, cf. [8]. Without loss of generality, in the following we define the interval as unity, $T = 1$.
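For instance (our illustration): a Poisson-distributed $M$ with parameter $\lambda$ has $\hat g(z) = e^{\lambda(z-1)}$ under the definition in Section 2, and applying (8) with $T = 1$ reproduces the well-known Poisson raw moments. A SymPy sketch, writing $u = \ln z$:

```python
import sympy as sp

lam, u = sp.symbols('lambda u', positive=True)   # u stands for ln z
g_hat = sp.exp(lam * (sp.exp(u) - 1))            # \hat g(z) with z = e^u, M ~ Poisson(lambda)

for n in range(1, 5):
    moment = sp.limit(sp.diff(g_hat, u, n), u, 0)   # Eq. (8): d^n g / d(ln z)^n at z = 1
    print(n, sp.expand(moment))
# 1  lambda
# 2  lambda**2 + lambda
# 3  lambda**3 + 3*lambda**2 + lambda
# 4  lambda**4 + 6*lambda**3 + 7*lambda**2 + lambda
```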

Using a Taylor expansion around $z = 1$ and a binomial expansion, we may write the z-transform of any discrete function as
$$\hat g(z) = \sum_{i=0}^{\infty} \frac{\hat g^{(i)}(1)}{i!}(z-1)^i = \sum_{i=0}^{\infty} \frac{\hat g^{(i)}(1)}{i!}\sum_{j=0}^{i}\binom{i}{j} z^j (-1)^{i-j}. \qquad (9)$$

The $n$th derivative of the z-transform with respect to $\ln z$ is therefore
$$\frac{\mathrm{d}^n \hat g(z)}{\mathrm{d}(\ln z)^n} = \sum_{i=0}^{\infty} \frac{\hat g^{(i)}(1)}{i!}\sum_{j=0}^{i}\binom{i}{j} j^n e^{j\ln z} (-1)^{i-j}. \qquad (10)$$

The $n$th moment of a discrete distribution of a discrete random variable $M$ can thus be written as
$$\mu_n(M) = \lim_{z\to 1} \frac{\mathrm{d}^n \hat g(z)}{\mathrm{d}(\ln z)^n} = \sum_{i=0}^{n} \frac{\hat g^{(i)}(1)}{i!}\sum_{j=0}^{i}\binom{i}{j} j^n (-1)^{i-j} = \sum_{i=0}^{n} a_{ni}\,\hat g^{(i)}(1), \qquad (11)$$
where the coefficients $a_{ni}$ are defined as
$$a_{ni} = \frac{1}{i!}\sum_{j=0}^{i}\binom{i}{j} j^n (-1)^{i-j}. \qquad (12)$$

The first set of these numbers is displayed in Table 1. These coefficients are numerically the same as those appearing in the z-transform of $t^k$ and have a close relationship with Bernoulli numbers. The sum in (12) above has been truncated at $n$, since the coefficients vanish for $i > n$ (for a proof, see [9]).


For further values and various properties of the $a_{ni}$, reference is made to [10], in which the $a_{ni}$ are defined slightly differently. However, one simple property of the $a_{ni}$ (adjusted to our definition) we repeat here, namely that
$$a_{ni} = a_{n-1,i-1} + i\,a_{n-1,i}, \qquad (13)$$
which provides an easy recursive method for calculating all values of $a_{ni}$ (a computational sketch is given after Table 1 below).

Alternatively, we can describe the derivatives of the z-transform as functions of the moments. Letting $\hat g(z) = h(\ln z)$ and $h^{(i)}(\ln z) = \mathrm{d}^i \hat g(z)/\mathrm{d}(\ln z)^i$, and using a Taylor expansion for the logarithm, we obtain the following expression:

$$\hat g(z) = \sum_{k=0}^{\infty} \frac{h^{(k)}(0)}{k!}(\ln z)^k = h(0) + \sum_{k=1}^{\infty} \frac{h^{(k)}(0)}{k!}\left[\sum_{j=1}^{\infty} \frac{(-1)^{j-1}(z-1)^j}{j}\right]^k$$
$$= h(0) + \sum_{k=1}^{\infty} \frac{h^{(k)}(0)}{k!}(-1)^k \sum_{j_1, j_2, \ldots, j_k \geq 1} \frac{(-1)^{j_1}(-1)^{j_2}\cdots(-1)^{j_k}}{j_1 j_2 \cdots j_k}\,(z-1)^{j_1+j_2+\cdots+j_k}. \qquad (14)$$

The limit of its $i$th derivative becomes
$$\hat g^{(i)}(1) = \lim_{z\to 1}\frac{\mathrm{d}^i \hat g(z)}{\mathrm{d}z^i} = i!\sum_{k=0}^{i}\frac{(-1)^{i-k} h^{(k)}(0)}{k!}\sum_{\substack{j_1, j_2, \ldots, j_k \geq 1\\ j_1+j_2+\cdots+j_k = i}} \frac{1}{j_1 j_2 \cdots j_k}$$
$$= i!\sum_{k=0}^{i}\frac{(-1)^{i-k}\mu_k(M)}{k!}\sum_{\substack{j_1, j_2, \ldots, j_k \geq 1\\ j_1+j_2+\cdots+j_k = i}} \frac{1}{j_1 j_2 \cdots j_k} = \sum_{k=0}^{i} b_{ik}\,\mu_k(M), \qquad (15)$$

since all terms apart from those having $\sum_{l=1}^{k} j_l = i$ vanish in the limit, and where the coefficients $b_{ik}$ are defined by
$$b_{ik} = (-1)^{i-k}\,\frac{i!}{k!}\sum_{\substack{j_1, j_2, \ldots, j_k \geq 1\\ j_1+j_2+\cdots+j_k = i}} \frac{1}{j_1 j_2 \cdots j_k}. \qquad (16)$$

Table 1
Values of the coefficients $a_{ni}$ for $n, i \leq 10$

 n\i |  0   1    2     3      4      5      6     7    8   9  10
  0  |  1
  1  |  0   1
  2  |  0   1    1
  3  |  0   1    3     1
  4  |  0   1    7     6      1
  5  |  0   1   15    25     10      1
  6  |  0   1   31    90     65     15      1
  7  |  0   1   63   301    350    140     21     1
  8  |  0   1  127   966   1701   1050    266    28    1
  9  |  0   1  255  3025   7770   6951   2646   462   36   1
 10  |  0   1  511  9330  34105  42525  22827  5880  750  45   1


Comparing the two sets of coefficients $a_{ni}$ and $b_{ik}$ in (12) and (16), we find that they are orthogonal in the sense that
$$\sum_{i=0}^{n} a_{ni}\,b_{ik} = \begin{cases} 1, & \text{if } n = k,\\ 0, & \text{if } n \neq k. \end{cases} \qquad (17)$$
This means that when the coefficients $a_{ni}$ and $b_{ik}$ are arranged as two triangular matrices, they will be each other's inverses.

The multiple summations in (16) are computationally cumbersome. Fortunately, the coefficients $b_{ik}$ have some simple properties making them easy to evaluate. In [10, p. 121, Eq. (9)] it is shown that $b_{i1}$ must be $(i-1)!\,(-1)^{i-1}$ for $i \geq 1$ (adopting our definitions from above). Also, we must have $b_{ii} = 1$ for all $i$, since the matrix of the $a_{ni}$ is triangular with unit values along its main diagonal. The main property to be used for evaluating these coefficients is the following recursive formula which the $b_{ik}$ obey:

$$b_{ik} = b_{i-1,k-1} - (i-1)\,b_{i-1,k}, \quad i, k \geq 1. \qquad (18)$$
Applying this procedure beginning with $b_{00} = 1$, all coefficients $b_{ik}$ are then directly obtained. The first few values of the $b_{ik}$ are displayed in Table 2 (a computational sketch is given after that table). A proof by mathematical induction of the recursive relation (18) is the following. Let $\delta_{ik}$ denote Kronecker's delta, i.e. $\delta_{ik} = 1$ when $i = k$ and zero otherwise. We know that $\sum_{j=0}^{i} b_{ij}a_{jk} = \delta_{ik}$ for $i, k \leq n$ and need to show that $\sum_{j=0}^{n+1} b_{n+1,j}a_{jk} = \delta_{n+1,k}$, if $b_{n+1,j}$ is given by (18), i.e., if $b_{n+1,j} = b_{n,j-1} - n\,b_{nj}$, which determines the $b_{n+1,j}$ uniquely. Developing
$$\sum_{j=0}^{n+1} b_{n+1,j}a_{jk} = \sum_{j=0}^{n+1}\left(b_{n,j-1} - n\,b_{nj}\right)a_{jk}$$
and using $a_{jk} = a_{j-1,k-1} + k\,a_{j-1,k}$ according to (13), we obtain
$$\sum_{j=0}^{n+1} b_{n+1,j}a_{jk} = \sum_{j=0}^{n+1}\left(b_{n,j-1} - n\,b_{nj}\right)a_{jk} = \sum_{j=1}^{n+1} b_{n,j-1}\left(a_{j-1,k-1} + k\,a_{j-1,k}\right) - n\sum_{j=0}^{n} b_{nj}a_{jk} - n\,b_{n,n+1}a_{n+1,k}$$
$$= \delta_{n,k-1} + k\,\delta_{nk} - n\,\delta_{nk},$$
since the triangular form of the $b_{ij}$ requires $b_{n,n+1} = 0$. Hence if $n + 1 = k$ then $\delta_{n,k-1} = 1$ and $\delta_{nk} = 0$, requiring $\sum_{j=0}^{n+1} b_{n+1,j}a_{jk} = 1$, and if $n + 1 > k$ then $\sum_{j=0}^{n+1} b_{n+1,j}a_{jk} = 0$, concluding our proof.

Further properties of the $b_{ik}$ are that each row sum of the matrix formed by the $b_{ik}$ is zero, apart from the first two rows. Also, the diagonal immediately below the main diagonal of this matrix consists of the triangular numbers taken with a negative sign.

Table 2
Values of the coefficients $b_{ik}$ for $i, k \leq 10$

 i\k |  0        1         2          3        4        5       6      7    8    9  10
  0  |  1
  1  |  0        1
  2  |  0       -1         1
  3  |  0        2        -3          1
  4  |  0       -6        11         -6        1
  5  |  0       24       -50         35      -10        1
  6  |  0     -120       274       -225       85      -15       1
  7  |  0      720     -1764       1624     -735      175     -21      1
  8  |  0    -5040     13068     -13132     6769    -1960     322    -28    1
  9  |  0    40320   -109584     118124   -67284    22449   -4536    546  -36    1
 10  |  0  -362880   1026576   -1172700   723680  -269325   63273  -9450  870  -45   1
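Analogously (again our own sketch, not the authors' code), the $b_{ik}$ of Table 2 follow from the recursion (18), and the orthogonality property (17) can be verified numerically, i.e. the two triangular matrices are mutual inverses:

```python
def a_table(n_max):
    """Coefficients a_ni, built from the recursion (13)."""
    a = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    a[0][0] = 1
    for n in range(1, n_max + 1):
        for i in range(1, n + 1):
            a[n][i] = a[n - 1][i - 1] + i * a[n - 1][i]
    return a

def b_table(n_max):
    """Eq. (18): b_ik = b_(i-1),(k-1) - (i-1) * b_(i-1),k, starting from b_00 = 1."""
    b = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    b[0][0] = 1
    for i in range(1, n_max + 1):
        for k in range(1, i + 1):
            b[i][k] = b[i - 1][k - 1] - (i - 1) * b[i - 1][k]
    return b

N = 10
a, b = a_table(N), b_table(N)
print(b[4][:5])   # [0, -6, 11, -6, 1], i.e. row i = 4 of Table 2
# Orthogonality, Eq. (17): sum_i a_ni * b_ik equals 1 when n = k and 0 otherwise.
for n in range(N + 1):
    for k in range(N + 1):
        assert sum(a[n][i] * b[i][k] for i in range(N + 1)) == (1 if n == k else 0)
```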


    4. Moments of a compound distribution

As stated above, the following sum is defined as a stochastic variable having a compound distribution, cf. Feller [2, p. 213],
$$W = \sum_{k=1}^{M} Y_k, \qquad (19)$$
where the $Y_k$ are mutually independent random variables with a common distribution having the probability density function $f(y)$, and where $M$ is a random variable having a discrete distribution $g_m$ independent of the $Y_k$. The stochastic variables $M$ and $Y_k$ represent the number of events and the size (intensity) of each event, respectively. $M$ has a discrete distribution, but the $Y_k$ can be either discrete or continuous (or a mixture thereof).

The probability of $W$ taking on a value between $w$ and $w + \mathrm{d}w$ is therefore $\Pr(w \leq W \ldots$


and have the limits
$$\tilde w^{(n)}(0) = 0^n + n!\sum_{i=1}^{n}\frac{\hat g^{(i)}(1)}{i!}\sum_{\substack{l_1, l_2, \ldots, l_i \geq 1\\ l_1+l_2+\cdots+l_i = n}}\ \prod_{j=1}^{i} \tilde f^{(l_j)}(0)/l_j!. \qquad (24)$$
From $\sum_{j=1}^{i} l_j = n$ and $l_j \geq 1$ for all $j$, we have $\sum_{j=1}^{i} l_j \geq i$ and $n \geq i$. Therefore all terms with $i$ larger than $n$ vanish. As an intermediate result, we obtain the $n$th moment of $W$ expressed in terms of the moments of $Y$, the latter all being of an order less than or equal to $n$, and the z-transform derivatives $\hat g^{(i)}(1)$, $i = 0, 1, 2, \ldots, n$:

$$\mu_n(W) = 0^n + n!\sum_{i=1}^{n}\frac{\hat g^{(i)}(1)}{i!}\sum_{\substack{l_1, l_2, \ldots, l_i \geq 1\\ l_1+l_2+\cdots+l_i = n}}\ \prod_{j=1}^{i} \mu_{l_j}(Y)/l_j!. \qquad (25)$$

However, we prefer to use the coefficients $h^{(i)}(0) = \mu_i(M)$ instead of the $\hat g^{(i)}(1)$, since the former represent moments of $M$. Inserting (15) into (25) gives us

$$E[W^n] = 0^n + n!\sum_{i=1}^{n}\left\{\frac{1}{i!}\sum_{k=1}^{i} b_{ik}E[M^k]\right\}\left\{\sum_{\substack{l_1, l_2, \ldots, l_i \geq 1\\ l_1+l_2+\cdots+l_i = n}}\ \prod_{j=1}^{i} E[Y^{l_j}]/l_j!\right\}$$
$$= 0^n + n!\sum_{i=1}^{n}\left\{\sum_{k=1}^{i}\frac{(-1)^{i-k}\mu_k(M)}{k!}\sum_{\substack{j_1, j_2, \ldots, j_k \geq 1\\ j_1+j_2+\cdots+j_k = i}}\frac{1}{j_1 j_2\cdots j_k}\right\}\left\{\sum_{\substack{l_1, l_2, \ldots, l_i \geq 1\\ l_1+l_2+\cdots+l_i = n}}\ \prod_{j=1}^{i}\frac{\mu_{l_j}(Y)}{l_j!}\right\}, \qquad (26)$$

where the $b_{ik}$ are defined by (16). Although this expression is rather complicated, we may note the two independent multiplicative factors, one depending on moments of $M$ up to the order of $i$, the other on moments of $Y$ up to the order of $n$. They can therefore be calculated separately for different values of $n$ and $i$. Also we may see that the number of terms in the right-hand factor is the binomial $\binom{n-1}{i-1}$, and in the left-hand factor it is $\sum_{k=1}^{i}\binom{i-1}{k-1} = 2^{i-1}$, if the $b_{ik}$ are to be calculated, or $i$ terms, if these coefficients are considered as known. For instance, for $n = 5$ and $i = 3$ we have $\binom{n-1}{i-1} = \binom{4}{2} = 6$ terms in the right-hand factor. Three of these terms contain the common factor $\mu(Y)^2\mu_3(Y)$ and the three others the factor $\mu(Y)\mu_2(Y)^2$. After adding common terms together, there are two terms left, see Eq. (32e) below, in which appears $4\left(3\mu(Y)^2\mu_3(Y)/3! + 3\mu(Y)\mu_2(Y)^2/(2!)^2\right) = 2\mu(Y)^2\mu_3(Y) + 3\mu(Y)\mu_2(Y)^2$, where the coefficient 4 comes from the left-hand factor.

We offer the following interpretations of these two factors. Concerning the first factor, we may interpret the intermediate sum $\frac{1}{i!}\sum_{k=1}^{i} b_{ik}\mu_k(M)$ in the following way. Consider
$$E\left[\sum_{k=0}^{i} b_{ik}M^k\right] = E\left[\sum_{k=0}^{i}\left(b_{i-1,k-1} - (i-1)\,b_{i-1,k}\right)M^k\right],$$
where the recursive relation (18) and basic properties of the $b_{ik}$ have been used. Letting $i$ step down one unit at a time gives us
$$\sum_{k=0}^{i} b_{ik}\,\mu_k(M) = E\left[\prod_{j=0}^{i-1}(M-j)\right] = i!\,E\!\left[\binom{M}{i}\right]. \qquad (27)$$
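The identity (27) states that $\sum_{k} b_{ik}\mu_k(M)$ is the $i$th factorial moment of $M$. A small check of this (our own example, with $M$ uniform on $\{1,2,3,4\}$):

```python
from math import prod

def b_table(n_max):
    """Eq. (18): b_ik = b_(i-1),(k-1) - (i-1) * b_(i-1),k."""
    b = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    b[0][0] = 1
    for i in range(1, n_max + 1):
        for k in range(1, i + 1):
            b[i][k] = b[i - 1][k - 1] - (i - 1) * b[i - 1][k]
    return b

support, probs = [1, 2, 3, 4], [0.25] * 4                     # M uniform on {1, 2, 3, 4}
mu = [sum(p * m ** k for m, p in zip(support, probs)) for k in range(6)]  # raw moments, mu[0] = 1

b = b_table(5)
for i in range(1, 6):
    lhs = sum(b[i][k] * mu[k] for k in range(i + 1))          # sum_k b_ik * mu_k(M)
    rhs = sum(p * prod(m - j for j in range(i)) for m, p in zip(support, probs))
    print(i, lhs, rhs)    # both columns agree; they equal E[M(M-1)...(M-i+1)] = i! E[C(M,i)]
```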


Regarding the second factor, we make a slightly different expansion of the third member of (22), keeping the zeroth-order term (for $l = 0$),
$$\left[\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)}{l!}s^l - 1\right]^i = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\left[\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)\,s^l}{l!}\right]^j = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\sum_{l_1=0}^{\infty}\frac{\tilde f^{(l_1)}(0)}{l_1!}\sum_{l_2=0}^{\infty}\frac{\tilde f^{(l_2)}(0)}{l_2!}\cdots\sum_{l_j=0}^{\infty}\frac{\tilde f^{(l_j)}(0)}{l_j!}\,s^{l_1+l_2+\cdots+l_j},$$
providing the limit
$$\lim_{s\to 0}\frac{\mathrm{d}^n}{\mathrm{d}s^n}\left[\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)}{l!}s^l - 1\right]^i = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\,n!\sum_{\substack{l_1, l_2, \ldots, l_j \geq 0\\ l_1+l_2+\cdots+l_j = n}}\ \prod_{k=1}^{j}\tilde f^{(l_k)}(0)/l_k!$$
$$= \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\sum_{\substack{l_1, l_2, \ldots, l_j \geq 0\\ l_1+l_2+\cdots+l_j = n}}\binom{n}{l_1\ l_2\ \cdots\ l_j}\prod_{k=1}^{j}\tilde f^{(l_k)}(0) = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\sum_{\substack{l_1, l_2, \ldots, l_j \geq 0\\ l_1+l_2+\cdots+l_j = n}}\binom{n}{l_1\ l_2\ \cdots\ l_j}E\left[\prod_{k=1}^{j}(-1)^{l_k}Y_k^{l_k}\right]$$
$$= \sum_{j=0}^{i}\binom{i}{j}(-1)^{n+i-j}E\left[\left(\sum_{k=1}^{j}Y_k\right)^{n}\right], \qquad (28)$$

where the $Y_k$ are independent. Therefore, we end up with the following expression, interpreting $\sum_{k=1}^{j}Y_k$ to be zero for $j = 0$:
$$\mu_n(W) = \sum_{i=0}^{n}\left\{E\!\left[\binom{M}{i}\right]\sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}E\left[\left(\sum_{k=1}^{j}Y_k\right)^{n}\right]\right\}. \qquad (29)$$

The formulae (26) and (29) thus provide general closed-form expressions for the moments of a general compound distribution in terms of the moments of the number of events $M$ and the moments of their intensities $Y$, where the $b_{ik}$ are defined by (16). For $n = 1$, from (26) we immediately obtain the obvious result $\mu(W) = b_{11}\mu(M)\mu(Y) = \mu(M)\mu(Y)$. Also, the right-hand factor collapses into the $a_{ni}$ given by Eq. (12) if $Y$ is set to unity with probability one (the case of a constant intensity). If we instead wish to express a relationship between the central moments involved, we may use the binomial expansions given by (5) and (6) and insert these into (26). This gives us expressions of a rather lengthy type, which appear difficult to simplify much further.
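The intermediate form (25) is straightforward to program. The sketch below is our own (not code from the paper): it enumerates the compositions of $n$ into $i$ positive parts, obtains $\hat g^{(i)}(1)$ from the raw moments of $M$ via (15), and returns $\mu_n(W)$; for $n = 2$ it reproduces the closed form (32b) of Section 5.

```python
from itertools import product
from math import factorial

def b_table(n_max):
    """Eq. (18): b_ik = b_(i-1),(k-1) - (i-1) * b_(i-1),k."""
    b = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    b[0][0] = 1
    for i in range(1, n_max + 1):
        for k in range(1, i + 1):
            b[i][k] = b[i - 1][k - 1] - (i - 1) * b[i - 1][k]
    return b

def compound_moment(n, mu_M, mu_Y):
    """Eq. (25): n-th raw moment of W = Y_1 + ... + Y_M.
    mu_M = [mu_1(M), mu_2(M), ...] and mu_Y = [mu_1(Y), mu_2(Y), ...] are raw moments."""
    b = b_table(n)
    muM = [1.0] + list(mu_M)
    muY = [1.0] + list(mu_Y)
    total = 0.0
    for i in range(1, n + 1):
        g_i = sum(b[i][k] * muM[k] for k in range(i + 1))   # \hat g^{(i)}(1) via Eq. (15)
        inner = 0.0
        for comp in product(range(1, n + 1), repeat=i):     # compositions l_1 + ... + l_i = n
            if sum(comp) == n:
                term = 1.0
                for l in comp:
                    term *= muY[l] / factorial(l)
                inner += term
        total += factorial(n) * g_i / factorial(i) * inner
    return total

# Illustration (our numbers): M uniform on {1,2,3,4}, Y uniform on [5,6]; cf. case (a) in Section 6.
mu_M = [2.5, 7.5, 25.0, 88.5, 325.0]
mu_Y = [5.5, 91 / 3, 167.75, 930.2, 31031 / 6]
print(compound_moment(2, mu_M, mu_Y))                         # 227.083...
# Closed form (32b): mu(M) mu_2(Y) + (mu_2(M) - mu(M)) mu(Y)^2 gives the same value.
print(mu_M[0] * mu_Y[1] + (mu_M[1] - mu_M[0]) * mu_Y[0] ** 2)
```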

A comparison between our method and the formula $\varphi_W(t) = \varphi_M\!\left(\ln\varphi_Y(t)/\sqrt{-1}\right)$ provided by Kendall and Stuart [1] is the following. The characteristic function of an individual $Y_k$ is defined as $\varphi_Y(t) = \int_{y=-\infty}^{\infty} e^{ty\sqrt{-1}} f(y)\,\mathrm{d}y$, where $t$ is real. For the sum $\sum_{k=1}^{j}Y_k$, where $j$ is given, the characteristic function will be $\left[\varphi_Y(t)\right]^j = \left[\int_{y=-\infty}^{\infty} e^{ty\sqrt{-1}} f(y)\,\mathrm{d}y\right]^j$, since the $Y_k$ are independent. Writing the density of $M$ as a sequence of Dirac impulses $g(x) = \sum_{j=0}^{\infty} g_j\,\delta(x-j)$, where $\delta(x-j)$ is an impulse at $x = j$ (cf. Section 3), the characteristic function of the distribution of $M$ is found to be $\varphi_M(t) = \int_{x=-\infty}^{\infty} e^{xt\sqrt{-1}}\sum_{j=0}^{\infty} g_j\,\delta(x-j)\,\mathrm{d}x = \sum_{j=0}^{\infty} g_j\,e^{jt\sqrt{-1}}$. The characteristic function of $W$ is developed as


$$\varphi_W(t) = \int_{w=-\infty}^{\infty} e^{wt\sqrt{-1}}\,h(w)\,\mathrm{d}w = \int_{w=-\infty}^{\infty} e^{wt\sqrt{-1}}\sum_{j=0}^{\infty} h(w\,|\,j)\,g_j\,\mathrm{d}w = \sum_{j=0}^{\infty} g_j\int_{w=-\infty}^{\infty} e^{wt\sqrt{-1}}\,h(w\,|\,j)\,\mathrm{d}w$$
$$= \sum_{j=0}^{\infty} g_j\left[\int_{y=-\infty}^{\infty} e^{yt\sqrt{-1}} f(y)\,\mathrm{d}y\right]^j = \sum_{j=0}^{\infty} g_j\left[\varphi_Y(t)\right]^j = \sum_{j=0}^{\infty} g_j\,e^{j\ln\varphi_Y(t)} = \varphi_M\!\left(\ln\varphi_Y(t)/\sqrt{-1}\right),$$

which gives Kendall's and Stuart's formula. The Laplace transform of the density of our non-negative variable $Y_k$ is $\tilde f(s) = \int_{y=0}^{\infty} e^{-sy} f(y)\,\mathrm{d}y = \varphi_Y(-s/\sqrt{-1})$, and the z-transform of the distribution of $M$ is $\hat g(z) = \sum_{j=0}^{\infty} g_j z^j = \sum_{j=0}^{\infty} g_j e^{j\ln z} = \varphi_M(\ln z/\sqrt{-1})$. The Laplace transform of the density of $W$ according to (21) is $\tilde w(s) = \hat g(\tilde f(s)) = h(\ln \tilde f(s))$. Substituting $s = -t\sqrt{-1}$ into the right-hand member and using $h(\ln z) = \varphi_M(\ln z/\sqrt{-1})$ and $\varphi_Y(t) = \tilde f(-t\sqrt{-1})$ provides the left-hand member $\tilde w(-t\sqrt{-1}) = \varphi_W(t)$, i.e. the Kendall-Stuart formula. This formula is an alternative bringing us to the starting point of our algebraic developments.
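A quick numerical sanity check of this correspondence (our example, not the paper's): take $M$ Poisson with parameter $\lambda$, so that $\hat g(z) = e^{\lambda(z-1)}$, and $Y$ exponential with rate $\theta$, so that $\tilde f(s) = \theta/(\theta+s)$ and $\varphi_Y(t) = \theta/(\theta - t\sqrt{-1})$. Then $\tilde w(-t\sqrt{-1}) = \hat g(\tilde f(-t\sqrt{-1}))$ should coincide with $\varphi_M(\ln\varphi_Y(t)/\sqrt{-1})$:

```python
import cmath

lam, theta = 1.7, 0.4                                    # arbitrary parameters for the illustration

def g_hat(z):   return cmath.exp(lam * (z - 1))          # z-transform of Poisson(lam)
def f_tilde(s): return theta / (theta + s)               # Laplace transform of the Exp(theta) density
def phi_M(t):   return cmath.exp(lam * (cmath.exp(1j * t) - 1))   # characteristic function of M
def phi_Y(t):   return theta / (theta - 1j * t)                   # characteristic function of Y

for t in (0.3, 1.1, 2.4):
    lhs = g_hat(f_tilde(-1j * t))                        # \tilde w(s) = \hat g(\tilde f(s)) at s = -t*i
    rhs = phi_M(cmath.log(phi_Y(t)) / 1j)                # Kendall-Stuart: phi_M(ln phi_Y(t) / sqrt(-1))
    print(t, abs(lhs - rhs))                             # differences at machine precision
```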

    5. The first five moments as an example

We demonstrate how to use the above expressions by developing the explicit formulae for the first five moments of a compound distribution. From Eq. (11) and Table 1, we have the moments of any discrete distribution as a function of the derivatives of its z-transform:
$$\mu(M) = \hat g^{(1)}(1), \qquad (30a)$$
$$\mu_2(M) = \hat g^{(1)}(1) + \hat g^{(2)}(1), \qquad (30b)$$
$$\mu_3(M) = \hat g^{(1)}(1) + 3\hat g^{(2)}(1) + \hat g^{(3)}(1), \qquad (30c)$$
$$\mu_4(M) = \hat g^{(1)}(1) + 7\hat g^{(2)}(1) + 6\hat g^{(3)}(1) + \hat g^{(4)}(1), \qquad (30d)$$
$$\mu_5(M) = \hat g^{(1)}(1) + 15\hat g^{(2)}(1) + 25\hat g^{(3)}(1) + 10\hat g^{(4)}(1) + \hat g^{(5)}(1). \qquad (30e)$$
Alternatively, from Eq. (15) (using Table 2), we have

$$\hat g^{(1)}(1) = \mu(M), \qquad (31a)$$
$$\hat g^{(2)}(1) = -\mu(M) + \mu_2(M), \qquad (31b)$$
$$\hat g^{(3)}(1) = 2\mu(M) - 3\mu_2(M) + \mu_3(M), \qquad (31c)$$
$$\hat g^{(4)}(1) = -6\mu(M) + 11\mu_2(M) - 6\mu_3(M) + \mu_4(M), \qquad (31d)$$
$$\hat g^{(5)}(1) = 24\mu(M) - 50\mu_2(M) + 35\mu_3(M) - 10\mu_4(M) + \mu_5(M). \qquad (31e)$$

For a compound distribution, Eq. (26) gives us the moments of $W$ as
$$\mu(W) = \mu(M)\,\mu(Y), \qquad (32a)$$
$$\mu_2(W) = \mu(M)\mu_2(Y) + \left(\mu_2(M)-\mu(M)\right)\mu(Y)^2, \qquad (32b)$$
$$\mu_3(W) = \mu(M)\mu_3(Y) + 3\left(\mu_2(M)-\mu(M)\right)\mu(Y)\mu_2(Y) + \left(\mu_3(M)-3\mu_2(M)+2\mu(M)\right)\mu(Y)^3, \qquad (32c)$$


$$\mu_4(W) = \mu(M)\mu_4(Y) + \left(\mu_2(M)-\mu(M)\right)\left(4\mu(Y)\mu_3(Y) + 3\mu_2(Y)^2\right) + 6\left(\mu_3(M)-3\mu_2(M)+2\mu(M)\right)\mu(Y)^2\mu_2(Y) + \left(\mu_4(M)-6\mu_3(M)+11\mu_2(M)-6\mu(M)\right)\mu(Y)^4, \qquad (32d)$$
$$\mu_5(W) = \mu(M)\mu_5(Y) + 5\left(\mu_2(M)-\mu(M)\right)\left(\mu(Y)\mu_4(Y) + 2\mu_2(Y)\mu_3(Y)\right) + 5\left(\mu_3(M)-3\mu_2(M)+2\mu(M)\right)\left(2\mu(Y)^2\mu_3(Y) + 3\mu(Y)\mu_2(Y)^2\right) + 10\left(\mu_4(M)-6\mu_3(M)+11\mu_2(M)-6\mu(M)\right)\mu(Y)^3\mu_2(Y) + \left(\mu_5(M)-10\mu_4(M)+35\mu_3(M)-50\mu_2(M)+24\mu(M)\right)\mu(Y)^5. \qquad (32e)$$

Using (5) and (6) for replacing all moments by central moments in (32), we obtain the following expressions for the first set of central moments of a compound distribution:
$$\mu'_2(W) = \mu(M)\mu'_2(Y) + \mu'_2(M)\mu(Y)^2, \qquad (33a)$$
$$\mu'_3(W) = \mu(M)\mu'_3(Y) + 3\mu'_2(M)\mu(Y)\mu'_2(Y) + \mu'_3(M)\mu(Y)^3, \qquad (33b)$$
$$\mu'_4(W) = \mu(M)\mu'_4(Y) + 4\mu'_2(M)\mu(Y)\mu'_3(Y) + 3\left(\mu'_2(M) + \mu(M)(\mu(M)-1)\right)\mu'_2(Y)^2 + 6\left(\mu'_3(M) + \mu(M)\mu'_2(M)\right)\mu(Y)^2\mu'_2(Y) + \mu'_4(M)\mu(Y)^4, \qquad (33c)$$
$$\mu'_5(W) = \mu(M)\mu'_5(Y) + 5\mu'_2(M)\mu(Y)\mu'_4(Y) + 10\left(\mu'_2(M) + \mu(M)(\mu(M)-1)\right)\mu'_2(Y)\mu'_3(Y) + 10\left(\mu'_3(M) + \mu(M)\mu'_2(M)\right)\mu(Y)^2\mu'_3(Y) + 15\left(\mu'_3(M) + (2\mu(M)-1)\mu'_2(M)\right)\mu(Y)\mu'_2(Y)^2 + 10\left(\mu'_4(M) + \mu(M)\mu'_3(M)\right)\mu(Y)^3\mu'_2(Y) + \mu'_5(M)\mu(Y)^5. \qquad (33d)$$

Other parameters such as the skewness and kurtosis are readily obtained for further analysis straight from (33). The development of higher-order moments of a compound distribution is a complex task. In the literature, derivations up to the fourth-order moments appear to have been shown only in [6,11], where a different method was adopted. In that approach, a random variable was first separated into two components and the dependence between these clarified. Consequently, errors were easily introduced due to improper assumptions of component independence. Nevertheless, our work verifies that only the results in [11] are correct.

    6. Illustrations and numerical examples

Although not all distributions are fully characterised by the values of all their moments, in most cases, when concentrated on a finite interval, a distribution is given by its complete set of moments [12, pp. 222-224]. When estimating a true distribution from the real world, a better approximation of the true distribution involves a greater number of moments being assigned correct values. This is similar to better approximations involving more terms in Taylor expansions. As pointed out above, the Taylor expansion of the Laplace transform of a probability density around $s = 0$ produces coefficients which indeed are the moments (except for sign).

Estimations using two-parameter distributions, such as the normal or the Gamma distributions, can adopt at most two chosen moments; all remaining moments have to follow suit once the two are decided. Furthermore, there may be other severe restrictions, such as all odd central moments above the first being zero-valued for any symmetric distribution. For a three-parameter distribution, such as the triangular distribution, similarly, at most three moments can be assigned values independently.

The question then arises as to what errors might be generated by choosing an approximation with a small number of free parameters. In an overwhelming number of cases in the literature, the normal distribution with its two parameters is chosen as an estimate, and subsequently as a basis for taking decisions. In particular, in applications involving cycle service levels, the argument providing a given level of the cumulative distribution is requested. Even in simple cases, taking the argument of a normal approximation in lieu of something more correct can yield substantial errors.

Compound distributions are inherently complex. In Fig. 1 we have compared three simple cases and their normal approximations. In all cases, the intensity $Y$ is uniformly distributed on the interval from 5 to 6.


The discrete variable $M$ is either uniformly distributed (a), or has a linearly increasing probability (b), or a linearly decreasing probability (c). $M$ may take on integer values between 1 and 4. The moments and central moments of the variables in these cases are tabulated in Table 3, together with the normal approximation. There are obvious major differences between the values for the central moments of the compound distributions and their normal approximations.

As an illustration of the use of our formulae in a safety stock application, we provide an example dealing with an inventory system having a Gamma distributed random replenishment lead time $M$ and a Poisson distributed random demand $Y$ in each period. The aim is to determine the safety stock level $SS$ so that the cycle service level is maintained at 95%. The means and the second to fourth central moments of the random variables are given as
$$\mu(M) = 3, \quad \mu'_2(M) = 5, \quad \mu'_3(M) = 10, \quad \mu'_4(M) = 100,$$
$$\mu(Y) = 20, \quad \mu'_2(Y) = 50, \quad \mu'_3(Y) = 100, \quad \mu'_4(Y) = 1200.$$

Applying (33), we obtain the mean and the second to fourth central moments of the lead time demand $W$ as
$$\mu(W) = 60, \quad \mu'_2(W) = 2150, \quad \mu'_3(W) = 95\,300, \quad \mu'_4(W) = 19\,126\,100.$$
The safety factor $k$ is defined as the safety stock $SS$ divided by the standard deviation of the lead time demand:
$$SS = k\left(\mu'_2(W)\right)^{0.5}. \qquad (34)$$
If we assume that the lead time demand follows a normal distribution, only the mean and the second central moment are considered. For the service level to be 95%, the value $k = 1.64$ is acquired from a normal distribution table. Thus $SS = 1.64 \cdot 2150^{0.5} \approx 76$.

    Fig. 1. Three simple cases of compound distributions (solid) and their normal approximations (dotted).


We now assume the true lead time demand distribution to belong to Pearson's system of distributions [13] and take into account the mean and the second to fourth central moments. The skewness $\beta_1$ and kurtosis $\beta_2$ are defined as
$$\beta_1 = \frac{\mu'_3(W)^2}{\mu'_2(W)^3}, \qquad (35)$$
$$\beta_2 = \frac{\mu'_4(W)}{\mu'_2(W)^2}, \qquad (36)$$
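The arithmetic of this example can be verified with a few lines (our sketch), evaluating (33a)-(33c) for the stated moments of $M$ and $Y$ and then the shape measures (35) and (36):

```python
# Given moments of M (lead time) and Y (demand per period), as stated in the example above.
mu_M, m2_M, m3_M, m4_M = 3, 5, 10, 100
mu_Y, m2_Y, m3_Y, m4_Y = 20, 50, 100, 1200

mu_W = mu_M * mu_Y                                                 # = 60
m2_W = mu_M * m2_Y + m2_M * mu_Y ** 2                              # Eq. (33a) -> 2150
m3_W = mu_M * m3_Y + 3 * m2_M * mu_Y * m2_Y + m3_M * mu_Y ** 3     # Eq. (33b) -> 95300
m4_W = (mu_M * m4_Y + 4 * m2_M * mu_Y * m3_Y
        + 3 * (m2_M + mu_M * (mu_M - 1)) * m2_Y ** 2
        + 6 * (m3_M + mu_M * m2_M) * mu_Y ** 2 * m2_Y
        + m4_M * mu_Y ** 4)                                        # Eq. (33c) -> 19126100

beta1 = m3_W ** 2 / m2_W ** 3                                      # Eq. (35) -> 0.914
beta2 = m4_W / m2_W ** 2                                           # Eq. (36) -> 4.14
print(mu_W, m2_W, m3_W, m4_W, round(beta1, 3), round(beta2, 2))
print("SS, normal approximation (k = 1.64):", round(1.64 * m2_W ** 0.5))        # about 76
print("SS, Pearson system (k = 1.89, see below):", round(1.89 * m2_W ** 0.5))   # about 88
```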

which are evaluated as $\beta_1 = 0.914$ and $\beta_2 = 4.14$ in our example. From standard tables of statistics (for example in [14]), we have $k = 1.89$, which leads to a safety stock level of $SS = 1.89 \cdot 2150^{0.5} \approx 88$ instead of the formerly calculated value of 76. This simple example illustrates a method to determine the safety stock level by using the mean and the second to fourth central moments. In addition, it indicates that the difference in computed safety stock level can be up to 16% when the conventional approach assuming a normal distribution and the more accurate method involving the higher-order moments are compared.

In order to illustrate which method provides the better result, we assign the above two safety stock levels and then use simulation to examine the actual service level. The simulation runs 10 000 periods and contains 10 replications. The result indicates that the actual service levels are 95.2% and 93.5% when the Pearson distribution and the normal distribution methods are used, respectively.

A more comprehensive comparison was recently made by Tang and Grubbström [15]. Using a simulation method, they have investigated further compound combinations such as Poisson-normal, Poisson-lognormal, normal-gamma, normal-lognormal, etc.

Table 3
Moments and central moments of distributions in the examples of Fig. 1

                                                         Order of moment
                                                      1        2         3           4             5
Intensity, uniform distribution between 5 and 6 (same for the three cases below)
  Moment                                          5.500   30.333   167.750     930.200      5171.833
  Central moment                                  0.000    0.083     0.000       0.013         0.000

Case a
Discrete variable, uniformly distributed on the interval between 1 and 4
  Moment                                          2.500    7.500    25.000      88.500       325.000
  Central moment                                  0.000    1.250     0.000       2.563         0.000
Compound
  Moment                                         13.750  227.083  4169.688   81361.292   1647956.979
  Central moment                                  0.000   38.021     1.719    2392.249       358.574
Normal approximation
  Central moment                                  0.000   38.021     0.000    4336.751         0.000

Case b
Discrete variable, distribution linearly increasing on the interval between 1 and 4
  Moment                                          3.000   10.000    35.400     130.000       489.000
  Central moment                                  0.000    1.000    -0.600       2.200        -3.000
Compound
  Moment                                         16.500  302.750  5903.425  119493.733   2479102.542
  Central moment                                  0.000   30.500   -98.450    2049.621    -15040.208
Normal approximation
  Central moment                                  0.000   30.500     0.000    2790.750         0.000

Case c
Discrete variable, distribution linearly decreasing on the interval between 1 and 4
  Moment                                          2.000    5.000    14.600      47.000       161.000
  Central moment                                  0.000    1.000     0.600       2.200         3.000
Compound
  Moment                                         11.000  151.417  2435.950   43228.850    816811.417
  Central moment                                  0.000   30.417   101.200    2052.550     15572.333
Normal approximation
  Central moment                                  0.000   30.417     0.000    2775.521         0.000
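Case (a) of Table 3 can be reproduced directly from (33a)-(33d) (our check, not part of the paper): the uniform intensity on $[5,6]$ has central moments $\mu'_2(Y)=1/12$, $\mu'_3(Y)=0$, $\mu'_4(Y)=1/80$, $\mu'_5(Y)=0$, and the uniform discrete variable on $\{1,\ldots,4\}$ has $\mu'_2(M)=1.25$, $\mu'_3(M)=0$, $\mu'_4(M)=2.5625$, $\mu'_5(M)=0$:

```python
# Central moments of Y ~ U[5,6] and M uniform on {1,2,3,4}, i.e. case (a) of Table 3.
mu_Y, c2_Y, c3_Y, c4_Y, c5_Y = 5.5, 1 / 12, 0.0, 1 / 80, 0.0
mu_M, c2_M, c3_M, c4_M, c5_M = 2.5, 1.25, 0.0, 2.5625, 0.0

c2_W = mu_M * c2_Y + c2_M * mu_Y ** 2                                 # (33a)
c3_W = mu_M * c3_Y + 3 * c2_M * mu_Y * c2_Y + c3_M * mu_Y ** 3        # (33b)
c4_W = (mu_M * c4_Y + 4 * c2_M * mu_Y * c3_Y
        + 3 * (c2_M + mu_M * (mu_M - 1)) * c2_Y ** 2
        + 6 * (c3_M + mu_M * c2_M) * mu_Y ** 2 * c2_Y
        + c4_M * mu_Y ** 4)                                           # (33c)
c5_W = (mu_M * c5_Y + 5 * c2_M * mu_Y * c4_Y
        + 10 * (c2_M + mu_M * (mu_M - 1)) * c2_Y * c3_Y
        + 10 * (c3_M + mu_M * c2_M) * mu_Y ** 2 * c3_Y
        + 15 * (c3_M + (2 * mu_M - 1) * c2_M) * mu_Y * c2_Y ** 2
        + 10 * (c4_M + mu_M * c3_M) * mu_Y ** 3 * c2_Y
        + c5_M * mu_Y ** 5)                                           # (33d)

print(round(c2_W, 3), round(c3_W, 3), round(c4_W, 3), round(c5_W, 3))
# 38.021 1.719 2392.249 358.574, the "Compound, central moment" row of case (a) in Table 3
```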


All the results confirm that a higher-order moments method is necessary for determining the safety stock in order to avoid severe errors, especially when a high service level is desired.

Besides the Pearson system of distributions, one may also apply the Schmeiser-Deutsch distribution, in which the mean and the second to fourth central moments are considered [16]. A comparative study of these two approaches for modelling inventory systems is given by Kottas and Lau [6]. However, in that particular study, the third and fourth moments are miscalculated due to errors in their formulae for higher-order moments.

    7. Summary

This article provides general closed-form formulae for computing higher-order moments of a compound distribution. Their derivation is straightforward owing to the advantage that the probability distribution of a compound distributed variable can be described in terms of a combination of the Laplace and z-transforms. Formulae are also made available for relating moments to coefficients of the Laplace and z-transforms. The only limitation in these formulae is that the random variable needs to be non-negative.

The use of (26) is at least twofold. We can either use it to approximate a compound distribution from real data, or for analysing how accurate the simplifying assumption of normality is in the circumstance that the individual distributions are known. Both approaches are important for investigating properties of, for instance, production-inventory models. An application of a compound demand process in a production-inventory system can also be found in [17], where the objective is to minimise the average cost of a system in order to achieve an optimal production plan. A future application of our findings will be to adapt the results to fit the stockout function, cf. [17].

    References

[1] M.G. Kendall, A. Stuart, The Advanced Theory of Statistics, fourth ed., vol. 1, Charles Griffin, London, 1977.
[2] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, John Wiley & Sons, New York, 1957.
[3] U. Bagchi, J.C. Hayya, J.K. Ord, Modeling demand during lead time, Decision Sciences 15 (1984) 157-176.
[4] H.-S. Lau, Toward an inventory control system under non-normal demand and lead-time uncertainty, Journal of Business Logistics 10 (1) (1989) 88-103.
[5] M. Keaton, Using the gamma distribution to model demand when lead time is random, Journal of Business Logistics 16 (1) (1995) 107-131.
[6] J.F. Kottas, H.-S. Lau, A realistic approach for modeling stochastic lead time distributions, AIIE Transactions 11 (1) (1979) 54-60.
[7] J.E. Tyworth, Modeling transportation-inventory trade-offs in a stochastic setting, Journal of Business Logistics 13 (2) (1992) 97-124.
[8] R.W. Grubbström, The fundamental equations of MRP theory in discrete time, Working Paper WP-254, Department of Production Economics, Linköping Institute of Technology, Sweden, December 1998.
[9] R.W. Grubbström, A closed-form expression for the net present value of a time-power cash flow function, Managerial and Decision Economics 12 (5) (1991) 305-316.
[10] R.W. Grubbström, The z-transform of $t^k$, The Mathematical Scientist 16 (1991) 118-129.
[11] W.-X. Wan, H.-S. Lau, Formulas for computing the moments of stochastic lead time demand, AIIE Transactions 13 (3) (1981) 281-282.
[12] J.K. Ord, Families of Frequency Distributions, Griffin, London, 1972.
[13] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, John Wiley & Sons, New York, 1966.
[14] E.S. Pearson, H.O. Hartley, Biometrika Tables for Statisticians, vol. II, Cambridge University Press, 1972.
[15] O. Tang, R.W. Grubbström, On the necessity of using higher-order moments for stochastic inventory systems, Working Paper WP-317, Department of Production Economics, Linköping Institute of Technology, Sweden, 2003.
[16] B.W. Schmeiser, S.J. Deutsch, A versatile four parameter family of probability distributions suitable for simulation, AIIE Transactions 9 (2) (1977) 176-182.
[17] O. Tang, Application of transforms in a compound demand process, Promet-Traffic-Trafico 13 (6) (1999) 355-364.
