041254380 x capability indices

110
Process Capability Indices < and Norman L. Johnson Unwersity of North Carolina, ChapelHill, North Carolina, USA rm CHAPMAN' cStHALL London. Glasgow. New York. Tokyo. Melbourne. Madras

Upload: studentul86

Post on 24-Jul-2015

105 views

Category:

Documents


2 download

TRANSCRIPT

Page 1: 041254380 x Capability Indices

Process CapabilityIndices <

and

Norman L. JohnsonUnwersity of North Carolina, ChapelHill,

North Carolina, USA

rmCHAPMAN' cStHALL

London. Glasgow. New York. Tokyo. Melbourne. Madras

Page 2: 041254380 x Capability Indices

("\ '-) ,'-, '..,,""'VI., 'I 'i 1'/

l, q..

I'ublishl'd by (,h:II)II1:1II.~ lIall,. 2 (, Houlldary I{cn~, LOlldoll SI<:I !II IN

Chapman & Hall. 2-6 Boundary Row, London SEI 8HN, UK

BlackieA\:ademk& ProfessioliaLWeslerCleddensRoad, Bislwpbrig[!.s,Glasgow 064 2NZ. UK

Chapman & HalrInc., 29 West 35th Street, New York NYlOOOl, USA

Chapman & Hall Japan, Thomson Publishing Japan, Hirukawacho NemotoBuilding,6F, 1-7-11 Hirakawa-cho, Chiyoda-ku, Tokyo .102, Japan

Chapman & Hall Australia, Thomas Nelson Australia, 102 Dodds Strect,South Melbourne, Victoria 3205, Australia

Chapman & Hall India, R. Seshadri, 32 Second Main Road, CIT East,Madras 600 035, India

ForewordPreface1 Introduction

1.1 Facts needed from distribution theory1.2 Approximations with statistical differentials

('della melhod')Binomial and Poisson di.stributionsNormal distributions(Central) chi-squared distributions and

non-central chi-squared distributionsBeta distributions

t-distribuljons and non-central t-distributionsFolded distributionsMixture distributions

Multinormal (multivariate normal) distributionsSystemsof distribu!ion:.;

Facts from statistical methodology. Bibliography'

2 'rile basic process capability indices: C", C"k andtheir modifications

2.1 Introduction2.2 The Cp index2.3 Estimation of Cp2.4 The Cpkindex2.5 Estimation of' Cpk2.6 Comparison of capabilities2.7 Correlated observations2.8 PCls for attributes

Firs}-e"dilion1993

\..,..~993 Chapman & Hall

Typeset in l1/13pt Times by Interprint Limited, Malta.Printed in Great Britllin by St Edmundsbury Press, Bury $( Edmunds,Suffolk

ISBN 0 412 54380 X

1.3

..1,4

1.5

Apart from any fair dealing for the purposes of research or private study, orcriticism or review, as permitted under the UK Copyright Designs andPatents Act, 1988, this publication may not be reproduced, stored, ortrunsmitled. in any form 01' by any means, without the prior permission inwriting of the publishers. or in the case of reprographk reproduction only ina\:wrdance with the terms of the li\:en\:es issued by thc ('l1pyrighl LicensingAgency in the UK, or in accordan\:e. with the terms of licences issued by theappropriate Reprodu\:tion Rights Organization outside the UK, Enquiriescon\:erning reproduction omside the terms stated here should be sent to thepublishers at the London address printed on this page.

The publisher makes no representation, express or implied, with regard tothe acwracy of the information contained in this book and cannot a\:cept anylegal responsibility or liability for any errors or omissions that may be made,

A catalogue record for this book is available from the British Library

1.61.7I.RJ.~1.10I. JILI2

Library of Congress (' , , . '.. - :.. "..~O: :..- ~ :'..'.\..

Process capability indices

1111111111111111111111111111111

168718658.562 KOT/P N93

Q, P 'd Katz, Samuel'e rmte on permau"", a u-"..."""" 1"'1"",U""IU""""""U ... ..wv.u,...~e

with the proposed ANSI/NISO Z 39.48-199X and ANSI Z 39.48-1984

CENTH L'!. !.i :t(~ ( :n,

L t 1'" r.i1"~18i7ti~)v

viiIX13

101416

182224262729303236

, 3737384351557476

: 78

Page 3: 041254380 x Capability Indices

VI Contents

Appendix 2.AAppendix 2.BBibliography

. 3 The Cpmindex and related indices. 3.1 Introduction3.2 The Cpmindex3.3 Comparison of Cpmand Ctm with C;pand Cpk3.4 Estimation of Cp~ (and Ctm)

i 3.5 Boyles' modified Cpm index3.6 The Cpmkindex3.7 Cpm(a)indices

Appendix 3.A: A minimum variance unbiasedestimator for W = (9C,~.;)- 1

Bibliography

4 Process capability indices under non-normality:. robustness properties4..1 Introduction4.2 Effects of non-normality4.3 Construction of robust PCls4.4 Flexible PCTs

4.5 Asymptotic propertiesAppendix 4.ABibliography

5 Multivariate process capability indices5.1 Introduction5.2 Construction of multivariatePCIs5.3 Multivariate analogs of Cp5.4 Multivariate analogs of Cpm5.5 Vector-valued PCls

Appendix 5.ABihliography

Note on computet' programsPostscriptIndex

..

798'183

8787gg9598

112117121

Foreword

Me.asures of process capability known as process capabilityindices (PCls) have been popular for well over 20 years, sincethe capability ratio (CR) was popularized by Juran. Sincethat time we have seen the introduction of Cp, Cpk, Cpm,Ppkand a myriad of other measures.

The lIse of tbese measures has sparked a significantamount of controversy and has had a major economicimpact (in many cases, negative) on industry. The issuedoes n.ot generally lie in the validity of the mathematicsQf the indices, but in their application by those who believethe values are deterministic, rather than random va~i-abIes. Once the variability is understood and the bias (ifany) is known, the use of these PCls can be more construc-tive.

As with statistics in general, it is imperative that thereadcr or Ihis lext has a work ing knowlcdgc of sta tistica]methods and distributional theory. This basic understand-ing is what has been lacking in the application of PCls over~~~. I

It is hoped that this text will as~jst in moderating (as in thiswriter's case) some of the polarization that has occurredduring the past decade over the use of PCls and that we canwork on process improvements in general, rather thart focus-ing upon a single measure or index.' .

There are those of us who feel that the use of PCls hasbeen ineffective and should be discontinued. There are those

who feel they have their use when used in conjunction withother measures and there are those who use PCls as absolute

127133

135135137 .1511M170172175

179179180.183187194

.197. )()()

20320520t)

VB

Page 4: 041254380 x Capability Indices

,

Vl11 Forword

measures. I would suggest that an umkrstanding l)!' theconcepts in this text could bring the majority of viewpoints toa common ground.

Preface

Robert A. DovichDecemher. 1992

The use and abuse of process capability indices (PCls) havehecome topics of considerable controversy in the last fewyears. Although seemingly simple in formal use, experiencehas shown thatPCls lend themselves rather easily to i11~based interpretations. It is our hopl that this monograph willprovide background sufficient to allow for informal assess-ment of what PCls can and cannot be expected to do.

We have tried to keep the exposition as elementary a:spossible, without obscuring its essential logical and math-ematical components. Occasionally the more elaborate of thelatter are moved to an appendix. We hope that Chapter '1 wiJlprovide a reminder of the concepts and techniques needed tostudy the book with profit.

We also hope, on the other hand, that more sophisticatedresearchers will find possible topics for further developmentalwork in the book.

Beforc venluring into this (\cld of currcnt controversy wehad somc misgivingsabout' our abilities in this Jldd..We havebenefited substantially from the advice of practitioners (es-pecially Mr. Robert A. Dovich, Quality Manager, IngersollCutting Tool Company, Rockford, Illinois and Dr. K. Hung,Department of Finance, Western Washington University,Bellingham, Washington) but our approach to the subject isbased primarily on our own experience of numerical ap-plications in specific problems over many years and ourprolonged study .of distributional-theoretic aspects of statis-tics. We hope our efforts will contribute to lessening the gapbetween theory and practice.

ix

Page 5: 041254380 x Capability Indices

x Pr~j'ace

We thank Dr. W.L. Pearn, of the National Chiao TungUniversity, Hsinchu, Taiwan, for his extensive collaborationin the early stages of this work, Mr. Robert A. Dovich foruseful comments on our first version, and Ms. Nicki Dennisfor her enthusiastic support of the project. Conversations andcorn.:spondence with Mr. Eric lknson and Drs. Russell 1\.Boyles, Richard Burke, Smiley W. Cheng"LeRoy 1\. Franklin,A.K. Gupta, Dan Grove, Norma F. Hubele, S. Kochcrl,akota,Peter R. Nelson, Leslie 1. Porter, Barbara Price, Robert N.Rodriguez, Lynne Seymour and N.F. Zhang contributed toclarilkation of our ideas on th'e somewhat controversialsubject matter of this monograph.

To paraphrase George Berkeley (1685-1753) our task andaim in writing this monograph was 'to provide hints tothinking men' and women who have determination 'andcuriosity to go to the bottom of things and pursue them intheir own minds'.

1

Introduction

January 1q93

Process capability indices (PCls) have proliferated in' both useand varicty during the Jast decade, Their widespread andoften u,ncritical use may, almost inadvertently, have led tosome improvements in quality, but also, almost certainly,'have been the cause of many unjust decisions, which mighthave been avoided by better knowledge of their properties.

These seemingly innocuous procedures for determining'process capability' by a single index were propagated mainlyby over-zealous customers who viewed them as a panacea forproblems of quality improvement. Insistence on rigid adher-cnce to rules for i;alculating the indices Cp and Cpk (seeChapter 2) on a daily basis, with the goal of raising themabove 1.333as much as possible, caused a revolt among anumber of inOuentialand open,-mindedquality control stat-isticians. An extreme reaction against these indices took theform of accusations of 'statistical terrorism' - Dovich's letter

(1991a), Dovich (1991b) and Burke et al. (1991) - and ofunscrupulous manipulation and doctoring, and calls for their

, elimination - Kitska's letter (1991). There were also moremoderate voices (Gunter, 1989) and more defenders (McCor-mick (in a letter, 1989 with reply by Gunter, 1989b); Steen-burgh, 1991; McCoy, 1991).However, the heated and intensedebate on the advisability of continuation of the practicalapplication of these indices ,- which took place during the firstfour months of 1991 - indicates that something is, indeed,

Samuel KotzUniversity of MarylandCollege Park, MD

Norman L. JohnsonUniversity of North Carolina

Chapel Hill. NC

1

Page 6: 041254380 x Capability Indices

2 J /1/ rod 11<'1 iO/1

wrong either with the indices themselves or the manner theyhad been 'sold' to Clualitycon.trol praditioners and the 'manon the shop floor;. Additional indices have been introducedin leading journals on quality control in the last few years(Chan et ai., 1988; Spiring, 1991; Boyles, 1991; Pearn et ai.,1992) in efforts to improve on Cp and Cpk by taking intoaccount the possibility that the target (nominal) value maynot correspond to the midpoint between fhe specificationlimits, and possible non-normality of the original processcharacteristic. The number of different capability indices hasincreased and so has confusion among practitioners, whohave so far been denied an adequate and clear explanation ofthe meaning of the various indices, and more importantly theunderlying assumptions. Their suspicion that a single numbercannot fully capture the 'capability' of a process is welljustified!

The purpose of this monograph is to clarify the statistical.meaning of these indices by presenting the underlying theory,to investigate and compare, in some detail, the properties ofvarious indices available in the literature, and finally toconvince practitioners of the advisability of representingestimates of process capability by a pair of numbers rather bya single one, where the second (supplementary) number mightprovide an indicator of the sample variability (accuracy) ofthe primary index, which has commonly been, and sometimesstill is, mistakenly accepted as a precise indicator of theprocess capability, without any reservations. This shoulddefuse to some extent the tense atmosphere and the rigidityof opinions prevalent among quality controllers at present,and, perhaps, serve as an additional bridge of understandingbetween the consumer and the producer. It may even reducethe cost of produdion without compromising the qUillity. Weconsider this to be an important educillional goal.

To benefit from studying this monograph, some basicknowledge of statistical methodology and in particular of thetheory of a number of basic statistical distributions is essen-

..

Facts 11('eded .Ii'om distribution theory. 3

tial. Although many books on quality control provide excel-lent descriptions of the normal distribution, the distributiona.l(SI;llislicaJ)propenies of Ihe process capability indiccs involvc

. several othcr importallt but much less familiar distributions.In fact, these indices give rise to new interesting statisticaldistributions which may be of importance and promise inother branches of engineering and sciences in general. Forthis reason most of this chapter will be devoted to a conden-sed, and as elementary as possible, study of those statisticaldistributions and their properties which are essential for adeeper understanding of the distributions of the processcapability indices. Further analytical treatment may be foundin books and articles cited in the bibliography. Specifically,we shall cover basic properties and characteristics of randomvariables and estimators in general, and then survey theproperties of the following:

J. hinomial and Poisson distributions2. 110r111a1distributions ,

.1. (central) chi-squared (and more generally gamma) dis-tributions

4. non-central chi-squared distributions5. beta distributions6. t- and non-central t-distributions7. /(}Jdednormal distributionsX. mixture distributions9. multinormal distributions

We also include, in section 1.12, a summary of techniques instatistical methodology which will be used in special applica-tions.

1.1 FACTS NEEDED FROM DISTRIBUTION THEORY,

We assllme that the reader possesses, at least, a basic knowl-edge of the elements of probability theory. We commence

Page 7: 041254380 x Capability Indices

4 Introduction

with a condensed account of basic concepts to refresh thememory and establish not'ation, This is followed by a sum-mary of results in distribution theory which wil1be used laterin this monograph,

We interpret 'probability' as representing long-run relativefrequency of an event rather than a 'degree of belicf'. Prob-abilities are always in the interval [0, 1].. If an event al\vays(never) occurs the probability is J (0). We will be dealing withprobabilities of various types of events.

A random variable X represe.nts a real-valued varyingquantity such that the event 'X ~ x' has a probability for eachreal number x. Symbolically

Pr[X ~ xJ = F x(x). (1.1)

The symbol Fx(.x) stands for the cumulative distributionfunction(COF) of X (or morc prccisely its valuc at X =x),Note that it is a function (in the mathematical sense) of x, hotof X. It represents a property of the random variable X,namely its distribution. For a proper random variable

lim Fx(x)=O lim Fx(x)= 1x"'" -00 X"'" 00

(1.2)

To represent quantities taking only integer values -- forexample, number of nonconforming items - we use discreterandom variables. Generally these are variables taking only anumber (finite, or count ably infinite) of possible values {xd.In the cases we are dealing with the possible values are justnon-negative integers and the distribution can be defined bythe values

Prf X =J1 J=O, L 2,...

For such random variables the COF is

Fx(x) = L Pr[X =.iJj<;.x

(1.3)

..

Faels needed from distrihution theory 5

To represent quantities such as length, weight, etc., whichcan (in principle) be thought of as taking any value over arangc (tinite or in/inite interval) we u:-.econtinuous randomvariables for which the derivative

dF x(x) I

!x(.X)= -- =Fx(x). dx (1.4)

of the CDF exists. The function J'x(x) is the probabilitydensity function(PDF) of the random variable X. For acontinuous random variable, X, for any real a and {3,({3> a):

Pr[a~X ~/J] = Pr[a<X ~/J] = Pr[a~X <fJJ=Pr[a<X<{3J= F x({3) - F x(a)

= I: fx(x) dx (1.5)

(For such a variablc, Pr[X ='aJ = Pr[X = 11J= 0 for any a and{3,)We will use Xe to denote the (unique) solution of F x(xe)= 8.

The joint CDF of k random variables Xl, .., , X k is

Vdx) Vv "", ,.I'"(x J , "" Xk) Prr( X I <X I) nn(X2~x2)n... n(Xk~xk)J

=pr [n (Xj~Xj) J

'

.1=1 (1.6)

For continuous random variables, the joint PDF is

rlF x(x)fx(x)= fx, Xk(XI,"', xd= ax! aX2'~..aXk (1.7)

(the kth mixed derivative with respect to each variable). ,

Page 8: 041254380 x Capability Indices

6IlllfOdL/CliOIl .

We denote the conditiona' probubility of an event Et givenan event E2 by pr[E1IE2l By definition, El is independentof E2 if Pr[El \E2J = Pr[Ell Generally Pr[El and E2J,denoted by Pr[El n E2}, is equal to Pr[E2J Pr[El \E2J =Pr[E,J Pr[E2IE,l

Also pr[EI or El or bothJ, denoted by Pr[Et u El], isequal to Pr[E1J+Pr[E1J-Pr[ElnE2]. .

If E1 is independent of E2 and Pr[E;J > 0, then E2 ISindependent of E1 and we say that El. and E2 are mutuallyindependent.

Random variables X 1 and X 2 are (mutually) ind.ependentif the events (X1~X1) and (X 1~X2) are mutually indepen-dent for all Xl and Xl' so that

FXl,X2(XI, X2)= F Xt(Xl )FX2(Xl)( 1.1)

The expected value of a random variable represents thelong-run average of a sequence of observed values to wh.ichit corresponds. The expected value of a continuous randomvariable, X is

r.=E[XJ= f~ooXfx(X)dX(1.8a)

also denoted as /1'1(X). Often the term 'mean value' or simply'mean' is used.

More generally, if g(X) is a function of X

E[g(X)] = J ~ 00 g(x)f x(x) dx

(1.9 a)

For a discrete random variable

~=E[XJ = '2>iPr[X =XiJi

( \.8 hI

E[g(X)]= I,g(xJPr[X=xiJi

(1.9 b)

..

Facts needed from distrihution theory

The variance of X is by definition

7

I~L(X - E(X ))2J-0. E(X 2) - LE(X)]2 (LIO a)

It is also written as

varIX) = f.l2= a2(X) = a2 (1.10 b)

The square root of the variance is the standard deviation,usually denoted by c(X) or simplya. This will be an import-ant and frequently used concept in the sequel.

More generally, the rth (crude) moment of X is E[Xr] =I/;.(X). Note that /1'1(X) = Er Xl =~. . .

The rth central moment is by definition:

E[(X-E(X))']=llr(X) (LlI)

. The variance is the second central moment.The coefficient of variation is the ratio of standard devi-

ation to the expected value (CV.(X) = a(X)/E[X]), but it isoftcn expressed as a percentage (lOO{a(x)/E[X]}of<)).. If X and Y are mutually independent, then E[X Y]=E[X]E[Yl For any two random variables; whether indepen-dent or not. .

E[X + YJ=E[X]+E[Y]

The correlation coefficient between two random variables Xand Y,is defined as

E[(X - E[X])(Y- E[YJ)]PXY = a(X)a(Y)

The numerator, EL(X- ELXJ)(Y- ELYJ)J is called thecovariance of X and Y, cov (X, Y).

Page 9: 041254380 x Capability Indices

8 Introduction

Occasionally we need to use higher moments (r> 2), inorder to compute conventioI].al measures of skewness (asym-metry) and kurtosis (flat- or peaked-'topness'). T,hese are

I J.l3(X) J.l3(X)

(f3i (X»2 = {J.l2(X)} 1 = {a(X)p (1.12a)

(the symbols O:3(X)or A3(.r) are also used fa! this parameter),and

J.l4(X) J.l4(X)

f32(X)= {/l2(X)]2 ='{a(X)}4

(the symbols O:4(X) or P'4(X)+ 3} arc. also used for thisparameter).

When there is no fear of confusion the '(X)' is oftenomitted, so we have for example

(1.12 b)

J /31 = It J /12 = It4a3 a4. (1.12(')

and

C.V.(X) = 1OOa//t't (X)(= I OOa/~o;;l) (1.13)

.j7J; and f32 are referred to as moment ratios or shapefactors (because they reflect the shape of the distributionrather than its location - measured by c; - or its scale c-measured by a). Neither j7i; nor /12 is changed if X isreplaced by Y = aX + b, when a and b are constants and

a> O. If a < 0, the sign of j{f; is reversed, but /12 remainsunchanged. If J7E. <0 the distribution is said to be nega-tively skew, if j7f;> 0 the distribution is said to be posi.tively skew. For ,any normal distribution, j/II - 0 andf32= 3. If f32< 3 .the distribution is said to be platykurtic('flat-topped'); if f32> 3 the distribution is said to b~ leptokur-tic ('peak-topped').

..

Facts needed .Ii"omdistribution theory 9

/1'2cannot be less than 1. It is not possible to have{/2..~ /J1- J <0. (If /12- 111,-] =0 the distribution of X reduces

to the 'two-point' distribution with Pr[X = a]:::-1- Pr[X = bJ. for.arbitrary a # h.)

We state here without proof, results which we will lIse inlater chapters. If X1,X2,...,Xn are mutually independentwith common expected value, ~, variance, a2, and kurtosisshape factor, /32then for the arithmetic mean,

n

X=n-i LXIi= 1

, - a2E[XJ=~ var(X)=.:...

11 (1.14a)

For the sample variance

2 .1" -2S = --- L (XI-X)n-11=1

we have

E[S2]=a2 (1.14 h)

.

(' n-3

)var(S2)= IJ2- "- a4n-l (1.14 c)

We also not~ the simple but useful inequality

E[IXI] ~ IE[XJI (1.14 d)

(Since E[IXIJ ~ E[X] and E[IXIJ = E[I- XI] ~ E[ - X] =- E[X].) Note also th~ identity

min(a, b)=t(la+bl-la-bl) (1.14 e)

Page 10: 041254380 x Capability Indices

(0 lilt /'odui'I iOIl

1.2 APPROXIMATIONS WITH STATISTICALDIFFERENTIA,LS ('DELTA METHOD')

Much of our analysis of properties of PCls will be based onthe assumption that the process distribution is normal. In thiscase we can obtain exact forl11l1l;\sfor mOl11entsof sall1plcestimators of the PCls we will discuss, In Chapter 4, however,we address problems arising from non-noimality, and in mostcases we will not have exact formulas for distributions ofestimators of PCls, or even for the moments. It is therefol'euseful to have some way of appmximating these values, oneof which we will now describe. (Even in the case of normalprocess dIstributions, our approximations may be usefulwhen they replace a cumbersome exact formula by a simpler,though only approximate formula. It is, of course, desirabkto reassure ourselves, 'so far as possible, with regard to theaccuracy of the latter.)

Suppose we have to deal with the distribution of the ratioof two random variables '

AG--

-B

and

E[A] =a E[B] ~#

var(A)=O'~ var(B)=O'~

cov(A, B) = PABO'AO'B

i.e. the correlation coefficient between A and B is flAil'To approximate the expected value of Gr we write

r (A

)r

{a+(A-a)

}

r

(a

)r

(' A-a

)r

(. n-"'fJ)G= - = = - 1+- 1+--B 13+ (B - 13) 13 0: j1

( 1.15)

'"

L'

'.ItIf

'~.

IC

JI

,IIr,IeIC

10

"

\',\

..

Approximations 1vithstatistical d(lferentials 11

The deviations A - a, B - a of A and B from their expectedyalUes arc calJed statistical differentials. Clear]y

E[A-aJ=O E[(A-a)2J=0'~ E[(A-a){B-fJ)J=PABO'AO'/I

(The notation A-a==i>(A); B-fJ=(5(B) is sometimes us~d,giving rise to the alternative name: delta method.)

We now expand formalJy the second and third componentsof the right-hand sideof (1.15).If r is an integer,the expansionof the second component terminates, but that of the thirdcomponent does not. Thus we have (with r= 1)

a

(A - a

){, B- #

(B - #)

2

}G=fj 1+'-a- 1-#+ -Ii (1.,16)

The crucial step in the approximation is' to suppose that wecan terminate the last expansion and neglect the remainingterms. This is unlikely to lead to satisfactory results unlessI(B - 13)/ fJI is small. Indeed, if I(B-- fJ)/fJ I exceeds 1, the seriesdoes not even converge. '

Setting aside our misgivings for the present, we take, expected values of both sides of (1.16), obtaining

E[G] ~~ (1- fJAI/(JA(J1/ (J~),

fJ afJ + 132(1.l7a)

A similar analysis leads to

"'G) (a

)2

(0'~ 2PAIIO'AO'I1 (J~)'

Vdi\ -'- --- +--\(Ja2 ,afJ 132

(1.17b)

Formulas (1.17a) and (1.17b) can be written conveniently interms of the coeflicients of variation of A and B (seep. 7).

E[GJ ~ fi [I - flAil':c. V.(A )c. V.(B)}+{C.V.(B)} 2J (1.17c)

Page 11: 041254380 x Capability Indices

12 Introduction

var( G) ~ (~) 2 [{c.v.(A»)2 - 2PAB{C.V.(A)C. V.(B)}

+{C.V.(B)}2] (l..l7d)

Another application of the method is to approximate theexpected value of a function g(A) in terms of moments of A.

Expanding in a Taylor series

g(A) = g(a + A - a) == g(a)+ (A - a)g'(a) + t(A - a)2y"(a) + ...

~ g(a) + (A - a)g'(a) -1-~(A - aF?J"(a)

Taking expected values of both sides

E[g(A)] ~g(a) + t?/'(a)(J"~ (1.18a)

Similar aI1alysis k:ads lo

var(g(A)) ~ {g'(a)} 2 var (A) = {g'(a)} 2(J"~ (1.18b)

Example (see Sections 1.4 and 1.5 for notation)

Suppose we want to approximate the expected value andvariance of

{f

.(XI-X)2 }

-!

i= 1

where X),...,X" arc indcpcndent N(~,(J"2)random variablesand .

- 1 nX=- LXI,

n 1=1

/)

It'I

I)

I)

II

s

...

Approximations with statistical d!fferentials

We take

13

"

L: (Xi--X)21=1

-'as our 'A', and g(A)= A 2.

From (1.31) we will find that

II.

L (XI-X)2 is distributed as X;-1(J"21= 1

Hence

a = E [A ] = (n - 1)(J"2

and

. (J"~.:::2(n-l)(J"4

Also

g'(A)= -fA -~ g"(A)=;iA-~

Hence

ElA jJ~{(fl-l)(J"2} .!+1{2(n-l)([4}{;i(n-l) 1(J"-5}

~(n-1r-!{1+ 4(n~1)}(}'-1 (1.19 a)

and

var(A -!) ~ 2(n -1)(}'4 x (4~3 )

1 -6.) 4 3 (J"

=2(n-1 (}' x 4(n-1)

Page 12: 041254380 x Capability Indices

14 I ntroductiol1

t

- 2(n--;-I)2a2(1.I9h)

Remembering the formulas (1.14 b, c) for E[S2] and var(S2)we find that

1 1(n-3

)1 1

a(S- )~2 /32- n-l a-. (1.19c)

for any distribution.

1.3 BINOMIAL AND POISSON DISTRIBUTIONS

Suppose that in a sequence of n observations we observewhether an event E (e.g. X =::;;x)occurs or not. We canrepresent the number of occasions on which E occurs by arandom variable X. To establish an appropriate distributionfor X, we need to make certain assumptions. The mostcommon set of assumptions is:

1. for each observation the probability of occurrence of lihas the same value, p, say - symbolically - Pr[E] = p:and

2. the set of results of n observations are mutually indepen-den t.

Under the above stated conditions

pr[x=k]=G)pk(1--P)"-k for 1<=0,1,...,/1(1.20)

where

(n

)n! n(k)

k = k!(n-k)!-kf

c:

11

1\

11

II

L~

~.

I

))

..

Binomial and Poisson distributions 15

(Recall that n!=1 x 2 x... x nand n(kl=--=n x (n-l) x... x(n-I\+J).)

Denoting if - J--I' (a common notation in the literature),we ohs~rve thaI Ihe quantities

Pr[X=O], Pr[X=l], Pr[X=2],..., Pr[X=I1]

are the successive terms in the binomial expansion of(q+ p)".

The distribution defined by (1.20) is called a binomialdistribution with parameters nand p, briefly denoted byllin~,~. .

The expected valuc and variance of the distribution are

j.l'l(X)=E[X]=np

112(X)=Var(X) =npq

respectively.

If n is increased indefinitely and p decreased, keeping theexpected value np constant (= 0 say) then

Pr[X = k] approaches to Okexp(- 0)k!

The distribution defined. by

Pr[X = k] =OkeXp~~O) k=O, 1, '"

is. a Poisson distribution with expected value 0, denotedsymbolicaJly as:

X", Poi(O)

For this di~tribution,E[X] = e and var (X) = e.

(1.21)

Page 13: 041254380 x Capability Indices

16 Introduction

The Il/lIllil/ol'll/lll dislrill/ll;o/l extends the binomial distrihu

tion to represent that of numbers of observations N1, ..., Nkof mutually exclusive events E1, .,. , Ek in n independent trialswith Pr[Ej] =Pj (j = 1,..., k) at eachtrial.Wehave

.k

(p~j\

Pr[(Nl=ndn...n(Nk=nk)]=n!})l ~j)(1.22)

with

k

L nj = n,j= 1

k

L Pj = 1j= 1

1.4 NORMAL DISTRIBUTIONS

A random variable U has a standard (or unit) normal dis-tribution if its CDF is

1

J

u

Fv(u)= r;;: . exp(-~t2)dt=<b(u)v 2n - 00

(1.23a)

and its PDF is

1fv(u)=-;= exp( -lu2) = (p(u)

J2n .(1.23 b)

(We shall use the notation (}>(u),(p(u) throughout thisbook.) .

. This distribution is also called a Gaussian distribution,andI

occasionally a bell-shaped distribution. The expected valueand the variance of U are 0 and 1 respectively.

..

I)

~)

III.

hi

D~

N onnal distributions 17

The CDF and PDF of X=~+aU for a>O and ~:f=Oare

1

Ix

[1

(t.-t.

)2

] (x-(

Fx(x)= r:L exp -- _.-:. dt=(}>- )av 2n - 00 2 a 0'

(1.24 a)

and

f'

(1

[1

(X - ~)2

J1

(X - ~)'

: x x)= a~exp -:2 --;- =~(P --;--' (1.24h)

respectively.We write symbolicallyX '" N(~,a2).The .expected value of X is E[XJ =~and the variance of X

is a2. The standard deviation is 0'.The distribution of U is symmetric about :tero, i.e.

fv( -u)=fv(u). The distribution of X is symmetric about itsexpected (mean) value ~, i.e.

fx(~ +x)=fx(~ -x) (1.25)

There are extensive tables of (}>(u)available which arewidely used. Typical values are shown in T,lble 1.1.

, ,

Table 1.1 Values W(u), the standardizednormal CDF

u <1>(u) u <1>(u)-0 0.5000 0.6745 0.75000.5 0.6915 1.2816 0.90001.0 0.8413 1.6449 0.95001.5 0.9332 1.9600 0.97502.0 0.97725 2.3263 0.99002.5 0.9938 2.5758 0.99503.0 0.99865 3.0902. 0.9990

3.2905 0.9995

Page 14: 041254380 x Capability Indices

18 Introduction

Note .that Ul = 1.645,and U2 = 1.96 correspond to $(ud=0.95 and <I>(uz)= 0.975 respectively, i.e. the area under thenormal curve from - ro up to 1.645is95% and that from- Cf:)up to 1.96 is 97,5°1<"Hence by symmetry the area betw~en .-1.645 and 1.645 is 901Yt)and between -1.96 and 1.96, it is

95 (Yo,For applications in process capability indices we note thearea under the normal curve between - 3 and 3 to be 99.73 (%,

Thus the area outside these limits is 0.0027' (an extremely smallproportion),

If Xt,X2,...,X" arc indcpendent random variables witlicommon N(~, (J2) distribution, thcn

"

1 L: Xi

(2

)- i= 1 (JX=-(XI+X2+...+X")=-,,,N ~,-n n n (1,26)

1,5 (CENTRAL) CHI-SQUARED DISTRIBUTIONS ANDNON-CENTRAL CHI-SQUARED DISTRIBUTIONS'

1.5.1 Central chi-squared distributions

If U I, U 2, ..., V v are mutually independerit unit normal vari-ables, the distribution of

X~=UT+U~+... +U~

has the PDF

frt(X)=[2ivr(~)J-1xh-lexp( -i) x>D

"

(1.27)

where l'(a) is the Gamma functioll ddilH.:d by

r(a)= f '" y'x-Icxp(--y)dy0 '

, (1.2R)

.

1

~~

II

It

~)

II

JI)

,'HI

Central and non-centredchi-squared distributions

(Gamma function is a generalization of factorials; S1l1cef(n+ l)=nl for any positive integer n.)

This is a X2 (Chi-squared) distribution with v degrees offreedom. Symbolically we write

19

L: Uf"'x~;= I

The expected value of a x; variable is v and its variance is 2v.If VI, V2,..., Vvare mutually independent normal variables

each with expected value zero and standard deviation (J(variance (J2);then

w= Vi+ V~+.., + V~has the PDF

fW(W)~[2jWW}!'-'exp(;~~) w>O

Sym bolicalJy

(1.29)

W"'(J2X;

JY has expected value (J2V and variance 2(J4V, Generally

'( 2

) -E [( 2)r] _2',rch+r)

/-lrXv - ,Xv - rch)

(= v(v+ 2).., (v+ 2 ;=1) if r is a positive integer).If r:E:;!v,then /-l;(X;)is infinite.

Generally a distribution with PDF

lx(x) = fi:x;(a)XCC-lexp(~x) x> 0; a> 0, {3> 0

is cal1ed a gamma (iX,P) distribution.

(1.30)

Page 15: 041254380 x Capability Indices

Introduction20

A chi-square random variable with v degrees of freedom(X~) has a gamma (tv, 2) distribution., If X1,X2,...,X" are independent N(~,(}2) random vari-ables, then 'the sum of squared deviations from the mean' isdistributed as X;-1 (}2.Symbolically (d. p. 13)

"~ -2 2 2L.. (Xi-X) "'X,,-I(} .

i= 1 '

(1.31 )

Also

"I.tSj X)2 andi= 1

"

X /I I L Xjj= 1

(which is distributed N(~, (}2/n)) are mutually independent.

1.5.2 Non-central chi-squared distributions

Following a similar line of development to that in Section1.5.1,the distribution of

X~2(b2)=(U 1+ ~d2 +(U 2+ ~2)2 + .., +(U v+ ~\)2

where Vi'" N(O, 1) with

v

L a=b2j= 1

has the PD F.

ct:J

[(c52

)j 1

J (c52

)fX~2(62)(X)=j~O 2' Ji. exp ':-2 I

.

h

Central and nor.-central chi-squared distributions 21

{ (

\

}

-I

( )I, \I J -x

x 2,v+Jr J+"2) x,v+rlexp"2,'-,

(iF

)=.i~O ~ 2- fx~+,/x)(1.32)

where

p.i(c52

)= (c52

)j exp(~~:2)2 2 ~=Pr[Y=jJ.I.

if Y has a Poisson distribution with expected value tc52.,This is a non.central chi-squared distribution with v de-

grees of freedom and non-centrality parameter c52. Note thatthe distribution depends only on the sum c52 and not on theindividual values of the summands ~f.

The distribution is a mixture (see section 1.9) of X~+:jdistributions with weights Pj{tP) (j=O, 1,2, ...).

Symbolically we write

x'v\P)", x~ I 2.! /\ POi(c52

), .T 2(1.33)

I

The expected value and variance of the x~\P) distributionarc (v+b2) arid 2(v+2c52) respectivdy.

Note that

k

(k

)k

L X~'h(c5~) '" X~2 L b~ ,. with v= L,

Vhh=1 h=l h=1

(1.34)

if the Xi'S in the summation are mutually independent.

Page 16: 041254380 x Capability Indices

22 1111/'oi!l/cl iOIl

1.6 BETA DISTRIBUTIONS

A random variable Y has a standard heta (ex,In distrihution ifits PDF is

. 1

fy(y) B(rx, /3) yOt-l(l - y)/J-l O~y~ 1 and IX,13>0(1.35 a)

where B(a, h) is the beta function, = f( rx)r(fJ)/n rx+ (1).

Thus

- r(a+{3) y"-l(1_y)P-l O~y~lfyCv)- f(rx)r(fJ)

(1.35 h)

If a = fJ= 1, the PDF is constant and this special case isballed a standard uniform distribution. . .

If Xl and X2 are mutually independent and Xj"'X~j(j= 1,2) then

Xl. (V1 V2)2 . -~'" eta - -Xl+X2"'X\,1+\,l and X1+X2 B 2' 2/

(1.36)

m9reover Xl /(X 1+ X :2)and Xl + X2 are mutually indepen-dent. (These results will be of importance when we shallderive distributions of various PCls.) .

The rth (crude) moment of Y is

I/;.(n 1':1y",nIX -1-r) f(IX + f/) IXrrJ

I '(a)1 '(a I (i I /') (IX I {I)I,.I(U7j

where a[b]=a(a+ l)...(a+b-1) is the bth ascending factorialof a. If r~ -a, It~(Y) is infinite. .

-

I)

t~

1'1

111)

111

I'\I

III

till!

Beta distributions

In particular,

a

E [ YJ = J1't ( Y.\ = ~ + 13

'. rx(a + 1)

E [Y 2] = It 2(Y) = (a + 13)(:;+ (1+ 1)

and hence

rxf3

va r( Y ) =-= E ( Y J ) - {E ( Y ) } 2 = (a + (J) 2 (IX + (J+ 1)

We note here the following definitions:

I. the (complete) beta function;

j'l

B(a, [3)= 0 y"-l(I-- y)P-l dy rx,/3>0

2. the incomplete beta function

I"

IJ,,(I'i.,fJ)°"',II" I(I'-y)/I t dy O<h<]0

3. the incomplete beta function ratio

1,,((1.,rn= B"i,~.jnJ3(o:,If)'

If }' has the PDF (USa) then the CDF is

Fy(y)= ly(rx,fJ) O<y<l

23

(1.38a)

(1.38'b)

(1.38c)

Page 17: 041254380 x Capability Indices

24 Introduction

1.7 (-DISTRIBUTIONS AND NON-CENTRAL(-DISTRIBUTIONS

If V,,-,N(O,a2) and S"-'Xva/~ are mutua!ly independentthen T= V/S= Ua/S has the PDF

f (1)=r(i(V+I)) (l -~yl(V"l)y} (vn)!r(1v) 4 v) .

(.1.39)

This is known as a i-distribution with v degrees of freedom,The t-dis,tribution is symmetrical about zero; thus E[T] =(),Note that V = Ua, where U is a unit normal variable,Hence

E[T'] = E [vlrurx~rJ = vlrE [U rJ E [(X~) - I"J (I AO)

(which is also the rth central moment in this c;\se sinceE ['1'] =0), The variance of l' is th us

vvar(T)= E[T2] = v.1.(v-2)-1 =-. v-2

(1A1)

A common and widely used statistic giving rise to at-distribution is

__~(X-iL-

{

I II_

}

!-,- L (X;-X)2n-1 i=1

(1.42)

where X is 'sample mean' as defined in section 1.2 and.XI,.." XII are mutually independent N(~, a2) random vari- 'abies. This statistic ha$ a t-distribution with (n- I) degrees offreedom.

If T is defined as above, but expected value of V is not 6but b, so that V= (U+ 6/a)a with U"-' N(O,1) then the dis-

II.I

t-distributions and non-central t-distributions

tribution of

U+~T=~

Xv

~

25

(1.43)

is called a non-central i-distribution with v degrees of freedomand non-centrality parameter Na,

The expected value of T' (the rth moment of T about theorigin or the rth crude moment) is

E[Tr] = v~rE[ (U + ~)]x E[(X~)-!r]

In particular

(\'-1

)'

V

)J r 2 ~

Erlj ~ (2 r(}) a

and

,

((52

)1

E[T2]=v 1+ a2 v-2

(1.44)

(1.45 a)

(1.45 b)

If the XI'in the denominator of T is replaced by a noncentralx',().) the resulting variable'

II

U' (j+-

T'=~X~(A)

~

Page 18: 041254380 x Capability Indices

26 Introduction

has a double non-central t distribution with parameters ()/(Jand )" and v degrees of freedom.

1.8 FOLDED DISTRIBUTIONS

Generally the distribution of {O+ IX- OH is caned the dis-tribution of X folded (upwards) about 0; 'upwards' is usuallyomitted. It is used if it is necessary to distinguish it from thefolded (downwards) about 0 distt,"ibution which is the dis-.tribution of {O-IX -Ol}.

The foldednormal distributionformed by folding N(~, (J2)upwards ahout ()has the POF

'I )\

I (X-sV\ \

1(211 \' (\2-

\

C

~~-J3~'I exp -~, --() ) + exr - '2 -~ -";-_c..) \ x> II( IA() II)

If fJ= 0, the variable is IXI= (T Iv + c)I with expected value

(2'

)1

(C)2

)J!'l-.:; ,'~ exp - 2' .- ~ll-- 2(\}(- 6))(1.46b)

where [)= ~/(Tand the v,i.riance is

222J!2=~ + (J - J/t (1.46c)

Also

[".2

(~2

)J,3 2' v,. u

J!3=2 J!1 -~ J!1- ~ exp - -2- -(1.46d)

I)

.)

. I

II

Mixture distributions 27

and

X(J\ ( ()2

)v . " ) 4 , .

114= (4 + 6Co.L +-J(T + -r::-IIJ exp -- -2' " .../2rc \

2(""

3")

,23

,4+ C - (J- Jlt - J!1 (1.46e)

(I:landl (1l)(>I)),

The rolded (upwards) beta distribution formed by foldingbeta (0(,In about ~has the POF

1

B( IX+ Ii) {x~" '(1 - X)/1- 1 + xll- \ (1 -- x )U- \ :

1'<x<1

(1.47)

The corresponding folded (downwards) beta distribution hasa PDF with the same mathematical form but valid over theinterval O::S;x.:(! rather than -t::;;;x~ 1.

1.9 MIXTURE DISTRIBUTIONS

If 17,(x), F~('(), Fdx) are CDFs then

k k

Fx(.x)= L w)<)x) Wj>O, L wj=lj=\ j=l

(1.48)

is the COF of a mixture distribution, with k componentsF\(x), Fdx),ancj weights W\,...,COk respectively.

If the PbFs Jj(x) = Fj(x) exist, then the mixture distribu-tion has POF

k

fx(x) = L w,Jj(x)j= 1

(1.49)

Page 19: 041254380 x Capability Indices

2~ /111willie! ion

The rth crude moment or X is

. k

!-1~(X)= L: Wj!-l),.j= 1

(L50)

where Il~il'is the I;th crude moment of the jth componentdistribution.

In particulark

E[X]:;::: L: ()Jj.~jj= 1

and

k

E[x2J= I: (J)j(~T+O07)j= 1

so that

"

(k

)2

var(X)= j~l (J)j(~T+(Jn- j~l (J)j~j

k k

- '\ U).(J2+ '\' (J).((.~Z;)2L. .1.1 L. .1-.1j= 1 j= 1

(1,51 a)

(1.51 b)

(1.51 c)

where ~j' (Jj arc the cxpected value and standard deviation,respectively, of the jth component distribution and

k

~'= L wl~.I (=E[XJ)j= 1

If Fi(X) = G(x, OJ),where G(x, OJ)is a mathematical. functionor x tlnd OJ (e.g. a' fUI)t:Uon corn.:sponding to thc N(O,frY)distribution), we may regard a parameter e as having thediscrete distribution

Pr[0==Oj]=w.I j=I,...,k

~II.

,Afullinomwl (multivariate normal) distributions 29

More generally. e might have a continuous distributionwith PDF .I~")(t). Thcn the CDf of the mixture distribution is

F(x)= J-~<x/~")(I)G(\". 1) dl(1.520)

If

dG(~ = g(x, t)dx

is the PDF corresponding to G(x, t), the PDF of the mixturedistribution is

. f(x) = f~(D?}(x, t).f'e(t) dt(1.52 b)

We denote 'mixture' symbolically by !\ e where 0 is calledthe mixing parameter. For example the distribution construc-ted hy mixing N(~,(J2) distributions with (J"'2 having agamma (0';,Ii) distribution will be represented symbolicalJy as

N(~, (J2) 1\ gamma (0:,f3)(f"2

(1.53)

1.10 MULTINORMAL (MULTIVARIATE NORMAL)DISTRIBUTIONS

The [J random variables X = (XI"", xl'J' have a joint multi-variate normal, or multi normal distributionif their joint PDFis Qf the form

Ix(x)=(2n)-lfiIVol-l exp{ -!(x-~)'VO1(x-~)) (1.54)

wIICI'C' HLX;I- Sj (.ice-I,...",) i.e. qXJ-'S, ,Inti Vo isthe, variance-.covariance matrix of XI"", X fl' The diagonal

Page 20: 041254380 x Capability Indices

30 1ntroduction SYSTemsof distributions 31

elements of Yo, (J11>...,(J1'1'arc thc varianccs of X\>...,XI'

respectively, and the off diagonal element (Ji}is the covarianceof Xi and X}, with

1.11.1 Pearson system

This is defined as distributions with PDF U(x)) satisfying thedilkrential equation

(Jj} -'- {lij((Jij(Ji.J( 1.55)

Of course, ~ is the standard deviation ,of Xi~ {lij is thecorrelation between Xi and X}.

For i=1,...,p, Xi",N(~i,On. The quadratic form(x-~)'Vo1(x._;) generalizes the quantity (x-~)z/(Jz=(x -- ~)(J- 2(.\-~) which appcars in .lhc PDl,' of lhc lIn~variatcnormal distribution. The ellipsoids .

d(logf(x)) -(a+x)-~~~ - co+Cjx+czxz

(1.58)

(x - ;)'V O~I(X-cJ = C2(1.56 )

This system was developt:d by Karl Pearson (1857-1936), inthe decade 1890 -1900.

The values of the parameters Co, Cj, ('z determine the shapeof the graph of f(x). This can vary quite considerably, anddepends on the roots of the quadratic equation

CO+C1X+CZXZ=0 (1.59)

(where CZ is some constant) define contours of equal prob-ability density (I,(x)=cOI1SI.). The probabilily lhat X railswithin this contour is Pr[xi, < cZJ; the volume inside the

ellipsoid is '

Among the families (commonly cal1ed Pearson type curves)are

{

nil' lV Ii

}a cP

1(1 +~p)(1.57)

. (:0>0; Cl = C2 = 0 - normal distribution (section 1.4)

. (Type III) Cz:":: 0; C1 :;6 0 - Gamma distribution (section 1.5)

. (Type 1) Roots of (1.59) real but opposite sign -- Betadistribution (scclion 1.6). (TVI1£' VII) CI=O, co,cz>O this includes thc 1 distribu-

tion (scdion., 1.7)IVai is called the generalized variance of X.

1.11 SYSTEMS OF DISTRIBUTIONS

Thcse families, we have already encountered, and further Idetails are availablefor example,in JohnsoQand Kotz (1970, I

pp. 9-15).

In addition to specific individual families of distributions, ofthe kind discussed in sections 1.3-1.10, there are systemsincluding a wid? variety of families of distributions, but basedon some simple defining concept. Among these we willdescribe first the Pearson system and later the Edgeworth(Gram-Charlier) system. Both are systems of continuousdistributions.

1.11.2 Edgeworth (Gram-Charlier) distributions

These will be encountered in Chapter 4, and will be describedthere, but a brief account is available in Johnson and Kotz(1970, pp. 15-22).

Page 21: 041254380 x Capability Indices

32 Introduction Facts from statistical methodology 33

1.12 FACTS FROM STAT1ST1CAL METHODOLOGY

1.12.1 Likelihood

are callcd sufficient for the parameters in the PDFs ofthe Xs.

It follows that maximum likelihood estimators of 0 must hefunctions of the suflieient statistics (TJ,...', li,):::::T. Also thejoint distribution of the Xs, given values of T does not depend011O.This is sometimes described, informally, hy the phrase,T contains all the information 0:1 ().' If it is not possible tolind all)' function 11('1')of T with expecled value zero, exceptwhen Pr[h(T) :::::OJ:::::1, then'(T is called a complete sufficientstatistic (set).

This section contains informat'ion on more advanced topics ofstatistical theory. This information is needed for full appreci-ation of a few sections later in the book, but it is not essentialto master (or even read) this material to understand and usethe results.

For independent continuous variables Xl"", X" possess-ing PDFs the likelihood function is simply the product oftheir PDFs. Thus the likelihood of (X1,..., X,,), L(X)is given by

1.12.4 Minimum variance unbiased estimators

"

= rlfdX;)j = 1

( I.60)

It is also tr'ue that among unbiased estimators of a parameterI an estimator determined by the complete set of sufficientstatistics Ti>..., Th will have the minimum possible variance.

Furthermore, given any unbiased estimator of y, G say,such a minimum variance unbiased estimator (MVUE) can,beobtained by deriving

L(X)=fxJX 1)f:dX 2)'" fx.,(X,,)

(For discrete variables the f..dXj) would' be replaced by theprobability function J>Xj(Xj)where J>,dx;)=PrlXj=xiJ) Go(T)= E[GITJ (1.61)

1.12.2 Maximum likelihood estimatorsi.e. the expcctcd valuc of G, given T.

This is based on the Blackwell-Rao theorem (Blackwell,1947; Rao, 1945).

If the distributions of X J,"', X" depend ('In values of par-ameters e1, {)Z, . .. , es ( = 0) then the likelihood function, L, is afunction of 9. The values ()1, Oz, ..., Osl = 9) that maximize L are,called maximum likelihood estimators of e J, .,. , Osrespectively.

Example

1.12.3 Sullicicnt stutistks

If XI>"', X" are independent N(~, (Tz) random variables thenthe likelihood function is

,

If the likelihood function can be expressed entirely in terms qffunctions of X ('statistics') Tl(X), ..., T,,(X)the set lTl>..., Th)

. /I 1

[t

{

X 1:

}

2

]L(X)= I1.--c-exp - - j -- S;=1 (T,/br. 2, (T

Page 22: 041254380 x Capability Indices

34 Introduct ion F acts from statistical methodology 35

- Il

I /I

\- (0" iW exp - ~2I (X i-- l;)'~Y L-Iq ,~ 1

- 1 [I

{

/I

- (aJ2n)"exp - 2a' J, (X,-X)'+n(X -~)'}J(1.62)

1.12.5 Likelihood ratio tests

SlippOSC WL: wanl 10 choOSL: hdwL:en hypotheses Ilj (i 0, I)specifying the values of parameters ()= (01)..., Os) to ,be9j= (elj,..., es;) (j=O,1). The two likelihood functions,are

The maximum likelihood estimator of ~is

i? -':,=X. (1.63a)

. /I

L(XIHj)= TIfx,(xiIOj) j=O,li'" ,

(1.65)

The maximum likelihood ,estimator of a isThe likelihood ratio test procedure is of the following form

{

/I

}

1

6= n-l.L (Xi-X)2,: 1(1.63b)

If L~~JHd ,L(XIHo) <..c, choose Ho

(~, 62) are sufficient statistics for (~,0").they are also, in factcomplete sufficient statistic E[4J =~, so ~ is an unbiasedestimator of ~.

Since E[X IX, 62J = X, 4(= X) is also a minimum varianceunbiased estimator of ~.

But since

If f~(XIII d .L(X!Ho) ;?-:c, Cfl00SC lI,

( 1.66)

E[6J = . ;(.

~I~- (2

.)1

rez(n-l)) ~ ai=a(1.64)

The constant c can be chosen arbitrarily. If sufficient informa"tion on costs and frequencies with which H0, H 1 respectivelyoccur is available, it. is possible to choose c to produceoplim:d results in terms of expectedeost.

Sorndinlcs (' is chosen so as to make the probability ofnot choosing Ho, when it'is, indeed, valid i.e. 'making anerroneous density equal to some predetermined value, e, say.T,he procedure is then termed a 'test of Ho against thealternative hypothesis H 1, at significance level e'. Ho is oftentermed the 'null' hypothesis, even when there is no specialmeaning attachec,1to the word 'null' in the particular circum-stances.

.The significance level, I:, is also caned tht: probability oferror of'the 'first kind'. The other kind of error, choQsingH0

when the alternative H l' is valid is called an error of the'second kind'.

it is not an unbiased estimator of a. However

6(::)\ J '(in-I))2 rein)

is not only unbiased, but is a MVUE of a. (There are moreelaborate applications. of the Blackwell-Rao theorem inAppendices2A and 3A.)

Page 23: 041254380 x Capability Indices

36 I ntr()duction

BIBLIOGRAPHY 2Blackwell, D. (1947) Conditional expectation and unbiased sequen-

tial estiIl1ation, Ann. Math. Statist., 18, 105-110.Boyles, R.A. (1991)The Taguchi capability index. .TournaiofQuality

'f.'ellllololl.1',23(I), 17 2:\.Burke, R..J.,Davis, R.D. and Kaminsky, F.C. (1991) Responding to

statistical terrorism in quality control, Proc. 47th Ann. CO/1fArneI'.Soc. Quat. Control, Rochester Section, March 12.

Chan, L.K., Cheng, C,W., and Spiring, FA (1988) A new measureof process capahility. ./0111'11111(!( (jlllllily .ft.dllloloilY. 211\:1).162--175.

Dovich, R.A. (1991a) in ASQC Statistical Dipision Newsletter,Spring,S.

Dovich, R.A. (1991b) Statistical Terrorists II - its not sq(e yet, Cpk isout there,'MS, Ingersoll Cutting Tools Co., Roc.kford, Illinois,

E1andt, R.C. (1961)The folded normal distribution: two methods ofestimating parameters from moments, Technometrics, 3, 551-62.

Gunter, B.H. (1989a) in Quality Progress, January, 72.Gunter, B:H. (1989b) in Quality Progress, April.Johnson, N,L. and Kotz, S. (1970) Distributions in Statistics: Con-

tinuous Univariate Distribution - 1, Wiley: New York.Kilska, D. (1991) in Qlllllity Prowess, March, R.McCormick, C. (1989) in Quality Progress, April, 9-.10.McCoy, P.F. (1991) in QIUllityProwess, fo\:hruary, 49 55.Peam, W.L., Kot:.!;,S. und Johnson, N.L. (1992) Distl'ibutiomd and

inferential properties of process capability indices, .T.Qual. Tech-nol., 24, 216-231.

Ran. C.R. (1945) Information and the accuracy attainahle in (heestimation ofstatislical parameters. Hli/I.('tI/c1l1/1IMllih. .)0('" J7.

. 81-91.Spiring, FA (1991) in Quality Progress, February, 57- 61.Steenburgh,T. (1991)in QualityProgress,January, 90.

The basic process capabilityindices: Cp', Cpkand their

modifications

2.1 INTRODUCTION

MlIch of the material presented in this chapter has alsoappeared in several books on statistical quality control,published in the last decade. These books usually provideclear explanations and motivation for the use of the pelsthey describe, and often (though not always) emphasizetheir limitations, and provide warnings intended to assistrecognition of situations wherein extreme caution is needed.In the last few years, also, many issues of Quality Progresscontain letters to the Edit<)r in which the use of PCls iscriticized (and only occasionally defended), often combinedwith passionate appeals for their discontinuance on thegrounds that their inherent weaknesses guarantee wide-spread abuse. Some referees for this book have even warnedus that we may cause serious damage by further propagat-ing the inadequate and dangerous concepts implicit in use ofPCls. .

Our opinion is that we hope (even believe) that we mayprovide background for rational use of PCls, based, indeed,on knowledge of their weaknesses. We do not, of course,recommend uncritical or exclusive use of these indices. The

37

Page 24: 041254380 x Capability Indices

38 Tl1e h([sic !Irocess C([fli/hilit.\' indices

backlash against PCls seems to CO.11efrom two broad, 'andopposed, sources: (i) a lack of appreciation of statisticaltheory and its applications; and (ii) a demand for precisestatistical analysis, to exclusion of other constraints. So.urce(i) does not fully comprehend the meaning of the indices,and resents pressure from above to include 'meaningless'numbers in their reports - which may, in,deed, contradict theactual state of the process as seen by experienced operators.They regard this as part of the 'tyranny of numbers' whichhas its roots in the proliferation of statistical (i.e. numerical)information now becoming available, and tending to domi-nate our existence. Often, this resentment underlies the un-popularity of, and lack of respect for 'statistics' amongotherwise enlightened engineers, technicians, sales managersand other professionals.

Source (ii), on the other hand, arises from those who arewell aware that assumptions, on which P('ls :Ire h:lsed. :II'Coften violated in practice. 'l'his is also true of many olhl:rstatistical procedures (such as ANOV A) which, however, arebetter tolerated, and more widely accepted. albeit somewhatgrudgingly.

After these introductory remarks, we now proceed todefine and examine the earliest form of PCI, generally de-noted by CpoThe background information in Chapter 1 willassist in understanding the basis for many of our results -especially in regard to estimators of PCls - but they are notessential for general comprehension. .

2.2 THE Cp INDEX

Consider a situation wherein there arc lower alld lippeI'

specification limits (LSL, USLrespectively) for the valu.e ofa measured characteristic X. Values of X outside theselimits will be termed 'nonconforming' (NC). An indirectmeasure of potential ability ('capability') to meet the re-

II I

I.

0 ~.

I'

~I

~

t'

~I

IIIII'hIIt

The Cp index 39

quirementindex.

(LSL < X < USL) the process, capabilityIS

USL-- LSLC ------ p -- 60'

(2.1 a)

where (Jdenotes the standard deviation of X.

Clearly, large values of Cp are desirable and small valuesuudesirable (because a large standard deviation is undesir-able). The motivation for the multiplier '6' in the denominatorseems to be as follows:

If the distribution of X is normal, and if ~, the expectedvalue of X, is equal to !(LSL+ USL) - t.he mid-point of thespecification interval - then the ex-pected proportion of NCproduct is 2<1>(- diu) where d=-!(USL-- LSL) - the half-length of the specification interval.

From (2.1a) we see that

dCp=30'

(2.1 b)

so the expected proportion (p) of NC product (assuming ~=t(LSL + USL)) is

2<1)( 3 Cp) (2.2)

If Cp= 1 this expected proportion is 0.27%, which is regardedby some as 'acceptably small'. In fact, it is often required thatfor acceptance we should have Cp::::;c with c= 1,1.33,or 1.5corresponding to USL - LSL = 60',So' or 90'.

It is important to note that Cp= 1 does not guarantee thatthere will be only 0:27°1<)of NC product. In fact, all that itdoes,guarantee is that, with the assumption of normality andthe relevantvalueof u. there will neverbe less than 0.27'10.expected proportion of NC product! (It is only when~=1(LSL+USL) that the expected proportion is as small as

,

Page 25: 041254380 x Capability Indices

40 Till' /Josi(' fJI'()(,I'SS (,OfW/Jility indi('('s

0.27°;().).What the value Cp= I does indicate is that it ispossible to have the expected proportion of NC product assmall as 0.27%, provided the process mean ~ is pn~ciscly'controlled at ~(LSL + USL).

. More recently, the view has been put forward that expectedproportion of NC woduct is not (or, perhaps, should not he)the primary motivation in use of PCls, but rather that lossfunctionconsider~ltions should prevail. Although there arcsome attractions in this attitude, it does not explain themultiplier '6' in the denominator of (2.1 a). Also, uncertaintyin actual proportion NC is balanced (perhaps more thanbalanced) by uncertainty in knowledge of the true loss func-tion. The form of loss function by far the most favoured is aquadratic function of ~, though little evidence for this choicehas appeared. Also, if one is really interested in a lossfunction, why not just estimate its average value, and notsome unnecessarily complicated function (e.g. reciprocal) ofit'! Indeed, there would seem to be no need for LSL and USLto appear in the PCI formula at all (except in as far. as theymay define the loss function). The title and content of a recentpaper (Carr (1991)) seem to imply, not only that the originalmotives underlying definition of PCI have been forgotten, but'that they are now returning. Constable and Hobbs (1992) alsodefine 'capable' as referring to percentage of output withinspecification.

It has even been suggested that 2<l>(- 3Cp) itself be used asa PCI, since it is the minimum expected (or the potentiallyattainable) proportion of Nt items provided that normality isa valid assumption. Lam and Littig (1992) on the other hand.suggest defining a PCl

tpp=.\<)-I(1(r> I I))

and using the estimator

Cpp=!CD-1(!( p+ 1))

The Cp index 4)

based on an estimator of p from the observed values of X.Wierda (1992) suggests using --t<f)- 1(p). Appendix 2.A con-tains discussionof estimators of p, based on the assumptionthat the process distribution is normal.

Herman (1989) provides a thought-provoking criticism ofthe PCl concept, based on engineering considerations. Hedistinguishes between the 'mechanical industries' (e.g. auto-motive, wherein Cp first arose) and the 'process industries', Inthe former, 'emphasis is on the making of an assembly ofparts with little if any measurement error of consequence',while this is not generally true for the process industries.

The (Jin the denominator is intended to represer,it processvariability when production is 'in control', but thd quantityestimated by

1 II

S2=- L (Xi-X)2n-l}=1

i,I

(see (2.3) below) is (assuming no bias in measureme~t error)

equal to (J2+(variance of meas~rement erro~). \Herman further notes that If allowance IS mad

~

.,>,for lot-to-lot variability, both (J2 and the second term ave twocomponents - from within-Jots and among-lots ariation.Again, the (T in the denominator or Cp is intcn

,

dC

,

'd t<

\

"rerer to

within-lot process variatiol,1. This can be consider lbly lessthan the overall standard dcvip.tion -- (Jtotabsay. Herman

,suggests that a different index, th~ 'process perf9rmanceindex' (PPI).

(USL-LSL

p =----. P 6 (Jtotal

1

,'

might 'have more value to a customer than Cp'.

. Table. 2.1 gives values of the mipimum possible e~pected

proportIon (2<1>(- 3C,)) of NC l(ems correspo (Og. to

Page 26: 041254380 x Capability Indices

\

I

)

I

I

\

\

>

6'0..

0 0..

;':::::0'" 0

"""I'>'r- '"

'-":r-'" .....

rr,

S0..

'.:!(o..0 0

NIl'> ~ i7\..,fV)

"""

II

s";~ 2:

Or-OONO. . r--ON

II

'"

I

~ §.

E -II'>~ 0..

.~ - 25 ~u 0 IIZ'0 Si:: ~ 2:0 r-.- NIl'>V) r-

~ - 'b ""!0 .0g. 0 II....

0..

]i~E;:I

E'S~

.s~ 2:

O~ ~ '"2~ ~ 25 .8NOo ::=

II S--- ...

g,

""

- UI"'i",QJ I

:;s -..;....

~ ""oS<f-o UN

'C'

i::e<j0-

II

Os0-0-

i:...J

Estimation oj Cp 43

selected values of Cp, 011the assumption of normal variation.One can see why small values of Cp are had signs! But largevalues of Cp do not 'guarantee' acceptability, in thc absence ofinformation about the value of the process mean.

Montgomery (1985) cites recommended minimum valuesfor Cr, as follows:

. for an. existing process, 1.33

. for a new process, 1.50

For characteristics related to essential safety, strength orperformance features - I'or example in manufacturing of boltsused in bridge construction - minimum values of 1.50 forexisting, and 1.67 for new processes, are recommended.

General1y, we prefer to emphasize the relations of values ofPCls to expected proportions of NC items, whenever this ispossible.

2.3 ESTIMATION OF Cp

The C'i1lyparameter in (2.1a) which may need to be estimatedis (J"the standard deviation of X. A natural estimator of (J is

~ where

-r I /I(J' \.

In-I L., (Xij= I x )".r(2.:1)

1 /I

X =- I Xjn j= 1

If the distribution of X is normal, then 82 is distributed as

-~1-(J2 x (chi-squared with (n-I) degrees of freedom),11-

(2.4 a)

Page 27: 041254380 x Capability Indices

44 The hasic process capahility indices

or, symbolically

, 1_(J2 2n-l Xn-1

(2.4b)

(see section 1.5).Estimation of (J using random samples from a normal

population has been very thoroughly studied. If only

-1 6 3Cp = USL-LSL (J=d(J

(2.5)

had been us~d as a PCI, existing theory could have beenapplied directly to

-- I 6 - 38Cp =USL-LSL l1=d

(2.6)

which is distributed as

C; 1{Xn-l/J(n-l)}

Analysis for

C =USL-,~,SL=_~=:,:C" 68 38 8 " (2.7)

(distributcd as C"~I";x,, I) is a hit morc complicated, hutnot substantially so. Nevcrtheless, there are several papCl:Sstudying this topic, including Kane (19X6),Chou ('( (/1.(I<)90),Chou and Owe'n (1989) and Li, Owen and Borrego (1990). \V-.:give here a succinct statement of results for the distribution ofCp, assuming X has a normal distribution. Fig. 2.1 (see p. 65) /

exhibits PDFs of Cp for various vah.;es of d/(J=3Cp. From. I II

Estimation of Cp

(2.4) and (2.7), we have

pr[

C\,> C

] = Pr[x~ - 1 < (n -- I)c - 2 Jc" ,

S. . I) [' J - 2

] h,Inee r.Xr7--1 ~X"-1.1: =i:, we ave

[2 2

J(J 2 -2 (J ,2 -

PI' n-lX"-1."/2~a ~n-lXn-l,I-"/2 -I-a

The interval

(J~)82 ~'I)cr~ )2 ' 2X,,-I, 1-,,/2 Xn-I,"/2

45

(2.8)

(2.9a),

is a lOO(l-ap,-;) confidence interval for 0 2; and so the int-.:rval

( 6 (n-l);., 6 (n-l)l -).iJSL'='-LSLX~~~.~~~~' USL- LSL X:,:~a

is a 1OO(1- a)% confidence interval for C; 1; and

(~JSL-=-~~!:: X"

.':J..",/2 USL- 'JSL Xn-1, 1-,,/2 )68 (11-1)1' 6cr (n-l)!

= (Xn-~c" Xn-l,l.-~/2C )(n I)' I (n-l)' p

(2.9h)

(2.9c)

is a 1O0(1-iX)% confidence interval for C". Lower and upper100(1-a)iI'~1confidence limits for CI' arc, of course,

Xn- 1" - X(~C and " .

,-1,1-,,-(11-1)~ 'p

!- C(11--1) p

respecti vely.

...

Page 28: 041254380 x Capability Indices

46 The basic process capability indices

lI' t:lhks of percen[age points of [he chi-sqll:lred distrihll-tion arc not availabk, it i.s necessary [0 use approximateformulae. In the absence of such tables, two commonly usecl

approximations are

X,.a ~(" - ~)! + -_:'F-)2

( Fisher)

and

(2

(J

)\

)J

X".a~"\ l-9,,+za 9~'(Wilson-Hilfel'ty)

where (l)(za)=ex. (cf. section 1.4).With these approximations we would have (from (2.9 c)) the

100(l-ex)°It) confidence interval for Cp: '

(n~l)\Cp((n-iY- Zjr}

1 -((

3

)\ ZI-~2\

(n-1)jC, n-2 + fi)(2.10a)

(using Fisher) or

-(

2

(2

)1

)Cp ,1- 9(n-1j-Zl-a/2 9(11-1) ,

-

(2 '

(2

)~

);

Cp 1-9(m-1)+ZI-a/2 9(n-1)(2.10 b)

(using Wilson-Hilferty)Heavlin (1988), in an unpublished t.echnical report avail-

able from the author, attacks the distribution of S-1 directly.

. III.Il'

tt!

,I)

~

tv)

I

Ilhl:

(j

1, II

IS

11'1

11

illfilii

,all rI

Estimation of Cp

Assuming normality he introduces the approximations

4.7

f 1}

ae LS IJ~ (I I 41.n- I)

1

var(S - 1 )~ 2(n - 3)0-2

1.2.! I lIj

(2.11 b)

(cf. section 1.2)From (2.11 b) we would obtain the approximation

1 C~var(Cp)~2 (11--3)

(2.11c)

I

However, Heavlin (19'88),presumably using terms of higherorder than n-l in (2.11b) obtains

c2

((,). p 1+-

var(Cp)~2(n-3) n-1'I (2.11 d)\,

The interval

(\{I C(n~3j.(1 +~~-i)YZ1'a()}'

Cp{1+(2(n~3) (1+ n~)YZ1-a/2}\ (2.12)

Iis suggested as a 100(1- ex)(Yt)confidence interval for Cpo

Additional generality may be gained by assuming solelythat 8 is a statist'ic, independent of X, distributed (possiblyapproximately) as (xflJI)(J. (The above case corresponds tof= 11- 1.) This includes situations in which ()' is estimated asa multiple of sa.mple range (or of sample mean deviation).

Page 29: 041254380 x Capability Indices

48 The basic process capability indices

From (2.7), Cp is distributed as

USL-LSLy:[=y:[Cp,60" Xf Xf

(2.13)

so G=(Cp/Cp)2 is distributed as flx}l, and the probabilitydensity function of G is

f'

( )f If \ r- 1 r 1 f

' ~'I 1 0(' If =.. ,,'" ---, i f " eXl)-' If <'

If

, ., :2 '/1 'dI)' .', ' ' ,

The rth moment of Cp about zero is

E[C~,] =f \rC~E[X;rJ =f lrC~E[(X,7r !rJ

('.1')\" rnU-r)) cO'2 J'( 1,n I' .

In particular

f-l )(

'

)' r(2'" 1E[C J= L ~ CP= 'j;-Cp. P I. 2 f

r 2,

A 2 f 2

E[C p ] =.1'- 2 C p ,

and

var( C1~)= (f! 2 -bi 2 )c~

(2:l4;

(2.1))

(2.16a)

(2.16h)

(2.1()c)

II

~,

.Ii

,Jj

/'1

Estimation of Cp 4~

where

(2)~ I'(ln

bf= l rcfU-l))

The estimator

c~ = hfCp (2.17a)

has expected value Cp and so is an unbiased estimator of CpoIts variance is

. ,(CA I

) - ffb} I}

( ' 2

Vdl p -If- 2- p(2.17b)

Table 2.2 gives a few values of the unbiasingfactor, bf'

For f greater than 14,an accurate approximate' formula is

b ~l-~ f'-lf 4'

(2.18)

(cr (1.19a)). With this approximation

. . " ~' ,((XI 1,_9) 2 ('I~Vd I( ( p) '- (I":' 2)(4.1 - 3)

(2.1() 1I)

8/+9 C2var(C~)~ 16f(f-2) p

(2.19 b)

,I

Table 2.3 gives numerical expet:ted values and standarddevi~tions (S.D,s) of ,Cp(not C~) for the same values of f as inTable 2,2, and d/O"= 2(1)6. (Note that d=t(USL-LSL)).Since E[CpJ/Cp and var(Cp)/C; do not depend on Cp, onecould deduce all values in Table 2.3 from those for

Cp=1(d/0"=3), for example.

Page 30: 041254380 x Capability Indices

-r--0\ 00lI) 0\0

\0'<toolI) 0\0

lI)0\ 00'<to\0

rr,'<t00'<t0\0

.....

0\ 00M 0\0

00'<tt--M 0\0

I

'<t~ 0\ t--

t NO;"-, 0

-Irl

I

~ '00~ '<t \0--- N 0\

£; dt:'

1

0X 0\\00\

ds;;N'i

l

'<t~.lS 0\

C3 d'"0::1

(;

>-

'<t.....

0\ 0\

d

NN<:J~C':If-<

"-00 ,-

0\ <;;'<t I'- ,.c;:

c: :'0

~ ..!::::J'-, Z

~ 0.V- ~0. 0.

'V V'-' r<), IIa '-'

vi bII ~

0vi

0\D

,.-,0.'vl J"-III

IJJ

;;:;'<7i'

<U....

'"

""

"-

0

:a::I

";J>

<1)

i3

0.'V'-0

.:!JI::0

S0~...:-

+"-II'

tf')

I"!

(j)

~~

::

0'<t~0r<) r--00r<)00\0-o\r<)~""'OOlr\r<)~""'o\o\r<) V> '<t r<) r<) ~ ~ ~ ~ ~ ~."";00000000000

r--0\~r<)V>'<t~0\V>':"'00~0000 00~V>'<tr<)r<)r<)~~0V> 0 0'0 0 0 0 0 0 o.=:>NNNNNNNNr--iNr--i.NN

('I ~ '<t 0 '<t -.:I" 00 ~ 'n ~ 000\ 0\ ~ 0 ~ r<) - 0\ 00 r-- ~ '.I'I0 '<tr<)r<)('I r--I('I - - '"""- -"";6000000::;;OCO

O\'<tr<)~--V>O~r<)ooor--OON~r<)~-OOO\O\O\OO~Ooor--r--r--r--r--r--~~~~~N"";"";"";"";"";"";"";"";"";"";"";"";

'<t~-OO\r---O\ooOr<)r--r-- 0\ 0\ '<t 0 co r-- "1 "'" '<1" (""'. N00r<)~~~-------0000000000::;;0 ,-0\ 0 0\ r-- 0\ '<t 0 r-- '<t ~ - r<)

I

r--v>-oor--~\C~lr\V>V>II')r<)

~3:::;:;:;:J:;:;:;:J:;:'n r-- 00 0 ~ 0 00 0\ - ".1',0 "n'n 0\ - 00 ".1',--t r 1- - 0 0 0\1.0<"I<"I- ~ - - - - - - 0

000000000000

r<)-.:I"oo~r<)r--r<)Or--~'<tr<)O

iq~8~q~~~CiCiCi$~-------------r-- 00 II') 0 '<t -.:I" ~ "', "'<t 0 r-- (""',r<) 0\ -.:I" ~ 0 0\ 00 r-- r-- r-- \0.\0 I-.:1" 00000000000::;;0000000

~ 0\ II') '<1" ,00 ".I', ~ 0 00 r-- ~ ..-, r--r<) ('1 0 0\ 00 00 00 00 ['--. r-- r-- (,- ~oor--r--\.O~~~~~~~~~oddoo~:i.doddcoo

:: ~II '<to\","O'\-.:I"O\-.:I"O\",,"""'- g'-., '<t 0\ - - ~ r~ r<),.-,"""-q' II')V>

. I

i

I'

t 1;-

i

~

I

The Cpk index 51

The population ("true') values of Cp are thc expected valuesforI'/',

The distribution of ('p when the distribution of X is Ilulnormal will be considered in Chapter 4.

We note that whell f = 24, for example (with n = 25, if/=n-I) the S.D. of Cp is very roughly 14--15°;;1of theexpected value (i.e. coef-I-icientof variation is 14--15%). Thisrepresents a quite considerable amount of dispersion.

As a further example take Montgomery's (1985) suggestedminimum value of Cp for 'existing processes', which is 1.33(see the end of section 2.2). The corresponding value of d/ais 4. If a sample of size 20 is available, we see (from f = 19 inTable 2.3) that if Cp= 1.33, the expected value of Cp would be1.389 and its standard deviation is 0.24. Clearly, the results ofusing Cp will be unduly favourable in over 50% of cases -actually we have from (2.8),with c= 1,

Pr[C;p> 1.33J= Pr[XI9 < 19J = 53.3%

Un is as small as 5 u= 4) and Cp = 1.33, then the standarddeviation of Cp is O.~7 - over half the value of Cp! 'Estima-tion' in this situation is of very little use, but if n exceeds 50,fHirlyreasonable accuracy is attained.

Another disturbing faclor possible elrccls of nOll-normal-ity - will be the topic of Chapter 4.

'Bayesian estimation of Cp has been studied by Bian andSaw (1993).

r'2.4 THE Cpk INDEX

,

The Cp index does not reyuire knowledge of the value of theprocess mean, ~, for its evaluation. As we have already noted,there is a consequent lack of direcl relatioll between Cp andprobability of production of NC product. The Cpkindex wasintroduced to give the value of ~some influence on the value

0vi

IIJJ

0vi

v>11.!.1

0vi

'<tluJ

0VJ

r<)1uJ

dvi

I[.l.J

Page 31: 041254380 x Capability Indices

52 The basic process capability indices

of the PCl. This index is defined as

min(USL-~, ~- LSL)Cpk= 3a (2.20a)

d -I ~-!(LSL + USL)I (using (1.14'e))= 3a

{ 1~-!(LSL+USL)I}

c" (2.2017)= 1 d

Since C,,=dj(3<T),we must have C"k~C", with equality ifand only if ~= t(LSL + USL). Kane (1986) also defines ,'C"k'for a general target value T by replacing i(LSL + USL) by Tin (2.20b). This index shares, and indeed enhances, the featureof Cp' that small values of the index correspond to worse

. quality. The numerator of Cpk is the (signed) distance of ~from the nearer specification limit.

Some authors have used the erroneous formula

min(I~-LSLI, IUSL-~I)3a (2.20c)

for Cpk' This does, indeed, give the same numerical value asformula (2.20a), provided'; is in the specification range, LSLto USL. However, if, for example,

~=LSL-d or ~=USL+d

formula (2.20c) will give the value dj(3a) for Cpk- the samevalue as it would for ~= !(LSL + USL), at the middle of thespecificationrange. The correct valuc, as givcn by (2.20f/) is~dj(3<T).

We will assume, in the following discussion, that

LSL~~~USL

-

...

'I

,.1*

The Cpk index 53

(If ~ were outside the specification range, Cpk would beI1cgalive, and the processwould c1earlybe inadequate forcontrolling values of X.)

If the distribution of X is normal, then the expectedproportion of NC product is

(LSL-~ ) { (

USL--(

)}<I) --;;- + 1- <I) -~ (2.21a)

From (2.20a) we see that, if t(USL + LSL) ~ ~~ USL then

CPk=USL-~3a

and

LSL-~ - (USL-- ~)-(~L- L_~~=Cpk-2Cp~ - Cpk~ _. 3a

(since Cp~ Cpk)'The exact expected proportion of NC pr'oduct can be ex-pressed in terms of the two PCls Cp and Cpb as follows:

<1>(- 3(2C"-C,,k))! <»(--3Cpk) (2.21 ")

lienee thc expected proportion or NC product is less than

2<1>( - 3Cpd (2.22a)

but it is greater than

<I>(- 3Cpk) (2.22 b)

The case LSL~~~!(LSL+ USL) can be treated similarly.Altho~lgh the preceding formulae enable information about

expected proportion NC to be expressed in terms of Cpk(and

Page 32: 041254380 x Capability Indices

54 The basic process capability indices

Cp), it can also (and more simply) be expressed in terms of theprocess mean, ~, and standard deviation (J.

Porter and Oakland (1990) discuss the relationships be~tween Cpkand the probability of obtaining a sample arithme~tic mean, based on a random sample of size n, outside controlchart limits. Although it is useful to be aware of suchrelationships, it is much simplcr to cxprcss thcm in terms orprocess mean and standard deviation, than in terms of PCls.Relationships between Cp and Cpk are also discussed byGensidy (1985)Barnett (1988)and Coleman (1991). .

Estimation of Cpk 55

the reduction to a single index had to be paid for by losingsepar:lte informatio(J on ]ocatio!1 (~) and dispersion (rr).

Another way of approaching Cpkis to introduce the 'lower'and 'upper' PCls.

C - ~-LSLpkl--- 3(J .(2.25a)

USL-~C pk II -=---:,-; (2.25b)

2.4.1 Historical noteand to define

It seems that the original way of compensating for the lack ofinput from ~ in the calculation of Cp, was to use as anadditional statistic,

Cpk = min(Cpkh Cpkll) (2.25 c)

Kane (1986) uses the notation Cpt. CPIi rather than Cpkt.CpktJ. He also introduces indices .

k= I~ - i(LSL+ LJSL)Id (2.23) T -.-LSL and

TCpkl=3(J

USL- T

TCpktJ=~

kJ~-TId

(our notation).A similar PPI to Pp (seesection 2.2)can be constructed by

modifying Cpk to

(Kane (1986) defines

but assumes T=i(LSL+ USL) in the sl~quel.). This was equivalent to using ~ (through Id and (J(through

Cp) separately. While it might be said that one might as weBuse ~and (J themselves, or their straightforward estimators, s:and 8, respectively, k and Cp do have the advantage of beingdimensionless, and related to the specification limits.

With their combination into the index

P '- min(~-;-LSL, USL -- 0pk -

3(J total

2.5 ESTIMATION OF Cpk

A natural estimator of Cpk is

Cpk=(I-k)Cp (2.24)Cpk= d-IX-i(~;L+ USLD

(2.26)

where 6-is an estimator of (J.

Page 33: 041254380 x Capability Indices

56 The hasic process capahility indices

If the distribution of X is normal then not only do we havea di~ributed (perhaps approximately) as XJ(J/ ',,;1]',but also:(a) X is 'normally distributed with expected value ~ andstandard deviation (J/Jn; and (b) X and a arc mutuallyindependent.

From (2.26), using (a) and (h). we find

ECC~'kJ=;rE[a-rJ i (-l)j (~)dr-jE[IX~W~SL+USL)lil. J () .I

=(~ )rE[Xf'J i (-l)j' (~)3(J j=O .l

x Cft)'E[I-f'-lX - ~(~SL + USL)} I J(2.27)

The statistic JIll j( -l(LSL + USL)I/(J has a 'foldcd nor-mal' distribution (see s'ection 1.8), as defined by Leone,:Nelson and Nottingham (1961). It is distributed as

IU+61

where U is a standard normal variable and

<5=';;/- {(-t(LSL+ lISL)}(J (2.28)

Taking r= 1,2 we obtain

,

(f'-'I

) ', ,

I' , , I- I

(f )J 2

l~_(2)

'

E[C,.] ~3 2 rG) u nn,

Estimation of Cpk 57

tIIIt

x cxr5 - n{~.=l(~~~_+~L~~21~1- !5-~(LS~+USL)1) 2(J2 ( (!L )

x {1- 2<1>( - ftl ~-t~LSL+ USL)I)}J(2.29)

and~

A f

[(d

)2

(d

)[(2

)1

var(Cpk) 9(f-2) - ~ -2 ~ Jm

{

n[~-'hLSL+ lISL)J2

}

1~-t(LSL+ lISL) Ixexp - +---

2(J2 (J

II

x {1-24>( -ftl~-!~SL+ lISL)I)}J

.f ~ .,.-}(LSL ,+ tJSL)1, 2 1'"1+ -~~ ~.;":_.,-"" L- + "'

,

'

I

j P(" (" '1 ), ,1~

(J 2 ;1! . " .' .' >,1,....I(2.30)

I, I'.I I

(see also Zhang et aZ.(1990))If Wc usc

[L" ~,

J

1

a= ---=-i" L (Xi-X? ,n i= 1

I,then f = n- 1. Some numerical values of E[CpkJ andS.D.(Cpk)are shown in Table 2.4 and corresponding values ofCpkarc shown in Table 2.5.

Note that Cpk is a biased estimator of Cpk'The bias arisesfrom two sources:

t

(a) Eca-1J=bj:l(J-li=(J-l,

This bias is positivebecause hr< 1 -, see Table 2.2 ~

Page 34: 041254380 x Capability Indices

5~ Th(' !Jusic I1I'0c(,sSC(/flu/lilily il/dic('s

(b) E[ftIX ~t(LSL+ U~L)I

J- ftl~-J(LSL+ USL)I~O.

, a a

This leads to negative bias, because ftlX' ~-!'(LSL+, USL)I has a negative sign in the numerator of Cpk.

The resultant bias is positive for all cases shown in Table2.4 for which ~=I=t(LSL+ USL)=m. Wheh ~=m, the bias ispositive for /1= 10,but becomesnegativefor larger numbers.Of course, as /1-HX),the bias tends to zero. Table 2.6 exhibits

the behaviour of E[CpkJ, when d/a=3 (Cp=Cpk=I), as nincreases, in some detail.

Studying the values of S.D. (Cpk) we note that the stan-dard deviation increases as d/a increases, but decreases as1~-t(USL+LSL)1 increases. As one would expect, the stan-dard deviation decreases as sample size (Il) increases. Forintcrprclalion of cakul:1led valucs of (1'1' il is especi;lIlyimportun,t to bear in mind the standard deviation (S,U.)of theestimator. It can be seen that a sample size as small as 10cannot be relied upon to give results of much practical value.For example, when d/a = 3 (Cp = 1) the standard deviation ofCpkfor various values ()fa-ll~--t(LSL+ USL) is shown inTable 2.7, together with the true value, (;pk'

Even when n is as big as 40 there is still substantialuncertainty in the estimator of Cpk>and it is umvise to useCpk, as a peremptory guide to action, unless large samplesizes (which may bc regarded as uneconomic or even, in-feasible) are available.

Chou and Owen (1989) utilize formula (2.20a) as their. starting point for deriving the distribution of Cpk' We have

Cpk'=min(Cpkl> CpkU) (2.31)

where

~ ' XCpkl=-=L~L38 and c - USL-Xpku-- 38-

J

~

J,

I

r:

II

II

I

II

II

1.

~.

I'

~-'"

's:,c-I-?,avjII "V'

ac.ri,:-,..c-'\.)I-..JU.JII

i.U"'"~,

u'0'"~Q)sc

:?:

""2'

~iQ)

:is<.':S

r-o

(~

V,?

or.0

0

aC/i

O\V)-<O<:t"-<V)I")N-"""-NI")<:t"00000

I") 00 I") I"-- V)0\-<1"--1")00 N I")

00000

O"",v-,oo-CV)OV)......OI")I"--O<:t"000"';"";

Of""'.<:t" o0<:t"01"--<:t"

N N I")00000

'.0 0\ N <:t" I"--I"-- "I 00 f""', 00

'II 00 I"~ 'II000"";"";

00 I") I"--V) V)1"--1")0 I"--- N I") I")

00000

cr, or, 00 0 I")V) 0 V) -< \0M r--O<:t" 1"--,00"";"';"";

-t'~0\0\ §<:t"O\OMN N I")

00000

I"-- 0 1"1 'r. 00(" 1 C/j cr, 00 cr.'r, 00 (I 'r, G\00"";"";"";

("'I - (')'I">'"T - 00 'r, (" 1

1"1 r 1 c') 700000

I") V) 0001")I") 00 I") O\<:t"\00\1")\00

0 0"";""; ('.j

O\O\I")\oNI"-- 0\ <:t" 0\ or,00,"""""" N00000

Ol"--V)NO\070\'</"000 I") 'oJ:) 0 I")000"";"';

"<t00\'</"......ooN\ONOO0 -< N N.00000

""2'"",,00V)1")1"--1"1\0 \0-V)ooN'n000"";"";

0\""2'\01")0O\<:t" 0\ V)......0 < N I")00000

!;;~~&j~I") \0 0 I") I"--00"";"";"';

0\0\<:t"-0-\0 N 001")"""-NNM00000

0 I"-- 'n N 0'\("I \0 - \0 0'r. 00 (" J 'r. 0\00"";"";"";

O\\O'</"I")N.- I"-- f""'. 0\ II">'- - 1"1('" cr.00000

f"')Or-:'<:t"NI") 00 N I"-- N\00\1")\00

00"";"';('.j

°c V) I;:, 01;:,- -<--, N ----II ~ N I") <:t" V) \0 II ~ ('01 I") <:t" V) \0 II ~ N I") <:t" V) \0

~0 V)0<:t" 0\0 \0 1"), 0\ V)01")1"--0<:t"000"";"";

aC/i

O\-<V)I"--NNO'\I"--\O\O-< -< N I") '</"00000

I.:U('01 I"-- ('01 I"-- ......oo7"""1"--<:t"

V) 0\ N '.0

000"";"';

aC/i

"'-0'</"0V) I") N ......-NI")<:t" V)00000

I.:U'r, 0\ 7 0\ '</"oJ:) 1"1 0\ V) NI") r- c <:t"00do"";,"';"";

du:i

;;t?2~~~-< N I") <:t" V)00000

I.:U("I ", \0 .-.7.~~1~800""; ,...; ('oi

QC/i

00 ('I 00 \0 "1'00 00 1- 1- I"--

N I") <:t" 'n00000

I.:U00 NI"--N \0(')0\01")0\'.0 0 I") I"-- 0o"";"";"";('.j

:::: :::: ::::

Page 35: 041254380 x Capability Indices

Table 2.4 (Cont.)

Imlla

0 0.5 1 1.5 2

E S.D. E S.D. E S.D. E S.D. E S.D.

11= 25d(J") 0.634 0.105 0.516 0.104 0.344 0.087 0.172 0.074 0.000 0.070, 0.978 0.154 0.860 0.148 0.688 0.126 0.516 0.105 0.344 0.087.'-+ 1.322 0.205 1.204 0.195 1.032 0.171 0.861 0.148 0.688 0.126.;; 1.666 0.256 1.549 0.245 1.31" 0.220 1.205 0.195 1.033 0.1716 2.010 0.307 1.893 0.295 1.721 0.270 1.549 0.245 1.377 0.220

11=30da") 0.635 0.095 0.513 0.094 0.342 0.078 0.171 0.067 0.000 0.063-, 0.977 0.139 0.856 0.133 0.68:5 0.113 0.513 Q.094 0.342 0.079.)-+ 1.319 0.184 1.198 0.175 1.027 0.154 0.856 0.133 0.685 0.113:5 1.662 0.230 1.540 0.220 1.369 0.198 0.198 0.176 1.027 0.1546 2.004 0.277 1.882 0.265 1.71] 0.242 1.540 0.220 1.369 0.198

11= 35da") 0.636 0.087 0.511 0.086 0.341 0.072 0.171 0.062 0.000 0.058-, 0.977 0.127 0.852 0.122 0.682 0.103 0.511 0.087 0.341 0.072.) .

-t 1.318 0.169 1.193 0.161 1.023 0.141 0.852 0.122 0.682 0.103:5 1.659 0211 1.534 0.201 1.354 0.r81 1.193 0.161 1.023 0.141(1 2.000 0.253 1.875 0.242 1.70:5 0.222 1.534 0.201 1.364 0.181

- - - --"' - - - - -

-.

n=40dla2 0.637 0.081 0.500 0.080 0.340 0.067 0.170 0.058 0.000 0.0543 0.977 0.119 0.850 0.113 0.680 0.096 0.510 0.080 0.340 0.0674 1.317 0.157 1.190 0.149 1.020 0.131 0.850 0.t12 0.680 0.0965 1.657 0.196 1.530 0.186 1.360 0.168 1.190 0.149 1.020 0.1316 1.991 0.235 1.870 0.225 1.700 0.200 1.530 0.186 1.360 0.168

11=45dla2 0.638 0.076 0.509 0.075 0.339 0.063 0.170 0.054 0.000 0.0513 0.977 O.l1t 0.848 0.106 0.678 0.090 0.509 0.075 0.339 0.0634 1.316 0.147 1.187 0.139 1.018 0.122 0.848 0.106 0.678 0.0905 1.655 0.184 1.526 0.175 1.357 0.157 1.187 0.139 1.017 0.1226 1.995 0.220 1.865 0.210 1.696 0.192 1.526 0.175 1.357 0.157

11=50dla2 0.639 0.072 0.508 0.071 0.339 0.060 0.169 0.051 0.000 0.0483 0.977 0.105 0.846 0.100 0.677 0.085 0.508 0.071 0.339 0.0604 1.316 0.139 1.185 0.132 1.016 0.116 0.846 0.100 0.677 0.085 1.655 0.174 1.524 0.165 1.354 0.148 1.185 0.132 1.016 0.1166 1.993 0.208 1.862 0.198 1.693 0.182 1.523 00165 1.354 0.148

Page 36: 041254380 x Capability Indices

-...;§~

~M<1J:cc::I

f-<

'Ot---o-7 <noo-"<t000--00000

000'O"<t~O<'">t---<n0<'">'00<'">000-:-:

0\ 00 <n <n '0"<t'0 0\ ~ <n000--00000

O\t--<n<'">-'O0"<t00~-<noo-<n000-:-:

t-- - 0 - '('I

<n00-7t--00---00000

00 '0 "<t('~0<'">t---<n0\<'">'00<'">'000-:-:-:

00 <n <n '0 00'O0\~<n0000---00000

t--<n<'">-O\0 "<t 00 '"'~<n<n00-<n0000-:-:-:

O\O~<noo'0 0 ~, '0 0\0----00000

000'O"<t~"<tt---<n0\'00\<'">'00\00-:-:-:

"<t "<t t-- <n "<t

"<t 'n t-- 0 ,')000--00000

0 00 <n ~, \-<

8~~8~000-:-:

t-- 'r, - 0\ 0"-"<t'O0\-"<t0 0 0 -'-00000

$cg:;t&3S::-<noo-<n000-:-:

"<tt--<n"<t"<t<n r-'-0 <'">'0'00--......00000

00 'n r-r,- 00<') t-- - <n 00<')'00<'">'000-:-:-:

<n-O\O\O'OO\-"<too00---00000

'0 "<t r~ 0\ t--O"<too-<n1f")00-<noo00-:-:-:

'0 '0 '0 t-- 0\'0 0\ ("~ <n 0000---00000

-00'O~,-"<t t-- - or, 0\'00\<'">'00\00-:-:-:

<nb °b<n '0 ~.II """'"'I ,') "T 'n \0 II""" rl ,r, "T 'r. ";:;::: :;:

63

are natural estimators of Cpkl and Cpku respectively. Thedistributions of 3v/'l x Cpkl and 3vr,;- x CpkUare noncentralI with I degrees of freedom and noncentrality parameters

J1t-(~-LSL)/O" and y'~(USL-~)/O" respectively. Chou and

qC/)

I\.IJ

qC/)

<n-: I,

\.IJ

I'I

!:: -I

".J' I

qC/)

\.IJ

Estimation (?('Cpk

Table 2.5 Values of Cpk

\Eml0.0 0.5 1.0 1.5 2.0

2 2 1 1 l '. 02 3 63 .5. 2 l 1

6 :1 2 :14 Il If, I .5. 2

3 6 J5 1! 1 It 1 16 2 Ii Ii It 11

m = (USL + LSL)

Table 2.6 Values of E[CpkJ for=m and d/(J=3 correspond-

ing to Cp= Cpk= 1 for a seriesof increasing values of n

Samplesize E[CpkJn

10 1.00220 0.98030 0.97760 0.97880 0.980

100 0.981200 0.985400 O.9W)600 0.999

2200 0.995 '3200 0.9965400 0.997

10800 0.99830500 0.99979500 1.000

Page 37: 041254380 x Capability Indices

64 The basic process capability indices

Table 2.7 Variability of Cpk(Cp= 1)

1~-t(LSL+USL)I/O" 0.0 0.5Cpk . 1.00 0.83S.D. (Cpk)(n= 10) 0.28 0.27S.D.(Cpk)(n=40) 0.12 0.11

1.00.670.230.095

1.5 2.00.50 0.330.19 0.1550.08 0.07

Owen (1989) use Owen's (1965) formulae for the joint distribu-tion of these two dependent noncentral t variables to derive thedistribution of Cpk' The formula is too complicated to givehere, but it is the basis for a computer program presented byGuirguis and Rodriguez (1992). This paper also containsgraphs of the probability density function Cpk for selectedvalues of the parameters, and sample sizes of 30 and 100.

The following approach may be 9f interest, and providessome insight into the structure of the distribution of Cpk'

The inequality

Cpk < C

is cquivalent to

d-IX -ml <3cd

i.e.

IX-ml+3c8>d (2.32)

Under the stated assumptions, ftiX -ml is distributedasXIO",(n-l)t8 is distributed as Xn-IO"and these two statisticsare mutually independent (see end of section 1.5).

Hence

~

[1 3c d

JPr[Cpk <c] = Pr ftXI + (n-1)jXn-1 >-;;:(2.33)

where Xl and Xn- 1 are mutually independent.

I,

I,I

I\~I~

~

Estimation of Cpk 65

11;,

Tn(2.33),we see that, as n--HD,the term xd In becomesnegligiblecompared with 3cX,,-I/(n -- ])J (which tcnds to 3c),and the distribution of Cpk tends to that of a multiple of a'reciprocal X' -- in fact, as n-+00 Cpk~ Cp and so has theapproximate distribution (2.13) with f = n --1. .

Figures 2.2--5 ~how the shape factors (J(31 and (32 -- seesection 1.1) for the distribution of Cpk' In each figure, thedotted lines represent the locus of points (.J (31' (32) forthe distribution of Cpk for fixed d*=d/O"(Figs 2.2 and 2.4)with D=I~- m I/O"varying, or for fixed D (Figs 2.3 and 2.5)with d* =d/O" varying. Figures 2.2 and 2.3 are for samplesize (n) equal to 30; Figs 2.4 and 2.5 are for sample size 100.The other lines on the figures represent loci for otherstandard distributions, to assist in assessing the shape

of the Cpk distribution relative to well-established dis-tributions.

1Or- ~-,81I

I

"- -- -

[

Curve ~amPI:-SiZe(n) [I30

l~~~=-- ~go'-.-J

()

;g:

4

0.0 0.5 1.0 1.5

X

2.5 3.02.0

Fig. 2.1 Probability density functions of Cp for various sample sizesand. values of d/cr.

Page 38: 041254380 x Capability Indices

66 The /Josic fJl'Oces.'.;cOfJo/Jility indices

2.6

2.8/,,"_._,,~

. d* ,".0.5','.

3.0,,/

I;'

#.

if t::9'-<.°/1'

0'/Jf.

<l-'/

It.""

RHN

eJ' I!,

"'~\, d* ; 2

- . ~~d*=2.51

'\1IC

3.2 '..d*=1';(

,/.,1 ,

I,;;",.-

r,if

/t

III07c/ I

/f. ReflectedType III

/ Reflected Lognormal4.41 / //1 ReflectedTypeV

I

.

I - . Reflected Inverse Chi

.J4.6j ~ ~-~~~-~._-

-1.2 -1.0 - 0.6 - 0.6 - 0.4 - 0.2 0.0 0.2 0.4 0.6 0.6, Skewness (0';;)

Fig. 2.2 Moment ratios of Cpk for n=30 and levels of d* = d/~.Lab.elled points identify special distributions: N (normal), IC (in-verse chi with df= 29), RIC (reflected inverse chi with df= 29),RHN(rcllcctcd half normal).

""""",,\,- .0

] 3.4

.~ 3.60

~3.6

4.0

4.2

Comparison of Figs 2.2-3 with Figs 2.4-5 shows a greaterconcentration of (J131,f)2) values near the normal point(N:::(0,3)) for the larger sample size (n= 100).

Bissell (1990) uses the modified estimator

I

USL:XC~, - 30'

'pk- -X - LSL

38

for ~~ LSL+ USL" .~

(2.34)

for ~~.LSL+ USL2

This differs from (ipk only in the use of ~ in place of X inthe inequalities defining which expression is to be used.Note that C~k cannot be calculated unless ~ is known.However, if ~ is known, one would not need to estimate it

2.6

2.6

3.0

3.21~ . I~ 3.41~ 3.61:2 3.6

Estimation of' Cpk 67

4.0

--l

I

I

I" /

!/'

§~;

0'/;PI

'-<.°/o'

/%<:>~!*7

c! ! '6.j

J

IC

4.2 Reflected Type IIIReflected Lognormal

. --- Reflected Type Vllolloctod Invorso Chi

4.4

4.6.,

-1.2 :-1.0 - 0.6 - 0.6 - 0.4 - 0.2 0.0 0.2 0.4 0.6 0.6Skewness (v(i1)

Fig. 2.3 Moment ratios of Cpkfor n= 30 and levelsof c5= I~- TI/a.Labelled points identify special distributions: N (normal), IC (in-verse chi with df=29), RIC (reflected inverse chi with df=99),RHN (reflected half normal). .

and could simply use

where

I

I

C~k = min(C~kl' C~kU) (2.35)

CtkU= USL-c;38C* _~-LSLpkl--- 38

and

The distribution of C~k>as defined by.(2.34), is that of

~ x (noncentral t with f deg~es of freedom and noncen-3.jn trality parameter ~{d-I~-1(LSL+ USL)I}/O')

(see section 1.7.)

Page 39: 041254380 x Capability Indices

68 The basic process capability indices

2.6

'~..Q

'--"'. d.*105

/0* '" ; '" f

// . ,.\~ d*~1l/'

.'//:::=-

r--'-" d*~1.5

.;p / //,::0 d* ~ 2I / . "-, . .: d* ~ 2.5cP". /" , I ".-R/ ' ., I

1.(°1 /ftiC

I

IC'0' I

cY/ / ' //f/ ,/I

CJ:: I , if

1(/ /1. /{R N ,/1

/fd/f ' Reflected Type III

/ (, '-'-' Reflected Lognormal/ ( -"'- Reflected Type V

, ! , Reflected Inverse Chi

2.8

3.0

3.2

-;;; 3.4.3",'ijj 3.6

f!:J

::.:: 3.8

4.0

4.2

4.4

4.6" '"

-1.2 -1.0 -0.8 -0.6 ,-0.4 -0.2 0.0 0.2 0.4 0.6 0.8Skewness (v%) .

Fig. 2.4 M omel1t ratio~ of Cpk for /I = 100 and levels or d~ =:0 d/(f.Labelled points identify special distributions: N (normal), IC (in-verse chi with df=99), RIC (reflected inverse chi with df=99), RI-IN(reflected half normal).

Although C~k is not of practical use, it will not dif-fer greatly from Cpb except when ~ is close to ~(LSL + USL).This is because, the probability that X --!-(LSL + USL) hasthe same sign as ~-!(LSL+USL) is

1-W( - ftl ~--!-(~SL+USL~l)

and this will be close to 1 for n large, unless ~=I ' , '

z(CSL + USL).

As the distribution (noncentnil t) of C;,k is a well-established

OIlC,il call bc lIscd as all approxImalioll 10 ihal or ('Pk lIIHkrsuitable conditions-- namely that ~differs from !(LSL+ USL)by a substantial amount, and n is not too small.

Construction of a confidemce interval for Cpk is more

Estimation of Cpk 69

2.6

r

-~.~~;:'o'-~--:~~:--~---~--l,,/ '- 01 ' '

\" u~O,25

VI' U-,

I! / .', . ..

/..

. .,,-.- (\~ 0.4° / / - 016.

1.2j :JJ.~/ /'O":\)' ,!? { "-R/,'/ . / .,/

~

1

,,°;//'.,.,' IC IC

.3 3.4 .

1.

<l!

..

b

.

,~

,

.

"

.

",

'.

Y'

.

"

.

9j/ I.!!J 3 6 .il; ,/,' /j I(/) '

I

"'- " ,I0 CJ:: ,.,/ .',il'I:: ," 11

~ 3.8 ,./f,,{

'~ ,/~4.01 .' ,if'

" /1', ';I

4,2 t ' h Reflected Type III,iI . -. -. Reflected Lognormal

, 1/4.41 , /11 --- ReflectedType V

4.61, ,._.~_-,-~.~~eflect~~~~..L~--~-~---e

-1.2 -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8

Skewness (v:61)

Fig. 2.5 Moment ratios of Cpkfor n=100 and levels of ()=I~ - Tla.Labelled points identify special distributions: N (normal), IC (in-verse chi with df=99), RIC (reflected inverse chi with df=99), RHN(reflected half normalj.

2.8

S.O

I

difficult, because two parameters (~,0') are involved. It istempting to obtain separate confidence intervals for the twoparalllctcrs, and thcn construct a confidcnce interval for Cpk'with limits equal to the least and gretest possible values for Cpkcorresponding to pairs of values (~,0') within the rectangularregion defined by the separate confidence intervals. Althoughthis may give a useful indication of the accuracy of estimation ofCpk it is necessary to remember the following two points.

(a) If eaeh of the separate intervals has confidence coefficient100(1- iX)%the confidence coefficient of the derived inter-val for Cpk will not in general be 100(1-iX)%. For

. example, if the statistics used in calculating the separateintervals were mutually independent, the probability thateach will include its appropriate parameter value would

Page 40: 041254380 x Capability Indices

70 The basic process capability indices

not be 100(1-0:)%, but 100(1-0:)2%. This would mean,for example, with 0:= 0.05, that we would have 90.25%,rather than 95(Yoconfidence coelTIcient. Generally, thecombined probability will be less than the confidencecoefficient of each component confidence interval.

(b) Another effect, arising from the fact that only part of therectangular region contributes to the interval for Cpk, andthat other pairs of values (~,a) could give Cpkvalues inthe same interval, tends to counterbalance effect (a).

In the case of Cpb the 100(1-0:)(1<) separate intervals for ~and a are .

x - tr. I - al 2 a ~ ~~ X + t f. I - ~I2 a (2.36a)

~~a~~lLXI,I-aI2 X.r.rJ.12

Although X and a are mutually independent, the eventsdefined by (2.36a) and (2.36b) are not independent. However,it is still true that the probability that the interval for Cpk,calculated as described above, really includes the true valuewill be less than 100(1- 0:)%. The extent of the counterbal-ancing effect of (b) is difficult to ascertain, but it is not likelyto be comparable to the effectof (a). .

. Zhang et al. (1990),Kushler and Hurley (1992) and Nagataarid'Nilgahilf'til (199~) provide a thorough treatment of con-struction or conlidenel.: intervals 1'01'Cpk which will bl.: rela-tively simple to calculate. ' .

Heavlin (1988) in the report already referred to, suggests

(~

{

n- 1 ~ 2 1 (6

)L!

I Cpl,-ZI-rJ.12 9n(n-3)+CPk2(n-3) 1+ n-1 I'

{ ( )}

'

)~ n-1 ~2 1 6 2

Cpk+Zl-rJ.12 9n(n-3)+CPk2(n-3) 1+ n-1 (2.37)

(2.36b)

as a 100(1-0:)% confidence interval for Cpk'

l'

t

I.

d"

, tII

Estimation of Cpk 71

ChOll, Owen and Borrego (1990) provided tables of 'apcproximate 95% lower confidence limits' for Cpk (inter alia)under the assumption of normality. A few of their values areshown below in Table 2.8.

Franklin and Wasserman (1992) carried Qut simulations toassess the properties, of these limits, and discovered that theyare conservative. They found that the actual coverage for thelimits in Table 2.8, is about 96-7%. Guirguis and Rodriguez(1992, p. 238) explain why these approximate limits areconservative.

On the other hand, Franklin and Wasserman's studiesshowed that the formula

~2

)'

I Cpk 2

CPk-ZI-a(9n +2(n-'-l)(2.38) (cf(2,37))

produces (for n~ 30) remarkably accurate lower 100(1- 0:)%confidence limits for Cpk'

Nagata and Nagahata (1993) suggest modifying (2.38) byadding 1/(30';;;). They obtain very good results from simula-tion experiments for the two-sided confidence intervals withthis modification. The parameter values employed wereC"k=0.4(0.3)2.5; n= 10(10)50, 100; 1-0:=0.90, 0.95. Theactual coverage was never greater than 0.902 for 1- 0:= 0.90,0.954 for 1- 0:= 0.95; and never less than 1- 0:.

Table 2.8 Approximate 95% lower con-fidence limits for Cpk

Cpk n=30 n=50 n=75 ,/,/

1.0 0.72 0.79 0.831.1 0.80 0.87 0.911.5 1.12 1.21 1.261.667 1.25 1.35 1.40

Page 41: 041254380 x Capability Indices

72 The basic process capability indices

Kushler and 11urley (Il)l) I) suggest the simpler formula

[1-Z1-cx

JCpk (2n - 2)1(2.39)

and Dovich (1992) reports th,at the corresponding approxi-mate 100(1-0:)% formula for confidence jntervallimits:-

-[

1 +Z1-CX/,2JCpk (2n - 2)2

(2.40)

gives good results.*The interested reader may find the following brief discussionilluminating.

Difficulties in the construction of confidence intervals for

Cpk arise from the rather complicated way in which theparamctcrs ~ and rr appcar jn thc cxpression for Cpk'

We first consider the much simpler problem arising whenthe value of (1is known. We then calculate

C* - min(X -LSL, USL-X)p,k- 3(1

as our estimate of Cpk'The 100(1-0:)(% confidence interval for ~,

f(X)~ ~~ [(X) (2.41)

with f(X) = X- ZI-a/2(1/jIi, [(X) = X + Z1-a/2(1/jIi can beused as a b,asisfor constructing a confidenceregion for

C k=min(~- LS L,~~~!:.::~~). p 3(1

(2.42)

The confidence region is obtained by replacing the numeratorby the range of values it can take for (2.41).A little care is

~I

-u

~

::0

I

c;i I I::>I I

I II;:J Il;g I

E l,j II~ IIii II I

'""I I~ I II

I

,JI'

"

J ,I 0-"-

:;;

31

'.-'

~

.-u

""I

E

"'I

---!::~0!::

.><:

~...'"

\,.)....

..8enc;;;.....Q)

.S

---~

0

~I I

IIIIIII

8!::Q)

"0

<.j:1!::0

U

\CN

~1;

J I 0-"-:;;

:§:

,

Page 42: 041254380 x Capability Indices

74 The basic process capability indices

needed; this range is not, in general, just the interval with endpoints

min(~(X)- LSL,USL- ~(X));min(;;(X)-LSL, USL- ;;(X)).

For example,if LSL<~(.X)<!(LSL+USL) and ;;(X»USL(see Fig. 2.6 c) the.n the-confidence interval for Cpkis ..

1 ~- 11 d-(USL-s(X)), -'z(USL-LSL)=-30" . . 30" 30"

(2.43)

Some possibilities are shown in' Table 2.9 (with 111=!(LSL + USL)). (See also Figs. 2.6a-d.) ,

When 0"is not known, but is estimated by S, the problem ismore complicated. If we replace the values ~(X), ;;(X) by

x - tll-1 al2S d X + tll-l l-a l2S. an .fi fi

((2.36a) with f = n -1) we would still get 100(1- ex)% confid-ence limits from Table 2.9, but computation of these limits,would need the correct value of CTto evaluate the multiplier1/(30").

Table 2.9 Confidence interval for Cpk(O'known)

Valueof Confidence limits (x 30')f(X') ~(X)

:;;;'m

:;;;'m>m

:;;;'m (~(X)-LSL,f(X)-LSL)

>m (min(~(g)-LSL, USL-~(X)), t(USL-LSL))>m (USL:'~(X), USL-f(X))

Bayesian estimation of Cpkhas been studied by Bian and Saw (1993).

2.6 COM PARlSON OF CAPABILITIES

PCls are measures of 'capability' of a process. They may also beused to compare capabilities of two (or more) processes. In

Comparison of capabilities 75

particular, if process variation is normal, the index..

Cpl =! ~ ,- LSL3 CT--

I

II

)

is directly linked to the expected proportion of output which isbelow the LSL, by the formula <1>(- 3Cpd.Comparisonofestimators Cplt, Cpl2 of Cpl values (Cplt, Cp12)for two processesis therefore equivalent to comparing proportions of output forthe two processes with values falling below their respectiveLSLs. With sufficient data, of course, this could be effected bythe simple procedure of comparing observed proportions ofvalues falling below these limits. However, provided theassumption of normality is valid, the use of Cpll, Cpl2might beexpected to give a more powerful test procedure.

Chou and Owen (1991)have developed such tests, which canbe regarded as a test of the hypothesis Cplt = Cpl2 against eithertwo-sided (Cplt ¥=CpI2) or one-sided (Cplt <CpI2, orCplt > Cp12) alternative hypotheses, for the case when the

. sample sizes from which the estimators Cpl1 , Cpl2 are calculateda're the same - n, say. In an obvious notation the statistics

Tt = 3fiCPlt, 1'2= 3JnCp12 are independent noncentral tvariaqles with (n-1) degrees of freedom, and noncentrality

pnrnnleters JIi(~1 LSL1)/(fI,jn(~r-LSL2)/(f2 respective-ly. Applying a generalized likelihood ratio test technique(section 1.12),Chou and Owen obtain the test statistic

~

I

.

u = [{T'y + 2(n-l) }{1'~+ 2(n -I)} J1- Tt 1'2 (2.44)

I

To establish approximate significance limits for U they carried.. out simulation experiments as a result of which they suggest theapproximdtion, for the distribution of U, when Ho is valid:

.'all IQg {!U/(n - I)} approximately distributed as chi-squaredwith VII degrees offreedom.' Table 2.10 gives values of alland Vi"estimated by Chotl and Oweil from their simulations.

Page 43: 041254380 x Capability Indices

76 The basic rwocess capability indices

Table 2.10 Values of (/11ami VII

, The progression of values is rather irregular, possiblyreflecting random variation from the simulations (though.there were 10000samples in each'case).[If the valucs of (IIOand a15 were about 18.2 and 27.3 respectively, progressionwould be smooth.]' .

2.7 CORRELATED OBSERVATIONS

The above theory based on mutually independent ,obser-vations of X can be extended to cases in which the obser-vations of the process characteristic are correlated, usingresults presented by Yang and Hancock (1990).They show that

E[S2]=(1-p)0"2

wherep = averageof the 1n(n-1) correlationsbetweenpairs ofXs. '(SeeAppendix 2.A.)

This implies that in the estimator

1 dCP=:3S

of Cp,the denominator willtend to underestimate0" and so C\,will tend to overestimate CpoThis reinforces the effect of thebias in l/S as an estimate of 1/0",which also tends to produceoverestimation of Cp (see Table 2.3). .

There will be a similar effect for Cpk' We warn the reader,

I

I II

~

t

Correlated observations 77

however, that in this case we cannot rely on independence ofJ( and S.

Correlation can arise naturally when 0"2 is estimatedfrom data classified into several subgroups. Suppose thatwe have k subgrou.ps, each containing m items and denoteby Xuv the value of vth item in the uth subgroup. If themodel

XUV=Yu+Zuv u= 1, .." k; v= 1, ..., m (2.45)

applies with all Ys and Z mutually independent randomvariables, each with expected value zero, and with

var(Yu)=O"~ var(Zuv)=O"~

thenw~ have n = km,

0"2=0"~+(1i

and correlation between

. . .

{

o

XII" :\/1<JXII"" = (O"J10"~)

so that

if u#u'

if u=u' (and v 7""vi)

..

1 0"2

P ( 1)km(m-1)- (2 y 2)n n- O"y+O"z

km(m-1) O"~= n(n-1)~0"2

m-1 <1~=n-10"2

n all VII

5 8.98 1.2810 19.05 1.2215 26.47 1.0720 36.58 1.06

Page 44: 041254380 x Capability Indices

78 The hasic process capahility indices

In the case of equal correlation we have pij = j5 for all. i,j(i¥=j).ln this case S2 is distributed as (1- p)X;-la2j(n-I).

2.8 PCTs FOR ATTRIBUTES

An interesting proposal for PCI for attributes (characteristics: taking values 0 and 1 only - e.g., conforming (C) or (NC)),

was made by Marcucci and Beazley (1988). This has not asyet received attention in the literat.ure.

We suppose that an upper limit w, for proportion of NCproduct is specified. It is proposed ,to use the odds ratio

R=w(l-wd(I-W)Wl

(2.46)

(see, e.g., Fleiss (1973, pp. 61 et seq.) or Elandt-JohnsonandJohnson (1980, p. 37)) as a PCI, where W is the actualproportion of NC product. The ratio w(l- (I))-1 measures theodds for the characteristic being NC, and w1(1- wd - 1 istherhaximally acceptable odds. Thus R = 1 corrcsponds to productwhich is just acceptable, while R = 2 reflects an unfortunatesituation, in which the odds for getting a NC item are twice themaximum acceptable odds. Unlike Cp, Cpk'etc., large values ofthis, PCI indicate bad situations.

Marcucci and Beazley (1988) study the properties ,of es- .timators of R. Given a random sample of size /1,with X itemsfound to be NC, the estimator .

A X +! 1- WIR- -n-X +1 WI

'(2.47)

is well-known in' statistical practice. The !8 arc so-callcd'Sheppard corrections' for continuity.'Unfortunately, the vari-ance of R is large, and so is the (positive) bias, especially in smallsamples. Marcucci and Beazley (1988), therefore, suggest

Appendix 2.A 79

'shrinking' R towards the value 1.0, to produce a PCI estimator

(, - Ii I~ I (I k) (2.48)

for some k. (The subscript 'a' stands for 'attribute.')They try to choose k to minimize the mean square error

E[(C~--R)2J, and suggest taking

1-1Pk=aj+l-R2 (2.49)

where

aJ?! = R2(X + ~)- t +(11--X + ~)- t].

APPENDIX 2.A

If X 1~X 2"", Xn are independent normal variables with com-mon expected value ~ and standard deviation a then

l-/J= Pr[LSL<X, <USL] ~",(.tJs~-:-~)_",(LS~ - ~)(2.50)I ,

A natural estimator of p (the expected proportion of NCitems) is

A

(USL-X

) (LSL-X)p= 1-<1> S +<I>-S-- (2.51)

This estimator is biased. A minimum variance unbiasedestimator of p is derived below. It is quite compIipated in formand for most occasions (2.51) will be more convenient, and. .also quite adequate.

roo-I

l

:I i

I

t"

I

!

I

III

r

rI'

I

III..

-

Page 45: 041254380 x Capability Indices

80 The basic process capability indices

We will apply the Blackwell-Rao theorem (section 1.12.4).The statistics (X, S) are sufficient for ( and (j, and (1- Y),where

y={

1 if LSL<X1 <USL0 otherwise

is an unbiased estimator of p.Hence the conditional expectedvalue of 1- Y, given (X, S), will be a minimum varianceunbiased estimator of p. .

The conditional distribution of X I' given X and S, isderived from the fact that

Xl-X (~ )!

S n-1

is distributed as 1/1-1 (l with (n-l) degrees of freedom)(section 1.7). Hence the conditional distribution of X I' givenX and S, is that of

-

(n-1

)!

X + -;;- St/l-I

(In this expression, X and S are to be regarded as constants.) .

Hence

p=E[pIX, SJ = 1-pr[

LSL-X

(~ )!

S. n-1

USL-X

(.

n

)l

J<t/l-1< , -

. S f/- J

= 1-Iodx,s)(!(n-1), !(n-1))

+Ioz(x,s)(!(n-1), !(n-l)) (2.52)

.

, ,

.1

II.

.,j

II,

~II

tll

I

~I

II

Appendix 2.B 81

where Io(a,b) denotes incomplete beta function ratio (section1.6) and

8,(X,SJ=KI + US~-X {(n~l)' +(US~-X)rJ

e'(X,SJ=Kl~ LS~-X {(n~l),+(LS~-X)'} -'J

Calculation of p is quite heavy, but can be facilitated bytables of the function

J(n, c)= I![I+C{~+cZ)-jlt(n -1), :t(n-1)) (2.53 a)

for various values of nand c. Then

p= I-J(n, ~S~-X)+J(n,~SI~-X) (2.53 b)

It must be emphasized that the appropriateness of bothof the estimators fi and p depends on the correctnessof the assumption of normality for the process distribu-tion.

APPENDIX 2.B

Xl, X2, '" , XII are dependent random variableswith commonexpected value ( and variance (j2; the correlation between Xiand )fj wi1lbe denoted by Pij. We have

. -{ }

1var(X)=n-2(j2 n+n(n-1) L

.Pij =-{1+(n-l)p}(j2

ii'j n11\

Page 46: 041254380 x Capability Indices

82 The basic process capability indices

where

'L LPij- i~jP=n(n-1)

If/I .

2 -1" -~5 =(n-1) L., (Xj-X)-

j= 1

then

E[52J= ~ 1 i E[(Xj-X)2]n j= 1

1 ~ 2 - -; 2=- L.,E[(Xj-~) ]-2(X-~)(Xj-~)+(X-~)]n -1 j= 1

=~ lt E[(Xj-~)2]-nE(X-~)2 Jn-1 j=l

since ~(X j - ~)= n(X -~) so

E[52]=~- 1{n var(Xj)-/1 var(X)}/1-

1 -=-- [n- {l +(n-1)p} ](j2n-1

=(1-,o)(j2 (2.54)

We emphasize that

fJ='--~--- L LPiJ1I(n-1) i~j

is the average of the correlation coefficients among the!n(n-1) differentpairs of Xs,

I" I

,III,

It.111

BibliographyI

83

BIBLIOGRAPHY

Uarnctt, N.S. (Ing) Process control (/ndproduct quulity: The.Cp undCpkrevisited. Footscray Inst. Techno!., Victoria, Australia.

Bian, G. and Saw, S.L.e. (1993) Bayes estimates for two functions ofthe normal mean and variance with a quality control application;Tpch. Rep., National University of Singapore. .

Bissell, A.F. (1990) How reliable is your capability index? Appl.Statist., 39, 331-40.

Bunks, J. (1989) Principles of Quality Control, Wiley: New York.Carr, W.E. (1991) A new process capability index: parts per million,

Quality Progress, 24, (2), 152.Chen, L.K., Xiong, Z. and Zhang, D. (1990)On asymptotic distribu-

tions of some process capability indices, Commun. Statist. -TheO/'.Meth., 19, 11-18.

Chou, Y.M. and Owen, D.E. (1989) On the distribution of theestimated process capability indices, Commun. Statist. - TheaI'.Meth., 18,4549-60.

Chou, Y.M. and Owen, D.B. (1991) A likelihood ratio test for theequality of proportions of two normal populations, Commun.Stat'ist. - TheO/'.Alt!th., 20, 2357-2374.

Chou, Y.M., Owen, D.B. and Borrego, A.S.A. (1990) Lower confi-dence limits on process capability indices, ./. Qual. Technol, 22,223-9.

Clements, R.R. (1988) Statistical Process Control, Kriger: Malabar,Florida.

Colcman, D.E. (1991) Relationships between Loss and CapabilityIndices, Api. Math. Camp. Techn., ALCOA Techn. Center, PA.

Constable, O.K. and Hobbs, J.R. (1992) Small Samples and Non~normal Capability, Trans. ASQC Quality Congress, 1-7.

Dovich, R.A. (1992) Private communications.Elandt-Johnson, R.C. and Johnson, N.L. (1980)Sul't,ivalModels and

Data Analysis, Wiley: New York.Fleiss, J.L. (1973) Statistical Methods for Rates and Proportions,. . Wiley: New York.Franklin, L.A. and Wasserm~n, 0.5. (1992). A note on the conser-

vative nature of the tables of lower confidence limits for Cpkwitha suggestedcorrection, Commun.Statist. Simul.Compo

Gensidy, A. (1985) Cp and Cpk, Qual. Progress, 18(4),7-8.

Page 47: 041254380 x Capability Indices

84 The basic process capability indices

Grant, E.L. and Leavenworth, R.S. (1988) Statistical Quality Control(6th edn.), McGraw-Hill, New York. .

Guirguis, G. and Rodriguez, R.N. (1992) Computation of Owen's Qfunction applied to process capability analysis, J. Qual. Technol.,24, 236-246.

Heavlin, W.D. (1988) Statistical properties of capability iadices,Technical Report No. 320, Tech. Library, Advanced Micro Devi-ces, Inc., Sunnyvale, California. .

Herman, J.T. (1989) Capability index - enough for process indus-tries? Trans. ASQC Congress, Toronto, 670--5.

John, P.W.M. (1990) Statistical Methods in Engineering and QualityAssurance, Wiley: New York.

Johnson, M. (1992) Statistics simplified. Qual. Progress, 25(1),10--11. .

Kane, V.E. (1986) Process capability indices, J. Qual. Tech/1ol., 18,41-52. '.

Kirmani, S.N.U.A., Kocherlakota, K. and Kocherlakota, S. (1991)Estimation of if and the process capability index based onsubsamples, Cammun. Statilit. - TheaI'. Meth., 20, 275---291 (Cor-rection, 20, 4083).

Kotz, S., Pearn, W.L. and Johnson, N.L. (1992) Some processcapability indices are more reliable than one might think, Appl.Statist., 42, 55-62.

Kushler, R. and Hurley, P. (1992) Confidence bounds for capabilityindices, J. Qual. Technol., 24, 188-195.

Lam, C.T. and Littig, S.J. (1992) A new standard in process cap-ability measurement. Tech. Rep. 92-93, Dept. Industrial andOperations Engineering, University of Michigan, Ann Arbor.

Leone, F.C., Nelson, L.S., and Nottingham, R.R. (1961) The foldednormal distribution, Technometrics, 3, 543-50.

Li, H., Owen, D.B. and Borrego, A.S.A. (1990) Lower confidencelimits op process capability indices based on the range, Commun.Statist. - Simul. Camp., 19, 1-24. .

Marcucci, M.O. and Beazley, cc. (1988) Capability indices: Processperformance measures, Trans. ASQC Conwess, 516 21

Montgomery, D.C. (J985) Introductio/1 to Statistical Quality Con-trol, Wiley:New York.

Nagata, Y. (1991) Interval estimation for the process capabilityindices,J. Japan. Soc. Qual.Control,21, 109-114.

~I

~tll~II

t

~.tl

~\

/"

~'I

~~

t'

~,

I,."

Bihliography 85

Nagata, Y. and Nagahata, H. (1993) Approximation formulas forthe confidenceintervals of processcapability indices,Tech.Rep.Okayama University, .lapan. (submitted for publication). .

Owen, D.H. (1965) A special ease of a bivariate noncentral t-distribution, Biometrika, 53, 437-46.

Owen. M. (cd.) (1989) SPC and Continuous Improvement, SpringerVerlag, Berlin, Germany.

Pearn, W.L., Kotz, S. and Johnson, N.L. (1992) Distributional and,inferential properties of process capability indices, J. Qual.Technol., 24, 216-231.

Porter, LJ. and Oakland, 1.8. (1990) Measuring process capabilityusing indices -- some new considerations, Qual.ReI. Eng. Inter-nat., 6, 19-27.

Wadsworth, H.M., Stephens, K.S. and Godfrey, A.B. (1988) ModernMethods for Quality Control and Improvement,. Wiley: New York.

Wierda, S.J. (1992) A multivariate process capability index, Proc.. 9th Internat. Conj. Israel Soc. Qual. Assur. (Y. Bester, H.

Horowitz and A. Lewis, eds.) pp. 517-522.[Yang. K. and Hancock, W.M. (1990) Stntistical quality c°'ltrol for

correlated samples, inter". J. Product. Res., 28, 595-608.1Zhang, N.F., Stenback, G.A. and Wardrop, D.M. (1990) ,~nterval

estimation of process capability index Cpk' Commun. S,tatist. -Theor. Meth.. 19,4455-4470. \

II

(

..,

. . -'

. <

!

!

(

Page 48: 041254380 x Capability Indices

I\

'I'p'dl~I

l~

t~

,.

~I

,I.

..

1'1

W

~

I1

3

T.he Cpmindex and relatedindices

3.1 INTRODUCTION

The Cpmindex of process capability was introduced in thepublished literature by Chan et al. (1988a). An elementarydiscussion of the index, written for practitioners, was pres-ented by Spiring (1991a), who also (Spiring, 1991 b) extendedthe application of the index to measure process capability inthe presence of 'assignable causes'. He had previously (Spir-ing, 1989) described the application of Cpmto a toolwearproblem, as an example of a process in which there wasvariation due to assignable causes.

R(;ct;nlpapers (lInpllhlished at the time of writing) in whichthl: usc of Cpmis discussed indude Subbaiah and Taam (ISI<)I)and Kushler and Hurley (1992).(All four of these authorswere in the Department of Mathematical Sciences, OaklandUniversity, Rochester, Minnesota.) These papers study esti-mation of Cpm'in some detail; graphical techniques for esti-mating Cpmare discussed briefly by Chan et al. (1988b).

The use of the Cpm index was proposed by Hsiang andTaguchi (1985) at the American Statistical Association's An-nual Meeting in Las Vegas, Nevada (though they did not usethe symbol Cprn)'Their approach was motivated exclusivelyby loss function considerations, but Chan et al. (1988a, b)w~re more concerned with comparisons with Cpb discussed.

87

Page 49: 041254380 x Capability Indices

:88 The Cprn index and related indices

in Chapter 2. Marcucci and Beazley (1988), apparently inde-pendently, but quite possibly influenced by Hsiang andTaguchi, proposed the equivalent index, also defined explicit-ly by Chan et al. (1988a)

Cpg=C;,;

(proportional to average quadratic loss). Johnson (1991) -see section 3.2 - also reaches 'Le', a multiple of C;rn2fromloss - or, conversely, 'worth' - funct.ion considerations..

Chan et al. (1988b) defirie a slightly dif1crent index, C~I;1(see (3.2) below), which is more closely related, in spirit andform, to Cpk than is Cprn'

Finally an index Cpmb combining features of Cpmand Cpkis introduced, and studied in section 3.5.

3.2 THE CPIllINDEX

The original definition of Cprn is

,lJSL-LSL d

Cpm= 6{(,.2+(~-T)2}f = 3{a2+(~-T)2}!(3.1)

where ~ is the expected value and a is the standard devi-ation of the measured characteristic, X, l' is the target valueand d =t(USL - LSL). The target value is commonly themidpoint of the specification' interval, m = 1(USL + LSL).This need not be the case, but if it is not so, there ,can beserious pitfalls in the uncritical use of Cpm,as we shall seelater.

The modified index, C~rn'is defined using (J. J4 e) by

* _min(USL-T,T-LSL)- d-IT-ml '

Cpm- 3{a2+(~-T)2}f - 3{a2+(~-T)2}f (3.2)

,tin.,

.

'111

~'I!.

~"~IIII

II""

"'"I, ,k

.

Ii I

The Cpmindex 89

If T=m (as is often the case), then C;m=Cpm' If T=I=m,diesame type of pitfalls beset uncritical use of C;m as with Cpm'

The denominator in each,of (3.1)and (3.2)

3{a2 +(~- T)2}f= 3{E[(X - T)2]}! (3:3),

that is, three times the root-mean-square deviation of X fromthe target value, T. It is clear that, since

(12+ (~- T)2 ~ (12

we :tpust haveI

Cpm~ Cp . (3.4)1

The relationship between Cpmand Cpk is discussed below;see (110) et seq.

In the approach of Hsiang and Taguchi (1985) it is sup-posed ,that the loss function is proportional to (X - 1')2, andso (12+ (~- 1')2 is a measure of average loss. From this pointof view, there is no need to include USL or LSL in the'numerator, or, indeed, the multiplier 6 in the denominator, ofthe formula for Cpm,except to maintain comparability withthe earlier index Cpo which was introduced with quite adifferent background, although for the same purpose.

Proponents of Hsiang and Taguchi's approach point outthat "it does allow for differing costs arising from differentvalues of X in the range LSL to USL, though in a necessarilyarbitrary way. (A discussion between Mirabella (1991) andSpiring (1991b) is relevant to this point.) It is suggested that are,asonable loss function might be constant outside this range, iperhaps giving a function of form

" ,.., I

\I

(

{

C(X- T)2 for LSL~ X ~ USLc(LSL - 1')2 for X ~ LSL

c(USL-.T)2 for X ~ USL. (3.5)

Page 50: 041254380 x Capability Indices

90 The Cpm index and related indices

One is, of course, free to use one's intuition ad libitum inconstruction of a loss function. However,if this approach isaccepted, there seems to be little justification, in respect of'practicalvalue,easeof calculation,or simplicityofmathemat-ical theory in using the cumbersomeCpm(or C;m)form ratherthan the loss function itself (as in, for example, Cpgor Le - see'sections 3.1 and 3.4). .",

As already remarked, we take the position that expectedproportion of NC product is of primary importance, thoughwe acknowledge that other factors can be of importance inspecific cases. Essentially, thc usc of any singlc'indcx cannotbe entirely satisfactory, though it can be of value in rapid,informal communication or preliminary, cursory assessmentof a situation. '

The following interrelations should be noted. We have

I dCpm= 3"a' (3.6)

wherea'= {E[(X- T)2J}! measures the 'total variability'about the target value. Recalling the definition (inChapter 2)

C = USL - LSL = ! ~p 6a 3 a (3.7)

we have

(iCpm= ,Cp=Cp(1 +,2)-!

(J(3.8)

i

I iII

I I

where' = (~- T)/a. So Cpm~ Cp with equality if~ = T.Boyles (1991) provides an instructive comparison be-,

tween Cp and Cpm'He takes USL=65, LSL=35, T=50

"

..,,'

The Cpm index 91

(:~l(UsL+LSL)=m) so that d=t(65--35)=15, and con- ,siders three processes with the following characteristics: I

A: ~= 50(= T); (j = 5

B: ~= 57.5; a = 2.5C: ~= 61.25; (j = 1.25

The values of the indices Cp, Cpk and Cpm are

ProcessABC

Cp124

Cpk111

Cpm10.630.44

(ctm = Cpm, since T= m).

Note that, in this example, C" and Cpm move in oppositedirections. According to Cp, the processes are, in order ofdesirability - C:B:A - while for CPOIthe order is reversed.According to the index Cpkothere is nothing to choose amongthe processes.

At this point we repeat our warning that CPOl'as defined in(3.1), is not a satisfactory m~asure of process capability unlesst/w larget value is equal to the midpoint of the specificationinterval (i.e. T:'::1II=1(LJSL+LSL)).

The reason for this is that, whether the expected value of Xis T - (5or T + (5,we obtain the same value of CpnlOnamely

13 d(a2 + (52) - J, (3.9) ,

although, unless T = m, these processes can have markedlydifferent expected proportions of NC items (items with Xoutside specification limits). For example, if

T=iuSL+!LSL

/I

.,fr:

tI

! I

I

t:

I II

,

,Jill,,I,

rt

J'I

Page 51: 041254380 x Capability Indices

92 The Cpm index and related indices

and

c5=.t(USL-LSL)=!d

then with E[X] = T + D= USL, we would expect at least 50%NC items if X has a symmetrical distribution, while whenE[X] = T - (j = m, this proportion would, in general, be muchless.

In the light of this, it is, in our opinion, pointless to discussproperties of Cpm (as defined in 0.1)) except when T=m.These comments also apply to C~m'

Generally, it is not possible to calculate Cpmfrom Cpk,or con-versely, without knowledge of values of the ratios d:0":I(- mI.

If T= m then from (3.1), and (2.20b)

~:: ={I_I(~ml} x[1+{(~mrJ (3.10)

If, further Cp = 1 (Le. d = 30")then

9(1-CPk)2=((~mr =C;~-l

Since (d. (2.20 c))

(3.11)

( I~-ml) { (i!I_T

)2

}

-!C k = l C C = 1+ -=-- C

p , d p pm G'. P

we have

CI' ~ max (Cpk>C,",,)

Also, Cpk~ or :::;;Cpm according as

I~-ml

{ (~- T)

2

}

-!

1- ---y- ~ or :::;; 1+ --;--

I

~

\

. ~!,

1

,

It

The Cpm index

I.e. as

(1-I(~ml){1+(~~7r;,

If 1n'=T, then (3.12) becomes

(l-U){l +(~y U2r ~ or :::;;1

(where

u= I~-ml = I(~- TI).d d'

I.e.

(d

)2

1+ -;; U2~ or ~(1_-U)-2

or :::;;1

93

(3.12)

(~)2 ~ or ~(1-u)-2-1 = l-(l-u)~= 2-U,

-

0" U2 u2(1-u)2 u(1-u)2

(3.13)

Usually u< 1 (or else ~ is outside specification limits).Inequalities (3.13) can be written

2-u

C; ~ or :::;;9u(1-u)2, '

Here are some values of this function.

u 0

-- 2-u9u(1-u)2 co

0.2

1.56

0.4

1.23

0.5

1.33

0.6

1.62

0.8

4.17

1

CIJ

Page 52: 041254380 x Capability Indices

94 Th(' lP1l1 il/d("\" 1/111//"1'11/(('(/ indic('s

This function has a minimum valuc of 1.23 at /I=OJX

(appro x.). If Cp is unsatisfactory (Cp~ 1) then we always haveCpk~ Cpm (if T= m), but we can have Cpk> Cpm whenC~> 1.23(Cp> 1.11), provided u is not too close to 0 or 1. Ifu=O (i.e. ~=1'=1J1.)then Cp=Cpk=Cpm' .

Johnson (1991) also approached the Cpmindex from thepoint of view of loss functions, using the converse conceptof 'worth'. He supposed that the maximum worth, W1',of an item corresponds to the characteristic X having thetarget value, T. As the deviation of X from T increases,the worth becomes less, . eventually becoming zero, andthen negative. If the worth function is of the simpleform

WT-k(X ~ 1')2 for W1'~k(X - 1')2

it becomes zero when IX-1'I=JWT/k. In Johnson's (1991)approach the values 1':f:.JWT/k play similar roles to those ofthe specification limits in defining Cpmand we may define~=.J~';k. .

The ratio (worth)/(maximum worth)

= 1- k(X - 1')2Wr

(X - 1')2=1--~2

'is termed rhe relative worth,and (X- 1')2/~2 is termed therelative loss. The expected relative loss is then (in Johnson'snotation)

E[(X - 1')2J - L~2 e

It ISIVI;

IIiCII

.'heIpl11'".

1110

t

",..

lid

Illu

~I) ,

~ ,r

, 'I~

III~II.1.

r

Comparison of Cpm and C~m with Cp and CpkRecalling that

95

IfC;m = (~Y {E[X -- 1'fJ}-1

we see that,

Le=(3~)2C~

Hence Le is effectivelyequivalent to Cpm,although it has adifferentbackground.A natural unbiased estimator of Le is

~ 1 ~ 2Le = -2 L. (Xi- 1')

n~ i=1

The distribution of n~ 2Ie is therefore that of

2 '2(n(~-- 1')2)(J Xn 2(J

discussed in section 1.5. The resulting analysis for construc-tion of confidence intervals follows lines to be developed inscction 3.5. Extcnsion to nonsymmetric loss functions isdirect.

3.3 COMPARISON OF CpmAND qmWITH Cp AND Cpk

We first note that if 1'=~, then Cptn is the same as CpoIf, also,~= rn,then both Cpmand C~mare the SaIne as Cpk' If, further,the process characteristic has a 'stable normal distribution' .thcn a vaillc of ! (for all four indiccs) mcans that thc expectedproportion of product with values of X within specification

Page 53: 041254380 x Capability Indices

96 The Cpm index and related indices

limits is a little over 99.7%. Negative values are impossiblefor Cp and Cpm'They are impossible for C;m if T falls withinspecification limits (and for Cpk if ~ falls within specificationlimits).

. The principal distinction between Cpm and Cpk is, asl'I1irabella (1991) implicitly indicates, and Kushler andHurley (1992) state explicitly, in the relative importance ofthe specification limits (USL and LSL)' versus the targetvalue, T. As explained in Chapter 2, the main functionof Cpk is to indicate the degree to which the process iswithin specification limits. For LSL< ~< USL, Cpk---+ (X) as0"---+0,but large values of Cpk do not provide informa-tion about discrepancy between ~ and T. The index Cpn"however, purports to measure the degree of 'on-targetness'of the process, and Cpm-HX) if ~---+T,as well as 0"---+0.With Cpm, specification limits are used solely in order toscale the loss function (squared deviation) in the denom-inator.

Kushler and Hurley (1992, p.2) emphasize that, if Tis notequal to m, the following paradox can arise'... moving theprocess mean towards the target (which will increase the Cpktand Cpmindices)' can reduce the fraction of the distributionwhich is within' specification limits.' ,

Finally, we compare plots of values of (~, 0")correspondingto fixed values of Cp, Cpk and Cpm'

If Cpm= c, then

0"2+(~-T)2=(} ~r

These (~,0")points lie on a semi-circle with centre at (T,O),radius *d/c, and restricted to non-negative 0".

tThe Cpk index referred to here is the modified index described by Kane(1986), namely Hd-I~-TI}/a.

II

,ml

t

'

"

,r

,'I'II

IIII ..

I':;t,

III

.~

~llr

11111

I'll

Comparison of Cpm and Ctm with Cp and Cpk

If Cp = c, we have ((,0") points on the line

97

1 d0"=--

3 c

parallel to the (-axis, and tangential to the semi-circle corre-sponding to Cpm=c at the point (T,idle).If Cpk= c we have

/(-m/ +3cO"=d

II (and 0");0). This corresponds to ((,0") points on the pair oflines ,

"I

Ii and'II

II

( =m+d-3CO"

(=m-d + 3CO"

These lines intersect at the point (m, Ad/c) and meet the (-axisal Ihe points (m-d,o) and (m+d,o). Boyles (1991, Figs. 4and 5) provides diagrams representirtg these loci from theexample already described in section 3.2 (USL = 65,LSL= 35,m,; 50,2d= 30)withc= -.t,1, 1and l '

Figure 3.1 is a simplified version, giving ((,0") loci for' ageneral value of c, exceeding t with T = m. (If c < t the lin~sCpk =c would meet the (-axis between its points of intersec- .tion ((=m:ttd/c) with the Cpm=C semi-circle.) If T=I=m,thelines Cpk= c and Cp= c would be unchanged but the Cpm=csemi-circle would be moved horizontally to centre at the'point (T, 0), with points of intersection with the (-axis at~= T:ttdlc.

" " , "-"':"'~';'~~';'>\

"">'~\

Page 54: 041254380 x Capability Indices

98 The Cpm index and related indices

(J

(a) .(a)

m-d m'-:!d/c m m-dd/c m+d

Fig. 3.1 Loci of points (~,a) such that: (a) Cp=c; (b) Cpk=C; (c)Cpm=c; when T=m, C>5. (If c<t m.-d/3c<m-d<m-t-d<in+dj3c.) If c=} then a=d.

3.4 ESTIMATION OF Cpm (AND C~n,)

3.4.1 Point estimation

A natural estimator of Cpm is obtained by replacingE[(X - Tf] by its unbiased estimator

(;'2= ~I (Xj-T)2n to; 1

(3.14)

Of course, a' is not an unbiased estimator of (/, but has anegativebias, sinceE[a'2]> {E[a']}2.Nor is t,dja' an unbias.ed estimator of Cpm'

We define

d d

{

I n

}

-iC = -.:.. = - - '\' (X.- 1')

2pm 3-I 3 L., I

(J n i~l(3.15)

~

~

.1(11)

II~ir-

~

\ 11'1

I" I

j

, ,"~

~I"I

I

-' ,I..

~ ')

Estimationof Cpm (and C1~m) 99

Tbe corresponding estimator of C;m is

'* - !~/--=E:~--::_!!1l!

{}- £ (Xi - T)2}

- j

C!"n - 3 n i~ I (3.16)

This,also, is a biasedestimator.The biasesof both Cpm andC;m wiII general1y be positive (because the bias of a' is negative).

For the case when each of Xl"", Xn has a normal dis-tribution with expected value ( and standard deviation (J, theexact distribution of Cpm (and of C;m) can be derived. Thedistribution of

n

na'2= 2:: (Xi - 1')2i~ I

is that of (J2x (noncentral X2 witb n degrees of free,dom andnoncentrality parameter A=n((- Tf(J-2), symbolically'

(J2X~2(A) (3.17)

lit

I

This isa mixture of (central) chi-squared distributions with

(n+ 2j) degrees of freedom and corresponding Poissod, weights

.(l.A)jPltA)=exp( -:tA)-~ , j=O,1,...

' J. j (3.1~)

(See section 1.5.2). .' I

The expected value of C~m (the rth moment of Cpm aboutzero) is

I-(dvhz)r , 00 (1A)j

Jl~(Cpm)= -~ exp(-:tA) .~ ~E[(X;+2j)-!jrJ- 0 .1 I

- (dv,r;;)r _1 IX) (JA)j r(hn-r)l j) .-3

exp( 2A) ,2:: ., 2Jrrn -~ ')(J

J ~o .1. n .1)

, ! (3.19)

(Fo, r;;'n,I4(C~rn)is infinite.) II

Page 55: 041254380 x Capability Indices

100 The Cpmindex and related indices

The rth moment about zero of C~mis obtained from (3.19)by replacing d by d-IT-ml.

Taking r=l and r=2 we 'obtain

I(c )= E[C ] = d~ (_lJe ) ~ U!)}q~(n-l)+J)

/11 pm pm 3 exp 2 L., ., 2\r( 1 +')

(1 j=() .I, 2n .1(3.20 u)

and

var(Cpm)=

00 (lJe)} 1

(d~ )\XP(-~Je) ,~ T n-2+2j, 3(1 )-0

- {E[CpmJ}2 (3.20b)

. Table 3.1 gives values of E[Cpm] and S.D. (Cpm)=(var(C"",))! for sclel:ted values of /I, tlla and I~" Tlla( ~j,(See a~so the note on calculation of these quantities, at theend ofithis section.)

Taqle 3.2 gives corresponding values of Cpm'\ When ~= T we have ),=0 and '

dCpm = 3<1( = Cp)

E[C ] = d~ n!(n-l)) (321 b)pm 3<1 2!n!n) "

(d~ )2

[1 1

{n!(n-1))

}

2

]var(CpnJ= ~ n-2 - '2 n!n) (3.21c)II , .

cL (2.16a) with f= n, and the bias of Cpm,as an estimator ~)f

Cpm,\is' ' , .\ ' . .

E,[C ] - C =!£ [(~)\ n!(n-1)) -l

J

'

(3 "2 )pm pm 3<1 2 n!n) .-

168718 . .0 .f;-g . 5tS~'.,J./' JI.. I ~.'. :,

(3.21 a)

1"11

t II

I-I,

. I,I

\'I

II

",I

Estimation of Cpm (and C;m) 101

Since r(y-l)jY/ny» 1 for all y~ 1 the bias is positive, asexpected h. see the remarks following (3.16).

Some authors, for example, Spiring (1989) use the estimator

Cpm= d (3{n~l,t, (X,- r)'r ~ ": lYC,rn (3.23aJ

in place of Cpm, (replacing the divisor n by (n-1), in thedeno'minator). This is presumably inspired by analogy withthe unbiased estimator of <12, .

S2 =~ f. (Xi-X)2n-1i=1

.

Since Cpm < Cpm,the bias of Cpmmust be less than that ofC\'m. Although the bias of Cprn is positive, however, that ofCpm can be negative,though it cannot be less than

-Cpm(1- (n: 1Y)(3.23b)

(Suhbaiah and Taam, 1991). (This follows from the fact that

~ .

(n~ I

)! -

E[CpmJ-Cpm= -n- E[CpmJ-Cpm

(n-l

)!

> -n- Cpm-<;pm

I

because E[CpmJ > Cpm')If thl1 bias of Cpm (as well as that of Cpm) is positive, and

also ~= T, then the mean square error of Cpm is less than thatof Gpm,because both the variance and the squared bias of Cpm

are less ~han those of Cpm' It should be noted, however, that this

Prnr"""Q "',..""..,~" J_-"J_..

Page 56: 041254380 x Capability Indices

Table 3.1 Moments of Cpm:E= E[CpmJ;S.D.=S.D. (Cpm)

I(- TI/G

0 Q5 1

S.D. E S.D. EE S.D.

1.5

E S.D.

2

E S.D.

n= 10ciG

-0.722 0.1831.084 0.2751.445 0.3661.806 0.458

.2.167 0.550

-.:;

6

11= 15" G

-0.702 0.1391.054 0.2091.405 0.2781.756 0.3482.107 0.417

~

.:;

t

0.644 0.16]0.967 0.2411.289 0.3221.611 0.4021.933 0.482

0.627 0.1220.941 1).1831.254 0.2441.568 0.3051.881 . 0.366

0.501 0.110 0.3860.752 0.165 0.5781.002 0.220 0.7711.253 0.275 0.9641.503 0.330 1.157

0.490 0.084 0.3800.736 0.126 0.5700.981 0.168 0.7601.226 0.210 0.950L471 0.252 1.140

0.0690.1030.137'0.1710.206

0.0530.0800.1060.1330.160

0.307 0:0440.460 0.0660.613 0.0880.767 0.1100.920 0.132

0.304 0.0350.456 0.0520.607 0.0700.759 0.0870.911 0.104

'- ... -.-" .- ..-.

--- -'- ---_.

n=20

. . dJG2 0.693 0.fl6 0.619 0.102 0.485 0.070 0.377 0.045 0.302 0.0303 1.040 0.174 0.928 0.153 0.728 0.106 0.566 0068 0.453 0.0444 1.386 0.233 1.238 0.204 0.971 0.141 0.755 0.090 0.605 0.0595 1.733 0.291 1.547 0.255 1.214 0.176 0.943 0.113 0.756 0.0746 2.079 0.349 1.857 0.306 1.456 0.211 1.132 0.135 0.907 0.089

n=25

diG2 0.688 0.102 0.614 0.089 0.482 0.062 0.376 0.040 0.30J 0.0263 1.031 0.153 0.921 0.134 0.724 0.093 0.564 0.060 0.452 0.0394 1.375 0.204 1.228 0.179 0.965 0.124 0.752 0.079 0.603 0.0525 1.719 0.255 1.536 0.223 1.206 0.155 0.939 0.099 0.754 0.0666 2.063 0.306 1.843 0.268 1.447 0.186 1.127 0.119 0.904 0.079

n=30

diG'2 0.684 0.092 0.611 0.080 0.481 0.056 0.375 0.036 0.301 0.0243 1.026 0.138 0.911 0.121 0.721 0.084 0.562 0.054 0.451 0.0364 1.368 0.184 1.222 0.161 0.961 0.112 0.750 0.072 0.602 0.0485 1.710 0.229 0.528 0.201 1.201 0.140 0.937 0.090 0.752 0.0606 2.052 0.215 1.833 0.241 1.442 0.167 1.124 0.108 0.903 0.071

Page 57: 041254380 x Capability Indices

Table 3.1 (Cont.)

0

1(- TI/o-

1 20.5 1.5

E S.D. EE S.D. E S.D. S.D. E S.D.

n=35d/o-23456

n=40d/o-23456

0.681 0.084 0.6091.022 0.126 0.9131.363 0.168 1.2181.703 0.210 1.5222.044 0.253 1.827

0.074 0.479 0.051 0.374 0.033 0.300 0.0220.111 0.719 0.077 0.561 0.050 0.451 0.0330.148 0.958 0.102 0.748 0.066 0.601 0.0440.184 1.198 0.128 0.935 0.083 0.751 0.0550.221 1.438 0.154 1.122 0.099 0.901 0.066

--- ---. ~_..- ' .,-,,""'"

0.680 0.078 0.607 0.069 0.478 0.048 0.373 0.031 0.300 0.0201.019 0.117 0.911 0.103 0.717 0.071 0.560 0.046 0.450 0.0311.359 0.156 1.215 0.137 0.956 0.095 0.747 0.062 0.600 0.0411.699 0.195 1.518 0.t71 1.196 0.119 0:934 0.077 .0.750 0.0512.039 0.235 1.822 0.206 1.435 0.143 1.120 0.092 0.901 0.061

n=45d/o-2 0.678 0.073 0.606 0.064 0.477 0.045 0.373 0.029 0.300 0.0193 1.017 0.110 0.909 0.096 0.716 0.067 0.560 0.043 0.450 0.0294 1.356 0.147 1.212 0.129 0.955 0.089 0.746 0.058 0.600 0.0385 1.695 0.183 1.515 0.161 1.194 0.112 0.933 0.072 0.750 0.0486 2.034 0.220 1.818 0.193 1.432 0.134 1.119 0.087 0.900 0.058

n=50

d/o-2 0.677 0.069 0.605 0.061 0.477 0.042 0.373 0.027 0.300 0.0183 1.015 0.104 0.908 0.091 0.715 0.063 0.559 0.041 0.450 0.0274 1.354 0.139 1.210 0.121 0.954 0.084 0.745 0.055 0.600 0.0365 1.692 0.173 1.513 0.152 1.192 0.106 0.932 0.068 0.749 0.0466 2.031 0.208 1.815 0.182 1.430 0.127 1.118 0.082 0.899 0.055

Page 58: 041254380 x Capability Indices

is established only under these rather [estrictive conditions. Ifminimization of mean square error were a primary objective,neither n nor (n ~ 1) would be the appropriate divIsor in thedenominator.

The effect of correlation among the X s may be less markedfor Cpm than for Cp (sec section 2.6), because

n - 1E[ f,(X r- T)2]= 0-2 + (~- T)2

j= 1

whatever the values of Pi}= corr(Xj,X).

Note on calculation of E[C;'mJ

The series for E[C~mJ, obtained by putting 1'=2 in (3.19), isoften quite slowly convergent. The following alternative ap-proach was found to give good results.

Since the expected value of (X;+2j)-1 is (n-2+2))-1, theexpected value of C;mis (dJ;;/3(J)2 times the weighted mean ofthcsc quantitics with Poisson wcights Pj( !A.).We can wri1c

(dJ;;)2

[1

]E[C;mJ= ~ EJ n-2+2J (324)

1

~. '

t"

It

',.

.~

,r

II III

I

-I'

II

Estimation of Cpm (and C~m) 107

where J has a Poisson distribution with expected value1 "

2)"

To evaluate EJ [(n - 2 + 2J) - 11, we resort tCJthe method of

statistical differentials (or 'delta method'), obtaining

E[n--2~+2JJ~ n-2:2E[JJ

dr(n-2+ 2J)-11

!lr(J)+ L dF .T=E[J]=J.j2'7r= 1

(3.25)

The quantities Ilr(.I) are the central moments of the Poissondistribution with expected value p, namely

E[.lJ = 1,1; !/2 (J) = 1A.;!/3 (J) = 1,1; !/4(J) = 1,1 + iA.2; ...

~-II

(Higher moments are given by Haight (1967, p. 7) or can beevaluated using the recurrence formula - for moments !l~(j)about zero --

.Ie

[a!l~(J)

J/I~ 11 (.1)= 2_r!/~.-t (.1)+ ~1(A/2)-

- see e.g. Johnson et al. (1993, Ch. 3).)

3.4.2 Confidence interval estimation

Since

Cpm=td{ (J2+(~ - T)2}-j

construction of a confidence interval for Cpm is equivalent toconstruction of a confidence interval for ()'2+(~- T)2. If

!n

106 The Cpm index and related indices

Table 3.2 Values of Cpm

. I-TI(J

d- 0.0 0.5 1.0 1.5 2.0(J

2 0.667 0.596 0.471 0.370 0.2983 1.000 0.894 0.707 0.555 0.4474 1.333 1.193 0.943 0.740 0.5965 . 1.667 1.491 1.179 0.925 0.745 .

6 . 2.000 1.789 1.414 1.109 0.894

Page 59: 041254380 x Capability Indices

108 The Cpm illdex alld related illdices

statistics A(X),B(X),based on Xl, X2, ..., X /I' could be found,such that

Pr[A(X) ~ 62 + (c;- T)2 ~ B(X)] = 1-a (3.26)

(i.e. (A(X), B(X)) is a 1OO(1- a)°;;) confidence interval for62 + (c;- T)2) then

(3(B~X))i' 3(A~~))i)

would be a 100(1-a)!% interval for ('.1111'However, it is not possible to tind statistics satisfying

(3.26) exactly, on the usual assumption of normality ofdistribution of X. We therefore try to construct approximate100(1 - oc)% confidence intervals.

Recalling that ~i'=1(X i 'I' '1')1 is distributed as 62X;.\~.) withA=n{(c;- T)/6}2, we have .

var(itl (Xi- T)2) =2ncr2{62+ 2(~- T)2}(3.27)

(as well as

ELtl (Xi-T)2]=62+(c;-T)2

i which is valid if E[X] =c; and var(X)= 62, whatever thedistributionof X). .

If we use a normal approximation to the distribution of

/I

L (Xi- T)2i= 1

r..",

:rt)

~t'll'

1'*

'Iba

I"~, .

'.I

I

Estimation of cpm (and C:m) 109

,III

I'

I

!

for large n we obtain the approximate 1O0(1-a)% interval

1 /I

- L (Xi- T)2 :tZl-aI26[2n-l{62+2(~- T)2}]! (3.28)n i=1 ,

where <I>(Zt-aI2)=1-a/2, for 62+(c;_T)2. However since ~and 6 are not known (otherwise we would know the value' ofCpmand not need to estimate it), it is necessary to replace themby .estimates. The resulting interval for Cpm is of quitecomplicated form.

Boyles (1991) and Subbaiah and Taam (1991) use a betterapproximation to the distribution of X~2(A),namely thatintroduced by Patnaik (1949). This is the distribution of Cx]where

(T2+2(~-T)2 {a2+(~-T)2}2 nc=- '62+(c;-T)2 and J= 62+2(~-T)2 62

(The values of c and J give the cQrrect expected value andvariance.)

It is still necessary to use estimates c and j, of c and frespectively.

Now

1 II 1 II

(c

)2 -.L (Xi- T)2 -6-2 L (Xi- T)2pm n.=1 n "- I..,.,.- - -

Cpm' - 62+(c;-T)2 = {~-T }

21+ -

6 .

II

6-2 L (Xi- T)2-, i=1 --- cf

(3.29j

0'

IfI

IlrW "fi"

:1

1"1

. 1'1

rIt

Page 60: 041254380 x Capability Indices

110 The Cpm index and related indices

and hence is approximately distributed as

a2ex} X}a2cf = T

(3.30)

and

P [( I' - I 2 )1?1 C ( I' - I 2 . )1?1 1'" tr, XI.a/2 L-pm< pm<, XI. 1"a/2 L'pm =~ -aOJI)

where xL is definedby

Prexy< xL,] = /:

We construct an approximate 10O(1-a)(Yc)confidence in-terval

((~ )lc .(x;"1 a/2)~C )J pm' f pm

(3.32)

for Cpmby using the estimate

" n{82+(~-T)2}2.I = 82{82 + 2(~- T)2}

where

? 1 ~ -2 A;?8-=- t (Xi-X) and ~=An-1 i= 1

Further approximating the distribution of Xl leads to theapproximate 100(1-IX)% confidence intervals

-( Zl-rx/2 )Cpm 1:\: J2!

(3.33)

~~

I~ II

II

d

.~

Estimation of Cpm (and Ctm) 111

and

Cpm(1--9~~ tZl-rx/2(9} YY (3.34)

For practical use, we recommend (3,33).Several other approximate formulas can be obtained for

100(1-a)% confidence intervals for Cpm,using approximatedistributions of various functions of Cpm(e.g. log Cpm,.)C;:,etc.). Subbaiah and Taam (1991)present results of a compara-tive Monte Carlo study of sixteensuch formulas. . .

, From (3.17)it followsthat (Cpm/Cpm)2is distributed as

.~

1

{ (' ~~ T)

2

}

- 1

- 1 + - x [noncentral X2 with n degrees ofn . a freedom and noncentrality

. parameter n{«(- T)/a}2J

Writing (~- T)/rr=(), we have

i\

'ip{ X;'\/2(n(52)< n(1 + 152)( ~::Y < X~~ I _rx/2(nc52) ]=' 1- IX

whence.;

p{Co ~ 02)YX~.rx/2(nc52)Cpm < Cpm <

(1

)!

< 11(1-1- ()2) X:,. I . adno2 )Cpm ]= 1- IX

It is tempting to say that

((1

)1

(1

)1

)I 2 A 2 A

n(1+c52) XII.o</2(nc5)Cpm, n(1+c52) X~.1-rx/2(nc5 )Cpm .

,

Page 61: 041254380 x Capability Indices

112 The C index and related illdice~pm .

provides a 100(1-a)% confidence interval for Cpm'Unfortu-nately, calculation of these confidence limits involves knowl-edge of the value of b. If the natural estimator

b=X-TS

is used, the resulting confidence interval will not have100(1- oc)%coverage,but something rather less.

If ~= T, then Cpm= Cp(see 3.21 a),.and Cpmis distributed as

u.) f ~ f'"",~ 1".ThedistributionofCpm for sample size n is thus the same asthat of C"for sample size (n+ 1).The unit increase in effectivesi:le of sample comes from the additional knowledge that~= T. In particular, we obtain 100(1-a)% confidence limits

(X...a/Z c . X..,l-aI2C )-fit pm -fit pm

for Cpm(=,Cp)in this case.Chan et ai. (1988a) discuss the use of Bayesian methods (with

a 'noninformative' prior distribution for C1)in this situation, andprovide some tables. See also Cheng (1992) for a nontechnicaldescription (with further tables) of this approach,

3.5 BOYLES' MODIFIED CpmINDEX

Boyles (1992) has proposed the PCI

t:m =t[(T-LSL)-2Ex<T[(X - T)2]+(USL- T)-ZEx>T[(X- T)Z]r! (3.35)

-Ii

111

!

!1

,1

lit

II

Iw

II'i.

'1IIt.~.\

11.1\Q

ill

",I,Ii

\I(

Boyles' modified Cpm index

as a modification of Cpm for use when ~# T. Here

113

!

iI

Ex.-:'['[(X - T)2] = E[(X ..- 1')21X < T]Pr[X < T](3.36a)

and

EX>T[(X - 1')zJ=E[(X - T)ZIX> TJPr[X> TJ(3.36b)

These are sometimes called semivariances. (The possibilitythat X might equal T is ignored, since we suppose that we aredealing with a continuous variable, and so Pr[X = T] =0.)

Boylesintroduced this index from a loss function point of,view, to allow for an asymmetric loss function. However, ithas some features in common with the index Cjkp, to beintroduced in Section 4.4 which is intended to allow for someasymmetry in the distribution of X (the process distribution).

Note that, if T=!(LSL+ USL). then C:m=Cpm, so C:m isnot affected by asymmetry of the process distribution in thatcase.

A natural estimator of C:m is

C:m= ![!

{(T-LSL)-2 I (Xi-T)23 n XI<1'

+(USL-T)-z'I (Xi-T)2 }]

-1

. Xi>T(3.37)

In general, the distribution of C:m is complicated, even whenthe process distribution is normal.

We note that

nw=

9C~+2 =(T-LSL)-2 I (Xi-T)2'pm X,<1'

II+(USL- T)-Z L (Xi- T)2 .

XI>T(3.38)

'"'

Page 62: 041254380 x Capability Indices

114 The Cpm index and related indices

We now derive expressions for the expected value and vari-ance of W when Xi is distributed normally with expectedvalue ~equal to T, and variance (Jz. (Of course, ~ and (J arenot in fact known; otherwise we woud not need to estimatethem. The analysis studies what would happen if ~= T == mand the formula for C:m is used.) Conditionally on there beingK Xs less than T and (n-K) greater thall T, W is distributedas '

f('J'

LSL) " 2 2 I (lJSL'I

'

) ' 2 2 I, 2'l -, XK T , ,- XII K j (J nJ9)

the two X2S being mutually independent. The conditionalexpected value and variance of W, given K are

E[WIK] = {(T- LSL)-z K +(USL- T)-Z(n- K)}(Jz(:1.40a)

and

var[WIK] = 2{(T -LSL)-4K +(USL- T)-4(n-K)}(J4(3.40h)

Since Pr[X = T] = 0, the distribution of K is binomialwithparameters (n,~), hence

E[K] = E[n-K] =1n (3.41a)

var(K) = var(n - K) = tn (3.41h)

E[KZJ = var(K) + {E[K]}Z = ,tn(n+ 1)= E[(n - K)Z](1141')

and

E[K(n- K)] =nE[KJ - E[KZ] =tn(n-l) (3.41d)

~"

~i

"

II

I

'1 I

'I I

I

~,:i,') Boyles' mod(fied Cpm index

Hence the overall expected value of W is

115

E[W] = E[E[W IK]] =:tn{(T -- LSL)-z +(USL- T)-Z}(Jz(3.42a)

anu the overall variance is

var(W)= E[E[WZIK]J - {E[Tf']P

=E[2{(1'-LSL)-4K

+(USL- T)-4(n-K)} + {(1'- LSL)-zK

+(USL-- 1')-z(n- K)}Z] -,tnZ{(T - LSL)-z

+(U'SL- T)-Zp(J4

= n[(1'-LSL)-4+(USL- T)-4+i{(1'-LSL)-z

-(USL- T)-Zp](J4 (3.42b)

Since X is distributed as N(T, (JZ) we have ~= T. Hence

+Z 1(' ------pill - <)x 1[(T- LSL) 2 +(USL- T)-Z}(Jz

and so

c:; W,c:; = tn{(T-LSL)-z +(USL- 1')=2}(Jz (3.43)

whence

E[(~YJ=1 (3.44 a)

Page 63: 041254380 x Capability Indices

116 The CPI1lindex ilml relil/ed indices

and

((c+

)2

), var(W)pm 'var - - .

6;'" - [tn{(T- LSL}-2+(USL- T)-2}(J"2Y

- 4{(T-LSL)-4+(USL- T)-4} + {(T-LSL)'-:! -(USL-T) 2}2- {(T-LSL}-2+(USL-T)-:-2}2

(3.44 b)

Boyles (1992) suggests that the distribution of (C:'1;IC;m)2.might bewell approximated by Ih:11of a 'mean chi-squared'X~Iv. This would give the correct first two moments, if v hasthe value

n{(T-LSL) 2+(USL T) 2}2

2{(T - LSL)-4 + (\}SL- T)-4} +t{(T - LSL)- 2- (USL -- T)- 2}2

nO+R2f

2(1+R4)+t(l-R2f

with

R= USL-TT - LSL

[If T=-!(LSL+ USL), then R= 1.JAlthough C;,; is not an unbiased estimator of C;~ (nor is

C;m, of C;m), the statistic W=nl(9C;~) is an unbiased esti-mator of (9C;~)-1. It is not, however, a minimum varianceunbiased estimator.

Details of calculation of such an estimator, based on themethod outlined in section 1.12.4, are shown in the Appendixto the present chapter. \.

The index Cjkp, to be described in section 4.4 also ust:Sthesemivariances, and, in its estimator, the partial sums of squares

(3.45)

L (Xi-T)2 and L (Xi-T)2.X;<T Xi>T

I

. I

i

t: ")

"11)

III

LH

\

"(J

110

IlK

The Cpmk index 117

3.6 THE CpmkINDEX

Cpk is obtained from Cp by modifying the numerator; Cpm isobtained from Cp by modifying the denominator. If the twomodifications are combined, we obtain the indexj. .

II

d-I~-ml d-I~-mlCpmk= 3{E[(X - T)2J}i = 3{(J2 +(~ - T)2}i

introduced by Pearn et al. (1992). We note that

(3.46)

( I~-ml)Cpmk= 1-~ Cpm

{ (~- 1'

)2

}

-!= 1+ - . Cpk(J .

~(1_1~ :ml){1+ (~: Tn c, (3.47)

IIII.

III,I

Hence Cpmk~ Cpm, and also Cpmk~ Cpk' In fact Cpmk=CpkCpm/Cp. If ~=m=T then Cp=Cpk=Cpm=Cpmk' How-ever, if we were estimating Cpmk(or the other PCls) we wouldnot know that ~=T=m.

A natural estimator for CpmkisI

- d-IX -mlC~k=

{

I n-

}

~

3 ;;i~l (Xi- 1')2

(3.48)

III

The distribution of Cpmkis quite complicated, even if m = T.We first suppose, in addition to normality, that ~=m=T and

obtain an expression for the rth moment about zero of Cpmkin that case.

".

Page 64: 041254380 x Capability Indices

.118 The Cpm index and related indices

If ~= T=m, we have,

cr - nir(d-IX -ml)'pmk-

{

n

}

!r

3' i~! (Xi-X)2+n(X -m)2

=~t (-1)j (~)dr-jn!(r-j),

3 j=O 1

{

n(X -m)z.

}

jj

i (X1-X)z+n(X -m)Zi= !

{

1

}

j(r - j)

X it! (X/-X)Z+n(~ -m)Z

(3.49)

We now use the following well known results:

n

(a) Y! = L (X/- X)2 is distributed as (}"2X;- !;i= !

(b) Yz =n(X -m)Z is distributed as (}"2Xt;

(c) Yz/(Y!+ Yz) has a BetaCLi(n -1)) distribution (see sec-tion 1.6);

(d) YZ/(Y1+ Yz) and (Y! + Yz) are mutually independent (seesection 1.6.)

Then, from (3.49) ~md(aHd)

-r 1 r .(r

)(dJn )

r-j

E[Cpmk]=-;: L (-1)J . --3 j=O 1 ()"

x E[{Beta('ti(n-1))}!J] E[(X;)-!(r-j)]

r~

~

'J)(~

O'

bl , '

(g\ '

,

\, to.I~

The Cpmk index 119

=~= t (-l)j (~)\

(~ ~

)r-.j

:\ In.l () .I (nj"2

rct(j+ l))[(l(n-r+j))x .

[(1(n+ 1)) ,(3.50a)

If we allow for the possibility that ~ is not equal to m(though still assuming T = m), then (a) remains unchanged,but Yz is distributed as

(}"2x(noncentral XZwith one degree of freedom andnoncentrality parameter A=(}"-Z(~-m)2)=i(}"ZX'!\A)

Recalling from (1.33) that x't (A) is distributed as a mix-ture of central XI+ 21 distributions with Poisson weightseX:p(-1A)(H)i/i! (i=0,1,...), (c) and «.1)must be replacedby..

. (c,d)' thejoint distribution of Yz/(Y! + Yz) and (Y1+ Yz) isthat of a mixture of pairs of independent variables.

I d. .

b.

B (1 . 1( 1))Z "

WII, Istn utlOns ctaz+l, z 11- , XII+ZirespectIve-ly and the Poisson weights given above. (Rememberthat X:,2,2;(A)+ X~ I is distributed as X::,z;(A) if thetwo variates are independent.)

IIr

I '

II.1 For this more general case we find

E[C~mk]=rreXP(-A/2).f (A~~)/i (-1)j (~)(dJn)r-j !

1=0 l. j=O 1 ()"

x E[ {Beta(i+i, i(n-1))}jjZ]E[(Xf +2;)-(r-j)fZ]

= 3-r exp(- A/2) f (Ae)i1=0 I!

Page 65: 041254380 x Capability Indices

120 The Cpm index and related indices

r _

(1'

)(£1

~1

)r-.i

x L (- 1).1, Jj:O .I (J -1(1(1 +j) + i)r(1(n - I'+ j) + i)

x f(!+i)f(!(n+j)+i)(3.50b)

Taking 1'= 1 we obtain

E[.:>I

]. ==1 ( -,1 /2) ~ (A/2)i[~tn f(!(n-1)+i)

L-pmk 3exp L...,., 2 r( l .)i:() I. (J 2n+1

f(1+i)f(1n+i)J- f(1+i)r(1(n+1)+i)

, 00 (A/2);

[d

~=iexp( - ,1/2)L :;- - - Rji=O L (J 2

2f(1 + i) 1J- (n-1 + 2i)f(1+ i) Ri

withRi=r(1(n-1)+ i)/r(1n+i).Taking ,1.=0we find

1

[d

~2 1

J.

E[Cpmk]=- 3- -

2R-- .,ficR ,(3.50d)(J (11-1) 1t

(3,50 c)

with R =nHn-1))/nin).I ' '

Now taking 1'=2, we obtain from (3o50b) after somesimplification

2 00 (!W[(

d./h)

2 1E[CpmkJ=hxp( -!A) L -"T - - 2 2

'

i'cO I. ,(J 11- + I

- 2-fi(dj;t )[(1 + i) ( 1 )+ 1+ 2iJ

\ (J n!+i) \n-1+2i n+2i(:\.~Oe)

!

\I

\II

\

HI

Id~1

tll/l)

dlHI

1111

.

I~ICo)

,.

tI,l

11,1)

'lMltnoro

r.I~l'Iiell

I

'I I

I') I

Cpm(a) indices 121

From (3.50c) and (3.50d), the variance of Cpmk can bederived. If A= 0 we have

E[C~,nkJ=! [-~

(~j;I)2 - ~- (dj;I)+ !

]9 n-2 (J (n-1)~ (J 11(3.501)

[,

Table 3.3 gives values of expected values and standarddeviations of Cpmb from (3.50) for a few values of 11and dl(J.Remember that these values apply for ~= T = m.

Just as Cpmk~ Cpm, with equality if ~= m, a similar argu-ment shows that Cpmk~ Cpm'However var(Cpmd~ var( Cpm)'

If T;:m we have from (3.46)

I

~.-m

l

~1---d (J d

C,mk~30{l+(,:m)'r

, (3.51)

Table 3.4 gives values of Cpmk for representative values ofdj(J and 1~-mll(J,

J3.7 Cpm(a) INDICES

The index Cpm attempts to represent a combination of ef.fects of greater (or less) variability and of greater (or less)relative deviation of ~ from the target value. The class of

indices now to be described attempts to pro,vide a flexiblechoice of the relative importance to be ascribed to these twoe,ffects.

From (3.8), we note that if '" = I~- TI/(J is small, then

Cpm ~(l-1(2)Cp (3.52)

Page 66: 041254380 x Capability Indices

Table 3.3 Moments of Cpmk: E=E[CpmkJ; S.D.=S.D. (Cpmd -

I(-ml/a

11=10d/a23456

II= 20

d/a23456

......

..

E

0.6360.9971.3591.7202.081

0.6330.9791.3261.6722.019

..~ ".""'-:::

0.0

S.D.

0.1930.2810.3710.4620.553

0.1240.180.0.2370.294.0.352

111!118!

E

0.4910.8131.1260.4581.780

0.4700.7791.0891.398L708

-~.

0.5

S.D.

0.1920.2680.3470.4260.505

0.1290.1760.2250.2750.325

E

0.264 0.1350.515 0.1870.765 0.2411.016 0.2951.266 0.350

0.249 0.0870.491 01110.734 0.1550.977 0.1901.220 0.225

1.0

S.D.

0.1060.2990.4920.6840.877

0.0990.1880.4760.6650.854

1.5

E S.D.

0.0060.1600.3130.467

0;620

0.0030.1540.305OA570.608

-- --- - -' "

0.0810.1140.1480.1820.216

0.0540.0760.0980.1200.142

2.0

E S.D.

0.0510.0720.0940.1160.138

0.0350.0490.0630.0780.093

,-. t..; ._- ". c '"-

n=30

d/a23456

n=40

d/a23456

n=50

d/a23456

0.6350.9771.3191.6612.003

0.6370.9771.3171.6561.996

0.6390.9781.3161.6541.993

0.0990.1420.1870.2320.278

0.084 0.4590.121 0.7620.160 . 1.0660.198 1.3700.237 1.673

0.0750.1080.1410.1750.210

0.4560.7591.0611.3641.666

0.4620.7681.074U791.685

0.0790.1070.1360.1650.195

0.1030.1410.1790.2180.258

0.0880.1200.1530.1860.220

0.2440.48:'0.7250.9651206

0.0700.0960.1230.1510.178

0.2420.4810.7200.9591.199

0.0600.0820.1050.1290.152

0.241OA790.7180.9561.194

0.0530.0730.0930.1140.135

0.0970.2840.4710.6590.846

0.0960.2820.4690.6560.843

0.043 0.0020.060 0.1520.078 0.3030.096 0.4530.114 0.604

0.0370.0520.0670.0820.097

0.095 0.033 0.0010.281 0.046 0.1510.468 0.059 0.3010.654 0.073 0.4510.840 0.087 0.601

0.0020.1520.3020.4520.602

.-.~---,--

0.0280.0390.0510.0630.075

0.0240.0340.0440.0540.064

0.0210.0300.0390.0480.057

Page 67: 041254380 x Capability Indices

Table 3.3 (Cont1

0.0 0.5

I~-ml/(J

1.0

E S.D.

1.5 2.0

E S.D. E S.D. E S.D. E S.D.

.-----------------.------

iii~ --- "--- L~ c "

=-~.~ ~~-~ ~---_..-

n=60dl(J2 0.641 0.068 0.455 0.071 0.240 0.048 0.095 0.030 0.001 0.0193 0.978 0.098 0.756 0.097 0.478 0.066 0.281 0.042 0.151 0.0274 1.316 0.128 1.058 0.123 0.716 0.085 0.467 0.054 0.301 0.0365 1.653 0.159 1.360 0.150 0.954 0.104 0.653 0.066 6.450 0.0446 1.991 0.190 1.662 0.177 1.192 0.123 0.830 0.079 0.600 0.052

n=70di(J2 0.642 0.063 0.454 0.066 0.239 0.044 0.094 0.028 0.001 0.0183 0.979 0.090 0.755 0.089 0.477 0.061 0.280 0.039 0.151 0.0254 1.316 0.118 1.056 0.114 0.715 0.078 0.466 0.050 0.300 0.0335 1.653 0.147 1.357 0.138 0.952 0.096 0.652 0.061 0.450 0.0406 1.990 0.175 1.659 0.163 1.190 0.113 0.838 0.073 0.599 0.048

n=80

dl(J2 0.643 0.058 0.453 0.061 0.239 0.041 0.094 0.026 0.001 0.0173 0.980 . 0.084 0.754 0.083 0.476 0.057 0.280 0.036 0.150 0.0244 1.316 0.110 1.055 0.106 0.714 0.073 0.466 0.047 0.300 0.0315 1.653 0.137 1.355 0.129 0.951 0.089 0.651 0.057 0.449 0.0386 1.989 0.163 1.656 0.152 1.188 0.105 0.837 0.068 0.599 0.045

n=90dl(J2 0.644 0.055 0.452 0.058 0.239 0.039 0.094 0.024 0.001 0.0163 0.980 0.079 0.753 0.078 0.476 0.054 0.280 0.034 0.150 0.0224 1.316 0.104 1.053 0.100 0.713 0.069 0.465 0.044 0.300 0.0295 1.653 0.129 1.354 0.121 0.950 0.084 0.651 0.054 0.449 0.0366 1.989 0.154 1.654 0.143 1.187 0.099 0.837 0.064 0.599 0.042

n= 100

dl(J2 0.645 0.052 0.452 0.055 0.238 0.037 0.094 0.023 0.001 0.0153 0.981 0.075 0.752 0.074 0.475 0.051 0.279 0.032 0.150 0.0214 1.317 0.098 1.052 0.094 0.712 0.065 0.465 0.042 0.300 0.0275 1.653 0.122 1.353 0.115 0.949 0.079 0.651 0.051 0.44-9 0.0346 1.988 0.146 1.653 0.135 1.186 0.094 0.836 0.060 0.599 0.040

Page 68: 041254380 x Capability Indices

It is suggested that we consider the class of PCls defined hy

Cpm(a)=(1-a(2)Cp (3.53)

where a is a positive constant, chosen with regard tq thedesired balance between the effects of variability and relativedeparture from target value.

A natural estimator of Cpm(a)is

-{ (

X- T

)Z

}- d

{ (X- T

)Z

}Cpm(a)= 1-a s- CP=3S 1-a s- (3.54)

Assuming normal (N(~, (JZ»variation, Cpm(a)is distributed as

(n-1)1

{1- a(n; 1)(U+ '~)Z

}Cp (3.55)

Xn-l nXn-l

where U and Xn- 1 are mutually independent and U is a unitnormal variable.

From (3.55),the r-th moment of Cpm(a)is

E[{ Cpm(a)}'] =

II

I

I

Appendix 3,A127

=(n-l)'/2.£ (-IV (~){a(nn-l) }j -E[(U +~.\/;;)2j]E[Xn-:i'2j]1~ () J

(3.56 a)

In particular using (2.16a,b)

-{

a(n -1) 2

}

-E[Cpm(a)] = 1- n(n-4) (1 + n( ) E[Cp] (3.56 b)

and

jE[{C

'

}2

{2a(n-l)

pm (a) ]= 1- -(I+n(Z)+ .n(n-5) '. '

a2(11-1)2

}~Z(n- 5)(n -7) (3 + 6n(2 + n2(4) E[C~J

(3.56c)

t,

It must be remembered that the Cpm(a)indices suffer fromthe same drawback as that noted for Cpmin section 3.2,namely,that if T is not equal to m=:hLSL+ USL), the same value ofCpm(a),obtained when c;=T-~ or c;=T+~, can correspondto markedly different expected proportions of NC items.

It is possible for a C"m(a)indcx to bc negative(unlikeC" orCpn" but like Cpd. Sincc this occurs. only for large values of"I, the relative departure of c; from the target value T, thiswould not be a drawback. .

GUpla and Kolz (1993) have labulated values of E[Cpm(1)Jand S.D.(Cpm(!). ,

APPENDIX 3.A: A MINIMUM VARIANCE UNBIASEDESTIMATOR FOR W=(9C:,;r1

This appendix is for more mathematically-inclined readers.Hthe independent random variables X1,...,Xn each have

the normal distribution with expected value c; and variance

126 The Cpm index and related indices

Table 3.4 Values of Cpmkwhen T=m

I-ml(J"

d- 0.0 0.5 1.0 1.5 2.0(J

2 0.667 0.447 0.236 0.0925 0.0003 1.000 0.745 0.471 0.277 0.1494 1.333 1.043 0.707 9.462 0.29H5 1.667 1.342 1.943 0.647 0.4476 2.000 1.640 . 1.179 1.832 0.596

Page 69: 041254380 x Capability Indices

128 The Cpm index and related indices

(f2, then

I

}

2

n 1 - X) 1 n{(-

) (Xi. .th X= - ~ Xj

n -1 WI n j= 1i (Xj-X)2j= I

, has a beta distribution with parameters !.!(n - 2); as defined, in section 1.6 (see e.g., David et al. (1954).)

The PDF of

y:.= n (Xi-X )2 . n

'(n-l)2 S wIth (n-l)S2 = j~1 (Xj-X)2

, is thus

fYi(y)={B(1.,!n-1)}-ly-!(1_y)in-2 O<y<1 (3.57)

and the PDF of

j;; Xi-X =fi sgn(X,-X))V'=n-l S '

is

fVI(v)={B(-~, tn-l)}-1(I-v2)!n-2 -1<v<1 (3;58)

The conditional distribution of Xi, given the sufficientstatistics X and S, is that of the linear function of Vj:

,

n-l

X+ j;; SVi

I ,

I I

I '

i

I

II 'II

I

Appendix 3.A

Now, the inequalityX,< T is equivalent to'

129

r -Vi<_-=L~ X-Tn-l -S (3.59)

Hence

EXi<I.[(Xi- T)2IX, S] =

f- Cn(X - T)fS

= {Bet !n-l)} -1 (X- T+c;;1Sv)2(I-v2)!n-2 dv-1

=(~Ygn(Cn(~~- T))(3.60)

with CII=; j;;/(n - t) and

gll(z)={B(ttn-l)}-1 f~:(Z+V)2(I-V2)!n-2dv (3.61a)

Since !(V/+ 1) has a beta distribution with parameters(tn--l, tn-I), an alternative form for gn(z) can be obtainedin terms of incomplete beta function ratios (see section 1.6).

We find

YII(z)=4{h2hOn-t, !n,-I)-hhOn, tn-I)n

+ 2n-l Ihnn+ 1,tn-I)} (3.61b)

where h= 1(1- z).Since E[~]=O, E[VrJ=(n-l)-:-t and X/-T=(X-T)+

(n-l/~)SV,

[ 2iT 2-EXi<T (x,-T) IA,S]+ExI>T[(Xj-T)'IX,S]

= E[(Xi- T)2IX, S] =(X - T)2 + n-l S2 (3.62)n

Page 70: 041254380 x Capability Indices

---. - ---

Table 3.5-Function gn(x)

12

10 20 30 40 50 100 200 600 1000

x-1 1.1111 1.0526 1.0345 1.0256 1.0204 1.0101 1.0050 1.0017 1.0010-0.95 1.0136 0.9551 0.9370 0.9281 0.9229 0.9126 0.9075 0.9042 0.9035-0.9 0.9211 0.8626 0.8445 0.8356 0.8304 0.8201 0.8150 0.8117 0.8110-0.85 0.8336 0.7751 0.7570 0.7481 0.7429 0:7326 0.7275 0.7242 0.7235-0.8 0.7511 0.6926 0.6745 0.6656 0.6604 0.6501 0.6450 0.6417 0.6410-0.75 0.6736 0.6151 0.5970 0.5881 0.5829 0.5726 0.5675 0.5642 0.5635-0.7 0.6010 0.5426 0.5245 0.5156 0.5104 0.5001 0.4950 0.4917 0.4910-0.65 0.5334 0.4751 0.4570 0.4481 0.4429 0.4326 0.4275 0.4242 0.4235-0.6 0.4707 0.4126 0.3945 0.3856 0.3804 0.3701 '0.3650 0.3617 0.3610-0.55 0.4118 0.3551 0.3370 0.3281 0.3229 0.3126 0.3075 0.3042 0.3035-0.5 0.3597 0.3025 0.2845 0.2756 0.2704 0.2601 0.2550 0.2517 0.2510-0.45 0.3112 0.2549 0.2370 0.2281 0.2229 0.2126 0.2075 . 0.2042 0.2035-0.4 0.2672 0.2122 0.1944 0.1856 0.1804 0.1701 0.1650 0.1617 0.1610-0.35 0.1276 0.1743 0.1568 0.1481 0.1429 0.1326 0.1275 0.1242 0.1235-0.3 0.1922 0.1410 0.1240 0.1155 0.1103 0.1001 0.0950 0.0917 0.0910-0.25 0.1608 0.1122 0.0960 0.0877 0.0827 0.0726 0.0675 0.0642 0.0635-0.2 0.1331 0.0877 0.0724 0.0646 0.0599 0.0500 0.0450 0.0417 0.0410-0.15 001090 0.0672 0.0532 0.0460 0.0416 0.0324 0.0275 0.0242 0.0235

-------------- --------._'-__n- --._- -------- ---_.-

-- -."",""r''"1.."'- "'-"-'- - ..' ;.""" , ',:::;:-

------'-'---

-0.1 0.0883 0.0504 0.03'79 0.0315 Om76 0.0193 0.0149 0.0117 00110-0.05 0.0705 0.0369 0.061 0.0206 0.0173 0.0105 0.0068 0.0041 0.0035

0 0.0556 0.0263 0.0172 0.0128 0.0102 0.0051 0.0025 0.0008 0.00050.05 0.0431 0.0182 0.0109 0.0075 0.0056 0.0021 0.0007 0.0001 0.00000.1 0.0328 0.0122 0.0066 0.0042 0.0029 0.0008 .0.0001 0.0000 0.00000.15 0.0246 0.0079 0.0038 0.0021 0.0013 0.0002 0.0000 0.0000 0.00000.2 0.0180 0.0049 0.0010 0.0010 0.0006 0.0001 0.0000 0.0000 0.00000.25 0.0129 0.0029 0.0010 0.0004 0.0002 0.0000 0.0000 0.0000 0.00000.3 0.0089 0.0016 0.0005 0.0002 0.0001 0.0000 0.0000 0.0000 0.00000.35 0.0060 0.0009 0.0002 0.0001 0.0000 0.0000 0.0000 0.0000 0.00000.4 0.0039 0.0004 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.45 0.0024 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.5 0.0014 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.55 0.0008 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.6 0,0004 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 O.00000.65 0.0002 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.7 0.0001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.75 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.8 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.85 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.9 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00000.95 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.00001 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000

Page 71: 041254380 x Capability Indices

132 The Cpm index alld related indices

and so

E[WIX, S] =n[(T- LSL)-ZEx;<T[(X/- T)ZIX, S]Z Z -

+(USL- T)- EXI>T[(X/-T) IX,S]]

=n(~)l{(T-LSL)-Z .

-(USL- T)-Z}gn(Cn(Xs- T))

+(USL-n-Z{(X ;T CIIY+ ~~ I}]

(3.63)

The statistic W is, of course, more easily evaluated thanwould be the MVUE, E[WjX, S]. Tablc 3.5, and Fig. 3.2

1.2

I

C:urve SampleiSlze(n) '

I

I I

l~.~~= ~~ I,-,-, 30, " 40

- 50 ' "

,,..,1.0j\\

\\,\,\.\\,\'..,'. ,

'\\'\.\

Ii..,.\'.

Ii\.\.. \Ii..\-.. \'\

"'-;~':'"~';""'"

~""'>'""""

0.8

~ 0.601

0.4

0.2

0.0

-1.0""""'~""-:::;:.-, --r---.--

-0.5 0.0x

0.5

Fig. 3.2 The function gn(x) for various sample sizes.

-Bibliography 133

~II

N.

Iw

giving values of gll(z),will be of some assistance in this regard.Note that 9n(0)= -Hn-1) -1 and gn(-1) = 1+ (n-1)'- 1. It canbe seen that gn(x) is small for x> 0, while for x ~ -0.25

gn(x)~xz+(n-l)-l (3.64)

HM'

01

c'

prIIIW'

YI

The advantage (reduction of variance)attained by use of theMVUE has not been evaluated as yet. It may be worth theextra effort involved in its calculation and it also provides anexample.of relevance of 'advanced' statistical methodology toproblems connected with quality control. It should be notedthat the Blackwell-Rao theorem (see section 1.12.4), onwhich the derivation of the MVUE is based, is almost 50years old!

BIBLIOGRAPHY

litIII

Boyles, R.A.(1991)The Taguchi capability index, J. Qua/. Techno/.,23.Boyles, R.A. (1992) Cpm for asymmetrical tolerances, Tech., Rep.

Precision Castparts Corp., Portland, Oregon. -

Chan, L.K., Cheng, S.W. and Spiring, FA (1988a) A new measureof process capability, CPIn,J. Qua/. Techno/., 20, 160-75.

Chan, L.K., Cheng, S.W. and Spiring, FA (1988b) A graphicalIcchniqtH~for process capahilily, Trans. ASQC Conyress, Dallas,Texas, 268-75. ,

Chan, LX., Xiong, Z. and Zhang, D. (1990) On the asymptoticdistributions of some process capability indices, Commun. Statist.- Them'. Meth., 19,11-18.

Cheng; S.W. (1992) Is the process capable? Tables and graphs in'assessing Cpn" Quality Engineering 4, 563-76.

Chou, Y.-M., Owen, D.B. and Borrego, S.A. (1990) Lower confi-dence limits on process capability indices, Quality Progress, 23(7),231-236.

David, B.A., Hartley, H.O. and Pearson, £.S. (1954) Distribution ofthe ratio, in a single normal sample, of range to sta.ndarddeviation Biometrika, 41, 482-93.

JI II1.0

I

'I "

Page 72: 041254380 x Capability Indices

134 The Cpm index and related indices

Grant, E.L. and Leavenworth, R.S. (\988) Statistical Quality Con-trol, (6th edn), McGraw-Hili: New York.

Gupta, A.K. and Kotz, S. (1993) Tech. Rep. Bowling Green StateUniversity, Bowling Green Ohio.

Haight, F.A. (1967) Handbook of the Poisson Distribution, Wiley:New York.

Hsiang, T.C. and Taguchi, G. (1985) A Tutorial on Quality, Controland Assurance - The Taguchi Methods. ArneI'. Statist. Assoc.Annual Meeting, Las Vegas, Nevada (188 pp.). .

Johnson, N.L.. Kotz. S. and Kemp. A.W. (1993) Distrilmtio/lsin Statistics - Discrete Distributions (2nd edn), Wiley: NewYork.

Johnson, T. (\ 991) A new measure of process capability related toCp"" MS. Dept. Agric. Resource Econ.. North Carolina StateUniversity, Raleigh, North Carolina. .

Kane, V.E. (1986) Process capability indices, J. Qual. Technol., 18,41-52.

Kushler, R. and Hurley, P. (1992) Confidence bounds for capability, iridices. J. Qual. Tee/mo/.. 24, 188-195.

Marcucci, M.O. and Beazley, C.F. (1988)Capability indices: Processperformance measures, Trans. ASQC Congress, 516-23.

Mirabella, J. (1991) Determining which capability index to use,Quality Prouress, 24(8), 10.

Patnaik, P.B. (1949) The non-central X2- and F-distributions andtheir applications, Biometrika, 36, 202-32.

Peam, W.L., Kotz, S. and Johnson, N.L. (1992) Distributional andinferential, properties of process capability indices, J. Qual. Tech-nol., 24, 216-31.

Spiring, FA (1989) An application of Cpmto the toolwear problem,Trans.ASQC Conoress.Toront'o, 123-8.

Spiring. 1".1\.(1991a) Assessing prm;ess capability in the presence orsystematic assignable Clluse, J. Qual. T eclmol., 23, 125-34.

Spiring, FA (1991b) The Cpmindex, Quality Progress, 24(2), 57-61.Subbaiah, P. and Taam, W. (1991) Inference on the capability index:

Cpm, MS, Dcpt. Math. ScL, Oakland University, Rochester,Minnesota.

.,...-

IU"8'I'1'8"

1-.1

I-,

.,

4

Process capability indicesunder non-normality: .

robustness properties

4.1 INTRODUCTION

I

I

The efTectsof non-normality of the distribution of the meas-ured characteristic, X, on properties of PCls have not been aIP..ajor research item until quite recently, although soinepractitioners have been well aware of possible problems inthis respect.

In his seminal paper, Kane (1986) devoted only a shortparagraph to these problems, in a section dealing with'drawbacks'.

'A variety of processes result in a non-normal distributionJor a characteristic. It is probably reasonable to expect thatcapability indices are somewlJ.at sensitive to departuresfrom normality. Alas, it is possible to estimate the percen-tage of parts outside the specification limits, either directlyor with a fitted distribution. This percentage can be relatedto' an equivalent capability for a process having a normaldistribution.'

A more alarmist and pessimistic assessment of the 'hope-lessness' of meaningful interpretation of PCls (in particular,of Cpk) when the process is not normal is presented in Parts2 and 3 of Gunter's (1989) series of four papers. Gunteremphasizes the difference between 'perfect' (precisely normal)

135

Page 73: 041254380 x Capability Indices

136 Process capability illdices under 1IOII-IIOrmality

and 'occasionally erratic' processes, e.g., contaminated pro-cesses with probability density functions of form

,

pcp(x;~1,ad + (1.- p)cp(x;~2' a2) (4.1)

where 0 <p < 1;a1, a2 >0; (~1' ad¥:(~2' a2) and

cp(x;~,a) = (j21t<T)-.lexp{-~( x~~y}

Usually p is close to 1 and (1- p) is small; the secondcomponent in (4.1) represents 'contamination' of the basicdistribution, which corresponds to the first componenL

It is often hard to distinguish such types of non-normality, inpractice, by means of standard, conventional testing pro-cedures, though they can cause the behaviour of Cpk to varyquite drastically. .

The discussion of non-normality falls into two main parts.The first, and easier of the two, is investigation of the propertiesof PCls and their estimators when the distribution of X hasspecific non-normal forms. The second, and more difficult, isdevelopment of methods of allowing for non-normality andconsideration of use of new PCls specially designed to be robust(i.e. not too sensitive) to non-normality.

There is a point of view (with which we disagree, in so far aswe can understand it) which would regard our discussion in thenext three sections as being of little importance. McCoy (1991)regards the manner in which PCls are used, rather than theeffect of non-normality upon them, as of primary importance,and writes: 'All that is necessary are statistically generatedcontrol limits to separate residual noise from a statistical signalindicating something unexpected or undesirable is occurring'.He also says that PCls (especially Cpk)are dimensionless andthus'... become(s) a handy tool for simultaneously looking atall characteristics of conclusions relative to each other'. We arenot clear what this means, but it is not likely that a single index

nfl'lce,led

,lI'nl

~

~Ol~~

A ,atuedex

1-

Effects of non-normality137

will provide such an insight into several features of a distribu-tion. Indeed one of us (NLJ) has some sympathy with the

. viewpointof M.Johnson(1992),whoasserts that '... noneofthe indices Cp, Cph Cpu, Cpmand Cpkadds any knowledge orunderstanding beyond that contained in the equivalent basicparameters J1,a, target value and the specification limits'..

McCoy (1991, p. 50) also notes: 'In fact, the number of real-lifeproduction runs that are normal enough to provide truly ac-curate estimates of population distribution are more likely thanthe exceptions', although (p. 51) '... a well-executed study on astable process will probably result in a normal distribution'.

4.2 EFFECTS OF NON-NORMALITY

Gunter (1989) has studied the interpretation of Cpk underthree different non-normal distributions. These distributionsall have the same mean (~) and standard deviation (a), theytherefore all have the same values for Cpk, as well as CpoThethree distributions are:

1. a skew distribution with finite lower boundary - namely a

xL distribution (Chi-squarewith 4.5 degreesof freedom)2. a heavy-tailed([32>3)distribution- namelya t8distribution3. a uniform distribution.

In each case, the distribution is standardized (shiftedandscaled to produce common values (~=0, 0' =1)formeanandstandard deviation).

The proportions (ppm) of NC items outside limits:!: 3aare:(approximately)

For (1.) 14000 (all above 3a). .. For (2.)4000 (halfabove 3a~half below - 3a).. For (3.) zero!

Arid for normal, 2700 (half above 3a, half below - 3a).Figure 4.1 shows the four distribution curves.

'\

I 10-I, I

l'i \I Iii

", .1)

I'

lill

'II

.,,1 Iii,

i II.11

Intl

IIlc

I 'Iiln

(JO.

1111nyull

Iltll nos

J IIS.Is

2 Itnd3 ,'"SI

,

Page 74: 041254380 x Capability Indices

138,Process capahility indices under non-normality

..11

ell ~,rL~N

ill

..h.Id

Ille,Inl

-~~n

I AI.,~J).

I Shll

~j dill ru, , ('II he

I

.. All four distributions have the same expected value (~=O), and standarddeviations (0'= 1).The specification range was LSL= -J.to USL=3, so thevalue of Cp was t for all four distributions.

-

Effects of non-normality 139

50

i'I

" I .

Also, the values for the (skew) exponential distributions differsharply from the values for the three other (symmetrical)distributions especially for larger values of c.

On the more theoretical side, Kocherlakota et af. (1992)

'Ill

Table 4.1 Values of Pr[CpcJ from simulation (Cp= 1)0.48 t-

'I 'VIDistrihution of X Normal Triangular Un!form Exponential

1/ 11= c=.t(2) \ 5 0.50 0.996 1.000 1.000 0.9690.75 0.873 0.881 0.924 0.846

, I \ \ .. 1.00 0.600 0.561 0.522 0.6830.32 J- (iulIssiulI0.267 0.527(normul) 1.25 0.373 0.320

1.50 0.223 0.193 0.152 0.4042.00 0.089 0.079 0.058 0.2402.50 0.041 0.039 0.025 0.150--- (3)

0.16t- I' \\1 Ii) 10 0.50 1.000 1.000 1.000 0.9850.75 0.933 0.959 0.989 0.8651.00 0.568 0.529 0.501 0.6441.25 0.243 0.189 0.127 0.4231.50 0.091 0.064 0.035 0.2590.00t- .../' I I I2.00 0.013 0.011 0.005 0.097

--r 2.50 0.002 0.003 0.001 0.039' . I-5.0 -2.5 0.0 2.5 5.0 7.5 I20 1.000 0.996I 7.5 20 0.50 1.000 1.000

Fig.4.1 Four different shaped process distributions with the same c;

II'

0.75 0.979 0.994 1.0pO 0.900and 0', and hence the same Cpk'(Redrawn from Gunter, RH. (1989),

0), 1.00 0.548 0.523 0:508 0.617Quality Progress, 22, 108-109.)1.25 0.129 0.087 0.041 0.315

I 1.50 0.017 0.011 0.002 0.135I') I\(\ fI flfI1 fI flfIfI fI flAA A A,.".,A number of investigationsinto effectsof non-normalityon I I

estimators of PCls have been published. English and Taylor i .(1990)carried out extensiveMonte Carlo (simulation)studies

I",v

of the distribution of Cp (with Cp= 1) for normal, symmetrical' 1'1

I 'Itriangular, uniform and exponential distribution"',with I IIsample sizes n = 5,10 (10)30,50. The results are summarized I IIin part in Table 4.1. From these values of Pr[Cpc] we note, Iinter alia, that for n less than 20, there can be substantial, J

i IIdepartures from the true Cp value (as well as differencesin I "-interpretations of these values for non-normal distributions).

Page 75: 041254380 x Capability Indices

140 Process capability indices under non-normality

'have established the distribution of (;p=idj8 in two cases:when the process distribution is (i) contaminated normal with'(11 = (12 = (1, see (4.1); and (ii) Edgeworth series

fx(x) = [1-iA3D3 +~A4D4 +-h-A~D6Jcp(x; 0,1) (4.2)

where Dj signifies jth derivative with respect to X, .

aJ:1,d

A3 = Ji3(X) .{Ji2(X)}i=JP;

(4.3a)

and

A4= Ji4(X){Ji2(X))2 -3=/32-3

(4.3b)

are standardized measures of skewness and kurtosis respect-ively. (See section 1.1 for definitions of the central moments

Jir(X) and the shape factors A3,A4,$, /32')

4.2.1 Contaminatednorm~l

We will refer in more detail, later, to the results of Kochler-lakota et ai. (1992). First, we will describe an approach usedby ourselves (Kotz and Johnson (1993)) to derive moments ofCp for a more. general contamination model, with k com-ponents (but with each component having the same variance,(12), with PDF

k

L PjCP(x;~j, (1)J=1

(4.4)

t'r11',lh

,."

(wl.r(71.,nll.,

1-

1)

~a)

10)

,I.

~I

I~fl

Effects of non-normality 141

The essential clue, in deriving expressions for the momentsof (;" for this model. is the following simple observation.

A random sample of size n from a population with PDF(4.4) can be regarded as a mixture of random samplesof sizes N 1>"', Nk from k populations with PDFsCP(X;~1,(1),...,cp(x;~k>(1),where N=(N1)...,Nk) have ajoint multinomial distribution with parameters (n; P1, ...,pd so that

rr[N=nJ=r{jOl (Nj=nj)]nl k-~ nP"J- k j

Onj!j=tj= 1

k k

Lnj=n LPj=l n=(n1",.,nk)j=1 j=1

(4.5)iIII The conditional distribution of the statistic

"L (XJ-~O)2

j= t(4.6)

(where ~o is an arbitrary, fixed number), given N = n, is that of(12x [noncentral chi-squared with n degrees of freedom andnoncentrality .

k

parameter A~ = L ni~j-'o)2]-symbolically, (12X;,\.t~)

j= 1 (4.7)","-'-"." . .::, ',112 (",", '~~>. '.\,.: - '. ' ;, .

~;'. '," ... '. ,.,. ,/' "'" ~.,.

Page 76: 041254380 x Capability Indices

142 Process capahility indices under non-normality

The conditional distribution of

n,- 1

(n-1)82= L (Xj-X)2j= 1

is that of

0"2X~2-1 (An)

with

k 1 kAn= L nj((j-~1)2 where ~I=- L nj(j

j~ 1 n j I

(4.8) ,

Expressions for moments of noncentral chi-squared distribu-tions were presented in section 1.5 and have been used .inChapters 2 and 3. We will now use these expressions to deriveformulas for the moments of

- dCP=38

Conditional on N = n, Cp is distributed asI

!. d(n -1)% !x~- 1(An)= Cp(n -1)i /x~ - 1(An)3 0"(4.9)

Recalling the formula for moments of noncentral chi-squaredfrom Chapter 1, we have

E[C~IN=nJ = C~(n-1)ir exp(-lAn)

x I{

(t~n)i}

rC; J+i-~ )'

, i=0 z! 22r/2r(

n-1-2+i

(4.10)

1\

, 8~ .

J.8)

\~

A';~-i 10

~ve

.8 I!I/'

, l~

t(»)

~~rcdI'd

j'I"~hI

I

LIO)-.

14I

Effects of non-normality

and in particular

. - I 1E[C1,IN= n] = C,,(n--1)2 exp( -'2)'n)

(n

)CfJ 1 i r L'

X ,~o b::() 2 ' 1-/-

J2r(~;l +)E[C~IN =nJ = C~(n-l) exp(-.tAn)

x CfJ(Hn)l 1L ---:-;- 3 2.1=0 l. n- + 1

Averaging over the distribution of N, we obtain

, E[C~J=L

(~

)(.iI P?)E[C~lnJ 1'=1,2,...

n TInj! J=1j= 1

The summation is over all nj~O, constrained by

k

L nj=n.j~ 1

143

(4.11a)

(4.q b). I

(4.12)

In the special symmetric case of k=3, with ~dO"=-'a,~2= 0, ~3/0"= a, (a> 0) and PI = P3= P, say, the noncentralityparameter is

{1 2

}2

An= nl +n3--~(n3-nd a(4.13)

Some values of E[CpJ/Cp and S.D.(C\)/Cp, calculated from(4.11a) and (4.11 b), are shown in Table 4.2 (based on Kotz

Page 77: 041254380 x Capability Indices

13t)

I

.<1) 0c.. .~ V) ~

~ ~I-< 0;:j.....><

'8~

~I

IIZ ~M~.!!.c=Eo-<

~I

II~ ....

'V.

\0 r-"<t M\ONOOMr-"<tM"""N N"""""""""-

000,000

V)~;:,"<tO\or-"<to\O\"<tOOM0000\000000"":00000

"<tOooO\MNO\OOr-\OO\MNN--"""-000000

--O\N"<tOOooMNOO-\o0000\00\"":"":"":0"":0

\OMO\r-Ooo0\0\ r-r-"<t MNN ..................000000

Noooo\O"<t......O\r-MNN......qqqqqq

......

r-\OoO\oo0\0\00r-"<t"<tN N""""""""""""

000000

MO"""oo\OMO\O'I"<tMNN000000

: : : : : :

r-r-oooo0\0\0000"<t"<tC"IN........................000000

"<t"<tNNr-r-~~~~~~

: : : : : :

r-r-oooo0'I0\0000"<t"<tN N """""""""000000

"<t"<tNNOOOOO\Cl\"<t"<tNN000000

: : : : : :

,.,

"J'

II

... II V) Ir\ Ir\ Ir\ V). or,>lJ>,.,ONONON

11::>..000000II

>lJ> ...I::>..

r'1"

II~,. 0= :.:- 0

N 0M

r "<too\OMOO\Or-Ir\MNNC"I--"""-000000

O\\Ooo "<tOO"<tr-O'IMoo-00\0'10\0\0\"":00000

"<toooo"<tO\\OO\oor-r-MMNN----900000

N-OO',...\oOO\oM""""""o\000000\"":"":"":"":"":0

\0 V) 0\ 0'1 0 0\O\O\r-r-"<tMNN-"""""""""

000000

"""\OO\"<t"<tO\O\ooMMN-000000

: : : : : :

r-r-OO\OO0\0\00r-"<t"<tN N""""""""""""000000

MN"""O\O1r\O\O\"<t"<tNN000000

: : : : : :

r-r-OOOO0\0\ 00 oo"<t"<tN N - - -'......000000

"<t"<tNNr-r-

~~~~~~: : : : : :

r-r-OOOO.0'I0\0000"<t"<tC'!f'j~~~~000000

"<t"<tC"lC"lr-r-O\O\"<t"<tNN

qqqqqq-......-.

b~

I~ II

>lJ>1::>.._V)"""V)-1r\I 000000....

."J>

NII~.. 0.c:.:-

0N

0M

"

II

(I

II,

~ I

t

(I'

~-t

II,

nll~

w4

W

~

.I

/I

w

, th'l"

1'1,,'

(I II

millI ,1

Effects of non-normality 145

and Johnson (1992)) for selected values of a and p. (Hung andHagen (1992) have constructed a computer program usingGAUSS for calculation of these quantities.) ,

The table is subdivided into two parts. In both parts

Cp(=1d/0')=1. In Table 4.2a the cases k=3, n=nt +n2 +n3= 10,20, and 30; Pt =P3 =0.05, and 0.25;a = 0.005,0.01, 0.125,0.25, 0.5, and 1.0 are presented. Table4.2 b covers situations when k = 2 for the same values of nand a, with p=O.l, and 0.5..

From the tables we note that,I

1. for given n, the bias and S.D. both decrease as a increases;2. for given a, the bias and S.D. both decrease as n increases;3. for given a and n, the bias and S.D. do not vary greatly

with Pt and P2 (when k=3) or with P (when k=2). Thebias is numerically greater when the 'contaminating' par-

. ameters are smaller;4. on the other hand, the greater the value of a, the more

marked is the variation (such as it is) with respect to Ptand P3 (or p).

Observe, also, that the bias is positive when a is small, anddecreases as the proportion of contamination increases.HQwever, the bias is negative for larger values of a and p,and it becomes more pronounced as n increases. This iscontrary to the situation when p =0 (no contamination) inwhich the bias of Cp is always positive, though decreasing asn increases. Gunter (1989) also observed a negative bias,when the contaminating variable has a larger vflriance than'the main one.. Kocherlakola et al. (1992) provide more detailed tablesfor the case k = 2. They also derived the moments of Cpu=(USL - X)/(38). The distribution of Cpu is essentially amixture of doubly noncentral t distributions; see section1.7.

0.cr.)

q< I

u.J

0

-IviII", 0u . I u.Jt::<1)..c

<U'" Iq'- V)

0 N

0'0

cr.) 1u.J'-';:::

.8.....

V)Ie<j

';;<1)"0 N

"0......

"0

;;.....CIJ

"0

;; 10G3'

...... cr.)

'-' 0<1) 0;:j 1u.J;>

Page 78: 041254380 x Capability Indices

146Process capahility indices under non-normality

4.2.2 Edgeworth series distributions

The use of Edgeworth expansions to represent mild devi-ations from normality has been quite I~lshionable in recentyears. There is a voluminous literature on this topic, includ-ing Subrahmaniam* (1966, 1968a,b) who has been a pioneerin this field. .

It has to be kept in mind that there are quite severelimitations on the allowable values of .j7f;and fJ2 to ensurethat the PDF (4.2) remains positive for all x; see, e.g. Johnsonand Kotz (1970, p.l8), and for a more detailed analysis,Balitskaya and Zolotuhina (1988)). Kocherlakota et al. (t 992)show that for the process dislribution (4.2) the expt:<.;(t:dva/lit:of Cp is

~ (n-1)Jr(~(n-2)) C,,{t +~y"(fJ2-3)--~h,,fil}

E[C.] fir(i(n-I)j (4.14")where

n-1g,,= n(n+1)

and

h - (n-2)"-: n(n+ l)(n + 3)

The premultiplier of Cp will be recognized as the bias correla-tion factor h"- I, defined in (2.16), so (4.t4 a) can be written

- - 3 J IE[C,,] = E[Cplnormal] {J + /J?!,J/J2-- 3)-, Hh,,/J1i (4./4h)

... K. Subrahmaniam was the (pen) name used by K. Kocherlakota in the1960s.

III! .

_I.lint

~I.

IfQI'

iI!

~(/)I

I'1"1

Effects of non-normality 147

KocherIakota et al. (1992) also show that the variance of Cp isi

n -1 2

[I

{ 1 1

] { ~ 2

n(n'-3)Cp 1+~ 1+g"<fiz-3)-9h,,fJu - E[Cp]}(4.14c)

Table 4.3 is based on Table 8 of Kocherlakota et al. (1992).As is to be expected from section 1.1, for given fJz

the values~of E[CpJjCp do not vary with j7f;, and the valuesof S.D.(Cp)jCp vary only slig~htly with ft. Also, forgiven J7I!, the values of S.D.(Cp)jCp increase with fJ2'

Kocherlakota et at. (1992) carried out a similar investiga-tion for the distribution of the natural estimator of Cpu(=!(lISL-- ~)j(j),

If the process is symmetrical (so that fJl =0), E[CpuJ isproportional to Cpu' Table 4.4 (based on Kocherlakota et al.

Ia-

Ib)I

II

1"'f1he

Cpu=USL.-X

(4.15)3S

, Table4.3 Expeltcd vailicand standard deviationof ,Cp/CpforEdgeworth process distributions

. n= ; /32 E[Cp]/Cp S.D. (Cp)/Cp

10 0.0 3.0 1.094 0.2974.0 1.128 0.3475.0 1.161 O.3R9

:1:0.4 3.0 1.094 0.3004.0 1.128 0.3495.0 1.161 0.392

30 0.0 3.0 1.027 0.1404.0 1.039 0.1695.0 1.051 0.193

:1:0.4 3.0 1.027 0.1414.0 1.039 0.1705.0 1.051 0.194

Page 79: 041254380 x Capability Indices

(1992)) shows that even when /31is not zero, E[CpuJ is verynearly proportional to CpuoOf course, if /31is large, it wil1notbe possible to represent the distribution in the Edgeworthform (4.2). Table 4.3 exhibits reassuring evidence of robust-ness of Cp to mild skewness.

4.2.3 Misccllancous

Price and Price (1992) present values, estimated by simula-tion, of the expected values of Cp and CPk>from the foHowingprocess distributions, numbered [lJ to [13J

II. for,

I"~

.I

r'II1\IIII,

III;'11

I'll111

WI

(II

h ~ry~Ol

itlhI'll-It

I~,

~n -

II, . gIhl

Effects of non-normality

Distribution number and name[I] Normal (50,1)[2J Uniform (48.268,51.732)[3J 10 x Beta (4.4375,13.3125)+47.5[4J 10 x Beta (13.3125,4.4375)+42.5[5J Gamma (9,3)+47[6] Gamma (4,2)+48[7J (xl. 5) Gamma (2.25,1.5)+ 48.5[8J (Exponential) Gamma (1,1)+49[9] Gamma (0.75,0.867)+49.1340

[10] Gamma (0.5,0.707)+49.2929[l1J Gamma (0.4,0.6325)+ 49.3675[12J Gamma (0.3,0.5477)+ 49.4523[13J Gamma (0.25,0.5)+49.5

149

Skewness (J f31 )000.506

-0.5060.6671.0001.3332.0062.3092.8283.1633.6514.000

In all cases; the expected value of the process distribution is50, and the standard deviation is 1.

The values shown in the tables below are based on com-puter output kindly provided by Drs Barbara and Kelly Priceof Wayne State University in 1992.

The specification limits are shown in these tables, togetherwith the appropriate values of Cp and Cpk' The symbols (M),(L) and (R) indicate that T =--=, >, < 1(LSL+ USL) respective-ly. In all cases, T = 50.

LSL USL Cp Cpk(M) 48.5 51.5 0.5 0.5(L) 45.5 51.5 0.5 1.0(R) 48.5 54.5 0.5 1.0(M). 47.0 53.0 1.0 1.0

LSL USL Cp Cpk(L) 44.0 53.0 1.0 1.5(R) 47.0 56.0 1.0 1.5(M) 44.0 56.0 2.0 2.0(L) 41.0 56.0 2.0 2.5(R) 44.0 59.0 2.0 2.5

The value of E[CpjCp] does not depend on ~, so we do 11Otneed to distinguish (M), (L) and (R), nor does it depend on CpoHence the simple Table 4.5 suffices.In this table, the distribu-

tions are arranged in increasing order of I~I.

148 Process capability indices under nOI1-normality-

Table 4.4 Expected value and standard deviation of Cpu forEdgeworth process distributions

JplE[Cpu] S.D.(Cpu)

n P2 Cpu= 1.0 Cpu= 1.5 Cpu= 1.0 Cpu= 1.510 - 0.4 3.0 1.086 1.632 0.287 0.426

4.0 1.119 1.682 0.337 0.5025.0 1.153 1.733 0:381 0.569

0.0 3.0 1.094 1.641 0.320 0.4334.0 1.128 1.592 0.366 0.5335.0 1.161 1.742 0.407 0.596

0.4 3.0 1.100 1.647 0.345 0.4864.0 1.134 1.697 0.388 0.5545.0 1.107 1.748 0.427 0.615

30 -0.4 3.0 1.024 1.538 0.136 0.2014.0 1.036 1.556 0.166 0.2465.0 1.048 1.574 0.190 0.283

0.0 3.0 1.027 1.540 0.154 0.2204.0 1.039 1.55H O.IHO 0.2615.0 1.051 1.576 0.203 0.297

0.4 3.0 1.029 1.542 0.168 0.2354.0 1.041 1.560 0.193 0.2745.0 1.053 1.578 0.214 0.308

Page 80: 041254380 x Capability Indices

These values do not depend on '/: The values for the two beta distributionshave been combined, as they should be the same.

Comparing the estimates for the normal distribution ([lJ)with the correct values (1.0942,1.0268, 1.0077 for 11=10,30, 100 respectively) we see that the estimates are in excessby about 2% for n= 10,t% for n= 30or 100. 00 0

Sampling variation is also evident in the progression ofvalues for the nine gamma distributions, especially in the

0 n = 100 values for distributions [6J and [7J, and [9J and [10].As skewness increases, so does E[CpjCp], reaching quiteremarkable values for high values of y1J;. These how-eyer, correspond to quite remarkable (and, we hope andsurmise, rare) forms of process distributions, having evenhigher values of y/f3; than does the exponeI2:tialdistribution.For moderate skewness, the values of E[CpjCpJ are quiteclose to those of the normal distribution (d. Table 4.3).

Table 4.6 presents values of E[CpkJ estimated from simula-tion for a number of cases, selected from those presented byPrice and Price (1992). 0

We again note the extrem€ly high positive bias - this time

O'

j

-Hn -HI'

al~HIiJ

a~.fn

8h

till

PI',III'

111

10 IOns

11.1

I.hl

st~11J)

(/J ~==

llilficss

Ya,

0111.ofIllhe

MlllfOJ.III,lIeIh,~w-1I\\.,ndhll~cn

ton.fllte

Ua-

Thtfby

ll'lltnl1l4ll11e

Construction of robust PCls 151

of Cpkas an estimator of Cpk- fordistribution[13] whenn= 10,and only relatively smaller biases when n=30 and n= 100.

For the exponential distribution [8J there is a quite sub-stantial positive bias in Cpk' The bias is larger when ( isgreater. than l(LSL+ USL) - case (L) - than when it issmaller"-~ case oCR).The greater among these biases forexponential, in Table 4.6, are of order 25-35% (when n= 10),falling to 21-5% when n= 100.

As for Cp, the results for the extreme distribution [13J are I

sharply discrepant from those for normal, and mildly skewdistributions. Of course, [13J is included only to exhibit thepossibility of such remarkable biases, not to imply that theyare al1ything like everyday occurrences.

Coming to the variability of the estimators Cp and Cpk>wenote that the standard deviation of Cp might be expectedto be approximately proportional to J({32-1) wheref32(=J14/(J4)is the kurtosis shape factor (see section 1.1) forthe process distribution. We would therefore expect lowerstandard deviations for uniform process distributions(f32 = 1.8) than for normal (f32 =3) and higher standard devi-ations when f32> 3 (e.g. for the exponential [8J with f32=9).Values of J(f32 -1) are included in Table 4.7, and theestimated standard deviations support these conjectures.

It should be realized that Tables 4.5-7, being based onsimulations, can give only a broad global picture of variationin bias and standard de:viation of Cp and Cpkwith changes inthe shape of the process distribution. More precise valuesawait completion of relevant mathematical analyses which wehope interested readers might undertake. 0'

4.3 CONSTRUCTION OF ROBUST PCls

The PCls described in this section are not completely dis-tribution-free, but are intended to reduce the effects of non-normality.

150 Process capahility indices under non-normality

Table 4.5 Simulated values of E[Cp/CpJ

Distribution n= 10 n=30 n= 100

[lJ (normal) 1.1183 1.0318 1.0128[2J (Uniform) ' 1.0420 1.0070 1.0017[3J

(Beta)}

1.1171 1.0377 1.0137[4J (Beta)[5J (Gamma) 1.1044 1.027 1.0091[6J (Gamma) 1.1155 ' 1.0371 1.0143 0

[7J (Gamma) 1.1527 1.04"/4 1.0146[8J (Gamma) 1.2714 1.0801 1.0242[9J (Gamma) 1.3478 1.1155 1.0449[10] (Gamma) 1.5795 1.1715 1.0449[11J (Gamma) 1.6220 1.2051 1.0664[12J (Gamma) 1.8792 1.2595 1.0850[13] (Gamma) 2.2152 1.2869 1.0966

Page 81: 041254380 x Capability Indices

-~- -~~~- ~ --- ,--_.

- ----[13] 1.226 1.007

~. ~,,"~;.~- "" JijJ"iIiio """'" ~ ..-..; - ""'1" ;:' .::c.-"" '!a... ~-~~ "" ,j

z~

---

2/3 -

2.0 1

4/5

*Notes: -

(a) When min(~-LSL, USL-Q/d=l, we have ~=1(LSL+USL1 so only (M) applies.(b) Since [3] and [4] are mirror images, results can be merged as shown.

(c) For symmetrical-distributions (normal [1] and uniform [2]), the L and R values should be the same andso they have been.averaged.

-"'-'

iI!i

--=" '-' .J

i

Table 4.6 Values of E[Cpk] from simulation (Price and Price (1992»

Cpkmin(-LSL, USL-' Process n=lO n=30 n= 100

ddistribution*

0.5 1 [1] 0.464 0.468 0.480

[2] 0.424 0.453 0.474

[3][4] 0.464 0.471 0.481

[7] 0.480 0.474 0.480

[8] - 0.527 0.487 0.485.

[13] 0.904 0.582 0.519

1/2 [l]LR 0.559 0.516 0.506

[2]LR 0.521 . 0.504 0.501

[3]L[ 4]R/[3]R[ 4]L 0.569/0.548 0.525/0.509 0.513/0.504[7] L/[7] R 0.597/0.555 0.528/0.519 0.510/0.504[8]L/[8]R 0.671/0.600 0.548/0.532 0.515/0.509[13]L/[13]R 1.259/0.956 0.670/0.617 0.557/0.540

1.0 1 [1] 1.023 0.985 0.986

[2] 0.945 0.957 0.975

[3] [4] 1.022 0.989 0.988

[7] 1.057 . 0.998 0.987

[8] '1.163 1.028 . 0.997-

[13] 2.012 1.226 1.067

- - - - -, .-- . _. ..-- ---.

[l]LR - H18 1.032 1.013[2]LR 1.042 1.007 1.002[3]L[ 4]R/[3]R[ 4]L 1.129/1.111 1.043/1.032 1.017/1.011[7]L/[7]R 1.174/1.132 1.052/1.043 1.017/1.012[8]L/[8]R 1.307/1236 1.088/1.072 1.027/1.021[13]L/[13]R 2.337/2.064 1.J13/1.261 1.105/1.088[1] 2.141 2.016 1.999[2] 1.987 1.964 1.976[3][4] 2139 2.027 1.998[7J 2209 2.045 2.002[8J 2.435 2.108 2.021[13J 4.227 2.513 2.164[lJLR 2.237 2.064 2.026[2JLR 2.084 2.014 2.003[3JL[ 4JR/[3JR[ 4JL 2.245/2.233 2.081/2.070 2.030/2.024[7] L/[7J R 2.326/2.284 2.099/2.090 2.032/2.026[8JL/[8JR 2.578/2.507 2.168/2.153 2.052/2.046[13JL/[13JR 4.582/4279 2.600/2.548 2.201/2.185

Page 82: 041254380 x Capability Indices

----

Table 4.7 Estimates of S.D. (Cp) and S.D. (Cpd from simulation (Price and Price (1992»

Distribution (Jh-1)! sgng-1(LSL+USL)} n=30 n=l00S.D.(CpfCp) S.D.(Cpk) S.D.(CpfCp) S.D.(CpJ

Normal [1J . 1.414 O(M) }0148

{

0.148

}0075

{

0.077

}1(L), -l(R) . 0.160 . 0.082

Uniform [2J 0.894 O(M) }0089 SO.088

}0045

{

0.047

}l(L), -l(R) . . lO.110 .' 0.056

xi.s [7J 2.160 O(M)

} {

0.192

} {

0.109

}l(L) 0,199 0.245 0.111 0.137-l(R) 0.~66 0.092

Exponential [8J 2.828 O(M)

} {

0.258

} {

0.139

}l(L) 0.273 0.237 0.142 0.169l(R) 0.255' 0.119

Values for S.D.(Cpi;) correspond to process distributions with Cpk= 1.

{

-I for h<O

}sgn(h)= 0 for h=O

,... I for h> 0

--. --

, . Table 4.8 0.135% and 99..8.65D/. poiD1s ol: ~ p~ ~ ~ ~~ ~ --;- ,>' u

l - - - - ~'~F ~'- - - ------

Table 4.8 0.135% and 99.865% points of standardized Pearson curves with positive skewness (vip 1 > 0). Ifv7f;. <0, interchange 0.135% and 99.865% points and reverse signs. {Clements (1989»

Ji;/32

1.8

2.2

2.6

3.0

3.4

3.8

4.2

4.6

5.0

0.0

-1.7271.727

-2.2102.210

- 3.0003.000

-3.0003.000

- 3.2613.261

- 3.4583.458

- 3.6113.611

-3.7313.731

- 3.8283.828

0.2

-1.4961.871

- 1.9122.400

- 2.5352.869

- 26893.224

- 2.9523.484

-3.1183.678

-3.2183.724

- 3.2823.942

- 3.3254.034

0.4

-1.2301.896

-1.5552.454

-1.9302.969

- 2.2893.358

- 2.5893.639

-2.8213.844

-2.9833.997

- 3.0924.115

-3.1674.208

0.6

-0.9751.803

-1.2122.349

-1.4962.926

-1.817 -

3.385

- 2.1273.175

- 2.3963.951

-2.6164.124

-2.7874.253

-2.91§4.354

0.8

-0.7471.636

-0.9272.108

- 1.125- 2.699

-1.3563.259

-1.6193.681

- 1.8873.981

- 2.1324.194

- 2.3454.351

-02.5244.468

1.0

- 0.6921.822

-0.8412.314

- 1.0002.914

- 1.1783.468

-1.3813.883

-1.6024.177

-1.8214.386

- 2.0234.539

1.2

-0.6161.928

-0.7392.405

-0.8652.993

-1.0003.861

-1.1493.496

-1.3164.311

- 1.4944.532

1.4

-0.5311.960

-0.6342.398

-0.7362.945

-0.8403.529

-0.9504.015

- 1.0684.372

1.6 1.8

~0.533'2.322

-0.6172.798

- 0.7013.364

-0.7853.907

-0.5102.609

- 0.5803.095

For each (J Ph P2)combination, the upper rmv.contains-9:-!-3-5%,-puiub(O;T1fmillierower, 99:8'65°)'0'points (0.).

Page 83: 041254380 x Capability Indices

--~

,v

l

§1

88~:2~~ g d<""i<""i..f'l-;..0 '-'.~ R::;j'"1-<

1 I

MO-'<tV)

c.8 f6 ~~~~~OMM'<tV)0...

II

I I

r-OO'<tLn'b' ° ~"<I:~~~<:I:> M OMM'<tv)~I'"+HJ'

IOIOOO\Mv)'<t 1.0 00 1.0 -

VI N d<""i<""i'<iV'i~VI

I 1

M 0 00 M V)b V) V)00001.O-

~ - d<""i<""i'<iV'iI

'v'I

I

I

r-0 r- N V)'--' ("0. 1.0 0 00 \0 -I-< - ,....~ o '<tM'<t V)

i,

......cod

...c::1

0-...c::u::;j'"

<:1:>100......0

~:::I 1 \0

~>oi:i11)I-<

g 1:1

.0' .g'r i!).::: i!) V) 00 0\.- ~ 0\0\0\U d d d~ ~ II II II'<t ~ 0... 0... c...~ I-< I-< I-< I-<

~ ~ ~ ",c.8c.8c.8~ Q <::0.<::0.<:1:><:1:><:1:>

OOV)NV)ooNoo\O-d..f<""i..fV'i

00'<t-V)OV)oo\O-""<'<i<""i'<iV'i

MONO\O1"1000\0-""<v-i<""i'<iv-i

Construction of robust PCl s 159

Moment esti'mators for (4.17)and (4.18) are obtained hy.replacing (Jby an appropriate estimator.

At this point we reiterate the difference between Clements'method and the Johnson-Kotz-Pearn method. In the firstmethod there is an attempt to make a direct allowance for thevalues of the skewness and kurtosis coefficients, while thesecond method aims at getting limits which are insensitive tothese values. In the second method we no longer haveguaranteed equal tail probabilities, but we do not have toestimate .jlf;and {32which it may be difficult to achieve withaccuracy, because of bse of third and fourth sample moments,which are subject to large fluctuations. Both methods rely onthe assumption that the population distribution has a uni-modal shape close to a Pearson distribution for Clements'method,and more restrictively, close to a gamma distributionfor Johnson-;Kotz-Pearn method.

Another approach, also aimed at enhancing connectionbetween PCI values and expected proportions NC, tries to'correct' the PCI, so that the corrected value corresponds (atleast approximately) to what would be the value for a normalprocess distribution with the same expected proportion NC.Munechika (1986, 1992) utilized an approximate relationhet\vccncorresponding percentiles of standardized normaland Cham-Charlier (Edgeworth) distribution (see Johnsonand,Kotz (1970, p. 34)) to obtain an approximate relationshipbetween the process PCI and the corrected (equiv~Jent nor-mal) pel values. He appHed this only to the case where thereis only an upper specification limit (110LSL), with the CpkUindex (see equation (2.25b)).The approximation is quite goodfor gamma process distributions which are not too skew.

From the relationship (in an obvious notation) XGt~Ua+ i(U; -1)J7f; he obtained j(process index)~ 3(correctedindex) + i {(corrected index)2-1 } leading to (correctedindex~(1/3..JP~)[{j31 + 18(process index)~ + 9}1/2- 3J.

The inverse (also approximate) relationship UGt~XGt-i(x;: ~ 1)J7f; would give the somewhat simpler formula

I

,4

II

.III dII "IIL

,"

Page 84: 041254380 x Capability Indices

160 Process capahility indices under non-normality

(correctedindex)~ (processindex)- fa{9(process index)2~ 1~(This formula was not used in Munechika (1986, 1992).)

Of course, use of this correction requires a value for -Jf31'Unless this is well established, it is necessary to estimate itSuch estimation may well be subject to quite large samplingvariability. .

4.3.3 'Di,stribution-free' PCls

Chan et at. (19~H~)proposed the following method or Qbtain-ing 'distribution-free' PCI s. They note that the estimator,8, in the denominator of Cp is used solely to cstimtltcthe length (6(1)of the interval covering 99.73% of the valuesof X (on the twin assumptions of normality and perfectcentring, at ~=~(LSL + USL)). They propose using dis-tribution-free tolerance intervals, as defined, for example,in Guenther (1985) (not to be confused with Gunter (1989)1)to estimate this length. These tolerance intervals (widelyused in statistical methodology for the last 50 years) are.designed' to include at least 100f3% of a distribution withpreassigned probability 100(1-1J()%, for given f3 (usuallyclose to 1) and IJ((usually close to zero). In the PCI frame-work, a natural choice for f3 would be f3= 0.9973,with,perhaps, 1J(=0.05. Unfortunately construction of such inter-vals would require prohibitively large samples (of size 1000or more).

Chan et at. (1988) suggest that this difficulty can be over-come in the following way. Construction of tolerance intervalswith smaller values of f3(but still with IJ(=0.05) requires smallersamples (lessthan 300).They recommendtaking .

. f3=0.9546 and using! x (length of tolerance interval) inplace of 68, or

. f3=0.6826 and using 3 x (length of tolerance interval) inplace of 68.

-,I

I1

~

I

I

t.\

I.

I,

~

,.i

I

I

1

~..

~

Construction of robust PCI s 161

The bases for their choices are that for a normal distribution

. the interval (~--2(1,~+2(1) oflength 4(1contains 95.46% of..values, and

. the interval (~- (1,~+(1) of length 2(1 contains 68.26% ofvalues (and, of course, 6(1=1 x 4(1= 3 x 2(1).

It is here that we must part company with them, as it seemsun-reasonable to use relationships based on normal distributionsto estimate values which are supposed to be distribution-free!

4.3.4 Bootstrap methods

Franklin and Wasserman (t 991), together with Price andPrice (1992) should be regarded as the pioneers of applicationof bootstrap methodology in estimation of Cpk' The boot-strap method was introduced some twelve years ago (seeEfron, 1982) and has achieved remarkably rapid acceptanceamong statistical practitioners since then. (Over 600 paperson the bootstrap method were published in the period 1979,90!) It is not until very recently, however, that its applicationin the field of PCls has been developed.

. The bootstrap method is a technique whereby an estimate ofthe.distribution functionofa statisticbased on a samplesizen,say, is obtained from data in a random sample,of sizem (~n)say, by 're-sampling' samples of size n - with replacement -from these m valuesand calculatingthe correspondingvaluesof the statistic in question.Usually m= n,but thisneednot bethe case. Here is a brief formal description of the method.

Given a sample of size m with sample valuesXI,X2,"',X",we choose (with replacement) a random sample ([1], say).x[IJ1>"" X[IJII of size n, and calculate C(1JPk from this new'sample'.This is repeated many (g) timesand we obtain a setof values C[I]Pk,C[2]pko""C[9Jpk'which we regard as approxi-mating the distribution of Cpk in samples of size n - thisestimate is the bootstrap distribution. (The theoretical basis

Page 85: 041254380 x Capability Indices

162 Process capahility indices under non-normality

of this method is that we use the empirical cumulativedistribution from the first sample - assigning probability m-1

to each va:lue - as an approximation to the true CDF of X.)Practice has indicated that a minimum of 1000 bootstrapsamples are needed for a reliable cakulation of bootstrap -confidence intervals for Cpk'

According to Hall (1992)~ a leading expert on bootstrapmethodology - difficulties 'in applying the bootstrap toprocess capability indices is that these indices are ratios ofrandom variables, with a significant amount of variability inthe variable in the denominator'. (Similar situations exist inregard to estimation of a correlation coefficient, and of theratio of two expected values. The bootstrap performs quitepoorly in these two (better-known) contexts.)

Franklin and Wasserman (1991) distinguish three types ofbootstrap confidence intervals for Cpk, discussed below.

4.3.4a The 'standard' confidence interval

One thousand bbotstrap samples are obtained by re-samplingfrom the observed values XI, X 2"", X n (with replacement).The arithmetic mean C~k and standard deviation S*(Cpk) ofthe CpkSare calculated. The 100(1-a)% confidence interval~~~~ -

(C~k - ZI -a/2S*(Cpk)' C~k+Zl-a/2S *(Cpk)) (4.19)

where <1>(ZI-a/2)=1-a/2.

4.3.4b The percentile confidence interval

The 1000 CpkS are ordered as Cpk(1)~Cpd2)~...~Cpk(1000). The confidence interval is then (Cpk«500a»),Cpk«500(1-cx))), where < ) denotes 'nearcst integer to'.

itIJ

1

It,

Ii

I1,,1

MI-III

Flexible PCls 163

4.3.4c The bias-corrected percemile confidence interval

This is intended to produce a shorter confidence interval

by allowing for the skewness of the distribution of Cpk'

Guenther (1985) pointed out the possibility of doing this inthe general case, and Efron (1982) developed a methodapplicable in bootstrapping situations.

The first step is to locate the observed Cpkin the bootstrap'order statistics Cpk(l)~... ~ Cpk(lOOO).For example, if wehave Cpk = 1.42 from the original data, and among the,bootstrapped values we find

Cpk(365)= 1.41 and Cpk(366)= 1.43

then we estimate

A 365- Pr[C11k~ 1.41] as 1,000= 0.365= Po, say

We then calculate <1>-I(po)=zo (i.e. <1>(zo)=Po)'In ourexample, Zo= <1>- 1 (0.365) = - 0.345.

We next calculate

PI =<1>(2z0-Z1-a/2) and Pu = <1>(2zo +ZI-a/2)

(in our case, with a=0.05; ZI-a/2 =<1>-1(0.975)=1.960,PI= 11>(- 2.650)= 0.004; Pu= <1>(1.270)==0.898, and form theconfidence interval

(Cpd1000PI)' Cpk(1000pu))'

In our example, this is (Cpk(4),Cpk(898)).The rationale of this method is that since the observed Cpk

is not at the median of the bootstrap distribution (in our case,below ~he median) the confidence limits should be adjustedapproximately (in our case, lowered, because Zo<0). Evident-ly, the method can be applied to other PCls.

\ -I! tI '

:" i ,

""ru'

-IiI' ;

II i,

Page 86: 041254380 x Capability Indices

164 Process capability indices under non-normality

Schenkler (1985) has presented results casting doubt on theefficiency of the 'percentile" and 'bias-percentile' methodsabove. Hall (1992) suggests that the method of bootstrapiteration described by Hall and Martin (1988) might be moreuseful in this case. Hall, et al. (1989) describe an application ofthis method to estimation of correlation coefficients, but, tothe best of our knowledge, it has not as y~t been tested. forestimation of PCls.

4.4 FLEXIBLE PCls

Johnson et ai. (1992) have introduced a 'flexible' pct, takinginto account possible differences in variability above andbelow the target value, 1'. They define one-sided PCls

I

1 USL - l'

CUjkp= 3)2 {EX>T[(X -1')2]}!(4.20a)

and

1 l' - LSL

CLjkp = 3)2 {Ex<T[(X- 1')2]}!(4.20b)

where (as in section 3.5)

EX>T[(X - 1')2]=E[(X - 1')2IX> 1']Pr[X> 1'] (4.21a)

and

Ex< T[(X - 1')2]= E[(X - 1')21X< 1']Pr[X < 1'] (4.21b)

(It is assumed that Pr[X == T] =0). Note that the radon;Pr[X> 1'], Pr[X < 1') make allowance for how often positive'and negative deviations from l' occur. '.

The multiplier 1/(3)2), while the earlier PCls use 1. aris~sfrom the fact that for a symmetricaldistribution with variance

i

It.

I

~'

Flexible PCl s

(J2 and expected value l' we would have

165

~

EX>T[(X - 1')2J =EX<T[(X - 1')2J =1(J2

Finally they define

(4.22)

i.i

1 .

( USL- l' -

Cjkp=min (CUjkp, CLjkp)= 3)2 mID {EX>T[(X -1')2]}!'

1'-LSL

){EX<T[(X - 1')2]}!(4.23)

Note that if we have 1'=!(USL+LSL)=m, so that USL- 1'=1'-LSL=d, then

d

Cjkp= 3)2max([Ex>T[(X -1')2], EX<T[(X - 1'f]r!

(4.24)

I

f

4.4.1 Estimation of C]kp

A natural estimator of Cjkp is

~ 1.

(USL- l' 1'-LSL

)Cjkp = 3)2 mID (8 + In)! ' (8-In)!(4.25a)

where

8+= L (X/-1'f and 8_= L (X1-1')2Xi>T X,<T

Note that

. E[S +] =nE[(X .:..7')21X > 1']Pr[X> 1']so that S+In (notS+ I[number of XIs greater than T]) is a.n

Page 87: 041254380 x Capability Indices

166I,

Process capability indices under non-normality

unbiased estimator of (4.21a) (and S- jn) is an unbiasedestimator of (4.21b). The es.timator CjkP can be calculatedquite straightforwardly - the only special procedure neededis separate calculation of sums of squares for Xi> T andXi< T, as for Boyles' modified Cpm index, described insection 3.5.

Our analysis will be based on the very reasonable condi-tion LSL< T<USL. We can express CjkPas

A , d~ .

(

USL-T T-LSL

)Cjkp= ;;::; mill

(S )!'

("

)1

(3y 2)0' d ~ d~-='(fz (fz

(4.25 b)

where 0' is an arbitrary constant.To study the distribution of CjkPit will be convenient to

consider the statistic

A n

(d

)z A - Z - Z ' - Z

D= 18;;: Cjkp=max(alS+(f ,azS-O' )

where

(USL~ T

)-z

(T -LSL

)-Z

al= -- and a2=d d

Note that ai"!+ai"!=2, and

E[CjkP]= Ln8 (~Yrr E[J5-!r]

(4.26)

The distribution of jj will be, in general, quite complicated.In Appendix 4A we discuss a special case in which thedistribution of X is, indeed, normal with expected value Tand variance 0'2.Although this is not, in fact, an asymmetrical

Flexible PCls 167

distribution, consideration of this case can provide an initialpoint of reference. Later we will indicate ways in which theresults can be extended to somewhat broader situations -though not as broad as we would wish.

Table 4.10 presents :71Umericalvalues of

E[CjkPjCjkP] and S.D. (CjkPjCjkp)

(assuming normal process distribution) for several values of(USL-T)jd, and n=1O(5)50. Note that if (USL-T)jd=1,then, T = (LSL + USL)j2.

It is instructive to compare these values with similarquantities for CpkjCpk and CpmjCpnuin Kotz and Johnson(1992) and Pearn et al. (1992) respectively (see also Tables 2.4and 3J).

In general the estimator CjkPis biased. The bias is negativewhen T=1:(USL+ LSL) but increases as (USL- T)jd de-crea.ses.It is quite substantial when (USL- T)jd is as small as0.4. As might be expected the variance of CjkPdecreases as ,tincreases .- it increasesas (USL- T)jd decreases,as the targetvalue gets nearer to th6 upper specification limit.,The bias, alsb,decreases asrdncreases; fliiseffeci'Is particulariy noticeable forsnulller values of (USL - T)jd. For n ~ 25, and (USL - T)jd ~0.60 thc stability of both E and S.D. is noteworthy. I

Since the index Cjkp is intended to make allowance for'asymmetry in the distribution of the process characteristic Xit is of interest to learn something of the distribution of theestimator CjkP undersuchconditions.The analysisin Appen-dix 4A can be extended to certain kinds of asymmetry ofdistribution of X.

We note two ways in which this may be done.

1. If the population density function of X is

Hg(x; T, O'd+g( -x: - 'T,0'2)] (4.27)

III

'JIt.. ":PoIt,

'I

,I

:,1"

I

4I'II.

I

'11

II

1.1I

14'i'1 III!

1

t I l'

Page 88: 041254380 x Capability Indices

0\'"-

">.......vQ.,.<:,,):z.~~

k:

~

c.8......;:j

..0

.~......en

:.a

~8H0s::

<.!..~..c::II)

..c::...........0

~0p..II)

..c::......

o~

b

~~

~-~0~0

..c:: ..~

601°0P£6°0

~IroZ£6°0

Izro0£6'06zroL'l6°0

6£roPZ6°0

"Z~roZZ6'0

OLro616'0

86ro816°0

O~Z'OIZ6°0

00°1

_..~ -.~ -~-

h1\\~H

.8N

~I~ b

"--'"

-INI

'-y-..J0..X0

NI~~II

bh">f

~

z£ro£ooor

8£ro£ooor

~vroZOOOI

£~ro100'1

V9ro000°1

8Lro666°0

L6ro666°0

L'lZ'O

QOOol

£8z00800'1

06°0

hV~

>...0"0<U0CIjc.0I-.

H

.8

;>.v8CIj.s::

d'.8......

"..., -;:j~ ..0

..c:: 'i:......en~ :.a-- en\0 enN 0. 0

~ 8s:: 0..

0en ..c::CIj .....'" .....

0(~ 0

00 s::

CIjs:: 'c0 CIj

.~ ° >;:j ,-. 0

..0 ~..c::'i: .-; +->~ II.~°- O""'N"O':<b0--0

..c:: ~+->boS::b"-:I:<U-, "...,

'=: ~-

091"0 vLI"Oz£Oor 6£0'r

891"0 v81"0v£ool £VO"[

9Lro 961"09£0°1 8vOOl

L81"0 II Zoo6£0'1 v~O"1

66 ro !IZZ'O£vOOI Z900r

~rz'o o~Zoo

8Pool: ZLO'I

L£ZoO

p~oor

oaoo£90'1

Z££.oZ80'1

8aoo980°1

OZ£ooLOn

~6£'0£vn

'08"0"0 0~ <U s:: ~ o~ ..0

"E~;:j~ 8 ......0 +->.8 <U 0 :;::0 ~ --.E oS ~~:e~o ..0 ~;.::::: t> 0\ s:: CIj."",CIj <1>- CIj - 0;:j CIj ~ V s::0'0S::q:: CIj~ 0s::~ s:: ..c:: .......- ~ CIj"O""'; V 1)-- 8:;v§o s::~ 6\1:t:: 0 ~ b ~ ..20000:> +->

~ O\:I: ;> 2 h °:;: ~- .- ;> °-8- 8 ;:joo CI)

~ "0 -g H O'.S ~ ~o::CIj.8~"O:: "0"'" ". +-> 0

""'S::b::",0I)0~ I!)0 :: °- s:: 0 0 ~"'o~oX+-> +->

<I>~-O""'~OO ~

g»= ~~8 -g.~ ..0 ~ C ~ ~ e .S=-0 . s::o~'- g, ..0

:e2r::'0~0 81= ~ ~ °';::+->,...s 0<I>01)- ;:j ~ °- 0:.a oo-:e ° ~ ~ 0... ~ I-< ~

""'c;;~+->S::os:: CIj0 ~.~ 0..0 0 --0 s:: "'."0 o~ E o~ . C'i<I>_o-fl =;:j;:j-=,.goClj:es:::e -0

'0 ~ +-> ~ ,,~ s::~en CIj+->~"" CIjL. ~ 0 ~ .~ t:.~ --,, "0,,00 ;-.....-

':i--.NbI-b-

-I~I--

NNb+

N-b

-INII

Nb

-

C'i

----.----.---.-----.

8LI"0Ovo"r

061"0~POOI

VOZ'O

r~oOI

ZZZOO

6~0"1

VPZ"O

690°1

£a"o£80°1

'lI £"0pon

69£'0~£n

99poO£6n

.------

6LooIlVOOI

161"0~vo.t

LOZoO

Z~OOI

9Zz00

090°1

Z~zoO

ILOOI

L8Z'0

L8001

9££°0'lIn

'lIP"O£~n

~KoOZ£'l"I

6LI"0rvoOl

Z6rO~VO'I

LOZ.OZ~O'I

azoo0900y

~~Z'OZLO'Y

£6Z'0880'Y

Z~£oO~In

6VP'0£9n

1£9°0£9'l"I

6LI"0Iv(n

Z6rO~vooI

LOZoOZ~Ooy

8'lZ'0O9O'r

~~Z'O'liO'r

96Z00680°1

19£'0LIn

08v'069rI

Z£CO68Z'1

6LY'0ltO'r

--00N

~~-~Q~ CIj

~ 0 80 s:: I-<bJ)OOs::o- s::.- E-<.!..

0 ~gOE~eno~ <U00 >

I!) .~,",,~CIjL. 01)c.-Or' <U 0o-s::..e~CIj~~ C'! "0 .8-0 ~ s::E<U I-< CIj..o

0 o 'i:<u--Clj+->0..r-- 8 o~

~C'!~oo<u~S::d

~ <I><.!..s::+-> s::-o0 CIj.-

C'i '".P ~ 0-::_0 CIj"OI-<.S:: s::

~..2<O 8; »<U 0_0-:: I-<~

I-<en=+->- 0 s:: ~

~ 0 X I-<+->-0'- I!).~ ~. 8.~

~.8 CIj ~~ho~~

--N

bh~I><-.I

~--~I

-<-+---bh->f15;c:..

-----

6LYoO

IPOOI

Z61"0 z~ro~VO'I . ~VOOI

....

E:;'

I~ IMN- b

H

W~I

N Ih-I

t$: INb

w~1>(

'O"S

"]-'. os

.OO"S

"]

°O'S

"]

°O"S

"]

°ooS

"]

°O"S

O]

"O"S

']

°OS

O]

°O'S

O]

08°0 OCO 1)9°0 O~OO OV'O O£'O 0,'0 01"0 =p/lL -7Sn)

:L=~ u::>qh\'d'lf.:)/(d'lf;;)°O'S="O'S ptrn d'lf.:?/[d'lf.:)JH=HJOS::>°I!.(LfH:t31q8.L

LOZ'O

zso'r

8Zz00090'Y

%Z'OZLOor

L6z006800r

89£'0Lln

IJ~'OUn

OL8'0vln

LOZ'OZ~ooY

8Zz000900y

9~ZoO'liO'Y

86z00680°1

vL£oOLln

9S~'0~Ln

6~n~vn

oS000.....0s::

<I>CIj

>:"0oE::.....000..en<UI-<

"0

s::CIj

:.:I

""0:><

"0s::CIj

:.:><

......°-.....en0 °H-<t:CIj"<f"

~ x.

~:.as::

s:: 00 0..> 0..

°EO -<t:

---:"=i!"""

l- --

~P

Ov

~Z

OZ

~Y

01=u

Page 89: 041254380 x Capability Indices

170 Proccss cafJahility indices /llIdl'/' nOll-normality

4.5 ASYMPTOTIC PROPERTIES.

There are some large-sample properties of PCls which applyto a wide range of process distributions and so contribute toour knowledge of behaviour of PCls under non-normalconditions.' Utilization of some of these properties calls forknowledge of the shape factors A.l and ..14,and the need toestimate the values of these parameters can introduce sub-stantialerrors, as we have already noted. Nevertheless, asym-ptotic properties, used with clear understanding of theirlimitations, can provide valuable insight into the nature ofindices.

Large-scaleapproximations to the distributions of Cp, Cpk

and Cpm have been studied by Chan et ai. (1990).Since

C~~ = (~)8

the asymptotic distribution of Cp can easily be approached byconsidering the asymptotic distribution of 8.

Chan et ai. (1990) provided a rigorous proof that~(Cp - Cp) has an asymptotic normal distribution withjmean zero and varianceII

O'~= t(/32 -1) C~ (4.29)

If /32=3 (as for normal process distribution), O'~=~C~.For Cpk, Chan et ai.jI990) found that if ~¥:m the asympto-

tic distribution of vi n(Cpk- Cpk) is normal with expectedvalue zero and variance

2 1 <p ro 1 2

O'pk= '9 + 2Y PI Cpk+ '4(fJ2-1)Cpk (4 .10)

where <p= 1 if ~> m, <p = ,-I if ~< m.If the process distribution is normal, then .jji;=.0 and

I

.~

'f

Asymptotic properties 171

/32= 3, so that

2 .1 .l C 20'pk = 9 + 2 pk (4.31)

In the very special case when ~= m, the limiting distribu-tion of ~(Cpk - Cpd is that of

- HI U1/+!Cpk(,82 -1)!U 2} (4.32)III

(note that Cp= Cpk in this case) where Uland U2 arestandardized bivariate normal variables, with correlationcoefficient .j7J~/(,82-1)t. If the process distribution is normalthe distribution is that of

j

~

-!{IU11+ ~C kU2J

l3 J2 P

(4.33)

where Uland U2 are independent standardized normalvariables.

For Cpm, we have ..j';;(Cpm- Cpm)asymptotically normallydistributed with expected value zero and variance

)2 -

(~-m

) 1(/3 -1)(~-m +jp, 7 +:[' C;m (4.34)

0'

)2

}

2

6;m= { 1+ (~ 6 m .

If the process distribution is normal, .j7f; = 0 and /32= 3, hence

(~-m)

2

0'2 -- -a +!pm - ')

{I+e:m)';' C;m

(4.35)

Page 90: 041254380 x Capability Indices

172 Process capability indices under non-normality

and if, also, ~= m

".2 _1C2" pm - Z pm (4.36)

(In this case, also, Cp=Cpm)'

APPENDIX 4.A

As noted in the text, it is assumed that the distribution ofX is normal, with expected value T and standard devia-tion (f.

With the stated assumptions, we know that:

1. the distribution of (Xi- T)2 is that of XT(f2;2. this is also the conditional distribution of (Xc- T)z,

whether Xi>T or Xi<T; .

3. the number, K, of Xs which exceed T has a binomialdistribution with parameters n, 1-- denoted Bin(n, handhence

4. given K, the conditional distributions of 8 + (f -Z and8 - (j-z are those of xi:, X;'-K respectively, and 8+ (f-z and8 - (f- Z are mutually independent; and also

5. the distribution of H = 8 + /(8 + + 8-) is that ofxV(xi+ X;-K) which is Beta(1-K,l(n-K)) so that thedensity function of H is

, fH(h)={B(lK,l(n-K))}-lhlK-l(1-h)l(n-KJ-l O<h<1

(4.37)

, for K=I,2,...,n-l, and 11 and 8++S- arc mutuallyI independent; and further

6. the conditional distributions of 8 + (f - Z and 8 - (f - z, givenK and H, are those of HX;and (1- H)X;, respectively.

, ,I ~

I IIII"

I

I'

I I

[I I

\I

t'~

, -

III

I!.I

.'III

.11

Appendix 4.A

From (4.26)

{

S '-2

~ ale +(fD=

az8- (f-Z

f 8 + azor - > -81 al

8+ azfor - < -8- al

So 15 is distribu,ted as

{

Z t' H az . azQ1HXn 10r(1 H»

-l.e. H>-- al al+aZ

( Z t' azaz 1-H)x1l lor H<-al + az

The overall distribution of 15can be represented as

fJ "'{

alH . for H>h}

zaz(1-II) for H < b XII

1\ BetaO:K,1-(n-K» 1\ Bin(n,1)Ii K

173

(4.38)

(4.39)

where the symbol /\ y means 'mixed with respect to Y' havingthe distribution that follows, and h=az/(al +a2).

Without loss of generality, we assume that I

USL- T:::;T - LSL. Then, for a symmetrical distributionwith mean T and variance (fz,

C -~ USL-T=~~jkp - 3(f d 3(f r;;

y al

.and

1 1C+ ;:--=2

Val V Qz

(4.40)

Page 91: 041254380 x Capability Indices

174 Process capability indices under non-normality

Hence (from (4.26) and 4.40))

CjkP = (na1)! 15-!Cjkp 2

(4.41)

and

E[(~~::YJ ~ (n~1yr E[15~!r](4.42)

Now, from (4.37), (4.38) and (4.39), noting that

E[(X;)-!r] = {2!rr(tn)}-1r(t(n - r))

~-!r - r(t(n-r)) ~[ -!r ,,-1 (~)E[D ]- 2\rnin) 2" az + k~l 8(1k, l(n-k))

x {ai\rB1-b(!(n-k), t(k-r))

+ a2!rBb(tk,!(n-k-r))} +al!rJ

whence

E[(

CjkP )r

] = n!r:l~(n;r)) [(a1

)]!r+ 1

Cjkp 2 r(zn) az

,,-I (~){

II.

+ k~l B(tk..Hn-k» 81-1,(z(n-k)02(k-r))

. + (::)!rBb(1k,!(n-k-r))}](4.43)

i"III

\

I

II

I

II1

1

I

I

-

II

.1.

r

Bibliography 175

where

I .i,,

I"Bv(Ul,U2)=JoyUI-I(1--y)"2-ldy

and B(UloUZ)=B1(1l1,llz) (see section 1.6).In particular

E[,

CjkP

]= rO:(n.-1»~ [(at )\ 1

Cjkp 2"+1r(-!n) az +

,,-1 (~){+ k~l B(tk, t(n-k)) B1-b(t(n-k),1(k-1))

+ (::yr Bb(1k,1(n-k-r))}]

BIBLIOGRAPHY

Balitskaya, B.O. and Zolotuhina, L.A. (1988) On the representa-tion of a density by an Edgeworth series, Biometrika, 75, 185-187.

B,\rnard, G.A. (1989) Sophisticated theory and practice in qualityimprovement, Phil. Trans. R. Soc., London A327, 581-9.

Boyles, R.A. (1991).The Taguchi capability index, J. Qual. Technol.,23, 17-26. .

Chan, L.K., Cheng, S.W. and Spiring, FA (1988).The robustness ofprocess capability index Cp to departures from normality. InStatistical Theory and Data Analysis, II (K. Matusita, ed.), North-Holland, Amsterdam, 223-9.

Chan, L.K., Xiong, Z. and Zhang, D. (1990) On the asymptoticdistributions of some process capability indices, Commun.Statist. - Theor. Meth., 19, 1-18. .

Clements, J.A. (1989). Process capability calculations for non-normal calculations,Qual.Progress,22(2),49-55.

Page 92: 041254380 x Capability Indices

176 Process capahility indices under non-norl'l'!ality

Efron,B.(1982)The Jackknife, the Bootstrap and Other Re-samplingPlans, SIAM, CBMS-NSF Monograph, 38, SIAM: Philadelphia,Pennsylvania. '. .'

English,J.R. and Taylor, G.D. (1990)ProcessCapabilityAnalysis-A Robustness Study, MS, Dept. Industr. Eng., University ofArkansas,Fayetteville. .

Fechner, G.T. (1897) Kollektivmasslehre, Engelmann, Leipzig.Franklin, L.A. and Wasserman, G. (1992). BQotstrap confidence

interval estimates of Cpk: An Introduction, Commun. Statist.-Simul. Comp.,20, 231-44'

Gruska, G.F., Mirkhani, K.' and Lamberson, L.R. (1989) Non-normal Data Analysis, Applied Computer Solutions, Inc.; St.Clare Shores, Michigan.

Guenther, W.H. (1985). Two-sided distribution-free tolerance inter-vals and accompanying sample size problems, J. Qual. Teclmol.,17, 40-3. .

Gunst, R.F. and Webster, J.T. (1978). Density functions of the bi-variate chi-squared distribution, J. Statist. Compo Simul., 2, 275"';88.

Gunter, B.H. (1989) The use and abuse of Cpk' 2/3, Qual. Progress,22(3), 108-109; (5), 79-80. .

Hall, P. (1992) Private communication,Hall, P. and Martin, M.A. (1988) On bootstrap resampling and

iteration, Biometrika, 75, 661-71. .Hall, P., Martin, M.A. and Schucany, W.R. (1989) Better non-

parametric bootstrap confidence intervals for the correlationcoefficient,J. Statist. CompoSimul,33, 161-72. .

Hsiang, T.e. and Taguchi, G. (1985)A Tutorial on Quality Controland Assurance - The Taguchi Methods, ASA Annual Meeting,

, Las Vegas,Nevada.Hung, K. and Hagen D. (1992) Statistical Computation Using

GAUSS: Examples in Process Capability Research, Tech. Rep., Western Washington University, Bellingham.Johnson, M. (1992)Statisticssimplified,Qual.Progress,25(1),10-11.Johnson. N.L., and Korz, S. (1970) Distributions in Statistics:

Continuous Univariate Distributions, John Wiley, New York.Johnson, N.L., Kotz, S. and Pearn, W.L. (1992)Flexible process

capability indices(Submittedfor publication).Kane, V.E. (1986)Process capability indices,J. Qual.Technol.,18,

41-52.

II.

I

I .

Bibliography 177

Kocherlakota, S., Kocherlakota, K. and Kirmani, S.N.U.A. (1992)Process capability indices under non-normality. Internat. J.Math. Statist. 1.

Kotz, S. and Johnson, N.L. (1993) Process capability indices fornon-normal populations, Internat. J. Math. Statist. 2.

Marcucci, M.a. and Beazley, C.e. (1988) Capability indices: Pro-cess .performance measures, Trans. ASQC Tech. Conf., Dallas,

. Texas, 516-22.McCby, P.F. (1991) Using performance indexes to monitor produc-

tion processes,Qual.Progress,24(2),49-55.Munechika, M. (1986) Evaluation of process capability for skew

distributions, 30th EOQC Conf. Stockholm. Sweden.Munechika, M. (1992) Studies on process capability in matching

processes, Mem. School Set. Eng., Waseda Univ. (Japan), 56,109~124.

Pearn, W.L. and Kotz, S. (1992) Application of Clements' methodfor calculation second and third generation PCls from non-normal Pearsonian populations. (Submitted for publication).

Pearn, W.L., Kotz, S. and Johnson, N.L. (1992) Distributional andinferentialproperties of process capability indices,J. Qual.Tech-1'101.24, 216-31.

Pearson, E.S. and Tukey, J.W. (1965) Approximate means andstandard deviations based on differences between percentagepoints of frequency curves, Biometrika, 52, 533-46.

Price, B. and Price, K. (1992) Sampling variability in capabilityindices, Tech. Rep. Wayne State University, Detroit, Michigan.

Rudo1t\~E. and Holfman, L. (1990) Bicomponare Verteilung - eineerweiterte asymmetrische form der Gaul3schen Normalverteilung,Textiltechnik,40, 49-500. ..

Schenker, N. (1985) Qualms about bootstrap confidence intervals,J. Amer. Statist. Assoc., 80, 360-1. .'

Subrahmaniam, K. (1966, 1968a, 1968b). Some contributions to thetheqry of non-normality, I; II; III. Sankhya, 28A, 389-406; 30A,411...:32;30B, 383-408.

\

l.1

Page 93: 041254380 x Capability Indices

I

!I

;1"

I~IIIIill"

,.Ii

L

5

Multivariate process capabilityindices

5.1 INTRODUCTION

Fr~quently - indeed usually - manufactured items needvalues of several different characteristics for adequate descrip-tion of their quality. Each of a number of these characteristicsmust satisfy certain specifications. The assessed quality of theproduct depends, inter alia, on the combined effectof thesecharacteristics, rather than on their individual values. Withmodern monitoring devices, simultaneous recording of sev-eral characteristics is becoming more feasible, and the utiliz-ationof such measurements is an important issue. Multivari-ate control methods (see All (1985) for a comprehensivereview), although originating with the work of Hotelling(1947), have only recently become an active and fruitful fieldor research.

The use of PCls ih connection with multivariate measure-ments is hedged arohnd with even more cautions and draw-backs than is the case for univariate measurements. IIiparticular, the intrinsic difficulties arising from use of a singleindex as a quality measure are increased when the singleindex has to summarize measurements on several character~

istics rather than just one. In fact, most of the multivariatePCls which have been proposed, and will be discussed in thischapter, can be best understood as being univariate PCls,

\III

Page 94: 041254380 x Capability Indices

180 Multivariate process capability indices

based on a particular function of the variable.s representingthe characteristics. Further, so far as we are aware, multivari-ate PCls are, as yet, used' very rarely (if at all) in manyindustries. '

The contents of this chapter should, therefore, be regardedmainly as theoretical background, indicating some interestingpossibilities, and only an initial guide to practice. Neverthe-less, we believe that study of this chapter will not prove to bea barren exercise. The reader should gain insight into thenature and consequences 'of multiple variation, and also tounderstand pitfalls in the simultanepus use of separate PCls,one for each measured variate.

5.2 CONSTRUCTION OF MULTIVARIATE PCls

Just as in the univariate case, we ne6d to consider theinherent structure of variation of the measured character-istics - but, this time, not only variances but also correlationsneed to be taken into account. Deviations from target vector(T) also need to be considered.

The structure of variation has to be related to the specifi-cation region (R) for the measured variates Xl"", X v' Inprinciple this region might be of any form - later we willdiscuss a favourite theoretical form - though in practice it isusually in the form of a rectangular paralellopiped(a v-dimensional 'box').defined by.

LSL/~X/~USL/ i=1,...,v (5.1)

If the specification region is of this form, Chan et al. (1990)suggest using the product of the v univariate Cpmvalues as amultivariate PCI. A moments' reflection however, shows thatapart from the serious defects of the univariate Cpm,describedin Chapter 3, this could lead to absurd situations, even if thev measured variates are mutually independent. The reason for

"

;1

JI

ill

']

iI'1 'I

I

"1

i II .

I I~

j I

:~

1

I'

I

i

I

I

III

Construction of multivariate PCl s 181

this is that a very bad (i.e. small) Cpmvalue for one variatecan be compensated by sufficiently large Cpmvalues for theother values, giving an unduly favourable value for themultivariate PCI. (If one component is only rarely withinspecification limits, it is small consolation if all the others are

never nonconforming!) A more promising line of attack maybe possible using results of Kocherlakota and Kocherlakota

(1991), .who have derived the joint distribution of Cpmscalculated for two characteristics, with a joint bivariate nor-mal distribution. We will return to this work in section 5.5.

As with univariate PCls, there are two possibe main linesof approach - one based on expected proportion of NC items,the otper based primarily on loss functions. Both are based

on assumptions: about distributional form in the first ap-proach; and about fo~'mof loss function in the second. Thesemore or less arbitrary assumptions are, of necessity, moreextensiv,ein the multivariate case, just because there are morevariates involved. Note that the approaches of Lam and'LittIg (1992) and Wierda (1992a) (see chapter 2, section 2) canbe extended straightforwardlyto multivariate situations. I

Corresponding to the assumption of normality in theunivariate case, we have, in the multivariate case, the assump-tion of multinormality, with

E[XiJ = ~/var(Xd= at

corr(Xj,Xr)=p// i, if= 1,..., v

i.e. expected value vector ~=(~1'~2""'~v) and variance-covariance matrix V0=(p/i'Q"/al) with PII =1 for all i. Thismeans that the joint PDF of,X=(X b ... , X v) is

fx(x)=(2n)-v/2IV 0I-~exp{ -!(x -~)'Vo l(X-~)} (5.2)

(see section 1.8.)

Il' I

I.

'.11m

IIt

11

Page 95: 041254380 x Capability Indices

182 Multivariate process capability indices

If a loss-,function approach is employed, a natural general-ization of the univariate ~oss function k(X - T)2 is thequadratic form

L(X)=(X-T)'A -l(X-T) (5.3)

with a specification region L(X)::::;C2. (A- 1 is called.. thegenerating matrix of this quadratic form). T. Johnson (1992)interprets C2 as a maximum 'worth', attained when X = T -i.e. all characteristics attain their respective target values -and L(X) is a loss of 'worth', so that .zero worth is reached atthe boundary of R.

Note that A does not have any necessary relationship withthe vari~nce covariance matrix Vo. It is, indeed, very unlikelythat. A would be identical with V0, (though tempting fortheoreticians to explore what would be the consequences ifthis were the case, as we will see later).

In the following se<:lions we describe ~I few proposedmultivariate PCls and provide some critical assessment oftheir properties. ' ,

Before conclu'ding this section however, we stress that anyinClexof process capability, based on multivariate character-istics, that is a single number, has an even higher risk ofbeing misused and misinterpreted than is the case forunivariate PCls. The time-honoured measures and tech-'niques of classical statistical multivariate analysis - e.g. Ho-,teiling's T2, Mahalanobis' distance, principal componentanalysis - provide ways of obtaining more detailed assess-ment of process variation. It would be inefficient, not toutilize appropriate forms of these well-established measuresand techniques. Reduction to a single statistic, howeveringeniously constructed, is equivalent to replacing a multi-variate problem by a univariate one, with attendant loss ofinformation.

However, if a process is stable, it is much easier to monitora single index than, for example to use many mean and range

Ii;v,

Multivariate analogs of Cp 183

III

I

control charts. Nevertheless, a vector-valued multivariate

PCI has been proposed. This will be discussed, briefly, insection 5.6.

5.3 MULTIVARIATE ANALOGS OF Cp

If we accept the assumption of multinormality (5.2), it wouldbe natural to use, as a basis for construction of PCls, thequadratic form

w =(X-~)Vo l(X-~) (5.4)

The statistic W would have a (central) chi-squared dis-

tribution with v degrees of freedom. If, also (see remarks Inear the end of section 5.2), the specification region R were of Iform

2 2W::::;XV,0.9973=Cv

the expected proportion of NC product would be 0.27%.We also note that it has been suggested that an estimate (p,

say) of the expected proportion of NC items. based onobsetved data, and exploiting the assumed form of processdistribution (usually multinormality), might be, itself, used asa PCI. We have already noted this suggestion for the univari-ate case, in section 2.1 It is important to realize that thedependence on correct form of process distribution is evenheavier in the multivariate than in the univariate case. Littiget al. (1992) suggest using $-1(1(p+l)) as a PCI (as in theunivariate case). See also end of section 5.5.

Recall that the univariate index Cp was defined as

length of specification interval

Cp = 6 x (standard deviation of X)(5.5)

,

:11III

I

" III ,Il I I

/1'

:;1 j

I

11

111

11'1

III

.' tll

i "'1

I

Page 96: 041254380 x Capability Indices

184 Multivariate process capability indices

the factor 6 being used because, if Cp =1 and variation isnormal, it is just possible- by making ~:;::t(USL + LSL)- tohave the expected proportion of NC product as small as0.27%. Taam et al. (1991), regarding the denominator in (5.5)as 'length of central interval containing 99.73% of values ofX', propose the natural generalization

C = vol.ume of s~e~ification region .P volume of regIOncontaInIng99.731Yoof values of X.

(5.6)

The denominator is the volume of the ellipsoid

(x - ~)'VO'1 (x -~) ~ X~. 0.9973 (5.7)

which is

. 'v

(nX~.0.9973)2 IVoilntv+ 1)

(5.8)

[Of course, this would be the volume of the ellipsoid (5.7),whatever the value of ~. The ellipsoidwould contain 99.73%

. of values of X however, only with ~ equal to the expectedvalue vector.]

This leads us to the d'efinition

c - volume of RP - Qv IVoll

I where a:=(nx~.0.9973)!v/r(tV+ 1).

(5.9)

However, some modification of this definition is needed inorder to provicie a genuine generalization of the coverageproperty of Cp(v= 1),which would be that, if ~=T and T is thecentre of the specification region, then Cp= 1 should ensurethat 99.73% of values of X fall within R. This property is clearly

I-'I

r

~

~

.

~

~

IIII'

II

/1 /

I~

.I J

I

I

I I

I

Multivariate analogs of Cp 185

not satisfied if the 'box' region LSLi~Xi~USLi (i= 1,..., v) isused. The volume of the rectangular paralellopiped isI

III

I IIj

v

n(USLi-LSLi)i= 1

(5.10)

but Pr[LSLi ~ Xi ~ USLi for all i= 1,... , v] is not necessarilyequal to 99.73%, even if

v .

.n(USLi-LSLi)= (nx~.9973)hIVol1=1 ntv+l)(5.11)

The property would hold if R were of form

(X-~)'VO' 1(X-~)~K2 (5.12a)

but, as we noted in section 5.2, this is unlikely to occur. Evenif an ellipsoidal R, of form

(X-~)'A -l(X -~)~K2 , (5.12b)

were specified, it is unlikely that A would equal Vo.Taam et al. (1991) propose to avoid this difficulty by using

a 'modified specification region' -. R*, say - defined as thegreatest volume ellipsoid with generating matrix VO'l, con-tained within the actual specification region, R. If this ellip-soid is given by (5.12a), then

. volume of R*

(K2

)V/2

C~= volume of(X-~)'VO1(X-~)~X;.0.9973 = X;.0.997;

(5.13)

. for any ~ and any v.The most one might reasonably hope for, in practice, is

that the region R is defined by (5.12b), and it is known that

Page 97: 041254380 x Capability Indices

186 Multivariate process capability indices

v 0 is proportional to A, i.e. V0= 02A, though the value of themultiplier, 02, is not known.. Total rejection of this possibility(an opinion held in some circles) seems to be an undulypessimistic outlook, assuming that practitioners have so littleknowledge of the process as to be unwilling to be somewhatflexible in adjusting engineering specifications to the actualbehaviour of the process in order to benefit from availabletheory. We consider such an attitude to be unhealthy, andencourage practitioners to, take advantage of knowledge oflikely magnitude of correlations among characteristics insetting specification,if possible. .

If V0 = ()2A then

volume of x'A- lX ~ K2Cp = I f Iv - 1 --- 2

voumeo x 0 X"'::Xv.O.9973

volume of x'V() IX~(K/()f=I f 'V- 1 --- 2

vo ume 0 x 0 X"'::Xv.O.9973

(K2

)V/2

= 2 20 Xv,O.9973(5.14)

Pearn et at. (1992) reach a similar PCl hy regarding X;,O.9973as a generalized 'length' corresponding to a proportion 0.0027of NC items, and K2/h2 as a generalized 'allowable length'.They define

K = C~/vvCp= °Xv,O.9973

(5.15)

Estimation of Cp (or vCp) is equivalent to estimation of O.IfVo={)2A, and valucs X/=(xlj, ...,x"j) arc availahle for 11individuals (j= 1, .", n), the statistic,

11

s= L (Xj-X)'A -l(Xj-X)j= 1

I II

~ I

,

IjtI

II~

~,l..II

I,j

,J_1

Multivariate analogs of Cpm 187

is distributed as e2X~-l)v, and S/{(n-1)v} is an unbiasedestimator of (J2.

A natural estimator of vCp is

C~ - K ((n-1 )v)1

v p-X",O.9973 S

, (5.16a)

which is distributed as

K ((n-1)v)! ((n-1)v)j- = "Cp

Xv,O.9973 °X(n-l)v X(n-l)v

and a 100(1- iX)%confidence interval for vCp is

(X(n-l)V,iXI2 C X(n-l)v,l ~ t ){(n-1)v}! v p, {(n-1)v}7 v p(5.16 b)

(cf (2.9 c)).

5.4 MULTIVARIATE ANALOGS OF Cpm

Taam et al. (1991) proceed to define a mutivariate analog ofCpmby the formula

C* = volumeof R* .mm ... ..

pm volume of ellipsoid x'Vi lX<C2(5.17)

(cf. (5.13)) where

VT= Vo+(~-T)(~-T)'

Since

IVTI= IVol{1 +(~- T)'VC;l(~- T)}j

C:m = Ct{ 1+ (~- T)'VC;l(~ - T)} -j (cf. (3.8))(5.18)

Page 98: 041254380 x Capability Indices

(Taamet at. (1991)usethenotationMCPforC:m). Thisindexhas the property that when the process mean vector ~equalsthe target vector T, and the index has the value 1, .then99.73% of the process values lie within the modified specifica-tion region R *. This is analogous to the property of theunivariate Cpmindex.

The matrix VT can be estimated unbiasedly as

188 Multivariate process capability indices

1 n . ,\\= - L (Xj-T)(Xj-T)

n j= 1 .(5.19a)

and V0 by

~ 1 ~ - -V= - L. (Xj-X)(Xj-X)'

n-1 j=1(5.19b)

whereII

I

.j

I'

0

A'I

~'1"

1 nX=- L Xj

n j= 1

Note that

n--1 ~

~T= - V +(X-T)(X-T)'n(5.19c)

, Chan et at. (1990) assume that the specification region R isof form .

(X-T)'V() I(X-T)~ K ,

lit,i.e. the generating matrix of the specification region. R. isproportional to V0, the variance-covariance matrix ofthe measured variates. They define a multivariate analog of

. If V0 is known, an unbiased estimator of the denominator ofC2 .

pm is

! t (Xj-T)'V(j1(X-T)n j=1

and a natural estimator of Cpmis

[ J

'C = nv 2pm n

j~1 (Xj-T)'V()1(Xj-T)

(5.21)

At this point we note that Cpm is subject to the samedrawback as Cpm,noted in section 3.2. Identical values of Cpmcan correspond to dramatically different expected proportionsof NC items, if the specification region is not centered at T.Also, calculation of C"", requires knowledge of the correct Vo.If all arbitrary matrix is used in place of V0 ill defining Cpm,we can encounter a similar effect of ambiguity in meaning ofthe value of Cpm' (See Chen (1992) and also Appendix 5.A).The numerator v is introduced because if ~= T, then

E[(X - T)'V (j 1(X- T)] = v

If ~= T, then C;.; is distributed as (nv)-1 X;v. Hence, inthes~ special circumstances

E[ Cpm] =(nv

)! r( t(nv - 1))2 l( tnv)

(5.22a)

Multivariateanalogs of Cpm 189

I Cpm asI

Cpm =[E[(X- T)';(j l(X- T)]T'I = {I + V-1( - T)V(j 1( - T)} -i (5.20)

Page 99: 041254380 x Capability Indices

190 Multivariate process capability indices

ap.d

var(Cpm) =Jnv [r(!nV)r(tnv-t)- {r(!(nv-t)Y]

- {r(tnv)}2' - -(5.22 b)

(Chan et ai. (1990)).Percentage points for Cpm(on the assumption that ~=T)

are easily obtained since

Pr[Cpm~ GaJ= Pr[C;'; ~ G;2J

= Pr[(nv) - Ix;v ~ G; 2J

=Pr[X~v~nvG;2]

Hence, to make Pr[Cpm ~ GaJ = Ctwe take

G - (nv)!a--Xnv.l-IX

(5.23)

Except when nv is small the approximation

GIX~ (2nv)!zl-IX+(2nv-1)1

(5.24 a)

where <I>(z1- IX)= 1- Ct,or even

1

)-1

, , Zl-IX +1--GIX= C2nV)! 4nv-

(5.24b)

give values which are sufficientlyaccurate for practical pur-poses. [For example,using (5.24a) -

when nv=40 we find Go.o25s:f0.8245(correct value 0.8210)

and Go.os~0.8492 (correct value0.8470):

!-I.

I~

Multivariate analogs of Cpm 191

when nv= 100, we find Go.o2ss:f0.8802(correct value 0.8785)

and Go.os~0.8978 (correct value 0.8968):

and when nv =200, we find GO.O25~0.9118 (correct value 0.9109)

- and Go.os~0.9251 (correctvalue 0.9245)].

Peam et al. (1992) felt that Cpmis not a true generalizationof the univariate Cpm,and proposed the PCI

{

1

}

-i

vCpm=vCp 1+ ;(~-T)'VOl(~-T)

=vCp X Cpm (5.25)

(see (5.15) and (5.20)).

Note that the univariate indices Cpm and Cp a!e related by

Cpm=Cp{l+(~~TY} -1'

where ~= E[XJ, (J2 = var(X) and T is the target value for X. Anatural estimator of vCpmis

- I- K(nv) f -, - 1 if 1 -!"Cpm= 0--' x (S+n(X-l)Vo (A--T)f (5.26)Xv,O.9973

In all the previous work, distribution theory has leanedheavily on the assumption that:

1. R is of form (X~ t)' A-I (X- T)~ K2 and.2. 1\,= V0, or, at least, A is proportional to V0, i.e. (12A = V0,

as in (5.14).

In this second case, even with ~#T, it is possible to evaluate,the distribution of

n

()2 L (Xj-T)'Vo l(Xj-T)j= 1

I

I

II I I

\I

I

\

tI

Page 100: 041254380 x Capability Indices

192 Multivariate process capability indices

as that of

02 X(noncentral X2 with nv degrees of freedom andnoncentrality parameter n(~ - T)'Vo l(~ - T))

However,if A is not proportional to V0, the distribution of(X-T)'A -l(X-T), although known, is complicated (Johnsonand Kotz (1968)). (Appendix SA outlines some details.)

Other approaches to measuring 'capability' from multivari-ate data include: . .

1. Use of the PCI

vCR= IVoA-II +(~- T)'A-l(~-T) (5.27)

. The motivation for this PCI is that it is expressed in termsof the matrix A used in the specification, rather than theprocess variance-covariance matrix - on the lines of Taamet al. (1991). If v=l and A=(1d)2 we have ICR=C;n~'Note that small values of vCRare 'good', and large valuesare 'bad'.

2. Proceeding along the lines of Hsiang and Taguchi (1985) and Johnson (1992) (see section 3.2) we may introduce

L(x) = (x - T)'A^{-1}(x - T)   (5.28a)

as a loss function (generalizing L(x) = k(x - T)²). We have

E[L(X)] = tr(A^{-1}V_0) + (ξ - T)'A^{-1}(ξ - T)   (5.28b)

where tr(M) denotes the 'trace of M', which is the sum of the diagonal elements of M. Large values of E[L(X)] are, of course, 'bad', and small values 'good'.

For readers' convenience we summarize, in Table 5.1, the scalar indices introduced in this chapter.

Table 5.1 Scalar indices for multivariate data

Source                Symbol    Definition
Taam et al. (1991)    Cp        vol. specn. region / vol. of (x - ξ)'V_0^{-1}(x - ξ) ≤ χ²_{v,0.9973}
Taam et al. (1991)    C*p       As Cp, with modified specn. region
Pearn et al. (1992)   vCp       Cp^{1/v}
Taam et al. (1991)    C*pm      As C*p, but denominator = (vol. of x'V_1^{-1}x ≤ c²)
Chan et al. (1990)    Cpm       As Cp, but denominator E[(X - T)'V_0^{-1}(X - T)]
Pearn et al. (1992)   vCpm      vCp × Cpm
Proposal 1.           vCR       |V_0A^{-1}| + (ξ - T)'A^{-1}(ξ - T)
Proposal 2.           E[L(X)]   tr(A^{-1}V_0) + (ξ - T)'A^{-1}(ξ - T)

V_0 is the variance-covariance matrix of the joint distribution of the process characteristics, and V_1 = V_0 + (ξ - T)(ξ - T)'.

3. Chen (1992) has noted that a broad class of PCIs can be defined in the following way. Suppose the specification region is

h(x - T) < r_0

where h(·) is a nonnegative scalar function satisfying the condition h(tx) = t h(x) for all t > 0. Then a multivariate PCI can be defined as

MCp = r/r_0

where r is defined by

Pr[h(X - T) > r] = α

This ensures that if MCp = 1 the expected proportion of NC items is α.

This definition includes, for example, the rectangular specification region

|x_j - T_j| < r_j   (j = 1, ..., v)

by taking

h(x - T) = max_{1≤j≤v} r_j^{-1}|x_j - T_j|

and r_0 = 1. In this case, and several others, actual estimation of MCp (i.e. of r) can entail quite heavy computation; a small Monte Carlo sketch is given after this list. Some details of possible methods of calculation are provided by Chen (1992).

4. Wierda's (1992a, b) suggestion that -⅓Φ^{-1}(expected proportion (p) of NC items) be used as a PCI is applicable in multivariate, as well as univariate, situations. It can then be regarded as a multivariate generalization of Cpk. The same comments - that one might as well use p itself, and that since p is estimated on the basis of an assumed form of process distribution, accuracy will depend on the correctness of this assumption - apply, as mentioned in section 2.
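As promised above, here is a minimal Monte Carlo sketch (ours, not Chen's (1992) method) of estimating MCp for the rectangular specification region just described, assuming multinormal variation and illustrative values of ξ, V_0, T, the half-widths r_j and α; r is taken as the (1 - α) quantile of simulated values of h(X - T).

```python
# Minimal sketch (assumed approach, not Chen's (1992) algorithm): estimate
# MCp = r / r0 for the rectangular region |x_j - T_j| < r_j, where
# h(x - T) = max_j |x_j - T_j| / r_j and Pr[h(X - T) > r] = alpha.
import numpy as np

rng = np.random.default_rng(1)
v = 3
T = np.zeros(v)
xi = np.array([0.1, -0.2, 0.0])                 # illustrative process mean
V0 = np.array([[1.0, 0.3, 0.0],
               [0.3, 2.0, 0.4],
               [0.0, 0.4, 1.5]])                # illustrative covariance
r_spec = np.array([4.0, 5.0, 4.5])              # half-widths r_j; here r0 = 1
alpha = 0.0027

X = rng.multivariate_normal(xi, V0, size=500_000)
h = np.max(np.abs(X - T) / r_spec, axis=1)      # h(X - T) for each draw
r = np.quantile(h, 1.0 - alpha)                 # so that Pr[h > r] = alpha
MCp = r / 1.0                                   # MCp = r / r0 as in the text
print("estimated MCp =", MCp)
```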

5.5 VECTOR-VALUED PCIs

As we have noted (several times) it is a risky undertaking to represent variation of even a univariate characteristic by a single index. The possibility of hiding important information is much greater when multivariate characteristics are under consideration, and the desirability of using vector-valued PCIs arises quite naturally. One such vector would consist of the v PCIs, one for each of the v variables. These might be Cps, Cpks or Cpms, and would probably be related to the concept of rectangular parallelepiped specification regions.

We have already mentioned (in section 5.1) a suggestion to use the product of Cpms, and have pointed out the deficiencies of this proposal. Interpretation of the set of estimated PCIs would be assisted by knowledge of the joint distribution. The work of Kocherlakota and Kocherlakota (1991) provides the joint distribution of Cps for v = 2, under assumed multinormality.

A different type of vector PCI has been proposed by Hubele et al. (1991), in which they suggest using a vector with three components.

1. The ratio

[ area of specification rectangular parallelepiped / area of smallest rectangle containing the 99.73% ellipsoid, (x - ξ)'V_0^{-1}(x - ξ) ≤ χ²_{v,0.9973} ]^{1/v}

= [ 'specification rectangle' / 'process rectangle' ]^{1/v}   (5.29a)

The numerator is (see (5.10))

[ Π_{i=1}^{v} (USL_i - LSL_i) ]^{1/v}

Hubele et al. (1991) show that the denominator is

[ Π_{i=1}^{v} 2χ_{v,0.9973} {|V_0^{(i)}| / |V_0^{-1}|}^{1/2} ]^{1/v}   (5.29b)

where V_0^{(i)} is obtained from V_0^{-1} by deleting the ith row and the ith column.

2. The P-value of the Hotelling T² statistic

W² = n(X̄ - T)'S^{-1}(X̄ - T)   (5.30)

where S is the sample variance-covariance matrix. This component includes information about the relative location of the process and specification values. If ξ = T - i.e. the process is accurately centred - then W² has the distribution of

{v(n - 1)/(n - v)} × (F_{v,n-v} variable)

When X̄ is close to T the P-value of W² is nearly 1; the further X̄ is from T, the nearer is the P-value to zero.

3. The third component measures location and lengths of sides of the 'process rectangle' relative to those of the 'specification rectangle'. For the case v = 2, and using the symbols UPR_i, LPR_i to denote the upper and lower limits for variable X_i in the process rectangle, the value of the component is

max[ |UPR_1 - LSL_1|/(USL_1 - LSL_1), |LPR_1 - USL_1|/(USL_1 - LSL_1), |UPR_2 - LSL_2|/(USL_2 - LSL_2), |LPR_2 - USL_2|/(USL_2 - LSL_2) ]

A value greater than 1 indicates that part, or the whole, of the process rectangle falls outside the specification rectangle.
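For concreteness, the three components can be computed from a bivariate sample along the following lines (our sketch, not Hubele et al.'s code), assuming multinormal data, illustrative specification limits, the 1/2 exponent in the first component, and sample estimates in place of process parameters.

```python
# Minimal sketch (assumptions as noted above) of the three-component
# capability vector of Hubele et al. (1991) for v = 2.
import numpy as np
from scipy.stats import chi2, f

rng = np.random.default_rng(2)
LSL = np.array([-4.0, -5.0]); USL = np.array([4.0, 5.0])
T = np.zeros(2)
n = 50
X = rng.multivariate_normal([0.3, -0.4],
                            [[1.0, 0.5], [0.5, 2.0]], size=n)  # toy data
xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)            # sample variance-covariance matrix

# 1. (spec rectangle area / process rectangle area)^(1/2); the process
#    rectangle is the smallest one containing the 99.73% ellipsoid
c = np.sqrt(chi2.ppf(0.9973, df=2))
proc_sides = 2.0 * c * np.sqrt(np.diag(S))      # side lengths 2c*sqrt(S_ii)
comp1 = np.sqrt(np.prod(USL - LSL) / np.prod(proc_sides))

# 2. P-value of Hotelling's T^2:  W^2 = n (xbar - T)' S^{-1} (xbar - T)
W2 = n * (xbar - T) @ np.linalg.solve(S, xbar - T)
v = 2
comp2 = f.sf(W2 * (n - v) / (v * (n - 1)), v, n - v)

# 3. Location/length of the process rectangle relative to the spec rectangle
UPR = xbar + c * np.sqrt(np.diag(S)); LPR = xbar - c * np.sqrt(np.diag(S))
comp3 = np.max([np.abs(UPR - LSL) / (USL - LSL),
                np.abs(LPR - USL) / (USL - LSL)])
print(comp1, comp2, comp3)
```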

Taam et al. (1991) point out that components (1.) and (2.) of this triplet are similar to the numerator (Cp) and the estimator of the denominator,

[1 + (X̄ - T)'S^{-1}(X̄ - T)]^{1/2} = [1 + n^{-1}W²]^{1/2}

of C*pm (see (5.18)), respectively.

Although these three components do measure different aspects of process capability, they are by no means exhaustive. Also, when v > 2, calculation of component (3.) is cumbersome and, more importantly, there is no practically relevant distribution theory. Nevertheless, (1.), (2.) and (3.) combined will give a useful idea of the process capability - especially indicating in what respects it is, and is not, likely to be satisfactory.

APPENDIX 5.A

We investigate the distribution of

W = (X - T)'A^{-1}(X - T)

where X has a v-dimensional multinormal distribution with expected value vector ξ and variance-covariance matrix V_0, and A is a positive definite v × v matrix.

From Anderson (1984, p. 589, Theorem A2.2) we find that there exists a nonsingular v × v matrix F such that

FV_0F' = I   (5.31a)

and

FAF' = diag(λ)   (5.31b)

where diag(λ) is a v × v diagonal matrix in which the diagonal elements λ_1, ..., λ_v are roots of the determinantal equation

|A - λV_0| = 0   (5.32)

and I = diag(1) is the v × v identity matrix. Note that (5.31a) can be rewritten

F'F = V_0^{-1}   (5.31c)

and (5.31b) as

(F^{-1})'A^{-1}F^{-1} = diag(1/λ)   (5.31d)


If Y = F(X - ξ) then Y has a v-dimensional multinormal distribution with expected value 0 and variance-covariance matrix

E[YY'] = F E[(X - ξ)(X - ξ)'] F' = FV_0F' = I

from (5.31a). Then, since X - ξ = F^{-1}Y,

W = (F^{-1}Y + ξ - T)'A^{-1}(F^{-1}Y + ξ - T)
  = {Y' + (ξ - T)'F'}(F^{-1})'A^{-1}F^{-1}{Y + F(ξ - T)}
  = {Y + F(ξ - T)}' diag(1/λ) {Y + F(ξ - T)}
  = Σ_{j=1}^{v} λ_j^{-1}(Y_j + δ_j)²   (5.33)

where δ' = (δ_1, ..., δ_v) = (ξ - T)'F'.

We note that in the particular case when A = θ²V_0 we have, from (5.32),

λ_1 = λ_2 = ... = λ_v = θ²

so

W = θ^{-2} Σ_{j=1}^{v} (Y_j + δ_j)²

and is distributed as

θ^{-2} × (noncentral chi-squared with v degrees of freedom and noncentrality parameter Σ_{j=1}^{v} δ_j² = δ'δ = (ξ - T)'F'F(ξ - T) = (ξ - T)'V_0^{-1}(ξ - T)).   (5.34)
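Numerically, the λ_j of (5.32) and the δ_j of (5.33) can be obtained from a generalized eigenvalue decomposition. The following minimal sketch (ours) assumes SciPy and illustrative matrices A and V_0: scipy.linalg.eigh(A, V_0) returns the roots of |A - λV_0| = 0 together with eigenvectors Φ normalized so that Φ'V_0Φ = I, so that F = Φ' satisfies (5.31a) and (5.31b).

```python
# Minimal sketch (ours): compute the lambda_j of (5.32) and delta of (5.33),
# then check the representation W = sum_j (Y_j + delta_j)^2 / lambda_j.
# A, V0, xi and T below are illustrative assumptions.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
V0 = np.array([[1.0, 0.3, 0.0],
               [0.3, 2.0, 0.4],
               [0.0, 0.4, 1.5]])
A = np.array([[2.0, 0.2, 0.1],
              [0.2, 1.0, 0.0],
              [0.1, 0.0, 1.8]])
xi = np.array([0.2, -0.1, 0.3]); T = np.zeros(3)

lam, Phi = eigh(A, V0)          # roots of |A - lambda V0| = 0; Phi' V0 Phi = I
F = Phi.T                       # F V0 F' = I and F A F' = diag(lam)
delta = F @ (xi - T)

# Check that W = (X - T)' A^{-1} (X - T) matches (5.33) on a few draws
X = rng.multivariate_normal(xi, V0, size=5)
W_direct = np.einsum('ij,jk,ik->i', X - T, np.linalg.inv(A), X - T)
Y = (X - xi) @ F.T              # Y = F (X - xi), standard multinormal
W_repr = np.sum((Y + delta) ** 2 / lam, axis=1)
print(np.allclose(W_direct, W_repr))   # True
```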


If we have T = ξ, i.e. the process is centred at the target value, then δ = 0 and W reduces to

W = Σ_{j=1}^{v} Y_j²/λ_j   (5.35)

(W is the weighted sum of v independent χ²_1 variables.) The general distribution of W in this case has been studied by

Johnson and Kotz (1968), inter alia. They provide tables of percentage points of the distribution for v = 4 and v = 5 for various combinations of values of the λs, subject to the condition

Σ_{j=1}^{v} λ_j^{-1} = v

These 100(1 - α)% percentage points are values of c_α such that Pr[W < c_α] = 1 - α. From the tables one can see how c_α varies with variation in the values of the λs.

Some values for v=4 are shown in Table 5.2.

The more discrepant the values of the λs, the greater the correct value of c_α. This will, of course, decrease the values of the PCIs defined in sections 5.3 and 5.4, implying a greater proportion of NC items than would be the case if all the λs were equal to 1. Of course, one must keep in mind that this conclusion is valid only for situations restricted by the requirement of a fixed value of the sum

Σ_{j=1}^{v} λ_j^{-1}

Table 5.2 Percentage points of W

λ_1^{-1}  λ_2^{-1}  λ_3^{-1}  λ_4^{-1}    97.5%   99%    99.5%   99.73%
2.5       0.7       0.4       0.4         14.3    18.3   21.4    24.1
2.0       1.0       0.7       0.3         12.8    16.0   18.4    20.7
1.5       1.5       0.7       0.3         12.4    15.2   17.3    19.2
1.5       1.5       0.5       0.5         12.3    15.0   17.1    19.0
1.2       1.2       1.2       0.4         11.7    14.1   15.9    17.5
1.0*      1.0       1.0       1.0         11.1    13.3   14.9    16.25

In all cases Σ_{j=1}^{4} λ_j^{-1} = 4.
*The values in this row are percentage points of χ², chi-squared with four degrees of freedom.

If all the λ_js are larger in the same proportion, one would obtain proportionately smaller values of c_α.
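When tables are not to hand, percentage points like those in Table 5.2 are easy to approximate by simulation. A minimal sketch (ours, not the Johnson and Kotz (1968) tabulation method) follows, assuming NumPy; for the equal-λ row it should reproduce the χ²_4 percentage points to Monte Carlo accuracy.

```python
# Minimal sketch (ours): Monte Carlo percentage points of
# W = sum_j Y_j^2 / lambda_j for given weights 1/lambda_j (central case, delta = 0).
import numpy as np

rng = np.random.default_rng(4)

def percentage_points(inv_lam, probs=(0.975, 0.99, 0.995, 0.9973), reps=1_000_000):
    inv_lam = np.asarray(inv_lam, dtype=float)
    Y = rng.standard_normal((reps, inv_lam.size))
    W = (Y ** 2) @ inv_lam          # weighted sum of independent chi^2_1 variables
    return np.quantile(W, probs)

print(percentage_points([2.5, 0.7, 0.4, 0.4]))   # compare with first row of Table 5.2
print(percentage_points([1.0, 1.0, 1.0, 1.0]))   # compare with chi-squared, 4 d.f.
```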

BIBLIOGRAPHY

Alt, F.B. (1985) Multivariate quality control. In Encyclopedia of Statistical Sciences (S. Kotz, N.L. Johnson and C.B. Read, eds) 6, Wiley: New York, 110-22.

Anderson, T.W. (1984) An Introduction to Multivariate Statistical Analysis (2nd edn), Wiley: New York.

Chan, L.K., Cheng, S.W. and Spiring, F.A. (1990) A multivariate measure of process capability, J. Modeling Simul., 11, 1-6. [Abridged version in Advances in Reliability and Quality Control (1988) (M. Hamza, ed.) ACTA Press: Anaheim, CA, 195-9.]

Chen, H.F. (1992) A multivariate process index over a rectangular parallelepiped, Tech. Rep., Math. Statist. Dept, Bowling Green State University, Bowling Green, Ohio.

Hotelling, H. (1947) in Techniques of Statistical Analysis (C. Eisenhart, M.W. Hastay and W.A. Wallis, eds) McGraw-Hill: New York, 111-84.

Hsiang, T.C. and Taguchi, G. (1985) A Tutorial on Quality Control and Assurance - The Taguchi Methods, Amer. Statist. Assoc. Meeting, Las Vegas, Nevada (188 pp.).

Hubele, N.F., Shahriari, H. and Cheng, C.-S. (1991) A bivariate process capability vector, in Statistical Process Control in Manufacturing (J.B. Keats and D.C. Montgomery, eds) M. Dekker: New York, 299-310.

Johnson, N.L. and Kotz, S. (1968) Tables of the distribution of quadratic forms in central normal variables, Sankhyā Ser. B, 30, 303-14.

Johnson, T. (1992) The relationship of Cpm to squared error loss, J. Qual. Technol., 24, 211-15.

Kocherlakota, S. and Kocherlakota, K. (1991) Process capability indices: bivariate normal distribution, Commun. Statist. - Theor. Meth., 20, 2529-47.

Littig, J.L., Lam, C.T. and Pollock, S.M. (1992) Process capability measurements for a bivariate characteristic over an elliptical tolerance zone, Tech. Rep. 92-42, Dept of Industrial and Operations Engineering, University of Michigan, Ann Arbor.

Pearn, W.L., Kotz, S. and Johnson, N.L. (1992) Distributional and inferential properties of process capability indices, old and new, J. Qual. Technol., 24, 215-31.

Taam, W., Subbaiah, P. and Liddy, J.W. (1991) A note on multivariate capability indices. Working paper, Dept of Mathematical Sciences, Oakland University, Rochester, MI.

Wierda, S.J. (1992a) Multivariate quality control: estimation of the percentage good products. Research Memorandum No. 482, Institute of Economic Research, Faculty of Economics, University of Groningen, Netherlands.

Wierda, S.J. (1992b) A multivariate process capability index, Proc. 9th Internat. Conf. Israel Soc. Qual. Assur., Jerusalem, Israel.


Note on computer programs

The following SAS programs, written by Dr Robert N. Rodriguez of SAS Institute Inc., SAS Campus Drive, Cary NC 27513-8000, USA, will be available in the SAS Sample Library provided with SAS/QC software (Release 6.08, expected in early 1993). A directory of these programs is given in the Sample Library program named CAPDIST.


Chapter 2
Expected value and standard deviation of Cp
Surface plot for bias of Cp
Surface plot for mean square error of Cp
Expected value and standard deviation of Cpk
Surface plot for bias of Cpk
Surface plot for mean square error of Cpk
Plot of probability density function of Cp
Plot of probability density function of Cpk
Confidence intervals for Cp
Approximate lower confidence bound for Cpk using the approach of Chou et al. (1990) (reproduces their Table 5)
Approximate lower confidence bound for Cpk using the approach of Chou et al. (1990) and the numerical method of Guirguis and Rodriguez (1992) (generalizes Table 5 of Chou et al.)
Approximate confidence intervals for Cpk using the methods of Bissell (1990) and Zhang et al. (1990)


Chapter 3
Expected value and standard deviation of Cpm
Expected value and standard deviation of Cpmk
Surface plot for bias of Cpm
Surface plot for mean square error of Cpm
Approximate confidence limits for Cpm using the method of Boyles (1991)

Chapter 4
Expected value and standard deviation of Cjkp
Surface plot for bias of Cjkp
Surface plot for mean square error of Cjkp


Postscript

Our journey down the bumpy road of process capability indices has come to an end, for the moment. We hope that those who have patiently studied this book, or even those who have browsed through it in less detail, have been motivated to seek answers to the following questions.

1. Are the resistance and objections to the use of PCIs in practice justified?

2. Is there truth in the statement that 'a manager would rather use a wrong tool which (s)he understands than a correct one which (s)he does not'?

3. Are PCIs too advanced concepts for average technical workers on the shop floor to comprehend?

Let us state immediately that our answer to the last question should be an emphatic 'No'. In our opinion it is definitely not too much to expect from workers a basic understanding of elementary statistical and probabilistic concepts such as mean and standard deviation (and target value and variability in general).

We also sincerely hope that the answer to the second question is an equally emphatic 'No'. In spite of rather disturbing catch phrases and slogans, such as 'KISS (keep it simple, statisticians)' and even 'Statisticians keep out' (ascribed to Taguchi advocates at a meeting in London), we strongly believe that a great majority of managers and engineers are sufficiently enlightened and motivated to explore new avenues, and are not lazy, or afraid of the 'unknown'. Deming's philosophy and a shrinking world (from the development of 'instant' communications) are factors


contributing to an atmosphere favourable to the development of innovative techniques on the shop floor, and improvements in educational methods may be expected to produce many more open-minded individuals.

Having explained our reactions - we hope, satisfactorily - to the second and third questions, we now attempt to reply to the first, and most difficult, one. We rephrase the question: should the use of PCIs be encouraged?

As we indicated in the introduction to Chapter 1, PCIs can be used with advantage provided we understand their limitations - in particular, that a PCI is only a single value and should not usually be expected to provide an adequate measure on its own, and that PCIs are subject to sampling

variation, as are other statistics. In the case of the sample mean (X̄), for example, the sampling distribution of X̄ is usually approximately normal even for moderate sample sizes (n > 6, say), even if the process distribution is not normal. We therefore use a normal distribution for X̄ with some confidence. On the other hand, none of the sampling distributions of PCIs in this book is normal, even when the process distribution is perfectly normal.

Tables of the kind included in this book provide adequate background for interpreting estimates of PCIs when the process distribution is normal. In our opinion, objections voiced by some opponents of PCIs, based on noting that the distribution of estimates of PCIs when the process distribution is non-normal is unknown and may hide unpleasant surprises, are somewhat exaggerated. The results of studies like those of Price and Price (1992) and Franklin and Wasserman (1992), summarized in Chapter 4, provide some reassurance, at the same time indicating where real danger may lurk. In particular the effects of kurtosis (β_2 < 3, or more importantly β_2 ≫ 3) are notable.

We appreciate greatly the efforts of those involved in these enquiries, although we have to remark - with all due respect - that their methods (of simulation and bootstrapping) do not, as yet, provide a sufficiently clearcut picture for theoreticians to guess the exact form of the sampling distribution, on the lines of W.S. Gosset ('Student')'s classical work on the distribution of X̄/S in 1908, wherein hand calculation coupled with perceptive intuition led to a correct conjecture, deriving the t-distribution.

Index

Approximations 10, 12, 46, 109
Asymmetry 113
Asymptotic distributions 170
Attributes 78
Bayesian methods 112, 151
Beta distributions 22, 118, 121, 128, 148, 172
  functions 22
  incomplete 23
  ratio, incomplete 23, 80, 129
Bias 33, 57, 98
Binomial distributions 15, 114, 172
Bivariate normal distributions 179
Blackwell-Rao theorem 33, 80, 133
Bootstrap 161
Capability indices, process, see Process capability indices
Chi-squared distributions 18, 45, 75, 137, 172, 181
  noncentral 20, 99, 111, 119, 141, 198
Clements' method 156
Coefficient of variation 7
Computer program 203
Confidence interval 45, 70, 107
  for Cp 45, 47
  for Cpk 69
  for Cpm 107, 187
Contaminated distribution 136, 140
Correlated observations 76, 106, 179
Correlation 7, 29, 78, 82, 106, 181
Cumulative distribution function 4
Delta method, see Statistical differential method
Density function, see Probability density function
Distribution function, cumulative 4
Distribution-free PCIs 160


Edgeworth series 140, 146
Ellipsoid of concentration 184
Estimation 43, 98
Estimator 32
  maximum likelihood 32
  moment 32
  unbiased 79
  unbiased minimum variance (MVUE) 79, 127
Expected proportion NC 40, 53, 92, 183
Expected value 6
Exponential distribution 19, 138, 149
Folded distributions 26
  beta 27, 174
  normal 26, 56, 171
Gamma distributions 18, 149
  function 18
    incomplete 19
  ratio 19
    incomplete 20
Gram-Charlier series, see Edgeworth series
History 1, 54, 89
Hotelling's T² 195
Hypothesis 35
Independence 7
Interpercentile distance 164
Johnson-Kotz-Pearn method 156


Kurtosis 8
Likelihood 32
  maximum 32
  ratio test 35, 75
Loss function 91, 181, 192
Maximum likelihood 32
Mean deviation 47
Mean square error 92, 101
Mid-point 39
Mixture distributions 27, 99, 121, 169
Moments 7, 59, 102, 122, 168
Multinomial distributions 16, 141
Multinormal distributions 29, 181, 198
Noncentral chi-squared distributions, see Chi-squared distributions, noncentral
Noncentral t-distributions, see t-distributions, noncentral
Nonconforming (NC) product 38
Non-normality 135
Normal distributions 16, 53, 161
  bivariate, see Bivariate normal distributions
  folded, see Folded distributions, normal
Odds ratio 78


Parallelepiped 178, 194
Pearson-Tukey (stabilizing) results 157
Percentile (percentage) point 18, 190, 199
Performance indices, process, see Process performance indices
Poisson distributions 15, 21, 99, 107
Probability density function 5
Process capability indices (PCIs)
  Cjkp 165
  CLjkp 164
  CUjkp 164
  Cp 38, 186
  Cpg 90
  Cpk 51, 66, 67
  Cpkl 55
  Cpku 55
  Cpk(θ) 157
  Cpm 90, 112, 187
  Cpmk 117
  Cpm(a) 121
  Cpp 40
  Le 90
  MCp 188, 193
  TCpkl 55
  TCpku 55
  vCp 187
  vCR 192
  comparison of 74, 95
  flexible 164
  for attributes 78
  for multivariate data 177
  vector-valued 194

Process performance indices (PPIs)
  Pp 41
  Ppk 55
Quadratic form 30
Ratio 47
Robustness 151
Semivariance 116, 153
Shape factor 8, 67, 151
  kurtosis 8
  skewness 8, 149
Significance test 35
Simulation 75, 138
Skewness 8, 149
Specification limit 38
  lower (LSL) 38
  upper (USL) 38
  region 178
Stabilization 157
Standard deviation 7
Statistical differential method 10, 107
Sufficient statistic 32
Systems of distributions 30
  Edgeworth 31
  Gram-Charlier 31
  Pearson 31, 156


t-distributions 24, 137
  noncentral 25, 63
Total variance (σ_total) 41
Triangular distribution 138
Uniform distribution 22, 136, 149
Variance 7
Vector-valued PCIs 194
Worth 94
