
The Measure Phase

times quite a bit larger than 3-1/2 hours, and another portion quite a bit less than 3-1/2 hours, perhaps because of differences between clerical staff or in the type or number of line items included in the order. The confidence interval calculation assumes the data come from a single population, yet it fails to prove that the process is in fact stable and therefore suitable for representation by a single distribution.
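To make the point concrete, the sketch below uses hypothetical, simulated order-processing times (the 3-1/2 hour figure is the only number borrowed from the example above) to show how a pooled confidence interval can look perfectly reasonable even when the data come from two different conditions:

```python
# A minimal sketch (hypothetical data) of how a pooled confidence interval can
# look reassuring even when the process is not stable. Order-processing times
# shift upward halfway through the period, but the pooled interval still
# brackets 3.5 hours and says nothing about the shift.
import numpy as np

rng = np.random.default_rng(1)
early = rng.normal(loc=3.1, scale=0.4, size=60)   # first period / clerical group
late = rng.normal(loc=3.9, scale=0.4, size=60)    # later period; the process has shifted
times = np.concatenate([early, late])             # the analyst sees only the pooled data

mean = times.mean()
se = times.std(ddof=1) / np.sqrt(len(times))
ci = (mean - 1.96 * se, mean + 1.96 * se)         # normal-approximation 95% CI
print(f"Pooled mean = {mean:.2f} h, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")

# Plotting the data in time order (or comparing the two halves) tells a very
# different story: the "single population" assumption behind the CI fails.
print(f"First-half mean = {early.mean():.2f} h, second-half mean = {late.mean():.2f} h")
```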

Some appropriate analytic statistics questions might be:

• Is the process central tendency stable over time?

• Is the process dispersion stable over time?

• Is the process distribution consistent over time?

If any of the above are answered "no," then what is the cause of the instability? To help answer this question, ask "what is the nature of the variation as revealed by the patterns?" when the data are plotted in time sequence and stratified in various ways.

If none of the above are answered "no," then, and only then, can we ask such questions as:

• Is the process meeting the requirements?

• Can the process meet the requirements?

• Can the process be improved by recentering it?

• How can we reduce variation in the process?

Enumerative and Analytic Studies

Deming (1975) defines enumerative and analytic studies as follows:

Enumerative study: a study in which action will be taken on the universe.

Analytic study: a study in which action will be taken on a process to improve performance in the future.

The term "universe" is defined in the usual way: the entire group of interest, for example, people, material, units of product, which possess certain properties of interest. An example of an enumerative study would be sampling an isolated lot to determine the quality of the lot.

In an analytic study the focus is on a process and how to improve it. The focus is the future. Thus, unlike enumerative studies which make inferences about the universe actually studied, analytic studies are interested in a universe which has yet to be produced. Table 7.2 compares analytic studies with enumerative studies (Provost, 1988).

Deming (1986) points out that "Analysis of variance, t-tests, confidence intervals, and other statistical techniques taught in the books, however interesting, are inappropriate because they provide no basis for prediction and because they bury the information contained in the order of production." These traditional statistical methods have their place, but they are widely abused in the real world. When this is the case, statistics do more to cloud the issue than to enlighten.

Analytic study methods provide information for inductive thinking, rather than the largely deductive approach of enumerative statistics. Analytic methods are primarily graphical devices such as run charts in the simplest case or statistical control charts in the more general case.


Item                        | Enumerative Study    | Analytic Study
Aim                         | Parameter estimation | Prediction
Focus                       | Universe             | Process
Method of access            | Counts, statistics   | Models of the process (e.g., flow charts, cause-and-effect diagrams, mathematical models)
Major source of uncertainty | Sampling variation   | Extrapolation into the future
Uncertainty quantifiable    | Yes                  | No
Environment for the study   | Static               | Dynamic

TABLE 7.2 Important Aspects of Analytic Studies

Analytic statistics provide operational guidelines, rather than precise calculations of probability. Thus, such statements as "There is a 0.13% probability of a Type I error when acting on a point outside a three-sigma control limit" are false (the author admits to having made this error in the past). The future cannot be predicted with a known level of confidence. Instead, based on knowledge obtained from every source, including analytic studies, one can state that one has a certain degree of belief (e.g., high, low) that such and such will result from such and such action on a process.

Another difference between the two types of studies is that enumerative statistics proceed from predetermined hypotheses while analytic studies try to help the analyst generate new hypotheses. In the past, this extremely worthwhile approach has been criticized by some statisticians as "fishing" or "rationalizing." However, this author believes that using data to develop plausible explanations retrospectively is a perfectly legitimate way of creating new theories to be tested. To refuse to explore possibilities suggested by data is to take a very limited view of the scope of statistics in quality improvement and control.

Although most applications of Six Sigma are analytic, there are times when enumerative statistics prove useful. These enumerative methods will be discussed in more detail in the Analyze phase, where they are useful in quantifying sources of variation. The analyst should keep in mind that analytic methods will be needed to validate the conclusions developed with the use of the enumerative methods, to ensure their relevance to the process under study.

In the Measure stage of DMAIC, statistical process control (SPC) charts (i.e., analytical statistics) are used to define the process baseline. If the process is statistically stable, as evidenced by the SPC charts, then process capability and sigma level estimates can be used to quantify the performance of the process relative to requirements.
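As a point of reference, the sketch below shows the familiar Cp and Cpk indices often used to make this comparison once stability has been demonstrated. The specification limits and process estimates are hypothetical, and this is only an illustration of the idea, not of the specific capability methods presented later in the text:

```python
# A minimal, illustrative sketch of comparing a stable process to requirements
# using the familiar Cp and Cpk capability indices (hypothetical specification
# limits and process estimates).
def capability(mean, sigma, lsl, usl):
    """Return (Cp, Cpk) for a process with the given mean and standard deviation."""
    cp = (usl - lsl) / (6.0 * sigma)                   # potential capability (spread only)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)  # actual capability (includes centering)
    return cp, cpk

# Example: requirements of 10.0 +/- 1.5, process centered slightly high.
cp, cpk = capability(mean=10.2, sigma=0.4, lsl=8.5, usl=11.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # roughly Cp = 1.25, Cpk = 1.08

# Note: these indices are meaningful only after the SPC charts show the
# process to be statistically stable, as emphasized in the text.
```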

If the process is not statistically stable, as evidenced by the SPC analysis, then clarity is sought regarding the causes of variation, as discussed in the following sections. If the special causes of variation can be easily identified and removed, and a stable baseline process established, then the requirements for the baseline objective have been met. If not, as is often the case, then our baseline estimate has provided critical information useful in our Analyze phase, where the causes of the process instability can be investigated under the controlled conditions of a designed experiment.


Of course, the baseline estimate will rely upon the validity of the measurement system, which is discussed in the final section of this chapter. While it may seem obvious that the measurement system should be evaluated before effort is expended taking baseline measurements, the measurement system verification is presented after the baseline section for two reasons:

1. A key analytical technique used for validating the measurement system is the control chart. Readers should become familiar with their use and application in process baselines before their special application toward measurement system validation is discussed.

2. When a Six Sigma project's objectives include improvement to CTQ or CTS metrics, the Measure stage baseline provides validation of the initial conditions estimated in the Define stage. Significant improvements or changes to the measurement system before the baseline will bias the baseline relative to the initial conditions. Instead, the baseline should be estimated first using the existing methods, then repeated (after improvements to the measurement system) if the measurement system error is significant. Adherence to this recommendation will lend further credibility to the improvements, since the project team can demonstrate replication of the error as well as the resulting improvement. In some cases, the measurement error is found to be the most significant cause of process variation. In those cases, the focus of the Six Sigma project changes at this stage to concentrate on improvement to the measurement system, rather than the process itself.

The statistical control chart will provide estimates of the process location (i.e., its mean or median) and its variation (i.e., process standard deviation). Of key importance, however, is how these current conditions compare with existing requirements or project objectives. These considerations are answered with a proper understanding of the statistical distribution, as will be discussed.

Principles of Statistical Process Control

A central concept in statistical process control (SPC) is that every measurable phenomenon is a statistical distribution. In other words, an observed set of data constitutes a sample of the effects of unknown common causes. It follows that, after we have done everything to eliminate special causes of variation, there will still remain a certain amount of variability exhibiting the state of control. Figure 7.5 illustrates the relationships between common causes, special causes, and distributions.

There are three basic properties of a distribution: location, spread, and shape. The location refers to the typical value of the distribution, such as the mean. The spread of the distribution is the amount by which smaller values differ from larger ones. The standard deviation and variance are measures of distribution spread. The shape of a distribution is its pattern: peakedness, symmetry, etc. A given phenomenon may have any one of a number of distribution shapes; for example, the distribution may be bell-shaped, rectangular-shaped, etc.
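A small illustrative sketch of these three properties, computed from hypothetical data (any statistical package reports the same summaries):

```python
# A small sketch (hypothetical cycle-time data) of the three basic properties of
# a distribution described above: location, spread, and shape.
import numpy as np

rng = np.random.default_rng(7)
data = rng.gamma(shape=4.0, scale=0.9, size=500)   # a right-skewed, non-bell shape

location = data.mean()                             # typical value
spread_sd = data.std(ddof=1)                       # standard deviation
spread_var = data.var(ddof=1)                      # variance
# A crude shape summary: the standardized third moment (skewness), near 0 for symmetric shapes.
skew = ((data - location) ** 3).mean() / data.std() ** 3

print(f"location (mean)    = {location:.2f}")
print(f"spread (std dev)   = {spread_sd:.2f}, (variance) = {spread_var:.2f}")
print(f"shape (skewness)   = {skew:.2f}  # > 0 indicates a right-skewed shape")
```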

Central Limit Theorem

The central limit theorem can be stated as follows:

Irrespective of the shape of the distribution of the population or universe, the distribution of average values of samples drawn from that universe will tend toward a normal distribution as the sample size grows without bound.

Page 4: 07-Pyzdek CH07 001-018pyzdek.mrooms.net/file.php/1/reading/bb-reading/... · The Mea sur e P has e 207 times quite a bit larger than 3-1/2 hours, and another portion quite a bit less

210 C hap te r S eye n

FIGURE 7.5 Distributions. (From Continuing Process Control and Process Capability Improvement, p. 4a. Copyright © 1983 by Ford Motor Company. Used by permission of the publisher.)

It can also be shown that the average of sample averages will equal the average of the universe and that the standard deviation of the averages equals the standard deviation of the universe divided by the square root of the sample size. Shewhart performed experiments that showed that small sample sizes were needed to get approximately normal distributions from even wildly non-normal universes. Figure 7.6 was created by Shewhart using samples of four measurements.

The practical implications of the central limit theorem are immense. Consider that without the central limit theorem effects, we would have to develop a separate statistical model for every non-normal distribution encountered in practice. This would be the only way to determine if the system were exhibiting chance variation. Because of the central limit theorem we can use averages of small samples to evaluate any process using the normal distribution. The central limit theorem is the basis for the most powerful of statistical process control tools, Shewhart control charts.
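A minimal simulation sketch of this result, using subgroups of four drawn from a rectangular (uniform) universe as in Shewhart's experiment; the numbers below are illustrative, not taken from the text:

```python
# A minimal simulation sketch of the central limit theorem as used above:
# averages of small samples (here n = 4, as in Shewhart's experiment) drawn
# from a decidedly non-normal (rectangular) universe are already close to
# normally distributed, with sigma_xbar = sigma / sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
n, k = 4, 10_000                              # subgroup size and number of subgroups
universe_sd = np.sqrt(1.0 / 12.0)             # std dev of a uniform(0, 1) universe

samples = rng.uniform(0.0, 1.0, size=(k, n))  # k subgroups of n from the rectangular universe
xbars = samples.mean(axis=1)                  # distribution of subgroup averages

print(f"mean of averages    = {xbars.mean():.4f}  (universe mean = 0.5000)")
print(f"std dev of averages = {xbars.std(ddof=1):.4f}")
print(f"sigma / sqrt(n)     = {universe_sd / np.sqrt(n):.4f}")
# A histogram of xbars would show the familiar bell shape of Fig. 7.6.
```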

FIGURE 7.6 Illustration of the central limit theorem. [Histograms of averages of samples of four drawn from a rectangular universe and from a right triangular universe.] (From Shewhart (1931, 1980), Figure 59. Copyright © 1931, 1980 by ASQC Quality Press. Used by permission of the publisher.)


Common and Special Causes of Variation

Shewhart (1931, 1980) defined control as follows:


A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.

The critical point in this definition is that control is not defined as the complete absence of variation. Control is simply a state where all variation is predictable. A controlled process isn't necessarily a sign of good management, nor is an out-of-control process necessarily producing nonconforming product.

In all forms of prediction there is an element of risk. For our purposes, we will call any unknown random cause of variation a chance cause or a common cause; the terms are synonymous and will be used as such. If the influence of any particular chance cause is very small, and if the number of chance causes of variation is very large and relatively constant, we have a situation where the variation is predictable within limits. You can see from the definition above that a system such as this qualifies as a controlled system. Where Dr. Shewhart used the term chance cause, Dr. W. Edwards Deming coined the term common cause to describe the same phenomenon. Both terms are encountered in practice.

Needless to say, not all phenomena arise from constant systems of common causes. At times, the variation is caused by a source of variation that is not part of the constant system. These sources of variation were called assignable causes by Shewhart, and special causes of variation by Deming. Experience indicates that special causes of variation can usually be found without undue difficulty, leading to a process that is less variable.

Statistical tools are needed to help us effectively separate the effects of special causes of variation from chance cause variation. This leads us to another definition:

Statistical process control: the use of valid analytical statistical methods to identify the existence of special causes of variation in a process.

The basic rule of statistical process control is that variation from common-cause systems should be left to chance, but special causes of variation should be identified and eliminated.

This is Shewhart's original rule. However, the rule should not be misinterpreted as meaning that variation from common causes should be ignored. Rather, common-cause variation is explored "off-line." That is, we look for long-term process improvements to address common-cause variation.

Figure 7.7 illustrates the need for statistical methods to determine the category of variation.

The answer to the question "should these variations be left to chance?" can only be obtained through the use of statistical methods. Figure 7.8 illustrates the basic concept.

In short, variation between the two "control limits" designated by the dashed lines will be deemed variation from the common-cause system. Any variability beyond these fixed limits will be assumed to have come from special causes of variation. We will call any system exhibiting only common-cause variation "statistically controlled." It must be noted that the control limits are not simply pulled out of the air; they are calculated from actual process data using valid statistical methods. Figure 7.7 is shown as Fig. 7.9, only with the control limits drawn on it; notice that process (a) is exhibiting variation from special causes, while process (b) is not.

FIGURE 7.7 Types of variation. [Two time-ordered plots, (a) and (b), of monthly data.]

FIGURE 7.8 Should these variations be left to chance? [The same plots annotated to show points for which to "look for a special cause of variation" and variations to "leave to chance."] (From Shewhart (1931, 1980), p. 13. Copyright © 1931, 1980 by ASQC Quality Press. Used by permission of the publisher.)

FIGURE 7.9 Charts from Fig. 7.7 with control limits shown. (From Shewhart (1931, 1980), p. 13. Copyright © 1931, 1980 by ASQC Quality Press. Used by permission of the publisher.)

This implies that the type of action needed to reduce the variability in each case is of a different nature. Without statistical guidance there could be endless debate over whether special or common causes were to blame for variability.
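As a sketch of how control limits like those in Fig. 7.9 are computed from the process data itself, the example below uses the common averages-and-ranges (X-bar and R) method with subgroups of five and the standard tabled constants; the data are hypothetical and this is one common method among several:

```python
# A minimal sketch of computing three-sigma control limits from process data
# rather than "pulling them out of the air." This uses the familiar Xbar-R
# method with subgroups of n = 5 and the standard tabled constants A2 = 0.577,
# D3 = 0, D4 = 2.114 (hypothetical data).
import numpy as np

rng = np.random.default_rng(3)
subgroups = rng.normal(loc=10.0, scale=0.5, size=(25, 5))  # 25 subgroups of 5 measurements

xbars = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)

xbar_bar, r_bar = xbars.mean(), ranges.mean()
A2, D3, D4 = 0.577, 0.0, 2.114

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar  # averages chart limits
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar                        # range chart limits

print(f"Averages chart: LCL = {lcl_x:.3f}, center = {xbar_bar:.3f}, UCL = {ucl_x:.3f}")
print(f"Range chart:    LCL = {lcl_r:.3f}, center = {r_bar:.3f}, UCL = {ucl_r:.3f}")

# Points beyond these limits are treated as signals of special-cause variation;
# points within them are left to chance, as discussed above.
```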

Estimating Process Baselines Using Process Capability Analysis

This section presents several methods of analyzing the data using a statistical control chart to determine the extent to which the process meets requirements.


FIGURE 8.18 Zones on a control chart. [Zone C spans the first standard deviation on either side of the centerline X-bar, zone B the second, and zone A the third, out to the control limits at plus and minus three sigma.]

Since the control limits are at plus and minus three standard deviations, finding the one and two sigma lines on a control chart is as simple as dividing the distance between the grand average and either control limit into thirds, which can be done using a ruler. This divides each half of the control chart into three zones. The three zones are labeled A, B, and C, as shown in Fig. 8.18.

Based on the expected percentages in each zone, sensitive run tests can be developed for analyzing the patterns of variation in the various zones. Remember, the existence of a nonrandom pattern means that a special cause of variation was (or is) probably present. The run tests for averages, np, and c control charts are shown in Fig. 8.19.

Note that when a point responds to an out-of-control test, it is marked with an "X" to make interpretation of the chart easier. Using this convention, the patterns on the control charts can be used as an aid in troubleshooting.
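A simplified sketch of how such run tests can be applied programmatically follows; it implements only Test 1 and one common reading of Test 2 (nine points in a row on the same side of the centerline), uses hypothetical chart values, and is not a full implementation of all eight tests in Fig. 8.19:

```python
# A simplified sketch of two zone/run tests (hypothetical data; not a full
# implementation of all eight tests shown in Fig. 8.19).
import numpy as np

def test1_beyond_limits(points, ucl, lcl):
    """Test 1: one point beyond zone A, i.e., outside a control limit."""
    points = np.asarray(points)
    return [int(i) for i in np.where((points > ucl) | (points < lcl))[0]]

def test2_nine_one_side(points, center):
    """Test 2 (one common reading): nine points in a row on the same side of the centerline."""
    side = np.sign(np.asarray(points) - center)
    flags = []
    for i in range(8, len(side)):
        window = side[i - 8 : i + 1]
        if window[0] != 0 and np.all(window == window[0]):
            flags.append(i)
    return flags

# Hypothetical chart values; the process shifts upward after sample 12.
center, ucl, lcl = 10.0, 11.5, 8.5
data = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9, 10.0, 10.2, 9.8,
        10.6, 10.4, 10.7, 10.5, 10.6, 10.8, 10.4, 10.6, 10.9, 11.7]
print("Test 1 signals (mark these points with an 'X'):", test1_beyond_limits(data, ucl, lcl))
print("Test 2 signals:", test2_nine_one_side(data, center))
```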

Tampering Effects and Diagnosis

Tampering occurs when adjustments are made to a process that is in statistical control. Adjusting a controlled process will always increase process variability, an obviously undesirable result. The best means of diagnosing tampering is to conduct a process capability study and to use a control chart to provide guidelines for adjusting the process.

Perhaps the best analysis of the effects of tampering is from Deming (1986). Deming describes four common types of tampering by drawing the analogy of aiming a funnel to hit a desired target. These "funnel rules" are described by Deming (1986, p. 328):

"Leave the funnel fixed, aimed at the target, no adjustment."

"At drop k (k = 1, 2, 3, ... ) the marble will come to rest at point Zk' measured from the target. (In other words, Zk is the error at drop k.) Move the funnel the distance -Zk

from the last position. Memory 1."

"Set the funnel at each drop right over the spot Zk' measured from the target. No memory."

"Set the funnel at each drop right over the spot (Zk) where it last came to rest. No memory."


FIGURE 8.19 Tests for out-of-control patterns on control charts. (From Nelson (1984), pp. 237-239.) The eight tests illustrated are:

• Test 1. One point beyond Zone A
• Test 2. Nine points in a row in Zone C or beyond
• Test 3. Six points in a row steadily increasing or decreasing
• Test 4. Fourteen points in a row alternating up and down
• Test 5. Two out of three points in a row in Zone A or beyond
• Test 6. Four out of five points in a row in Zone B or beyond
• Test 7. Fifteen points in a row in Zone C (above and below centerline)
• Test 8. Eight points in a row on both sides of centerline with none in Zone C

Rule 1 is the best rule for stable processes. By following this rule, the process average will remain stable and the variance will be minimized. Rule 2 produces a stable output but one with twice the variance of rule 1. Rule 3 results in a system that "explodes," that is, a symmetrical pattern will appear with a variance that increases without bound. Rule 4 creates a pattern that steadily moves away from the target, without limit (see Fig. 8.20).
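A minimal simulation sketch of the four rules follows (one-dimensional, with the target at zero). Rule 3 is implemented here as the compensating adjustment relative to the target described in the discussion below ("get the next unit to be 10 units too low"), and the printed variances are illustrative rather than exact:

```python
# A minimal simulation sketch of Deming's four funnel rules as summarized above
# (one-dimensional version: z is the error of each drop from the target at 0).
# The marble lands at the funnel's aim point plus random noise; each rule
# differs only in how the aim point is adjusted after each drop.
import numpy as np

def simulate(rule, drops=10_000, sd=1.0, seed=0):
    rng = np.random.default_rng(seed)
    aim, results = 0.0, []
    for _ in range(drops):
        z = aim + rng.normal(0.0, sd)     # where the marble comes to rest
        results.append(z)
        if rule == 1:
            pass                          # rule 1: leave the funnel fixed
        elif rule == 2:
            aim = aim - z                 # rule 2: move the funnel -z from its last position
        elif rule == 3:
            aim = -z                      # rule 3: compensate, aiming at -z relative to the target
        elif rule == 4:
            aim = z                       # rule 4: set the funnel over the last resting spot
    return np.asarray(results)

for rule in (1, 2, 3, 4):
    r = simulate(rule)
    print(f"Rule {rule}: variance of drops = {r.var():.1f}")
# Typical behavior: rule 1 gives variance near 1, rule 2 about twice that, while
# rules 3 and 4 wander or explode (their variance keeps growing with more drops).
```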

At first glance, one might wonder about the relevance of such apparently abstract rules. However, upon more careful consideration, one finds many practical situations where these rules apply.


FIGURE 8.20 Funnel rule simulation results. [Drops 1-50 follow Rule 1, 51-100 Rule 2, 101-150 Rule 3, and 151-200 Rule 4.]

Rule 1 is the ideal situation and it can be approximated by using control charts to guide decision-making. If process adjustments are made only when special causes are indicated and identified, a pattern similar to that produced by rule 1 will result.

Rule 2 has intuitive appeal for many people. It is commonly encountered in such activities as gage calibration (check the standard once and adjust the gage accordingly) or in some automated equipment (using an automatic gage, check the size of the last feature produced and make a compensating adjustment). Since the system produces a stable result, this situation can go unnoticed indefinitely. However, as shown by Taguchi (1986), increased variance translates to poorer quality and higher cost.

The rationale that leads to rule 3 goes something like this: "A measurement was taken and it was found to be 10 units above the desired target. This happened because the process was set 10 units too high. I want the average to equal the target. To accomplish this I must try to get the next unit to be 10 units too low." This might be used, for example, in preparing a chemical solution. While reasonable on its face, the result of this approach is a wildly oscillating system.

A common example of rule 4 is the "train-the-trainer" method. A master spends a short time training a group of "experts," who then train others, who train others, etc. An example is on-the-job training. Another is creating a setup by using a piece from the last job. Yet another is a gage calibration system where standards are used to create other standards, which are used to create still others, and so on. Just how far the final result will be from the ideal depends on how many levels deep the scheme has progressed.

Short Run Statistical Process Control Techniques

Short production runs are a way of life with many manufacturing companies. In the future, this will be the case even more often. The trend in manufacturing has been toward smaller production runs with product tailored to the specific needs of individual customers. Henry Ford's days of "the customer can have any color, as long as it's black" have long since passed.
