Upload: duongdat

Post on 16-Mar-2018

Page 1: Capability on Aggregate Processes

CVJ Systems

AWD Systems

Trans Axle Solutions

eDrive Systems

Capability on Aggregate Processes

Page 2

The Problem

With one machine and a couple of fixtures, it’s a pretty easy problem. You do a study on each fixture and show both are capable. That’s going to be somewhere between 60 and 200 parts measured, depending on requirements.

[Figure: horizontal machine with Fixture 1 and Fixture 2]

But as the copies of the process grow, so does the study. If we increase to 6 machines with 2 fixtures, that’s between 360 and 1,200 measurements, IF we agree that fixtures aren’t switching around machines. Add in more operations, and more opportunities for mixing, and this problem grows exponentially.

If we study the AGGREGATE of this output (assuming there is representation from each subprocess …) is this good enough?

Page 3

The Problem – Formally Stated

There are two components to a capability study: centeredness (mean location) and spread (variance/standard deviation). We need to examine the effects of both.
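These two components map onto the standard capability indices. As a quick reference (these are the textbook Cp/Cpk formulas, not anything specific to this deck; the numbers plugged in below are from the Mach 3-style example later on):

```python
def cp(usl, lsl, sigma):
    """Cp: how well the spread fits the tolerance, ignoring centering."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk: penalizes off-center processes by using the nearer spec limit."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# A capable but off-center process: good Cp, poor Cpk.
print(cp(30.02, 29.98, 0.0024))           # ≈ 2.78
print(cpk(30.02, 29.98, 30.013, 0.0024))  # ≈ 0.97
```

This is exactly the "capable but not centered" signature the case studies below are built around: Cp stays high while Cpk collapses toward 1.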

1) Case Study: If I have 3 parallel processes that are capable (high Cp), but they are not centered (low Cpk), and I randomly draw parts from the AGGREGATE of these processes, what does the resulting study look like?

• We will create some normal distributions with good Cp values (at least 2.5), center one of them (Cpk about 2.5), and put the other two at the upper and lower limits (Cpk close to 1).

• Distributions will all be “in spec,” because if they were all over the place, it’s clear the aggregate would be bad. We don’t want to test extremes; we want to test borderline cases.

• The question is – given this, will the aggregate show a problem?

2) Case Study: If I have 3 parallel processes that are centered (Cpk ~ Cp), and two have very capable distributions (low variance, good Cp) with the third having a large variance, and I randomly draw parts from the AGGREGATE of these processes, what does the resulting study look like?

• We will generate 3 centered processes (Cpk = Cp). Two distributions will be tight (Cpk = Cp near 2.5) and one of them noisy (Cpk = Cp near 1.33).

• Again, we don’t want to discuss extremes; we want to test a reasonable case and examine the effects. With the same question – will we see a problem in the aggregate?

Page 4

The Problem – Steps to Making the Models

1) Use Excel to generate random normal data, specifying a mean and a standard deviation. This will simulate the output of the individual processes.

2) Generate thousands of points – simulating a process run at these settings.

3) Grab a subset of these points, as if we were grabbing production parts off the end of the process for a study. We will grab 40 “parts” at random out of the thousands, with subgrouping.

4) Calculate the capabilities of these experimental draws and confirm they are close to the desired inputs.

5) Grab a subset from these selected parts at random, as if we then pulled aggregate parts.

6) Calculate the capabilities of these draws to see what a random study on the aggregate output shows us.
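The deck builds this model in Excel; the same six steps can be sketched in Python (standard library only). This is my own minimal reconstruction, using overall sigma for the indices for simplicity, and the Case Study 1 settings (means 29.99 / 30.00 / 30.01, sigma 0.0024, limits 30.02 / 29.98):

```python
import random
import statistics

random.seed(1)
USL, LSL = 30.02, 29.98

def capability(parts):
    mu, sd = statistics.mean(parts), statistics.stdev(parts)
    return (USL - LSL) / (6 * sd), min(USL - mu, mu - LSL) / (3 * sd)

# Steps 1-2) Three parallel processes, same noise, different centers,
# thousands of points each.
processes = [[random.gauss(mu, 0.0024) for _ in range(5000)]
             for mu in (29.99, 30.00, 30.01)]

# Steps 3-4) Draw 40 "parts" from each process, as if doing a study per machine.
studies = [random.sample(p, 40) for p in processes]
for s in studies:
    print(capability(s))  # each machine: high Cp; Cpk depends on centering

# Steps 5-6) Pool the studied parts and draw 40 at random from the aggregate.
pooled = [x for s in studies for x in s]
agg = random.sample(pooled, 40)
print(capability(agg))    # aggregate Cp falls well below any single machine
```

Rerunning with a different seed changes the exact numbers but not the conclusion, which matches the deck's note that the model was run repeatedly with similar results.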

Page 5

Case Study 1: Three capable distributions, not centered.

Let’s assume we have 3 machines, each trying to make an inside diameter of 30 mm. We do a separate capability study on each. What happens if we take all the samples, mix them up, and do a capability study on the aggregate, essentially losing traceability to each of the 3 machines?

                   Set 1    Set 2    Set 3
Name               Mach 1   Mach 2   Mach 3
Upper Spec Limit   30.02    30.02    30.02
Lower Spec Limit   29.98    29.98    29.98
n                  40       40       40
mean               29.99    30.00    30.01
std dev            0.0023   0.0024   0.0026
Cp                 2.86     2.76     2.57
Cpk                1.15     2.71     0.97

Details on the distributions: Notice the standard deviations are all about the same (green arrow) – the processes all have the same noise. The random distributions were all generated with the same standard deviation as an input, and the calculated sigma of each study confirms the distribution: close to the input value (0.0024), but still random. Notice the Cps (purple arrow). Because they all have the same noise (generated with the same standard deviation), the Cps are all roughly the same – they should be. Now examine the Cpks (red arrow). Only the centered process has Cpk ~ Cp (it’s centered). The other two have Cpk near 1 because those processes are near the limits. The means were intentionally set in the generation of the random data. A few thousand points were generated, and 40 (8 subgroups of 5) were grabbed out of all these random points. Now, if we grab 40 points at random out of THESE distributions and do a study, what does the aggregate look like?

Page 6

Case Study 1: Three capable distributions, not centered.

The final result looks like this ….


The histogram bars from the root distributions (blue, red, green) have been removed for clarity. The purple histogram shown is from the random pulls of the root distributions. You can sort of tell that three independent distributions generated this data from the 3 higher bars in the purple histogram.

Notice in the combination that the Cp and Cpk are low (gold box). Notice how high the standard deviation is (green box) compared to the three source distributions. Notice how non-normal the resulting aggregate distribution is. This random pull was 12 from Mach 1, 18 from Mach 2, and 10 from Mach 3. (This model was run repeatedly; results were similar.)

In this case, the differing source distributions had a negative effect on the aggregate distribution. This means that if an aggregate distribution is capable (regarding centeredness), the underlying distributions are also capable.

                   Set 1    Set 2    Set 3    Set 4
Name               Mach 1   Mach 2   Mach 3   123 Comb
Upper Spec Limit   30.02    30.02    30.02    30.02
Lower Spec Limit   29.98    29.98    29.98    29.98
n                  40       40       40       40
mean               29.99    30.00    30.01    30.00
std dev            0.0023   0.0024   0.0026   0.0097
Cp                 2.86     2.76     2.57     0.69
Cpk                1.15     2.71     0.97     0.67
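The inflated aggregate sigma is predictable from mixture statistics: for an equal-weight mixture, the variance is the mean of the component variances plus the (population) variance of the component means. A quick check against the table above — my own arithmetic, not from the deck:

```python
import math
import statistics

means = [29.99, 30.00, 30.01]
sds   = [0.0023, 0.0024, 0.0026]

# Equal-weight mixture: Var_mix = E[Var_i] + Var(mu_i)
var_mix = statistics.fmean([s**2 for s in sds]) + statistics.pvariance(means)
sd_mix = math.sqrt(var_mix)

cp_mix = (30.02 - 29.98) / (6 * sd_mix)
print(round(sd_mix, 4), round(cp_mix, 2))  # ≈ 0.0085, ≈ 0.78
```

The theoretical mixture sigma (~0.0085) and Cp (~0.78) sit right next to the table's observed 0.0097 and 0.69 — the difference being sampling noise from a finite, unbalanced 40-part draw.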

Page 7

Case Study 1: The Thought Experiment


Let’s recap – we had 3 root distributions, all very capable, but two of them needing centering. And the aggregate study showed us a very wide, but still centered, distribution we would have interpreted as a fail – where the sub-processes would have definitely been a pass (for one of them) and possibly a “pass with centering” on the other two.

Key Points:

1) The result makes sense. Imagine them all aligned in the center; as you drag two of them left and right from center, you can see in your mind the total distribution getting wider, but remaining centered.

2) With only 3 distributions, it’s easy to imagine, but what happens if we have more? What if we had 20 sub-processes, 19 of them a bit right of nominal (perhaps well set up, accounting for tool wear) and the 20th flubbed, hovering at the lower limit – what would be the effect?

Page 8

Case Study 1: The Thought Experiment


It might look something like this ….

[Sketch: lower limit, nominal, and upper limit, with 19 distributions well collected near nominal and one process that is not set up well, making a distribution near the lower limit.]

A resulting aggregate distribution may look like the pink sketch. Not a wide distribution, per se, but one showing two peaks. It may have an acceptable Cp or Cpk. This is because the samples are heavily weighted towards the “good” distributions (19 good vs. 1 outlying process). The effect would be more muted as more good processes were added, because of this weighting. Now, 100 parallel processes aren’t seen too often in manufacturing. But 40 are (multi-cavity injection molding tools come to mind). There are a few takeaways ….

1) Were one to attempt an aggregate experiment, one would have to ensure adequate representation from each subprocess (NOT random). You would want 5 parts from each process at a minimum, and maybe adjust quantities based on where the means of the initial 5 draws fell on a histogram.

2) It is unlikely the raw Cp/Cpk numbers would be enough to adequately evaluate the results; you would definitely want to plot the results and convince yourself the numbers made sense – especially looking for multiple peaks, something you would NOT expect from sub-processes that were identical.

Key Point: The more parallel processes that contribute to the main distribution, the less you will statistically notice the stray process…

Page 9

Case Study 2: One process is noisy ….

OK. But what if 2 of the machines are good and one is really noisy? Let’s keep the same parameters. But this time, Machines 4 and 5 are well centered. Machine 6 is centered, but noisy. A cutter is loose … will this still show up in an aggregate capability study?


The first two machines are running well: well centered, tight distributions (blue and maroon). The third machine is noisy (green). It is ALMOST capable (to a hurdle of 1.33). If we were just considering Mach 6, we would reject it and tell the supplier that they need to work on the noise.

But let’s assume we have no knowledge of which machine a part came from, and we randomly take 40 pieces from this aggregate population again. What would we get? Will the aggregate be out? Think about the answer before you continue.

                   Set 1    Set 2    Set 3
Name               Mach 4   Mach 5   Mach 6
Upper Spec Limit   30.02    30.02    30.02
Lower Spec Limit   29.98    29.98    29.98
n                  40       40       40
mean               30.00    30.00    30.00
std dev            0.0027   0.0026   0.0051
Cp                 2.51     2.52     1.31
Cpk                2.47     2.47     1.24

Page 10

Case Study 2: One process is noisy ….

And here is the result.


The aggregate distribution (purple) got worse than the two good machines (4 & 5) because we are including data from the bad machine (6). But … the two good machines helped the output of the bad one. So the aggregate is a pass even though one of the subprocesses is NOT.

If we studied each machine individually, we would have caught the “bad apple.” But collectively, it did NOT ruin the bunch. It made the aggregate worse, but not enough to result in a failed study. The likelihood of getting a “tail reading” from the bad (green) Mach 6 distribution is cut to a third by adding the two good machines into the mix, so a similar leveraging effect works here, too. If we had more and more good processes, the statistical significance of the bad apple would go down. This does not mean the bad apple is running well; it is just less likely that we would detect it.

                   Set 1    Set 2    Set 3    Set 4
Name               Mach 4   Mach 5   Mach 6   456 Comb
Upper Spec Limit   30.02    30.02    30.02    30.02
Lower Spec Limit   29.98    29.98    29.98    29.98
n                  40       40       40       40
mean               30.00    30.00    30.00    30.00
std dev            0.0027   0.0026   0.0051   0.0043
Cp                 2.51     2.52     1.31     1.54
Cpk                2.47     2.47     1.24     1.49
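The same mixture arithmetic used for Case Study 1 explains this pass. With all three means at 30.00, only the variances mix, so the aggregate sigma is the root-mean-square of the component sigmas. A rough check of the table (again my own arithmetic, not the deck's):

```python
import math

sds = [0.0027, 0.0026, 0.0051]  # Mach 4, 5, 6 from the table above

# Centered processes: the mixture variance is just the mean of the variances.
var_mix = sum(s**2 for s in sds) / len(sds)
sd_mix = math.sqrt(var_mix)

cp_mix = (30.02 - 29.98) / (6 * sd_mix)
print(round(cp_mix, 2))  # ≈ 1.82: the aggregate clears a 1.33 hurdle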

Page 11

Resulting Effects


When one studies an aggregate sampling from parallel processes, the ability to detect a stray process in the aggregate is based on how extreme the errant process is (either process noise or mean shift) and how many processes are in parallel. In other words:

• The more extreme the response of the stray process is, the more likely you are to detect it.

• The fewer the processes in the study, the more likely you are to detect it.

What this means is, you cannot necessarily say: “I have a passing capability study on the aggregate of my processes, therefore all my sub-processes are OK.”

[Chart: ease of detection, plotted against the number of parallel processes (few → many) and the difference of the stray process to the rest of the group (slight → extreme). The four corners:]

• Few processes, extreme difference: with two processes having very different means, you would easily notice a two-peaked distribution. Easy to detect.

• Few processes, slight difference: trivial solution. Two processes that do not have a detectable difference … this is what you want, processes without a detectable difference.

• Many processes, slight difference: this corner is also trivial. If I keep duplicating processes that I cannot detect a difference between, that is a good thing – multiple processes, statistically identical.

• Many processes, extreme difference: this is the danger corner; MAYBE you detect this. With enough parallel processes that are good, they could very much mask one stray one in an aggregate study.

Page 12

The Problem (Again)

We are still faced with the main problem. Given we have demonstrated that it is possible for an aggregate study to mask a stray process, do we then have to do 30+ piece studies on every combination? The answer is no – IF we do a structured experiment.

If we consider the output of each subprocess as its own subgroup, we can still detect a stray process with an aggregate study, but we have to take a structured, stratified approach.

Page 13

First, here’s a good study.


Trial N1 N2 N3 N4 N5

1 20.003 19.986 20.001 19.995 20.004

2 19.998 20.019 19.999 20.017 19.988

3 19.995 19.997 20.003 19.998 19.997

4 20.005 19.996 20.012 20.009 20.009

5 20.008 19.993 19.989 19.994 20.002

6 20.019 19.985 20.003 19.997 20.000

7 19.994 20.005 19.998 19.994 20.012

8 20.000 19.992 19.993 19.998 20.004

9 20.002 19.998 20.002 20.013 20.000

10 20.000 19.988 19.995 20.005 20.000

11 20.007 20.010 19.993 20.010 19.997

12 20.016 19.998 20.009 19.995 19.989

13 19.999 20.001 20.003 20.000 19.998

14 20.009 20.011 20.000 19.990 20.008

15 20.008 20.008 20.001 19.995 20.002

16 19.985 20.001 20.013 20.008 19.992

17 19.995 20.008 20.001 20.004 19.986

18 19.995 19.997 19.999 19.995 20.007

19 20.005 19.998 19.991 20.003 19.993

20 20.003 20.003 19.996 19.999 20.001

Each trial is a different machine and fixture. We grab 5 parts from each machine/fixture combination and keep them controlled.

We can generate some data. Let’s assume we want a normal distribution of a feature that’s at 20 ± 0.05 and we want a Cp ~ Cpk = 2.5. We can generate some random, normal data with µ=20 and σ=0.00667 and get such a spread.
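The same data can be generated outside Excel. A sketch with the deck's settings (µ = 20, σ = 0.00667, 20 subgroups of 5), computing Ppk from the overall sigma and Cpk from the within-subgroup estimate R̄/d2, where d2 = 2.326 is the standard control-chart constant for subgroups of 5:

```python
import random
import statistics

random.seed(7)
USL, LSL = 20.05, 19.95
D2 = 2.326  # bias-correction constant for ranges of subgroups of n = 5

# 20 trials (machine/fixture combinations) x 5 parts each
subgroups = [[random.gauss(20.0, 0.00667) for _ in range(5)]
             for _ in range(20)]

flat = [x for g in subgroups for x in g]
mu = statistics.mean(flat)

# Pp/Ppk use the overall standard deviation ...
sigma_overall = statistics.stdev(flat)
ppk = min(USL - mu, mu - LSL) / (3 * sigma_overall)

# ... while Cp/Cpk use the within-subgroup estimate Rbar/d2.
rbar = statistics.mean(max(g) - min(g) for g in subgroups)
sigma_within = rbar / D2
cpk = min(USL - mu, mu - LSL) / (3 * sigma_within)

print(ppk, cpk)  # with nothing straying, Ppk and Cpk land near each other
```

The exact values depend on the random draw, but like the deck's run they should come out a little shy of the 2.5 target, with Ppk ≈ Cpk.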

Here are the results.

Capability Data:
Pp  = 2.233   Ppk = 2.212
Cp  = 2.327   Cpk = 2.305

This is a little shy of our target Cp of 2.5, but good enough. This is all centered up. What happens if one of the processes strays? Let’s take trial 5 and move its mean so that it is JUST capable (Cpk = 1.0).

Page 14

Process 5 nudged


Each trial is a different machine and fixture. We grab 5 parts from each machine/fixture combination and keep them controlled.

We know the lower limit in our case study (19.95) and we know the σ that generated the data was 0.00667, so if we set the mean of JUST process 5 to 19.95 + 3 × 0.00667 = 19.97001, this is what we get.

The red arrows indicate process 5. Now, it is still capable – one stray process riding right at the limit, but still in. You can almost see the effect in the histogram, but it is obvious in the run chart. The recommended course of action would be: pass the processes, BUT machine/fixture 5 needs a separate study.

Trial N1 N2 N3 N4 N5

1 20.003 19.986 20.001 19.995 20.004

2 19.998 20.019 19.999 20.017 19.988

3 19.995 19.997 20.003 19.998 19.997

4 20.005 19.996 20.012 20.009 20.009

5 19.978 19.963 19.959 19.964 19.972

6 20.019 19.985 20.003 19.997 20.000

7 19.994 20.005 19.998 19.994 20.012

8 20.000 19.992 19.993 19.998 20.004

9 20.002 19.998 20.002 20.013 20.000

10 20.000 19.988 19.995 20.005 20.000

11 20.007 20.010 19.993 20.010 19.997

12 20.016 19.998 20.009 19.995 19.989

13 19.999 20.001 20.003 20.000 19.998

14 20.009 20.011 20.000 19.990 20.008

15 20.008 20.008 20.001 19.995 20.002

16 19.985 20.001 20.013 20.008 19.992

17 19.995 20.008 20.001 20.004 19.986

18 19.995 19.997 19.999 19.995 20.007

19 20.005 19.998 19.991 20.003 19.993

20 20.003 20.003 19.996 19.999 20.001

Capability Data:

Before                        With shifting
Pp  = 2.233   Ppk = 2.212    Pp  = 1.599   Ppk = 1.567
Cp  = 2.327   Cpk = 2.305    Cp  = 2.327   Cpk = 2.280

Page 15

Process 5 nudged



Another key point: the shift is more detectable in Pp/Ppk than Cp/Cpk, because Cp/Cpk calculations are less sensitive to subgroup shifts. And remember, all these distributions are capable. So if a subprocess were very errant, this method has an excellent chance of detecting it.
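That sensitivity difference is easy to demonstrate: shift one whole subgroup and recompute both indices. A sketch of my own mirroring the deck's experiment (same settings: µ = 20, σ = 0.00667, 20 subgroups of 5, trial 5 moved down to 19.97001):

```python
import random
import statistics

random.seed(3)
USL, LSL = 20.05, 19.95
D2 = 2.326  # range-to-sigma constant for subgroups of n = 5

def indices(subgroups):
    flat = [x for g in subgroups for x in g]
    mu = statistics.mean(flat)
    ppk = min(USL - mu, mu - LSL) / (3 * statistics.stdev(flat))
    rbar = statistics.mean(max(g) - min(g) for g in subgroups)
    cpk = min(USL - mu, mu - LSL) / (3 * rbar / D2)
    return ppk, cpk

base = [[random.gauss(20.0, 0.00667) for _ in range(5)] for _ in range(20)]
ppk0, cpk0 = indices(base)

# Nudge trial 5 down so its mean sits at LSL + 3*sigma, as on the slide.
shifted = [g if i != 4 else [x - (20.0 - 19.97001) for x in g]
           for i, g in enumerate(base)]
ppk1, cpk1 = indices(shifted)

# Ppk drops sharply; Cpk barely moves, because the within-subgroup
# ranges are unchanged when an entire subgroup shifts together.
print(ppk0 - ppk1, cpk0 - cpk1)
```

This reproduces the pattern in the tables: Pp/Ppk fall from ~2.2 to ~1.6 while Cp/Cpk hardly move.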


Page 16

Why a Stratified Study?


Random study, drawn from all processes, no stratification. (What was process 5 is highlighted.)

If we just took all these samples at random, the parts from machine/fixture 5 would be sprinkled throughout the data, similar to what is shown in this table. This hides the mean shift. The process looks less capable overall, and you lose sight of the fact that there IS a problem with process 5. If process 5 were actually NOT capable, you might pass this study as having a few outliers. The conclusion: stratification is key!!


Trial N1 N2 N3 N4 N5

1 20.003 19.986 20.001 19.995 20.004

2 19.998 20.019 19.959 20.017 19.988

3 19.995 19.997 20.003 19.998 19.997

4 20.005 19.996 20.012 20.009 20.009

5 20.009 20.001 19.999 19.995 19.972

6 20.019 19.985 20.003 19.997 20.000

7 19.994 20.005 19.998 19.994 20.012

8 20.000 19.992 19.993 19.998 20.004

9 20.002 19.998 20.002 20.013 20.000

10 20.000 19.988 19.995 20.005 20.000

11 20.007 20.010 19.993 20.010 19.997

12 20.016 19.998 20.009 19.964 19.989

13 19.999 20.001 20.003 20.000 19.998

14 19.978 20.011 20.000 19.990 20.008

15 20.008 20.008 20.001 19.995 20.002

16 19.985 19.963 20.013 20.008 19.992

17 19.995 20.008 20.001 20.004 19.986

18 19.995 19.997 19.999 19.995 20.007

19 20.005 19.998 19.991 20.003 19.993

20 20.003 20.003 19.996 19.999 20.001

Random pull (NOT stratified):
Pp  = 1.599   Ppk = 1.567
Cp  = 1.816   Cpk = 1.779
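The effect in these numbers — same 100 measurements, same Pp/Ppk, worse Cp/Cpk — falls out of the R̄/d2 estimate: shuffling sprinkles the shifted parts across many subgroups and inflates the within-subgroup ranges. A quick demonstration (my construction, same settings as before):

```python
import random
import statistics

random.seed(11)
USL, LSL = 20.05, 19.95
D2 = 2.326  # range-to-sigma constant for subgroups of n = 5

def cpk_from(subgroups):
    flat = [x for g in subgroups for x in g]
    mu = statistics.mean(flat)
    rbar = statistics.mean(max(g) - min(g) for g in subgroups)
    return min(USL - mu, mu - LSL) / (3 * rbar / D2)

# 19 good trials plus one shifted trial, kept together (stratified).
groups = [[random.gauss(20.0, 0.00667) for _ in range(5)] for _ in range(19)]
groups.append([random.gauss(19.97001, 0.00667) for _ in range(5)])
stratified = cpk_from(groups)

# The same 100 measurements with subgroup identity destroyed.
flat = [x for g in groups for x in g]
random.shuffle(flat)
shuffled = [flat[i:i + 5] for i in range(0, 100, 5)]

print(stratified, cpk_from(shuffled))  # shuffled Cpk comes out lower
```

The overall mean and spread are identical in both calls; only the subgroup structure changes, so the drop is purely an artifact of losing stratification.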

Page 17

Also suspicious – A noisy process


As opposed to a mean shift, a stray process due to noise is harder to detect, because we only have 5 samples from each process. Left is the original case study (good, Cpk target of 2.5). The middle column has subgroup 5 with a target Cp/Cpk of 1.33, and the right column has the same data but with subgroup 5 targeted at Cp/Cpk of 1.00. The issue isn’t detectable in the indices, histograms, or run charts, but it IS detectable in the R-chart. Again – the conclusion is that an aggregate study is possible IF it is structured and you are critical of the results. If this were from ONE process, it would be acceptable.

Original (left):
Pp  = 2.233   Ppk = 2.212
Cp  = 2.327   Cpk = 2.305

Subgroup 5 at Cp/Cpk = 1.33 (middle):
Pp  = 2.115   Ppk = 2.085
Cp  = 2.209   Cpk = 2.178

Subgroup 5 at Cp/Cpk = 1.00 (right):
Pp  = 2.117   Ppk = 2.073
Cp  = 2.252   Cpk = 2.205

Page 18

Keys to Success


An aggregate study could be successful if:

1) It was controlled and stratified, each subgroup representing at least 5 parts from each subprocess, kept together.

2) You must LOOK AT THE DATA and understand what it is trying to tell you. You cannot just look at the capability indices. The histogram, and especially the run and range charts of the data, are crucial to detect an errant process.

3) You must be willing to investigate errant data points. In this example, we are proving out 20 parallel processes with 100 measurements. Don’t balk at having to do an independent study on process 5 – it warrants it; it looks very suspicious.

4) You need to be reasonably capable overall. This is plain confidence-interval logic. If the run chart above were using more of the tolerance, it would justify doing a full study on a couple of machine/fixture combinations.

Page 19

Steps to a successful Aggregate Study

1) Create a structured, stratified experiment.

Guidelines:

• 5 in each subgroup would be the minimum number.

• If that isn’t a lot of parts (if we had 2 machines, it would only be 10 parts), increase the subgroup size until the total number of parts is at least 40; 100 would be better.

• The minimum of 5 must be maintained. If there were 100 processes in parallel, that’s 500 total parts.


5 parts from this machine/fixture is subgroup 1 (n = 5), then subgroup 2 (n = 5), subgroup 3 … and so on.

Page 20

Steps to a successful Aggregate Study

2) Conduct the stratified aggregate study, maintaining subgroup organization, and pay close attention to the Xbar (run) chart and R charts.


Example 1:

This would be ideal. Here’s a nice, tight grouping. Looking at it, you are pretty confident all the processes are performing the same. From this, you could safely stop checking. Overall Cp is high. All sub-processes are within the control limits.

The range chart would also be ideal. You want to compare the calculated range control limit of the aggregate (0.037) to the total tolerance (0.1). Here we use a bit more than a third of the tolerance; half would be a red flag. Remember, in the aggregate, we have most likely grabbed parts within 1 sigma of the mean from each process.
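The range-chart comparison above uses the standard R-chart upper control limit, UCL_R = D4 · R̄, where D4 = 2.114 for subgroups of 5. A sketch of the red-flag check described here (the helper name and the flat list of ranges are mine; the thresholds are the deck's):

```python
D4 = 2.114  # R-chart upper-control-limit constant for subgroups of n = 5

def range_chart_flag(ranges, tolerance):
    """Compare the R-chart upper control limit to the total tolerance."""
    rbar = sum(ranges) / len(ranges)
    ucl_r = D4 * rbar
    ratio = ucl_r / tolerance
    # Deck guideline: ~1/3 of tolerance is comfortable, 1/2 is a red flag.
    return ucl_r, ratio, ratio > 0.5

# Roughly reproducing this example: UCL_R = 0.037 against a 0.1 tolerance.
ucl, ratio, flag = range_chart_flag([0.0175] * 20, 0.1)
print(round(ucl, 3), flag)  # → 0.037 False
```

Feeding it subgroup ranges near 0.035 instead reproduces the 0.074 red flag seen in the noisy example on the next page.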

Page 21

Example 2:

This is noisy. The aggregate has a bad Cp/Cpk. This is too noisy to draw a conclusion on from the aggregate alone. But it’s hard to detect on the run chart: all means are within the run chart control limits.

Red flag: the range control limit is 0.074, which is more than half the tolerance. You would want to take the noisiest process (#20, in this case) and do a full study; more would be better. Problem noise is easier to detect in the range chart than the run chart. You should also pick the process from the run chart that is farthest from nominal and check that too.

Page 22

Example 3:

This has one errant process. You would want to conduct another study focused on the errant process. Note the errant one is outside the calculated control limits. (There may be more than one errant process.)

The range chart looks good. (It would – the spread of the subprocesses is the same.) This reiterates the fact that you need to examine BOTH charts.

Page 23

Example 4:

Here you have a number of points outside the control limits. Remember – these are all different processes. This means they are not all centered. The best solution is to center these up and repeat the aggregate study (centering is the easy fix). Worst case would be to take the 3 FARTHEST subprocesses and do a full study on each of them. If they are capable, all of them should be.

The range chart looks good in this example as well. And it should – this is the centering problem.

Page 24

3) From the aggregate study, conclude everything is OK, OR conduct your subprocess study or studies.

4) Draw your conclusion.

Final remark: There is no substitute for understanding what the graphs and data are telling you. Take the time to think about how well your model reflects your processes. One thing is clear – this is investigative work, which means you need to conduct an intentional experiment, not a random one.
