Design of Experiments – Methods and Case Studies
Dan Rand, Winona State University

Page 1: Design of Experiments –  Methods and Case Studies

Design of Experiments – Methods and Case Studies

• Dan Rand
• Winona State University
• ASQ Fellow
• 5-time chair of the La Crosse / Winona Section of ASQ
• (Long) Past member of the ASQ Hiawatha Section

Page 2: Design of Experiments –  Methods and Case Studies

Design of Experiments – Methods and Case Studies

• Tonight’s agenda
  – The basics of DoE
  – Principles of really efficient experiments
  – Really important practices in effective experiments
  – Basic principles of analysis and execution in a catapult experiment
  – Case studies in a wide variety of applications
  – Optimization with more than one response variable
  – If baseball was invented using DoE

Page 3: Design of Experiments –  Methods and Case Studies

Design of Experiments - Definition

• An implementation of the scientific method:
  – Design the collection of information about a phenomenon or process, analyze that information, and learn about the relationships of the important variables.
  – Enables prediction of response variables.
  – Economy and efficiency of data collection minimize the use of resources.

Page 4: Design of Experiments –  Methods and Case Studies

Advantages of DoE

• Process Optimization and Problem Solving with Least Resources for Most Information
• Allows Decision Making with Defined Risks
• Customer Requirements --> Process Specifications by Characterizing Relationships
• Determine effects of variables, interactions, and a math model
• DoE Is a Prevention Tool for Huge Leverage Early in Design

Page 5: Design of Experiments –  Methods and Case Studies

Why Industrial Experiments Fail

• No DoE
• Poor Planning
• Poor Design
• Poor Execution
• Poor Analysis

Page 6: Design of Experiments –  Methods and Case Studies

Steps to a Good Experiment

1. Define the objective of the experiment.
2. Choose the right people for the team.
3. Identify prior knowledge, then important factors and responses to be studied.
4. Determine the measurement system.

Page 7: Design of Experiments –  Methods and Case Studies

Steps to a Good Experiment

5. Design the matrix and data collection responsibilities for the experiment.
6. Conduct the experiment.
7. Analyze experiment results and draw conclusions.
8. Verify the findings.
9. Report and implement the results.

Page 8: Design of Experiments –  Methods and Case Studies

An experiment using a catapult

• We wish to characterize the control factors for a catapult

• We have determined three potential factors:

1. Ball type
2. Arm length
3. Release angle

Page 9: Design of Experiments –  Methods and Case Studies
Page 10: Design of Experiments –  Methods and Case Studies

One Factor-at-a-Time Method

• Hypothesis test: a t-test to determine the effect of each factor separately.
• Test each factor at 2 levels, planning 4 trials at the high level and 4 at the low level of each of the 3 factors.
• That is 8 trials per factor × 3 factors = 24 trials.
• And at what levels should the other 2 factors be held during each test?
• Instead, combine factor settings and study all 3 factors in only 8 total trials.

Page 11: Design of Experiments –  Methods and Case Studies

Factors and settings

Factor              Low      High
A  Ball type        slotted  solid
B  Arm length       10       12
C  Release angle    45       75

Page 12: Design of Experiments –  Methods and Case Studies

Factor settings for 8 trials

Run    A    B   AB    C   AC   BC  ABC
      (1)  (2)  (3)  (4)  (5)  (6)  (7)
 1    -1   -1    1   -1    1    1   -1
 2     1   -1   -1   -1   -1    1    1
 3    -1    1   -1   -1    1   -1    1
 4     1    1    1   -1   -1   -1   -1
 5    -1   -1    1    1   -1   -1    1
 6     1   -1   -1    1    1   -1   -1
 7    -1    1   -1    1   -1    1   -1
 8     1    1    1    1    1    1    1
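A minimal sketch (not part of the original slides) of how this ±1 design matrix and its interaction columns can be generated; the column order follows the slide, with factor A varying fastest in standard order.

```python
# Sketch: 2^3 full factorial in coded units, standard order (A varies fastest).
runs = [(a, b, c) for c in (-1, 1) for b in (-1, 1) for a in (-1, 1)]

for a, b, c in runs:
    # Interaction columns are element-wise products of the main-effect columns:
    # A, B, AB, C, AC, BC, ABC -- the 7 columns shown above.
    print(a, b, a * b, c, a * c, b * c, a * b * c)
```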

Page 13: Design of Experiments –  Methods and Case Studies

Randomization

• The most important principle in designing experiments is to randomize selection of experimental units and order of trials.

•  This averages out the effect of unknown differences in the population, and the effect of environmental variables that change over time, outside of our control.
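A minimal sketch of the randomization step (Design Expert does this automatically; the snippet below only illustrates the idea):

```python
import random

trials = list(range(1, 9))          # standard-order trial numbers 1..8
random.shuffle(trials)              # randomized execution order
for run, std in enumerate(trials, start=1):
    print(f"run {run}: perform standard-order trial {std}")
```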

Page 14: Design of Experiments –  Methods and Case Studies

From Design Expert:

Page 15: Design of Experiments –  Methods and Case Studies

Randomized by Design Expert

Std order  Run order  A: ball type  B: arm length  C: release angle
    8          1           1             12               75
    4          2           1             12               45
    5          3          -1             10               75
    7          4          -1             12               75
    3          5          -1             12               45
    6          6           1             10               75
    1          7          -1             10               45
    2          8           1             10               45

Page 16: Design of Experiments –  Methods and Case Studies

Experiment trials & results

   A    B   AB    C   AC   BC  ABC   Result
  -1   -1    1   -1    1    1   -1    76.5
   1   -1   -1   -1   -1    1    1    78.5
  -1    1   -1   -1    1   -1    1    87.75
   1    1    1   -1   -1   -1   -1    89
  -1   -1    1    1   -1   -1    1    81
   1   -1   -1    1    1   -1   -1    77.5
  -1    1   -1    1   -1    1   -1    79
   1    1    1    1    1    1    1    77.5

Contrast:  -1.75   19.75   1.25   -16.8   -8.25   -23.8   2.75
Effect:   -0.4375   4.938  0.313   -4.19   -2.06   -5.94   0.688
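A sketch (assumed, not from the slides) that reproduces the contrast and effect arithmetic above: each contrast is the signed sum of the results down a column, and each effect is the contrast divided by 4 because 4 runs sit at each level of every column.

```python
# Results listed in the standard order of the design matrix above.
results = [76.5, 78.5, 87.75, 89, 81, 77.5, 79, 77.5]
rows = [(a, b, a*b, c, a*c, b*c, a*b*c)
        for c in (-1, 1) for b in (-1, 1) for a in (-1, 1)]
labels = ["A", "B", "AB", "C", "AC", "BC", "ABC"]

for j, name in enumerate(labels):
    contrast = sum(row[j] * y for row, y in zip(rows, results))
    print(name, contrast, contrast / 4)   # e.g. B: contrast 19.75, effect 4.94
```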

Page 17: Design of Experiments –  Methods and Case Studies

Graph of significant effects

Page 18: Design of Experiments –  Methods and Case Studies

Detecting interactions between factors

• Two factors show an interaction in their effect on a response variable when the effect of one factor on the response depends on the level of another factor.

Page 19: Design of Experiments –  Methods and Case Studies

Interaction graph from Design Expert

Page 20: Design of Experiments –  Methods and Case Studies

Predicted distance based on calculated effects

• Distance = 80.84 + 4.94·X2 (arm length) – 4.19·X3 (release angle) – 2.06·X1·X3 – 5.94·X2·X3
• X1 = ball type: -1 slotted, +1 solid
• X2 = -1 at arm length of 10, +1 at arm length of 12
• X3 = -1 at release angle of 45, +1 at release angle of 75
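A small sketch evaluating the slide's prediction equation at coded settings; the function name is illustrative, and the coefficients are taken directly from the equation above.

```python
def predicted_distance(x1, x2, x3):
    # x1 = ball type, x2 = arm length, x3 = release angle, all coded -1/+1.
    return (80.84 + 4.94 * x2 - 4.19 * x3
            - 2.06 * x1 * x3 - 5.94 * x2 * x3)

# Example: slotted ball (x1 = -1), 12-inch arm (x2 = +1), 45-degree release (x3 = -1)
print(predicted_distance(-1, 1, -1))
```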

Page 21: Design of Experiments –  Methods and Case Studies

Poorly executed experiments

• If we are sloppy with control of factor levels or lazy with randomization, special causes invade the experiment and the error term can get unacceptably large. As a result, significant effects of factors don’t appear to be so significant.

Page 22: Design of Experiments –  Methods and Case Studies

The Best and the Worst

• Two teams: the Knot Random Team and the String Quartet Team. Each team designed a highly efficient 16-trial experiment with two levels for each factor to characterize their catapult’s capability and control factors.

Page 23: Design of Experiments –  Methods and Case Studies

Knot Random team results

• Mean square error = 1268
• Demonstration of capability for 6 shots with specifications 84 ± 4 inches: Cpk = 0.34

Page 24: Design of Experiments –  Methods and Case Studies

String Quartet team results

• Mean square error = 362
• Demonstration of capability for 6 shots with specifications 72 ± 4 inches: Cpk = 2.02
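For reference, a sketch of the standard Cpk calculation behind these capability comparisons; the mean and standard deviation below are placeholders, not the teams' actual shot data.

```python
def cpk(mean, sd, lsl, usl):
    # Process capability index: distance from the mean to the nearer spec
    # limit, in units of 3 standard deviations.
    return min(usl - mean, mean - lsl) / (3 * sd)

print(cpk(mean=72.5, sd=0.6, lsl=68, usl=76))   # illustrative values only
```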

Page 25: Design of Experiments –  Methods and Case Studies

String Quartet Best Practices

• Randomized trials done in prescribed order
• Factor settings checked on all trials
• Agreed to a specific process for releasing the catapult arm
• Landing point of the ball made a mark that could be measured to ¼ inch
• Catapult controls that were not varied as factors were measured frequently

Page 26: Design of Experiments –  Methods and Case Studies

Knot Random – Knot best practices

• Trials done in convenient order to hurry through apparatus changes

• Factor settings left at the wrong level from the previous trial in at least one instance

• Each operator did his/her best to release the catapult arm in a repeatable fashion

• Inspector made a visual estimate of where ball had landed, measured to nearest ½ inch

• Catapult controls that were not varied as factors were ignored after initial process set-up

Page 27: Design of Experiments –  Methods and Case Studies

Multivariable testing (MVT) as DoE

• “Shelf Smarts,” Forbes, 5/12/03
• DoE didn’t quite save Circuit City
• 15 factors culled from 3,000 employee suggestions
• Tested in 16 trials, in 16 stores
• Measured response = store revenue
• Implemented changes led to a 3% sales rise

Page 28: Design of Experiments –  Methods and Case Studies

Census Bureau Experiment

• “Why do they send me a card telling me they’re going to send me a census form???”

• Dillman, D.A., Clark, J.R., Sinclair, M.D. (1995) “How pre-notice letters, stamped return envelopes and reminder postcards affect mail-back response rates for census questionnaires,” Survey Methodology, 21, 159-165

Page 29: Design of Experiments –  Methods and Case Studies

1992 Census Implementation Test

• Factors:
  – Pre-notice letter – yes / no
  – SASE with census form – yes / no
  – Reminder postcard a few days after census form – yes / no
• Response = completed, mailed-back survey response rate

Page 30: Design of Experiments –  Methods and Case Studies

Experiment results –net savings in the millions

Letter  Envelope  Postcard  Response rate
  -        -         -          50%
  -        -         +          58%
  -        +         -          52.6%
  -        +         +          59.5%
  +        -         -          56.4%
  +        -         +          62.7%
  +        +         -          59.8%
  +        +         +          64.3%
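A sketch (not in the original slides or the cited paper) computing each mailing element's main effect on response rate from the 2^3 table above:

```python
rates = {  # (letter, envelope, postcard) -> response rate in percent
    (-1, -1, -1): 50.0, (-1, -1, 1): 58.0, (-1, 1, -1): 52.6, (-1, 1, 1): 59.5,
    (1, -1, -1): 56.4, (1, -1, 1): 62.7, (1, 1, -1): 59.8, (1, 1, 1): 64.3,
}
for j, name in enumerate(["pre-notice letter", "return envelope", "reminder postcard"]):
    hi = sum(r for run, r in rates.items() if run[j] == 1) / 4
    lo = sum(r for run, r in rates.items() if run[j] == -1) / 4
    print(f"{name}: {hi - lo:+.2f} percentage points")
```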

Page 31: Design of Experiments –  Methods and Case Studies

Surface Mount Technology (SMT) experiment – problem solving in a manufacturing environment

• 2 types of defects, probably related:
  – Solder balls
  – Solder-on-gold
• Statistician invited in for a “quick fix” experiment
• High-volume memory card product
• Courtesy of Lally Marwah, Toronto, Canada

Page 32: Design of Experiments –  Methods and Case Studies

Problem in screening / reflow operations

Process flow: Prep card → Solder paste screening → Component placement → Solder paste reflow → Clean card → Inspect (T2)

For the experiment, an Inspect (T1) step is inserted into this flow.

Page 33: Design of Experiments –  Methods and Case Studies

8 potential process factors

• Clean stencil frequency: 1/1, 1/10
• Panel washed: no, yes
• Misregistration: 0, 10 mil
• Paste height: 9 mil, 12 mil
• Time between screen / reflow: 0.5 hr, 4 hr
• Reflow card spacing: 18 in, 36 in
• Reflow pre-heat: cold, hot
• Oven: A, B

Page 34: Design of Experiments –  Methods and Case Studies

Experiment design conditions

• Resources only permit 16 trials
• Get efficiency from 2-level factors
• Measure both types of defects
• Introduce T1 inspection station for counting defects
• Same inspectors
• Same quantity of cards per trial

Page 35: Design of Experiments –  Methods and Case Studies

Can we measure effects of 8 factors in 16 trials? Yes.

   A    B    C    D    E    F    G    H   Response
   1   -1   -1   -1   -1    1    1    1      486
  -1    1   -1   -1    1   -1    1    1      221
   1    1   -1   -1    1    1   -1   -1      314
  -1   -1    1   -1    1    1    1   -1      604
   1   -1    1   -1    1   -1   -1    1      549
  -1    1    1   -1   -1    1   -1    1      354
   1    1    1   -1   -1   -1    1   -1      502
  -1   -1   -1    1    1    1   -1    1      222
   1   -1   -1    1    1   -1    1   -1      360
  -1    1   -1    1   -1    1    1   -1      649
   1    1   -1    1   -1   -1   -1    1      418
  -1   -1    1    1   -1   -1    1    1     1321
   1   -1    1    1   -1    1   -1   -1      993
  -1    1    1    1    1   -1   -1   -1      893
   1    1    1    1    1    1    1    1      840

Factor:    A     B     C     D     E     F     G     H
Effect:   5.5   -62   404   314  -109   5.5   136  -7.3

Page 36: Design of Experiments –  Methods and Case Studies

7 more columns contain all interactions

• The 7 remaining contrast columns each contain a group of confounded (aliased) two-factor interactions:

  AB = CG = DH = EF    effect:  -16
  AC = DF = BG = EH    effect:  -78
  BC = DE = AG = FH    effect: -157
  AD = CF = BH = EG    effect:  124
  BD = CE = AH = FG    effect:   38
  CD = BE = AF = GH    effect:  196
  AE = DG = BF = CH    effect:   25
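A sketch of where these alias groups come from. The generators below (E = BCD, F = ACD, G = ABC, H = ABD) are an assumption inferred from the factor settings shown two slides back; they are not stated in the original. With them, every two-factor interaction shares a ±1 column with three others.

```python
from itertools import combinations, product

# Base 2^4 full factorial in A-D, with E-H built from the assumed generators.
runs = []
for d, c, b, a in product((-1, 1), repeat=4):
    e, f, g, h = b * c * d, a * c * d, a * b * c, a * b * d
    runs.append(dict(zip("ABCDEFGH", (a, b, c, d, e, f, g, h))))

# Two-factor interactions whose product columns are identical are confounded.
groups = {}
for x, y in combinations("ABCDEFGH", 2):
    column = tuple(r[x] * r[y] for r in runs)
    groups.setdefault(column, []).append(x + y)

for members in groups.values():
    print(members)    # e.g. ['AB', 'CG', 'DH', 'EF']
```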

Page 37: Design of Experiments –  Methods and Case Studies

Normal plot for factor effects on solder ball defects

[Figure: “Normal plot – 15 mean effects.” Normal probability plot of the column effects (Z variate from -2.5 to 2.5 vs. mean effect from -200 to 500); the point labeled CD falls well off the line.]

Page 38: Design of Experiments –  Methods and Case Studies

Which confounded interaction is significant?

• AF, BE, CD, or GH?
• The main effects C and D are significant, so engineering judgement tells us CD is the true significant interaction.
• C is misregistration
• D is paste height

Page 39: Design of Experiments –  Methods and Case Studies

Conclusions from experiment

• Increased paste height (D+) acts together with misregistration to increase the area of paste outside of the pad, leading to solder balls of dislodged extra paste.

• Solder ball occurrence can be reduced by minimizing the surface area and mass of paste outside the pad.

Page 40: Design of Experiments –  Methods and Case Studies

Implemented solutions

• Reduce variability and increase accuracy in registration.
• Lowered solder ball rate by 77%.
• More complete solution: shrink the paste stencil opening – the pad accommodates variability in registration.

Page 41: Design of Experiments –  Methods and Case Studies

The Power of Efficient Experiments

• More information from fewer resources
• The thought process of experiment design brings out:
  – potential factors
  – relevant measurements
  – attention to variability
  – discipline in experiment trials

Page 42: Design of Experiments –  Methods and Case Studies

Optimization – Back to the Catapult

• Optimize two responses for the catapult:
  – Hit a target distance
  – Minimize variability
• Suppose the 8 trials in the catapult experiment were each run with 3 replicates, and we used the mean and standard deviation of the 3 replicates as responses.

Page 43: Design of Experiments –  Methods and Case Studies

8 catapult runs with response = standard deviation (sdev)

  A    B    C    sdev
 -1   -1   -1    1.6
  1   -1   -1    2.5
 -1    1   -1    1.5
  1    1   -1    2.6
 -1   -1    1    1.4
  1   -1    1    2.4
 -1    1    1    1.4
  1    1    1    2.4
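A quick sketch (assumed, not from the slides) of the ball-type comparison behind the next slide's conclusion, using the standard deviations above:

```python
sdev = [1.6, 2.5, 1.5, 2.6, 1.4, 2.4, 1.4, 2.4]   # responses, standard order
ball = [-1, 1, -1, 1, -1, 1, -1, 1]                # factor A: -1 slotted, +1 solid

slotted = [s for a, s in zip(ball, sdev) if a == -1]
solid = [s for a, s in zip(ball, sdev) if a == +1]
print(sum(slotted) / 4, sum(solid) / 4)   # slotted balls average a lower std dev
```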

Page 44: Design of Experiments –  Methods and Case Studies

Slotted balls have less variability

Page 45: Design of Experiments –  Methods and Case Studies

Desirability - Combining Two Responses

[Figures: “Desirability for distance” – desirability d (0 to 1) plotted against distance (70 to 100 inches); “Desirability for std dev” – desirability d (0 to 1) plotted against standard deviation (0 to 4.5).]

Page 46: Design of Experiments –  Methods and Case Studies
Page 47: Design of Experiments –  Methods and Case Studies

Maximum Desirability

• The modeled response equation allows hitting the target distance of 84, so d = 1
• Best possible standard deviation according to the model is 1.475
• d (for std dev) = (3 − 1.475)/(3 − 1) = 0.7625
• D = sqrt(1 × 0.7625) = 0.873
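A sketch of the desirability arithmetic on this slide, assuming the std-dev desirability falls linearly from 1 at a standard deviation of 1 to 0 at 3 (consistent with the (3 − 1.475)/(3 − 1) formula above):

```python
import math

def d_stddev(s, best=1.0, worst=3.0):
    # Linear desirability: 1 at the best achievable value, 0 at the worst.
    return min(max((worst - s) / (worst - best), 0.0), 1.0)

d_distance = 1.0                     # model can hit the 84-inch target exactly
d_sd = d_stddev(1.475)               # = 0.7625
D = math.sqrt(d_distance * d_sd)     # overall desirability = 0.873
print(d_sd, D)
```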

Page 48: Design of Experiments –  Methods and Case Studies

How about a little baseball?

• Questions?
• Thank you
• E-mail me at [email protected]
• Find my slides at http://course1.winona.edu/drand/web/