
Creating, Critiquing and Interpreting Run Charts and SPCC

Heather Kaplan MD, MSCE
Assistant Professor of Pediatrics, Perinatal Institute and The James M. Anderson Center for Health Systems Excellence, Cincinnati Children's Hospital Medical Center, Cincinnati, OH

Heather Kaplan MD, MSCE is an Assistant Professor of Pediatrics in the Perinatal Institute and the James M. Anderson Center for Health Systems Excellence at Cincinnati Children's Hospital Medical Center (CCHMC). Heather is a neonatologist and health services researcher interested in enhancing care delivery and studying how systems of care can be improved using innovative approaches. She completed her neonatal-perinatal fellowship training, including earning a Master of Science degree in clinical epidemiology, at The Children's Hospital of Philadelphia/University of Pennsylvania. She joined the faculty at CCHMC in August 2007. Heather's early research focused on understanding variation in the adoption of evidence-based practices in neonatal care and on quality improvement as a strategy for implementing evidence in practice. With funding from the Robert Wood Johnson Foundation, she studied the role of context in the success of quality improvement initiatives and developed a model, the Model for Understanding Success in Quality (MUSIQ). MUSIQ is a tool for developing theories about which aspects of context help or hinder a specific project, and for designing and implementing tests of changes to modify those aspects of context. Her current work examines the way research and improvement networks ("learning networks") can be used to improve care delivery and outcomes. She is specifically interested in scaling improvement to reach entire populations of patients and in the ways technology, quality improvement methods, and N-of-1 trial methods can be combined to create a personalized learning healthcare system for the individual. Heather also has extensive experience with front-line quality improvement in perinatal care. Dr. Kaplan serves as the Improvement Advisor for the Ohio Perinatal Quality Collaborative (OPQC) neonatal improvement work. She also serves as a faculty expert for Vermont Oxford Network quality collaboratives and has been working with teams to improve their system of improvement by using MUSIQ to identify and modify key aspects of context that are affecting the success of their quality improvement projects and to help them engage with senior leadership around their improvement work.

Munish Gupta MD, MMSc
Neonatologist, Beth Israel Deaconess Medical Center, Boston, MA

Munish Gupta MD, MMSc is a staff neonatologist and the Director of Quality and Safety for the Department of Neonatology at Beth Israel Deaconess Medical Center in Boston, MA. He is also chair of the Neonatal Quality Improvement Collaborative of Massachusetts.

Annual Quality Congress Breakout Session, Sunday, October 29, 2017
Creating, Critiquing and Interpreting Run Charts and SPCC
Objective: Participate in a workshop aimed at best practices in interpreting QI data, including run charts and SPCC.


Creating, Critiquing and Interpreting Run Charts and Control Charts

Heather Kaplan MD, MSCE

Munish Gupta MD, MMSc

VON Annual Meeting & Quality Congress Workshop

October 2017

Disclosures

Heather Kaplan MD, MSCE and Munish Gupta MD, MMSc have no relevant financial disclosures related to the content of this workshop.

Learning Objectives

Participate in a workshop aimed at best practices in interpreting QI data, including run charts and SPCC.

A Quick Review of the Basics…

Data Over Time: Understanding Variation

• In quality improvement, we are looking for change in key data
• But there is natural background variation in all things we do – a fact of life
• We need tools to identify true changes versus natural variation
• And, we would like to detect true change fast


Signal vs. Noise

SIGNAL
• means something
• contains information
• difference with a distinction
• special cause variation — specific causes not part of the usual process (good or bad)

NOISE
• statistically indistinguishable from other data points
• contains no new information
• difference without a distinction
• common cause variation — causes inherent as part of the usual process (good or bad)

Noise: a commute example
• Takes 28–35 min to get to work
• Variation due to factors inherent in the process: lead-footed-ness, departure time (6:40 vs. 6:43 AM), Starbucks mobile order speed

Signal: a commute example
• Takes 56 min, 58 min to get to work
• Variation due to a specific cause — road closure in the square by my house
• Eliminate the special cause (construction complete), and drive time returns to the stable process (28–35 min)

Why this is important

Type of variation → type of improvement action:

Special cause → Reduce unnatural variation → Establish a stable work process
Common cause → Reduce natural variation, improve the basic process → Improve overall outcomes

Tampering

• Responding to a value thinking it's a signal when actually it's noise
• Increases process and system variability

Walter Shewhart: Signal vs. Noise

Statistical Process Control
• Tools to help distinguish signal from noise, to limit tampering
• Plot data over time
• Interpret visually and statistically … easy for those on the front lines!


Statistical Process Control Tools

1. Run charts – the minimal standard
2. Control charts

Keys:
• Plot and evaluate data over time
• Interpret visually and statistically

What is a Run Chart?

• Visual display of data over time, annotated
• Center line: median or mean value

Run Charts

• Minimum standard for QI project data
• Can start with the first few data points!
• Need at least 10 data points to use rules for detecting special cause
• Simple to create (no software needed)
• Can be used with all types of data
• But… not as powerful as a control chart

Run Charts and Control Charts

• Both are a running record of the process over time: value of result on the y-axis, unit of time on the x-axis (e.g., days, weeks, months, quarters)
• Run chart: center line is the median
• Control chart: center line is often the mean, with control limits (upper control limit, UCL; lower control limit, LCL) that reflect the inherent variability in the data, i.e., the extent of common cause variation

[Figure: example control chart with mean = 49.77, UCL = 58.77, LCL = 40.77; points outside the limits are flagged as special cause]

• Each time you add a data point, ask yourself: "Based on the preceding data, is the additional data point consistent with predicted performance (likely) or not consistent with predicted performance (unlikely)?"
• A stable process performs predictably over time, with an expected amount of variation
• If your next data point is consistent with predicted performance, there is common cause variation (noise)
• If your next data point is not consistent with predicted performance, there is special cause variation (signal)

Control Charts

Types of data:

Continuous data
1. Numerical value for each unit in a group

Discrete (integer) data
2. Classification: presence or not of an attribute
3. Count: how many attributes occur in a sample

Constructing Control Charts

Type of data + sample size → type of chart → math (software)

Creating, Critiquing and Interpreting Run Charts and Control Charts

Heather Kaplan MD, MSCE / Munish Gupta MD, MMSc

October 29, 2017 4

Types of Data & Control Charts

Type of data                Common cause probability model    Example
Discrete – Classification   Binomial (parameter: p)           Patient develops an SSI (Y/N)
Discrete – Count            Poisson (parameter: λ)            Number of catheter-associated HAIs
Continuous                  Normal (parameters: μ, σ)         Time to deliver thrombolytics
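These probability models are what generate a control chart's limits. For reference, here is a sketch of the standard 3-sigma formulas from the SPC literature (not shown on the slide; n_i is the subgroup size and MR-bar is the average moving range):

```latex
% Standard 3-sigma control limits implied by each common cause model
\text{p-chart (binomial):} \quad \bar{p} \pm 3\sqrt{\bar{p}(1-\bar{p})/n_i}
\text{c-chart (Poisson, fixed area of opportunity):} \quad \bar{c} \pm 3\sqrt{\bar{c}}
\text{u-chart (Poisson, variable area of opportunity):} \quad \bar{u} \pm 3\sqrt{\bar{u}/n_i}
\text{X chart (normal, individuals):} \quad \bar{x} \pm 2.66\,\overline{MR}
```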

Which Control Chart To Use

Adapted from Provost & Murray, The Health Care Data Guide, 2011, and Carey, Improving Healthcare with Control Charts, 2003.

Type of Data
• Discrete / attribute (data is counted or classified)
  – Count (events/errors are counted; numerator can be greater than denominator)
    · Equal or fixed area of opportunity → C chart (count of events)
    · Unequal or variable area of opportunity → U chart (events per unit)
  – Classification (each item is classified; numerator cannot be greater than denominator)
    · Equal or unequal subgroup size → P chart (percent classified)
• Continuous / variable (data is measured on a scale)
  – Subgroup size = 1 (each subgroup is a single observation) → X and MR charts (individual measures and moving range)
  – Subgroup size > 1 (each subgroup has multiple observations) → X-bar and S charts (average and standard deviation)
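As an aside (my own sketch, not part of the workshop materials), the decision tree above reads naturally as a small function:

```python
def select_control_chart(data_type, count_type=None, equal_area=None, subgroup_size=None):
    """Walk the chart-selection decision tree above and return the chart to use."""
    if data_type == "discrete":
        if count_type == "count":  # events counted; numerator may exceed denominator
            return "C chart (count of events)" if equal_area else "U chart (events per unit)"
        if count_type == "classification":  # numerator cannot exceed denominator
            return "P chart (percent classified)"
        raise ValueError("discrete data must be 'count' or 'classification'")
    if data_type == "continuous":
        if subgroup_size == 1:  # each subgroup is a single observation
            return "X and MR charts (individual measures and moving range)"
        return "X-bar and S charts (average and standard deviation)"
    raise ValueError("data_type must be 'discrete' or 'continuous'")

# Examples from the table above
print(select_control_chart("discrete", count_type="classification"))            # SSI yes/no -> P chart
print(select_control_chart("discrete", count_type="count", equal_area=False))   # HAIs -> U chart
```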

Identifying Common vs. Special Cause

                                           ACTUAL SITUATION
ACTION                                     No special cause          Special cause
                                           is occurring in system    is occurring in system
Take action on individual outcome
(treat as special)                         MISTAKE 1                 OK
Treat outcome as part of system;
work on changing the system
(treat as common)                          OK                        MISTAKE 2

Provost, LP and Murray S. The Data Guide. 2008

Rules for Identifying Special Cause (Signals): Run Charts

Perla et al, BMJ Qual Saf 2011; 20:46-51

[Figure: the run chart rules from Perla et al – shift, trend, too many or too few runs, and astronomical data point]
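These run chart rules are easy to automate. Below is a minimal sketch (my own code, not from the workshop) of the shift and trend rules, using the thresholds from Perla et al: a shift is 6 or more consecutive points on one side of the median, and a trend is 5 or more consecutively increasing or decreasing points (Perla et al handle repeated values slightly differently, ignoring repeats rather than resetting the count). The runs rule needs the published table of expected run counts, so the run count is only reported here:

```python
import statistics

def run_chart_signals(values):
    """Check a run chart for shift and trend signals (rules per Perla et al 2011)."""
    median = statistics.median(values)
    sides = [v > median for v in values if v != median]  # points on the median are skipped

    # Shift: 6 or more consecutive points on the same side of the median
    shift, streak = False, 1
    for prev, cur in zip(sides, sides[1:]):
        streak = streak + 1 if prev == cur else 1
        shift = shift or streak >= 6

    # Trend: 5 or more consecutive points all going up or all going down
    trend, up, down = False, 1, 1
    for prev, cur in zip(values, values[1:]):
        up = up + 1 if cur > prev else 1
        down = down + 1 if cur < prev else 1
        trend = trend or up >= 5 or down >= 5

    # Runs: count crossings of the median; compare to the published table (not shown)
    runs = 1 + sum(a != b for a, b in zip(sides, sides[1:]))
    return {"median": median, "shift": shift, "trend": trend, "runs": runs}

# SPA scores from Example 1 (first 14 infants): the shift rule fires,
# matching the workshop's note that the signal was evident by patient 14
spa = [6, 3, 7, 7, 4, 5, 6, 7, 7, 6, 9, 8, 9, 7]
print(run_chart_signals(spa))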

Rules for Identifying Special Cause (Signals): Control Charts

TEST 1: 1 point outside the outer control limit
TEST 2: 2 out of 3 points more than 2 SD from the center line
TEST 3: Run of 8 points in a row on one side of the center line
TEST 4: Trend of 6 points in a row increasing or decreasing
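As with the run chart rules, these four tests can be sketched in a few lines. This is an illustration assuming the center line and sigma (one third of the distance from the center line to a control limit) are already known:

```python
def control_chart_tests(values, center, sigma):
    """Apply TESTS 1-4 above; return the set of tests that signal special cause."""
    signals = set()
    # TEST 1: one point beyond the 3-sigma control limits
    if any(abs(v - center) > 3 * sigma for v in values):
        signals.add("test1")
    # TEST 2: 2 of 3 consecutive points more than 2 sigma from center, same side
    for a, b, c in zip(values, values[1:], values[2:]):
        for side in (+1, -1):
            if sum(side * (v - center) > 2 * sigma for v in (a, b, c)) >= 2:
                signals.add("test2")
    # TEST 3: 8 points in a row on one side of the center line
    for i in range(len(values) - 7):
        window = values[i:i + 8]
        if all(v > center for v in window) or all(v < center for v in window):
            signals.add("test3")
    # TEST 4: 6 points in a row steadily increasing or decreasing
    for i in range(len(values) - 5):
        w = values[i:i + 6]
        if all(x < y for x, y in zip(w, w[1:])) or all(x > y for x, y in zip(w, w[1:])):
            signals.add("test4")
    return signals

# Example using the chart shown earlier (mean 49.77, UCL 58.77 => sigma = 3.0)
print(control_chart_tests([50, 48, 61, 52, 47], center=49.77, sigma=3.0))
```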

Stable Process

• Process well defined and predictable
• Range of variation (performance) is intrinsic to the process
• Common cause: sampling error, noise; no signals
• Changing the results achieved by a statistically controlled process requires a new process… the current process is perfectly designed to get what it gets


Unstable Process

• Process not defined and unpredictable
• Range of variation (performance) is not intrinsic – it is influenced by external perturbations
• Special cause: outside chance variation – signals
• Changing the results achieved by a statistically uncontrolled process begins with removing the special causes to establish what the core process represents
• Learning from experience with unstable processes is severely limited… all one can say is that no one knows what will happen next!

Interpreting a Run Chart or Control Chart

1. Is this the right chart?
   • Are the measures appropriate (y-axis)?
   • Is the time analysis appropriate (x-axis)?
   • Is the sample size for each data point adequate?
   • Is the control chart right for the type of data/measure?
2. Are there enough data points for meaningful conclusions?
3. Is there evidence of special cause variation?
4. Does the chart match your knowledge of your system and its context?

Examples

Example 1: Providence St. Vincent's

Providence St. Vincent's, Portland OR
Aim: increase morbidity-free survival of extremely preterm infants
Measures: many, including Standardized Process Assessment (SPA) scores on each infant, measuring performance on bundles of potentially best practices in each of 8 infant care areas
Changes: creation of a Small Baby Team (SBT) and Small Baby Unit (SBU), then multiple PDSAs addressing each care area
Data shown: one of their SPA scores

Describe the chart!

• What kind of chart is it?

• What is the unit of measurement on the x-axis?

• What is the main measure?

• What is the measure of central tendency?

• What are the different lines?

• What’s the sample size for each period?

• Are the process changes labelled (annotated)?

• Which direction is better?

• Is this the right chart?

• Is there evidence of improvement?


Observation    Value   Mean   Goal
1              6       6      8
2* outborn     3       6      8
3              7       6      8
4              7       6      8
5              4       6      8
6              5       6      8
7              6       6      8
8              7       6      8
9              7       6      8
10             6       6      8
11             9       8.2    8
12             8       8.2    8
13             9       8.2    8
14             7       8.2    8
15             9       8.2    8
16             9       8.2    8
17* outborn    7       8.2    8
18* outborn    8       8.2    8
19             8       8.2    8
21             9       8.2    8
22             9       8.2    8
23* outborn    9       8.2    8
24             7       8.2    8
25* outborn    8       8.2    8
26             8       8.2    8
27* outborn    8       8.2    8
28             8       8.2    8
29             6       8.2    8
31             9       8.2    8
33* outborn    9       8.2    8
34             8       8.2    8
35             8       8.2    8

Thoughts?

• Clear visual evidence of improvement
• Clear shift (points 1 to 10)
• Just enough runs (8, with expected range 7 to 17)

The run chart shows a clear signal, and supports the impact of the improvement efforts.

Using for QI

• Start run chart with baseline data

• After at least 10 data points, can use rules for identifying signal

• If no signal noted, and if process is stable, can extend mean and add new data points

Using for QI

• Signal VERY evident – would have been seen by patient 14

• Can add annotations and goal line

Using for QI

• With evidence of signal after patient 10, and knowledge that process changed, can shift the center line to reflect new performance

Using for QI

• Tells the ‘story’!

• Significant improvement with first intervention

• Further performance compared to new median

• Process stable since first change, and at goal

• Need new interventions if further improvement desired


Run Charts: Good Practices

• Need a title, labelled axes including the primary measure, a center line, direction of improvement, and an indication of sample size
• Should generally use the median for the center line
• Can start with just a few data points, but at least 10 points are needed to establish the median and use rules for signal
• If the process appears stable, can extend the median and compare future performance to the established median
• Shift the median if there is a clear signal and knowledge of a process change
• Use annotations and a goal line!
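To illustrate these practices (a sketch, not the workshop's actual chart), the Example 1 data can be plotted with matplotlib, establishing the median from the first 10 points and adding a goal line and an annotation; the annotation's placement is my assumption:

```python
import statistics
import matplotlib.pyplot as plt

# SPA scores from Example 1 (first 14 observations)
values = [6, 3, 7, 7, 4, 5, 6, 7, 7, 6, 9, 8, 9, 7]
baseline_median = statistics.median(values[:10])  # establish the median from the first 10 points

fig, ax = plt.subplots()
ax.plot(range(1, len(values) + 1), values, marker="o")
ax.axhline(baseline_median, linestyle="--", label=f"Baseline median = {baseline_median}")
ax.axhline(8, color="green", linestyle=":", label="Goal = 8")  # goal line
ax.annotate("Small Baby Unit opened", xy=(10, values[9]),      # annotate the process change
            xytext=(6, 9.5), arrowprops={"arrowstyle": "->"})
ax.set_title("SPA Score Run Chart (higher is better)")
ax.set_xlabel("Observation (infant)")
ax.set_ylabel("SPA score")
ax.legend()
plt.show()
```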

Example 2: Emory

Emory, Atlanta GA
Aim: decrease prolonged use of antibiotics in NICU infants
Measures: treatment with greater than 2 days of initial antibiotics among infants started on antibiotics
Changes: multiple PDSAs, including pharmacy audits, provider feedback, and practice guidelines
Data shown: main outcome chart

Describe the chart!

• What kind of chart is it?

• What is the unit of measurement on the x-axis?

• What is the main measure?

• What is the measure of central tendency?

• What are the different lines?

• What’s the sample size for each period?

• Are the process changes labelled (annotated)?

• Which direction is better?

• Is this the right chart?

• Is there evidence of improvement?

Monthly period    Infants treated >48 hr    Infants receiving initial antibiotics
Jan 2015          13                        30
Feb 2015          7                         11
Mar 2015          16                        22
Apr 2015          11                        29
May 2015          15                        36
Jun 2015          11                        22
Jul 2015          14                        34
Aug 2015          13                        34
Sep 2015          6                         24
Oct 2015          13                        33
Nov 2015          14                        28
Dec 2015          15                        36
Jan 2016          10                        24
Feb 2016          5                         22
Mar 2016          7                         23
Apr 2016          7                         19
May 2016          14                        23
Jun 2016          12                        23
Jul 2016          4                         23
Aug 2016          8                         22
Sep 2016          10                        22
Oct 2016          9                         22
Nov 2016          12                        24
Dec 2016          4                         19
Feb 2017          9                         21
Mar 2017          14                        34
Apr 2017          5                         20
May 2017          11                        29
Jun 2017          3                         20
Jul 2017          11                        48
Aug 2017          10                        27
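Before looking at the interpretation, it may help to see how p-chart limits arise from this table; here is a minimal sketch using standard 3-sigma binomial limits (variable names are mine):

```python
from math import sqrt

# Emory data: infants treated > 48 hr (numerators) over infants started on antibiotics (denominators)
treated = [13, 7, 16, 11, 15, 11, 14, 13, 6, 13, 14, 15, 10, 5, 7, 7,
           14, 12, 4, 8, 10, 9, 12, 4, 9, 14, 5, 11, 3, 11, 10]
started = [30, 11, 22, 29, 36, 22, 34, 34, 24, 33, 28, 36, 24, 22, 23, 19,
           23, 23, 23, 22, 22, 22, 24, 19, 21, 34, 20, 29, 20, 48, 27]

p_bar = sum(treated) / sum(started)  # overall rate: about 0.39, matching the 39% center line
print(f"center line p-bar = {p_bar:.2f}")

for n in started[:3]:
    halfwidth = 3 * sqrt(p_bar * (1 - p_bar) / n)  # limits vary with each month's n
    lcl, ucl = max(0.0, p_bar - halfwidth), min(1.0, p_bar + halfwidth)
    print(f"n = {n}: LCL = {lcl:.0%}, UCL = {ucl:.0%}")
```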

Thoughts?

• Appropriate control chart, with an overall rate of > 2 days of antibiotics of 39%, and outer control limits of approximately 10% to 70%
• While visual interpretation may not suggest a clear trend in the data, statistical interpretation shows evidence of improvement with special cause
• Could add annotations and a goal line to make the chart even more effective


Thoughts?

• With annotations, it appears improvement was noted after the third PDSA, to better than goal
• If special cause variation is seen, with a change in the system that is expected to last, can update the mean and limits

Thoughts?

• Tells more of the story, suggesting improvement from 41% to 25% after three PDSA cycles
• Since the 2nd mean is based on only 3 data points, need to update the mean monthly until 15-20 data points are available
• Future performance should be compared to the 2nd mean

Control Charts: Good Practices

• Need a clear title, axis labels, center line, control limits, direction of improvement, and ideally sample size
• Use statistical rules to identify special cause variation
• Match the graph to knowledge of the system
• Need 15-20 data points to establish a reliable mean and limits
• If there is evidence of special cause variation and knowledge of a change in the system that is expected to last, can revise the mean and limits

Example 3: Nationwide Children's

Nationwide Children's, Columbus OH
Aim: decrease post-operative complications of surgically placed gastrostomy tubes in NICU patients
Measures: complication rate in surgically placed G-tubes
Changes: multiple PDSAs, including a G-tube crib card, standardized dressing changes, and standardized feeding plans
Data shown: main outcome chart

Describe the chart!

• What kind of chart is it?

• What is the unit of measurement on the x-axis?

• What is the main measure?

• What is the measure of central tendency?

• What are the different lines?

• What’s the sample size for each period?

• Are the process changes labelled (annotated)?

• Which direction is better?

• Is this the right chart?

• Is there evidence of improvement?


Month    # of G-tubes    # of Complications    Complication Rate

Mar‐15 5 0 0%

Apr‐15 5 3 60%

May‐15 4 1 25%

Jun‐15 5 2 40%

Jul‐15 5 3 60%

Aug‐15 4 1 25%

Sep‐15 4 3 75%

Oct‐15 8 2 25%

Nov‐15 8 3 38%

Dec‐15 5 0 0%

Jan‐16 4 2 50%

Feb‐16 3 1 33%

Mar‐16 6 1 17%

Apr‐16 3 0 0%

May‐16 4 3 75%

Jun‐16 5 2 40%

Jul‐16 6 2 33%

Aug‐16 3 0 0%

Sep‐16 8 1 13%

Oct‐16 7 0 0%

Nov‐16 4 3 75%

Dec‐16 1 0 0%

Jan‐17 4 1 25%

Feb‐17 3 1 33%

Apr‐17 1 0 0%

May‐17 3 0 0%

Jun‐17 8 1 13%

Jul‐17 7 1 14%

Aug‐17 4 0 0%

Sep‐17 4 0 0%

Thoughts?

• Overall trend seems to be in the right direction
• Control limits are very wide, particularly the lower limits – sample size is small
• No special cause variation seen (although it does not seem to be a stable process)

Using for QI

• Start with baseline data
• Initial chart with 12 data points reflecting the first 'process stage'
• Mean fixed at 35%?
• Additional points added, compared to the fixed mean – special cause variation!
• But… hard to designate the process as stable (and fix the mean) with just 12 data points, and with such wide control limits
• Other approaches?

Bimonthly Control Chart

• Some improvement in sample size, with narrower control limits (although the lower limit is still at zero)

Quarterly Control Chart

• Further improvement in sample size, with a trend that seems more meaningful
• Lower control limit still zero
• However, only 10 data points – should start with a run chart until additional data points are available
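A brief sketch of the aggregation idea: grouping months from the Example 3 table into consecutive three-month blocks (my assumption about how the quarterly chart was built) increases each subgroup's size and narrows the limits, though the lower limit here still truncates at zero:

```python
from math import sqrt

# Monthly G-tube data from Example 3: (G-tubes placed, complications), Mar-15 onward
monthly = [(5, 0), (5, 3), (4, 1), (5, 2), (5, 3), (4, 1), (4, 3), (8, 2), (8, 3),
           (5, 0), (4, 2), (3, 1)]  # first 12 months shown; extend with the full table

# Aggregate consecutive 3-month blocks to increase subgroup size
quarterly = [(sum(n for n, _ in monthly[i:i + 3]), sum(c for _, c in monthly[i:i + 3]))
             for i in range(0, len(monthly), 3)]

p_bar = sum(c for _, c in quarterly) / sum(n for n, _ in quarterly)  # about 0.35
for n, c in quarterly:
    halfwidth = 3 * sqrt(p_bar * (1 - p_bar) / n)  # larger n -> narrower limits
    print(f"n = {n:2d}: rate = {c / n:.0%}, "
          f"LCL = {max(0.0, p_bar - halfwidth):.0%}, UCL = {min(1.0, p_bar + halfwidth):.0%}")
```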


Control Charts: Good Practices

• Need to interpret control charts visually, as well as using statistical rules
• Adequate sample size is important – try to avoid control limits that are mostly at the extremes
• After 12 data points, can create a trial mean and limits, but update them until 15-20 data points are available
• After 20 points, if the process is stable, fix the mean and limits

Other examples? Questions?

References

Benneyan JC, Lloyd RC, Plsek PE. Statistical process control as a tool for research and healthcare improvement. Qual Saf Health Care, 2003;12(6):458-64.

Benneyan JC. The design, selection, and performance of statistical control charts for healthcare process improvement. Int J Six Sigma and Competitive Advantage, 2008;4(3):209-239.

Carey RG. Improving Healthcare with Control Charts: Basic and Advanced SPC Methods and Case Studies. Milwaukee, WI: ASQ Quality Press, 2003.

Gupta M, Kaplan HC. Using statistical process control to drive improvement in neonatal care: a practical introduction to control charts. Clinics in Perinatology, 2017:627-644.

Jordan V, Benneyan JC. Common errors in using healthcare SPC. In: Statistical Methods in Healthcare. Wiley, 2012, pp. 268-285.

Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide. 2nd ed. San Francisco, CA: Jossey-Bass, 2009.

Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf, 2011;20(1):46-51.

Provost LP, Murray SK. The Health Care Data Guide: Learning from Data for Improvement. 1st ed. San Francisco, CA: Jossey-Bass, 2011.