Difference-in-Difference Estimation for Policy and Practice Evaluation
Neal Wallace, Ph.D.
Portland State University
February 2014


Page 1: Difference-in-Difference Estimation for Policy and Practice Evaluation

Neal Wallace, Ph.D., Portland State University, February 2014

Page 2: Overview

• What is “difference-in-difference” estimation
• When is it used
• Why should you care
• Underlying assumption
• How it works
• Getting started
• Some best practices
• MH services research example
• Concluding remarks
• References

Page 3: What is “Difference-in-Difference” (D-in-D) Estimation?

• D-in-D estimation is a research design and empirical process intended to assess the “true” effect of a policy or practice intervention where random assignment is not feasible.
• The “true” effect of an intervention is the total effect of the intervention on an outcome, net of any changes in the outcome that would occur in the absence of the intervention (one way to write this is sketched below).
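One way to write this, using added notation that is not on the slide: let O denote the observed outcome for the intervention group and O⁰ the counterfactual outcome it would have had without the intervention. The “true” effect is then

```latex
\tau \;=\; \underbrace{(O_{\text{post}} - O_{\text{pre}})}_{\text{total observed change}}
\;-\; \underbrace{(O^{0}_{\text{post}} - O^{0}_{\text{pre}})}_{\text{change that would occur anyway}}
```

The second term is unobservable, which is why D-in-D uses a control group to stand in for it.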

Page 4: What is “Difference-in-Difference” (D-in-D) Estimation

• A D-in-D is the difference between two differences (or changes):
  ▫ Difference #1: the change in outcome for an intervention group from pre- to post-intervention.
  ▫ Difference #2: the change in outcome for a control (non-intervention) group over the same pre- to post-intervention periods.

Page 5: When is D-in-D Used?

• For policy or practice evaluations where experimental conditions reasonably exist except for randomization of subjects:
  ▫ Natural experiments – where the intervention is established independent of the researcher (e.g., public policy).
  ▫ Quasi experiments – where the researcher controls the intervention but randomization isn’t ethically or otherwise feasible.

Page 6: Why Should You Care?

• D-in-D is becoming the gold standard for observational services research.
• It’s effective and affordable.
• Programs and policy-makers love it.
• Incorporating it in your work can enhance your opportunities for funded research and publication.

Page 7: A Main Underlying Assumption

• Parallel trends – in the absence of the intervention, the unobserved differences between intervention and control groups are the same over time (a formal sketch follows this list).
  ▫ Relaxes the assumption that intervention and control groups are the same in every respect apart from the intervention (randomization is supposed to achieve this).
  ▫ The intervention group would follow the outcome “path” of the control group if there were no intervention.
  ▫ Any pre-intervention outcome differences between intervention and control groups are constant effects that can be factored (differenced) out.
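A formal sketch of this assumption, using the Oit notation introduced on Page 8 (group i = 1 control, i = 2 intervention; period t = 1 pre, t = 2 post) plus added counterfactual notation Oit(0) for the outcome that would occur with no intervention; the counterfactual notation is an addition, not from the slides:

```latex
E\left[\,O_{22}(0) - O_{21}(0)\,\right] \;=\; E\left[\,O_{12}(0) - O_{11}(0)\,\right]
```

In words: absent the intervention, the intervention group’s outcome would have changed by the same amount as the control group’s.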

Page 8: How it Works

• Given an outcome Oit measured for pre- and post-intervention time periods (t = 1, 2) and control/intervention groups (i = 1, 2):

                  Pre          Post         Change
  Intervention    O21          O22          O22 - O21
  Control         O11          O12          O12 - O11
  Difference      O21 - O11    O22 - O12    D-in-D

  D-in-D = (O22 - O21) - (O12 - O11)
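A purely hypothetical numeric illustration (these values are invented for exposition, not taken from the deck): if the intervention group’s outcome moves from 0.40 to 0.55 while the control group’s moves from 0.45 to 0.50 over the same window, then

```latex
\text{D-in-D} = (0.55 - 0.40) - (0.50 - 0.45) = 0.15 - 0.05 = 0.10
```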

Page 9: How it Works

• To estimate the D-in-D in a regression framework, we need dummy variables that identify the four subject-group and time-period combinations (a data-construction sketch follows this list):
  ▫ P(ost) = 1 in post periods, = 0 in pre periods
  ▫ I(ntervention) = 1 if intervention, = 0 if control
  ▫ P(ost) x I(ntervention) = 1 if post & intervention, = 0 otherwise
  ▫ Note – the control group in the pre-period is the “excluded” group; it is measured by the regression constant.
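A minimal sketch of building these dummies in Python with pandas; the data frame and its column names (subject, group, period, outcome) are hypothetical placeholders, not variables from the study:

```python
import pandas as pd

# Hypothetical long-format data: one row per subject-period observation.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "group":   ["intervention"] * 4 + ["control"] * 4,
    "period":  ["pre", "post"] * 4,
    "outcome": [0.40, 0.55, 0.38, 0.52, 0.45, 0.50, 0.44, 0.51],
})

# The three D-in-D dummies described on this slide.
df["I"] = (df["group"] == "intervention").astype(int)  # 1 if intervention, 0 if control
df["P"] = (df["period"] == "post").astype(int)         # 1 in post periods, 0 in pre
df["PxI"] = df["P"] * df["I"]                          # 1 only for intervention observations in the post period
print(df)
```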

Page 10: How it Works

• A D-in-D regression model would look like:

  Oit = B0 + B1*I + B2*P + B3*PxI + e

                  Pre        Post                   Change
  Intervention    B0 + B1    B0 + B1 + B2 + B3      B2 + B3
  Control         B0         B0 + B2                B2
  Difference      B1         B1 + B3                D-in-D

  D-in-D = B3
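A minimal estimation sketch, continuing the hypothetical df built after Page 9; statsmodels is an assumed tool choice, not one named in the deck:

```python
import statsmodels.api as sm

# Oit = B0 + B1*I + B2*P + B3*PxI + e; the constant B0 captures the
# control group in the pre-period (the "excluded" group from Page 9).
X = sm.add_constant(df[["I", "P", "PxI"]])
fit = sm.OLS(df["outcome"], X).fit()  # in practice, cluster standard errors by subject
print(fit.summary())
print("D-in-D estimate (B3):", fit.params["PxI"])
```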

Page 11: Getting Started

• You need:
  ▫ An intervention (change)
  ▫ Outcome measure(s)
  ▫ Comparison group(s)
  ▫ Information on subject characteristics

Page 12: Some Best Practices

• Know your intervention
  ▫ Is there clear documentation of what they are doing (fidelity)?
  ▫ Are there types of individuals that are more or less likely to respond to the intervention?
  ▫ Are there likely anticipatory or shock (short-term) effects? (“wash-out” periods)
• Know its environment
  ▫ Can you identify those receiving the intervention from those not receiving it?
  ▫ Is there anything else going on that might affect the outcomes you plan to measure?
  ▫ Why is this being done now? In this particular place? (endogeneity)

Page 13: Some Best Practices

• Take the parallel trend assumption seriously
  ▫ Thoughtfully choose control group(s), e.g., can subjects choose to be intervention or control? (selection)
  ▫ Test for stable differences in outcomes between control and intervention groups across pre-intervention time periods (a pre-trend test is sketched after this list).
  ▫ Minimize all observable differences (covariate, matching, or weighting methods addressing subject characteristics).
  ▫ Understand and be prepared to explain the “flow” of outcomes that results in the D-in-D – not just the D-in-D itself.
• Be thorough and transparent
  ▫ Seek additional ways to “test” your findings, e.g., “internal” D-in-Ds on intervention subjects more and less likely to be affected, to assure any outcome change is likely related to the intervention.
  ▫ Report all aspects of the conduct and context of the study.
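A minimal sketch of one such pre-trend test, assuming several pre-intervention periods are observed; the simulated data frame and its column names are hypothetical and exist only to illustrate testing group-by-period interactions before the intervention:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical pre-intervention panel: four quarters before the policy change,
# with a constant level difference between groups but no differential trend.
rng = np.random.default_rng(0)
pre_df = pd.DataFrame({
    "group":   np.repeat(["intervention", "control"], 200),
    "quarter": np.tile([1, 2, 3, 4], 100),
})
pre_df["outcome"] = (
    0.5
    + 0.05 * (pre_df["group"] == "intervention")
    + rng.normal(0, 0.1, len(pre_df))
)

# Regress the outcome on quarter dummies, the group indicator, and their
# interactions; under stable pre-intervention differences (parallel pre-trends)
# the interaction terms should be jointly close to zero.
X = pd.get_dummies(pre_df["quarter"], prefix="q", drop_first=True).astype(float)
X["I"] = (pre_df["group"] == "intervention").astype(float)
for q in ["q_2", "q_3", "q_4"]:
    X[f"I_x_{q}"] = X["I"] * X[q]
X = sm.add_constant(X)

fit = sm.OLS(pre_df["outcome"], X).fit()
print(fit.f_test("I_x_q_2 = 0, I_x_q_3 = 0, I_x_q_4 = 0"))  # joint test of pre-trend interactions
```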

Page 14: Example

• Estimate the effect of MH insurance parity in Oregon on receipt of MH outpatient care within 30 days of an MH inpatient stay.
  ▫ Start with an overall D-in-D to estimate the policy effect for all Oregonians experiencing parity.
  ▫ Use a pooled comparison group of subjects from Oregon, Washington, and California.
  ▫ Follow with an “internal” D-in-D estimating policy effects for individuals most likely to be affected by the policy.

Page 15: Table 2 – Estimated Average Marginal Effects: Parity vs. Non-Parity Observations¹

  Measure                             Estimate    SE       P
  Post-Parity Effect (D-in-D)           .114     .056    .042
  Post-Parity Period (Control)         -.033     .034
  Pre-Parity Period (Intervention)     -.057     .040
  Psychotic Disorder Discharge Dx       .076     .031    .015
  Female                                .046     .031
  Spouse                               -.064     .043
  Dependent                            -.147     .035    <.001
  Calendar Quarter 2                   -.000     .039
  Calendar Quarter 3                   -.061     .037
  Calendar Quarter 4                   -.087     .039    .027
  Observations                          888
  Unique Subjects                       727

  ¹ Derived from logistic regression results.
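The table footnote indicates the estimates are average marginal effects derived from logistic regression. A minimal sketch of that style of estimation, with an invented binary outcome (followup_30d) and invented dummies; this is not the study’s actual data or specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated binary outcome: 1 if outpatient follow-up occurred within 30 days.
rng = np.random.default_rng(1)
n = 800
d = pd.DataFrame({
    "I": rng.integers(0, 2, n),  # parity (intervention) indicator
    "P": rng.integers(0, 2, n),  # post-parity period indicator
})
d["PxI"] = d["I"] * d["P"]
index = -0.2 + 0.1 * d["I"] - 0.05 * d["P"] + 0.3 * d["PxI"]
d["followup_30d"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-index))).astype(int)

# Logit D-in-D, then average marginal effects analogous to the table above;
# the PxI row corresponds to the post-parity interaction (D-in-D) term.
X = sm.add_constant(d[["I", "P", "PxI"]])
res = sm.Logit(d["followup_30d"], X).fit(disp=0)
print(res.get_margeff(at="overall").summary())
```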

Page 16: Table 4 – Estimated Average Marginal Effects: Parity Observations Meeting Pre-Parity Quantitative Limits vs. All Other¹

  Measure                             Estimate    SE       P
  Post-Parity Effect (Met Limits)       .203     .093    .028
  Post-Parity Period (All Other)        .021     .050
  Pre-Parity Period (Met Limits)       -.153     .070    .028
  Psychotic Disorder Discharge Dx       .067     .051
  Female                                .019     .048
  Spouse                               -.042     .064
  Dependent                            -.138     .054    .011
  Calendar Quarter 2                    .064     .064
  Calendar Quarter 3                   -.053     .055
  Calendar Quarter 4                   -.070     .059
  Observations                          353
  Unique Subjects                       298

  ¹ Derived from logistic regression results.

Page 17: Concluding Remarks

• Thinking with a D-in-D mindset opens your eyes to “natural” experiments around you.
• The “science” of D-in-D can be readily learned from example – the “art” of D-in-D requires experience and a willingness to immerse yourself in the details of MH service provision and receipt.
• Regularized data collection protocols and D-in-D go hand in hand – each is a justification for the other.

Page 18: Some D-in-D References

• Some general ones:
  ▫ Angrist, J. D. & Pischke, J. S. (2008). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. ISBN 9780691120348.
  ▫ Buckley, J. & Shang, Y. (2003). Estimating policy and program effects with observational data: the “differences-in-differences” estimator. Practical Assessment, Research & Evaluation, 8(24). Retrieved February 3, 2014 from http://PAREonline.net/getvn.asp?v=8&n=24
  ▫ Meyer, B. D. (1995). Natural and quasi-experiments in economics. Journal of Business & Economic Statistics, 13(2), JBES Symposium on Program and Policy Evaluation, 151-161.
  ▫ Just Google “difference in difference” – many useful class notes from professors are out there.
• Some MH services ones:
  ▫ Wallace, N. T. & McConnell, K. J. (2013). Impact of comprehensive insurance parity on follow-up care after psychiatric inpatient treatment in Oregon. Psychiatric Services, 64(10), 961-966.
  ▫ McConnell, K. J., Gast, S. H., Ridgely, M. S., Wallace, N., Jacuzzi, N., Rieckmann, T., McFarland, B. H., & McCarty, D. (2012). Behavioral health insurance parity: Does Oregon's experience presage the national experience with the Mental Health Parity and Addiction Equity Act? American Journal of Psychiatry, 169, 31-38.
  ▫ Wallace, N. T., Bloom, J. R., Hu, T., & Libby, A. M. (2005). Psychiatric medication treatment patterns for adults with schizophrenia under Medicaid mental health managed care in Colorado. Psychiatric Services, 56(11), 1402-1408.
  ▫ Bloom, J. R., Hu, T. W., Wallace, N. T., Cuffel, B., Hausman, J., & Scheffler, R. (2002). Mental health costs and access under alternative capitation systems in Colorado. Health Services Research, 37(2), 315-340.

Page 19: Questions?

Thank You

[email protected]
Mark O. Hatfield School of Government
Portland State University