
Department of OUTCOMES RESEARCH

Daniel I. Sessler, M.D.
Michael Cudahy Professor and Chair
Department of OUTCOMES RESEARCH
The Cleveland Clinic

Clinical Research Design

Systematic Reviews and Meta-analysis

Literature Reviews

Reviews are important
• Often too much primary literature
• Clinicians cannot critically review all of the literature

Classical reviews
• Informed synthesis by the authors
– Most helpful when the authors are experts and active investigators
• Excellent perspective
– Integrates historical development with future directions
• Typically restricted to the best relevant articles
• Most suitable for reviewing an entire field
• Subject to author bias

Systematic Reviews

Useful for specific interventions and outcomes
• A specific, important, and sensible question is essential
• Equally effective for complications and for therapeutic outcomes

Standardized search of all relevant work
• Documented and reproducible selection process
• Tabular presentation, often stratified by
– Research approach
– Study quality
– Population
– Outcome

Synthesis can be
• Qualitative, based on the authors' expertise (and bias)
• Quantitative: meta-analysis

Meta-analysis

Statistical summary of a systematic review
• Effect size and significance
• Patient level (pooling of individual patients) or study level (aggregate statistics)
– Individual patient data are preferable, but rarely available

Usually used for randomized trials
• Can be used for observational studies, with great caution

Studies must evaluate similar treatments and outcomes

Suitable for various types of data
• Dichotomous, continuous, risk difference, relative risk, etc. (worked example below)

Generalizability is good; internal validity is variable
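To make the dichotomous case concrete, here is a minimal Python sketch that turns a single hypothetical 2x2 table into a log odds ratio, its standard error, and a 95% confidence interval. The counts are invented purely for illustration and are not taken from any study discussed in the lecture.

```python
# Minimal sketch: effect size for dichotomous data from one hypothetical trial.
import math

events_trt, n_trt = 12, 100    # events / total in the treatment arm (invented)
events_ctl, n_ctl = 24, 100    # events / total in the control arm (invented)

# 2x2 cells and the log odds ratio with its large-sample standard error
a, b = events_trt, n_trt - events_trt
c, d = events_ctl, n_ctl - events_ctl
log_or = math.log((a * d) / (b * c))
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)

# 95% confidence interval, back-transformed to the odds-ratio scale
lo = math.exp(log_or - 1.96 * se_log_or)
hi = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {math.exp(log_or):.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The same pair of numbers per study (log effect and its standard error) is what the pooling formulas later in the talk operate on.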

Data-acquisition

Individual studies are the unit of analysis
• Summary statistics are the data elements

Consider the studies to be like patients in a trial
• Rigorous a priori inclusion and exclusion criteria

Specify search strategies and sources of studies
• Which databases? Which search terms?
• Language restrictions?
• Randomized trials only?
• Primary outcomes only?
• Published versus unpublished?

Specify adjudication methods

Sample Data-extraction Form

Population

Comparison
• Treatment
• Active vs. placebo

Outcome(s)

Measures of quality

Surprisingly difficult
• Adjudication is critical
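As a rough illustration of the form above, here is a minimal sketch of an extraction record in code. The field names and the example study are hypothetical, not a standard schema; the point is simply that each included study becomes one structured record, extracted in duplicate and adjudicated.

```python
# Minimal sketch of a data-extraction record mirroring the sample form.
# Field names and the example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    study_id: str                        # e.g., first author and year
    population: str                      # who was enrolled
    treatment: str                       # intervention studied
    comparator: str                      # active comparator or placebo
    outcomes: list[str] = field(default_factory=list)
    randomized: bool = False             # quality items recorded explicitly
    allocation_concealed: bool = False
    blinded_outcome_assessment: bool = False
    dropouts_described: bool = False

record = ExtractionRecord(
    study_id="Example 2015",
    population="adults having major noncardiac surgery",
    treatment="drug X",
    comparator="placebo",
    outcomes=["30-day mortality"],
    randomized=True,
)
print(record)
```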

Evaluating Study Quality

No good way exists
• Many design errors are non-obvious or subtle

Various scoring systems are used, with points for
• Legitimate randomization
• Concealed allocation
• Blinded outcome evaluation
• Drop-outs and reasons described

Standard of care: report the quality of the included studies

Reporting Search Results

Major Sources of Error

Garbage in, garbage out
• A meta-analysis is never better than the underlying studies
• Cannot correct for methodologic errors or bias

Reporting bias
• Changed or omitted primary outcomes
• Statistically significant findings are 2.2 to 4.7 times more likely to be fully reported (Dwan 2008)

Subtle (or not-so-subtle) treatment and measurement differences

Publication bias
• Large trials are almost always published
• Positive studies are usually published, even if under-powered
• Small negative studies are less likely than others to be published
– Censoring by authors or corporate sponsors
– An appropriate editorial decision, but unpublished studies disappear
– Meta-analysis depends on knowing about all relevant results

Funnel Plots

[Funnel plot: SE of Log(OR) on the vertical axis versus Log(OR) on the horizontal axis]
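A minimal sketch of how such a funnel plot can be drawn. The studies here are simulated under assumed parameters solely to show the expected symmetric funnel in the absence of publication bias; real use would plug in the extracted log odds ratios and their standard errors.

```python
# Minimal funnel-plot sketch with simulated studies (assumed parameters).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_log_or = -0.3                              # assumed common effect
se = rng.uniform(0.05, 0.6, size=40)            # mix of large and small studies
log_or = rng.normal(true_log_or, se)            # observed effects scatter with SE

plt.scatter(log_or, se)
plt.axvline(true_log_or, linestyle="--")
plt.gca().invert_yaxis()                        # precise (small-SE) studies at the top
plt.xlabel("Log(OR)")
plt.ylabel("SE of Log(OR)")
plt.title("Funnel plot: symmetric when no publication bias")
plt.show()
```

Missing small negative studies would show up as a one-sided gap in the lower part of the funnel.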

Heterogeneity

Data: variation in study results exceeding chance

Biology: true differences related to methodology
• Differences in populations: age, gender, ethnicity, etc.
• Differences in drug dose (or in drug within a class)
• Unappreciated patient factors

Tests: chi-square, etc. (see the sketch after this list)

Analysis strategies
• Minor heterogeneity
– Report the amount
– A combined analysis may be sensible
• Treat serious heterogeneity as an interaction
– Stratify the analysis as for other effect modifiers
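A minimal sketch of the usual chi-square heterogeneity test (Cochran's Q) and the I² statistic derived from it; the study-level log odds ratios and standard errors below are invented for illustration.

```python
# Minimal sketch: Cochran's Q and I^2 for study-level log odds ratios.
import numpy as np
from scipy import stats

log_or = np.array([-0.6, -0.2, -0.9, 0.1, -0.4])   # invented study effects
se = np.array([0.30, 0.25, 0.45, 0.20, 0.35])      # their standard errors

w = 1.0 / se**2                                     # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)             # fixed-effect pooled estimate
Q = np.sum(w * (log_or - pooled) ** 2)              # Cochran's Q
df = len(log_or) - 1
p = stats.chi2.sf(Q, df)                            # chi-square test of heterogeneity
I2 = max(0.0, (Q - df) / Q) * 100                   # % of variation beyond chance

print(f"Q = {Q:.2f} on {df} df, p = {p:.3f}, I^2 = {I2:.0f}%")
```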

Analysis Strategies

Fixed-effects model
• Assumes all trials share the same underlying treatment effect
– Treats the trials as random samples from one large trial
– Differences in results are due to chance alone
• Similar to Mantel-Haenszel
• Often over-estimates significance

Random-effects model
• Assumes each study estimates its own true treatment effect
– That is, effects may truly differ among the included studies
– Allows heterogeneity, and is required for heterogeneous data
• Weights smaller studies more heavily
• Generally provides a similar effect estimate with lower precision
– More conservative; probably should always be used
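To make the contrast concrete, here is a minimal sketch that pools the same invented study effects under both models, using the DerSimonian-Laird estimate of the between-study variance for the random-effects analysis. The random-effects standard error comes out at least as large as the fixed-effect one, matching the "lower precision, more conservative" point above.

```python
# Minimal sketch: fixed-effect vs. DerSimonian-Laird random-effects pooling.
import numpy as np

log_or = np.array([-0.6, -0.2, -0.9, 0.1, -0.4])   # invented study effects
se = np.array([0.30, 0.25, 0.45, 0.20, 0.35])      # their standard errors

def pool(effects, variances):
    """Inverse-variance weighted mean and its standard error."""
    w = 1.0 / variances
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Fixed effect: one common true effect, weights are 1/variance
fe_est, fe_se = pool(log_or, se**2)

# Random effects: add between-study variance tau^2 (DerSimonian-Laird)
w = 1.0 / se**2
Q = np.sum(w * (log_or - fe_est) ** 2)
df = len(log_or) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
re_est, re_se = pool(log_or, se**2 + tau2)

print(f"Fixed:  {fe_est:.3f} (SE {fe_se:.3f})")
print(f"Random: {re_est:.3f} (SE {re_se:.3f}), tau^2 = {tau2:.3f}")
```

Adding tau² to every study's variance flattens the weights, which is why smaller studies count relatively more under the random-effects model.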

Forest Plots

Log weighted mean effect ≈ Σ_i [log(effect_i) / variance_i] / Σ_i [1 / variance_i], with the sums taken over the individual studies (inverse-variance weighting)
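A minimal sketch of a forest plot built from that inverse-variance weighting. The study labels and values are invented; the pooled marker at the bottom is the weighted mean described by the formula above.

```python
# Minimal forest-plot sketch: per-study estimates with 95% CIs plus the pooled
# inverse-variance weighted mean. Study labels and values are invented.
import numpy as np
import matplotlib.pyplot as plt

labels = ["Study A", "Study B", "Study C", "Study D", "Study E"]
log_or = np.array([-0.6, -0.2, -0.9, 0.1, -0.4])
se = np.array([0.30, 0.25, 0.45, 0.20, 0.35])

w = 1.0 / se**2
pooled = np.sum(w * log_or) / np.sum(w)          # the weighted mean from the formula
pooled_se = np.sqrt(1.0 / np.sum(w))

y = np.arange(len(labels), 0, -1)                # studies listed from top to bottom
plt.errorbar(log_or, y, xerr=1.96 * se, fmt="s", capsize=3)
plt.errorbar([pooled], [0], xerr=[1.96 * pooled_se], fmt="D", capsize=3)
plt.yticks(list(y) + [0], labels + ["Pooled"])
plt.axvline(0, linestyle=":")                    # line of no effect
plt.xlabel("Log(OR)")
plt.title("Forest plot")
plt.show()
```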

How Good are Meta-analyses?

“Large” defined as n ≥ 1,000

“Large” defined by power

Generally, pretty good. But not perfect.

Cappelleri, JAMA 1996

Meta-analyses Increasingly Common

Most are published as part of systematic reviews

Increasingly included in trial reports
• Objective comparison of current results with previous results

Grant applications
• Summarize existing knowledge
• Support equipoise
• Establish the need for the proposed trial
• Show that complications are unlikely

[Example: blood loss with low-dose perioperative aspirin]

Cochrane Collaboration

International non-profit organization, founded in 1993
• Named for Archie Cochrane

Repository for meta-analyses

Standardized reporting
• QUOROM (1999)
• PRISMA (2009)

Provides free software

Evidence-based medicine movement
• David Sackett
• Gordon Guyatt
• Tom Chalmers

Summary

Systematic reviews
• More objective than “expert” reviews
• May lack expert perspective and subtlety
• Meta-analysis is the quantification of a systematic review

Subject to major errors
• Any problems with the underlying studies remain
• Publication and reporting bias can be substantial
• Heterogeneity can complicate the analysis

Conduct and report per guidelines

Useful summary of the available literature
• Especially when many similar studies are available
