
Page 1:

Antonella Mazzei Abba, University of Neuchatel

Jixian (Jason) Wang, Celgene

PSI conference, London, 2017

Page 2:

Big data and observational data

The problem of confounding and missing covariates

An introduction to propensity scores

Propensity score calibration for data with missing covariates

Our approach: calibration by Bayesian bootstrap

Simulation results.

Overview

Page 3:

What are big data?

Big sizes (in terms of the number of subjects and/or variables)

Complex structure

Messy (e.g., incomplete data of different patterns)

Often big data are observational, or a mixture of trial and observational data.

No randomization or randomization lost -> confounding.

Causal inference using big data is a big and hot topic.

Big data, observational data

Page 4:

One approach to eliminating confounding in big data is the propensity score. Suppose there are 300 patients (150 males/150 females) in a population. They take a drug at either a high or a low dose (not randomized!). We want to compare the outcomes of the two doses. Gender affects the outcome, and males are twice as likely to get the high dose as females. The propensity score is the probability of getting the high dose given gender. Suppose we have observed 100 males and 50 females on the high dose. How do we adjust for the confounding?

Stratification by gender <-> stratification by PS. Weighting by the inverse PS (IPW) (1 female represents 2 in the high-dose group).

What if we have 30 potential confounders (instead of 1)? Estimate the PS given the 30 covariates, then use the PS to stratify or to weight.
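The PS and IPW arithmetic in this example can be spelled out in a few lines (Python; the weighted pseudo-population totals are an illustration of how IPW works, not from the slides):

```python
# numbers from the slide's example: 150 males and 150 females,
# of whom 100 males and 50 females received the high dose
n_male, n_female = 150, 150
high_male, high_female = 100, 50

ps_male = high_male / n_male          # P(high dose | male)   = 2/3
ps_female = high_female / n_female    # P(high dose | female) = 1/3

# inverse-probability weights: 1/PS for high dose, 1/(1 - PS) for low dose
w_high_male, w_high_female = 1 / ps_male, 1 / ps_female
w_low_male, w_low_female = 1 / (1 - ps_male), 1 / (1 - ps_female)

# after weighting, each dose group represents the full 300-patient population
high_total = high_male * w_high_male + high_female * w_high_female
low_total = (n_male - high_male) * w_low_male + (n_female - high_female) * w_low_female
```

The weighted totals show why IPW removes the gender imbalance: both dose groups are reweighted to stand for the same 300-patient population, so gender no longer confounds the dose comparison.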

The propensity score (PS) approach

Page 5:

Another way of using the PS is to add it as a covariate (PS-as-cov) in the outcome model alongside treatment.

Similar to the direct adjustment with all covariates in the model.

It is the easiest PS approach, and can be used for all types of models.

The justification:

PS acts as a summary of confounding effect in the model.

In some situations PS-as-cov is equivalent to stratification/matching.

In some situations PS-as-cov is equivalent to IPW.
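A minimal self-contained sketch of PS-as-cov (pure Python with simulated data; the logistic treatment model, the coefficient values, and the linear outcome model are illustrative assumptions, not from the slides):

```python
import math
import random

def ols(X, y):
    """OLS coefficients via normal equations and Gauss-Jordan elimination."""
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    A = [XtX[i] + [Xty[i]] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][p] / A[i][i] for i in range(p)]

random.seed(1)
n = 5000
# x is the single confounder: it drives both treatment assignment and outcome
x = [random.gauss(0, 1) for _ in range(n)]
ps = [1 / (1 + math.exp(-1.5 * xi)) for xi in x]        # true propensity score
a = [1 if random.random() < p else 0 for p in ps]       # treatment A
y = [2.0 * ai + 1.0 * xi + random.gauss(0, 1)           # true treatment effect = 2
     for ai, xi in zip(a, x)]

naive = ols([[1.0, float(ai)] for ai in a], y)[1]       # ignores the confounder
adjusted = ols([[1.0, float(ai), pi]                    # PS-as-cov adjustment
                for ai, pi in zip(a, ps)], y)[1]
```

With one strong confounder, the naive treatment-effect estimate is badly biased upward, while adding the (here known) PS as a covariate recovers the true effect of 2; in practice the PS would itself be estimated from the covariates.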

Propensity score as a covariate

Page 6:

Missing/incomplete covariates

[Figure: data pattern with blocks 1, 2, and 3. Cov set 1 is observed only for the n1 subjects; Cov set 2 is observed for all n1 + n2 subjects; A and Y are observed for all.]

Missing data are very common in big data.

Missing patterns may be quite different.

Big data are often a combination of subsets from different sources.

Data pattern in the figure: Cov set 1 may be expensive/inconvenient to measure, hence only n1 patients have it.

Cov set 2 is routinely collected, so all patients have it.

Within block 3, there could be further missing values.

A=treatment allocation, Y=outcome

Page 7:

Assume Cov sets 1-2 contain all confounders

Fitting a model for A with the set 1-2 covariates on the n1 population gives the gold-standard (correct) PS(GS).

Fitting a model for A with only the set 2 covariates on the n1 + n2 population gives an error-prone (wrong) PS(EP).

Adjusting with PS(EP) may result in bias.

But using PS(GS) alone, we throw away the n2 population!

Propensity score with missing covs


Page 8:

Can we calibrate the estimator adjusted by the “wrong” PS(EP)?

Assuming PS(GS) = c + d·PS(EP) + error (1), one can apply measurement-error models for calibration (Sturmer et al.).

Is (1) a correct model, given that PS(EP) and PS(GS) both lie in (0, 1)?

In general the relationship is nonlinear and depends on the distribution of the Cov sets.

Example (C = set 1, X = set 2):

Calibration depends on the outcome (Y) model.

Propensity score calibration


Page 9:

Lin & Chen's (2014) approach, based on Chen & Chen (2000):

1. Estimate the treatment effect (B) in the outcome model with PS(GS) adjustment from the n1 population.

2. Estimate the treatment effect (B*) in the outcome model with PS(EP) adjustment from the n1 population.

3. Estimate E(B*) using PS(EP) for the n2 population.

4. The calibrated estimate is B + K(B* - E(B*)), assuming (B, B*) ~ Normal, where K = cov(B, B*)/var(B*), which is not easy to calculate.

Lin & Chen provide SAS macros that fit common outcome models from scratch, with complex calculations.
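Once the covariance terms are available, step 4 is simple arithmetic; a sketch with hypothetical numbers (the function and all input values are illustrative, not from the slides):

```python
def calibrated_estimate(b, b_star, e_b_star, cov_b_bstar, var_bstar):
    """Calibrated estimate as given on the slide: B + K(B* - E(B*)),
    with K = cov(B, B*) / var(B*)."""
    k = cov_b_bstar / var_bstar
    return b + k * (b_star - e_b_star)

# hypothetical numbers, purely to show the arithmetic
est = calibrated_estimate(b=1.0, b_star=0.8, e_b_star=0.5,
                          cov_b_bstar=0.1, var_bstar=0.2)
# est = 1.0 + 0.5 * (0.8 - 0.5) = 1.15
```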

A more robust calibration


Page 10:

One easy approach to calculating cov(B, B*) is bootstrapping:

1. Take n1 samples with replacement from the n1 population.

2. Fit the outcome models with PS(EP) and PS(GS) adjustment to get B* and B.

3. Repeat steps 1 and 2 many times, then calculate the sample cov(B, B*).

4. If n2 is not very large, do steps 1 and 2 with PS(EP) in the n2 population for a better E(B*).

We use Bayesian bootstrap for simplicity and smoothness.

It needs just a single line, “weight= - log(ranuni(seed));” (SAS syntax), in the bootstrap loop; the weight is then used when fitting the two outcome models.

The ordinary bootstrap weights observations by integers, while the Bayesian bootstrap weights them by real numbers and hence is smoother.
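The bootstrap loop above can be sketched in a few lines (Python rather than SAS; the two "outcome models" here are toy weighted means chosen only to show the mechanics, so by construction B* = 0.8·B and K comes out as 1/0.8 = 1.25):

```python
import math
import random

random.seed(0)
n = 200
y = [random.gauss(1.0, 1.0) for _ in range(n)]  # toy outcome data

def weighted_mean(vals, w):
    return sum(v * wi for v, wi in zip(vals, w)) / sum(w)

b_reps, bstar_reps = [], []
for _ in range(100):
    # Bayesian bootstrap: one Exp(1) weight per subject, w = -log(U)
    w = [-math.log(1.0 - random.random()) for _ in range(n)]
    # stand-ins for the PS(GS)- and PS(EP)-adjusted fits
    b = weighted_mean(y, w)                         # plays the role of B
    bstar = weighted_mean([0.8 * v for v in y], w)  # plays the role of B*
    b_reps.append(b)
    bstar_reps.append(bstar)

m_b = sum(b_reps) / len(b_reps)
m_s = sum(bstar_reps) / len(bstar_reps)
cov_bb = sum((p - m_b) * (q - m_s) for p, q in zip(b_reps, bstar_reps)) / 99
var_bs = sum((q - m_s) ** 2 for q in bstar_reps) / 99
K = cov_bb / var_bs
```

In a real application the two fits inside the loop would be the weighted outcome models with PS(GS) and PS(EP) adjustment; only the weight generation and the covariance bookkeeping would stay the same.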

Bayesian bootstrap calibration

Page 11:

Simulation results

• We performed extensive simulations to evaluate the proposed approach.
• Results below are for a Poisson regression model for the outcome.
• Total sample n = n1 + n2, with n1 being 20% of n; 2 covariates in Cov set 1 and 2 in Cov set 2.
• 100 Bayesian bootstrap samples and 1000 simulations for each scenario.

• Var(emp) = sample variance in the simulation

• Var(est) = estimated variance

• Coverage = coverage of the 95% CI

• PSC = PS calibration
• BB = Bayesian bootstrap

Page 12:

We have assumed a constant treatment effect over the whole population

What if treatment effects differ between the n1 and n2 populations?

In this case, the approach estimates the effect in the n1 population

In general the effect of the whole population may not be estimable

We have also assumed the same distribution of Cov set 2 in the n1 and n2 populations (this can be checked).

Warnings: assumptions we made

Page 13:

Assume that n1 = 200 and n2 = 50,000, with 10 repeated outcome measures per subject.

Complex missing data approaches (likelihood, EM algorithms, multiple imputation) may not be feasible.

For our method, the model with PS(EP) need not be correct. So we can:

Use a good mixed-effects model with PS(GS) to fit the data for population n1

Use least squares (LS) to fit a simple model with PS(EP), ignoring subject effects

As n2 is very large, no bootstrap is needed to estimate E(B*)

The estimation is still valid, although less efficient than using an equally good model with PS(EP).

Another advantage when dealing with big data

Page 14:

An advantage when dealing with big data: simulation results

• We use simulation to evaluate the approach.
• Assume that n1 = 200 and n2 = 800, with 2-6 repeated measures per subject; generate data with subject effects and measurement errors.
• Use GEE for the n1 population and LS for the n = n1 + n2 population.

Page 15:

Incomplete covariates are common in big data

Propensity score calibration is one simple way to mitigate this issue

Lin and Chen's approach does not depend on the model relating PS(EP) and PS(GS), and hence is more robust.

The approach can be easily applied to any model with the help of the Bayesian bootstrap.

The framework can be extended to other adjustment approaches

Summary

Page 16:

Sturmer T, Schneeweiss S, Avorn J, Glynn RJ. Adjusting effect estimates for unmeasured confounding with validation data using propensity score calibration. Am J Epidemiol 2005; 162:279-289.

Sturmer T, Schneeweiss S, Rothman KJ, Avorn J, Glynn RJ. Performance of propensity score calibration: a simulation study. Am J Epidemiol 2007; 165:1110-1118.

Lin HW, Chen YH. Adjustment for missing confounders in studies based on observational databases: 2-stage calibration combining propensity scores from primary and validation data. Am J Epidemiol 2014; 180:308-317.

Chen YH, Chen H. A unified approach to regression analysis under double-sampling designs. J R Stat Soc B 2000; 62:449-460.

Rubin DB. The Bayesian bootstrap. Annals of Statistics 1981; 9:130-134.

Wang J, Mazzei Abba A. A practical and robust propensity score calibration approach based on bootstrapping. Unpublished manuscript.

References