
Page 1: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

An update from the JSM 2006 - Seattle

Ryan Woods – January 8, 2007

Page 2: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Topics for this presentation…

• Adaptive Designs and Randomizations in clinical trials

• Statistical Methods for studies of Bioequivalence

• Some models for competing risks in cancer research

Page 3: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Part 1: Adaptive Designs in Clinical Trials…

“Adaptive” has two common meanings in the design of clinical trials literature:

1) A trial in which the design changes in some fashion after the study has commenced (e.g. # of interim looks)

2) A trial in which the randomization probabilities change throughout the study in some fashion

Page 4: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Case 1: Change in Trial Design…

Review of Sequential Hypothesis Testing:

• We choose a number of interim looks

• Select desired power for study, sample size, overall Type I error rate

• Select a spending function for Type I error that reflects how much “alpha” we want to spend at each look

• The spending function determines our rejection boundaries for test statistics computed at each analysis → example…

Page 5: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Example…

• Suppose we choose two interim looks + final analysis

• Sample size of 600 with interim looks at n=200, 400

• Overall Type I error = 0.05

• Want fairly conservative boundaries early, with some alpha left for final analysis…

The process looks like…
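To make the idea concrete, here is a rough Python Monte Carlo sketch of turning a cumulative alpha-spending schedule into rejection boundaries for this three-look design. Only n = 200/400/600 and the overall 0.05 come from the slide; the interim spending values are invented, and this is not the presenter's calculation (dedicated software such as EaSt would normally be used).

import numpy as np

# Monte Carlo sketch: derive three-look rejection boundaries from an assumed
# cumulative alpha-spending schedule. Only n = 200/400/600 and the overall
# 0.05 come from the slide; the interim spending values are invented.
rng = np.random.default_rng(1)
n_sim = 500_000
info = np.array([200, 400, 600])           # cumulative sample sizes at each look
spend = np.array([0.005, 0.015, 0.05])     # cumulative alpha spent by each look (assumed)

# Standardized statistics under H0: Z_k = S_k / sqrt(n_k) with independent increments.
incr = rng.standard_normal((n_sim, 3)) * np.sqrt(np.diff(np.r_[0, info]))
z = np.cumsum(incr, axis=1) / np.sqrt(info)

bounds, rejected = [], np.zeros(n_sim, dtype=bool)
for k in range(3):
    new_alpha = spend[k] - (spend[k - 1] if k else 0.0)   # alpha available at this look
    alive = ~rejected                                     # paths not yet rejected
    b_k = np.quantile(z[alive, k], 1 - new_alpha / alive.mean())
    bounds.append(round(b_k, 2))
    rejected |= alive & (z[:, k] >= b_k)

print("boundaries:", bounds, " overall type I error ~", rejected.mean())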

Page 6: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

[Figure: Three-look Sequential Boundary for Rejection of Ho. Standardized test statistic (0 to 3.5) plotted against cumulative sample size (0, 200, 400, 600); the region above the boundary is labelled "Reject Ho".]

Page 7: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

What if we want to make a change?

At an interim look we may want to:
• Increase the sample size
• Increase the number of looks
• Change the shape of the spending function for the remaining looks
• Change inclusion criteria

BUT: can we do this without inflating the Type I error? Can we estimate the Rx effect at the end?

Page 8: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

We Want Change!

• If at some look L in a K-look trial we want to make a design change, we need to consider ε, the conditional rejection probability: the probability, computed under H0 from the data accumulated through look L, that the test statistic crosses a rejection boundary at some later look,

ε = P_H0( Z_j ≥ b_j for some j = L+1, …, K | Z_1, …, Z_L )

where Z_j and b_j are the values of the test statistic and boundary at look j.

• Any change to the trial at look L must preserve ε for the modified trial (Muller & Schafer)
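For intuition, a minimal Python sketch of ε at the second look of this three-look design: with only the final analysis remaining, the conditional rejection probability reduces to a single normal tail probability under H0. The observed statistic and final boundary below are assumed values, not the ones behind the 0.255 shown on the next slide.

from math import sqrt
from scipy.stats import norm

# Sketch: conditional rejection probability epsilon at look 2 of a 3-look design.
# With one look remaining, epsilon = P(Z_3 >= b_3 | Z_2 = z2) under H0.
n2, n3 = 400, 600          # cumulative sample sizes from the slide
z2_obs = 1.80              # standardized statistic observed at n = 400 (assumption)
b3 = 1.98                  # final-look boundary (assumption)

# S_3 = S_2 + D with D ~ N(0, n3 - n2) under H0, and Z_k = S_k / sqrt(n_k).
eps = norm.sf((b3 * sqrt(n3) - z2_obs * sqrt(n2)) / sqrt(n3 - n2))
print(f"conditional rejection probability = {eps:.3f}")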

Page 9: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

[Figure: Three-look Sequential Boundary for Rejection of Ho, with the original design boundaries and the accumulated data overlaid; standardized test statistic against cumulative sample size (0 to 600), "Reject Ho" region above the boundary. Conditional Rejection Probability at N=400 = 0.255.]

Page 10: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

[Figure: Modified Design to Four-look Larger Study (N=800), showing the original design boundaries, the accumulated data, and the modified study boundaries; standardized test statistic against cumulative sample size (0 to 800), "Reject Ho" region above the boundary.]

Such changes are fine, provided ε = 0.255 for the modified trial

Page 11: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

How to calculate these ε’s?

• These conditional rejection probabilities can be calculated conveniently under various adaptations using EaSt (Cytel Software)
• At present our license has expired and we are updating it!
• I promise an example delivered to you when our license is upgraded

…Example to come!!!

Page 12: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

What about Estimation?

• Interval Estimation for the treatment difference δ is explained in Mehta’s slides

• In summary, the concept is an extension of Jennison and Turnbull’s Repeated Confidence Interval Method from sequential trials (1989)

• Mehta adapts this method by applying the results of Muller & Schafer to allow for the adaptive design change

Page 13: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Case 2: Adaptive Randomizations

• Designs in which allocation probabilities change over the course of the study

• Why? 1) Ensure more patients receive the better treatment; 2) Ensure balance between allocations

• Examples include: Play the Winner Rule, Randomized Play the Winner Rule, Drop the Loser Rule, Efron’s biased coin design, etc.

…more to come on these….

Page 14: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some serious examples…

1) Connor et al. (1994, NEJM) reported a trial of AZT to reduce the rate of mother-to-child HIV transmission; coin-toss randomization was used, with results:

AZT: 20/239 transmissions

Placebo: 60/238 transmissions

So transmission in the control group was about 3 times the rate in the treated group.

Page 15: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some serious examples…

2) Bartlett et al. (1985, Pediatrics) reported a study of ECMO in infants; a Play the Winner rule was used, with results:

ECMO: 0/11 deaths

Control: 1/1 deaths → study stopped

- A follow-up study proceeded with a very high death rate in the control group (40% versus 3% with ECMO)

- See the debate in Statistical Science over this! (Nov. 1989, Vol. 4, No. 4, p. 298)

Page 16: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some allocation methods…

1) Simple Play the Winner Rule
- First patient is randomized by a coin toss
- If the patient is an Rx success, the next patient gets the same Rx; if an Rx failure, the next patient gets the other Rx (Zelen, 1969)

Pros/Cons:
- Should put more patients on the better Rx
- Needs the current patient’s outcome before the next patient is allocated
- Not a randomized design

Page 17: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some allocation methods…

2) Randomized Play the Winner Rule [RPW(Δ,μ,β)]
- Start with an urn containing Δ red balls and Δ blue balls (one colour per Rx group)
- Each patient is randomized by drawing a ball from the urn; the ball is then replaced
- When the patient’s outcome is obtained, the urn changes in the following way:

i) If Rx success: we add μ balls of the current Rx to the urn and β balls of the other Rx

ii) If Rx failure: we add β balls of the current Rx to the urn and μ balls of the other Rx

where μ ≥ β ≥ 0

Page 18: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some allocation methods…

2) RPW(Δ,μ,β) continued…
- A commonly discussed design is RPW(0,1,0) [see Wei and Durham: JASA, 1978]

Pros/Cons:
- Can lead to selection bias issues, even in a blinded scenario, if the investigator enrolls selectively
- Should put more patients on the better Rx
- Does not require an instantaneous outcome
- Implementation can be difficult
- What about the analysis of such data? (A small simulation sketch of the rule follows below.)
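A minimal Python simulation sketch of the RPW(Δ, μ, β) urn; the success probabilities and the immediate-response assumption are mine for illustration, not from the talk.

import random

def rpw_allocate(n_patients, p_success, delta=1, mu=1, beta=0, seed=0):
    """Simulate RPW(delta, mu, beta) allocation for two arms 'A' and 'B'.

    p_success gives each arm's true success probability (assumed values);
    outcomes are taken to be available before the next patient arrives,
    which ignores the delayed-response point made on the slide.
    """
    rng = random.Random(seed)
    urn = ["A"] * delta + ["B"] * delta       # delta balls of each colour to start
    counts = {"A": 0, "B": 0}
    for _ in range(n_patients):
        # Draw with replacement (the ball is returned); coin toss if the urn is empty.
        arm = rng.choice(urn) if urn else rng.choice(["A", "B"])
        other = "B" if arm == "A" else "A"
        counts[arm] += 1
        if rng.random() < p_success[arm]:
            urn += [arm] * mu + [other] * beta    # success: reward the current Rx
        else:
            urn += [arm] * beta + [other] * mu    # failure: favour the other Rx
    return counts

# RPW(0,1,0) as on the slide: the better arm should end up with more patients.
print(rpw_allocate(200, {"A": 0.7, "B": 0.4}, delta=0))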

Page 19: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some allocation methods…

3) Drop the Loser Rule [DL(K)]
- An urn contains K+1 types of balls: one for each Rx 1, …, K, plus an “immigration” ball
- Initially, there are Z_{0,k} balls of type k in the urn
- After M draws, the urn composition is Z_M = (Z_{M,0}, Z_{M,1}, …, Z_{M,K})
- To allocate a patient, draw a ball; if it is of type k, give the patient Rx k. The ball is not replaced!
- The patient’s response is then observed. If it is a success, the ball is returned to the urn (so Z_{M+1,k} = Z_{M,k}); if a failure, the ball stays out (so Z_{M+1,k} = Z_{M,k} − 1 for Rx k).

…more…

Page 20: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some allocation methods…

3) Drop the Loser Rule continued…
- If the immigration ball is drawn, replace it and add to the urn one ball of each Rx type

Pros/Cons:
- Can accommodate several Rx groups
- Also should put more patients on the better Rx
- Can also be extended to delayed response
- Again, what about the analysis of the data?
- Implementation is also not easy (a sketch follows below)
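A companion Python sketch of the DL(K) urn with an immigration ball, again with made-up success probabilities and immediate responses.

import random

def drop_the_loser(n_patients, p_success, seed=0):
    """Simulate DL(K) allocation; p_success[k] is arm k's success probability (assumed)."""
    rng = random.Random(seed)
    k_arms = len(p_success)
    balls = {k: 1 for k in range(k_arms)}     # one ball per Rx to start (assumed Z_0)
    balls["immigration"] = 1
    counts = [0] * k_arms
    while sum(counts) < n_patients:
        pool = [b for b, c in balls.items() for _ in range(c)]
        draw = rng.choice(pool)
        if draw == "immigration":
            for k in range(k_arms):           # replace it and add one ball of each Rx
                balls[k] += 1
            continue
        balls[draw] -= 1                      # treatment ball is taken out of the urn
        counts[draw] += 1
        if rng.random() < p_success[draw]:
            balls[draw] += 1                  # success: the ball goes back
        # failure: the ball stays out ("drop the loser")
    return counts

print(drop_the_loser(200, p_success=[0.7, 0.4]))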

Page 21: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

For all of these methods…

• Extensive discussion in the literature about how these various allocation methods behave in simulation

• Some issues include:

1) how these methods perform when multiple Rx’s exist and variation in efficacy of Rx’s is high

2) how to balance discrimination of efficacy of Rx’s with minimizing number of patients allocated to poorer Rx’s

• Many, many, many generalizations of these methods to improve the “undesirables” of previous incarnations

• Interim analyses versus adaptive randomization

Page 22: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

At the JSM some talks included…

• Re-sampling methods for Adaptive Designs
• Issues associated with non-inferiority and superiority trials and adaptive designs
• Dynamic Rx Allocation and regulatory issues (EMEA’s request for analysis)
• General commentary on risks and benefits of adaptive methods in clinical trials

In general, a very active area right now!

- Also Feifang Hu gave a recent talk at UBC!

Page 23: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Bioequivalence Studies

I will attempt to cover (briefly):

• Purpose of a bioequivalence study

• Type of data typically collected

• Common methods of data analysis and hypothesis testing/estimation

• Additional comments

Page 24: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

What is bioequivalence?

• Bioequivalence (BE) studies are performed to demonstrate that different formulations or regimens of a drug product are similar in terms of efficacy and safety

• BE studies are done even when the formulations of new and old drugs are identical but the type of delivery differs (capsule vs. tablet); they could also compare a generic with a previously patented drug

• Even small changes to a formulation can affect bioavailability/absorption/etc., so BE studies can reassure regulators that a new formulation is good WITHOUT repeating the entire drug development program (e.g. several phase III trials with clinical endpoints)

Page 25: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Typical Study Design

• Typically, BE studies are done as cross-over trials in healthy volunteer subjects

• Each individual will be administered two formulations (Reference and Test) in one of two sequences (e.g. RT and TR):

Sequence | Rx 1 | Wash-out | Rx 2 | # subjects
1 (RT)   | R    | ---      | T    | n/2
2 (TR)   | T    | ---      | R    | n/2

* The wash-out is a period of time during which the patient takes neither of the formulations

Page 26: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Typical Outcomes

• In Clinical Pharmacology (CP) and BE studies, the central outcomes are pharmacokinetic (PK) summaries

• These PK measures have more to do with what the body does with the drug than with what the drug does to the body

• Many of the outcomes of interest are taken from the drug concentration-time curve and include: AUC(0-t), AUC(0-∞), Tmax, Cmax, T1/2

….more to come on these…

Page 27: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

[Figure: Plasma Concentration (mg/L) versus Time (hours), 0 to 44 hours, annotated with Cmax, Tmax, and the AUC.]

Page 28: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

More on these outcomes…

• The FDA defines BE as “the absence of a significant difference in the rate and extent to which the active ingredient becomes available at the site of drug action”

• AUC is taken as the measure of the extent of exposure; Cmax as the rate of exposure

• In general, these two outcomes are assumed to be log-normally distributed

• A small increase/decrease in Cmax can result in a safety issue → T and R cannot differ “too much”

Page 29: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Testing what is “too much”…

• The hypothesis testing procedure chosen by regulatory agencies has been called TOST (two one-sided tests)

• For each PK parameter, we apply a set of two one-sided hypothesis tests to determine if the formulations are bioequivalent

• One hypothesis is that the data for the new formulation are “too low” (H01) relative to the reference; the other is that they are “too high” (H02)

…mathematically we have…

Page 30: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Testing what is “too much”…

The two tests can be written:

H01: μT − μR ≤ −Δ  versus  H11: μT − μR > −Δ

and then

H02: μT − μR ≥ Δ  versus  H12: μT − μR < Δ

Page 31: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Testing what is “too much”…

• The testing parameter Δ was chosen by the FDA to be Δ = log(1.25)

• Both of the two tests are carried out at the 5% level of significance

• Thus, there is at most a 5% chance of declaring two products bioequivalent when in fact they are not

• TOST has some drawbacks: drugs for which small changes in dose → BIG changes in clinical response; test limits too narrow for highly variable products; and it doesn’t address individual BE (“Can I safely switch my patient’s formulation?”)
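To illustrate, a small Python sketch of TOST on simulated within-subject log(T/R) ratios, treating the crossover as a paired analysis for simplicity; the data and sample size are invented, and the mixed model on the next slides is the fuller treatment.

import numpy as np
from scipy import stats

# TOST sketch on simulated within-subject log(T/R) ratios from a 2x2 crossover.
rng = np.random.default_rng(42)
n = 24
log_ratio = rng.normal(loc=0.05, scale=0.20, size=n)   # per-subject log(T/R), simulated

delta = np.log(1.25)
mean, se, df = log_ratio.mean(), log_ratio.std(ddof=1) / np.sqrt(n), n - 1

# Two one-sided tests, each at the 5% level.
p_low = stats.t.sf((mean + delta) / se, df)    # H01: mu_T - mu_R <= -delta
p_high = stats.t.cdf((mean - delta) / se, df)  # H02: mu_T - mu_R >= +delta
bioequivalent = max(p_low, p_high) < 0.05

# Equivalent view: the 90% CI for the log-ratio must sit inside (-delta, +delta).
lo, hi = stats.t.interval(0.90, df, loc=mean, scale=se)
print(f"BE declared: {bioequivalent}; 90% CI for log(T/R): ({lo:.3f}, {hi:.3f})")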

Page 32: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Models for the outcomes…

• The suggested approach is to model data from the two-period, two-Rx cross-over via a linear mixed model

• Let Yijk be the (log-transformed) response obtained from subject k, in period j, in sequence i, taking formulation l (determined by i and j)

• If we assume no carry-over effects, the model resembles:

Yijk = μi + λj + πl + βk + εijk

where μi, λj, and πl are fixed effects; βk (subject) and εijk (error) are random

Page 33: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Models for the outcomes…

So to estimate πT – πR we take:

½ [(Ȳ21 − Ȳ22) − (Ȳ11 − Ȳ12)]

which in expectation equals the treatment difference. Ȳij is the sample mean from the (i, j)’th cell below:

Group  | Period 1      | Period 2
1 (RT) | μ1 + λ1 + πR  | μ1 + λ2 + πT
2 (TR) | μ2 + λ1 + πT  | μ2 + λ2 + πR

* The μ parameters could likely be dropped
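A quick simulation check (all parameter values invented) that this cell-means contrast does recover πT − πR:

import numpy as np

# Check that 0.5*[(Ybar21 - Ybar22) - (Ybar11 - Ybar12)] recovers pi_T - pi_R.
# Sequence/period/formulation effects and sample sizes are made-up values.
rng = np.random.default_rng(7)
n = 200                                # large n so the Monte Carlo check is tight
mu = {1: 0.00, 2: 0.10}                # sequence effects
lam = {1: 0.00, 2: -0.05}              # period effects
pi = {"R": 0.00, "T": 0.08}            # formulation effects: true difference = 0.08

def simulate(seq, formulations):
    subj = rng.normal(0, 0.30, n)      # shared random subject effects (crossover)
    return [mu[seq] + lam[p] + pi[f] + subj + rng.normal(0, 0.15, n)
            for p, f in zip((1, 2), formulations)]

y11, y12 = simulate(1, ("R", "T"))     # sequence 1: R then T
y21, y22 = simulate(2, ("T", "R"))     # sequence 2: T then R

est = 0.5 * ((y21.mean() - y22.mean()) - (y11.mean() - y12.mean()))
print(f"estimated pi_T - pi_R = {est:.3f} (true value 0.08)")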

Page 34: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Some general comments…

• Guidelines from the FDA on methodology are very specific in this field (e.g. numerical method for AUC, “goal posts” for determining BE, distributional assumptions, etc.)

• Interesting history of how these regulations came about/evolved:

- 75/75 rule (70’s): 75% of subjects’ individual ratios of T to R must be ≥ 0.75 to prove BE

- 80/20 rule (80’s): set up H0 such that the two formulations are equal. If the test is NOT rejected, and a difference of 20% is not shown, then the formulations are BE

Page 35: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

SAS tricks!!!
Some mixing of UNIX commands and SAS:

/* Create a directory tree from within SAS via UNIX shell commands */
%sysexec %str(mkdir MYNEWDATA; cd MYNEWDATA; mkdir Output; cd ..;);

/* Capture the current working directory from the UNIX environment */
%let value=%sysget(PWD);
%put &VALUE;

/* Write a dataset into the new directory, then print it to an RTF file there */
libname data "MYNEWDATA";
data data.junk; variable="ONE VARIABLE"; run;

ods rtf file="MYNEWDATA/Output/file.rtf";
proc print; title "&VALUE."; run;
ods rtf close;

Page 36: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Competing risks models in the monogenic cancer susceptibility syndromes

7 Aug 2006, JSM 2006 Session #99

Philip S. Rosenberg, Ph.D.
Bingshu E. Chen, Ph.D.

Biostatistics Branch
Division of Cancer Epidemiology and Genetics
National Cancer Institute

Page 37: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Acknowledgements

Statistics: Bingshu E Chen
SCN: David C Dale, Blanche P Alter, Severe Chronic Neutropenia International Registry
FA: Blanche P Alter, Wolfram Ebell, Eliane Gluckman, Gerard Socié
HBOC: Mark H Greene, Joan L Kramer

Page 38: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Outline

I. Monogenic cancer susceptibility syndromes
a) Fanconi Anemia (FA)
b) Severe Congenital Neutropenia (SCN)
c) Hereditary Breast and Ovary Cancer (HBOC)

II. Cause-specific hazard functions for competing risks
• B-Spline

III. Individualized Risks
• Covariates

IV. Cumulative Incidence versus Actuarial Risk

V. Conclusions

Page 39: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Cancer Syndromes

•Single gene defects predispose to more than one event type (pleiotropy).

•Occurrence of one event type censors or alters the natural history of other event types.

•Heterogeneity.

Competing risks theory provides a unifying framework.

Page 40: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

[Diagram: competing risks in each syndrome]

• Fanconi Anemia (FA): competing events Death, BMT, AML, Solid Tumor. GENES: FANCA, FANCB, FANCC, FANCD1/BRCA2, FANCD2, FANCE, FANCF, FANCG, (FANCI), FANCJ, FANCL, FANCM

• Severe Congenital Neutropenia (SCN): competing events MDS/AML, Sepsis, Death. GENES: ELA2, other genes

• Hereditary Breast and Ovary Cancer (HBOC): competing events Breast Cancer, Death. GENES: BRCA1, BRCA2, other genes

Page 41: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Modeling the Natural History

•Cause-specific hazards:

$h_k(t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t} P\{T \in [t, t+\Delta t),\ K = k \mid T \ge t\}, \quad k = 1, \ldots, K$

•Cumulative Incidence (in the presence of other causes):

$F_k(t) = \int_0^t h_k(a) \exp\!\left(-\int_0^a \sum_{k'=1}^{K} h_{k'}(s)\, ds\right) da, \quad k = 1, \ldots, K$

•Actuarial Risk (“removes” other causes):

$F_{A,k}(t) = 1 - \exp\!\left(-\int_0^t h_k(s)\, ds\right), \quad k = 1, \ldots, K$

Page 42: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

B-Spline Models of Cause-Specific Hazards:

$h_k(t) = \sum_{j=1}^{m_k} \beta_{k,j} B_{k,j}(t), \quad \beta_{k,j} \ge 0$

[Figure: B-spline basis functions on t from 0 to 50 and their linear combination.]

With censored data $X_{ik} = \min(Y_{ik}, C_{ik})$ and event indicators $\delta_{ik} = I(Y_{ik} \le C_{ik})$, $i = 1, \ldots, n$, the coefficients for cause $k$ maximize the censored-data log-likelihood

$\ell_k = \sum_{i=1}^{n} \left[ \delta_{ik} \log h_k(X_{ik}) - \int_{0}^{X_{ik}} h_k(s)\, ds \right]$

•For each cause k separately:

•Knot selection by Akaike Information Criterion (AIC).

•Variance calculations via Bootstrap (because of constraint).

Rosenberg P.S. Biometrics 1995;51:874-887
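For flavour, a tiny Python sketch of a hazard built as a non-negative combination of B-spline basis functions; the knots and coefficients are made up, not fitted to any registry data, and a real fit would maximize the censored-data log-likelihood with knots chosen by AIC as described above.

import numpy as np
from scipy.interpolate import BSpline

# Sketch of h_k(t) = sum_j beta_kj * B_kj(t) with beta_kj >= 0.
order = 3                                               # cubic splines
knots = np.r_[[0.0] * (order + 1), [10.0, 25.0], [50.0] * (order + 1)]
beta = np.array([0.02, 0.05, 0.01, 0.08, 0.20, 0.30])   # non-negative coefficients (made up)
assert len(beta) == len(knots) - order - 1              # number of basis functions

hazard = BSpline(knots, beta, order)
ages = np.linspace(0.0, 45.0, 10)
print(np.round(hazard(ages), 4))                        # hazard evaluated on an age grid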

Page 43: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Fanconi Anemia (n=145) Natural History

Same modeling approach identifies distinct hazard curves.

Rosenberg P.S. et al. Blood 2003;101:2136

Page 44: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Rosenberg, P. S. et al. Blood 2006;107:4628-4635

Severe Congenital Neutropenia (n=374) Natural History

Page 45: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

HBOC – BRCA1 (n=98) Natural History

Page 46: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Individualized Risks

•Covariates:

•A covariate may affect one endpoint or multiple endpoints.

•(Different endpoints may be affected by different covariates.)

•Analysis:

•Cox regression models for each endpoint.

•Define a summary categorical risk variable.

•Estimate hazards and Cumulative Incidence for each level.

Page 47: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

FA: Impact of Congenital Abnormalities

[Figure: two panels, Covariate: No Congenital Abnormalities vs. Covariate: Specific Congenital Abnormalities]

Rosenberg, P. S. et al. Blood 2004;104:350-355

•Abnormalities are associated only with hazard of BMF.

•BMF curve goes up, other curves go down.

Page 48: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

SCN: Impact of Hypo-Responsiveness to Rx

[Figure: two panels versus Years on Rx, Covariate: Low Response vs. Covariate: Good Response]

•Low Response is associated with hazard of both endpoints.

•Both curves go up or down together.

Rosenberg, P. S. et al. Blood 2006;107:4628-4635

Page 49: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Actuarial Risk vs. Cumulative Incidence

•Actuarial Risk:
•“Removes” other causes.
•Estimate using 1 – KM curve.

$F_{A,k}(t) = 1 - \exp\!\left(-\int_0^t h_k(s)\, ds\right), \quad k = 1, \ldots, K$

•Cumulative Incidence:
•Impact of each cause in the real-world setting.
•Estimate using the non-parametric MLE or spline-smoothed hazards.

$F_k(t) = \int_0^t h_k(a) \exp\!\left(-\int_0^a \sum_{k'=1}^{K} h_{k'}(s)\, ds\right) da, \quad k = 1, \ldots, K$
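A toy numeric comparison (constant, made-up hazards, not the FA estimates) of how far apart the two quantities can be when a competing cause is strong, echoing the example on the next slide.

import numpy as np

# Toy comparison of cumulative incidence vs. actuarial (1 - KM) risk for one
# cause in the presence of a strong competing cause. Hazards are assumptions.
h_cause, h_other = 0.03, 0.10            # constant per-year cause-specific hazards
t = np.linspace(0.0, 50.0, 2001)
dt = t[1] - t[0]

surv_all = np.exp(-(h_cause + h_other) * t)          # overall survival S(t)
cum_inc = np.cumsum(h_cause * surv_all) * dt         # F_k(t): real-world impact
actuarial = 1.0 - np.exp(-h_cause * t)               # F_A,k(t): other causes "removed"

print(f"by t = 50: cumulative incidence ~{cum_inc[-1]:.2f}, actuarial ~{actuarial[-1]:.2f}")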

Page 50: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Rosenberg, P. S. et al. Blood 2003;101:822-826

Example: Fanconi Anemia
1-KM vs. Cumulative Incidence

•If you removed other causes, risk of Solid Tumor by age 50 would increase from ~25% to ~75%.

Page 51: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Extrapolating 1 – KM: A Cautionary Tale

Disease: HBOC BRCA1**
Intervention: Oophorectomy to reduce risk of Ovary Cancer
1-KM Prediction: Moderate increase in Cumulative Incidence of Breast Cancer
Result of Intervention: Risk of Breast Cancer declines by 2.6-fold
Observed Cumulative Incidence: Much lower than expected

**Kramer, J. L. et al. JCO 2005; 23: 8629-8635

Page 52: An update from the JSM 2006 - Seattle Ryan Woods – January 8, 2007

Conclusions

•B-spline models of cause-specific hazards elucidate the natural history.

•Physicians understand Cumulative Incidence (our experience).

•Stand-alone software will be available from us. [email protected]

•Much room for methodological refinements.