
Goodness-of-Fit for Length-Biased Survival Data with Right-

Censoring

Jaime Younger

Thesis submitted to the Faculty of Graduate and Postdoctoral Studies

in partial fulfillment of the requirements for the degree of Master of Science in

Biostatistics 1

Department of Mathematics and Statistics

Faculty of Science

University of Ottawa

© Jaime Younger, Ottawa, Canada, 2012

1 The program is a joint program with Carleton University, administered by the Ottawa-Carleton Institute of Mathematics and Statistics.


Abstract

Cross-sectional surveys are often used in epidemiological studies to identify subjects

with a disease. When estimating the survival function from onset of disease, this

sampling mechanism introduces bias, which must be accounted for. If the onset

times of the disease are assumed to arise from a stationary Poisson process,

this bias, which is caused by the sampling of prevalent rather than incident cases, is

termed length-bias. A one-sample Kolmogorov-Smirnov type of goodness-of-fit test

for right-censored length-biased data is proposed and investigated with Weibull, log-

normal and log-logistic models. Algorithms detailing how to efficiently generate right-

censored length-biased survival data of these parametric forms are given. Simulation

is employed to assess the effects of sample size and censoring on the power of the test.

Finally, the test is used to evaluate the goodness-of-fit using length-biased survival

data of patients with dementia from the Canadian Study of Health and Aging.


Acknowledgements

Above all, I would like to express my sincere gratitude to my supervisor, Dr.

Pierre-Jerome Bergeron, without whom the success of this thesis would not be pos-

sible. His guidance, patience, and encouragement helped me greatly throughout this

endeavour. I could not have imagined a better supervisor for my Master’s thesis.

On a more personal note, I would like to give a special thanks to my boyfriend Pat

whose encouragement, advice and confidence in me played a crucial role in the success

of this thesis. I would also like to thank my family for their continued support. Lastly,

I would like to give credit to my friend Brad for Figures 2.2, 2.3 and 2.4.

The data reported in this article were collected as part of the Canadian Study

of Health and Aging. The core study was funded by the Seniors’ Independence Re-

search Program, through the National Health Research and Development Program

(NHRDP) of Health Canada (project no. 6606-3954-MC(S)). Additional funding

was provided by Pfizer Canada Incorporated through the Medical Research Coun-

cil/Pharmaceutical Manufacturers Association of Canada Health Activity Program,

NHRDP (project no. 6603-1417-302(R)), Bayer Incorporated, and the British Columbia

Health Research Foundation (projects no. 38 (93-2) and no. 34 (96-1)). The study

was coordinated through the University of Ottawa and the Division of Aging and

Seniors, Health Canada.


Contents

List of Figures
List of Tables

1 Introduction

2 Background of Survival Analysis and Length-Biased Data
2.1 Overview of Lifetime Data
2.1.1 Length-Biased Data
2.2 The Survival Function and Hazard Function
2.3 Likelihood Construction
2.4 Estimators of the Survival and Cumulative Hazard Functions

3 Length-Biased Likelihood, Parametric Survival Models, and Goodness-of-Fit Tests
3.1 Likelihood for a Length-Biased Sample
3.2 Parametric Models in Survival Analysis
3.3 Tests for Goodness-of-Fit

4 Algorithms
4.1 Simulating Length-Biased Samples
4.2 Nonparametric Estimation of Length-Biased Survival Data with Right-Censoring
4.3 Simulation and Estimation of p-values

5 Applications
5.1 Simulation Studies
5.2 CSHA Data
5.3 Analysis

6 Conclusion

List of Figures

2.1 Example of Survival Function
2.2 Incident study
2.3 Prevalent study
2.4 Common vs. random disease onset times
2.5 Unbiased and length-biased densities
2.6 Weibull biased and unbiased densities
2.7 Unbiased and length-biased survival functions
5.1 Biased and unbiased nonparametric estimates of S(x)
5.2 Nonparametric and Weibull estimate of S(x)
5.3 Nonparametric and log-normal estimate of S(x)
5.4 Nonparametric and log-logistic estimates of S(x)
5.5 Nonparametric and Weibull estimate of S(x) (women)
5.6 Nonparametric and log-normal estimate of S(x) (men)
5.7 Nonparametric and Weibull estimate of S(x) for 0-10 years (women)
5.8 Nonparametric and log-normal estimate of S(x) for 0-10 years (men)
5.9 Estimated hazard function for women (Weibull) and men (log-normal)
5.10 Impact of small survival time on nonparametric (NP) estimate

List of Tables

2.1 Censoring Schemes and the Likelihood Function
5.1 Weibull critical values - varying sample size & amount of censoring
5.2 Power when H0 is log-normal and Weibull is true distribution
5.3 Power when H0 is log-logistic and Weibull is true distribution
5.4 Log-normal critical values - varying sample size & amount of censoring
5.5 Power when H0 is Weibull and log-normal is true distribution
5.6 Power when H0 is log-logistic and log-normal is true distribution
5.7 Log-logistic critical values - varying sample size & amount of censoring
5.8 Power when H0 is Weibull and log-logistic is true distribution
5.9 Power when H0 is log-normal and log-logistic is true distribution
5.10 CSHA parameter estimates
5.11 p-values estimated by simulation
5.12 CSHA Estimates - Women (Weibull) & Men (Log-normal)
5.13 p-values estimated by simulation - Women & Men

Chapter 1

Introduction

Due to an increase in life expectancy, the effects of the baby boom, as well as a

decrease in the number of children per woman, seniors account for the fastest growing

age group in Canada. Consequently, dementia is becoming one of the most important

disorders in our aging society. Its prevalence in Canada has been estimated at 8%

in persons over 65 years of age, and rises to approximately 35% among those aged

85 years and above (Lindsay et al. 2004). The Canadian Study of Health and Aging

(CSHA) 1 was a study of the epidemiology of dementia and other health problems in

the elderly in Canada, and the survival data collected therein serves as the motivation

for this thesis. One of the goals of the CSHA is to estimate the survival distribution,

from onset, of those with dementia. The CSHA was initiated in 1991, when prevalent

cases with dementia were identified and recruited into the study.

The sampling of prevalent cases is common in epidemiological studies as logistical

constraints often prevent the recruitment of incident cases. The ideal settings for sur-

vival data are studies in which individuals are observed at the initiation of the event,

or immediately after. The individuals are then followed for the remainder of the study,

1 The CSHA is detailed in the following 3 papers: Canadian Study of Health and Aging: study methods and prevalence of dementia (1994); The Canadian Study of Health and Aging: risk factors for Alzheimer's disease in Canada (1994); The Canadian Study of Health and Aging: patterns of caring for people with dementia in Canada (1994).


at which point they will be recorded as either censored or not censored, depending

on their status. These studies can be termed ‘incident studies.’ The Kaplan-Meier

estimator is a nonparametric estimator of the survival function for incident cases that

are subject to right-censoring, and relies on the important assumption that censoring

is independent of survival time. Unfortunately, due to issues such as time and

cost, it is not always possible to capture individuals in this manner. Oftentimes, the

subjects have experienced the event prior to the initiation of the study, and these

lifetimes are said to be left-truncated.

The sampling of subjects from a prevalent cohort is a form of biased sampling.

When subjects are identified cross-sectionally, the onset of disease has already oc-

curred. Under this type of sampling scheme, individuals who are longer-lived have

a higher probability of being recruited into the study, and there is a possibility that

shorter-lived individuals may go completely unnoticed. Here, the recruited sample

is not representative of the incident population. If we are interested in estimating

the survival distribution, from onset of disease, the fact that the lifetimes are left-

truncated must be accounted for. The time from onset until recruitment into the

study is contained in both the time from onset until death and the time from onset

until censoring. In this case, the assumption that censoring is uninformative about

survival time is violated, meaning that standard techniques that rely on this assump-

tion do not apply. When there has been no epidemic of disease, the incidence rate of

the disease can be assumed to be constant over time. Under this scenario, the prob-

ability of sampling a subject is directly proportional to its length, and the sampled

lifetimes are said to be length-biased.

Though nonparametric and semiparametric models are widely used for statistical

inference, one may wish to infer upon a data set using a parametric model. A one-

sample goodness-of-fit test is one in which the discrepancy between a nonparametric

estimator and some hypothesized parametric distribution is quantified. For length-

biased survival data subject to right-censoring, a nonparametric estimator has been


developed by Vardi (1989). In order to test several specific parametric models for a

particular set of data, we seek a versatile one-sample test, one that is adaptable to

various distributions. In this thesis, we propose a one-sample goodness-of-fit test for

length-biased right-censored data that can be applied to several parametric models.

We will use a Kolmogorov-Smirnov type test to evaluate the goodness-of-fit of

three parametric models to the CSHA data - a set of purely length-biased survival

data subject to right-censoring. Goodness-of-fit techniques aim to quantify the dis-

crepancy between an observed set of data and a hypothesized distribution, in hopes

of finding a model that accurately describes the population from which the data

arose. Kolmogorov-Smirnov type statistics rely on the empirical distribution func-

tion of the underlying data, which is the nonparametric maximum likelihood estima-

tor (NPMLE). In the case of full information (i.e. no censoring or truncation), the

empirical survival function is the NPMLE for a set of survival data.
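In the full-information case, that NPMLE is straightforward to compute. A minimal sketch in Python (the function name and sample data are illustrative, not from the thesis):

```python
def empirical_survival(data, x):
    """Empirical survival function: proportion of observed lifetimes exceeding x."""
    return sum(1 for xi in data if xi > x) / len(data)

# fully observed lifetimes: no censoring or truncation
lifetimes = [2.0, 3.5, 1.2, 4.8, 2.9]
empirical_survival(lifetimes, 3.0)  # 2 of 5 lifetimes exceed 3.0
```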

However, full information in survival analysis is generally not the case, and

consequently the empirical survival function is not an appropriate estimator. The

Kaplan-Meier (Kaplan & Meier 1958) estimator has been shown to be the NPMLE

for censored survival data, and has also been modified to handle the general prob-

lem of left-truncation (Turnbull 1976) when stationarity does not hold. This modi-

fied Kaplan-Meier estimator can be viewed as a conditional approach to estimation.

However, when data are properly length-biased, the unconditional NPMLE derived

by Vardi (1989) has been shown to be more efficient. As a result, this estimator will

be used to carry out our Kolmogorov-Smirnov type goodness-of-fit testing.

This thesis is organized as follows: Chapter 2 will present an overview of sur-

vival data, including length-biased survival data and how it arises in epidemiology

and medicine. Some fundamental quantities and estimators will also be reviewed.

Chapter 3 will detail how to construct the likelihood function for length-biased data,

review some common parametric models arising in survival analysis, as well as sev-

eral goodness-of-fit statistics used in modeling. Chapter 4 will be devoted to the


algorithms necessary to carry out our goodness-of-fit testing. In Chapter 5, the power

of the one-sample test will be investigated, and the results of our analysis using sur-

vival data from the CSHA will be presented. Finally, Chapter 6 serves as a conclusion

and highlights some routes for further research.


Chapter 2

Background of Survival Analysis

and Length-Biased Data

We will begin this chapter with an overview of lifetime data, where it arises, and some

common peculiarities involved with analysis, namely censoring and truncation. Next,

we will define and describe length-biased data and how it arises in epidemiology and

medicine. The next section will focus on two key functions relating to lifetime data -

the survival function and the hazard function. Following this we will look at how to

construct the likelihood function under several scenarios, and finally some common

estimators in survival analysis will be defined.

2.1 Overview of Lifetime Data

Survival data, also referred to as lifetime data or time-to-event data, arises in a

variety of areas including medicine, epidemiology, public health, biology, economics,

and manufacturing. The main goal in survival analysis is to study the time until

a particular event occurs. Let X be the time until a specific event occurs. X, the

duration under study, could be the time from onset of disease until death, time for a


disease to recur, time for an electrical component to fail, or time for a rehabilitated

drug user to relapse. Time may be measured, for example, in years, months or days,

or alternatively time may refer to the age of the subject when the event occurred. An

event is also often referred as a failure, although the event in question is not always

a negative one. For example, the duration under study could be the time taken for

completion of a university degree. In this case the failure is obtaining a diploma,

which is of course a positive occurrence.

The survival function, S(x), is a fundamental quantity used to describe time-

to-event data. S(x) is defined as the probability that an individual survives to time

x, P (X > x), and is the complement of F (x), the cumulative distribution function

(i.e. S(x) = 1 − F (x)). If X represents the time from onset of some disease until

death, then S(x) would represent the probability that an individual survives beyond

time x. In survival analysis, especially in the medical and epidemiological fields,

the probability of an individual surviving past a certain time point is generally of

more interest, hence S(x) is used instead of F (x). For a continuous lifetime variable

X, S(x) is a continuous, monotone decreasing function with S(0) = 1 and S(∞) =

lim_{x→∞} S(x) = 0. Figure 2.1 provides an example of the survival function.
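These properties are easy to verify for a specific parametric family. A short sketch using a Weibull survival function (the parameter values are chosen arbitrarily for illustration):

```python
import math

def weibull_survival(x, shape, scale):
    """Weibull survival function S(x) = exp(-(x/scale)^shape)."""
    return math.exp(-((x / scale) ** shape))

# S(0) = 1, S is monotone decreasing, and S(x) -> 0 as x grows
xs = [0.5 * i for i in range(61)]  # grid on [0, 30]
s = [weibull_survival(x, shape=1.5, scale=5.0) for x in xs]
```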

The analysis of survival data is complicated by a few common factors. To begin

with, the main variable under study is a positive random variable that is generally

not normally distributed. A second feature that poses difficulties during analysis

is censoring. In order to obtain a complete data set, each individual would need to

be followed from the initiation of the event until the event occurs. In this situation,

both the start and end time of the event under study are known, in which case

the entire lifetime is known. Due to issues such as time and cost, it is not always

feasible to follow a subject from initiation until they experience the event under

study, which means that the variable of interest may not be fully observed. Hence,

information collected on some subjects is unavoidably incomplete. Sometimes the

only information available is that the individual survived until at least a certain


[Figure: survival probability P(X > x) versus time, decreasing from 1 toward 0.]

Figure 2.1: Example of Survival Function

point in time. This situation is termed right-censoring. An individual dropping out

of the study, being lost to follow-up, or surviving past the conclusion of the study are

all examples of right-censoring. Here, we know that the event of interest will occur

to the right of our data point.

With right-censoring, data on each individual generally consists of a time under

study, and a binary variable, δ, that indicates whether this time is an event time

or a censoring time. Generally, δ=1 represents an event time, and δ=0 represents

a censoring time. So for example, if an individual has the corresponding data point

(x=62, δ=1), where x represents age, then we know that this individual experienced

the event under study at age 62. For an individual with the data point (x=57, δ=0),

we know that this individual was under study until age 57 and had not yet experienced

the event.

Left censoring occurs when the event is known to have occurred before a certain

time. In this case, the event of interest has already occurred at the time of observation,


but exactly when the event occurred is unknown. Consider studying the time until an

individual develops a certain disease which commences without detectable symptoms.

The exact onset of disease is unknown, and all that can be said is that the onset

occurred prior to the first positive test. Here, the event of interest occurred to the

left of our data point.

Interval censoring occurs when a failure time is known to fall between two time points.

Interval censoring is common in studies where the participants are not constantly

monitored, but instead observed at certain time points during the study. For example,

consider a cohort of individuals being followed until they contract malaria. When a

subject first tests positive for malaria, the exact time of disease contraction may

not be known, and the only conclusion we can draw about the event time is that it

occurred between the time of the positive test and the previous testing date. Note

that left-censoring is a form of interval censoring, since with left censoring we know

that the event occurred sometime between time zero and the time of observation.

Another attribute of lifetime data is truncation, which should not be confused

with censoring. Truncation refers to the idea that only individuals whose lifetimes fall

within a certain interval can be selected for the sample. No information is available

on individuals whose event times fall outside of this window, which is in contrast

to censoring where we have at least partial information on each individual. If the

survival time with a certain condition is being investigated, the participants must

survive long enough to take part in the study. Those who die before the study begins

do not have a chance of being included in the sample. For example, suppose we wish

to study how long patients who have suffered a stroke survive at home after their

initial hospitalization. Stroke patients who do not survive the initial hospitalization

will not be included in the sample. This is known as left-truncation of the sample.

Right-truncation occurs when an individual’s event time must be less than some

threshold in order to be included in the study.

The ideal setting for lifetime data is what is commonly referred to as an incident


[Figure: study timeline from beginning of study to end of study; lifetimes ending in failure are marked X, the rest are censored at the end of study.]

Figure 2.2: Incident study.

study. In this type of study, subjects are recruited before or at the time of initiation

of the event in question. Figure 2.2 illustrates this scenario.

The subjects are followed until the event occurs or the study ends, whichever

comes first. Those who have not experienced the event at conclusion of the study

period are censored. For the censored individuals, it is known that they survived

with the disease at least until the end of the study (note - individuals may also be

censored if they drop out of the study or are lost to follow up, in which case they will

be censored at the last time they were known to be alive). For analysis under this

scenario, available methods (e.g. Kaplan Meier estimator) require the assumption

of non-informative censoring. Non-informative censoring means that an individual’s

censoring time is independent of their failure time. In other words, non-informative

censoring means that someone who has been censored has the same future risk of failure as someone who remains under observation.
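The Kaplan-Meier product-limit estimator mentioned above can be sketched in a few lines of Python; this is a textbook implementation for illustration (the data are made up), not code from the thesis:

```python
def kaplan_meier(times, deltas):
    """Product-limit estimate of S(t) from right-censored data.

    times: observed times; deltas: 1 = event, 0 = censored.
    Returns a function S(t). Assumes non-informative censoring.
    """
    data = sorted(zip(times, deltas))
    n = len(data)
    steps = []  # (event time, survival factor 1 - d/n_at_risk)
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i  # subjects with observed time >= t
        d = sum(1 for tt, dd in data if tt == t and dd == 1)
        if d > 0:
            steps.append((t, 1 - d / at_risk))
        while i < n and data[i][0] == t:  # skip ties at t
            i += 1

    def S(t):
        prod = 1.0
        for et, factor in steps:
            if et <= t:
                prod *= factor
        return prod

    return S

S = kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0])
# S(1) = 4/5; S(3) = 4/5 * 2/3; S(4) = 4/5 * 2/3 * 1/2
```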


[Figure: disease onset times occurring before the beginning of the study; lifetimes still in progress at recruitment are observed, while those ending earlier (unrecorded incident cases) are unobserved.]

Figure 2.3: Prevalent study.

2.1.1 Length-Biased Data

It is often the case with epidemiological studies that prevalent cases with a certain

disease are identified through a cross-sectional study and followed forward in time,

and this is an example of how length-bias may arise in survival analysis. A cross-

sectional study provides a ‘snapshot’ of the characteristics of the population at a

particular point in time. For the recruited individuals, onset of the disease occurred

prior to the initiation of the study, and these lifetimes are said to be left-truncated.

Figure 2.3 illustrates this scenario.

Cross-sectional sampling from a prevalent cohort may be implemented because

the disease in question is quite rare, or simply due to certain time and cost constraints

that prevent the recruitment of incident cases. When individuals selected for a study

already have the disease in question, the situation is no longer an ideal one, and the

fact that lifetimes ascertained in this fashion are left-truncated must be taken into

account. Under this sampling scheme, it is more likely that longer-lived individuals

will be recruited into the study, and shorter-lived individuals may go completely


unnoticed. As the probability of recruiting a longer-lived individual is higher than

that of recruiting a shorter-lived individual, it follows that the prevalent population

is not representative of the incident population. If one is interested in estimating

the survival distribution from onset, using prevalent cases instead of incident cases

will overestimate the survival probabilities of the true population, since the sample

of prevalent cases is more likely to be made up of individuals who are longer-lived.

As an example, consider using data collected cross-sectionally on individuals

with dementia. In order to be included in the sample, the individuals must be alive

with dementia at the beginning of the study. Individuals with dementia who did not

survive long enough to be included in the study, will not be accounted for. Since it

is necessary to be alive at the time of recruitment, individuals with shorter survival

times are more likely to be missed by the study. Hence, the sample will be made

up of those who survive longer with the disease. Using this sample to estimate the

survival time of individuals with dementia in the population will overestimate the

true survival, since those who die quickly with dementia do not have an impact on

the estimation.

For left-truncated data to be properly length-biased, stationarity of the process

generating the onset times is required. The incidence process of the disease can

be assumed to be a segment of a stationary Poisson process if there has been no

epidemic of the disease during the onset times of the subjects (Asgharian et al. 2002).

When this assumption holds, the probability of an individual being included in the

study is directly proportional to their lifetime, and the sampled lifetimes are said

to be properly length-biased. For properly length-biased data, an unbiased density

function, fU(x), has the corresponding length-biased density, fLB(x), given by (Cox

1969)

fLB(x) = x fU(x) / µ,   (2.1.1)

where µ is the mean of fU(x).
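Relation (2.1.1) can be checked numerically. The sketch below uses a Weibull for fU(x); the parameters and integration grid are illustrative assumptions:

```python
import math

shape, scale = 2.0, 3.0

def f_u(x):
    """Unbiased Weibull density."""
    if x <= 0:
        return 0.0
    return (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-((x / scale) ** shape))

mu = scale * math.gamma(1 + 1 / shape)  # mean of fU

def f_lb(x):
    """Length-biased density fLB(x) = x fU(x) / mu, equation (2.1.1)."""
    return x * f_u(x) / mu

# trapezoidal check: fLB integrates to 1 and shifts mass upward
xs = [i * 0.01 for i in range(5001)]  # grid on [0, 50]

def trap(f):
    return sum(0.005 * (f(a) + f(b)) for a, b in zip(xs, xs[1:]))

total = trap(f_lb)                     # ~ 1
mean_lb = trap(lambda x: x * f_lb(x))  # exceeds mu: weight moves to larger x
```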


Figure 2.4: Common vs. random disease onset times.

To see the importance of the Poisson assumption, consider the diagrams in Figure

2.4. The dashed blue lines represent an individual’s lifetime with a particular disease,

from onset until death. In the first figure, all of the disease onset times are the same.

S signifies the beginning of the study. In order for an individual to be included in

the study, they must have a lifetime X ≥ S. The corresponding density in this case is fU(x)/S(s). In the second figure, disease onset times appear to begin randomly.

Here, the individual must be alive at the time of study recruitment, but shorter-lived

individuals still have the possibility of being selected. When the onset times are

generated according to a stationary Poisson process, the probability of selecting an

individual is directly proportional to their lifetime with the disease.
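This selection mechanism can be demonstrated by simulation. The sketch below assumes exponential lifetimes with mean 1 and onset times uniform over a long window, standing in for the stationary process; all names and parameter values are illustrative:

```python
import random

random.seed(0)
T = 50.0        # onsets occur uniformly on [-T, 0]; recruitment happens at time 0
pop_mean = 1.0  # exponential lifetimes with mean 1

sampled = []
for _ in range(200_000):
    onset = random.uniform(-T, 0.0)
    lifetime = random.expovariate(1.0 / pop_mean)
    if onset + lifetime > 0:  # alive at recruitment -> included in sample
        sampled.append(lifetime)

biased_mean = sum(sampled) / len(sampled)
# For exponential(1) lifetimes, the length-biased mean is E[X^2]/E[X] = 2,
# double the population mean: longer lifetimes are oversampled.
```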

Figure 2.5 shows an unbiased density along with its corresponding length-biased

density. Notice that the length-biased density shifts the weight toward higher values

of the variable. Note that how the length-biased density is shifted depends on the

density parameters. Figure 2.6 shows two different Weibull densities along with their

associated length-biased densities.

If the length-bias is not accounted for, the survival distribution will be overes-

timated. In terms of real applications, this overestimation can cause problems. For

example, if we overestimate the survival probabilities of individuals with dementia,


[Figure: an unbiased density and its length-biased counterpart, f(x) versus x; the length-biased density shifts weight toward larger x.]

Figure 2.5: Unbiased and length-biased densities.

[Figure: two panels, Weibull (shape=1.05, scale=1.5) and Weibull (shape=4, scale=3), each showing the unbiased density and its length-biased counterpart.]

Figure 2.6: Weibull biased and unbiased densities.


0 5 10 15 20 25 30

0.0

0.2

0.4

0.6

0.8

1.0

Time

S(x)

UnbiasedLength−Biased

Figure 2.7: Unbiased and length-biased survival functions.

this translates to an overestimation of predicted survival once diagnosed with demen-

tia, as well as an overestimation of the cost of dementia on the health care system.

Figure 2.7 illustrates how ignoring the length-bias leads to an overestimation of the

survival function.

Length-biased sampling has been studied for many years; the first paper known to detail the problem mathematically was written by Wicksell (1925). The phenomenon was further studied systematically by McFadden (1962), Blumenthal (1967),

and Cox (1969). The literature has since been extensively developed, both in theory

and application. In terms of advances in length-biased sampling in the medical and

epidemiological fields, Zelen & Feinleib (1969) made significant contributions in the

60’s and 70’s. Much of Zelen’s work concentrated on the early detection of chronic

diseases, with a focus on breast cancer. The above papers are important for the development of the theory of length-biased sampling; more important for this thesis, however, is length-biased sampling when time is the main variable of interest.

In medicine and epidemiology, length-bias arises most often through the sampling of

subjects from a prevalent cohort. Lagakos et al. (1988) recognized the application of

left-truncation in the study of AIDS from a prevalent cohort. Stern et al. (1997) analyzed data on patients with Alzheimer's disease from a prevalent cohort, but did not

take into account the length-bias. This resulted in an overestimation of the survival

function, which was validated by Wolfson et al. (2001), who showed that by correcting

for length-bias, the median survival of patients diagnosed with Alzheimer’s is much

shorter. Gao & Hui (2000) used a two-phase sampling scheme to estimate incidence,

again using data on patients with dementia.

When using a prevalent population to estimate the survival distribution from

onset, the stationarity of the incidence process plays an important role in which

estimation procedure should be used. When stationarity cannot be assumed, although

bias still exists in the prevalent population compared to the incident population,

the sampled lifetimes are not properly length-biased, and methods based on purely

length-biased data cannot be used. In addition, the censoring times are informative,

since the time from onset to recruitment is contained in both the full lifetime and

censoring time, so methods used for incident cases will not apply to prevalent cases

without adjustment. The bias stems from the lifetimes being left-truncated (since

the event has already occurred) and this can be dealt with by conditioning upon the

truncation times. This approach has been referred to as a ‘conditional approach’ as

the estimates obtained are conditional upon the smallest truncation time. Turnbull

(1976) developed an approach for dealing with left-truncation, which is an extension

of work done by Kaplan & Meier (1958) who developed the nonparametric product

limit estimator. Both the Kaplan-Meier estimator and the estimator modified for left-

truncation will be discussed in further detail in section 2.4. The asymptotics of the

product limit estimate with random truncation were derived by Wang et al. (1986),


and a nonparametric maximum likelihood estimation of the truncation distribution

was derived by Wang (1991). An advantage of the conditional approach to estimation

is that a truncating distribution need not be specified; however, when stationarity

holds, the conditional approach does not offer the most efficient estimates for the

survival function.

When stationarity can be assumed, an unconditional approach to estimation of

left-truncated data can be used. This approach provides estimates of the survival distribution from onset, and hence has been termed an ‘unconditional approach.’ Vardi

(1982, 1985) and Gill et al. (1988) developed an unconditional maximum likelihood

approach which allows one to recover the unbiased survival distribution. Vardi (1989)

derived the NPMLE for length-biased data with right-censoring; the asymptotics for

the NPMLE were later given by Vardi & Zhang (1992). The algorithm for recover-

ing the unbiased survival distribution developed by Vardi assumes that the number

of censored and uncensored observations in known a priori, which is a very strong

assumption. In a prevalent cohort study with follow-up, the number of censored and

uncensored cases are not known until the end of the study. If this assumption is not

fulfilled, though the likelihood remains the same, Vardi notes that the asymptotics

will need to be derived separately. These asymptotic results for the NPMLE of both

the length-biased and unbiased survival functions for length-biased, right-censored

data were derived by Asgharian et al. (2002) and Asgharian & Wolfson (2005). An

informal test for stationarity was developed by Asgharian et al. (2006) and the first

formal test for stationarity was offered by Addona & Wolfson (2006).

2.2 The Survival Function and Hazard Function

Before going any further, it is necessary to define some quantities and relations

used in the modeling of lifetime data. The notation used is adopted from Klein

& Moeschberger (2003).


Let X represent the time until a specific event occurs. The survival function, the

probability of an individual surviving past time x, is defined as

S(x) = P (X > x). (2.2.1)

If X is a continuous random variable, then S(x) is a continuous, strictly decreasing function. Note that S(x) is the complement of F(x), the cumulative distribution

function (cdf ), and so

S(x) = P (X > x) = 1− P (X ≤ x) = 1− F (x). (2.2.2)

Using the fact that ∫_0^∞ f(x)dx = 1 (X strictly positive),

S(x) = 1 − P(X ≤ x) = 1 − ∫_0^x f(t)dt = ∫_x^∞ f(t)dt. (2.2.3)

We also have

f(x) = dF(x)/dx = d(1 − S(x))/dx = −dS(x)/dx. (2.2.4)

An important property of the survival curve is that it is equal to one at time

zero, and is equal to zero as time approaches infinity. It is always a monotone,

nonincreasing function, and the rate of decline at time x is determined by the risk

of experiencing the event at that instant. The survival curve is extremely useful in

comparing mortality patterns; however, determining the nature of a failure pattern

by simply looking at the survival curve proves more difficult (Klein & Moeschberger

2003).

A second fundamental quantity in survival analysis is the hazard rate or hazard

function. The hazard rate, h(x), is defined by

h(x) = lim_{∆x→0} P(x ≤ X < x + ∆x | X ≥ x)/∆x. (2.2.5)

Looking at the numerator and denominator of the above expression separately

may help with interpretation of the hazard rate. The numerator gives the probability


that the event will occur between time x and x + ∆x, given that the event has not

already occurred. When divided by the denominator, the width of the interval, a

rate is obtained. By letting the width of the interval tend to zero, an instantaneous

rate of failure is obtained. Unlike the survival function which is bounded between 0

and 1, the hazard rate can range from zero to infinity. From the equation above, the

probability of an individual aged x failing in the next instant can be approximated

by h(x)∆x.

When X is a continuous random variable, the hazard rate can be expressed as

h(x) = f(x)/S(x). (2.2.6)

The cumulative hazard rate, which can be thought of as the accumulation of

hazard over time, is defined as

H(x) = ∫_0^x h(t)dt. (2.2.7)

Note some useful relations between the survival function and the hazard function:

h(x) = f(x)/S(x) = [−dS(x)/dx]/S(x) = −d ln(S(x))/dx, (2.2.8)

H(x) = ∫_0^x h(t)dt = −∫_0^x [d ln(S(t))/dt] dt = − ln[S(x)], (2.2.9)

and

S(x) = exp(−H(x)). (2.2.10)
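These identities are straightforward to check numerically at a point. A minimal sketch, using the exponential distribution with rate λ (formally introduced in Section 3.2) as the working example:

```python
import math

lam, x = 0.7, 2.3                 # hypothetical rate and time point
f = lam * math.exp(-lam * x)      # density f(x)
S = math.exp(-lam * x)            # survival function S(x)

h = f / S                         # hazard via (2.2.6); constant and equal to lam here
H = -math.log(S)                  # cumulative hazard via (2.2.9)
print(h, H, math.exp(-H))         # exp(-H) recovers S(x), as in (2.2.10)
```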

The hazard function is very useful in describing how the chance of an event

occurring varies with time. There are numerous different shapes that the hazard

function can take on, and some general shapes will be described, as well as scenarios

where these shapes may occur.

An increasing hazard shape is when the hazard increases as time goes on, and this

type of hazard is common. Natural aging and wear and tear are examples of increasing

hazard. A decreasing hazard shape is when the hazard rate decreases with time, which


is not quite as common. A scenario where an individual may experience a decreasing

hazard rate is following an organ transplant. When someone undergoes a transplant,

their hazard rate may be very high immediately after the surgery, and slowly decline

as they recover. A bath-tub shaped hazard initially decreases, then remains roughly constant for a period of time, and finally increases. This type of

hazard is applicable to the human population followed from birth. Babies experience

a higher hazard rate due to infant mortality in the early stages of life. After this stage

the hazard rate decreases and remains relatively constant, and then the hazard rate

will begin to increase as aging proceeds. Another interesting example that exhibits a

bath-tub shaped hazard is an airplane flight: most accidents occur at, or close to, take-off or landing. Lastly, there is the hump-shaped hazard, in which the hazard rate initially increases and then declines. An example

where this hazard may be observed is after surgery. Initially, there is an increasing

hazard rate due to the possibility of infection or related complications, and as the

patient recovers the hazard rate gradually declines.

2.3 Likelihood Construction

As stated above, the incompleteness inevitable in lifetime data can pose certain difficulties during analysis, and it is necessary to pay careful attention to censoring and truncation when constructing the likelihood function. Whether an observation is censored, truncated, or an exact lifetime will have a different effect on the likelihood, and considering what type of information is provided by each observation will aid in a better understanding of the construction of the likelihood function.

When an exact lifetime is observed, information is gained on the chance of the

event occurring at this exact time. This probability can be approximated by the corresponding probability density function (pdf) evaluated at the exact lifetime. When a right-censored observation occurs, the information given is that the individual survived past the censoring time. This probability can be approximated by the survival

function evaluated at the censoring time. When a left-censored observation occurs,

we know that the event has already taken place, and this probability is approximated

by the corresponding cdf evaluated at the censoring time. For an interval-censored

observation, it is known that the event occurred between two time points, and hence

the information gained is the probability that the event occurred during this interval

(this can be calculated by using either S(x) or F (x)). The idea is the same for

truncated observations, but in this case a conditional probability is required. When

left-truncation occurs, only the individuals who survive past the truncation time will

be observed, and so the probability of the event occurring is conditional on the individual surviving at least until the truncation time. It follows that this quantity is approximated by the ratio of the pdf evaluated at the event time to the survival function evaluated at the truncation time. Right-truncation is similar, except now the cdf evaluated at the truncation time appears in the denominator, since this probability is conditional upon not experiencing the event by the end of the study period.

The same idea follows for interval truncation. Table 2.1 summarizes how the likelihood function is affected under the different censoring and truncation schemes just

described (Klein & Moeschberger 2003). Note that x represents an event time, Cr

and Cl represent right and left censoring times respectively, and (YL, YR) represents

the truncation window.

From here, the likelihood, L, for a data set is constructed by taking the product

of each individual component. For example, consider a data set that consists of

observed lifetimes and right-censored observations. The ith observed lifetime and

censoring time are denoted by xi and ri respectively. Let D and R represent the set

of observed lifetimes and right-censoring times respectively. Then,

L ∝ ∏_{i∈D} f(xi) ∏_{i∈R} S(ri). (2.3.1)

The above likelihood can easily be adapted to the other censoring and truncation


Table 2.1: Censoring Schemes and the Likelihood Function

    Likelihood Contribution      Scheme
    f(x)                         Observed lifetime
    S(Cr)                        Right-censoring
    1 − S(Cl)                    Left-censoring
    S(Cr) − S(Cl)                Interval-censoring
    f(x)/[1 − S(YR)]             Right-truncation
    f(x)/S(YL)                   Left-truncation
    f(x)/[S(YL) − S(YR)]         Interval-truncation

schemes discussed above.
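As a concrete sketch of (2.3.1), the following Python function evaluates the log-likelihood for a small hypothetical data set of exact lifetimes and right-censored times under an assumed exponential model, for which f(x) = λ exp(−λx) and S(x) = exp(−λx):

```python
import math

def exp_log_likelihood(rate, observed, censored):
    # log L = sum_{i in D} log f(x_i) + sum_{i in R} log S(r_i), as in (2.3.1)
    ll = sum(math.log(rate) - rate * x for x in observed)  # log f(x) = log(rate) - rate*x
    ll += sum(-rate * r for r in censored)                 # log S(r) = -rate*r
    return ll

observed = [1.2, 0.7, 3.4]   # exact lifetimes (hypothetical)
censored = [2.0, 4.1]        # right-censoring times (hypothetical)

# For the exponential model the maximizer has the closed form
# (number of events) / (total observed time).
rate_mle = len(observed) / (sum(observed) + sum(censored))
```

Evaluating exp_log_likelihood at rates near rate_mle confirms that rate_mle attains the larger value.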

2.4 Estimators of the Survival and Cumulative Hazard Functions

Due to the incompleteness inherent in lifetime data, special techniques are required

to properly draw inference about the survival distribution. The first estimator that

will be described in this section is the Product-Limit estimator, also known as the

Kaplan-Meier estimator, which gives an estimate for the survival function from onset.

Next, the Nelson-Aalen estimator which estimates the hazard rate will be described.

These estimates are appropriate for right-censored survival data, with non-informative

censoring.

The Product-Limit estimator was proposed by Kaplan & Meier (1958) in order

to estimate the proportion of the population whose lifetimes surpass time t, when

there is right-censoring in the data. Kaplan and Meier refer to this quantity as P (t),

but it will be denoted as S(t) here. Suppose we have lifetime data on n individuals.

It is possible that two events may occur at the same time (due to rounding), and it is


necessary to account for these ties. Let D be the number of distinct event times, and

let ti be the ith distinct event time, such that t1 < t2 < · · · < tD. Let di represent the

number of events at time ti. Define Yi to be a count of the number of

individuals who are alive at time ti, or who experience the event at time ti (Yi can be

thought of as the number of subjects at risk of experiencing the event at time ti). The

Kaplan-Meier estimator is defined over the range of time where there is data, and is a

decreasing step function with jumps at the event times. Note that the Kaplan-Meier

estimator is not well defined for values of t greater than the largest event time in the data

set (Kaplan & Meier 1958). The formula for the Kaplan-Meier estimator, S(t), as

well as the estimator’s variance, V [S(t)], are given below.

S(t) = 1 if t < t1, and S(t) = ∏_{ti≤t} [1 − di/Yi] if t ≥ t1, (2.4.1)

and

V[S(t)] = S(t)² ∑_{ti≤t} di/[Yi(Yi − di)]. (2.4.2)
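A direct implementation of (2.4.1) and (2.4.2) is short. The sketch below assumes Yi > di at every event time (see the caution for left-truncated data later in this section):

```python
import numpy as np

def kaplan_meier(times, events):
    # times : observed time for each subject (event or censoring)
    # events: 1 if the time is an event, 0 if right-censored
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_distinct = np.unique(times[events == 1])          # t_1 < t_2 < ... < t_D
    surv, var = [], []
    s, greenwood = 1.0, 0.0
    for ti in t_distinct:
        d = int(np.sum((times == ti) & (events == 1)))  # d_i: events at t_i
        y = int(np.sum(times >= ti))                    # Y_i: number at risk at t_i
        s *= 1.0 - d / y                                # product-limit step (2.4.1)
        greenwood += d / (y * (y - d))                  # running sum in (2.4.2)
        surv.append(s)
        var.append(s * s * greenwood)
    return t_distinct, np.array(surv), np.array(var)
```

With the hypothetical input times [1, 2, 2, 3, 4] and event indicators [1, 1, 0, 1, 0], the estimates at t = 1, 2, 3 are 0.8, 0.6, and 0.3.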

Due to the relationship between the survival function and cumulative hazard

function shown above, the Kaplan-Meier estimator can also be used to estimate the cumulative hazard function by H(t) = − ln(S(t)).

As an alternative to using the Kaplan-Meier estimator to estimate the cumulative

hazard function, there is a second estimator for H(t) that performs better for smaller

sample sizes. This estimator is due to work by Nelson (1972) and Aalen (1978), and is

appropriately known as the Nelson-Aalen estimator. It is defined up until the largest

observation time as

H(t) = 0 if t < t1, and H(t) = ∑_{ti≤t} di/Yi if t ≥ t1. (2.4.3)


The variance can be estimated by (Aalen 1978)

σ²_{HNA} = ∑_{ti≤t} di/Yi². (2.4.4)
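The Nelson-Aalen estimator (2.4.3) and its variance (2.4.4) follow the same pattern; a sketch, with the data again hypothetical:

```python
import numpy as np

def nelson_aalen(times, events):
    # H(t) = sum_{t_i <= t} d_i / Y_i  (2.4.3),
    # with estimated variance sum_{t_i <= t} d_i / Y_i^2  (2.4.4).
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_distinct = np.unique(times[events == 1])
    H, var = [], []
    h_sum, v_sum = 0.0, 0.0
    for ti in t_distinct:
        d = int(np.sum((times == ti) & (events == 1)))
        y = int(np.sum(times >= ti))
        h_sum += d / y
        v_sum += d / y ** 2
        H.append(h_sum)
        var.append(v_sum)
    return t_distinct, np.array(H), np.array(var)
```

Via S(t) = exp(−H(t)), the output also yields an alternative estimate of the survival function.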

Again, due to the relationship between the survival function and the hazard func-

tion, the Nelson-Aalen estimator can also be used to estimate the survival function.

The estimators described above are suitable for right-censored data. As left-

truncation is common in survival analysis, it is necessary to have estimators that take

into account both right-censoring and left-truncation. The estimators defined above

can be slightly modified in order to accommodate this. Let Lj represent the age at

which the jth individual entered into the study, and Tj represent their corresponding

time of death or censoring. Again, D is the number of distinct times, and ti the ith

distinct event time with t1 < t2 < · · · < tD. Yi, the number of individuals who are

at risk of experiencing the event at time ti, is slightly different when left-truncation

is involved. Define Yi as a count of individuals who entered into the study before

time ti and who have a study time of at least ti. Equivalently, the risk set under left-truncation is the number of individuals with Lj < ti ≤ Tj. With this new definition

of Yi, the quantities above can be used for estimation when left-truncation is present

(Klein & Moeschberger 2003).
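The modified risk set can be sketched directly; the entry ages Lj and exit times Tj below are hypothetical, and the convention Lj < ti ≤ Tj (a subject who dies at ti is at risk at ti) is used:

```python
import numpy as np

def risk_set_sizes(entry, exit, event_times):
    # Y_i under left-truncation: number of subjects with L_j < t_i <= T_j,
    # i.e. entered before t_i and still under observation at t_i.
    entry = np.asarray(entry, dtype=float)
    exit = np.asarray(exit, dtype=float)
    return np.array([int(np.sum((entry < ti) & (ti <= exit))) for ti in event_times])

L = [0.0, 1.0, 2.0]   # entry (truncation) ages, hypothetical
T = [3.0, 4.0, 2.5]   # death or censoring times, hypothetical
print(risk_set_sizes(L, T, [1.5, 2.75]))
```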

However, it is important to note that now these estimates are conditional. For

the Kaplan-Meier estimate, this means that the survival function for time t is an

estimate of the probability of survival beyond t, given survival to at least the smallest

truncation time. One caution when using the Kaplan-Meier estimator modified for

left-truncation must be noted. If it happens that for some ti, Yi = di, then by

definition the survival estimate will be zero for all values of t beyond this point. With

left-truncated data, Yi and di may be equal for small values of t, and even though beyond this point more survivors and deaths will be observed, the survival estimate

will be zero, which is very uninformative. In circumstances such as these, the survival

function is commonly estimated by only considering death times beyond this point


(Klein & Moeschberger 2003).

These modified estimators assume that the truncating distribution is unknown,

and so conditioning upon the truncation times results in little information loss. When

the data are properly length-biased, assumptions can be made about the truncating

distribution, and in this scenario an unconditional analysis will be more informative. This unconditional approach, as well as common parametric models in survival

analysis and goodness-of-fit tests will be discussed in detail in Chapter 3.


Chapter 3

Length-Biased Likelihood,

Parametric Survival Models, and

Goodness-of-Fit Tests

This chapter will begin with a look at how to write the likelihood function for a length-

biased sample with right-censoring. Next we will talk about parametric models that

are commonly used in survival analysis. Finally, some goodness-of-fit tests used in the modeling will be reviewed.

3.1 Likelihood for a Length-Biased Sample

In the case of full information (i.e. no censoring or truncation), the empirical survival

function provides a nonparametric maximum likelihood estimate for the true survival

distribution from onset. This is of little use, as full information in survival analysis

is unrealistic in most real life applications, due to budget and time constraints. The

Kaplan-Meier estimator is the nonparametric maximum likelihood estimate for lifetime data with right-censoring, and reduces to the empirical survival function in the

case of no censoring. The Kaplan-Meier estimator is an efficient estimator for lifetime data in the presence of noninformative right-censoring, and can be modified to

accommodate left-truncation, with the assumption that the truncating distribution is

unknown. This modified Kaplan-Meier estimate conditions upon the truncation time,

and can be referred to as a conditional approach to estimation. Conditioning upon

the truncation time results in little information loss when the truncating distribution

is unknown. However, if assumptions can be made about the truncating distribution,

incorporating this information into the estimation procedure provides a more efficient

estimate.

An unconditional approach to nonparametric estimation of the survival function

for length-biased survival data has been developed by Vardi (1989), but requires the

assumption that the truncation times follow a uniform distribution. This is referred

to as the assumption of stationarity and implies that the initiation times of a disease

follow a stationary Poisson process (Wang 1991). A stationary Poisson process can

be assumed so long as there has been no epidemic of the disease during the time

period that covers the onset times of the subjects under study. An informal test

for stationarity was investigated by Asgharian et al. (2006), and the first formal

test for stationarity of the incidence rate in prevalent cohort studies was proposed

by Addona & Wolfson (2006). If this assumption holds, lifetimes sampled from a

prevalent cohort are considered to be length-biased. For length-biased lifetimes, the

probability of selecting a subject from the target population is directly proportional

to their lifetime. When estimating the survival distribution for a certain disease, the

lifetime corresponds to the time from onset of the disease until death from the disease.

When it is safe to assume stationarity, it has been shown that this unconditional

estimate is more efficient than the modified Kaplan-Meier estimate (Asgharian et al.

2002).

Before describing Vardi's method of estimation, it is necessary to describe length-biased sampling mathematically. Suppose we have a random variable, Y , with


corresponding cdf FU(y). Y represents the true, unbiased event times. The length-

biased distribution of Y , FLB(y), is defined as (Cox 1969)

FLB(y) = (1/µ) ∫_0^y x dFU(x), (3.1.1)

where µ = ∫_0^∞ x dFU(x) < ∞. The above distribution is the corresponding length-biased distribution of FU. The length-biased distribution arises when a random variable, with cdf FU, is observed with probability proportional to its length (Cox 1969).

In the case where FU has a density, fU , with respect to Lebesgue measure, the length-

biased distribution can be written as

FLB(y) = (1/µ) ∫_0^y x fU(x) dx, (3.1.2)

which implies that the length-biased density is (Correa & Wolfson 1999)

fLB(y) = y fU(y)/µ, y ≥ 0. (3.1.3)
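A simulation sketch of this density: draw a large unbiased sample from a Weibull FU (parameters hypothetical), resample each value with probability proportional to its length, and compare the resulting mean with E[Y²]/µ, the mean implied by (3.1.3):

```python
import numpy as np

rng = np.random.default_rng(1)
population = 3.0 * rng.weibull(2.0, size=200_000)   # unbiased lifetimes from F_U

# Length-biased selection: each value is sampled with probability
# proportional to its length, as in f_LB(y) = y f_U(y) / mu.
weights = population / population.sum()
lb_sample = rng.choice(population, size=100_000, replace=True, p=weights)

mu = population.mean()
lb_mean_theory = np.mean(population ** 2) / mu      # E[Y^2]/mu, the mean of f_LB
print(lb_sample.mean(), lb_mean_theory)             # close, and both exceed mu
```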

Suppose we obtain a sample from FLB, Y1, . . . , Yn, that is, a sample from the

length-biased distribution. The Yi’s can be thought of as sampled prevalent cases,

where stationarity of the disease onset times holds. When sampling from a prevalent

cohort, the subjects already have the disease in question. They are selected into

the study and, ideally, followed until they experience the event in question. We can

split their lifetime into two segments: the time from disease onset until recruitment

into the study, and the time from study recruitment until the event occurs. These

time periods are respectively termed the backward and forward recurrence times.

The backward recurrence time is also referred to as the truncation time, and will be

denoted by T . The forward recurrence time is also known as the residual lifetime,

and will be denoted by R. Note that each Yi in the above sample can be represented

as the sum of Ti and Ri. However, as has been discussed, it is not always possible to

follow every individual under study until the event occurs. Hence, the Ri’s may be

subject to censoring. Define Ci to be random residual censoring variables with cdf


FC(c). The observed residual lifetime will now be such that only the minimum of Ri

and Ci is observed, and therefore the ith observed or censored lifetime, Xi, can be

represented as

Xi = Ti + (Ri ∧ Ci). (3.1.4)

We can refer to Ai = Ti + Ci as a full censoring time, and Bi = Ti + Ri as

an observed lifetime. Complete lifetimes and complete censoring times both contain

a common backward recurrence time, and therefore are not independent. However,

the residual lifetimes and residual censoring times are independent in most practical

situations. Further, assuming that the Ci's are independent of the (Ti, Ri) pairs, we have

Cov(A,B) = Cov(T + R, T + C)
= E[(T + R)(T + C)] − E[T + R]E[T + C]
= E[T² + TR + TC + RC] − E²[T] − E[T]E[R] − E[T]E[C] − E[R]E[C]
= Var(T) + Cov(T,R) > 0. (3.1.5)
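The sign of this covariance is easy to confirm by simulation; in the sketch below all distributions are hypothetical, and R is drawn independently of T so that Cov(T, R) = 0 and (3.1.5) reduces to Var(T):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
T = rng.exponential(1.0, size=n)   # backward recurrence (truncation) times
R = rng.exponential(1.5, size=n)   # residual lifetimes, independent of T here
C = rng.exponential(2.0, size=n)   # residual censoring, independent of (T, R)

A = T + C                          # full censoring times
B = T + R                          # full lifetimes
cov_AB = np.cov(A, B)[0, 1]
# With C independent of (T, R), (3.1.5) gives Cov(A, B) = Var(T) + Cov(T, R),
# which reduces to Var(T) = 1 in this simulation.
print(cov_AB)
```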

Recall that earlier δ was defined as the variable indicating whether a lifetime is

censored or not, with δ = 1 for an uncensored observation, and δ = 0 for a censored

observation. In terms of Ri and Ci, δ is defined as

δi = 1 if Ri ≤ Ci, and δi = 0 if Ri > Ci. (3.1.6)

Survival data on each individual can be represented as a pair, (Xi, δi), with X

indicating the lifetime and δ indicating whether or not it is a censored observation.

Note that censoring can occur either during the follow-up period, or at the end of the

follow-up period.

Let the data described above be written as Wi = (Ti, Ri∧Ci, δi) as opposed to the

above (Xi, δi), for i = 1, . . . , n. With the assumption that the residual censoring times


are independent of the Ti’s and Ri’s, the independence of the Wi’s follows (Bergeron

2006).

Consider two sets of observations, Ui's and Yi's, from some length-biased lifetime distribution FLB. Let Vi = HiYi, where the Hi follow a U(0, 1) distribution.

This type of data distortion has been termed “multiplicative censoring”, since the

incompleteness of the V ’s comes from them being scaled down by the H’s (Vardi

1989).

Suppose the goal is to nonparametrically estimate FLB, using a set of full observations (U's) and multiplicatively censored observations (V's). The likelihood for this setting, derived by Vardi, is

L(θ) = ∏_{i=1}^n Li(θ) = ∏_{i=1}^n [fU(ui; θ)/µ(θ)]^δi [∫_{w≥vi} fU(w; θ)/µ(θ) dw]^(1−δi), (3.1.7)

where µ(θ) is the mean of fU . The multiplicatively censored times are V = HY ,

where H ∼ U(0, 1), which is our truncation setting.
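As a parametric sketch of (3.1.7), take fU exponential with rate θ, so that µ(θ) = 1/θ and ∫_{w≥v} fU(w; θ)dw = exp(−θv). This is a parametric evaluation of the likelihood, not Vardi's NPMLE, and the data values are invented for illustration:

```python
import math

def vardi_log_lik(theta, u_complete, v_censored):
    # Uncensored factor: f_U(u; theta) / mu(theta) = theta^2 * exp(-theta*u)
    # Censored factor:   (1/mu) * integral_{w >= v} f_U(w) dw = theta * exp(-theta*v)
    ll = sum(2.0 * math.log(theta) - theta * u for u in u_complete)
    ll += sum(math.log(theta) - theta * v for v in v_censored)
    return ll

u = [1.0, 2.0]   # fully observed (length-biased) lifetimes, hypothetical
v = [0.5]        # multiplicatively censored observations, hypothetical

# Setting the score to zero gives a closed-form MLE for this working model:
theta_hat = (2 * len(u) + len(v)) / (sum(u) + sum(v))
```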

Vardi’s problem A, which will be discussed in Chapter 4, is equivalent to following a fixed m individuals to the event, and throwing out a fixed n individuals

at recruitment once their onset time is known. This is an unrealistic setting as the

number of censored and uncensored observations is not known in advance, and so

all individuals must be followed forward for the duration of the study. This type

of situation can be likened to our truncation setting with length-biased data. Vardi

states that under cross-sectional sampling, the likelihood will be proportional to L(θ),

though the setup differs from multiplicative censoring. For cross-sectional sampling,

the asymptotic properties of the MLE’s obtained from L(θ) have been derived by

Asgharian et al. (2002).

A detailed description of Vardi’s problem and how the NPMLE can be obtained

through an EM algorithm will be given in Chapter 4.


3.2 Parametric Models in Survival Analysis

Though nonparametric and semi-parametric models are extremely useful in analyzing survival data, parametric models are also widely used. When fitting a parametric

model to a set of data, estimation is reduced to only a few parameters. Parametric

survival models allow one to easily describe the behaviour of lifetimes, to compute values efficiently, and to model changes in parameters more intuitively.

Also, parametric models are advantageous in building an understanding of the survival and hazard functions described in Chapter 2 (Klein & Moeschberger 2003).

Some common models used in survival analysis will be briefly discussed, including

the exponential, Weibull, Pareto, log-normal, and log-logistic model, as well as the

generalized Gamma family. Note that the class of densities considered here are used

to model fU .

The exponential model is a relatively simple model with some notable properties,

namely, its constant hazard rate. Recall the pdf of the exponential distribution:

f(x) = λ exp(−λx). (3.2.1)

From here the survival and hazard functions are easily derived as

S(x) = ∫_x^∞ λ exp(−λt) dt = exp(−λx), (3.2.2)

and

h(x) = λ exp(−λx) / exp(−λx) = λ. (3.2.3)

It can be easily shown that the mean and standard deviation of the exponential

distribution are both equal to 1/λ.

Notice that for all values of x, the hazard rate will remain constant at λ. This

means that the time until failure does not depend on any past history, which can prove

to be quite restrictive in real-life applications. As Klein and Moeschberger point out,
a constant hazard rate has been termed the “no-aging” or “old-as-good-as-new” property.
This property is likewise known as the memory-less property, which can be illustrated from

P(X > x + z | X > x) = P(X > x + z, X > x) / P(X > x)
                     = P(X > x + z) / P(X > x)
                     = exp(−λ(x + z)) / exp(−λx)
                     = exp(−λz)
                     = P(X > z). (3.2.4)
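As a quick numerical sanity check (not part of the thesis), the identity in (3.2.4) can be verified directly; the rate λ and the times x and z below are arbitrary illustrative choices:

```python
import math

lam = 0.5          # hypothetical rate parameter, chosen for illustration
x, z = 2.0, 1.5    # arbitrary illustrative times

def surv(t):
    """Exponential survival function S(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

# P(X > x + z | X > x) = S(x + z) / S(x), which should equal P(X > z) = S(z)
conditional = surv(x + z) / surv(x)
```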

It follows that the exponential model is more applicable to an industrial setting,

for example, modeling the failure rate of electrical components. Since human beings
age and their hazard increases, this model is not often applicable in the medical field.

The Weibull distribution is more widely used in survival analysis. It has relatively

simple survival and hazard functions, but can accommodate an increasing, decreasing,

or constant hazard rate. Recall the pdf of the Weibull distribution:

f(x) = αλxα−1 exp(−λxα). (3.2.5)

The survival function can be derived as

S(x) = ∫_x^∞ αλt^{α−1} exp(−λt^α) dt = exp(−λx^α), (3.2.6)

which implies that the hazard function is equal to

h(x) = αλx^{α−1} exp(−λx^α) / exp(−λx^α) = αλx^{α−1}. (3.2.7)

The mean of the Weibull distribution is given by µ = Γ(1 + 1/α) / λ^{1/α}. Here α is referred to as
the shape parameter and λ as the rate parameter. Note that when α = 1,
the exponential distribution is obtained, and hence the exponential distribution is a
special case of the Weibull distribution. As mentioned above, the hazard rate can be
increasing, decreasing, or constant, corresponding to α > 1, α < 1, or α = 1
(the exponential case).

The Pareto model is not as commonly used in survival analysis as say, the Weibull

distribution, but it has closed forms for the survival function, density function and

hazard rate, making it easy to work with. The density function, survival function,

and hazard function are given as:

f(x) = θλ^θ / x^{θ+1}, (3.2.8)

S(x) = λ^θ / x^θ, (3.2.9)

h(x) = θ / x, (3.2.10)

where θ > 0, λ > 0, and x ≥ λ. The mean of the Pareto distribution is given by
µ = θλ/(θ − 1) and exists only if θ > 1. Due to its heavy skewness, the Pareto model is
common in economics for modeling the distribution of incomes (Rice 1995).

A random variable X follows a log-normal distribution if Y = lnX follows a

normal distribution. Some authors have observed that ages at the onset of certain

diseases can be approximated by the log-normal distribution (Feinleib 1960, Horner

1987). The corresponding pdf and survival function are

f(x) = exp[−(1/2)((ln x − µ)/σ)²] / (x(2π)^{1/2}σ) (3.2.11)

and

S(x) = 1 − Φ[(ln x − µ)/σ] (3.2.12)

where µ and σ are the mean and standard deviation of Y , and Φ(x) is the cdf for a

standard normal variable.

The hazard function does not reduce to anything less complicated, so it will just

be written as

h(x) = f(x)/S(x) = exp[−(1/2)((ln x − µ)/σ)²] / ( x(2π)^{1/2}σ {1 − Φ[(ln x − µ)/σ]} ). (3.2.13)


The mean of a log-normal variable is exp(µ + σ²/2), again, where µ and σ are
the mean and standard deviation of Y. The value of the hazard function is always

0 at time zero, and then it increases to a maximum, followed by a decrease, with

the hazard rate approaching zero as x tends to infinity (i.e., the hazard is hump-shaped).
A decreasing hazard for larger values of x is not always realistic, but in
situations where large values of x are not important, this model may be applicable
(Klein & Moeschberger 2003).

A random variable X is said to follow a log-logistic distribution if Y = lnX

follows a logistic distribution. The log-logistic distribution closely resembles the log-

normal distribution, having heavier tails, and the advantage that the survival and

hazard functions are more tractable. The pdf for Y is

f(y) = exp((y − µ)/σ) / (σ[1 + exp((y − µ)/σ)]²) (3.2.14)

where µ and σ are respectively the mean and scale parameter of Y. The corresponding

pdf, survival function and hazard function for X are

f(x) = αθx^{α−1} / [1 + θx^α]², (3.2.15)

S(x) = 1 / (1 + θx^α), (3.2.16)

h(x) = αθx^{α−1} / (1 + θx^α), (3.2.17)

where θ = exp(−µ/σ) and α = 1/σ > 0. The mean of the log-logistic distribution
is µ = π csc(π/α) / (αθ^{1/α}). The numerator of the hazard function resembles the Weibull hazard
function, but it behaves differently due to the denominator. Namely, the hazard is
monotone decreasing for α ≤ 1, while for α > 1 the hazard rate increases to a maximum
at ((α − 1)/θ)^{1/α} and then decreases to 0 as x approaches infinity.

The gamma distribution is a little more complicated to work with, but has prop-

erties similar to the Weibull distribution. Its pdf is given by

f(x) = λ^β x^{β−1} exp(−λx) / Γ(β), (3.2.18)

where Γ(β) = ∫_0^∞ x^{β−1} exp(−x) dx. β is referred to as the shape parameter. When

β > 1, the hazard function is monotone increasing with h(0) = 0 and h(x) → λ as
x → ∞. When β < 1, the hazard function is monotone decreasing, with h(x) → ∞ as x → 0
and h(x) → λ as x → ∞. By adding an extra parameter, the generalized gamma

distribution is obtained, allowing for additional flexibility in the hazard function. The

generalized gamma distribution has pdf

f(x) = αλ^{αβ} x^{αβ−1} exp(−(λx)^α) / Γ(β). (3.2.19)

When both α and β are equal to 1, the exponential distribution is obtained.

When β = 1 the Weibull distribution is obtained, and when α = 1 the gamma

distribution is obtained. The generalized gamma distribution will be useful when in-

vestigating how to generate length-biased samples, which will be discussed in Chapter

4.

In order to draw inferences from a set of data using a parametric model, it is necessary
to obtain estimates of the model parameters. This is often done through maximum

likelihood estimation. Estimates can be obtained by using the data to maximize the

likelihood either analytically, when possible, or numerically otherwise. A popular nu-

merical technique for likelihood maximization is the Newton-Raphson method. Some

other techniques for likelihood maximization include steepest ascent, quasi-Newton,

and conjugate gradient methods (Lawless 2003b). Once maximum likelihood estimates,
θ̂ML, of the model parameters have been obtained, one can use the model to

compute various quantities, such as the survival function, hazard function, mean or

median.

3.3 Tests for Goodness-of-Fit

This section will focus on one-sample goodness-of-fit tests. One-sample goodness-of-fit

tests compare a sample to a specified parametric distribution (either fully specified,


or up to unknown parameters). This is in contrast to two-sample goodness-of-fit

tests where the goal is to determine whether two independent samples come from the

same unspecified distribution. It should be noted that an asymptotically most efficient
nonparametric test for the equality of survival distributions, for right-censored data
with biased sampling, has recently been proposed by Ning et al. (2010).

Checking the adequacy of a model is an extremely important part of statisti-

cal inference. Suppose we have a random sample, x1, . . . , xn, from some unknown

distribution, and would like to know how well a particular model fits this set of ob-

servations. Goodness-of-fit statistics attempt to quantify the discrepancy between

values expected under a specified model and the observed values from a random sam-

ple. Goodness-of-fit testing generally involves investigating this random sample to

test the null hypothesis that it is, in fact, from some known, specified distribution.

For example, the null hypothesis could specify some distribution F ∗(x), and then

x1, . . . , xn will be compared to F ∗(x) to see if it is reasonable to conclude that F ∗(x)

is the true underlying distribution of our random sample. Model checking needs vary

depending on the model’s strength and complexity of its assumptions (Lawless 2003b).

Graphical procedures, such as probability and residual plots, can be used as an

informal first step for model checking. However, due to the high amount of variation

inherent in graphical procedures, it can prove difficult to determine whether the

features of the plot are due to natural variation. Even when data are generated from

the assumed model, the eye will not always come to the correct conclusion. As a

result, formal hypothesis tests are required.

A well-known goodness-of-fit statistic used for testing the hypothesis that X has

some specified distribution is Pearson’s χ2 statistic. The null hypothesis that the

distribution of the observed random variable is F ∗(x) is tested against the alternative

hypothesis that the distribution of the observed random variable is different than


F ∗(x). Put another way,

H0 : F(x) = F∗(x) (3.3.1)

H1 : F(x) ≠ F∗(x). (3.3.2)

Use of Pearson’s χ2 test requires that the observed values are independent and

identically distributed. It is commonly used for categorical data, but can be extended

to accommodate the continuous case. Consider a data set with n observations of a

random variable X. These n observations can be grouped into c classes, and the

frequency of each class recorded. Below is a table representing a data set organized

in this fashion.

Class      C1  C2  C3  ...  Cc
Frequency  O1  O2  O3  ...  Oc

where Oi is the realized number of observations in the ith class.

Let p∗i represent the probability of an observation being in class i under the

assumption of the null hypothesis (i.e. under F ∗(x) being the true distribution of X).

The expected number of observations in each class is then defined as

Ei = np∗i, i = 1, . . . , c. (3.3.3)

Pearson’s χ2 test statistic T is given by

T = Σ_{i=1}^{c} (Oi − Ei)²/Ei = Σ_{i=1}^{c} Oi²/Ei − n. (3.3.4)

If some of the Ei are small, the chi-square distribution may not be appropriate, and

it may be necessary to combine cells with small Ei with another class in a suitable

way. The general rule of thumb is that each Ei is at least 5 (Conover 1971).

For large values of n, T approximately follows a chi-square distribution with (c − 1) degrees of
freedom. The null hypothesis is rejected for values of T greater than χ²(1−α), the
(1 − α) quantile of a chi-square distribution with (c − 1)
degrees of freedom. Note that if

parameters are being estimated from the sample, a chi-square distribution with fewer

degrees of freedom is required. One degree of freedom is subtracted for each estimated

parameter. If k is the number of parameters estimated and c the number of classes, then
the appropriate chi-square distribution is one with (c − 1 − k) degrees of freedom
(Conover 1971).

If Pearson’s chi-squared test is to be used for continuous distributions, the ob-

served data are grouped into discrete intervals. The expected frequency in each in-

terval is the product of the probability associated with each interval and the number

of observations, n. However, due to this discretization of the data, Pearson’s chi-

squared test is not that powerful for continuous data (D’Agostino & Stephens 1986).

The choice of location and number of discrete intervals, and how the observations

fall within them, has an effect on the performance of the test. For continuous data,

using a test that is based on the empirical distribution is a more general and adequate

approach.
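Before turning to those tests, the grouped chi-square recipe just described can be sketched in code. This is a minimal NumPy sketch (not from the thesis): the exponential model, sample size, and equiprobable class boundaries are illustrative choices, and one parameter is estimated, so c − 1 − k = 5 − 1 − 1 = 3 degrees of freedom apply.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200)   # illustrative continuous sample

# Fit the exponential rate by maximum likelihood: lambda-hat = 1 / sample mean.
lam_hat = 1.0 / x.mean()

# Group the data into c = 5 classes with roughly equal observed counts.
edges = np.quantile(x, np.linspace(0.0, 1.0, 6))
edges[0], edges[-1] = 0.0, np.inf
observed, _ = np.histogram(x, bins=edges)

# Expected counts under the fitted model: n * [F*(upper) - F*(lower)].
cdf = 1.0 - np.exp(-lam_hat * edges)
cdf[-1] = 1.0
expected = len(x) * np.diff(cdf)

# Pearson statistic, compared against the chi-square with 3 degrees of freedom.
T = ((observed - expected) ** 2 / expected).sum()
CHI2_95_DF3 = 7.815   # 0.95 quantile of a chi-square with 3 df
reject = T > CHI2_95_DF3
```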

There are several one-sample goodness-of-fit tests that involve comparing the

empirical distribution function with a hypothesized distribution function, based on

some measure of distance between the two. Recall that the empirical distribution

function, Fn(x), is defined as the fraction of xi’s from a random sample of x1, . . . , xn

that are less than or equal to x, or

Fn(x) = (1/n) Σ_{i=1}^{n} I(xi ≤ x). (3.3.5)

The empirical distribution is a useful estimator of the true unknown distribu-

tion function. In fact, in the case of complete information, the empirical distribution

function is the nonparametric maximum likelihood estimate for F (x). If a good agree-

ment does not exist between the empirical distribution function and the hypothesized

distribution function, then it seems reasonable to conclude that the true distribution


function is different than the hypothesized distribution function. Formal goodness-of-

fit tests based on this idea have been studied in depth, and a few well-known tests will

be discussed here. It is noted by Conover (1971) that statistics which are functions

of the distance between the empirical distribution function and the hypothesized dis-

tribution function are known as Kolmogorov-type statistics, whereas statistics which

are functions of the vertical distance between two empirical distribution functions are

known as Smirnov-type statistics.

Let F ∗(x) be the hypothesized distribution. Define D+ to be the largest vertical

difference when Fn(x) is greater than F ∗(x), and D− the largest vertical distance

when Fn(x) is smaller than F ∗(x). Mathematically, D+ and D− are defined as

D+ = supx{Fn(x)− F ∗(x)}, D− = sup

x{F ∗(x)− Fn(x)}. (3.3.6)

Perhaps one of the most well-known statistics is the Kolmogorov statistic, D,

defined as

D = sup_x |Fn(x) − F∗(x)| = max(D+, D−), (3.3.7)

which was introduced by Kolmogorov in 1933. D’Agostino & Stephens (1986) refer

to the above statistics as ‘supremum statistics’.

A second class of statistics measuring discrepancies between the empirical and

hypothesized distribution function is given by the Cramer-von Mises family and de-

fined as

Q = n ∫_{−∞}^{∞} {Fn(x) − F∗(x)}² ψ(x) dF∗(x), (3.3.8)

where ψ(x) is a weight function for the squared difference between Fn(x) and F ∗(x).

If ψ(x) = 1, the Cramer-von Mises statistic, W 2, is obtained. If ψ(x) = [F ∗(x)(1 −

F ∗(x))]−1, the Anderson-Darling statistic (1952), A2, is obtained. Large values of

the above statistics provide evidence against the null hypothesis. Finite sample and

asymptotic distributions are available, when testing a completely specified distribu-

tion.


Written in the above form, the expressions appear difficult to work with. In
order to obtain more straightforward expressions, an important theorem, known as
the Probability Integral Transform theorem, must be recalled:

Theorem 3.3.1 Let X have continuous cdf FX(x) and define the random variable
Y as Y = FX(X). Then Y is uniformly distributed on (0, 1), that is, P(Y ≤ y) = y,
0 < y < 1.

The proof of this theorem can be found in Casella & Berger (2001). Consequently,
under the null hypothesis, the F∗(x(i))'s are distributed as the order statistics of a random sample
distributed uniformly on the interval (0, 1). So the distributions of D, W², and A²

do not depend on the distribution F ∗(x), assuming the null hypothesis is true, which

is clear from the alternate expressions (D’Agostino & Stephens 1986):

D = max_{1≤i≤n} [ i/n − F∗(x(i)), F∗(x(i)) − (i − 1)/n ], (3.3.9)

W² = Σ_{i=1}^{n} [ F∗(x(i)) − (i − 0.5)/n ]² + 1/(12n), (3.3.10)

A² = −n − (1/n) Σ_{i=1}^{n} (2i − 1)[ ln F∗(x(i)) + ln{1 − F∗(x(n+1−i))} ]. (3.3.11)

In the case of a fully specified F ∗(x), distribution theory for the above Kolmogorov-

Smirnov type statistics is well developed and finite sample and asymptotic distribu-

tions are available.
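The order-statistic forms (3.3.9)–(3.3.11) translate directly into code. The following is a minimal NumPy sketch for a complete (uncensored) sample; the exponential null and the sample itself are illustrative choices, not from the thesis:

```python
import numpy as np

def edf_statistics(x, cdf):
    """Kolmogorov D, Cramer-von Mises W^2, and Anderson-Darling A^2 for a
    complete sample x against a fully specified hypothesized cdf F*."""
    n = len(x)
    u = np.sort(cdf(np.asarray(x)))      # F*(x_(i)) for the ordered sample
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)
    d_minus = np.max(u - (i - 1) / n)
    D = max(d_plus, d_minus)
    W2 = np.sum((u - (i - 0.5) / n) ** 2) + 1.0 / (12 * n)
    # u[::-1] gives F*(x_(n+1-i)) when indexed alongside i
    A2 = -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))
    return D, W2, A2

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100)
D, W2, A2 = edf_statistics(sample, lambda t: 1.0 - np.exp(-t))
```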

Unfortunately, it is rare that one wishes to test a fully specified distribution, as

parameters are often estimated from the data (Lawless 2003b). In this case, often

the maximum likelihood estimates are used in place of the unknown parameters. The
above statistics can be calculated in the same way; however, the distribution
theory is now more complicated. The distribution theory under a fully specified model

does not apply in this situation, as the distributions of these statistics will depend on

the distribution being tested, the parameters estimated and method of estimation,


as well as the sample size (D’Agostino & Stephens 1986). However, as will be done

here, simulations often suffice in approximating p-values (Lawless 2003b). This idea

will be discussed in more detail in the following chapter.

As stated earlier, three models will be investigated for goodness-of-fit to length-

biased, right-censored survival data. Many of the goodness-of-fit statistics discussed

above rely on the empirical distribution function, which is complicated by censoring

and truncation. It follows that the nonparametric estimate for the survival function

developed by Vardi (1989) will be used in place of the empirical survival function.

Estimation in survival analysis uses S(x), and for the purpose of testing, F (x) and

S(x) are essentially equivalent quantities, since F (x) = 1 − S(x). Theoretically, all

of the above tests could be extended using nonparametric estimates of the survival

function.


Chapter 4

Algorithms

This chapter will focus on the algorithms necessary to test the goodness-of-fit of three

parametric models to a set of length-biased survival data with right-censoring. First

we will look at how to generate length-biased Weibull, log-normal and log-logistic

samples with uniform left-truncation and right-censoring. Next, Vardi’s algorithm

for a nonparametric estimation of the survival function corrected for length-bias will

be given. In the last section we will detail how simulation can be used to approximate

the p-values for our goodness-of-fit tests.

4.1 Simulating Length-Biased Samples

This section will focus on how to generate length-biased data. One way to obtain

a length-biased sample of size n is to generate a large amount of data, say N ≫ n,

from the appropriate distribution. We then sample n units from N , with probability

proportional to size. The resulting sample of size n will be a length-biased sample

from the given distribution. However, this is not an efficient way to generate data,

especially when dealing with large sample sizes.
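The brute-force approach just described can be sketched as follows; this is an illustrative NumPy sketch (the pool size N, the sample size n, and the unbiased Weibull pool are arbitrary choices, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 100_000, 500

# Large pool of unbiased lifetimes from the target distribution F.
pool = rng.weibull(2.0, size=N)

# Sample n units with probability proportional to length (size-biased draw).
probs = pool / pool.sum()
biased = rng.choice(pool, size=n, replace=False, p=probs)
```

Because longer lifetimes are over-sampled, the mean of `biased` should exceed the mean of the original pool, which is one quick check that the bias was induced.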

Here we will show that by making the appropriate transformations, certain


length-biased distributions have alternative representations, making the simulation

of the data more efficient. Using this approach, we need only to generate n val-

ues from the corresponding distribution and make an appropriate transformation

in order to obtain our length-biased sample. Specifically, upon transformation, the

length-biased Weibull distribution can be obtained through a gamma distribution,

the length-biased log-normal distribution can be obtained through a normal distribu-

tion, and the length-biased log-logistic distribution can be obtained through a beta

distribution.

The length-biased distribution and density of a random variable Y were given as

FY(t) = ∫_0^t x f(x) dx / µ, (4.1.1)

and

fY(t) = t f(t) / µ. (4.1.2)

Correa & Wolfson (1999) show how the generalized gamma distribution can be

used to generate length-biased Weibull samples. Recall the generalized gamma dis-

tribution discussed in the previous chapter. The density of a GG(k, λ, p) was given

as

f(x) = λp(λx)^{pk−1} exp[−(λx)^p] / Γ(k), (4.1.3)

where µ = Γ(k + 1/p) / (λΓ(k)).

The length-biased distribution of a generalized gamma random variable is therefore given by

FY(t) = ∫_0^t x λp(λx)^{pk−1} exp[−(λx)^p] λΓ(k) dx / (Γ(k) Γ(k + 1/p))
      = ∫_0^t λp(λx)^{p(k+1/p)−1} exp[−(λx)^p] dx / Γ(k + 1/p), (4.1.4)

which is the distribution of a GG(k + 1/p, λ, p) random variable. Recall that the GG

distribution reduces to an exponential(λ) when k = p = 1, a gamma(k, λ) when


p = 1, and a Weibull(λ, p) when k = 1. From equation (4.1.4) we see that the

form of the length-biased distribution of an exponential(λ) is gamma(2, λ), and the

length-biased distribution of a gamma(k, λ) is gamma(k+ 1, λ). For a Weibull(λ, p),

the length-biased distribution is of the form GG(1 + 1/p, λ, p).

Correa & Wolfson (1999) show that by letting z = (λx)^p in equation (4.1.4), we
obtain

FY(t) = ∫_0^{(λt)^p} z^{(k+1/p)−1} e^{−z} / Γ(k + 1/p) dz, (4.1.5)

which shows that FY(t) can be written as FY(t) = FZ(h(t)), where FZ ∼ gamma(k + 1/p, 1)
and h(t) = (λt)^p. h(t) is non-negative, strictly increasing and continuous, and
its inverse is given by

g(s) = s^{1/p} / λ. (4.1.6)

Using the gamma distribution, which is standard for all statistical packages,

we can easily generate survival times when data come from a length-biased Weibull

distribution. This is done as follows:

Algorithm 1

• Generate a gamma(1 + 1/p, 1) random variable, Z, and

• Take W = g(Z) where g is given in equation (4.1.6).

The random variable W follows a length-biased Weibull distribution with parameters

λ and p.
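Algorithm 1 can be sketched in a few lines of NumPy; the function name and parameter values below are illustrative choices, not from the thesis:

```python
import numpy as np

def length_biased_weibull(n, lam, p, rng=None):
    """Algorithm 1: draw Z ~ gamma(1 + 1/p, 1) and return W = Z**(1/p) / lam,
    a sample from the length-biased Weibull(lam, p) distribution."""
    rng = rng or np.random.default_rng()
    z = rng.gamma(shape=1.0 + 1.0 / p, scale=1.0, size=n)
    return z ** (1.0 / p) / lam       # g(s) = s^(1/p) / lam, equation (4.1.6)

w = length_biased_weibull(10_000, lam=1.0, p=2.0, rng=np.random.default_rng(3))
```

For λ = 1 and p = 2, the length-biased mean is E[X²]/E[X] = Γ(2)/Γ(1.5) ≈ 1.128, which the sample mean of `w` should approximate.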

Recall the density for a log-normal random variable as

f(x) = exp[−(1/2)((ln x − µ)/σ)²] / (x(2π)^{1/2}σ), (4.1.7)


with corresponding mean exp(µ + σ²/2), where µ and σ² are the mean and variance
of ln X. The length-biased distribution is then given by

FY(t) = ∫_0^t x exp[−(1/2)((ln x − µ)/σ)²] dx / (x(2π)^{1/2}σ exp(µ + σ²/2))
      = ∫_0^t exp(−µ − σ²/2) exp[−(1/2)((ln x − µ)/σ)²] dx / ((2π)^{1/2}σ). (4.1.8)

Letting z = ln x, we obtain

FY(t) = ∫_{−∞}^{ln t} exp(z − µ − σ²/2) exp[−(1/2)((z − µ)/σ)²] dz / ((2π)^{1/2}σ)
      = ∫_{−∞}^{ln t} exp[−(1/2)((z² − 2µz + µ² − 2zσ² + 2µσ² + σ⁴)/σ²)] dz / ((2π)^{1/2}σ)
      = ∫_{−∞}^{ln t} exp[−(1/2)((z − (µ + σ²))/σ)²] dz / ((2π)^{1/2}σ), (4.1.9)

which shows that FY(t) can be written as FY(t) = FZ(h(t)), where FZ ∼ normal(µ + σ², σ²)
and h(t) = ln t. h(t) is strictly increasing and continuous. Its inverse is given by

g(s) = e^s. (4.1.10)

Note that this representation of length-biased log-normal data was given by Patil

& Rao (1978).

The algorithm for generating length-biased log-normal data is as follows:

Algorithm 2

• Generate a normal(µ+ σ2, σ2) random variable, Z, and

• Take W = g(Z) where g is given in equation (4.1.10).

The random variable W follows a length-biased log-normal distribution with parame-

ters µ and σ.
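A NumPy sketch of Algorithm 2 follows; the function name and parameter values are illustrative choices, not from the thesis:

```python
import numpy as np

def length_biased_lognormal(n, mu, sigma, rng=None):
    """Algorithm 2: draw Z ~ normal(mu + sigma^2, sigma^2) and return
    W = exp(Z), a sample from the length-biased log-normal(mu, sigma)
    distribution."""
    rng = rng or np.random.default_rng()
    z = rng.normal(loc=mu + sigma ** 2, scale=sigma, size=n)
    return np.exp(z)                  # g(s) = e^s, equation (4.1.10)

w = length_biased_lognormal(20_000, mu=0.0, sigma=0.5, rng=np.random.default_rng(4))
```

For µ = 0 and σ = 0.5, the length-biased mean is E[X²]/E[X] = exp(µ + 3σ²/2) ≈ 1.455, which the sample mean of `w` should approximate.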


Correa & Wolfson (1999) detail how to generate length-biased log-logistic sam-

ples, using a one-parameter log-logistic distribution. Note that the two-parameter

log-logistic distribution is the reciprocal of the one-parameter log-logistic distribu-

tion, with an extra rate parameter. Algorithm 3, which follows, gives the procedure
for generating samples from a two-parameter length-biased log-logistic distribution.
This is an extension of the results of Correa & Wolfson (1999).

As mentioned earlier, the two-parameter log-logistic distribution, as given in

Klein & Moeschberger (2003), has the following pdf and mean:

f(x) = αθx^{α−1} / [1 + θx^α]² (4.1.11)

µ = π csc(π/α) / (αθ^{1/α}). (4.1.12)

Let θ = λ^α. The equations for f(x) and µ become

f(x) = αλ^α x^{α−1} / [1 + (λx)^α]² (4.1.13)

µ = π csc(π/α) / (αλ). (4.1.14)

The mean, µ, can be represented using the gamma function:

µ = π csc(π/α)/(αλ) = π/(αλ sin(π/α)) = Γ(1/α)Γ(1 − 1/α)/(αλ) = Γ(1 + 1/α)Γ(1 − 1/α)/λ, (4.1.15)

using the relationships csc(x) = 1/sin(x), xΓ(x) = Γ(1 + x), and π/sin(πx) = Γ(x)Γ(1 − x).

The length-biased distribution for a log-logistic random variable is given by

FY(t) = ∫_0^t x αλ^α x^{α−1} λ dx / ([1 + (λx)^α]² Γ(1 + 1/α) Γ(1 − 1/α))
      = ∫_0^t αλ^{α+1} x^α dx / (Γ(1 + 1/α) Γ(1 − 1/α) [1 + (λx)^α]²)
      = ∫_0^t αλ(λx)^α dx / (Γ(1 + 1/α) Γ(1 − 1/α) [1 + (λx)^α]²)
      = ∫_0^t αλ exp[α ln(λx)] dx / (Γ(1 + 1/α) Γ(1 − 1/α) (1 + exp[α ln(λx)])²). (4.1.16)


Taking z = α ln(λx), we obtain

FY(t) = ∫_{−∞}^{α ln(λt)} αλ exp(z) exp(z/α) dz / (Γ(1 + 1/α) Γ(1 − 1/α) (1 + exp(z))² αλ)
      = ∫_{−∞}^{α ln(λt)} exp(z(1 + 1/α)) dz / (Γ(1 + 1/α) Γ(1 − 1/α) (1 + exp(z))²). (4.1.17)

Letting u = (1 + e^z)^{−1}, we obtain

FY(t) = −∫_1^{(1 + exp[α ln(λt)])^{−1}} u^{−1/α} (1 − u)^{1/α} du / (Γ(1 + 1/α) Γ(1 − 1/α))
      = ∫_{(1 + exp[α ln(λt)])^{−1}}^{1} u^{−1/α} (1 − u)^{1/α} du / (Γ(1 + 1/α) Γ(1 − 1/α)). (4.1.18)

So FY(t) can be written as FY(t) = P(Z ≥ h(t)), where Z follows a Beta(1 − 1/α, 1 + 1/α)
distribution and h(t) = (1 + exp[α ln(λt)])^{−1} = 1/(1 + (λt)^α). h(t) is non-negative,
strictly decreasing and continuous. Its inverse is given by

g(s) = ((1 − s)/s)^{1/α} / λ. (4.1.19)

Under the original parametrization for the log-logistic distribution, the inverse function
is given by

g(s) = ((1 − s)/(θs))^{1/α}. (4.1.20)

The algorithm for generating length-biased log-logistic data is as follows:

Algorithm 3

• Generate a beta(1 − 1/α, 1 + 1/α) random variable, Z, and

• Take W = g(Z) where g is given in equation (4.1.20).

The random variable W follows a length-biased log-logistic distribution with parame-

ters α and θ.
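A NumPy sketch of Algorithm 3 follows; the function name and parameter values are illustrative (note the Beta parameters require α > 1, and the mean of the length-biased log-logistic is finite only for α > 2):

```python
import numpy as np

def length_biased_loglogistic(n, alpha, theta, rng=None):
    """Algorithm 3: draw Z ~ beta(1 - 1/alpha, 1 + 1/alpha) and return
    W = ((1 - Z) / (theta * Z))**(1/alpha), a sample from the length-biased
    log-logistic(alpha, theta) distribution (requires alpha > 1)."""
    rng = rng or np.random.default_rng()
    z = rng.beta(1.0 - 1.0 / alpha, 1.0 + 1.0 / alpha, size=n)
    return ((1.0 - z) / (theta * z)) ** (1.0 / alpha)   # g in (4.1.20)

w = length_biased_loglogistic(20_000, alpha=4.0, theta=1.0, rng=np.random.default_rng(5))
```

For α = 4 and θ = 1, the length-biased mean is E[X²]/E[X] = Γ(3/2)Γ(1/2) / (Γ(5/4)Γ(3/4)) = √2, which the sample mean of `w` should approximate.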


Using the three algorithms stated above, we can easily and efficiently generate length-

biased Weibull, log-normal and log-logistic samples.

We wish to generate samples with uniform left-truncation and right-censoring.

Once we have the generated length-biased times, the next step is to split the lifetimes

into two parts; the truncation time, T , and the residual lifetime, R. The truncation

time is the time from onset of disease until the beginning of the study, and the residual

lifetime is the time from the beginning of the study until the event occurs. Given

a generated time, xi, the truncation time, ti, is generated uniformly on the interval

(0, xi). The residual lifetime is the difference between xi and ti, namely ri = xi − ti.

We also require a corresponding censoring indicator, δi, for each generated time. This

is obtained by generating a random censoring variable, ci, and comparing it to ri such

that

δi = 1 if ri ≤ ci, and δi = 0 if ri > ci. (4.1.21)

The censoring times can be generated in various ways, and three different ap-

proaches will be used here. We chose to employ three different censoring schemes to

investigate if the type of censoring has any effect on the results of the goodness-of-fit

tests. The first is a fixed censoring approach, which takes ci = c for i = 1, . . . , n. In

this case the censoring time point for every individual in the study is the same. This

is a realistic approach as it is a common scenario that the end of the study period is

the same for all individuals, and on this end date each individual is recorded as either

censored or not censored, depending on their status.

A second approach to generate censoring times is to use some known (essentially

positive) distribution. We chose to use a normal distribution with mean larger than

3σ to avoid negative values that would need to be resampled, and out of convenience.

Exponential censoring is also a common choice for censoring distributions in simu-


lations. Here, the censoring times are not all the same, allowing for some variation.

This is also a realistic approach to censoring. It is not always possible, especially in

larger studies, to follow up with every individual on the same day.

The third censoring approach depends on having a real data set, which is
not necessary for all simulations. Since Ri and Ci are assumed to be independent, we
can obtain an estimate of the survival function of the residual censoring times, SC(c),
through the usual Kaplan-Meier estimator, with the censoring times treated as the
events of interest. Then P(C = ci) = SC(ci−) − SC(ci). If, by
chance, SC(c) is undefined beyond some cmax, we can set SC(cmax) = 0 without loss
of generality. We then sample from the residual censoring times in our data set
according to these probabilities, and the sampled residual censoring times become the
ci's. By comparing each generated ri to its sampled ci, δi is obtained.

Now we have all the tools necessary to generate length-biased samples with left-

truncation and right-censoring, and the following algorithm provides the steps to do

so.

Algorithm 4

• Use the data to estimate the appropriate parameters, depending on which model

is being used. Fix n.

• Generate n length-biased times, x, using either Algorithm 1, 2, or 3, depending on the model being used.

• For each xi, generate a truncation time ti from a U(0, xi).

• Compute the residual lifetime, ri, for each observation as ri = xi − ti.

• Generate the censoring times, c, using one of the three censoring approaches.

• Let y_i = t_i + (r_i ∧ c_i) and δ_i = I[r_i < c_i].


During analysis it is not necessary to keep track of t_i and r_i separately, and so the generated data have the form (y_i, δ_i).
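The steps of Algorithm 4 can be sketched in Python. The thesis's Algorithms 1-3 for generating length-biased times are not reproduced in this excerpt, so the sketch below substitutes a weighted-resampling approximation for a Weibull model, with a fixed censoring scheme; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_lb_censored(n, shape, scale, c_fixed, pool_size=100_000):
    # Approximate length-biased Weibull draws by resampling a large pool
    # with probability proportional to x, since f_LB(x) is proportional to x f(x).
    pool = scale * rng.weibull(shape, size=pool_size)
    x = rng.choice(pool, size=n, p=pool / pool.sum())
    t = rng.uniform(0.0, x)              # truncation times t_i ~ U(0, x_i)
    r = x - t                            # residual lifetimes r_i = x_i - t_i
    c = np.full(n, c_fixed)              # fixed censoring of the residuals
    y = t + np.minimum(r, c)             # observed times y_i = t_i + (r_i ∧ c_i)
    delta = (r < c).astype(int)          # delta_i = I[r_i < c_i]
    return y, delta
```

Swapping the line that builds `c` for a normal or resampled censoring scheme reproduces the other two approaches described above.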

4.2 Nonparametric Estimation of Length-Biased

Survival Data with Right-Censoring

In this section we will detail an algorithm introduced by Vardi (1989) to find a

nonparametric estimate for the unbiased distribution of the survival function arising

from properly length-biased data with right-censoring. In Vardi’s 1989 paper he

details Problem A, which is as follows: Consider a sample which has m independent

complete observations and n independent incomplete observations. The m complete

observations, X1, . . . , Xm are drawn from a distribution G. Consider the random

variable U which is independently selected from a uniform distribution on the interval

(0, 1). The n incomplete observations, Z1, . . . , Zn are such that Zi = YiUi, where

Yi has the same distribution, G, as the X’s. The incompleteness of the Z’s is a

consequence of their being scaled down by the U ’s, and this form of censoring can

be referred to as “multiplicative censoring” (Vardi 1989). Based on the sample of

X1, . . . , Xm and Z1, . . . , Zn, the nonparametric maximum likelihood estimate of G

will be derived. In other words, the goal is to estimate G, based on a set of complete

and incomplete observations from G. Vardi gives the likelihood for the observations

in Problem A as

L(G) = \prod_{i=1}^{m} G(dx_i) \prod_{i=1}^{n} \int_{y \geq z_i} \frac{1}{y}\, G(dy).   (4.2.1)

Note that the set of complete data can be considered as the uncensored obser-

vations and the set of incomplete data as the censored observations. The data we

are considering can be thought of as the pairs (t_i, 1) (the uncensored observations) and (t_i, 0) (the censored observations), which correspond to the x_i's and z_i's, respectively.

tively. Vardi shows that the above likelihood can be maximized nonparametrically


by assigning positive weights only on the observed xi’s and zi’s, since if mass were

placed anywhere else this would only decrease the likelihood. Let w_1 < w_2 < · · · < w_N

represent the entire sorted data set, that is, both the complete and incomplete ob-

servations, in increasing order. If there are no ties in the data set then N = m + n,

otherwise N < n + m. In theory ties will not exist in the data if the underlying

distribution is continuous, but there may be ties in real applications.

Define ξj to be the number of complete observations at time wj, and ζj to be the

number of incomplete observations at wj. Define p = (p1, . . . , pN)′ to be a probability

vector where pj = p(wj) = G(dwj). Now the likelihood can be expressed as

L(p) = \prod_{j=1}^{N} p_j^{\xi_j} \Bigg( \sum_{k=j}^{N} \frac{1}{w_k} p_k \Bigg)^{\zeta_j}.   (4.2.2)

Because the data set contains incomplete observations, it can be treated as an incomplete-data problem. Vardi notes that the EM algorithm is a natural way to maximize the likelihood in the above equation, since our data set is incomplete and the form of the complete data is known. The algorithm works as follows: given an estimate for p, call it p^old, the conditional expected number of complete observations at w_j, given the observed data and the previous probability vector p^old, is assigned to each p_j^new. In order for p^new to be a probability vector, each p_j^new is divided by the total number of observations. Vardi's algorithm can be written in two steps:

Algorithm 5

• Begin with an arbitrary probability vector, say p^{old}, such that

\sum_{j=1}^{N} p_j^{old} = 1, \qquad p_j^{old} > 0 \quad \text{for } j = 1, \ldots, N.


• Replace each p_j^{old} with

p_j^{new} = \frac{1}{m+n} E\Bigg( \sum_{i=1}^{m} I(x_i = w_j) + \sum_{i=1}^{n} I(y_i = w_j) \,\Bigg|\, (x_1, \ldots, x_m, z_1, \ldots, z_n),\ p^{old} \Bigg)
          = \frac{1}{m+n} \Bigg\{ \xi_j + \frac{p_j^{old}}{w_j} \sum_{k=1}^{j} \zeta_k \Bigg( \sum_{i=k}^{N} \frac{p_i^{old}}{w_i} \Bigg)^{-1} \Bigg\}.   (4.2.3)

In the case of full information (i.e., no censored observations), p is fixed and no iterations take place. Note that for our length-biased sampling setting, p represents probabilities from the length-biased distribution (i.e., G can be likened to F_LB discussed earlier). A consistent estimate of the length-biased distribution is obtained because the iteration (4.2.3) converges to the unique maximizer of (4.2.2), by the properties of the EM algorithm.
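A vectorized sketch of Vardi's EM update (4.2.3), assuming no ties other than exact duplicates (which `np.unique` collapses into the counts ξ_j and ζ_j), might look like:

```python
import numpy as np

def vardi_em(x_complete, z_incomplete, tol=1e-10, max_iter=10_000):
    # Pool and sort the data; xi_j / zeta_j count complete / incomplete
    # observations at each distinct point w_j.
    data = np.concatenate([x_complete, z_incomplete])
    w, idx = np.unique(data, return_inverse=True)
    N, m, n = len(w), len(x_complete), len(z_incomplete)
    xi = np.bincount(idx[:m], minlength=N).astype(float)
    zeta = np.bincount(idx[m:], minlength=N).astype(float)
    p = np.full(N, 1.0 / N)                 # arbitrary positive starting vector
    for _ in range(max_iter):
        q = p / w
        tail = np.cumsum(q[::-1])[::-1]     # sum_{i=k}^{N} p_i / w_i, for each k
        inner = np.cumsum(zeta / tail)      # sum_{k<=j} zeta_k / tail_k
        p_new = (xi + q * inner) / (m + n)  # the EM update (4.2.3)
        if np.max(np.abs(p_new - p)) < tol:
            return w, p_new
        p = p_new
    return w, p
```

With no incomplete observations the update returns the empirical masses ξ_j/m after a single pass, matching the "full information" remark above.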

By using the inverse length-bias transformation to adjust the probabilities in p,

the unbiased distribution can be found. p is adjusted as follows:

p_{U,j} = \frac{p_j / w_j}{\sum_{i=1}^{N} p_i / w_i}.   (4.2.4)

The unbiased survival function is calculated as

S_V(w_k) = 1 - \sum_{i=1}^{k} p_{U,i}.   (4.2.5)
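The two adjustments (4.2.4) and (4.2.5) amount to a reweighting followed by a cumulative sum; a minimal sketch:

```python
import numpy as np

def unbias(w, p):
    # Inverse length-bias weighting (4.2.4): p_{U,j} proportional to p_j / w_j.
    pu = (p / w) / np.sum(p / w)
    # Unbiased survival function (4.2.5): step down by p_{U,i} at each w_i.
    surv = 1.0 - np.cumsum(pu)
    return pu, surv
```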

4.3 Simulation and Estimation of p-values

In this section we will show how simulation can be used to obtain an approximate

p-value for our goodness-of-fit tests. We will employ a Kolmogorov-Smirnov type

statistic in order to assess the goodness-of-fit of the Weibull, log-normal, and log-

logistic models to our real set of length-biased, right-censored survival data. Recall

that the survival distribution is equal to 1 − F (x). The Kolmogorov statistic was


defined in Chapter 3 as

D = \max |F_n(x) - F^*(x)|,   (4.3.1)

where F_n(x) is the empirical distribution and F^*(x) is the hypothesized distribution. Replacing F_n(x) with 1 - S_n(x) and F^*(x) with 1 - S^*(x), we obtain

D = \max |(1 - S_n(x)) - (1 - S^*(x))| = \max |S^*(x) - S_n(x)|,   (4.3.2)

and so D can be calculated using the survival function instead of the cdf, as will be

done here.

Since we are dealing with length-biased, right-censored survival data, we will

use Vardi’s algorithm to compute the nonparametric maximum likelihood estimate,

and this estimate will be used in place of the empirical distribution function when

computing D. Let SV (x) represent the estimate of the survival function corrected for

length-bias obtained from the data using Vardi’s algorithm.

When discussing some different goodness-of-fit techniques in Chapter 3, it was

noted that it is rarely the case that one wishes to test a fully specified distribution.

Often, we wish to test that a given data set is of some distributional form, say Sθ(x),

with unknown parameters, θ = (θ1, . . . , θk). For example, we may want to test that

a data set is distributed according to a Weibull distribution, with unknown shape

and scale parameters. It follows that it is necessary to first use the data to estimate

the parameters, which can be done through maximum likelihood estimation. Let θ̂ represent the vector of estimated parameters, and let S*_θ̂(x) represent the hypothesized survival distribution with the estimated parameters. The hypotheses being tested become

H_0 : S(x) = S*_θ̂(x)
H_1 : S(x) ≠ S*_θ̂(x),   (4.3.3)

and so in order to compute the goodness-of-fit tests we simply replace S*(x) with S*_θ̂(x). However, the test quantity obtained no longer corresponds to a specific


p-value, and simulation is required in order to roughly estimate the p-value for our

test.

Once we have used the data to estimate the parameters of the hypothesized

distribution, as well as the nonparametric maximum likelihood estimate according to

Vardi’s algorithm, the Kolmogorov-Smirnov type statistic can be calculated as

d_data = max |S_V(x) − S_θ̂(x)|.   (4.3.4)

P_{S_θ̂}(D ≥ d_data) can be used to roughly approximate the p-value (Ross 2006). The

simulation is to be done as follows:

Algorithm 6

• First use the data under study to estimate the parameters θ by θ̂, then compute

ddata as shown in equation (4.3.4).

• Generate a sample of size n from S_θ̂. Estimate the parameters of the simulated data by θ̂_sim. Estimate the npmle of the simulated data, S_V,sim, according to the Vardi algorithm. Compute

d_sim = max |S_V,sim(x) − S_θ̂_sim(x)|.   (4.3.5)

Note whether d_sim,i ≥ d_data. Repeat many times, say N. The p-value is approximated by

\hat{p} = \frac{1}{N} \sum_{i=1}^{N} I(d_{sim,i} \geq d_{data}).   (4.3.6)
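The structure of Algorithm 6 can be illustrated with a simplified stand-in: an exponential null hypothesis and the ordinary empirical survival function in place of the Vardi npmle (both are simplifying assumptions, and the function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_distance(sample, surv):
    # max |S_n(x) - S*(x)|, checking S_n on both sides of each jump.
    x = np.sort(sample)
    n = len(x)
    before = 1.0 - np.arange(n) / n         # S_n just before each jump
    after = 1.0 - np.arange(1, n + 1) / n   # S_n just after each jump
    s = surv(x)
    return max(np.abs(s - before).max(), np.abs(s - after).max())

def approx_pvalue(data, n_sim=400):
    lam = 1.0 / np.mean(data)                               # exponential MLE
    d_data = ks_distance(data, lambda x: np.exp(-lam * x))
    exceed = 0
    for _ in range(n_sim):                                  # parametric bootstrap
        sim = rng.exponential(1.0 / lam, size=len(data))
        lam_s = 1.0 / np.mean(sim)                          # re-estimate each replicate
        exceed += ks_distance(sim, lambda x: np.exp(-lam_s * x)) >= d_data
    return exceed / n_sim
```

Re-estimating the parameter on every simulated replicate is the step that accounts for the "lost degrees of freedom" discussed later in the thesis.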

Now we have all the tools necessary to carry out the goodness-of-fit analysis for

a set of length-biased survival data subject to right-censoring. However, before we

apply our goodness-of-fit test to a real set of data, we will use simulation techniques

to approximate the distribution of D under the null hypothesis. The behaviour of

D under different scenarios will be explored to get an idea of the power of the test,

before finally applying this method to the CSHA data.


Chapter 5

Applications

5.1 Simulation Studies

This section will provide the results of a number of simulations assessing the behaviour

of D under different sample sizes and amounts of censoring. Namely, we are interested

in investigating the power of the test to reject a false null hypothesis. We do this

by first simulating from a particular known distribution, say, Weibull. We re-fit

the simulated data using a Weibull model, as well as fit the data nonparametrically

according to Vardi’s algorithm discussed in the previous chapter, and then we can

obtain di for each simulated set of data. Next, we calculate the 90th, 95th, and 99th

percentiles of di for i = 1, . . . , n (n = 1000 was used). These percentiles become

the critical values at which we would reject the null hypothesis at the 10%, 5%,

and 1% significance levels, respectively. Using the same simulated sets of Weibull

data, we re-fit using a log-normal model and calculate the di values under this model.

We then calculate the proportion of di’s that are above the critical values, and these

proportions provide an idea of the probability that a log-normal null hypothesis would be rejected, given that the data are Weibull, at the given significance level. We do the same

with the log-logistic model. This gives us insight into how often a log-normal or


log-logistic null hypothesis would be rejected, if the data was, indeed, distributed

according to a Weibull distribution. We do this for various sample sizes and amounts

of censoring. Next, we repeat the above procedure starting with the log-normal model,

and finally the log-logistic model. A fixed censoring approach will be used since, as

we will see later, the different censoring distributions have negligible effects on the

results of the goodness-of-fit tests.
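The critical-value and power computations described above reduce to empirical percentiles and exceedance proportions; a minimal sketch, assuming the simulated D values are already in hand:

```python
import numpy as np

def critical_values(d_null, alphas=(0.10, 0.05, 0.01)):
    # Critical value at level alpha = the (1 - alpha) empirical percentile
    # of the D statistics simulated under the null model.
    d = np.asarray(d_null)
    return {a: float(np.quantile(d, 1.0 - a)) for a in alphas}

def empirical_power(d_alt, crit):
    # Power = proportion of D values from the alternative fit that exceed
    # the null critical value at each level.
    d = np.asarray(d_alt)
    return {a: float(np.mean(d >= c)) for a, c in crit.items()}
```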

Sample sizes of 30, 100, 250, 500 and 1000 were considered. Amounts of cen-

soring of ≈ 5%, ≈ 20%, and ≈ 50% were considered as light, medium, and heavy,

respectively. The following tables give the results of the simulation studies.


Weibull Data Critical Values for D

Sample Size Censoring α.10 α.05 α.01

30 ≈ 5% 0.285 0.329 0.478

≈ 20% 0.290 0.345 0.479

≈ 50% 0.277 0.327 0.478

100 ≈ 5% 0.193 0.227 0.355

≈ 20% 0.199 0.227 0.369

≈ 50% 0.191 0.223 0.303

250 ≈ 5% 0.139 0.163 0.226

≈ 20% 0.136 0.162 0.261

≈ 50% 0.134 0.160 0.235

500 ≈ 5% 0.102 0.122 0.174

≈ 20% 0.106 0.125 0.203

≈ 50% 0.100 0.123 0.204

1000 ≈ 5% 0.078 0.090 0.153

≈ 20% 0.080 0.092 0.130

≈ 50% 0.078 0.092 0.134

Table 5.1: Weibull critical values - varying sample size & amt. of censoring


Log-Normal H0 Power

Sample Size Censoring 1− β.10 1− β.05 1− β.01

30 ≈ 5% 0.129 0.076 0.015

≈ 20% 0.116 0.067 0.018

≈ 50% 0.128 0.061 0.017

100 ≈ 5% 0.221 0.111 0.019

≈ 20% 0.198 0.113 0.014

≈ 50% 0.201 0.108 0.028

250 ≈ 5% 0.459 0.250 0.055

≈ 20% 0.443 0.241 0.024

≈ 50% 0.409 0.214 0.038

500 ≈ 5% 0.820 0.601 0.114

≈ 20% 0.731 0.474 0.045

≈ 50% 0.734 0.456 0.036

1000 ≈ 5% 0.987 0.934 0.150

≈ 20% 0.983 0.991 0.336

≈ 50% 0.970 0.849 0.208

Table 5.2: Power when H0 is log-normal and Weibull is true distribution


Log-Logistic H0 Power

Sample Size Censoring 1− β.10 1− β.05 1− β.01

30 ≈ 5% 0.222 0.148 0.027

≈ 20% 0.200 0.121 0.033

≈ 50% 0.214 0.113 0.028

100 ≈ 5% 0.487 0.287 0.045

≈ 20% 0.454 0.274 0.034

≈ 50% 0.416 0.246 0.067

250 ≈ 5% 0.877 0.676 0.198

≈ 20% 0.860 0.643 0.075

≈ 50% 0.821 0.597 0.125

500 ≈ 5% 0.993 0.973 0.598

≈ 20% 0.990 0.939 0.224

≈ 50% 0.997 0.943 0.174

1000 ≈ 5% 1.000 1.000 0.838

≈ 20% 1.000 1.000 0.965

≈ 50% 1.000 1.000 0.921

Table 5.3: Power when H0 is log-logistic and Weibull is true distribution


Log-Normal Data Critical Values for D

Sample Size Censoring α.10 α.05 α.01

30 ≈ 5% 0.193 0.214 0.240

≈ 20% 0.186 0.203 0.237

≈ 50% 0.202 0.220 0.261

100 ≈ 5% 0.112 0.121 0.149

≈ 20% 0.117 0.130 0.151

≈ 50% 0.115 0.123 0.147

250 ≈ 5% 0.074 0.080 0.094

≈ 20% 0.074 0.081 0.096

≈ 50% 0.075 0.082 0.095

500 ≈ 5% 0.054 0.060 0.073

≈ 20% 0.052 0.058 0.066

≈ 50% 0.054 0.060 0.070

1000 ≈ 5% 0.038 0.042 0.053

≈ 20% 0.038 0.042 0.050

≈ 50% 0.037 0.040 0.048

Table 5.4: Log-normal critical values - varying sample size & amt. of censoring


Weibull H0 Power

Sample Size Censoring 1− β.10 1− β.05 1− β.01

30 ≈ 5% 0.692 0.616 0.516

≈ 20% 0.675 0.608 0.462

≈ 50% 0.564 0.487 0.323

100 ≈ 5% 0.981 0.974 0.938

≈ 20% 0.966 0.953 0.901

≈ 50% 0.949 0.927 0.856

250 ≈ 5% 1.000 1.000 1.000

≈ 20% 1.000 1.000 0.998

≈ 50% 0.998 0.997 0.994

500 ≈ 5% 1.000 1.000 1.000

≈ 20% 1.000 1.000 1.000

≈ 50% 1.000 1.000 1.000

1000 ≈ 5% 1.000 1.000 1.000

≈ 20% 1.000 1.000 1.000

≈ 50% 1.000 1.000 1.000

Table 5.5: Power when H0 is Weibull and Log-normal is true distribution


Log-Logistic H0 Power

Sample Size Censoring 1− β.10 1− β.05 1− β.01

30 ≈ 5% 0.176 0.101 0.051

≈ 20% 0.179 0.122 0.048

≈ 50% 0.148 0.080 0.027

100 ≈ 5% 0.335 0.243 0.075

≈ 20% 0.253 0.170 0.067

≈ 50% 0.281 0.193 0.076

250 ≈ 5% 0.593 0.487 0.264

≈ 20% 0.571 0.464 0.212

≈ 50% 0.520 0.393 0.195

500 ≈ 5% 0.869 0.727 0.437

≈ 20% 0.848 0.753 0.569

≈ 50% 0.789 0.652 0.396

1000 ≈ 5% 0.991 0.967 0.856

≈ 20% 0.986 0.957 0.849

≈ 50% 0.985 0.969 0.839

Table 5.6: Power when H0 is log-logistic and Log-normal is true distribution


Log-Logistic Data Critical Values for D

Sample Size Censoring α.10 α.05 α.01

30 ≈ 5% 0.200 0.228 0.282

≈ 20% 0.198 0.220 0.269

≈ 50% 0.214 0.239 0.287

100 ≈ 5% 0.119 0.134 0.168

≈ 20% 0.121 0.135 0.170

≈ 50% 0.117 0.133 0.171

250 ≈ 5% 0.079 0.088 0.107

≈ 20% 0.077 0.087 0.107

≈ 50% 0.079 0.088 0.112

500 ≈ 5% 0.057 0.062 0.073

≈ 20% 0.056 0.063 0.079

≈ 50% 0.056 0.062 0.077

1000 ≈ 5% 0.040 0.045 0.055

≈ 20% 0.040 0.044 0.055

≈ 50% 0.039 0.043 0.053

Table 5.7: Log-logistic critical values - varying sample size & amt. of censoring


Weibull H0 Power

Sample Size Censoring 1− β.10 1− β.05 1− β.01

30 ≈ 5% 0.901 0.848 0.708

≈ 20% 0.866 0.822 0.680

≈ 50% 0.761 0.687 0.565

100 ≈ 5% 1.000 0.999 0.996

≈ 20% 0.998 0.996 0.994

≈ 50% 0.997 0.988 0.963

250 ≈ 5% 1.000 1.000 1.000

≈ 20% 1.000 1.000 1.000

≈ 50% 1.000 1.000 1.000

500 ≈ 5% 1.000 1.000 1.000

≈ 20% 1.000 1.000 1.000

≈ 50% 1.000 1.000 1.000

1000 ≈ 5% 1.000 1.000 1.000

≈ 20% 1.000 1.000 1.000

≈ 50% 1.000 1.000 1.000

Table 5.8: Power when H0 is Weibull and log-logistic is true distribution


Log-Normal H0 Power

Sample Size Censoring 1− β.10 1− β.05 1− β.01

30 ≈ 5% 0.236 0.156 0.052

≈ 20% 0.234 0.150 0.068

≈ 50% 0.202 0.117 0.047

100 ≈ 5% 0.513 0.362 0.139

≈ 20% 0.447 0.327 0.143

≈ 50% 0.462 0.314 0.108

250 ≈ 5% 0.783 0.679 0.423

≈ 20% 0.780 0.647 0.383

≈ 50% 0.737 0.602 0.297

500 ≈ 5% 0.973 0.938 0.824

≈ 20% 0.960 0.902 0.666

≈ 50% 0.937 0.882 0.658

1000 ≈ 5% 0.998 0.994 0.978

≈ 20% 0.999 0.996 0.979

≈ 50% 0.999 0.994 0.965

Table 5.9: Power when H0 is log-normal and log-logistic is true distribution

The simulation results generally are as expected. That is, we see an increased

power as we increase the sample size. The amount of censoring does not seem to have a discernible impact on the power of the test. For data distributed according to a

Weibull distribution, the power to reject a false log-normal or log-logistic hypothesis

is quite high for very large samples (even at the 1% level for log-logistic). For samples

of size 500, the power to reject a false log-logistic hypothesis is high at the 5% level.

In the log-normal case, the power to reject a false Weibull hypothesis is high for all

sample sizes, except very small samples of size n = 30. The power to reject a false

log-logistic hypothesis when the data are log-normal is high for very large samples.

And lastly, when the data are distributed according to a log-logistic distribution, the

power to reject a false Weibull null hypothesis is very high for samples greater than


n = 100, but under a false log-normal hypothesis it is very high only for sample sizes

of above n = 500.

Since our test appears to be powerful enough at sufficiently large sample sizes,

and acts as expected, we can now apply it to the CSHA data. Note that using the

above definitions for sample size and censoring, the CSHA data set is considered large in terms of sample size and medium in terms of amount of censoring. A detailed

description of the CSHA data will be given in the next section, and the results of the

CSHA goodness-of-fit tests will be given in section 5.3. A closer look at the length-

biased likelihood for each model, as well as how to carry out the testing step-by-step,

will also be given in section 5.3.

5.2 CSHA Data

The Canadian Study of Health and Aging (CSHA) was a longitudinal study of the epi-

demiology of dementia and other health problems affecting the elderly across Canada.

Diseases that affect the elderly are becoming more common due to our aging population and longer lifespans. Dementia is a devastating disease involving memory deterioration as well as a decline in cognitive, emotional, and intellectual functioning. The CSHA has many aims, including estimating

the prevalence and incidence of dementia among the elderly, investigating the risk fac-

tors for Alzheimer’s Disease, as well as estimating the survival distribution of those

with dementia. The study has undergone three phases, with the first phase beginning

in 1991 (CSHA-1), the second phase in 1996 (CSHA-2), and the third phase in 2001

(CSHA-3).

During the first phase, 10,263 individuals aged 65 and above were recruited cross-

sectionally. Canada was split into five geographic regions (British Columbia, the

Prairie provinces, Ontario, Quebec, and the Atlantic region) and equal sized samples

were drawn from each region. In the final sample there were 9,008 study participants


from the community and 1,255 study participants from long-term care facilities.

It is estimated that approximately 8% of Canadians 65 years of age and over suffer

from dementia. Splitting into increasing age groups, a dramatic rise in prevalence is

observed with rates of 2% for those aged 65-74, 11% for those aged 75-84, and 35%

for individuals 85 years and over (Lindsay et al. 2004). Considering the increase

in prevalence with age, the random samples drawn from each area were stratified

by age. Those aged 75-84 were sampled at twice the rate of those aged 65-74, and individuals aged 85 and above were sampled at 2.5 times the rate of those aged 65-74. The

Modified Mini-Mental State (3MS) test was used to screen for cognitive impairment of

individuals from the community. Depending on their score, a full clinical assessment

and final diagnosis by a committee of health professionals was completed. All sampled

individuals living in institutions were given a full clinical assessment as the rate of

dementia is much higher among these subjects.

Those who screened positive for dementia were included in the final sample. Three dementia categories were used: “probable Alzheimer’s

Disease”, and “vascular dementia.” The individuals were classified into one of the

categories and then followed until the second phase of the study in 1996. The final

sample included 816 possibly censored survival times. The second phase of the study

began with a follow-up of the individuals from phase one who were still alive.

For this thesis, the phase one portion of the CSHA will be used. The approximate

date of onset of dementia, date of death or censoring, as well as the death indicator

will be used in estimating the survival function. Gender will also be included following

the initial analysis. As is common to epidemiological studies, the data were subject

to both right-censoring and left-truncation. Subjects were censored if they were still

alive at the end of the study or if they were lost to follow-up, although loss to follow-up

occurred in only a small percentage of subjects. Left-truncation follows from the fact

that prevalent cases, instead of incident cases, were selected. Only individuals who

were alive with dementia at the beginning of the study had the possibility of being


included in the sample, and so individuals with longer survival times were favoured.

Both Asgharian et al. (2006) and Addona & Wolfson (2006) proposed tests to

assess stationarity of incidence rates, and could not reject the stationarity assumption

for the CSHA data with their methods. This, coupled with the fact that dementia

rates should have remained relatively constant over the period that covers the CSHA

data dementia onset times, allows us to assume that the random left-truncation times

are uniformly distributed. Under the assumption of stationarity discussed earlier, it

follows that the data are length-biased. When the length-bias of the data is con-

sidered, the median survival of demented patients is estimated to be much shorter

than previous studies have shown. Wolfson et al. (2001) reported an adjusted median

survival of 3.3 years when correcting for length-bias, contrasted with median survival

times varying from 5 to 9.3 years as suggested by previous studies.

5.3 Analysis

Before moving on to the results of the goodness-of-fit testing, Figure 5.1 shows both

the unbiased and length-biased nonparametric survival estimates, using Vardi’s algo-

rithm. The importance of correcting for length-bias is readily observed.

Using length-biased Weibull (1), log-normal (2), and log-logistic (3) models,

goodness-of-fit tests will be performed and compared using the length-biased, right-

censored survival data of dementia patients from the CSHA. The Kolmogorov-Smirnov

type statistic, D, discussed in Chapter 4, will be used as the measure of goodness-of-

fit. The following steps will be used for all three models. The first step is to use the

data to estimate the parameters of the survival function for the length-biased model.

This will be done by maximum likelihood estimation using the non-linear minimization (nlm) function in the statistical package R. The length-biased likelihood discussed


[Figure: plot of estimated probability of survival vs. survival time (years), showing the unbiased and length-biased curves.]

Figure 5.1: Biased and unbiased nonparametric estimates of S(x).


in Chapter 3 can be written as:

L(\theta) = \prod_{i=1}^{n} \frac{f_\theta(x_i)^{\delta_i}\, S_\theta(x_i)^{1-\delta_i}}{\mu(\theta)}.   (5.3.1)

As maximizing the log-likelihood is equivalent to maximizing the likelihood (and

often simpler), the following are the log-likelihoods of the length-biased Weibull, log-normal, and log-logistic models, respectively:

L_1(\theta_1) = \sum_{i=1}^{n} \Big[ (p\delta_i + 1)\log(\lambda) + \delta_i \log(p) + \delta_i (p - 1)\log(x_i) - (\lambda x_i)^p - \log \Gamma\big(1 + \tfrac{1}{p}\big) \Big]   (5.3.2)

L_2(\theta_2) = \sum_{i=1}^{n} \Big[ -\frac{\delta_i}{2}\Big(\frac{\log(x_i) - \mu}{\sigma}\Big)^{2} + (1 - \delta_i)\log\Big[1 - \Phi\Big(\frac{\log(x_i) - \mu}{\sigma}\Big)\Big] - \delta_i \log\big[x_i (2\pi)^{1/2}\sigma\big] - \mu - \frac{\sigma^2}{2} \Big]   (5.3.3)

L_3(\theta_3) = \sum_{i=1}^{n} \Big[ (\delta_i + 1)\log(\alpha) + \delta_i(\alpha - 1)\log(x_i) + \Big(\delta_i + \frac{1}{\alpha}\Big)\log(\theta) + \log\Big[\sin\Big(\frac{\pi}{\alpha}\Big)\Big] - (\delta_i + 1)\log(1 + \theta x_i^{\alpha}) - \log(\pi) \Big]   (5.3.4)

Once the likelihoods have been maximized, maximum likelihood estimates for the

parameters of each model are obtained. Specifically, for the Weibull model we have λ

and p, µ and σ for the log-normal model, and α and θ for the log-logistic model. Using

the regular survival function (i.e. unbiased) with the obtained estimates, we have a

parametric model which estimates the survival distribution corrected for length-bias.

For i = 1, 2, 3, let Si(x) represent the estimated unbiased parametric survival curve.
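As an illustration (in Python rather than the R nlm call used in the thesis), the length-biased Weibull log-likelihood (5.3.2) can be coded as a negative log-likelihood and handed to any general-purpose minimizer; the function names here are ours:

```python
import numpy as np
from math import lgamma

def negloglik_weibull_lb(params, x, delta):
    # Negative of L1 in (5.3.2); lam > 0 is the Weibull rate and p > 0 the
    # shape, so that mu(theta) = Gamma(1 + 1/p) / lam.
    lam, p = params
    if lam <= 0.0 or p <= 0.0:
        return np.inf                        # keep the optimizer in bounds
    terms = ((p * delta + 1.0) * np.log(lam)
             + delta * np.log(p)
             + delta * (p - 1.0) * np.log(x)
             - (lam * x) ** p
             - lgamma(1.0 + 1.0 / p))
    return -np.sum(terms)
```

Passing this function to a general minimizer (for example scipy.optimize.minimize, the Python counterpart of R's nlm) with a positive starting point yields the maximum likelihood estimates of λ and p; the log-normal and log-logistic cases follow the same pattern from (5.3.3) and (5.3.4).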

The next step is to use the data to estimate the unbiased survival curve non-

parametrically using the Vardi Algorithm discussed in Chapter 4. A step-function

will be obtained with jumps at each time point from the data. Let SV (x) represent

the estimated nonparametric unbiased survival curve.

Our Kolmogorov-Smirnov type statistic for the ith model is calculated as

d_i = max |S_V(x) − S_i(x)|,   (5.3.5)


and d_i is the largest vertical distance between the two curves. If our hypothesized distribution were a fully specified model, d_i would correspond directly to a p-value for our test.

Since this is not the case, simulation is the next step in assessing the goodness-of-fit

for each model. As stated earlier, the p-value is approximated by P (D ≥ di). This

p-value will be estimated by following the simulation steps detailed in Algorithm 6.

The hypotheses being tested are

H_0 : S(x) = S_i^*(x)
H_1 : S(x) ≠ S_i^*(x).   (5.3.6)

A small p-value corresponds to the conclusion that the model in question is not a

good fit for the data (i.e. rejection of the null hypothesis). Initially, we will fix

n = 816, the sample size of the CSHA data set. N , the number of iterations in

our simulation, is required to be large, and a value of 10,000 will be used. For each

model, the simulation will be performed five times using the three censoring schemes

described in Chapter 4. Two simulations will use the fixed censoring scheme with

values of c1 = 5.2 and c2 = 5.8 years. These values represent a constant follow-up at

5.2 and 5.8 years after the study began. The next two simulations will use a random

normal censoring scheme with a mean of µ1 = 5.2, σ1 = 0.3 and µ2 = 5.8, σ2 = 0.6

respectively. Here, the follow-up times are not all the same, which is realistic as it

is not always possible to follow-up with every subject at the exact same time. The

values of 5.2 and 5.8 years were chosen by considering the follow-up time in the actual

study. Lastly, a censoring scheme that samples values from the real residual censoring

times in the CSHA data set will be used. After the initial simulations using a sample

size of n = 816 are carried out, the women and men will be analyzed separately.

Table 5.10 contains the d values, estimated model parameters, along with their

standard deviation and confidence intervals, for each model, followed by their respec-

tive graphs in Figures 5.2, 5.3 and 5.4.


Model di Estimate Std. Dev. C. I.

Weibull 0.096 λ = 0.207 0.009 (0.189, 0.225)

p = 1.215 0.046 (1.125, 1.305)

Log-Normal 0.065 µ = 1.378 0.033 (1.313, 1.443)

σ = 0.679 0.018 (0.644, 0.715)

Log-Logistic 0.124 α = 2.808 0.075 (2.660, 2.955)

θ = 0.018 0.003 (0.012, 0.023)

Table 5.10: CSHA parameter estimates

[Figure: plot of estimated probability of survival vs. survival time (years), showing the nonparametric and Weibull curves.]

Figure 5.2: Nonparametric and Weibull estimate of S(x).


[Plot: Survival Time (years), 0–30, against Estimated Probability of Survival, 0–1; nonparametric and log-normal curves.]

Figure 5.3: Nonparametric and log-normal estimate of S(x).


[Plot: Survival Time (years), 0–30, against Estimated Probability of Survival, 0–1; nonparametric and log-logistic curves.]

Figure 5.4: Nonparametric and log-logistic estimates of S(x).


Upon first inspection of the graphs, the log-normal distribution appears to be the best fit. The overall shape of the log-normal model seems to accurately mimic the nonparametric estimate. However, a good visual fit is generally not sufficient. The log-normal model would also be best if we based our inference on the size of the D statistic. If we had begun with fully specified models, we would conclude that the log-normal model is best, since with fully specified distributions we can rely on the probability integral transform to obtain the p-value regardless of the form of Sθ. But since the data were used to estimate parameters, we ‘lose degrees of freedom’ and can no longer rely on the smallest d to choose the best fit. Hence, simulations are required to obtain approximated p-values for each model. The simulation results are given in Table 5.11.
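The fit-simulate-refit loop that this requires can be illustrated with a deliberately simplified Python sketch: a plain (not length-biased) exponential model with an uncensored Kolmogorov-Smirnov distance stands in for the Chapter 4 algorithm, but the key step is the same — because the parameters were estimated from the data, each simulated sample must be refitted before its distance is computed.

```python
import numpy as np

def ks_exp(x):
    """KS distance between the ECDF of x and an exponential CDF whose
    rate is fitted to x by maximum likelihood."""
    x = np.sort(x)
    lam = 1.0 / x.mean()                      # MLE of the rate
    F = 1.0 - np.exp(-lam * x)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - F), np.max(F - ecdf_lo))

def boot_pvalue(x, B=500, rng=None):
    """Approximate the p-value by simulating from the fitted model and
    refitting each simulated sample: with estimated parameters the usual
    KS null distribution no longer applies."""
    if rng is None:
        rng = np.random.default_rng(42)
    d_obs = ks_exp(x)
    scale = x.mean()                          # fitted mean = 1 / lambda-hat
    d_sim = [ks_exp(rng.exponential(scale, size=len(x))) for _ in range(B)]
    return float(np.mean(np.array(d_sim) >= d_obs))

x = np.random.default_rng(7).exponential(2.0, size=200)
print(boot_pvalue(x, B=200))
```

The p-value is simply the proportion of simulated distances at least as large as the observed one, exactly the logic of the Chapter 4 approximation algorithm.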

Censoring Approach    Weibull   Log-Normal   Log-Logistic
KM                    0.0550    0.0007       0.0000
Fixed C = 5.2         0.0556    0.0015       0.0000
Fixed C = 5.8         0.0576    0.0008       0.0000
Normal (5.2, 0.3)     0.0550    0.0011       0.0000
Normal (5.8, 0.6)     0.0544    0.0010       0.0000

Table 5.11: p-values estimated by simulation

From the results in Table 5.11, we reject the hypothesis that the data come from a log-normal or log-logistic population. At a 5% significance level we cannot reject the hypothesis that the data come from a Weibull population, although its p-value is very close to 0.05. It is also clear from the simulation results that the different approaches to censoring have negligible effects.

The next step taken was to stratify the CSHA data by gender, as was done by Cook & Bergeron (2011). Conventional epidemiological wisdom also suggests that men and women should be analyzed separately. Earlier research has shown that women have a higher risk of Alzheimer’s disease than men (Seeman 1997). In the CSHA, a higher proportion of men suffered from vascular dementia, and a higher proportion of women suffered from Alzheimer’s disease. Thus, men and women may suffer from different causes of dementia, and stratifying by gender may provide us with better insight.

The data was split by gender and d was calculated for men and women, for each model. The log-logistic was not a good fit for either gender, so a simulation was not performed for this model. The Weibull model appeared to be a good fit for the women, and the log-normal model a good fit for the men. The d values, estimated model parameters and their standard deviations and confidence intervals are given in Table 5.12, and the respective graphs are given in Figures 5.5 and 5.6.

Model              d       Estimate    Std. Dev.   C. I.
Weibull (women)    0.049   λ = 0.187   0.009       (0.169, 0.204)
                           p = 1.312   0.059       (1.196, 1.429)
Log-Normal (men)   0.033   µ = 1.244   0.063       (1.120, 1.369)
                           σ = 0.692   0.034       (0.626, 0.758)

Table 5.12: CSHA Estimates - Women (Weibull) & Men (Log-normal)


[Plot: Survival Time (years), 0–25, against Estimated Probability of Survival, 0–1; nonparametric and Weibull curves.]

Figure 5.5: Nonparametric and Weibull estimate of S(x) (women).


[Plot: Survival Time (years), 0–30, against Estimated Probability of Survival, 0–1; nonparametric and log-normal curves.]

Figure 5.6: Nonparametric and log-normal estimate of S(x) (men).

Graphs of the estimated survival for up to 10 years are shown in Figures 5.7 and 5.8. These are included since surviving for 25 years or more with dementia is quite rare, and shorter survival times may be more realistic.

The estimated hazard functions for women and men are shown in Figure 5.9. A monotone increasing hazard is seen for the women, which is expected as the estimated Weibull shape parameter is greater than 1. An increasing hazard function suggests that the risk of death with dementia increases as time goes on. For the men, a hump-shaped hazard is seen, which is an inherent property of the log-normal model. The hazard function peaks at approximately 5 years, which is not surprising given the data (i.e., the number of events becomes sparser past the 5-year mark, and the hazard appears to decrease).
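These shapes follow directly from the fitted parameters in Table 5.12, which a short numerical check makes concrete. This Python sketch (illustrative; the thesis's computations were in R) assumes the S(t) = exp(−(λt)^p) parameterization of the Weibull, and uses `math.erfc` for the normal survival function:

```python
import numpy as np
from math import erfc, exp, log, pi, sqrt

def weibull_hazard(t, lam=0.187, p=1.312):
    """Weibull hazard under S(t) = exp(-(lam*t)**p); increasing when p > 1."""
    return p * lam * (lam * t) ** (p - 1)

def lognormal_hazard(t, mu=1.244, sigma=0.692):
    """Log-normal hazard f(t)/S(t), which is hump-shaped by construction."""
    z = (log(t) - mu) / sigma
    pdf = exp(-0.5 * z * z) / (t * sigma * sqrt(2 * pi))
    surv = 0.5 * erfc(z / sqrt(2))
    return pdf / surv

grid = np.arange(0.1, 30.0, 0.01)
men = np.array([lognormal_hazard(t) for t in grid])
peak = float(grid[men.argmax()])
print(round(peak, 1))   # time (years) at which the men's hazard peaks
```

The women's Weibull hazard increases monotonically on this grid, while the men's log-normal hazard rises to a single interior maximum and then declines, matching Figure 5.9.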


[Plot: Survival Time (years), 0–10, against Estimated Probability of Survival, 0–1; nonparametric and Weibull curves.]

Figure 5.7: Nonparametric and Weibull estimate of S(x) for 0-10 years (women).


[Plot: Survival Time (years), 0–10, against Estimated Probability of Survival, 0–1; nonparametric and log-normal curves.]

Figure 5.8: Nonparametric and log-normal estimate of S(x) for 0-10 years (men).


[Plot: Survival Time (years), 0–30, against Estimated Hazard Function, 0–0.4; curves for women and men.]

Figure 5.9: Estimated hazard functions for women (Weibull) and men (log-normal).


Simulations were performed for each group, and the results are given in Table 5.13.

Censoring Approach    Women - Weibull   Men - Log-Normal
KM                    0.5756            0.9753
Fixed C = 5.2         0.5786            0.9753
Fixed C = 5.8         0.5750            0.9749
Normal (5.2, 0.3)     0.5723            0.9775
Normal (5.8, 0.6)     0.5719            0.9767

Table 5.13: p-values estimated by simulation - Women & Men

The results when the subjects are split into men and women differ greatly from those when analyzing the group as a whole. The log-normal model seems to be a good fit for the men and the Weibull model a good fit for the women. This suggests that the men and women come from two different underlying populations, as conventional epidemiological wisdom suggests. For future analyses, gender may be considered an important variable. However, when split by gender, the men are a sample of size n = 237 and the women are a sample of size n = 579. As was seen earlier, for samples comparable in size to that of the men, sufficient power is lacking. This test may not be useful for smaller samples, in the sense that it may not detect a difference between parametric models when n is small.

When generating the length-biased samples, occasionally a very small time (e.g. t ≤ 0.005) was generated, although this was rare. A survival time this small severely affects the unbiased nonparametric estimate of the survival curve, and so only times greater than t = 0.01 were generated for the simulations. In real applications, times smaller than 0.01 are generally not observed, making this constraint inconsequential. Figure 5.10 shows three curves: the unbiased parametric estimate of S(x), the unbiased nonparametric estimate of S(x) for a simulated data set containing a very small time point (t = 0.0049, generated from a length-biased Weibull distribution in R), and the unbiased nonparametric estimate when this small time point is removed. Only one parametric estimate is shown, as the difference between the parametric estimates with and without the small time point is very small and unobservable from the graph.

[Plot: Survival Time (years), 0–25, against Estimated Probability of Survival, 0–1; curves: NP with small time point removed, NP with small time point, Weibull.]

Figure 5.10: Impact of small survival time on Nonparametric (NP) estimate.

As can be seen, when the small data point is included in the data set, the nonparametric estimate does not perform well. It should be noted that ‘small’ in this case is a relative term. The size of the smallest survival time relative to the second smallest time (as well as to the rest of the data set) is the cause of this problem (in Figure 5.10, t(1) is approximately 43 times smaller than t(2)). This is due to the inverse length-bias transformation formula used to correct the final probability vector, p∗, to obtain the unbiased probability vector. Since survival times < 0.01 are rarely observed, it was not considered a problem to add this constraint when generating the data sets for the simulation, and we can be assured that the nonparametric estimate will perform as expected.
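The mechanism is easy to see in miniature. In the inverse length-bias correction, each unbiased mass is proportional to the biased mass divided by the observed time, so a tiny time receives an enormous weight. A hypothetical five-point Python example (the weighting step is the relevant mechanism; the numbers are made up):

```python
import numpy as np

def unbias(p_star, t):
    """Invert the length-bias: each time t_i was oversampled in proportion
    to its length, so unbiased masses are proportional to p*_i / t_i."""
    w = np.asarray(p_star) / np.asarray(t)
    return w / w.sum()

t = np.array([0.0049, 0.21, 1.3, 4.0, 9.5])   # hypothetical support points
p_star = np.full(5, 0.2)                       # equal biased masses

p = unbias(p_star, t)
print(p.round(3))   # nearly all mass collapses onto the tiny first time
```

Dropping the first point and re-running `unbias` on the remaining four times restores a much more balanced probability vector, which is exactly the behaviour seen in Figure 5.10.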


Chapter 6

Conclusion

The goal of this thesis was to develop a one-sample goodness-of-fit test for length-biased data subject to right-censoring. Using Weibull, log-normal and log-logistic models, the proposed test was applied to length-biased survival data collected from the Canadian Study of Health and Aging.

When potentially censored survival data on individuals is collected by means of an incident study, the standard tool to estimate the survival function, from onset, is the Kaplan-Meier estimator (Kaplan & Meier 1958). If, instead, a prevalent cohort is used to identify individuals, the survival times are then subject to left-truncation. By modifying the Kaplan-Meier approach, an estimate conditional upon truncation time can be used to estimate the survival function from onset. When onset times of a disease can be assumed to come from a segment of a stationary Poisson process, the survival times are said to be properly length-biased (Wang 1991), and in this case an unconditional approach to estimation of the survival function from onset, pioneered by Vardi (1989), will provide a more efficient estimate (Asgharian et al. 2002). These ideas were discussed in Chapter 2.

In Chapter 3, the length-biased likelihood for right-censored survival data was reviewed. This likelihood can be maximized both parametrically and nonparametrically. Some common parametric models used in survival analysis were discussed, namely, the Weibull, log-normal, and log-logistic models. For each of these models, we use the length-biased likelihood to obtain estimates for the biased survival function from onset, and then we can obtain a parametric estimate for the unbiased survival function. Using the nonparametric maximum likelihood estimate of the unbiased survival function, we can quantify the discrepancy of the hypothesized parametric models using an appropriate goodness-of-fit statistic. The Kolmogorov-Smirnov statistic, as well as other goodness-of-fit statistics used for lifetime data, were also discussed in Chapter 3. It should be noted that a two-sample nonparametric test for the equality of survival distributions, under length-biased sampling, has recently been developed (Ning et al. 2010).

We did not begin with a fully specified model, but instead used the data to estimate the parameters for our hypothesized distributions; hence, simulation is required in order to approximate a p-value for our goodness-of-fit tests. Chapter 4 was devoted to the algorithms necessary to carry out our goodness-of-fit testing. These algorithms include Vardi’s algorithm for nonparametric estimation of the survival function from onset, the simulation of length-biased samples of the appropriate parametric form, as well as an algorithm for p-value approximation for our tests.
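For the Weibull case, the simulation of length-biased samples admits an exact recipe via a known change of variables (this Python sketch is illustrative and not necessarily the algorithm used in Chapter 4, which was implemented in R): if f is Weibull with S(t) = exp(−(λt)^p), then the length-biased density t f(t)/µ is that of G^(1/p)/λ with G ~ Gamma(1 + 1/p, 1).

```python
import numpy as np
from math import gamma

def rlb_weibull(n, lam, p, rng):
    """Draw exact length-biased Weibull samples: substituting u = (lam*t)**p
    into t*f(t) leaves a Gamma(1 + 1/p) density, so transforming a Gamma
    draw back gives a draw from the length-biased distribution."""
    g = rng.gamma(1.0 + 1.0 / p, 1.0, size=n)
    return g ** (1.0 / p) / lam

rng = np.random.default_rng(3)
lam, p = 0.207, 1.215            # CSHA Weibull estimates from Table 5.10
t = rlb_weibull(200_000, lam, p, rng)

# E[T_LB] = E[T^2]/E[T] = Gamma(1 + 2/p) / (lam * Gamma(1 + 1/p))
mean_theory = gamma(1 + 2 / p) / (lam * gamma(1 + 1 / p))
print(round(float(t.mean()), 2), round(mean_theory, 2))
```

The simulated mean matches the closed-form length-biased mean, a quick sanity check that the sampler targets t f(t)/µ rather than f itself.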

Before applying our test, it was necessary to investigate its behaviour under different scenarios. Once sufficient power for the test was established, the next step was to apply these concepts to a real set of right-censored length-biased data; these results were discussed in Chapter 5. Individuals included in the CSHA were recruited cross-sectionally and then screened for dementia. Those who screened positively were included in the final sample, which consisted of 816 individuals, who were followed forward for 5 years, at which point they were recorded as either censored or not censored. The stationarity of the onset times has been previously verified for the CSHA data (Addona & Wolfson 2006), and the onset of dementia for those included in the sample occurred prior to recruitment; hence, the CSHA data is properly length-biased.

The unbiased survival function, from onset, was estimated using a Weibull, log-normal, and log-logistic model, and each parametric estimate was compared to the unbiased unconditional nonparametric survival estimate. Although visually the log-normal model appeared to be the best fit to the data, it was determined using simulation that this was not the case, hence the need to develop adequate one-sample goodness-of-fit tests. Both the log-normal and log-logistic models had p-values very close to 0, and therefore the hypothesis that the data come from a log-normal or log-logistic population was rejected. The p-value for the Weibull model was slightly above 0.05, but we chose to reject the null hypothesis in this case. The next step was to split the data by gender and re-perform the analysis. In doing so, it was concluded that the log-normal model was a good fit for the men, and the Weibull model was a good fit for the women. These results imply that the men and women come from two different populations, which lines up with conventional epidemiological wisdom suggesting that men and women should be analyzed separately. However, since this was a preliminary analysis, an approach which includes covariates could be tested. For example, an accelerated failure time model or Cox proportional hazards model could be used to estimate the survival function from onset.

In terms of future research ideas, a few things should be mentioned. Instead of only considering the maximum distance between the nonparametric and parametric curves, as the Kolmogorov-Smirnov statistic does, we could employ a different goodness-of-fit statistic, such as the Cramér-von Mises or Anderson-Darling statistic discussed in Chapter 3, to evaluate the fit of our parametric models. Using the maximum distance was a first step, and employing a different way of quantifying the discrepancy between the curves is a natural extension. Also, a possible extension of the Shapiro-Wilk test could be considered for the log-normal model (Shapiro & Wilk 1965).
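The contrast between these statistics can be sketched with schematic, grid-based analogues (weighted integrals of the squared gap between two survival curves, rather than the exact empirical-process definitions; the two curves below are synthetic, not CSHA estimates):

```python
import numpy as np

def distances(S_np, S_par, t):
    """Grid-based analogues: KS takes the largest gap; the Cramer-von Mises
    type integrates the squared gap; the Anderson-Darling type up-weights
    the gap where S_par*(1 - S_par) is small, i.e. in the tails."""
    diff = S_np - S_par
    dt = np.gradient(t)
    ks = np.max(np.abs(diff))
    cvm = np.sum(diff ** 2 * dt)
    ad = np.sum(diff ** 2 / np.maximum(S_par * (1 - S_par), 1e-12) * dt)
    return float(ks), float(cvm), float(ad)

t = np.linspace(0.1, 30.0, 300)
S_par = np.exp(-0.2 * t)            # synthetic parametric survival curve
S_np = np.exp(-0.22 * t)            # synthetic stand-in for the NPMLE
print(distances(S_np, S_par, t))
```

Whereas the KS analogue reacts only to the single worst point of disagreement, the integral-type analogues accumulate discrepancy over the whole curve, which is precisely the motivation for trying them as alternatives.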

The data collected during the CSHA is complex in nature. Older individuals were sampled at a higher rate than younger individuals, and equal samples were selected from each geographic region, despite the different population sizes. An extension to estimating the survival function in the presence of length-biasedness would be to incorporate the appropriate sampling weights. For incident cases arising from complex surveys, a weighted Kaplan-Meier curve has been developed (Lawless 2003a). This Kaplan-Meier approach uses an estimated population proportion surviving through each time interval, instead of only the proportion surviving in the data set. As far as we know, sampling weights have yet to be incorporated in the estimation of the survival function in the presence of length-bias.


Bibliography

Aalen, O. (1978), ‘Nonparametric inference for a family of counting processes’, The Annals of Statistics pp. 701–726.

Addona, V. & Wolfson, D. (2006), ‘A formal test for the stationarity of the incidence rate using data from a prevalent cohort study with follow-up’, Lifetime Data Analysis 12(3), 267–284.

Asgharian, M., M’Lan, C. & Wolfson, D. (2002), ‘Length-biased sampling with right censoring’, Journal of the American Statistical Association 97(457), 201–209.

Asgharian, M. & Wolfson, D. (2005), ‘Asymptotic behaviour of the NPMLE of the survivor function when the data are length-biased and subject to right censoring’, Annals of Statistics 33, 2109–2131.

Asgharian, M., Wolfson, D. & Zhang, X. (2006), ‘Checking stationarity of the incidence rate using prevalent cohort survival data’, Statistics in Medicine 25(10), 1751–1767.

Bergeron, P. (2006), Covariates and length-biased sampling: Is there more than meets the eye?, PhD thesis, McGill University.

Blumenthal, S. (1967), ‘Proportional sampling in life length studies’, Technometrics pp. 205–218.

Canadian Study of Health and Aging Working Group (1994a), ‘The Canadian study of health and aging: risk factors for Alzheimer’s disease in Canada’, Neurology 44, 2073–2080.

Canadian Study of Health and Aging Working Group (1994b), ‘Canadian study of health and aging: study methods and prevalence of dementia’, Canadian Medical Association Journal 150, 899–913.

Canadian Study of Health and Aging Working Group (1994c), ‘Patterns of caring for people with dementia in Canada’, Canadian Journal on Aging 13, 470–487.

Casella, G. & Berger, R. (2001), Statistical Inference, Duxbury Press.

Conover, W. (1971), Practical Nonparametric Statistics, Wiley.

Cook, R. & Bergeron, P. (2011), ‘Information in the sample covariate distribution in prevalent cohorts’, Statistics in Medicine 30(12), 1397–1409.

Correa, J. & Wolfson, D. (1999), ‘Length-bias: some characterizations and applications’, Journal of Statistical Computation and Simulation 64(3), 209–219.

Cox, D. (1969), Some sampling problems in technology, in ‘New Developments in Survey Sampling’, Wiley.

D’Agostino, R. & Stephens, M. (1986), Goodness-of-Fit Techniques, Marcel Dekker, Inc.

Feinleib, M. (1960), ‘A method of analyzing log-normally distributed survival data with incomplete follow-up’, Journal of the American Statistical Association pp. 534–545.

Gao, S. & Hui, S. (2000), ‘Estimating the incidence of dementia from two-phase sampling with non-ignorable missing data’, Statistics in Medicine 19(11–12), 1545–1554.

Gill, R., Vardi, Y. & Wellner, J. (1988), ‘Large sample theory of empirical distributions in biased sampling models’, The Annals of Statistics pp. 1069–1112.

Horner, R. (1987), ‘Age at onset of Alzheimer’s disease: clue to the relative importance of etiologic factors?’, American Journal of Epidemiology 126(3), 409.

Kaplan, E. & Meier, P. (1958), ‘Nonparametric estimation from incomplete observations’, Journal of the American Statistical Association pp. 457–481.

Klein, J. & Moeschberger, M. (2003), Survival Analysis: Techniques for Censored and Truncated Data, Springer.

Lagakos, S., Barraj, L. & Gruttola, V. (1988), ‘Nonparametric analysis of truncated survival data, with application to AIDS’, Biometrika 75(3), 515.

Lawless, J. (2003a), Censoring and weighting in survival estimation from survey data, in ‘Proceedings of the Survey Methods Section’, pp. 31–36.

Lawless, J. (2003b), Statistical Models and Methods for Lifetime Data, Wiley.

Lindsay, J., Sykes, E., McDowell, I., Verreault, R. & Laurin, D. (2004), ‘More than the epidemiology of Alzheimer’s disease: contributions of the Canadian study of health and aging’, Canadian Journal of Psychiatry 49(2), 83–91.

McFadden, J. (1962), ‘On the lengths of intervals in a stationary point process’, Journal of the Royal Statistical Society. Series B (Methodological) pp. 364–382.

Nelson, W. (1972), ‘Theory and applications of hazard plotting for censored failure data’, Technometrics pp. 945–966.

Ning, J., Qin, J. & Shen, Y. (2010), ‘Non-parametric tests for right-censored data with biased sampling’, Journal of the Royal Statistical Society: Series B (Statistical Methodology).

Patil, G. & Rao, C. (1978), ‘Weighted distributions and size-biased sampling with applications to wildlife populations and human families’, Biometrics pp. 179–189.

Rice, J. (1995), Mathematical Statistics and Data Analysis, Duxbury Press.

Ross, S. (2006), Simulation, Elsevier Academic Press.

Seeman, M. (1997), ‘Psychopathology in women and men: focus on female hormones’, American Journal of Psychiatry 154(12), 1641.

Shapiro, S. & Wilk, M. (1965), ‘An analysis of variance test for normality (complete samples)’, Biometrika 52(3/4), 591–611.

Stern, Y., Tang, M., Albert, M., Brandt, J., Jacobs, D., Bell, K., Marder, K., Sano, M., Devanand, D., Albert, S. et al. (1997), ‘Predicting time to nursing home care and death in individuals with Alzheimer disease’, JAMA: The Journal of the American Medical Association 277(10), 806.

Turnbull, B. (1976), ‘The empirical distribution function with arbitrarily grouped, censored and truncated data’, Journal of the Royal Statistical Society. Series B (Methodological) pp. 290–295.

Vardi, Y. (1982), ‘Nonparametric estimation in the presence of length bias’, The Annals of Statistics pp. 616–620.

Vardi, Y. (1985), ‘Empirical distributions in selection bias models’, The Annals of Statistics pp. 178–203.

Vardi, Y. (1989), ‘Multiplicative censoring, renewal processes, deconvolution and decreasing density: nonparametric estimation’, Biometrika 76(4), 751.

Vardi, Y. & Zhang, C. (1992), ‘Large sample study of empirical distributions in a random-multiplicative censoring model’, The Annals of Statistics pp. 1022–1039.

Wang, M. (1991), ‘Nonparametric estimation from cross-sectional survival data’, Journal of the American Statistical Association pp. 130–143.

Wang, M., Jewell, N. & Tsai, W. (1986), ‘Asymptotic properties of the product limit estimate under random truncation’, The Annals of Statistics pp. 1597–1605.

Wicksell, S. (1925), ‘The corpuscle problem: a mathematical study of a biometric problem’, Biometrika 17(1/2), 84–99.

Wolfson, C., Wolfson, D., Asgharian, M., M’Lan, C., Østbye, T., Rockwood, K. & Hogan, D. (2001), ‘A reevaluation of the duration of survival after the onset of dementia’, New England Journal of Medicine 344(15), 1111–1116.

Zelen, M. & Feinleib, M. (1969), ‘On the theory of screening for chronic diseases’, Biometrika 56(3), 601–614.