

Hypothesis Testing

Statistics for Microarray Data Analysis – Lecture 3 supplement

The Fields Institute for Research in Mathematical Sciences

May 25, 2002

p-values

The p-value, or observed significance level, p is the chance of getting a test statistic as extreme as or more extreme than the observed one, under the null hypothesis H0 of no differential expression.

Many tests: a simulation study

Simulations of this process for 6,000 genes with 8 treatments and 8 controls.

All the gene expression values were simulated i.i.d from a N (0,1) distribution, i.e. NOTHING is differentially expressed.
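This simulation can be sketched in a few lines. This is an illustrative Python/NumPy version, not the original code; the seed and the use of scipy.stats.ttest_ind are my choices:

```python
import numpy as np
from scipy import stats

m, n1, n2 = 6000, 8, 8
rng = np.random.default_rng(0)  # fixed seed, for reproducibility

# Every expression value is i.i.d. N(0,1): no gene is truly
# differentially expressed.
X = rng.standard_normal((m, n1 + n2))

# Two-sample t-test per gene: first 8 columns are "treatments",
# last 8 columns are "controls".
t, p = stats.ttest_ind(X[:, :n1], X[:, n1:], axis=1)

# Even under the complete null, the smallest of 6,000 unadjusted
# p-values looks impressively small.
print(np.sort(p)[:10])
```

With 6,000 tests, unadjusted p-values in the 10^-4 range appear by chance alone, which is exactly the point of the table that follows.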

gene index   t value   p-value (unadj.)
2271          4.93     2 × 10^-4
5709          4.82     3 × 10^-4
5622         -4.62     4 × 10^-4
4521          4.34     7 × 10^-4
3156         -4.31     7 × 10^-4
5898         -4.29     7 × 10^-4
2164         -3.98     1.4 × 10^-3
5930          3.91     1.6 × 10^-3
2427         -3.90     1.6 × 10^-3
5694         -3.88     1.7 × 10^-3

Unadjusted p-values

Clearly we can’t just use standard p-value thresholds (.05, .01).

Multiple hypothesis testing: Counting errors

Assume we are testing H1, H2, …, Hm.

m0 = # of true null hypotheses; R = # of rejected hypotheses.

                    # true null   # false null   total
  # non-significant      U             T         m − R
  # significant          V             S         R
  total                 m0          m − m0       m

V = # Type I errors [false positives]

T = # Type II errors [false negatives]

Multiple testing procedure

As we will see, there is a bewildering variety of multiple testing procedures. How can we choose which to use? There is no simple answer here, but each can be judged according to a number of criteria:

Interpretation: does the procedure answer a relevant question for you?

Type of control: strong or weak?

Validity: are the assumptions under which the procedure applies clear and definitely or plausibly true, or are they unclear and most probably not true?

Computability: are the procedure’s calculations straightforward to carry out accurately, or is there possibly numerical or simulation uncertainty, or discreteness?

Type I error rates

• Per-comparison error rate (PCER). The PCER is defined as the expected value of (number of Type I errors/number of hypotheses), i.e.,

PCER = E(V/m).

• Family-wise error rate (FWER). The FWER is defined as the probability of at least one Type I error, i.e.,

FWER = pr(V >0).

• False discovery rate (FDR). The FDR of Benjamini & Hochberg (1995) is the expected proportion of Type I errors among the rejected hypotheses, i.e.,

FDR = E(V/R), with the convention that V/R = 0 when R = 0.

Strong vs. weak control

• The Type I error probabilities are conditional on which hypotheses are true.

• Strong control refers to control of the Type I error rate under any combination of true and false hypotheses, i.e., any value of m0.

• Weak control refers to control of the Type I error rate only when all the null hypotheses are true, i.e., under the complete null hypothesis with m0=m.

• In general, weak control without any other safeguards is unsatisfactory.

Computing p-values by permutations

We focus on one gene only. For the b-th iteration, b = 1, …, B:

1. Permute the n data points for the gene (x). The first n1 are referred to as “treatments”, the second n2 as “controls”.

2. Calculate the corresponding two-sample t-statistic, tb.

After all the B permutations are done:

3. Put p = #{b: |tb| ≥ |t|}/B (a lower p results if we use > instead of ≥). With all permutations in the Apo AI data, B = n!/(n1! n2!) = 12,870.
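A minimal sketch of this one-gene permutation test in Python. The function name perm_pvalue and the full enumeration via itertools.combinations are illustrative choices; full enumeration is feasible here because C(16, 8) = 12,870:

```python
import itertools
import numpy as np

def perm_pvalue(x, n1, use_strict=False):
    """Two-sided permutation p-value for a single gene: enumerate all
    B = n!/(n1! n2!) ways to label n1 of the n values as 'treatments'."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def tstat(idx):
        # Two-sample t-statistic for the given treatment indices.
        a = x[list(idx)]
        b = np.delete(x, list(idx))
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return (a.mean() - b.mean()) / se

    t_obs = abs(tstat(range(n1)))          # the observed labelling comes first
    count = B = 0
    for idx in itertools.combinations(range(n), n1):
        B += 1
        tb = abs(tstat(idx))
        count += (tb > t_obs) if use_strict else (tb >= t_obs)
    return count / B                       # p = #{b : |tb| >= |t|} / B
```

With n1 = n2 = 8 this enumerates exactly the 12,870 permutations mentioned above; use_strict=True gives the lower p obtained with a strict inequality.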

p-value adjustments: single-step

Define adjusted p-values πi such that the FWER is controlled at level α, where Hi is rejected when πi ≤ α.

• Bonferroni: πi = min(m pi, 1)

• Sidák: πi = 1 − (1 − pi)^m

Bonferroni always gives strong control. Sidák is less conservative than Bonferroni. When the genes are independent, it gives strong control exactly (FWER = α); proof later. It controls the FWER in many other cases, but is still conservative. Less conservative procedures make use of the dependence structure of the test statistics and/or are sequential.
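These two single-step adjustments are one-liners; an illustrative NumPy sketch (function names are mine):

```python
import numpy as np

def bonferroni(p):
    # pi_i = min(m * p_i, 1)
    p = np.asarray(p, dtype=float)
    return np.minimum(len(p) * p, 1.0)

def sidak(p):
    # pi_i = 1 - (1 - p_i)^m; exact FWER control for independent p-values
    p = np.asarray(p, dtype=float)
    return 1.0 - (1.0 - p) ** len(p)
```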

Single-step adjustments (ctd)

The minP method of Westfall and Young:

πi = pr(min_{1≤l≤m} P_l ≤ p_i | H0^C)

Based on the joint distribution of the p-values {P_l}. This is the most powerful of the three single-step adjustments.

If Pi ~ U[0,1], it gives a FWER exactly = α. It always confers weak control, and gives strong control under a condition known as subset pivotality (definition omitted), which applies here.

Permutation-based single-step minP adjustment of p-values

For the b-th iteration, b = 1, …, B:

1. Permute the n columns of the data matrix X, obtaining a matrix Xb. The first n1 columns are referred to as “treatments”, the second n2 columns as “controls”.

2. For each gene, calculate the corresponding unadjusted p-values pi,b, i = 1, 2, …, m (e.g. by further permutations), based on the permuted matrix Xb.

After all the B permutations are done:

3. Compute the adjusted p-values πi = #{b: minl pl,b ≤ pi}/B.
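Step 3 can be sketched as follows, assuming the matrix of permutation p-values has already been computed (Python/NumPy; names are illustrative):

```python
import numpy as np

def minp_single_step(p_obs, p_perm):
    """Single-step minP adjustment.
    p_obs:  length-m vector of observed unadjusted p-values.
    p_perm: m x B matrix, p_perm[i, b] = unadjusted p-value of gene i
            computed on the b-th permuted data set.
    Returns pi_i = #{b : min_l p_perm[l, b] <= p_obs[i]} / B."""
    p_obs = np.asarray(p_obs, dtype=float)
    col_min = np.asarray(p_perm, dtype=float).min(axis=0)  # min over genes, per permutation
    B = col_min.size
    return np.array([(col_min <= pi).sum() / B for pi in p_obs])
```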

More powerful methods: step-down adjustments

The idea: S Holm’s modification of Bonferroni.

Also applies to Sidák, maxT, and minP.

S Holm’s modification of Bonferroni

Order the unadjusted p-values such that p_r1 ≤ p_r2 ≤ … ≤ p_rm. The indices r1, r2, r3, … are fixed for the given data.

For control of the FWER at level α, the step-down Holm adjusted p-values are

π_rj = max_{k ∈ {1,…,j}} {min((m − k + 1) p_rk, 1)}.

The point here is that we don’t multiply every p_rk by the same factor m, but only the smallest. The others are multiplied by successively smaller factors: m − 1, m − 2, …, down to multiplying p_rm by 1.

By taking successive maxima of the first terms in the brackets, we get monotonicity of these adjusted p-values.
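A sketch of the Holm adjustment, including the monotonicity step (Python/NumPy; illustrative, not the authors’ code):

```python
import numpy as np

def holm(p):
    """Step-down Holm adjusted p-values:
    pi_{r_j} = max_{k in {1..j}} min((m - k + 1) * p_{r_k}, 1)."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)                 # indices r_1, ..., r_m
    factors = m - np.arange(m)            # m, m-1, ..., 1
    adj = np.minimum(factors * p[order], 1.0)
    adj = np.maximum.accumulate(adj)      # successive maxima -> monotone
    out = np.empty(m)
    out[order] = adj                      # back to the original gene order
    return out
```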

Step-down adjustment of minP

Order the unadjusted p-values such that p_r1 ≤ p_r2 ≤ … ≤ p_rm.

The step-down adjustment has a complicated formula, see below, but in effect it is:

1. Compare min{P_r1, …, P_rm} with p_r1;

2. Compare min{P_r2, …, P_rm} with p_r2;

3. Compare min{P_r3, …, P_rm} with p_r3;

…

m. Compare P_rm with p_rm.

Enforce monotonicity on the adjusted p_ri. The formula is

π_rj = max_{k ∈ {1,…,j}} {pr(min_{l ∈ {rk,…,rm}} P_l ≤ p_rk | H0^C)}.

The computing challenge: iterated permutations

The procedure is quite computationally intensive if B is very large (typically at least 10,000) and we estimate all unadjusted p-values by further permutations.

Typical numbers:

• To compute one unadjusted p-value: B = 10,000 permutations

• # of (nested) permutations at which unadjusted p-values are needed: B = 10,000

• # of genes: m = 6,000.

In general the run time is O(mB²).

Avoiding the computational difficulty of single-step minP adjustment

• maxT method (Chapter 4 of Westfall and Young):

πi = pr(max_{1≤l≤m} |T_l| ≥ |t_i| | H0^C)

needs only B = 10,000 permutations.

However, if the distributions of the test statistics are not identical, it will give more weight to genes with heavy-tailed distributions (which tend to have larger t-values).

• There is a fast algorithm which does the minP adjustment in O(mB log B + m log m) time.
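A sketch of the permutation maxT adjustment (Python with numpy/scipy; the two-sample t-statistic and B follow the slides, everything else is an illustrative choice):

```python
import numpy as np
from scipy import stats

def maxt_adjust(X, n1, B=10000, rng=None):
    """Single-step maxT adjusted p-values by permuting the columns of X
    (rows = genes): pi_i = #{b : max_l |T_{l,b}| >= |t_i|} / B."""
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    t_obs = stats.ttest_ind(X[:, :n1], X[:, n1:], axis=1).statistic
    maxes = np.empty(B)
    for b in range(B):
        Xb = X[:, rng.permutation(n)]     # relabel the n arrays at random
        tb = stats.ttest_ind(Xb[:, :n1], Xb[:, n1:], axis=1).statistic
        maxes[b] = np.abs(tb).max()       # one max |T| per permutation
    return np.array([(maxes >= abs(t)).mean() for t in t_obs])
```

Only one t-statistic per gene per permutation is needed, which is why B = 10,000 suffices here where minP would need B² nested permutations.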

Adjusted p-values, Westfall & Young(1993)

For strong control of the FWER at level α, let |t_r1| ≥ |t_r2| ≥ … ≥ |t_rm| denote the ordered test statistics and define the adjusted p-values as

π̃_r1 = pr(max_{l ∈ {r1,…,rm}} |T_l| ≥ |t_r1| | H0^C),

π̃_rj = max(π̃_r(j−1), pr(max_{l ∈ {rj,…,rm}} |T_l| ≥ |t_rj| | H0^C)), for 2 ≤ j ≤ m.

Declare gene rj differentially expressed if π̃_rj ≤ α.

Takes into account the dependence structure between the hypotheses.

Conclusion: unsuitable. Too much discreteness.

gene    t           unadj. p   minP      plower      maxT
index   statistic   (×10^-4)   adjust.               adjust.
2139    -22         1.5        .53       8 × 10^-5   2 × 10^-4
4117    -13         1.5        .53       8 × 10^-5   5 × 10^-4
5330    -12         1.5        .53       8 × 10^-5   5 × 10^-4
1731    -11         1.5        .53       8 × 10^-5   5 × 10^-4
538     -11         1.5        .53       8 × 10^-5   5 × 10^-4
1489    -9.1        1.5        .53       8 × 10^-5   1 × 10^-3
2526    -8.3        1.5        .53       8 × 10^-5   3 × 10^-3
4916    -7.7        1.5        .53       8 × 10^-5   8 × 10^-3
941     -4.7        1.5        .53       8 × 10^-5   0.65
2000    +3.1        1.5        .53       8 × 10^-5   1.00
5867    -4.2        3.1        .76       0.54        0.90
4608    +4.8        6.2        .93       0.87        0.61
948     -4.7        7.8        .96       0.93        0.66
5577    -4.5        12         .99       0.93        0.74

Apo AI. Genes with maxT p-value ≤ 0.01

Gene                   Adjusted p-value   T      Num    Den
Apo AI                 0                  -22.9  -3.2   0.14
Sterol C5 desaturase   0                  -13.1  -1.1   0.08
Apo AI                 0                  -12.2  -1.9   0.16
Apo CIII               0                  -11.9  -1.0   0.09
Apo AI                 0                  -11.4  -3.1   0.2
EST                    0                  -9.1   -1.0   0.11
Apo CIII               0                  -8.4   -1.0   0.12
Sterol C5 desaturase   0                  -7.7   -1.0   0.13

False discovery rate (Benjamini and Hochberg, 1995)

Definition: FDR = E(V/R | R > 0) P(R > 0).

Rank the p-values p_r1 ≤ p_r2 ≤ … ≤ p_rm.

The adjusted p-values that control the FDR when the Pi are independently distributed are given by the step-up formula:

π_ri = min_{k ∈ {i,…,m}} {min(m p_rk / k, 1)}.

We use this as follows: reject p_r1, p_r2, …, p_rk*, where k* is the largest k such that p_rk ≤ αk/m. This keeps the FDR ≤ α under independence; proof not given.

Compare the above with Holm’s adjustment to control the FWER, the step-down version of Bonferroni, which is

π_ri = max_{k ∈ {1,…,i}} {min((m − k + 1) p_rk, 1)}.

Positive false discovery rate (Storey, 2001, independent case)

A new definition of FDR, called the positive false discovery rate (pFDR):

pFDR = E(V/R | R > 0).

The logic behind this is that in practice at least one gene should be expected to be differentially expressed.

The adjusted p-values (called q-values in Storey’s paper) control the pFDR:

π_ri = min_{k ∈ {i,…,m}} {(m/k) p_rk π̂0}.

Note that π0 = m0/m can be estimated, for suitable λ, by

π̂0 = #{pi > λ} / {(1 − λ) m}.

One choice for λ is 1/2; another is the median of the observed (unadjusted) p-values.
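A sketch of the π0 estimate and the resulting q-values, under a BH-style step-up reading of Storey’s formula (Python/NumPy; illustrative, with the estimate capped at 1):

```python
import numpy as np

def pi0_estimate(p, lam=0.5):
    # pi0_hat = #{p_i > lambda} / ((1 - lambda) * m)
    p = np.asarray(p, dtype=float)
    return np.sum(p > lam) / ((1.0 - lam) * len(p))

def q_values(p, lam=0.5):
    """q_{r_i} = min_{k >= i} (m / k) * p_{r_k} * pi0_hat, capped at 1."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    pi0 = min(pi0_estimate(p, lam), 1.0)   # cap the estimate at 1
    order = np.argsort(p)
    ranked = pi0 * m * p[order] / np.arange(1, m + 1)
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return out
```

With π̂0 = 1 this reduces to the Benjamini-Hochberg adjustment; the gain from π̂0 < 1 is a uniformly smaller set of q-values.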

Positive false discovery rate (Storey, 2001, dependent case)

In order to incorporate dependence, we need to assume identical distributions.

Specify δ0 to be a small number, say 0.2, such that most t-statistics fall in (−δ0, δ0) under a null hypothesis, and δ to be a large number, say 3, such that we reject the hypotheses whose t-statistics exceed δ in absolute value.

For the original data, find W = #{i: |ti| ≤ δ0} and R = #{i: |ti| ≥ δ}.

We then do B permutations; for each one we compute Wb and Rb simply by Wb = #{i: |ti,b| ≤ δ0} and Rb = #{i: |ti,b| ≥ δ}, b = 1, …, B.

Then we can compute the proportion of genes expected to be null:

π̂0 = W / {(W1 + W2 + … + WB)/B}.

An estimate of the pFDR at the point δ is then π̂0 {(R1 + R2 + … + RB)/B} / R. Further details can be found in the references.
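The dependent-case estimate can be sketched as follows, assuming the permuted t-statistics have already been computed and writing the small and large thresholds as delta0 and delta (Python/NumPy; illustrative):

```python
import numpy as np

def pfdr_estimate(t, t_perm, delta0=0.2, delta=3.0):
    """t:      length-m vector of observed t-statistics.
    t_perm: B x m matrix of t-statistics from permuted data sets.
    W = #{i : |t_i| <= delta0}, R = #{i : |t_i| >= delta}, and
    likewise W_b, R_b on each permutation b."""
    t = np.asarray(t, dtype=float)
    t_perm = np.asarray(t_perm, dtype=float)
    W = np.sum(np.abs(t) <= delta0)
    R = np.sum(np.abs(t) >= delta)
    mean_Wb = np.sum(np.abs(t_perm) <= delta0, axis=1).mean()
    mean_Rb = np.sum(np.abs(t_perm) >= delta, axis=1).mean()
    pi0 = W / mean_Wb              # proportion of genes expected to be null
    return pi0 * mean_Rb / R       # estimated pFDR at threshold delta
```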

Discussion

The minP adjustment seems more conservative than the maxT adjustment, but is essentially model-free.

With the Callow data, we see that the adjusted minP values are very discrete; it seems that 12,870 permutations are not enough for 6,000 tests.

With the Golub data, we see that the number of permutations matters. Discreteness is a real issue here too, but we do have enough permutations.

The same ideas extend to other statistics:

Wilcoxon, paired t, F, blocked F.

Same speed-up works with the bootstrap.

Discussion, ctd.

Major question in practice: control of the FWER or some form of FDR?

In the first case, use minP, maxT, or something else? In the second case, FDR, pFDR, or something else? If minP is to be used, we need guidelines for its use in terms of sample sizes and number of genes.

Another approach: empirical Bayes. There are links with pFDR.

Acknowledgments

Multiple testing section: based on

Y. Ge (Lecture 8 in stat246, statistics in genetics).

http://www.stat.Berkeley.EDU/users/terry/Classes/s246.2002/index.html

S. Dudoit (Bioconductor short course lecture 2)

http://www.bioconductor.org/