
Page 1:

Back to basics – Probability, Conditional Probability and Independence

• Probability of an outcome in an experiment is the proportion of times that this particular outcome would occur in a very large (“infinite”) number of replicated experiments

• A random variable is a mapping assigning a real number to each possible experimental outcome – often equivalent to the experimental outcome itself

• Probability distribution describes the probability of any outcome, or any particular value of the corresponding random variable in an experiment

• If we have two different experiments, the probability of any combination of outcomes is the joint probability, and the joint probability distribution describes the probabilities of observing any combination of outcomes

• If the outcome of one experiment does not affect the probability distribution of the other, we say that outcomes are independent

• An event is a set of one or more possible outcomes

Page 2:

Back to basics – Probability, Conditional Probability and Independence

• Let N be the very large number of trials of an experiment, and ni be the number of times that the ith outcome (oi), out of possibly infinitely many possible outcomes, has been observed

• pi = ni/N is the probability of the ith outcome

• Properties of probabilities following from this definition:

1) pi ≥ 0
2) pi ≤ 1
3) Σi pi = 1
4) For any set of mutually exclusive events (events that don't have any outcomes in common) e1 = {o11, o12, o13, ...}, e2 = {o21, o22, o23, ...}, ..., p(e1 OR e2 OR ...) = p({o11, o12, o13, ..., o21, o22, o23, ...}) = Σi p(ei)

5) p(NOT e) = 1-p(e) for any event e
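A quick way to see this frequentist definition in action is simulation. The following R sketch (a hypothetical example, not from the slides) estimates the outcome probabilities of a fair die as long-run proportions:

# Estimate outcome probabilities as long-run proportions (hypothetical example)
N <- 1e6                                 # "very large" number of replicated trials
rolls <- sample(1:6, N, replace = TRUE)  # outcome of each trial
n_i <- table(rolls)                      # n_i: number of times outcome o_i was observed
p_i <- n_i / N                           # p_i = n_i / N, each close to 1/6
sum(p_i)                                 # property 3: the probabilities sum to 1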

Page 3:

Conditional Probabilities and Independence

• Suppose you have a set of N DNA sequences. Let the random variable X denote the identity of the first nucleotide and the random variable Y the identity of the second nucleotide.

• Suppose now that you have randomly selected a DNA sequence from this set and looked at the first nucleotide but not the second. Question: what is the probability of a particular second nucleotide y given that you know that the first nucleotide is x*?

• The probability of a randomly selected DNA sequence from this set having the dinucleotide xy at the beginning is equal to P(X=x, Y=y)

Let nxy be the number of sequences whose first two nucleotides are xy, and nx and ny the corresponding marginal counts, x, y ∈ {A, C, G, T}. Then

P(X=x, Y=y) = nxy/N,  P(X=x) = nx/N,  P(Y=y) = ny/N

P(Y=y | X=x*) = nx*y / nx* = (nx*y/N) / (nx*/N) = P(X=x*, Y=y) / P(X=x*)

• P(Y=y|X=x*) is the conditional probability of Y=y given that X=x*

• X and Y are independent if P(Y=y|X=x)=P(Y=y)
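As an illustration, the counting argument above can be carried out directly in R (a sketch with simulated, hypothetical sequence data):

# Hypothetical data: first (X) and second (Y) nucleotide of N sequences
N <- 10000
X <- sample(c("A","C","G","T"), N, replace = TRUE)
Y <- sample(c("A","C","G","T"), N, replace = TRUE)
n_xy <- table(X, Y)                 # joint counts n_xy for each dinucleotide
p_xy <- n_xy / N                    # P(X=x, Y=y) = n_xy / N
p_x  <- rowSums(n_xy) / N           # P(X=x) = n_x / N
p_xy["A", ] / p_x["A"]              # P(Y=y | X="A") = P(X="A", Y=y) / P(X="A")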

Page 4:

Conditional Probabilities – Another Example

• Measuring differences between expression levels under two different experimental conditions for two genes (1 and 2) in many replicated experiments

• The outcomes of each experiment are:
– X=1 if the difference for gene 1 is greater than 2, and 0 otherwise
– Y=1 if the difference for gene 2 is greater than 2, and 0 otherwise

• Suppose now that in one experiment we look at gene 1 and know that X=0. Question: what is the probability that Y=1, knowing that X=0?

• The joint probability of differences for both genes being greater than 2 in any single experiment is P(X=1,Y=1)

With nxy denoting the number of experiments in which X=x and Y=y, and nx0, nx1, ny1 the corresponding marginal counts:

P(X=1, Y=1) = n11/N,  P(X=1) = nx1/N,  P(Y=1) = ny1/N

P(Y=1 | X=0) = n01 / nx0 = (n01/N) / (nx0/N) = P(X=0, Y=1) / P(X=0)

• P(Y=1|X=0) is the conditional probability of Y=1 given that X=0

• X and Y are independent if P(Y=y|X=x)=P(Y=y) for any x and y
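The same computation for this binary setting, as an R sketch (simulated indicators; X and Y are generated independently here, so the two probabilities should roughly agree):

N <- 5000
X <- rbinom(N, 1, 0.3)          # X = 1 if the difference for gene 1 is > 2
Y <- rbinom(N, 1, 0.3)          # Y = 1 if the difference for gene 2 is > 2
n <- table(X, Y)                # 2 x 2 table of joint counts
n["0", "1"] / sum(n["0", ])     # P(Y=1 | X=0) = n_01 / n_x0
sum(n[, "1"]) / N               # P(Y=1) = n_y1 / N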

Page 5:

Conditional Probabilities and Independence

• If X and Y are independent, then from

p(Y|X) = p(X, Y) / p(X) and p(Y|X) = p(Y)

it follows that

p(X and Y) = p(X, Y) = p(X) p(Y)

• The probability of two independent events occurring together is equal to the product of their probabilities (e.g. the probability of two heads in two tosses of a fair coin is 0.5 × 0.5 = 0.25)

Page 6:

Identifying Differentially Expressed Genes

• Suppose we have T genes which we measured under two experimental conditions (Ctl and Nic) in n replicated experiments

• ti* and pi are the t-statistic and the corresponding p-value for the ith gene, i = 1,...,T

• The p-value is the probability of observing a value of the t-statistic as extreme as or more extreme than the one calculated from the data (ti*) under the "null distribution" (i.e. the distribution assuming that μiCtl = μiNic)

• The ith gene is "differentially expressed" if we can reject the ith null hypothesis μiCtl = μiNic and conclude that μiCtl ≠ μiNic at a significance level α (i.e. if pi < α)

• A Type I error is committed when a null hypothesis is falsely rejected

• A Type II error is committed when a null hypothesis is not rejected but it is false

• An Experiment-wise Type I error is committed if any of a set of (T) null hypotheses is falsely rejected

• If the significance level α is chosen prior to conducting the experiment, we know that by following the hypothesis-testing procedure, the probability of falsely concluding that any one gene is differentially expressed (i.e. falsely rejecting its null hypothesis) is equal to α

• What is the probability of committing a Family-wise Type I Error?

• Assuming that all null hypotheses are true, what is the probability that we would reject at least one of them?


Page 7:

Experiment-wise error rate

Assuming that the individual tests of hypothesis are independent and that all null hypotheses are true:

p(Not Committing the Experiment-Wise Error)
= p(Not Rejecting H01 AND Not Rejecting H02 AND ... AND Not Rejecting H0T)
= (1-α)(1-α)...(1-α) = (1-α)^T

p(Committing the Experiment-Wise Error) = 1-(1-α)^T

[Figure: Family-Wise Type I Error Rate vs. number of hypotheses, for alpha = 0.05, 0.01, 0.001, 0.0001]
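The curves in this figure follow directly from the formula above; a minimal R sketch to reproduce them:

# Family-wise error rate as a function of the number of hypotheses T
T <- 1:15000
alpha <- c(0.05, 0.01, 0.001, 0.0001)
fwer <- sapply(alpha, function(a) 1 - (1 - a)^T)   # one column per alpha
matplot(T, fwer, type = "l", ylim = c(0, 1),
        xlab = "Number of Hypotheses", ylab = "Family-Wise Type I Error Rate")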

Page 8:

Experiment-wise error rate

If we want to keep the FWER at level α:

Sidak's adjustment: αa = 1-(1-α)^(1/T)

FWER = 1-(1-αa)^T = 1-(1-[1-(1-α)^(1/T)])^T = 1-((1-α)^(1/T))^T = 1-(1-α) = α

For FWER = 0.05, αa = 0.000003

[Figure: Family-Wise Type I Error Rate vs. number of hypotheses, for alpha = 0.05, 0.01, 0.001, 0.0001 and the Sidak-adjusted alpha = 0.000003]
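A minimal R check of Sidak's adjustment (the helper name is hypothetical, and T = 15000 is an assumption taken from the figure's axis range):

sidak <- function(alpha, T) 1 - (1 - alpha)^(1/T)  # per-test significance level
alpha_a <- sidak(0.05, 15000)                      # about 3.4e-6, matching the 0.000003 above
1 - (1 - alpha_a)^15000                            # the FWER recovers 0.05 exactly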

Page 9:

Experiment-wise error rate

Another adjustment:

p(Committing the Experiment-Wise Error)
= p(Rejecting H01 OR Rejecting H02 OR ... OR Rejecting H0T) ≤ Tα

(Homework: how does that follow from the probability properties?)

Bonferroni adjustment: αb = α/T

• Generally αb < αa, so the Bonferroni adjustment is more conservative

• Sidak's adjustment assumes independence, which is likely not to be satisfied

• If tests are not independent, Sidak's adjustment is most likely conservative, but it could be liberal
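Comparing the two adjustments numerically in R (a sketch, again assuming T = 15000):

alpha <- 0.05; T <- 15000
alpha_b <- alpha / T               # Bonferroni: about 3.33e-6
alpha_a <- 1 - (1 - alpha)^(1/T)   # Sidak: about 3.42e-6
alpha_b < alpha_a                  # TRUE: Bonferroni is slightly more conservative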

Page 10:

Adjusting the p-value

Individual hypotheses:

H0i: μiW = μiC    pi = p(tn-1 > ti*), i = 1,...,T

"Composite" hypothesis:

H0: {μiW = μiC, i = 1,...,T}    p = min{pi, i = 1,...,T}

• The composite null hypothesis is rejected if even a single individual hypothesis is rejected

• Consequently the p-value for the composite hypothesis is equal to the minimum of individual p-values

• If all tests have the same reference distribution, this is equivalent to p = p(tn-1 > t*max)

• We can consider a p-value to be itself the outcome of the experiment

• What is the "null" probability distribution of the p-value for individual tests of hypothesis?

• What is the "null" probability distribution for the composite p-value?

Page 11:

Null distribution of the p-value

Given that the null hypothesis is true, the probability of observing a p-value smaller than a fixed number a between 0 and 1 is:

p(pi < a) = p(|t*| > ta) = a

[Figure: the null distribution of t* with the two-sided rejection region beyond -ta and ta marked, and the null distribution of pi, a flat density f(P-value) = 1 on (0, 1)]

Page 12:

Null distribution of the composite p-value

p(p < a) = p(min{pi, i = 1,...,T} < a)
= 1 - p(min{pi, i = 1,...,T} > a)
= 1 - p(p1 > a AND p2 > a AND ... AND pT > a)
= [assuming independence between the different tests]
= 1 - [p(p1 > a) p(p2 > a) ... p(pT > a)]
= 1 - [1-p(p1 < a)][1-p(p2 < a)]...[1-p(pT < a)]
= 1 - [1-a]^T

Instead of adjusting the significance level, we can adjust all p-values:

pia = 1-[1-pi]^T
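This adjustment is a simple transformation of the p-value vector; an R sketch (this single-step Sidak-style adjustment is not one of p.adjust's built-in methods, so it is written out directly):

T <- 10000; n <- 5
p <- replicate(T, t.test(rnorm(n), rnorm(n))$p.value)  # p-values under the null
p_adj <- 1 - (1 - p)^T                                 # adjusted p-values
min(p)      # tiny, even though all nulls are true
min(p_adj)  # the adjusted composite p-value is again roughly uniform on (0, 1)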

Page 13:

Null distribution of the composite p-value

The null distribution of the composite p-value for 1, 10 and 30000 tests

[Figure: density f(P-value) of the composite p-value, plotted over (0, 1)]

Page 14:

Seems simple

• Applying a conservative p-value adjustment will take care of false positives

• How about false negatives?

• A Type II error arises when we fail to reject H0 although it is false

Power = p(Rejecting H0 when μW - μC ≠ 0) = p(t* > tα | μW - μC ≠ 0) = p(p < α | μW - μC ≠ 0)

• Depends on various things (α, df, σ, μW - μC)

• The probability distribution of t* is then the non-central t
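Since R's pt() accepts a non-centrality parameter, the power of a single two-sample test can be sketched directly (all numbers below are hypothetical, not the slide's):

n <- 5; delta <- 2; sigma <- 1.5; alpha <- 0.05   # per-group size, muW - muC, sd, level
df <- 2*n - 2
ncp <- delta / (sigma * sqrt(2/n))                # non-centrality parameter of t*
tcrit <- qt(1 - alpha/2, df)                      # two-sided critical value
1 - pt(tcrit, df, ncp) + pt(-tcrit, df, ncp)      # power; compare power.t.test(n, delta, sd = sigma)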

Page 15:

Effects of multiple comparison adjustments on power
http://homepages.uc.edu/%7Emedvedm/documents/Sample%20Size%20for%20arrays%20experiments.pdf

[Figure: null and non-central t densities with significance cut-offs for n = 5 and n = 10; t4: green dashed line, t9: red dashed line, t4,nc=6.1: green solid line, t9,nc=8.6: red solid line; T = 5000, α = 0.05, αa = 0.0001, μW - μC = 10, σ = 1.5]

Page 16:

This is not good enough

• Traditional statistical approaches to multiple comparison adjustments, which strictly control the experiment-wise error rate, are not optimal

• Need a balance between the false positive and false negative rates

• Benjamini Y and Hochberg Y (1995) Controlling the False Discovery Rate: a Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society B 57:289-300.

• Instead of controlling the probability of generating a single false positive, we control the proportion of false positives

• The consequence is that some of the implicated genes are likely to be false positives.

Page 17:

False Discovery Rate

• FDR = E(V/R), where V is the number of falsely rejected null hypotheses and R is the total number of rejected null hypotheses

• If all null hypotheses are true (the composite null), this is equivalent to the Family-wise error rate

Page 18:

False Discovery Rate

Alternatively, adjust p-values as

p(i)fdr = min{ (m/j) p(j), j = i, i+1, ..., m }

where p(1) ≤ p(2) ≤ ... ≤ p(m) are the ordered p-values and m is the number of tests
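A direct R transcription of this step-up adjustment (a sketch; it reproduces p.adjust(p, method = "fdr")):

bh_adjust <- function(p) {
  m <- length(p)
  i <- m:1                             # ranks from largest to smallest p-value
  o <- order(p, decreasing = TRUE)     # sort p descending
  ro <- order(o)                       # permutation that undoes the sort
  pmin(1, cummin((m / i) * p[o]))[ro]  # min over j >= i of (m/j) p_(j), capped at 1
}
# all.equal(bh_adjust(p), p.adjust(p, method = "fdr"))  # TRUE for any p-value vector p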

Page 19:

Effects

> FDRpvalue <- p.adjust(TPvalue, method = "fdr")
> BONFpvalue <- p.adjust(TPvalue, method = "bonferroni")

[Figure: three panels (Unadjusted, FDR, Bonferroni) plotting -log10(p-value) against mean difference]