TRANSCRIPT
April 2016
The Data Science and Decisions Lab, UCLA
Hypothesis Testing and the boundaries between Statistics and Machine Learning
Statistics vs Machine Learning: Two Cultures
Data Generation Process: Statistical Inference vs. Statistical Learning

[Diagram: a data generation process maps inputs X to outputs Y.]

Statistical inference tries to draw conclusions about the data generation model.
Statistical learning just tries to predict Y.
Statistics vs Machine Learning: Two Cultures
Leo Breiman, “Statistical Modeling: The Two Cultures,” Statistical Science, 2001
Descriptive vs. Inferential Statistics
• Descriptive statistics: describing a data sample (sample size, demographics, mean and median tendencies, etc.) without drawing conclusions about the population.
• Inferential (inductive) statistics: using the data sample to draw conclusions about the population (conclusions still entail uncertainty, so we need measures of significance).
• Statistical hypothesis testing is an inferential statistics approach, but involves descriptive statistics as well!
Statistical Inference problems
• Inferential statistics problems:
o Point estimation
o Interval estimation
o Classification and clustering
o Rejecting hypotheses
o Selecting models

[Diagram: a sample is drawn from a population; we infer population properties from the sample.]
Frequentist vs. Bayesian Inference
• Frequentist inference:
• Key idea: Objective interpretation of probability - any given experiment can be considered as one of an infinite sequence of possible repetitions of the same experiment, each capable of producing statistically independent results.
• Requires that the correct conclusion be drawn with a given (high) probability across this set of experiments.
Frequentist vs. Bayesian Inference
• Frequentist inference:
[Diagram: from the population, N data sets (Data set 1, Data set 2, …, Data set N) are drawn, each leading to a conclusion.]

The same conclusion is reached with high probability by resampling the population and repeating the experiment.
Frequentist vs. Bayesian Inference
• Frequentist inference:
• Measures of significance: p-values and confidence intervals
• Frequentist methods are objective: you can do significance testing or confidence interval estimation without defining any explicit (subjective) utility function.
• No need to assume a prior distribution over model parameters; they are assumed to take fixed, unknown values.
Frequentist vs. Bayesian Inference
• Frequentist inference:
• Measures of significance: p-values and confidence intervals
[Diagram: three samples are drawn from the same population; each sample yields its own confidence interval.]
Frequentist vs. Bayesian Inference
• Frequentist inference:
[Diagram: a sample drawn from the population yields a random confidence interval.]

Confidence intervals are random! The fraction of such intervals that contain the true parameter equals the confidence level 1 − δ.
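A minimal simulation sketch in Python (with hypothetical parameter values) illustrating this: repeatedly draw a sample from a known population, build a 95% t-interval for the mean each time, and count how often the interval covers the true mean.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu_true, sigma, n, n_experiments = 5.0, 2.0, 30, 10_000  # hypothetical values

    covered = 0
    for _ in range(n_experiments):
        sample = rng.normal(mu_true, sigma, size=n)
        xbar, s = sample.mean(), sample.std(ddof=1)
        # 95% t-interval for the mean: xbar +/- t_{0.975, n-1} * s / sqrt(n)
        half_width = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)
        covered += (xbar - half_width <= mu_true <= xbar + half_width)

    print(f"Empirical coverage: {covered / n_experiments:.3f}")  # close to 0.95

The interval bounds change from sample to sample; the confidence level describes the procedure, not any single interval.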
Frequentist vs. Bayesian Inference
• Bayesian inference:
• Key idea: Subjective interpretation of probability – statistical propositions depend on a posterior belief formed after observing data samples. Subjective because it depends on prior beliefs (and utility functions).
• Measures of significance: credible intervals and Bayes factors.
Frequentist vs. Bayesian Inference
• Bayesian inference:
• Credible intervals for parameter estimation: an interval in the domain of a posterior probability distribution or predictive distribution used for interval estimation
• Credible intervals vs. Confidence intervals: Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value.
Frequentist vs. Bayesian Inference
• Bayesian inference: estimation problems
• Credible intervals vs. confidence intervals
[Figure: posterior distribution (belief) over θ.]

95% credible interval: P(θ1 < θ < θ2 | x) = 95%; the bounds θ1, θ2 are fixed and θ is random.

Random 95% confidence interval [θ1(x), θ2(x)]: P(θ1(x) < θ < θ2(x)) = 95%; the bounds are random and θ is fixed.

Note that, conditioned on the data, P(θ1(x) < θ < θ2(x) | x) is either 0 or 1, since the parameter is not random.
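A hedged sketch of the contrast on a coin-flip example (hypothetical data; the uniform Beta(1, 1) prior is an assumption made here for illustration):

    import numpy as np
    from scipy import stats

    k, n = 30, 100  # hypothetical data: 30 heads in 100 flips

    # Bayesian: the posterior over theta is Beta(1 + k, 1 + n - k);
    # the 95% credible interval has fixed bounds, theta is random.
    posterior = stats.beta(1 + k, 1 + n - k)
    print("credible interval:", posterior.ppf(0.025), posterior.ppf(0.975))

    # Frequentist (normal approximation): the bounds are random, theta is fixed.
    p_hat = k / n
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
    print("confidence interval:", p_hat - half, p_hat + half)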
Frequentist vs. Bayesian Inference
• Bayesian inference: comparison problems
• Bayes factors vs. p-values
• Bayes factors are a natural alternative to classical hypothesis testing that measure the strength of evidence through marginal likelihood ratios:

K = P(X | Ho) / P(X | H1) = [∫ P(θo | Ho) P(X | θo, Ho) dθo] / [∫ P(θ1 | H1) P(X | θ1, H1) dθ1]

• Guidelines on the value of K: K < 1 is negative (supports H1), K > 100 is decisive, etc.
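A minimal sketch of computing K for a coin example (hypothetical data; Ho is the simple hypothesis θ = 0.5, and the Beta(1, 1) prior under H1 is an assumption made for illustration):

    from scipy import stats
    from scipy.integrate import quad

    k, n = 60, 100  # hypothetical data: 60 heads in 100 flips

    # Marginal likelihood under Ho: theta is fixed at 0.5.
    m0 = stats.binom.pmf(k, n, 0.5)

    # Marginal likelihood under H1: integrate the binomial likelihood
    # against the Beta(1, 1) prior on theta.
    m1, _ = quad(lambda t: stats.beta.pdf(t, 1, 1) * stats.binom.pmf(k, n, t), 0, 1)

    print("Bayes factor K =", m0 / m1)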
Statistical Hypothesis Testing: Problems
• Many statistical inference problems involve hypothesis testing. Examples:
• Which model best fits the data?
• Is treatment X more effective for males than for females?
• Is smoking a risk factor for coronary heart disease?
• Does the chance of a certain intervention being successful depend on a specific feature of the patient?
• Does this subpopulation of patients belong to the same category?
• Usually a yes-no question. Inference = answering this question from a data sample, understanding the data independent of any specific ML algorithm.
Statistical Hypothesis Testing: the setting
• Usually we want to test:
1) Whether two samples can be considered to be from the same population.
2) Whether one sample has systematically larger values than another.
3) Whether samples can be considered to be correlated.

Significance of conclusions: predict the likelihood that an event associated with a given statement (i.e., the hypothesis) occurred by chance, given the observed data and available information. Testing is usually objective: frequentist significance measures!
Statistical Hypothesis Testing: the setting
Testing is usually objective: frequentist significance measures!
• Complex phenomena (no solid model), inference not necessarily associated with a specific utility (need objective conclusions)
• e.g., signal detection vs. a medical study:
o Model and priors known, utility is the error probability (Bayesian risk) = Bayesian framework
o Complex population, priors may not be easy to construct, no utility (e.g., the effect of smoking?) = frequentist framework
Statistical Hypothesis Testing: Main steps
1) State the model and statistical assumptions
2) Identify the null and alternative hypotheses
3) Select an appropriate test and a relevant test statistic
4) Find the distribution of the statistic under the null hypothesis
5) Either find a threshold on the statistic given the significance level α and reject the null hypothesis if the statistic crosses that threshold, or compute the p-value and reject the null hypothesis if the p-value is less than the significance level α
(a worked example of these steps follows below)
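As a concrete walk-through of these steps, here is a hedged sketch with hypothetical coin-flip data: the model is i.i.d. Bernoulli flips, Ho is that the coin is fair, the statistic is the number of heads, and its null distribution is Binomial(n, 0.5).

    from scipy import stats

    k, n, alpha = 60, 100, 0.05  # hypothetical data: 60 heads in 100 flips

    # Step 4: null distribution of the statistic (number of heads).
    null = stats.binom(n, 0.5)

    # Step 5: two-sided p-value = probability, under Ho, of a count
    # at least as extreme as the one observed.
    p_value = min(1.0, 2 * min(null.cdf(k), null.sf(k - 1)))
    print("p-value:", p_value)
    print("Reject Ho" if p_value < alpha else "Fail to reject Ho")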
Parametric and Nonparametric Hypothesis Testing
Parametric tests: assume a certain parametric form of the underlying distribution; less applicability, more statistical power.
Nonparametric tests: assume no specific functional form of the underlying distribution; more applicability, less statistical power.

Null hypothesis test:
Ho: statement is true
H1: statement is not true
We want to accumulate enough evidence to reject the null hypothesis.
Parametric and Nonparametric Hypothesis Testing
Parametric tests:
• Bayesian framework = Neyman-Pearson lemma: the likelihood ratio is the test statistic, and can always be used to find a uniformly most powerful (UMP) test
• Frequentist framework = closed-form p-values can be computed

[Figure: the distributions of the test statistic X under Ho and H1.]

Nonparametric tests: distribution-free, but need other assumptions!
One-sample vs. two-sample tests
One-sample tests: e.g., testing whether a coin is fair.
Two-sample tests: e.g., testing whether two coins have the same probability of heads.
Measures of significance, type-I and type-II errors, and criteria for rejection
Significance level α: the rate of false positive (type-I) errors, called the size of the test. β: the rate of false negative (type-II) errors; 1 − β is called the power of the test.
Measures of significance, type-I and type-II errors, and criteria for rejection
The null hypothesis is rejected whenever the p-value is less than the significance level α.

P-value computation: p = P(X ≤ x | Ho), the probability, under the null hypothesis Ho, of the test statistic X being at least as extreme as its observed realization x (written here for a lower-tailed test; for an upper-tailed test, use P(X ≥ x | Ho)).

[Figure: the distribution of the test statistic X under Ho; the p-value is the tail area beyond the realization x.]
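A minimal sketch (hypothetical numbers) of turning a test-statistic realization into a p-value via the tail area of its null distribution:

    from scipy import stats

    t_obs, df = 2.3, 29  # hypothetical t-statistic realization and degrees of freedom

    # Two-sided p-value: tail area of the null Student-t beyond |t_obs|.
    p_value = 2 * stats.t.sf(abs(t_obs), df)
    print("p-value:", p_value)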
Should we pick a parametric or non-parametric test?
1) Decide the hypothesis and whether the test is one-sample or two-sample
2) Test the validity of the assumptions of the parametric test
3) Pick an appropriate parametric (or, if the assumptions fail, nonparametric) test
The t-test
• Assumes that the data are normally distributed: the Shapiro-Wilk test is used to check the validity of that assumption (see the sketch below)
• The test statistic follows a Student-t distribution
• One-sample t-test: tests whether the data sample has a mean close to a hypothesized mean
• Two-sample t-test: tests whether two data samples have significantly different means
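A minimal sketch of the normality check (synthetic data standing in for a real sample):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=10.0, scale=3.0, size=50)  # hypothetical data

    # Shapiro-Wilk: Ho = the sample comes from a normal distribution.
    stat, p = stats.shapiro(sample)
    print("W =", stat, "p-value =", p)
    # A small p-value (< 0.05) would reject normality and argue for a
    # nonparametric test instead of the t-test.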
One-sample t-test
• Null hypothesis: the population mean is equal to some value μo
• Test statistic: t = (x̄ − μo) / (s/√n), where x̄ is the sample mean, s the sample standard deviation, and n the sample size

[Figure: the Student-t null distribution centered at 0; the p-value is the tail area beyond the realization of t.]
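A minimal sketch using scipy (hypothetical data and hypothesized mean):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    sample = rng.normal(loc=10.5, scale=2.0, size=40)  # hypothetical data
    mu0 = 10.0                                         # hypothesized mean

    t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)
    print("t =", t_stat, "p-value =", p_value)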
Independent Two-sample t-test
• Null hypothesis: the population means of the two groups are equal; the classical (pooled) version assumes equal variances

[Figure: the Student-t null distribution of the two-sample t statistic; the p-value is the tail area beyond its realization.]
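A minimal sketch of the pooled two-sample test (hypothetical groups):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    group_a = rng.normal(10.0, 2.0, size=35)  # hypothetical samples
    group_b = rng.normal(11.0, 2.0, size=35)

    # Pooled two-sample t-test: assumes equal variances in the two groups.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print("t =", t_stat, "p-value =", p_value)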
Welch’s t-test
• Null hypothesis: the population means of the two groups are equal; unlike the pooled t-test, Welch's test does not assume that both groups have the same variance

[Figure: the approximate Student-t null distribution of Welch's statistic; the p-value is the tail area beyond its realization.]
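Continuing the previous sketch (same hypothetical group_a and group_b arrays), Welch's version in scipy is the same call with the equal-variance assumption switched off:

    # Welch's t-test: does not assume equal variances.
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
    print("Welch t =", t_stat, "p-value =", p_value)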
Typical cutoff on the t-statistic
• Two-tailed tests: the p-values are computed with regard to the two tails of the Student-t distribution, e.g., if the significance level is 0.05, then the area under each tail is α/2 = 0.025

[Figure: the Student-t null distribution with both tails shaded, each of area α/2; the cutoffs sit symmetrically at ±t = ±(x̄ − μo)/(s/√n).]
Typical cutoff on the t-statistic
• Typical significance level is α = 0.05; the CDF of the Student-t distribution is tabulated
• Magic number t ≈ 2 (t-statistic cutoff): for moderate-to-large samples, the two-tailed 5% cutoff is close to the normal quantile 1.96 (see the sketch below)
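A quick sketch of where the magic number comes from: the two-tailed 5% cutoff is the 97.5th percentile of the Student-t distribution, which approaches the normal quantile 1.96 as the degrees of freedom grow.

    from scipy import stats

    for df in (10, 30, 100, 1_000_000):
        print(df, stats.t.ppf(0.975, df))
    # 10 -> ~2.23, 30 -> ~2.04, 100 -> ~1.98, large df -> ~1.96, hence t ~ 2.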
Typical cutoff on the t-statistic
• Typical statistical inference in a research study:
• Research question: is smoking a risk factor for high blood pressure?
• Compare smokers vs. non-smokers, controlling for demographic features (ages, ethnicities, etc.)

[Figure: the null distribution of the t-statistic; the hypothesis is accepted below the cutoff of 2 and rejected above it (tail area 0.025).]
Multiple testing
• Testing multiple hypotheses simultaneously
• Should we make a decision on every hypothesis separately using its marginal p-value? NO!
• Multiple testing matters! We may care about the whole set of tests, and we need a method to control false discoveries
• Example: if α = 0.05 and we are doing 100 independent tests of true null hypotheses, then the probability that at least one true null hypothesis is rejected is

1 − (1 − 0.05)^100 ≈ 0.994
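A minimal simulation sketch confirming this: under a true null, p-values are uniform on [0, 1], so with 100 independent tests at least one falls below 0.05 almost every time.

    import numpy as np

    rng = np.random.default_rng(4)
    alpha, m, n_experiments = 0.05, 100, 10_000

    # Each row: p-values of m independent tests of true null hypotheses.
    p_values = rng.uniform(size=(n_experiments, m))
    fwer = np.mean((p_values < alpha).any(axis=1))
    print("empirical FWER:", fwer)                 # ~0.994
    print("analytical:", 1 - (1 - alpha) ** m)     # 0.994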
Multiple testing: p-value adjustments and type-I errors control
• For testing M hypotheses, we have a vector of t-statistics and p-values as follows:
(t1, t2, …, tM), (p1, p2, …, pM)
When people say “adjusting p-values for the number of hypothesis tests performed”, what they mean is controlling the type-I error rate.
Type-I error notions for multiple testing
P-value adjustment methods
Single-step methods: individual test statistics are compared to their critical values simultaneously. Example: the Bonferroni method.
Sequential methods: step-down methods improve upon single-step methods by possibly rejecting “less significant” hypotheses in subsequent steps. Example: Holm’s method.
Bonferroni method
• Reject any hypothesis with a p-value less than α/M
• Equivalently, adjust each p-value as p̃ = min(M·p, 1) and compare to α
• No assumption on the dependency structure; all p-values are treated similarly

[Figure: two test statistics t1 and t2; with two tests, the per-test threshold shrinks from α to α/2.]
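A minimal sketch of the adjustment on hypothetical p-values:

    import numpy as np

    alpha = 0.05
    p_values = np.array([0.001, 0.012, 0.014, 0.04, 0.09])  # hypothetical
    M = len(p_values)

    # Bonferroni: p_adj = min(M * p, 1), compared against alpha.
    p_adj = np.minimum(M * p_values, 1.0)
    print("adjusted p-values:", p_adj)
    print("rejected:", p_adj < alpha)  # only the smallest p-value survives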
Bonferroni method: criticism
• Counter-intuitive: the interpretation of a finding depends on the number of other tests performed
• High probability of type-II errors: not rejecting the general null hypothesis when important effects exist
“Bonferroni adjustments are, at best, unnecessary and, at worst, deleterious to sound statistical inference”
Perneger (1998)
Holm’s sequential method
• Scales different p-values differently based on their significance
• Order the p-values by their magnitudes: p(1) < p(2) < … < p(M)
• Adjust sequentially: p̃(i) = min((M − i + 1)·p(i), 1)

[Figure: two test statistics t1 and t2; Holm compares the smallest p-value against α/2 and, if it is rejected, the next one against α.]
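A minimal sketch of Holm's adjustment on the same hypothetical p-values as in the Bonferroni sketch; note that it makes more discoveries:

    import numpy as np

    alpha = 0.05
    p_values = np.array([0.001, 0.012, 0.014, 0.04, 0.09])  # hypothetical
    M = len(p_values)

    # Holm: sort ascending, scale the i-th smallest p-value by (M - i + 1),
    # then enforce monotonicity with a running maximum.
    order = np.argsort(p_values)
    scaled = (M - np.arange(M)) * p_values[order]
    adj_sorted = np.minimum(np.maximum.accumulate(scaled), 1.0)
    p_adj = np.empty(M)
    p_adj[order] = adj_sorted
    print("adjusted p-values:", p_adj)
    print("rejected:", p_adj < alpha)  # 3 rejections vs. Bonferroni's 1

(To my knowledge, statsmodels provides both procedures via statsmodels.stats.multitest.multipletests with method='bonferroni' or method='holm'.)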
Holm’s vs. Bonferroni Discoveries
• For M = 10, α = 0.05

[Figure: the ten ordered p-values plotted against their rank (0 to 10 on the horizontal axis), with the unadjusted threshold α = 0.05 and the Bonferroni threshold α/M = 0.005 drawn as horizontal lines. Single tests yield 10 discoveries, Holm 7 discoveries, and Bonferroni 3 discoveries.]
Multiplicity of testing in medical literature
J. P. A. Ioannidis, “Why most published research findings are false,” PLoS Medicine, 2005.
Y. Hochberg and Y. Benjamini, “More powerful procedures for multiple significance testing,” Statistics in Medicine, 1990.
Hypothesis testing or parameter estimation?
• Report confidence intervals on the t-statistic instead of p-values if the numeric values are themselves of interest
The marriage of Statistical Learning and Statistical Inference
Data Generation Process

[Diagram: a data generation process maps inputs X to outputs Y.]

Statistical inference: draw conclusions about the population in order to select better predictors.
Statistical learning: use inference to build a better predictor for Y.
Bringing together Statistical Learning and Statistical Inference
Personalization as a multiplicity of nested tests
• Key idea: clustering is based on inference of subpopulation properties, independent of the classifier and its complexity
• Group homogeneous subpopulations together
• The FWER is now an analog of a PAC confidence bound on the homogeneity of subpopulations!!
Classification algorithms that conduct research
[Diagram: a hierarchical tree of tests. At the root (node 1), a nonparametric t-test for the labels based on feature 1 splits the population; at the next level (nodes 2), a nonparametric t-test for feature 2 splits further (nodes 3). Branches stop splitting where the p-values fail. All data are used for statistical inference!]

Confidence in the clustering = significance level!
Classification algorithms that conduct research
[Diagram: the same tree of tests (nodes 1, 2, 3) applied to Dataset 1, Dataset 2, and Dataset 3.]

How do we control overfitting given the training data and the complexity of the classifiers? For a certain classifier set, prune by a complexity test.