
Model selection and validation 1: Cross-validation

Ryan Tibshirani
Data Mining: 36-462/36-662

March 26 2013

Optional reading: ISL 2.2, 5.1, ESL 7.4, 7.10


Reminder: modern regression techniques

Over the last two lectures we've investigated modern regression techniques. We saw that linear regression has generally low bias (zero bias, when the true model is linear) but high variance, leading to poor predictions. These modern methods introduce some bias but significantly reduce the variance, leading to better predictive accuracy.

Given a response y ∈ R^n and predictors X ∈ R^{n×p}, we can think of these modern methods as constrained least squares:

β̂ = argmin_{β ∈ R^p} ‖y − Xβ‖₂²  subject to  ‖β‖_q ≤ t

• q = 1 gives the lasso (and ‖β‖₁ = Σ_{j=1}^p |β_j|)
• q = 2 gives ridge regression (and ‖β‖₂ = √(Σ_{j=1}^p β_j²))
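For concreteness, here is a minimal sketch of fitting both estimators in R with the glmnet package (the package and the simulated data are assumptions for illustration, not part of these slides). glmnet solves the penalized forms, and it rescales the squared-error loss, so its λ is not directly comparable to the constraint level t:

library(glmnet)

set.seed(1)
n = 50; p = 30
X = matrix(rnorm(n*p), n, p)
beta.star = c(rep(2,10), rep(0,p-10))      # hypothetical sparse truth
y = as.numeric(X %*% beta.star + rnorm(n))

lasso.fit = glmnet(X, y, alpha=1)          # alpha=1: lasso (q=1 penalty)
ridge.fit = glmnet(X, y, alpha=0)          # alpha=0: ridge (q=2 penalty)
coef(lasso.fit, s=0.5)                     # coefficients at lambda=0.5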


Regularization at large

Most other modern methods for regression can be expressed as

β̂ = argmin_{β ∈ R^p} ‖y − Xβ‖₂²  subject to  R(β) ≤ t,

or equivalently,

β̂ = argmin_{β ∈ R^p} ‖y − Xβ‖₂² + λ · R(β).

The term R is called a penalty or regularizer, and modifying the regression problem in this way is called applying regularization.

• Regularization can be applied beyond regression: e.g., it can be applied to classification, clustering, principal component analysis

• Regularization goes beyond sparsity: e.g., design R to induce smoothness or structure, instead of pure sparsity


Example: smoothing splines

Smoothing splines use a form of regularization:

f̂ = argmin_f  Σ_{i=1}^n (y_i − f(x_i))² + λ · R(f),   where R(f) = ∫ (f″(x))² dx

Example with n = 100 points:

[Figure: three smoothing spline fits to the same n = 100 points (x ∈ [0, 1], y ∈ [0, 12]); panels labeled "λ too small", "λ just right", "λ too big"]
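A minimal sketch of producing such fits in R with stats::smooth.spline, on simulated data (the data-generating function is hypothetical, not the one behind the figure). smooth.spline controls the amount of smoothing through spar, a monotone reparametrization of λ:

set.seed(1)
x = sort(runif(100))
y = 8*sin(3*x) + rnorm(100)                # hypothetical smooth truth plus noise

fit.small = smooth.spline(x, y, spar=0.2)  # too little smoothing: wiggly fit
fit.right = smooth.spline(x, y, spar=0.7)  # moderate smoothing
fit.big   = smooth.spline(x, y, spar=1.2)  # too much smoothing: nearly linear

plot(x, y)
lines(predict(fit.right, x), col="red")    # predict() returns a list with x and y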


Choosing a value of the tuning parameter

Each regularization method has an associated tuning parameter: e.g., this was λ in the smoothing spline problem, and λ for lasso and ridge regression in the penalized forms (or t in the constrained forms).

The tuning parameter controls the amount of regularization, so choosing a good value of the tuning parameter is crucial. Because each tuning parameter value corresponds to a fitted model, we also refer to this task as model selection.

What we might consider a good choice of tuning parameter, however, depends on whether our goal is prediction accuracy or recovering the right model for interpretation purposes. We'll cover choosing the tuning parameter for the purposes of prediction; choosing the tuning parameter for the latter purpose is a harder problem.


Prediction error and test error

Our usual setup: we observe

y_i = f(x_i) + ε_i,   i = 1, …, n

where x_i ∈ R^p are fixed (nonrandom) predictor measurements, f is the true function we are trying to predict (think f(x_i) = x_i^T β* for a linear model), and ε_i are random errors.

We call (x_i, y_i), i = 1, …, n the training data. Given an estimator f̂ built on the training data, consider the average prediction error over all inputs

PE(f̂) = E[ (1/n) Σ_{i=1}^n (y′_i − f̂(x_i))² ]

where y′_i = f(x_i) + ε′_i, i = 1, …, n are another set of observations, independent of y_1, …, y_n.


Suppose that f̂ = f̂_θ depends on a tuning parameter θ, and we want to choose θ to minimize PE(f̂_θ).

If we actually had training data y_1, …, y_n and test data y′_1, …, y′_n (meaning that we don't use this to build f̂_θ) in practice, then we could simply use

TestErr(f̂_θ) = (1/n) Σ_{i=1}^n (y′_i − f̂_θ(x_i))²

called the test error, as an estimate for PE(f̂_θ). The larger that n is, the better this estimate.
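A minimal sketch of this computation on simulated data, where an independent test response can actually be drawn (the true function below is hypothetical):

set.seed(1)
n = 100
x = sort(runif(n))
f = function(x) 8*sin(3*x)                 # hypothetical true function
y       = f(x) + rnorm(n)                  # training responses
y.prime = f(x) + rnorm(n)                  # independent test responses, same x's

f.hat = smooth.spline(x, y, spar=0.7)      # built on the training data only
test.err = mean((y.prime - predict(f.hat, x)$y)^2)   # TestErr(f.hat)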

We usually don’t have test data. So what to do instead?


What’s wrong with training error?

It may seem like

TestErr(f̂_θ) = (1/n) Σ_{i=1}^n (y′_i − f̂_θ(x_i))²   and   TrainErr(f̂_θ) = (1/n) Σ_{i=1}^n (y_i − f̂_θ(x_i))²

shouldn't be too different: after all, y_i and y′_i are independent copies of each other. The second quantity is called the training error: this is the error of f̂ as measured by the data we used to build it.

But actually, the training and test error curves are fundamentally different. Why?


Example: smoothing splines (continued)

Back to our smoothing spline example: for a small value of the tuning parameter λ, the training and test errors are very different.

[Figure: left, the fitted spline evaluated on the training data, with training error = 0.221; right, the same fit evaluated on independent test data, with test error = 1.834]


Training and test error curves, averaged over 100 simulations:

[Figure: training and test error curves as functions of λ (log scale, 1e−07 to 1e−01)]


Cross-validation

Cross-validation is a simple, intuitive way to estimate prediction error.

Given training data (x_i, y_i), i = 1, …, n and an estimator f̂_θ, depending on a tuning parameter θ.

Even if θ is a continuous parameter, it's usually not practically feasible to consider all possible values of θ, so we discretize the range and consider choosing θ over some discrete set {θ_1, …, θ_m}.

[Figure: prediction error evaluated at a discrete grid of θ values]


For a number K, we split the training pairs into K parts or "folds" (commonly K = 5 or K = 10).

K-fold cross-validation considers training on all but the kth part, and then validating on the kth part, iterating over k = 1, …, K.

(When K = n, we call this leave-one-out cross-validation, because we leave out one data point at a time.)


K-fold cross-validation procedure:

• Divide the set {1, …, n} into K subsets (i.e., folds) of roughly equal size, F_1, …, F_K
• For k = 1, …, K:
  • Consider training on (x_i, y_i), i ∉ F_k, and validating on (x_i, y_i), i ∈ F_k
  • For each value of the tuning parameter θ ∈ {θ_1, …, θ_m}, compute the estimate f̂_θ^(−k) on the training set, and record the total error on the validation set:

    e_k(θ) = Σ_{i ∈ F_k} (y_i − f̂_θ^(−k)(x_i))²

• For each tuning parameter value θ, compute the average error over all folds,

  CV(θ) = (1/n) Σ_{k=1}^K e_k(θ) = (1/n) Σ_{k=1}^K Σ_{i ∈ F_k} (y_i − f̂_θ^(−k)(x_i))²
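A minimal sketch of this procedure in R, using a smoothing spline with θ = spar as the estimator and simulated data (both assumptions for illustration; the slides' own lasso example follows):

set.seed(1)
n = 100; K = 5
x = sort(runif(n)); y = 8*sin(3*x) + rnorm(n)        # hypothetical data
thetas = seq(0.2, 1.2, length=10)                    # grid of tuning parameter values

folds = split(sample(1:n), rep(1:K, length.out=n))   # K random folds of size n/K

cv = rep(0, length(thetas))
for (k in 1:K) {
  tr = unlist(folds[-k])                             # indices outside fold k
  va = folds[[k]]                                    # indices in fold k
  for (j in 1:length(thetas)) {
    f.hat = smooth.spline(x[tr], y[tr], spar=thetas[j])       # f.hat_theta^(-k)
    cv[j] = cv[j] + sum((y[va] - predict(f.hat, x[va])$y)^2)  # accumulate e_k(theta)
  }
}
cv = cv/n                                            # CV(theta)
theta.hat = thetas[which.min(cv)]                    # minimizer of the CV curve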


Having done this, we get a cross-validation error curve CV(θ) (this curve is a function of θ), e.g.,

[Figure: a cross-validation error curve CV(θ) over the grid of θ values]

and we choose the value of the tuning parameter that minimizes this curve,

θ̂ = argmin_{θ ∈ {θ_1, …, θ_m}} CV(θ)


Example: choosing λ for the lasso

Recall our running example from last time: n = 50, p = 30, and the true model is linear with 10 nonzero coefficients. We consider

β̂^lasso = argmin_{β ∈ R^p} ‖y − Xβ‖₂² + λ‖β‖₁

and use 5-fold cross-validation to choose λ. Sample code for making the folds:

K = 5
i.mix = sample(1:n)
folds = vector(mode="list", length=K)
# divide i.mix into K chunks of size n/K = 10, and
# store each chunk in an element of the folds list
# (one way to do this, assuming K divides n evenly:)
for (k in 1:K) {
  folds[[k]] = i.mix[((k-1)*(n/K)+1):(k*(n/K))]
}

> folds
[[1]]
 [1] 30 29  4  2 50 42 27 25 23 41

[[2]]
 [1] 21 44 45  9 10 26 16  6 24 48

...


We consider the values lam = seq(0,12,length=60). The resulting cross-validation error curve:

[Figure: 5-fold cross-validation error curve over λ ∈ [0, 12]; the minimizing value is λ̂ = 3.458]


How does this compare to the prediction error curve? Because this is a simulation, we can answer this question.

[Figure: left, the CV error curve, minimized at λ̂ = 3.458; right, the true prediction error curve, minimized at λ̂ = 4.068]


Standard errors for cross-validation

One nice thing about K-fold cross-validation (for a small K ≪ n, e.g., K = 5) is that we can estimate the standard deviation of CV(θ), at each θ ∈ {θ_1, …, θ_m}.

First, we just average the validation errors in each fold:

CV_k(θ) = (1/n_k) e_k(θ) = (1/n_k) Σ_{i ∈ F_k} (y_i − f̂_θ^(−k)(x_i))²

where n_k is the number of points in the kth fold.

We then compute the sample standard deviation of CV_1(θ), …, CV_K(θ),

SD(θ) = √(var(CV_1(θ), …, CV_K(θ)))

Finally we estimate the standard deviation of CV(θ), called the standard error of CV(θ), by SE(θ) = SD(θ)/√K.
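Continuing the cross-validation sketch from before (same simulated x, y, folds, and thetas, all assumptions for illustration), the per-fold averages and standard errors might be computed as:

cvk = matrix(0, K, length(thetas))         # cvk[k,j] = CV_k(theta_j)
for (k in 1:K) {
  tr = unlist(folds[-k]); va = folds[[k]]
  for (j in 1:length(thetas)) {
    f.hat = smooth.spline(x[tr], y[tr], spar=thetas[j])
    cvk[k,j] = mean((y[va] - predict(f.hat, x[va])$y)^2)   # e_k(theta)/n_k
  }
}
cv = colMeans(cvk)               # equals (1/n) sum_k e_k(theta) for equal-size folds
se = apply(cvk, 2, sd)/sqrt(K)   # SE(theta) = SD(theta)/sqrt(K)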


Example: choosing λ for the lasso (continued)

The cross-validation error curve from our lasso example, with ± standard errors:

[Figure: lasso CV error curve with ± one standard error bars; the minimizing value is λ̂ = 3.458]


The one standard error rule

The one standard error rule is an alternative rule for choosing the value of the tuning parameter, as opposed to the usual rule

θ̂ = argmin_{θ ∈ {θ_1, …, θ_m}} CV(θ)

We first find the usual minimizer θ̂ as above, and then move θ in the direction of increasing regularization (this may be increasing or decreasing θ, depending on the parametrization) as much as we can, such that the cross-validation error curve is still within one standard error of CV(θ̂). In other words, we maintain

CV(θ) ≤ CV(θ̂) + SE(θ̂)

Idea: "All else equal (up to one standard error), go for the simpler (more regularized) model"
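Given the cv and se vectors from the sketch above, and assuming that larger θ means more regularization (which depends on the parametrization), the rule might be implemented as:

i.min = which.min(cv)                     # usual rule
ok = which(cv <= cv[i.min] + se[i.min])   # within one standard error of the minimum
i.1se = max(ok[ok >= i.min])              # most regularized such theta
theta.hat.1se = thetas[i.1se]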


Example: choosing λ for the lasso (continued)

Back to our lasso example: the one standard error rule chooses a larger value of λ (more regularization):

[Figure: lasso CV error curve with standard error bars; usual rule: λ̂ = 3.458, one standard error rule: λ̂ = 6.305]
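As an aside, the glmnet package (an assumption, not used in these slides) automates this whole pipeline for the lasso; with X and y holding the simulated predictors and response, both rules come out of one call:

library(glmnet)
cv.fit = cv.glmnet(X, y, nfolds=5)   # 5-fold CV for the lasso (alpha=1 by default)
cv.fit$lambda.min                    # usual rule: minimizer of the CV curve
cv.fit$lambda.1se                    # one standard error rule
plot(cv.fit)                         # CV curve with +/- one standard error bars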


What happens if we really shouldn't be shrinking in the first place? We'd like cross-validation, our automated tuning parameter selection procedure, to choose a small value of λ.

Recall our other example from last time: n = 50, p = 30, and the true model is linear with all moderately large coefficients.

[Figure: left, the CV error curve, minimized at λ̂ = 0.407; right, the true prediction error curve, also minimized at λ̂ = 0.407]


The standard errors here reflect uncertainty about the shape of the CV curve for large λ.

[Figure: CV error curve with standard error bars; usual rule: λ̂ = 0.407, one standard error rule: λ̂ = 1.017]

The one standard error rule doesn't move much from the minimum.


Prediction versus interpretation

Remember: cross-validation error estimates prediction error at any fixed value of the tuning parameter, and so by using it, we are implicitly assuming that achieving minimal prediction error is our goal.

Choosing λ for the goal of recovering the true model, for the sake of interpretation, is somewhat of a different task. The value of the parameter that achieves the smallest cross-validation error often corresponds to not enough regularization for these purposes. But the one standard error rule is a step in the right direction.

There are other (often more complicated) schemes for selecting a value of the tuning parameter for the purposes of true model recovery.


Recap: cross-validation

Training error, the error of an estimator as measured by the data used to fit it, is not a good surrogate for prediction error. It just keeps decreasing with increasing model complexity.

Cross-validation, on the other hand, much more accurately reflects prediction error. If we want to choose a value for the tuning parameter of a generic estimator (and minimizing prediction error is our goal), then cross-validation is the standard tool.

We usually pick the tuning parameter that minimizes the cross-validation error curve, but we can also employ the one standard error rule, which helps us pick a more regularized model.


Next time: model assessment, more cross-validation

How would we do cross-validation for a structured problem like that solved by smoothing splines?

[Figure: the n = 100 smoothing spline data from before, x ∈ [0, 1], y ∈ [0, 12]]