
ECE595 / STAT598: Machine Learning I
Lecture 15: Logistic Regression 2

Stanley Chan
School of Electrical and Computer Engineering
Purdue University

Spring 2020

© Stanley Chan 2020. All Rights Reserved.


Overview

In linear discriminant analysis (LDA), there are generally two types of approaches:

Generative approach: Estimate model, then define the classifier

Discriminative approach: Directly define the classifier


Outline

Discriminative Approaches
  Lecture 14: Logistic Regression 1
  Lecture 15: Logistic Regression 2

This lecture: Logistic Regression 2
  Gradient Descent
    Convexity
    Gradient
    Regularization
  Connection with Bayes
    Derivation
    Interpretation
  Comparison with Linear Regression
    Is logistic regression better than linear?
    Case studies


From Linear to Logistic Regression

Can we replace g(x) by sign(g(x))?

How about a soft version of sign(g(x))?

This gives logistic regression, with the sigmoid hypothesis h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}.

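As a concrete sketch (our own, not from the slides), the "soft sign" is just the sigmoid applied to the linear score; the names sigmoid and h_theta are ours:

    import numpy as np

    def sigmoid(z):
        # Soft version of sign(z): maps any real score smoothly into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def h_theta(theta, X):
        # Logistic hypothesis h_theta(x) = sigmoid(theta^T x), one value per row of X.
        return sigmoid(X @ theta)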


Logistic Regression and Deep Learning

Logistic regression can be considered as the last layer of a deep network.

Inputs are x_n, weights are w. The sigmoid function is the nonlinear activation.

To train the model, you compare the prediction with the label and minimize the loss by updating the weights.



Training Loss Function

J(\theta) = \sum_{n=1}^{N} L(h_\theta(x_n), y_n)
          = \sum_{n=1}^{N} -\left\{ y_n \log h_\theta(x_n) + (1 - y_n) \log(1 - h_\theta(x_n)) \right\}

This is called the cross-entropy loss

Consider two cases:

y_n \log h_\theta(x_n) =
\begin{cases} 0, & \text{if } y_n = 1 \text{ and } h_\theta(x_n) = 1, \\ -\infty, & \text{if } y_n = 1 \text{ and } h_\theta(x_n) = 0, \end{cases}

(1 - y_n) \log(1 - h_\theta(x_n)) =
\begin{cases} 0, & \text{if } y_n = 0 \text{ and } h_\theta(x_n) = 0, \\ -\infty, & \text{if } y_n = 0 \text{ and } h_\theta(x_n) = 1. \end{cases}

Each term is zero when the prediction matches the label; the loss is driven to +\infty when the prediction completely mismatches the label.
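As a sanity-check sketch (ours, not the slides'), the loss can be evaluated directly; the small eps only guards the log against the 0 and 1 cases discussed above:

    import numpy as np

    def cross_entropy(theta, X, y, eps=1e-12):
        # J(theta) = -sum_n [ y_n log h(x_n) + (1 - y_n) log(1 - h(x_n)) ]
        h = 1.0 / (1.0 + np.exp(-X @ theta))
        return -np.sum(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))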


Convexity of Logistic Training Loss

Recall that

J(\theta) = \sum_{n=1}^{N} -\left\{ y_n \log\left( \frac{h_\theta(x_n)}{1 - h_\theta(x_n)} \right) + \log(1 - h_\theta(x_n)) \right\}

The first term is linear in \theta (since \log \frac{h_\theta(x)}{1 - h_\theta(x)} = \theta^T x), so it is convex.

The second term: its gradient is

\nabla_\theta [-\log(1 - h_\theta(x))]
  = -\nabla_\theta \left[ \log\left( 1 - \frac{1}{1 + e^{-\theta^T x}} \right) \right]
  = -\nabla_\theta \left[ \log \frac{e^{-\theta^T x}}{1 + e^{-\theta^T x}} \right]
  = -\nabla_\theta \left[ \log e^{-\theta^T x} - \log(1 + e^{-\theta^T x}) \right]
  = -\nabla_\theta \left[ -\theta^T x - \log(1 + e^{-\theta^T x}) \right]
  = x + \nabla_\theta \left[ \log(1 + e^{-\theta^T x}) \right]
  = x + \left( \frac{-e^{-\theta^T x}}{1 + e^{-\theta^T x}} \right) x
  = h_\theta(x)\, x.
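A quick finite-difference check of this closed form (our own sketch; nothing here is from the slides):

    import numpy as np

    def h(theta, x):
        return 1.0 / (1.0 + np.exp(-theta @ x))

    def grad_second_term(theta, x):
        # Closed form from the slide: gradient of -log(1 - h_theta(x)) is h_theta(x) * x.
        return h(theta, x) * x

    rng = np.random.default_rng(0)
    theta, x = rng.normal(size=3), rng.normal(size=3)
    f = lambda t: -np.log(1.0 - h(t, x))
    eps = 1e-6
    numeric = np.array([
        (f(theta + eps * np.eye(3)[i]) - f(theta - eps * np.eye(3)[i])) / (2 * eps)
        for i in range(3)
    ])
    print(np.allclose(numeric, grad_second_term(theta, x), atol=1e-5))  # expect True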


Convexity of Logistic Training Loss

Gradient of second term is

\nabla_\theta [-\log(1 - h_\theta(x))] = h_\theta(x)\, x.

Hessian is:

\nabla_\theta^2 [-\log(1 - h_\theta(x))] = \nabla_\theta [h_\theta(x)\, x]
  = \nabla_\theta \left[ \left( \frac{1}{1 + e^{-\theta^T x}} \right) x \right]
  = \left( \frac{e^{-\theta^T x}}{(1 + e^{-\theta^T x})^2} \right) x x^T
  = \left( \frac{1}{1 + e^{-\theta^T x}} \right) \left( 1 - \frac{1}{1 + e^{-\theta^T x}} \right) x x^T
  = h_\theta(x)\,[1 - h_\theta(x)]\, x x^T.
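A small numeric check that this Hessian formula is positive semi-definite (our own sketch, with arbitrary random values):

    import numpy as np

    rng = np.random.default_rng(1)
    theta, x = rng.normal(size=4), rng.normal(size=4)
    h = 1.0 / (1.0 + np.exp(-theta @ x))
    H = h * (1.0 - h) * np.outer(x, x)               # h_theta(x)[1 - h_theta(x)] x x^T
    print(np.all(np.linalg.eigvalsh(H) >= -1e-12))   # all eigenvalues nonnegative, so PSD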


Convexity of Logistic Training Loss

For any v \in \mathbb{R}^d, we have that

v^T \nabla_\theta^2 [-\log(1 - h_\theta(x))]\, v = v^T \left[ h_\theta(x)[1 - h_\theta(x)]\, x x^T \right] v
  = h_\theta(x)[1 - h_\theta(x)]\, (v^T x)^2 \ge 0.

Therefore the Hessian is positive semi-definite.

So -\log(1 - h_\theta(x)) is convex in \theta.

Conclusion: The training loss function

J(\theta) = \sum_{n=1}^{N} -\left\{ y_n \log\left( \frac{h_\theta(x_n)}{1 - h_\theta(x_n)} \right) + \log(1 - h_\theta(x_n)) \right\}

is convex in \theta.

So we can use convex optimization algorithms to find \theta.


Convex Optimization for Logistic Regression

We can use CVX to solve the logistic regression problem

But it requires some re-organization of the equations

J(\theta) = \sum_{n=1}^{N} -\left\{ y_n \theta^T x_n + \log(1 - h_\theta(x_n)) \right\}
  = \sum_{n=1}^{N} -\left\{ y_n \theta^T x_n + \log\left( 1 - \frac{e^{\theta^T x_n}}{1 + e^{\theta^T x_n}} \right) \right\}
  = \sum_{n=1}^{N} -\left\{ y_n \theta^T x_n - \log\left( 1 + e^{\theta^T x_n} \right) \right\}
  = -\left( \sum_{n=1}^{N} y_n x_n \right)^T \theta + \sum_{n=1}^{N} \log\left( 1 + e^{\theta^T x_n} \right).

The last term is a sum of log-sum-exp terms: \log(e^0 + e^{\theta^T x_n}).
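The slides use CVX (MATLAB); as a rough Python analogue (our own sketch with made-up toy data, using CVXPY), the log-sum-exp structure maps onto cp.logistic(z) = log(1 + e^z). The classes here are chosen to overlap so the unregularized fit is well-posed:

    import numpy as np
    import cvxpy as cp

    # Toy 1-D data with an intercept column: class 0 ~ N(0, 2), class 1 ~ N(3, 2).
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 2, 50), rng.normal(3, 2, 50)])
    y = np.concatenate([np.zeros(50), np.ones(50)])
    X = np.column_stack([np.ones_like(x), x])

    theta = cp.Variable(2)
    z = X @ theta
    # J(theta) = -sum_n y_n theta^T x_n + sum_n log(1 + exp(theta^T x_n))
    J = -cp.sum(cp.multiply(y, z)) + cp.sum(cp.logistic(z))
    cp.Problem(cp.Minimize(J)).solve()
    print(theta.value)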



Convex Optimization for Logistic Regression

[Figure: fitted vs. true sigmoid on synthetic 1-D data; legend: Data, Estimated, True.]

Black: the true model. You create it.

Blue circles: samples drawn from the true distribution.

Red: trained model from the samples.


Gradient Descent for Logistic Regression

The training loss function is

J(\theta) = \sum_{n=1}^{N} -\left\{ y_n \theta^T x_n + \log(1 - h_\theta(x_n)) \right\}.

Recall that \nabla_\theta [-\log(1 - h_\theta(x))] = h_\theta(x)\, x.

You can run gradient descent

\theta^{(k+1)} = \theta^{(k)} - \alpha_k \nabla_\theta J(\theta^{(k)})
              = \theta^{(k)} - \alpha_k \sum_{n=1}^{N} \left( h_{\theta^{(k)}}(x_n) - y_n \right) x_n.

Since the loss function is convex, gradient descent is guaranteed to find the global minimum.
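A minimal NumPy sketch of this update (our own illustration; the function name, step size, and iteration count are ours, and alpha must be small enough for the sum-form gradient):

    import numpy as np

    def logistic_gd(X, y, alpha=1e-3, iters=5000):
        # theta^{k+1} = theta^k - alpha * sum_n (h_theta(x_n) - y_n) x_n
        theta = np.zeros(X.shape[1])
        for _ in range(iters):
            h = 1.0 / (1.0 + np.exp(-X @ theta))   # h_theta(x_n) for every row of X
            theta = theta - alpha * X.T @ (h - y)  # gradient of the cross-entropy loss
        return theta

In practice the step size is often applied to the mean gradient (divide by N) so that its scale does not depend on the sample count.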


Regularization in Logistic Regression

The loss function is

J(\theta) = \sum_{n=1}^{N} -\left\{ y_n \theta^T x_n + \log(1 - h_\theta(x_n)) \right\}
  = \sum_{n=1}^{N} -\left\{ y_n \theta^T x_n + \log\left( 1 - \frac{1}{1 + e^{-\theta^T x_n}} \right) \right\}

What if h_\theta(x_n) = 1? (We need \theta^T x_n = \infty.)

Then we have \log(1 - 1) = \log 0, which is -\infty.

Same thing happens in the equivalent form

J(\theta) = -\left( \sum_{n=1}^{N} y_n x_n \right)^T \theta + \sum_{n=1}^{N} \log\left( 1 + e^{\theta^T x_n} \right).

When \theta^T x_n \to \infty, we have \log(\infty).


Regularization in Logistic Regression

Example: two classes, N(0, 1) and N(10, 1).

Run CVX:

[Figure: CVX fit without regularization; annotation: NaN for y_n = 1.]



Regularization in Logistic Regression

Add a small regularization

J(\theta) = -\left( \sum_{n=1}^{N} y_n x_n \right)^T \theta + \sum_{n=1}^{N} \log\left( 1 + e^{\theta^T x_n} \right) + \lambda \|\theta\|^2.

Re-run the same CVX program

[Figure: CVX fit with the regularization term added.]

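Continuing the earlier CVXPY sketch (again our own illustration, not the slides' CVX code), the ridge term is one extra expression; with it, the minimizer stays finite even on separable data such as the N(0, 1) vs N(10, 1) example:

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 50), rng.normal(10, 1, 50)])  # separable classes
    y = np.concatenate([np.zeros(50), np.ones(50)])
    X = np.column_stack([np.ones_like(x), x])

    lam = 1e-3                       # the lambda in the regularized objective
    theta = cp.Variable(2)
    z = X @ theta
    J = -cp.sum(cp.multiply(y, z)) + cp.sum(cp.logistic(z)) + lam * cp.sum_squares(theta)
    cp.Problem(cp.Minimize(J)).solve()
    print(theta.value)               # finite, thanks to the lambda * ||theta||^2 term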


Regularization in Logistic Regression

If you make λ really really small ...

J(\theta) = -\left( \sum_{n=1}^{N} y_n x_n \right)^T \theta + \sum_{n=1}^{N} \log\left( 1 + e^{\theta^T x_n} \right) + \lambda \|\theta\|^2.

Re-run the same CVX program

[Figure: CVX fit with a very small λ.]



Try This Online Exercise

Classify two digits in the MNIST dataset

http://ufldl.stanford.edu/tutorial/supervised/LogisticRegression/


Outline

Discriminative Approaches
  Lecture 14: Logistic Regression 1
  Lecture 15: Logistic Regression 2

This lecture: Logistic Regression 2
  Gradient Descent
    Convexity
    Gradient
    Regularization
  Connection with Bayes
    Derivation
    Interpretation
  Comparison with Linear Regression
    Is logistic regression better than linear?
    Case studies


Connection with Bayes

The likelihood is

p(x|i) = \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left\{ -\frac{1}{2} (x - \mu_i)^T \Sigma^{-1} (x - \mu_i) \right\}

The prior is p_Y(i) = \pi_i.

The posterior is

p(1|x) = \frac{p(x|1)\, p_Y(1)}{p(x|1)\, p_Y(1) + p(x|0)\, p_Y(0)}
  = \frac{1}{1 + \frac{p(x|0)\, p_Y(0)}{p(x|1)\, p_Y(1)}}
  = \frac{1}{1 + \exp\left\{ -\log\left( \frac{p(x|1)\, p_Y(1)}{p(x|0)\, p_Y(0)} \right) \right\}}
  = \frac{1}{1 + \exp\left\{ -\log\left( \frac{\pi_1}{\pi_0} \right) - \log\left( \frac{p(x|1)}{p(x|0)} \right) \right\}}.


Connection with Bayes

We can show that the last term is

\log\left( \frac{p(x|1)}{p(x|0)} \right)
  = \log \frac{ \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left\{ -\frac{1}{2}(x - \mu_1)^T \Sigma^{-1} (x - \mu_1) \right\} }{ \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left\{ -\frac{1}{2}(x - \mu_0)^T \Sigma^{-1} (x - \mu_0) \right\} }
  = -\frac{1}{2} \left[ (x - \mu_1)^T \Sigma^{-1} (x - \mu_1) - (x - \mu_0)^T \Sigma^{-1} (x - \mu_0) \right]
  = (\mu_1 - \mu_0)^T \Sigma^{-1} x - \frac{1}{2} \left( \mu_1^T \Sigma^{-1} \mu_1 - \mu_0^T \Sigma^{-1} \mu_0 \right).

Let us define

w = \Sigma^{-1} (\mu_1 - \mu_0)
w_0 = -\frac{1}{2} \left( \mu_1^T \Sigma^{-1} \mu_1 - \mu_0^T \Sigma^{-1} \mu_0 \right) + \log\left( \frac{\pi_1}{\pi_0} \right)


Connection with Bayes

Then,

\log\left( \frac{p(x|1)}{p(x|0)} \right) = (\mu_1 - \mu_0)^T \Sigma^{-1} x - \frac{1}{2} \left( \mu_1^T \Sigma^{-1} \mu_1 - \mu_0^T \Sigma^{-1} \mu_0 \right)
  = w^T x + w_0 - \log(\pi_1/\pi_0).

Therefore,

p(1|x) = \frac{1}{1 + \exp\left\{ -\log\left( \frac{\pi_1}{\pi_0} \right) - \log\left( \frac{p(x|1)}{p(x|0)} \right) \right\}}
  = \frac{1}{1 + \exp\{ -(w^T x + w_0) \}} = h_\theta(x).

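To make the identification concrete, here is a small numeric check (our own sketch, assuming NumPy and SciPy are available; the specific means, covariance, and priors are arbitrary):

    import numpy as np
    from scipy.stats import multivariate_normal

    # Check that the Gaussian posterior equals the sigmoid of a linear function of x.
    mu0, mu1 = np.array([0.0, 0.0]), np.array([3.0, 1.0])
    Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
    pi0, pi1 = 0.4, 0.6

    Sinv = np.linalg.inv(Sigma)
    w = Sinv @ (mu1 - mu0)
    w0 = -0.5 * (mu1 @ Sinv @ mu1 - mu0 @ Sinv @ mu0) + np.log(pi1 / pi0)

    x = np.array([1.5, -0.7])
    p1 = pi1 * multivariate_normal.pdf(x, mu1, Sigma)
    p0 = pi0 * multivariate_normal.pdf(x, mu0, Sigma)
    posterior = p1 / (p0 + p1)
    sigmoid = 1.0 / (1.0 + np.exp(-(w @ x + w0)))
    print(np.isclose(posterior, sigmoid))  # expect True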


Connection with Bayes

The hypothesis function is the posterior distribution

p_{Y|X}(1|x) = \frac{1}{1 + \exp\{ -(w^T x + w_0) \}} = h_\theta(x)
p_{Y|X}(0|x) = \frac{\exp\{ -(w^T x + w_0) \}}{1 + \exp\{ -(w^T x + w_0) \}} = 1 - h_\theta(x).

So logistic regression offers probabilistic reasoning, which linear regression does not.

This holds when the two classes share the same covariance; it is not true when the covariances are different.

Remark: If the covariances are different, the Bayes classifier becomes a quadratic classifier.


Outline

Discriminative Approaches
  Lecture 14: Logistic Regression 1
  Lecture 15: Logistic Regression 2

This lecture: Logistic Regression 2
  Gradient Descent
    Convexity
    Gradient
    Regularization
  Connection with Bayes
    Derivation
    Interpretation
  Comparison with Linear Regression
    Is logistic regression better than linear?
    Case studies


Is Logistic Regression Better than Linear?

This is taken from the Internet

Is that true???


Is Logistic Regression Better than Linear?

Scenario 1: Identical Covariance. Equal Prior. Enough samples.

N(0, 1) with 100 samples and N(10, 1) with 100 samples.

Linear and logistic: Not much different.

[Figure: Scenario 1 comparison; legend: Bayes oracle, Bayes empirical, lin reg, lin reg decision, log reg, log reg decision, true samples, training samples.]

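A rough reconstruction of this scenario (our own sketch, not the slides' code; the sample draws, step size, and iteration count are arbitrary choices):

    import numpy as np

    # Scenario 1: N(0,1) vs N(10,1), 100 samples each; labels y in {0, 1}.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 100), rng.normal(10, 1, 100)])
    y = np.concatenate([np.zeros(100), np.ones(100)])
    X = np.column_stack([np.ones_like(x), x])

    # Linear regression: least-squares fit of y on [1, x]; decide at fitted value 0.5.
    theta_lin = np.linalg.lstsq(X, y, rcond=None)[0]
    thresh_lin = (0.5 - theta_lin[0]) / theta_lin[1]

    # Logistic regression via gradient descent (mean gradient for a stable step size).
    theta_log = np.zeros(2)
    for _ in range(20000):
        h = 1.0 / (1.0 + np.exp(-X @ theta_log))
        theta_log -= 0.05 * X.T @ (h - y) / len(y)
    thresh_log = -theta_log[0] / theta_log[1]

    print(thresh_lin, thresh_log)

Both decision boundaries should land near the midpoint x = 5, consistent with the "not much different" observation.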


The False Sense of Good Fitting

Scenario 2: Identical Covariance. Equal Prior. Not a lot of samples.

N(0, 2) with 10 samples and N(10, 2) with 10 samples.

Linear and logistic: Not much different.

[Figure: Scenario 2 comparison; legend: Bayes oracle, Bayes empirical, lin reg, lin reg decision, log reg, log reg decision, true samples, training samples.]



Is Logistic Regression Better than Linear?

Scenario 3: Different Covariance. Equal Prior.

N(0, 2) with 50 samples and N(10, 0.2) with 50 samples.

Linear and logistic: Equally bad.

[Figure: Scenario 3 comparison; legend: Bayes oracle, Bayes empirical, lin reg, lin reg decision, log reg, log reg decision, true samples, training samples.]



Is Logistic Regression Better than Linear?

Scenario 4: Identical Covariance. Unequal Prior.

Training size proportional to prior: 180 samples and 20 samples.

N(0, 1) with π0 = 0.9 and N(10, 1) with π1 = 0.1.

Linear and logistic: Not much different.

[Figure: Scenario 4 comparison; legend: Bayes oracle, Bayes empirical, lin reg, lin reg decision, log reg, log reg decision, true samples, training samples.]



So what can we say about Logistic Regression?

Logistic regression empowers a discriminative method with probabilistic reasoning.

The hypothesis function is the posterior probability

p(1|x) = \frac{1}{1 + \exp\{ -(w^T x + w_0) \}} = h_\theta(x)
p(0|x) = \frac{\exp\{ -(w^T x + w_0) \}}{1 + \exp\{ -(w^T x + w_0) \}} = 1 - h_\theta(x).

Logistic regression is yet another special case of the Bayesian classifier.

It has more or less the same performance as linear regression.

Logistic regression can give lower training error, which looks better on plots, but its generalization is similar to linear regression.



Reading List

Logistic Regression (Machine Learning Perspective)

Chris Bishop's Pattern Recognition and Machine Learning, Chapter 4.3

Hastie, Tibshirani, and Friedman's Elements of Statistical Learning, Chapter 4.4

Stanford CS 229, Discriminant Algorithms: http://cs229.stanford.edu/notes/cs229-notes1.pdf

CMU lecture: https://www.stat.cmu.edu/~cshalizi/uADA/12/lectures/ch12.pdf

Stanford Speech and Language Processing: https://web.stanford.edu/~jurafsky/slp3/ (Lecture 5)

Logistic Regression (Statistics Perspective)

Duke lecture: https://www2.stat.duke.edu/courses/Spring13/sta102.001/Lec/Lec20.pdf

Princeton lecture: https://data.princeton.edu/wws509/notes/c3.pdf