
Page 1: Taming the Learning Zoo

TAMING THE LEARNING ZOO

Page 2: Taming the Learning Zoo


SUPERVISED LEARNING ZOO

Bayesian learning (find parameters of a probabilistic model)
  Maximum likelihood
  Maximum a posteriori

Classification
  Decision trees (discrete attributes, few relevant)
  Support vector machines (continuous attributes)

Regression
  Least squares (known structure, easy to interpret)
  Neural nets (unknown structure, hard to interpret)

Nonparametric approaches
  k-Nearest-Neighbors
  Locally-weighted averaging / regression

Page 3: Taming the Learning Zoo

AGENDA

Quantifying learner performance
  Cross-validation
  Error vs. loss
  Confusion matrix
  Precision & recall

Computational learning theory

Page 4: Taming the Learning Zoo

CROSS-VALIDATION

Page 5: Taming the Learning Zoo

ASSESSING PERFORMANCE OF A LEARNING ALGORITHM

Fresh samples from X are typically unavailable, so:
  Take out some of the training set
  Train on the remaining training set
  Test on the excluded instances

This is cross-validation; a minimal holdout sketch follows.
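
A sketch of the holdout procedure just described (the train() function, which fits a hypothesis h on a list of (x, y) examples, and the 0/1 error are illustrative assumptions, not from the slides):

    import random

    def holdout_error(examples, train, frac_test=0.25, seed=0):
        # Hold out a fraction of the data, train on the rest,
        # and report the error rate on the excluded instances.
        rng = random.Random(seed)
        data = examples[:]
        rng.shuffle(data)
        n_test = max(1, int(len(data) * frac_test))
        test, rest = data[:n_test], data[n_test:]
        h = train(rest)  # train on the remaining training set
        return sum(1 for x, y in test if h(x) != y) / len(test)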

Page 6: Taming the Learning Zoo

CROSS-VALIDATION

Split the original set of examples and train.

[Figure: the examples D, a mix of + and - instances, are split; the training portion is used to pick a hypothesis from hypothesis space H]

Page 7: Taming the Learning Zoo

CROSS-VALIDATION

Evaluate the hypothesis on the testing set.

[Figure: the hypothesis from hypothesis space H is applied to the held-out testing set of + and - instances]

Page 8: Taming the Learning Zoo

CROSS-VALIDATION

Evaluate the hypothesis on the testing set.

[Figure: the test step, showing the testing set's + and - instances as labeled by the hypothesis]

Page 9: Taming the Learning Zoo

CROSS-VALIDATION

Compare the true concept against the prediction: 9/13 correct.

[Figure: true labels vs. predicted labels on the testing set]

Page 10: Taming the Learning Zoo

COMMON SPLITTING STRATEGIES

k-fold cross-validation
Leave-one-out (n-fold cross-validation)

[Figure: the dataset divided into train and test portions for each fold]
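
A minimal sketch of k-fold cross-validation under the same assumptions as before (a generic train() returning a hypothesis h, and 0/1 error):

    def k_fold_cv(examples, train, k=5):
        # k-fold cross-validation: each fold serves as the test set once,
        # while the other k-1 folds form the training set.
        # Setting k = n gives leave-one-out.
        folds = [examples[i::k] for i in range(k)]  # round-robin split into k folds
        errs = []
        for i in range(k):
            test = folds[i]
            training = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
            h = train(training)
            errs.append(sum(1 for x, y in test if h(x) != y) / len(test))
        return sum(errs) / k  # the average result is what gets reported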

Page 11: Taming the Learning Zoo

COMPUTATIONAL COMPLEXITY

k-fold cross-validation requires:
  k training steps on n(k-1)/k datapoints each
  k testing steps on n/k datapoints each

(There are efficient ways of computing leave-one-out estimates for some nonparametric techniques, e.g. nearest neighbors.)

The average of the k results is reported.

Page 12: Taming the Learning Zoo

BOOTSTRAPPING

A similar technique for estimating the confidence in the model parameters θ.

Procedure:
  1. Draw k hypothetical datasets from the original data, either via cross-validation or by sampling with replacement.
  2. Fit the model to each dataset, obtaining parameters θ_1, …, θ_k.
  3. Return the standard deviation of θ_1, …, θ_k (or a confidence interval).

Can also estimate confidence in a prediction y = f(x).

Page 13: Taming the Learning Zoo

EXAMPLE: AVERAGE OF N NUMBERS

Data D = {x^(1), …, x^(N)}; the model is a constant θ.
Learning: minimize E(θ) = Σ_i (x^(i) − θ)^2 => compute the average.

Repeat for j = 1, …, k:
  Randomly sample a subset x^(1)', …, x^(N)' from D
  Learn θ_j = (1/N) Σ_i x^(i)'

Return the histogram of θ_1, …, θ_k.

[Figure: bootstrap average with lower and upper range vs. |Data set| from 10 to 10000; vertical axis spans 0.44 to 0.56]
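
A minimal sketch of this bootstrap-of-the-mean experiment, using sampling with replacement (parameter names are illustrative):

    import random
    import statistics

    def bootstrap_mean(D, k=1000, seed=0):
        # Draw k hypothetical datasets from D by sampling with replacement,
        # fit the constant model (the average) to each, and summarize the
        # spread of the k fitted parameters theta_1, ..., theta_k.
        rng = random.Random(seed)
        thetas = [statistics.mean(rng.choices(D, k=len(D))) for _ in range(k)]
        return statistics.mean(thetas), statistics.stdev(thetas)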

Page 14: Taming the Learning Zoo


BEYOND ERROR RATES

Page 15: Taming the Learning Zoo

BEYOND ERROR RATE

Predicting security risk: predicting "low risk" for a terrorist is far worse than predicting "high risk" for an innocent bystander (but maybe not 5 million of them).

Searching for images: returning irrelevant images is worse than omitting relevant ones.


Page 16: Taming the Learning Zoo

BIASED SAMPLE SETS

Often there are orders of magnitude more negative examples than positive, e.g., all images of Mark Wilson on Facebook. If I classify all images as "not Mark" I'll have >99.99% accuracy.

Examples of Mark should count much more than non-Mark!

Page 17: Taming the Learning Zoo

FALSE POSITIVES

[Figure: feature space (x1, x2) showing the true concept vs. the learned concept]

Page 18: Taming the Learning Zoo

FALSE POSITIVES

[Figure: feature space (x1, x2) with the true concept, the learned concept, and a new query: an example incorrectly predicted to be positive]

Page 19: Taming the Learning Zoo

FALSE NEGATIVES

[Figure: feature space (x1, x2) with the true concept, the learned concept, and a new query: an example incorrectly predicted to be negative]

Page 20: Taming the Learning Zoo

PRECISION VS. RECALL

Precision: # of relevant documents retrieved / # of total documents retrieved
Recall: # of relevant documents retrieved / # of total relevant documents

Both are numbers between 0 and 1.


Page 21: Taming the Learning Zoo

PRECISION VS. RECALL

Precision: # of true positives / (# true positives + # false positives)
Recall: # of true positives / (# true positives + # false negatives)

A precise classifier is selective; a classifier with high recall is inclusive.

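A minimal sketch computing both quantities from parallel lists of predicted and actual 0/1 labels (illustrative; the convention here is that an empty denominator yields 1.0):

    def precision_recall(predicted, actual):
        tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
        fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
        fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
        precision = tp / (tp + fp) if tp + fp else 1.0  # selective: few false positives
        recall = tp / (tp + fn) if tp + fn else 1.0     # inclusive: few false negatives
        return precision, recall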

Page 22: Taming the Learning Zoo

OPTION 1: CLASSIFICATION THRESHOLDS

Many learning algorithms (e.g., linear models, neural nets, Bayes nets, SVMs) give a real-valued output v(x) that needs thresholding for classification:

  v(x) > t => positive label given to x
  v(x) < t => negative label given to x

We may want to tune the threshold t to get fewer false positives or false negatives.

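A minimal sketch of threshold tuning, reusing precision_recall from the sketch above; sweeping t over the observed scores traces out the precision-recall curves on the later slides:

    def sweep_thresholds(scores, labels):
        # scores: real-valued outputs v(x); labels: true 0/1 labels.
        curve = []
        for t in sorted(set(scores)):
            predicted = [1 if v > t else 0 for v in scores]  # v(x) > t => positive
            curve.append((t,) + precision_recall(predicted, labels))
        return curve  # list of (threshold, precision, recall)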

Page 23: Taming the Learning Zoo

REDUCING FALSE POSITIVE RATE

[Figure: feature space (x1, x2) showing the true concept and a learned concept adjusted to produce fewer false positives]

Page 24: Taming the Learning Zoo

REDUCING FALSE NEGATIVE RATE

[Figure: feature space (x1, x2) showing the true concept and a learned concept adjusted to produce fewer false negatives]

Page 25: Taming the Learning Zoo

LOSS FUNCTIONS & WEIGHTED DATASETS

General learning problem: "Given data D and loss function L, find the best hypothesis from hypothesis class H."

Loss functions: L contains weights to favor accuracy on positive or negative examples, e.g., L = 10·E+ + 1·E− (where E+ and E− are the errors on positive and negative examples).

Weighted datasets: attach a weight w to each example to indicate how important it is. Or construct a resampled dataset D' where each example is duplicated proportionally to its w.
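
Two minimal sketches of these options (names are illustrative; the 10-to-1 weighting matches the example loss above):

    def weighted_error(h, examples, w_pos=10.0, w_neg=1.0):
        # Weighted loss L = 10*E+ + 1*E-: an error on a positive example
        # costs w_pos, an error on a negative example costs w_neg.
        return sum((w_pos if y == 1 else w_neg)
                   for x, y in examples if h(x) != y)

    def resample_by_weight(examples, weights):
        # Construct D' in which each example is duplicated in proportion
        # to its (integer) weight, so an unweighted learner sees the weights.
        return [ex for ex, w in zip(examples, weights) for _ in range(int(w))]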

Page 26: Taming the Learning Zoo

PRECISION-RECALL CURVES

Measure precision vs. recall as the tolerance (or weighting) is tuned.

[Figure: precision-recall plot contrasting a perfect classifier with actual performance]

Page 27: Taming the Learning Zoo

PRECISION-RECALL CURVES

Measure precision vs. recall as the tolerance (or weighting) is tuned.

[Figure: points along the curve corresponding to penalizing false negatives, equal weight, and penalizing false positives]

Page 28: Taming the Learning Zoo

PRECISION-RECALL CURVES

Measure precision vs. recall as the tolerance (or weighting) is tuned.

[Figure: precision-recall curve]

Page 29: Taming the Learning Zoo

PRECISION-RECALL CURVES

Measure precision vs. recall as the tolerance (or weighting) is tuned.

[Figure: two precision-recall curves; the higher one indicates better learning performance]

Page 30: Taming the Learning Zoo

MODEL SELECTION

Page 31: Taming the Learning Zoo

COMPLEXITY VS. GOODNESS OF FIT

More complex models can fit the data better, but can overfit.

Model selection: enumerate several possible hypothesis classes of increasing complexity; stop when the cross-validated error levels off.

Regularization: explicitly define a metric of complexity and penalize it in addition to the loss.

Page 32: Taming the Learning Zoo

MODEL SELECTION WITH K-FOLD CROSS-VALIDATION

Parameterize the learner by a complexity level C.

Model selection pseudocode:
  For increasing levels of complexity C:
    errT[C], errV[C] = Cross-Validate(Learner, C, examples)
    If errT has converged:
      break
  Find the value Cbest that minimizes errV[C]
  Return Learner(Cbest, examples)
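
A minimal Python rendering of this pseudocode (cross_validate is an assumed helper returning mean training and validation error for a given C; tol is an illustrative convergence test):

    def select_model(learner, examples, complexities, tol=1e-3):
        errT, errV = {}, {}
        prev = None
        for C in complexities:  # increasing levels of complexity
            errT[C], errV[C] = cross_validate(learner, C, examples)
            if prev is not None and abs(prev - errT[C]) < tol:
                break  # training error has converged
            prev = errT[C]
        C_best = min(errV, key=errV.get)  # complexity minimizing validation error
        return learner(C_best, examples)  # retrain on all examples at C_best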

Page 33: Taming the Learning Zoo

REGULARIZATION

Minimize: Cost(h) = Loss(h) + Complexity(h)

Example with linear models y = θᵀx:
  L2 error: Loss(θ) = Σ_i (y^(i) − θᵀx^(i))^2
  Lq regularization: Complexity(θ) = Σ_j |θ_j|^q

L2 and L1 are the most popular in linear regularization.
L2 regularization leads to a simple computation of the optimal θ.
L1 is more complex to optimize, but produces sparse models in which many coefficients are 0!
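
For the L2 case, a minimal numpy sketch of that simple computation (lam is an assumed weight on the complexity term; the slide leaves the trade-off weight implicit):

    import numpy as np

    def ridge_fit(X, y, lam=1.0):
        # Minimize sum_i (y^(i) - theta^T x^(i))^2 + lam * sum_j theta_j^2.
        # The optimum has the closed form theta = (X^T X + lam*I)^(-1) X^T y.
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)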

Page 34: Taming the Learning Zoo


OTHER TOPICS IN MACHINE LEARNING

Unsupervised learning
  Dimensionality reduction
  Clustering

Reinforcement learning: an agent that acts and learns how to act in an environment by observing rewards.

Learning from demonstration: an agent that acts and learns how to act in an environment by observing demonstrations from an expert.

Page 35: Taming the Learning Zoo

ISSUES IN PRACTICE

The distinctions between learning algorithms diminish when you have a lot of data.

The web has made it much easier to gather large-scale datasets than in the early days of ML.

Understanding data with many more attributes than examples is still a major challenge! Do humans just have really great priors?

Page 36: Taming the Learning Zoo

PROJECT MIDTERM REPORT

Due 11/10: ~1 page description of current progress, challenges, and changes in direction.

Page 37: Taming the Learning Zoo

NEXT LECTURES

Intelligent agents (R&N 2)
Decision-theoretic planning
Reinforcement learning
Applications of AI