Stacked Generalization: an overview of the paper by David H. Wolpert


Page 1: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization: an overview of the paper by David H. Wolpert

Jim Ries

[email protected]

CECS 477 : Neural Networks

February 2, 2000

Page 2: Stacked Generalization an overview of the paper by David H. Wolpert

Introduction

Published in “Neural Networks”, 1992.

David H. Wolpert: Postdoctoral Fellow, Santa Fe Institute. Previously Los Alamos. Degrees in Physics.

Page 3: Stacked Generalization an overview of the paper by David H. Wolpert

Introduction (cont.)

A “generalizer” is an algorithm which guesses a parent function based on a learning set read from the parent function.

Neural networks are a subset of generalizers.

Other generalizers exist, such as Bayesian classifiers.

Page 4: Stacked Generalization an overview of the paper by David H. Wolpert

Introduction (cont.)

“Stacked generalization” is a mechanism for minimizing the error rate of one or more generalizers.

Can be used to combine generalizers that have been taught part of the learning set.

More sophisticated version of cross-validation (testing generalizers against previously unseen training data).

Page 5: Stacked Generalization an overview of the paper by David H. Wolpert

Introduction (cont.)

General Idea: Partition the learning set. Train on one part. Observe behavior on the other part of the partition. Correct for biases.
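
A minimal sketch of this general idea, under assumptions of my own (synthetic data, a single 50/50 partition, and off-the-shelf scikit-learn regressors standing in for the generalizers); the actual procedure is developed on the following slides.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))              # learning-set inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # learning-set outputs

# Partition the learning set into two parts.
part1, part2 = np.arange(100), np.arange(100, 200)

# Train a generalizer on one part ...
g = LinearRegression().fit(X[part1], y[part1])

# ... observe its behaviour on the other part of the partition ...
guesses = g.predict(X[part2])

# ... and fit a second generalizer to its residuals there, to correct for biases.
corrector = KNeighborsRegressor(n_neighbors=10).fit(X[part2], y[part2] - guesses)

# A bias-corrected guess for a new question q.
q = np.array([[2.5]])
print(g.predict(q) + corrector.predict(q))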

Page 6: Stacked Generalization an overview of the paper by David H. Wolpert

Topics of Discussion

Existing “winner take all” alternatives.

Detailed description of stacked generalization.

Discuss an experiment using stacked generalization.

Variations and Extensions.

Concluding Thoughts.

Page 7: Stacked Generalization an overview of the paper by David H. Wolpert

Existing “Winner-Takes-All” Strategies

Cross-validation & generalized cross-validation.

Bootstrapping.

Given a set of candidate generalizers {Gj}, these techniques choose the best G ∈ {Gj} such that estimated errors are minimized.
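
A minimal sketch of this winner-takes-all selection; the candidate set, the synthetic data, and the use of scikit-learn's cross_val_score are illustrative assumptions, not details from the paper.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

candidates = {                                   # the set {Gj}
    "1-nn": KNeighborsRegressor(n_neighbors=1),
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(max_depth=3, random_state=0),
}

# Estimate each candidate's error on held-out partitions and keep only the
# single best G; every other generalizer is then discarded.
errors = {name: -cross_val_score(g, X, y, cv=5,
                                 scoring="neg_mean_squared_error").mean()
          for name, g in candidates.items()}
best = min(errors, key=errors.get)
print(best, errors[best])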

Page 8: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization

Generalizer definition: maps a learning set {xk ∈ Rn, yk ∈ R} together with a question q ∈ Rn into a guess ∈ R.

If the generalizer returns the correct yi when q is one of the xi, then it reproduces the learning set.
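
As a concrete illustration of this definition (my own toy example, not one from the paper), a nearest-neighbour rule is a generalizer that reproduces its learning set: when the question q equals some xi, it returns the stored yi.

import numpy as np

def nearest_neighbour_generalizer(learning_set, q):
    """Map a learning set {(xk in R^n, yk in R)} and a question q in R^n to a guess in R."""
    xs = np.array([x for x, _ in learning_set], dtype=float)
    ys = np.array([y for _, y in learning_set], dtype=float)
    k = np.argmin(np.linalg.norm(xs - q, axis=1))   # index of the xk closest to q
    return ys[k]

learning_set = [(np.array([0.0, 0.0]), 1.0),
                (np.array([1.0, 0.0]), 2.0),
                (np.array([0.0, 1.0]), 3.0)]
print(nearest_neighbour_generalizer(learning_set, np.array([0.0, 1.0])))  # 3.0: reproduces the learning set
print(nearest_neighbour_generalizer(learning_set, np.array([0.6, 0.1])))  # a guess for an unseen question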

Page 9: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Split the learning set θ ⊂ Rn+1 into 2 (disjoint) sets, θi1 and θi2, called partition sets.

Cross-validation takes a set of candidate generalizers {Gj} trained from θi1 and chooses the candidate with the least error when fed the test partition set θi2.

Page 10: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Stacked generalization combines all of the generalizers rather than choosing a “best” one.

Case of 1 generalizer is still interesting in that stacked generalization is essentially a guard against over-fitting.

Page 11: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Define Rn+1, in which θ lives, as “level 0”, and any generalizer of θ as a “level 0” generalizer.

Look at a set of k numbers determined by the N generalizers {Gj} working together within each partition.

Page 12: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Consider each set of k numbers as the input part of a point in Rk+1 (“level 1” space).

Generalize from these level 1 points by operating a generalizer in the level 1 space.

Thus, we have a “stack” of generalizers.

Stacks with more than 2 levels are possible.

Page 13: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Problem: What generalizer(s) to use at each level? (unanswered)

Page 14: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Multiple Generalizers: 1) Creating L'

Level 1: Learning set L'. Contains r elements, one for each partition in the level 0 partition set. The L' inputs are the level 0 guesses G1(θij; in(θi2)), G2(θij; in(θi2)), ...; the L' output is out(θi2).

Level 0: Learning set θ. Partition set θij. Generalizers {Gp}.
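
A minimal sketch of step 1, creating L'; the synthetic data, the particular scikit-learn regressors used as the level 0 generalizers {Gp}, and the 5-fold partition set are all illustrative assumptions.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))              # level 0 inputs, in R^n
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # level 0 outputs, in R

level0 = [KNeighborsRegressor(n_neighbors=5), LinearRegression()]   # {Gp}

# For each partition: train every level 0 generalizer on the training part and
# record its guess on the held-out part.  The vector of guesses is the L' input;
# the held-out output is the L' output.
L1_inputs = np.zeros((len(X), len(level0)))
L1_outputs = np.zeros(len(X))
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    for j, g in enumerate(level0):
        g.fit(X[train_idx], y[train_idx])
        L1_inputs[test_idx, j] = g.predict(X[test_idx])
    L1_outputs[test_idx] = y[test_idx]

# (L1_inputs, L1_outputs) is the level 1 learning set L', one element per
# partition element, living in R^(k+1) with k = number of level 0 generalizers.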

Page 15: Stacked Generalization an overview of the paper by David H. Wolpert

Stacked Generalization (cont.)

Multiple Generalizers: 2) Guessing

Level 0: Learning set θ. Generalizers {Gp}. Question q. The level 0 guesses G1(θ; q), G2(θ; q), ... form q', the level 1 question.

Level 1: Learning set L'. Generalizer G'. Question q'. Final guess: G'(L'; q').
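
A minimal self-contained sketch of step 2, guessing; the data, the scikit-learn stand-ins for {Gp} and G', and the use of cross_val_predict to rebuild L' compactly are all illustrative assumptions.

import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

level0 = [KNeighborsRegressor(n_neighbors=5), LinearRegression()]   # {Gp}
level1 = LinearRegression()                                         # G'

# Level 1: fit G' on L' (the out-of-fold guesses of the level 0 generalizers).
L1_inputs = np.column_stack([cross_val_predict(g, X, y, cv=5) for g in level0])
level1.fit(L1_inputs, y)

# Level 0: train each Gp on the whole learning set.
for g in level0:
    g.fit(X, y)

# Final guess for a question q: form q' = (G1(theta; q), G2(theta; q), ...)
# from the level 0 guesses, then answer with G'(L'; q').
q = np.array([[0.5]])
q_prime = np.column_stack([g.predict(q) for g in level0])
print(level1.predict(q_prime))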

Page 16: Stacked Generalization an overview of the paper by David H. Wolpert

Experiment

NETtalk “reading aloud” problem.

7 letters as input. Output is an English phoneme that a human would utter if reading aloud.

Several separate generalizers combined.

Page 17: Stacked Generalization an overview of the paper by David H. Wolpert

Experiment (cont.)

Best level 0 generalizer got 69% correct.

Stacked generalization got 88% correct.

Page 18: Stacked Generalization an overview of the paper by David H. Wolpert

Variations and Extensions

Consider the level 1 output not as a guess but as an estimate of the error of a guess (this can be tweaked by using a constant to denote what percentage of the error is to be considered).

Consider the entire stacked generalizer as a generalizer which can itself be stacked.
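
A minimal sketch of the first variation, with the level 1 generalizer trained to estimate the error of a level 0 guess rather than the guess itself; the data, the estimators, and the use of the absolute residual as the error target are my own illustrative assumptions.

import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

g0 = LinearRegression()                        # a single level 0 generalizer
oof_guess = cross_val_predict(g0, X, y, cv=5)  # its out-of-fold guesses

# Level 1 target: an estimate of the error of the level 0 guess, not the guess.
g1 = KNeighborsRegressor(n_neighbors=10)
g1.fit(X, np.abs(y - oof_guess))

# At question time, report the level 0 guess together with the estimated
# size of its error.
g0.fit(X, y)
q = np.array([[2.0]])
print(g0.predict(q), g1.predict(q))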

Page 19: Stacked Generalization an overview of the paper by David H. Wolpert

Concluding Thoughts

Where is the evidence that considering all of θ when training is not as good as considering part of θ and then adding another layer using the remaining part of θ?

How does stacked generalization compare to applying heuristics such as regularization or early stopping to avoid over-fitting?

Page 20: Stacked Generalization an overview of the paper by David H. Wolpert

Questions?