
Learning Invariances and Hierarchies

Pierre Baldi
University of California, Irvine

Two Questions

1. “If we solve computer vision, we have pretty much solved AI.”

2. A-NNs (artificial neural networks) vs. B-NNs (biological neural networks), and Deep Learning.

If we solve computer vision…

• If we solve computer audition…

• If we solve computer olfaction…

• If we solve computer vision, how can we build computers that can prove Fermat’s last theorem?

Invariances

• Invariances in audition. We can recognize a tune invariantly with respect to: intensity, speed, tonality, harmonization, instrumentation, style, background.

• Invariances in olfaction. We can recognize an odor invariantly with respect to: concentrations, humidity, pressure, winds, mixtures, background.

Non-Invariances

• Invariances evolution did not care about (although we are still evolving!...)

– We cannot recognize faces upside down.

– We cannot recognize tunes played in reverse.

– We cannot recognize stereoisomers as such.

Enantiomers smell different.

A-NNs vs B-NNs

Origin of Invariances

• Weight sharing and translational invariance (see the sketch below).

• Can we quantify approximate weight sharing?

• Can we use approximate weight sharing to improve performance?

• Some of the invariance comes from the architecture.

• Some may come from the learning rules.
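A minimal sketch (plain NumPy, not code from the talk; the helper name `conv1d_shared` is hypothetical) of how weight sharing produces translational invariance, more precisely translation equivariance: one shared weight vector applied at every position means a shifted input yields a correspondingly shifted feature map.

```python
# Minimal sketch (assumption: plain NumPy, not from the talk) of weight
# sharing: one filter w applied at every position of the input x.
import numpy as np

def conv1d_shared(x, w):
    """Apply the shared weight vector w at every valid position of x."""
    k = len(w)
    return np.array([np.dot(w, x[i:i + k]) for i in range(len(x) - k + 1)])

rng = np.random.default_rng(0)
x = rng.normal(size=20)            # input signal
w = rng.normal(size=3)             # shared weights (the "filter")

y = conv1d_shared(x, w)
y_shift = conv1d_shared(np.roll(x, 2), w)

# Translation equivariance: away from the boundary, shifting the input by 2
# just shifts the feature map by 2 -- the same weights detect the same
# pattern wherever it occurs.
print(np.allclose(y[:-2], y_shift[2:]))    # True
```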

Learning Invariances

[Slide figures; only the labels are recoverable: Hebbian learning with symmetric connections w_ij = w_ji and an energy function E; example ±1 states (1, 1, 1), (1, 1, −1), (1, −1, 1); acyclic orientations of the hypercube, O(H); and a diagram relating H, O(H), and their isometries I(H) and I(O(H)).]
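As a hedged illustration of the labels above (standard Hopfield/Hebb material, not code from the slides): the Hebbian outer-product rule on ±1 states yields symmetric connections w_ij = w_ji by construction, which is the condition under which an energy function E is well defined.

```python
# Minimal sketch (standard Hopfield/Hebb, not code from the slides): the
# Hebbian outer-product rule on +/-1 states such as (1,1,1), (1,1,-1),
# (1,-1,1) gives symmetric connections w_ij = w_ji by construction.
import numpy as np

patterns = np.array([[1, 1, 1],
                     [1, 1, -1],
                     [1, -1, 1]])

W = np.zeros((3, 3))
for s in patterns:
    W += np.outer(s, s)            # Hebb: w_ij += s_i * s_j
np.fill_diagonal(W, 0)             # no self-connections

print(np.allclose(W, W.T))         # True: symmetry comes for free

def energy(s, W):
    # Hopfield energy E(s) = -1/2 * s^T W s, well defined for symmetric W.
    return -0.5 * s @ W @ s

print([energy(s, W) for s in patterns])
```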

Deep Learning ≈ Deep Targets

Training set: (x_i, y_i) for i = 1, …, m


Deep Target Algorithms

• In spite of the vanishing gradient problem (and the Newton problem), nothing seems to beat backpropagation; the vanishing-gradient effect is sketched below.

• Is backpropagation biologically plausible?
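A small illustrative sketch (my own, not from the slides) of the vanishing gradient problem mentioned above: back-propagating through a long chain of sigmoid units multiplies the gradient by σ′(z)·w at every layer, so it typically shrinks geometrically with depth.

```python
# Minimal sketch (not from the slides) of the vanishing gradient problem:
# back-propagating through a deep chain of scalar sigmoid units multiplies
# the gradient by sigmoid'(z) * w at every layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
depth = 50
weights = rng.normal(size=depth)    # one scalar weight per layer

# Forward pass through a chain of scalar sigmoid units.
activations = [0.5]                 # input value
for w in weights:
    activations.append(sigmoid(w * activations[-1]))

# Backward pass: chain rule, d(output)/d(input) = prod_k sigmoid'(z_k) * w_k.
grad = 1.0
for w, a_prev in zip(reversed(weights), reversed(activations[:-1])):
    z = w * a_prev
    grad *= sigmoid(z) * (1.0 - sigmoid(z)) * w

print(grad)   # typically vanishingly small after 50 sigmoid layers
```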

Mathematics of Dropout (a Cheap Approximation to Training the Full Ensemble)
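To illustrate the "cheap approximation to the full ensemble" idea in a hedged way (a toy linear unit in plain NumPy, not the talk's derivation): averaging a unit's output over random dropout masks matches a single pass in which the weights are scaled by the keep probability p; for a linear unit this holds exactly, and for nonlinear units it is the usual approximation.

```python
# Minimal sketch (toy linear unit, plain NumPy; not the talk's derivation) of
# dropout as a cheap ensemble approximation: the Monte Carlo average of the
# unit's output over random dropout masks matches a single pass with the
# weights scaled by the keep probability p.
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                   # keep probability
w = rng.normal(size=10)                   # weights of one linear unit
x = rng.normal(size=10)                   # one input vector

# Sample many members of the implicit ensemble of dropout sub-networks.
masks = (rng.random((100_000, 10)) < p).astype(float)   # 1.0 = unit kept
ensemble_mean = np.mean(masks @ (w * x))

# Weight-scaling rule used at test time: one deterministic pass.
weight_scaled = p * np.dot(w, x)

print(ensemble_mean, weight_scaled)       # the two agree closely
```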

Two Questions

1. “If we solve computer vision, we have pretty much solved AI.”

2. A-NNs vs B-NNs and Deep Learning.
