Introduction to Bayesian Learning
Aaron Hertzmann, University of Toronto
SIGGRAPH 2004 Tutorial
Evaluations: www.siggraph.org/courses_evaluation
CG is maturing …
… but it’s still hard to create
… it’s hard to create in real-time
Data-driven computer graphics
What if we can get models from the real world?
Data-driven computer graphics
Three key problems:
• Capture data (from video, cameras, mocap, archives, …)
• Build a higher-level model
• Generate new data

Ideally, it should be automatic and flexible.
Example: Motion capture
mocap.cs.cmu.edu
Example: character posing
Example: shape modeling
[Blanz and Vetter 1999]
Example: shape modeling
[Allen et al. 2003]
Key problems
• How do you fit a model to data?
– How do you choose weights and thresholds?
– How do you incorporate prior knowledge?
– How do you merge multiple sources of information?
– How do you model uncertainty?

Bayesian reasoning provides solutions.
Bayesian reasoning is …
Probability, statistics, data-fitting
Bayesian reasoning is …
A theory of mind
Bayesian reasoning is …
A theory of artificial intelligence
[Thrun et al.]
Bayesian reasoning is …
A standard tool of computer vision
and …
Applications in:
• Data mining
• Robotics
• Signal processing
• Bioinformatics
• Text analysis (incl. spam filters)
• and (increasingly) graphics!
Outline for this course
3:45–4:00pm: Introduction
4:00–4:45pm: Fundamentals
– From axioms to probability theory
– Prediction and parameter estimation
4:45–5:15pm: Statistical shape models
– Gaussian models and PCA
– Applications: facial modeling, mocap
5:15–5:30pm: Summary and questions
More about the course
• Prerequisites
– Linear algebra, multivariate calculus, graphics, optimization
• Unique features
– Start from first principles
– Emphasis on graphics problems
– Bayesian prediction
– Take-home “principles”
Bayesian vs. Frequentist
• Frequentist statistics
– a.k.a. “orthodox statistics”
– Probability = frequency of occurrences in an infinite number of trials
– Arose from sciences that study populations
– p-values, t-tests, ANOVA, etc.
• Bayesian vs. frequentist debates have been long and acrimonious
Bayesian vs. Frequentist
“In academia, the Bayesian revolution is on the verge of becoming the majority viewpoint, which would have been unthinkable 10 years ago.”
– Bradley P. Carlin, professor of public health, University of Minnesota (New York Times, Jan 20, 2004)
Bayesian vs. Frequentist
If necessary, please leave these assumptions behind (for today):
• “A probability is a frequency”
• “Probability theory only applies to large populations”
• “Probability theory is arcane and boring”
Fundamentals
What is reasoning?
• How do we infer properties of the world?
• How should computers do it?
Aristotelian logic
• If A is true, then B is true
• A is true
• Therefore, B is true

A: My car was stolen
B: My car isn’t where I left it
Real-world is uncertain
Problems with pure logic:
• Don’t have perfect information
• Don’t really know the model
• Model is non-deterministic

So let’s build a logic of uncertainty!
Beliefs
Let B(A) = “belief A is true”
B(¬A) = “belief A is false”

e.g., A = “my car was stolen”
B(A) = “belief my car was stolen”
Reasoning with beliefs
Cox Axioms [Cox 1946]:
1. Ordering exists
– e.g., B(A) > B(B) > B(C)
2. Negation function exists
– B(¬A) = f(B(A))
3. Product function exists
– B(A ∧ Y) = g(B(A|Y), B(Y))
This is all we need!
The Cox Axioms uniquely define a complete system of reasoning: this is probability theory!
Principle #1:
“Probability theory is nothing more than common sense reduced to calculation.”
– Pierre-Simon Laplace, 1814
Definitions
P(A) = “probability A is true” = B(A) = “belief A is true”

P(A) ∈ [0, 1]
P(A) = 1 iff “A is true”
P(A) = 0 iff “A is false”

P(A|B) = “prob. of A if we knew B”
P(A, B) = “prob. A and B”
Examples
A: “my car was stolen”
B: “I can’t find my car”

P(A) = .1
P(B) = .5
P(B | A) = .99
P(A | B) = .3
Basic rules

Sum rule: P(A) + P(¬A) = 1

Example: A: “it will rain today”
p(A) = .9, p(¬A) = .1
Basic rules

Sum rule: ∑_i P(Ai) = 1
when exactly one of the Ai must be true
Basic rules

Product rule: P(A,B) = P(A|B) P(B) = P(B|A) P(A)
Basic rules: conditioning

Sum rule: ∑_i P(Ai) = 1  →  ∑_i P(Ai|B) = 1
Product rule: P(A,B) = P(A|B) P(B)  →  P(A,B|C) = P(A|B,C) P(B|C)
Summary
Sum rule: ∑_i P(Ai) = 1
Product rule: P(A,B) = P(A|B) P(B)

All derivable from the Cox axioms; they must obey the rules of common sense.
Now we can derive new rules.
Example
A = you eat a good meal tonight
B = you go to a highly-recommended restaurant
¬B = you go to an unknown restaurant
Model: P(B) = .7, P(A|B) = .8, P(A|¬B) = .5
What is P(A)?
Example, continued
Model: P(B) = .7, P(A|B) = .8, P(A|¬B) = .5
1 = P(B) + P(¬B)                      (sum rule)
1 = P(B|A) + P(¬B|A)                  (conditioning)
P(A) = P(B|A)P(A) + P(¬B|A)P(A)
     = P(A,B) + P(A,¬B)               (product rule)
     = P(A|B)P(B) + P(A|¬B)P(¬B)      (product rule)
     = .8 × .7 + .5 × (1 − .7) = .71
Basic rules
Marginalizing
P(A) = ∑_i P(A, Bi), for mutually-exclusive Bi

e.g., p(A) = p(A,B) + p(A,¬B)
Principle #2:
Given a complete model, we can derive any other probability.
Inference

Model: P(B) = .7, P(A|B) = .8, P(A|¬B) = .5
If we know A, what is P(B|A)? (“Inference”)

Product rule: P(A,B) = P(A|B) P(B) = P(B|A) P(A)

Bayes’ Rule: P(B|A) = P(A|B) P(B) / P(A) = .8 × .7 / .71 ≈ .79
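A few lines of Python (mine, not the slides’) verify both numbers: the marginalization giving P(A) and the Bayes’ Rule inversion giving P(B|A).

```python
# Restaurant example: A = good meal, B = highly-recommended restaurant.
P_B = 0.7
P_A_given_B = 0.8
P_A_given_notB = 0.5

# Marginalization: P(A) = P(A|B) P(B) + P(A|not-B) P(not-B)
P_A = P_A_given_B * P_B + P_A_given_notB * (1 - P_B)

# Bayes' rule: P(B|A) = P(A|B) P(B) / P(A)
P_B_given_A = P_A_given_B * P_B / P_A

print(P_A)          # 0.71
print(P_B_given_A)  # 0.788... (about .79)
```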
Inference

Bayes’ Rule:
P(M|D) = P(D|M) P(M) / P(D)
(P(M|D): posterior; P(D|M): likelihood; P(M): prior)
Principle #3:
Describe your model of the world, and then compute the probabilities of the unknowns given the observations.
Principle #3a:
Use Bayes’ Rule to infer unknown model variables from observed data:
P(M|D) = P(D|M) P(M) / P(D)
(P(M|D): posterior; P(D|M): likelihood; P(M): prior)
Discrete variables
Probabilities over discrete variables
C ∈ {Heads, Tails}
P(C=Heads) = .5
P(C=Heads) + P(C=Tails) = 1
Continuous variables
Let x ∈ ℝ^N
How do we describe beliefs over x?
e.g., x is a face, joint angles, …
Continuous variables
Probability Distribution Function (PDF), a.k.a. “marginal probability”:
p(x), with P(a ≤ x ≤ b) = ∫_a^b p(x) dx

Notation: P(x) is a probability; p(x) is a PDF.
Continuous variables
Probability Distribution Function (PDF)
Let x ∈ ℝ. p(x) can be any function s.t.
∫_-∞^∞ p(x) dx = 1
p(x) ≥ 0

Define P(a ≤ x ≤ b) = ∫_a^b p(x) dx
Uniform distribution
x ~ U(x0, x1)
p(x) = 1/(x1 − x0) if x0 ≤ x ≤ x1
     = 0 otherwise
Gaussian distributions
x ~ N(μ, σ²)
p(x|μ, σ²) = exp(−(x−μ)²/(2σ²)) / √(2πσ²)
Why use Gaussians?
• Convenient analytic properties
• Central Limit Theorem
• Works well
• Not for everything, but a good building block
• For more reasons, see [Bishop 1995, Jaynes 2003]
Rules for continuous PDFs
Same intuitions and rules apply
“Sum rule”: ∫_-∞^∞ p(x) dx = 1
Product rule: p(x,y) = p(x|y) p(y)
Marginalizing: p(x) = ∫ p(x,y) dy
… Bayes’ Rule, conditioning, etc.
Multivariate distributions
Uniform: x ~ U(dom)
Gaussian: x ~ N(μ, Σ)
Inference
How do we reason about the world from observations?
Three important sets of variables:
• observations
• unknowns
• auxiliary (“nuisance”) variables
Given the observations, what are the probabilities of the unknowns?
Inference
Example: coin-flipping
P(C = heads|θ) = θ, with prior p(θ) = U(0,1)

Suppose we flip the coin 1000 times and get 750 heads. What is θ?

Intuitive answer: 750/1000 = 75%
What is ?
p() = Uniform(0,1)P(Ci = h|) = P(Ci = t|) = 1-
P(C1:N | ) = i P(Ci = h | )
p( | C1:N) = P(C1:N| ) p()P(C1:N)
Bayes’ Rule
= i P(Ci | ) P() / P(C1:N)/ H (1-)T
H = 750, T = 250
What is ?
p( | C1, … CN) / 750 (1-)250
“Posterior distribution:” new beliefs about
Bayesian prediction
What is the probability of another head?
P(C=h | C1:N) = ∫ P(C=h, θ | C1:N) dθ
             = ∫ P(C=h | θ) p(θ | C1:N) dθ
             = (H+1)/(N+2) = 751/1002 = 74.95%

Note: we never computed θ
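The step from the integral to (H+1)/(N+2) is not spelled out on the slides; it follows from the Beta integral B(a,b) = ∫_0^1 θ^(a−1)(1−θ)^(b−1) dθ = Γ(a)Γ(b)/Γ(a+b):

```latex
P(C{=}h \mid C_{1:N})
  = \frac{\int_0^1 \theta \cdot \theta^{H}(1-\theta)^{T}\,d\theta}
         {\int_0^1 \theta^{H}(1-\theta)^{T}\,d\theta}
  = \frac{B(H{+}2,\,T{+}1)}{B(H{+}1,\,T{+}1)}
  = \frac{H+1}{H+T+2}
  = \frac{H+1}{N+2}
```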
Parameter estimation
• What if we want an estimate of θ?
• Maximum A Posteriori (MAP):
θ* = arg max_θ p(θ | C1, …, CN) = H / N = 750 / 1000 = 75%
Parameter estimation
Problem: given lots of data, determine unknown parameters
Various estimators:
• MAP
• Maximum likelihood
• Unbiased estimators
• …
A problem
Suppose we flip the coin once.
What is P(C2 = h | C1 = h)?

MAP estimate: θ* = H/N = 1. This is absurd!

Bayesian prediction:
P(C2 = h | C1 = h) = (H+1)/(N+2) = 2/3
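The contrast is easy to reproduce in a few lines of Python (a sketch, not part of the course materials):

```python
def map_estimate(H, N):
    """MAP/ML estimate of theta under a uniform prior: theta* = H/N."""
    return H / N

def bayes_predict(H, N):
    """Posterior-averaged prediction P(C_{N+1}=h | C_{1:N}) = (H+1)/(N+2)."""
    return (H + 1) / (N + 2)

print(map_estimate(1, 1), bayes_predict(1, 1))            # 1.0 vs. 0.667: one flip
print(map_estimate(750, 1000), bayes_predict(750, 1000))  # 0.75 vs. 0.7495: peaked posterior
```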
What went wrong?
[Figure: p(θ | C1) vs. p(θ | C1:N)]
Over-fitting
• A model that fits the data well but does not generalize
• Occurs when an estimate is obtained from a “spread-out” posterior
• Important to ask the right question: estimate CN+1, not θ
Principle #4:
Parameter estimation is not Bayesian. It leads to errors, such as over-fitting.
Bayesian prediction
p(x|D) = ∫ p(x, θ | D) dθ = ∫ p(x|θ) p(θ | D) dθ
(D: training data; x: new data vector; θ: model parameters)

a.k.a. “model averaging”
Advantages of estimation

Bayesian prediction, p(x|D) = ∫ p(x, θ | D) dθ, is usually difficult and/or expensive.
Q: When is estimation safe?
A: When the posterior is “peaked”
• The posterior “looks like” a spike
• Generally, this means a lot more data than parameters
• But this is not a guarantee (e.g., fit a line to 100 identical data points)
• Practical answer: use error bars (posterior variance)
Principle #4a:
Parameter estimation is easier than prediction. It works well when the posterior is “peaked.”
Learning a Gaussian
Given samples {xi}, what are μ and σ²?
p(x|μ, σ²) = exp(−(x−μ)²/(2σ²)) / √(2πσ²)
p(x1:K | μ, σ²) = ∏_i p(xi | μ, σ²)

Want: max p(x1:K | μ, σ²) = min −ln p(x1:K | μ, σ²)
 = ∑_i (xi−μ)²/(2σ²) + (K/2) ln 2πσ²

Closed-form solution:
μ = ∑_i xi / K
σ² = ∑_i (xi − μ)² / K
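As a sketch (my code, not from the course materials), the closed-form ML fit in NumPy:

```python
import numpy as np

def fit_gaussian_ml(x):
    """Closed-form maximum-likelihood fit of a 1D Gaussian."""
    mu = x.mean()                    # mu = sum_i x_i / K
    sigma2 = ((x - mu) ** 2).mean()  # sigma^2 = sum_i (x_i - mu)^2 / K
    return mu, sigma2

x = np.random.default_rng(0).normal(5.0, 2.0, size=1000)  # synthetic data
print(fit_gaussian_ml(x))  # roughly (5.0, 4.0)
```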
Stereology
[Jagnow et al. 2004 (this morning)]
Model: p(θ) p(S | θ) p(I | S)
(θ: the PDF over solids; S: the solid shapes; I: the observed image)

Problem: What is the PDF over solids?
Can’t estimate individual solid shapes:
arg max p(θ, S | I) is underconstrained.
Stereology
Marginalize out S:
p(θ | I) = ∫ p(θ, S | I) dS
This can be maximized.

Model: p(θ) p(S | θ) p(I | S)
Principle #4b:
When estimating variables, marginalize out as many unknowns as possible.

Algorithms for this:
• Expectation-Maximization (EM)
• Variational learning
Regression
Regression
Curve fitting
Linear regression
Model: y = ax + b + ε,  ε ~ N(0, σ²I)

Or: p(y | x, a, b, σ²) = N(ax + b, σ²I)
Linear regression
p(y | x, a, b, σ²) = N(ax + b, σ²I)
p(y1:K | x1:K, a, b, σ²) = ∏_i p(yi | xi, a, b, σ²)

Maximum likelihood:
a*, b*, σ²* = arg max ∏_i p(yi | xi, a, b, σ²)
            = arg min −ln ∏_i p(yi | xi, a, b, σ²)

Minimize: ∑_i (yi − (a xi + b))²/(2σ²) + (K/2) ln 2πσ²

Sum-of-squared differences: “least-squares”
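A minimal NumPy sketch of the solution, solving the least-squares problem that ML reduces to (variable names are mine):

```python
import numpy as np

def fit_line_ml(x, y):
    """Least-squares line fit = ML under Gaussian noise.
    Solves min_w ||y - X w||^2 with design matrix X = [x, 1]."""
    X = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w  # [a, b]

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, size=x.shape)  # y = ax + b + noise
print(fit_line_ml(x, y))  # close to [2.0, 0.5]
```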
Linear regression, cont’d
Standard closed-form solution
(see course notes or any statistics text)
Linear regression
Same idea in higher dimensions: y = Ax + ε
Nonlinear regression
Model: y = f(x; w) + ε,  ε ~ N(0, σ²I)

Or: p(y | x, w, σ²) = N(f(x; w), σ²I)
(w: curve parameters)
Typical curve models
Line: f(x; w) = w0 x + w1

B-spline, Radial Basis Functions: f(x; w) = ∑_i wi Bi(x)

Artificial neural network: f(x; w) = ∑_i wi tanh(∑_j wj x + w0) + w1
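To make the basis-function idea concrete, here is a small sketch assuming Gaussian bumps for the Bi (one reasonable choice among many; the course doesn’t prescribe a specific basis):

```python
import numpy as np

def rbf_design(x, centers, width=0.2):
    """Design matrix B with B[i, j] = B_j(x_i), using Gaussian-bump basis functions."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

def f(x, w, centers):
    """f(x; w) = sum_j w_j B_j(x)."""
    return rbf_design(x, centers) @ w
```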
Nonlinear regression
p(y | x, w, σ²) = N(f(x; w), σ²I)
p(y1:K | x1:K, w, σ²) = ∏_i p(yi | xi, w, σ²)

Maximum likelihood:
w*, σ²* = arg max ∏_i p(yi | xi, w, σ²)
        = arg min −ln ∏_i p(yi | xi, w, σ²)

Minimize: ∑_i (yi − f(xi; w))²/(2σ²) + (K/2) ln 2πσ²

Sum-of-squared differences: “least-squares”
Principle #5:
Least-squares estimation is a special case of maximum likelihood.
Principle #5a:
Because it is maximum likelihood, least-squares suffers from over-fitting.
Overfitting
Smoothness priors
Assumption: true curve is smooth
Bending energy: p(w|λ) ∝ exp(−∫ ‖∇f‖² dx / (2λ²))
Weight decay:   p(w|λ) ∝ exp(−‖w‖² / (2λ²))
Smoothness priors

MAP estimation:
arg max p(w|y) = arg max p(y|w) p(w) / p(y)
 = arg min −ln p(y|w) p(w)
 = ∑_i (yi − f(xi; w))²/(2σ²) + ‖w‖²/(2λ²) + K ln σ
(sum-of-squared-differences term + smoothness term)
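Setting the gradient of this objective to zero gives ridge regression; a sketch under the weight-decay prior, reusing the design matrix B from the RBF sketch above (my code, not the course’s):

```python
import numpy as np

def fit_map(B, y, sigma2, lam2):
    """Minimize sum_i (y_i - (B w)_i)^2 / (2 sigma^2) + ||w||^2 / (2 lambda^2).
    The zero-gradient condition is (B^T B + (sigma^2/lambda^2) I) w = B^T y."""
    D = B.shape[1]
    return np.linalg.solve(B.T @ B + (sigma2 / lam2) * np.eye(D), B.T @ y)
```

Note how σ²/λ² acts as the usual ridge parameter: large assumed noise or a tight prior shrinks w toward zero.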
Underfitting
Principle #5b:
MAP estimation with smoothness priors leads to under-fitting.
Applications in graphics
Two examples:
• Shape interpolation [Rose III et al. 2001]
• Approximate physics [Grzeszczuk et al. 1998]
Choices in fitting
• Smoothness, noise parameters
• Choice of basis functions
• Number of basis functions

Bayesian methods can make these choices automatically and effectively.
Learning smoothness
Given “good” data, solve
λ*, σ²* = arg max p(λ, σ² | w, x1:K, y1:K)

Closed-form solution.
Used for shape reconstruction in vision [Szeliski 1989].
Learning without shape
Q: Can we learn smoothness/noise without knowing the curve?
A: Yes.
Learning without shape
λ*, σ²* = arg max p(λ, σ² | x1:K, y1:K)
(2 unknowns, K measurements)

p(λ, σ² | x1:K, y1:K) = ∫ p(λ, σ², w | x1:K, y1:K) dw
                      ∝ ∫ p(x1:K, y1:K | w, σ², λ) p(w | λ, σ²) dw
Bayesian regression
Don’t fit a single curve, but keep the uncertainty in the curve:
p(x | x1:N, y1:N)
Bayesian regression
Two steps:
1. Learn smoothness and noise:
λ*, σ²* = arg max p(λ, σ² | x1:K, y1:K)
2. Predict new values:
p(x | λ*, σ²*, x1:K, y1:K) = ∫ p(x, w | x1:K, y1:K, λ*, σ²*) dw

The curve (w) is never estimated.
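A self-contained sketch of these two steps for the linear-in-basis model with prior w ~ N(0, λ²I). Everything below is my illustration, not the talk’s code: the marginal likelihood is maximized by a crude grid search, where real Gaussian-process implementations use gradient-based optimization.

```python
import numpy as np

# Toy data: a noisy curve.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, size=x.shape)

centers = np.linspace(0, 1, 10)
def basis(x):  # Gaussian RBF design matrix, as in the earlier sketch
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.2 ** 2))
B = basis(x)

def log_marginal(lam2, sigma2):
    """ln p(y | x, lambda^2, sigma^2) with the weights w integrated out:
    under w ~ N(0, lam2 I), the marginal is y ~ N(0, lam2 B B^T + sigma2 I)."""
    K = lam2 * B @ B.T + sigma2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + len(y) * np.log(2 * np.pi))

# Step 1: learn smoothness and noise by maximizing the marginal likelihood.
lam2, sigma2 = max(((l, s) for l in (0.1, 1.0, 10.0) for s in (1e-3, 1e-2, 1e-1)),
                   key=lambda p: log_marginal(*p))

# Step 2: predict new values with w marginalized out; w is never estimated.
x_new = np.linspace(0, 1, 100)
B_new = basis(x_new)
K = lam2 * B @ B.T + sigma2 * np.eye(len(y))
mean = lam2 * B_new @ B.T @ np.linalg.solve(K, y)  # E[y(x_new) | data]
```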
Bayesian regression
MAP/least-squares (hand-tuned λ, σ², basis functions)
vs. Gaussian Process regression (learned parameters λ, σ²)
Principle #6:
Bayes’ Rule provides a principle for learning (or marginalizing out) all parameters.
Prediction variances
More info: D. MacKay’s Introduction to Gaussian Processes
NIPS 2003 Feature Selection Challenge
• Competition between classification algorithms, including SVMs, nearest neighbors, GPs, etc.
• Winners: R. Neal and J. Zhang
• They used the most powerful model they could compute with (1000’s of parameters), plus Bayesian prediction
• Very expensive computations
Summary of “Principles”
1. Probability theory is common sense reduced to calculation.
2. Given a model, we can derive any probability.
3. Describe a model of the world, and then compute the probabilities of the unknowns with Bayes’ Rule.
Summary of “Principles”
4. Parameter estimation leads to over-fitting when the posterior isn’t “peaked.” However, it is easier than Bayesian prediction.
5. Least-squares estimation is a special case of MAP, and can suffer from over- and under-fitting
6. You can learn (or marginalize out) all parameters.
Statistical shape and appearance models with PCA
Key vision problems
• Is there a face in this image?
• Who is it?
• What is the 3D shape and texture?

[Turk and Pentland 1991]
Key vision problems
• Is there a person in this picture?
• Who?
• What is their 3D pose?
Key graphics problems
• How can we easily create new bodies, shapes, and appearances?
• How can we edit images and videos?
The difficulty
• Ill-posed problems
– Need prior assumptions
– Lots of work for an artist
Outline
• Face modeling problem
– Linear shape spaces
– PCA
– Probabilistic PCA
• Applications
– Face and body modeling
Background: 2D models
• Eigenfaces
– Sirovich and Kirby 1987; Turk and Pentland 1991
• Active Appearance Models / Morphable Models
– Beier and Neely 1990
– Cootes and Taylor 1998
Face representation
• 70,000 vertices with (x, y, z, r, g, b)
• Correspondence precomputed
[Blanz and Vetter 1999]
Data representation
yi = [x1, y1, z1, …, x70,000, y70,000, z70,000]^T

Linear blends (a.k.a. blendshapes, morphing):
ynew = (y1 + y2) / 2

[Figure: y3 = .5 · y1 + .5 · y2]
Linear subspace model
y = ∑_i wi yi  (s.t. ∑_i wi = 1)
  = ∑_i xi ai + μ
  = A x + μ

Problem: can we learn this linear space?
Principal Components Analysis (PCA)
Same model as linear regression, but with unknown x
Conventional PCA (Bayesian formulation)

x, A, μ ~ Uniform,  A^T A = I,  ε ~ N(0, σ²I)
y = A x + μ + ε
Given training data y1:K, what are A, x1:K, μ, σ²?

Maximum likelihood reduces to minimizing:
∑_i ‖yi − (A xi + μ)‖² / (2σ²) + (K/2) ln 2πσ²

A closed-form solution exists.
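The closed-form solution is the usual SVD computation; a sketch (my code, standard algorithm):

```python
import numpy as np

def pca(Y, q):
    """Closed-form ML fit: Y is K x D (one training vector per row), q components.
    Returns the orthonormal basis A (A^T A = I), mean mu, and latent coordinates."""
    mu = Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    A = Vt[:q].T          # D x q: top-q principal directions
    X = (Y - mu) @ A      # K x q: latent coordinates x_i
    return A, mu, X
```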
PCA with missing data
[Figure: a linear constraint in data space and the ML point on it]

Problems:
• The estimated point is far from the data if the data is noisy
• High-dimensional y has a uniform distribution
• Low-dimensional x is overconstrained
Why? Because x ~ U
Probabilistic PCA
x ~ N(0, I)
y = A x + b + ε

[Roweis 1998; Tipping and Bishop 1998]
Fitting a Gaussian
y ~ N(μ, Σ) is easy to learn, and has nice properties…
…but Σ is a 70,000 × 70,000 matrix
PPCA vs. Gaussians
However…
PPCA: p(y) = ∫ p(x, y) dx = N(b, A A^T + σ²I)

This is a special case of a Gaussian!
PCA is the degenerate case σ² = 0.
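Tipping and Bishop also give a closed-form ML fit for PPCA from the eigendecomposition of the sample covariance; a sketch of it (σ² comes out as the average discarded eigenvalue):

```python
import numpy as np

def ppca(Y, q):
    """Closed-form ML fit of PPCA (after Tipping and Bishop); a sketch.
    Returns A, b, sigma2 with p(y) = N(b, A A^T + sigma2 I)."""
    b = Y.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(Y, rowvar=False))
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending
    sigma2 = evals[q:].mean()                    # average discarded variance
    A = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return A, b, sigma2
```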
Face estimation in an image
p(y) = N(μ, Σ)
p(Image | y) = N(Is(y), σ²I)

−ln p(S, T | Image) = ‖Image − Is(y)‖²/(2σ²) + (y−μ)^T Σ⁻¹ (y−μ)/2
(image-fitting term + face likelihood)

Use PCA coordinates for efficiency.
Efficient editing in PCA space.

[Blanz and Vetter 1999]
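A sketch of this fitting energy in code; here `render` is a hypothetical stand-in for the image-synthesis function Is(y), and `evals` for the PCA eigenvalues that make the Gaussian prior diagonal in the coordinates x:

```python
import numpy as np

def neg_log_posterior(x, image, A, mu, evals, render, sigma2):
    """-ln p(face | image) up to constants, in PCA coordinates x (a sketch)."""
    y = A @ x + mu                                 # face from PCA coordinates
    data_term = np.sum((image - render(y)) ** 2) / (2 * sigma2)
    prior_term = np.sum(x ** 2 / (2 * evals))      # (y-mu)^T Sigma^-1 (y-mu) / 2
    return data_term + prior_term
```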
Comparison
PCA: unconstrained latent space – not good for missing data
Gaussians: general model, but impractical for large data
PPCA: constrained Gaussian – best of both worlds
Estimating a face from video
[Blanz et al. 2003]
The space of all body shapes
[Allen et al. 2003]
The space of all body shapes
[Allen et al. 2004]
Non-rigid 3D modeling from video
What if we don’t have training data?
[Torresani and Hertzmann 2004]
Non-rigid 3D modeling from video
• Approach: learn all parameters
– shape and motion
– shape PDF
– noise and outliers
• Lots of missing data (depths)
– PPCA is essential
• Same basic framework, more unknowns
Results
[Video panels: reference frame, Lucas-Kanade tracking, tracking result, 3D reconstruction]
Results
[Video panels: robust algorithm, 3D reconstruction; footage from [Almodovar 2002]]
Inverse kinematics
[Figure: constraints and DOFs (y)]
[Grochow et al. 2004 (tomorrow)]
Problems with Gaussians/PCA
The space of poses may be nonlinear and non-Gaussian.
Non-linear dimension reduction
Model: y = f(x; w) + ε
Like nonlinear regression, but without known x
NLDR for BRDFs: [Matusik et al. 2003]
Problem with Gaussians/PPCA
Style-based IK
[Figure: a walk cycle on the learned manifold f(x; w)]
Details: [Grochow et al. 2004 (tomorrow)]
Discussion and frontiers
Designing learning algorithms for graphics
• Write a generative model p(data | model)
• Use Bayes’ rule to learn the model from data
• Generate new data from the model and constraints
(numerical methods may be required)
What model do we use?
• Intuition, experience, experimentation, rules of thumb
• Put as much domain knowledge in as possible
– model 3D shapes rather than pixels
– joint angles instead of 3D positions
• Gaussians for simple cases; nonlinear models for complex cases (an active research area)
Q: Are there any limits to the power of Bayes' Rule?
A: According to legend, one who fully grasped Bayes' Rule would gain the ability to create and physically enter an alternate universe using only off-the-shelf equipment. One who fully grasps Bayes' Rule, yet remains in our universe to aid others, is known as a Bayesattva.
– http://yudkowsky.net/bayes/bayes.html
Problems with Bayesian methods
1. The best solution is usually intractable
• often requires expensive numerical computation
• it’s still better to understand the real problem and the approximations
• need to choose approximations carefully
Problems with Bayesian methods
2. Some complicated math to do
• Models are simple, algorithms complicated
• May still be worth it
• Bayesian toolboxes are on the way (e.g., VIBES, Intel OpenPNL)
Problems with Bayesian methods
3. Complex models sometimes impede creativity
• Sometimes it’s easier to tune
• Hack first, be principled later
• Probabilistic models give insight that helps with hacking
Benefits of the Bayesian approach
1. Principled modeling of noise and uncertainty
2. Unified model for learning and synthesis
3. Learn all parameters
4. Good results from simple models
5. Lots of good research and algorithms
Course notes, slides, links:
http://www.dgp.toronto.edu/~hertzman/ibl2004

Course evaluation:
http://www.siggraph.org/courses_evaluation
Thank you!