
Page 1 (source: mlg.eng.cam.ac.uk/zoubin/talks/sheffield04.pdf)

Nonparametric Bayesian Methods in Machine Learning: Gaussian Processes, Dirichlet Processes and Extensions

Zoubin Ghahramani
Gatsby Computational Neuroscience Unit, University College London
Center for Automated Learning and Discovery, Carnegie Mellon University

Work done in collaboration with

Wei Chu (Gatsby Unit)
Katherine Heller (Gatsby Unit)
Hyun-Chul Kim (POSTECH, Korea)
Tom Minka (MSR Cambridge)
Carl Edward Rasmussen (Max-Planck Tübingen)
Edward Snelson (Gatsby Unit)

Sheffield Machine Learning Workshop, Sep 2004

Page 2

Outline

• Part I: Gaussian Processes

– Part Ia: A Brief Tutorial on Gaussian Processes for Regression
– Part Ib: Warped Gaussian Processes
– Part Ic: Gaussian Processes for Ordinal Regression
– Part Id: Other Extensions

• Part II: Dirichlet Process Mixtures

– Part IIa: A Brief Tutorial on Dirichlet Process Mixtures
– Part IIb: Bayesian Hierarchical Clustering and DPMs
– Part IIc: Expectation Propagation for DPMs

• Conclusions and Future Work

Page 3

Part Ia: A Brief Tutorial on Gaussian Processes for Regression

Two ways of understanding Gaussian processes (GPs)...

• Starting from multivariate Gaussians

• Starting from linear regression

Page 4

...from multivariate Gaussians to GPs...

univariate Gaussian density for t

p(t) = (2πσ^2)^{−1/2} exp{ −t^2 / (2σ^2) }

multivariate Gaussian density for t = (t_1, t_2, t_3, . . . , t_N)^T

p(t) = |2πΣ|^{−1/2} exp{ −(1/2) t^T Σ^{−1} t }

Σ is an N × N covariance matrix.

Imagine that Σ_ij depends on i and j and we plot samples of t as if they were functions...

(demos: gpdemo2, gpdemo)
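
The demos referenced above are not reproduced here; the following is a minimal sketch of the same idea, drawing t ~ N(0, Σ) with a Σ_ij that varies smoothly with i and j, and plotting the samples as functions. The squared-exponential form and all parameter values are illustrative assumptions of mine, not taken from the slides.

```python
# Draw samples of t from a multivariate Gaussian whose covariance depends smoothly
# on the input locations, and plot each sample as a function of x.
import numpy as np
import matplotlib.pyplot as plt

N = 100
x = np.linspace(-3, 3, N)                      # input locations
# Squared-exponential covariance: nearby inputs -> strongly correlated function values
Sigma = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5 ** 2)
Sigma += 1e-8 * np.eye(N)                      # jitter for numerical stability

L = np.linalg.cholesky(Sigma)
for _ in range(3):                             # three samples of t, plotted as functions
    t = L @ np.random.randn(N)
    plt.plot(x, t)
plt.xlabel("input, x"); plt.ylabel("t(x)")
plt.show()
```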

Page 5

...from linear regression to GPs...

• Linear regression with inputs x_i and outputs t_i:   t_i = Σ_d w_d x_id + ε_i

• Linear regression with basis functions (“kernel trick”):   t_i = Σ_d w_d φ_d(x_i) + ε_i

• Bayesian linear regression with basis functions:

  w_d ∼ N(0, β_d)  [ independent of w_ℓ ],   ε_i ∼ N(0, σ^2)

• Integrating out the weights, w_d, we find:

  E[t_i] = 0,   E[t_i t_j] = Σ_d β_d φ_d(x_i) φ_d(x_j) + δ_ij σ^2  ≡  C_ij

This is a Gaussian process with covariance function C(x_i, x_j) ≡ C_ij.

This Gaussian process has a finite number of basis functions. Many useful GP covariance functions correspond to infinitely many basis functions.
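
As a sanity check on the derivation above, here is a small numerical sketch (mine, not from the talk) comparing the analytic covariance Σ_d β_d φ_d(x_i) φ_d(x_j) + δ_ij σ^2 with a Monte Carlo estimate obtained by sampling the weights. The Gaussian-bump basis functions and all parameter values are arbitrary choices.

```python
# Verify numerically that integrating out the weights of a Bayesian linear model
# gives the GP covariance C_ij = sum_d beta_d phi_d(x_i) phi_d(x_j) + delta_ij sigma^2.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([-1.0, 0.0, 0.5, 2.0])
centres = np.linspace(-2, 2, 5)                              # D = 5 basis functions
Phi = np.exp(-0.5 * (x[:, None] - centres[None, :]) ** 2)    # Phi[i, d] = phi_d(x_i)
beta = np.array([0.5, 1.0, 2.0, 1.0, 0.5])                   # prior variances of the weights
sigma2 = 0.1                                                 # noise variance

# Analytic covariance after integrating out the weights
C = Phi @ np.diag(beta) @ Phi.T + sigma2 * np.eye(len(x))

# Monte Carlo estimate: sample weights and noise, average the outer products of t
S = 200_000
W = rng.normal(size=(S, len(beta))) * np.sqrt(beta)          # w_d ~ N(0, beta_d)
T = W @ Phi.T + rng.normal(scale=np.sqrt(sigma2), size=(S, len(x)))
C_mc = T.T @ T / S

print(np.max(np.abs(C - C_mc)))   # close to zero: the two covariances agree
```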

Page 6

Gaussian Process Regression

A Gaussian Process (GP) places a prior directly on the space of functions such that at any finite selection of points x^(1), . . . , x^(N) the corresponding function values t^(1), . . . , t^(N) have a multivariate Gaussian distribution.

[Figure: samples from the prior (left) and samples from the posterior (right); input x on the horizontal axis, output t(x) on the vertical axis.]

The covariance between two function values t^(i) and t^(j) under the prior is given by the covariance function C(x^(i), x^(j)), which typically decays monotonically with ‖x^(i) − x^(j)‖, encoding smoothness.

GPs are “Bayesian Kernel Regression Machines”

Page 7

Gaussian processes for regression

Given a dataset D = (X_N, t_N), where X_N ≡ {x^(n)}_{n=1}^N and t_N ≡ {t^(n)}_{n=1}^N.

We wish to predict t^(N+1) given a new input vector x^(N+1), i.e. p(t^(N+1) | x^(N+1), D).

Assuming Gaussian noise, the joint on (t_N, t^(N+1)) is Gaussian.

A typical covariance function is:

C_mn = v_1 exp{ − Σ_{d=1}^D ( (x_d^(m) − x_d^(n)) / r_d )^2 } + v_0 δ_mn ,

with parameters Θ = (v_0, v_1, r_1, . . . , r_D).

Page 8

Training and making predictions with a GP

The (negative log) likelihood is used to train the model, usually by conjugate gradient descent on Θ, the parameters of the covariance matrix C_N:

L(Θ) = − log p(t_N | X_N, Θ) = (1/2) log det C_N + (1/2) t_N^T C_N^{−1} t_N + (N/2) log 2π .

The predictive distribution for a new point given the observed data, p(t^(N+1) | t_N, X_{N+1}, Θ), is also Gaussian.

The mean and variance are calculated from the inverse covariance matrix C_{N+1}^{−1} and the target values t_N.
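
A minimal sketch of training and prediction with the covariance function of the previous slide. This is my own illustrative code, not from the talk; gradient-based optimisation of Θ is omitted (the negative log likelihood below would simply be handed to an optimiser such as conjugate gradients).

```python
# GP regression: the ARD covariance function, the negative log likelihood L(Theta),
# and the Gaussian predictive mean/variance at a new input.
import numpy as np

def cov(XA, XB, v1, r):
    """C(x_m, x_n) = v1 * exp(-sum_d ((x_md - x_nd)/r_d)^2), without the noise term."""
    D2 = np.sum(((XA[:, None, :] - XB[None, :, :]) / r) ** 2, axis=-1)
    return v1 * np.exp(-D2)

def neg_log_likelihood(X, t, v0, v1, r):
    N = len(t)
    C = cov(X, X, v1, r) + v0 * np.eye(N)          # v0 * delta_mn noise term
    sign, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, t)
    return 0.5 * logdet + 0.5 * t @ alpha + 0.5 * N * np.log(2 * np.pi)

def predict(X, t, x_new, v0, v1, r):
    C = cov(X, X, v1, r) + v0 * np.eye(len(t))
    k = cov(x_new[None, :], X, v1, r).ravel()      # covariances with the training inputs
    mean = k @ np.linalg.solve(C, t)
    var = v1 + v0 - k @ np.linalg.solve(C, k)      # prior variance minus the explained part
    return mean, var

# Tiny usage example with made-up data
X = np.array([[-1.0], [0.0], [1.0]]); t = np.array([0.2, 1.0, 0.3])
print(neg_log_likelihood(X, t, v0=0.1, v1=1.0, r=np.array([1.0])))
print(predict(X, t, np.array([0.5]), v0=0.1, v1=1.0, r=np.array([1.0])))
```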

Page 9

Using Gaussian Processes for Classification

Binary classification problem: given a data set D = {(x^(n), y^(n))}_{n=1}^N, where y^(n) ∈ {−1, +1}, infer class label probabilities at new points.

[Figure: a latent function f over a one-dimensional input x with regions labelled y = +1 and y = −1 (left), and a latent function f over a two-dimensional input (right).]

There are many ways to relate function values f^(n) to class probabilities:

p(y|f) =
  1 / (1 + exp(−yf))       sigmoid (logistic)
  Φ(yf)                    cumulative normal (probit)
  H(yf)                    threshold
  ε + (1 − 2ε) H(yf)       robust threshold
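
The four likelihoods in the table, written out as code. This is a sketch of mine assuming SciPy's normal CDF for the probit; the robust-threshold noise level ε is a free parameter chosen arbitrarily here.

```python
# The four choices of p(y|f) for relating a latent function value f to a label y in {-1,+1}.
import numpy as np
from scipy.stats import norm

def sigmoid(y, f):            # logistic
    return 1.0 / (1.0 + np.exp(-y * f))

def probit(y, f):             # cumulative normal
    return norm.cdf(y * f)

def threshold(y, f):          # hard threshold H(yf)
    return np.heaviside(y * f, 1.0)

def robust_threshold(y, f, eps=0.05):
    return eps + (1 - 2 * eps) * threshold(y, f)

f = 0.7
print([p(+1, f) for p in (sigmoid, probit, threshold)], robust_threshold(+1, f))
```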

Page 10

Part Ib: Warped Gaussian Process Regression

(with Ed Snelson, 2004)

Page 11

Motivations

• The Gaussian process (GP) is a way of performing nonlinear nonparametric regression, which gives excellent performance, and allows full Bayesian predictions

• However, it assumes the target data to be distributed as a multivariate Gaussian, with Gaussian noise

• This is often an unreasonable assumption to make for the raw data

• Standard practice is to apply a preprocessing transformation in an ad-hoc manner preceding the modelling with the GP

• The warped GP incorporates the preprocessing as an integral part of the probabilistic model

• This makes GPs more flexible so they can model non-Gaussian processes

Page 12

Warping the observation space

1. We assume a vector of latent targets z_N that is modelled by a GP.

2. We make a transformation from the true observation space t to the latent space z by mapping each observation through the same function f,

   z^(n) = f(t^(n); Ψ)   ∀n .

3. We require f to be monotonic and mapping onto the whole of the real line; otherwise probability measure will not be conserved in the transformation, and we will not induce a valid distribution over the targets.

Page 13

Training the warped GP

The negative log likelihoods:

GP:  L(Θ) = − log p(t_N | X_N, Θ) = (1/2) log det C_N + (1/2) t_N^T C_N^{−1} t_N + (N/2) log 2π ,

WGP: L(Θ, Ψ) = − log p(t_N | X_N, Θ, Ψ)
             = (1/2) log det C_N + (1/2) f(t_N)^T C_N^{−1} f(t_N) − Σ_{n=1}^N log ∂f(t)/∂t |_{t^(n)} + (N/2) log 2π

This is used to train the warped GP exactly as for the regular GP: both the hyperparameters of the covariance matrix and the parameters of the warping function are learnt simultaneously.

Page 14

Predictions with the warped GP

The predictive distribution at a new point in latent space is as for a regular GP:

p(z^(N+1) | x^(N+1), D, Θ) = N( z^(N+1), σ^2_{N+1} ) .

To find the distribution in the observation space we pass that Gaussian through the nonlinear warping function, giving

p(t^(N+1) | x^(N+1), D, Θ, Ψ) = ( f′(t^(N+1)) / √(2π σ^2_{N+1}) ) exp[ −(1/2) ( (f(t^(N+1)) − z^(N+1)) / σ_{N+1} )^2 ] .

Page 15

Choosing the warping function

The warping function has to be chosen according to some constraints:

• monotonicity

• covering the whole real line

• derivatives with respect to t and the parameters Ψ easily evaluated.

We chose a neural-net-style sum of tanh functions, although several other choices are possible:

f(t; Ψ) = t + Σ_{i=1}^I a_i tanh( b_i (t + c_i) ) ,   a_i, b_i ≥ 0  ∀i .

Page 16

A simple 1D regression task

• 101 points, regularly spaced from −π to π on the x axis, were generatedwith Gaussian noise about a sine function.

• Warped these points through t = z^{1/3} to get the data shown below.

[Figure: (a) the warped data t against the input x over [−π, π]; (b) the predictive density P(t | x = −π/4).]

Page 17

Results for some real datasets

Dataset    D    t_min           t_max           N_train   N_test
creep      30   18 MPa          530 MPa         800       1266
abalone    8    1 yr            29 yrs          1000      3177
ailerons   40   −3.0 × 10^−3    −3.5 × 10^−4    1000      6154

Dataset    Model       Absolute error   Squared error   − log p(t)
creep      GP          16.4             654              4.46
           GP + log    15.6             587              4.24
           warped GP   15.0             554              4.19
abalone    GP          1.53             4.79             2.19
           GP + log    1.48             4.62             2.01
           warped GP   1.47             4.63             1.96
ailerons   GP          1.23 × 10^−4     3.05 × 10^−8    −7.31
           warped GP   1.18 × 10^−4     2.72 × 10^−8    −7.45

[Figure: learned warping functions z = f(t) for (a) sine, (b) creep, (c) abalone, (d) ailerons.]

Page 18

Part Ib: Conclusions

• We have shown that the warped GP is capable of finding extra structure in the data through the transformations it learns

• It allows standard preprocessing transforms, such as log, to be discovered automatically and improved on, rather than be applied in an ad-hoc manner

• Computationally cheap to apply as an add-on to the regular GP

• Forms of non-Gaussianity which do not directly depend on the output values (such as heavy-tailed noise) are not captured by the method proposed here...

• ...but GPs can also be extended to include both warping and non-Gaussian noise

Page 19

Part Ic: Gaussian Processes for Ordinal Regression

(with Wei Chu, 2004)

• Ordinal Regression: target variable is discrete with values that are ordered.

• Examples:

– Movie Rankings (0-5 stars)
– Severity of Disease (low, medium, high)
– Grades (A-F)

• Related to predicting rankings and Collaborative Filtering.

Page 20

A Likelihood Function for Ordinal Regression

• Likelihood defined using thresholds on real line:

  P_1(y_i | f(x_i)) = 1 if b_{y_i − 1} ≤ f(x_i) ≤ b_{y_i}, and 0 otherwise.

• Thresholds b_0 = −∞ < b_1 < b_2 < . . . < b_{r−1} < b_r = +∞ divide the real line into r contiguous intervals. This maps the real function value f(x_i) into the discrete variable y_i ∈ {1, . . . , r} while enforcing the ordinal constraints.

• Adding Gaussian noise to f with zero mean and unknown variance σ^2:

  P(y_i | f(x_i)) = ∫ P_1(y_i | f(x_i) + δ_i) N(δ_i; 0, σ^2) dδ_i = Φ(z_{i1}) − Φ(z_{i2})

  where z_{i1} = (b_{y_i} − f(x_i)) / σ,  z_{i2} = (b_{y_i − 1} − f(x_i)) / σ,  and Φ(z) = ∫_{−∞}^z N(ς; 0, 1) dς.

• Note that binary classification is a special case of ordinal regression when r = 2, and in this case the likelihood function becomes the probit function.
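
A sketch of this ordinal likelihood as code, with illustrative thresholds and noise level of my own choosing. The final line checks that the r category probabilities sum to one; with r = 2 the expression reduces to the probit likelihood mentioned above.

```python
# P(y_i | f(x_i)) = Phi(z_i1) - Phi(z_i2) for r ordered categories, with b_0 = -inf, b_r = +inf.
import numpy as np
from scipy.stats import norm

def ordinal_likelihood(y, f, b, sigma):
    """y in {1, ..., r}; b holds the finite thresholds b_1 .. b_{r-1}."""
    b_full = np.concatenate(([-np.inf], b, [np.inf]))   # prepend b_0, append b_r
    z1 = (b_full[y] - f) / sigma
    z2 = (b_full[y - 1] - f) / sigma
    return norm.cdf(z1) - norm.cdf(z2)

b = np.array([-1.0, 0.0, 1.0])        # r = 4 ordinal values
f = 0.3
probs = np.array([ordinal_likelihood(y, f, b, sigma=0.5) for y in range(1, 5)])
print(probs, probs.sum())             # the four probabilities sum to 1
```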

Page 21

Approximate Inference

• Given a Gaussian process prior P(f) and this ordinal likelihood function, the posterior over function values is non-Gaussian:

  P(f | D) ∝ P(D | f) P(f)

• We approximate it using two different algorithms:

  – Laplace approximation
  – Expectation Propagation (EP)

• We compare to a competing SVM-based approach

Page 22

Results: Abalone

Output discretized into four ordinal values (≤ 6, = 8, = 11, ≥ 14).
Data set of 1983 points.
AAE = average absolute error (treating ranks as consecutive integers).
Error bars over 20 repetitions.

[Figure: AAE versus training data size (200–600) for the SVM, MAP, and EP approaches.]

Page 23

Results: EachMovie Collaborative Filtering

Movie rankings (0 to 5 stars) from Compaq’s EachMovie service
d = 72916 users (features)
n = 449 movies (data points)
very sparse data: 2.4% non-empty
test on new movies for user “id=52647”

[Figure: AAE versus training data size (100–300) for the MAP, EP, and SVM approaches.]

Page 24

Results: Prostate Cancer Gleason Score Prediction

Microarray data of 12600 genes that might be related to prostate cancer
n = 52 samples of prostate tumor
Gleason score y from 6–10 given by the pathologist, reflecting the level of differentiation of the glands in the prostate tumour
Irrelevant genes were removed using ARD, down to 40 genes
26 training data points, 26 test data points, 20 splits

[Figure: AAE versus number of selected genes (40–12600) for the SVM, MAP, and EP approaches.]

Page 25

Part Id: Other Extensions

• Mixtures of Gaussian Process Experts (with Carl Rasmussen)

• EM-EP for Gaussian Process Classification (with Hyun-Chul Kim)

Page 26

Mixtures of Gaussian Process Experts
(with Carl Rasmussen, 2002)

Motivation:

1. Difficult to specify flexible GP covariance structures:

[Figure: three example data sets, e.g. varying spatial frequency, varying signal amplitude, varying noise etc.]

2. Predictions and training require C^{−1}, which has O(n^3) complexity.

Solution: the divide-and-conquer strategy of Mixture of Experts.

A (countably infinite) mixture of Gaussian Processes allows:

• different covariance functions in different parts of space

• divide-and-conquer efficiency (by splitting O(n^3) between experts).

Page 27

EM-EP algorithm for Gaussian Process Classification

(with Hyun-Chul Kim, 2003)

Motivation : To learn the kernel parameters for classification.

An Approach :

• Treat the Gaussian process as a latent variable model.

• Use EM Algorithm

  – Infer the latent function values (E-step)
  – Optimize the kernel parameters (M-step)

• In the E-step we approximate P(f | D, Θ) with a multivariate Gaussian Q(f) using the Expectation Propagation (EP) algorithm

Related work by Opper and Winther; Csato; Minka; Seeger.

Page 28

Conclusions

• Gaussian processes provide a general framework for non-parametric function approximation

• Gaussian processes can be extended in several ways:

  – classification
  – warped regression
  – ordinal regression
  – mixtures of Gaussian process experts

• It is straightforward to infer the kernel from data using Bayesian methods(e.g. MCMC, Laplace, EM-EP)

Page 29

Part II: Dirichlet Process Mixtures (Infinite Mixtures)

Page 30

Part IIa: A Brief Tutorial on Dirichlet Process Mixtures

Consider using a finite mixture of K components to model a data set D = {x^(1), . . . , x^(n)}:

p(x^(i) | θ) = Σ_{j=1}^K π_j p_j(x^(i) | θ_j)
             = Σ_{j=1}^K P(s^(i) = j | π) p_j(x^(i) | θ_j, s^(i) = j)

The setting of the indicator variable s^(i) = j means that data point x^(i) was generated from Gaussian j.

Distribution of indicators s = (s^(1), . . . , s^(n)) given π is multinomial:

P(s^(1), . . . , s^(n) | π) = Π_{j=1}^K π_j^{n_j} ,   where n_j := Σ_{i=1}^n δ(s^(i), j) .

Page 31

Dirichlet Process Mixtures (Infinite Mixtures) - II

Distribution of indicators s = (s^(1), . . . , s^(n)) given π is multinomial:

P(s^(1), . . . , s^(n) | π) = Π_{j=1}^K π_j^{n_j} ,   where n_j := Σ_{i=1}^n δ(s^(i), j) .

If we choose the mixing proportions π to have a Dirichlet prior ...

p(π | α) = ( Γ(α) / Γ(α/K)^K ) Π_{j=1}^K π_j^{α/K − 1}

... then we can integrate out the mixing proportions, π, to obtain

P(s^(1), . . . , s^(n) | α) = ∫ dπ P(s | π) P(π | α) = ( Γ(α) / Γ(n + α) ) Π_{j=1}^K Γ(n_j + α/K) / Γ(α/K)

We can do this even if K →∞ !

This is an infinite mixture model (or Dirichlet Process Mixture)
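
A small numerical check (mine, not from the slides) of the collapsed prior above: summing P(s^(1), . . . , s^(n) | α) over all K^n possible indicator settings gives 1, as it should for a properly normalised distribution.

```python
# Check that P(s | alpha) = Gamma(alpha)/Gamma(n+alpha) * prod_j Gamma(n_j + alpha/K)/Gamma(alpha/K)
# sums to one over all assignments of n points to K components.
import itertools
import numpy as np
from scipy.special import gammaln

def log_P_s(counts, alpha, K):
    n = counts.sum()
    return (gammaln(alpha) - gammaln(n + alpha)
            + np.sum(gammaln(counts + alpha / K) - gammaln(alpha / K)))

K, n, alpha = 3, 4, 1.5
total = 0.0
for s in itertools.product(range(K), repeat=n):          # all K^n indicator settings
    counts = np.bincount(np.array(s), minlength=K)
    total += np.exp(log_P_s(counts, alpha, K))
print(total)    # -> 1.0
```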

Page 32

Dirichlet Process Mixtures (Infinite Mixtures) - III

DPMs have the property that the prior probability of a new data point joining acluster is proportional to the number of points already in that cluster.

Starting from   P(s | α) = ( Γ(α) / Γ(n + α) ) Π_{j=1}^K Γ(n_j + α/K) / Γ(α/K)

• Conditional Probabilities: Finite K

  P(s^(i) = j | s_{−i}, α) = (n_{−i,j} + α/K) / (n − 1 + α)

  where s_{−i} denotes all indices except i, and n_{−i,j} := Σ_{ℓ≠i} δ(s^(ℓ), j)

• Conditional Probabilities: Infinite K

  Taking the limit as K → ∞ yields the conditionals

  P(s^(i) = j | s_{−i}, α) = n_{−i,j} / (n − 1 + α)   for represented j
                           = α / (n − 1 + α)          for all j not represented

Left-over mass, α, ⇒ countably infinite number of indicator settings.
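
Sampling indicators sequentially from these K → ∞ conditionals gives the familiar "Chinese restaurant" construction: each point joins an existing cluster with probability proportional to its size, or starts a new cluster with probability proportional to α. The following is a minimal illustrative sketch of mine, not code from the talk.

```python
# Sample cluster indicators from the DPM prior via the sequential conditionals above.
import numpy as np

def sample_crp(n, alpha, rng=None):
    rng = rng or np.random.default_rng()
    s, counts = [], []                       # indicators and cluster sizes
    for i in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()                 # denominator is n - 1 + alpha for the current point
        j = rng.choice(len(probs), p=probs)
        if j == len(counts):
            counts.append(1)                 # open a new cluster
        else:
            counts[j] += 1
        s.append(j)
    return s, counts

print(sample_crp(20, alpha=1.0, rng=np.random.default_rng(0)))
```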

Page 33

Inference in DPMs

• Markov chain Monte Carlo (Escobar and West 1995; Neal 1998; Rasmussen 2000)

• Expectation Propagation (Minka and Ghahramani, 2003)

• Variational Bayes (Blei and Jordan, 2004)

• Hierarchical Clustering (Ghahramani and Heller, 2004)

Page 34

Part IIb: Inference in DPMs by Hierarchical Clustering
(with Katherine Heller, 2004)

Hierarchical Clustering Algorithm:

When merging T_i and T_j to form T_k the algorithm computes:

P(D_k | T_k) = π_k P(D_k | H_1) + (1 − π_k) P(D_i | T_i) P(D_j | T_j)

where π_k is the prior on merging, which is computed recursively.

This estimates the marginal likelihood of the data D_k = D_i ∪ D_j.
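
A minimal sketch of the merge computation above, kept in log space for numerical stability. The merge prior π_k and the merged-cluster marginal likelihood P(D_k | H_1) are assumed to be supplied, since their recursive computation is only referenced, not spelled out, on this slide; the numbers in the usage line are toy values to show the call.

```python
# Combine the merged hypothesis and the two subtrees into log P(D_k | T_k).
import numpy as np
from scipy.special import logsumexp

def log_p_tree(log_pi_k, log_p_Dk_H1, log_p_Di_Ti, log_p_Dj_Tj):
    """log P(D_k|T_k) = log[ pi_k P(D_k|H1) + (1 - pi_k) P(D_i|T_i) P(D_j|T_j) ]."""
    merged = log_pi_k + log_p_Dk_H1
    split = np.log1p(-np.exp(log_pi_k)) + log_p_Di_Ti + log_p_Dj_Tj
    return logsumexp([merged, split])

# Toy numbers only, to show the call
print(log_p_tree(np.log(0.3), -10.0, -4.0, -5.0))
```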

Page 35

Inference in DPMs by Hierarchical Clustering

Lemma: The marginal likelihood of a Dirichlet Process Mixture is:

p(D_k) = Σ_{v∈V} p(D_k | v) p(v) = Σ_{v∈V} [ α^{m_v} Π_{ℓ=1}^{m_v} Γ(n_{v_ℓ}) / ( Γ(n_k + α) / Γ(α) ) ] Π_{ℓ=1}^{m_v} p(D_{v_ℓ})

where V is the set of all partitionings of data set D_k.

Theorem: The Bayesian Hierarchical Clustering algorithm computes:

p(D_k | T_k) = Σ_{v∈V_k} [ α^{m_v} Π_{ℓ=1}^{m_v} Γ(n_{v_ℓ}) / d_k ] Π_{ℓ=1}^{m_v} p(D_{v_ℓ})

where V_k is the set of all tree-consistent partitionings of data set D_k for tree T_k, and d_k is computed recursively.

Corollary: For any tree T_k, the following is a lower bound on the marginal likelihood of a DPM:

( d_k Γ(α) / Γ(n_k + α) ) p(D_k | T_k) ≤ p(D_k)

Note: This is a new lower bound which can be simply and deterministically computed in O(n log n). A greedy tree can be built in O(n^2).

Page 36

Tree-Consistent Partitions

[Figure: a binary tree over leaves 1, 2, 3, 4 that first merges 1 and 2, then adds 3, then 4.]

Consider the above tree and all 15 possible partitions of {1,2,3,4}:

(1)(2)(3)(4), (1 2)(3)(4), (1 3)(2)(4), (1 4)(2)(3), (2 3)(1)(4), (2 4)(1)(3), (3 4)(1)(2), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3), (1 2 3)(4), (1 2 4)(3), (1 3 4)(2), (2 3 4)(1), (1 2 3 4)

(1 2)(3)(4) and (1 2 3)(4) are tree-consistent partitions.
(1)(2 3)(4) and (1 3)(2 4) are not tree-consistent partitions.

Page 37

Part IIc: Inference in DPMs by Expectation Propagation

(with Tom Minka, 2003)

EP for Infinite Mixtures

Page 38

Expectation Propagation (EP)

Data (iid) D = {x^(1), . . . , x^(N)}, model p(x | θ), with parameter prior p(θ).

The parameter posterior is:   p(θ | D) = (1 / p(D)) p(θ) Π_{i=1}^N p(x^(i) | θ)

We can write this as a product of factors over θ:   p(θ) Π_{i=1}^N p(x^(i) | θ) = Π_{i=0}^N f_i(θ)

where f_0(θ) := p(θ) and f_i(θ) := p(x^(i) | θ), and we will ignore the constants.

We wish to approximate this by a product of simpler terms:   q(θ) := Π_{i=0}^N f̃_i(θ)

min_{q(θ)}    KL( Π_{i=0}^N f_i(θ) ‖ Π_{i=0}^N f̃_i(θ) )                         (intractable)

min_{f̃_i(θ)}  KL( f_i(θ) ‖ f̃_i(θ) )                                            (simple, non-iterative, inaccurate)

min_{f̃_i(θ)}  KL( f_i(θ) Π_{j≠i} f̃_j(θ) ‖ f̃_i(θ) Π_{j≠i} f̃_j(θ) )              (simple, iterative, accurate) ← EP

Page 39

Expectation Propagation

Input: f_0(θ) . . . f_N(θ)
Initialize: f̃_0(θ) = f_0(θ), f̃_i(θ) = 1 for i > 0, q(θ) = Π_i f̃_i(θ)
repeat
  for i = 0 . . . N do
    Deletion:   q^{\i}(θ) ← q(θ) / f̃_i(θ) = Π_{j≠i} f̃_j(θ)
    Projection: f̃_i^{new}(θ) ← argmin_{f(θ)} KL( f_i(θ) q^{\i}(θ) ‖ f(θ) q^{\i}(θ) )
    Inclusion:  q(θ) ← f̃_i^{new}(θ) q^{\i}(θ)
  end for
until convergence

The EP algorithm. Some variations are possible: here we assumed that f_0 is in the exponential family, and we updated sequentially over i. The names for the steps (deletion, projection, inclusion) are not the same as in (Minka, 2001).

• Minimizes the opposite KL to variational methods
• f̃_i(θ) in exponential family → projection step is moment matching
• Loopy belief propagation and assumed density filtering are special cases
• No convergence guarantee (although convergent forms can be developed)
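
To make the loop concrete, here is a runnable sketch (my own illustration, not code from the talk) of EP with Gaussian approximating factors on a toy "clutter" model in the spirit of (Minka, 2001): each observation comes from N(θ, 1) with probability 1 − w and from a broad N(0, a) clutter component otherwise, with a Gaussian prior on θ. Because the sites are Gaussian, the projection step is moment matching. All model settings and data below are arbitrary.

```python
# EP for the mean of a Gaussian observed in clutter, with Gaussian site approximations.
import numpy as np

def norm_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

w, a, prior_var = 0.25, 10.0, 100.0
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 1.0, 20), rng.normal(0.0, np.sqrt(a), 6)])
N = len(x)

# The Gaussian prior plays the role of f_0 and is exact, so it is folded into q from the
# start; the data sites f~_i start at 1 (precision tau = 0, precision-mean nu = 0).
tau = np.zeros(N)
nu = np.zeros(N)
q_tau, q_nu = 1.0 / prior_var, 0.0            # q(theta) = prior * prod_i f~_i

for sweep in range(50):
    for i in range(N):
        # Deletion: remove site i from q to get the cavity q^\i = N(m, v)
        cav_tau, cav_nu = q_tau - tau[i], q_nu - nu[i]
        if cav_tau <= 0:                      # skip updates that would give an improper cavity
            continue
        m, v = cav_nu / cav_tau, 1.0 / cav_tau

        # Projection: moment-match the tilted distribution f_i(theta) q^\i(theta)
        Z = (1 - w) * norm_pdf(x[i], m, v + 1.0) + w * norm_pdf(x[i], 0.0, a)
        r = (1 - w) * norm_pdf(x[i], m, v + 1.0) / Z      # responsibility of the signal part
        m1, v1 = (m + v * x[i]) / (v + 1.0), v / (v + 1.0)
        mean = r * m1 + (1 - r) * m
        second = r * (v1 + m1 ** 2) + (1 - r) * (v + m ** 2)
        var = second - mean ** 2

        # Inclusion: new q, and the site parameters that make q = f~_i * q^\i
        q_tau, q_nu = 1.0 / var, mean / var
        tau[i], nu[i] = q_tau - cav_tau, q_nu - cav_nu

print("posterior mean", q_nu / q_tau, "posterior var", 1.0 / q_tau)
```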

Page 40

Conclusions

• Two examples of non-parametric Bayesian methods

– Gaussian Processes: for regression, classification, ordinal regression, etc.
– Dirichlet Process Mixtures: for density estimation and clustering

• Both can be thought of as models with “infinitely” many parameters

• Real data require flexible models

• Many different and fast approximate inference methods are available:

– MCMC
– Variational methods
– EP
– Hierarchical clustering