School of Computer Science
Probabilistic Graphical Models
Maximum Likelihood Learning of Undirected GMs
Eric Xing, Lecture 8, February 10, 2014
Reading: MJ Chap. 9 and 11
© Eric Xing @ CMU, 2005-2014



Page 2: Why Undirected Graphical Models?

Sometimes an undirected association graph makes more sense and/or is more informative: gene expressions may be influenced by unobserved factors that are post-transcriptionally regulated.

The unavailability of the state of B results in a constraint over A and C.

[figure: three small graphs over nodes A, B, C illustrating the dependence induced between A and C when B is unobserved]

Page 3: ML Structural Learning via Neighborhood Selection for Completely Observed MRFs

Data: $M$ i.i.d. samples of an $n$-node MRF,

$$\big(x_1^{(1)},\ldots,x_n^{(1)}\big),\ \big(x_1^{(2)},\ldots,x_n^{(2)}\big),\ \ldots,\ \big(x_1^{(M)},\ldots,x_n^{(M)}\big)$$

Page 4: Gaussian Graphical Models

Multivariate Gaussian density:

$$p(\mathbf{x} \mid \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\Big\{ -\tfrac{1}{2} (\mathbf{x}-\mu)^T \Sigma^{-1} (\mathbf{x}-\mu) \Big\}$$

WOLOG: let $\mu = 0$ and $Q = \Sigma^{-1}$. We can view this as a continuous Markov random field with potentials defined on every node and edge:

$$p(x_1,\ldots,x_n \mid \mu = 0, Q) = \frac{|Q|^{1/2}}{(2\pi)^{n/2}} \exp\Big\{ -\tfrac{1}{2} \sum_i q_{ii} x_i^2 - \sum_{i<j} q_{ij} x_i x_j \Big\}$$

Page 5: Pairwise MRF (e.g., Ising Model)

Assuming the nodes are discrete and the edges are weighted, then for a sample $\mathbf{x}_d$ we have

$$p(\mathbf{x}_d \mid \Theta) = \exp\Big\{ \sum_{i \in V} \theta_{ii} x_{d,i} + \sum_{(i,j) \in E} \theta_{ij}\, x_{d,i}\, x_{d,j} - A(\Theta) \Big\}$$

Page 6: The Covariance and the Precision Matrices

Covariance matrix $\Sigma$: what is its graphical model interpretation?

Precision matrix $Q = \Sigma^{-1}$: what is its graphical model interpretation?

Page 7: Sparse Precision vs. Sparse Covariance in GGM

[figure: chain graph over nodes 1 - 2 - 3 - 4 - 5]

The precision matrix of the chain is sparse (tridiagonal; entries as printed on the slide),

$$\Sigma^{-1} = \begin{pmatrix} 5 & 9 & 0 & 0 & 0 \\ 9 & 4 & 8 & 0 & 0 \\ 0 & 8 & 3 & 7 & 0 \\ 0 & 0 & 7 & 2 & 6 \\ 0 & 0 & 0 & 6 & 1 \end{pmatrix}$$

while the covariance matrix is dense:

$$\Sigma = \begin{pmatrix} 0.08 & 0.07 & 0.12 & 0.03 & 0.15 \\ 0.07 & 0.04 & 0.07 & 0.01 & 0.08 \\ 0.12 & 0.07 & 0.10 & 0.02 & 0.13 \\ 0.03 & 0.01 & 0.02 & 0.03 & 0.15 \\ 0.15 & 0.08 & 0.13 & 0.15 & 0.10 \end{pmatrix}$$

$$\big(\Sigma^{-1}\big)_{15} = 0 \;\Rightarrow\; X_1 \perp X_5 \mid X_{\mathrm{nbrs}(1)} \text{ (or } X_{\mathrm{nbrs}(5)}\text{)}, \qquad \Sigma_{15} \neq 0 \;\Rightarrow\; X_1 \not\perp X_5$$
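The picture above can be checked numerically. A minimal sketch (the matrix values are illustrative, not the slide's): build a valid tridiagonal precision for the chain, invert it, and observe that the covariance is dense while the precision zeros encode conditional independence.

```python
import numpy as np

# Chain X1 - X2 - X3 - X4 - X5 with an illustrative tridiagonal precision
# matrix; diagonal dominance keeps it positive definite.
n = 5
Q = 2.0 * np.eye(n)
for i in range(n - 1):
    Q[i, i + 1] = Q[i + 1, i] = -0.8

Sigma = np.linalg.inv(Q)   # the covariance is generically dense

# Zero precision entry <-> conditional independence given the other nodes.
print(np.isclose(Q[0, 4], 0.0))       # True:  X1 _|_ X5 | X2, X3, X4
print(np.isclose(Sigma[0, 4], 0.0))   # False: X1 and X5 are marginally dependent
```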

Page 8: Another Example

How do we estimate this MRF? What if p >> n? The MLE does not exist in general! What about learning only a "sparse" graphical model? This is possible when s = o(n). Very often it is the structure of the GM that is more interesting ...

Page 9: Recall Lasso

$$\hat\beta = \arg\min_{\beta} \; \| y - X\beta \|_2^2 + \lambda \| \beta \|_1$$

Page 10: Graph Regression

Lasso neighborhood selection: regress each node $X_i$ on all the other nodes with an $\ell_1$ penalty; the nonzero coefficients define the estimated neighborhood of $X_i$.
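The neighborhood-selection recipe can be sketched end to end. This is a toy illustration, not the lecture's code: a hand-rolled coordinate-descent lasso (so no external solver is assumed) regresses X1 on the remaining nodes of a chain GGM; the chain weights, sample size, and λ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from a chain GGM X1 - X2 - X3 - X4 - X5.
# Precision values are illustrative, chosen to be positive definite.
n = 5
Q = 2.0 * np.eye(n)
for i in range(n - 1):
    Q[i, i + 1] = Q[i + 1, i] = -0.5
X = rng.multivariate_normal(np.zeros(n), np.linalg.inv(Q), size=4000)

def lasso_cd(A, y, lam, iters=200):
    """Minimize 0.5*||y - A b||^2 + lam*||b||_1 by coordinate descent."""
    b = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(A.shape[1]):
            r = y - A @ b + A[:, j] * b[j]   # residual excluding coordinate j
            rho = A[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# Neighborhood selection (Meinshausen & Buhlmann): lasso-regress X1 on the
# rest; the nonzero coefficients are X1's estimated neighbors.
beta = lasso_cd(X[:, 1:], X[:, 0], lam=150.0)
neighbors = {j + 1 for j in range(4) if abs(beta[j]) > 1e-6}
print(neighbors)
```

On the chain, X1's only true neighbor is X2 (node index 1 here); marginal correlations with the farther nodes are screened out once X2 enters the regression.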

Page 11: Graph Regression (cont.)

Page 12: Graph Regression (cont.)

It can be shown that, given i.i.d. samples and under several technical conditions (e.g., the "irrepresentable" condition), the recovered structure is "sparsistent" even when p >> n.

Page 13: Learning Ising Models (i.e., Pairwise MRFs)

Assuming the nodes are discrete and the edges are weighted, then for a sample $\mathbf{x}_d$ we have

$$p(\mathbf{x}_d \mid \Theta) = \exp\Big\{ \sum_{i \in V} \theta_{ii} x_{d,i} + \sum_{(i,j) \in E} \theta_{ij}\, x_{d,i}\, x_{d,j} - A(\Theta) \Big\}$$

It can be shown, following the same logic, that we can use $\ell_1$-regularized logistic regression to obtain a sparse estimate of the neighborhood of each variable in the discrete case.
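A sketch of the discrete case under invented toy data (a 4-spin Ising chain with edge weight 1.0; all values are illustrative): ℓ1-regularized logistic regression, fit here by proximal gradient (ISTA), recovers the neighborhood of x0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ising chain x0 - x1 - x2 - x3, spins in {-1,+1}, edge weight 1.0
# (an illustrative model, not the lecture's data).  Sample it exactly by
# enumerating all 16 states.
w = 1.0
states = np.array([[int(b) * 2 - 1 for b in np.binary_repr(s, width=4)]
                   for s in range(16)])
energy = w * (states[:, 0] * states[:, 1] + states[:, 1] * states[:, 2]
              + states[:, 2] * states[:, 3])
prob = np.exp(energy)
prob /= prob.sum()
X = states[rng.choice(16, size=5000, p=prob)]

def l1_logreg(A, y, lam, step=1e-4, iters=2000):
    """ISTA (proximal gradient) for L1 logistic regression, y in {-1,+1}."""
    b = np.zeros(A.shape[1])
    for _ in range(iters):
        g = -(A * y[:, None]).T @ (1.0 / (1.0 + np.exp(y * (A @ b))))
        b = b - step * g
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft-threshold
    return b

# Sparse estimate of x0's neighborhood: regress x0 on the other spins.
beta = l1_logreg(X[:, 1:], X[:, 0], lam=400.0)
neighbors = {j + 1 for j in range(3) if abs(beta[j]) > 1e-3}
print(neighbors)
```

Even though x0 and x2 are strongly correlated marginally, the penalty zeros out x2 and x3 once x1 is in the model, matching the chain structure.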

Page 14: Consistency

Theorem: for the graphical regression algorithm, under certain verifiable conditions (omitted here for simplicity), the recovered structure is consistent (sparsistent) even when p >> n.

Note that from this theorem one should see that the regularizer is not actually used to introduce an "artificial" sparsity bias; rather, it is a device to ensure consistency under finite data and high-dimension conditions.

Page 15: ML Parameter Estimation for Completely Observed MRFs of Given Structure

The data: { (z1,x1), (z2,x2), (z3,x3), ..., (zN,xN) }

[figure: a 4-node undirected graph over X1, X2, X3, X4]

Page 16: Recap: MLE for BNs

Assuming the parameters of each CPD are globally independent and all nodes are fully observed, the log-likelihood function decomposes into a sum of local terms, one per node:

$$\ell(\theta; D) = \log p(D \mid \theta) = \log \prod_n \Big( \prod_i p(x_{n,i} \mid \mathbf{x}_{n,\pi_i}, \theta_i) \Big) = \sum_i \Big( \sum_n \log p(x_{n,i} \mid \mathbf{x}_{n,\pi_i}, \theta_i) \Big)$$

For tabular CPDs, the MLE is given by the normalized counts:

$$\hat\theta^{\,ML}_{ijk} = \frac{n_{ijk}}{\sum_{k'} n_{ijk'}}$$

Page 17: MLE for Undirected Graphical Models

For directed graphical models, the log-likelihood decomposes into a sum of terms, one per family (node plus parents). For undirected graphical models, the log-likelihood does not decompose, because the normalization constant Z is a function of all the parameters:

$$P(x_1, \ldots, x_n) = \frac{1}{Z} \prod_{c \in C} \psi_c(\mathbf{x}_c), \qquad Z = \sum_{x_1, \ldots, x_n} \prod_{c \in C} \psi_c(\mathbf{x}_c)$$

In general, we will need to do inference (i.e., marginalization) to learn the parameters of undirected models, even in the fully observed case.

Page 18: Log-Likelihood for UGMs with Tabular Clique Potentials

Sufficient statistics: for a UGM (V, E), the number of times a configuration x (i.e., X_V = x) is observed in a dataset D = {x_1, ..., x_N} can be represented as follows:

$$m(\mathbf{x}) \stackrel{\mathrm{def}}{=} \sum_n \delta(\mathbf{x}, \mathbf{x}_n) \;\;\text{(total count)}, \qquad m(\mathbf{x}_c) \stackrel{\mathrm{def}}{=} \sum_{\mathbf{x}_{V \setminus c}} m(\mathbf{x}) \;\;\text{(clique count)}$$

In terms of the counts, the log-likelihood is given by:

$$\log p(D \mid \theta) = \log \prod_n p(\mathbf{x}_n \mid \theta) = \sum_{\mathbf{x}} m(\mathbf{x}) \log p(\mathbf{x} \mid \theta)$$

$$\ell = \sum_{\mathbf{x}} m(\mathbf{x}) \log \Big( \frac{1}{Z} \prod_c \psi_c(\mathbf{x}_c) \Big) = \sum_c \sum_{\mathbf{x}_c} m(\mathbf{x}_c) \log \psi_c(\mathbf{x}_c) - N \log Z$$

There is a nasty log Z in the likelihood.


Page 20: Derivative of the Log-Likelihood

Log-likelihood:

$$\ell = \sum_c \sum_{\mathbf{x}_c} m(\mathbf{x}_c) \log \psi_c(\mathbf{x}_c) - N \log Z$$

First term:

$$\frac{\partial \ell_1}{\partial \psi_c(\mathbf{x}_c)} = \frac{m(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)}$$

Second term:

$$\frac{\partial \log Z}{\partial \psi_c(\mathbf{x}_c)} = \frac{1}{Z} \frac{\partial Z}{\partial \psi_c(\mathbf{x}_c)}
= \frac{1}{Z} \sum_{\tilde{\mathbf{x}}} \delta(\tilde{\mathbf{x}}_c, \mathbf{x}_c)\, \frac{\partial}{\partial \psi_c(\mathbf{x}_c)} \prod_d \psi_d(\tilde{\mathbf{x}}_d)
= \frac{1}{Z} \sum_{\tilde{\mathbf{x}}} \delta(\tilde{\mathbf{x}}_c, \mathbf{x}_c)\, \frac{1}{\psi_c(\mathbf{x}_c)} \prod_d \psi_d(\tilde{\mathbf{x}}_d)$$

$$= \frac{1}{\psi_c(\mathbf{x}_c)} \sum_{\tilde{\mathbf{x}}} \delta(\tilde{\mathbf{x}}_c, \mathbf{x}_c)\, p(\tilde{\mathbf{x}}) = \frac{p(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)}$$

(Here $\tilde{\mathbf{x}}$ ranges over all joint configurations, with the clique variables clamped to the value $\mathbf{x}_c$.)

Page 21: Conditions on Clique Marginals

Derivative of the log-likelihood:

$$\frac{\partial \ell}{\partial \psi_c(\mathbf{x}_c)} = \frac{m(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)} - N\, \frac{p(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)}$$

Hence, for the maximum likelihood parameters, we know that:

$$p^{*}_{MLE}(\mathbf{x}_c) = \frac{m(\mathbf{x}_c)}{N} \stackrel{\mathrm{def}}{=} \tilde{p}(\mathbf{x}_c)$$

In other words, at the maximum likelihood setting of the parameters, for each clique the model marginals must be equal to the observed marginals (empirical counts).

This doesn't tell us how to get the ML parameters; it just gives us a condition that must be satisfied when we have them.

Page 22: MLE for Undirected Graphical Models

Is the graph decomposable (triangulated)? Are all the clique potentials defined on maximal cliques (not sub-cliques)? E.g., on {1,2,3} and {2,3,4}, not on {1,2}, {2,3}, ...

[figure: a triangulated 4-node graph and a 4-cycle over X1, X2, X3, X4]

Are the clique potentials full tables (or Gaussians), or parameterized more compactly, e.g.

$$\psi_c(\mathbf{x}_c) = \exp\Big\{ \sum_k \theta_k f_k(\mathbf{x}_c) \Big\}\,?$$

Page 23: Properties of the MLE of Clique Potentials

For decomposable models where the potentials are defined on maximal cliques, the MLE of the clique potentials equates to the empirical marginals (or conditionals) of the corresponding cliques. Thus the MLE can be solved by inspection!!

If the graph is non-decomposable, and/or the potentials are defined on non-maximal cliques (e.g., {1,2}, {3,4}), we cannot equate the MLE of the clique potentials to empirical marginals (or conditionals). Potentials expressed in tabular form: use IPF. Feature-based potentials: use GIS.

Page 24: MLE for Decomposable Undirected Models

Decomposable models: G is decomposable ⇔ G is triangulated ⇔ G has a junction tree.

Potential-based representation:

$$p(\mathbf{x}) = \frac{\prod_c \psi_c(\mathbf{x}_c)}{\prod_s \phi_s(\mathbf{x}_s)}$$

Consider a chain X1 − X2 − X3. The cliques are (X1, X2) and (X2, X3); the separator is X2. The empirical marginals must equal the model marginals. Let us guess that

$$p_{MLE}(x_1, x_2, x_3) = \frac{\tilde{p}(x_1, x_2)\, \tilde{p}(x_2, x_3)}{\tilde{p}(x_2)}$$

We can verify that such a guess satisfies the conditions:

$$p_{MLE}(x_1, x_2) = \sum_{x_3} p_{MLE}(x_1, x_2, x_3) = \tilde{p}(x_1, x_2) \sum_{x_3} \tilde{p}(x_3 \mid x_2) = \tilde{p}(x_1, x_2)$$

and similarly $p_{MLE}(x_2, x_3) = \tilde{p}(x_2, x_3)$.

Page 25: MLE for Decomposable Undirected Models (cont.)

Let us guess that

$$p_{MLE}(x_1, x_2, x_3) = \frac{\tilde{p}(x_1, x_2)\, \tilde{p}(x_2, x_3)}{\tilde{p}(x_2)}$$

To compute the clique potentials, just equate them to the empirical marginals (or conditionals); i.e., the separator must be divided into one of its neighbors:

$$\psi_{12}(x_1, x_2) = \tilde{p}(x_1, x_2), \qquad \psi_{23}(x_2, x_3) = \frac{\tilde{p}(x_2, x_3)}{\tilde{p}(x_2)} = \tilde{p}(x_3 \mid x_2)$$

Then Z = 1.

One more example, on the triangulated graph over X1, X2, X3, X4 with cliques {1,2,3} and {2,3,4}:

$$p_{MLE}(x_1, x_2, x_3, x_4) = \frac{\tilde{p}(x_1, x_2, x_3)\, \tilde{p}(x_2, x_3, x_4)}{\tilde{p}(x_2, x_3)}$$

$$\psi_{123}(x_1, x_2, x_3) = \frac{\tilde{p}(x_1, x_2, x_3)}{\tilde{p}(x_2, x_3)} = \tilde{p}(x_1 \mid x_2, x_3), \qquad \psi_{234}(x_2, x_3, x_4) = \tilde{p}(x_2, x_3, x_4)$$
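The guess-and-verify argument for the chain can be checked numerically; the dataset below is arbitrary, since the identity holds for any empirical distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Any dataset works here; the identity below holds for every empirical
# distribution.  Binary chain X1 - X2 - X3, 1000 illustrative samples.
data = rng.integers(0, 2, size=(1000, 3))

def emp(cols):
    """Empirical marginal over the given columns, as a table of frequencies."""
    counts = np.zeros((2,) * len(cols))
    for row in data:
        counts[tuple(row[c] for c in cols)] += 1
    return counts / len(data)

p12, p23, p2 = emp([0, 1]), emp([1, 2]), emp([1])

# MLE by inspection: p(x1,x2,x3) = p~(x1,x2) * p~(x2,x3) / p~(x2)
p = np.einsum('ab,bc,b->abc', p12, p23, 1.0 / p2)

print(np.isclose(p.sum(), 1.0))          # True: Z = 1
print(np.allclose(p.sum(axis=2), p12))   # True: p(x1,x2) = p~(x1,x2)
print(np.allclose(p.sum(axis=0), p23))   # True: p(x2,x3) = p~(x2,x3)
```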

Page 26: Non-Decomposable and/or Non-Maximal Clique Potentials

If the graph is non-decomposable, and/or the potentials are defined on non-maximal cliques (e.g., {1,2}, {3,4}), we cannot equate empirical marginals (or conditionals) to the MLE of the clique potentials.

[figure: a 4-cycle X1 - X2 - X3 - X4 - X1 with pairwise potentials]

$$p(x_1, x_2, x_3, x_4) \propto \prod_{\{i,j\}} \psi_{ij}(x_i, x_j)$$

The MLE must satisfy $p_{MLE}(x_i, x_j) = \tilde{p}(x_i, x_j)$ for each edge, but

$$\psi^{MLE}_{ij}(x_i, x_j) \neq \tilde{p}(x_i, x_j),\;\; \tilde{p}(x_i, x_j)/\tilde{p}(x_i),\;\; \tilde{p}(x_i, x_j)/\tilde{p}(x_j)$$

Homework!

Page 27: MLE for Undirected Graphical Models

Is the graph decomposable (triangulated)? Are all the clique potentials defined on maximal cliques (not sub-cliques)? E.g., on {1,2,3} and {2,3,4}, not on {1,2}, {2,3}, ... Are the clique potentials full tables (or Gaussians), or parameterized more compactly, e.g. $\psi_c(\mathbf{x}_c) = \exp\{\sum_k \theta_k f_k(\mathbf{x}_c)\}$?

Decomposable? | Max clique? | Tabular? | Method
yes           | yes         | yes      | Direct
-             | -           | yes      | IPF
-             | -           | -        | Gradient
-             | -           | -        | GIS

Page 28: Iterative Proportional Fitting (IPF)

From the derivative of the likelihood,

$$\frac{\partial \ell}{\partial \psi_c(\mathbf{x}_c)} = \frac{m(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)} - N\, \frac{p(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)} = 0,$$

we can derive another relationship:

$$\frac{\tilde{p}(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)} = \frac{p(\mathbf{x}_c)}{\psi_c(\mathbf{x}_c)},$$

in which $\psi_c$ appears implicitly in the model marginal $p(\mathbf{x}_c)$. This is therefore a fixed-point equation for $\psi_c$. Solving for $\psi_c$ in closed form is hard, because it appears on both sides of this implicit nonlinear equation.

The idea of IPF is to hold $\psi_c$ fixed on the right-hand side (both in the numerator and the denominator) and solve for it on the left-hand side. We cycle through all cliques, then iterate:

$$\psi_c^{(t+1)}(\mathbf{x}_c) = \psi_c^{(t)}(\mathbf{x}_c)\, \frac{\tilde{p}(\mathbf{x}_c)}{p^{(t)}(\mathbf{x}_c)}$$

(We need to do inference here, to compute the model marginal $p^{(t)}(\mathbf{x}_c)$.)
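A minimal IPF sketch on the non-decomposable 4-cycle (the potentials start uniform and the target marginals come from a synthetic dataset; none of this is the lecture's code):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Non-decomposable 4-cycle X1 - X2 - X3 - X4 - X1, binary nodes, tabular
# edge potentials initialized to 1 (the setup is illustrative).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
psi = {e: np.ones((2, 2)) for e in edges}

# Target pairwise marginals p~(x_c), measured on a synthetic dataset so
# that they are guaranteed to be consistent.
data = rng.integers(0, 2, size=(500, 4))
def emp(i, j):
    m = np.zeros((2, 2))
    for row in data:
        m[row[i], row[j]] += 1
    return m / len(data)
ptilde = {e: emp(*e) for e in edges}

def joint():
    """Normalized joint p(x) from the current potentials (brute-force inference)."""
    p = np.zeros((2,) * 4)
    for x in product(range(2), repeat=4):
        p[x] = np.prod([psi[(i, j)][x[i], x[j]] for (i, j) in edges])
    return p / p.sum()

def marginal(p, i, j):
    m = p.sum(axis=tuple(k for k in range(4) if k not in (i, j)))
    return m if i < j else m.T

# IPF sweeps: psi_c <- psi_c * p~(x_c) / p(x_c); each update needs inference.
for _ in range(100):
    for (i, j) in edges:
        p = joint()
        psi[(i, j)] *= ptilde[(i, j)] / marginal(p, i, j)

p = joint()
err = max(np.abs(marginal(p, i, j) - ptilde[(i, j)]).max() for (i, j) in edges)
print(err)   # small: model clique marginals match the empirical ones
```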

Page 29: Properties of IPF Updates

IPF iterates a set of fixed-point equations:

$$\psi_c^{(t+1)}(\mathbf{x}_c) = \psi_c^{(t)}(\mathbf{x}_c)\, \frac{\tilde{p}(\mathbf{x}_c)}{p^{(t)}(\mathbf{x}_c)}$$

However, we can prove it is also a coordinate ascent algorithm (where the coordinates are the parameters of the clique potentials). Hence at each step it will increase the log-likelihood, and it will converge to a global maximum.

I-projection: IPF finds the distribution with the correct marginals that has maximal entropy.

Page 30: KL Divergence View

IPF can be seen as coordinate ascent in the likelihood, using the expression of likelihoods in terms of KL divergences. We can show that maximizing the log-likelihood is equivalent to minimizing the KL divergence (cross entropy) from the observed distribution to the model distribution:

$$\max_\theta \ell \iff \min_\theta \mathrm{KL}\big(\tilde{p}(\mathbf{x}) \,\|\, p(\mathbf{x} \mid \theta)\big) = \min_\theta \sum_{\mathbf{x}} \tilde{p}(\mathbf{x}) \log \frac{\tilde{p}(\mathbf{x})}{p(\mathbf{x} \mid \theta)}$$

Using a property of the KL divergence based on the conditional chain rule $p(\mathbf{x}) = p(\mathbf{x}_a)\, p(\mathbf{x}_b \mid \mathbf{x}_a)$:

$$\mathrm{KL}\big(q(\mathbf{x}_a, \mathbf{x}_b) \,\|\, p(\mathbf{x}_a, \mathbf{x}_b)\big) = \sum_{\mathbf{x}_a, \mathbf{x}_b} q(\mathbf{x}_a, \mathbf{x}_b) \log \frac{q(\mathbf{x}_a, \mathbf{x}_b)}{p(\mathbf{x}_a, \mathbf{x}_b)}$$

$$= \sum_{\mathbf{x}_a, \mathbf{x}_b} q(\mathbf{x}_a)\, q(\mathbf{x}_b \mid \mathbf{x}_a) \Big[ \log \frac{q(\mathbf{x}_a)}{p(\mathbf{x}_a)} + \log \frac{q(\mathbf{x}_b \mid \mathbf{x}_a)}{p(\mathbf{x}_b \mid \mathbf{x}_a)} \Big]$$

$$= \mathrm{KL}\big(q(\mathbf{x}_a) \,\|\, p(\mathbf{x}_a)\big) + \sum_{\mathbf{x}_a} q(\mathbf{x}_a)\, \mathrm{KL}\big(q(\mathbf{x}_b \mid \mathbf{x}_a) \,\|\, p(\mathbf{x}_b \mid \mathbf{x}_a)\big)$$

Page 31: IPF Minimizes KL Divergence

Putting things together, we have

$$\mathrm{KL}\big(\tilde{p}(\mathbf{x}) \,\|\, p(\mathbf{x})\big) = \mathrm{KL}\big(\tilde{p}(\mathbf{x}_c) \,\|\, p(\mathbf{x}_c)\big) + \sum_{\mathbf{x}_c} \tilde{p}(\mathbf{x}_c)\, \mathrm{KL}\big(\tilde{p}(\mathbf{x}_{-c} \mid \mathbf{x}_c) \,\|\, p(\mathbf{x}_{-c} \mid \mathbf{x}_c)\big)$$

It can be shown that changing the clique potential $\psi_c$ has no effect on the conditional distribution, so the second term is unaffected. To minimize the first term, we set the model marginal to the observed marginal, just as in IPF. Note that this is only good when the model is decomposable!

We can interpret the IPF update as retaining the "old" conditional probabilities $p^{(t)}(\mathbf{x}_{-c} \mid \mathbf{x}_c)$ while replacing the "old" marginal probability $p^{(t)}(\mathbf{x}_c)$ with the observed marginal $\tilde{p}(\mathbf{x}_c)$.


Page 33: Feature-Based Clique Potentials

So far we have discussed the most general form of an undirected graphical model, in which cliques are parameterized by general "tabular" potential functions $\psi_c(\mathbf{x}_c)$. But for large cliques these general potentials are exponentially costly for inference, and have an exponential number of parameters that we must learn from limited data.

One solution: change the graphical model to make the cliques smaller. But this changes the dependencies, and may force us to make more independence assumptions than we would like.

Another solution: keep the same graphical model, but use a less general parameterization of the clique potentials. This is the idea behind feature-based models.

Page 34: Features

Consider a clique $\mathbf{x}_c$ of random variables in a UGM, e.g., three consecutive characters c1 c2 c3 in a string of English text. How would we build a model of p(c1, c2, c3)? If we use a single clique function over c1 c2 c3, the full joint clique potential would be huge: 26³ − 1 parameters.

However, we often know that some particular joint settings of the variables in a clique are quite likely or quite unlikely, e.g., "ing", "ate", "ion", "?ed", "qu?", "jkx", "zzz", ...

A "feature" is a function that is vacuous over all joint settings except a few particular ones, on which it is high or low. For example, we might have f_ing(c1, c2, c3), which is 1 if the string is 'ing' and 0 otherwise, and similar features for '?ed', etc.

We can also define features when the inputs are continuous. Then the idea of a cell on which the feature is active disappears, but we might still have a compact parameterization of the feature.

Page 35: Features as Micropotentials

By exponentiating them, each feature function can be made into a "micropotential". We can multiply these micropotentials together to get a clique potential. For example, a clique potential $\psi_c(c_1, c_2, c_3)$ could be expressed as:

$$\psi_c(c_1, c_2, c_3) = e^{\theta_{ing} f_{ing}(c_1, c_2, c_3)} \times e^{\theta_{?ed} f_{?ed}(c_1, c_2, c_3)} \times \cdots = \exp\Big\{ \sum_{k=1}^{K} \theta_k f_k(c_1, c_2, c_3) \Big\}$$

This is still a potential over 26³ possible settings, but it uses only K parameters if there are K features. By having one indicator function per combination of $\mathbf{x}_c$, we recover the standard tabular potential.

Page 36: Combining Features

Each feature has a weight $\theta_k$, which represents the numerical strength of the feature and whether it increases or decreases the probability of the clique. The marginal over the clique is a generalized exponential family distribution, in fact a GLIM:

$$p(c_1, c_2, c_3) \propto \exp\big\{ \theta_{ing} f_{ing}(c_1, c_2, c_3) + \theta_{?ed} f_{?ed}(c_1, c_2, c_3) + \theta_{qu?} f_{qu?}(c_1, c_2, c_3) + \theta_{zzz} f_{zzz}(c_1, c_2, c_3) + \cdots \big\}$$

In general, the features may be overlapping, unconstrained indicators, or any functions of any subsets of the clique variables:

$$\psi_c(\mathbf{x}_c) \stackrel{\mathrm{def}}{=} \exp\Big\{ \sum_{k \in I_c} \theta_k f_k(\mathbf{x}_{c_k}) \Big\}$$

How can we combine features into a probability model?

Page 37: Feature-Based Model

We can multiply these clique potentials as usual:

$$p(\mathbf{x}) = \frac{1}{Z(\theta)} \prod_c \psi_c(\mathbf{x}_c) = \frac{1}{Z(\theta)} \exp\Big\{ \sum_c \sum_{k \in I_c} \theta_k f_k(\mathbf{x}_{c_k}) \Big\}$$

However, in general we can forget about associating features with cliques and just use a simplified form:

$$p(\mathbf{x}) = \frac{1}{Z(\theta)} \exp\Big\{ \sum_i \theta_i f_i(\mathbf{x}_{c_i}) \Big\}$$

This is just our friend the exponential family model, with the features as sufficient statistics!

Learning: recall that in IPF we have

$$\psi_c^{(t+1)}(\mathbf{x}_c) = \psi_c^{(t)}(\mathbf{x}_c)\, \frac{\tilde{p}(\mathbf{x}_c)}{p^{(t)}(\mathbf{x}_c)}$$

It is not obvious how to use this rule to update the weights and features individually!!!

Page 38: MLE of Feature-Based UGMs

Scaled likelihood function:

$$\tilde\ell(\theta; D) = \ell(\theta; D)/N = \frac{1}{N} \sum_n \log p(x_n \mid \theta) = \sum_x \tilde{p}(x) \log p(x \mid \theta) = \sum_x \tilde{p}(x) \sum_i \theta_i f_i(x) - \log Z(\theta)$$

Instead of optimizing this objective directly, we attack a lower bound of it. The logarithm has a linear upper bound,

$$\log Z(\theta) \le \mu\, Z(\theta) - \log \mu - 1.$$

This bound holds for all $\mu$, in particular for $\mu = Z(\theta^{(t)})^{-1}$. Thus we have

$$\tilde\ell(\theta; D) \ge \sum_x \tilde{p}(x) \sum_i \theta_i f_i(x) - \frac{Z(\theta)}{Z(\theta^{(t)})} - \log Z(\theta^{(t)}) + 1$$

Page 39: Generalized Iterative Scaling (GIS)

Lower bound of the scaled log-likelihood:

$$\tilde\ell(\theta; D) \ge \sum_x \tilde{p}(x) \sum_i \theta_i f_i(x) - \frac{Z(\theta)}{Z(\theta^{(t)})} - \log Z(\theta^{(t)}) + 1$$

Define $\Delta\theta_i^{(t)} \stackrel{\mathrm{def}}{=} \theta_i - \theta_i^{(t)}$. Then

$$\frac{Z(\theta)}{Z(\theta^{(t)})} = \frac{1}{Z(\theta^{(t)})} \sum_x \exp\Big\{\sum_i \theta_i^{(t)} f_i(x)\Big\} \exp\Big\{\sum_i \Delta\theta_i^{(t)} f_i(x)\Big\} = \sum_x p(x \mid \theta^{(t)}) \exp\Big\{\sum_i \Delta\theta_i^{(t)} f_i(x)\Big\}$$

Relax again. Assume $f_i(x) \ge 0$ and $\sum_i f_i(x) = 1$. By convexity of the exponential, $\exp\{\sum_i \pi_i x_i\} \le \sum_i \pi_i e^{x_i}$, so we have:

$$\tilde\ell(\theta; D) \ge \sum_x \tilde{p}(x) \sum_i \theta_i f_i(x) - \sum_x p(x \mid \theta^{(t)}) \sum_i f_i(x)\, e^{\Delta\theta_i^{(t)}} - \log Z(\theta^{(t)}) + 1 \stackrel{\mathrm{def}}{=} \Lambda(\theta)$$

Page 40: GIS (cont.)

Lower bound of the scaled log-likelihood:

$$\Lambda(\theta) = \sum_x \tilde{p}(x) \sum_i \theta_i f_i(x) - \sum_x p(x \mid \theta^{(t)}) \sum_i f_i(x)\, e^{\Delta\theta_i^{(t)}} - \log Z(\theta^{(t)}) + 1$$

Take the derivative:

$$\frac{\partial \Lambda}{\partial \theta_i} = \sum_x \tilde{p}(x) f_i(x) - e^{\Delta\theta_i^{(t)}} \sum_x p(x \mid \theta^{(t)}) f_i(x)$$

Set it to zero:

$$e^{\Delta\theta_i^{(t)}} = \frac{\sum_x \tilde{p}(x) f_i(x)}{\sum_x p(x \mid \theta^{(t)}) f_i(x)}$$

Update:

$$\theta_i^{(t+1)} = \theta_i^{(t)} + \log \frac{\sum_x \tilde{p}(x) f_i(x)}{\sum_x p(x \mid \theta^{(t)}) f_i(x)}$$

Equivalently, in terms of $p^{(t)}(x)$, the unnormalized version of $p(x \mid \theta^{(t)})$:

$$p^{(t+1)}(x) = p^{(t)}(x) \prod_i \left( \frac{\sum_{x'} \tilde{p}(x') f_i(x')}{\sum_{x'} p(x' \mid \theta^{(t)}) f_i(x')} \right)^{f_i(x)}$$

Recall IPF:

$$\psi_c^{(t+1)}(\mathbf{x}_c) = \psi_c^{(t)}(\mathbf{x}_c)\, \frac{\tilde{p}(\mathbf{x}_c)}{p^{(t)}(\mathbf{x}_c)}$$
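A GIS sketch on a tiny model (two binary variables; the raw features and target distribution are invented for illustration). Note the rescaling plus slack feature, which enforces GIS's requirement that $f_i(x) \ge 0$ and $\sum_i f_i(x) = 1$:

```python
import numpy as np
from itertools import product

# Two binary variables with raw features g1 = x1, g2 = x2, g3 = x1*x2
# (all illustrative).  GIS needs f_i(x) >= 0 and sum_i f_i(x) = 1, so we
# rescale by the largest feature sum and append a slack feature.
states = list(product(range(2), repeat=2))
G = np.array([[x1, x2, x1 * x2] for (x1, x2) in states], dtype=float)
C = G.sum(axis=1).max()
F = np.hstack([G / C, 1.0 - G.sum(axis=1, keepdims=True) / C])

ptilde = np.array([0.1, 0.2, 0.3, 0.4])   # "empirical" distribution over the 4 states
target = ptilde @ F                        # empirical feature expectations

theta = np.zeros(F.shape[1])
for _ in range(2000):
    p = np.exp(F @ theta)
    p /= p.sum()                           # model distribution p(x | theta)
    theta += np.log(target / (p @ F))      # GIS update: theta_i += log(E~[f_i] / E_p[f_i])

p = np.exp(F @ theta)
p /= p.sum()
print(np.allclose(p @ F, target, atol=1e-4))   # True: moments matched
```

Because the features here span the full 3-dimensional simplex over 4 states, matching the moments also recovers the target distribution itself.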

Page 41: Summary

IPF is a general algorithm for finding the MLE of UGMs.
- It is a fixed-point equation for ψ_c over single cliques, and a coordinate ascent algorithm.
- It is an I-projection in the clique marginal space.
- It requires the potentials to be fully parameterized (tabular).
- The cliques described by the potentials do not have to be maximal cliques.
- For fully decomposable models, it reduces to a single-step iteration.

GIS performs iterative scaling on general UGMs with feature-based potentials. IPF is a special case of GIS in which the clique potentials are built on features defined as indicator functions of clique configurations.

$$\text{IPF:}\quad \psi_c^{(t+1)}(\mathbf{x}_c) = \psi_c^{(t)}(\mathbf{x}_c)\, \frac{\tilde{p}(\mathbf{x}_c)}{p^{(t)}(\mathbf{x}_c)} \qquad\qquad \text{GIS:}\quad \theta_i^{(t+1)} = \theta_i^{(t)} + \log \frac{\sum_x \tilde{p}(x) f_i(x)}{\sum_x p(x \mid \theta^{(t)}) f_i(x)}$$

Page 42: Where Does the Exponential Form Come From?

Review: maximum likelihood for the exponential family.

$$\ell(\theta; D) = \sum_x m(x) \log p(x \mid \theta) = \sum_x m(x) \sum_i \theta_i f_i(x) - N \log Z(\theta)$$

$$\frac{\partial \ell}{\partial \theta_i} = \sum_x m(x) f_i(x) - N \sum_x p(x \mid \theta) f_i(x)$$

Setting the derivative to zero:

$$\sum_x p(x \mid \theta) f_i(x) = \frac{1}{N} \sum_x m(x) f_i(x) = \sum_x \tilde{p}(x) f_i(x)$$

That is, at the ML estimate, the expectations of the sufficient statistics under the model must match the empirical feature averages.

Page 43: Maximum Entropy

We can approach the modeling problem from an entirely different point of view. Begin with some fixed feature expectations:

$$\sum_x p(x) f_i(x) = \alpha_i$$

Assuming the expectations are consistent, there may exist many distributions that satisfy them. Which one should we select? The most uncertain or flexible one, i.e., the one with maximum entropy. This yields a new optimization problem:

$$\max_{p}\; H(p(x)) = -\sum_x p(x) \log p(x)$$
$$\text{s.t.}\quad \sum_x p(x) f_i(x) = \alpha_i, \qquad \sum_x p(x) = 1$$

This is a variational definition of a distribution!

Page 44: Solution to the MaxEnt Problem

To solve the MaxEnt problem, we use Lagrange multipliers:

$$L = -\sum_x p(x) \log p(x) + \sum_i \theta_i \Big( \sum_x p(x) f_i(x) - \alpha_i \Big) + \mu \Big( \sum_x p(x) - 1 \Big)$$

$$\frac{\partial L}{\partial p(x)} = -\log p(x) - 1 + \sum_i \theta_i f_i(x) + \mu = 0$$

$$p^{*}(x) = \frac{1}{Z(\theta)} \exp\Big\{ \sum_i \theta_i f_i(x) \Big\}, \qquad Z(\theta) = e^{1-\mu} = \sum_x \exp\Big\{ \sum_i \theta_i f_i(x) \Big\} \quad \Big(\text{since } \sum_x p(x) = 1\Big)$$

So feature constraints + MaxEnt ⇒ exponential family. The problem is strictly convex w.r.t. p, so the solution is unique.
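The MaxEnt ⇒ exponential-family conclusion can be probed numerically. In this sketch (the support {0, 1, 2} and the mean constraint 1.2 are invented for illustration), we solve the moment constraint for the exponential-family candidate by bisection and check that no other feasible distribution on the constraint line beats its entropy:

```python
import numpy as np

# MaxEnt sketch: among distributions on {0, 1, 2} with E[X] = 1.2, the
# entropy maximizer should be exponential family, p(x) proportional to exp(theta*x).
xs = np.array([0.0, 1.0, 2.0])
alpha = 1.2

def expfam(theta):
    p = np.exp(theta * xs)
    return p / p.sum()

# Solve the moment constraint E[X] = alpha for theta by bisection
# (E[X] is increasing in theta).
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if expfam(mid) @ xs < alpha:
        lo = mid
    else:
        hi = mid
p_star = expfam(0.5 * (lo + hi))

def entropy(p):
    return -(p * np.log(p)).sum()

# Other feasible distributions with the same mean: p_star + t*d, where d
# sums to 0 and has zero mean, so both constraints are preserved.
d = np.array([1.0, -2.0, 1.0])
best = max(entropy(np.clip(p_star + t * d, 1e-9, None))
           for t in np.linspace(-0.1, 0.1, 201))
print(entropy(p_star) >= best - 1e-9)   # True: p_star attains the max entropy
```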

Page 45: A More General MaxEnt Problem

$$\min_{p}\; \mathrm{KL}\big(p(x) \,\|\, h(x)\big) \stackrel{\mathrm{def}}{=} \sum_x p(x) \log \frac{p(x)}{h(x)} = -H(p(x)) - \sum_x p(x) \log h(x)$$
$$\text{s.t.}\quad \sum_x p(x) f_i(x) = \alpha_i, \qquad \sum_x p(x) = 1$$

Solution:

$$p^{*}(x) = \frac{1}{Z(\theta)}\, h(x) \exp\Big\{ \sum_i \theta_i f_i(x) \Big\}$$

Page 46: Constraints from Data

Where do the constraints $\alpha_i$ come from? Just as before, measure the empirical counts on the training data:

$$\alpha_i = \frac{1}{N} \sum_n f_i(x_n) = \sum_x \tilde{p}(x) f_i(x)$$

This also ensures consistency automatically. It is known as the "method of moments" (c.f. the law of large numbers).

We have seen a case of convex duality. In one case, we assume an exponential family and show that ML implies the model expectations must match the empirical expectations. In the other case, we assume the model expectations must match the empirical feature counts and show that MaxEnt implies an exponential family distribution. There is no duality gap: both yield the same value of the objective.

Page 47: Geometric Interpretation

All exponential family distributions:

$$\mathcal{E} = \Big\{ p(x) : p(x) = \frac{1}{Z(\theta)}\, h(x) \exp\big\{ \textstyle\sum_i \theta_i f_i(x) \big\} \Big\}$$

All distributions satisfying the moment constraints:

$$\mathcal{M} = \Big\{ p(x) : \textstyle\sum_x p(x) f_i(x) = \sum_x \tilde{p}(x) f_i(x) \Big\}$$

Pythagorean theorem:

$$\mathrm{KL}(q \,\|\, p) = \mathrm{KL}(q \,\|\, p_{\mathcal{M}}) + \mathrm{KL}(p_{\mathcal{M}} \,\|\, p)$$

MaxEnt: $\min_q \mathrm{KL}(q \,\|\, h)$ s.t. $q \in \mathcal{M}$, with $\mathrm{KL}(q \,\|\, h) = \mathrm{KL}(q \,\|\, q_{\mathcal{M}}) + \mathrm{KL}(q_{\mathcal{M}} \,\|\, h)$.

MaxLik: $\min_p \mathrm{KL}(\tilde{p} \,\|\, p)$ s.t. $p \in \mathcal{E}$, with $\mathrm{KL}(\tilde{p} \,\|\, p) = \mathrm{KL}(\tilde{p} \,\|\, p_{\mathcal{M}}) + \mathrm{KL}(p_{\mathcal{M}} \,\|\, p)$.

Page 48: Summary

The exponential family distribution can be viewed as the solution to a variational expression: the maximum entropy problem! The max-entropy principle of parameterization offers a dual perspective to the MLE.