
Page 1:

Algorithmic Intelligence Lab

EE807: Recent Advances in Deep Learning

Lecture 5

Slide made by

Sejun Park and Sungsoo Ahn

KAIST EE

Generative Deep Models

Page 2:

1. Introduction
• Generative model and discriminative model

2. Boltzmann Machine
• Boltzmann machine (BM)

• Restricted Boltzmann machine (RBM)

• Deep Boltzmann machine (DBM)

3. Sum-product Network
• Sum-product network (SPN)

• Inference in SPN

• Structure learning of SPN

Table of Contents

2

Page 3:

Generative Model and Discriminative Model

3

• Given an observed variable x and a target variable y

• A discriminative model is a model of the conditional distribution p(y | x)
• e.g., neural networks (supervised)

• A generative model is a model of the joint distribution p(x, y) (or p(x))
• e.g., Boltzmann machines, sum-product networks (unsupervised)

*reference : https://en.wikipedia.org/wiki/Cat#/media/File:Kittyply_edit1.jpg

Page 4:

Why Generative Model?

4

• Generative models model a full probability distribution over the data

• This enables us to generate new data similar to existing (training) data
• Such generation is impossible under discriminative models

• Sampling methods (e.g., Markov chain Monte Carlo) are required for generation

*reference : https://en.wikipedia.org/wiki/File:Cat_poster_1.jpg

Page 5:

• Modelling a joint distribution of x
• Mean fields

• Tractable inference, low expressive power

• Multivariate Gaussian distributions

• Tractable inference, low expressive power

• Graphical models (e.g., RBM, DBM, etc.)

• Intractable inference, high expressive power with compact representation

• Generative adversarial networks

• x = G(z) for some neural network G and latent noise z

• Intractable inference, high expressive power with complex expression

Examples of Generative Models

5

Page 6:

1. Introduction
• Generative model and discriminative model

2. Boltzmann Machine
• Boltzmann machine (BM)

• Restricted Boltzmann machine (RBM)

• Deep Boltzmann machine (DBM)

3. Sum-product Network
• Sum-product network (SPN)

• Inference in SPN

• Structure learning of SPN

Table of Contents

6

Page 7:

• An energy-based model (EBM) is a joint distribution on a vector x satisfying p(x) ∝ exp(−E(x))

• Assignments with high energy are less likely under an EBM

• Examples of EBM:
• Gaussian distribution

• Bernoulli distribution

• Poisson, binomial, ….

Energy Based Model

7

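As a concrete illustration (not from the slides), here is a minimal Python sketch of an EBM over a tiny binary state space: enumerate the states and normalize exp(−E(x)). The energy function used is a hypothetical toy example.

```python
import itertools
import numpy as np

def ebm_distribution(energy, n):
    """Enumerate all binary vectors of length n and normalize exp(-E(x))."""
    states = list(itertools.product([0, 1], repeat=n))
    unnormalized = np.array([np.exp(-energy(np.array(s))) for s in states])
    Z = unnormalized.sum()                      # partition function
    return states, unnormalized / Z

# Toy (hypothetical) energy: each pair of neighbouring coordinates that disagree costs 1.
energy = lambda x: np.sum(np.abs(np.diff(x)))
states, probs = ebm_distribution(energy, n=3)
# Low-energy configurations such as (0, 0, 0) receive the highest probability.
```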

Page 8:

Boltzmann Machine

8

• Given a graph G = (V, E), a Boltzmann machine (BM) is a joint distribution on a binary vector x ∈ {0, 1}^V such that p(x) ∝ exp( Σ_{(i,j)∈E} w_ij x_i x_j + Σ_{i∈V} b_i x_i )

• Given the neighborhood of node i, the conditional distribution of x_i is p(x_i = 1 | x_N(i)) = σ( Σ_{j∈N(i)} w_ij x_j + b_i )
• σ: logistic sigmoid function

where N(i) is the set of neighbors of node i

• To generate new data using BM, we need to learn parameters of BM
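To make the definition concrete, here is a minimal sketch (names W, b are illustrative: a symmetric weight matrix with zero diagonal and a bias vector) of the unnormalized log-probability and the sigmoid conditional stated above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bm_unnormalized_logp(x, W, b):
    """log p(x) up to the log-partition function: (1/2) x^T W x + b^T x."""
    return 0.5 * x @ W @ x + b @ x               # W symmetric, zero diagonal

def bm_conditional(i, x, W, b):
    """p(x_i = 1 | all other nodes) = sigmoid(sum_j W_ij x_j + b_i)."""
    return sigmoid(W[i] @ x + b[i])

# Example: a 3-node BM with a single strong edge between nodes 0 and 1.
W = np.array([[0.0, 2.0, 0.0], [2.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
b = np.zeros(3)
x = np.array([1.0, 0.0, 0.0])
print(bm_conditional(1, x, W, b))                # > 0.5 because node 0 is on
```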

Page 9:

• Given training data x^(1), …, x^(N), learn the distribution p(x)

• Goal (Maximum Likelihood Estimation):

Maximize Σ_n log p(x^(n))

• The log-likelihood is a concave function of the parameters (equivalently, the negative log-likelihood is convex)

• Gradient ascent converges to the global optimum, with gradients

∂ log L / ∂ w_ij = E_data[x_i x_j] − E_p[x_i x_j],   ∂ log L / ∂ b_i = E_data[x_i] − E_p[x_i]

where E_data is the empirical expectation over the training data and E_p is the expectation under the model

Learning BM

9

Page 10:

• Problem: Calculating E_p[·] is intractable in general, i.e., NP-hard
• The naïve approach requires 2^|V| summations

• Instead of the exact gradient, we approximate it using samples from p(x)

• The Gibbs sampler is the most popular sampling algorithm for BMs

Learning BM

10

Gibbs sampler:
1. Choose an arbitrary initial state x
2. Pick a random node i and update x_i according to p(x_i | x_N(i))
3. Repeat step 2 for a fixed number of iterations
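A minimal single-site Gibbs sampler following the three steps above; this is a sketch, with helper names mirroring the earlier snippet rather than anything from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_sample(W, b, n_steps, rng=None):
    """Single-site Gibbs sampler for a Boltzmann machine."""
    rng = rng or np.random.default_rng(0)
    n = len(b)
    x = rng.integers(0, 2, size=n).astype(float)     # arbitrary initial state
    for _ in range(n_steps):
        i = rng.integers(n)                          # pick a random node
        p = sigmoid(W[i] @ x + b[i])                 # p(x_i = 1 | rest)
        x[i] = float(rng.random() < p)               # resample x_i
    return x
```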

Page 11:

• Learning BM
1. Choose initial parameters

2. Generate samples from p(x) using the Gibbs sampler

3. Update parameters with approximated gradients from samples

4. Repeat 2-3 until convergence

Learning BM

11
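Putting steps 1–4 together, the following is a minimal sketch of the learning loop for a fully visible BM, using the gradient E_data[x_i x_j] − E_model[x_i x_j] from the previous slide; the hyperparameters and the inline Gibbs chain are illustrative choices, not values from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn_bm(data, n_epochs=100, n_samples=50, n_gibbs=200, lr=0.05, seed=0):
    """Approximate maximum-likelihood learning of a fully visible Boltzmann machine."""
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    W, b = np.zeros((n, n)), np.zeros(n)
    for _ in range(n_epochs):
        # Step 2: model expectations from Gibbs samples.
        samples = []
        for _ in range(n_samples):
            x = rng.integers(0, 2, size=n).astype(float)
            for _ in range(n_gibbs):
                i = rng.integers(n)
                x[i] = float(rng.random() < sigmoid(W[i] @ x + b[i]))
            samples.append(x)
        samples = np.array(samples)
        # Step 3: gradient = data expectation - model expectation.
        dW = data.T @ data / len(data) - samples.T @ samples / len(samples)
        db = data.mean(axis=0) - samples.mean(axis=0)
        np.fill_diagonal(dW, 0.0)            # no self-connections
        W += lr * dW
        b += lr * db
    return W, b
```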

Page 12:

Restricted Boltzmann Machine [Smolensky, 1986]

12

• Restricted Boltzmann machine (RBM) is a bipartite Boltzmann machine with visible nodes and hidden nodes

• Hidden nodes can be interpreted as hidden features of the visible nodes

• In the RBM structure, all hidden nodes are conditionally independent given the visible nodes, and vice versa

Higher order potential

*reference : http://arkitus.com/files/bms-eslami-slides.pptx

Page 13:

• Given training data v^(1), …, v^(N), learn the distribution p(v)

• Goal: Maximize Σ_n log p(v^(n)) = Σ_n log Σ_h p(v^(n), h)

• The log-likelihood is a non-convex function of the parameters (because of the hidden units)

• But we still use gradient ascent, with gradients of the form E_data[v_i h_j] − E_model[v_i h_j]

Learning RBM

13

Page 14:

• Due to conditional independence of RBM, block Gibbs sampling is possible

• Samples generated by BM and RBM

Learning RBM

14


*reference : http://arkitus.com/files/bms-eslami-slides.pptx
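A minimal sketch of one block-Gibbs step for an RBM; the names (W: visible-hidden weight matrix, b/c: visible/hidden biases) are illustrative, not from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_block_gibbs_step(v, W, b, c, rng=None):
    """Sample all hidden units given v, then all visible units given h, in two blocks."""
    rng = rng or np.random.default_rng(0)
    p_h = sigmoid(v @ W + c)                          # hiddens are conditionally independent
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b)                        # visibles likewise
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return v_new, h
```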

Page 15:

• Convergence of a Gibbs sampler can require an exponential number of iterations

• Contrastive divergence (CD-k) is a sampling method which runs only k iterations of a Markov chain, without a convergence guarantee

• For rapid mixing, CD chooses the initial state of the Markov chain from the training data

• CD-1 works surprisingly well in practice even though there is no guarantee

Contrastive Divergence [Hinton, 2002]
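A minimal CD-1 gradient sketch for an RBM: one block-Gibbs step started from the data example, positive phase minus negative phase. Names follow the earlier RBM snippet and are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_gradients(v_data, W, b, c, rng=None):
    """Approximate log-likelihood gradients with one Gibbs step started from the data."""
    rng = rng or np.random.default_rng(0)
    # Positive phase: hidden activations given the training example.
    p_h_data = sigmoid(v_data @ W + c)
    h = (rng.random(p_h_data.shape) < p_h_data).astype(float)
    # Negative phase: one reconstruction step (k = 1).
    v_model = (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)
    p_h_model = sigmoid(v_model @ W + c)
    dW = np.outer(v_data, p_h_data) - np.outer(v_model, p_h_model)
    db = v_data - v_model
    dc = p_h_data - p_h_model
    return dW, db, dc        # ascend: W += lr * dW, etc.
```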

Page 16:

• Persistent contrastive divergence (PCD-k) runs k iterations of a Markov chain whose initial state is the last output of the previous chain

• We expect the PCD chain to approximate a full Markov chain run for many iterations

• PCD requires more iterations than CD, but it shows better performance

Persistent Contrastive Divergence [Tieleman, 2008]

MNIST examples generated from RBM trained with PCD-1 and CD-1
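A minimal PCD-1 training-loop sketch: the only change relative to CD-1 is that the negative chain persists across parameter updates instead of restarting from the current data example. The training data and hyperparameters are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm_pcd1(data, n_hidden=16, n_epochs=10, lr=0.01, seed=0):
    """RBM training with persistent contrastive divergence (k = 1)."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b, c = np.zeros(n_visible), np.zeros(n_hidden)
    v_chain = rng.integers(0, 2, size=n_visible).astype(float)   # persistent chain state
    for _ in range(n_epochs):
        for v_data in data:
            # Negative phase continues from the previous chain state (not from v_data).
            h_chain = (rng.random(n_hidden) < sigmoid(v_chain @ W + c)).astype(float)
            v_chain = (rng.random(n_visible) < sigmoid(h_chain @ W.T + b)).astype(float)
            p_h_data = sigmoid(v_data @ W + c)
            p_h_chain = sigmoid(v_chain @ W + c)
            W += lr * (np.outer(v_data, p_h_data) - np.outer(v_chain, p_h_chain))
            b += lr * (v_data - v_chain)
            c += lr * (p_h_data - p_h_chain)
    return W, b, c
```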

Page 17:

• For supervised learning using an RBM, add a class variable y to the RBM
• v: binary input, y: multinomial label

• Assume

• Gradient descent for parameter learning
• Goal: Maximize the log-likelihood

• Similar to learning an RBM, computing the gradient requires expectations under the model distribution,

but a block Gibbs sampler is not available (sampling is inefficient)

Application: Classification using RBM [Nguyen et al., 2017]

17

Page 18:

• Gradient descent for parameter learning
• Solution: Use a fully factorized distribution q to approximate the required expectations

• Such an approximation is called the mean field approximation

• Solve the optimization problem of minimizing the KL divergence between q and the target distribution

• Formulate the objective as a function of the mean-field parameters

Application: Classification using RBM [Nguyen et al., 2017]

18

Gradient w.r.t. the mean-field parameters is tractable

(𝟙: indicator function)
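The exact updates of [Nguyen et al., 2017] are not reproduced on the slide; as a generic illustration of the mean-field idea, here is a sketch of the standard fixed-point iteration for a binary pairwise model, μ_i ← σ(Σ_j W_ij μ_j + b_i), which minimizes KL(q‖p) over fully factorized q. All names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field(W, b, n_iters=100):
    """Naive mean-field marginals for a binary pairwise model (fully factorized q)."""
    mu = np.full(len(b), 0.5)                  # initialize q(x_i = 1) = 0.5
    for _ in range(n_iters):
        for i in range(len(b)):                # coordinate-wise fixed-point update
            mu[i] = sigmoid(W[i] @ mu + b[i])
    return mu                                   # approximate marginals p(x_i = 1)
```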

Page 19:

• Gradient descent for parameter learning
• Formulation of the objective as a function of the mean-field parameters

Application: Classification using RBM [Nguyen et al., 2017]

19

Gradient w.r.t. the mean-field parameters is tractable

where

Page 20:

• Gradient descent for parameter learning
• Problem: Computing the term below is hard

• Solution: Approximate it by a tractable expression

• 1st order Taylor series approximation at

Application: Classification using RBM [Nguyen et al., 2017]

20

Page 21:

• Gradient descent for parameter learning
• Approximate the objective by a polynomial

• 2nd order Taylor series approximation at

• After the Taylor series approximation, the objective can be optimized using coordinate descent

• Also, the gradient can be approximated by approximating this expression

Application: Classification using RBM [Nguyen et al., 2017]

21

Closed form gradient exists

Page 22:

• Evaluation: Exact inference is intractable, so use approximate inference

where

• Classification using:
1. The approximate inference output above

2. kNN, SVM on the hidden feature space of the RBM

Application: Classification using RBM [Nguyen et al., 2017]

22

Hidden feature projection of MNIST examples

Page 23:

• Evaluation: Exact inference is intractable, so use approximate inference

where

• Classification using:
1. The approximate inference output above

2. kNN, SVM on the hidden feature space of the RBM

• Generation from the RBM using a Gibbs sampler

Application: Classification using RBM [Nguyen et al., 2017]

23

(Figure: generated samples from RBM, VAE, and GAN)

Page 24:

Deep Boltzmann Machine [Salakhutdinov et al., 2009]

24

• Deep Boltzmann machine (DBM) has a deeper structure than RBM

• Higher expressive power as DBM becomes deeper

• DBM also exhibits conditional independence: units within a layer are conditionally independent given the adjacent layers

Higher-order & complex potential

*reference : http://arkitus.com/files/bms-eslami-slides.pptx

Page 25:

Learning DBM

25

• Learning DBM is similar to learning RBM

• Pretraining each layer as an RBM empirically achieves better performance

• As used in [Nguyen et al., 2017], the mean field approximation is a good alternative to the slow Gibbs sampler in DBMs [Salakhutdinov, 2010]

Page 26:

Learning DBM

26

• Samples generated by BM, RBM and DBM


*reference : http://arkitus.com/files/bms-eslami-slides.pptx

Page 27:

• Goal: Extract a joint feature from image and audio data

• Joint DBM consists of sub-DBMs corresponding to the modalities

• Learning the joint DBM:
1. Extract features from each modality

2. Pretrain sub-DBM for each modality using extracted features

3. Pretrain the shared parameters

4. Fine-tune the joint DBM starting from the pretrained parameters

Application of DBM: Multimodal Learning [Pang et al., 2017]

27

Page 28:

• Evaluation: Person identification using face image and audio data
• A logistic regression classifier is applied to the extracted DBM features

• The DBM also shows robustness to noisy unimodal data

Application of DBM: Multimodal Learning [Pang et al., 2017]

28

Page 29:

1. Introduction
• Generative model and discriminative model

2. Boltzmann Machine
• Boltzmann machine (BM)

• Restricted Boltzmann machine (RBM)

• Deep Boltzmann machine (DBM)

3. Sum-product Network
• Sum-product network (SPN)

• Inference in SPN

• Structure learning of SPN

Table of Contents

29

Page 30:

• Modelling a joint distribution of x
• Mean fields

• Tractable inference, low expressive power

• Multivariate Gaussian distributions

• Tractable inference, low expressive power

• Graphical models (e.g., RBM, DBM, etc.)

• Intractable inference, complex but compact expression

• Generative adversarial networks

• x = G(z) for some neural network G and latent noise z

• Intractable inference, complex expression

Recall: Examples of Generative Models

30

Simple and tractable models (mean fields, Gaussians) vs. complex and intractable models (graphical models, GANs)

Can we have a complex and tractable model?

Page 31:

• Goal: Model tractable distributions over binary variables x_1, …, x_n

• Any distribution can be represented by a network polynomial in the indicator variables x_i and x̄_i

• Idea: Learn the coefficients of a network polynomial that enables tractable inference
• Example: mean field

Network Polynomial

31

Iterative sum and product
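A minimal sketch of a network polynomial over binary variables: each state contributes its probability times a product of indicator variables, and evidence or marginalization is expressed by how the indicators are set. The toy distribution below is hypothetical.

```python
import itertools

def network_polynomial(p, n, ind, ind_bar):
    """f = sum_x p(x) * prod_i (ind[i] if x_i = 1 else ind_bar[i])."""
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        term = p.get(x, 0.0)
        for i, xi in enumerate(x):
            term *= ind[i] if xi == 1 else ind_bar[i]
        total += term
    return total

# Toy joint distribution over (x1, x2).
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
# Joint probability: indicators match the evidence exactly.
print(network_polynomial(p, 2, ind=[1, 1], ind_bar=[0, 0]))   # P(x1=1, x2=1) = 0.4
# Marginal probability: set both indicators of the summed-out variable to 1.
print(network_polynomial(p, 2, ind=[1, 1], ind_bar=[0, 1]))   # P(x1=1) = 0.7
```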

Page 32:

• A sum-product network (SPN) provides a tractable distribution over x

• An SPN consists of alternating sum and product nodes (operations), e.g., as in the figure

Sum-product Network [Poon et al., 2012]

32

*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf

Page 33:

• A sum-product network (SPN) provides a tractable distribution over x

• Direct calculation of the normalizing constant requires 2^n summations, but…

• Observation: If the network polynomial is such that every monomial contains exactly one of x_i or x̄_i for each variable i,

then the sum over all states is obtained by evaluating the polynomial with every indicator set to 1

• What if there exist monomials containing both x_i and x̄_i, or monomials missing some variable?

Sum-product Network [Poon et al., 2012]

33

*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf

Page 34:

• Theorem: An SPN is valid (its network polynomial yields correct probabilities and marginals) if

• (consistent) no monomial contains both x_i and x̄_i

• (complete) the children of every sum node have the same set of descendant leaf variables (the same scope)

• The theorem implies that an SPN is valid if it only contains n-th order monomials, each containing exactly one of x_i or x̄_i for every variable i

Does Any SPN Produce a Tractable and Valid Distribution?

34

(Figure: an incomplete SPN and an inconsistent SPN)

*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf

Page 35:

• Goal: calculate joint probability in valid SPN

Joint Probability in Valid SPN

35

*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf

Page 36:

• Goal: calculate marginal probability in valid SPN

Marginal Probability in Valid SPN

36

*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf
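To illustrate both computations, here is a minimal hand-built valid SPN over two binary variables (the structure and weights 0.7/0.3 are hypothetical): a joint probability uses evidence indicators only, while a marginal sets both indicators of the summed-out variable to 1, and either case takes a single bottom-up pass.

```python
def spn_value(ind):
    """Bottom-up evaluation of a tiny valid SPN.

    ind maps leaf names to indicator values: 0/1 for evidence, both set to 1
    to marginalize a variable out.
    """
    prod1 = ind["x1"] * ind["x2"]              # product node, scope {x1, x2}
    prod2 = ind["x1_bar"] * ind["x2_bar"]      # product node, scope {x1, x2}
    return 0.7 * prod1 + 0.3 * prod2           # sum node (complete: children share scope)

# Joint probability P(x1 = 1, x2 = 1): evidence indicators only.
p_joint = spn_value({"x1": 1, "x1_bar": 0, "x2": 1, "x2_bar": 0})     # 0.7
# Marginal probability P(x1 = 1): both indicators of x2 are set to 1.
p_marginal = spn_value({"x1": 1, "x1_bar": 0, "x2": 1, "x2_bar": 1})  # 0.7
print(p_joint, p_marginal)
```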

Page 37:

• Any distribution can be encoded by a shallow but large SPN [Poon et al., 2012]

• Some distributions can be encoded by a compact deep SPN
• e.g., the uniform distribution over states with an even number of 1’s

Deep vs Shallow SPNs

37

Shallow large SPN

Deep compact SPN

*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf

Page 38:

• Replace sum nodes by integral (or weighted sum) nodes
• Integrals for continuous models and weighted sums for multivariate models

• When all leaf distributions are Gaussian, the SPN defines a very large mixture of Gaussians

Extension to Multivariate/Continuous Models

38*reference : https://jmhldotorg.files.wordpress.com/2013/11/slidescambridgesumproductnetworks2013.pdf

(Figure: binary SPN vs. continuous/multivariate SPN)

Page 39:

• Learning an SPN is similar to learning a neural network:
• Select an appropriate structure for the target dataset

• Define a target loss function from the target dataset/distribution

• Iteratively update weights using back-propagation

• Learning SPN:
1. Initialize the SPN with some ‘valid’ structure and random parameters

2. Choose an appropriate loss function (e.g., maximum likelihood)

3. Update weight until convergence

• The weight update algorithm can be chosen freely (e.g., gradient descent)

• Gradient descent can be done using back-propagation

4. Prune zero weighted edges

Learning SPN

39

Page 40:

• Recall: Intuition behind SPN
• Each sum node represents a mixture of distributions

• Each product node represents the independence of variables

• Given samples, structure learning of an SPN is an iterative procedure of (see the sketch below):
• Finding independent groups of variables (independence test, e.g., G-test of pairwise independence)

• Finding similar instances (clustering methods, e.g., k-means clustering)

Structure Learning of SPN [Gens et al., 2013]

40
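A simplified sketch of the LearnSPN-style recursion described above: split variables into independent groups when possible (product node), otherwise cluster the instances (sum node), and stop at single-variable leaves. The independence test (correlation threshold plus connected components via scipy) and the clustering step (2-means via scikit-learn) are illustrative stand-ins, not the exact choices of [Gens et al., 2013].

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from sklearn.cluster import KMeans

def learn_spn(data, var_ids, corr_thresh=0.1, min_instances=20):
    """Recursively build ("leaf" | "prod" | "sum", ...) tuples from binary data."""
    if len(var_ids) == 1:
        return ("leaf", var_ids[0], data[:, 0].mean())        # Bernoulli leaf
    # 1) Try to split the variables into (approximately) independent groups.
    corr = np.nan_to_num(np.abs(np.corrcoef(data, rowvar=False)))
    n_groups, labels = connected_components(corr > corr_thresh, directed=False)
    if n_groups > 1:
        groups = [np.where(labels == g)[0] for g in range(n_groups)]
        return ("prod", [learn_spn(data[:, g], [var_ids[j] for j in g]) for g in groups])
    # 2) Too few (or identical) instances: fall back to a fully factorized product of leaves.
    if len(data) < min_instances or len(np.unique(data, axis=0)) == 1:
        return ("prod", [("leaf", v, data[:, j].mean()) for j, v in enumerate(var_ids)])
    # 3) Otherwise cluster the instances and recurse on each cluster (sum node).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    children, weights = [], []
    for k in (0, 1):
        subset = data[labels == k]
        children.append(learn_spn(subset, var_ids))
        weights.append(len(subset) / len(data))
    return ("sum", weights, children)
```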

Page 41:

• Completing the missing left half of images
• The SPN is trained on the Caltech-101 and Olivetti datasets

• Each pixel is modelled as a mixture of Gaussians with unit variance

• Comparison with several algorithms

Application of SPN: Image Completion [Poon et al., 2012]

41

Top to bottom: original, SPN, DBM, DBN, PCA, nearest neighbor

Page 42:

• Goal: Use SPN as an image classifier

• Problem:
• Finding a deep valid structure for a high-dimensional variable may decrease the classification accuracy, as it restricts the number of edges

• Modelling continuous image pixels with Gaussian distributions may not be realistic

• Idea: Ignore the distribution of the input x; only model the distribution of the target variable y given x

• Learn an SPN which maximizes the conditional likelihood of y given x
• Hidden variables are introduced to enhance the expressive power

Application of SPN: Discriminative Learning [Gens et al., 2012]

42

Page 43:

• The structure of the SPN follows a convolutional structure
• Each part affects the weight of a target variable

• Each mixture extracts image features using convolutional filters

• Hidden variables determine whether to discard parts of the filters / locations of the image

Application of SPN: Discriminative Learning [Gens et al., 2012]

43

Extracted features from input

Page 44:

• Test accuracy on the CIFAR-10 dataset
• SPN estimation

• Input images are preprocessed by feature extraction algorithm [Coates et al., 2011]

• The number of extracted features is varied as a quality measure

• SPN works well even with a small number of features

Application of SPN: Discriminative Learning [Gens et al., 2012]

44

Size of features

Page 45:

• Goal: Learn a graph-structured semantic map for predicting the labels of unvisited locations
• Each node represents a semantic place that a robot can visit

• Each node is associated with a local observation (vision, sound, …) and a hidden label (office, corridor, …)

• Each edge represents a spatial relation between nodes, indicating navigability

Application of SPN: Learning Graph Structured Data [Zheng et al, 2018]

45

Page 46:

• Encoding graphs of arbitrary size into a single SPN is hard
• Instead, learn SPNs for small templates

• Given training data, learn template SPNs modelling the distribution over small subgraphs

• Modelling the distribution of test data:
1. Given a graph, decompose it into random disjoint templates

2. Add a product node with the trained template SPNs over the decomposition as its children

Application of SPN: Learning Graph Structured Data [Zheng et al, 2018]

46

(Figure: a product node (×) whose children are template SPNs of corresponding sizes)

Page 47:

• Modelling the distribution of data:
1. Given a graph, decompose it into random disjoint templates

2. Add a product node with the trained template SPNs over the decomposition as its children

3. Repeat 1–2 a fixed number of times

4. For the added product nodes, add a sum node as their parent; use the resulting SPN as the modelled distribution

Application of SPN: Learning Graph Structured Data [Zheng et al, 2018]

47


Approximate the full graph distribution using smaller ones

The full distribution is modelled as a mixture of several approximations

Page 48:

• Experiment setup:
• Generate labels and observations under some ground-truth distribution

• Add some noisy labels that are plausible but differ from the true label

• Add placeholder nodes which have no label or observation

• Train the SPN with the corrupted/missing data above

• Comparison with standard graphical models (Markov random fields) with pairwise and three-node potentials

Application of SPN: Learning Graph Structured Data [Zheng et al, 2018]

48

Ground truth

Corrupted training graph

Estimated result of SPN

Page 49:

• Generative models enable us to model the probability distribution of the training samples

• BM, RBM, and DBM have high expressive power, but both inference and learning are intractable
• They led to major breakthroughs in deep learning before 2012 (when AlexNet arrived)

• Sum-product networks combine a complex structure with tractable inference and efficient learning using back-propagation

Summary

49

Page 50:

• [Smolensky, 1986] “Chapter 6: Information Processing in Dynamical Systems: Foundations of Harmony Theory”, Parallel Distributed Processing, 1986. link: http://stanford.edu/~jlmcc/papers/PDP/Volume%201/Chap6_PDP86.pdf

• [Hinton, 2002] “Training Products of Experts by Minimizing Contrastive Divergence”, Neural Computation, 2002. link: http://www.cs.toronto.edu/~fritz/absps/tr00-004.pdf

• [Hinton et al., 2006] “A fast learning algorithm for deep belief nets”, Neural Computation, 2006. link: https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf

• [Wainwright et al., 2008] “Graphical Models, Exponential Families, and Variational Inference”, FTML, 2008. link: https://people.eecs.berkeley.edu/~wainwrig/Papers/WaiJor08_FTML.pdf

• [Tieleman, 2008] “Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient”, ICML 2008. link: https://arxiv.org/pdf/1705.11140.pdf

• [Salakhutdinov et al., 2009] “Deep Boltzmann Machines”, AISTATS 2009. link: http://www.cs.toronto.edu/~fritz/absps/dbm.pdf

• [Salakhutdinov et al., 2010] “Efficient Learning of Deep Boltzmann Machines”, AISTATS 2010. link: http://proceedings.mlr.press/v9/salakhutdinov10a/salakhutdinov10a.pdf

• [Poon et al., 2012] “Sum-Product Networks: A New Deep Architecture”, UAI 2012. link: https://arxiv.org/ftp/arxiv/papers/1202/1202.3732.pdf

• [Gens et al., 2012] “Discriminative Learning of Sum-Product Networks”, NIPS 2012. link: https://papers.nips.cc/paper/4516-discriminative-learning-of-sum-product-networks.pdf

• [Gens et al., 2013] “Learning the Structure of Sum-Product Networks”, ICML 2013. link: http://proceedings.mlr.press/v28/gens13.pdf

References

50

Page 51:

• [Nguyen et al., 2017] “Supervised Restricted Boltzmann Machines”, UAI 2017. link: http://auai.org/uai2017/proceedings/papers/106.pdf

• [Pang et al., 2017] “A Joint Deep Boltzmann Machine (jDBM) Model for Person Identification Using Mobile Phone Data”, IEEE Transactions on Multimedia. link: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7583729

• [Zheng et al., 2018] “Learning Graph-Structured Sum-Product Networks for Probabilistic Semantic Maps”, AAAI 2018. link: http://kaiyuzheng.me/documents/papers/zheng2018aaai.pdf

References

51