
THE NESTED CHINESE RESTAURANT PROCESS AND BAYESIAN INFERENCE OF TOPIC HIERARCHIES

DAVID M. BLEI
PRINCETON UNIVERSITY

THOMAS L. GRIFFITHS
UNIVERSITY OF CALIFORNIA, BERKELEY

MICHAEL I. JORDAN
UNIVERSITY OF CALIFORNIA, BERKELEY

ABSTRACT. We present the nested Chinese restaurant process (nCRP), a stochastic process which assigns probability distributions to infinitely-deep, infinitely-branching trees. We show how this stochastic process can be used as a prior distribution in a nonparametric Bayesian model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on collections of scientific abstracts from several journals. This model exemplifies a recent trend in statistical machine learning—the use of nonparametric Bayesian methods to infer distributions on flexible data structures.

1. INTRODUCTION

For much of its history, computer science has focused on deductive formal methods, allying itself with deductive traditions in areas of mathematics such as set theory, logic, algebra, and combinatorics. There has been accordingly less focus on efforts to develop inductive, empirically-based formalisms in computer science, a gap which became increasingly visible over the years as computers have been required to interact with noisy, difficult-to-characterize sources of data, such as those deriving from physical signals or from human activity. The field of machine learning has aimed to fill this gap, allying itself with inductive traditions in probability and statistics, while focusing on methods that are amenable to analysis as computational procedures.



While machine learning has made significant progress, it has arguably yet to fully exploit the major advances of the past several decades in either computer science or statistics. The most common data structure in machine learning is the ubiquitous "feature vector," a pale shadow of the recursive, compositional data structures that have driven computer science. Recent work in the area of "kernel methods" has aimed at more flexible representations (Cristianini and Shawe-Taylor, 2000), but a kernel is essentially an implicit way to manipulate feature vectors. On the statistical side, there has been relatively little effort by machine learning researchers to exploit the rich literature on stochastic processes, a literature that considers distributions of collections of random variables defined across arbitrary index sets. Stochastic processes have had substantial applications in a number of applied areas, including finance, genetics and operations research (Durrett, 1999), but have had limited impact within computer science.

It is our view that the class of models known as nonparametric Bayesian (NPB) models provides a meeting ground in which some of the most powerful ideas in computer science and statistics can be merged. NPB models are based on prior distributions that are general stochastic processes defined on infinite index sets. As we will see, it is relatively straightforward to exploit this machinery to define probability distributions on recursively-defined data structures. Moreover, the NPB methodology provides inferential algorithms whereby the prior can be combined with data so that a learning process takes place. The output of this learning process is a continually-updated probability distribution on data structures that can be used for a variety of purposes, including visualization, clustering, prediction, and classification.

In this paper we aim to introduce NPB methods to a wider computational audience by showing how NPB methods can be deployed in a specific problem domain of significant current interest—that of learning topic models for collections of text, images and other semi-structured corpora. We define a topic to be a probability distribution across words from some vocabulary. Given an input corpus, a set of documents each consisting of a sequence of words, we want an algorithm to both find useful sets of topics and learn to organize the topics according to a hierarchy in which more abstract topics are near the root of the hierarchy and more concrete topics are near the leaves. We do not wish to choose a specific hierarchy a priori; rather, we expect the learning procedure to infer a distribution on topologies (branching factors, etc.) that places high probability on those hierarchies that best explain the data. In accordance with our goals of working with flexible data structures, we wish to allow this distribution to have its support on arbitrary topologies—there should be no limitations such as a maximum depth or maximum branching factor.


While the examples that we present are in the domain of text collections, we emphasize that our methods are general. A "document" might be an image, in which case a topic would be a recurring pattern of image features. Indeed, topic models that are special cases of the model presented here, i.e., with no hierarchy and with a fixed number of topics, have appeared in the recent vision literature (Fei-Fei and Perona, 2005; Russell et al., 2006). In a genomics setting, "documents" might be sets of DNA sequences, and topics will reflect regulatory motifs. In a collaborative filtering setting, "documents" might be the purchase histories of users, and topics will reflect consumer patterns in the population. Topic models are also natural in semi-structured domains, including HTML or XML documents.

Our approach falls within the general framework of unsupervised learning. The topic hierarchy is treated as a latent variable—a hypothetical hidden structure that is responsible for producing the observed data via a generative probabilistic process. We use the methods of Bayesian statistics to infer a posterior probability distribution on the latent variable (the topic hierarchy) given the observed data. In particular, we can determine posterior probabilities for specific trees and for the topics embedded in them, thus revealing structure in an otherwise unorganized collection of documents.

This approach is to be contrasted with a supervised learning approach in which it is assumed that each document is labeled according to one or more topics defined a priori (Joachims, 2002). In this work, we do not assume that hand-labeled topics are available; indeed, we do not assume that the number of topics or their hierarchical structure is known a priori.

By defining a probabilistic model for documents, we do not define the level of "abstraction" of a topic formally, but rather define a statistical procedure that allows a system designer to capture notions of abstraction that are reflected in usage patterns of the specific corpus at hand. While the content of topics will vary across corpora, the ways in which abstraction interacts with usage will not. As we have noted, a corpus might be a collection of images, a collection of HTML documents or a collection of DNA sequences. Different notions of abstraction will be appropriate in these different domains, but each is expressed and discoverable in the data, making it possible to automatically construct a hierarchy of topics.

We provide an example of an output from our algorithm in Figure 1. The input corpus in this case was a collection of abstracts from the Journal of the ACM (JACM) from the years 1987 to 2004. The figure depicts a topology that is given highest probability by our algorithm, along with the highest probability words from the topics associated with this topology (each node in the tree corresponds to a single topic). As can be seen from the figure, the algorithm has discovered the category of function words at level zero, and has discovered a set of first-level topics that are a reasonably faithful representation of some of the main areas of computer science. The second level provides a further subdivision into more concrete topics.


[Figure 1 appears here: a three-level topic hierarchy learned from the JACM abstracts. The root topic contains function words (the, of, a, is, and, in, to, for, that, we); second-level topics correspond to areas such as graph algorithms, programming languages and logic, networks and routing, databases and constraints, and learning theory; several leaves are annotated with example paper titles.]

FIGURE 1. The topic hierarchy learned from 536 abstracts of the Journal of the ACM (JACM) from 1987–2004. The vocabulary was restricted to the 1,539 terms that occurred in more than five documents, yielding a corpus of 68K words. The learned hierarchy contains 25 topics, and each topic node is annotated with its top five most probable terms. We also present examples of documents associated with a subset of the paths in the hierarchy.

A learned topic hierarchy can be useful for many tasks, including text categorization, text compression, text summarization and language modeling for speech recognition. A commonly-used surrogate for the evaluation of performance in these tasks is predictive likelihood, and we use predictive likelihood to evaluate our methods quantitatively. But we also view our work as making a contribution to the development of methods for the visualization and browsing of documents. The model and algorithm we describe can be used to build a topic hierarchy for a document collection, and that hierarchy can be used to sharpen a user's understanding of the contents of the collection. A qualitative measure of success of our approach is that the same tool should be able to uncover a useful topic hierarchy in different domains based solely on the input data.

There have been many algorithms developed for using text hierarchies in tasks such as classification and browsing (Koller and Sahami, 1997; Dumais and Chen, 2000; McCallum et al., 1999; Chakrabarti et al., 1998), which take advantage of a fixed and given hierarchy. Other work has focused on deriving hierarchies from text (Sanderson and Croft, 1999; Stoica and Hearst, 2004; Cimiano et al., 2005). These methods find hierarchies of individual terms, and make use of side information such as grammars or thesauri that are available (to some degree) in textual domains.

Our method differs from this previous work in several ways. First, rather than learn a hierarchy of terms, we learn a hierarchy of topics, where a topic is a distribution over terms. Second, rather than rely on text-specific side information, our method is readily applied to other domains such as biological data sets, purchasing data, or collections of images. Finally, as a nonparametric Bayesian model, our approach can accommodate data in new and previously undiscovered parts of the tree at inference time. Previous work commits to a single fixed tree for all future data.

This paper is organized as follows. We begin with a review of the necessary background in stochastic processes and nonparametric Bayesian statistics in Section 2. In Section 3, we develop the nested Chinese restaurant process, the prior on topologies that we use in the hierarchical topic model of Section 4. We derive an approximate posterior inference algorithm in Section 5 to learn topic hierarchies from text data. Examples and an empirical evaluation are provided in Section 6. Finally, we present related work and a discussion in Section 7.

2. BACKGROUND

Our approach to topic modeling reposes on several building blocks from stochastic process theory and nonparametric Bayesian statistics, specifically the Chinese restaurant process (Aldous, 1985), stick-breaking processes (Pitman, 2002), and the Dirichlet process mixture (Antoniak, 1974). We briefly review these ideas and the connections between them.



FIGURE 2. A configuration of the Chinese restaurant process. There are an infinite number of tables, each associated with a parameter βi. The customers sit at the tables according to Eq. (1) and each generate data with the corresponding parameter. In this configuration, ten customers have been seated in the restaurant, populating four of the infinite set of tables.

2.1. Dirichlet and beta distributions. Recall that the Dirichlet distribution is a probability distribution on the simplex of non-negative real numbers that sum to one. We write:

U ∼ Dir(α1, α2, . . . , αK ),

for a random vector U distributed as a Dirichlet random variable on the K-simplex, where αi > 0 are parameters. The mean of U is proportional to the parameters:

E[Ui] = αi / ∑_{k=1}^{K} αk,

and the magnitude of the parameters determines the concentration of U around the mean. The specific choice α1 = · · · = αK = 1 yields the uniform distribution on the simplex. Letting αi > 1 yields a unimodal distribution peaked around the mean, and letting αi < 1 yields a distribution that has modes at the corners of the simplex. The beta distribution is a special case of the Dirichlet distribution for K = 2, in which case the simplex is the unit interval (0, 1). In this case we write U ∼ Beta(α1, α2), where U is a scalar.
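As a quick illustration (our own sketch, not part of the original text), the following Python snippet draws from Dirichlet distributions with different parameter settings; the draws concentrate around the mean as the magnitude of the parameters grows.

# Illustrative sketch (ours, not from the paper): the effect of the Dirichlet
# parameters on draws from the 3-simplex.
import numpy as np

rng = np.random.default_rng(0)
for alpha in ([1.0, 1.0, 1.0],      # uniform over the simplex
              [10.0, 10.0, 10.0],   # peaked around the mean (1/3, 1/3, 1/3)
              [0.1, 0.1, 0.1]):     # most mass near the corners of the simplex
    draws = rng.dirichlet(alpha, size=5)
    print(alpha, np.round(draws, 2))
# Each component has mean alpha_i / sum(alpha); larger parameter magnitudes
# concentrate the draws around that mean.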

2.2. Chinese restaurant process. The Chinese restaurant process (CRP) is a single-parameter distribution over partitions of the integers. The distribution can be most easily described by specifying how to draw a sample from it. Consider a restaurant with an infinite number of tables, each with infinite capacity. A sequence of N customers arrive, labeled with the integers 1, . . . , N. The first customer sits at the first table; the nth subsequent customer sits at a table drawn from the following distribution:

(1)    p(occupied table i | previous customers) = ni / (γ + n − 1)
       p(next unoccupied table | previous customers) = γ / (γ + n − 1),


where ni is the number of customers currently sitting at table i, and γ is a real-valued parameter which controls how often, relative to the number of customers in the restaurant, a customer chooses a new table versus sitting with others. After N customers have been seated, the seating plan gives a partition of those customers as illustrated in Figure 2.
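The seating rule in Eq. (1) is straightforward to simulate. The sketch below is our illustration (the function name and structure are ours, not from the paper); it seats customers one at a time and returns the induced partition.

# Sketch (ours) of the CRP seating rule in Eq. (1).
import numpy as np

def sample_crp(num_customers, gamma, rng):
    """Return a list giving the table chosen by each customer."""
    counts = []            # counts[i] = number of customers at table i
    assignments = []
    for n in range(num_customers):
        # An occupied table i is chosen with probability n_i / (gamma + n);
        # a new table with probability gamma / (gamma + n).
        # (Here n customers are already seated.)
        probs = np.array(counts + [gamma], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)       # open a new table
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments

rng = np.random.default_rng(1)
print(sample_crp(10, gamma=1.0, rng=rng))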

With an eye towards Bayesian statistical applications, we assume that each table is endowed with a parameter vector β drawn from a distribution G0. Each customer is associated with the parameter vector at the table at which he sits. The resulting distribution on sequences of parameter values is referred to as a Polya urn model (Johnson and Kotz, 1977).

The Polya urn distribution can be used to define a flexible clustering model. Let the parameters at the tables index a family of probability distributions (for example, the distribution might be a multivariate Gaussian, in which case the parameter would be a mean vector and covariance matrix). Associate customers to data points, and draw each data point from the corresponding probability distribution. This induces a probabilistic clustering of the generated data because customers sitting around each table share the same parameter vector.

This model is in the spirit of a traditional mixture model (Titterington et al., 1985), but is critically different in that the number of tables is unbounded. Data analysis amounts to reversing the generative process to determine a probability distribution on the "seating assignment" of a data set. The underlying CRP lets the data determine the number of clusters (i.e., the number of occupied tables) and further allows new data to be assigned to new clusters (i.e., new tables).

2.3. Stick-breaking constructions. The Dirichlet distribution places a distribution on non-negative K-dimensional vectors whose components sum to one. In this section we discuss a stochastic process that allows K to be unbounded.

Consider a collection of non-negative real numbers {θi}∞i=1 where ∑i θi = 1. We wish to place a probability distribution on such sequences. Given that each such sequence can be viewed as a probability distribution on the positive integers, we obtain a distribution on distributions, i.e., a random probability distribution.

To do this, we use a stick-breaking construction. View the interval (0, 1) as a unit-length stick. Draw a value V1 from a Beta(α1, α2) distribution and break off a fraction V1 of the stick. Let θ1 = V1 denote this first fragment of the stick and let 1 − θ1 denote the remainder of the stick. Continue this procedure recursively, letting θ2 = V2(1 − θ1), and in general define:

θi = Vi ∏_{j=1}^{i−1} (1 − Vj),

where the Vi are an infinite sequence of independent draws from the Beta(α1, α2) distribution. Sethuraman (1994) shows that the resulting sequence {θi} satisfies ∑i θi = 1 with probability one.

In the special case α1 = 1 we obtain a one-parameter stochastic process known as the GEM distribution (Pitman, 2002). Let γ = α2 denote this parameter and denote draws from this distribution as θ ∼ GEM(γ). Large values of γ skew the beta distribution towards zero and yield random sequences that are heavy-tailed, i.e., significant probability tends to be assigned to large integers. Small values of γ yield random sequences that decay more quickly to zero.
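The stick-breaking construction is equally easy to simulate. The following sketch (ours; truncated at a finite number of breaks purely for illustration) draws an approximate sample from GEM(γ).

# Sketch (ours): approximate draw from GEM(gamma) via stick breaking,
# truncated at num_sticks breaks for illustration only.
import numpy as np

def sample_gem(gamma, num_sticks, rng):
    # V_i ~ Beta(1, gamma); theta_i = V_i * prod_{j<i} (1 - V_j)
    v = rng.beta(1.0, gamma, size=num_sticks)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(2)
theta = sample_gem(gamma=2.0, num_sticks=20, rng=rng)
print(np.round(theta, 3), theta.sum())  # sums to just under 1 due to truncation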

2.4. Connections. The GEM distribution and the CRP are closely related. Let θ ∼ GEM(γ) and let Z1, Z2, . . . , ZN be a sequence of indicator variables drawn independently from θ, i.e.,

p(Zn = i | θ) = θi .

This distribution on indicator variables induces a random partition on the integers 1, 2, . . . , N, where the partition reflects indicators that share the same values. It can be shown that this distribution on partitions is the same as the distribution on partitions induced by the CRP (Pitman, 2002). As implied by this result, the GEM parameter γ controls the partition in the same way as the CRP parameter γ.

As with the CRP, we can augment the GEM distribution to consider draws of parameter values. Let {βi} be an infinite sequence of independent draws from a distribution G0 defined on a sample space Ω. Define:

G = ∑_{i=1}^{∞} θi δβi,

where δβi is an atom at location βi and where θ ∼ GEM(γ). The object G is a distribution on Ω; it is a random distribution.

Consider now a finite partition of Ω. Sethuraman (1994) showed that the probability assigned by G to the cells of this partition follows a Dirichlet distribution. Moreover, if we consider all possible finite partitions of Ω, the resulting Dirichlet distributions are consistent with each other. Thus, by appealing to the Kolmogorov consistency theorem (Billingsley, 1995), we can view G as a draw from an underlying stochastic process. This stochastic process is known as the Dirichlet process (Ferguson, 1973).



FIGURE 3. A configuration of the nested Chinese restaurant process illustrated to three levels. Each box represents a restaurant with an infinite number of tables, each of which refers to a unique table in the next level of the tree. In this configuration, five tourists have visited restaurants along four unique paths. Their paths trace a subtree in the infinite tree. (Note that the configuration of customers within each restaurant can be determined by observing the restaurants chosen by customers at the next level of the tree.) In the hLDA model of Section 4, each restaurant is associated with a topic distribution β. Each document is assumed to choose its words from the topic distributions along a randomly chosen path.

Note that if we truncate the stick-breaking process after L − 1 breaks, we obtain a Dirichlet distribution on an L-dimensional vector. The first L − 1 components of this vector manifest the same kind of bias towards larger values for earlier components as the full stick-breaking distribution. However, the last component θL represents the portion of the stick that remains after L − 1 breaks and has less of a bias toward small values than in the untruncated case.

Finally, we will find it convenient to define a two-parameter variant of the GEM distribution that allows control over both the mean and variance of stick lengths. We denote this distribution as GEM(m, π), in which π > 0 and m ∈ (0, 1). In this variant, the stick lengths are defined as Vi ∼ Beta(mπ, (1 − m)π). The standard GEM(γ) is the special case when mπ = 1 and γ = (1 − m)π. Note that its mean and variance are tied through its single parameter.


3. THE NESTED CHINESE RESTAURANT PROCESS

The Chinese restaurant process and related distributions are widely used in nonparametric Bayesian statistics because they make it possible to define statistical models in which observations are assumed to be drawn from an unknown number of classes. However, this kind of model is limited in the structures that it allows to be expressed in data. Analyzing the richly structured data that are common in computer science requires extending this approach. In this section we discuss how similar ideas can be used to define a probability distribution on infinitely-deep, infinitely-branching trees. This distribution is subsequently used as a prior distribution in a hierarchical topic model that identifies documents with paths down the tree.

A tree can be viewed as a nested sequence of partitions. We obtain a distribution on trees by generalizing the CRP to such sequences. Specifically, we define a nested Chinese restaurant process (nCRP) by imagining the following scenario for generating a sample. Suppose there are an infinite number of infinite-table Chinese restaurants in a city. One restaurant is identified as the root restaurant, and on each of its infinite tables is a card with the name of another restaurant. On each of the tables in those restaurants are cards that refer to other restaurants, and this structure repeats infinitely many times.1 Each restaurant is referred to exactly once; thus, the restaurants in the city are organized into an infinitely-branched, infinitely-deep tree. Note that each restaurant is associated with a level in this tree. The root restaurant is at level 1, the restaurants referred to on its tables' cards are at level 2, and so on.

A tourist arrives at the city for a culinary vacation. On the first evening, he enters the root Chinese restaurant and selects a table using the CRP distribution in Eq. (1). On the second evening, he goes to the restaurant identified on the first night's table and chooses a second table using a CRP distribution based on the occupancy pattern of the tables in the second night's restaurant. He repeats this process forever. After M tourists have been on vacation in the city, the collection of paths describes a random subtree of the infinite tree; this subtree has a branching factor of at most M at all nodes. See Figure 3 for an example of the first three levels from such a random tree.
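To make the metaphor concrete, the sketch below (our illustration; the class and function names are ours) draws fixed-depth paths from the nCRP by running an independent CRP at each visited node, instantiating children lazily as tourists first select them.

# Sketch (ours) of drawing fixed-depth paths from the nested CRP.
import numpy as np

class Node:
    def __init__(self):
        self.children = []       # child restaurants visited so far
        self.child_counts = []   # number of tourists who chose each child

def sample_ncrp_path(root, depth, gamma, rng):
    """Walk `depth` levels down from `root`, choosing children by the CRP rule."""
    path, node = [root], root
    for _ in range(depth):
        probs = np.array(node.child_counts + [gamma], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(node.children):          # branch off: create a new child
            node.children.append(Node())
            node.child_counts.append(0)
        node.child_counts[k] += 1
        node = node.children[k]
        path.append(node)
    return path

rng = np.random.default_rng(3)
root = Node()
paths = [sample_ncrp_path(root, depth=2, gamma=1.0, rng=rng) for _ in range(5)]
print(len(root.children), root.child_counts)   # first-level tables used by 5 tourists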

There are many ways to place prior distributions on trees, and our specific choice is based on several considerations. First and foremost, a prior distribution combines with a likelihood to yield a posterior distribution, and we must be able to compute this posterior distribution. In our case, the likelihood will arise from the hierarchical topic model to be described in Section 4. As we will show in Section 5, the specific prior that we propose in this section combines with the likelihood to yield a posterior distribution that is amenable to probabilistic inference. Second, we have retained important aspects of the CRP, in particular the "preferential attachment" dynamics that are built into Eq. (1). Probability structures of this form have been used as models in a variety of applications (Albert and Barabasi, 2002; Barabasi and Reka, 1999; Krapivsky and Redner, 2001; Drinea et al., 2006), and the clustering that they induce makes them a reasonable starting place for a hierarchical topic model.

In fact, these two points are intimately related. The CRP yields an exchangeable distribution across partitions, i.e., the distribution is invariant to the order of the arrival of customers (Pitman, 2002). This exchangeability property makes CRP-based models amenable to posterior inference using Monte Carlo methods (Escobar and West, 1995; MacEachern and Muller, 1998; Neal, 2000), when the exchangeable partition distribution is combined with a data generating distribution.

1 A finite-depth precursor of this model was presented in Blei et al. (2003a).

4. HIERARCHICAL LATENT DIRICHLET ALLOCATION

The nested CRP provides a way to define a prior on tree topologies that does not limit the branching factor or depth of the trees. We can use this distribution as a component of a probabilistic topic model.

The goal of topic modeling is to identify subsets of words that tend to co-occur within documents. Some of the early work on topic modeling derived from latent semantic analysis, an application of the singular value decomposition in which "topics" are viewed post hoc as the basis of a low-dimensional subspace (Deerwester et al., 1990). Subsequent work treated topics as probability distributions over words and used likelihood-based methods to estimate these distributions from a corpus (Hofmann, 1999b). In both of these approaches, the interpretation of "topic" differs in key ways from the clustering metaphor because the same word can be given high probability (or weight) under multiple topics. This gives topic models the capability to capture notions of polysemy (e.g., "bank" can occur with high probability in both a finance topic and a waterways topic). Probabilistic topic models were given a fully Bayesian treatment in the latent Dirichlet allocation (LDA) model (Blei et al., 2003b).

Topic models such as LDA treat topics as a "flat" set of probability distributions, with no direct relationship between one topic and another. While these models can be used to recover a set of topics from a corpus, they fail to indicate the level of abstraction of a topic, or how the various topics are related. The model that we present in this section builds on the nCRP to define a hierarchical topic model. This model arranges the topics into a tree, with the desideratum that more general topics should appear near the root and more specialized topics should appear near the leaves (Hofmann, 1999a). Having defined such a model, we use probabilistic inference to simultaneously identify the topics and the relationships between them.

Our approach to defining a hierarchical topic model is based on identifying documents with the paths generated by the nCRP. We augment the nCRP in two ways to obtain a generative model for documents. First, we associate a topic, i.e., a probability distribution across words, with each node in the tree. A path in the tree thus picks out an infinite collection of topics. Second, given a choice of path, we use the GEM distribution to define a probability distribution on the topics along this path. Given a draw from a GEM distribution, a document is generated by repeatedly selecting topics according to the probabilities defined by that draw, and then drawing each word from the probability distribution defined by its selected topic.

The goal of finding a topic hierarchy at different levels of abstraction is distinct from the problem of hierarchical clustering (Zamir and Etzioni, 1998; Larsen and Aone, 1999; Vaithyanathan and Dom, 2000; Duda et al., 2000; Hastie et al., 2001; Heller and Ghahramani, 2005). Hierarchical clustering treats each data point as a leaf in a tree, and merges similar data points up the tree until all are merged into a root node. Thus, internal nodes represent summaries of the data below, which, in this setting, would yield distributions across words that share high probability words with their children.

In the hierarchical topic model, the internal nodes are not summaries of their children. Rather, the internal nodes reflect the shared terminology of the documents assigned to the paths that contain them. This can be seen in Figure 1, where the high probability words of a node are distinct from the high probability words of its children.

More formally, consider the infinite tree defined by the nCRP and let cd denote the path through that tree for the dth customer (i.e., document). In the hierarchical LDA (hLDA) model, the documents in a corpus are assumed drawn from the following generative process:

(1) For each table k ∈ T in the infinite tree,
    (a) Draw a topic βk ∼ Dirichlet(η).
(2) For each document d ∈ {1, 2, . . . , D},
    (a) Draw cd ∼ nCRP(γ).
    (b) Draw a distribution over levels in the tree, θd | m, π ∼ GEM(m, π).
    (c) For each word n,
        (i) Choose a level zd,n | θd ∼ Mult(θd).
        (ii) Choose a word wd,n | zd,n, cd, β ∼ Mult(βcd[zd,n]), which is parameterized by the topic in position zd,n on the path cd.


This generative process defines a probability distribution across possible corpora.
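The following sketch is our own minimal simulation of this generative process, truncated to a fixed depth and vocabulary for simplicity; it is meant only to make the steps above concrete, not to reproduce the paper's infinite-depth model or its inference.

# Sketch (ours) of the hLDA generative process, truncated to `depth` levels.
import numpy as np

def generate_corpus(num_docs, words_per_doc, vocab_size, depth,
                    gamma, eta, m, pi, rng):
    root = {"children": [], "counts": [], "topic": rng.dirichlet([eta] * vocab_size)}
    corpus = []
    for _ in range(num_docs):
        # (2a) draw a path c_d through the tree with the nested CRP
        path, node = [root], root
        for _ in range(depth - 1):
            probs = np.array(node["counts"] + [gamma], dtype=float)
            probs /= probs.sum()
            k = rng.choice(len(probs), p=probs)
            if k == len(node["children"]):      # lazily create a node and its topic
                node["children"].append({"children": [], "counts": [],
                                         "topic": rng.dirichlet([eta] * vocab_size)})
                node["counts"].append(0)
            node["counts"][k] += 1
            node = node["children"][k]
            path.append(node)
        # (2b) draw level proportions theta_d ~ GEM(m, pi), truncated at `depth`
        v = rng.beta(m * pi, (1 - m) * pi, size=depth)
        v[-1] = 1.0                              # remaining mass goes to the last level
        theta = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
        theta = theta / theta.sum()
        # (2c) draw each word: choose a level, then a word from that level's topic
        levels = rng.choice(depth, size=words_per_doc, p=theta)
        words = [rng.choice(vocab_size, p=path[z]["topic"]) for z in levels]
        corpus.append(words)
    return corpus

rng = np.random.default_rng(4)
docs = generate_corpus(num_docs=20, words_per_doc=50, vocab_size=100,
                       depth=3, gamma=1.0, eta=0.1, m=0.5, pi=10.0, rng=rng)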

It is important to emphasize that our approach is an unsupervised learning approach in which the probabilistic components that we have defined are latent variables. That is, we do not assume that topics are predefined, nor do we assume that the nested partitioning of documents or the allocation of topics to levels are predefined. We will infer these entities from a Bayesian computation in which a posterior distribution is obtained from conditioning on a corpus and computing probabilities for all latent variables.

As we will see experimentally, there is statistical pressure in the posterior to place more general topics near the root of the tree and to place more specialized topics further down in the tree. To see this, note that each path in the tree includes the root node. Given that the GEM distribution tends to assign relatively large probabilities to small integers, there will be a relatively large probability for documents to select the root node when generating words. Therefore, to explain an observed corpus, the topic at the root node will place high probability on words that are useful across all the documents.

Moving down in the tree, recall that each document is assigned to a single path. Thus, the first level below the root induces a coarse partition on the documents, and the topics at that level will place high probability on words that are useful within the corresponding subsets. As we move still further down, the nested partitions of documents become finer. Consequently, the corresponding topics will be more specialized to the particular documents in those paths.

We have presented the model as a two-phase process: an infinite set of topics are generated and assigned to all of the nodes of an infinite tree, and then documents are obtained by selecting nodes in the tree and drawing words from the corresponding topics. It is also possible, however, to conceptualize a "lazy" procedure in which a topic is generated only when a node is first selected. In particular, consider an empty tree (i.e., containing no topics) and consider generating the first document. We select a path and then repeatedly select nodes along that path in order to generate words. A topic is generated at a node when that node is first selected, and subsequent selections of the node reuse the same topic.

After n words have been generated, at most n nodes will have been visited and at most n topics will have been generated. The (n + 1)th word in the document can come from one of the previously generated topics or it can come from a new topic. Similarly, suppose that d documents have previously been generated. The (d + 1)th document can follow one of the paths laid down by an earlier document and select only "old" topics, or it can branch off at any point in the tree and generate "new" topics along the new branch.


This discussion highlights the nonparametric nature of our model. Rather than describing a corpus by using a probabilistic model involving a fixed set of parameters, we assume a model in which the number of parameters can grow as the corpus grows, both within documents and across documents. New documents can spark new subtopics or new specializations of existing subtopics.

Finally, we note that hLDA is the simplest way to exploit the nested CRP, i.e., a flexible hierarchy of distributions, in the topic modeling framework. In a more complicated model, one could consider a variant of hLDA where each document exhibits multiple paths through the tree. This can be modeled using a two-level distribution for word generation: first choose a path through the tree, and then choose a level for the word.

Recent extensions to topic models can also be adapted to make use of a flexible topic hierarchy. As examples, in the dynamic topic model (Blei and Lafferty, 2006) the documents are time stamped and the underlying topics change over time; in the author-topic model (Rosen-Zvi et al., 2004) the authorship of the documents affects which topics they exhibit. This said, some extensions are more easily adaptable than others. In the correlated topic model, the topic proportions exhibit a covariance structure (Blei and Lafferty, 2007). This is achieved by replacing a Dirichlet distribution with a logistic normal, which makes the application of nonparametric Bayesian extensions less direct.

5. PROBABILISTIC INFERENCE

With the hLDA model in hand, our goal is to perform posterior inference, i.e., to "reverse" the generative process of documents described above for estimating the hidden topical structure of a document collection. We have constructed a joint distribution of hidden variables and observations—the latent topic structure and observed documents—by combining prior expectations about the kinds of tree topologies we will encounter with a generative process for producing documents given a particular topology. We are now interested in the distribution of the hidden structure conditioned on having seen the data, i.e., the distribution of the underlying topic structure that might have generated an observed collection of documents. Finding this posterior distribution for different kinds of data and models is a central problem in Bayesian statistics. See Bernardo and Smith (1994) and Gelman et al. (1995) for general introductions to Bayesian statistics.

In our nonparametric setting, we must find a posterior distribution on countably infinite collections of objects—hierarchies, path assignments, and level allocations of words—given a collection of documents. Moreover, we need to be able to do this using the finite resources of the computer. Not surprisingly, the posterior distribution for hLDA is not available in closed form. We must appeal to an approximation.

We develop a Markov chain Monte Carlo (MCMC) algorithm to approximate the posterior for hLDA. In MCMC, one samples from a target distribution on a set of variables by constructing a Markov chain that has the target distribution as its stationary distribution (Robert and Casella, 2004). One then samples from the chain for sufficiently long that it approaches the target, collects the sampled states thereafter, and uses those collected states to estimate the target. This approach is particularly straightforward to apply to latent variable models, where we take the state space of the Markov chain to be the set of values that the latent variables can take on, and the target distribution is the conditional distribution of these latent variables given the observed data.

The particular MCMC algorithm that we present in this paper is a Gibbs sampling algorithm (Geman and Geman, 1984; Gelfand and Smith, 1990). In a Gibbs sampler each latent variable is iteratively sampled conditioned on the observations and all the other latent variables. We employ collapsed Gibbs sampling (Liu, 1994), in which we marginalize out some of the latent variables to speed up the convergence of the chain. Collapsed Gibbs sampling for topic models (Griffiths and Steyvers, 2004) has been widely used in a number of topic modeling applications (McCallum et al., 2004; Rosen-Zvi et al., 2004; Mimno and McCallum, 2007; Dietz et al., 2007; Newman et al., 2007).

In hLDA, we sample the per-document paths through the tree cd and the per-word level allocations to topics in those paths zd,n. We marginalize out the parameters of the topics themselves βi and the per-document topic proportions θd. The state of the Markov chain is illustrated, for a single document, in Figure 4. (The particular assignments illustrated in the figure are taken at the approximate mode of the hLDA model posterior conditioned on abstracts from the JACM.)

Thus, we approximate the posterior p(c1:D, z1:D | γ, η, m, π, w1:D). The hyperparameter γ reflects the tendency of the customers in each restaurant to share tables, η reflects the expected variance of the underlying topics (e.g., η ≪ 1 will tend to choose topics with fewer high-probability words), and m and π reflect our expectation about the allocation of words to levels within a document. The hyperparameters can be fixed according to the constraints of the analysis and prior expectation about the data, or inferred as described in Section 5.4.

Intuitively, the CRP parameter γ and topic prior η provide control over the size of the inferred tree. For example, a model with large γ and small η will tend to find a tree with more topics. The small η encourages fewer words to have high probability in each topic; thus, the posterior requires more topics to explain the data. The large γ increases the likelihood that documents will choose new paths when traversing the nested CRP.



FIGURE 4. A single state of the Markov chain in the Gibbs sampler for the abstract of "A new approach to the maximum-flow problem" (Goldberg and Tarjan, 1986). The document is associated with a path through the hierarchy cd, and each node in the hierarchy is associated with a distribution over terms. (The five most probable terms are illustrated.) Finally, each word in the abstract wd,n is associated with a level in the path through the hierarchy zd,n, with 0 being the highest level and 2 being the lowest. The Gibbs sampler iteratively draws cd and zd,n for all words in all documents (see Section 5).



The GEM parameter m reflects the proportion of general words relative to specific words, and the GEM parameter π reflects how strictly we expect the documents to adhere to these proportions. A larger value of π enforces the notions of generality and specificity that lead to more interpretable trees.

The remainder of this section is organized as follows. First, we outline the two main steps in the algorithm: the sampling of level allocations and the sampling of path assignments. We then combine these steps into an overall algorithm. Next, we present prior distributions for the hyperparameters of the model and describe posterior inference for the hyperparameters. Finally, we outline how to assess the convergence of the sampler and approximate the mode of the posterior distribution.

5.1. Sampling level allocations. Given the current path assignments, we need to sample the level allocation variable zd,n for word n in document d from its distribution given the current values of all other variables:

(2)    p(zd,n | z−(d,n), c, w, m, π, η) ∝ p(zd,n | zd,−n, m, π) p(wd,n | z, c, w−(d,n), η),

where z−(d,n) and w−(d,n) are the vectors of level allocations and observed words leaving out zd,n and wd,n respectively. We will use similar notation whenever items are left out from an index set; for example, zd,−n denotes the level allocations in document d, leaving out zd,n.

The first term in Eq. (2) is a distribution over levels. This distribution has an infinite number of components, so we sample in stages. First, we sample from the distribution over the space of levels that are currently represented in the rest of the document, i.e., up to max(zd,−n), and a level deeper than that level. The first components of this distribution are, for k ≤ max(zd,−n),

p(zd,n = k | zd,−n, m, π) = E[ Vk ∏_{j=1}^{k−1} (1 − Vj) | zd,−n, m, π ]

    = E[Vk | zd,−n, m, π] ∏_{j=1}^{k−1} E[1 − Vj | zd,−n, m, π]

    = ((1 − m)π + #[zd,−n = k]) / (π + #[zd,−n ≥ k]) × ∏_{j=1}^{k−1} (mπ + #[zd,−n > j]) / (π + #[zd,−n ≥ j]),

where #[·] counts the elements of an array satisfying a given condition.

The second term in Eq. (2) is the probability of a given word based on a possible assignment. From the assumption that the topic parameters βi are generated from a Dirichlet distribution with hyperparameter η, we obtain

(3)    p(wd,n | z, c, w−(d,n), η) = (#[z−(d,n) = zd,n, c = cd, w−(d,n) = wd,n] + η) / (#[z−(d,n) = zd,n, c = cd] + V η),

where V is the size of the vocabulary. This is the smoothed frequency of seeing word wd,n allocated to the topic at level zd,n of the path cd.

The last component of the distribution over topic assignments is

p(zd,n > max(zd,−n) | zd,−n, w, m, π, η) = 1 − ∑_{j=1}^{max(zd,−n)} p(zd,n = j | zd,−n, w, m, π, η).

If the last component is sampled, then we sample from a Bernoulli distribution for increasing values of ℓ, starting with ℓ = max(zd,−n) + 1, until we determine zd,n:

p(zd,n = ℓ | zd,−n, zd,n > ℓ − 1, w, m, π, η) = (1 − m) p(wd,n | z, c, w−(d,n), η)
p(zd,n > ℓ | zd,−n, zd,n > ℓ − 1) = 1 − p(zd,n = ℓ | zd,−n, zd,n > ℓ − 1, w, m, π, η).

Note that this changes the maximum level when resampling subsequent level assignments.
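The sketch below (ours; it assumes simple count arrays rather than any particular data structure from the paper) computes the unnormalized probability of each currently represented level for a single word, combining the truncated GEM term above with the smoothed word probability of Eq. (3).

# Sketch (ours): unnormalized level distribution for one word, following
# Eqs. (2) and (3). other_levels holds z_{d,-n}; topic_word_counts[k][word]
# and topic_counts[k] hold counts for the topics on the document's path,
# excluding the word being resampled.
import numpy as np

def level_scores(other_levels, word, topic_word_counts, topic_counts,
                 m, pi, eta, vocab_size):
    other = np.asarray(other_levels)
    max_level = other.max()
    scores = []
    log_stay = 0.0   # running sum of log E[1 - V_j | z_{d,-n}]
    for k in range(max_level + 1):
        stop = ((1 - m) * pi + np.sum(other == k)) / (pi + np.sum(other >= k))
        prior = np.exp(log_stay) * stop
        likelihood = ((topic_word_counts[k][word] + eta)
                      / (topic_counts[k] + vocab_size * eta))
        scores.append(prior * likelihood)
        log_stay += np.log((m * pi + np.sum(other > k)) / (pi + np.sum(other >= k)))
    # The remaining prior mass belongs to levels deeper than max_level and is
    # handled by the Bernoulli procedure described in the text.
    return np.array(scores)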

5.2. Sampling paths. Given the level allocation variables, we need to sample the path associated with each document conditioned on all other paths and the observed words. We appeal to the fact that max(zd) is finite, and are only concerned with paths of that length:

(4)    p(cd | w, c−d, z, η, γ) ∝ p(cd | c−d, γ) p(wd | c, w−d, z, η).

This expression is an instance of Bayes's theorem with p(wd | c, w−d, z, η) as the probability of the data given a particular choice of path, and p(cd | c−d, γ) as the prior on paths implied by the nested CRP. The probability of the data is obtained by integrating over the multinomial parameters, which gives a ratio of normalizing constants for the Dirichlet distribution:

p(wd | c, w−d, z, η) = ∏_{ℓ=1}^{max(zd)} [ Γ(∑_w #[z−d = ℓ, c−d = cd, w−d = w] + V η) / ∏_w Γ(#[z−d = ℓ, c−d = cd, w−d = w] + η) ] × [ ∏_w Γ(#[z = ℓ, c = cd, w = w] + η) / Γ(∑_w #[z = ℓ, c = cd, w = w] + V η) ],

where we use the same notation for counting over arrays of variables as above. Note that the path must be drawn as a block, because its value at each level depends on its value at the previous level. The set of possible paths corresponds to the union of the set of existing paths through the tree, each represented by a leaf, with the set of possible novel paths, each represented by an internal node.
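This product of ratios of Dirichlet normalizing constants is most safely computed in log space. The sketch below (ours, using gammaln from scipy and assuming per-level count arrays for one candidate path) scores a single path; a full sampler would score every existing leaf and every possible novel branch and then sample proportionally.

# Sketch (ours): log p(w_d | c, w_{-d}, z, eta) for one candidate path.
# counts_excl[l][w] = #[z_{-d} = l, c_{-d} = candidate, w_{-d} = w]
# counts_incl[l][w] = the same counts with document d's words added back in.
import numpy as np
from scipy.special import gammaln

def log_path_likelihood(counts_excl, counts_incl, eta):
    total = 0.0
    for excl, incl in zip(counts_excl, counts_incl):   # one factor per level
        v_eta = eta * len(excl)
        total += gammaln(excl.sum() + v_eta) - gammaln(incl.sum() + v_eta)
        total += np.sum(gammaln(incl + eta) - gammaln(excl + eta))
    return total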


5.3. Summary of Gibbs sampling algorithm. With these conditional distributions in hand, we specify the full Gibbs sampling algorithm. Given the current state of the sampler, {c(t)1:D, z(t)1:D}, we iteratively sample each variable conditioned on the rest.

(1) For each document d ∈ {1, . . . , D}:
    (a) Randomly draw c(t+1)d from Eq. (4).
    (b) Randomly draw z(t+1)d,n from Eq. (2) for each word n ∈ {1, . . . , Nd}.

The stationary distribution of the corresponding Markov chain is the conditional distribution of the latent variables in the hLDA model given the corpus. After running the chain for sufficiently many iterations that it can approach its stationary distribution (the "burn-in"), we can collect samples at intervals selected to minimize autocorrelation, and approximate the true posterior with the corresponding empirical distribution.
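Putting the pieces together, one sweep of the sampler resamples each document's path and then each word's level. The skeleton below is our sketch of the outer loop; sample_path and sample_levels stand in for the conditionals of Sections 5.2 and 5.1, and are not implemented here.

# Skeleton (ours) of the collapsed Gibbs sampler; the two sampling methods
# are placeholders for the conditional distributions in Eqs. (2) and (4).
def run_gibbs(state, corpus, num_iters, burn_in, thin, rng):
    samples = []
    for t in range(num_iters):
        for d, doc in enumerate(corpus):
            state.sample_path(d, doc, rng)       # draw c_d from Eq. (4)
            state.sample_levels(d, doc, rng)     # draw z_{d,n} from Eq. (2)
        if t >= burn_in and (t - burn_in) % thin == 0:
            samples.append(state.snapshot())     # keep a copy of (c, z)
    return samples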

Although this algorithm is guaranteed to converge in the limit, it is difficult to say something more definitive about the speed of the algorithm independent of the data being analyzed. In hLDA, we sample a leaf from the tree for each document cd and a level assignment for each word zd,n. As described above, the number of items from which each is sampled depends on the current state of the hierarchy and other level assignments in the document. Two data sets of equal size may induce different trees and yield different running times for each iteration of the sampler. For the corpora analyzed below in Section 6.2, the Gibbs sampler averaged 0.001 seconds per document for the JACM data and Psychological Review data, and 0.006 seconds per document for the Proceedings of the National Academy of Sciences data.2 Quantifying the complexity of the data and how it relates to the complexity of the approximate inference algorithm is an area of future work.

5.4. Sampling the hyperparameters. The values of hyperparameters are generally unknown a priori. We include them in the inference process by endowing them with prior distributions,

m ∼ Beta(α1, α2)

π ∼ Exponential(α3)

γ ∼ Gamma(α4, α5)

η ∼ Exponential(α6).

2 Timings were measured with the Gibbs sampler running on a 2.2GHz Opteron 275 processor.



FIGURE 5. (Left) The complete log likelihood of Eq. (5) for the first 2000 iterations of the Gibbs sampler run on the JACM corpus of Section 6.2. (Right) The autocorrelation function (ACF) of the complete log likelihood (with confidence interval) for the remaining 8000 iterations. The autocorrelation decreases rapidly as a function of the lag between samples.

These priors also contain parameters, but the resulting inferences are less influenced by these parameters than they are by fixing the original parameters to specific values (Bernardo and Smith, 1994).

To incorporate this extension into the Gibbs sampler, we interleave Metropolis-Hastings (MH) steps between iterations of the Gibbs sampler to obtain new values of m, π, γ, and η. This preserves the integrity of the Markov chain, although it may mix more slowly than the collapsed Gibbs sampler without the MH updates (Robert and Casella, 2004).

5.5. Assessing convergence and approximating the mode. Practical applications must address the issue of approximating the mode of the distribution on trees and assessing convergence of the Markov chain. We can obtain information about both by examining the log probability of each sampled state. For a particular sample, i.e., a configuration of the latent variables, we compute the log probability of that configuration and the observations, conditioned on the hyperparameters:

(5)    L(t) = log p(c(t)1:D, z(t)1:D, w1:D | γ, η, m, π).

With this statistic, we can approximate the mode of the posterior by choosing the state with the highest log probability. Moreover, we can assess convergence of the chain by examining the autocorrelation of L(t). Figure 5 (right) illustrates the autocorrelation as a function of the number of iterations between samples (the "lag") when modeling the JACM corpus described in Section 6.2. The chain was run for 10,000 iterations; 2000 iterations were discarded as burn-in.
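Both diagnostics reduce to simple computations on the trace of L(t). The sketch below (ours) selects the sampled state with the highest log probability as the approximate mode and estimates the autocorrelation of the trace at a range of lags.

# Sketch (ours): approximate the posterior mode and assess mixing from the
# trace of log probabilities L(t) collected after burn-in.
import numpy as np

def approximate_mode(states, log_probs):
    return states[int(np.argmax(log_probs))]

def autocorrelation(log_probs, max_lag=40):
    x = np.asarray(log_probs) - np.mean(log_probs)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - lag], x[lag:]) / denom
                     for lag in range(max_lag + 1)])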

Figure 5 (left) illustrates Eq. (5) for the burn-in iterations. Gibbs samplers stochastically climb the posterior distribution surface to find an area of high posterior probability, and then explore its curvature through sampling. In practice, one usually restarts this procedure a handful of times and chooses the local mode which has highest posterior likelihood (Robert and Casella, 2004).

Despite the lack of theoretical guarantees, Gibbs sampling is appropriate for the kind of data analysis for which hLDA and many other latent variable models are tailored. Rather than try to understand the full surface of the posterior, the goal of latent variable modeling is to find a useful representation of complicated high-dimensional data, and a local mode of the posterior found by Gibbs sampling often provides such a representation. In the next section, we will assess hLDA qualitatively, through visualization of summaries of the data, and quantitatively, by using the latent variable representation to provide a predictive model of text.

6. EXAMPLES AND EMPIRICAL RESULTS

We present experiments analyzing both simulated and real text data to demonstrate the application of hLDA and its corresponding Gibbs sampler.

6.1. Analysis of simulated data. In Figure 6, we depict the hierarchies and allocations for ten simulated datasets drawn from an hLDA model. For each dataset, we draw 100 documents of 250 words each. The vocabulary size is 100, and the hyperparameters are fixed at η = 0.005 and γ = 1. In these simulations, we truncated the stick-breaking procedure at three levels, and simply took a Dirichlet distribution over the proportion of words allocated to those levels. The resulting hierarchies shown in Figure 6 illustrate the range of structures on which the prior assigns probability.

In the same figure, we illustrate the estimated mode of the posterior distribution across the hierarchy and allocations for the ten datasets. We exactly recover the correct hierarchies, with only two errors. In one case, the error is a single wrongly allocated path. In the other case, the inferred mode has higher posterior probability than the true tree structure (due to finite data).

In general we cannot expect to always find the exact tree. This is dependent on the size of the dataset, and how identifiable the topics are. Our choice of small η yields topics that are relatively sparse and (probably) very different from each other. Trees will not be as easy to identify in datasets which exhibit polysemy and similarity between topics.



FIGURE 6. Inferring the mode of the posterior hierarchy from simulated data. For each dataset, the true hierarchy and the inferred posterior mode are shown. See Section 6.1.



6.2. Hierarchy discovery in scientific abstracts. Given a document collection, one is typically interested in examining the underlying tree of topics at the mode of the posterior. As described above, our inferential procedure yields a tree structure by assembling the unique subset of paths contained in c1, . . . , cD at the approximate mode of the posterior.

For a given tree, we can examine the topics that populate the tree. Given the assignment of words to levels and the assignment of documents to paths, the probability of a particular word at a particular node is roughly proportional to the number of times that word was generated by the topic at that node. More specifically, the mean probability of a word w in a topic at level ℓ of path p is given by

(6)    p(w | z, c, w, η) = (#[z = ℓ, c = p, w = w] + η) / (#[z = ℓ, c = p] + V η).
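Eq. (6) is a smoothed ratio of counts taken at the approximate mode. The sketch below (ours; node_word_counts is an assumed array of counts #[z = ℓ, c = p, w = ·] for one node) computes these probabilities and returns the top terms used to annotate the figures.

# Sketch (ours): expected topic-term probabilities at a node, Eq. (6), and the
# top terms used to annotate the figures.
import numpy as np

def topic_terms(node_word_counts, eta, vocab, top_n=5):
    counts = np.asarray(node_word_counts, dtype=float)
    probs = (counts + eta) / (counts.sum() + eta * len(counts))
    order = np.argsort(-probs)[:top_n]
    return [(vocab[i], probs[i]) for i in order]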

Using these quantities, the hLDA model can be used for analyzing collections of scientific abstracts, recovering the underlying hierarchical structure appropriate to a collection, and visualizing that hierarchy of topics for a better understanding of the structure of the corpora. We demonstrate the analysis of three different collections of journal abstracts under hLDA.

In these analyses, as above, we truncate the stick-breaking procedure at three levels, facilitating visualization of the results. The topic Dirichlet hyperparameters were fixed at η = {2.0, 1.0, 0.5}, which encourages many terms in the high-level distributions, fewer terms in the mid-level distributions, and still fewer terms in the low-level distributions. The nested CRP parameter γ was fixed at 1.0. The GEM parameters were fixed at m = 0.5 and π = 100. This strongly biases the level proportions to place more mass at the higher levels of the hierarchy.

In Figure 1, we illustrate the approximate posterior mode of a hierarchy estimated from a collection of 536 abstracts from the JACM. The tree structure illustrates the ensemble of paths assigned to the documents. In each node, we illustrate the top five words sorted by expected posterior probability, computed from Eq. (6). Several leaves are annotated with document titles. For each leaf, we chose the five documents assigned to its path that have the highest numbers of words allocated to the bottom level.

The model has found the function words in the dataset, assigning words like "the," "of," "or," and "and" to the root topic. In its second level, the posterior hierarchy appears to have captured some of the major subfields in computer science, distinguishing between databases, algorithms, programming languages and networking. In the third level, it further refines those fields. For example, it delineates between the verification area of networking and the queuing area.


In Figure 7, we illustrate an analysis of a collection of 1,272 psychology abstracts from Psychological Review from 1967 to 2003. Again, we have discovered an underlying hierarchical structure of the field. The top node contains the function words; the second level delineates between large subfields such as behavioral, social and cognitive psychology; the third level further refines those subfields.

Finally, in Figure 8, we illustrate a portion of the analysis of a collection of 12,913 abstracts from the Proceedings of the National Academy of Sciences from 1991 to 2001. An underlying hierarchical structure of the content of the journal has been discovered, dividing articles into groups such as neuroscience, immunology, population genetics and enzymology.

In all three of these examples, the same posterior inference algorithm and essentially the same hyperparameters yield very different tree structures for different corpora. Models with a fixed tree structure would force us to commit to one in advance of seeing the data. The nested Chinese restaurant process at the heart of hLDA provides a flexible solution to this difficult problem.

6.3. Comparison to LDA. In this section we present experiments comparing hLDA to its non-hierarchical precursor, LDA. We use the infinite-depth hLDA model; the per-document distribution over levels is not truncated. We use predictive held-out likelihood to compare the two approaches quantitatively, and we present examples of LDA topics in order to provide a qualitative comparison of the methods. LDA has been shown to yield good predictive performance relative to competing unigram language models, and it has also been argued that the topic-based analysis provided by LDA represents a qualitative improvement on competing language models (Blei et al., 2003b; Griffiths and Steyvers, 2006). Thus LDA provides a natural point of comparison.

On the other hand, there are several issues that must be borne in mind in comparing hLDA to LDA. First, in LDA the number of topics is a fixed parameter; thus, a model selection procedure is required to choose the number of topics for LDA. Second, given a set of topics, LDA places no constraints on the usage of the topics by documents in the corpus; a document can place an arbitrary probability distribution on the topics. In hLDA, on the other hand, a document can only access the topics that lie along a single path in the tree. Thus LDA is significantly more flexible than hLDA.

This flexibility of LDA implies that for large corpora we can expect LDA to dominate hLDA in terms of predictive performance (assuming that the model selection problem is resolved satisfactorily and assuming that hyperparameters are set in a manner that controls overfitting). Thus, rather than trying to simply optimize for predictive performance within the hLDA family and within the LDA family, we have instead opted to first run hLDA to obtain a posterior distribution over the number of topics, and then to conduct multiple runs of LDA for a range of topic cardinalities bracketing the hLDA result. This provides an hLDA-centric assessment of the consequences (for predictive performance) of using a hierarchy versus a flat model.

FIGURE 7. A portion of the hierarchy learned from the 1,272 abstracts of Psychological Review from 1967–2003. The vocabulary was restricted to the 1,971 terms that occurred in more than five documents, yielding a corpus of 136K words. The learned hierarchy, of which only a portion is illustrated, contains 52 topics.

FIGURE 8. A portion of the hierarchy learned from the 12,913 abstracts of the Proceedings of the National Academy of Sciences from 1991–2001. The vocabulary was restricted to the 7,200 terms that occurred in more than five documents, yielding a corpus of 2.3M words. The learned hierarchy, of which only a portion is illustrated, contains 56 topics. Note that the γ parameter was fixed at a smaller value, to provide a reasonably sized topic hierarchy with the significantly larger corpus.



We used predictive held-out likelihood as a measure of performance. The procedure is to divide the corpus into D1 observed documents and D2 held-out documents, and to approximate the conditional probability of the held-out set given the training set,

(7) \quad p(\mathbf{w}^{\text{held-out}}_1, \ldots, \mathbf{w}^{\text{held-out}}_{D_2} \mid \mathbf{w}^{\text{obs}}_1, \ldots, \mathbf{w}^{\text{obs}}_{D_1}, M),

where M represents a model, either LDA or hLDA. We employed collapsed Gibbs sampling for both models, placing priors on the hyperparameters and integrating over them. We used the same prior for those hyperparameters that exist in both models.

To approximate this predictive quantity, we run two samplers. First, we collect 100 samples from the posterior distribution of latent variables given the observed documents, taking samples 100 iterations apart and using a burn-in of 2000 samples. For each of these outer samples, we collect 800 samples of the latent variables given the held-out documents and approximate their conditional probability given the outer sample with the geometric mean (Kass and Raftery, 1995). Finally, these conditional probabilities are averaged to obtain an approximation to Eq. (7).
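To make the aggregation step concrete, the sketch below combines per-sample held-out log probabilities in the manner just described: a geometric mean over the inner samples for each outer sample, followed by an arithmetic average over the outer samples, computed in log space for numerical stability. The input array is hypothetical; producing it requires the model-specific samplers for LDA or hLDA.

    import numpy as np
    from scipy.special import logsumexp

    def heldout_log_likelihood(log_probs):
        """Approximate log p(held-out docs | observed docs), as in Eq. (7).

        log_probs[i, j] is the log probability of the held-out documents under
        the j-th inner sample drawn for the i-th outer sample (here a 100 x 800
        array).
        """
        per_outer = log_probs.mean(axis=1)                    # log of the geometric mean over inner samples
        return logsumexp(per_outer) - np.log(len(per_outer))  # log of the average over outer samples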

Figure 9 illustrates the five-fold cross-validated held-out likelihood for hLDA and LDA on the JACM corpus. The figure also provides a visual indication of the mean and variance of the posterior distribution over topic cardinality for hLDA; the mode is approximately a hierarchy with 140 topics. For LDA, we plot the predictive likelihood for a range of topic cardinalities around this value.

We see that at each fixed topic cardinality in this range, hLDA provides significantly better predictive performance than LDA. As discussed above, we eventually expect LDA to dominate hLDA for large numbers of topics. In a large range near the hLDA mode, however, the constraint that documents pick topics along a single path in the hierarchy yields superior performance. This suggests that the hierarchy is useful not only for interpretation, but also for capturing predictive statistical structure.

To give a qualitative sense of the relative interpretability of the topics found using the two approaches, Figure 10 illustrates ten LDA topics chosen randomly from a 50-topic model. As these examples make clear, the LDA topics are generally less interpretable than the hLDA topics. In particular, function words are given high probability throughout. In practice, to sidestep this issue, corpora are often stripped of function words before fitting an LDA model. While this is a reasonable ad hoc solution for (English) text, it is not a general solution that can be used for non-text corpora, such as visual scenes. Even more importantly, there is no notion of abstraction in the LDA topics. The notion of multiple levels of abstraction requires a model such as hLDA.

FIGURE 9. The held-out predictive log likelihood for hLDA compared to the same quantity for LDA as a function of the number of topics. The shaded blue region is centered at the mean number of topics in the hierarchies found by hLDA (and has width equal to twice the standard error).

FIGURE 10. The five most probable words for each of ten randomly chosen topics from an LDA model fit to fifty topics.



In summary, if interpretability is the goal, then there are strong reasons to prefer hLDA to LDA. If predictive performance is the goal, then hLDA may well remain the preferred method when there is a constraint that a relatively small number of topics be used; when there is no such constraint, LDA may be preferred. These comments also suggest, however, that an interesting direction for further research is to explore the feasibility of a model that combines the defining features of the LDA and hLDA models. As we described in Section 4, it may be desirable to consider an hLDA-like hierarchical model that allows each document to exhibit multiple paths down the tree. This might be appropriate for collections of long documents, such as full-text articles, which tend to be more heterogeneous than short abstracts.

7. DISCUSSION

In this paper, we have shown how the nested Chinese restaurant process can be used to define prior distributions on recursive data structures. We have also shown how this prior can be combined with a topic model to yield a nonparametric Bayesian methodology for analyzing document collections in terms of hierarchies of topics. Given a collection of documents, we use MCMC sampling to learn an underlying thematic structure that provides a useful abstract representation for data visualization and summarization.

We emphasize that no knowledge of the topics of the collection or of the structure of the tree is needed to infer a hierarchy from data. We have demonstrated our methods on collections of abstracts from three different scientific journals, showing that while the content of these domains varies significantly, the statistical principles behind our model make it possible to recover meaningful sets of topics at multiple levels of abstraction, organized in a tree.

The nonparametric Bayesian framework underlying our work makes it possible to define probability distributions and inference procedures over countably infinite collections of objects. There has been other recent work in artificial intelligence in which probability distributions are defined on infinite objects via concepts from first-order logic (Milch et al., 2005; Pasula and Russell, 2001; Poole, 2007). While providing an expressive language, this approach does not necessarily yield structures that are amenable to efficient posterior inference. Our approach instead rests on combinatorial structure, namely the exchangeability of the Dirichlet process as a distribution on partitions, and this leads directly to a posterior inference algorithm that can be applied effectively to large-scale learning problems.

The hLDA model draws on two complementary insights, one from statistics and the other from computer science. From statistics, we take the idea that it is possible to work with general stochastic processes as prior distributions, thus accommodating latent structures that vary in complexity. This is the key idea behind nonparametric Bayesian methods. In recent years, these models have been extended to include spatial models (Duan et al., 2007) and grouped data (Teh et al., 2007b), and nonparametric Bayesian methods now enjoy new applications in computer vision (Sudderth et al., 2005), bioinformatics (Xing et al., 2007), and natural language processing (Li et al., 2007; Teh et al., 2007b; Goldwater et al., 2006a,b; Johnson et al., 2007; Liang et al., 2007).

From computer science, we take the idea that the representations we infer from data should be richly structured, yet admit efficient computation. This is a growing theme in work with nonparametric Bayesian models by machine learning researchers. For example, one line of recent research has explored stochastic processes involving multiple binary features rather than clusters (Griffiths and Ghahramani, 2006; Thibaux and Jordan, 2007; Teh et al., 2007a). A parallel line of investigation has explored alternative posterior inference techniques for nonparametric Bayesian models, providing more efficient algorithms for extracting this latent structure. Specifically, variational methods, which replace sampling with optimization, have been developed for Dirichlet process mixtures to further increase their applicability to large-scale data analysis problems (Blei and Jordan, 2005; Kurihara et al., 2007).

The hierarchical topic model that we explored in this paper is just one example of how this synthesis of statistics and computer science can produce powerful new tools for the analysis of complex data. However, this example showcases the two major strengths of the nonparametric Bayesian approach. First, the use of the nested CRP means that the model does not start with a fixed set of topics or hypotheses about their relationship, but grows to fit the data at hand. Thus, we learn a topology but do not commit to it; the tree can grow as new documents about new topics and subtopics are observed. Second, despite the fact that this results in a very rich hypothesis space, containing trees of arbitrary depth and branching factor, it is still possible to perform approximate probabilistic inference using a simple algorithm. This combination of flexible, structured representations and efficient inference makes nonparametric Bayesian methods uniquely promising as a formal framework for learning with flexible data structures.

Acknowledgments. We thank Edo Airoldi for providing the PNAS data, and we thank three anonymous reviewers for their insightful comments. David M. Blei is supported by grants from Microsoft Research and Google. Thomas L. Griffiths is supported by NSF grant BCS-0631518 and the DARPA CALO project. Michael I. Jordan is supported by grants from Microsoft Research, Google, and Yahoo! Research.

REFERENCES

Albert, R. and Barabasi, A. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47–97.

Aldous, D. (1985). Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII–1983, pages 1–198. Springer, Berlin.

Antoniak, C. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2(6):1152–1174.

Barabasi, A. and Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439):509–512.

Bernardo, J. and Smith, A. (1994). Bayesian Theory. John Wiley & Sons Ltd., Chichester.

Billingsley, P. (1995). Probability and Measure. Wiley-Interscience.

Blei, D., Griffiths, T., Jordan, M., and Tenenbaum, J. (2003a). Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems 16.

Blei, D. and Jordan, M. (2005). Variational inference for Dirichlet process mixtures. Journal of Bayesian Analysis, 1(1):121–144.

Blei, D. and Lafferty, J. (2006). Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning.

Blei, D. and Lafferty, J. (2007). A correlated topic model of Science. Annals of Applied Statistics, 1(1):17–35.

Blei, D., Ng, A., and Jordan, M. (2003b). Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022.

Chakrabarti, S., Dom, B., Agrawal, R., and Raghavan, P. (1998). Scalable feature selection, classification and signature generation for organizing large text databases into hierarchical topic taxonomies. The VLDB Journal, 7:163–178.

Cimiano, P., Hotho, A., and Staab, S. (2005). Learning concept hierarchies from text corpora using formal concept analysis. Journal of Artificial Intelligence Research, 305–339.

Cristianini, N. and Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, U.K.

Deerwester, S., Dumais, S., Landauer, T., Furnas, G., and Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407.

Dietz, L., Bickel, S., and Scheffer, T. (2007). Unsupervised prediction of citation influences. In International Conference on Machine Learning.

Drinea, E., Enachesu, M., and Mitzenmacher, M. (2006). Variations on random graph models for the web. Technical Report TR-06-01, Harvard University.

Duan, J., Guindani, M., and Gelfand, A. (2007). Generalized spatial Dirichlet process models. Biometrika.

Duda, R., Hart, P., and Stork, D. (2000). Pattern Classification. Wiley-Interscience.

Dumais, S. and Chen, H. (2000). Hierarchical classification of web content. In ACM SIGIR, pages 256–263.

Durrett, R. (1999). Essentials of Stochastic Processes. Springer.

Escobar, M. and West, M. (1995). Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588.

Fei-Fei, L. and Perona, P. (2005). A Bayesian hierarchical model for learning natural scene categories. IEEE Computer Vision and Pattern Recognition, pages 524–531.

Ferguson, T. (1973). A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1:209–230.

Gelfand, A. and Smith, A. (1990). Sampling based approaches to calculating marginal densities. Journal of the American Statistical Association, 85:398–409.

Gelman, A., Carlin, J., Stern, H., and Rubin, D. (1995). Bayesian Data Analysis. Chapman & Hall, London.

Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741.

Goldberg, A. and Tarjan, R. (1986). A new approach to the maximum flow problem. In STOC 1986: Proceedings of the 18th Annual ACM Symposium on Theory of Computing, pages 136–146, New York, NY, USA. ACM.

Goldwater, S., Griffiths, T., and Johnson, M. (2006a). Contextual dependencies in unsupervised word segmentation. In International Conference on Computational Linguistics and Association for Computational Linguistics.

Goldwater, S., Griffiths, T., and Johnson, M. (2006b). Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems 18.

Griffiths, T. and Ghahramani, Z. (2006). Infinite latent feature models and the Indian buffet process. In Weiss, Y., Scholkopf, B., and Platt, J., editors, Advances in Neural Information Processing Systems 18, pages 475–482. MIT Press, Cambridge, MA.

Griffiths, T. and Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Sciences.

Griffiths, T. and Steyvers, M. (2006). Probabilistic topic models. In Landauer, T., McNamara, D., Dennis, S., and Kintsch, W., editors, Latent Semantic Analysis: A Road to Meaning. Lawrence Erlbaum.

Hastie, T., Tibshirani, R., and Friedman, J. (2001). The Elements of Statistical Learning. Springer.

Heller, K. and Ghahramani, Z. (2005). Bayesian hierarchical clustering. In International Conference on Machine Learning.

Hofmann, T. (1999a). The cluster-abstraction model: Unsupervised learning of topic hierarchies from text data. In International Joint Conferences on Artificial Intelligence, pages 682–687.

Hofmann, T. (1999b). Probabilistic latent semantic indexing. Research and Development in Information Retrieval, pages 50–57.

Joachims, T. (2002). Learning to Classify Text Using Support Vector Machines. Kluwer Academic Publishers, Boston, MA.

Johnson, M., Griffiths, T., and Goldwater, S. (2007). Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models. In Scholkopf, B., Platt, J., and Hoffman, T., editors, Advances in Neural Information Processing Systems 19, pages 641–648, Cambridge, MA. MIT Press.

Johnson, N. and Kotz, S. (1977). Urn Models and Their Applications: An Approach to Modern Discrete Probability Theory. Wiley, New York.

Kass, R. and Raftery, A. (1995). Bayes factors. Journal of the American Statistical Association, 90(430):773–795.

Koller, D. and Sahami, M. (1997). Hierarchically classifying documents using very few words. In ICML '97: Proceedings of the Fourteenth International Conference on Machine Learning, pages 170–178, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Krapivsky, P. and Redner, S. (2001). Organization of growing random networks. Physical Review E, 63(6).

Kurihara, K., Welling, M., and Vlassis, N. (2007). Accelerated variational Dirichlet process mixtures. In Scholkopf, B., Platt, J., and Hoffman, T., editors, Advances in Neural Information Processing Systems 19, pages 761–768, Cambridge, MA. MIT Press.

Larsen, B. and Aone, C. (1999). Fast and effective text mining using linear-time document clustering. In KDD '99: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 16–22, New York, NY, USA. ACM.

Li, W., Blei, D., and McCallum, A. (2007). Nonparametric Bayes pachinko allocation. In The 23rd Conference on Uncertainty in Artificial Intelligence.

Liang, P., Petrov, S., Klein, D., and Jordan, M. (2007). The infinite PCFG using hierarchical Dirichlet processes. In Empirical Methods in Natural Language Processing.

Liu, J. (1994). The collapsed Gibbs sampler in Bayesian computations with application to a gene regulation problem. Journal of the American Statistical Association, 89:958–966.

MacEachern, S. and Muller, P. (1998). Estimating mixture of Dirichlet process models. Journal of Computational and Graphical Statistics, 7:223–238.

McCallum, A., Corrada-Emmanuel, A., and Wang, X. (2004). The author-recipient-topic model for topic and role discovery in social networks: Experiments with Enron and academic email. Technical report, University of Massachusetts, Amherst.

McCallum, A., Nigam, K., Rennie, J., and Seymore, K. (1999). Building domain-specific search engines with machine learning techniques. In AAAI Conference on Artificial Intelligence.

Milch, B., Marthi, B., Sontag, D., Ong, D., and Kolobov, A. (2005). Approximate inference for infinite contingent Bayesian networks. In Artificial Intelligence and Statistics.

Mimno, D. and McCallum, A. (2007). Organizing the OCA: Learning faceted subjects from a library of digital books. In Joint Conference on Digital Libraries.

Neal, R. (2000). Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265.

Newman, D., Chemudugunta, C., and Smyth, P. (2007). Statistical Entity-Topic Models. PhD thesis.

Pasula, H. and Russell, S. (2001). Approximate inference for first-order probabilistic languages. In International Joint Conferences on Artificial Intelligence, pages 741–748.

Pitman, J. (2002). Combinatorial Stochastic Processes. Lecture Notes for St. Flour Summer School. Springer-Verlag.

Poole, D. (2007). Logical generative models for probabilistic reasoning about existence, roles and identity. In 22nd AAAI Conference on Artificial Intelligence.

Robert, C. and Casella, G. (2004). Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer-Verlag, New York, NY.

Rosen-Zvi, M., Griffiths, T., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 487–494. AUAI Press.

Russell, B., Efros, A., Sivic, J., Freeman, W., and Zisserman, A. (2006). Using multiple segmentations to discover objects and their extent in image collections. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1605–1614.

Sanderson, M. and Croft, B. (1999). Deriving concept hierarchies from text. In SIGIR '99: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 206–213, New York, NY, USA. ACM.

Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650.

Stoica, E. and Hearst, M. (2004). Nearly-automated metadata hierarchy creation. In HLT-NAACL.

Sudderth, E., Torralba, A., Freeman, W., and Willsky, A. (2005). Describing visual scenes using transformed Dirichlet processes. In Advances in Neural Information Processing Systems 18.

Teh, Y., Gorur, D., and Ghahramani, Z. (2007a). Stick-breaking construction for the Indian buffet process. In 11th Conference on Artificial Intelligence and Statistics.

Teh, Y., Jordan, M., Beal, M., and Blei, D. (2007b). Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581.

Thibaux, R. and Jordan, M. (2007). Hierarchical beta processes and the Indian buffet process. In 11th Conference on Artificial Intelligence and Statistics.

Titterington, D., Smith, A., and Makov, E. (1985). Statistical Analysis of Finite Mixture Distributions. Wiley, Chichester.

Vaithyanathan, S. and Dom, B. (2000). Model-based hierarchical clustering. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 599–608, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Xing, E., Jordan, M., and Sharan, R. (2007). Bayesian haplotype inference via the Dirichlet process. Journal of Computational Biology, 14(3):267–284.

Zamir, O. and Etzioni, O. (1998). Web document clustering: A feasibility demonstration. In Research and Development in Information Retrieval, pages 46–54.