
Journal of Machine Learning Research - (-) 1-27 Submitted -/-; Published -/-

The Impact of Random Models on Clustering Similarity

Alexander J. Gates [email protected]
Department of Informatics and Program in Cognitive Science

Yong-Yeol Ahn [email protected]

Department of Informatics

Indiana University

Bloomington, IN 47408, USA

Abstract

Clustering is a central approach for unsupervised learning. After clustering is applied, the most fundamental analysis is to quantitatively compare clusterings. Such comparisons are crucial for the evaluation of clustering methods as well as other tasks such as consensus clustering. It is often argued that, in order to establish a baseline, clustering similarity should be assessed in the context of a random ensemble of clusterings. The prevailing assumption for the random clustering ensemble is the permutation model, in which the number and sizes of clusters are fixed. However, this assumption does not necessarily hold in practice; for example, while multiple runs of K-means clustering return clusterings with a fixed number of clusters, the cluster size distribution varies greatly. Here, we derive corrected variants of two clustering similarity measures (the Rand index and Mutual Information) in the context of two random clustering ensembles in which the number and sizes of clusters vary. In addition, we study the impact of one-sided comparisons in the scenario with a reference clustering. The consequences of different random models are illustrated using synthetic examples, handwriting recognition, and gene expression data. We demonstrate that the choice of random model can have a drastic impact on results and argue that the choice should be carefully justified.

Keywords: clustering comparison, clustering evaluation, adjustment for chance, Rand index, normalized mutual information

1. Introduction

Clustering is one of the most fundamental techniques of unsupervised learning and one of the most common ways to analyze data. Naturally, numerous methods have been developed and studied (Jain, 2010). To interpret clustering results, it is crucial to compare them to each other. For instance, the evaluation of a clustering method is usually carried out by comparing the method's results with a planted reference clustering, assuming that the more similar the method's solution is to the reference clustering, the better the method. This is particularly common in the field of complex networks, in which clustering similarity measures are used to justify the performance of community detection methods (Danon et al., 2005 and Lancichinetti and Fortunato, 2009). As quantitative comparison is a fundamental operation, it plays a key role in many other tasks. For instance, comparisons of clusterings can facilitate taxonomies for clustering solutions, can be used as a criterion for parameter estimation, and form the basis of consensus clustering methods (Meila, 2005, Vinh et al., 2009, Yeung et al., 2001).

© Alexander J. Gates and Yong-Yeol Ahn.


Among many clustering comparison methods (see, e.g. Meila, 2005 and Pfitzner et al., 2009), two of the most prominent measures are the Rand index (Rand, 1971) and the Normalized Mutual Information (NMI, Danon et al., 2005). In both cases, the similarity score lies in the range [0, 1], where 1 corresponds to identical clusterings and 0 implies maximally dissimilar clusterings. However, in practice, neither measure makes efficient use of the full range of values between 0 and 1, with many comparisons concentrating near the extreme values (Vinh et al., 2009, Hubert and Arabie, 1985). This makes it difficult to directly interpret the results of a comparison.

Thus, it is often argued that clustering similarity should be assessed in the context of a random ensemble of clusterings (Vinh et al., 2009, Hubert and Arabie, 1985, DuBien and Warde, 1981, DuBien et al., 2004, Albatineh et al., 2006, Romano et al., 2014, Zhang, 2015, Romano et al., 2016) and rescaled (see Equation 3 in Appendix C). Such a correction for chance establishes a baseline by using the expected similarity of all pair-wise comparisons between clusterings specified by a random model; the resulting similarity values have a new interpretation that facilitates comparisons within a set of clusterings. Specifically, once corrected for chance, a similarity value of 1 still corresponds to identical clusterings, but a value of 0 now corresponds to the expected value amongst random clusterings. Positive values of corrected similarity better reflect an intuitive comparison of clusterings (Hubert and Arabie, 1985, Steinley et al., 2016). The correction may also introduce negative values when two clusterings are less similar than expected by chance.

The correction procedure requires two choices: a model for random clusterings and how clusterings are drawn from the random model. However, even the existence of these choices is usually ignored or relegated to the status of technical trivialities. Here, we demonstrate that these choices may dramatically affect results, and therefore the choice of a particular model for random clusterings should be justified based on the understanding of the clustering scenario. A poor choice of the random model may “not be random enough” and encode crucial features of the clusterings in all of the random clusterings, providing a poor baseline. At the same time, a random model may be “too random”, in which case crucial features are lost in a sea of random clusterings that are not representative of the particular problem. Characterizing random models is an important topic of research across statistical physics, network science, and combinatorial mathematics (Sethna, 2006, Goldenberg et al., 2010, Mansour, 2012). Yet, despite the importance of random model selection, almost no study that uses clustering comparison provides a justification for its choice of random model.

By far, the most common approach to correct clustering similarity for chance assumes that both clusterings are uniformly and independently sampled from the permutation model (Mperm). In the permutation model, the number and size of clusters within a clustering are fixed, and all random clusterings are generated by shuffling the elements between the fixed clusters. However, the premises of the permutation model are frequently violated; in many clustering scenarios, either the number of clusters, the size distribution of those clusters, or both vary drastically (Hubert and Arabie, 1985, Wallace, 1983). For example, K-means clustering, probably the most common technique, fixes the number of clusters but not the sizes of those clusters (Jain, 2010). Later, we explore a real example in which K-means produces clusterings with large variations in the clusterings' cluster size sequences. This suggests that comparing K-means clusterings based on Mperm is misleading.
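To make the permutation model concrete, the following minimal sketch (the helper name and the use of NumPy are ours, not from the paper) draws one random clustering from Mperm by shuffling element assignments while keeping the number and sizes of clusters fixed:

```python
import numpy as np

def sample_permutation_model(labels, seed=None):
    """Draw one clustering from M_perm: shuffle which elements belong to
    which cluster while keeping the cluster size sequence fixed."""
    rng = np.random.default_rng(seed)
    # permuting the label vector preserves the number and sizes of clusters
    return rng.permutation(np.asarray(labels))

# Example: a clustering of 6 elements into clusters of sizes (3, 2, 1)
print(sample_permutation_model([0, 0, 0, 1, 1, 2], seed=42))
```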


Expected Rand Index

Permutation Model Mperm, exact:

$E_{perm}[RI] = \frac{\sum_i \binom{a_i}{2}}{\binom{N}{2}} \frac{\sum_j \binom{b_j}{2}}{\binom{N}{2}} + \left(1 - \frac{\sum_i \binom{a_i}{2}}{\binom{N}{2}}\right)\left(1 - \frac{\sum_j \binom{b_j}{2}}{\binom{N}{2}}\right)$

Fixed Number of Clusters Mnum, exact:

$E_{num}[RI] = \frac{S(N-1,K_A)}{S(N,K_A)} \frac{S(N-1,K_B)}{S(N,K_B)} + \left(1 - \frac{S(N-1,K_A)}{S(N,K_A)}\right)\left(1 - \frac{S(N-1,K_B)}{S(N,K_B)}\right)$

Fixed Number of Clusters Mnum, asymptotic for $N \gg 1$:

$E_{num}[RI] \approx \frac{1}{K_A K_B} + \left(1 - \frac{1}{K_A}\right)\left(1 - \frac{1}{K_B}\right)$

All Clusterings Mall, exact:

$E_{all}[RI] = \left(\frac{B_{N-1}}{B_N}\right)^2 + \left(1 - \frac{B_{N-1}}{B_N}\right)^2$

All Clusterings Mall, asymptotic for $N \gg 1$:

$E_{all}[RI] \approx \left(\frac{\log N}{N}\right)^2 + \left(1 - \frac{\log N}{N}\right)^2$

Table 1: The expected Rand index between two clusterings A and B of N elements under different random models. Details and derivations are given in Appendix D.

Furthermore, even the assumption that both clusterings were randomly drawn from the same random model (a two-sided comparison) is often problematic. For example, when comparing against a given reference clustering, it is more reasonable to find the expected similarity of the reference clustering with all of the random clusterings from the random model. This one-sided comparison accounts for the fixed structure of the reference clustering, which is always present in the comparisons, providing a more meaningful baseline.

Here, we present a general framework to adjust measures of clustering similarity for chance by considering a broader class of random clustering models and one-sided comparisons. Specifically, we consider two other random models for clusterings: a uniform distribution over the ensemble of all clusterings of N elements with the same number of clusters (Mnum), and a uniform distribution over the ensemble of all clusterings of N elements (Mall). The resulting expectations for the Rand index under all three random models are summarized in Table 1 and for Mutual Information in Table 2, with the full derivations given in Appendix D and Appendix E. The adjusted similarity measures used throughout this work rescale the Rand and MI measures by these expectations according to Equation 3. We also introduce one-sided variants of the adjusted Rand index and adjusted Mutual Information when using the Mnum or Mall random models (for Mperm, the one-sided similarity is equivalent to the two-sided case).

The impact of our framework is illustrated through several examples: a synthetic clustering example, K-means clustering of a handwritten digits data set (MNIST), and a comparison of clustering methods applied to gene expression data. Our results demonstrate that both the choice of random model for clusterings and the choice of one-sided comparisons can affect results significantly. Therefore, we argue that clustering comparisons should be accompanied by a proper justification for the random model.


Expected Mutual Information

Permutation Model Mperm, exact:

$E_{perm}[MI] = \sum_{i=1}^{K_A}\sum_{j=1}^{K_B}\sum_{n_{ij}} \frac{n_{ij}}{N}\log\left(\frac{N n_{ij}}{a_i b_j}\right)\frac{\binom{b_j}{n_{ij}}\binom{N-b_j}{a_i-n_{ij}}}{\binom{N}{a_i}}$

Fixed Number of Clusters Mnum, exact:

$E_{num}[MI] = -\sum_{a=1}^{N-(K_A-1)}\binom{N}{a}\frac{S(N-a,K_A-1)}{S(N,K_A)}\frac{a}{N}\log\left(\frac{a}{N}\right) - \sum_{b=1}^{N-(K_B-1)}\binom{N}{b}\frac{S(N-b,K_B-1)}{S(N,K_B)}\frac{b}{N}\log\left(\frac{b}{N}\right) + \sum_{a=1}^{N-(K_A-1)}\binom{N}{a}\frac{S(N-a,K_A-1)}{S(N,K_A)}\sum_{b=1}^{N-(K_B-1)}\binom{N}{b}\frac{S(N-b,K_B-1)}{S(N,K_B)}\sum_{n_{ij}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{b}{n_{ij}}\binom{N-b}{a-n_{ij}}}{\binom{N}{a}}$

All Clusterings Mall, exact:

$E_{all}[MI] = -2\sum_{a=1}^{N}\binom{N}{a}\frac{B_{N-a}}{B_N}\frac{a}{N}\log\left(\frac{a}{N}\right) + \sum_{a=1}^{N}\binom{N}{a}\frac{B_{N-a}}{B_N}\sum_{b=1}^{N}\binom{N}{b}\frac{B_{N-b}}{B_N}\sum_{n_{ij}=1}^{\min(a,b)}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{b}{n_{ij}}\binom{N-b}{a-n_{ij}}}{\binom{N}{a}}$

Table 2: The expected Mutual Information between two clusterings A and B of N elements under different random models. Details and derivations are given in Appendix E.

2. Results

2.1 Clustering Similarity Ranking

Our first example demonstrates how rankings assigned by the similarity score can change depending on the assumed random model. Consider the four hypothetical clusterings of 20 elements presented in Figure 1a. Clustering W contains four equally sized clusters; clustering X is generated by shifting the membership of one element from W; clustering Y groups the elements into 10 equally sized clusters; and clustering Z groups the elements into 10 heterogeneous clusters. The similarity (from the most similar at the top to the least similar at the bottom) of all 6 clustering pairs is ranked using the Rand index and each of its three adjusted variants in Figure 1b. Note that the adjusted Rand index can be negative. The unadjusted Rand index ranking serves as a reference to illustrate how the random models change rankings.

As one would expect, all four Rand measures identify clusterings W and X as the most similar (Figure 1b). However, the ranking of the other five comparisons varies widely as a result of the underlying random models. These changes can be understood by tracking comparisons to clustering Z. The low Rand index for the three comparisons with clustering Z (≈ 0.76) reflects the fact that clustering Z has a drastically different number of clusters or cluster size sequence from the other three clusterings. The permutation model retains these differences in all random clusterings; the resulting adjusted index thus treats comparisons between clustering Z and either W or X more favorably than those to clustering Y. On the other hand, the cluster size sequence for clustering Z is relatively rare in both Mnum and Mall. Since clusterings Y and Z have the same number of clusters, the differences in their adjusted scores using Mnum are a consequence of their cluster size sequences.


Finally, all four clusterings are over 20 elements (the only factor that specifies the expected Rand index assuming Mall), so they are all adjusted by the same amount when Mall is used. Note that in our example, clustering Z has a negative adjusted Rand score using Mall when compared with all three other clusterings; thus it is less similar to the other three clusterings than one would have expected from comparing two completely random clusterings.

This example illustrates an important property of the Mall model. Namely, the ranking provided by the Rand index remains unchanged whenever the Mall model is used for adjustment because all clusterings have the same number of elements. However, the corrected baseline now provides a strong interpretation for negative scores: two randomly selected clusterings are expected to be more similar. This is an important consideration for the evaluation of clustering methods; if the derived clustering is no more similar than would be expected when comparing completely random clusterings, the solution is likely not a meaningful representation of the data.

We then turn our attention to clustering similarity measured by mutual information (MI). Rankings using the adjusted MI depend on two dimensions of variation: the random model and the maximum bound for the measure. This variation is illustrated in Figure 2 using the same 6 comparisons between pairs of clusterings from Figure 1a. We consider four cases for the MI maximum bound: Min, Sqrt, Sum, and Max, corresponding to the minimum of the two model partition entropies, the geometric mean of the two model partition entropies, the average of the two model partition entropies, and the maximum of the two model partition entropies, respectively (see Appendix E for details). For the permutation model, the model partition entropies are calculated from the cluster size sequences, while the model partition entropies in Mnum are bounded by the logarithm of the number of clusters and the model partition entropies in Mall are bounded by the logarithm of the number of elements. As a point of reference, all rankings are illustrated in comparison to the raw mutual information score, unnormalized and without a random model adjustment (None). All adjustments of the mutual information without a random model (None, first row) are members of the commonly used family of Normalized Mutual Information (NMI) measures (Danon et al., 2005).
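As a concrete illustration of these maximum bounds, the sketch below (our own helper, not code from the paper) computes the raw MI of two label vectors together with the four normalized variants; entropies are in nats here, whereas the figures report bits:

```python
import numpy as np
from scipy.stats import entropy

def mi_and_nmi(labels_a, labels_b):
    """Mutual information of two clusterings and its four normalized variants."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    # joint cluster-overlap counts n_ij and marginal cluster sizes
    _, joint_counts = np.unique(np.stack([a, b], axis=1), axis=0, return_counts=True)
    h_a = entropy(np.unique(a, return_counts=True)[1] / n)
    h_b = entropy(np.unique(b, return_counts=True)[1] / n)
    h_ab = entropy(joint_counts / n)
    mi = h_a + h_b - h_ab
    # the four maximum-bound choices: Min, Sqrt, Sum, Max
    bounds = {"Min": min(h_a, h_b), "Sqrt": np.sqrt(h_a * h_b),
              "Sum": 0.5 * (h_a + h_b), "Max": max(h_a, h_b)}
    return mi, {name: mi / m for name, m in bounds.items()}
```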

The rankings in Figure 2 demonstrate that both the random model and the maximum bound affect the relative similarity between clusterings when adjusting MI. Firstly, the only random model whose adjustments are independent of MI's maximum bound is Mall. This occurs because every choice of the maximum bound reduces to log N (the entropy of the clustering that places each element into its own cluster). In the other three random model scenarios, the maximum bound depends on the clusterings under comparison. Secondly, MI is highly dependent on the number of clusters in each of the clusterings: when either no normalization and random model adjustment are used, or the Mall model is used, MI ranks the similarity of clusterings Y and Z above that of W and X because of the greater number of clusters in the former case. This bias is mitigated to varying extents by the NMI variations; while NMI using the Sqrt, Sum, and Max normalization terms all produce the intuitive ranking of W and X as the most similar pair, NMI using Min for normalization still succumbs to the larger number of clusters in Y, and ranks X and Y as the most similar clustering pair. The adjustments provided by both the Mperm and Mnum random models control for the number of clusters; this reduces the impact of the number of clusters when



Figure 1: The choice of random model for the Rand index has a significant impact on the rankings of clustering similarity. a) Four clusterings of N = 20 elements; W and X each contain four clusters and differ by the assignment of one element (6), Y and Z each contain ten clusters. b) Rankings for the similarity of clustering pairs using the Rand index, the Adjusted Rand index assuming Mperm, the Adjusted Rand index assuming Mnum, and the Adjusted Rand index assuming Mall. Rankings which change as a function of random model are highlighted in dark red.


the cluster sizes are regular, but the bias re-occurs when there is a large imbalance between the cluster sizes.

2.2 Random Models and Skewed Cluster Sizes

For both the Rand index and MI, the permutation model induces a bias favoring comparisons with clusterings containing a large inhomogeneity in cluster sizes. This bias is explicitly demonstrated in our next example by the difference between the adjusted similarity measures assuming Mperm and Mnum. To generate an increasing disparity in cluster sizes, we use a preferential attachment model of element assignment. At each step of the algorithm, a random element is uniformly chosen for reassignment to a new cluster based on the current sizes of those clusters. A move is rejected if it results in an empty cluster.

In Figure 3, we compare a clustering of 1,000 elements grouped into 50 equally sized clusters and the same clustering throughout 10,000 steps of our preferential attachment algorithm using the Adjusted Rand index (Figure 3a) and Adjusted MI (Figure 3b). Cluster size inhomogeneity is measured by the entropy of the clustering size sequence; equally sized clusters have the maximum entropy ($\log_2 50 \approx 5.64$), while greater inhomogeneity in cluster sizes decreases the entropy of the cluster size sequence. In both cases, the bias induced by Mperm towards skewed cluster size sequences is apparent, as the difference between the expected similarity within the Mperm random model and the Mnum random model increases as the disparity between relative cluster sizes increases.
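A minimal sketch of this procedure is given below; the helper names, the rejection rule, and the use of NumPy are our own interpretation of the description above, not code from the paper:

```python
import numpy as np

def preferential_attachment_step(labels, rng):
    """Move one uniformly chosen element to a cluster selected with
    probability proportional to its current size; reject the move if it
    would leave the element's current cluster empty."""
    labels = np.asarray(labels)
    clusters, sizes = np.unique(labels, return_counts=True)
    i = rng.integers(len(labels))
    if sizes[clusters == labels[i]][0] == 1:
        return labels  # rejection: the source cluster would become empty
    new_labels = labels.copy()
    new_labels[i] = rng.choice(clusters, p=sizes / sizes.sum())
    return new_labels

def cluster_size_entropy(labels):
    """Entropy (in bits) of the cluster size sequence; equal sizes maximize it."""
    sizes = np.unique(labels, return_counts=True)[1]
    p = sizes / sizes.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(50), 20)   # 1,000 elements in 50 equal clusters
print(cluster_size_entropy(labels))     # log2(50), about 5.64 bits
for _ in range(10_000):
    labels = preferential_attachment_step(labels, rng)
print(cluster_size_entropy(labels))     # lower: the cluster sizes are now skewed
```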

2.3 Appropriate Random Model for Comparing K-means Clusterings

Clustering similarity measures are commonly used to evaluate the results of clustering methods in relation to a known reference clustering. Since the number of clusters, and even the number of data points, can vary between instances, appropriately corrected similarity measures are necessary. However, as we have already seen, the choice of similarity measure and its chance corrected variants can affect the results of the comparisons and suggest drastically different interpretations for the effectiveness of the method.

We demonstrate the importance of the random ensemble assumption through a comparison of the clusterings uncovered by 400 runs of K-means on a collection of handwritten digits (Alimoglu and Alpaydin, 1996, see Appendix F.1 for details). The K-means clustering method groups elements so as to minimize the average (Euclidean) distance from the cluster centroid. In most scenarios, it uncovers clusterings with a pre-specified number of clusters (K). For our example, the digits naturally fall into 10 disjoint clusters, shown in Figure 4a, with relative cluster sizes given on the left of Figure 4b. Interestingly, almost all 400 clusterings produced by K-means have a different cluster size sequence (Figure 4b, bottom) and the cluster sizes vary over a wide range (Figure 4b, top). This suggests that both the specific assignment of elements to clusters and the size sequence of the clusters are major factors differentiating the K-means clusterings. Both sources of variation need to be captured by the random model in order to have a meaningful baseline.

Since the number of clusters does not change between runs, but the size sequence of those clusters changes considerably, it is more appropriate to assess similarity within the context of random clusterings with a fixed number of clusters rather than those given by the permutation model. Furthermore, since all of the comparisons are made against the



Figure 2: Both the random model and maximum bound have a significant impact on the rankings of clustering similarity using Mutual Information (MI). Rankings for the similarity of clustering pairs from Figure 1a using (vertically) the raw MI, the Adjusted MI assuming Mperm, the Adjusted MI assuming Mnum, and the Adjusted MI assuming Mall. MI similarity also depends on the choice of maximum bound (horizontal) as a function of the two clusterings' model entropies: minimum (Min), square-root (Sqrt), average (Sum), and maximum (Max). The MI measures which are normalized but not adjusted by a random model are all members of the common family of normalized MI (NMI). For a given random model, similarity rankings which change as a function of maximum bound are highlighted in dark red.



Figure 3: The bias towards inhomogeneous cluster size sequences induced by Mperm. Cluster size sequence inhomogeneity for clusterings generated by a preferential attachment model is measured by the cluster size sequence entropy; the bias materializes as an increase in the difference between the adjusted similarity assuming the permutation model Mperm and Mnum for a) the Adjusted Rand index and b) the Adjusted Mutual Information as the entropy decreases. Filled area illustrates one standard deviation when averaged over 100 random initializations.


same reference clustering, a one-sided similarity metric better captures the comparison scenario. In Figure 4c, the similarity of the reference clustering compared to each of the 400 uncovered clusterings is shown using the Adjusted Rand index assuming Mperm and the Adjusted Rand index assuming M1num. While the measures are strongly correlated (the black line indicates perfect agreement), the Adjusted Rand index assuming Mperm is consistently biased towards higher similarity. Most importantly, the bulk of the uncovered clusterings change their relative ranking when considered in the context of M1num compared to Mperm, as demonstrated by the rankings of the top 10 most similar clusterings in Figure 4d.

2.4 Tumor Gene Expression Clustering

We further underline the impact of random model choice using a gene expression data set. Specifically, we use a collection of 35 cancer gene expression studies assembled in de Souto et al. (2008). The studies in the collection aim to differentiate the gene expression in cancerous cell tissue samples from those in healthy controls. Each study contains anywhere from 22 to 248 data points (individual tissue samples) for which between 85 and 4,553 features (individual gene expression) were measured after removing the uninformative and missing genes. For details on the individual studies and filtering methodologies, see de Souto et al. (2008) and references therein.

Clusterings are identified using three clustering methodologies: K-means, agglomerative hierarchical clustering using correlation to compute the average linkage between data points, and affinity propagation (Frey and Dueck, 2007). While many other methods could be used (and indeed, were compared in de Souto et al., 2008), we use these three as representative examples to illustrate the consequences of clustering correction choices. Since the K-means and hierarchical clustering methods both produce a clustering with the user-specified number of clusters, their similarity is adjusted using the one-sided Adjusted Rand index assuming M1num, where the reference clustering is specified for each study individually. On the other hand, affinity propagation uncovers the number of clusters in the clustering and is therefore compared using the one-sided Adjusted Rand index assuming M1all. The resulting similarities are shown in Figure 5, where each point is a data set. The grey dashed line indicates agreement between the Mperm and appropriate one-sided Adjusted Rand similarity. Those studies that fall to the left of the line have cluster size sequences which differ from the reference clustering. There are a few studies whose similarity to the reference clustering becomes negative in the context of the appropriate random model. In those cases, a random clustering generated under the same constraints as the detection method would be more similar to the reference clustering than the actual uncovered clusterings. Yet, Mperm misses this fact in almost every case: the positive similarity observed when using Mperm is a consequence of the cluster size sequences. On the other hand, those studies that fall to the right of the line indicate cluster assignments which are poorly represented in Mperm, yet, given the cluster size sequence, are more similar to the reference clustering than would be expected from a random clustering.



Figure 4: The impact of random model choice on the evaluation of K-means clustering. a) The digits data set contains 1,797 points in 64 dimensions (projected to 2 dimensions using t-SNE dimensionality reduction for visualization, Van der Maaten and Hinton, 2008) with a ground truth clustering corresponding to the digit, and an example K-means clustering. b) The original cluster size sequence (top left) and 10 cluster size sequences uncovered by K-means clustering with random initialization (top right). Intensity represents cluster sizes that are smaller (darker) or larger (lighter) than the ground truth clusters. (bottom) The cluster size sequence for 400 clusterings uncovered by K-means clustering with random initialization using Barycentric coordinates. The actual clustering size sequence (black) and the most similar clustering determined by the Adjusted Rand index assuming Mperm (light blue) and M1num (light orange). c) The similarity between the actual digits clusterings and each of the 400 K-means clusterings as measured by the Adjusted Rand index assuming Mperm (y-axis) and M1num (x-axis). d) The ranking of the most similar 10 K-means clusterings as determined by the Adjusted Rand index assuming Mperm (left) and M1num (right).



Figure 5: The impact of random model choice on the evaluation of gene expression clustering. The results of three clustering identification methods (agglomerative hierarchical clustering, K-means, and affinity propagation) identified from the gene expression in tissue samples from cancerous and healthy cells in 35 studies. The uncovered clusterings are compared to the reference clustering using the Adjusted Rand index assuming the permutation model Mperm (y-axis), and either the one-sided Adjusted Rand index assuming a fixed number of clusters M1num, or the one-sided Adjusted Rand index using all clusterings M1all (x-axis). The dashed grey line indicates agreement between the similarity measures; the 9 red points indicate studies with at least 0.15 disagreement between adjusted similarity scores for at least one clustering identification method.


3. Discussion

Given the prevalence of clustering methods for analyzing data, clustering comparison is a fundamental problem that is pertinent to numerous areas of science. In particular, the correction of clustering similarity for chance serves to establish a baseline that facilitates comparisons between different clustering solutions. Expanding previous studies on the selection of an appropriate model for random clusterings (Meila, 2005, Vinh et al., 2009, Romano et al., 2016), our work provides an extensive summary of random models and clearly demonstrates the strong impact of the random model on the interpretation of clustering results. Our results underpin the importance of selecting the appropriate random model for a given context.

The specific comparisons studied here are not meant to establish the superiority of a particular clustering identification technique or a specific random clustering model; rather, they illustrate the importance of the choice of the random model. Crucially, conclusions based on corrected similarity measures can change depending on the random model for clusterings. Therefore, previous studies which promoted methods based on evidence from corrected similarity measures should be re-evaluated in the context of the appropriate random model for clusterings (Yeung et al., 2001, de Souto et al., 2008, Yeung and Ruzzo, 2001, Thalamuthu et al., 2006, McNicholas and Murphy, 2010).

Throughout this work, we assumed a uniform probability of selecting a partition given a constraint on the types of partitions in the ensemble. However, other probability distributions could be used which better model the clusterings encountered in practice. For example, instead of using a uniform distribution over the number of clusters, one could consider an inferred distribution for the number of clusters actually uncovered by a given method (e.g. affinity propagation). Additionally, given that many systems exhibit clusterings with a heavy-tailed cluster size sequence, clusterings with such skewed cluster size distributions could be favored. Changes to the prior probabilities would likely change the expectations of the clustering similarity measures.

The behavior of the Rand index and Mutual Information in the context of the random clustering models discussed here further reveals problems with both measures. Specifically, the expected similarity of random clusterings increases as the number of elements grows. Intuition would suggest the opposite: the similarity of two randomly selected clusterings should decrease as the number of elements increases because it is harder to match the element memberships to clusters between two random clusterings. Instead, both MI and Rand are dominated by the fact that the expected number of clusters and cluster size distribution are converging with increasing N (Mansour, 2012). Our analysis also illustrates the dependency on the normalization term for MI, which, combined with a previously established bias on the number of clusters, suggests more care should be taken when interpreting the results of MI clustering comparisons.

In conclusion, our framework for the correction of clustering similarity for chance allows for more conscious comparisons between clusterings. The practitioner should always provide justification for their choice of random clustering model and treatment of one-sided comparisons.

Acknowledgments


We thank Ian Wood, Santosh Manicka, James Bagrow, Sune Lehmann, Aaron Clauset, Randall D. Beer, and Luis M. Rocha for helpful discussions.

Appendix A. Clusterings

Given a set of N distinct elements $V = \{v_1, \ldots, v_N\}$ (i.e. data points or vertices), a clustering is a partition of V into a set $C = \{C_1, \ldots, C_{K_C}\}$ of $K_C$ non-empty disjoint subsets of V, the clusters $C_k$, such that

1. $\forall C_i, C_j$: if $i \neq j$, then $C_i \cap C_j = \emptyset$

2. $\bigcup_{k=1}^{K_C} C_k = V$.

Each clustering specifies a sequence of cluster sizes, namely, letting $c_i = |C_i|$ be the size of the i-th cluster, the sequence of cluster sizes is $[c_1, c_2, \ldots, c_{K_C}]$.

Throughout this paper, we focus on the similarity of two clusterings over the same set of N labeled elements, $A = \{A_1, \ldots, A_{K_A}\}$ (with $K_A$ clusters of sizes $a_i$) and $B = \{B_1, \ldots, B_{K_B}\}$ (with $K_B$ clusters of sizes $b_j$).
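In code, a clustering of N labeled elements is conveniently stored as a label vector, from which the cluster size sequence follows directly; this small helper (ours, reused in the later sketches) is one such representation:

```python
import numpy as np

def cluster_size_sequence(labels):
    """Cluster size sequence [c_1, ..., c_K] of a clustering given as a label vector."""
    return np.unique(np.asarray(labels), return_counts=True)[1]

A = [0, 0, 0, 1, 1, 2]           # K_A = 3 clusters with sizes (3, 2, 1)
print(cluster_size_sequence(A))  # -> [3 2 1]
```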

Appendix B. Stirling and Bell Numbers

The Stirling number of the second kind $S(n, k)$ gives the number of ways to partition a set of n elements into k clusters, where

$S(n, k) = \frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}j^n$.   (1)

There are several recurrence relations which also give $S(n, k)$; one of the most useful is the relation

$S(0, 0) = 1, \quad S(n, 0) = S(0, n) = 0$   (2)

$S(n+1, k) = kS(n, k) + S(n, k-1)$.

As $n \to \infty$, an asymptotic approximation to the Stirling numbers of the second kind for a fixed k is given by $S(n, k) \approx \frac{k^n}{k!}$.

The Bell number $B_n$ is the total number of clusterings over a set with n elements. It is related to the Stirling numbers of the second kind by the summation over k for a fixed n, $B_n = \sum_{k=0}^{n} S(n, k)$. There is also a useful recurrence relation for Bell numbers: $B_{n+1} = \sum_{k=0}^{n}\binom{n}{k}B_k$. As $n \to \infty$, an asymptotic approximation to the ratio of the n-th and (n+1)-th Bell numbers is $\frac{B_n}{B_{n+1}} \approx \frac{\log n}{n}$. See Mansour (2012) for an extended discussion of both the Stirling numbers of the second kind and the Bell numbers.
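A direct translation of the recurrence in Equation 2 and of the Bell number summation, useful for evaluating the expectations below at small to moderate N (the helper names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k) via the recurrence (Equation 2)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number B_n: the total number of clusterings of n elements."""
    return sum(stirling2(n, k) for k in range(n + 1))

assert stirling2(4, 2) == 7 and bell(4) == 15
```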

Appendix C. Correction for Chance

Given a clustering similarity measure s and a random model for clusterings, model, the expected clustering similarity $E_{model}[s]$ of pair-wise comparisons within the random ensemble


A \ B        $B_1$        $B_2$        ...    $B_{K_B}$        Sums
$A_1$        $n_{11}$     $n_{12}$     ...    $n_{1K_B}$       $a_1$
$A_2$        $n_{21}$     $n_{22}$     ...    $n_{2K_B}$       $a_2$
...          ...          ...          ...    ...              ...
$A_{K_A}$    $n_{K_A 1}$  $n_{K_A 2}$  ...    $n_{K_A K_B}$    $a_{K_A}$
Sums         $b_1$        $b_2$        ...    $b_{K_B}$        $\sum_{ij} n_{ij} = N$

Table 3: The contingency table T for two clusterings $A = \{A_1, \ldots, A_{K_A}\}$ and $B = \{B_1, \ldots, B_{K_B}\}$ of N elements, where $n_{ij} = |A_i \cap B_j|$ is the number of elements that are in both cluster $A_i \in A$ and cluster $B_j \in B$.

defined by the model corrects s for chance as follows (Hubert and Arabie, 1985)

$\frac{s - E_{model}[s]}{s_{max} - E_{model}[s]}$.   (3)

The denominator rescales the adjusted similarity by the maximum similarity of pair-wise comparisons within the ensemble, $s_{max}$, so identical clusterings always have a similarity of 1.0.
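Equation 3 translates directly into a one-line helper; the function name is ours, and the expected similarity must be supplied by one of the random models derived below:

```python
def corrected_similarity(s, expected_s, s_max=1.0):
    """Correct a similarity score for chance (Equation 3): 1 for identical
    clusterings, 0 at the expected value under the random model, and
    negative when a pair is less similar than expected by chance."""
    return (s - expected_s) / (s_max - expected_s)
```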

Appendix D. Rand Index

The Rand index (Rand, 1971) compares the number of element pairs which are either co-assigned to the same cluster, or assigned to different clusters in both clusterings, to the total number of element pairs. The most common formulation of the Rand index focuses on the following four sets of the $\binom{N}{2}$ element pairs: $N_{11}$, the number of element pairs which are grouped in the same cluster in both clusterings; $N_{10}$, the number of element pairs which are grouped in the same cluster by A but in different clusters by B; $N_{01}$, the number of element pairs which are grouped in the same cluster by B but in different clusters by A; and $N_{00}$, the number of element pairs which are grouped in different clusters by both A and B. Intuitively, $N_{11}$ and $N_{00}$ are indicators of the agreement between the two clusterings, while $N_{10}$ and $N_{01}$ reflect the disagreement between the clusterings.


The aforementioned pair counts are identified from the contingency table T between two clusterings, shown in Table 3, by the following set of equations

$N_{11} = \sum_{k,m=1}^{K_A,K_B}\binom{n_{km}}{2} = \frac{1}{2}\left[\sum_{k,m=1}^{K_A,K_B} n_{km}^2 - N\right]$

$N_{10} = \sum_{k=1}^{K_A}\binom{a_k}{2} - N_{11} = \frac{1}{2}\left[\sum_{k=1}^{K_A} a_k^2 - \sum_{k,m=1}^{K_A,K_B} n_{km}^2\right]$   (4)

$N_{01} = \sum_{m=1}^{K_B}\binom{b_m}{2} - N_{11} = \frac{1}{2}\left[\sum_{m=1}^{K_B} b_m^2 - \sum_{k,m=1}^{K_A,K_B} n_{km}^2\right]$

$N_{00} = \binom{N}{2} - N_{11} - N_{10} - N_{01}$

The Rand index between clusterings A and B, $RI(A,B)$, is then given by the function

$RI(A,B) = \frac{N_{11} + N_{00}}{\binom{N}{2}} = \frac{2\sum_{k,m=1}^{K_A,K_B}\binom{n_{km}}{2} - \sum_{k=1}^{K_A}\binom{a_k}{2} - \sum_{m=1}^{K_B}\binom{b_m}{2} + \binom{N}{2}}{\binom{N}{2}}$.   (5)

It lies between 0 and 1, where 1 indicates the clusterings are identical and 0 occurs for clusterings which do not share a single pair of elements (this only happens when one clustering is the full set of elements and the other clustering groups each element into its own cluster). As the number of clustered elements increases, the measure becomes dominated by the number of pairs which were classified into different clusters ($N_{00}$), resulting in decreased sensitivity to co-occurring element pairs (Fowlkes and Mallows, 1983).
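For reference, a minimal sketch (our own, using NumPy) that computes the pair counts of Equation 4 from two label vectors and returns the Rand index of Equation 5:

```python
import numpy as np
from math import comb

def rand_index(labels_a, labels_b):
    """Rand index RI(A, B) computed from the pair counts N11, N10, N01, N00."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    # contingency counts n_km between clusters of A and clusters of B
    _, nij = np.unique(np.stack([a, b], axis=1), axis=0, return_counts=True)
    n11 = sum(comb(int(c), 2) for c in nij)
    n10 = sum(comb(int(c), 2) for c in np.unique(a, return_counts=True)[1]) - n11
    n01 = sum(comb(int(c), 2) for c in np.unique(b, return_counts=True)[1]) - n11
    n00 = comb(n, 2) - n11 - n10 - n01
    return (n11 + n00) / comb(n, 2)
```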

Another formulation of the Rand index, used in our later derivations, focuses on a binary representation of the element pairs. Specifically, consider the vector $U^A = [u_1, \ldots, u_{\binom{N}{2}}]$ with binary entries $u_\alpha \in \{-1, 1\}$ corresponding to all possible element pairs. Using $\alpha$ to index over all element pairs by $\alpha = \binom{N}{2} - \binom{N-i+1}{2} + j - i$, for $i < j \leq N$, then $u_\alpha = 1$ if elements $v_i$ and $v_j$ are in the same cluster in A and $u_\alpha = -1$ if elements $v_i$ and $v_j$ are in different clusters in A. There are $Q^A_1$ 1s in $U^A$ and $Q^A_{-1}$ -1s in $U^A$ with

$Q^A_1 = \sum_{k=1}^{K_A}\binom{a_k}{2}, \qquad Q^A_{-1} = \binom{N}{2} - \sum_{k=1}^{K_A}\binom{a_k}{2}$.   (6)

The Rand index is found from the vectors $U^A$ and $U^B$, for clusterings A and B respectively, as the number of 1s in their product vector, $U^A \odot U^B$, using element-wise multiplication and normalized by the total size of the vectors, $\binom{N}{2}$.

D.1 Expected Rand Index, Permutation Model (Mperm)

The expectation of the Rand index with respect to the permutation model follows from drawing the entries in Table 3 from the generalized hypergeometric distribution. Utilizing the previous notation with $Q^A_1 = \sum_{k=1}^{K_A}\binom{a_k}{2}$, the expectation $E_{perm}[RI(A,B)]$ of the Rand index with respect to the permutation model for the cluster size sequences of clusterings A and B is given by

$E_{perm}[RI(A,B)] = \frac{2 Q^A_1 Q^B_1 - \binom{N}{2}\left(Q^A_1 + Q^B_1\right) + \binom{N}{2}^2}{\binom{N}{2}^2}$   (7)

(see Fowlkes and Mallows, 1983, Hubert and Arabie, 1985, or Albatineh and Niewiadomska-Bugaj, 2011 for the full derivation).

The commonly used adjusted Rand index (ARI) of Hubert and Arabie (1985) uses Mperm to calculate the expectation of the Rand index, $E_{perm}[RI(A,B)]$, as found in Equation 7. This expectation is then used in Equation 3, along with the fact that the maximum value of the Rand index is $RI_{max} = 1.0$, to give

$ARI_{perm}(A,B) = \frac{\binom{N}{2}\sum_{k,m=1}^{K_A,K_B}\binom{n_{km}}{2} - \sum_{k=1}^{K_A}\binom{a_k}{2}\sum_{m=1}^{K_B}\binom{b_m}{2}}{\frac{1}{2}\binom{N}{2}\left[\sum_{k=1}^{K_A}\binom{a_k}{2} + \sum_{m=1}^{K_B}\binom{b_m}{2}\right] - \sum_{k=1}^{K_A}\binom{a_k}{2}\sum_{m=1}^{K_B}\binom{b_m}{2}}$.   (8)
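Dividing the numerator and denominator of Equation 8 by $\binom{N}{2}$ gives the usual computational form of the Hubert-Arabie ARI; the sketch below (ours) follows that form, and it should agree with sklearn.metrics.adjusted_rand_score where scikit-learn is available:

```python
import numpy as np
from math import comb

def adjusted_rand_perm(labels_a, labels_b):
    """Adjusted Rand index under the permutation model (Equation 8)."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    _, nij = np.unique(np.stack([a, b], axis=1), axis=0, return_counts=True)
    sum_nij = sum(comb(int(c), 2) for c in nij)
    sum_a = sum(comb(int(c), 2) for c in np.unique(a, return_counts=True)[1])
    sum_b = sum(comb(int(c), 2) for c in np.unique(b, return_counts=True)[1])
    expected = sum_a * sum_b / comb(n, 2)   # expected pair-agreement under M_perm
    max_index = 0.5 * (sum_a + sum_b)
    return (sum_nij - expected) / (max_index - expected)
```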

D.2 Expected Rand Index, Fixed Number of Clusters

We follow DuBien and Warde (1981) to calculate the Rand index between two clusteringsunder the assumptions that both clusterings were independently and uniformly drawn fromthe ensemble of clusterings with a fixed number of clusters (Mnum). Recall that the Randindex between two clusterings A and B is given by the number of 1s in the element-wiseproduct of the binary representations vectors UA and UB. The expected Rand index underany random model is then the expected number of 1s in this product vector, normalized bythe total size of the vector

$$E[RI(A,B)] = E\left[\frac{1}{\binom{N}{2}}\sum_{\alpha=1}^{\binom{N}{2}} \mathbf{1}_{u^A_\alpha \cdot u^B_\alpha = 1}\right] \qquad (9)$$
$$= \frac{1}{\binom{N}{2}}\sum_{\alpha=1}^{\binom{N}{2}} E\left[\mathbf{1}_{u^A_\alpha \cdot u^B_\alpha = 1}\right] = \frac{1}{\binom{N}{2}}\sum_{\alpha=1}^{\binom{N}{2}} P\left(u^A_\alpha \cdot u^B_\alpha = 1\right).$$

The product $u^A_\alpha \cdot u^B_\alpha$ equals 1 when either $u^A_\alpha = 1$ and $u^B_\alpha = 1$, or $u^A_\alpha = -1$ and $u^B_\alpha = -1$. Since we assumed both clusterings were independent, this gives

$$P\left(u^A_\alpha \cdot u^B_\alpha = 1\right) = P\left(u^A_\alpha = 1\right)P\left(u^B_\alpha = 1\right) + P\left(u^A_\alpha = -1\right)P\left(u^B_\alpha = -1\right) \qquad (10)$$

where $P(u^A_\alpha = 1)$ is the probability that the two elements $v_i$ and $v_j$ are in the same cluster in clustering A, where the element pair is indexed by $\alpha = \binom{N}{2} - \frac{(N-i)(N-i+1)}{2} + j - i$ with $i < j \leq N$. Likewise, $P(u^A_\alpha = -1)$ is the probability that the two elements $v_i$ and $v_j$ are in different clusters.


Under the assumption of $M_{num}$, there is a uniform probability of selecting a clustering from the $S(N,K_A)$ clusterings of N elements into $K_A$ clusters; we define $P_{num}(u^A_\alpha = 1)$ as the proportion of these clusterings with elements $v_i$ and $v_j$ in the same cluster. To find this proportion, notice that we can ensure $v_i$ is in the same cluster as $v_j$ by first partitioning all elements besides $v_i$ into $K_A$ clusters; then, we can add $v_i$ to the same cluster as $v_j$. Since there are $S(N-1,K_A)$ such clusterings without element $v_i$, this gives

$$P_{num}(u^A_\alpha = 1) = \frac{S(N-1,K_A)}{S(N,K_A)} \qquad (11)$$
$$P_{num}(u^A_\alpha = -1) = 1 - \frac{S(N-1,K_A)}{S(N,K_A)}. \qquad (12)$$

Finally, the expected Rand index between two clusterings A and B with $K_A$ and $K_B$ clusters assuming $M_{num}$ is given by

$$E_{num}[RI(A,B)] = \frac{S(N-1,K_A)}{S(N,K_A)}\,\frac{S(N-1,K_B)}{S(N,K_B)} + \left(1 - \frac{S(N-1,K_A)}{S(N,K_A)}\right)\left(1 - \frac{S(N-1,K_B)}{S(N,K_B)}\right). \qquad (13)$$

When N is large, we can approximate the Stirling numbers of the second kind for a fixed K by $S(N,K) \approx \frac{K^N}{K!}$. This can be inserted into equation (13) to give the following approximation for the mean of the Rand index assuming $M_{num}$

$$E_{num}[RI(A,B)] \approx \frac{1}{K_A K_B} + \left(1 - \frac{1}{K_A}\right)\left(1 - \frac{1}{K_B}\right). \qquad (14)$$

Interestingly, this suggests that the Rand index goes to 1 at a rate inversely related to the smaller number of clusters, $O(\max\{K_A^{-1}, K_B^{-1}\})$.
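
Both the exact value of Equation 13 and the approximation of Equation 14 are straightforward to evaluate numerically. A minimal sketch, assuming SymPy's stirling (which returns Stirling numbers of the second kind by default) is available:

\begin{verbatim}
from sympy.functions.combinatorial.numbers import stirling


def expected_rand_num(N, K_A, K_B):
    """Exact E_num[RI(A,B)] of Equation 13."""
    p_a = stirling(N - 1, K_A) / stirling(N, K_A)   # P_num(u^A = 1), Equation 11
    p_b = stirling(N - 1, K_B) / stirling(N, K_B)
    return float(p_a * p_b + (1 - p_a) * (1 - p_b))


def expected_rand_num_approx(K_A, K_B):
    """Large-N approximation of Equation 14."""
    return 1.0 / (K_A * K_B) + (1.0 - 1.0 / K_A) * (1.0 - 1.0 / K_B)


print(expected_rand_num(100, 5, 8))     # exact value for N = 100
print(expected_rand_num_approx(5, 8))   # close to the exact value for large N
\end{verbatim}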

D.3 Expected Rand Index, All Clusterings Mall

The average of the Rand index between two clusterings under the assumption that the clusterings were drawn with uniform probability from the set of all clusterings directly follows from the random model with a fixed number of clusters previously discussed. Namely, because Bell numbers are related to Stirling numbers of the second kind by $B_N = \sum_{k=1}^{N} S(N,k)$, a similar reasoning as followed for equation (11) gives

$$P_{all}(u_\alpha = 1) = \sum_{k=1}^{N} \frac{S(N,k)}{B_N}\, P_{num}(u^k_\alpha = 1) \qquad (15)$$
$$= \frac{1}{B_N}\sum_{k=1}^{N} S(N,k)\,\frac{S(N-1,k)}{S(N,k)} = \frac{B_{N-1}}{B_N}. \qquad (16)$$


Using this probability for the expectation in equation (9) gives the expected Rand index under the assumption that both clusterings were uniformly drawn from the set of all clusterings of N elements

$$E_{all}[RI(A,B)] = \left(\frac{B_{N-1}}{B_N}\right)^2 + \left(1 - \frac{B_{N-1}}{B_N}\right)^2. \qquad (17)$$

When N is large, we can approximate the ratio of successive Bell numbers by $\frac{B_{N+1}}{B_N} \approx \frac{N}{\log N}$. Using this approximation in equation (17) gives the following approximation for the mean of the Rand index in $M_{all}$

$$E_{all}[RI] \approx \left(\frac{\log N}{N}\right)^2 + \left(1 - \frac{\log N}{N}\right)^2. \qquad (18)$$

Interestingly, this suggests that the expected Rand index between two random clusterings goes to 1 at a rate $O\left(\frac{\log N}{N}\right)$, inversely proportional to the number of elements.
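
The Bell-number ratio of Equation 17 and the approximation of Equation 18 can be compared directly; a minimal sketch using SymPy's Bell numbers:

\begin{verbatim}
import math

from sympy import bell


def expected_rand_all(N):
    """Exact E_all[RI] of Equation 17 via the ratio B_{N-1} / B_N."""
    r = bell(N - 1) / bell(N)        # kept as an exact rational by SymPy
    return float(r**2 + (1 - r)**2)


def expected_rand_all_approx(N):
    """Large-N approximation of Equation 18."""
    r = math.log(N) / N
    return r**2 + (1 - r)**2


for N in (10, 100, 500):
    print(N, expected_rand_all(N), expected_rand_all_approx(N))
\end{verbatim}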

D.4 One-Sided Rand

Consider a reference clustering G that has the cluster size sequence $[g_1, \ldots, g_{K_G}]$. The binary pair vector representation of G has $Q^G_1$ 1s and $Q^G_{-1} = \binom{N}{2} - Q^G_1$ $-1$s. The one-sided expectation of the Rand index under the assumption that clustering A was randomly drawn from either the $M_{num}$ or $M_{all}$ random models follows from treating the two clusterings independently as in equation (10). Since the cluster sequence for the reference clustering is fixed, the probability that a random entry in the binary pair vector is 1 is given by the fraction of 1s in the vector

$$P_{num}(u^G_\alpha = 1) = \frac{Q^G_1}{\binom{N}{2}} = \frac{1}{\binom{N}{2}}\sum_{i=1}^{K_G}\binom{g_i}{2}. \qquad (19)$$

The one-sided expectation of the Rand index under the assumption that clustering A was randomly drawn from the set of all clusterings with a fixed number of clusters, $M^1_{num}$, is

$$E^1_{num}[RI(A,G)] = \frac{S(N-1,K_A)}{S(N,K_A)}\,\frac{Q^G_1}{\binom{N}{2}} + \left(1 - \frac{S(N-1,K_A)}{S(N,K_A)}\right)\left(1 - \frac{Q^G_1}{\binom{N}{2}}\right). \qquad (20)$$

The one-sided expectation of the Rand index with the assumption that the random clustering A is drawn from the ensemble of all partitions, $M^1_{all}$, is

$$E^1_{all}[RI(A,G)] = \frac{B_{N-1}}{B_N}\,\frac{Q^G_1}{\binom{N}{2}} + \left(1 - \frac{B_{N-1}}{B_N}\right)\left(1 - \frac{Q^G_1}{\binom{N}{2}}\right). \qquad (21)$$
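
A sketch of the two one-sided expectations (Equations 20 and 21), taking the fixed reference cluster sizes and, for $M^1_{num}$, the number of clusters of the random clustering; the example sizes are arbitrary:

\begin{verbatim}
from scipy.special import comb
from sympy import bell
from sympy.functions.combinatorial.numbers import stirling


def one_sided_expected_rand(N, sizes_G, K_A):
    """One-sided E1_num[RI(A,G)] (Eq. 20) and E1_all[RI(A,G)] (Eq. 21)."""
    q1_g = sum(comb(g, 2, exact=True) for g in sizes_G)
    p_g = q1_g / comb(N, 2, exact=True)      # fraction of 1s in U^G, Equation 19
    p_num = float(stirling(N - 1, K_A) / stirling(N, K_A))
    p_all = float(bell(N - 1) / bell(N))
    e_num = p_num * p_g + (1 - p_num) * (1 - p_g)
    e_all = p_all * p_g + (1 - p_all) * (1 - p_g)
    return e_num, e_all


# Reference clustering with sizes [10, 5, 5] on N = 20 elements; random A has 4 clusters.
print(one_sided_expected_rand(20, [10, 5, 5], K_A=4))
\end{verbatim}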


Appendix E. Mutual Information

Another prominent family of clustering similarity measures is based on the Shannon information between probabilistic representations of each clustering. These probability distributions are also calculated from the contingency table T, Table 3. The partition entropy H of a clustering A is given by

$$H(A) = -\sum_{k=1}^{K_A}\frac{a_k}{N}\log\frac{a_k}{N}. \qquad (22)$$

Using this entropy, the mutual information $MI(A,B)$ between two clusterings A and B is given by

$$MI(A,B) = H(A) + H(B) - H(A,B) = \sum_{k,m=1}^{K_A,K_B}\frac{n_{km}}{N}\log\frac{n_{km}N}{a_k b_m}. \qquad (23)$$

The mutual information can be interpreted as an inverse measure of independence between the clusterings, or a measure of the amount of information each clustering has about the other. As it can vary in the range $[0, \min\{H(A), H(B)\}]$, to facilitate comparisons, it is desirable to normalize it to the range [0, 1]. There are at least six proposals in the literature for this upper bound, each with different advantages and drawbacks

$$\min\{H(A), H(B)\} \leq \sqrt{H(A)H(B)} \leq \frac{H(A) + H(B)}{2} \leq \max\{H(A), H(B)\} \leq \max\{\log K_A, \log K_B\} \leq \log N. \qquad (24)$$

The resulting measures are all known as normalized mutual information (NMI). Although NMI exhibits some desirable properties, such as depending on the relative proportions of the cluster sizes in each clustering rather than on the number of elements, its dependence on the number of clusters in each clustering means it is known to favor comparisons between clusterings with more clusters, regardless of any other shared clustering features (White and Liu, 1994, Vinh et al., 2010, Amelio and Pizzuti, 2015).
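
For reference, the entropies of Equation 22, the mutual information of Equation 23, and three of the normalizations in Equation 24 can all be computed from the contingency table. A minimal illustrative sketch (natural logarithms are used here):

\begin{verbatim}
import numpy as np


def mi_and_nmi(labels_a, labels_b):
    """Partition entropies (Eq. 22), MI (Eq. 23), and three NMI variants (Eq. 24)."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    n = labels_a.size
    contingency = np.zeros((labels_a.max() + 1, labels_b.max() + 1))
    for la, lb in zip(labels_a, labels_b):
        contingency[la, lb] += 1
    p_ab = contingency / n
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    mask = p_ab > 0
    mi = np.sum(p_ab[mask] * np.log(p_ab[mask] / np.outer(p_a, p_b)[mask]))
    nmi = {"min": mi / min(h_a, h_b),
           "sqrt": mi / np.sqrt(h_a * h_b),
           "max": mi / max(h_a, h_b)}
    return h_a, h_b, mi, nmi


print(mi_and_nmi([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2]))
\end{verbatim}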

E.1 Expected Mutual Information, Permutation Model (Mperm)

The mutual information between two clusterings has also been studied under the assumption that both clusterings were randomly generated from the permutation model (Vinh et al., 2009, Romano et al., 2014, Vinh et al., 2010). Expanding the definition of the mutual information gives

$$E_{perm}[MI(A,B)] = E_{perm}[H(A)] + E_{perm}[H(B)] - E_{perm}[H(A,B)] \qquad (25)$$
$$= H(A) + H(B) - E_{perm}[H(A,B)],$$

where the second line follows from the fact that all cluster sizes (and hence the entropy) are the same for every clustering in $M_{perm}$.


The expectation of the joint entropy with respect to $M_{perm}$ for the cluster size distributions of clusterings A and B is the average over all possible contingency tables T with entries $n_{km}$

$$E_{perm}[H(A,B)] = -\sum_{T} p(T|A,B)\sum_{k=1}^{K_A}\sum_{m=1}^{K_B}\frac{n_{km}}{N}\log\left(\frac{n_{km}}{N}\right). \qquad (26)$$

Rearranging the summations, and recalling that the entries of the contingency tables are hypergeometrically distributed such that the probability of each entry $p(n_{km}) = \binom{b_m}{n_{km}}\binom{N-b_m}{a_k-n_{km}} \big/ \binom{N}{a_k}$ is only dependent on the row sum $a_k$ and column sum $b_m$, gives

$$E_{perm}[H(A,B)] = -\sum_{k=1}^{K_A}\sum_{m=1}^{K_B}\sum_{n_{km}=0}^{\min\{a_k, b_m\}}\frac{n_{km}}{N}\log\left(\frac{n_{km}}{N}\right)\frac{\binom{b_m}{n_{km}}\binom{N-b_m}{a_k-n_{km}}}{\binom{N}{a_k}}. \qquad (27)$$

Combining this expression with the individual entropies H(A) and H(B) gives (Vinh et al., 2009)

$$E_{perm}[MI(A,B)] = \sum_{k=1}^{K_A}\sum_{m=1}^{K_B}\sum_{n_{km}=0}^{\min\{a_k, b_m\}}\frac{n_{km}}{N}\log\left(\frac{N n_{km}}{a_k b_m}\right)\frac{\binom{b_m}{n_{km}}\binom{N-b_m}{a_k-n_{km}}}{\binom{N}{a_k}}. \qquad (28)$$

The adjusted mutual information (AMI) of Vinh et al. (2009) uses $M_{perm}$ to correct the MI for chance according to equation (3), selecting an upper bound $MI_{max}$ from equation (24), to give

$$AMI(A,B) = \frac{MI(A,B) - E_{perm}[MI(A,B)]}{MI_{max} - E_{perm}[MI(A,B)]}. \qquad (29)$$
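
Equation 28 can be evaluated by summing the hypergeometric expectation over every cell of the contingency table, needing only the two cluster size sequences; combined with Equation 29 this yields the AMI. The sketch below is illustrative, and its last line checks the result against scikit-learn's adjusted_mutual_info_score with the max upper bound, which should agree.

\begin{verbatim}
import numpy as np
from scipy.stats import hypergeom
from sklearn.metrics import adjusted_mutual_info_score, mutual_info_score


def expected_mi_perm(a_sizes, b_sizes):
    """E_perm[MI(A,B)] of Equation 28 from the cluster size sequences alone."""
    N = sum(a_sizes)
    emi = 0.0
    for a in a_sizes:
        for b in b_sizes:
            # n_km = 0 contributes nothing, so start at the smallest feasible overlap.
            for n_km in range(max(1, a + b - N), min(a, b) + 1):
                p = hypergeom.pmf(n_km, N, b, a)   # P(n_km | a_k, b_m), Equation 27
                emi += p * (n_km / N) * np.log(N * n_km / (a * b))
    return emi


def entropy(sizes):
    N = sum(sizes)
    return -sum((s / N) * np.log(s / N) for s in sizes)


A = [0, 0, 0, 1, 1, 1]
B = [0, 0, 1, 1, 2, 2]
mi, emi = mutual_info_score(A, B), expected_mi_perm([3, 3], [2, 2, 2])
ami = (mi - emi) / (max(entropy([3, 3]), entropy([2, 2, 2])) - emi)
print(ami)
print(adjusted_mutual_info_score(A, B, average_method="max"))  # should agree
\end{verbatim}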

E.2 Expected Mutual Information, Fixed Number of Clusters (Mnum)

Next, we consider the mutual information between two clusterings under the assumptions that both clusterings were independently and uniformly drawn from the ensemble of clusterings with a fixed number of clusters ($M_{num}$). In this case, the expected mutual information is dependent on both the average partition entropy and the joint partition entropy

$$E_{num}[MI(A,B)] = E_{num}[H(A)] + E_{num}[H(B)] - E_{num}[H(A,B)]. \qquad (30)$$

This expectation can be found by considering the average partition entropy and joint partition entropy separately. Recall that in the permutation model $E_{perm}[H(A)] = H(A)$ since the cluster sizes remain unchanged; however, the same does not hold in $M_{num}$. Denoting a random clustering with $K_A$ clusters as $\pi_{K_A}$, and using the notation $\sum_{\sigma_i \in \pi_{K_A}}$ to indicate the summation over all clusters in the clustering $\pi_{K_A}$, where the cardinality of the cluster is $|\sigma_i| = a$, the expected partition entropy of a random clustering in $M_{num}$ is

$$E_{num}[H(A)] = -\sum_{\pi_{K_A}} p_{num}(\pi_{K_A})\sum_{\sigma_i \in \pi_{K_A}}\frac{a}{N}\log\left(\frac{a}{N}\right). \qquad (31)$$


Note that this expression only depends on the size $a = |\sigma_i|$ for $\sigma_i \in \pi_{K_A}$ of the clusters in the clustering. This means the expected entropy of a random clustering can be rewritten in terms of the expected contribution to the entropy from a random cluster of size a. A counting argument gives the number of clusters of size a which appear in all of the clusterings in the random ensemble. First, choose a of the N elements to form the cluster. Each clustering in $M_{num}$ must have $K_A$ clusters, so the remaining $N-a$ elements have to be arranged into $K_A - 1$ other clusters. There are $S(N-a, K_A-1)$ ways to partition these remaining elements. This gives $\binom{N}{a} S(N-a, K_A-1)$ clusters of size a (Chern et al., 2014). The expected number of clusters $n_{num}(a)$ in a random clustering drawn from $M_{num}$ is then $\binom{N}{a} S(N-a, K_A-1) \big/ S(N,K_A)$. Therefore, the expected clustering entropy in $M_{num}$ is:

$$E_{num}[H(A)] = -\sum_{a=1}^{N-(K_A-1)}\binom{N}{a}\frac{S(N-a, K_A-1)}{S(N,K_A)}\,\frac{a}{N}\log\left(\frac{a}{N}\right), \qquad (32)$$

where the summation is over all possible cluster sizes $[1, N-(K_A-1)]$ encountered when partitioning N elements into $K_A$ clusters.
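
Equation 32 can be evaluated directly; a short sketch, assuming SymPy for exact binomial coefficients and Stirling numbers:

\begin{verbatim}
import numpy as np
from sympy import binomial
from sympy.functions.combinatorial.numbers import stirling


def expected_entropy_num(N, K):
    """E_num[H(A)] of Equation 32: entropy contributions of clusters of size a,
    weighted by the expected number of such clusters in a random clustering."""
    total = stirling(N, K)
    h = 0.0
    for a in range(1, N - (K - 1) + 1):
        weight = float(binomial(N, a) * stirling(N - a, K - 1) / total)
        h -= weight * (a / N) * np.log(a / N)
    return h


print(expected_entropy_num(50, 4))  # bounded above by log(4), the balanced-clustering entropy
\end{verbatim}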

Similarly, the expected joint entropy of two random clusterings drawn independently from $M_{num}$ is given by the expected number of clusters of size a from a clustering with $K_A$ clusters, the expected number of clusters of size b from a clustering with $K_B$ clusters, and then considering the probability of overlap $p(n_{km}) = \binom{b}{n_{km}}\binom{N-b}{a-n_{km}} \big/ \binom{N}{a}$ from the resulting random contingency table

$$E_{num}[H(A,B)] = -\sum_{\pi_{K_A}} p_{num}(\pi_{K_A})\sum_{\pi_{K_B}} p_{num}(\pi_{K_B})\sum_{k=1}^{K_A}\sum_{m=1}^{K_B}\sum_{n_{km}}\frac{n_{km}}{N}\log\left(\frac{n_{km}}{N}\right)\frac{\binom{b_m}{n_{km}}\binom{N-b_m}{a_k-n_{km}}}{\binom{N}{a_k}}$$
$$= -\sum_{a=1}^{N-(K_A-1)}\sum_{b=1}^{N-(K_B-1)}\sum_{n_{ij}}\left[\binom{N}{a}\frac{S(N-a,K_A-1)}{S(N,K_A)}\binom{N}{b}\frac{S(N-b,K_B-1)}{S(N,K_B)}\,\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{b}{n_{ij}}\binom{N-b}{a-n_{ij}}}{\binom{N}{a}}\right]. \qquad (33)$$

Note that, when using equation (3) to adjust the mutual information for chance under the assumption of $M_{num}$, the maximum value for the measure over the entire ensemble of random clusterings has to be used.
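
Equation 33 involves a triple sum and is only practical to evaluate directly for small N, but the computation itself is simple; a small illustrative sketch:

\begin{verbatim}
import numpy as np
from scipy.stats import hypergeom
from sympy import binomial
from sympy.functions.combinatorial.numbers import stirling


def expected_joint_entropy_num(N, K_A, K_B):
    """E_num[H(A,B)] of Equation 33 by direct summation (small N only)."""
    s_a, s_b = stirling(N, K_A), stirling(N, K_B)
    h = 0.0
    for a in range(1, N - (K_A - 1) + 1):
        w_a = float(binomial(N, a) * stirling(N - a, K_A - 1) / s_a)
        for b in range(1, N - (K_B - 1) + 1):
            w_b = float(binomial(N, b) * stirling(N - b, K_B - 1) / s_b)
            for n in range(max(1, a + b - N), min(a, b) + 1):
                h -= w_a * w_b * (n / N) * np.log(n / N) * hypergeom.pmf(n, N, b, a)
    return h


print(expected_joint_entropy_num(20, 3, 4))  # cost grows roughly as O(N^3)
\end{verbatim}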

E.3 Expected Mutual Information, All Clusterings Mall

The expected mutual information between two clusterings under the assumption that both clusterings were independently and uniformly drawn from the set of all possible clusterings of N elements, $M_{all}$, has a similar derivation as the previous case of $M_{num}$. Both the expectations for the entropy of a single clustering and the joint entropy of the two clusterings need to be considered separately and can be rewritten in terms of the contributions from


individual clusters of a given size. In $M_{all}$, the number of clusters of size a is again found by choosing a of the N elements for the cluster and then partitioning the remaining $N-a$ elements; there are now $B_{N-a}$ possible ways to cluster the remaining elements (Chern et al., 2014). This gives

$$E_{all}[H(A)] = -\sum_{a=1}^{N}\binom{N}{a}\frac{B_{N-a}}{B_N}\,\frac{a}{N}\log\left(\frac{a}{N}\right). \qquad (34)$$

The expected joint entropy for two clusterings is then

$$E_{all}[H(A,B)] = -\sum_{\pi_a} p(\pi_a)\sum_{\pi_b} p(\pi_b)\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{n_{ij}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{b_j}{n_{ij}}\binom{N-b_j}{a_i-n_{ij}}}{\binom{N}{a_i}}$$
$$= -\sum_{a=1}^{N}\binom{N}{a}\frac{B_{N-a}}{B_N}\sum_{b=1}^{N}\binom{N}{b}\frac{B_{N-b}}{B_N}\sum_{n_{ij}=1}^{\min\{a,b\}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{b}{n_{ij}}\binom{N-b}{a-n_{ij}}}{\binom{N}{a}}$$
$$= -2\sum_{a=1}^{N}\sum_{b=1}^{a-1}\binom{N}{a}\frac{B_{N-a}}{B_N}\binom{N}{b}\frac{B_{N-b}}{B_N}\sum_{n_{ij}=1}^{b}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{b}{n_{ij}}\binom{N-b}{a-n_{ij}}}{\binom{N}{a}} \qquad (35)$$
$$\quad\; -\sum_{a=1}^{N}\left(\binom{N}{a}\frac{B_{N-a}}{B_N}\right)^{2}\sum_{n_{ij}=1}^{a}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{a}{n_{ij}}\binom{N-a}{a-n_{ij}}}{\binom{N}{a}},$$

with the last simplification resulting from the symmetry of the hypergeometric term with respect to a and b.
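
Equations 34 and 35 can be evaluated in the same way as the $M_{num}$ case, with Bell-number weights in place of the Stirling-number weights; an illustrative sketch for small N:

\begin{verbatim}
import numpy as np
from scipy.stats import hypergeom
from sympy import bell, binomial


def expected_entropy_all(N):
    """E_all[H(A)] of Equation 34."""
    B_N = bell(N)
    h = 0.0
    for a in range(1, N + 1):
        w = float(binomial(N, a) * bell(N - a) / B_N)
        h -= w * (a / N) * np.log(a / N)
    return h


def expected_joint_entropy_all(N):
    """E_all[H(A,B)], second line of Equation 35, by direct summation (small N only)."""
    B_N = bell(N)
    h = 0.0
    for a in range(1, N + 1):
        w_a = float(binomial(N, a) * bell(N - a) / B_N)
        for b in range(1, N + 1):
            w_b = float(binomial(N, b) * bell(N - b) / B_N)
            for n in range(max(1, a + b - N), min(a, b) + 1):
                h -= w_a * w_b * (n / N) * np.log(n / N) * hypergeom.pmf(n, N, b, a)
    return h


print(expected_entropy_all(20), expected_joint_entropy_all(20))
\end{verbatim}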

E.4 One-Sided Mutual Information

As was the case for the one-sided expected Rand index, the one-sided expectation of the mutual information follows from the fact that the cluster sequence for the reference clustering is fixed. This results in the following one-sided expected joint entropy when the random clustering A is drawn from the $M_{num}$ model

$$E^1_{num}[H(A,G)] = -\sum_{a=1}^{N}\binom{N}{a}\frac{S(N-a,K_A-1)}{S(N,K_A)}\sum_{b=1}^{K_G}\sum_{n_{ij}=1}^{\min\{a,\,g_b\}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{g_b}{n_{ij}}\binom{N-g_b}{a-n_{ij}}}{\binom{N}{a}}. \qquad (36)$$

The corresponding one-sided expected MI assuming $M^1_{num}$ is

$$E^1_{num}[MI(A,G)] = -\sum_{a=1}^{K_A}\binom{N}{a}\frac{S(N-a,K_A-1)}{S(N,K_A)}\,\frac{a}{N}\log\left(\frac{a}{N}\right) - \sum_{b=1}^{K_G}\frac{g_b}{N}\log\left(\frac{g_b}{N}\right) \qquad (37)$$
$$\quad\; + \sum_{a=1}^{N}\binom{N}{a}\frac{S(N-a,K_A-1)}{S(N,K_A)}\sum_{b=1}^{K_G}\sum_{n_{ij}=1}^{\min\{a,\,g_b\}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{g_b}{n_{ij}}\binom{N-g_b}{a-n_{ij}}}{\binom{N}{a}}.$$


The one-sided expected joint entropy when the random clustering A is drawn from the $M^1_{all}$ model is

$$E^1_{all}[H(A,G)] = -\sum_{a=1}^{N}\binom{N}{a}\frac{B_{N-a}}{B_N}\sum_{b=1}^{K_G}\sum_{n_{ij}=1}^{\min\{a,\,g_b\}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{g_b}{n_{ij}}\binom{N-g_b}{a-n_{ij}}}{\binom{N}{a}}, \qquad (38)$$

and the one-sided expectation of the MI when the random clustering A is drawn from the $M^1_{all}$ model is

$$E^1_{all}[MI(A,G)] = -\sum_{a=1}^{K_A}\binom{N}{a}\frac{B_{N-a}}{B_N}\,\frac{a}{N}\log\left(\frac{a}{N}\right) - \sum_{b=1}^{K_G}\frac{g_b}{N}\log\left(\frac{g_b}{N}\right) \qquad (39)$$
$$\quad\; + \sum_{a=1}^{N}\binom{N}{a}\frac{B_{N-a}}{B_N}\sum_{b=1}^{K_G}\sum_{n_{ij}=1}^{\min\{a,\,g_b\}}\frac{n_{ij}}{N}\log\left(\frac{n_{ij}}{N}\right)\frac{\binom{g_b}{n_{ij}}\binom{N-g_b}{a-n_{ij}}}{\binom{N}{a}}.$$
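
One way to evaluate the one-sided expectation numerically is to assemble it from its three parts: the expected entropy of the random clustering (Equation 32), the fixed entropy of the reference G, and the one-sided joint entropy (Equation 36). The sketch below does this for the $M^1_{num}$ case and is an illustration, not the authors' code; the $M^1_{all}$ case follows by swapping the Stirling-number weights for Bell-number weights.

\begin{verbatim}
import numpy as np
from scipy.stats import hypergeom
from sympy import binomial
from sympy.functions.combinatorial.numbers import stirling


def one_sided_expected_mi_num(N, K_A, sizes_G):
    """Sketch of E1_num[MI(A,G)] = E_num[H(A)] + H(G) - E1_num[H(A,G)]."""
    s_total = stirling(N, K_A)
    # Expected entropy of the random clustering (Equation 32); the weight is
    # zero for a > N - (K_A - 1), so the truncated range is equivalent.
    e_h_a = 0.0
    for a in range(1, N - (K_A - 1) + 1):
        w = float(binomial(N, a) * stirling(N - a, K_A - 1) / s_total)
        e_h_a -= w * (a / N) * np.log(a / N)
    # Entropy of the fixed reference clustering G.
    h_g = -sum((g / N) * np.log(g / N) for g in sizes_G)
    # One-sided expected joint entropy (Equation 36).
    e_h_ag = 0.0
    for a in range(1, N - (K_A - 1) + 1):
        w = float(binomial(N, a) * stirling(N - a, K_A - 1) / s_total)
        for g in sizes_G:
            for n in range(max(1, a + g - N), min(a, g) + 1):
                e_h_ag -= w * (n / N) * np.log(n / N) * hypergeom.pmf(n, N, g, a)
    return e_h_a + h_g - e_h_ag


print(one_sided_expected_mi_num(20, 4, [10, 5, 5]))
\end{verbatim}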

Appendix F. Application Data Sets

F.1 Digits Data Set

The digits data set is bundled with the scikit-learn source code and consists of 1,797 images of 8×8 gray-level pixels of handwritten digits. The reference clustering contains 10 clusters corresponding to the true digit. The data set was originally assembled in Alimoglu and Alpaydin (1996). To provide a visualization, the data was projected to 2-d using the t-Distributed Stochastic Neighbor Embedding (t-SNE) dimensionality reduction method (Van der Maaten and Hinton, 2008), initialized from the PCA decomposition.
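
The sketch below shows one way to load and embed this data set; it is illustrative only (K-means with K = 10 is a convenient clustering to compare against the reference, not necessarily the exact pipeline used for the figures).

\begin{verbatim}
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.metrics import adjusted_rand_score

# 1,797 images of 8x8 gray-level pixels; digits.target is the reference clustering.
digits = load_digits()
X, reference = digits.data, digits.target

# An example clustering to compare against the reference.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
print("ARI under the permutation model:", adjusted_rand_score(reference, labels))

# 2-d t-SNE embedding initialized from the PCA decomposition, for visualization.
embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
print(embedding.shape)  # (1797, 2)
\end{verbatim}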

F.2 Gene Expression Data Set

The data was assembled in de Souto et al. (2008) and is freely available from http://bioinformatics.rutgers.edu/Publications/deSouto2008c/index.html. The studies represent two prominent methods for determining gene expression in cell tissue samples from cancer tumors or healthy controls, Affymetrix microarrays and cDNA microarrays, which, respectively, measure the number of RNA copies found in the cell and the ratio of the number of copies vs a control sample. Each study contains anywhere from 22 to 248 data points (individual tissue samples) for which between 85 and 4,553 features (individual gene expression) were measured after removing the uninformative and missing genes. Please see de Souto et al. (2008) for details of this selection process.

References

Ahmed N. Albatineh and Magdalena Niewiadomska-Bugaj. Correcting Jaccard and other similarity indices for chance agreement in cluster analysis. Advances in Data Analysis and Classification, 5(3):179–200, 2011.

Ahmed N. Albatineh, Magdalena Niewiadomska-Bugaj, and Daniel Mihalko. On similarity indices and correction for chance agreement. Journal of Classification, 23(2):301–313, 2006.


Fevzi Alimoglu and Ethem Alpaydin. Methods of combining multiple classifiers based on different representations for pen-based handwritten digit recognition. In Proceedings of the Fifth Turkish Artificial Intelligence and Artificial Neural Networks Symposium (TAINN 96), 1996.

Alessia Amelio and Clara Pizzuti. Is normalized mutual information a fair measure for comparing community detection methods? In Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, pages 1584–1585. ACM, 2015.

Bobbie Chern, Persi Diaconis, Daniel M. Kane, and Robert C. Rhoades. Closed expressions for averages of set partition statistics. Research in the Mathematical Sciences, 1(1):1–32, 2014.

Leon Danon, Albert Díaz-Guilera, Jordi Duch, and Alex Arenas. Comparing community structure identification. Journal of Statistical Mechanics: Theory and Experiment, 2005(09):P09008, September 2005.

Marcilio C. P. de Souto, Ivan G. Costa, Daniel S. A. de Araujo, Teresa B. Ludermir, and Alexander Schliep. Clustering cancer gene expression data: a comparative study. BMC Bioinformatics, 9(1):1, 2008.

Janice L. DuBien and William D. Warde. Some distributional results concerning a comparative statistic used in cluster analysis. In ASA Proceedings of the Social Statistics Section, pages 309–313, 1981.

Janice L. DuBien, William D. Warde, and Seong S. Chae. Moments of Rand's C statistic in cluster analysis. Statistics & Probability Letters, 69(3):243–252, 2004.

Edward B. Fowlkes and Colin L. Mallows. A method for comparing two hierarchical clusterings. Journal of the American Statistical Association, 78(383):553–569, 1983.

Brendan J. Frey and Delbert Dueck. Clustering by passing messages between data points. Science, 315(5814):972–976, February 2007.

Anna Goldenberg, Alice X. Zheng, Stephen E. Fienberg, and Edoardo M. Airoldi. A survey of statistical network models. Foundations and Trends in Machine Learning, 2(2):129–233, February 2010.

Lawrence Hubert and Phipps Arabie. Comparing partitions. Journal of Classification, 2(1):193–218, December 1985.

Anil K. Jain. Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 31(8):651–666, 2010.

Andrea Lancichinetti and Santo Fortunato. Community detection algorithms: a comparative analysis. Physical Review E, 80(5):056117, August 2009.

Toufik Mansour. Combinatorics of Set Partitions. CRC Press, 2012.


Paul D. McNicholas and Thomas Brendan Murphy. Model-based clustering of microarray expression data via latent Gaussian mixture models. Bioinformatics, 26(21):2705–2712, 2010.

Marina Meila. Comparing clusterings: an axiomatic view. In Proceedings of the 22nd International Conference on Machine Learning, pages 577–584, New York, NY, USA, 2005. ACM.

Darius Pfitzner, Richard Leibbrandt, and David Powers. Characterization and evaluation of similarity measures for pairs of clusterings. Knowledge and Information Systems, 19(3):361–394, 2009.

William M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846, December 1971.

Simone Romano, James Bailey, Vinh Nguyen, and Karin Verspoor. Standardized mutual information for clustering comparisons: one step further in adjustment for chance. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1143–1151, 2014.

Simone Romano, Nguyen Xuan Vinh, James Bailey, and Karin Verspoor. Adjusting for chance clustering comparison measures. Journal of Machine Learning Research, 17:1–32, 2016.

James Sethna. Statistical Mechanics: Entropy, Order Parameters, and Complexity, volume 14. Oxford University Press, 2006.

D. Steinley, M. J. Brusco, and L. Hubert. The variance of the adjusted Rand index. Psychological Methods, 2016.

Anbupalam Thalamuthu, Indranil Mukhopadhyay, Xiaojing Zheng, and George C. Tseng. Evaluation and comparison of gene clustering methods in microarray analysis. Bioinformatics, 22(19):2405–2412, 2006.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.

Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: is a correction for chance necessary? In Proceedings of the 26th Annual International Conference on Machine Learning (ICML-09), pages 1073–1080, 2009.

Nguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: variants, properties, normalization and correction for chance. The Journal of Machine Learning Research, 11:2837–2854, 2010.

David L. Wallace. A method for comparing two hierarchical clusterings: comment. Journal of the American Statistical Association, 78(383):569–576, 1983.


Allan P. White and Wei Zhong Liu. Technical note: bias in information-based measures in decision tree induction. Machine Learning, 15(3):321–329, 1994.

Ka Yee Yeung and Walter L. Ruzzo. Principal component analysis for clustering gene expression data. Bioinformatics, 17(9):763–774, 2001.

Ka Yee Yeung, Chris Fraley, Alejandro Murua, Adrian E. Raftery, and Walter L. Ruzzo. Model-based clustering and data transformations for gene expression data. Bioinformatics, 17(10):977–987, 2001.

Pan Zhang. A revisit to evaluating accuracy of community detection using the normalized mutual information. arXiv preprint arXiv:1501.03844, 2015.
