
  • 8/6/2019 ChengChurch_Biclustering

    1/11

Biclustering of Expression Data

Yizong Cheng†‡ and George M. Church†

†Department of Genetics, Harvard Medical School, Boston, MA 02115
‡Department of ECECS, University of Cincinnati, Cincinnati, OH 45221

[email protected], [email protected]

Abstract

An efficient node-deletion algorithm is introduced to find submatrices in expression data that have low mean squared residue scores, and it is shown to perform well in finding co-regulation patterns in yeast and human. This introduces "biclustering", or simultaneous clustering of both genes and conditions, to knowledge discovery from expression data. This approach overcomes some problems associated with traditional clustering methods, by allowing automatic discovery of similarity based on a subset of attributes, simultaneous clustering of genes and conditions, and overlapped grouping that provides a better representation for genes with multiple functions or regulated by many factors.

Keywords: microarray, gene expression pattern, clustering

Introduction

Gene expression data are being generated by DNA chips and other microarray techniques, and they are often presented as matrices of expression levels of genes under different conditions (including environments, individuals, and tissues). One of the usual goals in expression data analysis is to group genes according to their expression under multiple conditions, or to group conditions based on the expression of a number of genes. This may lead to discovery of regulatory patterns or condition similarities.

The current practice is often the application of some agglomerative or divisive clustering algorithm that partitions the genes or conditions into mutually exclusive groups or hierarchies. The basis for clustering is often the similarity between genes or conditions as a function of the rows or columns in the expression matrix. The similarity between rows is often a function of the row vectors involved, and that between columns a function of the column vectors. Functions that have been used include Euclidean distance (or the related coefficient of correlation and Gaussian similarity) and the dot product

‡Tel: (513) 556-1809, Fax: (513) 556-7326. †Tel: (617) 432-7266, Fax: (617) 432-7663.

Copyright 2000, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

(or a nonlinear generalization of it, as used in kernel methods) between the vectors. All conditions are given equal weights in the computation of gene similarity, and vice versa. One must doubt not only the rationale of equally weighting all conditions or all genes, but also that of giving the same weight to the same condition in the similarity computation between all the genes, and vice versa. Any such formula leads to the discovery of some similarity groups at the expense of obscuring some other similarity groups.

In expression data analysis, beyond grouping genes and conditions based on overall similarity, it is sometimes needed to salvage information lost during oversimplified similarity and grouping computation. One of the goals of doing so is to disclose the involvement of a gene or a condition in multiple pathways, some of which can only be discovered under the dominance of more consistent ones.

In this article, we introduce the concept of the bicluster, corresponding to a subset of genes and a subset of conditions with a high similarity score. Similarity is not treated as a function of pairs of genes or pairs of conditions. Instead, it is a measure of the coherence of the genes and conditions in the bicluster. This measure can be a symmetric function of the genes and conditions involved, and thus the finding of biclusters is a process that groups genes and conditions simultaneously. If we project these biclusters onto the dimension of genes or that of conditions, then we can see the result as clustering of either genes or conditions, into possibly overlapping groups.

A particular score that applies to expression data transformed by a logarithm and augmented by the additive inverse is the mean squared residue. The residue of an element aij in the bicluster indicated by the subsets I and J is

aij - aiJ - aIj + aIJ,  (1)

where aiJ is the mean of the ith row in the bicluster, aIj the mean of the jth column in the bicluster, and aIJ that of all elements in the bicluster. The mean squared residue is the variance of the set of all elements in the bicluster, plus the mean row variance and the mean column variance. We want to find biclusters with low mean squared residue, in particular, large and maximal


ones with scores below a certain threshold.

A special case of a perfect score (a zero mean squared residue) is a constant bicluster, whose elements all share a single value. When a bicluster has a non-zero score, it is always possible to remove a row or a column to lower the score, until the remaining bicluster becomes constant.

The problem of finding a maximum bicluster with a score lower than a threshold includes, as a special case, the problem of finding a maximum biclique (complete bipartite subgraph) in a bipartite graph. If a maximum biclique is one that maximizes the number of vertices involved (maximizing |I| + |J|), then the problem is equivalent to finding a maximum matching in the bipartite complement and can be solved using polynomial-time max-flow algorithms. However, this approach often results in a submatrix with maximum perimeter and zero area, particularly in the case of expression data, where the number of genes may be hundreds of times larger than the number of conditions.

If the goal is to find the largest balanced biclique, for example, the largest constant square submatrix, then the problem is proven to be NP-hard (Johnson, 1987). On the other hand, the hardness of finding one with the maximum area is still unknown.

Divisive algorithms for partitioning data into sets with approximately constant values have been proposed by Morgan and Sonquist (1963) and Hartigan (1972). The result is a hierarchy of clusters, and the algorithms foretold the more recent decision tree procedures. Hartigan (1972) also mentioned that the criterion for partitioning may be other than a constant value, for example, a two-way analysis of variance model, which is quite similar to the mean squared residue scoring proposed in this article. Rather than a divisive algorithm, our approach is more of the type of node deletion (Yannakakis, 1981).

The term biclustering has been used by Mirkin (1996) to describe "simultaneous clustering of both row and column sets in a data matrix". Other terms that have been associated with the same idea include "direct clustering" (Hartigan, 1972) and "box clustering" (Mirkin, 1996). Mirkin (1996) presents a node addition algorithm, starting with a single cell in the matrix, to find a maximal constant bicluster.

The algorithms mentioned above either find one constant bicluster, or find a set of mutually exclusive near-constant biclusters that cover the data matrix. There are ample reasons to allow biclusters to overlap in expression data analysis. One of the reasons is that a single gene may participate in multiple pathways that may or may not be co-active under all conditions. The problem of finding a minimum set of biclusters, either mutually exclusive or overlapping, to cover all the elements in a data matrix is a generalization of the problem of covering a bipartite graph by a minimum set of bicliques, either mutually exclusive or overlapping, which has been shown to be NP-hard (Orlin, 1977). Nau, Markowsky, Woodbury, and Amos (1978) had an interesting application of biclique covering to the interpretation of leukocyte-serum immunological reaction matrices, which are not unlike the gene-condition expression matrices.

In expression data analysis, the most important goal may not be finding the maximum bicluster or even finding a bicluster cover for the data matrix. More interesting is the finding of a set of genes showing strikingly similar up-regulation and down-regulation under a set of conditions. A low mean squared residue score plus a large variation from the constant may be a good criterion for identifying these genes and conditions.

In the following sections, we present a set of efficient algorithms that find these interesting gene and condition sets. The basic iterate in the method consists of the steps of masking null values and biclusters that have been discovered, coarse and fine node deletion, node addition, and the inclusion of inverted data.

Methods

A gene-condition expression matrix is a matrix of real numbers, with possible null values as some of the elements. Each element represents the logarithm of the relative abundance of the mRNA of a gene under a specific condition. The logarithm transformation is used to convert doubling or other multiplicative changes of the relative abundance into additive increments.

Definition 1. Let X be the set of genes and Y the set of conditions. Let aij be the element of the expression matrix A representing the logarithm of the relative abundance of the mRNA of the ith gene under the jth condition. Let I ⊆ X and J ⊆ Y be subsets of genes and conditions. The pair (I, J) specifies a submatrix AIJ with the following mean squared residue score.

H(I, J) = (1/(|I||J|)) Σ_{i∈I, j∈J} (aij - aiJ - aIj + aIJ)²,  (2)

where

aiJ = (1/|J|) Σ_{j∈J} aij,   aIj = (1/|I|) Σ_{i∈I} aij,  (3)

and

aIJ = (1/(|I||J|)) Σ_{i∈I, j∈J} aij = (1/|I|) Σ_{i∈I} aiJ = (1/|J|) Σ_{j∈J} aIj  (4)

are the row and column means and the mean in the submatrix (I, J). A submatrix AIJ is called a δ-bicluster if H(I, J) ≤ δ for some δ ≥ 0. □

The lowest score H(I, J) = 0 indicates that the gene expression levels fluctuate in unison. This includes the trivial or constant biclusters where there is no fluctuation. These trivial biclusters may not be very interesting but need to be discovered and masked so more interesting ones can be found. The row variance may be an accompanying score to reject trivial biclusters:

V(I, J) = (1/(|I||J|)) Σ_{i∈I, j∈J} (aij - aiJ)².  (5)
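As a quick sanity check of the score (2) and the row-variance score, the following Python sketch (our code, not the authors' C implementation; the helper names `msr` and `row_variance` are invented, and `row_variance` reflects our reading of Eq. (5) as averaging (aij - aiJ)² over the bicluster) computes both for a small matrix. An additive matrix aij = i + j fluctuates in unison and scores H = 0, and translation by a constant leaves H unchanged, as the text notes below.

```python
def msr(A, I, J):
    # Mean squared residue H(I, J) of the submatrix on rows I, columns J (Eq. 2).
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}   # row means aiJ
    col = {j: sum(A[i][j] for i in I) / nI for j in J}   # column means aIj
    avg = sum(row[i] for i in I) / nI                    # overall mean aIJ
    return sum((A[i][j] - row[i] - col[j] + avg) ** 2
               for i in I for j in J) / (nI * nJ)

def row_variance(A, I, J):
    # Accompanying score for rejecting trivial, flat biclusters (Eq. 5).
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    return sum((A[i][j] - row[i]) ** 2 for i in I for j in J) / (nI * nJ)

additive = [[i + j for j in range(4)] for i in range(5)]   # aij = i + j
I, J = range(5), range(4)
print(msr(additive, I, J))            # 0.0: perfect coherence
print(row_variance(additive, I, J))   # 1.25: coherent, but not flat
shifted = [[v + 100 for v in r] for r in additive]
print(msr(shifted, I, J))             # 0.0: translation leaves H unchanged
```

Scaling by a constant c multiplies H by c², so the ranking of biclusters is unaffected either way.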


The matrix (aij = 2^(ij)), i, j > 0, has the property that no square submatrix of a size larger than a single cell has a score lower than 0.5. A K × K matrix of all 0s except one 1 has the score

H = (1/K⁶)[(K - 1)⁴ + 2(K - 1)³ + (K - 1)²] = (K - 1)²/K⁴.  (6)

A matrix with elements randomly and uniformly generated in the range [a, b] has an expected score of (b - a)²/12. This result is independent of the size of the matrix. For example, when the range is [0, 800], the expected score is 53,333.
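These special cases are easy to verify numerically. In the sketch below (our code; under definition (2), the score of the K × K one-hot matrix works out to (K - 1)²/K⁴ by direct algebra, and a large uniform random matrix lands close to (b - a)²/12):

```python
import random

def msr(A, I, J):
    # Mean squared residue H(I, J) of the submatrix on rows I, columns J.
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    return sum((A[i][j] - row[i] - col[j] + avg) ** 2
               for i in I for j in J) / (nI * nJ)

# K x K all zeros except a single 1: score (K - 1)^2 / K^4.
for K in (2, 5, 10):
    A = [[0.0] * K for _ in range(K)]
    A[0][0] = 1.0
    assert abs(msr(A, range(K), range(K)) - (K - 1) ** 2 / K ** 4) < 1e-12

# Uniform random matrix on [0, 800]: score near 800**2 / 12 = 53,333.
random.seed(1)
B = [[random.uniform(0, 800) for _ in range(100)] for _ in range(100)]
h = msr(B, range(100), range(100))
print(abs(h / 53333.3 - 1.0) < 0.06)   # True: within a few percent
```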

A translation (addition of a constant) to the matrix will not affect the H(I, J) score. A scaling (multiplication by a constant) will affect the score (by the square of the constant), but will have no effect if the score is zero. Neither translation nor scaling affects the ranking of the biclusters in a matrix.

Theorem 1. The problem of finding the largest square δ-bicluster (|I| = |J|) is NP-hard.

Proof. We construct a reduction from the BALANCED COMPLETE BIPARTITE SUBGRAPH problem (GT24 in Garey and Johnson, 1979) to this problem. Given a bipartite graph (V1, V2, E) and a positive integer K, form a real-valued matrix A with aij = 0 if and only if (i, j) ∈ E and aij = 2^(ij) otherwise, for i, j > 0. If the largest square δ-bicluster in A, with δ < 0.5, has a size larger than or equal to K, then there is a K × K biclique (complete bipartite subgraph). Since the BALANCED COMPLETE BIPARTITE SUBGRAPH problem is NP-complete, the problem of this theorem is NP-hard. □

    Node Deletion

Every expression matrix contains a submatrix with the perfect score (H(I, J) = 0), and any single element is such a submatrix. Certainly the kind of biclusters we look for should have a maximum size, both in terms of the number of genes involved (|I|) and in terms of the number of conditions (|J|).

If we start with a large matrix, say, the one with all the data, then the question is how to select a submatrix with a low H score. A greedy method is to remove the row or column that achieves the largest decrease of the score. This requires the computation of the scores of all the submatrices that may result from any row or column removal, before each choice of removal can be made. This method (Algorithm 0) requires time in O((n + m)nm), where n and m are the row and column sizes of the expression matrix, to find one bicluster.

Algorithm 0 (Brute-Force Deletion and Addition).

Input: A, a matrix of real numbers, and δ ≥ 0, the maximum acceptable mean squared residue score.

Output: AIJ, a δ-bicluster that is a submatrix of A with row set I and column set J, with a score no larger than δ.

Initialization: I and J are initialized to the gene and condition sets in the data and AIJ = A.

Iteration:
1. Compute the score H for each possible row/column addition/deletion and choose the action that decreases H the most. If no action will decrease H, or if H(I, J) ≤ δ, return AIJ.

Lemma 1. Let S be a finite set of points, d a distance measure, and m a function assigning to a finite set S a point m(S) that minimizes Σ_{x∈S} d(x, m(S)); define E(S) = (1/|S|) Σ_{x∈S} d(x, m(S)). Removing any subset

R ⊆ {x ∈ S : d(x, m(S)) > E(S)}  (9)

will only make E(S - R) < E(S).

Proof. Condition (9) implies

B/|R| > (A + B)/|S|,  (10)

where A = Σ_{x∈S-R} d(x, m(S)) and B = Σ_{x∈R} d(x, m(S)).


The definition of the function m requires that A′ ≤ A, where A′ = Σ_{x∈S-R} d(x, m(S - R)). Thus, a sufficient condition for the inequality (11), E(S - R) < E(S), is

A/|S - R| < (A + B)/|S|,

which follows from (10). □

Lemma 2. Removing any subset of rows

R ⊆ {i ∈ I : (1/|J|) Σ_{j∈J} (aij - aiJ - aIj + aIJ)² > H(I, J)}  (24)

will only decrease the score H(I, J).

Proof. Let the points in Lemma 1 be |J|-dimensional real-valued vectors, and let S be the set of vectors bi with components bij = aij - aiJ for i ∈ I and j ∈ J. The function d is defined as

d(bi, bk) = Σ_{j∈J} (bij - bkj)².  (25)

In this case,

m(S) = (1/|I|) Σ_{i∈I} bi  (26)

and has the components aIj - aIJ. □
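The practical content of this lemma can be checked directly: on a random matrix, deleting in one sweep every row whose contribution d(i) exceeds the current score H lowers the score. A small sketch (our code, with invented helper names):

```python
import random

def msr(A, I, J):
    # Mean squared residue H(I, J) of the submatrix on rows I, columns J.
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    return sum((A[i][j] - row[i] - col[j] + avg) ** 2
               for i in I for j in J) / (nI * nJ)

def row_contribution(A, I, J):
    # d(i) = (1/|J|) * sum_j (aij - aiJ - aIj + aIJ)^2, one value per row.
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    return {i: sum((A[i][j] - row[i] - col[j] + avg) ** 2 for j in J) / nJ
            for i in I}

random.seed(7)
A = [[random.uniform(0, 100) for _ in range(8)] for _ in range(30)]
I, J = list(range(30)), list(range(8))
h = msr(A, I, J)
d = row_contribution(A, I, J)
kept = [i for i in I if d[i] <= h]   # delete the whole above-average set at once
print(len(kept) < len(I))            # True: some rows exceed the mean score
print(msr(A, kept, J) < h)           # True: the score can only drop
```

Algorithm 2 below applies the same idea with a looser threshold α · H (α > 1), deleting fewer rows per sweep in exchange for fewer recomputations.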

There is also a similar result for the columns. Lemma 2 acts as a guide on the trade-off between two types of node deletion: deleting one node at a time, and deleting a set of nodes at a time, before the score is recalculated. These two algorithms are listed below.

Algorithm 1 (Single Node Deletion).

Input: A, a matrix of real numbers, and δ ≥ 0, the maximum acceptable mean squared residue score.

Output: AIJ, a δ-bicluster that is a submatrix of A with row set I and column set J, with a score no larger than δ.

Initialization: I and J are initialized to the gene and condition sets in the data and AIJ = A.

Iteration:
1. Compute aiJ for all i ∈ I, aIj for all j ∈ J, aIJ, and H(I, J). If H(I, J) ≤ δ, return AIJ.
2. Compute, for each row, d(i) = (1/|J|) Σ_{j∈J} (aij - aiJ - aIj + aIJ)², and for each column, e(j) = (1/|I|) Σ_{i∈I} (aij - aiJ - aIj + aIJ)².
3. Remove the row or column with the larger of the largest d(i) and the largest e(j), and go to Step 1.
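A compact Python rendering of the single-deletion idea (our sketch, not the authors' C implementation; `msr_parts` is an invented helper) removes, at each step, the one row or column contributing most to the score:

```python
def msr_parts(A, I, J):
    # Row means aiJ, column means aIj, overall mean aIJ, and the score H(I, J).
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    h = sum((A[i][j] - row[i] - col[j] + avg) ** 2
            for i in I for j in J) / (nI * nJ)
    return row, col, avg, h

def single_node_deletion(A, delta):
    I, J = list(range(len(A))), list(range(len(A[0])))
    while True:
        row, col, avg, h = msr_parts(A, I, J)
        if h <= delta:
            return I, J                  # a delta-bicluster
        # h > delta >= 0 implies the submatrix is at least 2 x 2,
        # since any single row or column scores exactly 0.
        d = {i: sum((A[i][j] - row[i] - col[j] + avg) ** 2 for j in J) / len(J)
             for i in I}                 # per-row contribution
        e = {j: sum((A[i][j] - row[i] - col[j] + avg) ** 2 for i in I) / len(I)
             for j in J}                 # per-column contribution
        i_w, j_w = max(I, key=d.get), max(J, key=e.get)
        if d[i_w] >= e[j_w]:
            I.remove(i_w)                # drop the worst row...
        else:
            J.remove(j_w)                # ...or the worst column

import random
random.seed(3)
A = [[random.uniform(0, 100) for _ in range(10)] for _ in range(20)]
for i in range(5):
    for j in range(10):
        A[i][j] = 10 * i + 5 * j         # plant a perfectly coherent block
I, J = single_node_deletion(A, 25.0)
```

By the stopping rule the returned submatrix satisfies H ≤ 25 on this input; whether it recovers the planted block exactly depends on the noise.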


Removal of one of them may still decrease the score, unless the score is already zero.

Step 1 in each iterate requires time in O(nm), and a complete recalculation of all d values in Step 2 is also an O(nm) effort. The selection of the best row and column candidates takes O(log n + log m) time. When the matrix is bi-level, specifying "on" and "off" of the genes, the update of the various variables after the removal of a row takes only O(m) time, and that after the removal of a column only O(n) time. In this case, the algorithm can be made very efficient even for whole-genome expression data, with overall running time in O(nm). But for non-bi-level matrices, updates are more expensive, and it is advisable to use the following Multiple Node Deletion until the matrix is reduced to a manageable size, at which point Single Node Deletion becomes appropriate.

Algorithm 2 (Multiple Node Deletion).

Input: A, a matrix of real numbers, δ ≥ 0, the maximum acceptable mean squared residue score, and α > 1, a threshold for multiple node deletion.

Output: AIJ, a δ-bicluster that is a submatrix of A with row set I and column set J, with a score no larger than δ.

Initialization: I and J are initialized to the gene and condition sets in the data and AIJ = A.

Iteration:
1. Compute aiJ for all i ∈ I, aIj for all j ∈ J, aIJ, and H(I, J). If H(I, J) ≤ δ, return AIJ.
2. Remove the rows i ∈ I with (1/|J|) Σ_{j∈J} (aij - aiJ - aIj + aIJ)² > α · H(I, J).
3. Recompute aIj, aIJ, and H(I, J).
4. Remove the columns j ∈ J with (1/|I|) Σ_{i∈I} (aij - aiJ - aIj + aIJ)² > α · H(I, J).
5. If nothing has been removed in the iterate, switch to Algorithm 1.
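Sketched in Python (our rendering with invented names; the final hand-off to single node deletion is elided here), the multiple-deletion phase removes every offending row, then every offending column, in whole sweeps:

```python
def msr_parts(A, I, J):
    # Row means aiJ, column means aIj, overall mean aIJ, and the score H(I, J).
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    h = sum((A[i][j] - row[i] - col[j] + avg) ** 2
            for i in I for j in J) / (nI * nJ)
    return row, col, avg, h

def multiple_node_deletion(A, delta, alpha=1.2):
    I, J = list(range(len(A))), list(range(len(A[0])))
    while True:
        row, col, avg, h = msr_parts(A, I, J)
        if h <= delta:
            return I, J
        # Sweep 1: drop every row contributing more than alpha * H.
        keep_rows = [i for i in I if sum((A[i][j] - row[i] - col[j] + avg) ** 2
                                         for j in J) / len(J) <= alpha * h]
        changed = len(keep_rows) < len(I)
        I = keep_rows
        row, col, avg, h = msr_parts(A, I, J)
        if h <= delta:
            return I, J
        # Sweep 2: same for columns, against the recomputed score.
        keep_cols = [j for j in J if sum((A[i][j] - row[i] - col[j] + avg) ** 2
                                         for i in I) / len(I) <= alpha * h]
        changed = changed or len(keep_cols) < len(J)
        J = keep_cols
        if not changed:
            return I, J   # here the paper switches to single node deletion

import random
random.seed(5)
A = [[random.uniform(0, 800) for _ in range(17)] for _ in range(200)]
I, J = multiple_node_deletion(A, 300.0)
h0 = msr_parts(A, list(range(200)), list(range(17)))[3]
print(msr_parts(A, I, J)[3] <= max(300.0, h0))   # True: the score never rises
```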


Algorithm 3 (Node Addition).

Input: A, a matrix of real numbers, and I and J signifying a δ-bicluster.

Output: I′ and J′ such that I ⊆ I′ and J ⊆ J′, with the property that H(I′, J′) ≤ H(I, J).

Iteration:
1. Compute aiJ for all i, aIj for all j, aIJ, and H(I, J).
2. Add the columns j ∉ J with

(1/|I|) Σ_{i∈I} (aij - aiJ - aIj + aIJ)² ≤ H(I, J).

3. Recompute aiJ, aIJ, and H(I, J).
4. Add the rows i ∉ I with

(1/|J|) Σ_{j∈J} (aij - aiJ - aIj + aIJ)² ≤ H(I, J).

5. For each row i still not in I, add its inverse if

(1/|J|) Σ_{j∈J} (-aij + aiJ - aIj + aIJ)² ≤ H(I, J).

6. If nothing is added in the iterate, return the final I and J as I′ and J′.
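In Python, the addition phase looks roughly as follows (our sketch with invented names; Step 5's inverted rows are omitted for brevity). On a perfectly additive matrix, a 2 × 2 seed grows back to the full matrix without the score ever rising above zero:

```python
def msr_parts(A, I, J):
    # Row means aiJ, column means aIj, overall mean aIJ, and the score H(I, J).
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    h = sum((A[i][j] - row[i] - col[j] + avg) ** 2
            for i in I for j in J) / (nI * nJ)
    return row, col, avg, h

def node_addition(A, I, J):
    I, J = sorted(I), sorted(J)
    while True:
        row, col, avg, h = msr_parts(A, I, J)
        added = False
        for j in range(len(A[0])):               # Step 2: columns not in J
            if j not in J:
                cj = sum(A[i][j] for i in I) / len(I)
                if sum((A[i][j] - row[i] - cj + avg) ** 2
                       for i in I) / len(I) <= h:
                    J.append(j)
                    added = True
        row, col, avg, h = msr_parts(A, I, J)    # Step 3: recompute
        for i in range(len(A)):                  # Step 4: rows not in I
            if i not in I:
                ri = sum(A[i][j] for j in J) / len(J)
                if sum((A[i][j] - ri - col[j] + avg) ** 2
                       for j in J) / len(J) <= h:
                    I.append(i)
                    added = True
        if not added:                            # Step 6
            return sorted(I), sorted(J)

A = [[i + 2 * j for j in range(5)] for i in range(6)]   # additive, so H = 0
I2, J2 = node_addition(A, [0, 1], [0, 1])
print(I2, J2)   # [0, 1, 2, 3, 4, 5] [0, 1, 2, 3, 4]
```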

Lemma 3 and Theorem 3 guarantee that the addition of rows and columns in Algorithm 3 will not increase the score. However, the resulting δ-bicluster may still not be maximal, for two reasons. The first is that Lemma 3 only gives a sufficient condition for adding rows and columns, not a necessary one. The second reason is that by adding rows and columns, the score may decrease to the point where it is much smaller than δ. Each iterate in Algorithm 3 only adds rows and columns according to the current score, not δ.

Step 5 in the iteration adds inverted rows into the bicluster. These rows form "mirror images" of the rest of the rows in the bicluster and can be interpreted as co-regulated but receiving the opposite regulation. These inverted rows cannot be added to the data matrix at the beginning, because that would make all aIj = 0 and also aIJ = 0.

Algorithm 3 is very efficient. Its time efficiency is comparable with the Multiple Node Deletion phase of Algorithm 2, in the order of O(mn).

Clearly, addition of nodes does not have to take place only after all deletion is done. Sometimes an addition may decrease the score more than any deletion. A single node deletion and addition algorithm based on the Lemmas, and thus more efficient than Algorithm 0, can be set up.

Experimental Methods

The biclustering algorithms were tested on two sets of expression data, both having been clustered using conventional clustering algorithms: the yeast Saccharomyces cerevisiae cell cycle expression data from Cho et al. (1998) and the human B-cell expression data from Alizadeh et al. (2000).

Data Preparation

The yeast data contain 2,884 genes and 17 conditions. These genes were selected according to Tavazoie et al. (1999). The genes were identified by their SGD ORF names (Ball et al., 2000) from http://arep.med.harvard.edu/network_discovery. The relative abundance values (percentage of the mRNA for the gene in all mRNAs) were taken from a table prepared by Aach, Rindone, and Church (2000). (Two of the ORF names did not have corresponding entries in the table, and thus there were 34 null elements.) These numbers were transformed by scaling and logarithm, x → 100 log(10⁵x), and the result was a matrix of integers in the range between 0 and 600. (The transformation does not affect the values 0 and -1 (null element).)

The human data were downloaded from the Web site for supplementary information for the article by Alizadeh et al. (2000). There were 4,026 genes and 96 conditions. The expression levels were reported as log ratios, and after scaling by a factor of 100 we ended up with a matrix of integers in the range between -750 and 650, with 47,639 missing values (12.3% of the matrix elements).

The matrices after the above preparation, along with the biclustering results, can be found at http://arep.med.harvard.edu/biclustering.

Missing Data Replacement

Missing data in the matrices were replaced with random numbers. The expectation was that these random values would not form recognizable patterns and thus would be the leading candidates for removal during node deletion.

The random numbers used to replace missing values in the yeast data were generated to form a uniform distribution between 0 and 800. For the human data, the uniform distribution was between -800 and 800.

Determining Algorithm Parameters

The 30 clusters reported in Tavazoie et al. (1999) were used to determine the δ value in Algorithms 1 and 2. From the discussion before, we know that a completely random submatrix of any size for the value range (0 to 800) has a score of about 53,000. The clusters reported in Tavazoie et al. (1999) have scores in the range between 261 (Cluster 3) and 996 (Cluster 7), with a median of 630 (Clusters 8 and 14). A δ value (300) close to the lower end of this range was used in the experiment, to detect more refined patterns.


rows  columns   low   high   peak      tail
   3        6    10   6870    390     15.5%
   3       17    30   6600    480     6.83%
  10        6   110   4060    800    0.064%
  10       17   240   3470    870    0.002%
  30        6   410   2460    960   < 10^-6
  30       17   480   2310   1040   < 10^-6
 100        6   630   1720   1020   < 10^-6
 100       17   700   1630   1080   < 10^-6

Table 1: Score distributions estimated by randomly selecting one million submatrices for each size combination. The columns give the number of rows, the number of columns, the lowest score, the highest score, the peak score, and the percentage of submatrices with scores below 300.

Submatrices of different sizes were randomly generated (one million times for each size) from the yeast matrix, and the distributions of scores, along with the probability that a submatrix of each size has a score lower than 300, were estimated and listed in Table 1. The δ value used in the experiment with the human data was 1,200, because of the doubling of the range and the quadrupling of the variance in the data, compared to the yeast data.
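The sampling procedure behind Table 1 is easy to reproduce in outline. The sketch below (our code) runs it on a synthetic uniform matrix rather than the yeast data, so the tail probabilities differ from Table 1; it only illustrates the mechanics of estimating how often a random submatrix of a given size scores below δ = 300:

```python
import random

def msr(A, I, J):
    # Mean squared residue H(I, J) of the submatrix on rows I, columns J.
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    return sum((A[i][j] - row[i] - col[j] + avg) ** 2
               for i in I for j in J) / (nI * nJ)

random.seed(0)
n, m = 500, 17
M = [[random.uniform(0, 800) for _ in range(m)] for _ in range(n)]

def tail_below(rows, cols, delta=300.0, trials=2000):
    # Fraction of random rows x cols submatrices scoring below delta.
    hits = 0
    for _ in range(trials):
        I = random.sample(range(n), rows)
        J = random.sample(range(m), cols)
        hits += msr(M, I, J) < delta
    return hits / trials

print(tail_below(10, 6))   # 0.0 on pure-noise data of this size
```

On real expression data the tails are fatter, which is exactly why Table 1 was computed from the yeast matrix itself.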

Algorithm 1 (Single Node Deletion) becomes quite slow when the number of rows in the matrix is in the thousands, which is common in expression data. A proper α must be determined to run the accelerated Algorithm 2 (Multiple Node Deletion). Lemma 2 gives some guidance for the determination of α. Our aim was to find an α as large as possible that still allowed the program to find 100 biclusters in less than 10 minutes. When the number of conditions is less than 100, which was the case for both data sets, Steps 3 and 4 were not used in Algorithm 2, so deletion of conditions started only when Algorithm 1 was called. The α used in both experiments was 1.2.

Node Addition

Algorithm 3 was used after Algorithm 2 and Algorithm 1 (called by Algorithm 2), to add conditions and genes to further reduce the score. Only one iterate of Algorithm 3 was executed for each bicluster, based on the assumption that further iterates would not add much.

    Step 5 of Algorithm 3 was performed, so many biclus-ters contain a "mirror image" of the expression pattern.

These additions were performed using the original data set (without the masking described below).

Masking Discovered Biclusters

Because the algorithms are all deterministic, repeated runs of them will not discover different biclusters, unless discovered ones are masked.

Each time a bicluster was discovered, the elements in the submatrix representing it were replaced by random numbers, exactly like those generated for the missing values (see Missing Data Replacement above). This made it very unlikely that elements covered by existing biclusters would contribute to any future pattern discovery. The masks were not used during node addition.

The steps described above are summarized in Algorithm 4 below.

Algorithm 4 (Finding a Given Number of Biclusters).

Input: A, a matrix of real numbers with possible missing elements, α ≥ 1, a parameter for multiple node deletion, δ ≥ 0, the maximum acceptable mean squared residue score, and n, the number of δ-biclusters to be found.

Output: n δ-biclusters in A.

Initialization: Missing elements in A are replaced with random numbers from a range covering the range of non-null values. A′ is a copy of A.

Iteration for n times:
1. Apply Algorithm 2 on A′, δ, and α. If the row (column) size is small (less than 100), do not perform multiple node deletion on rows (columns). The matrix after multiple node deletion is B.
2. (Step 5 of Algorithm 2) Apply Algorithm 1 on B and δ; the matrix after single node deletion is C.
3. Apply Algorithm 3 on A and C, and the result is the bicluster D.
4. Report D, and replace the elements in A′ that are also in D with random numbers.
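The masking loop can be put together as follows (our condensed sketch with invented names: it uses only single node deletion for the reduction and omits the Algorithm 2 acceleration and the Algorithm 3 addition phase):

```python
import random

def msr_parts(A, I, J):
    # Row means aiJ, column means aIj, overall mean aIJ, and the score H(I, J).
    nI, nJ = len(I), len(J)
    row = {i: sum(A[i][j] for j in J) / nJ for i in I}
    col = {j: sum(A[i][j] for i in I) / nI for j in J}
    avg = sum(row[i] for i in I) / nI
    h = sum((A[i][j] - row[i] - col[j] + avg) ** 2
            for i in I for j in J) / (nI * nJ)
    return row, col, avg, h

def single_node_deletion(A, delta):
    I, J = list(range(len(A))), list(range(len(A[0])))
    while True:
        row, col, avg, h = msr_parts(A, I, J)
        if h <= delta:
            return I, J
        d = {i: sum((A[i][j] - row[i] - col[j] + avg) ** 2 for j in J) / len(J)
             for i in I}
        e = {j: sum((A[i][j] - row[i] - col[j] + avg) ** 2 for i in I) / len(I)
             for j in J}
        i_w, j_w = max(I, key=d.get), max(J, key=e.get)
        if d[i_w] >= e[j_w]:
            I.remove(i_w)
        else:
            J.remove(j_w)

def find_biclusters(A, count, delta, lo, hi):
    # Work on a copy A2; mask each reported bicluster with random values
    # from [lo, hi] so later iterations must find different patterns.
    A2 = [r[:] for r in A]
    found = []
    for _ in range(count):
        I, J = single_node_deletion(A2, delta)
        found.append((I, J))
        for i in I:
            for j in J:
                A2[i][j] = random.uniform(lo, hi)
    return found

random.seed(2)
A = [[random.uniform(0, 800) for _ in range(17)] for _ in range(60)]
biclusters = find_biclusters(A, 3, 300.0, 0, 800)
print(len(biclusters))   # 3
```

Note that, as in the paper, the masked copy is used only for discovery; each reported (I, J) still indexes the original matrix A.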

Implementation and Display

These algorithms were implemented in C and run on a Sun Ultra 10 workstation. With the parameters specified above, 100 biclusters were discovered from each data set in less than 10 minutes. Plots were generated for each bicluster, showing the expression levels of the genes in it under the conditions in the bicluster. 30 biclusters for the yeast data are plotted in Figures 1, 2, 3, and 4, and 24 for the human data in Figures 5 and 6. In the captions, "Bicluster" denotes a bicluster discovered using our algorithm and "Cluster" denotes a cluster discovered in Tavazoie et al. (1999). Detailed descriptions of these biclusters can be found at http://arep.med.harvard.edu/biclustering.

Results

From visual inspection of the plots one can see that this biclustering approach works as well as conventional clustering methods when there are clear patterns over all attributes (conditions when the genes are clustered, or genes when conditions are clustered). The δ parameter gives a powerful tool to fine-tune the similarity requirements. This explains the correspondence between one or more biclusters and each of the better clusters discovered in Tavazoie et al. (1999). These include the clusters associated with the highest-scoring motifs (Clusters 2, 4, 7, 8, 14, and 30 of Tavazoie et al.). For other clusters from Tavazoie et al., there was no clear correspondence to our biclusters. Instead, Biclusters 46 and 54 represent the common features under some of the conditions of these lesser clusters.

Figure 1: The first bicluster (Bicluster 0) discovered by Algorithm 4 from the yeast data and the flattest one (Bicluster 47, with a row variance of 39.33) are examples of the "flat" biclusters the algorithm has to find and mask before more "interesting" ones may emerge. The bicluster with the highest row variance (4,162) is Bicluster 93 in Figure 3. The quantization effect visible at the lower ends of the expression levels was due to a lack of significant digits both before and after the logarithm.

Figure 2: 12 biclusters with numbers indicating the order in which they were discovered using the algorithms. These biclusters are clearly related to Cluster 2 in Tavazoie et al. (1999), which has a score of 757. They subdivide Cluster 2's profile into similar but different ones. These biclusters have scores less than 300. They also include 10 genes from Cluster 14 of Tavazoie et al. (1999) and 6 from other clusters. All biclusters plotted here except Bicluster 95 contain all 17 conditions, indicating that these conditions form a cluster well, with respect to the genes included here.

Figure 3: 10 biclusters discovered in the order labeled. Biclusters 17, 67, 71, 80, and 99 (in the left column) contain genes in Clusters 4, 8, and 12 of Tavazoie et al. (1999), while Biclusters 57, 63, 77, 84, and 94 represent Cluster 7.

Figure 4: 6 biclusters discovered in the order labeled. Bicluster 36 corresponds to Cluster 20 of Tavazoie et al. (1999). Bicluster 93 corresponds to Cluster 9. Biclusters 52 and 90 correspond to Clusters 14 and 30. Bicluster 46 corresponds to Clusters 1, 3, 11, 13, 19, 21, 25, and 29, and Bicluster 54 corresponds to Clusters 5, 15, 24, and 28. Notice that some biclusters have fewer than half of the 17 conditions and thus represent shared patterns in many clusters discovered using similarity based on all the conditions.

Coverage of the Biclusters

In the yeast data experiment, the 100 biclusters covered 2,801, or 97.12%, of the genes, 100% of the conditions, and 81.47% of the cells in the matrix.

The first 100 biclusters from the human data covered 3,687, or 91.58%, of the genes, 100% of the conditions, and 36.81% of the cells in the data matrix.

Sub-Categorization of Tavazoie's Cluster 2

Figure 2 shows 12 biclusters containing mostly genes classified to Cluster 2 in Tavazoie et al. Each of these biclusters clearly represents a variation on the common theme for Cluster 2. For example, Bicluster 87 contains genes with three sharp peaks in expression (CLB6 and SPT21). Genes in Bicluster 62 (ERP3, LPP1, and PLM2) also showed three peaks, but the third of these is rather flat. Bicluster 56 shows a clear-cut double-peak pattern with DNA replication genes CDC9, CDC21, POL12, POL30, RFA1, RFA2, among others. Bicluster 66 contains two-peak genes with an even sharper image (CDC45, MSH6, RAD27, SWE1, and PDS5). On the other hand, Bicluster 14 contains those genes with barely recognizable double peaks.

Broad Strokes and Fine Drawings

Figures 5 and 6 show various biclusters discovered in the human lymphoma expression data. Some of them represent a few genes closely following each other through almost all the conditions. Others show large numbers of genes displaying a broad trend and its mirror image. These "broad strokes" often involve smaller subsets of the conditions, and genes depicted in some of the "fine drawings" may be added to these broad trends during the node addition phase of the algorithm. These biclusters clearly say a lot about the regulatory mechanism and also about the classification of conditions.

Comparison with Alizadeh's Clusters

We did a comparison of the first 100 biclusters discovered in the human lymphoma data with the clusters discovered in Alizadeh et al. (2000). Hierarchical clustering was used in Alizadeh et al. on both the genes and the conditions. We used the root division in each hierarchy in our comparison. We call the two clusters generated by the root division in a hierarchy the primary clusters.

Figure 5: 12 biclusters discovered in the order labeled from the human expression data. Scores were 1,200 or lower. The numbers of genes and conditions in each are reported in the format (bicluster label, number of genes, number of conditions) as follows: (12, 4, 96), (19, 103, 25), (22, 10, 57), (39, 9, 51), (44, 10, 29), (45, 127, 13), (49, 2, 96), (52, 3, 96), (53, 11, 25), (54, 13, 21), (75, 25, 12), (83, 2, 96).

Figure 6: Another 12 biclusters discovered in the order labeled from the human expression data. The numbers of genes and conditions in each bicluster (whose label is the first figure in the triple) are as follows: (7, 34, 48), (14, 15, 46), (25, 57, 24), (27, 23, 30), (31, 158, 17), (35, 102, 13), (37, 59, 18), (59, 8, 19), (60, 18, 11), (67, 102, 20), (79, 18, 11), (86, 18, 11). Mirror images can be seen in most of these biclusters.

Out of the 100 biclusters, only 10 have conditions exclusively from one or the other primary cluster on conditions. Bicluster 19 in Figure 5 is heavily biased towards one primary condition cluster, while Bicluster 67 in Figure 6 is heavily biased towards the other.

A similar proportion of the biclusters have genes exclusively from one or the other primary cluster on genes. The tendency is that the higher the row variance (5) and the number of columns (conditions) involved, the more likely the genes in the bicluster are to come from one primary cluster. We define mix as the percentage of genes from the minority primary cluster in the bicluster, and plot mix versus row variance in Figure 7. Each digit in Figure 7 represents a bicluster, with the digit indicating the number of conditions in the bicluster.

All the biclusters shown in Figures 5 and 6 have row variance (square-rooted) greater than 100. Many of them have zero percent mix (Biclusters 7, 12, 14, 22, 39, 44, 49, 52, and 53 from one primary cluster and Biclusters 25, 54, 59, and 83 from the other). This demonstrates the usefulness of the row variance as a means of separating interesting biclusters from trivial ones. Some biclusters have high row variances but also high mix scores (those in the upper right quadrant of the plot). These invariably involve a narrower view of the conditions, with the digit "1" indicating that the number of conditions involved is below 20%. Genes in these biclusters are distributed on both sides of the root division of hierarchical clustering. These biclusters show the complementary role that biclustering plays to one-dimensional clustering. Biclustering allows us to focus on the right subsets of conditions to see apparent co-regulatory patterns not seen with the global scope.

For example, Bicluster 60 suggests that under the 11 conditions, mostly for chronic lymphocytic leukemia (CLL), FMR2 and the interferon-γ receptor α chain behave against their stereotype and are down-regulated. As another example, Bicluster 35 suggests that under the 13 conditions, CD49B and SIT appear similar to genes classified by hierarchical clustering into the other primary cluster.

Discussion

We have introduced a new paradigm, biclustering, to gene expression data analysis. The concept itself can be traced back to the 1960s and 1970s, although it has rarely been used or even studied. This paradigm is relevant to gene expression data analysis because of the complexity of gene regulation and expression, and the sometimes low quality of the gathered raw data.

Biclustering is performed on the expression matrix, which can be viewed as a weighted bipartite graph. The concept of a bicluster is a natural generalization of the concept of a biclique in graph theory. There are one-dimensional clustering methods based on graphs constructed from similarity scores between genes, for example, the hierarchical clustering method in Alizadeh et al. (2000) and the finding of highly connected subgraphs in Hartuv et al. (1999). A major difference here is that biclustering does not start from, or require the computation of, overall similarity between genes.

Figure 7: A plot of the first 100 biclusters in the human lymphoma data. Three measurements are involved. The horizontal axis is the square root of the row variance, defined in (5). The vertical axis is the mix, or the percentage of genes misclassified across the root division generated by hierarchical clustering in Alizadeh et al. (2000). We assumed that genes forming mirror-image expression patterns are naturally from the other side of the root division. The digits used to represent the biclusters in the plot also indicate the sizes of the condition sets in the biclusters: the digit n indicates that the number of conditions is between 10n and 10(n + 1) percent of the total number of conditions.
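To make the submatrix score concrete, the mean squared residue H(I, J) used throughout the paper can be sketched as follows (Python/NumPy; the function name is ours). A perfectly "additive" submatrix, where each entry is a row effect plus a column effect, scores exactly zero:

```python
import numpy as np

def mean_squared_residue(A, rows, cols):
    """Mean squared residue H(I, J) of the submatrix (rows, cols):
    H = mean over (i, j) of (a_ij - a_iJ - a_Ij + a_IJ)^2,
    where a_iJ, a_Ij, a_IJ are the row, column, and overall means."""
    sub = A[np.ix_(rows, cols)]
    a_iJ = sub.mean(axis=1, keepdims=True)   # row means
    a_Ij = sub.mean(axis=0, keepdims=True)   # column means
    a_IJ = sub.mean()                        # overall mean
    residue = sub - a_iJ - a_Ij + a_IJ
    return (residue ** 2).mean()
```

In graph terms, a submatrix with H(I, J) = 0 plays the role that a biclique plays in an unweighted bipartite graph: every selected row behaves consistently over every selected column.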

The relation between biclustering and clustering is similar to that between the instance-based paradigm and the model-based paradigm in supervised learning. Instance-based learning (for example, nearest-neighbor classification) views data locally, while model-based learning (for example, feed-forward neural networks) finds the globally optimal fitting models.

Biclustering has several obvious advantages over clustering. First, biclustering automatically selects genes and conditions with more coherent measurements and drops those representing random noise. This provides a method for dealing with missing data and corrupted measurements.

Secondly, biclustering groups items based on a similarity measure that depends on a context, which is best defined as a subset of the attributes. It discovers not only the grouping, but the context as well. To some extent, these two become inseparable and interchangeable, which is a major difference between biclustering and clustering rows after clustering columns.

Most expression data cover more or less complete sets of genes but very small portions of all the possible conditions. Any similarity measure between genes based on the available conditions is therefore context-dependent anyway. Clustering genes based on such a measure is no more representative than biclustering.

Thirdly, biclustering allows rows and columns to be included in multiple biclusters, and thus allows one gene or one condition to be identified by more than one functional category. This added flexibility correctly reflects the reality of genes with multiple functions and of overlapping factors in tissue samples and experimental conditions.

By showing the NP-hardness of the problem, we tried to justify our efficient but greedy algorithms. The nature of NP-hardness, however, implies that there may be sizable biclusters with good scores that evade the search by any efficient algorithm. As with most efficient conventional clustering algorithms, one can say that the best biclusters can be found in most cases, but not that they will be found in all cases.
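The greedy idea can be condensed into a short sketch: repeatedly delete the row or column contributing the largest mean squared residue until the score drops below a threshold δ. This is a simplification of the paper's single-node-deletion step only (the full method also uses multiple-node deletion and node addition), with hypothetical names, assuming a dense NumPy matrix:

```python
import numpy as np

def single_node_deletion(A, delta):
    """Greedy sketch: remove, one at a time, the row or column with the
    largest mean squared residue until H(I, J) <= delta.
    Returns the surviving row and column index lists."""
    rows = list(range(A.shape[0]))
    cols = list(range(A.shape[1]))
    while True:
        sub = A[np.ix_(rows, cols)]
        a_iJ = sub.mean(axis=1, keepdims=True)   # row means
        a_Ij = sub.mean(axis=0, keepdims=True)   # column means
        a_IJ = sub.mean()                        # overall mean
        res2 = (sub - a_iJ - a_Ij + a_IJ) ** 2
        H = res2.mean()
        if H <= delta or min(len(rows), len(cols)) <= 1:
            return rows, cols
        d_rows = res2.mean(axis=1)   # mean residue of each remaining row
        d_cols = res2.mean(axis=0)   # mean residue of each remaining column
        if d_rows.max() >= d_cols.max():
            rows.pop(int(d_rows.argmax()))
        else:
            cols.pop(int(d_cols.argmax()))
```

Because each pass deletes exactly one row or column, the loop terminates after at most n + m - 2 iterations; the greediness is what makes it efficient, and also what allows some good biclusters to escape.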

Acknowledgments

This research was conducted at the Lipper Center for Computational Genetics at the Harvard Medical School, while the first author was on academic leave from the University of Cincinnati.

References

Aach, J., Rindone, W., and Church, G.M. 2000. Systematic management and analysis of yeast gene expression data. Genome Research, in press.

Alizadeh, A.A. et al. 2000. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature 403:503-510.

Ball, C.A. et al. 2000. Integrating functional genomic information into the Saccharomyces Genome Database. Nucleic Acids Res. 28:77-80.

Cho, R.J. et al. 1998. A genome-wide transcriptional analysis of the mitotic cell cycle. Mol. Cell 2:65-73.

Garey, M.R., and Johnson, D.S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco: Freeman.

Hartigan, J.A. 1972. Direct clustering of a data matrix. JASA 67:123-129.

Hartuv, E. et al. 1999. An algorithm for clustering cDNAs for gene expression analysis. RECOMB '99, 188-197.

Johnson, D.S. 1987. The NP-completeness column: an ongoing guide. J. Algorithms 8:438-448.

Mirkin, B. 1996. Mathematical Classification and Clustering. Dordrecht: Kluwer.

Morgan, J.N. and Sonquist, J.A. 1963. Problems in the analysis of survey data, and a proposal. JASA 58:415-434.

Nau, D.S., Markowsky, G., Woodbury, M.A., and Amos, D.B. 1978. A mathematical analysis of human leukocyte antigen serology. Math. Biosci. 40:243-270.

Orlin, J. 1977. Containment in graph theory: covering graphs with cliques. Nederl. Akad. Wetensch. Indag. Math. 39:211-218.

Tavazoie, S., Hughes, J.D., Campbell, M.J., Cho, R.J., and Church, G.M. 1999. Systematic determination of genetic network architecture. Nature Genetics 22:281-285.

Yannakakis, M. 1981. Node deletion problems on bipartite graphs. SIAM J. Comput. 10:310-327.