
Page 1

CSE 5243 INTRO. TO DATA MINING

Cluster Analysis: Basic Concepts and Methods
Yu Su, CSE@The Ohio State University

Slides adapted from UIUC CS412 by Prof. Jiawei Han and OSU CSE5243 by Prof. Huan Sun

Page 2

Chapter 10. Cluster Analysis: Basic Concepts and Methods

- Cluster Analysis: An Introduction

- Partitioning Methods

- Hierarchical Methods

- Density- and Grid-Based Methods

- Evaluation of Clustering

- Summary

Page 3

Introduction

- Suppose you are a marketing manager and you have (anonymized) information about users, such as demographics and purchase history, stored as feature vectors. What do you do next toward an effective marketing strategy?

Page 4

What is Cluster Analysis?

- What is a cluster?
  - A cluster is a collection of data objects which are
    - Similar (or related) to one another within the same group (i.e., cluster)
    - Dissimilar (or unrelated) to the objects in other groups (i.e., clusters)

Page 5

What is Cluster Analysis?

- What is a cluster?
  - A cluster is a collection of data objects which are
    - Similar (or related) to one another within the same group (i.e., cluster)
    - Dissimilar (or unrelated) to the objects in other groups (i.e., clusters)
- Cluster analysis (or clustering, data segmentation, …)
  - Given a set of data points, partition them into a set of groups (i.e., clusters), such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

Page 6

What is Cluster Analysis?

- What is a cluster?
  - A cluster is a collection of data objects which are
    - Similar (or related) to one another within the same group (i.e., cluster)
    - Dissimilar (or unrelated) to the objects in other groups (i.e., clusters)
- Cluster analysis (or clustering, data segmentation, …)
  - Given a set of data points, partition them into a set of groups (i.e., clusters), such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

[Figure: intra-cluster distances are minimized; inter-cluster distances are maximized]

Page 7

What is Cluster Analysis?

- What is a cluster?
  - A cluster is a collection of data objects which are
    - Similar (or related) to one another within the same group (i.e., cluster)
    - Dissimilar (or unrelated) to the objects in other groups (i.e., clusters)
- Cluster analysis (or clustering, data segmentation, …)
  - Given a set of data points, partition them into a set of groups (i.e., clusters), such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups
- Cluster analysis is unsupervised learning (i.e., no predefined classes)
  - This contrasts with classification, which is supervised learning
- Typical ways to use/apply cluster analysis
  - As a stand-alone tool to get insight into the data distribution, or
  - As a preprocessing (or intermediate) step for other algorithms

Page 8

What is Good Clustering?

- A good clustering method will produce high-quality clusters, which should have
  - High intra-class similarity: cohesive within clusters
  - Low inter-class similarity: distinctive between clusters

Page 9

What is Good Clustering?

- A good clustering method will produce high-quality clusters, which should have
  - High intra-class similarity: cohesive within clusters
  - Low inter-class similarity: distinctive between clusters
- Quality function
  - There is usually a separate “quality” function that measures the “goodness” of a cluster
  - It is hard to define “similar enough” or “good enough”
    - The answer is typically highly subjective
- There exist many similarity measures and/or functions for different applications
- The similarity measure is critical for cluster analysis

Page 10

Cluster Analysis: Applications

- Understanding
  - Group related documents for browsing (e.g., “Sports” vs. “Science”), group genes and proteins that have similar functionality, or group stocks with similar price fluctuations
- Summarization
  - Reduce the size of large data sets

[Figure: clustering precipitation in Australia]

Page 11

Notion of a Cluster can be Ambiguous

How many clusters?

Page 12

Notion of a Cluster can be Ambiguous

How many clusters?

[Figure: the same points grouped as two clusters, four clusters, or six clusters]

Page 13

Types of Clusterings

- A clustering is a set of clusters

- Important distinction between partitional and hierarchical sets of clusters

- Partitional Clustering
  - A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

Page 14

Partitional Clustering

[Figure: original points and a partitional clustering of them]

Page 15

Types of Clusterings

- A clustering is a set of clusters

- Important distinction between hierarchical and partitional sets of clusters

- Partitional Clustering
  - A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset

- Hierarchical clustering
  - A set of nested clusters organized as a hierarchical tree

Page 16

Hierarchical Clustering

[Figure: traditional and non-traditional hierarchical clusterings of points p1–p4, with the corresponding traditional and non-traditional dendrograms]

Page 17

Types of Clusters: Well-Separated

- Well-Separated Clusters:
  - A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster.

[Figure: 3 well-separated clusters]

Page 18

Types of Clusters: Center-Based

- Center-based
  - A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of its cluster than to the center of any other cluster
  - The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster

[Figure: 4 center-based clusters]

Page 19

Types of Clusters: Contiguity-Based

- Contiguous Cluster (Nearest neighbor or Transitive)
  - A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.

[Figure: 8 contiguous clusters]

Page 20

Types of Clusters: Density-Based

- Density-based
  - A cluster is a dense region of points that is separated from other regions of high density by low-density regions.
  - Used when the clusters are irregular or intertwined, and when noise and outliers are present.

[Figure: 6 density-based clusters]

Page 21

Characteristics of the Input Data Are Important

- Type of proximity or density measure
  - This is a derived measure, but central to clustering
- Sparseness
  - Dictates the type of similarity
  - Adds to efficiency
- Attribute type
  - Dictates the type of similarity
- Type of data
  - Dictates the type of similarity
  - Other characteristics, e.g., autocorrelation
- Dimensionality
- Noise and outliers
- Type of distribution

Page 22

Clustering Algorithms

- K-means and its variants

- Hierarchical clustering

- Density-based clustering

Page 23

K-means Clustering

- Partitional clustering approach
- Each cluster is associated with a centroid (center point)
- Each point is assigned to the cluster with the closest centroid
- The number of clusters, K, must be specified
- The basic algorithm is very simple

Page 24

K-means Clustering

- Partitional clustering approach
- Each cluster is associated with a centroid (center point)
- Each point is assigned to the cluster with the closest centroid
- The number of clusters, K, must be specified
- The basic algorithm is very simple

[Annotation: the initial centroids are often chosen randomly]

Page 25

K-means Clustering

- Partitional clustering approach
- Each cluster is associated with a centroid (center point)
- Each point is assigned to the cluster with the closest centroid
- The number of clusters, K, must be specified
- The basic algorithm is very simple

[Annotations: the initial centroids are often chosen randomly; closeness is measured by Euclidean distance, cosine similarity, etc.]

Page 26

K-means Clustering

- Partitional clustering approach
- Each cluster is associated with a centroid (center point)
- Each point is assigned to the cluster with the closest centroid
- The number of clusters, K, must be specified
- The basic algorithm is very simple

[Annotations: the initial centroids are often chosen randomly; the centroid is typically the mean of the points in the cluster]

Page 27

K-means Clustering – Details

- Initial centroids are often chosen randomly.
  - The clusters produced vary from one run to another.
- The centroid is (typically) the mean of the points in the cluster.
- ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.
- K-means will converge for the common similarity measures mentioned above.
- Most of the convergence happens in the first few iterations.
  - Often the stopping condition is changed to ‘until relatively few points change clusters’.
- Complexity is O( n * K * I * d )
  - n = number of points, K = number of clusters, I = number of iterations, d = number of attributes
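To make these details concrete, here is a minimal NumPy sketch of the loop described on this slide (random initialization, Euclidean assignment, mean update, stop when no point changes clusters). The function and parameter names (kmeans, max_iter, seed) are illustrative, not from the slides:

```python
import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # Select K points as the initial centroids (often chosen randomly).
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # Assign each point to the cluster with the closest centroid
        # ('closeness' here is Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Stop when no point changes clusters (a strict version of
        # 'until relatively few points change clusters').
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Re-compute each centroid as the mean of the points in its cluster.
        for k in range(K):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return labels, centroids
```

For example, labels, centroids = kmeans(X, K=3) on an (n, d) array X returns one cluster label per point plus the final centroids; each iteration does the O(n * K * d) distance computation that dominates the complexity quoted above.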

Page 28

Example: K-Means Clustering

Select K points as initial centroids
Repeat
  - Form K clusters by assigning each point to its closest centroid
  - Re-compute the centroid (i.e., mean point) of each cluster
Until the convergence criterion is satisfied

[Figure: execution of the K-Means clustering algorithm — the original data points, with K = 2 centroids selected at random]

Page 29

Example: K-Means Clustering

Select K points as initial centroids
Repeat
  - Form K clusters by assigning each point to its closest centroid
  - Re-compute the centroid (i.e., mean point) of each cluster
Until the convergence criterion is satisfied

[Figure: execution of the K-Means clustering algorithm — assign points to clusters]

Page 30

Example: K-Means Clustering

Select K points as initial centroids
Repeat
  - Form K clusters by assigning each point to its closest centroid
  - Re-compute the centroid (i.e., mean point) of each cluster
Until the convergence criterion is satisfied

[Figure: execution of the K-Means clustering algorithm — assign points to clusters, then recompute cluster centers]

Page 31

Example: K-Means Clustering

Select K points as initial centroids
Repeat
  - Form K clusters by assigning each point to its closest centroid
  - Re-compute the centroid (i.e., mean point) of each cluster
Until the convergence criterion is satisfied

[Figure: execution of the K-Means clustering algorithm — assign points to clusters, recompute cluster centers, then redo the point assignment]

Page 32

Example: K-Means Clustering

Select K points as initial centroids
Repeat
  - Form K clusters by assigning each point to its closest centroid
  - Re-compute the centroid (i.e., mean point) of each cluster
Until the convergence criterion is satisfied

[Figure: execution of the K-Means clustering algorithm — assign points to clusters, recompute cluster centers, redo the point assignment, and continue to the next iteration]

Page 33

K-means Example – Animated

Page 34

Two different K-means Clusterings

[Figure: the same data clustered two ways: a sub-optimal clustering and the optimal clustering]

Page 35

Importance of Choosing Initial Centroids (1)

[Figure: iterations 1–6 of K-means converging to the optimal clustering from one choice of initial centroids]

Page 36

Importance of Choosing Initial Centroids (2)

[Figure: iterations 1–5 of K-means converging to a sub-optimal clustering from another choice of initial centroids]

Page 37

Evaluating K-means Clusters

- Why is one clustering result better than another?

Page 38

Evaluating K-means Clusters

- The most common measure is Sum of Squared Error (SSE)
  - For each point, the error is the distance to the nearest cluster center
  - To get SSE, we square these errors and sum them:

    SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)

  - x is a data point in cluster C_i and m_i is the representative point for cluster C_i

Page 39

Evaluating K-means Clusters

- The most common measure is Sum of Squared Error (SSE)
  - For each point, the error is the distance to the nearest cluster center
  - To get SSE, we square these errors and sum them; using Euclidean distance:

    SSE(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - c_k \rVert^2

  - x_i is a data point in cluster C_k and c_k is the representative point for cluster C_k
    - One can show that c_k corresponds to the center (mean) of the cluster

Page 40

Evaluating K-means Clusters

- The most common measure is Sum of Squared Error (SSE)
  - For each point, the error is the distance to the nearest cluster center
  - To get SSE, we square these errors and sum them:

    SSE(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - c_k \rVert^2

  - x_i is a data point in cluster C_k and c_k is the representative point for cluster C_k
    - One can show that c_k corresponds to the center (mean) of the cluster
  - Given two clusterings, we can choose the one with the smallest error
  - One easy way to reduce SSE is to increase K, the number of clusters
    - Even so, a good clustering with smaller K can have a lower SSE than a poor clustering with higher K
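As a sketch, SSE is straightforward to compute from the points, their labels, and the centroids (NumPy; the function name sse is illustrative):

```python
import numpy as np

def sse(X, labels, centroids):
    """Sum of Squared Errors: for each point, the squared Euclidean
    distance to its own cluster's centroid, summed over all points."""
    X = np.asarray(X, dtype=float)
    return sum(
        np.sum((X[labels == k] - c) ** 2)
        for k, c in enumerate(centroids)
    )
```

Because increasing K mechanically drives SSE down, SSE comparisons of this kind are only meaningful between clusterings with the same K.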

Page 41

K-Means: From an Optimization Angle

- Partitioning method: discovering the groupings in the data by optimizing a specific objective function and iteratively improving the quality of partitions

Page 42

K-Means: From an Optimization Angle

- Partitioning method: discovering the groupings in the data by optimizing a specific objective function and iteratively improving the quality of partitions
- K-partitioning method: partitioning a dataset D of n objects into a set of K clusters so that an objective function is optimized (e.g., the sum of squared distances is minimized, where c_k is the "center" of cluster C_k)
  - A typical objective function: Sum of Squared Errors (SSE)

    SSE(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - c_k \rVert^2

  - The optimization problem: find the clustering C^* = \arg\min_C SSE(C)

Page 43

Partitioning Algorithms: From an Optimization Angle

- Partitioning method: discovering the groupings in the data by optimizing a specific objective function and iteratively improving the quality of partitions
- K-partitioning method: partitioning a dataset D of n objects into a set of K clusters so that an objective function is optimized (e.g., the sum of squared distances is minimized, where c_k is the "center" of cluster C_k)
  - A typical objective function: Sum of Squared Errors (SSE)

    SSE(C) = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - c_k \rVert^2

- Problem definition: given K, find a partition of K clusters that optimizes the chosen partitioning criterion
  - Global optimum: requires exhaustively enumerating all partitions
  - Heuristic methods (i.e., greedy algorithms): K-Means, K-Medians, K-Medoids, etc.

Page 44

Problems with Selecting Initial Points

- If there are K ‘real’ clusters, the chance of selecting one centroid from each cluster is small.
  - The chance is relatively small when K is large
  - If the clusters are the same size, n, then

    P = \frac{\text{ways to select one centroid from each cluster}}{\text{ways to select } K \text{ centroids}} = \frac{K! \, n^K}{(Kn)^K} = \frac{K!}{K^K}

  - For example, if K = 10, then probability = 10!/10^10 = 0.00036
  - Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t

Page 45

Two different K-means Clusterings

[Figure: from the same original points, two runs with different initial centroids (each shown from its own Iteration 1) lead to the optimal clustering and a sub-optimal clustering, respectively]

Page 46

Solutions to the Initial Centroids Problem

- Multiple runs (see the sketch below)
  - Helps, but probability is not on your side
- Sample the data to determine initial centroids
- Select more than K initial centroids and then choose among these initial centroids
  - e.g., select the most widely separated ones
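A hedged sketch of the "multiple runs" remedy, reusing the kmeans() and sse() sketches from the earlier slides (an assumption; they are not part of the original deck):

```python
# Multiple random restarts: keep the run with the smallest SSE.
def kmeans_multi(X, K, n_runs=10):
    best = None
    for seed in range(n_runs):
        labels, cents = kmeans(X, K, seed=seed)   # sketch from Page 27
        err = sse(X, labels, cents)               # sketch from Page 40
        if best is None or err < best[0]:
            best = (err, labels, cents)
    return best  # (sse, labels, centroids) of the best run
```

Since all runs use the same K, comparing them by SSE is legitimate; the slide's caveat stands, though: with large K, even many restarts may never draw one centroid per real cluster.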

Page 47

Pre-processing and Post-processing

- Pre-processing
  - Normalize the data
  - Eliminate outliers
- Post-processing
  - Eliminate small clusters that may represent outliers
  - Split ‘loose’ clusters, i.e., clusters with relatively high SSE
  - Merge clusters that are ‘close’ and that have relatively low SSE
  - These steps can also be used during the clustering process
    - e.g., ISODATA

Page 48

K-Means++

- Original proposal (MacQueen ’67): select the K seeds randomly
  - Need to run the algorithm multiple times using different seeds
- Many methods have been proposed for better initialization of the K seeds
- K-Means++ (Arthur & Vassilvitskii ’07):
  - The first centroid is selected at random
  - The next centroid selected is the one farthest from the currently selected centroids (selection is based on a weighted probability score)
  - The selection continues until K centroids are obtained

Page 49

K-Means++

- The exact initialization algorithm is:
  1. Choose one center uniformly at random from among all data points.
  2. For each data point x, compute D(x), the distance between x and the nearest center that has already been chosen.
  3. Choose one new data point at random as a new center, using a weighted probability distribution where a point x is chosen with probability proportional to D(x)^2.
  4. Repeat steps 2 and 3 until K centers have been chosen.
  5. Now that the initial centers are chosen, proceed with standard K-Means clustering (a sketch follows the list).
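A minimal NumPy sketch of exactly these five seeding steps (the function name kmeans_pp_init is illustrative); the returned centers would then seed the standard K-means loop from the earlier sketch:

```python
import numpy as np

def kmeans_pp_init(X, K, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # Step 1: choose one center uniformly at random.
    centers = [X[rng.integers(len(X))]]
    while len(centers) < K:
        # Step 2: D(x)^2 = squared distance from each point to its
        # nearest already-chosen center.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        # Step 3: sample the next center with probability proportional
        # to D(x)^2.
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    # Steps 4-5: loop until K centers exist, then run standard K-means.
    return np.array(centers)
```

Points already chosen have D(x) = 0 and thus zero probability of being re-drawn, which is what makes the weighted sampling spread the seeds out.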

Page 50

Handling Outliers: From K-Means to K-Medoids

- The K-Means algorithm is sensitive to outliers, since an object with an extremely large value may substantially distort the distribution of the data
- K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in a cluster
- The K-Medoids clustering algorithm (a sketch follows the list):
  - Select K points as the initial representative objects (i.e., as the initial K medoids)
  - Repeat
    - Assign each point to the cluster with the closest medoid
    - Randomly select a non-representative object o_i
    - Compute the total cost S of swapping a medoid m with o_i
    - If S < 0, swap m with o_i to form the new set of medoids
  - Until the convergence criterion is satisfied
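A randomized PAM-style sketch of this loop, taking the cost of a medoid set to be the total Euclidean distance from each point to its nearest medoid (names such as total_cost, k_medoids, and n_trials are illustrative):

```python
import numpy as np

def total_cost(X, medoid_idx):
    # Cost = sum of distances from each point to its nearest medoid.
    d = np.linalg.norm(X[:, None, :] - X[medoid_idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def k_medoids(X, K, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    medoids = list(rng.choice(len(X), size=K, replace=False))
    cost = total_cost(X, medoids)
    for _ in range(n_trials):
        # Randomly pick a current medoid m and a non-medoid o_i.
        m = rng.integers(K)
        o = rng.integers(len(X))
        if o in medoids:
            continue
        candidate = medoids.copy()
        candidate[m] = o
        new_cost = total_cost(X, candidate)
        # S = new_cost - cost; swap only if S < 0 (quality improves).
        if new_cost < cost:
            medoids, cost = candidate, new_cost
    return medoids  # indices of the K representative objects
```

Because the representatives are always actual data objects, a single extreme outlier can at worst become a medoid of its own, rather than dragging a mean far from the rest of the cluster.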

Page 51

PAM: A Typical K-Medoids Algorithm

Select the initial K medoids randomly
Repeat
  Object re-assignment
  Swap medoid m with o_i if it improves the clustering quality
Until the convergence criterion is satisfied

[Figure: PAM with K = 2 on a small 2-D dataset: arbitrarily choose K objects as the initial medoids; assign each remaining object to the nearest medoid; randomly select a non-medoid object O_random; compute the total cost of swapping; swap O and O_random if the quality is improved]

Page 52

K-Medians: Handling Outliers by Computing Medians

- Medians are less sensitive to outliers than means
  - Think of the median salary vs. the mean salary of a large firm when a few top executives are added!
- K-Medians: instead of taking the mean value of the objects in a cluster as a reference point, medians are used (corresponding to the L1 norm as the distance measure)
- The criterion function for the K-Medians algorithm:

  S = \sum_{k=1}^{K} \sum_{x_i \in C_k} \sum_{j} \lvert x_{ij} - \mathrm{med}_{kj} \rvert

- The K-Medians clustering algorithm (a sketch follows the list):
  - Select K points as the initial representative objects (i.e., as the initial K medians)
  - Repeat
    - Assign every point to its nearest median
    - Re-compute each median using the median of each individual feature
  - Until the convergence criterion is satisfied
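A minimal NumPy sketch of this loop (L1 assignment, per-feature median update); the names k_medians and max_iter are illustrative:

```python
import numpy as np

def k_medians(X, K, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    meds = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # Assign every point to its nearest median under the L1 norm.
        d = np.abs(X[:, None, :] - meds[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # Re-compute each median feature-by-feature.
        new_meds = np.array([
            np.median(X[labels == k], axis=0) if np.any(labels == k) else meds[k]
            for k in range(K)
        ])
        if np.allclose(new_meds, meds):
            break
        meds = new_meds
    return labels, meds
```

The per-feature median is exactly the minimizer of the L1 criterion S above, just as the mean minimizes the squared-error criterion in K-means.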

Page 53

K-Modes: Clustering Categorical Data

- K-Means cannot handle non-numerical (categorical) data
  - Mapping categorical values to 1/0 cannot generate quality clusters for high-dimensional data
- K-Modes: an extension of K-Means that replaces the means of clusters with modes
- Dissimilarity measure between an object X and the center Z of a cluster:
  - Φ(x_j, z_j) = 1 − n_j^r / n_l when x_j = z_j; 1 when x_j ≠ z_j
    - where z_j is the categorical value of attribute j in Z_l, n_l is the number of objects in cluster l, and n_j^r is the number of objects whose attribute value is r
- This dissimilarity measure (distance function) is frequency-based
- The algorithm is still based on iterative object-cluster assignment and centroid update (a sketch of the measure follows)
- A fuzzy K-Modes method has been proposed to calculate a fuzzy cluster-membership value for each object in each cluster
- A mixture of categorical and numerical data: use a K-Prototype method
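A small sketch of the frequency-based dissimilarity above; the helper name and the assumption that the current members of cluster l are available as a list of tuples are both illustrative:

```python
def kmodes_dissimilarity(x, z, cluster_members):
    """Sum over attributes j of Phi(x_j, z_j):
    1 - n_j^r / n_l if x_j == z_j, else 1 (a sketch, per the slide)."""
    n_l = len(cluster_members)
    total = 0.0
    for j, (xj, zj) in enumerate(zip(x, z)):
        if xj != zj:
            total += 1.0
        else:
            # n_j^r: members whose value on attribute j is r (= xj here).
            n_jr = sum(1 for obj in cluster_members if obj[j] == xj)
            total += 1.0 - n_jr / n_l
    return total
```

The frequency term rewards matches on values that are common within the cluster, so an object is pulled toward clusters where its attribute values are typical.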

Page 54

Limitations of K-means

- K-means has problems when clusters have differing
  - Sizes
  - Densities
  - Non-globular shapes
- K-means has problems when the data contains outliers.
- The mean may often not be an actual data point!

Page 55

Limitations of K-means: Differing Density

[Figure: original points vs. K-means (3 clusters)]

Page 56

Limitations of K-means: Non-globular Shapes

[Figure: original points vs. K-means (2 clusters)]

Page 57

Overcoming K-means Limitations

[Figure: original points vs. K-means clusters]

- A common remedy: use more K-means clusters than natural clusters, so each natural cluster is covered by several pieces that can be put together afterwards

Page 58

Chapter 10. Cluster Analysis: Basic Concepts and Methods

- Cluster Analysis: An Introduction

- Partitioning Methods

- Hierarchical Methods

- Density- and Grid-Based Methods

- Evaluation of Clustering

- Summary

Page 59

Hierarchical Clustering

- Produces a set of nested clusters organized as a hierarchical tree
- Can be visualized as a dendrogram
  - A tree-like diagram that records the sequences of merges or splits

[Figure: nested clusters of points 1–6 and the corresponding dendrogram, with merge heights between 0 and 0.2]

Page 60

Dendrogram: Shows How Clusters Are Merged/Split

- Dendrogram: decompose a set of data objects into a tree of clusters by multi-level nested partitioning
- A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster

Hierarchical clustering generates a dendrogram (a hierarchy of clusters)
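If SciPy is available, cutting a dendrogram at a desired level looks like this (linkage and fcluster are SciPy's functions; the random data is placeholder input):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(20, 2)                # placeholder data
Z = linkage(X, method='single')          # build the hierarchy (MIN linkage)
labels = fcluster(Z, t=3, criterion='maxclust')  # cut into 3 clusters
```

Changing t re-cuts the same tree at a different level, which is exactly the "any desired number of clusters" property claimed on the next slide.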

Page 61

Strengths of Hierarchical Clustering

- Does not require assuming any particular number of clusters
  - Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
- The clusters may correspond to meaningful taxonomies
  - Examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

Page 62

Hierarchical Clustering

- Two main types of hierarchical clustering
  - Agglomerative
  - Divisive

Page 63

Hierarchical Clustering

- Two main types of hierarchical clustering
  - Agglomerative:
    - Start with the points as individual clusters
    - At each step, merge the closest pair of clusters until only one cluster (or k clusters) remains
    - Builds a bottom-up hierarchy of clusters
  - Divisive:

[Figure: agglomerative merging of points a–e over steps 0–4]

Page 64

Hierarchical Clustering

- Two main types of hierarchical clustering
  - Agglomerative:
    - Start with the points as individual clusters
    - At each step, merge the closest pair of clusters until only one cluster (or k clusters) remains
    - Builds a bottom-up hierarchy of clusters
  - Divisive:
    - Start with one, all-inclusive cluster
    - At each step, split a cluster until each cluster contains a single point (or there are k clusters)
    - Generates a top-down hierarchy of clusters

[Figure: agglomerative processing of points a–e over steps 0–4, and divisive processing over steps 4–0]

Page 65

Hierarchical Clustering

- Two main types of hierarchical clustering
  - Agglomerative:
    - Start with the points as individual clusters
    - At each step, merge the closest pair of clusters until only one cluster (or k clusters) remains
  - Divisive:
    - Start with one, all-inclusive cluster
    - At each step, split a cluster until each cluster contains a single point (or there are k clusters)
- Traditional hierarchical algorithms use a similarity or distance matrix
  - Merge or split one cluster at a time

Page 66

Agglomerative Clustering Algorithm

- The more popular hierarchical clustering technique
- The basic algorithm is straightforward:
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.   Merge the two closest clusters
  5.   Update the proximity matrix
  6. Until only a single cluster remains

[Figure: agglomerative merging of points a–e over steps 0–4]

Page 67

Agglomerative Clustering Algorithm

- The more popular hierarchical clustering technique
- The basic algorithm is straightforward:
  1. Compute the proximity matrix
  2. Let each data point be a cluster
  3. Repeat
  4.   Merge the two closest clusters
  5.   Update the proximity matrix
  6. Until only a single cluster remains
- The key operation is the computation of the proximity of two clusters
  - Different approaches to defining the distance/similarity between clusters distinguish the different algorithms (a naive sketch follows)
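A naive, self-contained sketch of the basic algorithm, parameterized by the linkage function (min gives single link, max gives complete link); it is purely illustrative and far slower than production implementations:

```python
import numpy as np

def agglomerative(X, linkage_fn=min):
    X = np.asarray(X, dtype=float)
    # Steps 1-2: compute the proximity (distance) matrix; every point
    # starts as its own cluster.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    clusters = [[i] for i in range(len(X))]
    merges = []
    # Steps 3-6: repeatedly merge the two closest clusters.
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = linkage_fn(D[i, j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((list(clusters[a]), list(clusters[b]), d))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges  # the sequence of (cluster, cluster, distance) merges
```

The returned merge sequence is the information a dendrogram draws; only the linkage_fn line changes between the MIN, MAX, and group-average schemes discussed next.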

Page 68

Starting Situation

- Start with clusters of individual points and a proximity matrix

[Figure: 12 data points p1…p12 and their pairwise proximity matrix]

Page 69

Intermediate Situation

- After some merging steps, we have some clusters

[Figure: clusters C1–C5 over the 12 points, with the corresponding C1–C5 proximity matrix]

Page 70

Intermediate Situation

- We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1–C5, with C2 and C5 about to be merged, and the C1–C5 proximity matrix]

Page 71

After Merging

- How do we update the proximity matrix?

[Figure: clusters C1, C2 ∪ C5, C3, and C4; the proximity-matrix entries involving the merged cluster C2 ∪ C5 are marked ‘?’]

Page 72

How to Define Inter-Cluster Similarity

[Figure: points p1–p5 and their proximity matrix; what is the similarity of two clusters?]

- MIN
- MAX
- Group Average
- Distance Between Centroids

Page 73

How to Define Inter-Cluster Similarity

[Figure: points p1–p5 and their proximity matrix, with one of the linkage choices highlighted]

- MIN
- MAX
- Group Average
- Distance Between Centroids

Page 74

How to Define Inter-Cluster Similarity

[Figure: points p1–p5 and their proximity matrix, with one of the linkage choices highlighted]

- MIN
- MAX
- Group Average
- Distance Between Centroids

Page 75

How to Define Inter-Cluster Similarity

[Figure: points p1–p5 and their proximity matrix, with one of the linkage choices highlighted]

- MIN
- MAX
- Group Average
- Distance Between Centroids

Page 76

How to Define Inter-Cluster Similarity

[Figure: points p1–p5 and their proximity matrix, with the cluster centroids marked ×]

- MIN
- MAX
- Group Average
- Distance Between Centroids

Page 77

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

Page 78

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

      I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00

Page 79

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

      I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00

Page 80

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

      I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00

Page 81

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

      I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00

Page 82

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

          {I1,I2}    I3    I4    I5
{I1,I2}     1.00   0.70  0.65  0.50
I3          0.70   1.00  0.40  0.30
I4          0.65   0.40  1.00  0.80
I5          0.50   0.30  0.80  1.00

Update the proximity matrix with the new cluster {I1, I2}
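The updated row can be checked mechanically: under single link on similarities, a merged cluster is as similar to another cluster as its most similar member. A small NumPy check against the original 5×5 matrix:

```python
import numpy as np

# The I1..I5 similarity matrix from the slides.
S = np.array([
    [1.00, 0.90, 0.10, 0.65, 0.20],
    [0.90, 1.00, 0.70, 0.60, 0.50],
    [0.10, 0.70, 1.00, 0.40, 0.30],
    [0.65, 0.60, 0.40, 1.00, 0.80],
    [0.20, 0.50, 0.30, 0.80, 1.00],
])

# Single link: element-wise max of the I1 and I2 rows gives the
# similarities of {I1, I2} to the remaining items.
row = np.maximum(S[0], S[1])
print(row[2:])  # -> [0.70 0.65 0.50], matching the updated matrix above
```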

Page 83

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

          {I1,I2}    I3    I4    I5
{I1,I2}     1.00   0.70  0.65  0.50
I3          0.70   1.00  0.40  0.30
I4          0.65   0.40  1.00  0.80
I5          0.50   0.30  0.80  1.00

Update the proximity matrix with the new cluster {I1, I2}

Page 84

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

            {I1,I2}    I3   {I4,I5}
{I1,I2}       1.00   0.70    0.65
I3            0.70   1.00    0.40
{I4,I5}       0.65   0.40    1.00

Update the proximity matrix with the new clusters {I1, I2} and {I4, I5}

Page 85

Cluster Similarity: MIN or Single Link

- The similarity of two clusters is based on the two most similar (closest) points in the different clusters
  - Determined by one pair of points, i.e., by one link in the proximity graph

              {I1,I2,I3}  {I4,I5}
{I1,I2,I3}       1.00      0.65
{I4,I5}          0.65      1.00

Only two clusters are left.

Page 86

Hierarchical Clustering: MIN

[Figure: MIN (single-link) nested clusters of points 1–6 and the corresponding dendrogram, with merge heights between 0 and 0.2]

Page 87

Strength of MIN

[Figure: original points and the two clusters found]

- Can handle non-elliptical shapes

Page 88

Limitations of MIN

[Figure: original points and the two clusters found]

- Sensitive to noise and outliers

Page 89

Cluster Similarity: MAX or Complete Linkage

- The similarity of two clusters is based on the two least similar (most distant) points in the different clusters
  - Determined by all pairs of points in the two clusters

      I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00

Page 90

Cluster Similarity: MAX or Complete Linkage

- The similarity of two clusters is based on the two least similar (most distant) points in the different clusters
  - Determined by all pairs of points in the two clusters

          {I1,I2}    I3    I4    I5
{I1,I2}     1.00   0.10  0.60  0.20
I3          0.10   1.00  0.40  0.30
I4          0.60   0.40  1.00  0.80
I5          0.20   0.30  0.80  1.00

Page 91

Cluster Similarity: MAX or Complete Linkage

- The similarity of two clusters is based on the two least similar (most distant) points in the different clusters
  - Determined by all pairs of points in the two clusters

          {I1,I2}    I3    I4    I5
{I1,I2}     1.00   0.10  0.60  0.20
I3          0.10   1.00  0.40  0.30
I4          0.60   0.40  1.00  0.80
I5          0.20   0.30  0.80  1.00

Which two clusters should be merged next?

Page 92

Cluster Similarity: MAX or Complete Linkage

- The similarity of two clusters is based on the two least similar (most distant) points in the different clusters
  - Determined by all pairs of points in the two clusters

          {I1,I2}    I3    I4    I5
{I1,I2}     1.00   0.10  0.60  0.20
I3          0.10   1.00  0.40  0.30
I4          0.60   0.40  1.00  0.80
I5          0.20   0.30  0.80  1.00

Merge {3} with {4,5}. Why? After {4} and {5} merge at similarity 0.80, complete link compares worst-case similarities: {1,2}–{3} = 0.10, {1,2}–{4,5} = 0.20, and {3}–{4,5} = 0.30, so {3} and {4,5} are merged.

Page 93

Hierarchical Clustering: MAX

[Figure: MAX (complete-link) nested clusters of points 1–6 and the corresponding dendrogram, with merge heights between 0 and 0.4]

Page 94

Strength of MAX

[Figure: original points and the two clusters found]

- Less susceptible to noise and outliers

Page 95

Limitations of MAX

[Figure: original points and the two clusters found]

- Tends to break large clusters
- Biased towards globular clusters

Page 96

Cluster Similarity: Group Average

- The proximity of two clusters is the average of the pairwise proximities between points in the two clusters:

  \mathrm{proximity}(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i,\; p_j \in Cluster_j} \mathrm{proximity}(p_i, p_j)}{|Cluster_i| \times |Cluster_j|}

- Need to use average connectivity for scalability, since total proximity favors large clusters

      I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00
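A small check of the formula on the matrix above (a sketch; group_average is an illustrative name):

```python
import numpy as np

S = np.array([  # the I1..I5 similarity matrix from the slide
    [1.00, 0.90, 0.10, 0.65, 0.20],
    [0.90, 1.00, 0.70, 0.60, 0.50],
    [0.10, 0.70, 1.00, 0.40, 0.30],
    [0.65, 0.60, 0.40, 1.00, 0.80],
    [0.20, 0.50, 0.30, 0.80, 1.00],
])

def group_average(S, A, B):
    # Average of all pairwise proximities between clusters A and B.
    return sum(S[i, j] for i in A for j in B) / (len(A) * len(B))

# proximity({I1,I2}, {I4,I5}) = (0.65 + 0.20 + 0.60 + 0.50) / 4 = 0.4875
print(group_average(S, [0, 1], [3, 4]))
```

Dividing by |A| * |B| is what makes this the average connectivity mentioned above; summing without the division would systematically favor merging large clusters.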

Page 97

Hierarchical Clustering: Group Average

[Figure: group-average nested clusters of points 1–6 and the corresponding dendrogram, with merge heights between 0 and 0.25]

Page 98

Hierarchical Clustering: Group Average

- A compromise between single link and complete link

- Strengths
  - Less susceptible to noise and outliers

- Limitations
  - Biased towards globular clusters

Page 99

Hierarchical Clustering: Time and Space Requirements

- O(N^2) space, since it uses the proximity matrix
  - N is the number of points
- O(N^3) time in many cases
  - There are N steps, and at each step the N^2 proximity matrix must be updated and searched
  - Complexity can be reduced to O(N^2 log N) time for some approaches

Page 100

Hierarchical Clustering: Problems and Limitations

- Once a decision is made to combine two clusters, it cannot be undone

- No objective function is directly minimized

- Different schemes have problems with one or more of the following:
  - Sensitivity to noise and outliers
  - Difficulty handling clusters of different sizes and convex shapes
  - Breaking large clusters