Final Review Lei Chen

TRANSCRIPT

Page 1: Final Review Lei Chen. Clustering Algorithms K-Means

Final Review

Lei Chen

Page 2: Final Review Lei Chen. Clustering Algorithms K-Means

Clustering Algorithms

• K-Means

Page 3: Final Review Lei Chen. Clustering Algorithms K-Means

Partitioning Algorithms: Basic Concept

• Partitioning method: partition a database D of n objects into a set of k clusters such that the sum of squared distances is minimized (where c_i is the centroid or medoid of cluster C_i)

• Given k, find a partition of k clusters that optimizes the chosen partitioning criterion

– Global optimal: exhaustively enumerate all partitions

– Heuristic methods: the k-means and k-medoids algorithms

– k-means (MacQueen '67, Lloyd '57/'82): each cluster is represented by the center of the cluster

– k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw '87): each cluster is represented by one of the objects in the cluster

E = Σ_{i=1}^{k} Σ_{p ∈ C_i} (d(p, c_i))^2
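As a concrete illustration of the heuristic, here is a minimal k-means sketch (Python/NumPy; the data matrix X, the number of clusters k, and the random initialization are assumptions for illustration, not part of the slides):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: X is an (n, d) array, k the number of clusters."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # pick k initial centers
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    sse = ((X - centers[labels]) ** 2).sum()   # the criterion E from the slide
    return labels, centers, sse
```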

Page 4: Final Review Lei Chen. Clustering Algorithms K-Means

Clustering Algorithms

• K-Means
• K-Medoids

Page 5: Final Review Lei Chen. Clustering Algorithms K-Means


PAM (Partitioning Around Medoids) (1987)

• PAM (Kaufman and Rousseeuw, 1987), built into S-Plus

• Uses real objects to represent the clusters

– Select k representative objects arbitrarily

– For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih

– For each pair of i and h,

• If TC_ih < 0, i is replaced by h

• Then assign each non-selected object to the most similar representative object

– Repeat steps 2-3 until there is no change (a code sketch follows below)
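A compact sketch of the swap-based search described above (illustrative only; it assumes a precomputed distance matrix D and recomputes the total cost for each candidate swap instead of using PAM's incremental TC_ih bookkeeping):

```python
import numpy as np

def pam(D, k, max_iter=20, seed=0):
    """PAM-style k-medoids on a precomputed n x n distance matrix D."""
    n = len(D)
    rng = np.random.default_rng(seed)
    medoids = list(rng.choice(n, size=k, replace=False))       # arbitrary medoids

    def total_cost(meds):
        # cost of assigning every object to its closest medoid
        return D[:, meds].min(axis=1).sum()

    for _ in range(max_iter):
        current = total_cost(medoids)
        best_swap, best_delta = None, 0.0
        for i in medoids:                                       # selected object i
            for h in range(n):                                  # non-selected object h
                if h in medoids:
                    continue
                candidate = [h if m == i else m for m in medoids]
                delta = total_cost(candidate) - current         # plays the role of TC_ih
                if delta < best_delta:                          # TC_ih < 0: improving swap
                    best_swap, best_delta = (i, h), delta
        if best_swap is None:                                   # no improving swap: stop
            break
        i, h = best_swap
        medoids = [h if m == i else m for m in medoids]
    labels = D[:, medoids].argmin(axis=1)                       # final assignment
    return medoids, labels
```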

Page 6: Final Review Lei Chen. Clustering Algorithms K-Means

Clustering Algorithms

• K-Means
• K-Medoids
• Hierarchical Clustering

Page 7: Final Review Lei Chen. Clustering Algorithms K-Means

Hierarchical Clustering

• Use distance matrix as clustering criteria. This method does not require the number of clusters k as an input, but needs a termination condition

[Figure: dendrogram over objects a, b, c, d, e. Agglomerative clustering (AGNES) merges from Step 0 to Step 4; divisive clustering (DIANA) splits from Step 4 back to Step 0.]

Page 8: Final Review Lei Chen. Clustering Algorithms K-Means

AGNES (Agglomerative Nesting)

• Introduced in Kaufman and Rousseeuw (1990)
• Implemented in statistical packages, e.g., S-Plus
• Uses the single-link method and the dissimilarity matrix
• Merges the nodes that have the least dissimilarity
• Continues in a non-descending fashion
• Eventually all nodes belong to the same cluster

[Figure: three scatter plots on a 0–10 × 0–10 grid showing clusters being progressively merged by AGNES.]

Page 9: Final Review Lei Chen. Clustering Algorithms K-Means

Distance between Clusters

• Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = min(t_ip, t_jq)

• Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = max(t_ip, t_jq)

• Average: average distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = avg(t_ip, t_jq)

• Centroid: distance between the centroids of two clusters, i.e., dist(K_i, K_j) = dist(C_i, C_j)

• Medoid: distance between the medoids of two clusters, i.e., dist(K_i, K_j) = dist(M_i, M_j)

– Medoid: a chosen, centrally located object in the cluster
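These inter-cluster distances correspond to the linkage methods available in SciPy's hierarchical clustering; a small usage sketch (the toy points standing in for objects a–e are made up for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy points standing in for objects a..e
X = np.array([[1.0, 1.0], [1.5, 1.0], [5.0, 5.0], [5.5, 5.0], [9.0, 9.0]])
d = pdist(X)                                   # pairwise dissimilarities (condensed)

for method in ("single", "complete", "average"):
    Z = linkage(d, method=method)              # agglomerative merge tree (dendrogram)
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters
    print(method, labels.tolist())
```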

Page 10: Final Review Lei Chen. Clustering Algorithms K-Means

Clustering Algorithms

• K-Means
• K-Medoids
• Hierarchical Clustering
• Density-based Clustering

Page 11: Final Review Lei Chen. Clustering Algorithms K-Means

Density-Based Clustering: Basic Concepts

• Two parameters:

– Eps: Maximum radius of the neighbourhood

– MinPts: Minimum number of points in an Eps-neighbourhood of that point

• N_Eps(q) = {p ∈ D | dist(p, q) ≤ Eps}

• Directly density-reachable: A point p is directly density-reachable from a point q w.r.t. Eps, MinPts if

– p belongs to NEps(q)

– core point condition:

|NEps (q)| ≥ MinPts

[Figure: a core point q with MinPts = 5 points inside its Eps = 1 cm neighborhood, and a point p directly density-reachable from q.]

Page 12: Final Review Lei Chen. Clustering Algorithms K-Means

Density-Reachable and Density-Connected

• Density-reachable:

– A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn, p1 = q, pn = p such that pi+1 is directly density-reachable from pi

• Density-connected

– A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both, p and q are density-reachable from o w.r.t. Eps and MinPts

[Figure: left, p density-reachable from q via a chain through p1; right, p and q density-connected through a common point o.]

Page 13: Final Review Lei Chen. Clustering Algorithms K-Means

DBSCAN: Density-Based Spatial Clustering of Applications with Noise

• Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points

• Discovers clusters of arbitrary shape in spatial databases with noise

• A point is a core point if it has more than a specified number of points (MinPts) within Eps – These are points that are at the interior of a cluster

• A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point

[Figure: core, border, and outlier (noise) points for Eps = 1 cm and MinPts = 5.]

Page 14: Final Review Lei Chen. Clustering Algorithms K-Means

DBSCAN: The Algorithm

• Arbitrarily select a point p

• Retrieve all points density-reachable from p w.r.t. Eps and MinPts

• If p is a core point, a cluster is formed

• If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the database

• Continue the process until all of the points have been processed

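A minimal usage sketch with scikit-learn's DBSCAN, where eps and min_samples play the roles of Eps and MinPts (the synthetic data and the parameter values are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus a few scattered noise points
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(5, 0.3, (50, 2)),
               rng.uniform(-2, 7, (5, 2))])

# eps corresponds to Eps, min_samples to MinPts
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
print(set(labels))          # cluster ids; -1 marks noise (outliers)
```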

Page 15: Final Review Lei Chen. Clustering Algorithms K-Means

Clustering Algorithms

• K-Means
• K-Medoids
• Hierarchical Clustering
• Density-based Clustering
• Fuzzy set-based Clustering
• Measuring Clustering Quality

Page 16: Final Review Lei Chen. Clustering Algorithms K-Means

Fuzzy (Soft) Clustering

• Example: let the cluster features be
– C1: "digital camera" and "lens"
– C2: "computer"

• Fuzzy clustering
– k fuzzy clusters C_1, …, C_k, represented as a partition matrix M = [w_ij]
– P1: for each object o_i and cluster C_j, 0 ≤ w_ij ≤ 1 (fuzzy set)
– P2: for each object o_i, Σ_{j=1}^{k} w_ij = 1 (equal participation in the clustering)
– P3: for each cluster C_j, Σ_{i=1}^{n} w_ij > 0 (ensures there is no empty cluster)

• Let c_1, …, c_k be the centers of the k clusters

• For an object o_i, the sum of squared error (SSE), with p a parameter:
SSE(o_i) = Σ_{j=1}^{k} w_ij^p dist(o_i, c_j)^2

• For a cluster C_j, the SSE:
SSE(C_j) = Σ_{i=1}^{n} w_ij^p dist(o_i, c_j)^2

• Measure of how well a clustering fits the data:
SSE(C) = Σ_{i=1}^{n} Σ_{j=1}^{k} w_ij^p dist(o_i, c_j)^2

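The partition-matrix formulation above is essentially fuzzy c-means; a minimal sketch of the two alternating updates (X, k, and the fuzzifier p are assumed inputs, and the update formulas are the standard fuzzy c-means ones rather than anything specific to these slides):

```python
import numpy as np

def fuzzy_cmeans(X, k, p=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means: returns the partition matrix W (n x k) and the centers (k x d)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    W = rng.random((n, k))
    W /= W.sum(axis=1, keepdims=True)            # each row sums to 1 (property P2)
    for _ in range(n_iter):
        # Center update: weighted means with weights w_ij^p (minimizes SSE(C))
        Wp = W ** p
        centers = (Wp.T @ X) / Wp.sum(axis=0)[:, None]
        # Membership update: derived from the distances to the current centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        W_new = 1.0 / (d ** (2 / (p - 1)))
        W_new /= W_new.sum(axis=1, keepdims=True)
        if np.abs(W_new - W).max() < tol:
            W = W_new
            break
        W = W_new
    return W, centers
```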

Page 17: Final Review Lei Chen. Clustering Algorithms K-Means

The EM (Expectation Maximization) Algorithm

• The k-means algorithm has two steps at each iteration:

– Expectation Step (E-step): Given the current cluster centers, each object is assigned to the cluster whose center is closest to the object: An object is expected to belong to the closest cluster

– Maximization Step (M-step): Given the cluster assignment, for each cluster the algorithm adjusts the center so that the sum of the distances from the objects assigned to this cluster to the new center is minimized

• The EM algorithm: a framework to approach maximum likelihood or maximum a posteriori estimates of parameters in statistical models
– The E-step assigns objects to clusters according to the current fuzzy clustering or parameters of probabilistic clusters
– The M-step finds the new clustering or parameters that minimize the sum of squared error (SSE) or maximize the expected likelihood


Page 18: Final Review Lei Chen. Clustering Algorithms K-Means

Fuzzy Clustering Using the EM Algorithm

• Initially, let c1 = a and c2 = b

• 1st E-step: assign each object o to c1 and c2 with membership weights

• 1st M-step: recalculate the centroids according to the partition matrix, minimizing the sum of squared error (SSE)

• Iterate until the cluster centers converge or the change is small enough

Page 19: Final Review Lei Chen. Clustering Algorithms K-Means

Clustering Algorithms

• K-Means
• K-Medoids
• Hierarchical Clustering
• Density-based Clustering
• Fuzzy set-based Clustering
• Probabilistic Model-Based Clustering
• Measuring Clustering Quality

Page 20: Final Review Lei Chen. Clustering Algorithms K-Means


Model-Based Clustering

• A set C of k probabilistic clusters C_1, …, C_k with probability density functions f_1, …, f_k, respectively, and their probabilities ω_1, …, ω_k

• Probability of an object o generated by cluster C_j: P(o | C_j) = ω_j f_j(o)

• Probability of o generated by the set of clusters C: P(o | C) = Σ_{j=1}^{k} ω_j f_j(o)

• Since objects are assumed to be generated independently, for a data set D = {o_1, …, o_n} we have P(D | C) = Π_{i=1}^{n} P(o_i | C)

• Task: find a set C of k probabilistic clusters such that P(D | C) is maximized

• However, maximizing P(D | C) is often intractable, since the probability density function of a cluster can take an arbitrarily complicated form

• To make the computation feasible (as a compromise), assume that the probability density functions are parameterized distributions

Page 21: Final Review Lei Chen. Clustering Algorithms K-Means


Univariate Gaussian Mixture Model

• Given O = {o_1, …, o_n} (n observed objects), Θ = {θ_1, …, θ_k} (the parameters of the k distributions), and P_j(o_i | θ_j), the probability that o_i is generated from the j-th distribution with parameter θ_j, we have

P(o_i | Θ) = Σ_{j=1}^{k} P_j(o_i | θ_j),  P(O | Θ) = Π_{i=1}^{n} P(o_i | Θ)

• Univariate Gaussian mixture model: assume the probability density function of each cluster follows a 1-d Gaussian distribution. Suppose there are k clusters. The probability density function of cluster j is centered at μ_j with standard deviation σ_j, so θ_j = (μ_j, σ_j) and

P_j(o_i | θ_j) = (1 / (√(2π) σ_j)) exp(−(o_i − μ_j)^2 / (2σ_j^2))

Page 22: Final Review Lei Chen. Clustering Algorithms K-Means


Computing Mixture Models with EM

• Given n objects O = {o_1, …, o_n}, we want to mine a set of parameters Θ = {θ_1, …, θ_k} such that P(O | Θ) is maximized, where θ_j = (μ_j, σ_j) are the mean and standard deviation of the j-th univariate Gaussian distribution

• We initially assign random values to the parameters θ_j, then iteratively conduct the E- and M-steps until convergence or until the change is sufficiently small

• At the E-step, for each object o_i, calculate the probability that o_i belongs to each distribution:
P(θ_j | o_i, Θ) = P(o_i | θ_j) / Σ_{l=1}^{k} P(o_i | θ_l)

• At the M-step, adjust the parameters θ_j = (μ_j, σ_j) so that the expected likelihood P(O | Θ) is maximized
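A minimal EM sketch for a univariate Gaussian mixture along these lines (the responsibilities below also carry mixing weights w_j; the initialization and number of iterations are assumptions for illustration):

```python
import numpy as np

def em_gmm_1d(o, k=2, n_iter=200, seed=0):
    """EM for a univariate Gaussian mixture: returns means, stds, and mixing weights."""
    rng = np.random.default_rng(seed)
    o = np.asarray(o, dtype=float)
    mu = rng.choice(o, size=k, replace=False)          # random initial parameters
    sigma = np.full(k, o.std() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each distribution j for each object o_i
        dens = (w / (np.sqrt(2 * np.pi) * sigma)) * \
               np.exp(-(o[:, None] - mu) ** 2 / (2 * sigma ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate (mu_j, sigma_j, w_j) to maximize the expected likelihood
        Nj = resp.sum(axis=0)
        mu = (resp * o[:, None]).sum(axis=0) / Nj
        sigma = np.sqrt((resp * (o[:, None] - mu) ** 2).sum(axis=0) / Nj) + 1e-6
        w = Nj / len(o)
    return mu, sigma, w
```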

Page 23: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution

Page 24: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Itemset Generation

• Brute-force approach:
– Each itemset in the lattice is a candidate frequent itemset
– Count the support of each candidate by scanning the database
– Match each transaction against every candidate
– Complexity ~ O(NMw) => expensive, since M = 2^d !!!

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

(N = number of transactions, M = number of candidate itemsets, w = maximum transaction width)

Page 25: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Itemset Generation Strategies

• Reduce the number of candidates (M)
– Complete search: M = 2^d
– Use pruning techniques to reduce M

• Reduce the number of transactions (N)
– Reduce the size of N as the size of the itemset increases
– Used by DHP and vertical-based mining algorithms

• Reduce the number of comparisons (NM)
– Use efficient data structures to store the candidates or transactions
– No need to match every candidate against every transaction

Page 26: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution
• Apriori Property and Algorithm

Page 27: Final Review Lei Chen. Clustering Algorithms K-Means

Reducing Number of Candidates

• Apriori principle:
– If an itemset is frequent, then all of its subsets must also be frequent

• The Apriori principle holds due to the following property of the support measure:
– The support of an itemset never exceeds the support of its subsets
– This is known as the anti-monotone property of support

∀ X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)

Page 28: Final Review Lei Chen. Clustering Algorithms K-Means

Apriori Algorithm

• Method (a code sketch follows below):
– Let k = 1
– Generate frequent itemsets of length 1
– Repeat until no new frequent itemsets are identified:
• Generate length-(k+1) candidate itemsets from the length-k frequent itemsets
• Prune candidate itemsets containing subsets of length k that are infrequent
• Count the support of each candidate by scanning the DB
• Eliminate candidates that are infrequent, leaving only those that are frequent
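A small, unoptimized Apriori sketch over the toy transaction table used earlier (minsup is given as an absolute count; this illustrates the method rather than an efficient implementation):

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Return all frequent itemsets (frozensets) with support >= minsup (a count)."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    # k = 1: frequent single items
    items = {i for t in transactions for i in t}
    Lk = {frozenset([i]) for i in items if support(frozenset([i])) >= minsup}
    frequent = {s: support(s) for s in Lk}
    k = 1
    while Lk:
        # candidate generation: join frequent k-itemsets into (k+1)-itemsets
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        # prune candidates that contain an infrequent k-subset (Apriori property)
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k))}
        # support counting over the DB, then eliminate infrequent candidates
        Lk = {c for c in candidates if support(c) >= minsup}
        frequent.update({c: support(c) for c in Lk})
        k += 1
    return frequent

txns = [{'Bread', 'Milk'}, {'Bread', 'Diaper', 'Beer', 'Eggs'},
        {'Milk', 'Diaper', 'Beer', 'Coke'}, {'Bread', 'Milk', 'Diaper', 'Beer'},
        {'Bread', 'Milk', 'Diaper', 'Coke'}]
print(apriori(txns, minsup=3))
```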

Page 29: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution
• Apriori Property and Algorithm
• Hashing Tree

Page 30: Final Review Lei Chen. Clustering Algorithms K-Means

Reducing Number of Comparisons

• Candidate counting:
– Scan the database of transactions to determine the support of each candidate itemset
– To reduce the number of comparisons, store the candidates in a hash structure

• Instead of matching each transaction against every candidate, match it only against the candidates contained in the hashed buckets

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

[Figure: the N transactions are matched against a hash structure of k buckets of candidates.]

Page 31: Final Review Lei Chen. Clustering Algorithms K-Means

Generate Hash Tree

[Figure: a hash tree whose leaves store the candidate 3-itemsets; at each level the hash function sends items 1, 4, 7 to the left branch, items 2, 5, 8 to the middle branch, and items 3, 6, 9 to the right branch.]

Suppose you have 15 candidate itemsets of length 3:

{1 4 5}, {1 2 4}, {4 5 7}, {1 2 5}, {4 5 8}, {1 5 9}, {1 3 6}, {2 3 4}, {5 6 7}, {3 4 5}, {3 5 6}, {3 5 7}, {6 8 9}, {3 6 7}, {3 6 8}

You need:

• Hash function

• Max leaf size: max number of itemsets stored in a leaf node (if number of candidate itemsets exceeds max leaf size, split the node)
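An illustrative sketch of the root-level hashing only: the hash function on the slide sends items 1, 4, 7, items 2, 5, 8, and items 3, 6, 9 to three different branches, which the code below mimics with (item - 1) % 3. The full hash tree recurses on later items and enforces the max leaf size, which is omitted here.

```python
candidates = [(1, 4, 5), (1, 2, 4), (4, 5, 7), (1, 2, 5), (4, 5, 8), (1, 5, 9),
              (1, 3, 6), (2, 3, 4), (5, 6, 7), (3, 4, 5), (3, 5, 6), (3, 5, 7),
              (6, 8, 9), (3, 6, 7), (3, 6, 8)]

def h(item):
    # items 1,4,7 -> branch 0; 2,5,8 -> branch 1; 3,6,9 -> branch 2
    return (item - 1) % 3

root = {0: [], 1: [], 2: []}
for c in candidates:
    root[h(c[0])].append(c)        # hash on the first item at the root level
for branch, members in root.items():
    print(branch, members)
```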

Page 32: Final Review Lei Chen. Clustering Algorithms K-Means

Maximal Frequent Itemset

[Figure: the itemset lattice over items A–E, from the empty set down to ABCDE; a border separates the frequent itemsets from the infrequent ones, and the maximal frequent itemsets lie just above the border.]

An itemset is maximal frequent if none of its immediate supersets is frequent

Page 33: Final Review Lei Chen. Clustering Algorithms K-Means

Closed Itemset

• An itemset is closed if none of its immediate supersets has the same support as the itemset

TID  Items
1    {A,B}
2    {B,C,D}
3    {A,B,C,D}
4    {A,B,D}
5    {A,B,C,D}

Itemset    Support
{A}        4
{B}        5
{C}        3
{D}        4
{A,B}      4
{A,C}      2
{A,D}      3
{B,C}      3
{B,D}      4
{C,D}      3
{A,B,C}    2
{A,B,D}    3
{A,C,D}    2
{B,C,D}    3
{A,B,C,D}  2
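A brute-force sketch that recomputes these supports from the transaction table and then lists the closed and the maximal frequent itemsets (minsup = 2 is an assumed threshold; the slide does not fix one):

```python
from itertools import combinations

txns = [{'A', 'B'}, {'B', 'C', 'D'}, {'A', 'B', 'C', 'D'},
        {'A', 'B', 'D'}, {'A', 'B', 'C', 'D'}]
items = sorted({i for t in txns for i in t})

def support(itemset):
    s = set(itemset)
    return sum(1 for t in txns if s <= t)

# enumerate every non-empty itemset with its support
supp = {frozenset(c): support(c)
        for r in range(1, len(items) + 1) for c in combinations(items, r)}

minsup = 2                                   # assumed threshold
frequent = {s for s, v in supp.items() if v >= minsup}
# closed: no proper superset has the same support
closed = {s for s in supp if not any(s < t and supp[t] == supp[s] for t in supp)}
# maximal frequent: no proper superset is frequent
maximal = {s for s in frequent if not any(s < t for t in frequent)}

print(sorted(map(sorted, closed)))
print(sorted(map(sorted, maximal)))
```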

Page 34: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution
• Apriori Property and Algorithm
• Hashing Tree
• FP-tree

Page 35: Final Review Lei Chen. Clustering Algorithms K-Means


FP-tree

• Scan the database once to store all essential information in a data structure called FP-tree (Frequent Pattern Tree)

• The FP-tree is concise and is used in directly generating large itemsets

Page 36: Final Review Lei Chen. Clustering Algorithms K-Means


FP-tree

Step 1: Deduce the ordered frequent items. For items with the same frequency, the order is given by alphabetical order.
Step 2: Construct the FP-tree from the above data. (A sketch follows below.)
Step 3: From the FP-tree, construct the FP-conditional tree for each item (or itemset).
Step 4: Determine the frequent patterns.
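A minimal sketch of Steps 1–2 (item ordering and tree construction); the header table kept here is what Step 3 would use to build the conditional trees. The class and function names are illustrative assumptions.

```python
from collections import Counter, defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent, self.count = item, parent, 1
        self.children = {}

def build_fptree(transactions, minsup):
    """Steps 1-2: order the frequent items per transaction, then insert into the tree."""
    counts = Counter(i for t in transactions for i in t)
    freq = {i for i, c in counts.items() if c >= minsup}
    # Step 1: order by decreasing frequency, ties broken alphabetically
    order = lambda i: (-counts[i], i)
    root = FPNode(None, None)
    header = defaultdict(list)            # item -> nodes, used for conditional trees
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in freq), key=order):
            if item in node.children:
                node.children[item].count += 1
            else:
                node.children[item] = FPNode(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
    return root, header
```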

Page 37: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution
• Apriori Property and Algorithm
• Hashing Tree
• FP-tree
• Continuous and Categorical Attributes

Page 38: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution
• Apriori Property and Algorithm
• Hashing Tree
• FP-tree
• Continuous and Categorical Attributes
• Sequence Pattern Mining

Page 39: Final Review Lei Chen. Clustering Algorithms K-Means

Sequential Pattern Mining: Definition

• Given:
– a database of sequences
– a user-specified minimum support threshold, minsup

• Task:
– Find all subsequences with support ≥ minsup

Page 40: Final Review Lei Chen. Clustering Algorithms K-Means

Sequential Pattern Mining: Challenge

Page 41: Final Review Lei Chen. Clustering Algorithms K-Means

Sequential Pattern Mining: Example

Minsup = 50%

Examples of Frequent Subsequences:

< {1,2} >        s = 60%
< {2,3} >        s = 60%
< {2,4} >        s = 80%
< {3} {5} >      s = 80%
< {1} {2} >      s = 80%
< {2} {2} >      s = 60%
< {1} {2,3} >    s = 60%
< {2} {2,3} >    s = 60%
< {1,2} {2,3} >  s = 60%

Object  Timestamp  Events
A       1          1, 2, 4
A       2          2, 3
A       3          5
B       1          1, 2
B       2          2, 3, 4
C       1          1, 2
C       2          2, 3, 4
C       3          2, 4, 5
D       1          2
D       2          3, 4
D       3          4, 5
E       1          1, 3
E       2          2, 4, 5

Page 42: Final Review Lei Chen. Clustering Algorithms K-Means

Generalized Sequential Pattern (GSP)

• Step 1:
– Make the first pass over the sequence database D to yield all the 1-element frequent sequences

• Step 2: repeat until no new frequent sequences are found
– Candidate Generation:
• Merge pairs of frequent subsequences found in the (k-1)-th pass to generate candidate sequences that contain k items
– Candidate Pruning:
• Prune candidate k-sequences that contain infrequent (k-1)-subsequences
– Support Counting:
• Make a new pass over the sequence database D to find the support for these candidate sequences
– Candidate Elimination:
• Eliminate candidate k-sequences whose actual support is less than minsup

Page 43: Final Review Lei Chen. Clustering Algorithms K-Means

Candidate Generation

• Base case (k = 2):
– Merging two frequent 1-sequences <{i1}> and <{i2}> will produce two candidate 2-sequences: <{i1} {i2}> and <{i1 i2}>

• General case (k > 2):
– A frequent (k-1)-sequence w1 is merged with another frequent (k-1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2
• The resulting candidate after merging is given by the sequence w1 extended with the last event of w2:
– If the last two events in w2 belong to the same element, then the last event in w2 becomes part of the last element in w1
– Otherwise, the last event in w2 becomes a separate element appended to the end of w1

Page 44: Final Review Lei Chen. Clustering Algorithms K-Means

Candidate Generation Examples

• Merging the sequences w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4 5}> will produce the candidate sequence <{1} {2 3} {4 5}>, because the last two events in w2 (4 and 5) belong to the same element

• Merging the sequences w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4} {5}> will produce the candidate sequence <{1} {2 3} {4} {5}>, because the last two events in w2 (4 and 5) do not belong to the same element

• We do not have to merge the sequences w1 = <{1} {2 6} {4}> and w2 = <{1} {2} {4 5}> to produce the candidate <{1} {2 6} {4 5}>, because if the latter is a viable candidate, then it can be obtained by merging w1 with <{1} {2 6} {5}>
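A small sketch of the merge test and merge rule described above, with a sequence represented as a tuple of elements and an element as a tuple of events (the representation and function names are assumptions for illustration):

```python
def drop_first_event(seq):
    """Remove the first event of a sequence (tuple of element-tuples)."""
    first, *rest = seq
    return tuple(rest) if len(first) == 1 else (tuple(first[1:]),) + tuple(rest)

def drop_last_event(seq):
    """Remove the last event of a sequence."""
    *rest, last = seq
    return tuple(rest) if len(last) == 1 else tuple(rest) + (tuple(last[:-1]),)

def merge(w1, w2):
    """Return the candidate k-sequence obtained from two (k-1)-sequences, or None."""
    if drop_first_event(w1) != drop_last_event(w2):
        return None                       # the sequences are not mergeable
    last_elem = w2[-1]
    if len(last_elem) > 1:
        # last two events of w2 share an element: extend w1's last element
        return w1[:-1] + (w1[-1] + (last_elem[-1],),)
    # otherwise the last event becomes a new element appended to w1
    return w1 + (last_elem,)

w1 = ((1,), (2, 3), (4,))
print(merge(w1, ((2, 3), (4, 5))))       # -> ((1,), (2, 3), (4, 5))
print(merge(w1, ((2, 3), (4,), (5,))))   # -> ((1,), (2, 3), (4,), (5,))
```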

Page 45: Final Review Lei Chen. Clustering Algorithms K-Means

GSP Example

Frequent 3-sequences:
< {1} {2} {3} >
< {1} {2 5} >
< {1} {5} {3} >
< {2} {3} {4} >
< {2 5} {3} >
< {3} {4} {5} >
< {5} {3 4} >

Candidate generation (4-sequences):
< {1} {2} {3} {4} >
< {1} {2 5} {3} >
< {1} {5} {3 4} >
< {2} {3} {4} {5} >
< {2 5} {3 4} >

After candidate pruning:
< {1} {2 5} {3} >

Page 46: Final Review Lei Chen. Clustering Algorithms K-Means

Frequent Item Sets

• Brute-Force Solution
• Apriori Property and Algorithm
• Hashing Tree
• FP-tree
• Continuous and Categorical Attributes
• Sequence Pattern Mining
• Time Constraint-based Sequence Pattern Mining

Page 47: Final Review Lei Chen. Clustering Algorithms K-Means

Timing Constraints (I)

Page 48: Final Review Lei Chen. Clustering Algorithms K-Means

Mining Sequential Patterns with Timing Constraints

• Approach 1:
– Mine sequential patterns without timing constraints
– Post-process the discovered patterns

• Approach 2:
– Modify GSP to directly prune candidates that violate the timing constraints
– Question: does the Apriori principle still hold?

Page 49: Final Review Lei Chen. Clustering Algorithms K-Means

Apriori Principle for Sequence Data

Page 50: Final Review Lei Chen. Clustering Algorithms K-Means

Contiguous Subsequences

• s is a contiguous subsequence of w = <e1> <e2> … <ek> if any of the following conditions hold:
1. s is obtained from w by deleting an item from either e1 or ek
2. s is obtained from w by deleting an item from any element ei that contains more than 2 items
3. s is a contiguous subsequence of s’ and s’ is a contiguous subsequence of w (recursive definition)

• Examples: s = < {1} {2} >
– is a contiguous subsequence of < {1} {2 3} >, < {1 2} {2} {3} >, and < {3 4} {1 2} {2 3} {4} >
– is not a contiguous subsequence of < {1} {3} {2} > and < {2} {1} {3} {2} >

Page 51: Final Review Lei Chen. Clustering Algorithms K-Means

Modified Candidate Pruning Step

• Without the maxgap constraint:
– A candidate k-sequence is pruned if at least one of its (k-1)-subsequences is infrequent

• With the maxgap constraint:
– A candidate k-sequence is pruned if at least one of its contiguous (k-1)-subsequences is infrequent

Page 52: Final Review Lei Chen. Clustering Algorithms K-Means

Outlier Detection

• Statistical Methods

Page 53: Final Review Lei Chen. Clustering Algorithms K-Means

Outlier Detection (1): Statistical Methods

• Statistical methods (also known as model-based methods) assume that the normal data follow some statistical model (a stochastic model)
– The data not following the model are outliers

• Effectiveness of statistical methods: highly depends on whether the assumed statistical model holds for the real data

• There are rich alternatives among statistical models, e.g., parametric vs. non-parametric

• Example (figure): first use a Gaussian distribution to model the normal data
– For each object y in region R, estimate g_D(y), the probability that y fits the Gaussian distribution
– If g_D(y) is very low, y is unlikely to have been generated by the Gaussian model and is thus an outlier

Page 54: Final Review Lei Chen. Clustering Algorithms K-Means

Statistical Approaches

• Statistical approaches assume that the objects in a data set are generated by a stochastic process (a generative model)

• Idea: learn a generative model fitting the given data set, and then identify the objects in low-probability regions of the model as outliers

• Methods are divided into two categories: parametric vs. non-parametric

• Parametric methods
– Assume that the normal data are generated by a parametric distribution with parameter θ
– The probability density function of the parametric distribution, f(x, θ), gives the probability that object x is generated by the distribution
– The smaller this value, the more likely x is an outlier

• Non-parametric methods
– Do not assume an a priori statistical model; instead, determine the model from the input data
– Not completely parameter-free, but the number and nature of the parameters are flexible and not fixed in advance
– Examples: histogram and kernel density estimation

Page 55: Final Review Lei Chen. Clustering Algorithms K-Means

Parametric Methods I: Detection of Univariate Outliers Based on the Normal Distribution

• Univariate data: a data set involving only one attribute or variable

• Often assume that the data are generated from a normal distribution, learn the parameters from the input data, and identify the points with low probability as outliers

• Ex.: average temperatures {24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4}

• Use the maximum likelihood method to estimate μ and σ; taking derivatives of the log-likelihood with respect to μ and σ^2 gives the maximum likelihood estimates
μ̂ = (1/n) Σ_{i=1}^{n} x_i,  σ̂^2 = (1/n) Σ_{i=1}^{n} (x_i − μ̂)^2

• For the above data with n = 10, we have μ̂ = 28.61 and σ̂ = 1.51; then (24 – 28.61)/1.51 = –3.04 < –3, so 24.0 is an outlier, since it lies more than 3 standard deviations from the mean

Page 56: Final Review Lei Chen. Clustering Algorithms K-Means


Outlier Discovery: Statistical Approaches

• Assume a model of the underlying distribution that generates the data set (e.g., a normal distribution)

• Use discordancy tests, which depend on
– the data distribution
– the distribution parameters (e.g., mean, variance)
– the number of expected outliers

• Drawbacks
– Most tests are for a single attribute
– In many cases, the data distribution may not be known

Page 57: Final Review Lei Chen. Clustering Algorithms K-Means

Parametric Methods II: Detection of Multivariate Outliers

• Multivariate data: a data set involving two or more attributes or variables

• Transform the multivariate outlier detection task into a univariate outlier detection problem

• Method 1: compute the Mahalanobis distance
– Let ō be the mean vector of the multivariate data set. The Mahalanobis distance from an object o to ō is MDist(o, ō) = (o – ō)^T S^{-1} (o – ō), where S is the covariance matrix
– Use Grubbs' test on this measure to detect outliers

• Method 2: use the χ^2-statistic
χ^2 = Σ_{i=1}^{n} (o_i – E_i)^2 / E_i
where E_i is the mean of the i-th dimension among all objects and n is the dimensionality
– If the χ^2-statistic is large, then object o is an outlier
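A small NumPy sketch of Method 1, computing MDist in the (squared) form defined above for every object; how to threshold the scores (e.g., via Grubbs' test) is left open:

```python
import numpy as np

def mahalanobis_scores(X):
    """MDist(o, o_bar) = (o - o_bar)^T S^-1 (o - o_bar) for every row o of X,
    following the squared form defined on the slide."""
    mean = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance matrix
    diff = X - mean
    # row-wise quadratic form diff_i^T S^-1 diff_i
    return np.einsum('ij,jk,ik->i', diff, S_inv, diff)
```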

Page 58: Final Review Lei Chen. Clustering Algorithms K-Means

Non-Parametric Methods: Detection Using Histogram

• The model of the normal data is learned from the input data without any a priori structure

• Often makes fewer assumptions about the data, and thus is applicable in more scenarios

• Outlier detection using a histogram:
– The figure shows the histogram of purchase amounts in transactions
– A transaction in the amount of $7,500 is an outlier, since only 0.2% of transactions have an amount higher than $5,000

• Problem: it is hard to choose an appropriate bin size for the histogram
– Too small a bin size → normal objects fall into empty or rare bins: false positives
– Too large a bin size → outliers fall into some frequent bins: false negatives

• Solution: adopt kernel density estimation to estimate the probability density distribution of the data. If the estimated density at an object is high, the object is likely normal; otherwise, it is likely an outlier


Page 60: Final Review Lei Chen. Clustering Algorithms K-Means

Outlier Detection

• Statistical Methods
• Proximity-based Methods
– Distance-based
– Density-based

Page 61: Final Review Lei Chen. Clustering Algorithms K-Means

Outlier Detection (2): Proximity-Based Methods

• An object is an outlier if its nearest neighbors are far away, i.e., the proximity of the object deviates significantly from the proximity of most of the other objects in the same data set

• The effectiveness of proximity-based methods relies heavily on the proximity measure
– In some applications, proximity or distance measures cannot be obtained easily
– These methods often have difficulty finding a group of outliers that stay close to each other

• Two major types of proximity-based outlier detection: distance-based vs. density-based

• Example (figure): model the proximity of an object using its 3 nearest neighbors
– Objects in region R are substantially different from the other objects in the data set
– Thus the objects in R are outliers

Page 62: Final Review Lei Chen. Clustering Algorithms K-Means

Proximity-Based Approaches: Distance-Based vs. Density-Based Outlier Detection

• Intuition: objects that are far away from the others are outliers

• Assumption of proximity-based approaches: the proximity of an outlier deviates significantly from that of most of the others in the data set

• Two types of proximity-based outlier detection methods
– Distance-based outlier detection: an object o is an outlier if its neighborhood does not contain enough other points
– Density-based outlier detection: an object o is an outlier if its density is much lower than that of its neighbors


Page 63: Final Review Lei Chen. Clustering Algorithms K-Means

Distance-Based Outlier Detection

• For each object o, examine the number of other objects in the r-neighborhood of o, where r is a user-specified distance threshold

• An object o is an outlier if most of the objects in D (taking π as a fraction threshold) are far away from o, i.e., not in the r-neighborhood of o

• An object o is a DB(r, π) outlier if ‖{o′ | dist(o, o′) ≤ r}‖ / ‖D‖ ≤ π

• Equivalently, one can check the distance between o and its k-th nearest neighbor o_k, where k = ⌈π‖D‖⌉: o is an outlier if dist(o, o_k) > r

• Efficient computation: nested-loop algorithm (see the sketch below)
– For any object o_i, calculate its distance to the other objects and count the number of other objects in its r-neighborhood
– If π·n other objects are within distance r, terminate the inner loop
– Otherwise, o_i is a DB(r, π) outlier

• Efficiency: the actual CPU time is not O(n^2) but close to linear in the data set size, since for most non-outlier objects the inner loop terminates early
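A direct sketch of the nested-loop algorithm above (π is the fraction threshold; quadratic in the worst case, with the early termination of the inner loop):

```python
import numpy as np

def db_outliers(X, r, pi):
    """DB(r, pi) outliers: o is an outlier if fewer than pi * n objects lie
    within distance r of o."""
    n = len(X)
    needed = pi * n                      # neighbors required for o to count as normal
    outliers = []
    for i in range(n):
        count = 0
        for j in range(n):
            if i != j and np.linalg.norm(X[i] - X[j]) <= r:
                count += 1
                if count >= needed:      # enough neighbors: terminate the inner loop
                    break
        else:                            # inner loop ran to completion: o_i is an outlier
            outliers.append(i)
    return outliers
```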

Page 64: Final Review Lei Chen. Clustering Algorithms K-Means


Outlier Discovery: Distance-Based Approach

• Introduced to counter the main limitations imposed by statistical methods
– We need multi-dimensional analysis without knowing the data distribution

• Distance-based outlier: a DB(p, D)-outlier is an object O in a data set T such that at least a fraction p of the objects in T lie at a distance greater than D from O

• Algorithms for mining distance-based outliers [Knorr & Ng, VLDB'98]
– Index-based algorithm
– Nested-loop algorithm
– Cell-based algorithm

Page 65: Final Review Lei Chen. Clustering Algorithms K-Means

Density-Based Outlier Detection

• Local outliers: outliers relative to their local neighborhoods, rather than to the global data distribution

• In the figure, o1 and o2 are local outliers relative to C1, o3 is a global outlier, and o4 is not an outlier. Proximity-based clustering, however, cannot detect that o1 and o2 are outliers (e.g., compared with o4)

• Intuition (density-based outlier detection): the density around an outlier object is significantly different from the density around its neighbors

• Method: use the relative density of an object with respect to its neighbors as the indicator of the degree to which the object is an outlier

• k-distance of an object o, dist_k(o): the distance between o and its k-th nearest neighbor

• k-distance neighborhood of o: N_k(o) = {o′ | o′ in D, dist(o, o′) ≤ dist_k(o)}
– N_k(o) can contain more than k objects, since multiple objects may be at identical distance from o

Page 66: Final Review Lei Chen. Clustering Algorithms K-Means

Local Outlier Factor: LOF

• Reachability distance from o′ to o: reachdist_k(o ← o′) = max{dist_k(o), dist(o, o′)}, where k is a user-specified parameter

• Local reachability density of o: lrd_k(o) = ‖N_k(o)‖ / Σ_{o′ ∈ N_k(o)} reachdist_k(o′ ← o)

• LOF (local outlier factor) of an object o is the average ratio of the local reachability densities of o's k-nearest neighbors to the local reachability density of o:
LOF_k(o) = ( Σ_{o′ ∈ N_k(o)} lrd_k(o′) / lrd_k(o) ) / ‖N_k(o)‖

• The lower the local reachability density of o, and the higher the local reachability densities of the kNN of o, the higher the LOF

• This captures a local outlier whose local density is relatively low compared to the local densities of its kNN
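For practical use, scikit-learn implements LOF; a minimal usage sketch on synthetic data shaped like the C1/C2 example on the following slide (the data and the choice of n_neighbors are assumptions):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, (400, 2)),      # loose cluster (like C1)
               rng.normal(6, 0.1, (100, 2)),      # tight cluster (like C2)
               [[3.0, 3.0], [6.8, 6.0]]])         # two candidate outliers

lof = LocalOutlierFactor(n_neighbors=20)          # k used for the k-distance neighborhood
labels = lof.fit_predict(X)                       # -1 for outliers, 1 for inliers
scores = -lof.negative_outlier_factor_            # higher score = more outlying
print(scores[-2:], labels[-2:])
```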

Page 67: Final Review Lei Chen. Clustering Algorithms K-Means


Density-Based Local Outlier Detection

• M. M. Breunig, H.-P. Kriegel, R. Ng, J. Sander. LOF: Identifying Density-Based Local Outliers. SIGMOD 2000.

• Distance-based outlier detection is based on the global distance distribution

• It encounters difficulties in identifying outliers if the data are not uniformly distributed

• Ex.: C1 contains 400 loosely distributed points, C2 has 100 tightly condensed points, plus 2 outlier points o1 and o2

• A distance-based method cannot identify o2 as an outlier

• This motivates the concept of a local outlier

• Local outlier factor (LOF)
– Assumes "outlier" is not a crisp notion
– Each point is assigned a LOF value

Page 68: Final Review Lei Chen. Clustering Algorithms K-Means

Data Cube and OLAP

• Data Cube

Page 69: Final Review Lei Chen. Clustering Algorithms K-Means


Typical OLAP Operations

• Roll up (drill-up): summarize data
– by climbing up the hierarchy or by dimension reduction

• Drill down (roll down): the reverse of roll-up
– from a higher-level summary to a lower-level summary or detailed data, or introducing new dimensions

• Slice and dice: project and select

• Pivot (rotate):
– reorient the cube, visualization, 3D to a series of 2D planes

• Other operations
– drill across: involving (across) more than one fact table
– drill through: through the bottom level of the cube to its back-end relational tables (using SQL)
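These operations can be mimicked on a toy fact table with pandas, as a rough illustration only (the schema and the data below are made up; a real OLAP engine operates on a cube, not a DataFrame):

```python
import pandas as pd

# Toy sales fact table: time, location, item dimensions plus one measure
sales = pd.DataFrame({
    "quarter": ["Q1", "Q1", "Q2", "Q2", "Q1", "Q2"],
    "city":    ["Vancouver", "Toronto", "Vancouver", "Toronto", "Vancouver", "Toronto"],
    "item":    ["phone", "phone", "tv", "phone", "tv", "tv"],
    "amount":  [100, 150, 200, 120, 80, 90],
})

# Cube-like view: city x (quarter, item)
cube = sales.pivot_table(index="city", columns=["quarter", "item"],
                         values="amount", aggfunc="sum", fill_value=0)

# Roll-up: aggregate the city dimension away
rollup = sales.groupby(["quarter", "item"])["amount"].sum()

# Slice: fix one dimension (quarter = 'Q1')
slice_q1 = sales[sales["quarter"] == "Q1"]

# Dice: select on two or more dimensions
dice = sales[(sales["quarter"] == "Q1") & (sales["item"] == "phone")]

print(cube, rollup, sep="\n\n")
```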

Page 70: Final Review Lei Chen. Clustering Algorithms K-Means


Fig. 3.10 Typical OLAP Operations

Page 71: Final Review Lei Chen. Clustering Algorithms K-Means

Data Cube and OLAP

• Data Cube
• General Method to build up Data Cube

Page 72: Final Review Lei Chen. Clustering Algorithms K-Means


Efficient Computation of Data Cubes

• General cube computation heuristics (Agarwal et al., '96)

• Computing full/iceberg cubes:
– Top-down: multi-way array aggregation
– Bottom-up: bottom-up computation (BUC)
– Integrating top-down and bottom-up

Page 73: Final Review Lei Chen. Clustering Algorithms K-Means

Measures

• Clustering Measures
• Frequent Item Set Measures

Page 74: Final Review Lei Chen. Clustering Algorithms K-Means

Web Databases

• PageRank
• HITS