SLIDES BY: SHREE JASWAL Frequent Pattern Mining


Page 1: Frequent Pattern Mining (source: ssjaswal.com/wp-content/uploads/2015/03/DMBI_chp7.pdf)

SLIDES BY: SHREE JASWAL

Frequent Pattern Mining


Topics to be covered

Chp 7 Slides by Shree Jaswal


Market Basket Analysis; Frequent Itemsets, Closed Itemsets, and Association Rules; Frequent Pattern Mining; Efficient and Scalable Frequent Itemset Mining Methods; The Apriori Algorithm for finding Frequent Itemsets Using Candidate Generation; Generating Association Rules from Frequent Itemsets; Improving the Efficiency of Apriori; A pattern-growth approach for mining Frequent Itemsets; Mining Frequent Itemsets using vertical data formats; Mining closed and maximal patterns; Introduction to Mining Multilevel Association Rules and Multidimensional Association Rules; From Association Mining to Correlation Analysis; Pattern Evaluation Measures; Introduction to Constraint-Based Association Mining.


What Is Frequent Pattern Analysis?

Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set

First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining

Motivation: Finding inherent regularities in data

What products were often purchased together? Beer and diapers?!

What are the subsequent purchases after buying a PC?

What kinds of DNA are sensitive to this new drug?

Can we automatically classify web documents?

Applications

Basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis.


Why Is Freq. Pattern Mining Important?

Discloses an intrinsic and important property of data sets

Forms the foundation for many essential data mining tasks

Association, correlation, and causality analysis

Sequential, structural (e.g., sub-graph) patterns

Pattern analysis in spatiotemporal, multimedia, time-series, and stream data

Classification: associative classification

Cluster analysis: frequent pattern-based clustering

Data warehousing: iceberg cube and cube-gradient

Semantic data compression: fascicles

Broad applications


Basic Concepts: Frequent Patterns and Association Rules

Itemset X = {x1, …, xk}

Find all the rules X ⇒ Y with minimum support and confidence

support, s: probability that a transaction contains X ∪ Y

confidence, c: conditional probability that a transaction having X also contains Y

Let supmin = 50%, confmin = 50%

Freq. Pat.: {A:3, B:3, D:4, E:3, AD:3}

Association rules:

A ⇒ D (60%, 100%)

D ⇒ A (60%, 75%)

[Venn diagram: customers who buy beer, customers who buy diapers, and customers who buy both]

Transaction-id Items bought

10 A, B, D

20 A, C, D

30 A, D, E

40 B, E, F

50 B, C, D, E, F


Association Rule Mining

Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction

Market-Basket transactions

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke

Example of Association Rules

{Diaper} ⇒ {Beer}

{Milk, Bread} ⇒ {Eggs, Coke}

{Beer, Bread} ⇒ {Milk}

Implication means co-occurrence, not causality!


Definition: Frequent Itemset

Itemset: a collection of one or more items

Example: {Milk, Bread, Diaper}

k-itemset: an itemset that contains k items

Support count (σ): frequency of occurrence of an itemset

E.g. σ({Milk, Bread, Diaper}) = 2

Support (s): fraction of transactions that contain an itemset

E.g. s({Milk, Bread, Diaper}) = 2/5

Frequent itemset: an itemset whose support is greater than or equal to a minsup threshold

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke
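The support count σ and support fraction s above can be sketched in a few lines (an illustration, not the slides' own code), using the five market-basket transactions:

```python
# Compute sigma and s for {Milk, Bread, Diaper} on the table above.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(itemset):
    """Support count: number of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t)

x = {"Milk", "Bread", "Diaper"}
print(sigma(x))                      # 2 (transactions 4 and 5)
print(sigma(x) / len(transactions))  # 0.4, i.e. s = 2/5
```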


Definition: Association Rule

Example: {Milk, Diaper} ⇒ {Beer}

s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4

c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 ≈ 0.67

Association Rule: an implication expression of the form X ⇒ Y, where X and Y are itemsets

Example: {Milk, Diaper} ⇒ {Beer}

Rule Evaluation Metrics

– Support (s): fraction of transactions that contain both X and Y

– Confidence (c): measures how often items in Y appear in transactions that contain X

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke
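The worked numbers for {Milk, Diaper} ⇒ {Beer} can be checked with a short sketch (not from the slides) over the same table:

```python
# Check s and c for {Milk, Diaper} => {Beer} against the table above.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def sigma(itemset):
    """Support count of the itemset."""
    return sum(1 for t in transactions if itemset <= t)

s = sigma({"Milk", "Diaper", "Beer"}) / len(transactions)   # 2/5
c = sigma({"Milk", "Diaper", "Beer"}) / sigma({"Milk", "Diaper"})  # 2/3
print(round(s, 2), round(c, 2))   # 0.4 0.67
```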


Association Rule Mining Task

Given a set of transactions T, the goal of association rule mining is to find all rules having

support ≥ minsup threshold

confidence ≥ minconf threshold

Brute-force approach: List all possible association rules

Compute the support and confidence for each rule

Prune rules that fail the minsup and minconf thresholds

Computationally prohibitive!


Mining Association Rules

Example of Rules:

{Milk, Diaper} ⇒ {Beer} (s=0.4, c=0.67)

{Milk, Beer} ⇒ {Diaper} (s=0.4, c=1.0)

{Diaper, Beer} ⇒ {Milk} (s=0.4, c=0.67)

{Beer} ⇒ {Milk, Diaper} (s=0.4, c=0.67)

{Diaper} ⇒ {Milk, Beer} (s=0.4, c=0.5)

{Milk} ⇒ {Diaper, Beer} (s=0.4, c=0.5)

TID Items

1 Bread, Milk

2 Bread, Diaper, Beer, Eggs

3 Milk, Diaper, Beer, Coke

4 Bread, Milk, Diaper, Beer

5 Bread, Milk, Diaper, Coke

Observations:

• All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}

• Rules originating from the same itemset have identical support but can have different confidence

• Thus, we may decouple the support and confidence requirements


Mining Association Rules

Two-step approach:

1. Frequent Itemset Generation – generate all itemsets whose support ≥ minsup

2. Rule Generation – generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset

Frequent itemset generation is still computationally expensive


Closed Patterns and Max-Patterns

A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27 × 10^30 sub-patterns!

Solution: Mine closed patterns and max-patterns instead

An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT'99)

An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98)

Closed pattern is a lossless compression of freq. patterns

Reducing the # of patterns and rules


Closed Patterns and Max-Patterns

Exercise. DB = {<a1, …, a100>, <a1, …, a50>}, min_sup = 1.

What is the set of closed itemsets? <a1, …, a100>: 1 and <a1, …, a50>: 2

What is the set of max-patterns? <a1, …, a100>: 1
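The 100-item exercise is too large to enumerate, but the same closed/max distinction can be checked by brute force on a tiny analogue of it: two nested transactions with min_sup = 1 (a sketch, not from the slides; `abc`/`ab` stand in for the two long transactions):

```python
# Brute-force closed and max itemsets on a tiny analogue of the exercise.
from itertools import combinations

db = [frozenset("abc"), frozenset("ab")]   # stand-ins for <a1..a100>, <a1..a50>
min_sup = 1

items = sorted(set().union(*db))

def sup(x):
    return sum(1 for t in db if x <= t)

frequent = {frozenset(c): sup(frozenset(c))
            for k in range(1, len(items) + 1)
            for c in combinations(items, k)
            if sup(frozenset(c)) >= min_sup}

# closed: no proper superset with the SAME support
closed = {x for x in frequent
          if not any(x < y and frequent[y] == frequent[x] for y in frequent)}
# maximal: no frequent proper superset at all
maximal = {x for x in frequent if not any(x < y for y in frequent)}

print(sorted("".join(sorted(x)) for x in closed))   # ['ab', 'abc']
print(sorted("".join(sorted(x)) for x in maximal))  # ['abc']
```

This mirrors the exercise: both full transactions are closed (with supports 2 and 1), but only the longer one is maximal.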


Frequent Itemset Generation

[Itemset lattice over items A–E: the empty set (null) at the top; below it all 1-itemsets A–E, then all 2-itemsets AB … DE, the 3-itemsets, the 4-itemsets, and finally ABCDE at the bottom.]

Given d items, there are 2^d possible candidate itemsets


Scalable Methods for Mining Frequent Patterns

The downward closure property of frequent patterns

Any subset of a frequent itemset must be frequent

If {beer, diaper, nuts} is frequent, so is {beer, diaper}

i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}

Scalable mining methods: Three major approaches

Apriori (Agrawal & Srikant@VLDB’94)

Freq. pattern growth (FPgrowth—Han, Pei & Yin @SIGMOD’00)

Vertical data format approach (Charm—Zaki & Hsiao @SDM’02)


Apriori: A Candidate Generation-and-Test Approach

Apriori pruning principle: if there is any itemset which is infrequent, its supersets should not be generated/tested! (Agrawal & Srikant @VLDB'94; Mannila, et al. @ KDD'94)

That is, if an itemset is frequent, then all of its subsets must also be frequent.

The Apriori principle holds due to the following property of the support measure: the support of an itemset never exceeds the support of its subsets,

∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)

This is known as the anti-monotone property of support.


Method:

Initially, scan DB once to get the frequent 1-itemsets

Generate length-(k+1) candidate itemsets from length-k frequent itemsets

Test the candidates against DB

Terminate when no frequent or candidate set can be generated


Illustrating Apriori Principle

[Itemset lattice: once an itemset is found to be infrequent, all of its supersets are pruned from the search space.]


Illustrating Apriori Principle

Items (1-itemsets): Bread 4, Coke 2, Milk 4, Beer 3, Diaper 4, Eggs 1

Pairs (2-itemsets): {Bread,Milk} 3, {Bread,Beer} 2, {Bread,Diaper} 3, {Milk,Beer} 2, {Milk,Diaper} 3, {Beer,Diaper} 3

Triplets (3-itemsets): {Bread,Milk,Diaper} 3

Minimum Support = 3 (no need to generate candidates involving Coke or Eggs)

If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 41 candidates

With support-based pruning: 6 + 6 + 1 = 13
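The 13-versus-41 count can be reproduced mechanically. The sketch below (an illustration, not the slides' code) counts candidates level by level on the same table with minsup = 3:

```python
# Reproduce the 6 + 6 + 1 = 13 candidate count with support-based pruning.
from itertools import combinations

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
minsup = 3

def sup(x):
    return sum(1 for t in transactions if set(x) <= t)

items = sorted(set().union(*transactions))            # 6 items -> 6 counts
f1 = [i for i in items if sup([i]) >= minsup]         # Bread, Milk, Beer, Diaper
c2 = list(combinations(f1, 2))                        # 6 pairs from frequent items
f2 = {frozenset(p) for p in c2 if sup(p) >= minsup}
c3 = [t for t in combinations(f1, 3)                  # keep only triplets whose
      if all(frozenset(p) in f2 for p in combinations(t, 2))]  # pairs are all frequent
print(len(items) + len(c2) + len(c3))                 # 13, vs 41 without pruning
```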


The Apriori Algorithm—An Example

Database TDB (supmin = 2):

TID 10: A, C, D
TID 20: B, C, E
TID 30: A, B, C, E
TID 40: B, E

1st scan → C1: {A}: 2, {B}: 3, {C}: 3, {D}: 1, {E}: 3

L1: {A}: 2, {B}: 3, {C}: 3, {E}: 3

C2: {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}

2nd scan → C2 with counts: {A,B}: 1, {A,C}: 2, {A,E}: 1, {B,C}: 2, {B,E}: 3, {C,E}: 2

L2: {A,C}: 2, {B,C}: 2, {B,E}: 3, {C,E}: 2

C3: {B,C,E}

3rd scan → L3: {B,C,E}: 2


The Apriori Algorithm

Pseudo-code:

Ck: candidate itemset of size k
Lk: frequent itemset of size k

L1 = {frequent items};
for (k = 1; Lk ≠ ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
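A minimal Python rendering of the pseudocode above (a sketch, not the book's reference implementation), run on the Page-20 example database with min_sup = 2:

```python
# Apriori: level-wise candidate generation and testing.
from itertools import combinations

def apriori(db, min_sup):
    db = [frozenset(t) for t in db]
    items = sorted(set().union(*db))

    def count(c):
        return sum(1 for t in db if c <= t)

    # L1: frequent 1-itemsets
    L = {frozenset([i]) for i in items if count(frozenset([i])) >= min_sup}
    all_frequent = {}
    k = 1
    while L:
        all_frequent.update({x: count(x) for x in L})
        # C(k+1): self-join Lk, then prune candidates with an infrequent k-subset
        C = {a | b for a in L for b in L if len(a | b) == k + 1}
        C = {c for c in C if all(frozenset(s) in L for s in combinations(c, k))}
        L = {c for c in C if count(c) >= min_sup}
        k += 1
    return all_frequent

db = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
freq = apriori(db, 2)
print(sorted("".join(sorted(x)) for x in freq))
# ['A', 'AC', 'B', 'BC', 'BCE', 'BE', 'C', 'CE', 'E']
```

The output matches the example's L1 ∪ L2 ∪ L3, with {B, C, E} as the only frequent 3-itemset (support 2).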


Important Details of Apriori

How to generate candidates?

Step 1: self-joining Lk

Step 2: pruning

How to count supports of candidates?

Example of Candidate-generation

L3={abc, abd, acd, ace, bcd}

Self-joining: L3*L3

abcd from abc and abd

acde from acd and ace

Pruning:

acde is removed because ade is not in L3

C4={abcd}
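The self-join and prune steps above can be sketched directly (an illustration, not the slides' code); it reproduces C4 = {abcd} from the given L3:

```python
# Candidate generation: self-join Lk on the first k-1 items, then prune.
from itertools import combinations

L3 = [tuple(x) for x in ("abc", "abd", "acd", "ace", "bcd")]

def gen_candidates(Lk):
    k = len(Lk[0])
    Lk_set = set(Lk)
    out = []
    for a in Lk:
        for b in Lk:
            # self-join: merge itemsets sharing their first k-1 items
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                cand = a + (b[-1],)
                # prune: every k-subset of the candidate must be in Lk
                if all(s in Lk_set for s in combinations(cand, k)):
                    out.append(cand)
    return out

print(gen_candidates(L3))  # [('a', 'b', 'c', 'd')]
```

Here `acde` is produced by the join of `acd` and `ace` but removed in the prune step, because its subset `ade` is not in L3.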


What’s Next ?


Having found all of the frequent itemsets, the Apriori algorithm is complete.

These frequent itemsets will now be used to generate strong association rules (rules that satisfy both minimum support and minimum confidence).


Generating Association Rules from Frequent Itemsets


Procedure:

For each frequent itemset l, generate all nonempty subsets of l.

For every nonempty subset s of l, output the rule "s ⇒ (l − s)" if support_count(l) / support_count(s) ≥ min_conf, where min_conf is the minimum confidence threshold.

Example: let l = {I1, I2, I5}. Its nonempty proper subsets are {I1,I2}, {I1,I5}, {I2,I5}, {I1}, {I2}, {I5}.
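The procedure can be sketched as below for l = {I1, I2, I5}. The support counts are assumed for illustration (the slide does not give them); they match the usual textbook walk-through of this example, with min_conf = 70%:

```python
# Rule generation from one frequent itemset, with assumed support counts.
from itertools import combinations

sup = {  # assumed support counts for illustration
    frozenset(["I1", "I2", "I5"]): 2,
    frozenset(["I1", "I2"]): 4, frozenset(["I1", "I5"]): 2, frozenset(["I2", "I5"]): 2,
    frozenset(["I1"]): 6, frozenset(["I2"]): 7, frozenset(["I5"]): 2,
}
l = frozenset(["I1", "I2", "I5"])
min_conf = 0.7

strong = []
for k in range(1, len(l)):               # every nonempty proper subset s of l
    for s in combinations(sorted(l), k):
        s = frozenset(s)
        conf = sup[l] / sup[s]           # support_count(l) / support_count(s)
        if conf >= min_conf:
            strong.append((tuple(sorted(s)), tuple(sorted(l - s)), conf))

for lhs, rhs, conf in strong:
    print(lhs, "=>", rhs, f"conf={conf:.0%}")
```

With these counts, three strong rules survive: {I5} ⇒ {I1, I2}, {I1, I5} ⇒ {I2}, and {I2, I5} ⇒ {I1}, each with 100% confidence.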


Methods to Improve Apriori’s Efficiency


Hash-based itemset counting: A k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent. Hash fn.: h(x,y)=((order of x)*10+(order of y)) mod 7

Transaction reduction: A transaction that does not contain any frequent k-itemset is useless in subsequent scans.

Partitioning: Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB.

Sampling: mining on a subset of given data, lower support threshold + a method to determine the completeness.

Dynamic itemset counting: add new candidate itemsets only when all of their subsets are estimated to be frequent.


1. Hash-based technique: can be used to reduce the size of the candidate k-itemsets, Ck, for k > 1. For example, when scanning each transaction in the database to generate the frequent 1-itemsets, L1, from the candidate 1-itemsets in C1, we can generate all of the 2-itemsets for each transaction, hash them into the buckets of a hash table structure, and increase the corresponding bucket counts:

a. h(x, y) = ((order of x) × 10 + (order of y)) mod 7

b. A 2-itemset whose corresponding bucket count in the hash table is below the support threshold cannot be frequent and thus should be removed from the candidate set.
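A small sketch of this bucket counting, using the hash function above. The item ordering and the nine toy transactions are assumed for illustration (they follow the common textbook example; they are not given on this slide):

```python
# Hash-based 2-itemset counting with h(x, y) = (order(x)*10 + order(y)) mod 7.
from itertools import combinations

order = {"I1": 1, "I2": 2, "I3": 3, "I4": 4, "I5": 5}   # assumed item order
transactions = [                                         # assumed toy data
    {"I1", "I2", "I5"}, {"I2", "I4"}, {"I2", "I3"},
    {"I1", "I2", "I4"}, {"I1", "I3"}, {"I2", "I3"},
    {"I1", "I3"}, {"I1", "I2", "I3", "I5"}, {"I1", "I2", "I3"},
]
min_sup = 2

buckets = [0] * 7
for t in transactions:
    # hash every 2-itemset of the transaction while scanning for L1
    for x, y in combinations(sorted(t, key=order.get), 2):
        h = (order[x] * 10 + order[y]) % 7
        buckets[h] += 1

print(buckets)  # [2, 2, 4, 2, 2, 4, 4]
```

Any 2-itemset hashing to a bucket whose count is below min_sup cannot be frequent, so it can be dropped from C2 without counting it individually.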


2. Transaction reduction: a transaction that does not contain any frequent k-itemsets cannot contain any frequent (k+1)-itemsets. Therefore, such a transaction can be marked or removed from further consideration, because subsequent scans of the database for j-itemsets, where j > k, will not require it.


3. Partitioning (partitioning the data to find candidate itemsets): a partitioning technique can be used that requires just two database scans to mine the frequent itemsets. It consists of two phases.

In phase I, the algorithm subdivides the transactions of D into n non-overlapping partitions. If the minimum support threshold for transactions in D is min_sup, then the minimum support count for a partition is min_sup × the number of transactions in that partition. For each partition, all frequent itemsets within the partition are found. These are referred to as local frequent itemsets.

A local frequent itemset may or may not be frequent with respect to the entire

database, D. Any itemset that is potentially frequent with respect to D must occur as a frequent itemset in at least one of the partitions. Therefore all local frequent itemsets are candidate itemsets with respect to D. The collection of frequent itemsets from all partitions forms the global candidate itemsets with respect to D.

In phase II, a second scan of D is conducted in which the actual support of each candidate is assessed in order to determine the global frequent itemsets
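The two phases above can be sketched as follows (an illustration on an assumed four-transaction database, not the slides' code): phase I mines each partition locally at a scaled support count, phase II rescans the whole database to keep only the globally frequent candidates.

```python
# Two-phase partitioning: local mining, then one global counting scan.
from itertools import combinations

def local_frequent(part, min_sup_count):
    """Brute-force frequent itemsets within one partition."""
    items = sorted(set().union(*part))
    out = set()
    for k in range(1, len(items) + 1):
        found = False
        for c in combinations(items, k):
            if sum(1 for t in part if set(c) <= t) >= min_sup_count:
                out.add(frozenset(c))
                found = True
        if not found:          # no frequent k-itemset -> none longer either
            break
    return out

db = [{"A", "B"}, {"A", "B", "C"}, {"B", "C"}, {"A", "C"}]  # assumed data
min_sup = 0.5   # relative minimum support

parts = [db[:2], db[2:]]
candidates = set()
for p in parts:  # phase I: local support count scales with |partition|
    candidates |= local_frequent(p, min_sup * len(p))

# phase II: one scan of the full DB keeps the globally frequent itemsets
globally_frequent = {c for c in candidates
                     if sum(1 for t in db if c <= t) >= min_sup * len(db)}
print(sorted("".join(sorted(c)) for c in globally_frequent))
```

Note that {A, B, C} is locally frequent in the first partition but fails the global count in phase II, illustrating why the second scan is needed.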


4. Sampling (mining on a subset of a given data): The basic idea of the sampling approach is to pick a random sample S of the given data D, and then search for frequent itemsets in S instead of D. In this way, we trade off some degree of accuracy against efficiency.

The sample size of S is such that the search for frequent itemsets in S can be done in main memory, and so only one scan of the transactions in S is required overall.

Because we are searching for frequent itemsets in S rather than in D, it is possible that we will miss some of the global frequent itemsets.

To lessen this possibility, we use a lower support threshold than minimum support to find the frequent itemsets local to S.


5. Dynamic itemset counting (adding candidate itemsets at different points during a scan): a dynamic itemset counting technique was proposed in which the database is partitioned into blocks marked by start points.

In this variation, new candidate itemsets can be added at any start point, unlike Apriori, which determines new candidate itemsets only immediately before each complete database scan.

The resulting algorithm requires fewer database scans than Apriori.


Mining Frequent Patterns Without Candidate Generation


Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure:

– highly condensed, but complete for frequent pattern mining

– avoids costly database scans

Develop an efficient, FP-tree-based frequent pattern mining method:

– a divide-and-conquer methodology: decompose mining tasks into smaller ones

– avoid candidate generation: sub-database test only!


FP-Growth Method: Construction of FP-Tree


First, create the root of the tree, labeled "null".

Scan the database D a second time. (The first scan was used to create the 1-itemsets and then L.)

The items in each transaction are processed in L order (i.e., sorted in descending support order).

A branch is created for each transaction, with each node holding an item and its support count, separated by a colon.

Whenever the same node is encountered in another transaction, we just increment the support count of the common node or prefix.

To facilitate tree traversal, an item header table is built so that each item points to its occurrences in the tree via a chain of node-links.

Now the problem of mining frequent patterns in the database is transformed into that of mining the FP-tree.
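The construction just described can be sketched in a few lines. The code below (an illustration, not the slides' own implementation; the header table is omitted for brevity) builds the tree from Example 1's transactions with minimum support count 3:

```python
# Minimal FP-tree construction: count items, fix the L order, insert paths.
from collections import Counter

transactions = [set("acdfgimp"), set("abcflmo"), set("bfhjo"),
                set("bcknp"), set("aceflmnp")]
min_sup = 3

counts = Counter(i for t in transactions for i in t)
freq = {i: c for i, c in counts.items() if c >= min_sup}
# L order: descending support; the f/c tie (both 4) is broken as on the
# slides, which list f before c
L = sorted(freq, key=lambda i: (-freq[i], i != "f", i))

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

root = Node(None)
for t in transactions:
    node = root
    for item in [i for i in L if i in t]:            # process in L order
        child = node.children.setdefault(item, Node(item))
        child.count += 1                             # shared prefix: bump count
        node = child

print(L)                                        # ['f', 'c', 'a', 'b', 'm', 'p']
print(root.children["f"].count)                 # 4
print(root.children["f"].children["c"].count)   # 3
print(root.children["c"].count)                 # 1  (the c, b, p branch)
```

The printed counts match the tree built step by step on the following slides.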



Mining the FP-Tree by Creating Conditional (sub) pattern bases


Steps:

1. Start from each frequent length-1 pattern (as an initial suffix pattern).

2. Construct its conditional pattern base, which consists of the set of prefix paths in the FP-tree co-occurring with the suffix pattern.

3. Then construct its conditional FP-tree and perform mining on that tree.

4. Pattern growth is achieved by concatenation of the suffix pattern with the frequent patterns generated from a conditional FP-tree.

5. The union of all frequent patterns (generated by step 4) gives the required frequent itemsets.


FP-Tree Example Continued



Out of these, only I1 and I2 are selected for the conditional FP-tree, because I3 does not satisfy the minimum support count:

For I1, the support count in the conditional pattern base = 1 + 1 = 2

For I2, the support count in the conditional pattern base = 1 + 1 = 2

For I3, the support count in the conditional pattern base = 1

Thus the support count for I3 is less than the required min_sup, which is 2 here.

Now we have the conditional FP-tree. All frequent patterns corresponding to suffix I5 are generated by considering all possible combinations of I5 and the conditional FP-tree.

The same procedure is applied to suffixes I4, I3, and I1. Note: I2 is not taken into consideration as a suffix because it doesn't have any prefix at all.


Benefits of the FP-tree Structure

Completeness

Preserve complete information for frequent pattern mining

Never break a long pattern of any transaction

Compactness

Reduce irrelevant info—infrequent items are gone

Items in frequency descending order: the more frequently occurring, the more likely to be shared

Never larger than the original database (not counting node-links and the count fields)

For Connect-4 DB, compression ratio could be over 100


Why Is Frequent Pattern Growth Fast?


Performance studies show:

– FP-growth is an order of magnitude faster than Apriori, and is also faster than tree-projection

Reasoning:

– no candidate generation, no candidate test

– uses a compact data structure

– eliminates repeated database scans

– the basic operations are counting and FP-tree building


Example 1

Transaction Items

T1 a, c, d, f, g, i, m, p

T2 a, b, c, f, l, m, o

T3 b, f, h, j, o

T4 b, c, k, n, p

T5 a, c, e, f, l, m, n, p

Min support = 60%



Step 1

Item Support
a 3
b 3
c 4
d 1
e 1
f 4
g 1
h 1
i 1
j 1
k 1
l 2
m 3
n 2
o 2
p 3

Find support of every item.


Step 2

Choose items that are above min support and arrange them in descending order.

item support

f 4

c 4

a 3

b 3

m 3

p 3


Step 3

Rewrite transactions with only those items that have more than min support, in descending order

Transaction | Items | Frequent items
T1 | a, c, d, f, g, i, m, p | f, c, a, m, p
T2 | a, b, c, f, l, m, o | f, c, a, b, m
T3 | b, f, h, j, o | f, b
T4 | b, c, k, n, p | c, b, p
T5 | a, c, e, f, l, m, n, p | f, c, a, m, p
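The "Frequent items" column can be reproduced mechanically (a sketch, not the slides' code), using Step 2's descending-support ordering f, c, a, b, m, p:

```python
# Project each transaction onto the frequent items, in descending-support order.
raw = {
    "T1": set("acdfgimp"), "T2": set("abcflmo"), "T3": set("bfhjo"),
    "T4": set("bcknp"),    "T5": set("aceflmnp"),
}
order = ["f", "c", "a", "b", "m", "p"]   # items at or above min support (Step 2)
projected = {tid: [i for i in order if i in items] for tid, items in raw.items()}
print(projected["T1"])   # ['f', 'c', 'a', 'm', 'p']
print(projected["T4"])   # ['c', 'b', 'p']
```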


Step 4

Introduce a root node: Root = null.

Give the first transaction to the root.

Transaction T1: f, c, a, m, p

Root → f:1 → c:1 → a:1 → m:1 → p:1


Transaction T2: f, c, a, b, m

Root → f:2 → c:2 → a:2 → m:1 → p:1

with a new branch under a: a:2 → b:1 → m:1


Transaction T3: f, b

Root → f:3 → c:2 → a:2 → m:1 → p:1 (and a:2 → b:1 → m:1)

with a new branch under f: f:3 → b:1


Transaction T4: c, b, p

The f-subtree is unchanged; a new branch starts at the root: Root → c:1 → b:1 → p:1


Transaction T5: f, c, a, m, p

Final FP-tree:

Root
  f:4
    c:3
      a:3
        m:2
          p:2
        b:1
          m:1
    b:1
  c:1
    b:1
      p:1


Mining the FP Tree

Item | Conditional pattern base | Conditional FP-tree
p | fcam:2, cb:1 | c:3
m | fca:2, fcab:1 | f:3, c:3, a:3
b | fca:1, f:1, c:1 | empty
a | fc:3 | f:3, c:3
c | f:3 | f:3
f | empty | empty


Example 2

Transaction Id Items Purchased

t1 b, e

t2 a, b, c, e

t3 b, c, e

t4 a, c, d

t5 a

Given: min support = 40%


Step 1/2: find the support of every item (min support = 40% of 5 transactions = support count 2):

Items Support
a 3
b 3
c 3
d 1 (dropped: below min support)
e 3

Step 3: rewrite the transactions with the remaining items, in descending order:

Transactions | Items in desc order
t1 | b, e
t2 | b, e, c, a
t3 | b, e, c
t4 | a, c
t5 | a


Step 4: build the FP-tree:

Root → b:3 → e:3 → c:2 → a:1

Root → a:2 → c:1

Step 5 – Mining the FP-tree:

Item | Conditional pattern base | Conditional FP-tree
a | bec:1 | empty
b | empty | empty
c | be:2, a:1 | b:2, e:2
e | b:3 | b:3


Page 55

Mining frequent itemsets using vertical data format


The Apriori and the FP-growth methods mine frequent patterns from a set of transactions in TID-itemset format, where TID is a transaction id and itemset is the set of items bought in transaction TID. This data format is known as horizontal data format.

Alternatively, data can be presented in item-TID_set format, where item is an item name and TID_set is the set of identifiers of the transactions containing the item. This format is known as the vertical data format.


Page 59

Procedure


First we transform the horizontally formatted data to the vertical format by scanning the data set once.

The support count of an itemset is simply the length of the TID_set of the itemset.

Starting with k = 1, the frequent k-itemsets are used to construct candidate (k+1)-itemsets; the TID_set of a candidate is obtained by intersecting the TID_sets of the joined k-itemsets.

This process repeats with k incremented by 1 each time until no frequent itemsets or no candidate itemsets can be found.
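The procedure above can be sketched as an Eclat-style miner over Python sets, run here on the transactions of Example 2 (min support 40% of 5 = 2). The function name and the simple pairwise join are my own choices, not from the slides:

```python
from itertools import combinations

def mine_vertical(db, min_sup):
    # One scan: build the vertical format, item -> TID_set.
    tidsets = {}
    for tid, items in db.items():
        for item in items:
            tidsets.setdefault(frozenset([item]), set()).add(tid)
    # Support of an itemset is simply the length of its TID_set.
    freq = {s: t for s, t in tidsets.items() if len(t) >= min_sup}
    result = dict(freq)
    while freq:
        k = len(next(iter(freq)))
        cand = {}
        # Join pairs of frequent k-itemsets; the TID_set of the union is the
        # intersection of their TID_sets, so no database rescan is needed.
        for (a, ta), (b, tb) in combinations(freq.items(), 2):
            u = a | b
            if len(u) == k + 1 and u not in cand:
                t = ta & tb
                if len(t) >= min_sup:
                    cand[u] = t
        freq = cand
        result.update(cand)
    return {s: len(t) for s, t in result.items()}

# Example 2's transactions t1..t5.
db = {1: set("be"), 2: set("abce"), 3: set("bce"), 4: set("acd"), 5: set("a")}
support = mine_vertical(db, min_sup=2)
```

On this data the miner reaches the same frequent itemsets as the FP-tree of Example 2, e.g. {b, c, e} with support 2.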

Page 60

Advantages


Advantages of this algorithm:

Better than Apriori in the generation of candidate (k+1)-itemsets from frequent k-itemsets.

There is no need to scan the database to find the support of (k+1)-itemsets (for k >= 1), because the TID_set of each k-itemset carries the complete information required for counting that support.

The disadvantage of this algorithm is that TID_sets can be very long, taking substantial memory space as well as computation time to intersect.

Page 61


MaxMiner: Mining Max-patterns

1st scan: find frequent items

A, B, C, D, E

2nd scan: find support for

AB, AC, AD, AE, ABCDE

BC, BD, BE, BCDE

CD, CE, CDE, DE

Since BCDE is a max-pattern, no need to check BCD, BDE,

CDE in later scan

R. Bayardo. Efficiently mining long patterns from

databases. In SIGMOD’98

Tid   Items

10    A, B, C, D, E

20    B, C, D, E

30    A, C, D, F

Potential max-patterns

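The pruning claim can be checked by brute force over the three transactions. Assuming min_sup = 2 (the slide does not state it), BCDE and ACD come out as the only max-patterns, so none of BCDE's proper subsets (BCD, BDE, CDE, ...) need a separate test:

```python
from itertools import combinations

db = [set("ABCDE"), set("BCDE"), set("ACDF")]
min_sup = 2
items = sorted(set().union(*db))

def support(s):
    # Number of transactions containing every item of s.
    return sum(1 for t in db if set(s) <= t)

frequent = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(c) >= min_sup]
# A max-pattern is a frequent itemset with no frequent proper superset.
max_patterns = [s for s in frequent if not any(s < t for t in frequent)]
```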

Page 62


Mining Frequent Closed Patterns: CLOSET

Flist: list of all frequent items in support ascending order

Flist: d-a-f-e-c

Divide search space

Patterns having d

Patterns having a but no d, etc.

Find frequent closed pattern recursively

Every transaction having d also has cfa ⇒ cfad is a frequent closed pattern

J. Pei, J. Han & R. Mao, "CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets", DMKD'00.

TID   Items

10    a, c, d, e, f

20    a, b, e

30    c, e, f

40    a, c, d, f

50    c, e, f

Min_sup=2

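With min_sup = 2, a brute-force check over the five transactions confirms the claim: cfad (written acdf below, itemsets are unordered) is closed, while d alone is not, since its superset acdf has the same support. This sketch is mine and only verifies closedness; it is not the CLOSET algorithm itself:

```python
from itertools import combinations

db = [set("acdef"), set("abe"), set("cef"), set("acdf"), set("cef")]
min_sup = 2
items = sorted(set().union(*db))

def support(s):
    # Number of transactions containing every item of s.
    return sum(1 for t in db if set(s) <= t)

frequent = [frozenset(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)
            if support(c) >= min_sup]
# Closed: frequent, with no proper superset of equal support.
closed = [s for s in frequent
          if not any(s < t and support(t) == support(s) for t in frequent)]
```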

Page 63


Mining Multiple-Level Association Rules

Items often form hierarchies.

Flexible support settings: items at the lower level are expected to have lower support.

Exploration of shared multi-level mining (Agrawal & Srikant @VLDB'95, Han & Fu @VLDB'95)

Uniform support:
Level 1: min_sup = 5%
Level 2: min_sup = 5%

Reduced support:
Level 1: min_sup = 5%
Level 2: min_sup = 3%

Example hierarchy: Milk [support = 10%], with children 2% Milk [support = 6%] and Skim Milk [support = 4%]


Page 64


Multi-level Association: Redundancy Filtering

Some rules may be redundant due to “ancestor”

relationships between items.

Example

milk ⇒ wheat bread [support = 8%, confidence = 70%]

2% milk ⇒ wheat bread [support = 2%, confidence = 72%]

We say the first rule is an ancestor of the second rule.

A rule is redundant if its support is close to the “expected”

value, based on the rule’s ancestor.
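The "expected" support can be computed from the ancestor rule and the descendant item's share of its parent. Assuming, for illustration only (the share is not stated on the slide), that 2% milk makes up a quarter of all milk transactions:

```python
ancestor_support = 0.08   # milk => wheat bread [8%]
milk_share_2pct = 0.25    # assumed: 2% milk is 25% of milk transactions
expected = ancestor_support * milk_share_2pct   # 8% * 25% = 2%
observed = 0.02           # 2% milk => wheat bread [2%]

# Support (and confidence: 72% vs 70%) are close to the expected values,
# so the more specific rule carries no extra information: it is redundant.
is_redundant = abs(observed - expected) < 0.005
```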


Page 65


Mining Multi-Dimensional Association

Single-dimensional rules: buys(X, "milk") ⇒ buys(X, "bread")

Multi-dimensional rules: 2 or more dimensions or predicates

Inter-dimension assoc. rules (no repeated predicates): age(X, "19-25") ∧ occupation(X, "student") ⇒ buys(X, "coke")

Hybrid-dimension assoc. rules (repeated predicates): age(X, "19-25") ∧ buys(X, "popcorn") ⇒ buys(X, "coke")

Categorical Attributes: finite number of possible values, no ordering among values—data cube approach

Quantitative Attributes: numeric, implicit ordering among values—discretization, clustering, and gradient approaches


Page 66

Interestingness Measure: Correlations (Lift)

play basketball ⇒ eat cereal [40%, 66.7%] is misleading:

The overall % of students eating cereal is 75% > 66.7%.

play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence.

Measure of dependent/correlated events: lift

lift(A, B) = P(A ∪ B) / (P(A) P(B))

            Basketball   Not basketball   Sum (row)

Cereal      2000         1750             3750

Not cereal  1000         250              1250

Sum (col.)  3000         2000             5000

lift(B, C) = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89

lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33

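The two lift values can be reproduced directly from the contingency table (a minimal sketch; the variable names are mine):

```python
def lift(n_ab, n_a, n_b, n):
    """lift(A, B) = P(A and B) / (P(A) * P(B)), estimated from counts."""
    return (n_ab / n) / ((n_a / n) * (n_b / n))

n = 5000                                   # total students
basketball, cereal, not_cereal = 3000, 3750, 1250
both = 2000                                # basketball and cereal
basketball_no_cereal = 1000                # basketball and no cereal

lift_bc = lift(both, basketball, cereal, n)
lift_bnc = lift(basketball_no_cereal, basketball, not_cereal, n)
```

lift_bc comes out below 1 (negative correlation between basketball and cereal), lift_bnc above 1, matching the slide.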

Page 67


Are lift and χ² Good Measures of Correlation?

"buy walnuts ⇒ buy milk [1%, 80%]" is misleading

if 85% of customers buy milk

Support and confidence are not good to represent correlations

So many interestingness measures? (Tan, Kumar, Srivastava @KDD'02)

            Milk    No Milk   Sum (row)

Coffee      m, c    ~m, c     c

No Coffee   m, ~c   ~m, ~c    ~c

Sum (col.)  m       ~m

lift(A, B) = P(A ∪ B) / (P(A) P(B))

all_conf(X) = sup(X) / max_item_sup(X)

coh(X) = sup(X) / |universe(X)|

DB   m, c   ~m, c   m~c     ~m~c      lift   all-conf   coh    χ²

A1   1000   100     100     10,000    9.26   0.91       0.83   9055

A2   100    1000    1000    100,000   8.44   0.09       0.05   670

A3   1000   100     10000   100,000   9.18   0.09       0.09   8172

A4   1000   1000    1000    1000      1      0.5        0.33   0

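The lift, all-confidence, and coherence columns of the table can be recomputed from the four cell counts. A sketch under my own naming: coherence is taken here as sup(X) divided by the number of transactions containing at least one item of X, matching the coh formula above.

```python
def measures(mc, nm_c, m_nc, nm_nc):
    """Return (lift, all_conf, coherence) for the 2x2 cell counts
    (m&c, ~m&c, m&~c, ~m&~c)."""
    n = mc + nm_c + m_nc + nm_nc
    sup_m, sup_c = mc + m_nc, mc + nm_c
    lift = (mc / n) / ((sup_m / n) * (sup_c / n))
    all_conf = mc / max(sup_m, sup_c)      # sup(X) / max_item_sup(X)
    coherence = mc / (mc + nm_c + m_nc)    # sup(X) / |universe(X)|
    return lift, all_conf, coherence

a1 = measures(1000, 100, 100, 10_000)      # row A1 of the table
a4 = measures(1000, 1000, 1000, 1000)      # row A4 of the table
```

Row A1 shows the disagreement the slide is after: lift is high (9.26) while A4, with lift exactly 1, still has all-conf 0.5.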

Page 68


Which Measures Should Be Used?

lift and χ² are not good measures for correlations in large transactional DBs

all-conf or coherence could be good measures (Omiecinski@TKDE’03)

Both all-conf and coherence have the downward closure property

Efficient algorithms can be derived for mining (Lee et al. @ICDM’03sub)


Page 69


Constraint-based (Query-Directed) Mining

Finding all the patterns in a database autonomously? —unrealistic!

The patterns could be too many but not focused!

Data mining should be an interactive process

User directs what to be mined using a data mining query language (or a graphical user interface)

Constraint-based mining

User flexibility: provides constraints on what to be mined

System optimization: explores such constraints for efficient mining—constraint-based mining


Page 70


Constraints in Data Mining

Knowledge type constraint: classification, association, etc.

Data constraint — using SQL-like queries: find product pairs sold together in stores in Chicago in Dec. '02

Dimension/level constraint: in relevance to region, price, brand, customer category

Rule (or pattern) constraint: small sales (price < $10) triggers big sales (sum > $200)

Interestingness constraint: strong rules have min_support ≥ 3% and min_confidence ≥ 60%


Page 71


Constrained Mining vs. Constraint-Based Search

Constrained mining vs. constraint-based search/reasoning

Both are aimed at reducing search space

Finding all patterns satisfying constraints vs. finding some (or one) answer in constraint-based search in AI

Constraint-pushing vs. heuristic search

It is an interesting research problem on how to integrate them

Constrained mining vs. query processing in DBMS

Database query processing requires finding all answers

Constrained pattern mining shares a similar philosophy as pushing selections deeply in query processing
