
Decision Trees and Association Rules

Prof. Sin-Min Lee

Department of Computer Science

Data Mining: A KDD Process

– Data mining: the core of the knowledge discovery process.

[Figure: the KDD process: Databases → Data Cleaning / Data Integration → Data Warehouse → Selection → Task-relevant Data → Data Mining → Pattern Evaluation.]

Data Mining process model (DM)

Search in State Spaces

Decision Trees

•A decision tree is a special case of a state-space graph.

•It is a rooted tree in which each internal node corresponds to a decision, with a subtree at that node for each possible outcome of the decision.

•Decision trees can be used to model problems in which a series of decisions leads to a solution.

•The possible solutions of the problem correspond to the paths from the root to the leaves of the decision tree.

Decision Trees

•Example: the n-queens problem

•How can we place n queens on an n×n chessboard so that no two queens can capture each other?

[Figure: a chessboard with a single queen Q; every square the queen can reach is marked with an x.]

A queen can move any number of squares horizontally, vertically, and diagonally.

Here, the possible target squares of the queen Q are marked with an x.

•Let us consider the 4-queens problem.

•Question: How many possible configurations of a 4×4 chessboard containing 4 queens are there?

•Answer: There are 16!/(12!·4!) = (13·14·15·16)/(2·3·4) = 13·7·5·4 = 1820 possible configurations.

•Shall we simply try them out one by one until we encounter a solution?

•No, it is generally useful to think about a search problem more carefully and discover constraints on the problem’s solutions.

•Such constraints can dramatically reduce the size of the relevant state space.

Obviously, in any solution of the n-queens problem, there must be exactly one queen in each column of the board.

Otherwise, two queens in the same column could capture each other.

Therefore, we can describe a solution of this problem as a sequence of n decisions:

Decision 1: Place a queen in the first column.
Decision 2: Place a queen in the second column.
...
Decision n: Place a queen in the n-th column.

Backtracking in Decision Trees

[Figure: backtracking search tree for the 4-queens problem, from the empty board through placing the 1st, 2nd, 3rd, and 4th queen; branches whose queens attack each other are abandoned and the search backtracks.]
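This column-by-column backtracking strategy translates directly into code. The following is a minimal Python sketch (not from the slides; names are illustrative) that places one queen per column and backtracks whenever the new queen would be attacked:

```python
def solve_n_queens(n):
    """Return one solution as a list where solution[col] is the row
    of the queen placed in that column, or None if no solution exists."""
    solution = []

    def safe(row, col):
        # The new queen must not share a row or a diagonal with any
        # queen already placed in an earlier column.
        for c, r in enumerate(solution):
            if r == row or abs(r - row) == abs(c - col):
                return False
        return True

    def place(col):
        if col == n:                      # all columns filled: solution found
            return True
        for row in range(n):              # try every row in this column
            if safe(row, col):
                solution.append(row)      # make the decision
                if place(col + 1):        # descend into the subtree
                    return True
                solution.pop()            # dead end: backtrack
        return False                      # no row works: caller must backtrack

    return solution if place(0) else None

print(solve_n_queens(4))                  # e.g. [1, 3, 0, 2]
```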

Neural Network
• Many inputs and a single output
• Trained on a signal and a background sample
• Well understood and mostly accepted in HEP

Decision Tree
• Many inputs and a single output
• Trained on a signal and a background sample
• Used mostly in life sciences and business

Decision Tree: Basic Algorithm

• Initialize the top node to all examples
• While impure leaves are available:

– select the next impure leaf L

– find the splitting attribute A with maximal information gain

– for each value of A, add a child node to L
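As a rough illustration of this loop, here is a minimal recursive ID3-style sketch in Python (not from the slides). It assumes each example is a dict of attribute values plus a class label, splits on the attribute with maximal information gain, and stops when a node is pure or no attributes remain:

```python
from collections import Counter
from math import log2

def entropy(examples, target):
    counts = Counter(ex[target] for ex in examples)
    total = len(examples)
    return -sum(c / total * log2(c / total) for c in counts.values())

def info_gain(examples, attr, target):
    # Gain = entropy(parent) - weighted entropy of the children after splitting on attr
    total = len(examples)
    remainder = 0.0
    for value in {ex[attr] for ex in examples}:
        subset = [ex for ex in examples if ex[attr] == value]
        remainder += len(subset) / total * entropy(subset, target)
    return entropy(examples, target) - remainder

def build_tree(examples, attributes, target):
    labels = {ex[target] for ex in examples}
    if len(labels) == 1:                      # pure leaf
        return labels.pop()
    if not attributes:                        # no attributes left: majority vote
        return Counter(ex[target] for ex in examples).most_common(1)[0][0]
    # pick the attribute with maximal information gain
    best = max(attributes, key=lambda a: info_gain(examples, a, target))
    children = {}
    for value in {ex[best] for ex in examples}:
        subset = [ex for ex in examples if ex[best] == value]
        children[value] = build_tree(subset, [a for a in attributes if a != best], target)
    return (best, children)
```

Run on the weather table on the next slide, the attribute this sketch selects at the root is outlook.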

Decision Tree: Finding a Good Split

• Sufficient statistics to compute the information gain: the count matrix

outlook   temperature  humidity  windy  play
sunny     hot          high      FALSE  no
sunny     hot          high      TRUE   no
overcast  hot          high      FALSE  yes
rainy     mild         high      FALSE  yes
rainy     cool         normal    FALSE  yes
rainy     cool         normal    TRUE   no
overcast  cool         normal    TRUE   yes
sunny     mild         high      FALSE  no
sunny     cool         normal    FALSE  yes
rainy     mild         normal    FALSE  yes
sunny     mild         normal    TRUE   yes
overcast  mild         high      TRUE   yes
overcast  hot          normal    FALSE  yes
rainy     mild         high      TRUE   no

outlook:       play  don't play
  sunny          2       3
  overcast       4       0
  rainy          3       2

humidity:      play  don't play
  high           3       4
  normal         6       1

temperature:   play  don't play
  hot            2       2
  mild           4       2
  cool           3       1

windy:         play  don't play
  FALSE          6       2
  TRUE           3       3

Information gains for the four candidate splits: 0.25 bits, 0.16 bits, 0.03 bits, and 0.14 bits. Outlook has the maximal gain (about 0.25 bits) and is chosen as the splitting attribute.
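The gain computation itself is easy to check from the count matrices; here is a short Python sketch (helper names are illustrative) that computes the information gain of each split:

```python
from math import log2

def H(counts):
    """Entropy of a class distribution given as a list of counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def gain(parent, splits):
    """Information gain: parent is the [play, don't play] counts at the node,
    splits is a list of per-value [play, don't play] counts."""
    n = sum(parent)
    return H(parent) - sum(sum(s) / n * H(s) for s in splits)

parent = [9, 5]                                   # 9 play, 5 don't play overall
print(gain(parent, [[2, 3], [4, 0], [3, 2]]))     # outlook     -> about 0.247 bits
print(gain(parent, [[3, 4], [6, 1]]))             # humidity    -> about 0.152 bits
print(gain(parent, [[2, 2], [4, 2], [3, 1]]))     # temperature -> about 0.029 bits
print(gain(parent, [[6, 2], [3, 3]]))             # windy       -> about 0.048 bits
```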

Decision trees

• Simple depth-first construction

• Needs entire data to fit in memory

• Unsuitable for large data sets

• Need to “scale up”

Decision Trees

Planning Tool

Decision Trees

• Enable a business to quantify decision making

• Useful when outcomes are uncertain

• Place a numerical value on likely or potential outcomes

• Allow different possible decisions to be compared

Decision Trees

• Limitations:

– How accurate is the data used in the construction of the tree?

– How reliable are the estimates of the probabilities?

– The data may be historical: does it still apply in real time?

– Qualitative factors (human resources, motivation, reactions, relations with suppliers and other stakeholders) still need to be factored in.

Process

Advantages

Disadvantages

[Figure: trained decision tree output, binned likelihood fit, limit.]

Decision Trees from a Database

Ex Num  Size   Colour  Shape   Concept Satisfied
1       med    blue    brick   yes
2       small  red     wedge   no
3       small  red     sphere  yes
4       large  red     wedge   no
5       large  green   pillar  yes
6       large  red     pillar  no
7       large  green   sphere  yes

Choose target: Concept Satisfied. Use all attributes except Ex Num.

Rules from Tree

IF (SIZE = large AND ((SHAPE = wedge) OR (SHAPE = pillar AND COLOUR = red)))
   OR (SIZE = small AND SHAPE = wedge)
THEN NO

IF (SIZE = large AND ((SHAPE = pillar AND COLOUR = green) OR (SHAPE = sphere)))
   OR (SIZE = small AND SHAPE = sphere)
   OR (SIZE = medium)
THEN YES
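One way to sanity-check rules extracted from a tree is to encode them as a predicate and run it over the training table. A minimal Python sketch of the YES/NO rules above (illustrative, not from the slides):

```python
def satisfied(size, colour, shape):
    """Apply the IF-THEN rules read off the decision tree.
    Note: "medium" in the rules corresponds to "med" in the table."""
    if size == "large" and (shape == "wedge" or (shape == "pillar" and colour == "red")):
        return "no"
    if size == "small" and shape == "wedge":
        return "no"
    if size == "large" and ((shape == "pillar" and colour == "green") or shape == "sphere"):
        return "yes"
    if size == "small" and shape == "sphere":
        return "yes"
    if size == "med":
        return "yes"
    return None   # not covered by the extracted rules

examples = [
    ("med",   "blue",  "brick",  "yes"),
    ("small", "red",   "wedge",  "no"),
    ("small", "red",   "sphere", "yes"),
    ("large", "red",   "wedge",  "no"),
    ("large", "green", "pillar", "yes"),
    ("large", "red",   "pillar", "no"),
    ("large", "green", "sphere", "yes"),
]
# The rules reproduce the Concept Satisfied column for all seven examples.
assert all(satisfied(s, c, sh) == label for s, c, sh, label in examples)
```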

Association Rules

• Used to find all rules in basket data
• Basket data is also called transaction data
• Analyze how items purchased by customers in a shop are related
• Discover all rules that have:

– support greater than a minimum support (minsup) specified by the user

– confidence greater than a minimum confidence (minconf) specified by the user

• Example of transaction data:

– CD player, music CD, music book
– CD player, music CD
– music CD, music book
– CD player

Association Rules

• Let I = {i1, i2, …, im} be the total set of items, and D a set of transactions, where each transaction d consists of a set of items:

– d ⊆ I

• An association rule is an implication X => Y, where X ⊆ I, Y ⊆ I, and X ∩ Y = ∅

– support = (# of transactions containing X ∪ Y) / |D|

– confidence = (# of transactions containing X ∪ Y) / (# of transactions containing X)

Association Rules

• Example of transaction data:

– CD player, music CD, music book
– CD player, music CD
– music CD, music book
– CD player

• I = {CD player, music CD, music book}
• |D| = 4
• # of transactions containing both CD player and music CD = 2
• # of transactions containing CD player = 3
• CD player => music CD (sup = 2/4, conf = 2/3)
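A minimal Python sketch (illustrative names, not part of the original example) that computes these support and confidence values from the four transactions:

```python
transactions = [
    {"CD player", "music CD", "music book"},
    {"CD player", "music CD"},
    {"music CD", "music book"},
    {"CD player"},
]

def support(itemset, transactions):
    # Fraction of transactions containing every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y, transactions):
    # Fraction of transactions containing X that also contain Y
    return support(x | y, transactions) / support(x, transactions)

print(support({"CD player", "music CD"}, transactions))        # 0.5  (= 2/4)
print(confidence({"CD player"}, {"music CD"}, transactions))   # 0.666... (= 2/3)
```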

Association Rule

• How are association rules mined from large databases?

• Two-step process (sketched below):

– find all frequent itemsets

– generate strong association rules from the frequent itemsets
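The first step is commonly done level by level in the style of the Apriori algorithm: grow candidate itemsets one item at a time and keep only those whose support reaches minsup; the second step splits each frequent itemset into antecedent and consequent and keeps rules whose confidence reaches minconf. A compact Python sketch under these assumptions (function and parameter names are illustrative):

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Step 1: level-wise (Apriori-style) search for all itemsets with support >= minsup."""
    n = len(transactions)
    support = lambda items: sum(items <= t for t in transactions) / n
    items = {i for t in transactions for i in t}
    frequent = {}
    level = [frozenset([i]) for i in items if support({i}) >= minsup]
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        # candidates of size k+1: unions of frequent k-itemsets, kept only if still frequent
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = [c for c in candidates if support(c) >= minsup]
        k += 1
    return frequent

def rules(frequent, minconf):
    """Step 2: generate strong rules X => Y from each frequent itemset."""
    out = []
    for itemset, sup in frequent.items():
        for r in range(1, len(itemset)):
            for x in map(frozenset, combinations(itemset, r)):
                conf = sup / frequent[x]          # support(X u Y) / support(X)
                if conf >= minconf:
                    out.append((set(x), set(itemset - x), sup, conf))
    return out
```

On the CD-player transactions above, with minsup = 50% and minconf = 60%, this yields, among others, the rule CD player => music CD with confidence 2/3.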

Association Rules

• antecedent => consequent

– if … then …

– beer => diapers (Walmart)

– bad economy => higher unemployment

– higher unemployment => higher cost of unemployment benefits

• Rules are associated with a population, a support, and a confidence

Association Rules

• Population: instances such as grocery store purchases

• Support

– % of the population satisfying both antecedent and consequent

• Confidence

– % of cases in which the consequent is true when the antecedent is true

2. Association Rules: Support

Every association rule has a support and a confidence.

“The support is the percentage of transactions that demonstrate the rule.”

Example: Database with transactions ( customer_# : item_a1, item_a2, … )

1: 1, 3, 5.

2: 1, 8, 14, 17, 12.

3: 4, 6, 8, 12, 9, 104.

4: 2, 1, 8.

support {8,12} = 2 (or 50%: 2 of 4 customers)

support {1,5} = 1 (or 25%: 1 of 4 customers)

support {1} = 3 (or 75%: 3 of 4 customers)

2. Association Rules: Support

An itemset is called frequent if its support is equal to or greater than an agreed-upon minimal value, the support threshold.

Continuing the previous example:

if the threshold is 50%,

then the itemsets {8,12} and {1} are called frequent.

2. Association Rules: Confidence

Every association rule has a support and a confidence.

An association rule is of the form: X => Y

• X => Y: if someone buys X, he also buys Y

The confidence is the conditional probability that, given X present in a transaction, Y will also be present.

Confidence measure, by definition:

Confidence(X=>Y) equals support(X,Y) / support(X)

2. Association Rules: Confidence

We should only consider rules derived from itemsets with high support, and that also have high confidence.

“A rule with low confidence is not meaningful.”

Rules don’t explain anything, they just point out hard facts in data volumes.

3. Example: Database with transactions (customer_#: item_a1, item_a2, …)

1: 3, 5, 8. 2: 2, 6, 8. 3: 1, 4, 7, 10. 4: 3, 8, 10. 5: 2, 5, 8. 6: 1, 5, 6. 7: 4, 5, 6, 8. 8: 2, 3, 4. 9: 1, 5, 7, 8. 10: 3, 8, 9, 10.

Conf ( {5} => {8} ) ?

supp({5}) = 5, supp({8}) = 7, supp({5,8}) = 4, then conf( {5} => {8} ) = 4/5 = 0.8 or 80%

3. Example (continued): Database with transactions (customer_#: item_a1, item_a2, …)

1: 3, 5, 8. 2: 2, 6, 8. 3: 1, 4, 7, 10. 4: 3, 8, 10. 5: 2, 5, 8. 6: 1, 5, 6. 7: 4, 5, 6, 8. 8: 2, 3, 4. 9: 1, 5, 7, 8. 10: 3, 8, 9, 10.

Conf ( {5} => {8} ) ? 80%. Done.

Conf ( {8} => {5} ) ?

supp({5}) = 5, supp({8}) = 7, supp({5,8}) = 4, then conf( {8} => {5} ) = 4/7 = 0.57 or 57%

3. Example (continued): Database with transactions (customer_#: item_a1, item_a2, …)

Conf ( {5} => {8} ) ? 80% Done.

Conf ( {8} => {5} ) ? 57% Done.

Rule ( {5} => {8} ) is more meaningful than rule ( {8} => {5} ).

3. Example (continued): Database with transactions (customer_#: item_a1, item_a2, …)

1: 3, 5, 8. 2: 2, 6, 8. 3: 1, 4, 7, 10. 4: 3, 8, 10. 5: 2, 5, 8. 6: 1, 5, 6. 7: 4, 5, 6, 8. 8: 2, 3, 4. 9: 1, 5, 7, 8. 10: 3, 8, 9, 10.

Conf ( {9} => {3} ) ?

supp({9}) = 1, supp({3}) = 4, supp({3,9}) = 1, then conf( {9} => {3} ) = 1/1 = 1.0 or 100%. OK?

3. Example (continued): Database with transactions (customer_#: item_a1, item_a2, …)

Conf( {9} => {3} ) = 100%. Done.

Notice: High Confidence, Low Support.

-> Rule ( {9} => {3} ) not meaningful
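These three worked examples can be checked mechanically. A short Python sketch (illustrative; the thresholds minsup = 30% and minconf = 70% are chosen only for illustration):

```python
transactions = [
    {3, 5, 8}, {2, 6, 8}, {1, 4, 7, 10}, {3, 8, 10}, {2, 5, 8},
    {1, 5, 6}, {4, 5, 6, 8}, {2, 3, 4}, {1, 5, 7, 8}, {3, 8, 9, 10},
]
n = len(transactions)
supp = lambda items: sum(items <= t for t in transactions)   # count of containing transactions

def strong(x, y, minsup=0.3, minconf=0.7):
    """A rule X => Y is kept only if both support and confidence clear their thresholds."""
    support = supp(x | y) / n
    confidence = supp(x | y) / supp(x)
    return support >= minsup and confidence >= minconf

print(strong({5}, {8}))   # True:  support 4/10, confidence 4/5
print(strong({8}, {5}))   # False: confidence only 4/7
print(strong({9}, {3}))   # False: confidence 1/1 but support only 1/10
```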

Association Rules

• Population

– MS, MSA, MSB, MA, MB, BA

– M = Milk, S = Soda, A = Apple, B = Beer

• Support (M => S) = 3/6

– (MS, MSA, MSB) / (MS, MSA, MSB, MA, MB, BA)

• Confidence (M => S) = 3/5

– (MS, MSA, MSB) / (MS, MSA, MSB, MA, MB)
