Decision Trees
| ID    | Hair   | Height  | Weight  | Lotion | Result  |
|-------|--------|---------|---------|--------|---------|
| Sarah | Blonde | Average | Light   | No     | Sunburn |
| Dana  | Blonde | Tall    | Average | Yes    | None    |
| Alex  | Brown  | Tall    | Average | Yes    | None    |
| Annie | Blonde | Short   | Average | No     | Sunburn |
| Emily | Red    | Average | Heavy   | No     | Sunburn |
| Pete  | Brown  | Tall    | Heavy   | No     | None    |
| John  | Brown  | Average | Heavy   | No     | None    |
| Katie | Blonde | Short   | Light   | Yes    | None    |
Example
Example 2
Examples: which one is better?
Good when
- Samples are attribute-value pairs
- The target function has discrete output values
- Disjunctive descriptions may be required
- The training data may be missing or noisy
Construction
Top-down construction (see the sketch below):
1. Which attribute should be tested to form the root of the tree?
2. Create a branch for each attribute value and sort the samples into these branches.
3. At each branch node, repeat step 1 on the samples that reach it.
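A minimal sketch of this recursion in Python, assuming samples are dicts mapping attribute names to values (the helper names are illustrative, not from the slides):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in entropy obtained by splitting on `attr`."""
    remainder = 0.0
    for value in set(row[attr] for row in rows):
        subset = [y for row, y in zip(rows, labels) if row[attr] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return entropy(labels) - remainder

def id3(rows, labels, attributes):
    """Top-down decision-tree construction."""
    if len(set(labels)) == 1:            # pure node: make a leaf
        return labels[0]
    if not attributes:                   # no tests left: majority class
        return Counter(labels).most_common(1)[0][0]
    # 1. test the attribute with the highest information gain at the root
    best = max(attributes, key=lambda a: information_gain(rows, labels, a))
    tree = {best: {}}
    # 2. create a branch per attribute value and sort samples into them
    for value in set(row[best] for row in rows):
        sub = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = [r for r, _ in sub], [y for _, y in sub]
        # 3. repeat at each branch node
        tree[best][value] = id3(sub_rows, sub_labels,
                                [a for a in attributes if a != best])
    return tree
```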
So how do we choose an attribute?
Prefer smaller trees (Occam's razor for decision trees): the world is inherently simple, so the smallest decision tree that is consistent with the samples is the one most likely to identify unknown objects correctly.
How can you construct the smallest tree?
Maximize homogeneity in each branch
After choosing hair color
Formally
Maximize homogeneity = minimize disorder. The disorder formula can be taken from information theory.
Entropy
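The disorder measure referred to above is the standard Shannon entropy. For a sample set $S$ whose classes occur with proportions $p_1, \dots, p_c$:

$$\mathrm{Entropy}(S) = -\sum_{i=1}^{c} p_i \log_2 p_i$$

The disorder left after testing attribute $A$ is the weighted entropy of the branches, so the quantity to maximize is the information gain:

$$\mathrm{Gain}(S, A) = \mathrm{Entropy}(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\,\mathrm{Entropy}(S_v)$$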
Entropy intuition
An attribute can have two values. If equal numbers of both values are present, disorder is maximal: Entropy = -(1/2)log2(1/2) - (1/2)log2(1/2) = 1 bit.
Entropy intuition (2)
An attribute can have two values. If only one value is present, the set is perfectly homogeneous: Entropy = -1 * log2(1) = 0.
Entropy intuition (3)
Entropy intuition (4)
Decision Trees
| ID    | Hair   | Height  | Weight  | Lotion | Result  |
|-------|--------|---------|---------|--------|---------|
| Sarah | Blonde | Average | Light   | No     | Sunburn |
| Dana  | Blonde | Tall    | Average | Yes    | None    |
| Alex  | Brown  | Tall    | Average | Yes    | None    |
| Annie | Blonde | Short   | Average | No     | Sunburn |
| Emily | Red    | Average | Heavy   | No     | Sunburn |
| Pete  | Brown  | Tall    | Heavy   | No     | None    |
| John  | Brown  | Average | Heavy   | No     | None    |
| Katie | Blonde | Short   | Light   | Yes    | None    |
Worked example: hair color
Other tests
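Reproducing the hair-color computation and the other attribute tests in Python (a sketch; the function names are mine, the data is the table above):

```python
import math
from collections import Counter

# (Hair, Height, Weight, Lotion) -> Result, from the table above
DATA = [
    ("Blonde", "Average", "Light",   "No",  "Sunburn"),  # Sarah
    ("Blonde", "Tall",    "Average", "Yes", "None"),     # Dana
    ("Brown",  "Tall",    "Average", "Yes", "None"),     # Alex
    ("Blonde", "Short",   "Average", "No",  "Sunburn"),  # Annie
    ("Red",    "Average", "Heavy",   "No",  "Sunburn"),  # Emily
    ("Brown",  "Tall",    "Heavy",   "No",  "None"),     # Pete
    ("Brown",  "Average", "Heavy",   "No",  "None"),     # John
    ("Blonde", "Short",   "Light",   "Yes", "None"),     # Katie
]
ATTRS = ["Hair", "Height", "Weight", "Lotion"]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain(col):
    """Information gain of splitting the whole dataset on column `col`."""
    labels = [row[-1] for row in DATA]
    remainder = 0.0
    for value in set(row[col] for row in DATA):
        branch = [row[-1] for row in DATA if row[col] == value]
        remainder += len(branch) / len(DATA) * entropy(branch)
    return entropy(labels) - remainder

for col, name in enumerate(ATTRS):
    print(f"Gain({name}) = {gain(col):.3f}")
# Gain(Hair)   = 0.454   <- highest, so hair color is tested at the root
# Gain(Height) = 0.360
# Gain(Weight) = 0.016
# Gain(Lotion) = 0.348
```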
Issues in DT learning
Overfitting the data: a learned tree t with training error e overfits if there is an alternative tree t' whose training error e' is larger (e' > e), but whose error over the entire distribution of samples is smaller.
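Stated formally (the standard textbook formulation of what the slide describes): $t$ overfits the training data if there exists an alternative tree $t'$ with

$$\mathrm{error}_{\mathrm{train}}(t) < \mathrm{error}_{\mathrm{train}}(t') \quad\text{but}\quad \mathrm{error}_{\mathcal{D}}(t) > \mathrm{error}_{\mathcal{D}}(t'),$$

where $\mathcal{D}$ is the entire distribution of samples.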
Overfitting
Dealing with overfitting
- Stop growing the tree early, or
- Post-prune the tree after it has overfit.

How do we determine the correct final tree size?
- Use a validation set (hold out roughly 1/3 of the data)
- Use a statistical test (e.g. chi-square) to decide whether growing the tree further is justified
- Minimize MDL, a measure of complexity: size(tree) + size(misclassifications(tree))
Reduced error pruning
- Remove the subtree rooted at a node and replace it with a leaf
- Assign the most common class at that node to the leaf
- Select a node for removal only if the pruned tree's error on the validation set is no worse (<=) than the original tree's (sketched below)
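A minimal sketch, assuming the nested-dict trees produced by the ID3 sketch earlier; this is one common variant, in which each node's pruning decision uses the validation samples that reach that node:

```python
from collections import Counter

def classify(tree, sample):
    """Follow the attribute tests down to a leaf label (None if a value
    was never seen in training, which simply counts as an error)."""
    while isinstance(tree, dict):
        attr = next(iter(tree))
        tree = tree[attr].get(sample[attr])
    return tree

def reduced_error_prune(tree, train, val):
    """Bottom-up reduced-error pruning.  `train` and `val` are lists of
    (sample, label) pairs that reach this node."""
    if not isinstance(tree, dict) or not train:
        return tree
    attr = next(iter(tree))
    # prune the children first, routing samples down their branches
    for value, subtree in list(tree[attr].items()):
        tree[attr][value] = reduced_error_prune(
            subtree,
            [(s, y) for s, y in train if s[attr] == value],
            [(s, y) for s, y in val if s[attr] == value])
    # candidate leaf: the most common training class at this node
    leaf = Counter(y for _, y in train).most_common(1)[0][0]
    leaf_errors = sum(leaf != y for _, y in val)
    tree_errors = sum(classify(tree, s) != y for s, y in val)
    # replace the subtree with the leaf only if validation error does not increase
    return leaf if leaf_errors <= tree_errors else tree
```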
Effect of Reduced-Error Pruning
Rule post pruning
- Convert the tree to an equivalent set of rules, one per root-to-leaf path
- Prune each rule independently of the others: remove a precondition, test the effect on estimated accuracy, and keep the removal if accuracy does not drop
- Sort the final rules by estimated accuracy and consider them in that sequence when classifying
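For instance, in the sunburn data above, the path that tests Hair = Blonde and then Lotion = No covers only sunburned people (Sarah and Annie), so the corresponding rule would read: IF Hair = Blonde AND Lotion = No THEN Sunburn.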
Why rules then pruning?
Each root-to-leaf path through a node produces a different rule, so a node's test can be pruned from some rules while being kept in others, rather than the all-or-nothing choice of removing the node (and its subtree).
In rules, tests near the root carry no more weight than tests near the leaves.
Rules are often easier to read and understand
Continuous valued attributes
Continuous to discrete
We want a threshold (a binary attribute) that produces the greatest information gain:
1. Sort the examples by the attribute's value.
2. Identify adjacent examples that differ in class.
3. Candidate thresholds lie midway between the attribute values of such pairs.
4. Check each candidate threshold's information gain and choose the one that maximizes gain (or, equivalently, minimizes entropy), as in the sketch below.
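A sketch of this procedure in Python (the function name and example values are mine, not from the slides):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_threshold(values, labels):
    """Return the (threshold, gain) pair maximizing information gain for a
    binary split `value <= t` on a continuous attribute."""
    pairs = sorted(zip(values, labels))
    base = entropy([y for _, y in pairs])
    best_t, best_gain = None, -1.0
    for (v1, y1), (v2, y2) in zip(pairs, pairs[1:]):
        if y1 == y2 or v1 == v2:
            continue                  # candidates only where the class changes
        t = (v1 + v2) / 2             # midway between adjacent attribute values
        left = [y for v, y in pairs if v <= t]
        right = [y for v, y in pairs if v > t]
        rem = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if base - rem > best_gain:
            best_t, best_gain = t, base - rem
    return best_t, best_gain

# hypothetical temperatures with a binary class:
print(best_threshold([48, 60, 72, 80, 90], ["No", "No", "Yes", "Yes", "No"]))
# -> (66.0, ~0.42)
```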
Continuous attributes are favored
ID3's information gain prefers many-valued attributes. Consider Name: it classifies the training samples perfectly, yet each branch holds a single sample, so it tells us nothing about new objects. The remedy is to also include how well (broadly and uniformly) an attribute helps to split the data: Name is not broad at all, while Lotion used is much better. See the gain-ratio sketch below.
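The standard way to quantify this (the gain ratio of C4.5; the slide only hints at a measure, so take this as the likely intended one) divides the gain by the split information, the entropy of the partition itself:

$$\mathrm{SplitInformation}(S, A) = -\sum_{i=1}^{c} \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|}, \qquad \mathrm{GainRatio}(S, A) = \frac{\mathrm{Gain}(S, A)}{\mathrm{SplitInformation}(S, A)}$$

Name splits the 8 samples into 8 singleton branches, so its split information is $\log_2 8 = 3$ bits, heavily discounting its perfect gain; Lotion used splits them 3/5, giving only about $0.954$ bits.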
Attributes with Costs
We want lower-cost attributes to be tested earlier in the tree. Should we simply multiply (or divide) the gain by the cost? Two measures from the literature are sketched below.
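Two cost-sensitive selection measures commonly cited in the decision-tree literature (attributions as given in Mitchell's Machine Learning, ch. 3) replace the plain gain with

$$\frac{\mathrm{Gain}^2(S, A)}{\mathrm{Cost}(A)} \qquad \text{(Tan and Schlimmer)}$$

$$\frac{2^{\mathrm{Gain}(S, A)} - 1}{(\mathrm{Cost}(A) + 1)^w}, \quad w \in [0, 1] \qquad \text{(Nunez, where $w$ weights the importance of cost)}$$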