Machine Learning: Learning
TRANSCRIPT
Learning from Observations
Chapter 18
Sections 1-3 and 6
Learning
Learning is essential for unknown environments, i.e., when the designer lacks omniscience
Learning is useful as a system construction method, i.e., expose the agent to reality rather than trying to write it down
Learning modifies the agent's decision mechanisms to improve performance
Types of Learning
Supervised learning: the correct answer is given for each example. The answer can be a numeric variable, a categorical variable, etc.
Unsupervised learning: correct answers are not given, just examples (e.g., the same figures as above, without the labels)
Reinforcement learning: occasional rewards
[figure: example items, labeled M M M / F F F in the supervised case]
Inductive learning
Simplest form: learn a function from examples
f is the target function
An example is a pair (x, f(x))
Problem: find a hypothesis h such that h ≈ f
given a training set of examples
(This is a highly simplified model of real learning: it ignores prior knowledge and assumes the examples are given)
Inductive learning method
Construct/adjust h to agree with f on the training set
(h is consistent if it agrees with f on all examples)
E.g., curve fitting:
[figure sequence: successively more complex curves fitted to the same data points]
Ockham's razor: prefer the simplest hypothesis consistent with the data
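The curve-fitting figures did not survive the transcript; the sketch below is a stand-in (assuming only NumPy, with made-up data points) for why the razor matters: a high-degree polynomial can agree with f on every training example, yet the simplest consistent hypothesis is preferred.

```python
import numpy as np

# Training examples: pairs (x, f(x)) for an unknown target function f.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1, 4.9])   # roughly linear data

# Candidate hypotheses: polynomials of increasing degree.
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)            # least-squares fit
    err = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: training error = {err:.4f}")

# The degree-5 polynomial passes through all 6 points (error ~ 0), yet
# Ockham's razor prefers the degree-1 hypothesis: it is the simplest
# one consistent (to within noise) with the data.
```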
Learning decision trees
Problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Attribute-based representations
Examples are described by attribute values (Boolean, discrete, continuous). E.g., situations where I will/won't wait for a table:
[table of the 12 training examples omitted in transcript]
Classification of examples is positive (T) or negative (F)
The set of examples used for learning is called the training set.
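One natural encoding of such attribute-based examples, used by the sketches below (the two rows are illustrative, not the full 12-example table):

```python
# Each example maps attribute names to values, plus the classification.
examples = [
    {"Alt": "Yes", "Bar": "No", "Fri": "No", "Hun": "Yes", "Pat": "Some",
     "Price": "$$$", "Rain": "No", "Res": "Yes", "Type": "French",
     "Est": "0-10", "WillWait": "T"},
    {"Alt": "Yes", "Bar": "No", "Fri": "No", "Hun": "Yes", "Pat": "Full",
     "Price": "$", "Rain": "No", "Res": "No", "Type": "Thai",
     "Est": "30-60", "WillWait": "F"},
]
```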
Decision trees
One possible representation for hypotheses
E.g., here is the true tree for deciding whether to wait:
[figure: decision tree]
Expressiveness
Decision trees can express any function of the input attributes. E.g., for Boolean functions: truth table row → path to leaf.
Trivially, there is a consistent decision tree for any training set with one path to a leaf for each example (unless f is nondeterministic in x), but it probably won't generalize to new examples
Prefer to find more compact decision trees
Finding compact decision trees
Motivated by Ockham's razor.
However, finding the smallest decision tree is an unsolved problem (it is NP-complete).
There are heuristics that find reasonable decision trees in most practical cases.
Hypothesis spaces
How many distinct decision trees with n Boolean attributes?
= number of Boolean functions
= number of distinct truth tables with 2^n rows = 2^(2^n)
E.g., with 6 Boolean attributes, there are 18,446,744,073,709,551,616 trees
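A quick check of that count in exact integer arithmetic:

```python
n = 6
num_functions = 2 ** (2 ** n)   # one Boolean function per truth table
print(num_functions)            # 18446744073709551616
```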
Decision tree learning
Aim: find a small tree consistent with the training examples
Idea: (recursively) choose the "most significant" attribute as the root of each (sub)tree
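A minimal sketch of that recursion over dict-encoded examples as above. Here choose_attribute is a trivial stand-in; "most significant" is defined via information gain on the following slides.

```python
from collections import Counter

def choose_attribute(attributes, examples):
    # Placeholder: real DTL picks the attribute with the largest
    # information gain (defined on the following slides).
    return attributes[0]

def dtl(examples, attributes, default, target="WillWait"):
    """Recursively grow a tree, splitting on the chosen attribute."""
    if not examples:
        return default
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:
        return labels[0]                  # all examples agree: leaf
    majority = Counter(labels).most_common(1)[0][0]
    if not attributes:
        return majority                   # out of attributes: majority leaf
    best = choose_attribute(attributes, examples)
    tree = {best: {}}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        rest = [a for a in attributes if a != best]
        tree[best][value] = dtl(subset, rest, majority, target)
    return tree
```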
Choosing an attribute
Idea: a good attribute splits the examples into subsets that are (ideally) "all positive" or "all negative"
[figure: candidate splits on Patrons? and Type?]
Patrons? is a better choice
Example contd.
Decision tree learned from the 12 examples:
[figure: the learned tree]
Substantially simpler than the true tree: a more complex hypothesis isn't justified by such a small amount of data
Using information theory
To implement Choose-Attribute in the DTL algorithm.
Information Content (Entropy):
I(P(v1), …, P(vn)) = Σi=1..n −P(vi) log2 P(vi)
For a training set containing p positive examples and n negative examples:
I(p/(p+n), n/(p+n)) = −(p/(p+n)) log2(p/(p+n)) − (n/(p+n)) log2(n/(p+n))
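The same definition in code (a sketch; zero-probability terms contribute 0 by convention):

```python
import math

def information_content(*probs):
    """I(P(v1), ..., P(vn)) = sum of -P(vi) * log2 P(vi)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# For p positive and n negative examples:
p, n = 6, 6
print(information_content(p / (p + n), n / (p + n)))   # 1.0 bit
```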
Information gain
A chosen attribute A divides the training set E into subsets E1, …, Ev according to their values for A, where A has v distinct values.
Information Gain (IG) or reduction in entropy from the attribute test:
remainder(A) = Σi=1..v ((pi + ni)/(p + n)) · I(pi/(pi + ni), ni/(pi + ni))
IG(A) = I(p/(p+n), n/(p+n)) − remainder(A)
Choose the attribute with the largest IG
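And the gain computation itself (a sketch over the dict-encoded examples above, with "T"/"F" classifications as in the table):

```python
import math

def I(*probs):
    """Entropy I(P(v1), ..., P(vn)); zero terms contribute 0."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

def node_entropy(examples, target="WillWait"):
    """I(p/(p+n), n/(p+n)) for a set of classified examples."""
    pos = sum(1 for e in examples if e[target] == "T")
    return I(pos / len(examples), (len(examples) - pos) / len(examples))

def information_gain(examples, attr, target="WillWait"):
    """IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)."""
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e for e in examples if e[attr] == value]
        remainder += len(subset) / len(examples) * node_entropy(subset, target)
    return node_entropy(examples, target) - remainder
```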
Information gain
For the training set, p = n = 6, so I(6/12, 6/12) = 1 bit
Consider the attributes Patrons and Type (and others too):
Patrons has the highest IG of all attributes and so is chosen by the DTL algorithm as the root:
IG(Patrons) = 1 − [(2/12) I(0, 1) + (4/12) I(1, 0) + (6/12) I(2/6, 4/6)] = 0.541 bits
IG(Type) = 1 − [(2/12) I(1/2, 1/2) + (2/12) I(1/2, 1/2) + (4/12) I(2/4, 2/4) + (4/12) I(2/4, 2/4)] = 0 bits
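Plugging the slide's subset counts into these formulas reproduces the numbers (a quick check; here I(p, n) takes positive/negative counts rather than probabilities):

```python
import math

def I(p, n):
    """Two-class entropy from counts p and n; zero counts dropped."""
    return -sum(c / (p + n) * math.log2(c / (p + n)) for c in (p, n) if c)

ig_patrons = I(6, 6) - (2/12 * I(0, 2) + 4/12 * I(4, 0) + 6/12 * I(2, 4))
ig_type    = I(6, 6) - (2/12 * I(1, 1) + 2/12 * I(1, 1)
                        + 4/12 * I(2, 2) + 4/12 * I(2, 2))
print(round(ig_patrons, 3), round(ig_type, 3))   # 0.541 0.0
```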
Next step
Given Patrons as the root node, the next attribute chosen is Hungry?, with IG(Hungry?) = I(1/3, 2/3) − (2/3·1 + 1/3·0) = 0.252 bits
Computational Learning Theory: why learning works
PAC learning (Probably Approximately Correct). This has been a breakthrough in the theory of machine learning.
Basic idea: a really bad hypothesis will be easy to identify, since with high probability it will err on one of the training examples.
A consistent hypothesis will therefore be probably approximately correct.
Notice that the more training examples there are, the higher the probability that a consistent hypothesis is approximately correct!
Broadening the Applicability of Decision Trees
Missing data
Multivalued attributes
Continuous or integer-valued input attributes
Continuous output attributes
Computational Learning Theory: how many examples are needed?
Let:
X — the set of all possible examples
D — the distribution from which examples are drawn
H — the set of possible hypotheses
m — the number of examples in the training set
and assume that f, the true function, is in H.
Error
Error(h) = P(h(x) ≠ f(x) | x drawn from D)
An approximately correct hypothesis h is a hypothesis that satisfies Error(h) < ε
m ≥ (1/ε)(ln|H| − ln δ), where δ is the probability that H_bad (the hypotheses with error greater than ε) contains a hypothesis consistent with all examples
[figure: hypothesis space H with the subset H_bad; f lies outside H_bad]
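A worked instance of the bound (a sketch; ε and δ are chosen arbitrarily here):

```python
import math

def sample_bound(ln_h, epsilon, delta):
    """m >= (1/epsilon) * (ln|H| - ln(delta)) examples suffice."""
    return math.ceil((ln_h - math.log(delta)) / epsilon)

# e.g. |H| = 10**6 hypotheses, epsilon = 0.05, delta = 0.01:
print(sample_bound(math.log(10 ** 6), 0.05, 0.01))   # 369
```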
Examples
Learning a Boolean function
Learning a conjunction of n literals
Learning decision lists
Boolean function
A general Boolean function on n attributes can be represented by its truth table
Size of truth table: 2^n rows
An arbitrary Boolean function on 3 attributes:
A B C F(A,B,C)
T T T F
T T F T
T F T T
T F F F
F T T F
F T F F
F F T F
F F F T
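Combining the truth-table count with the sample-complexity bound shows why this class is hard: ln|H| = 2^n ln 2, so the number of examples needed grows exponentially with n (a sketch, with the same arbitrary ε and δ as above):

```python
import math

def examples_needed_boolean(n, epsilon=0.1, delta=0.05):
    """|H| = 2**(2**n), so ln|H| = 2**n * ln 2: exponential in n."""
    ln_h = (2 ** n) * math.log(2)
    return math.ceil((ln_h - math.log(delta)) / epsilon)

print(examples_needed_boolean(3))    # 86
print(examples_needed_boolean(20))   # ~7.3 million: hopeless in practice
```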
Conjunction of literals
A literal is a variable or its negation
A ∧ ¬B ∧ C is an example of a conjunction of literals
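By contrast with general Boolean functions, conjunctions form a small hypothesis space: each of the n variables appears positively, negatively, or not at all, so |H| = 3^n and the bound is only linear in n (same sketch style as above):

```python
import math

def examples_needed_conjunction(n, epsilon=0.1, delta=0.05):
    """|H| = 3**n, so ln|H| = n * ln 3: linear in n."""
    ln_h = n * math.log(3)
    return math.ceil((ln_h - math.log(delta)) / epsilon)

print(examples_needed_conjunction(20))   # 250
```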
Learning Decision Lists
A decision list consists of a series of tests, each of which is a conjunction of literals. If a test succeeds, the decision list specifies the value to be returned. Otherwise, processing continues with the next test in the list.
Decision lists can represent any Boolean function and hence are not learnable.
k-DL
A k-DL is a decision list where each test is restricted to at most k literals.
k-DL is learnable! Restricting the tests bounds the hypothesis space, so the sample-complexity bound above becomes polynomial in n.
Performance measurement
How do we know that h ≈ f?
1. Use theorems of computational/statistical learning theory
2. Try h on a new test set of examples (use the same distribution over the example space as for the training set)
Learning curve = % correct on the test set as a function of training-set size
[figure: learning curve for decision-tree learning on the restaurant problem]
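A sketch of how such a curve is measured (learn and predict are assumed interfaces, e.g. the dtl sketch above plus a tree-walking predictor; repeated random splits smooth the estimate):

```python
import random

def learning_curve(examples, learn, predict, target="WillWait", trials=20):
    """Average % correct on held-out examples vs. training-set size."""
    curve = []
    for m in range(1, len(examples)):
        correct = total = 0
        for _ in range(trials):
            shuffled = random.sample(examples, len(examples))
            train, test = shuffled[:m], shuffled[m:]
            h = learn(train)
            correct += sum(predict(h, e) == e[target] for e in test)
            total += len(test)
        curve.append((m, correct / total))
    return curve
```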
Regression and Classification with Linear Models
Univariate linear regression
Multivariate linear regression
Linear classifiers with a hard threshold
Linear classifiers with logistic regression
Summary
Learning is needed for unknown environments (and for lazy designers)
Learning agent = performance element + learning element
For supervised learning, the aim is to find a simple hypothesis approximately consistent with the training examples
Decision tree learning using information gain
Learning performance = prediction accuracy measured on a test set