
Page 1: Introduction to Machine Learning and Knowledge Representation

Introduction to Machine Learning and Knowledge Representation

Florian Gyarfas

COMP 790-072 (Robotics)

Page 2: Introduction to Machine Learning and Knowledge Representation

Outline

Introduction, definitions
Common knowledge representations
Types of learning
Deductive Learning (rules of inference)
  Explanation-based learning
Inductive Learning – some approaches
  Concept Learning
  Decision-Tree Learning
  Clustering
Summary, references

Page 3: Introduction to Machine Learning and Knowledge Representation

Introduction

What is Knowledge Representation?
Formalisms that represent knowledge (facts about the world) and mechanisms to manipulate such knowledge (for example, to derive new facts from existing knowledge)

What is Machine Learning?
Mitchell: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”

Example:
  T: playing checkers
  P: percent of games won against opponents
  E: playing practice games against itself

Page 4: Introduction to Machine Learning and Knowledge Representation

Common Knowledge Representations

Logic
  propositional
  predicate
  other

Structured Knowledge Representations:
  Frames
  Semantic nets

Page 5: Introduction to Machine Learning and Knowledge Representation

Propositional Logic

Consists of:
  Constants: true/false
  A set of elements called symbols, variables or atomic formulas (typically letters: a, b, c, …)
  Operators (¬, ∧, ∨, →, ↔)
  Axioms
    Examples: a ∧ b → a;  a ∨ ¬a ↔ true
  Inference rule(s): Modus ponens

Page 6: Introduction to Machine Learning and Knowledge Representation

Predicate Logic

In many cases, propositional logic is too weak.
For example, how can we express something like this in propositional logic:

Every person is mortal.
Tom is a person.
Tom is mortal.
How can we represent these sentences in such a way that we can infer the third sentence from the first two?

Need quantifiers and predicates:
∀x: Person(x) → Mortal(x)
Person(Tom)
We can infer: Mortal(Tom)

In addition to propositional logic, predicate logic has:
  Functions
  Predicates
  Quantifiers (∀, ∃)
  More axioms
  One more inference rule (Generalization)

Page 7: Introduction to Machine Learning and Knowledge Representation

Logic: Inference Rules

Used for deductive learning

Propositional Logic: Modus Ponens is all you need
  If P, then Q.
  P.
  Therefore, Q.
  This is a meta-rule (a rule of inference), not the same as an axiom.

Predicate Logic: Modus Ponens plus the Generalization rule (from P, infer ∀x: P)
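For illustration, here is a minimal Python sketch (not from the slides) of applying modus ponens repeatedly over propositional, Horn-style rules, i.e. simple forward chaining. The quantified "every person is mortal" rule is treated as its ground instance for Tom, and all names in the snippet are made up for the example.

def forward_chain(facts, rules):
    """facts: set of symbols known to be true; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Modus ponens: if every premise holds and we have (premises -> conclusion), conclude.
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# "Every person is mortal", instantiated for Tom, plus the fact "Tom is a person":
rules = [({"Person(Tom)"}, "Mortal(Tom)")]
print(forward_chain({"Person(Tom)"}, rules))   # {'Person(Tom)', 'Mortal(Tom)'}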

Page 8: Introduction to Machine Learning and Knowledge Representation

Structured Knowledge Representations

Semantic nets
  Really just graphs that represent knowledge
  Nodes represent concepts
  Arcs represent binary relationships between concepts

Frames
  Extension of semantic nets
  Entities have attributes
  Class/subclass hierarchy that supports inheritance; classes inherit attributes from superclasses

Page 9: Introduction to Machine Learning and Knowledge Representation

Semantic Nets

Example (E. Rich, Artificial Intelligence)

[Figure: a semantic net whose nodes are Furniture, Chair, My-chair, Seat, Leather, Me, Person, Tan and Brown, connected by is-a, is-part, covering, owner and color arcs.]

Page 10: Introduction to Machine Learning and Knowledge Representation

Types of Learning

Deductive – Inductive

Supervised – Unsupervised

Symbolic – Non-symbolic

Page 11: Introduction to Machine Learning and Knowledge Representation

Deductive vs. inductive

Deductive learning
Knowledge is deduced from existing knowledge by means of truth-preserving transformations (this is nothing more than a reformulation of existing knowledge). If the premises are true, the conclusion must be true!
Example: propositional/predicate logic with rules of inference

Page 12: Introduction to Machine Learning and Knowledge Representation

Deductive vs. inductive

Inductive learning
Generalization from examples
Example: “All observed crows are black” → “All crows are black”
A process of reasoning in which the premises of an argument are believed to support the conclusion but do not ensure it

Page 13: Introduction to Machine Learning and Knowledge Representation

Unsupervised vs. Supervised

Supervised Learning:
There exists a “teacher” that for each training example tells the learner how it is classified (training data consists of pairs of input vectors and desired outputs)

Reinforcement Learning:
No input/output pairs; a “reward function” tells the agent how good its action was

Unsupervised Learning:
No a priori output and also no reward; training data are just feature vectors; the system needs to form concepts (classes) by itself

Page 14: Introduction to Machine Learning and Knowledge Representation

Learning approaches

Deductive/Analytical
  Explanation-based learning

Inductive
  Supervised:
    Concept Learning
    Decision-Tree Learning
    Neural networks
    Naive Bayes classifier
    Support Vector Machines
    …

  Unsupervised:
    Clustering
    Neural networks
    Expectation-Maximization
    …

Page 15: Introduction to Machine Learning and Knowledge Representation

Explanation-based learning (EBL)

Deductive
Assumes prior knowledge (“domain theory”) in addition to training examples
Assumes the domain theory is given as a set of Horn clauses
Horn clause: a disjunction of literals with at most one positive literal

Example: NOT a OR NOT b OR NOT c OR d, which is the same as (a AND b AND c) → d (checked in the snippet below)

Tries to explain training examples using the domain theory
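As a quick check of the stated equivalence, this small illustrative Python snippet enumerates all 16 truth assignments and confirms that the clause form and the implication form always agree:

from itertools import product

for a, b, c, d in product([False, True], repeat=4):
    clause = (not a) or (not b) or (not c) or d        # NOT a OR NOT b OR NOT c OR d
    implication = (not (a and b and c)) or d           # (a AND b AND c) -> d, as material implication
    assert clause == implication
print("Both forms agree on all 16 truth assignments.")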

Page 16: Introduction to Machine Learning and Knowledge Representation

EBL example

Consider multiple physical objects.
Which are the pairs of objects such that one can be stacked safely on the other?

Target concept:
  premise(x,y) → SafeToStack(x,y)
  where premise is a conjunctive expression containing the variables x and y.

Domain Theory:
  SafeToStack(x,y) ← NOT Fragile(y)
  SafeToStack(x,y) ← Lighter(x,y)
  Lighter(x,y) ← Weight(x,wx) AND Weight(y,wy) AND LessThan(wx,wy)
  Weight(x,w) ← Volume(x,v) AND Density(x,d) AND Equal(w, times(v,d))
  Weight(x,5) ← Type(x,Table)
  Fragile(x) ← Material(x,Glass)

Page 17: Introduction to Machine Learning and Knowledge Representation

EBL example (2)

Training example:
  On(Obj1,Obj2)
  Type(Obj1,Box)
  Type(Obj2,Table)
  Color(Obj1,Red)
  Color(Obj2,Blue)
  Volume(Obj1,2)
  Density(Obj1,0.3)
  Material(Obj1,Cardboard)
  Material(Obj2,Wood)
  SafeToStack(Obj1,Obj2)

Page 18: Introduction to Machine Learning and Knowledge Representation

EBL example (3)

Explanation (proof tree):

SafeToStack(Obj1,Obj2) follows from Lighter(Obj1,Obj2).
Lighter(Obj1,Obj2) follows from Weight(Obj1,0.6), Weight(Obj2,5), and LessThan(0.6,5).
Weight(Obj1,0.6) follows from Volume(Obj1,2), Density(Obj1,0.3), and Equal(0.6, times(2,0.3)).
Weight(Obj2,5) follows from Type(Obj2,Table).

Page 19: Introduction to Machine Learning and Knowledge Representation

EBL algorithm

1) Explain the training example
2) Analyze/generalize the explanation
3) Add the resulting rule to the learned rules (domain theory)

Use, for example, the REGRESS algorithm (Mitchell, p. 318) for step (2).
In our example, the most general rule that can be justified by the explanation is:

SafeToStack(x,y) ← Volume(x,vx) AND Density(x,dx) AND Equal(wx, times(vx,dx)) AND LessThan(wx,5) AND Type(y,Table)

Page 20: Introduction to Machine Learning and Knowledge Representation

EBL - Remarks

Knowledge reformulation: EBL just restates what the learner already knows.
You don’t really gain new knowledge!
Why do we need it then? In principle, we can compute everything we need using just the domain theory.
In practice, however, this might not work. Consider chess: does knowing all the rules make you a perfect player?
So EBL reformulates existing knowledge into a more operational form, which might be much more effective, especially under certain constraints.

Page 21: Introduction to Machine Learning and Knowledge Representation

Concept Learning

Inductive
Learn a general concept definition from specific training examples
Search through a predefined space of potential hypotheses for the target concept
Pick the one that best fits the training examples

Page 22: Introduction to Machine Learning and Knowledge Representation

Concept Learning Example

Taken from Tom Mitchell’s book “Machine Learning”
Concept to learn: “Days on which Tom enjoys his favorite water sport”
Training examples D (every row is an instance, every column an attribute):

#  Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport (Classification)
1  Sunny  Warm     Normal    Strong  Warm   Same      Yes
2  Sunny  Warm     High      Strong  Warm   Same      Yes
3  Rainy  Cold     High      Strong  Warm   Change    No
4  Sunny  Warm     High      Strong  Cool   Change    Yes

Page 23: Introduction to Machine Learning and Knowledge Representation

Concept Learning

X = set of all instances (an instance is a combination of attribute values), D = set of all training examples (D ⊆ X)

The “target concept” is a function (target function) c(x) that for a given instance x is either 0 or 1 (in our example, 0 if EnjoySport = No, 1 if EnjoySport = Yes)

We do not know the target concept, but we can come up with hypotheses. We would like to find a hypothesis h(x) such that h(x) = c(x), at least for all x in D (the training examples)

Hypothesis representation:
Let us assume that the target concept is expressed as a conjunction of constraints on the instance attributes. Then we can write a hypothesis like this: AirTemp = Cold ∧ Humidity = High. Or, in short: <?,Cold,High,?,?,?>, where “?” means any value is acceptable for this attribute. We use “∅” to indicate that no value is acceptable for an attribute.

Page 24: Introduction to Machine Learning and Knowledge Representation

Concept Learning

Most general hypothesis: <?,?,?,?,?,?>

Most specific hypothesis: <∅,∅,∅,∅,∅,∅>

General-to-specific ordering of hypotheses: ≥g (more general than or equal to) defines a partial order over the hypothesis space H

FIND-S algorithm
  finds the most specific hypothesis consistent with the training examples
  For our example: <Sunny,Warm,?,Strong,?,?>

Page 25: Introduction to Machine Learning and Knowledge Representation

Concept Learning (FIND-S)

FIND-S algorithm

1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x:
     For each attribute constraint a_i in h:
       If the constraint a_i is satisfied by x, then do nothing
       Else replace a_i in h by the next more general constraint that is satisfied by x
3. Output hypothesis h

Why can we ignore negative examples?
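A minimal, illustrative Python sketch of FIND-S on the EnjoySport data from the earlier slide (the representation is mine: “?” matches any value, and None stands in for the empty constraint ∅):

D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]

def find_s(examples):
    n = len(examples[0][0])
    h = [None] * n                       # most specific hypothesis; None plays the role of the empty constraint
    for x, positive in examples:
        if not positive:                 # FIND-S ignores negative examples
            continue
        for i, value in enumerate(x):
            if h[i] is None:             # first positive example: adopt its value
                h[i] = value
            elif h[i] != value:          # generalize to "?" on any disagreement
                h[i] = "?"
    return h

print(find_s(D))                         # ['Sunny', 'Warm', '?', 'Strong', '?', '?']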

Page 26: Introduction to Machine Learning and Knowledge Representation

Concept Learning

FIND-S only computes the most specific hypothesis
Another approach to concept learning: CANDIDATE-ELIMINATION
CANDIDATE-ELIMINATION finds all hypotheses in the version space
The version space, denoted VS_H,D, with respect to hypothesis space H and training examples D, is the subset of hypotheses from H consistent with the training examples in D:

VS_H,D = { h ∈ H | Consistent(h, D) }

Page 27: Introduction to Machine Learning and Knowledge Representation

Concept Learning (Version space)

Version space for our example:

S (specific boundary): {<Sunny,Warm,?,Strong,?,?>}

G (general boundary): {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>}

Hypotheses in between: <Sunny,?,?,Strong,?,?>, <Sunny,Warm,?,?,?,?>, <?,Warm,?,Strong,?,?>

(These six hypotheses are checked against the training data in the sketch below.)
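As a check, the following illustrative Python sketch verifies that the six hypotheses above are consistent with the EnjoySport training data, i.e. that they belong to the version space (“?” matches any value; data and representation as on the earlier slides):

D = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   True),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   True),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
]

def matches(h, x):
    # A hypothesis matches an instance if every constraint is "?" or equals the attribute value.
    return all(hc == "?" or hc == xc for hc, xc in zip(h, x))

def consistent(h, examples):
    # h is consistent with D iff its prediction agrees with the label on every training example.
    return all(matches(h, x) == label for x, label in examples)

version_space = [
    ("Sunny", "Warm", "?", "Strong", "?", "?"),   # S boundary
    ("Sunny", "?",    "?", "Strong", "?", "?"),
    ("Sunny", "Warm", "?", "?",      "?", "?"),
    ("?",     "Warm", "?", "Strong", "?", "?"),
    ("Sunny", "?",    "?", "?",      "?", "?"),   # G boundary
    ("?",     "Warm", "?", "?",      "?", "?"),   # G boundary
]
print(all(consistent(h, D) for h in version_space))  # True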

Page 28: Introduction to Machine Learning and Knowledge Representation

Concept Learning: Candidate-Elimination Algorithm

Initialize G to the set of maximally general hypotheses in H
Initialize S to the set of maximally specific hypotheses in H

For each training example d, do:

  If d is a positive example:
    Remove from G any hypothesis inconsistent with d
    For each hypothesis s in S that is not consistent with d:
      Remove s from S
      Add to S all minimal generalizations h of s such that h is consistent with d, and some member of G is more general than h
    Remove from S any hypothesis that is more general than another hypothesis in S

Page 29: Introduction to Machine Learning and Knowledge Representation

Concept Learning: Candidate-Elimination Algorithm (continued)

  If d is a negative example:
    Remove from S any hypothesis inconsistent with d
    For each hypothesis g in G that is not consistent with d:
      Remove g from G
      Add to G all minimal specializations h of g such that h is consistent with d, and some member of S is more specific than h
    Remove from G any hypothesis that is less general than another hypothesis in G

How do we use the version space for classification of new instances?

Neither algorithm can handle noisy training data, i.e. both assume that none of the training examples is incorrect

For more complex concept learning algorithms see Mitchell, Machine Learning, Chapter 10.

Page 30: Introduction to Machine Learning and Knowledge Representation

Inductive bias

For both algorithms, we assumed that the target concept was contained in the hypothesis space
Our hypothesis space was the set of all hypotheses that can be expressed as a conjunction of attribute constraints

Such an assumption is called an inductive bias
What if the target concept is not a conjunction of constraints? Why not consider all possible hypotheses?

Page 31: Introduction to Machine Learning and Knowledge Representation

Decision Tree Learning

Another inductive, supervised learning method for approximating discrete-valued target functions
The learned function is represented by a tree
Leaf nodes provide the classification
Each node specifies a test of some attribute of the instance

Page 32: Introduction to Machine Learning and Knowledge Representation

Decision Tree Learning – Example

[Figure: decision tree for the concept “PlayTennis”. Outlook is tested at the root; the Sunny branch tests Humidity (High → No, Normal → Yes), Overcast leads directly to Yes, and the Rain branch tests Wind (Strong → No, Weak → Yes).]

Page 33: Introduction to Machine Learning and Knowledge Representation

Decision Tree Learning

Classification starts at the root node
The example tree corresponds to the expression:

(Outlook = Sunny AND Humidity = Normal)
OR (Outlook = Overcast)
OR (Outlook = Rain AND Wind = Weak)
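For illustration, the same tree can be written directly as nested conditionals; a minimal Python sketch (attribute names follow the slides, the function name is mine):

def play_tennis(instance):
    # instance: dict mapping attribute name -> value, e.g. {"Outlook": "Sunny", ...}
    outlook = instance["Outlook"]
    if outlook == "Sunny":
        return instance["Humidity"] == "Normal"   # Yes only if humidity is normal
    if outlook == "Overcast":
        return True                               # always Yes
    if outlook == "Rain":
        return instance["Wind"] == "Weak"         # Yes only if wind is weak
    raise ValueError("unknown Outlook value")

print(play_tennis({"Outlook": "Sunny", "Humidity": "High", "Wind": "Weak"}))  # False (No)
print(play_tennis({"Outlook": "Rain",  "Humidity": "High", "Wind": "Weak"}))  # True (Yes)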

Using the tree to classify new instances is easy, but how do we construct a decision tree from training examples?

Page 34: Introduction to Machine Learning and Knowledge Representation

Decision Tree Learning: ID3 Algorithm

Constructs the tree top-down
Order of attributes?

Evaluate each attribute using a statistical test (see next slide) to determine how well it alone classifies the training examples
Use the attribute that best classifies the training examples as the test at the root node. Create a descendant for each possible value of that attribute.
Repeat the process for each descendant…

Page 35: Introduction to Machine Learning and Knowledge Representation

Decision Tree Learning: Entropy/Gain

Given a collection S, containing positive and negative examples of some target concept, the entropy of S is:

Entropy(S) = − p₊ log₂ p₊ − p₋ log₂ p₋

Information gain is then defined as:

Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)

The ID3 algorithm picks the attribute with the highest gain.
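A minimal, illustrative Python sketch of these two formulas, run on the PlayTennis data from the slides that follow; the attribute names and rounding are mine (the computed Gain(S, Outlook) ≈ 0.247 appears as 0.246 on the slide due to rounding):

from math import log2

# PlayTennis data (Outlook, Temperature, Humidity, Wind, PlayTennis) from the table below.
data = [
    ("Sunny", "Hot", "High", "Weak", False),        ("Sunny", "Hot", "High", "Strong", False),
    ("Overcast", "Hot", "High", "Weak", True),      ("Rain", "Mild", "High", "Weak", True),
    ("Rain", "Cool", "Normal", "Weak", True),       ("Rain", "Cool", "Normal", "Strong", False),
    ("Overcast", "Cool", "Normal", "Strong", True), ("Sunny", "Mild", "High", "Weak", False),
    ("Sunny", "Cool", "Normal", "Weak", True),      ("Rain", "Mild", "Normal", "Weak", True),
    ("Sunny", "Mild", "Normal", "Strong", True),    ("Overcast", "Mild", "High", "Strong", True),
    ("Overcast", "Hot", "Normal", "Weak", True),    ("Rain", "Mild", "High", "Strong", False),
]
ATTRS = {"Outlook": 0, "Temperature": 1, "Humidity": 2, "Wind": 3}

def entropy(examples):
    pos = sum(1 for e in examples if e[-1])
    if pos in (0, len(examples)):              # pure set: treat 0 * log2(0) as 0
        return 0.0
    p = pos / len(examples)
    return -p * log2(p) - (1 - p) * log2(1 - p)

def gain(examples, attr):
    i = ATTRS[attr]
    remainder = 0.0
    for v in {e[i] for e in examples}:
        subset = [e for e in examples if e[i] == v]
        remainder += len(subset) / len(examples) * entropy(subset)
    return entropy(examples) - remainder

for a in ATTRS:
    print(a, round(gain(data, a), 3))          # Outlook has the highest gain, about 0.247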

Page 36: Introduction to Machine Learning and Knowledge Representation

Decision Trees - Remarks

A tree represents a hypothesis; thus, ID3 determines only a single hypothesis, unlike Candidate-Elimination
Unlike both concept learning approaches, ID3 makes no assumptions regarding the hypothesis space; every possible hypothesis can be represented by some tree
Inductive bias: shorter trees are preferred over larger trees

Page 37: Introduction to Machine Learning and Knowledge Representation

Decision Trees – Example (Mitchell, p. 59)

Day  Outlook   Temp.  Humid.  Wind    PlayTennis
D1   Sunny     Hot    High    Weak    No
D2   Sunny     Hot    High    Strong  No
D3   Overcast  Hot    High    Weak    Yes
D4   Rain      Mild   High    Weak    Yes
D5   Rain      Cool   Normal  Weak    Yes
D6   Rain      Cool   Normal  Strong  No
D7   Overcast  Cool   Normal  Strong  Yes
D8   Sunny     Mild   High    Weak    No
D9   Sunny     Cool   Normal  Weak    Yes
D10  Rain      Mild   Normal  Weak    Yes
D11  Sunny     Mild   Normal  Strong  Yes
D12  Overcast  Mild   High    Strong  Yes
D13  Overcast  Hot    Normal  Weak    Yes
D14  Rain      Mild   High    Strong  No

Page 38: Introduction to Machine Learning and Knowledge Representation

Decision Tree - Example

Gain(S, Outlook) = 0.246
Gain(S, Humidity) = 0.151
Gain(S, Wind) = 0.048
Gain(S, Temperature) = 0.029

So “Outlook” is the first attribute of the tree:

Ssunny = {D1, D2, D8, D9, D11}
Gain(Ssunny, Humidity) = 0.97 − (3/5)·0 − (2/5)·0 = 0.97
Gain(Ssunny, Temperature) = 0.97 − (2/5)·0 − (2/5)·1 − (1/5)·0 = 0.57
Gain(Ssunny, Wind) = 0.97 − (2/5)·1 − (3/5)·0.918 = 0.019
⇒ Humidity should be tested next

[Figure: partial tree so far. Outlook is at the root; its Overcast branch is labeled Yes, while the Sunny and Rain branches are still open (?). What should be tested next at the open branches?]

Page 39: Introduction to Machine Learning and Knowledge Representation

Learning algorithms in robotics

EBL has been used for planning algorithms (example: the PRODIGY system (Carbonell et al., 1990))
Could use EBL, concept learning, or decision-tree learning for things like object recognition (CL and DTL can easily be extended to more than 2 categories)
However, while symbolic learning algorithms are simple and easy to understand, they are not very flexible and powerful
In many applications, robots need to learn to classify something they perceive (using cameras, sensors, etc.)
Sensor inputs etc. are normally numeric ⇒ non-symbolic approaches such as neural networks, reinforcement learning, or HMMs are better suited and thus more commonly used in practice
Combined approaches exist, for example EBNN (Explanation-Based Neural Networks); see for example the paper “Explanation Based Learning for Mobile Robot Perception” by J. O’Sullivan, T. Mitchell and S. Thrun

Page 40: Introduction to Machine Learning and Knowledge Representation

Unsupervised Learning

Training data consists only of feature vectors and does not include classifications
Unsupervised learning algorithms try to find patterns in the data
Classic examples:
  Clustering
  Fitting Gaussian density functions to data
  Dimensionality reduction

Page 41: Introduction to Machine Learning and Knowledge Representation

Clustering: k-means algorithm

Clusters objects based on attributes into k partitions
Tries to minimize the intra-cluster variance V = Σ_{i=1..k} Σ_{x_j ∈ S_i} ‖x_j − μ_i‖², where μ_i is the centroid (mean) of cluster S_i

Algorithm (sketched in code below):
1. Start by partitioning the input points into k initial sets, either at random or using some heuristic.
2. Calculate the mean point, or centroid, of each set.
3. Construct a new partition by associating each point with the closest centroid.
4. Recalculate the centroids for the new clusters.
5. Repeat until convergence, which is when points no longer switch clusters.
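A minimal NumPy sketch of these steps (illustrative only; the initialization, stopping test, and demo data are my choices, not from the slides):

import numpy as np

def k_means(points, k, max_iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1) Initial partition: pick k distinct input points as the first centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iterations):
        # 2)-3) Assign every point to its closest centroid.
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # 4) Recompute each centroid as the mean of its cluster (keep the old one if a cluster is empty).
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # 5) Stop when the centroids (and hence the assignments) no longer change.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# Two well-separated blobs as demo data.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centroids, labels = k_means(pts, k=2)
print(centroids)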

Page 42: Introduction to Machine Learning and Knowledge Representation

Summary

This presentation mainly covered symbol-based learning algorithms

Deductive
  EBL

Inductive
  Concept Learning
  Decision-Tree Learning
  Unsupervised
    Clustering (not necessarily symbol-based), COBWEB

Non-symbol-based learning algorithms such as neural networks: part of the next lecture?

Page 43: Introduction to Machine Learning and Knowledge Representation

References (Books)

Tom Mitchell: Machine Learning. McGraw-Hill, 1997.
Stuart Russell, Peter Norvig: Artificial Intelligence: A Modern Approach. Prentice Hall, 2003.
George Luger: Artificial Intelligence. Addison-Wesley, 2002.
Elaine Rich: Artificial Intelligence. McGraw-Hill, 1983.