AI – Week 23 – TERM 2 Machine Learning and Natural Language Processing Lee McCluskey, room 3/10 Email [email protected] http://scom.hud.ac.uk/scomtlm/cha2555/


Page 1:

AI – Week 23 – TERM 2 Machine Learning and Natural Language Processing

Lee McCluskey, room 3/10

Email [email protected]

http://scom.hud.ac.uk/scomtlm/cha2555/

Page 2:

School of Computing and Engineering

Term 2: Draft Schedule for Semester 2

13 - Introduction to Machine Learning
14 - Machine Learning - Knowledge Discovery / Data Mining 1
15 - Machine Learning - Knowledge Discovery / Data Mining 2
16 - Machine Learning of Planning Knowledge - 1
17 - Machine Learning of Planning Knowledge - 2
Reading Week
19 - Machine Learning - Reinforcement Learning
20 - Machine Learning - Neural Networks
21 - Natural Language Processing 1
22 - Natural Language Processing 2
Easter Break
23 - Natural Language Processing 3
24 - REVISION

Page 3:


Learning - DEFINITIONS

Learning is fundamental to intelligent behaviour.

Learning is loosely defined as a “change in behaviour”.

Wikipedia has it as “acquiring new, or modifying existing, knowledge, behaviours, skills, values, or preferences and may involve synthesizing different types of information.”

There are taxonomies of learning, and various ways that learning has been utilised by machines.

Question: How could “Machine Learning” be applied to Computer Games?

Page 4:


Types of Learning

Learning by ROTE - the simplest type of learning?
This is purely storing and remembering "facts", eg memorising a telephone directory, or arithmetic tables ("times tables").
- Store and retrieve - no processing needed on the inputs
- No recognition of the "meaning" of inputs
- No integration of learned knowledge with other knowledge.
AI Example: A program that stores a game board, and the next best move. When an identical game board is seen in the future, the best move can be retrieved.

A program that increases a database of facts could be considered to be learning by rote.

Learning by BEING TOLD (programmed)
This is storing and remembering, but implies some kind of understanding / integration of what is being told with previous knowledge - not just facts, but procedures or plans.

Page 5:


Types of Learning

Learning by EXAMPLE (trained/taught)

This involves a benevolent teacher who gives classified examples to the learner. The learner performs some generalisation over the examples to infer new knowledge. Previous knowledge may be used to steer the generalisations.

Learning by ANALOGY

Here the learner performs the generalisation based on some previously learnt situation, transferring what was learned in a known case to a new, similar one.

Page 6:


Types of Learning

Learning by OBSERVATION (self-taught)
This is similar to Learning from Examples, but without classification by a teacher - the learner uses pre-learned information to help classify observations (eg "conceptual clustering").

Learning by DISCOVERY

This is the highest level of learning, covering invention etc, and is composed of some of the other types above.

Page 7:


Types of Learning

Our CLASSIFICATION is based loosely on the "amount of autonomy" or processing required to effect a change of behaviour in an agent. In increasing order of the autonomy or processing required by the learner:

by rote
by being told
by example
by analogy
by observation
by discovery

Page 8:

Another Way to Categorise Learning

TWO ASPECTS OF LEARNING:

A: KNOWLEDGE/SKILL ACQUISITION
Inputting NEW knowledge or procedures. For example, learning the rules of a new game.

B: KNOWLEDGE/SKILL REFINEMENT
Changing/integrating old knowledge to create better (operational) knowledge, with little or no new knowledge input.
- Learning heuristics (to improve search)
- For example, getting skilful at a new game

Acquisition and Refinement combine in obvious ways, eg using examples (new knowledge) to bias refining skills (old knowledge) in the areas covering the examples.


Page 9:


“Concrete” Example of Machine Learning: Learning Macros

Imagine you have just solved a problem - how or what can you *learn* from the process?

One way: learn the (minimal) characteristics of the situation in which you can apply the solution again.

For example, the solution may be a plan. Under what conditions can we use this plan again?

PROCESS: A planner solves a problem and induces one or more macros from the solution sequence by “compiling” (part of) the operator sequence into one macro.

1. Find a solution T = (o(1),...,o(N)) to a goal G from initial state I.
2. Form a Macro-Operator (macro) based on:
   Pre-condition: WP = Weakest Precondition(T, G)
   Post-condition: G
3. In the future, if G is to be achieved and WP is true in the current state, apply T.
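The three steps above can be sketched in code. This is a minimal illustration, not the lecture's own implementation: the fact representation and function names are my own. Facts are strings, an operator is a tuple (name, preconditions, add effects, delete effects) of fact sets, and the goal is regressed backwards through the solution with the rule WP(action, G) = pre(action) U (G \ eff+(action)).

```python
# Minimal sketch of macro learning by goal regression.
# An operator is (name, pre, eff_plus, eff_minus), each field a set of
# string facts; a solution T is a list of such operators in order.

def weakest_precondition(solution, goal):
    """Step 1 -> 2: regress the goal backwards through the solution,
    using WP(action, G) = pre(action) | (G - eff+(action))."""
    wp = set(goal)
    for name, pre, eff_plus, eff_minus in reversed(solution):
        # Sanity check: the action must not delete a fact still needed.
        assert not (wp & eff_minus), f"{name} deletes a required fact"
        wp = pre | (wp - eff_plus)
    return wp

def form_macro(solution, goal):
    """Step 2: package (WP(T,G), G, T) as a macro-operator."""
    return (weakest_precondition(solution, goal), set(goal), solution)

def applicable(macro, state, goal):
    """Step 3: the macro fires if WP holds in the current state and
    the stored post-condition G is among the goals we want."""
    wp, g, _ = macro
    return wp <= state and g <= goal
```

A macro built this way is only a cached plan: `applicable` checks set inclusion, so it never fires in a state where the stored solution could fail under this representation.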

Page 10:


Learning Macros – Rocket Example

Operator Schema:

move(R,A,B)
  pre: at(R,A), A \= B
  eff+: at(R,B)
  eff-: at(R,A)

load(R,C,L)
  pre: at(R,L), at(C,L)
  eff+: in(C,R)
  eff-: at(C,L)

unload(R,C,L)
  pre: at(R,L), in(C,R)
  eff+: at(C,L)
  eff-: in(C,R)

NB the fact that the operators are represented DECLARATIVELY makes it easier for processes to reason with them (not just Planning but Learning also).

Page 11:


Learning Macros

WP(action, goal) = pre(action) U (goal \ eff+(action))

e.g. WP(unload(r,c1,paris), {at(c1,paris), at(c2,paris)}) = {at(r,paris), in(c1,r), at(c2,paris)}

EXAMPLE: Now try WP(T,G) when…

Initial State: at(r,london), at(c1,london), at(c2,london)
Goal: at(c1,paris)
Solution: load(r,c1,london), move(r,london,paris), unload(r,c1,paris)

Macro is OP with precondition WP(T,G) and eff+ containing G
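The WP computation for this rocket problem can be carried out mechanically. The sketch below is my own illustrative code, using the ground instances of the operator schemas from the previous page; the A \= B constraint and the delete effects are omitted, since the slide's WP rule does not use them.

```python
# Regress the goal backwards through the rocket solution using
# WP(action, G) = pre(action) | (G - eff+(action)).

def wp_step(pre, eff_plus, goal):
    return set(pre) | (set(goal) - set(eff_plus))

# Ground operator instances as (pre, eff+) pairs.
load_op   = ({"at(r,london)", "at(c1,london)"}, {"in(c1,r)"})
move_op   = ({"at(r,london)"},                  {"at(r,paris)"})
unload_op = ({"at(r,paris)", "in(c1,r)"},       {"at(c1,paris)"})

goal = {"at(c1,paris)"}
wp = goal
for pre, eff_plus in reversed([load_op, move_op, unload_op]):
    wp = wp_step(pre, eff_plus, wp)

print(sorted(wp))  # -> ['at(c1,london)', 'at(r,london)']
```

The result WP(T,G) = {at(r,london), at(c1,london)} is a subset of the initial state, as expected: at(c2,london) plays no part in the solution, so the macro applies in any state where the rocket and c1 are both in London.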

Page 12:


Applying Macros

The learned Macro is (WP(T,G), G, T). We can further GENERALISE the Macro (WP(T,G), G, T) by changing constants to variables, because operators are NOT dependent on particular instances of variables.
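Turning constants into variables can be sketched as follows; this is an illustrative helper of my own, not code from the lecture. Each distinct constant appearing in a set of facts is consistently replaced by a fresh variable, and the returned mapping lets the same substitution be applied to WP, G, and T so the macro stays consistent.

```python
def lift(facts):
    """Replace each distinct constant in a set of string facts like
    'at(r,london)' with a fresh variable ?X0, ?X1, ...; sorting the
    facts first makes the variable naming deterministic."""
    mapping = {}          # constant -> variable, shared across facts
    lifted = []
    for fact in sorted(facts):
        pred, args = fact.rstrip(")").split("(")
        new_args = []
        for const in args.split(","):
            const = const.strip()
            if const not in mapping:
                mapping[const] = f"?X{len(mapping)}"
            new_args.append(mapping[const])
        lifted.append(f"{pred}({','.join(new_args)})")
    return set(lifted), mapping
```

For example, lifting the precondition {at(r,london), at(c1,london)} yields {at(?X0,?X1), at(?X2,?X1)}: the shared constant london maps to the same variable in both facts, which is exactly the consistency the generalised macro needs.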

APPLICATION: In state-space search, if a state is encountered which contains WP(T,G), and we want to achieve a goal G' which contains G, then APPLY T.

NB: Macros can speed up solutions, but can also cause MORE SEARCH if not used wisely. So we have to be careful what we learn!

Page 13:


Conclusion

There are various types of Learning manifest in nature and in AI.

Two important roles for Learning are in Knowledge Acquisition and Knowledge Refinement.

Macro acquisition is a form of Knowledge Refinement for Problem Solving / Planning, where procedures are learned to make plan generation more efficient. This can be done by working out the weakest precondition of a "solution" and storing it with the goal achieved.