Knowledge Acquisition and Problem Solving

Mixed-initiative Problem Solving and Knowledge Base Refinement

Page 1: Knowledge Acquisition and Problem Solving

2004, G.Tecuci, Learning Agents Center

CS 785 Fall 2004

Learning Agents Center and Computer Science Department

George Mason University

Gheorghe Tecuci [email protected]

http://lac.gmu.edu/

Page 2: Knowledge Acquisition and Problem Solving


Overview

General Presentation of the Rule Refinement Method

Characterization of the Disciple Rule Learning Method

Recommended Reading

The Rule Refinement Problem and Method: Illustration

Demo: Problem Solving and Rule Refinement

Another Illustration of the Rule Refinement Method

Integrated Modeling, Learning, and Problem Solving

Page 3: Knowledge Acquisition and Problem Solving


The rule refinement problem (definition)

GIVEN:

• a plausible version space rule;

• a positive or a negative example of the rule (i.e. a correct or an incorrect problem solving episode);

• a knowledge base that includes an object ontology and a set of problem solving rules;

• an expert who understands why the example is positive or negative, and can answer the agent's questions.

DETERMINE:

• an improved rule that covers the example if it is positive, or does not cover the example if it is negative;

• an extended object ontology (if needed for rule refinement).

Page 4: Knowledge Acquisition and Problem Solving


Initial example from which the rule was learned

I need to:
Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Question: Which is a member of Allied_Forces_1943?
Answer: US_1943
Therefore I need to:
Identify and test a strategic COG candidate for US_1943

This is an example of a problem solving step from which the agent will learn a general problem solving rule.

Page 5: Knowledge Acquisition and Problem Solving


Learned rule to be refined

FORMAL STRUCTURE OF THE RULE

IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition:
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2

INFORMAL STRUCTURE OF THE RULE

IF: Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1?
Answer: ?O2
THEN: Identify and test a strategic COG candidate for ?O2

Page 6: Knowledge Acquisition and Problem Solving


The agent uses the partially learned rules in problem solving. The solutions generated by the agent when it uses the plausible upper bound condition have to be confirmed or rejected by the expert. We will now present how the agent improves (refines) its rules based on these examples. In essence, the plausible lower bound condition is generalized and the plausible upper bound condition is specialized, both conditions converging toward one another.

The next slide illustrates the rule refinement process. Initially the agent's knowledge base contains no tasks or rules. The expert teaches the agent to reduce the task "Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943" to the task "Identify and test a strategic COG candidate for US_1943".

From this task reduction the agent learns a plausible version space task reduction rule, as has been illustrated before. Now the agent can use this rule in problem solving, proposing to reduce the task "Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943" to the task "Identify and test a strategic COG candidate for Germany_1943".

The expert accepts this reduction as correct, and the agent refines the rule. In the following we show the internal reasoning of the agent that corresponds to this behavior.

Page 7: Knowledge Acquisition and Problem Solving


Rule refinement method

[Diagram: from a PVS rule, Learning by Analogy and Experimentation generates examples of task reductions. A correct example is processed by Learning from Examples; an incorrect example, together with its failure explanation, is processed by Learning from Explanations. The refined rule is stored in the Knowledge Base.]

Page 8: Knowledge Acquisition and Problem Solving


Version space rule learning and refinement

Let E1 be the first task reduction from which the rule is learned. The agent learns a rule with a very specific lower bound condition (LB) and a very general upper bound condition (UB).

Let E2 be a new task reduction generated by the agent and accepted as correct by the expert. The agent then generalizes LB as little as possible to cover it.

Let E3 be a new task reduction generated by the agent and rejected by the expert. The agent then specializes UB as little as possible to uncover it while remaining more general than LB.

After several iterations of this process LB may become identical to UB (UB=LB), and a rule with an exact condition is learned.

Page 9: Knowledge Acquisition and Problem Solving


[Diagram of the rule learning and refinement interaction:]

1. The expert provides an example:
   I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
   Which is a member of Allied_Forces_1943? US_1943
   Therefore I need to: Identify and test a strategic COG candidate for US_1943
2. The agent learns Rule_15 from this example.
3. The agent applies Rule_15 to propose a new reduction:
   I need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
   Which is a member of European_Axis_1943? Germany_1943
   Therefore I need to: Identify and test a strategic COG candidate for Germany_1943
4. The expert accepts the example.
5. The agent refines Rule_15.

Page 10: Knowledge Acquisition and Problem Solving


Rule refinement with a positive example

Positive example that satisfies the upper bound:
I need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Therefore I need to: Identify and test a strategic COG candidate for Germany_1943
explanation: European_Axis_1943 has_as_member Germany_1943

Condition satisfied by the positive example (less general than the plausible upper bound):
?O1 is European_Axis_1943, has_as_member ?O2
?O2 is Germany_1943

The rule being refined:

IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition:
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2

Page 11: Knowledge Acquisition and Problem Solving


The upper right side of this slide shows an example generated by the agent. This example is generated because it satisfies the plausible upper bound condition of the rule (as shown by the red arrows). This example is accepted as correct by the expert. Therefore the plausible lower bound condition is generalized to cover it, as shown in the following.

Page 12: Knowledge Acquisition and Problem Solving


Minimal generalization of the plausible lower bound

Plausible Lower Bound Condition (from rule):
?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
?O2 is single_state_force

Condition satisfied by the positive example:
?O1 is European_Axis_1943, has_as_member ?O2
?O2 is Germany_1943

Their minimal generalization, which must remain less general than (or at most as general as) the Plausible Upper Bound Condition (?O1 is multi_member_force, has_as_member ?O2; ?O2 is force):

New Plausible Lower Bound Condition:
?O1 is multi_state_alliance, has_as_member ?O2
?O2 is single_state_force

Page 13: Knowledge Acquisition and Problem Solving


The lower left side of this slide shows the plausible lower bound condition of the rule. The lower right side shows the condition corresponding to the generated positive example. These two conditions are generalized as shown in the middle of this slide, by using the climbing generalization hierarchy rule. Notice, for instance, that equal_partners_multi_state_alliance and European_Axis_1943 are generalized to multi_state_alliance. This generalization is based on the object ontology, as illustrated in the following slide. Indeed, multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
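The climbing-the-generalization-hierarchy operation amounts to walking up parent links in the ontology until a concept covering the new instance is reached. The following is a minimal sketch, not Disciple's implementation; the parent table encodes the hierarchy fragment on the next slide, and placing European_Axis_1943 under dominant_partner_multi_state_alliance is an assumption consistent with the stated result.

```python
# Hypothetical parent links for the ontology fragment discussed above.
# European_Axis_1943's placement is an assumption for illustration.
PARENT = {
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
    "multi_state_alliance": "multi_state_force",
    "multi_state_force": "multi_member_force",
    "multi_member_force": "force",
    "European_Axis_1943": "dominant_partner_multi_state_alliance",
}

def ancestors(concept):
    """Chain of concepts from `concept` up to the root, inclusive."""
    chain = [concept]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def minimal_generalization(concept, instance):
    """Lowest ancestor of `concept` that also covers `instance`."""
    covering = set(ancestors(instance)[1:])  # concepts the instance belongs to
    for c in ancestors(concept):
        if c in covering:
            return c
    return None
```

Under these assumptions, `minimal_generalization("equal_partners_multi_state_alliance", "European_Axis_1943")` climbs one level and returns multi_state_alliance, matching the generalization on the slide.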

Page 14: Knowledge Acquisition and Problem Solving


Forces (fragment of the object ontology)

force
  single_member_force
    single_state_force — e.g. US_1943, Germany_1943
    single_group_force
  multi_member_force
    composition_of_forces
    multi_state_force
      multi_state_alliance
        equal_partners_multi_state_alliance — e.g. Allied_Forces_1943
        dominant_partner_multi_state_alliance — e.g. European_Axis_1943
      multi_state_coalition
        equal_partners_multi_state_coalition
        dominant_partner_multi_state_coalition
    multi_group_force

multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.

Page 15: Knowledge Acquisition and Problem Solving


Refined rule

The generalization changes only the plausible lower bound condition: ?O1 is equal_partners_multi_state_alliance becomes ?O1 is multi_state_alliance. The upper bound condition and the explanation are unchanged.

IF: Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition:
  ?O1 is multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN: Identify and test a strategic COG candidate for a force
  The force is ?O2

Page 16: Knowledge Acquisition and Problem Solving


Overview

General Presentation of the Rule Refinement Method

Characterization of the Disciple Rule Learning Method

Recommended Reading

The Rule Refinement Problem and Method: Illustration

Demo: Problem Solving and Rule Refinement

Another Illustration of the Rule Refinement Method

Integrated Modeling, Learning, and Problem Solving

Page 17: Knowledge Acquisition and Problem Solving


The rule refinement method: general presentation

Let R be a plausible version space rule, U its plausible upper bound condition, L its plausible lower bound condition, and E a new example of the rule.

1. If E is covered by U but it is not covered by L then

• If E is a positive example then L needs to be generalized as little as possible to cover it while remaining less general or at most as general as U.

• If E is a negative example then U needs to be specialized as little as possible to no longer cover it while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.

2. If E is covered by L then

• If E is a positive example then R need not be refined.

• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.

3. If E is not covered by U then

• If E is a positive example then it represents a positive exception to the rule.

• If E is a negative example then no refinement is necessary.
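The three cases above can be sketched as a single dispatch procedure. This is a schematic, not Disciple's code: `covers`, `generalize`, and `specialize` stand in for the condition-matching and bound-adjustment operators described on the surrounding slides.

```python
# Schematic of the rule refinement case analysis. `rule` holds the two
# bounds; the operator arguments are placeholders for illustration.

def refine(rule, example, positive, covers, generalize, specialize):
    U, L = rule["upper"], rule["lower"]
    if covers(U, example):
        if covers(L, example):
            # Case 2: E is covered by the lower bound.
            if not positive:
                # Specialize both bounds; if impossible while keeping the
                # positives covered, E is a negative exception.
                rule["upper"] = specialize(U, example)
                rule["lower"] = specialize(L, example)
        else:
            # Case 1: E is covered by U but not by L.
            if positive:
                rule["lower"] = generalize(L, example)  # minimally, staying under U
            else:
                rule["upper"] = specialize(U, example)  # minimally, staying above L
    else:
        # Case 3: E is not covered by the upper bound.
        if positive:
            rule.setdefault("positive_exceptions", []).append(example)
        # A negative example outside U needs no refinement.
    return rule
```

With ontology-based conditions, `generalize` corresponds to climbing the generalization hierarchy and `specialize` to descending it.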

Page 18: Knowledge Acquisition and Problem Solving


1. If E is covered by U but it is not covered by L then

• If E is a positive example then L needs to be generalized as little as possible to cover it while remaining less general or at most as general as U.

The rule refinement method: general presentation


Page 19: Knowledge Acquisition and Problem Solving


1. If E is covered by U but it is not covered by L then

• If E is a negative example then U needs to be specialized as little as possible to no longer cover it while remaining more general than or at least as general as L.

Alternatively, both bounds need to be specialized.

The rule refinement method: general presentation


Strategy 1: Specialize UB by using a specialization rule (e.g. descending the generalization hierarchy, or specializing a numeric interval).

Page 20: Knowledge Acquisition and Problem Solving


The rule refinement method: general presentation

Strategy 2: Find a failure explanation EXw of why E is a wrong problem solving episode.

EXw identifies the features that make E a wrong problem solving episode. The inductive hypothesis is that correct problem solving episodes should not have these features. EXw is therefore taken as an example of a condition that correct problem solving episodes should not satisfy: an Except-When condition. Based on EXw, an initial Except-When plausible version space condition is generated; this condition is then itself learned from additional examples.
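Once an Except-When condition exists, rule application checks it alongside the main condition. A hedged sketch, where `covers` is a placeholder for the condition-matching procedure:

```python
# A rule with Except-When conditions fires only if the main condition
# covers the example and no Except-When condition does.

def rule_applies(main_cond, except_when_conds, example, covers):
    if not covers(main_cond, example):
        return False
    return not any(covers(ew, example) for ew in except_when_conds)
```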

Page 21: Knowledge Acquisition and Problem Solving


The rule refinement method: general presentation

Strategy 3: Find an additional explanation EXw that holds for the correct problem solving episodes but is not satisfied by the current wrong problem solving episode.

Specialize both bounds of the plausible version space condition by:
- adding to the upper bound the most general generalization of EXw consistent with the examples encountered so far;
- adding to the lower bound the least general generalization of EXw consistent with the examples encountered so far.

Page 22: Knowledge Acquisition and Problem Solving


2. If E is covered by L then

• If E is a positive example then R need not be refined.

The rule refinement method: general presentation


Page 23: Knowledge Acquisition and Problem Solving


2. If E is covered by L then

• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.

The rule refinement method: general presentation


Strategy 1: Find a failure explanation EXw of why E is a wrong problem solving episode and create an Except-When plausible version space condition, as indicated before.

Page 24: Knowledge Acquisition and Problem Solving


3. If E is not covered by U then

• If E is a positive example then it represents a positive exception to the rule. • If E is a negative example then no refinement is necessary.

The rule refinement method: general presentation


Page 25: Knowledge Acquisition and Problem Solving


Overview

General Presentation of the Rule Refinement Method

Characterization of the Disciple Rule Learning Method

Recommended Reading

The Rule Refinement Problem and Method: Illustration

Demo: Problem Solving and Rule Refinement

Another Illustration of the Rule Refinement Method

Integrated Modeling, Learning, and Problem Solving

Page 26: Knowledge Acquisition and Problem Solving


Initial example from which a rule was learned

IF the task to accomplish is:
  Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN:
  Industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943

Page 27: Knowledge Acquisition and Problem Solving


Learned PVS rule to be refined

FORMAL STRUCTURE OF THE RULE

IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition:
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2

INFORMAL STRUCTURE OF THE RULE

IF: Identify the strategic COG candidates with respect to the industrial civilization of ?O1
Question: Who or what is a strategically critical industrial civilization element in ?O1?
Answer: ?O2
THEN: ?O2 is a strategic COG candidate for ?O1

Page 28: Knowledge Acquisition and Problem Solving


Positive example covered by the upper bound

Positive example that satisfies the upper bound:
IF the task to accomplish is:
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is Germany_1943
THEN accomplish the task:
  A strategic COG relevant factor is a strategic COG candidate for a force
  The force is Germany_1943
  The strategic COG relevant factor is Industrial_capacity_of_Germany_1943
explanation: Germany_1943 has_as_industrial_factor Industrial_capacity_of_Germany_1943; Industrial_capacity_of_Germany_1943 is_a_major_generator_of War_materiel_and_fuel_of_Germany_1943

Condition satisfied by the positive example (less general than the plausible upper bound):
?O1 IS Germany_1943, has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
?O3 IS War_materiel_and_fuel_of_Germany_1943

The rule's plausible upper bound condition (?O1 IS Force, ...) and plausible lower bound condition (?O1 IS US_1943, ...) are as shown on the previous slide.

Page 29: Knowledge Acquisition and Problem Solving


Minimal generalization of the plausible lower bound

Plausible Lower Bound Condition (from rule):
?O1 IS US_1943, has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
?O3 IS War_materiel_and_transports_of_US_1943

Condition satisfied by the positive example:
?O1 IS Germany_1943, has_as_industrial_factor ?O2
?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
?O3 IS War_materiel_and_fuel_of_Germany_1943

Their minimal generalization, which must remain less general than (or at most as general as) the Plausible Upper Bound Condition (?O1 IS Force; ?O2 IS Industrial_factor; ?O3 IS Product):

New Plausible Lower Bound Condition:
?O1 IS Single_state_force, has_as_industrial_factor ?O2
?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materiel

Page 30: Knowledge Acquisition and Problem Solving


Generalization hierarchy of forces

[Diagram: under <object>, the concepts Opposing_force and Group; Force specializes into Single_state_force, Single_group_force, Multi_state_force, and Multi_group_force. Anglo_allies_1943 (with component_state US_1943 and Britain_1943) and European_axis_1943 (with component_state Germany_1943 and Italy_1943) are multi-state forces; US_1943, Britain_1943, Germany_1943, and Italy_1943 are single-state forces.]

Page 31: Knowledge Acquisition and Problem Solving


Generalized rule

Only the plausible lower bound condition changes (it is generalized as shown on the previous slides); the upper bound condition and the explanation are unchanged.

IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition:
  ?O1 IS Single_state_force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2

Page 32: Knowledge Acquisition and Problem Solving


A negative example covered by the upper bound

Negative example that satisfies the upper bound:
IF the task to accomplish is:
  Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is Italy_1943
THEN accomplish the task:
  A strategic COG relevant factor is a strategic COG candidate for a force
  The force is Italy_1943
  The strategic COG relevant factor is Farm_implement_industry_of_Italy_1943
explanation: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943; Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

Condition satisfied by the negative example (less general than the plausible upper bound):
?O1 IS Italy_1943, has_as_industrial_factor ?O2
?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
?O3 IS Farm_implements_of_Italy_1943

The rule's plausible upper bound condition (?O1 IS Force, ...) and new plausible lower bound condition (?O1 IS Single_state_force, ...) are as shown on the previous slides.

Page 33: Knowledge Acquisition and Problem Solving


Automatic generation of plausible explanations

The rejected problem solving episode (the expert's rejection is marked "No!" on the slide):

IF the task to accomplish is:
  Identify the strategic COG candidates with respect to the industrial civilization of Italy_1943
Question: Who or what is a strategically critical industrial civilization element in Italy_1943?
Answer: Industrial_capacity_of_Italy_1943
THEN:
  Industrial_capacity_of_Italy_1943 is a strategic COG candidate for Italy_1943

explanation: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943; Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

The agent generates a list of plausible explanations from which the expert has to select the correct one:
• Farm_implements_of_Italy_1943 IS_NOT Strategically_essential_goods_or_materiel
• Farm_implement_industry_of_Italy_1943 IS_NOT Industrial_capacity
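One plausible way to generate such IS_NOT candidates is to compare each object bound in the rejected example against the concept established for that variable by the positive examples. This is a sketch under assumptions, not Disciple's actual procedure; `isa` is a hypothetical ontology membership test.

```python
# Propose "X IS_NOT C" failure explanations for a rejected example.

def plausible_failure_explanations(example_bindings, positive_concepts, isa):
    """example_bindings: {variable: object in the rejected example};
    positive_concepts: {variable: concept satisfied by the positives};
    isa(obj, concept): hypothetical instance-of test over the ontology."""
    candidates = []
    for var, obj in example_bindings.items():
        concept = positive_concepts.get(var)
        if concept and not isa(obj, concept):
            candidates.append(f"{obj} IS_NOT {concept}")
    return candidates
```

The expert then selects the candidate that actually explains the failure, as on the slide above.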

Page 34: Knowledge Acquisition and Problem Solving


Minimal specialization of the plausible upper bound

Plausible Upper Bound Condition (from rule):
?O1 IS Force, has_as_industrial_factor ?O2
?O2 IS Industrial_factor, is_a_major_generator_of ?O3
?O3 IS Product

Condition satisfied by the negative example:
?O1 IS Italy_1943, has_as_industrial_factor ?O2
?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
?O3 IS Farm_implements_of_Italy_1943

The upper bound is specialized (?O3 IS Product becomes ?O3 IS Strategically_essential_goods_or_materiel) so that it no longer covers the negative example, while remaining more general than (or at least as general as) the lower bound:

New Plausible Upper Bound Condition:
?O1 IS Force, has_as_industrial_factor ?O2
?O2 IS Industrial_factor, is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materiel

New Plausible Lower Bound Condition:
?O1 IS Single_state_force, has_as_industrial_factor ?O2
?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
?O3 IS Strategically_essential_goods_or_materiel

Page 35: Knowledge Acquisition and Problem Solving


Fragment of the generalization hierarchy

[Diagram: Product specializes into Strategically_essential_goods_or_materiel and Non-strategically_essential_goods_or_services; Farm_implements_of_Italy_1943, the negative example (-), is an instance of Non-strategically_essential_goods_or_services. War_materiel_and_transports and War_materiel_and_fuel are subconcepts of Strategically_essential_goods_or_materiel. Related concepts under <object> include Resource_or_infrastructure_element, Raw_material with subconcept Strategic_raw_material, Strategically_essential_resource_or_infrastructure_element, and Strategically_essential_infrastructure_element with subconcepts Main_airport, Main_seaport, Sole_airport, and Sole_seaport. Descending from the old UB concept (Product) to the new one (Strategically_essential_goods_or_materiel) excludes the negative example while keeping the positive examples (+) covered.]

Page 36: Knowledge Acquisition and Problem Solving


Specialized rule

Only the plausible upper bound condition changes (?O3 IS Product is specialized to ?O3 IS Strategically_essential_goods_or_materiel); the lower bound condition and the explanation are unchanged.

IF: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
Plausible Lower Bound Condition:
  ?O1 IS Single_state_force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
THEN: A strategic COG relevant factor is a strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2

Page 37: Knowledge Acquisition and Problem Solving


Overview

General Presentation of the Rule Refinement Method

Characterization of the Disciple Rule Learning Method

Recommended Reading

The Rule Refinement Problem and Method: Illustration

Demo: Problem Solving and Rule Refinement

Another Illustration of the Rule Refinement Method

Integrated Modeling, Learning, and Problem Solving

Page 38: Knowledge Acquisition and Problem Solving


Control of modeling, learning and problem solving

[Diagram: the Input Task enters Mixed-Initiative Problem Solving, which uses the Ontology + Rules to generate a Reduction. If the expert accepts the reduction, Rule Refinement (generalization) follows; if the expert rejects it, Rule Refinement and Task Refinement (specialization) follow; a New Reduction supplied by the expert goes through Modeling, Formalization, and Learning. The process iterates until a Solution is produced.]

Page 39: Knowledge Acquisition and Problem Solving


This slide shows the interaction between the expert and the agent when the agent has already learned some rules.

1. This interaction is governed by the mixed-initiative problem solver.

2. The expert formulates the initial task.

3. Then the agent attempts to reduce this task by using the previously learned rules. Let us assume that the agent succeeds in proposing a reduction of the current task.

4. The expert has to accept the reduction if it is correct, or reject it if it is incorrect.

5. If the reduction proposed by the agent is accepted by the expert, the rule that generated it and its component tasks are generalized. Then the process resumes, with the agent attempting to reduce the new task.

6. If the reduction proposed by the agent is rejected, then the agent will have to specialize the rule, and possibly its component tasks.

7. In this case the expert will have to indicate the correct reduction, going through the normal steps of modeling, formalization, and learning. Similarly, when the agent cannot propose a reduction of the current task, the expert will have to indicate it, again going through the steps of modeling, formalization and learning.

The control of this interaction is done by the mixed-initiative problem solver tool.
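The interaction in steps 1-7 can be rendered as a schematic loop. The `agent` and `expert` objects and their method names are stand-ins invented for illustration, not Disciple's actual interfaces.

```python
# Schematic of the mixed-initiative modeling/learning/problem-solving loop.

def mixed_initiative_loop(task, agent, expert):
    while not agent.is_elementary(task):
        reduction = agent.propose_reduction(task)      # try the learned rules
        if reduction is None:                          # no rule applies:
            reduction = expert.model_reduction(task)   # modeling + formalization
            agent.learn_rule(reduction)                # rule learning
        elif expert.accepts(reduction):
            agent.refine_rule(reduction, positive=True)    # generalize
        else:
            agent.refine_rule(reduction, positive=False)   # specialize
            reduction = expert.model_reduction(task)       # expert shows the
            agent.learn_rule(reduction)                    # correct reduction
        task = reduction.subtask                       # continue with the subtask
    return task
```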

Page 40: Knowledge Acquisition and Problem Solving


A systematic approach to agent teaching

[Diagram: Identify and test a strategic COG candidate for the Sicily_1943 scenario. Allied_Forces_1943 is analyzed as an alliance and as individual states (US_1943, Britain_1943), and European_Axis_1943 as an alliance and as individual states (Germany_1943, Italy_1943). For each state the factors government, people, military, economy, and other factors are considered; the resulting branches are numbered 1 through 14 in the order in which the agent is taught.]

Page 41: Knowledge Acquisition and Problem Solving


This slide shows a recommended order of operations for teaching the agent:
• Modeling for branches #1 through #5
• Rule Learning for branches #1 through #5
• Problem solving, Rule refinement, Modeling, and Rule Learning for branches #6 through #10

You will notice that several of the rules learned from branch #1 will apply to generate branch #6. One only needs to model and teach Disciple for those steps where the previously learned rules do not apply (i.e., for the aspects where there are significant differences between US_1943 and Britain_1943 with respect to their governments). Similarly, several of the rules learned from branch #2 will apply to generate branch #7, and so on.

• Problem solving, Rule refinement, Modeling, and Rule Learning for branches #11 and #12

Again, many of the rules learned from branches #1 through #10 will apply to branches #11 and #12.

• Modeling for branch #13
• Rule Learning for branch #13
• Problem solving, Rule refinement, Modeling, and Rule Learning for branch #14

Page 42: Knowledge Acquisition and Problem Solving


Overview

General Presentation of the Rule Refinement Method

Characterization of the Disciple Rule Learning Method

Recommended Reading

The Rule Refinement Problem and Method: Illustration

Demo: Problem Solving and Rule Refinement

Another Illustration of the Rule Refinement Method

Integrated Modeling, Learning, and Problem Solving

Page 43: Knowledge Acquisition and Problem Solving

2004, G.Tecuci, Learning Agents Center

Characterization of the PVS rule

Page 44: Knowledge Acquisition and Problem Solving


This slide shows the relationship between the plausible lower bound condition, the plausible upper bound condition, and the exact (hypothetical) condition that the agent is attempting to learn. During rule learning, both the upper bound and the lower bound are generalized and specialized to converge toward one another and toward the hypothetical exact condition. This is different from the classical version space method where the upper bound is only specialized and the lower bound is only generalized.

Notice also that, as opposed to the classical version space method (where the exact condition is always between the upper and the lower bound conditions), in Disciple the exact condition may not include part of the plausible lower bound condition, and may include a part that is outside the plausible upper bound condition.

We say that the plausible lower bound is, AS AN APPROXIMATION, less general than the hypothetical exact condition. Similarly, the plausible upper bound is, AS AN APPROXIMATION, more general than the hypothetical exact condition.

These characteristics are a consequence of the incompleteness of the representation language (i.e. the incompleteness of the object ontology), of the heuristic strategies used to learn the rule, and of the fact that the object ontology may evolve during learning.
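To make the bound relationships concrete, here is a toy Python sketch in which a condition is approximated by the set of instances it covers. The real Disciple conditions are conjunctive constraints over the object ontology, and the instance names below are illustrative.

```python
def classify(example, plausible_lower, plausible_upper):
    """Three-way classification of an example against the two bounds.

    Because the bounds are only approximations of the hypothetical exact
    condition, the answers are plausible rather than guaranteed.
    """
    if example in plausible_lower:
        return "lower"       # covered by the plausible lower bound: likely positive
    if example in plausible_upper:
        return "upper-only"  # between the bounds: plausible, ask the expert
    return "outside"         # outside the plausible upper bound: likely negative

# Toy bounds (the lower bound is a subset of the upper bound):
lower = {"Germany_1943", "US_1943"}
upper = lower | {"Britain_1943", "Italy_1943"}
```

For instance, `classify("Italy_1943", lower, upper)` returns `"upper-only"`, the case where the expert's feedback is most valuable.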

Page 45: Knowledge Acquisition and Problem Solving


Characterization of the rule learning method

Uses the explanation of the first positive example to generate a much smaller version space than the classical version space method.

Conducts an efficient heuristic search of the version space, guided by explanations, and by the maintenance of a single upper bound condition and a single lower bound condition.

Will always learn a rule, even in the presence of exceptions.

Learns from a few examples and an incomplete knowledge base.

Uses a form of multistrategy learning that synergistically integrates learning from examples, learning from explanations, and learning by analogy, to compensate for the incomplete knowledge.

Uses mixed-initiative reasoning to involve the expert in the learning process.

Is applicable in complex real-world domains, being able to learn within a complex representation language.
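The single-bound search contrasted above with the classical version space method can be illustrated with a toy set-based sketch. All names are assumed; Disciple's actual generalization and specialization operate over the object ontology, not over sets of instances.

```python
def refine(lower, upper, example, is_positive):
    """Refine one lower bound and one upper bound with a classified example
    (toy model: a bound is the set of instances it covers)."""
    lower, upper = set(lower), set(upper)
    if is_positive:
        # Minimally generalize the bounds so they cover the positive example.
        lower.add(example)
        upper.add(example)
    elif example in lower:
        # A negative covered even by the lower bound is kept as an exception
        # (or motivates an Except-When condition) rather than deleted:
        # Disciple always learns a rule, even in the presence of exceptions.
        pass
    else:
        # Minimally specialize the upper bound to exclude the negative.
        upper.discard(example)
    return lower, upper

lo, up = refine({"US_1943"}, {"US_1943", "Italy_1943", "Japan_1943"},
                "Britain_1943", True)   # positive example generalizes the bounds
lo, up = refine(lo, up, "Japan_1943", False)  # negative example specializes them
```

Only one lower and one upper bound are ever maintained, which is why the search stays tractable in a complex representation language.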

Page 46: Knowledge Acquisition and Problem Solving


Problem solving with PVS rules

[Decision table: evaluating a conclusion generated by a rule with a main PVS condition and Except-When PVS conditions.
• If the main PVS condition is satisfied (at the plausible lower bound) and no Except-When PVS condition is satisfied, the rule's conclusion is (most likely) correct.
• If the main PVS condition is satisfied only at the plausible upper bound and no Except-When PVS condition is satisfied, the rule's conclusion is plausible.
• If an Except-When PVS condition is satisfied, the rule's conclusion is (most likely) incorrect.
• If the main PVS condition is not satisfied, the rule's conclusion is not plausible.]
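This evaluation reduces to a small predicate check. The sketch below is a simplification (a single main condition, callables standing in for the actual PVS conditions, made-up instance names):

```python
def conclusion_status(example, main_condition, except_when_conditions):
    """Evaluate a rule's conclusion for an example (simplified model)."""
    if not main_condition(example):
        return "not plausible"           # the main PVS condition fails
    if any(ew(example) for ew in except_when_conditions):
        return "most likely incorrect"   # an Except-When condition fires
    return "plausible"                   # main condition holds, no exception

# Hypothetical conditions for illustration:
is_protection_unit = lambda x: x in {"Republican_Guard", "Complex_of_Bunkers"}
known_exceptions = [lambda x: x == "Complex_of_Bunkers"]
```

Note the asymmetry: Except-When conditions can only veto a conclusion that the main condition already sanctions.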

Page 47: Knowledge Acquisition and Problem Solving


Overview

General Presentation of the Rule Refinement Method

Characterization of the Disciple Rule Learning Method

Recommended Reading

The Rule Refinement Problem and Method: Illustration

Demo: Problem Solving and Rule Refinement

Another Illustration of the Rule Refinement Method

Integrated Modeling, Learning, and Problem Solving

Page 48: Knowledge Acquisition and Problem Solving


Disciple-RKF/COG:

Integrated Modeling, Learning and Problem Solving

Page 49: Knowledge Acquisition and Problem Solving


Disciple uses the partially learned rules in problem solving and refines them based on the expert's feedback. This is done in the Refining mode.

Page 50: Knowledge Acquisition and Problem Solving


Disciple applies previously learned rules to other similar cases. The expert can expand the “More…” node to view the solution generated by the rule.

Page 51: Knowledge Acquisition and Problem Solving


The “?” indicates that Disciple is uncertain whether the reasoning step is correct. Disciple applies the rule learned from the Republican Guard Protection Unit to the System of Saddam doubles.

Page 52: Knowledge Acquisition and Problem Solving


The expert has to examine this step and indicate whether it is:
• correct and completely explained, by selecting “Correct Example”
• correct but incompletely explained, by selecting “Explain Example”
• incorrect, by selecting “Incorrect Example”

Page 53: Knowledge Acquisition and Problem Solving


The expert has indicated that the reasoning step is correct, and Disciple has generalized the plausible lower bound condition of the corresponding rule to cover this example.

Page 54: Knowledge Acquisition and Problem Solving


Following the same procedure, Disciple generalized the plausible lower bound condition of the rule used to generate this elementary solution.

Page 55: Knowledge Acquisition and Problem Solving


Another means of protection of Saddam Hussein is the Complex of Bunkers of Iraq 2003. Since this means of protection is different from the previously identified ones, the learned rules do not apply.

The expert has to provide the modeling that identifies the Complex of Bunkers of Iraq 2003 as a means of protection of Saddam Hussein and tests it for any significant vulnerabilities. This is done with the Modeling tool.

Page 56: Knowledge Acquisition and Problem Solving


Disciple starts the modeling tool with the appropriate task and suggests the question to ask.

Page 57: Knowledge Acquisition and Problem Solving


The expert develops a complete modeling for the Complex of Bunkers of Iraq 2003. When the modeling is completed, the expert returns to the teaching tool.

Page 58: Knowledge Acquisition and Problem Solving


Disciple can now learn new rules for the Complex of Bunkers of Iraq 2003 as a means of protection for Saddam Hussein.

Page 59: Knowledge Acquisition and Problem Solving


Recommended reading

Tecuci G., Boicu M., Boicu C., Marcu D., Stanescu B., and Barbulescu M., “The Disciple-RKF Learning and Reasoning Agent,” Research Report submitted for publication, Learning Agents Center, George Mason University, September 2004.

Tecuci G., Building Intelligent Agents, Academic Press, 1998, pp. 21-23, 27-32, 101-129, 198-228.

Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, “An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing,” invited paper for the special IAAI issue of AI Magazine, Vol. 22, No. 2, Summer 2001, pp. 43-61. http://lac.gmu.edu/publications/data/2001/COA-critiquer.pdf

Boicu M., Tecuci G., Stanescu B., Marcu D., and Cascaval C., “Automatic Knowledge Acquisition from Subject Matter Experts,” in Proceedings of the IEEE International Conference on Tools with Artificial Intelligence, Dallas, Texas, November 2001. http://lac.gmu.edu/publications/data/2001/ICTAI.doc