TRANSCRIPT
ABML KNOWLEDGE REFINEMENT LOOP A CASE STUDY
Matej Guid, Martin Možina Vida Groznik, Aleksander Sadikov, Ivan Bratko
Artificial Intelligence Laboratory, Faculty of Computer and Information Science
University of Ljubljana, Slovenia
Dejan Georgiev, Zvezdan Pirtošek
Department of Neurology, University Medical Centre Ljubljana, Slovenia
The 20th International Symposium on Methodologies for Intelligent Systems (ISMIS)
4-7 December 2012, Macau
ABOUT
• Knowledge acquisition from data, using Machine Learning
• Goal of our approach: the acquired knowledge makes sense to a human expert
That is: it is formulated in a "human-like" manner, and is therefore easy for a human to understand
• ML technique: Argument Based Machine Learning, ABML (Možina, Žabkar, Bratko, 2007)
• Case study in neurology: diagnosis of tremors
TYPICAL PROBLEM WITH ML
May be fine for classification (diagnosis).
But: hard for the expert to understand, and may not provide a sensible explanation.
Examples (patients) → Learning program → Induced knowledge (rules, ...)
ILLUSTRATION
Learning to diagnose pneumonia
Patient no. | Temperature | Coughing | Gender | Pneumonia
1           | Normal      | No       | Female | No
2           | High        | Yes      | Male   | Yes
3           | Very high   | No       | Male   | Yes
A typical rule learner would induce the rule:
IF Gender = Male THEN Pneumonia = Yes
This rule will correctly diagnose all patients.
But it will explain Patient 3 by: "Has pneumonia because he is male."
HOW TO FIX THIS WITH ABML?
A possible expert's explanation for Patient 2:
"Patient 2 suffers from pneumonia because he has a high temperature."
In ABML we say: the expert annotated this case with a positive argument.
The previous rule is now not consistent with the argument.
ABML TAKES EXPERT’S ARGUMENT INTO ACCOUNT
ABML induces a rule consistent with the expert's argument:
IF Temperature > Normal THEN Pneumonia = Yes
This explains Patient 3 by: "Has pneumonia because he has a very high temperature."
POINT OF THIS ILLUSTRATION
• By annotating one chosen example, the expert leads the system to induce a rule compatible with the expert's knowledge
• The rule also covers other learning examples
• The rule enables sensible explanations of diagnoses
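The pneumonia illustration can be sketched in code. This is a toy reconstruction, not the actual ABCN2 learner: temperature is encoded as an ordinal value so that "Temperature > Normal" conditions can be searched, and the expert's argument restricts which pure rules are acceptable.

```python
# Toy sketch of argument-constrained rule search on the pneumonia table
# (an illustration only, not the actual ABCN2 algorithm).

TEMP = {"normal": 0, "high": 1, "very high": 2}  # ordinal encoding

patients = [
    {"id": 1, "temp": "normal",    "cough": False, "male": False, "pneumonia": False},
    {"id": 2, "temp": "high",      "cough": True,  "male": True,  "pneumonia": True},
    {"id": 3, "temp": "very high", "cough": False, "male": True,  "pneumonia": True},
]

def pure_rules(data):
    """Return (name, condition) pairs that cover at least one example
    and only positive (pneumonia) examples."""
    rules = []
    for attr in ("cough", "male"):  # boolean conditions: attr == True
        if any(p[attr] for p in data) and all(p["pneumonia"] for p in data if p[attr]):
            rules.append((attr + " = true", lambda p, a=attr: p[a]))
    for name, t in TEMP.items():    # ordinal conditions: temp > threshold
        covered = [p for p in data if TEMP[p["temp"]] > t]
        if covered and all(p["pneumonia"] for p in covered):
            rules.append(("temp > " + name, lambda p, t=t: TEMP[p["temp"]] > t))
    return rules

# Without arguments, "male = true" is among the pure rules (it covers
# patients 2 and 3) -- the source of the nonsensical explanation.
print([name for name, _ in pure_rules(patients)])

# The expert's argument for Patient 2: pneumonia BECAUSE of high
# temperature. The rule covering Patient 2 must use the argued
# attribute, which leaves only "temp > normal".
argued = patients[1]
consistent = [name for name, cond in pure_rules(patients)
              if name.startswith("temp") and cond(argued)]
print(consistent)  # ['temp > normal']
```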
REST OF TALK (BY MATEJ GUID)
• How to use ABML in an interaction loop between the expert and the system?
• Case study in neurology
• Experimental evaluation of this approach
KNOWLEDGE ELICITATION WITH ABML
ABML (argument-based machine learning): the experts' arguments constrain learning, so the obtained models (IF ... THEN ... rules) are consistent with expert knowledge.
The experts introduce: arguments, new concepts (attributes).
The system returns: critical examples, counter examples, human-understandable models.
Možina M. et al. Fighting Knowledge Acquisition Bottleneck with Argument Based Machine Learning. ECAI 2008.
DIFFERENTIATING TREMORS: DOMAIN DESCRIPTION
Three classes: Parkinsonian tremor (PT), essential tremor (ET), and mixed tremor (MT); class sizes: 50, 23, and 41 patients.
Data set of 114 patients:
• learning set: 47 examples
• test set: 67 examples
The patients were described by 45 attributes.
Typical PT features: resting tremor, bradykinesia, rigidity, Parkinsonian spiral drawings, ...
Typical ET features: postural tremor, kinetic tremor, harmonics, essential spiral drawings, ...
ABML KNOWLEDGE REFINEMENT LOOP
Step 1: Learn a hypothesis with ABML.
Step 2: Find the "most critical" example (if none is found, stop).
Step 3: The expert explains the example.
Return to Step 1.
ABML KNOWLEDGE REFINEMENT LOOP: STEP 3 IN DETAIL
Step 3a: Explaining the critical example (in natural language)
Step 3b: Adding arguments to the example
Step 3c: Discovering counter examples
Step 3d: Improving arguments with counter examples
Return to Step 3c if a counter example is found.
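The loop above can be sketched as runnable code. All function names and stubs here are illustrative (the actual learner is ABCN2); the stubs replay the E.61 story from later in the talk, where the expert's first argument is defeated by a counter example until it is extended.

```python
# Minimal runnable skeleton of the ABML refinement loop (Steps 1-3
# above). All names are illustrative; the actual system is ABCN2.

def refinement_loop(examples, learn, most_critical, get_counters,
                    expert_explain, expert_improve, max_iters=10):
    model = None
    for _ in range(max_iters):
        model = learn(examples)                    # Step 1
        critical = most_critical(model, examples)  # Step 2
        if critical is None:
            break                                  # no critical example: stop
        arg = expert_explain(critical)             # Steps 3a-3b
        while True:                                # Steps 3c-3d
            counters = get_counters(examples, critical, arg)
            if not counters:
                break
            arg = expert_improve(arg, counters)
        critical["arguments"] = [arg]
        # Return to Step 1 (next iteration of the outer loop).
    return model

# Toy stubs replaying the E.61 story: counter example E.32 (PT)
# defeats the argument until BRADYKINESIA = false is added.
def learn(exs):
    return [e.get("arguments", []) for e in exs]   # "model" = arguments so far
def most_critical(model, exs):
    unexplained = [e for e in exs if not e.get("arguments")]
    return unexplained[0] if unexplained else None
def get_counters(exs, critical, arg):
    return [] if "BRADYKINESIA = false" in arg else ["E.32 (PT)"]
def expert_explain(e):
    return "POSTURAL = true AND RESTING = false"
def expert_improve(arg, counters):
    return arg + " AND BRADYKINESIA = false"

examples = [{"id": "E.61", "class": "MT"}]
refinement_loop(examples, learn, most_critical, get_counters,
                expert_explain, expert_improve)
print(examples[0]["arguments"])
```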
FIRST CRITICAL EXAMPLE
E.65 (MT): the initial model could not explain why this example belongs to class MT.
BRADYKINESIA.LEFT = 4
BRADYKINESIA.RIGHT = 4
KINETIC.TREMOR.UP.LEFT = 1
KINETIC.TREMOR.UP.RIGHT = 0
POSTURAL.TREMOR.UP.LEFT = 1
POSTURAL.TREMOR.UP.RIGHT = 0
QUALITATIVE.SPIRAL = ?
RESTING.TREMOR.UP.LEFT = 0
RESTING.TREMOR.UP.RIGHT = 0
RIGIDITY.UP.LEFT = 3
RIGIDITY.UP.RIGHT = 3
BARE.LEFT.FREQ.HARMONICS = no
BARE.RIGHT.FREQ.HARMONICS = no
TEMPLATE.LEFT.FREQ.HARMONICS = yes
TEMPLATE.RIGHT.FREQ.HARMONICS = no
...
Question to the expert: Which features are in favor of ET and which are in favor of PT?
The expert introduced new attributes and gave two arguments:
ET: HARMONICS = true
PT: BRADYKINESIA = true
COUNTER EXAMPLES
E.65 (MT): the critical example, with the two arguments attached.
Counter examples: E.67 (PT) and E.12 (ET).
Q: What is the most important feature in favor of ET in E.65 that does not apply to E.67?
A: An error in the data: HARMONICS in E.67 was changed to false.
Q: What is the most important feature in favor of PT in E.65 that does not apply to E.12?
A: A misdiagnosis: some arguments in favor of PT in E.12 had been overlooked; E.12 was changed from ET to MT.
ANOTHER CRITICAL EXAMPLE
E.61 (MT): the model could not explain why this example belongs to class MT.
BRADYKINESIA = false
KINETIC.TREMOR.UP.LEFT = 0
KINETIC.TREMOR.UP.RIGHT = 0
POSTURAL.TREMOR.UP.LEFT = 0
POSTURAL.TREMOR.UP.RIGHT = 3
QUALITATIVE.SPIRAL = Parkinsonian
RESTING.TREMOR.UP.LEFT = 0
RESTING.TREMOR.UP.RIGHT = 0
RIGIDITY.UP.LEFT = 0
RIGIDITY.UP.RIGHT = 1
HARMONICS = false
...
Question to the expert: Which features are in favor of ET?
The expert's argument:
ET: POSTURAL = true AND RESTING = false
COUNTER EXAMPLES
E.61 (MT): the critical example, with the argument attached.
Counter example: E.32 (PT).
Q: What is the most important feature in favor of ET in E.61 that does not apply to E.32?
A: the absence of BRADYKINESIA in E.61.
The argument in favor of ET in E.61 was thus extended to:
POSTURAL = true AND RESTING = false AND BRADYKINESIA = false
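Counter-example detection as used in Steps 3c-3d can be sketched as follows. The attribute values given for E.32 are assumed for illustration only (the talk states that E.32 is a PT case defeated by adding BRADYKINESIA = false, not its full record).

```python
# Sketch of counter-example detection: a counter example is an example
# from a different class that satisfies every condition of the argument
# attached to the critical example. E.32's attribute values below are
# assumed for illustration.

def counter_examples(examples, critical, argument):
    """argument: dict mapping attribute name -> required value."""
    return [e for e in examples
            if e["class"] != critical["class"]
            and all(e.get(a) == v for a, v in argument.items())]

e61 = {"id": "E.61", "class": "MT", "POSTURAL": True, "RESTING": False, "BRADYKINESIA": False}
e32 = {"id": "E.32", "class": "PT", "POSTURAL": True, "RESTING": False, "BRADYKINESIA": True}

initial  = {"POSTURAL": True, "RESTING": False}
extended = {"POSTURAL": True, "RESTING": False, "BRADYKINESIA": False}

print([e["id"] for e in counter_examples([e61, e32], e61, initial)])   # E.32 defeats the argument
print([e["id"] for e in counter_examples([e61, e32], e61, extended)])  # no counter examples remain
```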
ABML KNOWLEDGE REFINEMENT LOOP: STEP 3 ON EXAMPLE E.61
Step 3a: Explaining the critical example (in natural language):
"Presence of postural tremor and absence of resting tremor speak in favor of ET..."
(POSTURAL.TREMOR.UP.LEFT = 0, POSTURAL.TREMOR.UP.RIGHT = 3; RESTING.TREMOR.UP.LEFT = 0, RESTING.TREMOR.UP.RIGHT = 0)
Step 3b: Adding arguments to the example: POSTURAL = true, RESTING = false
Step 3c: Discovering counter examples: E.32 (PT) counters the argument ET: POSTURAL = true AND RESTING = false
Step 3d: Improving the argument with the counter example: POSTURAL = true AND RESTING = false AND BRADYKINESIA = false
THE FINAL MODEL
The final model contains rules with pure class distributions.
During the process of knowledge elicitation:
• 15 arguments were given by the expert
• 14 new attributes were included in the domain
• 21 attributes were excluded from the domain
CONSISTENCY WITH THE EXPERT’S KNOWLEDGE
Each rule was graded:
• Adequate (score 2): a rule consistent with expert knowledge, ready to be used as a strong argument in favor of ET or PT
• Reasonable (score 1): a rule consistent with expert knowledge, but insufficient to decide in favor of ET or PT on its basis alone
• Counter-intuitive (score 0): an illogical rule, in contradiction with expert knowledge
THE FINAL MODEL
No counter-intuitive rules; pure distributions.
Classification accuracies on the test set:
Method                | Initial model | Final model
Naive Bayes           | 63%           | 74%
kNN                   | 58%           | 81%
Rule learning (ABCN2) | 52%           | 82%
The accuracies of all methods improved after adding the new attributes.
ABML REFINEMENT LOOP & KNOWLEDGE ELICITATION
ABML (argument-based machine learning) with the refinement loop:
• Explaining a single example makes it easier for experts to articulate knowledge.
• "Critical" examples ensure the expert provides only relevant knowledge.
• "Counter" examples detect deficiencies in explanations.
QUESTIONS AND ANSWERS
dr. Matej Guid, Artificial Intelligence Laboratory, Faculty of Computer and Information Science, University of Ljubljana. Web page: http://www.ailab.si/matej