
User-Initiated Learning (UIL)

Kshitij Judah, Tom Dietterich, Alan Fern, Jed Irvine, Michael Slater, Prasad Tadepalli,

Oliver Brdiczka, Jim Thornton,

Jim Blythe,

Christopher Ellwood, Melinda Gervasio, Bill Jarrold

CALO: Intelligent Assistant for the Desktop Knowledge Worker

Learn to Understand Meetings

Learn to Keep User Organized

Learn to Manage Email

Learn to Prepare Information Products

Learn to Schedule and Execute

CALO: Learning to be an Intelligent Assistant

PAL Program Focus: “Learning in the Wild”


User-Initiated Learning

All of CALO’s learning components can perform Learning In The Wild (LITW)

But the learning tasks are all pre-defined by CALO’s engineers:

What to learn

What information is relevant for learning

How to acquire training examples

How to apply the learned knowledge

UIL Goal: Make it possible for the user to define new learning tasks after the system is deployed


Motivating Scenario: Forgetting to Set Sensitivity

TIMELINE: A scientist collaborates on a classified project with a research team

Sends email to the team and sets sensitivity to confidential

Sends email to a colleague ("Lunch today?") and does not set sensitivity to confidential

Sends email to the team and forgets to set sensitivity to confidential

Motivating Scenario: Forgetting to Set Sensitivity

TIMELINE (continued): The research team replies, "Please do not forget to set sensitivity when sending email"

The scientist teaches CALO to learn to predict whether the user has forgotten to set sensitivity

The scientist sends email to the team; CALO reminds the user to set sensitivity

User-CALO Interaction: Teaching CALO to Predict Sensitivity

[Architecture diagram: the user composes a new email in Instrumented Outlook, whose events feed Integrated Task Learning (procedure demonstration and learning task creation), producing a SPARK procedure that the user can modify. A User Interface for Feature Guidance collects user-selected features. A SAT-based reasoning system combines the SPARK procedure, the CALO Ontology, and the Knowledge Base (emails and related objects) to supply legal features, training examples, and class labels to the Machine Learner, which outputs a trained classifier.]

User-CALO Interaction: Teaching CALO to Predict Sensitivity

[Architecture diagram repeated; see the description above.]

Initiating Learning via Demonstration

LAPDOG: transforms an observed sequence of instrumented events into a SPARK procedure

The SPARK representation generalizes the dataflow between the actions of the workflow

Initiating Learning via Demonstration

TAILOR: supports procedure editing

For UIL, it allows adding a condition to one or more steps in a procedure

Initiating Learning via Demonstration

The condition becomes the new predicate to be learned
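As a rough illustration of this idea (a toy sketch only, not the SPARK/LAPDOG/TAILOR representation; all names below are hypothetical), the edited procedure can be viewed as a sequence of steps in which one step is guarded by a condition that the learner must fill in:

# Toy Python sketch of a procedure with a learnable branch condition.
# "Step", "Procedure", and "learned_branch_condition" are illustrative names,
# not CALO/SPARK identifiers.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    action: str
    condition: Optional[Callable[[dict], bool]] = None  # None = always execute

@dataclass
class Procedure:
    steps: list

    def run(self, email: dict) -> None:
        for step in self.steps:
            # A guarded step fires only when its (possibly learned) condition holds.
            if step.condition is None or step.condition(email):
                print("do:", step.action)

def learned_branch_condition(email: dict) -> bool:
    # Stand-in for the classifier that UIL trains for the added condition.
    return email.get("mentions_classified_project", False)

remember_sensitivity = Procedure(steps=[
    Step("openComposeEmailWindow"),
    Step("changeEmailField: to"),
    Step("changeEmailField: subject"),
    Step("changeEmailField: body"),
    Step("changeEmailField: sensitivity", condition=learned_branch_condition),
    Step("sendEmailInitial"),
])
remember_sensitivity.run({"mentions_classified_project": True})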

User-CALO Interaction: Teaching CALO to Predict Sensitivity

[Architecture diagram repeated; see the description above.]

Inferring Feature Legality

[Subset of the CALO ontology: an EmailMessage (HasToField, HasSubject, HasBody, HasAttachment, HasSensitivity, ...) linked to ToRecipient and CCRecipient contacts (FirstName, LastName, Address, Phone, ...), a PrevEmailMessage (HasToField, HasSubject, HasBody, HasAttachment, HasSensitivity, ...), and a Project (Description, StartDate, EndDate, ...).]

Naively, the system would use all features; for example, it would use HasSensitivity

Using HasSensitivity is dangerous: it is perfectly correlated with the target and is present at training time but not at test time

Feature filtering removes such features at training time
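A minimal sketch of this kind of filtering (assuming numpy; not the CALO SAT-based legality reasoning, and the feature names below are hypothetical): drop any feature that is unavailable at prediction time or that reproduces the label exactly on the training data.

# Sketch only: simple feature-legality filtering, not CALO's implementation.
import numpy as np

def filter_legal_features(X, y, names, unavailable_at_test):
    """Return indices of features that are safe ("legal") to train on."""
    legal = []
    for j, name in enumerate(names):
        if name in unavailable_at_test:
            continue  # will not exist when the classifier is applied
        col = X[:, j]
        # Drop features that determine the label exactly on the training data,
        # e.g. HasSensitivity when predicting the sensitivity setting itself.
        if np.array_equal(col, y) or np.array_equal(col, 1 - y):
            continue
        legal.append(j)
    return legal

X = np.array([[1, 1, 0], [0, 0, 1], [1, 1, 1], [1, 0, 0]])
y = np.array([1, 0, 1, 0])
names = ["MentionsProject", "HasSensitivity", "HasAttachment"]
print(filter_legal_features(X, y, names, {"HasSensitivity"}))  # -> [0, 2]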

Feature Guidance Interface


User-CALO Interaction: Teaching CALO to Predict Sensitivity

[Architecture diagram repeated; see the description above.]

Training Instance Generation

Goal: autonomously generate labeled training instances for the learning component from stored user emails

Problem: the actions used to create emails are not stored in the CALO knowledge base, so we need to infer how each email was created

{defprocedure do_rememberSensitivity
  ....
  [do: (openComposeEmailWindow $newEmail)]
  [do: (changeEmailField $newEmail "to")]
  [do: (changeEmailField $newEmail "subject")]
  [do: (changeEmailField $newEmail "body")]
  [if: (learnBranchPoint $newEmail)
    [do: (changeEmailField $newEmail "sensitivity")]]
  [do: (sendEmailInitial $newEmail)]
  ....}

Specifically, we want to know:

Is the email an instance of the procedure?

Which branch was taken during the creation of the email?

Neither fact can be read directly from the knowledge base, so it must be inferred

Training Instance Generation

[Diagram: the do_rememberSensitivity SPARK procedure shown above is translated into SPARK axioms. Domain axioms relate instrumented actions to stored email properties, e.g. NewComposition ↔ ComposeNewMail, ReplyComposition ↔ ReplyToMail, HasAttachment ↔ (AttachFile ∨ ForwardMail). The knowledge base supplies the facts recorded for each stored email (NewComposition, ReplyComposition, HasToField, HasSubject, HasBody, HasAttachment, ...). Together with a Label Analysis Formula (LAF), these feed a SAT-based reasoning engine.]

The reasoning engine tests whether the email facts E, combined with an assumption about the "forget" proposition, entail (ProcInstance ∧ Label): depending on the outcome, the email is added as a positive example, added as a negative example, or discarded.
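The following is a toy illustration of this three-way decision pattern in Python (not the SAT-based reasoner; the rules and field names here are simplified assumptions, and the exact label-analysis formula may differ):

# Toy stand-in for training-instance generation: decide whether a stored
# email is a positive example, a negative example, or should be discarded.
from typing import Optional

def label_email(email: dict) -> Optional[bool]:
    """True = positive example, False = negative example, None = discard."""
    # Only sent, newly composed emails can match the demonstrated procedure.
    if not (email.get("NewComposition") and email.get("Sent")):
        return None  # discard: not an instance of the procedure
    if email.get("HasSensitivity"):
        return True   # the sensitivity branch was taken
    if email.get("KnownNotForgotten"):
        return False  # the branch was genuinely not needed
    return None       # ambiguous: the user may simply have forgotten

for e in [
    {"NewComposition": True, "Sent": True, "HasSensitivity": True},
    {"NewComposition": True, "Sent": True, "KnownNotForgotten": True},
    {"NewComposition": True, "Sent": True},
    {"ReplyComposition": True, "Sent": True},
]:
    print(label_email(e))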

The Learning Component

Logistic regression is used as the core learning algorithm

Features: relational features extracted from the ontology

Incorporating user advice on features: apply a large prior variance to user-selected features; select the prior variance for the remaining features through cross-validation

Automated model selection: parameters are the prior variance on the weights and the classification threshold; the technique is maximization of the leave-one-out cross-validation estimate of kappa
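A minimal sketch of this setup, assuming scikit-learn (not the CALO implementation): per-feature prior variance is emulated by rescaling the user-advised columns, since under an L2 penalty scaling a column by s is equivalent to giving its weight a prior variance larger by a factor of s^2; the shared prior variance (via C) and the classification threshold are then chosen by maximizing the leave-one-out estimate of kappa.

# Sketch only: logistic regression with user advice and LOO-kappa model selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import cohen_kappa_score

def loo_kappa(X, y, C, threshold):
    """Leave-one-out cross-validation estimate of kappa."""
    preds = np.zeros_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = LogisticRegression(C=C, max_iter=1000).fit(X[train_idx], y[train_idx])
        p = clf.predict_proba(X[test_idx])[:, 1]
        preds[test_idx] = (p >= threshold).astype(int)
    return cohen_kappa_score(y, preds)

def fit_with_advice(X, y, advised_cols, boost=10.0):
    """Upweight user-advised features, then tune C and the threshold by LOO kappa."""
    X = X.astype(float).copy()
    X[:, advised_cols] *= boost          # larger effective prior variance
    best = max((loo_kappa(X, y, C, t), C, t)
               for C in [0.01, 0.1, 1.0, 10.0]
               for t in [0.3, 0.5, 0.7])
    kappa, C, threshold = best
    model = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    return model, threshold, kappa

# Hypothetical synthetic data, for illustration only.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 6))
y = (X[:, 0] | X[:, 1]).astype(int)
model, threshold, kappa = fit_with_advice(X, y, advised_cols=[0, 1])
print(f"chosen threshold={threshold}, LOO kappa={kappa:.2f}")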

Assisting the User: Reminding

Empirical Evaluation

Problems: Attachment Prediction and Importance Prediction

Learning configurations compared:

No User Advice + Fixed Model Parameters

User Advice + Fixed Model Parameters

No User Advice + Automatic Parameter Tuning

User Advice + Automatic Parameter Tuning

User advice: 18 keywords in the body text for each problem

Empirical Evaluation: Data Set

Set of 340 emails obtained from a real desktop user

256 emails for training + 84 emails for testing

For each training set size, compute mean kappa (κ) on the test set to generate learning curves

κ is a statistical measure of inter-rater agreement for discrete classes

κ is a common evaluation metric when the classes have a skewed distribution
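For reference, kappa compares observed agreement p_o with the agreement expected by chance p_e: κ = (p_o − p_e) / (1 − p_e). A small check with hypothetical, skewed counts (assuming scikit-learn):

# Cohen's kappa on a skewed two-class problem (hypothetical labels).
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1]   # 9 negatives, 3 positives
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0]   # accuracy 10/12 ~ 0.83

p_o = 10 / 12                                    # observed agreement
p_e = (9/12) * (9/12) + (3/12) * (3/12)          # chance agreement from marginals
print((p_o - p_e) / (1 - p_e))                   # ~0.56
print(cohen_kappa_score(y_true, y_pred))         # same value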

Empirical Evaluation: Learning Curves

[Learning-curve plots (mean κ vs. training set size) for Attachment Prediction and for Importance Prediction, built up over several slides.]

Empirical Evaluation: Robustness to Bad Advice

We tested the robustness of the system to bad advice

Bad advice was generated as follows:

Use SVM-based feature selection in WEKA to produce a ranking of the user-provided keywords

Replace the top three words in the ranking with randomly selected words from the vocabulary
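A sketch of this procedure using scikit-learn's LinearSVC as a stand-in for WEKA's SVM-based feature ranking (keyword lists and data below are hypothetical; X is assumed to have one 0/1 column per user keyword):

# Sketch only: generate "bad advice" by replacing the top-ranked keywords.
import numpy as np
from sklearn.svm import LinearSVC

def make_bad_advice(X, y, keywords, vocabulary, n_replace=3, seed=0):
    """Rank keywords by |SVM weight|; swap the top ones for random vocabulary words."""
    rng = np.random.default_rng(seed)
    svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
    order = np.argsort(-np.abs(svm.coef_[0]))        # most informative keywords first
    pool = [w for w in vocabulary if w not in keywords]
    bad = list(keywords)
    for idx, repl in zip(order[:n_replace], rng.choice(pool, n_replace, replace=False)):
        bad[idx] = repl
    return bad

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(30, 5))                 # keyword-presence features
y = X[:, 0]                                          # label driven by the first keyword
keywords = ["attached", "resume", "report", "draft", "photo"]
vocab = ["lunch", "meeting", "budget", "travel", "deadline", "invoice"]
print(make_bad_advice(X, y, keywords, vocab))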

Empirical Evaluation: Robustness to Bad Advice

[Learning-curve plots under bad advice for Attachment Prediction and for Importance Prediction, built up over several slides.]

Empirical Evaluation: Prediction Utility

We want to evaluate the utility of the system to the user

We use a new metric called the Critical Cost Ratio (CCR)

Intuition: CCR measures how high the cost of forgetting must be, relative to the cost of an interruption, for the system to be useful

Hence, the lower the CCR, the more often the system is useful

For example, if CCR = 10, the cost of forgetting must be at least 10 times the cost of an interruption for the system to yield a net benefit
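The slides do not give the CCR formula, but the intuition above suggests a simple break-even analysis. The sketch below is an assumed formalization, not necessarily the paper's exact definition: if every predicted "forgot" costs one interruption and every correctly caught forget saves one forgetting cost, reminders pay off when C_forget / C_interrupt exceeds (TP + FP) / TP.

# Assumed break-even cost ratio in the spirit of CCR (not the paper's definition).
def break_even_cost_ratio(true_positives, false_positives):
    """Net benefit requires TP * C_forget > (TP + FP) * C_interrupt,
    i.e. C_forget / C_interrupt > (TP + FP) / TP."""
    if true_positives == 0:
        return float("inf")       # the system never helps
    return (true_positives + false_positives) / true_positives

# Hypothetical confusion counts, for illustration only.
print(break_even_cost_ratio(true_positives=20, false_positives=80))  # -> 5.0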

Empirical Evaluation: Prediction Utility

[Attachment Prediction plot]

At training set size 256, the cost of forgetting must be at least 5 times the cost of an interruption to gain a net benefit from the system

Empirical Evaluation: Prediction Utility

[Importance Prediction plot]

Lessons Learned

User interfaces should support rich instrumentation, automation, and intervention

User interfaces should come with models of their behavior

User advice is helpful but not critical

Self-tuning learning algorithms are critical for success


Beyond UIL: System-Initiated Learning

CALO should notice when it could help the user by formulating and solving new learning tasks

Additional requirements:

Knowledge of the user's goals, costs, and failure modes (e.g., forgetting, over-committing, typos)

Knowledge of what is likely to be learnable and what is not

Knowledge of how to formulate learning problems (classification, prediction, anomaly detection, etc.)

Thank you and Questions
