
Non-Cooperation in Computational Models of Dialogue

Brian Plüss ([email protected])

Supervisors: Paul Piwek, Richard Power

Department/Institute: Computing Department
Status: Full-time
Probation viva: After
Starting date: 01/10/08

This research is aimed at finding a computable description (i.e., a computational model) of non-cooperative conversational behaviour in political interviews.

Most approaches to computational modelling of dialogue behaviour rest on a strong notion of cooperation between the dialogue participants (DPs). Researchers have proposed models based on DPs' intentions (Cohen and Levesque, 1991), regarded dialogue as a game with strict rules guiding the speaker's actions and the hearer's interpretation of those actions (Power, 1979), or addressed conversation as the joint construction of goal-oriented plans shared by the DPs (Grosz and Sidner, 1990). These models successfully explain dialogue situations in which DPs recognise each other's intentions and, at least to a certain extent, accept each other's goals when deciding on their actions. These assumptions are theoretically grounded (Grice, 1975; Clark and Schaefer, 1989) and also practically sound: dialogue models are usually implemented in the form of dialogue systems, built for the purpose of providing a service to their users. Take, for instance, the following exchange with the TRAINS dialogue system (Ferguson et al., 1996):

User: I need to get to Detroit, Syracuse, and Montreal.

System: Ok.

User: Let's send the trains from Chicago to Detroit.

System: Ok. [draws route from Chicago to Detroit via Toledo] But the terminal at Toledo is delaying traffic due to bad weather. Trains will take an additional five hours to move through there. [highlights Toledo red]

User: That's ok. Now let's go from Atlanta up to Syracuse.

System: Please tell me a route to use to get from Atlanta to Syracuse.

User: Go via Charleston and Pittsburgh.

(...)

System: No problem. [draws route]

User: That's great.

System: I hope it was helpful.

In this scenario, failure to cooperate, on the side of either the system or the user, goes against the premises on which the system is conceived and used. In everyday


conversation, however, a great many situations escape these arguments. Consider as an example the following fragment [1]:

Paxman: We're joined now from his count in Bethnal Green and Bow by George Galloway. Mr Galloway, are you proud of having got rid of one of the very few black women in Parliament?

Galloway: What a preposterous question. I know it's very late in the night, but wouldn't you be better starting by congratulating me for one of the most sensational election results in modern history?

Paxman: Are you proud of having got rid of one of the very few black women in Parliament?

Galloway: I'm not, Jeremy. Move on to your next question.

Paxman: You're not answering that one?

Galloway: No, because I don't believe that people get elected because of the colour of their skin. I believe people get elected because of their record and because of their policies. So move on to your next question.

Paxman: Are you proud...

Galloway: Because I've got a lot of people who want to speak to me.

Paxman: You...

Galloway: If you ask that question again, I'm going, I warn you now.

Paxman: Don't try and threaten me, Mr Galloway, please.

This research aims to shed light on the nature of non-cooperation in dialogue, by capturing the intuitions that allow us to differentiate between the two conversations above in terms of participant behaviour, and to reproduce such conversational behaviour with software agents. In other words, we are looking for an answer to the following question:

What properties are needed in a computational model of conversational agents so that they can engage in non-cooperative as well as in cooperative dialogue, in particular in the domain of political interviews?

Computational models of conversational agents are abstract, computable descriptions of autonomous agents that are able to engage in conversation (i.e., to participate in a dialogue displaying adequate conversational behaviour). Developing and implementing such models allows for a better understanding of the workings of dialogue. This approach is known as analysis-by-synthesis (Levinson, 1983).

Prior to the development of a computational model, it is necessary to identify precisely the situations under study and the phenomena defining them. We achieved this by carrying out empirical studies of naturally occurring data. In our case, we analysed broadcast political interviews with two main participants.

Our distinction between cooperative and non-cooperative dialogue is based on the occurrence of particular phenomena that we call non-cooperative features (NCFs). Intuitively, these features capture whether participants behave as expected for the type of dialogue in which they engage, i.e., whether they follow the obligations imposed upon their conversational behaviour by the social context in which the exchange takes place (Traum and Allen, 1994).

[1] BBC presenter Jeremy Paxman interviews MP George Galloway, shortly after his victory in the 2005 UK General Election (http://www.youtube.com/watch?v=tD5tunBGmDQ, last accessed May 2010).


We have chosen political interviews as the domain for our study because it provides a well-defined set of scenarios, scoping the research in a way that is suitable for a PhD project. At the same time, a wealth of interesting conversational situations arises in political interviews: in the English-speaking world, journalists are well known for their incisive approach to public servants, while politicians are usually well trained to deliver a set of key messages when speaking in public and to avoid issues unfavourable to their image.

For the empirical analysis, we collected a corpus of political interviews with different levels of conflict between the dialogue participants. We proposed a technique for measuring non-cooperation in this domain using NCFs: the number of occurrences of these features determines the degree of non-cooperation (DNC) of an exchange.

NCFs are grouped along three aspects of conversation: turn-taking (Sacks et al., 1974), grounding (Clark and Schaefer, 1989) and speech acts (Searle, 1979). As noted above, they constitute departures from the behaviour expected in the social context of the exchange. Examples of NCFs include, among others: interruptions; overlapped speech; failure to acknowledge each other's contributions; the interviewer expressing a personal opinion or criticising the interviewee's positions on subjective grounds; and the interviewee asking questions (except for clarification requests) or making irrelevant comments. The DNC was computed for all the political interviews in the corpus, and preliminary results are encouraging: adversarial interviews contain a large number of NCFs, and thus score a high DNC, while collaborative exchanges show few occurrences of NCFs (or none at all).
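To make the measure concrete, the following is a minimal sketch of how a DNC value might be derived from NCF annotations attached to turns. The label names and the per-turn normalisation are our own illustrative choices, not details fixed above:

```python
from collections import Counter
from dataclasses import dataclass, field

# Illustrative NCF labels, grouped by the three aspects of conversation
# discussed above. The label names are our own, not the annotation scheme.
TURN_TAKING = {"interruption", "overlapped_speech"}
GROUNDING = {"no_acknowledgement"}
SPEECH_ACTS = {"iver_personal_opinion", "iver_subjective_criticism",
               "iwee_question", "iwee_irrelevant_comment"}
ALL_NCFS = TURN_TAKING | GROUNDING | SPEECH_ACTS

@dataclass
class Turn:
    speaker: str                               # "IR" (interviewer) or "IE" (interviewee)
    ncfs: list = field(default_factory=list)   # NCF labels annotated on this turn

def dnc(turns):
    """Degree of non-cooperation: NCF occurrences per turn.

    The text defines the DNC via the number of NCF occurrences; dividing
    by the number of turns (an assumption made here) keeps interviews of
    different lengths comparable."""
    counts = Counter(f for t in turns for f in t.ncfs if f in ALL_NCFS)
    return sum(counts.values()) / max(len(turns), 1)

# A toy annotation of an adversarial fragment scores high:
fragment = [
    Turn("IR", ["iver_subjective_criticism"]),
    Turn("IE", ["iwee_question", "no_acknowledgement"]),
    Turn("IR", []),
    Turn("IE", ["iwee_irrelevant_comment", "interruption"]),
]
print(dnc(fragment))  # 1.25 NCFs per turn
```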

At the time of writing, we are designing two studies to evaluate the DNC measure. The first is structured as an annotation exercise in which six annotators will code dialogues from the corpus; the inter-annotator agreement (Krippendorff, 2004) will indicate whether or not we are describing NCFs to an acceptable level of precision. In the second study, participants will watch or listen to the dialogues in the corpus and provide a judgement based on their perception of the DPs' behaviour with respect to what is expected of them in a political interview. The correlation between the results of these studies will provide a level of confidence in the DNC measure.
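As a sketch of the analysis planned for the first study, assuming the annotations are collected as (coder, segment, label) triples, the agreement coefficient can be computed with an off-the-shelf routine such as NLTK's AnnotationTask. The triples below are invented, and trimmed to two annotators for brevity:

```python
# Sketch of the agreement analysis for the annotation study. Requires NLTK;
# the data is invented for illustration only.
from nltk.metrics.agreement import AnnotationTask

# Each triple is (coder, item, label): an annotator's NCF label (or "none")
# for one dialogue segment.
triples = [
    ("a1", "seg01", "interruption"),      ("a2", "seg01", "interruption"),
    ("a1", "seg02", "none"),              ("a2", "seg02", "iwee_question"),
    ("a1", "seg03", "overlapped_speech"), ("a2", "seg03", "overlapped_speech"),
]

task = AnnotationTask(data=triples)
# Krippendorff (2004) suggests alpha >= 0.8 for drawing reliable conclusions.
print("Krippendorff's alpha:", task.alpha())
```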

As for designing the model, supporters of dialogue games could argue that there is a game describing the interaction in which Paxman and Galloway engaged in our second example. While this might be true, such an approach would force us, in the limit, to define one game for each possible conversation that does not fit a certain standard. Walton and Krabbe (1995) attempt a game-based approach in their study of natural argumentation: they claim that a rigorous model of conversational interaction is useful, but accept that most of the huge variety of everyday conversation escapes it.

Nevertheless, the rules and patterns captured by game models are useful, as they describe the expected behaviour of the DPs in a given conversational scenario. In devising our model, we aim to reconcile these two worlds: using the insights from dialogue games to provide a description of expected behaviour in the form of social obligations, while looking at naturally occurring cases that deviate from the norm. Our hypothesis is that non-cooperative behaviour emerges from decisions DPs make based on conversational obligations and individual goals, with a suitable configuration of priorities associated with each of them.
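This hypothesis can be phrased operationally: at each turn, an agent scores its candidate dialogue acts against both its pending obligations and its private goals, weighted by a priority configuration. The sketch below is an illustration of that idea only; the scoring scheme and all names in it are our own assumptions, not the model itself:

```python
# Illustrative sketch of the hypothesised deliberation step: an agent picks
# the dialogue act that best balances social obligations against private
# goals, given a priority configuration. All names here are hypothetical.

def choose_act(candidates, obligations, goals, w_obl, w_goal):
    """Return the candidate act with the highest weighted score.

    candidates:    possible next dialogue acts, e.g. ["answer", "deflect"]
    obligations:   acts the social context currently demands
    goals:         acts that serve the agent's private agenda
    w_obl, w_goal: the priority configuration of the hypothesis
    """
    def score(act):
        return w_obl * (act in obligations) + w_goal * (act in goals)
    return max(candidates, key=score)

# A cooperative interviewee gives obligations priority over goals...
print(choose_act(["answer", "deflect"], {"answer"}, {"deflect"}, 0.8, 0.2))  # answer
# ...while non-cooperative behaviour emerges when private goals dominate.
print(choose_act(["answer", "deflect"], {"answer"}, {"deflect"}, 0.2, 0.8))  # deflect
```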

The construction of the model will be a formalisation of our hypothesis, including rules for political interviews, goals, obligations, priorities and a dialogue management


component with the deliberation mechanism. We are currently investigating the line of research on obligation-driven dialogue modelling, initiated by Traum and Allen (1994) and developed further by Poesio and Traum (1998) and Kreutel and Matheson (2003). We are also implementing a prototype simulator based on the EDIS dialogue system (Matheson et al., 2000).
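For a flavour of the obligation-driven approach, the following toy information-state update follows the spirit of Traum and Allen (1994): a question imposes an obligation on the hearer to address it, and an answer discharges it. The state layout and rules are simplified illustrations, not the actual EDIS data structures:

```python
# Toy information-state update in the spirit of obligation-driven dialogue
# modelling. A question pushes an obligation onto the hearer's list; an
# answer discharges it and grounds the content.

def update(state, move):
    """Apply one dialogue move (speaker, act, content) to the state."""
    speaker, act, content = move
    hearer = "IE" if speaker == "IR" else "IR"
    if act == "ask":
        state["obligations"][hearer].append(("address", content))
    elif act == "answer":
        obls = state["obligations"][speaker]
        if ("address", content) in obls:
            obls.remove(("address", content))
        state["common_ground"].append(content)
    return state

state = {"obligations": {"IR": [], "IE": []}, "common_ground": []}
state = update(state, ("IR", "ask", "q1"))
print(state["obligations"]["IE"])  # [('address', 'q1')] -- pending obligation
# Ignoring the question, as Galloway does above, would leave the obligation
# pending, which is exactly the kind of behaviour the NCFs register.
state = update(state, ("IE", "answer", "q1"))
print(state["obligations"]["IE"])  # [] -- obligation discharged
```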

References

H.H. Clark and E.F. Schaefer. 1989. Contributing to discourse. Cognitive Science, 13(2):259–294.

P.R. Cohen and H.J. Levesque. 1991. Confirmations and joint action. In Proceedings of the 12th International Joint Conference on Artificial Intelligence, pages 951–957.

G. Ferguson, J.F. Allen, and B. Miller. 1996. TRAINS-95: Towards a mixed-initiative planning assistant. In Proceedings of the Third Conference on Artificial Intelligence Planning Systems (AIPS-96), pages 70–77. AAAI Press.

H.P. Grice. 1975. Logic and conversation. Syntax and Semantics, 3:41–58.

B.J. Grosz and C.L. Sidner. 1990. Plans for discourse. Intentions in Communication, pages 417–444.

J. Kreutel and C. Matheson. 2003. Incremental information state updates in an obligation-driven dialogue model. Logic Journal of IGPL, 11(4):485.

K. Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology, second edition. Sage, Thousand Oaks, CA.

S. C. Levinson. 1983. Pragmatics. Cambridge University Press.

C. Matheson, M. Poesio, and D. Traum. 2000. Modelling grounding and discourse obligations using update rules. In Proceedings of the 1st NAACL conference, pages 1–8. San Francisco, CA, USA.

M. Poesio and D. Traum. 1998. Towards an axiomatization of dialogue acts. In Proceedings of the Twente Workshop on the Formal Semantics and Pragmatics of Dialogues, pages 207–222.

R. Power. 1979. The organisation of purposeful dialogues. Linguistics, 17:107–152.

H. Sacks, E.A. Schegloff, and G. Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. Language, 50(4):696–735.

J.R. Searle. 1979. A taxonomy of illocutionary acts. Expression and Meaning: Studies in the Theory of Speech Acts, pages 1–29.

D.R. Traum and J.F. Allen. 1994. Discourse obligations in dialogue processing. In Proceedings of the 32nd annual meeting of ACL, pages 1–8. Morristown, NJ, USA.

D. Walton and E. Krabbe. 1995. Commitment in dialogue: Basic concepts of interpersonal reasoning. State University of New York Press.
