Conversational implicatures: interacting with grammar


Page 1: Conversational implicatures: interacting with grammar


Conversational implicatures: interacting with grammar

Christopher Potts

Stanford Linguistics

UIUC Linguistics, October 28, 2013

This talk: partly joint work with Mike Frank, Noah Goodman, Dan Jurafsky, Roger Levy & Adam Vogel

Associated paper (draft form; comments welcome!): http://stanford.edu/~cgpotts/papers.html

1 / 44

Page 2: Conversational implicatures: interacting with grammar


Conversational implicature

Definition (Grice 1975)
Speaker S saying U to listener L conversationally implicates q iff

1 S and L mutually, publicly presume that S is cooperative.

2 To maintain 1 given U, it must be supposed that S thinks q.

3 S thinks that both S and L mutually, publicly presume that L is willing and able to work out that 2 holds.

2 / 44

Page 3: Conversational implicatures: interacting with grammar


Conversational implicature

Definition (Grice 1975)
Speaker S saying U to listener L conversationally implicates q iff

1 S and L mutually, publicly presume that S is cooperative.

2 To maintain 1 given U, it must be supposed that S thinks q.

3 S thinks that both S and L mutually, publicly presume that L is willing and able to work out that 2 holds.

Example

Ann: What city does Paul live in?
Bob: Hmm . . . he lives in California.

(A) Assume Bob is cooperative.
(B) Bob supplied less information than was required, seemingly contradicting (A).
(C) Assume Bob does not know which city Paul lives in.
(D) Then Bob's answer is optimal given his evidence.

2 / 44

Page 4: Conversational implicatures: interacting with grammar


Conversational implicature

Definition (Grice 1975)
Speaker S saying U to listener L conversationally implicates q iff

1 S and L mutually, publicly presume that S is cooperative.

2 To maintain 1 given U, it must be supposed that S thinks q.

3 S thinks that both S and L mutually, publicly presume that L is willing and able to work out that 2 holds.

Implicature as social, interactional
Implicatures are inferences that listeners make to reconcile the speaker's linguistic behavior with the assumption that the speaker is cooperative.

Implicatures and cognitive complexity
The speaker must believe that the listener will infer that the speaker believes the implicature.

2 / 44

Page 5: Conversational implicatures: interacting with grammar


Two strands of inquiry

Interactional models
• Embrace the social nature of implicatures.
• Derive implicatures from nested belief models with cooperative structure.
• Focus on contextual variability and uncertainty.

Grammar models
• Limit interaction to semantic interpretation.
• Derive implicatures without nested beliefs or cooperativity.
• Place variability and uncertainty outside the theory of implicature.

My goal
Despite divisive rhetoric, the two sides in this debate are not in opposition, but rather offer complementary insights.

3 / 44


Page 7: Conversational implicatures: interacting with grammar


Plan for today

1 Conversational implicature

2 Interactional models of implicature

3 Grammar-driven models of implicature

4 Embedded implicatures

5 Uncancelable implicatures

4 / 44

Page 8: Conversational implicatures: interacting with grammar


(Scalar) Implicature calculation

Example
A: Sandy's work this term was satisfactory.
Implicature: Sandy's work was not excellent (= ¬q)

1 Contextual premise: the speaker A intends to exhaustively answer 'What was the quality of Sandy's work this term?'
2 Contextual premise: A has complete knowledge of Sandy's work for the term (say, A assigned all the grades for the class).
3 Assume A is cooperative in the Gricean sense.
4 The proposition q that Sandy's work was excellent is more informative than p, the content of A's utterance.
5 q is as polite and easy to express in this context as p.
6 By 1, q is more relevant than p.
7 By 3–6, A must lack sufficient evidence to assert q.
8 By 2, A must lack evidence for q because q is false.

5 / 44


Page 12: Conversational implicatures: interacting with grammar


Properties of conversational implicatures

1 Context dependence

2 Linguistic dependence

3 Cognitive complexity

4 Uncertainty (and reinforceability)

5 Post-semanticality

6 / 44

Page 13: Conversational implicatures: interacting with grammar


Cancelability

• Cancelability is not a consequence of Grice’s (1975) definition.

• The definition seems to leave room for cancelation in particular cases, but it does not ensure it for all.

• Cancelation always compromises the speaker's cooperativity to some degree.

  ▸ In many cases, this is tolerable.
  ▸ If the compromises are too great, the speaker's behavior is uncooperative to the point of infelicity.

7 / 44

Page 14: Conversational implicatures: interacting with grammar


Scales and partial orders

Examples (Levinson 1983:134)

〈 all, most, many, some, few 〉
〈 and, or 〉
〈 n, . . . , 5, 4, 3, 2, 1 〉
〈 excellent, good 〉
〈 hot, warm 〉
〈 always, often, sometimes 〉
〈 succeed, V-ing, try to V, want to V 〉
〈 necessarily p, p, possibly p 〉
〈 certain that p, probable that p, possible that p 〉
〈 must, should, may 〉
〈 cold, cool 〉
〈 love, like 〉
〈 none, not all 〉

8 / 44

Page 15: Conversational implicatures: interacting with grammar


Scales and partial orders

Examples (A few other standard lexical scales)

〈 first, second, third, fourth, fifth 〉
〈 definite, indefinite 〉
〈 lover, friend 〉
〈 need, want 〉
〈 old, middle-aged, young 〉
〈 general, colonel, major, captain, . . . 〉

8 / 44

Page 16: Conversational implicatures: interacting with grammar


Scales and partial orders

Examples (Mere partial orders; Hirschberg 1985:§5)

1  A: So, is she married?
   B: She's engaged.

[Embedded scan: Hirschberg 1985, pp. 106–107, on process-stage and prerequisite orderings, e.g. (159) affirming 'Minnie started mowing the lawn' to implicate that she did not finish it, and (161) A: So, is she married? B: She's engaged.]

2  A: Do you speak German?
   B: My husband does.

[Embedded scan: Hirschberg 1985, pp. 128–129, on representing scalar implicature orderings as posets, e.g. (214) A: Do you speak Portuguese? B: My husband does, supported by the set-inclusion ordering over {husband, wife, child} and its non-empty subsets.]

3  A: Are you on your honeymoon?
   B: Well, I was.

8 / 44

Page 17: Conversational implicatures: interacting with grammar


A simple reference game

Example

r1   r2

         ‘hat’  ‘glasses’
  r1       F        T
  r2       T        T

(A) Assume the speaker is cooperative.
(B) 'glasses' is less informative than 'hat'.
(C) To reconcile 'glasses' with (A), assume the speaker lacks evidence for 'hat'.
(D) By the nature of the game, the speaker lacks evidence for 'hat' iff 'hat' is false.

9 / 44


Page 19: Conversational implicatures: interacting with grammar


Scalar implicatures: the theoretical landscape

Noncism
Russell 2006; Geurts 2011

Neo-Griceanism
Horn 1984; Sauerland 2001

Impliciture/Explicature
Bach 1994; Sperber & Wilson 1995

Presumptive/Generalized
Grice 1975; Levinson 2000

Logical Forms
Chierchia et al. 2012

[Diagram: the positions arranged along an axis from Interactional to Grammar-driven.]

10 / 44

Page 20: Conversational implicatures: interacting with grammar


1 Conversational implicature

2 Interactional models of implicature

3 Grammar-driven models of implicature

4 Embedded implicatures

5 Uncancelable implicatures

11 / 44

Page 21: Conversational implicatures: interacting with grammar


Iterated Bayesian models

(a) Scenario: two referents, r1 and r2

(b) ⟦·⟧:
         ‘hat’  ‘glasses’
  r1       F        T
  r2       T        T

(c) Prior:
  r1  0.5
  r2  0.5

(d) Costs:
  ‘hat’      0
  ‘glasses’  0

Figure: A communication game supporting a scalar implicature.

S0:
        ‘hat’  ‘glasses’
  r1      0        1
  r2     0.5      0.5

L(S0):
              r1    r2
  ‘hat’        0     1
  ‘glasses’  0.67  0.33

Figure: The faces implicature in production and interpretation.

12 / 44

Page 22: Conversational implicatures: interacting with grammar


Iterated Bayesian models


S(L(S0)):
        ‘hat’  ‘glasses’
  r1      0        1
  r2     0.75     0.25

Figure: The faces implicature in production and interpretation.

12 / 44

Page 23: Conversational implicatures: interacting with grammar


Iterated Bayesian models


[Plot: L(r1 | ‘glasses’) as a function of iteration depth (1–20), rising from 0.67 toward 1.00.]

Figure: The faces implicature in production and interpretation.

12 / 44
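The tables and the iteration curve above can be checked with a short script. The following is a minimal sketch in Python, not code from the talk or the associated paper: it assumes that the literal speaker S0 is uniform over true messages (weighted by exp(-cost)), that each listener Bayes-inverts the previous speaker against the prior, and that each pragmatic speaker chooses messages in proportion to the previous listener; the function names (s0, listener, speaker, rsa) are mine.

import numpy as np

def normalize_rows(m):
    return m / m.sum(axis=1, keepdims=True)

def s0(truth, costs):
    # Literal speaker: uniform over true messages, down-weighted by exp(-cost).
    # Rows are referents, columns are messages.
    return normalize_rows(truth * np.exp(-costs))

def listener(speaker_matrix, prior):
    # L(r | m) is proportional to S(m | r) * P(r).  Rows of the result are messages.
    return normalize_rows((speaker_matrix * prior[:, None]).T)

def speaker(listener_matrix):
    # S(m | r) is proportional to L(r | m).  Rows of the result are referents.
    return normalize_rows(listener_matrix.T)

def rsa(truth, prior, costs, depth):
    # Listener after `depth` rounds: L(S0), then L(S(L(S0))), and so on.
    lis = listener(s0(truth, costs), prior)
    for _ in range(depth - 1):
        lis = listener(speaker(lis), prior)
    return lis

# The two-face game: messages ('hat', 'glasses'), referents (r1, r2).
truth = np.array([[0., 1.],     # r1: only 'glasses' is true
                  [1., 1.]])    # r2: both messages are true
prior = np.array([0.5, 0.5])
costs = np.zeros(2)

print(rsa(truth, prior, costs, depth=1))            # L(S0): 'glasses' row = [0.67, 0.33]
print(speaker(rsa(truth, prior, costs, depth=1)))   # S(L(S0)): r2 row = [0.75, 0.25]
print(rsa(truth, prior, costs, depth=20))           # L(r1 | 'glasses') approaches 1, as in the plot

With the uniform prior and zero costs this reproduces the production and interpretation tables above, and deeper iteration drives L(r1 | 'glasses') toward 1.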

Page 24: Conversational implicatures: interacting with grammar


Priors and context dependence

Figure: A communication game supporting a scalar implicature (scenario, ⟦·⟧, prior, and costs as above).

(a) L(S0) for P(r1) = 0.3:
              r1    r2
  ‘hat’        0     1
  ‘glasses’  0.46  0.54

[Plot (b): L(r1 | ‘glasses’) as a function of P(r1); the curve crosses 0.5 near P(r1) = 0.34.]

Figure: The influence of the prior.

13 / 44
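As a quick sanity check on the table above (again a sketch, not the talk's code), the listener values for 'glasses' under P(r1) = 0.3 follow directly from Bayes' rule with the literal speaker of the previous slides:

# L(r | 'glasses') is proportional to P(r) * S0('glasses' | r),
# with S0('glasses' | r1) = 1 and S0('glasses' | r2) = 0.5.
p_r1 = 0.3
unnorm = [p_r1 * 1.0, (1 - p_r1) * 0.5]
total = sum(unnorm)
print([round(v / total, 2) for v in unnorm])   # [0.46, 0.54], matching the table
# The plotted curve crosses 0.5 where p * 1 = (1 - p) * 0.5, i.e. at P(r1) = 1/3, roughly 0.34.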

Page 25: Conversational implicatures: interacting with grammar


Priors in Frank & Goodman 2012

[Embedded: Frank & Goodman (2012), "Predicting Pragmatic Reasoning in Language Games", Science 336(6084):998, with Fig. 1: an example context of three objects (blue square, blue circle, green square) and the speaker, salience, and listener conditions. Speaker bets correlated with the informativeness model (r = 0.98) and listener bets with the full posterior (r = 0.99).]

⟦·⟧ for that context:

         blue  green  square  circle
  rbs     T     F       T       F
  rbc     T     F       F       T
  rgs     F     T       T       F

Listener model (their eq. 1):

  P(r_S | w, C) = P(w | r_S, C) P(r_S) / Σ_{r' ∈ C} P(w | r', C) P(r')

Speaker likelihood (their eq. 2):

  P(w | r_S, C) = |w|^{-1} / Σ_{w' ∈ W} |w'|^{-1}

where |w| is the number of objects to which word w could apply and W is the set of words that apply to the intended referent. The prior P(r_S) is measured empirically as contextual salience.

Table: Listener, no priors
            rbs   rbc   rgs
  blue      0.6   0.4    0
  green      0     0     1
  square    0.6    0    0.4
  circle     0     1     0

Table: Listener with priors
            rbs   rbc   rgs
  blue      0.4   0.6    0
  green      0     0     1
  square    0.4    0    0.6
  circle     0     1     0

14 / 44
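The "Listener, no priors" table can be reproduced directly from eqs. (1) and (2). The sketch below is mine, not Frank & Goodman's code; the dictionary simply encodes the ⟦·⟧ table for the blue-square / blue-circle / green-square context, and the prior is set to uniform where the paper instead uses the empirically measured salience prior.

TRUE_OF = {                       # which objects each word is true of (from the table above)
    'blue':   {'rbs', 'rbc'},
    'green':  {'rgs'},
    'square': {'rbs', 'rgs'},
    'circle': {'rbc'},
}
OBJECTS = ('rbs', 'rbc', 'rgs')

def fg_likelihood(word, obj):
    # Eq. (2): P(w | r, C) = |w|^-1 / sum over words w' true of r of |w'|^-1
    if obj not in TRUE_OF[word]:
        return 0.0
    z = sum(1.0 / len(objs) for objs in TRUE_OF.values() if obj in objs)
    return (1.0 / len(TRUE_OF[word])) / z

def fg_listener(word, prior):
    # Eq. (1): P(r | w, C) is proportional to P(w | r, C) * P(r)
    unnorm = {r: fg_likelihood(word, r) * prior[r] for r in OBJECTS}
    z = sum(unnorm.values())
    return {r: v / z for r, v in unnorm.items()}

uniform = {r: 1.0 / 3 for r in OBJECTS}
print(fg_listener('blue', uniform))   # approximately {'rbs': 0.6, 'rbc': 0.4, 'rgs': 0.0}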

Page 26: Conversational implicatures: interacting with grammar


Priors in Stiller et al. (2011)

Ad-hoc pragmatic inference

[Bar plot: percent correct by age group (2, 3, 4, adult); + 3 other stimulus sets; N = 24 per group.]

Stiller, Goodman, & Frank (2011)

Slides from Mike Frank

15 / 44

Page 27: Conversational implicatures: interacting with grammar


Priors in Stiller et al. (2011)

Distracting elements are really distracting: a puzzle for this view

"scales"
(0 0) (1 0) (1 1)

(0 0) (1 1)
"no scales"
(1 0)

Failure seems odd on a straight neo-Gricean account, more plausible on CFS (2008).

Slides from Mike Frank

15 / 44

Page 28: Conversational implicatures: interacting with grammar


Priors in Stiller et al. (2011)

Modifying speaker production probability

Here are all the people:
"My friend has glasses. Can you show me my friend?"

Slides from Mike Frank

15 / 44

Page 29: Conversational implicatures: interacting with grammar


Priors in Stiller et al. (2011)

Base rate data

[Plot: responses (0–100) against the proportion of faces without a top hat in familiarization (10%–90%); N = 432.]

Slides from Mike Frank

15 / 44

Page 30: Conversational implicatures: interacting with grammar


Costs and linguistic dependence

Figure: A communication game supporting a scalar implicature (scenario, ⟦·⟧, prior, and costs as above).

[Plots, as a function of C(‘hat’) from 0 to 10:
 (a) S0: S(‘glasses’ | r2), rising from 0.5 toward 1;
 (b) L(S0): L(r1 | ‘glasses’), rising from 0.67 toward 1;
 (c) S(L(S0)): S(‘glasses’ | r2), rising from 0.25 toward 1.]

Figure: The influence of costs.

16 / 44
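A sketch of one way costs can enter the literal speaker, consistent with panel (a) above; the exponential weighting is an assumption on my part, chosen because it gives 0.5 at zero cost and approaches 1 as C('hat') grows.

import math

# S0(m | r) proportional to exp(-C(m)) over the messages true of r.
# For r2 both 'hat' and 'glasses' are true, so only their costs matter.
def s0_glasses_given_r2(c_hat, c_glasses=0.0):
    w_hat, w_glasses = math.exp(-c_hat), math.exp(-c_glasses)
    return w_glasses / (w_hat + w_glasses)

for c in (0, 2, 4, 10):
    print(c, round(s0_glasses_given_r2(c), 3))   # 0.5, 0.881, 0.982, 1.0: rises toward 1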

Page 31: Conversational implicatures: interacting with grammar


Cognitive complexity and bounded rationality

(a) Scenario (attributes per referent):
        hat  glasses  mustache
  r1     0      0        1
  r2     0      1        1
  r3     1      1        0

(b) ⟦·⟧:
        ‘hat’  ‘glasses’  ‘mustache’
  r1      F        F          T
  r2      F        T          T
  r3      T        T          F

(c) S0:
        ‘hat’  ‘glasses’  ‘mustache’
  r1      0        0          1
  r2      0       .5         .5
  r3     .5       .5          0

(d) L(S0):
               r1    r2    r3
  ‘hat’         0     0     1
  ‘glasses’     0    .5    .5
  ‘mustache’  .67   .33     0

(e) S(L(S0)):
        ‘hat’  ‘glasses’  ‘mustache’
  r1      0        0          1
  r2      0       .6         .4
  r3    .67      .33          0

(f) L(S(L(S0))):
               r1    r2    r3
  ‘hat’         0     0     1
  ‘glasses’     0   .64   .36
  ‘mustache’  .71   .29     0

Figure: A complex faces scenario. The S(L(S0)) agent does not interpret ‘glasses’ pragmatically, but the L(S(L(S0))) agent does.

17 / 44
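As a usage example, the three-face scenario drops out of the same sketch given after the iterated-model slide above (rsa, speaker, and listener are the names used there, not anything from the paper):

truth3 = np.array([[0., 0., 1.],   # r1: mustache only
                   [0., 1., 1.],   # r2: glasses and mustache
                   [1., 1., 0.]])  # r3: hat and glasses
prior3 = np.array([1/3, 1/3, 1/3])
costs3 = np.zeros(3)

l1 = rsa(truth3, prior3, costs3, depth=1)   # L(S0): 'glasses' row = [0, .5, .5]
s2 = speaker(l1)                            # S(L(S0)): r2 row = [0, .6, .4], no 'glasses' implicature yet
l2 = listener(s2, prior3)                   # L(S(L(S0))): 'glasses' row = [0, .64, .36]
print(np.round(l2, 2))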

Page 32: Conversational implicatures: interacting with grammar


Extremely bounded rationality

18 / 44

Page 33: Conversational implicatures: interacting with grammar


Uncertainty about . . .

• the context

• the linguistic norms

• the speaker’s preferred way to resolve tensions in the maxims

• the speaker’s commitment to cooperativity

• the speaker’s ability to undertake the necessary reasoning

• the listener’s beliefs about the speaker’s abilities

• . . .

19 / 44

Page 34: Conversational implicatures: interacting with grammar


Post-semanticality

Generalizes the idea: each successively higher (more pragmatic) level is derived from a more literal lower level, beginning with (probabilistic) truth conditions:

L(S(. . . (L(S0))))

20 / 44

Page 35: Conversational implicatures: interacting with grammar


Related models

• Golland et al. (2010): L(S0)

• Frank & Goodman (2012): L(S(L(S0))), with only the outer listener incorporating the prior.

• Vogel et al. (2013): L(S(L0)) and L(S0) embedded in a multi-agent model of sequential decision making under uncertainty called the Decentralized Partially Observable Markov Decision Process.

• Franke (2008, 2009) and Jäger (2007, 2012): Best Response versions of the above.

21 / 44

Page 36: Conversational implicatures: interacting with grammar


Bayesian and Best Response models

(a) S0:
        ‘hat’  ‘glasses’
  r1      0        1
  r2     0.5      0.5

(b) L(S0):
              r1    r2
  ‘hat’        0     1
  ‘glasses’  0.67  0.33

[Plot: L(r1 | ‘glasses’) as a function of P(r1).]

Figure: Softmax model.

(a) S0:
        ‘hat’  ‘glasses’
  r1      0        1
  r2     0.5      0.5

(b) Lbr(S0):
              r1   r2
  ‘hat’        0    1
  ‘glasses’    1    0

[Plot: L(r1 | ‘glasses’) as a function of P(r1).]

Figure: Best-response model.

22 / 44
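A minimal sketch of the best-response listener Lbr(S0), under the assumption that "best response" here means putting all probability mass on the referents that maximize S0(m | r) · P(r), with ties broken uniformly:

import numpy as np

def best_response_listener(speaker, prior):
    # Rows of scores are messages, columns are referents.
    scores = (speaker * prior[:, None]).T
    best = (scores == scores.max(axis=1, keepdims=True)).astype(float)
    return best / best.sum(axis=1, keepdims=True)   # uniform over the argmax set

s0 = np.array([[0.0, 1.0],    # r1: only 'glasses' is true
               [0.5, 0.5]])   # r2: both messages are true
print(best_response_listener(s0, np.array([0.5, 0.5])))
# 'hat'     -> [0, 1]
# 'glasses' -> [1, 0], i.e. the Lbr(S0) table above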

Page 37: Conversational implicatures: interacting with grammar


Relation to other phenomena

• Lewis’s (1969) signaling systems (H. Clark 1996).

• Implicatures encourage mutual exclusivity, a.k.a. the pigeonhole principle (E. Clark 1987; Frank et al. 2009).

• Implicatures are modulated by the discourse participants' questions, goals, and preferences (van Rooy, 2003; Benz, 2005; Vogel et al., 2013).

• Implicatures are a window into the interactions between sentence-processing and high-level contextual understanding (Grodner & Sedivy, 2008; Huang & Snedeker, 2009; Grodner et al., 2010; Asher & Lascarides, 2013).

23 / 44

Page 38: Conversational implicatures: interacting with grammar


1 Conversational implicature

2 Interactional models of implicature

3 Grammar-driven models of implicature

4 Embedded implicatures

5 Uncancelable implicatures

24 / 44

Page 39: Conversational implicatures: interacting with grammar


Grammar models

Chierchia et al. (2012):
"More specifically, the facts suggest that SIs are not pragmatic in nature but arise, instead, as a consequence of semantic or syntactic mechanisms, which we've characterized with the operator, O. This operator, although inspired by Gricean reasoning, must be incorporated into the theory of syntax or semantics, so that — like the overt operator only — it will find its way to embedded positions."

25 / 44

Page 40: Conversational implicatures: interacting with grammar


Position in the theoretical landscape

Noncism
Russell 2006; Geurts 2011

Neo-Griceanism
Horn 1984; Sauerland 2001

Impliciture/Explicature
Bach 1994; Sperber & Wilson 1995

Presumptive/Generalized
Grice 1975; Levinson 2000

Logical Forms
Chierchia et al. 2012

[Diagram: the positions arranged along an axis from Interactional to Grammar-driven.]

26 / 44

Page 41: Conversational implicatures: interacting with grammar


Exhaustification

Definition (Exhaustification operator)

OALT(p) = p ∧ ∀q ∈ ALT : (p ⊈ q) → ¬q

(Spector, 2007; Fox, 2007, 2009; Magri, 2009; Chierchia et al., 2012)

27 / 44
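A small sketch of this operator with propositions modeled as sets of worlds, matching the worked example on the next slide; it implements only the plain definition above, ignoring refinements such as innocent exclusion, and the function name is mine.

def exh(p, alternatives):
    # O_ALT(p): keep the worlds in p that falsify every alternative
    # not entailed by p (i.e., conjoin "not q" for each q with p not a subset of q).
    result = set(p)
    for q in alternatives:
        if not set(p) <= set(q):      # p does not entail q
            result -= set(q)          # conjoin the negation of q
    return result

# The worked example on the next slide: [[p or q]] = {w1, w2, w3}, [[p and q]] = {w1}.
p_or_q = {'w1', 'w2', 'w3'}
p_and_q = {'w1'}
print(exh(p_or_q, [p_and_q]))         # {'w2', 'w3'}, the exclusive reading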

Page 42: Conversational implicatures: interacting with grammar


Scalar implicatures in logical forms

Definition (Exhaustification operator)

OALT(p) = p ∧ ∀q ∈ ALT : (p ⊈ q) → ¬q

Example (Logical form)

O[ALT(⟦p ∨ q⟧) = {⟦p ∧ q⟧}](⟦p ∨ q⟧) = {w2, w3}

  O[ALT(⟦p ∨ q⟧) = {⟦p ∧ q⟧}]        ⟦p ∨ q⟧ = {w1, w2, w3}

                                       ⟦p⟧ = {w1, w2}   ∨   ⟦q⟧ = {w1, w3}

28 / 44

Page 43: Conversational implicatures: interacting with grammar


Scalar implicatures in logical forms

Definition (Exhaustification operator)

OALT(p) = p ∧ ∀q ∈ ALT : (p ⊈ q) → ¬q

Example (Logical form)

Kim   VP

        believe   O[ALT(⟦p ∨ q⟧) = {⟦p ∧ q⟧}](⟦p ∨ q⟧) = {w2, w3}

                    O[ALT(⟦p ∨ q⟧) = {⟦p ∧ q⟧}]        ⟦p ∨ q⟧ = {w1, w2, w3}

                                                         ⟦p⟧ = {w1, w2}   ∨   ⟦q⟧ = {w1, w3}

28 / 44

Page 44: Conversational implicatures: interacting with grammar


Scalar implicatures in logical forms

Definition (Exhaustification operator)

OALT(p) = p ∧ ∀q ∈ ALT : (p ⊈ q) → ¬q

Example (Logical form)

if   O[ALT(⟦p ∨ q⟧) = {⟦p ∧ q⟧}](⟦p ∨ q⟧) = {w2, w3}

       O[ALT(⟦p ∨ q⟧) = {⟦p ∧ q⟧}]        ⟦p ∨ q⟧ = {w1, w2, w3}

                                            ⟦p⟧ = {w1, w2}   ∨   ⟦q⟧ = {w1, w3}

28 / 44

Page 45: Conversational implicatures: interacting with grammar


Implicit interactionality

Chierchia et al. (2012)
"the facts suggest that SIs are not pragmatic in nature but arise, instead, as a consequence of semantic or syntactic mechanisms"

Resolving underspecification pragmatically
The grammatical system specifies a many-to-one mapping from surface forms to logical forms. Only a pragmatic theory can explain how discourse participants coordinate on these LFs.

Chierchia et al. (2012)
"one can capture the correlation with various contextual considerations, under the standard assumption [. . . ] that such considerations enter into the choice between competing representations (those that contain the operator and those that do not)."

29 / 44


Page 48: Conversational implicatures: interacting with grammar


Coordinating on a logical form in context

Essentially all the properties of implicature that I discussed earlier are predicted to hold on this theory as well.

Example
A: Sandy's work this term was satisfactory.
Potential implicature: Sandy's work was not excellent

Available logical forms:

Sandy’s work was

1 ⟦satisfactory⟧

2 O[ALT(⟦satisfactory⟧) = {⟦excellent⟧}](⟦satisfactory⟧)

3 O[ALT(⟦satisfactory⟧) = {⟦good⟧, ⟦excellent⟧}](⟦satisfactory⟧)

30 / 44
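To see how the three logical forms differ, here is the exh sketch from the exhaustification slide applied to an invented toy domain in which worlds are letter grades; the particular denotations are illustrative assumptions, not anything from the talk.

# Invented toy domain: worlds are letter grades, and the adjectives denote
# upward-closed sets of grades (illustrative assumptions only).
satisfactory = {'C', 'B', 'A'}
good = {'B', 'A'}
excellent = {'A'}

print(exh(satisfactory, []))                  # LF 1: plain, {'C', 'B', 'A'}
print(exh(satisfactory, [excellent]))         # LF 2: {'C', 'B'}, "not excellent"
print(exh(satisfactory, [good, excellent]))   # LF 3: {'C'}, "not good, not excellent"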

Page 49: Conversational implicatures: interacting with grammar


1 Conversational implicature

2 Interactional models of implicature

3 Grammar-driven models of implicature

4 Embedded implicatures

5 Uncancelable implicatures

31 / 44

Page 50: Conversational implicatures: interacting with grammar


Cases considered here and in the paper

Logical forms                              Gricean response

Attitude embedding                         Facts follow from implicature reasoning

Conditional antecedents                    Apparent embedding is an artifact of truth-functional analysis

Hurford's (1974) constraint on             Questioning the constraint
disjunction

Intrusive implicatures                     . . .

Non-monotone quantifiers                   . . .

32 / 44

Page 51: Conversational implicatures: interacting with grammar


Attitude embedding

Example
George believes that some of his advisors are crooks.
Implicature: George believes not all of his advisors are crooks.

33 / 44

Page 52: Conversational implicatures: interacting with grammar


Attitude embedding

Example
George believes that some of his advisors are crooks.
Implicature: George believes not all of his advisors are crooks.

Grammatical analysis

believes   (advisor ∩ crook ≠ ∅) ∧ (advisor ⊈ crook)

             {B | advisor ∩ B ≠ ∅} ∩ {B | advisor ⊈ B}

             O[some ↦ {all}](some) = some-and-not-all

               O[some ↦ {all}]   some
                                   of his advisors
                                 are crooks

33 / 44

Page 53: Conversational implicatures: interacting with grammar


Attitude embedding

Example
George believes that some of his advisors are crooks.
Implicature: George believes not all of his advisors are crooks.

Gricean calculation (Russell, 2006)

1 Contextual assumption:
  G. believes all his advisors are crooks ∨ G. believes not all his advisors are crooks.   (p ∨ q)

2 Standard Gricean implicature:
  not(G. believes all of his advisors are crooks).   (¬p)

3 From 1–2 and disjunctive elimination:
  G. believes not all of his advisors are crooks.   (q)

33 / 44

Page 54: Conversational implicatures: interacting with grammar


Conditional antecedents

Example

S: If you take phonology or semantics, you attend meeting A. If you take both, you attend meeting B.

Implicature: If you take phonology or semantics but not both . . .

34 / 44

Page 55: Conversational implicatures: interacting with grammar


Conditional antecedents

Example

S: If you take phonology or semantics, you attend meeting A. If you take both, you attend meeting B.

Implicature: If you take phonology or semantics but not both . . .

A classical contradiction

If we interpret the disjunctive antecedent inclusively, contradiction:

1 (phono ∨ sem) → a
2 (phono ∧ sem) → b
3 (a ∧ b) → ⊥
4 By 1 and transitivity (since (phono ∧ sem) entails (phono ∨ sem)): (phono ∧ sem) → a
5 By 2 and 4: (phono ∧ sem) → (a ∧ b)
6 By 3, 5 & transitivity: (phono ∧ sem) → ⊥

34 / 44

Page 56: Conversational implicatures: interacting with grammar


Conditional antecedents

Example

S: If you take phonology or semantics, you attend meeting A. If you take both, you attend meeting B.

Implicature: If you take phonology or semantics but not both . . .

Grammatical analysis

If we exhaustify the disjunctive antecedent clause

  OALT(phono ∨ sem) → a

then there is no contradiction: the exhaustified ⟦phono ∨ sem⟧ and ⟦phono ∧ sem⟧ are mutually exclusive, so there is no problem with having them lead to incompatible outcomes.

34 / 44

Page 57: Conversational implicatures: interacting with grammar


Conditional antecedents

Example

S: If you take phonology or semantics, you attend meeting A. If you take both, you attend meeting B.

Implicature: If you take phonology or semantics but not both . . .

Kratzer–Lewis conditional (hat-tip to Dan Lassiter)

1 From the worlds that verify ⟦phono ∨ sem⟧, select the subset X of worlds that are most similar to the actual world.
2 ⟦a⟧(w) = T for all w ∈ X.

        phono  sem   a   b
  w1      T     F    T   F
  w2      F     T    T   F
  w3      T     T    F   T

34 / 44
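The contrast between the inclusive and the exhaustified antecedent can be checked by brute force over the sixteen valuations of (phono, sem, a, b); this sketch just encodes the three premises from the "classical contradiction" slide.

from itertools import product

def inclusive(p, s):
    return p or s

def exhaustified(p, s):                     # O_ALT(phono ∨ sem) with ALT = {phono ∧ sem}
    return (p or s) and not (p and s)

def surviving_worlds(antecedent):
    ok = []
    for phono, sem, a, b in product((True, False), repeat=4):
        c1 = (not antecedent(phono, sem)) or a    # antecedent -> attend meeting A
        c2 = (not (phono and sem)) or b           # take both -> attend meeting B
        c3 = not (a and b)                        # attending both meetings is impossible
        if c1 and c2 and c3:
            ok.append((phono, sem, a, b))
    return ok

# Inclusive antecedent: no consistent valuation lets you take both courses.
print(any(p and s for p, s, _, _ in surviving_worlds(inclusive)))       # False
# Exhaustified antecedent: w3 from the table (phono, sem, not a, b) survives.
print((True, True, False, True) in surviving_worlds(exhaustified))      # True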

Page 58: Conversational implicatures: interacting with grammar


Hurford’s constraint

— Is the constraint real?

Hurford's (1974) constraint
"The joining of two sentences by or is unacceptable if one sentence entails the other; otherwise the use of or is acceptable."

[Embedded excerpt: Hurford (1974:410), the passage introducing the constraint]

(10)  Inmates may smoke or drink, but not both.
(11) *Inmates may smoke or drink, and not both.

(12)  Ivan is an American or a Russian.
(13)  That painting is of a man or a woman.
(14)  The value of x is greater than or equal to 6.
(15) *John is an American or a Californian.
(16) *That painting is of a man or a bachelor.
(17) *The value of x is greater than or not equal to 6.

(18)  The joining of two sentences by or is unacceptable if one sentence entails the other; otherwise the use of or is acceptable.

(19) *Jack and Jill travelled from Vienna to Paris together: he or she went through Strasbourg.

(20)  Inmates may smoke or1 drink, or2 both.

1 Violates HC: (smoke ∨ drink) ∨ (smoke ∧ drink)

2 Respects HC: O_ALT(smoke ∨ drink) ∨ (smoke ∧ drink) (checked in the sketch below)
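
A quick brute-force check of the two parses, in Python (a sketch; O_ALT is rendered here simply as exclusive strengthening of the first disjunct):

from itertools import product

def entails(p, q):
    """p entails q iff q is true on every valuation that makes p true."""
    return all(q(s, d) for s, d in product([True, False], repeat=2) if p(s, d))

incl_or = lambda s, d: s or d                       # smoke ∨ drink
both    = lambda s, d: s and d                      # smoke ∧ drink
excl_or = lambda s, d: (s or d) and not (s and d)   # O_ALT(smoke ∨ drink)

# Parse 1: the second disjunct entails the first, so HC is violated.
print(entails(both, incl_or))                        # True
# Parse 2: after exhaustification, neither disjunct entails the other.
print(entails(both, excl_or), entails(excl_or, both))  # False False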

35 / 44


Page 60: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Hurford’s constraint — Is the constraint real?
We must first make exceptions for cases where the disjuncts are intended as synonyms:

Example
1 She’s an oenophile or wine lover

Apparent counterexamples found via Google N-grams and the WordNet hypernym hierarchy:

Examples (From the Web)
2 Stop discrimination of an applicant or person

3 Promptly report any accident or occurrence.

4 Recreational boat or vessel accidents are generally covered by general maritime tort law.

5 Visits by Copts to the Holy Land can hardly be regarded as treason or crime.

35 / 44

Page 61: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Hurford’s constraint — Is the constraint real?
Examples (From the Web)

6 The anchor will lie on the bottom and the canoe or boat will be held by the stream’s current.

7 How to be a Bikini or Swimwear Model

8 I believe that music can change or affect your emotions.

9 When their resignation is accepted they become an emeritus archbishop or bishop.

10 Why are you recommending angioplasty or surgery for me?

11 So the next time your home project calls for a caulk or sealant, choose the name you trust.

12 Many state arbitration statutes contemplate motions to correct or modify being made to the tribunal directly.

13 Being a captain or officer is a privilege, and with that privilege comes great responsibility.

35 / 44

Page 62: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Hurford’s constraint

— Is the constraint real?

Counterexamples to Hurford’s constraint: http://goo.gl/VAGqnB

• 161 to date

• 86 where the left disjunct entails the right

• 75 where the right disjunct entails the left

• Finding more is easy but boring (lots of Web searches).

35 / 44

Page 63: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Intrusive constructions

Examples
1 If Jack and Jill get married {to each other}, then their parents will have to see each other again.

2 Because he earns $40K, he can’t afford a house in Palo Alto.

3 Having three children is less work than having four.

4 It is safer to drive home and drink beer than it is to drink beer and drive home.

5 It is better to eat some of the cake than it is to eat all of it.

See Wilson (1975); Carston (1988); Levinson (2000); Recanati (2003); Horn (2006); King & Stanley (2006); Simons (2013)

36 / 44

Page 64: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Intrusive constructions
Levinson (2000: 200): “on a purely semantic basis should be self-contradictory”.

Russell (2006)
Phrases like eat some/all the cake are generics, with the some version crucially excluding situations in which all the cake was eaten, because these are not generic eat-some-cake situations. Predicts markedness for non-generic comparisons.

Geurts (2009: 73):
“I believe that, in cases like these, we are forced to admit that scalar terms give rise to local upper-bounding interpretations, which cannot be accounted for in terms of implicature; they are local quasi-implicatures.”

36 / 44

Page 65: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

A case of local enrichment
Chemla & Spector (2011), experiment 2

Excerpt shown on the slide (Chemla & Spector 2011: 386):

5.4 Predictions

In non-monotonic contexts, localist theories predict that the local reading exists, while globalist theories cannot derive this reading. Moreover, in the LOCAL condition, the local reading is true, while all the readings predicted by globalist theories are false. Hence, in the LOCAL condition (see Figure 11 for an example), globalist theories predict that the sentence is plainly false, while localist theories predict that the sentence has a true reading.

5.5 Results and interpretation

5.5.1 Preliminary technical remarks. We lost 15% of the responses in target conditions for technical reasons (see footnote 18). See section 4.4.1 for more details about the reported statistical analyses.

5.5.2 Main result: the local reading exists. Figure 12 reports the mean ratings of the target items grouped according to which interpretation is true: none, local only, literal only, all. All pairwise differences are significant, except for the LOCAL vs. LITERAL conditions in the case of ‘or’. (The relevant Wilcoxon tests for ‘some’: FALSE vs. LITERAL: W = 126, p < .005, LITERAL versus LOCAL: W = 109, p < .05, LOCAL […]

Figure 11: Illustrative examples of the images used to illustrate the different conditions FALSE, LITERAL, LOCAL and ALL for the test sentence (21): ‘Exactly one letter is connected with some of its circles’. We also reported below each image whether the literal (Lit), global (Glob) and local (Loc) readings are true (T) or false (F).

Footnote 33: On a per-item analysis, this difference does come out significant: U = 32 (n1 = 4, n2 = 8), p < .005.

Test sentence: Exactly one letter is connected with some of its circles.

1 Literal meaning: one letter is connected with some or all of its circles, the other letters are connected with no circle.

2 Global reading: one letter is connected with some but not all of its circles, the other letters are connected with no circle.

3 Local reading: one letter is connected with some but not all of its circles, the other letters may be connected with either none or all of their circles.
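
A toy model-checker for the three readings just listed, in Python. The letter-and-circle configuration is invented here for illustration; it mimics Chemla & Spector’s LOCAL condition, where only the local reading comes out true.

def literal(model):
    """Exactly one letter is connected with some (possibly all) of its circles."""
    return sum(1 for connected, _ in model.values() if connected >= 1) == 1

def global_reading(model):
    """Literal reading plus the global implicature: that letter is connected
    with some but not all of its circles; the others with none."""
    hits = [(c, t) for c, t in model.values() if c >= 1]
    return len(hits) == 1 and hits[0][0] < hits[0][1]

def local_reading(model):
    """Exactly one letter is connected with some-but-not-all of its circles;
    the others may be connected with none or with all of theirs."""
    return sum(1 for c, t in model.values() if 0 < c < t) == 1

# letter -> (circles it is connected with, circles it has); illustrative values
local_condition = {'A': (2, 3), 'B': (3, 3), 'C': (0, 3)}
print(literal(local_condition),         # False: two letters touch some circle
      global_reading(local_condition),  # False
      local_reading(local_condition))   # True: only the local reading holds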

37 / 44

Page 66: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

A case of local enrichment
Chemla & Spector (2011), experiment 2


Test sentence: Exactly one letter is connected with some of its circles.

• Griceans depend on implicature → literal (strengthened readings entail the literal one) and so can’t simulate Local.

• If Griceans can derive an implicature, it will be the Global one, which is false in the Local scenario.

Excerpt, continued (Chemla & Spector 2011: 386–387):

vs. ALL: W = 105, p < .005; and for ‘or’: FALSE vs. LITERAL: W = 123, p < .005, LITERAL versus LOCAL: W = 92, p = .23, LOCAL vs. ALL: W = 120, p < .001.)

This first set of data qualifies the local reading as a possible interpretation of non-monotonic sentences since the LOCAL condition is rated much higher than the FALSE condition, and is in fact rated very high (73% for the sentence with ‘some’ and 58% for the sentence with ‘or’).

Furthermore, the LOCAL condition is rated higher than the LITERAL condition, a fact that is unexpected under the globalist approach, but can be understood within the localist approach. Specifically, this fact suggests that the preference for readings which include SIs (over readings without any SIs), noted in the literature, is not specifically a preference for global SIs, but rather a general preference for deriving SIs, be they embedded or not-embedded (unless the resulting reading is weaker than the literal reading, as is the case when an SI is embedded in a DE environment). Note also that this preference cannot be explained by a principle like the Strongest Meaning Hypothesis since it is observed even in this case, where the resulting SI reading is not stronger than the literal reading (cf. footnote 6).

5.5.3 Analyses of changes in performance between the two experimental blocks. The items were presented in two consecutive similar blocks. Yet, the 2(Block) × 4(Condition) ANOVA shows no significant interaction (F(3,45) = 1.2, p = .31). The same ANOVA reveals a significant main effect of Condition (F(3,45) = 41, p < .001) and no robust main effect of Block (F(1,15) = 1.2, p = .29). Similar analyses restricted to each item yield similar results: no reliable interaction between Block and Condition (‘some’: F(3,45) = 2.2, p = .11, ‘or’: F(3,45) = .14, p = .93), a main effect of Condition (‘some’: F(3,45) = 45, p < .001, ‘or’: F(3,45) = 30, p < .001), and no […]

Figure 12: Mean responses in the target conditions of experiment 2 (see section 5.3.1 or Figure 11 for an illustration).

37 / 44

Page 67: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

A case of local enrichment
Chemla & Spector (2011), experiment 2


Test sentence: Exactly one letter is connected with some of its circles.

Background and discussion: Chemla 2009; Geurts & Pouscoulous 2009; Clifton & Dube 2010; Ippolito 2010; Sauerland 2010

37 / 44

Page 68: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

The theoretical import of embedded implicatures

• The Gricean can adopt the LFs of this theory to explain embedded implicatures.

• Embedded implicatures are still shaped by pragmatic forces, so the Gricean’s contributions remain vital.

• The questions are therefore much narrower: what is the nature of these phenomena?

38 / 44

Page 69: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

1 Conversational implicature

2 Interactional models of implicature

3 Grammar-driven models of implicature

4 Embedded implicatures

5 Uncancelable implicatures

39 / 44

Page 70: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Grice’s view

Grice (1975)
“Since, to assume the presence of a conversational implicature, we have to assume that at least the Cooperative Principle is being observed, and since it is possible to opt out of the observation of this principle, it follows that a generalized conversational implicature can be canceled in a particular case.”

40 / 44

Page 71: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

A recipe for obligatory implicatures

There are forms ϕ and ψ such that, relative to the current context,

1 ⟦ϕ⟧ ⊑ ⟦ψ⟧, and

2 ψ is strictly more costly than ϕ.

Examples (Spector 2007; Magri 2009)
1 (p ∨ q) vs. p

2 (#always) tall

3 (#Some) Italians come from a warm country.

41 / 44

Page 72: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Obligatory implicatures in the interactional model
Example (Spector 2007)
Contextual premise: the atoms of the molecule are inseparable.

1 #Some atoms went right.

2 The atoms went right.

(a) ⟦·⟧
        ‘some’  ‘the’
p         T       T

(b) Costs
‘some’   1
‘the’    0

(c) S0
        ‘some’  ‘the’
p        0.27    0.73

(d) L(S0)
           p
‘some’    0.5
‘the’     0.5

Figure: Where two forms are synonymous and one is more marked, the more marked one is infelicitous. A speaker who used the marked form would need semantic motivation, impossible with synonyms.
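
As a rough illustration of where the S0 numbers in (c) could come from: a speaker who soft-maximizes negative message cost over the messages true of the world (temperature 1) reproduces them. This is a sketch under those assumptions, not necessarily the exact speaker model used elsewhere in the talk.

import math

costs = {'some': 1.0, 'the': 0.0}     # table (b)
true_messages = ['some', 'the']       # both are true of the single world p (table (a))

weights = {m: math.exp(-costs[m]) for m in true_messages}
Z = sum(weights.values())
S0 = {m: w / Z for m, w in weights.items()}

print({m: round(pr, 2) for m, pr in S0.items()})  # {'some': 0.27, 'the': 0.73}, matching (c)

Since ‘some’ and ‘the’ are contextually synonymous here, only the cost term can separate them, which is why the cheaper ‘the’ wins and a speaker who nevertheless chose ‘some’ would lack semantic motivation.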

42 / 44

Page 73: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Uncertainty, uncancelability, and uncooperativity

• Uncancelable implicatures are an artifact of idealization.

• There is always doubt surrounding the relevant lexical and contextual assumptions about synonymy.

• In any case, uncancelability and uncooperativity are related, as the Gricean predicts:

1 Opt out of quantity: “p or q, and I’m not telling which!”
2 Never motivated: “p or q, in fact, p!”

See Lauer 2013

43 / 44

Page 74: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Conclusion: interacting with grammar
1 Even if implicatures can be embedded in logical forms, they are still exquisitely sensitive to high-level plans, goals, and preferences of the discourse participants.

2 Kyle to Ellen: “I have $8.”
a. Context A: Movie tickets cost $10.
b. Context B: Movie tickets cost $8.

3 Chierchia et al. (2012): “one can capture the correlation with various contextual considerations, under the standard assumption [. . . ] that such considerations enter into the choice between competing representations (those that contain the operator and those that do not).”

44 / 44


Page 77: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Conclusion: interacting with grammar

4 Logical Form theories tell us where a speaker can put certain covert semantic operators.

5 It’s up to a theory of (inter)action and social cognition to tell us
• what the speaker did
• why she did it
• how the hearer will understand her discourse move.

44 / 44

Page 78: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

Conclusion: interacting with grammar

Noncism: Russell 2006; Geurts 2011

Neo-Griceanism: Horn 1984; Sauerland 2001

Impliciture/Explicature: Bach 1994; Sperber & Wilson 1995

Presumptive/Generalized: Grice 1975; Levinson 2000

Logical Forms: Chierchia et al. 2012

(Figure: the theories above arranged along an axis from Interactional to Grammar-driven.)

44 / 44

Page 79: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References I

Asher, Nicholas & Alex Lascarides. 2013. Strategic conversation. Semantics and Pragmatics 6(2). 1–62.

Bach, Kent. 1994. Conversational impliciture. Mind and Language 9(2). 124–162.

Benz, Anton. 2005. Utility and relevance of answers. In Anton Benz, Gerhard Jäger & Robert van Rooij (eds.), Game theory and pragmatics, 195–219. Basingstoke, Hampshire: Palgrave Macmillan.

Carston, Robyn. 1988. Implicature, explicature, and truth-theoretic semantics. In Ruth Kempson (ed.), Mental representations: The interface between language and reality, 155–181. Cambridge: Cambridge University Press.

Chemla, Emmanuel. 2009. Universal implicatures and free choice effects: Experimental data. Semantics and Pragmatics 2(2). 1–33.

Chemla, Emmanuel & Benjamin Spector. 2011. Experimental evidence for embedded scalar implicatures. Journal of Semantics 28(3). 359–400.

Chierchia, Gennaro, Danny Fox & Benjamin Spector. 2012. The grammatical view of scalar implicatures and the relationship between semantics and pragmatics. In Maienborn et al. (2012), 2297–2332.

45 / 44

Page 80: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References II

Clark, Eve V. 1987. The principle of contrast: A constraint on language acquisition. In Brian MacWhinney (ed.), Mechanisms of language acquisition, 1–33. Hillsdale, NJ: Erlbaum.

Clark, Herbert H. 1996. Using language. Cambridge: Cambridge University Press.

Clifton, Charles Jr. & Chad Dube. 2010. Embedded implicatures observed: A comment on Geurts and Pouscoulous (2009). Semantics and Pragmatics 3(7). 1–13.

Fox, Danny. 2007. Free choice disjunction and the theory of scalar implicatures. In Sauerland & Stateva (2007), 71–120.

Fox, Danny. 2009. Too many alternatives: Density, symmetry, and other predicaments. In Tova Friedman & Edward Gibson (eds.), Proceedings of Semantics and Linguistic Theory 17, 89–111. Ithaca, NY: Cornell University.

Frank, Michael C. & Noah D. Goodman. 2012. Predicting pragmatic reasoning in language games. Science 336(6084). 998.

Frank, Michael C., Noah D. Goodman & Joshua B. Tenenbaum. 2009. Using speakers’ referential intentions to model early cross-situational word learning. Psychological Science 20(5). 579–585.

46 / 44

Page 81: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References III

Franke, Michael. 2008. Interpretation of optimal signals. In Krzysztof R. Apt & Robert van Rooij (eds.), New perspectives on games and interaction, vol. 4 of Texts in Logics and Games, 297–310. Amsterdam University Press.

Franke, Michael. 2009. Signal to act: Game theory in pragmatics. ILLC Dissertation Series. Institute for Logic, Language and Computation, University of Amsterdam.

Geurts, Bart. 2009. Scalar implicatures and local pragmatics. Mind and Language 24(1). 51–79.

Geurts, Bart. 2011. Quantity implicatures. Cambridge University Press.

Geurts, Bart & Nausicaa Pouscoulous. 2009. Embedded implicatures?!? Semantics and Pragmatics 2(4). 1–34.

Golland, Dave, Percy Liang & Dan Klein. 2010. A game-theoretic approach to generating spatial descriptions. In Proceedings of the 2010 conference on empirical methods in natural language processing, 410–419. Cambridge, MA: ACL.

Grice, H. Paul. 1975. Logic and conversation. In Peter Cole & Jerry Morgan (eds.), Syntax and semantics, vol. 3: Speech Acts, 43–58. New York: Academic Press.

47 / 44

Page 82: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References IV

Grodner, Daniel J., Natalie M. Klein, Kathleen M. Carbary & Michael K. Tanenhaus. 2010. “Some,” and possibly all, scalar inferences are not delayed: Evidence for immediate pragmatic enrichment. Cognition 116(1). 42–55.

Grodner, Daniel J. & Julie Sedivy. 2008. The effects of speaker-specific information on pragmatic inferences. In Edward A. Gibson & Neal J. Pearlmutter (eds.), The processing and acquisition of reference, 239–272. Cambridge, MA: MIT Press.

Hirschberg, Julia. 1985. A theory of scalar implicature. University of Pennsylvania dissertation.

Horn, Laurence R. 1984. Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. In Deborah Schiffrin (ed.), Meaning, form, and use in context: Linguistic applications, 11–42. Washington: Georgetown University Press.

Horn, Laurence R. 2006. The border wars. In Klaus von Heusinger & Ken P. Turner (eds.), Where semantics meets pragmatics. Oxford: Elsevier.

Huang, Yi Ting & Jesse Snedeker. 2009. Online interpretation of scalar quantifiers: Insight into the semantics–pragmatics interface. Cognitive Psychology 58(3). 376–415.

48 / 44

Page 83: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References V

Hurford, James R. 1974. Exclusive or inclusive disjunction. Foundations of Language 11(3). 409–411.

Ippolito, Michela. 2010. Embedded implicatures? Remarks on the debate between globalist and localist theories. Semantics and Pragmatics 3(5). 1–15.

Jäger, Gerhard. 2007. Game dynamics connects semantics and pragmatics. In Ahti-Veikko Pietarinen (ed.), Game theory and linguistic meaning, 89–102. Amsterdam: Elsevier.

Jäger, Gerhard. 2012. Game theory in semantics and pragmatics. In Maienborn et al. (2012), 2487–2425.

King, Jeffrey & Jason Stanley. 2006. Semantics, pragmatics, and the role of semantic content. In Zoltán Szabó (ed.), Semantics vs. pragmatics. Oxford: Oxford University Press.

Lauer, Sven. 2013. Towards a dynamic pragmatics. Stanford, CA: Stanford University dissertation.

Levinson, Stephen C. 1983. Pragmatics. Cambridge: Cambridge University Press.

Levinson, Stephen C. 2000. Presumptive meanings: The theory of generalized conversational implicature. Cambridge, MA: MIT Press.

49 / 44

Page 84: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References VI

Lewis, David. 1969. Convention. Cambridge, MA: Harvard University Press. Reprinted 2002 by Blackwell.

Magri, Giorgio. 2009. A theory of individual-level predicates based on blind mandatory scalar implicatures. Natural Language Semantics 17(3). 245–297. doi:10.1007/s11050-009-9042-x.

Maienborn, Claudia, Klaus von Heusinger & Paul Portner (eds.). 2012. Semantics: An international handbook of natural language meaning, vol. 3. Berlin: Mouton de Gruyter.

Recanati, François. 2003. Embedded implicatures. Philosophical Perspectives 17(1). 299–332.

van Rooy, Robert. 2003. Questioning to resolve decision problems. Linguistics and Philosophy 26(6). 727–763.

Russell, Benjamin. 2006. Against grammatical computation of scalar implicatures. Journal of Semantics 23(4). 361–382.

Sauerland, Uli. 2001. On the computation of conversational implicatures. In Rachel Hastings, Brendan Jackson & Zsófia Zvolenszky (eds.), Proceedings of Semantics and Linguistic Theory 11, 388–403. Ithaca, NY: Cornell Linguistics Circle.

50 / 44

Page 85: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References VII

Sauerland, Uli. 2010. Embedded implicatures and experimental constraints: A reply to Geurts & Pouscoulous and Chemla. Semantics and Pragmatics 3(2). 1–13.

Sauerland, Uli & Penka Stateva (eds.). 2007. Presupposition and implicature in compositional semantics. Houndmills, Basingstoke, Hampshire: Palgrave Macmillan.

Simons, Mandy. 2013. Local pragmatics and structured contents. Philosophical Studies. 1–13.

Spector, Benjamin. 2007. Aspects of the pragmatics of plural morphology. In Sauerland & Stateva (2007), 243–281.

Sperber, Dan & Deirdre Wilson. 1995. Relevance: Communication and cognition. Oxford: Blackwell, 2nd edn.

Stiller, Alex, Noah D. Goodman & Michael C. Frank. 2011. Ad-hoc scalar implicature in adults and children. In Proceedings of the 33rd annual meeting of the Cognitive Science Society, Boston.

51 / 44

Page 86: Conversational implicatures: interacting with grammar

Conversational implicature Interactional models Grammar-driven models Embedded Uncancelable Conclusion

References VIII

Vogel, Adam, Max Bodoia, Christopher Potts & Dan Jurafsky. 2013. Emergence of Gricean maxims from multi-agent decision theory. In Human language technologies: The 2013 annual conference of the North American chapter of the Association for Computational Linguistics, 1072–1081. Stroudsburg, PA: Association for Computational Linguistics.

Wilson, Deirdre. 1975. Presuppositional and non-truth-conditional semantics. New York: Academic Press.

52 / 44