
The Multi-Mind Effect

Selmer Bringsjord1, Konstantine Arkoudas2, Deepa Mukherjee3

Andrew Shilliday4, Joshua Taylor5, Micah Clark6, Elizabeth Bringsjord7

Department of Cognitive Science1–6; Department of Computer Science1,4,5

Rensselaer Polytechnic Institute (RPI), Troy NY 12180 USA

State University of New York7; SUNY Plaza, Albany NY 12246 USA

Abstract: Courtesy of experiments carried out by such thinkers as Wason, Johnson-Laird, and Kahneman & Tversky, there is overwhelming empirical evidence that the vast majority of logically untrained humans are unable to reason in context-independent, normatively correct fashion. However, the multi-mind effect, which is predicted by our earlier success at teaching this kind of reasoning, and also by our general theory of human and machine reasoning, shows that while individual persons (with rare exceptions) are unable to solve problems that demand context-independent reasoning, groups of persons can often solve such problems.

Keywords: multi-mind effect, heterogeneous reasoning, multi-agent reasoning, logic-based computational cognitive modeling

1 Introduction

Experimental study of human reasoning has shown that exceedingly few humans can solve problems demanding normatively correct, context-independent reasoning [1]. Two theories in the field of the psychology of reasoning, mental logic (ML) [2] and mental models (MM) [3], both predict such failures, giving the same general explanation: humans generally lack the mental machinery required to solve such problems. Predicted failures include phenomena such as illusory inferences [4, 5], in which subjects “see” logically valid inferences that simply aren’t there.

In earlier work, we have shown that education of a certain kind in the area of formal logic, specifically education in accordance with our theory of human reasoning, mental meta-logic (MML) [6, 7, 8, 9], contra the claims of some well-known psychologists (e.g., [10]), can produce humans able to negotiate problems demanding context-independent, normatively correct reasoning [11, 7]. We say that such humans acquire logical minds, and hold that our work vindicates Piaget to a considerable degree.1

Unfortunately, the training required for an individual to reach this level must take place over an extended period of time, and there is no reason to believe that this individual, without ongoing practice, would retain her hard-won ability.

Nonetheless, the possibility remains that normatively correct reasoning might be achievable in a different manner. But how?

While it’s indeed true that the vast majority ofindividuals are unable to solve problems that re-quire context-independent reasoning (unless suit-ably trained), groups of individuals acting togetherafter being stimulated under the right circumstancescan often solve such problems, even in the absenceof extended training. This result is what we callthe multi-mind e!ect (MME).

2 Related Research

Of course, it has long been known that groups can outperform individuals.2 However, one must distinguish between groups of individuals that can reason in normatively correct, context-independent fashion to solve so-called “unsolvable” problems (as in MME), versus groups of individuals who engage in generic problem-solving as a team. Again, there has been extensive research into the working of teams and groups’ ability to solve problems effectively and efficiently in a wide variety of fields [15, 16]. The benefits of groups of individuals working together have been documented in areas as diverse as open-source software development [17] and prediction and forecast models [18]. But in all these scenarios, the problems that the groups are working on are readily solvable even without the group, albeit with greater expense and effort. We are unaware of prior results that demonstrate MME.3

1 Piaget, as is well known, held that in the course of normal development humans would acquire a capacity to think in accordance with first-order logic [12]. Though Piaget’s position has fallen out of favor, with sufficient training in formal logic humans can in fact exceed the level of reasoning Piaget called “formal operations,” and reach a level at which they can reason in many logical systems, as well as about such systems. At this level, humans can also create logical systems. In general, this level of reasoning is reached by professionals in the formal sciences.

2 E.g., in [13] it’s shown that cooperative and collaborative learning in mathematics is very effective, and in [14] it’s shown that group performance is generally qualitatively and quantitatively superior to the performance of the average individual.

Conf. on Artificial Intelligence | ICAI'07 | 43

3 Dearth of C-I Reasoning

Studies of human reasoning have repeatedly shown that logically untrained humans systematically fail to reason in a context-independent manner, even when presented with stimuli that expressly call for this type of reasoning. This failure has been attributed, in part, to the lack of the appropriate reasoning machinery in humans. For example, Rips (1994) has claimed that our mental apparatus is a partial selection from the rules and schemas available in standard proof calculi for the propositional calculus. On this view, humans who have not been extensively trained in logic cannot reason accurately over certain problems, as they simply do not have the inference rules to do so. Empirical evidence for this view has been derived from experiments involving stimuli solvable in normatively correct fashion only if a standard proof theory for the propositional calculus is correctly exploited. For example:

Problem 1: Assume that (1) it is false that ‘If the square is green, the circle is red.’ Given this assumption, can you infer that the square is green?

As Rips (1994) and other proponents of ML have shown, nearly all humans, working as individuals, answer “No” — but in fact the correct response is

3 Some well-read readers may wonder whether an effect studied at the RAND Corporation during the Cold War anticipates MME. Known commonly as the Delphi effect (DE), it has been leveraged in a methodology by the same name and has been (and continues to be) used extensively in prediction and forecast models in a wide variety of settings [18]. DE has often been cited in the field of open-source software development [17], where many individuals come together to create sophisticated software that they would not be able to achieve individually. Attempts to exploit DE are apparently ongoing in the field of open-source software development, through organizations like the Creative Commons. This being said, MME is clearly distinct from DE.

an affirmative. The reason is that in the propositional calculus, when a (material) conditional of the form ‘if φ then ψ’ is false, φ is true while ψ is false. The explanation from Rips is that the inferential rules that would support an affirmative response are simply not part of the reasoning apparatus of humans untrained in formal logic. In other words, ML holds that untrained reasoners reason on the basis of a collection of proof-theoretic rules incomplete from the standpoint of (e.g.) standard extensional formal logic. Rips’ (1994) PSYCOP system has a set R of incomplete inference rules for the propositional calculus. For example, the formula ¬(p → q) → (p ∧ ¬q) isn’t provable by R, but this formula, in any standard proof calculus for propositional logic, admits of effortless proof.
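Both claims here can be machine-checked by brute-force truth tables. The following sketch (plain Python, unrelated to PSYCOP or any system discussed in this paper; the formula is our reading of the one in question, and the material reading of the conditional is assumed) verifies that the formula is a tautology, and that Problem 1’s assumption forces the square to be green:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# The formula ¬(p → q) → (p ∧ ¬q) holds in every row of the truth table,
# i.e., it is a tautology of the propositional calculus.
tautology = all(
    implies(not implies(p, q), p and not q)
    for p, q in product([True, False], repeat=2)
)
print(tautology)  # True

# Problem 1: every valuation satisfying ¬(G → R) also makes G true,
# so "the square is green" follows from the falsity of the conditional.
entails = all(
    g
    for g, r in product([True, False], repeat=2)
    if not implies(g, r)
)
print(entails)  # True
```

The only valuation falsifying ‘if G then R’ is G true, R false, which is exactly why the affirmative answer is correct.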

Next, consider this somewhat more complicated problem, a slight variant4 of a puzzle introduced in [19].

Assume that the following is true:

‘If there is a king in the hand, then there is an ace in the hand,’ or ‘If there is not a king in the hand, then there is an ace in the hand’ — but not both of these if-thens are true.

What can you infer from this assumption? Please provide a careful justification for your answer.

Nearly all untrained subjects, working individually, declare the answer to be: “There is an ace in the hand.” Unfortunately, what one can infer is that there isn’t an ace in the hand. This logical illusion is successfully predicted by MM [4, 3], which holds that reasoners conceive of situations containing true premises, but not false premises, and that reasoners are not comfortable reasoning from a false antecedent, and hence do not explicitly represent a false possibility. To correctly solve this illusion one must note the exclusive disjunction. Using obvious symbolization, the given information becomes:

((K → A) ∨ (¬K → A)) ∧ ¬((K → A) ∧ (¬K → A))

It’s not hard to prove in any proof calculus for stan-dard first-order logic that ¬A can be derived fromthis formula.

4 Basic Experimental Design

The multi-mind effect is predicted by the aforementioned MML theory [6, 7, 8, 9]. MML predicts that groups of agents, appropriately tasked, will often engage in heterogeneous reasoning: for example, they will meta-reason about the different kinds of reasoning offered by different individuals.

4 The variation arises from disambiguating Johnson-Laird’s ‘s or else s′’ as ‘either s or s′, but not both.’


They will also exploit both the proof-theoretic techniques stressed by ML and the model-based techniques stressed by MM; and they will move back and forth between these techniques. In addition, groups will often reason in logical systems much more powerful than those to which mental models and mental logic are anchored. These, at any rate, are the broad-stroke predictions.5 Do the predictions hold? Using the following preliminary experimental method, one can seek relevant data.

Suppose that P_ML and P_MM are (respectively) such that: P_ML is a problem subjects cannot solve according to ML, and P_MM is a problem subjects cannot solve according to MM.

Next, let us verify the situation in traditional experimental fashion. That is, let us give n (logically untrained) subjects a problem P_XX (where this notation ranges over both mental logic and mental models), with the (guaranteed) result being that only a very few of these subjects solve this problem. Next, we remove the subjects who are successful. At this point, we give the subjects who remain, all of whom are now known, by ML and MM, to be inherently incapable of solving P_XX, a problem P′_XX that is formally isomorphic to P_XX. However, we instruct these remaining subjects to work on P′_XX in randomly assigned groups. The size of the groups will depend upon how many subjects remain. MML, and our earlier work on de-biasing through logic training, predict the surprising result that P′_XX will be solved by some groups, despite the fact that all groups are composed only of individuals who succumbed to bias in the original case. This immediately implies that neither proponents of ML nor proponents of MM can afford to stay silent, since, after all, the “multi-agents” still lack, by these theories, the mental machinery needed to solve P′_XX.

Our hypothesis is that the sophisticated reasoning of highly successful and accurate individual reasoners is distinguished by heterogeneous reasoning. Such reasoning is carried out by some individuals working alone in accordance with processes that are multi-agent in nature. Our approach can be contrasted with the emergentist position adumbrated and advocated in [20], which is that collective cognition isn’t reducible to individual cognition (collective cognition, to use their term, is autonomous), while it is produced out of individuals interacting. In our case, a precise account of the multi-mind effect would enable an individual problem solver to leverage that account to achieve performance on par with the group. In fact, we hypothesize that the very highest levels of individual problem solvers encompass techniques and patterns of thought seen in MME.

5 Due to space constraints, we leave aside MML’s predictions regarding the general failure of individuals faced with logical illusions and the like.
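The two-phase design of Section 4 can be summarized schematically as follows (a hypothetical sketch only: the helper names, the solves_alone predicate, and the pool of 13 subjects are illustrative, not our actual experimental materials):

```python
import random

def run_phase_one(subjects, solves_alone):
    """Phase 1: administer P_XX individually; remove the (few) solvers."""
    return [s for s in subjects if not solves_alone(s)]

def assign_groups(subjects, size=3, seed=0):
    """Phase 2: randomly partition the remaining subjects into groups,
    which then receive the isomorphic problem P'_XX."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

subjects = list(range(13))                             # hypothetical pool
remaining = run_phase_one(subjects, lambda s: s == 0)  # assume one lone solver
groups = assign_groups(remaining)
print(len(groups), [len(g) for g in groups])  # 4 [3, 3, 3, 3]
```

MML predicts that some of these groups will solve P′_XX even though, by ML and MM, every member individually lacks the machinery to do so.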

5 Initial Supportive Results

Three pilot experiments were conducted by the first author to test for the existence of the predicted MME. The aim of these experiments was to explore MME as a phenomenon occurring in multi-agent reasoning. We also wanted to explore the techniques by which groups of reasoners could leverage the cognitive apparatus of the individual members to come up with performance that was greater than the performance of any (or the best) individual in the group. The problems used were variants of Problems 1 and 2 from above, the famous Wason card selection task (WST) [21], and the Wise Man Puzzle (WMP), which is well-known in logic-based AI.6 The experiments were carried out on students in the course Logic and Artificial Intelligence. These students had taken fewer than three courses in formal logic, and therefore may be considered ‘untrained.’7

The group size originally was 13. The students were presented with the problems and asked to solve them individually. Once the responses were collected, students who gave a correct response to the problems were separated from the rest of the class. The students were not told whether their responses were accurate or not, to prevent any biases. A cover story was presented to explain the separation of some of the students from the rest of the class. Only one subject was removed in the three non-WMP cases. No individual solved WMP (under severe time constraints). The remaining students were randomly assigned to different groups, with four groups of three students per group. The groups were given problems isomorphic to Problems 1 and 2, WST, and WMP. The groups were all asked for justifications for their solutions. These could be in the form of sentences, proofs, or diagrams, and permutations thereof. All the groups reached the correct solution for the isomorphic problems for Problems 1 and 2 and WST. One of the three groups solved WMP. These results, though extremely preliminary, show support for the presence of MME.

6 For a detailed analysis of WMP, see [22].

7 In the interests of space, we leave aside discussion of, and evidence for, the view that the number 3 is the dividing line on this issue.


6 Toward Modeling MME

6.1 Declarative CCM

The basic units of declarative computational cognitive modeling (CCM) are declarative in nature, or propositional: they are formal objects naturally associated with those particular sentences or expressions in natural languages (like English, German, Chinese) that are declarative statements (as opposed to expressions in the imperative or inquisitive mode), naturally taking values such as true, false, unknown, probable (sometimes to particular numerical degrees), and so on. The basic process over such units is inference, which may be deductive, inductive, probabilistic, abductive, or analogical. Because the basic units of declarative CCM are declarative, a hallmark of this type of modeling is a top-down, rather than bottom-up, approach.8

6.2 Logic-Based CCM

As explained in [24], logic-based CCM is the formal kind of modeling that underlies top-down, declarative modeling, and is based on a generalized form of the concept of a logical system as defined rather narrowly in mathematical logic, where this concept stands at the heart of Lindström’s Theorems [25]. Corresponding to every logical system is a logic-based computer program. The execution (or, better, the evaluation) of such a program produces a cognitive model or simulation of the target phenomena. The target in the present case, of course, is the multi-mind effect.

6.3 Prior Logic-Based Modeling

Arkoudas & Bringsjord (2004) have carried out previous work designed to simulate, from an engineering/AI point of view, multi-agent reasoning of the sort required to solve WMP. There is insufficient space to even encapsulate this work, but it should be noted that while it was undertaken outside the perspective of MME, the core formalisms [26, 27] were in line with logic-based CCM. It is also worth mentioning that some prior work seems to us to indicate that MME explains why mathematics involves the interaction of multiple agents; for example, see [28].

8 Modeling carried out by [23] is an exception, since it is at once bottom-up and top-down.

6.4 Toward Modeling MME in Slate

Slate is a system for modeling and facilitating human reasoning and decision-making that has been under development (led by Shilliday, Taylor, and Bringsjord) for the past five years in RPI’s RAIR Lab, the research and development made possible by grants from ARDA, DTO, and DARPA. A user of Slate manipulates a number of information-based items, viz., propositions, hypotheticals, sets (i.e., collections of other information-based items), subproofs, models, proofs, and, in the most recent versions of Slate, databases. All of these structures are represented graphically in Slate’s workspace within System S. System S empowers users of Slate to record and share their inferences and reasoning structures, and allows mechanical treatment of these structures by machine, so that Slate can automatically launch searches for proofs and disproofs in connection with arguments under consideration by human reasoners.

6.4.1 On Automated Translation

We are currently working on various tools and frameworks in the RAIR Lab to facilitate interoperability between Slate and other systems. A number of languages have been developed for the purpose of sharing knowledge and expressing the relation between the knowledge representation schemata of various systems. KIF [29] was the first interlingua adopted on a (relatively speaking) large scale, but after a few years (during which the internet became much more important), Common Logic [30] was created to address the need for namespaces, URIs, and the like. Because of the proliferation of unintegrated knowledge bases and relational databases, DTO launched the IKRIS challenge workshop in April 2005 for the development of semantic interoperability between such systems — a workshop in which we participated. In connection with this work, we have devised a system of translation graphs capable of yielding so-called “bridging axioms” to enable semantic interoperation between various divergent representation schemata or ontologies (encoded as signatures in Many-Sorted Logic (MSL) [31], Slate’s “native” language).
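The composition idea behind translation graphs can be illustrated with a toy sketch (invented purely for exposition; the schema names, record fields, and bridging functions below bear no relation to the actual IKRIS, MSL, or Slate formalisms): nodes stand for representation schemata, edges carry bridging translations, and a path through the graph composes into a translator between two schemata.

```python
# Toy translation graph: edges map a record in one schema to the next.
bridges = {
    ("A", "I"): lambda rec: {"caller": rec["firm"], "callee": rec["household"]},
    ("I", "B"): lambda rec: {"from": rec["caller"], "to": rec["callee"]},
}

def translate(path, record):
    """Compose the bridging functions along a path of schema names."""
    for edge in zip(path, path[1:]):
        record = bridges[edge](record)
    return record

# Translate a schema-A record into schema B via the interlingua I.
print(translate(["A", "I", "B"], {"firm": "A", "household": "h1"}))
# {'from': 'A', 'to': 'h1'}
```

In the real system the edges are bridging axioms over logical signatures rather than Python functions, but the graph-composition structure is the same.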

6.4.2 Mental Metalogic Reasoning in Slate

In Slate, items in System S are connected with argument links to graphically depict an argument from some set of premises to a particular conclusion. Arguments may be supported or denied by witness objects, viz., models, proofs, or databases.


Figure 1: Slate simulates mental model-based reasoning.

This mechanism can be used to model model-based reasoning in Slate. For example, consider the following argument:

(P1) All humans are mammals.
(P2) All mammals are vertebrates.
∴ (C) All vertebrates are human.

The easiest way to realize that this argument is invalid is to imagine a model in which the premises are true, and yet the conclusion is false. Consider a model in which there is a single object that is a vertebrate, a mammal, and yet not a human. (P1) is satisfied: all humans are mammals (as there are no humans at all in the model). And (P2) is satisfied: all mammals are vertebrates (as the one object that is a mammal is also a vertebrate). However, (C) is false: there is some vertebrate which is not a human (the single object in the model is a vertebrate, and yet not a human).

This entire process, which some subjects engage in, can be simulated by Slate; the process is summed up in Figure 1. Each premise and conclusion is represented iconically in System S, and the connections between these objects reflect the inferential structure of the argument given above. The (mental) model can be inspected graphically (the inset image), as well as manipulated iconically in System S. The iconic model is connected to the argument with an X, denoting that the model shows the argument to be invalid.
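The countermodel described above can also be found mechanically by searching tiny models (a toy sketch in plain Python, not Slate’s model machinery): each candidate one-object model assigns the object a triple of predicate values, and we look for one satisfying the premises while falsifying the conclusion.

```python
from itertools import product

# Search one-object models for a countermodel to the argument:
#   (P1) All humans are mammals.  (P2) All mammals are vertebrates.
#   (C)  All vertebrates are human.
# Each object is a triple (human, mammal, vertebrate) of truth values.

def holds_for_all(pred, domain):
    """A universally quantified claim holds iff it holds of every object."""
    return all(pred(x) for x in domain)

for h, m, v in product([True, False], repeat=3):
    domain = [(h, m, v)]
    p1 = holds_for_all(lambda o: (not o[0]) or o[1], domain)  # human -> mammal
    p2 = holds_for_all(lambda o: (not o[1]) or o[2], domain)  # mammal -> vertebrate
    c = holds_for_all(lambda o: (not o[2]) or o[0], domain)   # vertebrate -> human
    if p1 and p2 and not c:
        print(h, m, v)  # False True True: a non-human mammal that is a vertebrate
        break
```

The first hit is exactly the model in the text: a single object that is a mammal and a vertebrate but not a human.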

6.4.3 Multi-Agent Reasoning in Slate

Slate can be used to model multi-agent reasoning analogous to the interactions between actual human reasoners. To get a glimpse of how this is possible, consider the following scenario.

Four telemarketing companies, A, B, C, and D, each of which we can regard as an agent, have decided, mostly due to their highly developed sense of morality, that they should collaborate with each other to ensure that potential customers are contacted by no more than two of the companies within a two-month period. Each firm keeps a database of those people they have contacted, and would like to provide this data, while revealing neither sensitive data nor the structure of their own records, to the other companies. Each firm also wants to receive such data from the other marketers. Adding to the difficulty, not only are sensitive data and record structures to be kept private, but the information which can be shared is stored differently from firm to firm:

• Firm A maintains records only on whether they have called a particular household within a given month.

• Business B considers a contact made when they have telephoned a household and the household has made a call back to them.

• Company C considers having made a contact with a household if and only if they called the household and the phone call lasted at least fifteen minutes.

• D considers a contact made when either they have called a household or the household has called them.

Though the systems have different notions of contacting households, an intermediate office I works in conjunction with the four companies to help them determine whom they can call, as follows:

I records the initiating party, the receiving party, the date, and the duration of phone calls. Each firm is able to give this information to I, and when a firm wants to call a household, they first ask I whether they are permitted to call the household. I has at its disposal information from each system and so can determine whether it is permissible to call the household. Note that no system has to reveal its internal structures or sensitive information, but that each system, with the help of I, can behave appropriately.
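A minimal sketch of I’s decision procedure (hypothetical code: the call log, its field layout, and the 60-day reading of “two-month period” are our simplifying assumptions; the contact predicates paraphrase the bullet list above):

```python
from datetime import date, timedelta

# Hypothetical call log kept by the intermediary I:
# (initiator, receiver, date, duration_minutes)
calls = [
    ("A", "h1", date(2007, 1, 5), 2),
    ("B", "h1", date(2007, 1, 8), 3),
    ("h1", "B", date(2007, 1, 9), 1),    # household h1 called B back
    ("C", "h1", date(2007, 1, 10), 20),
]

def contacted(firm, household, log):
    """Each firm's own notion of 'contact', per the bullet list."""
    out = [c for c in log if c[0] == firm and c[1] == household]
    back = [c for c in log if c[0] == household and c[1] == firm]
    if firm == "A":
        return bool(out)                     # A: any call placed
    if firm == "B":
        return bool(out) and bool(back)      # B: a call plus a call back
    if firm == "C":
        return any(c[3] >= 15 for c in out)  # C: a call of >= 15 minutes
    if firm == "D":
        return bool(out) or bool(back)       # D: a call in either direction
    return False

def may_call(firm, household, log, today):
    """Permit the call only if fewer than two *other* firms have
    contacted the household within the last two months."""
    window = [c for c in log if (today - c[2]) <= timedelta(days=60)]
    others = [f for f in "ABCD" if f != firm and contacted(f, household, window)]
    return len(others) < 2

print(may_call("D", "h1", calls, date(2007, 1, 15)))  # False: A, B, C all contacted h1
print(may_call("A", "h2", calls, date(2007, 1, 15)))  # True: nobody has contacted h2
```

Note that each firm reveals only raw call records to I; the firm-specific contact predicates live inside I, so no firm sees another firm’s record structure.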

Given translation graphs, the relationships between the representations used by the different companies (and, more generally, agents) can be explored in Slate, and a process for reconciling the representations constructed. A set of bridging axioms can be extracted automatically from this translation graph. Such a graph is shown in Figure 2.

Figure 2: Slate models four agents interacting through an interlingua.

While this example relates to the economic marketplace on its surface, the underlying structures and processes, we believe, are subtle, and are part of what is required to model the multi-mind effect, in which individual humans interact through an interlingua to problem-solve.

7 M-M Effect & Education

The ultimate dream of passionate educators would presumably be nothing short of teaching students how to surmount context-dependent reasoning. Our basic strategy would be to teach individual students how to engage in the efficacious forms of reasoning seen when MME is produced. This strategy, arguably, would stretch back to Plato, who defines thinking as inner (silent) dialogue with oneself.9 We specifically hold that education in logic is the key to developing context-independent deductive reasoning in students [11]. Such education should include instruction in the following, which we have observed to be in play in MME:

1. Disproving an incorrect answer, and proving that a purported disproof fails, which reinstates the original, targeted would-be proof.

2. Rigorous and general-purpose methods for transforming a natural-language (e.g., English) word problem into a formal representation in a logical system.

3. Using diagrammatic techniques to model proofs and disproofs, and meta-reasoning over these meta-representations.

9 E.g., in Theaetetus (189E–190A). He gives similar accounts of thought in the Sophist (263E) and in Philebus (38E).

8 Next Steps

The empirical evidence presented in this paper is a result of some preliminary studies conducted to test our theory-driven prediction of MME. Further experiments are planned in which this effect will be investigated in detail, in a controlled manner.10 Our research program is also aimed at precisely modeling MME in Slate, under the paradigm of logic-based CCM, a kind of modeling we have achieved previously, for example in [22].11 Progress toward these goals will be reported and demonstrated at ECM MAI 2007.

References

[1] K. E. Stanovich and R. F. West. Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23(5):645–665, 2000.

[2] L. Rips. The Psychology of Proof. MIT Press, Cambridge, MA, 1994.

[3] P. N. Johnson-Laird. Mental Models. Harvard University Press, Cambridge, MA, 1983.

[4] P. N. Johnson-Laird, P. Legrenzi, V. Girotto, and M. S. Legrenzi. Illusions in reasoning about consistency. Science, 288:531–532, 2000.

10 However, the only empirical issue open to dispute is whether the multi-mind effect ensures de-biasing. We know the effect occurs, for the simple reason that verbal protocols we have collected invariably show it in action when groups, as opposed to individuals, grapple with relevant problems.

11 We have stressed reasoning in the foregoing, but the multi-mind effect should also surface in connection with decision-making. Put simply, the main reason is that we believe we can shortly show, formally and computationally, that the seminal experiments in [32], which reveal a failure of normatively correct decision-making, are circumvented by artificial agents that reason in the normatively correct fashion underlying MME.


[5] S. Bringsjord and Y. Yang. Logical illusions and the welcome psychologism of logicist artificial intelligence. In D. Jacquette, editor, Philosophy, Psychology, and Psychologism: Critical and Historical Essays on the Psychological Turn in Philosophy, pages 289–312. Kluwer, Dordrecht, The Netherlands, 2003.

[6] Y. Yang and S. Bringsjord. Mental Metalogic: A New, Unifying Theory of Human and Machine Reasoning. Erlbaum, Mahwah, NJ, forthcoming.

[7] K. Rinella, S. Bringsjord, and Y. Yang. Efficacious logic instruction: People are not irremediably poor deductive reasoners. In J. D. Moore and K. Stenning, editors, Proceedings of the Twenty-Third Annual Conference of the Cognitive Science Society, pages 851–856. Lawrence Erlbaum Associates, Mahwah, NJ, 2001.

[8] Y. Yang and S. Bringsjord. Mental metalogic: A new paradigm for psychology of reasoning. In Proceedings of the Third International Conference on Cognitive Science (ICCS 2001), pages 199–204. Press of the University of Science and Technology of China, Hefei, China, 2001.

[9] Y. Yang and S. Bringsjord. The mental possible worlds mechanism and the lobster problem: An analysis of a complex GRE logical reasoning task. Journal of Experimental and Theoretical Artificial Intelligence, 18(2):157–168, 2006.

[10] P. W. Cheng and K. J. Holyoak. Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 17:391–416, 1985.

[11] S. Bringsjord, E. Bringsjord, and R. Noel. In defense of logical minds. In Proceedings of the 20th Annual Conference of the Cognitive Science Society, pages 173–178. Lawrence Erlbaum, Mahwah, NJ, 1998.

[12] B. Inhelder and J. Piaget. The Growth of Logical Thinking from Childhood to Adolescence. Basic Books, New York, NY, 1958.

[13] N. Davidson and T. Worsham. Enhancing Thinking Through Cooperative Learning. Teachers College Press, New York, NY, 1992.

[14] G. Hill. Group versus individual performance: Are n + 1 heads better than one? Psychological Bulletin, 91(3):517–539, 1982.

[15] J. Richard Hackman, Ruth Wageman, Thomas M. Ruddy, and Charles L. Ray. Industrial and Organizational Psychology, chapter Team Effectiveness in Theory and Practice, pages 109–119. Blackwell Publishers Inc, Malden, MA, 2000.

[16] Eduardo Salas, C. Shawn Burke, and Janis A. Cannon-Bowers. Teamwork: Emerging principles. International Journal of Management Reviews, 2(4):339–356, 2000.

[17] Eric Raymond. The cathedral and the bazaar. Knowledge, Technology & Policy, 12, 1999.

[18] Harold A. Linstone and Murray Turoff. The Delphi Method: Techniques and Applications. Addison-Wesley Publishing Company, Reading, MA, 1975.

[19] P. N. Johnson-Laird. Rules and illusions: A critical study of Rips’s The Psychology of Proof. Minds and Machines, 7(3):387–407, 1997.

[20] Pietro Panzarasa and Nicholas Jennings. Collective cognition and emergence in multi-agent systems. In Ron Sun, editor, Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation, pages 401–408. Cambridge University Press, Cambridge, UK, 2006.

[21] P. Wason. Reasoning. In New Horizons in Psychology. Penguin, Harmondsworth, UK, 1966.

[22] Konstantine Arkoudas and Selmer Bringsjord. Metareasoning for multi-agent epistemic logics. In Fifth International Conference on Computational Logic in Multi-Agent Systems (CLIMA 2004), volume 3487 of Lecture Notes in Artificial Intelligence (LNAI), pages 111–125. Springer-Verlag, New York, 2005.

[23] Ron Sun. Robust reasoning: Integrating rule-based and similarity-based reasoning. Artificial Intelligence, 75:241–295, 1995.

[24] Selmer Bringsjord. Declarative/logic-based computational cognitive modeling. In Ron Sun, editor, The Handbook of Computational Cognitive Modeling. Cambridge University Press, forthcoming.

[25] H. D. Ebbinghaus, J. Flum, and W. Thomas. Mathematical Logic (second edition). Springer-Verlag, New York, NY, 1994.

[26] K. Arkoudas. Athena. http://www.cag.csail.mit.edu/~kostas/dpls/athena.

[27] Lou Goble, editor. The Blackwell Guide to Philosophical Logic. Blackwell Publishing, Oxford, UK, 2001.

[28] S. Colton, A. Bundy, and T. Walsh. Agent-based cooperative theory formation in pure mathematics, 2000.

[29] Michael R. Genesereth and Richard E. Fikes. Knowledge Interchange Format Version 3 Reference Manual, December 1997.

[30] Christopher Menzel. Common Logic Standard. In Santa Fe 2003 Metadata Forum Symposium on Ontologies, 2003. Presentation of the case for Common Logic.

[31] Maria Manzano. Extensions of First Order Logic. Cambridge University Press, 1996.

[32] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263–291, 1979.
