
Symbolic Grounding and Artificial Intelligence

BA Thesis (Afstudeerscriptie)

written by

Roelof de Vries
(born May 21st, 1985 in Amsterdam)

under the supervision of Dr. R. Blutner, submitted in partial fulfillment of the requirements for the degree of

BA Kunstmatige Intelligentie

at the Universiteit van Amsterdam.


Abstract

The problem of symbol grounding has always been a major discussion in Artificial Intelligence. Along the way some researchers became confident it could be solved. The first part of this thesis contains an overview of this discussion and an explanation of how the symbol grounding problem could be solved. The main part of this thesis involves a breakdown of a solution which has been proposed to solve the symbol grounding problem. I provide an extensive overview of the central approach of this solution, called the guessing game. My contribution comes in the form of a proximity visualization and a process diagram of the guessing game. A thorough examination of the meaning of the symbol grounding problem and the proposed solution is discussed. This discussion and other arguments lead me to conclude that the symbol grounding problem has not yet been solved by this solution. It seems, however, that it is on the verge of being solved.


Acknowledgements

I would like to thank my supervisor Reinhard Blutner for supporting me during the writing of this thesis. I would also like to thank Tony Belpaeme for making the slides of his presentation given in Amsterdam in April 2006 available to me and my supervisor. Furthermore, I would like to thank Frits de Vries, Gerben de Vries and Douwe Oosterhout for giving useful comments on my thesis. Last but not least, I would like to thank all the wonderful people I’ve studied with.


Contents

1 Introduction 2

2 The relation between philosophy and AI 3
   2.1 What does philosophy have to offer AI in general? 4
   2.2 What does AI have to offer philosophy in general? 5

3 Grounding’s relation to AI and philosophy 5
   3.1 An introduction to the symbol grounding discussion 5
   3.2 Formulation of the symbol grounding problem 6
      3.2.1 Symbols 6
      3.2.2 Symbol systems 7
      3.2.3 Grounding & the symbol grounding problem 8
   3.3 Discussion of the symbol grounding problem 8

4 A concrete example of symbol grounding in AI 9
   4.1 The three theories 9
   4.2 Accountability for used agents 10
   4.3 How the theories are tested 10

5 A breakdown of the guessing game 14
   5.1 Prerequisites for the guessing game 14
   5.2 The guessing game in steps 15
      5.2.1 Route 1 (failure, see figure 10 on page 16) 15
      5.2.2 Route 2 (failure, see figure 11 on page 17) 16
      5.2.3 Route 3 (failure, see figure 12 on page 18) 17
      5.2.4 Route 4 (failure, see figure 13 on page 19) 17
      5.2.5 Route 5 (success, see figure 14 on page 20) 19
      5.2.6 Example of successful guessing game 21
      5.2.7 The guessing game overall 21
   5.3 Aspects of the guessing game to ponder 22
   5.4 The theories tested 22
   5.5 Results and interpretations 24
   5.6 Discussion of the concrete example 24

6 Discussion and conclusions 26

References 27


1 Introduction

With the invention of the electronic computer in the mid-20th century and the rise of the modern computer in the past few decades, a new branch of computer science arose. This branch of computer science, now called Artificial Intelligence (AI), claimed it could simulate traits of intelligent human behavior in a machine. This claim sparked a still raging discussion in philosophy and AI: “Can computers think?”, or put differently, “Can machines display a trait of intelligent human behavior like, for example, making conversation?”. One of the first scientists who contributed to this discussion, by devising a concrete test for it, was Alan Turing.

In 1950 Alan Turing developed the Turing test, which comes down to making a person believe he is digitally conversing with another person while he is actually conversing with a machine (Turing, 1950). If this is the case, then the machine must ‘understand’ the conversation as well as you do and is therefore indistinguishable from a ‘real’ person.

A reply to this assumption came in 1980 from John Searle. He invented the Chinese Room experiment (Figure 1 on page 2). This thought experiment argues that machines cannot ‘understand’. The concept behind it is that a person with an extensive instruction manual (on how to reply in a conversation) can have a conversation with another person in a language, for example Chinese, that the first person doesn’t know (Searle, 1980). The only thing the first person has to do is look up all the words coming in from the conversation and reply with the words found in the instruction manual. With this thought experiment in place, Searle then argues that the person replying has no real understanding of Chinese but can still produce a seemingly intelligent reply. Searle believed this is analogous to what a machine does and that therefore a machine does not ‘understand’ (for a more extensive explanation see: Searle, 1980).

Figure 1: An example visualization of Searle’s Chinese Room thought experiment, from: http://pzwart.wdka.hro.nl/mdr/research/fcramer/wordsmadeflesh/pics/chinese-room.png

One of the discussion points that arose from this experiment is “What is needed for a seemingly intelligent conversation?”, and one of the replies to this question is: ‘grounding’. It is here that the domain of this thesis begins.

The goal of this thesis is to provide a concise and straightforward introduction to the relation between philosophy and AI, focusing on AI’s side of the problem of symbol grounding. I present an overview of a concrete example of a solution for the symbol grounding problem. In this overview I elaborate on an important aspect of this concrete example, namely the guessing game. In the time available for this thesis I restrict myself to providing only this introduction, this overview and an extensive clarification of this game. This could be beneficial to the understanding of the proposed solution to the symbol grounding problem, which in turn could contribute to the discussion of the symbol grounding problem in general. On the other hand, this time restriction means that this thesis will not contain a complete overview of the evolution of the Chinese Room experiment, which is relevant to the symbol grounding problem, and all of its refutations.

As stated before, the domain of this thesis is ‘grounding’. In short, grounding is giving meaning to a symbol, often explained as an abstract representation of an object (e.g. words in a conversation), by connecting it with something in the real world. When arguing this way, there would be no intelligence in the system of the Chinese Room experiment, because the symbols of the (Chinese) language are not grounded, i.e. not connected to the real world, but only systematically processed. From this viewpoint the question arises if and how grounding could even be possible in machines, and in what way grounding could then be important for AI. Obviously, these are interesting questions for AI researchers but also for philosophers. This leads me to the following research questions:

• What exactly is meant by the symbol grounding problem?

• How is the symbol grounding problem solvable for AI researchers?

• Can I clarify this proposed solution by introducing proximity visualizations and process diagrams?

These questions comprise the main investigation in this thesis. But asking these questions means answering other questions first.

• How are these questions related to both AI and philosophy?

• What exactly are symbols?

• What is grounding?

Now that my goals and research questions are clear, I will proceed with explaining the outline of the rest of this thesis. First the relation between AI and philosophy is clarified. Then the relation of the symbol grounding problem to AI and philosophy, and all of the accompanying difficult concepts and problems, are discussed. After this, there has been enough introduction to discuss a suggested concrete solution to the symbol grounding problem. The workings and theory behind this solution, including my two clarifications of the guessing game, are then thoroughly accounted for. Finally the results and interpretations are analyzed.

2 The relation between philosophy and AI

Although the introduction makes clear that this thesis is for the most part about the different aspects concerning symbol grounding, I assume, as I stated before, that some background on the relation between philosophy and AI is also useful. This way the reader understands how AI and philosophy can benefit from and hinder each other in, for example, the discussion and solving of the symbol grounding problem. It seems important to point out that the problem of symbol grounding is a matter concerning both fields of science. The symbol grounding discussion is so large partly because there is no consensus between the two fields about the definitions of some concepts concerning symbol grounding (e.g. symbols). The next section is, as mentioned, an illustration of how the fields can trade their knowledge.

2.1 What does philosophy have to offer AI in general?

It is my belief, based on all the examples provided below, that what philosophy has to offer AI comes down to five different points. In the following text I will discuss and clarify these points to make clear why I think they matter.

First, I think it is important to point out that philosophy is used as a source of inspiration. This means that philosophy can be useful for AI on matters of ‘cognition’, ‘rationality’ or ‘representations’. Philosophy can serve as a pool of inspiration where AI researchers can find new (or old) ways to look at AI problems. A good example of this is the notion of prototypes conceived by Wittgenstein (1953), which has been put to good use by AI researchers in the research into color categories.

Second, and probably the most important point, philosophy is also beneficial as a source of clarification. AI researchers benefit much from analytical philosophy in cases where there is a need to describe theories in, for example, formal logic. But AI also needs philosophy to get a clear understanding of the previously mentioned concepts of ‘cognition’, ‘rationality’ and ‘representations’. For example, if AI really wants to create intelligence, one has to have a firm understanding of what exactly intelligence is and, just as importantly, what it is not. This obviously also concerns the discussion of the symbol grounding problem.

Third, philosophy is, obviously, a vast source of historical knowledge. Historical knowledge can be used by AI researchers to find out where exactly in the field their research is located and what the main ideas in that field are: for example, which traits are considered important for intelligent behavior, and how their research relates to those ideas. Philosophy has spent many years researching and formulating these ideas, and AI researchers should and do readily absorb this knowledge. A good example of this is the formulation of a concept as tricky as a ‘symbol’, on which philosophy has had many more years of practice and whose resulting knowledge AI has readily absorbed.

Fourth, philosophy can occasionally serve as a check on limitations. Sometimes philosophers are needed to point out to AI researchers the limitations of AI research in its fields and disciplines. For example, Hubert Dreyfus predicted in 1969 that classical AI would not suffice to create intelligent behavior, meaning that one could not model intelligent behavior with only rules and symbols. Dreyfus concluded this on the basis of philosophical problems that were, at that point, not yet related to AI research (De Jong, 2009).

Fifth and finally, philosophy is used as a source of confirmation. Before, during and after research, philosophy is needed to make sure that what you are researching is in fact an AI topic and not, for example, simple programming. This also means philosophy is needed to make sure that, along the way and afterwards, you are and were still researching what you started out researching. For example, in AI research it is easy to begin modeling a certain trait of intelligent behavior which along the way degrades into a simple if-then program not resembling anything remotely intelligent. Consider solving a sudoku puzzle, which seems to require a certain degree of intelligent behavior but which, from a programmer’s view, can be modeled by a fairly simple, not-intelligent-at-all program.


2.2 What does AI have to offer philosophy in general?

I will try to explain, along the same lines as the previous section, what AI has to offer philosophy. In my opinion this, again, comes down to five points. Not surprisingly, these points have a lot in common with the previous account.

First, and probably most importantly, AI forces on philosophers a demand for formalization. AI’s demand for meticulous formalization of philosophical theories is obviously closely related to the second point in the previous section. The difference lies in the fact that AI forced, and still forces, philosophers to formalize and rethink their concepts of cognition and intelligence to such standards that they become applicable, designable and testable concepts. For example, a vaguely formalized logical theory is insufficient to guarantee that it is implementable and is therefore in need of further formalization. This contrasts with an AI theory which, as stated above, uses philosophy to clarify its already neatly formalized AI concepts and why they are relevant in the theory.

Second, AI offers an extension to the idea of thought experiments, i.e. a form of implementation. AI offers philosophy a way to concretize the usual thought experiments, which then makes it possible to apply them to more difficult cases than previously possible with human reflection, and thus offers an extension beyond the field of logical reasoning. For example, instead of just wondering what ‘rationality’ is or what ‘representations’ are, one can extend these thoughts to “could an agent be designed to cope with ‘representations’” (Cummins & Pollock, 1995, p. 3) or “how a ‘rational’ agent might be designed” (ibidem). This means a shift from how things ought to work to how things could work (Cummins & Pollock, 1995).

Third, AI offers an implementation, which in turn offers a consistency check for theories. Philosophical theories, once the first point in this section is met, can use AI as a consistency check. Implementation of philosophical theories has frequently revealed “ambiguity, vagueness, incompleteness and downright error in places where traditional philosophical reflection was blind” (Cummins & Pollock, 1995, p. 2).

Fourth, AI offers an implementation which in turn also offers a dependency check for theories. So, much like the previous point, AI also offers the possibility of dependency checks in theories. This can, for example, reveal relations between assumptions which are redundant or not applicable.

Fifth and finally, AI offers an implementation which in turn also offers a falsifiability check for theories. Falsifiability is an important property of philosophical theories, because if a theory is falsifiable but has not been proven wrong, this greatly increases the ‘value’ of the theory.

3 Grounding’s relation to AI and philosophy

The following section will review the ‘groundbreaking’ notion of symbol grounding, which has had a major effect on both philosophy and AI. ‘Grounding’ and the accompanying ‘symbol grounding problem’ have stirred up a lot of discussion since the rise of AI and have, as some (or most) researchers believe, not been resolved as of yet.

3.1 An introduction to the symbol grounding discussion

The usage and manipulation of symbols has been an AI issue from the moment AI arose in the early 1950s. The usage of symbols had been an issue in philosophy long before that. Nevertheless, the current discussion about symbols began when Newell and Simon defined the physical symbol system hypothesis (1976). The discussion about symbol grounding (although it was not called that yet) was started in 1980 when Searle published “Minds, Brains and Programs”, where he introduced the Chinese Room experiment. The problem did not get its name until Harnad wrote “The Symbol Grounding Problem” (SGP) in 1990. Since then the Chinese Room experiment has been reintroduced to ignite the symbol grounding discussion, and many replies to refute it have followed (for a short summary of those replies see, for example: Hauser, 1997; Harnad, 2001; Cole, 2008).

3.2 Formulation of the symbol grounding problem

The first logical step in explaining what the SGP is, is to explain what exactly ‘symbols’ are and what exactly ‘grounding’ is. After that, an explanation of the most common understanding of the SGP is in order. Although the main authority on the SGP (Harnad, 1990, 2003) believes that it is not useful to explain what symbols are other than in relation to symbol systems, a short explanation follows. In the next section there will be some descriptions of what symbols are or can be. Keep in mind that in the scientific literature there is no real consensus on a good definition.

3.2.1 Symbols

Meaningful literature on symbols goes as far back as Aristotle (2007). And although a lot of people have written about what constitutes a symbol, no consensus has been reached. In the remainder of this section I will discuss some definitions of symbols and conclude with the one most useful for this thesis.

The first definition is from an online dictionary1:

A word, phrase, image, or the like having a complex of associated meanings and perceived as having inherent value separable from that which is symbolized, as being part of that which is symbolized, and as performing its normal function of standing for or representing that which is symbolized: usually conceived as deriving its meaning chiefly from the structure in which it appears, and generally distinguished from a sign.

This definition does not really make it clearer what does and does not constitute a symbol. The boundaries of the concept of a symbol remain vague (using words such as ‘usually’ and ‘or the like’ doesn’t help). The main point being made is that a symbol differs from a sign, which in turn leads us to Peirce’s definition.

According to Vogt, Peirce defines a sign as semiotic, consisting of a representamen, an interpretant and an object (2002). Quoting Vogt (2002, p. 5): “According to Peirce, the sign is called a symbol if the representamen in relation to its interpretant is either arbitrary or conventionalized, so that the relationship must be learned.”. Defined this way, a representamen is the form which the sign takes, an interpretant is the meaning of the sign, and the object is that to which the sign refers (see also figure 2 on page 7 (Vogt, 2002, p. 5)). Simply put, this means that a sign is a symbol when the way in which the symbol is represented (the form) is not directly a logical way to represent the meaning that symbol is supposed to have. Simple examples of this would be a word (the word “dog” has little to do with the meaning dog) or a numeral (the numeral “2” has little to do with what 2 means). This coincides predominantly with the work of Frege, who said that a term has a meaning and a referent (Frege, 1952/1892). Almost the same definition is also used by Steels (2007, p. 1-2): “... let us start from Peirce and the (much longer) semiotic tradition which makes a distinction between a symbol, the objects in the world with which the symbol is associated, for example for purposes of reference, and the concept associated with the symbol ...”. Here, Steels replaces the term form/representamen with symbol, interpretant/meaning with concept, and object/referent stays object.

Figure 2: The semiotic triangle taken from Vogt. When the form is either arbitrary or conventionalized, the sign can be interpreted as a symbol (Vogt, 2002).

1 (http://dictionary.reference.com/browse/symbol), retrieved on Wednesday 24th of June 2009
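To make the Peircean terminology used by Vogt and Steels concrete, here is a minimal sketch of a sign as a small data structure. The field names and the arbitrariness flag are my own illustrative choices, not part of any of the cited models.

```python
# A minimal sketch of the Peircean sign as discussed above: a form (representamen),
# a meaning (interpretant) and a referent (object). The layout and the arbitrariness
# flag are illustrative assumptions, not part of the cited models.
from dataclasses import dataclass

@dataclass
class Sign:
    form: str          # the representamen, e.g. the word "dog"
    meaning: str       # the interpretant: the concept the sign evokes
    referent: object   # the object in the world the sign refers to
    arbitrary: bool    # True when the form-meaning relation must be learned

def is_symbol(sign: Sign) -> bool:
    """Under Peirce's definition (via Vogt), an arbitrary or conventionalized sign counts as a symbol."""
    return sign.arbitrary

dog = Sign(form="dog", meaning="domestic canine", referent="the animal itself", arbitrary=True)
print(is_symbol(dog))   # True: the letters d-o-g have nothing to do with actual dogs
```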

Another famous interpretation of symbols comes from Newell & Simon (1976). They consider human cognition and the manipulation of symbols to be equivalent to computation in computers. This led Newell & Simon to formulate the now famous physical symbol system (1976, p. 116): “A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure).” Also, as Sun (2000, p. 3) points out: “They further claimed that symbols can designate arbitrarily: ‘a symbol may be used to designate any expression whatsoever’; ‘it is not prescribed a priori what expressions it can designate.’; ‘There exist processes for creating any expression and for modifying any expression in arbitrary ways’.”. Defining it this way has led to a lot of research into the computation of symbols, but it clearly differs from the previous definitions, in which the importance of the meaning of a symbol is stressed.

In the end, Harnad’s definition (2003, p. 3) is somewhat easier to work with: “A symbol is any object that is part of a symbol system. (The notion of symbol in isolation is not a useful one.)” But then what are symbol systems according to Harnad?

3.2.2 Symbol systems

Symbol systems are, as defined by Harnad (2003, p. 3): “... a set of symbols and rules for manipulating them on the basis of their shapes (not their meanings). The symbols are systematically interpretable as having meanings, but their shape is arbitrary in relation to their meaning.” This means that, for example, letters (e.g. “a”, “b”, “c”) or numerals (e.g. “1”, “2”, “3”) are part of a symbol system (i.e. language and arithmetic). The latter example is the one that Harnad (ibidem) uses. It is important to see that the shapes of these symbols are arbitrary and in no way related to what is meant by them. Consider for example the Roman numeral “VI” and the Hindu-Arabic numeral “6”: both have the same meaning, but their shapes are not the least bit similar. This example demonstrates that, although both numerals have meaning to us and are easily manipulated by a set of rules (we can easily do mathematics with them), the symbols themselves are meaningless. The ‘sense’ that these numerals make is only in our minds. Or as Harnad (ibidem) puts it: “The numerals in a running desk calculator are as meaningless as the numerals on a page of hand-calculations. Only in our minds do they take on meaning (Harnad, 1994).”. This is where the (symbol) “grounding” comes in.
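To make the idea of purely shape-based manipulation concrete, here is a minimal, hypothetical sketch of a symbol system: a couple of string-rewriting rules that operate only on the shapes of tokens. The two rules and the stroke notation are my own toy example, not something taken from Harnad.

```python
# A toy symbol system in the sense described above: tokens are rewritten purely by
# their shape (string patterns); the system never 'knows' what the tokens count.
RULES = [
    ("IIIII", "V"),   # five stroke shapes rewrite to the shape "V"
    ("VV", "X"),      # two "V" shapes rewrite to the shape "X"
]

def rewrite(expression: str) -> str:
    """Apply the shape-based rewrite rules until none of them applies anymore."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES:
            if pattern in expression:
                expression = expression.replace(pattern, replacement, 1)
                changed = True
    return expression

# Twelve strokes end up as "XII" without the system ever treating them as a quantity;
# the 'sense' that XII means twelve is, as Harnad puts it, only in our minds.
print(rewrite("I" * 12))   # -> XII
```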

3.2.3 Grounding & the symbol grounding problem

Grounding symbols simply means that meaning is given to the symbols by connecting them to the real world. This means, for example, looking back on the previous example of numerals, that the figure “2” and the meaning “two-ness” are connected. But, looking from a different perspective, this figure (“2”) is also connected with the meaning of “to-ness”, because of the similarities in pronunciation. Therefore it is sometimes used as “to” in, for example, the word 2morrow in internet language. Also, from yet another perspective, the Roman numeral “II” connects to the same meaning as the figure “2”. Presumably, these meanings are established either by seeing (or perceiving by means of another sense) something that is “two” (e.g. two rocks) and determining that this is now represented by “2”, by learning through communication that these rocks are “2”, or by connecting “2” with other grounded symbols (e.g. “2” means the same as “II”). This way millions of these symbols are grounded, and thus given meaning, in our mind. But how does this work with a machine? Can machines ground symbols?

Harnad considers this problem the symbol grounding problem and poses the following question: “How is symbol meaning to be grounded in something other than just more meaningless symbols?” (1990, p. 7). Put differently, I would say he could ask: “How can a computer acquire something as vague as the ‘meaning’ of a symbol?”. Considering that AI wants to create intelligent agents, and that most people presume that knowing the meaning of symbols is innate to intelligent behavior, this question is vital to AI. As an example of how big a problem symbol grounding poses, the Chinese Room Argument was presented, which will be discussed in the next section. By now, all that the SGP encompasses should be more or less clear.

3.3 Discussion of the symbol grounding problem

The SGP is often related to the Chinese Room Argument (CRA) by Searle (1980), which is obviously the argument he is making with the Chinese Room experiment. The CRA is in its turn related to the Turing Test by Turing (1950). One can already see that the interrelatedness of all these concepts can create a lot of misunderstanding, especially concerning the SGP. Therefore, a lot has been and can be said about the SGP. Of all the commotion it has brought about, half or maybe more is probably due to it being often ill understood. This misunderstanding seemingly has two reasons.

The first is most easily understandable: the symbol grounding problem is a hard concept to grasp for a layman. As explained before, it is a tricky concept because it is hard to visualize and the boundaries of the related concepts are vague. This accounts for part of the problem with the SGP, namely that it is difficult to understand.

The second reason has a lot to do with the first. Not only is the concept almost incomprehensible for a layman, even the experts are in disagreement about it. As became apparent in the previous section, throughout the history of the concept there has been a lot of discussion between these experts. Philosophers and AI researchers have been in a standoff for years. This is also what makes it such an interesting problem, because the discussion is about how this problem should be solved, or even better, whether it is solvable at all. An important part of the discussion is, as mentioned earlier, the CRA. The CRA has, as stated before, been discussed and refuted many times. This is not the place to discuss all those refutations and refutations of refutations. However, one important reply will be mentioned.

If you assume that real persons have ‘understanding’, and the Chinese Room proves, by passing the Turing Test, that there is no way to distinguish between a real person and a machine (or, for that matter, a person in a machine), how can you tell that real persons do have ‘understanding’? Or, by proving they’re indistinguishable, you also can’t say the machine doesn’t ‘understand’. As Harnad (2000, p. 434) puts it: “if you are prepared to impute a mind to the one, you have no nonarbitrary basis for denying it of the other.”. This refutation is, as mentioned, based on the fact that the Chinese Room passes the Turing Test. This proves that the machine is indistinguishable from a real person, yet Searle claims that the Chinese Room doesn’t have real ‘understanding’, so neither does a real person (for a more extensive version of this refutation see Harnad, 2000). These and other refutations lead one to believe that Searle’s CRA might just be wrong, or that what the CRA is meant to prove (that a machine can’t have ‘understanding’) is not really what it is proving. It is merely proving that a machine like a “disembodied symbol-cruncher” (Harnad, 2000, p. 438) can’t gain ‘understanding’, but not that “an embodied robot, capable of nonsymbolic, sensorimotor interactions with the world in addition to symbolic ones” (ibidem) can’t. This would mean that only classical AI, with its symbol-processing computers, can’t solve the SGP (as Dreyfus predicted).

So, assuming Searle is wrong, or assuming he might not be right, and following Harnad’s proposal of an embodied robot, the symbol grounding problem could well be solvable. A solution seems to be to create an agent which is embodied and situated. Simply put, this means that the agent should have sensorimotor capacities and should be deployed in a real situation, interacting and grounding symbols autonomously and directly with the world.

An example of a solution in this form is proposed by Steels & Belpaeme (2005). In this solution the requirements of embodiment and situatedness are met. The remainder of this thesis will explain and discuss this proposal.

4 A concrete example of symbol grounding in AI

The following section summarizes an article written by Luc Steels and Tony Belpaeme (2005) called “Coordinating perceptually grounded categories through language: A case study for colour” and an article by Belpaeme (2001) called “Simulating the Formation of Color Categories”. The goal in these articles is to equip artificial agents (robots) with models based on the main approaches to human categorization of color (namely: nativism, empiricism and culturalism) and to see whether results comparable to human categorization of color categories are achievable in robots.

4.1 The three theories

Following the previously mentioned articles by Steels & Belpaeme, this section discusses the three proposed theories on human categorization of color categories. As mentioned before, these three theories are: nativism, empiricism and culturalism.

Nativism is based on the firm belief that grounded categories of perception are innate to a human. This means that over hundreds of generations, natural selection in evolution gave us the (best) color categories we have now. Implementing this in robots means that simulation of genetic evolution is necessary. This theory implicitly assumes that no communication is needed for grounding categories. Well-known advocates for this theory are Chomsky, Pinker and Fodor.

Empiricism believes that all people have the same mechanism by which they learn. Combining this assumption with the assumption that people grow up in comparable environments, everyone should end up with the same grounded categories of perception. Implementing this is achieved by using (learning) neural networks. This theory implicitly assumes that neither communication nor a genetic basis for categories is needed for grounding categories. Well-known advocates for this theory are Rumelhart and McClelland.

Culturalism assumes that neither nativism nor empiricism is enough to explain shared grounded categories of perception. Culturalism argues that a form of communication in which feedback is provided is needed to fully explain shared categories. Implementing this requires the possibility for agents to create, adopt and communicate about (e.g. achieve consensus on) color names and categories. A well-known advocate for this theory is Tomasello.

The next step is to implement these theories, which would supposedly solve the SGP in the suggested robots and therefore prove Searle wrong. This means that all three theories must be implemented on the same group of agents (a population), which should be capable of all the operations needed for learning and communicating.

4.2 Accountability for used agents

As Steels & Belpaeme (2005, p. 473) put it: “The agent population is an example of distributed multi-agent system (Ferber 1998), commonly used in artificial-life simulations.” And: “We use small populations in this target article (typically 10 agents) because we know from other work that the mechanisms being used in our models scale up to populations of thousands of agents (Steels et al. 2002).”. Furthermore, each agent has the same architecture but does have unique associated information structures. Agents communicate by exchanging a word and pointing at the object that word is intended to represent. Communicating is achieved through infrared sensors, and no agent has insight into the processes of other agents.

These agents make it possible to implement the suggested theories and therefore supposedly ground symbols; the next step is to test them.

4.3 How the theories are tested

Following the articles by Steels & Belpaeme, there are two types of tasks the agents need to be able to perform to successfully model the three suggested theories, namely: “the discrimination game”, which agents perform individually, and “the guessing game”, which agents perform in pairs. The prerequisites for the guessing game (which include the prerequisites for the discrimination game) are explained in detail in the next section.

1. In the discrimination game an individual agent needs to discriminate a presented sample (the topic) from other samples which are not the topic (at that moment). Suppose that the presented samples are identical wires hanging from the ceiling against a wall (maybe connected to a bomb, and only by distinguishing the colors can it be defused!), only distinguishable by their color. The topic among these samples could for example be the red wire. The agent perceives these samples and, by means of a complicated function, can succeed or fail at discriminating the topic (the red wire) from these samples (the other wires, e.g. blue, green, and yellow). A discrimination game is successful if the topic has a category that is different from the categories to which all other samples in the context belong.

Figure 3: Example of presented samples; picture inspired by a lecture by Tony Belpaeme (Amsterdam, April 2006) and then modified

2. The guessing game takes place between an agent acting as speaker and an agent acting as hearer. Both the speaker and the hearer are presented (the same) samples (see figure 3 on page 11), but only the speaker knows the topic (the red wire). The speaker then plays the discrimination game (explained above) for himself, and when successful the speaker looks up the associated word forms. If no word forms are found, the speaker creates one from a repertoire of syllables (e.g. the following string of syllables: “mili”) and conveys it to the hearer. If there are word forms available (see figure 4 on page 12), the one with the best connection is selected and conveyed to the hearer (see figure 5 on page 12). If this word form (created or already existing) is unknown to the hearer, the speaker reveals the topic (the red wire) so that the hearer can also play the discrimination game with this topic and associate the word form (“mili”) with the found or newly created category. If the word form is known to the hearer (see figure 6 on page 13), he looks up the most highly associated category and points it out (see figure 7 on page 13). If this is the same sample the speaker had as topic (both the red wire), the game is successful and the connections between category and topic are reinforced in both agents. If this is not the same sample as the speaker had as topic, the hearer adjusts its connections to the connections displayed by the speaker (adapting category and lexicon to the speaker, see figure 8 on page 14).

The guessing game will be explained more extensively in section 5; a minimal code sketch of the discrimination game follows below.
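As a way to make the discrimination game concrete, here is a minimal sketch under simplified assumptions: an agent’s categories are reduced to a list of prototype points in a toy colour space, and nearest-prototype matching stands in for the “complicated function” mentioned above. This is an illustration of the rule, not the actual Steels & Belpaeme implementation.

```python
# A minimal sketch of the discrimination game: the game succeeds when the topic's
# best-matching category differs from the category of every other sample in the
# context. Prototype points and Euclidean distance are simplifying assumptions.
from math import dist

def best_category(prototypes, sample):
    """Index of the prototype closest to the sample, or None if there are no categories."""
    if not prototypes:
        return None
    return min(range(len(prototypes)), key=lambda i: dist(prototypes[i], sample))

def discrimination_game(prototypes, context, topic_index):
    """Return (success, topic_category) for one discrimination game."""
    categories = [best_category(prototypes, sample) for sample in context]
    topic_cat = categories[topic_index]
    if topic_cat is None:
        return False, None        # no categories at all: a new one must be created
    others = [c for i, c in enumerate(categories) if i != topic_index]
    return topic_cat not in others, topic_cat

# Toy usage: two hypothetical colour prototypes, three samples, the darkest is the topic.
prototypes = [(0.1, 0.1, 0.1), (0.8, 0.8, 0.8)]
context = [(0.15, 0.1, 0.12), (0.7, 0.75, 0.8), (0.9, 0.85, 0.9)]
print(discrimination_game(prototypes, context, topic_index=0))   # (True, 0)
```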


Figure 4: Example of presented samples with available word forms, after the colors have been seen; picture inspired by a lecture by Tony Belpaeme (Amsterdam, April 2006) and then modified

Figure 5: Example of presented samples with available word forms and “mili” conveyed; picture inspired by a lecture by Tony Belpaeme (Amsterdam, April 2006) and then modified


Figure 6: Example of presented samples where the hearer looks up the conveyed word form; picture inspired by a lecture by Tony Belpaeme (Amsterdam, April 2006) and then modified

Figure 7: Example of presented samples where the hearer points to the associated color; picture inspired by a lecture by Tony Belpaeme (Amsterdam, April 2006) and then modified


Figure 8: Example of presented samples where the speaker and hearer disagree and the hearer adjusts to the speaker; picture inspired by a lecture by Tony Belpaeme (Amsterdam, April 2006) and then modified

5 A breakdown of the guessing game

5.1 Prerequisites for the guessing game

An important aspect of thoroughly analyzing and breaking down the guessing game is a detailed listing of the prerequisites for the guessing game (which include the prerequisites of the discrimination game). When doing so, it seems evident that these prerequisites are divisible into two sub-classes, namely agents and environment.

Agents : Obviously, when one wants to see agents play the guessing game, there need to be agents to play it with. As the accountability of the agents has already been discussed, this part is meant to list exactly what the agents need to be able to do.

It is required that agents have a means of sensory input (in this case perception), and for Steels & Belpaeme, “Perception starts from a spectral energy distribution S(λ) and is converted into tristimulus values in CIE L*a*b*, which is considered to be a reasonable model of human lightness perception (L*), and the opponent channels red-green (a*) and yellow-blue (b*).” (Steels & Belpaeme, 2005, p. 474). Through certain equations, functions and formulas, which I will not discuss, this model of perception works, and this choice of color coding seems to be a good one (Steels & Belpaeme, 2005). Agents also need a means to communicate with other agents. Furthermore, agents need to have a mechanism to create and adjust categories, which “is based on the generally accepted notion that colours have prototypes and a region surrounding each prototype (Rosch 1978) with fuzzy boundaries (Kay & McDaniel 1978).” (Steels & Belpaeme, 2005, p. 474). This is modeled with adaptive networks, which also have high biological plausibility (Steels & Belpaeme, 2005); a minimal sketch of such a prototype-based category follows after this list. It is also vital that agents have a mechanism to create and adjust connections and to create and adjust their lexicon during the guessing game. Also, the agent needs a mechanism to judge when colors are too similar to be a different category. In this way, it starts to look like agents always have some form of nativism, because all of the mechanisms for learning color categories are predefined in the agents.

Figure 9: Shown here is an example environment taken from Steels (2007, p. 20). Displayed are the two agents with the appropriate innate mechanisms to handle a guessing game, a simple and ‘clean’ environment, and three sample colors.

Environment : The environment partially depends on the prerequisites for the agent. One can imagine that in an environment with bad lighting the visual means of the agent need to be better than in an environment with good lighting. Furthermore, to play the discrimination game and the guessing game it is important that both agents can see the presented stimuli, see them in the same manner, and that the environment is able to present (these) random stimuli. This is accomplished through Munsell color chips (Steels & Belpaeme, 2005). Also, for the guessing game the agents need to be able to see each other, giving them the possibility to communicate (by pointing). Next, the presented stimuli need to be presented in a ‘clean’ environment, meaning no other colors can interfere with the game besides the presented ones, which need to be sufficiently different, or else categorization is senseless. It is interesting to note that it seems that no statistical regularities are needed in the environment. This is unexpected, and it will be discussed briefly later on in the thesis.
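The following is a minimal sketch of what such a prototype-based colour category could look like: a prototype point in CIE L*a*b* space with a graded, fuzzy membership around it and a small adaptation step. The Gaussian membership function, the fixed width and the learning rate are my own assumptions; the original model uses adaptive networks.

```python
# A minimal sketch of a prototype-based colour category with fuzzy boundaries,
# in the spirit of the adaptive networks described above. The membership function
# and its width are illustrative assumptions, not the Steels & Belpaeme model.
from math import dist, exp

class ColourCategory:
    def __init__(self, prototype, width=25.0):
        self.prototype = list(prototype)   # a point in CIE L*a*b* space
        self.width = width                 # controls how fuzzy the boundary is

    def membership(self, lab):
        """Graded membership in [0, 1]: 1 at the prototype, fading with distance."""
        return exp(-(dist(self.prototype, lab) / self.width) ** 2)

    def adapt(self, lab, rate=0.1):
        """Shift the prototype a little towards an observed sample (learning step)."""
        self.prototype = [p + rate * (x - p) for p, x in zip(self.prototype, lab)]

red = ColourCategory(prototype=(54.0, 81.0, 70.0))     # roughly 'red' in L*a*b*
print(round(red.membership((54.0, 81.0, 70.0)), 2))    # 1.0 at the prototype itself
print(round(red.membership((60.0, 40.0, 50.0)), 2))    # much lower further away
```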

5.2 The guessing game in steps

The following account is my reinterpretation of the guessing game described by Steels & Belpaeme (2005, p. 475-476). It is my belief that this account gives more insight into the possible outcomes of the guessing game and makes it more easily understandable for the layman. The guessing game has five different ways to end: four failures and one success. The following account is an extensive description of all five possible routes.

5.2.1 Route 1 (failure, see figure 10 on page 16):

Figure 10: Route 1

• Step 1: The speaker and the hearer are both presented the same samples (for example the previous wire example, or some blue squares as presented in figure 9 on page 15). The topic of this guessing game is chosen randomly and is only known by the speaker (for example the darkest blue square).

• Step 2: The speaker plays the discrimination game with the topic to find a discriminating category.

– Step 2.1: The speaker fails the discrimination game and therefore fails the guessing game, because either there is no discriminating category, meaning that an old category should be adapted or a new category created, or there are no categories at all, which also means a new category needs to be created.

5.2.2 Route 2 (failure, see figure 11 on page 17):

• Step 1: The same step 1 as in route 1: samples are presented and a topic is chosen (the darkest blue square).

• Step 2: The speaker plays the discrimination game with the topic to find a discriminating category.

– Step 2.2: The discrimination game succeeds, thus a discriminating category is found and the guessing game can continue.

• Step 3: Speaker looks up all the associated word forms of the topic (for example “mili”, “mala”, “molo”). (Note: although step 3 has two possible sub-steps, and thus implies more possible routes than five, these sub-steps have no influence on the failure or success of the guessing game.)

– Step 3.1: Looking up the word forms was unsuccessful and thus no word forms were yet associated with the topic; therefore a string of syllables like “mele” is created.

– Step 3.2: Looking up the word forms was successful; the word form with the highest connection (for example “mili”) is returned.

• Step 4: Speaker conveys his word form of the topic (“mele” or “mili”) to the hearer.

• Step 5: Hearer hears the word form and tries to look up this word form (“mele” or “mili”) in his lexicon.

– Step 5.1: Hearer can’t find this word form (“mele” or “mili”) in his lexicon, thus the speaker shows what the topic is (the darkest blue square) and the hearer plays a discrimination game to find a discriminating category for this topic.

∗ Step 5.1.1: The discrimination game succeeds, meaning a discriminating category is found, and the hearer associates the found category with the word form.


Figure 11: Route 2

5.2.3 Route 3 (failure, see figure 12 on page 18):

Route 3 is practically the same as route 2 except for a different step 5.1.

• Step 5: Hearer hears the word form and tries to look up this word form (“mele” or “mili”) in his lexicon.

– Step 5.1: Hearer can’t find this word form (“mele” or “mili”) in his lexicon, thus the speaker shows what the topic is (the darkest blue square) and the hearer plays a discrimination game to find a discriminating category for this topic.

∗ Step 5.1.2: The hearer fails the discrimination game because either there is no discriminating category, meaning that an old category should be adapted or a new category created, or there are no categories at all, which also means a new category needs to be created.

5.2.4 Route 4 (failure, see figure 13 on page 19):

Route 4 only differs from routes 2 and 3 after step 4, and thus the earlier steps are omitted.

• Step 5: Hearer hears the word form and tries to look up this word form (“mele” or “mili”) in his lexicon.

– Step 5.2: Hearer finds the word form (“mele” or “mili”) in his lexicon and looks up the associated category, that is, the category which the hearer associates with the word form that the speaker used for the category containing the topic (the darkest blue square).

• Step 6: Hearer points at what he thinks the speaker’s topic is, judging by the category he found associated with the conveyed word form.


Figure 12: Route 3


Figure 13: Route 4

• Step 7: Speaker observes what the hearer is pointing at:

– Step 7.1: The hearer doesn’t point to the topic, which means they have different categories associated with the conveyed word form. To reach shared categories one of the agents must adjust his knowledge; in this game that is the hearer. The hearer adapts his associated category and word form to the speaker’s.

5.2.5 Route 5 (success, see figure 14 on page 20):

Route 5 is the route where the guessing game is successful and is the same as route 4 except for step 7.

• Step 7: Speaker observes what the hearer is pointing at:

– Step 7.2: The hearer does point to the topic, which means they have the same categories associated with the conveyed word form. The category and word form connections of both the speaker and the hearer are reinforced and (a bit of) shared knowledge is achieved.


Figure 14: Route 5


5.2.6 Example of successful guessing game

To present a successful guessing game, it is clear that we should follow route 5. Consider the presented example of three blue squares.

• Step 1: The speaker and hearer both see the blue squares, possibly like the presented picture. A topic is randomly chosen.

• Step 2: The speaker plays the discrimination game with the topic (let’s take the darkest blue) and he finds that he has a category for this color.

• Step 3: The speaker looks through the associated word forms of the topic (for example “mili”, “mala”, “molo”) and finds that “mili” has the highest connection with the topic.

• Step 4: The speaker conveys this word form (“mili”) to the hearer.

• Step 5: The hearer perceives this word form, finds that he also has it in his lexicon and looks up the associated category.

• Step 6: The hearer points to the sample belonging to the category he just found, to show the speaker what his associated category is.

• Step 7: The speaker finds that the hearer is pointing to the same sample he had as topic and the guessing game is therefore successful.

5.2.7 The guessing game overall

These five possible routes result in the following overall routing schema (figure 15 on page 23); a minimal code sketch of this flow follows the list below:

• Step 1: Speaker sees the presented samples (the blue squares) and a topic (the darkest blue square) is chosen.

• Step 2: Speaker plays the discrimination game:

– Step 2.1: If unsuccessful, the guessing game FAILS and a new category is created or an old category adapted for the topic.

– Step 2.2: If successful, continue.

• Step 3: Speaker looks up the associated word forms of the topic.

– Step 3.1: If unsuccessful, a string of syllables like “mele” is created.

– Step 3.2: If successful, the string of syllables with the highest connection, like “mili”, is returned.

• Step 4: Speaker conveys his word form of the topic (“mili” or “mele”) to the hearer.

• Step 5: Hearer hears the word form.

– Step 5.1: Hearer doesn’t know the word form and the guessing game FAILS; the speaker shows the topic and the hearer plays a discrimination game with the topic.

∗ Step 5.1.1: The discrimination game succeeds and the hearer associates the found category with the word form.

∗ Step 5.1.2: The discrimination game fails and the hearer creates a new category for the topic and associates it with the word form.

– Step 5.2: Hearer does know the word form and looks up the associated category.

• Step 6: Hearer points at what he thinks the speaker’s topic is.

• Step 7: Speaker observes the pointing:

– Step 7.1: Hearer doesn’t point to the topic and the guessing game FAILS; the hearer adapts category and word form to the speaker’s.

– Step 7.2: Hearer does point to the topic and the guessing game SUCCEEDS; the category and word form connections of both the speaker and the hearer are reinforced and (a bit of) shared knowledge is achieved.
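To make this routing concrete, here is a minimal, self-contained sketch in Python. The agent internals are my own simplifications (categories reduced to nearest-prototype matching in a toy colour space, the lexicon a plain dictionary of word forms to category scores, and a made-up syllable inventory); it is an illustration of the control flow above, not the implementation used by Steels & Belpaeme (2005).

```python
# A minimal sketch of the guessing-game routing above. Categories, lexicon and the
# word-invention routine are simplified stand-ins, not the Steels & Belpaeme model.
import random
from math import dist

class Agent:
    def __init__(self):
        self.prototypes = []   # one prototype point per colour category
        self.lexicon = {}      # word form -> {category index: connection score}

    def categorize(self, sample):
        if not self.prototypes:
            return None
        return min(range(len(self.prototypes)),
                   key=lambda i: dist(self.prototypes[i], sample))

    def discriminate(self, context, topic):
        """Step 2: succeed if the topic's category differs from every other sample's."""
        cats = [self.categorize(s) for s in context]
        ok = cats[topic] is not None and cats[topic] not in cats[:topic] + cats[topic + 1:]
        if not ok:
            self.prototypes.append(list(context[topic]))   # create a category for the topic
        return ok, cats[topic]

def invent_word():
    return "".join(random.sample(["mi", "li", "ma", "la", "mo", "lo"], 2))   # e.g. "mili"

def guessing_game(speaker, hearer, context, topic):
    ok, cat = speaker.discriminate(context, topic)                   # step 2
    if not ok:
        return "route 1: FAIL, speaker lacked a discriminating category"
    known = {w: s[cat] for w, s in speaker.lexicon.items() if cat in s}   # step 3
    word = max(known, key=known.get) if known else invent_word()
    speaker.lexicon.setdefault(word, {}).setdefault(cat, 0.5)
    if word not in hearer.lexicon:                                   # step 5.1
        ok_h, cat_h = hearer.discriminate(context, topic)            # speaker reveals topic
        if not ok_h:
            cat_h = len(hearer.prototypes) - 1                       # category just created
        hearer.lexicon[word] = {cat_h: 0.5}
        return "route 2/3: FAIL, word form unknown to hearer"
    cat_h = max(hearer.lexicon[word], key=hearer.lexicon[word].get)  # step 5.2
    guess = min(range(len(context)),
                key=lambda i: dist(hearer.prototypes[cat_h], context[i]))   # step 6
    if guess != topic:                                               # step 7.1
        hearer.prototypes[cat_h] = list(context[topic])              # adapt to speaker
        return "route 4: FAIL, hearer pointed at the wrong sample"
    speaker.lexicon[word][cat] += 0.1                                # step 7.2: reinforce
    hearer.lexicon[word][cat_h] += 0.1
    return "route 5: SUCCESS, shared word form and category reinforced"

if __name__ == "__main__":
    random.seed(1)
    a, b = Agent(), Agent()
    samples = [(0.1, 0.1, 0.1), (0.5, 0.5, 0.5), (0.9, 0.9, 0.9)]   # three toy colours
    for n in range(8):
        print(n, guessing_game(a, b, samples, topic=random.randrange(3)))
```

In a longer run of this toy sketch the failure routes gradually give way to route 5 as categories and word forms become shared between the two agents, which is the qualitative behaviour the model aims for.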

5.3 Aspects of the guessing game to ponder

Important aspects in determining what the proposed theories and their implementation are worth are how good the guessing game is in comparison to using real language, and in what way it is a form of culturalism. For example, one important aspect of the guessing game to consider is that, to maximize the usefulness of the game, the colors of the proposed samples need to be fairly similar, but on the other hand not too similar. It seems evident that being able to distinguish between red and blue is useful, but more useful would be to also distinguish purple. On the other hand, categorizing every possible shade of purple defeats the purpose of categorizing at all.

5.4 The theories tested

Empiricism uses the model of individualistic learning in the discrimination game. Simply put, the rule is as follows: when succeeding, the connection between the category and the topic is reinforced. When failing, there are two options. Either there were no existing categories yet, and one is created for the topic; or no discriminating category was found for the topic, which also leads to two options: if the agent’s discriminative success is below a threshold of 95%, a new category is still created; otherwise the topic is added to the best matching network. The result of this model after 1000 games is a success rate of almost 100% per agent. However, the alikeness of the repertoires of the individual agents within a population is not 100%, thus not resulting in shared categories and thus no (shared) grounding. On top of that, the alikeness of repertoires between populations is even further from 100%.
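A minimal, runnable sketch of this individualistic-learning rule follows. Categories are reduced to bare prototype points; the 95% threshold comes from the text, but the averaging step and the way success is handled are my own simplified stand-ins for the adaptive-network update in Steels & Belpaeme (2005).

```python
# A minimal sketch of the individualistic-learning rule described above (empiricism).
from math import dist

def learn_from_discrimination(prototypes, topic, success, discriminative_success,
                              threshold=0.95, rate=0.1):
    """Update the category repertoire after one discrimination game."""
    if not prototypes or (not success and discriminative_success < threshold):
        prototypes.append(list(topic))        # create a fresh category for the topic
        return
    i = min(range(len(prototypes)), key=lambda j: dist(prototypes[j], topic))
    # reinforce on success, or fold the topic into the best-matching category on failure
    prototypes[i] = [p + rate * (t - p) for p, t in zip(prototypes[i], topic)]

repertoire = []
learn_from_discrimination(repertoire, (0.2, 0.3, 0.4), success=False, discriminative_success=0.0)
print(repertoire)   # one newly created category located at the topic
```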

As stated before, the theory of nativism requires the implementation of genetic evolution. Simply put, genetic evolution is achieved by supplying each agent with “color genes” which directly encode categorical networks. Apart from that, the same discrimination game is used. The best 50% of the agents (in the discrimination game) is kept every generation, the rest discarded. The new 50% are copies of the best 50% of the last generation, except with a small mutation (bluntly put, this is roughly analogous to the theory of evolution). The result of this model after 50 or 100 generations is an adequate repertoire of color categories which is (completely) shared within its population and thus, it seems, grounded. However, as with individualistic learning, the color categories are still not shared across populations.
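A minimal sketch of this generational scheme follows, under assumptions of my own: a genome is just a flat list of numbers, the fitness function is a toy stand-in for discriminative success, and mutation is Gaussian noise.

```python
# A minimal sketch of the 50%-selection-plus-mutation scheme described above.
# The genome encoding, the toy fitness function and the mutation size are
# illustrative assumptions, not the encoding used by Steels & Belpaeme (2005).
import random

def evolve(population, fitness, mutation=0.05):
    """One generation: keep the best half, replace the rest with mutated copies."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [[g + random.gauss(0.0, mutation) for g in genome] for genome in survivors]
    return survivors + children

def fitness(genome):
    """Toy stand-in for discriminative success: values near 0.5 score best (closer to 0)."""
    return -sum((x - 0.5) ** 2 for x in genome)

random.seed(0)
population = [[random.random() for _ in range(6)] for _ in range(10)]
print(round(max(map(fitness, population)), 3))    # best fitness before evolution
for _ in range(50):
    population = evolve(population, fitness)
print(round(max(map(fitness, population)), 3))    # best fitness after 50 generations
```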

The third theory is culturalism, which uses the guessing game (which in turn uses the discrimination game). As stated above, there are five possible outcomes to the guessing game.


Figure 15: The Full Guessing Game


If the speaker and hearer agree on the topic, the game is a success, and for both the speaker and the hearer the connection between word form and category is reinforced, while the connections between the word form and other categories are inhibited. If the speaker and hearer do not agree on the topic, if the hearer doesn’t know the communicated word form (two ways to fail here), or if the game failed somewhere in the discrimination game, the game is a failure, and for both the hearer and the speaker the connection between word form and category is inhibited. The result of this model after 10000 games is an adequate repertoire of color categories and color terms, and these categories are shared within a population and thus, it seems, grounded. However, also in this case, categories are not shared across populations.
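A minimal sketch of the lexicon update this describes follows; the score bounds and step sizes are my own assumptions, and the lexicon is a plain dictionary from word forms to category scores.

```python
# A minimal sketch of the lexicon update in the guessing game: on success the winning
# word <-> category connection is reinforced and competing connections for the same
# word are inhibited; on failure the used connection is inhibited. Step sizes and
# score bounds are illustrative assumptions.
def update_lexicon(lexicon, word, category, success, step=0.1):
    """lexicon maps word form -> dict mapping category -> connection score."""
    scores = lexicon.setdefault(word, {})
    if success:
        scores[category] = min(1.0, scores.get(category, 0.5) + step)
        for other in scores:
            if other != category:
                scores[other] = max(0.0, scores[other] - step)   # lateral inhibition
    else:
        scores[category] = max(0.0, scores.get(category, 0.5) - step)

lexicon = {"mili": {0: 0.5, 1: 0.4}}
update_lexicon(lexicon, "mili", 0, success=True)
print(lexicon)   # connection to category 0 strengthened, category 1 weakened
```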

Lastly, a combination of the last two theories was tested. This means that the color repertoire of the agents was influenced both by communicative success (and indirectly discriminative success) and by genetic encoding. The results are comparable to the results of testing culturalism alone, except that these results are a little bit better, and the combination of theories is more plausible in comparison to how humans (presumably) acquire color categories.

5.5 Results and interpretations

Concluding from Steels & Belpaeme (2005), it is clear that agents can easily learn adequate repertoires of color categories and adequate repertoires of color terms. It is also clear that it is possible to get these categories shared amongst a population and thus grounded. It seems, however, not possible to get these categories shared across populations. At first, the data used in these tests was fixed data which didn’t contain statistical structure, but in a second test real-world data was used which did contain statistical structure, which has been assumed to be necessary for shared categories with individualistic learning in a population. The results with these data were hardly different, which should raise doubt as to whether statistical structure alone is sufficient for shared categories in a population.

Considering that Belpaeme’s 2001 article was a sort of prelude to the 2005 article with Steels, it seems obvious that the results of that article were the same as, or a little less impressive than, those of the later one. This seems to be the case, except for one point. Quoting Belpaeme (2001, p. 5): “When in simulation the agents have only two color terms, they will have a term for warm-bright colors and one for dark-cool colors, which is in accordance with observations of human languages.”. The observations meant here are the observations made by Berlin & Kay (1969). This is an interesting result, which to a certain degree supports the idea of the Steels & Belpaeme approach being reasonably comparable to human color categorization. Alas, this result is diminished a bit when confronted with this: “... when the agents have three or more color terms, there is no preference for creating a category for reddish colors: the creation of categories is entirely opportunistic. For humans the story is different, when human languages have three or more color terms, there will always be a term for reddish colors.” (Belpaeme, 2001, p. 5). Nevertheless, there can be good reasons for this difference. For example, it is not so certain that red is always the third color term being created (Saunders & van Brakel, 1997). It does, however, give pause as to whether human color categorization is really being modeled.

5.6 Discussion of the concrete example

Assuming that this proposed solution really is a solution to the problem of grounding symbols (i.e. that the symbols really are being grounded), then so far this solution is only applicable to symbols directly related to the world and to simple, easily testable categories. Consequently, quoting Harnad (Open Peer Commentary: Steels & Belpaeme, 2005, p. 498): “There are many ways in which color categories are unrepresentative of categories in general.”. The most important reason is that color categories (as pointed out by Steels & Belpaeme) are partly (or completely) innate. Obviously, this is far from a solution to the whole domain of the SGP. Also, in this example approach there is not yet an agent with 1% of the grounded symbols humans have, let alone 10% or 100%, and this approach is not a solution for abstract symbols (e.g. ‘happiness’ or ‘misery’) which are partly or completely grounded in other symbols. All this gives the feeling that this is still at a “toy level”, and, quoting Harnad (2000, p. 430): “Toys are not us.”. Nevertheless these results give room for a little optimism and give a new perspective on solving symbol grounding. It is clear that these color categories did become shared within a population. I would not, as Steels does, “boldly state that the symbol grounding problem is solved, ...” (Steels, 2007, p. 26). One could, however, state that in a securely controlled environment a small part of what comprises the SGP has been solved. But we have seen such promising starts before: not long after symbol manipulation in physical symbol systems was first introduced, a few good steps up the stairs were made, but before long, it appeared to be a long ladder.


6 Discussion and conclusions

Prior to answering the first main question, I explained some different concepts. I began with explaining the relation between AI and philosophy. Then I proceeded to explain the different definitions of symbols, so that I could clarify what symbol grounding is. After these questions were answered, I continued with the first main question of this thesis: What exactly is meant by the symbol grounding problem? It turned out the real problem is how to connect abstract symbols with real-world sensory data. The second question was: How is the symbol grounding problem solvable for AI researchers? The most alluring answer to this question is to supply the agents with the means to interact with the environment (embodiment) and to create an environment in which useful interactions are possible (situatedness). The last question to answer was: Can I clarify this proposed solution by introducing proximity visualizations and process diagrams? I answered this question with the development of my proximity visualizations based on the lecture of Tony Belpaeme. As a more formal approach I designed process diagrams which represent the different stages of the guessing game, which is a main part of the proposed solution to the symbol grounding problem. The next stage of this formal approach could be to implement this as a computer simulation in different programming languages.

Hopefully, with my introduction into the realm of AI and especially symbol grounding, the reader has acquired a firmer understanding of what exactly the symbol grounding problem is and why it is relevant to philosophy but mostly to Artificial Intelligence. Also, the reader now understands why the symbol grounding problem is interesting, and that solving it could bring us a new step in intelligent agents. Consequently, the reader can see how the symbol grounding problem is related to the Chinese Room Argument, but also how the Chinese Room Argument might be wrong. Equipped with this knowledge, a proposed solution in the form of embodied and situated agents is advocated. It is then my hope that, with my proposed proximity visualizations and process diagrams, a firm understanding of the guessing game is acquired. My conclusion is that the proposed solution by Steels & Belpaeme does not nearly constitute a full solution, but it does give us some light at the end of the tunnel. Concluding from the example and its formalization, some grounding seems possible. Further research expanding to other kinds of categorization, such as language, seems necessary to achieve more progress. I expect that artificial agents will not yet rule in my lifetime.


References

Aristotle. (2007). On Interpretation. eBooks@Adelaide. [online] (cited 26 June 2009) Available from http://ebooks.adelaide.edu.au/a/aristotle/interpretation/

Belpaeme, T. (2001). Simulating the Formation of Color Categories. In B. Nebel (Ed.), Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI01) (pp. 393–400). Morgan Kaufmann.

Berlin, B., & Kay, P. (1969). Basic Color Terms: Their Universality and Evolution. Berkeley, CA: University of California Press.

Cole, D. (2008). The Chinese Room Argument. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy.

Cummins, R., & Pollock, J. (1995). Philosophy and AI: Essays at the Interface. The MIT Press.

De Jong, H. L. (2009). AI en filosofie. De Connectie, 4(1).

Frege, G. (1952/1892). On sense and reference. In P. Geach & M. Black (Eds.), Translations of the Philosophical Writings of Gottlob Frege. Oxford: Blackwell.

Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.

Harnad, S. (1994). Computation Is Just Interpretable Symbol Manipulation: Cognition Isn’t. Minds and Machines, 4, 379–390.

Harnad, S. (2000). Minds, Machines and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information, 9(4), 425–445.

Harnad, S. (2001). What’s Wrong and Right About Searle’s Chinese Room Argument? Essays on Searle’s Chinese Room Argument.

Harnad, S. (2003). The symbol grounding problem. Encyclopedia of Cognitive Science.

Hauser, L. (1997). Searle’s Chinese Box: Debunking the Chinese Room Argument. Minds and Machines, 7, 199–226.

Newell, A., & Simon, H. (1976). Computer science as empirical inquiry: symbols and search. Communications of the ACM, 19, 113–126.

Saunders, B. A. C., & van Brakel, J. (1997). Are there nontrivial constraints on colour categorization? Behavioral and Brain Sciences, 20, 167–228.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.

Steels, L. (2007). The symbol grounding problem is solved, so what’s next? In M. De Vega, G. Glennberg, & G. Graesser (Eds.), Symbols, embodiment and meaning. New Haven: Academic Press.

Steels, L., & Belpaeme, T. (2005). Coordinating Perceptually Grounded Categories through Language: A Case Study for Colour. Behavioral and Brain Sciences, 28, 469–529.

Sun, R. (2000). Symbol Grounding: A New Look At An Old Idea. Philosophical Psychology, 13, 149–172.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460.

Vogt, P. (2002). The physical symbol grounding problem. Cognitive Systems Research, 3(3), 429–457.

Wittgenstein, L. W. (1953). Philosophical Investigations. New York: Macmillan.
