
Narrative reasoning for cognitive ubiquitous robots

Lyazid Sabri, Abdelghani Chibani, Yacine Amirat, Gian Piero Zarri
LISSI Laboratory, University of Paris Est Créteil (UPEC), France
{lyazid.sabri, chibani, amirat, gian-piero.zarri}@u-pec.fr

Abstract— Symbolic spatio-temporal reasoning is one of the major research challenges in increasing the autonomy and cognitive capabilities of ubiquitous robots in ambient intelligent systems. In these environments, many situations are governed by complex processes in which multiple events must be correlated with respect to their spatial and temporal ordering to infer the right situation and take the right decision. Inferring everyday human activities is a typical example that illustrates the complexity of spatio-temporal modeling and reasoning at a high semantic level. The ambition of this paper is to explore the potential of narrative representation and reasoning for recognizing and understanding situations. The approach uses the hierarchical structures of semantic predicates and functional roles of the Narrative Knowledge Representation Language (NKRL) and allows semantic descriptions of entities, events and relationships between events. A scenario dedicated to the monitoring of elderly people in a smart home is presented and analyzed to show the feasibility of the proposed approach and its capacity to explain causal chains between observed events.

I. INTRODUCTION

There is a growing consensus in the robotics community about the need to add "deep reasoning" capabilities to the artefacts it currently produces. Such capabilities would allow these tools to go beyond their (relatively limited) present ability to take given situations into account and to deal in an "intelligent" way with complex activities like planning, monitoring, recognition of complex environments, detection of intentions/behaviours, etc. At the same time, the cognitive science community has developed a whole panoply of tools, such as advanced knowledge representation systems, powerful inference techniques and semantic-based advanced services, that could represent, at least in principle, the best solution for adding more "intelligence" to robotic techniques. Setting up "bridges" between the two communities in order to support the now emerging "cognitive robotics" field therefore appears to be both a logical and a useful step. In this respect, there is a recent growing interest in the use of high-level semantic information in robotic systems to address issues related to abstraction and reasoning. Indeed, to enhance the intelligence of robots in complex real-world scenarios, recent research trends aim to introduce high-level semantic knowledge into the different functions of the robot, such as mapping and localization [13][14][15][16], scene understanding [17], and smart and seamless interaction with humans and the environment for ambient assisted living and ubiquitous spaces [2][18][19][20][21].

Most of the proposed approaches in this context rely on OWL-based conceptual tools. For example, the authors in [12] propose a software toolbox called Cognitive Robot Abstract Machine (CRAM) for the implementation and control of complex mobile manipulation tasks. Based on Prolog, the core of CRAM allows first-order reasoning, while knowledge is represented in OWL-DL. Another framework, the Ontology-based Multi-layered Robot Knowledge Framework (OMRKF), is proposed in [6]. Its architecture is quite complex: it includes four classes of knowledge (perception, model, activity and context), each class includes three knowledge layers (high-level, middle-level and low-level), and each knowledge layer in turn has three ontology layers (meta-ontology, ontology and ontology instance). Two types of Prolog-based rules are used: rules for uni-directional reasoning within the same knowledge class, and rules for bi-directional reasoning between different knowledge classes. To solve space-sharing problems in human-robot interaction, the authors in [4] propose SpaceOntology, an OWL-DL ontology. This ontology is constructed to provide a knowledge representation for modeling space (spatial entities, hierarchical organization), spatial relations and fuzzy information. The latter is used to model the imprecision of spatial descriptions using linguistic variables such as close to, far from, etc. The work in [10] aims to endow the robot with semantic capabilities by relying on a richer multi-ontology approach where two different hierarchies are used, one for spatial knowledge (based on [11]) and one for semantic knowledge. The information in the two hierarchies is combined in different ways to provide the robot with more advanced reasoning and planning capabilities. More generally, other approaches propose semantic integration middleware dedicated to the sharing of knowledge and services between networked robots. In this context, we can first mention the PEIS-Ecology project, which is distinct in its emphasis on the fundamental issue of the heterogeneity of robot functionalities in ubiquitous environments [20]. PEIS-Ecology is a middleware for highly heterogeneous distributed systems that supports self-configuration and dynamic re-configuration, cooperative perception, and the integration of digital and physical interaction. One of the interesting functionalities of the PEIS kernel middleware is semantic matching: this algorithm determines whether, among the software components listed in the knowledge base, there exist components that match a category or functionality expressed in a request.

The Ubiquitous Robotic Companion (URC) project also aims at providing a framework for the design and implementation of a semantic-based ubiquitous robotic space (SemanticURS) [24]. The latter enables automated orchestration of networked robot tasks using the HTN SHOP2 planning system. Robot services, sensors, and devices are represented as semantic web services using the OWL-S ontology [23]. OWL-S service profiles are registered in the environmental knowledge bases (KBs), so that a robotic agent can automatically discover the required knowledge and compose a feasible service plan for the current environment. The reported experiment involves a user sitting on a sofa in a living room, carrying his RFID tag, who commands a robot located in the kitchen.

The main application domain on which we focus in this paper is human-robot interaction in ambient intelligent systems (e.g., smart homes). These systems are intended to monitor and interact with an environment populated by humans and artefacts. The concept of robot companion has recently been proposed by roboticists as a solution for providing assistive services to humans, such as health monitoring, well-being, security, etc. Improving the quality of human-robot interaction requires robots to be endowed with spatio-temporal representation and reasoning abilities about dynamic spatial and temporal phenomena. In ambient intelligent systems, there are many situations governed by complex processes in which multiple events must be correlated with respect to their spatial and temporal ordering to infer the right situation and take the right decision. Inferring everyday human activities is a typical example that illustrates the complexity of spatio-temporal modeling and reasoning at a high semantic level. The literature shows that this type of modeling and reasoning has not been sufficiently investigated in the field of cognitive robots. In [8], the authors present a mobile robot using an OWL (W3C) reasoner like RACER to infer information about an environment described with OWL-based conceptual tools. However, as the authors themselves acknowledge, the very limited and static ontology used, which contains only physical entities like areas, corridors and living rooms, makes it difficult for the system to represent and reason about dynamic events and situations, such as recognizing a person who grabs a door handle in order to open this door. The work by Capezio et al. [9] goes beyond these limitations by including in the ontology dynamic entities intended for situation modeling and situation assessment. In their system, one can represent both general notions like ToBeSomewhere and domain-specific notions like TurnOnStove. These notions, in a typical Description Logics (DL) style, are included in a TBox while their instances are stored in the corresponding ABox. Gi Hyun Lim et al. propose a situation reasoning technique for robotic applications [22]. This technique is based on an integrated approach that combines low-level sensory patterns and high-level semantic knowledge using a Hidden Markov Model (HMM) and the Service-oriented Context Ontology (SCOn). The SCOn ontology allows the description of two kinds of contexts: normal contexts, which concern situations that proceed as expected according to a service scenario, and abnormal contexts, which are exceptions to the normal scenario. Normal contexts are related temporally by sequential relationships among themselves, while abnormal contexts have only sequential relationships with normal contexts. Contexts are described in the ontology using two blocks of concepts: a meta-level block for the description of abstract types of concepts and relationships among them, including context, object, action, space and human; and a service-domain-specific block that allows the definition of specific types of the meta-level context models corresponding to a service domain. Integrated temporal reasoning using HMMs enables a service robot to understand human-augmented situations immediately whenever a critical situation happens. Ontology instance reasoning is used by the robot to infer both normal and abnormal contexts, which are then evaluated in the HMM process to infer the situation from the input sensory signals. Experiments in a teaching-service domain demonstrate that HMM-based situation reasoning allows a service robot to successfully detect normal and abnormal contexts and select appropriate actions.

The ambition of this paper is to explore the potential of narrative representation and reasoning in the field of cognitive robots. To endow robots with the ability to recognize and understand situations, we aim to propose a semantic formal basis for representing and reasoning about space, change and occurrences within ambient intelligent environments. The narrative-based approach we explore allows semantic descriptions of entities, events and relationships between events. Based on the use of hierarchical structures of semantic predicates and functional roles of the Narrative Knowledge Representation Language (NKRL) [1], this approach consists in analyzing complex events and behaviors through the use of extended production rules and narrative (spatio-temporal) reasoning. It can provide a better understanding of the situations under examination by enabling a robot to dynamically detect changes (context changes) and adjust/adapt its behavior to changes that can occur anywhere and anytime.

For the validation of the proposed approach, a concrete scenario dedicated to the monitoring of elderly people in a smart home is proposed. The scenario illustrates the capabilities of the proposed approach in terms of implicit knowledge inference. The paper is structured as follows. Section II presents the foundations of the symbolic representation and reasoning approach. An example of application, consisting in a situation recognition scenario, is presented and analyzed in Section III. Finally, conclusions and future work are given in the last section.

II. UNDERSTANDING CONTEXT USING NARRATIVE REASONING

Ambient environments are characterized by an increasing number of distributed entities (static entities such as tables and chairs, or dynamic entities such as events/actions) and by complex, structured contexts/situations. Understanding a situation such as cooking, or predicting an emergency situation such as a fall, is the main challenge of intelligent systems. The difficulty increases when some events are not detected, for example because of sensor or system communication failures, leading to ambiguities of interpretation and to less reliable understanding. Consequently, providing additional knowledge in the absence of explicit information requires: i) a reasoning process based on a chronological and semantic analysis of past and ongoing events and ii) narrative inference procedures able to obtain a plausible answer from a knowledge base. To build such environments, a highly expressive knowledge representation model is required to describe complex situations and to take into account both the static and dynamic entities of the environment. Furthermore, a suitable reasoning engine should be used to find a correspondence between the low-level features initially associated with the entities and the high-level model descriptions included in the world representation used for critical situation recognition and behavior. In fact, the system must be able to i) analyze in real time the stream of information provided by the entities, to understand the situation and consequently to infer the right reaction to be performed, and ii) infer all possible causes, consequences (and suggestions for actions) related to these events. Building a correspondence between the low-level features and the high-level conceptual descriptions requires an abstract model taking into account the static (physical) and dynamic (events and situations) characteristics, roles, properties, etc. of entities. Moreover, ensuring the homogeneity of the knowledge base (KB) and classifying each entity according to its role makes it possible to easily aggregate spatio-temporal events into coherent facts shared between the reactive rules and the narrative reasoning.

A. Symbolic Model Representation

One of the main advantages of NKRL is the possibility to build up a semantic model of the ambient environment using the following main components, which can be associated with each other in order to define very complex scenarios:

1) Ontology of concepts (HClass), denoted by Ω. It corresponds, essentially, to a standard ontology of concepts (C) according to the traditional, binary meaning of this term. Concepts are inserted into a generalization/specialization hierarchy, often, but not necessarily, reduced to a tree whose nodes are the HClass data structures. All concepts (C) are structured according to a set of axioms of the form H ⊑ W, where H and W are concepts. An individual (V) is characterized by the fact of being associated, often in an implicit way, with a spatio-temporal dimension. Individuals are written in upper case, like DAVID or LIVING_ROOM_1, while concepts are written in lower case. In NKRL, the symbolic labels used to denote both concepts and individuals must include at least an underscore symbol. For instance, individual_person ⊑ human_being denotes that individual_person is a specialization of human_being (see the sketch after this list).

2) Ontology of events (HTemp), denoted by Ψ. It is a hierarchy of templates used to represent dynamic knowledge, such as moving a physical object. These templates are based on the notion of "semantic predicate" and are organized according to an n-ary structure that must be understood as the formal representation of generic classes of elementary events.
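To make the generalization/specialization mechanism of HClass more concrete, the following minimal Python sketch is our own illustration (the names ConceptHierarchy and subsumes are assumptions and not part of the NKRL toolset); it stores each concept with its direct parent and tests the subsumption relation H ⊑ W used above.

# Minimal sketch of an HClass-like concept hierarchy (illustrative names only).
class ConceptHierarchy:
    def __init__(self):
        self.parent = {}                    # concept -> direct parent (None for the root)

    def add(self, concept, parent=None):
        self.parent[concept] = parent

    def subsumes(self, general, specific):
        """Return True when specific ⊑ general, i.e. specific specializes general."""
        node = specific
        while node is not None:
            if node == general:
                return True
            node = self.parent.get(node)
        return False

hclass = ConceptHierarchy()
hclass.add("entity")
hclass.add("human_being", parent="entity")
hclass.add("individual_person", parent="human_being")
hclass.add("location", parent="entity")
hclass.add("living_room", parent="location")

assert hclass.subsumes("human_being", "individual_person")    # individual_person ⊑ human_being
assert not hclass.subsumes("location", "individual_person")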

For example, let us assume that an RFID tag is worn by an elderly person named DAVID, and that the environment is equipped with an RFID reader. The latter can then detect the presence of DAVID in the living room. The conceptual representation of such an event in NKRL terms is as follows:

aal2.c2)  EXIST  SUBJ  DAVID : (LIVING_ROOM_1)
          date-1: 17/4/2011/21:30
          date-2:

where EXIST is the semantic predicate denoting the presence of the human DAVID, an individual that is an instance of a concept like human_being (∈ C ⊑ Ω); LIVING_ROOM_1 (∈ V, i.e. an individual whose concept belongs to Ω) denotes the location (space); and the temporal attribute date-1 carries the timestamp of the information captured in the real world. Using the date-2 attribute allows us to represent events spanning a certain time interval. The NKRL representation of the above event corresponds to the general n-ary structure [1] described formally as:

(Li (Pj (R1 a1) (R2 a2) ... (Rn an)))    (1)

where Li is a generic symbolic label identifying a given template, Pj is a semantic predicate pertaining to the set {MOVE, PRODUCE, RECEIVE, EXPERIENCE, BEHAVE, OWN, EXIST} and Rk, k ∈ {1, ..., 7}, is a generic functional role (see [25] for more details on this topic) pertaining to the set {SUBJ(ect), OBJ(ect), SOURCE, BEN(e)F(iciary), MODAL(ity), TOPIC, CONTEXT}. The corresponding arguments ak, k ∈ {1, ..., n}, are associated with the predicate through one of the functional roles. An argument ak can consist of a simple concept or individual (an element of HClass), like geographical_location or LIVING_ROOM_1, or of a structured association of several concepts/individuals. More precisely, the coded event aal2.c2 supplied above is an instantiation of the HTemp template Exist:HumanPresentAutonomously, see Table I. In this template (and in templates in general), the arguments ak of the predicate, see Eq. 1, are represented by symbolic variables. The constraints on these variables are expressed in turn as concepts or combinations of concepts using the terms of the HClass ontology. In fact, DAVID is a human_being (see the constraint on var1, Table I) and LIVING_ROOM_1 is a location (var2). Predicates are primitive entities that do not bear a "meaning" in themselves but take on this meaning only when they are inserted in a complex conceptual structure like that represented in Eq. 1 above. The notion of role is distinct from that of semantic predicate and is similar, in a sense, to the notions of "semantic relationships", "conceptual relations", "thematic roles", etc. used in Computational Linguistics [25]. In the reasoning process, all the explicit variables, identified by conceptual labels in the vari style, will be replaced by concepts Ci or individuals Vi of Ω compatible with the original constraints imposed on the variables.
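As a purely illustrative companion to Eq. 1 (a sketch of our own in Python; the names Occurrence, roles, date_1, etc. are assumptions, not the NKRL implementation), a predicative occurrence such as aal2.c2 can be stored as a predicate plus a mapping from functional roles to arguments, together with the two temporal attributes:

# Illustrative data structure for an NKRL-style predicative occurrence (cf. Eq. 1).
from dataclasses import dataclass
from typing import Any, Dict, Optional

PREDICATES = {"MOVE", "PRODUCE", "RECEIVE", "EXPERIENCE", "BEHAVE", "OWN", "EXIST"}
ROLES = {"SUBJ", "OBJ", "SOURCE", "BENF", "MODAL", "TOPIC", "CONTEXT"}

@dataclass
class Occurrence:
    label: str                     # Li, e.g. "aal2.c2"
    predicate: str                 # Pj, one of the seven NKRL predicates
    roles: Dict[str, Any]          # Rk -> ak (concept, individual or structured filler)
    date_1: Optional[str] = None   # timestamp of the captured information
    date_2: Optional[str] = None   # optional end of a time interval

    def __post_init__(self):
        assert self.predicate in PREDICATES
        assert set(self.roles) <= ROLES

# Encoding of aal2.c2: the presence of DAVID detected in LIVING_ROOM_1.
aal2_c2 = Occurrence(
    label="aal2.c2",
    predicate="EXIST",
    roles={"SUBJ": ("DAVID", "LIVING_ROOM_1")},   # subject together with its location
    date_1="17/4/2011/21:30",
)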

TABLE I
PARTIAL REPRESENTATION OF AN "EXIST" TYPE OF TEMPLATE

name:      Exist:HumanPresentAutonomously
predicate: EXIST
  SUBJ     var1 : (var2)
  !(OBJ)
  [SOURCE  var3 : [(var4)]]
  !(BENF)
  [MODAL   var5]
  [TOPIC   var6]
  [CONTEXT var7]

var1 = <human_being_or_social_body>
var2 = <location> | <pseudo_sortal_geographical>
var3 = <human_being_or_social_body>
var4 = <location> | <pseudo_sortal_geographical>
var5 = <economic/financial_entity> | <financing_process> | <mutual_relationship> |
       <positive_income> | <purchase> | <qualifier> | <reified_event> |
       <resource> | <spatio/temporal_property> | <symbolic_label>
var6 = <sortal_concept>
var7 = <situation> | <symbolic_label>
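Continuing the illustrative sketches above (again our own code, not NKRL's; the constraint names follow Table I), the variable constraints of Exist:HumanPresentAutonomously could be checked against the HClass hierarchy roughly as follows when the template is instantiated into an occurrence like aal2.c2:

# Illustrative constraint check for the Exist:HumanPresentAutonomously template (Table I);
# reuses the ConceptHierarchy instance 'hclass' from the earlier sketch.
EXIST_CONSTRAINTS = {
    "var1": ["human_being_or_social_body"],
    "var2": ["location", "pseudo_sortal_geographical"],
}

def satisfies(hclass, var, filler_concept):
    """A filler is admissible if its concept specializes one of the admissible concepts."""
    return any(hclass.subsumes(c, filler_concept) for c in EXIST_CONSTRAINTS[var])

hclass.add("human_being_or_social_body", parent="entity")
hclass.add("human_being", parent="human_being_or_social_body")   # refine the earlier toy hierarchy
assert satisfies(hclass, "var1", "individual_person")   # DAVID (an individual_person) can fill var1
assert satisfies(hclass, "var2", "living_room")         # LIVING_ROOM_1 can fill var2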

Fig. 1. Inference procedures. The nodes denote events and the arcs denote relationships between events. Inference procedures explore all vari in each Yi.

B. Symbolic Reasoning

In addition to the symbolic representation obtained by making use of the HClass and HTemp ontologies, NKRL provides deep inference procedures that allow us to obtain plausible answers from a set of factual elementary events by finding implicit relationships among these events. All inference rules follow the model represented by Eq. 2 [1]:

X iff Y1 and Y2 ... Yn    (2)

where X is the situation/context to infer and Y1, ..., Yn represent the reasoning steps (encoded in the form of partially instantiated HTemp templates). All the rules needed for the experiment are stored in the knowledge base of the robot. The inference procedures can be explained with the following example. Let us suppose that the following events have occurred in the environment: an individual pushes the electrical switch located in the corridor, and the air conditioner stops in the living room. electrical_switch is an NKRL concept subsumed by the more general concept technical/electrical_tool, see the example in Table II. The robot then tries to understand the situation (X = why did the air conditioner stop working?) in order to propose its assistance (raise an alarm, open windows if necessary, send a message to a technician, etc.) according to the situation inferred (accident, power failure, etc.). The idle status of the air conditioner is represented in the NKRL context as follows:

aal2.c5)  PREDICATE OWN
          SUBJ   AIR_CONDITIONER_1 : (LIVING_ROOM_1)
          OBJ    property
          TOPIC  idle
          date-1: 20/02/2011/08:01:05
          date-2:

Let us assume now that the robot is able to detect the situation represented by aal2.c5 but that it does not know the specific "cause" of this event, which, a priori, could also be produced by the failure of a component of AIR_CONDITIONER_1. Making use of the inference rule represented in Table II, it tries to see whether, among the events registered in the (NKRL) knowledge base of the system, it can find some form of stopping/halting event that concerns an electrical device strictly connected with AIR_CONDITIONER_1: it would then be reasonable to suppose that the stop of the conditioner is "caused" by the stop of this connected device. Concretely, the activation of the rule in Table II, making use of the events registered in Table III, shows that the detection of a power failure in the air conditioner can be reduced to: i) verifying that a stopping procedure has concerned an electrical switch, ii) verifying that the electrical switch is really in the "turned off" position, and iii) verifying that a semantic relationship exists between the two electrical devices (air conditioner and electrical switch); stopping the electrical switch can then be the cause of the idle situation of AIR_CONDITIONER_1. Of course, other "hypotheses" can be imagined to explain the situation depicted in aal2.c5: each "causal" inference rule that can be envisaged in an NKRL context must always be conceived as a member of a "family" of rules, each of which can supply a "reasonable" explanation of the original event. Operationally speaking, during a (successful) retrieval operation, the variables vari are replaced by concepts or individuals according to the associated constraints. At the end of the procedure, var1 has been replaced by INDIVIDUAL_PERSON_2 (an instance of human_being), var2 by ELECTRICAL_SWITCH_1 and var4 by the concept coupled_with, which is a specific term of the constraint relational_property. var3 has been bound to the value AIR_CONDITIONER_1 when the original event aal2.c5 was taken into account. The robot successfully constructs the path (1, 2, 4) (Fig. 2).

TABLE II
RULE CONCERNING THE POWER FAILURE OF THE AIR CONDITIONER DEVICE

Y1 = PREDICATE PRODUCE
     SUBJ  var1
     OBJ   activity_stop
     TOPIC (SPECIF technical/industrial_procedure var2)
     var1 = <human_being>
     var2 = <technical/electrical_tool>
--------------------------
Y2 = PREDICATE OWN
     SUBJ  var2
     OBJ   property
     TOPIC (SPECIF device_property idle)
     var2 = <technical/electrical_tool>
--------------------------
Y3 = PREDICATE OWN
     SUBJ  var2
     OBJ   property
     TOPIC (SPECIF var4 var3)
     var4 = <spatial_relationship, relational_property>
     var3 ≠ var2
     var3 = <technical/electrical_tool>

Fig. 2. Reasoning steps (see the path denoted by (1, 2, 4)).

TABLE III
SOME KNOWLEDGE EVENTS STORED IN THE KNOWLEDGE BASE

event = PREDICATE PRODUCE
        SUBJ  INDIVIDUAL_PERSON : CORRIDOR_1
        OBJ   activity_stop
        TOPIC (SPECIF switch_startup ELECTRICAL_SWITCH_1)
        date-1: 17/4/2011/08:00
        date-2:

event = PREDICATE OWN
        SUBJ  ELECTRICAL_SWITCH_1 : LIVING_ROOM_1
        OBJ   property
        TOPIC idle
        date-1: 17/4/2011/08:00
        date-2:

event = PREDICATE OWN
        SUBJ  AIR_CONDITIONER_1 : LIVING_ROOM_1
        OBJ   property
        TOPIC (SPECIF coupled_with ELECTRICAL_SWITCH_1)
        date-1: 31/10/2010/10:00
        date-2:

To build a robust system, multiple events must be correlated with respect to their spatial and temporal ordering. To this end, NKRL uses second-order structures called binding occurrences, such as (GOAL, CAUSE).
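To convey the flavor of the evaluation of Eq. 2 in procedural terms, the sketch below is a deliberately simplified illustration of our own (it is not the NKRL inference engine, and the flattened event encoding is an assumption): each reasoning step Yi is a pattern whose variables are strings beginning with "var", and the rule succeeds when every step can be matched against the stored events under a single consistent binding, as in the air-conditioner example above.

# Simplified illustration of evaluating a rule "X iff Y1 and Y2 ... Yn" (our own sketch).

def is_var(term):
    return isinstance(term, str) and term.startswith("var")

def match(pattern, fact, binding):
    """Try to unify a flat pattern {role: term} with a fact; return the extended binding or None."""
    binding = dict(binding)
    for role, term in pattern.items():
        value = fact.get(role)
        if value is None:
            return None
        if is_var(term):
            if term in binding and binding[term] != value:
                return None
            binding[term] = value
        elif term != value:
            return None
    return binding

def evaluate(steps, facts, binding=None):
    """Depth-first search for one binding that satisfies every reasoning step Yi."""
    binding = binding or {}
    if not steps:
        return binding
    first, rest = steps[0], steps[1:]
    for fact in facts:
        extended = match(first, fact, binding)
        if extended is not None:
            result = evaluate(rest, facts, extended)
            if result is not None:
                return result
    return None

# Events of Table III, flattened for this illustration.
facts = [
    {"PRED": "PRODUCE", "SUBJ": "INDIVIDUAL_PERSON", "OBJ": "activity_stop", "TOPIC": "ELECTRICAL_SWITCH_1"},
    {"PRED": "OWN", "SUBJ": "ELECTRICAL_SWITCH_1", "OBJ": "property", "TOPIC": "idle"},
    {"PRED": "OWN", "SUBJ": "AIR_CONDITIONER_1", "OBJ": "property", "TOPIC": "coupled_with", "WITH": "ELECTRICAL_SWITCH_1"},
]

# Reasoning steps inspired by Table II (Y1: something stopped var2; Y2: var2 is idle;
# Y3: var3 is coupled with var2).
steps = [
    {"PRED": "PRODUCE", "OBJ": "activity_stop", "TOPIC": "var2"},
    {"PRED": "OWN", "SUBJ": "var2", "TOPIC": "idle"},
    {"PRED": "OWN", "SUBJ": "var3", "TOPIC": "coupled_with", "WITH": "var2"},
]

print(evaluate(steps, facts))
# -> {'var2': 'ELECTRICAL_SWITCH_1', 'var3': 'AIR_CONDITIONER_1'}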

Remark: A specific template of the OWN type is employed to denote the properties of a (non-human) entity, while a specific template of the EXPERIENCE type is used to represent situations where a given entity, human or not, is subject to some kind of experience/happening/event. For example:

aal1)  PREDICATE OWN
       SUBJ   DAVID : (LIVING_ROOM_1)
       OBJ    property
       TOPIC  lying_position
       date-1: 17/4/2011/20:10
       date-2:

aal2)  PREDICATE EXPERIENCE
       SUBJ   ROBOT_1
       OBJ    ALARM_SITUATION_1 : LIVING_ROOM_1
       TOPIC  (FALL_1 hypothetical DAVID)
       date-1: 17/4/2011/20:11
       date-2:
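Reusing the illustrative Occurrence structure sketched in Section II-A (still our own assumption, not the paper's implementation), these two occurrences could be encoded along the following lines:

# OWN describes a property of an entity; EXPERIENCE describes something that happens to it.
aal1 = Occurrence(
    label="aal1",
    predicate="OWN",
    roles={"SUBJ": ("DAVID", "LIVING_ROOM_1"), "OBJ": "property", "TOPIC": "lying_position"},
    date_1="17/4/2011/20:10",
)
aal2 = Occurrence(
    label="aal2",
    predicate="EXPERIENCE",
    roles={"SUBJ": "ROBOT_1",
           "OBJ": ("ALARM_SITUATION_1", "LIVING_ROOM_1"),
           "TOPIC": ("FALL_1", "hypothetical", "DAVID")},
    date_1="17/4/2011/20:11",
)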

III. SCENARIO

A. General description of the scenario

The following scenario is dedicated to the monitoring of elderly people in a smart home. It demonstrates the feasibility of the proposed narrative reasoning. In this scenario, it is assumed that an elderly person lives alone. He/she wants to prepare a meal. The robot then tries to understand the corresponding context by performing the following requests: is she/he eating? Is she/he preparing coffee, tea, a meal, a sandwich? etc. Depending on the context inferred, the robot will remind the person to take her/his medicine or to eat a fruit because, for example, she/he ate two hours ago. Preparing a meal and preparing a sandwich are considered as two different contexts; the first one requires using a cooker, an oven or a microwave oven, and kitchen utensils like a pot, a baking dish, etc. The environment is instrumented with RFID tags to identify all the objects deployed in it. It is also equipped with pressure sensors placed under the sofa and chairs, which send a signal when the elderly person sits on or leaves these pieces of furniture. We assume that the robot companion is able to perform self-localization, obstacle avoidance and navigation using existing techniques. It is equipped with its own camera and microphone, allowing the robot to forward, if needed, an image of the elderly person at any place and any time, and to use the microphone to communicate with her/him. The proposed approach is being implemented on the Pekee II mobile robot shown in Fig. 6. This robot is equipped with a Vision 3D+ system, a pan-tilt camera, an RFID reader and an ambient sound sensor.

B. Design and modeling

The robot must be able to infer that the elderly person is preparing a meal from an incoming stream of sensory data. After analyzing current and historical events (Fig. 3), the system behaves or acts in the smart house based on a set of rules, using hypothesis and transformation rules, see Figs. 4-5. The robot can then infer, for example, that the elderly person is preparing a meal. According to the NKRL formalism, a situation should be transformed into a query of the following form:

X1 = PREDICATE PRODUCE
     SUBJ  HUMAN_BEING_1
     OBJ   planning_meal

where planning_meal ⊑ personal_activity ⊑ domestic_activity ⊑ activity ⊑ Ω.

To automatically build up a context for a piece of information retrieved within an NKRL knowledge base (Fig. 3), a transformation rule with triggering pattern (X1) must be found. Using this rule, a context can be automatically constructed. A causal explanation of the triggering event is found by retrieving, from the knowledge base, information in the style of: i) someone is in the kitchen (Y1); ii) she/he manipulates an entity (E1) which is used for cooking (Y2); iii) she/he is near the stove (Y3); and iv) the entity (E1) is turned on (Y4). A detailed formal representation of this rule is given in Fig. 5. The robot cannot assert that the knowledge base includes any information about the use of some type of device, but it can certify that the individual person moves the baking dish near the oven. The reply can be supplied thanks to another semantically related event stored in the knowledge base. Consequently, the robot infers that the baking dish is being used. The absence of a direct answer to Y4 (Fig. 4) indicates that the system cannot assert that the oven is running, but it can certify that the robot heard the noise of the oven. This information is inferred from the related events using the transformation rule.
Remark: Using a microphone to certify that the oven is working should be avoided. This knowledge could be collected using another kind of sensor, such as a temperature sensor installed near the oven.
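As a rough procedural illustration of such a transformation (our own sketch; the predicates, the flattened event encoding and the single rewriting rule below are assumptions and do not reproduce the rules of Figs. 4-5), a transformation rule can be viewed as rewriting a query that has no direct answer into a semantically related query that the knowledge base can answer:

# Illustrative query transformation: when a pattern has no direct answer in the KB,
# rewrite it into a related pattern and search again (inspired by the working-noise example).

kb = [
    {"PRED": "EXIST", "SUBJ": "INDIVIDUAL_PERSON_1", "PLACE": "KITCHEN_1"},
    {"PRED": "MOVE", "SUBJ": "INDIVIDUAL_PERSON_1", "OBJ": "BAKING_DISH_1", "NEAR": "OVEN_1"},
    {"PRED": "EXPERIENCE", "SUBJ": "ROBOT_1", "OBJ": "working_noise", "TOPIC": "OVEN_1"},
]

def answered(query, kb):
    """Direct retrieval: does some stored event contain every role/value of the query?"""
    return any(all(fact.get(k) == v for k, v in query.items()) for fact in kb)

# Y4: "the oven is turned on" has no direct answer, so it is transformed into
# "the robot heard the working noise of the oven".
transformations = {
    ("OWN", "running"): {"PRED": "EXPERIENCE", "OBJ": "working_noise", "TOPIC": "OVEN_1"},
}

y4 = {"PRED": "OWN", "SUBJ": "OVEN_1", "TOPIC": "running"}
if not answered(y4, kb):
    rewritten = transformations[(y4["PRED"], y4["TOPIC"])]
    print("indirect evidence found:", answered(rewritten, kb))   # True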

IV. CONCLUSION AND FUTURE WORK

In this paper, a narrative-based approach has been proposed for representing and reasoning about space, changes and occurrences in ambient intelligent environments. This approach has been shown to be useful for endowing ubiquitous robots with the ability to recognize and understand situations at a high semantic level. The reasoning process is mainly achieved by linking factual elementary events using semantic correlations between events over time. Ongoing work concerns the actual implementation and validation of the narrative engine on the Pekee II mobile robot. Future work will consist in using, in certain cases, a dialog between the human and the robot to obtain additional facts, in order to avoid ambiguities and therefore to optimize the reasoning process of the narrative engine for a better understanding of the situations under examination.

Fig. 3. NKRL representation of elementary events.

Fig. 4. Planning meal "Transformation Rule".

Fig. 5. Working noise "Transformation Rule".


Fig. 6. The Pekee II Mobile robot from Wany Robotics

REFERENCES

[1] G.P. Zarri, "Representation and Management of Narrative Information: Theoretical Principles and Implementation", Advanced Information and Knowledge Processing Series, Springer, 2009.

[2] L. Sabri, A. Chibani, Y. Amirat, G.P. Zarri, "Semantic reasoning framework to supervise and manage contexts and objects in pervasive computing environments", IEEE Workshops of the International Conference on Advanced Information Networking and Applications (AINA 2011), Biopolis, Singapore, March 22-25, 2011, pp. 47-52.

[3] I. Infantino, C. Lodato, S. Lopes, F. Vella, "Human-humanoid interaction by an intentional system", 8th IEEE-RAS International Conference on Humanoid Robots, Dec. 2008, pp. 573-578.

[4] L. Belouaer, M. Bouzid, A. Mouaddib, "Ontology Based Spatial Planning for Human-Robot Interaction", 17th International Symposium on Temporal Representation and Reasoning (TIME), Sept. 6-8, 2010, pp. 103-110.

[5] M. Brenner, "Situation-Aware Interpretation, Planning and Execution of User Commands by Autonomous Robots", 16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2007), Aug. 26-29, 2007, pp. 540-545.

[6] I.H. Suh, G.H. Lim, W. Hwang, H. Suh, J-H. Choi, Y-T. Park, "Ontology-based multi-layered robot knowledge framework (OMRKF) for robot intelligence", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2007, pp. 429-436.

[7] R. Zollner, M. Pardowitz, S. Knoop, R. Dillmann, "Towards Cognitive Robots: Building Hierarchical Task Representations of Manipulations from Human Demonstration", Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2005), pp. 1535-1540.

[8] O.M. Mozos, P. Jensfelt, H. Zender, G.J. Kruijff, W. Burgard, "An integrated system for conceptual spatial representations of indoor environments for mobile robots", in Z. Zivkovic, J. Kosecka (eds.), Proceedings of the IROS 2007 Workshop: From Sensors to Human Spatial Concepts (FS2HSC), pp. 25-32.

[9] F. Capezio, F. Mastrogiovanni, A. Sgorbissa, R. Zaccaria, "Towards a Cognitive Architecture for Mobile Robots in Intelligent Buildings", in Proceedings of the 2007 ICRA Workshop on Semantic Information in Robotics (SIIR-07), co-located with ICRA-07, Rome, Italy, April 10, 2007.

[10] C. Galindo, J-A. Fernandez-Madrigal, J. Gonzalez, A. Saffiotti, "Robot Task Planning Using Semantic Maps", Robotics and Autonomous Systems (Semantic Knowledge in Robotics), Volume 56, Issue 11, November 2008, pp. 955-966.

[11] P. Beeson, M. MacMahon, J. Modayil, A. Murarka, B. Kuipers, B. Stankiewicz, "Integrating Multiple Representations of Spatial Knowledge for Mapping, Navigation, and Communication", AAAI Spring Symposium Series, Interaction Challenges for Intelligent Assistants, 2007, AAAI Technical Report SS-07-04.

[12] M. Beetz, L. Mosenlechner, M. Tenorth, "CRAM: A Cognitive Robot Abstract Machine for Everyday Manipulation in Human Environments", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 18-22, 2010, pp. 1012-1017.

[13] C. Nieto-Granda, J.G. Rogers, A.J.B. Trevor and H.I. Christensen, "Semantic map partitioning in indoor environments using regional analysis", in Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), Taipei, Taiwan, October 2010, pp. 1451-1456.

[14] A. Nuchter and J. Hertzberg, "Towards semantic maps for mobile robots", Robotics and Autonomous Systems (RAS), 56(11):915-926, 2008.

[15] B. Suresh, C. Case, A. Coates and A.Y. Ng, "Autonomous sign reading for semantic mapping", in Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, May 9-13, 2011.

[16] O.M. Mozos, R. Triebel, P. Jensfelt, A. Rottmann and W. Burgard, "Supervised semantic labeling of places using information extracted from laser and vision sensor data", Robotics and Autonomous Systems, 55(5):391-402, 2007.

[17] K. Zhou, K.M. Varadarajan, A. Richtsfeld, M. Zillich and M. Vincze, "From holistic scene understanding to semantic visual perception: A vision system for mobile robots", in Proceedings of the Workshop on Semantic Perception, Mapping and Exploration (SPME), IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, May 9-13, 2011.

[18] A. Yachir, K. Tari, A. Chibani, Y. Amirat, "Toward an Automatic Approach for Ubiquitous Robotic Services Composition", in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, Sept. 22-26, 2008, pp. 3717-3724.

[19] A. Yachir, Y. Amirat, K. Tari, A. Chibani, "QoS Based Framework for Ubiquitous Robotic Services Composition", in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, Oct. 11-15, 2009, pp. 2019-2026.

[20] A. Saffiotti, M. Broxvall, M. Gritti, K. LeBlanc, R. Lundh, J. Rashid, B.S. Seo and Y.J. Cho, "The PEIS-Ecology Project: vision and results", Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Nice, France, 2008, pp. 2329-2335.

[21] Y.G. Ha, J.C. Sohn, Y.J. Cho, "Service-oriented integration of networked robots with ubiquitous sensors and devices using the semantic Web services technology", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), August 2-6, 2005, pp. 3947-3952.

[22] G.H. Lim, K.W. Kim, B. Chung, I.H. Suh, H. Suh and M. Kim, "Service-Oriented Context Reasoning Incorporating Patterns and Knowledge for Understanding Human-augmented Situations", 19th IEEE International Symposium on Robot and Human Interactive Communication, Principe di Piemonte, Viareggio, Italy, Sept. 12-15, 2010, pp. 144-150.

[23] http://www.w3.org/Submission/OWL-S/.

[24] Y.-G. Ha, J.-C. Sohn, Y.-J. Cho, H. Yoon, "Towards Ubiquitous Robotic Companion: Design and Implementation of Ubiquitous Robotic Service Framework", ETRI Journal, 27(6):666-676, 2005.

[25] G.P. Zarri, "Differentiating Between 'Functional' and 'Semantic' Roles in a High-Level Conceptual Data Modeling Language", in Proceedings of the Twenty-Fourth International Florida Artificial Intelligence Research Society Conference (FLAIRS-24), R.C. Murray and P.M. McCarthy, eds., Menlo Park, CA: AAAI Press, 2011.