Design an Expressive Intelligent Agent to Interact With
Humans
M. Manuruzzaman, L. Alam, M. Moshiul Hoque*and M. S. Arefin
Department of Computer Science & Engineering, CUET, Chittagong, Bangladesh
*E-mail: [email protected]
Abstract: An expressive agent can provide social support and wellness coaching to isolated older adults. It can
provide companionship dialogue, game co-play, and social activity tracking. Such an agent is valuable because it
can play a vital role in reducing human loneliness and sadness and keeping people happy. Abundant HCI studies
address the challenges that must be considered in designing a virtual interactive agent. In this paper, we present
an intelligent interactive virtual agent that can display seven basic facial expressions while a user interacts with
it via text input. The proposed agent is capable of communicating with people by showing the basic facial
expressions of joy, sadness, fear, surprise, anger, contempt, and disgust, which can help reduce users' loneliness
for a while. Evaluation results, both subjective and quantitative, reveal that the performance of the proposed
agent is quite satisfactory.
Keywords: Human computer interaction, expressive agent, parsing, emotional expression.
1. INTRODUCTION
An Expressive Intelligent Agent (EIA) is a lifelike virtual human capable of carrying on conversations with
humans by both understanding and producing speech, hand gestures, and emotional facial expressions. Typically
such agents appear online as personalized services, navigation guides, and online assistants. An intelligent agent
may be used as a chatting partner in conversation scenarios, making the conversation more interesting and
enjoyable for older people. Many people experience loneliness and depression in older age, either as a result of
living alone or due to a lack of close family ties and reduced connections with their culture of origin, which
results in an inability to actively participate in community activities [1]. To overcome loneliness, almost half
of older people spend their leisure time watching television as their main form of company. Many others spend
time with domesticated pets. Pets commonly provide their owners with benefits, such as companionship for
elderly adults who do not have adequate social interaction with other people. The most popular pets are likely
dogs and cats. But there are problems too: taking care of pets becomes more difficult for aged people. Different
types of difficulties and health problems accompany loneliness. Several studies have shown that a lack of social
support can have unpleasant effects on the health and well-being of older adults [2, 3], and older adults who face
extreme isolation face significantly higher risks of mortality than their connected peers [4]. A recent
meta-analysis estimates that 7-17% of older adults face social isolation and 40% experience loneliness [5].
This phenomenon raises a motivating question: why not place an intelligent virtual agent in older people's
homes to reduce their loneliness and make them happy? Such an agent can act as a virtual friend of older people;
it can be a friend or a companion in their daily life. There is growing evidence that expressive displays can
impact people's emotions [6]. A virtual agent can provide social support and wellness coaching to isolated older
adults, in their homes, for months or years. As the companion agent will be always on and always available, it
can provide a range of support interactions including companionship dialogue, game co-play, exercise and
wellness promotion, social activity tracking and promotion, facilitating connections with family and friends, and
memory improvement tasks, among others. Therefore, a virtual agent can play a vital role in reducing human
loneliness and sadness and keeping people happy.
It is quite challenging to make older people familiar with technology that can support them mentally. If
assistive technology does not meet the individual needs and preferences of the person, it may be ineffective or
may even cause additional confusion or distress. Many issues related to expressive entertaining agents remain
unsolved. In this work, we propose an expressive agent that can be used in a chatting scenario. When a person
communicates with another human via online messaging or chatting, this agent can display emotional
expressions to his/her partner based on their text input. The proposed agent can interact with elderly people
expressively and may help to reduce their loneliness. Psychological evidence reveals that a virtual agent can
reduce loneliness, make people feel better, and create feelings of enjoyment [7]. It is quite a challenging task in
human-computer interaction (HCI) to design an intelligent agent that can display such expressive or emotive
behaviors.
Page 123
P-ID 80 International Conference on Physics Sustainable Development & Technology (ICPSDT-2015)(August 19-20, 2015)
Department of Physics, CUET.
2. RELATED WORK
Technologies designed specifically to provide companionship for older people are an emerging area of research.
There are many intelligent agents that can display emotional expressions and be used as assistive technology.
These agents can be physical (i.e., robots) or virtual (i.e., avatars). Leite et al. [8] developed a robotic
companion designed for game co-play. Wada et al. [9] have examined non-conversational therapeutic robots, and
Klamer et al. [10] have examined the health benefits of in-home robots. Faces of robots require different ways to
embody their expressions [1], and various facial robot systems have been developed. EveR-4 H33, an android
head system with 33 degrees of freedom, shows appropriate emotions through facial expressions [11]. To find
the appropriate emotions for the facial expressions of EveR-4 H33, the authors made preliminary evaluations of
facial expressions by applying theories of basic emotions. The main limitation of EveR-4 H33 is that people
recognized surprise and sadness but did not recognize disgust and fear well, compared with the average.
Wearable and stationary devices that promote multimedia sharing with family and friends have also been
designed to improve the social connectedness of isolated adults [12]. Other works focused on the expression of
emotions solely through animation [13, 14, 15, 16]. These agents expressed only six expressions and did not
have lower eyelids, which caused some difficulty with some of the expressions.
Virtual agents have a human appearance and have to respond appropriately, whether with an expression or with
information. A preliminary work describes the role of the face and human emotion displays in communication
and reviews previous research on emotional display in avatar environments [17]. The authors designed an
approach for emotion extraction from text-only input and reported results from informal user studies with the
system. This avatar reduces the need for users to switch between typing messages and controlling their avatar
representation. As a consequence, the user is able to maintain more natural communication with other people in
the collaborative virtual environment. Another agent, known as a virtual messenger, was developed to
communicate with humans and display facial expressions [18]. Kramer et al. [19] explored design issues of the
conversational virtual human as a socially acceptable autonomous assistive system for elderly and cognitively
impaired users. When designing agents to promote social connectedness, intelligent agents should focus on
autonomous and conversational capabilities. Moreover, the agents should adapt to the changing nature of the
socio-emotional relationship users develop with them. In our work, we have designed an agent that can converse
with users with various expressive capabilities.
3. PROPOSED SYSTEM ARCHITECTURE
The main objective of our work is to develop a software agent that can express emotions during human-human
interaction scenarios, such as when people are communicating with each other through live messaging. For this
purpose, we have developed a framework. Fig. 1 shows an abstract view of this framework. In the framework,
we assume that a user A gives input via text entry, and the corresponding expressions for his/her entry are
displayed to the receiving end user B, and vice versa. In our current implementation, we take an expressive text
as input and generate an animation corresponding to this input. This setup consists of several hardware devices
such as a keyboard, display device, graphics card, and general-purpose computer. In addition, to develop the
system we need to implement several software modules. Fig. 2 illustrates a schematic representation of these
modules with their information flows.
Fig. 1: Abstract view of the proposed framework (user A and user B exchange text input and expression data).
Fig. 2: Schematic diagram of the proposed modules: Expressive Input Sentence → Lexical Analyzer (token) →
Lexicon → Parser (with Rule Generator) → Parse Tree → Expression Generator → Analytical Output Generator.
3.1 Expressive Input Sentence
An expressive sentence is any kind of sentence that can express the feelings of one's mind. In our work, we deal
with simple sentences only. An expressive sentence represents the emotional status of the human mind. For
example, "Today I am happy".
3.2 Lexical Analyzer
The lexical analyzer is a program module that accepts a sentence as input and separates it into individual words,
usually called tokens [20]. Tokens are stored in a list for further processing. For example, the lexical analyzer
output of "Today I am happy" is "Today", "I", "am", and "happy".
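The lexical-analysis step described above can be sketched as a few lines of Python. This is an illustrative assumption about the tokenizer, not the paper's actual implementation; the function name and punctuation handling are our own.

```python
# Minimal sketch of the lexical analyzer: split a simple sentence
# into individual word tokens. The regular expression keeps only
# alphabetic words (an illustrative simplification).
import re

def tokenize(sentence):
    """Return the list of word tokens in a simple sentence."""
    return re.findall(r"[A-Za-z']+", sentence)

tokens = tokenize("Today I am happy")
print(tokens)  # ['Today', 'I', 'am', 'happy']
```

The resulting token list is what the lexicon module receives in the next step.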
3.3 Lexicon
A lexicon is a resource with an associated software environment database that permits access to its contents.
The database may be custom-designed for the lexical information, or a general-purpose database into which
lexical information has been entered [20]. If the structure of the sentence does not match according to the
parser, the agent generates an error expression and no tree is generated. In our project we created a database
with attributes for all the parts of speech. The input of the lexicon is the tokens and the output is the tagged
words. After matching the tokens we find:
Today → Adverb; I → Pronoun; am → Verb; happy → Adjective
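The lexicon lookup can be sketched as a simple dictionary match. The tiny dictionary below covers only the running example; the paper's actual database holds attributes for all parts of speech, and the error-expression behavior is modeled here as an exception (our assumption).

```python
# Illustrative lexicon: map each token to its part-of-speech tag.
# Unknown words trigger the "error expression" path described in
# the text, modeled here as a ValueError.
LEXICON = {
    "today": "Adverb",
    "i": "Pronoun",
    "am": "Verb",
    "happy": "Adjective",
}

def tag(tokens):
    """Return (word, tag) pairs for each token, or raise on unknown words."""
    try:
        return [(t, LEXICON[t.lower()]) for t in tokens]
    except KeyError as missing:
        raise ValueError(f"error expression: unknown word {missing}")

print(tag(["Today", "I", "am", "happy"]))
# [('Today', 'Adverb'), ('I', 'Pronoun'), ('am', 'Verb'), ('happy', 'Adjective')]
```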
3.4 Parser
The goal of syntactic analysis, or parsing, is to determine whether the input text string is a sentence in the
given language. The result of the analysis contains a description of the syntactic structure of the sentence. In this
paper, we considered a context-free grammar (CFG) [21]. A fragment of the CFG for simple sentences is
illustrated in Table 1. For example, the rules used to parse the sentence "Today I am happy" are S → NP VP;
NP → Adv P; VP → V Adj.
Table 1: A small fragment of the CFG for simple sentences.
3.5 Rule Generator
The grammar or rule generator consists of rules defining what are legal input streams to the parser [22]. The
grammar is described in a simple language. In our work, we keep some predefined rules that are used frequently
to represent simple sentences, for example Noun+Verb+Noun+Adjective, Noun+Verb+Adjective,
Noun+Verb+Noun+Adjective+Adverb, and so on.
3.6 Parse Tree Generation
A parse tree is a tree that represents the syntactic structure of a string according to some formal grammar [22].
First we read the CFG and then choose the appropriate rules to parse the sentence. For example, for the input
sentence "Today I am happy", the appropriate parse tree is shown in Fig. 3.
[S [NP [Adv Today] [P I]] [VP [V am] [Adj happy]]]
Fig. 3: Parse tree for "Today I am happy."
Fig. 4: Textures for facial expression.
No. Rules for Simple Sentence        No. Rules for Simple Sentence
01  S → NP VP                        11  VP → AdjP Verb
02  NP → Adv P                       12  Adj → Adj Adv
03  NP → Noun                        13  AdjP → Adj Adv
04  NP → Noun Det                    14  VP → Verb Det
05  NP → P                           15  AdjP → Adj N
06  NP → NP Adj                      16  Verb → am | is | are | fly | read
07  Noun → Sagor | Sun | Bird       17  Adv → so | very
08  PN → He | She | They | His      18  Adj → happy | sad
09  VP → NP Verb                     19  Det → a | an | the
10  VP → Verb                        20  NP → N Adj
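A toy top-down parser over a subset of the grammar above, enough to derive the parse tree of Fig. 3 from the tagged words, can be sketched as follows. The rule set, rule-selection strategy (naive first match), and tag shorthand (Adv, P, V, Adj, as in Fig. 3) are illustrative assumptions; the paper's parser and rule generator are more general.

```python
# A tiny recursive-descent parser over a fragment of the CFG in
# Table 1. Non-terminals expand via RULES; terminals match the
# part-of-speech tag of the next word.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Adv", "P"], ["Noun"]],
    "VP": [["V", "Adj"], ["Verb"]],
}

def parse(symbol, tags, pos=0):
    """Try to derive `symbol` from tags[pos:]; return (tree, next_pos) or None."""
    if symbol in RULES:                      # non-terminal: try each production
        for production in RULES[symbol]:
            children, cursor = [], pos
            for part in production:
                result = parse(part, tags, cursor)
                if result is None:
                    break                    # this production fails; try next
                subtree, cursor = result
                children.append(subtree)
            else:
                return [symbol] + children, cursor
        return None
    if pos < len(tags) and tags[pos][1] == symbol:   # terminal: match the tag
        return (symbol, tags[pos][0]), pos + 1
    return None

tagged = [("Today", "Adv"), ("I", "P"), ("am", "V"), ("happy", "Adj")]
tree, end = parse("S", tagged)
print(tree)
# ['S', ['NP', ('Adv', 'Today'), ('P', 'I')], ['VP', ('V', 'am'), ('Adj', 'happy')]]
```

A caller should additionally check that `end == len(tagged)`, i.e. the whole sentence was consumed; otherwise the sentence does not match the grammar and the error expression is generated.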
3.7 Expression Generation
There are many essential parts of the human face that play a vital role in facial expression; these are illustrated
in Fig. 4. By manipulating these face textures we can generate different expressions. The characteristics of the
seven basic facial expressions are given below. For our work, we used the FaceGen Modeller 3.5 software [23].
With the help of that software we can create different human faces. It is a graphic, MPEG-4 standard compatible
facial animation engine. It specifies a set of Face Animation Parameters (FAPs), each corresponding to a
particular facial action deforming a face model in its neutral state. A particular facial action sequence is
generated by deforming the face model, in its neutral state, according to the specified FAP values, which
indicate the magnitude of the corresponding action at the corresponding time instant.
Then the model is rendered onto the screen. It is an open-source 3D facial animation framework written in the
C programming language, with a new WebGL implementation. The C framework allows efficient rendering of a
3D face model on OpenGL-enabled systems. It has a modular design: each module provides one of the several
common facilities needed to create a real-time facial animation application. For facial animation, the dynamic
3D coordinates of passive markers are used to create our face articulator model and to drive the expressive face
directly. Using these principles we can create the seven expressions: joy, sadness, surprise, contempt, fear,
anger, and disgust. To represent each expression of the human face we used 100 frames. We make an image
stream and pass the frames one by one to create the animation; we pass approximately 23 frames to make one
second of animation. Fig. 5 shows some sample frames for the joy expression.
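As a back-of-envelope check of the timing above: 100 frames streamed at roughly 23 frames per second gives an expression animation of a little over 4 seconds, consistent with the animation times later reported in Table 5. The constant names below are our own.

```python
# Animation timing implied by the text: 100 frames per expression,
# streamed at ~23 frames per second.
FRAMES_PER_EXPRESSION = 100
FRAMES_PER_SECOND = 23

duration = FRAMES_PER_EXPRESSION / FRAMES_PER_SECOND   # seconds per expression
frame_delay = 1.0 / FRAMES_PER_SECOND                  # seconds between frames

print(f"{duration:.2f} s per expression, {frame_delay * 1000:.1f} ms per frame")
# 4.35 s per expression, 43.5 ms per frame
```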
Fig. 5: A small fragment of sample frames for generating the expression, joy.
3.8 Analytical Output Generation
In this module we represent the level of expression graphically. For example, the expression labels of the
sentences "I am happy." and "I am very happy." are normally different; the second sentence usually represents
a higher level of expression than the first. So we classify the basic facial expressions into two classes, positive
and negative, within the range -5 to +5. The expression labels of different emotive sentences are illustrated in
Table 2.
Table 2: Expression labels of simple sentences.
Positive Expression                  Negative Expression
Input Type      Expression Label     Input Type      Expression Label
Normal          +1                   Normal          -1
So              +2                   So              -2
Very            +3                   Very            -3
Extreme         +4                   Extreme         -4
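The mapping in Table 2 can be sketched as a small scoring function: an intensifier found in the sentence selects the magnitude, and the expression category selects the sign. The function and set names are illustrative assumptions.

```python
# Signed expression label in [-5, +5], following Table 2: the
# intensifier gives the magnitude, the expression category the sign.
INTENSITY = {"normal": 1, "so": 2, "very": 3, "extreme": 4}
NEGATIVE = {"sad", "fear", "anger", "contempt", "disgust"}

def expression_label(expression, intensifier="normal"):
    """Return the signed expression label for an expression category."""
    level = INTENSITY.get(intensifier, 1)
    return -level if expression in NEGATIVE else level

print(expression_label("joy", "very"))   # +3, e.g. "I am very happy"
print(expression_label("sad"))           # -1, e.g. "I am sad"
```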
4. EXPERIMENTS
In order to evaluate the proposed expressive agent, we have performed two experiments.
4.1 Experiment 1: To Evaluate the Functionality of the Intelligent Agent
In this experiment, we simulate the various functionalities of the expressive intelligent agent. We compare the
expressions and measure the correctness of the expressive agent against various expressive sentences.
4.1.1 Procedure
To run the agent, we first start the .exe file of the proposed software agent. Then the main interface of the
application appears. Next, we give some input sentences in the input text area. Each input sentence is separated
into a set of tokens and matched against the database, which is a collection of words categorized by parts of
speech. After the part-of-speech tagging, the parser picks the appropriate expressive word and sends it to the
expression generator. According to the expressive word, the expression generator generates the expressive
behaviors in the animation box, and the corresponding expressive curve is shown in the chart area.
4.1.2 Measures
We measure each facial expression with respect to some parameters such as muscles around the eyes tightened,
cheeks raised, lip corners raised, and so on.
Measure 1: Expression comparison
First we generate a character that has some expressive capabilities. From this expressive character we further
generate the seven basic facial expressions. Behind every expression there are some common criteria. We follow
those characteristics to make the expressions illustrated in Table 3.
Table 3: Comparison between normal face and expressive face (face images omitted; the characteristic facial
actions of each expression are listed).
Joy: muscles around the eyes tightened; cheeks raised; lip corners raised diagonally.
Anger: eyebrows pulled down; upper lids pulled up; lower lids pulled up; margins of lips rolled in.
Contempt: eyes neutral; lip corner pulled up and back on one side only.
Disgust: eyebrows pulled down; upper lip pulled up; lips loose.
Fear: eyebrows pulled up and together; mouth stretched.
Sad: inner corners of eyebrows raised; eyelids loose; lip corners pulled down.
Surprise: entire eyebrow pulled up; eyelids pulled up; mouth hangs open.
Measure 2: Expression evaluation
An input sentence is parsed and the corresponding expression is produced for each expressive word. Fig. 6
shows a typical example of generating the joy expression for the input sentence "I am feeling so delight". The
agent first parses the sentence [shown in the middle of Fig. 6] and picks the word "delight" as the expressive
word. Based on the expressive word it generates the joy expression. It also shows the expression label as a graph
[on the right-hand side of Fig. 6].
Fig. 6: Output animation of the joy expression for the sentence, 'I am feeling so delight'.
From the above figure we can see the joy expression. We have tested the other expressions (anger, fear,
contempt, surprise, disgust, and sad) as well. Fig. 7 illustrates another example, the sad expression for the
sentence, 'I am so sorry'.
Fig. 7: Output animation of the sad expression for the sentence, 'I am so sorry'.
Measure 3: Accuracy
We tested several simple sentences for producing expressions. Most of these sentences were collected from
English books [24] and newspapers; a few were generated by the authors. Table 4 shows some example
sentences with their expression category, expression label, and recognition result.
Table 4: Expression types of some simple sentences.
Input Sentences Expression Category Expression Label Remarks
He is feeling delight Joy +1 Correct
It is very irritating Anger -3 Correct
My uncle is dead Sad -1 Correct
He was held in contempt Contempt -1 Correct
You are a naughty boy Contempt -1 Correct
They are so crazy Anger -2 Correct
I got so tired of playing football Disgust -2 Correct
The movie is very horrific Fear -3 Correct
She is extremely surprised Surprise +4 Correct
Wild animal is good Anger -1 Wrong
We tested about 225 sentences containing 50 different expressive words. The length of the sentences varies from
3 to 7 words. Out of the 50 expressive words, 47 were recognized. Thus, the agent produces correct expressions
about 94% of the time for expressive words written by users. In a few cases, the system did not generate the
appropriate expression. For example, in the sentence 'wild animal is good', the system recognized 'wild' as an
adjective and generated an angry expression, which is not correct.
Measure 4: Response time
The total response time is divided into two parts: parse time and animation time. The animation time depends on
the expression label. The total response time is represented in Fig. 8. As a typical example, Table 5 shows the
response time for some input sentences.
Fig. 8: Response time scale.
The total response time may be calculated from the following equation:

T = (t1 - t0) + (t2 - t1) = t2 - t0,   (1)

where the first term is the parse time and the second term is the animation time.
Here, T = total time; t0 = starting time of parsing; t1 = ending time of parsing / starting time of animation;
t2 = ending time of animation.
Table 5: Response times of some simple sentences.
Input Sentence        Parse Time (sec)    Animation Time (sec)
I am happy 00.0070004 02.8971657
He is feeling delight 00.0130008 02.9851707
They are unhappy 00.0120007 04.5362595
It is very irritating 00.0120007 05.3893060
My uncle is dead 00.0160009 04.5252588
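The split in Eq. (1) can be measured by recording three timestamps around the two stages: t0 before parsing, t1 between parsing and animation, and t2 after animation. The sketch below uses stand-in functions for the agent's real parser and expression generator; all names are illustrative assumptions.

```python
# Measure parse time (t1 - t0) and animation time (t2 - t1) around
# two pipeline stages, as in Eq. (1).
import time

def respond(sentence, parse_stage, animate_stage):
    t0 = time.perf_counter()
    tree = parse_stage(sentence)          # parsing stage
    t1 = time.perf_counter()
    animate_stage(tree)                   # animation stage
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1               # (parse time, animation time)

parse_t, anim_t = respond("I am happy",
                          parse_stage=lambda s: s.split(),
                          animate_stage=lambda tree: time.sleep(0.05))
total = parse_t + anim_t                  # T = (t1 - t0) + (t2 - t1) = t2 - t0
print(f"parse: {parse_t:.4f} s, animation: {anim_t:.4f} s, total: {total:.4f} s")
```

As Table 5 shows, the parse time is several orders of magnitude smaller than the animation time, so the total response time is dominated by the animation stage.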
4.2 Experiment 2: To Evaluate the Expressiveness of Intelligent Agent in HCI Scenarios
In this experiment, we measure users' impressions of the proposed agent while interacting with it. We recruited
20 participants of different ages to evaluate our system: two faculty members, two sub-assistant engineers, six
housewives, five businesspersons, two school teachers, and three government employees. They were randomly
selected, and all of them are residents of Chittagong division. The mean age of the participants is 39.45 years
(SD = 13.14).
4.2.1 Procedures
After providing the agent to the participants, we gave them instructions on how to use it, its functionality, and
the motivation of our project. They interacted with the agent for varying periods; the average interaction time
was about 20 minutes. During this time they tried to share their feelings with the agent. After the interaction,
they left their feedback, which was quite satisfactory. Fig. 9 shows an interaction scenario with the agent.
Fig. 9: Some snapshots of the experiment while participants interact with the intelligent agent.
4.2.2 Measurements
We asked participants to fill out a questionnaire after the interactions were complete. The measurement was a
simple rating on a 7-point Likert scale (1 to 7). The questionnaire consists of 5 items, which are listed in Table 6.
Table 6: Questionnaire for subjective evaluation.
Items Questionnaire
Q1 Did you feel the intelligence of the agent when interacting with it?
Q2 Did you think that the agent is interesting?
Q3 Do you think that the agent delivered its expressions successfully?
Q4 Were the expressions produced by the agent appropriate?
Q5 Rate your overall impression of the agent.
4.2.3 Results
We conducted an analysis of variance (ANOVA) on the participants' ratings. Fig. 10 shows the result of the
post-questionnaire analysis of the subjective evaluation. We observed a total of 100 (20×5) questionnaire
responses across all participants. No significant differences were found among the questionnaire responses
(F(4, 99) = 1.50, p = 0.20, η2 = 1.5) of the participants. Fig. 10 indicates the mean and standard error values for
the questionnaire items. Although satisfaction depends on individual characteristics, the results indicate that
participants were quite satisfied interacting with the proposed agent. In this work, we tried to develop a
framework for older people that can create the feeling of a virtual friend. The agent can display various
expressions in response to the participants' emotional state as expressed through text.
Fig. 10: Results of the post-questionnaire for the subjective evaluation. Error bars indicate standard errors.
5. CONCLUSION
There are many ways to display facial expressions through an intelligent agent. The generation of realistic facial
animation must reproduce the contextual variability due to the reciprocal influence of articulator movements,
which is extremely complex and difficult to model. Moreover, emotions are quite important in human
interpersonal relations and individual development. Our proposed framework may not express a vast range of
expressions, but the expressions made by the agent are quite satisfactory. It can provide a positive feeling when
people interact with the agent, and thus has some emotional benefits. People can express their feelings and
emotions through the agent, and so find in it a virtual friend as a means of communication. The experimental
results, including the subjective and quantitative experiments, show that the proposed agent functions quite well.
The proposed system can be helpful while humans interact via live messaging, providing momentary mental
satisfaction that in turn reduces their loneliness. Adding sound corresponding to every expression and full-body
animation would make the interface more effective and communicative; this is left for future work.
REFERENCES
[1] H. S. Lee, J. W. Park and M. J. Chung, A Linear Affect-Expression Space Model and Control Points for
Mascot-Type Facial Robots, IEEE Transactions on Robotics, pp. 863-873, 2007.
[2] S. Bassuk, T. Glass, L. Berkman, Social Disengagement and Incident Cognitive Decline in
Community-Dwelling Elderly Persons, Annals of Internal Medicine, vol. 131, no. 3, pp. 165-173, 1999.
[3] L. P. Vardoulakis, L. Ring, B. Barry, C. Sidner, and T. Bickmore, Designing Relational Agents as Long Term
Social Companions for Older Adults, In Proc. Int. Conf. on Intelligent Virtual Agents, pp. 289-302, Santa Cruz,
CA, USA, 2012.
[4] L. Berkman, L. Syme, Social Networks, Host Resistance, and Mortality: A Nine-year Follow-up Study of
Alameda County Residents, American Journal of Epidemiology, vol. 109, no. 2, pp. 186-204, 1979.
[5] A. Dickens, S. Richards, C. Greaves, J. Campbell, Interventions Targeting Social Isolation in Older People:
A Systematic Review, BMC Public Health, vol. 11, no. 1, p. 647, 2011.
[6] R. Yaghoubzadeh, M. Kramer, K. Pitsch, S. Kopp, Virtual Agents as Daily Assistants for Elderly or
Cognitively Impaired People, Intelligent Virtual Agents, LNCS, vol. 8108, pp. 79-91, 2013.
[7] R. Lazlo, B. Barbara, T. Kathleen, B. Timothy, Addressing Loneliness and Isolation in Older Adults,
Archives of Internal Medicine, vol. 172, pp. 1078-1083.
[8] I. Leite, S. Mascarenhas, A. Pereira, C. Martinho, R. Prada, A. Paiva, Why Can't We Be Friends? An
Empathic Game Companion for Long-Term Interaction, In Proc. 10th International Conference on Intelligent
Virtual Agents, Philadelphia, PA, Springer-Verlag, pp. 315-321, 2010.
[9] S. Wada, T. Shibata, Robot Therapy in a Care House - Change of Relationship among the Residents and Seal
Robot during a 2-month Long Study, In Proc. 16th IEEE International Symposium on Robot and Human
Interactive Communication, pp. 107-112, 2007.
[10] T. Klamer, S. B. Allouch, Acceptance and Use of a Social Robot by Elderly Users in a Domestic
Environment, Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2010.
[11] H. S. Ahn, D. W. Lee, D. Choi, D. Lee, M. Hur and H. Lee, Appropriate Emotions for Facial Expressions of
33-DOFs Android Head EveR-4 H33, Advanced Telecommunications Research Institute, pp. 1115-1120, 2012.
[12] C.-Y. Chen, M. Kobayashi, L. Oh, ShareComp: Sharing for Companionship, In CHI '05 Extended Abstracts
on Human Factors in Computing Systems, Portland, OR, USA, ACM, pp. 2074-2078, 2005.
[13] C. Bartneck, Affective Expression of Machines, In CHI '01 Extended Abstracts on Human Factors in
Computing Systems, pp. 255-260, New York, USA, 2001.
[14] C. Breazeal and P. Fitzpatrick, Social Implication of Animate Vision, In Proc. AAAI Fall Symposium,
Socially Intelligent Agents - The Human in the Loop, 2000.
[15] C. Breazeal and R. Brooks, Robot Emotion: A Functional Perspective, in Who Needs Emotions: The Brain
Meets the Robot, Oxford University Press, 2004.
[16] U. Hess, The Communication of Emotion, in Emotions, Qualia, and Consciousness, Singapore, 2001.
[17] T. Ribeiro, A. Paiva, The Illusion of Robotic Life: Principles and Practices of Animation for Robots,
March 8, 2012, Boston, Massachusetts, USA.
[18] Intelligent Expressive Avatar, Available: http://www.hitl.washington.edu/r-98-32/ [Online].
[19] K. Kramer, R. Yaghoubzadeh, S. Kopp, K. Pitsch, A Conversational Virtual Human as Autonomous
Assistant for Elderly and Cognitively Impaired Users? Social Acceptability and Design Considerations, LNI,
pp. 1105-1119, 2013.
[20] L. Riccardo, C. Peiro, A Facial Animation Framework with Emotive/Expressive Capabilities, In IADIS
International Conference Interfaces and Human Computer Interaction, pp. 49-53, Italy, 2011.
[21] A. Trujillo, Translation Engines: Techniques for Machine Translation, Springer-Verlag, London, 1992.
[22] D. Arnold, L. Balkan, S. Meijer, R. L. Humphreys, and L. Sadler, Machine Translation: An Introductory
Guide, NCC Blackwell Ltd., London, 1994.
[23] MakeHuman, http://www.makehuman.org/.
[24] S. Greenbaum, The Oxford English Grammar, Clarendon Press, 1996.