arXiv:1702.03246v1 [cs.GR] 10 Feb 2017


Towards Developing an Easy-To-Use Scripting Environment for Animating Virtual Characters

Christos Mousas
Department of Computer Science
Southern Illinois University
Carbondale, IL 62901, USA
Email: [email protected]

Abstract—This paper presents the three scripting commands and main functionalities of a novel character animation environment called CHASE. CHASE was developed to enable inexperienced programmers, animators, artists, and students to animate virtual reality characters in meaningful ways. This is achieved by scripting simple commands within CHASE. The commands identified, which are associated with simple parameters, are responsible for generating a number of predefined motions and actions of a character. Hence, the virtual character is able to animate within a virtual environment and to interact with tasks located within it. An additional functionality of CHASE provides the ability to assign multiple tasks to a character, allowing the user to generate scenario-related animated sequences. Moreover, since multiple characters may require simultaneous animation, the ability to script actions of different characters at the same time is also provided.

Keywords—character animation, scripting language, scripting actions, interactive narrative

I. INTRODUCTION

Character animation can be characterized as a complex and time-consuming process. This is especially true when animating virtual characters based on key-frame techniques, as this requires prior knowledge of software solutions. Moreover, artistic skills are also required, since the virtual character should animate as naturally as possible.

In order to avoid time-consuming processes in animating virtual characters, motion capture technologies now provide high-quality, realistic animated sequences, since the required motions can be captured directly from real humans in the act of performing.

The advantages of motion capture techniques are numerous, especially in the entertainment industry. However, the captured motion data itself is not always usable, since virtual characters should be able to perform tasks in which the required constraints are not always fulfilled. Thus, methodologies that retarget [1], warp [2], blend [3][4], splice [5][6][7], interpolate [8][9], etc., the motion data have become available to help animators create the required motion sequences. In addition to motion synthesis techniques that are based on software solutions, animating a virtual character through programming is also difficult. This is especially true in cases where animators, artists, and students do not have the required programming skills. Hence, animating virtual characters in order to visualize ideas and generate simple scenarios in which virtual characters evolve can be a very complex process.

Based on the aforementioned difficulties that inexperienced programmers can face, this paper introduces a simple, easy-to-use scripting environment for animating virtual characters, which is based on a small number of scripting commands. The scripting environment presented (see Fig. 1), which is called CHASE, provides a user with the ability to script the actions of a character, as well as possible interactions between a character and objects located within the virtual environment.

In order to implement CHASE, the following parts were developed. Firstly, the basic actions that a character should be able to perform were identified, and the basic scripting commands were generated. Secondly, a number of parameters were defined that allow the user not only to synthesize the required motion of a character, but also to gain a higher level of control over each of the character's actions. By using a rich set of motions that a character can perform, and by associating these actions with specified keywords, a motion dataset is created.

The input commands are handled by a number of background algorithms, which are responsible for retrieving the desired motions and synthesizing the requested actions of the character. During the application's runtime, CHASE synthesizes the requested motion of the character and displays the final animated sequence.

The remainder of this paper is organized as follows. Section 2 covers related work in character animation by presenting previous solutions for animating virtual characters that are based on interactive or automatic techniques. Previously developed scripting environments for the animation of virtual characters are also presented and discussed. A system overview of CHASE is presented in Section 3. The scripting commands, possible parameters, and additional functionalities that have been developed for CHASE are presented in Section 4. The results, which indicate the potential use of a scripting environment by users who are inexperienced in programming, are presented in Section 5. Finally, conclusions are drawn and potential future work is discussed in Section 6.

II. RELATED WORK

This section presents work that is related to the presented solution. Specifically, the following paragraphs present methodologies that use different input devices or easily specified constraints for animating virtual characters, systems that provide a user with the ability to synthesize task-based or scenario-related animated sequences, and previously proposed scripting environments for character animation. Finally, the advantages provided by CHASE compared to previous solutions are presented.

Fig. 1: The interface of CHASE.

Interactive character control can be classified according to the input device that is used for the character animation process [10]. In general, the character controller can be a standard input device, such as a keyboard or a joystick [11]. Alternatively, it can be more specialized, such as text input [12][13][14], prosodic features of speech [15], drag-and-drop systems in which motion sequences are placed onto a timeline [16], sketch-based interfaces [17], or the body of a user [18][19] whose motion is captured by motion capture technologies. Each of the previously mentioned methodologies has advantages and disadvantages. The choice of the most appropriate input device depends on the actual control of the character's motion that the user requires.

A variety of methodologies for animating a virtual character based on easily specified constraints have also been examined. These solutions are based on the motion graph literature [8][20][21], on simple footprints that a character should follow [22][23], on space-time constraints as proposed in [24], or on statistical models [25][26][27] that are responsible for retrieving and synthesizing a character's motion. However, even if easily specified constraints enable a user to animate a character, different frameworks that permit either the interactive or the automatic animation of a character have also been developed. In [28], a task-based character animation system, the user is able to animate a character through a number of screen buttons and make it interact with objects located within the virtual environment. Other methods, such as [29][30][31], which can be characterized as scenario-based character animation systems, provide automatic synthesis of a character's motion based on AI techniques.

In the past, researchers developed scripting languages and systems in the field of embodied conversational agents. XSAMPL3D [32], AniLan [33], AnimalScript [34], SMIL-Agent [35], and many others enable a user to script a character's actions based only on predefined commands. Among the best-known markup languages for scripting the animation of virtual characters are the Multi-Modal Presentation Markup Language [36], the Character Markup Language [37], the Multimodal Utterance Representation Markup Language [32], the Avatar Markup Language [38], the Rich Representation Language [33], the Behavior Markup Language [39], and the Player Markup Language [40], which were developed for controlling the behavior of virtual characters.

Various solutions that are similar to the presented methodology were previously proposed for animating virtual characters based on scripting commands. StoryBoard [41] provides the ability to integrate a scripting language into an interactive character animation framework. Improv [42], another framework with which to create real-time behavior-based animated actors, enables a user to script a specific action of a character based on simple behavior commands. The STEP [38] framework provides a user with the ability to script actions such as gestures and postures. This methodology, which is based on the formal semantics of dynamic logic, provides a solid semantic foundation that enriches the number of actions that a character can perform.

The majority of previously developed scripting environments and markup languages provide only specific actions that a character can perform. An additional limitation is the inability of such systems to enhance a character's synthesized motion. Therefore, a user always receives a lower level of control over the synthesized motion of a character. Moreover, in cases in which a user must generate an animated sequence in which many characters take part, a great deal of effort is required due to the difficulty of scripting multiple actions for multiple characters. This is especially true for users who wish to generate a sequence with animated characters but are inexperienced in programming.

These difficulties are overcome in the presented scripting environment. Firstly, instead of enabling a user to script an animated character based on XML-related formats, a simplified scripting environment with an associated scripting language, which is based on only three commands, is introduced. Secondly, since a character should be able to perform concurrent actions, a simple extension of the basic commands handles this. Therefore, the user achieves a higher level of control over a character's actions. Moreover, in cases where a user must animate more than one character simultaneously, one can specify the character that should perform the requested action by attaching an additional method to the existing command. Finally, in cases where a user must animate a character in a multitask scenario, the system will synthesize the requested tasks automatically once the user specifies the row in which each task should appear.

We assume that the described unique functionalities implemented in CHASE will enable a user to synthesize compelling animated sequences in which a variety of virtual characters are involved. Hence, in view of the simplicity of the developed commands, in conjunction with the associated parameters, the proposed methodology is quite powerful in comparison to previous solutions. In addition, the easy-to-use and easy-to-remember commands make the presented scripting environment effective, especially for users who are inexperienced in programming.

III. SYSTEM OVERVIEW

This section briefly describes the proposed system. Specifically, a variety of background algorithms are responsible for recognizing the input commands and synthesizing the motion of a character. The developed background algorithms communicate with the animation system, which is responsible for generating a character's motion, as well as with a path-finding methodology that retrieves the path the character should follow when a locomotion sequence is required. Finally, CHASE synthesizes and displays the requested motion sequence. Fig. 2 represents this procedure.

Fig. 2: The architecture of CHASE.

A. System Overview

The interface of CHASE (see Fig. 1) is characterized by its simplicity. In its current implementation, it consists of a scene panel that displays the resulting animations, an edit mode panel for editing the input objects, a progress bar that shows the progress of the displayed animation, a scripting box, and a few buttons for building, playing, and clearing the written scripts. Finally, buttons that save the scripted code and export the generated animated sequences are also provided. A downloadable version of the presented system, documentation specifying all of its capabilities, and example scenes can be found on the CHASE webpage.

B. Third-Party Implementations

A number of techniques and libraries are used to construct CHASE. CHASE uses the Recast/Detour library [43] in conjunction with MEESPM [44][45] for the path-finding process and for collision avoidance with the environment. Concurrent actions are generated based on a simple layering methodology similar to the one proposed in [16]. Finally, a full-body inverse kinematics solver similar to [46] was implemented to handle the postures of a character while it interacts with objects located within the virtual environment.


IV. SCRIPTING CHARACTER ANIMATION

Developing scripting commands for animating a virtual character can be characterized as a complex process, since a virtual character should be able to perform a variety of actions. In this section, the identification of the basic scripting commands that are necessary to enable the virtual character to navigate and interact within a virtual environment is presented. Moreover, by introducing additional methods called by the main scripts, the system generates concurrent actions of a character and animates multiple characters simultaneously. Finally, an additional functionality of CHASE for scripting multitask animated sequences for the generation of scenario-related animated characters is presented.

A. Identifying Scripting Commands

The application presented has been developed for users who are inexperienced in programming. Thus, simple, easily memorized scripting commands are necessary. To generate the required scripting commands, one must begin by identifying the possible actions, or types of actions, that a character should perform. Generally, a character should be able to perform simple actions, such as waving its hand; tasks related to locomotion, such as moving to a target position; and interaction tasks, such as grasping an object located in the three-dimensional environment with its hand. These are the three basic types of actions that a virtual character should be able to perform. Based on this general description, three basic scripting commands were developed: do(parameters), goTo(parameters), and interactWith(parameters).

The do(parameters) command provides a character with the ability to perform a single action. The goTo(parameters) command forces a character to move within the given virtual environment. The final command, interactWith(parameters), makes the virtual character capable of interacting with a variety of tasks and hence provides the ability to control a variety of the character's actions.

For these commands, the parameters within the parentheses indicate the possible parameters that each scripting command can receive (see Section 4.2). Due to the various parameters that each command receives, a user is provided with the means to develop both abstract and specific actions of a character. For example, with the goTo(parameters) command, it is possible not only to generate the required locomotion of a character, but also to enable the user to gain better control over the synthesized motion, since the user can specify how the locomotion of a character should be generated. The following section presents the basic parameters that each command receives.
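In their simplest forms, and using action and object keywords drawn from Table I and the running examples of this paper, the three commands can be scripted as follows:

```text
do(wave hand);
goTo(target);
interactWith(ball, kick);
```

These one-line scripts request a hand wave in place, a walking locomotion toward target, and a single ball kick, respectively.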

B. Command Parameters

A task assigned to a character can be performed in a variety of different ways. For example, locomotion to a target position can be performed by walking, running, etc. Hence, in cases where a user needs a higher level of control over the synthesized motions of a character, parameters that refine the actual actions generated by the previously mentioned scripting commands should be defined.

The first command implemented, the do(parameters) command, enables a user to script simple actions of a character. This command has a single mandatory parameter, which indicates the action that the character should perform. However, optional parameters that specify the target, the body part, or the duration of the task can also be used. Specifically, the user can request a single action by calling do(action), as well as specify the target where the action should be performed, the duration of the action, and the body part that should perform the requested action. This command initially permitted a character to perform the requested action without performing a locomotion sequence (i.e., to wave its hand while staying in its position). However, the do(parameters) command can also be used to permit the character to perform locomotion tasks, since one can request that a character perform a walking motion. Based on the parameters that can be inserted into the do(parameters) command, a user has the means not only to generate the requested action, but also to generate an action that fulfills user-specified constraints.
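Using the parameter forms listed in Table I, a few illustrative do(parameters) calls might look as follows; the concrete keyword values (wave hand, handL, and so on) are examples rather than an exhaustive list:

```text
do(wave hand);
do(walk, target);
do(jump, 4);
do(wave hand, handL, 6);
```

The first call requests a hand wave with the predefined default settings, the second a walking motion toward target, the third a jump lasting four seconds, and the fourth a six-second wave performed with the left hand.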

The goTo(parameters) command enables the character to perform locomotion tasks. The user provides a mandatory parameter, which is the target position that the character should reach. However, the user is also able to use an additional optional parameter that specifies the motion style with which the character will be animated. Therefore, a character's locomotion to a target position can be scripted either by (i) inserting only the target position, as in goTo(target), when a simple walking motion of the character is desired, or (ii) inserting goTo(target, motion style) when both the target position and the motion style are specified.

The final command implemented in CHASE, interactWith(parameters), can be characterized as more complex than the two previously mentioned commands. The reason is that there are numerous possible interactions between a character and an object. If a character is asked to interact with an object, various actions can be generated. Even if it is possible to associate actions with specific body parts of a character in a pre-processing stage, there are also possible variations of the required actions. These variations may be related to the character's body or to the duration of the action. For example, scripting a character to kick a ball may also require specifying the foot that should perform this action. Moreover, asking a character to knock on a door may also require specifying the duration of the knocking. For that reason, four different parameters have been defined. The first two parameters (object name and interaction module) are mandatory. They indicate the object that the character should interact with and the interaction module that should be generated. However, depending on the user's requirements for generating a specific action, two more optional parameters can also be inserted. The first one (body part) enables the user to choose which of the character's body parts should perform the requested action. In the current implementation, the user is permitted to choose the hand or foot that will perform the action. The second parameter (duration) enables the user to choose the time (in seconds) required for the requested action.
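Building on the ball-kicking and door-knocking examples above, the four parameter forms of interactWith(parameters) might be scripted as follows (the object names and module keywords are illustrative):

```text
interactWith(ball, kick);
interactWith(ball, kick, footL);
interactWith(door, knock, 5);
interactWith(door, knock, handR, 5);
```

The optional body part and duration parameters select the left foot for the kick and request five seconds of knocking with the right hand.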

Based on the possible parameters that each command can receive, the following should be noted. Firstly, when the user does not specify any optional parameter for a scripted command, the system generates the required action taking into account a predefined set of parameters that are associated with each action of the character. For example, if a user requests that a character kick a ball, the system will display only a single kick by the character. The reason is that the ball-kicking action is defined to be performed only once, to avoid synthesizing meaningless, repeated motions. Secondly, it should be noted that each optional parameter is independent. This means that the user is not required to specify all of the optional parameters provided by each command. Therefore, the user may control only specific components of the requested action. A simple example illustrates this capability: while using the do(parameters) command, the user may fill in either the body part or the duration parameter, or both. In any case, the system's decision in generating the requested motion is not influenced by other factors, since it is capable of recognizing the correct form of the scripted command in all of the aforementioned cases.

The three commands examined in this paper, in conjunction with the associated parameters that can be used to animate a virtual character, are summarized in Table I. In addition, a small set of possible keywords that the user can employ to animate virtual characters is presented. It is assumed that an additional control parameter for the synthesized motion can be quite beneficial, since it enables the user not only to animate a character, but also to force the system to synthesize the user's actual wish. Complete documentation of all possible actions that can be synthesized by the character can be found on the CHASE webpage (url omitted for review purposes).

C. Scripting Concurrent Actions

Virtual characters, like humans, should be able to perform more than one action simultaneously. This section presents the scripting process for concurrent actions that a character can perform. The concurrent action functionality is based upon the ability to specify the body part that should perform an action in conjunction with the base action that has been requested. A concurrent action combines the do(parameters) command with either the goTo(parameters) or the interactWith(parameters) command. Specifically, to have a character perform concurrent actions, the do(parameters) command is attached to either goTo(parameters) or interactWith(parameters). A simple example follows. To cause a character to perform a motion such as waving its hand while walking to a target position, the system permits the user to script the desired walking motion of the character and to request the additional motion that the system should generate. Hence, the previous example can be requested simply by scripting goTo(target, walk).do(wave hand, handR). Permitting the user to generate additional actions of a character while another action is in progress can be quite beneficial when more complex animated sequences are required. Therefore, this additional functionality provides a higher level of control over a requested action of a virtual character.
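Since do(parameters) can also be attached to interactWith(parameters), a concurrent action during an interaction can presumably be scripted in the same way; the combination below is a hypothetical illustration rather than an example taken from the paper:

```text
interactWith(ball, kick, footR).do(wave hand, handR);
```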

D. Scripting Multiple Characters

In animated sequences, it is quite common for more than one character to participate in a single scenario. Hence, by extending the three scripting commands, CHASE also enables a user to script more than one character simultaneously. This is achieved by attaching an additional command, called characterName(parameter), to one of the three basic commands. This command specifies the character that should perform an action, permitting the user to control multiple characters in cases where more than one character participates in the animation process. A simple example of forcing a specific character to perform an action follows. Consider a character named Rudy who is required to walk to target. This procedure can be called by simply scripting goTo(target).characterName(Rudy).
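The same pattern extends to several characters; here Rudy is the character named in the text, while the second character name, Anna, is a hypothetical example:

```text
goTo(target, run).characterName(Rudy);
interactWith(ball, kick, footR).characterName(Anna);
```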

E. Scripting Multiple Tasks

In scenario-related sequences that involve virtual characters, the characters should be able to perform a variety of tasks one after the other. Thus, this paper presents a method to script multiple tasks, enabling a user to synthesize long animated sequences. Generally, the tasks that a character can perform are characterized by their linearity. Specifically, a task begins when the previous task is completed, and the procedure continues until there are no further tasks for the character to perform.

Based on the foregoing, a multitask scenario in its general form can be represented as the components of an array with dimensionality N × 1, where N denotes the total number of tasks that a character should perform. By assigning each of the actions to an array called task[index], a user can generate long animated sequences. This is achieved by allowing the user to assign a single task to each index value of the task array. A simple example of a multitask scenario appears in Fig. 3, as well as in the accompanying video. Its scripting implementation is represented in Algorithm 1.
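Although Algorithm 1 is not reproduced in this excerpt, a multitask script of the form described above would presumably resemble the following sketch, in which the objects (door, chair) and interaction modules are hypothetical:

```text
task[0] = goTo(door);
task[1] = interactWith(door, open);
task[2] = goTo(chair);
task[3] = interactWith(chair, sit);
```

Each task begins once the task at the previous index has been completed.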

It is quite common in multitask scenarios to involve multiplecharacters. Two different approaches can be used in CHASEto script more than one character simultaneously in a multitaskscenario. The first approach animates each character one afterthe other. This means that the action required of a characterB is generated after the action of a character A has beencompleted. The reason is that each task of the characterstaking part in the multitask scenario have been assigned adifferent index value of the task array. A simple example

Page 6: Towards Developing an Easy-To-Use Scripting Environment for … · 2018. 11. 8. · Towards Developing an Easy-To-Use Scripting Environment for Animating Virtual Characters Christos

do(parameters);
    do(action);
    do(action, target);
    do(action, duration);
    do(action, body part, target);
    do(action, body part, duration);
        action:        wave hand, jump, walk, kick, etc.
        target:        Vector3 (x, y, z), object name
        duration:      time in seconds
        body part:     handR, handL, footR, footL, upperB, lowerB

goTo(parameters);
    goTo(target);
    goTo(target, motion style);
        target:        Vector3 (x, y, z), object name
        motion style:  walk, run, jump, walk back, etc.

interactWith(parameters);
    interactWith(object name, interaction module);
    interactWith(object name, interaction module, body part);
    interactWith(object name, interaction module, duration);
    interactWith(object name, interaction module, body part, duration);
        object name:         any object's name contained in the scene
        interaction module:  kick, punch, grasp, sit, open, close, etc.
        body part:           handR, handL, footR, footL
        duration:            time in seconds

TABLE I: Commands and associated parameters that can be used in CHASE to request an action by an animated virtual character.
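The command grammar of Table I is regular enough to parse with a single pattern. The paper does not describe CHASE's parser, so the following is only a sketch under that assumption; the parse_command helper and its error handling are invented for illustration.

```python
import re

# Hypothetical parser for the command grammar of Table I. CHASE's real
# implementation is not shown in the paper; this is only a sketch.
COMMANDS = {"do", "goTo", "interactWith"}

def parse_command(text):
    """Split a call such as 'do(wave hand, handR, 3)' into its command
    name and its list of parameters."""
    match = re.fullmatch(r"\s*(\w+)\((.*)\)\s*;?\s*", text)
    if not match or match.group(1) not in COMMANDS:
        raise ValueError(f"not a CHASE command: {text!r}")
    name = match.group(1)
    params = [p.strip() for p in match.group(2).split(",") if p.strip()]
    return name, params

print(parse_command("do(wave hand, handR, 3);"))
# ('do', ['wave hand', 'handR', '3'])
```

Because every command is a name followed by a comma-separated parameter list, dispatching to the background animation algorithms reduces to a lookup on the parsed name.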

of generating the actions of two different characters appears in Algorithm 2. However, a user should also be able to animate virtual characters simultaneously in multitask scenarios. This is achieved in CHASE by using a two-dimensional array named tasks[index][index]. In this array, the first index value represents the row in which each action is generated, whereas the second index value represents the number of the character. It should be noted that each character should be represented by the same index value throughout the development of a multitask scenario. Hence, the background algorithms that are implemented recognize and generate the requested tasks as

separate entries. This enables the user to animate a number of characters simultaneously. A simple example in which two characters take part in a multitask scenario appears in Algorithm 3. It should be noted that a multitask scenario in which multiple characters evolve can, in its general form, be represented as an array with dimensionality M × N, where M denotes the total number of characters evolving in the multitask scenario and N denotes the total number of tasks that each character should perform.
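The M × N model can be sketched as rows that advance sequentially while the entries within a row play back simultaneously. As above, this is an assumed illustration, not CHASE's actual code: the run_multicharacter helper, the dict-per-row representation, and the use of None for an idle character are all invented here.

```python
# Sketch of the M x N multitask model: each row of `tasks` holds the
# actions that all characters perform simultaneously, and rows are
# executed one after the other. Representation is an assumption.

def run_multicharacter(tasks):
    """tasks is a list of rows; each row maps a character name to its
    action string (or None when that character is idle in the row).
    Returns the playback order as text, one line per time step."""
    timeline = []
    for row, actions in enumerate(tasks, start=1):
        # Actions in the same row are dispatched together (simultaneous).
        step = [f"{who}: {act}" for who, act in actions.items() if act]
        timeline.append(f"step {row}: " + " | ".join(step))
    return timeline

# The scenario of Algorithm 3, re-expressed in this sketch's format.
tasks = [
    {"characterA": "goTo(target, walk)", "characterB": "goTo(target, run)"},
    {"characterA": "interactWith(characterB, punch, handR)", "characterB": None},
]
for line in run_multicharacter(tasks):
    print(line)
```

Keeping each character at a fixed column index is what lets the background algorithms treat the columns as separate, concurrently animated entries.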


Fig. 3: A multitask scenario generated by using Algorithm 1.

Algorithm 1: A simple example of generating a multitask scenario.

Data: Input commands of a user
Result: The resulting animated sequence
task[1] = do(wave hand, handR, 3);
task[2] = goTo(ball, walk).do(wave hand, handL);
task[3] = interactWith(ball, punch, handR);
task[4] = do(jump);
task[5] = do(wave hand, handR, 2);

Algorithm 2: By placing the actions of two different characters at different index values of the task array, the system generates each character's action one after the other.

Data: Input commands of a user
Result: The resulting animated sequence
task[1] = goTo(ball, walk).characterName(characterA);
task[2] = goTo(ball, walk).characterName(characterB);

V. EVALUATION AND RESULTS

This section covers the results obtained while evaluating CHASE. To understand the efficiency of using CHASE, we designed a study to evaluate the ability of individual students

to animate virtual characters that perform specified tasks. The participants were asked to use CHASE to develop a specified scenario. Two different teams of participants took part in the evaluation process. The first team consisted of eight participants who were first- or second-year undergraduate students in science or engineering departments. These students had a basic knowledge of computer programming (they had attended only an introductory course in computer programming); however, none of them had previous experience in, or any particular knowledge of, computer graphics and animation. The second team consisted of eight participants who were in their first or second year of study in the arts. None of the students on this team had any prior knowledge of programming. All participants can be characterized as computer literate.

A. Experimental Methodology

Each participant sat in a quiet room in front of a desktop computer on which the CHASE application had been installed. First, the experimenter demonstrated the functionalities of CHASE for 10 minutes. During that time, each participant had the opportunity to take notes freely. After the demonstration, each participant was given the opportunity to discuss the system and to ask questions. An additional five minutes were provided for experimentation with CHASE; however, no further explanation of the system was given.


Algorithm 3: A multitask scenario in which there are two characters. In this scenario, characterA moves to its target position while walking, and characterB moves to its target position while running. Finally, characterA punches characterB with its right hand.

Data: Input commands of a user
Result: The resulting animated sequence
tasks[1][1] = goTo(target, walk).characterName(characterA);
tasks[1][2] = goTo(target, run).characterName(characterB);
tasks[2][1] = interactWith(characterB, punch, handR).characterName(characterA);

After this introductory procedure, the experimenter explained to the participants that they would use CHASE to generate a scenario involving only one character. Specifically, they were asked to generate the scenario presented in Algorithm 1. The scenario was provided in printed form, in which the characters' actions were described sequentially in writing. The actions that each character was required to perform, and the positions to be reached, were written in bold letters in the printed document. The basic commands and parameters that the users were required to employ were supplied on a separate sheet. The time (in seconds) required to complete the given scenario was recorded; each participant was permitted a maximum of 15 minutes to complete the assigned scenario. After a participant completed the assigned scenario, he or she evaluated the difficulty of generating an animated sequence using the commands and functionalities of CHASE by assigning a score from 1 (difficult) to 7 (easy to use). Finally, since CHASE was developed to enable inexperienced programmers to visualize and generate their own stories involving virtual characters, the participants were asked to evaluate CHASE by assigning a score from 1 (would not like to use it again) to 7 (would strongly like to use it again).

B. Results

We recorded separately for each team the time that participants took to complete the given scenario. A pairwise t-test revealed that science and engineering students completed the given scenario more quickly than the students from the arts department. Specifically, the mean time for the science and engineering students was m = 316.9 seconds with a standard deviation of σ = 69.2, whereas the arts students had a mean time of m = 657.2 seconds with a standard deviation of σ = 86.7. Based on these results, it can be stated that students more familiar with programming were able to finish the given task faster. However, considering that the arts students also completed the given tasks, it can be stated that users inexperienced in programming can likewise use such a scripting environment effectively.

For evaluation purposes, we also recorded the students' satisfaction in interacting with CHASE by scoring the difficulty of using the developed commands in CHASE. Users were invited to evaluate the intuitiveness and difficulty of each command on a 7-point Likert scale (1 being difficult, 7 being easy). A pairwise t-test found that it was easier for

participants from the science and engineering departments to use the scripting commands than it was for the students from the arts department. Specifically, t(8) = 3.09, p = 0.018: the science and engineering students scored m = 4.6 on the difficulty scale with σ = 1.3, while the arts students scored m = 3.8 with σ = 1.7. However, a pairwise t-test did not reveal any statistical difference between the two groups on the intuitiveness scale (p > 0.05).

The final results indicate the potential willingness of each participant to use CHASE in the future. Surprisingly, all participants responded that they would like to use CHASE in the future (m = 6.75 with σ = 0.46). Therefore, it can be concluded that all of the participants would like to visualize their ideas and develop their own stories involving virtual characters by using CHASE.

VI. CONCLUSIONS AND FUTURE WORK

In this paper, a novel scripting environment for animating virtual characters, called CHASE, was presented. CHASE enables a user to request a variety of actions that a character can perform by using only three commands. Each command, which receives a variety of parameters, is associated with specific actions that the character is able to perform. Moreover, the commands communicate with a variety of background algorithms that are responsible for generating the actions requested of the character. In addition to the scripting commands, three further functionalities allow the user to script concurrent actions of a character, multiple characters at the same time, and multitask scenarios in order to generate scenario-related sequences that involve animated characters. To demonstrate the efficiency and ease of use of CHASE, an evaluation process was conducted. Two teams of students took part in the study: the first team consisted of students who had minimal previous programming experience, whereas the second team consisted of students who had no previous programming experience. This evaluation has shown how easy it is to use such a scripting environment for the animation of virtual characters, as well as the potential willingness of users to employ CHASE in order to visualize their ideas and produce their own stories that involve animated virtual characters.

In its current version, CHASE provides a variety of functionalities. However, there are additional functionalities that we would like to implement in the near future. Specifically, we


would like to implement additional commands, in conjunction with the associated parameters, to allow the system to provide complex interactions between multiple characters. In addition, we would like to expand the concurrent-actions functionality by allowing the user to specify more than one body part that performs additional actions simultaneously. Moreover, in the current version of CHASE, the camera that captures the virtual content is placed at a fixed position in the virtual environment; hence, in the future we would like to provide functionalities that allow the camera to be positioned by scripts, as well as to animate in response to the users' requests. We expect that these additional functionalities will be quite beneficial for generating a wide range of actions and interactions, as well as for representing the generated sequences more appropriately. As a result, users would be able to generate compelling sequences that involve animated virtual characters.
