University of Abertay Dundee
School of Arts, Media and Computer Games
May 2015
AN EXPLORATION OF SITUATIONAL AWARENESS IN GAMEPLAY ANIMATIONS
Jessica Hider
BA (Hons) Computer Arts 2015
University of Abertay Dundee
Declaration
Author: Jessica Hider
Title: An exploration of situational awareness in gameplay animations
Qualification sought: BA (Hons) Computer Arts
Year: 2015
I. I certify that the above mentioned project is my original work
II. I agree that this dissertation may be reproduced, stored or transmitted, in any form and by any means without the prior written consent of the undersigned.
Signature ………………………………………………………………………………………………….
Date ………………… 20/4/15 ……….……………………………………………………….………
ABSTRACT
With visual stimuli being the dominant factor in a player's gaming experience,
importance must be placed on visual coherence in games. This can be achieved in
many ways using theories such as mise-en-scène, or environmental storytelling,
which aid with narrative exposition and establishing a mood. However, many games
fail to incorporate a character’s animations into the visual coherence, thus
characters often display little or no reaction to their surroundings. This is unrealistic
when compared with human behaviour and as a result can break character
believability and player immersion. To achieve naturalistic behaviour and visual
coherence, a character needs to demonstrate an awareness of their surroundings.
To demonstrate awareness, this project proposes the application of a model of
situational awareness, created by Mica Endsley, to an avatar’s gameplay animations
in order to rectify the visual discord. Using case studies to explore how avatars currently demonstrate awareness, a framework for applying the three levels of situational awareness (perception, comprehension and projection) is defined and explained. This framework is iteratively applied to animation tests, reviewed and
then updated, with the process culminating in the development of the final
artefact, an interactive piece where the avatar demonstrates situational awareness
through their gameplay animations. This provides further areas of insight that
should be considered when applying situational awareness to gameplay animations.
TABLE OF CONTENTS
ABSTRACT................................................................................................................... ii
TABLE OF CONTENTS.................................................................................................. iii
LIST OF FIGURES..........................................................................................................v
LIST OF TABLES...........................................................................................................vi
1 INTRODUCTION...................................................................................................1
2 CONTEXTUAL REVIEW.........................................................................................3
2.1 The Issue......................................................................................................3
2.2 Current Solutions & Thinking.......................................................................4
2.2.1 Relaxed Behaviour................................................................................4
2.2.2 Humanity System..................................................................................5
2.3 Contextual Review Summary.......................................................................6
3 METHODOLOGY..................................................................................................7
4 SOLUTION DEVELOPMENT..................................................................................9
4.1 Visual Coherence..........................................................................................9
4.2 Skilful Performance....................................................................................10
4.2.1 Twelve Principles of Animation...........................................................10
4.2.2 Seven Essential Acting Principles........................................11
4.3 Awareness..................................................................................................12
4.4 Solution Development Summary................................................................14
5 SITUATIONAL AWARENESS...............................................................................15
5.1 Theory of Situational Awareness................................................15
5.2 Situational Awareness in Games................................................................16
5.2.1 Definition of surroundings..................................................................17
5.2.2 Demonstrating Personality..................................................................22
6 PRACTICAL APPLICATION OF SITUATIONAL AWARENESS.................................24
6.1 Contextualisation.......................................................................................24
6.2 Prioritisation...............................................................................................25
6.3 Performance...............................................................................................25
7 CONCLUSION & FUTURE STUDY........................................................................27
8 APPENDICES......................................................................................................28
8.1 Appendix A – AI Behaviour.........................................................................28
8.1.1 Relaxed Behaviours.............................................................................28
8.1.2 Humanity Systems...............................................................................29
8.2 Appendix B – Game Animation Case Study Results....................................31
9 REFERENCES......................................................................................................35
LIST OF FIGURES
Figure 3-1 Project Methodology.................................................................................7
Figure 5-1 Model of SA in Dynamic Decision Making (Endsley 2000).......................16
Figure 5-2 Other Character Categorisation System..................................................20
Figure 6-1 Prioritisation Model.................................................................................25
Figure 8-1 External Actions.......................................................................................28
LIST OF TABLES
Table 8-1 The Wind Waker (2013)............................................................................31
Table 8-2 Journey (2013)..........................................................................................31
Table 8-3 The Last of Us (2013)................................................................................32
Table 8-4 Captain Toad: Treasure Tracker (2015).....................................................32
Table 8-5 Ni No Kuni: Wrath of the White Witch (2010)............................33
Table 8-6 Assassin's Creed III (2012).........................................................................33
Table 8-7 Tomb Raider (2013)..................................................................................34
1 INTRODUCTION
Whilst playing video games, players are exposed to visual, auditory and occasionally haptic (tactile) feedback; however, these stimuli are not equally received. Gaulin and McBurney (2001) explain that visual stimuli take precedence over auditory stimuli, calling this phenomenon visual dominance. As visual stimulus is the most influential factor in a player's experience, importance should be placed on making sure all the viewed elements work together cohesively.
There are several theories for devising visual coherence (Gibbs 2002; Monaco 2009; Worch and Smith 2010), yet one area that is often overlooked when aligning the visuals is the avatar's gameplay animations. Hooks (2011) encountered this when teaching at a games company: when he asked why the avatar did not respond to the creepy atmosphere of the castle it was exploring, the answer was that the animators had not thought about how the location would affect the character's behaviour.
This discord between the avatar and their surroundings can lead to unrealistic
behaviour from the character and if severe enough, break the player’s immersion
(Develop 2013b). The discord is noticeable as humans change their behaviour in
reaction to their surroundings; a person in their home with the lights on will act
differently than if they were in a graveyard at midnight; a shy person’s behaviour
will differ when in a large crowd compared to being alone.
If game avatars are to become compelling, like humans, they need to change their
behaviour based on their surroundings (Perkins 2008). Several games (Assassin’s
Creed III 2012; The Last of Us 2013; Tomb Raider 2013) already have their avatars
exhibit spatial awareness, where they respond to the physical environment. Yet,
there is minimal demonstration of the avatar interacting with their surroundings
and other characters in a way that reflects the tone of the scenario as well as their
personality in their gameplay animations.
Therefore, this research project aims to explore the following topic: Can current
animations be advanced so the avatar elicits a reaction to both the physicality and
tone of their surroundings, which reflects their personality and mood?
This project focuses on the gameplay animations of the player-controlled character,
the avatar, in third person games. When referring to the avatar moving towards a
goal, it is implied that the player is controlling this movement, and the avatar is not
moving of its own accord.
This project frames the topic from the standpoint of a visual discord between the
character and the environment. Through exploring visual coherence techniques, the
idea of a skilful performance emerges, leading to the concept of avatar awareness.
A model of situational awareness is proposed and its application to gameplay
animations is explored. Finally, elements of consideration that arose during the
practical application of situational awareness to an artefact are discussed.
2 CONTEXTUAL REVIEW
2.1 The Issue
Whilst reviewing games such as The Last of Us (2013), Tomb Raider (2013), and The Wind Waker (2013), the visual discord between gameplay animations and cut-scene animations is easy to observe. In The Wind Waker (2013), during the cut-scene when Link is rescuing his sister from the Forsaken Fortress, he sneaks into the room she is in, taking his time to look around and creep forward. However, this behaviour is not represented in his gameplay animations during the rest of the dungeon, even though the objectives and atmosphere are the same.
There is no reason why gameplay animations should exclude expressive behaviour;
Disney has long been imbuing their characters' movement with personality and attitude, with Thomas and Johnston (1981, p.347) stating, “Walks... are one of the
animator’s key tools in communication. Many actors feel that the first step in
getting hold of a character is to analyse how he will walk.”
Many current video games fail to utilise performance-based locomotion and instead
focus on emphasising lifelike adaptive movement through procedural animation
(Sloan 2011). Hooks (2011) explains that the lack of expressive animation is due to finance and time budgets: believable actions cost more to animate, and producers figure the effort is unlikely to be noticed in a game environment.
However, as games have advanced so far, players are now calling for realistic
reactions from the avatar, including demonstrating situationally dependent
behaviour (Develop 2013a). Murray (1997) concurs with this idea, highlighting that
expressive gestures beyond that of physical movement are needed in order to build
comprehensive worlds. Murray (1997, p.150) believes, “there is no reason why
gestures could not be animated in a way that very closely matches the visual display
with the interactor’s movement and heightens the dramatic impact of the story”.
To clarify, this project is not trying to guess what the player is feeling and display
that emotion through the avatar’s animations. It is solely focused on how the avatar
should react in relation to its own personality, mood or goals. Contrary to being immersion breaking, having the avatar display emotions that are uncontrolled by the player creates distance, which in turn allows empathy with the avatar (Hooks and Jungbluth 2013).
2.2 Current Solutions & Thinking
Although there is little documentation for expressive avatar animation, there are
many systems and theories for creating expressive behaviour for non-playable
characters (NPCs) and buddy characters. These systems provided a base knowledge
of AI and adaptive behaviour design, which influenced the development of the
avatar’s reactive animation framework.
Sections 2.2.1 and 2.2.2 detail two different approaches to building character AI and how they combat the issue of a visual discord between the character and its surroundings.
2.2.1 Relaxed Behaviour
Anguelov and Shroff (2015) explained their approach towards developing
expressive interactive NPC AI behaviour in their 2015 Game Developers Conference
(GDC) talk for games such as Just Cause 3 (2015). They detail how relaxed
behaviour is needed for creating life within a space, immersing the player and
keeping them engaged as “if the world ignores the player, the player ignores the
world”. Having alterable NPC behaviour also offers the opportunity to express the
narrative of the game or world outside of cut-scenes.
The system Anguelov and Shroff developed is called external actions, where all the
environmental and contextual AI is embedded in the environment rather than in
the NPC’s AI [see appendix A for further details]. This means the system becomes
highly reusable, as it is not tied to one character. This also allows behaviours and animations to be efficiently layered over a character without disrupting the
core AI, so the NPC can react to events happening in the world, such as being shot
or moving towards a fire for warmth, or interact with the environment, for example
leaning on a wall or sitting on a bench.
As this system has been designed to be used by multiple NPCs, it does not focus on
personality based performance and instead is populated with contextual actions.
Thus, external actions allows the NPC to contextually react to their environment
and events happening around them.
2.2.2 Humanity System
During his talk at GDC 2014, Robertson (2014) explained the AI and animation systems behind the buddy character Elizabeth from BioShock Infinite (2013). The developers wanted the player to build a relationship with Elizabeth as well as give her the illusion of life. Robertson explains that to create the illusion of life they needed an AI that reacted to the environment, other AIs and the player.
To achieve this, the developers set up the ‘Liz squad’, a multidisciplinary team
whose purpose was to bring Elizabeth to life. They did this by creating her humanity
system, a combination of five subsystems focusing on different areas of interaction and performance: Emotion system, Gesture system, Head and eye tracking, Smart Terrain and Combat system [see appendix A for further details].
The humanity system is systemic, playing and interrupting animations based on the player's movements and decisions, so player agency is never sacrificed.
system also makes use of layering animations. Hence, if Elizabeth is displaying an
angry emotion with her arms folded and then needs to move to keep up with the
player, instead of snapping between angry idle, walk, angry idle, her arms will stay
folded as she walks.
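The layering idea can be sketched in a few lines: an upper-body emotion pose persists as an overlay while the lower-body locomotion state changes underneath it. This is an illustrative model, not Irrational Games' actual system, and the state names are invented.

```python
# Minimal sketch of animation layering: an emotion overlay on the
# upper body survives changes to the locomotion layer, so there is
# no snap from "angry idle" to plain "walk" and back.

class LayeredCharacter:
    def __init__(self):
        self.locomotion = "idle"      # lower-body / full-body layer
        self.emotion_overlay = None   # upper-body layer, may be None

    def set_emotion(self, pose):
        self.emotion_overlay = pose

    def move(self, locomotion_state):
        # Changing locomotion leaves the emotion overlay untouched.
        self.locomotion = locomotion_state

    def current_pose(self):
        if self.emotion_overlay:
            return f"{self.locomotion}+{self.emotion_overlay}"
        return self.locomotion


liz = LayeredCharacter()
liz.set_emotion("arms_folded_angry")
liz.move("walk")
print(liz.current_pose())  # walk+arms_folded_angry
```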
Altogether, this means Elizabeth’s humanity system allows her to react to the
physicality and tone of the environment, as well as display her personality and
mood without sacrificing the player’s agency.
2.3 Contextual Review Summary
Identifying the issue of the avatar not responding to their surroundings, and framing it as a discord between the animations and the surroundings, allows the project to focus on giving the avatar personality and mood in accordance with their character, rather than trying to second-guess what the player is feeling.
As can be seen in the examination of current solutions for the AI behaviour of NPCs and buddy characters, it is possible to create interactive, performance-based behaviour in a systemic manner which does not disrupt player agency. Therefore, it
should be possible to propose a model for the avatar, which allows the
demonstration of performance in relation to their surroundings whilst adhering to
player agency.
3 METHODOLOGY
Figure 3.1 depicts the methodology followed during this project. With the issue of a
visual discord identified, an extensive literature and contextual review was carried
out to examine current models and systems used in games. These models were
then reshaped when compared with research into human behaviour and
psychology.
Following the reshape came testing, carried out using practical tests to explore different areas of animation. The practical tests were evaluated against a set of aesthetic criteria based on existing theories (the twelve principles of animation (Thomas and Johnston 1981), the framed image (Monaco 2009) and the seven essential acting principles (Hooks 2011)) and in relation to the current model for rectifying the visual discord. The findings from these tests led to further research into selected games, which then yielded further practical tests, and thus the cycle repeated in an iterative format.

[Figure 3-1 Project Methodology]
At many points, the testing revealed flaws in the model, and thus the model was
reshaped before the testing cycle began again.
The entire process was documented through a blog and this dissertation, which
allowed for critical analysis and evaluation of the processes undertaken and results.
4 SOLUTION DEVELOPMENT
The current solutions discussed in section 2.2 focused on NPCs or buddy characters, as little data was found for methods applied to the avatar. Due to this lack of avatar-related systems, the initial development of a solution began by
comparing how films and games achieve visual coherence, which led to examining
skilful performance in games and the concept of avatar awareness.
4.1 Visual Coherence
There are techniques that can aid in creating visual coherence, such as mise-en-scène and environmental storytelling. Mise-en-scène, which originated in film and is concerned with all the visual elements that make up a shot, can be utilised to influence the audience's mood as well as advance the story through the interplay of elements such as lighting, colour, props, framing, décor and performance (Gibbs 2002).
Yet due to the interactive nature of games, the designer does not have as much
freedom with the camera compared to film; Falstein (2004), a 24-year game
industry veteran, explains that although the position of the camera can influence
the emotional involvement of the player, playability must come first. This is why
many games, such as Tomb Raider (2013) and Skyward Sword (2011), have cameras
that are set behind and slightly above the player, sacrificing emotional involvement,
but improving gameplay by giving the player a wider field of view.
In regard to this, a sub-section of mise-en-scène called the ‘framed image’,
described by Monaco (2009), is more appropriate when analysing games as it
focuses on everything within the frame, regardless of the position of the camera.
Another technique similar to mise-en-scène that ensures visual coherence among
the elements but accounts for dynamic movement through a space is
environmental storytelling. This technique is used when designing theme parks
(Gamasutra 2000) but can also be applied to virtual worlds. According to Worch and
Smith (2010) by using this technique, the environment should self-narrate the
history of the place; the functional purpose of the place; what might happen next;
and the mood.
As Worch and Smith (2010) described, environmental storytelling is an effective method for narrative exposition and creating atmosphere; yet, for games, it misses out a key component of mise-en-scène - action and performance. Gibbs (2002,
p.12) states, “At an important base level, mise-en-scène is concerned with the
action and the significance it might have. Whilst thinking about décor, lighting and
the use of colour, we should not forget how much can be expressed through the
direction of action and through skilful performance.”
If skilful performance can be aligned with the framed image and environmental
storytelling, then a visual coherence between the avatar, environment and tone in
games, in theory, should be achievable.
4.2 Skilful Performance
4.2.1 Twelve Principles of Animation
How do you achieve skilful performance in animation? Animators have been working on this for years, with Disney's twelve principles of animation, developed back in the 1930s, still at the core of the craft (Thomas and Johnston 1981):
1. Squash and stretch
2. Anticipation
3. Staging
4. Straight ahead and pose to pose
5. Follow through and Overlapping action
6. Slow In and Slow Out
7. Arcs
8. Secondary Action
9. Timing
10. Exaggeration
11. Solid Drawing
12. Appeal
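Some of these principles have direct code analogues in game animation. "Slow In and Slow Out" (principle 6), for instance, is what easing curves implement when poses are interpolated at runtime. A minimal, illustrative example using the common smoothstep curve:

```python
# Smoothstep easing over normalised time t in [0, 1]: the motion
# starts slowly, accelerates through the middle, and slows to a stop,
# i.e. "Slow In and Slow Out" expressed as a function.

def ease_in_out(t):
    """Cubic smoothstep: 3t^2 - 2t^3."""
    return t * t * (3.0 - 2.0 * t)

# Sampling the curve shows the spacing bunching up at both ends,
# exactly the pose spacing a traditional animator would draw.
samples = [round(ease_in_out(t / 10), 3) for t in range(11)]
print(samples)
```

Evaluating a pose blend with `ease_in_out(t)` instead of raw `t` gives the eased motion without any change to the keyframes themselves.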
Although these are at the heart of animated films, TV series and so on, not all of the principles can be translated across into dynamic gameplay. Cartwright (2014) talks about the strict limitations they have for animating the characters in Skullgirls (2012), a 2D fighting game, where moves can have as few as three drawings for the avatar to move from an idle to hitting the other character, or lose the feeling of responsiveness. Thus, the team has to sacrifice anticipation for gameplay, and instead focus on overlapping action and exaggeration to bring performance to the game.
Staging is also restricted during gameplay. As previously mentioned, the intimacy of the camera may have to be sacrificed in order to aid the player's view (Falstein 2004). However, there are ways to provide both artistic staging and retain
playability. An example of this is from Journey (2013) during the sand sliding stage
of the game. During the end of the segment, the player loses control of the camera
as it frames the silhouette of the avatar against the glimmering sand, architecture
and distant mountain, but still allows the player enough view of the world to avoid
obstacles.
4.2.2 Seven Essential Acting Principles
Although Disney's twelve principles of animation explain how to animate the performance of an action, they do not detail how to design the action. Instead, we can look to Hooks' (2011) seven essential acting principles:
1. Thinking tends to lead to conclusions, and emotion tends to lead to action.
2. We humans empathize only with emotion. Your job as a character animator
is to create in the audience a sense of empathy with your character.
3. Theatrical reality is not the same thing as regular reality.
4. Acting is doing; acting is also reacting.
5. Your character should play an action until something happens to make him
play a different action.
6. Scenes begin in the middle, not at the beginning.
7. A scene is a negotiation.
Aside from principles 6 and 7, which are based on scripted action, all the others could be applied to dynamic animation in games.
Principle 2, the idea of empathy, can work in games as long as there is distance
between the avatar and player (as previously discussed in section 2.1). Principle 3 is
clearly seen in games like The Wind Waker (2013) where the design of the world
creates a stylistic reality, such as not dying when jumping from great heights.
Principle 5 can be applied in the technical implementation of animation and used in
the logic of the state machines.
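Principle 5 maps naturally onto the state machines that drive gameplay animation: the avatar keeps playing its current action until an event forces a transition. A minimal sketch, with invented state and event names:

```python
# Principle 5 as a state machine: the current action persists until
# an event triggers a transition; unrecognised events change nothing.

class AnimationStateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions maps (current_state, event) -> next_state
        self.transitions = transitions

    def handle(self, event):
        # Keep playing the current action unless a rule says otherwise.
        self.state = self.transitions.get((self.state, event), self.state)


fsm = AnimationStateMachine("explore_walk", {
    ("explore_walk", "enemy_spotted"): "alert_walk",
    ("alert_walk", "enemy_lost"): "explore_walk",
})

fsm.handle("door_opened")    # no rule: keep playing explore_walk
fsm.handle("enemy_spotted")  # something happened: play a new action
print(fsm.state)  # alert_walk
```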
It is principles 1 and 4 that stand out in relation to the project issue. To be able to
think and react, which is what is needed to rectify the current visual discord, the
avatar would first need to be aware of their surroundings.
4.3 Awareness
Most current game avatars demonstrate a high level of spatial awareness, where
they respond realistically to the physical world around them. In Assassin’s Creed III
(2012), Desmond had complex animation systems that allowed him to have
predictive foot placement (to allow adaptation to rapid changes in terrain height)
and respond to different degrees of gradation in a lifelike way (Gamasutra 2013).
Yet it is often buddy characters and NPCs that demonstrate a higher level of
awareness of their surroundings. Ellie, the buddy character for most of the game in
The Last of Us (2013), will shy away if a torch is shone in her face, become startled
by gunfire, or complain if you are needlessly shooting objects and wasting
ammunition (EuroGamer 2013).
Although these actions aid realism, they contain little personality. By contrast, in
The Wind Waker (2013), an enemy NPC called a Moblin not only shows an
awareness of other characters by being able to differentiate between friend or foe,
the way it is demonstrated is full of personality. When it spots Link, it charges toward him and begins attacking recklessly. If during this charge it accidentally hits another enemy, like a fellow Moblin, it looks around in shock and surprise with its mouth wide open. These actions add to the idea that
this character has more brawn than brain, which is also reflected in its character
design.
Of the games researched [results in appendix B], Captain Toad, from Captain Toad: Treasure Tracker (2015), was the avatar that demonstrated the highest level of awareness of his surroundings. Captain Toad is aware of his enemies; he panics when he has been spotted and this distress continues in his locomotion animations when he is being chased. He also demonstrates awareness of the tone and physicality of his surroundings, shivering in fright during spooky levels, shaking with cold in ice levels and holding his breath when under water. However, he does not
react to events happening around him, such as the player moving parts of the level.
As shown from these game examples, avatars and characters are becoming
increasingly aware of their surroundings. Focusing on the avatar, however, there seems to be no consistency in the awareness demonstrated; spatial movement might be highly polished while there is no distinction between other characters or no reaction to the tone of the environment. Compared to the buddy characters and NPCs, there is also a lack of
personality in the animations.
Therefore, to bring visual coherence through skilful performance to the avatar, there needs to be an awareness model that can be applied to all of the surroundings, maintaining consistency and offering the opportunity to express personality.
4.4 Solution Development Summary
Through the examination of mise-en-scène and environmental storytelling, it has
been identified that skilful performance is an important part of visual coherence. To
create a skilful performance for dynamic gameplay animations, the twelve
principles of animation (Thomas and Johnston 1981) and the seven essential acting
principles (Hooks 2011) can be used as aesthetic criteria for creation and
evaluation.
However, to fully achieve a skilful performance, the avatar needs to demonstrate a
consistent awareness of its surroundings and display its personality.
5 SITUATIONAL AWARENESS
As highlighted in the previous section, awareness and personality are key to creating a skilful performance and thus resolving the visual discord. The proposed model for
obtaining an aware avatar is based on a model of situational awareness.
This section gives a brief outline of the theory behind situational awareness before moving on to explain its application to an avatar's animations, including descriptions of the surroundings the avatar should react to. Following this is a discussion on how situational awareness can demonstrate personality.
5.1 Theory of Situational Awareness
Situational awareness (SA) is a phrase that originated with modern fighter pilots where, simply put, "it is the ability to know what is going on around you all of the time" (Hendrick 1999, p.10). Endsley (2000, p.3), the current Chief Scientist of the United States Air Force, has a more in-depth explanation: "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future."
Figure 5-1 Model of SA in Dynamic Decision Making (Endsley 2000)
According to Endsley's model (figure 5.1), there are three levels of SA:
- Perception: acquiring all of the relevant facts.
- Comprehension: understanding the facts in relation to current knowledge,
motivations and goals.
- Projection: forecasting the future status of events.
Therefore, in a decision process, the person takes in cues from visual, aural, tactile
and other stimuli and relates them to their goals or expectations before forecasting
what future situations may occur. Following these outcomes, they make their
decision and act upon it.
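The perceive-comprehend-project-decide sequence described above can be rendered schematically in code. The stimulus values, goals and thresholds below are invented for illustration; only the structure follows Endsley's model.

```python
# Schematic sketch of Endsley's three levels feeding a decision.
# All data and thresholds are hypothetical.

def perceive(stimuli):
    """Level 1: acquire the relevant facts (ignore weak signals)."""
    return [s for s in stimuli if s["intensity"] > 0.2]

def comprehend(facts, goals):
    """Level 2: relate the facts to current knowledge and goals."""
    return [f for f in facts if f["kind"] in goals]

def project(meaningful):
    """Level 3: forecast the future status of each element."""
    return [{"source": m["kind"], "forecast": "approaching"} for m in meaningful]

def decide(stimuli, goals):
    forecasts = project(comprehend(perceive(stimuli), goals))
    return "react" if forecasts else "continue"


stimuli = [{"kind": "footsteps", "intensity": 0.6},
           {"kind": "wind", "intensity": 0.1}]
print(decide(stimuli, goals={"footsteps"}))  # react
```

The same structure scales down gracefully: a simpler game could stop after `perceive`, which is the look-at-only form of SA discussed in the next section.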
5.2 Situational Awareness in Games
In relation to gameplay animations, SA would be used to visualise the perception,
comprehension and projection of stimuli that the avatar was experiencing. Thus,
alongside the stimuli the player can directly receive (visual, audio and haptic), the player can indirectly experience smell, taste, balance, heat and so on through the
reactions of the avatar.
The three levels of SA provide a framework where these reactions can be designed
and built in a scalable way to suit different sized projects, whilst maintaining a level
of consistency. In its simplest form, SA would require the avatar only to look towards stimuli in its surroundings, thus demonstrating perception of the world.
For any further advancement, the level of complexity is dictated by the design and
scope of the project.
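The simplest form just described, where the avatar only demonstrates perception, reduces to a look-at: turn the head toward the nearest stimulus in range. A hypothetical sketch (positions are 2D points and the range is arbitrary):

```python
import math

# Minimal perception-only SA: pick the closest stimulus within range
# as a look-at target for the avatar's head. Returns None when
# nothing is close enough, so the avatar simply keeps its idle gaze.

def look_target(avatar_pos, stimuli, max_range=10.0):
    """Return the position of the closest stimulus in range, or None."""
    in_range = [s for s in stimuli
                if math.dist(avatar_pos, s) <= max_range]
    if not in_range:
        return None
    return min(in_range, key=lambda s: math.dist(avatar_pos, s))


avatar = (0.0, 0.0)
stimuli = [(3.0, 4.0), (20.0, 0.0), (1.0, 1.0)]
print(look_target(avatar, stimuli))  # (1.0, 1.0)
```

More complex projects would replace the distance test with the comprehension and projection levels, for example weighting stimuli by relevance to the avatar's current goal.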
SA can also easily be applied to any aspect of the avatar's surroundings due to the simplicity of the three levels. As SA is an open model, the definition of the surroundings it applies to will change based on its setting. The following section
details the surroundings that SA should be applied to when used in the context of
the avatar’s gameplay animations. It also offers examples of SA application to each
category in relation to the three levels of SA (perception, comprehension and
projection).
5.2.1 Definition of surroundings
To define the surroundings, case studies into several games were performed in
order to find common elements with which the avatar interacted. Seven games
were selected as they were third person games where the player controlled a single
avatar. A wide scope of styles was also chosen, in order to observe if there were
any broad commonalities in animations. The chosen games were:
- The Wind Waker (2013)
- Journey (2013)
- The Last of Us (2013)
- Captain Toad: Treasure Tracker (2015)
- Ni No Kuni: Wrath of the White Witch (2010)
- Assassin's Creed III (2012)
- Tomb Raider (2013)
After studying these games closely, and observing what the avatars reacted to [observations in appendix B], the following four categories are offered as a definition of surroundings:
- The Environment
- Other Characters
- Player Agency
- Narrative Events
For further clarification, each of these four categories will be discussed in turn,
demonstrating the application of SA in relation to the avatar’s gameplay animations
and detailing example behaviour from current games.
5.2.1.1 The Environment
The environment consists of all parts of the landscape surrounding the avatar, both
physical and tonal. This includes, but is not limited to, the terrain, foliage, buildings,
props, climate and atmosphere.
In relation to the physicality of the environment, the perception part of SA can be
applied to avatar navigation. This can be in a spatial-awareness sense, with the avatar
traversing terrain appropriately (such as in Assassin's Creed III 2012). It can also
mean looking at points of interest the developers have predetermined to reinforce
an avatar's objective, or other environment-based stimuli, similar to Elizabeth's
systems from BioShock Infinite (2013) as previously discussed in section 2.2.2.
For environmental elements such as weather, the avatar can demonstrate
comprehension as well as perception. In Ni no Kuni (2010), when Oliver enters the
Winter Isles, he shivers and tries to protect himself against the cold, demonstrating
that he has comprehension of heat stimuli. Comprehension can also come from the
avatar responding to environmental dangers, such as protecting themselves from
falling debris (Tomb Raider 2013).
Projection requires the avatar to be forward-thinking, which is difficult in an
open-world environment, as the avatar cannot easily guess where the player wants
to go. Thus, projection will most likely only be applied in areas where player
movement is limited. For example, when Lara has to make a treacherous crossing
of a gorge by climbing a fallen plane at the beginning of Tomb Raider (2013), she
demonstrates projection by saying to herself, "I can do this." There is nowhere else
the player can meaningfully go, so this becomes an appropriate place to use
projection.
As highlighted by Hooks (2011), awareness of tone is often underrepresented in
games. As atmosphere is intangible, and thus cannot be directly perceived, the
importance lies in the avatar demonstrating comprehension. Comprehension can be
achieved through most of the avatar's gameplay animations, by altering the
character's movement to reflect the surrounding tone. Captain Toad (Captain Toad:
Treasure Tracker 2015) will shiver in fright when in a spooky haunted house, or
happily toddle about in the sunshine, highlighting his comprehension of the tone of
the level.
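As an illustration, the perception level of this kind of environmental awareness could be reduced to a simple proximity check that selects a look-at target from designer-placed points of interest. The names below (look_target, LOOK_RANGE) are hypothetical and not drawn from any shipped game; this is a minimal sketch, not a definitive implementation.

```python
import math

LOOK_RANGE = 10.0  # metres within which a stimulus is treated as perceived

def look_target(avatar_pos, points_of_interest):
    """Return the closest point of interest within range, or None."""
    best, best_dist = None, LOOK_RANGE
    for poi in points_of_interest:
        dist = math.dist(avatar_pos, poi)
        if dist < best_dist:
            best, best_dist = poi, dist
    return best

# Example: a nearby campfire at (3, 0) wins over a distant tower at (50, 2).
target = look_target((0.0, 0.0), [(50.0, 2.0), (3.0, 0.0)])
print(target)  # (3.0, 0.0)
```

In a real engine, the returned target would drive a head-look or eye-tracking system rather than being printed.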
5.2.1.2 Other Characters
In most games, the avatar will interact with another character in some way. Yet
behaviour that differs depending on who the other character is, is rarely seen in
current games (Schell 2008).
By using the three core concepts of SA (perception, comprehension and projection),
it is possible to build a categorisation system for other characters, to which the
avatar can then appropriately respond.
Entity:
- New: appears harmless? / appears dangerous?
- Seen before:
  - Friend: Royalty; Civilian (Comrade, Family, Trader, Pet, Dweller: Older, Same
    Age, Younger)
  - Foe: Highly Dangerous; Dangerous; Annoying

Figure 5-3 Other Character Categorisation System
Figure 5-3 details a proposed high-level categorisation system for a third-person
adventure game, such as The Wind Waker (2013), where the avatar has
encountered another character. For ease of working through this example, the
other character the avatar has encountered shall be referred to as the Entity.
The first categorisation is: has the avatar seen the Entity before? If the avatar has
not, how does it appear? If the Entity looks threatening, by carrying weapons,
breathing fire or roaring loudly for example, the avatar should be more cautious in
its approach. On the flip side, if the Entity resembles something small and fluffy,
then the avatar can approach without hesitation.
If the avatar has seen the Entity before, does it fall into the friend or foe
category? These top-level categories can then be further divided based on the
status of the Entity. The idea of the avatar being aware of status was proposed by
Schell (2008) as a way for the avatar to appear more alive.
The animations demonstrating this awareness do not have to be deeply complex;
reaching for a weapon when approaching something dangerous, or dropping down
to head height when speaking to a small child, subtly shows an awareness of the
Entity the avatar is interacting with.
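The categorisation logic of Figure 5-3 could be sketched as a simple selection function driving animation choice. The entity fields and animation names below are illustrative assumptions, not taken from any shipped game.

```python
def approach_animation(entity, seen_before, relation=None):
    """Pick an approach animation from the Entity categorisation."""
    if not seen_before:
        # New entity: judge by appearance alone.
        return "cautious_approach" if entity["looks_dangerous"] else "open_approach"
    if relation == "foe":
        return "draw_weapon"
    if relation == "friend" and entity.get("status") == "child":
        return "crouch_to_head_height"
    return "neutral_greeting"

print(approach_animation({"looks_dangerous": True}, seen_before=False))  # cautious_approach
print(approach_animation({"status": "child"}, True, relation="friend"))  # crouch_to_head_height
```

A full system would refine the friend and foe branches with the status sub-categories from the figure, but the shape of the decision stays the same.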
5.2.1.3 Player Agency
In the context of games, player agency refers to the player being able to take action
and see the results of the choices they have made (Murray 1997), which is normally
accomplished through controlling the avatar. Applying SA to player agency allows
the avatar to respond intelligently to the player's input. This is mainly accomplished
by the avatar showing projection: forecasting the player's movements and
adjusting the response accordingly.
An example of how this can be applied in game is in a preview of the latest
(unreleased at time of writing) Zelda game; the developers comment on how Epona,
a horse, will not collide with trees when she is being ridden because "real horses
don't run into trees" (GamersPrey HD 2014). In this case, the player still retains
agency, as Epona will move in the inputted direction, yet Epona responds
intelligently to that input in the context of the surroundings, projecting that she will
hit a tree and then altering course to avoid it.
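A minimal sketch of this kind of projection might forecast the avatar's position a short time ahead and, if the forecast intersects an obstacle, adjust the input direction. The steering rule and all names here are assumptions for illustration only.

```python
def project_and_steer(pos, direction, obstacles, lookahead=2.0, radius=1.0):
    """Return the (possibly adjusted) movement direction after projection."""
    # Forecast where the current input will put the avatar shortly.
    fx = pos[0] + direction[0] * lookahead
    fy = pos[1] + direction[1] * lookahead
    for ox, oy in obstacles:
        if (fx - ox) ** 2 + (fy - oy) ** 2 < radius ** 2:
            # Projected collision: veer perpendicular to the input.
            return (-direction[1], direction[0])
    return direction  # player input passes through unchanged

# Heading straight at a tree two metres ahead triggers a course change;
# a tree off to the side leaves the input untouched.
print(project_and_steer((0, 0), (1, 0), obstacles=[(2, 0)]))  # (0, 1)
print(project_and_steer((0, 0), (1, 0), obstacles=[(2, 5)]))  # (1, 0)
```

The key property mirrored from the Epona example is that agency is preserved: the input direction is only modified when the projection predicts a collision.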
5.2.1.4 Narrative Events
Narrative events are triggered real-time or pre-rendered cut-scene events that
advance the story or world of the game. Using The Wind Waker (2013) as an
example, a narrative event could be a critical narrative point, such as when Aryll is
kidnapped, or a minor side-quest moment, such as turning the lighthouse's light
back on on Windfall Island.
Perception can be used to direct the player's attention to these moments. The Last
of Us (2013) had an optional mechanic where, during a background in-game
cutscene, the player could focus the avatar's attention in the direction of the event
by pressing a button, helpfully framing the cut-scene for the player under the
illusion that the avatar was watching.
The next stage would be for the avatar to react to the cutscene, showing
comprehension of what happened. At the beginning of Tomb Raider (2013), Lara is
impaled on a piece of bone during a quick-time event. After she pulls herself free,
she holds her wound during her gameplay animations until she rests for the night
and heals. Not only does she maintain consistency between cutscenes and
gameplay, she is also reacting to what happened during that cutscene,
demonstrating comprehension of the event.
For cutscenes with little player agency, projection will most likely be included in the
bespoke animations. In the above case of Aryll’s kidnapping, Link shows projection
when he yells in fright just before Aryll is taken away. In cutscenes where the player
retains agency, projection requires a case-by-case evaluation to see if it can be
applied without disrupting the player’s movement.
5.2.2 Demonstrating Personality
SA offers the opportunity of expressing the avatar's personality whilst being aware,
thus improving the avatar's believability alongside visual coherence. SA does this
because it incorporates a person's experience, training and goals into the decisions
they make, which are then reflected in their reactions; if two people had the
same information and decision to make, they may still arrive at different outcomes
as "no two individuals react identically, since no two are the same" (Egri 1960,
p.38).
Being able to express personality is key to believability (Thomas and Johnston 1981;
Loyall 1997; Sandercock, Padgham and Zambetta 2006; Laird 2000). Having a
believable avatar can benefit the game in two ways. Firstly, “believable characters
significantly increase the immersion and the ‘fun’ that a player has” (Sandercock,
Padgham and Zambetta 2006, p.357). Secondly, believability is vital in creating
empathy in the player, which in turn means the player has a stronger engagement
with the avatar (Hall et al 2005).
One more point to consider when designing interactions for personality through SA
is that the avatar's behaviour should be consistent with their emotional and
motivational state (Ortony [no date]). An (extreme) example of non-coherent
behaviour would be if Lara (Tomb Raider 2013) started laughing whilst stabbing
another human, as if she was enjoying it. This would undermine her sense of
vulnerability and remorse, which the developers had been aiming for
(GameNewsOfficial 2012).
6 PRACTICAL APPLICATION OF SITUATIONAL AWARENESS
During the course of this project an artefact, an interactive demo where the player
could control an avatar in an environment, was developed to test the practical
application of SA. This section details elements that arose when designing and
implementing SA animations in the artefact.
6.1 Contextualisation
Contextualisation is very important for gameplay animations. As previously
discussed in section 4, diverse reactive animations are often missing in games, and
when this happens in a highly polished environment it produces a visual discord.
However, it was observed during the development of the artefact that this visual
discord can also occur in the opposite scenario, when the animations are highly
reactive due to SA but the environment has little to support these reactions. Thus,
to avoid a visual discord, there should be consistency between the detail of the
environment and the avatar's animations.
Contextualisation was also found to be significant when playing in-game narrative
events where the player retains agency. In the artefact's development, it was
found that staging and timing needed to be carefully considered so that the
narrative event could be seen by the player; otherwise, the avatar's SA animations
made little sense.
To resolve the contextualisation issue in the artefact, lighting and environment
layout played roles in directing the player's attention to the required area. Two
techniques used were: highlighting the area of interest with light whilst keeping
the rest of the surroundings dark, and raising areas of interest so they were
viewable above the head of the avatar, hence not requiring the player to make
large adjustments to the camera angle. Systems were also implemented to ensure
that if the avatar was not close enough for the event to be seen, it did not happen.
This ensured that when the avatar responded to the event, the SA animations were
correctly contextualised.
6.2 Prioritisation
During the practical application of SA, it was discovered that the four categories
defining the avatar's surroundings needed prioritising so the avatar could respond
appropriately as surroundings changed. Thus, a prioritisation model was developed.
This logic would be built into the avatar's AI or state machine to aid switching
between animations.
Figure 6-4 Prioritisation Model (layers, bottom to top: Player Agency, The
Environment, Narrative Events, Other Characters)
Figure 6-4 depicts the prioritisation model, which concentrates on an upward path
of layered behaviour. This allows behaviours to be easily interrupted or layered in
response to the changing surroundings. Layered behaviour can be technically
achieved using feathered blending, animation masks, additive animation or
different sets of animations.
Using this model, the avatar always responded to player agency foremost, with
reactions to the environment, narrative events or other characters layered on top.
This allowed SA animations to play without disrupting player agency.
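As an illustration, the prioritisation model could be sketched as a layered stack in which player agency always forms the base layer and higher-priority reactions are blended on top rather than replacing it. The layer names below are assumptions for illustration.

```python
# Priority order, base layer first, following the upward path of the model.
PRIORITY = ["player_agency", "environment", "narrative_event", "other_character"]

def build_animation_layers(active_stimuli):
    """Order active reactions for layered blending, base layer first."""
    layers = ["player_agency"]  # agency is never interrupted
    for stimulus in PRIORITY[1:]:
        if stimulus in active_stimuli:
            layers.append(stimulus)
    return layers

# Running (agency) while shivering (environment) and spotting another character:
print(build_animation_layers({"environment", "other_character"}))
# ['player_agency', 'environment', 'other_character']
```

In an engine, each returned layer would map to an animation layer or mask, so reactions blend over locomotion instead of cutting it off.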
6.3 Performance
When designing emotive or expressive SA animations for an avatar, it is important
to consider how the posture looks from the typical gameplay camera position and
angle. In the case of third person games, the default position is behind and slightly
above the character (Falstein 2004). As the player will spend most of the game
looking at the avatar from this angle, the pose needs to be readable from this view.
During development of the artefact, it was found that the curvature of the spine,
shoulders and head played an important role in the readability of poses from
behind the avatar. Contrasting between being curled over and straightened up
helped accentuate each pose.
When applying perception, comprehension and projection to the avatar’s
animations in the artefact, the first pass, whilst reactive, did not depict the avatar’s
personality. In the second pass, closer attention was paid to the avatar’s goals,
experiences and knowledge (as highlighted in Endsley’s (2000) model of SA) when
designing the reactive animations. This brought coherency to the animations,
allowing the avatar's personality to emerge.
7 CONCLUSION & FUTURE STUDY
In conclusion, by applying Endsley's (2000) model of SA, with its core elements of
perception, comprehension and projection, to an avatar's gameplay animations, it
is possible for an avatar to react to both the physicality and the tone of their
surroundings. As SA incorporates a person's abilities, experiences, training and
goals into the decision-making process, the avatar can also demonstrate their
personality through how they react to their surroundings.
Following a prioritisation model, SA can be applied technically through layered
animations, allowing the player to retain agency and thus not disrupting gameplay.
Yet, as the emotions and moods demonstrated by the avatar are not controlled by
the player, this creates the distance required for empathy, which in turn means the
bond between player and avatar becomes stronger.
As SA allows the avatar to be consistently aware of all parts of their surroundings,
due to the three levels of SA, and allows the expression of personality, it fulfils the
requirements determined for a skilful performance in games. Thus, it can be used
alongside the framed image and environmental storytelling to achieve visual
coherence between the avatar and their surroundings.
For future study, one area of exploration would be to see how far the avatar's
awareness could be pushed, to the degree where it starts to interfere with player
agency, before it becomes immersion-breaking. It would be interesting to measure
the player's reaction if an avatar refused to kill another character because that
decision conflicted with their personality, or ran away from a gang of enemies
because they were too frightened.
8 APPENDICES
8.1 Appendix A – AI Behaviour
This appendix explains in detail how the AI systems discussed in section 2.2 work.
8.1.1 Relaxed Behaviours
In Bobby Anguelov and Jeet Shroff's 2015 GDC talk, they explain their approach to
designing realistic, interactive AI behaviour through external actions. An external
action is essentially an extension of the smart-object approach, where the AI can
be dragged and dropped into the level and systemically reacts to the environment
around it.
To achieve this, the external action contains the AI behaviours, animations and
sounds needed to execute it. It also contains the context related to the external
action. Inside the context are the conditions, which specify which NPCs the
behaviour can be applied to, as well as a spatial link, which defines its position in
the world. Figure 8-5 visually depicts how these elements are contained within the
external action.
Figure 8-5 External Actions (the External Action contains AI Behaviours, Animations,
Sounds and a Context; the Context contains Conditions and a Spatial Link)
As the AI behaviour is embedded in the environment, external actions are highly
reusable: they can be placed anywhere and, if combined with runtime retargeting
and IK solvers, can be used by any NPC. They are also memory efficient, as they
are not loaded unless the environment they are in is loaded.
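As a rough illustration, an external action could be modelled as a data structure bundling behaviours, animations and sounds together with a context holding conditions and a spatial link. The field names below are guesses inferred from the talk summary above, not the speakers' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Context:
    conditions: list   # e.g. which NPC archetypes may use this action
    spatial_link: tuple  # world position the action is anchored to

@dataclass
class ExternalAction:
    behaviours: list
    animations: list
    sounds: list
    context: Context

    def usable_by(self, npc_type):
        """An NPC may run this action only if it meets the conditions."""
        return npc_type in self.context.conditions

lean = ExternalAction(
    behaviours=["lean_on_wall"], animations=["lean_loop"], sounds=["sigh"],
    context=Context(conditions=["civilian"], spatial_link=(4.0, 0.0, 1.5)))
print(lean.usable_by("civilian"))  # True
print(lean.usable_by("guard"))     # False
```

Because the action carries its own conditions and spatial link, copies of it can be dropped anywhere in a level, which is the reusability property described above.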
8.1.2 Humanity Systems
In Shawn Robertson's GDC talk (2014) about the AI and animation systems for
Elizabeth, the buddy character in BioShock Infinite, he discusses how the team
approached developing Elizabeth to create the illusion of life. The following is an
overview of the talk.
Cut-scenes offered high-fidelity custom animations, an easier way to build life, but
at the cost of low interactivity. Although this works in a film setting, in an
interactive setting the illusion of life depends on the AI's reactions to the player
and to other AIs.
However, there were far too many rules for these interactions to be truly scripted
in a way that gave Elizabeth humanity. Instead, a team called the 'Liz squad' was
created to implement solutions that only needed to hold up from the player's
perspective; it did not matter if, off screen, she was teleporting or not obeying
physics.
This team devised her humanity system, a set of five systemic sub-systems, each
focusing on a different area of performance and interaction. As these systems were
systemic, Elizabeth could bail out of any of them at any point in order to move with
the player, thus not disrupting player agency.
The five systems were:
Emotion system: a library of emotions was created that could be called upon
systemically if Elizabeth saw something she did or did not like, or be scripted for
narrative events. There was also a catch-all emotion, in case something happened
but did not trigger a specific emotion. These emotions could be feathered over
locomotion, or be full body if she was standing still.
Gesture system: Elizabeth had a lot of voice-over (VO) during the game, but the
animators could not custom-animate every sentence. Instead, a library of gestures
was developed which could then be tagged to the VO by animators, designers or
others.
Head and eye tracking: Elizabeth will look around her, either at other AIs or
towards marked locations. This helps keep her alive if the player is not doing
anything, by maintaining a degree of movement in her body. It was also used to
draw the player's attention to important parts of the game.
Smart terrain: this was developed to aid Elizabeth's interactions with the
environment. Technically, it works by teleporting Elizabeth to a tagged smart-terrain
object. She will be placed in the first frame of her animation and, if the player looks
towards her, she will play the animation for that object. This also evolved to allow
'golden moments', special one-off animations where, if the player looks in the right
direction at the right time, they will see Elizabeth do something special, like
plucking a flower from a bush and then placing it on a body.
Combat system: the aim was not to interfere with the player but to retain her
character. This led to Elizabeth being able to hide behind cover in combat, flinching
when shot at, and looking towards whoever is shooting her, a subtle hint to the
player as to where the danger is coming from.
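The catch-all and feathering logic of the emotion system described above could be sketched as a simple lookup with a fallback. All mappings and names here are illustrative assumptions, not the actual BioShock Infinite implementation.

```python
# Hypothetical stimulus-to-emotion library, plus a catch-all fallback.
EMOTION_LIBRARY = {
    "finds_coin": "delight",
    "sees_corpse": "horror",
}

def react(stimulus, is_moving):
    """Pick an emotion and its playback mode for a given stimulus."""
    emotion = EMOTION_LIBRARY.get(stimulus, "generic_notice")  # catch-all
    # Feather over locomotion while moving; full body when standing still.
    mode = "feathered_over_locomotion" if is_moving else "full_body"
    return emotion, mode

print(react("sees_corpse", is_moving=False))  # ('horror', 'full_body')
print(react("hears_noise", is_moving=True))   # ('generic_notice', 'feathered_over_locomotion')
```

The catch-all guarantees the character always reacts in some way, which is the robustness property the talk emphasises.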
8.2 Appendix B – Game Animation Case Study Results
Seven games have been examined for the characters' awareness in relation to their
surroundings. The following are observations made from playing these games.
Table 8-1 The Wind Waker (2013)

Movement:
- Leans forwards up steep slopes
- Slows down before stopping
- Can run up stairs correctly, but not walk
- Demonstrates balancing when on ship

Expression:
- Expresses emotion through face
- Blinks
- Shows happiness when opening chests

Awareness:
- Looks at other characters
- Doesn't react if collides with another character
- Will walk into walls
- Will appear exhausted when on low health
- Does not show awareness of heat
- Leans into strong winds
Table 8-2 Journey (2013)

Movement:
- Input is slowed when running up a hill
- Leans forward to accommodate incline
- Jump size blending
- Runs up slope instead of walking up steps
- Slides to a stop when running
- Moves feet when turning
- Walks differently in snow than in sand, more stompy

Expression:
- No facial animation, not even blinking

Awareness:
- Stops walking if hitting a wall
- Doesn't look at movement
- When scarf is empty, does not react when player tries to jump
- Doesn't show awareness of heat or cold
Table 8-3 The Last of Us (2013)

Movement:
- Strafes left and right instead of turning in that direction
- If running, doesn't stop immediately
- Walks up and down stairs correctly
- Can climb over different size objects

Expression:
- Blinks
- Expresses emotion through verbal cues rather than animation
- Most reactions are kept to cutscene elements

Awareness:
- Looks in direction camera is pointing
- Stops walking if hitting a wall
- Looks in direction of other characters if talking
- No reaction when colliding with other characters
- Reaches out to touch vertical surfaces around
- Can't hurt comrades
Table 8-4 Captain Toad: Treasure Tracker (2015)

Movement:
- Moves faster when walking down slopes
- Stops instantly
- Leans into wind
- Walks up stairs correctly
- Movement is slower when walking underwater
- Climbs up and down ladders correctly

Expression:
- Full facial animation
- Expresses happiness when receiving gifts
- Will fall asleep if left in a safe environment
- Shows distress when being chased by the enemy or falling
- Sneezes when climbing out of water

Awareness:
- Differentiates between friend and foe
- Shivers with cold
- Shakes with fright in spooky locations
- Holds breath when under water
- Celebrates when hitting a foe with a turnip
- Watches enemies which are close by
Table 8-5 Ni No Kuni: Curse of the White Witch (2010)

Movement:
- Walks up and down stairs correctly
- If movement stops mid-cycle, will adapt to the correct foot
- Wobbles when balancing
- Feet adjust to terrain angle

Expression:
- Blinks
- Development in after-battle celebrations reflecting growing confidence

Awareness:
- No reaction to danger around
- Will walk into walls
- Will look at other characters
- Reacts to cold by shivering
- Animations change to reflect health
Table 8-6 Assassin's Creed III (2012)

Movement:
- Feet track to different height terrain
- Jump blending
- Leans/scrambles up steep terrain
- Advanced climbing system

Expression:
- Blinks

Awareness:
- Looks at other characters
- Walks into walls
Table 8-7 Tomb Raider (2013)

Movement:
- Can climb on different height objects
- Leans when walking up steep slopes
- Feet track to different terrain height
- Demonstrates balancing
- Jump blending
- Movement changes if in a safe or dangerous area

Expression:
- Full facial animation
- Facial expression changes depending on mood
- Blinks

Awareness:
- Shivers with cold or fear
- Holds injured area
- Covers head from falling debris
- Will look at points of interest in the environment
- Flinches when there are explosions
- Reaches out to touch environment
- Looks towards danger
9 REFERENCES
Ortony, A. [no date]. On making believable emotional agents believable. In: R. Trappl, P. Petta and S. Payr, eds. Emotions in Humans and Artifacts. Cambridge: MIT Press. 2002, pp.189–212.
Anguelov, B. and Shroff, J. 2015. Remember to relax! Realizing relaxed behaviour in AAA games. GDC. [online]. Available from: http://www.gdcvault.com/play/1022230/Remember-to-Relax-Realizing-Relaxed [Accessed 10 May 2015].
Assassin’s Creed III. 2012. [computer game]. Sony PlayStation 3. Ubisoft Montreal.
BioShock Infinite. 2013. [computer game]. Multiple platforms. Irrational Games.
Captain Toad: Treasure Tracker. 2015. [computer game]. Nintendo Wii U. Nintendo.
Cartwright, M. 2014. Animation bootcamp: fluid and powerful animation within frame restrictions. GDC. [online]. Available from: http://www.gdcvault.com/play/1020575/Animation-Bootcamp-Fluid-and-Powerful [Accessed 28 March 2015].
Develop. 2013a. Advancements in AI and character behaviour. [online]. Available from: http://www.develop-online.net/tools-and-tech/advancements-in-ai-and-character-behaviour/0117643 [Accessed 14 October 2014].
Develop. 2013b. The next-gen step in character animation. [online]. Available from: http://www.develop-online.net/tools-and-tech/the-next-gen-step-in-character-animation/0186626 [Accessed 14 October 2014].
Egri, L. 1960. The art of dramatic writing: its basis in the creative interpretation of human motives. New York: Simon and Schuster.
Endsley, M.R. 2000. Theoretical underpinnings of situation awareness: A critical review. In: M.R Endsley and D.J. Garland, eds. Situation awareness analysis and measurement. Mahwah: Lawrence Erlbaum Associates. 2000, pp.3–32.
EuroGamer. 2013. Tech analysis: the Last of Us. [online]. Available from: http://www.eurogamer.net/articles/digitalfoundry-the-last-of-us-tech-analysis [Accessed 22 October 2014].
Falstein, N. 2004. Lights, camera, action. Game Developer. 11(7): p.50.
Gamasutra. 2000. Environmental storytelling: creating immersive 3D worlds using lessons learned from the theme park industry. [online]. Available from: http://www.gamasutra.com/view/feature/131594/environmental_storytelling_.php?page=1 [Accessed 09 November 2014].
Gamasutra. 2013. Video: improving AI in Assassin’s Creed III, XCOM, Warframe. [online]. Available from: http://www.gamasutra.com/view/news/193015/Video_Improving_AI_in_Assassins_Creed_III_XCOM_Warframe.php [Accessed 22 October 2014].
GameNewsOfficial. 2012. Tomb Raider: the final hours of Tomb Raider (episode 2). [online]. YouTube. Available from: https://www.youtube.com/watch?v=jiJmKFUzkjo [Accessed 28 September 2014].
GamersPrey HD. 2014. Legend of Zelda – brand new gameplay [HD]. [online]. YouTube. Available from: https://www.youtube.com/watch?v=CAE3FFJUdto [Accessed 15 March 2015].
Gaulin, S.J.C. and McBurney, D.H. 2001. Evolutionary Psychology. 2nd ed. New Jersey: Pearson Education, Inc.
Gibbs, J. 2002. Mise-en-scene: film style and interpretation. London: Wallflower.
Hall, L. et al. 2005. Achieving empathetic engagement through affective interaction with synthetic characters. In: J. Tao, T. Tan and R.W. Picard, Affective computing and intelligent interaction. Heidelberg: Springer-Verlag. 2005, pp.731-738.
Hendricks, J. 1999. Situational awareness. Trailer Boats. 28(8): p.10.
Hooks, E. 2011. Acting for Animators. 3rd ed. New York: Routledge.
Hooks, E. and Jungbluth, M. 2013. Animation bootcamp: designing a performance. GDC. [online]. Available from: http://www.gdcvault.com/play/1017634/Animation-Bootcamp-Designing-a [Accessed 9 May 2015].
Journey. 2013. [computer game]. Sony PlayStation 3. thatgamecompany.
Just Cause 3. 2015. [computer game]. Multiple platforms. Avalanche Studios.
Loyall, A.B. 1997. Believable agents: building interactive personalities. PhD thesis. Carnegie Mellon University.
Monaco, J. 2009. How to read a film: movies, media and beyond. 4th ed. New York: Oxford University Press.
Murray, J. 1997. Hamlet on the holodeck: the future of narrative in cyberspace. Cambridge: M.I.T.Press.
Ni No Kuni: Curse of the White Witch. 2010. [computer game]. Multiple platforms. Level-5 and Studio Ghibli.
Perkins, S., et al. 2008. A spatial awareness framework for enhancing game agent behaviour: Proceedings of the ACM SIGGRAPH symposium on Video games, [no place] [no date]. New York: ACM.
Robertson, S. 2014. Creating BioShock Infinite’s Elizabeth. GDC. [online]. Available from: http://www.gdcvault.com/play/1020545/Creating-BioShock-Infinite-s [Accessed 9 May 2015].
Sandercock, J., Padgham, L. and Zambetta, F. 2006. Creating adaptive and individual personalities in many characters without hand-crafting behaviours. In: J. Gratch. et al, eds. Intelligent Virtual Agents. Heidelberg: Springer-Verlag. 2006, pp. 357-368.
Schell, J. 2008. The art of game design: a book of lenses. London: Elsevier/Morgan Kaufmann.
Skyward Sword. 2011. [computer game]. Nintendo Wii. Nintendo.
Sloan, R.J.S. 2011. Agency and animation: the performance of interactive game character. Animation Journal. 19: pp.20-49.
The Last of Us. 2013. [computer game]. Sony PlayStation 3. Naughty Dog.
The Wind Waker. 2013. [computer game]. Nintendo Wii U. Nintendo.
Thomas, F. and Johnston, O. 1981. The illusion of life. New York: Disney Editions.
Tomb Raider. 2013. [computer game]. Microsoft Xbox 360. Crystal Dynamics.
Worch, M. and Smith, H. 2010. “What happened here?” environmental storytelling. GDC. [online]. Available from: http://www.gdcvault.com/play/1012647/What-Happened-Here-Environmental [Accessed 14 September 2014].
10 BIBLIOGRAPHY