CHAPTER 2
THEORETICAL FOUNDATION
2.1 Theoretical Foundation
2.1.1 Animation
Since this work is about creating a tool for animation purposes, it is necessary to define what “animation” is. The following excerpts are definitions of animation from an online film education website and Apple.com:
“Animation is the process by which we see still pictures move.”- filmeducation.org [2]
“Animation is a visual technique that provides the illusion of motion by displaying a collection of
images in rapid sequence.” –developer.apple.com [3]
Figure 3 - Animation is Concentration (Source: http://www.theanimatorssurvivalkit.com/images/bio_animation_is_concentration.jpg)
At the most basic interpretation, animation is simply a string of images, each slightly different from the last, shown in rapid succession to create an optical illusion. This phenomenon is explained by the persistence of vision theory, as described in the following:
“Our brain holds onto an image for a fraction of a second after the image has passed. If the eye
sees a series of still images very quickly one picture after another, then the images will appear to
move because our eyes cannot cope with fast-moving images - our eyes have been tricked into
thinking they have seen movement.” –filmeducation.org [2]
As explained further by filmeducation.org, only subtle differences between the pictures are needed to achieve the illusion of movement. For example, if the position of a ball changes from one picture to the next, we get the illusion that the ball is moving.
As to why animation is worth pursuing despite the difficulty and cost it requires, the following excerpt from The Animator’s Survival Kit sums it up best:
“Drawings that walk and talk and think: seeing a series of images we’ve done [that] appear to be
thinking is the real aphrodisiac. Plus creating something that is unique, [..] is endlessly
fascinating” – Richard Williams [4]
Furthermore, just as actors perform their characters in a film, in animation it is the animator who performs and gives character to the drawings. The animator acts through the illusion of motion and life in the drawings. The same is true for 3D animation, where the animator manipulates a 3D model instead of drawings.
2.1.1.1 History of Animation
According to the section about the history of animation in The Animator’s Survival Kit [4], the idea of animation has been around for over 35,000 years. These were not real animations, but crude ancient cave drawings that suggest motion.
Figure 4 - The Eight Legged Boar (Source: http://tx.english-ch.com/teacher/yanda/animation_boar.jpg)
Ancient civilizations, such as the Egyptians and Ancient Greeks, created paintings on pillars, walls, and pottery that show a succession of actions. These are still not quite animation, but the idea of motion through images was there.
Figure 5 - Egyptian Burial Mural (Source: http://flimevolution.webs.com/2sequentialdrawing.jpg)
In 1825, Peter Mark Roget described the vital principle of the persistence of vision [5]: the theory that our brain temporarily stores an image of what we have just seen moments before. If a succession of images is shown, the brain can be tricked into seeing a moving picture. A group of contraptions based on this theory, creating the illusion of motion, then appeared on US markets around this time period.
The thaumatrope is a disc held by two pieces of string. In one example, the disc has a picture of a cage on one side and a picture of a bird on the other. When the disc is spun by the strings, the resulting optical illusion shows the bird inside the cage.
Figure 6 - The Thaumatrope
(Source: http://intelligentheritage.files.wordpress.com/2010/09/thaumatrope1.jpg)
The phenakistoscope is a contraption in which two discs are connected by a shaft. The front disc is blank except for slits, while the rear disc has a sequence of drawings. Looking through the slits as both discs rotate, an illusion of motion is achieved.
The zoetrope is another contraption, consisting of a long strip of paper with a sequence of images on it placed inside a cylinder with slits. By spinning the cylinder and looking through the slits, the illusion of motion is achieved.
Figure 7 - A zoetrope
(Source: http://1.bp.blogspot.com/-4sj9U2QClg4/Tb7i16GkKHI/AAAAAAAAAb4/DL2OAbIOSUc/s1600/Zoetrope.jpg)
In 1868, the flipper book surfaced and would later become the basis of subsequent animations. It is simply a book filled with drawings: the reader holds the bound side of the book and flips through the drawn pages. The result is the illusion of motion – in other words, animation.
It was not the first contraption to create an illusion of motion, but the flipper book introduced a concept that would be used by artists in making conventional animations years later and even today – flipping through pages of drawings to create animation.
The first recorded feature-length animated film is the Argentinean film El apóstol from 1917 [6]. Unfortunately, all copies of the film were lost in a fire.
In 1932, Disney created the first full-color animated cartoon film with Flowers and Trees.
Figure 8 - Flowers and Trees
(Source: http://images.bcdb.com/ad_im/disney/flowers_trees.jpg?u=)
Several forms of animation exist. Conventional, traditional animation is done by painting several images and photographing the hand-drawn images in sequence with a camera [7]. Another technique is stop-motion animation [8], where photographs of static scenes are taken frame by frame. Each photograph presents one step of an action, creating an illusion of motion when the photographs are sequenced together. A popular example of this type of animation is The Nightmare before Christmas.
Figure 9 - The Nightmare before Christmas
(Source: http://www.dan-dare.org/FreeFun/Images/CartoonsMoviesTV/NightmareBeforeChristmasPoster1993.jpg)
Another technique was made possible by the advancement of computer technology: computer-generated (CG) 3D animation. The first feature-length fully CG animated film on record is Toy Story by Disney/Pixar in 1995.
Figure 10 - Toy Story
(Source: http://www.animationsource.org/sites_content/animation_source/img_site/toy_story_ver1.jpg)
2.1.1.2 Computer Generated 3D Animation
Like traditional animation, 3D CG animation is achieved by rendering [9] – translating 3D data into two-dimensional CG images – frame by frame, and sequencing the frames together to achieve the illusion of motion.
Unlike traditional animation, the animator is not bound to the two-dimensional limitations of the drawing paper and instead works in a virtual three-dimensional space. The animator can place a 3D object anywhere in the virtual world and view it from any angle with ease, thanks to the 3D software. In traditional animation, the animator is bound to drawing and redrawing the image in order to change the viewing angle.
The main advantages of 3D animation [10] are “flexible control over the scene
and animation” (e.g.: control over camera and object placement as well as viewing
angles), “rich collection of tools that aid the process of modeling and animation”,
“lighting and camera setup [that is] an exact replica of real world environment”, and
“ultra realism”.
Compared to traditional animation, computer animation can be described as “an extension of puppetry – high tech marionettes.” [4] That is, instead of focusing on drawing a succession of images, in 3D CG animation the focus is on manipulating 3D models and objects to achieve believable animation. It is akin to playing with puppets, only with 3D models instead.
2.1.1.3 The Animation Pipeline
In the 3D CG animation creation process, there is a set of production stages – the pipeline. While each animation company may have its own version of the animation pipeline, the general pipeline goes as in the following example from Pixar [11].
There are four main stages of production: development, where the storyline is created; pre-production, where technical difficulties are addressed; production, where the film is made; and post-production, where the final product is polished.
At the beginning of development, ideas are pitched within the development team in order to form a concept and explore its possibilities. The idea is then summarized in text, which may see additional development in later stages. Next, a storyboard is drawn, which is a collection of rough drawings depicting the story, along with color scripts that dictate the dominant color in each scene and define the mood. It is in essence a visual outline of the potential film.
Afterwards, the team creates scratch recordings of dialogs. Voice actors may be introduced later on, although in some cases this may not be necessary. Several variations of the dialog are recorded, and the recording that best fits the storytelling goes into the animation.
A reel is then made by the development team. It is essentially the storyboard as a
film, in order to capture the timing of each sequence. Based on the written text and
storyboard, concept drawings of characters and their world are then drawn.
In the pre-production stage, based on the concept drawings, the 3D models of the characters are created and then “articulated” – that is, a set of bones, or rig, is inserted into the 3D models. Subsequently, the world and other props are created in 3D.
The team then performs simple choreography on the models, and layout shots are taken to discover the feel and look of each sequence. The layout shots may use multiple angles to provide options for which angle gives the best impact on the storytelling.
Once the layouts are finalized, the project goes into the production stage where
shots are then animated. In the animation phase, the animators choreograph the
movements and facial expressions and match the action with the dialog and sound
recording of each model in the scene – much like puppeteers manipulating their puppets.
Then, shaders are applied to the 3D models. These shaders dictate how the surface of the model reacts to light. Accordingly, the placement of lights in the lighting stage comes immediately after applying shaders. Lights are set up to follow the color script in the storyboard.
Once everything is ready, the film is then rendered. In the case of Pixar, a single
rendered frame is 1/24th of a second (i.e.: 24 frames per second) and each frame
generally takes up to six hours to render. Some frames may take up to ninety hours to
render.
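The scale of these render times can be illustrated with a short back-of-the-envelope calculation. The sketch below assumes a hypothetical 90-minute feature and the six-hour average frame time quoted above; the numbers are for illustration only.

```python
# Rough estimate of total render time for a feature film,
# using the figures quoted above (24 fps, six hours per frame).
FPS = 24                   # frames per second (film standard)
film_minutes = 90          # hypothetical feature length (assumption)
hours_per_frame = 6        # average render time quoted above

total_frames = film_minutes * 60 * FPS
total_render_hours = total_frames * hours_per_frame

print(total_frames)        # 129600 frames
print(total_render_hours)  # 777600 machine-hours
```

In practice, studios spread this load across a render farm of many machines, so the wall-clock time is far shorter than the raw machine-hour total.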
In the post-production stage, sound effects and additional visual effects are added to the rendered film to polish the final product before launch.
2.1.2 3D Animation Basics
2.1.2.1 Animation Terms
3D models are three-dimensional representations of objects through a set of mathematical equations. The process of representing these objects is called 3D modeling [8]. A 3D model can also be called a mesh [12], which is a “collection of vertices (points), edges (lines), and faces (surfaces) that describe a 3D object.”
A bone refers to a 3D object that can control the vertices of other objects. A structured collection of bones is called a rig. Rigging [12] is the process of applying constraints and relationships between bones and the 3D model, thereby creating a structured organization of bones, or a rig. Rigs aid the manipulation of 3D models and are akin to skeletal architectures.
A scene [12] in 3D manipulation refers to a virtual space where objects, cameras, and lights may be placed. A single work can contain multiple scenes, and multiple scenes may share the same objects but with different parameters, such as the camera angle. It is basically a way for animators to organize their work, allowing them to use different environments in the same file, separated into multiple scenes.
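The mesh definition above (vertices, edges, faces) can be made concrete with a small sketch. The Mesh class below is illustrative only and does not mirror the internal data structures of Blender or any other 3D package.

```python
# A minimal mesh: vertices are 3D points, edges index pairs of
# vertices, and faces index three or more vertices each.
class Mesh:
    def __init__(self):
        self.vertices = []   # list of (x, y, z) tuples
        self.edges = []      # list of (v_index, v_index) pairs
        self.faces = []      # list of vertex-index tuples

# A single triangle as the simplest possible mesh.
tri = Mesh()
tri.vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tri.edges = [(0, 1), (1, 2), (2, 0)]
tri.faces = [(0, 1, 2)]

print(len(tri.vertices), len(tri.edges), len(tri.faces))  # 3 3 1
```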
2.1.2.2 The 3D view
3D space is measured on three axes that create width, height, and depth [13]. However, the computer screen through which the 3D space is viewed can only display a two-dimensional representation.
A perspective view in the 3D manipulation software allows users to view the 3D world just as they would look at the real three-dimensional world through their eyes or a camera.
This way we can view the 3D world just as we would the real world, and as such, similar phenomena occur: objects situated behind other objects are visually occluded when viewing the 3D world through the computer screen. Such occlusion may also create confusion when selecting vertices or bones.
Figure 11 - Visual occlusion of vertices
(Source: http://upload.wikimedia.org/wikibooks/en/8/83/Blender3D-Noob-To-Pro-GingerBreadMan2LegPulled.png)
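The perspective view described above amounts to projecting 3D points onto a 2D screen plane. The sketch below shows the simplest pinhole projection model, with a hypothetical focal length; actual 3D packages use more elaborate camera models, so this is an illustration, not Blender's implementation.

```python
# Simple pinhole perspective projection: a point's screen position
# is its x and y scaled by focal_length / depth, so points on the
# same line of sight land on the same pixel -- which is exactly why
# occluded vertices are hard to select.
def project(point, focal_length=1.0):
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

near = (1.0, 1.0, 2.0)
far = (2.0, 2.0, 4.0)
print(project(near))  # (0.5, 0.5)
print(project(far))   # (0.5, 0.5) -- hidden behind the nearer vertex
```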
2.1.2.3 Lip sync Animation
Lip synchronization (lip synch) is defined by reference.dictionary.com as
follows:
“Combining audio and video recording in such a way that the sound is perfectly synchronized
with the action that produced it; especially synchronizing the movements of a speaker's lips with
the sound of his speech” – reference.dictionary.com [14]
In animation work, lip synch means matching the shape and movement of the lips (i.e., animating the lips) of the model to the audio recording.
In order to lip synch properly, it is important to match the animation to the pronunciation of the words in the audio recording – that is, to animate as closely as possible to the way the words are spoken. To do that, animators need to recognize the phonemes of each word in the audio recording.
“[Phoneme is] any of a small set of units, usually about 20 to 60 in number, and different for each
language, considered to be the basic distinctive units of speech sound by which morphemes,
words, and sentences are represented.” – reference.dictionary.com [14]
Basically, a phoneme is a description of how a given word is pronounced – the sound of a word. Despite the number of English words that exist, each of them follows the same set of phonemes. In dictionaries, the phonemes shown may conform to the International Phonetic Alphabet (IPA) system, created in 1888 [15] to transcribe the sounds of speech in a way applicable to all languages.
Using the collection of words and IPA phonemes contained in a dictionary, it is theoretically possible to derive a collection of phonemes automatically by searching the dictionary.
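That automatic derivation can be sketched as a plain dictionary lookup. The toy word-to-phoneme table below is illustrative only: it uses ARPAbet-style symbols of the kind found in pronunciation dictionaries such as CMUdict (stress markers omitted), and is nowhere near a complete lexicon.

```python
# A toy pronunciation dictionary mapping words to phoneme lists.
# A real system would load a full lexicon (e.g. CMUdict) instead.
PHONEME_DICT = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def phonemes_for(sentence):
    """Look up each word's phoneme list; unknown words yield None."""
    return [PHONEME_DICT.get(word.lower()) for word in sentence.split()]

print(phonemes_for("Hello world"))
# [['HH', 'AH', 'L', 'OW'], ['W', 'ER', 'L', 'D']]
```

Words missing from the dictionary come back as None, so a practical tool would need a fallback such as grapheme-to-phoneme rules.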
Figure 12 - Preston Blair's sample phonemes
(Source: http://2.bp.blogspot.com/_DyMhG4Jjorw/TQnfwhCctOI/AAAAAAAAAAM/N1qPJHnU8o8/s1600/preston_blair_mouths.gif)
Figure 13 - IPA Chart 2005
(Source: http://upload.wikimedia.org/wikipedia/commons/1/15/IPA_chart_2005.png)
2.1.2.4 Mouth Shapes
While a multitude of phonemes is available to describe the pronunciation of English words, as few as ten mouth shapes may be needed to represent them all.
In a freely available internet tutorial [16] on the subject of mouth shapes, Gary C. Martin created a template of twelve mouth shapes to represent the phonemes of English words.
The tutorial is based on the reference mouth shapes created by the American animator Preston Blair. Blair’s phoneme shapes were templates for cartoon animation, which Martin then converted into 3D mouth shape templates on his website.
Figure 14 – Phoneme “O” (source: http://www.garycmartin.com/images/blair_o.jpg)
Figure 15 - Phoneme "A" (source: http://www.garycmartin.com/images/blair_a_i.jpg)
This research will follow this basic mouth shape tutorial, dubbed the extended range mouth shape templates.
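Since many phonemes share a single mouth shape, such a template amounts to a many-to-one mapping from phonemes to shapes. The partial table below is an illustrative sketch loosely in the spirit of the Preston Blair shapes; it is not Martin's actual template, and the shape names are made up for the example.

```python
# Many phonemes collapse onto a small set of mouth shapes, so a
# lookup table maps each phoneme to a shape name. Partial table,
# for illustration only.
MOUTH_SHAPE = {
    "M": "closed", "B": "closed", "P": "closed",  # lips pressed together
    "F": "F/V",    "V": "F/V",                    # teeth on lower lip
    "AA": "open A", "AY": "open A",
    "OW": "round O", "UW": "round O",
}

def shapes_for(phonemes):
    """Map a phoneme sequence to mouth-shape keyframes."""
    return [MOUTH_SHAPE.get(p, "rest") for p in phonemes]

print(shapes_for(["B", "OW"]))  # ['closed', 'round O']
```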
2.1.3 Blender 3D
2.1.3.1 History of Blender 3D
According to the history page on Blender’s website (blender.org) [17], Blender was first created in 1995 as in-house 3D software at NeoGeo, a Dutch animation studio co-founded by Ton Roosendaal, who was in charge of its internal software development.
In 1998, Blender was developed and marketed by a new company, Not a Number (NaN), founded by Ton. Due to disappointing sales and a slow market, NaN was shut down in 2002, and with it the development of Blender.
Yet user and community support for Blender did not wane, and Ton created the non-profit Blender Foundation in the same year to continue the development and distribution of Blender.
The foundation’s first goal was to continue support for Blender as an open-source project – which means “the source code is available free of charge to the public to use, copy, modify, sublicense, or distribute (reference.dictionary.com).” [14] One unique deal with NaN’s investors, 100,000 Euros in donations, and six weeks later, Blender 3D was officially available as open-source software.
Blender is unique in that its development is very open: it is open source, and anyone can help in its development. The creation of the Blender Institute in 2007 was intended as the main office to organize the development of Blender 3D and its Open Movie projects.
Another aspect unique to Blender is the Open Movie projects the foundation has run through the years. In open movies, the tools used are open source, as are the end results and the assets used. Everything used in the making of the movie can be accessed under an open license.
So far, a total of three open movies have been created: Elephants Dream in 2005, Big Buck Bunny in 2008, and Sintel in 2010.
Figure 16 - Blender 2.57b Start
The year 2008 saw a major update of Blender to the 2.5x version, bringing major user interface changes, new tools, a new data access system, a new event handling system, and more. Since then a number of releases have been made, with the latest, version 2.57, marked as stable.
Download.cnet.com ranks Blender number 12 [18] in the 3D modeling software category, with a total of 309,967 downloads from their server. Statistics on Blender’s own website show 1.57 million downloads from their server between April 2008 and April 2009.
The latest version of Blender, like its predecessors, supports custom extensions through Python scripting. Be it custom user interface designs, custom modeling and/or sculpting tools, custom rig constraints, or custom render presets, almost anything about Blender can be extended by Python scripts.
2.1.3.2 Features of Blender 3D
Aside from the open source and extensible nature of Blender, the latest version
2.57b also has some powerful tools for modeling, animation, and rendering tasks, as well
as physics and particles system and a realtime 3D game engine. Below is a brief
summary of the many features available on Blender as listed in their website.
Other than the basic polygon, NURBS, and curve modeling tools, version 2.57b includes a brand new sculpting tool that allows the user to create models as if sculpting in real life, which can result in a more organic model.
For animation tasks, version 2.57b adds automatic weighting to instantly bind skeleton rigs to models. It also provides means to mix animation sets through the Non-Linear Animation (NLA) Editor, as well as audio playback for audio synchronization.
For rendering tasks, Blender comes equipped with a multitude of render calculations and gives the user many options for manipulating visual effects. From fast ray tracing to multiple lens effects (halo, flare, fog), motion blur calculation, and cartoon shading, Blender can take on most post-production visual effects.
Additional features include particle and physics systems, designed to simulate hair, fluids, and soft body effects. A realtime 3D rendering/game engine adds another dimension to Blender, making it more than strictly 3D modeling and animation software.
Since most of these features can be modified through custom Python scripts, and whole new tools can even be created, Blender presents a powerful piece of 3D manipulation software.
2.1.3.3 Rigify and Face Bone Tool
Rigify and Face Bone Tool are two Blender add-ons, among others, that deal with automating the process of creating a skeleton rig for use in animation.
Rigify [19] was created by Nathan Vegdahl and was originally a skeleton rig used for Blender Foundation’s latest open movie, “Sintel”. The skeleton rig was later converted into an add-on and is now shipped with every new Blender release, disabled by default; it must be enabled by the user via User Preferences.
Figure 17 - Control rig (left) and template rig (right)
Rigify adds a template humanoid bipedal skeleton rig to the 3D workspace, ready to be adjusted to the model. A humanoid bipedal rig here means that the skeleton rig resembles the skeleton of an upright human being standing on two feet.
After the template rig is adjusted, Rigify can generate a rig complete with multiple constraints and controllers that follows the structure of the adjusted template. It essentially generates a complete, ready-for-animation rig, where the constraints make the bones easier to control.
Face Bone Tool [20] is another add-on for Blender, created by khuuyj. It is intended as an extension to Rigify, specifically for the head. Rigify only generates the body skeleton rig and leaves the facial structure to the user. What Face Bone Tool does is generate a face skeleton rig to go with Rigify’s head. Like Rigify, Face Bone Tool creates a fully constrained human face structure that can be adjusted to fit the model.
Figure 18 - Face Bone Tool with Rigify in the background
Initially, Face Bone Tool was created to handle lip sync from .vsq files created by the Japanese singing synthesizer software VOCALOID. Hence the add-on comes standard with phoneme shapes for Japanese words.
By combining Rigify and Face Bone Tool, a user can quickly create a fully functional, ready-to-animate humanoid skeleton rig, which shows just how powerful add-ons can be in assisting the user’s tasks.
2.1.4 Python Scripting Language
2.1.4.1 What is Python
A scripting language can be classified as: “A high-level programming language that is interpreted by another program at runtime rather than compiled by the computer’s processor.” [8]
Python is an example of an open-source scripting language and is used by Blender for customization purposes. Based on this definition, it can be said that Python is interpreted by Blender, which is why Python scripting can be done on-the-fly in Blender.
Python runs on all major operating systems and is freely usable and distributable. It also has many tutorials available on the internet and a large community supporting it [21].
The Python programming language is maintained by the non-profit Python Software Foundation, created in 2001 in America. The latest version available from Python’s own website is 3.2, although version 2.7.1 is also available, since more software is currently compatible with version 2.7.1 than with the newer 3.2.
2.1.4.2 Python and Blender
As mentioned before, Blender closely incorporates Python for user customization and the creation of custom tools. A Python executable may or may not be bundled with a Blender release. With the newest Blender 2.57, the bundled Python version is 3.2.
2.1.5 Relationship between Productivity and Profit
Productivity in this research is defined as the competency of the subject in performing his/her tasks. Profit in this research is defined as increased income.
A study by Gordon Training International [22] found that increased worker competency can lead to improved financial results for the workplace.
Another study by McKinsey Global Institute [23] also found that increasing worker productivity directly increases company profits.
Following the results of these two separate studies, it is predicted that the profits of an animation workplace will increase with the productivity gained from utilizing the proposed solution.
2.2 Research Framework
This work can be considered a scientific research project, since it follows the steps of the scientific method in order to solve a problem. The definition of research is as follows:
“The strict definition of scientific research is performing a methodical study in order to prove a
hypothesis or answer a specific question.” – Experiment Resources [24]
This work can be further categorized as a pilot study, in that it is an attempt to create a solution and test it on a very small control group before releasing it to a larger number of users, as described by the following excerpt:
“To test the feasibility, equipment and methods, researchers will often use a pilot study, a small-
scale rehearsal of the larger research design.” – Experiment Resources
The scientific method followed by this work is described by Experiment Resources [24] as a structure that has to be followed in order to conduct research, obtain data, and achieve the aim(s) of the research. As such, this work loosely follows the steps of the scientific method as a guideline for its research framework.
First, start with a question and survey as many published works, articles, and other sources as possible to build a theoretical foundation. Then, the question is narrowed down into specific issues that become the hypothesis or main question of the research. This step also narrows the scope of the research so it does not become bigger than needed.
Next, the tests, experiments, and analyses needed to test the hypothesis or achieve the aim of the research are designed. This step makes clear what is needed and how it is created in order for the research to work. This includes the solution design and survey design.
Then, observations are made on the execution and results of the research – in this case, observation of the control group test. Data will be gathered by having working animators use the finished solution and observing changes in production time and convenience.
Afterwards, the data gathered through observation is analyzed. Trends in the data will be discussed and compared in relation to the hypothesis and/or the research aim, along with issues that arise during and after implementation of the solution. Trends and data that might lead to future research will also be noted in this stage.
Finally, a general conclusion is made, summarizing the results of the research in
relation to the hypothesis and/or research aim. Suggestions for future research will also
be mentioned in this stage.